@itsolelehmann: Garry Tan’s custom instructions are based. It pushes any LLM past half-finished bullshit and into actually useful answe…
Summary
Garry Tan shares custom instructions (SOUL md) that make LLMs provide more useful, less half-finished answers. A practical tip for better AI interactions.
View Cached Full Text
Cached at: 05/08/26, 05:35 PM
Garry Tan’s custom instructions are based.
It pushes any LLM past half-finished bullshit and into actually useful answers.
Add this to your SOUL md: https://t.co/obwj8req73
Similar Articles
Improving instruction hierarchy in frontier LLMs
OpenAI presents a training approach using instruction-hierarchy tasks to improve LLM safety and reliability by teaching models to properly prioritize instructions based on trust levels (system > developer > user > tool). The method addresses prompt-injection attacks and safety steerability through reinforcement learning with a new dataset called IH-Challenge.
@garrytan: https://x.com/garrytan/status/2053127519872614419
Garry Tan describes using a personal AI agent system, termed 'Book Mirror', to deeply integrate reading material with his life context via Meta-Meta-Prompting. He shares insights on building real AI systems as an operating system rather than just a chat interface.
GLM 5.1 Thinks Strategically, Data-Center Revolt Intensifies, When Helpful LLMs Turn Unhelpful, Humanoid Robots Get to Work
Andrew Ng discusses how coding agents accelerate different types of software work at varying speeds, with frontend development benefiting most and research least.
LLMs Go To Confession, Automated Scientific Research, What Copilot Users Want, Reasoning For Less
DeepLearning.AI launches 'Build with Andrew,' a course enabling non-coders to build web applications using AI in under 30 minutes, while research addresses LLM transparency issues including model honesty and automated scientific research capabilities.
@garrytan: This is interesting. Anyone experimenting with this? So far anytime I have adjacent skills I just tell it to DRY itself…
Garry Tan asks if others experiment with merging adjacent AI skills into larger parameterized skills, sharing his preference for composing bigger skills with branching parameters.