Linus Ekenstam explains his preference for using HTML instead of Markdown when building context for AI, citing HTML's broader presence in model training data.
Garry Tan asks whether others experiment with merging adjacent AI skills into larger parameterized ones, sharing his own preference for composing bigger skills with branching parameters.
Anthropic philosopher Amanda Askell shared, in a recent interview, a prompting method she uses to explore topics she is curious about.
Anthropic’s applied AI team released a free 24-minute workshop video teaching the six key elements for properly prompting Claude, plus a companion skill to automate the techniques.
Matt Shumer shares a concise prompting framework drawn from seven years of experience to help users maximize AI agent performance.
Researchers from National Taiwan University propose replacing fixed translation-based prompting strategies in multilingual LLMs with lightweight learned classifiers that route each instance to either native or translation-based prompting. Their analysis across 10 languages and 4 benchmarks shows no single strategy is universally optimal, with translation benefiting low-resource languages most, and the learned routing achieving statistically significant improvements over fixed strategies.
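The routing idea above can be sketched as a tiny linear classifier that scores per-instance features and picks a strategy. This is a minimal illustration of instance-level routing, not the paper's implementation: the features, language codes, and hand-set weights are all assumptions for the sake of the example.

```python
# Sketch of instance-level routing between prompting strategies.
# The features, low-resource language codes, and weights below are
# illustrative assumptions, not the paper's actual model.

def featurize(text: str, lang: str) -> list[float]:
    """Toy features: input length and a low-resource-language flag."""
    low_resource = {"sw", "yo", "ht"}  # hypothetical low-resource codes
    return [float(len(text)), 1.0 if lang in low_resource else 0.0]

def route(text: str, lang: str, weights: list[float], bias: float) -> str:
    """Linear classifier: choose translation-based prompting when the
    score crosses zero, otherwise prompt in the native language."""
    feats = featurize(text, lang)
    score = sum(w * f for w, f in zip(weights, feats)) + bias
    return "translate" if score > 0 else "native"

# Hand-set weights that favor translation for low-resource inputs.
weights, bias = [0.0, 1.0], -0.5
print(route("Habari ya asubuhi", "sw", weights, bias))  # translate
print(route("Guten Morgen", "de", weights, bias))       # native
```

In practice such a router would be trained on per-instance accuracy labels (which strategy answered correctly), but the decision rule at inference time stays this simple: featurize, score, pick a strategy.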
Matt Pocock advises splitting LLM document creation into two phases: first a loose alignment session, then the actual writing.
An introductory guide to using ChatGPT, covering basic prompting techniques, practical use cases, and voice/dictation features to help new users get started with the conversational AI assistant.
A comprehensive tutorial covering Claude's pricing, interface settings, and advanced prompting strategies using the ICC framework to optimize AI outputs.