Tag: instruction-tuning

Cards List
#instruction-tuning

Multi-Stream LLMs: Unblocking Language Models with Parallel Streams of Thoughts, Inputs and Outputs

Hugging Face Daily Papers · 2d ago

This paper proposes Multi-Stream LLMs, which transition from sequential message-based instruction tuning to parallel stream processing. This approach allows language models to simultaneously read, think, and generate across multiple concurrent data flows, addressing bottlenecks in autonomous agent applications.
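
To make the contrast with sequential, message-based turn-taking concrete, here is a minimal asyncio sketch (not the paper's actual architecture; the stream names and the fake_model_step stand-in are invented for illustration):

    import asyncio

    async def fake_model_step(name: str, chunk: str) -> str:
        # Stand-in for one read/think/generate step of the model on one stream.
        await asyncio.sleep(0.01)
        return f"[{name}] processed {chunk!r}"

    async def run_stream(name: str, chunks: list[str]) -> None:
        # Each stream consumes its own input and emits output independently,
        # instead of waiting for a whole message-based turn to finish.
        for chunk in chunks:
            print(await fake_model_step(name, chunk))

    async def main() -> None:
        await asyncio.gather(
            run_stream("inputs",   ["user query", "tool result"]),
            run_stream("thoughts", ["plan step 1", "plan step 2"]),
            run_stream("outputs",  ["partial answer", "final answer"]),
        )

    asyncio.run(main())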

#instruction-tuning

More Aligned, Less Diverse? Analyzing the Grammar and Lexicon of Two Generations of LLMs

arXiv cs.CL · 5d ago

This paper analyzes the syntactic and lexical diversity of two generations of LLMs relative to human-authored news text, finding that the newer, more heavily aligned models exhibit reduced diversity.
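
As a rough illustration of the kind of measurement involved, a simple lexical-diversity score such as type-token ratio can be compared across text samples (generic sketch, not the paper's actual metrics or data; the two sample strings are placeholders):

    def type_token_ratio(text: str) -> float:
        # Ratio of unique word types to total word tokens; higher means more lexically diverse.
        tokens = text.lower().split()
        return len(set(tokens)) / len(tokens) if tokens else 0.0

    human_news = "the council voted to approve the new transit plan after a lengthy debate"
    model_text = "the council voted to approve the plan and the council voted to approve it"

    print("human TTR:", round(type_token_ratio(human_news), 3))
    print("model TTR:", round(type_token_ratio(model_text), 3))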

#instruction-tuning

Decomposing the Basic Abilities of Large Language Models: Mitigating Cross-Task Interference in Multi-Task Instruct-Tuning

arXiv cs.CL · 5d ago

This paper proposes Badit, a method that decomposes large language model parameters into orthogonal high-singular-value LoRA experts to mitigate cross-task interference during multi-task instruction tuning.
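
The general idea of splitting a weight matrix into mutually orthogonal low-rank expert factors can be sketched with an SVD in NumPy (illustrative only; the matrix sizes and per-expert construction below are assumptions, not the paper's actual Badit procedure):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(64, 64))       # stand-in for a pretrained weight matrix

    U, S, Vt = np.linalg.svd(W, full_matrices=False)

    rank, n_experts = 8, 4
    experts = []
    for i in range(n_experts):
        idx = slice(i * rank, (i + 1) * rank)
        # Each "expert" is a LoRA-style low-rank pair (A, B) built from a disjoint
        # band of singular directions, so the experts are mutually orthogonal.
        A = U[:, idx] * S[idx]          # (64, rank)
        B = Vt[idx, :]                  # (rank, 64)
        experts.append((A, B))

    # Sanity check: summing the expert products reconstructs the top singular part of W.
    top = rank * n_experts
    approx = sum(A @ B for A, B in experts)
    print(np.allclose(approx, (U[:, :top] * S[:top]) @ Vt[:top, :]))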

#instruction-tuning

talkie-lm/talkie-1930-13b-it

Hugging Face Models Trending · 2026-04-20

Talkie-1930-13b-it is a 13B-parameter instruction-tuned language model trained on pre-1931 text and fine-tuned for preference alignment with DPO (Direct Preference Optimization).
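
A minimal usage sketch with the Hugging Face Transformers library, assuming the repository is publicly accessible and ships a chat template (the prompt and generation settings are illustrative):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "talkie-lm/talkie-1930-13b-it"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": "Summarize the front page of a 1929 newspaper."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=200)
    # Decode only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))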
