#machine-learning-research

Tag · Cards List

Enforcing Constraints in Generative Sampling via Adaptive Correction Scheduling

arXiv cs.LG · 2d ago

This research paper introduces adaptive correction scheduling for enforcing hard constraints in generative sampling, demonstrating that it improves the cost-accuracy frontier compared to terminal or stepwise projection methods.
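
A minimal sketch of the idea, assuming a generic iterative sampler and a toy box constraint (the `project` function, the violation-triggered schedule, and the correction budget below are illustrative stand-ins, not the paper's actual scheduler):

```python
import numpy as np

def project(x, lo=-1.0, hi=1.0):
    # Hypothetical hard constraint: the feasible set is the box [lo, hi].
    return np.clip(x, lo, hi)

def sample_with_adaptive_correction(denoise_step, x, n_steps, budget=8):
    # `denoise_step(x, t)` stands in for one update of any iterative
    # generative sampler (e.g. a single reverse diffusion step).
    for t in reversed(range(n_steps)):
        x = denoise_step(x, t)
        violation = np.abs(x - project(x)).max()
        # Adaptive schedule (assumed): spend a limited correction budget
        # only when the violation exceeds a tolerance that tightens as
        # sampling approaches the final iterate.
        tolerance = 0.1 * t / n_steps
        if budget > 0 and violation > tolerance:
            x = project(x)
            budget -= 1
    return project(x)  # a terminal projection still guarantees feasibility
```

In this framing, stepwise projection is the limit of an unlimited budget with zero tolerance, and terminal projection is the zero-budget limit; an adaptive schedule interpolates between the two, which is where an improved cost-accuracy frontier would come from.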


Toeplitz MLP Mixers are Low Complexity, Information-Rich Sequence Models

arXiv cs.LG · 4d ago

This paper introduces Toeplitz MLP Mixers (TMM), a novel architecture that replaces attention with Toeplitz matrix multiplication to achieve lower computational complexity while maintaining high information retention and training efficiency.
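
As a rough illustration of the parameter savings (module and tensor names here are assumptions, not the paper's): an L×L Toeplitz mixing matrix is determined by its 2L−1 diagonals, so token mixing costs O(L) parameters instead of the O(L²) of a dense mixing matrix or of attention scores.

```python
import torch
import torch.nn as nn

class ToeplitzTokenMixer(nn.Module):
    """Mixes tokens with a learned Toeplitz matrix: 2L-1 parameters
    define the full L x L mixing matrix."""
    def __init__(self, seq_len: int):
        super().__init__()
        # One learnable value per diagonal of the L x L matrix.
        self.diag_vals = nn.Parameter(torch.randn(2 * seq_len - 1) / seq_len)
        # Gather map putting diagonal values into Toeplitz form:
        # entry (i, j) reads diag_vals[i - j + L - 1].
        idx = torch.arange(seq_len)
        self.register_buffer("index", idx[:, None] - idx[None, :] + seq_len - 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, channels); mix along the sequence axis.
        T = self.diag_vals[self.index]          # (L, L) Toeplitz matrix
        return torch.einsum("lm,bmc->blc", T, x)
```

Because a Toeplitz matrix encodes a convolution, the same product can also be computed in O(L log L) via FFT, which is the usual route to the low-complexity claim.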


Saliency-Aware Regularized Quantization Calibration for Large Language Models

arXiv cs.AI · 2026-05-08

This paper proposes Saliency-Aware Regularized Quantization Calibration (SARQC), a unified framework that improves Post-Training Quantization (PTQ) for LLMs by adding a regularization term that keeps calibrated weights close to the original full-precision weights, improving generalization and performance.
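
The summary suggests a calibration objective of roughly this shape; the sketch below is a generic PTQ loop with a saliency-weighted proximity regularizer, where the uniform quantizer, the activation-energy saliency, and the weight `lam` are all my assumptions rather than SARQC's actual design:

```python
import torch

def sarqc_like_calibration(W, X, n_bits=4, lam=0.1, steps=200, lr=1e-3):
    # W: (out_features, in_features) full-precision weights
    # X: (n_samples, in_features) calibration activations
    scale = W.abs().max() / (2 ** (n_bits - 1) - 1)

    def quantize(w):
        # Uniform symmetric quantizer with a straight-through estimator,
        # so gradients pass through the non-differentiable rounding.
        q = torch.clamp(torch.round(w / scale),
                        -2 ** (n_bits - 1), 2 ** (n_bits - 1) - 1)
        return w + (q * scale - w).detach()

    # Assumed saliency: input dimensions carrying more activation energy
    # pull the calibrated weights more strongly back toward the originals.
    saliency = X.pow(2).mean(dim=0)                       # (in_features,)

    W_hat = W.clone().requires_grad_(True)
    opt = torch.optim.Adam([W_hat], lr=lr)
    target = X @ W.T
    for _ in range(steps):
        opt.zero_grad()
        recon = (X @ quantize(W_hat).T - target).pow(2).mean()
        prox = (saliency * (W_hat - W).pow(2)).mean()     # weight proximity
        (recon + lam * prox).backward()
        opt.step()
    return quantize(W_hat).detach()
```

The proximity term is what distinguishes this from plain reconstruction-based PTQ: without it, calibration can drift far from the pretrained weights to fit the calibration set, which tends to hurt generalization.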


Scaling Continual Learning to 300+ Tasks with Bi-Level Routing Mixture-of-Experts

Hugging Face Daily Papers · 2026-05-08

This paper introduces CaRE, a novel continual learning framework using a bi-level routing mixture-of-experts mechanism to effectively handle class-incremental learning over sequences of 300+ tasks.
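
The mechanism at a glance, as a toy module (hard argmax routing and static expert counts are simplifications; a real continual-learning system would presumably use soft or top-k routing and grow experts as tasks arrive):

```python
import torch
import torch.nn as nn

class BiLevelMoE(nn.Module):
    # Level 1 routes a sample to a task group; level 2 routes it to an
    # expert inside that group. A flat router would softmax over
    # n_groups * experts_per_group choices at once; the bi-level split
    # keeps each routing decision small as the task count grows.
    def __init__(self, dim, n_groups=8, experts_per_group=4):
        super().__init__()
        self.group_router = nn.Linear(dim, n_groups)
        self.expert_routers = nn.ModuleList(
            nn.Linear(dim, experts_per_group) for _ in range(n_groups))
        self.experts = nn.ModuleList(
            nn.ModuleList(nn.Linear(dim, dim) for _ in range(experts_per_group))
            for _ in range(n_groups))

    def forward(self, x):                          # x: (batch, dim)
        g = self.group_router(x).argmax(dim=-1)    # level 1: choose a group
        out = torch.zeros_like(x)
        for gi in range(len(self.experts)):
            mask = g == gi
            if not mask.any():
                continue
            xs = x[mask]
            e = self.expert_routers[gi](xs).argmax(dim=-1)  # level 2: choose an expert
            out[mask] = torch.stack(
                [self.experts[gi][int(ei)](xi) for ei, xi in zip(e, xs)])
        return out
```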


Anisotropic Modality Alignment

Hugging Face Daily Papers · 2026-05-08

This paper proposes AnisoAlign, a framework that addresses the modality gap in multimodal models by applying anisotropic geometric correction to enable effective unpaired modality alignment.
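
The one-line description maps onto a familiar recipe; the sketch below is only that recipe, per-modality whitening as the "anisotropic geometric correction" plus nearest-neighbour Procrustes for the unpaired alignment, and should not be read as AnisoAlign's actual algorithm:

```python
import numpy as np

def whiten(E, eps=1e-5):
    # Anisotropy correction (assumed form): center the embedding cloud and
    # rescale its principal axes so every direction has unit variance.
    mu = E.mean(axis=0)
    U, S, _ = np.linalg.svd(np.cov((E - mu).T))
    return (E - mu) @ U / np.sqrt(S + eps)

def unpaired_align(img_emb, txt_emb):
    # Whiten each modality separately so the two embedding clouds become
    # isotropic and geometrically comparable.
    zi, zt = whiten(img_emb), whiten(txt_emb)
    # Pseudo-pairs by nearest neighbour (no paired supervision assumed).
    nn_idx = (zi @ zt.T).argmax(axis=1)
    # Orthogonal Procrustes: best rotation of image space onto text space.
    U, _, Vt = np.linalg.svd(zi.T @ zt[nn_idx])
    return zi @ (U @ Vt), zt
```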


New technique makes AI models leaner and faster while they’re still learning

MIT News — Artificial Intelligence · 2026-04-09

Researchers from MIT CSAIL and other institutions introduced CompreSSM, a technique that compresses state-space AI models during training by removing unnecessary components early, resulting in faster training and smaller models without sacrificing performance.
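
The article doesn't spell out the pruning criterion, so the sketch below only illustrates the general shape of the idea, shrinking a diagonal state-space model mid-training by dropping weak state dimensions; the energy heuristic is my assumption, not the published method:

```python
import torch

def prune_ssm_states(A_diag, B, C, keep_ratio=0.5):
    # Diagonal SSM:  x_{t+1} = A x_t + B u_t,   y_t = C x_t.
    # Score each state dimension i by a steady-state energy proxy
    # |B_i||C_i| / (1 - |A_i|) and keep only the strongest, so the
    # remainder of training runs on a smaller, faster model.
    energy = B.abs() * C.abs() / (1 - A_diag.abs()).clamp(min=1e-4)
    k = max(1, int(keep_ratio * A_diag.numel()))
    keep = energy.topk(k).indices.sort().values
    return A_diag[keep], B[keep], C[keep]

# Example: halve a 16-state model.
A = 0.99 * torch.rand(16); B = torch.randn(16); C = torch.randn(16)
A2, B2, C2 = prune_ssm_states(A, B, C)
print(A2.shape)  # torch.Size([8])
```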
