machine-learning-research

Saliency-Aware Regularized Quantization Calibration for Large Language Models

arXiv cs.AI · 2d ago

This paper proposes Saliency-Aware Regularized Quantization Calibration (SARQC), a unified framework that improves Post-Training Quantization (PTQ) for LLMs by adding a regularization term that keeps quantized weights close to their full-precision values, improving generalization and downstream performance.
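The summary does not give SARQC's exact objective, so the following is only a rough sketch of the general idea of regularized PTQ calibration: choose quantization parameters that minimize layer-output error plus a penalty keeping quantized weights near the originals. The function names (`quantize`, `calibrate_scale`), the grid search, and the `lam` weight are all illustrative assumptions, not the paper's method.

```python
# Hedged sketch of regularized PTQ calibration (not SARQC's actual algorithm).
import numpy as np

def quantize(w, scale, bits=4):
    """Symmetric uniform quantization of weights at a given scale."""
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(w / scale), -qmax, qmax)
    return q * scale

def calibrate_scale(w, x, lam=0.1, bits=4, n_grid=50):
    """Grid-search a scale minimizing output MSE + lam * weight proximity."""
    base = np.abs(w).max() / (2 ** (bits - 1) - 1)
    best_scale, best_loss = base, np.inf
    for s in np.linspace(0.5 * base, 1.2 * base, n_grid):
        wq = quantize(w, s, bits)
        out_err = np.mean((x @ wq - x @ w) ** 2)  # calibration-data output error
        prox = np.mean((wq - w) ** 2)             # weight-proximity regularizer
        loss = out_err + lam * prox
        if loss < best_loss:
            best_scale, best_loss = s, loss
    return best_scale

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16))   # toy weight matrix
X = rng.normal(size=(64, 16))   # toy calibration activations
scale = calibrate_scale(W, X)
```

The regularizer is what distinguishes this from plain calibration: without it, the search can pick a scale that fits the calibration batch well but moves weights far from their trained values, which tends to hurt generalization.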

New technique makes AI models leaner and faster while they’re still learning

MIT News — Artificial Intelligence · 2026-04-09

Researchers from MIT CSAIL and other institutions introduced CompreSSM, a technique that compresses state-space models (SSMs) during training by removing unnecessary components early, yielding faster training and smaller models without sacrificing performance.
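The summary does not say which components CompreSSM removes or by what criterion, so the sketch below only illustrates the general pattern of in-training compression for a linear state-space layer: score the state dimensions and prune the weakest ones early, so the remainder of training runs on a smaller model. The `prune_states` function and its parameter-energy score are hypothetical stand-ins, not the paper's criterion.

```python
# Hedged sketch of in-training SSM compression (not CompreSSM's actual criterion).
import numpy as np

def prune_states(A, B, C, keep_ratio=0.5):
    """Keep the state dimensions with the largest combined parameter energy."""
    # Importance of state i: how strongly inputs reach it (via B) times how
    # strongly it reaches the output (via C) -- an illustrative heuristic.
    score = np.abs(B).sum(axis=1) * np.abs(C).sum(axis=0)
    n_keep = max(1, int(len(score) * keep_ratio))
    idx = np.argsort(score)[-n_keep:]
    # Slice the state-space matrices down to the surviving dimensions.
    return A[np.ix_(idx, idx)], B[idx, :], C[:, idx]

rng = np.random.default_rng(0)
n, d = 32, 8  # toy state size and input/output width
A = rng.normal(size=(n, n))  # state transition
B = rng.normal(size=(n, d))  # input map
C = rng.normal(size=(d, n))  # output map
A2, B2, C2 = prune_states(A, B, C, keep_ratio=0.25)
```

Pruning partway through training, rather than after it, is what delivers the speedup: every remaining training step operates on the reduced matrices.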
