llms

Tag · Cards List

@tom_doerr: Structured roadmaps for AI, ML, and LLM learning https://github.com/bishwaghimire/ai-learning-roadmaps…

X AI KOLs Timeline · yesterday

A comprehensive, open-source GitHub repository providing structured learning roadmaps and curated resources for mastering AI, machine learning, deep learning, and large language models from beginner to advanced levels. Designed for students and professionals, it covers foundational concepts, programming frameworks, career tracks, and emerging AI topics.

llm-gemini 0.31

Simon Willison's Blog · 2d ago

llm-gemini 0.31 is a new release of the plugin for using Google's Gemini models with the LLM command-line tool.

Stop letting LLMs edit your .bib [D]

Reddit r/MachineLearning · 3d ago

The article criticizes the reliance on Large Language Models for generating bibliographic entries, highlighting issues with hallucinated citations and incorrect author lists in academic papers.
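
A cheap defense against the problem the post describes is to sanity-check entries mechanically before accepting any LLM-suggested edits. The sketch below is illustrative, not a full BibTeX parser: it only recognizes simple `@type{key, field = {value}, ...}` entries and flags those missing fields a real citation must have.

```python
import re

# Minimal sanity check for BibTeX entries before accepting LLM-suggested
# edits. Illustrative sketch only: handles simple brace-delimited fields,
# not the full BibTeX grammar.

REQUIRED = {"author", "title", "year"}

def parse_entries(bibtex: str):
    """Yield (key, {field: value}) for each entry in a BibTeX string."""
    for match in re.finditer(r"@\w+\s*\{([^,]+),((?:[^@])*)\}", bibtex):
        key = match.group(1).strip()
        fields = {
            f.lower(): v.strip()
            for f, v in re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", match.group(2))
        }
        yield key, fields

def flag_suspicious(bibtex: str):
    """Return keys of entries missing required fields -- a cheap tripwire."""
    return [
        key for key, fields in parse_entries(bibtex)
        if not REQUIRED <= fields.keys()
    ]

entry = """
@article{vaswani2017,
  author = {Vaswani, Ashish and others},
  title = {Attention Is All You Need},
}
"""
print(flag_suspicious(entry))  # → ['vaswani2017'] (year is missing)
```

Checking DOIs against a resolver would catch hallucinated citations more reliably, but even a missing-field check stops the laziest fabrications.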

Improving understanding with language

MIT News — Artificial Intelligence · 2026-05-01

This article profiles MIT senior Olivia Honeycutt, highlighting her interdisciplinary research at the intersection of linguistics, computation, and cognition, with a focus on comparing human language processing with large language models.

GLM 5.1 Thinks Strategically, Data-Center Revolt Intensifies, When Helpful LLMs Turn Unhelpful, Humanoid Robots Get to Work

The Batch · 2026-04-24

Andrew Ng discusses how coding agents accelerate different types of software work at varying speeds, with frontend development benefiting most and research least.

@phosphenq: This 2 hour video by Andrej Karpathy (co-founder of OpenAI) will teach you more about using LLMs than every AI tutorial…

X AI KOLs Timeline · 2026-04-21

Andrej Karpathy posted a 2-hour educational video that promises to significantly improve viewers' practical use of large language models.

What fundamental research exists answering whether or not AGI can be achieved through LLMs?

Reddit r/artificial · 2026-04-20

A question seeking fundamental research or papers on whether AGI can or cannot be achieved through large language models, looking to move beyond opinion-based discussion.

CLewR: Curriculum Learning with Restarts for Machine Translation Preference Learning

arXiv cs.CL · 2026-04-20

CLewR introduces a curriculum learning strategy with restarts for improving machine translation performance in LLMs through preference optimization. The method addresses catastrophic forgetting by iterating easy-to-hard curriculum multiple times, showing consistent gains across Gemma2, Qwen2.5, and Llama3.1 models.
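
The core scheduling idea — sweep the data easy-to-hard, then restart the sweep — can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the function names are invented, and sentence length stands in as a crude difficulty proxy where CLewR would use a real difficulty signal inside preference optimization.

```python
# Toy sketch of an easy-to-hard curriculum repeated across restarts,
# in the spirit of CLewR. Names and the difficulty proxy (length) are
# illustrative assumptions, not the paper's method.

def curriculum_with_restarts(examples, difficulty, num_restarts=3):
    """Yield (restart, example) easy-to-hard, repeating the sweep each restart."""
    ordered = sorted(examples, key=difficulty)
    for restart in range(num_restarts):
        for ex in ordered:
            yield restart, ex

# Hypothetical translation sources, ordered by the length proxy.
pairs = ["short", "a longer sentence", "mid one"]
schedule = list(curriculum_with_restarts(pairs, difficulty=len, num_restarts=2))
# Each restart replays the full easy-to-hard order, which is what the
# paper credits with mitigating catastrophic forgetting.
```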

Disco-RAG: Discourse-Aware Retrieval-Augmented Generation

arXiv cs.CL · 2026-04-20

Disco-RAG proposes a discourse-aware retrieval-augmented generation framework that integrates discourse signals through intra-chunk discourse trees and inter-chunk rhetorical graphs to improve knowledge synthesis in LLMs. The method achieves state-of-the-art results on QA and summarization benchmarks without fine-tuning.
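
One intuition behind inter-chunk rhetorical graphs can be shown with a toy sketch: after retrieving top-scoring chunks, pull in their discourse-linked neighbors so rhetorically related context travels together. Everything below — the graph, chunk IDs, and function name — is invented for illustration; the paper's actual framework builds discourse trees and graphs rather than this simple expansion.

```python
# Illustrative sketch of one discourse-aware-RAG idea: expand a retrieved
# chunk set with neighbors from an inter-chunk graph whose edges mark
# rhetorical links (elaboration, contrast, ...). Toy data throughout.

def expand_with_neighbors(retrieved, graph, max_extra=2):
    """Add up to max_extra chunks that are graph-neighbors of retrieved ones."""
    extra = []
    for chunk in retrieved:
        for neighbor in graph.get(chunk, []):
            if neighbor not in retrieved and neighbor not in extra:
                extra.append(neighbor)
    return list(retrieved) + extra[:max_extra]

graph = {"c1": ["c4"], "c2": ["c1", "c5"], "c3": []}
context = expand_with_neighbors(["c1", "c2"], graph)
# → ['c1', 'c2', 'c4', 'c5']: the linked chunks ride along into the prompt.
```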

AtManRL: Towards Faithful Reasoning via Differentiable Attention Saliency

arXiv cs.CL · 2026-04-20

AtManRL is a method that uses differentiable attention manipulation and reinforcement learning to train LLMs to generate more faithful chain-of-thought reasoning by ensuring reasoning tokens causally influence final predictions. Experiments on GSM8K and MMLU with Llama-3.2-3B demonstrate the approach can identify influential reasoning tokens and improve reasoning transparency.
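
The underlying question — does a reasoning token causally influence the final prediction? — has a much simpler proxy than differentiable attention manipulation: mask each token and measure the score drop. The sketch below uses a dummy scorer instead of an LLM and is only a stand-in for the causal-influence idea, not the paper's method.

```python
# Leave-one-out saliency on a toy scorer: a token is "influential" if
# masking it changes the score. A simplified stand-in for AtManRL's
# attention-manipulation idea; the scorer is a dummy, not an LLM.

def toy_score(tokens):
    """Dummy 'model': confidence driven only by the numeric tokens."""
    return sum(float(t) for t in tokens if t.replace(".", "").isdigit())

def saliency(tokens, score_fn):
    """Score drop when each token is removed: bigger drop = more influential."""
    base = score_fn(tokens)
    return {
        t: base - score_fn(tokens[:i] + tokens[i + 1:])
        for i, t in enumerate(tokens)
    }

chain = ["add", "3", "and", "4", "gives", "7"]
drops = saliency(chain, toy_score)
# Numeric tokens get nonzero drops; filler words score zero influence.
```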

Google DeepMind's Senior Scientist Alexander Lerchner challenges the idea that large language models can ever achieve consciousness (not even in 100 years), calling it the 'Abstraction Fallacy.'

Reddit r/singularity · 2026-04-18

Google DeepMind senior scientist Alexander Lerchner argues that large language models cannot achieve consciousness, dubbing the assumption the 'Abstraction Fallacy' and suggesting this limitation persists even over a century-long timeframe.

LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models

Papers with Code Trending · 2024-03-20

LlamaFactory is a unified framework that enables efficient fine-tuning of over 100 large language models via a web-based interface, eliminating the need for coding.

@karpathy: The hottest new programming language is English

X AI KOLs · 2023-01-24

A commentary on how large language models and AI have made English an effective programming language, reflecting the shift toward natural language interfaces for coding tasks.
