Tag: #training

Cards List
#training

@eglyman: we trained a .35b-parameter model to navigate spreadsheets better than opus 4.6. normal corporate card company stuff.

X AI KOLs Following · 5d ago

A developer trained a 350M-parameter model capable of navigating spreadsheets better than Anthropic's Opus 4.6.

#training

ROCm Status in mid 2026 [D]

Reddit r/MachineLearning · 6d ago

The author asks whether AMD's ROCm ecosystem is viable for AI training as of mid-2026, how it compares to NVIDIA's CUDA, and whether it has reached a 'just works' stage for PyTorch.
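
For context, 'just works' on ROCm builds of PyTorch means the familiar torch.cuda API surface behaves as it does on CUDA. A minimal sanity check, assuming a ROCm wheel is installed (version strings shown are illustrative):

```python
# Sanity-checking a ROCm build of PyTorch: ROCm reuses the torch.cuda
# namespace, so most CUDA-targeted code runs unchanged on AMD GPUs.
import torch

print(torch.__version__)          # ROCm wheels carry a suffix, e.g. "2.x.x+rocmX.Y"
print(torch.version.hip)          # HIP version string on ROCm builds; None on CUDA builds
print(torch.cuda.is_available())  # True when an AMD GPU is visible

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device="cuda")  # "cuda" maps to the AMD GPU under ROCm
    print((x @ x).sum().item())                 # matmul dispatches to ROCm's BLAS backend
```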

#training

How ChatGPT learns about the world while protecting privacy

OpenAI Blog · 2026-05-06

OpenAI explains how ChatGPT learns from public data and user interactions while protecting privacy through filtering and user controls.

#training

Transition

Product Hunt · 2026-05-05

Transition is an AI-powered coaching platform designed to optimize athletic training routines and improve race performance for runners.

#training

Our eighth generation TPUs: two chips for the agentic era

Hacker News Top · 2026-04-22

Google unveils 8th-gen TPUs: TPU 8t for training and TPU 8i for inference, purpose-built for power-efficient, large-scale AI agent workloads and arriving later this year.

#training

What should i do to have a good OD model? [P]

Reddit r/MachineLearning · 2026-04-20

A user seeks advice on improving an object detection model trained with YOLO11n for deployment on a Raspberry Pi 5, struggling with the gap between validation mAP50 and real-world detection quality.
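
A common source of that gap is validating with default settings while deploying at a different image size and a single fixed confidence threshold. A rough sketch with the Ultralytics API (dataset and file names here are illustrative, not from the thread):

```python
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # nano model, sized for a Raspberry Pi 5

# mAP50 is integrated over many confidence thresholds; deployed detections
# are filtered at one fixed threshold, so the two numbers naturally diverge.
metrics = model.val(data="coco8.yaml", imgsz=640)
print(metrics.box.map50)

# Predict at the exact settings used on-device; tune conf against the
# observed failure mode (lower for missed objects, higher for false hits).
results = model.predict("sample.jpg", imgsz=640, conf=0.25)

# NCNN export is a common fast inference path on ARM CPUs like the Pi 5.
model.export(format="ncnn", imgsz=640)
```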

#training

kaizen

Product Hunt · 2026-04-16

Kaizen is a training platform that dynamically adapts running workouts based on user performance and activity data.

#training

Ulysses Sequence Parallelism: Training with Million-Token Contexts

Hugging Face Blog · 2026-03-09

Ulysses Sequence Parallelism is a technique for training LLMs on million-token contexts: each sequence is sharded across GPUs, and all-to-all exchanges around attention let every GPU attend over the full sequence for a subset of heads, cutting per-GPU activation memory. It integrates with Hugging Face Accelerate, the Transformers Trainer, and TRL, with support for Flash Attention and DeepSpeed ZeRO.
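
The production implementation lives in DeepSpeed and the Hugging Face stack; purely as an illustration of the core idea, here is a sketch of the pre-attention all-to-all in plain torch.distributed (the function name and tensor layout are assumptions, not the library's API):

```python
import torch
import torch.distributed as dist

def seq_shard_to_head_shard(x: torch.Tensor, group=None) -> torch.Tensor:
    """[batch, seq/P, heads, dim] -> [batch, seq, heads/P, dim] across P ranks."""
    P = dist.get_world_size(group)
    b, s_local, h, d = x.shape
    assert h % P == 0, "head count must be divisible by the number of ranks"
    # Chunk the head dimension: chunk p is destined for rank p.
    x = x.reshape(b, s_local, P, h // P, d).permute(2, 0, 1, 3, 4).contiguous()
    out = torch.empty_like(x)
    dist.all_to_all_single(out, x, group=group)  # exchange chunks along dim 0
    # Output chunk p is rank p's sequence shard for our head group; stitching
    # the chunks along the sequence axis restores the full sequence.
    return out.permute(1, 0, 2, 3, 4).reshape(b, P * s_local, h // P, d)
```

After the exchange, every rank runs ordinary (or Flash) attention over the full sequence for its head subset; a mirror-image all-to-all then restores sequence sharding for the feed-forward layers. Meant to run inside a torchrun-launched job with an NCCL process group.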

#training

Introducing OpenAI Academy for News Organizations

OpenAI Blog · 2025-12-17

OpenAI has launched the OpenAI Academy for News Organizations, a learning hub offering on-demand training, playbooks, and practical AI use cases for journalists and publishers, developed in partnership with the American Journalism Project and The Lenfest Institute.

#training

Teaching Claude why

Anthropic Research · 2026-05-08

Anthropic shares lessons from improving Claude's alignment training, achieving perfect scores on agentic misalignment evaluations by teaching underlying principles rather than just demonstrations.
