Tag: #foundation-model

Cards List
#foundation-model

TabPFN-3 just released: a pre-trained tabular foundation model for up to 1M rows [R][N]

Reddit r/MachineLearning · yesterday

TabPFN-3, a pre-trained tabular foundation model, was released with support for up to 1 million rows on a single GPU, 10x-1000x faster inference, and a 93% win rate over classical ML in benchmarks.
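
The post does not include usage details. As a rough sketch, and assuming TabPFN-3 keeps the scikit-learn-style interface of the existing tabpfn package (TabPFNClassifier with fit/predict), swapping it in for a classical baseline might look like the following; the class name and its behavior at the advertised 1M-row scale are assumptions, not confirmed by the release.

```python
# Hedged sketch: assumes TabPFN-3 keeps the scikit-learn-style API of the
# current `tabpfn` package. Nothing here is confirmed by the release post.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from tabpfn import TabPFNClassifier  # pip install tabpfn

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = TabPFNClassifier()       # pre-trained: no task-specific training loop to configure
clf.fit(X_train, y_train)      # "fit" mainly stores the context set for in-context prediction
pred = clf.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, pred):.3f}")
```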

#foundation-model

@MSFTResearch: MatterSim is expanding what AI can do for materials science—from faster large-scale simulations to MatterSim-MT, a new …

X AI KOLs Following · yesterday

Microsoft Research announces MatterSim updates including MatterSim-MT, a multi-task foundation model for materials characterization, faster simulation (3-5x speedup), and experimental validation of thermal conductivity predictions for a new material.

#foundation-model

MIT FINGERS-7B: First Multi-Omics AI Model for Alzheimer’s Prevention

Reddit r/singularity · 2d ago

MIT released FINGERS-7B, a 7-billion-parameter multi-omics foundation model trained on data from 30,000 individuals to predict Alzheimer's risk years in advance. The model is accessible via the AD Workbench and is accompanied by a research paper on OpenReview.

#foundation-model

HiDream-ai/HiDream-O1-Image

Hugging Face Models Trending · 5d ago

HiDream-ai has open-sourced HiDream-O1-Image (8B), a unified image generative foundation model built on a Pixel-level Unified Transformer (UiT) that natively handles text-to-image, image editing, and subject-driven personalization at up to 2048×2048 resolution without external VAEs or disjoint text encoders. It debuted at #8 in the Artificial Analysis Text to Image Arena and is positioned as a leading open-weights text-to-image model.

#foundation-model

A Robust Foundation Model for Conservation Laws: Injecting Context into Flux Neural Operators via Recurrent Vision Transformers

arXiv cs.LG · 5d ago

This paper proposes an architecture that augments Flux Neural Operators with recurrent Vision Transformers, injecting temporal context into the flux prediction so that a single model can act as a foundation model for conservation laws. It demonstrates robust generalization and stable long-horizon prediction across diverse conservative systems without explicit access to the governing equations.
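
The paper's architecture is not reproduced in the summary, but the building block it extends is a conservative finite-volume update whose numerical flux is predicted by a network. A minimal sketch of that underlying pattern, with an illustrative FluxNet module and a periodic 1D grid (both assumptions, not the paper's design), looks like this:

```python
import torch
import torch.nn as nn

class FluxNet(nn.Module):
    """Illustrative learned numerical flux over a 4-cell stencil (not the paper's model)."""
    def __init__(self, stencil: int = 4, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(stencil, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:    # u: (batch, cells)
        # Stencil [u_{i-1}, u_i, u_{i+1}, u_{i+2}] for the interface at i+1/2 (periodic grid).
        stencil = torch.stack(
            [torch.roll(u, shifts=s, dims=-1) for s in (1, 0, -1, -2)], dim=-1
        )                                                   # (batch, cells, 4)
        return self.net(stencil).squeeze(-1)                # flux F_{i+1/2} per interface

def step(u: torch.Tensor, flux_net: nn.Module, dt: float, dx: float) -> torch.Tensor:
    """One conservative finite-volume update: u_i <- u_i - dt/dx * (F_{i+1/2} - F_{i-1/2})."""
    f_right = flux_net(u)
    f_left = torch.roll(f_right, shifts=1, dims=-1)
    return u - dt / dx * (f_right - f_left)

u0 = torch.sin(2 * torch.pi * torch.linspace(0, 1, 128)).unsqueeze(0)   # (1, 128)
u1 = step(u0, FluxNet(), dt=1e-3, dx=1.0 / 128)
print(u1.shape, float((u1 - u0).sum()))   # ~0: total quantity is conserved by construction
```

Because each interface flux is added to one cell and subtracted from its neighbor, the total conserved quantity is preserved exactly regardless of what the network predicts.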

#foundation-model

A Foundation Model for Zero-Shot Logical Rule Induction

Hugging Face Daily Papers · 2026-05-06

This paper introduces the Neural Rule Inducer (NRI), a foundation model for zero-shot logical rule induction that uses domain-agnostic statistical properties to generalize across tasks without retraining.

#foundation-model

@oragnes: Google quietly open-sourced the time-series forecasting base model TimesFM 2.5—params down to 200 M, context up to 16 k. Feed it raw history and get instant zero-shot forecasts; perfect for crypto predictions, fam 😂

X AI KOLs Timeline · 2026-04-20

Google open-sourced TimesFM 2.5, a 200M-parameter time-series forecasting base model with a 16K-point context window that produces zero-shot forecasts directly from raw historical values, with no fine-tuning required.
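
As an illustration of the "feed it raw history" workflow, a minimal sketch using the open-source timesfm package is below; the hyperparameter names and checkpoint id follow the package's earlier 2.x documentation and are assumptions here, since the 2.5 release may change them.

```python
# Hedged sketch of zero-shot forecasting with the `timesfm` package.
# Hparam names and the checkpoint repo id follow the 2.x docs and may
# differ for TimesFM 2.5; treat them as assumptions.
import numpy as np
import timesfm

tfm = timesfm.TimesFm(
    hparams=timesfm.TimesFmHparams(
        backend="cpu",
        per_core_batch_size=32,
        horizon_len=64,                      # points to forecast past the history
    ),
    checkpoint=timesfm.TimesFmCheckpoint(
        huggingface_repo_id="google/timesfm-2.0-500m-pytorch",  # swap in the 2.5 repo when known
    ),
)

history = [np.sin(np.arange(512) / 10.0)]    # raw historical values, one series per list entry
point_forecast, quantile_forecast = tfm.forecast(history, freq=[0])  # zero-shot, no fine-tuning
print(point_forecast.shape)                  # (1, horizon_len)
```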

#foundation-model

robbyant/lingbot-map

Hugging Face Models Trending · 2026-04-16

LingBot-Map is a feed-forward 3D foundation model for streaming 3D reconstruction that uses a Geometric Context Transformer architecture, achieving state-of-the-art performance with efficient ~20 FPS inference on long sequences exceeding 10,000 frames.

#foundation-model

Geometric Context Transformer for Streaming 3D Reconstruction

Papers with Code Trending · 2026-04-15

Introduces LingBot-Map, a feed-forward 3D foundation model for streaming 3D reconstruction using a geometric context transformer architecture that achieves stable real-time performance at 20 FPS.

#foundation-model

tencent/HY-Embodied-0.5

Hugging Face Models Trending · 2026-04-02

Tencent releases HY-Embodied-0.5, a suite of foundation models for embodied AI agents. Built on a Mixture-of-Transformers (MoT) architecture, the suite ships an efficient 2B variant and a more capable 32B variant aimed at real-world robot control and spatial-temporal reasoning.

#foundation-model

Introducing TRIBE v2: A Predictive Foundation Model Trained to Understand How the Human Brain Processes Complex Stimuli

Meta AI Blog · 2026-03-25

Meta AI introduces TRIBE v2, a foundation model trained to predict how the human brain responds to complex stimuli.

#foundation-model

Lightricks/LTX-2.3

Hugging Face Models Trending · 2026-03-04

Lightricks released LTX-2.3, an open-weight diffusion-based audio-video foundation model with improved quality and prompt adherence, available in multiple checkpoints including distilled and LoRA variants for local execution.

#foundation-model

LTX-2: Efficient Joint Audio-Visual Foundation Model

Papers with Code Trending · 2026-01-06

LTX-2 is introduced as an efficient joint audio-visual foundation model.

#foundation-model

GPT-5.1-Codex-Max System Card

OpenAI Blog · 2025-11-19

OpenAI releases GPT-5.1-Codex-Max, a frontier agentic coding model trained on software engineering tasks. It natively works across multiple context windows through compaction, letting it handle millions of tokens in a single task. The system card details safety measures and Preparedness Framework evaluations across the cybersecurity, biology, and AI self-improvement domains.
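
Compaction itself is a general agent-loop pattern: when the running transcript approaches the context budget, the older portion is replaced by a short summary so work can continue. The sketch below illustrates that generic pattern only; it is not OpenAI's implementation, and summarize() is a stand-in for a model call.

```python
# Generic illustration of transcript compaction, not OpenAI's implementation.
def summarize(messages: list[str]) -> str:
    # Stand-in: a real agent would ask the model for a faithful summary here.
    return f"[summary of {len(messages)} earlier steps]"

def append_with_compaction(transcript: list[str], new_msg: str,
                           budget_tokens: int = 4000) -> list[str]:
    def tokens(msgs: list[str]) -> int:
        return sum(len(m.split()) for m in msgs)            # crude token estimate
    transcript = transcript + [new_msg]
    if tokens(transcript) > budget_tokens:
        head, tail = transcript[:-10], transcript[-10:]      # keep the 10 newest steps verbatim
        transcript = [summarize(head)] + tail                # compact everything older into one note
    return transcript

log: list[str] = []
for i in range(2000):
    log = append_with_compaction(log, f"step {i}: ran tests, edited module_{i}.py")
print(len(log), log[0])
```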

#foundation-model

AlphaEarth Foundations helps map our planet in unprecedented detail

Google DeepMind Blog · 2025-10-24

Google DeepMind introduces AlphaEarth Foundations, an AI model that integrates petabytes of Earth observation data into unified embeddings to map and monitor the planet at 10x10 meter resolution. The model's compact representations enable efficient planetary-scale analysis for applications in food security, deforestation tracking, and environmental monitoring.

#foundation-model

How a Gemma model helped discover a new potential cancer therapy pathway

Google DeepMind Blog · 2025-10-23

Google DeepMind and Yale released C2S-Scale, a 27B-parameter foundation model built on Gemma for single-cell analysis. The model surfaced a promising drug combination (silmitasertib plus interferon) for enhancing the immune visibility of "cold" tumors, a prediction subsequently confirmed in laboratory experiments.

#foundation-model

First look at GPT-5

OpenAI Blog · 2025-08-07

OpenAI provides a first look at GPT-5, which it positions as a major advance over its previous flagship large language models.

#foundation-model

Kronos: A Foundation Model for the Language of Financial Markets

Papers with Code Trending · 2025-08-02

Kronos is a foundation model for financial K-line (candlestick) data that combines a specialized tokenizer with autoregressive pre-training, outperforming existing models in forecasting and synthetic data generation.
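
Kronos's actual tokenizer is not described in the summary. As a toy illustration of what tokenizing K-line data can mean, the sketch below bins each bar's log return into a small discrete vocabulary that an autoregressive model could be pre-trained on; the binning scheme is purely an assumption for illustration.

```python
import numpy as np

# Toy illustration only: discretize K-line (OHLCV candlestick) data into tokens
# for autoregressive modelling. The binning scheme is an assumption; it is not
# Kronos's actual tokenizer.
def tokenize_klines(ohlcv: np.ndarray, bins: int = 16) -> np.ndarray:
    """ohlcv: (T, 5) array of open, high, low, close, volume per bar."""
    close = ohlcv[:, 3]
    returns = np.diff(np.log(close), prepend=np.log(close[0]))    # per-bar log return
    # Map each return to one of `bins` quantile buckets -> one token per bar.
    edges = np.quantile(returns, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(returns, edges)                            # token ids in [0, bins-1]

# Example: random-walk prices -> token ids an autoregressive model could train on.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, size=256)))
ohlcv = np.stack([prices, prices * 1.01, prices * 0.99, prices, np.ones_like(prices)], axis=1)
print(tokenize_klines(ohlcv)[:20])
```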

#foundation-model

DolphinGemma: How Google AI is helping decode dolphin communication

Google DeepMind Blog · 2025-04-14

Google, working with Georgia Tech and the Wild Dolphin Project, has developed DolphinGemma, a language model designed to learn the structure of dolphin vocalizations and generate dolphin-like sound sequences, with the goal of advancing understanding of dolphin communication and enabling potential interspecies dialogue.

#foundation-model

A decoder-only foundation model for time-series forecasting

Papers with Code Trending · 2023-10-14

This article presents a research paper on TimesFM, a decoder-only time-series foundation model that adapts large language model techniques to achieve zero-shot forecasting accuracy close to that of supervised models across diverse time-series datasets.
