@_avichawla: Hugging Face meets Claude! I built a @huggingface fine-tuning studio that lets you fine-tune any LLM directly from Claude.


Summary

A developer released a Hugging Face fine-tuning studio that allows users to fine-tune LLMs directly through Claude, built using the mcp-use SDK.

Hugging Face meets Claude! I built a @huggingface fine-tuning studio that lets you fine-tune any LLM directly from Claude.

The app connects to the HF Hub for model and dataset search. It handles chat template formatting for the training data, and lets you configure LoRA rank, quantization, batch size, and learning rate directly from Claude. Training runs on HF's GPU infra via AutoTrain. Once training finishes, you can also chat with your fine-tuned model (or any other LLM on HF) directly from Claude.

The studio's built with @manufact's mcp-use SDK, an open-source full-stack framework to build MCP Apps for Agents. In mcp-use, any MCP tool can be associated with a UI. You define a tool handler, create a React component, and the mcp-use framework handles the tool registration, prop mapping between server and widget, bundling, and hot reload during development.

The widgets follow the MCP Apps standard, inspired by OpenAI's Apps SDK. Claude is an example, but you can render them as interactive UI elements in any conversational MCP client that supports it. Similarly, this pattern works for any workflow you want to bring inside a chat, like eval dashboards, dataset explorers, or model comparison tools.

I have shared my fine-tuning studio repo in the replies!
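The post mentions the studio handles chat template formatting for the training data. As a rough illustration of what that step involves, here is a minimal, dependency-free Python sketch; the ChatML-style markers and field names are illustrative assumptions, not the studio's actual code (in practice this is typically delegated to the model tokenizer's own chat template):

```python
# Sketch: render conversation turns into a single training string.
# The <|im_start|>/<|im_end|> markers mimic a generic ChatML-style
# layout; real models each define their own template.

def format_chat(messages):
    """Render a list of {role, content} dicts into one training string."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    return "\n".join(parts)

sample = [
    {"role": "user", "content": "What is LoRA?"},
    {"role": "assistant", "content": "A low-rank fine-tuning method."},
]
print(format_chat(sample))
```

Every example in a dataset gets flattened this way before tokenization, so that the model sees role boundaries consistently at train and inference time.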

Similar Articles

Liberate your OpenClaw

Hugging Face Blog

Hugging Face provides a guide to migrating OpenClaw agents from restricted Anthropic Claude models to open-source alternatives, served via Hugging Face Inference Providers or run on local hardware with tools like Llama.cpp.

Train AI models with Unsloth and Hugging Face Jobs for FREE

Hugging Face Blog

Hugging Face and Unsloth are offering free credits and training resources for fine-tuning AI models with Hugging Face Jobs. Developers can train small language models like LFM2.5-1.2B-Instruct, with 2x faster training and 60% less VRAM usage, driven through coding agents like Claude Code and Codex.