@iotcoi: OpenAI trained the perfect LLM to hide data from OpenAI
Summary
OpenAI released a 1.5B-parameter (50M active) Apache-2.0 MoE model that strips sensitive data from text before it reaches any LLM, enabling fully local workflows in which the downstream model never sees the raw data.
OpenAI trained the perfect LLM to hide data from OpenAI

openai/privacy-filter
Apache 2.0, 1B params MoE, runs local

My stack, all living on localhost:
privacy-filter → LLM → privacy-filter

Names → [PERSON]
Stripe keys, IBANs → all vaporized

The LLM cannot leak what it never saw
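A minimal sketch of the localhost flow the post describes: mask PII before the prompt reaches the LLM, then restore it in the response. The `mask`/`restore` functions and the placeholder map are hypothetical illustrations of the pattern, not openai/privacy-filter's documented interface.

```python
# Hypothetical sketch of the privacy-filter -> LLM -> privacy-filter
# flow. The mask/restore API shown here is an assumption for
# illustration; in practice the privacy-filter model would detect
# the PII spans instead of this hardcoded stub.

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace PII spans with typed placeholders, keeping a reverse map."""
    replacements = {
        "[PERSON]": "Alice Smith",
        "[IBAN]": "DE89370400440532013000",
    }
    for placeholder, original in replacements.items():
        text = text.replace(original, placeholder)
    return text, replacements

def restore(text: str, replacements: dict[str, str]) -> str:
    """Re-insert the original values into the LLM's response, locally."""
    for placeholder, original in replacements.items():
        text = text.replace(placeholder, original)
    return text

prompt = "Draft a payment reminder to Alice Smith for IBAN DE89370400440532013000."
masked_prompt, mapping = mask(prompt)
print(masked_prompt)
# -> "Draft a payment reminder to [PERSON] for IBAN [IBAN]."
# response = local_llm(masked_prompt)   # the LLM never sees the raw PII
# final = restore(response, mapping)    # placeholders swapped back locally
```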
Similar Articles
@eliebakouch: very nice release by @OpenAI! a 50M active, 1.5B total gpt-oss arch MoE, to filter private information from trillion sc…
OpenAI released a 1.5B-parameter MoE model with only 50M active parameters that can filter private data from trillion-token datasets while supporting a 128k-token context length.
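At trillion-token scale the corpus cannot sit in memory, so a streaming pass is the natural fit. A hedged sketch of that pattern using Hugging Face `datasets` streaming; the `allenai/c4` dataset is an illustrative stand-in, and the `redact` stub marks where the privacy-filter model would run (see the token-classification sketch further below). OpenAI's actual sanitization pipeline, batching, and 128k-context handling are not described in these posts.

```python
# Illustrative sketch of dataset-scale sanitization via streaming,
# so the corpus never has to fit in memory. Dataset choice and the
# redact stub are assumptions, not OpenAI's pipeline.
from datasets import load_dataset

def redact(text: str) -> str:
    # Stub: a real pipeline would run the privacy-filter model here.
    return text

stream = load_dataset("allenai/c4", "en", split="train", streaming=True)
clean = stream.map(lambda ex: {"text": redact(ex["text"])})

for example in clean.take(3):  # peek at the first few sanitized records
    print(example["text"][:200])
```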
OpenAI Privacy Filter Model
OpenAI quietly released an Apache-2.0-licensed privacy-filter model with open weights on Hugging Face, letting users run privacy-preserving filtering locally without giving up big-lab model quality.
@altryne: OpenAI just open sourced a new 1.5B (50m active) model on HuggingFace with Apache 2.0 license! It's not a new LLM, this…
OpenAI released a 1.5-billion-parameter PII detection model, Privacy Filter, under Apache 2.0 on HuggingFace.
openai/privacy-filter
OpenAI releases Privacy Filter, a 1.5B parameter bidirectional token classification model for PII detection and masking, featuring an Apache 2.0 license and long-context support for high-throughput data sanitization.
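Since the entry describes a bidirectional token classification model, it should slot into the standard Hugging Face token-classification workflow. A sketch under that assumption; the entity labels produced by the model (e.g. whether a name comes back as `PER`, `PERSON`, or something else) are defined by the model card, not known here.

```python
# Sketch of PII masking with a token-classification model via
# Hugging Face transformers. The label set is an assumption;
# consult the model card for the actual entity types.
from transformers import pipeline

detector = pipeline(
    "token-classification",
    model="openai/privacy-filter",   # model id from the release
    aggregation_strategy="simple",   # merge subword tokens into spans
)

def redact(text: str) -> str:
    """Replace each detected PII span with a [LABEL] placeholder."""
    # Apply replacements right-to-left so earlier offsets stay valid.
    spans = sorted(detector(text), key=lambda s: s["start"], reverse=True)
    for span in spans:
        placeholder = f"[{span['entity_group']}]"
        text = text[: span["start"]] + placeholder + text[span["end"]:]
    return text

print(redact("Contact Jane Doe at jane@example.com about invoice 4417."))
```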
Introducing OpenAI Privacy Filter
OpenAI releases Privacy Filter, an open-weight model designed to detect and redact personally identifiable information (PII) in text with high efficiency and context awareness.