Thousands of apps built with agentic AI platforms like Lovable, Replit, Netlify, and Base44 are exposing private data
Summary
A Red Access investigation reveals that thousands of AI-generated web apps on platforms like Lovable and Replit are exposing sensitive private data due to misconfigurations. This highlights significant security risks associated with the rising trend of 'vibe coding' and unvetted AI tool usage.
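The class of misconfiguration at issue, backend tables that an AI-generated app leaves world-readable without access controls, can be caught with a simple audit pass. A minimal sketch (the config schema, key names, and function below are hypothetical illustrations, not any platform's real format or anything from the Red Access report):

```python
# Hypothetical audit helper: flag backend tables that an AI-generated app
# has left publicly readable without row-level security.
# The config schema here is illustrative only.

def find_exposed_tables(config: dict) -> list[str]:
    """Return names of tables that are publicly readable and unprotected."""
    exposed = []
    for table, rules in config.get("tables", {}).items():
        public_read = rules.get("public_read", False)
        rls_enabled = rules.get("row_level_security", False)
        if public_read and not rls_enabled:
            exposed.append(table)
    return exposed

# Example: a vibe-coded app that enabled public reads on user data.
app_config = {
    "tables": {
        "users": {"public_read": True, "row_level_security": False},
        "orders": {"public_read": True, "row_level_security": True},
        "audit_log": {"public_read": False, "row_level_security": False},
    }
}

print(find_exposed_tables(app_config))  # -> ['users']
```

In this sketch only `users` is flagged: `orders` is public but protected by row-level security, and `audit_log` is not publicly readable at all. The point is that the dangerous state is the *combination* of defaults, which is exactly what unvetted AI-generated scaffolding tends to leave in place.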
Similar Articles
The AI industry’s model and agent skill repositories are full of malware. The infrastructure built to accelerate development is now the vector for compromising it.
Hugging Face and ClawHub, major repositories for AI models and agent skills, have been systematically compromised with hundreds of malicious entries that steal credentials and hijack systems for cryptocurrency mining, exploiting trust in shared infrastructure.
@PrajwalTomar_: I've been building apps with AI for the past year and this design resource list is actually INSANE. If you're building …
A curated list of design resources for developers building AI apps with Lovable, Rork, or Claude.
AI News: A Huge Week for AI Apps (Anthropic, OpenAI, Google)
OpenAI’s new Codex desktop app combines code generation, browser automation and persistent agents into a single IDE, while Anthropic upgraded Claude Code with parallel sessions and Google launched desktop apps, Chrome slash commands and an expressive TTS model.
AI has another security problem
The article argues that AI-generated code and closed-source software are inherently less secure, that LLMs like Anthropic’s Mythos will exacerbate vulnerabilities, and that open-source projects are therefore the only trustworthy option.
AI News: Anthropic Went Crazy This Week!
Anthropic launched 74 updates in 52 days including Computer Use, Projects, and Claude Code Auto Mode, while Google countered with Gemini 3.1 Flash Live, vibe-coded browser demos, and Lyria 3 Pro music tools, as GenSpark enters with $20/month unlimited AI through 2026.