When “By Design” Is the Breach: What Lovable’s Week Teaches Us About Foundations

Author: Dr Joshua Scarpino

On April 20, 2026, security researcher @weezerOSINT reported that a free Lovable account, with five API calls, could read another user’s source code, database credentials, and AI chat histories.¹ The flaw was a Broken Object Level Authorization (BOLA) issue, meaning the platform’s API returned objects to any authenticated caller without verifying whether that caller actually owned the data being requested.² Lovable’s first public response was that the exposure was “intentional behavior” and a matter of “unclear documentation.”¹ Its second response walked that back and attributed the failure to its bug bounty partner, HackerOne, which had closed the original March 3 report as a duplicate.¹

The company is valued at $6.6 billion and counts Uber, Zendesk, and Deutsche Telekom among its customers.¹ This was not the first incident. In February 2026, a researcher documented sixteen vulnerabilities, six of them critical, in a single Lovable-hosted application featured on the platform’s own showcase page; that application exposed records belonging to 18,697 users across K-12 schools and universities.³ A separate class of flaw on Lovable-generated apps (CVE-2025-48757) was documented in mid-2025, when roughly one in ten of 1,645 sampled applications was found to leak user data due to missing row-level security configurations in Supabase.⁴

Three incidents in three quarters is not a communications problem. It is a foundation problem.

The Foundations That Were Missing

Strip away the vibe-coding framing and the platform valuation, and the Lovable incident reduces to a small number of controls that mature software organizations treat as non-negotiable.

The first is authorization at the object layer. An API that returns data to the caller must verify, on every request, that the caller is permitted to access that specific object. BOLA is the number-one risk in the OWASP API Security Top 10 because the failure mode is both common and catastrophic.²

The second is secure defaults. Lovable’s “public” visibility setting historically made both source code and AI chat histories available to anyone, and early free-tier users could not create private projects at all.¹ The company eventually switched to private-by-default in December 2025; in February 2026, a backend change re-enabled public access to chats on older projects.¹

The third is change control and regression testing. A permissions consolidation in production re-introduced an access flaw that had already been patched. That is a validation gap, not a coding error.

The fourth is coordinated vulnerability disclosure. A researcher reported the flaw on March 3; the report was closed as a duplicate; the issue persisted for 48 days before public pressure triggered remediation.¹
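The object-layer check can be sketched in a few lines. This is a minimal illustration of the control, not Lovable’s actual API; the data model and function names here are assumptions for the example.

```python
# Minimal sketch of an object-level authorization check (BOLA defense).
# PROJECTS stands in for whatever datastore backs the API.
PROJECTS = {
    "proj-1": {"owner": "alice", "source": "...", "db_credentials": "..."},
    "proj-2": {"owner": "bob", "source": "...", "db_credentials": "..."},
}

class Forbidden(Exception):
    """Raised when an authenticated caller requests an object it does not own."""

def get_project(caller_id: str, project_id: str) -> dict:
    """Return a project only if the caller owns it.

    Authentication answers "who is calling?"; this check answers the separate
    question "may this caller see this specific object?". Omitting it is
    exactly the BOLA failure mode: any authenticated caller can read any
    object whose ID it can guess or enumerate.
    """
    project = PROJECTS.get(project_id)
    if project is None:
        raise KeyError(project_id)
    if project["owner"] != caller_id:
        # Authenticated but not authorized: deny per-object access.
        raise Forbidden(f"{caller_id} does not own {project_id}")
    return project
```

The check must run on every request path that dereferences an object ID, not only on the "main" endpoint; the Lovable flaw was reachable through ordinary authenticated API calls.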

None of these are novel controls. They appear in NIST SP 800-53, ISO/IEC 27001, and OWASP ASVS. They are taught in every introductory application security course. The question is not whether Lovable’s engineers knew about them; the question is why the organization shipped, scaled to eight million users, and raised a Series B before these controls were demonstrably in place.

Why This Pattern Is Structural, Not Accidental

Lovable is not uniquely negligent. It is representative of a broader category. Veracode’s 2025 GenAI Code Security Report tested more than 100 large language models across 80 coding tasks and found that AI-generated code contained security vulnerabilities in 45 percent of cases; Java code failed security tests more than 70 percent of the time.⁵ Independent benchmarking cited by Checkmarx placed the insecurity rate of AI-generated code across multiple assistants as high as 70 percent when evaluated against secure coding baselines.⁶ Veracode’s October 2025 update showed that models from Anthropic, Google, Qwen, and xAI released between July and October 2025 produced no meaningful security improvement over their predecessors; only OpenAI’s GPT-5 reasoning models broke the pattern, reaching a 70 to 72 percent pass rate.⁷

When the code generation tool does not prioritize security, the platform built on top of that code generation tool does not prioritize security, and the economic incentives reward time-to-revenue over time-to-review, the resulting product will ship with predictable failure modes. Vibe coding did not invent insecure defaults; it industrialized them.

Two patterns are worth naming directly. First, AI vendors have adopted a habit of reclassifying security flaws as “intended behavior” or “by design” when remediation would be expensive or architecturally disruptive.⁸ Anthropic, Google, and Microsoft have each done so in recent months, in some cases declining to issue CVEs or public advisories for high-severity findings.⁸ Second, the downstream consumer of the AI-generated code, often a non-developer building on a no-code or low-code platform, inherits both the vulnerability and the responsibility for detecting and fixing it. The person who prompted “build me an exam grading tool” is not equipped to audit the row-level security configuration of the Supabase backend the AI generated for them.

The ARISE Framework™ as a North Star

Organizations consuming AI-generated code, and organizations building platforms that produce it, require a governance model that does not depend on the good faith of the vendor or the security intuition of the prompt writer. The ARISE Framework™ provides that model across seven domains. Each domain maps to a specific failure observed in the Lovable incidents.

GOVERN. Establish written accountability for AI-generated code before it enters production. The organization must define who owns the security posture of code that no human authored in the conventional sense, what evidence of review is required, and what risk acceptance authority is needed to deploy it. Lovable’s shifting public statements, moving from “intentional behavior” to “documentation failure” to “our bug bounty partner’s fault,” indicate the absence of a clear internal owner for security communications.¹

MANAGE. Integrate AI code generation tools into the software development lifecycle under the same policy, risk register, and vendor management controls as any other production dependency. Treat Lovable, Cursor, Copilot, and Replit as third-party components, not as developer productivity shortcuts that bypass review. A 45 percent vulnerability rate across LLM-generated code is a supply chain input that must be measured and tracked.⁵

IDENTIFY. Inventory every application, environment, and dataset that contains AI-generated code or was built on a vibe-coding platform. The organization cannot protect what it cannot locate. Shadow AI development, where a business unit builds and deploys an app on Lovable or a similar platform without security or IT involvement, is the modern equivalent of unmanaged SaaS sprawl. It must be catalogued before it can be governed.

PROTECT. Require secure defaults at the platform and application layer. This means row-level security enabled by default on every Supabase, Firebase, or PostgreSQL instance; authorization checks at the object level on every API endpoint; and private-by-default visibility on every project, environment, and dataset. Defaults matter because most users never change them. The Lovable timeline, where public-by-default persisted until December 2025 and re-emerged in February 2026, is a direct illustration.¹
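What secure-by-default provisioning looks like can be sketched briefly. The field names (visibility, rls_enabled) are illustrative assumptions, not any platform’s real schema.

```python
# Sketch: secure defaults applied at project creation, plus an audit that
# flags any configuration weaker than the baseline. Field names are assumed.
SECURE_DEFAULTS = {"visibility": "private", "rls_enabled": True}

def new_project(name: str, **overrides) -> dict:
    """Create a project config that is private with RLS on unless the
    caller explicitly opts out. Weakening security requires a deliberate
    override, never silence."""
    config = {"name": name, **SECURE_DEFAULTS}
    config.update(overrides)
    return config

def audit(config: dict) -> list[str]:
    """Return findings for every setting weaker than the secure baseline."""
    findings = []
    if config.get("visibility") != "private":
        findings.append("project is publicly visible")
    if not config.get("rls_enabled"):
        findings.append("row-level security disabled")
    return findings
```

The audit function matters as much as the defaults: it is how a later backend change that silently flips a setting, as happened in February 2026, gets caught before production.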

DETECT. Instrument AI-generated code with the same logging, monitoring, and anomaly detection applied to hand-written code. BOLA vulnerabilities are detectable at runtime through patterns such as a single account making sequential authenticated requests to enumerate object IDs. Detection capability must exist independent of the vendor’s willingness to acknowledge the flaw.
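The enumeration pattern is simple enough to detect with a windowed heuristic. The following is a sketch under assumed data shapes and an illustrative threshold, not a production detection rule.

```python
# Sketch of a runtime heuristic for BOLA probing: a single account
# requesting many distinct object IDs within one time window.
from collections import defaultdict

ENUM_THRESHOLD = 5  # distinct objects per account per window (assumed value)

def flag_enumeration(requests: list[tuple[str, str]]) -> set[str]:
    """Given (account_id, object_id) pairs from one time window, return
    the accounts that touched an unusually large number of distinct
    objects, a signature of sequential ID enumeration."""
    seen = defaultdict(set)
    for account, obj in requests:
        seen[account].add(obj)
    return {acct for acct, objs in seen.items() if len(objs) >= ENUM_THRESHOLD}
```

In practice the window, threshold, and baseline would be tuned per endpoint, but the point stands: this signal exists in the organization’s own logs, independent of the vendor’s willingness to acknowledge the flaw.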

RESPOND. Maintain a coordinated vulnerability disclosure program with defined service level objectives for triage, remediation, and public communication. A 48-day dwell time between report and patch, with a closed-as-duplicate dismissal in the interim, fails every published disclosure standard, including ISO/IEC 29147.¹ Organizations must evaluate their platform vendors against these standards before adoption, not after breach.

VALIDATE. Continuously test the controls above against real failure modes. Regression testing must include authorization checks after every permission system change. Penetration testing must include BOLA probing on every externally exposed API. Third-party assurance must include evidence that the vendor performs the same. The February 2026 re-introduction of the Lovable flaw, during backend permissions work, is precisely the scenario that validation exists to catch.¹

What Organizations Should Do This Quarter

Three actions are appropriate for any organization with meaningful exposure to AI-generated code or vibe-coding platforms.

Organizations should conduct a rapid inventory of all production and customer-facing applications built on AI code generation platforms, and document the authorization model, default visibility settings, and disclosure history of each platform in use.

Organizations should establish a minimum security baseline, expressed as a procurement and onboarding requirement, that vendors must meet before their output is permitted in production environments.

Organizations must build internal validation capability, whether through an in-house application security function, an external partner, or a continuous assurance program, that does not rely on the vendor’s characterization of its own security posture.

The Lovable incident will not be the last of its kind. The structural conditions that produced it are still in place: velocity-prioritized platforms, AI-generated code with a baseline vulnerability rate near one in two, and vendor disclosure practices that treat security findings as reputational problems rather than engineering problems. Organizations that want to consume these tools without inheriting their failure modes need a framework. ARISE provides one.


References

  1. Jessica Lyons, “Vibe coding upstart Lovable denies data leak, cites ‘intentional behavior,’ then throws HackerOne under the bus,” The Register, April 20, 2026, https://www.theregister.com/2026/04/20/lovable_denies_data_leak/
  2. OWASP Foundation, “API1:2023 Broken Object Level Authorization,” OWASP API Security Top 10: 2023 Edition, https://owasp.org/API-Security/editions/2023/en/0xa1-broken-object-level-authorization
  3. Connor Jones, “Lovable-hosted app littered with basic flaws exposed 18K users, researcher claims,” The Register, February 27, 2026, https://www.theregister.com/2026/02/27/lovable_app_vulnerabilities/
  4. Matt Palmer, “Statement on CVE-2025-48757,” mattpalmer.io, May 29, 2025, https://mattpalmer.io/posts/statement-on-CVE-2025-48757/
  5. Veracode, 2025 GenAI Code Security Report, originally published July 30, 2025, https://www.veracode.com/resources/analyst-reports/2025-genai-code-security-report/; see also Jens Wessling, “We Asked 100+ AI Models to Write Code. Here’s How Many Failed Security Tests.,” Veracode blog, July 30, 2025, https://www.veracode.com/blog/genai-code-security-report/
  6. BaxBench benchmark, June 2025, as cited in Checkmarx, “Why AI Code Generators Can’t Secure Their Own Code,” 2025, https://checkmarx.com/blog/why-ai-code-generators-cant-secure-its-own-code-and-why-checkmarx-assist-can/
  7. Veracode, “AI Code Security: October 2025 Update,” November 18, 2025, https://www.veracode.com/blog/ai-code-security-october-update/
  8. Jessica Lyons, “I meant to do that! AI vendors shrug off responsibility for vulns,” The Register, April 19, 2026, https://www.theregister.com/2026/04/19/ai_vendors_response_to_security/