Ross Gray · 10 March 2026 · 9 min read

The Risks Of AI-Generated Websites And Apps

From data leaks to simple security issues, we discuss the risks of creating AI-Generated websites and apps.

Introduction

[!danger]

  • Entire databases WIPED
  • User data LEAKED
  • Simple security setups SKIPPED

The power of AI in website and general code development has been incredible, and it has certainly skyrocketed the efficiency of project creation. But whilst AI can dramatically accelerate building websites, it can also accelerate shipping insecure software to real customers.

This blog details cases where AI-generated projects have gone wrong, and the risks you should be aware of when considering AI tools for your next project.

Common mistakes that AI tools make

When AI models create websites or web applications, whether in code editors and command lines like GitHub Copilot or Claude Code, or through browser tools like OpenAI Codex or Base44, they can sometimes make some very simple and fundamental mistakes.
Here are some real examples of mistakes that have occurred when using AI solutions and "vibe-coding", based on security reports (Wiz Research) and online articles (PCMag, Wikipedia).

Skipping Simple Security

SQL injection prevention is one of the first security measures considered when creating any input field on a website, whether for a contact form or a search bar. SQL injection is the placement of malicious code in SQL statements via web page input, and it remains one of the most common web hacking techniques, though its use is slowly declining as web security best practices evolve and popular frameworks integrate preventative logic (e.g. Laravel's Eloquent ORM, which parameterises queries by default).

Phishing attacks are now far more common than SQL injection attacks.

AI often overlooks these protections when the prompter does not make clear that contact forms and other input fields should be secured and validated. And because most AI models are trained on public codebases and open-source projects, they inherit bad practices such as raw SQL queries embedded in application logic. Models also prioritise fast results, which can lead to security oversights.
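The difference between a raw query and a parameterised one is small in code but large in consequence. A minimal sketch using Python's built-in sqlite3 module (table and data are illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# UNSAFE: user input is spliced directly into the SQL string.
# An input like "alice' OR '1'='1" changes the query's meaning.
def find_user_unsafe(name):
    return conn.execute(
        f"SELECT email FROM users WHERE name = '{name}'"
    ).fetchall()

# SAFE: a parameterised query keeps the input as data, never as SQL.
def find_user_safe(name):
    return conn.execute(
        "SELECT email FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("alice' OR '1'='1"))  # leaks every row
print(find_user_safe("alice' OR '1'='1"))    # matches nothing
```

If a prompt doesn't ask for it explicitly, an AI tool may emit the first form, which is exactly the raw-query habit inherited from public codebases.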

Authentication setups are similarly overlooked, which can open the application or website to unauthorised access to admin areas and Content Management Systems (CMS).

AI struggles with this because it is usually optimised to work as a snippet generator, not a system-level workspace designer, and it does not often consider the zoomed-out architecture of the project (this is getting better, especially in Anthropic models like Claude Opus).

Over-complicating Logic (Code Inflation)

If a human developer might write 30 lines, AI may produce 300.
In an AI's metaphorical eyes, it may believe:
"If it works it works, if I've done what the prompt suggests, and I have followed my guidelines, therefore the task is complete." The AI focuses on delivering a working result quickly.

In custom projects, AI can often overcomplicate simple logic, such as database migrations, custom styling and script functionality. This can occur in various ways, including:

  • Using unnecessary packages and imports (repositories, plugins, addons, modules)
  • Duplicating logic that already exists in other files or folders in the project
  • Creating complex conditionals for simple features
  • Creating test logic that is not needed

If inflated code is ignored, line counts increase, performance declines, and problems become more likely to occur.

This can be avoided if the model is monitored by experienced developers who understand where simpler logic is better suited.
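Code inflation is easiest to see side by side. A contrived but representative example of the kind of nested conditional AI tools sometimes emit, next to the one-liner a reviewer would prefer:

```python
# Inflated version: deeply nested branches for a trivial check.
def is_adult_inflated(age):
    if age is not None:
        if isinstance(age, int):
            if age >= 18:
                result = True
            else:
                result = False
        else:
            result = False
    else:
        result = False
    return result

# Simpler equivalent: same behaviour, one readable line.
def is_adult(age):
    return isinstance(age, int) and age >= 18

print(is_adult_inflated(21), is_adult(21))      # both True
print(is_adult_inflated("21"), is_adult("21"))  # both False
```

Both functions behave identically; the second is what an experienced developer would keep, and it is far easier to debug and maintain.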

Lack of maintainability

When making a project, most people want the ability to maintain, optimise and scale it in the future. Most of the time, AI's priority is not to create a maintainable workspace but a working one. This can lead to inconsistent project architecture (repeating our point about AI lacking the zoomed-out architectural view of the project) and overly complex logic for what should be simple solutions, making work like debugging, updating and removing logic very difficult.

Other mistakes

Other mistakes can include:

  • Old code left online
  • Poor testing
  • Exposing API keys
  • Making the code hard to understand from the prompter's point of view
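Exposed API keys deserve a special mention, because generated code often hard-codes them directly in source files that end up in public repositories. The standard remedy is to read secrets from the environment (or a secret manager). A minimal sketch; the variable name `PAYMENTS_API_KEY` is hypothetical:

```python
import os

# Never hard-code a secret like API_KEY = "sk-live-..." in a source
# file. Read it from the environment at runtime and fail loudly if
# it is missing, so a misconfigured deploy is caught immediately.
def get_api_key():
    key = os.environ.get("PAYMENTS_API_KEY")
    if not key:
        raise RuntimeError("PAYMENTS_API_KEY is not set")
    return key

os.environ["PAYMENTS_API_KEY"] = "test-key"  # demo value only
print(get_api_key())
```

Pairing this with a `.gitignore`d `.env` file keeps the secret out of version control entirely.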

The Start-up world of rushing to launch with AI

For non-technical entrepreneurs, it can become very exciting to realise the potential of these AI solutions, and some start-ups in the past five years (2021 - 2026) have based entire businesses on "vibe-coded" apps and services. But without a key understanding of the technical side, rushing the development of these apps and services can and has caused horrific scenarios for starting companies.

Many of these mistakes indicate that the initially "vibe-coded" projects lack the scalability to handle real-world traffic, meaning these AI-made projects are great for demos but not suitable for production. Many start-ups have paid huge rebuild prices ($50K - $500K) when their projects started to falter under real user bases, wasting months on fixing system architecture. Some start-ups didn't even survive this overuse of AI coding.

Examples of big AI mistakes at enterprise/large companies

These problems haven't just affected new start-ups. Enterprise-level companies with fully established business structures and employees have also suffered catastrophic issues due to AI, and ironically some of these same companies sell AI solutions themselves. Major examples include:

The Replit "Rogue Agent" Incident (2025)

Replit, a website and web app generating platform like Base44 and Cursor, claims on its own website to create production-level applications.

A user named Jason Lemkin, who created an application through the service, saw the Replit agent "panic" and run npm run db:push (which syncs the local schema definition with the database) against the production database, wiping the entire database, including real user data.

The agent later confessed to Lemkin that it "violated the explicit directive" in a Replit guideline designed to prevent such actions from occurring.

See Lemkin's AI chat confirming the deletion here - https://x.com/jasonlk/status/1946069562723897802
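One way teams reduce this class of accident is a hard guard that refuses destructive operations against production unless a human explicitly opts in. A sketch of the idea; the environment variable names (`APP_ENV`, `ALLOW_DESTRUCTIVE`) are hypothetical, not Replit's actual mechanism:

```python
import os

class ProductionGuardError(RuntimeError):
    pass

def guard_destructive(operation):
    """Refuse a destructive operation when APP_ENV is 'production',
    unless ALLOW_DESTRUCTIVE=1 has been explicitly set by a human."""
    env = os.environ.get("APP_ENV", "development")
    if env == "production" and os.environ.get("ALLOW_DESTRUCTIVE") != "1":
        raise ProductionGuardError(
            f"Refusing to run '{operation}' against production"
        )
    return f"running {operation}"

os.environ["APP_ENV"] = "production"
try:
    guard_destructive("db:push")
except ProductionGuardError as err:
    print(err)
```

A guard like this sits outside the AI agent's reach, so even an agent that "violates its directive" cannot push a schema sync to production on its own.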

Base44 (Wix) Platform Authentication Bypass in AI-Generated Apps

Base44, a heavily marketed website and web app generating platform (like Replit) that often advertises on YouTube and other social platforms, likewise claims to create full applications from ideas and prompts/natural language alone.

Security researchers (Wiz.io) discovered an authentication bypass affecting apps built on the platform.
Attackers could:

  • Register a new account
  • Verify it through a One-Time Password
  • Log in to private enterprise applications including external applications created on the platform

This potentially exposed sensitive data in live applications.

Thankfully, Wiz reported the vulnerability to Base44 and Wix, and it was patched within 24 hours.

This tells us that AI platforms that automatically generate login systems can introduce critical vulnerabilities across hundreds of businesses simultaneously.

Lovable.dev RLS Queries Security Flaw

Lovable.dev, another "vibe-coding" website and application generator used to create solutions fast, was found to include various security vulnerabilities. Originally discovered and reported by Matt Palmer in March 2025, a critical vulnerability involved modifying queries to read data submitted to the database of a Lovable-created website, "Linkable".
Lovable-created sites were found to lack secure Row Level Security (RLS) configurations. Palmer later found that up to 170 projects had the same security issue, exposing data such as API keys and tokens, transactions and subscriptions.

A summary of the report communications between Palmer, a Palantir engineer and Lovable is as follows:

  • March 20, 2025: Initial vulnerability discovered on linkable.site (now offline).
  • March 21, 2025: Broader RLS misconfiguration issue identified across multiple Lovable projects. Lovable emailed regarding scope and severity.
  • March 24, 2025: Lovable confirmed receipt of the email.
  • April 14, 2025: A Palantir engineer independently discovered and publicly tweeted about the same vulnerability, demonstrating active exploitation (e.g., extracting personal debt amounts, home addresses, API keys).
  • April 14, 2025: Palmer re-notified Lovable, referencing the public exploit, and initiated a 45-day disclosure window.
  • April 24, 2025: Lovable released "Lovable 2.0" with a "security scan" feature. This did not address the underlying RLS architectural flaw.
  • May 29, 2025: With no meaningful remediation or user notification from Lovable, Palmer published a CVE (Common Vulnerabilities and Exposures).

Find the full details of this report on Matt Palmer's website - https://mattpalmer.io/posts/2025/05/statement-on-CVE-2025-48757/

Lovable's lack of transparency, inaction after reports and poor security practices highlight the risks involved when using these "vibe-coding" solutions. We still have no confirmation of whether Lovable has fixed this specific security vulnerability a year later!
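Row Level Security means every read is filtered server-side by the authenticated user's identity, so one user can never select another user's rows. In Postgres/Supabase this is enforced by RLS policies on the database itself; the plain-Python sketch below (with invented rows and user IDs) just illustrates the principle that the missing configurations were supposed to enforce:

```python
# A tiny in-memory "table" standing in for a database.
ROWS = [
    {"owner": "user-1", "secret": "api-key-aaa"},
    {"owner": "user-2", "secret": "api-key-bbb"},
]

def fetch_secrets(authenticated_user_id):
    # The filter uses the server-verified identity, never a
    # client-supplied parameter the attacker could modify.
    return [row["secret"] for row in ROWS
            if row["owner"] == authenticated_user_id]

print(fetch_secrets("user-1"))  # only user-1's own secret
```

Without this filter (the misconfiguration Palmer found), any authenticated requester could read every row, including other customers' API keys and transaction data.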

[!note]
Be careful when using "vibe-coded" website and application solutions (e.g. Replit, Base44, Lovable) especially when looking for a secure and reliable solution.

Conclusion

The critical word for this topic is Understanding.
AI tools are great to have in modern website and application development to boost productivity, but it is important to step back and understand the project architecture, the features being built, and what security foundations should be included before you begin to utilise AI.

If you don't have a technical skillset, it is very easy to fall into these pitfalls with AI solutions, which is why it is best to consult an experienced developer with foundational knowledge of coding and web/digital best practices. Otherwise your website or app may include fundamental flaws.
