Lovable adds AI penetration testing to its platform through Aikido partnership
Lovable just launched AI-powered penetration testing for $100. A traditional pentest costs up to $50,000. That gap deserves some scrutiny.
Lovable launched penetration testing for apps built on its platform, powered by Aikido. For $100 per test, with results in one to four hours, builders get a report they can attach to SOC 2 and ISO 27001 questionnaires and hand to enterprise prospects.
A traditional pentest costs between $5,000 and $50,000 and takes weeks. For most early-stage founders, that is a non-starter.
Lovable’s argument is that AI has changed what is possible. The same depth of testing that used to require senior security consultants can now be delivered by autonomous agents in hours.
How it works:
- Enable Aikido in Lovable project settings under Connectors.
- Launch a pentest from the security tab.
- Aikido’s agents attack your live application, testing for privilege escalation, authentication bypasses, injection attacks, and other OWASP-class vulnerabilities.
- Findings sync back into Lovable as actionable issues with AI-generated fixes.
- Generate a report ready for compliance questionnaires and investor due diligence.
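To make the "agents attack your live application" step concrete, here is a minimal sketch of the kind of error-based injection probe a dynamic scanner automates. The payloads, error signatures, and endpoint shape are illustrative assumptions, not Aikido's actual implementation:

```python
from urllib.parse import quote

# Classic error-based SQL injection payloads (illustrative, not exhaustive).
PAYLOADS = ["'", "' OR '1'='1", '" OR "1"="1', "'; --"]

# Database error strings suggesting unsanitized input reached a query.
ERROR_SIGNATURES = [
    "sql syntax",
    "sqlite3.operationalerror",
    "unterminated quoted string",
    "odbc driver",
]

def looks_injectable(response_body: str) -> bool:
    """Heuristic: a database error leaking into the response is a red flag."""
    body = response_body.lower()
    return any(sig in body for sig in ERROR_SIGNATURES)

def probe(fetch, base_url: str, param: str) -> list[str]:
    """Send each payload through `fetch(url)` and collect the suspicious ones."""
    findings = []
    for payload in PAYLOADS:
        url = f"{base_url}?{param}={quote(payload)}"
        if looks_injectable(fetch(url)):
            findings.append(payload)
    return findings
```

In a real scan, `fetch` would wrap an HTTP client such as `urllib.request.urlopen`; it is injected here so the probe can be exercised offline.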
The whitebox testing angle is worth noting. Because Lovable already has access to your source code, Aikido can combine dynamic attack testing with code analysis, catching logic flaws that surface scanning alone would miss.
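A toy illustration of that whitebox advantage: with source access, a scanner can flag a handler that is routed but never wrapped in an authorization guard, a logic flaw a surface scan would miss whenever the endpoint happens to respond normally. The decorator names below are assumptions for the sketch, not a real scanner's rules:

```python
import re

# Decorators assumed for the sketch: a Flask-style route plus an auth guard.
ROUTE = re.compile(r"@app\.route\(")
GUARD = re.compile(r"@(login_required|requires_role)\b")

def unguarded_routes(source: str) -> list[str]:
    """Return handler names that are routed but carry no authorization guard."""
    findings: list[str] = []
    decorators: list[str] = []
    for line in source.splitlines():
        stripped = line.strip()
        if stripped.startswith("@"):
            decorators.append(stripped)
        elif stripped.startswith("def "):
            routed = any(ROUTE.match(d) for d in decorators)
            guarded = any(GUARD.match(d) for d in decorators)
            if routed and not guarded:
                findings.append(stripped.split("(")[0][len("def "):])
            decorators = []
        elif stripped:  # any other code resets the decorator stack
            decorators = []
    return findings
```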
This is where it is worth slowing down.
Earlier this month, compliance startup Delve faced serious public accusations of generating fake audit evidence, fabricating conclusions, and rubber-stamping certifications. The company has not responded and no charges have been filed.
The story hit hard in startup circles because it described something entirely believable: AI-generated compliance work that looks real on paper but isn’t.
Lovable and Aikido are not Delve. The use case is different and the product is transparent about what it is.
But the question is worth sitting with. If you cannot fully trust AI to write your code without reviewing every output, why would you fully trust AI to verify that your code is secure?
A report that satisfies a questionnaire is not the same as a report that means your app is actually locked down.
Bottom line: AI-generated pentests are probably better than nothing. The danger is treating them like they are something more.
Source: Lovable
If you need on-demand GPUs for training, fine-tuning, inference, or running open-source models, give RunPod a try.
- Available hardware: H100, H200, A100, L40S, RTX 4090, RTX 5090, and 30+ more
- Cost: significantly cheaper than AWS or GCP, billed per second, no contracts
- Setup: spins up in under a minute, 30+ regions worldwide

Get the core business tech news delivered straight to your inbox. We track AI, automation, SaaS, and cybersecurity so you don't have to.
Just read what you want, and be done with it.