Swiss Startup Lakera Secures $20 Million to Safeguard Generative AI from Malicious Prompts
A Swiss tech startup has attracted significant investment to address growing concerns in the enterprise sector regarding generative AI applications. Lakera, which has developed technology specifically designed to protect these AI models from “prompt injections” and other security threats, has successfully raised $20 million in a Series A funding round led by Atomico, a European venture capital firm.
Generative AI, powered by large language models (LLMs), has gained immense popularity with applications capable of human-like text generation. Despite its potential, there are considerable security concerns in corporate environments. LLMs, essential for generative AI, require prompts or instructions to function. However, these prompts can be manipulated to exploit the AI, forcing it to reveal sensitive information or perform unauthorized actions. Lakera aims to counteract these "prompt injections," ensuring safer AI deployment across industries.
A Pioneering Approach to AI Security
Established in Zurich in 2021, Lakera has launched with the promise to shield organizations from vulnerabilities associated with LLMs, such as data leaks and malicious prompt injections. Their technology is compatible with a variety of LLMs, including OpenAI's GPT series, Google's Bard, Meta's LLaMA, and Anthropic's Claude.
At its foundation, Lakera functions as a "low-latency AI application firewall," designed to monitor and secure data traffic within generative AI systems. Their flagship product employs a comprehensive database informed by open-source datasets, in-house machine learning research, and an innovative interactive game that challenges users to deceive it into revealing a secret password. This game not only enhances its robustness but also helps Lakera develop a detailed "prompt injection taxonomy" to categorize such attacks.
David Haber, Lakera's CEO, emphasized their AI-centric approach: "Our models learn from extensive generative AI interactions, enabling us to detect and counteract malicious behaviors in real-time. Continuous learning allows our detection models to evolve alongside emerging threats."
Advanced Security and Moderation Tools
Lakera's solution includes an API—Lakera Guard—that integrates seamlessly with corporate systems to detect and neutralize harmful prompts. Additionally, specialized models are trained to scan for inappropriate content like hate speech, sexual content, violence, and profanity, making them invaluable for public-facing applications like chatbots. These functionalities are easily implementable with minimal coding and come with a centralized control dashboard for policy customization.
Scaling Global Operations
With fresh capital at hand, Lakera plans to enhance its global footprint, notably in the U.S., where it already serves significant clients. Haber noted a surge in demand for secure AI solutions across various sectors, especially among financial institutions due to their stringent security and compliance requirements. "Large enterprises, SaaS companies, and AI model providers are urgently seeking secure AI deployment," he said, emphasizing the importance of GenAI in maintaining competitive advantage.
In addition to Atomico, prominent investors in this funding round included Dropbox’s venture arm, Citi Ventures, and Redalpine, highlighting the growing investor confidence in Lakera's pioneering approach to AI security.