
Machine learning and artificial intelligence security firm Protect AI has raised $35 million in a Series A funding round led by Evolution Equity Partners, with participation from Salesforce Ventures and existing investors. The round brings the company's total funding to $48.5 million.

Protect AI, based in Seattle, WA, was founded in 2022 by Ian Swanson (CEO, formerly AWS worldwide leader for artificial intelligence and machine learning) and Badar Ahmed (CTO, formerly director of engineering at Oracle). Richard Seewald, founder and managing partner at Evolution Equity Partners, joins the Protect AI board of directors.

The growth in the use of machine learning and artificial intelligence has created a new layer of risk, both to ML systems and from them. ML and AI models are subject to a new range of adversarial attacks, such as poisoning of the training data and manipulation of the algorithms used. A compromised ML system can in turn produce bad decisions and reputational damage, or compliance failures leading to regulatory fines.
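To make the data-poisoning threat concrete, the minimal sketch below flips a fraction of training labels, a basic form of poisoning, and compares the resulting classifier with one trained on clean data. It uses a synthetic scikit-learn dataset and is purely illustrative; it has no connection to Protect AI's products.

```python
# Minimal illustration of training-data poisoning via label flipping.
# Synthetic data and scikit-learn only; not related to any vendor tooling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An attacker flips the labels of 30% of the training set.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
dirty_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", dirty_model.score(X_test, y_test))
```

Even this crude attack measurably degrades test accuracy; subtler poisoning that targets specific inputs is harder still to detect, which is why visibility into training data provenance matters.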

“Despite the sheer magnitude of the AI/ML security challenge, none of the industry’s largest cybersecurity vendors currently offer a solution to this problem,” warns Seewald.

Protect AI offers a platform called AI Radar that provides visibility into the assets and inventory employed in ML/AI systems. The growing supply chain of foundation models and external third-party training data sets creates a blind spot for ML/AI risks such as regulatory compliance failures, PII leakage, data manipulation, model poisoning, infrastructure compromise, and brand damage.

“Protect AI provides new and innovative solutions, like AI Radar, that enable organizations to build, deploy, and manage safer AI by monitoring, detecting and remediating security vulnerabilities and threats in real-time,” comments Swanson.

The three key pillars of the product are real-time visibility, an immutable bill of materials (dubbed an MLBOM), and pipeline and model security.

Visibility is provided by real-time insights into the ML attack surface. The MLBOM is created automatically and is dynamic: it tracks all components and dependencies in the ML system and provides visibility and auditability into the supply chain.
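The company has not published the MLBOM's actual schema, but a bill of materials for an ML pipeline would plausibly record each dataset, base model, and dependency along with its provenance and a content hash. The sketch below is a hypothetical illustration of that idea; all field names and values are invented for the example.

```python
# Hypothetical sketch of what an ML bill of materials (MLBOM) might
# capture. Field names are illustrative, not Protect AI's schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MLBOMComponent:
    name: str     # e.g. a dataset, base model, or library
    kind: str     # "dataset" | "model" | "dependency"
    version: str
    source: str   # where it came from (registry, URL, bucket)
    sha256: str   # content hash, making the record tamper-evident

@dataclass
class MLBOM:
    model_name: str
    pipeline_run: str
    components: List[MLBOMComponent] = field(default_factory=list)

bom = MLBOM(
    model_name="fraud-classifier",
    pipeline_run="2023-07-26T00:00:00Z",
    components=[
        MLBOMComponent("training-data-q2", "dataset", "v3",
                       "s3://example-bucket/q2.parquet", "ab12..."),
        MLBOMComponent("bert-base-uncased", "model", "1.0",
                       "huggingface.co/bert-base-uncased", "cd34..."),
    ],
)
```

Hashing each component is what makes such a record effectively immutable: any swap of a dataset or base model after the fact would no longer match the recorded digest.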


Security is enabled through AI Radar's continuous use of Protect AI's scanning tools for LLMs and other ML inference workloads. It automatically detects security policy violations, model vulnerabilities, and malicious code injection attacks, and it integrates with third-party AppSec and CI/CD orchestration tools.
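The article does not detail how the scanners work, but one widely used technique for detecting malicious code injection in serialized models is to inspect Python pickle files for opcodes that can import modules or invoke callables at load time. The sketch below, built on the standard-library pickletools module, illustrates that general approach; it is not Protect AI's scanner, and a production tool would also allowlist the imports legitimately used by ML frameworks.

```python
# Hedged sketch of one common model-scanning technique: flagging
# pickle opcodes that can execute arbitrary code when a serialized
# model is deserialized. Illustrative only, not a production scanner.
import pickletools
import sys

# Opcodes that import objects or invoke callables at load time.
UNSAFE_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list:
    """Return a list of potentially dangerous opcodes found in the file."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in UNSAFE_OPCODES:
                findings.append(f"offset {pos}: {opcode.name} {arg!r}")
    return findings

if __name__ == "__main__":
    for finding in scan_pickle(sys.argv[1]):
        print(finding)
```

Because legitimate model files also use some of these opcodes, real scanners pair this kind of static inspection with allowlists of known-safe globals rather than flagging every occurrence.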

Related: Google Creates Red Team to Test Attacks Against AI Systems

Related: ChatGPT, the AI Revolution, and the Security, Privacy and Ethical Implications

Related: Cyber Insights 2023 | Artificial Intelligence
