Cybersecurity

Harmonic Lands $7M Funding to Secure Generative AI Deployments

A British startup called Harmonic Security has attracted $7 million in seed-stage investment to build technology to help secure generative AI deployments in the enterprise.

Harmonic, based in London and San Francisco, said it is working on software to mitigate the ‘wild west’ of unregulated AI apps harvesting company data at scale.

The company said the early-stage financing was led by Ten Eleven Ventures, an investment firm actively investing in cybersecurity startups. Storm Ventures and a handful of security leaders also took equity positions.

Harmonic is entering an increasingly crowded field of AI-focused cybersecurity startups looking to find profits as businesses embrace AI and large language model (LLM) technology.

A wave of new startups, including CalypsoAI ($23 million raised) and HiddenLayer ($50 million), has attracted major funding rounds to help businesses secure generative AI deployments.

Hotshot company OpenAI is already using security as its sales pitch for ChatGPT Enterprise while Microsoft and others are putting ChatGPT to work on solving threat intelligence and other security problems. 

Harmonic, the brainchild of Alastair Paterson (who previously led Digital Shadows to a $160 million acquisition by ReliaQuest/KKR), is promising technology that gives businesses a complete picture of AI adoption across the enterprise, offering risk assessments for all AI apps and flagging potential compliance, security, or privacy issues.


The company cited a Gartner study showing that 55% of global businesses are piloting or using generative AI, and warned that a majority of these apps are unregulated, with unclear policies on how data will be used, where it will be transmitted, or how it will be kept secure.

“Harmonic provides a risk assessment of all AI apps so that high risk AI services that could lead to compliance, security or privacy incidents are identified. This approach means that organizations can control access to AI applications as required, including selective blocking of sensitive content from being uploaded, without needing rules or exact matches,” the company explained.

Related: Investors Pivot to Safeguarding AI Training Models

Related: CalypsoAI Banks $23 Million for AI Security

Related: HiddenLayer Raises $50M Round for AI Security Tech

Related: OpenAI Using Security to Sell ChatGPT Enterprise

Related: Google Brings AI Magic to Fuzz Testing With Solid Results