Cybersecurity

Researchers Highlight Google’s Gemini AI Susceptibility to LLM Threats

Mar 13, 2024NewsroomLarge Language Model / AI Security Google’s Gemini large language model (LLM) is susceptible to security threats that could cause it to divulge system prompts, generate harmful content, and carry out indirect injection attacks. The findings come from HiddenLayer, which said the issues impact consumers using Gemini Advanced with Google Workspace as well […]

Cybersecurity

Top LLM vulnerabilities and how to mitigate the associated risk – Help Net Security

As large language models (LLMs) become more prevalent, a comprehensive understanding of the LLM threat landscape remains elusive. But this uncertainty doesn’t mean progress should grind to a halt: exploring AI is essential to staying competitive, so CISOs are under intense pressure to understand and address emerging AI threats. While the AI threat landscape changes […]

Cybersecurity

A Deep Dive into Stream-Jacking Attacks on YouTube and Why They’re So Popular

Stream-jacking attacks have gained significant traction on large streaming services in recent months, with cybercriminals targeting high-profile accounts with large follower counts to broadcast their fraudulent ‘messages’ to the masses. Starting from the fact that various takeovers in the past resulted in channels morphing into impersonations of known public figures (e.g. Elon Musk, […]

Cybersecurity

LLM Guard: Open-source toolkit for securing Large Language Models – Help Net Security

LLM Guard is a toolkit designed to fortify the security of Large Language Models (LLMs) and built for easy integration and deployment in production environments. It provides extensive evaluators for both LLM inputs and outputs, offering sanitization, detection of harmful language and data leakage, and protection against prompt injection and jailbreak attacks. LLM […]