Dark
Light

Google’s Gemini AI at risk from LLM threats, researchers warn

1 min read
89 views

TLDR:

Researchers have found that Google’s Gemini AI, specifically the large language model (LLM), is vulnerable to security threats that could result in the divulging of system prompts, the generation of harmful content, and indirect injection attacks. These vulnerabilities could impact consumers and companies using Google Workspace with Gemini Advanced or the LLM API. The vulnerabilities include leaking system prompts, generating misinformation, and leaking system information by passing repeated uncommon tokens as input. The findings highlight the need for testing models for prompt attacks, training data extraction, and more.

Researchers have identified vulnerabilities in Google’s Gemini AI large language model (LLM) that could expose prompts, generate harmful content, and carry out injection attacks. The vulnerabilities impact users of Google Workspace with Gemini Advanced and companies using the LLM API.

HiddenLayer found vulnerabilities related to leaking system prompts, generating misinformation about topics like elections, and leaking system information by passing repeated uncommon tokens as input. These vulnerabilities highlight the importance of testing models for prompt attacks, training data extraction, and model manipulation.

The disclosure of these vulnerabilities comes in the wake of a novel model-stealing attack by academics from various institutions, emphasizing the need for safeguards against prompt injection, jailbreaking, and other adversarial behaviors. Google has stated that they are continuously improving safeguards and restricting responses to election-related queries as a precaution.

Previous Story

Cybersecurity talent shortage leaves half of firms in a bind

Next Story

No patient safety without strong cybersecurity – Congress must prioritize this

Latest from News