Attackers have recently exploited ChatGPT, which has a billion-user base. Cybersecurity researchers have identified new weaknesses in OpenAI’s ChatGPT. It could allow the attacker to steal private information from the user’s chat history and memory without their consent. Alongside, bypassing ChatGPT’s security measures.
The vulnerabilities were discovered in OpenAI’s GPT-4o and GPT-5 models by researchers at Tenable, who reported that they left millions of ChatGPT users prone to attacks. Tenable’s researchers, Moshe Bernsten and Liv Matan, confirmed in a report this week that, “By mixing and matching all of the vulnerabilities and techniques we discovered, we were able to create proofs of concept (PoCs) for different attack vectors.”
The Seven ChatGPT Vulnerabilities Discovered
- Indirect Prompt Injection via Trusted Sites: Harmful instructions hidden in webpage comments can lead ChatGPT to execute them when summarizing the content.
- Zero-Click Injection in Search Context: Simply searching for a poisoned website can immediately trigger malicious commands due to search engine indexing, such as Bing or OpenAI’s own crawler.
- One-Click through ChatGPT URLs: Attackers can create links such as chatgpt.com/?q={Prompt}0 that automatically run hidden prompts once opened.
- Safety Mechanism Bypass: By manipulating ChatGPT’s trust in Bing domains, attackers can disguise malicious URLs within Bing and tracking links.
- Conversation Injection: The prompts injected from a summarized site appear in the chat context, leading to unintended model behavior in the future.
- Malicious Content Hiding: With the use of markdown rendering bugs to conceal dangerous instructions in the text.
- Memory Injection: Attackers can impose a user’s ChatGPT memory by embedding hidden commands in web content.
As mentioned above, Tenable’s team presented proof of concept (POC) showing how these weaknesses could be combined to build a complete attack chain, allowing data theft and bypassing safety filters. Some vulnerabilities, such as zero- and one-click attacks, require little or no user interaction, posing a high risk for non-techies.
The research focuses on broader security challenges faced by LLMs such as ChatGPT. The main point is the integration with memory and browsing capabilities, which expand the attack surface.
This makes them the target for cyberattacks and even state-backed threat actors.
The researchers urge AI vendors to strengthen safeguards and ensure safety features like url_safe work as intended.
OpenAI has been updated regarding the findings, and the tech giant has already patched some of them. However, prompt injection will still be a critical security challenge for LLMs’ architecture, as confirmed by Tenable. The researchers said that AI vendors need to take care to strengthen safeguards and ensure safety features, such as URL-safe work, as required.
Check out our news section, where we publish all the cybersecurity-related updates to keep you safe from the emerging threats.
Recommended For You:


