MCP prompt hijacking: examining a major AI security threat

Security researchers at JFrog have found a prompt hijacking threat that exploits weak spots in how AI systems talk to each other over MCP (Model Context Protocol).

Business leaders want to make AI more helpful by connecting it directly to business data and tools. But hooking AI up like this also opens new security risks, not in the AI itself, but in how it is all connected. This means CIOs and CISOs need to think about a new problem: keeping the data stream that feeds AI safe, just as they protect the AI itself.

Why AI attacks targeting protocols like MCP are so dangerous

AI models, whether they are hosted in the cloud or running on local devices, share a basic limitation: they don't know what is happening right now. They only know what they were trained on. They don't know what code a programmer is working on or what is in a file on a computer.

Anthropic created MCP to fix this. MCP is a way for AI to connect to the real world, letting it safely use local data and online services. It is what lets an assistant like Claude understand what "this" means when you point to a piece of code and ask it to rework it.

However, JFrog's research shows that one implementation of MCP has a prompt hijacking weakness that can turn this dream AI tool into a nightmare security problem.

Imagine that a programmer asks an AI assistant to recommend a standard Python library for working with images. The AI should suggest Pillow, a good and popular choice. But because of a CVE-tracked flaw in the oatpp-mcp implementation, an attacker could sneak into the user's session, send their own fake request, and have the server treat it as if it came from the real user. The programmer then gets a bad suggestion from the AI assistant recommending a fake package called theBestImageProcessingPackage.

This is a serious attack on the software supply chain. An attacker could use this prompt hijacking to inject malicious code, steal data, or run commands, all while looking like a helpful part of the programmer's toolkit.

How this MCP prompt hijacking attack works

This prompt hijacking attack targets the way the system communicates over MCP, rather than the security of the AI model itself. The specific weakness was found in the Oat++ C++ framework's MCP implementation, which connects programs to the MCP standard.

The issue is in how the framework handles connections that use Server-Sent Events (SSE). When a real user connects, the server gives them a session ID. However, the flawed function uses the memory address of the session object as the session ID. This goes against the protocol's requirement that session IDs be unique and cryptographically secure.

This is a bad design because computers routinely reuse memory addresses to save resources. An attacker can take advantage of this by rapidly opening and closing lots of sessions to record these predictable session IDs. Later, when a real user connects, they may be handed one of these recycled IDs that the attacker already holds.
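To make the flaw concrete, here is a minimal C++ sketch, not the actual oatpp-mcp code, of the vulnerable pattern: a session ID derived from a pointer value, which an attacker can harvest and expect to see again once the memory is reused. All names below are illustrative.

```cpp
// Minimal sketch (not the actual oatpp-mcp code; names are hypothetical)
// showing why an object's memory address makes a poor session ID.
#include <cstdio>
#include <string>

struct Session {};  // stand-in for a per-connection session object

// Vulnerable pattern: the session "ID" is just the object's address.
std::string weakSessionId(Session* s) {
    char buf[32];
    std::snprintf(buf, sizeof(buf), "%p", static_cast<void*>(s));
    return std::string(buf);
}

int main() {
    Session* a = new Session;
    std::string harvested = weakSessionId(a);  // attacker opens a session
                                               // and records the ID...
    delete a;                                  // ...then closes it

    Session* victim = new Session;             // the allocator will often hand
                                               // back the same address, so the
                                               // harvested ID now names the
                                               // victim's session
    std::printf("harvested: %s\nvictim:    %s\n",
                harvested.c_str(), weakSessionId(victim).c_str());
    delete victim;
    return 0;
}
```

Because allocators aggressively recycle freed memory, the "unique" identifier repeats, which is exactly what the spray-and-harvest step of the attack relies on.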
Once the attacker holds a valid session ID, they can send their own requests to the server. The server cannot tell the difference between the attacker and the real user, so it delivers the malicious responses over the real user's connection. Even if some clients only accept certain responses, attackers can often get around this by spraying messages with common event numbers until one is accepted.

This lets the attacker corrupt the model's behaviour without changing the AI model itself. Any company using oatpp-mcp with HTTP SSE enabled on a network an attacker can reach is at risk.

What should AI security leaders do?

The discovery of this MCP prompt hijacking attack is a serious warning for all tech leaders, especially CISOs and CTOs, who are building or using AI assistants. As AI becomes more deeply connected to business systems through protocols like MCP, it also gains new risks. Securing the area around the AI is now a top priority.

Even though this specific CVE affects one implementation, the idea of prompt hijacking is a general one. To protect against this and similar attacks, leaders need to set new rules for their AI systems.

First, make sure all AI services use secure session management. Development teams need to ensure servers create session IDs using strong, random generators. This should be a must-have on any security checklist for AI programs. Using predictable identifiers like memory addresses is not acceptable.

Second, strengthen defences on the client side. Client programs should be designed to reject any event that does not match the expected session and event IDs and types. Simple, incrementing event IDs are vulnerable to spraying attacks and need to be replaced with unpredictable identifiers that do not collide. (A brief code sketch of both recommendations appears at the end of this post.)

Finally, apply zero-trust principles to AI protocols. Security teams need to review the entire AI stack, from the base model to the protocols and middleware that connect it to data. These channels need strong session separation and expiration, like the session management used in web applications.

This MCP prompt hijacking attack is a perfect example of how a known web application problem, session hijacking, is showing up in a new and dangerous way in AI. Securing these new AI tools means applying the same strong security basics to stop attacks at the protocol level.
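As a concrete illustration of the first two recommendations, here is a minimal C++ sketch that assumes nothing about the real oatpp-mcp API: the server derives session IDs from a strong entropy source, and the client rejects any SSE event that is not bound to its own session and an expected event ID.

```cpp
// Minimal sketch of the two mitigations (hypothetical helper names, not the
// oatpp-mcp API): unpredictable session IDs on the server, strict event
// filtering on the client.
#include <iomanip>
#include <random>
#include <sstream>
#include <string>

// Server side: derive the session ID from a strong entropy source.
// std::random_device is used here for brevity; a production server should
// draw from the OS CSPRNG (e.g. getrandom, BCryptGenRandom) or a vetted
// crypto library rather than relying on this being cryptographically strong.
std::string secureSessionId() {
    std::random_device rd;
    std::ostringstream out;
    out << std::hex << std::setfill('0');
    for (int i = 0; i < 4; ++i) {       // ~128 bits of entropy
        out << std::setw(8) << rd();
    }
    return out.str();
}

// Client side: drop any SSE event that is not bound to our own session and
// to an expected, unpredictable event ID (SseEvent is a hypothetical shape,
// not a real MCP client type).
struct SseEvent {
    std::string sessionId;
    std::string eventId;
    std::string data;
};

bool acceptEvent(const SseEvent& e,
                 const std::string& ownSessionId,
                 const std::string& expectedEventId) {
    return e.sessionId == ownSessionId && e.eventId == expectedEventId;
}
```

The point is not the specific helpers but the properties they enforce: identifiers an attacker cannot predict, and a client that refuses anything it did not ask for.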