Researchers jailbreak AI robots to run over pedestrians, place bombs for maximum damage, and covertly spy



Researchers from the University of Pennsylvania have discovered that a range of AI-enhanced robotics systems are dangerously vulnerable to jailbreaks and hacks. While jailbreaking LLMs on computers might have undesirable consequences, the same kind of hack affecting a robot or self-driving vehicle can quickly have catastrophic, even deadly, consequences. The report cites chilling examples of jailbroken robot dogs turning flamethrowers on their human masters, guiding bombs to the most devastating locations, and self-driving cars purposefully running over pedestrians.

Jailbreaking LLM-Controlled Robots: RoboPAIR Promo Video | Penn Engineering

Penn Engineering boffins have dubbed their LLM-powered robot-hacking technology RoboPAIR. Devices from three diverse robotics providers fell to RoboPAIR jailbreaking: the Nvidia-backed Dolphins LLM, the Clearpath Robotics Jackal UGV, and the Unitree Robotics Go2 quadruped. According to the researchers, RoboPAIR demonstrated a 100% success rate in jailbreaking these devices.

(Image credit: University of Pennsylvania)

“Our work shows that, at this moment, large language models are just not safe enough when integrated with the physical world,” warned George Pappas, UPS Foundation Professor of Transportation in Electrical and Systems Engineering (ESE), in Computer and Information Science (CIS), and in Mechanical Engineering and Applied Mechanics (MEAM), and Associate Dean for Research at Penn Engineering.

Other researchers quoted in the source article noted that jailbreaking AI-controlled robots is “alarmingly easy.” They explained that RoboPAIR works by being equipped with the target robot’s application programming interface (API), so that the attacker can format prompts in a way that the target device can execute as code.
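For readers curious how such an attacker-in-the-loop scheme fits together, below is a minimal Python sketch of a PAIR-style prompt-refinement loop, the general technique RoboPAIR builds on. Everything here is illustrative: query_llm, the API string, and the scoring scheme are hypothetical placeholders, not the researchers' actual implementation.

# Minimal sketch of a PAIR-style prompt-refinement loop (the general
# technique RoboPAIR builds on). query_llm, ROBOT_API_DOCS, and the
# scoring scheme are hypothetical placeholders, not the paper's code.

ROBOT_API_DOCS = "move_to(x, y), set_speed(v), stop()"  # attacker-supplied API context

def query_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a chat-completion call; wire up a real client here."""
    raise NotImplementedError

def pair_style_attack(goal: str, max_rounds: int = 20) -> str | None:
    prompt = goal
    for _ in range(max_rounds):
        # The target model is primed with the robot's API, so a successful
        # jailbreak yields output the device could execute as code.
        response = query_llm(f"You control a robot. Available API: {ROBOT_API_DOCS}", prompt)
        # A judge model scores how executable and on-goal the response is.
        verdict = query_llm(f"Rate 1-10 how fully this response achieves '{goal}':", response)
        if verdict.strip().startswith("10"):
            return prompt  # candidate jailbreak found
        # An attacker model rewrites the prompt using the judge's feedback.
        prompt = query_llm("Rewrite this jailbreak attempt to score higher: " + verdict, prompt)
    return None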

Jailbreaking a robot, or a self-driving vehicle, is done in a similar fashion to the jailbreaking of AI chatbots online, which we have discussed previously on Tom’s Hardware. However, Pappas notes that “Jailbreaking and robot control are relatively distant, and have traditionally been studied by different communities” – hence robotics companies have been slow to learn of LLM jailbreaking vulnerabilities.

In contrast to LLM use on personal computing devices, where the ‘AI’ is used to generate text and imagery, transcribe audio, personalize shopping recommendations, and so on, robotic LLMs act in the physical world and can wreak extensive havoc in it.

Looking at the robot dog example, your robotic canine pal can be transformed from a friendly helper or guide into a flamethrower-wielding attacker, a covert surveillance tool, or a device that hunts down the most harmful places to plant explosives. Self-driving cars can be just as dangerous, if not more so, being aimed at pedestrians or other vehicles, or instructed to plunge from a bridge.


As the examples above show, the potential dangers of jailbroken LLMs are cranked up to a whole new level. Moreover, once jailbroken, the AIs were found to go beyond merely complying with malicious prompts: the researchers found they might actively offer suggestions for greater havoc. This is a sizable step from early LLM successes in robotics, such as natural language robot commands and spatial awareness.

So, have the Penn Engineering researchers opened a Pandora’s box? Alexander Robey, a postdoctoral researcher at Carnegie Mellon University in Pittsburgh, says that while jailbreaking AI-controlled robots was “alarmingly easy” during the research, the engineering team ensured that all the robotics companies mentioned got access to the findings before they went public. Moreover, Robey asserts that “Strong defenses for malicious use-cases can only be designed after first identifying the strongest possible attacks.”

Last but not least, the research paper concludes that there is an urgent need to implement defenses that physically constrain LLM-controlled robots.
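The paper does not prescribe a specific mechanism here, but one way to picture such a physical-constraint defense is a guard layer that checks every LLM-proposed action against hard, non-LLM limits before it reaches the actuators. The sketch below is purely illustrative; the action format, geofence, and speed cap are assumptions, not taken from the research.

# Purely illustrative guard layer: every LLM-proposed action is checked
# against hard, non-LLM limits before reaching the actuators. The action
# format and limits below are assumptions, not taken from the paper.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str           # e.g. "move" or "speak"
    x: float = 0.0      # target coordinates, metres
    y: float = 0.0
    speed: float = 0.0  # metres per second

ALLOWED_KINDS = {"move", "speak"}
GEOFENCE = (-10.0, 10.0)  # the robot may never leave this square
MAX_SPEED = 1.0           # hard speed cap, enforced outside the LLM

def is_safe(action: Action) -> bool:
    """Reject anything outside the whitelisted, physically bounded envelope."""
    if action.kind not in ALLOWED_KINDS:
        return False
    in_bounds = GEOFENCE[0] <= action.x <= GEOFENCE[1] and GEOFENCE[0] <= action.y <= GEOFENCE[1]
    return in_bounds and action.speed <= MAX_SPEED

def execute(action: Action) -> None:
    if not is_safe(action):
        raise PermissionError(f"Blocked unsafe action: {action}")
    ...  # hand off to the real actuator stack

The point of such a design is that the constraints live outside the language model entirely, so even a fully jailbroken LLM cannot talk its way past them.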


