The protection of robots controlled by artificial intelligence can be breached, and the consequences could be catastrophic

Researchers at Penn Engineering have uncovered previously unidentified security vulnerabilities in a number of artificial intelligence platforms.
George Pappas, a professor of transportation at UPS in electrical and systems engineering, stated: “Our work shows that at this moment, large language models are not secure enough when integrated with the physical world.”
Pappas and his team developed an algorithm, named RoboPAIR, which is “the first algorithm designed to break the security of LLM-controlled robots”. Unlike current rapid engineering attacks targeting chatbots, RoboPAIR is specifically designed to “executing harmful physical actions” from LLM-controlled robots, such as the bipedal platform developed by Boston Dynamics and TRI.
It is said that RoboPAIR achieved a 100% success rate in breaking the protection of three famous research platforms in the field of robotics: the four-legged Unitree Go2, the four-wheeled Clearpath Robotics Jackal, and the LLM Dolphin simulator for self-driving vehicles. It took just days for the algorithm to gain full access to those systems and begin bypassing safety barriers. Once the researchers took control, they were able to direct the platforms to take dangerous actions, such as driving through road intersections without stopping.
The researchers wrote: “Our results reveal, for the first time, that the risks of security breach programs extend beyond text generation, given the clear possibility that breached robots could cause physical damage in the real world.”
The researchers at the University of Pennsylvania are working with platform developers to strengthen their systems against further intrusion, but they warn that these security issues are systematic.
Vijay Kumar, co-author from the University of Pennsylvania, said: “The results of this paper clearly demonstrate that prioritizing safety is critical to unleashing responsible innovation.” “We must address fundamental weaknesses before deploying AI-supported robots in the real world.”
Alexander Raby, the lead author of the study, added: “In fact, the red AI safety team, a safety practice that requires testing AI systems for threats and potential vulnerabilities, is essential to protect generative AI systems, because once vulnerabilities are identified, you can test and even train these systems to avoid them.