news

Study reveals concerning security vulnerabilities in artificial intelligence systems


A new study from the University of Pennsylvania has revealed previously undisclosed security vulnerabilities in several robotic platforms controlled by artificial intelligence systems, raising significant concerns about the safety of these systems in the real world.

George Pappas, Professor of Electrical and Systems Engineering at UPS, stated that large language models are not secure enough when integrated with physical systems, meaning these systems could be vulnerable to cyberattacks.

Researchers developed a new algorithm called RoboPAIR, which is the “first algorithm designed to bypass security in robots controlled by LLM systems.” Unlike current attacks targeting chatbot programs, RoboPAIR is specifically designed to encourage robots to take harmful physical actions.

RoboPAIR achieved a 100% success rate in penetrating three prominent research platforms in robotics, including the four-legged Unitree Go2, the four-wheeled Clearpath Robotics Jackal, and the Dolphins LLM Autonomous Vehicle Simulator. Researchers were able to access these systems in just days and began bypassing security barriers.

After gaining control of the systems, researchers were able to instruct the robots to take dangerous actions, such as driving through road crossings without stopping, increasing the risk of accidents.

The study indicated that the risks of hacked robots go beyond just generating text, as these robots could cause physical harm in the real world.

University of Pennsylvania researchers worked with platform developers to enhance their systems against further breaches, but they cautioned that these security issues are related to the core design.

Vijay Kumar, co-author from the University of Pennsylvania, emphasized the need for a safety-focused approach, calling for addressing fundamental weaknesses before deploying AI-powered robots in the real world.

Alexander Roby, the first author of the research paper, added that collaborative red teaming in artificial intelligence, which involves testing systems for threats, is essential to protect these systems. Once weaknesses are identified, systems can be tested and trained to avoid them, enhancing their safety and efficiency in practical applications.

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button
error: Content is protected !!