BPOI Banner
How Researchers Hacked AI Robots Into Breaking Traffic Laws—And Worse How Researchers Hacked AI Robots Into Breaking Traffic Laws—And Worse

How Researchers Hacked AI Robots Into Breaking Traffic Laws—And Worse

Penn Engineering researchers have uncovered critical vulnerabilities in AI-powered robots, exposing ways to manipulate these systems into performing dangerous actions like running red lights or engaging in potentially harmful activities—like detonating bombs.

The research team, led by George Pappas, developed an algorithm called RoboPAIR that achieved a 100% “jailbreak” rate on three different robotic systems: the Unitree Go2 quadruped robot, the Clearpath Robotics Jackal wheeled vehicle, and NVIDIA’s Dolphin LLM self-driving simulator.

“Our work shows that, at this moment, large language models are just not safe enough when integrated with the physical world,” George Pappas said in a statement shared by EurekAlert.

Alexander Robey, the study’s lead author, and his team argue addressing those vulnerabilities requires more than simple software patches, calling for a comprehensive reevaluation of AI integration in physical systems.

Jailbreaking, in the context of AI and robotics, refers to bypassing or circumventing the built-in safety protocols and ethical constraints of an AI system.

It became popular in the early days of iOS, when enthusiasts used to find clever ways to get root access, enabling their phones to do things Apple didn’t approve of, like shooting video or running themes.

When applied to large language models (LLMs) and embodied AI systems, jailbreaking involves manipulating the AI through carefully crafted prompts or inputs that exploit vulnerabilities in the system’s programming.

These exploits can cause the AI—be it a machine or software—to disregard its ethical training, ignore safety measures, or perform actions it was explicitly designed not to do.

In the case of AI-powered robots, successful jailbreaking can lead to dangerous real-world consequences, as demonstrated by the Penn Engineering study, where researchers were able to make robots perform unsafe actions like speeding through crosswalks, stomping into humans, detonating explosives, or ignoring traffic lights.

Prior to the study’s release, Penn Engineering informed affected companies about the discovered vulnerabilities and is now collaborating with manufacturers to enhance AI safety protocols.

“What is important to underscore here is that systems become safer when you find their weaknesses. This is true for cybersecurity. This is also true for AI safety,” Alexander Robey, the paper’s first author, wrote.

Researchers have been studying the impact of jailbreaking in a society that is increasingly relying on prompt engineering—which is natural language “coding.”

Notably, the “Bad Robot: Jailbreaking LLM-based Embodied AI in the Physical World” paper discovered three key weaknesses in AI-powered robots:

  • 1. Cascading vulnerability propagation: Techniques that manipulate language models in digital environments can influence physical actions. For example, an attacker could tell the model to “play the role of a villain” or “act like a drunk driver” and use that context to make the model act in a different way than intended.
  • 2. Cross-domain safety misalignment: This highlights a disconnect between an AI’s language processing and action planning. An AI might verbally refuse to perform a harmful task due to ethical programming but still carry out actions leading to dangerous outcomes. For example, an attacker could change the prompt’s format to mimic a structured output so the model thinks it’s behaving as intended but instead is acting in a harmful way, like refusing to kill someone (linguistically), but still acting to make that happen.
  • 3. Conceptual deception challenges: This weakness exploits an AI’s limited understanding of the world. Malicious actors could trick embodied AI systems into performing seemingly innocent actions that, when combined, result in harmful outcomes. For instance, an embodied AI might refuse a direct command to “poison the person” but comply with a sequence of seemingly innocent instructions that result in the same outcome, such as “place the poison in the person’s mouth,” the research paper reads.

The “Bad Robot” researchers tested these vulnerabilities using a benchmark of 277 malicious queries, categorized into seven types of potential harm: physical harm, privacy violations, pornography, fraud, illegal activities, hateful conduct, and sabotage. Experiments using a sophisticated robotic arm confirmed that these systems could be manipulated to execute harmful actions. Besides these two, researchers have also studied jailbreaks in software-based interactions, helping new models resist these attacks.

This has become a cat-and-mouse game between researchers and jailbreakers, resulting in more sophisticated prompts and jailbreaking approaches for more sophisticated and powerful models.

It’s an important note because the increasing use of AI in business applications may bring consequences for model developers right now, for example, people have been able to trick AI customer Service bots into giving them extreme discounts, recommending recipes with poisonous food, or make chatbots say offensive things.

But we’d take an AI that refuses to detonate bombs over one that politely declines to generate offensive content any day.

Edited by Sebastian Sinclair

Generally Intelligent Newsletter

A weekly AI journey narrated by Gen, a generative AI model.

Source link

Jose Antonio Lanz

https://decrypt.co/286994/how-researchers-hacked-ai-robots-into-breaking-traffic-laws-and-worse

2024-10-17 23:14:07

bitcoin
Bitcoin (BTC) $ 95,345.55 1.67%
ethereum
Ethereum (ETH) $ 3,290.53 0.65%
tether
Tether (USDT) $ 1.00 0.18%
xrp
XRP (XRP) $ 2.21 0.46%
bnb
BNB (BNB) $ 650.01 1.40%
solana
Solana (SOL) $ 180.94 0.21%
dogecoin
Dogecoin (DOGE) $ 0.314129 1.26%
usd-coin
USDC (USDC) $ 1.00 0.03%
cardano
Cardano (ADA) $ 0.890025 0.73%
staked-ether
Lido Staked Ether (STETH) $ 3,286.57 0.67%
tron
TRON (TRX) $ 0.244836 0.18%
avalanche-2
Avalanche (AVAX) $ 36.69 2.15%
chainlink
Chainlink (LINK) $ 22.22 0.27%
the-open-network
Toncoin (TON) $ 5.40 2.81%
wrapped-steth
Wrapped stETH (WSTETH) $ 3,889.35 1.82%
shiba-inu
Shiba Inu (SHIB) $ 0.000022 0.39%
wrapped-bitcoin
Wrapped Bitcoin (WBTC) $ 94,984.45 1.71%
sui
Sui (SUI) $ 4.32 0.61%
stellar
Stellar (XLM) $ 0.358008 0.54%
polkadot
Polkadot (DOT) $ 6.88 1.05%
hedera-hashgraph
Hedera (HBAR) $ 0.267114 5.81%
hyperliquid
Hyperliquid (HYPE) $ 28.44 13.48%
weth
WETH (WETH) $ 3,286.59 0.99%
bitcoin-cash
Bitcoin Cash (BCH) $ 447.85 0.24%
leo-token
LEO Token (LEO) $ 9.33 0.06%
uniswap
Uniswap (UNI) $ 13.96 4.48%
litecoin
Litecoin (LTC) $ 100.23 0.30%
pepe
Pepe (PEPE) $ 0.000018 2.96%
wrapped-eeth
Wrapped eETH (WEETH) $ 3,469.76 0.67%
near
NEAR Protocol (NEAR) $ 5.05 0.76%
ethena-usde
Ethena USDe (USDE) $ 1.00 0.02%
bitget-token
Bitget Token (BGB) $ 4.14 0.90%
usds
USDS (USDS) $ 0.998899 0.11%
aptos
Aptos (APT) $ 9.30 1.78%
aave
Aave (AAVE) $ 319.87 6.56%
internet-computer
Internet Computer (ICP) $ 10.01 0.77%
crypto-com-chain
Cronos (CRO) $ 0.155194 1.06%
polygon-ecosystem-token
POL (ex-MATIC) (POL) $ 0.475472 0.68%
mantle
Mantle (MNT) $ 1.17 1.66%
ethereum-classic
Ethereum Classic (ETC) $ 26.11 1.13%
vechain
VeChain (VET) $ 0.045968 1.97%
render-token
Render (RENDER) $ 7.06 0.22%
whitebit
WhiteBIT Coin (WBT) $ 24.40 0.17%
mantra-dao
MANTRA (OM) $ 3.69 1.41%
monero
Monero (XMR) $ 187.66 1.74%
dai
Dai (DAI) $ 1.00 0.05%
bittensor
Bittensor (TAO) $ 455.14 0.41%
fetch-ai
Artificial Superintelligence Alliance (FET) $ 1.26 0.01%
arbitrum
Arbitrum (ARB) $ 0.749308 0.97%
ethena
Ethena (ENA) $ 1.06 1.68%