
OpenAI’s New AI Shows ‘Steps Towards Biological Weapons Risks’, Ex-Staffer Warns Senate

OpenAI’s newest GPT-o1 AI model is the first to demonstrate capabilities that could aid experts in reproducing known—and new—biological threats, a former company insider told U.S. Senators this week.

“OpenAI’s new AI system is the first system to show steps towards biological weapons risk, as it is capable of helping experts in planning to reproduce a known biological threat,” William Saunders, a former member of technical staff at OpenAI, told the Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law.

This capability, he warned, carries the potential for “catastrophic harm” if artificial general intelligence (AGI) systems are developed without proper safeguards.

Experts also testified that artificial intelligence is evolving so quickly that a potentially treacherous benchmark known as Artificial General Intelligence looms on the near horizon. At the AGI level, AI systems can match human intelligence across a wide range of cognitive tasks and learn autonomously. If a publicly available system can understand biology and develop new weapons without proper oversight, the potential for malicious users to cause serious harm grows exponentially.

“AI companies are making rapid progress towards building AGI,” Saunders told the Senate Committee. “It is plausible that an AGI system could be built in as little as three years.”

Helen Toner, a former OpenAI board member who voted in favor of firing co-founder and CEO Sam Altman, also expects to see AGI sooner rather than later. “Even if the shortest estimates turn out to be wrong, the idea of human-level AI being developed in the next decade or two should be seen as a real possibility that necessitates significant preparatory action now,” she testified.

Saunders, who worked at OpenAI for three years, highlighted the company’s recent announcement of GPT-o1, an AI system that “passed significant milestones” in its capabilities. As reported by Decrypt, OpenAI itself said it decided to move away from the traditional numerical increase in GPT version numbers because the model exhibited new capabilities, making it not just an upgrade but a new type of model with a different set of skills.

Saunders is also concerned about the lack of adequate safety measures and oversight in AGI development. He pointed out that “No one knows how to ensure that AGI systems will be safe and controlled,” and criticized OpenAI for its approach to safe AI development, arguing that the company cares more about profitability than safety.

“While OpenAI has pioneered aspects of this testing, they have also repeatedly prioritized deployment over rigor,” he cautioned. “I believe there is a real risk they will miss important dangerous capabilities in future AI systems.”

The testimony also highlighted internal challenges at OpenAI, particularly those that came to light after Altman’s ouster. “The Superalignment team at OpenAI, tasked with developing approaches to control AGI, no longer exists. Its leaders and many key researchers resigned after struggling to get the resources they needed,” he said.

His testimony adds another brick to the wall of complaints and warnings that AI safety experts have leveled at OpenAI’s approach. Ilya Sutskever, who co-founded OpenAI and played a key role in firing Altman, resigned after the launch of GPT-4o and founded Safe Superintelligence Inc.

OpenAI co-founder John Schulman and its head of alignment, Jan Leike, left the company to join rival Anthropic, with Leike saying that under Altman’s leadership, safety “took a backseat to shiny products.”

Likewise, former OpenAI board members Toner and Tasha McCauley wrote an op-ed published by The Economist, arguing that Sam Altman was prioritizing profits over responsible AI development, hiding key developments from the board, and fostering a toxic environment in the company.

In his statement, Saunders called for urgent regulatory action, emphasizing the need for clear safety measures in AI development, not just from the companies but from independent entities. He also stressed the importance of whistleblower protections in the tech industry.

The former OpenAI staffer also highlighted the broader implications of AGI development, including the potential to entrench existing inequalities and facilitate manipulation and misinformation. Saunders warned that the “loss of control of autonomous AI systems” could potentially result in “human extinction.”

Edited by Josh Quittner and Andrew Hayward

Jose Antonio Lanz

https://decrypt.co/250568/opena-new-ai-steps-towards-biological-weapons-risks-warns-senate

2024-09-21 14:01:02
