AI Models ‘Secretly’ Learn Capabilities Long Before They Show Them, Researchers Find

Modern AI models possess hidden capabilities that emerge suddenly and consistently during training, but these abilities remain concealed until prompted in specific ways, according to new research from Harvard and the University of Michigan.

The study, which analyzed how AI systems learn concepts like color and size, revealed that models often master these skills far earlier than standard tests suggest—a finding with major implications for AI safety and development.

“Our results demonstrate that measuring an AI system’s capabilities is more complex than previously thought,” the research paper says. “A model might appear incompetent when given standard prompts while actually possessing sophisticated abilities that only emerge under specific conditions.”

The study joins a growing body of research aimed at demystifying how AI models develop capabilities.

Anthropic researchers applied “dictionary learning,” a technique that mapped millions of features inside their Claude language model to specific concepts the AI understands, Decrypt reported earlier this year.

While approaches differ, these studies share a common goal: bringing transparency to what has long been considered AI’s “black box” of learning.

“We found millions of features which appear to correspond to interpretable concepts ranging from concrete objects like people, countries, and famous buildings to abstract ideas like emotions, writing styles, and reasoning steps,” Anthropic said in its research paper.

The researchers conducted extensive experiments using diffusion models, the architecture behind most of today’s generative image tools. While tracking how these models learned to manipulate basic concepts, they discovered a consistent pattern: capabilities emerged in distinct phases, with a sharp transition point marking when the model acquired new abilities.

Models showed mastery of concepts up to 2,000 training steps earlier than standard testing could detect. Strong concepts emerged around 6,000 steps, while weaker ones appeared around 20,000 steps.

When researchers adjusted the “concept signal,” meaning the clarity with which ideas were presented in the training data, they found a direct correlation with learning speed.

Alternative prompting methods could also surface these hidden capabilities well before they appeared in standard tests.
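
To make the measurement concrete, here is a minimal sketch of the idea, not the authors’ code: scan saved checkpoints and record the first training step at which a concept-accuracy score crosses a threshold, once with a standard prompt and once with an alternative probe. The emergence_step helper and the toy accuracy curves are hypothetical stand-ins.

```python
"""Hypothetical sketch: locating a concept's 'hidden emergence' point.

Assumes you can score each saved checkpoint with two evaluators: one using
standard prompts and one using an alternative probe. Both curves below are
toy stand-ins, not measurements from the paper."""
from typing import Callable, Sequence


def emergence_step(
    checkpoint_steps: Sequence[int],
    accuracy_at: Callable[[int], float],
    threshold: float = 0.8,
) -> int | None:
    """Return the first training step whose concept accuracy crosses the threshold."""
    for step in checkpoint_steps:
        if accuracy_at(step) >= threshold:
            return step
    return None


# Toy curves: the probe "sees" the concept ~2,000 steps before standard prompting,
# mimicking the gap the study reports.
standard = lambda step: 0.95 if step >= 8_000 else 0.0
probe = lambda step: 0.95 if step >= 6_000 else 0.0

steps = range(0, 20_001, 1_000)
print("standard prompting:", emergence_step(steps, standard))  # 8000
print("latent probe:      ", emergence_step(steps, probe))     # 6000
```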

This phenomenon of “hidden emergence” has significant implications for AI safety and evaluation. Traditional benchmarks may dramatically underestimate what models can actually do, potentially missing both beneficial and concerning capabilities.

Perhaps most intriguingly, the team discovered multiple ways to access these hidden capabilities. Using techniques they termed “linear latent intervention” and “overprompting,” researchers could reliably extract sophisticated behaviors from models long before these abilities appeared in standard tests.
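
As an illustration of what a linear latent intervention might look like in practice, here is a hedged numpy sketch rather than the authors’ implementation: estimate a concept direction from representations with and without the concept, then push a fresh latent along that direction before decoding. All names and values are stand-ins.

```python
"""Hypothetical sketch of a linear latent intervention (not the authors' code).

Idea: estimate a 'concept direction' from latents with and without the concept,
then push a fresh latent along that direction before decoding. Everything here
is a numpy stand-in; in a real diffusion model the latents would come from the
model and the steered latent would be passed to its decoder."""
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Stand-in latents: 100 samples that carry the concept, 100 that do not.
with_concept = rng.normal(0.0, 1.0, (100, dim)) + 2.0 * np.eye(dim)[0]
without_concept = rng.normal(0.0, 1.0, (100, dim))

# Concept direction = difference of class means, normalized.
direction = with_concept.mean(axis=0) - without_concept.mean(axis=0)
direction /= np.linalg.norm(direction)


def intervene(latent: np.ndarray, strength: float = 3.0) -> np.ndarray:
    """Push a latent along the concept direction before decoding."""
    return latent + strength * direction


z = rng.normal(0.0, 1.0, dim)
z_steered = intervene(z)
print("projection before:", round(float(z @ direction), 3))
print("projection after: ", round(float(z_steered @ direction), 3))
```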

In another case, researchers found that AI models learned to manipulate complex features like gender presentation and facial expressions before they could reliably demonstrate these abilities through standard prompts.

For example, models could accurately generate “smiling women” or “men with hats” individually before they could combine these features—yet detailed analysis showed they had mastered the combination much earlier. They simply couldn’t express it through conventional prompting.
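
One way to picture this gap, offered purely as a hedged illustration rather than the authors’ method: run independent linear probes for each attribute against the same internal representation and check whether both fire even when the combined prompt fails. The sketch below fakes the representation with numpy.

```python
"""Hypothetical sketch: two attributes jointly encoded in one representation.

The directions and latent are random numpy stand-ins; a real test would train
linear probes on the model's internal activations."""
import numpy as np

rng = np.random.default_rng(1)
dim = 32

# Stand-in attribute directions ("smiling", "hat") and a latent carrying both.
smile_dir = rng.normal(size=dim)
smile_dir /= np.linalg.norm(smile_dir)
hat_dir = rng.normal(size=dim)
hat_dir /= np.linalg.norm(hat_dir)
latent = 2.0 * smile_dir + 2.0 * hat_dir + 0.1 * rng.normal(size=dim)


def has_attribute(z: np.ndarray, direction: np.ndarray, threshold: float = 1.0) -> bool:
    """Crude linear probe: does the latent project strongly onto the direction?"""
    return float(z @ direction) > threshold


# Both probes fire on the same latent, even if a combined prompt fails to render,
# which is the kind of gap the researchers describe.
print("smiling:", has_attribute(latent, smile_dir))
print("hat:    ", has_attribute(latent, hat_dir))
```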

The sudden emergence of capabilities observed in this study might initially seem similar to grokking—where models abruptly demonstrate perfect test performance after extended training—but there are key differences.

While grokking occurs after a training plateau and involves the gradual refinement of representations on the same data distribution, this research shows capabilities emerging during active learning and involving out-of-distribution generalization.

The authors found sharp transitions in the model’s ability to manipulate concepts in novel ways, suggesting discrete phase changes rather than the gradual representation improvements seen in grokking.

In other words, it seems AI models internalize concepts much earlier than standard tests suggest; they simply cannot show those skills on demand, much like someone who understands a movie in a foreign language but still struggles to speak it.

For the AI industry, this is a double-edged sword. Hidden capabilities suggest models may be more powerful than previously thought, but they also underscore how difficult it is to fully understand and control what models can do.

Companies developing large language models and image generators may need to revise their testing protocols.

Traditional benchmarks, while still valuable, may need to be supplemented with more sophisticated evaluation methods that can detect hidden capabilities.

Edited by Sebastian Sinclair


Jose Antonio Lanz

https://decrypt.co/292892/ai-models-secretly-learn-capabilities-long-before-they-show-them-researchers-find

2024-11-24 15:01:02
