Image: Anthropic

Secret Claude AI System Prompts Revealed: What Can We Learn From Them?

Anthropic, an artificial intelligence company founded by former OpenAI employees, has publicly released the system prompts for its latest Claude AI models. The move gives users a rare glimpse into the inner workings of its large language models (LLMs) and makes Anthropic the only major AI company to have officially shared such instructions.

System prompts, typically considered proprietary information, are crucial in shaping an AI’s behavior and capabilities.

The release, published Monday and dated July 12, 2024, includes detailed instructions for Claude 3.5 Sonnet, Claude 3 Opus, and Claude 3 Haiku models. These prompts outline specific guidelines on model behavior, including prohibitions on facial recognition and link access, as well as directives for handling controversial topics in a way Anthropic believes to be objective.

This is not Anthropic’s first step towards transparency. In March, Amanda Askell, an Anthropic researcher who works on Claude’s fine-tuning, shared an earlier version of Claude 3’s system prompt on social media platform X (formerly Twitter). She also explained the reasoning behind such conditioning.

“Why do we use system prompts at all? First, they let us give the model ‘live’ information like the date. Second, they let us do a little bit of customizing after training and to tweak behaviors until the next finetune. This system prompt does both,” she said in the thread.

Anthropic’s decision diverges from the practices of other major AI firms like OpenAI, Meta, and xAI, which keep their system prompts confidential. Hackers and LLM jailbreakers have nonetheless managed to extract those instructions, revealing that ChatGPT runs on a roughly 1,700-word prompt and that Grok-2 is told it draws inspiration from JARVIS in Iron Man and the Hitchhiker’s Guide to the Galaxy.

Anthropic’s prompts are now accessible through its Claude apps and online platforms. The company says it intends to update and publish these prompts regularly, providing ongoing insight into how its AI instructions evolve.

How to be a better prompter

The disclosure of Anthropic’s system prompts not only helps users understand how chatbots work, it also sheds light on how LLMs “think” and how better input can steer that process. LLMs, at their core, are highly sophisticated text predictors, where each word influences the generation of subsequent content.

So a better-crafted prompt can help users get more out of a model’s capabilities and achieve more accurate, contextually appropriate, and targeted results from their interactions with AI models.

1. Contextual enrichment is key

Providing rich context is crucial for guiding AI models towards generating more precise and relevant responses. Anthropic’s prompts demonstrate the importance of detailed contextual information in shaping AI behavior.

Here is a key part of Anthropic’s system prompt:

“The assistant is Claude, created by Anthropic. The current date is {}. Claude’s knowledge base was last updated on April 2024. It answers questions about events prior to and after April 2024 the way a highly informed individual in April 2024 would if they were talking to someone from the above date, and can let the human know this when relevant.”

Notice how Anthropic spells out the frame Claude must reply from: the language, tone, and knowledge in its answers should mirror how people write in 2024, not how Romeo and Juliet spoke when Shakespeare was alive.
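To make the date placeholder concrete, here is a minimal sketch of how a caller might fill in the “{}” with today’s date and send the result as the system prompt through Anthropic’s Python SDK. The model name, max_tokens value, and date format are assumptions chosen for illustration, not values taken from Anthropic’s published prompt.

```python
from datetime import date

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

# Template echoing the excerpt quoted above, with a named slot for the live date.
SYSTEM_TEMPLATE = (
    "The assistant is Claude, created by Anthropic. "
    "The current date is {current_date}. "
    "Claude's knowledge base was last updated on April 2024."
)

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # illustrative model ID
    max_tokens=512,
    system=SYSTEM_TEMPLATE.format(current_date=date.today().strftime("%B %d, %Y")),
    messages=[{"role": "user", "content": "What year is it, and how do you know?"}],
)
print(response.content[0].text)
```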

Framing tasks within a clear context, including relevant background information, helps the model generate responses that are more aligned with specific user needs. This approach avoids generic or off-target answers. By providing a rich context, users enable the model to better understand the task requirements, leading to improved results.

For instance, users can ask a model to generate a horror story, and it will do it. However, providing detailed characteristics of that style can significantly enhance the output quality. Those who take things a step further, adding examples of the desired writing style or content type, can further refine the model’s understanding of an instruction and improve the generated results.
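As a rough illustration of that difference, the sketch below contrasts a bare request with one enriched by context and a short style sample. The genre notes and example sentence are invented placeholders, not guidance from Anthropic.

```python
# Bare request: the model has to guess genre, tone, and length.
bare_prompt = "Write a horror story."

# Context-enriched request: background, constraints, and a sample of the desired voice.
context = (
    "Genre: gothic horror set in a remote lighthouse.\n"
    "Tone: slow-building dread, no gore, unreliable narrator.\n"
    "Length: about 300 words."
)
style_example = (
    "Example of the desired voice: 'The lamp turned as it always had, "
    "but tonight its beam found something waiting on the rocks.'"
)

rich_prompt = f"{context}\n\n{style_example}\n\nNow write the story."
```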

2. Break down complex queries

Anthropic’s prompts also reveal the importance of approaching complex tasks systematically, breaking them down into manageable components instead of trying to solve everything at once, as users tend to do when interacting with their favorite chatbots.

Here are some quotes from Anthropic that tackle this issue:

“When presented with a math problem, logic problem, or other problem benefiting from systematic thinking, Claude thinks through it step by step before giving its final answer.”

“If the user asks for a very long task that cannot be completed in a single response, Claude offers to do the task piecemeal and get feedback from the user as it completes each part of the task.”

For multifaceted tasks, instructing the model to approach the problem step-by-step can lead to more focused and accurate responses. This segmentation allows for refinement based on feedback at each stage of the task.

The ideal approach, however, is to use a multi-shot technique like Chain of Thought or Skeleton of Thought, guiding the LLM through a series of interconnected tasks. This method reduces the probability of hallucinations by conditioning the thought process between tasks.

A multi-shot technique involves users interacting with the model with different prompts, guiding the process toward a satisfactory final output.
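Here is a minimal sketch of that multi-shot pattern, assuming the Anthropic Python SDK: the task is split into subtasks, and each reply is appended to the conversation history before the next request. The subtasks and model name are illustrative, not prescribed by Anthropic.

```python
import anthropic

client = anthropic.Anthropic()

# Illustrative breakdown of a larger task into sequential prompts.
subtasks = [
    "Outline a three-part article about prompt engineering. Outline only.",
    "Write part 1, following the outline you just produced.",
    "Write part 2, keeping it consistent with part 1.",
]

history = []
for step in subtasks:
    history.append({"role": "user", "content": step})
    reply = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model ID
        max_tokens=1024,
        messages=history,
    )
    # Feed the model's answer back in so the next step builds on it.
    history.append({"role": "assistant", "content": reply.content[0].text})
    print(f"--- {step}\n{reply.content[0].text}\n")
```

In a real workflow the user would review each part before sending the next prompt; the loop above only shows the mechanics of carrying the conversation forward.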

However, users without the time or patience for many back-and-forth interactions can do the next best thing and prompt the model to articulate its reasoning before giving a final answer. This tends to improve output quality, since the model’s own written-out reasoning shapes its final response.

While not as effective as direct user guidance, this approach serves as a good compromise for enhancing response quality.
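For that single-prompt compromise, the request can simply ask the model to show its reasoning first and mark the final answer clearly. The wording below is one possible phrasing, not Anthropic’s.

```python
# Reason-first prompt: the model is asked to work through the problem
# before committing to an answer on a clearly marked final line.
prompt = (
    "A train leaves at 9:40 and the trip takes 2 hours and 35 minutes. "
    "Think through the problem step by step first, then give the arrival time "
    "on its own line prefixed with 'Answer:'."
)
```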

3. Use direct and purposeful language

Anthropic’s prompts are a great example of how important it is to use clear, unambiguous language in AI interactions.

Some quotes from Anthropic encouraging and using clear language:

“Claude ends its response by reminding the user that although it tries to be accurate, it may hallucinate in response to questions like this. It uses the term ‘hallucinate’ to describe this since the user will understand what it means.”

“Claude responds directly to all human messages without unnecessary affirmations or filler phrases like ‘Certainly!’, ‘Of course!’, ‘Absolutely!’, ‘Great!’, ‘Sure!’, etc.”

Using direct, unambiguous language helps avoid misinterpretation and ensures that the model’s responses are straightforward and purposeful. This approach eliminates unnecessary complexity or ambiguity in the AI’s output.

The prompts address potential ambiguities by providing clear guidelines on how the model should handle specific situations. Specifying the desired tone and style ensures that the response matches the intended communication style.

And just as Stable Diffusion relies on negative prompts to keep unwanted elements out of its images, LLMs can also work better when users tell the model what NOT to do and what to avoid.

By instructing the model to maintain a specific tone and avoid unnecessary phrases, users can enhance the clarity and professionalism of the AI’s responses. This directive guides the model to focus on delivering substantive content without superfluous language.

It can also help the model reason better if a negative instruction prevents it from taking a specific reasoning path.
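Putting those ideas together, a system prompt can pair tone constraints with explicit negative instructions. The sketch below is loosely modeled on the excerpts quoted above; the scenario and exact wording are assumptions for illustration.

```python
# Hypothetical system prompt combining a direct tone with "do not" rules.
system_prompt = (
    "You are a support assistant for an internal engineering wiki.\n"
    "Respond directly, without filler phrases such as 'Certainly!' or 'Sure!'.\n"
    "Do not speculate about features that are absent from the provided context; "
    "say you don't know instead.\n"
    "Do not use marketing language or emojis."
)
```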

Bonus: Separate your instructions with tags

You may have noticed that Claude’s prompts use XML tags, with each section opened by a tag and closed by its matching end tag. This may feel odd at first, but the practice makes longer prompts easier to follow and helps the chatbot tell which part of the prompt it is processing.

Image: Anthropic

XML (eXtensible Markup Language) tags provide a clear, hierarchical structure to the content, allowing for more precise control over how the AI interprets and utilizes different sections of the prompt. Anthropic uses XML tags to establish distinct “modules” within the prompt, each serving a particular purpose in guiding Claude’s behavior and responses.

Wrapping instructions inside tags helps the model treat a block of text as a unit and understand what it is for. For example, you can mark off the task, the background material, and a sample output with their own tags, as in the sketch below.
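Here is a small sketch of what such a tagged prompt could look like. The tag names (<instructions>, <context>, <example>) and the {report_text} placeholder are arbitrary choices for illustration, not tags Anthropic requires.

```python
# A prompt whose sections are delimited with XML-style tags so the model can
# distinguish the task, the source material, and an example of the output.
prompt_template = """
<instructions>
Summarize the report below in three bullet points for a non-technical reader.
</instructions>

<context>
{report_text}
</context>

<example>
- Revenue grew 12% quarter over quarter, driven by the new subscription tier.
</example>
""".strip()

# Usage: fill the placeholder with the actual document text.
prompt = prompt_template.format(report_text="...full report text here...")
```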

Source: Jose Antonio Lanz, Decrypt, August 28, 2024
https://decrypt.co/246695/claude-ai-system-prompts-anthropic-tips
