Monday, Jan 19, 2026
Mugglehead Investment Magazine
Alternative investment news based in Vancouver, B.C.
Image credit: DeepSeek

AI and Autonomy

Google Research discovers that repeating LLM prompts twice gives superior answers

A simple technique to improve answers to questions that don’t require reasoning

Researchers at Google Research have identified a remarkably simple way to improve large language model performance on tasks that don't require reasoning: repeating the input prompt twice.

The paper, titled Prompt Repetition Improves Non-Reasoning LLMs, shows that duplicating a prompt, transforming it from "<QUERY>" into "<QUERY><QUERY>", can significantly boost the quality of results produced by chatbots like DeepSeek, Claude and ChatGPT.

Essentially, reading a prompt twice makes the models much more likely to answer correctly. It’s a simple yet efficacious means of getting improved responses. It is like giving the AI a second chance to review everything before coming up with an answer.
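The transformation itself is trivial to apply. A minimal sketch, assuming plain concatenation of the query with itself as described by the "<QUERY><QUERY>" pattern (the example question is hypothetical):

```python
def repeat_prompt(query: str) -> str:
    """Apply the paper's transformation: "<QUERY>" becomes "<QUERY><QUERY>".

    Direct concatenation is assumed here; the article does not say
    whether a separator between the two copies matters.
    """
    return query + query


# Hypothetical usage: double the prompt before sending it to a chatbot.
doubled = repeat_prompt("Which planet is largest: Mars, Venus, or Jupiter? ")
```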

On tasks like picking the right multiple-choice option, retrieving information or pulling a specific fact from a list, the researchers found that doubling the prompt can raise an LLM's accuracy by up to 75 per cent.

They tested the trick on common AI challenges such as solving puzzles, answering science questions and working through math problems. Across 70 different tests on non-reasoning tasks, repeating the prompt yielded major wins in 47 cases.

“Prompt repetition does not change the lengths or formats of the generated outputs, and it might be a good default for many models and tasks, when reasoning is not used,” the authors concluded.

The magic of this discovery is not that the trick is revolutionary or groundbreaking; it’s that something so shockingly simple can yield drastically improved results.

While many researchers continue to focus on developing state-of-the-art AI technologies, pasting your question into the chat box twice remains one of the most effective methods of squeezing noticeably improved answers out of contemporary LLMs.

“By repeating the entire prompt, the second copy provides a full forward view for every token in the first copy — and the first copy gives complete backward context to the second copy,” commented Barclays PLC (NYSE: BCS) (LON: BARC) software developer Saumya Aghera.

“This creates an effective bidirectional understanding of the prompt within the strict left-to-right processing constraint. Sometimes we just overthink it 🤷‍♀️”
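The intuition above can be illustrated with a toy model of causal attention (this sketch is not from the paper; the prompt length is hypothetical). Under strict left-to-right processing, a token can only attend to earlier positions, so if an n-token prompt is repeated, every token in the second copy can see the entire first copy:

```python
n = 4        # hypothetical prompt length in tokens
seq = 2 * n  # length after repeating the prompt twice


def visible(i: int, j: int) -> bool:
    """Causal attention: position i can attend to position j iff j <= i."""
    return j <= i


# Every token in the second copy (positions n..2n-1) can attend to
# every token of the first copy (positions 0..n-1), giving the model
# a full view of the prompt while encoding each repeated token.
full_view = all(visible(i, j) for i in range(n, seq) for j in range(n))
print(full_view)
```

In a single copy of the prompt, early tokens never see later ones; the repetition effectively restores that missing forward context without changing the model.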

Read more: Rupert Murdoch’s media conglomerate employs tech startup Symbolic.ai for journalism help

 


