AI researchers at Google Research have broken new ground by discovering a remarkably simple way to improve large language model performance on tasks that don't require reasoning: simply repeating the input prompt.
The paper, titled Prompt Repetition Improves Non-Reasoning LLMs, has shown that duplicating a prompt and transforming it from “<QUERY>” into “<QUERY><QUERY>” can significantly boost the quality of results produced by chatbots like DeepSeek, Claude and ChatGPT.
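The transformation described in the paper amounts to plain string duplication before the prompt is sent to the model. A minimal sketch (the function name and any API call around it are illustrative, not from the paper):

```python
def repeat_prompt(query: str) -> str:
    """Turn "<QUERY>" into "<QUERY><QUERY>" by simple concatenation."""
    return query + query

# The duplicated string would then be sent to the chatbot
# in place of the original prompt.
doubled = repeat_prompt("What is the capital of Australia? ")
print(doubled)
```

The paper reports the technique working across different providers, so no model-specific formatting is assumed here beyond concatenating the text twice.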
Essentially, reading the prompt twice makes the models much more likely to answer correctly. It's a simple yet effective way of getting better responses, like giving the AI a second chance to review everything before coming up with an answer.
On tasks like picking the right multiple-choice option or retrieving a specific fact from a list, the researchers behind the paper found that doubling the prompt can raise an LLM's accuracy by up to 75 per cent.
They tested the trick on common AI challenges such as puzzles, science questions and math problems. Across 70 different tests on non-reasoning tasks, repeating the prompt yielded significant gains in 47 cases.
“Prompt repetition does not change the lengths or formats of the generated outputs, and it might be a good default for many models and tasks, when reasoning is not used,” the authors concluded.
The magic of the discovery is not that the trick is revolutionary or groundbreaking; it's that something so shockingly simple can yield such markedly improved results.
While many researchers continue to focus on developing state-of-the-art AI technologies, pasting your question into the chat box twice remains one of the most effective ways of squeezing noticeably better answers out of contemporary LLMs.
“By repeating the entire prompt, the second copy provides a full forward view for every token in the first copy — and the first copy gives complete backward context to the second copy,” commented Barclays PLC (NYSE: BCS) (LON: BARC) software developer Saumya Aghera.
“This creates an effective bidirectional understanding of the prompt within the strict left-to-right processing constraint. Sometimes we just overthink it 🤷♀️”
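Aghera's point can be illustrated with a standard causal (lower-triangular) attention mask, which lets each token attend only to earlier positions. The sketch below is an assumption-laden toy, not code from the paper: with a prompt of n tokens repeated to length 2n, every token in the second copy can attend to the entire first copy.

```python
import numpy as np

n = 4        # tokens in one copy of the prompt (arbitrary toy size)
L = 2 * n    # length after repetition

# Standard causal mask: position i may attend to positions 0..i.
mask = np.tril(np.ones((L, L), dtype=bool))

# Rows n..2n-1 are the second copy of the prompt; columns 0..n-1 are
# the first copy. Every such entry is True, i.e. each second-copy
# token sees the whole first copy.
second_copy_sees_first = mask[n:, :n].all()
print(second_copy_sees_first)  # True
```

This is why repetition approximates bidirectional context inside a strictly left-to-right model: the first copy is fully visible by the time the second copy is processed.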
A dead-simple trick to improve LLM performance:
Just repeat your prompt twice.
No fancy prompting techniques, no chain-of-thought, just plain repetition.
Google researchers tested this across Gemini, GPT, Claude, and Deepseek, and the results were surprisingly good.
Here's… pic.twitter.com/GeQ17mgayX
— Akshay 🚀 (@akshay_pachaar) January 15, 2026
Follow Rowan Dunne on LinkedIn
rowan@mugglehead.com