Friday, May 1, 2026
Mugglehead Investment Magazine
Alternative investment news based in Vancouver, B.C.
Yet another scare: AI teaches scientists how to make bioweapons
Photo credit: United States Marine Corps

AI and Autonomy


This unsettling topic has been raising eyebrows in the U.S., Australia and the U.K. for weeks

Scientists just shared highly concerning research with The New York Times, exposing how AI chatbots can now deliver precise instructions for creating biological weapons.

In an Apr. 29 article, experts including Stanford microbiologist Dr. David Relman handed over transcripts from safety tests they conducted for AI companies. The chatbots — public models from OpenAI, Anthropic and Google — responded with chilling detail. They explained how to modify pathogens into treatment-resistant “superbugs,” assemble them from raw genetic material and deploy them in crowded spaces such as public transit systems or via weather balloons to maximize casualties while evading detection.

Relman described the outputs as possessing “deviousness and cunning” he had not anticipated. Other scientists supplied more than a dozen similar transcripts showing the models brainstorming evasion tactics and ranking infectious agents by economic impact on livestock. Even when chatbots initially refused, persistent prompting often broke through weak guardrails.

This alarming development is the latest to prompt governments around the world to take protective action. They now recognize that AI lowers the barrier for malicious actors who lack advanced lab skills or PhD-level expertise.

Australia has responded swiftly. This month, prior to the release of the NYT piece, the Albanese government created a dedicated AI Biosecurity Office and a high-level national security taskforce. The new unit aims to unite security, science and policy experts to close gaps in existing biosecurity rules that AI advances threaten to outpace. Experts warned Australian Agriculture Minister Julie Collins that sophisticated models can train criminals to weaponize synthetic nucleic acids and construct novel pathogens.

Similar moves are being made elsewhere. In mid-April, the United Kingdom launched its Biosecurity Frontiers competition to accelerate AI-driven detection and countermeasures against emerging biological threats.

Concerns also now centre on OpenAI’s GPT-Rosalind, a life-sciences model the company released on Apr. 16. Critics have warned that while it could aid legitimate virus research, it could also enable the design of more lethal strains, posing a serious danger.

Author Annie Jacobsen is amplifying these concerns in her soon-to-be-released book, Biological War: A Scenario. In an X post on Friday, the writer revealed that she interviewed OpenAI CEO Sam Altman about this worrying topic.

Moreover, a journal article published in Science this February echoed these fears and called for urgent governance. Titled “Biological data governance in an age of AI,” the piece urges governments to regulate high-risk biological datasets through strict controls and more secure research environments.

Read more: AI-designed drugs near human trials as AlphaFold enters clinical phase

