Mugglehead Magazine
Alternative investment news based in Vancouver, B.C.


Google pulls Gemini chatbot down in response to outrage mob

A viral post showcased the AI image generator creating an image of the US Founding Fathers that inaccurately included a black man

Image via Gemini.

Gemini, Google’s (NASDAQ: GOOG) artificial intelligence service, may be cancelled before it gets a chance to strut its stuff if CEO Sundar Pichai isn’t careful.

In a Tuesday evening memo, Pichai addressed the company’s mistakes with artificial intelligence before promptly taking its Gemini image-generation program offline for further testing.

Gemini is Google’s answer to OpenAI’s ChatGPT. It can answer questions in text form and generate pictures in response to text prompts. The controversy began when a viral post showed the recently launched AI image generator creating an image of the US Founding Fathers that inaccurately included a black man.

Gemini also generated images of German soldiers from World War Two, incorrectly featuring a black man and an Asian woman.

Predictably, social media responded with hostility.

The company introduced the image generator earlier this month through Gemini. Over the past week, users uncovered historical inaccuracies that went viral online, prompting the company to pull the feature and announce plans to relaunch it in the coming weeks.

Pichai’s memo indicated that the teams have been working around the clock to address the issues and announced that the company will implement a clear set of actions and structural changes.

The news emerged following Google’s decision earlier this month to change the name of its chatbot from Bard to Gemini.

“I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong,” Pichai said.

“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us.”

Read more: The Government of Canada invests $17.2M in quantum computing startups

Read more: Ericsson Canada partners with two Canadian universities to open a Quantum research hub

Google overcorrects out of caution and gets into more trouble

The tech giant created another problem while trying to resolve the bias: it overcorrected in the name of an abundance of caution.

“So what went wrong? In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive,” Pichai said.

One of the specific overcorrections had Gemini responding that there was “no right or wrong answer” to the question of whether Elon Musk posting memes on X was worse than Hitler killing millions of people.

Google has actively attempted to counteract human bias by instructing Gemini not to make certain assumptions. However, this effort has backfired because human history and culture are not straightforward. Both contain nuances that humans instinctively understand but machines do not.



Follow Joseph Morton on Twitter

joseph@mugglehead.com

