Monday, Oct 14, 2024
Mugglehead Magazine
Alternative investment news based in Vancouver, B.C.

AI and Autonomy

Biden gets voluntary pledges from 7 big tech companies about artificial intelligence

Safety, security and trust considered big three principles of the future of AI

Image via The White House.

President Biden has obtained voluntary pledges from several prominent United States tech companies to ensure the safety of their artificial intelligence products before launching them into the market.

In an official statement released on Friday, Biden announced he’d met at the White House with seven leading artificial intelligence companies to discuss the technology: Amazon (NASDAQ: AMZN), Anthropic, Google (NASDAQ: GOOGL), Inflection AI, Meta (NASDAQ: META), Microsoft (NASDAQ: MSFT) and OpenAI.

One of the key promises made by these companies involves incorporating third-party oversight of the development and functionality of the next generation of artificial intelligence systems. However, no specific details have been provided so far about who the designated auditors will be or how the companies will be held accountable.

The companies have voluntarily agreed to follow three core principles for the future of AI: safety, security and trust. This is a crucial step toward developing responsible AI, and as the technology continues to advance rapidly, the Biden-Harris Administration says it will consistently remind these companies of their duties.

“We’ll see more technology change in the next 10 years, or even in the next few years, than we’ve seen in the last 50 years. That has been an astounding revelation to me, quite frankly. Artificial intelligence is going to transform the lives of people around the world,” said President Biden in a White House release.

Biden expanded on the companies’ three main responsibilities regarding artificial intelligence technology. First, they must ensure the safety of their systems by thoroughly testing and assessing potential risks before releasing them to the public.

Second, they need to prioritize the security of their models, protecting them from cyber threats and managing risks to national security, while sharing best practices and industry standards. Also, the companies must earn people’s trust by empowering users to make informed decisions.

This involves labelling content that’s altered or AI-generated, addressing bias and discrimination, strengthening privacy protections and safeguarding children from harm.

Read more: Cannabis and psychedelics law firm aims to help marginalized communities with AI

Read more: Scaling AI is the top priority for 78% of information executives: MIT Technology Review Insights

Testing protects against threats to biosecurity and cybersecurity

The testing will also assess the potential for societal harms, like bias and discrimination, and address theoretical concerns about advanced AI systems gaining control over physical systems or “self-replicating” by creating copies of themselves.

In addition to security testing, the companies have pledged to establish methods for reporting vulnerabilities in their systems. They have also committed to using digital watermarking techniques to differentiate between real content and AI-generated content, particularly to combat deepfakes in images and audio.

“From supporting a pilot of the National AI Research Resource to advocating for the establishment of a national registry of high-risk AI systems, we believe that these measures will help advance transparency and accountability,” said Brad Smith, president of Microsoft, in a blog post.

“We have also committed to broad-scale implementation of the NIST AI Risk Management Framework, and adoption of cybersecurity practices that are attuned to unique AI risks.”

European Union lawmakers have been actively working on comprehensive AI regulations for the 27-nation bloc. These rules are expected to address applications considered to carry the highest risks and may impose restrictions accordingly.

On Friday, the White House announced that it has engaged in consultations with various countries regarding the voluntary commitments made by the companies.

While the pledge primarily centers on addressing safety risks associated with AI technology, it does not cover other concerns related to its impact on jobs and market competition.

Additionally, it does not specifically address the environmental resources required to build AI models, nor does it encompass copyright concerns pertaining to the use of human creations, such as writings, art and other handiwork, in teaching AI systems to generate human-like content.

Follow Joseph Morton on Twitter

joseph@mugglehead.com
