The Biden-Harris Administration secured another series of commitments from eight influential AI developers at the White House on Tuesday, building on pledges for responsible development of the technology obtained by the government in July from seven major companies, including Microsoft (NASDAQ: MSFT) and Amazon (NASDAQ: AMZN).
The administration says these commitments are a significant bridge to government action and represent an essential step toward the safe and secure establishment of artificial intelligence infrastructure in the United States.
The companies have committed to verifying the safety of their products before introducing them to the public, including through internal and external security testing, and to sharing information with governments and other stakeholders to help mitigate risks.
They have also pledged to develop systems that prioritize safety above all, which will include significant investments in cybersecurity; deploying AI systems that will assist humanity with its foremost challenges, such as cancer prevention and climate change; and publicly reporting the capabilities and flaws of those systems for collective benefit.
These commitments were made by Adobe Inc. (NASDAQ: ADBE), the Toronto-based AI startup Cohere, NVIDIA (NASDAQ: NVDA), Stability AI, IBM (NYSE: IBM), Palantir Technologies (NYSE: PLTR) and the San Francisco software companies Scale AI and Salesforce Inc (NYSE: CRM).
“Today, accelerated computing and generative AI are showing the potential to transform industries, address global challenges and profoundly benefit society,” said NVIDIA’s chief scientist Bill Dally in a statement on Tuesday.
Cybersecurity is a key concern for AI technologies
Following the new commitments, the House Committee on Oversight and Accountability held a hearing with top government officials on Thursday, titled “How Are Federal Agencies Harnessing Artificial Intelligence?”, led by the committee’s Cybersecurity, Information Technology, and Government Innovation subcommittee.
The key takeaways from the hearing were that AI can help federal agencies achieve their goals more efficiently when utilized properly, and that the government must govern the technology with adequate oversight and accountability.
“AI-based technologies can help take advantage of opportunities and improve our capabilities and effectiveness. However, they also pose significant challenges and risks that require careful oversight, management, governance and accountability,” said Craig Martell, Chief Digital and Artificial Intelligence Officer at the Department of Defense.
Arati Prabhakar, Director of the White House Office of Science and Technology Policy, emphasized the importance of the work federal agencies are doing to utilize AI in an impactful manner.
“Our work is only becoming more important as AI’s capabilities advance, especially as AI is increasingly integrated into society,” said Prabhakar.
U.S. implements anti-discrimination and facial recognition AI policies
The U.S. Department of Homeland Security (DHS) also implemented new policies for the responsible use of AI on Thursday, which include an anti-discrimination measure and new rules for facial recognition technology.
“DHS will not collect, use, or disseminate data used in AI activities, or establish AI-enabled systems that make or support decisions, based on the inappropriate consideration of race, ethnicity, gender, national origin, religion, sexual orientation, gender identity, age, nationality, medical condition, or disability,” reads the new policy developed by the department’s Artificial Intelligence Task Force.
The department says it utilizes AI technology to combat fentanyl trafficking and the sexual exploitation of children, among other things.
This week, the Homeland Security Advisory Council provided recommendations on how @DHSgov can leverage AI, modernize to meet future workplace needs, & modify some of our key security grant programs to ensure resources get to those who need them. (1/3)
— Secretary Alejandro Mayorkas (@SecMayorkas) September 15, 2023
The department says the facial recognition policy will give U.S. citizens the right to opt out of the identification method for purposes unrelated to law enforcement, and will prohibit the use of the technology as the sole basis for any law or civil enforcement-related action.
“This directive dictates that all uses of face recognition and face capture technologies will be thoroughly tested to ensure there is no unintended bias or disparate impact in accordance with national standards,” reads the policy.
The Biden-Harris Administration is currently in the process of developing an Executive Order on AI.