Biden Administration Gets the Ball Rolling to Write AI Standards
December 21, 2023
The U.S. AI regulation process is in motion as the Biden administration has announced that it has started writing key standards and guidance to regulate the safe implementation of generative artificial intelligence, according to Reuters.
The Commerce Department’s National Institute of Standards and Technology (NIST) said it is seeking public input by early February to carry out testing vital to the safety of AI systems.
This effort was influenced by President Joe Biden’s executive order on AI, according to Commerce Secretary Gina Raimondo, who stated that it is aimed at developing “industry standards around AI safety, security, and trust that will enable America to continue leading the world in the responsible development and use of this rapidly evolving technology.”
The administration is developing guidelines for laying out the standards, assessing AI, and creating testing environments for evaluating AI systems. The agency has also requested “input from AI companies and the public on generative AI risk management and reducing risks of AI-generated misinformation.”
AI has been a hot topic in 2023 thanks to its ability to generate text, photos, and videos from user prompts. However, its use has raised concerns ranging from fears that it could overpower humans to warnings that it poses a threat to democracy.
Biden’s October executive order directed agencies to set standards for the testing mentioned above and highlighted the cybersecurity, chemical, biological, radiological, and nuclear risks linked to AI.
NIST is currently laying out a framework for testing, which includes “red-teaming,” or simulating the tactics of real-world cyber attackers to assess an organization’s security. The term traces back to U.S. Cold War simulations in which the adversary was known as the “red team.” For many years, cybersecurity experts have used external red-teaming to uncover threats.
According to the White House, thousands of participants came to the first-ever U.S. public assessment red-teaming event and tried to see if they “could make the systems produce undesirable outputs or otherwise fail, with the goal of better understanding the risks that these systems present.” They added that the mission “demonstrated how external red-teaming can be an effective tool to identify novel AI risks.”