Michael Amelio
Biden Issues AI Executive Order Mandating Agency Development of Safety Guidelines

President Joe Biden has issued an executive order outlining regulations for generative AI, addressing the need for guidelines before any legislative actions take place.
The executive order sets out eight key objectives: establishing new safety and security standards for AI, safeguarding privacy, advancing equity and civil rights, protecting consumers, patients, and students, supporting workers, fostering innovation and competition, strengthening U.S. leadership in AI, and ensuring responsible and effective government use of the technology.
Multiple government agencies have been assigned specific tasks as part of this initiative. Their responsibilities include developing standards to safeguard against the malicious use of AI in engineering biological materials, establishing best practices for content authentication, and creating robust cybersecurity programs.
The National Institute of Standards and Technology (NIST) is tasked with setting standards for "red teaming" AI models before they are publicly released. The Department of Energy and the Department of Homeland Security are directed to address AI threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
Furthermore, developers of large AI models, such as OpenAI's GPT and Meta's Llama 2, will be required to disclose safety test results. A senior Biden administration official clarified that these safety requirements primarily apply to future AI models; existing publicly available models will not be recalled and will remain subject to existing anti-discrimination rules.
To safeguard user privacy, the executive order calls on Congress to enact data privacy regulations. Additionally, it seeks federal support for the development of "privacy-preserving" techniques and technologies.
The order also addresses AI-related discrimination, including algorithmic bias, and promotes fairness in the use of AI for tasks such as sentencing, parole, and surveillance. Government agencies are instructed to provide guidance to landlords, federal benefits programs, and federal contractors to keep AI from exacerbating discrimination.
The executive order instructs agencies to examine the impact of AI on employment and produce a report on its effects on the labor market. It encourages greater worker participation in the AI ecosystem and sets forth plans to launch a National AI Research Resource, which will provide valuable information to students and AI researchers, along with technical assistance to small businesses. The order also calls for the swift recruitment of AI professionals into government roles.
Prior to the executive order, the Biden administration released the Blueprint for an AI Bill of Rights, outlining principles that AI developers should adhere to. It subsequently secured voluntary commitments from several major AI companies, including Meta, Google, OpenAI, Nvidia, and Adobe.
It's important to note that an executive order is not permanent legislation and can be rescinded by a future administration, which underscores the ongoing discussions among lawmakers about how to regulate AI. Some politicians have expressed a desire to enact AI-related legislation by the end of the year.
Industry experts view the executive order as a significant step toward establishing standards for generative AI. Navrina Singh, founder of Credo AI and a member of the National Artificial Intelligence Advisory Committee, sees the order as a strong signal of the U.S. government's commitment to addressing generative AI.
She believes it is a pragmatic move, given that crafting comprehensive policy takes time and legislative discussions are still ongoing, and she emphasized that AI is a top priority for the government.