The comprehensive set of guidelines for artificial intelligence that the White House recently unveiled in an executive order show that the U.S. government is attempting to address the risks posed by artificial intelligence, or AI.
Anjana Susarla, Omura-Saxena Professor of Responsible AI in Michigan State University’s Broad College of Business, discusses why the executive order represents an important step in building responsible and trustworthy AI – and what it leaves unresolved.
Answers are excerpts from an article originally published in The Conversation.
Why is the White House’s executive order on AI important?
Researchers of AI ethics have long cautioned that stronger auditing of AI systems is needed to avoid giving the appearance of scrutiny without genuine accountability. As it stands, a recent study looking at public disclosures from companies found that claims of AI ethics practices outpace actual AI ethics initiatives. The executive order could help by specifying avenues for enforcing accountability.
How will this executive order enforce accountability for companies that build large AI systems?
Another important initiative outlined in the executive order is probing for vulnerabilities of very large-scale general purpose AI models trained on massive amounts of data, such as the models that power OpenAI’s ChatGPT or DALL-E. The order requires companies that build large AI systems with the potential to affect national security, public health or the economy to perform red teaming and report the results to the government. Red teaming is using manual or automated methods to attempt to force an AI model to produce a harmful output, such as making offensive or dangerous statements like explaining how to sell drugs.
Reporting to the government is important given that a recent study found most of the companies that make these large-scale AI systems are lacking when it comes to transparency.
How will this executive order address misinformation, disinformation and risks of harm caused by AI?
Similarly, the public is at risk of being fooled by AI-generated content. To address this, the executive order directs the U.S. Department of Commerce to develop guidance for labeling AI-generated content. Federal agencies will be required to use AI watermarking — technology that marks content as AI-generated to reduce fraud and misinformation — though it’s not required for the private sector.
The executive order also recognizes that AI systems can pose unacceptable risks of harm to civil and human rights and the well-being of individuals, noting “Artificial intelligence systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination and exacerbated online and physical harms.”
What is the White House executive order on AI lacking?
A key challenge for AI regulation is the absence of comprehensive federal data protection and privacy legislation. The executive order only calls on Congress to adopt privacy legislation, but it does not provide a legislative framework. It remains to be seen how the courts will interpret the executive order’s directives in light of existing consumer privacy and data rights statutes.
Without strong data privacy laws in the U.S. as other countries have, the executive order could have minimal effect on getting AI companies to boost data privacy. In general, it’s difficult to measure the impact that decision-making AI systems have on data privacy and freedoms.
How much will transparency around algorithms improve our understanding of AI?
It’s worth noting that algorithmic transparency is not a one-size-fits-all solution. For example, the European Union’s General Data Protection Regulation legislation mandates “meaningful information about the logic involved” in automated decisions. This suggests a right to an explanation of the criteria that algorithms use in decision-making. The mandate treats the process of algorithmic decision-making as something akin to a recipe book, meaning it assumes that if people understand how algorithmic decision-making works, they can understand how the system affects them. But knowing how an AI system works doesn’t necessarily tell you why it made a particular decision.
With algorithmic decision-making becoming pervasive, the White House executive order and the international summit on AI safety highlight that lawmakers are beginning to understand the importance of AI regulation, even if comprehensive legislation is lacking.