#1 Preliminary Thoughts on Regulating AI



I prepared this inaugural blog post by doing some internet searches on “regulating artificial intelligence” and came up with the materials and associated links below. What follows is a summary of some of the most interesting, provocative, and unsettling insights I found, which I think offers a useful overview of what’s currently going on in the regulatory space of AI.

Defining AI and Regulation

For starters, if artificial intelligence is going to be the subject of regulation, we obviously need definitions for both terms, and that turns out to be controversial. The problem in defining AI is that its various applications do so many different things that trying to capture all of their functions in a single definition seems impossible, not to mention that any such definition risks becoming inadequate as soon as a new AI capability appears. For example, older definitions of AI, such as the English Oxford Living Dictionary’s (“The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages”) and those collected in “The Key Definitions Of Artificial Intelligence (AI) That Explain Its Importance” on Forbes.com, make no mention of the newer, generative machine learning applications that create and sometimes “hallucinate” content.

Consequently, some regulatory experts have recommended a general characterization of AI functionalities rather than a definition, and have proposed using regulation to control and shape those functionalities according to their risk levels. But defining regulation runs into its own problems, such as how it resembles and how it differs from legislation. A characterization of regulation that I like is that a regulation provides a practice standard or a rule that realizes the intent of a law. Nir Kosti and his colleagues do a nice analytical job of differentiating regulation from legislation, so I’ll let you study their work. (Legislation and regulation: three analytical distinctions)

The point I want to make is that however we associate regulation with legislation, regulatory requirements are mandatory practices, not guidelines or suggestions. And that’s what makes them interesting and controversial. As requirements that issue from legislation or from a regulatory agency, they ultimately embody a particular political administration’s ideological values, especially on what kinds of risks are acceptable and which ones are not, how risks should be weighed against benefits, how regulatory authority should be distributed, what role the public should play in determining the course of AI applications and regulations, how much and what kinds of penalties to impose, and so on. Consequently, given the numerous stakeholders and the extraordinary political divisiveness in the U.S., reaching some kind of consensus on these issues, which obviously have enormous socioeconomic implications, will be difficult.

Three Major Areas of Concern

Thus far, three major areas of concern are frequently targeted for AI regulation. The first is users’ privacy, which especially includes personal control of one’s data, how and whether that data will be shared or sold, and the need for strong cybersecurity protections. These regulatory domains are thought to be crucial for building trust in AI among the persons most impacted by it. The second area is safety and individual rights. Worries about deepfakes, misinformation, and algorithmic unfairness toward minority groups saturate the internet, while colleges and universities have spent the last year stipulating permissible and impermissible uses for students who want to work with generative AI. A third area, which might be the most difficult to regulate, is AI’s impact on socioeconomic welfare, including the labor supply, job displacement and retraining, banking and financial institutions, liability for AI-caused harm, and monopolistic practices. (Should AI be Regulated? The Arguments for and Against)

Europe Versus U.S. on AI Regulation

Europe has been much more aggressive in regulating AI than the U.S., and its very recent EU AI Act will serve as a template, or at least a serious regulatory example, that U.S. stakeholders will study. The Act, which will go into effect later in 2024 and with which EU members and companies doing business in the EU will have two years to comply, will regulate on the basis of risk classifications for specific use cases. In other words, it will be context-driven rather than targeted at a particular technology, although technologies that infer sensitive attributes like race, sexual orientation, or workplace emotions are banned, along with quite a few others, including certain uses of facial recognition. (High-level summary of the AI Act and EU AI Act: first regulation on artificial intelligence) Businesses that violate the Act can be fined up to 35 million euros or 7 percent of their global turnover, whichever is higher, and can be prohibited from operating in the EU.

The Biden Administration’s Executive Order on AI

As mentioned above, though, the U.S. has been nowhere near as aggressive in laying down regulations for AI, and President Biden’s October 2023 Executive Order on Artificial Intelligence is a good example of that hesitancy. The Executive Order lays down eight sets of requirements for AI deliverables (Highlights of the 2023 Executive Order on Artificial Intelligence for Congress), but they are nothing like the EU’s General Data Protection Regulation or the EU AI Act. Indeed, President Biden’s Executive Order is not explicitly directed at the commercial sector; rather, it tasks over 50 federal entities, such as the Department of Homeland Security, the Office of Management and Budget, and the Department of Defense, to do things like “develop guidelines,” “evaluate and assess potential risks,” “issue best practices,” “report on results,” “develop a framework,” “issue guidance,” “conduct security reviews,” “solicit public input,” “issue a public report,” and so on.

The Executive Order, although it carries the force of law, does not appear to impose mandates on businesses the way the EU’s General Data Protection Regulation or the EU AI Act does. Rather, its requirements seek to develop a host of informational resources and guidelines that might someday evolve into legislation or regulation. And that’s the point: The document is a first step toward determining an ethical vision of “reasonable risk” when it comes to AI, but it stops well short of imposing mandates on the commercial AI sector.

U.S. Congressional Attitude towards AI

Nevertheless, one article I read claimed that “there is an active appetite in Congress to oversee and potentially regulate AI.” (A Comparative Perspective on AI Regulation) This may have been spurred by the hype surrounding ChatGPT and its generative machine learning cousins, which has prompted people like Sam Altman, the CEO of OpenAI, the company behind ChatGPT, to urge Congress to pass AI regulation. (OpenAI CEO Sam Altman Asks Congress to Regulate AI) There also seems to be a strong recommendation in the literature that the commercial AI research and development sector start seriously developing risk mitigation strategies rather than wait for governmental regulation to appear, because by then companies may not be able to pivot quickly enough to avoid penalties or fines. But this also suggests that U.S. companies will likely exert considerable lobbying pressure on state and federal legislatures to enact laws that are business friendly, which might provoke considerable consumer-rights pushback.

For example, a very brief but pointed post by Pranshu Verma and Nitasha Tiku described how 13 people, including current and former employees at OpenAI, Anthropic, and Google’s DeepMind, wrote a letter that warned of grave risks from AI but pointed out the obvious: that the corporations in control of the software have “strong financial incentives” to limit oversight. (Current and former AI employees warn of the technology’s dangers) Toward that end, the letter’s 13 signatories called on AI companies not to force their employees, especially new hires, to withhold criticism of risk; to establish a procedure for employees to raise concerns; to support a culture of criticism; and to promise that leadership will not retaliate against employees who raise alarms after other processes have failed.

Risk Acceptability

In conclusion, note that the above mentions only a few of the many problem areas that regulating AI poses. The bottom line, however, is actually quite stark: Far from connoting the kind of irritating compliance exercise it is frequently made out to be, regulation is in fact a nation’s way of codifying its perceptions and values concerning the plethora of risks to which its citizens are exposed. Whether those regulations issue from the Food and Drug Administration, which has approved hundreds of AI applications so far, or the Federal Trade Commission, whose privacy regulations affect AI, the inevitable question is, “How much and what kinds of risks are acceptable?” And the answers to and elaborations of that question are inevitably moral, not scientific, ones. For that reason alone, this blog and its future posts should make for important reading.

Author

— by John Banja, PhD, professor at the Center for Ethics at Emory University and a member of the Regulatory Knowledge and Support Program of the Georgia CTSA, 8/2024

Comments

Continue the conversation! Please email us your comments to post on this blog. Enter the blog post # in your email Subject.



Contact

Karen A. Lindsley, DNP, RN, CDE, CCRC

404-727-1098

klindsl@emory.edu