Opinion | AI regulation can’t wait. Here’s where Congress should start.

(Daniel Hertzberg for The Washington Post)

The conversation about artificial intelligence often turns to panic over humanity’s demise, or at least its subjugation: Will robot overlords one day rule the world? But machine learning is more than a hypothetical menace. It presents plenty of immediate problems that deserve more attention, from the mass production of misinformation to discrimination to an ever-widening scope of surveillance. These harms, many of which have been with us for years, should be the focus of AI regulation today.

The good news is that Congress is on the case, holding hearings and drafting bills to grapple with these new systems, which capture and process data to perform tasks that have traditionally required human input. Bipartisan legislation led by Senate Majority Leader Charles E. Schumer (D-N.Y.) is under discussion.

The bad news is that none of these efforts has come close to comprehensive yet, and with the White House and federal agencies already taking action of their own, there is potential for conflict and confusion. Before the country can agree on a single set of clear rules for these fast-developing tools, regulators need to agree on some basic principles.

AI systems should be safe and effective

This sounds basic. Anyone designing these tools should conduct a thorough assessment of the harms they could cause, take measures to prevent those harms and then measure whether the safeguards work. Anticipating abuse or misuse may be the trickiest task of all. Already, con artists have used AI apps to fake people’s voices and convince victims to hand over cash. Deepfake videos of celebrities and political candidates can threaten reputations and even democracy.

In general, systems should have to demonstrate some baseline level of accuracy; in short, they should do what they claim to do. But exactly where that threshold sits should depend on the stakes of the tool: A false negative on an initial cancer screening, for example, is more harmful than a false positive that can be corrected after further evaluation.

Chatbots such as ChatGPT can make plenty of mistakes, fabricating “facts” and figures that sound serious but are nonsensical; at the extreme end, these fabrications have included false claims of sexual assault against a law professor and defamation of an author. Perhaps we are willing to put up with such shortcomings to some degree in a general-purpose assistant. But what about in a specialized system set up to give legal advice or diagnose illness?

Then there are risks that probably aren’t worth tolerating at all. Think of a gun that decides for itself when to fire.

A final way to evaluate an AI system’s performance is to weigh it against the alternative: the status quo, which usually involves a human doing the job. What benefits does the system provide, what problems could it cause, and how do the two net out? In other words, is it worth it?

AI systems should not discriminate

This principle is closely tied to ensuring safety and effectiveness. Impact assessments, for example, can help guard against bias if they measure a system’s effects by demographic group.

But to avoid bias, it will also be important to examine the data used to train these algorithms. Consider criminal justice databases, in which minorities face higher incarceration rates. Using those numbers, for example, to predict an offender’s likelihood of reoffending can reinforce racist policing and punishment.

The data should also be representative of the society in which the system will be deployed. Facial recognition models trained mostly on photographs of White men, for example, perform notoriously poorly at identifying Black women.

The information should also be reviewed in light of its historical context. Tech companies that have tended to promote men to senior positions, for example, should be aware that relying on that past record to gauge employability could disadvantage female applicants. With that knowledge, designers can adjust for the ways a system favors or disfavors members of protected classes.

AI systems must respect civil liberties

As always when personal data is involved, privacy is key. Broadly, what companies can and cannot do should depend on what consumers reasonably expect in a given context. It makes sense, for example, for Netflix to collect viewer preferences to fine-tune its recommendation algorithms. It would make much less sense for Netflix to harvest those viewers’ precise locations to build a product unrelated to streaming. And it would make even less sense for Netflix to sell its viewers’ data to a political consulting firm such as Cambridge Analytica.

Then there is the question of privacy in how these systems are used. The Chinese Communist Party has installed more than 500 million surveillance cameras across the country. It is impossible to hire enough people to monitor them all, so AI does the job. President Xi Jinping’s regime is now pushing to sell this technology to other governments around the world. Surely the United States would never let such privacy violations happen here. Yet the lack of oversight that allowed companies such as Clearview AI to scrape more than 30 billion images from social media sites and sell them to law enforcement agencies across the country suggests we are not as far away as we like to think.

AI systems should be transparent and explainable

People need to know when they are interacting with AI systems, period. That is not only so no one falls in love with their search engine, but also so that, if one of these tools causes harm, the person affected has a way to seek recourse. That’s why it’s important for AI systems to disclose both that they are AI and how they work.

That second part is the hard one. It’s one thing to provide access to validity and impact assessments, or to describe the sources of training data. It is quite another to explain in detail why an AI tool produced the output it did, because in many cases the people who build and operate these tools have no way to see inside the black boxes they control. The problem of “explainability” is front and center for researchers today, but that doesn’t help much in a world where these systems are already making all sorts of decisions about our lives.

How explainable a given tool needs to be should vary based on what it does for us, or to us. Think about a smart toaster: Do consumers really need to know how the device determines that a slice of bread is properly done? Then consider a lending algorithm. Someone whose ability to rent an apartment depends on the algorithm’s answer has a right to understand the reasons for a rejection. What about the person whose medical claim is denied by an insurance company without anyone ever looking at the patient’s file? And somewhere in between are the systems that social media sites rely on to feed posts and other content to their users.

Just as the harms from some AI systems may be too serious to allow their deployment without strict standards for safety and effectiveness, the inability of certain systems to explain themselves may mean that, for now, they should stay undeployed.

Applying the principles

AI is not one thing; it is a technology that enables new ways of doing many things. Applying a single set of requirements to all machine-learning models doesn’t make much sense. But to determine what those requirements should be in any given circumstance, the country needs a single set of goals.

Putting these principles into action won’t be easy; versions of them appear in the White House’s nonbinding Blueprint for an AI Bill of Rights as well as in guidance from the National Institute of Standards and Technology. One open question is who will be responsible for enforcing any new rules: A new federal agency? Existing agencies using their existing powers? Or perhaps a middle ground: a coordinating body that reviews agency standards, harmonizes authorities and fills in gaps.

There is the matter of applying today’s rules to AI systems, and the question of whether those systems demand new rules for new situations. What do we do about legal liability for algorithmically generated speech? What about copyright? None of this even touches the potential for massive job losses in some fields (many experts cite accounting as a ready example) and perhaps equally striking job creation in others (all of these models need humans to train and maintain them), a subject that will have to be addressed outside the scope of safety regulation. And finally, there’s the matter of ensuring that any AI rules don’t end up entrenching the market power of today’s tech titans, which can more easily absorb the costs of compliance and computing.

The strongest argument against strict AI regulation is that these technologies will be built whether or not the United States allows them; if this country sits out, it will be nations such as China building them, without the commitment to democratic values that ours can vouch for. Certainly, the United States is better off as a participant and an influencer than bowing out and sacrificing its ability to steer this powerful technology away from a dire direction. But that is exactly why these principles are an important place to start: Without them, there is no direction at all.

Post View | About the editorial board

Editorials represent the views of The Post as an institution, as determined through debate among members of the Editorial Board, which is based in the Opinions section and separate from the newsroom.

Editorial Board members and areas of focus: Opinion Editor David Shipley; Deputy Opinion Editor Karen Tumulty; Deputy Opinion Editor Stephen Stromberg (national politics and policy); Lee Hockstader (European affairs, based in Paris); David E. Hoffman (global public health); James Hohmann (domestic policy and electoral politics, including the White House, Congress and governors); Charles Lane (foreign affairs, national security, international economics); Heather Long (economics); Associate Editor Ruth Marcus; Mili Mitra (public policy solutions and audience development); Keith B. Richburg (foreign affairs); and Molly Roberts (technology and society).
