At the recent World Economic Forum in Davos, Switzerland, tech leaders called in chorus for governments to set their regulatory sights on artificial intelligence.
Google CEO Sundar Pichai was straightforward in an op-ed ahead of the forum: The "potential negative consequences of AI" make the need for its regulation clear. Microsoft President Brad Smith warned against waiting for the technology to mature before instituting policies to regulate its use.
IBM CEO Ginni Rometty introduced the concept of precision regulation, and announced the launch of an internal tech policy research lab to work on policy initiatives.
The technology's complexity and the growing public concern about the potentially harmful effects of AI have inspired calls for regulation.
Some government bodies have already laid legislative groundwork around AI and the technologies it intersects with. The clearest U.S. example is the California Consumer Privacy Act, which regulates the way consumer data is used.
But government bodies are historically slow to respond to technological change. Experts believe that if government is to take a sustainable approach to legislating AI, it should account for public awareness, the industry's point of view and the laws already on the books.
How we got here
The spectrum of impact AI technology can have on society is getting clearer. It can bring companies together and boost the economy by reshaping how businesses operate; it can also cause harm when its outputs are skewed by bias.
It's natural for companies to call for government regulation once a technology's complexities grow. It has happened before, said Philip Nichols, professor of social responsibility in business at the Wharton School of the University of Pennsylvania, in an interview with CIO Dive.
"In the 1940s, the broadcast industries [which] at first opposed government regulation fiercely, just switched 180 degrees" and favored oversight, Nichols said. The process of allocating frequencies made it so industry couldn't operate without oversight.
When it comes to AI, tech leaders know "trust is going to be a huge issue," and facing regulation in the current context might be preferable, in their view, to regulation that comes after things have gone wrong, said Nichols.
In that context, "we might get overregulation," Nichols said.
Additional pressure to regulate comes from activist groups such as the American Civil Liberties Union, which has pointed out the shortcomings of unregulated AI tools and their impact on society. One example is Amazon's facial recognition software Rekognition, which falsely matched 26 California lawmakers to a database of mugshots, according to the ACLU.
In its 2019 report, AI Now — a New York University-based research group — recommended government and business stop all use of facial recognition "in sensitive social and political contexts until the risks are fully studied and adequate regulations are in place."
Though he favors regulation, Microsoft's Smith vocally opposed a ban on facial recognition proposed by the European Union.
"I'm really reluctant to say 'let's stop people from using technology in a way that will reunite families when it can help them do it,'" Smith told Reuters.
Government moves
In setting up AI regulations, government has a decision to make: Should it address all AI or target the technology's use cases within government agencies?
The first option might prove difficult, said Rayid Ghani, professor at Carnegie Mellon University's Heinz College. "It's ambiguous and it's not clear what it is," Ghani said.
Instead, government could focus on specific agencies and on how individual policy areas interact with AI. Some of that work has already started: the White House has released a set of general guidelines for the future federal regulation of AI.
The White House guidelines "are extremely high level and vague," said Ghani. "They're better than nothing but we'll have to figure out what those guidelines result in." Regulation ought to focus on outcomes, which makes AI easier to address within specific policy areas.
AI regulation from government faces an additional hurdle, one it has in common with big tech: the need for AI talent. Most government agencies struggle to attract and retain the tech talent they'd need in order to create and oversee regulation.
"[Government wants] to figure out what regulation to put in place to make sure that we maximize the positive impact of AI, if we want to make sure AI leads to equity in society and protecting people who are traditionally marginalized, we need people inside who have the right background and training," said Ghani.
Concerted approach
In the absence of regulation, business has sought to regulate itself.
Some larger organizations are appointing a chief AI ethics officer or similar oversight roles to manage the ethical dimensions of AI, with a sharp focus on reducing bias and harm. Others, such as IBM, have internal teams working on an AI regulatory framework they can offer to regulatory agencies.
No matter what approach the U.S. government takes, its initiatives should attempt to incorporate insights from all involved, said Matt Sanchez, founder and CTO of CognitiveScale, in an interview with CIO Dive.
"It's gotta be a combination of public awareness, what's the industry point of view and third, from a legal standpoint what already exists," said Sanchez. "Without that, we'll get regulation that's maybe too ambiguous or maybe overreaches and is too complex."
Sanchez's company, whose main product automates the management of AI business risk, has worked with Canada and the EU on AI oversight initiatives. One way regulation could play out is through a commonly defined AI trust index that measures bias and risk across platforms.
In other words, a credit-like score for the safety of an AI platform. "That common language is needed and I think that government can help us define what are those acceptable ranges for different types of problem areas," Sanchez said.
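To make the idea concrete, the sketch below shows one hypothetical way a credit-like trust score could be rolled up from per-platform measurements. The metric names, weights and example values are assumptions made for illustration only; they do not describe CognitiveScale's product, the guidance Sanchez referenced or any proposed index.

```python
# Purely illustrative sketch of an "AI trust index": the metrics, weights
# and thresholds below are hypothetical, not part of any real product or rule.
from dataclasses import dataclass


@dataclass
class PlatformMetrics:
    demographic_parity_gap: float  # 0.0 (no disparity) to 1.0 (maximum disparity)
    error_rate: float              # overall misclassification rate, 0.0 to 1.0
    explainability: float          # 0.0 (opaque) to 1.0 (fully documented)
    data_governance: float         # 0.0 (no controls) to 1.0 (audited controls)


# Hypothetical weights; a regulator could define acceptable score ranges
# for different problem areas (hiring, lending, policing and so on).
WEIGHTS = {
    "fairness": 0.35,
    "accuracy": 0.25,
    "explainability": 0.20,
    "governance": 0.20,
}


def trust_index(m: PlatformMetrics) -> float:
    """Combine normalized sub-scores into a single credit-like score from 0 to 100."""
    fairness = 1.0 - m.demographic_parity_gap
    accuracy = 1.0 - m.error_rate
    score = (
        WEIGHTS["fairness"] * fairness
        + WEIGHTS["accuracy"] * accuracy
        + WEIGHTS["explainability"] * m.explainability
        + WEIGHTS["governance"] * m.data_governance
    )
    return round(100 * score, 1)


# Example: a platform with a 10% parity gap, a 5% error rate and partial
# documentation scores roughly 85 out of 100 under these made-up weights.
print(trust_index(PlatformMetrics(0.10, 0.05, 0.7, 0.8)))
```

In Sanchez's framing, the government's role would be less about computing such a score itself and more about defining the common language: which measurements count, and which ranges are acceptable for a given problem area.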