For government leaders looking to build more efficient and sustainable cities, artificial intelligence tools will likely be some of the most consequential technologies in the decades to come.
“I’d put it up there with the internet and the steam engine, with the ability to transform lives,” said Prabhakar Raghavan, senior vice president of Google’s knowledge and information products.
Raghavan spoke Wednesday at the U.S. Conference of Mayors meeting in Washington, D.C., about how the company works with cities on smart strategies and what mayors can do to fight misinformation, especially in a general election year.
The company’s Google Maps product is an example of a combination of real-world modeling and artificial intelligence, Raghavan said. Its data has allowed Google to work with cities to provide flood and wildfire alerts, as well as to provide fire propagation modeling to the U.S. Forest Service. Google Maps has also begun offering “eco-friendly” routing to users, suggesting not necessarily the fastest but the most fuel-efficient way to get from one point to another — a move Raghavan said has saved about 2.5 million tons of carbon dioxide, equivalent to taking about half a million cars off the road.
AI has also helped Google develop Project Green Light, a traffic optimization program that helps cities reduce car emissions by optimizing traffic-light timing. Google provides participating cities with recommendations for signal-timing changes, which city traffic engineers can implement if they choose. It’s currently working with 12 cities, with Seattle the first participant in North America, and it plans to continue adding metro areas, Raghavan said. (Smart Cities Dive also has written about Google’s free tool for mapping urban tree canopies and its autonomous vehicle service, Waymo, which is operating in Phoenix, San Francisco, Los Angeles and Austin, Texas.)
Columbus, Ohio, Mayor Andrew Ginther asked the Google executive how city leaders can weigh the risks of AI against its potential. Raghavan said he hears the concerns, but AI is a field that’s been developing with the help of ethicists and technologists for decades. The tools that Google releases are heavily vetted, he said, and its teams rigorously weigh whether each project is necessary. For example, the company has so far declined to deploy widespread facial recognition, he said.
“What’s critical here is not just the potential and all the good stuff we can do, but what we elect not to do,” Raghavan said.
With an election year ahead, Ginther asked how he and other city leaders can combat misinformation. Raghavan admitted it's a difficult question. Google has developed its version of what scholars call “the relativism of truth” — its algorithms’ consensus on an answer based on a collection of authoritative sources.
When misinformation surfaces, Raghavan said, Google first doubles down on its principle of elevating authoritative sources, such as government and health agencies, in its rankings. Second, it elevates those sources when it detects a developing crisis in which misinformation could easily run rampant. Third, Google keeps strong content policies in place to keep harmful material, such as child abuse imagery or personal financial information, out of search results.
For elections, Google has been providing more information about the origins of images by watermarking them and testing them for tampering. Raghavan said his team is also working on advertising products that assess whether election advertisements follow local laws.
For the last two election cycles, the company has been in touch with the two major U.S. political parties to get feedback on its tools and safeguards.
“It will almost become a battle of technologies,” Raghavan said. “Just as if you think back 30 years, spam and email became a battle of technologies. Machine-generated spam tried to get through email spam system defenses, and the same thing is now happening at a much more elevated level.”