As the private and public sectors look to harness the benefits of artificial intelligence in their workflows, cities nationwide are contemplating what guardrails they need to ensure their AI use is both safe and effective.
New York City is one of the latest to publish AI guidance, with Mayor Eric Adams touting “The New York City Artificial Intelligence Action Plan” as the first of its kind for a major U.S. city. In the plan, which the city rolled out in mid-October, Adams says this era of artificial intelligence could be “one of the most impactful technological advances of our time” but warns governments to be aware of the risks.
Simply put, AI has the potential to make the government run better and provide 8 million people with easier access to services and benefits, said Alex Foard, the executive director of research and collaboration for the NYC Office of Technology and Innovation, which was founded in 2022.
Even before it published the plan, New York City was no stranger to AI, Foard said. City agencies already use AI in public health and cyber-resilience projects, and the city has for several years published a public directory of the algorithmic tools it uses.
Still, Foard said, “an obvious need” arose for a comprehensive framework to guide how city agencies use AI-powered tools and for a commitment to developing robust risk assessments. The newly released plan is the result of at least three years of city inquiry into AI use and ethics, informed by public, private and academic partnerships, he added.
The action plan lists seven initiatives with short-, medium- and long-term goals through 2025. The initiatives include building AI knowledge within city departments, developing responsible AI procurement standards and ensuring that the measures the city creates are maintained and updated as technologies change.
The city aims to create an AI steering committee within the next year and expand public awareness of city AI tools. It also wants to create an AI risk assessment and project review process that checks for “reliability, fairness, bias, accountability, transparency, data privacy, cybersecurity and sustainability,” the action plan says.
Lessons from across the country
New York isn’t the only city planning for AI. Over the last few months, other cities have rolled out guidelines or policies with similar objectives.
Less than two weeks ago, Seattle published its policy for generative AI use. A press release about the new policy states that systems capable of generating text, images, video or audio have the potential to support many of the city’s services. The policy creates a vetting process for acquiring or using software services that incorporate generative AI, and it requires due diligence from city employees around intellectual property, attribution of AI-created content and protection of sensitive data.
Tempe, Arizona, approved an AI policy in June, with a focus on being “intentional” in its adoption of different tools. At the time, the city had not yet used AI in government operations, but the policy mentioned chatbots and automated reviews of employment applications to gauge qualifications as potential use cases. Notably, the city requires government workers to “clearly define the problem the AI technology would solve.”
In September, San Jose, California, published its generative AI guidelines, including a requirement that employees record their use of generative AI through the city’s reporting form.
Framing the conversation about AI
New resources for local leaders have emerged in response to rapidly proliferating questions about how cities should use AI.
During the annual Mayors Innovation Studio in October, Bloomberg Philanthropies and the Center for Government Excellence at Johns Hopkins University launched City AI Connect, a peer-based digital platform open to anyone with a government email address. A Bloomberg Philanthropies survey of 80 mayors around the world found that more than 75% are interested in AI tools, but just 2% of the cities surveyed had begun using them for government work.
Claudia Juech, one of Bloomberg Philanthropies’ senior leads on City AI Connect, said that the platform’s users are currently in an “exchange” phase: They are sharing what’s been useful or successful for them and their cities so far. Juech said government leaders are finding that generative AI tools make city operations more effective by, for example, streamlining reporting of issues raised via 311 calls or communicating with residents through chatbots.
Bloomberg Philanthropies’ research showed that generative AI has helped cities mitigate severe weather events, respond to emergencies and speed up paperwork processing. The survey showed that mayors are most interested in using AI to improve traffic and transportation, followed by infrastructure, public safety, environmental and climate projects, and education.
Juech and the government officials who spoke to Smart Cities Dive for this article emphasized that they see artificial intelligence as an opportunity to grow their employees’ abilities, not downsize their workforce.
Cities are looking for “how this can help with upskilling and … training the people that they have,” Juech said.
Boston Chief Information Officer Santiago Garces agrees. His Department of Innovation and Technology published interim guidelines for city generative AI use in May not just because of the pervasiveness of AI tools but because users were finding them widely helpful in their day-to-day work.
“It wasn’t just random hype,” Garces said of some early use cases. “With very little training, you could start doing things that would benefit you as an employee or as a community member — helping draft letters, helping write job descriptions.”
Garces says he knows, however, that AI technologies also bring risks, such as phishing attacks or fake letters from constituents. Boston’s interim guidelines rest on key principles: transparency, accountability, respect, innovation and risk management. The city also published three main rules city employees should follow when interacting with AI:
- Fact-check all AI-generated content. While chatbots can produce clear, well-formed sentences, they may generate outdated or simply fabricated information, Garces said.
- Disclose when you have used AI and which model you’ve used.
- Do not put sensitive or private information into generative AI prompts because the data is likely being shared with the companies powering the AI.
“By creating this environment in which people could experiment — but doing it in a way that was safer, that restricted the risks — we could truly be able to start preparing our workforce to understand the opportunities, the possibilities,” Garces said.