Dive Brief:
- The Center for Government Excellence (GovEx) at Johns Hopkins University released a new toolkit to help cities ensure that algorithm-driven automated decisions in fields like criminal justice or higher education are free of bias.
- The tool takes a risk-management approach, laying out the risks and benefits of relying on algorithms for civic decisions and identifying where automation and artificial intelligence (AI) may unfairly target certain citizens. GovEx said in a statement that the goal is to help local leaders proactively quantify risks and identify ways to mitigate them.
- "Instead of wringing our hands about ethics and AI, our toolkit puts an approachable and feasible solution in the hands of government practitioners — something they can use immediately, without complicated policy or overhead," Joy Bonaguro, Chief Data Officer for San Francisco, said in a statement. The city and county helped on the project alongside the Civic Analytics Network at Harvard University and Data Community DC.
Dive Insight:
Cities have turned to AI and algorithms for a range of government functions, especially in criminal justice. For example, many states use computer programs to predict whether an inmate will commit another crime, using the results to inform decisions about sentencing or parole. But those systems have been dogged by accusations of built-in bias: a 2016 ProPublica investigation found that one formula incorrectly flagged black defendants as future criminals at almost twice the rate of white defendants, while white defendants were mislabeled as low-risk more often than black ones. A 2018 study from Dartmouth College researchers found that one risk-assessment algorithm was about as accurate at predicting recidivism as a random online poll of the general public.
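The disparity ProPublica described comes down to comparing error rates across groups: the false positive rate is the share of people who did not reoffend but were still flagged as high-risk. As a rough illustration only (the data below is invented, and this is not GovEx's toolkit or ProPublica's analysis code), such an audit check might look like this:

```python
# Illustrative sketch only: invented toy data, not drawn from any real
# risk-assessment tool or from ProPublica's published analysis.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were still flagged high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    false_positives = [r for r in non_reoffenders if r["flagged_high_risk"]]
    return len(false_positives) / len(non_reoffenders)

# Hypothetical defendants, grouped by the attribute being audited.
groups = {
    "group_a": [
        {"flagged_high_risk": True,  "reoffended": False},
        {"flagged_high_risk": True,  "reoffended": False},
        {"flagged_high_risk": False, "reoffended": False},
        {"flagged_high_risk": True,  "reoffended": True},
    ],
    "group_b": [
        {"flagged_high_risk": True,  "reoffended": False},
        {"flagged_high_risk": False, "reoffended": False},
        {"flagged_high_risk": False, "reoffended": False},
        {"flagged_high_risk": True,  "reoffended": True},
    ],
}

for name, records in groups.items():
    # A large gap between groups (here 0.67 vs. 0.33) is the kind of
    # disparity an audit like ProPublica's is designed to surface.
    print(f"{name}: false positive rate = {false_positive_rate(records):.2f}")
```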
New York City last year passed a bill creating a task force to examine where automated decision systems could contain bias and to recommend how agencies should address it. (The task force was convened in April.) That effort is similar to what GovEx is trying to do with its new toolkit. Tech companies, likewise, have been trying to tweak algorithms that have allowed some hate speech to flourish or inadvertently cracked down on non-inflammatory posts.
There is great potential for AI to streamline civic functions. San Francisco partnered with Code for America on an algorithm to help clear marijuana citations from citizens' criminal records after a state law legalized marijuana, sparing them the bureaucratic process of petitioning on their own. But cities need to be wary of leaning too heavily on technology that might harm minorities or disadvantaged groups; better education and attention around how algorithms work is a step toward keeping them fair.