
If Done Right, AI Could Make Policing Fairer

By indianadmin

Jun 25, 2020

A decade ago, Fei-Fei Li, a professor of computer science at Stanford University, helped demonstrate the power of a new generation of artificial intelligence algorithms. She created ImageNet, a vast collection of labeled images that could be fed to machine learning programs. Over time, that process helped machines master certain human skills remarkably well, provided they had enough data to learn from.

Since then, AI programs have taught themselves to do more and more useful tasks, from voice recognition and language translation to operating warehouse robots and guiding self-driving cars. But AI algorithms have also demonstrated darker potential, for example in automated facial recognition systems that can perpetuate racial and gender bias. Recently, the use of facial recognition software in law enforcement has drawn condemnation and prompted some companies to swear off selling it to police.

Fei-Fei Li at SXSW in 2018. Photograph: Hubert Vestil/Getty Images

Li herself has ridden the ups and downs of the AI boom. In 2017 she joined Google to help, in her words, “democratize” the technology. Not long after, the company, and Li herself, became embroiled in a controversy over supplying AI to the military through an effort known as Maven, and attempting to keep the project quiet.

A few months after the blowup, Li left Google and returned to Stanford to colead its new Human-Centered Artificial Intelligence (HAI) institute. She also cofounded AI4All, a nonprofit dedicated to increasing diversity in AI education, research, and policy. In May, she joined the board of Twitter.

Li spoke with WIRED senior writer Will Knight over Zoom from her home in Palo Alto. This transcript has been edited for length and clarity.


WIRED: We are witnessing public outrage over systemic racism and bias in society. How can technologists make a difference?

Fei-Fei Li: I think it is very important. It goes to a core belief of mine: “There are no independent machine values. Machine values are human values.” I heard Shannon Vallor, a computational ethicist, say this years ago, and I’ve been using it since. Technology has been a part of humanity since the dawn of time, and the deployment of technology fundamentally affects humans.

We have to ensure that technology is developed in such a way that it has a positive human impact and represents the values we believe in. This takes people on the innovation side, the application side, and the policy-making side, and it leads to a natural belief in the importance of inclusiveness.

Let’s talk about facial recognition. In 2018, one of your students, Timnit Gebru, helped create a project called Gender Shades that highlighted racial bias in commercial face-recognition algorithms. Now companies like Amazon, IBM, and Microsoft are restricting sales of such technology to police. How can companies make sure they don’t release products with biases in the first place?

We need a multi-stakeholder dialog and an action plan. This means bringing together stakeholders from all parts of our society, including nonprofits, communities, technologists, industry, policymakers, academics, and beyond. Facial recognition technology is a double-edged sword, and obviously we need to consider individual rights and privacy versus public safety. Civil society has to come together to think about how we regulate applications of technology like this. Companies must play a role. They are responsible and should be held accountable just like other stakeholders.

Do you think AI can potentially help make policing fairer?

I want to highlight two recent research projects by my Stanford colleagues related to policing, both with diverse people behind them. In one, Dan Jurafsky and Jennifer Eberhardt used AI to analyze the language police used in bodycam footage when people were stopped. They showed a significant discrepancy in how officers spoke depending on who was stopped: officers were found to talk to Black people in less respectful ways. Using natural language AI techniques, we can get insights into ourselves and our institutions in ways we couldn't have before.
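For readers curious what such an analysis can look like in practice, here is a minimal, hypothetical sketch in Python. It scores a couple of invented utterances with an off-the-shelf sentiment classifier from the Hugging Face transformers library as a crude proxy for tone. The utterances and the model choice are assumptions for illustration; the Stanford study itself relied on human-rated respectfulness and interpretable linguistic features, not generic sentiment.

# A minimal, hypothetical sketch: scoring invented bodycam-style
# utterances with an off-the-shelf sentiment model as a crude proxy
# for tone. This is NOT the method used in the Stanford study.
from transformers import pipeline

# Invented example utterances (assumptions, not real data).
utterances = [
    "Sir, could I see your license and registration, please?",
    "Hands on the wheel. Don't move.",
]

# Generic sentiment classifier; the real work used human-rated
# respectfulness and interpretable linguistic features instead.
classifier = pipeline("sentiment-analysis")

for text in utterances:
    result = classifier(text)[0]
    print(f"{result['label']:>8} ({result['score']:.2f}): {text}")

A sentiment label is, of course, only a rough stand-in for respectfulness; the value of the research lay in carefully validated measures, which is exactly the kind of rigor this sketch omits.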