‘Coded Bias’: Film features MIT researcher who found racial and gender bias in facial recognition programs

The documentary was first screened at Sundance last year and will premiere on PBS Monday night.

MIT Media Lab researcher Joy Buolamwini in "Coded Bias." Steve Acevedo

https://www.youtube.com/watch?v=jZl55PsfZJQ

One of the masters of the modern era might determine if you can get a home. A job. A decent credit limit. A college acceptance letter. Even jail time.

Behold: The algorithm, powerful and inscrutable, the basic building block of artificial intelligence.

But despite being perceived as fair and objective arbiters of truth, many algorithms are anything but.

“Coded Bias” follows MIT Media Lab researcher Joy Buolamwini, who in 2018 co-authored an influential study showing that commercially available facial recognition programs had serious algorithmic bias against women and people of color. The film, which premiered at Sundance in 2020, will make its television debut on PBS’s “Independent Lens” Monday at 10 p.m.

It all started with an assignment, when then-graduate student Buolamwini discovered that the facial recognition program she was using couldn’t recognize her. The only way she could get it to work was to wear a white mask.

It wasn’t a fluke. The problem lies in the datasets that programmers use to develop machine learning algorithms. Facial recognition systems learn what faces look like from a massive collection of photos, but if those photos are mostly of men and lighter-skinned people, the systems won’t do a good job of detecting women and Black people.
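The mechanism is simple enough to sketch in a few lines of code. Below is a minimal, hypothetical illustration in Python (synthetic toy data and made-up group labels, not the study’s benchmarks or any vendor’s system): a classifier trained on a 95/5 split between two groups performs well on the majority group and near chance on the group it barely saw.

```python
# Hypothetical sketch of dataset-imbalance bias. All data is synthetic;
# "group A" and "group B" are illustrative stand-ins, not real demographics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, label_axis):
    """n synthetic examples for one 'group'; each group's true label depends
    on a different feature, standing in for a different data distribution."""
    X = rng.normal(size=(n, 2))
    y = (X[:, label_axis] > 0).astype(int)
    return X, y

# Training set: 95% group A, 5% group B, echoing a photo collection
# dominated by one demographic.
Xa, ya = make_group(950, label_axis=0)
Xb, yb = make_group(50, label_axis=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Balanced test sets expose the gap the skewed training data baked in.
for name, axis in [("group A", 0), ("group B", 1)]:
    Xt, yt = make_group(2000, label_axis=axis)
    print(f"{name} accuracy: {accuracy_score(yt, model.predict(Xt)):.2f}")
```

In this toy setup, accuracy on the overrepresented group comes out high while the underrepresented group’s hovers near a coin flip; the shape of that gap, if not the exact numbers, is what Buolamwini measured in commercial systems.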

“The systems weren’t as familiar with faces like mine,” Buolamwini says in the film. She found that services from Microsoft, IBM, and Amazon were all similarly inaccurate.

Computer scientists call this phenomenon “garbage in, garbage out.” In other words, if your input data is busted, your output will be busted too. Programs act a lot like genies: they’ll do exactly what you tell them to, to a fault, so you should be careful what you ask for.

And it goes far beyond sloppy computer science. Facial recognition is used by law enforcement for surveillance, and banks and employers use artificial intelligence to determine who can get loans or job interviews. If those programs are instilled with bias, they exacerbate existing inequalities.

Director Shalini Kantayya, a self-professed sci-fi fanatic, told Boston.com that she hadn’t been familiar with algorithmic bias for long before she started work on the film.

“Everything I knew about artificial intelligence came from Steven Spielberg,” she joked.

She said her interest in the topic began when she saw Buolamwini’s 2017 TEDx talk and read mathematician Cathy O’Neil’s 2016 book “Weapons of Math Destruction,” which explores how algorithms are increasingly used to perpetuate inequality.

Shalini Kantayya, director of “Coded Bias.”

“I didn’t realize until that point the way algorithms became a gatekeeper in society: who gets hired, who gets what quality of healthcare, who might get undue police scrutiny,” Kantayya said. “What I came to grapple with is that these same systems that we trust implicitly to decide human destiny have not been vetted for racial bias and gender bias.

“That’s when I really began to see that we could roll back 50 years of civil rights advancements.”

Kantayya, a Hampshire College alum born and raised in Connecticut, weaves in stories showing the real-life impacts of faulty or unethical artificial intelligence. A beloved teacher in Houston is arbitrarily deemed substandard by an automated assessment. A man in the United Kingdom is fined after shielding his face from a facial recognition camera that even a police officer admits is inaccurate. Scenes in China show the potential of extensive state surveillance through the Social Credit System and mandatory facial recognition, and most citizens don’t seem to mind.

But despite a pervasive Western narrative that China’s surveillance systems are unusually “dystopian” or “Orwellian,” the film notes that our reality isn’t so different. Futurist Amy Webb points out that Americans are also being evaluated, profiled, and monitored by artificial intelligence; more than 117 million people in the United States have their faces on record.

“The primary difference is that [China is] transparent about it,” Webb says.

Kantayya pointed out that the ethics of artificial intelligence don’t end once you scrub it of racial bias — after all, flawless facial recognition could just mean more effective and invasive surveillance. But it is especially troublesome when existing software is unregulated, untested, and developed by commercial enterprises for profit.

“It’s my belief that until there are some guardrails in place in terms of public policy, we should be pressing pause on facial recognition,” she said.

There’s been progress since production on the film began in 2018. In May 2019, San Francisco became the first major American city to ban government use of facial recognition. A smattering of other towns and cities followed over the next year, including Somerville, Brookline, Northampton, and Cambridge here in Massachusetts, culminating in a ban in Boston in June 2020. Kantayya doesn’t think that it’s a coincidence that places like San Francisco and Boston emerged as leaders in banning facial recognition.

“I think we have a moonshot movement to push for greater ethics in technology,” Kantayya said. “It’s noteworthy that cities like those, technological hubs where people understand how these technologies work, have been the first to ban them.”

Boston and Cambridge serve as steady backdrops throughout the film: Buolamwini recounts seeing O’Neil speak at the Harvard Book Store, and she tells her Cambridge hairdresser how she was inspired to go to MIT when she saw the Media Lab’s ’90s robot Kismet as a child. There are establishing shots of the Amtrak from South Station to Washington leading into C-SPAN clips of the researcher testifying before Congress in 2019. Many of the experts interviewed for the film, mostly women, have connections to MIT or Harvard.

Kantayya shared several ways that an average citizen could take action in the battle for ethics in artificial intelligence: Push for national literacy in AI, encourage greater inclusion in technology (“and not just for the picture”), lobby politicians to regulate tech companies, and work with organizations like Buolamwini’s Algorithmic Justice League.

And then there’s the power of the largest civil rights movement in decades. In the wake of the Black Lives Matter protests spurred by the police killings of George Floyd and Breonna Taylor last summer, IBM abandoned facial recognition completely, Microsoft announced that it wouldn’t sell its facial recognition program to police departments, and Amazon halted police use of its facial recognition software for a year.

“In June 2020, I saw a sea change that I never thought possible in the making of this film,” Kantayya said. “Change comes from us, the people, pushing our policymakers to protect us and our civil rights.”
