
The Observer

'A white mask worked better': why algorithms are not colour blind

When Joy Buolamwini found that a robot recognised her face better when she wore a white mask, she knew a problem needed fixing
Joy Buolamwini gives her TED talk on the bias of algorithms. Photograph: TED


Ian Tucker

Sunday 28 May 2017 13.27 BST
First published on Sunday 28 May 2017 08.30 BST

Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. She grew up in Mississippi, gained a Rhodes scholarship, and is also a Fulbright fellow, an Astronaut scholar and a Google Anita Borg scholar. Earlier this year she won a $50,000 scholarship funded by the makers of the film Hidden Figures for her work fighting coded discrimination.

A lot of your work concerns facial recognition technology. How did you become interested in that area?
When I was a computer science undergraduate I was working on social robotics – the robots use computer vision to detect the humans they socialise with. I discovered I had a hard time being detected by the robot compared to lighter-skinned people. At the time I thought this was a one-off thing and that people would fix this.

Later I was in Hong Kong for an entrepreneur event where I tried out another social robot and ran into similar problems. I asked about the code that they used and it turned out we’d used the same open-source code for face detection – this is where I started to get a sense that unconscious bias might feed into the technology that we create. But again I assumed people would fix this.

So I was very surprised to come to the Media Lab about half a decade later as a graduate student, and run into the same problem. I found wearing a white mask worked better than using my actual face.

This is when I thought, you’ve known about this for some time, maybe it’s time to speak up.

How does this problem come about?
Within the facial recognition community you have benchmark data sets which are meant to show the performance of various algorithms so you can compare them. There is an assumption that if you do well on the benchmarks then you’re doing well overall. But we haven’t questioned the representativeness of the benchmarks, so if we do well on that benchmark we give ourselves a false notion of progress.
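As a rough illustration of what questioning a benchmark's representativeness can look like in practice, here is a minimal sketch in Python. The field name and numbers are invented for the example, not drawn from any real benchmark: it simply tallies how a face dataset breaks down by a demographic attribute before anyone trusts the headline score.

```python
from collections import Counter

def composition(benchmark, attribute="skin_type"):
    """Tally how a benchmark breaks down by a demographic attribute.

    `benchmark` is an iterable of dict-like records; `attribute` is a
    hypothetical field name used here purely for illustration.
    """
    counts = Counter(record[attribute] for record in benchmark)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Invented numbers: a benchmark that is 80% lighter-skinned faces will
# reward algorithms tuned to lighter faces, whatever the overall score says.
benchmark = [{"skin_type": "lighter"}] * 800 + [{"skin_type": "darker"}] * 200
print(composition(benchmark))  # {'lighter': 0.8, 'darker': 0.2}
```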
Joy Buolamwini at TED in November 2016.

It seems incredible that the people putting together these benchmarks don’t realise how undiverse they are.
When we look at it now it seems very obvious, but working in a research lab, I understand, you do the “down the hall test” – you’re putting this together quickly, you have a deadline – so I can see why these skews have come about. Collecting data, particularly diverse data, is not an easy thing.

Outside of the lab, isn’t it difficult to tell that you’re discriminated against by an algorithm?
Absolutely, you don’t even know it’s an option. We’re trying to identify bias, to point out cases where bias can occur so people can know what to look out for, but also to develop tools so the creators of systems can check for bias in their designs.

Instead of getting a system that works well for 98% of people in this data set, we want to know how well it works for different demographic groups. Let’s say you’re using systems that have been trained on lighter faces but the people most impacted by the use of this system have darker faces – is it fair to use that system on this specific population?
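To make the contrast between a single headline figure and per-group performance concrete, here is a minimal sketch (the numbers and group labels are invented for illustration, not taken from Buolamwini’s research): a classifier can report 94% accuracy overall while getting only half of an under-represented group right.

```python
def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy overall and broken down by demographic group."""
    correct, total = {}, {}
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (truth == pred)
    overall = sum(correct.values()) / sum(total.values())
    per_group = {g: correct[g] / total[g] for g in total}
    return overall, per_group

# Invented data: 90 faces from one group, 10 from another, all truly faces (label 1).
y_true = [1] * 100
y_pred = [1] * 89 + [0] + [1] * 5 + [0] * 5      # the detector's outputs
groups = ["lighter"] * 90 + ["darker"] * 10
overall, per_group = accuracy_by_group(y_true, y_pred, groups)
print(overall)    # 0.94 – looks fine in aggregate
print(per_group)  # {'lighter': ~0.99, 'darker': 0.5} – far from fine
```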

Georgetown Law recently found that one in two adults in the US has their face in the facial recognition network. That network can be searched using algorithms that haven’t been audited for accuracy. I view this as another red flag for why it matters that we highlight bias and provide tools to identify and mitigate it.

Besides facial recognition what areas have an algorithm problem?
The rise of automation and the increased reliance on algorithms for high-stakes decisions – such as whether someone gets insurance or not, their likelihood of defaulting on a loan or their risk of recidivism – means this is something that needs to be addressed. Even admissions decisions are increasingly automated – what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create, but that’s only going to happen if we are intentional.

If these systems are based on old data isn’t the danger that they simply preserve the status quo?
Absolutely. A study of Google’s ad system found that ads for executive-level positions were more likely to be shown to men than women – if you’re trying to determine who the ideal candidate is and all you have is historical data to go on, you’re going to present an ideal candidate based on the values of the past. Our past dwells within our algorithms. We know our past is unequal, but to create a more equal future we have to look at the characteristics that we are optimising for. Who is represented? Who isn’t represented?

Isn’t there a counter-argument to transparency and openness for algorithms? One, that they are commercially sensitive and two, that once in the open they can be manipulated or gamed by hackers?
I definitely understand companies want to keep their algorithms proprietary because that gives them a competitive advantage, and depending on the types of decisions that are being made and the country they are operating in, that can be protected.

When you’re dealing with deep neural networks that are not necessarily transparent in the first place, another way of being accountable is being transparent about the outcomes and about the bias it has been tested for. Others have been working on black box testing for automated decision-making systems. You can keep your secret sauce secret, but we need to know, given these inputs, whether there is any bias across gender, ethnicity in the decisions being made.
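A minimal sketch of the kind of black-box outcome check described here – generic and with invented data, not the method of any particular auditing project: without access to the model’s internals, an auditor compares decision rates across groups and flags large gaps for investigation.

```python
from collections import defaultdict

def decision_rate_by_group(decisions):
    """`decisions` is an iterable of (group, decision) pairs, decision in {0, 1}.

    The system under audit is treated as a black box: only its outputs are
    examined, grouped by a protected attribute such as gender or ethnicity.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def max_disparity(rates):
    """Largest gap in favourable-decision rates between any two groups."""
    return max(rates.values()) - min(rates.values())

# Hypothetical audit: comparable applications are run through the system
# and the favourable-outcome rates are compared across groups.
decisions = ([("men", 1)] * 70 + [("men", 0)] * 30
             + [("women", 1)] * 50 + [("women", 0)] * 50)
rates = decision_rate_by_group(decisions)
print(rates)                 # {'men': 0.7, 'women': 0.5}
print(max_disparity(rates))  # 0.2 – a gap this size would warrant scrutiny
```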

Thinking about yourself – growing up in Mississippi, a Rhodes Scholar, a Fulbright Fellow and now at MIT – do you wonder that if those admissions decisions had been taken by algorithms you might not have ended up where you are?
If we’re thinking about likely probabilities in the tech world, black women are in the 1%. But when I look at the opportunities I have had, I am a particular type of person who would do well. I come from a household with two college-educated parents – my grandfather was a professor in a school of pharmacy in Ghana – so when you look at other people who have had the opportunity to become a Rhodes Scholar or do a Fulbright, I very much fit those patterns. Yes, I’ve worked hard and I’ve had to overcome many obstacles, but at the same time I’ve been positioned to do well by other metrics. So it depends on what you choose to focus on – looking from an identity perspective, it’s a very different story.

In the introduction to Hidden Figures the author Margot Lee Shetterly talks about how growing up near Nasa’s Langley Research Center in the 1960s led her to believe that it was standard for African Americans to be engineers, mathematicians and scientists…
That it becomes your norm. The movie reminded me of how important representation is. We have a very narrow vision of what technology can enable right now because we have very low participation. I’m excited to see what people create when it’s no longer just the domain of the tech elite – what happens when we open this up? That’s what I want to be part of enabling.

The headline of this article was amended on 28 May 2017 to better reflect the content of the interview.

