How can we learn from discriminatory algorithms?

Software that fails to recognize black faces, or that singles out people with a second nationality.

Algorithms regularly turn out to discriminate. How can we put a stop to this, and even turn it to our advantage?

A black colleague of Canadian Ph.D. student Colin Madland wanted to adjust his virtual background during a Zoom meeting in September. But Zoom’s facial recognition algorithm failed to recognize his face.

It was only when he stood in front of a light-colored globe that the software detected his head and applied the desired background. When Madland shared this on Twitter, another wrong came to light: Twitter’s cropping algorithm shrank the photo and even cropped the black man out entirely. Artificial intelligence (AI), it seems, suffers from certain prejudices.

Neuroinformatician Sennay Ghebreab of the University of Amsterdam is researching how we can develop algorithms that do not discriminate. As a black man, he was himself not recognized in the late 2000s by an automatic revolving door that did let his white colleagues through.

The ‘revolving door incident’ inspired him to investigate the mechanisms of prejudice and discrimination within neuroinformatics, a field that combines neuroscience and computer science. The two have a lot in common: “AI works just like our brain. Our brain is also a pattern recognizer, a discriminator.”

Bias in all facets

Algorithms are trained on large amounts of data. And that data comes from people who, by definition, have prejudices (bias). It is therefore not surprising that those large datasets also contain a certain bias. Sometimes these are harmful prejudices, Ghebreab explains.

These can lead algorithms to disadvantage people on improper grounds, in other words, to discriminate. Because the data already contains harmful biases, that bias comes back like a boomerang in every facet of an algorithm’s development.

“To develop algorithms, developers select a part of that data: that selection, too, carries a bias. This often happens unconsciously or purely out of convenience: after all, you are dealing with a deadline, et cetera. And then there are also certain decision patterns in the algorithms themselves.

For example, they may not take into account that there is much more data about men than about women. And some statistical methods can further reinforce the inequality in the data. Finally, there may also be bias in the use and testing of algorithms.”
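To make concrete how such an imbalance plays out, here is a minimal sketch (not from the article; all data, numbers, and group names are invented) in which a single classifier is trained on synthetic data where one group vastly outnumbers another. The same model ends up noticeably less accurate for the underrepresented group:

```python
# Minimal sketch (invented data): one classifier, two groups, one of which
# is heavily underrepresented in the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-feature data whose true decision boundary depends on the group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B barely appears in it.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh, equally sized samples from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(2000, shift)
    accuracy = (model.predict(X_test) == y_test).mean()
    print(f"{name}: accuracy = {accuracy:.2f}")
```

Nothing in this toy example is malicious: a standard statistical method simply fits the majority group best, which is exactly how the inequality already present in the data gets reinforced.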

That algorithms often discriminate does not surprise the neuroinformatician. And the fact that all kinds of things are now going wrong with algorithms used on a larger scale, he sees as an opportunity to improve them. Ghebreab: “There is so much doom thinking.

Data, AI, algorithms, big tech: that is the connection people now make. But it is good that we can uncover social problems by examining at what point in the process something went wrong.”

As an example, the researcher cites the childcare benefits affair at the Dutch tax authorities. Thousands of parents who received childcare allowance were wrongly treated as fraudsters. “To a certain extent, this can be traced back to data on ‘second nationality.’ Algorithms were taught that having a second nationality is a risk. Now that this has come to light, something is being done about it. They are starting to think more carefully about which characteristics of people they want to record.”
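How such a pattern comes to light can be as simple as a disparity audit. The sketch below is hypothetical (it is not the tax authority’s actual system; the column names and figures are invented): it compares how often people who committed no fraud were nevertheless flagged, split by whether a second nationality was recorded.

```python
# Hypothetical audit (invented columns and figures): how often are people who
# committed no fraud nevertheless flagged, per group?
import pandas as pd

audit = pd.DataFrame({
    "flagged_as_fraud":    [1, 0, 1, 1, 0, 1, 0, 0, 1, 0],
    "actual_fraud":        [0, 0, 0, 1, 0, 0, 0, 0, 1, 0],
    "second_nationality":  ["yes", "no", "yes", "no", "no",
                            "yes", "no", "yes", "no", "no"],
})

# False positive rate per group: wrongly flagged among all non-fraud cases.
non_fraud = audit[audit["actual_fraud"] == 0]
fpr = non_fraud.groupby("second_nationality")["flagged_as_fraud"].mean()
print(fpr)
```

A large gap between the two rates is exactly the kind of signal that prompts a rethink of which characteristics should be recorded at all.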

Interdisciplinary research teams

In addition to improving existing AI systems, Ghebreab is also developing new systems to promote equity. For this, he works with a diverse research team. In his recently established Civic AI Lab, scientists from very different disciplines work together: social scientists, philosophers, lawyers, and AI experts. But he also wants to involve citizens in his research.

Not only by informing them about the research being done, but also by putting questions to them.

As an example, he cites a recently started project on mobility poverty. In Amsterdam, not all groups move through the city with equal ease, and it is not entirely clear why. “To get an answer to this, it is important to talk to citizens. But getting citizens involved is the hardest part of all because, in this case, they do not know very much about AI, and there is also a lot of doom thinking about AI in the media.

So we must first try to regain their trust. We do this by showing people that we use their data to find solutions for their own well-being. With AI, we gain insight into which parts of the city suffer from mobility poverty. Then we can see what the cause is and what we can do about it.”
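As a purely illustrative sketch of that last step (the Civic AI Lab’s actual methods and data are not described here; all names and figures below are invented), even a simple per-district aggregation can point to where mobility lags behind:

```python
# Purely illustrative (invented figures): trips per resident as a crude first
# indicator of which areas may face mobility poverty.
import pandas as pd

areas = pd.DataFrame({
    "district":     ["District A", "District B", "District C", "District D"],
    "daily_trips":  [42000, 18000, 15000, 21000],
    "residents":    [90000, 160000, 90000, 100000],
})

areas["trips_per_resident"] = areas["daily_trips"] / areas["residents"]
city_average = areas["trips_per_resident"].mean()

# Districts well below the city-wide average deserve a closer look.
low_mobility = areas[areas["trips_per_resident"] < 0.75 * city_average]
print(low_mobility[["district", "trips_per_resident"]])
```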

AI for social good

Ghebreab’s research method fits into a broader emerging trend: ‘AI for social good.’ A growing group of universities and tech companies is collecting data and giving the benefits back to the community. “If you get stuck in the thought ‘people can abuse my data,’ you remain stuck in the current situation. Moreover, people are already giving away their data en masse, but mostly to big tech.”

Unlike the big tech companies, which work with ‘one size fits all’ algorithms, Ghebreab prefers ‘bottom-up’ AI: local, small-scale experimentation, and if it works, scaling up. That way, an error in an algorithm cannot disadvantage large numbers of people at once.

This also means there must be room for frequent experimentation. Hence, he is not in favor of overly strict regulation. “Regulation of the big tech companies is, of course, important. But you must also avoid regulating everything in such a way that small initiatives run out of oxygen.”
