Javier Castillo Coronas
Read this content in Catalan here
Translated by Álvaro Rodríguez Huguet
Mathematical models make millions of decisions, reproducing sexist and racist biases without citizens noticing
Being fired, denied a loan or even detained: these are decisions that can nowadays be in the hands of algorithms. The problem is that mathematics, contrary to what many people think, is not neutral. Sexist, racist or xenophobic algorithms make thousands of decisions about the lives of millions of people every day. We don’t see them. They work quietly. But the problem is not the algorithms; it’s human beings. Mathematical models merely imitate the prejudices of society.
In 2014, Amazon started using an algorithm to save time when hiring staff. The Seattle company’s tool, which promised to be faster and more objective, gave each applicant a score from one to five stars. But the artificial intelligence had been trained on application data from the previous ten years. During that period, most of the programmers hired had been men.
The algorithm thus learned that the best candidates had to be men and started to discriminate against women. When a woman’s CV was detected, it was immediately penalised and given a lower score. It took Amazon a year to realise it was using a tool that reproduced a bias against women.
However, algorithms are not inherently sexist; they learn to be. “An algorithm is a sequence of steps carried out to solve a certain task. A cooking recipe, for instance, can also be considered an algorithm,” explains David Solans, member of the Web Science and Social Computing Research Group at Pompeu Fabra University (UPF).
The key is the data
Algorithms that, like Amazon’s, make decisions automatically learn to identify and reproduce patterns from the data provided by computer engineers. This is where discrimination arises. “A significant share of the biases reproduced by algorithms is acquired as a consequence of biases present in the data,” points out Solans. In other words, they discriminate because they are taught to, with data that is already biased.
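A toy sketch of how this happens (the records and the “model” below are invented for illustration and are not taken from the Amazon case): a naive scorer that simply learns hiring rates from skewed historical data will faithfully reproduce the skew.

```python
# Hypothetical records mimicking a skewed hiring history: (gender, hired)
historical = [
    ("m", True), ("m", True), ("m", True), ("m", False),
    ("f", False), ("f", False), ("f", True), ("f", False),
]

def hire_rate(records, gender):
    """Fraction of past applicants of this gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

# The "model" scores candidates by their group's historical hire rate,
# so the imbalance in the data becomes the model's bias:
print(hire_rate(historical, "m"))  # 0.75
print(hire_rate(historical, "f"))  # 0.25
```

Nothing in the code mentions discrimination; the bias comes entirely from the data it was given.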
Leaving decisions in the hands of artificial intelligence seems to guarantee a more objective process, but this is not necessarily the case. The racism, sexism and xenophobia that persist in society are reflected in the databases from which algorithms learn. “The growing use of these systems in our daily lives may also raise the problem of algorithmic discrimination in the years ahead,” warns Solans. In fact, as Ricardo Baeza-Yates explains, this discrimination “can affect matters such as staff selection, bank loans or the legal system.”
Baeza-Yates, a professor at Northeastern University’s Silicon Valley campus, states that these stereotypes also affect public administration. “Biased algorithms generate a social cost. Recently, in the Netherlands, a court ruled that the government could not continue using a social-fraud detection system because it discriminated against poor people,” he clarifies.
Mathematical models not only reproduce social prejudices; they perpetuate them. “We can imagine what our life would look like if, simply for belonging to a minority, we were systematically denied a bank loan, or an airport’s facial recognition system did not let us into a country because it was unable to recognise our face,” points out Solans.
These are not hypothetical examples; they are cases that have already happened.
Joy Buolamwini, an African-American researcher at the Massachusetts Institute of Technology (MIT), discovered that facial recognition systems failed to identify her face, yet recognised those of her white colleagues. These artificial intelligences learn from the images shown to them during their development. Since most of the pictures used to train these systems were of white men, they struggle to detect the faces of Black women. Buolamwini had to wear a white mask for the system to recognise her. As a result, the MIT researcher decided to found the Algorithmic Justice League to, in her words, “create a world where technology works for all of us, not just some of us.” “Algorithmic justice is about identifying, isolating and mitigating biases in automated decision systems,” explains Solans, who defends the need to implement anti-discrimination techniques.
“If we know which bias we are dealing with, we can eradicate it by processing the data to remove it. The problem, however, is that there are often biases we are unaware of, and they are not easy to find,” points out Baeza-Yates. Carlos Castillo, director of the Web Science and Social Computing Research Group at UPF, affirms that solving this issue “is difficult because, sometimes, the most important question is whether to use an algorithm at all.” “Sometimes, the answer is no,” he concludes.
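One common way of “processing the data to remove” a known bias, sketched below with the same invented records, is reweighing: each record is weighted so that the protected attribute and the outcome look statistically independent before a model is trained. This is a generic illustration, not necessarily the technique these researchers use.

```python
# Reweighing sketch (hypothetical data): weight each record so that
# gender and the hiring outcome appear independent in the training set.
from collections import Counter

records = [
    ("m", True), ("m", True), ("m", True), ("m", False),
    ("f", False), ("f", False), ("f", True), ("f", False),
]

n = len(records)
by_gender = Counter(g for g, _ in records)    # marginal counts of gender
by_outcome = Counter(h for _, h in records)   # marginal counts of outcome
by_both = Counter(records)                    # joint counts

def weight(gender, hired):
    """Expected joint frequency under independence / observed frequency."""
    expected = (by_gender[gender] / n) * (by_outcome[hired] / n)
    observed = by_both[(gender, hired)] / n
    return expected / observed

# Hired women are under-represented, so their records are weighted up:
print(weight("f", True))   # 2.0
print(weight("m", True))   # ~0.67
```

As Baeza-Yates notes, this only works for biases we know about; a weight cannot be computed for an attribute nobody thought to record.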
And the institutions?
Some administrations are responding to algorithmic discrimination. For instance, Article 22 of the European Union’s General Data Protection Regulation forbids an artificial intelligence from taking decisions entirely on its own: a human being must always intervene. The article also establishes a person’s right to receive an explanation of any decision that concerns them in which an automated system took part.
But a person’s participation in the decision does not guarantee fairness. “Even when there is a person in charge of the final decision, they can be overworked or feel pressured to blindly accept the algorithm’s recommendation. If you obey the algorithm and you are wrong, you blame the algorithm. If you go against it and make a mistake, the responsibility is yours.”
Lack of transparency
“The algorithmic model itself is a black box; its content, a closely guarded corporate secret,” writes Cathy O’Neil in her book Weapons of Math Destruction (published in Catalan as Armes de destrucció matemàtica). That is because if the companies that design mathematical models did not keep them secret, they would lose their product. Another reason for this lack of transparency is that if people knew how the system that evaluates them works, they would know how to game it. Moreover, companies are aware that by hiding the details of their programmes, they make it harder for people to question their results.
Under the appearance of neutrality, mathematics is gradually taking control of our lives. Because the phenomenon is not visible, its magnitude is hard to grasp. As Cathy O’Neil puts it, “we have to hold the creators of algorithms accountable; the era of blind faith in big data has to end.”
“Automated decision systems are not the solution to human stereotypes; it is people who have to solve this problem,” says Baeza-Yates. Nevertheless, he also recognises that “they can help us not only to take less biased decisions, but also to notice our own biases.” David Solans raises another point: “A racist judge can harm a few dozen people in a day; the problem with racially biased algorithms is that they can evaluate thousands of people in seconds.”
FA*IR, the UPF algorithm that fights discrimination
An algorithm able to detect discriminatory biases in other algorithms and correct them: that is the project developed by the Web Science and Social Computing Research Group together with the Technical University of Berlin and the Technology Centre of Catalonia. The FA*IR initiative has studied databases of job offers, convict recidivism and university admission rankings to detect patterns of discrimination that benefit or harm certain social groups.
The FA*IR algorithm detects patterns of discrimination based on gender, physical appearance or skin colour, and corrects them through positive-action mechanisms to guarantee equality.
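The positive-action idea behind a fair ranking can be sketched very roughly as follows. The published FA*IR algorithm derives its per-position minimums from a statistical test; the simplified version below, with invented scores, just requires that every prefix of the ranking contain at least a fraction p of candidates from the protected group.

```python
# Much-simplified re-ranking sketch in the spirit of FA*IR (the real
# algorithm sets per-position minimums via a statistical test).
# Candidates are (score, is_protected) pairs; every prefix of length i
# must contain at least floor(p * i) protected candidates.
import math

def rerank(candidates, p):
    """Greedy fair re-ranking with minimum protected share p."""
    protected = sorted((c for c in candidates if c[1]), reverse=True)
    others = sorted((c for c in candidates if not c[1]), reverse=True)
    ranking, n_protected = [], 0
    while protected or others:
        position = len(ranking) + 1
        needs_protected = n_protected < math.floor(p * position)
        if protected and (needs_protected or not others
                          or protected[0][0] >= others[0][0]):
            ranking.append(protected.pop(0))
            n_protected += 1
        else:
            ranking.append(others.pop(0))
    return ranking

# Hypothetical scores in which protected candidates rank at the bottom;
# re-ranking lifts one of them into the top three:
pool = [(0.9, False), (0.8, False), (0.7, False), (0.6, True), (0.5, True)]
print(rerank(pool, p=0.4))
```

When the quota at a given position is already met, the sketch falls back to ordering by score, so it trades as little ranking quality as possible for the fairness constraint.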