What have algorithms got to do with human rights?

Photo: Alexander Johmann / CC BY-SA 2.0

More and more areas of our lives are being influenced or even controlled by algorithms. There is often little transparency about how these algorithms work, what they do, or who creates them. The political researcher Ben Wagner finds this disconcerting.

iRights.Media: Mr Wagner, algorithms and human rights? What is the connection?

Ben Wagner: In our society, we are surrounded by objects that only work in conjunction with an internet connection and an automated process running in the background. There are many examples of how deeply algorithms are embedded in people’s lives, from decisions about bank loans to automated systems in public administration or in road traffic. Given this everyday ubiquity, we have to ask where algorithms might affect our basic rights, and whether an automated decision-making process can even meet the requirements of human rights.

What human rights could be affected?

An example is our right to free speech. Whether or not that right is honoured in a given case can depend on algorithmic decisions. If an online platform like Facebook or YouTube removes content, part of the process that decides whether it should be taken down is currently automated. But it is not clear what goes on in the automated part of the process, or at which point a human gets involved and decides what will or will not be deleted. This means that when content is reported because it violates someone’s rights, for example by inciting violence against them, one cannot be sure whether a human will ever deal with the complaint. So this is not just about freedom of speech, but also about the processes that permit or obstruct the exercise of this right.

Are there other examples?

Another area with big problems is our private sphere. If I upload content to a server somewhere in the world, I might like to imagine that it doesn’t exist anywhere in particular, but in fact it sits on somebody else’s computer. In the so-called cloud, algorithms are used to analyze my data, for example to find out whether it contains content that violates copyright. Most users of cloud services are simply not aware that their private data is being X-rayed in this way. That presents a problem for the right to privacy. And in cases of suspected copyright infringement, content can sometimes be deleted automatically. The data is simply taken away, and when users ask what has happened, they receive the brusque response: “The algorithm said that the data was illegal.”

The automated processing of private data is problematic above all when users are not aware of what is happening. They should be fully informed about, and have control over, what is done with their data. We need a debate across society to define the cases in which such processes pose a serious problem for guaranteeing human rights.

And this debate is a first step towards solving the problems you describe?

Yes, although algorithms pose a challenge here as well. We know that our collective discussions can be impaired by various automated processes. Elections are a current and relevant example. On the one hand, algorithms make it possible to show people from selected target groups online adverts prompting them to go and vote, and such adverts can reach thousands of people. If they are aimed only at supporters of a particular party, and not at people with a different point of view, the outcome of a vote can be substantially influenced. On the other hand, there are so-called social bots: systems that are active on social media platforms, where they are taken for real people. These bots intervene in election-related debates with a slew of automated statements, and it is not clear who is behind them, or even that one is not dealing with a real person. Where social bots are deployed in large numbers, this too can affect the outcome of an election. There should be no market for services of this kind. If individual actors become so powerful that they can use algorithms to influence elections, that is a problem for the sovereignty of the democratic process.

You have identified problems for freedom of speech, privacy and democratic debate. How can these challenges be met?

For all these aspects, there is a fundamental need for transparency, so that we are able to recognize and assess where problems might arise. That does not mean, however, that all algorithm developers should make their work public. We need context-dependent transparency. If someone is an especially powerful actor, they also have to be especially transparent, because the effects of the algorithms they use are of particular significance for individuals and for society at large. Less powerful actors should not be subject to the same transparency requirements: researchers and innovators, for example, should not find their freedom to try out new applications limited. It would be helpful to have a system of classification that assigned responsibilities to different actors, so that in cases where violations of human rights are possible, or even likely, we can take a closer look.

How might that work?

If decisions have a particular relevance for the affected parties, then it is no longer adequate to say, in a general way, “such-and-such a percentage of our decisions will be affected in such-and-such a way”. The relevant part of the decision-making process should be presented publicly and comprehensibly, making the role of automated systems very clear. Still, it is hard to answer this question in general terms, because algorithms are used in so many different situations. One has to look at where transparency would actually help. Where advertising bots are used in an election, an obligation to label them would make sense. In other cases, transparency can be counter-productive: a spam filter whose algorithm and mode of functioning were publicly available would be easy to get around, because spammers could study the filter criteria in detail. So you always have to bear the context in mind. In general, and in the realm of human rights in particular, algorithms and automation are creating new challenges, to which we need to respond in new and appropriate ways.

Interview by Eike Gräf.


“Das Netz – Digitalization and Society. English Edition” brings together writers, activists, scientists, politicians and entrepreneurs to reflect on the development of our digital lives. More than 50 contributions examine the digital transformation of society. The book is available for download as a free PDF.

Ben Wagner

Dr Ben Wagner is a social and political scientist. He is the director of the Centre for Internet and Human Rights (CIHR) at the European University Viadrina in Frankfurt (Oder). His research focuses on changes in communication, digital rights and the role of the internet in foreign policy.