Will digitalization destroy our values?

Photo: Dirk Schäfer. Straßburger Münster, Portal / CC BY 2.0

There is an ugly side to our brave new digital world. Arguments are all too often reduced to superficial slugging matches while the operators of social networking sites duck their social responsibilities. The time has come for a declaration of digital values.

Digitalization is entering a mature phase. Fascinating technological developments are shaping the 21st century in unforeseen ways, working their way ever deeper into our lives. Through big data, the internet of things, artificial intelligence, the technological capacity to monitor every online action, right down to phenomena like “love robots”, the internet’s promise of freedom is turning into a threat to freedom, autonomy, morally responsible behaviour, free discussion, and thus to democracy itself.

The implications of this brave new digital world are no longer confined to the field of technology. The use of social media and countless smart labour-saving devices has turned clashes between liberties into outright contraventions of rights. What we need now is a broad public debate on the threats to, and the defence of, contested values, both online and offline. In Germany, fundamental rights are enshrined in our constitution: the right to a private life, freedom of speech and religion, and freedom of the press are foundations of our democracy. They are also the basis of our understanding of citizens as autonomous individuals who are more than subjects of the state and who should not be left at the mercy of monopolies and market-leading companies.

At the European level, the Charter of Fundamental Rights of the EU guarantees these same rights, as do international conventions such as the UN International Covenant on Civil and Political Rights. The problem is not a lack of rights, but a deficit in their observance and enforcement. Alongside this comes a clear change in the culture of online debate and political discussion, which can lead, as we have seen in the American election, to people consciously resorting to insults, slurs and obscenity, to sharp political polarization, and to the erosion of substantive, fact-based discussion. A healthy culture of debate, in which the other side is not seen as an enemy to be destroyed, is in real danger. The search for solutions to this impasse must proceed on two levels.

Greater awareness of the value of debate

On the one hand, we need to create greater awareness of the value of debate, to set it against the culture of hate, and to establish a robust set of rules of fairness. The many platform operators can and should support this debate, promote it and take part in it themselves. They can reach millions of people, more than any newspaper or parliamentary debate. Their outsized role in public discourse must be matched by greater reflection on the responsibilities that come with it. They have long outgrown the status of mere technical providers.

If democracy is simply seen as a meddlesome bureaucratic leviathan, against which one sets one’s own notion of what people “really” want, then we are in genuinely dangerous territory. Such views can be heard loud and clear from Silicon Valley; they betray authoritarian tendencies that have to be exposed.

On the other hand, it is necessary to find ways of holding to account those responsible for online abuses, including criminal insults, vile slander and incitement. The current debate surrounding hate speech on social media is a prime example. Online, people are bullied with insulting and hateful tweets or posts, mostly anonymously, because of their opinions or behaviour, and their rights to identity and dignity are violated. The worst examples are bound up with racism, xenophobia, homophobia and misogyny.

It is the responsibility of global IT companies not to look the other way or to buy full-page adverts touting panaceas for such criminal communications, but to actually do something about them. If, as one investigation has found, only 1 percent of reported posts are deleted on Twitter, 10 percent on YouTube and 46 percent on Facebook, that is not enough.

Irresponsibility no longer an option

Appeals to differing legal cultures regarding the extent of free speech within and outside Europe do not hold up: companies operating in Germany or Europe must abide by the regulations that apply there. If the regulations are not sufficiently clear and result in lengthy, obstructive litigation, then more explicit legislation is required in this sector, such as the EU General Data Protection Regulation. Nor can we be silent about offences against human rights just because they affect a country where we don’t do much business. Turning a blind eye to our responsibility to help uphold rights is no longer acceptable.

Algorithms decide more and more—how can we control them?

Do we need greater transparency when it comes to the technical details of algorithms? Currently they are protected under Germany’s constitution as the commercial secrets of IT giants. General public access to the inner workings of proprietary algorithms is therefore ruled out, but could limited access be enforced in cases of suspected manipulation? These are difficult questions that go to the heart of digital development, and they deserve a far more thorough hearing in public discussion.

Such questions involve the fundamental freedoms of all users. They are also key to issues of online political debate. For a long time, opaque algorithms have structured the newsfeeds of platform operators and sorted search results. Content is filtered according to criteria only known to the algorithms.

According to a recent study by the Pew Research Center, a Washington-based research organization, half of all Americans under 35 see Facebook as their most important source of news. With software robots, known as Twitter bots, news stories can be planted online in order to drown out or prettify contradictory reporting in the mainstream media.

With 22 million people in Germany using Facebook every day, and with Google’s share of the German search market reaching 90 percent, these companies’ influence on public opinion should not be underestimated. A gatekeeper that decides what content users see, and in what order, has an ever greater influence on what becomes familiar and popular amongst voters. Such gatekeepers can be tasked with the direct mobilization of particular groups of voters, as the companies have all the information they need to tailor a message to someone’s likely voting behaviour: they can see which pages users “like”, where they live, their circle of friends, their age and their preferences.

Even more fundamental is the question of whether algorithms can become capable of making ethical choices, whether robots with a conscience are possible and whether ethical conflicts can be programmed out of existence. These questions are brought into sharp relief by the development of self-driving cars. Human drivers make such decisions intuitively while driving. With self-driving cars, these split-second decisions, which must be written into the program in advance, will be made by technical means. That gets to the heart of the matter. In a difficult situation, should the self-driving car run over the child or the grandfather; the job-seeker or the manager; two or three people or nobody, thereby endangering the passengers themselves? Is the axiom that human lives cannot be weighed against one another true? If it is, and if it cannot be ruled out that such decisions will arise for self-driving vehicles, then we find ourselves in a confounding ethical dilemma.
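What it would mean to write such a decision into a program can be made concrete with a minimal sketch. The following code is purely illustrative and assumes nothing about any real vehicle software; the function, its inputs and the ranking rule are invented for this example. The point is that any concrete rule forces the programmer to rank outcomes, that is, to weigh lives against one another in advance.

```python
# Purely illustrative sketch: a hypothetical decision routine for an
# unavoidable collision. No real vehicle software is referenced here.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str          # e.g. "brake straight ahead"
    people_at_risk: int       # how many bystanders this manoeuvre endangers
    passengers_at_risk: bool  # whether it endangers the car's own occupants

def choose_manoeuvre(options: list[Outcome]) -> Outcome:
    # The "ethics" of the system is nothing more than this sorting criterion.
    # Choosing it means deciding, in advance, whose safety counts for more.
    return min(options, key=lambda o: (o.people_at_risk, o.passengers_at_risk))

options = [
    Outcome("brake straight ahead", people_at_risk=2, passengers_at_risk=False),
    Outcome("swerve into the barrier", people_at_risk=0, passengers_at_risk=True),
]
print(choose_manoeuvre(options).description)  # prints "swerve into the barrier"
```

However the criterion is chosen, it encodes exactly the weighing of lives that the axiom above forbids, which is why the dilemma cannot simply be programmed out of existence.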

If this dilemma proves to be insoluble, as is currently the case, then technology will have encountered a clear limit. Where there is any doubt, such decisions cannot be left to machines. Leaving moral decisions to a technical system can also change the expectations that we hold of morality, guilt and responsibility. “It’s the machine’s fault, sorry”: Can that really be the end of the matter?

Developments in intelligent systems have reached the point where they pose, ever more starkly, major questions of human versus machine and of technology versus values. The order of priorities should be clear: morals and ethics cannot be replaced by technology, no matter how advanced that technology may be.

Sabine Leutheusser-Schnarrenberger

Sabine Leutheusser-Schnarrenberger was Federal Justice Minister of Germany from 1992 to 1996 and from 2009 to 2013. As a member of the Parliamentary Assembly of the Council of Europe, she also served on its Committee on Legal Affairs and Human Rights from 2003 to 2009.