Portrait photo of Michael W. Bader (© Photo: Alina Bader)

24 Aug

Reign of the Algorithms

How “Artificial Intelligence” is Threatening our Freedom

Michael W. Bader deals with the topic of “artificial intelligence” (AI) from the point of view of the incapacitating effect these new technologies have on society. Beyond the highly alarming use of “artificial intelligence” in autonomous weapons systems and in the manipulation of elections, a closer look is taken at the conception of humankind and society that underlies these developments. In the author’s opinion, creating a socially responsible AI that serves the common good and the benefit of the many rather than of the few is critically important.

Introduction

The author’s previous essay, “Against Monopolism and Libertarianism”, dealt with the dangers of the information capitalism that has emerged from Silicon Valley. It observed that more and more power is being accumulated in the hands of a few companies outside democratic control, and that old libertarian perspectives are coalescing with new world-improvement ideologies to this end. Without question, the new giants such as Google, Facebook, Airbnb and Uber are changing our world in their own image.1

Monopolism is back in fashion, and democracy seems to be an outdated and cumbersome technology that gets in the way of highly motivated entrepreneurs and their freedom. When discussing this topic, it is important always to maintain a certain critical distance and not be carried away by enthusiasm for the exciting new gimmicks and innovations of the large internet companies. This is important because their activities are giving rise to a completely unbridled form of information capitalism that does business by constantly collecting personal data on citizens – without asking for permission.

This essay deals with the dangers that arise, in the author’s opinion, from the completely uncontrolled implementation of Artificial Intelligence (AI). It seems necessary to tackle this topic even though the use of AI is usually associated more with opportunities than with major problems. Many experts expect that some of the biggest problems faced by civilisation in the next 10–20 years could be solved by intelligent systems. AI is to contribute to managing climate change, for example, to establishing a renewable energy system, to opening access to education for everyone and to revolutionising medicine and genetics, and above all – for almost all sectors of society – it is to make highly complex relationships visible. It is to make possible entirely new insights into the ways in which people live together socially. For example, tweets and Facebook messages may be used in the future to predict the spread of diseases or social unrest fairly reliably. Security forces will use forecasting software to thwart terrorist attacks. In many cities, police patrols are already deployed according to big data predictions of where the next crime is likely to be committed. In around 30 US states, big data analysis is currently used to predict the likelihood of a person committing murder in the future in order to make a decision on their parole, as described by Viktor Mayer-Schönberger.2

Approximately 70 per cent of all financial transactions are already controlled by algorithms today3 and advertisements are also generated algorithmically for maximum emotional impact4. In Hong Kong, an artificial intelligence has even become a full member of an advisory board. The “Vital” system (Validating Investment Tool for Advancing Life Sciences) has full voting rights as specified by the company in question.5

With all of these existing and potential blessings in mind, the following deals with various problems associated with AI and the underlying vision of humankind and society. This essay refers to a number of (technical) papers and books on the subject, with particular reference to the reflections of Kai Schlieter as presented in his 2015 book “The Dominance Formula” (German: Die Herrschaftsformel)6 as well as in various other papers and interviews. Markus Morgenroth, software expert on behavioural data analysis and author of the remarkable book “They Know You! They Have You! They Control You!”, was also an important source for this work7, as were the writings of other well-known experts.

I. What is artificial intelligence?

Artificial intelligence is the software-controlled machine intelligence used for self-driving cars, industrial robots, self-learning machines, digital assistants, Industry 4.0, games, autonomous weapons and much more. Beyond these applications, artificial intelligence has become so important in society chiefly because it is what allows the enormous amounts of big data collected every day to be processed.

By 2003, the amount of data that had been collected had already grown to an unbelievable five billion gigabytes; this number is still growing day by day and will continue to grow exponentially as a result of the Internet of Things (IoT). According to estimates by Cisco, in five years there will be 50 billion sensors in the world continually collecting data and passing it on.8 In ten years, this number will have risen to 150 billion networked measuring sensors, and their use will mean that the amount of data collected doubles every twelve hours.9

All of this means that there is an immense amount of data available, and new sources of information are closing the gaps in the behavioural and personality profiles of each individual citizen. AI comes into play precisely where collected data is lacking: missing data is extrapolated, approximately, from other information that is available.10

Technically, the use of artificial intelligence mainly comes in the form of self-learning algorithms that feed back constantly and run on ever-faster computers. An algorithm is defined as a systematic, logical rule or clearly defined procedure that can lead to the solution of a problem or class of problems; it is a type of specification for dealing with problems that consists of clearly defined steps and can be applied as part of a computer program. According to Yvonne Hofstetter, the well-known German big data expert11, a system can be called intelligent “when it shows behaviour that was not originally intended by the programmer. It makes decisions whose implications and consequences have not been anticipated and thought through.”12 Kai Schlieter has defined AI similarly in an interview: “Artificial intelligence describes the attempt to teach independent behaviour to systems – robots or software. These systems learn to orient themselves in unfamiliar environments and to solve problems.”13 That goes for real environments such as streets and self-driving cars, as well as virtual environments in the form of databases and data universes. More broadly, AI means “a general mental capacity that can reason, solve problems independently, understand complex ideas, learn quickly and also draw on experience while it does all this.”14

Even though the first generation of AI systems has already produced alarmingly good results, entering history with Watson as the star of Jeopardy or with “Deep Blue”15, which in 1997 had already beaten the then world chess champion Garry Kasparov16, a large leap in technology has been made possible by the development of artificial neural networks. These network technologies simulate human brain function and convert data into mathematics, but in a more efficient form than has been possible until now.17 In this regard, Google has developed an important product: an artificial neural network that combines image recognition with speech recognition. The “Neural Image Caption Generator” is able to recognise individual objects in an image and, for example, to describe it in language as follows: “A group of young people playing a game of frisbee.” Or: “A herd of elephants walking across a dry grass field.” The system, therefore, does not just recognise individual objects but, as the examples show, can also capture the relationships between them and the action within the image.18

Neural networks are based to a significant extent on the research of Geoffrey Hinton19, whose method of modelling the brain is built on the theory that human intelligence arises from only a very small number of basic algorithms, which can be applied multifunctionally to processing tasks such as language, vision and logic. According to Hinton, two essential characteristics of intelligent systems can be summarised (a minimal sketch in code follows the list):

  1. Pattern recognition: AI systems can recognise objects, faces, expressions, or even language, which can be picked up by cameras and microphones. Deviations from normal patterns and rules can also be recognised.
  2. Prediction: AI systems can also predict certain developments by using available information to calculate the probability of future events. Of course, such “predictions” – of the development of share and currency exchange rates or the success of future feature films, for example – are by no means certain and cannot be considered binding.
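
To make these two capacities concrete, here is a minimal, purely illustrative Python sketch. The labelled examples, the event history and all numbers are invented; they stand in for what real systems learn from cameras, microphones and big data:

```python
# Minimal sketch (illustrative only, not any vendor's system) of the
# two capacities named above, reduced to toy form.

from collections import Counter
from math import dist

# 1) Pattern recognition: classify a new observation by its nearest
#    labelled example (a toy stand-in for face/object recognition).
labelled = {(0.9, 0.1): "frisbee", (0.1, 0.8): "elephant"}

def recognise(features):
    return min(labelled, key=lambda known: dist(known, features))

# 2) Prediction: estimate the probability of a future event from the
#    relative frequency of past outcomes (no certainty implied).
history = ["up", "up", "down", "up", "down", "up"]

def probability(event):
    return Counter(history)[event] / len(history)

print(labelled[recognise((0.8, 0.2))])   # -> frisbee
print(probability("up"))                 # -> 0.666...
```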

Self-optimising AI software also works independently on analysing our behaviour on the basis of thousands of pieces of information from our web searches, online monitoring systems, cameras, sensors and, more recently, heating thermostats and smoke detectors, for example. This permanent “datafication” – the recording and translation of our behaviour into mathematical dimensions and machine language – is being used to predict our future behaviour more and more effectively. Most important here, of course, is our behaviour as consumers, which is not only predicted on the basis of big data but is also to be programmed and steered towards specific purchasing decisions.

II. Techno-philosophical foundations: Artificial intelligence and cybernetics

Historically, the development of artificial intelligence is the product of a scientific worldview known as cybernetics, founded in the 1940s by the American mathematician Norbert Wiener (1894–1964).20 Its central article of faith was the assumption that machines and living beings work according to the same operating principles: the independent control of processes using information, prediction and the feedback of information, and the re-optimised self-control of processes. According to this assumption, humans do not function very differently from, e. g., a heating thermostat that constantly measures room temperature as a basis for controlling the heating, which in turn also affects the room temperature. The same operating principle largely applies to both humans and machines: self-control through feedback loops, through which the behaviour of systems can be monitored using suitable feedback.21
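
The thermostat analogy can be written down in a few lines. The following sketch is purely illustrative, with an invented room model (heating adds 1.5 °C per step, the room always loses 0.5 °C); it shows the cybernetic loop of measurement, comparison with a setpoint, action and feedback:

```python
# Minimal sketch of the cybernetic feedback loop described above,
# under an assumed, made-up room model.

def simulate_thermostat(setpoint=21.0, temperature=17.0, steps=6):
    for step in range(steps):
        error = setpoint - temperature      # feedback: measure deviation
        heating_on = error > 0              # control decision
        if heating_on:
            temperature += 1.5              # heating warms the room...
        temperature -= 0.5                  # ...while heat is always lost
        print(f"step {step}: {temperature:.1f} °C, "
              f"heating={'on' if heating_on else 'off'}")

simulate_thermostat()
```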

Another important figure here is the physician Ivan Pavlov, who received the Nobel Prize in 1904 and became famous for a special kind of “dog training”: he proved that a specific reflex could be triggered by a specific stimulus. Behaviour, according to his research, can be formed, reinforced and predicted using appropriate rewards and feedback reactions. The behavioural scientist B. F. Skinner perfected tests and trials of this reward-feedback mechanism and developed the now famous conditioning apparatus known as the Skinner box to measure and control behaviour. His experiments on pigeons, in which he was able to reinforce specific behaviours, have become famous. Precisely this experimental approach – a sophisticated reward and feedback model – now serves as the basis for the programming of artificial intelligence.22
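
How a Skinner-style reward loop translates into code can be shown with a minimal “bandit” learner, a standard building block of reinforcement learning. This is a hedged sketch: the two actions and their reward probabilities are invented, and the update rule is the simplest running-average estimate:

```python
# Hedged sketch: the reward-and-feedback model as it appears in AI
# programming, here as a minimal multi-armed bandit. Values invented.

import random

actions = ["peck_left", "peck_right"]
reward_probability = {"peck_left": 0.2, "peck_right": 0.8}  # hidden from learner
value = {a: 0.0 for a in actions}   # learner's running reward estimates
counts = {a: 0 for a in actions}

for trial in range(1000):
    # explore occasionally, otherwise exploit the best-looking action
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=value.get)
    reward = 1.0 if random.random() < reward_probability[action] else 0.0
    counts[action] += 1
    # feedback: nudge the estimate towards the observed reward
    value[action] += (reward - value[action]) / counts[action]

print(value)  # the rewarded behaviour dominates, as in the Skinner box
```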

III. Artificial intelligence in practice

This brings us to the use of artificial intelligence in practice. A particularly interesting commercial application of the new AI technologies has arisen around the signals that the now-ubiquitous mobile phone broadcasts constantly once it has been turned on.

AI in the supermarket

As part of a new marketing concept known as geo-fencing, several receivers are deployed in shops that use these mobile signals, together with GPS data, to calculate the position of a mobile device precisely. If an organisation knows where a person is standing within a shop, it can send personalised adverts. This works particularly well using Apple’s iBeacon technology, which allows users to be located with high accuracy within a building and makes it very easy to send visitors special offers on their smartphone. As well as mobile signals, cameras can also be used for this new form of personalised advertising. Installed in supermarkets, these devices not only know which products a person is looking at at any given moment, but can also analyse facial expressions to determine that person’s mood. These emotions, once recorded and “understood”, are best used, again, to send the customer adverts on their phone tailored to their specific mood. The company Synqera from St Petersburg, for example, works in this area with facial recognition software that records a customer’s facial expressions while they are paying and tries to interpret the emotions underlying these expressions. So, for example, a stressed 30-year-old might be offered a bottle of whisky one evening, perhaps with a special price attached.23 It is not difficult to see how good money might be made in the future using analysis of emotions.
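
Reduced to logic, the decision described here is a simple lookup from inferred context to offer. The sketch below is purely illustrative – there is no real beacon or emotion-recognition API behind it, and all aisles, moods and offers are invented:

```python
# Illustrative sketch only (no real iBeacon API): geo-fencing logic as
# a rule matching a shopper's position and detected mood to an offer.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Shopper:
    aisle: str          # inferred from beacon/receiver signals
    mood: str           # inferred from facial-expression analysis
    age: int

OFFERS = {
    ("spirits", "stressed"): "Whisky, tonight only: 20% off",
    ("snacks", "happy"): "Try the new crisps, 2 for 1",
}

def personalised_ad(shopper: Shopper) -> Optional[str]:
    return OFFERS.get((shopper.aisle, shopper.mood))

print(personalised_ad(Shopper(aisle="spirits", mood="stressed", age=30)))
```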

Although this new form of advertising is being celebrated rapturously by marketeers, it represents a huge incursion into consumers’ right to privacy; more and more, consumers are being placed, unnoticed, in a manipulative advertising environment. The practice has scarcely been examined and has therefore been able to spread entirely unhindered.

Juli Zeh and Ilija Trojanow write that the constant surrender of personal data and the promotions offered on the basis of it reduce the actual range of activities and options for consumption to a small number of pre-sorted opportunities. The consumer is served with an increasing number of offers tailored for them, corresponding precisely with what are assumed to be their interests. These patterns are very difficult to get rid of – people carry them around wherever they go.24

Radio wave and camera surveillance is already installed extensively – in train stations, airports, shopping centres, town and city centres, car parks, swimming pools etc. – and makes for ostensible security. This “security”, however, represents a massive intrusion into the civil liberties of citizens because deviant behaviour is often very quickly interpreted by security forces as suspicious. The (false) accusations that may go along with this are, of course, always made under the guise of the security of citizens.25 This is how Martin Schulz, President of the European Parliament, sees the situation as well; he says that the connection of big data and the “hysterical increase in security” could lead to an anti-liberal, anti-social and anti-democratic society. “If the citizen is degraded to nothing more than an economic unit and the state holds them under suspicion, this represents a dangerous link between neoliberal and authoritarian ideology.”26

AI applications in the games industry and casinos

But today, it is not only when we visit shops that intelligent algorithms are calculating the best moment for our attention and readiness to buy. The same applies to the use of AI in advertisements in online games. Modern AI systems, which learn to understand us better and better, can respond to us more and more effectively, and AI has therefore become an important topic in the field of modern online games. We can assume a large amount of market potential in this area: according to MediaBrix, a well-known advertising company, around 250 million people play online games on their smartphones or comparable devices. It can be determined exactly when players need a reward and therefore at what point in the course of the game they are most ready to make a purchase; the appropriate advertising is inserted at this exact moment. In this sector, we speak of so-called “breakthrough moments”, in which players are on the verge of achieving a particular goal or have just achieved it. The most exact determination possible of the correct emotional timing for advertising to appear can increase brand perception by up to 500 %.27
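
A toy version of such a “breakthrough moment” trigger might look as follows. The threshold and the event structure are invented for illustration; real systems would learn these from behavioural data rather than hard-coding them:

```python
# Sketch under stated assumptions: a toy "breakthrough moment"
# detector of the kind described above. All thresholds invented.

def is_breakthrough(progress: float, just_levelled_up: bool) -> bool:
    """A player close to a goal, or who has just reached one, is
    treated as maximally receptive to an ad."""
    return just_levelled_up or progress >= 0.9

def maybe_insert_ad(progress: float, just_levelled_up: bool) -> str:
    if is_breakthrough(progress, just_levelled_up):
        return "show rewarded ad now"
    return "hold ad back"

print(maybe_insert_ad(progress=0.95, just_levelled_up=False))  # show
print(maybe_insert_ad(progress=0.40, just_levelled_up=False))  # hold
```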

Casinos are a very profitable field of application for artificial intelligence. Those who assume that today’s slot machines are the relatively simple mechanical devices of the past are mistaken. In modern casino operation, the aim is to make players want to keep playing; in the best case, this flow state would never end. To achieve this, it is important to be able to deduce from a player’s specific behaviour within the game how best to proceed with them.28

And this is precisely what is done by implementing AI, not least with reference to Skinner himself, who once pointed out that the conditioning model he described could be of use in the operation of gaming machines. Players in Las Vegas apparently behave very similarly to his pigeons, as the anthropologist Natasha Schüll of MIT discovered.29

Schüll researched the behaviour of players over many years and discovered that players stay longer at machines that pay out smaller amounts than at those that pay out larger sums. This is apparently because a large win represents too sharp a break from the game up to that point, so players tend to interrupt the game and take a break after winning a large amount.30

Profitable business could be developed from such research. Caesars Entertainment alone employs 700 IT experts to develop bonus programs that exploit this, allowing around 45 million people to be conditioned in modern casinos. These systems differentiate between 90 demographic groups, for each of which special bonus programs are tailored. Cameras identify the corresponding player type and provide an individual game design for each visitor to the casino. The data of all the players, along with a complete behaviour profile, then finds its way into a database and can be used over and over again and perhaps passed on and sold. Caesars Entertainment has the largest of these databases, the value of which is estimated at a billion dollars.31

AI in politics

Largely unnoticed by the public, the topic of artificial intelligence has also become very important for politics. Building on the philosophy of behaviourism according to Skinner and his colleagues, a completely new understanding of the politics of modern governments has emerged. The aforementioned mechanisms of reward and feedback are important here.

According to modern behavioural economics, humans are limited in their cognitive abilities. They often make decisions based on spontaneous suggestions that arise from simple causal intuition rather than from statistical thinking and complex deliberation, and these decisions can therefore often be wrong. For the behavioural economist Daniel Kahneman, we humans are “association machines” that react automatically to certain stimuli.32 Kahneman’s research shows how people tend to believe stories that sound plausible, or may even invent these stories themselves.33 In this regard, Kahneman distinguishes between two types of thinking: a “fast thinking” of spontaneous suggestion, which enters our consciousness automatically because of a multitude of past experiences and often leads to hasty conclusions, and a “slow thinking”, which ponders carefully and generates conclusions based on facts, but does not always get the chance to come into play.34

In his 2012 standard reference work “Thinking, Fast and Slow”, he demonstrates human limitations in this area over more than 600 pages and warns against taking human intuition on trust. Judgements we make as a result of “fast thinking” are often based on the so-called priming effect, which can lead our thoughts astray because environmental conditions help shape these judgements. According to Kahneman, priming occurs in conjunction with the so-called anchoring effect, which can cause, for example, estimates of numbers to be correspondingly higher if a study participant had previously spun a higher number on a wheel of fortune.35

A similarly shocking example from the chamber of horrors of modern behaviourism is an experiment reported by Kahneman in which German judges imposed higher sentences if they had previously rolled higher numbers with dice.36 Kahneman concludes: “The illusion of competence is not only an individual error of judgement; it is rooted deeply in the culture of the society. Facts that call basic assumptions into question – and therefore threaten people’s livelihoods and self-esteem – are simply dismissed.”37 Many people, in other words, trust their intuitions too much.

It has been concluded from these and other similar studies that humans, as limited and deficient beings, need paternalistic care. And this is why it is important to take people by the hand and lead them “custodially” to happiness and fortune.

Paternalism and political nudging

For some time, this approach, called paternalism, has been promoted quite openly in politics. The implication is that the state and other institutions should be allowed to steer citizens towards certain decisions that will be in their best interests in the long term. The concept, transferred from the advertising industry38 to politics, is called nudging because the behaviour and decisions of citizens are “nudged” in the right direction to ensure their own wellbeing. With an explicit connection to Kahneman and mutual appreciation and referencing39, Richard Thaler and Cass Sunstein are considered the founders of libertarian paternalism. They developed the concept in detail in their 2008 bestseller “Nudge: Improving Decisions about Health, Wealth, and Happiness”40.

In the view of the paternalists, it is reasonable and justified for a state to use insights from behavioural economics and incorporate small tweaks into its legislation that use “nudges” to cause citizens to behave better: to save energy, make a retirement plan or eat more healthily.41 This sounds reasonable and is advocated openly in public, although nudging ultimately represents a more or less legal form of manipulation that practically invites abuse, particularly when combined with social media.

It is important to realise that this is not simple theorising: Cass Sunstein became head of Barack Obama’s Office of Information and Regulatory Affairs and, while there, introduced nudging into US policy, after Obama had already actively worked with thirty behavioural economists during his 2008 election campaign. Sunstein can point to successes such as a programme that provides greater transparency with respect to fuel consumption and fuel costs and which, together with similar US government initiatives during Obama’s first term, saved 90 billion dollars per year.42

British prime minister David Cameron introduced a “nudge unit”, officially titled the “Behavioural Insights Team”, which is closely connected to the office of the prime minister and was formed on the advice of Richard Thaler. Nudging is also gaining popularity in Germany. Justice Minister Heiko Maas, for example, calls nudging “the wise middle ground between patronisation by the state and inaction”. Angela Merkel has launched her own project group under the promising title “Effective Governance”.43

Nudging can be seen in politics in such measures as the deterrent product information on cigarette packets, the use of quality seals on food packaging or declaration requirements for product characteristics, such as the obligation of household appliance manufacturers to provide information on energy efficiency so that consumers can purchase more efficient devices. Nudging has also been employed to great effect in Denmark. In Copenhagen, for example, a Danish Nudging Network project painted green footprints on pavements leading to bins; this apparently reduced litter on the streets by 40 per cent.44

Much of this is certainly common sense, such as displaying unhealthy foods less conspicuously than healthy ones in canteens and school kitchens. Sunstein has repeatedly stressed that nudging influences decisions without impinging on the freedom of individuals.45

Nudging by Google

As we can see, not all nudging is equal. The most charming nudgers do not fail to achieve their aims and, when nudging is implemented in the spirit of openness and transparency, as Sunstein is always quick to emphasise,46 it can perhaps have a meaningful effect. At its core, however, this new governmental technique must be viewed critically when it is used in conjunction with artificial intelligence, and without the knowledge of those affected, to achieve political influence. This is quite possible today for the large information corporations such as Google and Facebook.

On this subject, the publicist Eli Pariser recalls Google’s announcement, as early as 2009, of what it called “personalised search for everyone”. This special search option allows Google to create a relatively precise picture of every user via 57 different pieces of identifying information (browser type, location etc.), which is why different people will receive different search results for the same search terms. Who is googling a specific term therefore makes a difference to the search results.47 This phenomenon, known as the “filter bubble”, ensures that through the personalisation of searches, users only receive results that match their opinions and beliefs. Pariser observes that this filter bubble contributes to the shrinking of our horizons by providing us with increasingly similar information. In contrast with human intelligence, which filters anew every day which influences we perceive and which we disregard, artificial intelligence makes systematic pre-selections based on algorithms. The result is “confirmation bias”48, which leads us always to confirm our own views and exclude all differing opinions. There is a danger here of polarising society and causing separate groups to arise that increasingly find themselves in conflict because of a lack of mutual understanding, destroying social cohesion. There is talk of potential social fragmentation, which we can see in the USA, for example, where the Democrats and Republicans are drifting further and further apart and political compromises are increasingly hard to come by.49 Long live the traditional medium of the newspaper, which, in contrast to the web, presents us with other people’s opinions and allows a certain critical distance from one’s own thinking.
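
The mechanism is easy to caricature in code. The sketch below is not Google’s actual ranking, merely an illustration of how boosting results that match a stored profile yields different first pages for different people; all profiles and scores are invented:

```python
# Illustrative sketch (not Google's algorithm): ranking by similarity
# to a user's recorded preferences produces a filter bubble.

user_profile = {"politics": "left"}    # built from identifying signals

results = [
    {"title": "Left-leaning analysis", "politics": "left", "relevance": 0.6},
    {"title": "Right-leaning analysis", "politics": "right", "relevance": 0.9},
]

def personalised_rank(results, profile):
    # boost items that match what the user already believes
    def score(item):
        bonus = 0.5 if item["politics"] == profile["politics"] else 0.0
        return item["relevance"] + bonus
    return sorted(results, key=score, reverse=True)

for item in personalised_rank(results, user_profile):
    print(item["title"])
# The same query with a different profile yields a different first page.
```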

Google’s influence on elections

This leads us to Google’s direct ability to influence elections. According to Robert Epstein, a well-known behavioural psychologist with a Harvard doctorate, Google can not only influence the outcome of an election but also directly have a say in it itself. “No matter what (Google) management’s intentions might be: the program already decides the outcome of elections throughout the world today.”50 In his studies, Epstein examined Google’s algorithm and the fact that, in reality, 90 % of people only look at the first page of search engine results. A small change in this algorithm can have great consequences: from one second to the next, events, like politicians, can either vanish into thin air or, indeed, grow in strength. There is even a technical term for this: SEME, the “search engine manipulation effect”.

A test with 4,556 participants in India and the USA demonstrated the significance of what Google shows on its first page. The study, conducted by Epstein at the American Institute for Behavioral Research and Technology (AIBRT), revealed that a large proportion of undecided voters could be influenced in favour of one candidate or another:51 the order in which the politicians appeared in the search results alone could influence up to 20 % of undecided voters, and in some groups, up to 60 % of voters could be moved in favour of a particular candidate. “99 per cent of the participants had no idea they were being manipulated.” Gerd Gigerenzer, director of the “Center for Adaptive Behavior and Cognition” at the Max Planck Institute for Human Development in Berlin, confirmed that since users only look at the top search results, their order has a significant influence: “Whether good or bad news about Donald Trump is pushed to the top of the results could affect the outcome of the primaries in the USA.”52
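
Why mere ordering has such leverage follows from position bias: attention falls off steeply down the page. The toy simulation below uses invented click-through rates, not the study’s data, to show how swapping the order of otherwise identical results changes which coverage undecided voters actually read:

```python
# Toy simulation (assumed numbers, not the study's data): why result
# order matters when most users only look at the top of the page.

import random

# click probability by rank: attention drops off steeply
attention = [0.40, 0.20, 0.10, 0.05, 0.03]

def pro_a_readers(order, undecided=10_000):
    """Count how many undecided voters end up reading pro-A coverage."""
    pro_a = 0
    for _ in range(undecided):
        for rank, item in enumerate(order):
            if random.random() < attention[rank]:
                pro_a += item == "pro_A"
                break                      # reader stops after one click
    return pro_a

print(pro_a_readers(["pro_A", "pro_B", "pro_A", "pro_B", "pro_B"]))
print(pro_a_readers(["pro_B", "pro_A", "pro_B", "pro_A", "pro_A"]))
# Merely swapping the order shifts which candidate's coverage is read.
```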

Recently, there have also been studies by Nicholas Diakopoulos, a computer scientist and journalism professor at the University of Maryland, which examined whether Google treated certain candidates favourably or discriminated against them. The result was very interesting: 7 of the first 10 results in a Google search provided positive messages about the Democrats, while in similar searches on Republicans, an average of only 5.9 positive news items about the politicians in question were shown. Of course, this raises the question of bias in the algorithm. Is it echoing a general consensus on the web that prefers more liberal positions? Who has intervened here, and who exactly will control this in the future?53

Nudging by Facebook

Now let us look at Facebook and its existing toolbox of potential instruments for manipulation. Facebook maintains its own AI laboratory that gathers data on the relationships between users. Its goal is to know as precisely as possible what Facebook users are doing in the course of the day, where they spend time, what products they like and for which parties and politicians they vote. An important technology for this is the “DeepFace” system presented in 2014, which can recognise individual faces within a large number of people with an accuracy of over 97 per cent.54

The “Likes” given to images and news items on Facebook can indicate the ethnicity, political views, religion, relationship status, gender, sexual orientation and nicotine, alcohol and drug consumption of a user with some accuracy.55 This was demonstrated by, among other things, an experiment at the University of Cambridge on which Markus Morgenroth reports. In the experiment, statistical analysis of the Likes of 58,000 participants was used to infer details about their lives; only 68 Likes on average were required to achieve very high accuracy. Predictions of whether an American participant had an African-American or Caucasian background were correct in 95 per cent of cases; 93 per cent of male/female predictions were correct, as was the distinction between Christians and Muslims. Affiliation with the Democrats was correctly calculated for 82 per cent of participants, with the Republicans for 85 per cent. Predictions of sexual orientation were correct for 88 per cent of men and 75 per cent of women. The values for correct determination of drug use were relatively weak, at 65 to 73 per cent.56
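
Technically, such predictions are usually simple: each Like becomes a feature with a learned weight, and a logistic function turns the weighted sum into a probability. The sketch below illustrates the principle only; the pages and weights are invented, not taken from the Cambridge model:

```python
# Hedged sketch (invented weights, not the Cambridge model): trait
# prediction from Likes as a weighted sum passed through a sigmoid,
# which is essentially logistic regression.

from math import exp

# assumed per-Like weights for the trait "votes Democrat"
weights = {"NPR": 1.2, "Harley-Davidson": -0.8, "Lady Gaga": 0.4}

def probability_democrat(likes):
    score = sum(weights.get(page, 0.0) for page in likes)
    return 1 / (1 + exp(-score))   # sigmoid squashes the score into [0, 1]

print(round(probability_democrat({"NPR", "Lady Gaga"}), 2))   # high
print(round(probability_democrat({"Harley-Davidson"}), 2))    # low
```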

With these results, there is obvious scope to exploit the findings economically, which, unsurprisingly, is already happening. For example, the company Big Data Scoring from Estonia offers a credit-scoring process based on Facebook profiles. An algorithm expresses creditworthiness on a scale of one to ten on the basis of a person’s online behaviour – a service that will be of increasing interest not only to banks, but also to online shops and property companies. According to the company’s own assertions, around 7,000 data points can be deployed per applicant, including information about the applicant’s behaviour while the credit application is being filled in, such as how long it takes and whether the terms and conditions are read.57 It is also nice to know that the software is selling well and is already being used in 10–15 countries; in the future it will also be available in Germany, Austria and Switzerland! At the Hamburg-based company Kreditech, 15,000 data points are utilised in calculations of a person’s creditworthiness, including information from the applicant’s Facebook, Xing or LinkedIn profiles, to which the company demands access.58

In studies by the Stanford University psychologist B. J. Fogg, the News Feed was manipulated for testing purposes. The experiment was performed on 689,000 people in two experimental groups. For one group, the News Feed was changed so that more negative news appeared; for the other, more positive news. The contents of the News Feed caused the positive group to spread more positive content, and vice versa. This demonstrated that people could be influenced deliberately: “emotional contagion” could be produced relatively easily by changing the algorithm.59

Facebook and Twitter influencing votes

All of this demonstrates clearly that Facebook can be involved in political affairs. As already mentioned, users’ voting preferences can be predicted using Likes, for example. This makes it possible to influence certain preferences by controlling a user’s News Feed. As we know, these feeds are always curated individually on Facebook: out of 1,500 news items, only 300 are shown to the user in question.60 The Facebook algorithm ensures that users are more likely to see status updates from friends who have similar opinions to their own.61 If Facebook showed news from all contacts, this would certainly be too much information and Facebook would probably not work very well. This means that on Facebook, too, there is a filter bubble like the one already identified at Google, with which the (pre-)selection of information could also be turned into concrete policy.
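
A caricature of such curation fits in a dozen lines. The sketch below is not Facebook’s actual ranking; with invented items, it merely shows how selecting 300 of 1,500 candidates by agreement with the user’s views produces an opinion-confirming feed:

```python
# Minimal sketch (invented scoring, not Facebook's system): curating
# 300 of 1,500 candidate items by affinity with the user's views.

import random

random.seed(1)
candidates = [
    {"id": i, "stance": random.choice(["agrees", "disagrees"])}
    for i in range(1500)
]

def curate(items, shown=300):
    # rank items from like-minded friends above dissenting ones
    ranked = sorted(items, key=lambda it: it["stance"] == "agrees",
                    reverse=True)
    return ranked[:shown]

feed = curate(candidates)
print(sum(it["stance"] == "agrees" for it in feed), "of", len(feed), "agree")
# With ~750 agreeing candidates, all 300 shown items confirm the user.
```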

But that is not all: for some years, there has been a button on Facebook in the USA with which a user can share with other users their intention to vote – the well-known “I’m a voter” button.62 During the 2010 Congressional elections, a Facebook study with two control groups of 600,000 people each examined what effect varying this button for 61 million users would have. The study, “A 61-million-person experiment in social influence and political mobilization”, was published two years later and showed how voter turnout can be encouraged through Facebook.63 The result was striking. Voter behaviour could, in fact, be influenced, and not only the direct user was affected, but their friends and friends of friends as well, in line with the famous Facebook multiplier effect. By using this Facebook “voter megaphone”, 340,000 more people were moved to go to the polls. Voters with specific political beliefs and from specific regions were also very likely to be mobilised. All of this is a clear indication of the potential opportunities to manipulate elections, which is very dangerous, particularly on a medium such as Facebook, from which a majority of Americans obtain their political news.

The most recent information concerns the primaries preceding the presidential election of 2016. The Republican Ted Cruz contracted the data company Cambridge Analytica to analyse the pages of millions of Facebook users according to their individual personality profiles in order to identify the voters most likely to be swayed to vote for Cruz.64

But Google and Facebook are not the only ones in the business of politics. According to Simon Hegelich, professor of political data science at the Technical University of Munich (TUM), Twitter also joined the fray some years ago, for example in the special election for Ted Kennedy’s former Senate seat, which was influenced by Twitter bots. The same applies to Donald Trump in 2016; some Twitter accounts have been observed tweeting in favour of Trump in exactly the same words, which is a clear indication of the use of bots. According to Hegelich, 5–10 % of Twitter users are not real people but Twitter bots, also used for propaganda, which can completely distort social trends. “If a politician asks, for instance, what the mood on the internet is like at the moment, and is then told that the internet is against refugees, then perhaps he will change his policy in order to match the supposed will of the voters.”65

In summary, it is clear not only that the ongoing automation of politics may mean its demise and hence the end of democracy, but also that the segmentation of the information available to users can itself be exploited politically. According to Eli Pariser, the user-specific curation of Google and Facebook is curbing political discussion and making democracy an impossibility, and it is doing this without exercising censorship in the classical sense. In this way, the internet disenfranchises citizens by hiding other political campaigns.66 Besides, most people are simply too lazy to seek out other information – especially when they do not even know that it exists.

Democracy – a discontinued model

Followed to its logical conclusion, and as Tim O’Reilly, inventor of the term Web 2.0, put it in a nutshell, societies in the future are likely to be controlled better using “algorithmic regulation” than with laws and political persuasion67 – just as, in the future, cars will drive themselves without accidents, and just as aeroplanes are already largely flown by autopilot today. We can see from this that strategists such as Tim O’Reilly, Peter Thiel and many companies in Silicon Valley regard democracy as “outdated technology”. Unlike machine-supported cybernetic feedback control systems, political planning processes and legislative initiatives are slow and cannot react to changes quickly enough. Instead of the old democratic processes and political procedures, therefore, intelligent systems should in future adapt to developments and innovations by continually evaluating and comparing the current situation with their goals, comparable with a thermostat. If necessary, countermeasures could be taken automatically!68

But in this model of the future, who sets the room temperature? Who sets the goals for the planned cybernetic self-controlling systems? Where, to put it precisely, does the common good remain as the actual reference point of social development, the yardstick for politics and democracy?

IV. Artificial intelligence and political nudging called into question

As mentioned, this new form of politics could mean the end of politics itself. When the basis of democratic societies is called into question and people’s autonomy increasingly eroded, then at the latest this new technology must be thoroughly questioned.

The economists Bruno S. Frey at the University of Basel and Jana Gallus from the Harvard Kennedy School call for clear constitutional guidelines for the use of AI-supported nudging, since this is the only way in which citizens can participate in decisions about the conditions under which the government or parties can utilise nudging. “The assumption that nudges will always be used in the interest of the people is most questionable, if not plainly wrong.”69 The lawyer Andrej Zwitter, who studies nudging, among other topics, at the University of Groningen in the Netherlands, goes a step further: “Generally, both in politics and in the private sector, the disenfranchisement of citizens through behavioural modification should be stopped. This could be implemented through a ban on nudging, which deliberately manipulates citizens and aims to keep them unaware of this.”70 According to Evgeny Morozov, the crux of the issue is the motive to use nudging at all, the naive belief that many problems should be solved through changing our individual behaviour and the cause of social problems should be understood as the failure of individuals rather than the “result of certain structural conditions of their socio-economic milieu.”71

Above all, there is great danger in the fact that we are increasingly dependent on systems that we do not understand. The AI that Google, Amazon or Facebook couple with their server farms is learning more and more and making decisions that are no longer comprehensible to humans. The control of system-relevant infrastructure such as factories, power plants, server farms or financial systems increasingly depends on AI that we no longer fully understand.72

On 1st May 2014, the physicist Stephen Hawking, together with a number of other scientists, published an open letter warning about the development of artificial intelligence. Hawking and his colleagues see a great danger in artificial intelligence, above all that it could lead to an intelligence explosion once a machine has achieved the cognitive abilities of a human and can further optimise itself from that point onwards. At this stage of development, AI could outwit financial markets, manipulate leaders or develop weapons systems that we can no longer understand.73

In the summer of 2015, the scientists were prompted to raise the ante in another open letter, this time from Buenos Aires: “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”74 This time, 16,000 scientists signed, referring explicitly to the continuing development of autonomous weapons systems and stating that the development of military AI was comparable, in terms of the danger to humanity, with that of nuclear weapons.

Another group of scientists, led by Gerd Gigerenzer, director of the Harding Centre for Risk Literacy, and Dirk Helbing, Professor of Computational Social Science at ETH Zurich, warn in their Digital Manifesto of a loss of control to sophisticated algorithms and artificial intelligence.75 The signatories also include Yvonne Hofstetter, who refers strikingly to the dangers associated with AI: “Such systems are an assault on the autonomy of humankind. They work only on the basis of uninterruptible total surveillance.”76

In addition to whole legions of scientists, some Silicon Valley tycoons also warn of the dangers of artificial intelligence if it were to gain the ability even remotely to simulate the human brain.77 Bill Gates and Tesla founder Elon Musk are among this group of very prominent people warning publicly that artificial intelligence could become a threat to humanity.78 Musk, otherwise completely undaunted – witness his plans for a libertarian colonisation of Mars within the next 15 years – warns, in view of the power and concentration of technology at Google, that the company could wipe out the planet by mistake, and considers artificial intelligence more dangerous than the atomic bomb.

Superintelligence

However, the greatest danger seems to stem from the highest stage of AI development in the form of superintelligences. This is not primarily about the AI-supported manipulation of citizens by companies and governments, but about the potential of new intelligences to take power without effective oversight by state or business organisations.

An important authority in this field is Prof. Nick Bostrom, director of the Future of Humanity Institute in Oxford and author of the book “Superintelligence”, published in 2014.79 For Bostrom, a superintelligence is any type of intelligence that surpasses human intelligence. In particular through the reconstruction of the neural structures of the human brain, Bostrom believes that by 2075 a form of intelligence could be created that continually improves itself, which he thinks could lead to the kind of intelligence explosion that Hawking and his colleagues warned of in 2014.80

For Bostrom, there is a danger that this new form of intelligence might soon try to make decisions about the future itself and not let limited human intelligence encroach on its territory. Bostrom fears that this intelligence could hack all security systems and always escape any preventive confinement, if only to ensure that it is not hindered in carrying out a set task. It would therefore always strive for survival and ensure it is not stopped. According to experts, there is a danger that this intelligence could, in order to complete a task it has been set, wipe out all of civilisation.81 Therefore, these scientists wish to halt all research on AI until the question of controlling such systems and the problem of ethical orientation and value-based goal-setting have been answered, at least approximately.82

This development is seen quite differently, however, by Ray Kurzweil, Google’s director of engineering and inventor of genius. He essentially sees great opportunities for humanity once it is finally able to merge human and machine intelligence. It will then be possible, he believes, to overcome the restriction imposed on the human race by the limited number of a hundred billion possible neural synaptic connections in the brain, using implants or connections to external systems.

Kurzweil describes in his 2005 book “Humanity 2.0”83 in very precise terms how this combination of human and machine intelligence would work, in his view. For him, there is no doubt that the first computers will pass the Turing test before the end of the 2020s.84 This means that, because of the experimental design of this test, computer intelligence will no longer be distinguishable from the intelligence of biological humans. Kurzweil assumes that once the machines have reached a sufficient state of development, the traditional strengths of human and machine intelligence will be mutually reinforcing. For humans, this is above all the talent for pattern recognition, which, because of the massive parallelism and autonomous organisation of the brain, represents “optimal conditions for recognising hidden regularity in chaos”. People can learn from experiences and linguistic information, draw conclusions, deduce rules about how mental images build reality and then vary these rules.85

A traditional strength of computers, on the other hand, is storing vast amounts of data and being able to recall facts at any time. As well as having the ability to apply acquired knowledge quickly and as often as necessary and, above all, to communicate knowledge extremely quickly, non-biological intelligence can simply download knowledge and abilities from other machines, and one day, from people as well. Combining the strengths of humans and machines would, for Kurzweil, enable an incredible technological push, particularly once technological progress is no longer limited by the speed of a human brain. This will especially be possible if robots built at the molecular level, a few micrometres in size – “nanobots” – can take over countless tasks in the human body and, among other things, halt the ageing process. But these tiny robots would not only be employed in our bodies as anti-ageing agents; they would also enter the brain’s capillaries and drastically increase our intelligence.

At this point, at the latest, our (machine) brain power would, according to Kurzweil, grow exponentially, at least doubling every year. Since biological capacity remains limited, in the end the non-biological portion of our intelligence would predominate. Kurzweil characterises this final state as follows: “The human ability to understand and respond appropriately to emotion (so-called emotional intelligence) is one of the forms of human intelligence that will be understood and mastered by future machine intelligence. … Some of our emotional responses are tuned to optimize our intelligence in the context of our limited and frail biological bodies. Future machine intelligence will also have “bodies” (for example, virtual bodies in virtual reality, or projections in real reality using foglets) in order to interact with the world, but these nanoengineered bodies will be far more capable and durable than biological human bodies. Thus, some of the “emotional” responses of future machine intelligence will be redesigned to reflect their vastly enhanced physical capabilities.”86

Overarching all of this, Kurzweil sees the concept of the singularity, by which he means entry into a human-machine civilisation in which the machines would become human, even though they did not have biological origins. This would be the next evolutionary step, in which most of civilisation’s intelligence would be non-biological.

Kurzweil comforts those who are unsettled by this scenario with the conviction that this development would not represent the end of biological intelligence as such. “In other words, future machines will be human, even if they are not biological. This will be the next step in evolution, the next high-level paradigm shift, the next level of indirection. Most of the intelligence of our civilization will ultimately be nonbiological. By the end of this century, it will be trillions of trillions of times more powerful than human intelligence. However, to address often-expressed concerns, this does not imply the end of biological intelligence, even if it is thrown from its perch of evolutionary superiority. Even the nonbiological forms will be derived from biological design. Our civilization will remain human – indeed, in many ways it will be more exemplary of what we regard as human than it is today, although our understanding of the term will move beyond its biological origins.”87

Kurzweil says all this very seriously and is relying on technological progress to keep him healthy until the transition to singularity is achieved. “Sufficient information already exists today to slow down disease and aging processes to the point that baby boomers like myself can remain in good health until the full blossoming of the biotechnology revolution, which will itself be a bridge to the nanotechnology revolution.”88 Then, illness, age and death will no longer be insurmountable obstacles for Kurzweil – they are circumstances that he cannot accept. He sees illness and death as inconveniences, as problems to be solved.89 According to Kurzweil, today we are already able to reprogram our biochemistry, a task he intends to do everything necessary to achieve: “I have been very aggressive about reprogramming my biochemistry. I take 250 supplements (pills) a day and receive a half-dozen intravenous therapies each week (basically nutritional supplements delivered directly into my bloodstream, thereby bypassing my GI tract). As a result, the metabolic reactions in my body are completely different than they would otherwise be.”90

These ludicrous ideas about the future of humanity are not coming from a science fiction writer or Hollywood director but – it must be remembered – from a director of engineering at the largest and most powerful company in the world. Kurzweil is also the founder of the Singularity University, which was launched a few years ago with funding from Google as well as others, such as the NASA space agency.91 Major sponsors include Cisco, Nokia, Autodesk, Deloitte and Google.92

Regardless of who supports all of these futuristic visions, and whether they will be technically feasible in the next few decades, there is one glaring question: do we, as a human society, even consider this future desirable? Do we want the non-biological proportion of intelligence to be significantly greater than the biological? Do we want robots to replace most jobs? Do we want to spread nanorobots throughout our brains? Do we want to live with the knowledge that anyone who refuses will become a second-class intelligence because they simply cannot keep up with the speed of thought of the superintelligences on the central computer and the nanobots in the brains of their colleagues? Sven Gábor Jánszky93 questions the economic utilisation of a superintelligence whose developers are not only taking a risk for themselves but potentially placing all humans in danger, including, of course, those who would prefer to lead their lives without the involvement of superintelligences. “Since we all carry risk in this way, it would only be fair if we could all participate in the benefits and potential profits.”94

V. Solutions to the AI trap

This raises the social question of how we are to deal with artificial intelligence in the future. According to the experts, it is not yet too late to align AI, on the basis of a social consensus, for the benefit of all.

Cultivating competence!

What is needed first and foremost is the cultivation of expertise in dealing with the new technologies. According to Schlieter, dealing with data, and with social media, for example, should be taught as a compulsory subject in every school.95 The same applies to the development of competence on the AI issue in the media, and particularly among journalists. The subject must no longer be treated as a fanciful technology, but must be addressed in the context of its social implications in public life. Above all, it is important to develop this competence in politicians. They in particular should gain a basic understanding of the AI issue, in order to ensure that the wrong legislation is not produced by mistake.

Expert committees, supervisors and digital fundamental rights

Furthermore, interdisciplinary and internationally nominated expert committees must be formed to deal seriously with the dangers of artificial intelligence. This addresses the need for a systematic technology assessment worthy of the name – one that enacts proactive measures to prevent negative developments in artificial intelligence and generates benefit for the entire international community.

Artificial intelligence should be under state supervision at all times, as also applies, for example, to nuclear energy. In this area, the International Atomic Energy Agency (IAEA)96 has shown some effectiveness in its years of activity, even if the organisation has not exploited its potential by a long stretch. Following Yvonne Hofstetter, it can be concluded that with the right political influence on programming, damage from AI can still be averted by society. For precisely this reason, she calls for a new supervisory authority, because too much data and power today lie with a very small number of decision-makers: “We need a trusted authority, oversight for algorithms.”97 Daniel Oberhaus, writing in the usually pro-technology magazine Vice, also sees the need to establish supervisory authorities, although he does not make any specific proposals.98 In any case, for Frank Pasquale, law professor at the University of Maryland, it may be necessary to use legal means to halt algorithms: “Some data methods are just too invasive to be permitted in a civilised society.”99 We might therefore also consider having algorithms examined by independent experts as part of a sort of algorithm MOT, and institutionalising an algorithm officer analogous to the data protection officer.100

Martin Schulz, president of the European Parliament, vigorously opposes the tendency towards a clearly identifiable technological totalitarianism, which must be met with active protection of fundamental rights. Schulz therefore calls for a Charter of Fundamental Rights for the digital era. This is urgent because, within complex systems, more and more algorithms are being programmed by other algorithms, raising complex ethical questions that need a basic social debate.101 In his estimation, therefore, a social movement is needed that “places the inviolability of human dignity at the centre of its considerations and does not permit humans to be degraded into mere objects. … This is about nothing less than the defence of our fundamental values in the 21st century. It is about not allowing the objectification of human beings”102.

Control of algorithms during elections and referendums

The need for oversight of algorithms is particularly clear in the context of elections and referendums. As shown, the use of social bots can mislead the public and create false impressions; some types of nudging may already constitute the offence of electoral manipulation. The biggest problem here is the complete invisibility of the new forms of propaganda on the internet, which are particularly dangerous precisely because they are so difficult to defend against.103 Prof. Epstein therefore calls for strict regulation and oversight of search functions and the corresponding algorithms in relation to elections: “We believe that search engine manipulation(s), with probability bordering on certainty, are already influencing election outcomes around the world.”104

Therefore, Google’s search algorithms, as well as the Like function at Facebook, should be severely limited during elections and other voting periods.105 This would preferably happen in conjunction with the international algorithm convention called for by Hofstetter.106 To protect elections, among other things, digital expert observers would be needed, whose task would be to monitor not only “rogue states” but also our own doorstep.

The same applies to the potential influence of large online companies, particularly at times of direct democratic votes and referendums. Referendums in Germany or Switzerland, for example, should urgently be examined to determine whether everything has proceeded properly in recent years with regard to “SEO measures”107, if this can even be determined in retrospect. In any case, appropriate protective measures must be demanded, for example the disclosure of the contents of algorithms. In particular, Google, Facebook and similar companies must be obliged, at times of elections and referendums, to declare transparently how they position themselves on the questions being put to the vote and how they handle the relevant vote-related content.108

Outlawing AI weaponry

Experts agree that the current arms race in the field of AI must be stopped. Corporations, research institutions and entire countries are competing fiercely against each other, all striving to get ahead and build an advantage: a process in which the pace of innovation will always take precedence over safety.

An international ban should therefore be imposed on the manufacture and use of AI in weapons technology, for example in the form of autonomously operating weapons, similar to the bans on blinding laser weapons or cluster bombs, and exactly as called for by Hawking and his colleagues in the Buenos Aires declaration already mentioned. It would be the task of the United Nations to ensure its implementation, following the model of the declaration in the coalition agreement of the German Federal Government: “Germany will promote the incorporation of armed unmanned aircraft into international arms control regimes and advocate an international ban on fully automated weapons systems that take the choice on their use away from humans.”109

Defining AI rules socially: AI for the common good

Kai Schlieter is on the right track when he asks whether an AI can be developed that works for the benefit of people and is conceived as a participatory technology based on democratic rules. Schlieter develops the concept of a citizens’ AI that protects personal data, is not controlled from above and generates global benefit for all.110 This must clearly be the goal, because AI has recently made it plain that technology always poses a question of social responsibility.

This notion of a democratically aligned citizens’ AI must, however, be developed further: it is not enough to subject the advancement of state and private AI research to democratic oversight; it must also be ensured that all further development in this field serves the benefit of humankind as a whole. The question of our digital future concerns all of us, and all of us should engage with it. Under no circumstances should we hand matters of AI over to the very people who consider democracy an outdated concept and are compelled to seek profit above all else.

The central mechanism of our current economic system therefore cannot remain unexamined, for it consists in exactly this pursuit of maximum profit for the benefit of the few. If the further development of AI technologies is left to the “free play of forces”, this mechanism will entrench itself in this field as well. An economy of boundless greed will not stop short even of the merciless private exploitation of artificial intelligence.

Declaring AI technology part of the commons

Instead of continuing to accept great distortions in our civilisation and spending large sums of money to hand over ever more work processes to robots and artificial intelligences, it would be advisable to reflect deeply on the conditions under which the new AI technologies are exploited. The proper handling of AI, according to Schlieter, requires recourse to the commons; that is, no economic, commercial or governmental interests should take priority.111

Taken to its logical conclusion, this means transferring the results of AI research to the public domain. The public domain, or commons, consists of resources considered the common property of humankind. The cultural commons in question include, for example, human knowledge, cultural know-how and cultural achievements, language, software source code, electromagnetic waves and frequency ranges, and the internet. The creation of such cultural commons – to which, in the author’s view, AI technology should also belong – is rarely traceable to the exclusive achievement of an individual creator. Instead it rests on a treasure trove of cultural achievements accumulated over centuries and, above all, on decades of research in universities and, in the case of AI, in the military as well. Given the public funding involved, purely private exploitation appears unacceptable.112

 

An important result of these considerations would be a proposal to remove AI technology from private hands and return it to the cultural commons. The “Allen Institute for Artificial Intelligence” in Seattle already points in this direction: Microsoft co-founder Paul Allen finances it with more than a billion dollars, all research findings are published free of charge, and anyone may do with the data what they wish.113

Viewed from the perspective of society as a whole, this would mean entrusting all research findings to the commons and to appropriate international regulatory bodies, so that artificial intelligence serves the many instead of the profit interests of the few. This is exactly what civil society should be discussing and incorporating into legislative initiatives.

Michael W. Bader
May 12th, 2016

 

Michael W. Bader – entrepreneur, author, speaker

Michael W. Bader, born on December 26, 1952, studied German and political science, among other subjects, under Professor Martin Greiffenhagen in Stuttgart. Bader is the managing partner of the online consulting company “GFE Media GmbH”, founded in 1979 in Göppingen, and acting chairman of “Stiftung Media” (Media Foundation), Stuttgart, as well as initiator and board member of the Romanian Foundation “FCE – Foundation for Culture and Ecology”, headquartered in Mediaş, Transylvania (Romania).

 

Bibliography (English documents)

Most of the sources cited in this essay are in German; the footnotes refer to these. The following titles are published in English only, or an English edition is available.

  • Bostrom, Nick, Superintelligence: Paths, dangers, strategies, 1st ed. Oxford et al.: Oxford University Press, 2014.
  • Epstein, Robert, “Democracy at risk from new forms of internet influence.” EMMA Magazine 2014-2015 (Oct 27, 2014): 24–27. aibrt.org/downloads/EPSTEIN_2014-New_Forms_of_Internet_Influence-EMMA_Magazine.pdf (accessed: May 14, 2016).
  • Evans, Dave, “The Internet of Things: How the Next Evolution of the Internet Is Changing Everything.” Cisco Internet Business Solutions Group. www.cisco.com/c/dam/en_us/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf (accessed: Apr 16, 2016).
  • Hawking, Stephen, “Autonomous Weapons: an Open Letter from AI & Robotics Researchers.” Future of Life Institute (Jul 28, 2015). www.futureoflife.org/open-letter-autonomous-weapons (accessed: May 14, 2016).
  • Kahneman, Daniel, Thinking, fast and slow, 1st paperback ed. New York: Farrar, Straus and Giroux, 2013.
  • Kohli, Sonali, “Bill Gates joins Elon Musk and Stephen Hawking in saying artificial intelligence is scary: Danger zone.” Quartz (Jan 29, 2015). qz.com/335768/bill-gates-joins-elon-musk-and-stephen-hawking-in-saying-artificial-intelligence-is-scary/ (accessed: May 14, 2016).
  • Kosinski, Michal, David Stillwell and Thore Graepel, “Private traits and attributes are predictable from digital records of human behavior.” PNAS Proceedings of the National Academy of Sciences Vol. 110, No. 15 (Feb 12, 2013). www.pnas.org/content/110/15/5802.full (accessed: May 1, 2016).
  • Kurzweil, Ray, The singularity is near: When humans transcend biology. New York, NY: Viking, 2005.
  • Kurzweil, Ray and Terry Grossman, Fantastic Voyage: Live Long Enough to Live Forever. New York, NY: Plume, 2005.
  • LeCun, Yann, Yoshua Bengio and Geoffrey Hinton, “Deep learning.” Nature, No. 521 (2015): 436–444.
  • Pariser, Eli, The filter bubble: How the new personalized web is changing what we read and how we think. New York, NY et al.: Penguin Books, 2011.
  • Thompson, Andrew, “Engineers of addiction: Slot machines perfected addictive gaming. Now, tech wants their tricks.” The Verge (May 6, 2015). www.theverge.com/2015/5/6/8544303/casino-slot-machine-gambling-addiction-psychology-mobile-games (accessed: Mar 11, 2016).

Notes

  1. See www.gfe-media.de/blog/gegen-libertarismus-und-monopolismus/ (German)
  2. Viktor Meyer-Schönberger, „Was ist Big Data? Zur Beschleunigung des menschlichen Erkenntnisprozesses.“ Aus Politik und Zeitgeschichte 65. Jg., 11-12 (09.03.2015): S. 14–19.
  3. Dirk Helbing et al., „Digitale Demokratie statt Datendiktatur: Reihe: Das Digitale Manifest.“ Spektrum der Wissenschaft Online (17.12.2015). www.spektrum.de/news/wie-algorithmen-und-big-data-unsere-zukunft-bestimmen/1375933 (letzter Zugriff: 28. März 2016).
  4. Anton Priebe und Assaf Baciu, „Mensch vs. Maschine: Wenn Algorithmen bessere Botschaften entwickeln als Marketer; Interview.“ Onlinemarketing.de (25.04.2016). onlinemarketing.de/news/mensch-vs-maschine-algorithmen-botschaften-marketer (letzter Zugriff: 25. April 2016).
  5. Kai Schlieter, „Mensch, gib mir deine Daten: Künstliche Intelligenz.“ taz (18. 09. 2015). www.taz.de/!5227616/ (letzter Zugriff: 21. September 2015).
  6. Kai Schlieter, Die Herrschaftsformel: Wie künstliche Intelligenz uns berechnet, steuert und unser Leben verändert (Frankfurt am Main: Westend, 2015).
  7. Markus Morgenroth, Sie kennen dich! Sie haben dich! Sie steuern dich!: Die wahre Macht der Datensammler (München: Droemer, 2014).
  8. Dave Evans, „The Internet of Things: How the Next Evolution of the Internet Is Changing Everything.“. Cisco Internet Business Solutions Group. www.cisco.com/c/dam/en_us/about/ac79/docs/innov/IoT_IBSG_0411FINAL.pdf (accessed: Apr 16, 2016).
  9. Helbing et al., „Digitale Demokratie statt Datendiktatur“.
  10. Morgenroth, Sie kennen dich! Sie haben dich! Sie steuern dich! S. 21ff.
  11. Yvonne Hofstetter, Sie wissen alles: Wie intelligente Maschinen in unser Leben eindringen und warum wir für unsere Freiheit kämpfen müssen (München: C. Bertelsmann, 2014).
  12. Götz Hamann und Adam Soboczynski, „Der Angriff der Intelligenz: Yvonne Hofstetter.“ Zeit Online (10.09.2014). www.zeit.de/kultur/2014-09/yvonne-hofstetter-kuenstliche-intelligenz (letzter Zugriff: 6. März 2016).
  13. Kai Schlieter und Marcus Klöckner, „Viele halten die Demokratie für eine veraltete Technologie.“ Telepolis (12.10.2015). www.heise.de/tp/artikel/46/46140/ (letzter Zugriff: 6. März 2016).
  14. Schlieter, Die Herrschaftsformel. S. 242
  15. Deep Blue (1996) and Watson (2011) were computer programs developed by IBM on the basis of its AI research. Jeopardy! is an American game show in which participants have to supply the question to a given answer.
  16. Jürgen Schmieder, „Wenn Sie Angst vor dem Terminator haben, machen Sie einfach die Tür zu.“ Süddeutsche, SZ.de (11.03.2016). www.sueddeutsche.de/digital/serie-kuenstliche-intelligenz-wenn-sie-angst-vor-dem-terminator-haben-machen-sie-einfach-die-tuer-zu-1.2901564 (letzter Zugriff: 15. März 2016).
  17. Schlieter und Klöckner, „Viele halten die Demokratie für eine veraltete Technologie“.
  18. Schlieter, „Mensch, gib mir deine Daten“.
  19. Yann LeCun, Yoshua Bengio und Geoffrey Hinton, „Deep learning.“ Nature, Nr. 521 (2015): p. 436–444.
  20. At the so-called Macy Conferences (1946–1953), the basis was laid for a science encompassing the functioning of the human brain as well as its electronic adaptation in computers. It was probably Heinz von Foerster who – with reference to Wiener’s essays in 1947 – suggested the name “cybernetics” for this science, supported by Warren McCulloch, John von Neumann and Walter Pitts. The core group of the Macy Conferences also included the anthropologist Margaret Mead and the psychologist and founder of action research, Kurt Lewin.
  21. Schlieter, „Mensch, gib mir deine Daten“.
  22. Schlieter, Die Herrschaftsformel. S. 118ff.
  23. Morgenroth, Sie kennen dich! Sie haben dich! Sie steuern dich! S. 54.
  24. Juli Zeh, „Schützt den Datenkörper!“ FAZ.net (11.02.2014). www.faz.net/aktuell/feuilleton/debatten/die-digital-debatte/politik-in-der-digitalen-welt/juli-zeh-zur-ueberwachungsdebatte-schuetzt-den-datenkoerper-12794720.html (letzter Zugriff: 10. Mai 2016).
  25. Ilija Trojanow und Juli Zeh, Angriff auf die Freiheit: Sicherheitswahn, Überwachungsstaat und der Abbau bürgerlicher Rechte (München: Hanser, 2009). S. 48 und S. 138.
  26. Martin Schulz, „Warum wir jetzt kämpfen müssen: Technologischer Totalitarismus.“ FAZ.net (06.02.2014). www.faz.net/aktuell/feuilleton/debatten/die-digital-debatte/politik-in-der-digitalen-welt/technologischer-totalitarismus-warum-wir-jetzt-kaempfen-muessen-12786805.html (letzter Zugriff: 25. April 2016).
  27. Schlieter, Die Herrschaftsformel. S. 116.
  28. Ibid. p. 121.
  29. Andrew Thompson, „Engineers of addiction: Slot machines perfected addictive gaming. Now, tech wants their tricks.“ The Verge (May 6, 2015). www.theverge.com/2015/5/6/8544303/casino-slot-machine-gambling-addiction-psychology-mobile-games (accessed: Mar 11, 2016).
  30. Ibid.
  31. Ibid.
  32. According to Kahneman, the problem is that “people tend to apply causal thinking improperly, namely to situations that require statistical thinking.” Daniel Kahneman, Schnelles Denken, langsames Denken, 18. Aufl. Pantheon-Ausg. (München: Pantheon, 2015). S. 103.
  33. Ibid. p. 247ff.
  34. Ibid. p. 32ff.
  35. Ibid. p. 152.
  36. Ibid. p. 152.
  37. Jens-Christian Rabe, „Misstraue dem Vertrauten!: ›Schnelles Denken, langsames Denken‹.“ Süddeutsche, SZ.de (25.05.2012). www.sueddeutsche.de/kultur/schnelles-denken-langsames-denken-misstraue-dem-vertrauten-1.1367484 (letzter Zugriff: 25. März 2016).
  38. In pre-digital times, heaps of confectionery, tobacco products and booze were still offered in the checkout areas.
  39. Kahneman, Schnelles Denken, langsames Denken. S. 510ff.
  40. Thaler and Sunstein, “Nudge. Improving Decisions about Health, Wealth and Happiness”, cited from Schlieter, Die Herrschaftsformel. S. 137.
  41. Jan Dams et al., „Merkel will die Deutschen durch Nudging erziehen: Verhaltensökonomie.“ Die Welt (12.03.2015). www.welt.de/wirtschaft/article138326984/Merkel-will-die-Deutschen-durch-Nudging-erziehen.html (letzter Zugriff: 8. Januar 2016).
  42. FehrAdvice, „Nudges in der Praxis: 5 Beispiele; 26.05.2013.“. fehradvice.com/blog/2013/05/26/nudges-in-der-praxis-5-beispiele/ (letzter Zugriff: 8. April 2016).
  43. Schlieter, Die Herrschaftsformel. S. 141.
  44. Dams et al., „Merkel will die Deutschen durch Nudging erziehen“.
  45. Ibid.
  46. Sunstein explains during a visit to the Federal Chancellery that the principles of transparency and neutrality had to be observed. “Nudging can then be a very successful instrument to increase the happiness of citizens.” Ibid.
  47. Eli Pariser is a well-known “MoveOn – Democracy in Action” activist and author of many articles for, among others, the Washington Post and the Wall Street Journal, as well as author of Eli Pariser, Filter Bubble: Wie wir im Internet entmündigt werden (München: Hanser, 2012). S. 10.
  48. Ibid. p. 96.
  49. Helbing et al., „Digitale Demokratie statt Datendiktatur“.
  50. heise.de/dpa, „US-Professor warnt: Google-Algorithmus kann Demokratie gefährden.“. www.heise.de/newsticker/meldung/US-Professor-warnt-Google-Algorithmus-kann-Demokratie-gefaehrden-2577764.html (letzter Zugriff: 5. März 2016).
  51. Ibid.
  52. Joachim Laukenmann, „Wie digitale Daten Wähler manipulieren können: Subtiler Einfluss.“ Welt.de (20.04.2016). www.welt.de/wissenschaft/article154572957/Wie-digitale-Daten-Waehler-manipulieren-koennen.html (letzter Zugriff: 21. April 2016).
  53. Christoph Drösser, „Er weiß es, bevor du es weißt: Algorithmen.“ Zeit Online (11.04.2016). www.zeit.de/2016/16/computer-algorithmen-macht-buerger-stadt (letzter Zugriff: 3. Mai 2016).
  54. Schlieter, Die Herrschaftsformel. S. 54f.
  55. A US study has shown that using only analysis of Facebook Likes, people’s ethnicity, political orientation, religion, relationship status, sexual orientation or nicotine, alcohol and drug consumption can be deduced; see Michal Kosinski, David Stillwell und Thore Graepel, „Private traits and attributes are predictable from digital records of human behavior.“ PNAS.org Proceedings of the National Academy of Sciences Vol. 110, No. 15 (Feb 12, 2013). www.pnas.org/content/110/15/5802.full (accessed: May 1, 2016).
  56. Morgenroth, Sie kennen dich! Sie haben dich! Sie steuern dich! S. 192.
  57. Ibid. p. 199.
  58. Frithjof Küchemann, „Alle Daten sind Kreditdaten: Kommerzielle Überwachung.“ FAZ.net (07.11.2014). www.faz.net/aktuell/feuilleton/debatten/eine-neue-studie-ueber-kommerzielle-ueberwachung-13253649.html (letzter Zugriff: 26. April 2016).
  59. Schlieter, Die Herrschaftsformel. S. 145.
  60. Ibid. p. 150.
  61. Pariser, Filter Bubble. S. 14 ff.
  62. Schlieter, Die Herrschaftsformel. S. 150.
  63. Ibid. p. 151.
  64. Laukenmann, „Wie digitale Daten Wähler manipulieren können“.
  65. Ibid.
  66. Pariser, Filter Bubble. S. 164 und S. 172.
  67. Schlieter, Die Herrschaftsformel. S. 253.
  68. Schlieter und Klöckner, „Viele halten die Demokratie für eine veraltete Technologie“.
  69. Bruno S. Frey and Jana Gallus according to Laukenmann, „Wie digitale Daten Wähler manipulieren können“.
  70. Andrej Zwitter according to ibid.
  71. Evgeny Morozov, „Ich habe doch nichts zu verbergen.“ Aus Politik und Zeitgeschichte 65. Jg., 11-12 (09.03.2015): S. 3–7. Hier S. 5.
  72. Schlieter und Klöckner, „Viele halten die Demokratie für eine veraltete Technologie“.
  73. Co-authors were Frank Wilczeck, Max Tegmark and Stuart Russell; according to Schlieter, Die Herrschaftsformel. S. 16.
  74. “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.” Stephen Hawking, „Autonomous Weapons: an Open Letter from AI & Robotics Researchers.“ Future of Life Institute (July 28, 2015). www.futureoflife.org/open-letter-autonomous-weapons/ (accessed: May 14, 2016).
    See also “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.” Michael Schulze von Glaßer, „Tötungsmaschinen, selbstgesteuert: Universal Soldier.“ Der Freitag 2015, Nr. 32 (07.08.2015). www.freitag.de/autoren/michael-schulze-von-glasser/toetungsmaschinen-selbstgesteuert (letzter Zugriff: 14. Mai 2016).
  75. The authors of the Digital Manifesto, published in 2015, are Dirk Helbing, Bruno P. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari and Andrej Zwitter. Helbing et al., „Digitale Demokratie statt Datendiktatur“.
  76. Hamann und Soboczynski, „Der Angriff der Intelligenz“.
  77. Sonali Kohli, „Bill Gates joins Elon Musk and Stephen Hawking in saying artificial intelligence is scary: Danger zone.“ Quartz (Jan 29, 2015). qz.com/335768/bill-gates-joins-elon-musk-and-stephen-hawking-in-saying-artificial-intelligence-is-scary/ (accessed: May 14, 2016).
  78. Thomas Schulz, „Dieser Herr macht bald Ihren Job: Künstliche Intelligenz.“ SPON (22.12.2015). http://www.spiegel.de/netzwelt/web/google-will-maschinen-denken-beibringen-a-1069072.html (letzter Zugriff: 3. April 2016).
  79. Nick Bostrom, Superintelligenz: Szenarien einer kommenden Revolution (Berlin: Suhrkamp, 2014).
  80. Bostrom reports that many AI experts are convinced that we will achieve AI at a human level by 2075. For him, this could mean the end of humanity. Maja Beckers, „Eine gefährliche Explosion der Intelligenz: Superintelligenz.“ Die Zeit 2015, Nr. 3 (15.01.2015). www.zeit.de/2015/03/superintelligenz-nick-bostrom-kuenstliche-intelligenz (letzter Zugriff: 27. März 2016).
  81. According to the author, this is also probably because artificial intelligences struggle with the application of the “proportionality principle” in legal, ethical and quantitative terms.
  82. Nick Bostrom und Oskar Piegsa, „Maschinen sind schneller, stärker und bald klüger als wir: Künstliche Intelligenz.“ Zeit campus 2015, Nr. 3 (07.04.2015). www.zeit.de/campus/2015/03/kuenstliche-intelligenz-roboter-computer-menschheit-superintelligenz/ (letzter Zugriff: 6. März 2016).
  83. Ray Kurzweil, Menschheit 2.0: Die Singularität naht, 2., durchgesehene Aufl. (Berlin: Lola Books, 2014). S. 26. (Ray Kurzweil, The singularity is near: When humans transcend biology. New York, NY: Viking, 2005.)
  84. Ibid. p. 201.
  85. Ibid. p. 27.
  86. Ibid. p. 30.
  87. Ibid. p. 31.
  88. Ibid. p. 210.
  89. Kurzweil in Ray Kurzweil und Terry Grossman, Fantastic Voyage: Live Long Enough to Live Forever (New York, NY: Plume, 2005).
  90. Kurzweil, Menschheit 2.0. S. 211.
  91. Heike Buchter und Burkhard Straßmann, „Die Unsterblichen: Ray Kurzweil.“ Die Zeit 2013, Nr. 14 (aktualis. 06.04.2013). www.zeit.de/2013/14/utopien-ray-kurzweil-singularity-bewegung/komplettansicht (letzter Zugriff: 13. März 2014).
  92. singularityu.org/community/partners/ (accessed: May 14, 2016)
  93. Sven Gábor Jánszky is a trend researcher and head of the 2b AHEAD think tank.
  94. Sven Gábor Jánszky, „Werden wir Menschen zum Spielball der Computer?“. THINK!TANK 3/2016 (2b.AHEAD Think-Tank, Leipzig, 19.04.2016), S. 17.
  95. Schlieter und Klöckner, „Viele halten die Demokratie für eine veraltete Technologie“.
  96. “IAEA is an independent intergovernmental, science and technology-based organization, in the United Nations family, that serves as the global focal point for nuclear cooperation; develops nuclear safety standards and, based on these standards, promotes the achievement and maintenance of high levels of safety in applications of nuclear energy, as well as the protection of human health and the environment against ionizing radiation; verifies through its inspection system that States comply with their commitments, under the Non-Proliferation Treaty and other non-proliferation agreements, to use nuclear material and facilities only for peaceful purposes.“ www.iaea.org/about/mission (accessed: May 1, 2016)
  97. Yvonne Hofstetter according to Hamann und Soboczynski, „Der Angriff der Intelligenz“.
  98. Daniel Oberhaus, „Die ersten sechs Gebote der Virtuellen Realität haben wir bereits gebrochen.“ Motherboard (Vice) (14.03.2016). motherboard.vice.com/de/read/wir-sind-laengst-dabei-die-ersten-gebote-der-virtuellen-realitaet-zu-brechen-467 (letzter Zugriff: 18. Mai 2016).
  99. Drösser, „Er weiß es, bevor du es weißt“.
  100. Andreas Dewes und Laila Oudray, „Die Algorithmen entscheiden: Physiker über digitale Diskriminierung.“ taz (02.04.2016). www.taz.de/!5286890/ (letzter Zugriff: 21. April 2016).
  101. Martin Schulz, „Freiheit Gleichheit Datenschutz: Warum wir eine Charta der digitalen Grundrechte brauchen.“ Zeit Online (27.11.2015). www.zeit.de/2015/48/grundrechte-netz-datenschutz-eugh/ (letzter Zugriff: 25. April 2016).
  102. Schulz, „Warum wir jetzt kämpfen müssen“.
  103. Laukenmann, „Wie digitale Daten Wähler manipulieren können“.
  104. Robert Epstein, „Democracy at risk from new forms of internet influence.“ EMMA Magazine 2014-2015 (Oct 10, 2014): 24–27. aibrt.org/downloads/EPSTEIN_2014-New_Forms_of_Internet_Influence-EMMA_Magazine.pdf (accessed: May 14, 2016). Also Schlieter, Die Herrschaftsformel. p. 149.
  105. According to Pariser, all large information companies must be obliged to use their power carefully and to disclose their filtering systems and their rules, at election times at least, monitored, for example, by an independent ombudsman. Pariser, Filter Bubble. S. 241.
  106. Hofstetter, Sie wissen alles. S. 296 ff.
  107. SEO (search engine optimisation) refers to the optimisation of a website to be listed as highly in Google’s results pages as possible.
  108. Pariser, Filter Bubble. S. 164.
  109. „Deutschlands Zukunft gestalten: Koalitionsvertrag zwischen CDU, CSU und SPD.“ (18. Legislaturperiode). www.bundesregierung.de/Content/DE/_Anlagen/2013/2013-12-17-koalitionsvertrag.pdf?__blob=publicationFile (letzter Zugriff: 14. Mai 2016). S. 178.
  110. Schlieter, Die Herrschaftsformel. S. 260.
  111. Ibid. p. 262.
  112. A form of administration of the commons proposed by Peter Barnes, among others, would be a trust fund, for example, that prices the use of resources and then divides the proceeds as commons dividends among all citizens. www.matrix-21.net/peter-barnes/
  113. Schmieder, „Wenn Sie Angst vor dem Terminator haben, machen Sie einfach die Tür zu“.