The algorithmic democracy – how did the technological transformation lead to the post-truth era?
The year 2016 was infused with talk of a post-truth – or post-fact – era. First, the Brexit campaign for the UK’s withdrawal from the European Union succeeded largely on the back of false statements. Some months later, Donald Trump was elected President of the United States. Trump became known for his tendency to ignore facts in his campaign speeches: according to the PolitiFact fact-checking website, up to seven out of ten of Trump’s statements during the election campaign were false.
Such observations have led many people to wonder whether facts even matter any more. In fact, Oxford Dictionaries chose post-truth as its word of the year for 2016. In their selection criteria, they noted that the concept succeeded in describing circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion. They also predicted that the term might end up becoming one of the main concepts that define our era.
So far, political researchers have been less thrilled with the new concept. According to Brendan Nyhan, a researcher from the US, we are dealing with such a new concept that there is no good long-term data which could be used to assess the “truthiness” of different eras, and it may never be available (see the news piece here). He also notes that the argument for a so-called post-truth era is also problematic, as it suggests there was some sort of a golden age when politics was based on facts. Indeed, if we are now living in a post-truth period, we must ask: when were we living in an era of truth? How long did it last, whose truth was prevailing and what was that “truth”?
Finnish researcher Paul-Erik Korvela (link in Finnish) is similarly unimpressed with the new concept. He points out that, in politics, truth has always been a slippery concept. Throughout history, facts have always been used selectively to serve ideological purposes. He thus argues that the desire for politics carried out solely based on facts and sensible arguments is based on a fairly naïve understanding of politics. The things people do tend to lend themselves to more than one narrative, which can then be propped up by rival value judgements. On the other hand, rationality is also a poor indicator of truth. A policy which may be sensible for one party might not be for another. In fact, that is why we have different political parties in competition with one another.
While the researchers’ criticism of the concept is important, it also gets somewhat sidetracked. The discussion on the post-truth era is not solely concerned with the relationship between politics and facts, or politics and truth. The environment of information exchange and communication, completely transformed by the internet, forms a bigger picture that we cannot ignore if we want to understand why the discussion on the post-truth era has arisen right now. The purpose of this article is to analyse what this is all about and what effects the technological transformation has had on our understanding of knowledge, truth and facts.
How does the internet affect us as managers of information?
The internet is not to blame for our desire to use the information flow to mainly seek confirmation for what we already believe. Nor is the internet at fault for creating a human being interested in spending time with others with similar mindsets. However, the internet might reinforce these characteristics, which come naturally to us as people.
In his book The Internet of Us, Professor Michael Lynch reflects on how the internet has reshaped our understanding of information. He suggests that the easy availability of all kinds of information has undoubtedly increased our ability to know more and more. At the same time, the easy availability of information has fooled us into believing that knowing things is as easy as typing a word into a search engine. Nevertheless, the increased amount of information does not necessarily result in improved awareness.
Lynch backs up this argument with the studies of psychologist Daniel Kahneman on fast and slow thinking. According to Kahneman, the way people process information can be roughly divided into two cognitive processes which operate differently: the fast and the slow system.
- In the fast system, information is processed in an automatic, subconscious, emotional and intuitive way. It is driven by our beliefs, previous experiences and the expectations we rely on in interpreting the surrounding reality. The fast system enables us to take quick stock of the situation and separate the unexpected from the norm, but is also prone to fallacies and overly quick judgements.
- In the slow system, information is processed in a more deliberate, thorough and rational way. It is based on the conscious examination of a topic.
According to Lynch, the information acquisition enabled by the internet, which he refers to as Google-knowing, may lead to an overemphasis on fast and intuitive reasoning at the cost of slow thinking. Finding information is so fast and effortless that obtaining it requires hardly any reflection. A few clicks, and we have found an answer to our question. This method works if the sources we find are reliable. However, this is not always the case. Distinguishing a reliable source from an unreliable one online is often hard work. What is more, on social media, fake news and other kinds of disinformation might appear nearly identical to carefully researched articles. Few will have the energy to check the sources, which will easily result in the spread of the incorrect information.
It is also important to understand that the algorithms behind Google search results are a far cry from neutral calculations; instead, the algorithm rewards with visibility the content that enhances the effectiveness of Google’s advertising. Indeed, websites with questionable content may end up at the top of Google’s search results. Journalist Hanna Nikkanen notes (link in Finnish) that the way Google’s algorithm favours actively updated interior design and family blogs is disproportionate, because their readership forms an easily excited group that is focused on a single topic and spends a lot of time online. Algorithms and code are blind in the sense that, to their operations, xenophobia appears just as “communal” as, for instance, blogs on motherhood. The Finnish search phrase for “what is national socialism” yields, at the top of Google’s results, the website of the Finnish Resistance Movement, classified by the police as a neo-Nazi organisation. By contrast, Ixquick.eu, which combines the results of several search engines, produces a completely different list of priorities. This indicates the power of search engines in shaping our worlds.
The rapid increase in the number of fact-checking websites in 2015 and 2016 tells another story of the quickly changing communication environment. Nevertheless, the sites have limited capacity to tackle the spread of misinformation amid the information overload on the internet. The frequently cited study by Brendan Nyhan and Jason Reifler found that people are strongly swayed by misinformation. Once adopted, it is difficult to correct misinformation, particularly when it supports people’s views of the world. In this case, the corrections often actually enhance the belief in the original misinformation. This phenomenon is referred to as the backfire effect. It has been found to be strongest among the most ideologically committed people. The results of the study by Nyhan and Reifler emphasise the significant responsibility of search engines and information providers, even though a more recent study indicates that the backfire effect might be less common than originally estimated.
Algorithms encourage living in bubbles
Over the past year, there have been increasingly lively discussions on the power of Facebook and other similar social media in shaping our reality. Researchers and journalists have expressed particular concern over the algorithms used by social media, which profile online users based on their activities. Algorithms allow the easy tailoring of individual content for consumers and the effective targeting by advertisers. However, from the viewpoint of open and equal discussion, this practice carries many threats, of which there are plenty of recent examples.
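The profiling logic described above can be illustrated with a toy model. The sketch below is purely hypothetical – real platform ranking systems use hundreds of signals and are not public – but it shows the basic feedback loop: content resembling what a user has already engaged with is pushed to the top of the feed, so each click narrows what is shown next.

```python
from collections import Counter

def rank_feed(posts, click_history):
    """Toy engagement-based ranking (hypothetical, not any real platform's code).

    Each post is a (title, topic) pair. The more often a topic appears in the
    user's click history, the higher its posts are ranked in the feed.
    """
    topic_weights = Counter(click_history)  # e.g. {"immigration": 2, "food": 1}
    return sorted(posts,
                  key=lambda post: topic_weights[post[1]],
                  reverse=True)

posts = [("Budget debate", "politics"),
         ("Crime report", "immigration"),
         ("New recipe", "food")]
history = ["immigration", "immigration", "food"]

feed = rank_feed(posts, history)
# the immigration story rises to the top simply because it was clicked before
```

Nothing in the model looks at accuracy or importance, only at past engagement – which is precisely why such ranking can tailor content for consumers and advertisers while eroding the common ground of public discussion.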
Mark Zuckerberg, the founder of Facebook, was subjected to fierce criticism after the United States presidential election of 2016. In the run-up to the election, Facebook was flooded with false statements and fake news, some claiming that Pope Francis had endorsed Trump and others suggesting that a murder might have been involved in the Hillary Clinton e-mail controversy. According to an analysis by BuzzFeed News, the most frequently shared fake news may have attracted greater visibility on Facebook just before the election than actual news content. The analysis also revealed that the fake news content was strongly in favour of Trump.
Facebook would have had the means to prevent the distribution of this content, but chose not to intervene. According to the boldest claims, the fake news and other false content might even have led to Trump’s victory. Nonetheless, Zuckerberg has shrugged off these allegations. In his view, fake news represents such a small part of all the content on Facebook that it could not have played a significant role in the outcome of the election. Other representatives of Facebook have echoed this view.
Despite these statements, many observations contradict the dismissive comments by the Facebook representatives. First of all, a study conducted several years ago pointed out that Facebook content can influence voting behaviour. “Get out and vote” announcements displayed to one group increased their turnout compared to a control group which did not see the announcements. Second, the controversial field experiments carried out by Facebook have indicated that the social media site can manipulate the emotions of its users by guiding its content with algorithms. People whose newsfeeds were shaped to include positive content reacted by writing positive posts. Correspondingly, those subjected to negative content reacted by sharing negative things. In addition, studies conducted by Facebook have observed that the social media site reinforces the polarisation of social realities. Liberal Facebook users are exposed to more liberal content on their newsfeed, while conservative users see content complying with their views.
This “bubble effect” caused by Facebook has also been tested in practice in Finland. A journalist at the Finnish broadcasting company YLE created a fake Facebook profile for his article (link in Finnish) to find out how deeply immersed he could get in the xenophobic “hate bubble” by solely clicking “like” on anti-immigrant content. The results of the experiment were alarming. In hardly any time, the journalist’s Facebook feed was filled with news suggesting that immigrants committing crimes was the sole talking point in the world. Content from mainstream media, such as YLE or the Helsingin Sanomat newspaper, was no longer anywhere to be seen.
When interpreting the results of the field experiments by YLE and Facebook, we must take into account that what we see on Facebook is also essentially affected by our network of friends and the material they publish. This was also highlighted by Facebook’s researchers in their study. Therefore, experiments carried out using an untouched fake profile do not fully correspond to reality. This does not mean, however, that we should not worry about such observations.
According to Cass Sunstein, who has studied digital culture, the internet has an insidious way of influencing the polarisation of opinions by providing radical opinions with wider visibility than before. This, in turn, might create an illusion that these radical views are more common than previously assumed, which will further increase their popularity and acceptability. In research literature, the fully segregated online discussion forums where the group members reinforce their beliefs with homogeneous messages have come to be referred to as “echo chambers”. They also provide an environment likely to strengthen different conspiracy theories. According to Sunstein, the polarisation of traditional media might also lead to similar phenomena. People’s opinions are drifting further and further apart solely based on what people are reading and watching. Liberals become more liberal by following the Huffington Post, for example, while conservatives become more conservative by watching Fox News.
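Sunstein’s reinforcement dynamic can be sketched as a deterministic toy simulation. The model below is an illustration of my own devising, not drawn from his work: agents hold opinions in [-1, 1], hear only voices from their own side of the divide, and drift toward the loudest (most extreme) voice in their chamber. Moderates radicalise; no one moderates.

```python
def echo_chamber_step(opinions, gain=0.1):
    """One round of a toy echo-chamber model (hypothetical illustration).

    Each agent listens only to agents on its own side of zero and moves a
    fraction `gain` of the way toward the most extreme opinion on that side.
    """
    new = []
    for o in opinions:
        side = [p for p in opinions if (p < 0) == (o < 0)]  # same-sign voices only
        loudest = min(side) if o < 0 else max(side)          # most extreme voice
        new.append(o + gain * (loudest - o))
    return new

opinions = [-0.2, -0.8, 0.3, 0.9]  # two moderates, two radicals
for _ in range(10):
    opinions = echo_chamber_step(opinions)
# after ten rounds the moderates have drifted toward their side's extreme
```

The average opinion strength grows round after round even though no one ever changes sides – a crude but concrete version of radical views gaining visibility and pulling the group along.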
As such, the phenomenon of the divergence of realities is nothing new. Political researcher Paul-Erik Korvela reminds us that people have been able to choose their own truths by only following the media that support their prejudices. He thus argues that the social media era does not essentially differ from the golden age of partisan newspapers.
There is a seed of truth in this observation. Nevertheless, simply presenting an example from the past does not prove that there is nothing new to current phenomena. New things are always constructed from old elements. Although the relationship between truthfulness and politics has been fickle throughout history, it is still important that we aim to understand how the nature of facts and telling the truth, as well as the underlying dynamics, have changed with the rise of digital communications, information searches and social media.
Social media gains popularity as news outlet
The great transformation of the mediasphere comes up in most articles analysing the post-truth era. The migration of readers from newspapers to digital formats has been known for a long time, but the effects of the transformation on the revenue models of the traditional media may have been even more dramatic than expected. Readers are unwilling to pay for content, even high-quality content, and the increasing use of ad-blocking software further diminishes the revenue of traditional media. What is more, the public is less and less likely to trust the traditional media. According to Gallup, Americans’ trust in the mass media has been declining for years. Polls in the run-up to the presidential election showed the lowest levels of trust in Gallup’s polling history.
At the same time, social media platforms are making gigantic profits and also gaining ground as news outlets. A survey by the Pew Research Center revealed that up to six out of ten US citizens regularly get their news from social media. Similarly, a report by the Reuters Institute indicates that around 50 per cent of EU residents use social media as a source of news every week. Nearly one third of 18- to 24-year-olds already use social media as their main news source. Despite the large differences between countries and age groups, there is a clear trend: social media is on its way to becoming the mass media of our era, and as a news platform too.
According to Reuters, Facebook is by far the most important social media platform for news. It is followed by YouTube and Twitter. Facebook and YouTube both have over a billion users, while Twitter has over 300 million. The social media giants reach unprecedentedly wide audiences, which also gives them a significant amount of power in shaping their users’ conceptions of truth via algorithms. Tiny partisan newspapers in Finland could only dream of having such influence.
In addition to distribution channels, traditional media outlets are also losing their position of power as the gatekeepers of information. Even though the report by the Reuters Institute suggested that the well-known media brands continue to have a strong foothold in the digital environment, they are no longer able to control the agenda of public discourse as in the past. Almost anyone can publish information these days. There is a broader range of different content available and personal interests are increasingly likely to guide media consumption. This results in the atomisation of media consumption, which creates fears that people are no longer being exposed to alternative interpretations of the world. Thanks to the influence of algorithms, our perspectives might be narrowed more inconspicuously than previously.
Few involved in the transformed mediasphere share the incentives of traditional media for ensuring that the content they produce is accurate. On the contrary, those spreading fake news in the digital media environment might even end up making lots of money (link in Finnish). The ease of spreading false information and its more extensive visibility are characteristic of our time. Although there have always been conspiracy theories, urban legends and myths, these have typically been confined to a small, limited audience. When aptly distributed online, any kind of misinformation can reach millions of views around the world in a matter of seconds, as was also observed with the US presidential elections.
Despite these concerns related to the transformation of communication technology, we should avoid creating an overly simplified dichotomy in which the traditional media are perceived as noble defenders of the truth and the new media as suspicious spreaders of misinformation. Once again, the reality is far more complicated. For example, in the UK the Eurosceptic mass media has been spreading the wildest rumours about the EU for decades. Among other things, they have claimed that the EU was going to ban working shirtless, double-decker buses and haggis, the traditional Scottish dish. While the Finnish tabloids have also done their fair share of spreading similar myths about the EU (Ikäheimo 2017), the scale and volume at which this has been done in the UK has been in a league of its own. According to media analyst James Stanyer, EU myths have been one of the most popular ways for the Eurosceptic press to approach topics related to the EU. Between 1995 and 2003 alone, the European Commission identified 126 different, widespread myths about the EU in British newspapers. In addition to factual errors and blowing things out of proportion, these articles tend to depict the EU as a bureaucratic apparatus consistently spawning rules that go against common sense and threaten the autonomy of the British public (Stanyer 2007, 134). In fact, it can be argued that the UK’s withdrawal from the EU was the natural culmination of the Euroscepticism orchestrated by the British media.
Trolls and bots create distortion
After the 2016 US presidential election, researchers from the University of Southern California Information Sciences Institute analysed the origin of the Twitter data published during the presidential debates. They found that up to around one fifth of the tweets published during the three televised debates came from bots, i.e. computer programs posing as humans. Indeed, the researchers who conducted the study, Alessandro Bessi and Emilio Ferrara, are concerned about the influence of bots. They point out that bots may further polarise online discussions and enhance the spread of false statements.
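To give a sense of how such detection works, here is a deliberately crude heuristic of my own. The real classifier used by Bessi and Ferrara combined large numbers of behavioural features; this sketch uses just two, with made-up thresholds, to show the underlying idea of flagging accounts that behave implausibly for a human.

```python
def looks_like_bot(account, max_tweets_per_hour=30, min_account_age_days=7):
    """Crude bot heuristic (hypothetical thresholds, not the researchers' method).

    Flags an account if it tweets faster than a human plausibly could,
    or if it was created suspiciously recently.
    """
    rate = account["tweets"] / max(account["active_hours"], 1)
    return rate > max_tweets_per_hour or account["age_days"] < min_account_age_days

human = {"tweets": 120, "active_hours": 40, "age_days": 900}    # ~3 tweets/hour
suspect = {"tweets": 4000, "active_hours": 48, "age_days": 3}   # ~83 tweets/hour
```

A rule this simple is easy to evade, which is exactly why production systems layer many such signals together – and why a fifth of debate tweets could come from software before anyone noticed.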
Professor Michael Lynch (2016), who has studied the communication culture on the internet, agrees with this view. According to Lynch, bots have proven to be efficient tools for deception, particularly because they are inexpensive and readily available. A large army of bots can be easily used to distort the public discourse, encourage people to act for the same cause or even vote for a particular candidate. Lynch argues that it would be a big mistake to dismiss the bots as nothing more than a new form of marketing, as has been done in some comments belittling the problem. According to him, bots have more in common with conmen trying to convince gullible people to act in a certain way.
The study by Ferrara and Bessi did not shed light on who was responsible for churning out the bots, and for what cause. However, recent studies indicate that the widespread fake news was a Russian propaganda effort aiming to undermine the credibility of Hillary Clinton. The campaign utilised the entire spectrum of modern information influence: bots, fake news sites, memes and trolls spewing out misinformation. An independent research group, PropOrNot, was able to identify over 200 websites regularly spreading propaganda orchestrated by Russia. According to their estimate, the websites were able to reach up to 15 million Americans. On Facebook, the fake news and other misinformation were viewed more than 213 million times. In fact, PropOrNot argues that the way the propaganda effort supported Trump was equivalent to a huge media campaign purchased with a lot of money.
We are no strangers to similar phenomena here in Europe. According to a report by BuzzFeed, one of Italy’s most popular political parties, the populist Five Star Movement, has built a sprawling network of websites, blogs and social media accounts that spread fake news, conspiracy theories and pro-Kremlin propaganda. In addition to the party’s own media channels, this network includes a collection of profitable “independent news” outlets that spread propaganda against the establishment, the EU and the US. These outlets have claimed, among other things, that the US is secretly funding human traffickers bringing migrants from North Africa to Italy, and that Barack Obama wanted to topple the current Syrian regime to create instability across the region and prevent China’s access to its oil resources. Stories like these regularly reach millions of Italians, as the politicians of the Five Star Movement have a huge following on social media. The experts interviewed by BuzzFeed estimated that the websites spreading misinformation are also making a significant amount of money for the party.
Based on these examples, it comes as no surprise that the World Economic Forum identified the spread of misinformation online as one of the most significant societal threats of our time as early as 2013. In our hyper-connected world, the dangers of misinformation are not merely confined to individual nations, but might also cause havoc in the stock market and influence politics in ways whose impacts we do not even fully understand yet.
Steps towards a fairer internet
The discussion on the social impacts of algorithms and artificial intelligence has gradually attracted the attention of political decision-makers, also in Europe. At a German media conference held in the autumn of 2016, German Chancellor Angela Merkel expressed her concern about how algorithms distort our perceptions. She called for more transparency in the algorithms used by internet platforms. According to Merkel, internet users have the right to know how and on what basis the content they receive via search engines is channelled to them. She argued that decision-makers must also begin to pay special attention to the phenomenon. Social media and search engines have become an “eye of a needle” through which the majority of the information we use passes.
The concern expressed by the world’s most influential politicians has also set the wheels in motion in Silicon Valley. Google, Facebook and Amazon announced that they were working together to find solutions for developing the ethics of artificial intelligence. Following severe criticism after the US presidential elections, Facebook and Google have promised several changes to their publication policies in order to put a curb on the spreading of fake news. Among other things, the companies aim to more carefully monitor the websites using their online advertising service and ban those that peddle fake news from using the advertising feature.
During the most recent turn of events, Facebook announced (link in Finnish) that Germany was the first country outside the US where the company would take action to tackle the fake news problem. Measures will be launched in Germany to allow Facebook users to report stories they suspect of being fake news to outside fact-checkers. If the fact-checker finds the news item untruthful, it will be flagged “controversial” on Facebook. Meanwhile, users will receive an announcement if they attempt to share these flagged posts, and the Facebook algorithm will no longer prioritise such content. The measures aim to restrain the attempts of extremist groups and foreign powers to exert an influence using information in the run-up to the German federal elections held in the autumn of 2017. According to Facebook, the measures might also be expanded to other countries.
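The workflow described above – report, outside fact-check, flag, warn, demote – can be summarised in a short sketch. The field names and the demotion factor below are hypothetical; Facebook has not published how its flagging is implemented.

```python
def apply_fact_check(post, verdicts):
    """Sketch of the reporting workflow described in the text.

    `verdicts` maps a story URL to an outside fact-checker's ruling
    (all field names and the 0.1 demotion factor are assumptions).
    """
    if verdicts.get(post["url"]) == "false":
        post["label"] = "disputed"        # flagged as controversial
        post["share_warning"] = True      # users warned before sharing
        post["ranking_weight"] *= 0.1     # algorithm no longer prioritises it
    return post

story = {"url": "example.com/fake-story", "ranking_weight": 1.0}
checked = apply_fact_check(story, {"example.com/fake-story": "false"})
```

Note that the post is demoted rather than deleted – the design choice Facebook announced keeps disputed content visible but strips it of algorithmic amplification, sidestepping accusations of outright censorship.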
Nevertheless, a burning question remains: will self-monitoring and market logic be enough to address the issue? Many doubt it, believing that more transparency and regulation are also needed. Several articles have suggested that independent researchers should be granted access to study the algorithms and data of the internet giants regardless of copyright laws, as only this would allow us to comprehend the social impacts of artificial intelligence more thoroughly. For instance, Facebook holds exclusive access to the information on the spread of fake news and other misinformation shared on Facebook before the US presidential election. Critics have compared that situation to tobacco companies having exclusive access to patient records.
In their book Big Data: A Revolution that Will Transform How We Live, Work and Think, Viktor Mayer-Schönberger and Kenneth Cukier proposed the establishment of a brand new group of professionals, the algorithmicians. After all, the increased complexity of societies due to technological development, and the need for more intense monitoring, is nothing new. Just as a need once arose to oversee corporate activities by employing outside auditors, we are now in need of professionals capable of assessing big data analyses and predictions in an impartial and trustworthy manner. When it comes to social media, the main duty of algorithmicians would be to ensure that the best interests of the public are served. Similar to ombudsmen, their work could also involve processing complaints submitted by consumers.
It is highly unlikely that the giants of the data economy will be spared increasingly strict regulation. Indeed, the European Commission has already warned Facebook, Microsoft, YouTube and Twitter that unless they fulfil their promise of eradicating all illegal hate speech from their services within 24 hours of publication, the Commission will enforce legislation to oblige them to do so. Similarly, policymakers all around the world will be forced to consider how to tackle the problems related to the new communication channels without setting up unnecessary limitations on the freedom of expression. This will be no easy feat. The giants of the data economy are influential lobbyists that will hang on tightly to their freedom to publish. What is more, regulation may also easily lead to unintended side effects. The risk is that the clean-up of mainstream social media will result in an increase in the popularity of unmoderated alternative media or encrypted networks.
From cuneiform script to print media, the societal impacts of new communication technology have always been difficult to anticipate. In 1938, thousands of Americans were struck by panic when a radio drama, “The War of the Worlds”, based on a novel by H. G. Wells, suggested an alien invasion by Martians was currently taking place on Earth. Such a panic would be unlikely to happen these days, as audiences are accustomed to diverse content and have become more critical towards the media. It could be that, in time, we will achieve a similar level of literacy and tolerance when it comes to social media. However, we cannot build the future on this premise alone. A fair and functional environment for information exchange and discussions is such a crucial part of the operations of modern democracies that we must also be able to guide the development in the intended direction.
While we may have a long way to go to enhance regulation, we nevertheless need more impartial research on the effects and ethicality of artificial intelligence. People have the right to know how algorithms affect their behaviour or awareness of the surrounding reality, who will benefit from the systems that learn from the behaviour of their users, and what are the objectives promoted by these new technological means. It is also clear that social media users must have the right to know if they are being used as test subjects in different field experiments.
Bessi, Alessandro and Ferrara, Emilio (2016): Social Bots Distort the 2016 U.S. Presidential Election Online Discussion. First Monday, volume 21, number 11.
Nyhan, Brendan and Reifler, Jason (2010): When Corrections Fail: The Persistence of Political Misperceptions. Political Behavior, volume 32, issue 2, pp. 303–330.
Reuters Institute: “Digital News Report 2016”
World Economic Forum: “Digital Wildfires in a Hyperconnected World”
Ikäheimo, Hannu-Pekka (2017): EU nautintovarkaana: Tulkintakehykset “EU kieltää” -artikkeleissa. Politiikka, no. 4/2016, pp. 263–280.
Lynch, Michael (2016): The Internet of Us: Knowing More and Understanding Less in the Age of Big Data.
Mayer-Schönberger, Viktor and Cukier, Kenneth (2013): Big Data: A Revolution that Will Transform How We Live, Work and Think.
Stanyer, James (2007): Modern Political Communication. Polity Press, Cambridge.
Sunstein, Cass: Republic.com 2.0.
Online articles and news
Facebook research: “Exposure to Diverse Information On Facebook”
Fastcodesign.com: “The Algorithmic Democracy”
Gallup.com: “Americans’ Trust in Mass Media Sinks to New Low”
Long Play: Hanna Nikkanen: “Mitä yhteistä on mammabloggareilla ja uusnatseilla?”
Pew Research Center: “News Use Across Social Media Platforms 2016”
Politiikasta.fi: Ari-Elmeri Hyvönen: “Politiikasta on syytäkin olla huolissaan”
Politiikasta.fi: Paul-Erik Korvela: “Olemme aina eläneet faktojen jälkeistä aikaa”
Poynter.org: “Fact-checking doesn’t ‘backfire’, new study suggests”
Reporterslab.org: “Fact-checkers’ reach keeps growing around the globe”
The Guardian: “Angela Merkel: internet search engines are ‘distorting perception’”
The New York Times: “Google and Facebook Take Aim at Fake News Sites”
The New York Times: “Mark Zuckerberg Is in Denial”
The New York Times: “Social Networks Affect Voter Turnout, Study Says”
The New York Times: “Facebook Tinkers With Users’ Emotions in News Feed Experiment, Stirring Outcry”
The Washington Post: “Russian propaganda effort helped spread ‘fake news’ during election, experts say”