Deepfakes and elections: Should Europe be worried?

DRI gathered European policymakers, civil society, academia and tech representatives for a transatlantic, multi-stakeholder discussion on deepfakes and democracy held on 29 and 30 October 2020. DRI’s Madeline Brady shares some of the highlights from the conversation, held under the Chatham House Rule as part of a project financed by the German Federal Foreign Office.  

Until now, no significant political disinformation based on deepfakes has surfaced in the EU. How big is the risk that it will? Could it significantly undermine the integrity of an election in an EU member state? What could and should be done? 

We looked at what is happening in the US elections to try to understand what could happen elsewhere in the future. We discussed the realistic risks to EU countries and possible mitigation strategies. This debate feeds into DRI’s upcoming paper on deepfakes and elections.   

Key takeaways on media manipulation in the US 2020 elections 

Polarisation primes for confirmation bias. The US is extremely polarised, so even when presented with evidence that something is not real, people’s opinions may not change. Examples from the 2020 election include spliced videos of Joe Biden taking a moment of silence, used out of context to promote the narrative of “Sleepy Joe”, and false videos of poll workers colouring in ballots, which promote a narrative of voter fraud. These examples are only effective when an audience is already primed to believe them. This highlights the fact that the broader media ecosystem matters when considering the threat of deepfakes. 

Let’s re-think the jargon. Cheapfakes might be just as effective in deceiving users as deepfakes, so such jargon does not matter to everyday users. These terms may even confuse or worry people. A term like “digital forgery” might be more self-evident, especially when it comes to labelling content on platforms. More behavioural research is needed to understand how people interpret and respond to content labels. Does the jargon provide users with the information they need, or does it create more confusion and less trust in online sources? A common labelling language across platforms would provide clearer signals to users. 

Complexity means AI will not solve it all. At this point in time, no AI model works perfectly to detect manipulated media. Models used by social media platforms have clear strengths and weaknesses: for example, they can easily detect nudity, but issues like satire add a layer of complexity that cannot be detected easily. As a result, companies cannot leave all of the content moderation work to machine learning models. Humans are still incredibly important to identify nuance. 

Strong journalistic norms are critical. Norms are needed to report on potentially manipulated media and misinformation more broadly without amplifying it. For example, when then-French presidential candidate Emmanuel Macron’s emails were leaked and shared on social media in 2017, French media placed a blanket ban on reporting them. This was because they did not have time to verify the content of the leaks and the timing made clear that this was an attempt to manipulate the election. 

Is Europe ready for the deepfake threat? 

Increased understanding and specific measures are needed, but a framework is already in place. Deepfakes are not an isolated new element but are considered as part of the EU’s wider framework for fighting misinformation. New threats are always emerging, for example the use of audio messages in Belarus to mislead protesters or the creation of fake political news pages. The entire threat landscape, including the interplay between domestic and foreign actors, should be considered. With this, a better understanding is needed of what the EU can do to prepare for the deepfake threat specifically. 

Create protocols and communication channels for rapid reaction. Governments should create a response toolbox, which would differ depending on the situation. Such a toolbox would require a methodological approach and further drilling down into hypothetical scenarios. Governments need effective communication strategies to report on facts or to share assessments made by government institutions. If deepfakes flood platforms at scale, government institutions monitoring the issue will need to define thresholds for evaluation before they are overloaded (e.g. looking at all incidents of deepfakes or focusing on the most dangerous ones, such as those with direct security implications). This solution assumes that governments value truth and that the media are independent. As a result, the quality of democratic institutions, particularly amongst the various EU member states, must be considered.  

Raise the general level of resilience. The EU already has some structures and initiatives in place to promote media literacy and social cohesion. Further investment in such programmes is important to prevent a culture of disbelief by default, where people no longer trust anything they see. In particular, such programmes should target not only school-age children but also older internet users. When it comes to video media, a more specific curriculum or set of tools may be needed to help internet users identify credible sources. 

Empower research and civil society cooperation. More research is needed to understand the problem of deepfakes and build technical solutions. The European Digital Media Observatory (EDMO) holds great potential to shed light on the use of manipulated media in Europe and build solutions. However, the institution is still new and will require financing and time to grow. Homegrown innovation should also be fostered within the private sector. Several startups like Netherlands-based Sensity are working on monitoring deepfakes and developing solutions within Europe. 

In summary, it is quite clear that disinformation trends are changing quickly. Actors around the world can learn from each other to identify emerging trends. By bringing together a range of actors and stakeholders, such panels provide a valuable opportunity for a collaborative exchange of solutions. 

We thank our panellists from TikTok, WITNESS, Partnership on AI, European External Action Service (EEAS) and Global Public Policy Institute (GPPI) for their contributions.

Geopolitics in a digitalised world: what do social media have to do with foreign policy?

By Rafael Goldzweig, Research Coordinator at Democracy Reporting International

The narrative of social media’s role in democracy has changed over the years. At first, social media was seen as giving a voice to groups without voices in the mainstream media, especially during the Arab Spring and popular responses to governments’ austerity measures following the 2008 global financial crisis. Evidence of Russian interference in the 2016 Brexit vote and US presidential elections reversed this narrative. Fast forward to 2020, when a wave of false information during the covid-19 pandemic brought the challenges of online disinformation to every country.  

Russia’s use of digital tools to influence other countries’ internal politics is now old news: China, Iran and Saudi Arabia have now established themselves in this field. Recently, the European Union accused China of being behind a huge wave of covid-19 disinformation campaigns aimed at weakening governments’ responses to the pandemic.  

The last decade saw the transition of social media from a tool of individual empowerment to an environment where information is weaponised for geopolitical gains. New state and non-state actors have entered the disinformation game, finding new ways to deceive public opinion and achieve geopolitical goals with every technological development. Looking at the national level, elections became a focal point for influence campaigns. Internationally, foreign policy actors witness hybrid threats as the new frontier of diplomacy in the digital space. Multinational private companies such as Facebook and Twitter now have a substantial influence on national sovereignty issues such as political advertising, online political speech, and broader geopolitical developments through the power and policies of their online platforms.  

This June, Democracy Reporting International (DRI) and the Stiftung Wissenschaft und Politik (SWP) think tank brought together foreign policy and technology experts to discuss trends and effects of the strategic use of social media for political gains. The two-day discussions revolved around the need to involve the foreign policy community in the geopolitical challenges posed by disinformation campaigns. The discussion also raised awareness of the new trends in this field, focusing on potential ways to minimise the negative impacts of the online environment.  

Two case studies illustrated the complexities of this issue: Ukraine and Libya. In addition to dealing with an ongoing conflict, Ukraine held presidential and parliamentary elections in 2019 while Libya was expected to hold elections in 2019 as part of its political transition roadmap. These highly politicised and militarised contexts bring new perspectives to the strategic use of social media for geopolitical purposes. 

From the Russian perspective, the West created the environment for modern information warfare by inventing social media platforms, which facilitated revolutions and protests around the world. From the West’s perspective, Russia started online information warfare against its neighbours (Georgia and Estonia) and in particular against Ukraine in 2014. Since the Maidan revolution, the Russian annexation of Crimea and the attack on Eastern Ukraine, Russia has been trying to influence opinion in some regions of the country in favour of Moscow, while using social media to further polarise political discourse in other regions. This, in a country where media freedom and polarisation were already issues. 

Pro-Russian narratives originated mainly from well-known media outlets, such as Sputnik and Russia Today. These were then widely shared and artificially pushed on different social media channels as the attack unfolded – in an online environment full of junk news and clickbait content that helped these influence campaigns succeed. In 2017, the Ukrainian government banned Russian social networks like VKontakte (VK) and Odnoklassniki, arguing that they were abused for Russian interference. Nevertheless, disinformation challenges continued, as could be seen during the 2019 elections. DRI’s analysis also showed that domestic actors, including presidential campaign teams, used disinformation tools, such as inauthentic Facebook groups, to attack the other side. 

The Libyan online environment has turned into a battlefield of competing narratives that mirror the complex interplays between local actors, competing national governments and the many foreign actors involved in the conflict. In this situation, social media exacerbates foreign policy challenges and creates vast opportunities for online manipulation. 

DRI has been monitoring the Libyan online space since 2018. We found that high-trending narratives circled around national security conversations, public figures and political leaders. One of those leaders, Saif al-Islam Gaddafi (the son of deceased former leader Muammar Gaddafi), had his electoral campaign fuelled by a public relations campaign run through the “Mandela Libya” Facebook page, involving numerous fake accounts. This web page was established shortly after representatives of Gaddafi visited Moscow.  

The majority of Libyan media outlets are based abroad and, with no history of an independent media in the country, social media often follow the leads of TV stations that report from and for Libya. Many of these are highly polarised. Exacerbating the problem, without a functioning state and institutions, Libya lacks authoritative sources on developments in the country. In the case of Gaddafi’s candidacy, much of the content pushing his campaign online ended up being captured by online news outlets, reaching a larger audience than the social media users who were originally exposed to it.  

Amidst such complex geopolitical situations, social media companies often fail to adequately deal with false content or hate speech that affects public opinion. This can damage human rights, national sovereignty and conflict resolution. Both the Ukrainian and Libyan cases underline that strategic communication – be it to strengthen a country’s soft power or as a means to sow conflict – remains central in geopolitics. What is more concerning is that such actions happen in a privatised space, where a handful of companies have the power to define the limits of what can and cannot be done. Often, they choose inaction and fail to prevent abuses. 

There is some reason to hope, though. Tech companies partnered with the World Health Organization to use authoritative information as a guide to identify and take down false content related to covid-19. 

With the increased relevance of disinformation in geopolitics, we are witnessing the strengthening of strategic communication bodies, such as EU and NATO StratCom, and several bodies within foreign ministries. 

However, we do not yet see a genuine connection between traditional foreign policy channels, the tech expert community and social media platforms to define how best to deal with these hybrid forms of influence. Tech experts in foreign ministries are often siloed in communication departments rather than seen as essential political analysts who are needed across the full range of foreign policy action. Traditional diplomacy will need to adapt to the new reality to face the new online threats to national sovereignty, democratic transition, conflict resolution and human rights. 

Photo credit:  KimMcKelvey, licensed under CC BY-NC-SA 2.0

DRI Unveils New Digital Democracy Toolkit!

Recent elections and events have exposed the risks of disinformation and manipulation of the democratic discourse on social media. However, there is still too little evidence demonstrating how disinformation, hate speech, bots and other phenomena influence the online debate.

Meanwhile, the tools to monitor social media are oftentimes not easily accessible for civil society and other interested groups. To fill this gap, DRI has developed a Digital Democracy Monitor Toolkit, which makes all the necessary resources available in one place.

The Digital Democracy Monitor toolkit is designed for civil society, journalists, researchers and anyone trying to learn more about social media and democracy.

Get started on your own monitoring today and find out:

  • Why you should monitor social media
  • How to get started in your own context
  • How to assemble your own online disinformation risk assessment
  • How to build your own methodology
  • How you can access data
  • How to make an impact


The Digital Democracy Monitor toolkit is part of a project funded by NEF-Civitates with contributions from MEMO 98.

Sri Lanka: DRI works with local partners to strengthen social media monitoring

DRI held a workshop on social media monitoring in Colombo on 3 and 4 December to strengthen how local partners monitor social media. The cooperation with the Centre for Monitoring Election Violence (CMEV) and People’s Action for Free and Fair Elections (PAFFREL) is a part of DRI’s efforts to prevent electoral violence in Sri Lanka by advancing electoral reforms and strengthening the integrity of public discourse.

The workshop included an overview of DRI’s methodology, examples of previous social media monitoring efforts, project objectives and relevant issues to monitor. The CMEV and PAFFREL identified key issues to monitor, such as governance, reconciliation and peace, as well as religious and ethnic issues. DRI also learnt from CMEV and PAFFREL’s experiences and challenges in observing Sri Lanka’s 2019 presidential elections.

During a coding exercise, participants gained knowledge and skills that they will be able to apply directly in their own work. The session concluded with a discussion on awareness-raising workshops for civil-society and grassroots actors on misinformation, hate speech and social media bias.

Following the workshop, DRI was invited to a roundtable meeting hosted by the European Union’s Election Observation Mission (EU EOM). DRI and the other stakeholders present exchanged views on the role of social media during Sri Lanka’s presidential elections. DRI, EU EOM and Hashtag Generation exchanged tools and methods used for observation and discussed methodological challenges around electoral observations online.

DRI’s social media monitoring in Sri Lanka aims to increase awareness and understanding of disinformation and hate speech online. DRI aims to achieve this by producing data-driven analyses on social media and by building momentum amongst key stakeholders to address the challenges posed by social media today.

If you want to understand Portugal’s political debates think of Brazil, not Europe – Social media in the upcoming election campaign

Portugal’s online political debates differ from discussions taking place in other western European countries, in that migration or Islam are not major campaign issues for any party. Instead, corruption has much more resonance in social media debates, similar to the situation in Brazil. There are two key differences, though: extremist parties are weak in Portugal and there do not appear to be risks of massive online disinformation in the campaign leading up to the 6 October parliamentary elections.

Nevertheless, election monitors must remain vigilant as unexpected developments, such as wildfires, could be instrumentalized against political parties and used for disinformation. These findings were some of the conclusions from a debate DRI organised in Lisbon on 12 September with the Representation of the European Union to Portugal.

Speakers from state agencies, such as the National Election Commission and the Media Regulator, explained gaps and ambiguities in Portugal’s legal framework, while DRI’s partners from Instituto Universitário de Lisboa’s media lab shared the first findings from their monitoring. Other speakers included journalists, polling experts and a representative of Google. DRI’s Rafael Goldzweig explained DRI’s approach to monitoring social media during elections. Some 60 participants attended the event.

 

Social Media Monitoring During Elections: Cases and Best Practice to Inform Electoral Observation Missions

Concern over online interference in elections is now widespread—from the fallout of the Cambridge Analytica scandal to the pernicious effects messaging apps have had in elections in Kenya or Brazil. Yet regulatory and monitoring efforts have lagged behind in addressing the challenges of how public opinion can be manipulated online, and its impact on elections. The phenomenon of online electoral interference is global. It affects established democracies, countries in transition, and places where freedom of expression and access to information are tightly controlled.

But fundamental questions of what should be legal and illegal in digital political communication have yet to be answered in order to extend the rule of electoral law from the offline to the online sphere. Answering these questions would help determine the right scope for online election observation, too. This scoping report explains why social media is one of the elements of a democratic, rule of law–based state that observer groups should monitor. It aggregates experience from diverse civil society and nongovernmental initiatives that are innovating in this field, and sets out questions to guide the development of new mandates for election observers. The internet and new digital tools are profoundly reshaping political communication and campaigning. But an independent and authoritative assessment of these effects is lacking. Election observation organizations need to adapt their mandate and methodology in order to remain relevant and protect the integrity of democratic processes.

Download the Report here


Photo credit: Ronda Comunicación/Flickr

Jekyll and Hyde Campaigning – How Ukraine’s Leading Presidential Candidates run respectable and dodgy Facebook pages in parallel

Summary

Analysing official and unofficial political advertising for former President Poroshenko and for the new President Zelenskyy during the presidential election campaign, we found the following:

  • Official campaign pages did not use manipulative strategies to discredit electoral competitors and maintained a moderate tone when criticising their opponent;
  • Zelenskyy’s campaign used more micro-targeting techniques and generated more engagement, while Poroshenko’s content was less engaging despite the use of bigger budgets to promote each post;
  • However, unofficial campaign pages by both sides used defamation against their competitor. The Anti-Zelenskyy pages spent 20 times more budget than Anti-Poroshenko pages;
  • Direct and indirect connections were found between unofficial pages and the official headquarters of the two candidates;
  • In some cases, Facebook delayed the removal of advertisements marked as ‘Doesn’t meet Facebook advertising rules’. By then, more than $1000 had been spent and many users had seen the ads, which is a significant amount considering that most posts are removed before $100 has been spent.

Given the upcoming Ukrainian Parliamentary elections on 21 July, these recommendations follow from the analysis:

  • Facebook should remove flagged inappropriate advertisements faster – before many users see problematic ads;
  • In its Ukraine Ad Library, Facebook should provide further details regarding advertising funding sources, as it does in the US Ad Library, to improve transparency and maintain consistent standards across countries;
  • Candidates and parties should publish a list of the pages officially operated by their campaigns – whether those pages do the official campaigning or spread problematic content aimed at attacking the opponent or spreading false information about them;
  • Candidates and parties should not engage in such underhand campaigning, and authorities should better enforce electoral rules on social media, such as electoral silence.

Introduction

The online political campaigning landscape has changed since the last Ukrainian elections. Tech companies now have more rules to ensure a degree of transparency and to prevent manipulation attempts by extreme groups and external actors in national elections. The most relevant change has been the establishment of archives of online political advertising (called the ‘Ad Library’ by Facebook). For this type of ad (as distinct from ads selling commercial goods), Facebook requires more information from those intending to run it. The library allows for tracking of who paid for the ads, how much they spent and which audiences were targeted.

In Ukraine, Facebook launched the Ad Library on 18 March 2019, two weeks before the first round of the presidential elections. The Central Election Commission registered 39 candidates for the elections, the largest number of presidential candidates in the history of Ukraine. The incumbent Petro Poroshenko and the newcomer Volodymyr Zelenskyy came out on top in the first round of the elections. In the run-off on 21 April, Zelenskyy won, gathering 73% of the votes to become the 6th President of Ukraine. The elections led to a significant increase in polarisation amongst the Ukrainian public, which was reflected in social media content before and after the presidential elections.

The elections were held based on the Ukrainian Constitution and the Law on Elections of the President of Ukraine, adopted in 1999 and last amended in February 2019. Even though the Law regulates media activities and media involvement in electoral campaigns and elections, no specific regulations take into account the specifics of social media activities.

The study analyses the use of political ads by official and unofficial campaign pages during the presidential elections, shedding light on how political advertising online was used in this electoral cycle. The analysis also recommends how to analyse social media campaigning for the upcoming early parliamentary elections – scheduled for 21 July, only two months after the presidential elections – and provides a more comprehensive look into this phenomenon.

Methodology

This report looks into data from the Facebook Ad Library in Ukraine during the active campaign period of 18 March to 21 April 2019. The report covers the digital campaigns of presidential candidates Petro Poroshenko and Volodymyr Zelenskyy. The monitored sample includes 14 Facebook pages, divided into two categories: official pages (run by the candidates’ campaigns) and unofficial/false pages (pages created or specially used with the objective of discrediting the opponent). The second group is not an exhaustive list, but altogether the analysis provides an overview of the tools and tactics used by official and unofficial pages in the context of the 2019 presidential elections.

All the posts assessed were manually collected from the Ad Library for further qualitative analysis. To identify the main narratives, we conducted visual, semantic and linguistic analysis of the posts. We also examined advertising promotion budgets and the posts’ targeting in order to assess the techniques used by each candidate’s team and distinguish differences between them.
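
The data for this report was gathered manually, but readers who want to replicate or extend this kind of monitoring can also use the Facebook Ad Library’s programmatic interface (the Graph API ‘ads_archive’ endpoint). The Python sketch below illustrates how ads run by a monitored page could be downloaded; it is an illustration only – the API version, parameter names and field names reflect the public documentation at the time and may change, and the access token and page IDs are placeholders.

import requests

AD_ARCHIVE_URL = "https://graph.facebook.com/v3.3/ads_archive"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # placeholder: a valid Ad Library API token is required
PAGE_IDS = ["<numeric_page_id>"]     # placeholder: IDs of the monitored Facebook pages


def fetch_ads_for_pages(page_ids):
    """Download all archived ads for the given pages, following pagination."""
    params = {
        "access_token": ACCESS_TOKEN,
        "search_page_ids": ",".join(page_ids),
        "ad_reached_countries": "['UA']",   # ads delivered to audiences in Ukraine
        "ad_active_status": "ALL",          # include ads that are no longer running
        "fields": ("page_name,ad_creative_body,ad_delivery_start_time,"
                   "ad_delivery_stop_time,spend,impressions,demographic_distribution"),
        "limit": 100,
    }
    ads, url = [], AD_ARCHIVE_URL
    while url:
        response = requests.get(url, params=params)
        response.raise_for_status()
        payload = response.json()
        ads.extend(payload.get("data", []))
        # The API returns a "paging.next" URL when more results are available;
        # that URL already carries all query parameters.
        url = payload.get("paging", {}).get("next")
        params = {}
    return ads


if __name__ == "__main__":
    for ad in fetch_ads_for_pages(PAGE_IDS):
        print(ad.get("page_name"), ad.get("spend"), ad.get("ad_delivery_start_time"))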

Official Pages. Poroshenko vs Zelenskyy

The team of Petro Poroshenko used two pages as platforms for the election campaign.

The first one is the official page of Poroshenko, which was registered in July 2014. A second page ‘Poroshenko2019’ was created in February 2019, specifically for the purpose of mobilising Poroshenko’s electorate for presidential elections. This page aimed to deliver tailored campaign messages to different audiences, which will be described below.

Volodymyr Zelenskyy’s team used only one Facebook page called ‘Zelenskyy’s team’ which was launched for election campaigning and addressed audiences with diverse content.

The official campaign pages did not use manipulative strategies or emotionally charged posts to discredit electoral competitors. While Poroshenko’s use of political ads on the official page was more formal, Zelenskyy used a more entertaining campaign strategy (building on his reputation as a comedy actor). The second Poroshenko page, “Poroshenko2019”, was aimed at a younger audience (according to a poll in January 2019, only 7% of voters under the age of 29 supported Poroshenko) and its content was adapted accordingly.

Main messages

Poroshenko focused on positive campaigning and reminding voters about his Presidential achievements: the development of the army, granting of Tomos to establish an independent Ukrainian Orthodox Church and success in the promotion of the Ukrainian language. Poroshenko’s pages promoted his strategy to overcome poverty in the country, which was part of his electoral programme.

His messages also focused on geopolitics and the promotion of Ukraine’s pro-European position. One of the ads (dated 9 April) used an antithesis: ‘21 April’s choice: Europe or Russia’. In the context of the election, ‘Europe’ represented Poroshenko, while ‘Russia’ represented Zelenskyy. Trailing Zelenskyy in all opinion polls, Poroshenko’s team tried to appeal to voters through the fear of Ukraine drifting under Russia’s control.

Zelenskyy’s team promoted the message that ‘the current Government will do everything it can to win in an unfair way’ (without mentioning Poroshenko directly). Subscribers were instructed to take several actions: to bring their own pens to avoid falsification by disappearing ink, to become observers or members of election commissions, and to document cases of violations at polling stations and send them to the head office.

A large share of Zelenskyy’s supporters aged 18 to 24 had never voted before, so among other content his page provided information on how to vote: the documents required at polling stations, or the voting procedure for those not voting at their place of registration.

For the younger audience, the ‘Poroshenko2019’ group posted videos in which opinion leaders expressed support for Poroshenko. Those involved in the campaign included Yuriy Shukhevych, a Ukrainian politician, member of the Ukrainian Helsinki Group and political prisoner (‘now it is the time to leave all emotions behind and unite’, within the framework of the campaign ‘Think’); the publisher Ivan Malkovych; theatre actress Ada Rohovtseva; showman Dmytro Chekalkin; and others. Thus, the influence of famous Ukrainians was linked to Poroshenko’s name.

Overall, official campaign pages had a moderate tone when it comes to advertising their platforms and criticising their opponent, focusing mainly on positive campaigning. Being under the scrutiny of electoral laws and voters, such official pages tended to not share false information or any sort of inflammatory speech.

Targeting

The main differences among the candidates’ advertising messages were found in the targeting strategies.

Zelenskyy’s team used more micro-targeting in their campaign. 16 of 135 unique posts (a post being a comment, picture or other media published on the Facebook page) each targeted a different set of regions and age categories. For example, a post dated 8 April had eleven different advertising audiences. The content of the post did not change depending on the targeted region, age or gender.

The largest number of promoted ads posted only for a short time (no longer than one hour) appeared on 20 and 21 April. We counted more than a thousand such posts. The posts targeted very narrow age and regional audiences. They were unique and contained different text, video and images for each of the groups.

The overwhelming majority of posts were countdown-videos for elections encouraging residents of different regions and cities to go out and vote with the message that they can change their future.

For example, a specific target set for the residents of Odesa addressed them directly in the text. For the age categories ‘25-34’ and ‘35-44’, who might already be parents, the focus was on the ‘future of children’, which depends on their choice. If the post targeted an older audience, the text also mentioned grandchildren.

Zelenskyy’s page targeted mainly women. They saw ads, on average, two to three times more often than men. The exceptions were posts on stereotypically male topics (related to football or cars). In total, the page spent $84,278 on ads. For more than half of the posts (134 out of 225 ads, although some were promoted several times with different targeting), the budget was less than $100 per post. Only in 20 cases did the budget range from $500 to $999. Spending on promotion did not exceed $1,000 per post. It is noteworthy that for the 1,500 short-term promotional posts, the amount did not exceed $100 per post.

By comparison, Poroshenko’s team posted less content and it was less engaging (meaning that it generated fewer likes, shares and comments). They spent more money promoting each post to reach a wider audience.

The two pages promoted significantly fewer posts: 56 across the two pages, with a total expenditure of $64,817 ($24,529 on the first official page and $40,288 on the ‘Poroshenko2019’ page). Unlike Zelenskyy, Poroshenko’s team promoted posts with budgets of $1,000 to $5,000 fifteen times, and of $5,000 to $10,000 three times. Such budgets were spent on electorate mobilisation posts (the video of the campaign ‘The most important is not to lose the country’ with a caption reading ‘We choose our future on 31 March’).

The three main regions where promotional posts were shown were the Lviv, Kyiv and Dnipro regions. The Lviv region was ranked first in terms of promotional post coverage, and it was the only region that gave preference to Poroshenko in the second round of the 2019 presidential elections – he was supported by 63% of voters against 34% who voted for Volodymyr Zelenskyy. However, Poroshenko already had support in this region prior to the elections, which would have helped him maintain his share of the vote in the second round.

On both Poroshenko pages, micro-targeted ads were rarely used. As an exception, the video of the advertising campaign ‘Think’ (the ad dated 26 March), with commentary by Yuriy Shukhevych, targeted only the central and western parts of Ukraine, namely six regions – Lviv, Ternopil, Rivne, Kyiv, Volyn and Chernivtsi.

In a similar fashion to Zelenskyy’s pages, the main targets were women. Men were targeted in rare cases, for example in posts related to military equipment (the ad dated 23 March about Turkish combat drones).

Official pages: Conclusion

Candidates did not focus on discrediting their opponents. The main difference was in how the messages were targeted. Poroshenko’s team used bigger budgets to promote each post, but their content was less engaging.

Compared with Poroshenko’s more official messages, Zelenskyy’s page made use of a more modern and targeted campaign strategy. Efforts were focused on creating video content and using short, catchy messages. The content was engaging and had the potential to go viral, supported by hashtags (e.g. #let’sshowhimtogether) or calls for comments and shares.

Zelenskyy’s campaign micro-targeted voters. Two days before each round of the elections, it targeted specific cities or even universities. Zelenskyy’s strategy seemed to focus on mobilising his electorate to vote again in the second round, as the same turnout would be enough to ensure his victory. Therefore, the largest portion of his social media budget was spent in the period of 19-21 April.

Meanwhile, Poroshenko’s pages spent quite a significant budget of more than $40,000 in February, focusing on the two age groups of 18-24 and 25-34 to mobilise younger voters. Ultimately, the focus of both candidates’ pages in the election campaign was on the female audience.

Zelenskyy’s team seems to have violated the electoral law by publishing ads on the days of silence and on the days of the elections – 30-31 March and 20-21 April. In accordance with Article 212-10 “Violation of restrictions on conducting election agitation, agitation on the day of referendum” of the Code of Ukraine on Administrative Offences, conducting election campaigning outside the terms established by the Law on the Election of the President of Ukraine may result in a fine. However, it seems that no steps were taken to enforce the law. Poroshenko’s pages did not run electoral ads on the day of silence.

Unofficial campaign pages

Aside from the campaigning on official pages, a significant part of the campaigning on social media during the last elections was carried out through other pages. Such pages are not officially run by the candidates’ headquarters, but their narratives and communication resonated with the main messages of the candidates. These pages spread misleading and compromising information about other candidates. The goal of such pages was to discredit the opponent’s reputation and electoral chances. The pages we chose to analyse had either a clear Anti-Zelenskyy or Anti-Poroshenko agenda.

Anti-Zelenskyy/ pro-Poroshenko

In this analysis we looked at six pages: ‘Zhovta Strichka’, Boycott the Party of Regions, Ministerstvo Baryh [Ministry of Hustlers], ‘Batya, ya starayus’ [Dad, I try], Zrada_Peremoha [Betrayal_Victory] and Tsynichnyi Bandera [Cynic Bandera].

Defamation

One of the Anti-Zelenskyy themes was defamation and denigration through false information, without confirmed facts or with a loose interpretation of facts. One popular theme of the ads identified Zelenskyy as a drug user: “Many thanks for screen inhabitants for the candidate-drug user” (6 April); “Polling Ukrainians whether the President can use cocaine” (7 April); “It is sure that Ukraine does not need the President – drug user” (8 April). A video posted on 8 April stated that “Zelenskyy’s secret has been disclosed”, implying that Zelenskyy had not taken drug tests and concluding that he was a drug user with something to hide.

Another defamation message was that Zelenskyy had criminal ties to the Ukrainian oligarch Ihor Kolomoyskyi. The page ‘Zhovta Strichka’ posted about this many times: ‘The choice is really simple: Putin’s personal enemy Poroshenko or Yulia’s puppet Kolomoyskyi who can’t wait to ‘dupe’ Ukrainians out of money’ (18 March) and ‘Does anybody still believe him?’ (8 April). A video was also posted declaring that Kolomoyskyi’s money from the nationalisation of PrivatBank was transferred to offshore accounts of ‘Kvartal 95’ (8 April).

The page Ministerstvo Baryh [Ministry of Hustlers] shared an ad dated 29 March with a video that uses a compilation of Zelenskyy’s and Kolomoisky’s statements, ending with the Poroshenko campaign slogan ‘There are many candidates but only one President’. Another example is a promotional video dated 1 April, which associates Zelenskyy with the money of the oligarch Kolomoisky, allegedly moved to offshore accounts of ‘Kvartal 95’.

The second theme was the “incompetence” of the candidate Zelenskyy.

A post on the page ‘Zhovta Strichka’ from 8 April said: ‘We imagine the meeting of the National Security and Defence Council and it makes us already scared’: the message of the post is that Zelenskyy will not cope with the role of the Commander-in-Chief at a crucial moment when the whole country will be waiting for him to make an autocratic decision.

Similar messages were tracked on the page “Boycott the Party of Regions”. Several videos with a message about candidate Zelenskyy’s incompetence were spread within the framework of a conventional advertising campaign, ‘Not ready to be the President’. For example, in a video involving actors, which simulates a situation in which a full-scale war with Russia has allegedly begun, all the soldiers are waiting for a decision from the Commander-in-Chief of the army, Volodymyr Zelenskyy, and at the crucial moment he is nervous and does not know what to do. Another example is a video which compares the candidate Volodymyr Zelenskyy with a ‘chef who is afraid of food’: ‘The Commander-in-Chief who is Afraid of War’.

A particularly egregious example was an ad video in which Zelenskyy is hit by a truck. It included the message ‘Everybody must walk his/her own path’ and an image of a path of cocaine (an allusion to the ‘Zelenskyy is a drug user’ message). This video was later removed by the page administrator but remained in the advertising library; Facebook did not consider it to violate its rules. The video was posted on the page Zrada Peremoha [Betrayal_Victory].

Anti-Poroshenko/pro-Zelenskyy

For the analysis we identified five pages: ‘Petro Incognito’, ‘AntiPor’, ‘Vybory 2019’ [Elections 2019], ‘Stop Poroshenko’ and ‘Karusel2019’.

Defamation

On the page ‘Petro Incognito’, a post dated 8 April accused Petro Poroshenko of copying elements of the election campaign of ex-President Leonid Kuchma:

‘Poroshenko is fawning over the youth, monkeying twenty-five-year-old techniques!  Petro’s ratings go down catastrophically and he is catching at a straw hoping to attract the youth. Before the first round he didn’t think about it at all, and Poroshenko’s bots ‘soaked’ those young people who wanted to vote for Petro Poroshenko’s opponents.’

Nine of the group’s ads involved micro-targeting of specific regions. For example, a post from 18 March, which criticised the head of the Transcarpathian regional state administration, Hennadiy Moskal (considered to be President Petro Poroshenko’s man), targeted the Carpathian region exclusively.

Criticism of Poroshenko’s entourage. On the same page there were posts alleging Poroshenko’s connections to Russia. A post dated 27 March said: ’15 facts about how Petro Poroshenko and his allies are closely related to Russia! Fact No. 1. Poroshenko’s daughter-in-law Yulia Poroshenko (Alikhanova) is Russian, her parents live in St.-Petersburg, the husband of her sister is a top official in the government of Leningrad oblast and relatives in Crimea declare in public that they voted at the pseudo-referendum for separation of the peninsula from Ukraine…’.

Videos with supposed investigations into Petro Poroshenko were published in the ‘AntiPor’ group. One example is a post dated 11 March featuring an investigation named ‘Poroshenko’s Black Cash’ by ‘Ukrainian Sensations’ of the 1+1 TV channel (owned by the oligarch Ihor Kolomoisky, who is associated with Volodymyr Zelenskyy).

Another video claimed that the economy of Ukraine declined during Poroshenko’s time in office according to internationally recognised ratings, without backing this claim with the respective ratings data. The video also featured part of a Deutsche Welle report about the state of the economy in Ukraine. The video’s text caption encouraged Poroshenko’s supporters to watch it.

In this case, the communication is based on references to various media outlets that criticise Petro Poroshenko. Referring to popular media in Ukraine is a communication strategy in itself: the presidential candidate is criticised not only by the page itself but also by reputable media.

Another theme of the Anti-Poroshenko campaign is the allegation that he pays for votes. This topic was first spread in the media on the site strana.ua in January 2019. On Facebook it was communicated through the page ‘Karusel2019’ (a reference to the accusation of ‘carousel voting’[1]). The creation of this page coincided with the active release of publications on that website.

Unlike the topics listed above, the communication of the ‘Karusel2019’ page focused on very specific features of Petro Poroshenko’s election campaign.

Praising Zelenskyy and refuting accusations against him. On the page ‘Vybory 2019’ [Elections 2019], the first promotional post was published on 27 March. In a promoted video called ‘Is Zelenskyy the President?’, television presenters do their best to defend Zelenskyy. For example, they say that the accusations of ties with Russia will help mobilise the pro-Russian electorate. They believe that the label ‘Kolomoisky’s puppet’ is not negative, explaining that this oligarch is the most positive among all other Ukrainian oligarchs. They stated that not participating in a direct debate would not have a negative impact on Zelenskyy’s rating.

The second promotional post was published on 29 March. In the promoted video, television presenters discuss the visual advertising of some presidential candidates. Yulia Tymoshenko is accused of plagiarising Viktor Yushchenko’s 2004 message and Poroshenko of plagiarising a message of the Russian party ‘United Russia’. At the same time, they call Volodymyr Zelenskyy’s advertising ‘cool’ and show scenes from the TV show ‘Servant of the People’ in which Zelenskyy plays a president, namely the scene in which the president shoots deputies of the Verkhovna Rada.

Unofficial pages conclusions and findings

Promotional posts in the unofficial pages category mostly used videos, and sometimes text and graphics. Pages supporting Poroshenko existed long before the election campaign and were used to spread messages against political opponents. Pages against Poroshenko/for Zelenskyy were mostly created during the electoral period (except for the group ‘Stop Poroshenko’, created in 2014).

We noticed similar content on different Anti-Zelenskyy pages – the same actor appears in various advertising game videos (videos in which actors portray politicians) on different pages and the same videos are shared by different pages.

The same content was promoted on five of the six researched pages (all except ‘Batya, ya starayus’ [Dad, I try]), which indicates a coordinated communication strategy behind them. We found that the pages are connected to each other: they list the same phone number and address in the ‘Funding source’ block. Moreover, this phone number and address were also listed on the official pages of Petro Poroshenko.

This could mean that the same communication team was responsible for creating content not only for the official pages but also for the unofficial ones. It points to a common strategy in the use of social media during elections: official candidate pages tend to keep a lower profile, moderate language and official positions on the candidate’s agenda, while more questionable techniques are spread through pages that are not directly associated with any of the campaigns.

On the anti-Poroshenko page ‘Petro Incognito’, most of the posts were accompanied by short edited videos.

The ads targeted the West of Ukraine, where Petro Poroshenko had the highest level of support (Vinnytsia, Lviv, Rivne, Ivano-Frankivsk). Thus, the purpose of the group was to influence Poroshenko’s electorate and to cause negative emotions in relation to the incumbent President.

Unlike the Anti-Zelenskyy/pro-Poroshenko pages, we did not see the same information in the ‘Funding source’ block on the official Zelenskyy page and the unofficial pages.

Pages working against Zelenskyy spent far more money than their electoral rivals (the figures below are taken from the respective Ad Library entries).

Anti-Zelenskyy pages and promotion budgets:
  • Zhovta Strichka: $41,418
  • ‘Batya, ya starayus’ [Dad, I try]: $1,000 to $5,000 (approx. $2,500)
  • Boycott the Party of Regions: $17,134
  • Ministerstvo Baryh [Ministry of Hustlers]: $58,573
  • Zrada_Peremoha [Betrayal_Victory]: $25,150
  • Tsynichnyi Bandera [Cynic Bandera]: $45,559
  • Total spent: approx. $190,334

Anti-Poroshenko pages and promotion budgets:
  • ‘Petro Incognito’: $241
  • ‘AntiPor’: $7,708
  • ‘Vybory 2019’ [Elections 2019]: less than $100 (approx. $50)
  • ‘Stop Poroshenko’: less than $100 (approx. $50)
  • ‘Karusel2019’: $534
  • Total spent: approx. $8,583

The promotion budget for Anti-Zelenskyy pages was more than 20 times larger than for Anti-Poroshenko pages. Moreover, it exceeded the promotion budget of the official pages.
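
Since the Ad Library reports spending for many ads only as ranges (e.g. ‘less than $100’ or ‘$1,000 to $5,000’), totals like those above can only be approximated. The short Python sketch below illustrates one simple way to do this, using the midpoint of each reported range; this midpoint convention is an illustrative assumption, not necessarily the exact rule used for the approximations in this report.

def approximate_spend(lower_bound, upper_bound):
    """Return a point estimate (midpoint) for a spend range reported by the Ad Library."""
    return (lower_bound + upper_bound) / 2

# Hypothetical example: spend ranges (in USD) reported for the ads of one page.
reported_ranges = [
    (0, 99),       # shown as "less than $100"
    (1000, 4999),  # shown as "$1,000 - $5,000"
    (500, 999),    # shown as "$500 - $999"
]

total = sum(approximate_spend(low, high) for low, high in reported_ranges)
print(f"Approximate total spend: ${total:,.0f}")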

Effectiveness of Facebook political ads policy

As we saw above, neither team used questionable communication on their official pages, but both did so on unofficial pages. A direct connection was found between the official pages of Petro Poroshenko and unofficial pages working against Zelenskyy.

In this case, the newly launched Facebook policy provides instruments to expose the source of political advertising. Additionally, a high number of ads were removed by Facebook with the mark ‘Doesn’t meet Facebook advertising rules’ (31.7% of ads from Anti-Zelenskyy pages and 14.2% from Anti-Poroshenko pages). For example, on the page Zrada_Peremoha [Betrayal_Victory], half of the promoted ads were removed.

But before such inappropriate ads were deleted, they had already been shown to many people, and the promotion budgets spent on some of them were above $1,000.

Some of the questionable ads were not deleted at all. One example, shown in this report, is the advertised video with an edited fragment in which Volodymyr Zelenskyy is hit by a truck. This video was removed by the administrators of the page, but not by Facebook.

Looking ahead at the Parliamentary elections

Currently it appears easy to get around campaign finance rules by paying for ads with false company information. Facebook gives details about the sources in the US Ad Library, but not in Ukraine, which shows different transparency standards across countries.

A new election campaign has already started in Ukraine. On 21 July the country will go to the polls again to choose a new Parliament, and campaign strategies will likely follow the same patterns identified in this study. Facebook therefore needs to implement the necessary changes and upgrade its standards to prevent political advertising from being used as a tool to spread lies and false information to voters. Inappropriate content should not be part of paid political advertising.

Even though political campaigns have historically relied on lies and defamation, platforms have a choice whether to allow such content to feature on their services. Facebook has struggled with decisions to take down problematic content, as the recent case of a manipulated video of US House Speaker Nancy Pelosi[2] shows. However, in the case of political ads, unlike user content that goes viral, it can choose not to run them in the first place.

We now have much more transparency than in 2016, when political advertising was a channel used by manipulative foreign actors to influence the US elections. Taking the recent European Parliament elections as an example, the Ad Library allowed for greater transparency and reduced the problems associated with political advertising, while the code of conduct agreed between the European Commission and tech companies further increased transparency and reduced the scope for manipulation.

This does not mean that there is no work left to be done. With the parliamentary elections approaching, having the same transparency standards applied across elections would be a good starting point. During the 2018 US midterm elections, the Ad Library contained a list of how much money was being spent online in the campaign and which actors were spending that money. Such a list does not yet exist for Ukraine, which makes analysis of the use of political ads more difficult.

Methodology Application Table

Platform studied: Facebook
Number of ads collected: Ads from 14 pages
Criteria for inclusion in the search: The 14 pages were divided into two categories: official pages (run by the candidates’ campaigns) and unofficial/false pages (pages created or specially used with the objective of discrediting the opponent). The second group is not an exhaustive list.
Type of analysis: All the posts assessed were manually collected from the Ad Library for further qualitative analysis. To identify the main narratives, we conducted visual, semantic and linguistic analysis of the posts. We also examined advertising promotion budgets and the posts’ targeting in order to assess the techniques used by each candidate’s team and distinguish differences between them.
Source of data: Facebook Ad Library
Timeframe of study: Ads captured between 18 March and 21 April 2019
More questions on methodology? Contact: [email protected]

 

Cover image: Animated Heaven/Flickr


[1] Carousel voting is a method of vote rigging in elections. Usually it involves “busloads of voters [being] driven around to cast ballots multiple times”

[2] More information on: https://www.theguardian.com/technology/2019/may/24/facebook-leaves-fake-nancy-pelosi-video-on-site

DRI Annual Report 2018

Our Annual Report 2018 is out. It gives an overview of our activities and our organisational development.

In 2018 we continued our work on local governance, constitutions, human rights and the rule of law. Social media monitoring during elections has become an important part of our activity across the countries we work in. Last year we worked with different actors, including governments, civil society, election administrations and universities. We regularly consulted and engaged them in discussions to identify their needs and support them in their work to strengthen democracy.

Download the DRI Annual Report 2018

Read the 2018 annual audit report here.

Call for submissions: research covering social media and elections in Ukraine

Democracy Reporting International (DRI) is a non-partisan, independent, not-for-profit organisation registered in Berlin. Democracy Reporting International promotes political participation of citizens, accountability of state bodies and the development of democratic institutions world-wide.

Since September 2018, with funding from the German Federal Foreign Office (hereinafter referred to as “the Donor”), Democracy Reporting International (DRI) has been implementing the project “Going beyond Kyiv: Empowering Regional Actors of Change to Contribute to Key Political Reforms in Ukraine – Phase II”. DRI is involved in discussions about how new technologies affect democracies, produces studies in different political contexts and, as part of this work, analyses the role of social media in elections.

Within the project we are looking for experts in civic tech to write short contributions or analysis covering the role of social media in the Ukrainian Parliamentary elections.

Contributions will be assessed on the following criteria:

  • Relevance of the topic
  • Expertise of the authors on the topic and awareness of the issue
  • Previous work and research in the area

The work should address one of the aspects related to the use of social media in the Parliamentary elections in Ukraine.  Possible topics may include:

  • Use of political ads during the electoral campaign
  • Use of fake campaign pages
  • Use of Facebook by political parties, media and fake campaign pages
  • Applicants can propose other questions to be researched

Individuals or teams may apply. The applications should be submitted in English.

Please send us ([email protected]) by 10 July your CV and Application form, which should answer the following questions:

  1. Name
  2. Please describe your/your team’s relevant expertise (100 words)
  3. Which question would you like to research?
  4. Please propose a short summary of the paper (200 words)
  5. Please provide an outline for your publication (200 words)
  6. Please describe your methodology (including timeline and work-plan) (200 words)
  7. Why do you believe this paper is important? What will be its impact?
  8. Please include your fees and time needed to deliver the work.

Incomplete applications will be disqualified.

BP100: Online Threats to Democratic Debate: A Framework for a Discussion on Challenges and Responses

Michael Meyer-Resende (Executive Director) and Rafael Goldzweig (Social Media Research Coordinator) wrote this briefing paper.

Executive Summary

Online disinformation in elections has been one of the major themes of recent years, discussed in countless articles, investigations and conferences. With this paper we want to challenge some of the notions and points of focus in the debate, namely:

The Problem

The focus on elections is too narrow. The US presidential elections in 2016 pushed online disinformation into the limelight, and as a result people have often discussed it as a danger to electoral integrity. Elections are a necessary part of democracy, but by no means sufficient. Participation takes place in many other forms. People work in political parties, engage in pressure groups, and demonstrate and share their opinions in many different ways. Journalists investigate and report, politicians discuss, propose and act. These are all essential ways of engaging in a democracy and they happen daily. And every single day these processes may be affected by online disinformation. The focus then needs to be on all these aspects of democracy.

The focus on ‘disinformation’ is often unclear. Many different issues, in particular cyber-security, are conflated with disinformation. Some of these issues have overlaps, but they are not the same. Hacking into accounts or disabling electoral infrastructure is a major problem and it is not easy to defend against, but it does not raise wide-ranging normative questions. In most cases cyber-attacks are a crime, or are widely seen as crimes, and the only question is a technical one about how to prevent them. The question of democratic discourse is far more complex.

A Wider Understanding of Threats

Nothing less than democratic debate and discourse is under threat. A democracy needs a functioning public space where people and organisations freely exchange arguments. That is why freedom of expression is essential to any democracy, but it is also the reason why all democracies spend money on public broadcasting: they acknowledge that an informed public debate does not emerge by the mere forces of the market. Democratic discourse needs to be understood widely. It encompasses all exchange of arguments and opinions, in whatever form, and can relate to public policy choices.

Discourse that is relevant to democracies includes a wide range of activity from discussions on deeply‐held beliefs (world views) to simple information that may not affect any opinion, but that may affect politically relevant action (such as finding a polling station, deciding to go there or not; deciding on joining a demonstration).

Why is it necessary to start with things as far-reaching as worldviews? The answer is that democracy is premised on some common ground. It can live with many disagreements and different interests – indeed, it is designed to allow people to live together peacefully despite disagreement – but it does need some common ground. If, for example, many people believe that the Earth is flat, they are rejecting scientific evidence. Without accepting the basic assumptions of science, it is simply impossible to discuss most major political questions. Again, this should not be too controversial: democracies invest heavily in school curricula that try to establish that common understanding.

We propose a layered understanding of threats to democratic discourse that appear at different levels of opinion and behaviour formation. These range from fundamental beliefs, such as ethical or religious assumptions, to political ideology (conservative? socialist? ecological?), to voter choice and other behavioural choices (vote or not, and where? Demonstrate or not, and where?) that may not even involve a change of opinion. Threats to opinion at the deeper levels are continuous, because opinions are formed continuously. Threats to short-term choices are more likely to emerge around specific events (such as trying to deter people from going to vote by spreading false news about police checks at polling stations). The tech firms’ remedies have focussed more on the short-term threats than on the longer-term, systemic threats.

To discuss the entire panoply of challenges, we prefer the term ‘threats’ to other terms like propaganda or disinformation. The latter are mostly used with the assumption that a particular actor is actively and intentionally disinforming. But many threats to democratic discourse are unintentional. Most importantly, the entire architecture of social media and other digital services rests on choices that are full of unintended consequences for democracy. Just think of YouTube recommending videos that steer viewers towards extremist content. It recommends sensational content to keep users on the platform, but it was not designed to help extremists.

The Phenomena

‘Fake news’ has become the shorthand for all the internet’s ills. As many experts have pointed out, the word has been so abused and means so many things to so many people that it has become useless. The term’s popularity points to a deeper problem of the debate: it has centered on the question of the “message”. Is the message true or false? Is it harmful to specific persons or groups of people? Should it be censored? These are the questions typically emerging in the debate. The focus on content as the main problem has resulted in fact-checking becoming one of the favourite remedies.

But many problems of online speech are unrelated to the message. When Russian agents bought advertising on Facebook to support the ‘Black Lives Matter’ movement, the messages were not the problem. We would not discuss them had genuine members of the movement posted them. The messenger was the problem. When bot networks amplify and popularise a theme or a slogan, the message may not be the problem, nor the messenger, but the messaging is problematic, i.e. the way the message is spread – implying a popularity that does not exist. Imagine a major street demonstration for a legitimate cause where it later turns out that most participants were robots or people paid to participate. We would consider that problematic.

We therefore propose to distinguish three phenomena (the “3Ms” – message, messenger and messaging) that need to be discussed in their own right.

It’s Not Only About Freedom of Speech

The focus on the message meant that most debates focus on freedom of speech issues. Viewing the broader threats to democratic discourse across the “3Ms”, it becomes clear that the rights issues are more complex. The blind spot of legal debates has been the right to political participation and to vote, which presupposes – in the words of the UN’s Human Rights Committee – that public discourse should not be manipulated. It turns the focus from the expression of opinions to the question of how opinions are formed – the concern that stands behind states’ financing of public broadcasting. It provides the basis for discussing many of the questions related to inauthentic messengers and manipulated messaging/distribution of content. This should not be understood as a facile road to censorship, but rather as showing that concerns about social media architecture – what decisions guide what users can see – are based on a human rights concern.

1. Why This Paper?

Ever since the US elections in 2016 and the Cambridge Analytica scandal, there has been a wide-ranging debate on the threats to democracy in the digital space and particularly on social media. Countless conferences, reports and media pieces describe and analyse a large range of issues and challenges. Catchwords abound: disinformation, computational propaganda, fake news, filter bubbles, dark ads, social bots or inauthentic behaviour, to name but a few.

Building on the work of other organisations, we propose a framework to disaggregate these various phenomena more clearly. We hope that this will contribute to structuring debates and conferences, to developing practical methodologies for monitoring and responding to threats to democratic discourse online, and to discussing regulation.

2. What is the Problem?

How should one describe a desirable online discourse? The tech companies sometimes use frames borrowed from biology. Facebook, for example, often mentions ‘healthy discourse’1, and Twitter’s CEO Jack Dorsey has asked for help in measuring Twitter’s health. Words like ‘toxic discourse’ or ‘contamination’ abound. But biology is a bad frame for discussing threats to online discourse.

Social media and the digital sphere are created by humans. The digital space has no ‘natural’ qualities, and the idea that it does confuses the debate. For example, a widely held misunderstanding suggests that there is a natural order in which posts appear on social media platforms and that there should be no ‘tampering’ with algorithms. Nothing we see in our Facebook, YouTube or Twitter feeds is natural. It is entirely based on complex algorithms designed by humans to keep users on the platforms and to gain new users, ultimately making the platforms more attractive for advertisers. If Facebook decides to reduce the reach of a post, it is not reducing its ‘natural’ position. It only gives it less prominence compared to other posts.2
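
To illustrate the point, here is a minimal, purely hypothetical sketch (in Python) contrasting a chronological feed with an engagement-weighted one. The scoring weights are invented for illustration and do not reflect any platform’s actual ranking formula; the point is simply that both orderings are human design choices.

    # Purely illustrative: neither ordering is "natural"; both are design choices.
    # The weights below are invented and do not reflect any platform's real formula.
    from datetime import datetime

    posts = [
        {"id": "A", "posted": datetime(2019, 5, 1, 9, 0), "likes": 12, "shares": 1},
        {"id": "B", "posted": datetime(2019, 5, 1, 8, 0), "likes": 950, "shares": 300},
        {"id": "C", "posted": datetime(2019, 5, 1, 10, 0), "likes": 3, "shares": 0},
    ]

    # Chronological ordering: newest first.
    chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

    # Engagement-weighted ordering: posts likely to keep users on the platform
    # rise to the top, regardless of when they were posted.
    def engagement_score(post):
        return post["likes"] + 5 * post["shares"]  # invented weights

    ranked = sorted(posts, key=engagement_score, reverse=True)

    print([p["id"] for p in chronological])  # ['C', 'A', 'B']
    print([p["id"] for p in ranked])         # ['B', 'A', 'C']

The same three posts appear in very different orders; which order users actually see is a choice made by the platform, not a natural fact.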

There is no obvious definition of what a ‘healthy’ discourse may be. For example, in the US the limits to freedom of speech are drawn very widely and include speech that would be characterised as incitement to racial or religious hatred in many European countries. Neither approach is ‘naturally’ better; there are good arguments on both sides. Talking about online discourse using health as a frame implies that we only need to find the right formula to solve the problem, and that it is a matter for experts more than for anyone else. There is no such formula for human debate.

Other authors suggest that the information space should be seen as an ‘order’, meaning that ‘disorder’ is a problem.3 However, social media discourse in particular, conducted by millions of people at the same time, is disorderly – and why should it not be? What order would be appropriate, and who decides? Much information on social media is irrelevant to democratic discourse, and no order is required for it.

The term computational propaganda is also used and may be useful to describe specific threats, but by implying malicious intent of actors, it is too narrow to describe the full range of threats to democratic discourse online. For example, the above-mentioned question of how algorithms make choices in ranking posts is as such not a matter of propaganda. It stems from a company’s interest in profit-making.

We propose the term ‘threats to democratic discourse’. Threats can follow from the intentional actions of people seeking to do harm, but threats can also be the unintended consequences, for example, from the way that social media platforms are designed.

3. What is the Democratic Discourse?

Democratic discourse is the pluralistic debate of any issue that relates directly or indirectly to public policies. A lot of interaction on social media, such as discussion of sports or celebrities, often has no strong relation to public policy and is therefore of no particular interest for a discussion on online threats to democracy.

At the same time, in recent years the threat of electoral interference has often narrowed the debate. Democratic discourse is a larger concept than electoral integrity. Political participation in a democracy is exercised around the clock and not only during elections. Citizens inform themselves, they debate (online or offline), they may demonstrate for causes or be active in associations or political parties. Elections are an essential element of democracy, but even the most reduced academic definition includes more than just casting votes.4 More importantly, international law is clear on the set of political rights that make a democracy, which go beyond the right to vote and to stand in elections. They include the freedoms of association, assembly and expression – summarised as political rights.5

Democratic discourse takes place constantly. When public discourse is manipulated, it may not only affect elections; it may equally be targeted at public policy choices. A high-profile example is the sudden, online-generated opposition against the UN Migration Pact. While opposition to the pact is legitimate in any democracy, the campaign showed elements of online disinformation. Massive resistance emerged suddenly at a late stage in the process, when there had been little opposition during the long process of negotiating the pact. Online manipulation may target even deeper roots of democracy. It may attempt to turn engaged citizens apathetic, cynical or fundamentally distrustful of the entire system of democracy.

Therefore, protecting democracies means adopting a wide notion of democratic discourse. If, for example, many people start believing that the Earth is flat, a whole range of public policy debates will become impossible (how do you discuss climate and weather patterns if you believe the Earth is not round? If many people reject the science of vaccination, how can we discuss health policies?). And worse: if people believe that all governments, scientists and journalists are part of a conspiracy to conceal the fact that the Earth is flat, they will not meaningfully participate in public discourse. These threats may not result from anti-democratic intentions.

YouTube recommends videos that are sensationalist because they are more likely to be watched (the company has promised to reduce such promotion). The claim that the Earth is flat sounds more interesting than an explanation of why it is not. Our new information infrastructure follows the rules of sensationalist tabloids to catch the attention of viewers and users. This challenges democracy.

In authoritarian states, deep distrust in institutions is a sign of realism. In democracies, scepticism towards institutions is appropriate, but if it turns into conspiratorial thinking or a rejection of fact-based debate, democracy loses its basis. It is for this reason that different levels of human reasoning and behaviour can be threatened, either by disinformation or by the way that online content is organised and presented. These levels include:

  • Worldview/Weltanschauung: The worldview is the deepest level of a personal belief system, for example a belief in rationality (even if it may not be an absolute belief), or religious, moral and ethical convictions. There are far-reaching social science debates on what a worldview is, but for our purposes it is enough to distinguish the deepest level of beliefs and assumptions about the world from political and ideological leanings. For example, a person who believes in relative human progress (“you can improve things”) may turn to various ideologies. She could be a conservative or a liberal, but would be unlikely to turn to more totalitarian ideologies. A person who believes in absolute progress (“if we try hard, everything will become ever better and at some point perfect”) is likely to turn to more utopian (or dystopian) ideologies like communism or fascism. Democratic compromise will feel like treason to that person. A person who turns to religious fundamentalism is unlikely to remain adaptable to democracy. Disinformation and other online manipulation try to weaken democracies’ deep roots at the level of worldviews. They will try to turn citizens into cynics (“I cannot do anything anyway”) or into paranoids who work against democracy (“I have to bring down the false facades”). Specific myths (such as that of a flat Earth or chemtrails) may seem crazy, but they have a destructive power, because they question everything. More insidiously, the concept of science may not be attacked directly, but the credibility of scientists is undermined tactically to serve a political purpose, as has been the case with climate deniers. The end result is cynicism and distrust in a professional community that provides essential information for a fact-based democratic discourse. The same is true when such attacks directly target critical democratic institutions (“all journalists are liars”). If we identify worldviews as a specific target of influence operations, it also becomes clearer where to look for threats. For example, adolescents typically do not yet have firm worldviews, so actors who seek to undermine them would look to the platforms they use, such as Instagram or gaming platforms.
  • Political beliefs, ideology: Actors of disinformation try to influence political beliefs and ideologies that usually have an impact on electoral choice and general positioning in public discourse. For example, lobbyists for the coal industry may try to undermine climate scientists, reframing the perception of coal as a ‘green’ natural resource. They do not aim to change somebody’s worldview (the person still believes in the need for clean energy), but they try to change their political belief on a specific topic. At this stage disinformation may become propaganda. It may not present false content, but its selection is one-sided to build a political belief (if the only crimes that a supposed news site reports are those committed by immigrants, it serves a propaganda purpose, not a news purpose). Fake news sites with such propagandistic purposes remain one of the major challenges for Facebook. Impact at this level prepares the ground to influence the next level of behaviour, namely electoral or other concrete political choices.
  • Electoral and other choices of political action: Disinformation may not aim to influence a political belief, but simply an electoral or other choice. The campaign during the US 2016 presidential elections portraying Hillary Clinton as a criminal, for example, did not try to turn Democratic voters into Republican ones. It signalled to Democratic voters: even if you like that party, do not vote for this particular candidate. Operatives linked to the Democratic Party tried to divide support among Republican candidates in the 2017 Alabama Senate election; they did not try to change voters’ political beliefs. The Russian Internet Research Agency published posts calling for demonstrations that would not have happened otherwise. It activated existing beliefs, but it did not create or change them. Such threats usually have a shorter-term horizon, for example aiming at influencing a specific upcoming election.
  • Electoral behaviour: Disinformation may also try to change electoral behaviour without attempting to change voters’ minds about a candidate or a party. Examples include an ad posted during the 2018 elections in Brazil that feigned support for the Workers’ Party but indicated the wrong election day (one day too late), or misleading pictures showing police checks at polling stations in the US, which could deter vulnerable groups of voters who fear the police.

A wide notion of democratic discourse, which includes anything from shaping worldviews to influencing specific decisions, reflects the importance that discourse has in democracies. This is not a novel idea. Almost all democracies invest significantly in public broadcasting, because they consider impartial information to be more than a commercial good and believe that citizens need to engage, and be engaged, in the public sphere.6

4. Disaggregating Digital Phenomena: Message, Messenger, and Messaging

The discussion of threats to discourse on social media covers many different phenomena, which tend to be discussed all at once. The Council of Europe’s report on Information Disorder provided important guidance for this debate, but it had a strong focus on “the message”, i.e. the content that is spread online.
Symptoms of a strong focus on the message are:

  • The popularity of the ‘fake news’ label
  • The focus of many discussions on fact-checking as a remedy
  • The centrality of freedom of speech in the debate

Thus, for example, the European Commission established an Expert Group on “Fake News and Online Disinformation”, which defined disinformation as “all forms of false, inaccurate, or misleading information” – in other words, a message problem. Consequently, its strategy puts fact-checking at the center of its response.

The focus on the message is too narrow. Content may be unproblematic while the way it is spread is not. For example, the American ‘Black Lives Matter’ movement is a legitimate pressure group. When Russian agents bought ads to support it, there was no particular problem with their messages. The problem was the messenger: a foreign country secretly amplified the voice of a domestic pressure group to exacerbate tensions. When political parties resort to building elaborate bot networks to amplify their messages, the problem is often not the message (it may be unproblematic), but the manipulation of the perception of popularity. The messages become visible and show up as ‘trending’, suggesting that an issue has much popular support. To use a comparison from the offline world: we may not be against a street demonstration, we may even join it, but we would be disconcerted if we discovered that most demonstrators were robots pretending to be humans.

It is noteworthy in this context that Facebook does not consider messaging/distribution to be the main problem (though it has changed policies in this area, too). For example, the company believes that it largely controls social bots (where they are not hybrids of human and automated action) by deleting such accounts. Its public reports now often focus on the take-down of inauthentic, orchestrated accounts. But Facebook says little about its own decisions on ranking and what effects they may have on the display of content and thereby on the shaping of public opinion.

To distinguish these levels more clearly, we propose to break down the discussion of threats into three components, with the third differing from the Information Disorder report.7

Message/content: The message is the content provided. It may be text, but it can also be a picture, a meme, a film or a recorded message. False messages are part of disinformation, and their review and possible debunking is the realm of fact-checkers. Hate speech, intimidation and incitement to violence are also problems of the message. The policies of online companies deal extensively with content, for example the prohibition and take-down of messages containing terrorist content or nudity on Facebook.

Messenger: The person, group or organisation that created and published or posted a message. This may include several players, for example when one person creates a message but another publishes it. Here it is important to look at phenomena such as the authenticity of messengers, their identity or anonymity, their location and their motivations.

Messaging/distribution: How is a message distributed? Here one would look at issues like the artificial boosting of content by gaming algorithms (bot networks, tweaking of Google search results), the way algorithms rank content (Facebook, Twitter, YouTube), recommend content (YouTube) or display it (Google), as well as the boosting of content for money (targeted ads).

The third component (messaging/distribution) is useful for discussing phenomena like the algorithms that decide the ranking of posts, their manipulation (e.g. through social bots) and boosted content (targeted ads). There may be problems of distribution even if the message is not disinformation and the messenger is not problematic. Such problems include the infamous filter bubbles, the promotion of sensationalist content (even if it is not disinformation) or the trade in data used to target people (even if the messages and messengers are as such not problematic).

The table below shows in more detail how the various phenomena relate to these specific levels.

The breakdown into the three Ms – message, messenger, messaging – shows that some problems of the message can only be addressed by a focus on the messenger and the messaging. For example, it is not forbidden to lie, either online or offline. Nobody should be prohibited from claiming that the Earth is flat or that the Pope endorsed Donald Trump. However, if algorithms favour such attention-grabbing false messages, so that they are shown to many people, the problem can only be addressed at the level of messaging/distribution.
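
As a purely illustrative aside, the separation of the three Ms can be made concrete: a monitoring project could tag each observed incident along the three dimensions independently, since each can be problematic on its own. The example incidents and field names below are hypothetical sketches, not an agreed taxonomy.

    # Hypothetical sketch of tagging incidents along the "3M" framework
    # (message, messenger, messaging); the incidents listed are invented examples.
    from dataclasses import dataclass

    @dataclass
    class Incident:
        description: str
        message_problem: bool     # e.g. false or hateful content
        messenger_problem: bool   # e.g. inauthentic or hidden foreign actor
        messaging_problem: bool   # e.g. artificial amplification by bots or paid boosting

    incidents = [
        Incident("Foreign agents buy ads supporting a genuine domestic movement",
                 message_problem=False, messenger_problem=True, messaging_problem=True),
        Incident("Bot network pushes an otherwise legitimate slogan into the trends",
                 message_problem=False, messenger_problem=False, messaging_problem=True),
        Incident("Fabricated quote attributed to a candidate, shared organically",
                 message_problem=True, messenger_problem=False, messaging_problem=False),
    ]

    # A problem can exist at any of the three levels independently of the others.
    for i in incidents:
        levels = [name for name, flag in (("message", i.message_problem),
                                          ("messenger", i.messenger_problem),
                                          ("messaging", i.messaging_problem)) if flag]
        print(f"{i.description} -> problem at: {', '.join(levels)}")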

5. The Neglected Human Rights to Political Participation

Using the framework of the ‘3Ms’ also exposes blind spots in the legal debate. We support putting human rights at the center of the debate, as argued by many others. As mentioned above, online discourse and its manipulation are human-made; the law provides a framework to discuss their effects and ways to shape them. Laws are human-made too: they are debated and consulted on, and they can change over time.

As digital content and social media are mostly global in reach, international human rights law provides an obvious starting point.8 Yet the international law debate focuses mostly on the freedom of expression9 and, to a lesser degree, on the right to privacy. Neither of these two rights provides much guidance on many questions of messaging and distribution, in particular on algorithmic preferences for certain content over other content.

The unexplored aspect is the right to political participation, including the right to vote and to stand as a candidate in elections, enshrined in Article 25 of the International Covenant on Civil and Political Rights (ICCPR). Looking at the context in which people participate in politics, Article 25 also covers the forming of opinions, not only their expression.

The UN Human Rights Committee (the monitoring body of the International Covenant on Civil and Political Rights) noted in its General Comment on Article 25:

“Persons entitled to vote must be free to vote for any candidate for election and for or against any proposal submitted to referendum or plebiscite, and free to support or to oppose government, without
undue influence or coercion of any kind which may distort or inhibit the free expression of the elector’s will. Voters should be able to form opinions independently, free of violence or threat of violence,
compulsion, inducement or manipulative interference of any kind.”10

The mention of undue influence, distortion, inhibition and manipulative interference points to the relevance of Article 25 for the quality of public discourse. Indeed, election observation missions have found elections to be problematic not because of technical flaws or fraud in voting, but simply because the opposition did not get any (or only negative) coverage in the media.

Given that one of the major concerns about online campaigns is manipulation, such as inauthentic behaviour, Article 25 is an important point of reference. Reducing online manipulation is not a restriction of rights; it is a protective measure to secure political participation.

Importantly, the non-manipulation language should not be read as meaning that Article 25 would justify any kind of deletion of content or prohibitions. It does, however, provide a basis for discussing whether social media companies’ (algorithmic) decisions – for example on ranking posts or on registering users – enable manipulation or make it more difficult. Yet so far it has barely entered legal debates, which have focused more on the nexus of message and freedom of expression.11

A balanced approach would therefore need to take into account freedom of expression, the right to privacy and the right to political participation across the three levels of international law, national legislation and the self-regulation of companies (or ‘co-regulation’ where states are involved in defining codes of conduct and similar commitments).12

6. Conclusion

The transformation of the public sphere by the digital space in general, and social media in particular, raises major questions about how to conceptualise the problem for democracy, the phenomena that need to be addressed and the regulatory framework for responding to them.

In many instances the problem is described too narrowly (electoral interference through false content), when a full debate needs to look at all levels of democratic discourse, all of the time, and not only during elections. It needs to take into account the different challenges that arise at the levels of message, messenger and messaging, and to look at these through the lens of multiple human rights provisions. There are not many easy and obvious answers on what should be done to make online discourse more compatible with democracy, but a clear framework for discussion should help get there.

References:

  1. https://newsroom.fb.com/news/2018/01/hard-questions-democracy/
  2. A strictly chronological display of a feed could be considered natural, but social media platforms do not work that way. Messages are displayed chronologically on WhatsApp, hence there is no debate on algorithmic sorting in that case.
  3. See the report by Claire Wardle and Hossein Derakhshan, which brought more clarity into the debate. This paper builds on the report while proposing a different emphasis on some issues. Claire Wardle and Hossein Derakhshan, ‘Information Disorder – Toward an interdisciplinary framework for research and policy-making’, Council of Europe, 2017. It can be downloaded here: https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c.
  4. Joseph Schumpeter’s ‘competitive struggle for votes’ is considered the narrowest definition, but even that is about much more than just voting. Many elections do not even qualify for this minimum definition due to the absence of real competition.
  5. For a detailed overview, see ‘Strengthening International Law to Support Democratic Governance and Genuine Elections’, April 2012, Democracy Reporting International and The Carter Center. It can be downloaded here: https://democracy-reporting.org/dri_publications/report-strengthening-international-law-to-support-democratic-governance-and-genuine-elections/.
  6. For example, the Charter of the BBC states: “The Mission of the BBC is to act in the public interest, serving all audiences through the provision of impartial, high-quality and distinctive output and services which inform, educate and entertain.” One of its purposes is “to provide impartial news and information to help people understand and engage with the world around them (…)”.
  7. Page 7 of the report.
  8. Mark Zuckerberg asked for new global regulation – that is not likely to happen anytime soon. Major powers like the US, the EU, China or India do not see eye to eye on fundamental questions. Existing international law is a global framework that can guide the discussion on regulation by states as well as attempts at self-regulation by the companies.
  9. Countless policy documents on freedom of expression and the internet have been adopted at the international level in recent years. For more on this, see New Frontiers.
  10. UN Human Rights Committee, General Comment 25, 1996, point 19.
  11. Even new draft guidelines on public participation by the Office of the United Nations High Commissioner for Human Rights merely note: “Information and Communication Technologies (ICTs) could negatively affect participation, for example when disinformation and propaganda are spread through ICTs to mislead a population or to interfere with the right to seek and receive, and to impart, information and ideas of all kinds, regardless of frontiers.” (point 10). They do not make a link to opinion formation, unintentional manipulation and the normative guidance that may emanate from Article 25.
  12. Important cross-cutting rights issues affect all three rights mentioned above: non-discrimination, the right to an effective remedy and ‘business and human rights’ obligations. We will explore legal issues in more detail in another paper.

 

Download the Briefing Paper here.

Photo credit: marshal anthonee/Flickr

Findings about disinformation in the European Parliament (EP) elections


There is little evidence of massive, covert foreign interference in social media debates before the EP elections in May 2019. While some experts found traces of Russian activities, others pointed to the role of US-based extremists in spreading anti-democratic campaigns in Europe. However, it seems most problems resulted from activities by EU-based groups or individuals.

Monitoring of social media should not stop with the EP elections. The EU policy-making process will restart and will remain a target of disinformation. Many campaigns are already ongoing to undermine effective European policy responses to pressing problems, for example by denying the climate emergency. As the debate on the Migration Pact showed, massive campaigns can suddenly emerge, seemingly out of nowhere.

These were the principal conclusions of a roundtable that DRI convened in Berlin on 19 June. Participants included 13 organisations and foundations that monitored aspects of the online debate, as well as representatives from Facebook and Twitter. We compared findings, methodologies and the techniques of disinformation that were monitored. Organisations and experts represented included Avaaz, the Oxford Internet Institute, the Institute for Strategic Dialogue, Cardiff University, the Weizenbaum Institute, Wahlbeobachtung.org, Correctiv, Who targets me?, Bakamo.Social, FactcheckEU, Luca Hammer, the Vodafone Foundation and the Open Society Foundation. The Mercator Foundation kindly hosted the event and the German Federal Foreign Office supported the roundtable financially.

The participants agreed that social media monitoring organisations need to develop a more robust methodology for their work, including some basic tenets of transparency. Too often, social media monitoring reports include no information on key research parameters (period of observation, sample size, which platform was observed and which tools were used). Journalists mostly lack the skills to assess the quality and significance of such reports. The participants appreciated that Twitter and Facebook took part and were ready to answer the many questions that arose once more in these elections.
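
As an illustration of the kind of basic transparency the participants called for, a monitoring report could disclose its key research parameters in a simple, machine-readable form. The field names and values below are hypothetical, not an agreed standard.

    # Hypothetical example of the minimum research parameters a social media
    # monitoring report could disclose; field names and values are illustrative only.
    study_parameters = {
        "platforms_observed": ["Facebook", "Twitter"],
        "observation_period": {"start": "2019-04-01", "end": "2019-05-26"},
        "sample_size": 25000,  # e.g. number of posts or accounts analysed
        "sampling_method": "keyword-based collection of public posts",
        "tools_used": ["CrowdTangle", "custom scripts"],
        "languages_covered": ["de", "en"],
    }

    def discloses_basics(report: dict) -> bool:
        """Check that a report states the basic parameters named above."""
        required = {"platforms_observed", "observation_period", "sample_size", "tools_used"}
        return required.issubset(report)

    print(discloses_basics(study_parameters))  # True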

DRI will shortly publish a full report of the meeting, which was held under the Chatham House Rule. During the event, DRI and the Open Society Policy Institute for Europe launched a joint report about how election observation missions could include monitoring of social media in the scope of their work and what social media analysts can learn from classical election observation.

DRI work on disinformation and elections

This roundtable on disinformation and elections was the second convened by DRI. The first took place on 29 March (https://democracy-reporting.org/de/many-groups-monitor-social-media-discourse-related-to-european-parliament-elections/) and was focused on German civil society organisations and monitoring tools.

DRI also chairs a working group to develop a methodology for social media monitoring, to be published shortly.

This event was financed by the German Federal Foreign Office. DRI is exclusively responsible for its agenda and content.

Facebook’s Ad Library for European Parliament Elections – Seven Steps to Make it More Useful

Introduction

There has been much concern about the abuse of paid political advertising on Facebook. To provide more transparency, the company has now opened an ad archive that should include all paid ads by political parties and other political ads ahead of the European Parliament elections in May 2019.

Facebook registers users who can post political ads, such as political parties, in each EU member state (rather than for the entire EU at once). The somewhat complicated registration process is motivated by the desire to ensure that only legitimate actors buy political advertising. For this reason, there are not many ads in the archive yet.

Nevertheless, it is already clear that the archive would be more useful for election observers, analysts and researchers – and therefore more transparent – if Facebook introduced the following improvements:

  1. Allow an EU-wide search

Currently one can only search for ads country by country. As the EP elections take place across the EU, it should be possible to search across the EU. While a lot of advertising will be in the various languages of EU member states and not run EU-wide, some ads may be posted in several member states (e.g. in English or where common languages exist). For example, the party groups in the European Parliament may run Europe-wide ads (the ‘Socialists and Democrats Group in the European Parliament’ is already doing so). Researchers should be able to see whether any one organisation runs campaigns across the EU without having to search the archives of all member states separately.

  2. Remove or filter out commercial content

Currently the ad library is full of commercial content, which makes searches more difficult. If one wants to know whether any political campaign in Germany is registered with a name related to the contentious political theme of “Diesel”, Facebook shows ads of the company Diesel that have no political relevance. It should be possible to filter these out of a search on political advertising.

  3. Allow a keyword search for ad texts

Currently, one can only search the archive for organisations that posted ads, such as political parties. If one wants to see all the ads bought by the German Social Democratic Party, they are easy to find. However, issue ads by interest groups are equally important for researchers – and how can one find them without knowing which interest groups post political ads? One answer is to allow a keyword search that brings up all ads that use a given word, say ‘climate’ or ‘immigration’. Currently that is not possible (although the search bar suggests that topics can be searched).

  4. List all actors alphabetically

Facebook should list all organisations that post political ads in the European Union in alphabetical order, with the option of sub-dividing by EU member state. Otherwise any search amounts to fishing in the dark: a major campaign claiming, say, election fraud may go unnoticed because nobody searched for those terms.

  5. Allow more detailed search

Facebook should provide more detail. Currently the US ad library provides far more detail than the European one.

Example of information on an ad in Europe: [screenshot omitted]

Example of information on an ad in the U.S.: [screenshot omitted]

  6. Provide more information

The ad library currently provides very little information on how it works, how it can be used, what is shown and what updates may be made. Facebook should add more detailed information to make its use and purpose easier to understand.

  7. Allow API access

Facebook should allow researchers from certified charitable non-profit organisations access to the API so that they can carry out more systematic and automated research.
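
As a sketch of what such access could enable, the snippet below queries the Ad Library API’s ads_archive endpoint for political ads mentioning a keyword. It assumes the researcher holds a valid access token; the endpoint, parameter and field names reflect the API as publicly documented around 2019 and should be checked against Facebook’s current documentation.

    # Sketch of a keyword query against Facebook's Ad Library API ("ads_archive").
    # Assumes the researcher has completed Facebook's verification and holds a valid
    # access token; parameter and field names should be verified against the current
    # documentation, as they may have changed.
    import requests

    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

    params = {
        "search_terms": "climate",                 # keyword search across ad texts
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": "['DE']",          # one country at a time (see point 1 above)
        "fields": "page_name,ad_creative_body,ad_delivery_start_time,spend,impressions",
        "access_token": ACCESS_TOKEN,
    }

    response = requests.get("https://graph.facebook.com/v3.3/ads_archive", params=params)
    response.raise_for_status()

    for ad in response.json().get("data", []):
        print(ad.get("page_name"), "-", (ad.get("ad_creative_body") or "")[:80])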

 

For questions, contact Rafael Goldzweig, Social Media Research Co-ordinator, [email protected] 

 

Cover photo by Con Karampelas via unsplash.com