DRI gathered European policymakers, civil society, academia and tech representatives for a transatlantic, multi-stakeholder discussion on deepfakes and democracy held on 29 and 30 October 2020. DRI’s Madeline Brady shares some of the highlights from the conversation, held under the Chatham House Rule as part of a project financed by the German Federal Foreign Office.
Until now, no significant political disinformation based on deepfakes has surfaced in the EU. How big is the risk that it will? Could it significantly undermine the integrity of an election in an EU member state? What could and should be done?
We looked at what is happening in the US elections to try to understand what could happen elsewhere in the future. We discussed the realistic risks to EU countries and possible mitigation strategies. This debate feeds into DRI’s upcoming paper on deepfakes and elections.
Key takeaways on media manipulation in the US 2020 elections
Polarisation primes for confirmation bias. The US is extremely polarised, so even when presented with evidence that something is not real, people’s opinions may not change. Examples from the 2020 election include spliced videos of Joe Biden taking a moment of silence, used out of context to promote the narrative of “Sleepy Joe”, and false videos of poll workers colouring in ballots, which promote a narrative of voter fraud. These examples are only effective when an audience is already primed to believe them. This highlights the fact that the broader media ecosystem matters when considering the threat of deepfakes.
Let’s re-think the jargon. Cheapfakes might be just as effective in deceiving users as deepfakes, so such jargon does not matter to everyday users. These terms may even confuse or worry people. A term like “digital forgery” might be more self-evident, especially when it comes to labelling content on platforms. More behavioural research is needed to understand how people interpret and respond to content labels. Does the jargon provide users with the information they need, or does it create more confusion and less trust in online sources? A common labelling language across platforms would provide clearer signals to users.
Complexity means AI will not solve it all. At this point in time, no AI model works perfectly to detect manipulated media. The models used by social media platforms have clear strengths and weaknesses: they can easily detect nudity, for example, but content such as satire poses added complexity that cannot be detected so easily. As a result, companies cannot leave all of the content moderation work to machine learning models. Humans remain essential for identifying nuance.
Strong journalistic norms are critical. Norms are needed to report on potential manipulated media and misinformation more broadly without amplifying it. For example, when then-French presidential candidate Emmanuel Macron’s emails were leaked and shared on social media in 2017, French media placed a blanket ban on reporting them. This was because they did not have time to verify the content of the leaks and the timing made clear that this was an attempt to manipulate the election.
Is Europe ready for the deepfake threat?
Increased understanding and specific measures are needed, but a framework is already in place. Deepfakes are not an isolated new element but are considered part of the EU’s wider framework for fighting misinformation. New threats are always emerging, for example the use of audio messages in Belarus to mislead protesters or the creation of fake political news pages. The entire threat landscape, including the interplay between domestic and foreign actors, should be considered. Beyond this, a better understanding is needed of what the EU can do to prepare for the deepfake threat more specifically.
Create protocols and communication channels for rapid reaction. Governments should create a response toolbox that can be adapted to the situation at hand. Such a toolbox would require a methodological approach and further drilling down into hypothetical scenarios. Governments need effective communication strategies to report on facts or to share assessments made by government institutions. If deepfakes flood platforms at scale, government institutions monitoring the issue will need to define thresholds for evaluation before they are overloaded (e.g. looking at all incidents of deepfakes, or focusing on the most dangerous ones, such as those with direct security implications). This solution assumes that governments value truth and that the media are independent. As a result, the quality of democratic institutions, which varies across EU member states, must be considered.
Raise the general level of resilience. The EU already has some structures and initiatives in place to promote media literacy and social cohesion. Further investment in such programmes is important to prevent a culture of disbelief by default, in which people no longer trust anything they see. In particular, such programmes should target not only school-age children but also older internet users. When it comes to video media, a more specific curriculum or set of tools may be needed to help internet users identify credible sources.
Empower research and civil society cooperation. More research is needed to understand the problem of deepfakes and build technical solutions. The European Digital Media Observatory (EDMO) holds great potential to shed light on the use of manipulated media in Europe and build solutions. However, the institution is still new and will require financing and time to grow. Homegrown innovation should also be fostered within the private sector. Several startups like Netherlands-based Sensity are working on monitoring deepfakes and developing solutions within Europe.
In summary, it is quite clear that disinformation trends are changing quickly, and actors around the world can learn from each other to identify emerging trends. By bringing together various actors and stakeholders, panels such as this one provide a valuable opportunity for a collaborative exchange of solutions.
We thank our panellists from TikTok, WITNESS, Partnership on AI, European External Action Service (EEAS) and Global Public Policy Institute (GPPI) for their contributions.