
Initiatives to counter misinformation in the form of deepfakes (hypertrucages)

Misinformation has existed since the dawn of time, but the phenomenon has never loomed as large as it does today. We are constantly surrounded by an endless stream of news, videos, and posts, so much so that information overload on the Web has become the norm and even an operating principle. Technology has become so easy and accessible that we use it without considering the human and social consequences of its unethical uses. Where the production of misinformation was once limited to a circle of insiders, it now involves any user. Finally, what some groups may regard as freedom of expression can feed the fabrication of lies, to which anyone can contribute, sometimes without even knowing it.

This site therefore offers a non-exhaustive but representative inventory of initiatives taken between 2019 and 2022 to counter this trend.

Three major types of measure seem to be emerging at different levels of society.

A first measure lies at the level of individual responsibility, instilled mainly by educational organizations. Faced with this cynical and unregulated system of disinformation, our first responsibility is to understand how it works in order to distance ourselves from it. What drives the flow of information? How is it manufactured and distributed, and most importantly, are we wise to it?

A second measure relies on artificial intelligence. Indeed, an even more insidious and dangerous form of disinformation is the deepfake (hypertrucage). These synthetic media are not always detectable by the naked eye. The glut of images and videos contributes to their banalization and dulls our critical eye toward them. Tracing modified media will therefore become a priority and a major issue on the Web. Detection technology is evolving very rapidly, and the articles listed here may already be out of date.
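One low-tech idea behind much of this tracing work is the perceptual hash: a short fingerprint of an image's coarse structure, so that a lightly edited copy hashes almost identically to the original. The sketch below is a minimal illustration of the average-hash variant over toy grayscale grids, not any specific detector mentioned on this site; the 4x4 "images" and thresholding scheme are illustrative assumptions.

```python
# Minimal average-hash sketch (an illustration, not a production detector).
# A perceptual hash summarizes an image's coarse structure; near-identical
# images yield near-identical hashes, so a small Hamming distance between
# two hashes suggests one image is a lightly modified copy of the other.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255), e.g. a downscaled thumbnail."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # One bit per pixel: 1 if brighter than the mean, else 0.
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes of equal length."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 4x4 "images": the second is the first with one pixel brightened.
original = [[10, 10, 200, 200],
            [10, 10, 200, 200],
            [200, 200, 10, 10],
            [200, 200, 10, 10]]
edited = [row[:] for row in original]
edited[0][0] = 250  # a small local edit

distance = hamming(average_hash(original), average_hash(edited))
print(distance)  # a small distance suggests the same underlying image
```

Real systems (and the detection tools discussed in the articles listed here) work on full-resolution media and far more robust features, but the principle of comparing compact fingerprints is the same.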

Finally, a third measure lies at the level of the law. Reflection on the ethical issues around the use of deepfakes is only beginning to emerge, and existing legal avenues cannot cover all the harms. While citizens' awareness can prevent careless uses of information, legal tools to punish malicious acts and rules to regulate the use of technological tools are still lacking.

Directory of initiatives

Knowing the system, the workings of image production, and the algorithms that filter what information appears on our screens is to build a bulwark against their harm. Recognizing the hallmarks of a dubious media item or truncated information requires sustained attention, in contrast to the passive attitude the media cultivate in the spectator. Whether through school, home education, or community settings, media literacy initiatives are under way in many places.

Beware of criminals pretending to be WHO

The COVID-19 pandemic has led to an increase in cyberattacks such as phishing. This type of attack is carried out by sending messages that spoof the WHO email address in order to solicit donations and thereby harvest users' personal information, such as banking details. This page explains what a fraudulent message looks like, how to detect it, and how to report it.

How to report misinformation online

In May-June 2020, WHO partnered with the UK government to launch an awareness campaign about the spread of misleading or false information on the COVID-19 pandemic. To help, users are invited to report any misleading information they detect online. For each social networking platform, links lead to help pages explaining how to report suspicious content or contacts. In this way, WHO aims to limit the presence of information of unreliable origin.

Les Nations Unies lancent une initiative mondiale pour lutter contre la désinformation 

In May 2020, in the context of COVID-19, the United Nations launched the “Verified” campaign to combat the spread of false information. At a local level, agencies in each country will recruit “information volunteers” who will be responsible for sharing verified information in their communities and thus counteract the abundance of misinformation. Themes such as the climate crisis, the causes of poverty and inequality will also be included in the campaign.

Let’s flatten the infodemic curve

The COVID-19 pandemic has created an "infodemic", i.e. a continuous flow of official information, scientific articles, and opinions about the virus. How can one judge whether a video, an article, or even a statement on social networks is trustworthy? The WHO proposes seven steps for verifying content before deciding whether it can be trusted. WHO also acts as a relay for official health organizations and government or private institutions to ensure that their health messages appear first on the major digital platforms.

Continuum de développement de la compétence numérique; Thème 11: Développer sa pensée critique à l’égard du numérique 

This document produced by the Ministry of Education and Higher Education describes the framework and milestones for learning digital skills in schools. Digital well-being, identification and appropriate use of resources, and work and collaboration are the pillars of training in this area. The development of a critical eye toward technology, its limits, and its abuses is also pursued through topics such as phishing techniques, algorithms, the status of influencers, and social networks.

Désinformation : quelles solutions ? (webinar)

Quebec's Chief Scientist, Rémi Quirion, and the FRQ have been involved in the fight against misinformation for several years, particularly since the beginning of the COVID-19 pandemic. This exchange between Rémi Quirion and several science journalists took place in March 2021 and addresses several questions. Should the circulation of false information be regulated by law, and how? What is the responsibility of the major media and the scientific community in the face of societal issues that are particularly vulnerable to misinformation? In the immediate term, training in good practices on social networks and funding for research activities would be appropriate responses.

Digital Citizen Initiative – Online disinformation and other online harms and threats

The Digital Citizen Initiative is a program put in place by Canadian Heritage to combat misinformation. It supports training, education, and research projects aimed at developing healthy digital media practices among citizens and fostering a critical view of information content. As part of this effort to support democracy, Canadian Heritage provided funding in 2019-2020 to deploy initiatives aimed at youth and local communities and to counter misinformation about COVID-19 and the Ukraine crisis.

Éducation aux médias et à l’information 

Media and information education (MIE) is part of the French national curriculum. This document clarifies the founding concepts of MIE, specifies its anchoring in all disciplines, and proposes useful avenues for its pedagogical implementation in connection with each school's or institution's plan. Critical thinking and education in the responsible use of media are considered essential skills for facing the challenges of the digital world of the 21st century.

Évènement « Combattre la désinformation au Québec » (webinar) 

At the initiative of #LaSciencedAbord, this panel, composed of the Chief Scientist of Quebec and several science communicators, proposes solutions to counter and prevent misinformation. The development of critical thinking, education in the scientific method, the restoration of trust in public institutions, and the legal framework for hate speech are discussed.

Forum sur la lutte contre la désinformation – Parler de science dans l’espace public : facile ?

(webinar – Part 1)

(webinar – Part 2)

In December 2021, the 7th forum organized by the Fonds de recherche du Québec and the Palais des congrès de Montréal focused on the phenomenon of disinformation. What are the media's responsibilities in disseminating information? Should they limit content? How can the flood of sometimes falsified messages and videos circulating on social networks be contained?

Helping Citizens Critically Assess and Become Resilient Against Harmful Online Disinformation 

In July 2019, Canadian Heritage announced the investment of significant funds in four programs to support actions to develop young Canadians’ knowledge of their history and heritage. The goal of these initiatives is to foster general literacy, sharpen critical thinking skills and equip citizens against online misinformation. An additional investment to support a research program on digital citizenship will support the development of policies to counter online misinformation.

International Engagement Strategy on Diversity of Content Online

Canadian Heritage has taken steps to align itself with the international trend to promote diversity of content online, which is inspired by the UNESCO Convention on Cultural Diversity. In 2018 and 2019, meetings were held between partners from the private and public sectors and civil society to discuss an international strategy to protect freedom of expression, encourage citizen participation and ensure the quality of digital content.

La plateforme francophone des initiatives de lutte contre la désinformation

Through this platform, the Organisation internationale de la Francophonie aims to promote and give visibility to the many initiatives to combat misinformation across the French-speaking world, presented in the form of a directory.

Science news media

3 astuces pour distinguer l’information de l’opinion 

In the information products we read, listen to or watch, opinions and facts are mixed together and not always clearly distinguished. Yet writers have a responsibility to cite the sources of their information, to verify it, and to clearly distinguish between their ideas, the facts and what they are quoting. This article provides a few keys to help readers recognize the aspects of journalistic language that will help them determine whether a story is opinion or fact.

6 astuces pour éviter de propager la désinfo 

It is the responsibility of social network users to help stop the spread of false information. The best weapon against this tendency is critical thinking, applied through six tips, including: staying vigilant, verifying sources, refraining from sharing information one does not fully understand, and consulting a reference authority as a second source.

Anatomie des fausses nouvelles et désinformation virale (video)

Every citizen is a potential target of disinformation but also an unwitting propagator of fake news. An image, a video, an imitation, a post, a hoax, a decontextualized scene, a simple opinion, on any subject of society, can turn out to be disinformation. The best way to minimize the circulation of this “infox” is to be vigilant and identify it.

Attention aux images et aux vidéos (video) 

Three questions should arise when faced with an image or video seen on social networks: does it come from a reliable source? Is there a caption? Can it be cross-checked against other sources? A reverse Google image search using the image or its link is a useful way to trace its source and confirm possible manipulation. The process can be more complex with memes, which are by nature modified images and among the most likely to spread disinformation.
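The reverse-search step above can be scripted for batches of suspect links. The sketch below builds a search-by-image URL with Python's standard library; note that the `searchbyimage` endpoint and its `image_url` parameter are an assumption based on Google's publicly observed URL pattern, not an official, documented API, and the pattern may change.

```python
# Sketch: constructing a reverse-image-search URL from an image link.
# ASSUMPTION: the "searchbyimage" endpoint and "image_url" parameter are
# Google's informal, undocumented URL pattern, not a supported API.
from urllib.parse import urlencode

def reverse_search_url(image_url: str) -> str:
    """Return a URL that opens a reverse image search for image_url."""
    base = "https://www.google.com/searchbyimage"
    # urlencode percent-escapes the embedded URL so it survives as one value.
    return base + "?" + urlencode({"image_url": image_url})

url = reverse_search_url("https://example.org/photo.jpg")
print(url)
```

Opening the printed URL in a browser shows where else the image appears on the web, which is the cross-checking step the video recommends.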

Bulles de filtres et chambres d’écho (video) 

There is real danger in the personalization of information, as it locks us into a biased view without our being aware of it. However, users can expose themselves to more diverse content by changing their account settings and clearing their search history.

Comment fabrique-t-on l’information 

Agence Science-Presse has developed eight thematic sheets for secondary and primary school teachers on the theme of information production. They contain discussion topics and workshops that shed light on the different aspects of information (source, opinion, facts) and its various circulation channels (sites, media, social networks).

Des astuces pour authentifier une image repérée dans le Web

Emotions can sometimes get the better of us when we are confronted with a shocking image. A few questions should nonetheless arise automatically and trigger a validation process, such as a reverse image lookup. What's more, by examining the image, can we deduce that the media outlet publishing it is reliable? Who is relaying the information? Is the source indicated?

Études scientifiques : lesquelles sont les plus « solides » ? 

Not all scientific studies use the same methods of demonstration. So, even if they aim to prove a result, it’s important to be aware of how the research was carried out. This article explains the various methods of scientific observation and the degrees of robustness associated with them.

« Fake news » : le nouveau visage d’un vieux problème (video) 

The use of disinformation as a weapon of war is not new. However, the development of the media has increased the speed with which rumors can circulate. Paradoxically, access to these sources of information also enables us to be better informed. That’s why it’s important to learn how to distinguish between what’s false and what’s true.

Indices pour repérer les fausses nouvelles (video)

Facebook and Twitter aren’t the only ones who need to combat the spread of fake news. Reading the media before sharing it, checking the date and author, the name of the media outlet, the source of the information and whether the information is based on fact are just a few of the verification steps that can help flush out the malicious intentions of spreaders of false information.

La balado – Dépister la désinfo (podcast) 

Agence Science-Presse offers four podcasts exploring the theme of misinformation from various angles: disinformation in times of pandemic, conspiracy theories, natural health, and algorithms. COVID-19 has given disinformation a new face and exacerbated the anxiety-inducing tendencies of social networks. These four audio capsules take stock of this historic phenomenon.

Le biais de confirmation c’est quoi ? 

Human beings naturally tend to retain only the information that suits them and to reject whatever goes against their beliefs. Social networks and digital platforms aggravate this situation, as they confront us with content filtered by our choices and social relationships. However, these habits and automatisms are not inescapable if we agree to open ourselves to a plurality of content and remain critical of what is presented to us as the truth.

Le Détecteur de rumeurs 

Certain topics in the news can be the subject of rumour and elaboration among the general public. This section takes a look at some of these controversial topics, traces their evolution and unravels the truth from the false. The Rumor Detector shows how, with a little research, we can trace the origin of a piece of information, cross-check it with other sources and validate it scientifically.

Les fausses nouvelles voyagent plus vite que les vraies (video) 

The novelty of fake news makes it spread rapidly. What's more, social networking algorithms favor content that engages users. This phenomenon could nonetheless be curbed by modifying these algorithms and encouraging users to choose reliable sites.

Nos ateliers de formation 

Since 2017, journalists from Agence Science-Presse have been providing training in critical reading of digital media to schools and the general public. Identifying fake news, understanding algorithms and the problem of misinformation in science are the main focuses of these workshops.

Personne n’est à l’abri des fausses nouvelles (video)

Social networks, which are designed to entertain and where an overload of information is constantly circulating, are fertile ground for the spread of fake news. Distinguishing between credible and fabricated information, and discerning advertising, requires skill, knowledge and vigilance on the part of most users.

Pièges technologiques et désinformation 

Algorithmic sorting of information, content personalization, data harvesting, filter bubbles, and instant sharing are the technological traps that tend to confirm our beliefs. The information we see or pay attention to is not necessarily representative of reality: it is sorted according to rules specific to the Web environment.

Si ça confirme ce que je pense, ça doit être vrai !  (video)

Psychological research shows our tendency to seek out information that confirms our thoughts. Our brains make shortcuts to facilitate our choices and decisions, especially in an information-overloaded environment. In this way, our cognitive biases can be responsible for our poor processing of information. It is therefore sometimes necessary to distance ourselves from our beliefs and prejudices to avoid being influenced by false information.

Trucs de pro pour vérifier l’info (video)

Fact-checking journalists do more than simply trace the source of a piece of information and identify its author and media outlet. Cross-checking sources and running keyword searches on references found on Wikipedia are essential aspects of this parallel research when establishing the legitimacy of an organization or a person.

#VraimentVrai – Comment vérifier l’information (video)

A series of four video vignettes in which science journalists share thoughts on their profession and tips for verifying information. Four distinct but related themes are addressed: verifying information, becoming aware of our mental shortcuts, social networking algorithms, and distinguishing opinions from facts.

News media

Infocalypse : la propagation des hypertrucages menace la société

A team of researchers and students in educational technology and design at Université Laval and Concordia University has examined the notion of digital agency as a counter to the phenomenon of deepfakes. This approach focuses on developing citizens' skills for dealing with misinformation and has the advantage of attacking it at its source. A growing number of citizens' movements and associations are running awareness campaigns along these lines. Training in the challenges posed by new digital technologies is an essential way of fostering resilience. It must go hand in hand with other measures, such as legislation, digital platform policies, and detection technology.

Launching JHR’s program on “Fighting Disinformation through Strengthened Media and Citizen Preparedness in Canada.”

On September 27, 2019, Journalists for Human Rights (JHR) announced the launch of its latest program, "Fighting Disinformation through Strengthened Media and Citizen Preparedness in Canada". The project is supported by Canadian Heritage and aims to train journalists to properly identify misinformation online and to work with the general public to develop a critical sense of digital information content.

Philippines election : ‘Politicians hire me to spread fake stories’

In May 2022, the presidential campaign in the Philippines gave rise to a full-blown disinformation campaign. Social media marketing consultants were paid by candidates to lead an offensive aimed at sullying opponents' images by creating fake profiles, spreading fake news, hijacking images, and running propaganda in Facebook groups. Even though major anti-disinformation movements sprang up in response, and platforms such as Facebook, Twitter, YouTube, and WhatsApp deleted many dubious accounts, these efforts never seem to be enough to put an end to the problem.

Youth media

Comment combattre la désinformation  

The Decryptors team specializes in hunting down false information circulating on social networks. This fun, interactive site is aimed at young people, making them aware of the dangers of misinformation brought about by technology while putting that same technology to use. A chatbot initiates a discussion with visitors, introducing them to methods for detecting falsified videos, the hijacking of images through decontextualization, bots, and the rapid spread of fake news.

How to spot Fake News Online

A simple, concise summary designed by News Media Canada to help young people develop their media skills. The TRUE tool details four steps for validating information: check the source, judge the source's objectivity, look for other sources, and check the article's date. Other web resources on how to avoid spreading false news are also listed.

Survivre au web (même quand on est aveugle)(podcast)

Le guide de survie des Débrouillards is a series of science podcasts for young people. In this capsule, a brief history of the Web since 1990 is told. Even if the Internet allows artists to unleash their inventiveness, it’s also fraught with dangers: computer security, violation of privacy, echo chambers on social networks. Listeners will gain a better understanding of how the Web works and its hidden aspects, and receive a few tips on how to outsmart the search algorithm.

Vérifié ! (video)

Verifying the source of a piece of information and its author, distinguishing an opinion from a fact, distancing yourself from your emotions: the steps of the perfect fact-checker are retraced and explained in five vignettes for young people aged 8 to 12.

Scientific journals

Facebook favorise-t-il la désinformation et la polarisation idéologique des opinions ? 

Unlike the algorithms of Internet search engines, Facebook gives users access, through sharing, to diverse media and currents of thought. This is not the case with closed groups, however: instead of exposing members to a plurality of viewpoints, membership keeps them in a closed bubble built by their sources, who are often affiliated with the same political family. This is particularly true of far-right populist Facebook groups. The article questions the real contribution of platforms like Facebook and closed discussion spaces to democratic debate. On the contrary, don't alternative media risk contributing to the polarization of debate and to misinformation?

Aider les citoyens à mieux s’informer 

The mission of this group of professional journalists is to help the general public become better informed and more aware of the importance of the media in democratic debate. The online and face-to-face training courses are designed to equip the public with techniques for verifying fake news on the web or social networks.

Break the Fake: How to tell what’s true online

MediaSmarts has designed a workshop for youth leaders (11+) on the theme of misinformation. Interactive presentation materials, with supporting examples, will enable digital media users to integrate the basic elements of this universe so as not to contribute to the spread of misinformation. The workshop covers the following topics: developing fact-checking reflexes, finding the source of information and verifying it with the appropriate tools, identifying journalistic language so as to be able to detect whether an article or website is reliable.

Ça se peut ou pas ? 

Le Récit offers support and guidance to teachers in developing students’ skills using technology.

“Ça se peut ou pas” is an activity, available online or for use in class, that helps teachers apply the scientific approach to validating web content and use the right strategies for verifying information.

Checkology

Checkology is a free online learning platform offering lessons on topics such as media bias, misinformation, conspiracy theory and more.

The lessons include references to historical events, photos and commentary by specialist journalists. The aim is to teach the public how to identify credible information, search for reliable sources and develop a critical mind to distinguish between true and false information.

Comment sont réalisées les vidéos hypertruquées Deepfake? 

In 2020, the teen magazine Curium played the Deepfake game. The aim was to test whether a non-computer expert would be able to produce a convincing result using software and a computer accessible to the general public. The surprising result is cause for concern for the future…

Civic Online Reasoning

The Stanford History Education Group (SHEG) is a research and development group of Stanford’s Graduate School of Education. The COR site is a resource designed for teachers to help them teach journalistic methods for identifying misinformation in online media. The aims are precisely to awaken young people to digital citizenship and improve their ability to evaluate online content, prerequisites for thoughtful democratic participation in society.

CTRL-F 

CTRL-F is a Canada-wide educational program developed by CIVIX to help teachers impart digital literacy skills to their students. It is aimed at high school and CEGEP students, and is integrated into general course curricula. It involves learning to assess the accuracy of information, to judge the veracity of an image, and to develop lateral reading skills through the use of contextual information.

Développement et expérimentation d’outils éducatifs pour contrer la désinformation en ligne chez les jeunes adultes

The spread of fake news and the invasion of low-quality information on social networks has prompted a series of educational initiatives to prevent misinformation and promote media literacy. But are these initiatives effective? Are they having the expected impact on young people? The Centre d’études sur les médias has set up an experiment involving 500 Quebec users, to assess the effectiveness of a video tool in developing critical thinking about digital media among young people from disadvantaged backgrounds.

Entre information et désinformation : comment démêler le vrai du faux 

École branchée has put together a complete kit with activities for teachers to teach Secondary 2 students the steps involved in verifying information. Through this approach, young people learn to use digital tools (quizzes, word processing, YouTube tutorials) as research tools, to identify their limitations, but also to exploit their learning potential.

Evaluating Photos & Videos : Crash course navigating digital information #7

Crash Course creates educational videos on a variety of subjects, including science, history, politics, and major social issues. The capsules are produced in clear language and illustrated with examples so that a wide audience can refresh its knowledge of a given subject. This video examines the capacity of images to inform us about reality. Photos, like videos, can easily be used to produce disinformation: fakery, false captions, deepfakes. The video shows how to trace the source of an image or video and what reflexes to develop when confronted with information.

Fighting Misinformation About Coronavirus

NAMLE, an organization dedicated to digital literacy education, provides a series of links to blogs, videos and articles on misinformation and the coronavirus pandemic. Sections include resources for the educational community.

Formation #30 secondes avant d’y croire 

Offered by CQEMI, #30secondes avant d’y croire training courses are designed to help participants develop their critical thinking skills. They are given by professional journalists in institutions or organizations that request them, and enable the public to discover the fact-checking techniques used by journalists as part of their profession.

Health and science misinformation

MediaSmarts has produced a fact sheet describing the hallmarks of misinformation about science and health. Certain arguments and rhetoric are easy to detect if you know the codes. The use of testimonials, dubious experts, and claims tied to the promotion and sale of products are among the strategies employed, and it is important to analyze them with detachment.

International fact-checking network

The Poynter Institute, dedicated to journalism training and research, launched a program in 2015 to develop an international fact-checking network. Fellowships, training programs, and conferences support this initiative to build a global community of fact-checkers. A team monitors new trends in the field in order to offer journalists up-to-date resources for combating misinformation.

Introduction aux fausses nouvelles, viralité et impacts 

Presentation of a 60-minute training session offered by CQEMI in schools and other community organizations to equip young people to verify information.

Introduction to Crash Course Navigating Digital Information #1

How do you find your way around digital information? In partnership with MediaWise and the Stanford History Education Group, Crash Course produced a series of 11 videos in 2019, each tackling a particular aspect of information. Topics covered include fact-checking, verifying photos and videos, lateral reading, and using Wikipedia.

Les fake news dans les médias du Québec : perceptions des journalistes (Master’s thesis in communications by Mathieu-Robert Sauvé)

This study, presented in 2019, focuses on the spread of false information in Quebec society. It is based on the results of a survey of Quebec journalists and editors-in-chief of Quebec media. The journalists surveyed stressed the importance of this growing problem and the need to train younger generations to protect themselves against it.

October 2020: Misinformation, Disinformation, Hoaxes, and Scams

Campus Security Awareness Campaign 2020

Educause is an online magazine dedicated to information and technology in the world of education. This article explains how to protect yourself from misinformation, and how to improve your critical faculties and follow the steps for verifying information in order to avoid sharing false information. It provides a series of links where users can find tools to protect themselves from misinformation on social networks.

Portrait d’une infodémie – Retour sur la première vague de COVID-19

This 95-page report, produced as part of the work of the Observatoire international sur les impacts sociétaux de l’IA et du numérique (OBVIA), profiles the actors who played an important role in the production and circulation of information during the first wave of COVID-19 in Quebec (2020). It assesses the role of each actor (experts, journalists, media, public bodies) and draws lessons from the crisis, with the aim of establishing an overall framework for combating misinformation. How can we improve the quality of information in a digital context? How can artificial intelligence help?

Positive and Proactive Behaviours


Activities and discussion topics enable parents or educators to introduce young people to the importance of adopting positive online behaviors. Topics include verifying information, security and privacy. Parents will find advice on installing software to control the types of websites their children visit.

Quiz: Should you share it?

This quiz from the News Literacy Project tests the user’s ability to analyze the reliability of information. You’ll learn to judge different publications which, even if they seem authentic, are manipulations or come from unreliable sites. The site aims to help teachers and the general public improve their digital information literacy.

Reality Check

How do you validate the quality of information before sharing it on the Internet? The videos and tip sheets produced by MediaSmarts are aimed at people of all ages, and contain precise instructions on how to verify information. Some tip sheets warn of the serious consequences of misinformation on society, and how we can all be unwitting agents of it.

Se prémunir contre la désinformation : pistes pour développer les compétences informationnelles des étudiant·es 

The PDCI, a resource directory for Quebec’s university network, has designed a tool for librarians and teachers to explore all facets of information, from its source, through its production, to its exchange. Indeed, before we can understand the issues surrounding misinformation, we need to understand all the aspects involved in its production.

Slowing the Infodemic: How to Spot COVID-19 Misinformation (link 1)

(link 2) – Slowing the Infodemic: How to Spot COVID-19 Misinformation Podcast

(link 3) – Slowing the Infodemic: How to Spot COVID-19 Misinformation – Classroom Guide for High School and Post-Secondary Educators

In response to growing misinformation during the COVID-19 pandemic, Thomson Reuters and the National Association for Media Literacy Education (NAMLE) have teamed up to provide secondary and post-secondary teachers with educational resources. A video vignette, podcast and teaching guide will enable young people to develop skills in critiquing and verifying media messages.

Third Annual National News Literacy Week (Jan. 24–28, 2022)

From January 24 to 28, 2022, the News Literacy Project held its annual event to celebrate the vital role of information literacy in a democracy. It aimed to inspire news consumers, educators and students to apply good practice in analyzing information products.

Tips for checking images online

École branchée offers resources for teachers wishing to develop educational activities using digital technology. This dossier explains a healthy practice toward information circulating on social networks: doubt, check the source, and trace the origin of an image on the web. Tutorials on Google Images and Google Lens show how to find clues to an image’s origin.
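
The dossier itself does not describe any algorithm, but one common building block behind reverse image search tools is perceptual hashing. As a purely illustrative sketch (not Google’s actual method, and with synthetic pixel data), an average hash reduces an image to a short bitstring; near-identical images produce hashes that differ in only a few bits:

```python
# Illustrative sketch of average-hash (aHash) perceptual hashing.
# Reverse image search tools use far more sophisticated variants;
# the pixel grids below are synthetic stand-ins for real thumbnails.

def average_hash(pixels):
    """Compute a 64-bit hash from an 8x8 grayscale pixel grid."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether a pixel is brighter than the mean.
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming_distance(h1, h2):
    """Number of differing bits; a small distance suggests a near-duplicate."""
    return bin(h1 ^ h2).count("1")

# Hypothetical thumbnails: an original and a slightly re-compressed copy.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
copy = [row[:] for row in original]
copy[0][0] += 3  # slight compression artifact

d = hamming_distance(average_hash(original), average_hash(copy))
print(d)  # 0 -- the tiny change does not alter the hash
```

A large distance, by contrast, would indicate that two images are unrelated, which is why such hashes can help locate re-uploads of a known photo.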

Trust and Critical Reasoning

The School of Social Networks provides parents and teachers with activities and discussion topics to raise awareness among primary school children of the risks and pitfalls of the Internet.

Verifying Online News

A complete dossier for those who want to understand how online information works, with a few fact sheets and workshops for teachers. It paints a portrait of the information industry in Canada, its consumers and how they have evolved. What sources can you trust and how can you verify information?

What kind of content is out there?

The School of Social Networks offers a game to awaken young audiences to the issues and challenges of online social life. How do you correspond with your friends? What kind of behavior can you expect? How to protect your data, what to share and what not to share? The game features a cast of characters who lead children on a journey of discovery, unlocking cards as each topic is discussed with a parent or teacher.

CTRL+F – Ep.3. Le mystérieux deepfake (podcast) 

A six-episode podcast designed for young audiences by Ubisoft Education to explain artificial intelligence, science and new technologies. In this episode, the protagonists discuss deepfakes and their uses, but also the bad intentions linked to their manipulation and the notion of fake news. The second part of the podcast looks at the definitions of fake news, misinformation and disinformation. How can we guard against them, and how can we verify information?

Don’t believe everything you read on the internet

How do you verify a statement, its source, or an image? Actufuté clearly explains the three validation steps and the tools required. Users are invited to test their knowledge with a game based on real fake news.

The Coronavirus Quiz. Misinformation spreads faster than a virus

Digital Public Square is a platform dedicated to the development of digital products and the promotion of ethical and healthy technology practices. DPS works through research, development and design, as well as social marketing. DPS designed a simple game to counter misinformation about COVID-19. As participants progress through the quiz, they are presented with real, fact-checked answers that demystify commonly shared misinformation about COVID-19.

What is Fake News

How can you spot fake news? How can you trace its origin? In this interactive presentation, illustrator Élise Gravel shows elementary school students and parents how to identify fake news in just a few steps.

Artificial and human intelligence are two inseparable forms of intelligence, and if the first seems to be taking up all the space, our next challenge will be to bring the second back alongside its counterpart so that we can regain control of our information circuits. Fighting fire with fire will not be the only solution, because behind every artificial intelligence there is also a human intelligence.

Defense Advanced Research Projects Agency
Semantic Forensics Program
Uncovering the Who, Why, and How Behind Manipulated Media

The evolution of technology makes it possible for any individual to produce falsified content (deepfakes). The Defense Advanced Research Projects Agency (DARPA) has set up the SemaFor program to support research into the development of automated systems for detecting falsified media. Based on various algorithms, these new detection methods will make it easier to find the author or organization behind a media item and to identify its methods and intentions.

Media & scientific journals

A Survey on Deepfake Video Detection

The technologies available today for manipulating files to hijack a person’s voice or image are within anyone’s reach, and can be used for malicious purposes. In the near future, researchers will have to improve their systems for detecting such content, and work on various parameters to enhance the performance of search algorithms. At present, these technologies have weaknesses when they have to distinguish between real and fake videos, or when the latter are of poor quality.

Comparison of Deepfake Detection Techniques through Deep Learning

This scientific article presents the state of the art in deep learning technologies and the various methods for detecting deepfakes (manipulated images or videos). Research and comparative analysis of data in the scientific literature is in its infancy, and it is important to compile it given the rapid multiplication of methods for creating these deepfakes. This is important to support the development of increasingly reliable and effective technology for detecting manipulated media on social networks.

Deepfake detection by human crowds, machines, and machine-informed crowds

The machine is not necessarily superior to human faculties when it comes to detecting manipulated media. Faced with the diversity of these products, humans sometimes excel where machines fail, and vice versa. Taking context into account, for example, is something the machine cannot do. We therefore need to envisage a future collaborative human-AI system that combines human and machine performance for deepfake detection, which implies taking into account not only perceptual cues, but also the wider context of a video, to determine whether its message resembles a lie.
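
The study does not publish a formula, but the idea of a machine-informed crowd can be illustrated with a hypothetical decision rule: blend the fraction of viewers who flag a clip with a model’s confidence score, weighting whichever source the operator trusts more. All names and weights below are illustrative assumptions, not the paper’s method:

```python
# Hypothetical sketch of a human-AI collaborative decision rule for
# deepfake detection. The function and its weighting are illustrative
# only; real systems would calibrate both sources of evidence.

def combined_fake_score(crowd_votes, model_score, model_weight=0.5):
    """Blend the share of 'fake' crowd votes with a model probability.

    crowd_votes: list of booleans (True = viewer thinks the clip is fake)
    model_score: model's estimated probability that the clip is fake
    model_weight: how much of the final score comes from the model
    """
    crowd_score = sum(crowd_votes) / len(crowd_votes)
    return model_weight * model_score + (1 - model_weight) * crowd_score

# Seven of ten viewers flag the clip, while the model is only 40% sure.
score = combined_fake_score([True] * 7 + [False] * 3, 0.4, model_weight=0.5)
print(round(score, 2))  # 0.55 -> the blended verdict leans toward "fake"
```

Setting `model_weight` closer to 0 or 1 lets the operator lean on whichever judge has proved more reliable for a given kind of video.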

From Deepfakes to TikTok Filters: How Do You Label AI Content?

In the near future, there may be a standard requiring image editing applications to display the source of the original file and whether it has been tampered with. This article provides an overview of the methods currently used to indicate that media has been modified.

How to Share the Tools to Spot Deepfakes (Without Breaking Them)

Can automated systems for detecting falsified images be the guarantors of online truth? These tools are used by major media platforms, but are of little use to the general public. PAI, in collaboration with WITNESS, a non-profit organization that uses video and technology for human rights, held a workshop to hear the views of media professionals on the potential of manipulated-media detectors. The summary of these meetings concludes that the developers of these systems also have the task of providing adequate training to journalists and fact-checkers on the proper use of these systems, as well as knowledge of their limitations.

Vidéo : Dans la lutte contre les deepfakes, Facebook a une solution à base d’IA  (video)

Scientists at Facebook have developed software based on reverse engineering to deconstruct the fabrication of an image or video. The aim of the system is to detect videos that have been tampered with during editing.

News media

Les géants de la technologie lancent le « deepfake challenge » pour contrer la désinformation 

In 2019, companies such as Microsoft, IBM, Apple and Amazon began bringing together researchers from the Massachusetts Institute of Technology (MIT), with major funding from Facebook, to work on developing technologies that could detect videos altered using artificial intelligence. The dangers and harms of these kinds of faked videos were beginning to alert a scientific community that saw the importance of providing users with the means to detect these products.

Educational resources

Hypertrucage: From disinformation to hyperpersonalization

École branchée has put together a dossier to help teachers tackle the phenomenon of hypertrucage in the classroom. It details the steps involved in verifying fake news, critical thinking and the problems posed by the use of deepfakes. After defining and contextualizing the problem, it provides a series of links to specialized youth media that have compiled digital teaching resources on the subject.

Le deepfake : Rupture ou continuité ? 

An online training course for secondary school teachers offered by the EMI of the University of Lyon to tackle the subject of deepfakes with students. It contains discussion proposals illustrated by videos, explanatory capsules expressly designed by the students, and a few extracts from cases of hypertrucage circulating on social networks. The course is divided into three stages: making your own judgement, communication opportunities brought about by the use of deepfakes, and legal aspects. Teaching resources and a quiz are also available.

What Parents Need to Know about Deepfakes

A document produced by the management of an elementary school to inform parents about the potential dangers of deepfakes for their children. The ill-intentioned use of these images can have repercussions for young people. Learning to detect doctored media and ensuring privacy on social networks are important steps to introduce to young people to protect them from these threats.

University research

A New Approach to Improve Learning-based Deepfake Detection in Realistic Conditions

Early deepfake detection devices have so far failed to operate effectively in situations of image compression or distortion. This article reports on new techniques for evaluating image detection models in a more realistic setting, with the potential to improve the analysis capabilities of these models.

Detecting Deepfake Videos : An Analysis of Three Techniques

To combat the spread of misinformation, business leaders and researchers have shown increasing interest in developing computational approaches for detecting deepfakes. This article presents three techniques and algorithms that were tested: convolutional LSTM, eye-blink detection and grayscale histograms. The grayscale histogram technique, which obtained the best results, is therefore the one to favor in the fight against fake media.
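
As a purely illustrative sketch (not the paper’s pipeline, and with synthetic pixel data), the grayscale histogram idea amounts to summarizing a frame’s brightness distribution and measuring how far it drifts from reference footage; a chi-square distance is one common comparison metric:

```python
# Illustrative sketch of the grayscale-histogram idea. The frames below
# are synthetic lists of 0-255 intensities; a real detector would train
# a classifier on histogram features rather than apply a raw threshold.

def grayscale_histogram(pixels, bins=8):
    """Count 0-255 pixel intensities into a fixed number of bins."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return hist

def chi_square_distance(h1, h2):
    """Common histogram-comparison metric; 0 means identical histograms."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

real_frame = [i % 256 for i in range(1024)]                   # synthetic "real" frame
fake_frame = [min(255, (i % 256) + 40) for i in range(1024)]  # brightened "fake"

d = chi_square_distance(grayscale_histogram(real_frame),
                        grayscale_histogram(fake_frame))
print(d > 0)  # True: the two brightness distributions differ measurably
```

The appeal of such features is their cheapness: no face model or temporal analysis is needed, which is consistent with the article’s finding that the simplest technique performed best on its test data.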

Detecting Deep-Fake Videos from Appearance and Behavior

This article from 2020 reports on the promising development of a technology capable of detecting manipulated media. Based on biometrics, this technique combines a facial recognition approach with a body-behavior learning method.

Human Detection of Political Deepfake across Transcripts, Audio, and Video

The study presented in this article proposes to evaluate how video can affect the ability to discern the content of political discourse. Indeed, the creation of increasingly realistic fake videos poses a serious problem, as their misleading content is more likely to be believed due to the fact that it is conveyed by a video. This research has revealed that other elements influence the perception and judgment of a video. The viewer’s adherence to the content of the discourse can affect perception of the visual cues of fabrication.

ID-Reveal : Identity-aware DeepFake Video Detection

The detection of manipulated videos presents a major challenge, as automated systems are fairly unreliable on certain types of falsification. ID-Reveal is a new method for detecting fake videos that looks promising and effective even on flawed, highly compressed videos.

The Presidential Deepfakes Dataset

This article presents the results of an evaluation of deepfake detection techniques. The PDD (Presidential Deepfake Dataset), which consists of 32 videos, half original and half manipulated, attempts to broaden and diversify evaluation contexts in order to improve detection performance on manipulated videos. The detection results presented here outperform those obtained on the DeepFake Detection Challenge (DFDC) dataset, which showed a high error rate.

Deepfake Detection Challenge (DFDC)

Deepfake Detection Challenge Dataset

The aim of the Deepfake Detection Challenge is to inspire researchers around the world to develop innovative new technologies for detecting manipulated media. The DFDC enabled experts from around the world to come together, compare their models for detecting synthetic media, try out new approaches and learn from each other’s work.

Deepfakes: Why you can’t believe everything you see and hear

Any media can be realistically reproduced using an algorithm. Deepfake technology is taking us into another world, where creativity will be unlimited. Soon, it will probably be impossible to distinguish between real and fake media. Creators bear an important responsibility and must become aware of this power and use it wisely.

Detect Fakes

Test your ability to recognize a deepfake. The quality of synthetic media can vary, and some can be more easily perceived than others. This is because algorithms rely on different video and sound manipulation techniques. Can automated detection systems be more effective than humans in this research task? How can technology surpass human intelligence, complementing it, collaborating with it and not necessarily replacing it? This site attempts to offer a broad spectrum of these possibilities to help users spot the nuances. These deepfakes come from Kaggle’s Deepfake Detection Challenge (DFDC).

Disrupting DeepFakes: Adversarial Attacks Against Conditional Image Translation Networks  (Podcast)

Nataniel Ruiz presents his research into the possibility of countering the damage created by deepfake technology. His experiments involve injecting noise into an image in order to disrupt the ability of a generative model to manipulate that image. His aim would be to permanently block any attempt to transform an image. In the course of the discussion, he mentions some of the difficulties involved in implementing this work and some potential scenarios in which it could be deployed.
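
The noise-injection idea can be sketched conceptually: nudge each pixel a tiny, imperceptible amount in the direction that most disrupts a manipulation model. This is not Ruiz’s actual code; the "gradient" below is a hypothetical stand-in for what a real attack would compute by backpropagating through the generative model:

```python
# Conceptual sketch of an adversarial perturbation step (FGSM-style).
# Everything here is illustrative: the toy "image" is 4 pixels and the
# gradient values are invented, not derived from any real model.

def sign(x):
    """Return -1, 0, or 1 according to the sign of x."""
    return (x > 0) - (x < 0)

def disrupt(pixels, gradient, epsilon=2):
    """Shift each pixel by epsilon in the gradient's sign direction,
    clamped to the valid 0-255 range."""
    return [max(0, min(255, p + epsilon * sign(g)))
            for p, g in zip(pixels, gradient)]

image = [120, 64, 200, 33]     # toy 4-pixel "image"
grad = [0.9, -1.3, 0.2, -0.4]  # hypothetical disruption gradient
protected = disrupt(image, grad)
print(protected)  # [122, 62, 202, 31] -- imperceptibly perturbed
```

Because each pixel moves by at most `epsilon`, the protected image looks unchanged to a viewer while, in the full technique, the accumulated perturbation degrades the generative model’s output.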

Fake videos of real people – and how to spot them (video)

Today’s technology makes it possible to recreate a person from existing photos and videos. It may even be possible to bring deceased public figures back to life. If this technology seems infinitely promising, it can also cause a great deal of damage. Researchers will have to assume this responsibility and in return develop the same technologies to combat their own work, i.e. artificial intelligence combined with human intelligence to detect these altered videos.

Global Disinformation Index (GDI)

GDI is an organization that monitors misinformation on the Internet. It acts on behalf of advertisers, science communicators, researchers and social media companies by means of artificial intelligence or in-depth journalistic work. GDI also acts as an intermediary for digital platforms and governments in setting up rules and policies to regulate the dissemination of information.

It matters how platforms label manipulated media. Here are 12 principles designers should follow

PAI is studying the best way for digital platforms or public authorities to combat the invasion of manipulated media or misinformation. The application of labels and the visual tagging of dubious media would be one route to explore among others, but it has many drawbacks. Based on extensive research carried out with experts from the technology industry, PAI has put together a range of proposals and suggestions that could be tested and applied according to the specific contexts of digital platforms.

Reducing vulnerability to information manipulation

Our tools

A French website that identifies and analyzes cases of disinformation in world news. It offers a series of tools to strengthen all public, private and civil society players in the fight against information manipulation.

Scan & Detect Deepfake Videos

Deepware is a site that allows users to scan a video and detect whether it has been manipulated.

The Deepfake Detection Challenge: Insights and Recommendations for AI and Media Integrity

Following the Deepfake Detection Challenge, the PAI (Partnership on AI) has put together six key ideas and recommendations for politicians, journalists and organizations to better combat the use of synthetic media. Detection tools need to be developed according to the contexts in which they will be used.

There is still a significant legal vacuum around the malicious use of deepfakes. Awareness of the ethics surrounding the use of these media, and of their devastating effects on privacy, is still in its infancy. The slow evolution of legal rules is out of step with the speed of technological development, and the measures taken so far have had little effect.

Doit-on interdire la circulation des deepfakes ?

Legal measures to regulate or sanction the dissemination of information content modified by artificial intelligence are beginning to emerge. Researchers are developing automated systems capable of identifying the validity of an information source. These practices raise a number of questions. Could they become too coercive and limit freedom of expression? Can we completely rely on robotized technologies to carry out this screening? Above all, education in good digital practices and the development of a critical eye remain the best way to counter online disinformation.

Deep Fakes: What Can Be Done About Synthetic Audio and Video?

Current Canadian legislation provides a number of tools for recourse against the authors of falsified and deepfake videos, should they harm the integrity of individuals. The Canada Elections Act can also be invoked in the event that an altered digital product influences the outcome of an election. In 2018, the Canadian Centre for Cybersecurity recommended that digital platforms take steps to combat the propagation of videos altered with malicious intent that could disrupt the normal process of an election.

L’approche multipartite : Recueil sur la défense des processus électoraux

In 2020, international meetings brought together experts to discuss best practices for key players in the information ecosystem, with a view to making cyberspace secure and conducive to the democratic process. The technologies, structures and methods to be put in place to protect the electoral process and prevent malicious foreign interference are explored through concrete examples already tested around the world.

Study of Bill C-30

Bill C-30 proposes amendments to the Canada Elections Act to clarify that the provision prohibiting false statements about certain political actors requires knowledge that the statement is false. The proposed amendment is a means employed by the government to combat misinformation. For example, to tighten transparency measures in online advertising, the Canada Elections Act requires digital platforms to keep a record of partisan and election advertising messages published on their platforms during election periods.

Deepfake : le vrai du faux d’une technologie révolutionnaire  

Based on artificial intelligence, deepfake can be an entertainment tool with a creative dimension. Ethical and legal issues arise if a face is used in a context harmful to its image. Identity theft, privacy protection and image rights are all possible legal tools. However, there are still areas of uncertainty where it is difficult to apply the law. What should be done when the face of a deceased person is used? Does the law apply to the use of a private image in the public domain? How do you determine the amount of data used in an algorithm?

RESSEMBLANCE 

The creation of deepfakes isn’t just making waves in the world of information. It’s also revolutionizing the creative industry. Journalism.design’s newsletter, SYNTH, provides a better understanding of the issues surrounding generative AI and deepfakes.

University research

Deepfake : New Era in The Age of Disinformation & End of Reliable Journalism

New technologies bring new challenges to journalism. The ability to put words in someone’s mouth is a threat to journalism. In this era of widespread misinformation, it is essential to take political, legal and technological steps to counter the threat posed by deepfakes to quality journalism.

Deep Fakes : A Looming Challenge for Privacy, Democracy, and National Security

This article provides an in-depth assessment of the causes and consequences of this disruptive technological change brought about by the use of hypertrucages.

What existing and potential tools are available to respond? Technological solutions, criminal sanctions, civil liability, regulatory measures, military responses and economic sanctions are assessed in turn. This study puts forward recommendations for improving policies, but also for anticipating the pitfalls inherent in the various solutions proposed.

Facing reality? Law enforcement and the challenge of deepfakes

This report produced in 2022 by Europol’s Innovation Lab provides a detailed overview of the criminal use of deepfake technologies. It discusses the legal challenges that countries and companies will have to face in the near future. Regulatory measures and laws will need to be deployed to deal with the new threats posed by these technologies.

How deepfakes undermine truth and threaten democracy (video)

By Danielle Citron, Professor of Law. Deepfakes can be used in the fields of creation, entertainment and art, but they can also be used with malicious intent in a political context. Users will have to judge for themselves whether a deepfake is malicious; journalists and the media will also have to be alerted to the existence of these falsified images, as will social network users. However, there is still a legal vacuum in this area, and it is difficult to punish the perpetrators of such a crime.

What Can The Law Do About ‘Deepfake’?

The technology used to produce deepfakes is evolving much faster than jurisprudence. In this article, McMillan, a Canadian law firm, has compiled a list of the legal tools already available to combat the use of deepfakes. Copyright infringement, defamation, invasion of privacy, harassment and human rights violations can all be invoked to obtain redress.

Compilation of initiatives by

Written by

Layout by