Imagine not being able to trust what you hear or see in a video online: a celebrity appears in a scandalous clip that generates controversy, but was never actually in that video; a politician delivers a disastrous speech on the eve of an election, but never said those words; a religious figure incites violence before a mammoth crowd, but was never there in the flesh. These are deepfakes: manipulated media – video, audio or both – in which someone’s likeness is replaced with someone else’s, so that people can be made to appear to have said or done things that never actually happened.
This involves digitally altering media content to show whatever its creator intends to display. The technology can create convincing but entirely fictional photos or videos from scratch and has already attained notoriety in places like the United States. For instance, a deepfake video featuring the likeness of Facebook CEO Mark Zuckerberg stating that “whoever controls the data, controls the future” surfaced on the Internet on the eve of United States congressional hearings on artificial intelligence that he was scheduled to attend.
Such a trend no doubt supercharges the spread of intentional falsehood dressed up as actual news and designed to advance a malicious goal.
The first deepfakes surfaced in 2017, when a Reddit user posted several pornographic videos with simulated celebrity faces. Since then, deepfakes have come a long way, with fewer and fewer training images or videos required for algorithms to generate believable versions of real people’s faces. One expert predicts entirely realistic deepfakes could be possible in as little as six months.
The code for algorithms that generate deepfakes is freely available on open-source platforms such as GitHub. Sites offering deepfake creation services are also popping up, charging as little as $2 an hour to train deep-learning algorithms on source images and then generate a fake video. Producing one can take around five to eight hours, most of which is the time required for the algorithms to learn from the source data.
There are even smartphone apps that make use of deepfake tech. Zao is a Chinese iPhone app that swaps people’s faces into famous movies, while DeepNude, an app that has since been taken down, altered photos of women so that they appeared naked.
The consequence of this, when deepfakes are used as propaganda material, is not just that people start to believe things that are untrue, but that they refuse to believe things that are actually true. Understanding how to detect deepfakes has therefore become vital.
Some deepfakes are harmless fun, but others are made with a more sinister purpose. So how do we know when a video has been manipulated?
Detection tools are beginning to emerge. One of them, FakeCatcher, works by analyzing the subtle changes in skin color caused by the human heartbeat – real faces flush almost imperceptibly with each pulse, while synthesized faces do not.
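FakeCatcher’s actual method is proprietary, but the underlying idea – known as photoplethysmography – can be sketched in a few lines. The toy example below is purely illustrative (the function name, the synthetic “frames” and all parameters are the author’s assumptions, not FakeCatcher’s code): it averages the green channel of a face region over time and looks for a periodic peak in the human heart-rate band, which a deepfaked face would typically lack.

```python
import numpy as np

def pulse_frequency(frames, fps):
    """Estimate the dominant 'heartbeat' frequency (Hz) from the mean
    green-channel intensity of a sequence of cropped face frames."""
    # One brightness value per frame -> a 1-D signal over time
    signal = np.array([f[..., 1].mean() for f in frames])
    signal = signal - signal.mean()               # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to a plausible human heart-rate band (42-180 bpm)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic stand-in for real face crops: a faint 1.2 Hz (72 bpm)
# oscillation in the green channel, plus pixel noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
rng = np.random.default_rng(0)
frames = [
    np.full((8, 8, 3), 128.0)
    + np.array([0.0, 2.0 * np.sin(2 * np.pi * 1.2 * ti), 0.0])
    + rng.normal(0, 0.3, (8, 8, 3))
    for ti in t
]
print(round(pulse_frequency(frames, fps), 1))  # → 1.2
```

A real detector would first need to locate and track the face across frames and would use far more robust signal processing, but the sketch shows why a genuine pulse leaves a measurable trace that current generators struggle to fake.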
Similarly, in September 2020, Microsoft released a tool to spot deepfakes – computer-manipulated images in which one person’s likeness has been used to replace that of another.
The software analyzes photos and videos and gives a confidence score indicating how likely the material is to have been artificially created.
The firm says it hopes the tech will help “combat disinformation.”
Siwei Lyu, Director of the Computer Vision and Machine Learning Lab (CVML) at the University at Albany, also offers the following tips:
1. Look for individual hairs, frizz, and flyaways:
One area that is often a giveaway is the subject’s hair – faked faces rarely show frizz or flyaways, because algorithms struggle to render individual hairs. “Fake videos usually have trouble generating realistic hair,” says Lyu.
2. Watch the eyes:
Generating realistic eyes – with the regular blinking and other minute movements typical of a real person – is one of the biggest challenges for deepfake creators. “When a person is in conversation, their eyes follow the person they’re talking to. But in a deepfake, you may spot a lazy eye or an odd gaze,” says Lyu.
3. Check the teeth:
Like hair, teeth tend to be tough to generate individually. “Synthesized teeth normally look like a monolithic white slab because algorithms don’t have the ability to learn such a detail,” says Lyu.
4. Observe the facial profile:
Does the person saying that shocking thing look a little odd when they turn away from the camera? If a video has been manipulated with a face swap, the subject may appear to face the wrong direction, or their facial proportions may become skewed, when they look away.
5. Watch the video on the big screen:
On the smaller screens of smartphones, the inconsistencies of a fabricated video are less likely to be visible. Watching a video in full-screen on a laptop or desktop monitor makes it easier to spot the visual artefacts of editing, and can reveal other contextual inconsistencies between the subject of the video and their surroundings – for example, a clip of someone purportedly in the UK, but against a backdrop with a car driving on the wrong side of the road.
If you have a video editing program such as Final Cut or iMovie, you can slow down the playback rate or zoom in to examine faces.
6. Check the emotionality of the video:
As with extraordinary headlines that just so happen to appear around major events – like elections or disasters – a well-placed video that tugs at the heartstrings or fuels righteous outrage may be a deep fake designed to do just that.
The researcher produced this media literacy article under the Dubawa 2021 Kwame Kari Kari Fellowship, in partnership with PRNigeria, to promote the ethos of “truth” in journalism and enhance media literacy in the country.