How to identify a deepfake, with and without AI
BERLIN - Deepfakes, or images, videos and audio clips manipulated or generated by artificial intelligence, look deceptively real.
The trouble is, deepfakes are being used more and more to create pornographic material of real people without their consent.
You can also see deepfakes being used to sway elections and public opinion.
Watchdogs banning deepfakes
Now, regulators worldwide are cracking down, after uproar when Elon Musk’s chatbot Grok temporarily enabled users to create non-consensual sexual deepfakes of women and children.
Earlier this year, Britain made it illegal to create sexually explicit deepfakes without the consent of the person depicted.
Germany is also set to legislate after a famous actor and presenter accused her former husband of disseminating pornographic deepfakes of her without her consent for years.
Her accusations sparked demonstrations, with protesters calling for legal reforms to better protect women from online abuse.
But laws vary and forensic experts say deepfakes are becoming more sophisticated by the day.
And as the use of AI soars, we may be facing a future where almost all online material is created or in some way shaped by artificial intelligence.
“When we talk about AI deepfakes that went online a year ago: since then, there have already been 23 better models,” says Jens Kramosch from Germany-based Leak.Red, which makes AI-based software that helps people find and remove deepfakes of themselves online.
“At some point, there will be almost nothing but deepfakes, or only artificially generated content,” Kramosch says.
We need ways to identify originals, he says, much like the blue tick used to verify social media accounts.
Leak.Red’s tool, which scans for leaks, content privacy and deepfakes for €99 a month, is one of many software tools that use AI to combat AI-generated content.
These rate content on a scale from 0 to 100, says Nicholas Müller from Germany’s Fraunhofer Institute for Applied and Integrated Security.
Zero means genuine, 100 means fake. “A deepfake usually scores around 95,” says Müller.
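For the technically inclined, turning such a score into a verdict is just a matter of choosing cut-offs. Here is a minimal Python sketch; the thresholds are purely illustrative assumptions, since the article does not state the actual cut-offs these detectors use:

```python
def verdict(score, fake_threshold=90.0, genuine_threshold=10.0):
    """Map a 0-100 detector score to a human-readable verdict.

    The thresholds are illustrative assumptions, not the detectors'
    actual cut-offs. Scores in between are best treated as unknown.
    """
    if score >= fake_threshold:
        return "likely deepfake"
    if score <= genuine_threshold:
        return "likely genuine"
    return "inconclusive"
```

On this scale, a typical deepfake score of around 95 would land in the "likely deepfake" band.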
But the AI models used to make deepfakes are improving all the time. Experts, in turn, are getting better at exposing them. “It’s just like in IT security: The attacker gets better and the defence has to keep up,” he says.
Think like a detective
Kramosch says you can work out if you are looking at a deepfake by tackling it like a crime scene.
Imagine you are looking at a photo of someone.
“It’s best to start by taking in the overall picture. For example, I look at the hair, the hairline, the eyelashes and the texture of the skin,” he says.
Human skin depicted in deepfakes is often too smooth, he notes.
“It’s also important not just to look at the centre of the image, but at the edges: do the lines match up? Do the shadows match up? There are various AI models that focus on the object in the centre and neglect what’s around it.”
Deepfakes are most convincing when they depict a single person in front of a diffuse background.
Another way to check whether an image or a video is fake is to look at the metadata. Does the geolocation make sense? Is it likely that the photo was taken in 1970?
The metadata of deepfakes created with current versions of bots like Gemini or ChatGPT labels the content as AI-generated.
But if that data has been removed then that is another potential clue that something is up, Müller says.
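For readers comfortable with code, those sanity checks can be sketched once the metadata has been extracted (for instance, from an image's Exif data). The dictionary keys, the 50 km radius and the 1990 cut-off below are illustrative assumptions, not part of any tool described here:

```python
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def metadata_plausible(meta, claimed_location, max_km=50, earliest_year=1990):
    """Rough plausibility check on already-extracted image metadata.

    meta: dict with optional "datetime" (ISO 8601 string) and "gps" (lat, lon).
    claimed_location: (lat, lon) where the photo was supposedly taken.
    All keys and defaults are assumptions made for this sketch.
    """
    when = meta.get("datetime")
    if when:
        year = datetime.fromisoformat(when).year
        # A timestamp far in the past, or in the future, is a red flag.
        if year < earliest_year or year > datetime.now().year:
            return False
    gps = meta.get("gps")
    if gps:
        # The GPS coordinates should lie near the claimed location.
        if haversine_km(*gps, *claimed_location) > max_km:
            return False
    return True
```

A photo supposedly taken in Berlin but timestamped 1970 would fail this check. Note that a passing result proves nothing on its own, since metadata is trivially edited or stripped.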
Trust your own eyes and ears when taking a first look at a potential deepfake.
Classic signs that an image has been created by artificial intelligence are mismatching skin tones on the neck and upper body, a hand with six fingers or an object that merges with the hand or hovers above it.
If you are looking at an interview and it was supposedly done on a sunny day in a particular city, check the weather forecast and see if it matches up, he says.
And zoom in close on the shadows and edges.
When dealing with outdoor footage with a single light source, use a still image and draw a line from each shadow back toward the light. Check whether all the lines, if extended, converge at a single point.
“If they don’t, then it is very likely a deepfake,” Müller says.
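This shadow test can even be automated once you have marked, in pixel coordinates, each object's base and the tip of its shadow. The following Python sketch extends the shadow lines and checks whether their pairwise intersections cluster around a single point; the 20-pixel tolerance is an arbitrary assumption:

```python
def line_through(p, q):
    # Line in the form a*x + b*y = c through points p and q.
    a = q[1] - p[1]
    b = p[0] - q[0]
    return a, b, a * p[0] + b * p[1]

def intersection(l1, l2):
    # Intersection of two lines, or None if they are (near-)parallel.
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-9:
        return None
    return ((b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det)

def shadows_consistent(pairs, tolerance=20.0):
    """pairs: list of (object_base, shadow_tip) pixel coordinates.

    Returns True if all extended shadow lines meet near one point,
    which is what a single real light source would produce.
    """
    lines = [line_through(base, tip) for base, tip in pairs]
    points = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            pt = intersection(lines[i], lines[j])
            if pt is not None:
                points.append(pt)
    if not points:
        return True  # all lines parallel: consistent with a distant light
    # Check that every intersection sits near the cluster's centre.
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return all(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 <= tolerance
               for p in points)
```

Three shadow lines radiating from one point pass the check; nudge one line off that point and, with a tight tolerance, the check fails, mirroring Müller's rule of thumb.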
Establishing evidence
Leak.Red uses its software to secure “a chain of evidence” aimed at exposing deepfakes.
“If a deepfake is put on Instagram, for example, we can freeze it as evidence, and it can no longer be altered. This means it is forensically secured,” says Kramosch.
But such AI-based tools have limits when it comes to prosecuting perpetrators, says Tobias Wirth from the German Research Center for Artificial Intelligence.
“AI detectors are often black-box systems. They determine with a degree of probability whether something is a deepfake, but they do not necessarily provide an explanation as to why,” he warned.
While AI can identify patterns and indicators that are imperceptible or difficult for the human eye to detect, including subtle discrepancies at the pixel level, that’s a problem in court, where comprehensible statements are required when assessing evidence, Wirth says.
Despite the rapid changes in AI, Kramosch is optimistic that AI will still be able to help identify deepfakes.
After all, any AI model used to create them follows a certain pattern - though recent months show how fast things are moving, he says.
Copyright 2026 Tribune Content Agency. All Rights Reserved.
This story was originally published April 27, 2026 at 2:29 AM.