
Online security
More and more videos, images and audio recordings are fakes created using artificial intelligence (AI). Expert Andrea Hauser explains how AI can be recognised and when its use is dangerous.
Deepfakes are deceptively real-looking videos, images or audio recordings generated using artificial intelligence. A current example is the sexualised images of real people – including children – that users on the platform X (formerly known as Twitter) have created using the AI chatbot Grok. The EU has since initiated legal proceedings against Elon Musk’s US-based platform.
In another case at the start of the year, an advert featuring artificially generated images of Mona Vetsch caused a stir in Germany. In the ad, the fake SRF presenter promoted an investment tool.
To be honest, that’s no longer an easy task. The technology is developing so quickly that deepfakes are getting better and better and are now almost impossible to spot. Andrea Hauser, an IT security expert at the Zurich-based information security company scip AG, explains what to look out for:
AI images and videos sometimes look too perfect and smooth.
Unnatural faces with strange facial expressions and blurred transitions, for example between the face and hair, can also be a sign of fake images and videos.
Sometimes something is physically wrong in AI videos, like the trajectory of a ball.
According to Hauser, videos can also be unmasked by small details in the background that suddenly change.
AI videos are mostly very short. If a video is only ten seconds long, it’s likely to be AI.
The most important thing is usually the overall context. “You should always ask yourself whether videos or images are realistic,” Hauser says.

Fraudsters are increasingly using AI to make fake calls. Here, real voices are copied in order to demand money. Usually a seemingly familiar person like the victim’s own daughter calls and says that she has been involved in a serious accident and needs money immediately.
“Attackers try to stress and shock their potential victims into acting rashly,” Hauser explains. This is why it’s important to take a deep breath and not act immediately. The displayed phone number may also be spoofed. For this reason, hang up and call your acquaintance back on the real number stored in your phone. It will quickly become clear whether the original call was fake.
“I recommend that families and good friends agree on a secret word to use in case of emergencies,” Hauser says. Incidentally, fraudsters get genuine voices from social media. Users should therefore always ask themselves what kind of content they want to upload there.
It depends on the content. Deepfakes that violate criminal law, personal rights or data-protection regulations may not be distributed. In all other cases, it’s legal to forward deepfakes.
With most content, it’s more of a moral question, Hauser explains: “Deepfakes are often designed to trigger anger or fear. So you should ask yourself whether you want to support this.”