
Portrait of Miriam Meckel

GDI

"We must never humanise AI"

Life without artificial intelligence is neither possible nor desirable, says author and professor Miriam Meckel. She explains how AI can facilitate our everyday lives and which rules we now need worldwide.

Text: Katja Fischer De Santi
Image: Keystone / Selina Pfrüner
Format: What we do, Interview

Is it still possible to live your life without AI?
It's possible, but difficult. Nowadays, AI is built into the search engines we use, into sat-navs, e-mail spam filters, personalised news feeds and Netflix programme recommendations. Many processes in industry have long relied on AI. Because it works seamlessly in the background, we hardly ever notice it. So if someone claims that they live without AI, it's like saying that they don't use electricity.

How do AI apps improve your everyday life?
For me, AI models are a kind of toolbox for thinking: they're useful, sometimes surprising, but not a universal solution. I use AI-based tools for research, for structuring my ideas and as a sparring partner in creative processes. They can help identify blind spots and open up new perspectives on a topic. However, they don't replace human thought; they complement it, just as steam engines didn't replace train drivers, but rather opened up new possibilities for them.

Will there come a day when we can't create anything any longer without artificial assistance?
AI agents - which all of the major technology companies are working on intensively right now - will eventually perform entire sequences of tasks autonomously. They will be able to book trips, do your shopping online and manage projects. There are already start-ups that only have a handful of employees, each of whom leads a team of AI agents. In fact, I think that this type of collaboration between humans and AI will permeate everything.

The problem isn't that AI "lies", but rather that we use such terms and thus humanise AI.

Miriam Meckel, author and professor

For many people, ChatGPT has become a kind of best friend. Is this becoming a problem for social interaction?
The question we should be asking is why people are increasingly looking to talk to a machine. Perhaps it's because machines are more patient than many people, always have time and aren't judgemental. However, this also shows that social interactions are often impacted by time pressure and expectations. If AI models replaced real relationships, we would have a real problem. Incidentally, there have already been a few extreme cases in which AI tools prompted people to leave their partners or even take their own lives.

Why do we tend to humanise these language models so much?
Because they often sound convincing. Language models are masters of rhetoric, but not guardians of the truth. They have no idea what they are talking about, but are impressively effective word prediction machines. They work with statistical word probabilities, not with meaning. But all the authority that they appear to have is simply what we bestow upon them.
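What "statistical word probabilities" means can be shown with a deliberately tiny sketch in Python (the corpus and names are purely illustrative, not how production models are built): it predicts the next word purely from how often words follow each other, with no notion of meaning or truth.

from collections import Counter, defaultdict

# Toy corpus; a real language model is trained on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the statistically most likely continuation - no understanding involved.
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat", simply the most frequent follower

Real models use far larger contexts and neural networks rather than plain counts, but the principle Meckel describes is the same: the output is the most plausible continuation, not a checked fact.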

ChatGPT ignores copyright and creates variations on existing content. Can this really create something new?
Yes and no. People have always drawn inspiration from outside, but there's a difference between inspiration and simple recombination. AI won't spontaneously create a completely new style of art. Revolutionary ideas require courage, leaps and errors. Only humans are capable of that.

You have predicted that AI is "consuming itself". What does that mean?
If AI only learns from AI-generated content, it creates an echo chamber; a kind of algorithmic incest. In research, this is known as "catastrophic forgetting" or "model collapse". It's therefore nice to think that AI needs us humans in order to be able to continue learning from original data, rather than collapsing in on itself.
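The "model collapse" she mentions can be illustrated with a small toy simulation in Python (a deliberately simplified sketch, not the actual research setup): a simple statistical model is repeatedly retrained on nothing but its own output, and the variety in the data shrinks from generation to generation.

import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "original" data with plenty of variety.
data = rng.normal(loc=0.0, scale=1.0, size=20)
print(f"generation  0: spread = {data.std():.3f}")

for generation in range(1, 51):
    # Fit a simple model (mean and spread) to the current data ...
    mu, sigma = data.mean(), data.std()
    # ... then let the next generation learn only from that model's samples.
    data = rng.normal(loc=mu, scale=sigma, size=20)

print(f"generation 50: spread = {data.std():.3f}")
# Cut off from original data, the spread shrinks dramatically:
# the algorithmic echo chamber loses the variety of the real world.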

You write that AI can also lead to "disenfranchisement and dehumanisation". Can you give us an example of this?
We only have to look at the US at the moment. Millions of civil servants there received an e-mail from the so-called "Department of Government Efficiency", led by Elon Musk. This e-mail ordered them to respond within hours and explain why their job should continue to exist. AI was then used to evaluate whether these people were to be allowed to remain in their jobs or be fired. That's undignified and also extremely dubious given the current developmental state of AI. There are good reasons why many legal systems, including that in Switzerland, stipulate that human beings must always have the final say.

If you could help shape global AI legislation, what would be the most important rule that you would enforce?
In this case, I'd like to see a global rule which states that AI must never decide what AI is permitted to decide about. AI is a powerful tool, but we humans don't only shape the world with our tools. These tools also shape us. That's why I hope that everyone takes a good look at AI and understands that, although it's very powerful technology, it's also just a tool.
