Beware the Techno-Optimists: 4 Hidden Threats of AI

02/04/2025 - 13:09

BUas lecturer in Ethics, Oscar Bastiaens, shares his reflections on Artificial Intelligence, taking a philosophical approach to ethics and four major concerns with the rise of AI.

Writing about the ethics of AI is no simple task. Countless publications and debates have already explored this complex topic, and deciding what to speak about in this blog has taken me some time. After all, speaking is one of the most powerful acts one can perform. That is when I realised that speech and action should be at the heart of what I aim to address in this blog. It is for that very reason that I turn to the philosopher Hannah Arendt, who has written many important and influential books and essays on this very topic. This blog takes a philosophical approach, but know that in our next blog, Carlos and I will dive into the practical implications and actions at BUas, based on my writing today.

“Human plurality, the basic condition of both action and speech, has the twofold character of equality and distinction,” Hannah Arendt writes in her famous The Human Condition (1958, p. 157). With this sentence, she attempts to answer the simple question of why we act and speak as human beings, or, as she calls us, political beings. For Arendt, equality and distinction are the two key characteristics that answer this. When we act and speak, we can understand one another, which promotes equality among people. At the same time, we must speak with one another precisely because we differ: if there were no difference between people, speaking would simply not be necessary. That is why distinction is the second characteristic.

In every act, something new is born. It commences a process, a chain of events that will always lead to uncertain consequences. After all, we all have the power to act, and any act could lead us down another path. However, to truly reveal ourselves and make ourselves understood by others, our actions must be paired with speech. If we are to understand one another, and ourselves for that matter, we must engage, converse, debate… we must speak and express ourselves. It is through this act of self-expression, and through true attempts to understand one another, that we constitute the world we live in, as well as how we want to live together in that very world.

“Now, what does any of this have to do with AI?” you might wonder. Below, I will outline four AI-related issues which I believe could pose a threat to the human power of action and speech that Arendt marks as essential. They are (1) exclusive dialogues, (2) anthropomorphism, (3) the datafication of thinking, and (4) the robots taking over. Considering this is a blog, I am only skimming the surface of these issues and cannot elaborate on them in full. However, as a conversation starter, as food for thought, I hope it serves you well!
 

1. Exclusive dialogues

The first and foremost concern is the threat of experts excluding people from conversations that pertain to everyone just as much as to the experts themselves. You might recognise this from your own conversations with experts in the field: “But you don’t understand how it works!” With that seemingly innocent line, you are excluded from the dialogue on a topic you wanted to address. I have been in such conversations myself, and have seen them on public platforms, where people who were not considered a data scientist or (AI) programmer were excluded from conversations about the direction AI should be heading. A seat at the table should be provided for anyone who wants or needs to speak on any given topic, regardless of their expertise, education, or training.

Denying people a voice could well lead to totalitarian tendencies. Should these scientists, brilliant and remarkable individuals achieving amazing feats, really have the last and only say in where this technology goes? Should they dictate where and how it is to be used solely because they understand how it is made? I would argue not in the slightest. If the impact of the technology is indeed as large as we see, there should be a seat for everyone at that table, and room for anyone to speak on the matter.
 

2. Anthropomorphism

Anthropomorphism refers to the process of attributing human qualities to things or animals that are not human. It is not an unfamiliar process; we do it all the time, with our pets, for example. You might have seen social media accounts of dogs or cats where the animals ‘talk’ about their day in various posts. Or we describe our computers as ‘thinking’, or ‘not feeling like it today’, when they are slow. When it comes to AI, this phenomenon gives some cause for concern. After all, the generative AIs that we use produce output in our very own human language; they speak like us. Moreover, we see more and more pursuits in which robots are built to resemble humans as much as possible, from voices to facial and bodily expressions. The same is happening with AI-generated podcasts and online profiles.

You might have heard of something called the Duck Test. It is quite simple and goes like this: if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. While there seems to be very little to argue with here, the argument can also be used in reverse. An AI might use the language we use and show mannerisms we recognise, and some robots are made to look like us, but they are still not human. If we start to consider these machines as humans, as we already do in the way we talk about ‘them’, we might come to consider their decision-making just as valid as that of humans, expecting the machines to have phenomenological experiences similar to ours, to understand good and evil, and more. Of course, there might come a day when all of this is possible, who knows. For now, however, I would argue that it is important to think carefully about the language we use about these machines and to treat them for what they are: tools to aid us. Nothing more.
 

3. The datafication of thinking

While computers increasingly imitate humans, the reverse is happening as well: knowledge and experiences are being translated into data more and more. When we go for a run, we keep track of our distance, time, pace, and more; it becomes about beating the scores rather than anything else. Of course, for a run, that might not seem so problematic. After all, it is a sport in which we tend to set new (personal) records all the time. However, if we reduce human experience and meaning to data, a greater risk lurks in the background.

Arendt provides great insight here as well. In her work Eichmann in Jerusalem: A Report on the Banality of Evil (1963), she writes that this banality of evil lies in the fact that very ordinary people, like you and me, cease to think about what they are doing and simply follow orders or do their job as prescribed. They see data and numbers rather than people, stories, hopes, and dreams. When data is stripped of all meaning, when it is reduced to mere numbers, we eliminate the possibility of experience and ethics altogether. It is important to remain vigilant about our own thinking and prevent it from being reduced to mere data points.
 

4. The robots taking over 

When I do public speaking on AI ethics, I am often asked whether I am afraid of the robots taking over. My answer is simple: I am not afraid of robots; I am afraid of people. Not because robots are evil; as I have argued above, they lack the human capacity for understanding good and evil in the first place. The idea of ‘evil’ robots taking control over humanity makes for great fiction (or marketing, for that matter), but it is not to be taken seriously at this stage. People, however, are very much in charge of how much autonomy these machines will gain. As I explained in my first point, it is precisely because of this that inclusive discussions and dialogues are of such importance.

I do not fear robots, I fear people, because people are all too likely to surrender their freedom to act of their own accord, and to speak what is on their mind, to a machine, simply out of ease of use. Not having to give much thought to what to do next, craving the next entertaining experience rather than the profound and difficult quest for the meaning of life. Of course, this all sounds a bit supercilious, but if there is anything the rise of social media has taught us, I would say it is precisely that.

The way we express ourselves online has made me wonder whether we have become micro-reflections of hyped pop-culture moments, such as memes and viral videos, rather than exploring who we truly are. While machine autonomy might make perfect sense in specific circumstances, such as robot-run warehouses, it can pose a threat to the value of human life when it seeps into all facets of our everyday reality. We should remember that technology, as Martin Heidegger (1954) describes it, reveals the world to us, but in that revelation it also masks our relation to it. We come to understand the world only as mediated by technology, while direct experience is much richer than the data that represents it.

The techno-optimists

It is important to note that I am not arguing against (using) AI. After all, the progress it has brought to many fields is undeniable, and the developments and achievements in this field are astonishing. However, I do hope that we remain wary of the techno-optimists who claim that AI is some sort of holy grail that will help with many, if not all, of the problems we see today. In this blog, I have outlined only four major concerns that I see with the rise of AI; I have not even touched on issues such as sustainability, among others.

That is why I hope that all of us will continue to engage in meaningful conversations about technology and its role in society and education; that we use it as the great tool that it is; and, finally, that we recognise it for what it truly is and how it differs from us as humans: complex phenomenological beings who experience and understand the plurality of existence.


Keep an eye on this website, because in our next blog we will talk more about how we do that at BUas.