
From hidden AI threats to how we address them at BUas
02/24/2025 - 16:51

Hannah Arendt’s ideas on speech and action remind Carlos of Activity Theory, as described by Bertelsen & Bødker (2003). This theory states that you, as a person, set yourself long-term goals, such as obtaining a degree, which must then be broken down into various activities. These, in turn, are broken down further, eventually reaching the level of individual muscle movements. Each step in this chain contributes to achieving the final goal.
For AI, however, there are many different visions, with different people suggesting different actions, which makes the outcomes unclear. This lack of direction, or of a common goal, creates a vacuum that can instil fear and ultimately cause inaction. That may be simply because we do not understand what we are facing, because we fear the unknown, or because we fear what we will need to learn and how we will have to reassess ways of working we have relied on for a long time.
Examining these activities alone will not suffice if we are to work towards the adoption of AI at BUas. We require a shared vision, a common perspective. Only then can we break down the requirements and adjust ourselves to what needs to be done.
At BUas, we have that AI ambition clear. We aim to be “a frontrunner in leveraging and maintaining AI as a transformative tool in education, operations, and associated industry and research domains while keeping ethical considerations at the forefront”. With this shared vision we can help each other see the dot on the horizon and know where we are heading.
Now, let’s reassess the four issues outlined in the previous blog and see where we stand.
1. Exclusive dialogues
Here, Carlos and Oscar are in clear agreement: we should never exclude anyone from the conversation. However, those participating without knowledge of the topic must be guided to understand what is being addressed. A lack of understanding can lead to polarized conversations, driven by fear of the questions that need to be answered rather than by a genuine attempt to understand the other. This means that:
- Exchanging knowledge is imperative. We must reach a common understanding of the matter under discussion: acknowledging what we do not know, and speaking not of ten years from now but of what is understandable at this stage.
- The experts take on a different role in such conversations, since they need to guide and support the other participants.
- We should not forget that people can talk about things without having a deep understanding of them. Take a car: many of us can drive one without knowing how it truly works.
At BUas, we have the AI Fundamentals Course, and the upcoming Advanced AI in Education Training, to help us speak that common language. We also organized open dialogues for the AI Ethics Policy, and there are AI Cafés and Spark AI Sessions: all different platforms for discussing where we want to go with all this, with room for anyone to participate. We will continue to make sure these kinds of dialogues take place, and to extend open invitations.
2. Anthropomorphism
Carlos highlights that humans have always tended to anthropomorphise objects. Think of children playing with dolls and Lego figures, which can feel entirely real for that particular moment of play. Now that AI mimics human language, however, it exposes our need to interact and socialize.
Even if we assume for a moment that these computers can reason, that does not mean they have consciousness; there is no sense of self, nor any understanding of the consequences of what they produce. This can be dangerous at many levels: if we look at AI as another being, equal to us, we are imbuing it with sentience and sapience where there is none. When we use these tools, we must be conscious of their true capabilities, rather than the ones we attribute to them.
Ultimately, we should not replace human interactions with machine simulations. There are cases, for example, where people have AI partners. While some of these cases might be unavoidable, they show the importance of educating each other about what these machines truly are. These machines have no understanding and no empathy; they cannot grasp that what they do and say might hurt a human.
Oscar reiterates that the language we use about these machines therefore matters. It has the potential to shape our conscious understanding of them as machines. The purpose and design of these robots should be assessed for each context in which they are used; a robot does not always have to be a human-like figure in order to be effective.
In BUas’ education and research activities we must analyse the intended usage and goal of the AI through careful and critical assessment. At this point, the AI Pioneer helps to mediate this process, but it is important that the members of the BUas community can make this judgement just as well.
3. The datafication of thinking
For Carlos, this is not a new situation created by AI; it has been around for ages. We have long reduced experiences to data points: armies, populations, and so on. We are very bound to numbers. People can be very compassionate when they talk face to face; however, once we return to our own worlds, we continue to abstract deep concepts into data points rather than experiences. If we are to plan for the future, if we have specific goals in mind, if we want to innovate, “datafying” processes and understanding can actually be helpful.
Ultimately, AI is just another tool, one that will change us. How we want it to change us is (kind of) up to us. With the vast amount of information available, and the amount we need to learn, we need this type of technology to help guide what we need to know. That guiding role used to belong to governments, teachers, parents, or friends. That we are increasingly outsourcing it to a machine does not scare Carlos. What scares him is who is behind the machine, intentionally or unintentionally influencing people’s perceptions, their context, and ultimately the people themselves.
Of course, we need technology in order to understand the world around us; however, in revealing the world, technology also conceals our very relation to it. Oscar is concerned that we risk losing our capacity to understand our relation to the world we live in if that relation is only understood and mediated through media and tech. For Carlos, this is not something propelled by AI alone, but by many other sources as well, such as the two issues mentioned above. The risk is that it influences the way we view the world.
Remaining aware of these relations, and questioning how we deal with them, remains of the utmost importance. This is also why the AI Pioneer team contributes to AI in Education, so that our students are able to take a critical stance towards these developments.
4. The robots taking over
People are lazy, and that is our biggest concern. Our history shows that human society is not able to balance its own interests. Look at climate change, for example: there is a clear, urgent need for action. We have all the information we need to know we should start rebalancing what we are doing; however, we are not. Under the guise of comfort or convenience, we grant ourselves exceptions to critically necessary actions.
That is why we must develop the right AI tools to evolve in a desirable direction. The BUas AI Ethics Policy, along with other crucial responsible-AI approaches, is therefore a central aspect of our decision-making and production.
We must not be scared of what is ahead of us, as it will always surprise us and will remain unknown to some degree. However, while embracing AI at BUas, we should take a critical stance on what we develop and deploy: how does it solve the problems we see, and what are the consequences of using this technology in that particular case?
Without a doubt, both Carlos and Oscar are looking forward to the future, and all the exciting developments it will bring.