Why is data protection more important today than ever before? – Ethical dilemmas in the age of AI

Author: Dóra Fekete


[Image: Robotic hand reaching towards a digital blue network pattern with glowing nodes.]

In just a few years, artificial intelligence has become such a natural part of our everyday lives that we hardly even notice how routinely we use AI tools. Chatbots answer our questions, recommender systems tell us what to watch, where to go, and what to buy. Algorithms already screen job applicants, translate foreign-language texts, and even edit photos and videos. AI is present in every corner of our lives — in our pockets, on our laptops, in our offices, and increasingly within public services as well.

In this new digital environment, one question is raised more and more often: what happens to our data? Artificial intelligence is not magic; it is a data-driven technology. To function, models require enormous amounts of information, which the system processes, links together, identifies patterns in, and then uses to make decisions, recommendations, or predictions. And whenever we talk about data, the possibility inevitably arises that these datasets may include personal — or even highly sensitive — information.


The invisible price: our data

Users are often unaware that whenever they interact with an AI system — whether in text or image form — they are in fact handing over data. Even a simple question may contain information that can be traced back to a person, an organisation, a decision, or a particular situation. Images and documents even more so: they may include faces, locations, names, health data, or any other type of personal information.

The problem is that many people think, “Everything is visible on the internet anyway.” But this is not true. Our digital data is not just information about us — it is a digital footprint from which a profile can be created: our life situation, preferences, fears, financial status, movements, or relationships. For an AI system, these patterns are valuable inputs that can be used to model or predict behaviour.


The ethical dilemmas of AI: more than a data issue

In the case of AI, data protection is not merely a technical requirement. It is also an ethical responsibility, grounded in the respect for human dignity, autonomy, and privacy.

One of the most significant ethical problems is transparency: users often do not know how the system actually works. Most AI models operate as a “black box”: we see the input and the output, but the internal process — the algorithmic path from question to answer — is difficult to understand. This is not necessarily a problem in itself, but it becomes one when the system makes incorrect or biased decisions. For example, if an AI downgrades a job applicant because the training data reflects that previous successful candidates mostly belonged to a certain gender or ethnicity, the system — unintentionally — engages in discrimination. This is an ethical issue rooted in the quality and composition of the data.

Another dilemma concerns the future use of data. Some systems do not retain the data they receive, others process it in anonymised form, while some may use it for further development. For users, it is often unclear what exactly happens in the background — and this creates a problem of trust.


Whose data is it? – The question of autonomy

The foundation of data protection is always the same: who controls the data? Users want to retain control — to decide what they consent to, for what purpose their data can be used, and how long it can be stored. AI developers, on the other hand, often want as much data as possible, because more data typically means more accurate models.

This leads to one of the most important questions: is it possible to develop artificial intelligence while ensuring that users preserve full autonomy over their data? The answer is complex. One direction is for systems to rely more on anonymised or aggregated data. Another is to provide users with clearer and more understandable information — not only in lengthy legal texts, but in plain language, with real examples, clear deadlines, and actual choices.


Privacy is not a luxury – it is a fundamental requirement

Some believe that data protection is a kind of unnecessary caution that slows down technological innovation. In reality, the opposite is true. The lack of data protection leads to scandals, misuse, and loss of trust — all of which hinder technological progress.

Privacy is, in fact, a fundamental requirement of digital culture. If people do not trust AI, they will not use it. If they feel their data is secure, however, the technology becomes not a threat, but a useful tool.


How can the operation of an AI system be ethical?

The ethical “behaviour” of artificial intelligence does not only depend on how it handles data or what rules it follows. Just as important is the way it affects people’s everyday lives — even when they are not fully aware of it. One of the foundations of an ethical AI system is that it should not overwhelm the user, but instead aim to provide real assistance. This includes — in my opinion one of the most important aspects — reducing mental and emotional burdens: a well-functioning AI recognises when it needs to “hold back” and when it is appropriate to offer more detailed suggestions in response to a question, such as one related to health or personal life.

It is also important that the system be able to express uncertainty. It does not need to give a confident answer to everything — in fact, it is more ethical if it admits when it is not entirely sure about something. This way, the user will not blindly trust an “all-knowing” machine but will instead engage with it as a partner.

Another essential factor is supporting the user as an independent, autonomous individual. In other words, the system should not try to influence decisions or impose alternatives. Rather, it should present multiple options objectively, helping the person make their own choice — but never making that choice for them.


[Image: Robot hand and human hand touching fingers against a grey background, symbolising connection.]

Finally: AI systems must be developed in a human-centred way, since the technology does not exist on its own; it is a tool meant to make people’s lives and everyday activities easier. So let us keep in mind that artificial intelligence can indeed be very useful, but at the same time it can also be a dangerous tool in our hands. Let us strive to use it for good!


The future: technological progress + human responsibility

AI will not disappear from our lives; in fact, it will play an increasingly important role in workplaces, education, public services, and everyday communication. But the more significant its role becomes, the greater the responsibility to use it within ethical boundaries. Artificial intelligence can make life easier — but only if we ensure that the rights, data, and dignity of the people behind the technologies remain protected.

