
Deepfakes and the Law: How Does the Legal System Protect (or Fail to Protect) Victims?

Author: Dóra Fekete


An audio recording in which a politician makes scandalous statements. A video in which a company executive justifies strange decisions. A recording that “proves” that someone said or did something they never actually did. Even just a few years ago, we would have taken such materials at face value. Today, however, the question arises more and more often: are we looking at a real recording, or one created by artificial intelligence? Deepfake technology is not merely a technical innovation but a serious legal and social challenge. It creates situations to which the legal system can often respond only retrospectively, and with difficulty. The question today is no longer whether deepfakes exist, but whether the law is capable of protecting those who become their victims.


What is a deepfake, and why is it so realistic?


The term “deepfake” refers to content generated by artificial intelligence based on deep learning. Such systems learn a person’s facial features, speech style, and expressions by analyzing enormous amounts of image, audio, or video material, and then generate new content that appears strikingly authentic. It is important to stress that the technology itself is not inherently harmful: it is used for visual effects in film, historical reconstructions, and educational purposes. The problem is that the same tool can just as easily be used for deception, manipulation, and serious reputational damage.
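The “shared encoder, person-specific decoder” idea behind many face-swap systems can be sketched in a few lines. The toy below uses random matrices in place of trained networks, purely to illustrate the data flow; all names and dimensions are illustrative, not any real system’s architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a flattened 32x32 grayscale "face"
# compressed to a 64-dimensional latent code.
IMG, LATENT = 32 * 32, 64

# One shared encoder captures features common to both faces
# (expression, pose); each person gets their own decoder.
W_enc = rng.standard_normal((LATENT, IMG)) * 0.01
W_dec_a = rng.standard_normal((IMG, LATENT)) * 0.01
W_dec_b = rng.standard_normal((IMG, LATENT)) * 0.01

def encode(img):
    return np.tanh(W_enc @ img)   # shared latent representation

def decode(latent, W_dec):
    return W_dec @ latent         # person-specific reconstruction

face_a = rng.random(IMG)          # stand-in for a real photo of person A

# The "swap": encode person A's expression, then decode it
# with person B's decoder, yielding B's face with A's expression.
latent = encode(face_a)
fake_b = decode(latent, W_dec_b)

print(latent.shape, fake_b.shape)  # (64,) (1024,)
```

In a real system the encoder and decoders are deep networks trained on thousands of frames of each person, which is why abundant public footage makes someone an easier target.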


The circle of victims: unfortunately, anyone can become a target

Many people believe that deepfakes primarily affect celebrities. It is indeed easier to find source material about public figures, but today even an average social media presence may provide enough raw material for a convincing fake. As a result, victims often face not only legal consequences but serious human ones as well: their credibility is called into question, their workplace or business relationships may be damaged, and they may suffer a long-term loss of trust. All of this frequently happens even though they have committed no wrongdoing at all; they merely served as “raw material” for an algorithm.


What can the law do?

In most legal systems—including Hungary and the European Union—there is currently no legislation that was created specifically to regulate deepfakes. Instead, existing legal instruments are being applied to an entirely new phenomenon. The first line of defense is civil law, in particular personality rights. These protect an individual’s good reputation, their right to their image and voice, and respect for private life. A deepfake often violates several of these rights at the same time. In theory, the victim may request the removal of the infringing content, satisfaction for the injury (such as a public statement or apology), or even damages. The greatest difficulty, however, is time. While legal proceedings are slow, the internet operates at extraordinary speed. A deepfake video can reach a global audience within a few hours, and by the time a legal decision is made, the damage is in many cases already irreversible.


Applicability of criminal law

Criminal law is typically applicable only if the deepfake already constitutes a specific criminal offense, such as defamation, threats, or fraud. In itself, the fact that a false but plausible recording has been made of someone is, in many cases, still not sufficient to initiate criminal proceedings. This is clearly one of the biggest legal gaps related to deepfakes: although the harm is very real, the specific legal category that would clearly address such situations and apply meaningful sanctions is missing.


But then who is responsible? The creator, the distributor, or the platform?


Online platforms have become key players in the fight against deepfakes. Often, problematic content is removed not on the basis of a court decision, but according to the internal rules of a social media platform. This solution is much faster, but at the same time less transparent and not always predictable. The European Union’s Digital Services Act (DSA) already seeks to strengthen the responsibility of platforms, but the central question remains open: is it right that in the digital space the enforcement of “truth” depends primarily on the decisions of private companies rather than on judicial institutions?


The crisis of proof: what counts as authentic?

One of the most serious effects of deepfakes is not necessarily measurable in individual cases, but lies in the fact that they generally undermine trust. If everything can be manipulated, then genuine recordings can also be easily questioned. This is referred to as a credibility crisis: not only do false contents pose a danger, but proving the truth itself is becoming increasingly difficult.


Where is the law heading?

Fortunately, there are encouraging developments in legislation in this area. The European Union’s Artificial Intelligence Act (AI Act), adopted in 2024, requires that AI-generated or manipulated content be clearly labeled. This alone does not solve every problem, but it is an important step toward prevention and transparency. In the long term, the law will likely be forced to create new concepts and rules of liability, for example within the framework of media law. The question is not whether this is necessary, but how quickly it will happen: lawmaking must catch up with the pace of the digital world.
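One way such labeling can work in practice is a provenance manifest attached to the media file, an approach taken in spirit by standards such as C2PA; the AI Act itself does not prescribe a technical format. The sketch below uses only the standard library, and its field names are illustrative, not any real standard.

```python
import hashlib
import json

def make_label(media_bytes: bytes, generator: str) -> str:
    """Produce a manifest declaring the content AI-generated.

    The field names here are illustrative, not a real standard.
    """
    manifest = {
        "ai_generated": True,                    # the disclosure itself
        "generator": generator,                  # which tool produced it
        # A hash ties the label to this exact file.
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

def verify_label(media_bytes: bytes, manifest_json: str) -> bool:
    """Check that the manifest actually refers to this file."""
    manifest = json.loads(manifest_json)
    return manifest["sha256"] == hashlib.sha256(media_bytes).hexdigest()

video = b"...pretend these are video bytes..."
label = make_label(video, "example-generator-v1")
print(verify_label(video, label))        # True: label matches the file
print(verify_label(b"tampered", label))  # False: file no longer matches
```

Real provenance schemes add cryptographic signatures so the manifest itself cannot be forged; a bare hash, as here, only detects that the file and label have drifted apart.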


Conclusion

Deepfakes represent one of the most serious legal challenges of the digital age. Current rules can provide partial protection, but they are often slow and incomplete. The task facing the law is clear: it must rethink the concepts of evidence, responsibility, and credibility, and give them modern, flexible content suited to our time. Until then, the most important protection remains awareness and caution, together with broad education in digital literacy.

Let us take care of ourselves and of each other, because not everything is what it seems at first glance!


