How are courts and international treaties reinterpreting the issue of AI’s legal responsibility in 2026?
Author: Dóra Fekete
Traditionally, legal systems have proceeded from the assumption that technology is merely a tool through which people make everyday life easier. Whether we are speaking of a simple instrument or a complex information system, the concept and interpretation of legal responsibility have always been tied to human actions and decisions. The emergence of artificial intelligence, however, is gradually reversing this way of thinking. Self-learning algorithms and increasingly autonomous intelligent systems (for example, the self-driving taxis already operating on the streets of Los Angeles, which assess complex traffic situations and make decisions without human input) suggest that by 2026 the legal regulation of these mechanisms will have to be reframed, both in theory and in practice. A particularly interesting and timely question is whether AI itself can bear legal responsibility. Will it ever be possible, in the course of legal proceedings, to declare that AI is a legal subject that must answer for the damage it causes in the same way as a human being?

The Turning Point
This turning point will not be brought about by a single major piece of legislation or one landmark judicial decision, but by a series of seemingly smaller, interconnected decisions across legal systems. Even now, courts are deciding cases in which human conduct plays only an indirect role. Authorities have already imposed fines over automated decision-making systems, even when the developers of those systems argued that they could not reasonably have foreseen the specific outcome. At international forums, the question is raised ever more frequently and openly: what happens if an AI system, which by its nature operates globally and across borders, causes harm and no state considers itself competent to adjudicate the case? How, under such circumstances, will jurisdiction be established?
Risk and Responsibility at the Forefront
This shift in perspective is particularly pronounced in Europe. In recent years, the European Union has moved away from regulating AI primarily through ethical guidelines. The EU AI Act no longer asks only what is “good” or “bad,” but also what risks AI carries and who will bear responsibility for them. The emphasis has therefore shifted toward prevention and accountability: documentation obligations, risk assessments, and human oversight are becoming mandatory for the socially or legally significant acts that AI “commits.” The implicit premise is that AI no longer appears merely as an object, but as a factor that independently shapes legally relevant situations. By contrast, a less uniform picture has emerged in the United States. The lack of comprehensive regulation at the federal level means that questions of responsibility are largely decided in the courts. In 2025, several lawsuits were already underway in which plaintiffs held not only users but also developers and data providers accountable. Although these cases did not recognize AI as having legal personality, they made it clear that defenses built on the argument “we did not know what it would do” are no longer acceptable.
Can AI be a Legal Subject?
The most essential and divisive element of the debate is whether AI itself can ever bear legal responsibility. According to classical legal theory, legal capacity and the capacity to act are prerequisites of legal subjectivity; without them, sanctions lose their meaning and cannot fulfill their function. An algorithm, however, does not feel pain, has no property, and does not comprehend the moral content of punishment in the way a human does. Despite this, a growing number of voices argue that the law has created “artificial” legal subjects before, most notably the business associations familiar from civil law. These are not natural persons either, yet they function as independent legal entities.
Responsibility and New Models
In 2026, most legislators will likely still resist granting full legal personality to AI. Nevertheless, a middle-ground solution already appears to be taking shape: AI becomes a “participant” in a functional sense. What does this mean? It means that the law will take the degree of an AI system’s autonomy into account when determining and assessing responsibility, without yet endowing it with rights of its own. The focus will not be on whether AI is “guilty,” but on who bore the risk and who had the actual possibility of intervening.

Several new models of liability may therefore have emerged worldwide by 2026. One is the extension of strict liability, particularly for high-risk applications: in such cases the developer or operator may be held liable even if all regulatory requirements were complied with. Another approach is the distribution of liability, under which data providers, model developers, and the organizations deploying an application may all share responsibility, depending on where, when, and to what extent an error occurred. In this context, the idea of mandatory insurance is also raised with increasing frequency, with compensation claims covered from a risk pool.

All of this becomes even more complex at the level of international law. Damage caused by AI often affects several states simultaneously, while neither the applicable law nor the competent court is clear. In 2026, the United Nations and other international forums are no longer formulating merely principled declarations, but are working on concrete harmonization proposals. A particularly sensitive area is state responsibility when AI is used for military, law enforcement, or administrative purposes; here, technological autonomy directly touches questions of state sovereignty and human rights alike.
Can the Law Adapt?
All of this has significant consequences for the legal profession, for companies, and for states. Lawyers will need to develop new conceptual frameworks, definitions, and legal institutions that go beyond today’s traditional understanding of liability. For companies, AI responsibility will no longer be merely a reputational issue, but will also represent a serious financial and legal risk in both development and practical application. States, meanwhile, will be forced to recognize that fostering innovation is sustainable only if social risks are managed credibly and consistently. Overall, it is becoming clear that the debate on AI’s legal responsibility cannot remain confined to theoretical, academic discourse. Legal systems must provide practical answers to how they will deal with damage caused by increasingly autonomous algorithms. The question, therefore, is no longer only whether AI can be a legal actor, but whether the law is capable of adapting quickly and flexibly enough to a world in which an ever-growing share of decisions is made not by humans, but by algorithms.