Abstract
This research examines administrative liability for damages caused by artificial intelligence (AI) systems in public services, offering a legal analysis of legality and transparency in automated administrative decisions. The topic gains importance as public administrations increasingly rely on intelligent systems, raising unprecedented legal challenges, particularly with respect to oversight and accountability. The study is structured into three main sections: the first outlines the theoretical framework of administrative liability and AI; the second explores the compatibility of automated decisions with the principle of legality and the scope of judicial oversight; and the third examines the components of administrative liability, supported by comparative judicial models from France, the United States, and Egypt, with an analysis of their applicability within the Omani legal context. The study concludes that administrative liability may arise from supervisory negligence or algorithmic mismanagement, even in the absence of direct technical error. It recommends the enactment of legislation to regulate the use of AI in public administration, promote algorithmic transparency, and institutionalize the principle of “meaningful human oversight” as a legal safeguard for fair accountability.
Published in: Humanities and Social Sciences (Volume 13, Issue 4)
DOI: 10.11648/j.hss.20251304.21
Page(s): 382-388
This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright © The Author(s), 2025. Published by Science Publishing Group
Keywords: Administrative Liability, Artificial Intelligence, Public Administration, Legality, Transparency, Algorithmic Decisions, Oman
1. Introduction
The world today is undergoing a rapid digital transformation in which modern technologies, chief among them artificial intelligence, are reshaping the roles of the state and public administrative practices. Intelligent systems have become central tools in administrative decision-making, public service delivery, and performance monitoring. However, this increasing reliance on AI raises new legal challenges, especially regarding the subjection of these systems to the core principles of administrative law, most notably legality and transparency, as well as the question of who bears liability when harm occurs.
This study aims to explore the legal issues stemming from the use of AI in public administration. It seeks to clarify the theoretical and practical framework for holding administrations liable for damages resulting from the use of such technologies. Additionally, the study proposes a clear legal vision that supports the development of a digital administrative system aligned with the requirements of legality and respectful of individual rights.
1.1. Research Objectives
This study seeks to define the legal framework governing administrative liability arising from the use of AI systems in public institutions. It analyzes the extent to which administrative decisions generated by such systems align with the principles of legality and transparency, cornerstones of administrative law. The study also proposes a set of legal safeguards aimed at curbing potential administrative abuse in the use of AI, ensuring a balance between technological progress and the protection of individual rights and freedoms.
Research Problem
The adoption of artificial intelligence in administrative decision-making and implementation processes marks a significant technological shift. However, this evolution gives rise to a fundamental legal question: to what extent are decisions issued by AI systems subject to the principles of administrative law, particularly legality and transparency? And who bears liability when harm is caused by such systems?
1.2. Research Questions
1) Can the administration be held liable for damages caused by an AI system under its control?
2) To what extent are AI-generated decisions subject to the principle of legality?
3) Who bears the responsibility: the administration, the software developer, or the AI system itself?
4) How can transparency be enforced on the algorithms used in public administration?
Significance of the Study
The importance of this research lies in its effort to assess the adaptability of traditional administrative law principles to rapid technological developments, particularly the integration of artificial intelligence into public governance. It highlights the problem of administrative will becoming ambiguous when intelligent systems are involved in decision-making and the resulting challenges in terms of oversight and accountability. The research also aims to present a legal vision that contributes to redefining the relationship between the administration and the citizen in light of these changes, through a practical analytical study that supports the development of the legislative framework in Oman and the Arab region.
1.3. Research Methodology
This study is based on two main legal methodologies:
1) First, the analytical method, which involves examining relevant legal texts, jurisprudential principles, and judicial rulings that govern administrative liability, in order to assess their applicability in the context of AI-based decision-making.
2) Second, the comparative method, which includes studying the experiences of countries such as the Sultanate of Oman, France, and Egypt, with the aim of extracting regulatory models that may be used to develop the Omani legal framework and regulate the use of AI in administration, thereby achieving a balance between technical efficiency and legal safeguards.
1.4. Contribution and Novelty of the Study
This research offers a pioneering legal analysis of the responsibility of public administration for decisions made using AI systems, with an applied focus on the Omani context. While much of the existing literature concentrates on the technical or ethical aspects of AI, this study introduces a novel legal perspective by examining how administrative law principles such as legality, transparency, and accountability can be applied to algorithmic governance.
Previous studies have addressed the topic from multiple angles but without expanding into the comparative legal framework as this research does. For example:
1) Veale and Brass [17] examined the administrative challenges of using machine learning in the public sector, highlighting the absence of regulatory frameworks for transparency and accountability in automated decision-making, but did not address legal liability or judicial interpretation.
2) Cobbe [6] presented a critical analysis of the extent to which UK administrative courts can review automated decisions, pointing to the weakness of traditional legal structures, yet did not propose a comparative or reformative legal model.
3) Binns [22] focused on the notion of algorithmic accountability from a philosophical and political standpoint and discussed how algorithmic decisions could be linked to public responsibility, but did not address their legal or administrative implications.
4) Eubanks [9] analyzed the social impact of AI in American public administration, particularly in welfare and social support systems, and demonstrated how automated systems may lead to systemic discrimination, though without proposing concrete legal remedies.
This study stands apart from previous research by offering a comprehensive legal framework for administrative liability concerning AI-induced harm. It develops the concept of “algorithmic legality,” integrating comparative judicial precedents and legislative recommendations that can be applied within the Omani and broader Arab legal contexts.
1.5. Structure of the Study
This study is composed of three main sections as follows:
1) Theoretical Framework of Administrative Liability and Artificial Intelligence, which includes two chapters. The first discusses the concept and development of administrative liability, while the second addresses artificial intelligence and its role within the modern administrative structure.
2) Legality and the Use of Artificial Intelligence, which also includes two chapters. The first explores the principle of legality and AI-generated administrative decisions; the second focuses on judicial oversight of decisions made using artificial intelligence.
3) Administrative Liability for AI-based Decisions, which examines the elements of administrative liability and presents comparative judicial models and practical applications.
2. Theoretical Framework of Administrative Liability and Artificial Intelligence
2.1. The Concept and Development of Administrative Liability
Administrative liability is a cornerstone in the legal structure governing the relationship between the administration and citizens. It serves as an effective mechanism to maintain a balance between the authority of the administration and the protection of individual rights. Administrative law scholars define it as the obligation of the administration to compensate for harm resulting from its unlawful acts or those of third parties functionally linked to it, whether such acts are intentional or not.
This position is supported by Lamovšek’s [10] comprehensive review of administrative law literature, which affirms the urgent need to adapt traditional liability concepts to the realities of digital governance and artificial intelligence.
Administrative liability emerges as a key tool for holding public authorities accountable for actions that cause harm to individuals. French jurist René Chapus emphasizes that administrative liability differs from civil liability in its legal foundation, as it is grounded in public law and often based on the occurrence of harm rather than contractual obligations.
The evolution of administrative liability has passed through historical phases: from the doctrine of state immunity, to recognition of liability for public service operations, to the risk theory, and ultimately to judicial affirmation of liability even in the absence of fault. The landmark Blanco ruling (Tribunal des conflits, France, 1873) marked a pivotal shift by affirming that administrative liability is governed by distinct rules of public law, independent of civil law.
In the context of the Sultanate of Oman, administrative liability draws on the general provisions of the Civil Transactions Law, particularly Article 282, which serves as a flexible legal basis for holding the administration accountable when AI systems cause harm linked functionally to administrative operations.
Legal doctrine recognizes several forms of administrative liability, including:
1) Fault-based liability
2) Strict liability
3) Liability for acts of others, when those persons are functionally connected to the administration
This last form aligns well with the technical nature of AI, which lacks independent legal will, thus requiring the administration to assume full responsibility for supervision and control.
“Negligent or even non-deliberate administrative conduct can give rise to liability where injury is foreseeable.” (Schwartz, B. [15], Administrative Law, p. 456)
2.2. Artificial Intelligence and Its Role in Public Administration
Artificial Intelligence (AI) is a branch of computer science concerned with designing systems and software capable of simulating human cognitive functions, such as logical reasoning, data learning, decision-making, image and speech recognition, and environmental interaction. The OECD defines AI as:
“The ability of digital systems to perform tasks typically requiring human intelligence, such as understanding, learning, reasoning, and interaction.”
According to the European Commission, AI is:
“Systems designed to interact with their environment, interpret data, learn from it, and make decisions to achieve specific objectives.”
In public administration, AI has been adopted in various functions including data analysis, resource allocation, routine decision-making, and performance monitoring.
Umoh [16] observes that a key challenge in the use of AI in public administration is the weak legislative infrastructure and the shortage of skilled personnel, which undermine the lawful and effective integration of these technologies.
Paul Nemitz emphasizes that administrative decisions made through AI systems must not escape accountability. These decisions must remain subject to the same principles of legality, transparency, and judicial oversight that govern traditional administrative acts [12].
A core legal dilemma stems from the fact that AI systems lack legal personality or autonomous will, complicating the assignment of liability. Nonetheless, contemporary legal scholarship agrees that the administration remains responsible for the systems it adopts. The absence of direct human action does not absolve the administration from legal accountability.
Veale and Brass argue that AI tools increasingly act as de facto administrative agents and thus require new models of responsibility and governance.
2.3. The Principle of Legality and Automated Administrative Decisions
The principle of legality is the cornerstone of administrative legal systems. It requires that no administrative decision may be issued unless grounded in lawful authority. Scholars assert that legality entails the subjection of all administrative acts to the law in both form and substance. It is not sufficient for a decision to come from a competent authority; it must also achieve the objectives of public service within the framework of defined legal limits.
“All public powers must be exercised lawfully. The courts have consistently reinforced this foundational principle.” (Craig, P. [8], Administrative Law, 7th ed., Oxford University Press, p. 6)
With the advancement of AI technologies, questions arise as to whether automated administrative decisions remain subject to this principle. Dr. Mohamed Abu Al-Einen [1] argues that:
“Automated administrative decisions must be held to the same standards of legality, and administrations must remain accountable for their content and interpretation. A legal framework must exist requiring written justifications for any decision issued by an AI system.”
“Citizens must be given meaningful explanations of algorithmic decisions, especially where these affect rights or entitlements.”
The Egyptian Council of State upheld this principle in Case No. 34775/65JY [19], establishing that the administration is responsible for relying on unverifiable reports, a precedent that clearly extends to non-transparent algorithmic decisions. This ruling confirms that the Egyptian judiciary is prepared to treat AI-based decisions under an expanded framework of administrative accountability.
In the Sultanate of Oman, although there is no explicit law regulating AI, Article 3 of the Electronic Transactions Law states:
“Every individual has the right to know the source of data used in official procedures.” This can be interpreted as a foundational legal basis for enforcing algorithmic transparency.
When AI systems are involved in decision-making, human will may be diminished or merely supervisory, weakening the legal connection between the decision and its underlying authority. This prompts a fundamental legal question: Can algorithmic outcomes still be classified as administrative decisions?
Paul Nemitz warns against so-called “black box decisions,” where a decision is issued by an algorithm without any legally verifiable explanation. He argues this undermines the principle of legality and endangers the legitimacy of public decision-making (https://doi.org/10.1007/s13347-017-0292-3).
Likewise, Matthias Pfeffer [14] asserts that legality must extend to the design of the algorithm itself: the mechanism through which an automated decision is reached must be clear and accountable.
Frank Pasquale [13] raises a similar concern, cautioning that administrative algorithms often obscure their evaluative logic, reducing the right to appeal to a mere formality.
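To make the contrast concrete, the difference between a “black box” and a legally reviewable decision procedure can be pictured in a deliberately simple sketch. The code below is purely illustrative and is not drawn from any system discussed in this study; the rule names, thresholds, and amounts are invented. A reviewable eligibility check records, alongside each outcome, the criteria it applied in human-readable form, which is what the duty to give reasons demands:

```python
from dataclasses import dataclass, field

# Hypothetical illustration: a reviewable (non-black-box) eligibility check.
# Each decision carries the rules that produced it, so the affected citizen
# and a reviewing court can verify the reasoning. All thresholds and amounts
# are invented for this sketch.

@dataclass
class Decision:
    granted: bool
    reasons: list = field(default_factory=list)  # human-readable justification

def assess_benefit(income: float, dependents: int,
                   income_cap: float = 12000.0) -> Decision:
    reasons = []
    # Rule applied is recorded, not just the outcome.
    adjusted_cap = income_cap + 2000.0 * dependents
    reasons.append(f"Applicable income cap: {adjusted_cap:.2f} "
                   f"(base {income_cap:.2f} + 2000.00 per dependent x {dependents})")
    if income <= adjusted_cap:
        reasons.append(f"Declared income {income:.2f} is within the cap: eligible")
        return Decision(True, reasons)
    reasons.append(f"Declared income {income:.2f} exceeds the cap: not eligible")
    return Decision(False, reasons)

d = assess_benefit(income=15000.0, dependents=1)
print(d.granted)  # False
for r in d.reasons:
    print("-", r)
```

An opaque model, by contrast, would return only `granted` with no recoverable reasons, leaving the citizen and the reviewing court nothing to examine.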
2.4. Judicial Oversight of AI-driven Administrative Decisions
Judicial oversight is built upon the principle of legality and serves as the key legal mechanism allowing individuals to challenge administrative decisions that affect their interests. With artificial intelligence integrated into administrative decision-making, the nature of judicial review has evolved from examining human intent to analyzing the underlying algorithmic logic.
“Judicial review extends to all forms of executive action, whether performed by human officials or automated systems.” (Barnett, H. [4], Constitutional & Administrative Law, 13th ed., Routledge, p. 583)
In Egypt, the Administrative Judiciary affirmed the right of individuals to access the reasoning behind administrative decisions that impact their legal status. In Case No. 5564/71JY [2023], the court annulled a decision denying a promotion based on an automated evaluation system. The ruling held that the absence of clear justification violates the principle of transparency and exposes the decision to annulment. It emphasized that the administration remains liable when the decision is ambiguous or when the affected party is unable to understand the evaluation criteria, thus strengthening judicial support for algorithmic transparency.
In Oman, while there is no explicit legislation on artificial intelligence, the general principles of the Law on Administrative Procedures require that decisions be justified. This can be interpreted as a foundation for judicial review of algorithmic decisions.
In comparative law, the French Conseil d’État addressed a similar issue in a 2020 ruling concerning the use of an algorithm on the Parcoursup platform for university admissions. The court affirmed the student’s right to know the algorithmic criteria used in decision-making, confirming that automated decisions remain administrative acts subject to legality and judicial scrutiny.
In the United States, the California Superior Court [21] ruled in Smith v. Department of Social Services that the use of an AI system that excluded citizens from welfare without a clear legal basis constitutes a constitutional violation. The court ordered the system’s suspension and awarded damages to the affected parties, reinforcing the role of transparency as a legal prerequisite for automated decisions.
“The duty to give reasons is now widely accepted as a constitutional requirement for legality and fairness.” (Craig, P. [8], Administrative Law, 7th ed., Oxford University Press, p. 327)
Cobbe [6] also notes that the absence of legal interpretability in algorithmic decisions limits the effectiveness of judicial review (Legal Studies, 39(4), 636-655; https://journals.sagepub.com/doi/full/10.1111/lest.12251).
Bignami emphasizes that administrative liability should rest on the authority that employs the system, not the system itself, especially when meaningful human oversight is lacking.
In light of these developments, the definition of administrative decisions must be updated to include cases where decisions are generated through automated processing without direct human input. Legal systems must also impose clear obligations on administrations to provide understandable justifications for such decisions, ensuring accountability and access to justice.
The researcher concludes that automated administrative decisions such as issuing driving licenses automatically or determining eligibility for government support based on algorithmic analysis should remain subject to legality, provided they are grounded in legal authority and open to judicial challenge. Public officials and institutions cannot evade responsibility simply because the decision originated from a digital system.
Therefore, the study recommends enacting explicit legislation to regulate the use of AI in public administration and establish mechanisms for human review of algorithmic decisions, thus upholding the principle of legality and safeguarding individual rights.
3. Administrative Liability for AI-based Decisions
3.1. The Elements of Administrative Liability
Administrative liability is traditionally founded on three core elements: fault, damage, and causation. Administrative law distinguishes between fault-based liability and strict liability, particularly in contexts involving high-risk administrative activities. This distinction aims to protect individuals and ensure compensation for harms caused by the state.
According to B. Schwartz [15], public administration can be held liable for harmful acts even in the absence of intent or malice, provided that there is a functional connection between the harmful act and the administrative body (Schwartz, Administrative Law, 3rd ed., Aspen Law & Business).
This view aligns with the nature of decisions produced by AI systems, which inherently lack human intent or will. Thus, legal responsibility must rest on the administration’s duty to supervise and control such systems.
Similarly, Abdel Ghani Basyouni [2] highlights that French administrative jurisprudence has evolved to recognize state liability for technical decisions involving electronic tools or software when these result in violations of individual rights (Basyouni, Administrative Law, p. 511). This reinforces the concept that AI, as an executive tool within the administration, does not exempt the authority from liability but rather necessitates enhanced oversight.
In Oman, Article 282 of the Civil Transactions Law provides a general foundation:
“Whoever causes harm through his act is obligated to remedy it.” This article serves as a flexible legal basis to impose administrative liability for harm resulting from AI systems, particularly where administrative negligence in oversight or evaluation can be demonstrated.
A major legal challenge arises here:
How do we define “fault” in a digital context?
1) Is a programming error considered administrative fault?
2) Does the deployment of an untested or unsupervised system constitute negligence?
Modern legal literature advocates redefining “fault” to suit digital realities. Correia et al. stress the importance of incorporating clear legal standards into smart city strategies, especially when decisions are delegated to AI systems.
Paul Nemitz introduces the concept of meaningful human oversight as a key safeguard for establishing liability in AI systems. He argues that allowing untraceable automated decisions constitutes administrative negligence in itself.
Matthias Pfeffer [14] suggests that the algorithmic standards used in administrative decisions should be documented within official administrative files, so courts can evaluate their legality and link any errors to either system flaws or supervisory negligence.
Moreover, Article 22 of the EU General Data Protection Regulation (GDPR) enshrines the right to explanation for automated decisions, reinforcing procedural fairness and accountability when decisions significantly impact individuals.
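The procedural safeguards discussed above, the right to an explanation and meaningful human oversight, can be pictured in a minimal, purely hypothetical sketch. The record fields and names below are invented for illustration and are not taken from any cited system: an automated recommendation is stored together with its reasons, and the decision produces no final outcome until a named human official confirms it.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a "meaningful human oversight" gate. The algorithmic
# output is only a recommendation; the decision file records both that output
# (with its explanation) and the human reviewer who made it final.

@dataclass
class DecisionRecord:
    subject_id: str
    system_recommendation: str            # the algorithm's proposed outcome
    explanation: str                      # human-readable reasons for it
    reviewed_by: Optional[str] = None     # human official who confirmed it
    final_outcome: Optional[str] = None   # set only after human review

def finalize(record: DecisionRecord, reviewer: str, outcome: str) -> DecisionRecord:
    # The only path to a final outcome runs through a named human reviewer.
    record.reviewed_by = reviewer
    record.final_outcome = outcome
    return record

rec = DecisionRecord("applicant-001", "deny",
                     "Declared income exceeds the applicable cap.")
assert rec.final_outcome is None  # no legal effect without human review
finalize(rec, reviewer="case-officer-17", outcome="deny")
print(rec.reviewed_by, rec.final_outcome)
```

The design choice worth noting is that human review here is not an optional log entry but the sole route to a final outcome, which is one way of operationalizing the view that untraceable automated decisions amount to administrative negligence.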
When AI is used for core administrative functions, such as employee evaluation or the allocation of government subsidies, harm may occur without individuals understanding its origin. In such cases, administrative fault should be presumed if there is a demonstrable lapse in oversight or if the system’s programming errors lead to unjust outcomes.
In comparative jurisprudence, courts in France, Canada, and the United States have begun applying an expanded interpretation of administrative liability, not requiring proof of direct programming errors. Instead, they emphasize administrative negligence in supervision or reliance on opaque AI systems.
The researcher concludes that modern legal and judicial trends increasingly recognize failure to evaluate or supervise intelligent systems as administrative fault, triggering liability either through compensation or annulment of the decision. This approach strengthens individual rights and reinforces the principle of legality in the digital age.
3.2. Comparative Judicial Models and Practical Applications
In Smith v. Department of Social Services, a landmark case, a group of citizens filed suit against the California Department of Social Services, alleging that the agency’s use of an automated eligibility system led to the wrongful exclusion of beneficiaries without legal justification. This research expands upon the legal implications of that case by integrating it into a broader framework of administrative liability and judicial review.
The California Superior Court ruled that the agency’s reliance on a flawed AI system-without meaningful human oversight-constituted administrative fault. The court awarded compensation to the affected individuals and ordered the system to be reevaluated.
The ruling was based on principles of procedural justice, the right to transparency, and the Due Process Clause of the 14th Amendment to the U.S. Constitution.
See: California Superior Court, Smith v. Department of Social Services (2019) [21]; Eubanks, V. [9], Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, St. Martin’s Press.
The Egyptian Administrative Judiciary Court [18] issued a ruling suspending the enforcement of an administrative decision that denied an employee a promotion based on an AI evaluation tool.
The court determined that the decision lacked adequate justification and failed to disclose the criteria used, thereby violating the principle of transparency and good governance.
This ruling was grounded in foundational principles of Egyptian administrative law, specifically the requirement that any decision affecting legal standing must be properly reasoned.
To date, the Omani administrative judiciary has not issued published rulings addressing liability for AI-generated decisions. However, the relevant legal texts open the door for judicial engagement:
1) Article 3 of the Law on Administrative Procedures requires that administrative decisions be reasoned.
2) Article 282 of the Civil Transactions Law states that “any person who causes harm is obliged to remedy it.”
These articles empower the judiciary in Oman to expand its jurisprudence to include effective oversight of algorithmic decisions, particularly when harm arises and human supervision is absent.
Analytical Summary
Judicial trends in both Arab and Western systems increasingly move toward holding public authorities accountable for harms caused by AI, even in the absence of technical fault.
Key points include:
1) Courts reject claims that automated decisions absolve administrative entities of liability when the system was adopted by the agency.
2) There is a growing need to establish a new legal doctrine, Algorithmic Legality: a framework that ensures automated decisions are subject to the same legal safeguards and judicial scrutiny as human-made decisions.
Frank Pasquale [13] argues that all AI decision-making systems in public administration should be subject to both ex-ante and ex-post review, and that AI-generated decisions should be treated identically to human decisions in terms of legal liability.
Conclusion of the Subsection
The researcher concludes that comparative jurisprudence increasingly supports the application of administrative liability principles to AI-generated decisions, even where no direct error can be identified.
This trend reflects a commitment to protecting individual rights and maintaining the rule of law in the face of administrative digitization.
4. Conclusion
The integration of artificial intelligence into administrative processes marks an essential step toward digital transformation and improving efficiency in public governance. However, such progress must not come at the expense of the fundamental principles of administrative law, especially legality, transparency, and the protection of individual rights.
This study has demonstrated that administrative bodies remain legally accountable for decisions made through intelligent systems unless a clear legal framework defines the responsibilities of each actor involved in automated governance. The absence of specific regulation for the use of AI in public administration creates legal ambiguity and undermines the rule of law.
Key Findings
1) The absence of explicit legislative foundations for AI in administrative contexts has resulted in a regulatory gap that weakens the principle of legality and allows for decisions beyond effective oversight.
2) Administrative liability for AI-induced harm does not differ in essence from liability for human decisions, provided that the automated system has been adopted by the administration and its outcomes implemented.
3) Comparative legal systems increasingly hold administrative entities accountable for algorithmic decisions, even in the absence of technical error, when human oversight is lacking or the process lacks transparency.
4) The principle of transparency now includes the obligation to provide explanations for algorithmic decisions, as supported by EU legislative developments and Arab court rulings.
5) Although Omani courts have not yet issued rulings in this area, general legal provisions, particularly in the Law on Administrative Procedures and the Civil Transactions Law, allow for judicial interpretation that encompasses oversight of algorithmic decisions.
6) The traditional legal model is insufficient to regulate the relationship between citizens and intelligent administration. New legal doctrines are needed, such as algorithmic legality, algorithmic transparency, and meaningful human oversight.
Recommendations
1) Draft a specialized law or legal annex under Oman’s Administrative Procedures Law to regulate the use of AI in public administration, establishing standards for legality, accountability, and transparency.
2) Require public authorities to include a copy of the algorithm’s logic or a summary report within the decision file to enable judicial review.
3) Provide training for judges and legal advisors in the domain of artificial intelligence to enhance the judiciary’s readiness to address such disputes.
4) Establish a national committee or unit for AI governance in public administration to monitor the safe and lawful deployment of automated systems.
5) Strengthen parliamentary and judicial oversight of AI-based decision-making systems to ensure a balanced approach between digital transformation and legal safeguards.
6) Adopt the principle of meaningful human oversight as a legal guarantee for fair attribution of responsibility in cases of harm, while ensuring the administration can explain its algorithmic decisions in court.
Author Contributions
Ahmed Mokhtar Abdel Hamid is the sole author. The author read and approved the final manuscript.
Funding
This work is not supported by any external funding.
Data Availability Statement
The data supporting the findings of this research are reported in this manuscript.
Conflicts of Interest
The author declares no conflicts of interest.
References
[1] Abu Al-Einen, M. (2021). Electronic Administrative Decisions: An Analytical Study. Dar Al-Fikr Al-Jamii, Cairo.
[2] Abdel Ghani, B. (1991). A Comparative Study of the Principles of Administrative Law and Their Application in Egypt. Monshaat Al-Maaref, Cairo.
[3] Sami, F. M. (1997). Administrative Liability in Commercial Law. Dar Al-Thaqafa Publishing, Cairo.
[4] Barnett, H. (2020). Constitutional & Administrative Law (13th ed.). Routledge.
[5] Bignami, F. (2022). Artificial Intelligence Accountability in Public Administration. The American Journal of Comparative Law. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4166881
[6] Cobbe, J. (2019). Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making. Information, Communication & Society.
[7] Correia, P. M. A. R., et al. (2024). The Challenges of Artificial Intelligence in Public Administration in the Framework of Smart Cities: Reflections and Legal Issues. ResearchGate. https://www.researchgate.net/publication/377657635
[8] Craig, P. (2011). Administrative Law (7th ed.). Oxford University Press.
[9] Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
[10] Lamovšek, N. (2023). Analysis of Research on AI in Public Administration: Literature Review and Textual Analysis. ResearchGate.
[11] Loi, M., & Spielkamp, M. (2021). Towards Accountability in the Use of Artificial Intelligence for Public Administrations. Preprint. arXiv.
[12] Nemitz, P. (2018). Constitutional Democracy and Technology in the Age of Artificial Intelligence. Philosophy & Technology, 31, 475-489.
[13] Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
[14] Pfeffer, M. (2022). AI and Public Administration: Challenges and Legal Foundations. Springer.
[15] Schwartz, B. (1991). Administrative Law (3rd ed.). Aspen Publishers.
[16] Umoh, B. E. (2025). The Impact of Artificial Intelligence on Public Administration in the Public Sector: Opportunities and Challenges. SSRN.
[17] Veale, M., & Brass, I. (2019). Administration by Algorithm? Public Management Meets Public Sector Machine Learning. Public Money & Management, 39(5), 310-318.
[18] Egyptian Administrative Judiciary Court. (2023). Case No. 5564/71JY, Session 2023.
[19] Egyptian Council of State. (2010). Case No. 34775/65JY, Session 2010.
[20] Conseil d’État (France). (2020). Parcoursup Case, Decision No. 428253, February 27, 2020.
[21] California Superior Court (USA). (2019). Smith v. Department of Social Services.
[22] Binns, R. (2018). Algorithmic Accountability and Public Reason. Philosophy & Technology, 31(4), 543-556.
Cite This Article
Hamid, A. M. A. (2025). Administrative Liability for Damages Caused by Artificial Intelligence Systems in Public Services: An Analytical Study in Light of the Principles of Legality and Transparency. Humanities and Social Sciences, 13(4), 382-388. https://doi.org/10.11648/j.hss.20251304.21