Causation in AI Tort Litigation: Legal Dilemmas of Algorithmic Black Boxes and Burden of Proof Allocation

Keywords

algorithmic black boxes; causation determination; burden of proof allocation; AI tort litigation

Abstract

Through a comparative legal analysis across jurisdictions, this study explores the difficulties that algorithmic black boxes pose for traditional causation doctrine and the allocation of the burden of proof in artificial intelligence tort claims. The study employs cross-jurisdictional doctrinal analysis and a systematic case study method to examine key AI tort cases involving algorithmic decision-making platforms, medical AI systems, and autonomous vehicles. The results show that traditional “but-for” tests of causation are inherently limited when applied to black-box machine learning systems, because courts cannot establish causal relationships using conventional evidentiary frameworks. Cross-jurisdictional comparison reveals systemic differences in burden-allocation mechanisms, with jurisdictions adopting strategies that range from mandatory algorithmic audits and presumptive liability frameworks to stricter requirements for expert testimony. The research identifies significant informational asymmetries between plaintiffs and AI system controllers, which call for innovative legal responses such as modernized collective liability rules and causation presumptions. The findings indicate that legal frameworks must be adapted to accommodate probabilistic algorithmic decision-making while preserving core tort law principles. The study accordingly advocates a shift toward technologically adaptive liability regimes that balance victim protection with innovation incentives, implemented through enhanced judicial technical competence and standardized transparency requirements.

https://doi.org/10.63808/lc.v1i2.127

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2025 Jinbo Ma