The Risks of Adopting Artificial Intelligence Systems in Thai Law Firms


Wisarut Bungutoom

Abstract

This academic article aimed to investigate the risks associated with the implementation of Artificial Intelligence (AI) systems in law firms operating in Thailand. The study sought to systematically identify, classify, and analyze the resulting core operational, legal, and ethical risks. The findings revealed several significant risks stemming from the application of this technology, categorized as follows. 1) Risks from the application of automated AI systems in law firms: 1.1) errors caused by unforeseen situations, or “edge cases,” which may give rise to professional malpractice liability; 1.2) risks of data security breaches and privacy violations under the Personal Data Protection Act B.E. 2562 (PDPA), especially in cross-border data transfer scenarios; and 1.3) ethical risks arising from algorithmic bias, which could compromise fairness within the justice system. 2) Risks stemming from the ambiguity of the regulatory framework governing AI in Thailand: 2.1) the unresolved legal status of AI personhood (legal personality); 2.2) challenges to jurisdictional clarity, which typically relies on the domicile of the parties or the place where the cause of action arose; and 2.3) potential conflicts between the PDPA and Bar Council regulations on client confidentiality, under which transferring personal data to an AI platform for analysis could be interpreted as an unauthorized disclosure of client secrets to service providers. In conclusion, the article proposes a risk management approach based on the principles of the NIST AI Risk Management Framework (AI RMF 1.0), which serves as a practical guideline for Thai legal practitioners in developing their business, their services, and continuous risk control mechanisms.

Article Details

How to Cite
Bungutoom, W. (2025). The Risks of Adopting Artificial Intelligence Systems in Thai Law Firms. Arts of Management Journal, 9(6), 1–13. Retrieved from https://so02.tci-thaijo.org/index.php/jam/article/view/282453

References

Adadi, A., & Berrada, M. (2018). Peeking inside the Black-Box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160. https://doi.org/10.1109/ACCESS.2018.2870052

Armour, J., Parnham, R., & Sako, M. (2022). Augmented lawyering. University of Illinois Law Review, 2022(1), 71–138.

Davenport, T. H., & Kirby, J. (2016). Only humans need apply: Winners and losers in the age of smart machines. HarperBusiness.

Lacity, M. C., & Willcocks, L. P. (2018). Robotic process and cognitive automation: The next phase. SB Publishing.

Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183. https://doi.org/10.1007/s10676-004-3422-1

National Artificial Intelligence Initiative Act of 2020, 15 U.S.C. § 9401(3).

National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0) (NIST AI 100-1). U.S. Department of Commerce.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.

O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing.

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.

Sadiku, M. N. O. (1989). Artificial intelligence. IEEE Potentials, 35–39.

Sako, M., Armour, J., & Parnham, R. (2020). Lawtech adoption and training: Findings from a survey of solicitors in England and Wales. University of Oxford.

Susskind, R. E. (2017). Tomorrow’s lawyers: An introduction to your future. Oxford University Press.

Taddeo, M., McCutcheon, T., & Floridi, L. (2018). Ethics, governance, and policies in artificial intelligence. Springer. https://content.e-bookshelf.de/media/reading/L-16871700-5ce6df8467.pdf