GAI will do more than just pass the bar exam

Jiang Shisong
The success of OpenAI's GPT-4, the latest model, in performing complex legal tasks could have significant implications for the legal profession.

According to a new research paper co-authored by Daniel Martin Katz, a professor at Chicago-Kent College of Law, GPT-4 scored roughly 75 percent on the notoriously challenging Uniform Bar Examination in the United States, surpassing the 68 percent average and ranking in the 90th percentile. This feat is awe-inspiring, given the exam's difficulty and the long hours lawyers put in to prepare for it.

Katz notes that the success of OpenAI's GPT-4, the latest model, in performing complex legal tasks such as generating language and comprehending convoluted legalese could have significant implications for the legal profession. This breakthrough could lead to improved access to legal services and an increase in the capacity of lawyers who effectively utilize the technology.

This is hardly surprising, however. The potential, and the "myth", of ChatGPT-like generative artificial intelligence (GAI) have now unfolded across various fields, opening up a new realm of possibilities for human society. The legal system, which is intimately tied to our daily lives, certainly cannot be exempt from the transformative power of GAI. The impact of GAI on our legal landscape is by no means a matter of if, but rather a matter of time, degree and scope.

In effect, that time has already come, moving beyond experimental research conducted in ivory towers. Earlier this year, there was a paradigm shift in the international legal community when judges in Colombia openly utilized GAI to draft certain sections of their judicial opinions, marking a significant milestone.

The first-ever Colombian case using GAI involved a constitutional remedy seeking access to health care for a minor diagnosed with autism spectrum disorder. The trial judge ruled in favor of the plaintiff, and the appellate judge used ChatGPT to further elaborate on the scope of the remedy. The judge argued that the recently enacted Law 2213/22 allows judges to use AI systems like ChatGPT as tools to expedite judicial decision-making.

While applying GAI in adjudications may help facilitate judicial decision-making, it raises concerns about transparency, accuracy and fairness. As Brittan Heller, an expert on the intersection of technology and law at Stanford University, and her colleague have observed, the complex algorithms that GAI relies on are difficult to interpret and may produce biased decisions if the data used to train the AI is biased.

This bias can perpetuate existing inequalities in the legal system, especially for marginalized groups. Additionally, using GAI may limit judges' discretion in decision-making, since the system lacks the capacity to understand contextual information and facts, which are crucial to making fair and just decisions. Overall, the lack of transparency and the potential for bias in GAI pose a challenge in determining who is accountable for the decisions made and how to ensure fairness and justice in the legal system.

In addition, it is worth noting that the use of GAI in courtrooms involves integration with other emerging technologies, such as extended reality (XR). While GAI brings its own challenges, combining these cutting-edge technologies poses unique problems. It therefore makes more sense to view GAI in the legal system within the broader context of technological evolution.

Another recent case in Colombia echoes such a scenario. In this case, Judge Maria Victoria Quiñones held a virtual court appearance in the metaverse and used ChatGPT to help set procedural rules. However, there are challenges to using XR in courtrooms, such as avatars' technical limitations in conveying emotions and the risk of bias.

Moreover, virtual appearances have been linked to adverse sentencing outcomes and unfavorable impressions of child witnesses. High-end XR hardware is also expensive and not available in all markets, limiting access to justice. These cautionary tales suggest that using XR in virtual court appearances must be carefully considered to avoid new forms of societal harm.

Indeed, technology is a double-edged sword. Yet, the arrival of GAI and related digital technologies has given this cliché new and unpredictable dynamics in the legal landscape. As a result, a multitude of new and essential theoretical and practical questions must be addressed by either the judicial system collectively or individual lawyers.

For example, how can we ensure that the algorithms and virtual environments used in the legal system are transparent, unbiased, and fair? How can we prevent the misuse or manipulation of these technologies by those with more power or resources? How can we ensure that everyone has equal access to these technologies and that they do not exacerbate existing inequalities in the legal system?

Crafting responses to these questions is crucial for the growing number of countries aspiring to build a "smart court" system, as this undertaking inevitably involves embracing cutting-edge digital technologies. As our legal landscape advances toward a more digital future, we must prepare ourselves rather than take it for granted.

To do so, it is essential that we ask ourselves a fundamental question: What kind of legal system do we want to create? Do we want one that emphasizes efficiency and automation at the expense of human judgment and empathy, or one that combines the strengths of humans and machines to deliver justice in a manner that is both efficient and just? The answer to this question will shape the future of the legal profession and our society as a whole.

(The author is a research fellow with the School of Law, Chongqing University.)

