Tuesday, May 5, 2020

Artificial Intelligence and Law

Zrazhevskyi Mykhailo & Tykhenko Dariia, EO-204i, KNEU


What is AI?

To understand the relationship between artificial intelligence and law, let us first define what artificial intelligence is.
There are many ways to answer this question, but one place to begin is to consider the types of problems that AI technology is often used to address. In that spirit, we might describe AI as using technology to automate tasks that “normally require human intelligence.”

This description of AI emphasizes that the technology is often focused upon automating specific types of tasks: those that are thought to involve intelligence when people perform them. A few examples will help illustrate this depiction of AI.
Researchers have successfully applied AI technology to automate some complex activities, including playing chess, translating languages, and driving vehicles. What makes these AI tasks, rather than ordinary automation tasks?
They share a common feature: when people perform these activities, they use various higher-order cognitive processes associated with human intelligence.
For instance, when humans play chess, they employ a range of cognitive capabilities, including reasoning, strategizing, planning, and decision-making. Similarly, when people translate from one language to another, they activate higher-order brain centers for processing symbols, context, language, and meaning.
Finally, when people drive automobiles, they engage a variety of brain systems, including those associated with vision, spatial recognition, situational awareness, movement, and judgment.
In short, when engineers automate an activity that requires cognitive activity when performed by humans, it is common to describe this as an application of AI. This definition, though not fully descriptive of all AI activities, is nonetheless helpful as a working depiction.
Alongside machine learning, the other major branch of AI consists of logical rules and knowledge representation. The goal of this area of AI is to model real-world phenomena or processes in a form that computers can use, typically for the purposes of automation.
Often this involves programmers providing a computer with a series of rules that represent the underlying logic and knowledge of whatever activity the programmers are trying to model and automate.
Because these knowledge rules are deliberately expressed in a form the computer can process, the computer can manipulate them and reason about them deductively.
Knowledge representation has a long and distinguished history in the field of AI research and has contributed to many so-called expert systems. In an expert system, programmers in conjunction with experts in some field, such as medicine, aim to model that area of expertise in computer-understandable form.
Typically, system designers try to translate the knowledge of experts into a series of formal rules and structures that a computer can process. Once created, such a medical-expert system might allow later users to make automated, expert-level diagnoses using the encoded knowledge (e.g., if the patient has symptoms X and Y, the expert system, applying its rules, concludes that medical condition Z is likely).
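To give a rough sense of what such encoded knowledge can look like, here is a minimal sketch of a rule-based diagnostic lookup; the symptoms and conditions are hypothetical placeholders, not real medical knowledge.

```python
# A minimal sketch of diagnostic knowledge encoded as explicit if-then rules.
# Symptoms and conditions are hypothetical placeholders, not medical advice.

RULES = [
    # (required symptoms, suggested condition)
    ({"symptom X", "symptom Y"}, "condition Z"),
    ({"symptom X", "symptom W"}, "condition V"),
]

def diagnose(observed_symptoms):
    """Return every condition whose required symptoms are all present."""
    observed = set(observed_symptoms)
    return [condition for required, condition in RULES if required <= observed]

print(diagnose(["symptom X", "symptom Y"]))  # -> ['condition Z']
```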
A good example of a legal-expert system comes from tax-preparation software such as TurboTax. To create such a system, software developers, in consultation with tax attorneys and other experts in personal income tax law, translate the meaning and logic of tax provisions into a set of comparable formal rules that a computer can process.
Let us get an intuition as to what it actually means to “translate” a law into a computer rule. Imagine that there is a tax law that says that for every dollar of income that somebody makes over $91,000, she will be taxed at a marginal tax rate of 28%.
A programmer can take the logic of this legal provision and translate it into an if-then computer rule that faithfully represents the meaning of the law (e.g., If income > 91,000, then tax rate = 28%).
Once the rule is represented formally, the tax-preparation software can use it to analyze the income reported by the filer and automatically apply the appropriate legal tax rate.
The same can occur with many other translated tax provisions. Although this is an over-simplified example, it illustrates the basic logic underlying the law-to-computer-rule translation process.
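A minimal sketch of that if-then rule in code might look like the following; it uses only the hypothetical $91,000 threshold and 28% rate from the example above and ignores everything else a real tax calculation would involve.

```python
# The over-simplified tax rule from the example above, translated into code.
# The threshold and rate are hypothetical; real tax law involves many
# brackets and adjustments that are omitted here.

THRESHOLD = 91_000
MARGINAL_RATE = 0.28

def marginal_tax(income):
    """Apply the 28% rate to every dollar of income above $91,000."""
    if income > THRESHOLD:
        return (income - THRESHOLD) * MARGINAL_RATE
    return 0.0

print(round(marginal_tax(100_000), 2))  # 9,000 taxable dollars * 0.28 -> 2520.0
```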
More broadly, these knowledge, logic, and rules-based AI methods involve a top-down approach to computation. This means that programmers must, ahead of time, explicitly provide the computer with all of its operating and decision rules.
This is in contrast to the bottom-up machine-learning approach, in which the algorithm learns its operating rules from data on its own.

Rules-based knowledge-representation systems

There are a few points to note about these rules-based knowledge-representation systems. Although they have not made as large an impact as machine-learning systems, there is a power to this explicit, top-down knowledge representation.
Once rules are represented in a computer-programming language, a computer can manipulate these rules in deductive chains to come to nonobvious conclusions about the world. These systems can combine facts about the world, using logical rules, to alert users about things that might be too difficult for a person to figure out on her own.
Additionally, knowledge-based AI systems can harness the power of computing to reveal hard-to-detect details, such as contradictions, embedded in systems that a human would not be able to discern. They can also engage in complex chains of computer reasoning that would be too difficult for a human to do.
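To give a flavor of what such deductive chaining can look like, here is a small sketch of forward chaining over if-then rules with a simple contradiction check; the facts and rules are invented purely for illustration.

```python
# A toy forward-chaining engine: repeatedly apply if-then rules to known
# facts until no new conclusions follow, then check for contradictions.
# All facts and rules here are invented for illustration.

facts = {"income_over_91000", "filed_jointly"}

rules = [
    # (premises, conclusion)
    ({"income_over_91000"}, "higher_bracket"),
    ({"higher_bracket", "filed_jointly"}, "schedule_B_required"),
]

# Pairs of facts that cannot both hold at the same time.
contradictions = [("filed_jointly", "filed_single")]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains 'higher_bracket' and 'schedule_B_required'

for a, b in contradictions:
    if a in facts and b in facts:
        print(f"Contradiction detected: {a} and {b}")
```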
Take an example from the tax context. In the course of work, one might keep a separate credit card for business trips. The income tax code often treats business expenses differently from personal expenses. The computer could be programmed with a rule indicating that expenses on that particular credit card should be marked as business expenses.
Having programmed a rule about differential treatment for business expenses, the computer could automatically treat thousands of expenses differently using the tax-treatment rule.
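A sketch of that idea in code might look like the following; the card identifiers and amounts are made up, and a real system would need many more rules and safeguards.

```python
# Apply one simple rule -- expenses on the designated business card are
# business expenses -- across a whole list of expense records.
# Card identifiers and amounts are invented for illustration.

BUSINESS_CARD = "4111-xxxx"

expenses = [
    {"card": "4111-xxxx", "amount": 250.00, "description": "hotel"},
    {"card": "5500-xxxx", "amount": 40.00, "description": "groceries"},
    {"card": "4111-xxxx", "amount": 89.50, "description": "client dinner"},
]

def classify(expense):
    """Tag each expense as business or personal based on the card rule."""
    expense["category"] = "business" if expense["card"] == BUSINESS_CARD else "personal"
    return expense

classified = [classify(e) for e in expenses]
business_total = sum(e["amount"] for e in classified if e["category"] == "business")
print(business_total)  # 339.5
```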
The point is that knowledge and rules-based AI systems, in the right setting, can be very powerful tools. Knowledge-based expert systems and other policy-management systems are very widespread in the business world.
Having described AI generally, it is time to turn to how AI is being used in law. At its heart, “AI and law” involves the application of computer and mathematical techniques to make law more understandable, manageable, useful, accessible, or predictable.
With that conception, one might trace the origins of similar ideas back to Gottfried Leibniz in the 1600s. Leibniz, the mathematician who famously co-invented calculus, was also trained as a lawyer and was one of the earliest to investigate how mathematical formalisms might improve the law.

History of AI within law

More recently, since the mid-twentieth century, there has been an active history of researchers taking ideas from computer science and AI and applying them to law.
This history of AI within law roughly parallels the wider arc of AI research more generally. Like AI more broadly, AI applied to law largely began focused upon knowledge-representation and rules-based legal systems.
Most of the research arose from university laboratories, with much of the activity based in Europe. From the 1970s through 1990s, many of the early AI-and-law projects focused upon formally modeling legal argument in computer-processable form and computationally modeling legislation and legal rules.
Since at least 1987, the International Conference on Artificial Intelligence and Law (ICAIL) has been held regularly, showcasing applications of AI techniques to law. But since about 2000, AI and law has turned away from knowledge-representation techniques toward machine-learning-based approaches, like the AI field more generally.
Many of the more recent applications in AI and law have come from legal-technology startup companies using machine learning to make the law more efficient or effective in various ways.
Other more advanced breakthroughs in AI and law have come from interdisciplinary university law-engineering research centers, such as Stanford University’s CodeX Center for Legal Informatics.
As a result of this private- and university-sector research, AI-enabled computer systems have slowly begun to make their way into various facets of the legal system.
One useful way of thinking about the use of AI within law today is to conceptually divide it into three categories of AI users: the administrators of law (i.e., those who create and apply the law, including government officials such as judges, legislators, administrative officials, and police), the practitioners of law (i.e., those who use AI in legal practice, primarily attorneys), and those who are governed by law (i.e., the people, businesses, and organizations that are governed by the law and use the law to achieve their ends).
Finally, there are a few important contemporary issues in AI and law worth highlighting. Although a fuller treatment is beyond the scope of this article, it is important to bring them to the attention of the reader.

Potential for bias in algorithmic decision-making

One of the most important contemporary issues has to do with the potential for bias in algorithmic decision-making. If government officials are using machine learning or other AI models to make important decisions that affect people’s lives or liberties (e.g., criminal sentencing), it is important to determine whether the underlying computer models are treating people fairly and equally.
Multiple critics have raised the possibility that computer models that learn patterns from data may be subtly biased against certain groups based upon biases embedded in that data.
For instance, imagine that software that uses machine learning to predict the risk of reoffending builds its predictive model from past police arrest records. Imagine further that police activity in a certain area is itself biased—for instance, perhaps the police tend to arrest certain ethnic minority groups at a disproportionately higher rate than nonminorities for the same offense.
If that is the case, then the biased police activity will be subtly embedded in the recorded police arrest data. In turn, any machine-learning system that learns patterns from this data may subtly encode these biases.
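The mechanism can be illustrated with a deliberately toy calculation (all numbers invented): if two groups offend at the same true rate but one is arrested more often, a model that treats arrest records as ground truth will assign that group a higher apparent risk.

```python
# Toy illustration (invented numbers, not a real study): identical true
# offense rates, different arrest rates -> different "risk" in the data.

population = 10_000                     # people per group
true_offense_rate = 0.10                # the same for both groups
arrest_prob = {"A": 0.60, "B": 0.30}    # group A is policed more heavily

apparent_risk = {}
for group in ("A", "B"):
    offenders = population * true_offense_rate
    arrests = offenders * arrest_prob[group]
    # A model trained on arrest records would effectively estimate this:
    apparent_risk[group] = arrests / population

print(apparent_risk)  # {'A': 0.06, 'B': 0.03} -- the data differ, not the behavior
```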

Interpretability of AI systems and transparency

Another contemporary issue with AI and the law has to do with the interpretability of AI systems and transparency around how AI systems are making their decisions.
Often AI systems are designed in such a way that the underlying mechanism is not interpretable even by the programmers who created them. Various critics have raised concerns that AI systems that engage in decision-making should be explainable, interpretable, or at least transparent.
Others have advocated that the systems themselves be required to produce automated explanations of why they came to the decisions that they did.
A final issue has to do with potential problems with deference to automated, computerized decision-making as AI becomes more ingrained in government administration.
There is a concern that automated, AI-enhanced decisions may disproportionately appear to be more neutral, objective, and accurate than they actually are. For instance, if a judge receives an automated report indicating that a defendant has an 80.2% chance of reoffending according to the machine-learning model, the prediction carries an aura of mechanical infallibility and neutrality.
The concern is that judges (and other government officials) may inappropriately defer to this false precision, failing to take into account the limits of the model, the uncertainties involved, the subjective decisions that went into the model's creation, and the fact that, even if the model is accurate, roughly two times out of ten such a defendant will not reoffend.
The goal of this essay has been to provide a realistic, demystified view of AI and law. As it currently stands, AI is neither magic, nor is it intelligent in the human-cognitive sense of the word.
Rather, today’s AI technology is able to produce intelligent results without intelligence by harnessing patterns, rules, and heuristic proxies that allow it to make useful decisions in certain, narrow contexts.
However, current AI technology has its limitations.
Notably, it is not very good at dealing with abstractions, understanding meaning, transferring knowledge from one activity to another, or handling completely unstructured or open-ended tasks.
Rather, most tasks where AI has proven successful (e.g., playing chess, detecting credit-card fraud, detecting tumors) involve highly structured areas where there are clear right or wrong answers and strong underlying patterns that can be algorithmically detected.
Knowing the strengths and limits of current AI technology is crucial to the understanding of AI within law. It helps us have a realistic understanding of where AI is likely to impact the practice and administration of law and, just as importantly, where it is not.
