Monday, March 23, 2020

Artificial Intelligence and Law

Anisia Tatarintseva, 110i, KNEU


Introduction

Of all the jobs robots might one day take over, there are some that have always seemed off limits. For example, artificial intelligence could never gain the creativity to be an artist or a musician (except it has), learn the human emotions necessary to write comedy (sorry, that too), or possess the analytical thinking you need to become a lawyer. Well, every one of those has proven to be false. In fact, AI has been helping humans with their minor legal inconveniences for several years now. It’s only going to get more advanced from here.

Right now, AI law assistants are centered mostly on simple, straightforward legal matters, the kinds that require some complicated paperwork but don’t exactly delve into the essence of human ethics. But just because the tasks are simple doesn’t mean these bots aren’t changing the world.
Hiring a lawyer is expensive; battling an issue out in court, even more so. In a perfect world, many legal matters, from renter’s rights to immigration to bankruptcy, could be handled with 30 minutes of paperwork and perhaps a minor fee. But most people need to pay for a lawyer just to get that far. Even divorces don’t need to be as expensive as they are now.

AI & Law: CMLRs and CMLAs

The AI field and the law are on the verge of a revolution that began with text analytics programs such as IBM's Watson and Debater and the open-source information management architectures on which they are based. Programs such as Watson and Debater will not perform the legal reasoning themselves. They may answer legal questions in a superficial sense, but they cannot explain their answers or make legal arguments. The legal reasoning will be done by computational models developed by AI & Law researchers. These models will be able to generate arguments for and against particular outcomes of problems presented as texts, predict a problem's outcome, and explain their predictions with reasons that legal professionals recognize and can evaluate for themselves. The result will be a new kind of legal application that supports cognitive computing: a cooperative activity between a person and a computer in which each performs the intellectual tasks it does best.
Many AI & Law studies have focused on developing computational models of legal reasoning (CMLRs) that can produce legal arguments and use them to predict the outcomes of court disputes. A CMLR is a computer program that implements a process embodying attributes of human legal reasoning. That process may include analyzing a situation and answering a legal question, predicting an outcome, or making a legal argument. A subset of CMLRs implement a process of legal argumentation as part of their reasoning; these are called computational models of legal argument (CMLAs).
CMLRs and CMLAs break a complex human intellectual task, such as estimating the settlement value of a product liability claim or analyzing an offer-and-acceptance problem from a first-year contracts course, into a set of computational steps or algorithms. The models specify how a problem is input and what type of legal result is output; in between, their designers build a computational mechanism that applies knowledge of the legal domain to carry out the steps and transform inputs into outputs. In developing these models, researchers confront questions such as how to represent what a legal rule means so that a program can decide whether it applies to a situation, how to distinguish "hard" from "easy" legal questions, and what roles cases and values play in interpreting legal rules. Their answers are not philosophical but scientific: their programs do not merely model legal reasoning tasks, they actually perform them, and the researchers run experiments to assess how well the programs work.
Until now, the basic legal knowledge used in these computational models has had to be extracted manually from legal sources, that is, from the cases, statutes, regulations, contracts and other texts that lawyers actually use. Legal professionals have had to read the texts and represent the relevant parts of their content in a form the models could use. The inability to connect their CMLRs directly to legal texts has limited researchers' ability to apply their programs in real legal information retrieval, prediction, and decision making.
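As a rough illustration of how a CMLR might decompose such a task into computational steps, the sketch below represents a dispute as a set of stereotypical fact patterns ("factors"), cites precedents that share those factors as arguments for and against, and predicts the outcome from the balance of citable precedents. The factor names, precedents and outcomes are invented for illustration and are not drawn from any real system.

```python
# A minimal, hypothetical sketch of a factor-based CMLR/CMLA.
# Cases are represented as sets of factors; precedents sharing factors with the
# problem are cited as arguments, and the outcome is predicted from their balance.

# Tiny illustrative precedent base (invented names, factors and outcomes).
PRECEDENTS = [
    {"name": "Alpha v. Beta",
     "factors": {"security-measures-taken", "agreement-not-to-disclose"},
     "outcome": "P"},   # plaintiff won
    {"name": "Gamma v. Delta",
     "factors": {"disclosed-in-negotiations", "info-reverse-engineerable"},
     "outcome": "D"},   # defendant won
]

def arguments(problem_factors):
    """Cite each precedent that shares factors with the problem as an argument for its outcome."""
    args = []
    for case in PRECEDENTS:
        shared = problem_factors & case["factors"]
        if shared:
            args.append((case["outcome"], case["name"], shared))
    return args

def predict(problem_factors):
    """Predict the side with more citable precedents and return the supporting arguments."""
    args = arguments(problem_factors)
    score = {"P": 0, "D": 0}
    for outcome, _, _ in args:
        score[outcome] += 1
    winner = "P" if score["P"] >= score["D"] else "D"
    return winner, args

if __name__ == "__main__":
    problem = {"security-measures-taken", "info-reverse-engineerable"}
    winner, args = predict(problem)
    for outcome, name, shared in args:
        print(f"Argument for {outcome}: {name}, shared factors {sorted(shared)}")
    print("Predicted outcome:", winner)
```

A real CMLR would of course use a far richer representation of factors, values and argument schemes, but the division into an input representation, reasoning steps and an explainable output is the same.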

Problems in legal practice

The technology of legal practice is changing rapidly. Developments in text analytics offer new process models and tools for legal services, promising greater efficiency and possibly wider public access.
These changes create challenges and opportunities for young lawyers and informatics professionals, but the future of legal practice has proven hard to predict. Reduced hiring by law firms has meant fewer applicants to law schools, as prospective students weigh their chances of finding paid work against the burden of student loans and look elsewhere. There is uncertainty about which legal tasks technology can take over, and it is also unclear what law students need to know about technology. Law firms have long asked law schools for "practice-ready" graduates, but even the firms seem unsure what technology they will need, whether to develop it in-house or rely on outside vendors, and what skills and knowledge would best prepare law students to evaluate and use new technologies.
Focusing on the use of process and technology to develop cost-effective ways of delivering legal solutions, many professors see commoditization as the culmination of the evolution of legal work.
The end product of legal commoditization is a software service or product that anyone can purchase, download and use to solve a legal problem without involving a lawyer, in other words, a kind of computerized legal application.

Legal Expert System

Two concepts, process engineering and commoditization, raise interesting questions. If legal process engineering rethinks how to deliver "very cheap and very high quality" solutions, who or what will be responsible for adapting those solutions to a specific client's problem? If commoditization means "do it yourself", is the client simply left on his or her own? In other words, what support does the legal application provide, and in particular, can it perform some degree of customization?
As a rule, legal expert systems deal with narrow areas of law, but they have enough "knowledge and experience" in that narrow area to ask the client-user appropriate questions about the problem, adjust their analysis according to the user's answers, and explain their reasoning. Their "knowledge" includes the heuristics that experienced practitioners use when applying the law to specific facts. These heuristics are rules of thumb that are often useful but do not guarantee the right outcome. The rules are represented in declarative form, with conditions and conclusions, and they are acquired mainly by hand: by interviewing experts, presenting them with problem scenarios, inviting them to solve the problems, and asking what rules they applied in analyzing each problem and developing a solution.
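A minimal sketch of this architecture, with invented rules about a tenant's security deposit standing in for a real knowledge base, might look like the following: rules are stated declaratively with conditions and a conclusion, the program chains over them using the facts a user's answers would establish, and it explains which rules fired and why.

```python
# A minimal expert-system sketch: invented heuristic rules in declarative form
# (conditions -> conclusion), forward-chained over the facts a user's answers
# would establish, with an explanation of which rules fired.

RULES = [
    {"if": ["tenant", "deposit_withheld", "no_itemized_deductions"],
     "then": "likely_entitled_to_deposit_return",
     "because": "A landlord who withholds a deposit without itemized deductions usually must return it."},
    {"if": ["likely_entitled_to_deposit_return", "written_demand_sent"],
     "then": "may_file_small_claims",
     "because": "After a written demand, a small-claims filing is typically the next step."},
]

def infer(facts):
    """Forward-chain over RULES until no new conclusions follow; record why each fired."""
    facts = set(facts)
    explanations = []
    changed = True
    while changed:
        changed = False
        for rule in RULES:
            if rule["then"] not in facts and all(c in facts for c in rule["if"]):
                facts.add(rule["then"])
                explanations.append((rule["then"], rule["because"]))
                changed = True
    return facts, explanations

if __name__ == "__main__":
    # Facts an interactive system would gather by questioning the user.
    answers = ["tenant", "deposit_withheld", "no_itemized_deductions", "written_demand_sent"]
    _, explanations = infer(answers)
    for conclusion, reason in explanations:
        print(f"Conclusion: {conclusion}\n  Reason: {reason}")
```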

Argument Retrieval and cognitive computing

Unlike expert systems, two alternative paradigms, argument retrieval (AR) and cognitive computing, do not claim to solve users' legal problems on their own. Instead, the computer programs extract semantic information from legal texts and use it to help humans solve legal problems.
Conceptual information retrieval is certainly not new. AI has long tried to identify and extract semantic elements from text, such as concepts and their relationships, with the goal of making information retrieval more intelligent by using that semantic information to draw conclusions about the retrieved texts.
Today, conceptual legal information retrieval can be defined as the automatic retrieval of relevant textual legal information based on matching the concepts and their roles in documents against the concepts and roles needed to solve a user's legal problem. It focuses on modeling what information human users need to solve their problem, such as the legal argument the user is trying to make, and the concepts and roles involved in making it.
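The sketch below illustrates the idea in miniature, assuming documents have already been annotated with (concept, role) pairs; the annotations, document names and query are invented. Retrieval then reduces to ranking documents by how well their concept-role annotations match those the user's argument requires.

```python
# A minimal sketch of conceptual retrieval: each document carries invented
# (concept, role) annotations, and retrieval ranks documents by how many of
# the query's concept-role pairs they share.

DOCUMENTS = {
    "Case A": {("duty-of-care", "holding"), ("foreseeability", "reasoning")},
    "Case B": {("duty-of-care", "dictum"), ("assumption-of-risk", "holding")},
    "Statute S": {("negligence", "definition")},
}

def retrieve(query, docs=DOCUMENTS):
    """Return documents sharing at least one (concept, role) pair, best matches first."""
    ranked = sorted(docs.items(), key=lambda item: len(item[1] & query), reverse=True)
    return [(name, len(annotations & query)) for name, annotations in ranked if annotations & query]

if __name__ == "__main__":
    # The user needs a holding on duty of care and reasoning about foreseeability.
    query = {("duty-of-care", "holding"), ("foreseeability", "reasoning")}
    for name, matches in retrieve(query):
        print(f"{name}: {matches} matching concept-role pairs")
```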
Despite its name, cognitive computing does not mean building artificial intelligence systems that "think" or perform cognitive tasks the way people do. The operational unit of cognitive computing is not a computer or a person, but a cooperating team of computer and person solving a problem together.
In the cognitive computing paradigm, human users remain ultimately responsible for tailoring a solution with the help of a legal application, but the technology behind commoditized legal services should alert people to the need for customization and support their access to the legal information relevant to building a solution. That is, the legal application will not only select, order, highlight and summarize information in ways relevant to the user's particular problem, but also let users explore and interact with that information in ways that were not previously possible. For this approach to succeed, the technology does not need to solve the user's problem; it will not be an expert legal system. It does, however, need some "understanding" of the information it holds and of that information's relevance to solving human problems, and it must provide easy access to the information at the right time and in the right context. In this respect, AR is consistent with cognitive computing: responsibility for finding and using resources to solve the user's problem is divided between the intellectual tasks a computer performs best and those that call on the experience and knowledge of the human user.

Modeling Statutory Reasoning

The law is a domain of rules, and many of those rules are set out in statutes and regulations. Since rules can be expressed logically and computers can reason deductively, computational modeling of statutory reasoning might seem simple: enter a factual situation into a computer program, let the program determine which rules are relevant and whether their conditions are satisfied, and have it explain the answer in terms of the rules that did or did not apply. In fact, building such a model is a complicated task. Statutory rules are often vague, syntactically and semantically ambiguous, and subject to structural indeterminacy. If a computer program is to apply a statutory rule, which logical interpretation should it adopt, how should it deal with the vagueness and open texture of statutory terms, and how can it determine whether an exception applies?
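The naive deductive picture looks roughly like the following sketch, in which invented citizenship rules are checked against a fact situation and the program reports which rules did or did not apply; the difficulties discussed below are precisely what this picture leaves out.

```python
# A minimal sketch of naive deductive rule application: each invented rule lists
# the conditions it requires; the program checks them against a fact situation
# and explains which rules did or did not apply.

RULES = {
    "citizenship-by-birth": {
        "conditions": ["born_in_territory", "parent_is_citizen"],
        "conclusion": "is_citizen",
    },
    "citizenship-by-registration": {
        "conditions": ["registered", "resident_five_years"],
        "conclusion": "is_citizen",
    },
}

def apply_rules(facts):
    """Return the conclusions that follow from the facts, plus a per-rule explanation."""
    conclusions, report = set(), []
    for name, rule in RULES.items():
        missing = [c for c in rule["conditions"] if c not in facts]
        if missing:
            report.append(f"{name}: not applied (missing {missing})")
        else:
            conclusions.add(rule["conclusion"])
            report.append(f"{name}: applied, concluding {rule['conclusion']}")
    return conclusions, report

if __name__ == "__main__":
    facts = {"born_in_territory", "parent_is_citizen", "registered"}
    conclusions, report = apply_rules(facts)
    print("\n".join(report))
    print("Conclusions:", conclusions or "none")
```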
Classical logical models can break down in the face of legal indeterminacy, a common feature of legal reasoning: even when lawyers agree on the facts and the rules governing a particular issue, they can still make legally sound arguments both for and against a particular proposition.
Nevertheless, modeling statutory reasoning remains essential. Several AI & Law approaches that address or work around these issues are considered: a normalization process for systematically developing multiple logical versions of a statute, logical implementations for applying statutes deductively, and, more recently, business process compliance models and network models of statutes, both potentially useful for cognitive computing.

Legislative drafting difficulties

Statutes and regulations are complex legal texts. Often an intricate maze of provisions written in technical legal jargon determines what is legal and what is not. With their networks of cross-references and exceptions, statutes and regulations are frequently too complex for an untrained citizen to understand. Even legal professionals may find it difficult simply to identify all and only the provisions relevant to analyzing a given issue, problem or topic.
The AI & Law field has long studied how to build computer programs that can reason logically with legal rules drawn from statutes and regulations. It has made progress and shown some successes, but it has also come to appreciate how hard the problem is. A number of obstacles have been identified that must be removed or worked around in any attempt to build a program capable of applying statutory rules. These include vagueness and two kinds of ambiguity in statutory rules, the complexity of statutory interpretation, the need to accommodate conflicting but reasonable arguments about what a rule means, and the practical problems of maintaining a logical representation of a statute alongside its text. Of the two kinds of ambiguity that complicate computational modeling of statutory interpretation, the better known is semantic ambiguity and its cousin, vagueness: the concepts and terms the legislature chooses may not be defined well enough to determine whether or how they apply. The second kind, syntactic ambiguity, may be less familiar: logical connectives used by legislators, such as "if", "and", "or" and "unless", can introduce multiple interpretations of even simple statutes.

Semantic ambiguity

Semantic ambiguity and vagueness are concessions to human, social and political reality. A legislature cannot draft language detailed enough to anticipate every situation it may wish to regulate. Instead, it uses more general terminology in its enactments and relies on the courts to interpret and apply abstract terms and concepts to new factual situations. Deliberately drafting key provisions in a semantically ambiguous way can also ease legislative compromise: if the legislature tried to use specific, detailed language, reaching political agreement might become harder. And even where the legislative purpose and the statutory language are clear, opposing parties can usually make reasonable but conflicting arguments about what a rule's terms mean.

Syntactic ambiguity

The other kind of ambiguity, syntactic ambiguity, stems from a different reality: statutory language does not always follow a single, consistent logical structure. In part this is a property of natural language text. Unlike mathematical and logical formalisms and computer code, prose does not explicitly mark the scope of logical connectives such as "if", "and", "or" and "unless". A statute's syntax may also be unclear because of the language used to express exceptions and cross-references: exceptions to a provision may be stated explicitly or only implicitly, and may appear not only within the provision itself but also in other provisions or even in other legislation. In propositional logic, symbols stand for whole sentences; with logical operators and connectives, sentences can be assembled into complex statements whose truth depends solely on whether their component sentences are true or false.
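The sketch below makes the scope problem concrete: a single, invented statutory sentence, "the benefit applies if the claimant is disabled and unemployed or retired", supports two logical readings that disagree on the same facts.

```python
# A minimal sketch of syntactic ambiguity: the invented sentence "the benefit
# applies if the claimant is disabled and unemployed or retired" admits two
# logical readings, and they disagree on some fact situations.
from itertools import product

def reading_1(disabled, unemployed, retired):
    """Reading 1: disabled AND (unemployed OR retired)."""
    return disabled and (unemployed or retired)

def reading_2(disabled, unemployed, retired):
    """Reading 2: (disabled AND unemployed) OR retired."""
    return (disabled and unemployed) or retired

if __name__ == "__main__":
    for facts in product([True, False], repeat=3):
        if reading_1(*facts) != reading_2(*facts):
            print("Readings disagree for (disabled, unemployed, retired) =", facts)
```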

Problems of Translating Statutes into Programs

Reformulation
Formalizing an extensive statute is a trial-and-error process. One often encounters a new context in which the earlier formulation of a rule concept is inadequate and has to be reformulated to take into account additional restrictions imposed by later rules in the statute. Researchers have had to change existing rules, conditions or parameters, or add new ones, to accommodate the new restrictions.
Negation
To implement certain rules from various statutes, it would be desirable to use rules that state a negative conclusion, such as "x was not a British citizen at the time of y's birth" or "x was not settled in the UK at the time of y's birth". Such negative conclusions require the ability to handle ordinary, classical negation.
Researchers have identified statutory language for which using negation as failure would be prohibitively difficult or would lead the program to conclusions contrary to what the legislature intended.
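The following sketch, with invented facts, illustrates the difference: negation as failure treats anyone whose citizenship cannot be proven as a non-citizen, whereas classical negation requires an explicit negative fact and otherwise leaves the question open.

```python
# A minimal sketch contrasting negation as failure with classical negation,
# using invented facts. Negation as failure concludes "not a citizen" whenever
# citizenship cannot be proven; classical negation needs an explicit negative fact.

KNOWN_CITIZENS = {"alice"}        # positive facts the program happens to have
KNOWN_NON_CITIZENS = {"carol"}    # explicitly asserted negative facts

def not_citizen_naf(person):
    """Negation as failure: true whenever citizenship cannot be proven."""
    return person not in KNOWN_CITIZENS

def not_citizen_classical(person):
    """Classical negation: True/False only when explicitly known, otherwise None (unknown)."""
    if person in KNOWN_NON_CITIZENS:
        return True
    if person in KNOWN_CITIZENS:
        return False
    return None

if __name__ == "__main__":
    # "bob" is simply missing from the facts: negation as failure labels him a
    # non-citizen, while classical negation leaves the question open.
    print("Negation as failure says bob is not a citizen:", not_citizen_naf("bob"))
    print("Classical negation says:", not_citizen_classical("bob"))
```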
Counterfactual conditions
Statutes also sometimes use counterfactual conditions, such as "would have but for his having died or ceased to be a citizen . . . [by] renunciation." The legislature may use such wording as a drafting shorthand: rather than listing a complex set of conditions, drafters refer to another part of the legislation from which those conditions can be determined. Researchers have created special rules to deal with such counterfactual conditions, writing additional alternative rules: one set describing, for example, the conditions for acquiring citizenship at commencement for persons who were alive on that date, and another set for persons who had died before that date but otherwise met all the other conditions necessary before death.
The researchers carefully analyzed the statute to hypothesize which conditions could reasonably be applied in the counterfactual state. This increased the number of rules that had to be formalized. Presumably, the drafters used a counterfactual condition to avoid the tedious task of spelling those conditions out; on the other hand, it is always possible that they intended to leave the question open.
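A sketch of this strategy, with invented dates and conditions, might split the counterfactual provision into two alternative rules, one for persons alive at commencement and one for persons who died earlier but had met the conditions before death.

```python
# A minimal sketch splitting an invented counterfactual provision into two
# alternative rule sets: one for persons alive at commencement, one for persons
# who died before commencement but had met the conditions before death.
from datetime import date

COMMENCEMENT = date(1983, 1, 1)  # hypothetical commencement date

def qualifies_alive(person):
    """Rule set 1: alive at commencement and meeting the substantive conditions."""
    alive = person["died"] is None or person["died"] >= COMMENCEMENT
    return alive and person["meets_conditions"]

def qualifies_deceased(person):
    """Rule set 2: died before commencement but met all other conditions before death."""
    died_before = person["died"] is not None and person["died"] < COMMENCEMENT
    return died_before and person["met_conditions_before_death"]

def qualifies(person):
    """The counterfactual provision is captured by the disjunction of the two rule sets."""
    return qualifies_alive(person) or qualifies_deceased(person)

if __name__ == "__main__":
    parent = {"died": date(1975, 6, 1),
              "meets_conditions": False,
              "met_conditions_before_death": True}
    print("Qualifies under the counterfactual reading:", qualifies(parent))
```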
Open-textured conditions
Finally, the legislature used open-textured predicates in the statute that it did not define. The law contains vague phrases such as "being of good character" or "having a reasonable excuse".
The researchers adopted a simple approach to such vague terms: the system simply asks the user whether the term holds in the case at hand. Alternatively, it could be programmed on the assumption that a given vague concept always (or never) applies and qualify its answer accordingly, for example, "Peter is a citizen if he is of good character." The researchers note that empirical rules drawn from analyzing past cases in which courts applied the terms can also help reduce their vagueness.
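A sketch of this approach, with an invented "good character" heuristic, might either take the user's answer at face value or fall back on an empirical rule and flag the answer as resting on that assumption.

```python
# A minimal sketch of handling an open-textured term such as "good character":
# either take the user's answer, or fall back on an invented empirical rule
# and qualify the answer as resting on that assumption.

def good_character(person, user_answer=None):
    """Return (answer, basis). `user_answer` simulates asking the user directly."""
    if user_answer is not None:
        return user_answer, "answer supplied by the user"
    # Hypothetical empirical rule: no relevant convictions in the last ten years.
    answer = person.get("years_since_conviction", float("inf")) >= 10
    return answer, "assumed from an empirical rule drawn from past cases"

if __name__ == "__main__":
    peter = {"years_since_conviction": 12}
    answer, basis = good_character(peter)
    print(f"Peter is a citizen if he is of good character: {answer} ({basis})")
```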
The problems of resolving syntactic ambiguity, reformulation, negation, counterfactual conditions and semantic ambiguity are all problems of interpreting natural language text. They potentially affect any attempt to translate legislation into workable computer code, whether people do the translation manually or programs extract the rules automatically from statutory texts.

Conclusion

The most widely recognized advantage of AI tools in legal practice is increased efficiency. AI software uses algorithms that speed up document processing and flag errors and other problems. It is unclear how the transition to legal artificial intelligence will unfold. On the one hand, large law firms can be expected to drive initial adoption, since they are best able to pay for reliable AI-based tools and their integration. On the other hand, newer firms are more likely to start with a lean, automated, efficiency-oriented approach because they do not carry the massive existing overheads of larger firms.
Meanwhile, a number of law firms are already using ROSS, an AI system built on IBM Watson, to handle the mundane research and background tasks necessary for court cases, tasks that would usually be delegated to a junior attorney. While these technologies are for now simply making the legal process more streamlined and automated, we could one day see a bot trained to argue in the courtroom.
