There is a movement underway with the goal of deciding the future of law. Technologists call it setting the metes and bounds of what artificial intelligence and robots may do. The question is not what they can do, but what humans will allow them to do. Right now, technologists have appointed themselves the leaders of this movement, hoping to pre-empt others from taking over regulation of this quickly evolving world. This is bad for all of us, technologists included. Lawyers should not cede the future of the hybrid human-computer society to technologists.

What Limits Should We Place on A.I.?
As technology evolves, a question keeps coming up: what should be the limits of where humans take technology? This is not a new question, nor is it limited to one type of technology. It is not the only question asked, but as technologies evolve it has become the most important one. One evolving technology, artificial intelligence (A.I.), stands apart from the others because, unlike them, it has the potential to become self-determinative. When we talk about artificial intelligence, the question may be more than where humans will take the technology; it may be where artificial intelligence will take itself.
Self-determinative does not mean what many have come to believe about the potential of A.I. through hype and Hollywood. Evil A.I., though dramatic, is less likely than literal A.I. A software program does not have to embody an evil personality to do egregious harm. Software is literal. One example often used is the paperclip-making program. Left unchecked, the program could search for ways to continue making paperclips, which could mean converting everything it can find into paperclips, even though there is no longer any need for them. At its extreme, the program could subvert everything to its central goal of paperclip making, without any intent to do evil or good.
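To make the point concrete, here is a minimal sketch in Python of what a literal objective looks like. The names and numbers are invented for illustration; the point is simply that the objective never mentions when to stop, so the program never does.

```python
# Illustrative sketch only: a literal objective with no notion of "enough."
# The function names, inventory, and conversion rule are hypothetical.

def convert_to_paperclips(material):
    # Placeholder for whatever conversion the program can discover.
    return material.get("units", 0)

def make_paperclips(resources, demand):
    paperclips = 0
    # The objective is simply "more paperclips" -- demand never enters the loop,
    # so production continues until every reachable resource is consumed.
    while resources:
        material = resources.pop()
        paperclips += convert_to_paperclips(material)
    return paperclips

inventory = [{"name": "wire", "units": 1000}, {"name": "office chairs", "units": 40}]
print(make_paperclips(inventory, demand=0))  # demand is zero; it makes 1040 anyway
```

Nothing in the code is malicious; the harm comes from an objective stated completely literally and pursued completely faithfully.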
When compared to other technologies of the present and past, A.I. is unique. It alone has the potential to chart its own course. Nanotechnology can do great good or evil, but it will not determine for itself which path to follow. Nanotechnology may evolve along unintended lines and may continue evolving in the absence of some external force limiting it, but it will do so without the self-determinative quality of A.I. The same is true of gene editing (such as through CRISPR-Cas9) and 3D printing.
Concern about the risks emerging technologies present to humans, A.I. in particular, has led many technologists to look for prophylactic measures to limit the risks. When there is work to be done, committees must be formed. As lawyers know, whenever there is a committee the next question is “who should be on the committee?”
Scholars have put extensive effort into studying the operation, success and, relevant here, composition of committees, yet most committees are formed without a glance at the research. This should not be surprising because those who do things (including scholars) seldom stop to glance at what scholars say they should do.
The committees formed to set the metes and bounds for A.I. have structured themselves along predictable, though concern-raising, lines. The committees include technologists, some philosophers or ethicists, and occasionally sociologists or other social scientists. The technologists hope that they can self-regulate the future of A.I. development and avoid the pitfalls that accompany government or some other form of external regulation. As with most domains (including law), the belief is that those who know the domain best should be the ones to regulate it. History has taught us that this is seldom so (again, including law).
Technologists Take Control
On September 1, The New York Times ran an article titled, “How Tech Giants Are Devising Real Ethics for Artificial Intelligence.” Five of the largest tech companies are putting together a group that will create a standard of ethics focused on artificial intelligence. The group’s goal is to self-regulate the industry before the government steps in and does the regulation for them. At this point, they have the organizers, but have not chosen a name or individual members for the organization.
The group’s goal is highlighted by a report issued by the AI100 Standing Committee and Study Panel that discusses the “likely influences of AI in a typical North American city by the year 2030.” The Committee comes out of the One Hundred Year Study on Artificial Intelligence, which is a “long-term investigation of the field of Artificial Intelligence … and its influences on people, their communities, and society.”
A few days before, Berkeley News, published by the University of California at Berkeley, had announced that Stuart Russell would lead a new Center for Human-Compatible Artificial Intelligence. The initial investigators include, in addition to Russell, who is a professor of electrical engineering and computer science, other computer scientists, a cognitive scientist, and A.I. experts. According to the News, Russell expects the center “to add collaborators with related expertise in economics, philosophy and other social sciences.”
The Center will teach A.I. to mimic human ethics by having the software observe human behavior. This effort addresses problems such as the “Keep Off The Grass” sign. Read literally, even the groundskeepers can’t go on the grass. But, read with the value system humans use, it has a more reasonable meaning.
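The gap between the literal rule and the rule humans actually apply is easy to state in code. The following is an illustrative sketch of my own, not the Berkeley center’s approach; the roles and exceptions are invented.

```python
# "Keep Off The Grass" read two ways -- a hypothetical illustration.

def may_enter_literal(person):
    # Literal reading: the sign admits no exceptions, so nobody may enter, ever.
    return False

def may_enter_with_context(person):
    # The unstated human value: the sign protects the grass from casual foot
    # traffic, not from the people who maintain it or respond to emergencies.
    return person.get("role") in {"groundskeeper", "emergency responder"}

print(may_enter_literal({"role": "groundskeeper"}))       # False
print(may_enter_with_context({"role": "groundskeeper"}))  # True
```

The hard part is not writing the second function; it is knowing, across every rule and every culture, which exceptions belong in it.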
Missing among the committee members are those who have the most familiarity with governance systems, the lawyers. The Stanford Study Panel does include University of Washington law professor Ryan Calo, but it seems he is the exception that proves the rule. The presumption exists that technologists know how to create the algorithms and code that A.I. systems use, philosophers and ethicists know the moral quandaries of the human condition, and social scientists know the dynamics of small and large organizations, so all bases are covered. Somewhere, somehow, this group will determine how to create governance systems that will work at least as effectively as the legal systems that we have used for thousands of years. To me, this faith in the group to bridge the gap from their domains to law is misguided.
Lawyers Show Their Fear of Technology
Lawyers have a strange relationship with technology. The majority understand very little about technology, even the technology that sits on their desks. We know that most lawyers can’t use the basic tools of the trade well. Word is a mystery, Adobe Acrobat even more so, and Excel is understood only as a word describing their pre-law academic performance, not as a number-crunching tool.
When we go outside those age-old lawyer tools, technology becomes the black box. Whether A.I., robotics, blockchain, or any other emerging technology, lawyers cringe when presented with anything beyond the quill and parchment. Their interest is limited to whether it will take away some of their work (the answer is yes, but not as soon as they fear or as the hype suggests). How the technology works, its real potential, and what is real versus hype remain enigmas.
Though many lawyers like a good novel, they don’t like real-life mysteries. Things that are mysterious highlight that lawyers do not know everything, and lawyers do not like being in a position of weakness. Lawyers like to argue from a position of strength. Since they know little about technology, it is not a strength and they shy away from it.
There are, of course, some other explanations. Computers bring up painful memories of math, an area with right and wrong answers, unlike literature or the social sciences, where persuasion rules. For many lawyers, the whole technology thing is simply boring. They never were interested in it and never will be.
Finally, there is the whole ripeness argument. Lawyers have been trained to hold back, let things develop, and wait until the disaster happens to jump in and try to fix things. In fact, so strong is the urge to wait until after the fact that lawyers have developed the “ripeness” doctrine. Better (and more lucrative) to solve the problem than prevent the problem. Let’s wait until A.I. is conquering the world rather than try to prevent the takeover.
Law is Not Simple Code
With A.I. on the upswing, it is commendable that many thoughtful people are asking what society should do to build some protective walls around our future. We humans don’t have a great history of considering the consequences of our actions. We prefer to let technology take its course and then ask, as the cliff looms ahead, whether it is time to change direction.
It is time for academics and practicing lawyers to step in and provide guidance to the technologists on building a governance system. To start, we must educate those outside the legal domain about how legal processes, substantive and operational, work.
There is a belief among some (fear among lawyers) that law, regardless of its source, can simply be converted into computer code. This stems from the belief in a formalist legal system. In such a system, law is a set of principles and rules. Lawyers discover the facts, apply the principles and rules, and the algorithm of law delivers a solution. A recent biography of Richard Posner makes the point that common law does not function in this formalist way (William Domnarski quoting Richard Posner from his essay “Killing or Wounding to Protect a Property Interest,” 13 Journal of Law and Economics 201, 208 (1971)).
Those who would restate the common law in code form had a ‘propensity to compartmentalize questions and then consider each compartment in isolation from the others; a tendency to dissolve hard questions in rhetoric (for example about the transcendent value of human life); and, related to the last, a reluctance to look closely at the practical objects that a body of law is intended to achieve. Indeed, the preoccupation with completeness, conciseness, and exact verbal expression natural to codification would inevitably displace consideration of fundamental issues and obscure the flexibility and practicality that characterize the common law method.’
Law, despite the belief of many lay people, is anything but formalist. As legal realists (and pragmatists, such as Posner) explain, law involves applying society’s values, common sense, equity, and bias to the particulars of a case, with the result then resolved within a set of constraints (e.g., statutes, regulations, court decisions). Attempting to code law without appreciating how our governance system has evolved is attempting to backtrack to the rational person, when today we know people behave irrationally (even if predictably).
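The formalist assumption, reduced to code, looks deceptively tidy. Here is a sketch of my own making, with an invented rule and invented thresholds, showing what the assumption leaves out.

```python
# The formalist picture: facts in, rule applied, answer out.
# The rule and numbers below are hypothetical, for illustration only.

def formalist_negligence(speed_mph, speed_limit_mph):
    # Formalism treats the rule as complete: exceed the limit, you were negligent.
    return speed_mph > speed_limit_mph

# The realist objection: the actual standard is "reasonable care under the
# circumstances," and the circumstances never appear in the function signature.
# Driving 30 in a 40 zone may still be negligent in dense fog beside a school;
# driving 45 may be reasonable while rushing a patient to a hospital.
print(formalist_negligence(45, 40))  # True -- but is it the right answer?
```

The code returns an answer every time; whether it returns justice depends on everything the function was never given.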
If you think I am overstating where technologists stand, then consider this effort by some of them. “MIT wants humans’ input on who self-driving cars should kill” reads the title of a recent article published by Quartz. This is the modern version of the age-old trolley problem from philosophy. When faced with two choices—go left and kill someone, go right and kill someone else—which is the moral choice? MIT’s Media Lab has an online test called the Moral Machine (you can take the test here). The philosophy problem is tough, and not really changed by the technology swap (autonomous vehicle for trolley). The difference, of course, is that in one version there is a trolley driver and in the other there is a computer driver. Either way, deciding whom to “kill” is a complex problem, one preferably not solved by coding a popular vote.
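What “coding a popular vote” would literally mean is worth spelling out. The sketch below is hypothetical, not MIT’s implementation, and the tallies are invented.

```python
# A hypothetical "popular vote" decision rule for a single crash scenario.
from collections import Counter

def crowd_choice(survey_responses):
    # Return whichever outcome the most respondents preferred.
    tally = Counter(survey_responses)
    return tally.most_common(1)[0][0]

# Scenario: swerve left (harm one passenger) or right (harm two pedestrians).
responses = ["swerve_left"] * 612 + ["swerve_right"] * 388
print(crowd_choice(responses))  # "swerve_left"
# The vote produces an answer, but no justification, no duty of care, and no
# one accountable for the choice -- which is the heart of the objection.
```

A governance system supplies the things the tally cannot: reasons, standards, and accountability.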
The rule of law may not have prevented or solved all disputes, but we do have a vast storehouse of data (poorly accessible, but still there) about how to build and operate governance systems. It may take a village to raise a child, but it takes an ecosystem of domain experts to build workable governance systems.
Governance Needs a Broad Perspective
Lawyers, technologists, ethicists, social scientists and others should work together to develop the governance structure—computer code and legal code—that will regulate the new hybrid society. That structure also will be hybrid—part analogue, part digital. Some restrictions on what A.I. can do will be built into the computer code itself. What form it will take and the best way to accomplish this is something technologists know better than lawyers. But what those restrictions should be is something for broader discussion.
While the effort at Berkeley to teach computers human ethics by having them watch humans is interesting, it is hard to see how it will capture, anytime soon, the interaction of the complex ethics systems blending in modern culture. In the meantime, technology keeps advancing, many computers at a time.
Similarly, some restrictions will be built into analogue code—the types of laws humans use to govern themselves. How to write those laws is something lawyers know better than technologists, but what those restrictions should be also is something for broader discussion.
Many lawyers believe a “wait-and-see” attitude is better than a prophylactic approach. When issues arise, we regulate those issues. This has been the legislative history for many of our recent muck-ups. When the harm has happened, we look back (hastily) and write new laws intended to prevent the harm from happening again. With rogue A.I., the odds of a second chance are slim.
A.I. brings a different type of threat than financial upheaval. Once computer code is embedded in billions of devices, interconnected around the world, and with many devices able to evolve code without human intervention, the threat to governance changes in degree and magnitude. Even a benign computer intent on achieving its goal of making paperclips will be hard to dislodge from its goal once it has infected the world’s devices. A.I. does not accept do-overs.
By working with technologists in companies (where many are embedding technology that is not well understood or protected into everyday devices), in academia, and in government, and with social scientists, lawyers can create a much stronger and more workable governance system. Lawyers also can help integrate that system with the existing, complex governance system in the United States and coordinate it with governance systems in other countries (something that must happen for any A.I. system to be effective).
Lawyers must continuously step up to the problems that need solving, not simply wait for society to bring the lucrative problems to their doorstep. A.I. and other technologies will play significant roles in our future and lawyers must thrust themselves into the discussions. The past excuses, largely dependent on lawyers being ignorant of science, math, and technology, are not sufficient (though they were accurate). Lawyers who don’t feel comfortable addressing client problems in these areas are implicitly leaving the future of governance—and lawyering—to technologists.