Friday, October 26, 2018

What’s Artificial About Ethical AI In The Legal Industry? Everything.

By ANDY NEILL

With many companies and people around the world relying on artificial intelligence, the implications of how AI programs act and affect outcomes (positively and negatively) are too important to ignore. And people, not technology, have the answers.

Discussion will continue over what role ethics should play in the deployment of AI, but some guidance already exists.
In 2017, the House of Lords in the United Kingdom appointed the Select Committee on Artificial Intelligence to consider the economic, ethical and social implications of advances in artificial intelligence. The committee went on to propose a code for AI built on five principles:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
This code is similar to Isaac Asimov’s three laws of robotics, created in 1942. Seventy-six years on, narrow artificial intelligence actually exists and has progressed further than would have been possible to imagine back then.
How does ethical AI impact the legal industry?
A fundamental part of the law focuses on justice and ethics.
At a recent Law Society event, Professor Richard Susskind recommended that AI practitioners become familiar with the study of ethics, as he did as a law student. Susskind suggests that one cannot look to regulation, to the letter of the law, to tell you what is ethical and what is not; you can only use the law to find out what is “legal” and what is not.
There’s also a question of morality: whether you “should” do something, as opposed to whether you are permitted to do it.
When designing a legal AI system, outcomes must be described in terms of features — what the system will do in response to an action. In any system, unexpected outputs are treated as bugs or undesirable outcomes.
In regular programming, one then reviews the code to find the logical error the human programmer has made and changes it, so the bug is fixed. In AI, specifically machine learning, the system has derived its own logic from the training data, so there is no hand-written rule to edit; its behavior can only be changed indirectly, for example by retraining.
So with legal AI, we have to either restrict the techniques used to build artificially intelligent programs to those that can be interrogated to understand their reasoning, or restrict the domain we’re working on to one where there is significant human oversight.
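As one illustration of the first option, consider a model whose learned rules can be read back out and reviewed by a human. The sketch below is a minimal, hypothetical example (the claim fields, values and labels are invented) that uses the open-source scikit-learn library to train a shallow decision tree and print the rules it has internalized.

```python
# Illustrative sketch: an interpretable model whose learned rules can be read back.
# The feature names, data and labels here are invented for the example.
from sklearn.tree import DecisionTreeClassifier, export_text

features = ["claim_value", "days_to_report", "prior_claims"]
X = [
    [1200,  3, 0],
    [8000, 45, 2],
    [300,   1, 0],
    [9500, 60, 3],
]
y = ["settle", "contest", "settle", "contest"]  # past decisions

# A shallow tree keeps the learned logic small enough for a human to review.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Print the rules the model has internalized so they can be interrogated.
print(export_text(model, feature_names=features))
```

The point is not the particular algorithm but the property: the reasoning can be inspected and challenged before the system is trusted with real decisions.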
In practical terms, either route means focusing on the extremes in any given case, not on average or typical inputs, when trying to find bugs, because an ethical bug is most likely to appear at the margins.
How can we ensure that legal AI is ethical?
The test criteria for AI systems dealing with legal matters should be explicitly defined to include scenarios revolving around the marginalized in society.
Law firms handling high volumes of insurance claims are a prime example. To make their processes as efficient as possible, such a firm might build an AI system that automatically classifies claims as settle or contest. The system is trained on thousands of previous examples, and the firm is confident it can predict which category a new claim should fall into.
Before going live, the firm must ensure that the newly trained AI has not accidentally learned the wrong lessons, which would set an unethical AI agent loose on real human dilemmas. The issue could be that the AI has latched onto hidden metadata, or patterns, in the training data, reaching the right outcome for the wrong reasons.
For example, let’s say that the AI system learned that cases from particular postcodes should always be contested, rather than settled.
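One hedged way to surface that kind of shortcut is to inspect how heavily each input actually drives the model’s decisions. The sketch below is illustrative only: the column names and data are invented, and a real pipeline would use the firm’s own claim records, but it shows how dominant postcode features would stand out.

```python
# Illustrative sketch: checking whether a claims classifier leans on postcode.
# Column names and data are invented for the example.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

claims = pd.DataFrame({
    "claim_value":    [1200, 8000, 300, 9500, 450, 7200],
    "days_to_report": [3, 45, 1, 60, 2, 30],
    "postcode_area":  ["AB1", "XY9", "AB1", "XY9", "AB1", "XY9"],
    "decision":       ["settle", "contest", "settle", "contest", "settle", "contest"],
})

X = pd.get_dummies(claims.drop(columns="decision"))  # one-hot encode postcode
y = claims["decision"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# If the postcode columns dominate, the model has learned a proxy
# rather than the merits of the claim.
for name, weight in sorted(zip(X.columns, model.feature_importances_),
                           key=lambda pair: pair[1], reverse=True):
    print(f"{name:25s} {weight:.2f}")
```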
To guard against this, before the AI agent is built, the firm should put the terms of the agent, and the constraints it should operate under, to its ethics panel: a management panel with the oversight and authority to approve it. If there isn’t an ethics panel in place, the Law Society recommends that there should be, similar to the COLP (compliance officer for legal practice) role, to review proposed AI solutions, discuss their dimensions and approve them if they meet ethical standards.
Following the creation of the AI agent, there should be a round of ethical testing, alongside functional testing, to ensure that the AI has not learned the wrong lessons. Humans need to think through the unethical outcomes the newly created AI could produce, and test to ensure that ethical boundaries are not being breached.
This is a prime role for lawyers, who are trained in ethics, rather than for AI designers and engineers, who are trained in computer science, math and physics.
The aim of this is to uncover unethical behavior by the AI and to try to trip it up by giving it deliberately biased data, to see whether it comes up with biased outcomes. By retrospectively uncovering the rules that the machine learning algorithm has internalized, you can make sure that it pays attention to the correct facets, and not to biased or unethical features such as a claimant’s race or gender.
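In testing terms, one simple check of this kind is a paired probe: take a claim, change only a protected attribute, and confirm that the recommended decision does not move. A minimal sketch, assuming a trained `model` object and invented field names, might look like this.

```python
# Illustrative sketch of an ethical test: the recommended decision should not change
# when only a protected attribute is altered. Field names here are invented.
import pandas as pd

def decision_is_stable(model, claim: dict, attribute: str, alternatives: list) -> bool:
    """Return True if the model's decision is unchanged for every alternative value."""
    baseline = model.predict(pd.DataFrame([claim]))[0]
    for value in alternatives:
        probe = {**claim, attribute: value}
        if model.predict(pd.DataFrame([probe]))[0] != baseline:
            return False
    return True

# Example usage (assumes a trained `model` that accepts these hypothetical fields):
# claim = {"claim_value": 1200, "days_to_report": 3, "claimant_gender": "F"}
# assert decision_is_stable(model, claim, "claimant_gender", ["M", "F", "X"])
```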
The Law Society is one of many organizations focusing on the ethical application of AI. Recommendations include having an ethical panel or officer who will review the application of AI and the results of ethical testing in a legal context. This helps guard against unethical practice and application of AI.
Lawyers are trained in, and have studied, ethics. They must be involved if we are to succeed in creating ethical AI programs.
