
There has been a lot of talk lately about the astonishing performance of the most popular AI applications. Chief among the superstars, of course, is ChatGPT, the application developed by OpenAI, which dominates social media: raise your hand if you have tried playing around with it.

By now, these applications can be asked almost anything and give jaw-dropping answers. Meanwhile, Google is preparing its offensive on this front with Bard, an application that promises to outdo ChatGPT.

For the non-techie, the responses of the most advanced artificial intelligence applications can be stunning. And not only for non-techies: last year, Blake Lemoine, a Google engineer, claimed that LaMDA, the artificial intelligence (AI) system Google designed to converse with humans (and the one underlying Bard), was sentient. He was fired for it, and the episode caused a stir.

Now, one of the areas where the use of artificial intelligence is most controversial is the legal field.

Over time, many doubts have been raised about how effective these technologies really are, about whether they should be used at all, and, not least, about the ethical implications of letting artificial intelligence applications handle the sphere of people’s rights, especially given the possible impact on property rights and even personal freedoms.

On a practical level, however, artificial intelligence programs, technologies and applications have been used in the legal field for several years now.

We certainly cannot yet speak of massive, let alone universal, adoption, but these applications are becoming more and more established in the world of law.

It has been at least since the second decade of the 2000s that law firms around the world, including Italian ones, have been using systems such as Ross (built on IBM’s Watson technology) or Luminance. These systems were created to quickly analyze, parse, sort, group, and classify large quantities of documents, detecting anomalies and reporting them to professionals. They are mostly used to support due diligence on large transactions, speeding up and easing the work of professionals.

In the same years, other uses of artificial intelligence in the legal field were also being tested: in 2017, the British start-up CaseCrunch launched a challenge pitting its CaseCruncher software against a lineup of more than 100 experienced lawyers in resolving a sample of legal cases in the banking field. The artificial intelligence system got the better of the humans, beating them in speed, in quality (providing more accurate solutions), and at an embarrassingly lower cost (£300 per hour for the humans versus £17 per hour for the machine).

All of which, you will agree, is quite disturbing to anyone wearing a robe.

Crypto, artificial intelligence (AI) and the legal system

And if the possibility that a machine could play an “active” role in shaping defensive strategies, performing even better than a flesh-and-blood lawyer, already seems disturbing, the matter becomes even more delicate when this technology begins to enter the processes behind judicial verdicts or administrative measures capable of affecting, positively or negatively, the legal sphere of individuals.

If you are wondering, the answer is yes: this too has been happening for several years. Predictive justice systems have long been under experimentation. One case that has caused quite a bit of discussion is Compas (Correctional Offender Management Profiling for Alternative Sanctions), an algorithm used for many years in some US courts to predict a defendant’s likelihood of recidivism in order to set the amount of bail.

The point is that this system has often proven unreliable and even discriminatory, because it tends to overestimate the risk of recidivism for African American defendants and underestimate it for white defendants. Moreover, because the inner workings of the application are protected as a trade secret, the system is far from transparent: its judging criteria cannot be inspected. It nevertheless continues to be used, and its use has been ruled legitimate by the Wisconsin Supreme Court.

University College London conducted an interesting experiment several years ago: software simulated the judgment of the European Court of Human Rights on a sample of 584 real, already adjudicated cases (concerning torture, degrading treatment, and invasion of privacy). For 79% of them, the machine’s verdict coincided with the decision of the Strasbourg Court. Good, but not great.
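Under the hood, the UCL experiment reportedly treated outcome prediction as supervised text classification: word-pattern features extracted from the case texts, fed to a statistical classifier. The toy sketch below illustrates the shape of that approach with a simple bag-of-words centroid classifier over invented one-line “cases”; the data and labels are made up for illustration, and this is not the study’s actual model.

```python
from collections import Counter

def bag_of_words(text):
    """Lowercased word counts: the simplest possible feature vector."""
    return Counter(text.lower().split())

def train(cases):
    """Merge the word counts of all cases sharing a label into one 'centroid' per label."""
    centroids = {}
    for text, label in cases:
        centroids.setdefault(label, Counter()).update(bag_of_words(text))
    return centroids

def predict(centroids, text):
    """Score each label by word overlap with its centroid; highest score wins."""
    words = bag_of_words(text)
    def score(label):
        centroid = centroids[label]
        return sum(min(count, centroid[w]) for w, count in words.items())
    return max(centroids, key=score)

# Invented toy "cases" -- the labels mimic violation / no-violation outcomes.
training = [
    ("applicant detained without judicial review", "violation"),
    ("prolonged detention degrading conditions", "violation"),
    ("complaint inadmissible domestic remedies not exhausted", "no-violation"),
    ("search warrant properly authorised by a judge", "no-violation"),
]
model = train(training)
print(predict(model, "detention without review of its lawfulness"))  # → violation
```

A real system of this kind would use richer features (n-grams, section-specific text) and a trained classifier rather than raw overlap, but the pipeline, text in, predicted outcome out, is the same.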

In Estonia, robot judges have been tested to resolve smaller disputes (up to €7,000 in value) in order to clear the backlog. Moreover, tens of millions of disputes between eBay traders are already handled and resolved through automated “online dispute resolution” systems, without the involvement of human lawyers or judges.

Argentina, meanwhile, has developed Prometea, an AI system that manages and resolves repetitive, simply structured court cases in a matter of seconds. The numbers are impressive: the system has churned out thousands of judgments in just a few days, where traditional methods took ten to twenty times as long, depending on the subject matter.

The same system has also been adopted by the Inter-American Court of Human Rights, where it has produced efficiency gains of up to 143%, and its use has also been evaluated by the French Council of State.

Time flies in the world of technology. The experiments described above date back seven or eight years, yet they already feel like prehistory. Coming closer to the present day, the use of AI in the judicial field made headlines again with the news that in February, for the first time in the US, a case would be argued by a “robot lawyer,” thanks to the DoNotPay application.

The DoNotPay case

DoNotPay is an artificial intelligence application created a few years ago to draft and send, in an automated way, a range of elementary legal documents, mostly out-of-court ones: subscription cancellation letters, objections to fines, formal notices, and so on.

Now the creators of the application, which has since evolved considerably, have thrown down the gauntlet: in a minor dispute (over a traffic infraction), an artificial intelligence system would literally dictate the defendant’s defense. In practice, a flesh-and-blood lawyer in the courtroom would repeat, verbatim, every word dictated to him through earphones by the artificial intelligence system.

Nothing more came of the matter because, apparently, US prosecutors threatened to take action over the initiative.

The fact remains that the news made an impact.

As one can easily imagine, the ranks of flesh-and-blood jurists, be they lawyers or judges, are densely populated with members who are not particularly happy at the prospect of computers and software stealing their jobs and, what is more, doing it better and more cheaply.

Just in recent days, I happened to come across a promotional Facebook post by a leading legal publisher, a producer of databases and software for professional firms, announcing a training course on the “lawyer 4.0,” covering topics such as the metaverse and artificial intelligence applications.

It was followed by an impressive number of comments, almost all negative, posted by a great many lawyers. The tone, often sarcastic, expressed deep rejection and even outright disdain for the use of technology. The underlying sense: they want to eliminate the human element from justice; they are distorting our profession; where will we end up at this rate?

Of course, it is common knowledge that lawyers are a particularly conservative profession, especially in Italy. But distrust and hostility toward artificial intelligence are widespread sentiments in many fields, whatever the underlying reasons.

The fact remains that, like it or not, we now have to come to terms with this scenario: the whole path that begins with assessing the prospects of a possible dispute, passes through the dispute itself, and culminates in a judicial decision may in the future be entrusted to non-human systems, if not entirely, then at least in large part.

And to the question “what about the lawyer?” as well as “what about the judge?” many possible answers can be given.