Judicial big data or judiciary big bang?

The cyborg lawyer versus the flesh and blood lawyer

At a time when the founders of Predictrice, a young start-up whose algorithm analyses court decisions to save time for law firms, have won a pitch competition dedicated to artificial intelligence, the question of the future of the legal professions arises.

This concern is not without legitimacy: we have recently learned that the Estonian government now intends to introduce artificial intelligence into its judicial system.

In this respect, Ott Velsberg has been appointed by the authorities to supervise the design of a robot judge whose mission will be to render justice autonomously in minor cases where the damages claimed are less than 7,000 euros.

France, for its part, is no longer at the experimental stage either.

A recent study pitted twenty lawyers, all experts in contract review, against an artificial intelligence, LawGeex AI, to identify problems in five non-disclosure agreements. The AI achieved an average accuracy rate of 94%, well above the lawyers' 85%. On average, the lawyers took 92 minutes to complete the review, compared with 26 seconds for the artificial intelligence.

This is enough to upset legal professionals…

With the announced publication of hundreds of thousands of court decisions online as part of judicial big data, voices are being raised calling for the regulation of predictive justice.

At the beginning of January, the Council of Europe published the first European ethical charter on the use of AI in judicial systems. This first continental text sets out ethical principles for the public and private actors involved in the design and deployment of tools that use algorithms to process judicial data.

In particular, it points out that artificial intelligence has limits and that algorithms are never neutral and can lead to discrimination; it encourages a positive use of AI that guarantees respect for individual rights, data protection and the right to a fair trial.

Some judges have expressed concern about a risk to judicial independence: the development of predictive algorithms must not lead to artificial intelligence eventually replacing the legal analysis and reasoning of judges.

In addition, its implementation may affect the right to a fair trial. It would be difficult for artificial intelligence to rigorously respect all the guiding principles of the trial: the documents would no longer be examined, and the dispute would no longer truly be argued by the parties…

On a more positive note, predictive justice can be seen as an opportunity for lawyers, in particular by relieving them of substantial research work and thus saving them time. They will also be able to refine their strategy based on the information the tool provides: chances of success, foreseeable timescales and the legal arguments to be used.

However, can we really avoid human intervention?

Clients may prefer that a lawyer examine the issues himself and handle their case personally.

Adapting to digital developments or preserving proximity to litigants: that is the question…

On the one hand, the promise of faster justice and a lighter workload for legal professionals; on the other, a mechanical justice that undermines any principle of independence…

In any case, whether collaboration or competition, the most important thing is to be aware of the risks that artificial intelligence can create. Human intervention remains essential to keep the potential dangers of predictive justice in check…

One last point: while you were reading this text, our algorithm flagged you… 😊