How will the European Union’s digital strategy impact legal tech?

March 11, 2020
5 min.
By: Mikkel Boris

Europe is behind on tech. None of the top 15 tech companies in the world are European, and only 4 % of the 200 largest digital platforms are from Europe. When it comes to big data and artificial intelligence, the European Union accounts for only a few percent of venture capital investments, and now that Great Britain has left, the numbers are probably going to look even grimmer.

As a consequence, the new European Commission has made the digital strategy one of its top priorities for the next five years. It aims to build upon the foundation laid by the former Commission and its so-called third way in digital policy. While China has developed into a digital autocracy that invests heavily in state-developed artificial intelligence, and while the US benefits from a self-regulating industry that swears by a libertarian move-fast-and-break-things ideology, the EU has chosen to focus on “data ethics”, “human-centric AI” and the protection of consumer rights. New rules on ePrivacy, the Copyright Directive and, most importantly, the GDPR are all examples of the EU’s regulatory and “ethical” approach to the tech industry.

But how does that influence the European legal tech industry? Is this forced focus on data ethics making European legal tech providers more attractive to the risk-averse legal industry? Or is the strict focus on data governance slowing down the development of artificial intelligence to a degree where non-European legal tech companies have an advantage?

That depends on what lawyers want.

Europe’s digital future

According to the newly published strategy paper “Shaping Europe’s Digital Future”, Ursula von der Leyen’s new Commission wants to continue the ethical approach to tech.

The Commission writes that it wants: “Technology that works for people: Development, deployment and uptake of technology that makes a real difference to people’s daily lives. A strong and competitive economy that masters and shapes technology in a way that respects European values.” And in case you are not familiar with the so-called European values, they are often understood as the values described in Article 2 of the Lisbon Treaty: human dignity, freedom, democracy, equality, the rule of law and respect for human rights.

Furthermore, digital technologies must help to pave the way for an open, democratic and sustainable society by creating a “trustworthy environment in which citizens are empowered in how they act and interact.”

The keyword here is trustworthiness, since the Commission explicitly mentions it as fundamental to the success of the digital transformation: “Feeling safe and secure is not just a question of cybersecurity. Citizens need to be able to trust the technology itself, as well as the way in which it is used. This is particularly important when it comes to the issue of artificial intelligence.”

So what is understood by trustworthiness? The paper specifies that trust means “helping consumers take greater control of and responsibility for their own data and identity. Clearer rules on the transparency, behaviour and accountability of those who act as gatekeepers to information and data flows are needed, as is effective enforcement of existing rules. People should also be able to control their online identity when authentication is needed to access certain online services.”

This position is not only in line with what the European Union has worked for over the past five years; it also shows that the new Commission will continue to develop Europe’s ethical third way. In the years to come, artificial intelligence is likely to be the most important focus area for the European Union’s digital ambitions.

Last year, the High-Level Expert Group on AI published its Ethics Guidelines for Trustworthy Artificial Intelligence. These guidelines are in line with the overall digital strategy, since they state that AI should be lawful, ethical and robust. It must keep a human in the loop, be technically resilient and secure, be transparent and traceable, and respect privacy and data governance: “besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.”

Is data privacy what lawyers want?

The European third way on digital technologies and artificial intelligence sounds like a dream to the legal industry: a regulated market with a focus on IT security, trustworthiness and the protection of personal data is perfectly aligned with the legal profession’s risk-averse nature. As noted in an earlier article, lawyers tend to value cybersecurity and the privacy of their clients when they pick legal tech solutions. They are looking for safe bets rather than the newest, hottest and most innovative shiny thing.

There are also early signs that the strict rules on data governance can be beneficial for European tech companies. Consumers value data privacy more and more these days. Just a few years ago, Mark Zuckerberg used to say that privacy is dead; now he says that “the future is private”. The European companies that excel in data privacy will thus have a competitive advantage over their negligent peers: they can charge more for data-ethical products, they are trusted more by consumers, and they spearhead an inevitable development. Evidently, the GDPR has produced what is known as “the Brussels effect”, where the EU de facto externalises its laws beyond its borders through market mechanisms.

All that surely sounds promising to the European legal tech companies.

However, the ethical approach to tech can also become a major obstacle for European legal tech, in particular for companies working with artificial intelligence, because the restrictions on the use of personal data could hamper its development.

Under European data rules, personal data may only be collected and processed for a specific purpose. However, data (and lots of it) is the key ingredient of efficient artificial intelligence. Machine learning and other pattern-recognition technologies simply work best when you feed them a huge pool of unorganised data and let them figure out what works. That makes China and the US far more likely to make the breakthroughs in artificial intelligence: the US has fewer restrictions, and in China they make use of all the state-owned data they want without anyone asking questions. Of course, the use and training of artificial intelligence in legal tech do not always involve personal data, but there are other concerns. When it comes to traceability, some parts of the legal industry might find artificial intelligence useful even though it contains black-box elements that are not 100 % transparent. And in other cases, law firms could benefit from automation performed by artificial intelligence without a human in the loop.

So in the end, it is up to the legal tech buyers and their clients to decide what they want. In some cases, they might be faced with a choice between the most data-ethical solution and the most efficient technology. And in many cases, that could be a choice between European software and American tech.

So what will they choose?


You are more than welcome to contact us with ideas or feedback at mb@contractbook.dk or through the form at Contact us.
