Clash of Titans
At least since Mark Zuckerberg and Elon Musk famously clashed over the future of artificial intelligence, one question has dominated the discussion about AI: should national authorities or supranational organisations seek to regulate the use of the technology, and if so, how?
The prospect of man-hunting machines or all-knowing computers has created an atmosphere of fear amongst some critics of artificial intelligence and parts of the wider population.
Since we fear what we don’t understand, an important task is to educate the public about the benefits and challenges of AI. People often forget that we already use the technology on an almost daily basis: instead of typing this article, I could dictate it or have it translated into dozens of languages. The systems filtering your emails for spam are a form of AI, and the technology is an important element of the security systems that seek to protect you from cyberattacks.
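To make the spam-filtering example concrete, here is a minimal sketch of a naive Bayes text classifier, the classic technique behind many early spam filters. It is purely an illustration, not a description of any particular email provider’s system, and the tiny training set is invented:

```python
from collections import Counter
import math

# Invented toy training set: (message, label) pairs.
TRAINING = [
    ("win a free prize now", "spam"),
    ("cheap meds free offer", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the project team", "ham"),
]

def train(examples):
    """Count word frequencies per label and how often each label occurs."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the highest log-posterior score."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    scores = {}
    for label in label_counts:
        # Start from the prior probability of the label.
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out the score.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

word_counts, label_counts = train(TRAINING)
print(classify("free prize offer", word_counts, label_counts))       # -> spam
print(classify("monday project meeting", word_counts, label_counts)) # -> ham
```

The point of the sketch is simply that “AI” here means statistics over word frequencies, not a thinking machine, which is part of why education about the technology matters.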
We recently discussed practical use cases of AI and the insights gained from its use in real life, as opposed to the more theoretical discussions of its potential.
Set aside the scenarios from science fiction films, though, and we find that there are a number of serious and very real risks to consider. A common concern is the potential loss of jobs as robots become increasingly able to do work done by humans today.
Another question frequently debated in this context is the impact of AI on privacy laws: how should the ability of AI systems to interpret data obtained from surveillance cameras, phone lines, emails, and so on interact with the right to privacy? And how can we rein in the already significant use of AI for spreading fake and harmful content online, influencing elections, or attacking the IT systems of governments, companies, and organisations? These concerns are very much on the public’s mind, as a study published last year by the Center for the Governance of AI at the University of Oxford’s Future of Humanity Institute showed.
The European Union, too, understands that AI is already part of our lives: in April 2018, the European Commission published its strategy paper “Artificial Intelligence for Europe”. It marked the beginning of a heated discussion about the EU’s approach, whether it would hinder innovation by introducing boundaries, and how effective such rules could actually be. Sometimes the discussion even got ahead of itself, for example when Politico reported last month that the EU was considering a ban on facial recognition in public spaces.
AI Regulation and the EU
The European Union has indeed continued its work on these fundamental questions, and the provisional result was published on Wednesday in the form of a white paper, “On Artificial Intelligence – A European approach to excellence and trust”.
The paper documents the benefits of AI and the progress it can bring before examining the risks of the technology and what regulation could look like in the shape of an “ecosystem of trust”. According to the Commission, it is predominantly a lack of trust that is holding back a broader uptake of AI. The High-Level Expert Group the Commission established identified seven key requirements for trustworthy AI in a report last year:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental wellbeing, and
- Accountability.
These form the foundation of a framework that seeks to “ensure compliance with EU rules, including the rules protecting fundamental rights and consumers’ rights, in particular for AI systems operated in the EU that pose a high risk”.
Preliminary results
The report recognises, though, that this is no more than a preliminary outcome, as a key issue for any future regulatory framework on AI is determining the scope of its application. The Commission states that the framework would apply to products and services relying on AI, which in turn requires a clear definition of AI drawn up on the basis of data and expertise. Neither is easy to come by. Compared with other global players, the EU has limited data available and possibly even less expertise in the form of specialists in the field.

It is an uneven arms race with China and the US (and others) that the EU has entered, and its competitors, if we treat this as a competition, pursue different strategies. While the exact plans for AI regulation in China and the US are still in the making, China will likely take a centralised approach in line with its political system and culture, whereas the US seems to favour the most liberal approach possible, in line with its own regulatory tradition.
What now?
The EU thereby risks hindering the progress of AI further by introducing a regulatory framework that might be too restrictive. As a result, Europe could fall further behind; at the same time, it needs to guarantee a framework that protects the interests and rights of its citizens, especially since other governments might not have these equally in mind. It is certainly no easy task, but one that cannot be delayed a moment longer.
Credit: PlanetCompliance