By Noémie Krack – KU Leuven
When it comes to technology, law is often described as lagging behind. As with every technological revolution, the law needs to adapt, either by integrating the new technology into the existing landscape or by elaborating an adequate new framework. As with the invention of the automobile, civil society calls for regulation that establishes safeguards and respects fundamental rights, while industry worries about the impact of regulation on innovation.
The rapid evolution and uptake of AI systems and the massive spread of disinformation both pose risks and challenges for the safeguarding of democracy, fundamental rights and values. Building on work conducted over several years, the European Union decided to step up considerably its efforts and initiatives in the digital field, including on these two matters. Indeed, in spring 2021 it released a legislative proposal for an AI regulation and a policy initiative to strengthen the fight against disinformation.
Disinformation
Disinformation is currently a matter left to self-regulation. At the EU level, the EU Code of Practice on Disinformation was adopted in September 2018. The Code is part of a wider agenda of measures established by the European Commission (hereafter EC) to deal with disinformation. It calls on platforms, leading social networks and the advertising industry to step up their efforts to tackle online disinformation while preserving the right to freedom of expression and an open internet.
Later that year, aware of the role disinformation may have played in major democratic events such as the 2016 Brexit referendum and US presidential election, the EU adopted an Action Plan against Disinformation to ensure timely implementation of the Code before the European elections of May 2019. The Plan made clear that if the Code turned out to be unsatisfactory, the Commission could propose further action, including of a regulatory nature. It is in this context that the Commission conducted its evaluation.
Last September, the assessment of the Code showed that, despite providing a unique framework for structured dialogue and close collaboration between stakeholders, the Code's shortcomings remained numerous and considerable. These included: inconsistent and incomplete application of the Code across platforms and Member States; limitations intrinsic to its self-regulatory nature; gaps in the coverage of its commitments; the lack of an appropriate monitoring mechanism; the lack of commitments on access to platform data for research on disinformation; and limited participation from stakeholders.
As announced in the European Democracy Action Plan (EDAP) of December 2020, the European Commission decided to step up its efforts and in May 2021 released the Guidance on Strengthening the Code of Practice on Disinformation. The Guidance aims to address the identified shortcomings by empowering users, encouraging the flagging of harmful content and tackling the platforms' business model. It invites platforms to revisit the Code and adopt measures to expand its scope, better demonetise disinformation, ensure the integrity of services, and improve data access for researchers. The Guidance suggests achieving these objectives through accountability measures, tailored commitments based on the type of platform, a robust monitoring framework with a set of key performance indicators, and the creation by platforms of a Transparency Centre on disinformation policies, among others.
The current signatories of the Code are expected to present a first draft of the revised Code this autumn, in line with the Guidance. In the meantime, the European Commission will campaign to reach potential new signatories and interested parties.
The Code of Practice is voluntary, and the Guidance provides no legal basis for sanctioning platforms that do not take sufficient measures to counter disinformation. However, the Guidance and the Code, in combination with the proposed Digital Services Act (DSA), constitute the shift toward co-regulation that experts in the field had been awaiting to strengthen the measures against disinformation. Indeed, under certain DSA provisions, platforms would become obliged to take measures in line with the Guidance. This mechanism would improve platform accountability without ruling on the delicate question of removal obligations for disinformation. Bordering on freedom of expression (including parody, satire and the like), disinformation is not easy to classify with certainty and demands a contextual and cultural appreciation on a case-by-case basis. The approach chosen is therefore welcomed by civil society, and all eyes are now on the platforms.
Artificial Intelligence
For several years, we have seen a plethora of policy initiatives concerning AI systems. However, no dedicated European or international instrument existed. Of course, many existing legal instruments already applied to this developing technological field, but they are often outdated, ill-adapted, too vague, or full of gaps when it comes to innovation. Two options were therefore available to policymakers: interpreting and adapting the existing rules and legal frameworks, or creating new regulation.
On 21 April 2021, the EC released a proposal for a regulation laying down harmonised rules on AI. The proposal builds upon work carried out over the years, including the High-Level Expert Group's Ethics Guidelines and Assessment List for Trustworthy AI.
The proposal's general objective is to ensure the proper functioning of the EU single market by creating the conditions for the development and use of trustworthy AI in the Union. Policymakers sought to provide legal certainty through clear rules for AI developers, deployers and users, while safeguarding the fundamental rights, values and principles of the European Union.
The scope of the regulation is remarkably broad: it covers public and private actors alike, and, inspired by the GDPR approach, it will also apply to actors not established in the EU who place products or services on the EU market or whose systems' output is used in the EU.
Concerning the definition of AI, the proposal opts for a broad, inclusive and future-proof one. The terms employed are deliberately generic in order to focus not on the underlying technology but on the usage and impact of AI systems. Some have criticised the definition as overly broad.
The proposal introduces a risk-based approach to regulating AI. AI practices posing an unacceptable risk are banned from the EU (for example, AI used for social scoring). High-risk AI systems, such as safety components of critical infrastructure or systems used for law enforcement or border control, are subject to strict obligations before they can be placed on the market and to close scrutiny afterwards. Strict requirements on the data sets used, human oversight, the necessary documentation and transparency for users are established for this category. For limited-risk systems, transparency obligations apply to the use of certain AI systems, including chatbots and deep fakes: the aim is to design and develop the AI system in such a way that natural persons are informed that they are interacting with one. For the minimal-risk category, no specific obligations apply, but the EC will promote and facilitate the uptake of codes of conduct.
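To make the tiered structure concrete, here is a minimal illustrative sketch in Python of how the proposal's four risk categories map to regulatory consequences. The tier names paraphrase the proposal, but the data structure and function are purely hypothetical: the regulation defines legal categories, not a technical API.

```python
# Illustrative sketch only: the AI Act proposal defines legal categories,
# not code. The descriptions below paraphrase the proposal's structure.

RISK_TIERS = {
    "unacceptable": "prohibited in the EU (e.g. AI used for social scoring)",
    "high": "strict ex-ante obligations (data sets, human oversight, "
            "documentation, transparency) plus post-market scrutiny",
    "limited": "transparency obligations (e.g. chatbots and deep fakes must "
               "disclose that users are interacting with an AI system)",
    "minimal": "no specific obligations; voluntary codes of conduct encouraged",
}

def obligations_for(tier: str) -> str:
    """Return the (simplified) regulatory consequence for a risk tier."""
    return RISK_TIERS[tier]

if __name__ == "__main__":
    for tier in ("unacceptable", "high", "limited", "minimal"):
        print(f"{tier:>12}: {obligations_for(tier)}")
```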
Image source: Regulatory framework on AI | Shaping Europe's digital future (europa.eu)
In case of non-compliance with these obligations, penalties can be imposed which, depending on the offence, can go up to 30 million EUR or 6% of the offending company's total worldwide annual turnover, whichever is higher. The text is only a proposal at this stage. The EU democratic game is now playing out, with the European Parliament and the Council of the European Union analysing the text in depth in order to adopt their respective positions and propose amendments.
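Returning to the penalty ceiling mentioned above, here is a quick hypothetical illustration of how the "whichever is higher" cap works; the turnover figure is invented for the example and the function is not part of any official text.

```python
# Hypothetical illustration of the maximum fine for the most serious
# infringements under the April 2021 proposal: 30 million EUR or 6% of
# total worldwide annual turnover, whichever is higher.

def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for a company with the given turnover."""
    return max(30_000_000, 0.06 * annual_turnover_eur)

# For a company with 1 billion EUR turnover, 6% (60m EUR) exceeds the
# 30m EUR floor, so the cap is 60 million EUR.
print(f"{max_fine_eur(1_000_000_000):,.0f} EUR")  # -> 60,000,000 EUR
```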