
EU unveils tougher industry code to combat disinformation

The European Commission has just taken the wraps off a beefed-up industry Code of Practice for tackling online disinformation across the EU.

Signatories to the Code, which include tech giants like Google, Meta, TikTok and Amazon-owned Twitch but also smaller players like Clubhouse and Vimeo, among a number of other industry, adtech and civil society entities, have agreed to a series of commitments and to undertake specific measures to address concerns linked to this type of potentially harmful (but not typically illegal) online content.

The 2022 Code of Practice on Disinformation, which applies from now but allows for a six-month implementation period, is being billed as a strengthened replacement for the 2018 self-regulatory regime it supersedes — bringing in “stronger and more granular commitments and measures” (44 commitments vs 21 previously; and 128 new measures), which the Commission says build on the operational lessons learnt in the past years.

In recent years the coronavirus pandemic has stepped up EU lawmakers’ concerns about the harms linked to online disinformation.

Russia’s war in Ukraine has further sharpened attention on the issue as the bloc has adopted tough measures against Kremlin propaganda channels — going so far as to ban the state-affiliated media outlets Russia Today (RT) and Sputnik earlier this year.

A review of the 2018 Code, presented in fall 2020, concluded that the self-regulatory regime was failing to deliver enough transparency or accountability from the platforms and advertisers signed up to it. The Commission went on to announce that it was preparing a rebooted Code in May 2021, although it has taken months longer than it had hoped to agree the details.

The new Code, which the Commission presented today, has roughly doubled the number of signatories (34 vs 16). These are not just tech giants like Google or Facebook’s parent Meta but comprise a far broader mix of players, including industry associations (such as DOT Europe) and online advertising entities (like IAB Europe), as well as fact-checkers and civil society groups.

There are some notable gaps, too. Apple, for instance, isn’t (yet) signed up. Nor is Amazon (in its marketplace guise). The messaging platform Telegram is another no-show for now. But WhatsApp and Facebook Messenger, as well as Instagram, are signed up via parent entity Meta, an expansion on its involvement vs the 2018 Code (which Facebook applied only to its eponymous platform).

The new Code will also remain open to sign-ups, and the Commission is hoping the list keeps growing. “The more the better,” confirmed commissioner Vera Jourová, speaking at today’s press conference.

This broader base of signatories was involved in drawing up the new Code’s expanded and more granular measures, so the Commission is hopeful this rebooted approach sets up the mechanism to be more comprehensive in tackling online disinformation, and therefore more successful at addressing the smorgasbord of threats posed by a type of online content that can cause harm by spreading lies and eroding trust.

As we reported in 2018, when the original Code was unveiled, the EU’s first attempt at responding to the threat of disinformation looked far too broad-brush to have a meaningful impact on a fast-scaling problem. The Commission has, eventually, agreed and revised the approach, with cross-industry participation.

“We believe that the fight against disinformation has to have this parameter of a ‘bottom up approach’,” noted Jourová. “This Code is the product of the signatories at the end.”

The main focus areas for the EU’s new disinformation Code are:

- Demonetization: putting pressure on the ad industry to prevent ads being served alongside disinformation, reducing the financial incentive to generate fake nonsense.
- Transparency around political ads (albeit the Commission’s proposal here, presented last November and still pending, looks weak).
- Reducing manipulative behavior, including measures to tackle fake accounts and bot-driven amplification and to address other risks like deepfakes.
- Protecting users, such as with more and better tools to identify and report disinformation, and through requirements that platforms surface quality information to squeeze info-gaps that may otherwise be filled with disinformation.
- Ensuring pan-EU fact-checking coverage.
- Facilitating data access for researchers, to support independent study of the disinformation problem.

There will also be a new transparency center set up to support implementation and monitor the operation of the Code, plus a permanent task force, which the Commission said will be focused on ensuring the Code adapts to changing disinformation threats by suggesting improvements and new requirements.

Jourová described disinformation as a growing problem in the EU, which is why, she said, the bloc needs to take tougher measures to ensure democratic processes are protected.

“This is a big step forward because thanks to the new Code we have created an environment of collaboration, with constant monitoring and potential adjustment in the wake of new threats and new evidence,” she argued, adding: “The Code is a key instrument for creating a safer and healthier environment in the entire European Union.”

She also suggested the new Code would finally deliver meaningful data for measuring platforms’ performance across all EU countries and languages — which has been a major gap in earlier reporting rounds.

While it’s still not mandatory for any companies to sign up to the Code, the EU is linking being on board with the disinformation-fighting measures to the incoming Digital Services Act (DSA) regulation — saying its aim is for the Code of Practice to become a “mitigation measure” and a “Code of Conduct” (so, yes, keen policymakers will note it’s envisaged as being both a Code of Practice and a Code of Conduct) that’s “recognised under the co-regulatory framework of the DSA”.

That’s important because it gives the industry Code teeth — since the DSA bakes in a regime of major penalties (of up to 6% of global annual turnover) for infringements, providing an incentive for companies to align with the Code’s measures as part of their broader EU digital regulation compliance strategy.

Article 27 of the DSA proposal lists adherence to Codes of Conduct as a valid mitigation measure for systemic risks, which disinformation would be classified as. So, basically, the Commission’s expectation is that the Code of Practice will become a central plank of DSA compliance for platforms.

“Today also marks a clear departure from self-regulation only. For the big platforms [the Code] will be enforced through the Digital Services Act,” noted Jourová.

There may, however, be a question mark over how effective linking the Code to DSA compliance will be as a means of encouraging smaller entities to fall in line. Smaller adtech entities, for example, may still play an outsized role in the distribution of disinformation, both by providing tools for targeting (and therefore amplifying the spread of disinformation) and by providing the conduits for creators of disinformation to monetize their nonsense, yet may not be classified as so-called VLOPs (very large online platforms) under the DSA, meaning they would not face the same requirements to address systemic risks like disinformation.

Here the Commission appears to be relying on reputational pressure being brought to bear on non-VLOP signatories via an ongoing implementation reporting and monitoring structure, based on KPIs attached to the Code’s measures (with both qualitative and quantitative reporting elements). Those that have signed up will be required to report on their application of the Code every six months for VLOPs, or annually for smaller entities.

Signatories will be required to report how they have implemented the Code’s measures and commitments and provide data to back up their reporting. The first batch of these reports will be due in early January.

The Commission implemented a similar reporting structure related to COVID-19 disinformation, which led to a series of pressers in which tech giants were chided by EU commissioners that they ‘must do better’. So whether a similar reporting structure attached to the beefed-up Code will deliver meaningful process changes from the adtech industry remains to be seen.

A more effective tool against online disinformation in the ad-targeting sphere might be a full ban on tracking-based ad targeting, which relies upon data-mining and profiling individuals in order to serve behavioral advertising tailored to their particular interests and views. In the context of disinformation, such targeting can allow malicious marketing messages to reach the individuals most vulnerable to those fakes and lies, thereby amplifying the impact and spread of disinformation as a tool for manipulation.

It’s worth noting that the incoming DSA includes a ban on the use of minors’ data for targeted advertising, and a ban on the use of sensitive data for ad targeting, so the EU is taking steps to limit how tracking-based advertising can be used as a tool for manipulation.

In addition, earlier this year, a key ad industry framework, IAB Europe’s Transparency and Consent Framework, was found in breach of existing European Union data protection law, so current ‘mass surveillance’-based adtech practices are operating under a legal cloud in the EU.

At today’s press conference, the Commission also emphasized that the Code is not its only tool or strategy for combating disinformation, which can of course also happen offline, via traditional media channels or, indeed, spew from the mouths of elected politicians… leading to conspiracy-fuelled violence.

Asked about these wider concerns, commissioners highlighted ongoing work by the EU’s executive on a European Media Freedom Act which they said will focus on addressing related issues, like transparency of media ownership and ensuring Europe’s press remains free from foreign or government influence.

Media literacy and education for children, to help kids learn critical thinking around information, is another focus for the EU, they said, as is stepping up communications to fill gaps that malicious conspiracy theories may be seeking to exploit.

As for lying politicians, Jourová said she hoped for a return to higher moral standards of conduct for elected officials, suggesting: “We used to live in maybe better times when obvious lying was clearly disqualifying behavior for the politician. Maybe we should come back to that.” TechCrunch
