Overtone – MediaFutures OC3 midway blog post

March 2023

How do you know what’s on the other side of a link – and whether it is something you want to read, or something more likely to be harmful?

Content flows around us in feeds, sluiced along by clicks and shares, but there is little information to tell you what an article or post actually is – whether it is what you are looking for, or whether it is potentially dangerous.

This problem will only become worse as generative AI makes the amount of content online grow exponentially, with the most convincing bots the world has ever seen churning out reams of plausible text, instantly, for next to no up-front cost.

Many projects have set out to produce a binary verdict (‘quality information vs misinformation’) based on signals such as the source of the information, or to measure danger through virality. These methods miss the nuances of how information works, and they cannot scale to the size of the internet, where thousands of new sites can pop up instantly.

Overtone focuses on classifying content by type, using the textual signals within it to score metrics such as its level of opinionation and its depth of reporting. This has helped our clients at news outlets understand, for example, why certain pieces perform well on social media, or which articles to include in their newsletters or behind a paywall. Being able to differentiate “types” of article – opinionated interviews, say, or medium-depth features – means we can build recommendations based on the types of article people want to read, rather than just what is being clicked.
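
As a rough illustration of what type-based classification looks like in practice – this is a toy sketch, not Overtone’s actual model, and the signal names and thresholds are invented – imagine scoring a couple of textual signals and mapping them to an article type:

```python
from dataclasses import dataclass

@dataclass
class TextSignals:
    """Hypothetical per-article scores in [0, 1]; in practice these
    would come from trained text models, not hand-set numbers."""
    opinionation: float  # how opinionated the language is
    reporting: float     # how much original reporting the piece shows

def article_type(signals: TextSignals) -> str:
    """Map signal scores to a coarse article type (illustrative thresholds)."""
    if signals.opinionation > 0.7:
        return "opinionated interview or op-ed"
    if signals.reporting > 0.6:
        return "medium-depth feature"
    return "light news brief"

# A recommender can then match readers to the types they actually finish
# reading, rather than to whatever happens to be clicked most.
print(article_type(TextSignals(opinionation=0.8, reporting=0.3)))
```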

At Overtone we want to understand whether misinformation can be organised by type, too. If content is labelled according to what it objectively is, readers can make their own informed choices about whether and how to interpret what it says.

That’s why our work for MediaFutures is to create an algorithm that automatically assigns a type of misinformation to content. Our model ingests text and categorises its type of misinformation, including overly opinionated news, conspiracy theory-based news and news with very little context (“I heard that…”). The taxonomy is being created by Overtone and will be tested on misinformation articles analysed by a group of human readers from more than a dozen EU member states.
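
To make the shape of the output concrete, here is a minimal sketch of what such a classifier’s interface might look like. The three labels are the ones named above; the keyword and length cues are toy stand-ins for the trained model, which we can’t reproduce here:

```python
from enum import Enum

class MisinfoType(Enum):
    # Labels from the taxonomy described in this post.
    OVERLY_OPINIONATED = "overly opinionated news"
    CONSPIRACY_BASED = "conspiracy theory-based news"
    LOW_CONTEXT = "news with very little context"

def classify_misinformation(text: str) -> MisinfoType:
    """Toy stand-in for the trained model: crude keyword and length
    cues instead of learned textual signals."""
    lowered = text.lower()
    if "i heard that" in lowered or len(text.split()) < 40:
        return MisinfoType.LOW_CONTEXT
    if "they don't want you to know" in lowered or "cover-up" in lowered:
        return MisinfoType.CONSPIRACY_BASED
    return MisinfoType.OVERLY_OPINIONATED
```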

When complete, our work will give people insight into content – before they click. Moreover, because our algorithm can generate its scores the very second a piece of content exists, they can be used by systems such as advertising exchanges to cut off the profit behind misinformation.
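
As an assumption about how a downstream system might consume these scores – not a description of any existing ad-exchange API – an advertising system could gate its bids on the label:

```python
from typing import Optional

def misinfo_label(text: str) -> Optional[str]:
    """Stand-in for the classifier sketched above; None means no
    misinformation type was assigned to this page."""
    return ("conspiracy theory-based news"
            if "they don't want you to know" in text.lower() else None)

def should_serve_ads(page_text: str) -> bool:
    # Refusing to bid on labelled pages cuts off ad revenue the moment
    # the content appears, rather than after it has gone viral.
    return misinfo_label(page_text) is None
```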

[Image caption: Generative AI is making the internet more chaotic. Above, an image generated with AI using the prompt “a picture of the chaotic internet in the style of artist Pieter Bruegel”.]