Technology, AI and art are becoming ever more interlinked, whether that means generating digital art or using artistic methods to interrogate and highlight misinformation.
The MediaFutures programme brings together artists, startups and SMEs to create work exploring misinformation, technology and data. The programme asks project teams to devise novel ways for citizens to better engage with quality journalism, science, education and democratic processes, and to recognise and avoid misleading or deceptive content.
While art is being directly influenced, and indeed created, by technology, art in turn also has a role to play in exploring technology, including current trends in online misinformation. The link between artistic practice and technological innovation is crucial for aiding media literacy, developing human-centred innovations, and helping to identify and challenge fake news.
Trusted fact-checking organisations are critical in the fight against misinformation, but how can AI, art and innovative startups also help? Here we delve into a few stories from the wider world, highlighting the delights and the potential pitfalls of the increasingly overlapping worlds of art, tech, data, AI, algorithmic bias and misinformation.
Artistic fuel
First, let’s look at what can help to fuel artistic success. An AI-based research study investigated whether there is a common pattern that can identify when creative endeavours are most likely to succeed. According to the research, which analysed experimental diversity across time, a creative ‘hot streak’ is usually preceded by an experimental phase, followed by a much narrower focus on one specific approach.
This backs up the notion that allowing the mind to wander fosters creativity, something that can be spotted across history – think of Newton’s plague-related quarantine year (his ‘annus mirabilis’), when the space away from his usual environment allowed his mathematical discoveries to flourish. And let’s not forget Archimedes’s bath-inspired ‘eureka’ moment. Indeed, many of us are no strangers to having some of our best ideas in the tub or shower, so get the bubbles out and let the creativity flow.
AI as an artist: copyright, fair use, and real-life consequences
Now we move on to consider AI actually being part of the artistic process. Image-generating AI tools, including Lensa and DALL-E, have recently been taking social media by storm. People have been using these tools to reimagine themselves as action heroes, famous portraits and even historical figures.
But what are the ethical (and potentially legal) issues with these tools? Aside from possibly devaluing the output of human artists, users report that such software perpetuates stereotypes, for example by unnecessarily sexualising portraits of women.
Debate over whether artists have consented to their work being used in AI training data also raises issues of intellectual property and fair use, and leads us to consider the financial consequences of AI-generated art. Increasingly professional-looking AI-created images can affect the livelihoods of graphic artists and illustrators, raising concerns about the future and wellbeing of a whole industry.
Copyright issues were similarly raised when a particularly photogenic primate took some fetching selfies – blurring the lines of artistic ownership and questioning the premise that the photographer is always the copyright owner.
AI as a misinformer
AI-generated content can be tainted with both accidental and malicious mistruths, due to biased, careless or limited training data and algorithms. Examples include Uber’s AI app failing to correctly identify drivers with darker skin tones, and the snowballing of misinformation driven by AI algorithms in the vaccine-hesitancy movement, which created increasingly large echo chambers that contributed to the public health crisis.
Contentfolks editor Fio asks: Can YOU detect ChatGPT’s bullsh*t? A tale of fabrication and misinformation. She found that ChatGPT has a tendency to make things up, although when challenged the system did correct itself and apologise for its initial blunder. But despite the software then reining in its own fabricated falsehoods, it went on to promote existing misinformation instead, including incorrect birth years for some authors. Although details such as these may seem of little consequence, they are indicative of an underlying issue that ultimately reduces trust in such systems overall.
And of course there was the infamous first question asked of Google’s Bard AI bot, which gave the incorrect answer that the James Webb Space Telescope was the first to take pictures of a planet outside the Earth’s solar system. This led to a drop of $100bn in the share value of parent company Alphabet. Ouch.
AI as a fixer
Fact-checking organisations such as Full Fact use a complementary team of humans and robots to find, check and challenge falsehoods posted across the internet.
In 2019 Full Fact – along with Africa Check, Chequeado and the Open Data Institute (ODI) – won the Google.org AI Impact Challenge, which enabled the organisation to use machine learning to improve and scale up its fact-checking service. Human fact checkers use the AI-based software to help streamline the process of identifying the most important misinformation to address. The software is designed to ‘alleviate the pain points we experience in the fact checking process’.
In the Trust and misinformation podcast hosted by the ODI, Andy Dudfield, Head of Full Fact AI, highlighted the importance of AI in the process, given the scale of the job at hand: ‘We have an AI model that we’ve developed based on a lot of annotations from fact checkers to identify claim-like statements. So things that we might be able to fact check. And each day we’re probably identifying around 100,000 potential things that we could fact check.’
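To make the idea of detecting ‘claim-like statements’ a little more concrete, here is a minimal, hypothetical sketch of how such a model could be framed as a simple text classifier. To be clear, this is not Full Fact’s actual pipeline: the example sentences, labels and scikit-learn setup below are invented purely for illustration.

```python
# A minimal, hypothetical sketch of claim detection as text classification.
# The training sentences, labels and model choice here are invented for
# illustration; Full Fact's real system is trained on large volumes of
# annotations from professional fact checkers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotations: 1 = claim-like (checkable), 0 = not claim-like
sentences = [
    "Unemployment fell by 3% last year.",           # verifiable statistic
    "The new policy will cost taxpayers £2bn.",     # verifiable figure
    "Crime has doubled in the capital since 2010.", # verifiable comparison
    "I think the weather has been lovely lately.",  # opinion
    "What a fantastic match that was!",             # exclamation
    "Have a great weekend, everyone.",              # pleasantry
]
labels = [1, 1, 1, 0, 0, 0]

# Simple bag-of-words features feeding a linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(sentences, labels)

# Score new sentences for 'claim-likeness', so the most checkable ones
# can be surfaced for a human fact checker to review
candidates = [
    "The hospital waiting list grew by 10% this year.",
    "Good morning and welcome to the show.",
]
for text, proba in zip(candidates, model.predict_proba(candidates)[:, 1]):
    print(f"{proba:.2f}  {text}")
```

A production system would of course use far larger annotation sets and more sophisticated language models, but the core idea sketched here – scoring each sentence for how checkable it is, then routing the highest-scoring ones to humans – is what lets a small team triage around 100,000 candidate statements a day.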
And of course there are the MediaFutures projects themselves, fusing art and startup methodology to develop innovative, thought-provoking and alternative approaches to tackling misinformation. Among others, these include: Blind Spots, an interactive audio walk which adapts dynamically to changing real world parameters; Epic Sock Puppet Theater, an interactive media artwork featuring animatronic sock puppets that speak the words of social media posts from ‘sock puppet’ accounts known to have engaged in disinformation campaigns; and Trolls vs Elves, a synergy between a documentary film and a game, delving into the issue of online disinformation in the context of Ukrainian refugees, investigating the operations of internet trolls and activists called Cyber Elves.
By Open Data Institute (ODI)