Doppelgänger brings people face to face with their double lives as data subjects – and their double standards

In Doppelgänger, documentary filmmaker Michael Madsen and deep tech company KASPAR. address issues of mass surveillance, ethical data sourcing, and misinformation as they imagine a near future where AI has hacked into the eyes of the world to collect biometric data and take a closer look at the human species.

Doppelgänger is a collaboration between the start-up KASPAR. – a Copenhagen-based deep tech company developing AI-powered tools for filmmakers and artists – and the Danish conceptual artist and documentary filmmaker Michael Madsen, known for his philosophical sci-fi documentaries The Visit (2015) and Into Eternity (2010).

A recent example of the kind of tools that KASPAR. makes is the creative autonomous editing tool, also called Kaspar (after Kaspar Hauser), currently powering untoldstori.es – a platform that functions both as a cinema and as a recycling station of sorts, where filmmakers and audiences from all over the world can experience what it’s like to collaborate creatively with AI. On the site, users can play around with old material and footage from unfinished films, provided by professional filmmakers and archives, and reassemble it into new original movies with the help of Kaspar.

For MediaFutures, KASPAR. and Michael Madsen have set out to create an installation in a public space that invites passersby to approach a large cluster configuration of mirrors with a camera at the center. As people approach, a red bounding box appears around their blurred face, alongside personal information about them rendered as text on the mirror. They realize that they are not only being watched but also read: mined for biometric and emotion data. What they don’t immediately know is who is watching, and why. A sense of unease arises.
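How such a live pipeline can work is sketched below, under stated assumptions: the snippet uses OpenCV’s bundled Haar-cascade face detector and a plain webcam feed, since the installation’s actual detector, overlay text, and mirror rendering are not public. What it does illustrate is the privacy property at the heart of the piece – every frame is processed in memory, and nothing is ever written to disk.

```python
# Minimal sketch of a live "watched and read" mirror feed. Assumptions:
# a standard webcam and OpenCV's bundled Haar cascade detector stand in
# for the installation's (non-public) setup. Frames are processed in
# memory only; nothing is ever stored.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Blur the face region so no recognizable face is ever displayed.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0
        )
        # Red bounding box signalling "you are being read".
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("mirror", frame)
    if cv2.waitKey(1) == 27:  # Esc quits; no frame is saved anywhere.
        break
cap.release()
cv2.destroyAllWindows()
```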

Easier to critique than to comply

Since the beginning, a big part of the motivation behind Doppelgänger has been to 1) remind people how vulnerable today’s video surveillance systems and facial recognition technology (FRT) make them to misuse of their personal data, and 2) make people more aware of the rules and regulations that are in place, in Europe at least, to protect civilians’ rights to privacy, freedom of peaceful assembly, freedom of expression, equality, and non-discrimination.

The challenge has been, and still is, figuring out a format in which to do so without violating any laws ourselves – which is harder than it sounds when your artwork involves using FRT in a public space.

For this reason, Doppelgänger has already evolved quite a bit. Initially, the idea behind the project was that we would train facial recognition models that would allow anyone to search for their double in a vast archive of CCTV footage from all over the world. But it turned out that wasn’t possible. Or legal. Then KASPAR. and Michael Madsen came up with the idea of deepfaking people’s faces – provided that they consent to their face being scanned during the experience – and placing “them”, their digital double, in scenarios and situations that they’ve never been in, again using found CCTV footage. Also not feasible. In order to comply with GDPR, we realized, in the end, that we had to drop the CCTV footage and find a way to make sure that no (real) faces were ever visible in the mirror configuration, and that nothing was stored.

Face value

Early on in the development of Doppelgänger, the team became fascinated by the contradictory behavior of humans around privacy protection. On one hand, people today are very aware of the dangers of mass surveillance and FRT. On the other hand, they quite willingly lend their faces to various social media platforms to find out which Pokémon they are. What mood they’re in. Which celebrity they most resemble. How symmetrical (or not) their faces are, “what their excuse is” and so on (members of the team not exempted!).

Doppelgänger uses this urge to truly see yourself, and to connect with other people, to lure in its audiences – but then turns it on its head: the face people see reflected back to them is a smudged and contorted version of their features, or not their face at all, but one belonging to someone, or something, else entirely.

For the MediaFutures collaboration, KASPAR. has already successfully developed and tested a blurring tool that is now finding its final visual expression in the hands of Michael Madsen and the award-winning digital design studio Space and Time. However, since this type of function is already widespread on platforms like YouTube, the start-up is now also exploring a more advanced and ‘invisible’ anonymization tool to implement in its business model, where faces are not blurred but replaced with other faces – faces of people who do not exist.

Done in a convincing manner, this type of deepfaking tool has the potential to be a real game changer in the industry: filmmakers often shoot amazing material at a big public event, such as a demonstration or a concert, but struggle to use it because GDPR requires consent from everyone who is identifiable in it. Not only could such a tool instantly GDPR-proof your material – it could do so seamlessly, without disrupting the look and atmosphere of the original footage.
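As a rough illustration of the replacement idea – not KASPAR.’s actual tool, which isn’t public – the sketch below pastes a pre-generated synthetic face over each detected face in a still frame. The file names (“synthetic_face.jpg”, “crowd_still.jpg”) are hypothetical placeholders, and a production face swap would add landmark alignment and identity-preserving blending on top of this.

```python
# Rough sketch of "replace, don't blur" anonymization on a single frame.
# Assumptions: "synthetic_face.jpg" is a hypothetical GAN-generated face
# (a person who does not exist) and "crowd_still.jpg" a hypothetical frame
# from event footage. A real deepfake swap would also align landmarks;
# Poisson blending at least keeps lighting and grain consistent.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
synthetic = cv2.imread("synthetic_face.jpg")
frame = cv2.imread("crowd_still.jpg")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
    patch = cv2.resize(synthetic, (w, h))
    mask = np.full(patch.shape[:2], 255, dtype=np.uint8)
    center = (x + w // 2, y + h // 2)
    # Seamless cloning blends the synthetic face into the scene.
    frame = cv2.seamlessClone(patch, frame, mask, center, cv2.NORMAL_CLONE)

cv2.imwrite("crowd_still_anonymized.jpg", frame)
```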

Training face recognition models on fake faces

Using deepfake technology to protect people’s privacy and build GDPR-compliant data sets of diverse faces might sound counterintuitive. Repugnant, even. After all, this is a technology currently threatening democracy and trustworthy media. Deepfakes play a huge role in the generation of mis- and disinformation and are most commonly known for their grave misuses – making public figures say things they never said, and launching attacks on women by placing the faces of famous actresses, or unwitting civilians, onto porn actors without their consent.

But although deepfakes are disreputable, to say the least, the technology might also solve some serious issues we are facing at the moment: a need for less-biased face recognition models – and a shortage of faces to train generative adversarial networks (GANs) on.

Due to the legal implications of GDPR for SMEs and start-ups, visual data sets – and especially faces for recognizing characters in footage – have become much harder to access. While big companies such as Google, Facebook, and Amazon managed to train their models before these regulations were introduced, it has now become virtually impossible for new SMEs and start-ups to compete.

This realization forced the Doppelgänger team to think creatively and led them to the idea of experimenting with synthetic data sets and swapping real faces with the faces of GAN-people. To create data sets from GAN-people, KASPAR. plans to use deepfake techniques to multiply a single frontal image of a face into versions with different angles, lighting conditions, etc. This way, they aim to create more robust and less biased face recognition systems that make searching for certain characters within footage much easier and faster for filmmakers using their tool.
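Generating convincing novel angles from one frontal image calls for a generative model, but the dataset-building logic can be sketched with plain geometric and photometric augmentation. In the snippet below, “gan_face.jpg” is a hypothetical GAN-generated source image; the rotations and brightness/contrast shifts merely stand in for the pose and lighting variation KASPAR. is after.

```python
# Simplified stand-in for "multiplying" one frontal face into many training
# variants. True novel-view synthesis needs a generative model; rotation
# plus brightness/contrast shifts only sketch the dataset-building idea.
# "gan_face.jpg" is a hypothetical GAN-generated source image.
import random
import cv2

src = cv2.imread("gan_face.jpg")
h, w = src.shape[:2]

for i in range(20):
    angle = random.uniform(-30, 30)   # crude stand-in for head pose
    alpha = random.uniform(0.6, 1.4)  # contrast shift
    beta = random.uniform(-40, 40)    # brightness shift
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    img = cv2.warpAffine(src, M, (w, h), borderMode=cv2.BORDER_REFLECT)
    img = cv2.convertScaleAbs(img, alpha=alpha, beta=beta)
    cv2.imwrite(f"gan_face_{i:02d}.jpg", img)
```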

The fake-face idea is still at an early stage, and it’s possible that it will fail. But even if it does, the experiment could still be valuable, since it might form the basis of a dataset and a model that could then detect deepfakes in videos and help prevent their circulation.
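A hedged sketch of that fallback: if the experiment yields labeled folders of real and generated faces (the “faces_dataset/real” and “faces_dataset/fake” paths below are hypothetical), a standard image classifier can be fine-tuned on them as a first-pass deepfake detector. This is a baseline illustration, not KASPAR.’s model.

```python
# Baseline deepfake-detector sketch: fine-tune ResNet-18 on a hypothetical
# "faces_dataset/" folder with "real/" and "fake/" subfolders. A production
# detector would use far more data, video frames, and stronger evaluation.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("faces_dataset", transform=tf)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real vs. fake

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(3):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```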

AUTHORS: Sofie Lykke Stenstrop – DOPPELGÄNGER