Welcome to Tracer, the newsletter tracking the evolution of deepfakes and synthetic media technologies, disinformation, and emerging cybersecurity threats.
New to Tracer or know someone who would be interested? Share and subscribe here!
An investigation by the Beijing Times and Global Times found an abundance of illegal deepfake pornography on online marketplaces, with the videos featuring prominent female Chinese celebrities.
What exactly did the investigation find?
The investigation was led by reporters from the Beijing Times, who found deepfake pornography for sale on Xianyu, e-commerce giant Alibaba's second-hand marketplace, and on forum pages run by Chinese technology and internet services company Baidu. On both sites, deepfake pornography videos featuring female Chinese celebrities, including actresses and musicians, were for sale in bundles, with prices ranging from less than $2 for a few dozen videos to $25 for several hundred. In addition to finished deepfake pornography, the reporters found sellers advertising deepfake services, offering to make customised videos featuring the buyer's chosen subject, as well as do-it-yourself tutorials.
The tip of the deepfake iceberg in Asia?
The investigation reveals an appetite for and commodification of deepfakes (in particular deepfake pornography) in China similar to what we have seen in the US and Europe. Pornography is currently illegal in China, and Chinese legislators have already moved to ban all deepfakes that infringe on an individual's portrait rights. Following the investigation, both websites claimed to have removed all illegal listings involving deepfakes, and have implemented bans on searches for terms like "deepfake" and "faceswap". However, as with pornography websites such as Pornhub, these bans have not been fully effective, with similar searches still yielding results for faceswaps and deepfakes.
The Daily Mail reports that politician Arvind Limbavali made a tearful protest in the New Delhi legislative assembly after being targeted by an alleged deepfake that showed him engaging in gay sexual acts.
What exactly happened?
The emotional speech focused on Limbavali's demand for an official probe into the alleged deepfake video, released around a week earlier, that depicted him engaging in gay sex. Limbavali claimed "It is hard to even imagine such a situation where your family has been traumatised", with other comments implying his children had been significantly affected by the video. Another politician had previously highlighted the impact of fake videos smearing politicians in India, many of which circulate on social media and WhatsApp groups. Following the speech, police authorities reported that a case had been filed and that an effort to identify those who created or shared the footage on social media had commenced.
Deepfakes' impact is recognised in the world's largest democracy
This case follows several in Malaysia and Pakistan where politicians have been smeared, or have claimed to have been smeared, by deepfakes depicting them in gay or otherwise compromising sexual relations. Although the vast majority of deepfake pornography has targeted women, these cases highlight the damage that deepfake pornography depicting male politicians in gay sex scenes could do in countries where such acts are illegal or heavily stigmatised. With India already suffering from a disinformation epidemic, deepfakes targeting both politicians and marginalised groups could significantly exacerbate the frequently violent responses existing forms of disinformation provoke.
Researchers from the MIT-IBM Watson AI Lab created an AI portrait website that generates a synthetic image of a subject in the style of a 15th-century painted portrait, derived from a photo of the subject.

How does it work?
The project is powered by GANs (generative adversarial networks) that identify the facial lines and features of a given photo, generating an entirely new face in a 15th-century style, as opposed to simply generating a "painted over" mask for the existing face. The GAN-based algorithm was trained on a dataset of 45,000 classical portraits, with a focus on 1400s paintings (due to the period's reputation as the starting point of realistic portraiture in the West), but also including some early Renaissance and contemporary images. The output is a 4K synthetic image that resembles the original subject, convincingly "painted" in the style of a portrait from the old masters. As a selection of celebrity examples shows, the output certainly captures the likeness of the individual, while also translating certain features into a style consistent with painted portraits.
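The adversarial training loop behind such systems can be sketched in miniature. The toy below is a hypothetical simplification (not the AI Portraits code): it trains a one-parameter generator against a logistic-regression discriminator on one-dimensional data, standing in for the portrait images and deep convolutional networks a real GAN would use.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a target distribution (a stand-in for
# the 45,000 portrait images a real system trains on).
def real_batch(n):
    return rng.normal(loc=4.0, scale=0.5, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G(z) = 0.5*z + b: learns to shift noise toward the data.
g_b = 0.0
# Discriminator D(x) = sigmoid(w*x + c): scores how "real" a sample looks.
d_w, d_c = 0.1, 0.0
lr = 0.05

for step in range(2000):
    z = rng.normal(size=(32, 1))
    fake = 0.5 * z + g_b
    real = real_batch(32)

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    p_real = sigmoid(d_w * real + d_c)
    p_fake = sigmoid(d_w * fake + d_c)
    d_w += lr * np.mean((1 - p_real) * real - p_fake * fake)
    d_c += lr * np.mean((1 - p_real) - p_fake)

    # Generator: gradient ascent on log D(fake) -- fool the discriminator.
    z = rng.normal(size=(32, 1))
    fake = 0.5 * z + g_b
    p_fake = sigmoid(d_w * fake + d_c)
    g_b += lr * np.mean((1 - p_fake) * d_w)

# After training, the generator's output should centre near the real
# data's mean, because the discriminator can no longer tell them apart.
print(f"generator shift b = {g_b:.2f}")
```

The same push-and-pull, scaled up to convolutional networks over pixels, is what lets the portrait model generate a new face in the target style rather than copying the input.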
Using art to make a point about bias and the limitations of AI
The project became popular shortly after the virality of FaceApp's synthetic ageing feature, which also likely used GANs to generate its final output. However, the creators state the project is meant to make users think about how algorithmic bias can impact decisions and outputs: because AI Portrait is trained only on a specific set of Western paintings, the generated images conform to a very specific Western style that does not take into account the different styles of other cultures. The creators also encouraged users to experiment by uploading images of themselves smiling or showing teeth, as very few 15th-century portraits contained smiling subjects, meaning the generated outputs may be warped or inaccurate in interesting ways.

This week's developments


1) An investigation by BuzzFeed News revealed a large underground "link hijacking scheme" in articles from top news outlets, where links redirected to sketchy websites and services. (BuzzFeed News)


2) A report by cybersecurity experts concluded that the 2020 US Presidential election will likely be the target of disinformation campaigns by several nation-states other than Russia, including Iran. (The Hill)


3) Facebook's ex-security chief Alex Stamos received a $5m donation from Craigslist founder Craig Newmark to found the Stanford Internet Observatory, a centre for studying internet abuse. (WIRED)


4) Researchers from UC Berkeley created a database of 7,500 "natural adversarial images", normal unedited photos of real-life objects that cause unforced errors in computer vision systems. (The Verge)


5) A ransomware attack on the main energy supplier for the South African city of Johannesburg left many residents without power, after the attack encrypted the company's IT infrastructure. (TNW)


6) A US Senate Intelligence Committee report found that Russia likely targeted the election systems of all 50 US states during the 2016 election, although no evidence of vote changing was found. (CNET)


7) The NSA is establishing a defence focused Cybersecurity Directorate dedicated to defending the US from foreign cyberthreats, with operations focusing on countering election interference. (ZDNet)


8) The New York Times has launched a News Provenance Project to help publishers explore technological means of combating disinformation, including blockchain-based solutions. (NY Times)

Opinions and analysis
Victor Riparbelli outlines synthetic media company Synthesia's vision for the future of synthetic media technologies, focusing on the maximisation of human creative potential and minimising harmful uses.
Denise Melchin explores the implications of laws banning deepfake pornography for cloud computing services, and how these services may block the creation of deepfake pornography on their platforms.
Craig Silverman presents a media literacy framework for educating older individuals on how to identify and fact check false or extreme content online, based on work conducted by media literacy experts.
Working on something interesting in the Tracer space? Let us know at