
Media manipulation by deepfakes and cheap fakes requires both AI and social fixes, finds a Data & Society report

  • 3 min read
  • 19 Sep 2019


A new report from Data & Society, written by researchers Britt Paris and Joan Donovan, argues that the harms of audiovisual (AV) manipulation, namely deepfakes and cheap fakes, cannot be addressed by artificial intelligence alone. They require a combination of technical and social solutions.

What are deepfakes and cheap fakes?

Deepfakes are one form of AV manipulation, produced with experimental machine learning. Most recently, a terrifyingly realistic deepfake video of Bill Hader transforming into Tom Cruise went viral on YouTube. Facebook founder Mark Zuckerberg also became the target of the world's first high-profile white-hat deepfake operation: a video created by artists Bill Posters and Daniel Howe, in partnership with advertising company Canny, in which Zuckerberg appears to give a threatening speech about the power of Facebook.

However, fake videos can also be produced with Photoshop, lookalikes, re-contextualized footage, or simple speeding and slowing. The researchers coined the term "cheap fakes" for this form of AV manipulation, because such videos rely on cheap, accessible software, or no software at all.

Deepfakes can’t be fixed with artificial intelligence alone

The researchers argue that deepfakes, while new, are part of a long history of media manipulation, one that requires both a social and a technical fix. They conclude that any response to deepfakes must address structural inequality: the groups most vulnerable to that harm should be able to influence public media systems. The authors say, “Those without the power to negotiate truth – including people of color, women, and the LGBTQA+ community – will be left vulnerable to increased harms.”

Researchers worry that AI-driven content filters and other technical fixes could cause real harm. “They make things better for some but could make things worse for others. Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life.”

“It’s a massive project, but we need to find solutions that are social as well as political so people without power aren’t left out of the equation.” This technical fix, the researchers say, must work alongside the legal system to prosecute bad actors and stop the spread of faked videos. “We need to talk about mitigation and limiting harm, not solving this issue, Deepfakes aren’t going to disappear.”

The report states, “There should be ‘social’ policy solutions that penalize individuals for harmful behavior. More encompassing solutions should also be formed to enact federal measures on corporations to encourage them to more meaningfully address the fallout from their massive gains.” It concludes, “Limiting the harm of AV manipulation will require an understanding of the history of evidence, and the social processes that produce truth, in order to avoid new consolidations of power for those who can claim exclusive expertise.”

Other interesting news in tech

$100 million ‘Grant for the Web’ to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons

The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations

UK’s NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses