GROVER: A GAN that fights neural fake news, as long as it creates said news

  • 7 min read
  • 11 Jun 2019


Last month, a team of researchers from the University of Washington and the Allen Institute for Artificial Intelligence published a paper titled ‘Defending Against Neural Fake News’. The goal of the paper is to reliably detect “neural fake news” so that its harm can be minimized. To this end, the researchers have built a model named ‘GROVER’, which works as a generator of fake news but can also spot its own generated fake news articles, as well as those generated by other AI models.

GROVER (Generating aRticles by Only Viewing mEtadata Records) can generate controllable yet realistic-looking news articles, comprising not only the body but also the title, news source, publication date, and author list. The researchers affirm that the ‘best models for generating neural disinformation are also the best models at detecting it’. The GROVER framework represents fake news generation and detection as an adversarial game:

  • Adversary: This system generates fake stories that match specified attributes: generally, being viral or persuasive. The stories must read as realistic both to human users and to the verifier.

  • Verifier: This system classifies news stories as real or fake. A verifier has access to unlimited real news stories but only a few fake news stories from a specific adversary.

The dual objectives of these two systems suggest an escalating ‘arms race’ between attackers and defenders: as verification systems get better, the adversaries are expected to follow.

Modeling Conditional Generation of Neural Fake News using GROVER


GROVER adopts a language-modeling framework that allows for flexible decomposition of an article in the order p(domain, date, authors, headline, body). At inference time, a set of fields F is provided as context, with each field f delimited by field-specific start and end tokens. During training, inference is simulated by randomly partitioning an article’s fields into two disjoint sets F1 and F2. The researchers also randomly drop out individual fields with probability 10%, and drop out all but the body with probability 35%, which allows the model to learn how to perform unconditional generation as well.
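The field-partitioning and dropout scheme above can be sketched roughly as follows. This is an illustrative simulation of the training-time procedure, not the authors' implementation; the helper name and data layout are assumptions.

```python
import random

# Field order follows the paper's decomposition:
# p(domain, date, authors, headline, body)
FIELDS = ["domain", "date", "authors", "headline", "body"]

def partition_fields(article, rng=random):
    """Simulate inference during training: split an article's fields into
    a context set (F1) and a target set (F2), with random field dropout."""
    if rng.random() < 0.35:
        # Drop all metadata: the model learns unconditional body generation.
        return {}, {"body": article["body"]}
    context, targets = {}, {"body": article["body"]}
    for f in FIELDS[:-1]:  # metadata fields only; the body is always a target here
        if rng.random() < 0.10:
            continue  # drop this field entirely with probability 10%
        (context if rng.random() < 0.5 else targets)[f] = article[f]
    return context, targets
```

Each metadata field thus ends up in the context, in the targets, or dropped, so the model sees many different conditioning patterns during training.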

For language modeling, two evaluation modes are considered: unconditional, where no context is provided and the model must generate the article body; and conditional, in which the full metadata is provided as context. The researchers evaluate the quality of disinformation generated by their largest model, GROVER-Mega, using nucleus sampling with p = 0.96.
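Nucleus (top-p) sampling with p = 0.96 means sampling only from the smallest set of tokens whose cumulative probability reaches p. A minimal sketch of the idea, assuming a plain probability list rather than the model's actual decoding code:

```python
import random

def nucleus_sample(probs, p=0.96, rng=random):
    """Sample a token index from the smallest set of highest-probability
    tokens whose cumulative probability reaches p (top-p sampling)."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    nucleus, total = [], 0.0
    for idx, prob in ranked:
        nucleus.append((idx, prob))
        total += prob
        if total >= p:
            break
    # Renormalize within the nucleus and draw a sample.
    r = rng.random() * total
    acc = 0.0
    for idx, prob in nucleus:
        acc += prob
        if r <= acc:
            return idx
    return nucleus[-1][0]
```

With p close to 1, nearly the full distribution is kept, preserving diversity while still cutting off the long tail of unlikely tokens that tends to derail generated text.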

The articles are classified into four classes: human-written articles from reputable news websites (Human News), GROVER-written articles conditioned on the same metadata (Machine News), human-written articles from known propaganda websites (Human Propaganda), and GROVER-written articles conditioned on the propaganda metadata (Machine Propaganda).



Image Source: Defending Against Neural Fake News


When rated by qualified workers on Amazon Mechanical Turk, it was found that though the quality of GROVER-written news is not as high as human-written news, GROVER is very skilled at rewriting propaganda: the overall trustworthiness score of propaganda increases from 2.19 to 2.42 (out of 3) when rewritten by GROVER.

Neural Fake News Detection using GROVER


The role of the Verifier is to mitigate the harm of neural fake news by classifying articles as human- or machine-written. Neural fake news detection is framed as a semi-supervised problem: the neural verifier (or discriminator) has access to many human-written news articles from March 2019 and before, i.e., the entire RealNews training set, but only limited access to generations and more recent news articles.

For example, 10k news articles from April 2019 are used for generating article body text, and another 10k articles serve as a set of human-written news articles; the combined set is split in a balanced way, with 10k for training, 2k for validation, and 8k for testing.
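A balanced split of this shape can be sketched as below. The function name and slicing scheme are illustrative assumptions; only the sizes (10k/2k/8k, with equal machine- and human-written halves) come from the article.

```python
def balanced_split(machine, human, sizes=(10_000, 2_000, 8_000)):
    """Split machine- and human-written articles into label-balanced
    train/validation/test sets of the given total sizes."""
    assert len(machine) == len(human) == sum(sizes) // 2
    splits, start = [], 0
    for n in sizes:
        half = n // 2
        # Take matching slices from each class so every split stays balanced.
        part = [(a, 1) for a in machine[start:start + half]] + \
               [(a, 0) for a in human[start:start + half]]
        splits.append(part)
        start += half
    return splits  # [train, validation, test]
```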

Detection is evaluated in two modes. In the unpaired setting, the verifier is given single news articles, which must be classified independently as Human or Machine. In the paired setting, the model is given two news articles with the same metadata, one real and one machine-generated; the verifier must assign the machine-written article a higher Machine probability than the human-written one. Both modes are evaluated in terms of accuracy.
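The two accuracy metrics can be sketched as follows, assuming the verifier outputs a Machine probability per article; the function names and threshold are illustrative, not from the paper.

```python
def unpaired_accuracy(scores, labels, threshold=0.5):
    """Unpaired setting: each article is classified independently as
    Machine (score above threshold) or Human, and compared to its label."""
    correct = sum((s > threshold) == (y == 1) for s, y in zip(scores, labels))
    return correct / len(labels)

def paired_accuracy(pairs):
    """Paired setting: each pair shares metadata; the verifier is correct
    when the machine-written article gets the higher Machine probability."""
    correct = sum(machine_score > human_score
                  for machine_score, human_score in pairs)
    return correct / len(pairs)
```

The paired metric never requires a calibrated threshold, only a correct ranking within each pair, which is one way to see why the paired setting turns out to be easier.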


Image Source: Defending Against Neural Fake News


It was found that the paired setting is significantly easier than the unpaired setting across the board, suggesting that it is often difficult for the model to calibrate its predictions. Second, model size is highly important in the arms race between generators and discriminators. Using GROVER to discriminate GROVER’s generations results in roughly 90% accuracy across the range of sizes. If a larger generator is used, accuracy slips below 81%; conversely, if the discriminator is larger, accuracy is above 98%. Lastly, other discriminators perform worse than GROVER overall, suggesting that effective discrimination requires an inductive bias similar to the generator’s.

Thus it has been found that GROVER can rewrite propaganda articles, with humans rating the rewritten versions as more trustworthy. At the same time, GROVER can also defend against such models. The researchers are of the opinion that an ensemble of deep generative models, such as GROVER, should be used to analyze the content of a text.

Unsurprisingly, the GROVER model has caught many people’s attention.

https://twitter.com/str_t5/status/1137108356588605440

https://twitter.com/currencyat/status/1137420508092391424

While some find this to be an interesting mechanism to combat fake news, others point out that it doesn't matter if GROVER can identify its own texts if it can't identify the texts generated by other models. Releasing a model like GROVER could turn out to be extremely irresponsible rather than defensive.

A user on Reddit says that “These techniques for detecting fake news are fundamentally misguided. You cannot just train a statistical model on a bunch of news messages and expect it to be useful in detecting fake news. The reason for this should be obvious: there is no real information about the label ('fake' vs 'real' news) encoded in the data. Whether or not a piece of news is fake or real depends on the state of the external world, which is simply not present in the data. The label is practically independent of the data.”

Another user on Hacker News comments that “Generative neural networks these days are both fascinating and depressing - feels like we're finally tapping into how subsets of human thinking & creativity work. But that knocks us off our pedestal, and threatens to make even the creative tasks we thought were strictly a human specialty irrelevant; I know we're a long way off from generalized AI, but we seem to be making rapid progress, and I'm not sure society's mature enough or ready for it. Especially if the cutting edge tools are in the service of AdTech and such, endlessly optimizing how to absorb everybody's spare attention. Perhaps there's some bright future where we all just relax and computers and robots take care of everything for us, but can't help feeling like some part of the human spirit is dying.”

Some users feel that this kind of ‘generate and detect your own fake news’ model will become unnecessary in the future: it is just a matter of time before text written by algorithms is indistinguishable from human-written text, at which point there will be no way to tell such articles apart. A user suggests, “I think to combat fake news, especially algorithmic one, we'll need to innovate around authentication mechanism that can effectively prove who you are and how much effort you put into writing something. Digital signatures or things like that.”

For more details about the GROVER model, head over to the research paper.

Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts

Speech2Face: A neural network that “imagines” faces from hearing voices. Is it too soon to worry about ethnic profiling?

OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence