Today's Editorial

16 January 2019

The challenge of detecting fake content

Source: Rahul Matthan, Mint

The information revolution has dramatically disrupted the news industry. Where we used to rely on media organisations to distribute news and other information, we now get our content from multiple sources too numerous to name. While this has given us access to a far wider range of information, it has also had more than a few unintended consequences.

Media organisations have always been the gatekeepers of content. Regulated, as they are, by codes of ethical conduct and laws that impose penalties for putting out fake content, most media organisations have safeguards in place to ensure the veracity of the content they disseminate. This is why, even though it has long been possible to doctor images convincingly, we rarely see fake photographs in the news: journalists take care to ensure the accuracy of what they report.

All of this is changing as we speak. Internet distribution has ensured that much of the content that reaches us does so without first passing through traditional gatekeepers. Thanks to this cascade of information, we no longer have the bandwidth to verify the news we receive; it is far easier to rely on what we are told than to check it, even when the information contradicts what we believe to be true. As a result, we believe without question much of what is passed on to us through social media. As more and more people recklessly share information, falsehoods are now accorded the same level of seriousness as facts.

At the same time, digital impersonation has improved to the point where it is possible to generate entirely believable digital fakes in virtually any medium. Technologies like generative adversarial networks (GANs) pit two neural networks against each other: one generates a fake image, while the other evaluates whether that image passes for a real one. Working iteratively and in tandem, the two networks can produce digital fakes that are virtually impossible to detect.
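To make that mechanism concrete, here is a minimal sketch of an adversarial training loop in PyTorch. The network shapes, stand-in data and hyperparameters are illustrative assumptions, not anything from the article; the point is only the structure: the discriminator learns to separate real from fake, and the generator learns to fool it.

```python
# Minimal GAN sketch: generator vs. discriminator (illustrative only).
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 784  # assumed sizes, e.g. flattened 28x28 images

# Generator: turns random noise into a fake "image".
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
# Discriminator: scores how likely an image is to be real.
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

# Stand-in for a batch of real data, scaled to [-1, 1] to match Tanh output.
real_images = torch.rand(32, IMG_DIM) * 2 - 1

for step in range(1000):
    # 1. Train the discriminator to tell real from fake.
    fakes = G(torch.randn(32, LATENT_DIM)).detach()
    d_loss = (loss_fn(D(real_images), torch.ones(32, 1)) +
              loss_fn(D(fakes), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to make the discriminator say "real".
    g_loss = loss_fn(D(G(torch.randn(32, LATENT_DIM))), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

Each pass makes the discriminator a slightly better critic and the generator a slightly better forger, which is why the quality of the fakes keeps ratcheting upward.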

As a result, it is now possible for digital manipulators to put words into the mouths of public personalities and to generate, on the fly, video footage that seems completely genuine when it is in fact entirely made up. This technology is called deep fakes, and while at present it is primarily used by the porn industry to generate fake celebrity videos, it is not hard to imagine how these techniques, in the hands of the unscrupulous, could be used for extortion, defamation and false propaganda.

Both these phenomena are about to combine with devastating effect. We have already begun to see an increase in the frequency of information cascades that bypass the gatekeepers of truth, and the effect this is having on our credulity. At the same time, new technologies capable of generating fiction that is indistinguishable from fact are becoming viable enough to be deployed widely.

Governments around the world are dealing with this problem by cracking down on the platforms through which content spreads. However, it is unclear to me what these platforms are expected to do. The really good deep fakes are indistinguishable from the truth, and it will be virtually impossible for platforms, on their own, to tell one from the other. In theory, the same neural network technologies that created deep fakes could be used to develop forensic techniques that detect fake content. However, most experts agree that this is easier said than done: to evade detection, the deep fake technologies need only be trained against exactly the signals the new forensic techniques look for, and they will very quickly learn to sidestep those measures.
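A rough sketch of why this cat-and-mouse game favours the forger, assuming (purely for illustration) that the forensic detector is itself a differentiable neural network the attacker can obtain or query: the detector's own output becomes a training signal for the generator. All names and sizes below are invented.

```python
import torch
import torch.nn as nn

# Hypothetical forensic detector: outputs the probability an image is fake.
detector = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                         nn.Linear(128, 1), nn.Sigmoid())

# A deep-fake generator being fine-tuned to evade that specific detector.
generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Tanh())
opt = torch.optim.Adam(generator.parameters(), lr=2e-4)

for step in range(500):
    fakes = generator(torch.randn(32, 64))
    # The loss is exactly the detector's suspicion: driving it down
    # trains the generator to produce fakes the detector scores as real.
    loss = detector(fakes).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Any detector that is published, leaked or even repeatedly queryable can be folded into the forger's training loop in this way, which is why detection alone is a fragile defence.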

If our reality can be so easily distorted, perhaps the answer lies in finding a non-repudiable way to establish the truth. One approach could be to create immutable life logs that prove, beyond the shadow of a doubt, what actually happened in a given person's life from minute to minute. It should be possible to achieve this using technologies that are already around us: a combination of wearables, blockchain, cloud computing, remote sensing and the like. By combining inputs from multiple sources in a tamper-proof format, it should be possible to establish a record of life that can rebut fake news.
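As a minimal illustration of what "tamper-proof" could mean here, the sketch below keeps a hash-chained log in plain Python, in the style of a blockchain: each entry commits to the hash of the previous one, so altering any past record breaks every later hash. The entry fields are invented, and a real system would add digital signatures, trusted timestamps and replication across machines.

```python
import hashlib
import json
import time

def append_entry(log, payload):
    """Append a tamper-evident entry that commits to its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"time": time.time(), "payload": payload, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute every hash; any edit to an earlier entry is detected."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("time", "payload", "prev")}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

life_log = []
append_entry(life_log, {"source": "wearable", "event": "heart rate 72, at home"})
append_entry(life_log, {"source": "phone GPS", "event": "arrived at office"})
assert verify(life_log)                              # intact chain verifies
life_log[0]["payload"]["event"] = "somewhere else"   # try to rewrite history
assert not verify(life_log)                          # tampering is detected
```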

To make all this more efficient, the immutable life record could be made accessible through APIs (application programming interfaces), so that social media services and other platforms that disseminate content can dynamically verify the content they carry against the true record of a given person's life. Where the content matches the life log, platforms can continue to carry it. Where it does not, they will be able to prevent such content from painting a false picture of an individual's life.
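A hypothetical sketch of the kind of check such an API might expose; the function name, fields, tolerance and matching rule are all invented for illustration, and `fetch_log` stands in for an authenticated call to the life-log service.

```python
def verify_against_life_log(person_id, claimed_event, claimed_time, fetch_log):
    """Return True if the person's life log corroborates the claim."""
    for entry in fetch_log(person_id):
        if (entry["event"] == claimed_event
                and abs(entry["time"] - claimed_time) < 300):  # 5-min window
            return True
    return False

# Usage: before letting a clip spread, a platform checks whether the
# person it depicts was actually giving a speech at the claimed time.
stub_log = lambda pid: [{"event": "speech_at_rally", "time": 1_600_000_000}]
ok = verify_against_life_log("x42", "speech_at_rally", 1_600_000_120, stub_log)
print("carry content" if ok else "flag as unverified")
```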

The trouble with this approach is that it entails a considerable sacrifice of privacy. Comprehensive life logs, if compromised, could have a devastating effect on an individual's personal life. That said, given the alternative, this is a bargain that public personalities, who otherwise have a lot to lose, might consider worth making.

As we approach the coming elections, we can expect a dramatic uptick in fake news as different political factions vie for our favour. We should be prepared for a good proportion of it to be based on deep fakes incapable of being distinguished from the truth.

 
