Today's Editorial - 20 May 2023

The dark side of artificial intelligence

Source: Harini Calamur, The Free Press Journal

The world smiled affectionately at pictures of two former heads of state meeting for a stroll on the beach and having a relaxed day out. Pictures of Barack Obama and Angela Merkel, two leaders who enjoyed a great rapport when they governed their nations, went viral. The images were, of course, fake, generated by a visual artist called Julian. ‘Julian’ used to have a robust Instagram feed, but his account is now gone. You can’t generate fake images of world leaders without there being consequences.

This week, the world frowned, and the USA went on edge, as images of former President Trump being arrested went viral. Those images are violent, edgy and full of rage. These, too, are deepfakes. Deepfakes are realistic digital manipulations of images, audio and video that use machine learning to create convincing fake content.

While both artists declared the images were fake at the top of their collections, the pictures, seen on their own without that declaration, would certainly make viewers pause and wonder what is going on. For those who are invested in news, and in outraging over news, these kinds of manufactured images can push them right over the edge.

Conspiracy theorists will use both sets of images to further their agendas. The Merkel-Obama pictures will be used to suggest that the two leaders conspired to move the West in a particular direction, while the Trump images will be used to cast the State as the oppressor, and as a call to arms against it.

In the case of the Trump images, they are definitely fake. In a fast-moving scene of that kind, it would be extremely difficult for every picture to have every object (Trump, the cops, the batons) in complete focus, perfectly framed and perfectly lit. But most of us are not experts; most of us will see these images without the captions, on a WhatsApp feed or in an email.

As generative AI gains popularity, a world already assailed by fake news from politically motivated players has to guard against deepfakes that look real. Generating images of Merkel and Obama, or of Trump being arrested, involves sophisticated machine learning techniques. Deep learning and generative adversarial networks (GANs) are two popular methods used in this area. They allow a computer to learn from a vast dataset of images and generate new ones based on that learning. Such an AI would be trained on a large and diverse set of images of each person, covering a wide range of poses, facial expressions and lighting conditions to capture the essence of each person's appearance. This enables the AI to generate (almost) appropriate images for different situations.
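
To make the adversarial idea concrete, here is a deliberately tiny sketch of a GAN training loop in Python with PyTorch. The image size, network shapes and training settings are illustrative assumptions, not the recipe behind any of the viral images; real face-generation systems use far larger models trained on enormous datasets.

```python
# Minimal GAN sketch in PyTorch, purely illustrative.
# Image resolution, network sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64    # flattened 64x64 grayscale images (assumed size)
NOISE_DIM = 100      # latent noise vector fed to the generator

# Generator: maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores an image as real (1) or fake (0).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial step: the discriminator learns to spot fakes,
    the generator learns to fool it."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator on real and generated images.
    fake_images = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator so the discriminator calls its fakes "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, NOISE_DIM))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Usage sketch: feed batches of real face images, flattened and scaled to [-1, 1].
# train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

Even at this toy scale the core trick is visible: one network keeps trying to fool the other, and both improve until the fakes become hard to distinguish from the training data.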

Once there are enough images, and the AI is “trained” on the person, it can generate images of Merkel and Obama at the beach, or of Trump being arrested, by converting input data into an image. The input can be a textual description in normal conversational language: “Obama and Merkel by the beach having a day out” or “Trump resisting arrest”. The AI then generates an image and refines it through a process called “style transfer”, which blends the generated image with other images to make it look more realistic. This can work for both images and video.
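
In practice, this text-to-image workflow amounts to a few lines of code with an open-source model. The sketch below, again in Python, assumes the Hugging Face diffusers library and a public Stable Diffusion checkpoint (one of the tools named below); the checkpoint name, prompt and settings are illustrative assumptions, and actually running it means downloading the model weights and, realistically, having a GPU.

```python
# Illustrative text-to-image sketch using the Hugging Face diffusers library.
# The checkpoint name, prompt and parameters are assumptions for demonstration.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image diffusion model (weights download on first
# use) and move it to the GPU in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed public checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A plain conversational sentence is the only input the model needs.
prompt = "two world leaders taking a relaxed stroll on a beach, photorealistic"

# Generate and save one image; more steps and higher guidance trade speed
# for fidelity to the prompt.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("beach_stroll.png")
```

No understanding of the underlying model is required; the prompt does all the work, which is exactly why the barrier to producing convincing fakes has collapsed.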

Movies have been using this for a long time. For example, the Star Wars film ‘The Rise of Skywalker’ was able to “cast” Carrie Fisher, who had died before the film was made, in a tiny but pivotal role. That took millions of dollars of special effects. But today that power is available on your phone, and much of the software is free. Tools like Lensa, Midjourney and Stable Diffusion are putting the power of AI image generation in every pocket and on every computer.

While this may sound like fun and games, it has huge ethical implications. We are already hearing of deepfakes being used to create revenge pornography and to settle scores with former partners and others. There are genuine ethical concerns around using AI to generate images of people without their consent. And policy, as always, is lagging behind the rapid advances in technology.

Furthermore, the rise of deepfakes has created new challenges in the fight against fake news. Videos created using machine learning algorithms can manipulate someone's words and actions in a way that looks extremely convincing. In a nation with active IT cells that churn out fake news, they can be used to literally put words in the mouths of opponents. They can generate footage of dead people saying things they never said. Enemy nations can do the same to foment trouble across borders.

Deepfakes can be used to spread false information and misinformation on a massive scale, exacerbating the fake news pandemic. The consequences of this are serious and far-reaching. Not only can deepfakes be used to spread malicious lies, but they can also be used to discredit political opponents and to sow confusion and discord. Moreover, because deepfakes are so difficult to detect, they pose a significant threat to public trust in media and institutions.

Deepfakes are a double-edged sword, with the potential to transform media and also to wreak havoc on society. Policy and regulation around AI in general, and deepfakes in particular, are urgently needed to prevent the malicious use of AI-generated content, not only to protect individuals' rights to privacy and reputation, but also to prevent strife in increasingly polarized societies.
