Deep fakes threaten individuals: where humanity is heading

Today, social media is the main source of information. That information updates an individual's knowledge, and the concept of a global village is now easy to grasp. As we know, when something has a positive effect, our eyes also turn to its negative aspects. From a historical perspective, SixDegrees pioneered social media, which has since become a global symbol of communication. As stated above, the phenomenon has two sides, positive and negative, and the deep fake belongs to the negative one. The word "fake" refers to disinformation: the creation of rumours and questionable data that become part of public debate. Authorities or upper-level management may use such fake news to achieve their objectives, for example diverting people's attention from certain issues and advancing their own agenda. In other words, it deceives people about specific issues and attempts to sway their perspective. The topic gained enormous attention during the 2016 US presidential election.

The purpose of this article is to educate people about the effects of deep fakes on society. When a person views a fabricated video of a victim, their perception of that person changes. The latest wave of fake videos affects every sector of society, and people are experiencing feelings of insecurity because of how widespread deep fakes have become. To raise awareness of this concept, educators, religious leaders, and members of society must engage. I have provided a detailed explanation to enhance readers' understanding. This article aims solely to educate; it does not solicit funds or promote any individual. I hope it will interest all readers and motivate them to continue learning.

Keywords: deep learning, machine learning, artificial intelligence, generator, discriminator

Deep fakes are a new and concerning kind of threat that falls under the broader category of synthetic media. They use artificial intelligence and machine learning to produce convincing, realistic videos, images, audio, and text depicting events that never actually occurred. Deep learning is a subset of machine learning, and machine learning is a subset of artificial intelligence. In early 2014, deep learning techniques were already being employed in the field of gaming. Deep learning was intended to uncover rich, layered models that represent probability distributions over the many kinds of data used in artificial intelligence applications, such as natural images, speech audio waveforms, and symbols in natural-language corpora. The generative model can be likened to a team of counterfeiters trying to produce and use counterfeit currency without being detected, while the discriminative model can be likened to the police trying to identify the counterfeit currency.
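The counterfeiter-versus-police analogy above comes from the standard formulation of generative adversarial networks, in which the generator G and the discriminator D play a two-player minimax game. For readers who want the formal statement (this equation comes from the general GAN literature, not from a source cited in this article), it can be written as:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$

Here x is a real sample, z is a random noise vector, D(x) is the discriminator's estimated probability that x is real, and G(z) is the generator's fake sample.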

Figure 1: How a real image is converted into a deep fake.

How does a fake video work?

A well-known model is the generative adversarial network (GAN), an artificial intelligence technique that includes two neural networks, a generator and a discriminator, which work against each other to produce new data. GANs have been employed in several domains, including image generation, video prediction, and text-to-image synthesis. In 2017, the production of fake videos of celebrities began, and the Reddit website played a crucial role: user accounts were able to access deep fake videos of actresses and celebrities, with 99% of those videos falling into the pornographic category.
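To make the generator-discriminator interaction concrete, here is a minimal, hypothetical PyTorch sketch of one GAN training step on flattened images. The network sizes, learning rates, and random stand-in data are illustrative assumptions, not the setup of any system mentioned in this article.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not taken from the article).
NOISE_DIM, IMG_DIM = 64, 28 * 28

# Generator: maps a random noise vector z to a fake "image".
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update: the discriminator learns to spot fakes,
    then the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Discriminator step ("police"): classify real vs. generated samples.
    noise = torch.randn(batch, NOISE_DIM)
    fake_images = generator(noise).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step ("counterfeiter"): try to make fakes look real.
    noise = torch.randn(batch, NOISE_DIM)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Example usage with random stand-in data in place of real images:
train_step(torch.randn(32, IMG_DIM))
```

Repeating this step over many batches is what gradually makes the generated samples harder to distinguish from real ones.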

Figure 2: A GAN and its dimensions.

AI-generated text presents another deep fake challenge

AI-generated text is another type of deep fake and a growing challenge. While researchers have identified a number of weaknesses in image, video, and audio deep fakes that can be used for detection, deep fake text is not so easy to detect. Deep fake technology could even replicate a user's informal texting style. Deep fake media, including images, video, audio, and text, can simulate or alter a specific individual or their representation, and this is the primary threat posed by deep fakes. However, the threat encompasses not only deep fakes but also the entire field of "synthetic media" and its use in disinformation.
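One research heuristic for spotting machine-written text (a general idea from the detection literature, not a method described in this article) is that text generated by a language model often scores unusually low perplexity under a similar language model. A minimal sketch using the Hugging Face transformers library and GPT-2, with a purely illustrative threshold, might look like this:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2; lower values are often
    (but not always) a hint that the text was machine-generated."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

sample = "The quick brown fox jumps over the lazy dog."
score = perplexity(sample)
# The 50.0 cutoff below is purely illustrative, not a validated threshold.
verdict = "possibly machine-generated" if score < 50.0 else "likely human-written"
print(f"perplexity = {score:.1f} -> {verdict}")
```

Such a heuristic is easy to fool, which is exactly why deep fake text remains harder to detect than manipulated images or audio.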

Figure 3: How a GAN works.

How a public figure gets hurt

In 2023, the total number of deep fake videos online was 95,820, up 550% from 2019, according to a report by Home Security Heroes, a group that researches best practices for online security. Pornography made up 98% of them. Recently, a fabricated video of the famous Indian actress Rashmika Mandanna went viral; the Indian government and fellow celebrities stood with her. Another case involved the pop star Taylor Swift, whose image was created with an AI generator and went viral, drawing the attention of lawmakers keen to protect people's reputations and prompting legislators in several states to introduce bills against deep fake pornography, including Missouri's Taylor Swift Act. Other cases have also been reported, such as teenage boys at a New Jersey school accused of creating AI deep fake nudes of their female classmates. A famous YouTuber from Pakistan, Ducky Bhai, and his wife also became victims of fake videos. Numerous cases have been reported, and some of the victims have even taken their own lives.

Year | Deep fake videos online
2019 | 20,000
2020 | 40,000
2023 | 100,000
2024 | 500,000 (projected)

Table 1: Number of deep fake videos online

From "seeing is believing" to "seeing is not believing"

In a previous era, the adage "seeing is believing" was popular. With the advent of deep fake videos, however, this notion has shifted towards "seeing is not believing." One research report cites a 120% increase in deep fake videos between 2019 and 2023, and Table 1 above clearly shows the rising trend. Another study predicted that 500,000 such videos would be available online by 2024. You can imagine the significant shift in people's perception when they watch fake videos. It is therefore no longer accurate to say that "seeing is believing."

Figure 4: Examine things with a focused approach.

The impact of deep fake videos on individuals and society

Diana Riaza and Giacomo Livan's 2023 research article is titled "Public and private beliefs under disinformation in social networks," and in it they used the DHA framework. Misleading information about individuals becomes part of society as a whole, and deep fake videos create conspirators within it. The role of these conspirators is to spread the false information through society as the whole society learns it. Nor can we neglect the role of the debunker, who exposes false information and works to correct the misinformation being spread.
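To illustrate the conspirator and debunker roles described above, here is a small toy simulation. It is my own simplified sketch, not the model used in the cited paper, and the agent counts, mixing rule, and update weights are arbitrary assumptions.

```python
import random

# Toy illustration: agents hold a belief in a false claim on a scale from
# 0.0 (rejected) to 1.0 (fully believed) and average what their sampled
# neighbours state publicly.
random.seed(0)
N, ROUNDS = 100, 30
beliefs = [random.random() for _ in range(N)]
conspirators = set(range(0, 5))   # always broadcast belief = 1.0
debunkers = set(range(5, 10))     # always broadcast belief = 0.0

for _ in range(ROUNDS):
    stated = [1.0 if i in conspirators else 0.0 if i in debunkers else beliefs[i]
              for i in range(N)]
    for i in range(N):
        if i in conspirators or i in debunkers:
            continue  # fixed broadcasters never update
        neighbours = random.sample(range(N), 5)  # random mixing each round
        beliefs[i] = 0.5 * beliefs[i] + 0.5 * sum(stated[j] for j in neighbours) / 5

ordinary = [beliefs[i] for i in range(N) if i not in conspirators | debunkers]
print(f"average belief in the false claim: {sum(ordinary) / len(ordinary):.2f}")
```

Even in this crude sketch, a handful of persistent conspirators can pull the average belief of ordinary agents upward unless debunkers push back, which is the intuition the paragraph above describes.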

Deep fake videos at the international level

Here are some examples of famous public figures whose videos have been manipulated as deep fakes: Obama's BuzzFeed video, Jim Acosta's doctored clip, the Dalí Museum's recreation of Salvador Dalí, Mark Zuckerberg, Joe Rogan, the Queen's Christmas speech, and the Anthony Bourdain documentary. These are just examples of how videos of public figures have been fabricated, altered, and made to change viewers' perceptions. As we can see, the trend of making deep fake videos is more prevalent in the West than in the rest of the world. These videos spread all over the world, and people shared their views on social media. This is unethical; think about what an individual faces in life and how much psychological disturbance it causes, yet the bad actors still outnumber the good.

Figure 5: Individuals believe in society's perception.

Can you imagine how profoundly fake videos can change a government?

The role of the deep fake is harmful not only at the individual level but for every segment of society. In 2024, artificial intelligence is expected to play a crucial role in spreading extremist-driven election misinformation in Africa. The controversy over the Gabon president's video address, which many suspected was a deep fake, is an example of how public reaction can shake a government. Another example is the deep fake video clip of Volodymyr Zelenskyy, President of Ukraine, which carried a message telling soldiers to lay down their arms; the government of Ukraine quickly issued a statement that the video was a deep fake. Think how severe the consequences would have been if soldiers had acted on the advice in that fabricated clip.

A deep fake video conference scam in Hong Kong

In the case of the multinational company that lost HK$200 million to a deep fake video conference scam, a finance employee received a video conference call in which a senior officer instructed him to transfer money into certain accounts. The police stated that the victim was the only real person on the call; every other participant was a deep fake, and not a single conference member was genuine. This is therefore one of the largest known scams carried out through a video conference call.

China and the rest of the world have policies regarding synthetic video

A deep fake video can now be created within 1.5 minutes and cloned audio generated within 5 seconds, which shows how accessible the technology has become. A number of platforms provide facilities for adding such graphics online, yet policy regarding deep fakes still cannot keep up. If someone posts a deep fake video, the platform may have 36 hours to delete it; however, in the age of social media that is already too late, as the video will likely be on the public's phones by then. We can therefore conclude that the internet community has failed to effectively control the spread of deep fake videos. China's policy on deep fakes mandates obtaining the consent of the individual depicted before creating such a video, yet uploaded videos routinely lack that consent. The Department of Homeland Security has also issued a policy, but the growing trend indicates that it is ineffective.

How can I identify if this video is a deep fake? 

Based on our best research, we have identified a few indicators that can help a person recognise a deep fake video. These are represented in the figure below.

Figure: How can I identify if this video is a deep fake?
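As one concrete illustration of such an indicator, the sketch below applies error level analysis (ELA), a classical image-forensics heuristic, to a single image or video frame. This is my own illustrative example rather than a technique named in this article, the file names are placeholders, and a bright region in the output only suggests possible manipulation; it does not prove a deep fake.

```python
import io
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and amplify the difference; regions that stand
    out from the rest may have been pasted in or generated separately."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    max_diff = max(channel_max for _, channel_max in diff.getextrema())
    # Brighten the difference image so compression inconsistencies are visible.
    return ImageEnhance.Brightness(diff).enhance(255.0 / max(max_diff, 1))

# Example usage with a placeholder file name; inspect bright regions manually.
# error_level_analysis("suspect_frame.jpg").save("ela_result.png")
```

ELA works best on JPEG-style images and is only one signal; it should be combined with the visual cues listed in the figure above.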

The role of McAfee in detecting AI-generated scams is significant

As we know, McAfee Corporation is one of the leading companies working to detect AI-powered deep fake videos and audio. Audio scams typically target money and personal information. McAfee's chief technology officer claims that the company's tool has 90% accuracy in detecting audio scams, which helps protect privacy and personal information, but much more effort is still needed. In this era of AI, there should be continued research on deep fakes whose outcomes tell the public when they can feel safe on a video or audio call.

Figure 6: Detection of scam audio and video.

Education leads to awareness about deep fakes

Teachers are the main pillars at every level of education; they must educate students and play their part in making them aware of the concept of the deep fake. We cannot disregard the importance of religious scholars in educating society about deep fakes and inspiring their listeners to face reality. Many victims have contemplated or attempted suicide because their families did not support them; now that the situation is changing, we must move forward and join hands with victims to protect their lives. Without public education and awareness-raising, these video scams will continue to severely impact people's lives. Researchers must also play their role in developing tools that protect the privacy of people and their families.

Figure 7: Every individual plays their role.

I hope that this research will serve as a valuable resource for learning about deep fakes and deep synthesis techniques. I kindly request that you all spread this information so that we can play our part in protecting people's lives. I hope your support and effort on this pressing issue will help protect people.

Thanks in advance.

By

Dr. Abid Hussain Nawaz

Post Doc, Ph.D., MPhil

 
