Deepfakes are being used to dub adverts into different languages

Deepfake is an artificial-intelligence-based technology used to create or manipulate video content so that it appears to show something that never actually happened. The name comes from a Reddit user known as “deepfakes” who, in December 2017, began editing the faces of celebrities onto performers in pornographic films.

More precisely, deepfakes are manipulated videos or pictures produced by sophisticated artificial intelligence, yielding fabricated images and sounds that appear to be real. As a cybersecurity expert at the think tank New America put it: “The technology can be used to make people believe something is real when it is not.”

Hao Li, a leading deepfake artist, came to an alarming realization about a year ago: the technology is developing rapidly. Li believes that, before long, deepfakes will be completely undetectable. That is raising security concerns as the artificial intelligence behind deepfakes becomes more widely available and falls into the hands of malicious actors.

Li has also seen the positive side of the technology as computer graphics evolve, particularly for entertainment, and he has worked on several prestigious deepfake applications. He was in charge of putting Paul Walker into Furious 7 after the actor died before the film finished production, and he helped build the facial-animation technology that Apple now uses in the iPhone’s Animoji feature.

“I believe it will soon get to a point where it isn’t possible to detect if videos are fake or not,” Hao Li said.

How Does Deepfake Tech Work?

A deep-learning system can create a convincing forgery by studying pictures and videos of a target person from many different angles, then imitating their behavior and speech patterns.

Deepfakes take many forms, from audio fakes that imitate someone’s voice to face swaps that graft one person’s face onto another’s in video footage.

Audio and video deepfakes rely on a technology known as generative adversarial networks (GANs), which pit two machine-learning models against each other. One model, the generator, learns from a dataset and produces fake footage; the other, the discriminator, tries to detect the forgery. The two keep competing until the discriminator can no longer tell the fakes from the real thing.

GANs are a deep-learning approach to generative modeling. The word “generative” points to a GAN’s ability to create something on its own. But how can a program make something of its own? By using machine learning, it can learn from previous data.

That means if we feed a GAN tons and tons of images or videos, it can create a unique image or video of its own. The discriminator keeps spotting flaws in the forgery, which pushes the generator to fix them. Making a deepfake is becoming progressively simpler; making a good deepfake is another matter. The more powerful the computer and the graphics card, the better the results.
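To make the generator-versus-discriminator loop concrete, here is a minimal GAN training sketch in PyTorch. It is illustrative only: the tiny models, the 64×64 image size, and the random placeholder data are assumptions standing in for a real deepfake pipeline.

```python
# Minimal GAN sketch (PyTorch). Illustrative only: tiny models and random
# placeholder tensors stand in for a real image dataset.
import torch
import torch.nn as nn

IMG_DIM = 64 * 64   # flattened 64x64 grayscale image (assumed size)
NOISE_DIM = 100     # random noise vector fed to the generator

# Generator: turns random noise into a fake "image".
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, IMG_DIM) * 2 - 1   # placeholder for real images
    fake = generator(torch.randn(32, NOISE_DIM))

    # 1) Train the discriminator to tell real from fake.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

Real deepfake systems use far larger convolutional networks and huge face datasets, but the adversarial loop is the same: the discriminator’s feedback is exactly the “spotting flaws” step described above.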

Possible Dangers of Deepfake

Deepfakes can be used for cyberbullying, slander, and blackmail. In 2017, deepfake pornography with celebrities’ faces swapped onto adult performers became a big thing and was shared on Reddit and many pornography sites. The more scandalous something is, the more people want to share or click it. If someone shared a pornographic image or video of you with your friends and family, it wouldn’t matter whether it was fake or real; the damage would already be done.

The biggest worry is the use of deepfakes to cause widespread civil unrest, especially by China, whose AI capabilities rival those of the United States and Russia. In a modern version of wartime propaganda posters, foreign adversaries could use deepfakes to stoke fear inside Western democracies.

Deepfakes also have an impact on cybersecurity. Given the prevalence of phishing scams, it is not hard to imagine that we will soon see deepfakes in which a company CEO asks employees for their passwords or other sensitive information.

The AI company Dessa made a deepfake audio clip of comedian and UFC commentator Joe Rogan making unusual statements.

In the clip, Joe Rogan says he wants to form a chimp hockey team, claims he is trapped in a robot, and ponders whether life is a simulation. The voice isn’t actually the host of The Joe Rogan Experience, however. It is an AI-generated voice that sounds like the comedian and podcaster, Jersey accent and all.

In a post about the work, the company writes that engineers Hashiam Kadhim, Joe Palermo, and Rayhane Mama generated the speech from text inputs, and that the method is capable of producing a replica of anyone’s voice.

Programs like Lyrebird and Modulate.ai also need audio, even a short sample, to train their algorithms to duplicate a real voice precisely.
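To give a rough sense of what “training on audio” involves, the sketch below extracts log mel spectrograms, the acoustic features most neural text-to-speech and voice-cloning systems learn from. This is a generic preprocessing step using librosa, not Dessa’s unpublished method; the file name and parameter values are assumptions.

```python
# Sketch: converting a voice sample into log mel-spectrogram features,
# the usual training input for neural text-to-speech / voice cloning.
# "voice_sample.wav" is a hypothetical file name.
import librosa
import numpy as np

audio, sr = librosa.load("voice_sample.wav", sr=22050)  # load and resample

mel = librosa.feature.melspectrogram(
    y=audio, sr=sr,
    n_fft=1024, hop_length=256, n_mels=80,  # common TTS settings
)
log_mel = np.log(np.clip(mel, 1e-5, None))  # log-compress, as TTS models expect

print(log_mel.shape)  # (80 mel bands, number of frames)
```

A voice-cloning model is then trained to map text (or a speaker embedding plus text) to frames like these, and a separate vocoder turns the frames back into a waveform.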

Even so, the quality of Dessa’s Rogan dupe stands out. In the announcement blog post, the Dessa researchers say they won’t publicly release details of how the algorithm works for now, but they promise to post a technical overview in the coming days.

China’s popular face-swapping app has raised worries about how convincing fabricated but realistic-looking videos can be.

The Zao app, whose name is the sound of a Chinese character meaning “to create,” was developed by Momo. After its release, Zao quickly jumped to the top of the free-download charts on both the Android and iPhone app stores in China. Before long, however, users began to criticize Zao for its weak data-privacy protections, and WeChat reportedly banned users from sharing videos created with the app. In response, Zao changed its privacy policy.

Zao also said the face-swap effect is created by a technical overlap, which the company explained means the machine-generated images are calculations rather than integrations of actual facial data.

The company added that once a user deletes an account, it will follow the “required rules and laws” in handling that user’s information. However, it was not clear whether that meant the data would be completely wiped, or whether the new terms also applied to previously uploaded content.

A deepfake of Facebook CEO Mark Zuckerberg, created as a political art installation, also went viral when it was posted.

What are the solutions?

There have generally been two approaches to the problems created by deepfakes: use technology to detect fake videos, or improve media literacy.

The tech solution is to try to detect deepfakes using the same AI techniques that are used to make them. The US Defense Advanced Research Projects Agency (DARPA)’s Media Forensics program awarded SRI three contracts for research into the best ways to automatically detect deepfakes. Researchers at the University at Albany also received funding from DARPA to study deepfakes. That team found that examining blinks in videos could be one way to tell a deepfake from an unaltered video, because there are relatively few photographs of celebrities blinking for the forgery models to learn from.
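As an illustration of the idea (not the Albany team’s actual method), a common baseline for measuring blinks is the eye aspect ratio (EAR) computed over facial landmarks. The sketch below uses dlib’s 68-point landmark predictor to count eyes-closed frames in a clip; the video path, the model file, and the 0.2 threshold are assumptions.

```python
# Sketch: blink detection via the eye aspect ratio (EAR) on dlib's 68-point
# facial landmarks. A long video with almost no blinks is a deepfake cue.
# The .dat model path is a placeholder; the file ships separately from dlib.
import cv2
import dlib
from scipy.spatial import distance

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

RIGHT_EYE = range(36, 42)  # landmark indices for the right eye
LEFT_EYE = range(42, 48)   # landmark indices for the left eye

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops toward 0 when closed.
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

cap = cv2.VideoCapture("suspect_video.mp4")  # hypothetical input file
closed_frames, total_frames = 0, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    total_frames += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        ear = (eye_aspect_ratio([pts[i] for i in LEFT_EYE]) +
               eye_aspect_ratio([pts[i] for i in RIGHT_EYE])) / 2.0
        if ear < 0.2:  # typical closed-eye threshold
            closed_frames += 1
cap.release()
print(f"eyes-closed frames: {closed_frames}/{total_frames}")
# Zero eyes-closed frames over minutes of footage is suspicious: people blink.
```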

This is undoubtedly important research. But even if a video can be detected as a fake, what then?

There are already plenty of widely shared videos that use ordinary editing, not deepfake technology, to spread misinformation. A deepfake might be more convincing, but if you already believe the message being presented, you are not looking for signs that the video is fake.

So, another solution to deepfakes needs to be found: increasing media literacy so that people can spot fake news when they see it. People who share this material are also part of the problem. Individuals and the press should all have tools readily available to quickly test media they suspect of being fake. Policing content should be put in the hands of individuals, who should be able to identify instantly whether or not something they are seeing or sharing is real.

One important step would be the implementation of laws prohibiting certain deepfake content. For individuals, suing is not only expensive but also difficult: the accused would not only have to be identified but also located within the prosecuting jurisdiction.

 

Major corporations have also started to fight deepfakes. Amazon will donate $1 million in AWS credits to researchers over the next two years. AWS is also working with Deepfake Detection Challenge (DFDC) partners to host large deepfake-detection datasets on its cloud service, using Amazon S3’s scalable infrastructure.

Facebook announced the Deepfake Detection Challenge program in partnership with Microsoft and academic institutions such as the University of Oxford, MIT, and Cornell Tech.


It’s also worth pointing out that deepfakes might present opportunities as well as plenty of problems. While giving political speeches is unlikely to be one of the jobs that AI eradicates, the obviously faked nature of many deepfakes will give rise to a more general skepticism about the things we read and see online.

We should also thank deepfakes for making us realize, once again, that we should not take everything we see and hear at face value.
