The Vladimir Putin Body Double Rumor Won’t Die—And AI Makes It Seem Plausible

A segment aired by a Japanese news outlet has revived rumors that Vladimir Putin uses body doubles for public appearances, with generative AI supposedly used to make photos of those doubles look more like the real Russian president.

The persistent and unsubstantiated claim has circulated for months and resurfaced in Japan, according to a Daily Mail report. Japanese researchers reportedly analyzed video footage and photos of Putin at different events and suggested that at least three different people have played him. Although the Kremlin promptly dismisses such claims, the story illustrates a side effect of rapid progress in AI: increasingly convincing deepfakes.

A deepfake uses machine learning algorithms to swap faces in videos, images, and other digital content, making the fake appear real. Deepfakes have been recognized as a dangerous new tool for political manipulation, and detecting them has become a fast-growing industry, with OpenAI—a leader in generative AI—claiming it can identify a deepfake with 99% accuracy.
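
To make the face-swapping idea concrete, here is a deliberately naive sketch in Python using OpenCV. Real deepfakes train generative models (autoencoders or GANs) to synthesize a face frame by frame; the pixel-level copy and the file names below are illustrative assumptions only.

```python
# Naive face-swap sketch using OpenCV's bundled Haar cascade detector.
# Real deepfakes *synthesize* a face with a generative model; this crude
# pixel copy only illustrates the "replace a detected face region" idea.
import cv2

def naive_face_swap(src_path: str, dst_path: str, out_path: str) -> None:
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    src = cv2.imread(src_path)
    dst = cv2.imread(dst_path)
    src_faces = cascade.detectMultiScale(cv2.cvtColor(src, cv2.COLOR_BGR2GRAY))
    dst_faces = cascade.detectMultiScale(cv2.cvtColor(dst, cv2.COLOR_BGR2GRAY))
    if len(src_faces) == 0 or len(dst_faces) == 0:
        raise ValueError("no face detected in one of the images")
    sx, sy, sw, sh = src_faces[0]  # first detected face in the source image
    dx, dy, dw, dh = dst_faces[0]  # first detected face in the target image
    face = cv2.resize(src[sy:sy + sh, sx:sx + sw], (dw, dh))
    dst[dy:dy + dh, dx:dx + dw] = face  # paste with no blending or lighting fix
    cv2.imwrite(out_path, dst)

# Hypothetical usage: naive_face_swap("face_a.jpg", "face_b.jpg", "out.jpg")
```

The hard part that generative models solve, and that this sketch ignores, is matching pose, lighting, and expression so the result looks real rather than pasted on.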

But researchers say doing so is not easy—and will continue to get harder.

Japanese TV network TBS reportedly covered AI-based research that suggested Russian President Vladimir Putin is employing body doubles to attend various events. Image: Daily Mail

“There are attempted ways in probabilistic technology to determine if something’s AI,” Digimarc President and CEO Riley McCormack told Decrypt. “I think you’ve seen if you look at many of these methods, the best ones are pretty low accuracy.”

Launched in 1994, Digimarc provides digital watermarking that helps to identify and protect digital assets and copyright ownership. Watermarks are one of the more commonly suggested ways to identify deepfakes, and companies like Google are exploring the approach.

During Google Cloud Next in San Francisco, Alphabet and Google CEO Sundar Pichai unveiled a watermark feature in the Google Vertex AI platform.

“Using technology powered by Google DeepMind, images generated by Vertex AI can be watermarked in a way that’s invisible to the human eye without damaging the image quality,” Pichai said.
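
Google has not published how its watermark works, so as a stand-in, here is a minimal sketch of the general idea of an imperceptible mark: hiding a payload in the least significant bit of each pixel byte. The function names and payload string are assumptions for illustration, not Google's method.

```python
# Minimal sketch of an invisible watermark via least-significant-bit (LSB)
# embedding. This is NOT Google's technique, which is not public; it only
# shows how a mark can hide in pixel data without visibly changing the image.
import numpy as np
from PIL import Image

def embed_watermark(src: str, dst: str, payload: str) -> None:
    """Hide a UTF-8 payload in the least significant bit of each pixel byte."""
    pixels = np.array(Image.open(src).convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(payload.encode("utf-8"), dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, safe to modify
    if bits.size > flat.size:
        raise ValueError("payload too large for this image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    Image.fromarray(flat.reshape(pixels.shape)).save(dst, "PNG")  # lossless

def extract_watermark(src: str, payload_len: int) -> str:
    """Read payload_len bytes back out of the pixel LSBs."""
    flat = np.array(Image.open(src).convert("RGB"), dtype=np.uint8).flatten()
    bits = flat[:payload_len * 8] & 1
    return np.packbits(bits).tobytes().decode("utf-8")

# Hypothetical usage with an AI-generated image:
# embed_watermark("generated.png", "marked.png", "model-x-v1")
# print(extract_watermark("marked.png", len("model-x-v1")))
```

A naive LSB mark like this vanishes under JPEG compression, resizing, or a screenshot; building a watermark that survives such transformations is the genuinely hard problem companies like Digimarc work on, and even a robust mark only helps if generators adopt it widely.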

As a strategy, however, watermarking has a number of weaknesses. McCormack said it won’t be enough to stop AI deepfakes.

“The problem with solely doing Generative AI with watermarking is not every generative AI engine is going to adopt it,” he said. “The problem with any system of authenticity is unless you have ubiquity, just marking it one way doesn’t do anything.”

Because AI image generators are trained on images scraped from the internet and social media, other experts have advocated fighting back at the data level: embedding code that degrades an image once it reaches a generator, or planting a “poison pill” by mislabeling the data, so that a model trained on it cannot produce the desired image and may even collapse under the strain. A conceptual sketch of the poisoning idea follows.
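
For illustration, here is a toy sketch of that poisoning idea in Python. Research tools such as Nightshade compute optimized, model-aware perturbations; the uniform noise, hard-coded decoy captions, and file names below are assumptions, not any published method.

```python
# Toy sketch of the "poison pill" idea: lightly perturb an image and pair
# it with a deliberately wrong caption before posting it online, so that
# scraped image-text pairs corrupt a model's training data.
import numpy as np
from PIL import Image

def perturb_image(src: str, dst: str, noise_scale: int = 2) -> None:
    """Add low-amplitude random noise that is hard to see by eye."""
    rng = np.random.default_rng(seed=0)
    pixels = np.array(Image.open(src).convert("RGB"), dtype=np.int16)
    noise = rng.integers(-noise_scale, noise_scale + 1, size=pixels.shape)
    poisoned = np.clip(pixels + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(dst)

def decoy_caption(true_label: str) -> str:
    """Mislabel the image so the scraped image-text pair is poisoned."""
    decoys = {"dog": "a toaster", "cat": "a tractor", "person": "a lamppost"}
    return decoys.get(true_label, "an abstract painting")

# Hypothetical usage before uploading artwork:
# perturb_image("my_art.png", "my_art_poisoned.png")
# print(decoy_caption("dog"))  # caption to publish alongside the image
```

Enough corrupted image-caption pairs in a scraped training set can push a model’s concept of “dog” toward “toaster,” which is the kind of degradation these experts describe.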

AI deepfakes have already been created spoofing U.S. President Joe Biden, Pope Francis, and former President Donald Trump. With the technology improving by the day, AI detectors are left playing whack-a-mole to keep up.

Adding to the dangers of AI deepfakes is the rise of AI-generated child sexual abuse material (CSAM). A report released in October by UK internet watchdog the Internet Watch Foundation found that such material is spreading rapidly online, produced with open-source AI models that are free to use and distribute.

The IWF said the technology has advanced to the point where distinguishing AI-generated images of children from images of real children is increasingly difficult, leaving law enforcement chasing online phantoms instead of actual abuse victims.

“So there’s that ongoing thing of you can’t trust whether things are real or not,” Internet Watch Foundation CTO Dan Sexton told Decrypt. “The things that will tell us whether things are real or not are not 100%, and therefore, you can’t trust them either.”

Another issue McCormack raised is that many of the AI models on the market are open source and can be used, modified, and shared without restriction.

“If you’re putting a security technique into an open source system,” McCormack said, “you’re putting a lock there, and then right next to it, you’re putting the blueprint for how to pick the lock, so it doesn’t really add a lot of value.”

Sexton said he is optimistic about one thing when it comes to image generation models: as larger models like Stable Diffusion, Midjourney, and DALL-E improve and retain strong guardrails, older models that allow local creation of CSAM will fall into disuse. It will also become easier to tell which model created a given image, he said.

“You’ll get to the point where it’d be harder to do these things, much more niche to do these things, and potentially easier to target and prevent those harms from happening,” Sexton said.

While government leaders are a prime target for AI deepfakes, more and more Hollywood celebrities are finding their likenesses used in online scams and advertisements featuring AI deepfakes.

Looking to stop the unauthorized use of her image, Hollywood actor Scarlett Johansson said she is pursuing legal action against AI company Lisa AI, which used an AI-generated image of the Avengers star in an ad. Last month, YouTube giant MrBeast alerted his followers to an online scam using his likeness, and even Tom Hanks was the victim of an AI deepfake campaign that used his likeness to promote, of all things, a dental plan.

In September, Pope Francis, the subject of many AI deepfakes, made the technology the centerpiece of a holiday sermon for World Peace Day. The religious leader called for open dialogue on AI and what it means for humanity.

“The remarkable advances made in the field of artificial intelligence are having a rapidly increasing impact on human activity, personal and social life, politics and the economy,” Francis said.

As AI detection tools get better, McCormack said, AI deepfake generators will get better too, opening another front in the AI arms race that began with the launch of ChatGPT last year.

“Generative AI didn’t create the deepfake problem or the disinformation problem,” McCormack said. “It just democratized it.”
