Create a "lie detector" for deepfakes

Estimated read time: 6 min

Deepfakes are fabricated videos of real people, generated by artificial intelligence software in the hands of people who want to undermine our trust.

The images you see here are NOT actor Tom Cruise, President Barack Obama or Ukrainian President Volodymyr Zelenskyy, who in a fake video called on his compatriots to surrender.

Can you easily tell that these are NOT actor Tom Cruise, President Barack Obama or Ukrainian President Volodymyr Zelenskyy, but products of artificial intelligence software?

CBS News


Nowadays, deepfakes are becoming so realistic that experts worry about what they will do to information and democracy.

But the good guys are fighting back.

Two years ago, Microsoft Chief Science Officer Eric Horvitz, co-creator of the spam filter, began trying to solve this problem. “In five or ten years, if we don’t have this technology, most of what people will see, or a good part of it, will be synthetic. We won’t be able to tell the difference.”

“Is there an exit?” wondered Horvitz.

It turned out that a similar effort was underway at Adobe, the company that makes Photoshop. “We wanted to think about giving everyone a tool, a way to tell whether something is true or not,” said Dana Rao, general counsel and chief trust officer at Adobe.

Pogue asked, “Why don’t you just have your genius engineers develop software that can analyze video and say, ‘That’s a fake’?”

“The problem is that the technology to detect AI fakes is advancing, and the technology to create them is advancing,” Rao said. “And there will always be this horse race over who wins. And so, we know that from a long-term perspective, AI is not going to be the answer.”

The two companies concluded that trying to tell the real videos from the fakes would be an endless arms race. And so, Rao said, “We turned it around. Because we said, ‘What we really need is to provide people with a way to know what’s TRUE,’ instead of trying to catch whatever is fake.”

“So you’re not out to develop technology that can prove that something is a fake? This technology will prove that something is real?”

“That’s exactly what we’re trying to do. It’s a lie detector for photos and videos.”

Eventually, Microsoft and Adobe joined forces and designed a new feature called Content Credentials, which they hope will one day appear on all authentic photos and videos.

Here’s how it works:

Imagine scrolling through your social feeds. Someone sends you a photo of pyramids covered in snow, claiming that scientists found them in Antarctica, far from Egypt! A Content Credentials icon posted with the photo will reveal its history when clicked.

“You can see who took it, when and where they took it, and what changes were made,” Rao said. Without a verification icon, the user might conclude, “I think this person might be trying to trick me!”
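To make that concrete, here is a minimal sketch in Python of the general idea: a provenance manifest recording who took the photo, when, and what edits were made is signed by the creator, and a viewer later checks that the manifest still matches the file. This is only an illustration of the underlying technique (a signed edit history), not the actual Content Credentials/C2PA format; the field names and the create_manifest/verify_manifest helpers are invented for the example, and Ed25519 keys from the cryptography package stand in for the certificate-based signing a real deployment would use.

```python
# Toy illustration of the idea behind Content Credentials:
# a signed provenance manifest travels with an image, and a verifier checks
# that (a) the manifest matches the file bytes and (b) the signature is genuine.
# This is NOT the real C2PA format; fields and helpers are invented for the example.
import json
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def create_manifest(image_bytes: bytes, author: str, captured_at: str,
                    edits: list[str], private_key: Ed25519PrivateKey) -> dict:
    """Build a provenance record and sign it with the creator's private key."""
    record = {
        "author": author,            # who took it
        "captured_at": captured_at,  # when they took it
        "edits": edits,              # what changes were made
        # Hash of the file ties the record to this exact image.
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": private_key.sign(payload).hex()}


def verify_manifest(image_bytes: bytes, manifest: dict,
                    public_key: Ed25519PublicKey) -> bool:
    """True only if the record matches the file and the signature is genuine."""
    record = manifest["record"]
    if hashlib.sha256(image_bytes).hexdigest() != record["sha256"]:
        return False  # the image was altered after it was signed
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # the record itself was forged or tampered with


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    photo = b"...raw image bytes..."
    manifest = create_manifest(
        photo, "Example Photographer", "2023-05-01T12:00:00Z",
        ["cropped", "added snow with generative fill"], key)
    print(verify_manifest(photo, manifest, key.public_key()))               # True
    print(verify_manifest(photo + b"altered", manifest, key.public_key()))  # False
```

In the real standard the manifest is embedded in the media file itself and the signing key chains to a trusted certificate, but the viewer-side question is the same: does the signed history match the bytes on screen?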

Content credentials will help verify the authenticity of images by tracing their origins and any changes made to the image – for example, adding snow to a stock photo of the Giza pyramids.

CBS News


Already, 900 companies have agreed to display the Content Credentials button. They represent the entire lifecycle of photos and videos, from the camera that takes them (like Nikon and Canon) to the websites that display them (The New York Times, Wall Street Journal).

Rao said, “Bad actors won’t use this tool; they’ll try to trick you and they’ll make something up. Why don’t they want to show you their work? Why wouldn’t they show you what was real, what changes they made? Because if they don’t want to show you that, maybe you shouldn’t believe them.”

Now, Content Credentials aren’t going to be a silver bullet. Laws and education will also be needed, so that we the people can sharpen our nonsense detectors.

But within the next couple of years, you’ll start seeing this special button on photos and videos online – at least the ones that aren’t fake.

Horvitz said they are testing different prototypes that would indicate whether someone has tried to tamper with a video. “A gold symbol appears and says: ‘Content Credentials Incomplete,’ [meaning] back off. Be skeptical.”

The prototype’s “Content Credentials Incomplete” warning icon.

CBS News
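Continuing the toy sketch above, the behavior Horvitz describes amounts to a simple three-way decision on the viewer’s side. Here verify_manifest is the hypothetical helper from the earlier example, and the badge strings are only illustrative, not the product’s actual wording.

```python
# Viewer-side decision logic, continuing the toy sketch above.
# verify_manifest is the hypothetical helper from the earlier example;
# the badge strings are illustrative placeholders.
from typing import Optional


def credential_badge(image_bytes: bytes, manifest: Optional[dict],
                     public_key) -> str:
    """Map a verification outcome to the kind of signal a viewer would see."""
    if manifest is None:
        # No provenance attached at all: nothing to verify.
        return "No Content Credentials"
    if verify_manifest(image_bytes, manifest, public_key):
        return "Content Credentials verified"
    # Credentials are present but do not match the file: warn the viewer.
    return "Content Credentials Incomplete: be skeptical"
```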


Pogue said: “You mention media companies – New York Times, BBC. You mention software companies – Microsoft, Adobe – who are, in some areas, competitors. You say they’ve all laid down their arms to work together on something to save democracy?”

“Yes – groups working together across the entire ecosystem: social media platforms, IT platforms, broadcasters, producers and governments,” Horvitz said.

“So this thing could work?”

“I think this has a chance to make a dent. Potentially a big dent in the challenges we face, and a way for all of us to come together to meet this challenge of our time.”


Story produced by John Goodwin. Editor: Ben McCormick.


