Talk about serendipity.
In yesterday’s blog, I wrote about the advent of digital clothing, which gives people the chance to make it appear as if they are wearing a unique outfit, tailored just for them.
Well today on 60 Minutes, one of the segments was on synthetic media, also known as deepfakes.
I was hoping that fake news would begin to fade away, but it looks like it is only going to get worse. It will be harder and harder to know what’s “real”.
Here is one of the more famous examples of a deepfake:
After a few of these videos started to appear and people began wondering who was behind them, a modest 32-year-old Belgian visual effects artist named Chris Umé stepped forward to claim credit.
Umé says his work is made easier because he teamed up with a Tom Cruise impersonator whose voice, gestures, and hair are nearly identical to the real McCoy. Umé only deepfakes Cruise’s face and stitches that onto the real video and sound of the impersonator.
Umé notes that it begins with training a deepfake model. He gathers all the face angles of Tom Cruise: all the expressions, all the emotions. It takes time to create a really good deepfake model. The software then begins training, analyzing all the images of Tom Cruise and all his expressions, and comparing them to the impersonator. The computer teaches itself: when the impersonator smiles, the model recreates Tom Cruise smiling, and so on.
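To get a feel for what "the computer teaches itself" means here, consider a toy analogy (this is my illustration, not Umé's actual pipeline): treat each facial expression as a small feature vector and learn, from paired examples, a mapping that converts the impersonator's expression into the corresponding Tom Cruise expression. Real deepfake software learns this kind of mapping over images with deep neural networks; the sketch below just fits a linear map with least squares, and every name and number in it is made up.

```python
# Toy sketch of "learning a mapping between two faces" (illustrative only).
import numpy as np

rng = np.random.default_rng(42)

# Paired training data: each row is one expression (smile, frown, ...),
# each column is a made-up facial feature (mouth curve, eye openness, ...).
impersonator = rng.normal(size=(200, 4))   # impersonator's expression features
true_map = rng.normal(size=(4, 4))         # the unknown face-to-face mapping
cruise = impersonator @ true_map           # matching Cruise expression features

# "Training": recover the mapping from the paired examples.
learned_map, *_ = np.linalg.lstsq(impersonator, cruise, rcond=None)

# "Inference": a brand-new impersonator expression is converted into
# the corresponding Cruise expression.
new_expression = rng.normal(size=(1, 4))
predicted_cruise = new_expression @ learned_map
```

The deep-learning version of this idea works on pixels instead of four hand-picked features, but the principle is the same: given enough paired examples, the model learns how one face's expressions translate into another's.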
The U.S. military, law enforcement and intelligence agencies have kept a wary eye on deepfakes for years. At a 2019 hearing, Senator Ben Sasse of Nebraska asked if the U.S. is prepared for the onslaught of disinformation, fakery, and fraud.
Ben Sasse: When you think about the catastrophic potential to public trust and to markets that could come from deepfake attacks, are we organized in a way that we could possibly respond fast enough?
Dan Coats: We clearly need to be more agile. It poses a major threat to the United States and something that the intelligence community needs to be restructured to address.
The technology behind deepfakes is artificial intelligence, which mimics the way humans learn. In 2014, researchers for the first time used computers to create realistic-looking faces using something called “generative adversarial networks,” or GANs.
In a GAN, you set up an adversarial game in which two AIs compete to create the best fake synthetic content. As the two networks battle each other, one trying to generate the best image and the other trying to detect where it could be better, you end up with output that keeps improving over time.
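The adversarial game can be sketched in a few lines of code. This is a deliberately tiny toy (my own illustration, not any real deepfake system): the "generator" is a single affine map that learns to turn noise into samples resembling a target bell curve centered at 4, while the "discriminator" is a logistic classifier trying to tell real samples from generated ones. All names and numbers are made up for the demo.

```python
# Toy 1-D GAN in plain NumPy (illustrative sketch, not production code).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: noise z -> sample x = g_w * z + g_b.
g_w, g_b = rng.normal(), 0.0
# Discriminator: D(x) = sigmoid(d_w * x + d_b), probability that x is "real".
d_w, d_b = rng.normal(), 0.0

lr = 0.05
for step in range(3000):
    x_real = rng.normal(4.0, 1.0, size=64)   # real data: N(4, 1)
    z = rng.normal(size=64)
    x_fake = g_w * z + g_b                    # generator's current fakes

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_w * x_real + d_b)
    p_fake = sigmoid(d_w * x_fake + d_b)
    d_w += lr * np.mean((1 - p_real) * x_real - p_fake * x_fake)
    d_b += lr * np.mean((1 - p_real) - p_fake)

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    p_fake = sigmoid(d_w * (g_w * z + g_b) + d_b)
    grad_x = (1 - p_fake) * d_w               # gradient of log D(x) w.r.t. x
    g_w += lr * np.mean(grad_x * z)
    g_b += lr * np.mean(grad_x)

# After training, generated samples should cluster near the real mean of 4.
samples = g_w * rng.normal(size=1000) + g_b
```

Scale this loop up from one number to millions of pixels and from affine maps to deep networks, and you have the machinery behind fake faces.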
The power of generative adversarial networks is on full display at a website called “ThisPersonDoesNotExist.com”. Every time you refresh the page, it generates a brand-new image of a person who does not exist; click the link and see how realistic these fake people look.
Synthesia, based in London, is one of dozens of companies using deepfake technology to transform video and audio production. Synthesia essentially replaces cameras with code, allowing it to do a lot of things that you wouldn’t be able to do with a normal camera. It’s still very early, but some people believe this will fundamentally change how media is created. Synthesia makes and sells “digital avatars,” using the faces of paid actors to deliver personalized messages in 64 languages… and it lets corporate CEOs address employees overseas.
You can try it for free; the free version does not allow you to choose your avatar, but you can have an avatar deliver a customized message. Here is what I came up with:
Can you imagine if I could have Barack Obama deliver such a message or Bruce Springsteen? I’d have to hire a staff of people to help me with my blog…
So it is a whole new world out there, and it will get harder and harder to know what is real.
But you can always count on Borden’s Blather being the real deal; no one would want to associate a fake account with it…
You can watch the whole 60 Minutes segment by clicking here.
*image from NY Post