Tuesday, Dec 24th

Deep Fakes are a Threat to Us All


Bully: “one who is habitually cruel, insulting, or threatening to others who are weaker, smaller, or in some way vulnerable” (Merriam-Webster).

We are all familiar with the stereotypical image of a bully: aggressive jocks and “mean girls.” But with the rise of social media, bullying has moved from personal to virtual. Social media has enabled not only cyberbullying but also the defamation of celebrities and politicians. Sameer Ahuja, a Scarsdale Village Trustee and the President of sports tech company GameChanger, worries that improvements in deep fakes (digitally fabricated images or videos of real people) will only intensify these problems.

Ahuja writes a newsletter, Consume at Once, that discusses how technology is transforming how we create and consume movies, videos, TV, games, sports, and social media. He recently wrote “Deep Fakes are a Threat to Us All,” which identifies AI’s societal implications and steps we can take to combat its effects:

"Imagine you spent your whole life preparing for a career in politics. 

As a child you immersed yourself in the stump speeches of famous Presidents and Senators. You studied arduously from undergrad to law school, familiarizing yourself with not just civics, but the arcane minutiae of Congressional committees and bill passage. 

Finally, you find yourself in D.C., working towards your big break as a staffer for one of our country’s leaders. By now, you’ve paid your dues, maintained a sterling reputation, and scrubbed anything on the web that could be deemed even remotely problematic. 

And then 'it' comes out. 

And by it, I mean a video drops online. It depicts you inebriated one night out in college. In the clip, you say and do inappropriate things. Your heart races as you imagine all the pundits discussing this humiliation on CNN. You can visualize the memes they’ll create about you, demolishing any potential you might have had.

Yet the more you stare at this video, the more it occurs to you: this isn’t you. Your face seems doctored, and your voice isn’t quite right. But here’s the really bad part: no one will believe this isn’t you when you try to tell them. 

It looks so real. It sounds so real.

You scramble to think what to do next, but you can’t. Not with all the text messages flying in. 

'COULD THIS HAPPEN TO YOU?'

It wasn’t long ago that parents strongly emphasized limiting what their kids put on the Internet. (“Be careful what you post to Facebook! What might a future employer say?”) But does any of that matter if anyone with a Wi-Fi signal can produce a convincing deep fake and wreck your life? 

This is the world we are entering as AI becomes highly realistic. And it’s just the beginning.

Consider this: if the tech to create deep fakes was already good a couple of years ago, back when it was used to impersonate Tom Cruise, imagine what it’s capable of in 2023, the year AI went mainstream.

Zooming out to better grok the societal implications, it’s fair to say every innovation comes at a cost. Fire made our food taste better; it also exposed us to burns, death, and mass destruction. Likewise, the nuclear bomb is credited with ending World War II, and with killing between 130,000 and 230,000 people.

Our digital age, especially the Attention Economy, offers similar blessings and curses. AI gives us unprecedented abilities, especially to produce (ersatz) content. Yet as Spider-Man teaches us, “With great power comes great responsibility.” 

So what can we do? 

It’s clear the law hasn’t caught up to the speed of innovation. Certainly, penalties could be levied against the perpetrators and disseminators of deep fake content. Yet the sheer volume of content being produced makes it nearly impossible to track down and prosecute each offender. 

No. Combating this issue, wherever it pops up in the coming years, requires the human touch.

Now that AI has gone mainstream, we must have a long-overdue national discussion, both on this issue and on AI in general.

Beyond building awareness of the problem, it’s past time we use a public forum to establish ethical guidelines. These should be designed to protect individuals from harm and ensure rights are respected. Victims especially must be at the forefront of our discussions, not simply an afterthought. 

One possible solution is to create a 'golden rule' outlining ethical principles to abide by when creating or sharing online content. Among other things, it should emphasize consent and respect for privacy. 

(Yes, I’m aware it sounds naïve to propose something voluntary.) To build momentum for my idea, I suggest giving it teeth by encouraging a groundswell of influencers to support it—creating positive peer pressure. 

There’s another consideration worth exploring: our cultural zeitgeist. 

On this point, production expectations are another double-edged sword. Here’s why. For more than a decade, AI-based algorithms have encouraged intrepid creators to pump out as much content as possible. As we know, YouTube and Instagram reward quantity and frequency.

Now for the flip side.

The more any of us put ourselves out there digitally, the more vulnerable we become. (As the story at the top of this article dramatizes.)

Already, music from popular artists like Drake has been imitated with surprising success. Of course, politicians are also particularly vulnerable. Presidents Joe Biden and Donald Trump’s likenesses have been replicated ad nauseam. 

Put simply, an increased digital footprint raises chances for imitation. This is especially horrifying for the millions who built livelihoods off content creation. But it’s even worse for 'civilians'—such as families who have been ransomed by deep fake kidnappers claiming to have their child.   

Fortunately, growing concern offers a foothold to turn this around. 

Last year, the White House introduced a Blueprint for an AI Bill of Rights aimed at 'Making automated systems work for the American people.' Simultaneously, the EU drafted the AI Act, designed to mitigate privacy challenges and establish regulations for automated systems. 

Another positive? Many online platforms are community-driven.

Thanks to years of putting out authentic content, creators often have the benefit of enormously supportive communities. So while salacious stories about AI produce entertaining headlines, we can look to good old-fashioned humanity to counter digital threats.

For now, deep fakes aren’t going anywhere. 

What was once largely kept in the shadows—relegated to the fringes of the internet—has gone mainstream. (Pseudo) content proliferation is here to stay.

On the flip side? 

Authenticity and deeper human connection will become ever more valuable commodities going forward. Here’s to navigating our brave new world together, one day at a time."

Sameer Ahuja