As much as I love a deepfake Anakin Skywalker, it might be wise to get more AI-focused firewalls in place prior to the next federal election.
Many have raised red flags around AI (artificial intelligence), with numerous tech types having signed a statement referring to it as one of our greatest existential threats.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," reads the statement, shared by the Center for AI Safety.
I hadn't thought of deepfakes as being part of that threat, at least not until I recently started seeing ads that didn't sit right.
Deepfakes use a type of AI to create imagery and audio. If you're a Star Wars fan like myself, you got a taste of deepfake AI in Gareth Edwards' movie Rogue One, where it was used to recreate the likenesses of the late Carrie Fisher and Peter Cushing. Continuing on that theme, I love comedian/impersonator Charlie Hopkinson's YouTube videos in which he critiques Star Wars films while deepfaked to look like disheveled characters from the movies (his Anakin Skywalker and Obi-Wan Kenobi slay me).
Curiously, while watching these Hopkinson gems and other Internet content, I recently started seeing scam ads featuring likenesses of other real-life celebrities. One had Elon Musk rabbiting on about how you need to invest in, er, whatever. (Apparently, artificial Elons have been shilling get-rich schemes for a while now online.) Another scam circulating on social media features a deepfake of YouTuber Jimmy Donaldson, aka Mr. Beast.
"Lots of people are getting this deepfake scam ad of me… are social media platforms ready to handle the rise of AI deepfakes? This is a serious problem," Donaldson commented on X (formerly Twitter).
Regardless of what you think of Mr. Beast, he is right; it is a serious problem, and not just in the threat it poses to celebrities. Deeper, darker deepfake concerns have been raised around exploitation and intimidation.
And then thereѻýs politics.
This week, the Canadian government's Communications Security Establishment (CSE) issued an alert stating "cyber threat activity is more likely to happen in Canada's next federal election than in the past."
"Cyber threat actors are increasingly using generative artificial intelligence (AI) to enhance online disinformation. It's very likely that foreign adversaries or hacktivists will use generative AI to influence voters ahead of Canada's next federal election."
Though the statement doesn't speak specifically to deepfake AI, it is an increasingly accessible tech tool that I wouldn't be surprised to see used to manipulate perceptions and sway voters one way or another.
And in a country where, according to a recent Leger poll, 11 per cent of us believe the earth is flat, those hacktivists don't exactly have their work cut out for them.
While CSE assures it's doing what it can to protect Canada's democratic process, the Canadian Bar Association argues criminal law doesn't go far enough to protect the public from harm posed by targeted deepfakes.
Unfortunately, technology advances quickly, and unless a concentrated effort is made to put necessary protections in place, we're soon going to find the line between reality and deepfake much more difficult to discern, let alone agree upon.