The Canadian Security Intelligence Service (CSIS), Canada’s primary national intelligence agency, has raised concerns about the proliferation of disinformation campaigns that use artificial intelligence (AI) deepfakes across the internet.
Canada identifies the growing “realism of deepfakes,” together with the difficulty of detecting them, as a potential threat to its citizens. In its report, CSIS highlighted instances where deepfakes were employed to harm individuals.
“Deepfakes and other advanced AI technologies threaten democracy as certain actors seek to capitalize on uncertainty or perpetuate ‘facts’ based on synthetic and/or falsified information. This will be exacerbated further if governments are unable to ‘prove’ that their official content is real and factual.”
The report referenced Cointelegraph’s coverage of deepfakes featuring Elon Musk, particularly those targeting crypto investors.
Since 2022, malicious actors have used sophisticated deepfake videos to persuade unsuspecting crypto investors to part with their funds. Musk himself warned against deepfakes after a fabricated video of him promoting a cryptocurrency platform with unrealistic returns circulated on X (formerly Twitter).
CSIS cited privacy violations, social manipulation, and bias as additional concerns associated with AI. The agency advocates for governmental policies, directives, and initiatives to adapt to the realism of deepfakes and synthetic media:
“If governments assess and address AI independently and at their typical speed, their interventions will quickly be rendered irrelevant.”
CSIS recommends collaboration among partner governments, allies, and industry experts to counter the global dissemination of disinformation and protect the integrity of legitimate information.
In line with this, Canada solidified its commitment to address AI concerns with allied nations on Oct. 30, when the Group of Seven (G7) industrial countries agreed upon an AI code of conduct for developers.
The code, featuring 11 points, aims to promote “safe, secure, and trustworthy AI worldwide” while addressing and mitigating the risks associated with this technology.