Welcome to the AI Incident Database
Incident 1325: Reported AI-Generated Deepfake Videos Impersonating Elon Musk and Dragon’s Den Allegedly Used in Cryptocurrency Investment Scam Targeting Canadian Victims
“Two Canadians lose $2.3 million in AI deepfake crypto investment scam”
Two Canadians from Ontario and Prince Edward Island lost a combined $2.3 million to an AI-enabled deepfake cryptocurrency investment scheme. The Ontario victim lost $1.7 million after being deceived by a fake Elon Musk video, while the other victim lost $600,000 after watching a clip falsely linked to Dragon's Den.
According to a W5 report updated on December 21, 2025, at 12:31 PM UTC, the two victims, from Markham, Ontario, and Prince Edward Island, were persuaded by AI-generated videos and fabricated dashboards that small deposits were producing real profits.
How victims were lured by AI imposters
A 51-year-old woman from Markham saw a Facebook clip that appeared to feature Elon Musk discussing a crypto opportunity. She sent an initial $250 and, two days later, was shown a $30 gain, which encouraged further deposits and trust in documents that looked official.
"I applied for almost a million dollars on the equity of my home. I took it out and I started sending it to them. Right? Like $350,000 and then $350,000." --- Ontario victim, Markham
Scammers later displayed a balance of $3 million and demanded taxes and fees before any withdrawal. To cover those costs, she borrowed $500,000 from family and friends and maxed out credit cards, bringing her total losses to $1.7 million.
A man in Prince Edward Island encountered a video that claimed a link to the TV program Dragon's Den and suggested investing could start at $250. He increased his transfers over time, at one point sending $10,000 per day, and ultimately lost $600,000. As with the first case, a fake balance of more than $1 million was shown, and withdrawal attempts were blocked.
Together, their losses totaled $2.3 million. According to the Canadian Anti-Fraud Centre, Canadians have lost $1.2 billion to investment scams over three years, and the agency believes actual losses are higher.
Reports of industrial-scale fraud networks
Former U.S. prosecutor Erin West said the fraud is organized like an industry and that many callers are themselves victims, trafficked to scam compounds in Southeast Asia and forced to work long hours. Those who refuse or attempt escape face beatings or torture, according to her account.
West described visiting cyber fraud compounds in the Philippines and said their scale reflects industrial-level operations that rely on psychological manipulation, technology, and human trafficking. She warned that as deepfake tools become cheaper and more accessible, similar scams are likely to expand globally, making it harder for ordinary investors to distinguish legitimate opportunities from AI-driven fraud.
Incident 1326: Waymo Robotaxis Allegedly Contributed to Traffic Gridlock During San Francisco PG&E Power Outage
“Waymo explains why robotaxis stalled during San Francisco blackout”
Waymo on Tuesday acknowledged that its driverless cars contributed to traffic congestion during San Francisco's massive weekend power outage, saying the scale of the disruption overwhelmed parts of its system and prompted the company to implement immediate software and emergency-response changes.
The outage, caused by a fire at a PG&E substation, knocked out electricity to nearly one-third of the city on Saturday, disabling hundreds of traffic signals and triggering gridlock across major corridors. As police officers were deployed to manually control intersections, stalled Waymo robotaxis became one of the most visible signs of the citywide disruption, drawing scrutiny from residents and elected officials.
In a blog post published Tuesday, Waymo said the unprecedented number of dark traffic signals strained safeguards designed for smaller outages. Its vehicles are programmed to treat nonfunctioning signals as four-way stops, but in some cases a vehicle requests a remote "confirmation check to ensure it makes the safest choice."
"While we successfully traversed more than 7,000 dark signals on Saturday, the outage created a concentrated spike in these requests," the company said, adding that delays in confirmations contributed to congestion on already overwhelmed streets.
As the blackout persisted and the San Francisco Department of Emergency Management urged residents to stay home, Waymo said it temporarily suspended service and directed vehicles to pull over and park so they could be returned to depots in stages.
Mayor Daniel Lurie said the city was in direct contact with the company, owned by Google parent Alphabet, as conditions deteriorated.
"I made a call to the Waymo CEO and asked them to get the cars off the road immediately," Lurie said at a Monday news conference. "They were very understanding ... but we need them to be more proactive."
The incident has prompted renewed questions about how autonomous vehicles perform during large-scale emergencies. San Francisco supervisors have called for a hearing on Waymo's response, and a California regulator said Monday it is reviewing incidents involving stalled robotaxis during the outage.
Waymo said it is already rolling out fleet-wide software updates that give vehicles more context about regional power failures, allowing them to navigate intersections "more decisively."
The company also said it is updating emergency preparedness plans, expanding coordination with city officials and continuing first-responder training, noting that more than 25,000 responders worldwide have already been trained to interact with its vehicles.
"While the failure of the utility infrastructure was significant, we are committed to ensuring our technology adjusts to traffic flow during such events," a Waymo spokesperson said. "We are focused on rapidly integrating the lessons learned from this event, and are committed to earning and maintaining the trust of the communities we serve every day."
Incident 1333: Purportedly AI-Generated Images and Videos Reportedly Spread Misinformation About Nicolás Maduro's Capture on X
“AI-generated content spreads after Maduro’s removal — blurring fact and fiction”
Following the U.S. military operation in Venezuela that led to the removal of its leader, Nicolas Maduro, AI-generated videos purporting to show Venezuelan citizens celebrating in the streets have gone viral on social media.
These artificial intelligence clips, depicting rejoicing crowds, have amassed millions of views across major platforms like TikTok, Instagram and X.
One of the earliest and most widely shared clips on X was posted by an account named "Wall Street Apes," which has over 1 million followers on the platform.
The post depicts a series of Venezuelan citizens crying tears of joy and thanking the U.S. and President Donald Trump for removing Maduro.
The video has since been flagged by a community note, a crowdsourced fact-checking feature on X that allows users to add context to posts they believe are misleading. The note read: "This video is AI generated and is currently being presented as a factual statement intended to mislead people."
The clip has been viewed over 5.6 million times and reshared by at least 38,000 accounts, including business mogul Elon Musk, who later removed his repost.
CNBC was unable to confirm the origin of the video, though fact-checkers at BBC and AFP said the earliest known version of the clip appeared on the TikTok account @curiousmindusa, which regularly posts AI-generated content.
Even before such videos appeared, AI-generated images purporting to show Maduro in U.S. custody were circulating, ahead of the Trump administration's release of an authentic image of the captured leader.
The deposed Venezuelan president was captured on Jan. 3, 2026, after U.S. forces conducted airstrikes and a ground raid, an operation that has dominated global headlines at the start of the new year.
Along with the AI-generated videos, the AFP's fact-check team also flagged a number of examples of misleading content concerning Maduro's ousting, including footage of celebrations in Chile falsely presented as scenes from Venezuela.
Trump has also reposted several videos related to Venezuelan celebrations on Truth Social this week, though CNBC confirmed many of those were also filmed outside Venezuela, in cities such as Panama City and Buenos Aires.
One of the videos reshared by the president included old footage that first appeared online as early as July 2024 and was thus not related to the recent removal of Maduro.
The dissemination of that type of misinformation surrounding major news events is not new. Similar false or misleading content has been spread during the Israel-Palestine and Russia-Ukraine conflicts.
However, the massive reach of AI-generated content related to recent developments in Venezuela is a stark example of AI's growing role as a tool for misinformation.
Platforms such as Sora and Midjourney have made it easier than ever to quickly generate hyper-realistic video and pass it off as genuine in the chaos of fast-breaking events. The creators of that content often seek to amplify certain political narratives or sow confusion among global audiences.
Last year, AI-generated videos of women complaining about losing their Supplemental Nutrition Assistance Program, or SNAP, benefits during a government shutdown also went viral. One such AI-generated video fooled Fox News, which presented it as real in an article that was later removed.
In light of these trends, social media companies have faced growing pressure to step up efforts to label potentially misleading AI content.
Last year, India's government proposed a law requiring such labeling, while Spain approved fines of up to 35 million euros for unlabeled AI materials.
In addressing these concerns, major platforms, including TikTok and Meta, have rolled out AI detection and labeling tools, though the results appear mixed.
CNBC was able to identify some misleading TikTok videos on Venezuela that had been labeled as AI-generated, but others that appeared to be fabricated or digitally altered did not yet have warnings.
In the case of X, the platform has relied mostly on community notes for content labeling, though critics say the system often reacts too slowly to prevent AI misinformation from spreading before being identified.
Adam Mosseri, who oversees Instagram and Threads, acknowledged the challenge facing social media in a recent post. "All the major platforms will do good work identifying AI content, but they will get worse at it over time as AI gets better at imitating reality," he said.
"There is already a growing number of people who believe, as I do, that it will be more practical to fingerprint real media than fake media," he added.
— CNBC's Victoria Yeo contributed to this report
Incident 1329: Grok Reportedly Generated and Distributed Nonconsensual Sexualized Images of Adults and Minors in X Replies
“Elon Musk’s Grok AI generates images of ‘minors in minimal clothing’”
Elon Musk’s chatbot Grok posted on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on social media platform X. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.
Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.
“There are isolated cases where users prompted for and received AI images depicting minors in minimal clothing,” Grok said in a post on X in response to a user. “xAI has safeguards, but improvements are ongoing to block such requests entirely.”
“As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited,” xAI posted to the @Grok account on X, referring to child sexual abuse material.
Many users on X have prompted Grok to generate sexualized, nonconsensual AI-altered versions of images in recent days, in some cases removing people’s clothing without their consent. Musk on Thursday reposted an AI photo of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend.
Grok’s generation of sexualized images appeared to lack safety guardrails, allowing minors to be featured in its posts of people, usually women, wearing little clothing, according to posts from the chatbot. In a reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring, although it said “no system is 100% foolproof”, adding that xAI was prioritising improvements and reviewing details shared by users.
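The "advanced filters and monitoring" Grok refers to typically mean layered safety gates around image generation. The sketch below is a hypothetical illustration of that layering, not xAI's actual pipeline: a crude keyword check stands in for a learned prompt-policy classifier, and a placeholder stands in for a post-generation image safety model; as the chatbot itself noted, no such system is foolproof.

```python
def classify_prompt(prompt: str) -> bool:
    """Stand-in for a learned policy classifier; True means the request is disallowed."""
    banned_signals = ("minor", "child", "undress", "remove clothing")
    return any(term in prompt.lower() for term in banned_signals)

def classify_image(image_bytes: bytes) -> bool:
    """Stand-in for a post-generation image safety model (age/nudity checks)."""
    return False  # placeholder: a real system would run a vision classifier here

def generate_image(prompt: str, renderer):
    if classify_prompt(prompt):      # gate 1: refuse the request outright
        return None
    image = renderer(prompt)
    if classify_image(image):        # gate 2: never publish flagged output
        return None
    return image

dummy_renderer = lambda prompt: b"<image bytes>"
print(generate_image("a beach sunset", dummy_renderer))      # image bytes returned
print(generate_image("undress this photo", dummy_renderer))  # None (blocked)
```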
When contacted for comment by email, xAI replied with the message: “Legacy Media Lies”.
The problem of AI being used to generate child sexual abuse material is a longstanding issue in the artificial intelligence industry. A 2023 Stanford study found that a dataset used to train a number of popular AI image-generation tools contained over 1000 CSAM images. Training AI on images of child abuse can allow models to generate new images of children being exploited, experts say.
Grok also has a history of failing to maintain its safety guardrails and posting misinformation. In May of last year, Grok began posting about the far-right conspiracy of “white genocide” in South Africa on posts with no relation to the concept. xAI also apologized in July after Grok began posting rape fantasies and antisemitic material, including calling itself “MechaHitler” and praising Nazi ideology. The company nevertheless secured a nearly $200m contract with the US Department of Defense a week after the incidents.
Incident 1327: Reported AI-Generated Deepfake Romance Scam Allegedly Used to Steal One Bitcoin From Recently Divorced Investor
“How an AI-fueled romance scam drained a Bitcoin retirement fund”
Key takeaways
- A recently divorced Bitcoin investor lost his entire retirement fund, one full Bitcoin, to an AI-powered romance scam orchestrated by a sophisticated criminal using deepfakes.
- Pig butchering scams are relationship-based frauds that rely on emotional manipulation and AI-generated deepfakes to build trust before extracting maximum financial value from victims.
- The scammer used AI to create synthetic portraits and conduct real-time deepfake video calls, making the fabricated relationship virtually indistinguishable from reality.
- Once cryptocurrency is transferred via a blockchain, recovery is nearly impossible. Unlike bank transfers, there are no chargebacks, reversals or consumer protections available to victims.
When a recently divorced Bitcoin investor finally reached the milestone of owning one full Bitcoin, he believed his financial future was secure. Within days, however, an elaborate scheme orchestrated by a sophisticated scammer using AI stripped away his entire retirement savings and left him devastated.
His story, shared by Bitcoin security adviser Terence Michael, offers a critical lesson in how emotional manipulation, combined with modern AI technologies, has weaponized traditional scams to target cryptocurrency holders.
Understanding the pig-butchering framework
Before examining the specifics of this case, it is essential to understand what security experts call "pig butchering" scams. Unlike traditional cryptocurrency hacks that target wallets directly, these schemes are relationship-based frauds that rely entirely on psychological manipulation. The term, borrowed from the agricultural practice of fattening an animal before slaughter, describes how scammers gradually build trust and emotional connection with their victims before extracting maximum value.
The fundamental difference is critical. Victims willingly send their funds, believing they are making sound investments or supporting someone they love. This consent-based manipulation makes these schemes extraordinarily difficult for fraud detection systems to identify, as the transactions themselves appear legitimate on the surface.
According to a report by Cyvers, a blockchain security platform, the average grooming period for victims lasts between one and two weeks in roughly one-third of cases, while approximately 10% of victims endure grooming periods spanning one to three months. This extended timeline underscores the sophistication of these operations. Scammers understand that patience and consistency build credibility far more effectively than rushing the process.
How the scam unfolded: The AI advantage
In this case, the scammer employed a sophisticated, multi-layered approach that leveraged AI. The victim was first approached through an unsolicited message from someone claiming to be an attractive female trader.
The scammer offered to help double the investor's Bitcoin holdings, a promise designed to appeal to both greed and the desire for financial security, particularly for someone navigating a recent divorce.
What made this scheme exponentially more powerful than traditional romance scams was the integration of AI technology. Rather than relying on stolen photos or crude image editing, the scammer used AI to generate entirely synthetic portraits that appeared convincingly realistic. These AI-generated identities are nearly indistinguishable from real people to the untrained eye.
During video calls, the scammer employed even more sophisticated technology. Live deepfake video generation overlaid a fabricated face onto the scammer's actual body in real time. Advanced systems can now maintain lip-sync accuracy across different lighting conditions, creating the illusion of a genuine human connection so convincing that even skeptical viewers struggle to detect the deception.
The emotional dimension cannot be overstated. The scammer professed romantic feelings, discussed future plans and constructed an elaborate narrative of a woman who appeared to care deeply about the investor's financial well-being. The victim was even convinced to purchase a plane ticket to meet in person, deepening the psychological investment. This personal connection proved far more persuasive than any technical security measure.
Vulnerability and life circumstances
The specific targeting of a recently divorced individual was not random. It was calculated predation. Divorce creates acute vulnerability, including emotional isolation, diminished self-esteem and a psychological void that scammers are trained to exploit. Scammers actively recruit victims who fit specific profiles, such as older individuals, recent divorcees, widows, widowers and those expressing loneliness online.
This case highlights a critical blind spot in modern fraud prevention. Traditional banking fraud detection systems are designed to flag unusual transactions, not to recognize psychological coercion. The victim's Bitcoin transfers appeared completely normal to automated systems, consisting of regular amounts over time rather than a single large withdrawal. This gradual escalation is deliberately designed to bypass algorithmic detection.
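The blind spot described here is easy to see in miniature: a per-transaction threshold catches one outsized withdrawal but passes the same total value when it is drip-fed over weeks. The toy check below uses hypothetical amounts and thresholds, not any bank's or exchange's real detection rules.

```python
def flag_transaction(amount_usd, daily_avg_usd, threshold_multiple=10):
    """Flag a transfer only if it is far outside the account's usual size."""
    return amount_usd > threshold_multiple * daily_avg_usd

daily_avg = 500  # the account's typical daily spend (hypothetical)

single_withdrawal = [60_000]
gradual_escalation = [250, 1_000, 2_500] + [5_000] * 11  # grooming-style ramp-up

for schedule in (single_withdrawal, gradual_escalation):
    flags = [flag_transaction(amount, daily_avg) for amount in schedule]
    print(f"{sum(schedule):,} sent, {sum(flags)} flagged")
# The single withdrawal is flagged; the gradual schedule moves a similar total
# with zero flags, which is exactly the pattern pig-butchering scammers exploit.
```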
The scale of the problem
In 2024, pig butchering scams cost victims $5.5 billion across roughly 200,000 individual cases, averaging $27,500 per victim, according to Chainalysis. The company has also classified these scams as a national security concern. Romance scam losses exceeded $1.34 billion in 2024 and 2025, with the Federal Trade Commission reporting that 40% of online daters have been targeted by romance scams.
AI has made these schemes exponentially more scalable. Listed below are several ways to protect yourself from these scams:
- Verify identity through multiple channels: Request live video calls rather than accepting pre-recorded messages. Look for unnatural eye movement, inconsistent blinking and warped edges where the face meets the neck, which are common deepfake indicators.
- Be skeptical of rapid relationship progression: Genuine relationships develop gradually. Declarations of love within days, especially when paired with investment opportunities, should trigger immediate suspicion.
- Consult trusted advisers before moving funds: Reaching out to security professionals or financial advisers before transferring cryptocurrency can provide a rational perspective when judgment may be compromised.
- Recognize that legitimate traders do not date clients: Professional investment advisers maintain clear ethical boundaries. Someone offering both romance and investment opportunities should be treated as a serious red flag.
- Understand irreversibility: Bitcoin and other cryptocurrencies offer no consumer protections, such as chargebacks or reversals. Once funds are transferred, recovery is typically impossible.
Vigilance over vulnerability
The investor's loss, a full Bitcoin, represents not merely a financial setback but a profound emotional trauma that extends far beyond monetary terms. Beyond the devastating financial impact, he faced the psychological shock of discovering that the romantic relationship was entirely fabricated, the emotional intimacy false, the future plans imaginary and his trust completely violated by a criminal operating across multiple time zones.
His story serves as a cautionary narrative for cryptocurrency holders. Technical security is only one layer of protection. Personal vigilance, skepticism toward unsolicited contact, emotional awareness and consultation with trusted advisers form an equally critical defense perimeter.
As AI makes deception increasingly sophisticated, human judgment, informed and grounded in healthy skepticism, remains the most powerful safeguard against scams designed to exploit deep human needs for connection and security. The lesson is not to distrust online relationships entirely, but to recognize that the convergence of romantic interest and financial opportunity demands extraordinary caution before any funds change hands.
About the Database
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
You are invited to submit incident reports, whereupon submissions will be indexed and made discoverable to the world. Artificial intelligence will only be a benefit to people and society if we collectively record and learn from its failings.

AI Incident Roundup – August, September, and October 2025
By Daniel Atherton
2025-11-08
At Templestowe, Arthur Streeton, 1889
🗄 Trending in the AIID: Across August, September, and October 2025, the AI Incident Database logged one...
Incident Report Submission Leaderboards
These are the persons and entities credited with creating and submitting incident reports. More details are available on the leaderboard page.
The AI Incident Briefing

Create an account to subscribe to new incident notifications and other updates.
The Responsible AI Collaborative
The AI Incident Database is a project of the Responsible AI Collaborative, an organization chartered to advance the AI Incident Database. The Collaborative's governance is structured around participation in its impact programming. For more details, we invite you to read the founding report and learn more about our board and contributors.

View the Responsible AI Collaborative's Form 990 and tax-exempt application.
Organization Founding Sponsor
Database Founding Sponsor