Just last year, a mother in Miami, Manitoba, received a strange call from a private number. The voice on the other end sounded exactly like her son's. He started asking questions, but something felt off. Uneasy, she hung up and called her real son directly.
It turned out the first call was an AI-powered scam.
Stories like this are becoming increasingly common. According to cybersecurity company CrowdStrike, voice phishing attacks jumped 442% in the second half of 2024 compared to the first half of the year. AI-generated voices, deepfakes, and impersonation scams are becoming more sophisticated, faster to deploy, and far harder to detect.
Over the weekend, Techloy spoke with Matthew Rosenquist, a cybersecurity expert with more than 30 years of experience, about what happens when AI becomes convincing enough to imitate almost anyone online.