AI Is Making Social Engineering Harder to Detect—But We’re Still Training People Like It’s 2015
Last year, Hong Kong police disclosed a case that would become a watershed moment in cybersecurity: a finance worker at global engineering firm Arup transferred $25 million to fraudsters after attending a video conference call with what appeared to be the company’s CFO and several colleagues. Every other person on the call was a deepfake. The voices matched. The faces were convincing. The worker had initially suspected the emailed request was phishing—but the video call with familiar colleagues erased those doubts.
This wasn’t a zero-day exploit or a supply chain attack—just AI-generated video and audio, and an employee whose security awareness training had never prepared them for anything like it.
The Arup case exposed an uncomfortable truth: while threat actors have weaponized generative AI to create increasingly convincing social engineering attacks, most organizations are still training their people to spot misspelled phishing emails and suspicious attachments. The ...
Copyright of this story solely belongs to informationsecuritybuzz.com.

