Generative AI is increasingly being used to defraud businesses of big money and no one is prepared

Deepfakes, like the recent pornography featuring a likeness of Taylor Swift, are a growing problem that is just starting to be used for financial fraud.
Christopher Jue—TAS24/Getty Images for TAS Rights Management

Hello and welcome to Eye on AI. 

Over the past few weeks, the threat of AI-generated deepfakes went from a looming concern to a real and present danger, playing out faster than many predicted. Thousands of New Hampshire voters received an AI-generated robocall purporting to be President Joe Biden and intended to suppress election turnout. Taylor Swift quickly went from being deepfaked in faux Le Creuset cookware ads to having pornographic images of her likeness circulate online to being falsely depicted as a Donald Trump supporter. And now, a story out of Hong Kong is serving as a harrowing example of the new, very expensive ways generative AI can wreak havoc on businesses. 

Malicious actors used AI-generated deepfakes to trick an employee at a multinational company’s Hong Kong office into transferring $25 million (HK$200 million) of the company’s funds by posing as the CFO and other colleagues on a video call. 

“Everyone present on the video calls except the victim was a fake representation of real people,” wrote the South China Morning Post, citing Hong Kong police. “The scammers applied deepfake technology to turn publicly available video and other footage into convincing versions of the meeting’s participants.”

In its 2023 year-end report, identity verification company Sumsub identified AI-powered fraud, mainly deepfakes, as one of the top five types of identity fraud threats in 2023. The company found a tenfold increase in deepfakes globally in the last year, including a 1,740% surge in North America alone, and warned that deepfakes will become increasingly advanced and challenging to detect in 2024. 

Threat researchers are also starting to gain insight into the role generative AI is having on phishing attacks. In its recent threat report, SlashNext reported a 1,265% increase in phishing emails since the launch of ChatGPT. DarkTrace additionally noted in its 2023 threat report that phishing emails now include significantly more text, bypass existing security layers, and use novel social engineering techniques, “potentially leveraging generative AI tools.”

Yoav Keren, CEO of digital risk firm BrandShield, told Eye on AI that his firm “detects a vast variety of threats, and more and more of them are created with generative AI.” He said they’ve encountered AI-generated scams using fake voices and visuals, and even more sophisticated AI-based scams using elaborate planning and execution. 

“Anyone can write fake ads or web pages with perfect English, design professional banners or program malicious code with simple AI tools—and they do it much faster than without them. Enterprises should expect more threats in less time, and more sophisticated threats than ever before,” Keren said.

So what must enterprises do to protect themselves? Keren—as well as some of the aforementioned reports—advises multifactor authentication, regular employee training and awareness programs, advanced threat and AI detection tools, and following all the fundamental principles of cybersecurity, such as regular patching, access control, and incident response planning. Essentially, what everyone’s already been doing. 

These measures, however, have hardly proven effective. Yes, some breaches have been thwarted and some supply chain attacks stifled before they fully spiraled out of control. But the graphs on cyberattacks—both the frequency and the damage—have only continued to go up and to the right. The global average cost of a data breach was $4.45 million in 2023, a 15% increase over three years, according to IBM.

With decades of cybersecurity innovation and billions upon billions invested having led only to today's grim state of attacks, there's a case to be made that cybersecurity is broken (it's a case I first made years ago). In his final edition of the Washington Post's Cybersecurity 202 newsletter, published in 2022, Joseph Marks reflected on his eight years on the beat and similarly declared, "Cybersecurity's bad and it's getting worse." He pointed to the fact that there's been a lot of big talk with little real change and called out the emergence of AI for how it will inevitably make the situation even worse. 

“Things will get worse before they get better,” he wrote. 

So yes, companies absolutely should have all the basic measures in place. But since those measures were hardly enough to stop cyber threats before generative AI, there's little reason to believe they'll truly protect businesses against it. We're building the generative AI plane while flying it and dealing with the fallout in real time, so companies can only do so much to educate their employees on the latest generative AI-related threats. Some security professionals are bullish on the potential to fight AI with AI, but this is also something I've heard for years. Either way, generative AI might just be cybersecurity's moment of reckoning.

And with that, here’s more of today’s AI news. 

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

AI IN THE NEWS

Google sunsets the Bard name, launches a Gemini digital assistant for mobile, and finally rolls out Gemini Ultra. Bard is no more as Google fully embraces the Gemini name across its AI products. The company is also launching Gemini as a digital assistant on mobile, which will enable users to do things like point the phone's camera at something and ask Gemini questions about it. Android users can opt in and replace the Google Assistant on their phones with Gemini, while iOS users can access the Gemini assistant through the Google app. Lastly, the company is launching a new Gemini Advanced subscription tier as part of Google One that will let users finally access the most advanced Gemini model, Ultra 1.0. The new Gemini Advanced service will cost $19.99 monthly (the same cost as rival ChatGPT Plus) and also integrate across Google Workspace products.

OpenAI adds C2PA watermarks to images created with DALL-E 3. Images created within ChatGPT or via the company's API will now include a signature indicating they were generated by its DALL-E 3 model. Images created within ChatGPT will also contain an additional notation that they were created within the chatbot. In a post announcing the news, OpenAI pointed to how this will let social platforms and content distributors see if an image was generated by the company's AI products. OpenAI also acknowledged that metadata such as that defined by C2PA, an open technical standard for embedding provenance information into media, "is not a silver bullet to address issues of provenance" because it can easily be removed.
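OpenAI's caveat about removability is easy to demonstrate: provenance records like C2PA manifests live in embedded metadata blocks alongside the image data, and a byte-level rewrite that keeps only the pixels discards them. The toy sketch below (plain Python, standard library only) uses a PNG tEXt chunk as a stand-in for a real C2PA manifest — the actual C2PA format is structured differently, but it is equally ancillary to the pixel data:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_provenance() -> bytes:
    """A 1x1 red PNG carrying a provenance note in an ancillary tEXt chunk
    (a stand-in here for a real C2PA manifest, which also rides in metadata)."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)  # 1x1, 8-bit RGB
    idat = zlib.compress(b"\x00\xff\x00\x00")            # filter byte + red pixel
    text = b"Source\x00AI-generated"                     # keyword NUL value
    return (PNG_SIG + chunk(b"IHDR", ihdr) + chunk(b"tEXt", text)
            + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

def iter_chunks(png: bytes):
    """Yield (type, data) pairs for each chunk in a PNG byte string."""
    pos = len(PNG_SIG)
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        yield ctype, png[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC

def strip_metadata(png: bytes) -> bytes:
    """Rewrite the PNG keeping only critical chunks (uppercase first letter);
    ancillary chunks such as tEXt, where the provenance note lives, are dropped."""
    out = PNG_SIG
    for ctype, data in iter_chunks(png):
        if ctype[0:1].isupper():  # critical chunk: keep
            out += chunk(ctype, data)
    return out

original = make_png_with_provenance()
stripped = strip_metadata(original)
print(b"tEXt" in original, b"tEXt" in stripped)  # → True False
```

The provenance note survives a plain file copy but not this trivial rewrite, and the stripped output is still a valid PNG — which is exactly why OpenAI calls metadata-based provenance no silver bullet.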

Meta says it will label AI-created images shared on its platforms and punish users who upload AI-generated videos without disclosure. The company cited upcoming elections worldwide in its post announcing the move and said labels on AI-generated images on Facebook, Instagram, and Threads will start showing up in the coming months. Meta also said it's working with companies across the industry on technical standards for identifying AI-generated content (including C2PA and IPTC). Additionally, it said it's developing classifiers to help automatically detect AI-generated content even if the content lacks invisible markers and is also looking for ways to make it more difficult to remove or alter invisible watermarks. For videos and audio, Meta is punting to users and asking them to voluntarily disclose such content as AI-generated so the company can label it, noting that companies with AI tools aren't yet including signals that point to AI generation in these types of media.

FORTUNE ON AI

Exclusive: What Andreessen Horowitz’s Anish Acharya is looking for in consumer AI startups —Allie Garfinkle

78% of dealmakers say Sam Altman should be running OpenAI right now —Allie Garfinkle

Microsoft CEO Satya Nadella says ‘AI is really in the air now’ and is planning to train 2 million Gen Z in India with tech skills —Sunny Nagpaul

AI is moving too fast to keep pace for 4 in 5 workers —Jane Thier

The AI hypefest might just save San Francisco from its office space doom loop —Alena Botros

Europe faces an aging population and a shrinking workforce. AI can fill the gap —Peter Vanham and Nicholas Gordon

EYE ON AI NUMBERS

90%

That's the percentage of global enterprise AI decision-makers who have concrete plans to implement generative AI internally and for customers. The number comes from a new Forrester report, The State of Generative AI, 2024. Even with concrete plans in hand, the research also revealed that firms are taking a cautious approach, starting with internal uses. The report identifies the top internal use cases as employee productivity, knowledge management, and software development. Companies slowly pushing their generative AI experiments into customer-facing domains are targeting conversational AI, chatbots, and virtual assistants. "Most of these external deployments employ heavy human-in-the-loop management and validation—at least for now," the report notes. 

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.