Cybersecurity researchers have been warning for quite a while now that generative AI (GenAI) programs are vulnerable to a vast array of attacks, from carefully crafted prompts that can break guardrails, to data leaks that can reveal sensitive information. The deeper the research goes, the more experts are finding out just how much GenAI is a wide-open risk, especially for enterprise users with extremely sensitive and valuable data. "This is a new attack vector that opens up a new attack surface," said Elia Zaitsev, chief technology officer of cybersecurity vendor CrowdStrike, in an interview with ZDNET.
"I see with generative AI a lot of people just rushing to use this technology, and they're bypassing the normal controls and methods" of secure computing, said Zaitsev. "In many ways, you can think of generative AI technology as a new operating system, or a new programming language," said Zaitsev. "A lot of people don't have expertise with what the pros and cons are, and how to use it correctly, how to secure it correctly.
" The most infamous recent example of AI raising security concerns is Microsoft's Recall feature, which originally was to be built into . that attackers who gain access to a PC with the Recall function can see the entire history of an individual's interaction with the PC, not unlike what happens when a keystroke logger or other spyware is deliberately placed on the machine. "They have released a consumer feature that basically is built-in spyware, that copies everything you're doing in an unencrypt.
