Nvidia today announced that a digital lab playground for its latest security offering is now available, letting users try out an AI-powered system designed to monitor individual user accounts for potentially hazardous behavior.
The idea, according to the company, is to leverage the large amounts of data that many organizations already compile about login and data-access events on their systems, and use it to train an AI that watches for user accounts diverging from their usual patterns. Instead of combing through potentially millions of events a week to identify a problem, security teams are presented with a small handful of “high risk” events flagged by the system.
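The core idea can be illustrated with a toy sketch. This is not Nvidia's actual system; it is a minimal, hypothetical example of per-account behavioral baselining, assuming login hour as the only feature and a simple standard-deviation threshold in place of a trained model:

```python
from statistics import mean, stdev

# Hypothetical illustration: learn a per-account baseline of login hours,
# then flag events that deviate sharply from the account's usual pattern.

def build_baseline(login_hours):
    """Summarize an account's historical login hours as (mean, stdev)."""
    return mean(login_hours), stdev(login_hours)

def is_high_risk(hour, baseline, threshold=3.0):
    """Flag a login hour more than `threshold` standard deviations
    from the account's baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > threshold

# An account that normally logs in during business hours.
history = [9, 10, 9, 11, 10, 9, 12, 10, 11, 9]
baseline = build_baseline(history)

print(is_high_risk(10, baseline))  # typical hour -> False
print(is_high_risk(3, baseline))   # 3 a.m. login -> True
```

A production system would replace the single feature and fixed threshold with a model trained on many event attributes per account, but the shape of the workflow is the same: millions of raw events in, a short ranked list of anomalies out.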