Nvidia today announced that a digital lab playground for its latest security offering is now available, letting users try out an AI-powered system designed to monitor individual user accounts for potentially hazardous behavior.
The idea, according to the company, is to leverage the large amounts of data that many organizations already compile about login and data access events on their systems, and use it to train an AI that flags when a user account diverges from its usual patterns. Instead of combing through potentially millions of events a week to identify a problem, security teams are presented with a small handful of “high risk” events surfaced by the system.
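Nvidia has not published implementation details here, but the approach described, building a per-account baseline from routine login and data-access events and flagging departures from it, is a standard anomaly-detection pattern. As a rough, illustrative sketch only (the use of scikit-learn's IsolationForest, the feature names, and the synthetic data are assumptions, not Nvidia's method), per-account scoring might look like this:

```python
# Illustrative sketch of per-account behavioural anomaly detection.
# All features, thresholds, and the choice of IsolationForest are
# assumptions for demonstration; this is not Nvidia's implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-event features for ONE user account:
# [hour of login, bytes of data accessed, distinct hosts touched]
baseline_events = np.column_stack([
    rng.normal(9, 1, 5000),       # logins cluster around 9am
    rng.normal(2e6, 5e5, 5000),   # roughly 2 MB of data per session
    rng.poisson(3, 5000),         # a few hosts per session
])

# Fit one model per account on that account's own history, so "normal"
# is defined relative to the individual user rather than the whole org.
model = IsolationForest(contamination=0.001, random_state=0)
model.fit(baseline_events)

# Score a new batch of events for the same account.
new_events = np.array([
    [9.2, 1.8e6, 3],     # ordinary working-hours access
    [3.1, 9.5e7, 42],    # 3am, ~95 MB pulled, dozens of hosts
])
predictions = model.predict(new_events)      # -1 = anomalous, 1 = normal
scores = model.score_samples(new_events)     # lower = more anomalous

for event, pred, score in zip(new_events, predictions, scores):
    label = "HIGH RISK" if pred == -1 else "ok"
    print(f"{label:9s} score={score:.3f} event={event}")
```

The design point mirrors the article's pitch: because the model is fit on a single account's own history, an analyst reviews only the few events that deviate from that account's norm instead of the full event stream.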