An international law enforcement operation has shut down Kidflix, a platform for child sexual exploitation with 1.8m registered users
CrushFTP Vulnerability Exploited Following Disclosure Issues
A critical authentication bypass flaw in CrushFTP is under active exploitation following a mishandled disclosure process
HellCat ransomware: what you need to know
HellCat – the ransomware gang that has been known to demand payment… in baguettes!
Are they rolling in the dough? Bread it and weep in my article on the Tripwire State of Security blog.
Amateur Hacker Leverages Russian Bulletproof Hosting Server to Spread Malware
The cybercriminal uses the services of Proton66, an infamous Russian bulletproof hosting provider, to deploy malware
Web 3.0 Requires Data Integrity
If you’ve ever taken a computer security class, you’ve probably learned about the three legs of computer security—confidentiality, integrity, and availability—known as the CIA triad. When we talk about a system being secure, that’s what we’re referring to. All are important, but to different degrees in different contexts. In a world populated by artificial intelligence (AI) systems and artificial intelligent agents, integrity will be paramount.
What is data integrity? It’s ensuring that no one can modify data—that’s the security angle—but it’s much more than that. It encompasses accuracy, completeness, and quality of data—all over both time and space. It’s preventing accidental data loss; the “undo” button is a primitive integrity measure. It’s also making sure that data is accurate when it’s collected—that it comes from a trustworthy source, that nothing important is missing, and that it doesn’t change as it moves from format to format. The ability to restart your computer is another integrity measure.
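The simplest technical form of this idea is a cryptographic checksum: record a digest of the data when you trust it, and recompute it before relying on the data again. The sketch below uses Python's standard `hashlib`; the record contents are purely illustrative.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest that acts as a tamper-evidence seal."""
    return hashlib.sha256(data).hexdigest()

record = b"patient_id=1042,diagnosis=benign"
seal = fingerprint(record)  # stored alongside (or separately from) the record

# Later, before trusting the record, recompute and compare.
tampered = b"patient_id=1042,diagnosis=malignant"
print(fingerprint(record) == seal)    # True: unchanged data verifies
print(fingerprint(tampered) == seal)  # False: any modification is detected
```

A checksum only detects modification; it says nothing about whether the data was accurate when collected, which is why integrity is a broader property than tamper-resistance alone.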
The CIA triad has evolved with the Internet. The first iteration of the Web—Web 1.0 of the 1990s and early 2000s—prioritized availability. This era saw organizations and individuals rush to digitize their content, creating what has become an unprecedented repository of human knowledge. Organizations worldwide established their digital presence, leading to massive digitization projects where quantity took precedence over quality. The emphasis on making information available overshadowed other concerns.
As Web technologies matured, the focus shifted to protecting the vast amounts of data flowing through online systems. This is Web 2.0: the Internet of today. Interactive features and user-generated content transformed the Web from a read-only medium to a participatory platform. The increase in personal data, and the emergence of interactive platforms for e-commerce, social media, and online everything demanded both data protection and user privacy. Confidentiality became paramount.
We stand at the threshold of a new Web paradigm: Web 3.0. This is a distributed, decentralized, intelligent Web. Peer-to-peer social-networking systems promise to break the tech monopolies’ control on how we interact with each other. Tim Berners-Lee’s open W3C protocol, Solid, represents a fundamental shift in how we think about data ownership and control. A future filled with AI agents requires verifiable, trustworthy personal data and computation. In this world, data integrity takes center stage.
For example, the 5G communications revolution isn’t just about faster access to videos; it’s about Internet-connected things talking to other Internet-connected things without our intervention. Without data integrity, for example, there’s no real-time car-to-car communications about road movements and conditions. There’s no drone swarm coordination, smart power grid, or reliable mesh networking. And there’s no way to securely empower AI agents.
In particular, AI systems require robust integrity controls because of how they process data. This means technical controls to ensure data is accurate, that its meaning is preserved as it is processed, that it produces reliable results, and that humans can reliably alter it when it’s wrong. Just as a scientific instrument must be calibrated to measure reality accurately, AI systems need integrity controls that preserve the connection between their data and ground truth.
This goes beyond preventing data tampering. It means building systems that maintain verifiable chains of trust between their inputs, processing, and outputs, so humans can understand and validate what the AI is doing. AI systems need clean, consistent, and verifiable control processes to learn and make decisions effectively. Without this foundation of verifiable truth, AI systems risk becoming a series of opaque boxes.
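One way to make such a chain of trust concrete is a hash-chained audit log, where each stage of an AI pipeline records an entry whose hash covers the previous entry; altering any stage invalidates everything after it. This is a minimal sketch, not a production design, and the stage names and payloads are hypothetical.

```python
import hashlib
import json

def chain_entry(prev_hash: str, stage: str, payload: dict) -> dict:
    """Create an append-only log entry whose hash covers the previous
    entry's hash, linking input -> processing -> output."""
    body = json.dumps({"prev": prev_hash, "stage": stage, "payload": payload},
                      sort_keys=True)
    return {"prev": prev_hash, "stage": stage, "payload": payload,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

GENESIS = "0" * 64
e1 = chain_entry(GENESIS, "input", {"dataset": "train-v1"})
e2 = chain_entry(e1["hash"], "processing", {"model": "clf-2024"})
e3 = chain_entry(e2["hash"], "output", {"prediction": "approve"})

def verify(entries: list[dict]) -> bool:
    """Walk the chain, recomputing each hash and checking the links."""
    prev = GENESIS
    for e in entries:
        body = json.dumps({"prev": e["prev"], "stage": e["stage"],
                           "payload": e["payload"]}, sort_keys=True)
        if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True

print(verify([e1, e2, e3]))          # True: the chain is intact
e2["payload"]["model"] = "clf-evil"  # silently swap the processing stage
print(verify([e1, e2, e3]))          # False: the alteration is detected
```

In a real deployment the entries would also be signed, so a verifier could attribute each stage to a party rather than merely detect tampering.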
Recent history provides many sobering examples of integrity failures that naturally undermine public trust in AI systems. Machine-learning (ML) models trained without thought on expansive datasets have produced predictably biased results in hiring systems. Autonomous vehicles with incorrect data have made incorrect—and fatal—decisions. Medical diagnosis systems have given flawed recommendations without being able to explain themselves. A lack of integrity controls undermines AI systems and harms people who depend on them.
They also highlight how AI integrity failures can manifest at multiple levels of system operation. At the training level, data may be subtly corrupted or biased even before model development begins. At the model level, mathematical foundations and training processes can introduce new integrity issues even with clean data. During execution, environmental changes and runtime modifications can corrupt previously valid models. And at the output level, the challenge of verifying AI-generated content and tracking it through system chains creates new integrity concerns. Each level compounds the challenges of the ones before it, ultimately manifesting in human costs, such as reinforced biases and diminished agency.
Think of it like protecting a house. You don’t just lock a door; you also use safe concrete foundations, sturdy framing, a durable roof, secure double-pane windows, and maybe motion-sensor cameras. Similarly, we need digital security at every layer to ensure the whole system can be trusted.
This layered approach to understanding security becomes increasingly critical as AI systems grow in complexity and autonomy, particularly with large language models (LLMs) and deep-learning systems making high-stakes decisions. We need to verify the integrity of each layer when building and deploying digital systems that impact human lives and societal outcomes.
At the foundation level, bits are stored in computer hardware. This represents the most basic encoding of our data, model weights, and computational instructions. The next layer up is the file system architecture: the way those binary sequences are organized into structured files and directories that a computer can efficiently access and process. In AI systems, this includes how we store and organize training data, model checkpoints, and hyperparameter configurations.
On top of that are the application layers—the programs and frameworks, such as PyTorch and TensorFlow, that allow us to train models, process data, and generate outputs. This layer handles the complex mathematics of neural networks, gradient descent, and other ML operations.
Finally, at the user-interface level, we have visualization and interaction systems—what humans actually see and engage with. For AI systems, this could be everything from confidence scores and prediction probabilities to generated text and images or autonomous robot movements.
Why does this layered perspective matter? Vulnerabilities and integrity issues can manifest at any level, so understanding these layers helps security experts and AI researchers perform comprehensive threat modeling. This enables the implementation of defense-in-depth strategies—from cryptographic verification of training data to robust model architectures to interpretable outputs. This multi-layered security approach becomes especially crucial as AI systems take on more autonomous decision-making roles in critical domains such as healthcare, finance, and public safety. We must ensure integrity and reliability at every level of the stack.
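At the lowest of these layers, cryptographic verification of training data can be as simple as pinning every file in a dataset to a digest in a manifest and refusing to train if anything differs. The sketch below simulates this with a temporary directory; in practice the manifest would be signed and distributed out of band, and the file names here are invented.

```python
import hashlib
import pathlib
import tempfile

def sha256_file(path: pathlib.Path) -> str:
    """Hash a file in blocks so large training files don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(8192), b""):
            h.update(block)
    return h.hexdigest()

# Simulate a training-data directory containing one file.
data_dir = pathlib.Path(tempfile.mkdtemp())
data_file = data_dir / "train.csv"
data_file.write_bytes(b"label,feature\n1,0.7\n")

# Pin the known-good digest; a real manifest would ship signed.
manifest = {"train.csv": sha256_file(data_file)}

def verify_dataset(directory: pathlib.Path, manifest: dict) -> bool:
    """Refuse to proceed unless every file matches its pinned digest."""
    return all(sha256_file(directory / name) == digest
               for name, digest in manifest.items())

print(verify_dataset(data_dir, manifest))         # True before tampering
data_file.write_bytes(b"label,feature\n0,0.7\n")  # poison one label
print(verify_dataset(data_dir, manifest))         # False: poisoning detected
```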
The risks of deploying AI without proper integrity control measures are severe and often underappreciated. When AI systems operate without sufficient security measures to handle corrupted or manipulated data, they can produce subtly flawed outputs that appear valid on the surface. The failures can cascade through interconnected systems, amplifying errors and biases. Without proper integrity controls, an AI system might train on polluted data, make decisions based on misleading assumptions, or have outputs altered without detection. The results of this can range from degraded performance to catastrophic failures.
We see four areas where integrity is paramount in this Web 3.0 world. The first is granular access, which allows users and organizations to maintain precise control over who can access and modify what information and for what purposes. The second is authentication—much more nuanced than the simple “Who are you?” authentication mechanisms of today—which ensures that data access is properly verified and authorized at every step. The third is transparent data ownership, which allows data owners to know when and how their data is used and creates an auditable trail of data provenance. Finally, the fourth is access standardization: common interfaces and protocols that enable consistent data access while maintaining security.
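Granular access, the first of these areas, means authorizing not just *who* touches a resource but *for what purpose*. A toy policy check makes the distinction concrete; the principals, resources, and purposes below are hypothetical examples, not any standard's vocabulary.

```python
# Hypothetical policy: each resource lists the (principal, purpose)
# pairs permitted to access it.
POLICY = {
    "medical_history": {
        ("dr_lee", "treatment"),
        ("insurer_x", "billing"),
    },
}

def may_access(principal: str, resource: str, purpose: str,
               policy: dict = POLICY) -> bool:
    """Grant access only when principal AND purpose both match --
    finer-grained than a per-resource yes/no."""
    return (principal, purpose) in policy.get(resource, set())

print(may_access("dr_lee", "medical_history", "treatment"))  # True
print(may_access("dr_lee", "medical_history", "marketing"))  # False: same
# principal, unapproved purpose
```

Purpose-aware decisions like this are what let a data owner audit not only that access happened but why, which feeds directly into the transparent-ownership requirement.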
Luckily, we’re not starting from scratch. There are open W3C protocols that address some of this: decentralized identifiers for verifiable digital identity, the verifiable credentials data model for expressing digital credentials, ActivityPub for decentralized social networking (that’s what Mastodon uses), Solid for distributed data storage and retrieval, and WebAuthn for strong authentication standards. By providing standardized ways to verify data provenance and maintain data integrity throughout its lifecycle, Web 3.0 creates the trusted environment that AI systems require to operate reliably. This architectural shift puts integrity control in the hands of users and helps ensure that data remains trustworthy from generation and collection through processing and storage.
Integrity is essential to trust, on both technical and human levels. Looking forward, integrity controls will fundamentally shape AI development by moving from optional features to core architectural requirements, much as SSL certificates evolved from a banking luxury to a baseline expectation for any Web service.
Web 3.0 protocols can build integrity controls into their foundation, creating a more reliable infrastructure for AI systems. Today, we take availability for granted; anything less than 100% uptime for critical websites is intolerable. In the future, we will need the same assurances for integrity. Success will require following practical guidelines for maintaining data integrity throughout the AI lifecycle—from data collection through model training and finally to deployment, use, and evolution. These guidelines will address not just technical controls but also governance structures and human oversight, similar to how privacy policies evolved from legal boilerplate into comprehensive frameworks for data stewardship. Common standards and protocols, developed through industry collaboration and regulatory frameworks, will ensure consistent integrity controls across different AI systems and applications.
Just as the HTTPS protocol created a foundation for trusted e-commerce, it’s time for new integrity-focused standards to enable the trusted AI services of tomorrow.
This essay was written with Davi Ottenheimer, and originally appeared in Communications of the ACM.
Sensitive Data Breached in Highline Schools Ransomware Incident
Highline Public Schools revealed that sensitive personal, financial and medical data was accessed by ransomware attackers during the September 2024 incident
Over Half of Attacks on Electricity and Water Firms Are Destructive
Semperis claims 62% of water and electricity providers were hit by cyber-attacks in the past year
Nearly 600 Phishing Domains Emerge Following Bybit Heist
BforeAI researchers discover 596 suspicious Bybit-themed domains designed to defraud visitors
CISO: Chief Cybersecurity Warrior Leader
A Cybersecurity Warrior Leader combines leadership and expertise in cybersecurity with the mindset and traits of a warrior. These individuals serve as CISOs and vCISOs in project and operational roles, leading programs, initiatives, teams, and organizations in defending against cyber threats while exhibiting key qualities associated with warriors, such as strategic thinking, resilience, and a strong sense of duty.

They are skilled, committed, and responsible for guiding organizations through a complex and evolving cybersecurity landscape, ensuring both the protection of digital assets and the development of a strong, proactive cybersecurity program, culture, and defense posture. Cybersecurity Warrior Leaders work on the front lines of digital defense to prevent unauthorized access, data breaches, and other forms of cybercrime.

The concept of the Chief Information Security Officer (CISO) as Chief Cybersecurity Warrior Leader is increasingly relevant as the role evolves to meet the challenges of a dynamic global threat landscape. This perspective positions CISOs and vCISOs not just as cybersecurity leaders and managers, but as key strategic leaders who integrate risk management with broader business strategies, goals, and objectives.
Below are examples of how CISOs and vCISOs can demonstrate the mission-critical skills and strategic competencies that draw on the principles and practice of Cybersecurity Warrior Leadership.
Strategic Thinking and Vision
Much like a warrior on a battlefield, a Cybersecurity Warrior Leader must think several steps ahead, anticipating potential threats and vulnerabilities. They must design strategies to protect the organization from sophisticated cyberattacks, data breaches, and other security risks. They are also responsible for building and leading cybersecurity frameworks, policies, and risk management strategies that adapt to the constantly evolving threat landscape.
Decision-Making Under Pressure
Cybersecurity Warrior Leaders often have to make critical decisions during high-pressure situations, such as responding to data breaches, cyberattacks, or security incidents. A warrior mindset helps them remain calm, focused, and effective in mitigating damage. They need to act swiftly and decisively, mobilizing their teams to contain threats and minimize impact.
Resilience and Perseverance
Cybersecurity challenges are ongoing, with constant threats emerging from hackers, cybercriminals, or even nation-state actors. A Cybersecurity Warrior Leader must be resilient and able to keep their team motivated despite setbacks or the complexities of the cyber environment. They lead by example, showing determination to address and solve cybersecurity challenges, no matter how difficult.
Team Leadership and Mentorship
As leaders, they are responsible for building and mentoring a team of cybersecurity professionals who can fight against digital threats. This includes providing guidance, support, and opportunities for professional development. The Cybersecurity Warrior Leader must foster a collaborative, highly skilled, and agile team that can respond to threats and adapt to new challenges quickly.
Ethical Responsibility and Integrity
Just as warriors often follow a code of honor, Cybersecurity Warrior Leaders adhere to ethical guidelines in their defense of digital environments. They must ensure that their team’s actions align with both legal and moral standards, protecting sensitive information while combating cybercrime. They must balance the use of offensive cybersecurity tactics (like ethical hacking) with maintaining ethical integrity.
Cybersecurity Expertise
Like any skilled warrior, a Cybersecurity Warrior Leader has deep knowledge and expertise in cybersecurity concepts, tools, and practices. This includes understanding threat intelligence, network security, encryption, risk management, incident response, and emerging technologies such as artificial intelligence or blockchain. They also stay updated on new and evolving cyber threats and techniques to ensure that their team is always prepared to defend against them.
Advocacy and Awareness
A Cybersecurity Warrior Leader educates and advocates for the importance of cybersecurity at all levels of the organization, from top executives to staff members. They work to instill a security-conscious culture, where everyone understands the significance of their role in maintaining security. They may also represent their organization externally, collaborating with other industry leaders, government entities, or cybersecurity communities to share knowledge and combat global cyber threats.
Expanded Role Beyond Cybersecurity
Modern CISOs and vCISOs are tasked with defending organizations against sophisticated threats while also managing business risks. They must navigate areas like regulatory compliance, remote work security, and supply chain risks. This shift demands not only technical expertise but also leadership qualities akin to those of warriors, emphasizing strength, resilience, resolve, adaptability, and strategic foresight.
Leadership in Crisis
The warrior leader analogy fits CISOs and vCISOs because their roles often involve responding to crises, such as security risks, ransomware attacks, and data breaches, with precision and decisiveness. CISOs’ and vCISOs’ hard-won ability to maintain focus and composure under pressure mirrors the traits of traditional warrior leaders.
Cultural Alignment
Cultivating a cybersecurity warrior mindset and culture aligns with fostering a proactive security culture within organizations and digital ecosystems. CISOs and vCISOs who embody warrior leadership model vigilance and adaptability, and they enable and inspire their teams to adopt the same qualities, ensuring the entire workforce becomes part of the defense strategy.
Strategic Influence
Increasingly, CISOs and vCISOs are seen as integral to strategic decision-making at the executive level. Their influence as warrior leaders can extend to the boardroom, where they advocate for cybersecurity as a core component of business continuity and competitive advantage.
Smashing Security podcast #411: The fall of Troy, and whisky barrel scammers
Renowned cybersecurity expert Troy Hunt falls victim to a phishing attack, resulting in the exposure of thousands of subscriber details, and don’t lose your life savings in a whisky scam…
All this and more is discussed in the latest edition of the “Smashing Security” podcast by cybersecurity veterans Graham Cluley and Carole Theriault.
Plus! Don’t miss our featured interview with Alastair Paterson, CEO and co-founder of Harmonic Security, discussing how companies can adopt Generative AI without putting their sensitive data at risk.