A US government advisory warns that healthcare organizations are being targeted by BlackCat amid an ongoing cyber-incident affecting Change Healthcare
How the “Frontier” Became the Slogan of Uncontrolled AI
Artificial intelligence (AI) has been billed as the next frontier of humanity: the newly available expanse whose exploration will drive the next era of growth, wealth, and human flourishing. It’s a scary metaphor. Throughout American history, the drive for expansion and the very concept of terrain up for grabs—land grabs, gold rushes, new frontiers—have provided a permission structure for imperialism and exploitation. This could easily hold true for AI.
This isn’t the first time the concept of a frontier has been used as a metaphor for AI, or for technology in general. Since as early as 2018, the powerful foundation models powering cutting-edge applications like chatbots have been called “frontier AI.” In previous decades, the internet itself was considered an electronic frontier. Early cyberspace pioneer John Perry Barlow wrote, “Unlike previous frontiers, this one has no end.” When he and others founded the internet’s most important civil liberties organization, they called it the Electronic Frontier Foundation.
America’s experience with frontiers is fraught, to say the least. Expansion into the Western frontier and beyond has been a driving force in our country’s history and identity—and has led to some of the darkest chapters of our past. The tireless drive to conquer the frontier has directly motivated some of this nation’s most extreme episodes of racism, imperialism, violence, and exploitation.
That history has something to teach us about the material consequences we can expect from the promotion of AI today. The race to build the next great AI app is not the same as the California gold rush. But the potential that outsize profits will warp our priorities, values, and morals is, unfortunately, analogous.
Already, AI is starting to look like a colonialist enterprise. AI tools are helping the world’s largest tech companies grow their power and wealth, are spurring nationalistic competition between empires racing to capture new markets, and threaten to supercharge government surveillance and systems of apartheid. It looks more than a bit like the competition among colonialist state and corporate powers in the seventeenth century, which together carved up the globe and its peoples. By considering America’s past experience with frontiers, we can understand what AI may hold for our future, and how to avoid the worst potential outcomes.
America’s “Frontier” Problem
For 130 years, historians have used frontier expansion to explain sweeping movements in American history. Yet only for the past thirty years have we generally acknowledged its disastrous consequences.
Frederick Jackson Turner famously introduced the frontier as a central concept for understanding American history in his vastly influential 1893 essay. As he concisely wrote, “American history has been in a large degree the history of the colonization of the Great West.”
Turner used the frontier to understand all the essential facts of American life: our culture, way of government, national spirit, our position among world powers, even the “struggle” of slavery. The endless opportunity for westward expansion was a beckoning call that shaped the American way of life. Per Turner’s essay, the frontier resulted in the individualistic self-sufficiency of the settler and gave every (white) man the opportunity to attain economic and political standing through hardscrabble pioneering across dangerous terrain.

The New Western History movement, gaining steam through the 1980s and led by researchers like Patricia Nelson Limerick, laid plain the racial, gender, and class dynamics that were always inherent to the frontier narrative. This movement’s story is one where frontier expansion was a tool used by the white settler to perpetuate a power advantage.

The frontier was not a siren calling out to unwary settlers; it was a justification, used by one group to subjugate another. It was always a convenient, seemingly polite excuse for the powerful to take what they wanted. Turner grappled with some of the negative consequences and contradictions of the frontier ethic and how it shaped American democracy. But many of those whom he influenced did not do this; they celebrated it as a feature, not a bug. Theodore Roosevelt wrote extensively and explicitly about how the frontier and his conception of white supremacy justified expansion to points west and, through the prosecution of the Spanish-American War, far across the Pacific. Woodrow Wilson, too, celebrated the imperial loot from that conflict in 1902.

Capitalist systems are “addicted to geographical expansion” and even, when they run out of geography, seek to produce new kinds of spaces to expand into. This is what the geographer David Harvey calls the “spatial fix.”

Claiming that AI will be a transformative expanse on par with the Louisiana Purchase or the Pacific frontiers is a bold assertion—but increasingly plausible after a year dominated by ever more impressive demonstrations of generative AI tools. It’s a claim bolstered by billions of dollars in corporate investment, by intense interest of regulators and legislators worldwide in steering how AI is developed and used, and by the variously utopian or apocalyptic prognostications from thought leaders of all sectors trying to understand how AI will shape their sphere—and the entire world.
AI as a Permission Structure
Like the western frontier in the nineteenth century, the maniacal drive to unlock progress via advancement in AI can become a justification for political and economic expansionism and an excuse for racial oppression.
In the modern day, OpenAI famously paid dozens of Kenyans little more than a dollar an hour to process data used in training the models underlying products such as ChatGPT. Paying low wages to data labelers surely can’t be equated to the chattel slavery of nineteenth-century America. But these workers did endure brutal conditions, including constant exposure to content depicting “graphic scenes of violence, self-harm, murder, rape, necrophilia, child abuse, bestiality, and incest.” There is a global market for this kind of work, which has been essential to the most important recent advances in AI, such as Reinforcement Learning from Human Feedback, heralded as the key breakthrough behind ChatGPT.
The gold rush mentality associated with expansion is taken by the new frontiersmen as permission to break the rules, and to build wealth at the expense of everyone else. In 1840s California, gold miners trespassed on public lands and yet were allowed to stake private claims to the minerals they found, and even to exploit the water rights on those lands. Again today, the game is to push the boundaries on what rule-breaking society will accept, and hope that the legal system can’t keep up.
Many internet companies have behaved in exactly the same way since the dot-com boom. The prospectors of internet wealth lobbied for, or simply took of their own volition, numerous government benefits in their scramble to capture those frontier markets. For years, the Federal Trade Commission has looked the other way or been lackadaisical in halting antitrust abuses by Amazon, Facebook, and Google. Companies like Uber and Airbnb exploited loopholes in, or ignored outright, local laws on taxis and hotels. And Big Tech platforms enjoyed a liability shield that protected them from punishment for the content people posted to their sites.
We can already see this kind of boundary pushing happening with AI.
Modern frontier AI models are trained using data, often copyrighted materials, with untested legal justification. Data is like water for AI, and, like the fight over water rights in the West, we are repeating a familiar process of public acquiescence to private use of resources. While some lawsuits are pending, so far AI companies have faced no significant penalties for the unauthorized use of this data.
Pioneers of self-driving vehicles tried to skip permitting processes and used fake demonstrations of their capabilities to avoid government regulation and entice consumers. Meanwhile, AI companies hope that they won’t be held liable if the AI tools they produce spew out harmful content that causes damage in the real world. They are trying to use the same liability shield that fostered Big Tech’s exploitation of the previous electronic frontiers—the web and social media—to protect their own actions.
Even where we have concrete rules governing deleterious behavior, some hope that using AI is itself enough to skirt them. Copyright infringement is illegal if a person does it, but would that same person be punished if they train a large language model to regurgitate copyrighted works? In the political sphere, the Federal Election Commission has precious few powers to police political advertising; some wonder whether those rules will simply be treated as irrelevant when people break them using AI.
AI and American Exceptionalism
Like the United States’ historical frontier, AI has a feel of American exceptionalism. Historically, we believed we were different from the Old World powers of Europe because we enjoyed the manifest destiny of unrestrained expansion between the oceans. Today, we have the most CPU power, the most data scientists, the most venture-capital investment, and the most AI companies. This exceptionalism has historically led many Americans to believe they don’t have to play by the same rules as everyone else.
Both historically and in the modern day, this idea has led to deleterious consequences: militaristic nationalism (used to justify foreign interventions in Iraq and elsewhere), the masking of severe inequity within our borders, the abdication of responsibility under global treaties on climate and law enforcement, and alienation from the international community. American exceptionalism has also wrought havoc on our country’s engagement with the internet, including lawless spying and surveillance by forces like the National Security Agency.
The same line of thinking could have disastrous consequences if applied to AI. It could perpetuate a nationalistic, Cold War–style narrative about America’s inexorable struggle with China, this time predicated on an AI arms race. Moral exceptionalism justifies why we should be allowed to use tools and weapons that are dangerous in the hands of a competitor or enemy. It could enable the next stage of growth of the military-industrial complex, with claims of an urgent need to modernize missile systems and drones through using AI. And it could renew a rationalization for violating civil liberties in the US and human rights abroad, empowered by the idea that racial profiling is more objective if enforced by computers.

The inaction of Congress on AI regulation threatens to land the US in a regime of de facto American exceptionalism for AI. While the EU is about to pass its comprehensive AI Act, lobbyists in the US have muddled legislative action. While the Biden administration has used its executive authority and federal purchasing power to exert some limited control over AI, the gap left by the lack of legislation leaves AI in the US looking like the Wild West—a largely unregulated frontier.

The lack of restraint by the US on potentially dangerous AI technologies has a global impact. First, its tech giants let loose their products upon the global public, with the harms this brings. Second, it creates a negative incentive for other jurisdictions to more forcefully regulate AI. The EU’s regulation of high-risk AI use cases begins to look like unilateral disarmament if the US does not take action itself. Why would Europe tie the hands of its tech competitors if the US refuses to do the same?
AI and Unbridled Growth
The fundamental problem with frontiers is that they seem to promise cost-free growth. There was a constant pressure for American westward expansion because a bigger, more populous country accrues more power and wealth to the elites, and because, for any individual, a better life was always one more wagon ride away into “empty” terrain. AI presents the same opportunities. No matter what field you’re in or what problem you’re facing, the attractive opportunity of AI as a free labor multiplier probably seems like the solution, or at least makes for a good sales pitch.
That would actually be okay, except that the growth isn’t free. America’s imperial expansion displaced, harmed, and subjugated native peoples in the Americas, Africa, and the Pacific, while enlisting poor whites to participate in the scheme against their class interests. Capitalism makes growth look like the solution to all problems, even when it’s clearly not. The problem is that so many costs are externalized. Why pay a living wage to human supervisors training AI models when an outsourced gig worker will do it at a fraction of the cost? Why power data centers with renewable energy when it’s cheaper to surge energy production with fossil fuels? And why fund social protections for wage earners displaced by automation if you don’t have to? The potential of consumer applications of AI, from personal digital assistants to self-driving cars, is irresistible; who wouldn’t want a machine to take on the most routinized and aggravating tasks in your daily life? But the externalized cost for consumers is accepting the inevitability of domination by an elite who will extract every possible profit from AI services.
Controlling Our Frontier Impulses
None of these harms are inevitable. Although the structural incentives of capitalism and its growth remain the same, we can make different choices about how to confront them.
We can strengthen basic democratic protections and market regulations to avoid the worst impacts of AI colonialism. We can require ethical employment for the humans toiling to label data and train AI models. And we can set the bar higher for mitigating bias in training and harm from outputs of AI models.
We don’t have to cede all the power and decision making about AI to private actors. We can create an AI public option to provide an alternative to corporate AI. We can provide universal access to ethically built and democratically governed foundational AI models that any individual—or company—could use and build upon.
More ambitiously, we can choose not to privatize the economic gains of AI. We can cap corporate profits, raise the minimum wage, or redistribute an automation dividend as a universal basic income to let everyone share in the benefits of the AI revolution. And, if these technologies save as much labor as companies say they do, maybe we can also all have some of that time back.
And we don’t have to treat the global AI gold rush as a zero-sum game. We can emphasize international cooperation instead of competition. We can align on shared values with international partners and create a global floor for responsible regulation of AI. And we can ensure that access to AI uplifts developing economies instead of further marginalizing them.
This essay was written with Nathan Sanders, and was originally published in Jacobin.
dav1d-1.4.0-1.fc40
FEDORA-2024-12fcc689ac
Packages in this update:
dav1d-1.4.0-1.fc40
Update description:
Update to version 1.4.0.
This version addresses CVE-2024-1580 (see RHBZ#2264939).
TimbreStealer Malware Targets Mexican Victims with Tax-Related Lures
The maker of the Mispadu Trojan started distributing a new infostealer with financial lures to Mexican users, Cisco Talos found
AI governance and preserving privacy
AT&T Cybersecurity featured a dynamic cyber mashup panel with Akamai, Palo Alto Networks, SentinelOne, and the Cloud Security Alliance. We discussed some provocative topics around Artificial Intelligence (AI) and Machine Learning (ML), including responsible AI and securing AI. Panelists shared some good examples of best practices for an emerging AI world, like implementing Zero Trust architecture and anonymizing sensitive data. Many thanks to our panelists for sharing their insights.
Before diving into the hot topics around AI governance and protecting our privacy, let’s define ML and GenAI to provide some background on what they are and what they can do, along with some real-world use cases, for better context on the impact and implications AI will have on our future.
GenAI and ML
Machine Learning (ML) is a subset of AI in which algorithms make decisions or predictions based on data without being explicitly programmed, automatically learning and improving from experience.
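To make that definition concrete, here is a minimal sketch in Python of a model learning from labeled data rather than from hand-written rules. It assumes the scikit-learn library is installed; the dataset and model are illustrative choices, not anything discussed on the panel.

# Fit a classifier to labeled examples; no rules are programmed by hand.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # toy dataset bundled with scikit-learn
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # "learning from experience" (the data)
print(model.score(X_test, y_test))   # accuracy on examples it has never seen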
GenAI is a subset of ML that focuses on creating new data samples that resemble real-world data. GenAI can produce new and original content through deep learning, a method in which data is processed in a way loosely modeled on the human brain, independent of direct human interaction.

GenAI can produce new content in the form of text, images, 3D renderings, video, audio, music, and code. Increasingly, with multimodal capabilities, it can interpret prompts in one data type and generate another: describing an image, generating realistic images, creating vibrant illustrations, predicting contextually relevant content, answering questions in an informative way, and much more.

Real-world use cases include summarizing reports, creating music in a specific style, developing and improving code faster, generating marketing content in different languages, detecting and preventing fraud, optimizing patient interactions, detecting defects and quality issues, and predicting and responding to cyber-attacks with automation capabilities at machine speed.
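As a small, hedged illustration of the generative side, the sketch below uses the open-source Hugging Face transformers library with the small public GPT-2 model (chosen only for convenience; it is not a model referenced above) to continue a text prompt:

# Generate new text from a prompt with a pretrained language model.
# GPT-2 downloads on first run; outputs vary between runs.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("AI governance matters because", max_new_tokens=30)
print(out[0]["generated_text"])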
Responsible AI
Given the power to do good with AI, how do we balance risk and reward for the good of society? What is an organization’s ethos and philosophy around AI governance? What is the organization’s philosophy around the reliability, transparency, accountability, safety, security, privacy, and fairness of AI, and is that philosophy human-centered?
It’s important to build each of these pillars into an organization’s AI innovation and business decision-making. Balancing the risk and reward of bringing AI/ML into an organization’s ecosystem without compromising social responsibility or damaging the company’s brand and reputation is crucial.
At the center of AI, where personal data is the DNA of our identity in a hyperconnected digital world, privacy is a top priority.
Privacy concerns with AI
Cisco’s 2023 consumer privacy survey, a study of over 2,600 consumers in 12 countries, indicates that consumer awareness of data privacy rights continues to grow, with the younger generations (age groups under 45) exercising their Data Subject Access Rights and switching providers over privacy practices and policies. Consumers support AI use but are also concerned.

Among those supporting the use of AI:
48% believe AI can be useful in improving their lives
54% are willing to share anonymized personal data to improve AI products
Still, AI is an area that has some work to do to earn trust:
60% of respondents believe the use of AI by organizations has already eroded trust in them
62% reported concerns about the business use of AI
72% of respondents indicated that having products and solutions audited for bias would make them “somewhat” or “much more comfortable” with AI
Of the 12% who indicated they were regular GenAI users:
63% were realizing significant value from GenAI
Over 30% of users have entered names, addresses, and health information
25% to 28% of users have provided financial information, religion/ethnicity details, and account or ID numbers
These categories of data present data privacy concerns and challenges if exposed to the public. The surveyed respondents indicated concerns with the security and privacy of their data and the reliability or trustworthiness of the information shared.
88% of users said they were “somewhat concerned” or “very concerned” if their data were to be shared
86% were concerned that the information they get from GenAI could be wrong and could be detrimental to humanity.
Private and public partnerships in an evolving AI landscape
While everyone has a role to play in protecting personal data, 50% of consumers believe that national or local government should bear primary responsibility. Of the surveyed respondents, 21% believe that organizations, including private companies, should have primary responsibility for protecting personal data, while 19% said individuals themselves should.
Many of these discussions around AI ethics, AI protection, and privacy protection are occurring at the state, national, and global levels, from the White House to the European Parliament. AI innovators, scientists, designers, developers, engineers, and security experts who design, develop, deploy, operate, and maintain systems in the burgeoning world of AI/ML and cybersecurity play a critical role in society, because what we do matters.
Cybersecurity leaders will need to be at the forefront to adopt human-centric security design practices and develop new ways to better secure AI/ML and LLM applications to ensure proper technical controls and enhanced guardrails are implemented and in place. Privacy professionals will need to continue to educate individuals about their privacy and their rights.
Private and public collaborative partnerships across industry, government agencies, academia, and researchers will continue to be instrumental to promote adoption of a governance framework centered on preserving privacy, regulating privacy protections, securing AI from misuse & cybercriminal activities, and mitigating AI use as a geopolitical weapon.
AI governance
A gold standard for an AI governance model and framework is imperative for the safety and trustworthiness of AI adoption: a governance model that prioritizes the reliability, transparency, accountability, safety, security, privacy, and fairness of AI; one that will help cultivate trust in AI technologies and promote AI innovation while mitigating risks; and an AI framework that will guide organizations through risk considerations such as:
How do we monitor and manage risk with AI?
What is the ability to appropriately measure risk?
What should be the risk tolerance?
What is the risk prioritization?
What is needed to verify?
How is it verified and validated?
What is the impact assessment across human factors and technical, socio-cultural, economic, legal, environmental, and ethical dimensions?
There are some common frameworks emerging like the NIST AI Risk Management Framework. It outlines the following characteristics of trustworthy AI systems: valid & reliable, safe, secure & resilient, accountable & transparent, explainable & interpretable, privacy-enhanced, and fair with harmful bias managed.
The AI RMF has four core functions to govern and manage AI risks: Govern, Map, Measure, and Manage. As part of a regular process within the AI lifecycle, testing, evaluating, verifying, and validating AI systems allows for mid-course remediation and post-hoc risk management.
The U.S. Department of Commerce recently announced that through the National Institute of Standards and Technology (NIST), they will establish the U.S. Artificial Intelligence Safety Institute (USAISI) to lead the U.S. government’s efforts on AI safety and trust. The AI Safety Institute will build on the NIST AI Risk Management Framework to create a benchmark for evaluating and auditing AI models.
The U.S. AI Safety Institute Consortium will enable close collaboration among government agencies, industry, organizations, and impacted communities to help ensure that AI systems are safe and trustworthy.
Preserving privacy and unlocking the full potential of AI
AI not only has durable effects on our business and national interests; it can also have an everlasting impact on our own human interests and existence. Preserving the privacy of AI applications means working to:

Secure AI- and LLM-enabled applications
Secure sensitive data
Anonymize datasets (see the sketch below)
Design and develop for trust and safety
Balance the technical and business advantages of AI against its risks, without compromising human integrity and social responsibility

Doing so will unlock the full potential of AI while maintaining compliance with emerging privacy laws and regulations. An AI risk management framework like NIST’s, which addresses fairness and concerns about bias and equity, with human-centered principles at its core, will play a critical role in building trust in AI within society.
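As one hedged example of the “anonymize datasets” item above, the Python sketch below replaces a direct identifier with a salted hash before data reaches an AI pipeline. The column names and salt are illustrative; strictly speaking, salted hashing is pseudonymization, and real programs layer on masking, generalization, and aggregation.

# Replace a direct identifier with a salted hash (pseudonymization).
import hashlib
import pandas as pd

SALT = b"store-this-secret-outside-the-dataset"  # illustrative value

def pseudonymize(value: str) -> str:
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()

df = pd.DataFrame({"name": ["Ana", "Raj"], "diagnosis": ["flu", "cold"]})
df["name"] = df["name"].map(pseudonymize)  # direct identifier replaced
print(df)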
AI’s risks and benefits to our security, privacy, safety, and lives will have a profound influence on human evolution. AI is perhaps the most consequential development for humanity. This is just the beginning of many more exciting and interesting conversations on AI. One thing is for sure: AI is not going away. It will remain a provocative topic for decades to come.
To learn more
Explore our cybersecurity consulting services to learn how we can help.
Biden Bans Mass Sale of Data to Hostile Nations
A new presidential executive order attempts to prevent the mass sales of personal data to countries like China and Russia
GUloader Unmasked: Decrypting the Threat of Malicious SVG Files
Authored by: Vignesh Dhatchanamoorthy
In the ever-evolving landscape of cybersecurity threats, staying ahead of malicious actors requires a deep understanding of their tactics and tools. Enter GUloader, a potent weapon in the arsenal of cybercriminals worldwide. This sophisticated malware loader has garnered attention for its stealthy techniques and ability to evade detection, posing a significant risk to organizations and individuals.
One of GUloader’s distinguishing features is its utilization of evasion techniques, making it particularly challenging for traditional security measures to detect and mitigate. Through polymorphic code and encryption, GUloader can dynamically alter its structure, effectively masking its presence from antivirus software and intrusion detection systems. This adaptability enables GUloader to persistently infiltrate networks and establish footholds for further malicious activity.
McAfee Labs has observed a recent GUloader campaign being distributed through a malicious SVG file delivered via email.
Scalable Vector Graphics (SVG)
The SVG (Scalable Vector Graphics) file format is a widely used vector image format designed for describing two-dimensional vector and mixed vector/raster graphics in XML. One of the key features of SVG files is their support for interactivity and animation, achieved through JavaScript and CSS.
Modern web browsers such as Google Chrome, Mozilla Firefox, and Microsoft Edge have built-in support for rendering SVG files. When you open an SVG file in Chrome or Firefox, the browser renders the vector graphics using its built-in SVG rendering engine. This engine interprets the XML-based SVG code and displays the image accordingly on the web page.
Browsers treat SVG files as standard web content and handle them seamlessly within their browsing environments.
Execution Chain
Figure 1: Infection chain
The execution process begins with the opening of an SVG file from an email attachment. This action triggers the browser to download a ZIP file. Within this ZIP file is a WSF (Windows Script File), acting as the conduit for the subsequent stage. Upon execution of the WSF, wscript invokes a PowerShell command to establish a connection with a malicious domain and execute the hosted content, which includes shellcode that is injected into the MSBuild application to facilitate further malicious actions.
Figure 2: Process Tree
Technical Analysis
A recipient receives a spam email with malware embedded in its attachment, a malicious SVG file named “dhgle-Skljdf.svg”.
Figure 3: Spam Email
JavaScript smuggled inside the SVG image contained the entire malicious ZIP archive. When the victim opened the attachment, the smuggled code reconstructed the ZIP archive and presented the user with a dialog box to decrypt and save the file.
Figure 4: Saving file prompt
The SVG file uses a Blob object containing the embedded ZIP file in base64 format; when the file is opened, the ZIP is dropped via the browser.
Figure 5: SVG file code
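As a defensive aside (our illustration, not McAfee’s tooling), a triage script can flag SVG attachments that combine inline script with a long base64-like run, the two traits of the smuggling technique described above. The threshold is an assumption chosen for illustration.

# Flag SVGs that pair <script> content with a long base64-like blob.
import re

BASE64_RUN = re.compile(r"[A-Za-z0-9+/=]{500,}")  # long base64-like run

def looks_like_smuggling(svg_path: str) -> bool:
    with open(svg_path, encoding="utf-8", errors="ignore") as f:
        text = f.read()
    return "<script" in text.lower() and bool(BASE64_RUN.search(text))

print(looks_like_smuggling("dhgle-Skljdf.svg"))  # sample name from this campaign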
Inside the zip file, there is an obfuscated WSF (Windows Script File). The WSF script employs several techniques to make analysis quite difficult.
Figure 6: Obfuscated WSF Script
It invokes PowerShell to establish a connection with a malicious domain, subsequently executing the hosted content retrieved from it.
Encoded PowerShell
Figure 7: Encoded PowerShell code
After Decoding
Figure 8: Decoded PowerShell code
URL: hxxps://winderswonders.com/JK/Equitably.mix
The URL hosts base64-encoded content, which, after decoding, contains shellcode and a PowerShell script.
Hosted Content
Figure 9: Hosted Base64 content
After decoding Base64
Figure 10: Decoded Base64 content
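For analysts, the decoding step shown in Figures 9 and 10 can be reproduced offline with a few lines of Python. The file name assumes the hosted content was saved from an isolated lab capture; never fetch live payloads from an analysis workstation.

# Base64-decode a saved copy of the hosted content for inspection.
import base64

with open("Equitably.mix", "rb") as f:  # saved in the lab; name from the URL above
    decoded = base64.b64decode(f.read())
print(decoded[:200])  # first bytes of the decoded shellcode/PowerShell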
The decoded PowerShell script attempts to load the shellcode into the legitimate MSBuild process using the Process Hollowing technique.
After injection, the shellcode performs anti-analysis checks and then modifies the Registry Run key to achieve persistence:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run
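Defenders can review this key by hand or with a short read-only script. The sketch below (Windows-only, standard library; illustrative rather than detection logic) lists the values under the current user’s Run key so unexpected entries stand out:

# Enumerate values under the current user's Run key (read-only).
import winreg

path = r"Software\Microsoft\Windows\CurrentVersion\Run"
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, path) as key:
    i = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, i)
        except OSError:  # no more values under this key
            break
        print(f"{name} = {value}")
        i += 1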
In the final stage, the injected shellcode downloads and executes the final malicious payload. GuLoader can also download and deploy a wide range of other malware variants.
Indicators of Compromise (IOCs)

File    SHA256/URL
Email   66b04a8aaa06695fd718a7d1baa19386922b58e797634d5ac4ff96e79584f5c1
SVG     b20ea4faca043274bfbb1f52895c02a15cd0c81a333c40de32ed7ddd2b9b60c0
WSF     0a196171571adc8eb9edb164b44b7918f83a8425ec3328d9ebbec14d7e9e5d93
URL     hxxps://winderswonders[.]com/JK/Equitably[.]mix
Smashing Security podcast #361: Wireless charging woe, AI romance apps, and ransomware revisited
Your smartphone may be toast – if you use a hacked wireless charger. We take a closer look at the latest developments in the unfolding LockBit ransomware drama, and Carole dips her toe into online AI romance apps. All this and much much more is discussed in the latest edition of the “Smashing Security” podcast …
How to interpret the MITRE Engenuity ATT&CK® Evaluations: Enterprise
Graham Cluley Security News is sponsored this week by the folks at Cynet. Thanks to the great team there for their support! Thorough, independent tests are a vital resource as cybersecurity leaders and their teams evaluate vendors’ abilities to guard against increasingly sophisticated threats to their organization. And perhaps no assessment is more widely trusted …
USN-6648-2: Linux kernel (Azure) vulnerabilities
It was discovered that a race condition existed in the AppleTalk networking
subsystem of the Linux kernel, leading to a use-after-free vulnerability. A
local attacker could use this to cause a denial of service (system crash)
or possibly execute arbitrary code. (CVE-2023-51781)
Zhenghan Wang discovered that the generic ID allocator implementation in
the Linux kernel did not properly check for null bitmap when releasing IDs.
A local attacker could use this to cause a denial of service (system
crash). (CVE-2023-6915)
Robert Morris discovered that the CIFS network file system implementation
in the Linux kernel did not properly validate certain server commands
fields, leading to an out-of-bounds read vulnerability. An attacker could
use this to cause a denial of service (system crash) or possibly expose
sensitive information. (CVE-2024-0565)
Jann Horn discovered that the TLS subsystem in the Linux kernel did not
properly handle spliced messages, leading to an out-of-bounds write
vulnerability. A local attacker could use this to cause a denial of service
(system crash) or possibly execute arbitrary code. (CVE-2024-0646)