thunderbird-115.12.1-1.fc40


FEDORA-2024-bf1c613d5a

Packages in this update:

thunderbird-115.12.1-1.fc40

Update description:

Update to 115.12.1

https://www.thunderbird.net/en-US/thunderbird/115.12.1/releasenotes/
https://www.mozilla.org/en-US/security/advisories/mfsa2024-28/


LevelBlue Labs Discovers Highly Evasive, New Loader Targeting Chinese Organizations


Executive Summary

LevelBlue Labs recently discovered a new, highly evasive loader being delivered to specific targets through phishing attachments. A loader is a type of malware used to load a second-stage payload onto a victim’s system. Given the lack of previous samples observed in the wild and the malware’s clear efforts at deception and evasion, LevelBlue Labs has named it “SquidLoader.” After analyzing the sample LevelBlue Labs retrieved, we uncovered several techniques SquidLoader uses to avoid being statically or dynamically analyzed. LevelBlue Labs first observed SquidLoader in campaigns in late April 2024, and we assess it had been active for at least a month prior.

The second-stage payload SquidLoader delivered in our sample is a Cobalt Strike beacon that had been modified to harden it against static analysis. Based on SquidLoader’s configuration, LevelBlue Labs assesses that the same unidentified actor has been delivering sporadic campaigns over the last two years, mainly targeting Chinese-speaking victims. Although this threat actor appears to focus on a single country, their techniques and tactics may soon be replicated against non-Chinese-speaking organizations by other actors or malware authors seeking to evade detection.

Loader Analysis

In late April 2024, LevelBlue Labs observed a few executables potentially attached to phishing emails. One of the samples was ‘914b1b3180e7ec1980d0bafe6fa36daade752bb26aec572399d2f59436eaa635’, with a Chinese filename translating to “Huawei industrial-grade router related product introduction and excellent customer cases.” All the samples LevelBlue Labs observed were named after Chinese companies, such as China Mobile Group Shaanxi Co Ltd, Jiaqi Intelligent Technology, or Yellow River Conservancy Technical Institute (YRCTI). All carried descriptive filenames aimed at luring employees into opening them, along with an icon corresponding to a Word document, while in fact being executable binaries.

These samples are loaders that download and execute a shellcode payload via an HTTPS GET request to the /flag.jpg URI. They feature heavy evasion and decoy mechanisms that help them remain undetected while also hindering analysis. The delivered shellcode is loaded within the loader’s own process, likely to avoid writing the payload to disk and thus risking detection.

Due to all the decoy and evasion techniques observed in this loader, and the absence of previous similar samples, LevelBlue Labs has named this malware “SquidLoader”.

Most of the samples LevelBlue Labs observed use a legitimate expired certificate to make the file look less suspicious. The invalid certificate (which expired on July 15, 2021) was issued to Hangzhou Infogo Tech Co., Ltd. It has the thumbprint “3F984B8706702DB13F26AE73BD4C591C5936344F” and serial number “02 0E B5 27 BA C0 10 99 59 3E 2E A9 02 E3 97 CB.” However, it is not the only invalid certificate used to sign the malicious samples. 

The command and control (C&C) servers SquidLoader uses employ a self-signed certificate. Over the course of this investigation, all the discovered C&C servers used a certificate with the following fields for both the issuer and the subject:

Common Name: localhost
Organizational Unit: group
Organization: Company
Locality: Nanjing
State/Province: Jiangsu
Country: CN 

When first executed, SquidLoader copies itself to a predefined location (unless the loader is already present there) and then restarts from the new location. In this case the target location was C:\BakFiles\install.exe. This appears to be a deliberate decoy, executing the loader under a non-suspicious name, since the loader does not pursue any persistence method. Even though SquidLoader features no persistence mechanism of its own, the observed second-stage payload (Cobalt Strike) can create services and modify registry keys, which enables the C&C operators to achieve persistence on demand.

This shellcode is delivered in the body of the HTTPS response, encrypted with a 5-byte XOR key. For the sample LevelBlue Labs analyzed, the key was hardcoded with a value of “DE FF CC 8F 9A” after accounting for little-endian storage.

Figure 1: XOR decoding of the shellcode.
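As a minimal sketch of that decoding step (the repeating 5-byte key is taken from the sample; the function name is ours):

```python
def xor_decrypt(data: bytes, key: bytes = bytes.fromhex("DEFFCC8F9A")) -> bytes:
    """Repeating-key XOR, as used to decode the downloaded shellcode.

    XOR is its own inverse, so the same function both encrypts and decrypts.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
```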

Despite carrying a filename and icon that claim it is a Word document to deceive the victim, the samples include large amounts of code referencing popular software products such as WeChat or mingw-gcc, in an attempt to mislead security researchers inspecting the file. In addition, the file and PE metadata carry references to these companies, masquerading the loader as a legitimate component of said products. However, this code is never executed, as the execution flow is transferred to the loaded payload before it reaches that point. As an example, the code below referencing WeChat was found in the WinMain function of one of the discovered samples.

Figure 2: WeChat code never executed.

Other samples reference different software products, such as mingw-gcc. Even though this decoy code is included, all observed executables carry icons resembling the Microsoft Office icon for Word documents, which makes the decoy not very credible. The malicious code even generates an alert stating “File format error, cannot be opened” in simplified Chinese.

Figure 3: Alert generated by malicious code.

Defense Evasion Techniques

SquidLoader caught our attention not only because of how few detections it had, but also because of how many defense evasion and obfuscation techniques it uses. Among the observed techniques are:

Usage of pointless or obscure instructions: Some functions in the binaries include obscure and otherwise pointless x86 instructions, for example “pause”, “mfence”, or “lfence”. As can be seen in the sections below, some functions also include filler instructions, like random arithmetic calculations whose results are never used. This is likely an attempt to break or bypass antivirus emulators, which may not implement less-common instructions or may cap the number of instructions they emulate.

Encrypted code sections: Immediately after starting execution, the malware loads a bundled encrypted shellcode. It decrypts the shellcode into a dynamically allocated memory section, gives that section execution privileges, and finally invokes it. The encryption algorithm is a single-byte XOR with a fixed displacement, as can be observed in Figure 4; the decryption loop also includes pointless decoy instructions to further obfuscate the code’s purpose.

Figure 4: Shellcode XOR decryption among useless instructions.
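The report does not spell out the exact decryption loop, but a minimal sketch of one plausible reading of “single-byte XOR with a fixed displacement” might look like this (the key and displacement values below are illustrative, not taken from the sample):

```python
def decrypt_section(blob: bytes, key: int = 0x5A, displacement: int = 4) -> bytes:
    """Single-byte XOR where the ciphertext begins at a fixed displacement
    into the embedded blob. Key and displacement are illustrative."""
    return bytes(b ^ key for b in blob[displacement:])
```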

In-stack encrypted strings: Keywords that could easily be associated with malicious activity, and other sensitive strings in the encrypted shellcode, are embedded in each function body as XOR-encrypted local variables. The strings are decrypted only when needed, using a multibyte XOR key. Storing strings on the stack makes it easier to conceal sensitive information, since their contents are wiped from memory once the stack frame they reside in is overwritten by a newer one. In the example below, the malware decrypts the string “NtWriteVirtualMemory” to later resolve the API.

Figure 5: Encrypted sensitive strings embedded in the function body as local variables
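A minimal re-creation of the stack-string scheme, with a hypothetical multibyte key (the real keys are per-function and embedded in the binary):

```python
KEY = bytes.fromhex("1F2E3D4C")  # hypothetical multibyte XOR key

def decrypt_stack_string(enc: bytes, key: bytes = KEY) -> str:
    """Decrypt an XOR-encrypted string stored as local-variable constants."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(enc)).decode()

# In the binary, the encrypted bytes live on the stack as immediates
# written into local variables; here we just precompute them:
enc_api = bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(b"NtWriteVirtualMemory"))
```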

Jumping to the middle of instructions: Some functions include a “call” or a “jmp” instruction to an address within another function. The jumps are crafted in such a way that linear disassemblers consider them to be the middle of another instruction, thus producing incorrect assembly for the function body.

As an example, in Figure 6a we can see one of such calls made by the malware.

If we explore the target location 14000770E + 2 (Figure 6b), IDA generates incorrect assembly output because the address falls in the middle of what IDA considers a different function, and 140007710 does not even show up. If we manually mark the beginning of a function at that address, IDA identifies a different set of operations, one that allows us to properly disassemble the malicious actions taken by the loader (Figure 6c).

Figure 6a: Call to a new function
Figure 6b: Wrong function parsing by IDA
Figure 6c: Fixed function parsing by IDA

It is worth noting that the hidden function we disassembled in Figure 6c is located within the “__scrt_common_main_seh” function, and the called target is the routine that decrypts and executes the bundled loader shellcode. “__scrt_common_main_seh” is generated by the standard Microsoft C compiler and is responsible for starting WinMain / main; in other words, a place where custom code is not supposed to be. Therefore, the normal and expected program flow starting at WinMain is altered, providing yet another way of hiding malicious code in unexpected places. In summary, this technique can:

– Hide code in areas reserved for Windows default functions.

– Conceal code by exploiting IDA’s automated disassembly process.

Return address obfuscation: The routine responsible for loading and executing the shellcode mentioned in the previous section also performs return address obfuscation via stack manipulation. At the beginning of the routine in Figure 7a we can observe how the return address points to __scrt_common_main_seh+14. The stack is then manipulated via improper stack cleanup after the last function call. This results in a stack that points to the decrypted shellcode address as its return address when the function reaches the retn instruction. The main purpose of this technique is to hinder any person or tool analyzing this code.

Figure 7a: Original return address
Figure 7b: Actual return address when executing retn, highlighted in blue

Control Flow Graph (CFG) obfuscation: One of the most easily identifiable obfuscation features of this family is the CFG obfuscation of the shellcode functions. The CFG is flattened into one or several infinite loops containing a vast switch statement. The switch is controlled by a variable that gets assigned seemingly random values to pick the next branch to be executed. This obfuscation makes it almost impossible to know in what order the switch blocks will execute, or whether they will execute at all, without dynamic analysis. An example of the CFG obfuscation found in the malware can be seen below.

Figure 8: CFG technique with infinite loops and manifold switches
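A toy Python illustration of the flattening pattern described above (the state values are invented; the real dispatcher uses seemingly random constants):

```python
def flattened_example(x: int) -> int:
    """Toy control-flow flattening: the function's real basic blocks are
    dispatched from a single infinite loop keyed by an opaque state
    variable, so the static order of blocks reveals nothing about
    execution order."""
    state = 0x3F2A
    while True:
        if state == 0x3F2A:    # block A: originally the first block
            x += 1
            state = 0x91C7
        elif state == 0x91C7:  # block B: originally the second block
            x *= 2
            state = 0x0DE1
        elif state == 0x0DE1:  # exit block
            return x
```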

Debugger detection: The loader searches for the presence of debuggers at several points during its execution with three different detection methods and will crash itself by executing illegal instructions if detected.

1. The first of these methods checks the list of running processes against a list of known debugger process names. The running process list is obtained by calling NtQuerySystemInformation with the SystemProcessInformation (0x5) information class. The full list of checked process names is:

Ida64.exe
Ida.exe
DbgX.Shell
Windbg.exe
X32dbg.exe
X64dbg.exe
Olldbg.exe

Figure 9: Checking a running process against a list of blacklisted process names (XOR encrypted)
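The check logic can be restated in a few lines (the blacklist mirrors the process names listed above, lowercased; the helper name is ours, and the process list would come from NtQuerySystemInformation in the real sample):

```python
DEBUGGER_PROCESSES = {
    "ida64.exe", "ida.exe", "dbgx.shell", "windbg.exe",
    "x32dbg.exe", "x64dbg.exe", "olldbg.exe",
}

def debugger_running(process_names) -> bool:
    """Case-insensitive comparison of each running process name against
    the loader's blacklist of known debuggers."""
    return any(name.lower() in DEBUGGER_PROCESSES for name in process_names)
```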

2. Later in the execution flow, the loader performs another check, looking for a debugger attached to the running process by calling NtQueryInformationProcess with the undocumented 0x1e value for the ProcessInformationClass parameter. This instructs the API to return the “debug object” of the process. 

NtQueryInformationProcess (in: ProcessHandle=0xffffffffffffffff, ProcessInformationClass=0x1e, ProcessInformation=0x26ce8ff788, ProcessInformationLength=0x8, ReturnLength=0x26ce8ff788 | out: ProcessInformation=0x26ce8ff788, ReturnLength=0x26ce8ff788) returned 0xc0000353

3. The loader also looks for the presence of a kernel debugger by calling NtQuerySystemInformation with SystemKernelDebuggerInformation (0x23) system information class.

NtQuerySystemInformation (in: SystemInformationClass=0x23, SystemInformation=0x26ce8ff388, Length=0x2, ResultLength=0x0 | out: SystemInformation=0x26ce8ff388, ResultLength=0x0) returned 0x0

Curiously, if the loader detects the presence of a debugger, besides crashing itself it also replaces the prologue of WinHttpConnect with a jump to the loader’s own entry point. As a result, the loader fails to properly load the library and produces no network traffic to the command and control (C&C) server when it reaches the payload download section. Figure 10 displays a debugger showing the replaced WinHttpConnect prologue on the left versus the actual prologue in IDA on the right.

Figure 10: Code modifications after a debugger is detected

File checking: The loader also checks for the existence of the following three files and exits if it finds any of the three, but the purpose of this check is unconfirmed:

C:\temp\diskpartScript.txt
C:\Users\Admin\My Pictures\My Wallpaper.jpg
C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
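A sketch of the guardrail check, assuming the backslashes stripped from the paths above (the helper name and structure are ours):

```python
import os

GUARD_FILES = [
    r"C:\temp\diskpartScript.txt",
    r"C:\Users\Admin\My Pictures\My Wallpaper.jpg",
    r"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe",
]

def should_exit(paths=GUARD_FILES, exists=os.path.exists) -> bool:
    """The loader exits if any of the three files is present; the purpose
    of the check (possibly a sandbox fingerprint) is unconfirmed."""
    return any(exists(p) for p in paths)
```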

Performing direct syscalls: Whenever possible, the malware avoids calling Windows NT APIs and instead performs its own syscalls. The malware author created several NT API wrappers, one for each parameter count they needed. As an example, the wrapper for an NT API with four parameters can be seen in Figure 11. Note that IDA wrongly shows a function signature accepting only one parameter; the actual function accepts four, as would be expected.

Figure 11: NT API wrapper parsed by IDA with 1 parameter instead of 4.

In this case the wrapper resolves NtQuerySystemInformation, as can be seen from the value returned in RAX. The +12 offset from the function start corresponds to the “syscall” x86 instruction within NtQuerySystemInformation’s function body. The function below the current one (highlighted in blue) prepares the stack and registers for the “syscall” instruction. Finally, “jump_to_syscall” moves the given syscall number into EAX and jumps to “NtQuerySystemInformation+12”. This avoids calling the NT APIs entirely, bypassing potential hooks and preventing the calls from showing up in execution logs.

Figure 12: jump_to_syscall function body.

Figure 13: the jmp instruction jumps directly to the syscall instruction.

Delivered Payload

During the time LevelBlue Labs has been analyzing this sample and the C&C server has been online, we have observed only one unique payload being loaded: Cobalt Strike. The beacon contains the same type of CFG obfuscation found in the loader, so it was probably modified by the same authors. However, it contains no anti-debug or anti-VM mechanisms, presumably because the loader has already performed those checks.

When executed, the payload performs an HTTPS GET request to the /api/v1/pods URI in an attempt to resemble Kubernetes traffic. For the gathered samples, the C&C was always the same one the loader used to download the payload. If the C&C does not reply, or the response is not in the expected format, the payload keeps pinging the C&C in an endless loop.

Figure 14: C&C request sample.

From the above request, the X-Method header stands out. This HTTP header signals the intent of the request and can take three possible values:

con: Initial connection request / call home
snd: Exfiltrating system information to the C&C
rcv: Pinging the C&C to receive tasks

This configuration is non-standard for a Cobalt Strike beacon and has already been observed in different campaigns over the past few years, specifically targeting Chinese-speaking users, which is consistent with the observed behavior of the loader. The payload then reads the server’s response and checks that certain features are present:

HTTP response code should be 200.
An X-Session HTTP header should be present.
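Those two checks can be expressed as a small predicate (the function name is ours):

```python
def valid_c2_response(status: int, headers: dict) -> bool:
    """The payload proceeds only when the server replies with HTTP 200
    and the response carries an X-Session header."""
    return status == 200 and any(h.lower() == "x-session" for h in headers)
```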

If the response has the mentioned features, the payload begins gathering system information to later exfiltrate via an HTTP POST request to /api/v1/pods. The gathered information is: username, computer name, ACP, OEMCP, and the IP addresses of network interfaces.

Figure 15: Collecting system information.

The exfiltrated information is sent in encrypted binary form in the HTTP POST body.

Figure 16: Exfiltrating encrypted system information.

After exfiltrating the system information, the payload starts polling the C&C for tasks by sending HTTP GET requests to the same URL, this time with X-Method: rcv. After sending such a request, the RAT checks the response for the HTTP header X-Fin: true (the C&C signaling it has no more data). If X-Fin is not set to true, it keeps reading responses until the C&C signals the end. The C&C sends its instructions in the response body in encrypted binary form. The encryption algorithm is based on an extensive number of bitwise operations.

Figure 17: Encryption routine.
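The polling behavior described above can be sketched as follows (the HTTPS transport is abstracted behind a callback; the names are ours):

```python
def fetch_tasks(get_response):
    """Keep issuing 'rcv' requests until the C&C sets X-Fin: true.

    get_response stands in for the HTTPS GET to /api/v1/pods with the
    header X-Method: rcv; it returns (headers, body) per response.
    """
    chunks = []
    while True:
        headers, body = get_response({"X-Method": "rcv"})
        chunks.append(body)
        if headers.get("X-Fin") == "true":  # C&C signals no more data
            return b"".join(chunks)
```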

Evasion

Win32 API obfuscation

The payload needs to be position-independent, so WinAPI imports are resolved dynamically. The malware creates a table in memory with the addresses of all the API functions it needs. Instead of storing the addresses directly, the malware stores the result of ~(_DWORD) api_addr & 0xCAFECAFE | api_addr & 0xFFFFFFFF35013501.

Figure 18: Storing API function addresses.

This needs to be undone before calling the APIs, so API calls look like this:

Figure 19: Unfurling API function addresses and performing the call.
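The masking arithmetic is worth unpacking: because 0xCAFECAFE and 0x35013501 are bit-disjoint and together cover all 32 low bits, the stored value is simply the address XORed with 0xCAFECAFE, so the transform is its own inverse. A sketch (the function names are ours):

```python
MASK_FLIP = 0xCAFECAFE          # low bits that get inverted
MASK_KEEP = 0xFFFFFFFF35013501  # upper 32 bits plus the complementary low bits

def obfuscate(addr: int) -> int:
    """Store-side transform from the sample:
    ~(_DWORD)api_addr & 0xCAFECAFE | api_addr & 0xFFFFFFFF35013501.
    Equivalent to addr ^ 0xCAFECAFE on a 64-bit address."""
    return ((~addr & MASK_FLIP) | (addr & MASK_KEEP)) & 0xFFFFFFFFFFFFFFFF

deobfuscate = obfuscate  # the transform is an involution
```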

Conclusion

The SquidLoader sample LevelBlue Labs analyzed clearly makes an effort to avoid detection and both static and dynamic analysis. Additionally, the threat actor has been using the same Cobalt Strike beacon configuration to target Chinese-speaking victims for more than two years. The analysis in this report may not include enough data to classify this threat actor as an APT; however, the TTPs observed resemble those of one.

Additionally, given the success SquidLoader has shown in evading detection, it is likely that threat actors targeting demographics beyond China will begin to mimic its techniques, helping them elude detection and analysis of their own malware samples. LevelBlue Labs will continue to track this threat actor and the techniques described in this blog to keep our clients protected from the latest trends in malware development.


Detection Methods

The following associated detection methods are in use by LevelBlue Labs. You can use them to tune or deploy detections in your own environments or for your additional research.

SURICATA IDS SIGNATURES
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"AV TROJAN SquidLoader CobaltStrike CnC Checkin"; flow:to_server,established; content:"GET"; http_method; content:"X-Method|3a 20|"; http_header; pcre:"/X-Method\x3a\x20(con|rcv)\x0d\x0a/H"; reference:md5,60bec57db4f367e60c6961029d952fa6; classtype:trojan-activity; sid:4002768; rev:1; metadata:created_at 2024_06_07, updated_at 2024_06_07;)
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"AV TROJAN SquidLoader CobaltStrike CnC Request"; flow:established,to_server; content:"POST"; http_method; content:"X-Method|3a 20|snd|0d 0a|"; http_header; content:"X-Session|3a 20|"; http_header; reference:md5,60bec57db4f367e60c6961029d952fa6; classtype:trojan-activity; sid:4002769; rev:1; metadata:created_at 2024_06_07, updated_at 2024_06_07;)

 

Associated Indicators (IOCs)

The following technical indicators are associated with the reported intelligence. A list of indicators is also available in the OTX pulse. Please note, the pulse may include other related activities outside the scope of this report.

See the full information on IOCs.

 

SquidLoader Mapped to MITRE ATT&CK

The findings of this report are mapped to the following MITRE ATT&CK Matrix techniques:

● TA0001: Initial Access

○ T1566: Phishing

■ T1566.001: Spearphishing Attachment

○ T1589: Gather Victim Identity Information

■ T1589.002: Email Addresses

■ T1589.003: Employee Names

● TA0005: Defense Evasion

○ T1036: Masquerading

■ T1036.005: Match Legitimate Name or Location

■ T1036.008: Masquerade File Type

○ T1127: Trusted Developer Utilities Proxy Execution

○ T1140: Deobfuscate/Decode Files or Information

○ T1480: Execution Guardrails

○ T1622: Debugger Evasion

● TA0011: Command and Control

○ T1573: Encrypted Channel

■ T1573.001: Symmetric Cryptography


USN-6841-1: PHP vulnerability


It was discovered that PHP could return early in the filter_var function,
resulting in invalid user information being treated as valid user
information. An attacker could possibly use this issue to expose raw
user input information.


The Hacking of Culture and the Creation of Socio-Technical Debt


Culture is increasingly mediated through algorithms. These algorithms have splintered the organization of culture, a result of states and tech companies vying for influence over mass audiences. One byproduct of this splintering is a shift from imperfect but broad cultural narratives to a proliferation of niche groups, who are defined by ideology or aesthetics instead of nationality or geography. This change reflects a material shift in the relationship between collective identity and power, and illustrates how states no longer have exclusive domain over either. Today, both power and culture are increasingly corporate.

Blending Stewart Brand and Jean-Jacques Rousseau, McKenzie Wark writes in A Hacker Manifesto that “information wants to be free but is everywhere in chains.”1 Sounding simultaneously harmless and revolutionary, Wark’s assertion, made as part of her analysis of the role of what she terms “the hacker class” in creating new world orders, points to one of the main ideas that became foundational to the reorganization of power in the era of the internet: that “information wants to be free.” This credo, itself a co-option of Brand’s influential original assertion in a conversation with Apple cofounder Steve Wozniak at the 1984 Hackers Conference and later in his 1987 book The Media Lab: Inventing the Future at MIT, became a central ethos for early internet inventors, activists,2 and entrepreneurs. Ultimately, this notion was foundational in the construction of the era we find ourselves in today: an era in which internet companies dominate public and private life. These companies used the supposed desire of information to be free as a pretext for building platforms that allowed people to connect and share content. Over time, this development helped facilitate the definitive power transfer of our time, from states to corporations.

This power transfer was enabled in part by personal data and its potential power to influence people’s behavior—a critical goal in both politics and business. The pioneers of the digital advertising industry claimed that the more data they had about people, the more they could influence their behavior. In this way, they used data as a proxy for influence, and built the business case for mass digital surveillance. The big idea was that data can accurately model, predict, and influence the behavior of everyone—from consumers to voters to criminals. In reality, the relationship between data and influence is fuzzier, since influence is hard to measure or quantify. But the idea of data as a proxy for influence is appealing precisely because data is quantifiable, whereas influence is vague. The business model of Google Ads, Facebook, Experian, and similar companies works because data is cheap to gather, and the effectiveness of the resulting influence is difficult to measure. The credo was “Build the platform, harvest the data…then profit.” By 2006, a major policy paper could ask, “Is Data the New Oil?”3

The digital platforms that have succeeded most in attracting and sustaining mass attention—Facebook, TikTok, Instagram—have become cultural. The design of these platforms dictates the circulation of customs, symbols, stories, values, and norms that bind people together in protocols of shared identity. Culture, as articulated through human systems such as art and media, is a kind of social infrastructure. Put differently, culture is the operating system of society.

Like any well-designed operating system, culture is invisible to most people most of the time. Hidden in plain sight, we make use of it constantly without realizing it. As an operating system, culture forms the base infrastructure layer of societal interaction, facilitating communication, cooperation, and interrelations. Always evolving, culture is elastic: we build on it, remix it, and even break it.

Culture can also be hacked—subverted for specific advantage.4 If culture is like an operating system, then to hack it is to exploit the design of that system to gain unauthorized control and manipulate it towards a specific end. This can be for good or for bad. The morality of the hack depends on the intent and actions of the hacker.

When businesses hack culture to gather data, they are not necessarily destroying or burning down social fabrics and cultural infrastructure. Rather, they reroute the way information and value circulate, for the benefit of their shareholders. This isn’t new. There have been culture hacks before. For example, by lending it covert support, the CIA hacked the abstract expressionism movement to promote the idea that capitalism was friendly to high culture.5 Advertising appropriated the folk-cultural images of Santa Claus and the American cowboy to sell Coca-Cola and Marlboro cigarettes, respectively. In Mexico, after the revolution of 1910, the ruling party hacked muralist works, aiming to construct a unifying national narrative.

Culture hacks under digital capitalism are different. Whereas traditional propaganda goes in one direction—from government to population, or from corporation to customers—the internet-surveillance business works in two directions: extracting data while pushing engaging content. The extracted data is used to determine what content a user would find most engaging, and that engagement is used to extract more data, and so on. The goal is to keep as many users as possible on platforms for as long as possible, in order to sell access to those users to advertisers. Another difference between traditional propaganda and digital platforms is that the former aims to craft messages with broad appeal, while the latter hyper-personalizes content for individual users.

The rise of Chinese-owned TikTok has triggered heated debate in the US about the potential for a foreign-owned platform to influence users by manipulating what they see. Never mind that US corporations have used similar tactics for years. While the political commitments of platform owners are indeed consequential—Chinese-owned companies are in service to the Chinese Communist Party, while US-owned companies are in service to business goals—the far more pressing issue is that both have virtually unchecked surveillance power. They are both reshaping societies by hacking culture to extract data and serve content. Funny memes, shocking news, and aspirational images all function similarly: they provide companies with unprecedented access to societies’ collective dreams and fears.6 By determining who sees what when and where, platform owners influence how societies articulate their understanding of themselves.

Tech companies want us to believe that algorithmically determined content is effectively neutral: that it merely reflects the user’s behavior and tastes back at them. In 2021, Instagram head Adam Mosseri wrote a post on the company’s blog entitled “Shedding More Light on How Instagram Works.” A similar window into TikTok’s functioning was provided by journalist Ben Smith in his article “How TikTok Reads Your Mind.”7 Both pieces boil down to roughly the same idea: “We use complicated math to give you more of what your behavior shows us you really like.”

This has two consequences. First, companies that control what users see in a nontransparent way influence how we perceive the world. They can even shape our personal relationships. Second, by optimizing algorithms for individual attention, a sense of culture as common ground is lost. Rather than binding people through shared narratives, digital platforms fracture common cultural norms into self-reinforcing filter bubbles.8

This fragmentation of shared cultural identity reflects how the data surveillance business is rewriting both the established order of global power, and social contracts between national governments and their citizens. Before the internet, in the era of the modern state, imperfect but broad narratives shaped distinct cultural identities; “Mexican culture” was different from “French culture,” and so on. These narratives were designed to carve away an “us” from “them,” in a way that served government aims. Culture has long been understood to operate within the envelope of nationality, as exemplified by the organization of museum collections according to the nationality of artists, or by the Venice Biennale—the Olympics of the art world, with its national pavilions format.

National culture, however, is about more than museum collections or promoting tourism. It broadly legitimizes state power by emotionally binding citizens to a self-understood identity. This identity helps ensure a continuing supply of military recruits to fight for the preservation of the state. Sociologist James Davison Hunter, who popularized the phrase “culture war,” stresses that culture is used to justify violence to defend these identities.9 We saw an example of this on January 6, 2021, with the storming of the US Capitol. Many of those involved were motivated by a desire to defend a certain idea of cultural identity they believed was under threat.

Military priorities were also entangled with the origins of the tech industry. The US Department of Defense funded ARPANET, the first version of the internet. But the internet wouldn’t have become what it is today without the influence of both West Coast counterculture and small-l libertarianism, which saw the early internet as primarily a space to connect and play. One of the first digital game designers was Bernie De Koven, founder of the Games Preserve Foundation. A noted game theorist, he was inspired by Stewart Brand’s interest in “play-ins” to start a center dedicated to play. Brand had envisioned play-ins as an alternative form of protest against the Vietnam War; they would be their own “soft war” of subversion against the military.10 But the rise of digital surveillance as the business model of nascent tech corporations would hack this anti-establishment spirit, turning instruments of social cohesion and connection into instruments of control.

It’s this counterculture side of tech’s lineage, which advocated for the social value of play, that attuned the tech industry to the utility of culture. We see the commingling of play and military control in Brand’s Whole Earth Catalog, which was a huge influence on early tech culture. Described as “a kind of Bible for counterculture technology,” the Whole Earth Catalog was popular with the first generation of internet engineers, and established crucial “assumptions about the ideal relationships between information, technology, and community.”11 Brand’s 1972 Rolling Stone article “Spacewar: Fantastic Life and Symbolic Death Among the Computer” further emphasized how rudimentary video games were central to the engineering community. These games were wildly popular at leading engineering research centers: Stanford, MIT, ARPA, Xerox, and others. This passion for gaming as an expression of technical skills and a way for hacker communities to bond led to the development of MUD (Multi-User Dungeon) programs, which enabled multiple people to communicate and collaborate online simultaneously.

The first MUD was developed in 1978 by engineers who wanted to play fantasy games online. It applied the early-internet ethos of decentralism and personalization to video games, making it a precursor to massively multiplayer online role-playing games and to modern chat rooms and Facebook groups. Today, these video games and game-like simulations—now a commercial industry worth around $200 billion12—serve as important recruitment and training tools for the military.13 The history of the tech industry and its culture is full of this tension between the internet as an engineering plaything and as a surveillance commodity.

Historically, infrastructure businesses—like railroad companies in the nineteenth-century US—have always wielded considerable power. Internet companies that are also infrastructure businesses combine commercial interests with influence over national and individual security. As we transitioned from railroad tycoons connecting physical space to cloud computing companies connecting digital space, the pace of technological development put governments at a disadvantage. The result is that corporations now lead the development of new tech (a reversal from the ARPANET days), and governments follow, struggling to modernize public services in line with the new tech. Companies like Microsoft are functionally providing national cybersecurity. Starlink, Elon Musk’s satellite internet service, is a consumer product that facilitates military communications for the war in Ukraine. Traditionally, this kind of service had been restricted to selected users and was the purview of states.14 Increasingly, it is clear that a handful of transnational companies are using their technological advantages to consolidate economic and political power to a degree previously afforded to only great-power nations.

Worse, since these companies operate across multiple countries and regions, there is no regulatory body with the jurisdiction to effectively constrain them. This transfer of authority from states to corporations, together with the nature of surveillance as the business model of the internet, rewrites social contracts between national governments and their citizens. It also blurs the lines among citizen, consumer, and worker. An example of this is Google’s reCAPTCHAs, the visual image puzzles used in cybersecurity to “prove” that a user is a human and not a bot. While companies and governments use these puzzles to add a layer of security to their sites, their real value lies in how they record users’ input in solving the puzzles to train Google’s computer vision AI systems. Similarly, Microsoft provides significant cybersecurity services to governments while it also trains its AI models on citizens’ conversations with Bing.15 Under this dynamic, when citizens use digital tools and services provided by tech companies, often to access government webpages and resources, they become de facto free labor for the tech companies providing them. The value generated by this citizen-user-laborer stays with the company, which uses it to develop and refine its products. In this new blurred reality, the relationships among corporations, governments, power, and identity are shifting. Our social and cultural infrastructure suffers as a result, creating a new kind of technical debt—one of social and cultural infrastructure.

In the field of software development, technical debt refers to the future cost of ignoring a near-term engineering problem.16 Technical debt grows as engineers implement short-term patches or workarounds, choosing to push the more expensive and involved re-engineering fixes for later. This debt accrues over time, to be paid back in the long term. The result of a decision to solve an immediate problem at the expense of the long-term one effectively mortgages the future in favor of an easier present. In terms of cultural and social infrastructure, we use the same phrase to refer to the long-term costs that result from avoiding or not fully addressing social needs in the present. More than a mere mistake, socio-technical debt stems from willfully not addressing a social problem today and leaving a much larger problem to be addressed in the future.

For example, this kind of technical debt was created by the cratering of the news industry, which relied on social media to drive traffic—and revenue—to news websites. When social media companies adjusted their algorithms to deprioritize news, traffic to news sites plummeted, causing an existential crisis for many publications.17 Now, traditional news stories make up only 3 percent of social media content. At the same time, 66 percent of people ages eighteen to twenty-four say they get their “news” from TikTok, Facebook, and Twitter.18 To be clear, Facebook did not accrue technical debt when it swallowed the news industry. We as a society are dealing with technical debt in the sense that we are being forced to pay the social cost of allowing it to do that.

One result of this shift in information consumption, driven by changes to the cultural infrastructure of social media, is a rise in polarization and radicalism. By neglecting to adequately regulate tech companies and support news outlets in the near term, our governments have paved the way for social instability in the long term. We as a society also have to find and fund new systems to act as watchdogs over both corporate and governmental power.

Another example of socio-technical debt is the slow erosion of main streets and malls by e-commerce.19 These places used to be important sites for physical gathering, which helped the shops and restaurants concentrated there stay in business. But e-commerce and direct-to-consumer trends have undermined the economic viability of main streets and malls, and have made it much harder for small businesses to survive. The long-term consequence for society is the hollowing out of town centers and the loss of spaces for physical gathering—a cost we will all have to pay eventually.

The faltering finances of museums will also create long-term consequences for society as a whole, especially in the US, where museums mostly depend on private donors to cover operational costs. But a younger generation of philanthropists is shifting its giving priorities away from the arts, leading to a funding crisis at some institutions.20

One final example: libraries. NYU sociologist Eric Klinenberg called libraries “the textbook example of social infrastructure in action.”21 But today they are stretched to the breaking point, like museums, main streets, and news media. Over the past year, New York City mayor Eric Adams has proposed a series of severe budget cuts to the city’s library system, despite a recent spike in usage. The steepest cuts were eventually retracted, but most libraries in the city have still had to cancel social programs and cut the number of days they’re open.22 As more and more spaces for meeting in real life close, we increasingly turn to digital platforms for connection to replace them. But these virtual spaces are optimized for shareholder returns, not the public good.

Just seven companies—Alphabet (the parent company of Google), Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla—drove 60 percent of the gains of the S&P stock market index in 2023.23 Four—Alibaba, Amazon, Google, and Microsoft—deliver the majority of cloud services.24 These companies have captured the delivery of digital and physical goods and services. Everything involved with social media, cloud computing, groceries, and medicine is trapped in their flywheels, because the constellation of systems that previously put the brakes on corporate power, such as monopoly laws, labor unions, and news media, has been eroded. Product dependence and regulatory capture have further undermined the capacity of states to respond to the rise in corporate hard and soft power. Lock-in and other anticompetitive corporate behavior have prevented market mechanisms from working properly. As democracy falls into deeper crisis with each passing year, policy and culture are increasingly bent towards serving corporate interests. The illusion that business, government, and culture are siloed sustains this status quo.

Our digitized global economy has made us all participants in the international data trade, however reluctantly. Though we are aware of the privacy invasions and social costs of digital platforms, we nevertheless participate in these systems because we feel as though we have no alternative—which itself is partly the result of tech monopolies and the lack of competition.

Now, the ascendance of AI is thrusting big data into a new phase and new conflicts with social contracts. The development of bigger, more powerful AI models means more demand for data. Again, massive wholesale extractions of culture are at the heart of these efforts.25 As AI researchers and artists Kate Crawford and Vladan Joler explain in the catalog to their exhibition Calculating Empires, AI developers require “the entire history of human knowledge and culture … The current lawsuits over generative systems like GPT and Stable Diffusion highlight how completely dependent AI systems are on extracting, enclosing, and commodifying the entire history of cognitive and creative labor.”26

Permitting internet companies to hack the systems in which culture is produced and circulates is a short-term trade-off that has proven to have devastating long-term consequences. When governments give tech companies unregulated access to our social and cultural infrastructure, the social contract becomes biased towards their profit. When we get immediate catharsis through sharing memes or engaging in internet flamewars, real protest is muzzled. We are increasing our collective socio-technical debt by ceding our social and cultural infrastructure to tech monopolies.

Cultural expression is fundamental to what makes us human. It’s an impulse, innate to us as a species, and this impulse will continue to be a gold mine to tech companies. There is evidence that AI models trained on synthetic data—data produced by other AI models rather than humans—can corrupt these models, causing them to return false or nonsensical answers to queries.27 So as AI-produced data floods the internet, data that is guaranteed to have been derived from humans becomes more valuable. In this context, our human nature, compelling us to make and express culture, is the dream of digital capitalism. We become a perpetual motion machine churning out free data. Beholden to shareholders, these corporations see it as their fiduciary duty—a moral imperative even—to extract value from this cultural life.

We are in a strange transition. The previous global order, in which states wielded ultimate authority, hasn’t quite died. At the same time, large corporations have stepped in to deliver some of the services abandoned by states, but at the price of privacy and civic well-being. Increasingly, corporations provide consistent, if not pleasant, economic and social organization. Something similar occurred during the Gilded Age in the US (1870s–1890s). But back then, the influence of robber barons was largely constrained to the geographies in which they operated, and their services (like the railroad) were not previously provided by states. In our current transitionary period, public life worldwide is being reimagined in accordance with corporate values. Amidst a tug-of-war between the old state-centric world and the emerging capital-centric world, there is a growing radicalism fueled partly by frustration over social and personal needs going unmet under a transnational order that is maximized for profit rather than public good.

Culture is increasingly divorced from national identity in our globalized, fragmented world. On the positive side, this decoupling can make culture more inclusive of marginalized people. Other groups, however, may perceive this new status quo as a threat, especially those facing a loss of privilege. The rise of white Christian nationalism shows that the right still regards national identity and culture as crucial—as potent tools in the struggle to build political power, often through anti-democratic means. This phenomenon shows that the separation of cultural identity from national identity doesn’t negate the latter. Instead, it creates new political realities and new orders of power.

Nations issuing passports still behave as though they are the definitive arbiters of identity. But culture today—particularly the multiverse of internet cultures—exposes how this is increasingly untrue. With government discredited as an ultimate authority, and identity less and less connected to nationality, we can find a measure of hope for navigating the current transition in the fact that culture is never static. New forms of resistance are always emerging. But we must ask ourselves: Have the tech industry’s overwhelming surveillance powers rendered subversion impossible? Or does its scramble to gather all the world’s data offer new possibilities to hack the system?

 

1. McKenzie Wark, A Hacker Manifesto (Harvard University Press, 2004), thesis 126.

2. Jon Katz, “Birth of a Digital Nation,” Wired, April 1, 1997.

3. Marcin Szczepanski, “Is Data the New Oil? Competition Issues in the Digital Economy,” European Parliamentary Research Service, January 2020.

4. Bruce Schneier, A Hacker’s Mind: How the Powerful Bend Society’s Rules, and How to Bend Them Back (W. W. Norton & Company, 2023).

5. Lucie Levine, “Was Modern Art Really a CIA Psy-Op?” JStor Daily, April 1, 2020.

6. Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (W. W. Norton & Company, 2015).

7. Adam Mosseri, “Shedding More Light on How Instagram Works,” Instagram Blog, June 8, 2021; Ben Smith, “How TikTok Reads Your Mind,” New York Times, December 5, 2021.

8. Giacomo Figà Talamanca and Selene Arfini, “Through the Newsfeed Glass: Rethinking Filter Bubbles and Echo Chambers,” Philosophy & Technology 35, no. 1 (2022).

9. Zack Stanton, “How the ‘Culture War’ Could Break Democracy,” Politico, May 5, 2021.

10. Jason Johnson, “Inside the Failed, Utopian New Games Movement,” Kill Screen, October 25, 2013.

11. Fred Turner, “Taking the Whole Earth Digital,” chap. 4 in From Counter Culture to Cyberculture: Stewart Brand, The Whole Earth Network, and the Rise of Digital Utopianism (University of Chicago Press, 2006).

12. Kaare Ericksen, “The State of the Video Games Industry: A Special Report,” Variety, February 1, 2024.

13. Rosa Schwartzburg, “The US Military Is Embedded in the Gaming World. Its Target: Teen Recruits,” The Guardian, February 14, 2024; Scott Kuhn, “Soldiers Maintain Readiness Playing Video Games,” US Army, April 29, 2020; Katie Lange, “Military Esports: How Gaming Is Changing Recruitment & Morale,” US Department of Defense, December 13, 2022.

14. Shaun Waterman, “Growing Commercial SATCOM Raises Trust Issues for Pentagon,” Air & Space Forces Magazine, April 3, 2024.

15. Geoffrey A. Fowler, “Your Instagrams Are Training AI. There’s Little You Can Do About It,” Washington Post, September 27, 2023.

16. Zengyang Li, Paris Avgeriou, and Peng Liang, “A Systematic Mapping Study on Technical Debt and Its Management,” Journal of Systems and Software, December 2014.

17. David Streitfeld, “How the Media Industry Keeps Losing the Future,” New York Times, February 28, 2024.

18. “The End of the Social Network,” The Economist, February 1, 2024; Ollie Davies, “What Happens If Teens Get Their News From TikTok?” The Guardian, February 22, 2023.

19. Eric Jaffe, “Quantifying the Death of the Classic American Main Street,” Medium, March 16, 2018.

20. Julia Halperin, “The Hangover from the Museum Party: Institutions in the US Are Facing a Funding Crisis,” Art Newspaper, January 19, 2024.

21. Quoted in Pete Buttigieg, “The Key to Happiness Might Be as Simple as a Library or Park,” New York Times, September 14, 2018.

22. Jeffery C. Mays and Dana Rubinstein, “Mayor Adams Walks Back Budget Cuts Many Saw as Unnecessary,” New York Times, April 24, 2024.

23. Karl Russell and Joe Rennison, “These Seven Tech Stocks Are Driving the Market,” New York Times, January 22, 2024.

24. Ian Bremmer, “How Big Tech Will Reshape the Global Order,” Foreign Affairs, October 19, 2021.

25. Nathan Sanders and Bruce Schneier, “How the ‘Frontier’ Became the Slogan for Uncontrolled AI,” Jacobin, February 27, 2024.

26. Kate Crawford and Vladan Joler, Calculating Empires: A Genealogy of Technology and Power, 1500–2025 (Fondazione Prada, 2023), 9. Exhibition catalog.

27. Rahul Rao, “AI Generated Data Can Poison Future AI Models,” Scientific American, July 28, 2023.

This essay was written with Kim Córdova, and was originally published in e-flux.