A threat actor claimed to have obtained an exposed GitLab configuration file containing Zendesk API access tokens
50,000 Files Exposed in Nidec Ransomware Attack
Attackers stole more than 50,000 documents from Nidec in an August ransomware attack, then leaked them after the company refused to pay the ransom
Netskope Reports Possible Bumblebee Loader Resurgence
The malware loader taken down by Europol in May 2024 could be back with a vengeance
USN-7062-2: libgsf vulnerabilities
USN-7062-1 fixed vulnerabilities in libgsf. This update provides the
corresponding updates for Ubuntu 24.10.
Original advisory details:
It was discovered that libgsf incorrectly handled certain Compound
Document Binary files. If a user or automated system were tricked into
opening a specially crafted file, a remote attacker could possibly use
this issue to execute arbitrary code.
USN-7042-3: cups-browsed vulnerability
USN-7042-2 released an improved fix for cups-browsed. This update provides
the corresponding update for Ubuntu 24.10.
Original advisory details:
Simone Margaritelli discovered that cups-browsed could be used to create
arbitrary printers from outside the local network. In combination with
issues in other printing components, a remote attacker could possibly use
this issue to connect to a system, create manipulated PPD files, and
execute arbitrary code when a printer is used. This update disables
support for the legacy CUPS printer discovery protocol.
Australia’s Privacy Watchdog Publishes Guidance on Commercial AI Products
Businesses in Australia must update their privacy policies with clear and transparent information about their use of AI, said the regulator
AI and the SEC Whistleblower Program
Tax farming is the practice of licensing tax collection to private contractors. Used heavily in ancient Rome, it’s largely fallen out of practice because of the obvious conflict of interest between the state and the contractor. Because tax farmers are primarily interested in short-term revenue, they have no problem abusing taxpayers and making things worse for them in the long term. Today, the U.S. Securities and Exchange Commission (SEC) is engaged in a modern-day version of tax farming. And the potential for abuse will grow when the farmers start using artificial intelligence.
In 2009, after Bernie Madoff’s $65 billion Ponzi scheme was exposed, Congress authorized the SEC to award bounties from civil penalties recovered from securities law violators. It worked in a big way. In 2012, when the program started, the agency received more than 3,000 tips. By 2020, it had more than doubled, and it more than doubled again by 2023. The SEC now receives more than 50 tips per day, and the program has paid out a staggering $2 billion in bounty awards. According to the agency’s 2023 financial report, the SEC paid out nearly $600 million to whistleblowers last year.
The appeal of the whistleblower program is that it alerts the SEC to violations it may not otherwise uncover, without any additional staff. And since payouts are a percentage of fines collected, it costs the government little to implement.
Unfortunately, the program has resulted in a new industry of private de facto regulatory enforcers. Legal scholar Alexander Platt has shown how the SEC’s whistleblower program has effectively privatized a huge portion of financial regulatory enforcement. There is a role for publicly sourced information in securities regulatory enforcement, just as there has been in litigation for antitrust and other areas of the law. But the SEC program, and a similar one at the U.S. Commodity Futures Trading Commission, has created a market distortion replete with perverse incentives. Like the tax farmers of history, the interests of the whistleblowers don’t match those of the government.
First, while the blockbuster awards paid out to whistleblowers draw attention to the SEC’s successes, they obscure the fact that its staffing level has slightly declined during a period of tremendous market growth. In one case, the SEC’s largest ever, it paid $279 million to an individual whistleblower. That single award was nearly one-third of the funding of the SEC’s entire enforcement division last year. Congress gets to pat itself on the back for spinning up a program that pays for itself (by law, the SEC awards 10 to 30 percent of its penalty collections over $1 million to qualifying whistleblowers), when it should be talking about whether it has given the agency enough resources to fulfill its mission to “maintain fair, orderly, and efficient markets.”
Second, while the stated purpose of the whistleblower program is to incentivize individuals to come forward with information about potential violations of securities law, this hasn’t actually led to increases in enforcement actions. Instead of legitimate whistleblowers bringing the most credible information to the SEC, the agency now seems to be deluged by tips that are not highly actionable.
But the biggest problem is that uncovering corporate malfeasance is now a legitimate business model, resulting in powerful firms and misaligned incentives. A single law practice led by former SEC assistant director Jordan Thomas captured about 20 percent of all the SEC’s whistleblower awards through 2022, at which point Thomas left to open up a new firm focused exclusively on whistleblowers. We can admire Thomas and his team’s impact on making those guilty of white-collar crimes pay, and also question whether hundreds of millions of dollars of penalties should be funneled through the hands of an SEC insider turned for-profit business mogul.
Whistleblower tips can be used as weapons of corporate warfare. SEC whistleblower complaints are not required to come from inside a company, or even to rely on insider information. They can be filed on the basis of public data, as long as the whistleblower brings original analysis. Companies might dig up dirt on their competitors and submit tips to the SEC. Ransomware groups have used the threat of SEC whistleblower tips as a tactic to pressure the companies they’ve infiltrated into paying ransoms.
The rise of whistleblower firms could lead to them taking particular “assignments” for a fee. Can a company hire one of these firms to investigate its competitors? Can an industry lobbying group under scrutiny (perhaps in cryptocurrencies) pay firms to look at other industries instead and tie up SEC resources? When a firm finds a potential regulatory violation, do they approach the company at fault and offer to cease their research for a “kill fee”? The lack of transparency and accountability of the program means that the whistleblowing firms can get away with practices like these, which would be wholly unacceptable if perpetrated by the SEC itself.
Whistleblowing firms can also use the information they uncover to guide market investments by activist short sellers. Since 2006, the investigative reporting site Sharesleuth claims to have tanked dozens of stocks and instigated at least eight SEC cases against companies in pharma, energy, logistics, and other industries, all after its investors shorted the stocks in question. More recently, a new investigative reporting site called Hunterbrook Media and its partner hedge fund, Hunterbrook Capital, have churned out 18 investigative reports in their first five months of operation and disclosed short sales and other actions alongside each. In at least one report, Hunterbrook says it filed an SEC whistleblower tip.
Short sellers carry an important disciplining function in markets. But combined with whistleblower awards, the same profit-hungry incentives can emerge. Properly staffed regulatory agencies don’t have the same potential pitfalls.
AI will affect every aspect of this dynamic. AI’s ability to extract information from large document troves will help whistleblowers provide more information to the SEC faster, lowering the bar for reporting potential violations and opening a floodgate of new tips. Right now, there is no cost to the whistleblower to report minor or frivolous claims; there is only cost to the SEC. While AI automation will also help SEC staff process tips more efficiently, it could exponentially increase the number of tips the agency has to deal with, further decreasing the efficiency of the program.
AI could be a triple windfall for those law firms engaged in this business: lowering their costs, increasing their scale, and increasing the SEC’s reliance on a few seasoned, trusted firms. The SEC already, as Platt documented, relies on a few firms to prioritize their investigative agenda. Experienced firms like Thomas’s might wield AI automation to the greatest advantage. SEC staff struggling to keep pace with tips might have less capacity to look beyond the ones seemingly pre-vetted by familiar sources.
But the real effects will be on the conflicts of interest between whistleblowing firms and the SEC. The ability to automate whistleblower reporting will open new competitive strategies that could disrupt business practices and market dynamics.
An AI-assisted data analyst could dig up potential violations faster, for a greater scale of competitor firms, and consider a greater scope of potential violations than any unassisted human could. The AI doesn’t have to be that smart to be effective here. Complaints are not required to be accurate; claims based on insufficient evidence could be filed against competitors, at scale.
Even more cynically, firms might use AI to help cover up their own violations. If a company can deluge the SEC with legitimate, if minor, tips about potential wrongdoing throughout the industry, it might lower the chances that the agency will get around to investigating the company’s own liabilities. Some companies might even use the strategy of submitting minor claims about their own conduct to obscure more significant claims the SEC might otherwise focus on.
Many of these ideas are not so new. There are decades of precedent for using algorithms to detect fraudulent financial activity, with lots of current-day application of the latest large language models and other AI tools. In 2019, legal scholar Dimitrios Kafteranis, research coordinator for the European Whistleblowing Institute, proposed using AI to automate corporate whistleblowing.
And not all the impacts specific to AI are bad. The most optimistic possible outcome is that AI will allow a broader base of potential tipsters to file, providing assistive support that levels the playing field for the little guy.
But more realistically, AI will supercharge the for-profit whistleblowing industry. The risks remain as long as submitting whistleblower complaints to the SEC is a viable business model. Like tax farming, the interests of the institutional whistleblower diverge from the interests of the state, and no amount of tweaking around the edges will make it otherwise.
Ultimately, AI is not the cause of or solution to the problems created by the runaway growth of the SEC whistleblower program. But it should give policymakers pause to consider the incentive structure that such programs create, and to reconsider the balance of public and private ownership of regulatory enforcement.
This essay was written with Nathan Sanders, and originally appeared in The American Prospect.
A Look at the Social Engineering Element of Spear Phishing Attacks
When you think of a cyberattack, you probably envision a sophisticated hacker behind a Matrix-esque screen actively penetrating networks with their technical prowess. However, the reality of many attacks is far more mundane.
A simple email with an innocent subject line such as “Missed delivery attempt” sits in an employee’s spam folder. They open it absentmindedly, then enter their Office 365 credentials on the credible-looking login page that appears. In an instant, bad actors have free rein in the organization’s systems without breaking a sweat.
This example (which is all too realistic) highlights the massive threat spear phishing poses today. Rather than overt technical exploits, attackers leverage social engineering techniques that tap into the weaknesses of the human psyche. Meticulously crafted emails bypass even the most secure perimeter defenses by manipulating users into voluntarily enabling access.
In this blog, I will analyze attackers’ real-world techniques to exploit our weak spots and pain points. I will also show just how much more elaborate these hacking attempts can be compared to the typical phishing attacks that many of us have become accustomed to. That way, you can recognize and resist spear phishing attempts that leverage psychological triggers against you.
Anatomy of a Spear Phishing Hoax
Before analyzing the specifics of social engineering, let’s level set on what defines a spear phishing attack.
Highly targeted: Spear phishing targets specific individuals or organizations using personalization and context to improve credibility. This could be titles, familiar signatures, company details, projects worked on, etc.
Appears legitimate: Spear phishers invest time in making emails and landing pages appear 100% authentic. They’ll often use real logos, domains, and stolen data.
Seeks sensitive data: The end goal is to get victims to give away credentials, bank details, trade secrets, or other sensitive information or to install malware.
Instills a sense of urgency/fear: Subject lines and content press emotional triggers related to urgency, curiosity, fear, and doubt to get quick clicks without deeper thought.
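As a rough illustration (not a production filter), the four traits above can be encoded as a toy screening heuristic. The keyword lists below are assumptions made for this example, not a vetted corpus:

```python
import re

# Illustrative signal lists; a real filter would use far richer features.
URGENCY_WORDS = {"urgent", "immediately", "asap", "end of day"}
SENSITIVE_ASKS = {"password", "invoice", "wire", "credentials", "direct deposit"}

def phishing_signals(subject: str, body: str) -> list[str]:
    """Return which spear phishing traits a message exhibits."""
    text = f"{subject} {body}".lower()
    signals = []
    if any(w in text for w in URGENCY_WORDS):
        signals.append("urgency")
    if any(w in text for w in SENSITIVE_ASKS):
        signals.append("sensitive-data request")
    if re.search(r"https?://", text):
        signals.append("embedded link")
    return signals
```

A heuristic like this only flags messages for closer human review; the personalization and familiar signatures that define spear phishing are exactly what simple keyword checks cannot catch.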
With that foundation set, let’s examine how spear phishers socially engineer their attacks to exploit human vulnerabilities with frightening success.
#1: They Leverage the Human Desire to Be Helpful
Human beings have an innate desire to be perceived as helpful. When someone asks you for a favor, your first instinct is likely to say yes rather than second-guess them.
Spear phishers exploit this trait by crafting emails that make requests sound reasonable and essential. Even just starting an email with “I hope you can help me with…” triggers reciprocity bias that increases vulnerability to attack. Let’s take a look at an example:
Subject: URGENT Support Needed
Email Body: “Hi Amanda, I’m reaching out because I need your help, please. I’m currently out of office and having issues accessing invoices. Do you mind sending me over the 2 most recent invoices we received? I need to send them out by end of day. Sorry for the urgent request! Please let me know. Thanks, Sarah.”
This email pulls together four highly effective social engineering triggers:
Politeness – Saying “please” and “thank you” fits social norms for seeking help.
Sense of urgency – Creating a short deadline pressures quick action without deeper thought.
Vague problem – Keeping the specifics unclear evokes curiosity and a desire to be helpful.
Familiar signature – A known sender name inspires trust.
When faced with a politely worded request for help that seems time-sensitive, many will comply without considering potential risks. This allows spear phishers to gather sensitive data or get victims to click dodgy links quite easily.
#2: They Manufacture Authority
Human psychology is strongly conditioned to defer to authority figures. When someone in leadership asks you to do something, you likely just execute without asking many questions.
Spear phishing attacks often take advantage of this tendency by assuming a position of authority. They spoof executive names, manager titles, administrator accounts, or roles like HR that give directions, making victims far more likely to instantly comply with requests. Here are some examples:
Email pretending to be from the CEO demanding an urgent wire payment.
Fake IT account requesting password resets to resolve “network issues”.
Imitation email from head of HR asking for direct deposit info corrections.
Positioning the sender as influential causes targets to lower their guard and engage without skepticism. Rather than evaluating critically, victims find themselves moving quickly to avoid disappointing the people upstairs.
#3: They Create Illusions of Trust
The principle of social proof states that if other people trust something, we are more likely to trust it too. Spear phishing once again takes advantage of this by building an illusion of trustworthiness through recognizable details.
Instead of coming from totally unknown or random accounts, spear phishing emails will often spoof:
Known signatures – Senders pretend to be contacts already in your network.
Real logos and branding – Emails and sites clone visual elements that match expectations.
Familiar writing tones – Content matches communication styles you’d expect from the spoofed individual or company.
Personal details – They’ll research names, projects, activities, etc. to reference in content.
The tiny familiar details make the sketchy emails feel authentic rather than random, which opens victims up to manipulation using other social engineering techniques.
For instance, an email that pretends to be from a known contact asking you to download a document would trigger almost no scrutiny. The supposed trust earns clicks without critical thought, allowing malware and malicious links to penetrate environments more easily.
#4: They Spark Strong Emotions
Spear phishing emails often try to spark strong emotions that override your logical thinking. Your ability to evaluate situations greatly decreases when you feel urgent excitement or anger. The attackers will use words that tap into emotions like:
Curiosity – Subject lines like “Your password has been changed” arouse worry that makes you rush to check without thinking twice.
Anger – Imagine getting a rude message from a coworker or boss. That anger can cloud your judgment enough to click on malware links.
Hope – “Too good to be true” offers flood inboxes because even smart folks take chances on prizes or dream jobs without considering risks.
Panic – Nothing makes you react faster than thinking your email, bank account, or system access has been compromised or cut off somehow. Fear makes fertile soil for mistakes.
The objective is to make us react from the gut rather than carefully analyze what’s happening. But if you’ve been made aware of these psychological tricks, you can catch yourself in the moment. Just take a beat to consider why certain emails spark strong feelings and whether someone wants you to click without thinking. Staying aware of emotional triggers helps avoid careless errors down the line.
#5: They Exploit Human Sloth
Here’s an unfortunate truth about human nature – we like to expend as little effort as possible. Chances are you don’t thoroughly verify every work email that hits your inbox. It takes a good deal of time and effort when you’re trying to power through tasks.
Spear phishing piggybacks on this tendency for laziness and mental shortcuts. In contrast to overly complex attacks, they present simple calls to action:
Click this password reset link.
Enable macros to view an invoice.
Download the document from a familiar sender.
Visit this site to claim a prize.
When there are no conspicuous red flags, most users fall prey to lazy thinking. Effortlessly clicking links seems easier than scrutinizing sender details, evaluating URLs, or opening documents safely.
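One lightweight URL check worth building into that scrutiny is comparing the domain a link displays against the domain its href actually points to, since mismatches are a classic phishing tell. A minimal sketch using only the Python standard library; the domains shown are invented for illustration:

```python
from urllib.parse import urlparse

def real_host(url: str) -> str:
    """Extract the host a link actually points to, lowercased."""
    return (urlparse(url).hostname or "").lower()

def link_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose visible text names a different domain than the href."""
    shown = real_host(display_text if "://" in display_text else "http://" + display_text)
    actual = real_host(href)
    return bool(shown) and shown != actual

# e.g. a link displayed as "office365.com" that really goes to
# "https://o365-login.example.net/reset" would be flagged.
```

Hovering over a link in a mail client performs the same comparison manually; the point is that the check takes seconds, which is exactly the effort lazy-thinking attacks count on you skipping.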
This willingness to take the easy path of least resistance plays perfectly into spear phishers’ hands. They want recipients to act quickly without too much thought or effort. Catching people when they’re cognitively lazy is the most reliable way to succeed.
Final Word
While standard phishing attacks are already a big enough headache to deal with, spear phishing takes it one step further by incorporating some clever social engineering tactics to try and fool people into taking action. While anyone could fall for these tricks, vigilance and awareness are the best defense against them. Now that you know the telltale signs and the tactics that these malefactors use, you will be better equipped to spot the attack if you ever find yourself on the receiving end of one.
Half of Organizations Have Unmanaged Long-Lived Cloud Credentials
Long-lived credentials in the cloud put organizations at high risk of breaches, a report from Datadog has found