Trustwave relaunches Advanced Continual Threat Hunting with human-led methodology


Cybersecurity vendor Trustwave has announced the relaunch of its Advanced Continual Threat Hunting platform with a new, patent-pending, human-led threat hunting methodology. The firm claimed the enhancement will allow its SpiderLabs threat hunting teams to conduct more human-led threat hunts and surface more behavior-based findings that could go undetected by traditional endpoint detection and response (EDR) tools.

New method hunts for behaviors associated with known threat actors

In a press release, Trustwave stated that its security teams regularly perform advanced threat hunting to study the tactics, techniques, and procedures (TTPs) of sophisticated threat actors. Trustwave’s new intellectual property (IP) goes beyond indicators of compromise (IoC) to uncover new or unknown threats by hunting for indicators of behavior (IoB) associated with specific attackers.
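To make the distinction concrete, here is a minimal, hypothetical sketch of the difference between matching a static IoC and flagging an indicator of behavior. The event fields, rule logic, and hash list are invented for illustration and are not Trustwave's methodology.

```python
# Hypothetical illustration of IoC matching vs. behavior-based (IoB) hunting.
# All event fields, rules, and hashes below are invented for this sketch.

KNOWN_BAD_HASHES = {"d2c7a18f5f0a4b7e9c1e3a6b8d0f2e4a6c8e0a2b4d6f8a0c2e4a6b8d0f2e4a6b"}

def ioc_match(event: dict) -> bool:
    """Flag an event only if its file hash is already on a known-bad list."""
    return event.get("sha256") in KNOWN_BAD_HASHES

def iob_match(event: dict) -> bool:
    """Flag a behavior: an Office process spawning PowerShell with an encoded
    command line, a pattern tied to certain attackers even when the payload is new."""
    return (
        event.get("parent_process", "").lower() == "winword.exe"
        and event.get("process", "").lower() == "powershell.exe"
        and "-enc" in event.get("command_line", "").lower()
    )

event = {
    "parent_process": "WINWORD.EXE",
    "process": "powershell.exe",
    "command_line": "powershell.exe -enc SQBFAFgAIA==",
    "sha256": "never-seen-before-hash",
}

print(ioc_match(event))  # False: the hash is unknown, so IoC matching misses it
print(iob_match(event))  # True: the behavior itself is the indicator
```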


Multiple Vulnerabilities in Mozilla Firefox Could Allow for Arbitrary Code Execution


Multiple vulnerabilities have been discovered in Mozilla Firefox and Firefox Extended Support Release (ESR), the most severe of which could allow for arbitrary code execution.

Mozilla Firefox is a web browser used to access the Internet.
Mozilla Firefox ESR is a version of the web browser intended to be deployed in large organizations.
Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.


AI and Political Lobbying


Launched just weeks ago, ChatGPT is already threatening to upend how we draft everyday communications like emails, college essays and myriad other forms of writing.

Created by the company OpenAI, ChatGPT is a chatbot that can automatically respond to written prompts in a manner that is sometimes eerily close to human.

But for all the consternation over the potential for humans to be replaced by machines in formats like poetry and sitcom scripts, a far greater threat looms: artificial intelligence replacing humans in the democratic processes—not through voting, but through lobbying.

ChatGPT could automatically compose comments submitted in regulatory processes. It could write letters to the editor for publication in local newspapers. It could comment on news articles, blog entries and social media posts millions of times every day. It could mimic the work that the Russian Internet Research Agency did in its attempt to influence our 2016 elections, but without the agency’s reported multimillion-dollar budget and hundreds of employees.

Automatically generated comments aren’t a new problem. For some time, we have struggled with bots, machines that automatically post content. Five years ago, at least a million automatically drafted comments were believed to have been submitted to the Federal Communications Commission regarding proposed regulations on net neutrality. In 2019, a Harvard undergraduate, as a test, used a text-generation program to submit 1,001 comments in response to a government request for public input on a Medicaid issue. Back then, submitting comments was just a game of overwhelming numbers.

Platforms have gotten better at removing “coordinated inauthentic behavior.” Facebook, for example, has been removing over a billion fake accounts a year. But such messages are just the beginning. Rather than flooding legislators’ inboxes with supportive emails, or dominating the Capitol switchboard with synthetic voice calls, an AI system with the sophistication of ChatGPT but trained on relevant data could selectively target key legislators and influencers to identify the weakest points in the policymaking system and ruthlessly exploit them through direct communication, public relations campaigns, horse trading or other points of leverage.

When we humans do these things, we call it lobbying. Successful agents in this sphere pair precision message writing with smart targeting strategies. Right now, the only thing stopping a ChatGPT-equipped lobbyist from executing something resembling a rhetorical drone warfare campaign is a lack of precision targeting. AI could provide techniques for that as well.

A system that can understand political networks, if paired with the textual-generation capabilities of ChatGPT, could identify the member of Congress with the most leverage over a particular policy area—say, corporate taxation or military spending. Like human lobbyists, such a system could target undecided representatives sitting on committees controlling the policy of interest and then focus resources on members of the majority party when a bill moves toward a floor vote.
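As a toy illustration of the kind of network analysis such targeting would require, the sketch below ranks legislators by betweenness centrality on a made-up co-sponsorship graph. The names, edges, and choice of centrality measure are assumptions for illustration only, not a description of any real system.

```python
# Toy sketch of network-based targeting: rank legislators in a hypothetical
# co-sponsorship graph by betweenness centrality. All data here is invented.
import networkx as nx

G = nx.Graph()
# Each edge means "co-sponsored a bill together" in this made-up example.
G.add_edges_from([
    ("Rep. A", "Rep. B"),
    ("Rep. A", "Rep. C"),
    ("Rep. B", "Rep. D"),
    ("Rep. C", "Rep. D"),
    ("Rep. D", "Rep. E"),  # Rep. D bridges the main cluster and Rep. E
])

centrality = nx.betweenness_centrality(G)
ranked = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)
for legislator, score in ranked:
    print(f"{legislator}: {score:.2f}")
# Rep. D scores highest: the most "leverage" under this crude proxy.
```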

Once individuals and strategies are identified, an AI chatbot like ChatGPT could craft written messages to be used in letters, comments—anywhere text is useful. Human lobbyists could also target those individuals directly. It’s the combination that’s important: Editorial and social media comments only get you so far, and knowing which legislators to target isn’t itself enough.

This ability to understand and target actors within a network would create a tool for AI hacking, exploiting vulnerabilities in social, economic and political systems with incredible speed and scope. Legislative systems would be a particular target, because the motive for attacking policymaking systems is so strong, because the data for training such systems is so widely available and because the use of AI may be so hard to detect—particularly if it is being used strategically to guide human actors.

The data necessary to train such strategic targeting systems will only grow with time. Open societies generally make their democratic processes a matter of public record, and most legislators are eager—at least, performatively so—to accept and respond to messages that appear to be from their constituents.

Maybe an AI system could uncover which members of Congress have significant sway over leadership but still have low enough public profiles that there is only modest competition for their attention. It could then pinpoint the SuperPAC or public interest group with the greatest impact on that legislator’s public positions. Perhaps it could even calibrate the size of donation needed to influence that organization or direct targeted online advertisements carrying a strategic message to its members. For each policy end, the right audience; and for each audience, the right message at the right time.

What makes the threat of AI-powered lobbyists greater than the threat already posed by the high-priced lobbying firms on K Street is their potential for acceleration. Human lobbyists rely on decades of experience to find strategic solutions to achieve a policy outcome. That expertise is limited, and therefore expensive.

AI could, theoretically, do the same thing much more quickly and cheaply. Speed out of the gate is a huge advantage in an ecosystem in which public opinion and media narratives can become entrenched quickly, as is being nimble enough to shift rapidly in response to chaotic world events.

Moreover, the flexibility of AI could help achieve influence across many policies and jurisdictions simultaneously. Imagine an AI-assisted lobbying firm that can attempt to place legislation in every single bill moving in the US Congress, or even across all state legislatures. Lobbying firms tend to work within one state only, because there are such complex variations in law, procedure and political structure. With AI assistance in navigating these variations, it may become easier to exert power across political boundaries.

Just as teachers will have to change how they give students exams and essay assignments in light of ChatGPT, governments will have to change how they relate to lobbyists.

To be sure, there may also be benefits to this technology in the democracy space; the biggest one is accessibility. Not everyone can afford an experienced lobbyist, but a software interface to an AI system could be made available to anyone. If we’re lucky, maybe this kind of strategy-generating AI could revitalize democracy by giving lobbying power to the powerless.

However, the biggest and most powerful institutions will likely use any AI lobbying techniques most successfully. After all, executing the best lobbying strategy still requires insiders—people who can walk the halls of the legislature—and money. Lobbying isn’t just about giving the right message to the right person at the right time; it’s also about giving money to the right person at the right time. And while an AI chatbot can identify who should be on the receiving end of those campaign contributions, humans will, for the foreseeable future, need to supply the cash. So while it’s impossible to predict what a future filled with AI lobbyists will look like, it will probably make the already influential and powerful even more so.

This essay was written with Nathan Sanders, and previously appeared in the New York Times.

Edited to Add: After writing this, we discovered that a research group is studying AI and lobbying:

We used autoregressive large language models (LLMs, the same type of model behind the now wildly popular ChatGPT) to systematically conduct the following steps. (The full code is available at this GitHub link: https://github.com/JohnNay/llm-lobbyist.)

1. Summarize official U.S. Congressional bill summaries that are too long to fit into the context window of the LLM so the LLM can conduct steps 2 and 3.
2. Using either the original official bill summary (if it was not too long), or the summarized version:
   a. Assess whether the bill may be relevant to a company based on a company’s description in its SEC 10K filing.
   b. Provide an explanation for why the bill is relevant or not.
   c. Provide a confidence level to the overall answer.
3. If the bill is deemed relevant to the company by the LLM, draft a letter to the sponsor of the bill arguing for changes to the proposed legislation.

Here is the paper.
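To make the pipeline concrete, here is a minimal sketch of the three steps, assuming a generic complete(prompt) helper that wraps whatever LLM is available; the helper and prompts are placeholders, not code from the linked repository.

```python
# Minimal sketch of the three-step pipeline described above.
# `complete(prompt)` is a placeholder for any text-completion LLM call;
# prompts are illustrative, not taken from the linked repository.

def complete(prompt: str) -> str:
    raise NotImplementedError("Wire this to the LLM of your choice.")

def summarize_bill(bill_summary: str, max_chars: int = 8000) -> str:
    """Step 1: shorten an official bill summary that is too long for the context window."""
    if len(bill_summary) <= max_chars:
        return bill_summary
    return complete(f"Summarize this Congressional bill summary:\n\n{bill_summary}")

def assess_relevance(bill_summary: str, company_10k_description: str) -> str:
    """Step 2: decide whether the bill is relevant to the company, with an
    explanation and a confidence level."""
    return complete(
        "Given the company description from its SEC 10-K filing and the bill summary, "
        "answer YES or NO on relevance, explain why, and give a confidence level.\n\n"
        f"Company: {company_10k_description}\n\nBill: {bill_summary}"
    )

def draft_letter(bill_summary: str, company_10k_description: str) -> str:
    """Step 3: if relevant, draft a letter to the bill's sponsor arguing for changes."""
    return complete(
        "Draft a letter to the sponsor of this bill arguing for changes that benefit "
        f"the company described below.\n\nCompany: {company_10k_description}\n\nBill: {bill_summary}"
    )
```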


Telephony fraud and risk mitigation: Understanding this ever-changing threat


Telephony fraud is a significant challenge. Companies of all sizes and industries are subjected to malicious use of voice and SMS with the intent of committing financial fraud, identity theft, denial-of-service, and a variety of other attacks. Businesses that fall victim to fraud can incur significant financial losses, irreparable damage to their reputation, and legal implications. Detecting and preventing fraud can be a complex and time-consuming process, requiring businesses to devote significant resources to protecting themselves. Some common challenges that companies face when it comes to fraud include the following:

Swiftly adapting to constantly evolving fraud tactics: Fraudsters are always searching for innovative ways to carry out their schemes. Therefore, businesses must remain vigilant in identifying and addressing potential threats.
Balancing the need for security with customer convenience: Businesses must balance protecting themselves against fraud and providing a seamless customer experience. This can be particularly challenging in the digital age, as customers expect fast, convenient service.
Investing in fraud prevention solutions and skilling up human resources: To stay ahead of fraudsters, organizations may need to invest in technology solutions, such as fraud detection software or security protocols, to help identify and prevent fraudulent activity. Such solutions are often expensive and may require hiring dedicated employees to manage and maintain these toolsets.
Mitigating the aftermath of a fraud incident: If a business or its customers fall victim to a fraud campaign, the organization must be prepared not only to address the immediate financial losses but also to repair any damage to its reputation and restore customer trust. Such an endeavor is often a time-consuming and costly process.

Vishing

As mentioned above, telephony fraud spans two sub-categories: voice fraud and SMS fraud. Voice fraud, also known as vishing or voice phishing, involves criminals leveraging voice calls or voice messaging to social engineer potential victims into divulging sensitive information or making payments. In this attack vector, the malicious actor often attempts to mask their identity through spoofing, which involves altering caller-ID information to make the communication appear legitimate.

The attacker may also utilize voice manipulation software or even voice impersonation to mask their identity and manipulate a target into taking a specific action, such as revealing sensitive data or transferring bank funds to the attacker. In such scenarios, vishers may pretend to be someone from a legitimate organization, such as a trusted contact, a business, or a government agency, and request personal information or login credentials.

Some of the voice fraud challenges that companies may face include the following:

Spoofed caller IDs: Criminals can use spoofed caller IDs to make it appear as if the call is coming from a legitimate source, such as a bank or government agency. This can make it difficult for companies to identify fraudulent calls and protect their customers from these scams.
Automated voice messages: Criminals can also use automated voice messages to deliver phishing scams. These messages may ask the recipient to call a specific number to update their account information or resolve an issue, but the call instead leads to a scammer trying to steal sensitive information.
Social engineering tactics: Criminals may use social engineering tactics, such as creating a sense of urgency or playing on the recipient’s emotions, to convince them to divulge sensitive information or make a payment.

Smishing

Smishing is a phishing scam that uses text messages in social engineering attempts to convince victims to reveal sensitive information or make fraudulent transactions. Smishing scams often involve fake websites or phone numbers, and they may be disguised as legitimate texts from banks, government agencies, or other trusted organizations.

Smishing attacks can be challenging to detect because they often use familiar logos, language, and tone to make the message appear legitimate. Some common tactics used in smishing attacks include the following (a naive scoring sketch follows the list):

Asking for personal information: Smishers may ask for personal information, such as passwords or credit card numbers, under the pretense of verifying account information or completing a transaction.
Offering fake deals or prizes: Smishers may send texts offering fake deals or prizes to lure people into revealing sensitive information or making fraudulent transactions.
Scare tactics: Smishers may send texts threatening to cancel accounts or take legal action unless sensitive information is provided.
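As a rough illustration only, a naive filter might score a text message against these tactics. The keyword lists and scoring below are invented for the sketch; real anti-smishing products rely on far richer signals.

```python
import re

# Naive illustration of flagging smishing-style messages; the keyword lists are
# invented for this sketch and are not a production detection technique.
URGENCY = ("act now", "account suspended", "legal action", "final notice")
LURES = ("you won", "free prize", "claim your reward", "exclusive deal")
ASKS = ("verify your password", "confirm your ssn", "enter your card number")

def smishing_score(text: str) -> int:
    t = text.lower()
    score = 0
    score += any(k in t for k in URGENCY)              # scare tactics / urgency
    score += any(k in t for k in LURES)                 # fake deals or prizes
    score += any(k in t for k in ASKS)                  # requests for personal data
    score += bool(re.search(r"https?://\S+", t))        # embedded link
    return score  # 0-4; higher means more smishing-like

msg = "Final notice: your account suspended. Verify your password at http://example.test"
print(smishing_score(msg))  # 3
```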

Overall, fraud attacks can have serious consequences. If your organization falls victim to a fraud campaign, it may suffer severe financial loss, damage to brand reputation, data breaches, and disruption to everyday operations. A data breach can lead to identity theft affecting your employees and customers and to the leak of proprietary information owned by your company, with long-term financial and legal implications. Therefore, we recommend that organizations take the following steps to protect themselves against telephony fraud:

Educate employees: Train employees to recognize the signs of voice and SMS fraud and to be cautious when giving out sensitive information or making financial transactions over the phone.
Implement two-factor authentication: Leverage two-factor authentication to verify the identity of employees and customers when they access sensitive information or make financial transactions (see the TOTP sketch after this list).
Use anti-phishing software: Use anti-phishing software to protect against phishing scams, including smishing attacks.
Monitor your phone bills: Regularly review phone bills for unusual charges or suspicious activity, which may result from a malicious actor spoofing your telephone number.
Secure communication platforms: Use secure communication platforms, such as encrypted messaging apps, to protect against voice and SMS fraud.
Invest in fraud detection solutions: Adopt solutions that can identify and act upon fraudulent calls.
Monitor for suspicious activity: Organizations can use tools to monitor suspicious activity, such as unexpected changes in calling patterns or unusual requests for information.
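As a minimal sketch of the two-factor authentication recommendation above, the example below uses time-based one-time passwords via the pyotp library; the library choice, account name, and issuer are assumptions for illustration.

```python
# Minimal TOTP two-factor sketch using the pyotp library (an assumption -- any
# standard TOTP implementation follows the same provision/verify flow).
import pyotp

secret = pyotp.random_base32()           # provisioned once per user, stored server-side
totp = pyotp.TOTP(secret)

# URI the user scans into an authenticator app (hypothetical account and issuer).
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

code = totp.now()                        # what the user's authenticator app displays
print("Valid?", totp.verify(code))       # server-side check at login or transaction time
```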

By following these best practices, businesses can reduce the likelihood of a telephony fraud disaster.

If you are an individual looking to safeguard yourself from such attacks, consider the following:

Stay aware of commonly used scam types and how to recognize them.
Never give out personal information or make financial transactions over the phone unless you are sure you are dealing with a legitimate entity.
Use strong passwords and enable two-factor authentication whenever possible to protect against unauthorized access to your accounts.
If you receive a suspicious phone call, hang up and verify the call’s legitimacy before providing any information. You can do this by looking up the phone number online or contacting the organization directly using a phone number you know is legitimate.
Be cautious of unsolicited phone calls, especially if the caller requests personal information or tries to rush you into making a decision.
Report any voice fraud to the authorities and relevant organizations, such as your bank or credit card company. This can help to prevent others from falling victim to similar scams.

Overall, it is imperative to have a multi-layered approach to combat telephony fraud. This should include an effective monitoring solution to identify anomalies in voice and SMS traffic patterns and the ability to detect and act upon suspicious activity quickly.
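As a minimal sketch of what such baseline-and-anomaly monitoring can look like, the example below flags hours whose outbound call volume deviates sharply from a trailing baseline. The thresholds, window size, and data are assumptions for illustration, not any vendor's actual method.

```python
# Minimal sketch of baseline/anomaly monitoring on hourly outbound call counts.
# Thresholds and data are invented; this is not any vendor's actual method.
from statistics import mean, stdev

hourly_outbound_calls = [120, 115, 130, 118, 125, 122, 119, 640, 121, 117]

def flag_anomalies(counts, window=6, z_threshold=3.0):
    """Flag hours whose call volume is far above the trailing-window baseline."""
    anomalies = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (counts[i] - mu) / sigma > z_threshold:
            anomalies.append((i, counts[i]))
    return anomalies

print(flag_anomalies(hourly_outbound_calls))  # [(7, 640)] -- possible robocall burst
```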

AT&T Cybersecurity Consulting offers a telephony fraud management program that will equip your organization with unique visibility into your voice and SMS traffic, allowing you to observe daily traffic flow across your network. As a result, your organization will be able to understand established baselines of “normal” traffic originating from your network.

AT&T Cybersecurity Consulting will actively monitor your network traffic to pinpoint deviations from your baseline traffic patterns and quickly identify malicious activity or robocall campaigns spoofing your organization’s telephone numbers. If an anomaly is detected, the AT&T Cybersecurity Consulting team will notify you with a report of the observed activity and present options for responding. These options include, but are not limited to, blocking traffic from transiting the AT&T network and requesting a traceback to determine the originating source of the spoofed traffic.

For more information about our telephony fraud management service, please forward any inquiries to caas-voicefraud@list.att.com.


Why it’s time to review your on-premises Microsoft Exchange patch status


We start the patching year of 2023 looking at one of the largest releases of vulnerability fixes in Microsoft history. The January 10 Patch Tuesday update patched one actively exploited zero-day vulnerability and 98 security flaws. The update arrives at a time when short- and long-term technology and budget decisions need to be made.

This is particularly true for organizations using on-premises Microsoft Exchange Servers. Start off 2023 by reviewing the most basic communication tool you have in your business: your mail server. Is it as protected as it could be from the threats that lie ahead of us in the coming months? The attackers know the answer to that question.


redis-6.2.10-1.fc36


FEDORA-2023-68ae37fca3

Packages in this update:

redis-6.2.10-1.fc36

Update description:

Redis 6.2.10 Released Mon Jan 17 12:00:00 IST 2023

Upgrade urgency: MODERATE, a quick followup fix for a recently released 6.2.9.

Bug Fixes

Revert the change to KEYS in the recent client output buffer limit fix (#11676)

Redis 6.2.9 Released Mon Jan 16 12:00:00 IDT 2023

Upgrade urgency: SECURITY, contains fixes to security issues.

Security Fixes:

(CVE-2022-35977) Integer overflow in the Redis SETRANGE and SORT/SORT_RO commands can drive Redis to OOM panic
(CVE-2023-22458) Integer overflow in the Redis HRANDFIELD and ZRANDMEMBER commands can lead to denial-of-service

Bug Fixes

Avoid possible hang when client issues long KEYS, SRANDMEMBER, HRANDFIELD, and ZRANDMEMBER commands and gets disconnected by client output buffer limit (#11676)
Fix sentinel issue if replica changes IP (#11590)
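If you want to confirm that a host picked up the fixed build after updating, a quick check with the redis-py client might look like the sketch below; it assumes the server is reachable locally on the default port.

```python
# Quick post-update check: confirm the local Redis server reports the patched version.
# Assumes redis-py is installed and the server listens on localhost:6379.
import redis

r = redis.Redis(host="localhost", port=6379)
version = r.info("server")["redis_version"]
print(f"redis_version: {version}")
if tuple(int(x) for x in version.split(".")) >= (6, 2, 10):
    print("Server is at or above 6.2.10 (includes CVE-2022-35977 / CVE-2023-22458 fixes).")
else:
    print("Server is older than 6.2.10 -- update and restart the service.")
```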
