CVE-2020-29168


SQL Injection vulnerability in Projectworlds Online Doctor Appointment Booking System allows attackers to obtain sensitive information via the q parameter of the getuser.php endpoint.
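The advisory gives no code, but the flaw class is easy to illustrate. The sketch below uses Python with an in-memory SQLite table purely for illustration (the real endpoint is PHP, and the table and column names here are invented); it contrasts the vulnerable string-concatenation pattern with a parameterized query, which is the standard fix:

```python
import sqlite3

# Throwaway database standing in for the booking system's user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def get_user_unsafe(q):
    # VULNERABLE: user input is concatenated into the SQL string, so
    # q = "' OR '1'='1" rewrites the WHERE clause and returns every row.
    return conn.execute(
        "SELECT username, email FROM users WHERE username = '" + q + "'"
    ).fetchall()

def get_user_safe(q):
    # SAFE: the parameter is bound separately and treated as data, not SQL.
    return conn.execute(
        "SELECT username, email FROM users WHERE username = ?", (q,)
    ).fetchall()

injection = "' OR '1'='1"
print(get_user_unsafe(injection))  # leaks all rows
print(get_user_safe(injection))    # no user literally has that name
```

The same bound-parameter pattern exists in PHP as prepared statements (PDO/mysqli), which is the applicable remediation for an endpoint like getuser.php.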


Are You Getting Caught by Click Bait?


It all feels so harmless. Who isn’t even a little curious which celebrity is their look-alike or what ’80s song best matches their personality? While some of these fun little quizzes and facial recognition-type games that pop up on social media are advertiser-generated and harmless, others have been carefully designed to steal your data.

According to the Better Business Bureau (BBB), consumers need to beware of IQ tests and quizzes that require you to trade personal information. Depending on the goal of the scam, one click could result in a slew of new email or text spam, malicious data mining, or even a monthly charge on your phone bill.

Besides the spammy quizzes, scammers also use click bait: headlines designed to get your click and your data. Such headlines often promise juicy info on celebrities and may even mimic legitimate human-interest stories that claim, “and you won’t believe what happened next.” While some of those headlines are authored by reputable companies simply trying to sell products and compete for clicks, others are data traps that chip away at your privacy.

The best defense against click bait is knowledge. Similar to the plague of fake news circulating online, click bait is getting more sophisticated and deceptive in appearance, which means that users must be even more sophisticated in understanding how to sidestep these digital traps.

5 Tips to Help You Tame Your Clicks

Just say no, and help others do the same. Scammers understand human digital behavior and design quizzes they know will get a lot of shares. “Fun” and “wow!” easily go viral. Refuse to pass on the information, and when you see it, call it out as blogger David Neilsen did. A scammer’s goal is access to your data and your social pages, which in turn gives them access to your friends’ data. If you want to find out which Harry Potter character you are most like, just know you will pay with your privacy, so practice saying no.
Vet your friends. Gone are the days of hundreds of thousands of “friends and followers” to affirm our social worth. With every unknown friend you let into your digital circle, you increase your chances of losing more privacy. Why take the risk? Also, take a closer look at who is sharing a contest, quiz, or game. A known friend may have been hacked. Go through their feed to see if there’s anything askew with the account.
Beware of click jacking. This malicious technique tricks a web user into clicking on something different from what they perceive they are clicking on, which could result in revealing confidential information or a scammer taking control of their computer.
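Beyond caution as a user, site operators can defeat clickjacking by forbidding other sites from framing their pages. The sketch below is a minimal illustration (the function name and defaults are our own, not from the article) of building the standard anti-framing response headers:

```python
# Minimal sketch of the standard server-side clickjacking defense:
# refuse to let attacker-controlled pages embed your site in an <iframe>.

def anti_clickjacking_headers(allowed_ancestors=None):
    """Build HTTP response headers that restrict which origins,
    if any, may frame the page."""
    if allowed_ancestors:
        # Allow framing only by explicitly trusted origins.
        csp = "frame-ancestors " + " ".join(allowed_ancestors)
        legacy = "SAMEORIGIN"
    else:
        # Deny framing entirely (the safest default).
        csp = "frame-ancestors 'none'"
        legacy = "DENY"
    return {
        "Content-Security-Policy": csp,
        # Older browsers honor X-Frame-Options instead of CSP.
        "X-Frame-Options": legacy,
    }

print(anti_clickjacking_headers())
```

Any web framework lets you attach these headers to every response; with them in place, an invisible overlaid iframe of your site simply will not render.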
Be aware of ‘Like Farming’ scams. Quizzes can be part of a scam called “Like Farming.” In this scenario, scammers create a piece of legitimate content, then swap it out for something else less desirable once the post has gone viral.
Adjust your settings. Since these quizzes mainly show up on Facebook, start adjusting your settings there. From your Settings, you can select or deselect the permissions granted to apps, which is one easy way to stop the madness. Another way is to go to the actual post or quiz and click the downward-facing arrow at the top right of the post. Tell Facebook to block these types of ads or posts, or, if you are sure it’s a scam, report the post.
Value your online time. Click bait is an epic waste of time. When a headline or quiz teases users to click without giving much information about what will follow, those posts get a lot more clicks, which moves them up the Facebook food chain. Keep in mind click bait is a trap that A) tricks you B) wastes valuable time and C) edges out content from your friends and Facebook pages that you actually want to see.

Our digital landscape is peppered with fake news and click bait, which makes it difficult to build trust with individuals and brands who have legitimate messages and products to share. As you become savvy to these kinds of data scams, your discernment and ability to hold onto your clicks will become second nature. Continue to have fun, learn, and connect, but guard your heart with every click. Be sure to keep your devices protected while you do!

The post Are You Getting Caught by Click Bait? appeared first on McAfee Blog.


Defending against AI Lobbyists


When is it time to start worrying about artificial intelligence interfering in our democracy? Maybe when an AI writes a letter to The New York Times opposing the regulation of its own technology.

That happened last month. And because the letter was responding to an essay we wrote, we’re starting to get worried. And while the technology can be regulated, the real solution lies in recognizing that the problem is human actors—and those we can do something about.

Our essay argued that the much heralded launch of the AI chatbot ChatGPT, a system that can generate text realistic enough to appear to be written by a human, poses significant threats to democratic processes. The ability to produce high quality political messaging quickly and at scale, if combined with AI-assisted capabilities to strategically target those messages to policymakers and the public, could become a powerful accelerant of an already sprawling and poorly constrained force in modern democratic life: lobbying.

We speculated that AI-assisted lobbyists could use generative models to write op-eds and regulatory comments supporting a position, identify members of Congress who wield the most influence over pending legislation, use network pattern identification to discover undisclosed or illegal political coordination, or use supervised machine learning to calibrate the optimal contribution needed to sway the vote of a legislative committee member.

These are all examples of what we call AI hacking. Hacks are strategies that follow the rules of a system but subvert its intent. Today, hacking is a human creative process; future AIs could discover, develop, and execute these same strategies.

While some of these activities are the longtime domain of human lobbyists, AI tools applied to the same tasks would have unfair advantages. They can scale their activity effortlessly across every state in the country (human lobbyists tend to focus on a single state), they may uncover patterns and approaches unintuitive and unrecognizable to human experts, and they can do so nearly instantaneously, with little chance for human decision makers to keep up.

These factors could make AI hacking of the democratic process fundamentally ungovernable. Any policy response to limit the impact of AI hacking on political systems would be critically vulnerable to subversion or control by an AI hacker. If AI hackers achieve unchecked influence over legislative processes, they could dictate the rules of our society: including the rules that govern AI.

We admit that this seemed far-fetched when we first wrote about it in 2021. But now that the emanations and policy prescriptions of ChatGPT have been given an audience in The New York Times and innumerable other outlets in recent weeks, it’s getting harder to dismiss.

At least one group of researchers is already testing AI techniques to automatically find and advocate for bills that benefit a particular interest. And one Massachusetts representative used ChatGPT to draft legislation regulating AI.

The AI technology of two years ago seems quaint by the standards of ChatGPT. What will the technology of 2025 seem like if we could glimpse it today? To us there is no question that now is the time to act.

First, let’s dispense with the concepts that won’t work. We cannot solely rely on explicit regulation of AI technology development, distribution, or use. Regulation is essential, but it would be vastly insufficient. The rate of AI technology development, and the speed at which AI hackers might discover damaging strategies, already outpaces policy development, enactment, and enforcement.

Moreover, we cannot rely on detection of AI actors. The latest research suggests that AI models trying to classify text samples as human- or AI-generated have limited precision and are ill-equipped to handle real-world scenarios. These reactive, defensive techniques will fail because the rate of advancement of the “offensive” generative AI is so astounding.

Additionally, we risk a dragnet that will exclude masses of human constituents that will use AI to help them express their thoughts, or machine translation tools to help them communicate. If a written opinion or strategy conforms to the intent of a real person, it should not matter if they enlisted the help of an AI (or a human assistant) to write it.

Most importantly, we should avoid the classic trap of societies wrenched by the rapid pace of change: privileging the status quo. Slowing down may seem like the natural response to a threat whose primary attribute is speed. Ideas like increasing requirements for human identity verification, aggressive detection regimes for AI-generated messages, and elongation of the legislative or regulatory process would all play into this fallacy. While each of these solutions may have some value independently, they do nothing to make the already powerful actors less powerful.

Finally, it won’t work to try to starve the beast. Large language models like ChatGPT have a voracious appetite for data. They are trained on past examples of the kinds of content that they will be asked to generate in the future. Similarly, an AI system built to hack political systems will rely on data that documents the workings of those systems, such as messages between constituents and legislators, floor speeches, chamber and committee voting results, contribution records, lobbying relationship disclosures, and drafts of and amendments to legislative text. The steady advancement towards the digitization and publication of this information that many jurisdictions have made is positive. The threat of AI hacking should not dampen or slow progress on transparency in public policymaking.

Okay, so what will help?

First, recognize that the true threats here are malicious human actors. Systems like ChatGPT and our still-hypothetical political-strategy AI are still far from artificial general intelligences. They do not think. They do not have free will. They are just tools directed by people, much like lobbyists for hire. And, like lobbyists, they will be available primarily to the richest individuals, groups, and their interests.

However, we can use the same tools that would be effective in controlling human political influence to curb AI hackers. These tools will be familiar to any follower of the last few decades of U.S. political history.

Campaign finance reforms such as contribution limits, particularly when applied to political action committees of all types as well as to candidate-operated campaigns, can reduce the dependence of politicians on contributions from private interests. The unfair advantage of a malicious actor using AI lobbying tools is at least somewhat mitigated if a political target’s entire career is not already focused on cultivating a concentrated set of major donors.

Transparency also helps. We can expand mandatory disclosure of contributions and lobbying relationships, with provisions to prevent the obfuscation of the funding source. Self-interested advocacy should be transparently reported whether or not it was AI-assisted. Meanwhile, we should increase penalties for organizations that benefit from AI-assisted impersonation of constituents in political processes, and set a greater expectation of responsibility to avoid “unknowing” use of these tools on their behalf.

Our most important recommendation is less legal and more cultural. Rather than trying to make it harder for AI to participate in the political process, make it easier for humans to do so.

The best way to fight an AI that can lobby for moneyed interests is to help the little guy lobby for theirs. Promote inclusion and engagement in the political process so that organic constituent communications grow alongside the potential growth of AI-directed communications. Encourage direct contact that generates more-than-digital relationships between constituents and their representatives, which will be an enduring way to privilege human stakeholders. Provide paid leave to allow people to vote as well as to testify before their legislature and participate in local town meetings and other civic functions. Provide childcare and accessible facilities at civic functions so that more community members can participate.

The threat of AI hacking our democracy is legitimate and concerning, but its solutions are consistent with our democratic values. Many of the ideas above are good governance reforms already being pushed and fought over at the federal and state level.

We don’t need to reinvent our democracy to save it from AI. We just need to continue the work of building a just and equitable political system. Hopefully ChatGPT will give us all some impetus to do that work faster.

This essay was written with Nathan Sanders, and appeared on the Belfer Center blog.


A Vulnerability in Clam AntiVirus Could Allow for Remote Code Execution


A vulnerability has been discovered in Clam AntiVirus, which could allow for remote code execution. Clam AntiVirus is an open-source, cross-platform antimalware toolkit able to detect many types of malware. Successful exploitation of this vulnerability could allow an attacker to execute remote code as the Clam AntiVirus platform. Depending on the privileges associated with the application, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Applications that are configured to have fewer user rights on the system could be less impacted than those that operate with administrative user rights.


curl-7.85.0-6.fc37


FEDORA-2023-ddf6575695

Packages in this update:

curl-7.85.0-6.fc37

Update description:

fix HTTP multi-header compression denial of service (CVE-2023-23916)
share HSTS between handles (CVE-2023-23915 CVE-2023-23914)


Building Blocks for Cyber Resilience: MSSPs Can Lead the Way


In today’s world, cybersecurity is an ever-growing concern for businesses. With the rising threat of cyberattacks and data breaches, it can be difficult for companies to keep up with the latest security technologies and stay ahead of the curve. Managed Security Services Providers (MSSPs) provide comprehensive security solutions to clients, offering services that range from monitoring and threat intelligence to incident response. MSSPs are ideal for businesses looking for an all-in-one security solution tailored to their specific needs. Here are some initiatives MSSPs should consider when looking to help customers in 2023.

Making Zero Trust attainable

As the global landscape continues to test our resiliency, staying focused on a security-first mindset is critical. Organizations must consider the most significant risks and take a proactive approach to address cyber risk concerns. This means assessing the current state of their cybersecurity, understanding their attack surface, and rethinking their security strategy with a Zero Trust model. By taking a risk-based approach to vulnerability management, implementing cloud security measures, and developing third-party risk management solutions, organizations can ensure they are prepared to adapt to the ever-changing digital landscape and remain resilient in the face of cyber threats.

The traditional perimeter as we know it is no longer viable due to the shift to remote and hybrid working. To keep our networks secure, Zero Trust architecture is essential. Zero Trust reduces the risk of security breaches by authenticating and authorizing every person and system before granting access. Nowadays, the security industry is figuring out how to apply Zero Trust practically. Established companies are using the term Zero Trust in their product portfolios to capitalize on the opportunity. Ultimately, Zero Trust will become more prominent with measurable results.

Risk-Based vulnerability management

Managing vulnerabilities inside your environment is challenging. New attack vectors that let threat actors breach your network are identified daily. Organizationally, the attack surface is constantly changing due to IT device and platform lifecycle issues, changing operational priorities, and the adoption of emerging technologies. With every change comes the risk that a new flaw or configuration issue will provide a threat actor with the final link in their attack chain, resulting in an impact on your users, operations, and customers.

Your network is expanding, both in the traditional sense and through the ever-increasing role of endpoints, devices, and the Internet of Things. Each year the amount of data multiplies exponentially, attacks become more sophisticated, and minimizing risk while optimizing operations grows more difficult. It can feel like a never-ending battle, yet identifying, prioritizing, and managing vulnerabilities through remediation is not only possible; it can be simple.

Vulnerability management is an established function of information security, but with technology configurations constantly evolving and cloud and container infrastructure expanding, the complexities of vulnerability management persist. Today’s best vulnerability management platforms are designed to provide visibility, remediation automation, and improved vulnerability prioritization.

Vulnerability and patch management are essential for any organization, as is risk reduction. With the right risk reduction strategy, organizations can improve their cyber resilience. To keep their IT infrastructure up to date and secure, organizations should focus on strengthening the fundamentals of vulnerability and patch management, risk reduction, and Managed Extended Detection and Response (MXDR). By implementing these strategies, organizations can reduce risk and improve their security posture.
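To make "risk-based" concrete: one common approach weighs raw severity by asset criticality and exploitability rather than triaging on CVSS score alone. The sketch below is illustrative only (the field names, weights, and CVE labels are invented, not from any vendor platform), but it shows why the ranking changes:

```python
# Illustrative sketch of risk-based vulnerability prioritization:
# rank findings by severity weighted by asset criticality and
# known-exploit status, rather than by CVSS base score alone.

def risk_score(finding):
    """Combine CVSS base score, asset criticality (1-5), and whether a
    public exploit exists into a single prioritization score."""
    score = finding["cvss"] * finding["asset_criticality"]
    if finding["exploit_available"]:
        score *= 2  # actively exploitable flaws jump the queue
    return score

findings = [
    {"id": "CVE-A", "cvss": 9.8, "asset_criticality": 1, "exploit_available": False},
    {"id": "CVE-B", "cvss": 6.5, "asset_criticality": 5, "exploit_available": True},
    {"id": "CVE-C", "cvss": 7.2, "asset_criticality": 3, "exploit_available": False},
]

# A medium-severity flaw on a critical asset with a public exploit
# outranks a critical-severity flaw on a throwaway host.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f["id"], round(risk_score(f), 1))
```

Under a pure-CVSS policy the 9.8 finding would be patched first; the weighted ranking instead surfaces the 6.5 finding on the critical, exploitable asset, which is the behavior risk-based platforms aim for.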

Security Mesh, Zero Trust, and SASE (Secure Access Service Edge)

These are three technology trends converging to allow organizations to consolidate and optimize their Zero Trust initiatives. Security Mesh provides a cloud-based fabric that enables organizations to connect to users, applications, and data in a secure and unified fashion. Zero Trust is a security model that eliminates the concept of trust assumptions based on internal network boundaries.

And SASE is a cloud-delivered service that combines network and security functions, including secure access, cloud security, and network security, into a single integrated solution. These technologies can be used together to reduce complexity and help organizations to implement their Zero Trust strategies quickly and effectively. By consolidating and optimizing Zero Trust initiatives, organizations can gain the security, agility, and scalability needed to accelerate their digital transformation.

The biggest challenge for SASE adoption is the split decision between networking and security components. While the two technologies have their strengths and weaknesses, their integration is the most critical factor for successful SASE deployments. Enterprises need to evaluate both solutions’ performance, scalability, reliability, and cost to determine which is best suited to their needs. At the same time, they need to consider the synergies between the two solutions to make sure that combining them will yield the best results. The primary benefit of SASE is the integration of networking and security services, which simplifies the provisioning and maintenance of both.

Additionally, the service provider can offer more tailored solutions to its customers, allowing them to customize their SASE deployments to meet their specific needs. This makes the solution more attractive to enterprises and increases the likelihood of adoption. Ultimately, the split decision between networking and security components is a challenge that SASE must overcome to remain relevant in the future. Enterprises need to weigh both solutions’ pros and cons and ensure they invest in the right technologies. By doing so, they can ensure that they get the most out of their SASE deployments and guarantee that their solutions remain up-to-date and secure.

Cyber Resilience

MSSPs can offer a Cyber Resilience service that leverages expertise to enhance protection, detection, and response capabilities while improving an organization’s ability to recover rapidly from a malicious attack. Such a service helps shift an organization’s model from reactive to proactive, preparing the team for potential cyberattacks by implementing a resilience model. This end-to-end service capability helps reduce risk holistically and supports an organization’s ability to identify, protect, detect, respond, and recover from malicious activity. A Cyber Resilience service is a customized strategy to enhance your current people, processes, and technology based on comprehensive strategic and tactical evaluations across the enterprise.
