New Mirai botnet variant V3G4 targets Linux servers, IoT devices

Read Time:24 Second

A new variant of Mirai, the botnet malware used to launch massive DDoS attacks, has been targeting 13 vulnerabilities in Linux-based servers and IoT devices, according to researchers on Palo Alto Networks' Unit 42 cybersecurity team.

Once vulnerable devices are compromised by the variant, dubbed V3G4, they can be fully controlled by attackers and become part of a botnet that can be used to conduct further campaigns, including DDoS attacks.


Read More

CVE-2020-29168

Read Time:8 Second

A SQL injection vulnerability in the Projectworlds Online Doctor Appointment Booking System allows attackers to obtain sensitive information via the q parameter of the getuser.php endpoint.
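
The advisory does not reproduce the affected PHP code, but the flaw it describes is the classic pattern of splicing a request parameter directly into a SQL statement. Below is a minimal, illustrative Python sketch of that pattern and its standard parameterized fix; the table and column names are made up for the example:

```python
import sqlite3

def get_user_vulnerable(conn: sqlite3.Connection, q: str):
    # UNSAFE: q is interpolated into the statement, so input such as
    # "' OR '1'='1" rewrites the query and can leak other users' rows.
    cursor = conn.execute(f"SELECT * FROM users WHERE username = '{q}'")
    return cursor.fetchall()

def get_user_safe(conn: sqlite3.Connection, q: str):
    # SAFE: a bound parameter is always treated as data, never as SQL,
    # no matter what characters the attacker supplies.
    cursor = conn.execute("SELECT * FROM users WHERE username = ?", (q,))
    return cursor.fetchall()
```

The same fix applies in PHP through prepared statements (for example, via PDO).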

Read More

Are You Getting Caught by Click Bait?

Read Time:3 Minute, 45 Second

It all feels so harmless. Who isn't even a little curious which celebrity is their look-alike or what '80s song best matches their personality? While some of these fun little quizzes and facial recognition-type games that pop up on social media are advertiser-generated and harmless, others have been carefully designed to steal your data.

According to the Better Business Bureau (BBB), consumers need to beware of IQ tests and quizzes that require you to trade personal information for results. Depending on the goal of the scam, one click could result in a new slew of email or text spam, malicious data mining, or even a monthly charge on your phone bill.

Besides the spammy quizzes, scammers also use click bait: headlines designed to get your click and your data. Such headlines often promise juicy info on celebrities, and may even mimic legitimate human-interest stories that claim, "and you won't believe what happened next." While some of those headlines are authored by reputable companies simply trying to sell products and compete for clicks, others are data traps that chip away at your privacy.

The best defense against click bait is knowledge. Similar to the plague of fake news circulating online, click bait is getting more sophisticated and deceptive in appearance, which means that users must be even more sophisticated in understanding how to sidestep these digital traps.

5 Tips to Help You Tame Your Clicks

Just say no, and help others do the same. Scammers understand human digital behavior and design quizzes they know will get a lot of shares. "Fun" and "wow!" easily go viral. Refuse to pass the quizzes along, and when you see one, call it out as blogger David Neilsen did. A scammer's goal is access to your data and your social pages, which in turn gives them access to your friends' data. If you want to find out which Harry Potter character you are most like, just know you will pay with your privacy, so practice saying no.
Vet your friends. Gone are the days of needing hundreds of thousands of "friends and followers" to affirm our social worth. With every unknown friend you let into your digital circle, you increase your chances of losing more privacy. Why take the risk? Also, take a closer look at who is sharing a contest, quiz, or game: a known friend may have been hacked, so go through their feed to see if anything about the account looks amiss.
Beware of clickjacking. This malicious technique tricks a web user into clicking on something different from what they believe they are clicking on, which could reveal confidential information or let a scammer take control of their computer. (A sketch of how websites block this trick follows this list.)
Be aware of ‘Like Farming’ scams. Quizzes can be part of a scam called “Like Farming.” In this scenario, scammers create a piece of legitimate content, then swap it out for something else less desirable once the post has gone viral.
Adjust your settings. Since these quizzes mainly show up on Facebook, start adjusting your settings there. From your Settings you can select or deselect the permissions you have granted to apps, which is one easy way to stop the madness. Another is to go to the actual post or quiz and click the downward-facing arrow at the top right of the post, then tell Facebook to block these types of ads or posts or, if you are sure it's a scam, report the post.
Value your online time. Click bait is an epic waste of time. When a headline or quiz teases users to click without giving much information about what will follow, those posts get far more clicks, which moves them up the Facebook food chain. Keep in mind that click bait is a trap that A) tricks you, B) wastes valuable time, and C) edges out content from your friends and Facebook pages that you actually want to see.
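
Clickjacking defenses ultimately belong to the sites you visit: a page can tell browsers to refuse to render it inside another site's frame, which removes the invisible overlay trick entirely. Here is a minimal sketch (assuming a Python Flask app purely for illustration; any web framework can set the same two response headers):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "This page cannot be embedded in another site's frame."

@app.after_request
def forbid_framing(response):
    # Tell browsers never to render this site inside a frame or iframe,
    # so an attacker's page cannot overlay invisible click targets on it.
    response.headers["Content-Security-Policy"] = "frame-ancestors 'none'"
    response.headers["X-Frame-Options"] = "DENY"  # fallback for older browsers
    return response

if __name__ == "__main__":
    app.run()
```

As a visitor, your best protection remains an up-to-date browser plus the habits in the list above.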

Our digital landscape is peppered with fake news and click bait, which makes it difficult to build trust with the individuals and brands that have legitimate messages and products to share. As you become savvy to these kinds of data scams, your discernment and ability to hold onto your clicks will become second nature. Continue to have fun, learn, and connect, but guard your heart with every click. And be sure to keep your devices protected while you do!

The post Are You Getting Caught by Click Bait? appeared first on McAfee Blog.

Read More

Defending against AI Lobbyists

Read Time:6 Minute, 59 Second

When is it time to start worrying about artificial intelligence interfering in our democracy? Maybe when an AI writes a letter to The New York Times opposing the regulation of its own technology.

That happened last month. And because the letter was responding to an essay we wrote, we’re starting to get worried. And while the technology can be regulated, the real solution lies in recognizing that the problem is human actors—and those we can do something about.

Our essay argued that the much-heralded launch of the AI chatbot ChatGPT, a system that can generate text realistic enough to appear to be written by a human, poses significant threats to democratic processes. The ability to produce high-quality political messaging quickly and at scale, if combined with AI-assisted capabilities to strategically target those messages to policymakers and the public, could become a powerful accelerant of an already sprawling and poorly constrained force in modern democratic life: lobbying.

We speculated that AI-assisted lobbyists could use generative models to write op-eds and regulatory comments supporting a position, identify members of Congress who wield the most influence over pending legislation, use network pattern identification to discover undisclosed or illegal political coordination, or use supervised machine learning to calibrate the optimal contribution needed to sway the vote of a legislative committee member.

These are all examples of what we call AI hacking. Hacks are strategies that follow the rules of a system but subvert its intent. Hacking is currently a human creative process, but future AIs could discover, develop, and execute these same strategies.

While some of these activities are the longtime domain of human lobbyists, AI tools applied to the same tasks would have unfair advantages. They can scale their activity effortlessly across every state in the country, whereas human lobbyists tend to focus on a single state; they may uncover patterns and approaches that are unintuitive and unrecognizable to human experts; and they can do so nearly instantaneously, with little chance for human decision makers to keep up.

These factors could make AI hacking of the democratic process fundamentally ungovernable. Any policy response to limit the impact of AI hacking on political systems would be critically vulnerable to subversion or control by an AI hacker. If AI hackers achieve unchecked influence over legislative processes, they could dictate the rules of our society, including the rules that govern AI.

We admit that this seemed far-fetched when we first wrote about it in 2021. But now that the emanations and policy prescriptions of ChatGPT have been given an audience in The New York Times and innumerable other outlets in recent weeks, it's getting harder to dismiss.

At least one group of researchers is already testing AI techniques to automatically find and advocate for bills that benefit a particular interest. And one Massachusetts representative used ChatGPT to draft legislation regulating AI.

The AI technology of two years ago seems quaint by the standards of ChatGPT. What would the technology of 2025 seem like if we could glimpse it today? To us there is no question that now is the time to act.

First, let’s dispense with the concepts that won’t work. We cannot solely rely on explicit regulation of AI technology development, distribution, or use. Regulation is essential, but it would be vastly insufficient. The rate of AI technology development, and the speed at which AI hackers might discover damaging strategies, already outpaces policy development, enactment, and enforcement.

Moreover, we cannot rely on detection of AI actors. The latest research suggests that AI models trying to classify text samples as human- or AI-generated have limited precision and are ill-equipped to handle real-world scenarios. These reactive, defensive techniques will fail because the rate of advancement of the "offensive" generative AI is so astounding.

Additionally, we risk a dragnet that would exclude masses of human constituents who use AI to help them express their thoughts, or machine translation tools to help them communicate. If a written opinion or strategy conforms to the intent of a real person, it should not matter whether they enlisted the help of an AI (or a human assistant) to write it.

Most importantly, we should avoid the classic trap of societies wrenched by the rapid pace of change: privileging the status quo. Slowing down may seem like the natural response to a threat whose primary attribute is speed. Ideas like increasing requirements for human identity verification, aggressive detection regimes for AI-generated messages, and elongation of the legislative or regulatory process would all play into this fallacy. While each of these solutions may have some value independently, they do nothing to make the already powerful actors less powerful.

Finally, it won’t work to try to starve the beast. Large language models like ChatGPT have a voracious appetite for data. They are trained on past examples of the kinds of content that they will be asked to generate in the future. Similarly, an AI system built to hack political systems will rely on data that documents the workings of those systems, such as messages between constituents and legislators, floor speeches, chamber and committee voting results, contribution records, lobbying relationship disclosures, and drafts of and amendments to legislative text. The steady advancement towards the digitization and publication of this information that many jurisdictions have made is positive. The threat of AI hacking should not dampen or slow progress on transparency in public policymaking.

Okay, so what will help?

First, recognize that the true threats here are malicious human actors. Systems like ChatGPT and our still-hypothetical political-strategy AI are still far from artificial general intelligences. They do not think. They do not have free will. They are just tools directed by people, much like lobbyists for hire. And, like lobbyists, they will be available primarily to the richest individuals, groups, and their interests.

However, we can use the same tools that would be effective in controlling human political influence to curb AI hackers. These tools will be familiar to any follower of the last few decades of U.S. political history.

Campaign finance reforms such as contribution limits, particularly when applied to political action committees of all types as well as to candidate-operated campaigns, can reduce the dependence of politicians on contributions from private interests. The unfair advantage of a malicious actor using AI lobbying tools is at least somewhat mitigated if a political target's entire career is not already focused on cultivating a concentrated set of major donors.

Transparency also helps. We can expand mandatory disclosure of contributions and lobbying relationships, with provisions to prevent the obfuscation of the funding source. Self-interested advocacy should be transparently reported whether or not it was AI-assisted. Meanwhile, we should increase penalties for organizations that benefit from AI-assisted impersonation of constituents in political processes, and set a greater expectation of responsibility to avoid “unknowing” use of these tools on their behalf.

Our most important recommendation is less legal and more cultural. Rather than trying to make it harder for AI to participate in the political process, make it easier for humans to do so.

The best way to fight an AI that can lobby for moneyed interests is to help the little guy lobby for theirs. Promote inclusion and engagement in the political process so that organic constituent communications grow alongside the potential growth of AI-directed communications. Encourage direct contact that generates more-than-digital relationships between constituents and their representatives, which will be an enduring way to privilege human stakeholders. Provide paid leave to allow people to vote as well as to testify before their legislature and participate in local town meetings and other civic functions. Provide childcare and accessible facilities at civic functions so that more community members can participate.

The threat of AI hacking our democracy is legitimate and concerning, but its solutions are consistent with our democratic values. Many of the ideas above are good governance reforms already being pushed and fought over at the federal and state level.

We don’t need to reinvent our democracy to save it from AI. We just need to continue the work of building a just and equitable political system. Hopefully ChatGPT will give us all some impetus to do that work faster.

This essay was written with Nathan Sanders, and appeared on the Belfer Center blog.

Read More