CVE-2015-10099

Read Time:24 Second

A vulnerability classified as critical has been found in the CP Appointment Calendar Plugin up to version 1.1.5. It affects the function dex_process_ready_to_go_appointment of the file dex_appointments.php. Manipulation of the argument itemnumber leads to SQL injection, and the attack can be initiated remotely. The patch is named e29a9cdbcb0f37d887dd302a05b9e8bf213da01d; it is recommended to apply it. The associated identifier of this vulnerability is VDB-225351.
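The plugin itself is PHP, but the flaw is generic: user input spliced directly into a SQL string. A minimal Python sketch of the vulnerability class and its standard fix, using an illustrative appointments table (none of the names below come from the plugin’s actual code):

```python
import sqlite3

def find_appointment_unsafe(conn, item_number: str):
    # Vulnerable pattern: the user-supplied value is spliced into the SQL
    # string, so input like "' OR '1'='1" changes the query's meaning.
    query = f"SELECT * FROM appointments WHERE item_number = '{item_number}'"
    return conn.execute(query).fetchall()

def find_appointment_safe(conn, item_number: str):
    # Parameterized query: the driver passes the value separately from the
    # SQL text, so it can never be interpreted as SQL syntax.
    query = "SELECT * FROM appointments WHERE item_number = ?"
    return conn.execute(query, (item_number,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE appointments (item_number TEXT, client TEXT)")
    conn.execute("INSERT INTO appointments VALUES ('42', 'alice')")
    payload = "' OR '1'='1"
    print(find_appointment_unsafe(conn, payload))  # leaks every row
    print(find_appointment_safe(conn, payload))    # []
```

The parameterized version is immune because the database driver transmits the value separately from the SQL text instead of splicing it into the query.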

Read More

LLMs and Phishing

Read Time:5 Minute, 24 Second

Here’s an experiment being run by undergraduate computer science students everywhere: Ask ChatGPT to generate phishing emails, and test whether these are better at persuading victims to respond or click on the link than the usual spam. It’s an interesting experiment, and the results are likely to vary wildly based on the details of the experiment.

But while it’s an easy experiment to run, it misses the real risk of large language models (LLMs) writing scam emails. Today’s human-run scams aren’t limited by the number of people who respond to the initial email contact. They’re limited by the labor-intensive process of persuading those people to send the scammer money. LLMs are about to change that. A decade ago, one type of spam email had become a punchline on every late-night show: “I am the son of the late king of Nigeria in need of your assistance….” Nearly everyone had gotten one or a thousand of those emails, to the point that it seemed everyone must have known they were scams.

So why were scammers still sending such obviously dubious emails? In 2012, researcher Cormac Herley offered an answer: It weeded out all but the most gullible. A smart scammer doesn’t want to waste their time with people who reply and then realize it’s a scam when asked to wire money. By using an obvious scam email, the scammer can focus on the most potentially profitable people. It takes time and effort to engage in the back-and-forth communications that nudge marks, step by step, from interlocutor to trusted acquaintance to pauper.

Long-running financial scams are now known as pig butchering: fattening up the potential mark until their ultimate and sudden demise. Such scams, which require gaining trust and infiltrating a target’s personal finances, take weeks or even months of personal time and repeated interactions. It’s a high-stakes and low-probability game that the scammer is playing.

Here is where LLMs will make a difference. Much has been written about the unreliability of OpenAI’s GPT models and those like them: They “hallucinate” frequently, making up things about the world and confidently spouting nonsense. For entertainment, this is fine, but for most practical uses it’s a problem. It is, however, not a bug but a feature when it comes to scams: LLMs’ ability to confidently roll with the punches, no matter what a user throws at them, will prove useful to scammers as they navigate hostile, bemused, and gullible scam targets by the billions. AI chatbot scams can ensnare more people, because the pool of victims who will fall for a more subtle and flexible scammer—one that has been trained on everything ever written online—is much larger than the pool of those who believe the king of Nigeria wants to give them a billion dollars.

Personal computers are powerful enough today that they can run compact LLMs. After Facebook’s new model, LLaMA, was leaked online, developers tuned it to run fast and cheaply on powerful laptops. Numerous other open-source LLMs are under development, with a community of thousands of engineers and scientists.
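Running such a model is now a few lines of code. A minimal sketch, assuming the community llama-cpp-python bindings and a quantized model file already downloaded to a local path (the path and parameters below are placeholders):

```python
# A minimal sketch of running a compact, quantized LLM on a laptop using the
# community llama-cpp-python bindings (pip install llama-cpp-python).
# The model path is a placeholder: you supply your own quantized model file
# (e.g., a GGUF export of an open LLaMA-family model).
from llama_cpp import Llama

llm = Llama(model_path="./models/7b-q4.gguf", n_ctx=2048)

output = llm(
    "Explain in one sentence why a locally run model needs no network access:",
    max_tokens=64,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

Everything runs on the local CPU, with no API key and no network connection, which is why provider-side safeguards do not apply to these models.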

A single scammer, from their laptop anywhere in the world, can now run hundreds or thousands of scams in parallel, night and day, with marks all over the world, in every language under the sun. The AI chatbots will never sleep and will always be adapting along their path to their objectives. And new mechanisms, from ChatGPT plugins to LangChain, will enable composition of AI with thousands of API-based cloud services and open-source tools, allowing LLMs to interact with the internet as humans do. The impersonations in such scams are no longer just princes offering their country’s riches. They are forlorn strangers looking for romance, hot new cryptocurrencies that are soon to skyrocket in value, and seemingly sound new financial websites offering amazing returns on deposits. And people are already falling in love with LLMs.

This is a change in both scope and scale. LLMs will change the scam pipeline, making scams more profitable than ever. We don’t know how to live in a world with a billion, or 10 billion, scammers that never sleep.

There will also be a change in the sophistication of these attacks. This is due not only to AI advances, but to the business model of the internet—surveillance capitalism—which produces troves of data about all of us, available for purchase from data brokers. Targeted attacks against individuals, whether for phishing or data collection or scams, were once only within the reach of nation-states. Combine the digital dossiers that data brokers have on all of us with LLMs, and you have a tool tailor-made for personalized scams.

Companies like OpenAI attempt to prevent their models from doing bad things. But with the release of each new LLM, social media sites buzz with new AI jailbreaks that evade the new restrictions put in place by the AI’s designers. ChatGPT, and then Bing Chat, and then GPT-4 were all jailbroken within minutes of their release, and in dozens of different ways. Most protections against bad uses and harmful output are only skin-deep, easily evaded by determined users. Once a jailbreak is discovered, it usually can be generalized, and the community of users pulls the LLM open through the chinks in its armor. And the technology is advancing too fast for anyone, even its designers, to fully understand how these models work.

This is all an old story, though: It reminds us that many of the bad uses of AI are a reflection of humanity more than they are a reflection of AI technology itself. Scams are nothing new: they are simply the intent, and then the action, of one person tricking another for personal gain. And the use of others as minions to accomplish scams is sadly nothing new or uncommon: For example, organized crime in Asia currently kidnaps or indentures thousands of people in scam sweatshops. Is it better that organized crime will no longer see the need to exploit and physically abuse people to run their scam operations, or worse that they and many others will be able to scale up scams to an unprecedented level?

Defense can and will catch up, but before it does, our signal-to-noise ratio is going to drop dramatically.

This essay was written with Barath Raghavan, and previously appeared on Wired.com.

Read More

What is the true potential impact of artificial intelligence on cybersecurity?

Read Time:45 Second

Will artificial intelligence become clever enough to upend computer security? AI is already surprising the world of art by producing masterpieces in any style on demand. It’s capable of writing poetry while digging up arcane facts in a vast repository. If AIs can act like a bard while delivering the comprehensive power of the best search engines, why can’t they shatter security protocols, too?

The answers are complex, rapidly evolving, and still murky. AI makes some parts of defending computers against attack easier. Other parts are more challenging and may never yield to any intelligence, human or artificial. Knowing which is which, though, is difficult. The rapid evolution of the new models makes it hard to say where AI will or won’t help with any certainty. The most dangerous statement may be, “AIs will never do that.”


Read More

CVE-2014-125098 (http_server)

Read Time:28 Second

A vulnerability was found in Dart http_server up to 0.9.5 and classified as problematic. Affected is the function VirtualDirectory of the file lib/src/virtual_directory.dart of the component Directory Listing Handler. Manipulation of the argument request.uri.path leads to cross-site scripting, and the attack may be launched remotely. Upgrading to version 0.9.6 addresses this issue; the patch is named 27c1cbd8125bb0369e675eb72e48218496e48ffb. It is recommended to upgrade the affected component. The identifier of this vulnerability is VDB-225356.
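The affected component is Dart, but the pattern is language-independent: a user-controlled request path is echoed into a generated directory listing without escaping. A minimal Python sketch of the bug and its fix (the function and names are illustrative, not the library’s actual code):

```python
import html

def directory_listing(path: str, entries: list[str]) -> str:
    # Vulnerable pattern: interpolating the raw request path lets a
    # crafted URL like /<script>alert(1)</script>/ inject markup:
    #   unsafe = f"<h1>Index of {path}</h1>"

    # Fix: HTML-escape every user-controlled value before embedding
    # it in the generated page.
    safe_path = html.escape(path)
    items = "".join(f"<li>{html.escape(e)}</li>" for e in entries)
    return f"<h1>Index of {safe_path}</h1><ul>{items}</ul>"

print(directory_listing("/<script>alert(1)</script>/", ["a.txt"]))
```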

Read More

CVE-2014-125097

Read Time:25 Second

A vulnerability classified as problematic was found in BestWebSoft Facebook Like Button up to 2.33. Affected is the function fcbkbttn_settings_page of the file facebook-button-plugin.php. The manipulation leads to cross-site scripting, and the attack can be launched remotely. Upgrading to version 2.34 addresses this issue; the patch is named b766da8fa100779409a953f0e46c2a2448cbe99c. It is recommended to upgrade the affected component. VDB-225354 is the identifier assigned to this vulnerability.

Read More

CVE-2014-125096

Read Time:26 Second

A vulnerability classified as problematic was found in the Fancy Gallery Plugin 1.5.12. Affected is unknown functionality in the file class.options.php of the component Options Page. The manipulation leads to cross-site scripting, and the attack can be launched remotely. Upgrading to version 1.5.13 addresses this issue; the patch is named fdf1f9e5a1ec738900f962e69c6fa4ec6055ed8d. It is recommended to upgrade the affected component. The identifier VDB-225349 was assigned to this vulnerability.

Read More

CVE-2012-10012

Read Time:24 Second

A vulnerability has been found in BestWebSoft Facebook Like Button up to 2.13 and classified as problematic. Affected is the function fcbk_bttn_plgn_settings_page of the file facebook-button-plugin.php. The manipulation leads to cross-site request forgery, and the attack can be launched remotely. The patch is named 33144ae5a45ed07efe7fceca901d91365fdbf7cb; it is recommended to apply it. The associated identifier of this vulnerability is VDB-225355.
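The plugin is PHP, and WordPress ships a nonce API for exactly this purpose, but the underlying defense generalizes: bind every state-changing request to an unguessable per-session token and verify it server-side. A minimal Python sketch with illustrative names:

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    # Generate an unguessable token once per session and embed it in
    # every state-changing form as a hidden field.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session: dict, submitted: str) -> bool:
    # Constant-time comparison; a forged cross-site request cannot read
    # the victim's session and therefore cannot supply the right token.
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
form_token = issue_csrf_token(session)
assert verify_csrf_token(session, form_token)           # legitimate submit
assert not verify_csrf_token(session, "forged-token")   # forged request
```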

Read More