How Ozempic Scams Put People’s Finances and Health at Risk

Read Time:6 Minute, 23 Second

As pharmacies each week fill more than one million prescriptions for Ozempic and other GLP-1 weight loss drugs, scammers are cashing in on the demand. Findings from our Threat Research Team reveal a sharp surge in Ozempic and weight loss scams online.

Any time money and scarcity meet online, you’ll find scammers. That’s what we have here with Ozempic and weight loss scams.

Doctors have prescribed GLP-1 drugs to treat diabetes for nearly two decades. Demand spiked with the U.S. Food and Drug Administration’s (FDA) approval of several GLP-1 drugs for weight loss.

Now, what was a $500 million market for the drug in 2020 stands to clear more than $7.5 billion in 2024.[i] As a result, these drugs are tough to come by as pharmaceutical companies struggle to keep up.

Ozempic scams abound across the internet, phones, and social media

McAfee’s Threat Research Team uncovered just how prolific these weight-loss scams have become. Malicious websites, scam emails and texts, posts on social media, and marketplace listings all round out the mix.

In the first four months of 2024, malicious phishing attempts centered around Ozempic, Wegovy, and Semaglutide increased 183% compared to October through December 2023.
McAfee researchers further discovered 449 risky website URLs and 176,871 dangerous phishing attempts centered around these drugs.
On Facebook, scammers impersonate doctors based outside of the U.S. These phony accounts promise Ozempic and other drugs without a prescription.
Other scammers have taken to Craigslist and similar marketplaces. In just one day in April, McAfee researchers identified 207 scam postings for Ozempic.

Across all of these scams, the sellers accept payment through Bitcoin, Zelle, Venmo, and Cash App. All are non-standard payment methods for prescription drugs, and all are certain red flags for scams.

Example of a scam website

Also common to these scams: a discount. McAfee researchers discovered several scams that offered bogus drugs at a discount if victims paid in cryptocurrency. Others offered them at greatly reduced prices, well under the legitimate drug's cost of $1,000 per dose.

Bogus Craigslist ad

As with so many scams, you can file these Ozempic and weight loss scams under “Too Good To Be True.” Steep discounts and offers to purchase the drugs without a prescription are sure-fire signs of a scam. And with this scam comes significant risks.

What happens when you fall for an Ozempic or weight loss scam

These scams can rip you off, harm your health, or both.

In many instances, these scams never deliver anything at all. The scam sites simply pocket the money in return for nothing. Further, many steal personal and financial info to commit identity theft down the road.

In some cases, scammers do indeed deliver. Yet instead of receiving an injection pen with the proper drug, scammers send EpiPens loaded with allergy medication, insulin pens, or pens loaded with a saline solution.

One scam victim shared her story with us after she got scammed with a phony pen:

“I started using Ozempic in February 2023, as part of managing my diabetes. At first, it was reliably in stock but when it got more popular a few months later, stock got really low.

Around September, it got really hard to find Ozempic in stock and there was about a month and a half when my mom and I couldn’t find it at all. I mentioned it to a co-worker, who said she had a friend selling it. I was skeptical but did know her friend was connected to the medical industry and the price was only slightly higher than what I’d been paying. It didn’t sound outrageous, so I decided we’d try it. I got the product and gave her the money.

When we opened the box up, it didn’t look or feel right. The packaging felt flimsy and the pen looked quite different from the one we had been using. My mom inspected it and immediately noticed something was wrong. I took photos and videos and with my doctor’s help, we got in touch with a rep [from the legitimate pharma company], who confirmed it was fake. It wasn’t Ozempic, it was an insulin pen.

Realizing that I’d almost injected myself with the wrong substance, thinking it was Ozempic, was terrifying and could have been fatal. It’s really scary to think about what could have happened if we hadn’t done a careful double-check.”

This story frames exactly what’s at stake with Ozempic and weight loss scams. Unlike the bulk of online scams out there, these scams can lead to physical harm — which makes the need to avoid them that much more urgent.

How to avoid Ozempic and weight loss scams online

Remember, buying Ozempic or similar drugs without a prescription is illegal. That makes selling these drugs on sites like Facebook Marketplace, Craigslist, and other online marketplaces illegal as well. Further, watch out for foreign pharmacies and sites you're not familiar with. Per the FDA, they might sell drugs unapproved by the FDA. Likewise, they might be phony.

Only buy from reputable pharmacies. You can check a pharmacy’s license through your state board of pharmacy (this link from the FDA can help you track that down). If the pharmacy you’re considering isn’t listed, don’t use it. Also, make sure it has a phone number and physical address in the U.S.

Watch out for unreasonably low prices. Once again, if an offer is too good to be true, it probably is. In addition, never pay for your prescription with a digital wallet app, bitcoin, prepaid debit cards, or wire transfers. PayPal, Apple Pay, or a credit card payment are typical options for legitimate pharmacies.

Keep an eye out for website errors and missing product details. Scam websites typically lack verifiable product info. Pay attention to and read the fine print. Look for product batch numbers, expiration dates, or manufacturer details to confirm what you’re purchasing is legit. Other sites fail the eye test, as they look poorly designed and have grammar issues.

A poorly written scam on social media…

Look for misleading claims. If any drug offers rapid weight loss or miracle cures, be on guard. Purchasing counterfeit Ozempic poses significant health risks, including exposure to harmful substances, incorrect dosages, and lack of therapeutic effects. In addition to financial loss, you can experience adverse reactions or worsening of your condition by purchasing ineffective or counterfeit medications.

Consider AI-powered scam protection. McAfee Scam Protection uses AI to detect and block dangerous links that scammers drop into emails, text messages, and social media messages. Additionally, McAfee Web Protection detects and blocks links to scam sites that crop up in search and while browsing.

The cost of Ozempic and weight loss scams

Truly, these scams can cause great harm. They can take a toll on your finances and your health. The good news here is that you can avoid them entirely.

This stands as a good reminder: when something gets popular and scarce, it spawns scams. That's what we're seeing with these in-demand drugs. And it's just as we've seen before with popular toys around the holidays and even rental cars during peak periods of travel. Where there's a combination of urgency, need, and money, your chances of stumbling across a scam increase.

[i] https://www.jpmorgan.com/insights/global-research/current-events/obesity-drugs

The post How Ozempic Scams Put People’s Finances and Health at Risk appeared first on McAfee Blog.


Battered and bruised 23andMe faces probe after hack that stole seven million users’ data

Read Time:18 Second

23andMe, the California-based company which sells DNA testing kits to help people learn about their ancestry and potential health risks, is facing scrutiny from British and Canadian data protection authorities following a security breach that saw hackers compromise the personal data of nearly seven million users.

Read more in my article on the Hot for Security blog.


Using AI for Political Polling

Read Time:8 Minute, 57 Second

Public polling is a critical function of modern political campaigns and movements, but it isn’t what it once was. Recent US election cycles have produced copious postmortems explaining both the successes and the flaws of public polling. There are two main reasons polling fails.

First, nonresponse has skyrocketed. It’s radically harder to reach people than it used to be. Few people fill out surveys that come in the mail anymore. Few people answer their phone when a stranger calls. Pew Research reported that 36% of the people they called in 1997 would talk to them, but only 6% by 2018. Pollsters worldwide have faced similar challenges.

Second, people don’t always tell pollsters what they really think. Some hide their true thoughts because they are embarrassed about them. Others behave as a partisan, telling the pollster what they think their party wants them to say—or what they know the other party doesn’t want to hear.

Despite these frailties, obsessive interest in polling nonetheless consumes our politics. Headlines are more likely to tout the latest changes in polling numbers than the policy issues at stake in the campaign. This is a tragedy for a democracy. We should treat elections like choices that have consequences for our lives and well-being, not contests to decide who gets which cushy job.

Polling Machines?

AI could change polling. It offers the ability to instantaneously survey and summarize the expressed opinions of individuals and groups across the web, understand trends by demographic, and offer extrapolations to new circumstances and policy issues on par with human experts. The politicians of the (near) future won't anxiously pester their pollsters for information about the results of a survey fielded last week: they'll just ask a chatbot what people think. This will supercharge our access to real-time, granular information about public opinion, but at the same time it might also exacerbate concerns about the quality of this information.

We know it sounds impossible, but stick with us.

Large language models, the AI foundations behind tools like ChatGPT, are built on top of huge corpuses of data culled from the Internet. These are models trained to recapitulate what millions of real people have written in response to endless topics, contexts, and scenarios. For a decade or more, campaigns have trawled social media, looking for hints and glimmers of how people are reacting to the latest political news. This makes asking questions of an AI chatbot similar in spirit to doing analytics on social media, except that they are generative: you can ask them new questions that no one has ever posted about before, you can generate more data from populations too small to measure robustly, and you can immediately ask clarifying questions of your simulated constituents to better understand their reasoning.

Researchers and firms are already using LLMs to simulate polling results. Current techniques are based on the ideas of AI agents. An AI agent is an instance of an AI model that has been conditioned to behave in a certain way. For example, it may be primed to respond as if it is a person with certain demographic characteristics and can access news articles from certain outlets. Researchers have set up populations of thousands of AI agents that respond as if they are individual members of a survey population, like humans on a panel that get called periodically to answer questions.

The big difference between humans and AI agents is that the AI agents always pick up the phone, so to speak, no matter how many times you contact them. A political candidate or strategist can ask an AI agent whether voters will support them if they take position A versus B, or tweaks of those options, like policy A-1 versus A-2. They can ask that question of male voters versus female voters. They can further limit the query to married male voters of retirement age in rural districts of Illinois without college degrees who lost a job during the last recession; the AI will integrate as much context as you ask.
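To make the "conditioning" concrete, here is a minimal, hypothetical Python sketch of how a survey prompt might be composed for one such agent. The `build_agent_prompt` helper, the persona fields, and the prompt wording are all invented for illustration; a real system would pass the resulting prompt to an actual language model.

```python
def build_agent_prompt(persona: dict, question: str) -> str:
    """Compose a prompt that primes a language model to answer a
    survey question as if it were a specific demographic persona."""
    profile = ", ".join(f"{k}: {v}" for k, v in persona.items())
    return (
        f"You are a survey respondent with this profile: {profile}.\n"
        "Answer the following question in one short sentence, "
        "staying in character.\n"
        f"Question: {question}"
    )

# Illustrative persona, echoing the kind of narrow slice described above.
persona = {
    "gender": "male",
    "marital status": "married",
    "age": "67, retired",
    "location": "rural district of Illinois",
    "education": "no college degree",
    "history": "lost a job during the last recession",
}
print(build_agent_prompt(persona, "Would you support policy A-1 or A-2?"))
```

A simulated panel is then just a list of such personas, each queried as many times as you like, since these respondents always "pick up the phone."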

What’s so powerful about this system is that it can generalize to new scenarios and survey topics, and spit out a plausible answer, even if its accuracy is not guaranteed. In many cases, it will anticipate those responses at least as well as a human political expert. And if the results don’t make sense, the human can immediately prompt the AI with a dozen follow-up questions.

Making AI agents better polling subjects

When we ran our own experiments in this kind of AI use case with the earliest versions of the model behind ChatGPT (GPT-3.5), we found that it did a fairly good job at replicating human survey responses. The ChatGPT agents tended to match the responses of their human counterparts fairly well across a variety of survey questions, such as support for abortion and approval of the US Supreme Court. The AI polling results had average responses, and distributions across demographic properties such as age and gender, similar to real human survey panels.

Our major systemic failure happened on a question about US intervention in the Ukraine war.  In our experiments, the AI agents conditioned to be liberal were predominantly opposed to US intervention in Ukraine and likened it to the Iraq war. Conservative AI agents gave hawkish responses supportive of US intervention. This is pretty much what most political experts would have expected of the political equilibrium in US foreign policy at the start of the decade but was exactly wrong in the politics of today.

This mistake has everything to do with timing. The humans were asked the question after Russia’s full-scale invasion in 2022, whereas the AI model was trained using data that only covered events through September 2021. The AI got it wrong because it didn’t know how the politics had changed. The model lacked sufficient context on crucially relevant recent events.

We believe AI agents can overcome these shortcomings. While AI models are dependent on  the data they are trained with, and all the limitations inherent in that, what makes AI agents special is that they can automatically source and incorporate new data at the time they are asked a question. AI models can update the context in which they generate opinions by learning from the same sources that humans do. Each AI agent in a simulated panel can be exposed to the same social and media news sources as humans from that same demographic before they respond to a survey question. This works because AI agents can follow multi-step processes, such as reading a question, querying a defined database of information (such as Google, or the New York Times, or Fox News, or Reddit), and then answering a question.
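The multi-step process described above (read a question, query a defined source, then answer) can be sketched as follows. This is purely illustrative: `fetch_recent_context` and `echo_model` are hypothetical stand-ins for a real retrieval backend and a real language model.

```python
def fetch_recent_context(source: str, topic: str) -> str:
    # Stand-in for querying a defined database of information
    # (e.g., Google, the New York Times, Fox News, or Reddit).
    return f"[latest {source} coverage of {topic}]"

def agent_answer(question: str, topic: str, source: str, model) -> str:
    # Step 1: read the question (passed in as an argument).
    # Step 2: query the defined source for recent context.
    context = fetch_recent_context(source, topic)
    # Step 3: answer, with the fresh context prepended to the prompt.
    prompt = (
        f"Recent reading: {context}\n"
        f"Given that context, answer: {question}"
    )
    return model(prompt)

def echo_model(prompt: str) -> str:
    # Trivial stand-in "model" that just echoes its prompt back.
    return f"(simulated answer based on: {prompt})"

print(agent_answer("Should the US intervene?", "Ukraine",
                   "New York Times", echo_model))
```

The point of the sketch is the ordering: retrieval happens per question, at answer time, which is how an agent can reflect events that postdate its training data.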

In this way, AI polling tools can simulate exposing their synthetic survey panel to whatever news is most relevant to a topic and likely to emerge in each AI agent’s own echo chamber. And they can query for other relevant contextual information, such as demographic trends and historical data. Like human pollsters, they can try to refine their expectations on the basis of factors like how expensive homes are in a respondent’s neighborhood, or how many people in that district turned out to vote last cycle.

Likely use cases for AI polling

AI polling will be irresistible to campaigns, and to the media. But research is already revealing when and where this tool will fail. AI polling will always have limitations in accuracy, but that makes it similar to, not different from, traditional polling. Today's pollsters are challenged to reach sample sizes large enough to measure statistically significant differences between similar populations, and the issues of nonresponse and inauthentic response can make them systematically wrong. Yet for all those shortcomings, both traditional and AI-based polls will still be useful. For all the hand-wringing and consternation over the accuracy of US political polling, national issue surveys still tend to be accurate to within a few percentage points. If you're running for a town council seat or in a neck-and-neck national election, or just trying to make the right policy decision within a local government, you might care a lot about those small and localized differences. But if you're looking to track directional changes over time, or differences between demographic groups, or to uncover insights about who responds best to what message, then these imperfect signals are sufficient to help campaigns and policymakers.

Where AI will work best is as an augmentation of more traditional human polls. Over time, AI tools will get better at anticipating human responses, and also at knowing when they will be most wrong or uncertain. They will recognize which issues and human communities are in the most flux, where the model’s training data is liable to steer it in the wrong direction. In those cases, AI models can send up a white flag and indicate that they need to engage human respondents to calibrate to real people’s perspectives. The AI agents can even be programmed to automate this. They can use existing survey tools—with all their limitations and latency—to query for authentic human responses when they need them.

This kind of human-AI polling chimera lands us, funnily enough, not too distant from where survey research is today. Decades of social science research has led to substantial innovations in statistical methodologies for analyzing survey data. Current polling methods already do substantial modeling and projecting to predictively model properties of a general population based on sparse survey samples. Today, humans fill out the surveys and computers fill in the gaps. In the future, it will be the opposite: AI will fill out the survey and, when the AI isn't sure what box to check, humans will fill the gaps. So if you're not comfortable with the idea that political leaders will turn to a machine to get intelligence about which candidates and policies you want, then you should have about as many misgivings about the present as you will about the future.

And while the AI results could improve quickly, they probably won’t be seen as credible for some time. Directly asking people what they think feels more reliable than asking a computer what people think. We expect these AI-assisted polls will be initially used internally by campaigns, with news organizations relying on more traditional techniques. It will take a major election where AI is right and humans are wrong to change that.

This essay was written with Aaron Berger, Eric Gong, and Nathan Sanders, and previously appeared on the Harvard Kennedy School Ash Center’s website.


Social Media Cybersecurity: Don’t Let Employees Be Your Weakest Link

Read Time:3 Minute, 41 Second

The content of this post is solely the responsibility of the author.  LevelBlue does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

Maintaining an active social media presence can be a great way to improve brand visibility and generate leads, but it also opens the door to cybersecurity risks — from phishing scams and malware to identity theft and data breaches. If employees accidentally post confidential information or click dodgy links via corporate accounts, cybercriminals can launch malicious attacks that can cause lasting damage to your business (67% of data breaches result from human error). Despite that, as many as 45% of businesses don't have an official social media policy for employees to follow. Fortunately, by creating a comprehensive social media policy, you can raise social media cybersecurity awareness among your employees and keep sensitive company data safe.

Creating a social media policy

A formal social media policy should outline cybersecurity best practices for employees working with your business's social media accounts. At a minimum, the policy should prevent employees from posting things like private business plans, trade secrets, and personal details about other employees, customers, and clients. It's also important to include guidance that helps employees avoid common cybersecurity risks — for example, they should know not to click on suspicious messages or links, as these can deliver worms (self-replicating malware) or lead to phishing pages.

Quizzes should also be off-limits. Although they might seem like harmless fun, social media quizzes may be harvesting company and/or personal data to sell to third parties. Hackers can also guess passwords from the information provided in quizzes, so quizzes should be avoided altogether.

Corporate content should be posted with corporate devices, not personal ones

Your social media policy should also state that work devices (and only work devices) should be used to create and publish corporate content. When staff are free to use their personal devices, they may accidentally post personal content on the corporate account (or vice versa). To prevent any mix-ups, personal devices should never be used for business purposes. Personal devices also tend to be far less secure than corporate ones. Shockingly, 36% of remote workers don't even have standard password protection on all their personal devices, which leaves any corporate accounts accessed on them at greater risk of compromise.

That said, it's also important to regularly invest in new corporate devices rather than relying on old ones to save money. 60% of businesses hit by a data breach say unpatched vulnerabilities were to blame, and these weaknesses are often present on old devices. “Consider the fact that older devices run older software and are often prone to working slowly and freezing up,” Retriever warns. “They're also less likely to be able to stand cyber attacks. These factors put data at risk and it's why it's recommended that computer hardware is updated every three years.”

Only allow authorized employees to publish content

You can secure your social media accounts even further by making it a rule that only authorized employees can publish corporate content. However, never grant these employees full admin rights if you can help it. Doing so technically gives them the power to remove you as an admin, which would mean you're no longer in control of your corporate social accounts. It's also important to pay attention to which employees have page admin and editing roles. If and when these employees leave your company, remove them from those roles immediately to keep your accounts secure.

A good password policy for your social media accounts can also help prevent unwanted access. For instance, two-step verification reinforces security by making users show a second form of ID on top of their password (usually, in the form of a code sent to their phone that they have to then enter). Also, make use of available user access logging features that can provide you with greater account transparency. With these, you can record who accesses the account and who’s responsible for what activity (including unauthorized posts).

Social media cybersecurity is essential to keep your business accounts secure. By implementing a solid social media cybersecurity policy, you can successfully improve cybersecurity awareness among your employees and turn them from your organization’s biggest security weakness into your greatest strength.


Multiple Vulnerabilities in Google Chrome Could Allow for Arbitrary Code Execution

Read Time:28 Second

Multiple vulnerabilities have been discovered in Google Chrome, the most severe of which could allow for arbitrary code execution. Successful exploitation of the most severe of these vulnerabilities could allow for arbitrary code execution in the context of the logged-on user. Depending on the privileges associated with the user, an attacker could then install programs; view, change, or delete data; or create new accounts with full user rights. Users whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.
