Ransomware-hit vodka maker Stoli files for bankruptcy in the United States


Stoli Group USA, the US subsidiary of vodka maker Stoli, has filed for bankruptcy – and a ransomware attack is at least partly to blame.

The American branch of Stoli, which imports and distributes Stoli brands in the United States, as well as the Kentucky Owl bourbon brand it purchased in 2017, was hit by a ransomware attack in August 2024.

Learn more in my article on the Exponential-e blog.


U.S. Offered $10M for Hacker Just Arrested by Russia


In January 2022, KrebsOnSecurity identified a Russian man named Mikhail Matveev as “Wazawaka,” a cybercriminal who was deeply involved in the formation and operation of multiple ransomware groups. The U.S. government indicted Matveev as a top ransomware purveyor a year later, offering $10 million for information leading to his arrest. Last week, the Russian government reportedly arrested Matveev and charged him with creating malware used to extort companies.

An FBI wanted poster for Matveev.

Matveev, a.k.a. “Wazawaka” and “Boriselcin,” worked with at least three different ransomware gangs that extorted hundreds of millions of dollars from companies, schools, hospitals and government agencies, U.S. prosecutors allege.

Russia’s interior ministry last week issued a statement saying a 32-year-old hacker had been charged with violating domestic laws against the creation and use of malicious software. The announcement didn’t name the accused, but the Russian state news agency RIA Novosti cited anonymous sources saying the man detained is Matveev.

Matveev did not respond to requests for comment. Daryna Antoniuk at The Record reports that a security researcher said on Sunday they had contacted Wazawaka, who confirmed being charged and said he’d paid two fines, had his cryptocurrency confiscated, and is currently out on bail pending trial.

Matveev’s hacker identities were remarkably open and talkative on numerous cybercrime forums. Shortly after being identified as Wazawaka by KrebsOnSecurity in 2022, Matveev published multiple selfie videos on Twitter/X where he acknowledged using the Wazawaka moniker and mentioned several security researchers by name (including this author). More recently, Matveev’s X profile (@ransomboris) posted a picture of a t-shirt that features the U.S. government’s “Wanted” poster for him.

An image tweeted by Matveev showing the Justice Department’s wanted poster for him on a t-shirt. image: x.com/vxunderground

The golden rule of cybercrime in Russia has always been that as long as you never hack, extort or steal from Russian citizens or companies, you have little to fear from arrest. Wazawaka claimed he zealously adhered to this rule as a personal and professional mantra.

“Don’t shit where you live, travel local, and don’t go abroad,” Wazawaka wrote in January 2021 on the Russian-language cybercrime forum Exploit. “Mother Russia will help you. Love your country, and you will always get away with everything.”

Still, Wazawaka may not have always stuck to that rule. At several points throughout his career, Wazawaka claimed he made good money stealing accounts from drug dealers on darknet narcotics bazaars.

Cyber intelligence firm Intel 471 said Matveev’s arrest raises more questions than answers, and that Russia’s motivation here likely goes beyond what’s happening on the surface.

“It’s possible this is a shakedown by Kaliningrad authorities of a local internet thug who has tens of millions of dollars in cryptocurrency,” Intel 471 wrote in an analysis published Dec. 2. “The country’s ingrained, institutional corruption dictates that if dues aren’t paid, trouble will come knocking. But it’s usually a problem money can fix.”

Intel 471 says while Russia’s court system is opaque, Matveev will likely be open about the proceedings, particularly if he pays a toll and is granted passage to continue his destructive actions.

“Unfortunately, none of this would mark meaningful progress against ransomware,” they concluded.

Although Russia traditionally hasn’t put a lot of effort into going after cybercriminals within its borders, it has brought a series of charges against alleged ransomware actors this year. In January, four men tied to the REvil ransomware group were sentenced to lengthy prison terms. The men were among 14 suspected REvil members rounded up by Russia in the weeks before Russia invaded Ukraine in 2022.

Earlier this year, Russian authorities arrested at least two men for allegedly operating the short-lived Sugarlocker ransomware program in 2021. Aleksandr Ermakov and Mikhail Shefel (now legally Mikhail Lenin) ran a security consulting business called Shtazi-IT. Shortly before his arrest, Ermakov became the first ever cybercriminal sanctioned by Australia, which alleged he stole and leaked data on nearly 10 million customers of the Australian health giant Medibank.

In December 2023, KrebsOnSecurity identified Lenin as “Rescator,” the nickname used by the cybercriminal responsible for selling more than 100 million payment cards stolen from customers of Target and Home Depot in 2013 and 2014. Last month, Shefel admitted in an interview with KrebsOnSecurity that he was Rescator, and claimed his arrest in the Sugarlocker case was payback for reporting the son of his former boss to the police.

Ermakov was sentenced to two years’ probation. But on the same day my interview with Lenin was published here, a Moscow court declared him insane and ordered him to undergo compulsory medical treatment, The Record’s Antoniuk notes.


AI and the 2024 Elections


It’s been the biggest year for elections in human history: 2024 was a “super-cycle” year in which 3.7 billion eligible voters in 72 countries had the chance to go to the polls. These were also the first AI elections, in which many feared that deepfakes and AI-generated misinformation would overwhelm democratic processes. As 2024 draws to a close, it’s instructive to take stock of how democracy did.

In a Pew survey of Americans from earlier this fall, nearly eight times as many respondents expected AI to be used for mostly bad purposes in the 2024 election as those who thought it would be used mostly for good. There are real concerns and risks in using AI in electoral politics, but it definitely has not been all bad.

The dreaded “death of truth” has not materialized—at least, not due to AI. And candidates are eagerly adopting AI in many places where it can be constructive, if used responsibly. But because this all happens inside a campaign, and largely in secret, the public often doesn’t see all the details.

Connecting with voters

One of the most impressive and beneficial uses of AI is language translation, and campaigns have started using it widely. Local governments in Japan and California and prominent politicians, including Indian Prime Minister Narendra Modi and New York City Mayor Eric Adams, used AI to translate meetings and speeches for their diverse constituents.

Even when politicians themselves aren’t speaking through AI, their constituents might be using it to listen to them. Google rolled out free translation services for an additional 110 languages this summer, available to billions of people in real time through their smartphones.

Other candidates used AI’s conversational capabilities to connect with voters. U.S. politicians Asa Hutchinson, Dean Phillips and Francis Suarez deployed chatbots of themselves in their presidential primary campaigns. The fringe candidate Jason Palmer beat Joe Biden in the American Samoan primary, at least partly thanks to using AI-generated emails, texts, audio and video. Pakistan’s former prime minister, Imran Khan, used an AI clone of his voice to deliver speeches from prison.

Perhaps the most effective use of this technology was in Japan, where an obscure and independent Tokyo gubernatorial candidate, Takahiro Anno, used an AI avatar to respond to 8,600 questions from voters and managed to come in fifth among a highly competitive field of 56 candidates.

Nuts and bolts

AIs have been used in political fundraising as well. Companies like Quiller and Tech for Campaigns market AIs to help draft fundraising emails. Other AI systems help candidates target particular donors with personalized messages. It’s notoriously difficult to measure the impact of these kinds of tools, and political consultants are cagey about what really works, but there’s clearly interest in continuing to use these technologies in campaign fundraising.

Polling has been highly mathematical for decades, and pollsters are constantly incorporating new technologies into their processes. Techniques range from using AI to distill voter sentiment from social networking platforms—something known as “social listening”—to creating synthetic voters that can answer tens of thousands of questions. Whether these AI applications will result in more accurate polls and strategic insights for campaigns remains to be seen, but there is promising research motivated by the ever-increasing challenge of reaching real humans with surveys.

On the political organizing side, AI assistants are being used for such diverse purposes as helping craft political messages and strategy, generating ads, drafting speeches and helping coordinate canvassing and get-out-the-vote efforts. In Argentina in 2023, both major presidential candidates used AI to develop campaign posters, videos and other materials.

In 2024, similar capabilities were almost certainly used in a variety of elections around the world. In the U.S., for example, a Georgia politician used AI to produce blog posts, campaign images and podcasts. Even standard productivity software suites like those from Adobe, Microsoft and Google now integrate AI features that are unavoidable—and perhaps very useful to campaigns. Other AI systems help advise candidates looking to run for higher office.

Fakes and counterfakes

And there was AI-created misinformation and propaganda, even though it was not as catastrophic as feared. Days before a Slovakian election in 2023, fake audio discussing election manipulation went viral. This kind of thing happened many times in 2024, but it’s unclear if any of it had any real effect.

In the U.S. presidential election, there was a lot of press after a robocall of a fake Joe Biden voice told New Hampshire voters not to vote in the Democratic primary, but that didn’t appear to make much of a difference in that vote. Similarly, AI-generated images from hurricane disaster areas didn’t seem to have much effect, and neither did a stream of AI-faked celebrity endorsements or viral deepfake images and videos misrepresenting candidates’ actions and seemingly designed to prey on their political weaknesses.

AI also played a role in protecting the information ecosystem. OpenAI used its own AI models to disrupt an Iranian foreign influence operation aimed at sowing division before the U.S. presidential election. While anyone can use AI tools today to generate convincing fake audio, images and text, and that capability is here to stay, tech platforms also use AI to automatically moderate content like hate speech and extremism. This is a positive use case, making content moderation more efficient and sparing humans from having to review the worst offenses, but there’s room for it to become more effective, more transparent and more equitable.

There is potential for AI models to be much more scalable and adaptable to more languages and countries than organizations of human moderators. But the implementations to date on platforms like Meta demonstrate that a lot more work needs to be done to make these systems fair and effective.

One thing that didn’t matter much in 2024 was corporate AI developers’ prohibitions on using their tools for politics. Despite market leader OpenAI’s emphasis on banning political uses and its use of AI to automatically reject a quarter-million requests to generate images of political candidates, the company’s enforcement has been ineffective and actual use is widespread.

The genie is loose

All of these trends—both good and bad—are likely to continue. As AI gets more powerful and capable, it is likely to infiltrate every aspect of politics. This will happen whether the AI’s performance is superhuman or suboptimal, whether it makes mistakes or not, and whether the balance of its use is positive or negative. All it takes is for one party, one campaign, one outside group, or even an individual to see an advantage in automation.

This essay was written with Nathan Sanders, and originally appeared in The Conversation.
