On Alec Baldwin’s Shooting

We recently learned that Alec Baldwin is being charged with involuntary manslaughter for his accidental shooting on a movie set. I don’t know the details of the case, nor the intricacies of the law, but I have a question about movie props.

Why was an actual gun used on the set? And why were actual bullets used on the set? Why wasn’t it a fake gun: plastic, or metal without a working barrel? Why does it have to fire blanks? Why can’t everyone just pretend, and let someone add the bang and the muzzle flash in post-production?

Movies are filled with fakery. The light sabers in Star Wars weren’t real; the lighting effects and “wooj-wooj” noises were added afterwards. The phasers in Star Trek weren’t real either. Jar Jar Binks was 100% computer generated. So were a gazillion “props” from the Harry Potter movies. Even regular, non-SF, non-magical movies have special effects. They’re easy.

Why are guns different?

Predicting which hackers will become persistent threats

The content of this post is solely the responsibility of the author. AT&T does not adopt or endorse any of the views, positions, or information provided by the authors in this article. This blog was jointly written with David Maimon, Professor at Georgia State University.

Website defacement

Websites are central to business operations but are also the target of various cyber-attacks. Malicious hackers have found several ways to compromise websites, with the most common attack vector being SQL injection: the act of injecting malicious SQL code to gain unauthorized access to the server hosting the website. Once on the server, the hacker can compromise the target organization’s website, and vandalize it by replacing the original content with content of their own choosing. This criminal act is referred to as website defacement. See Figure 1 for examples of past website defacements.

Figure 1. Examples of past website defacements.
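
To make the attack vector concrete, here is a minimal sketch (using Python’s built-in sqlite3 module and a hypothetical users table) of how string-concatenated SQL lets a crafted input rewrite a query, and how a parameterized query neutralizes the same payload:

```python
import sqlite3

# Throwaway in-memory database with one user record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_vulnerable(name):
    # DANGEROUS: user input is concatenated straight into the SQL string,
    # so the input can alter the query's logic.
    query = "SELECT name FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # SAFE: a parameterized query treats the input as data, never as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"                  # classic injection payload
print(find_user_vulnerable(payload))      # matches every row: [('alice',)]
print(find_user_safe(payload))            # matches nothing: []
```

The vulnerable version returns every row because the payload turns the WHERE clause into a tautology; the parameterized version looks for a literal user named `' OR '1'='1` and finds none.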

While the act of vandalizing a website may seem trivial, it can be devastating for the victimized entities. If an e-commerce site is publicly compromised, for example, they suffer direct and indirect financial loss. The direct losses can be measured by the amount of revenue that would have been generated had the website not been compromised, and by the time and money spent to repair the damaged site. Indirect losses occur because of reputational damage. Potential customers may be deterred from providing their banking information to an organization portrayed and perceived as incapable of protecting their assets.

Threat actors

Unlike most forms of hacking, website defacement has a public-facing component. Assailants are eager to get credit for their success in compromising websites and are notorious for bragging about their exploits across various platforms, including general social media (e.g., Facebook, Twitter, YouTube) and hacking-specific sites. The most popular platform on which hackers report successful defacements is Zone-H. Users of the platform upload evidence of their attack, and once the attack is verified by the site’s administrators, it is permanently housed in the archive and viewable on Zone-H’s webpage. Zone-H is the largest hacking archive in the world: over 15 million attacks have been verified by Zone-H thus far, with over 160,000 unique active users. The archive, as depicted in Figure 2, includes the hackers’ moniker, the attacked website’s domain name, and an image of the defacement content (resembling the images depicted in Figure 1).

Figure 2. Zone-H: The largest hacking archive in the world.

Hackers tend to use the same moniker across platforms to bolster the reputation and status of their online identity, which allows for the gathering of digital artifacts and threat intelligence pertinent to the attack and attacker, respectively. Indeed, we have been systematically gathering data on active malicious hackers who report their successful defacements to Zone-H since 2017 and, in doing so, have uncovered several interesting findings that shed light on this underground community. For example, and in direct contrast to Hollywood’s stereotype of the lone actor, we observed an interconnected community of hackers who form teams and develop their skills through collaboration and camaraderie. We also found variation in hackers’ attack frequency: some hackers are extremely prolific and can be classified as persistent threats, while others only engage in a few attacks before disappearing. These findings served as motivation for this study.

Criminal trajectories

Recently, we built an analytic model capable of predicting which new hackers will become persistent threats at the onset of their criminal career. The study began by identifying 241 new hackers on the Zone-H archive. We then tracked each of these hackers for one year (52 weeks) following their first disclosed website defacement. We recorded their total number of attacks, extracted and analyzed content from their defacements, and gathered open-source intelligence from a litany of social media and hacking sites. In total, the 241 hackers in our study defaced 39,428 websites within the first year of their hacking career. We identified 73% of our sample on a social media site and found that 50% also report their defacements to other hacking archives. Finally, we extracted and analyzed the content of each new hacker’s first defacement and found that 39% of hackers indicated involvement with a hacking team, 12% posted political content, and 34% left their contact information directly on the compromised site. 

To plot trajectories, we had to first disaggregate the dataset to determine whether each of the hackers in our sample defaced at least one website each week for the 52 weeks following their first defacement. Upon completion, we employed latent group-based trajectory modeling to determine if, and how many, unique criminal trajectories exist. Results are presented in Figure 3. We found that new hackers follow one of four patterns: low threat (28.8%), naturally desisting (23.9%), increasingly prolific (25.8%), and persistent threat (21.5%). Hackers classified as low threat (blue line) engage in very few defacements and do not increase their attack frequency within one year of their first attack. Those labeled as naturally desisting (red line) begin their careers with velocity, but this is short-lived. Conversely, those classified as increasingly prolific (green line) engage in more attacks as they advance in their criminal careers. Finally, those deemed as persistent threats (yellow line) begin their careers with velocity and remain prolific. To our knowledge, we are the first to plot the trajectories of new malicious hackers.

Figure 3. The one-year trajectory of new malicious hackers.
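
The disaggregation step described above can be sketched in miniature. The snippet below uses hypothetical data and a deliberately crude labeling heuristic, not the study’s latent group-based trajectory model; it only shows how raw attack records become per-hacker 52-week activity vectors:

```python
from collections import defaultdict

# Hypothetical input: (hacker_moniker, week_of_attack) pairs, with weeks
# numbered 1-52 from each hacker's first observed defacement.
attacks = [
    ("h1", 1), ("h1", 2), ("h1", 3),                 # active early only
    ("h2", 1), ("h2", 20), ("h2", 45), ("h2", 52),   # active throughout
]

def weekly_activity(attacks, n_weeks=52):
    """Disaggregate raw attack records into per-hacker binary week vectors:
    vector[w] is 1 if the hacker defaced at least one site in week w+1."""
    vectors = defaultdict(lambda: [0] * n_weeks)
    for hacker, week in attacks:
        vectors[hacker][week - 1] = 1
    return dict(vectors)

def crude_label(vector):
    """Toy stand-in for trajectory modeling: compare activity in the first
    and second halves of the year. (The study fits latent group-based
    trajectory models, which this heuristic does not replicate.)"""
    early, late = sum(vector[:26]), sum(vector[26:])
    if early and not late:
        return "naturally desisting"
    if early and late:
        return "persistent threat"
    if late:
        return "increasingly prolific"
    return "low threat"

vectors = weekly_activity(attacks)
print({h: crude_label(v) for h, v in vectors.items()})
# {'h1': 'naturally desisting', 'h2': 'persistent threat'}
```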

After plotting the trajectories, we employed a series of regression models to determine if open-source intelligence and digital artifacts can be used to predict the evolution of a new hacker’s criminal career. Contrary to our expectation, we found politically driven hackers are at increased odds of naturally desisting. While these hackers may engage in a high number of attacks at the onset of their careers, this is short-lived. We suspect eager new hacktivists simply lose sight of, or get bored with, their cause. Conversely, new hackers who post their contact information directly to the compromised site are at decreased odds of naturally desisting. Tagging a virtual crime scene with contact information is a bold move. We suspect these hackers are rewarded for their boldness and initiated into the hacking community, where they continue defacing websites alongside their peers.
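
The “increased odds” language comes from the regression models’ odds ratios. As a toy illustration with invented counts (not the study’s data), an odds ratio from a 2×2 table can be computed as:

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """Odds ratio from a 2x2 table: (a/b) / (c/d).
    A value above 1 means the exposure (e.g., political content in the
    first defacement) is associated with higher odds of the outcome
    (e.g., naturally desisting)."""
    return (exposed_yes / exposed_no) / (unexposed_yes / unexposed_no)

# Hypothetical illustrative counts, NOT figures from the study:
# of 30 politically motivated hackers, 15 desisted; of 211 others, 43 did.
print(round(odds_ratio(15, 15, 43, 168), 2))  # 3.91
```

Here the invented politically motivated group has almost four times the odds of desisting, which is the kind of relationship the models above quantify.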

Different patterns emerged when predicting who will become a persistent threat. We found that social media engagement and reporting defacement activity to other platforms increase the odds of being a persistent threat. This may boil down to commitment: hackers committed to building their brand by posting on multiple platforms are also committed to building their brand through continual and frequent defacement activity. The most interesting, yet also intuitive, patterns emerge when predicting who will become increasingly prolific. We found that hackers who report to other platforms and indicate team involvement engage in more attacks as they progress in their career. Joining a hacking team is a valuable educational experience for a new hacker. As a novice hacker learns new skills, it is no surprise they demonstrate their capabilities by defacing more websites.

Taken together, these findings offer insight into the development of proactive cybersecurity solutions. We demonstrate that open-source intelligence can be used to predict which hackers will become persistent threats. Upon identifying high-risk hackers, we believe the next logical step is to launch early intervention programs aimed at redirecting their talent toward something more constructive. Recruiting young hackers for cybersecurity positions could create a safer cyberspace by filling the nation’s skills shortage while simultaneously removing persistent threat actors from the equation.

Acknowledgements

This work was conducted alongside several members of the Evidence-Based Cybersecurity Research Laboratory. We thank Cameron Hoffman and Robert Perkins for their continual involvement on the hacking project. For more information about our team of researchers and this project, visit https://ebcs.gsu.edu/. Follow @Dr_Cybercrime on Twitter for more cutting-edge cybersecurity research.

Recent legal developments bode well for security researchers, but challenges remain

Despite the hoodie-wearing bad guy image, most hackers are bona fide security researchers protecting users by probing and testing the security configurations of digital networks and assets. Yet the law has often failed to distinguish between malicious hackers and good-faith security researchers.

This failure to distinguish between the two hacker camps has, however, improved over the past two years, according to Harley Geiger, an attorney with Venable LLP, who serves as counsel in the Privacy and Data Security group. Speaking at Shmoocon 2023, Geiger pointed to three changes in hacker law in 2021 and 2022 that minimize security researchers’ risks.

9 API security tools on the frontlines of cybersecurity

Application programming interfaces (APIs) have become a critical part of networking, programs, applications, devices, and nearly everything else in the computing landscape. This is especially true for cloud and mobile computing, neither of which could exist in its current form without APIs holding everything together and managing much of the backend functionality.

Because of their reliability and simplicity, APIs have become ubiquitous across the computing landscape. Most organizations probably don’t even know how many APIs are operating within their networks, especially within their clouds. There are likely thousands of APIs working within larger companies, and even smaller organizations probably rely on more APIs than they realize.

ChatGPT: A Scammer’s Newest Tool

ChatGPT: Everyone’s favorite chatbot/writer’s-block buster/ridiculous short story creator is skyrocketing in fame.1 In fact, the AI-generated content “masterpieces” (by AI standards) are impressing technologists the world over. While the tech still has a few kinks that need ironing, ChatGPT is almost capable of rivaling human, professional writers.

However, as with most good things, bad actors are using technology for their own gains. Cybercriminals are exploring the various uses of the AI chatbot to trick people into giving up their privacy and money. Here are a few of the latest unsavory uses of AI text generators and how you can protect yourself—and your devices—from harm. 

Malicious Applications of ChatGPT 

Besides students and time-strapped employees using ChatGPT to finish writing assignments for them, scammers and cybercriminals are using the program for their own dishonest assignments. Here are a few of the nefarious AI text generator uses: 

Malware. Malware often has a very short lifecycle: a cybercriminal will create it, infect a few devices, and then operating systems will push an update that protects devices from that particular malware. Additionally, tech sites alert their readers to emerging malware threats. Once the general public and cybersecurity experts are made aware of a threat, the threat’s potency is quickly nullified. ChatGPT, however, is proficient in writing malicious code. Specifically, the AI could be used to write polymorphic malware, which is a type of program that constantly evolves, making it difficult to detect and defend against.2 Plus, criminals can use ChatGPT to write mountains of malicious code. While a human would have to take a break to eat, sleep, and walk around the block, AI doesn’t require breaks. Someone could turn their malware operation into a 24-hour digital crime machine.
Fake dating profiles. Catfish, or people who create fake online personas to lure others into relationships, are beginning to use AI to supplement their romance scams. Like malware creators who are using AI to scale up their production, romance scammers can now use AI to lighten their workload and attempt to keep up many dating profiles at once. For scammers who need inspiration, ChatGPT is capable of altering the tone of its messages. For example, a scammer can tell ChatGPT to write a love letter or to dial up the charm. This could result in earnest-sounding professions of love that could convince someone to relinquish their personally identifiable information (PII) or send money. 
Phishing. Phishers are using AI to up their phishing game. Phishers, who are often known for their poor grammar and spelling, are improving the quality of their messages with AI, which rarely makes editorial mistakes. ChatGPT also understands tone commands, so phishers can up the urgency of their messages that demand immediate payment or responses with passwords or PII. 

How to Avoid AI Text Generator Scams 

The best way to avoid being fooled by AI-generated text is to stay on high alert and scrutinize any texts, emails, or direct messages you receive from strangers. There are a few tell-tale signs of an AI-written message. For example, AI often uses short sentences and reuses the same words. Additionally, AI may create content that says a lot without saying much at all. Because AI can’t form opinions, its messages may sound substance-less. In the case of romance scams, if the person you’re communicating with refuses to meet in person or chat over video, consider cutting ties.
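
One of the tells above, word reuse, can be turned into a toy heuristic. The sketch below is only an illustration of the idea, not a dependable AI-text detector:

```python
import re

def repetition_score(text):
    """Share of words that are repeats of words already used in the text.
    Higher scores suggest the repetitive, low-variety phrasing the article
    associates with generated text. Crude by design: real detection is a
    much harder problem."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return 1 - len(set(words)) / len(words)

varied = "Meet me at the old bridge on Sunday, and bring Grandma's recipe."
repetitive = "Our service is the best service. The best service is our service."
print(round(repetition_score(varied), 2))      # 0.0
print(round(repetition_score(repetitive), 2))  # 0.58
```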

To improve your peace of mind, McAfee+ Ultimate allows you to live your best and most confident life online. In case you ever do fall victim to an identity theft scam or your device downloads malware, McAfee will help you resolve and recover from the incident. In addition, McAfee’s proactive protection services – such as three-bureau credit monitoring, unlimited antivirus, and web protection – can help you avoid the headache altogether!  

1Poc Network, “I asked AI (ChatGPT) to write me a rather off short story and the result was amazing”

2CyberArk, “Chatting Our Way Into Creating a Polymorphic Malware”

The post ChatGPT: A Scammer’s Newest Tool appeared first on McAfee Blog.

The Rise and Risks of AI Art Apps

The popularity of AI-based mobile applications that create artistic images, whether from pictures, such as Lensa’s “Magic Avatars”, or from text, such as the OpenAI service DALL-E 2, has increased mainstream interest in these tools. Users should be aware of those seeking to take advantage of that interest by distributing potentially unwanted programs (PUPs) or malware: deceptive applications that promise the same or similar advanced features but are just basic image editors, repackaged apps that can drain your data plan and battery life with Clicker and HiddenAds behaviors, apps that subscribe you to expensive services providing little or no value over alternatives (Fleeceware), or apps that steal your social media account credentials (FaceStealer).

Dozens of apps surface daily claiming to offer AI image creation. Some of them might be legitimate or based on open-source projects such as Stable Diffusion, but in the search for a free application that produces quality results, users might try new apps that could compromise their privacy, user experience, wallet and/or security.

The McAfee Mobile Research Team recently discovered a series of repackaged image editors on the Google Play app store that presented concerning behaviors. McAfee Mobile Security products help protect against such apps, including those classified as Android/FakeApp, Android/FaceStealer, Android/PUP.Repacked and Android/PUP.GenericAdware.

McAfee, a member of the App Defense Alliance focused on protecting users by preventing threats from reaching their devices and improving app quality across the ecosystem, reported the discovered apps to Google, which took prompt action; the apps are no longer available on Google Play.

We now discuss the various privacy and security risks associated with the types of apps recently removed from the app store.

FaceStealer

“Pista Cartoon Photo Effect” and “NewProfilePicture” are examples of apps that offered compelling visual results; however, each was based on the same image editor with basic filters and was trojanized with Android/FaceStealer, a well-known malware family capable of compromising a victim’s Facebook or Instagram account. The apps could capture user credentials during a Facebook login by embedding a JavaScript function, loaded from a remote server to evade detection, into a Flutter WebView activity that displays the Facebook login screen.

“NewProfilePicture” and “Pista – Cartoon Photo Effect” are examples of FaceStealer malware that posed as a cartoon avatar creator.

The same image editor that was repackaged into the above two apps has also been repackaged with adware modules and distributed by other developers under other app names, such as “Cartoon Effect | Cartoon Photo”.

Fleeceware

Fleeceware refers to mobile apps that use various tactics to enroll users into subscriptions with high fees, typically after a free trial period, and often with little or no value to the subscriber beyond cheaper or free alternatives. If the user does not take care to cancel their subscription, they continue to be charged even after deleting the app.

“Toonify Me”, which is no longer available on the Play Store, cost $49.99 per week after a 3-day trial – almost $2,600 per year. Its app description featured AI-generated illustrations, but it was another repackaged version of the same image editor functionality found within “NewProfilePicture” and “Pista – Cartoon Photo Effect”.
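
The annualized figure checks out; a quick back-of-the-envelope calculation:

```python
# Annualizing the weekly price quoted for "Toonify Me" to check the
# "almost $2,600 per year" figure above.
weekly_fee = 49.99
annual_cost = weekly_fee * 52
print(f"${annual_cost:,.2f} per year")  # $2,599.48 per year
```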

In this case, the “Toonify Me” app did not allow feature access without enrolling in the subscription, and the “CONTINUE” button that initiated the subscription was the only tappable option once the app was launched.

Adware

Promoted by ads that described it as capable of transforming pictures into artistic drawings, the “Fun Coloring – Paint by Number” app is an example of a repackaged version of a different, legitimate pixel-painting app. It lacked the advertised AI effects and was plagued with adware-like behavior.

Consistent with many reviews complaining about unexpected, out-of-context ads, once installed, the app started a background service that communicated with the Facebook Graph API every 5 seconds and could pull ads based on received commands after some time of execution. The app contained multiple injected SDK modules from AppsFlyer, Fyber, InMobi, IAB, Mintegral, PubNative, and Smaato (none of which are in the original app, which was repackaged to include them), which would help monetize installations without regard for user experience.

Conclusion

When a new type of app becomes popular and others appear on the market offering similar features, users should act with caution to avoid falling victim to those seeking to exploit public interest.

When installing an app that gives you any doubt, make sure you:

Read the pricing and other terms carefully,
Check that permissions requested are reasonable with the purpose of the app,
Look for consistently bad reviews describing unexpected or unwanted app behavior,
Verify if the developer has other apps available and their reviews, and
Consider skipping the app download if you aren’t convinced of its safety.

Even if an app is legitimate, we also encourage users to look closely, before installation, at any available privacy policy to understand how personal data will be treated. Your face is a biometric identifier that’s not easy to change, and multiple pictures might be needed (and stored) to create your model.

Artificial intelligence tools will continue to amaze us with their capabilities, and they will probably become more accessible and safer to use over time. For now, keep in mind that AI technology is still limited and experimental and can be expensive to use – always consider any hidden costs. AI will also bring more challenges, as we discussed in the 2023 McAfee Threat Prediction blog.

IDENTIFIED APPS

The following table lists each application’s package name, SHA256 hash, minimum number of installations on Google Play, and the type of detected threat. These apps were removed from Google Play, but some may remain available elsewhere.

Package Name | SHA256 | Installs | Type
com.ayogamez.sketchcartoon | 9cb1d996643fbec26bb9878939735221dfbf639075ceea3abdb94e0982c494c1 | 5M | Adware
com.rocketboosterapps.toonifyme | 3f45a38b103e1812146df8ce179182f54c4a0191e19560fcbd77240cbc39886b | 10K | Fleeceware
com.nhatanhstudio.cartoon.photoeffect | 2c7f4fc403d1449b70218624d8a409497bf4694493c7f4c06cd8ccecff21799a | 5K | Repackaged Adware
com.cambe.PhotoCartoon | 5327f415d0e9b21523f64403ec231e1fd0279c48b41f023160cd1d70dd733dbf | 10K | Repackaged Adware
com.chiroh.cartoon.prismaeffect | 18fef9f92639e31dd6566854feb30e1e4333b971b05ae9aba93ac0aa395c955b | 1K | Repackaged Adware
cartoon.photo.effect.editor.cartoon.maker.online.caricature.appanime.convert.photo.intocartoon | 3b941b7005572760b95239d73b8a8bbfdb81d26d405941171328daa8f3c01183 | 50 | Repackaged Adware
com.waxwell.saunders.pistaphotoeditor | 489d4aaec3bc694ddd124ab8b4f0b7621a51aad13598fd39cd5c3d2067b950e5 | 50 | FaceStealer
com.ashtoon.tooncool.skordoi | 980c090c01bef890ef74bd93e181d67a5c6cd1b091573eaaf2e1988756aacd50 | 100K | FaceStealer
com.faceart.savetoon.cartoonedit | 55ffc2e392280e8967de0857b02946094268588209963c6146dad01ae537daca | 100 | FaceStealer
com.okenyo.creatkartoon.studio | e696d7304e5f56d7125dd54c853ff35a394a4175fcaf7785d332404e161d6deb | 500K | FaceStealer
com.onlansuyanto.editor.bading | 59f9630c2ebe4896f585ec7722c43bb54c926e3e915dcfa4ff807bea444dc07b | 10K | FaceStealer
com.madtoon.aicartoon.kiroah | c29adfade300dde5e9c31b23d35a6792ed4a7ad8394d37b69b5cecc931a7ad9f | 100K | FaceStealer
com.acetoon.studio.facephoto | 24cf7fcaefe98bc9db34f551d11906d3f1349a5b60adf5fa37f15a872b61ee95 | 100K | FaceStealer
com.funcolornext.beautyfungoodcolor | b2cfa8b2eccecdcb06293512df0db463850704383f920e5782ee6c5347edc6f5 | 100K | Repackaged Adware

The post The Rise and Risks of AI Art Apps appeared first on McAfee Blog.
