US DoJ Announces Plan to Shake Up Cybercrime Investigations

Read Time:6 Second

In a speech, the DoJ’s Nicole M. Argentieri announced the merger of the National Cryptocurrency Enforcement Team (NCET) into the Computer Crime and Intellectual Property Section (CCIPS).

Read More

[SYSS-2023-006]: Omnis Studio – Expected Behavior Violation (CWE-440) (CVE-2023-38334)

Read Time:18 Second

Posted by Matthias Deeg via Fulldisclosure on Jul 21

Advisory ID: SYSS-2023-006
Product: Omnis Studio
Manufacturer: Omnis Software Ltd.
Affected Version(s): 10.22.00
Tested Version(s): 10.22.00
Vulnerability Type: Expected Behavior Violation (CWE-440)
Risk Level: Low
Solution Status: Open
Manufacturer Notification: 2023-03-30
Solution Date: –
Public Disclosure: 2023-07-20
CVE Reference:…

Read More

[SYSS-2023-005]: Omnis Studio – Expected Behavior Violation (CWE-440) (CVE-2023-38335)

Read Time:18 Second

Posted by Matthias Deeg via Fulldisclosure on Jul 21

Advisory ID: SYSS-2023-005
Product: Omnis Studio
Manufacturer: Omnis Software Ltd.
Affected Version(s): 10.22.00
Tested Version(s): 10.22.00
Vulnerability Type: Expected Behavior Violation (CWE-440)
Risk Level: Low
Solution Status: Open
Manufacturer Notification: 2023-03-30
Solution Date: –
Public Disclosure: 2023-07-20
CVE Reference:…

Read More

AI and Microdirectives

Read Time:5 Minute, 49 Second

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.

This future may not be far off—automatic detection of lawbreaking is nothing new. Speed cameras and traffic-light cameras have been around for years. These systems automatically issue citations to the car’s owner based on the license plate. In such cases, the defendant is presumed guilty unless they prove otherwise, by naming and notifying the driver.

In New York, AI systems equipped with facial recognition technology are being used by businesses to identify shoplifters. Similar AI-powered systems are being used by retailers in Australia and the United Kingdom to identify shoplifters and provide real-time tailored alerts to employees or security personnel. China is experimenting with even more powerful forms of automated legal enforcement and targeted surveillance.

Breathalyzers are another example of automatic detection. They estimate blood alcohol content by calculating the number of alcohol molecules in the breath via an electrochemical reaction or infrared analysis (they’re basically computers with fuel cells or spectrometers attached). And they’re not without controversy: Courts across the country have found serious flaws and technical deficiencies with Breathalyzer devices and the software that powers them. Despite this, criminal defendants struggle to obtain access to devices or their software source code, with Breathalyzer companies and courts often refusing to grant such access. In the few cases where courts have actually ordered such disclosures, that has usually followed costly legal battles spanning many years.

AI is about to make this issue much more complicated, and could drastically expand the types of laws that can be enforced in this manner. Some legal scholars predict that computationally personalized law and its automated enforcement are the future of law. These would be administered by what Anthony Casey and Anthony Niblett call “microdirectives,” which provide individualized instructions for legal compliance in a particular scenario.

Made possible by advances in surveillance, communications technologies, and big-data analytics, microdirectives will be a new and predominant form of law shaped largely by machines. They are “micro” because they are not impersonal general rules or standards, but tailored to one specific circumstance. And they are “directives” because they prescribe action or inaction required by law.

A Digital Millennium Copyright Act takedown notice is a present-day example of a microdirective. The DMCA’s enforcement is almost fully automated, with copyright “bots” constantly scanning the internet for copyright-infringing material, and automatically sending literally hundreds of millions of DMCA takedown notices daily to platforms and users. A DMCA takedown notice is tailored to the recipient’s specific legal circumstances. It also directs action—remove the targeted content or prove that it’s not infringing—based on the law.

It’s easy to see how the AI systems being deployed by retailers to identify shoplifters could be redesigned to employ microdirectives. In addition to alerting business owners, the systems could also send alerts to the identified persons themselves, with tailored legal directions or notices.

A future where AIs interpret, apply, and enforce most laws at societal scale like this will exponentially magnify problems around fairness, transparency, and freedom. Forget about software transparency—well-resourced AI firms, like Breathalyzer companies today, would no doubt ferociously guard their systems for competitive reasons. These systems would likely be so complex that even their designers would not be able to explain how the AIs interpret and apply the law—something we’re already seeing with today’s deep learning neural network systems, which are unable to explain their reasoning.

Even the law itself could become hopelessly vast and opaque. Legal microdirectives sent en masse for countless scenarios, each representing authoritative legal findings formulated by opaque computational processes, could create an expansive and increasingly complex body of law that would grow ad infinitum.

And this brings us to the heart of the issue: If you’re accused by a computer, are you entitled to review that computer’s inner workings and potentially challenge its accuracy in court? What does cross-examination look like when the prosecutor’s witness is a computer? How could you possibly access, analyze, and understand all microdirectives relevant to your case in order to challenge the AI’s legal interpretation? How could courts hope to ensure equal application of the law? Like the man from the country in Franz Kafka’s parable in The Trial, you’d die waiting for access to the law, because the law is limitless and incomprehensible.

This system would present an unprecedented threat to freedom. Ubiquitous AI-powered surveillance in society will be necessary to enable such automated enforcement. On top of that, research—including empirical studies conducted by one of us (Penney)—has shown that personalized legal threats or commands that originate from sources of authority—state or corporate—can have powerful chilling effects on people’s willingness to speak or act freely. Imagine receiving very specific legal instructions from law enforcement about what to say or do in a situation: Would you feel you had a choice to act freely?

This is a vision of AI’s invasive and Byzantine law of the future that chills to the bone. It would be unlike any other law system we’ve seen before in human history, and far more dangerous for our freedoms. Indeed, some legal scholars argue that this future would effectively be the death of law.

Yet it is not a future we must endure. Proposed bans on surveillance technology like facial recognition systems can be expanded to cover those enabling invasive automated legal enforcement. Laws can mandate interpretability and explainability for AI systems to ensure everyone can understand and explain how the systems operate. If a system is too complex, maybe it shouldn’t be deployed in legal contexts. Enforcement by personalized legal processes needs to be highly regulated to ensure oversight, and should be employed only where chilling effects are less likely, like in benign government administration or regulatory contexts where fundamental rights and freedoms are not at risk.

AI will inevitably change the course of law. It already has. But we don’t have to accept its most extreme and maximal instantiations, either today or tomorrow.

This essay was written with Jon Penney, and previously appeared on Slate.com.

Read More

Android SpyNote attacks electric and water public utility users in Japan

Read Time:3 Minute, 30 Second

Authored by Yukihiro Okutomi 

In early June 2023, McAfee’s Mobile team observed a smishing campaign against Japanese Android users that posed as a power and water infrastructure company. The campaign ran for a short time starting June 7. The SMS messages warn of payment problems to lure victims to a phishing website, which then infects the target device with the remote-controlled SpyNote malware. In the past, cybercriminals have often impersonated financial institutions; on this occasion they impersonated public utilities to create a sense of urgency and push victims to act immediately. Protect your Android and iOS mobile devices with McAfee Mobile Security.

Smishing Attack Campaign 

A phishing SMS message impersonating a power or water supplier claims there is a payment problem, as shown in the screenshots below. The URL in the message directs the victim to a phishing website that downloads mobile malware.

Notice of suspension of power transmission due to non-payment of charges, purportedly from a power company in Tokyo (Source: Twitter)

Notice of suspension of water supply due to non-payment of charges, purportedly from a water company in Tokyo (Source: Twitter)

 

When the phishing site is accessed from a mobile browser, it starts downloading the malware and displays an installation confirmation dialog.

The browser’s confirmation dialog for the spyware installation (Source: Twitter)

SpyNote malware 

SpyNote is a known malware family that proliferated after its source code was leaked in October 2022. Recently, it was used in a campaign targeting financial institutions in January and in another targeting the Bank of Japan in April 2023.

The SpyNote malware is remotely controlled spyware that exploits accessibility services and device administrator privileges. It steals device information and sensitive user information such as the device’s location, contacts, incoming and outgoing SMS messages, and phone calls. The malware deceives users by disguising itself with the icons of legitimate apps.

App icons the malware uses to disguise itself.
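
To make the device administrator angle above concrete, here is a minimal Kotlin sketch, assuming it runs inside an ordinary Android app with a valid Context; the function name is illustrative and not part of the malware or of any McAfee tooling. It simply lists which packages currently hold device administrator privileges, one quick way to spot an app holding more power than expected.

```kotlin
import android.app.admin.DevicePolicyManager
import android.content.Context

// Illustrative audit helper (hypothetical name): lists the packages that
// currently hold device administrator privileges, one of the permission
// classes the article says SpyNote abuses.
fun listDeviceAdmins(context: Context): List<String> {
    val dpm = context.getSystemService(Context.DEVICE_POLICY_SERVICE) as DevicePolicyManager
    // getActiveAdmins() returns null when no device admins are registered.
    return dpm.activeAdmins?.map { it.packageName } ?: emptyList()
}
```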

When launched, the malware opens a fake settings screen and prompts the user to enable the Accessibility feature. When the user taps the arrow at the bottom of the screen, the system Accessibility service settings screen is displayed.

The fake settings screen (left) and the system settings screens (center and right)
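
For reference, the system screen shown above is the standard Android Accessibility settings page, and the services enabled there are recorded in a secure setting. The Kotlin sketch below, with illustrative function names and assuming an ordinary app Context, reads that setting and opens the real settings page; it is not malware code or McAfee tooling, just a way to see which services currently have Accessibility access.

```kotlin
import android.content.Context
import android.content.Intent
import android.provider.Settings

// Reads the colon-separated list of enabled accessibility services
// (flattened ComponentName strings) from secure settings.
fun enabledAccessibilityServices(context: Context): List<String> {
    val enabled = Settings.Secure.getString(
        context.contentResolver,
        Settings.Secure.ENABLED_ACCESSIBILITY_SERVICES
    ) ?: return emptyList()
    return enabled.split(':').filter { it.isNotBlank() }
}

// Opens the legitimate system Accessibility settings screen
// (the one shown in the center and right screenshots).
fun openAccessibilitySettings(context: Context) {
    val intent = Intent(Settings.ACTION_ACCESSIBILITY_SETTINGS)
    intent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    context.startActivity(intent)
}
```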

Once the Accessibility service is allowed, the malware disables battery optimization so that it can keep running in the background, and it silently grants itself permission to install apps from unknown sources so that it can install additional malware without the user’s knowledge. In addition to spying on the victim’s device, it also steals two-factor authentication codes from Google Authenticator as well as Gmail and Facebook account information from the infected device.
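
The two settings described above correspond to standard Android APIs. The Kotlin sketch below, using hypothetical helper names, checks whether the calling app is exempt from battery optimization and whether it is allowed to install packages from unknown sources; these are the same switches the malware reportedly flips for itself once it controls the Accessibility service.

```kotlin
import android.content.Context
import android.os.Build
import android.os.PowerManager

// Illustrative check (hypothetical names), not from the McAfee analysis.
data class AbuseProneSettings(
    val ignoresBatteryOptimizations: Boolean, // app can keep running in the background
    val canInstallUnknownApps: Boolean        // app can sideload additional packages
)

fun checkAbuseProneSettings(context: Context): AbuseProneSettings {
    val pm = context.getSystemService(Context.POWER_SERVICE) as PowerManager
    val ignoresDoze = pm.isIgnoringBatteryOptimizations(context.packageName)
    // canRequestPackageInstalls() exists on Android 8.0+ and is only meaningful
    // if the app declares the REQUEST_INSTALL_PACKAGES permission.
    val canSideload = Build.VERSION.SDK_INT >= Build.VERSION_CODES.O &&
        context.packageManager.canRequestPackageInstalls()
    return AbuseProneSettings(ignoresDoze, canSideload)
}
```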

Although the distribution method is different, the step of requesting the Accessibility service after the app launches is similar to the Bank of Japan campaign observed in April.

Scammers keep up with current events and attempt to impersonate well-known companies that have a plausible reason to reach out to their customers. The SpyNote campaign discovered this time targets users of mobile apps for essential infrastructure services such as electricity and water. One reason for this is that electricity and water bills, which used to be issued on paper, are now commonly managed on the web and in mobile apps. If you want to learn more about smishing, consult the article “What Is Smishing? Here’s How to Spot Fake Texts and Keep Your Info Safe”. McAfee Mobile Security detects this threat as Android/SpyNote, alerts mobile users if it is present, and further protects them from any data loss. For more information, visit McAfee Mobile Security.

Indicators of compromise (IoC) 

C2 Server: 

104.233.210.35:27772 

Malware Samples: 

SHA256 Hash | Package name | Application name
075909870a3d16a194e084fbe7a98d2da07c8317fcbfe1f25e5478e585be1954 | com.faceai.boot | キャリア安全設定
e2c7d2acb56be38c19980e6e2c91b00a958c93adb37cb19d65400d9912e6333f | com.faceai.boot | 東京電力
a532c43202c98f6b37489fb019ebe166ad5f32de5e9b395b3fc41404bf60d734 | com.faceai.boot | 東京電力TEPCO
cb9e6522755fbf618c57ebb11d88160fb5aeb9ae96c846ed10d6213cdd8a4f5d | com.faceai.boot | 東京電力TEPCO
59cdbe8e4d265d7e3f4deec3cf69039143b27c1b594dbe3f0473a1b7f7ade9a6 | com.faceai.boot | 東京電力TEPCO
8d6e1f448ae3e00c06983471ee26e16f6ab357ee6467b7dce2454fb0814a34d2 | com.faceai.boot | 東京電力TEPCO
5bdbd8895b9adf39aa8bead0e3587cc786e375ecd2e1519ad5291147a8ca00b6 | com.faceai.boot | 東京電力TEPCO
a6f9fa36701be31597ad10e1cec51ebf855644b090ed42ed57316c2f0b57ea3c | com.faceai.boot | 東京電力TEPCO
f6e2addd189bb534863afeb0d06bcda01d0174f5eac6ee4deeb3d85f35449422 | com.faceai.boot | 東京電力TEPCO
755585571f47cd71df72af0fad880db5a4d443dacd5ace9cc6ed7a931cb9c21d | com.faceai.boot | 東京電力TEPCO
2352887e3fc1e9070850115243fad85c6f1b367d9e645ad8fc7ba28192d6fb85 | com.faceai.boot | 東京電力TEPCO
90edb28b349db35d32c0190433d3b82949b45e0b1d7f7288c08e56ede81615ba | com.faceai.boot | 東京電力TEPCO
513dbe3ff2b4e8caf3a8040f3412620a3627c74a7a79cce7d9fab5e3d08b447b | com.faceai.boot | 東京電力TEPCO
0fd87da37712e31d39781456c9c1fef48566eee3f616fbcb57a81deb5c66cbc1 | com.faceai.boom | 東京水道局アプリ
acd36f7e896e3e3806114d397240bd7431fcef9d7f0b268a4e889161e51d802b | com.faceai.boom | 東京水道局アプリ
91e2f316871704ad7ef1ec74c84e3e4e41f557269453351771223496d5de594e | com.faceai.boom | 東京水道局アプリ

 

 

The post Android SpyNote attacks electric and water public utility users in Japan appeared first on McAfee Blog.

Read More