ntpsec-1.2.2a-1.fc38

FEDORA-2023-26cbce3854

Packages in this update:

ntpsec-1.2.2a-1.fc38

Update description:

Security fix for CVE-2023-4012

USN-6271-1: MaraDNS vulnerabilities

Xiang Li discovered that MaraDNS incorrectly handled certain inputs. If a
user or an automated system were tricked into opening a specially crafted
input file, a remote attacker could possibly use this issue to obtain
sensitive information. (CVE-2022-30256)

Huascar Tejeda discovered that MaraDNS incorrectly handled certain inputs. If
a user or an automated system were tricked into opening a specially crafted
input file, a remote attacker could possibly use this issue to cause a denial
of service. (CVE-2023-31137)

How Malicious Android Apps Slip Into Disguise

Researchers say mobile malware purveyors have been abusing a bug in the Google Android platform that lets them sneak malicious code into benign mobile apps and evade security scanning tools. Google says it has updated its app malware detection mechanisms in response to the new research.

At issue is a mobile malware obfuscation method identified by researchers at ThreatFabric, a security firm based in Amsterdam. Aleksandr Eremin, a senior malware analyst at the company, told KrebsOnSecurity they recently encountered a number of mobile banking trojans abusing a bug present in all Android OS versions. The bug lets attackers corrupt components of an app so that popular mobile security scanning tools ignore the malicious bits as invalid, while Android OS accepts the app as a whole as valid and installs it successfully.

“There is malware that is patching the .apk file [the app installation file], so that the platform is still treating it as valid and runs all the malicious actions it’s designed to do, while at the same time a lot of tools designed to unpack and decompile these apps fail to process the code,” Eremin explained.

Eremin said ThreatFabric has seen this malware obfuscation method used a few times in the past, but in April 2023 it started finding many more variants of known mobile malware families leveraging it for stealth. The company has since attributed this increase to a semi-automated malware-as-a-service offering in the cybercrime underground that will obfuscate or “crypt” malicious mobile apps for a fee.

Eremin said Google flagged their initial May 9, 2023 report as “high” severity. More recently, Google awarded them a $5,000 bug bounty, even though it did not technically classify their finding as a security vulnerability.

“This was a unique situation in which the reported issue was not classified as a vulnerability and did not impact the Android Open Source Project (AOSP), but did result in an update to our malware detection mechanisms for apps that might try to abuse this issue,” Google said in a written statement.

Google also acknowledged that some of the tools it makes available to developers — including APK Analyzer — currently fail to parse such malicious applications and treat them as invalid, while still allowing them to be installed on user devices.

“We are investigating possible fixes for developer tools and plan to update our documentation accordingly,” Google’s statement continued.

Image: ThreatFabric.

According to ThreatFabric, there are a few telltale signs that app analyzers can look for that may indicate a malicious app is abusing the weakness to masquerade as benign. For starters, they found that apps modified in this way have Android Manifest files that contain newer timestamps than the rest of the files in the software package.

More critically, the Manifest file itself is changed so that the number of “strings” (plain text in the code, such as comments) declared as present in the app does not match the actual number of strings in the software.
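Below is a minimal sketch of how an analyzer might implement the first of those checks, the manifest timestamp heuristic. It assumes only that an APK is a standard ZIP archive; it is an illustration of the heuristic described above, not ThreatFabric’s tooling.

    import sys
    import zipfile
    from datetime import datetime

    MANIFEST = "AndroidManifest.xml"

    def manifest_newer_than_rest(apk_path: str) -> bool:
        """Flag APKs whose AndroidManifest.xml carries a newer ZIP
        timestamp than every other entry in the package (an APK is
        a ZIP archive, so the standard zipfile module can read it)."""
        with zipfile.ZipFile(apk_path) as apk:
            manifest_ts = datetime(*apk.getinfo(MANIFEST).date_time)
            other_ts = [
                datetime(*info.date_time)
                for info in apk.infolist()
                if info.filename != MANIFEST
            ]
            # A manifest re-packed after the app was built tends to
            # postdate every other entry in the archive.
            return bool(other_ts) and manifest_ts > max(other_ts)

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(f"{path}: manifest timestamp anomaly = {manifest_newer_than_rest(path)}")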

One of the mobile malware families known to be abusing this obfuscation method has been dubbed Anatsa, which is a sophisticated Android-based banking trojan that typically is disguised as a harmless application for managing files. Last month, ThreatFabric detailed how the crooks behind Anatsa will purchase older, abandoned file managing apps, or create their own and let the apps build up a considerable user base before updating them with malicious components.

ThreatFabric says Anatsa poses as PDF viewers and other file managing applications because these types of apps already have advanced permissions to remove or modify other files on the host device. The company estimates the people behind Anatsa have delivered more than 30,000 installations of their banking trojan via ongoing Google Play Store malware campaigns.

Google has come under fire in recent months for failing to more proactively police its Play Store for malicious apps, or for once-legitimate applications that later go rogue. This May 2023 story from Ars Technica about a formerly benign screen recording app that turned malicious after garnering 50,000 users notes that Google doesn’t comment when malware is discovered on its platform, beyond thanking the outside researchers who found it and saying the company removes malware as soon as it learns of it.

“The company has never explained what causes its own researchers and automated scanning process to miss malicious apps discovered by outsiders,” Ars’ Dan Goodin wrote. “Google has also been reluctant to actively notify Play users once it learns they were infected by apps promoted and made available by its own service.”

The Ars story mentions one potentially positive change by Google of late: A preventive measure available in Android versions 11 and higher that implements “app hibernation,” which puts apps that have been dormant into a hibernation state that removes their previously granted runtime permissions.

The Need for Trustworthy AI

If you ask Alexa, Amazon’s voice assistant AI system, whether Amazon is a monopoly, it responds by saying it doesn’t know. It doesn’t take much to make it lambaste the other tech giants, but it’s silent about its own corporate parent’s misdeeds.

When Alexa responds in this way, it’s obvious that it is putting its developer’s interests ahead of yours. Usually, though, it’s not so obvious whom an AI system is serving. To avoid being exploited by these systems, people will need to learn to approach AI skeptically. That means deliberately constructing the input you give it and thinking critically about its output.

Newer generations of AI models, with their more sophisticated and less rote responses, are making it harder to tell who benefits when they speak. Internet companies’ manipulating what you see to serve their own interests is nothing new. Google’s search results and your Facebook feed are filled with paid entries. Facebook, TikTok and others manipulate your feeds to maximize the time you spend on the platform, and thus ad views, at the expense of your well-being.

What distinguishes AI systems from these other internet services is how interactive they are, and how these interactions will increasingly become like relationships. It doesn’t take much extrapolation from today’s technologies to envision AIs that will plan trips for you, negotiate on your behalf or act as therapists and life coaches.

They are likely to be with you 24/7, know you intimately, and be able to anticipate your needs. This kind of conversational interface to the vast network of services and resources on the web is within the capabilities of existing generative AIs like ChatGPT. They are on track to become personalized digital assistants.

As a security expert and a data scientist, we believe that people who come to rely on these AIs will have to trust them implicitly to navigate daily life. That means they will need to be sure the AIs aren’t secretly working for someone else. Across the internet, devices and services that seem to work for you already secretly work against you. Smart TVs spy on you. Phone apps collect and sell your data. Many apps and websites manipulate you through dark patterns, design elements that deliberately mislead, coerce or deceive website visitors. This is surveillance capitalism, and AI is shaping up to be part of it.

Quite possibly, it could be much worse with AI. For that AI digital assistant to be truly useful, it will have to really know you. Better than your phone knows you. Better than Google search knows you. Better, perhaps, than your close friends, intimate partners and therapist know you.

You have no reason to trust today’s leading generative AI tools. Leave aside the hallucinations, the made-up “facts” that GPT and other large language models produce. We expect those will be largely cleaned up as the technology improves over the next few years.

But you don’t know how the AIs are configured: how they’ve been trained, what information they’ve been given, and what instructions they’ve been commanded to follow. For example, researchers uncovered the secret rules that govern the Microsoft Bing chatbot’s behavior. They’re largely benign but can change at any time.

Many of these AIs are created and trained at enormous expense by some of the largest tech monopolies. They’re being offered to people to use free of charge, or at very low cost. These companies will need to monetize them somehow. And, as with the rest of the internet, that somehow is likely to include surveillance and manipulation.

Imagine asking your chatbot to plan your next vacation. Did it choose a particular airline or hotel chain or restaurant because it was the best for you or because its maker got a kickback from the businesses? As with paid results in Google search, newsfeed ads on Facebook and paid placements on Amazon queries, these paid influences are likely to get more surreptitious over time.

If you’re asking your chatbot for political information, are the results skewed by the politics of the corporation that owns the chatbot? Or the candidate who paid it the most money? Or even the views of the demographic of the people whose data was used in training the model? Is your AI agent secretly a double agent? Right now, there is no way to know.

We believe that people should expect more from the technology and that tech companies and AIs can become more trustworthy. The European Union’s proposed AI Act takes some important steps, requiring transparency about the data used to train AI models, mitigation for potential bias, disclosure of foreseeable risks and reporting on industry standard tests.

Most existing AIs fail to comply with this emerging European mandate, and, despite recent prodding from Senate Majority Leader Chuck Schumer, the US is far behind on such regulation.

The AIs of the future should be trustworthy. Unless and until the government delivers robust consumer protections for AI products, people will be on their own to guess at the potential risks and biases of AI, and to mitigate their worst effects.

So when you get a travel recommendation or political information from an AI tool, approach it with the same skeptical eye you would a billboard ad or a campaign volunteer. For all its technological wizardry, the AI tool may be little more than the same.

This essay was written with Nathan Sanders, and previously appeared on The Conversation.

What Is Global Privacy Control (GPC), and how can it help you protect your data?

The content of this post is solely the responsibility of the author.  AT&T does not adopt or endorse any of the views, positions, or information provided by the author in this article. 

More than 67% of internet users in the US remain blissfully unaware of online privacy and data protection regulations.

At the same time, the global average cost of data breaches and cyber-attacks has increased by 15% since 2020, to $4.45 million. In fact, compromised credentials and personal information were responsible for nearly 20% of the roughly 1.4 billion security incidents during this period.

As a result, there’s a growing need for a solution to protect sensitive data from potential theft or misuse.

Global Privacy Control (GPC) is an emerging standard designed to give users more control over their data when navigating the internet and using digital services.

In this article, you’ll learn about the core concept of GPC and its importance in digital protection.

What is Global Privacy Control (GPC)?

Global Privacy Control, or GPC, is a cybersecurity and data privacy initiative to give businesses and individuals greater control over their data, including its storage, distribution, and usage.

It offers a simple, standardized way to assert and protect your privacy rights while surfing the internet and navigating different websites and applications.

Adopting and implementing the protocol sends a “Do Not Sell or Share My Data” signal to digital platforms, prompting them to refrain from selling your data to third parties for advertising and other commercial purposes.

Common data websites collect generally include:

Personal information (Name, contact, address, etc.);
Browsing history;
Live location;
Device information (Model, operating system, etc.);
IP address;
Cookies;
Payment information (Card details, digital wallet credentials, etc.);
Account credentials (Social media apps, third-party services, etc.);
Usage data (Time, features used, launch frequency, and more).

By activating the GPC signal, you can exercise your privacy rights and tell sites and apps not to sell or share the information listed above (and more).
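Under the hood, the mechanism is straightforward: a participating browser attaches a Sec-GPC: 1 header to its requests and exposes the preference to scripts as navigator.globalPrivacyControl. The sketch below shows how a site might detect and honor the signal server-side; Flask is used purely for illustration, and the response handling is a hypothetical example.

    from flask import Flask, request

    app = Flask(__name__)

    def gpc_opt_out() -> bool:
        # Per the GPC specification, "1" is the only meaningful value
        # of the Sec-GPC request header.
        return request.headers.get("Sec-GPC") == "1"

    @app.route("/")
    def index():
        if gpc_opt_out():
            # Hypothetical handling: skip ad-tech integrations and
            # record the visitor's do-not-sell-or-share preference.
            return "GPC signal honored: this visit will not be sold or shared."
        return "No GPC signal received."

    if __name__ == "__main__":
        app.run()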

The significance of data privacy and how GPC can help

Data privacy is more critical than ever due to the unprecedented exchange and collection of data on the internet. Digital entities actively collect your valuable data, including personal information, browsing habits, location, financial details, etc.

By creating vast repositories of your data, websites and apps gain insights into your online behavior and use them to tailor:

Ads;
UI/UX design;
Site content;
Products;
Services.

However, by doing so, they increase your risk of security breaches and privacy infringements. Hackers and cybercriminals actively target sensitive information like your IP address to orchestrate various attacks, including:

Distributed Denial of Service (DDoS) attacks;
Spoofing;
Ransomware and spyware;
Man-in-the-Middle attacks;
Brute force attacks, etc.

Fortunately, you can blunt IP-based attacks with a virtual private network (VPN). A VPN masks your IP address and encrypts your online traffic, making it much harder for criminals to access your data.

However, you can take data protection to a whole new level by combining Global Privacy Control with a VPN and other essential cybersecurity tools, such as:

Anti-malware software;
SSL certificates;
Multi-factor authentication;
Intrusion detection systems, etc.

Preserving data privacy is crucial for protecting valuable data and building trust between users and digital platforms. As it stands, GPC is one of the few initiatives that can proactively prevent breaches by limiting the flow of user data to third parties.

Benefits of adopting Global Privacy Control

Below are the key benefits of adopting GPC on websites or apps:

1. Data security & privacy enhancement

GPC enables you to fortify your valuable data against nonconsensual or unauthorized sharing, so your personal information is used solely for core purposes, such as logging into your account or completing online transactions.

When a site or app honors the GPC signal, it will not sell or share your browsing activity, usage, or online behavior, significantly reducing the risk of attacks, identity theft, and unauthorized access.

2. Transparent data collection and usage

If your business relies on collecting user data, you can use GPC to enable transparent collection and usage. You can share how your site or app collects, processes, and shares user data. This transparency allows visitors, customers, or users to make more informed decisions about engaging with your site or app.

3. Building trust & credibility

If you run an online business, one of the best ways to build trust with users is by respecting their online privacy preferences. Implementing GPC and honoring “Do Not Sell or Share My Data” requests is also a powerful branding and marketing strategy.

Demonstrating that you care about your users’ privacy needs can improve credibility and foster a long-term relationship with them.

4. Compliance with privacy regulations

In the post-pandemic age, there’s an increased focus on data privacy regulations worldwide, including (but not limited to):

General Data Protection Regulation (GDPR) – EU and UK;
California Consumer Privacy Act (CCPA);
California Privacy Rights Act (CPRA);
Personal Information Protection and Electronic Documents Act (PIPEDA) – Canada;
Health Information Technology for Economic and Clinical Health Act (HITECH), etc.

These regulations impose strict privacy requirements you must adhere to. Failure to comply could lead to heavy fines and legal liabilities. Moreover, when users learn you’re non-compliant, they may hesitate to visit your site or use your app.

5. Empowering user control

Global Privacy Control puts users in charge of the data they share on digital platforms. You have full control over your sharing preferences and can choose not to share data with third-party companies, either directly or through the site or app.

This user-centric approach promotes a sense of ownership and helps businesses mitigate security risks.

Conclusion

As the world rapidly shifts to a digital-first economy, you must take the necessary steps to safeguard data privacy.

With our commitment to Global Privacy Control (GPC), you can maximize data control and privacy protection. So, feel free to delve into our wealth of resources and empower yourself with the knowledge to fortify your online defenses.
