AI – The Good, Bad, and Scary

AI and machine learning (ML) optimize processes by recommending ways to improve productivity, reduce cycle times, and maximize efficiency. AI also frees up human capital by performing mundane, repetitive tasks 24×7 without the need for rest, all while minimizing human error.

AI can benefit society in numerous ways. But as much as AI can propel human progress forward, without proper guidance it can also work to our detriment. We need to understand the risks and challenges that come with AI. Growing your knowledge in the new era of AI will help you and your organization evolve.

AI can be a battlefield of good and evil: there is the power to do good and the power to do evil. Here are some examples of the Good, the Bad, and the Scary of AI.

Good

Cybersecurity – Detect and respond to cyber-attacks with automation at machine speed, predict behavioral anomalies, and defend against cyber threats before an actual attack occurs (see the sketch after this list)
Banking & Finance – Detect and prevent fraud, manage risks, enable personalized services, and automate financial-decision processing
Healthcare – Optimize patient interactions, develop personalized treatment plans, attain better patient experience, improve patient data accuracy, and reduce misfiled patient records
Manufacturing – Predict maintenance, detect defects and quality issues, enhance productivity, generate product & component designs, and optimize inventory & demand forecasting
Retail – Secure self-checkout that helps loss prevention, optimize retail operations & supply chain, and enhance customer experiences
Smart cities & IoT – Manage autonomous and self-driving vehicle traffic, manage energy consumption, optimize water usage, and streamline waste management through real-time sensor data
Telecom – Predict network congestion and proactively reroute traffic to avoid outages
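
To make the cybersecurity example concrete, here is a minimal sketch of behavioral anomaly detection using scikit-learn's IsolationForest. The feature set (kilobytes sent, session minutes, login hour) and all of the data are hypothetical; a real system would extract features from live network and authentication logs.

```python
# A minimal sketch of behavioral anomaly detection on synthetic
# session features: [bytes_sent_kb, session_minutes, login_hour].
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline "normal" behavior clustered around business-hours activity
normal = rng.normal(loc=[500.0, 30.0, 13.0],
                    scale=[100.0, 10.0, 2.0],
                    size=(1000, 3))

# Two suspicious sessions: huge transfers in the middle of the night
suspicious = np.array([[9000.0, 240.0, 3.0],
                       [12000.0, 300.0, 2.0]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns 1 for inliers and -1 for flagged anomalies
print(model.predict(suspicious))  # expected: [-1 -1]
print(model.predict(normal[:3]))  # expected: mostly [1 1 1]
```

An isolation forest is a common unsupervised choice here because it needs no labeled attack data: it flags sessions that are easy to isolate from the bulk of normal behavior, which is what lets a defense react before a known attack signature exists.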

Bad

Cybercriminals – Leverage AI-powered tools and social engineering to steal identities, launch ransomware attacks, conduct targeted nation-state attacks, and destroy critical national infrastructure
Computing resources – Demand heavy power supplies, high thermal design power (TDP), graphics processing units (GPUs), and large amounts of random access memory (RAM)
Environmental impact – Intensive computing resources carry a heavy carbon footprint and environmental cost
Energy cost – Rising electricity usage, water consumption for cooling, and growing computational costs translate into carbon emissions (a back-of-the-envelope estimate follows this list)
Bias & Discrimination – Propagate biases resulting from bad training data, incomplete data, and poorly trained AI models
Inequality – Widen the gap between the rich and poor and increase inequality in society
Privacy – Loss of data privacy from insecure AI systems, unencrypted data sources, and misuse & abuse
Skills loss – Erode human critical thinking skills: the ability to uncover root issues, solve complex problems, and write at a college and professional level
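
To put the energy-cost item in perspective, here is a back-of-the-envelope sketch of the arithmetic. Every input value below is an illustrative assumption, not a measured figure.

```python
# Rough estimate of energy, cost, and emissions for a hypothetical
# GPU training run. All inputs are assumptions for illustration.
num_gpus = 512          # assumed cluster size
tdp_watts = 700         # assumed per-GPU thermal design power
hours = 24 * 30         # assumed one-month training run
pue = 1.3               # assumed datacenter power usage effectiveness
usd_per_kwh = 0.12      # assumed electricity price
kg_co2_per_kwh = 0.4    # assumed grid emissions factor

energy_kwh = num_gpus * tdp_watts / 1000 * hours * pue
print(f"Energy: {energy_kwh:,.0f} kWh")                        # ~335,462 kWh
print(f"Cost:   ${energy_kwh * usd_per_kwh:,.0f}")             # ~$40,255
print(f"CO2:    {energy_kwh * kg_co2_per_kwh / 1000:,.0f} t")  # ~134 tonnes
```

Even under these modest assumptions, a single month-long run draws hundreds of megawatt-hours, which is why the environmental and energy items above land on the "bad" list.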

Scary

Job loss and displacement – Replace humans with robots across every sector to perform highly skilled professional jobs
Overreliance on AI – Rely heavily on AI to make important decisions, like choosing medical procedures, making life-or-death decisions, or selecting political candidates
Dominance of AI – Potential ability of AI to surpass human intelligence and take control
Monopoly by tech – A select number of tech companies could monopolize the economy and exert undue influence over the social fabric of our daily lives, from buying patterns to everyday decision-making
Deepfakes – Generate deepfakes with manipulated videos and images to influence discussions on social media and online forums
Propaganda & Disinformation – Deploy human and bot campaigns to spread disinformation and propaganda to manipulate public opinion
Censorship – AI chatbots that restrict access to media content and remove unfavorable online speech pose a risk to internet freedom and a democratic society

In the example of deepfakes and spreading disinformation, how does an AI system verify the authenticity of a video or image of an individual? How does it validate the source and validity of the information? How does one separate fact from fiction? And how do we overcome skepticism when the information is, in fact, the truth?
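
There is no single check that settles these questions, but the sketch below shows one common building block: comparing a perceptual hash of a suspect image against a trusted original using the open-source imagehash library. The file names and the distance threshold here are illustrative assumptions.

```python
# A minimal sketch of one image-authenticity building block: perceptual
# hashing. The hash survives re-encoding and resizing, so a large hash
# distance between a suspect copy and a trusted original suggests the
# visual content itself was altered. File names are hypothetical.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("trusted_original.jpg"))
suspect = imagehash.phash(Image.open("suspect_copy.jpg"))

# Subtracting two hashes yields the Hamming distance between them
distance = original - suspect
print(f"Hash distance: {distance}")

# The threshold is an assumption to tune per use case: small distances
# usually mean the same underlying image, larger ones meaningful edits.
if distance <= 10:
    print("Visually consistent with the trusted original")
else:
    print("Significant visual differences - inspect further")
```

The limitation is clear: this only detects divergence from a known reference, and it cannot prove a brand-new video authentic. That gap is what content-provenance standards such as C2PA's content credentials aim to fill.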

Concerns found in research surveys

The bad and scary sides of AI are not far-fetched. These concerns raise more than just eyebrows with average consumers; they hit pretty close to home. In Cisco's 2023 consumer privacy survey:

75% of respondents were concerned they could lose their jobs or be replaced by Gen AI
72% of respondents indicated that having products and solutions audited for bias would make them “somewhat” or “much more” comfortable with AI
86% were concerned the information they get from Gen AI could be wrong and could be detrimental to humanity.

According to a 2023 Pew Research survey:

52% of Americans were more concerned than excited about the increased use of AI. Concern among those familiar with AI climbed 16 points from the previous year to 47%, while concern among those who had heard only a little about AI climbed 19 points to 58%
When it came to AI for good, such as doctors providing patient care or people finding products and services they are interested in online, Americans perceived AI as more helpful than hurtful – 46% and 49%, respectively. Demographics with higher education and income said AI is having a positive impact
Despite the benefits that come with AI, loss of data privacy stood out as a major concern across demographics. 53% of Americans said AI does more to hurt than help people keep their personal information private; 59% of college graduates said the same.

Navigating through the uncertainties

While there are arguments predicting doomsday, some AI experts argue that it's not all doom and gloom.

We are far from overreliance on AI or dominance by AI. We are far from machine intelligence matching human intelligence, let alone surpassing it. Another perspective is to ask how machine intelligence can augment human capabilities, as we already see with Google search. Further, how can humans and machines communicate and work together more intelligently than either could alone?

When we look at the advancement of technology, from personal computing on the PC to the Internet revolutionizing the way we communicate and connect, there is a lot we can learn. The Internet transformed the way we work and spawned jobs that never existed before, ranging from web developers to data scientists, application developers, software engineers, search engine optimization specialists, digital marketers, social media managers, and many more with the growth of 5G wireless and the Internet of Things (IoT).

Each of these roles increased productivity and solved problems. We can expect the same as the generative AI market continues to evolve and mature: a boost in productivity, new problems solved more efficiently, and a new wave of job creation.

Developing a mindset – GenAI is still in its early days, and we don't yet know enough to predict all of its negative implications for society, so it's too soon to be pessimistic. The new era of AI opens doors to infinite possibilities for creativity and new innovations. An optimistic view is the positive impact AI could have on mitigating existential threats, from climate change to the risks of AI itself.
Training and Education – AI isn't going away. What's scarier than the scary of AI itself is the fear holding back your personal growth. Education is key. Determine what new skills and capabilities your current job and the next-generation workforce will need. Learn to use AI tools; there are hundreds available in every industry.
Managing Risk – Managing the inherent risk of AI starts with being aware of its risks and challenges. Cybercriminals aren't going away either. While not every criminal can be stopped from creating the "scary" with AI, combatting AI-powered criminals with a modern, AI-powered next-gen SOC is necessary to achieve resilience.
Securing AI – Embedding security from the start and throughout each stage of the AI systems development lifecycle is not only a best practice but also mitigates the use of AI by adversaries for harm.
Regulating Privacy – The U.S. needs a national privacy and data protection law; unlike the EU with GDPR, we don't have one today. In fact, the EU is on its way to becoming the first global body to regulate AI. Without a comprehensive approach to privacy and data protection law with clear legal regulation, the U.S. lags in its ability to protect its citizens' security and privacy.
Developing Trust – Being more transparent and explaining how AI systems and tools work, ensuring human involvement, and instituting an AI ethics management program would make more individuals comfortable with AI.

These are just some tips to help individuals and organizations – both private and public – navigate the uncertainties of AI: the Good, the Bad, and the Scary. Ultimately, the fate of humanity is up to each of us, our organizations, and society as a whole. Let's hope we make the right choices. Our future depends on it.

To learn more, explore our cybersecurity consulting services.

ZDI-24-357: RARLAB WinRAR Mark-Of-The-Web Bypass Vulnerability

This vulnerability allows remote attackers to bypass the Mark-Of-The-Web protection mechanism on affected installations of RARLAB WinRAR. User interaction is required to exploit this vulnerability in that the target must perform a specific action on a malicious page. The ZDI has assigned a CVSS rating of 4.3. The following CVEs are assigned: CVE-2024-30370.

ZDI-24-359: Flexera Software FlexNet Publisher Uncontrolled Search Path Element Local Privilege Escalation Vulnerability

This vulnerability allows local attackers to escalate privileges on affected installations of Flexera Software FlexNet Publisher. An attacker must first obtain the ability to execute low-privileged code on the target system in order to exploit this vulnerability. The ZDI has assigned a CVSS rating of 7.8. The following CVEs are assigned: CVE-2024-2658.

ZDI-24-360: JetBrains TeamCity AgentDistributionSettingsController Cross-Site Scripting Vulnerability

This vulnerability allows remote attackers to execute arbitrary script on affected installations of JetBrains TeamCity. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The ZDI has assigned a CVSS rating of 4.6. The following CVEs are assigned: CVE-2024-31138.

assimp-5.0.1-7.el8

FEDORA-EPEL-2024-d0d107787c

Packages in this update:

assimp-5.0.1-7.el8

Update description:

Security fix for CVE-2023-45661, CVE-2023-45662, CVE-2023-45663, CVE-2023-45664, CVE-2023-45666, and CVE-2023-45667

Ross Anderson

Ross Anderson unexpectedly passed away Thursday night in, I believe, his home in Cambridge.

I can’t remember when I first met Ross. Of course it was before 2008, when we created the Security and Human Behavior workshop. It was well before 2001, when we created the Workshop on Economics and Information Security. (Okay, he created both—I helped.) It was before 1998, when we wrote about the problems with key escrow systems. I was one of the people he brought to the Newton Institute for the six-month cryptography residency program he ran (I mistakenly didn’t stay the whole time)—that was in 1996. I know I was at the Fast Software Encryption workshop in December 1993, another conference he created. There I presented the Blowfish encryption algorithm. Pulling an old first-edition of Applied Cryptography down from the shelf, I see his name in the acknowledgments. Which means that sometime in early 1993 I, as an unpublished book author who only wrote a couple of crypto articles for Dr. Dobbs Journal, asked him to read and comment on my book manuscript. And he said yes. Which means I mailed him a paper copy. And he read it. And mailed his handwritten comments back to me. In an envelope with stamps. Because that’s how we did it back then.

I have known Ross for over thirty years, as both a colleague and a friend. He was enthusiastic, brilliant, opinionated, articulate, curmudgeonly, and kind. Pick up any of his academic papers—there are many—and odds are that you will find an unexpected insight. He was a cryptographer and security engineer, but also very much a generalist. He analyzed block ciphers in the 1990s, and attacks against large-language models last year. He started conferences like nobody’s business. His masterwork book, Security Engineering—now in its Third Edition—is as comprehensive a tome on cybersecurity and related topics as you could imagine. (Also note his fifteen-lecture video series on that same page. If you have never heard Ross lecture, you’re in for a treat.) He was the first person to understand that security problems are often actually economic problems. He was the first person to make a lot of those sorts of connections. He fought against surveillance and back doors, and for academic freedom. He didn’t suffer fools in either government or the corporate world.

He’s listed in the acknowledgments as a reader of every other of my books from Beyond Fear on. Recently, we saw each other on only a couple of occasions every year: at this or that workshop or event. Most recently was last June, at SHB 2023, in Pittsburgh. He was going to attend my Workshop on Reimagining Democracy, but he had to cancel at the last minute. (He sent me the talk he was going to give. I will see about posting it.) The day before he died, we were discussing how to accommodate everyone who registered for this year’s SHB workshop. I learned something from him every single time we had a conversation. And I am not the only one.

My heart goes out to his wife Shreen and his family. We lost him much too soon.
