Artificial intelligence has zoomed to the forefront of public and professional discourse, along with fears that as AI advances, so does the likelihood that we will have created a variety of beasts that threaten our very existence. Within those fears lie worries about the responsibilities of those who create large language models (LLMs) and the engines that harvest the data feeding them, and whether they do so in an ethical manner.
To be frank, I hadn't given the matter much thought until I was prompted by a recent discussion about the need for "responsible and ethical AI," which occurred amid the constant refrain that AI is either evil personified or, conversely, some holy grail.