Artificial intelligence has zoomed to the forefront of public and professional discourse, as have expressions of fear that as AI advances, so does the likelihood that we will have created a variety of beasts that threaten our very existence. Within those fears also lie worries about whether those who create large language models (LLMs), and the engines that harvest the data that feed them, are doing so in an ethical manner.
To be frank, I hadn’t given the matter much thought until a recent discussion about the need for “responsible and ethical AI” prompted me to do so. That discussion took place amid the constant blast of claims that AI is either evil personified or, conversely, some holy grail.