Microsoft is trying to create a personal digital assistant:
At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.
I wrote about this AI trust problem last year:
One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.
And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.
You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.
[…]
And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.
It will act trustworthy, but it will not be trustworthy. We won’t know how it was trained. We won’t know its secret instructions. We won’t know its biases, either accidental or deliberate.
We do know that these AIs are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.
[…]
All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.
The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.
We are going to need some sort of public AI to counterbalance all of these corporate AIs.