Microsoft is trying to create a personal digital assistant:
At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns among Windows users.
I wrote about this AI trust problem last year:
One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.
And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.
You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.
[…]
And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.
It will act trustworthy, but it will not be trustworthy. We won’t know how it was trained. We won’t know its secret instructions. We won’t know its biases, either accidental or deliberate.
We do know that these systems are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.
[…]
All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.
The market will not provide this on its own. Corporations are profit maximizers, often at society’s expense. And the incentives of surveillance capitalism are just too strong to resist.
We are going to need some sort of public AI to counterbalance all of these corporate AIs.