Microsoft is trying to create a personal digital assistant:
At a Build conference event on Monday, Microsoft revealed a new AI-powered feature called “Recall” for Copilot+ PCs that will allow Windows 11 users to search and retrieve their past activities on their PC. To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research. Despite encryption and local storage, the new feature raises privacy concerns for certain Windows users.
I wrote about this AI trust problem last year:
One of the promises of generative AI is a personal digital assistant. Acting as your advocate with others, and as a butler with you. This requires an intimacy greater than your search engine, email provider, cloud storage system, or phone. You’re going to want it with you 24/7, constantly training on everything you do. You will want it to know everything about you, so it can most effectively work on your behalf.
And it will help you in many ways. It will notice your moods and know what to suggest. It will anticipate your needs and work to satisfy them. It will be your therapist, life coach, and relationship counselor.
You will default to thinking of it as a friend. You will speak to it in natural language, and it will respond in kind. If it is a robot, it will look humanoid—or at least like an animal. It will interact with the whole of your existence, just like another person would.
[…]
And you will want to trust it. It will use your mannerisms and cultural references. It will have a convincing voice, a confident tone, and an authoritative manner. Its personality will be optimized to exactly what you like and respond to.
It will act trustworthy, but it will not be trustworthy. We won't know how these assistants are trained. We won't know their secret instructions. We won't know their biases, either accidental or deliberate.
We do know that they are built at enormous expense, mostly in secret, by profit-maximizing corporations for their own benefit.
[…]
All of this is a long-winded way of saying that we need trustworthy AI. AI whose behavior, limitations, and training are understood. AI whose biases are understood, and corrected for. AI whose goals are understood. That won’t secretly betray your trust to someone else.
The market will not provide this on its own. Corporations are profit maximizers, at the expense of society. And the incentives of surveillance capitalism are just too much to resist.
We are going to need some sort of public AI to counterbalance all of these corporate AIs.