The user's bottom line: Trusting AI and the internet
Introductory Series • Post 8
This is the fourth and final post explaining how our proposed innovations—Proactive Voice-AI Agents (PVAIs) and User-Controlled Services (UCS)—will benefit millions of caregivers and care recipients worldwide, enabling us to use AI and the internet for the greatest good. Below we explain what some may view as the most compelling rationale of all for this new model of e-care: it would reestablish our trust in the virtual world.
The most glaring failure of the virtual world to date has been its inability to earn consumers’ trust for much more than finding and buying things or communicating with one another—asynchronously or in real-time, via text, voice, or video. While we use mobile apps and online services constantly for commerce and communications, we don’t trust either to help us care for what matters most: our loved ones, especially our young children and aging relatives.
The most widely told stories about our online experiences involve suicides among the young, scams targeting all ages, blatant racism and defamation, ubiquitous phishing, identity theft, institutional security breaches, and cybercrime. And the list goes on.
These stories stem from the ability of online services and apps—owned and operated by people of dubious intent—to fool, tempt, or even “attack” individual users or institutions in ways that were inconceivable before the age of the internet and AI-driven apps.
Regrettably, the tech oligarchy doesn’t—or simply can’t—mitigate the collateral damage caused daily by and in the virtual world they control.
Apple, for example, assures us their proprietary ecosystem will keep us safe, but only if we stay within it and figure out how to protect ourselves on our own. Hundreds of millions trust Amazon and others to deliver what they order online, but not to keep their profiles and preferences private. Similarly, trust in Facebook or Google to safeguard personal information—rather than share or sell it—is at an all-time low. Sandwich-generation caregivers, meanwhile, worry about both their aging parents and their young children falling victim to online predators of all sorts.
Imagine trusting Alexa to entertain or check on your mother—only to discover that Amazon is using her health conversations to target ads, or that a data breach exposed her personal information or medical history to others. If users don’t trust Amazon to save and reuse Alexa’s recordings of their voice, when will they ever trust Alexa to do something more important than playing music or setting timers?
What we’re being asked to trust today
Consider what we’re being asked to trust in today’s online world:
(1) AI chatbots owned and operated by faceless institutions whose primary motivations are to monetize everything and maximize profits;
(2) Systems that collect and reuse personal details about us—our online activities, preferences, purchases, homes, histories, and health; and
(3) AI models and chatbots that regularly “hallucinate” or make mistakes we can’t detect or correct without specialized knowledge.
While it’s hard to determine which of these weaknesses represents the biggest barrier to trusting voice-AI agents to care for us and our loved ones, it’s clear that none of these risks existed before consumers migrated to the virtual world.
Giving control to users …
PVAIs with UCS address these trust barriers by shifting who’s in control.
Instead of faceless institutions controlling an AI agent that monitors or advises your parent, family caregivers and care recipients—or the professionals they authorize—control it. Instead of corporations collecting and controlling health data without meaningful consent, caregivers decide what data is collected, who can access it, and how it’s used. And instead of being forced to trust opaque algorithms, caregivers can configure what their PVAI says and when it speaks, and set up triggers for urgent alerts.
So, what’s the bottom line?
Big Tech has given us neither the proactive monitoring tools nor the user control we need to care for our loved ones remotely in ways that only we can. The self-service voice-AI chatbots that exist today offer caregivers and care recipients no meaningful way to stay connected, virtually yet constantly.
Happily, the solution is hiding in plain sight. We can now build systems—PVAIs—where those closest to the person receiving care, be they family and friends or trusted professionals, are always at the helm.
Having more control over our shared online experience through PVAIs and UCS will let us confidently delegate tasks to AI agents, helping us monitor and care for those we love.
In our next post, we’ll summarize the “four pillars of future caregiving” that are only possible with the proactive voice-AI agents (PVAIs) and user-controlled services (UCS) we’ve described. This introduction concludes with a challenge, and we hope you’ll take it!
