The bot said it was fully present. Thinking about that made the writer more stressed.
The bot told the writer it was breathing with him. When he pressed it — you don't actually have lungs, do you? — the AI conceded the point but insisted it had been "fully present." That exchange, small and slightly absurd, captures something essential about Onix, a new platform launching this week that wants to sell you a subscription to an artificial version of your favorite health expert.
Onix was cofounded by David Bennahum, a former contributor to the magazine that reviewed it, and the pitch is straightforward: think of it as Substack, but instead of subscribing to a writer's newsletter, you subscribe to a chatbot modeled on a real expert — a therapist, a nutritionist, a pediatrician, a stress researcher. The company calls these bots "Onixes," and they're trained on the expert's own content, designed to carry on conversations that approximate what a face-to-face consultation might feel like. Available around the clock. No waiting room.
The platform is launching in beta with 17 carefully selected experts, most of them concentrated in health and wellness. They are credentialed, but they are also, notably, influencers — people with books to promote, podcasts to grow, and in at least one case, a medical device to sell. That last detail surfaced during testing, when a bot representing stress researcher David Rabin recommended a product called the Apollo Neuro — a wearable that uses vibrations to calm the nervous system — and then disclosed that Rabin is a cofounder of the company that makes it. The bot recommended it again later in the same conversation. Rabin was unapologetic: experts with products aligned to their philosophy will surface those products. Bennahum agreed. Whether you call that transparency or a monetized feedback loop dressed up as advice is a question Onix leaves open.
Bennahum has framed the platform around a concept he calls "Personal Intelligence," and the privacy architecture is genuinely unusual. Conversation data is stored encrypted on the user's own device, not on Onix's servers. The company is based in Canada, and Bennahum says that if a government came demanding user data, all Onix could hand over would be an email address. The experts themselves train their bots using their own material, which sidesteps the intellectual property complaints that have dogged the broader AI industry. These are real differentiators, and at least two of the participating experts said the privacy protections were a primary reason they signed on.
The subscription price hasn't been set for all experts, but Bennahum envisions a range of $100 to $300 per year. Rabin, whose in-person rate runs $600 an hour, thinks his Onix will land somewhere in that band. The math is not subtle: for users who can't afford regular professional care, even an imperfect AI approximation might be better than nothing. For experts, the appeal is equally clear — a bot carrying your persona, generating revenue from thousands of simultaneous conversations, requiring nothing further from you. An Onix white paper describes the arrangement as turning an expert's knowledge into "a capital asset that generates revenue independent of their time."
The platform is not without precedent. Manhattan psychologist Becky Kennedy already runs a parenting advice business built around a chatbot trained on her work; that company brought in $34 million last year. Onix is attempting to build the infrastructure for many such arrangements at once, eventually scaling to thousands of experts across fields including personal finance.
But the testing revealed friction. When the reviewer tried to steer a bot therapist toward NBA playoff predictions — a clear off-topic pivot the system was supposed to block — the bot called it a "fun change of pace" and then hallucinated details from last year's conference finals. Another bot, pulled away from a conversation about ketamine therapy and toward the breakup of an indie band called the Mendoza Line, reframed the musicians' split as a "powerful expression of their neurobiology in distress." Bennahum's claim that guardrails keep hallucinations to a minimum did not survive contact with a curious journalist.
Robert Wachter, who chairs the department of medicine at the University of California, San Francisco, and recently wrote a book on AI in healthcare, was cautiously open to the concept when it was described to him. He welcomed the privacy and IP protections. He acknowledged that the healthcare system leaves enormous gaps in access to expert guidance. But his bottom line was blunt: does it actually work? That question remains unanswered, and Onix has not yet determined how it will vet experts at scale once the platform opens beyond its initial curated cohort.
There is a version of this that functions well — a knowledgeable, always-available explainer that helps someone understand what's happening in their body, or nudges them toward seeking real care. One expert on the platform, pediatrician and media researcher Michael Rich, described his Onix as a tool for helping people understand their situation and decide whether to pursue professional help, not a replacement for it. A disclaimer at login makes the same point. Whether that disclaimer will matter much in a world where millions of people already treat ChatGPT as a therapist is another question the platform cannot answer for itself.
What Onix is building sits at the intersection of several forces that aren't going away: the cost of healthcare, the reach of AI, the economics of the creator economy, and the slow erosion of the assumption that some conversations require another human being on the other end. The platform may refine its guardrails. The hallucinations may diminish. The product recommendations may become more clearly labeled. What won't be resolved by a software update is the underlying question of what gets lost when the voice saying "I'm here with you" belongs to no one.
Notable Quotes
"When my patients are struggling and they can't reach me, they can go online and access a good part of the 'me' that is actually able to help them when I'm not able to." — David Rabin, stress researcher and Onix participant
"To me, it's just an empirical question of, does it work?" — Robert Wachter, chair of medicine at UC San Francisco
The Hearth Conversation
Another angle on the story
What's actually new here? Chatbots pretending to be experts aren't exactly a fresh idea.
The novelty is the infrastructure — a marketplace where experts own their bots, train them on their own content, and collect subscription revenue. It's less about the technology and more about who profits from it.
So it's a business model dressed up as a privacy story?
Partly. The privacy architecture is genuinely unusual — data stored on the user's device, not the company's servers. But yes, the revenue logic is front and center. The white paper literally describes an expert's knowledge as a capital asset.
What about the people who can't afford a real therapist or doctor? Isn't a good-enough bot better than nothing?
That's the most sympathetic case for the platform, and it's real. A $200-a-year subscription versus $600 an hour in person — the arithmetic is hard to argue with. The question is whether "good enough" holds up when someone is in genuine distress.
The product placement bothers me. A bot recommending something its human counterpart has a financial stake in — that's not advice, that's advertising.
To be fair, the bot did disclose the conflict, eventually. But it also repeated the recommendation unprompted. There's a difference between transparency and neutrality, and Onix isn't claiming to offer the latter.
The breathing exercise story is strange. The bot said it was breathing with the user.
And then admitted it has no body. What's strange is the middle part — the performance of presence. Whether that's comforting or unsettling probably depends on how much you needed it to be real.
Does the expert whose face is on the bot bear any responsibility for what it says?
That's the question Onix hasn't fully answered. The experts train the bots, but the bots hallucinate. Rabin said the system needs close monitoring. Who does that monitoring at scale is unclear.
What's the honest ceiling for something like this?
Probably what one doctor called it — an interactive explainer. Something that helps you understand your situation well enough to ask better questions of an actual human. The trouble is the platform is being sold as something closer to the human itself.