Every week, more than 230 million people turn to OpenAI’s service for health guidance. It is one of many platforms now competing for attention, having already been overtaken in usage by Anthropic’s Claude. Big Tech platforms continue to roll out AI health assistants at scale, with Amazon’s new agent the latest to be integrated into apps and devices used by millions.
Healthcare is evolving from episodic interactions to continuous, conversational support. As AI moves into care delivery, design will determine whether it improves or complicates the experience of care. The most effective systems will not always be the most technically advanced. They will be the ones people can use easily, where outputs are clear and actionable, and care pathways remain coherent.
AI is already strengthening healthcare capacity. In diagnostic imaging, AI-supported breast screening has demonstrated improved detection rates and reduced false positives in large-scale studies. These gains could dramatically improve early detection in settings where radiologists are overstretched.
In care delivery, organisations such as UK home care provider Cera have been credited with saving the NHS an estimated £1.5 million per day by halving hospital admissions. In practice, these systems are already acting as capacity multipliers: they help healthcare systems do more with the expertise they already have, extending reach without diluting clinical quality.
Design for safe, usable AI in healthcare
As more people turn to conversational AI with their healthcare concerns, real-world use is exposing areas for improvement. Algorithmic summaries have, in some cases, produced misleading or potentially unsafe medical advice, prompting corrections and removals.
AI systems can produce answers, but they are still not reliably good at signalling how confident or uncertain those answers are in a way clinicians can safely act on. As these systems are used as a decision-making layer at scale, the way uncertainty is surfaced to users is critical. It determines whether AI outputs translate into safe clinical action, or whether they introduce an avoidable risk.
Design becomes the mechanism through which AI behaviour is made safe, legible, and usable inside healthcare environments. It defines the boundaries of what AI can do, how it collaborates with clinicians, and how patients experience care. For AI health assistants to be safe and useful at scale, these systems must be designed with the same rigour as other parts of healthcare infrastructure. Four principles can consistently shape safer outcomes:
- Make sources and confidence of information visible. Every claim should be verifiable and understandable to both clinicians and patients. This helps mitigate AI hallucination, where systems generate plausible-sounding content, including sources or references that don’t actually exist. A minimal sketch of one way to structure this follows the list.
- Add pauses where decisions carry risk. Traditional healthcare embeds pauses, second opinions, and diagnostic safeguards; conversational systems strip many of them away. It is our job as designers to restore those pauses and preserve the safety built into traditional care.
- Treat safety as core infrastructure. Use high-quality clinical data, monitor performance continuously, and keep humans in the loop. Healthcare AI should be run like a safety-critical service.
- Test AI in the real world and publish the results. The NHS, for example, is uniquely positioned to host large-scale pilots that evaluate clinical outcomes, escalation pathways, and patient experience. Public trust will grow with transparent reporting.
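To make the first two principles more concrete, the sketch below shows one way an assistant’s output could be structured so that every claim carries a verifiable source and a confidence score, and so that high-risk or low-confidence answers trigger a pause rather than a direct reply. It is a minimal illustration under assumed names and values: the classes, the risk labels, the placeholder source URL, and the 0.8 confidence threshold are all assumptions for the example, not a description of any existing product.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    source_url: str      # where a clinician or patient can verify the claim
    confidence: float    # 0.0-1.0, the system's own estimate

@dataclass
class AssistantResponse:
    claims: list[Claim] = field(default_factory=list)
    topic_risk: str = "low"  # e.g. "high" for dosing, triage, or self-harm topics

    def requires_clinician_review(self, min_confidence: float = 0.8) -> bool:
        """Pause instead of answering directly when the topic is high risk
        or any claim falls below the confidence threshold."""
        low_confidence = any(c.confidence < min_confidence for c in self.claims)
        return self.topic_risk == "high" or low_confidence

# Illustrative use: a dosing question with a middling-confidence claim
response = AssistantResponse(
    claims=[Claim(
        text="General guidance on adult dosing limits for this medicine.",
        source_url="https://example.org/placeholder-clinical-guideline",
        confidence=0.72,
    )],
    topic_risk="high",
)

if response.requires_clinician_review():
    print("Pausing: this question is routed to a clinician before a reply is sent.")
```

The specific threshold matters less than the fact that the escalation rule is explicit, inspectable, and testable, which is what makes it possible to evaluate in the kind of pilots described above.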
Systems to augment clinicians, safely and effectively
A World Economic Forum analysis reports strong optimism amongst clinicians that AI can improve outcomes and workflows, creating fertile ground for co-design. The opportunity now is to translate this momentum into systems that augment clinicians safely and effectively.
Conversational AI is becoming part of the fabric of healthcare, and design is now shaping how it behaves. At scale, that design layer determines whether AI improves care or introduces unnecessary complexity.
What matters now is how these systems are deployed in real clinical settings. This requires structured evaluation in live environments, alongside clearer expectations for how uncertainty, escalation, and source transparency are handled. Design teams also need earlier involvement in clinical and product decisions, where many of the safety and usability trade-offs are first made.
Making safety visible and measurable in everyday use is what turns raw capability into a reliable clinical tool. Systems that embed these practices early are the ones most likely to deliver consistent value in clinical settings, and with these elements in place, conversational AI has a clear path to becoming a dependable part of healthcare delivery.
About the author

As executive director of AI, Nayan Jain is leading ustwo’s AI innovation strategy. A tech pioneer and entrepreneur, Jain co-founded health tech start-ups Heartbeat Health and Leo Health, and was founding engineer at Rally Health. He was a Presidential Innovation Fellow at the White House under President Obama and has received other awards in the health tech space, including first place at the 2012 Health 2.0 World Developer Cup in San Francisco for creating a Twitter bot for emerging health.
