As AI adoption accelerates across healthcare, organisations are moving beyond basic digitisation toward using AI for clinical workflows, analytics, operational efficiency, and decision-making. However, challenges around fragmented data, legacy systems, regulation, and responsible AI adoption continue to slow large-scale transformation across the sector.
Emids, a healthcare-focused technology and consulting company, works with payers, providers, and life sciences organisations on AI, data, cloud, and digital transformation initiatives.
In this interaction with CiOL, Sathiyan Kutty, Chief AI Officer (CAIO), Emids, discusses why many healthcare organisations still struggle to operationalise AI, how responsible AI in healthcare differs from other industries, and what enterprises need to successfully scale AI-driven transformation. He also shares insights from his experience across healthcare and intelligent systems, including previous roles at Tesla and Kaiser Permanente.
Interview Excerpts:
Across organisations, what is the one consistent mistake when trying to operationalise AI?
AI is not new; my first exposure was back in 2003 at one of the largest chip companies in the world. Even then, the fundamentals were the same: using intelligent systems to drive processes. What’s changed is the scale and the expectation.
The most consistent mistake I see today is that organisations cannot clearly define outcomes and use cases. They hear about AI but fail to contextualise it for their own business. The second challenge is data: AI requires large volumes of well-organised information, and many organisations either lack sufficient data or haven’t structured it effectively. The third, and arguably the biggest, is organisational change. Leadership teams tend to treat AI as an IT problem rather than a strategic priority. That disconnect between leadership, business objectives, and technology adoption is what holds most organisations back.
How is AI adoption different in healthcare compared to other sectors?
When I was at Tesla, AI was deeply embedded in vehicles through deep learning and machine learning. Healthcare began adopting similar technologies, though with a lag of roughly five to six years. In the pre-GPT era, AI in healthcare was used primarily in clinical settings such as diagnostics and research, where it had the most immediate impact. That’s what drew me to the sector.
In the post-GPT era, the shift has been significant. AI is now being embedded directly into workflows. Healthcare is highly fragmented and often subjective, which makes this particularly valuable. Adoption is increasingly happening on the administrative side, where automation can drive immediate efficiency gains. What would have taken years of traditional transformation is being accelerated by generative AI bridging existing technical debt.
Is regulation still preventing healthcare from scaling AI into production?
Healthcare in the US is actively adopting AI, but it has to take a methodical approach because it’s a highly regulated market, similar to financial services. The deterministic elements, such as traditional machine learning and deep learning, will continue. The newer generative AI components are still in a test-and-go phase in many organisations.
If there’s one industry that truly needs responsible AI, it’s healthcare. Anything that touches human life demands the highest level of governance and guardrails. Everyone wants to move fast; they just can’t, because the stakes are different here.
With all the focus on generative AI, are companies overlooking classical approaches like optimisation and operations research?
Customers are coming to us to operationalise AI across any part of their workflows, whether that’s increasing revenue, improving efficiency, or reducing cost. What’s changed is the speed: a traditional RPA automation discovery that used to take six months is now being done in about twelve weeks.
One interesting shift on the build-versus-buy question: many CTOs we speak to are realising they could build certain capabilities themselves, now that code creation has essentially been commoditised. But context is not. What we bring is the healthcare-specific governance, the responsible AI framework, and the understanding of what happens when something goes wrong. Healthcare is adopting AI at least 2.2% faster than most other industries at this point, and the demand is only growing.
What is your framework for balancing safety and compliance while still delivering outcomes?
Healthcare organisations already had governance and compliance in their DNA before generative AI arrived. When traditional AI came in, the frameworks were already there. I can give an example from my time at Kaiser; we’d build pricing models for specific markets and had to ensure no community was unfairly disadvantaged. Those responsible AI principles carried forward.
What changed with GenAI is that it’s a non-deterministic environment. You can test outputs today, and when the next model version arrives, the behaviour shifts. Model drift has become a significant challenge. Many healthcare leaders don’t yet know how to solve for it, and without proper guardrails, the consequences can be serious. That’s an area we’re actively advising on.
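To make the drift problem concrete, here is a minimal, hypothetical sketch of the kind of output-regression guardrail he describes; it is not Emids’ actual framework. The idea is to freeze a set of approved prompt/answer pairs, replay them against each candidate model version, and hold the rollout for human review if outputs diverge beyond a threshold. The golden cases, the lexical similarity stand-in, and the 0.8 threshold are all illustrative assumptions.

```python
# Minimal sketch of an output-regression guardrail for model upgrades.
# golden_cases, check_drift, and the 0.8 threshold are illustrative
# assumptions; a production system would use clinically validated
# evaluation sets and semantic rather than lexical comparison.
from difflib import SequenceMatcher

# Frozen prompt/answer pairs approved under the current model version.
golden_cases = [
    {"prompt": "Summarise the prior-authorisation criteria for an MRI.",
     "approved": "MRI requires documented failure of six weeks of conservative therapy."},
]

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity; stands in for a semantic comparison."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def check_drift(generate, threshold: float = 0.8) -> list[dict]:
    """Replay golden cases against a candidate model; collect divergences."""
    flagged = []
    for case in golden_cases:
        output = generate(case["prompt"])
        score = similarity(output, case["approved"])
        if score < threshold:
            flagged.append({"prompt": case["prompt"],
                            "score": round(score, 2),
                            "output": output})
    return flagged  # non-empty means: hold the rollout for human review

# Usage (new_model is hypothetical):
# issues = check_drift(lambda prompt: new_model.complete(prompt))
```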
What is the biggest misconception leadership teams have about what AI can realistically deliver?
The biggest misconception is that AI will solve every problem without requiring any internal change. Change management is consistently underestimated.
Leaders fall into different camps. Some demand outcomes without doing anything differently. Others in the middle understand that AI, like any technology, needs to be measured and managed. The ones at the forefront treat AI as a forcing function, a multiplier for the entire organisation. They know how to build the structures to support it.
The common blind spot at the early stage is assuming everything is possible without actually changing anything. Leaders ask, “I can do it in ChatGPT, why can’t we do it at scale?” The answer is that ChatGPT is a one-on-one relationship. When you introduce an organisational structure, there are cascading effects, and teams often don’t even have project managers who know how to run an AI project.
The Chief AI Officer role is still relatively new. How do you define success in it?
The Chief AI Officer role is fundamentally a business role, not a technical one. It’s transformational. Most organisations have a digital transformation officer with more than half their focus on the business side; the CAIO is the same. It should not sit in IT.
The job is to set the right AI adoption strategy, both internally and for customers, with clearly defined business outcomes, not pet projects. You also have to identify the pivot point where transformation becomes visible to the organisation. When you first adopt a tool like Excel, a 15% time saving doesn’t register. But when the entire organisation is running at 20% to 50% more capacity and people can actually feel the bandwidth, that’s when the next level of adoption kicks in. Chief AI Officers have to know when their organisation is approaching that inflection point.
Where are the biggest near-term AI opportunities in healthcare and life sciences?
On the payer side, prior authorisation and revenue cycle management are two areas with enormous potential. In the past, payers stitched processes together with robotic process automation. Now we’re seeing things become possible that weren’t before, particularly longitudinal cost analytics, which lets you look at member information across multiple providers and claims. That kind of analysis was never practical without agentic solutions because the underlying data is so complex and unstructured.
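As a rough illustration of what longitudinal cost analytics involves at its simplest, the sketch below rolls one member’s claims from multiple providers into a monthly cost timeline. The field names and sample records are invented; real claims data is far messier, and the normalisation that precedes this kind of rollup is where the agentic tooling he mentions would do the heavy lifting.

```python
# Illustrative rollup of one member's claims across providers into a monthly
# cost timeline. Field names and sample records are invented; real claims
# data is unstructured and would need heavy normalisation first.
from collections import defaultdict

claims = [
    {"member_id": "M001", "provider": "Clinic A",   "date": "2024-01-15", "cost": 220.0},
    {"member_id": "M001", "provider": "Hospital B", "date": "2024-03-02", "cost": 1840.0},
    {"member_id": "M001", "provider": "Clinic A",   "date": "2024-06-20", "cost": 310.0},
]

def cost_timeline(member_id: str, records: list[dict]) -> dict[str, float]:
    """Monthly spend for one member across every provider that billed them."""
    monthly = defaultdict(float)
    for claim in records:
        if claim["member_id"] == member_id:
            monthly[claim["date"][:7]] += claim["cost"]  # key by "YYYY-MM"
    return dict(sorted(monthly.items()))

print(cost_timeline("M001", claims))
# {'2024-01': 220.0, '2024-03': 1840.0, '2024-06': 310.0}
```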
On the provider side, AI started first in clinical settings, so adoption has traditionally been faster there. For payers, administration is the entry point, and they’re now connecting it back to predictive member analytics: identifying chronic conditions early and managing costs before they escalate. Diabetes is a good example. Intervene early, and the cost of managing that patient drops significantly. That benefits payers, members, and providers alike.
How do you see autonomous systems shaping healthcare in the coming years?
Today, AI is the buzzword, but the future is ambient: every product, every experience will have an AI component woven into it. Right now, we’re pointing at things and saying “that’s AI.” In a few years, you won’t notice it any more than you notice electricity.
The one thing I feel strongly about is that AI is going to bring healthcare organisations closer together than ever before. There’s been so much contextual information sitting in payer, provider, and member systems that couldn’t talk to one another. Agents are beginning to change that. If one field is wrong in a data exchange, we can now infer intention, parse it faster, contextualise it faster. That’s interoperability 2.0.
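A toy example of “inferring intention” when one field is wrong in an exchange: rather than rejecting the whole message, a forgiving fallback recovers the likely intent and flags the repair for audit. The field and formats below are invented for illustration, and the hand-written rules stand in for what would more plausibly be an LLM-backed agent in the scenario he describes.

```python
# Toy sketch of intent inference in a data exchange. A strict parser rejects
# a malformed service date; a fallback then recovers the likely intent and
# flags the repair so the receiving system can audit it. The formats below
# are illustrative assumptions, not a real interoperability standard.
from datetime import datetime

STRICT_FORMAT = "%Y-%m-%d"
FALLBACK_FORMATS = ["%m/%d/%Y", "%d-%b-%Y", "%Y%m%d"]

def parse_service_date(raw: str) -> tuple[str, bool]:
    """Return (normalised_date, was_repaired)."""
    try:
        # Happy path: the sender used the agreed format.
        return datetime.strptime(raw, STRICT_FORMAT).strftime(STRICT_FORMAT), False
    except ValueError:
        pass
    for fmt in FALLBACK_FORMATS:
        try:
            # Infer the sender's intent from a known-common variant.
            return datetime.strptime(raw, fmt).strftime(STRICT_FORMAT), True
        except ValueError:
            continue
    raise ValueError(f"Could not infer intended date from {raw!r}")

print(parse_service_date("03/02/2024"))  # ('2024-03-02', True): repaired
```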
The other issue that needs to change is data ownership. Patient data is currently being held hostage by large technology companies that want to remain monopolies. A patient should own their medical data and be able to share it with whoever they choose for their own care. That’s not the reality today, and it needs to be.
