TL;DR Summary
Clinical AI implementation must be substantive rather than additive to avoid overwhelming an already stressed healthcare system
AI training requires whole language processing and training on millions of clinical conversations to be accurate and effective
Healthcare infrastructure challenges (fragmented data, provider leakage, lack of interoperability) must be addressed for AI to deliver value
AI should focus on efficiency gains: automating documentation, scheduling, utilization management, and risk stratification
Workforce development is critical – AI should retool and upskill workers rather than simply replace them
Remote patient monitoring and rural healthcare access represent significant opportunities for AI-enabled improvements
Legal, compliance, and cybersecurity considerations are essential when implementing AI solutions
Shared risk arrangements and in-year value demonstration are now required for vendor partnerships
Clinician buy-in and involvement from frontline workers is mandatory for successful AI adoption
Public health concerns around AI infrastructure (water usage, electricity grid strain) need government oversight
Listen in on the Discussion
Cassandra: Good morning, Sheryl. We’re here to talk about the vast topic of Clinical AI. Out of the gate, I think we’re going to need a few more of these.
Sheryl: I agree!
AI Implementation Approach
Cassandra: Let’s get started. What are the different layers we should consider when discussing Clinical AI?
Sheryl: The critical distinction is between additive versus substantive AI solutions. Adding more work to a stressed system is ineffective. AI that requires physicians to re-correct inaccurate outputs creates frustration rather than value. We need whole language processing – similar to early cochlear implant learning – to capture accurate clinical data in real environments with ambient noise.
Cassandra: How should we think about training AI for clinical use?
Sheryl: Training AI for clinical documentation should be like getting someone through med school – learning body systems, physiology, anatomy first, then moving to context and use cases. We need millions of clinical conversations with the right prescriptions, conditions, and context because natural language processing doesn’t actually think – it relies on word frequency patterns.
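Sheryl’s point that natural language processing “relies on word frequency patterns” rather than actual thinking can be illustrated with a minimal sketch. The toy corpus and function below are hypothetical, not from any real clinical system; they just show how frequency alone drives next-word prediction.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for clinical conversations (hypothetical examples).
corpus = [
    "patient reports chest pain and shortness of breath",
    "patient reports chest tightness after exertion",
    "patient reports mild chest pain at rest",
]

# Count how often each word follows another (bigram frequencies).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def most_likely_next(word):
    """Predict the next word purely from observed frequency, with no understanding."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("chest"))  # frequency alone picks "pain"
```

With only three sentences, “pain” wins simply because it follows “chest” most often — which is why the conversation stresses training on millions of clinical conversations with the right context.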
Generational and Personalized Considerations
Cassandra: How do generational differences impact AI implementation?
Sheryl: We need age-appropriate learning models. Millennials and Gen Z who grew up with technology aren’t scared of computers and can code. Adult learning requires hearing information multiple times in multiple ways. AI outputs must be tailored to the user’s generation, health literacy, and ethnicity to be relevant and personalized.
Cassandra: How does this apply to patient-facing AI?
Sheryl: It works both ways – the listening piece for AI must account for how different people describe symptoms based on their healthcare experiences, cultural background, and trust in the system. Some people under-discuss symptoms or have experienced healthcare bias. The output must also be personalized to who the user is.
Healthcare Infrastructure Challenges
Cassandra: What are the major infrastructure barriers to AI effectiveness?
Sheryl: Healthcare has giant gaps in delivery system connectivity. A patient’s care journey involves multiple disconnected systems – PCP, physical therapy, radiology, surgery centers – with data stored in different places. HIEs haven’t solved this. Claims data has timing issues and things get lost. We need one care plan, one dashboard, one person – but we don’t have that.
Sheryl: How significant do you think the fragmentation problem is?
Cassandra: In ACOs taking financial risk, I’ve regularly seen specialty referral leakage of around 50-70%. Physicians can change IPA affiliations at any time for better contracts. Over-the-counter medications purchased at CVS or Walgreens don’t flow into charts. Rural communities have even more independent physician groups whose documentation never enters a centralized system.
AI Value Propositions
Sheryl: Where can AI deliver the most value in healthcare?
Cassandra: The biggest opportunity is efficiency, since 80% of provider costs are labor. We can automate clinical documentation, nursing assessments, care plans, appointment scheduling, and follow-ups – with humans editing before finalization. Risk stratification and predictive models can incorporate unstructured data from clinical notes to direct resources appropriately.
Sheryl: Utilization management, monthly reporting, and analytics that currently require large teams can be automated. Conversational AI has become sophisticated enough to match speech patterns and cadence, detect hesitation, and address it in real-time – far superior to old IVR systems. This can replace overseas call centers that frustrated patients.
Workforce Impact
Sheryl: How will AI affect healthcare jobs?
Cassandra: Healthcare jobs are the top economic driver in most regions. We haven’t solved health outcomes, so teams can be relieved of busy work to do more advanced work on whole-person care and intersectionality. The system will continue expecting more efficiency and waste reduction.
Sheryl: This is about retooling rather than replacing. We need specialized auditors and programmers. Home health aides can receive AI-supported prompts for better daily care. The workforce development gaps – home health aides, maternity support, complex homebound care – can be addressed by using AI to upskill workers with smart prompts that sequence care priorities.
Rural Healthcare Applications
Cassandra: What role can AI play in rural healthcare?
Sheryl: Rural areas have care deserts – people go from no care directly to the ICU with nothing in between. AI can fill that gap substantively. However, infrastructure is critical – you must have internet access, reliable power, and technology availability. This hasn’t been solved in 20 years of telehealth efforts.
Sheryl: What about specialist access in rural areas?
Cassandra: Computer-aided diagnostic support for radiology has existed for a long time, but there’s more appetite now to expand this to OB ultrasounds and other specialties. However, we haven’t expanded medical education slots, so we need technology to extend specialist reach – but only if the infrastructure supports it.
Vendor Management
Cassandra: How should organizations approach AI vendor relationships?
Sheryl: Hundreds of companies are offering AI solutions just to get your data and have you teach them. You don’t have to be part of every pilot. Some companies have 20 years of experience and should offer complete solutions. Require shared risk arrangements where everyone has upside and downside – there’s no margin for anything else.
Sheryl: What do you think about demonstrating value?
Cassandra: The old model of year one implementation, year two flat performance, year three savings is gone. Organizations expect in-year value now. Vendors should be at risk for any financial value promised. Use your own strategic vision and business cases to identify needs, then find tools that fit – don’t let vendors’ hammers search for nails.
Clinical Adoption
Sheryl: How do you ensure clinical adoption of AI tools?
Cassandra: You must have clinician partners involved from the start. Frontline workers who are doing the actual work must be pressure testers at the table. If you’re not solving their biggest problem, they won’t focus on helping you succeed. Understand what takes up their most time, what wastes money, and what patients complain about.
Sheryl: VPs and higher leadership often make changes without input from people doing the work. The person on the line must validate that the solution is substantive and actionable. It’s not just about buy-in but actual adoption – teaching them how to make it work in their daily workflow, not just having it exist esoterically.
Data Quality and Auditing
Cassandra: How do we ensure AI accuracy over time?
Sheryl: We need to understand how humans audit AI agents. Teams are building layers of IT infrastructure, but it requires a human to stop, pause, and evaluate accuracy. If someone inputs a wrong prompt and the system runs for three weeks generating bad data, how do you undo it? What are the warning signs that something is wrong? I want to understand the leading indicators that you’re off track.
Cassandra: What about garbage in, garbage out?
Sheryl: AI systems are only as good as the data we give them. They’re not independent thinkers. Physicians learn to look for gaps – breathing patterns, coloration, things that aren’t obvious checkboxes. You can’t train AI to pick up those nuances. AI must be substantive and accurate, but it cannot function without the human element.
Health Education and Shared Decision-Making
Sheryl: How can AI support patient health education?
Cassandra: Multiple apps already serve up health education relevant to patient conditions. AI can help people understand their conditions and translate symptoms to healthcare providers. There are opportunities in shared decision-making to clarify patient experiences so they communicate effectively with providers.
Sheryl: We need to teach people to be the CEO of their body – to notice when something doesn’t feel right and articulate what the gap is. You can’t just say “I don’t feel well.” You need systems analysis – what’s going on with your head, what changed two days ago. AI could teach this and help people hand off information to providers who know a different level of detail.
Remote Patient Monitoring
Sheryl: What’s the future of remote patient monitoring with AI?
Cassandra: Remote patient monitoring has shown limited value historically, with hesitancy from payers and providers due to cost and perceived lack of benefit. AI can reduce costs and connect monitoring practically to clinical workflows. It can advise when it’s time to call a patient or schedule an appointment, reducing physician office hesitancy about monitoring data.
Sheryl: There’s opportunity for AI to determine the right intervention level – does someone need a phone call, two phone calls, phone call plus mail? Sequencing care based on risk and need can make remote monitoring substantive rather than just generating more data.
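The intervention-sequencing idea Sheryl describes – phone call, two phone calls, phone call plus mail – can be sketched as a simple rules table. The thresholds, channel names, and scoring scale below are purely illustrative assumptions, not clinical guidance or any vendor’s actual logic.

```python
def outreach_plan(risk_score: float, days_since_contact: int) -> list:
    """Map a monitoring risk score to an escalating outreach sequence.

    risk_score is assumed to be in [0, 1]; thresholds and channel
    names are hypothetical placeholders for illustration only.
    """
    if risk_score >= 0.8:
        # High risk: get the patient on the schedule, then confirm by phone.
        return ["schedule_appointment", "phone_call"]
    if risk_score >= 0.5:
        plan = ["phone_call"]
        if days_since_contact > 14:
            plan.append("phone_call_followup")
        return plan
    if days_since_contact > 30:
        # Low risk but long out of touch: lightweight mail reminder.
        return ["mail_reminder"]
    return []  # low risk, recently contacted: no outreach needed

print(outreach_plan(0.9, 3))   # high risk escalates to an appointment
print(outreach_plan(0.6, 20))  # moderate risk gets two calls
```

Even a rules sketch like this shows the point: outreach is sequenced by risk and need, so monitoring data drives action instead of just accumulating.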
Implementation Best Practices
Cassandra: What advice would you give executives implementing AI?
Sheryl: Approach with cautious optimism. It’s wildly exciting but cannot be additive. Everyone is running as fast as they can – it must be substantive and change outcomes. Apply a critical lens to evaluate what works and what doesn’t. Look at unintended consequences, downstream and upstream impacts. Be honest about costs and benefits. Focus on retraining people and giving them tools they never had.
Cassandra: What governance considerations are essential?
Sheryl: Establish an AI center of excellence examining legal ramifications, data privacy, state regulations on generative AI, model hallucination prevention, and vendor compliance. Flow down all regulatory language to vendors. Increase cybersecurity insurance. Evaluate from financial, legal, clinical, operations, and IT perspectives. Don’t participate in every pilot – demand vendors come with proven solutions and shared risk arrangements.
Public Health Concerns
Sheryl: What are the public health implications of AI infrastructure?
Cassandra: Water usage for data centers is concerning, especially in rural communities with limited aquifers like North Central Texas. There’s insufficient government oversight of water protection. The electricity grid is outdated, and utility bills are rising. Some families in South Carolina unplug refrigerators during peak demand pricing hours. If we have affordability and electricity crises, why are residents on the same grid funding AI data centers?
Sheryl: This requires infrastructure conversations – desalination from oceans to get water to these places. These are big questions in a country with already stressed infrastructure. We need state, local, and federal oversight on pricing and restrictions while enabling innovation.
Future Timeline
Sheryl: How quickly is AI advancing?
Cassandra: Silicon Valley leaders expect artificial general intelligence – democratization of all knowledge – within one year. Artificial superintelligence, where AI improves itself automatically, is expected in six years. Whether true or not, these timelines move faster than the government can respond, which is concerning for necessary oversight and regulation.
Cassandra: I appreciate this intro conversation. Let’s think about follow-up topics we can discuss in this area, and perhaps others will send in questions.
If you have any topics you’d like us to discuss, leave them in the comments on this post!