CASE STUDY
AI + Live Agent
Designed a hybrid conversational system that balanced automation with empathy, reshaping how healthcare providers access critical payment data. Proved that AI can amplify the human experience when designed with understanding, not just intelligence.
Bridging automation and empathy in healthcare communication
This project started with a human problem, not a technical one. Healthcare providers had to call support just to access Explanations of Payment (EOPs), and the volume was overwhelming the team. I led the design of an AI + Live Agent hybrid system that paired automation with empathy. We co-created the conversation architecture with agents, validated it with providers, and refined it until it felt as intuitive as talking to a trusted colleague.
Outcomes
- 90% reduction in daily support calls
- 40% faster handling time
- 18% increase in sales from improved self-service adoption
What we focused on
- Mapped the top provider support journeys and identified high-friction points in EOP retrieval.
- Designed a conversational flow blending AI automation for common tasks with seamless handoffs to live agents for edge cases (sketched in the first example after this list).
- Developed natural-language prototypes tested with real users to ensure tone, trust, and clarity.
- Partnered with engineering to embed learning loops, improving AI accuracy and intent recognition over time (see the second sketch after this list).
- Created a visual monitoring tool for agents to track context and continue conversations with zero loss of continuity.
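The heart of the flow was a simple routing rule: the AI handles a turn only when the intent is whitelisted as automatable and the classifier is confident; everything else escalates to a person with full context. Below is a minimal sketch of that rule, assuming a confidence-scored intent classifier. Every name, intent, and threshold here is hypothetical and stands in for the production system:

```python
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85          # below this, the AI defers to a person
AUTOMATABLE = {"eop_lookup", "payment_status", "resend_document"}

@dataclass
class Turn:
    speaker: str                     # "provider", "ai", or "agent"
    text: str

@dataclass
class Conversation:
    provider_id: str
    turns: list[Turn] = field(default_factory=list)

def route(intent: str, confidence: float) -> str:
    """Decide whether the AI answers this turn or a live agent does."""
    if intent in AUTOMATABLE and confidence >= CONFIDENCE_THRESHOLD:
        return "ai"
    return "agent"                   # edge case or low confidence: hand off

def handoff_payload(convo: Conversation) -> dict:
    """Everything an agent needs to pick up the thread: the full transcript
    travels with the escalation, so the provider never repeats themselves."""
    return {
        "provider_id": convo.provider_id,
        "transcript": [(t.speaker, t.text) for t in convo.turns],
    }
```

Carrying the transcript inside the handoff payload is what turns "zero loss of continuity" from an agent-memory problem into a property of the data model.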
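The learning loop worked, at a high level, by treating agent corrections as labeled training data: whenever a live agent overrode the AI's predicted intent, that pair became an example for the next training run. A hedged sketch of the idea, with an invented schema and file format that are not the real pipeline:

```python
import json
from datetime import datetime, timezone

def log_correction(utterance: str, predicted_intent: str, agent_intent: str,
                   path: str = "corrections.jsonl") -> None:
    """Append an agent's correction as a labeled example (hypothetical schema)."""
    record = {
        "utterance": utterance,          # what the provider actually said
        "predicted": predicted_intent,   # what the model thought it meant
        "label": agent_intent,           # what the agent says it meant
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```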
What helped
- Co-design sessions that kept the human voice at the center of every AI decision.
- Cross-functional commitment between Product, Engineering, and Customer Support to blend machine capability with emotional intelligence.
- Rapid iteration cycles and in-the-field testing with real providers.
What I learned
This project reminded me that conversational AI isn’t only about flows and intent matching; it’s about designing a system where human judgment, machine efficiency, and business outcomes work in sync. I learned how critical it is to strategically map the decision nodes where AI can confidently automate and where human-in-the-loop support is required to validate accuracy, handle edge cases, and protect user trust. Building an effective escalation model wasn’t just a UX problem; it was a system design and data design challenge, tied directly to reducing support costs and improving resolution speed.
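One way to make those decision nodes concrete is to express the escalation model as data rather than hard-coded branches. This is an illustrative sketch only; every intent, threshold, and flag below is invented to show the shape of the idea, not the production policy:

```python
# Hypothetical escalation policy: each decision node pairs an intent with the
# minimum model confidence required to automate and a human-review flag.
ESCALATION_POLICY = {
    # intent:              (min_confidence, human_review_required)
    "eop_lookup":          (0.85, False),  # routine, low risk: automate
    "payment_status":      (0.85, False),
    "payment_discrepancy": (0.95, True),   # AI drafts, a human validates
    "claim_dispute":       (0.00, True),   # always routed to a person
}

def requires_human(intent: str, confidence: float) -> bool:
    """Automate only when the policy allows it AND the model is confident;
    unknown intents always escalate to a person."""
    min_conf, human_required = ESCALATION_POLICY.get(intent, (0.0, True))
    return human_required or confidence < min_conf
```

Keeping the policy in a table like this is one way to let UX, compliance, and engineering review the same artifact, which is the sense in which escalation is a data design challenge as much as a UX one.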
Training AI responsibly was another eye-opener. I saw firsthand how small UX decisions affected model performance, error handling, and even the reinforcement of bias. I gained a new appreciation for the role UX plays in shaping training data, conversational tone, compliance, and ethical guardrails. Ultimately, this experience reinforced a key principle: AI-driven experiences shouldn’t aim to replace humans, but to scale human expertise responsibly, intelligently, and with measurable impact.


