Dear Reader,
Last week, after weeks of tinkering, I launched AI Lab Assistant, a Lindy workflow template for lab interpretation. I was convinced the value was obvious. Then the questions started rolling in:
- "What exactly is AI?"
- "How is this different from setting up email filters?"
- "Which AI models can we use with this?"
- "What's Lindy and why would I need it?"
I stared at these questions, feeling slightly deflated. I'd spent weeks deep in my AI rabbit hole, fine-tuning prompts and testing workflows, convinced I was creating something valuable. But clearly, I'd failed to convey what it actually does.
Then I realized: this disconnect was the valuable thing.
The Gap That's Actually an Opportunity
Your questions revealed something I hadn't seen: we're scattered across the entire spectrum of AI understanding. Some of you are debating Claude vs. o3 while others are asking how AI differs from automated email sorting. Some of you use ChatGPT daily but had no idea HIPAA-compliant platforms existed.
This isn't a knowledge gap to fix—it's showing us exactly where we are as a profession. And honestly? It excites me.
Because the practitioner asking "what exactly is AI?" is asking the most important question. They're not behind—they're being appropriately cautious about tools that will handle patient data. Meanwhile, the colleague comparing language models reminded me that the same lab values can yield completely different insights depending on which AI model we use. We need both perspectives desperately.
What Actually Matters
Here's what using AI for lab interpretation actually looks like in my practice: Last week, a patient came in with stubborn fatigue. Pre-AI, I'd be rushing between patients, trying to review their labs, cross-reference past results, and calculate ratios in the few minutes I had. I'd walk into the room, still processing the numbers, half my attention on what I'd just read.
Now? I have a clinically relevant summary ready before my day even starts. When I walk into that room, I'm fully present. I notice the way they say "tired" differently than last visit. I catch the mention of their teenager's college stress. I have space to explore why this fatigue feels heavier, what shifted in their life three months ago when the pattern changed.
The AI didn't replace my clinical reasoning. It gave me back the bandwidth to actually use it.
Not using AI to cram in more patients or generate more content, but to reclaim the space for actual healing work.
Bridging the Gap Together
One question in particular struck me: "What's Lindy, and why would I need it?"
It made me realize—just a year ago, we literally couldn't automate patient workflows safely. Not unless we were paying $3,000+ per month for enterprise software. Now? The same HIPAA-compliant capabilities are available for $50-100 a month. That shift happened so fast that most practitioners don't even know these tools exist, let alone how to evaluate them. Meanwhile, I'm over here assuming everyone knows about workflow automation.
What's fascinating is that this gap exists even for AI experts. The models are capable of far more than any of us currently understand. But the best—and safest—way to explore these capabilities is exactly what we're doing: learning in community, with good intentions, from wherever we stand.
That's why I'm hosting a free workshop: AI in Integrative Healthcare on August 1st at 2 PM ET.
We'll start with real foundations (yes, defining what AI actually is and isn't). We'll explore which tools work for which tasks, how to navigate HIPAA compliance, and prompting techniques that handle clinical nuance. Most importantly, we'll build from wherever you are, whether you're asking "what's AI?" or "why does Claude give different results than GPT-4?"
Because here's what I believe: the future of AI in healthcare won't be shaped by tech companies or early adopters alone. It'll be shaped by all of us, asking different questions from different vantage points, building something that actually serves healing.
Your questions aren't just helping me become a better teacher. They're helping our entire field figure out what ethical, effective AI integration actually looks like.
If you're interested, reply and I'll send details. And please—send me your questions. All of them. They're helping shape something better than what any of us could create alone.
With care,
Katy
P.S. To everyone who asked a question that made me uncomfortable with my assumptions—thank you. That discomfort is where the learning lives.