
The AI Hype is Loud - But Healthcare Moves at the Speed of Trust

  • Writer: Angie Lamb
  • Jan 2
  • 4 min read

In the past few months, I've come across an increasing number of social media posts and advertisements claiming that AI will "replace" administrative staff, fully automate patient communication, or render the "old way" of running healthcare practices obsolete within the next 12-18 months. Many of these fear-based narratives compare healthcare to Blockbuster, taxis, or the music industry - as if regulated, relational clinical work moves at the same pace as Silicon Valley.



But mental health and wellness care are not tech companies. They are human services rooted in safety, ethics, and connection.


And while AI is changing the landscape, I don't believe it's doing so in the binary, all-or-nothing way that much of the online noise suggests.


The Quiet Truth: AI Doesn’t Replace Human Care


In counselling, psychology, acupuncture, massage therapy, and other health-support professions, the work depends on more nuance than an algorithm can fully support. In my experience, there are critical aspects of care that AI often overlooks, including:


Systems Development

AI can certainly outline workflows or patient-journey steps that look polished, but it doesn't always reflect the real rhythms of clinical work. It can offer a helpful starting point; however, systems become truly sustainable through collaborative (human) refinement - weaving together operational clarity with the practitioner's individual approach.


Messaging

AI can draft emails, forms, and marketing copy that look supportive on the surface, but it often overlooks the subtle cues that make healthcare communication feel safe and appropriately paced.


In my work, I focus on refining onboarding messages, intake forms, and marketing language so they don't unintentionally create pressure or activation - particularly important when clients are already navigating tender physical or mental health concerns. The goal is communication that feels calm and clear - aligned with the care you offer.


Evidence-Informed Practices

AI often draws on broad health claims and may rely on outdated or decontextualized information. This can lead to language that unintentionally overstates outcomes, oversimplifies or misrepresents research, or falls out of alignment with the standards practitioners are accountable to under their regulatory colleges. Left unaddressed, this kind of language can carry meaningful risk and significantly undermine the credibility of a practice.


Privacy and PHI Protection

AI-generated systems often overlook how patient information actually moves through a practice - where it’s collected, where it’s stored, and who ultimately has access to it. While more platforms are beginning to take privacy and compliance seriously, many commonly used tools still fall short. Even widely used tools like Gmail can raise compliance questions for some Canadian practitioners, particularly around data residency and access, as information may be stored or processed on U.S.-based servers unless specific safeguards are in place. These details matter, because privacy obligations don’t stop at intention - they extend to infrastructure, data residency, and the systems supporting day-to-day care.


Therapeutic Communication

I believe that the administrative team plays a foundational role in building the therapeutic alliance. As the first point of contact, administrative interactions and onboarding processes help establish trust, psychological safety, and a sense of being held - long before a client enters the therapy room. This includes not just what is communicated, but when and how: the timing of follow-ups, the pacing of reminders, and the ability to respond with discernment rather than automation. When timing and pacing are off, even well-intended or generic communication can contribute to client drop-off, increased no-shows, or early disengagement before care has a chance to take root. While AI can generate language that sounds friendly or supportive, it rarely understands the relational nuance, attunement, and timing required in therapeutic settings.


In my view, even though these may appear to be “just” administrative considerations, they are not optional elements of care - they are foundational. AI can’t ethically replace this relational work, but it can thoughtfully support the systems that hold and protect it.


Where Does AI in Healthcare Responsibly Belong?


Most of the real, sustainable benefits of AI are quiet ones:


  • Reducing repetitive administrative tasks

    Drafting documents, organizing internal templates, summarizing operational/administrative notes, or creating SOPs and checklists - freeing up time for decision-making and relational work.


  • Supporting efficient patient or client communication

    Drafting emails, follow-ups, and onboarding documents with AI outside a clinic's compliant email or record systems, never including any personal information, with human review to ensure tone, timing, and clinical appropriateness before sending through secure channels.


  • Helping you stay visible without burning out

    Supporting content planning, early drafts for newsletters or posts, and idea generation for education-based marketing, with intentional human refinement to ensure accuracy, ethics, and alignment with your values and regulatory obligations.


In my view, AI works best (at least at this stage) as a supportive assistant - a starting point that can be shaped to fit your brand, refined with human judgment, and adjusted to align ethically with your clinical messaging.


Despite the bold claims I’m seeing around AI in healthcare, I believe that when it’s adopted too quickly or without clear boundaries, it has the potential to create unnecessary risk and overwhelm. Used thoughtfully, however, AI can open space for you to focus on the most human parts of your work - the care held within the therapeutic relationship with your clients.



So What’s the Sustainable Path Forward?


I believe the practices that will thrive in the coming years won’t be the ones that adopt every new tool overnight. They’ll be the ones that integrate technology with clarity, boundaries, ethical alignment, and clinical responsibility. As the landscape continues to shift, this feels like a steadier place to stand.


If this post resonates, you might take a moment to reflect on where AI feels supportive in your work - and where clearer boundaries might help protect care.



Angie Lamb is the founder of Cedar Coast Collective, where she supports healthcare and mental-health practices with ethical, presence-centred systems and operations.


