    January 9, 2026

    Why Your Team Might Be Your Biggest AI Challenge

    Getting that second invitation from Adrian Lefler to join the Byte Sized podcast felt different this time. Our first conversation focused on external threats: hackers, breaches, and the criminals trying to get into your systems. But this second episode? We needed to talk about something much closer to home, something practice owners don't want to hear: the biggest threat to your new AI investment might be sitting at your front desk right now.

    After implementing security solutions in over 1,000 healthcare facilities during my 25 years in this industry, I thought I'd seen every possible way a technology rollout could fail. But the data Adrian and I discussed reveals something entirely new: a deliberate, calculated resistance that's happening in dental practices across the country.

    The research floored both of us. We're not talking about technical glitches or training issues. We're talking about active sabotage. The numbers tell a story that should concern every practice owner investing in AI initiatives. And when you break it down by generation, the results completely shatter our assumptions about who embraces new technology.

    The Three Faces of Sabotage I've Witnessed

    Through Black Talon Security's new AI division, which we launched after bringing on a developer with 10 years of AI experience, we're seeing three distinct patterns of sabotage emerge in dental practices. First, there's bad data injection, where employees intentionally feed incorrect information to AI systems. Second, we see silent resistance, where team members simply refuse to engage with the technology. Third, and perhaps most dangerous, is accidental sabotage through ignorance, where employees unknowingly compromise practices by entering PHI into public AI tools.

    During our conversation with Adrian, I captured the voice of the resistant employee perfectly: the one who's been doing this for 15 years and has seen technology come and go. This attitude transforms expensive AI investments into what I call "a very expensive coat hanger," impressive to look at but utterly useless.

    What struck Adrian most was when I explained why younger employees are particularly problematic. "They know how to navigate the technology," I told him. Unlike older generations who might dismiss AI as overhyped, Gen Z understands exactly what these tools can do. They've used ChatGPT. They've seen the capabilities. And they're smart enough to recognize the genuine threat to their job security.

    The Microsoft Revelation That Changed Everything

    Adrian brought up a massive executive forum hosted by Microsoft where thousands of business leaders reached a unanimous conclusion: successful AI implementation isn't top-down, it's bottom-up. This completely contradicts how most dental practices operate. The traditional approach, where the dentist announces a new technology and expects compliance, simply doesn't work anymore.


    My prescription for practices is clear and non-negotiable: Do not try to implement any type of AI technology without communicating clearly to your team what your goal is. Don't implement for the sake of implementing AI. This isn't just advice; it's a survival strategy for your investment.

    The solution we've found most effective is identifying an AI champion within the practice. Not the doctor forcing change, but a naturally curious team member who becomes your internal advocate. Give them responsibility, give them a budget, let them explore tools like ChatGPT or Claude. Let them bring solutions up to you rather than forcing solutions down on them.


    The Legal Minefield Nobody Sees Coming

    A major DSO is currently facing a class action lawsuit over something as simple as implementing AI to handle phone calls. The basis? Federal wiretapping laws. Before implementing any AI technology, I insist on two requirements that could save you from similar litigation.

    First, request a recent third-party risk assessment. It has to be recent, showing where your data goes and whether it's being used to train AI models. Second, demand complete GRC documentation: Governance, Risk, and Compliance proof that the vendor has considered the legal requirements. "Do not assume that a company, even if they're well-known, has taken all of that into consideration," I warned Adrian.

    Why Communication Beats Technology Every Time

    The most powerful moment in our conversation came when Adrian pointed out that dental practices miss about 30% of their phone calls anyway. How can front desk staff worry about AI taking their jobs when they literally can't handle the current workload? This reality check creates the perfect communication opportunity.

    I emphasized to Adrian that team members need to hear they're valued and that the technology is there to support them, not replace them. They work hard from the moment they step into the practice until they leave, and they need to know they're not at risk; we're just trying to take some pressure off and streamline processes.

    Being back on Byte Sized reminded me why these conversations matter. I'm grateful for the platform to share these critical insights, especially as Black Talon Security's AI division helps practices safely navigate this transformation.

    If you're implementing AI in your practice, listen to the full episode. Adrian and I go even deeper into implementation strategies, training solutions, and the education gap nobody's filling. Your technology investment depends on getting this right.

    Connect with Black Talon Security at blacktalonsecurity.com to ensure your AI implementation is both secure and successful.
