
Supporting the Incorporation of AI Into Clinical Practice


The Federation of State Medical Boards (FSMB) recently released a pivotal report on the responsible and ethical incorporation of artificial intelligence (AI) into clinical practice. I read the report and evaluated the best practices it shares with a critical eye. I believe this is the first time I have seen state medical boards acknowledge that providers might be compelled to use AI tools in the future.

I do not find this alarming. As technology progresses, new tools become available that providers should begin to use, and there typically comes a tipping point at which those tools and the services they enable become mainstream, standard practice. Once use of such tools is considered regular practice, providers become hard-pressed to explain why they are not using tools that are now widely available and accessible.

I view the emergence of AI much as I view the emergence and implementation of electronic health records (EHRs) roughly 15 years ago, when the HITECH Act was enacted. At that time, most providers did not have an EHR or the funding to purchase one. Within a few years of HITECH's passage in 2009, EHR adoption had soared, but not everyone had embraced these solutions, and we began to see lawsuits against providers for not using the technology available to them. The idea that a provider can get away with using less sophisticated means to evaluate, document, or test for certain diseases becomes unacceptable once the majority of providers are using certain systems, platforms, or tests in an effort to provide the highest quality care.

I agree with this way of thinking. It's important for providers to stay on top of what is considered clinical best practice and to be able to modify their individual or group practices to use the tools available. This helps ensure that all clients and patients receive the same quality and sophistication of care as everyone else. It then becomes incumbent on providers to keep up with technology as it evolves and not let their practice fall behind, as doing so can contribute to a decline in the quality of care delivered to the clients and patients they serve.

I commend the FSMB for informing its constituents about the growing use of AI in clinical care and encouraging them to take certain actions if they are going to use AI and machine learning in their efforts to provide the best care. In its new policy, the FSMB recommends that physicians maintain transparency and disclose the use of AI in their practice. They should always inform patients of the tools being used, disclose any conflicts of interest, and inform patients of their right to refuse the use of AI.

Another key point the FSMB identified concerns education and understanding of AI. Physicians must build a new skill set that includes a grasp of how AI works and the benefits it offers. They do not necessarily need to know what happens “under the hood”; rather, they need to understand the fundamentals of how AI works and its potential risks.

The FSMB also recommends that providers use AI responsibly and remain accountable for its use. This recommendation is aimed squarely at the use of AI tools for clinical decision support and is intended to ensure that physicians receive a comprehensive education about these tools, including training on how to interpret AI recommendations. Equity and access are significant motivators for developing and using AI, so physicians also need to be aware of the implications for equity and access as well as for data privacy and security.

The FSMB also suggests that AI can assist medical boards in their own decision-making processes, such as monitoring physician performance more efficiently and identifying opportunities to proactively protect the people who rely on care delivered with the help of AI.

The FSMB also identified a need for continual review and adoption of laws and regulations around AI. It's vital that we all, and especially physicians and other practitioners, stay on top of the use of AI and its implications for healthcare. Healthcare has always been a rapidly changing environment, and the introduction and growth of AI ensure that this will continue.

In summary, I believe the FSMB’s recommendations are sound and provide a good path for providers to follow. The policy and its guidelines open the door to the continued use of AI with guardrails around the security, privacy, and appropriate use of these tools, which should help ensure equity and access across all patient and client populations.


Mike Lardieri, LCSW

Senior Vice President, Strategy | Core Solutions

Mike Lardieri is a licensed clinical social worker with a 30+ year history of leading behavioral health organizations. He has led some of the largest behavioral health organizations in the nation, including for-profit, not-for-profit, and managed care organizations. Mike is an expert in 42 CFR Part 2 and the implementation of artificial intelligence and machine learning in behavioral health settings. He is also a national leader in quality measurement and sits on the Battelle Partnership for Quality Management Committee. He has held senior leadership positions at the National Association of Community Health Centers (NACHC), Northwell Health, the largest health provider in New York State, and the National Council for Mental Wellbeing.
