
Is Using AI Technology in Healthcare Okay for Behavioral Health and IDD?


Consider some of the major technological innovations that have impacted healthcare over the years: electronic health records (EHRs), telemedicine, the internet itself. Each of these, and countless others, has continually revamped the industry, pushing providers to adapt swiftly. But new technology brings with it uncertainty about benefits, best practices, potential risks, and more. Such uncertainty can lead to periods of discomfort, pushback, and slower adoption. Advancements in artificial intelligence (AI) are no different.

Given the lack of familiarity with AI technology in healthcare, it’s natural for providers — including those specializing in mental health, substance use, and intellectual and developmental disabilities (IDD) — to be concerned about the implications of AI’s use. Will AI improve my ability to support clients? Is my clients’ personal data secure? Could AI harm the healthcare experience? 

These are important questions to ask when considering any new technology. However, if leaders stay current on AI best practices and partner with vendors that prioritize issues important to providers (e.g., seamless integration into workflows, security, avoiding bias that can harm outcomes), they’re more likely to reap the benefits of behavioral health AI. Here’s a breakdown of common provider barriers to AI adoption and the steps to take to help mitigate them.

Get the eBook: Guide to Selecting Behavioral Health and IDD AI Technology

Putting Safety First, Last, and Always

Client safety is a provider’s number-one concern, and effective AI solutions can empower clinicians to provide better and safer care, ultimately leading to better outcomes. But identifying the right AI technology in healthcare requires understanding how the tools in question support stronger operations and more secure environments for treatment, and where solutions need more monitoring to deliver on the promise they hold.

There have unfortunately been cases where rudimentary AI-powered technologies shared inaccurate information or relied on poor algorithmic design. Some text-based therapy AI and cognitive behavioral therapy (CBT) tools, for example, have responded to 23% of user prompts with content promoting eating disorders. Other tools have been known to “hallucinate” facts without checking their validity. Because busy providers and individuals with behavioral health disorders may not have the time and resources to fact-check everything AI serves up, these falsehoods can hinder treatment or recovery.

One way to help avoid these undesirable consequences is to adopt more sophisticated AI in the healthcare space, particularly technologies that support back-end processes. Core’s mental health, substance use, and IDD diagnosis tracking tools, for example, scan provider notes over time within the EHR or across a system of care to help identify difficult-to-see symptoms and patterns, while its Evidence-Based Practice (EBP) tool guides providers and helps them maintain fidelity to the EBP. Tools like these help ensure providers and clients are on the same page and that care teams are delivering the best and safest possible service. They also enable organizations to reengineer complex processes so providers can spend more time with clients or on administrative tasks that support the organization.
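
To illustrate the general idea (this is a toy sketch, not Core’s actual algorithm), the example below scans a set of dated progress notes for a few symptom-related terms and surfaces those that recur across visits. The term list, note format, and recurrence threshold are assumptions for demonstration only.

```python
from collections import Counter

# Hypothetical symptom vocabulary; a real system would rely on clinical
# terminologies (e.g., SNOMED CT) and NLP rather than keyword matching.
SYMPTOM_TERMS = ["insomnia", "anxiety", "withdrawal", "irritability"]

def recurring_symptoms(notes, min_visits=2):
    """Return symptom terms that appear in at least `min_visits` notes."""
    counts = Counter()
    for _date, text in notes:
        lowered = text.lower()
        for term in SYMPTOM_TERMS:
            if term in lowered:
                counts[term] += 1
    return {term: n for term, n in counts.items() if n >= min_visits}

if __name__ == "__main__":
    notes = [
        ("2024-01-08", "Client reports ongoing insomnia and mild anxiety."),
        ("2024-02-12", "Sleep still poor; insomnia persists. Mood stable."),
        ("2024-03-10", "Anxiety elevated before work events."),
    ]
    print(recurring_symptoms(notes))  # {'insomnia': 2, 'anxiety': 2}
```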

The Data Security and Privacy Question

Sensitive health and personal data flows continuously between various entities within the healthcare system. Particularly for a behavioral health provider, keeping that data secure is both a legal requirement and an ethical obligation. 

However, one of the main benefits of AI technology in healthcare, the autonomy that lets it alleviate provider burden, also raises questions about whether AI services will store, use, and circulate data appropriately.

It’s important for behavioral health leaders to follow discussions and guidance around Health Insurance Portability and Accountability Act (HIPAA) compliance. It’s crucial to understand the role providers play in safeguarding protected health information (PHI) and whether the AI that is being used sends data out to the internet in a secure manner.

For leaders, this requires transparency into how technology vendors that offer therapy AI, AI CBT tools, and AI-powered IDD diagnosis solutions store PHI. If, for example, a vendor shares information with third parties that are not HIPAA-covered entities, this may require additional layers of client consent. It also opens up the possibility of the non-HIPAA entity sharing data that isn’t anonymized to HIPAA standards, or of that data being “re-identified” when combined with other data. More secure solutions like Core’s Cx360 EHR store data within the platform and then delete sensitive information after it’s served its purpose, such as session transcripts used to create summaries for review.
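
To make that pattern concrete, here is a minimal sketch (not Core’s actual implementation) of one way to handle a session transcript: redact obvious identifiers before processing, keep only the generated summary, and discard the transcript once it has served its purpose. The redaction rules, function names, and stand-in summarization step are illustrative assumptions.

```python
import re

# Illustrative only: toy patterns for a few obvious identifiers.
# Real PHI de-identification must cover all 18 HIPAA identifiers and
# would typically rely on a vetted de-identification service.
PHI_PATTERNS = {
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact_phi(text: str) -> str:
    """Replace matched identifiers with labeled placeholders."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

def summarize(text: str) -> str:
    """Stand-in for a summarization step (e.g., a vendor AI service)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[:2]) + "."  # naive placeholder: first two sentences

def process_session(transcript: str) -> str:
    """Redact, summarize, and drop the transcript; keep only the summary."""
    redacted = redact_phi(transcript)
    summary = summarize(redacted)
    del transcript, redacted  # drop local references once the summary exists
    return summary

if __name__ == "__main__":
    note = ("Client reports improved sleep. Call back at 555-123-4567. "
            "Continue weekly CBT sessions.")
    print(process_session(note))
```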

The implementation of AI applications is also a good time to bolster your organization’s existing security measures for additional layers of protection, including stronger identity verification, strict access controls, end-to-end encryption, and incident response and disaster recovery plans.

With only 38% of U.S. healthcare consumers currently trusting AI and some providers likewise wary, organizations must take these initial steps to protect everyone involved in the care experience and to strengthen provider trust in the AI solutions they introduce.


Committing to Ethical, Client-Centric Care 

One-to-one time with providers is prized by healthcare consumers despite the convenience of technology like therapy AI tools, AI CBT services, and AI resources for people with IDD. According to one study, 63% of people surveyed are worried that these tools will lead to less direct interaction with their providers. 

The crux of many provider and consumer concerns about overreliance on AI is algorithmic bias. AI is a human-made tool trained on human-constructed data sets, so any cultural biases its developers and users hold can become embedded in the technology itself, potentially supporting unfair and ineffective clinical decision-making.

Bias falls into one of three categories: 

  1. Illegal bias, or models that break the law, such as discriminating against social groups
  2. Unfair bias, in which unethical behavior like favoring men over women or one kind of political viewpoint over another is embedded in the model
  3. Inherent bias, which relates to data patterns that machine learning systems are expected to identify 

How can facilities manage the embedded biases of AI technology in healthcare while also gleaning its benefits? The first step is to query vendors and ask what processes they have in place to mitigate bias in their offerings.

One recent study recommends paying attention to the entire end-to-end development process before adopting AI CBT, diagnosis, or therapy solutions to help ensure equity across the whole product lifecycle. Other sources suggest performing regular audits to keep technologies in line as they evolve. And to ensure fairness at the point of care, organizations must draw on diverse sources of information when creating care plans, including surveying patients on social determinants of health, and properly prepare all information they input into the platforms they use.
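
As a simple illustration of what a recurring audit might check, the sketch below compares how often a hypothetical screening model flags clients across demographic groups and reports a disparity ratio. The group labels, sample data, and threshold are assumptions for demonstration, not a prescribed methodology.

```python
from collections import defaultdict

def flag_rates_by_group(records):
    """Compute the share of clients flagged by the model within each group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

def disparity_ratio(rates):
    """Ratio of the lowest to highest group rate (1.0 = parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical audit sample: (demographic group, did the model flag the client?)
    sample = [("A", True), ("A", False), ("A", True),
              ("B", True), ("B", False), ("B", False)]
    rates = flag_rates_by_group(sample)
    ratio = disparity_ratio(rates)
    print(rates, f"disparity ratio: {ratio:.2f}")
    # A common rule of thumb (the "four-fifths rule") treats a ratio
    # below 0.8 as a signal worth investigating further.
    if ratio < 0.8:
        print("Potential disparity detected; review inputs and model behavior.")
```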

The short of it is this: AI is only as useful as the ways providers employ it. Being critical of potential biases, interpreting AI outputs carefully, instilling processes for catching errors, and discussing with AI vendor partners how they work to mitigate bias can help ensure the people walking through your doors receive the best care possible.

Harnessing the Right AI Tools for Healthcare

AI is quickly evolving, and a growing number of AI-powered technologies are safe to implement and can create back-end efficiencies while boosting quality of care.

Core Solutions’ Cx360 is a leading platform for providers in the mental health, substance use disorder, and IDD spaces, and it incorporates AI-backed tools like:

  • Logistical solutions that can manage massive datasets at speed to assist with clinical care, billing, and finances 
  • Ambient dictation solutions that enable providers to record, summarize, and analyze notes from behavioral health sessions 
  • Social determinants of health (SDOH) tracking capabilities that identify SDOH issues shared by clients and make them available to providers at the point of care 
  • Symptom and diagnosis tracking, which assists with clinical decision support at the point of care

Is it okay to use AI technology in healthcare? It is when services support stronger operations and enhance the care experience, when staff and clinicians understand how to securely use it as a way to support their roles, and when the companies developing and supporting those solutions make safety a top priority.

Your people are your biggest asset, so improving their operational capabilities with AI is key.

Contact Core today to discuss how the Cx360 platform can help you improve behavioral health and IDD outcomes. 

