
Bias and Equity in AI for Mental Health, IDD, and Substance Use Disorders

Artificial intelligence (AI) is rapidly advancing and helping healthcare providers in nearly every specialty improve workflows and clinical care. Like any other technology, however, its effectiveness is determined in part by the human factors that influence its development and implementation. One concern in particular gives some healthcare professionals pause when they consider AI to support operations and clinical decision-making: how bias is eliminated or mitigated in AI algorithms.

Behavioral health providers have been working hard to untangle access to care and care delivery from inequities, so it’s especially critical that AI for mental health, intellectual or developmental disabilities (IDD), and substance use disorders is trained on diverse, representative data and doesn’t perpetuate existing disparities.

As new behavioral health AI technologies make their way into organizations and care workflows, providers need to understand the potential impact of AI on underserved populations, follow best practices for adopting AI that is fair and equitable, research the processes and procedures vendors use to identify and mitigate bias in the AI solutions they offer, and choose vendors who make reducing bias a top priority.

Get the eBook: Guide to Selecting Behavioral Health and IDD AI Technology

The Importance of Addressing Bias in AI for Behavioral Health

Although conversations about behavioral health conditions like depression, anxiety, and substance use disorder have become less taboo in recent years, receiving treatment for these disorders and for IDD often still carries stigma, particularly for marginalized populations, for whom medical research, diagnoses, and clinical practices have historically been influenced by cultural biases and prejudices. Despite providers’ best efforts to give equal treatment to all clients, these systemic issues create unequal conditions that have very real impacts on individuals’ access to care and well-being.

Behavioral health providers, who may already struggle with outreach to underserved communities, must work proactively to protect clients from unintended harm caused by AI bias. That bias is one more obstacle to acceptance of care, which is sorely needed in racially diverse populations in particular: data shows that Black people living with psychological disorders are incarcerated at higher rates than individuals of other races, and when clients of color with IDD experience poor healthcare treatment in geographical locations with high rates of ableism and racism, their overall quality of life significantly decreases.

AI’s Role in Equitable Healthcare

With its ability to automate processes, support clinical decision-making, and help clinicians reach faster, more precise diagnoses, AI for mental health, substance use disorder, and IDD holds real promise for supporting better outcomes across all populations. In some cases, AI might be key to eliminating inequitable treatment altogether, but providers must be aware of its potential shortcomings and what leading organizations are doing to address them.

Though AI operates autonomously, it is built on human-made data, which means human biases can easily be embedded directly into algorithm training, potentially leading to discrimination in client care, unfair treatment, or the exacerbation of existing biases. One study, for example, found that a natural language processing (NLP) psychiatric solution replicated implicit bias around gender, race, nationality, and other identifiers. Another found that biased AI recommendations influenced mental health providers’ likelihood of involving police in behavioral health emergencies involving Black and Muslim men.

While these challenges can drive apprehension around AI in behavioral healthcare and slow adoption by providers, numerous technology and healthcare leaders are coming together to guide organizations on their AI journey and help AI technology deliver on the potential it holds to support quality care across all demographics.

The Bioinfo4women (B4W) program of the Barcelona Supercomputing Center, for example, has listed a series of recommendations for overcoming AI biases, including guidelines for education, defining evaluation metrics, standardizing data collection, and more.

Other researchers and healthcare providers have advocated for a “fair-aware” approach to AI for mental health, which outlines a set of values and equity-focused methodologies that developers can integrate into AI counseling and clinical support tools.


Mitigating Bias to Realize the Benefits of AI in Healthcare

Behavioral healthcare leaders preparing to integrate AI into their organization’s operations can mitigate bias and reap the benefits of AI in healthcare not only by following guidelines shared across multiple industries, but also by assessing their readiness and needs and asking the right questions of vendors before making an investment.

Decision-makers can begin by examining how equity and implicit bias might play a role in their individual practices, regardless of the technologies currently employed. From there, they can consider how AI decision-making tools would function in their practice and how well those tools align with equity best practices.

Transparency is essential. When choosing a vendor, providers should take time to ask about the intended purpose behind an AI tool and how it’s used (particularly for AI counseling solutions), as well as the vendor’s approach to bias: how they identify it, whether they ensure consistent results across all domains (i.e., when the algorithm is run, it performs comparably across demographic groups and social determinants of health domains), and how they address bias throughout development, from data collection to model training and validation. It’s also important to ask about the origins of the data that powers the AI solution, to understand whether that data matches the makeup of the client population, and to ask whether the vendor aligns with the “AI Bill of Rights.”
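One practical way to act on that last question is to put the vendor’s reported training-data mix side by side with your own client population. The sketch below is a minimal, hypothetical illustration: the group labels, percentages, and 10-point review threshold are placeholder assumptions, not real figures, and a real comparison should use the demographic categories your organization actually tracks.

```python
# Minimal, hypothetical sketch: compare a vendor-reported training-data
# demographic mix against your own client population. Group labels and
# percentages are illustrative placeholders, not real figures.
training_mix = {"Group A": 0.62, "Group B": 0.21, "Group C": 0.17}
client_mix = {"Group A": 0.40, "Group B": 0.35, "Group C": 0.25}

# Flag any group whose share of the training data differs from the client
# population by more than 10 percentage points (an arbitrary review threshold).
for group in sorted(set(training_mix) | set(client_mix)):
    train_share = training_mix.get(group, 0.0)
    client_share = client_mix.get(group, 0.0)
    gap = train_share - client_share
    flag = "  <-- review" if abs(gap) > 0.10 else ""
    print(f"{group}: training {train_share:.0%} vs. clients {client_share:.0%} "
          f"(gap {gap:+.0%}){flag}")
```

A mismatch flagged this way doesn’t prove the model is biased, but it gives the organization a concrete follow-up question to bring back to the vendor.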

Finally, organizations should develop and circulate clear plans for identifying and mitigating AI bias if and when it occurs. This should always include stratifying, reviewing, and comparing results across all domains to confirm that outcomes are not skewed, as in the sketch below.
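As a concrete example of that stratified review, the following sketch breaks a scored model’s results out by a demographic column and compares each group’s metric to the overall figure. The column names (“race”, “label”, “prediction”), the toy data, and the choice of recall as the metric are assumptions for illustration; a real review should use the fields and metrics relevant to the tool in question, such as false negative rates for a risk-screening model.

```python
# Minimal sketch: stratify model results by a demographic column and compare
# each group's metrics to the overall figure. Column names and toy data are
# hypothetical placeholders for whatever fields your records actually contain.
import pandas as pd
from sklearn.metrics import recall_score

def stratified_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Per-group flag rate and recall, plus each group's gap vs. overall recall."""
    overall_recall = recall_score(df["label"], df["prediction"])
    rows = []
    for group, sub in df.groupby(group_col):
        rows.append({
            group_col: group,
            "n": len(sub),
            "flag_rate": sub["prediction"].mean(),
            "recall": recall_score(sub["label"], sub["prediction"]),
        })
    report = pd.DataFrame(rows)
    report["recall_gap_vs_overall"] = report["recall"] - overall_recall
    return report

# Toy example; in practice, load scored records from a recent review period.
records = pd.DataFrame({
    "race":       ["A", "A", "A", "B", "B", "B"],
    "label":      [1,   0,   1,   1,   1,   0],
    "prediction": [1,   0,   1,   0,   1,   0],
})
print(stratified_report(records, "race"))
```

Large gaps in a report like this don’t prove bias on their own, but they tell the review team where to look first and which results to escalate.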

By taking these actions and finding the right vendor and platform fit, behavioral healthcare leaders can foster more trust with both clients and providers and deliver quality care that produces more consistent, positive outcomes. The right AI behavioral health solutions can also free up providers to spend less time on administrative work and more time with clients.

The AI-powered tools in Core Solutions’ Cx360 platform, for example, supplement the direct human care that clients need while creating cost- and time-saving efficiencies. Administrative and AI decision-making tools support smarter note-taking, process automation, and identification of anomalies in care delivery and billing, and other Core AI solutions help identify population health data to improve diagnoses and health risk predictions.

The future of artificial intelligence in behavioral health is bright — and it’s already having a significant, positive impact on providers that are implementing leading AI solutions like those developed by Core. To help ensure clients receive the best, most equitable care possible, providers will need to keep pace with these burgeoning technologies, always with equity and outcomes top of mind. 

Want to learn more about AI for mental health, substance use disorder, and IDD that can improve health outcomes and the steps Core takes to tackle bias and inequity? Request a demo of the Cx360 platform today.

