
How Core Is Methodically Using Augmented Intelligence in Its Solutions

Written by Rachna Bagdi | December 10, 2024

Q&A With Ravi Ganesan, President & CEO of Core Solutions

Ravi Ganesan is the founder and CEO of Core Solutions, where he leads the revolution in behavioral health through artificial intelligence and other cutting-edge technologies. With more than 25 years of dedication to the field, Ravi has addressed the unique challenges of the health and human services industry by developing innovative EHR technology that significantly improves treatment outcomes. His deep expertise in healthcare and technology, combined with a strong commitment to customer satisfaction, has established Core Solutions as a leading software provider nationwide.

The future of AI in healthcare is wide open, with the global market expected to expand at a compound annual growth rate of nearly 37%, potentially reaching just under $614 billion by 2034. Yet many providers, patients, and even developers are still learning about AI: what it can do, how it works, and how much autonomy it has.

In the complicated behavioral healthcare space, it’s particularly important to understand the differences between augmented intelligence and artificial intelligence and how each is applied to solve pressing challenges.

In this Q&A, Core Solutions President Ravi Ganesan provides his perspective on AI for behavioral health, including a look at Core’s steady, patient approach to the development of augmented intelligence-backed products, the challenges faced in that process, and what he’s looking forward to as new use cases are addressed and more applications are rolled out.

Q: How do you define augmented intelligence vs. artificial intelligence?

Ravi Ganesan: I have a slightly different take. I think of AI in three stages. One is AI ‘in the loop,’ where the individual is involved in making the decisions, and AI is there to support them. ‘On the loop’ is when an individual is informed of what AI is doing, but it is a little bit more independent. ‘Off the loop’ is where the AI is working independently, and this is where the AI can make a decision and say, ‘based on this, we are going to do something else.’ Augmented intelligence to me is the same as ‘in the loop,’ where an individual is involved in making the decisions, and AI is there to support them.

I would say in all clinical decisions we want augmented intelligence, because mental health is complex and AI cannot diagnose people or make decisions on clinical care at this point. But agentic AI, which is more autonomous and can automate tasks without human intervention, would be more like off the loop, where an agent is able to look at the data and make a decision based on that data.

Q: How would you categorize Core’s current AI mental health products based on those definitions?

RG: Our current products are designed to assist clinicians in their day-to-day tasks, and so they should be considered augmented intelligence. AI will not make decisions, but AI can provide recommendations and insights, and then individuals will make final decisions, especially when it comes to clinical care. We are empowering human decision-making with information that's faster for AI to generate than for individuals.

The other benefit is that individuals with less experience or fewer skills are sometimes able to perform tasks that would otherwise require more experience.

Q: If final decisions still lie in the hands of the humans using AI for behavioral health solutions, how does Core determine the level of autonomy it’ll allow the AI to operate with when designing new tools?

RG: We are working on some level of autonomy for AI. In our ambient listening app, we generate a note. We can score that note, and if it is less than 70, it automatically goes to an AI supervisor that will review it and provide recommendations or rewrite the note so the clinician can review and approve it. So that is a level of autonomy that AI has — the decision to send the note for review was an AI-guided decision, but the final note is signed off on by the clinician.
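
To make that workflow concrete, here is a minimal sketch of the kind of score-and-route logic Ganesan describes, assuming a hypothetical scoring model and AI supervisor service. The threshold, function names, and return fields are illustrative, not Core's implementation; the clinician always remains the final approver.

```python
# Illustrative sketch only: route an AI-generated note based on a quality score.
# score_note and ai_supervisor_review are assumed, hypothetical components.

NOTE_QUALITY_THRESHOLD = 70  # notes scoring below this go to the AI supervisor

def route_generated_note(note_text, score_note, ai_supervisor_review):
    """Score an ambient-listening note and decide whether an AI supervisor
    should review or rewrite it before the clinician signs off."""
    score = score_note(note_text)  # assumed model returning a 0-100 score
    if score < NOTE_QUALITY_THRESHOLD:
        review = ai_supervisor_review(note_text)  # recommendations or rewrite
        return {
            "note": review.get("revised_note", note_text),
            "score": score,
            "ai_reviewed": True,
            "needs_clinician_approval": True,  # human stays in the loop
        }
    return {
        "note": note_text,
        "score": score,
        "ai_reviewed": False,
        "needs_clinician_approval": True,
    }
```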

Now similarly, we are doing something with our scheduler where we can say: For this appointment, we are going to allow double-booking because certain factors suggest there's, say, a 60 percent probability the client is not going to show up. AI is making an autonomous decision to say, should we allow this?
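
That scheduling decision can be expressed as a single threshold check. The sketch below assumes a hypothetical no-show prediction model and uses the 60 percent figure mentioned above purely as an example cutoff.

```python
# Illustrative sketch only: let the scheduler double-book a slot when the
# predicted no-show probability crosses a threshold.

NO_SHOW_THRESHOLD = 0.60  # example cutoff from the discussion above

def allow_double_booking(appointment, predict_no_show):
    """predict_no_show is an assumed model returning a probability in [0, 1]."""
    return predict_no_show(appointment) >= NO_SHOW_THRESHOLD
```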

Q: At what point in the product development decision-making process does AI come into the discussion? Is it more of a competitive decision to use tested AI for behavioral health challenges, as in, ‘we’ve seen this capability that we can improve upon,’ or do you start with your target audience's basic problems and then figure out how AI solves them?

RG: I think it's more customer driven. We look at the customer pain points, and we say, what are the big challenges in this industry? You know, staff burnout is a big challenge. Misdiagnosis and underdiagnosis are a big problem. Compliance is a big problem. So, we take these big problems, and within that, we assess which use cases would solve them. Once you come up with a use case, sometimes it's very clear that a more autonomous AI agent is going to be better at solving the problem, but it's an iterative process.

Sometimes you do not know until you take steps one, two, and three what the right course of action is. One of our customers said they have a problem with their clinicians repeating a lot of similar notes — I'm not going to say copy and paste, but that's pretty much what happens sometimes. The clinician is reusing notes and making small edits. So, we have an AI-based model that looks at the quality of the notes and can identify things like plagiarism to say, okay, these notes look like the notes you wrote five days ago, and there's no change.

The solution was to give the output back to the client so they could decide. But as that model gets better, we're able to say, why do you need to look at everything? If we can identify the top 5% of issues and route those for review, and the rest can be automated with an email back to the clinician to say, hey, these notes do not pass our criteria, then there's a level of autonomy for AI, but the human stays involved in the more complex cases that require intervention to decide whether further review is needed.
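
As a rough illustration of that triage pattern, the sketch below flags near-duplicate notes with a simple standard-library similarity measure and sends only the most similar fraction to a human reviewer. The thresholds, the 5% cutoff, and the similarity metric are assumptions for illustration; the production model described above would be more sophisticated.

```python
# Illustrative sketch only: flag near-duplicate notes and triage them so a
# human reviews the worst cases while the rest trigger an automated email.

from difflib import SequenceMatcher

DUPLICATE_THRESHOLD = 0.90    # similarity above this suggests a reused note
HUMAN_REVIEW_FRACTION = 0.05  # roughly the "top 5%" of issues go to a human

def note_similarity(new_note, prior_note):
    """Crude text similarity between two notes, from 0.0 to 1.0."""
    return SequenceMatcher(None, new_note, prior_note).ratio()

def triage_flagged_notes(flagged):
    """flagged: list of (note_id, similarity) pairs above DUPLICATE_THRESHOLD.
    Returns (human_review_queue, automated_email_queue)."""
    ranked = sorted(flagged, key=lambda item: item[1], reverse=True)
    cutoff = max(1, int(len(ranked) * HUMAN_REVIEW_FRACTION))
    return ranked[:cutoff], ranked[cutoff:]
```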

It's an iterative process where you take the customer problem, figure out the best strategy, and along the way, oftentimes it's a little bit of art. We do something and totally scrap it, and go back to the drawing board and say, can we do it differently? Can we do it better? Then after the second or third try, we get to a point where we are super excited and we show it to a customer, and they break it in two minutes and say, ‘this doesn't make sense.’

So, we are not in a rush to just push the product out. We want to create products that are safe, effective, and are going to be used by customers. It's not as simple as just, ‘We're solving a problem.’ You are sort of figuring it out as you go.

Q: What are the challenges you face in developing tools backed by augmented intelligence vs. artificial intelligence?

RG: Honestly, the challenges are the same. The first is to have clarity on the use case, and sometimes it's a little bit more challenging than it appears. The statement might be, ‘we're trying to solve our compliance problem,’ and from there, you start breaking this into smaller pieces. The use case has to be really specific. I think that's the first challenge.

Having the right data to train the model is probably by far the biggest challenge. We rely on a mix of user-created data and synthetic data to create the model. That is time-consuming. It's expensive. That is the second challenge.

Once [we've trained and created the model], we usually create an API, which needs to be plugged into a user interface. We want AI to be smart about when and where it appears in the user experience, because we don't want to burden the user with too much information. At the same time, when we make an AI-driven recommendation, we want to provide appropriate information so the user can say, ‘this recommendation is based on these assumptions or these findings.’ They can then validate the AI and make sure it is making the right recommendation.
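
One way to picture that last point is the shape of the data an AI feature might return to the interface. The structure below is a hypothetical sketch, not Core's actual API contract; the field names are invented to show how a recommendation can carry its supporting findings and assumptions along with it.

```python
# Illustrative sketch only: a recommendation payload that carries the evidence
# and assumptions behind it so the user can validate the suggestion.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AIRecommendation:
    recommendation: str             # what the AI suggests
    confidence: float               # model confidence from 0.0 to 1.0
    supporting_findings: List[str]  # the data points the suggestion rests on
    assumptions: List[str] = field(default_factory=list)  # known caveats

example = AIRecommendation(
    recommendation="Flag this note for compliance review",
    confidence=0.82,
    supporting_findings=[
        "No reference to an active treatment plan goal",
        "Session duration not documented",
    ],
    assumptions=["Model trained primarily on outpatient progress notes"],
)
```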

Q: What’s the secret to creating AI mental health solutions that are trusted, reliable, and useful?

RG: I think first, there's a great rush to market AI solutions, and companies have raised a lot of money. Once you raise a lot of money, you need to sell to justify the valuation and the money that you've raised. So it's the wild, wild West, and the challenge is customers in the market are still getting educated in these technologies. A lot of things could go wrong — like missed expectations, people getting hurt.

I think we need to step back. We want to first educate customers and the broader market on how AI works and how to safely implement it. The next piece, apart from education, is transparency. When companies are in a rush, I've seen multiple cases when everything looks so wonderful, and the customer comes in with all these expectations, and the transparency is missing. So, to build a trusted AI solution, we must start with transparency.

What does that really mean? I think we want customers to understand how the AI solution works. What data was used to train the model? What are the limitations? What are the potential biases in the model? For example, if all the data that I use to train my model was based in Brooklyn, New York, and this customer is in Texas, the patient population is different. The way people speak is different. So that is a potential limitation and a source of bias.

There is the idea of a model card that answers questions like: What was used to train the model? What data went in? Customers should know to ask these questions. Right now, they are not as educated in asking the right questions, and so there are a lot of pilot projects that don't go forward once the customer understands the limitations they’re running into.
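
For readers new to the idea, here is a minimal sketch of what a model card might contain. The fields mirror the questions raised above, and every value is an invented example rather than a description of any Core model.

```python
# Illustrative sketch only: a minimal model card a vendor might share with
# customers. All values are invented examples.

model_card = {
    "model_name": "note-quality-scorer (example)",
    "intended_use": "Score draft progress notes for completeness and compliance",
    "training_data": "Mix of de-identified clinician notes and synthetic notes",
    "limitations": [
        "Trained largely on data from one region; language patterns elsewhere may differ",
        "Not intended to diagnose or make clinical decisions",
    ],
    "potential_biases": [
        "May score unfamiliar regional phrasing or documentation styles lower",
    ],
    "validation": "Internally validated; independent validation recommended before live use",
}
```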

We want to provide responsible AI, and that responsibility starts with us as the vendor. I think we'll figure out in five years if we were too slow to market our products because we were cautious or if this is the right approach.

Q: To whatever extent possible, can you talk about some of the upcoming products you have that use augmented intelligence or artificial intelligence?

RG: I think we'll talk about the immediate future. I wouldn’t say we’re creating new products. We're creating new use cases, and in the web application, we are going to be launching all of our AI capabilities into what we're going to call Clinician Assist. When we started, we were trying to embed many use cases within our existing EHR. Then we realized this is going to be a lot more powerful if someone can click on an AI button and everything AI for their client is there in one screen. So we changed that approach, and in Q1, we should have Clinician Assist in our EHR.

Now what are the use cases that I'm most excited about that are coming in 2025? The supervisor AI is something I am super excited about because that is relatively easy to implement. It's going to look at existing client documentation and provide feedback. So, from a compliance standpoint, high-risk items are going to be identified for human intervention much faster. For items that don't require attention, the AI is going to be better at working through the routine review that human beings are spending time on today.

How we are integrating treatment planning with notes is equally exciting because the treatment planning process is so critical, and oftentimes, there's a disconnect between the notes and the treatment plan. So what we are working on for 2025 is the ability to use unstructured data to say, what goals are we making progress on? What goals are we behind on? Without requiring the clinician to quantify and collect more data, AI is going to automatically connect these dots. Those are a couple of key clinical capabilities I'm excited about.
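
To give a flavor of what connecting those dots could look like, here is a deliberately simple sketch that matches note text against keywords tied to treatment plan goals. The goals, keywords, and matching approach are illustrative assumptions; a production system would rely on an NLP model rather than keyword counts.

```python
# Illustrative sketch only: a crude way to relate unstructured note text to
# treatment plan goals. Real systems would use an NLP model, not keywords.

TREATMENT_GOALS = {
    "reduce_anxiety": ["anxiety", "panic", "worry"],
    "improve_sleep": ["sleep", "insomnia", "rest"],
}

def goals_touched_on(note_text):
    """Return, per goal, how many of its keywords appear in the note,
    as a rough signal of which goals the session addressed."""
    text = note_text.lower()
    return {
        goal: sum(1 for keyword in keywords if keyword in text)
        for goal, keywords in TREATMENT_GOALS.items()
    }
```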

Q: Is there anything else you’d like to add about the future of AI for behavioral health, as it applies to augmented intelligence or artificial intelligence?

RG: I'll leave you with this last thought. In the next couple of years, we'd feel comfortable saying the entire revenue cycle process will be automated through AI. So I provide a service, money hits the bank, and there's no human being in between. I think that reality is happening. There are companies trying to do it today.

Again, we are taking the more cautious approach in a step-by-step fashion. So it's a more long-term outlook. I think in 2025, we'll have the functional capabilities for that. But then it's going to take customer validation. That's the other part of transparency and trust. We validate internally the models we are creating, but we also want customers and academic institutions to provide an independent level of validation before we put the AI solution to use in a live clinical environment.

So that's a little bit about our planning and future product path that honestly, I'm excited about.

To learn more about how Core’s AI applications are shaping the future of AI in healthcare for behavioral health providers and clients, as well as the benefits they can bring to your organization, schedule a demo today.