Customer service calls often start and end at the operator’s headset, but there is so much untapped data in these conversations that could be used to improve business systems on a holistic level. Today’s guest, Amy Brown, has seen the value of unlocking conversational data to improve healthcare systems across the country, and as the Founder and CEO of Authenticx, she has taken giant strides toward accomplishing this goal.
Authenticx is an AI-powered platform that makes it possible for healthcare organizations to have a single source of conversational data, creating powerful and immersive customer insight analysis that informs business decisions. In today’s conversation, Amy explains why she founded Authenticx, what the company does, and why her business is important for healthcare. We also learn about how the company uses machine learning in its processes, the challenges of working with conversational data, how Authenticx upholds a high ethical standard, and how the impact of its technology can be measured across healthcare systems nationwide. After sharing some important advice for other leaders of AI-powered startups, Amy explains why Authenticx will be a key player in healthcare for the foreseeable future.
Key Points:
- A warm welcome to the Founder and CEO of Authenticx, Amy Brown.
- Amy’s professional background, and how she ended up founding Authenticx.
- What Authenticx does and why the company is important for healthcare.
- How the company uses machine learning to get better insights from conversational data.
- A closer look at the conversational data that Authenticx works with.
- The challenges of working with and training models on conversational data.
- Other ways that they validate their models.
- Mitigating biases and upholding ethics.
- How Amy measures the impact of Authenticx’s technology.
- Her advice to other leaders of AI-powered startups.
- Where Authenticx will be in the next three to five years, according to Amy.
Quotes:
“That’s really what I’m trying to get at: using technology to help explain customer and consumer perception of their care, and using that, putting that to work, for the healthcare industry so it can start to improve its systems in a way that allows patients and consumers to actually get a better outcome.” — Amy Brown
“Our data team has had to become extremely proficient at dealing with all kinds of messy data.” — Amy Brown
“We’ve hired a diverse group of human beings because we want to make sure that we’re inclusive in our interpretations of what’s happening in these conversations.” — Amy Brown
“You can never eliminate all bias – we would never purport to do that – but we can be very intentional about how we train the data.” — Amy Brown
“[The] dream scenario is that the healthcare system in this country starts to make room for and evolve in how it makes its business decisions to include the voices of their customers as a key source of insight, intel, and data.” — Amy Brown
Links:
Amy Brown on LinkedIn
Amy Brown on X
Authenticx
Authenticx on Instagram
LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.
[INTRODUCTION]
[0:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.
[EPISODE]
[0:00:33] HC: Today, I’m joined by guest Amy Brown, founder and CEO of Authenticx, to talk about conversational intelligence for healthcare. Amy, welcome to the show.
[0:00:42] AB: Thank you. It’s great to be here, Heather.
[0:00:43] HC: Amy, could you share a bit about your background and how that led you to create Authenticx?
[0:00:47] AB: Sure. My background is a winding road. My education is in social work, and I was particularly interested in the macro elements of social work. So, I cared a lot about social systems. I started my career in state government working with the Family and Social Services Administration. So, think about serving vulnerable populations, child welfare, Medicaid populations. I just became super fascinated with the intersection between healthcare policy and the real world.
After leaving state government, I spent the next 20 years working in the business of healthcare, which for me meant working with health insurance companies that provided healthcare benefits to different types of populations: Medicare, Medicaid, commercially insured populations. I also spent some time working in the pharmaceutical industry. In all of my roles, I was an operator, and I was responsible for managing kind of the day-in, day-out interactions between the healthcare company and consumers of our healthcare system. I ran contact centers as part of my job, and I got to know customer conversation data really deeply. I got to understand what was happening in those conversations. And I just became really fascinated with this idea of aggregating all of those recorded conversations to help tell a story, not just for the contact center operators, but more importantly, for the entire enterprise: data-backed storytelling using the literal voices of their own customers.
That’s what I set out to found and start in 2018. I left my corporate career and started Authenticx, really from scratch, not from a technology background, but more of a business and social work background. I was fascinated with the data, and so I quickly found a CTO to help me build out the tech, and that’s how I got started.
[0:03:01] HC: So, what does Authenticx do? And why is it important for healthcare?
[0:03:04] AB: So, Authenticx, what we like to say is we help healthcare organizations, which we define as pharma, payers, and healthcare providers, like physicians and hospital systems. We help them listen at scale to the conversations that flow in and out of their organizations each and every day, usually by the millions. We do that by leveraging AI that has been trained by our own people, to help leaders understand at the highest level what’s going on, what’s driving customer friction, what’s driving customer behavior, and allowing our clients to drill into that data in a way that contextualizes the customer experience. This is really important for healthcare, right? Because we’ve focused a lot of resources in the past few decades on trying to pull together clinical data and lab data in our country to try to predict health outcomes on the clinical side of our care.
What we haven’t done as good a job of as a country is marrying clinical data with actual experience data to tell and predict what’s actually going to happen with someone’s healthcare. For example, I might have a diagnosis of diabetes, and I might have a care plan provided by my doctor that says I need to follow a particular diet and take a particular type of insulin regularly in order for me to manage my diabetes. But what the electronic health record might not tell the story of is how I’m struggling to maintain employment, how I’m struggling with my insurance coverage, and how I’m struggling with transportation. These are all nuances of a consumer or a patient’s life that aren’t found in lab data, but can be found in conversation data as they communicate with the healthcare system. That’s really what I’m trying to get at: using technology to help explain customer and consumer perception of their care, and using that, putting that to work, for the healthcare industry so it can start to improve its systems in a way that allows patients and consumers to actually get a better outcome.
[0:05:38] HC: So, how do you use machine learning to get those insights out of the conversational data?
[0:05:41] AB: Yes. So, when we started our business and decided we wanted to leverage AI to help us scale listening, we decided we wanted to stay focused in healthcare. So, we hired nurses and social workers and clinicians to listen manually to interactions and start to tag and label those interactions, not just based on what they were hearing in the spoken word, but based on what they could interpret from the context of the situation. We started tagging and labeling the problems and concerns that were unique to the healthcare industry. We did this for several years, and then that became the basis of our training data for our machine learning models.
So, we use machine learning to understand at scale, kind of across millions of conversations, where the signals are, where the themes are, the things that need to be paid attention to, and we surface that in our UI for leaders to be able to kind of prioritize where they’re focused and have a higher degree of confidence that what’s being surfaced is actually usable and meaningful, because it’s been trained using healthcare conversation data.
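To make that labeling-to-model flow concrete, here is a minimal illustrative sketch, not Authenticx’s actual pipeline, of training a simple theme classifier on conversation snippets that human annotators have already labeled. The snippets, theme names, and model choice are all hypothetical.

```python
# Illustrative sketch only -- not Authenticx's actual pipeline.
# Assumes conversation snippets have already been labeled by clinical annotators.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical labeled examples: (transcript snippet, theme label).
examples = [
    ("I still haven't gotten the prior authorization for my insulin", "coverage_barrier"),
    ("I can't get a ride to my infusion appointments", "transportation_barrier"),
    ("The bill I received doesn't match what I was quoted", "billing_confusion"),
    ("My copay card stopped working at the pharmacy", "coverage_barrier"),
    ("There's no bus route that gets me to the clinic before it closes", "transportation_barrier"),
    ("I was charged twice for the same visit", "billing_confusion"),
]
texts, labels = zip(*examples)

# Simple bag-of-words baseline; a production system would use far more data
# and likely a fine-tuned language model, but the labeling-to-model flow is the same.
model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(texts, labels)

print(model.predict(["I'm struggling to afford my prescription"]))
```

Once a model like this is trained, its predictions across millions of calls can be aggregated to show which themes are most prevalent, which is the kind of signal surfaced in the UI described above.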
[0:07:09] HC: What does this conversation data look like? Is this video that you translate into a text output that then goes into your models? Are you using raw video, or some other type of input?
[0:07:20] AB: Yes, the most prevalent forms of input are voice, actually. If you think about contact centers in the healthcare setting, as healthcare consumers, you’ve probably occasionally had to call your insurance company to ask about a claim or a bill, or perhaps you’ve had to get a prior authorization that your insurance required in order to get something that your doctor has prescribed. When you’ve called your doctor’s office, perhaps you’ve talked on a recorded line to schedule an appointment, or maybe you’re asking for lab results. Maybe you’re a person who is on a high-cost prescription drug, and you’re getting assistance from the pharmaceutical manufacturer, and you call their patient support line. Maybe you’re a parent who has called your doctor’s office or your hospital system because your child is sick over the weekend, and you’re wondering whether you should take them to the ER or not.
These are the types of conversations that we are most often acquiring and ingesting in our technology. Because of the proliferation of the digital front door, we also take in text-based conversations. So, think of bidirectional communication between a customer and a company, whether the company is hosting that conversation with a bot or with a human. What Authenticx is really about, our sweet spot, is bidirectional voice and chat data, because it’s in the nuance of the conversation itself that you get so much rich context.
[0:08:54] HC: What kinds of challenges do you encounter in working with and training models based off of this conversational data?
[0:09:01] AB: Yes, well, first off, all of our clients, everybody has a different standard in terms of the cleanliness and quality of their data. Our data team has had to become extremely proficient at dealing with all kinds of messy data, right? Metadata and all of that good stuff. But one of the biggest challenges is making sure that the models are meaningful and useful, and that we’re constantly tuning them and making sure that we’re tracking for drift in the models and also drift amongst our own data labelers. So, we take that role and that work super seriously, and we have a team of individuals on-site that are constantly validating whether or not they agree with the AI’s prediction of whatever it’s predicting. It’s a challenge, but it’s also a very worthy one to invest in, and something that we’re continuously committed to.
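As an illustrative aside, one simple way to watch for the kind of model drift mentioned here is to compare the distribution of predicted labels in a recent window of conversations against a reference window. The sketch below is an assumption about how such a check might look, not Authenticx’s actual monitoring setup; the label names and counts are hypothetical.

```python
# Illustrative sketch of one simple drift check -- not Authenticx's actual
# monitoring setup. It compares the distribution of predicted theme labels in
# a recent window against a reference window with a chi-squared test.
from collections import Counter
from scipy.stats import chi2_contingency

# Hypothetical model predictions from two time windows.
reference_preds = ["coverage_barrier"] * 120 + ["billing_confusion"] * 60 + ["transportation_barrier"] * 20
recent_preds = ["coverage_barrier"] * 70 + ["billing_confusion"] * 95 + ["transportation_barrier"] * 15

labels = sorted(set(reference_preds) | set(recent_preds))
table = [
    [Counter(reference_preds)[label] for label in labels],
    [Counter(recent_preds)[label] for label in labels],
]

chi2, p_value, _, _ = chi2_contingency(table)
if p_value < 0.01:
    print(f"Label distribution shifted (p={p_value:.4f}) -- flag for human review")
else:
    print(f"No significant shift detected (p={p_value:.4f})")
```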
[0:10:04] HC: So, you mentioned that you have annotators checking the output of the models. Is this your main means of validation, or are there other ways to validate these models?
[0:10:11] AB: Our main means of validation is through our own humans, and then our users, who are clients in the system, right? So, they’re seeing the output of the AI, and we have a UI that allows them to agree or disagree, and it’s through that feedback that our models are tuned and informed.
We also, just to mention, focus a lot on calibration internally among our data labelers themselves. So, we’ve hired a diverse group of human beings, because we want to make sure that we’re inclusive in our interpretations of what’s happening in these conversations. But we also know that we need to have a standard, and we need to have a high agreement rate amongst the labelers themselves. We have an interrater reliability process, or a calibration process, whereby we are holding ourselves accountable to our standard of agreement with the AI.
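To make the interrater reliability idea concrete, agreement between two labelers (or between a labeler and the model) is often quantified with a chance-corrected statistic such as Cohen’s kappa. The sketch below is illustrative only; the labels are hypothetical, and the source does not specify which agreement metric Authenticx uses.

```python
# Illustrative sketch -- quantifying agreement between two labelers (or a
# labeler and the model) with Cohen's kappa. The labels are hypothetical.
from sklearn.metrics import cohen_kappa_score

labeler_a = ["coverage_barrier", "billing_confusion", "billing_confusion",
             "transportation_barrier", "coverage_barrier", "billing_confusion"]
labeler_b = ["coverage_barrier", "billing_confusion", "coverage_barrier",
             "transportation_barrier", "coverage_barrier", "billing_confusion"]

kappa = cohen_kappa_score(labeler_a, labeler_b)
print(f"Cohen's kappa: {kappa:.2f}")
# A team might recalibrate (re-review guidelines, discuss disagreements)
# whenever kappa for a labeler pair drops below an agreed threshold.
```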
[0:11:09] HC: With the proliferation of AI over the last few years, there’s been a much greater emphasis on and interest in ethics. How might bias manifest with models trained on conversational data, and are there some things your team is doing to mitigate it?
[0:11:23] AB: As we know, AI is only as meaningful and useful as the data it was trained on, and if you have a skewed or one-sided source of data, then it’s very likely to be biased. Our clients serve a diverse patient population in this country. So yes, our data labelers have an incredible responsibility for making sure that they’re interpreting with culturally competent ears as much as possible. That’s why we focus so much of our time on two things: making sure that our labeling group represents a diverse group of human beings that are reflective of the patient population of the clients that we work with, but then also that they’re calibrating amongst themselves, and that when they’re interpreting what a customer might be trying to say, or a problem that exists in a customer experience, there’s debate, there’s dialogue, and there’s a good, healthy back and forth between the team members before agreeing on a label, in what cases a label needs to be used, and in what cases something else needs to be created from a label perspective.
You can never eliminate all bias. We would never purport to do that. But we can be very intentional about how we train the data. Working with our people is one key way, and then also, because we are vertically specific, we are an AI company focused on serving the healthcare enterprise in this country. We’re taking in data that covers a fairly narrow set of use cases, which makes it that much more achievable to focus on meaningful algorithms.
[0:13:18] HC: Thinking more broadly about what you’re doing at Authenticx, how do you measure the impact of your technology?
[0:13:23] AB: Yes. Well, we sell to business leaders, typically not the technical side of the client’s organization, but more often the business leaders. And we measure our success, and the success of Authenticx, based on the changes and the impact that our clients are able to drive in their business. So, one of the key things that they measure is the degree to which they identify and reduce customer friction. We have a proprietary machine learning model called the Eddy Effect, named after a phenomenon that happens in nature, in rivers, where the current is disrupted by an object and it creates a counter-current flow. When you look at a river, you might see whirlpools on the sides of the river; that’s called a river eddy. The same thing happens in customer journeys, where customers are thinking one thing is going to happen with their claim, or their bill, or their service, and that doesn’t go as expected.
Usually, you can determine by listening to vast amounts of data what the root cause is. It’s usually a process gap or a technology gap, and the agents are just the messengers, right? The real root issue is a process or system gap. So, our models help identify the prevalence of customer friction and the root cause of that friction, using conversational data as a key source of intel. Then, our clients deploy strategies to reduce that friction, and they do that because the ROI is there, right?
Because when they reduce friction, they’re, number one, reducing the amount of time spent by the company handling customer escalations that are preventable. So, it’s helping drive out waste. But the other thing about reducing customer friction and developing strategy based on friction is that it makes for happier customers when they aren’t stuck as much. And happier customers means that they are going to be more loyal, and they’re going to tell their friends and family about your business. So, it’s this wonderful metric, and an algorithm that goes with that metric of measuring customer friction, that impacts both profitability and revenue. There are 10 other use cases we work with clients on in the healthcare space, but that’s our most popular and the one that’s driving growth for Authenticx.
[0:15:59] HC: Is there any advice you could offer to other leaders of AI-powered startups?
[0:16:04] AB: Gosh. That’s such a great question, and such an interesting time to be asked that question with all the hype out there in the market. I mean, my advice would be, the market really needs to be educated right now, and there is so much noise. What we’ve found to be really helpful to our clients is taking a very consultative and education-focused role in the sales process. Buyers are trying to sort through the noise and really determine value. So, in your go-to market, make sure you’re paying attention to that.
From an internal perspective, I would say it’s just super important that a startup really understands the use case, what they’re trying to solve for and what problems in the market they’re trying to solve, and makes sure that everybody is aligned to solving those problems. The technology itself must deliver value in the real world, and making sure that everyone at the company understands that value is just going to make for better innovation and more focus on what it is they’re trying to solve for.
[0:17:18] HC: And finally, where do you see the impact of Authenticx in three to five years?
[0:17:22] AB: Oh, goodness, the dream scenario is that the healthcare system in this country starts to make room for and evolve in how it makes its business decisions to include the voices of their customers as a key source of insight, intel, and data. Customer voices historically have been relegated to the call center, and they’re kind of stored and ignored in data centers or in the cloud. In the system that we all consume our healthcare in, the leaders at the top don’t routinely listen to what’s actually being said by their customers, and for good reason, because it’s a massive data source, it’s really messy, and it’s historically been inaccessible.
But through AI and technology that exists now, leaders can have access to the voices of their customers, they can know what signals to listen to, and they can have a much closer understanding of what’s going on at the root, the grassroots level, of their organization. So, in three to five years, I want to see boardrooms and C-suites incorporating the literal voice of their customer in strategy and everyday business decisions.
[0:18:54] HC: This has been great, Amy. I appreciate your insights today. Where can people find out more about you online?
[0:18:59] AB: Absolutely. We’d love to meet your audience. They can either find me on LinkedIn, or go to our website, at authenticx.com. That’s authentic with an X at the end, .com.
[0:19:10] HC: Perfect. I’ll include both of those links in the show notes. Thanks for joining me today.
[0:19:15] AB: Thank you.
[0:19:15] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.
[OUTRO]
[0:19:25] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend, and if you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.
[END]