Joining me today are Linda Chung and Michael Mullarkey to discuss the transformative potential of AI in mental health care. Linda is the co-CEO and Co-Founder of Aiberry, a groundbreaking AI company redefining mental healthcare accessibility. With a background in speech-language pathology, Linda pioneered telehealth services and now leads Aiberry in leveraging innovative technology for objective mental health screenings. Michael, the Senior Clinical Data Scientist at Aiberry, is dedicated to translating complex data science into tangible human value. His unique background in clinical psychology merged with a passion for coding drives his mission to address pressing human concerns through data.

In our conversation, we explore the fascinating intersection of clinical expertise and artificial intelligence, unlocking personalized insights and proactive strategies for mental well-being. Hear about Aiberry’s innovative chatbot “Botberry” and how it helps provide insights into the user’s mental health. We also get into the weeds and unpack how Aiberry develops its models, data source challenges, the value of custom models, mitigating model biases, and much more! Our guests also provide invaluable advice for other startups and share their future vision for the company. Tune in and discover AI technology at the forefront of mental health innovation with Linda Chung and Michael Mullarkey from Aiberry!


Key Points:
  • Linda’s healthcare background and motivation for starting Aiberry.
  • Michael's transition from clinical psychology to AI at Aiberry.
  • Aiberry's AI-powered mental health assessment platform and its unique approach.
  • The role of machine learning in Aiberry's technology.
  • Model development, data collection challenges, and custom model creation.
  • Addressing bias in models trained on patient interview data.
  • Measuring impact and success metrics at Aiberry.
  • Advice for leaders of AI-powered startups.
  • The vision for Aiberry's impact in the next three to five years.

Quotes:

“We know that early detection leads to early intervention and better outcomes.” — Linda Chung

“Our models take the messy, natural human way that people talk about their mental health, and we turn it into the systematic data that’s necessary for the healthcare industry and report it back to the user.” — Linda Chung

“As a health tech company, we have to take the health and the tech elements of our business equally seriously. So, one of our guiding principles from a health perspective is we have to keep people's data secure.” — Michael Mullarkey

“We really value letting people talk about their mental health in their own words, and that can lead to some unexpected outcomes on the modeling side of the operation.” — Michael Mullarkey


Links:

Linda Chung on LinkedIn
Michael Mullarkey on LinkedIn
Aiberry


Resources for Computer Vision Teams:

LinkedIn – Connect with Heather.
Computer Vision Insights Newsletter – A biweekly newsletter to help bring the latest machine learning and computer vision research to applications in people and planetary health.
Computer Vision Strategy Session – Not sure how to advance your computer vision project? Get unstuck with a clear set of next steps. Schedule a 1 hour strategy session now to advance your project.
Foundation Model Assessment – Foundation models are popping up everywhere – do you need one for your proprietary image dataset? Get a clear perspective on whether you can benefit from a domain-specific foundation model.


Transcript:

[INTRODUCTION]

[0:00:03] HC: Welcome to Impact AI, brought to you by Pixel Scientia Labs. I’m your host, Heather Couture. On this podcast, I interview innovators and entrepreneurs about building a mission-driven, machine-learning-powered company. If you like what you hear, please subscribe to my newsletter to be notified about new episodes. Plus, follow the latest research in computer vision for people and planetary health. You can sign up at pixelscientia.com/newsletter.

[EPISODE]

[0:00:33] HC: Today, I’m joined by Linda Chung and Michael Mullarkey from Aiberry. Linda is a co-CEO and co-founder, and Michael is a senior clinical data scientist at Aiberry. And today, we’re going to talk about mental health care. Linda and Michael, welcome to the show.

[0:00:47] LC: Thanks for having us.

[0:00:49] MM: Yes, thanks so much for having us.

[0:00:50] HC: Linda, could you share a bit about your background and how that led you to create Aiberry?

[0:00:54] LC: Yes. I started my career in healthcare as a pediatric therapist. So, I experienced firsthand the provider shortages, the delays in accessing care and in initial treatment contact, and the lack of scalability, and it all just got worse when the pandemic hit. I found myself really wanting to stretch my impact and go beyond the limitations of traditional health care. That’s when I met Johan Bjorklund, a tech leader and now my co-CEO, and he just got it: the potential of AI to truly make a mark in mental health.

He then introduced me to Dr. Newton Howard, a brain and cognitive scientist, and that’s really how we got started. We wanted to make mental health assessments a normal, everyday thing, accessible to everyone. So, we set out to tackle the challenges surrounding mental health assessments, both from the perspective of the one giving the assessment and that of the individual taking it. We wanted to fill the gaps and bring a sense of comprehensiveness and scalability to mental health care. We knew it was something that was desperately needed, and it wasn’t just about leveraging AI. It was about finding ways to capture the real problems that people face when it comes to their mental well-being. From the very beginning, we knew we needed a clinician who had both experience in the field and a deep understanding of technology and AI, and that’s when we found Dr. Michael Mullarkey.

[0:02:13] HC: So, Michael, what led you to Aiberry?

[0:02:15] MM: Yes, it’s interesting, because I have a Ph.D. in Clinical Psychology, and I spent a lot of years working as a therapist with people one-on-one while also honing my research skills. I actually got a little burned out working in mental health. I was working in a psychiatric emergency room in New York when COVID first hit. So, for a little while, I leaned more into the AI and technology part of my skill set and worked as a senior data scientist at another startup.

But when I heard about Aiberry, it just sounded like a dream opportunity to blend my clinical and AI expertise into one role. And especially with the focus on accessibility of mental health assessments, it was just a no-brainer for me.

[0:03:00] HC: So, could you tell me more about what Aiberry offers and why it’s so important for mental health?

[0:03:04] LC: Well, Aiberry is an AI-powered mental health assessment platform. Users get to have guided conversations with our chatbot, whose name is Botberry, and she’s not your typical chatbot. We’ve designed her to be approachable and disarming, and she really gets you to think about how you’re doing. And the best part is that you get to respond in your own words.

So, I’m not sure if you know how mental health assessments are done today, but it’s basically a multiple-choice form asking, in the last 14 days, how often you’ve been experiencing these specific symptoms, and your choices are: not at all, several days, more than half the days, and nearly every day. Those are the options for every single question. So, you can see how there are nuances that aren’t picked up and how completely impersonal it is. In terms of how our screenings work, our AI looks at three things: what you say, how you say it, and even subtle changes in your facial expressions. So, it’s like having someone who’s looking at the whole person, and that’s so much better than these mundane forms.

From that conversation comes a risk score, and we also give you some insights: how’s my energy, my concentration, my mood, and so on. Then we also give you the ability to track trends over time. In terms of why this is important for mental health, it’s not just about the technology, but about catching and addressing issues early on. I mentioned earlier the delays in initial treatment contact. We know that people typically wait a decade to seek help. Imagine being able to screen people early on and preventatively, at schools, doctors’ offices, workplaces, and even in their homes. That’s where we want to be, because we know that early detection leads to early intervention and better outcomes.

[0:04:43] HC: And what role does machine learning play in this technology?

[0:04:46] LC: Machine learning and AI are a core part of Aiberry’s technology. People who work in the mental health field know how to talk about mental health. We know what’s important to share, what’s not as relevant, and we also know exactly the right words to use to describe our mood in just the right way. But for the rest of the world, and that’s who’s really important to us, that language doesn’t come naturally. Most of us don’t walk around describing ourselves as, “Yes, right now I endorse anhedonia, but I deny fluctuations in appetite.” It’s just not how we talk.

So, our models take the messy, natural human way that people talk about their mental health, and we turn it into the systematic data that’s necessary for the healthcare industry and report it back to the user. We can help clinicians track their patients over time, we give HR leads insights into the overall mental health of their organization, and we give people the space to talk about their mental health however they like, all at the same time. Before Aiberry’s machine learning and AI technology, that just didn’t exist.

[0:05:44] HC: So, getting a bit more into the details, how do these models work? What’s the input? What’s the output? Maybe you have a few examples to share?

[0:05:52] MM: So, yes, we’ll go into the weeds a little bit, which I like to do. Essentially, as Linda just said, a lot of the inputs are people’s own words: their way of expressing themselves verbally, non-verbally, everything like that. We’re able to take a wide variety of signals, and then we ingest those into a pretty intricate feature extraction pipeline. Then, we’re able to map those features to more systematic forms of data.

So, we obviously take what people say about themselves seriously, we take them being able to say it in their own words seriously, and we also take clinician expertise seriously. That’s one of the reasons I’m on board, and we collaborate on a clinical validation study with outside researchers as well. It really is translating a look at the whole person from a healthcare point of view into a tech point of view: making sure that we’re being systematic about our data pipelines, our feature extraction, and then the predictions that are relevant to people. It’s not just a yes or no. It’s, okay, how severe is this depression looking? Which particular symptoms are bothering you most? We’re really focused on what people might actually care about, rather than just a number at the end of the pipeline that isn’t something people can understand right off the bat.
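
To make the shape of that flow concrete, here is a minimal, hypothetical sketch of the text side of such a pipeline: extract features from a response, then map them to a risk score and symptom-level insights. The feature names, word lists, weights, and thresholds are illustrative placeholders, not Aiberry’s actual models.

```python
# Hypothetical sketch: response -> features -> risk score + insights.
# Everything here is a placeholder standing in for trained components.

from dataclasses import dataclass

@dataclass
class ScreeningResult:
    risk_score: float       # 0 (low) to 1 (high)
    symptom_insights: dict  # e.g. {"mood": "low"}

def extract_features(transcript: str) -> dict:
    """Toy text-only features; a real pipeline would also ingest audio and video."""
    words = transcript.lower().split()
    n = max(len(words), 1)
    return {
        "negative_affect": sum(w in {"sad", "hopeless", "tired"} for w in words) / n,
        "first_person": sum(w in {"i", "me", "my"} for w in words) / n,
    }

def predict(features: dict) -> ScreeningResult:
    # Placeholder linear scoring standing in for a trained custom model.
    risk = min(1.0, 5.0 * features["negative_affect"] + 0.5 * features["first_person"])
    mood = "low" if features["negative_affect"] > 0.05 else "typical"
    return ScreeningResult(risk_score=round(risk, 2), symptom_insights={"mood": mood})

print(predict(extract_features("Lately I feel sad and tired most days")))
```

A production system would replace the toy word counts with learned text, acoustic, and facial-expression features feeding a validated model, but the overall shape, features in, interpretable outputs back to the user, is the same.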

[0:07:28] HC: So, how do you go about gathering data in order to train your algorithms? Do you need to annotate it? Do you need clinicians in order to annotate things for you?

[0:07:38] LC: Yes. This is actually an interesting story, and Michael, definitely jump in here. We initially set out to collect training data through a clinical study, and we sort of expected thousands of people to sign up so we’d have all this amazing data. Well, that didn’t happen. COVID really slowed things down, and people weren’t enrolling in studies. They weren’t even leaving their houses. So, we made the decision to try to collect the data in-house through one-on-one Zoom interviews. We advertised on social media, put up ads anywhere we could, and then we waited. We were amazed at the response that we got.

I went from doing 41 one-on-one Zoom interviews per day to hiring more and more people to keep up with the demand. People wanted to talk to us. I think they wanted to talk to us because they were hurting. We got a lot of training data out of it, but the more important thing is that we got to spend time early on with our future users, and it was definitely confirmation that we were onto something and that what we were creating was so needed.

[0:08:33] HC: So, in developing models to solve this, have foundation models like ChatGPT and some of the other popular ones right now played a role in developing this technology? Or did you need to develop a custom model to solve your challenges in a different way?

[0:08:47] MM: Yes, that’s an excellent question. As a health tech company, we have to take the health and the tech elements of our business equally seriously. So, one of our guiding principles from a health perspective is we have to keep people’s data secure. That’s priority number one. Because of that, all of our models live inside our HIPAA-compliant infrastructure, which necessitates building custom models. So, even though building custom models is way more difficult than just sending sensitive data to an outside API, that process of taking the time to build the models ourselves has a lot of positive side effects above and beyond data security.

For example, since we control our models and our endpoints, we don’t have to worry about things like model drift, where an earlier version of a model performs really well on the task you’re looking at, but a later version, even if it’s better overall, gets worse on your particular task. So, we can mitigate that, on top of the essential data security element.
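
One way to picture that control, as a minimal sketch: pin an exact, already-validated model version inside your own infrastructure and refuse to load anything else. The registry layout, version string, and checksum below are hypothetical, not Aiberry’s actual setup.

```python
# Hypothetical sketch of pinning a model version to avoid third-party drift:
# the model we validated is the model we serve, until we choose to change it.

import hashlib
import pickle
from pathlib import Path

PINNED_VERSION = "severity-model-1.3.0"  # illustrative name
EXPECTED_SHA256 = "0" * 64               # placeholder; recorded at validation time

def load_pinned_model(registry: Path):
    artifact = registry / PINNED_VERSION / "model.pkl"
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise RuntimeError(f"{PINNED_VERSION} artifact changed; refusing to load")
    with artifact.open("rb") as f:
        return pickle.load(f)  # behavior stays frozen until re-validated
```

With an external API, by contrast, the provider can swap the underlying model at any time, so performance on your specific task can shift without warning.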

[0:09:50] HC: What other types of challenges have you encountered in working with data from patient interviews and then training models on it?

[0:09:57] MM: I mean, anyone who’s worked with real-world data will tell you it’s messy, and these interviews are no exception. As we’ve talked about, we really value letting people talk about their mental health in their own words, and that can lead to some unexpected outcomes on the modeling side of the operation. I’ll give you one example. We had a question where, as part of the feature extraction, we were trying to get at whether people were saying more yes or more no. We thought we had a pretty comprehensive list of ways people might say yes or no to this question, and then we had someone answer, “I can’t think of a single one”, which was not a part of our list.

I think the key element there is checking what our assumptions are. Anytime you’re working with data, and I also come from a statistics background, one of the main things to think about is what assumptions the model is making. From the clinical side, it’s also what assumptions we’re making about how people are going to speak about these difficult parts of their lives. That’s been a real iterative journey for us, learning how to blend the clinical aspect with the technological aspect in a way that’s as seamless as possible.
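
As a toy illustration of that assumption, consider a phrase-list classifier like the hypothetical one below: the original list would have missed “I can’t think of a single one”, and the safe behavior is to flag unmatched answers for review rather than guess.

```python
# Hypothetical yes/no feature extraction over a fixed phrase list.
YES_PHRASES = {"yes", "yeah", "definitely", "i think so"}
NO_PHRASES = {"no", "nope", "not really",
              "i can't think of a single one"}  # added after the surprise answer

def classify_yes_no(answer: str) -> str:
    text = answer.lower().strip().rstrip(".!?")
    if text in YES_PHRASES:
        return "yes"
    if text in NO_PHRASES:
        return "no"
    return "unclear"  # flag for human review rather than silently guessing

print(classify_yes_no("I can't think of a single one."))  # -> "no"
```

The lesson generalizes: any fixed list encodes an assumption about how people will phrase things, so the pipeline needs an explicit path for answers that break that assumption.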

[0:11:12] HC: So, speaking of assumptions, and thinking more broadly about the ethical use of AI, how might bias manifest in models trained on data from patient interviews? And are there some things your team is doing to try to mitigate bias?

[0:11:24] MM: Absolutely, yes. I mean, no matter how fancy your model, if you bring garbage data in, you’re going to get garbage results out, and garbage data isn’t just low-quality video or audio, in our case. It could also be that you’re really only collecting data from white people, or only collecting data from people who have really high incomes. We’ve made it a practice to include bias mitigation as a key part of our entire workflow throughout the research and development process, rather than an afterthought, a “we need to real quick test for this at the end.” It’s thinking about who you’re collecting the data from, how you’re building the models, and how you’re displaying things to people at the end. It’s really a whole-pipeline process, rather than just, “Oh, yes, I guess we should look at that.”

[0:12:14] HC: Do you have some examples of some specific things you’re doing in each of those categories you mentioned, with data collection, with modeling, with display at the end? Are there specific techniques that you found helpful that other companies in different fields might benefit from hearing?

[0:12:29] MM: Sure. I mean, having the expertise in machine learning and AI is fantastic, and it also helps to think a little “back to basics” when it comes to statistics. Take the sampling we did for our clinical validation study. If we had just opened that up and let anyone be part of the study without any thought for, “Okay, what does the race of our participants look like? What does the geographic distribution of our participants look like?”, if we hadn’t been prepared and thinking about that ahead of time and been really intentional about our sampling strategy, we could have had way more people in our initial clinical validation set. But it would have been pretty biased data, and we would have been fooling ourselves about how well we were doing, not just overall, but within certain subgroups of folks.

So, I think that’s probably one of the most prominent examples. It also shows the value of thinking upfront, “All right, we want to approach this from an equity standpoint from the beginning,” rather than just trying to do certain analyses at the end to maybe correct for bias that exists in the data. You can do both, but you really want to start with the data collection process, and I think that’s true across industries.
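
A minimal sketch of the subgroup check Michael describes might look like the following, assuming a held-out validation set tagged with self-reported demographics. The groups, data, and use of accuracy as the metric are illustrative only.

```python
# Hypothetical per-subgroup performance check on held-out predictions.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (group, y_true, y_pred) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        hits[group] += int(y_true == y_pred)
    return {g: hits[g] / totals[g] for g in totals}

# Toy held-out predictions tagged with demographic group.
records = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("B", 1, 1), ("B", 0, 0)]
scores = subgroup_accuracy(records)
gap = max(scores.values()) - min(scores.values())
print(scores, f"gap={gap:.2f}")  # a large gap would be a guardrail violation
```

The point of intentional sampling is to make sure each group in a check like this has enough participants for its number to mean anything.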

[0:13:53] HC: Thinking more broadly about what you’re trying to accomplish at Aiberry, how do you measure the impact of your technology to be sure you’re having the effect that you want?

[0:14:01] MM: Yes. The approach we have is that we keep some key success metrics and some key guardrail metrics. We just talked about one of our key guardrail metrics: if the model displays a bias against any race, gender, or other protected category, that’s a nonstarter, no matter how good any other metric looks. And one key success metric, which Linda talked about earlier, is detecting mental health risk earlier in people. Having been a clinician, I’ve had people come into my office who had been suffering for more than 10 years before they got treatment. One of the biggest ways I feel successful in Aiberry’s efforts is helping people get that detection and get that treatment earlier. We’ve had success with that with some of our partners, and that’s been super gratifying to hear and be a part of.

[0:15:00] HC: Is there any advice you could offer to other leaders of AI-powered startups?

[0:15:05] LC: I think startups are hard, even in the best of times, and we know that within the first five years, 90% of startups will fail. So, I’ve definitely learned a lot throughout this journey and continue to learn every day. In terms of advice, I’d say, number one, AI is a bit of a buzzword, so figure out how to explain what you’re doing and how you’re using AI in such an easy way that even a fifth grader can understand it. I’d also focus on building good culture: deciding early on what your startup’s core values are, communicating them clearly across the organization, and then making sure you’re hiring people who fit those values and continue to uphold them day in and day out.

Also, on raising money: when the capital market is good, raise more money than you think you’ll need. Everything takes longer and costs more than you think it will. Finally, choose good shareholders, people who are genuinely supportive and have trust in you.

[0:16:00] HC: Michael, are there any other pieces of advice you’d like to add?

[0:16:03] MM: I mean, I think one of the biggest things is, I am so impressed when organizations can both have core values, like Linda mentioned, and then think about the variety of applications that can come from those core values. Be willing to stand fast in your core values, but be flexible about what that impact could look like in the real world. One of the things we’ve experienced is getting to work with clinicians and also with other target markets, and being able to identify the different needs in those different target markets while staying true to our values has really been a huge positive. That’s very easy to say and very hard to do. So, the more you can hold to that, the better you’re going to do.

[0:17:02] HC: Finally, where do you see the impact of Aiberry in three to five years?

[0:17:06] LC: Well, I think 15 years ago, there was the quantified self movement, when people started to monitor their steps, their heart rate, their sleep. The idea was that by collecting and analyzing this data, you could get insights into various aspects of your life and your overall well-being. It’s a great blend of technology, data, and self-discovery. So, in three to five years, we really see checking in on your mental health becoming an everyday thing that people do. You log the good days, you log the bad ones, and you get alerted to trends and risk factors much earlier, so everything becomes proactive rather than reactive.

I think the engagement and accessibility of Aiberry will give users the knowledge to take action when appropriate, sort of like, “The frequency of my anxiety symptoms has been up for over a week. I should do something about that.” That’s where the next thing, the action, can really happen, and that’s one of the main goals we had from the outset. So again, I think the impact in three to five years is that people are actually going to be doing these check-ins every day, and we’re giving them the knowledge and empowerment to know when to take action.

[0:18:14] HC: This has been great. Linda and Michael, I appreciate your insights today. I think this will be valuable to many listeners. Where can people find out more about you online?

[0:18:22] LC: I think the best way is to check out our website, aiberry.com, or you can find us on LinkedIn.

[0:18:31] HC: Perfect. Thanks for joining me today.

[0:18:33] LC: Thanks so much for having us.

[0:18:34] MM: Thanks for having us.

[0:18:34] HC: All right, everyone. Thanks for listening. I’m Heather Couture, and I hope you join me again next time for Impact AI.

[OUTRO]

[0:18:45] HC: Thank you for listening to Impact AI. If you enjoyed this episode, please subscribe and share with a friend, and if you’d like to learn more about computer vision applications for people and planetary health, you can sign up for my newsletter at pixelscientia.com/newsletter.

[END]