Can AI-Assisted Voice Analysis Help Identify Mental Disorders?


This article is part of a limited series on the potential of artificial intelligence to solve everyday problems.

Imagine a test that can reliably identify an anxiety disorder or predict an impending depressive relapse, as quick and easy as taking your temperature or measuring your blood pressure.

Healthcare providers have many tools to measure a patient’s physical condition, but lack reliable biomarkers (objective indicators of medical conditions observed from outside the patient) to assess mental health.

But some AI researchers now believe the sound of your voice may be the key to understanding your mental state – and AI is perfectly suited to detecting such changes that are otherwise difficult if not impossible to detect. The result is a suite of apps and online tools designed to monitor your mental state, as well as programs that offer telehealth and call center providers real-time mental health assessments.

Psychologists have long known that some mental health problems can be detected by listening not just to what a person says but how they say it, said Maria Espinola, a psychologist and assistant professor at the University of Cincinnati School of Medicine.

In depressed patients, Dr. Espinola said, “their speech is generally more monotone, flatter and softer. They also have a reduced pitch range and lower volume. They take more pauses. They stop more often.”

Patients with anxiety feel more tension in their bodies, which can change the way their voice sounds, she said. “They tend to speak faster. They have more difficulty breathing.”

Today, such vocal features are being used by machine learning researchers to predict depression and anxiety, as well as other mental illnesses like schizophrenia and post-traumatic stress disorder. Deep learning algorithms can uncover additional patterns and features, captured in short voice recordings, that may not be evident even to trained experts.
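To make the idea concrete, here is a minimal sketch of the kind of acoustic features such a model might start from: average loudness, loudness variability (monotone speech varies less), and the fraction of the recording spent in pauses. The function name, frame size, and silence threshold are illustrative assumptions, not any vendor's actual pipeline.

```python
import numpy as np

def vocal_features(signal, sr=16000, frame_ms=25, silence_db=-40.0):
    """Compute simple frame-level loudness statistics and a pause ratio
    from a mono audio signal (a 1-D array of samples at rate `sr`)."""
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    # Root-mean-square energy per frame, converted to decibels.
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    db = 20 * np.log10(np.maximum(rms, 1e-10))
    voiced = db > silence_db  # frames louder than the silence threshold
    return {
        "mean_loudness_db": float(db[voiced].mean()) if voiced.any() else silence_db,
        "loudness_variability": float(db[voiced].std()) if voiced.any() else 0.0,
        "pause_ratio": float(1.0 - voiced.mean()),
    }

# Toy example: one second of a quiet tone followed by one second of silence,
# so about half the clip should register as a pause.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
clip = np.concatenate([0.1 * np.sin(2 * np.pi * 220 * t), np.zeros(sr)])
feats = vocal_features(clip, sr)
```

Real systems extract far richer features (pitch contours, spectral shape, speech rate) and feed them, or the raw audio itself, into trained models; this sketch only illustrates the first step of turning sound into numbers.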

“The technology we’re using now can extract meaningful features that even the human ear can’t detect,” said Kate Bentley, an assistant professor at Harvard Medical School and a clinical psychologist at Massachusetts General Hospital.

“There is a lot of excitement about finding biological or more objective indicators of psychiatric diagnoses that go beyond the more subjective forms of assessment traditionally used, such as clinician-rated interviews or self-report measures,” she said. Other clues researchers are tracking include changes in activity levels, sleep patterns, and social media data.

These technological advances come at a time when the need for mental health care is especially acute: according to a report from the National Alliance on Mental Illness, one in five adults in the United States experienced mental illness in 2020. And the numbers continue to climb.

While AI technology cannot address the shortage of qualified mental health care providers (there are not nearly enough to meet the country’s needs), Dr. Bentley said, there is hope that it can lower the barriers to receiving an accurate diagnosis, help clinicians identify patients who may hesitate to seek care, and facilitate self-monitoring between visits.

“A lot can happen between appointments, and technology can offer us the potential to improve monitoring and assessment in a more continuous way,” Dr. Bentley said.

To test this new technology, I started by downloading the Mental Fitness app from Sonde Health, a health technology company, to see whether my malaise was a sign of something serious or just a passing mood. Described as a “voice-powered mental fitness tracking and journaling product,” the free app invited me to record my first check-in, a 30-second verbal journal entry, which would rate my mental health on a scale of 1 to 100.

A minute later, my score came in: a not-so-great 52. “Beware,” the app warned.

The app flagged that the level of vitality detected in my voice was quite low. Did I sound monotone merely because I had been trying to speak quietly? Should I heed the app’s suggestions to improve my mental fitness by going for a walk or tidying up my space? (The first question may point to one of the app’s potential flaws: as a consumer, it can be hard to know why your scores fluctuate.)

Next, feeling nervous between interviews, I tested another voice analysis program, this one focused on detecting anxiety levels. The Stress Waves Test is a free online tool from Cigna, the health and insurance conglomerate, developed in collaboration with the AI specialist Ellipsis Health to assess stress levels using 60-second samples of recorded speech.

“What keeps you awake at night?” the website prompted. After I spent a minute talking about my persistent worries, the program scored my recording and sent me an email notification: “Your stress level is moderate.” Unlike the Sonde app, Cigna’s email offered no helpful self-help tips.

Other technologies add a potentially helpful layer of human interaction, like Kintsugi, a company based in Berkeley, Calif., that raised $20 million in Series A funding earlier this month. Kintsugi is named after the Japanese practice of mending broken pottery with veins of gold.

Founded by Grace Chang and Rima Seiilova-Olson, who bonded over their shared past experience of struggling to access mental health care, Kintsugi develops technology for telehealth and call-center providers that can help them identify patients who might benefit from further support.

Using Kintsugi’s voice analysis program, a nurse might be prompted, for example, to take an extra minute to ask a harried parent with a colicky infant about her own well-being.

One of the concerns with the development of such machine learning technologies is the issue of bias – ensuring that programs work equally for all patients, regardless of age, gender, ethnicity, nationality, and other demographic criteria.

“For machine learning models to work well, you need to have a really large, diverse and robust dataset,” said Ms. Chang, noting that Kintsugi uses voice recordings from around the world, in many different languages, to guard against this particular problem.

Another major concern in this emerging field is privacy, particularly voice data that could be used to identify individuals, Dr. Bentley said.

And even when patients agree to be recorded, the question of consent is sometimes twofold: in addition to assessing a patient’s mental health, some voice analysis programs use the recordings to develop and refine their own algorithms.

Another challenge, Dr. Bentley said, is consumers’ potential mistrust of machine learning and so-called black-box algorithms, which work in ways that even the developers themselves cannot fully explain, particularly which features they use to make predictions.

“There is creating the algorithm, and there is understanding the algorithm,” said Dr. Alexander S. Young, the interim director of the Semel Institute for Neuroscience and Human Behavior and the chair of psychiatry at the University of California, Los Angeles, echoing concerns many researchers have about AI and machine learning in general: that there is little, if any, human oversight during the program’s training phase.

For now, Dr. Young is cautiously optimistic about the potential of voice analysis technologies, particularly as tools for patients to monitor themselves.

“I believe you can model people’s mental health states, or approximate their mental health states in general,” he said. “People like to be able to self-monitor their condition, especially those with chronic illnesses.”

However, before automated voice analysis technologies enter mainstream use, some want to see their accuracy rigorously investigated.

“We really need more validation, not only of voice technology, but of artificial intelligence and machine learning models built on other data streams as well,” Dr. Bentley said. “And we need that validation to come from large-scale, well-designed representative studies.”

Until then, AI-driven voice analysis technology remains a promising but unproven tool, one that may eventually become an everyday method for taking the temperature of our mental health.
