Sep 19, 2022 – Imagine this: You think you might have COVID. You speak a few sentences on your phone. Then the app gives you reliable results in less than a minute.
“You look sick” is what we humans might tell a friend. Artificial intelligence, or AI, can take this to new heights by analyzing your voice to detect COVID infection.
Researchers said a simple and inexpensive app could be used in low-income countries or to screen crowds at concerts and other large gatherings.
It is the latest example of an emerging trend: exploring sound as a diagnostic tool for detecting or predicting disease.
Over the past decade, artificial intelligence speech analysis has been shown to help detect Parkinson’s disease, post-traumatic stress disorder, dementia, and heart disease. The research is promising enough that the National Institutes of Health just launched a new initiative to develop AI voice analysis for diagnosing a wide range of conditions. These range from respiratory diseases, such as pneumonia and chronic obstructive pulmonary disease, to laryngeal cancer, stroke, amyotrophic lateral sclerosis, and mental disorders such as depression and schizophrenia. The researchers say the software can detect nuances that the human ear can’t.
At least half a dozen studies have taken this approach to detect COVID. In the latest development, researchers from Maastricht University in the Netherlands reported that their AI model was accurate 89% of the time, compared to an average of 56% for various lateral flow tests. The voice test was also more accurate in detecting infection in people who had no symptoms.
One hitch: Lateral flow tests show false positives less than 1% of the time, compared to 17% for the voice test. However, since the test is “virtually free,” it could still be practical to have only those who test positive undergo further testing, said researcher Wafaa Al-Jabawi, who presented the preliminary results at the International Congress of the European Respiratory Society in Barcelona, Spain.
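To see why a 17% false-positive rate still leaves room for a cheap screening test, it helps to work through the numbers. The sketch below uses the sensitivity figures reported above (89% for the voice test, 56% for lateral flow tests); the 5% infection rate in a screened crowd is an assumption for illustration, not a figure from the study.

```python
# Illustrative only: why a cheap test with many false positives can still
# work as a first-pass screen, with confirmation reserved for positives.
# Sensitivities are from the article; the prevalence is an assumption.

def positive_predictive_value(sensitivity, false_positive_rate, prevalence):
    """Probability that a positive result reflects a true infection."""
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prevalence = 0.05  # assumed: 5% of a screened crowd is infected

voice_ppv = positive_predictive_value(0.89, 0.17, prevalence)
lfd_ppv = positive_predictive_value(0.56, 0.01, prevalence)

print(f"Voice test PPV:        {voice_ppv:.0%}")  # roughly 1 in 5 positives real
print(f"Lateral flow test PPV: {lfd_ppv:.0%}")
```

At this assumed prevalence, only about one in five voice-test positives would be truly infected, which is why positives would be sent on for further testing rather than treated as confirmed cases.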
“Personally, I am excited about the potential medical implications,” says Visara Urovi, PhD, project researcher and associate professor at the Institute for Data Science at Maastricht University. “If we better understand how the voice changes with different conditions, we are likely to know when we are on the cusp of disease or when to seek further testing and/or treatment.”
Artificial Intelligence Development
COVID infection can change your voice. It affects the respiratory system, “resulting in decreased speech energy and loss of voice due to shortness of breath and upper airway congestion,” says the preprint paper, which has not yet been peer-reviewed. The dry cough typical of a COVID patient also causes changes in the vocal cords. Previous research found that weakening of the lungs and larynx due to coronavirus alters the acoustic properties of the voice.
Part of what makes the latest research remarkable is the size of the data set. The researchers used a large crowdsourced database from the University of Cambridge containing 893 audio samples from 4,352 people, 308 of whom tested positive for COVID.
You can contribute to this database – anonymously – via the Cambridge COVID-19 Voices App, which asks you to cough three times, breathe deeply through your mouth three to five times, and read a short sentence three times.
For their study, the Maastricht University researchers focused on the spoken sentences only, Urovi explains. She says the sound’s “signal parameters” “provide some information about speech energy.” “It is these numbers that are used in the algorithm to make the decision.”
Audio enthusiasts may find it interesting that the researchers used Mel spectrogram analysis to characterize the sound wave (or timbre). AI enthusiasts will note that the study found long short-term memory (LSTM) to be the type of AI model that works best. It is based on neural networks that mimic the human brain, and it is especially good at modeling signals collected over time.
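For readers curious what that pipeline looks like in code, here is a minimal numpy-only sketch: waveform, to short-time Fourier transform, to Mel spectrogram, whose time-ordered frames would then form the input sequence for an LSTM classifier. All parameter values (sample rate, FFT size, number of Mel bands) are illustrative assumptions, not the study’s actual settings.

```python
import numpy as np

# Sketch of the feature pipeline described above (assumed parameters):
# waveform -> windowed FFT frames -> triangular Mel filterbank.

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    """Triangular filters spaced evenly on the Mel scale."""
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):      # rising slope of the triangle
            fb[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):     # falling slope
            fb[i - 1, k] = (right - k) / max(right - center, 1)
    return fb

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        spectrum = np.abs(np.fft.rfft(signal[start:start + n_fft] * window)) ** 2
        frames.append(spectrum)
    power = np.array(frames)                            # (time, freq bins)
    return power @ mel_filterbank(n_mels, n_fft, sr).T  # (time, mel bands)

# Synthetic 1-second "voice": a 220 Hz tone plus noise, standing in for speech.
sr = 16000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 220 * t) + 0.1 * np.random.randn(sr)

spec = mel_spectrogram(signal, sr=sr)
print(spec.shape)  # (time frames, mel bands): the sequence an LSTM would consume
```

An LSTM classifier would read this spectrogram one time frame after another, which is what makes it a natural fit for signals that unfold over time.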
For everyone else, it is enough to know that advances in this field may lead to “reliable, effective, affordable, convenient and easy-to-use” technologies for disease detection and prediction.
Urovi says turning this research into a meaningful application will require a successful validation phase. This “external validation” – testing how the model performs on a different data set of voices – can be a slow process.
“The validation stage could take years before the app becomes available to the wider public,” Urovi says.
Urovi stresses that even with the large Cambridge data set, “it’s hard to predict how well this model will work overall.” If a voice test proves to work better than a rapid antigen test, “people may prefer the cheap, noninvasive option.”
“But more research is needed to explore the acoustic features that are most useful in selecting COVID cases, and to ensure that models can distinguish COVID from other respiratory conditions,” the paper says.
So are pre-concert app tests in our future? That will depend on cost-benefit analyses and many other considerations, Urovi says.
However, “the benefits may persist if the test is used to support, or in addition to, other well-established screening tools” such as the polymerase chain reaction (PCR) test.