
New Database Will Help Researchers Identify Vocal Biomarkers Using Machine Learning

Typically when you think of biomarkers, it’s on a molecular scale—signaled by alterations in genes, proteins, and the like. However, a new $14 million project, funded by NIH’s Bridge2AI program, is turning that traditional biomarker concept on its ear. Instead of examining genetic or similar molecular characteristics, researchers are collecting data to look for voice biomarkers that could indicate the presence of cancer and other diseases.

The new project, led by Yaël Bensoussan, M.D., of the University of South Florida, and Olivier Elemento, Ph.D., of Weill Cornell Medicine, seeks to assemble a database of 30,000 voices over the course of the next 4 years.

Scientists will be able to use data from those audio clips to detect speech characteristics (e.g., tone, inflection, and pacing), voice quality, and breathing patterns and airway issues, such as the presence of a cough.
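The article does not describe the project's specific tooling, but as a rough illustration of what extracting such voice characteristics from an audio clip can look like, here is a minimal sketch using the open-source librosa library. The file name, feature choices, and library are assumptions for illustration, not part of the project described above.

```python
# Illustrative sketch: extracting simple voice features from one audio clip.
# "sample_voice.wav" is a placeholder file name.
import numpy as np
import librosa

y, sr = librosa.load("sample_voice.wav", sr=None)  # load audio at its native sample rate

# Fundamental frequency (pitch) track as a rough proxy for tone and inflection
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)

# Simple summary features: pitch statistics, phonation, loudness, and timbre
features = {
    "mean_pitch_hz": float(np.nanmean(f0)),          # average pitch of voiced frames
    "pitch_variability_hz": float(np.nanstd(f0)),    # inflection / monotonicity proxy
    "voiced_fraction": float(np.mean(voiced_flag)),  # rough pacing / phonation measure
    "rms_energy": float(np.mean(librosa.feature.rms(y=y))),  # overall loudness
}
features.update({
    f"mfcc_{i}": float(v)
    for i, v in enumerate(np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13), axis=1))
})  # MFCCs summarize voice quality / timbre

print(features)
```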

Armed with audio data, researchers will then be able to train machine learning models to extract markers that signify changes associated with diseases such as laryngeal cancer, Parkinson's disease, and even depression. For example, abnormal growths on the vocal cords can make the voice sound hoarse and, when cancers are advanced, lead to shortness of breath.
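As a hedged sketch of the kind of downstream modeling the article alludes to (not the project's actual method), features like those above could be fed to a standard classifier, for example with scikit-learn. The data below is randomly generated purely for illustration; a real study would use features extracted from patient recordings with clinically verified labels.

```python
# Illustrative only: training a classifier on per-recording voice features to
# flag disease-associated changes. The data here is synthetic placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 17))    # e.g., pitch stats + MFCC summaries per recording
y = rng.integers(0, 2, size=200)  # placeholder labels (condition present / absent)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# On real data, a metric like ROC AUC indicates how well the voice features
# separate the groups; on random data it hovers around 0.5.
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```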

Researchers will be able to access the database within a cloud-based infrastructure, allowing them to easily combine, sort, and analyze data without the need to download large files. Central to this database is an app-based tool, which is currently in development. The app will make it easier to enroll patients and collect data.

Because the voice is considered a key personal identifier, the researchers are taking special precautions to protect patient identity. As noted by Dr. Bensoussan, “A team of bioethicists will consider ethical and legal questions, data sharing, and voice identification.” In the future, the researchers plan to provide specific guidelines to the research community on how to handle voice data linked to health information.  

Dr. Bensoussan said their project, which blends artificial intelligence with voice research, should help further promote the use of deep learning models in the field. Ultimately, she hopes this research will foster greater collaboration between clinicians and researchers, leading to earlier diagnosis and better outcomes for patients with cancer and other diseases that affect the voice.
