A new AI algorithm developed at EPFL and the University Hospital of Geneva (HUG) uses a smart stethoscope – the Pneumoscope – to improve the management of respiratory diseases in low-resource and remote settings.
As air passes through the labyrinth of tiny passages in our lungs, it makes a distinctive hum. When these passages are narrowed by the inflammation of asthma or blocked by the secretions of bronchitis, the sound changes in characteristic ways. Listening for these diagnostic signatures with a stethoscope on the chest – a procedure called auscultation – is an essential part of nearly every health checkup.
However, despite two centuries of experience with the stethoscope, the interpretation of auscultation remains highly subjective: one doctor may hear something different from another. In fact, depending on where you are in the world, the same sound can be variously described as chirping, popping candy, Velcro, cooking rice, and more. Accuracy is further influenced by the healthcare worker's level of experience and specialization.
These complications make auscultation an ideal challenge for deep learning, which can distinguish audio patterns more objectively. Deep learning has already been shown to augment human interpretation of a variety of complex medical exams, such as X-rays and MRIs.
Now, a new study published in npj Digital Medicine by EPFL's Intelligent Global Health Research Group (iGH), based in the Machine Learning and Optimization Laboratory – an interdisciplinary hub of AI specialists in the School of Computer and Communication Sciences – describes their AI algorithm, DeepBreath, which demonstrates the potential of automated interpretation in the diagnosis of respiratory diseases.
“What makes this study particularly unique is the diversity and rigorous selection of the auscultation sound bank,” said the study’s senior author, Dr. Mary-Anne Hartley, a physician and biomedical data scientist who leads iGH. Almost six hundred pediatric outpatients were recruited across five countries: Switzerland, Brazil, Senegal, Cameroon and Morocco. Breath sounds were recorded in patients younger than fifteen years of age with the three most common respiratory diseases: radiographically confirmed pneumonia, and clinically diagnosed bronchiolitis and asthma.
“Respiratory diseases are the leading preventable cause of death in this age group,” explained Professor Alain Gervaix, Head of the Department of Pediatric Medicine at HUG and founder of Onescope, the startup that will bring to market the smart stethoscope integrating the DeepBreath algorithm. “This work is an excellent example of a successful collaboration between HUG and EPFL, between clinical studies and basic science. The DeepBreath-powered Pneumoscope is a breakthrough innovation in the diagnosis and management of respiratory disease,” he continued.
Dr. Hartley’s team is leading Onescope’s AI development and is particularly excited about the tool’s potential in low-resource and remote settings. “Reusable diagnostic tools like this smart stethoscope have the unique advantage of built-in sustainability,” she explained, adding that “AI tools can also be continuously improved, and I hope we will be able to extend the algorithm to other respiratory diseases and populations as more data become available.”
DeepBreath was trained on patients from Switzerland and Brazil and then validated on recordings from Senegal, Cameroon and Morocco, providing insight into the tool’s geographic generalizability. “You can imagine that there is a lot of variation between emergency departments in Switzerland, Cameroon and Senegal,” said Dr. Hartley, listing examples: “the background noise, the way the clinician holds the recording stethoscope, the local epidemiology and diagnostic protocols.”
With enough data, the algorithm should be robust to these nuances and find the signal among the noise. Despite the small number of patients, DeepBreath maintained impressive performance across sites, suggesting further improvements are possible with more data.
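As a rough illustration of what such a geographic hold-out evaluation looks like in practice – a minimal sketch with placeholder data and hypothetical names, not the study's actual pipeline – a model can be fit only on recordings from the internal sites and then scored separately on each external site:

```python
# Minimal sketch of a geographic hold-out evaluation, assuming pre-extracted
# per-recording feature vectors. load_site() and the sites lists are
# illustrative placeholders, not part of the DeepBreath codebase.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

TRAIN_SITES = ["Switzerland", "Brazil"]
VALIDATION_SITES = ["Senegal", "Cameroon", "Morocco"]

def load_site(site, n=100, d=64):
    """Stand-in for loading feature vectors and labels for one site."""
    X = rng.normal(size=(n, d))
    y = rng.integers(0, 2, size=n)  # 1 = pathological, 0 = healthy
    return X, y

# Train only on the internal sites...
train_data = [load_site(s) for s in TRAIN_SITES]
X_train = np.vstack([X for X, _ in train_data])
y_train = np.concatenate([y for _, y in train_data])
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then report performance separately for each external site,
# which is what reveals geographic generalizability.
for site in VALIDATION_SITES:
    X_val, y_val = load_site(site)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"{site}: AUROC = {auc:.2f}")
```

Reporting each external site separately, rather than pooling them, is what exposes whether performance degrades for populations and recording conditions the model has never seen.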
A particularly unique contribution of the study was the inclusion of techniques that seek to demystify the algorithm’s black-box inner workings. The authors were able to show that the model genuinely used the respiratory cycle to make its predictions, and to identify which parts of that cycle were most important. Demonstrating that an algorithm truly relies on breath sounds, rather than “cheating” by exploiting biased background-noise signatures, addresses a critical gap in the current literature, one that undermines confidence in such algorithms.
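One generic way to run this kind of sanity check – not necessarily the interpretability method used in the paper – is an occlusion test: silence successive windows of a recording and measure how much the model’s predicted probability drops. Everything in the sketch below is an illustrative placeholder:

```python
# Occlusion sketch: mute successive time windows of a 1-D audio signal and
# record the drop in the model's predicted probability. predict_fn is any
# callable mapping an audio array to a probability (hypothetical here).
import numpy as np

def occlusion_importance(audio, predict_fn, n_windows=10):
    """Return a per-window importance score for a 1-D audio signal."""
    baseline = predict_fn(audio)
    window = len(audio) // n_windows
    scores = []
    for i in range(n_windows):
        muted = audio.copy()
        muted[i * window:(i + 1) * window] = 0.0    # silence one window
        scores.append(baseline - predict_fn(muted))  # large drop => important
    return np.array(scores)

# Toy usage with a dummy predictor that just measures signal energy.
toy_predict = lambda x: float(np.tanh(np.mean(x ** 2)))
audio = np.random.default_rng(1).normal(size=16000)  # ~1 s at 16 kHz
print(occlusion_importance(audio, toy_predict))
```

If the highest-scoring windows line up with the breath sounds rather than with pauses or background noise, that is evidence the model is listening to the right thing.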
A multidisciplinary team is now working to adapt the algorithm for real-world use within Onescope’s smart stethoscope, the Pneumoscope. Another major step is to repeat the study with more patients, using recordings from this newly developed digital stethoscope, which also records temperature and blood oxygen levels. “Combining these signals is likely to further improve predictions,” predicts Dr. Hartley.
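As a toy sketch of how such signals might be combined downstream of the audio model – with entirely synthetic data and hypothetical names, not Onescope’s actual design – a simple late-fusion classifier could take the audio-derived probability alongside temperature and oxygen saturation:

```python
# Toy late-fusion sketch: combine an audio-derived probability with
# temperature and oxygen saturation in a second-stage classifier.
# All data and variable names are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
audio_prob = rng.uniform(size=n)             # output of the audio model
temperature = rng.normal(37.0, 1.0, size=n)  # degrees Celsius
spo2 = rng.normal(96.0, 3.0, size=n)         # blood oxygen saturation, %
y = (audio_prob + (temperature > 38) + (spo2 < 92) > 1).astype(int)  # toy label

X = np.column_stack([audio_prob, temperature, spo2])
fusion = LogisticRegression().fit(X, y)
print("Fused probability for one patient:",
      fusion.predict_proba([[0.7, 38.5, 91.0]])[0, 1])
```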
The students on Dr. Hartley’s team involved in the development of DeepBreath include Julien Heitmann, Jonathan Doenz, Julianne Dervaux and Giorgio Mannarini, all of whom completed their master’s theses on the project.