Google Opens HeAR AI Model to Researchers for Health Diagnostics

TapTechNews, August 21: Google announced in a blog post on August 19 that it has opened its Health Acoustic Representations (HeAR) AI model to researchers through the Google Cloud API.


TapTechNews reported in March this year that Google's HeAR model can help diagnose diseases by analyzing the sounds of a person's coughs and breathing.

Google said that HeAR outperforms other models across a variety of tasks and excels at capturing meaningful patterns in health-related acoustic data.

Importantly, models built on HeAR can achieve high performance with relatively little training data. Since data scarcity is a common challenge in healthcare research, this is a crucial advantage.
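The reason little labeled data is needed is that HeAR acts as a pretrained embedding model: researchers train only a small classifier on top of its fixed representations. The sketch below illustrates that general workflow with synthetic stand-in data; the 512-dimensional embeddings, class labels, and sample counts are all made up for illustration, and a real project would obtain embeddings from the HeAR API rather than generating them randomly.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for HeAR embeddings: 200 cough clips, 512 dims each.
# (All numbers here are illustrative, not HeAR's actual output shape.)
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(100, 512))       # embeddings of "healthy" coughs
symptomatic = rng.normal(0.5, 1.0, size=(100, 512))   # embeddings of "symptomatic" coughs
X = np.vstack([healthy, symptomatic])
y = np.array([0] * 100 + [1] * 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A lightweight classifier on frozen embeddings needs far fewer labeled
# examples than training an audio model end to end.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The design point is that the heavy lifting (learning acoustic structure from hundreds of millions of clips) is done once in pretraining; each downstream screening task then only has to fit a small model to a modest labeled set.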

The Google research team trained HeAR on 300 million audio clips drawn from a diverse, de-identified dataset; the cough model in particular was trained on roughly 100 million cough sounds.

The potential applications of HeAR are very broad. For example, Salcit Technologies, a respiratory healthcare company based in India, is exploring how HeAR can enhance its existing AI model, Swaasa, to detect tuberculosis early from cough sounds, which is especially valuable in areas with limited access to healthcare.

The potential of HeAR is not limited to tuberculosis. Because the model works across different microphones and recording environments, it can support low-cost, accessible screening for a range of respiratory diseases, marking an important step forward in acoustic health research. Google's goal is to make this technology widely available and to support the global medical community in developing innovative solutions that break down barriers to early diagnosis and care.
