Ambient speech technology is now common for clinical documentation, but what if the biggest opportunity isn't transcription? In this episode of Faces of Digital Health, host Tjasa Zajc speaks with Henry O'Connell, CEO of Canary Speech, about voice biomarkers: how conversational speech can support detection and monitoring of neurological and behavioral health conditions, relying not on what you say but on how the nervous system produces speech.

We cover:
- Why decades of word-based voice-biomarker research struggled to reach clinical practice
- Canary's approach: analyzing the "primary data layer" behind speech production
- How ~45 seconds of conversational speech can generate millions of data elements for analysis (see the quick arithmetic check after the notes)
- Reported accuracy differences between progressive neurological diseases and behavioral health
- Why validation must be repeated across languages and populations
- Real-world use in primary care screening, clinical trials, and workflow-integrated ambient systems
- Ethics and guardrails for wellness and workplace use
- Emerging operational use cases, including aggression risk signals in hospital rooms

Notes:
00:12 Intro: voice biomarkers beyond ambient documentation
01:03 Potential vs reality: what voice can (and can't yet) prove clinically
01:54 Why earlier voice-biomarker work focused on words, and why it stalled
06:34 The "intuition" problem: we hear mood without words
07:33 Canary's clinical-first approach and global clinical partnerships
08:39 Do you need long-term data? (Agatha Christie example)
09:41 Method: ~45 seconds of conversational speech, ambient capture
10:36 Scale: 2,590 features every 10 ms (~13M data elements per sample)
11:02 Where the features come from (vocal cords, respiration, resonance)
14:36 How models are built: IRB approval, clinician ground truth, ML correlation
17:30 Accuracy and adoption: why it's not standard practice yet
18:22 Reported performance: 98%+ for progressive neurological diseases, accuracy in the 80s for behavioral health
22:44 Culture/language bias concern: why validation per language matters
24:22 Guardrails: validating every new language and population
31:07 Why read-speech scripts misled the field; the conversational-only stance
34:16 What changed: AI, compute, real-time streaming, workflow fit
37:10 How it's used: screening vs suspected disease; incidental findings
39:24 Primary care example: postpartum depression flagged despite "I'm fine"
47:56 Wellness/employee use: de-identified dashboards and ethics
53:42 Technical requirements: device capture, signal-to-noise checks
57:10 In-room monitoring: aggression risk signals for staff safety
1:02:22 Clinical trials: pre-screening and measuring therapeutic impact
1:07:14 Global rollout: regions, languages, and partnerships
1:09:42 Consumer access: wellness product planned for 2026
1:12:32 Wrap-up: why this matters as cognitive health needs grow
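A quick sanity check of the scale figures quoted above. This is back-of-the-envelope arithmetic, not Canary Speech's actual pipeline: the 10 ms frame hop and 2,590-feature count come from the episode, while the sample durations are illustrative assumptions.

```python
# Back-of-the-envelope check of the scale figures quoted in the episode:
# 2,590 features extracted every 10 ms of conversational speech.
# The frame hop and feature count come from the show notes; the sample
# durations below are illustrative assumptions.

FRAME_HOP_MS = 10           # one feature vector every 10 ms
FEATURES_PER_FRAME = 2_590  # features computed per frame

def data_elements(duration_s: int) -> int:
    """Total feature values extracted from a speech sample of duration_s seconds."""
    frames = duration_s * 1000 // FRAME_HOP_MS  # number of 10 ms frames
    return frames * FEATURES_PER_FRAME

for seconds in (45, 50):
    print(f"{seconds} s -> {data_elements(seconds):,} data elements")
# 45 s -> 11,655,000 data elements (~11.7M)
# 50 s -> 12,950,000 data elements (~13M, matching the figure quoted at 10:36)
```

So a ~45-second sample yields roughly 11.7 million feature values, and a sample just under a minute reaches the ~13M figure mentioned in the episode, consistent with "millions of data elements" from under a minute of speech.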