MedGemma Tele-Triage: AI-Powered Clinical Intake Agent
The Silent Crisis: Triage Bottlenecks and Language Barriers

Emergency rooms and urgent-care clinics worldwide are at a breaking point. Patient wait times are climbing, pushing "Left Without Being Seen" (LWBS) rates as high as 10-15%. Respiratory conditions, which are often contagious, require immediate isolation, yet patients sit in crowded waiting rooms for hours. Language barriers compound the delay: a triage nurse often waits 20+ minutes for a translator, during which a patient's condition (e.g., silent hypoxia) may deteriorate.

Impact potential: our solution targets the "Golden Hour" of patient intake. By automating both the initial subjective assessment (History of Present Illness) and objective acoustic analysis, it shortens the path from arrival to clinical decision.

Overall Solution: Effective Use of HAI-DEF Models

MedGemma Tele-Triage is a fully functional, privacy-first web application that acts as an "AI resident doctor". It sits at the clinic front desk (or on a patient's phone at home) and conducts a compassionate clinical interview. We integrated three core pillars of the Google Health AI Developer Foundations (HAI-DEF):

Gemma 2 (MedGemma) for clinical reasoning: Unlike generic LLMs, we use the medically tuned Gemma 2 9B (via Ollama) as the central reasoning agent. It does not just "chat"; it follows a clinical decision tree. It asks clarifying questions based on symptoms, rules out red flags, and synthesizes a structured SOAP note (Subjective, Objective, Assessment, Plan). This note is what physicians actually need, converting five minutes of rambling audio into thirty seconds of readable text.

HeAR (Health Acoustic Representations) for objective biomarkers: We deploy the HeAR model to analyze raw audio waveforms. While the patient speaks, the system isolates cough and breathing events and assigns a "Respiratory Risk Score" (0.0 - 1.0). This objective data point complements the subjective patient history, flagging "silent" respiratory distress.
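The SOAP-note step above can be sketched as a prompt-and-parse pair. The request shape targets a local Ollama `/api/chat` endpoint; the model tag `gemma2:9b`, the system prompt wording, and the header-based parser are illustrative assumptions, not the project's exact implementation.

```python
from dataclasses import dataclass

@dataclass
class SoapNote:
    subjective: str
    objective: str
    assessment: str
    plan: str

# Instructs the model to emit the four sections under fixed headers
# so the reply can be parsed deterministically (wording is an assumption).
SYSTEM_PROMPT = (
    "You are a clinical intake assistant. From the patient interview, "
    "produce a SOAP note with the exact headers "
    "SUBJECTIVE:, OBJECTIVE:, ASSESSMENT:, PLAN:."
)

def build_payload(transcript: str, model: str = "gemma2:9b") -> dict:
    """Request body for a local Ollama /api/chat call (model tag assumed)."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
        "stream": False,
    }

def parse_soap(reply: str) -> SoapNote:
    """Split the model reply on the four required section headers."""
    sections: dict[str, list[str]] = {}
    current = None
    for line in reply.splitlines():
        header = line.strip().rstrip(":").upper()
        if header in ("SUBJECTIVE", "OBJECTIVE", "ASSESSMENT", "PLAN"):
            current = header.lower()
            sections[current] = []
        elif current is not None:
            sections[current].append(line)
    return SoapNote(**{k: "\n".join(v).strip() for k, v in sections.items()})
```

Keeping the parser separate from the HTTP call makes the structured-output contract testable without a running model server.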
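One way to turn per-event acoustic scores into the single 0.0-1.0 Respiratory Risk Score described above is top-k pooling over detected cough/breathing events. HeAR itself produces audio embeddings; the per-segment classifier scores and this particular pooling rule are illustrative assumptions rather than the project's exact method.

```python
def respiratory_risk(segment_scores: list[float], top_k: int = 3) -> float:
    """Aggregate per-event scores (one per cough/breath segment) into
    a single 0.0-1.0 risk score.

    Averaging every segment would dilute one severe cough among many
    normal breaths, so we average only the top-k highest-scoring events.
    """
    if not segment_scores:
        return 0.0  # no cough/breath events detected
    worst = sorted(segment_scores, reverse=True)[:top_k]
    return round(sum(worst) / len(worst), 3)
```

Top-k pooling keeps the score sensitive to a few alarming events while staying robust to recording length, which matters when interview durations vary patient to patient.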
MedASR / Whisper for universal access: The system uses state-of-the-art ASR to handle multilingual input (demonstrated with mixed-language Tamil support), ensuring the AI captures the medical intent regardless of the patient's native tongue.
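Before the transcript reaches the reasoning agent, the ASR output has to be flattened into clean intake text. A minimal sketch, assuming the field names of openai-whisper's `transcribe()` result (`segments`, `text`, `no_speech_prob`); the 0.6 drop threshold is an assumption:

```python
def intake_text(asr_result: dict, min_no_speech: float = 0.6) -> str:
    """Join ASR segments into one transcript, dropping probable non-speech.

    Mixed-language segments (e.g., Tamil with English medical terms) are
    kept verbatim; filtering is only on the segment's no_speech_prob.
    """
    kept = [
        seg["text"].strip()
        for seg in asr_result.get("segments", [])
        if seg.get("no_speech_prob", 0.0) < min_no_speech
    ]
    return " ".join(kept)
```

Passing the mixed-language text through untranslated lets the medically tuned LLM resolve the intent itself, instead of stacking a lossy translation step in front of it.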