AI should strengthen care, not replace human connection, says Uniting's CDIO
In this ADAPT Insider podcast episode, Andrew Dome explains how Uniting turned a frontline AI assistant into a repeatable care workflow that other providers now want to buy, while keeping ethics, trust, and human oversight intact.

What happens when AI removes admin from frontline care without taking people out of the process? Andrew Dome, Chief Digital Information Officer at Uniting, explains how the organisation is using AI to reduce documentation friction, keep people in the loop, and build towards safer care outcomes.

Key Takeaways:
- The strongest frontline AI use cases remove admin where care happens, giving staff more time back in the day without taking people out of the process.
- Trust grows faster when AI is introduced with clear guardrails, visible consent, and human oversight built into the workflow.
- A practical AI use case becomes far more valuable when it proves repeatable enough to scale internally and relevant enough for others to want it too.

Frontline AI works when it removes admin at the point of care

The most valuable AI use cases in aged care sit inside frontline workflows, where they can remove admin at the point of service and return time directly to care. Andrew says their Azure/ChatGPT 5.0-powered AI assistant "Buddy" was designed to reduce documentation friction for frontline teams, especially in home and community care. Staff can use it to capture notes through voice transcription straight after visits instead of losing time later to manual write-ups. For Uniting, that turns AI into a practical care workflow tool rather than another layer of system complexity.

He also points to a stronger sign that Buddy has moved beyond experimentation: other aged care providers have shown interest in adopting it as a software-as-a-service platform, suggesting the tool is solving a repeatable frontline problem rather than a one-off internal need.
Trust depends on visible guardrails and human oversight

AI in care settings cannot sit in the background as an invisible layer. People need to understand what it is doing, where it is being used, and what remains under human control. If AI transcription is being used in front of a client, staff explain what is happening, ask whether the client is comfortable, and show the output so it can be reviewed. That makes the lesson broader than responsible AI as a slogan: in care environments, trust grows when consent, visibility, and quality assurance are built into the workflow itself.

The next gains will come from safer and more responsive care

With AI already reducing admin safely, the next opportunity is to extend its value into earlier action and more consistent care. Andrew links that next step to Uniting's work on AI-supported service interactions and fall prevention, where the aim is to help staff respond sooner, reduce risk, and strengthen continuity of care.