🎓 20% Off | Oxford AI Ethics Executive Programme: https://oxsbs.link/ailyceum

“The things we can say are limited by the things we can think.”

In this episode, Samraj Matharu speaks with Peter Danenberg, Senior Software Engineer specialising in rapid LLM prototyping at Google DeepMind, based in Palo Alto, California. Peter works at the frontier where large language models move from research to real-world systems.

Together, they explore what it actually means to think with AI: not outsourcing thinking to machines, but using them as tools that test, challenge, and sharpen human judgment. The conversation examines the limits of automation, why intelligence is not the same as thinking, and how philosophy underpins critical reasoning in an age of powerful models.

Peter reflects on our tendency to anthropomorphise AI, whether that instinct is a flaw or a feature, and why ethics in LLMs has largely focused on harm reduction rather than human flourishing. They also discuss critical thinking as a response to crisis, the role of judgment when systems scale faster than reflection, and the idea of “peirastic AI”: systems designed not to reassure us, but to probe and test our reasoning.

This episode is a grounded, thoughtful exploration of what cannot be automated, and what responsibility still rests with humans.

EPISODE HIGHLIGHTS
0:00 ➤ Intro / Guest Welcome
2:30 ➤ Peter’s work in rapid LLM prototyping at DeepMind
5:30 ➤ Intelligence vs thinking: why the distinction matters
9:30 ➤ Philosophy as the language of thinking
13:00 ➤ Critical thinking, crisis, and discernment
17:00 ➤ Anthropomorphising AI: bug or feature?
21:30 ➤ Ethics in LLMs and the limits of harm reduction
26:00 ➤ Automation, judgment, and human responsibility
31:00 ➤ Peirastic AI: systems that test us
36:00 ➤ What can’t be automated
41:00 ➤ Future interfaces and tactile thinking
47:00 ➤ Judgment at scale and human accountability
53:00 ➤ Why intelligence averages but wisdom doesn’t
59:00 ➤ Trust, reassurance, and systems that challenge us
1:06:00 ➤ Human-in-the-loop vs human-on-the-loop
1:13:00 ➤ Agency, responsibility, and system design
1:20:00 ➤ Long-term risks and philosophical blind spots
1:28:00 ➤ What remains fundamentally human
1:36:00 ➤ Closing reflections and audience question

🔗 LISTEN, WATCH & CONNECT
🎓 Oxford Programme (20% Off): https://oxsbs.link/ailyceum
🌐 Join the 1K+ Community: https://linktr.ee/theailyceum
💻 Website: https://theailyceum.com
▶️ YouTube: /@the.ai.lyceum
🎧 Spotify: https://open.spotify.com/show/034vux8...
🎧 Apple: https://podcasts.apple.com/us/podcast...
🎧 Amazon: https://music.amazon.com/podcasts/5a6...

ABOUT THE AI LYCEUM
The AI Lyceum is a global community exploring AI, ethics, creativity, and human potential, hosted by Samraj Matharu, Certified AI Ethicist (Oxford) and Visiting Lecturer at Durham University.

#ai #genai #llm #google #deepmind #aiethics #ethics #philosophy #thinking #criticalthinking #automation #humanjudgment #agenticai #responsibleai #theailyceum