“Law does not stop innovation — but poor regulation can quietly distort it.”

In this episode, Samraj speaks with Stephen Dnes, media lawyer, partner at Dnes & Felver, and lecturer in law at Royal Holloway, University of London, about one of the most important questions facing AI today: how should law assign responsibility when AI systems act, decide, and transact autonomously?

Stephen brings a rare transatlantic legal perspective, having worked across UK, EU, and US competition, data, and technology regulation. Together, they explore how existing legal doctrine struggles with agentic systems, why GDPR and the EU AI Act often collide rather than complement one another, and how concepts like liability, mens rea, hazard, and risk must evolve in an AI-mediated world.

The conversation moves beyond surface-level AI debates into deeper legal, economic, and philosophical territory — including how agentic contracts change verification and accountability, why today’s AI systems are better at averaging than wisdom, and what trust really means as humans gradually leave the loop.

Whether you work in law, policy, advertising, technology, or AI strategy, this episode offers a rare, clear-eyed view of how legal systems may adapt to the next era of automation.

EPISODE HIGHLIGHTS
0:00 ➤ Intro / Guest Welcome
3:00 ➤ Defining data, information, and regulation
8:00 ➤ Why law always lags innovation
13:00 ➤ GDPR vs the EU AI Act: a structural tension
18:00 ➤ Hazard vs risk and the limits of precaution
24:00 ➤ Mens rea, strict liability, and AI systems
32:00 ➤ Agentic contracts and responsibility chains
41:00 ➤ Disintermediation and the future of advertising markets
50:00 ➤ Trust, brands, and humans leaving the loop
58:00 ➤ Artificial intelligence vs artificial wisdom
1:05:00 ➤ Law, philosophy, and the role of human judgment
1:09:00 ➤ Closing reflections & audience question

🔑 KEY QUESTIONS ANSWERED
➤ How should responsibility be assigned when AI acts autonomously?
➤ Why do GDPR and the EU AI Act often pull in opposite directions?
➤ What is the legal difference between hazard and risk — and why does it matter?
➤ Can concepts like mens rea apply to AI systems at all?
➤ How do agentic contracts change verification and liability?
➤ Why do AI systems average well but struggle with wisdom?
➤ What does “trust” mean in an AI-mediated economy?

🔗 SUBSCRIBE TO THE AI LYCEUM / @the.ai.lyceum

🎓 20% Off | Oxford AI Ethics Executive Programme: https://oxsbs.link/ailyceum

🔗 Website: https://theailyceum.com
🔗 Instagram: / theailyceum
🔗 LinkedIn Company Page: / dashboard
🔗 LinkedIn Community: https://linktr.ee/theailyceum

#ai #law #aigovernance #aiethics #regulation #agenticai #liability #trust #gdpr #euaiact #digitalmarkets #advertising #adtech #policy #philosophy #technology #theailyceum