Why does AI create knowledge debt? And how can leaders adopt it without losing control of their systems? In this episode of The Digital Lighthouse, Zoe Cunningham speaks with Macs Dickinson, Director of Engineering at LHV Bank. Macs leads engineering teams in a highly regulated banking environment, with previous experience across gambling and ticketing. He is working at the sharp end of AI adoption, where the pressure to move quickly meets the reality of compliance, operational resilience, and customer protection.

They explore the hidden risk that comes with AI-assisted engineering. While generative AI can accelerate delivery, it can also reduce understanding. Teams may ship faster, but lose the deep system knowledge required to maintain, explain, and stand behind what they build. In regulated industries, that trade-off carries real consequences.

Macs shares how his teams are approaching machine learning and AI agents safely, why narrow internal use cases are the smartest place to start, and how governance, monitoring, and ownership help prevent long-term knowledge debt.

🔎 You’ll learn:
• Why AI can speed up delivery while increasing long-term risk
• What “knowledge debt” means in practice
• Why regulated industries cannot rely on “fail fast” thinking
• How machine learning models can be governed through monitoring and human review
• Why generative AI is harder to test than traditional software
• How building internal AI tools first reduces customer risk
• Why “you commit the code, you own the code” is critical in AI-assisted engineering
• How strong guardrails help both humans and AI agents operate safely

💡 Whether you are a CTO, engineering leader, or part of a team adopting AI tools, this conversation offers a grounded view on how to move forward without sacrificing accountability or understanding.

⸻

Timestamps
00:00 – Introduction
01:13 – Macs’ journey into engineering leadership
06:30 – What makes an industry regulated and why the cost of failure is so high
12:18 – Using machine learning for fraud prevention
14:00 – Why generative AI is harder to govern
15:11 – Building an internal AI code reviewer
19:32 – Why narrowing scope reduces risk
22:10 – Guardrails, autonomy, and AI agents
25:20 – Knowledge debt and ownership in AI-assisted engineering

⸻

🎧 Listen to The Digital Lighthouse: Spotify | Apple Podcasts | SoundCloud