The Dangerous Illusion of AI Coding? - Jeremy Howard
Dive into the realities of AI-assisted coding, the origins of modern fine-tuning, and the cognitive science behind machine learning with fast.ai founder Jeremy Howard. In this episode, we unpack why AI might be turning software engineering into a slot machine, and how to maintain true technical intuition in the age of large language models.

GTC, the premier AI conference, is coming: a great opportunity to learn about AI. NVIDIA and partners will showcase breakthroughs in physical AI, AI factories, agentic AI, and inference, exploring the next wave of AI innovation for developers and researchers. Register for virtual GTC for free using my link and win an NVIDIA DGX Spark: https://nvda.ws/4qQ0LMg

Jeremy Howard is a renowned data scientist, researcher, entrepreneur, and educator. As the co-founder of fast.ai, former President of Kaggle, and the creator of ULMFiT, he has spent decades democratizing deep learning. His pioneering work laid the foundation for modern transfer learning and the pre-training and fine-tuning paradigm that powers today's language models.

Key Topics and Main Insights Discussed:
- The Origins of ULMFiT and Fine-Tuning
- The Vibe Coding Illusion and Software Engineering
- Cognitive Science, Friction, and Learning
- The Future of Developers

RESCRIPT:
https://app.rescript.info/public/shar...
https://app.rescript.info/api/public/...

Jeremy Howard:
https://x.com/jeremyphoward
https://www.answer.ai/

---
TIMESTAMPS (fixed):
00:00:00 Introduction & GTC Sponsor
00:04:30 ULMFiT & The Birth of Fine-Tuning
00:12:00 Intuition & The Mechanics of Learning
00:18:30 Abstraction Hierarchies & AI Creativity
00:23:00 Claude Code & The Interpolation Illusion
00:27:30 Coding vs. Software Engineering
00:30:00 Cosplaying Intelligence: Dennett vs. Searle
00:36:30 Automation, Radiology & Desirable Difficulty
00:42:30 Organizational Knowledge & The Slope
00:48:00 Vibe Coding as a Slot Machine
00:54:00 The Erosion of Control in Software
01:01:00 Interactive Programming & REPL Environments
01:05:00 The Notebook Debate & Exploratory Science
01:17:30 AI Existential Risk & Power Centralization
01:24:20 Current Risks, Privacy & Enfeeblement

---
REFERENCES:

Blog Posts:
[00:03:00] fast.ai Blog: Self-Supervised Learning https://www.fast.ai/posts/2020-01-13-...
[00:13:30] DeepMind Blog: Gemini Deep Think https://deepmind.google/blog/accelera...
[00:19:30] Modular Blog: Claude C Compiler Analysis https://www.modular.com/blog/the-clau...
[00:19:45] Anthropic Engineering Blog: Building a C Compiler https://www.anthropic.com/engineering...
[00:48:00] Cursor Blog: Scaling Agents https://cursor.com/blog/scaling-agents
[01:05:15] fast.ai Blog: nbdev Merge Driver https://www.fast.ai/posts/2022-08-25-...
[01:17:30] Jeremy Howard: Response to AI Risk Letter https://www.normaltech.ai/p/is-avoidi...

Books:
[00:08:30] M. Chirimuuta: The Brain Abstracted https://mitpress.mit.edu/978026254804...
[00:30:00] Daniel Dennett: Consciousness Explained https://www.amazon.com/Consciousness-...
[00:42:30] Cesar Hidalgo: Infinite Alphabet / Laws of Knowledge https://www.amazon.com/Infinite-Alpha...

Archive Article:
[00:13:45] MLST Archive: Why Creativity Cannot Be Interpolated https://archive.mlst.ai/read/why-crea...

Research Study:
[00:24:30] METR Study: AI OS Development https://metr.org/blog/2025-07-10-earl...

Papers:
[00:24:45] Fred Brooks: No Silver Bullet https://www.cs.unc.edu/techreports/86...
[00:30:15] John Searle: Minds, Brains, and Programs https://www.cambridge.org/core/journa...

Research Papers:
[00:13:50] Mathilde Caron et al.: Emerging Properties in Self-Supervised Vision Transformers (DINO) https://arxiv.org/abs/2104.14294
[00:25:00] Oxford VGG: Sculptor Identification Paper https://www.robots.ox.ac.uk/~vgg/publ...
[00:36:30] Anthropic Paper: AI Skill Formation https://arxiv.org/pdf/2601.20245

Historical Reference:
[00:36:45] Ebbinghaus: Memory / Spaced Repetition https://www.loc.gov/item/e11000616/

Technical Note:
[00:42:45] John Ousterhout: Slope vs. Intercept https://gist.github.com/gtallen1187/e...

Videos:
[00:59:00] Bret Victor: Inventing on Principle https://vimeo.com/906418692
[01:05:00] Joel Grus: I Don't Like Notebooks