Two builder-operators break down the last two weeks in AI through 10 Tyler Cowen-style questions. Topics include Sora 2's cameo culture, whether "thinking" models are worth the latency, agents that actually help, model choice for client work, the energy and compute wave, and why open weights like DeepSeek matter (or don't) for practitioners.

What you'll get:
- Practical takes from people shipping client projects
- Where Claude 4.5 vs GPT shines (coding vs writing)
- When to use "extended thinking / deep research" vs fast models
- Real talk on agents, meeting schedulers, and workflow design
- Energy, nuclear, and why AI is like infrastructure
- Open weights vs ecosystems: where the real moat is

Chapters:
00:00 – Cold open and intro
00:26 – Who's Tyler Cowen and why this format
02:00 – Q1: Sora 2, IP, and the "cameo economy"
06:44 – What we're doing in this episode (format explainer)
08:11 – Q2: GPT apps and the VibeCoder value prop (workflow architect vs app builder)
17:03 – Q3: "30-hour agents" and autonomy myths (Claude, Replit Agent)
22:57 – Q4: When to use thinking models vs fast models (and deep research)
29:04 – Q5: SB-53 AI transparency — useful or compliance theater?
30:24 – Q6: Picking models for clients: capability, brand, or last best output?
37:01 – Q7: Agents that actually help (Lindy scheduling, weekly pain points)
41:34 – Q8: Compute, energy, and nuclear — should builders be optimistic?
46:50 – Q9: DeepSeek R1 costs and the real moat (ecosystems vs raw performance)
49:33 – Wrap-up and feedback ask