Most prompt engineering advice is cargo cult science. Here's what research actually says about chain-of-thought, few-shot learning, personas, and "magic phrases."

You've seen the tips everywhere: "Think step by step." "You are an expert." "Here are 10 examples." But when researchers actually tested these techniques, the results diverged sharply from the popular advice. Chain-of-thought can drop accuracy by up to 36%. Role prompts showed zero improvement across 162 personas tested. And the "optimal" prompt phrases are model-specific: there's no universal magic. In this video, I break down four popular prompting myths using research from Google, DeepMind, and leading AI labs.

TIMESTAMPS:
0:00 - What is Cargo Cult Prompting?
0:44 - Myth 1: Chain-of-Thought Always Helps
1:57 - Myth 2: Few-Shot Labels Teach the Model
3:10 - Myth 3: Personas Make AI Smarter
4:02 - Myth 4: Magic Phrases Work Everywhere
4:50 - What Actually Works

More Videos:
Software Engineering Basics - • Software Engineering Basics
Software Design - • Software Design

Sources:
[P1] Kojima et al. (2022) Zero-shot CoT: https://arxiv.org/abs/2205.11916
[P2] Wei et al. (2022) Chain-of-Thought Prompting: https://arxiv.org/abs/2201.11903
[P3] Turpin et al. (2023) Unfaithful CoT / up to 36% drop: https://arxiv.org/abs/2305.04388
[F1] Min et al. (2022) Demonstrations / random labels sometimes small drop: https://arxiv.org/abs/2202.12837
[F2] Kossen et al. (2023/2024) ICL learns label relationships / large drops possible: https://arxiv.org/abs/2307.12375
[F3] Lu et al. (2022) Example order sensitivity: https://arxiv.org/abs/2104.08786
[F4] Zhao et al. (2021) Calibrate Before Use: https://arxiv.org/abs/2102.09690
[S1] Zheng et al. (2023/2024) Personas don't improve accuracy: https://arxiv.org/abs/2311.10054
[O1] Yang et al. (2023) OPRO (LLMs as Optimizers) / 34%→80.2% (setup-specific): https://arxiv.org/abs/2309.03409
[T1] He et al. (2024) Prompt formatting impact / up to ~40% swings: https://arxiv.org/abs/2411.10541
[L1] Liu et al. (2023) Lost in the Middle: https://arxiv.org/abs/2307.03172
[L2] Du et al. (2025) Context length alone hurts despite perfect retrieval: https://arxiv.org/abs/2510.05381

#promptengineering #llm #ai #chatgpt #coding #developers
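
If you want to try the comparisons yourself, here is a minimal Python sketch of the prompt variants each myth refers to. The example question, sentiment labels, and persona wording are illustrative placeholders (not the exact prompts from the cited papers), and the call to an actual LLM is left out.

# Sketch of the prompt patterns discussed in the video; contents are placeholders.
QUESTION = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
            "than the ball. How much does the ball cost?")

# Baseline: ask directly.
direct_prompt = f"{QUESTION}\nAnswer:"

# Myth 1: zero-shot chain-of-thought trigger phrase (Kojima et al., [P1]).
cot_prompt = f"{QUESTION}\nLet's think step by step."

# Myth 3: prepend a persona, as in the role prompts evaluated in [S1].
persona_prompt = f"You are an expert mathematician.\n{QUESTION}\nAnswer:"

# Myth 2: few-shot demonstrations; [F1] tests what happens when the labels are randomized.
few_shot_prompt = (
    "Review: Great movie! Sentiment: positive\n"
    "Review: Waste of time. Sentiment: negative\n"
    "Review: I loved every minute. Sentiment:"
)

for name, prompt in [("direct", direct_prompt), ("zero-shot CoT", cot_prompt),
                     ("persona", persona_prompt), ("few-shot", few_shot_prompt)]:
    print(f"--- {name} ---\n{prompt}\n")

Swap in your own task and send each variant to the same model to see whether the "magic" additions actually change accuracy for your setup.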