AI Doesn't Think - It Predicts (Here's How ChatGPT Actually Works)
How does ChatGPT write poetry, debug code, and pass the bar exam — without understanding a single word? The most powerful AI systems on Earth are doing one thing: predicting the next word in a sequence. That's it. A glorified autocomplete that somehow learned to reason. This is the complete guide to how AI actually works — from raw text to something that feels like intelligence. Tokens, embeddings, attention, training, RLHF, hallucinations, and the emergent abilities that keep researchers up at night.

New science videos every week. Subscribe so you don't miss the next one.

Sources:
Language Models are Few-Shot Learners (GPT-3 Core Mechanism & Dimensions) — arXiv — https://arxiv.org/abs/2005.14165
What are tokens and how to count them? — OpenAI Help Center — https://help.openai.com/en/articles/493685...
OpenAI Tokenizer Documentation — OpenAI — https://platform.openai.com/tokenizer
GPT-4 Technical Report — arXiv — https://arxiv.org/abs/2303.08774
Efficient Estimation of Word Representations in Vector Space (Word2Vec) — arXiv — https://arxiv.org/abs/1301.3781
Attention Is All You Need — arXiv — https://arxiv.org/abs/1706.03762
Stanford AI Index Report 2024 (Training Costs) — Stanford HAI — https://aiindex.stanford.edu/wp-content/up...
Training language models to follow instructions with human feedback (RLHF) — arXiv — https://arxiv.org/abs/2203.02155
Survey of Hallucination in Natural Language Generation — ACM Computing Surveys — https://dl.acm.org/doi/10.1145/3571730
Are Emergent Abilities of Large Language Models a Mirage? — arXiv — https://arxiv.org/abs/2304.15004
Large Language Models are Zero-Shot Reasoners (Chain of Thought) — arXiv — https://arxiv.org/abs/2205.11916

📌 TIMESTAMPS:
0:00 — The Hook: AI doesn't think
0:25 — Tokens: How AI reads
1:15 — Embeddings: 12,000-dimension meaning
2:15 — Attention: The mechanism that changed everything
3:30 — Training: It read the entire internet
4:45 — RLHF: Teaching AI manners
5:45 — Hallucinations: When AI gets it wrong
6:30 — Emergent abilities: Nobody programmed reasoning
7:15 — The big question: Is it thinking?

#AI #MachineLearning #ChatGPT #NeuralNetworks #DeepLearning #Science #Technology #ScienceUntold #Education
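The "glorified autocomplete" idea from the video can be seen in miniature with a toy sketch (not the video's code, and vastly simpler than a real language model, which predicts from learned weights over tokens rather than raw word counts): a bigram model that predicts the next word purely from how often words follow each other in a tiny made-up corpus.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on trillions of tokens, not one sentence.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" twice, more than any rival
```

That is next-word prediction at its crudest; the leap to ChatGPT comes from replacing the count table with a transformer that weighs the entire preceding context through attention.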