How LLMs Like ChatGPT Answer You in Seconds — And Why It Feels Like Magic
Let me start with something honest.

The first time I used AI… I typed a question. Pressed Enter. And before my brain could even process my own question… Boom. Answer. Clear. Structured. Confident.

And I froze. I thought: “Wait… did it already know I was going to ask this?”

How does something read my sentence… understand it… analyze it… and reply beautifully… in one second? Is there a secret army of people typing behind the screen? Is there a supercomputer exploding somewhere? Or is it just magic?

Let’s break the illusion.

🧠 First: How Was It Trained?

Nobody sat and “taught” ChatGPT like a teacher in a classroom. It didn’t go to school. It didn’t read books one by one.

Instead, it was trained on massive amounts of publicly available text: books, articles, websites, research, code, conversations. Not to memorize answers, but to learn patterns.

Imagine reading billions of sentences and learning: when someone says this… the next idea usually looks like this.

It learns language like autocomplete on steroids. You type: “The capital of France is…” It predicts: “Paris.”

But that’s the simple version. Now imagine that prediction happening across philosophy, physics, coding, psychology, humor, storytelling.

It doesn’t “know” facts like a human. It predicts the most likely next word based on patterns it learned during training.

But here’s where it becomes powerful. It has billions of parameters. Think of them like tiny adjustable knobs in a massive digital brain. During training, those knobs are nudged again and again, billions of tiny adjustments in total, until the model becomes very, very good at predicting language.

That’s phase one.

👨‍🏫 Phase Two: Humans Step In

After raw training, humans review responses. They compare answers. They rank which one sounds better. They correct mistakes. The model learns from that feedback. This process is known as reinforcement learning from human feedback (RLHF).

This is why it doesn’t just sound smart. It sounds helpful. It learned not just to complete sentences, but to align with human expectations.
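To make “learn patterns, then predict the next word” concrete, here is a toy sketch: a bigram counter that learns which word tends to follow which in a tiny invented corpus. A real model like ChatGPT is a deep neural network with billions of parameters, not a lookup table, but the core job — predict the most likely next word from patterns seen during training — is the same.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which: the crudest possible 'pattern learning'."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent next word seen during training, or None."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Tiny invented corpus, purely for illustration.
corpus = [
    "the capital of france is paris",
    "the capital of italy is rome",
    "paris is the capital of france",
]
model = train(corpus)
print(predict_next(model, "france"))  # → is
```

That is the “simple version” from above. Scale the same idea up from word pairs to long contexts, and from counting to billions of adjustable parameters, and you get the autocomplete-on-steroids behavior described here.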
That’s why answers feel structured and natural.

⚡ Now the Big Question: HOW Is It So Fast?

This is where people imagine something dramatic. You type a question. It travels to a data center. A giant AI wakes up. It scans the entire internet. Does complex calculations. Then sends back an answer.

No. It does NOT search the internet every time. It does NOT think for minutes.

Here’s what actually happens when you press Enter:

• Your text is broken into tokens (small pieces of words).
• Those tokens go through the layers of a neural network.
• Each layer transforms the numbers slightly.
• The model predicts the next token. Then the next. Then the next.

Very, very fast.

How fast? These data centers use specialized hardware, GPUs and AI accelerators, built specifically for matrix math. These chips can do trillions of calculations per second. For them, your question is small. It’s like asking a supercomputer: “2 + 2?” Easy.

So it doesn’t feel slow because:

• The model is already trained.
• It’s not learning in that moment.
• It’s just running forward through fixed math.

Like a calculator. But massively bigger.

That’s why you get responses in one second. Not because it’s simple, but because the heavy work was already done during training.

🗂 What About Memory?

Now here’s something interesting. People think: “It remembers everything I’ve ever said.” Not exactly.

There are two kinds of memory:

• Short-term memory (the context window)
• Long-term memory (if enabled by the system)

Short-term memory is like working memory. It only remembers what’s inside the current conversation window. If the conversation gets too long, old parts get trimmed away. Why? Because the model can only process a limited number of tokens at once. It doesn’t have infinite memory in one go.

Long-term memory, when available, is handled outside the model: stored separately, retrieved when needed.

The model itself doesn’t “store” new knowledge permanently after each chat. It doesn’t learn from you live.
That’s a common myth.

😄 So Why Does It Feel Intelligent?

Because it mimics patterns of intelligence. When something:

• Understands context
• Keeps logical flow
• Adjusts tone
• Explains step-by-step
• Makes jokes at the right time

…our brain says: “That’s intelligence.”

But here’s the truth: it’s extremely advanced pattern prediction. It predicts the next best word. Then the next. Then the next. When prediction becomes extremely accurate… it looks like understanding.

That’s the illusion. And it’s a powerful one.

🔥 The Deeper Insight

The real magic isn’t that AI answers fast. The real magic is this: human knowledge… compressed into mathematics. Billions of human-written patterns. Encoded into numbers. Running on silicon. Responding in seconds.

That’s not magic. That’s engineering at scale.

🎤 Final Thought

Next time you press Enter… and the answer appears almost instantly… don’t imagine a robot thinking hard.

Imagine this instead: years of training. Trillions of adjustments. Massive hardware. Optimized math.