Regularization in ML explained simply | Lasso (L1) and Ridge (L2) | Foundations for ML [Lecture 27]
Published: 11 months ago

I first heard “regularization” during MIT’s graduate-level machine learning course in the fall of 2019. Later, a couple of friends mentioned it during their ML job interviews—specifically, they were asked about “Lasso and Ridge regression.” That’s when I realized that regularization is a key concept I needed to understand better. For new topics, I usually start by Googling “Topic XYZ visually explained.” So, I typed “Regularization visualized” into Google Images and was both amazed and a bit overwhelmed by the figures I saw. Even though the math behind regularization looked straightforward (just apply a penalty term to the loss function), something didn’t add up. As I learned more about Lasso, I became confused: why does Lasso force some model parameters to be exactly zero, while Ridge only makes them small? I set that confusing part aside, not knowing that it would eventually unlock the full beauty of regularization for me. Today, I truly appreciate that visual intuition—even though for two or three years I paid little attention to it.

In this video, I’ll explain regularization in the simplest way possible. I cover:

• What is regularization? Regularization is used in machine learning to prevent overfitting and improve a model’s ability to generalize to new, unseen data. By adding a penalty to the loss function, the model is discouraged from learning overly complex patterns or noise that only fits the training data. This penalty simplifies the model by constraining its parameters, so it focuses on the most important features.

• How does regularization work? Think of an ML model that aims to minimize its loss function. Regularization modifies this loss function by adding a penalty term that keeps the model parameters from growing too large when fitting noisy data. The regularization strength, denoted by λ, controls the trade-off between the original loss and the penalty.

• Types of regularization: Ridge (L2) vs. Lasso (L1) regression. Ridge regression (L2 regularization) modifies the linear regression loss function by adding an L2 penalty (the sum of squared weights). When λ is 0, Ridge is just ordinary linear regression; as λ increases, the model shrinks all weights toward 0 to help prevent overfitting. Lasso regression (L1 regularization) uses an L1 penalty instead, adding the absolute values of the weights. With a small λ, Lasso behaves like linear regression, but when λ is large, Lasso forces some weights to become exactly zero—effectively performing feature selection, since features with a weight of zero are not used for making predictions.

• Why does Lasso set some weights to zero, but not Ridge? This was the million-dollar question that frustrated me for quite some time. Here’s the intuition: even when λ is large, Ridge regression only shrinks the parameters, making them small but never exactly zero, whereas Lasso can set parameters to exactly zero even at moderate λ. Since I can’t paste equations and images here, imagine a graphical illustration where the penalty shapes differ: Ridge’s penalty region forms a circle, while Lasso’s forms a diamond. The diamond’s corners make it likely for the optimization to land on an axis (i.e., to set a parameter to zero), whereas Ridge’s circular region doesn’t encourage exact zeros.

• How do you select a good value for λ? There’s no strict rule, but I share some primary considerations and practical insights, especially if you’re using scikit-learn in Python.

I highly recommend checking out the brilliant visuals and explanations on explained.ai (link in the video description) for even more insight into these concepts. If you’re interested in understanding the full beauty and intuition behind regularization, Lasso, and Ridge regression, then this video is for you. Enjoy, and I’m sure you’ll appreciate these concepts as much as I do!
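The circle-vs-diamond picture has a one-dimensional algebraic counterpart that is even simpler (this sketch is my own addition, not from the video): for a single weight with squared loss (w − a)²/2, the Ridge penalty (λ/2)w² yields the minimizer a/(1 + λ), which is never exactly zero for a ≠ 0, while the Lasso penalty λ|w| yields the soft-thresholding operator, which is exactly zero whenever |a| ≤ λ.

```python
import numpy as np

def ridge_1d(a, lam):
    """Minimizer of (w - a)**2 / 2 + (lam / 2) * w**2."""
    return a / (1.0 + lam)

def lasso_1d(a, lam):
    """Minimizer of (w - a)**2 / 2 + lam * abs(w): soft thresholding."""
    return np.sign(a) * max(abs(a) - lam, 0.0)

for lam in [0.5, 1.0, 5.0]:
    print(lam, ridge_1d(2.0, lam), lasso_1d(2.0, lam))
# Ridge shrinks 2.0 toward 0 but never reaches it;
# Lasso hits exactly 0.0 once lam >= 2.0.
```

The kink of |w| at the origin is what creates the threshold: below it, the best move is to snap the weight to exactly zero, which the smooth w² penalty never does.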
Don’t forget to like, subscribe, and hit the bell icon for more deep dives into machine learning concepts. Thanks for watching!
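To see the Ridge-vs-Lasso contrast from the description in a multi-feature regression, here is a small from-scratch sketch (illustrative data and function names, not code from the video): Ridge via its closed form, Lasso via coordinate descent with soft thresholding.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 100, 5
X = rng.normal(size=(n, p))
# Only the first two features matter; the last three are pure noise.
w_true = np.array([3.0, -2.0, 0.0, 0.0, 0.0])
y = X @ w_true + rng.normal(scale=0.5, size=n)

def ridge_fit(X, y, lam):
    """Closed-form Ridge: minimizes ||y - Xw||^2 + lam * ||w||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def soft_threshold(rho, lam):
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_fit(X, y, lam, n_sweeps=200):
    """Coordinate descent for ||y - Xw||^2 / 2 + lam * ||w||_1."""
    w = np.zeros(X.shape[1])
    for _ in range(n_sweeps):
        for j in range(X.shape[1]):
            # Residual with feature j's current contribution removed
            r_j = y - X @ w + X[:, j] * w[j]
            w[j] = soft_threshold(X[:, j] @ r_j, lam) / (X[:, j] @ X[:, j])
    return w

w_ridge = ridge_fit(X, y, lam=50.0)
w_lasso = lasso_fit(X, y, lam=50.0)
print("Ridge:", np.round(w_ridge, 3))  # all five weights nonzero, only shrunk
print("Lasso:", np.round(w_lasso, 3))  # the three noise weights are exactly 0.0
```

With λ = 50 on this synthetic data, Lasso zeroes out the three noise features while Ridge merely shrinks them toward zero, which is precisely the feature-selection behavior discussed above.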

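On the question of choosing λ, the usual practical answer is cross-validation over a grid of candidate values (in scikit-learn this is what RidgeCV and LassoCV automate, with λ called alpha). A minimal plain-NumPy sketch of the idea, on illustrative synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
w_true = np.array([3.0, -2.0, 0.0, 0.0, 0.0])
y = X @ w_true + rng.normal(scale=0.5, size=120)

def ridge_fit(X, y, lam):
    """Closed-form Ridge: minimizes ||y - Xw||^2 + lam * ||w||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_error(lam, k=5):
    """Mean validation MSE over k folds for a given lambda."""
    idx = np.arange(len(y))
    errs = []
    for val in np.array_split(idx, k):
        train = np.setdiff1d(idx, val)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[val] @ w - y[val]) ** 2))
    return float(np.mean(errs))

# Grids are usually spaced logarithmically, since lambda's effect is multiplicative.
grid = [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
best_lam = min(grid, key=cv_error)
print("best lambda on this grid:", best_lam)
```

Very large λ over-shrinks the informative weights and the validation error climbs, so the grid search lands on a small-to-moderate value; the same loop works for Lasso by swapping in its fitting routine.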