Logits and the Bernoulli Distribution | with examples in TensorFlow Probability
Let's define the Bernoulli distribution not in terms of a constrained parameter, but by one that can take any real value. Here are the notes: https://raw.githubusercontent.com/Cey...

When using a Bernoulli distribution in TensorFlow Probability, one can choose between defining it by probabilities or by logits; the latter is even the default option. But why, and what is a logit? Contrary to a probability, which is limited to the range [0, 1], a logit can take any real value (i.e., it has the range (-inf, +inf)). That allows for much more flexibility when the parameter of the Bernoulli is the output of a previous operation. In mathematical terms, the sigmoid is a differentiable mapping from the logit's range (-inf, +inf) to the probability's range (0, 1), and the logit function is its inverse.

-------

📝 : Check out the GitHub Repository of the channel, where I upload all the handwritten notes and source-code files (contributions are very welcome): https://github.com/Ceyron/machine-lea...

📢 : Follow me on LinkedIn or Twitter for updates on the channel and other cool Machine Learning & Simulation stuff: / felix-koehler and / felix_m_koehler

💸 : If you want to support my work on the channel, you can become a Patreon here: / mlsim

-------

Timestamps:
00:00 Introduction
00:35 Definition
01:15 Visualizing the Logit
02:32 TFP: Setup
02:55 TFP: Bernoulli by probability vs by logit
04:54 Inverse Mapping: The sigmoid
06:40 Why the name logit?
07:06 TFP: Using the sigmoid
07:32 End-Card
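The logit ↔ probability mapping described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the video's source code: the `sigmoid` and `logit` helpers are defined here by hand, and the relationship to TensorFlow Probability's `Bernoulli(logits=...)` constructor is noted only in comments.

```python
import numpy as np

def sigmoid(x):
    """Map a logit from (-inf, +inf) to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    """Inverse of the sigmoid: map a probability in (0, 1) back to a logit."""
    return np.log(p / (1.0 - p))

# A logit can be any real number, e.g. the raw output of a neural network ...
l = 2.0

# ... and the sigmoid squashes it into a valid Bernoulli probability.
p = sigmoid(l)
print(p)  # ~0.8808

# The logit function recovers the unconstrained parameter.
print(logit(p))  # ~2.0

# In TensorFlow Probability, this is (conceptually) what happens internally:
#   tfp.distributions.Bernoulli(logits=l)
# behaves like
#   tfp.distributions.Bernoulli(probs=sigmoid(l))
```

Because the sigmoid is differentiable everywhere, gradients can flow through it, which is why the unconstrained logit parameterization is convenient when the Bernoulli parameter comes out of an upstream computation.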