TALK TITLE: The Alignment Problem: Machine Learning and Human Values

SPEAKER: Brian Christian, author and visiting scholar at UC Berkeley

ABSTRACT: With the incredible growth of machine learning (ML) over recent years has come an increasing concern about whether ML systems’ objectives truly capture their human designers’ intent: the so-called “alignment problem.” Over the last five years, these questions of both ethics and safety have moved from the margins of the field to become arguably its most central concerns. The result is something of a movement: a vibrant, multifaceted, interdisciplinary effort to address the alignment problem head-on, which is producing some of the most exciting research happening today. Brian Christian, visiting scholar at CITRIS and author of the acclaimed bestsellers “The Most Human Human” and “Algorithms to Live By,” will survey this landscape of recent progress and the frontier of open questions that remain.

BIO: Brian Christian is the author of the acclaimed bestsellers “The Most Human Human” and “Algorithms to Live By” (with Tom Griffiths), which have been translated into nineteen languages. A visiting scholar at the CITRIS Policy Lab, the Scientific Communicator in Residence at the Simons Institute, and an Affiliate of the Center for Human-Compatible Artificial Intelligence, he lives in San Francisco.