Dario Amodei (Anthropic CEO) - $10 Billion Models, OpenAI, Scaling, & Alignment

Here is my conversation with Dario Amodei, CEO of Anthropic. Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

Transcript: https://www.dwarkeshpatel.com/dario-a...
Apple Podcasts: https://apple.co/3rZOzPA
Spotify: https://spoti.fi/3QwMXXU
Follow me on Twitter: @dwarkesh_sp

---

I'm running an experiment on this episode. I'm not doing an ad. Instead, I'm just going to ask you to pay for whatever value you feel you personally got out of this conversation. Pay here: https://bit.ly/3ONINtp

---

Timestamps:
(00:00:00) - Introduction
(00:01:00) - Scaling
(00:15:46) - Language
(00:22:58) - Economic Usefulness
(00:38:05) - Bioterrorism
(00:43:35) - Cybersecurity
(00:47:19) - Alignment & mechanistic interpretability
(00:57:43) - Does alignment research require scale?
(01:05:30) - Misuse vs misalignment
(01:09:06) - What if AI goes well?
(01:11:05) - China
(01:15:11) - How to think about alignment
(01:31:31) - Is modern security good enough?
(01:36:09) - Inefficiencies in training
(01:45:53) - Anthropic's Long Term Benefit Trust
(01:51:18) - Is Claude conscious?
(01:56:14) - Keeping a low profile