Sam Altman discusses the biggest AI risks facing society as artificial intelligence becomes more powerful. The video explores concerns around superintelligent AI, national security threats, misuse by bad actors, AI alignment challenges, and the long-term risks of losing control over advanced AI systems. It also touches on how AI could become deeply embedded in society and the importance of preventing dangerous outcomes before they occur.

There's decades of sci-fi telling us that AI is eventually going to kill us all. And since you know more about AI than arguably anybody in this room, I just want to ask you: what keeps you up at night? What are the things you worry about when it comes to AI, and how do we prevent the things you worry about from coming true?

I think there are three scary categories. The first is a bad actor getting superintelligence first and misusing it before the rest of the world has a powerful enough version to defend itself. An adversary of the US says, "I'm going to use this superintelligence to design a bioweapon, to take down the United States power grid, to break into the financial system and take everyone's money."

Category two is what's broadly called loss-of-control incidents. That's kind of like the sci-fi movie: the AI says, "Oh, I don't actually want you to turn me off. I'm afraid I can't do that." That is less of a concern to me than the first category, but it would be a very grave concern if it came to pass. There's a lot of work we and other companies do on model alignment to prevent that from happening, but as these systems become so powerful, it's a real concern.

And then there's the third one. The first two are easy to think about and imagine; the third is more difficult to imagine but quite scary, and I'll explain what it is and then give a short-term and a long-term example. This is the category where the models kind of accidentally take over the world. They never wake up, they never do the sci-fi thing, they never open the pod bay doors, but they just become so ingrained in society, and they're so much smarter than we are, and we can't really understand what they're doing, yet we do kind of have to rely on them. And even without a drop of malevolence from anyone, society can just veer in a sort of strange