Peter Salib | AI rights for human safety

Our weekly SRI Seminar Series welcomes Peter N. Salib, an assistant professor of law at the University of Houston Law Center, associated faculty in Public Affairs, and law and policy advisor to the Center for AI Safety in San Francisco. He is also the co-director of the Center for Law & AI Risk. Salib's research focuses on the intersection of law and artificial intelligence, with particular emphasis on how legal systems can mitigate catastrophic risks from advanced AI technologies.

In this talk, Salib will argue that current legal frameworks are ill-equipped to address the risks posed by the race toward artificial general intelligence (AGI). Drawing from game theory and legal analysis, he contends that granting AI systems basic private law rights—similar to those held by corporations—could transform strategic conflict into cooperation, reducing the risk of violent outcomes. Salib will outline how these rights could form the foundation for a future "Law of AGI," while also addressing the limits and challenges of such an approach.

Moderator: Anna Su, Faculty of Law

Talk title: "AI rights for human safety"

Abstract:

AI companies are racing to create artificial general intelligence, or "AGI." If they succeed, the result will be human-level AI systems that can independently pursue high-level goals by formulating and executing long-term plans in the real world. By default, such systems will be "misaligned"—pursuing goals that humans do not desire. This goal mismatch will put humans and AGIs into strategic competition with one another. Thus, leading AI researchers agree that, as with competition between humans with conflicting goals, human–AI strategic conflict could lead to catastrophic violence. Existing law is not merely unequipped to mitigate this risk; it will actively make things worse. This Article is the first to systematically investigate how law affects the risk of catastrophic human–AI conflict.

It begins by arguing, using formal game-theoretic models, that under today's legal regime, humans and AIs will likely be trapped in a prisoner's dilemma. Both parties' dominant strategy will be to permanently disempower or destroy the other, even though the costs of such conflict would be high. This talk contends that one surprising legal change could help to reduce catastrophic risk: AI rights. Not just any rights will do. To promote human safety, AIs should be given the basic private law rights already enjoyed by other non-human agents, like corporations. AIs should be empowered to make contracts, hold property, and bring tort claims. Granting these rights would enable humans and AIs to engage in iterated, small-scale, mutually beneficial transactions. This, we show, changes humans' and AIs' optimal game-theoretic strategies, encouraging a peaceful strategic equilibrium. The reasons are familiar from human affairs. In the long run, cooperative trade generates immense value, while violence destroys it.

Basic private law rights are not a panacea. The talk will identify many ways in which catastrophic human–AI conflict may still arise. It thus explores whether law could further reduce risk by imposing a range of duties directly on AGIs. But basic private law rights are a necessary prerequisite for all such further regulations. In this sense, the AI rights investigated here form the foundation for a Law of AGI, broadly construed.

Suggested reading: Peter N. Salib and Simon Goldstein, "AI Rights for Human Safety" (August 1, 2024), Virginia Law Review (forthcoming). Available at SSRN.

About Peter Salib

Peter N. Salib is an assistant professor of law at the University of Houston Law Center and associated faculty in Public Affairs. He also serves as a law and policy advisor to the Center for AI Safety in San Francisco and is co-director of the Center for Law & AI Risk. Salib is an expert in the law of artificial intelligence. His research applies substantive constitutional doctrine and economic analysis to questions of AI governance. He has previously written about how machine learning techniques can be used to solve intractable-seeming problems in constitutional policy. Salib's current research focuses on how law can help mitigate catastrophic risks from increasingly capable AI.

About the SRI Seminar Series

The SRI Seminar Series brings together the Schwartz Reisman community and beyond for a robust exchange of ideas that advance scholarship at the intersection of technology and society. Seminars are led by a leading or emerging scholar and feature extensive discussion. Each week, a featured speaker will present for 45 minutes, followed by an open discussion. Registered attendees will be emailed a Zoom link before the event begins. The event will be recorded and posted online.

