Large language models like ChatGPT have popularised and revolutionised AI in the public consciousness, as well as presented new innovation opportunities, but they also raise issues including embedded bias and stereotyping, indiscriminate scraping of text and images to form training sets, hallucinatory tendencies, and the threat of "fake news on steroids". In this fireside chat, Lilian Edwards (Turing Fellow) and Adrian Weller (Director of Research in Machine Learning, University of Cambridge & Programme Director for Safe and Ethical AI, The Alan Turing Institute) address regulatory issues surrounding the use of these technologies, including how effective the draft EU AI Act might be, as well as existing laws such as copyright and data protection. An interesting and little-examined issue is how (or whether) these models self-regulate through their own terms of service and privacy policies. There is an urgent need to work out how to promote safe, ethical and responsible use of these technologies. (P.S. ChatGPT co-wrote this text.)

Find out all about AI UK here: https://ai-uk.turing.ac.uk/

To keep up with the latest AI UK releases and stay in the loop on the next AI UK, follow The Alan Turing Institute on:
Twitter: / turinginst
LinkedIn: / the-alan-turing-institute
Instagram: / theturinginst