TQ2022 Keynote - What Could Possibly Go Wrong? - Fiona Charles
Software is everywhere, permeating our society and influencing every aspect of our lives. It can and does bring enormous benefits, but it can also do great harm. Decisions once made by humans are now often made by algorithms, prompting one writer to comment that, “We are increasingly…abdicating our power to make decisions based on our own judgement, including our moral convictions.” Yet many of the models and criteria used in AI systems to assess human behavior and motives are unproven at best and, at worst, based on junk science.

An algorithm decides if you should get a job interview or if your CV should go on the discard pile. If you get that interview, another AI system may analyse your words and facial expressions to decide if you are a trustworthy person. Meanwhile, a workplace surveillance system could be making judgements about your productivity and interactions with your co-workers, ultimately determining your compensation, promotional prospects, and future with the organisation. More critically, software systems on two Boeing 737 MAX planes interpreted the signal from a faulty sensor and decided on a course of action that crashed the planes, killing hundreds of people.

What can testers do to help maximize the benefits and minimize the harms of all this software? First we ask, “What could possibly go wrong?” Join Fiona Charles to explore what else we can do and what other questions we can ask.