01. Mike Williams, Fast Forward Labs, Visits theCUBE (00:15)
02. Supervised Machine Learning (00:57)
03. Legal Framework Not Keeping up with Development (02:52)
04. Assessing and Sustaining Machine Learning (07:26)

#theCUBE #FastForwardLabs #BigDataSV #SiliconANGLE

---

Can machine learning create liability issues for businesses? | #BigDataSV
by Nelson Williams | Mar 31, 2016

In the new digital era, a business needs a store of data to inform its decisions and interactions, but data is useless unless someone acts on it. Unfortunately, it is easy to collect more data than any human team could possibly sort through, let alone put into practice. That is where machine learning comes in: these systems learn from stores of information to locate patterns and create rules at computer speed. Machine learning is becoming a valuable, perhaps even necessary, part of business infrastructure.

To gain some insight into machine learning, Peter Burris (@plburris) and Jeff Frick (@jefffrick), cohosts of theCUBE from the SiliconANGLE Media team, joined Mike Williams, research engineer at Fast Forward Labs, during the BigDataSV 2016 event in San Jose, California, where theCUBE is celebrating #BigDataWeek, including news and events from the #StrataHadoop conference.

Rules of thumb

What machine learning does, Williams said, is uncover general patterns from historical data. These patterns, and how the machine works with them, are called "rules of thumb." The problem, he continued, is that these rules of thumb are not always correct. This raises the possibility of the system doing harm, whether to the business, a product or even a person. Machine learning, he said, can set in stone biases drawn from the historical data, such as racial bias. The people who deploy machine learning models need to be aware of the legal issues this could cause.
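The way a learned "rule of thumb" can set a historical bias in stone can be shown with a minimal sketch. The hiring records, feature names, and majority-vote learner below are hypothetical illustrations, not anything from the interview:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (neighborhood, hired_flag).
# The history itself is biased: applicants from "north" were rarely hired.
history = [
    ("north", 0), ("north", 0), ("north", 1), ("north", 0),
    ("south", 1), ("south", 1), ("south", 0), ("south", 1),
]

def learn_rule_of_thumb(records):
    """Learn a majority-vote rule per feature value from historical data."""
    counts = defaultdict(lambda: [0, 0])  # value -> [rejected, hired]
    for value, label in records:
        counts[value][label] += 1
    # The "rule of thumb": predict whatever outcome was historically more common.
    return {value: int(c[1] > c[0]) for value, c in counts.items()}

rule = learn_rule_of_thumb(history)
print(rule)  # {'north': 0, 'south': 1} -- the historical bias is now a fixed rule
```

The learner does nothing malicious; it simply generalizes the pattern it was given, which is exactly how a biased history becomes a biased automated decision.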
Breaking the rules

The machine learning community, Williams said, lacks a clear set of rules, short enough to write on a postcard, for certifying that a machine learning system is safe. In the meantime, one option is to censor or adjust the data to remove variables that could bias the model. Fresh, updated data is also a vital part of building an accurate, safe model. Companies want data scientists to flag a problem before it becomes a liability, Williams said. For now, humans remain part of the process.

@theCUBE #BigDataSV #StrataHadoop
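The censoring option mentioned above can be sketched as a preprocessing step that drops sensitive variables before records ever reach the model. The record fields and the choice of which fields count as sensitive are hypothetical examples (note that a proxy like a zip code may need censoring too):

```python
# Hypothetical applicant records with sensitive fields mixed in.
records = [
    {"zip": "94103", "years_exp": 4, "ethnicity": "A", "hired": 1},
    {"zip": "94110", "years_exp": 2, "ethnicity": "B", "hired": 0},
]

# Fields to withhold from training: direct attributes and likely proxies
# (a zip code can stand in for neighborhood or race).
SENSITIVE = {"ethnicity", "zip"}

def censor(record, sensitive=SENSITIVE):
    """Return a copy of the record with sensitive variables removed."""
    return {k: v for k, v in record.items() if k not in sensitive}

training_data = [censor(r) for r in records]
print(training_data)
# [{'years_exp': 4, 'hired': 1}, {'years_exp': 2, 'hired': 0}]
```

Censoring alone does not guarantee a fair model, since remaining variables can still correlate with the removed ones, which is one reason humans stay in the loop.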