This video provides a slow description of the introduction to the report "On the Opportunities and Risks of Foundation Models" by R. Bommasani et al., shared on arXiv in August 2021.

Timestamps:
00:00 - On the Opportunities and Risks of Foundation Models
01:07 - Outline
01:58 - What is a foundation model?
04:31 - Emergence and homogenisation
07:01 - Foundation models - origin story
09:44 - Foundation models - NLP developments
12:08 - Foundation models - homogenisation
13:52 - Foundation models - risks and naming
17:43 - Social impact
21:06 - Ecosystem
24:24 - Think ecosystem, act model
25:46 - The future of foundation models
30:53 - The role of academia and incentives
33:58 - Resource accessibility
38:33 - A report on foundation models
41:43 - Capabilities
47:09 - Applications
51:54 - Technology
01:02:47 - Society
01:08:11 - Responses/critiques
01:21:39 - Summary and further resources

The figures in the video are taken from the report - credit for these colourful creations is due to Drew Hudson (in partnership with the authors of the corresponding sections of the report).

Slides (pdf): https://samuelalbanie.com/files/diges...

The paper can be found on arXiv here: https://arxiv.org/abs/2108.07258

Papers mentioned in the video that also fit within the YouTube character limit (a complete list can be found at http://samuelalbanie.com/digests/2022... ):
R. K. Merton, "The Normative Structure of Science" (1942)
A. M. Turing, "Intelligent Machinery" (1948)
A. L. Samuel, "Some Studies in Machine Learning Using the Game of Checkers", IBM Journal of R&D (1959)
P. Anderson, "More is different: broken symmetry and the nature of the hierarchical structure of science", Science (1972)
S. Bozinovski et al., "The influence of pattern similarity and transfer of learning upon training of a base perceptron B2" (original in Croatian, 1976)
V. R. de Sa, "Learning Classification with Unlabeled Data", NeurIPS (1993)
D. Lowe, "Object recognition from local scale-invariant features", ICCV (1999)
C. Kerr, "The uses of the university", Harvard University Press (2001)
L. Smith and M. Gasser, "The development of embodied cognition: Six lessons from babies", Artificial Life (2005)
A. Beberg et al., "Folding@home: Lessons from eight years of volunteer distributed computing" (2009)
J. Turian et al., "Word representations: a simple and general method for semi-supervised learning", ACL (2010)
A. Krizhevsky et al., "ImageNet classification with deep convolutional neural networks", NeurIPS (2012)
T. Mikolov et al., "Efficient Estimation of Word Representations in Vector Space", ICLR (2013)
J. Pennington et al., "GloVe: Global Vectors for Word Representation", EMNLP (2014)
J. Schmidhuber, "Deep learning in neural networks: An overview", Neural Networks (2015)
Y. LeCun et al., "Deep learning", Nature (2015)
A. Dai et al., "Semi-supervised sequence learning", NeurIPS (2015)
O. Russakovsky et al., "ImageNet Large Scale Visual Recognition Challenge", IJCV (2015)
K. Radinsky, "Data monopolists like Google are threatening the economy", Harvard Business Review (2015)
D. Silver et al., "Mastering the game of Go with deep neural networks and tree search", Nature (2016)
M. Abadi et al., "TensorFlow: a system for large-scale machine learning", OSDI (2016)
A. Vaswani et al., "Attention is all you need", NeurIPS (2017)
A. Radford et al., "Improving language understanding by generative pre-training" (2018)
J. Howard et al., "Universal Language Model Fine-tuning for Text Classification", ACL (2018)
M. Peters et al., "Deep Contextualized Word Representations", NAACL (2018)
A. Paszke et al., "PyTorch: An imperative style, high-performance deep learning library", NeurIPS (2019)
A. Radford et al., "Language Models are Unsupervised Multitask Learners" (2019)
J. Devlin et al., "BERT: Pre-training of deep bidirectional transformers for language understanding", NAACL-HLT (2019)
Y. Liu et al., "RoBERTa: A Robustly Optimized BERT Pretraining Approach", arXiv (2019)
C. Raffel et al., "Exploring the limits of transfer learning with a unified text-to-text transformer", JMLR (2019)
M. Mitchell et al., "Model cards for model reporting", FAccT (2019)
T. Brown et al., "Language models are few-shot learners", NeurIPS (2020)
J. Kaplan et al., "Scaling laws for neural language models", arXiv (2020)
P. Yin et al., "TaBERT: Pretraining for joint understanding of textual and tabular data", arXiv (2020)
A. T. Liu et al., "Mockingjay: Unsupervised Speech Representation Learning with Deep Bidirectional Transformer Encoders", ICASSP (2020)
M. Lewis et al., "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension", ACL (2020)