Presentation of our paper "Embodied exploration of deep latent spaces in interactive dance-music performance", accepted at the MOCO'24 conference on movement and computing (https://moco24.movementcomputing.org/).

Abstract: In recent years, significant advances have been made in deep learning models for audio generation, offering promising tools for musical creation. In this work, we investigate the use of deep audio generative models in interactive dance/music performance. We adopted a performance-led research design approach, establishing an art-research collaboration between a researcher/musician and a dancer. First, we describe our motion-sound interactive system integrating a deep audio generative model and propose three methods for embodied exploration of deep latent spaces. Then, we detail the creative process for building the performance, centered on the co-design of the system. Finally, we report feedback from the dancer's interviews and discuss the results and perspectives. The code implementation is publicly available on our GitHub.

Link to our GitHub page: https://ircam-ismm.github.io/embodied...
Link to our GitHub: https://github.com/ircam-ismm/embodie...
Link to our paper: https://hal.science/hal-04602229

Acknowledgments: This work has been supported by the Paris Ile-de-France Région in the framework of DIM AI4IDF, and by Nuit Blanche-Ville de Paris. We extend our heartfelt thanks to the dancer Marie Bruand, without whom this study would not have been possible. We are also deeply grateful to our friends and colleagues from the STMS-IRCAM lab, particularly Victor Paredes, Antoine Caillon, and Victor Bigand.
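For readers curious about the general idea behind "embodied exploration of a latent space", the sketch below is a minimal illustration, not the authors' implementation (see the linked GitHub repository for that). It assumes a pretrained audio decoder and a hand-chosen linear map from motion features to latent coordinates; the `DummyDecoder`, the dimensions, and the `decode_from_motion` helper are all hypothetical stand-ins.

```python
# Hypothetical sketch: steering a deep audio generative model's latent
# space from motion features. All names and dimensions are illustrative;
# the authors' actual code is at https://github.com/ircam-ismm/embodie...
import numpy as np
import torch
import torch.nn as nn

LATENT_DIM = 8      # latent dimensionality (model-dependent)
MOTION_DIM = 3      # e.g. accelerometer x/y/z
BLOCK_SIZE = 2048   # audio samples decoded per latent frame

class DummyDecoder(nn.Module):
    """Stand-in for a pretrained audio decoder (e.g. an exported generative model)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(LATENT_DIM, BLOCK_SIZE)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Maps a latent point (1, LATENT_DIM) to one audio block (1, BLOCK_SIZE).
        return torch.tanh(self.proj(z))

# One simple embodiment strategy: anchor a base latent point and displace
# it along fixed directions driven by the performer's motion features.
rng = np.random.default_rng(0)
motion_to_latent = rng.standard_normal((LATENT_DIM, MOTION_DIM)).astype(np.float32)
z_base = np.zeros(LATENT_DIM, dtype=np.float32)

decoder = DummyDecoder().eval()

def decode_from_motion(motion: np.ndarray, gain: float = 1.0) -> np.ndarray:
    """Map one motion frame to a latent point and decode an audio block."""
    z = z_base + gain * motion_to_latent @ motion
    with torch.no_grad():
        audio = decoder(torch.from_numpy(z).unsqueeze(0))
    return audio.squeeze(0).numpy()

# Example: a single motion frame drives one block of audio.
audio_block = decode_from_motion(np.array([0.1, -0.2, 0.05], dtype=np.float32))
print(audio_block.shape)  # (2048,)
```

In a real-time setting this mapping would run once per motion frame, streaming successive latent points to the decoder; the paper's three exploration methods differ in how that motion-to-latent mapping is constructed and experienced by the dancer.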