Jane Wu, Ph.D. Student at Stanford

Existing machine learning models still struggle to predict the high-frequency details present in data because of regularization, a technique necessary to avoid overfitting. One line of research therefore procedurally embeds high-frequency information into low-frequency data, so that even when the latter is smoothed by the network, the former retains its high-frequency detail (a toy sketch of this idea appears after the chapter list below). Join us on February 8 as Stanford researcher Jane Wu discusses predicting cloth geometry and dynamics, focusing on the efficacy of learning perturbations embedded in low-frequency geometric structures for the specific application of virtual cloth.

A recipient of the Don Chamberlin Research Award in 2018 and a presenter at the Symposium on Computer Animation (SCA) 2021, Jane has conducted research in underwater robotics and human-robot interaction, and has worked with Google and with NVIDIA's Autonomous Vehicles Perception team.
https://janehwu.github.io

Recovering Geometric Information with Learned Texture Perturbations
Jane Wu, Yongxu Jin, Zhenglin Geng, Hui Zhou, Ronald Fedkiw
https://arxiv.org/abs/2001.07253

Skinning a Parameterization of Three-Dimensional Space for Neural Network Cloth
Jane Wu, Zhenglin Geng, Hui Zhou, Ronald Fedkiw
https://arxiv.org/abs/2006.04874

Producer: Elisa Agor, Chair, Silicon Valley ACM SIGGRAPH
For more details: https://www.meetup.com/SV-SIGGRAPH/ev...

0:00 Silicon Valley ACM Chapter Intro
1:07 Presentation
1:27 Introduction
2:43 Outline

Motivation and Background
3:28 Towards Real-time Cloth Capture
4:34 Problem Statement
5:01 Motivation
6:16 Recovering Geometric Information with Learned Texture Perturbations
6:26 Texture Sliding
10:48 Dataset

Methodology and Dataset Generation
11:38 Texture Sliding Dataset Generation
13:16 Texture Sliding Neural Network (TSNN)
13:58 TSNN Test Results

Interpolation and Reconstruction
14:33 Novel View Interpolation
16:56 3D Reconstruction
19:11 Texture Sliding Summary
20:43 Skinning a Parameterization of Three-Dimensional Space for Neural Network Cloth

Skinned Tetrahedral Mesh Framework for Cloth
21:10 Kinematically Deforming Skinned Mesh (KDSM)
22:20 Tetrahedral Mesh Framework for Learning Cloth

Cloth Embedding and Dataset Generation
25:17 Dataset Generation
29:44 DNN Training
30:05 Results
31:08 Mocap Video Comparisons
32:03 Modified Body Shapes
33:25 Modified Cloth Sizes
33:39 Skinned Mesh Summary
35:34 Future Directions
36:58 Thank you and Q&A
37:18 Q: Extend to different clothing types?
38:30 Q: Temporal stability artifacts?
39:53 Q: DNN training performance scaling?
41:26 Q: Volumetric representation of cloth?
44:58 Q: Coverage of the dataset?
46:50 Q: How much is NN? Speed performance/frame rate?
50:27 Close
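
The snippet below is a minimal 1D sketch of the general principle stated in the abstract, not the method from the papers above: if the high-frequency content comes from a fixed, known structure (here a `carrier` signal, standing in for a fixed parameterization or texture), then the learned quantity can be a smooth low-frequency `envelope` that survives the smoothing a regularized network tends to apply, and the detail is recovered at decode time. The names `carrier`, `envelope`, and `smooth` are illustrative assumptions, and a moving-average filter merely stands in for a regularized network.

```python
# Toy 1D analogue of embedding high-frequency detail in a low-frequency
# quantity (illustrative sketch only).
import numpy as np

u = np.linspace(0.0, 1.0, 1024)

# Fixed, known high-frequency carrier (not learned).
carrier = np.sin(80 * np.pi * u)

# Ground-truth high-frequency signal: a smooth envelope modulating the carrier.
envelope = 0.5 + 0.4 * np.sin(2 * np.pi * u)
target = envelope * carrier

def smooth(x, k=51):
    """Moving-average filter, standing in for a regularized network's output."""
    return np.convolve(x, np.ones(k) / k, mode="same")

# (a) "Predict" the high-frequency signal directly: smoothing wipes out detail.
direct = smooth(target)

# (b) "Predict" only the low-frequency embedding (the envelope) and decode by
#     recombining with the known carrier: the detail survives.
embedded = smooth(envelope) * carrier

interior = slice(51, -51)  # ignore boundary padding effects of the filter
print("max error, direct prediction      :", np.abs(direct - target)[interior].max())
print("max error, low-frequency embedding:", np.abs(embedded - target)[interior].max())
```

In the talk itself, the corresponding low-frequency quantities appear to be the texture-space perturbations (texture sliding) and the skinned tetrahedral-mesh parameterization described in the two arXiv papers linked above.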