[CVPR 2024] 3D Human Pose Perception from Egocentric Stereo Videos
Abstract: While head-mounted devices are becoming more compact, they provide egocentric views with significant self-occlusions of the device user. Hence, existing methods often fail to accurately estimate complex 3D poses from egocentric views. In this work, we propose a new transformer-based framework to improve egocentric stereo 3D human pose estimation, which leverages the scene information and temporal context of egocentric stereo videos. Specifically, we utilize 1) depth features from our 3D scene reconstruction module with uniformly sampled windows of egocentric stereo frames, and 2) human joint queries enhanced by temporal features of the video inputs. Our method is able to accurately estimate human poses even in challenging scenarios, such as crouching and sitting. Furthermore, we introduce two new benchmark datasets, i.e., UnrealEgo2 and UnrealEgo-RW (RealWorld). The proposed datasets offer a much larger number of egocentric stereo views with a wider variety of human motions than the existing datasets, allowing comprehensive evaluation of existing and upcoming methods. Our extensive experiments show that the proposed approach significantly outperforms previous methods. We will release UnrealEgo2, UnrealEgo-RW, and trained models on our project page.

Chapters:
0:00 Introduction
0:06 Egocentric 3D Human Pose Estimation
0:26 Stereo Methods
1:07 Key Contributions
1:47 Proposed Method
2:52 Proposed Method (3D Module)
3:55 UnrealEgo2 Dataset
4:54 UnrealEgo2 (Motion Diversity)
5:16 New Portable Stereo Device
5:35 UnrealEgo-RW Dataset
6:05 Quantitative Results
6:37 Qualitative Results
7:05 Qualitative Results (In-the-Wild) and Applications
7:35 End
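The pipeline described in the abstract — depth features from a scene-reconstruction module plus per-joint queries enhanced by temporal video features, fused in a transformer — can be sketched roughly as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: all module names, encoder choices (simple convolutions, a GRU for temporal context), and dimensions are assumptions for clarity.

```python
import torch
import torch.nn as nn

class EgoStereoPoseSketch(nn.Module):
    """Hypothetical sketch of the described pipeline: scene-depth features
    and temporally enhanced joint queries feed a transformer decoder that
    regresses 3D joint positions. All names are illustrative."""

    def __init__(self, num_joints=16, dim=256):
        super().__init__()
        # Stand-in encoders; the paper's system uses its own stereo backbone
        # and a 3D scene reconstruction module.
        self.frame_encoder = nn.Conv2d(6, dim, kernel_size=8, stride=8)  # stereo pair -> tokens
        self.depth_encoder = nn.Conv2d(1, dim, kernel_size=8, stride=8)  # reconstructed depth -> tokens
        self.joint_queries = nn.Parameter(torch.randn(num_joints, dim))  # learnable per-joint queries
        self.temporal = nn.GRU(dim, dim, batch_first=True)               # temporal context over the video window
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 3)                                    # 3D position per joint

    def forward(self, stereo_frames, depth_map):
        # stereo_frames: (B, T, 6, H, W) stacked left/right RGB; depth_map: (B, 1, H, W)
        B, T, C, H, W = stereo_frames.shape
        frame_tokens = self.frame_encoder(stereo_frames.reshape(B * T, C, H, W))
        frame_tokens = frame_tokens.flatten(2).transpose(1, 2)           # (B*T, N, dim)
        frame_tokens = frame_tokens.reshape(B, T, -1, frame_tokens.shape[-1])
        depth_tokens = self.depth_encoder(depth_map).flatten(2).transpose(1, 2)  # (B, N, dim)
        # Enhance the joint queries with temporal features of the video input.
        per_frame = frame_tokens.mean(dim=2)                             # (B, T, dim)
        temporal_feat, _ = self.temporal(per_frame)
        queries = self.joint_queries.unsqueeze(0).expand(B, -1, -1) + temporal_feat[:, -1:, :]
        # Decode joints against scene (depth) tokens plus current-frame tokens.
        memory = torch.cat([depth_tokens, frame_tokens[:, -1]], dim=1)
        decoded = self.decoder(queries, memory)
        return self.head(decoded)                                        # (B, num_joints, 3)
```

Usage: for a batch of 2 clips of 4 stereo frames at 64x64, `EgoStereoPoseSketch()(torch.randn(2, 4, 6, 64, 64), torch.randn(2, 1, 64, 64))` returns a `(2, 16, 3)` tensor of 3D joint positions.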