[ECCV 2020] DVI: Depth Guided Video Inpainting for Autonomous Driving -- Long Talk

Presentation video (long talk) for the ECCV 2020 paper "DVI: Depth Guided Video Inpainting for Autonomous Driving".

Chinese-language coverage by Jiqizhixin (机器之心), "Achieving state-of-the-art street-view simulation for autonomous driving: a walkthrough of Baidu's ECCV 2020 video inpainting paper": https://www.jiqizhixin.com/articles/2...
Paper: https://arxiv.org/abs/2007.08854
Github: https://github.com/sibozhang/Depth-Gu...
Dataset: http://apolloscape.auto/inpainting.html
Demo: [ECCV 2020] DVI: Depth Guided Video I...

Authors: Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Wei Li, Ruigang Yang.

Abstract: To obtain clear street views and photo-realistic simulation for autonomous driving, we present an automatic video inpainting algorithm that removes traffic agents from videos and synthesizes the missing regions under the guidance of depth/point cloud data. By building a dense 3D map from stitched point clouds, frames within a video are geometrically correlated through this common 3D map. To fill the target inpainting area in a frame, pixels from other frames can be transformed into the current one with correct occlusion handling. Furthermore, we are able to fuse multiple videos through 3D point cloud registration, making it possible to inpaint a target video with multiple source videos. The motivation is to solve the long-time occlusion problem, where an occluded area is never visible in the entire video. To our knowledge, we are the first to fuse multiple videos for video inpainting. To verify the effectiveness of our approach, we built a large inpainting dataset in a real urban road environment with synchronized images and LiDAR data, including many challenging scenes, e.g., long-time occlusion. The experimental results show that the proposed approach outperforms the state-of-the-art approaches on all criteria; in particular, the RMSE (Root Mean Squared Error) is reduced by about 13%.

Keywords: Video Inpainting, Autonomous Driving, Depth, Image Synthesis, Simulation
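To make the abstract's core idea concrete, below is a minimal NumPy sketch of depth-guided pixel transfer: lift source-frame pixels to 3D with their depth, reproject them into the target camera, and accept only candidates whose depth agrees with the background depth rendered from the dense 3D map (a simple occlusion test), plus the RMSE metric used for evaluation. This is not the authors' released code; the function names (`backproject`, `transfer_pixels`), the camera-to-world pose convention, and the depth tolerance `z_tol` are illustrative assumptions.

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H x W) into camera-space 3D points with pinhole intrinsics K."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pix = np.stack([u, v, np.ones_like(depth)], axis=-1).reshape(-1, 3)  # homogeneous pixels
    return (pix @ np.linalg.inv(K).T) * depth.reshape(-1, 1)             # rays scaled by depth

def transfer_pixels(src_img, src_depth, src_pose, tgt_depth, tgt_pose, tgt_mask, K, z_tol=0.2):
    """Warp colors from a source frame into the masked (inpainting) region of a target frame.

    src_pose / tgt_pose: assumed 4x4 camera-to-world matrices.
    tgt_depth: background depth for the target view (e.g. rendered from the dense 3D map),
    used here as a z-buffer-style occlusion check.
    """
    h, w = tgt_depth.shape
    pts = backproject(src_depth, K)                                 # source camera space
    pts = np.c_[pts, np.ones(len(pts))]
    pts = (np.linalg.inv(tgt_pose) @ src_pose @ pts.T).T[:, :3]     # -> target camera space
    z = pts[:, 2]
    uvw = pts @ K.T
    u = np.round(uvw[:, 0] / z).astype(int)
    v = np.round(uvw[:, 1] / z).astype(int)
    ok = (z > 1e-3) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    out = np.zeros((h, w, 3), dtype=src_img.dtype)
    zbuf = np.full((h, w), np.inf)
    colors = src_img.reshape(-1, 3)
    for i in np.flatnonzero(ok):
        ui, vi = u[i], v[i]
        if not tgt_mask[vi, ui]:                        # only fill the inpainting hole
            continue
        if abs(z[i] - tgt_depth[vi, ui]) > z_tol:       # occluded or wrong surface
            continue
        if z[i] < zbuf[vi, ui]:                         # keep the nearest candidate
            zbuf[vi, ui] = z[i]
            out[vi, ui] = colors[i]
    return out, np.isfinite(zbuf)

def rmse(pred, gt, mask):
    """Root Mean Squared Error over the filled region, the paper's headline metric."""
    d = pred[mask].astype(np.float64) - gt[mask].astype(np.float64)
    return float(np.sqrt((d ** 2).mean()))
```

Running `transfer_pixels` once per source frame (or per registered source video) and keeping, for each hole pixel, the candidate with the best depth agreement is one plausible way to read the multi-frame/multi-video fusion described in the abstract.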
