FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry
Preprint: https://arxiv.org/pdf/2408.14035
GitHub: https://github.com/hku-mars/FAST-LIVO2

We propose FAST-LIVO2, an efficient and accurate LiDAR-Inertial-Visual fusion localization and mapping system that demonstrates great potential for real-time 3D reconstruction and onboard robot localization in degraded scenes.

What can FAST-LIVO2 do?

1. Real-time high-precision reconstruction: the system generates photo-realistic dense colored point clouds in real time. More importantly, it runs in real time on low-power ARM-based platforms (such as RK3588, Jetson Orin NX, and RB5).

2. Stability in extreme environments: it maps stably and returns to the origin in severely degraded, GPS-denied tunnel environments (over 25 minutes of data collection). We have also tested it on the private FAST-LIVO2 dataset, which contains numerous sequences with LiDAR/visual degradation (over 2 TB), verifying its efficiency and robustness.

3. Breakthrough in UAV autonomous navigation: FAST-LIVO2 is the world's first application of a LiDAR-Inertial-Visual Odometry (LIVO) system to UAV autonomous navigation. It enables UAVs to operate stably in environments where both LiDAR and vision are degraded.

4. Improved aerial mapping accuracy: it effectively addresses the cumulative drift caused by LiDAR degradation or inaccurate point cloud measurements in aerial surveying (where the air-to-ground distance is large and the LiDAR spot effect is significant), achieving pixel-level mapping results.

5. Support for downstream applications in 3D scene representation: it quickly generates dense, accurate, large-scale colored point clouds and camera poses for downstream applications such as mesh generation, texture mapping, and depth-supervised 3D Gaussian Splatting.

6. Real-world 3D scanning: leveraging its non-contact, high-precision, high-detail, high-efficiency, large-scale capture capabilities, it digitizes ancient buildings and landscape features whose 3D data can then be imported into UE5 modeling software. This allows game environments (such as the 'Black Myth: Wukong' DLC) to achieve detail comparable to the real world.

Our source code, datasets, handheld and UAV devices, hardware synchronization schemes, and subsequent applications will be open-sourced on GitHub to promote the development of the robotics and computer vision community.