Segmenting objects in 3D point clouds is a core problem in 3D scene understanding and scalable data annotation. In this talk, I will present SNAP: Segmenting Anything in Any Point Cloud, a unified framework for interactive point cloud segmentation that supports both point-based and text-based prompts across indoor, outdoor, and aerial domains. SNAP is trained jointly on multiple heterogeneous datasets and achieves strong cross-domain generalization through domain-adaptive normalization. The model enables both spatially prompted instance segmentation and text-prompted panoptic and open-vocabulary segmentation directly on point clouds. Extensive experiments demonstrate that SNAP matches or outperforms domain-specific methods on a wide range of zero-shot benchmarks.

Resources
• Project: https://neu-vi.github.io/SNAP/
• Paper: https://arxiv.org/pdf/2510.11565
• Code: https://github.com/neu-vi/SNAP

About the Speaker
Hanhui Wang is a first-year Ph.D. student in the Visual Intelligence Lab at Northeastern University. His research centers on 3D scene understanding, with recent work on point cloud segmentation and structured representations, and broader interests in generation and reasoning for multimodal 3D/4D perception.

#computervision #ai #artificialintelligence #machinelearning
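The abstract credits cross-domain generalization to domain-adaptive normalization. The talk's exact formulation is not given here, but a common version of the idea keeps per-domain affine parameters behind a shared backbone, so that indoor, outdoor, and aerial point clouds with very different coordinate scales can be trained jointly. A minimal sketch of that generic technique (class name, parameter layout, and domain labels are all illustrative assumptions, not SNAP's actual API):

```python
import numpy as np

class DomainAdaptiveNorm:
    """Per-domain feature normalization: each domain (e.g. indoor,
    outdoor, aerial) has its own learnable scale/shift, while the
    normalization itself is shared. This is a generic sketch of
    domain-adaptive normalization, not SNAP's exact implementation."""

    def __init__(self, num_features: int, domains: list[str]):
        # hypothetical per-domain affine parameters (gamma, beta),
        # initialized to identity; in practice these would be learned
        self.params = {d: (np.ones(num_features), np.zeros(num_features))
                       for d in domains}
        self.eps = 1e-5

    def __call__(self, x: np.ndarray, domain: str) -> np.ndarray:
        """x: (num_points, num_features) array of point features."""
        gamma, beta = self.params[domain]
        # normalize each feature channel over the points in this cloud,
        # then apply the affine transform belonging to the input's domain
        mean = x.mean(axis=0, keepdims=True)
        var = x.var(axis=0, keepdims=True)
        return gamma * (x - mean) / np.sqrt(var + self.eps) + beta
```

Usage would look like `DomainAdaptiveNorm(64, ["indoor", "outdoor", "aerial"])(features, "outdoor")`: the same statistics-based normalization runs for every domain, but the affine parameters are routed by a domain label, which is one standard way to let one model absorb heterogeneous datasets.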