🤖 Gemini Robotics ER 1.5 × NVIDIA Isaac Lab × Enactic OpenArm
Vision-Language Reasoning Meets Real-World Robotic Control

In this video, we demonstrate Google Gemini Robotics ER 1.5 integrated with NVIDIA Isaac Lab and the Enactic OpenArm (7-DoF) to perform grounded, real-world robotic manipulation. This setup showcases a modern robotics control stack where a large multimodal foundation model handles high-level reasoning and planning, while physics-based simulation and whole-body control handle accurate, safe execution on real hardware.

🧠 What’s happening under the hood

Gemini Robotics ER 1.5 acts as the robot’s reasoning engine:
- Interprets vision + language instructions
- Performs spatial and task-level reasoning
- Outputs structured, mid-level actions (not raw motor commands)

NVIDIA Isaac Lab provides:
- Physically accurate simulation
- Domain randomization and policy validation
- A consistent sim-to-real control interface

Enactic OpenArm (7-DoF) executes:
- Cartesian motion via inverse kinematics
- Compliance through impedance control
- Precise manipulation using a redundant arm configuration

This separation of concerns mirrors how frontier systems like GR00T and Physical Intelligence (π-series) are deployed: foundation models plan; controllers execute.

🔧 Control Architecture Overview

- High-level: Gemini Robotics ER 1.5 (vision + language + reasoning)
- Mid-level: task-space goals / waypoints / skill parameters
- Low-level: Cartesian impedance control + IK on OpenArm (running at high frequency for stability and accuracy)

This approach enables:
- Latency-tolerant foundation model inference
- Safe contact-rich manipulation
- Transferable skills across simulation and hardware

🚀 Why this matters

This demo represents a practical path forward for robotics using foundation models:
❌ No end-to-end black-box torque policies
✅ Clear interfaces between reasoning, control, and hardware
✅ Works today with open hardware and open simulation tools

It’s a blueprint for scaling from single-arm manipulation to bimanual humanoids, mobile manipulators, and household robots.

🛠️ Technologies Used
- Google Gemini Robotics ER 1.5
- NVIDIA Isaac Lab
- Enactic OpenArm (7-DoF, open-source robotic arm)
- ROS 2 + custom control interfaces
- Cartesian impedance & redundancy-aware IK

🔗 Learn more
- Enactic OpenArm: https://openarm.dev
- NVIDIA Isaac Lab: https://developer.nvidia.com/isaac-lab
- Google Gemini Robotics: https://deepmind.google/robotics
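📎 Appendix: what a mid-level action interface can look like

For readers who want a concrete picture of the "structured, mid-level actions" described above, here is a minimal Python sketch. It assumes the google-genai SDK, a model ID of gemini-robotics-er-1.5-preview, and a JSON waypoint schema invented for illustration; none of these are confirmed details of the exact pipeline shown in the video.

```python
"""Hypothetical sketch: turning a Gemini Robotics ER 1.5 reply into
mid-level task-space goals (waypoints), not raw motor commands."""
import json
from dataclasses import dataclass

from google import genai
from google.genai import types


@dataclass
class TaskSpaceGoal:
    """One mid-level action: a Cartesian pose target plus gripper state."""
    position_m: tuple[float, float, float]
    orientation_quat_wxyz: tuple[float, float, float, float]
    gripper_open: bool


def plan_from_image(image_jpeg: bytes, instruction: str) -> list[TaskSpaceGoal]:
    """Ask the reasoning model for a structured plan of waypoints."""
    client = genai.Client()  # reads the API key from the environment
    prompt = (
        "You control a 7-DoF arm. Return a JSON list of waypoints, each with "
        "'position_m', 'orientation_quat_wxyz', and 'gripper_open', to: "
        + instruction
    )
    response = client.models.generate_content(
        model="gemini-robotics-er-1.5-preview",  # assumed model ID; check the docs
        contents=[
            types.Part.from_bytes(data=image_jpeg, mime_type="image/jpeg"),
            prompt,
        ],
        config=types.GenerateContentConfig(response_mime_type="application/json"),
    )
    waypoints = json.loads(response.text)  # schema is an assumption for this sketch
    return [
        TaskSpaceGoal(
            position_m=tuple(w["position_m"]),
            orientation_quat_wxyz=tuple(w["orientation_quat_wxyz"]),
            gripper_open=bool(w["gripper_open"]),
        )
        for w in waypoints
    ]
```

In a setup like the one shown here, each TaskSpaceGoal would then be handed to the low-level controller (e.g., over ROS 2 topics or actions), keeping slow model inference decoupled from the high-frequency control loop.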
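Similarly, a minimal NumPy sketch of the low-level layer named in the Control Architecture section: Cartesian impedance with a redundancy-aware nullspace posture task. The gains, the 6D pose-error convention, and the pseudo-inverse nullspace projector are illustrative assumptions; the actual OpenArm controller may use a dynamically consistent projector and add gravity/Coriolis compensation.

```python
"""Illustrative Cartesian impedance law for a redundant 7-DoF arm."""
import numpy as np


def impedance_torques(
    q: np.ndarray,          # joint positions, shape (7,)
    dq: np.ndarray,         # joint velocities, shape (7,)
    x_err: np.ndarray,      # 6D task-space pose error (position + orientation)
    jacobian: np.ndarray,   # geometric Jacobian, shape (6, 7)
    q_posture: np.ndarray,  # preferred posture (e.g., elbow configuration), shape (7,)
    kp_task: float = 400.0,
    kd_task: float = 40.0,
    kp_null: float = 10.0,
    kd_null: float = 2.0,
) -> np.ndarray:
    """tau = J^T F_task + N tau_posture, where N projects into the Jacobian nullspace."""
    # Task-space spring-damper wrench: the impedance behavior at the end effector.
    dx = jacobian @ dq
    wrench = kp_task * x_err - kd_task * dx

    # Secondary posture task, projected so it cannot disturb the end-effector motion.
    tau_posture = kp_null * (q_posture - q) - kd_null * dq
    null_proj = np.eye(7) - jacobian.T @ np.linalg.pinv(jacobian.T)

    return jacobian.T @ wrench + null_proj @ tau_posture
```

Run at a high control rate, this kind of law gives the compliance needed for safe, contact-rich manipulation while the redundant seventh joint is used for posture via the nullspace term.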