A quick note for context: from what I could tell, at no point did the vehicle do anything overtly unsafe or dangerous. It was awkward, yes (especially sitting at a green light longer than a human driver likely would), but it remained stationary, predictable, and cautious. Eventually, with assistance from robotaxi support, it proceeded and cleared the intersection safely.

It's worth emphasizing that all drivers, human and autonomous alike, encounter situations that fall outside the "clean and obvious" norm. What really matters is how the system handles ambiguity. Does it default to aggression, or to caution? Does it escalate appropriately when confidence is low? In this case, it clearly chose the conservative path. And it's important not to conflate a brief hesitation at a green light with the kinds of high-speed, high-consequence human driving errors that tragically claim around 40,000 lives per year in the U.S. These are not remotely the same category of risk.

Why, in my view, this particular intersection may have confused the system:

1. The odd geometry of the intersection. Several roads converge at shallow, non-standard angles rather than forming a clean 90-degree cross. That kind of layout can complicate lane association (i.e., which signal head corresponds to which lane), especially when multiple signal clusters are visible from slightly different orientations.

2. Signal visibility restrictions built into the housing. The green light had a visibility-limiting device (often used to prevent cross-traffic from seeing the wrong signal). While that's helpful for keeping drivers approaching from unintended angles from seeing the wrong light, it also makes the signal dimmer and more directionally constrained. Even for vehicles directly in front, it appeared slightly less vivid than a typical signal head.

3. Multiple signal heads at shallow offsets. Because of the odd road angles, it's possible the vehicle's perception system detected multiple green signals in the scene but had lower confidence about which one governed its exact lane. When confidence drops below a certain threshold, the safest option is to wait rather than guess (see the sketch at the end of this note for a rough picture of that kind of decision logic).

Given those factors, the vehicle chose not to proceed immediately. That's a conservative bias, and frankly the correct one under uncertainty. Importantly, it didn't inch forward unpredictably, block cross-traffic in a hazardous way, or make a sudden maneuver. It simply held position.

What's especially notable here is how the broader system handled the off-nominal scenario. When the vehicle couldn't fully resolve the ambiguity on its own, support staff were able to assist and help it confirm the appropriate path forward. This is exactly how these deployments are designed to work: autonomous capability first, with remote support as a backstop in rare edge cases. That collaboration, between the onboard autonomy stack and the remote operations team, is a feature, not a flaw.

Real-world driving environments are messy. Unusual geometry, imperfect infrastructure, and visual edge cases happen. The question isn't whether those scenarios exist; it's whether the system can handle them safely when they arise. In this case, it did. The robotaxi remained cautious, escalated appropriately, and ultimately proceeded through the intersection without incident.

That's the key takeaway. A moment of hesitation at a green light is awkward. A system that defaults to safety and resolves ambiguity without creating danger is exactly what we want.
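To make the "wait rather than guess" idea concrete, here's a minimal Python sketch of what a confidence-gated decision policy with remote escalation could look like. This is purely illustrative and not based on Tesla's actual stack: the names (SignalDetection, decide), the thresholds (PROCEED_THRESHOLD, ESCALATE_AFTER_S), and the escalation mechanism are all assumptions of mine.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    PROCEED = auto()
    HOLD = auto()
    ESCALATE_TO_REMOTE_SUPPORT = auto()


@dataclass
class SignalDetection:
    """One detected signal head, plus how confident perception is
    that this head governs the ego vehicle's lane."""
    state: str                    # e.g. "green", "yellow", "red"
    lane_association_conf: float  # 0.0 .. 1.0


# Hypothetical tuning values, chosen for illustration only.
PROCEED_THRESHOLD = 0.90   # confident enough to act on the signal
ESCALATE_AFTER_S = 30.0    # hold this long before asking for help


def decide(detections: list[SignalDetection], seconds_held: float) -> Action:
    """Conservative policy: act only on a green we are confident
    governs our lane; otherwise hold, and eventually escalate."""
    greens = [d for d in detections if d.state == "green"]
    if greens:
        best = max(greens, key=lambda d: d.lane_association_conf)
        # Multiple plausible greens at shallow offsets drag down the
        # confidence in which one applies; guessing is not allowed.
        if best.lane_association_conf >= PROCEED_THRESHOLD:
            return Action.PROCEED
    # Low confidence: hold position rather than guess, and hand the
    # ambiguity to remote support if it persists for too long.
    if seconds_held >= ESCALATE_AFTER_S:
        return Action.ESCALATE_TO_REMOTE_SUPPORT
    return Action.HOLD


if __name__ == "__main__":
    # Two visible green heads, neither clearly associated with our lane.
    scene = [SignalDetection("green", 0.55), SignalDetection("green", 0.48)]
    print(decide(scene, seconds_held=5.0))   # Action.HOLD
    print(decide(scene, seconds_held=45.0))  # Action.ESCALATE_TO_REMOTE_SUPPORT
    # One unambiguous green for our lane.
    print(decide([SignalDetection("green", 0.97)], seconds_held=2.0))  # Action.PROCEED
```

The point of the sketch is the ordering of defaults: the proceed branch requires positive evidence above a high bar, while holding costs nothing and escalation is bounded in time. That matches what we apparently saw in the video: hold first, then resolve with a human in the loop.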