1) What analog optical computing chips actually do

Most of these designs target matrix-vector multiplication (MVM) / matrix multiply-accumulate (MAC), because that operation dominates:

- transformers (attention + MLP)
- vision models
- robotics perception stacks

Optical/photonic approaches can compute those operations by letting physics "do the math."

The common trick: "weights stored in physics"

Instead of fetching weights from SRAM/HBM and multiplying digitally, the chip:

- stores weights as physical parameters (phase shifts, interference patterns, metasurface responses, or programmable optical elements),
- sends light through/onto that structure, and
- reads out the output light intensity, which encodes the multiply result.

This is closely aligned with what your summary says ("weights physically stored in the metasurface; light performs multiplication instantly"), and it matches the broader literature on metasurface/diffractive optical computing and optical neural nets.

2) The "metasurface + optical transistor" claim (and what we can actually substantiate)

Recent reporting describes Neurophos as pursuing metasurface-based optical computing with extremely small optical switching/modulation elements (framed as "optical transistors"), claiming very large matrix engines and big efficiency upside, while also noting that it's early and volume production is not imminent.

Key corroborated points from the coverage:

- Texas/Austin-based photonic AI chip startup (Neurophos)
- Metasurfaces used for high-speed silicon photonics / in-memory-style processing
- Claims of optical "transistors" far smaller than prior approaches, promising dense, fast optical compute blocks
- Timeline risk: "testing," ecosystem building, scaling/manufacturing hurdles; not "swap out your GPUs tomorrow"

So: the direction is real, but the specific performance claims should be treated as provisional until there are peer-reviewed device papers, customer benchmarks, and manufacturable yield data.
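To make the "weights stored in physics" idea concrete, here is a minimal numerical sketch (hypothetical values, not a device model): each weight behaves like a fixed transmittance on an input light channel, and a detector at each output port sums the arriving light, which is exactly a matrix-vector multiply. Real devices encode negative values with phase or differential detection; this sketch ignores that and just shows the math the physics performs.

```python
# Illustrative sketch of an "optical" matrix-vector multiply.
# Each weights[i][j] acts like a fixed physical transmittance on input
# channel j; output port i integrates (sums) the arriving light, so each
# port reads out one dot product. All numbers here are hypothetical.

def optical_mvm(weights, x):
    """Model: detector at port i sums transmittance-scaled inputs -> W @ x."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in weights]

W = [[0.2, 0.8],   # "transmittances" programmed into the optical structure
     [0.5, 0.1]]
x = [1.0, 2.0]     # input light amplitudes

print(optical_mvm(W, x))  # → [1.8, 0.7], same as a digital matmul
```

The point is that no multiply instruction runs anywhere: the scaling happens as light passes through the element, and the accumulate happens at the detector.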
3) Why optics can be drastically more energy-efficient than GPUs (in principle)

GPUs burn power in two main places:

- data movement (especially memory ↔ compute)
- switching transistors billions of times per second

Optical compute attacks both:

- Light propagation has no electrical resistance, so you avoid certain resistive losses (though lasers, modulators, and detectors still cost power).
- Optical interference can implement linear algebra "in one shot," massively in parallel.

This is why photonics is repeatedly framed as a response to the "AI energy wall."

4) The hard engineering truth: precision, noise, and "hybrid" reality

Analog optical compute is not free magic. The big constraints are well known:

A) Numerical precision

Optical analog cores tend to excel at low-precision inference (INT4/FP4-ish equivalents, sometimes 8-bit-ish in practice, depending on architecture). High-precision training is harder because analog noise accumulates and calibration is constant work.

B) Noise, drift, thermal stability

Your summary mentions thermal stability, and this is absolutely a real issue:

- photonic devices are temperature-sensitive
- metasurface responses can drift
- maintaining calibration over time is non-trivial

C) You still need electronics

Even "optical compute" systems usually need:

- ADCs/DACs and detectors
- digital control loops
- a memory hierarchy
- scheduling and a software stack

So the winning architectures are typically hybrid: optics for the heavy linear algebra; electronics for control, nonlinearity, and everything else. The research literature and industry commentary consistently land on this "hybrid" conclusion.

5) How this ties into the other giant trend: optical interconnect (data movement)

Even if optical compute cores take longer to mature, optical links are already becoming central because data movement is the bottleneck.
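The precision and noise constraints from section 4 can be sketched numerically (all numbers hypothetical): quantize weights to 4-bit levels, as if programmed into an analog element with limited resolution, and add Gaussian read noise at the detector. Inference survives the small error; accumulated over millions of training steps, it would not.

```python
import random

random.seed(0)  # deterministic for the example

def quantize(w, bits=4):
    """Map a weight in [-1, 1] onto 2**bits evenly spaced levels,
    mimicking the limited programmable resolution of an analog element."""
    levels = 2 ** bits - 1
    return round((w + 1) / 2 * levels) / levels * 2 - 1

def noisy_analog_dot(w, x, sigma=0.01):
    """Dot product through a hypothetical analog core: quantized weights
    plus additive Gaussian read noise at the detector."""
    ideal = sum(quantize(wi) * xi for wi, xi in zip(w, x))
    return ideal + random.gauss(0.0, sigma)

w = [0.31, -0.72, 0.05, 0.9]
x = [1.0, 0.5, -2.0, 0.25]

digital = sum(wi * xi for wi, xi in zip(w, x))
analog = noisy_analog_dot(w, x)
print(digital, analog, abs(digital - analog))  # small but nonzero error
```

The residual error is harmless for low-precision inference, but it is exactly the kind of error that accumulates during high-precision training and forces the constant recalibration mentioned above.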
Reuters reported major momentum in co-packaged optics and photonic fabrics for chip-to-chip connectivity, but also noted that reliability, ecosystem, and deployment timelines still matter (with some expectations that broad adoption could take years).

This matters because many "photonic compute" companies first win by:

- reducing power in interconnect
- increasing bandwidth between GPU/memory pools
- then expanding into compute

6) Why this is exciting for humanoid robots (your angle is spot on)

Humanoids have brutal constraints:

- SWaP: size, weight, and power
- heat dissipation
- real-time latency
- battery life
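The battery-life constraint can be made concrete with back-of-envelope arithmetic (all figures illustrative, not vendor specs): fix a battery and a sustained workload, then see how runtime scales with energy per operation.

```python
# Back-of-envelope SWaP arithmetic for an onboard AI accelerator.
# All numbers are illustrative assumptions, not measured specs.

def runtime_hours(battery_wh, compute_tops, pj_per_op, overhead_w=50.0):
    """Battery life while sustaining `compute_tops` (10^12 ops/s) at
    `pj_per_op` picojoules per operation, with motors/sensors/etc.
    folded into a fixed `overhead_w` for simplicity."""
    compute_w = compute_tops * 1e12 * pj_per_op * 1e-12  # W = ops/s * J/op
    return battery_wh / (compute_w + overhead_w)

battery_wh = 1000.0   # ~1 kWh pack, a plausible humanoid budget
tops = 100.0          # sustained perception + policy workload

gpu_like = runtime_hours(battery_wh, tops, pj_per_op=1.0)  # ~1 pJ/op class
optical  = runtime_hours(battery_wh, tops, pj_per_op=0.1)  # 10x better (claimed)
print(round(gpu_like, 2), round(optical, 2))  # → 6.67 16.67 hours
```

Even with fixed overhead for actuation and sensing, dropping compute energy by the claimed 10x more than doubles runtime in this toy budget, which is why the efficiency claims matter so much for this use case.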