Massive Scale Training and Inference: AT&T, RelationalAI & ScalarLM Break #1 on Spider with AMD GPUs
Watch the complete expert panel featuring Greg Diamos (ScalarLM Architect), Molham Aref (CEO, RelationalAI), Farbod Tavakkoli (Data Scientist, AT&T), and Ilya Tabakh (VP Innovations, TensorWave) as they reveal how open-source AI is transforming enterprise decision intelligence.

📌 Chapters & Timestamps
00:00 – Welcome & Event Intro (TensorWave)
00:47 – Panel Format & Speaker Overview
01:00 – Intro: Greg Diamos, ScalarLM
02:00 – Intro: Molham Aref, RelationalAI
03:00 – Intro: Farbod Tavakkoli, AT&T
04:00 – What is ScalarLM? Origins & Open Source
05:02 – AMD MI300/MI325 Cluster & Kubernetes Deployment
06:10 – Distributed Training Challenges
07:00 – BI vs. Decision Intelligence (Enterprise Reality)
09:00 – Openness & Avoiding Vendor Lock-In
10:15 – Private Enterprise Data: The Next LLM Frontier
11:20 – Super Alignment: Structured Data → LLM Knowledge
14:00 – Scaling Laws in Enterprise Datasets
15:10 – #1 Result on Spider SQL Benchmark
16:20 – Benchmark Complexity Explained
17:00 – BIRD Benchmark: Better Than Human Performance
18:30 – Why Innovation Moves Above the Stack
19:40 – Open Source vs. Proprietary Frameworks
20:00 – Inside “Ask AT&T” GenAI Platform
21:00 – 9B Tokens/Day: Massive Internal Usage
22:10 – Fine-Tuning Saves Big: Cost & Performance Gains
23:00 – Network Event Classification via Logs → LLM
24:45 – 156 Fine-Tuning Experiments → Breakthrough Result
25:30 – AMD GPU Efficiency at Scale
26:30 – Optimizing Compute Pipeline Efficiency
27:00 – GSMA Global Telecom Model Collaboration
28:00 – Multilingual & Multimodal Roadmap (EN + Arabic)
29:00 – Call Analytics & Competitive Signal Detection
31:00 – Reflections: Impact of Open Ecosystems
33:00 – Openness Builds Trust & Adoption
34:00 – GPU-Agnostic Deployment Momentum
35:00 – Q&A: Small Models vs. Large Models in Production
37:00 – Q&A: Closed-Loop Operations & Automation
42:00 – Why GSMA Work Remains Open Source
43:20 – Closing Remarks & Networking

🚀 Key Highlights:
• #1 on the Spider SQL benchmark – Super Alignment model beats GPT-5, Claude, and Grok using private enterprise data
• AT&T’s Ask AT&T platform: 100K+ employees, 9B tokens/day, 910M API calls, 20% coding efficiency gain
• Fine-tuned 4B model beats 100B+ LLMs on telecom log classification – 90% cost savings
• GSMA Global Telecom AI Initiative – multi-company effort to build open-source telecom foundation models (text → multilingual + vision by 2026)
• GPU-agnostic training on AMD MI300/MI325 via ScalarLM + Kubernetes + Helm – unified training and inference at scale
• Super Alignment explained: converting Snowflake relational data into LLM tokens while preserving privacy and semantics

🛠 Tech Stack in Action:
• ScalarLM (open-source): Megatron-Core + Hugging Face + vLLM
• AMD GPU clusters (TensorWave)
• RelationalAI + Snowflake for decision intelligence
• Open standards: Iceberg, Delta, open table formats – no vendor lock-in

🎯 Who Should Watch:
• AI engineers training on private enterprise data
• Data scientists using Snowflake / BigQuery
• CTOs building GenAI platforms at scale
• Open-source AI advocates
• Telecom and enterprise AI leaders

🌊 About TensorWave
TensorWave is the AI neocloud purpose-built for performance. Powered exclusively by AMD Instinct™ Series GPUs, we deliver high-bandwidth, memory-optimized infrastructure that scales with your most demanding models, whether training or inference.

Ready to get started? Connect with a Sales Engineer @ tensorwave.com/connect