SiliconMind-V1: Multi-Agent Distillation and Debug-Reasoning Workflows for Verilog Code Generation

Our team at Academia Sinica and National Taiwan University is proud to release SiliconMind-V1, a family of open-source Large Language Models (LLMs) specialized in Verilog code generation, testing, and debugging. Unlike previous approaches that rely heavily on commercial models or external EDA tools, SiliconMind-V1 is fine-tuned locally to iteratively generate, test, and debug Register-Transfer Level (RTL) designs through test-time scaling. The SiliconMind-V1 models are enabled by a unified multi-agent framework for reasoning-oriented training-data generation with integrated testbench-driven verification, achieving state-of-the-art functional correctness on major benchmarks. We are continuing to improve our models, and we sincerely hope to hear your feedback.

Key Highlights:
- Matches the current state-of-the-art small-scale LLM, QiMeng-CodeV-R1 (CodeV-R1), in functional correctness on major Verilog benchmarks, while achieving a 9× speedup in model training.
- Trains on only 36k functionally verified data points (vs. 87k for CodeV-R1).
- Operates entirely with open-source tools: no commercial LLMs, no licensed EDA tools at inference.
- Generalizes across four different base models: Qwen2.5-Coder-7B-Instruct, Qwen3-4B-Thinking-2507, Qwen3-8B, and Olmo-3-7B-Think. All fine-tuned variants are open source.

Model Sources:
- Project Page: https://AS-SiliconMind.github.io/Sili...
- Inference Engine GitHub: https://github.com/AS-SiliconMind/Sil...
- Models: https://huggingface.co/collections/AS...
- Paper: pending on arXiv

Special thanks to:
- Prof. Chia-Hung Tu of National Cheng Kung University for his valuable advice and suggestions, as well as for his careful revision of the manuscript.
- Academia Sinica's SiliconMind project (AS-IAIA-114-M11) and the National Science and Technology Council for financial support.
- National Center for High-Performance Computing and Taipei-1 for computational and storage resources.

Our Team:
Prof. Shih-Hao Hung of National Taiwan University
Prof. H.T. Kung of Harvard University
Mu-Chi Chen
Po-Hsuan Huang
Hsiang-Yu Tsou
Cheng-Liang
Yu-Hung Kao
Shao-Chun Ho
Yu-Kai Hung
I-Ting Wu
En-Ming Huang
Wei-Po Hsin

Video Production (Editor & Sound): Mu-Chi Chen
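The iterative generate-test-debug workflow described above can be sketched in a few lines. This is a minimal illustration only, not the project's actual API: the `generate` and `verify` callables are hypothetical stand-ins for the LLM generation step and the testbench-driven verification step (which, consistent with the open-source-tools claim, could be backed by a simulator such as Icarus Verilog).

```python
def generate_test_debug(generate, verify, max_rounds=4):
    """Illustrative generate-test-debug loop for RTL candidates.

    generate(feedback) -> str   : produces a Verilog candidate; on the first
                                  round, feedback is None; on later rounds it
                                  carries the previous verification error.
    verify(code) -> (bool, str) : runs the testbench; returns (passed, error).
    Returns (code, rounds_used) on success, or (None, max_rounds) on failure.
    """
    feedback = None
    for round_no in range(max_rounds):
        code = generate(feedback)        # LLM proposes (or repairs) a design
        passed, feedback = verify(code)  # testbench-driven functional check
        if passed:
            return code, round_no + 1
    return None, max_rounds
```

Each failed verification feeds its error message back into the next generation round, which is the debug-reasoning behavior the announcement describes; the loop stops as soon as the testbench passes.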