Register for our next AI Virtual Tech Talk: https://developer.arm.com/solutions/m...

#Arm #AIVirtualTechTalk #ArmNN

00:00 - Welcome & upcoming tech talks
01:13 - Meet our speakers
02:28 - Arm's software stack
04:40 - Why use Arm NN and the Arm Compute Library?
08:41 - Arm NN's new TFLite delegate
10:54 - Debian packages for Arm NN and the Arm Compute Library
12:07 - PyArmNN
13:12 - Arm public Model Zoo
15:20 - Demo 1: Inference on Linux on an Arm CPU
24:12 - Demo 2: Inference on Android on a Mali GPU
30:20 - Useful links

The use of machine learning in endpoint applications is becoming increasingly prevalent across a wide range of industries. As model complexity increases, achieving the best results on the underlying Arm architectures is a fundamental necessity for application developers. The latest release of Arm NN and the Arm Compute Library – pivotal parts of the Arm software stack – includes key new features designed to accelerate our industry-leading performance and substantially reduce your development time:

Arm NN delegate for TensorFlow Lite
Debian packages for Arm ML software
Python bindings for Arm NN (PyArmNN)
Model Zoo
Updated ML examples and documentation

Join Arm engineers to learn how best to leverage these new features in your ML projects, including demonstrations to help you get the best ML performance, no matter which Arm device you support.

Speakers:
Ronan Naughton, Senior Product Manager, Arm
Jim Flynn, Arm NN Engineering Manager, Arm
Gian Marco Iodice, Arm Compute Library Staff Software Engineer, Arm

Check out the top four questions asked during the talk:

Q1: Can the Arm software stack be used even if there are unsupported ops in the TFLite model?
Answer: Yes. The Arm NN TFLite delegate lets the TFLite runtime partition a model into subgraphs. Subgraphs supported by the Arm software stack are processed through Arm NN and the Arm Compute Library with hardware acceleration; the remaining subgraphs are processed by the TFLite runtime's CPU reference implementation. This provides a complete solution: optimum performance where acceleration is available, together with full .tflite model support.

Q2: Can you recommend any development boards, similar to the Raspberry Pi, for GPU inference?
Answer: Yes. The Odroid N2 is a development board that can be used with the Arm ML software stack for GPU inference. This platform contains a Cortex-A73 CPU and a Mali-G52 GPU.

Q3: Can you explain the significance of the tuner file that was used with the GPU commands? Is it for performance improvement?
Answer: The OpenCL tuner file contains the most suitable LWS (local workgroup size) value for each OpenCL kernel dispatched by the ACL backend. This provides up to ~30% performance gain for certain kernels. More info can be found here: https://community.arm.com/developer/i...

Q4: Where can I find relevant examples?
Answer: https://github.com/ARM-software/ML-ex...

Stay connected with Arm:
Website: http://arm.com/
Twitter: /arm
Facebook: /arm
LinkedIn: /arm
Instagram: /arm
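The partitioning behaviour described in Q1 can be illustrated with a small sketch. This is pure Python, not the Arm NN or TFLite API: the op names, the supported-op set, and the `partition` helper are all illustrative. The point is only the mechanism — consecutive ops the delegate claims become one accelerated subgraph, and everything in between falls back to the reference kernels.

```python
# Toy illustration of delegate-style graph partitioning: runs of ops
# the delegate supports become one delegated subgraph; unsupported ops
# stay on the runtime's CPU reference implementation.
# (Illustrative only -- not the Arm NN delegate API.)

def partition(ops, supported):
    """Split an ordered op list into (backend, [ops]) runs."""
    runs = []
    for op in ops:
        backend = "ArmNN/ACL" if op in supported else "TFLite reference"
        if runs and runs[-1][0] == backend:
            runs[-1][1].append(op)      # extend the current subgraph
        else:
            runs.append((backend, [op]))  # start a new subgraph
    return runs

model_ops = ["CONV_2D", "RELU", "CUSTOM_OP", "FULLY_CONNECTED", "SOFTMAX"]
armnn_supported = {"CONV_2D", "RELU", "FULLY_CONNECTED", "SOFTMAX"}

for backend, subgraph in partition(model_ops, armnn_supported):
    print(backend, subgraph)
```

Here the unsupported CUSTOM_OP splits the model into three runs: two delegated to Arm NN/ACL and one left to the reference implementation, which is why a model with a handful of unsupported ops still benefits from acceleration on the rest.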
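The idea behind the OpenCL tuner file from Q3 can also be sketched in a few lines: for each kernel, time a set of candidate local workgroup sizes, keep the fastest, and persist the choices so later runs can skip tuning. Everything below is a stand-in — the `fake_time` cost model replaces real GPU timing, and the file format is invented for illustration, not ACL's actual tuner-file format.

```python
# Sketch of exhaustive LWS tuning: per kernel, try candidate local
# workgroup sizes and record the fastest. The cost function is a
# deterministic stand-in for real GPU timing; the output format is
# invented, not the ACL tuner file format.

def tune(kernels, candidates, time_kernel):
    """Return {kernel: best LWS} by exhaustive search over candidates."""
    return {k: min(candidates, key=lambda lws: time_kernel(k, lws))
            for k in kernels}

def fake_time(kernel, lws):
    # Stand-in cost model that happens to prefer mid-sized workgroups.
    return abs(lws[0] * lws[1] - 64) + len(kernel) * 0.01

choices = tune(["gemm", "depthwise_conv"],
               [(4, 4), (8, 8), (16, 16)], fake_time)

# Persist one "kernel;lws" line per kernel so later runs can reload it.
with open("tuner.csv", "w") as f:
    for kernel, (x, y) in choices.items():
        f.write(f"{kernel};{x},{y}\n")
```

The real tuner does the search once per device and kernel configuration, which is why reusing the resulting file (rather than re-tuning on every run) is what delivers the quoted gains in practice.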