Interpretable PID Parameter Tuning for Control Engineering ... by Klaus Diepold
Automation in complex (energy) systems relies on closed-loop control, in which a controller interacts with a controlled process through actions based on observations. These systems are increasingly complex, yet most deployed controllers are linear Proportional-Integral-Derivative (PID) controllers. PID controllers perform well on linear and near-linear systems, but their simplicity is at odds with the robustness required to reliably control complex processes. Machine learning techniques offer a way to extend controllers beyond their linear capabilities by using neural networks. However, such an extension comes at the cost of losing stability guarantees and controller interpretability. I review the utility of extending PID controllers with recurrent neural networks, show that this approach performs well on a range of complex control systems, and highlight how it can be a scalable and interpretable option for modern control systems. I also address the lack of interpretability that prevents neural networks from being used in real-world control processes. I discuss bounded-input bounded-output (BIBO) stability analysis to evaluate the parameters suggested by the neural network, making them interpretable for engineers. This combination of rigorous evaluation and better interpretability is an important step towards the acceptance of neural-network-based control approaches for real-world systems. It is, furthermore, an important step towards interpretable and safely applied artificial intelligence.
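To make the terms in the abstract concrete, here is a minimal sketch of a discrete-time PID controller closed around a first-order plant, together with a simple Routh-Hurwitz check on the closed-loop polynomial in the spirit of the BIBO analysis mentioned above. The plant model, gain values, and function names are illustrative assumptions, not details from the talk.

```python
class PID:
    """Textbook discrete-time PID controller (Euler integration)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def simulate(pid, setpoint=1.0, tau=1.0, dt=0.01, steps=2000):
    """Close the loop around the first-order plant dy/dt = (-y + u)/tau."""
    y, trace = 0.0, []
    for _ in range(steps):
        u = pid.step(setpoint, y)
        y += dt * (-y + u) / tau
        trace.append(y)
    return trace


def bibo_stable(kp, ki, kd, tau):
    """Routh-Hurwitz check for the closed loop of this PID and plant.

    The characteristic polynomial is (tau + kd) s^2 + (1 + kp) s + ki;
    for a second-order polynomial, all-positive coefficients suffice.
    """
    return (tau + kd) > 0 and (1 + kp) > 0 and ki > 0


# Illustrative gains: the check confirms stability, and the simulated
# output stays bounded and settles near the setpoint.
print(bibo_stable(2.0, 1.0, 0.1, 1.0))  # True for these gains
trace = simulate(PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01))
```

A neural-network tuner, as discussed in the talk, would propose the gains `kp`, `ki`, `kd`; a check like `bibo_stable` is what makes such proposals auditable by a control engineer before deployment.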