Uncertainty Quantification: Artificial Intelligence and Machine Learning in Military Systems
by Major Ayla R. Reed

Summary of the Article

The article proposes that implementing a military standard for quantified uncertainty metadata is essential for successfully deploying artificial intelligence and machine learning (AI/ML) systems for military advantage. By standardizing and provisioning for this metadata now, the Department of Defense (DoD) can continue to develop capabilities while simultaneously determining the best policy for using AI/ML, thereby preventing technical delays.

Context and the Need for Uncertainty Quantification (UQ)

The fundamental motivation for using AI/ML in the military is the need to observe, orient, decide, and act (OODA) faster and better than an adversary. However, the use of AI/ML faces three major concerns:
1. Addressing moral and ethical considerations regarding giving AI the authority to cause destruction and death.
2. Balancing the cost versus the military utility of developing AI/ML capabilities.
3. Ensuring an appropriate level of trust in a machine to optimally utilize investment in AI/ML components.

Current AI/ML systems lack the mathematical framework necessary to provide assurance, which impedes their broad adoption for critical defense situations. Assurance, or confidence, requires minimal uncertainty.

UQ as the Solution

Uncertainty quantification (UQ) is defined as the process of assigning numbers to the imperfect or unknown information within a system. The proposed military standard requires that UQ be tagged as metadata to every piece of data or information in digital systems. This mechanism allows the machine to express in real time how unsure it is, providing the critical transparency necessary for building trust. Once input information includes UQ metadata, it can be propagated through functional relationships to higher levels of information usage, allowing the AI/ML model to express its confidence in its output.
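The tagging-and-propagation mechanism described above can be sketched in a few lines of code. This is a minimal illustration, not the proposed standard: the `Measurement` class is a hypothetical metadata container, and the propagation rule is the standard first-order formula for independent inputs, sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2, with numerically estimated partial derivatives.

```python
from dataclasses import dataclass
import math


@dataclass
class Measurement:
    """A piece of information tagged with UQ metadata (illustrative)."""
    value: float
    sigma: float  # standard uncertainty carried as metadata


def propagate(f, *inputs: Measurement, eps: float = 1e-6) -> Measurement:
    """First-order uncertainty propagation for independent inputs:
    sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2, partials by finite differences."""
    xs = [m.value for m in inputs]
    y = f(*xs)
    var = 0.0
    for i, m in enumerate(inputs):
        shifted = list(xs)
        shifted[i] += eps
        dfdx = (f(*shifted) - y) / eps  # forward-difference partial derivative
        var += (dfdx * m.sigma) ** 2
    return Measurement(value=y, sigma=math.sqrt(var))


# Example: a range estimate derived from two uncertain sensor readings
# (the functional relationship and numbers are invented for illustration).
bearing = Measurement(value=30.0, sigma=1.5)    # degrees
baseline = Measurement(value=100.0, sigma=2.0)  # meters
rng = propagate(lambda b, d: d * math.tan(math.radians(b)), bearing, baseline)
```

Because the output is itself a `Measurement`, the same tagging scheme carries through each functional relationship, which is what lets a model at a higher level of information usage report its confidence in its own output.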
UQ implementation helps address the three core pitfalls:
• Ethical Concerns: UQ allows the DoD to categorize military actions based on three degrees of machine autonomy (never, sometimes/partially, or always performed by the machine). By defining a minimum level of certainty as a boundary condition for each category, commanders can establish guidelines for when a machine should be allowed to decide.
• Cost/Utility Balance: Predefined minimum uncertainty boundaries allow acquisition professionals to determine the best way to allocate limited resources, ensuring investment is proportional to military utility.
• Trust and Assurance: UQ secures trust in AI/ML systems. Real-time feedback of a machine's awareness of its own competence increases transparency into the observe, orient, and decide functions, improving trust in the system. Assurance in AI/ML systems is achieved through a "quantify–evaluate–improve–communicate" cycle.

Furthermore, requiring UQ metadata supports the DoD's ethical principles for artificial intelligence, ensuring AI capabilities are Responsible, Equitable (bias can be measured like uncertainty), Traceable, Reliable, and Governable (UQ boundary conditions define guidelines for autonomy trust levels).

Mathematical Implementation and Challenges

Applying UQ to the OODA loop assumes the existence of functional relationships that describe military situations. These relationships allow for the use of a general equation for uncertainty propagation, which calculates the uncertainty of an output from the uncertainties of its input variables. This mathematical approach accounts for both aleatoric uncertainty (inherent randomness) and epistemic uncertainty (lack of training data). However, there are challenges:
• Calculating propagated uncertainty in highly complex systems-of-systems at strategic levels may require an infeasible amount of computing power. When uncertainty is highly propagated, it may approach 100 percent, potentially illuminating a "mathematical proof of the fog of war". Even so, the metadata is still useful for inspection, allowing operators to see which factors contribute the most uncertainty.
• Communicating UQ to end-users is difficult because people have varying levels of statistical understanding, and their perception of uncertainty is often biased by decision-making heuristics. Therefore, UQ displays and user interfaces should be tested and tailored to different end-user types (e.g., data scientists versus operators).

By implementing UQ, the military can integrate data uncertainty into the learning process, which helps models become more resistant to overfitting: a behavior in which a model fits too closely to its training data and therefore makes inaccurate predictions on unseen data. Ultimately, standardizing UQ accounts for uncertainty related to observation, orientation, and action within the OODA loop, ensuring military advantage through superior actions.
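The fog-of-war effect described in the challenges above — propagated uncertainty growing toward 100 percent while the metadata stays inspectable — can be illustrated numerically. The input names and relative uncertainties below are invented for illustration; the arithmetic uses the standard rule that, for a product of independent factors, relative variances add: (sigma_y/y)^2 = sum_i (sigma_i/x_i)^2.

```python
import math

# Relative uncertainties (sigma/value) of independent factors feeding
# a chained estimate; names and numbers are invented for illustration.
inputs = {
    "sensor_track": 0.10,
    "terrain_model": 0.25,
    "intent_estimate": 0.60,
    "weather_forecast": 0.30,
}

# Relative variances add for a product of independent factors.
total_rel = math.sqrt(sum(r * r for r in inputs.values()))
print(f"propagated relative uncertainty: {total_rel:.0%}")

# Even as the total grows large, the metadata remains useful for
# inspection: rank each factor's share of the output variance.
for name, r in sorted(inputs.items(), key=lambda kv: -kv[1] ** 2):
    share = (r * r) / (total_rel ** 2)
    print(f"  {name}: {share:.0%} of variance")
```

Adding more chained stages only pushes the total higher, which is the compounding the article flags; the per-factor ranking is what lets an operator see that, here, the intent estimate dominates the uncertainty budget.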