Article link: https://open.substack.com/pub/johanos...

The article argues that “trusted AI” often fails for a simple reason: organisations buy narratives instead of evidence. While Davos leaders talk about trust, transparency and accountability, trust is actually built in procurement — the unglamorous decisions inside tenders, contracts, testing plans and monitoring requirements. When buyers accept slick demos, vague assurances, or endless pilots, they often end up with AI systems that can’t be properly audited, don’t have clear accountability, and create confusion when something goes wrong.

The piece frames procurement as governance in the AI era. It explains the practical questions organisations must force upfront: can decisions be traced through logs and versioning, who is named as accountable for ongoing monitoring, what triggers shutdown or rollback, and what redress exists for people harmed by incorrect decisions.

For South Africa, the warning is urgent because we will import many AI systems before we build them — and weak procurement means importing risk at scale. The takeaway is clear: if you can’t audit it, can’t assign accountability, and can’t provide redress, you can’t claim you’re deploying “trusted AI”.