A Primer on Artificial Intelligence for Philosophers and Theologians | Taylor Black
Large language models are, in their entirety, single mathematical functions: sequences of matrix transformations defined by hundreds of billions of learned parameters, trained on trillions of words of human text, that produce probability distributions over possible next tokens. Every word these systems generate emerges one token at a time, left to right, with no revision, no planning, and no homunculus deliberating about what to say. This talk provides the technical substrate philosophers and theologians need to evaluate what these systems are, what they do, and — with equal precision — what they do not do.

The talk begins with the transformer architecture introduced by Vaswani et al. (2017): multi-head attention, feed-forward networks, residual connections, and the fixed computational depth that constrains what a single forward pass can accomplish. It then describes the training process — next-token prediction via gradient descent over a corpus that encodes the full breadth of human knowledge — and the compression hypothesis: that learning to predict human text requires building internal representations of the structure that generates it. The critical observation is that this structure was put into the text by human beings who had already understood the world. The model compresses the products of human insight, not the act.

Recent mechanistic interpretability research reveals that models develop unified entity representations, compositional circuits, and geometric encodings of semantic relationships — structures that inherit the form of human understanding without performing the act. A January 2026 result from Kim, Lai, Scherrer, Agüera y Arcas, and Evans demonstrates that reasoning models spontaneously generate multi-perspective adversarial deliberation under nothing more than accuracy pressure, a finding with striking parallels to the scholastic disputatio.

Drawing on Bernard Lonergan's cognitional theory, the talk argues that models lack insight in all three of its constitutive moments: no question drives the extraction, no intelligence grasps why a pattern obtains, and no act of understanding pivots between data and concept. Reflective insight — the "Is it so?" that converts understanding into knowledge — and deliberative judgment are likewise absent. The model stores, retrieves, and recombines the fruits of human intelligence at unprecedented speed and scale while possessing neither insight, judgment, nor deliberation.

This characterization clarifies both the technology's extraordinary utility and its intrinsic dependence on the human intelligence that must direct it, evaluate its outputs, and take responsibility for what it produces. The philosophical and theological evaluation of whether this dependence is permanent or contingent — whether scale or architectural innovation could bridge the gap — is the work this talk invites its audience to undertake.
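To make the opening claim concrete, here is a minimal sketch of autoregressive generation. The sampling loop is the real mechanism; `toy_logits` is a hypothetical stand-in for the network itself, which in a real model computes its scores through billions of learned parameters.

```python
# A minimal sketch of autoregressive generation over a toy vocabulary.
# `toy_logits` is hypothetical: it stands in for the full transformer,
# but the loop around it -- one token at a time, left to right, with no
# revision -- is exactly how these systems generate text.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def toy_logits(tokens: list[int]) -> np.ndarray:
    """Stand-in for the model: map the token sequence so far to one
    unnormalized score per vocabulary entry (deterministic per context)."""
    ctx = np.random.default_rng(hash(tuple(tokens)) % (2**32))
    return ctx.normal(size=len(VOCAB))

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())                   # subtract max for stability
    return e / e.sum()

tokens = [0]                                  # start with "the"
for _ in range(5):                            # one token at a time
    probs = softmax(toy_logits(tokens))       # distribution over next token
    tokens.append(int(rng.choice(len(VOCAB), p=probs)))  # sample; no revision

print(" ".join(VOCAB[t] for t in tokens))
```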
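The architectural pieces the talk names can likewise be sketched. The following is one transformer block in NumPy, with tiny dimensions and random (untrained) weights chosen for illustration; the data flow follows Vaswani et al. (2017), though real models add layer normalization, causal masking, and learned weights at vastly larger scale.

```python
# A minimal sketch of one transformer block: multi-head attention,
# a feed-forward network, and residual connections. Dimensions are
# tiny and weights random; only the shapes and data flow are the point.
import numpy as np

rng = np.random.default_rng(0)
T, D, H = 4, 8, 2                 # sequence length, model width, heads
Dh = D // H                       # width per head

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention_head(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv          # queries, keys, values
    scores = q @ k.T / np.sqrt(Dh)            # scaled dot-product similarity
    return softmax(scores) @ v                # each position mixes the others

def block(x):
    # multi-head attention: H independent heads, concatenated, re-projected
    heads = []
    for _ in range(H):
        Wq, Wk, Wv = (rng.normal(size=(D, Dh)) * 0.1 for _ in range(3))
        heads.append(attention_head(x, Wq, Wk, Wv))
    Wo = rng.normal(size=(D, D)) * 0.1
    x = x + np.concatenate(heads, axis=-1) @ Wo   # residual connection
    # position-wise feed-forward network, plus a second residual
    W1 = rng.normal(size=(D, 4 * D)) * 0.1
    W2 = rng.normal(size=(4 * D, D)) * 0.1
    return x + np.maximum(0, x @ W1) @ W2

x = rng.normal(size=(T, D))       # one embedding per token
for _ in range(3):                # fixed depth: the same number of blocks
    x = block(x)                  # runs on every forward pass
print(x.shape)                    # (4, 8): same shape in, same shape out
```

Because the number of blocks is fixed when the model is built, every forward pass gets the same amount of computation regardless of how hard the question is; this is the fixed computational depth the abstract refers to.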
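The training objective can also be shown in miniature. Below, next-token prediction via gradient descent is run on a toy bigram model: a single logit table trained to predict each token from its predecessor. Real training does the same thing with a transformer over trillions of tokens, but the loss and the update rule are the same in kind.

```python
# A minimal sketch of next-token training by gradient descent on a
# toy bigram model. The corpus is "the cat sat on the mat"; the only
# parameters are a table of logits, logits[i, j] = score of token j
# following token i.
import numpy as np

VOCAB = ["the", "cat", "sat", "on", "mat"]
ids = [0, 1, 2, 3, 0, 4]                   # "the cat sat on the mat"
V = len(VOCAB)
logits = np.zeros((V, V))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

lr = 0.5
for step in range(200):
    loss = 0.0
    grad = np.zeros_like(logits)
    for prev, nxt in zip(ids, ids[1:]):    # every adjacent pair is one example
        p = softmax(logits[prev])
        loss -= np.log(p[nxt])             # cross-entropy: -log P(actual next)
        g = p.copy()
        g[nxt] -= 1.0                      # gradient of softmax + cross-entropy
        grad[prev] += g
    logits -= lr * grad                    # gradient descent step

print(f"final loss {loss:.3f}")
```

On this toy corpus the loss settles near the corpus entropy: "the" is followed equally often by "cat" and by "mat", so no parameter setting can do better than a 50/50 split there.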
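Finally, the "geometric encodings of semantic relationships" can be illustrated with the classic word-vector analogy, a related technique rather than the interpretability results the talk cites. The 3-dimensional vectors below are invented for illustration; real models learn such structure in thousands of dimensions.

```python
# An illustrative sketch of semantic relations as directions in a
# vector space, using hand-picked, hypothetical 3-d embeddings.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.8, 0.1]),
    "woman": np.array([0.1, 0.1, 0.8]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

target = emb["king"] - emb["man"] + emb["woman"]   # analogy arithmetic
best = max((w for w in emb if w != "king"),
           key=lambda w: cosine(emb[w], target))
print(best)   # "queen": the gender offset is a shared direction
```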