Rethinking LLM-as-a-Judge: Representation-as-a-Judge With SLMs Via Semantic Capacity Asymmetry
This research paper introduces a novel evaluation framework called Representation-as-a-Judge, which challenges the prevailing reliance on large, computationally expensive language models for assessing text quality. The authors propose the Semantic Capacity Asymmetry Hypothesis, arguing that the cognitive load required to evaluate text is significantly lower than that needed to generate it; consequently, small language models often possess the internal understanding necessary to judge quality even if they lack the capacity to articulate it through coherent text generation. To operationalize this, the researchers developed INSPECTOR, a system that freezes small models and uses lightweight probing classifiers to extract evaluative signals directly from their intermediate hidden states rather than relying on unreliable generated outputs.

Experiments across reasoning benchmarks such as GSM8K and MATH demonstrate that this probing method substantially outperforms direct prompting of small models and closely approximates the accuracy of large proprietary models. Furthermore, the study confirms that these internal representations serve as effective data filters for training downstream models, offering a scalable, efficient, and interpretable alternative to the costly LLM-as-a-Judge paradigm.

https://arxiv.org/pdf/2601.22588
https://github.com/zhuochunli/Represe...
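The core idea of probing a frozen small model's hidden states for an evaluative signal can be illustrated with a minimal sketch. This is not the paper's INSPECTOR implementation; the model name, probe layer, and helper functions below are illustrative assumptions, using Hugging Face Transformers for the frozen small language model and a scikit-learn logistic regression as the lightweight probe.

```python
# Minimal sketch of a "representation-as-a-judge" style probe (assumed, not the
# paper's actual INSPECTOR code): a frozen small LM encodes (question, answer)
# pairs, and a lightweight linear classifier trained on an intermediate hidden
# state predicts whether the answer is correct.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "Qwen/Qwen2.5-0.5B"   # assumed small model; any frozen SLM works
PROBE_LAYER = 12                   # assumed intermediate layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()  # the SLM stays frozen; only the probe below is trained


@torch.no_grad()
def hidden_feature(question: str, answer: str) -> torch.Tensor:
    """Hidden state of the last token at PROBE_LAYER for a (question, answer) pair."""
    text = f"Question: {question}\nAnswer: {answer}"
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    outputs = model(**inputs)
    # hidden_states is a tuple of (num_layers + 1) tensors of shape [1, seq_len, dim]
    return outputs.hidden_states[PROBE_LAYER][0, -1]


def train_probe(examples):
    """examples: list of (question, answer, is_correct) tuples with known labels."""
    X = torch.stack([hidden_feature(q, a) for q, a, _ in examples]).float().numpy()
    y = [int(label) for _, _, label in examples]
    return LogisticRegression(max_iter=1000).fit(X, y)


def judge(probe, question: str, answer: str) -> float:
    """Probability that the answer is correct, read off the frozen SLM's states."""
    x = hidden_feature(question, answer).float().numpy().reshape(1, -1)
    return float(probe.predict_proba(x)[0, 1])
```

In this sketch only the logistic regression is fit, so evaluation costs one forward pass of the small model plus a dot product; the same probe scores could then be thresholded to filter training data for downstream models, in the spirit of the filtering experiments the paper describes.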