You’ve seen the comment. Buried under a video or article, someone writes: “This is AI.” It sounds like a technical judgment. But what if it isn’t one at all?

In this video, we unpack a new digital ethnographic analysis that treats these comments not as evidence-based claims, but as psychological signals. Using real comments pulled from public threads, we examine why people accuse content of being AI-generated even when they offer no proof, no analysis, and no attempt at verification. This isn’t about whether something is AI. It’s about what happens when familiar human cues (voice cadence, fluency, polish) stop working the way they used to.

Inside this breakdown:
- Two real-world “specimens” of AI accusation, analyzed side by side
- The difference between anxiety-driven and irritation-driven reactions
- Why “AI” often functions as a social gesture, not a diagnosis
- How accusations of AI serve as identity repair for uncertain observers
- What this reveals about human perception in the age of synthetic media

The core insight is simple, and uncomfortable: the problem isn’t that the system is artificial. It’s that the observer can no longer be sure. The next time you see someone confidently declare “This is AI,” you’ll know you’re not looking at detection. You’re looking at a symptom.

Reference:
Slade, T. (2026). Attribution Anxiety in the Age of Synthetic Media: Digital Ethnographic Specimens of Lay AI Detection [Data set]. Zenodo. https://doi.org/10.5281/zenodo.18276054