Reimagining Data-Driven Decisions in Education through Critical Data Literacy with Shreepriya Dogra
Artificial Intelligence (AI) and Generative AI (GenAI) are marketed as upgrades to data-driven decision making in education, promising faster predictions, personalization, and adaptive interventions. Yet these systems do not address the fundamental problems embedded in educational data practices, such as over-reliance on quantifiable metrics, bias, inequity, and lack of transparency; they amplify them. Across platforms such as Learning Management Systems (LMS), institutional dashboards, and predictive models, what counts as “data” remains narrow: logins, clicks, scores, demographics, and test results. Excluded are lived experiences, complex identities, and structural inequities. These omissions are not accidental; they are design choices shaped by institutional priorities and power. Drawing on O’Neil and Broussard, this session highlights how data-driven systems risk misinterpretation, reductionism, and exclusion. Participants will engage with scenarios that demonstrate both the promises and pitfalls of triangulating educational data. Together, we will discuss how such data can be misinterpreted, reduced, or stripped of context when filtered through AI systems. As a starting point for navigating these problems, Critical Data Literacy is introduced as a framework for reimagining data practices through comprehension, critique, and participation. It equips participants who engage with data-driven systems, in education and beyond, to interrogate how data is produced, whose knowledge counts, and what is excluded or marginalized. Participants will leave with reflective questions to guide their own practice: Better for whom? What is not on the screen? Whose goals are being personalized? Without this lens, AI risks accelerating inequities under the guise of objectivity.