Jack Wilkinson - Detecting problematic RCTs
Systematic reviews of health interventions synthesise evidence from all RCTs addressing a particular research question, and are considered very high-quality evidence. Unfortunately, it has become clear that some RCTs included in systematic reviews are not authentic and may have been entirely fabricated. We call trials subject to data falsification, fabrication, or other serious research integrity issues "problematic studies"; recent examples can be found among studies included in systematic reviews of ivermectin for the treatment of COVID-19. However, there is no consensus on how to identify these problematic studies when undertaking a systematic review, and often no checks are performed at all.

Comments from the talk:

- Would it make sense to do your testing using studies in the Cochrane Register of Studies rather than in individual reviews? You would then cover more areas of research and get an idea of prevalence.
- The best tests for data fabrication are those which are easy to conduct and very hard to circumvent. Encryption relies on this same asymmetry of effort, using mathematical trapdoor functions. Maybe some tools fulfil this property.
- Does your tool look at researcher allegiance/conflict of interest?
- My problem with the CRS is that it is just referencing; it does not even link to the risk-of-bias (RoB) assessment that may have been done on the trial.
- So if this gets going, one can imagine that journals that want to publish could be a "not so interested partner" in the quest that you describe.
- Unfortunately, there is not much to expect from journals, as this recent publication shows: Ortega, José-Luis; Delgado-Quirós, Lorena (2023). "How do journals deal with problematic articles? Response of journals to articles commented in PubPeer". Profesional de la información, v. 32, n. 1, e320118. https://doi.org/10.3145/epi.2023.ene.18
- We have pursued the Monticone question in pain RCTs: https://www.painresearchforum.org/pap....
- Also an effect on guidelines and systematic reviews; in press in BMJ Regional Anesthesia & Pain Medicine.
- A repository of red flags would be incredibly useful. In the past I have labelled trials as clearly not RCTs after investigation for a Cochrane systematic review, but there was no way to reflect this in the CRS record for others to see. We should have a system of flags, and an outward-facing repository would be even better. Otherwise, others may have to replicate all the effort expended in these investigations!
- We also need to educate the people who create IT/AI algorithms, so that retraction notices appear high in search results and retracted science is not featured in searches merely because it has many 'hits'.
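To illustrate the kind of easy-to-run check raised in the comments above, here is a minimal sketch of one well-known screening test for fabricated numbers: a terminal-digit uniformity test. This is not the speaker's tool; it is a generic, hypothetical example, and the function name and simulated data are the author's own assumptions. Genuinely measured continuous values tend to have roughly uniform final digits, whereas invented numbers often show digit preference (e.g. over-use of round values), which a simple chi-square statistic can flag.

```python
import random
from collections import Counter

def terminal_digit_chisq(values):
    """Chi-square statistic for uniformity of terminal digits.

    A large value suggests digit preference, which can be a (weak,
    non-conclusive) red flag worth following up -- never proof of fraud.
    """
    digits = [str(v)[-1] for v in values if str(v)[-1].isdigit()]
    n = len(digits)
    counts = Counter(digits)
    expected = n / 10  # uniform expectation over digits 0-9
    return sum((counts.get(str(d), 0) - expected) ** 2 / expected
               for d in range(10))

# Simulated "measured" data: last digits roughly uniform -> small statistic.
random.seed(1)
genuine = [random.randint(1000, 9999) for _ in range(1000)]

# Simulated fabricated data: only round numbers -> hugely inflated statistic.
fabricated = [random.choice([1000, 1500, 2000, 2500, 3000])
              for _ in range(1000)]

print(terminal_digit_chisq(genuine))     # small (chi-square, 9 df)
print(terminal_digit_chisq(fabricated))  # very large
```

Note the asymmetry the commenter describes: the test costs seconds to run, but a fabricator would have to simulate realistic digit distributions across every reported variable to evade it, and this check is only one of several that would be applied together.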