This presentation was given as part of the Music and Technology Seminar (Spring 2025) at Carnegie Mellon University.

Date: Tuesday, February 25, 2025

This talk will review two auditory perception studies, both of which require reconciling conflicting perceptual information. Both studies were collaborations between the Dietrich College and the College of Fine Arts.

The Ventriloquist Illusion using Wavefield Synthesis
Laurie Heller*, Anjelica J. Ferguson, Sungjoon Park, and Daniel Rosenberg (Carnegie Mellon University)

We studied the ventriloquist illusion under naturalistic conditions in which observers moved real objects in an open environment, thereby supplying cues from gesture and motion. Using a wavefield synthesis array, sound sources were invisibly placed either collocated with the object or several degrees to its left or right (a sketch of this kind of virtual-source placement appears after the abstracts). Using motion-tracking cameras, the sound sources could move along with the objects as observers moved them (the congruent motion-tracking condition). Because the sound was emitted from the speaker array and not the object, the sound sources could also remain stationary while the object moved, and vice versa (the incongruent motion-tracking condition). We tested the hypothesis that congruent auditory and visual motion would promote the binding of sounds and objects, while incongruent motion would disrupt it. The localization threshold was higher when the object was present than when it was absent, an expected result of the ventriloquist illusion, in which observers tend to localize sound toward a nearby visual stimulus. In the congruent motion condition, when the sound was collocated with the moving object, lateralization thresholds were even higher. However, inconsistent with our predictions, thresholds were just as high in the incongruent motion condition.

Deconstructing the Pluck
Paige Brady, Richard Randall, and Laurie Heller (Carnegie Mellon University)

This research investigates the acoustic components of instrument timbre that convey to listeners whether a stringed instrument is being plucked or bowed. Past research has focused on whether plucks and bows are perceived categorically, with mixed results (Cutting, 1982; Rosen & Howell, 1983; Kewley-Port & Pisoni, 1984; Smurzyński, 1985). However, much less work has examined what other factors can be manipulated to create hybrids and how people actually perceive them (Peynircioğlu et al., 2016). This project extends auditory categorical perception research by looking at multiple stimulus dimensions. We created hybrid stimuli that transition from a bow to a pluck by blending temporal and spectral aspects of the sounds, with the goal of creating a plausible morph (one plausible blending scheme is sketched below). Although morph ratings were highest when the pluck and bow ratings fell within a mid-range, only particular sounds in this mid-range were good morphs. We also asked whether a sound was likely to be made by a cello, a guitar, or a computer. The results may provide insight into human auditory perception and into methods for evaluating realistic morphs, which can be applied in music technology.
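To make the wavefield synthesis setup in the first study concrete: the key property is that a dense loudspeaker array can place a virtual sound source at an arbitrary point, decoupled from any visible object. The sketch below is a minimal delay-and-gain rendering of a virtual point source on a linear array; it omits the 2.5D WFS prefilter and tapering window, and the function names, array geometry, and parameters are illustrative assumptions, not the authors' actual system.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def render_virtual_source(signal, fs, source_xy, speaker_xs, speaker_y=0.0):
    """Drive a linear loudspeaker array so it approximates a point source
    at source_xy: each speaker plays a copy of the signal delayed by its
    distance to the virtual source and attenuated by 1/r. (The 2.5D WFS
    prefilter and tapering window are omitted for brevity.)"""
    sx, sy = source_xy
    dists = np.hypot(np.asarray(speaker_xs) - sx, speaker_y - sy)
    delays = np.round(dists / C * fs).astype(int)   # per-speaker delay, in samples
    gains = 1.0 / np.maximum(dists, 0.1)            # 1/r decay, clipped near zero
    out = np.zeros((len(speaker_xs), len(signal) + delays.max()))
    for ch, (d, g) in enumerate(zip(delays, gains)):
        out[ch, d:d + len(signal)] = g * signal
    return out  # one row of samples per loudspeaker

# Hypothetical example: place a source 5 degrees to the left of an object
# sitting 2 m in front of a 16-speaker array spanning 3 m.
fs = 48000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)  # 1 s, 1 kHz
speakers = np.linspace(-1.5, 1.5, 16)
x_offset = 2.0 * np.tan(np.radians(-5))               # ~-0.17 m at 2 m depth
feeds = render_virtual_source(tone, fs, (x_offset, 2.0), speakers)
```

Because the virtual source position is just a parameter of the rendering, it can be updated frame by frame from motion-tracking data (the congruent condition) or held fixed while the object moves (the incongruent condition).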
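The second abstract describes blending temporal and spectral aspects of a pluck and a bow, but not the exact pipeline. One plausible reading, sketched below, interpolates the STFT magnitude spectra and the amplitude envelopes independently; the function name, phase choice, and smoothing parameters are assumptions for illustration, not the study's method.

```python
import numpy as np
from scipy.signal import hilbert, stft, istft

def pluck_bow_morph(pluck, bow, fs, alpha):
    """Hypothetical morph between a bow (alpha=0) and a pluck (alpha=1):
    STFT magnitude spectra and amplitude envelopes are interpolated
    separately, so spectral and temporal cues blend along one axis."""
    n = min(len(pluck), len(bow))
    pluck, bow = pluck[:n], bow[:n]
    # Spectral blend: interpolate magnitudes, reuse the bow's phase.
    _, _, P = stft(pluck, fs)
    _, _, B = stft(bow, fs)
    mag = alpha * np.abs(P) + (1 - alpha) * np.abs(B)
    _, y = istft(mag * np.exp(1j * np.angle(B)), fs)
    y = y[:n]
    # Temporal blend: impose an interpolated, smoothed amplitude envelope.
    smooth = lambda e: np.convolve(e, np.ones(256) / 256, mode="same")
    env_target = (alpha * smooth(np.abs(hilbert(pluck)))
                  + (1 - alpha) * smooth(np.abs(hilbert(bow))))
    env_current = smooth(np.abs(hilbert(y))) + 1e-9
    return y * (env_target / env_current)
```

Sweeping alpha from 0 to 1 yields a stimulus continuum of the kind used in categorical perception studies; because the envelope and the spectrum are blended by separate terms, the two dimensions could also be decoupled and varied independently.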
Laurie Heller: Laurie Heller is a Professor of Psychology at Carnegie Mellon University who has collaborations and affiliated faculty appointments across many CMU departments and programs, including Music & Technology. Her research examines the human ability to use sound to understand events happening in the environment. Her perceptual experiments uncover acoustic cues that reveal attributes of sound events, and how our knowledge of these cue-attribute relationships influences our recognition of sounds. She has also examined how this knowledge influences which brain regions are recruited during the perception of sound events. Her multimodal experiments have combined hearing and vision and have asked whether sound affects the gestures we make. Her research on sound localization has included teaching naive listeners to extract information from echoes about the surrounding environment. Ongoing work involves the perception of sound categories and the effects of unwanted sounds. Collaborative applications are being developed to test sound recognition in hearing-impaired listeners and to improve the performance of a machine learning system for sound event classification. Applications of her research have the potential to enhance auditory displays, hearing aids, and navigation aids for the visually impaired.