Malcolm Slaney of the Machine Hearing Group at Google Research will be discussing the topic of auditory attention. Malcolm is known for his work on automatic speech recognition and auditory perception, among many other things.

When: Friday, January 22nd, at 3pm

Where: ASRC Auditorium

Important: please confirm your attendance by RSVP’ing to neuroccny@gmail.com. Your RSVP will facilitate your entry into the ASRC building. Feel free to extend invites to relevant parties.

Title: Understanding and using audio attention

Abstract: Understanding auditory attention is key to many tasks. In this talk I would like to summarize several aspects of attention that we have used to better understand how humans use attention in their daily lives. This work extends from top-down and bottom-up models of attention useful for solving the cocktail party problem, to the use of eye-gaze and face-pose information to better understand speech in human-machine and human-human-machine interactions, to new techniques that use EEG (and other brain signals) to infer the direction of auditory attention. The common thread throughout all this work is the use of implicit signals such as auditory saliency, face pose, and eye gaze as part of a speech-processing system. I will show algorithms and results from speech recognition, speech understanding, addressee detection, and selecting the desired speech from a complicated auditory environment. This talk will describe work that I did while at Microsoft Research, and efforts at the Telluride Neuromorphic Cognition Engineering Workshop that were partially supported by Google.

Biography: BSEE, MSEE, and Ph.D., Purdue University. Dr. Malcolm Slaney is a research scientist in the Machine Hearing Group at Google Research. He is a Consulting Professor at Stanford CCRMA, where he has led the Hearing Seminar for more than 20 years, and an Affiliate Faculty member in the Electrical Engineering Department at the University of Washington. He is a (former) Associate Editor of IEEE Transactions on Audio, Speech, and Signal Processing and IEEE Multimedia Magazine. He has given successful tutorials at ICASSP 1996 and 2009 on “Applications of Psychoacoustics to Signal Processing,” on “Multimedia Information Retrieval” at SIGIR and ICASSP, and on “Web-Scale Multimedia Data” at ACM Multimedia 2010. He is a coauthor, with A. C. Kak, of the IEEE book “Principles of Computerized Tomographic Imaging,” which was republished by SIAM in their “Classics in Applied Mathematics” series. He is coeditor, with Steven Greenberg, of the book “Computational Models of Auditory Function.” Before joining Google, Dr. Slaney worked at Bell Laboratories, Schlumberger Palo Alto Research, Apple Computer, Interval Research, IBM’s Almaden Research Center, Yahoo! Research, and Microsoft Research. For many years, he has led the auditory group at the Telluride Neuromorphic (Cognition) Workshop. Dr. Slaney’s recent work is on understanding audio perception and decoding auditory attention from brain signals. He is a Fellow of the IEEE.
