On enabling human-centered behavioral informatics
October 21, 2014
10:30 a.m. to 12:00 p.m.
Auditorio Raúl Baillères, Río Hondo

Prof. Shrikanth Narayanan

Audio-visual data have been a key enabler of human behavioral research and its applications. The confluence of sensing, communication, and computing technologies is allowing capture of and access to data, in diverse forms and modalities, in ways that were unimaginable even a few years ago. Importantly, these data afford the analysis and interpretation of multimodal cues of verbal and non-verbal human behavior. Such data sources carry crucial information not only about a person's intent and identity but also about underlying attitudes and emotions. Automatically capturing these cues, although vastly challenging, offers the promise not just of efficient data processing but also of tools for discovery that enable hitherto unimagined insights.

Recent computational approaches that leverage judicious use of both data and knowledge have yielded significant advances in this regard, for example in deriving rich, context-aware information from multimodal sources including human speech, language, and videos of behavior. Such information can be further complemented and integrated with data on human brain and body physiology. This talk will focus on some of the advances and challenges in gathering such data and in creating algorithms for machine processing of such cues. It will highlight some of our ongoing efforts in Behavioral Signal Processing (BSP), technology and algorithms for quantitatively and objectively understanding typical, atypical, and distressed human behavior, with a specific focus on communicative, affective, and social behavior. The talk will illustrate Behavioral Informatics applications of these techniques that contribute to quantifying higher-level, often subjectively described, human behavior in a domain-sensitive fashion. Examples will be drawn from health and well-being realms such as autism, couples therapy, and addiction counseling.

Biography of the Speaker:

Shrikanth (Shri) Narayanan is the Andrew J. Viterbi Professor of Engineering at the University of Southern California, where he is Professor of Electrical Engineering, Computer Science, Linguistics and Psychology and Director of the Ming Hsieh Institute. Prior to USC, he was with AT&T Bell Labs and AT&T Research. His research focuses on human-centered information processing and communication technologies. He is a Fellow of the Acoustical Society of America, the IEEE, and the American Association for the Advancement of Science (AAAS). Shri Narayanan is an Editor for the journal Computer Speech and Language and an Associate Editor for the IEEE Transactions on Affective Computing, the Journal of the Acoustical Society of America, and the APSIPA Transactions on Signal and Information Processing, having previously served as Associate Editor for the IEEE Transactions on Speech and Audio Processing (2000-2004), the IEEE Signal Processing Magazine (2005-2008), and the IEEE Transactions on Multimedia (2008-2012). His honors include the 2005 and 2009 Best Transactions Paper awards from the IEEE Signal Processing Society and service as its Distinguished Lecturer for 2010-11. With his students, he has received a number of best paper awards, including wins in the Interspeech Challenges in 2009 (emotion classification), 2011 (speaker state classification), 2012 (speaker trait classification), and 2013 (paralinguistics/social signals). He has published over 600 papers and has been granted 16 U.S. patents.


Organized by: Departamento Académico de Computación
Telephone:
Susana Corvera, Tel. 5628 4000 ext. 3614
Email:
susana.corvera@itam.mx