Members: Please place your sketch in alphabetical order by last name
(Use the Heading 3, not boldface, setting for the line with your name on it.)

(alphabetical by last name)

Maria Neimark Geffen

is an assistant professor at the University of Pennsylvania. Maria works on the role of neuronal circuits in the central auditory pathway in auditory perception and learning. You can find out more about her research here.

Greg Huber

is a Deputy Director at KITP working in biological physics on the mechanics of the cell interior. His webpage can be found here.

Jim Hudspeth

is a Professor at Rockefeller University and an Investigator of the Howard Hughes Medical Institute. Jim is interested in biophysical approaches to the transduction process of hair cells, and especially in the basis of the active process that amplifies inputs by a factor of 100 to 1000, underlies frequency discrimination of 0.1% to 0.2%, and provides a compressive nonlinearity that telescopes six orders of magnitude in input amplitude into two orders of magnitude of output. His web page is always out-of-date.

Christoph Kirst

is a fellow for physics and biology and a Kavli physics fellow at the Rockefeller University, working on dynamics and flexible computation in neuronal networks.

Andrei Kozlov

is an assistant professor in the Department of Bioengineering, Imperial College London. His interests are clustered around auditory neuroscience and biophysics, mainly auditory cortex and hair-cell mechanotransduction. His lab web page is here: kozlovlab.com
Richard Lyon

is a principal research scientist at Google, leading the Sound Understanding team to develop applications of what we can learn from the hearing research community. He describes his approach, and some of the applications developed, in his 2017 book //Human and Machine Hearing: Extracting Meaning from Sound//.

Richard Mooney

is a Professor of Neurobiology in the Duke University School of Medicine. Rich's research interests include vocal learning and vocal communication, using both songbirds and mice as experimental subjects. His webpage can be found here.

Ankit B. Patel

is an Assistant Professor in the ECE Dept. at Rice University and the Neuroscience Dept. at the Baylor College of Medicine in Houston, TX. Ankit works on bridging the gap between deep machine learning and computational neuroscience by building neurally consistent models and developing theories of deep learning. (A key result in his work is that convolutional nets can be recast as efficient inference in a hierarchical generative model.) His webpage is here.

Tobias Reichenbach

is a Senior Lecturer at Imperial College London. Tobias works on the biophysics of hearing and neuroscience, at the interface of science, technology and medicine. His webpage is here.

Ralf Schlüter

is an Academic Director at Lehrstuhl Informatik 6 - Human Language Technology and Pattern Recognition at RWTH Aachen University, Germany, where he leads the automatic speech recognition (ASR) working group. Ralf is interested in all aspects of ASR, including feature processing, acoustic and language modeling, as well as search. For more details, see his homepage.

Jonathan Z. Simon

is a professor at the University of Maryland. Jonathan works on auditory representations of sound with a strong temporal basis, typically in auditory cortex, and primarily in humans. He is especially interested in neural representations of complex acoustic stimuli, such as speech and multi-source auditory scenes. His webpage can be found here.

Malcolm Slaney

is a Research Scientist in the Machine Hearing group at Google Research, and is an adjunct professor at Stanford CCRMA and an affiliate professor at the University of Washington EE Department. He is interested in all manner of audio processing, and specializes in models of auditory perception.