On this edition of FOCUS In Sound, we welcome a scientist who explores the way the brain encodes information about the world around us – she combines computational and biological approaches to study the mechanisms behind the transformation of sensory representations. Dr. Maria N. Geffen is an Assistant Professor in the Departments of Otorhinolaryngology: Head and Neck Surgery and Neuroscience at the University of Pennsylvania. In 2008, when she was still at Rockefeller, she received the Burroughs Wellcome Fund Career Award at the Scientific Interface, a $500,000 grant designed to help bridge advanced postdoctoral training and the first three years of faculty service.
Transcription of “Interview with Maria Geffen”
00;00;02;29 – 00;00;31;06
Ernie Hood
Welcome to Focus In Sound, the podcast series from the Focus newsletter published by the Burroughs Wellcome Fund. I’m your host, science writer Ernie Hood. On this edition of Focus In Sound, we welcome a scientist who explores the way the brain encodes information about the world around us. She combines computational and biological approaches to study the mechanisms behind the transformation of sensory representations.
00;00;31;27 – 00;00;58;21
Ernie Hood
Dr. Maria Geffen is an assistant professor in the Departments of Otorhinolaryngology: Head and Neck Surgery and Neuroscience at the University of Pennsylvania. She received her bachelor’s degree from Princeton, her Ph.D. in biophysics at Harvard, and did her postdoc at the Center for Studies in Physics and Biology at Rockefeller University in New York. In 2008, when she was still at Rockefeller.
00;00;59;09 – 00;01;14;19
Ernie Hood
She received the Burroughs Wellcome Fund Career Award at the Scientific Interface, a $500,000 grant designed to help bridge advanced postdoctoral training and the first three years of faculty service. Maria, welcome to Focus In Sound.
00;01;15;07 – 00;01;16;06
Maria Geffen
Thanks for having me.
00;01;16;26 – 00;01;24;20
Ernie Hood
Maria, I imagine the world looks and sounds much different to you here in 2013 than it did in 2008, when you received the Burroughs Wellcome Fund Award.
00;01;25;03 – 00;01;55;20
Maria Geffen
Oh yes. This has been a really interesting five years for me. I started out as a postdoctoral fellow in the physics center at Rockefeller University, and now I lead an independent research lab at the University of Pennsylvania. And I have to say that in the focus of my research there is a definite trajectory that I can trace. But we do use very many novel techniques, and we have really been pushing our studies to address new questions.
00;01;56;24 – 00;02;11;13
Ernie Hood
Well, Maria, in order to put the details in context, as we explore your research, would you kind of paint a big picture for us of your pursuits? What are the overall goals of your laboratory of auditory coding?
00;02;11;18 – 00;03;01;27
Maria Geffen
We have really sort of two big goals that we’re trying to address simultaneously. One is that we try to understand how networks of neurons in the brain encode information about complex auditory environments, and we do that in the context of the natural experience that either us humans or animals will have in the natural world. And that’s a complex question, because while it’s known how single individual neurons respond to different auditory stimuli, how they work together in ensembles is only beginning to be understood.
00;03;01;28 – 00;03;38;22
Maria Geffen
Now, the reason for that is the optogenetic revolution that has taken place in the last five years, where we have gotten new tools that allow us to study the behavior of cells not just as individuals in isolation, but really as ensembles. And furthermore, we can do that in the awake brain, which gives us the ability to manipulate the behavioral tasks in which we can place the animal while performing our experiments and analyzing the brain activity.
00;03;39;04 – 00;04;15;27
Maria Geffen
So one big question that we’re trying to understand is really the function of the ensembles of networks in the brain. And another, more general question is trying to understand how sound processing takes shape. How is it that we hear? How is it that as you’re listening to my voice, your brain receives information about the mechanics of vibrations of the sound waveform, yet eventually your brain translates that into words and you can comprehend words.
00;04;16;11 – 00;04;42;22
Maria Geffen
And so we study both the structure of sounds, the structure of speech, and we try to connect that to what we know about neural processing in the brain to identify how sound representation gets transformed and how neurons are able to construct this very complicated representation of the auditory environment.
00;04;43;12 – 00;04;52;04
Ernie Hood
That’s all very interesting, Maria. So tell us a bit more about some of the methods that you mentioned and how they interact to generate new knowledge.
00;04;52;28 – 00;05;40;04
Maria Geffen
We try to take what’s called a systems neuroscience approach to this. In systems neuroscience, we approach these questions at sort of all different levels. So on the one hand, we perform sound analysis to try to generate different types of sound stimuli where we modulate the natural sounds in some specific ways. During our experiments, we engage the animal in a specific behavioral task, or the humans engage in a psychophysical task, such that they’re not just listening passively to the stimuli, but they have some form of knowledge or some form of learning and memory that they need to engage in during the experiment.
00;05;40;28 – 00;06;21;23
Maria Geffen
What we’re, of course, ultimately interested in understanding is the neuronal activity. And so to understand neuronal activity, we record the signals that neurons send to each other using electrodes, which are tiny probes that measure the electrical potentials between the cells. And we also use pharmacological tools to modulate the activity of neurons overall in the brain. But also we use optogenetic tools, which allow us to change the activity of individual neuronal cell types at really high resolution.
00;06;22;03 – 00;06;59;19
Maria Geffen
In a typical experiment, we combine all these techniques together, and what this allows us to do is to study the brain function as it happens in the real world, where we are always learning new information. The animal is performing a behavioral task of listening to these mathematically generated sound stimuli while we are recording the activity of its neurons and simultaneously perturbing the activity of some of the neurons, letting some of those neurons respond to the stimuli as they would under natural conditions.
00;07;00;18 – 00;07;11;23
Ernie Hood
Well, it’s very exciting that you’re using so many state of the art tools and emerging tools to, in combination, answer such complex questions.
00;07;13;10 – 00;07;49;06
Maria Geffen
Yeah, so we’ve really benefited from the recent growth in the available techniques. When I was starting to study systems neuroscience in my graduate work, I was actually interested in very similar research questions, even though I was working in the salamander retina. And what we tried to understand was how ensembles of neurons function together under natural conditions. At that point, it was only possible to study the system in isolated neuronal tissue.
00;07;49;14 – 00;08;22;09
Maria Geffen
Now, the retina could live in the dish for many days, and we could run very involved, complicated experiments, manipulating the activity levels of individual neuronal cell types and recording the activity of populations of neurons, again using computational techniques. And this was, of course, in vision. But ultimately, we were restricted by the fact that that’s isolated tissue. And I always wanted to move into the cerebral cortex, which is a really important part of our brain.
00;08;22;11 – 00;08;59;03
Maria Geffen
People think that that’s what really makes us human. This only became possible recently, where we don’t have to sacrifice the control over the state of the animal, and we can integrate it with behavioral tasks. So it’s a much richer repertoire of tools within which we can study the function of the brain. This year, the systems neuroscience approach to mapping neuronal brain function was recognized by President Obama and the NIH as a top priority, which you might have heard of as the BRAIN Initiative.
00;09;00;21 – 00;09;22;12
Ernie Hood
Maria, I know that you and your postdoc, Mark Aizenberg, recently published a study in Nature Neuroscience with some pretty amazing findings regarding associations between emotions and the ability to discriminate sounds, shedding new light on some previous, seemingly contradictory findings. Would you tell us more about that research?
00;09;23;21 – 00;09;56;25
Maria Geffen
Yeah. So this research was, again, in this context of trying to understand how the brain encodes sounds in the real world, where we constantly have to learn to discriminate between different types of sounds. For an animal, for example, it’s very important to be able to detect behaviorally relevant, very specific sounds: for example, the sound of an owl flying overhead or the sound of the footsteps of a predator.
00;09;57;05 – 00;10;35;12
Maria Geffen
To us, of course, there is a huge variety of sounds that we constantly need to pay attention to, which our brain learns to associate with danger. For example, the sound of an alarm or a siren. What we tested was how well our brain can learn to discriminate once it has learned to associate a specific emotional value with a particular sound. How does that affect our ability to discriminate between different sounds in general?
00;10;35;12 – 00;11;22;21
Maria Geffen
Does becoming afraid of something, for example, change our ability to tell apart different sensory stimuli? There was some work that actually resulted in somewhat controversial findings. This is done using a model of what’s called aversive learning, where we learn to become afraid of something that has previously been neutral. In one study there had been a finding that if you learn that some sound is followed by something unpleasant, by an unpleasant stimulus, then your brain actually becomes less sensitive to differences between different types of sounds.
00;11;23;06 – 00;11;55;20
Maria Geffen
So it’s as if your really great sense of pitch actually decreases if you become afraid of one of the sounds that you’re listening to. This was explained by the idea that it might actually be beneficial for the organism, when exposed to something aversive, to kind of generalize that sense of aversion to other similar sounds. And this doesn’t have to be restricted to sounds.
00;11;55;20 – 00;12;30;23
Maria Geffen
This can, of course, extend to all the different perceptual senses. On the other hand, a similar study was conducted using two different scents, scents that people couldn’t really tell apart before, although they were two slightly different chemicals. When people were trained to associate one of those chemicals with a negative stimulus, they actually became perceptually able to tell the two odors apart from each other.
00;12;31;13 – 00;13;05;08
Maria Geffen
So in a way, their sense of how well they can tell apart different sensory cues in this case increased as a result of a very similar type of training, what we call emotional learning. My postdoc noticed that there was actually something that’s different between these two studies, and that was what was required during the emotional learning. In one study, the emotional learning was restricted to something that was perceptually obvious.
00;13;05;26 – 00;13;31;10
Maria Geffen
The subjects in that study were trained on tones, where the tones were perceptually very far apart, and so it was easy to discriminate one tone from the other and to associate one of the tones with the negative stimulus. Whereas in the study that tested odor perception, the two odors were really close together, and that led to the opposite effect.
00;13;31;23 – 00;14;17;11
Maria Geffen
Our hypothesis was that how precise the emotional learning needs to be is closely linked to the resulting changes in how tight our sensory acuity becomes. So what that meant was that if, during the emotional learning, the sounds that are used are very close together, then we predicted this would not only translate into much more precise emotional learning, but it would also translate into changes in sensory discrimination, changes in the sense of pitch of the animals that we’re testing.
00;14;17;11 – 00;14;36;19
Maria Geffen
If we didn’t ask for very precise emotional learning from the animals, they would not develop such a precise emotional response, and this would translate into an actual worsening of sensory acuity.
00;14;37;27 – 00;14;57;18
Ernie Hood
Well, it’s very impressive that you’ve actually been able to identify the brain mechanisms that underlie these activities. And I understand also, Maria, that there are some actual implications for a better understanding of conditions such as PTSD and anxiety. Could you tell us about the translational potential?
00;14;58;07 – 00;15;40;01
Maria Geffen
One of the things that happens in post-traumatic stress disorder is that a fearful emotional experience becomes translated into fear that the patient develops in response to everyday sensory stimuli. So, for example, for a veteran who was traumatized in combat by the sound of a bomb explosion, when they come home, there are many different types of sounds, such as the sound of thunder, that can trigger a very strong emotional response.
00;15;40;16 – 00;16;16;13
Maria Geffen
So that means that, in a way, the fear has generalized from one sound to another in their emotional learning. And that’s why we use emotional learning: it’s, in a way, a model for developing anxiety, or specifically post-traumatic stress disorder. But what’s interesting is that some veterans develop PTSD and others, who have been in the same exact combat situations and have had the same exact training, do not.
00;16;17;12 – 00;16;49;25
Maria Geffen
In our experiments, we also see that there is huge variability in the range of effects that individual animals exhibit in response to the same exact emotional training that they undergo. And there is a difference both in the sensory response and also in how much they generalize, or how specific their emotional response becomes to the stimuli that they’re trained on.
00;16;50;10 – 00;17;34;18
Maria Geffen
And we think that there is actually a parallel between this and the differences that you can see in the emotional state of, say, the veterans. So that’s a group of people who have undergone the same emotional experiences, but some of whom have developed PTSD and some of whom have not. And we’re thinking of ways in which we can use the same animal models to try to develop some basic sensory tests that would allow us to predict whether some individuals are more at risk for developing PTSD than others.
00;17;35;12 – 00;18;06;25
Maria Geffen
And also, on the flip side, for developing treatments and therapies, we’re trying to understand the circuits that underlie this learning and the changes in sensory perception that follow emotional learning. We believe that these brain circuits are shared with those that are involved in the development of anxiety disorders, and that possibly, by training these brain circuits, we can develop new therapies for these disorders.
00;18;07;19 – 00;18;25;20
Ernie Hood
Well, Maria, that’s just fascinating work and certainly holds a lot of promise for helping some people who definitely need it. So we will certainly keep an eye on that line of research. Where is your research on the complex relationship between sounds and the brain headed from here?
00;18;26;17 – 00;18;58;07
Maria Geffen
So now that we have gotten a grip on some basic things that the auditory cortex is involved in, we’re trying to understand the details of the processing that takes shape within the auditory cortex. On the one hand, we aim to understand how processing of complex sounds is modulated between different areas within the auditory cortex, such as the primary and the secondary auditory cortex.
00;18;58;24 – 00;19;32;21
Maria Geffen
And there we’re asking a very specific question, which is: how does our brain develop a representation of sounds that is invariant to some basic perturbations? So for example, if I say the word “neurons” slowly or fast, you can still extract the meaning of that word. And also, if I lower my voice or raise the pitch of my voice, or somebody else says that word, you can still tell that it’s the same word.
00;19;33;01 – 00;20;13;27
Maria Geffen
So although these are several very different sound waveforms, somewhere in your brain the brain creates a representation of that word that’s invariant to the basic acoustic features. And we think that that transformation happens somewhere in the auditory cortex, based on some research results that we’ve obtained and also on decades of work by other researchers, who have also identified the auditory cortex as this crucial area where the brain goes from representing the physical features of the sound to really an object-based representation.
00;20;14;16 – 00;20;48;06
Maria Geffen
And so we’re asking the question of whether, as we go within the auditory cortex between different subdivisions of that brain area, we see a very gradual shift, such that the representation becomes more invariant and changes less as the pitch of the sound is changed or as the temporal statistics are modified. And how does that shift occur at the level of populations of neurons, and what role do different cell types play in that?
00;20;48;23 – 00;21;14;15
Maria Geffen
That’s also important in parsing the auditory scene, in sort of being able to hear my voice against a very loud background. For example, if we were in the middle of a cafeteria and you were trying to listen to me, your auditory system would be shutting down what we call the background noise. That’s where we’re going in trying to understand the processing of complex sounds.
00;21;15;07 – 00;21;31;07
Maria Geffen
And with the study of emotional learning, we’re really pulling apart the different brain mechanisms that are involved in that, and also refining our approaches to be able to ask more realistic, more complex questions.
00;21;31;16 – 00;21;41;19
Ernie Hood
Well, Maria, it’s been a real pleasure to get to know you and your fascinating work. We wish you the best of luck for continued success. And thanks so much for joining us today on Focus In Sound.
00;21;41;28 – 00;21;56;15
Maria Geffen
Thank you so much for having me. It was a real pleasure talking to you, and a real pleasure to be able to explain to you the development of the research in our laboratory of auditory coding at the University of Pennsylvania.
00;21;57;15 – 00;22;12;13
Ernie Hood
We hope you enjoyed this edition of the Focus In Sound podcast. Until next time, this is Ernie Hood. Thanks for listening.