
Patterns of connections reveal brain functions

For more than a decade, neuroscientists have known that many of the cells in a brain region called the fusiform gyrus specialize in recognizing faces. However, those cells don’t act alone: They need to communicate with several other parts of the brain. By tracing those connections, MIT neuroscientists have now shown that they can accurately predict which parts of the fusiform gyrus are face-selective. 



The study, which appeared in the Dec. 25 issue of the journal Nature Neuroscience, is the first to link a brain region’s connectivity with its function. No two people have the exact same fusiform gyrus structure, but using connectivity patterns, the researchers can now accurately predict which parts of an individual’s fusiform gyrus are involved in face recognition.

This work goes a step beyond previous studies that have used magnetic resonance imaging (MRI) to locate the regions that are involved in particular functions. “Rather than just mapping the brain, what we’re doing now is adding on to that a description of function with respect to connectivity,” says David Osher, a lead author of the paper and a graduate student in the lab of John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research.

Using this approach, scientists may be able to learn more about the face-recognition impairments often seen in autism and prosopagnosia, a disorder often caused by stroke. This method could also be used to determine relationships between structure and function in other parts of the brain.

To map the brain’s connectivity patterns, the researchers used a technique called diffusion-weighted imaging, which is based on MRI. The scan is sensitized, one direction at a time, to the movement of water in the brain. Wherever there are axons — the long cellular extensions that connect a neuron to other brain regions — water tends to move along the axon rather than across it. This is because axons are coated in a fatty material called myelin, which is impervious to water.

By sensitizing the scan along many different directions and observing which way the water moves most freely, the researchers can identify the locations of axons and determine which brain regions they connect.
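To make the underlying computation concrete, here is a minimal sketch, in Python with NumPy, of the standard diffusion-tensor fit that this kind of imaging relies on: given signal measurements along several gradient directions, a tensor is estimated for each voxel, and its principal eigenvector points along the dominant direction of water diffusion, i.e., the likely axon orientation. This is an illustration with simulated numbers, not code or data from the study; the gradient directions, b-value, and voxel values are all hypothetical.

```python
# Illustrative sketch (not the study's pipeline): fit a diffusion tensor to
# hypothetical diffusion-weighted measurements for one voxel, then take the
# principal eigenvector as the dominant axon direction.
import numpy as np

def fit_diffusion_tensor(bvecs, bvals, signals, s0):
    """Least-squares fit of the 6 unique tensor elements from
    S(g) = S0 * exp(-b * g^T D g), using the log-linear form."""
    g = np.asarray(bvecs, dtype=float)
    b = np.asarray(bvals, dtype=float)
    # Design matrix: ln(S/S0) = -b * (Dxx gx^2 + Dyy gy^2 + Dzz gz^2
    #                                 + 2Dxy gx gy + 2Dxz gx gz + 2Dyz gy gz)
    X = -b[:, None] * np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ])
    y = np.log(np.asarray(signals, dtype=float) / s0)
    dxx, dyy, dzz, dxy, dxz, dyz = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.array([[dxx, dxy, dxz],
                     [dxy, dyy, dyz],
                     [dxz, dyz, dzz]])

# Hypothetical acquisition: 12 gradient directions, b = 1000 s/mm^2.
rng = np.random.default_rng(0)
bvecs = rng.normal(size=(12, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(12, 1000.0)

# Simulated voxel whose water diffuses mostly along the x-axis (an "axon").
true_D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])   # mm^2/s
s0 = 1.0
signals = s0 * np.exp(-bvals * np.einsum('ij,jk,ik->i', bvecs, true_D, bvecs))

D = fit_diffusion_tensor(bvecs, bvals, signals, s0)
eigvals, eigvecs = np.linalg.eigh(D)
principal_direction = eigvecs[:, np.argmax(eigvals)]
print("Estimated dominant diffusion direction:", np.round(principal_direction, 3))
```

Tractography tools chain such per-voxel directions together to trace candidate axon bundles between regions; the news article's "connectivity patterns" summarize how strongly each patch of cortex is linked to every other region by such traced pathways.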

“For every measurable unit of the brain at this level, we have a description of how it connects with every other region, and with what strength it connects with every other region,” says Zeynep Saygin, a lead author of the paper and a graduate student who is advised by Gabrieli and Rebecca Saxe, senior author of the paper and associate professor of brain and cognitive sciences.

Gabrieli is also an author of the paper, along with Kami Koldewyn, a postdoc in MIT professor Nancy Kanwisher’s lab, and Gretchen Reynolds, a former technical assistant in Gabrieli’s lab.

Making connections

The researchers found that certain patches of the fusiform gyrus were strongly connected to brain regions also known to be involved in face recognition, including the superior and inferior temporal cortices. Those fusiform gyrus patches were also most active when the subjects were performing face-recognition tasks.

Based on the results in one group of subjects, the researchers created a model that predicts function in the fusiform gyrus based solely on the observed connectivity patterns. In a second group of subjects, they found that the model successfully predicted which patches of the fusiform gyrus would respond to faces.
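As a rough illustration of that train-on-one-group, test-on-another logic, the sketch below regresses each voxel’s face-selectivity on its connectivity “fingerprint” in one simulated group and checks how well the fitted model predicts face-selectivity in a held-out group. It is not the authors’ model or code; the ridge regression, the array sizes, and the simulated data are assumptions made only for illustration.

```python
# Minimal sketch of the cross-group prediction logic described above
# (simulated data, not the authors' code).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

n_voxels, n_targets = 500, 80          # fusiform voxels x connected regions
true_weights = rng.normal(size=n_targets)

def simulate_group(n_subjects):
    """Hypothetical data: connectivity fingerprints plus noisy
    face-selectivity values that depend on those fingerprints."""
    X = rng.normal(size=(n_subjects * n_voxels, n_targets))   # connectivity
    y = X @ true_weights + rng.normal(scale=2.0, size=n_subjects * n_voxels)
    return X, y

# Fit the connectivity-to-function model on group 1 ...
X_train, y_train = simulate_group(n_subjects=10)
model = Ridge(alpha=1.0).fit(X_train, y_train)

# ... and test whether it predicts face-selectivity in an unseen group 2
# from connectivity alone.
X_test, y_test = simulate_group(n_subjects=10)
r = np.corrcoef(model.predict(X_test), y_test)[0, 1]
print(f"Predicted vs. observed face-selectivity, held-out group: r = {r:.2f}")
```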

“This is the first time we’ve had direct evidence of this relationship between function and connectivity, even though you certainly would have assumed that was going to be true,” says Saxe, who is also an associate member of the McGovern Institute. “One thing this paper does is demonstrate that the tools we have are sufficient to see something that we strongly believed had to be there, but that we didn’t know we’d be able to see.”

The other regions connected to the fusiform gyrus are believed to be involved in higher-level visual processing. One surprise was that some parts of the fusiform gyrus connect to a part of the brain called the cerebellar cortex, which is not thought to be part of the traditional vision-processing pathway. That area has not been studied very thoroughly, but a few studies have suggested that it might have a role in face recognition, Osher says.

Now that the researchers have an accurate model to predict function of fusiform gyrus cells based solely on their connectivity, they could use the model to study the brains of patients, such as severely autistic children, who can’t lie down in an MRI scanner long enough to participate in a series of face-recognition tasks. That is one of the most important aspects of the study, says Michael Beauchamp, an associate professor of neurobiology at the University of Texas Medical School.

“Functional MRI is the best tool we have for looking at human brain function, but it’s not suitable for all patient groups, especially children or older people with cognitive disabilities,” says Beauchamp, who was not involved in this study.

The MIT researchers are now expanding their connectivity studies into other brain regions and other visual functions, such as recognizing objects and scenes, as well as faces. They hope that such studies will also help to reveal some of the mechanisms of how information is processed at each point as it flows through the brain.

 

[Korean-language article]

Machine reads thoughts through an opened skull



U.S. researchers have developed a prototype machine that can determine which word a person has in mind simply by reading that person’s brain activity.

According to the website of the University of California, Berkeley (UC Berkeley) and the online edition of the British daily The Guardian on the 31st, the researchers read the brain’s electrical signals by opening each subject’s skull and placing a net-like array of electrodes directly on the cerebral cortex.

The subjects were 15 volunteers drawn from patients who required brain surgery to treat intractable epilepsy.

First, the researchers recorded the signals produced in the brain while each subject listened to 5 to 10 minutes of conversation, and analyzed which signal patterns corresponded to which sounds.

The subjects were then played specific words that the researchers did not know, and from the electrical signals generated in each subject’s brain alone, the researchers were able to “decode” which word the subject had heard.
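Very roughly, the decoding described above can be framed as a regression problem: learn a mapping from electrode activity to the spectrogram of the sound heard during the training conversation, then apply that mapping to brain signals recorded while the subject hears new words. The sketch below, in Python with simulated data, shows only that framing; the linear model, array sizes, and variable names are assumptions for illustration, not the published method.

```python
# Illustrative sketch only (simulated data, not the published method or code):
# learn a linear mapping from cortical electrode activity to the audio
# spectrogram during "training" speech, then reconstruct the spectrogram of
# unseen speech from brain signals alone.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_electrodes, n_freq_bands = 64, 32

def simulate_listening(n_timepoints, mixing):
    """Hypothetical recording: the spectrogram of heard speech and the
    electrode activity it evokes (plus noise)."""
    spectrogram = np.abs(rng.normal(size=(n_timepoints, n_freq_bands)))
    ecog = spectrogram @ mixing.T + rng.normal(
        scale=0.5, size=(n_timepoints, n_electrodes))
    return ecog, spectrogram

mixing = rng.normal(size=(n_electrodes, n_freq_bands))

# "Training": minutes of conversation -> fit a decoder per frequency band.
ecog_train, spec_train = simulate_listening(6000, mixing)
decoder = Ridge(alpha=1.0).fit(ecog_train, spec_train)

# "Test": a word the decoder has not seen -> reconstruct its spectrogram
# from brain activity alone.
ecog_test, spec_test = simulate_listening(200, mixing)
spec_reconstructed = decoder.predict(ecog_test)
corr = np.corrcoef(spec_reconstructed.ravel(), spec_test.ravel())[0, 1]
print(f"Reconstruction accuracy (correlation): {corr:.2f}")
```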

In some cases, of course, the “decoded” sound was unintelligible noise rather than a recognizable word.

Decoding the electrical signals arising in the superior temporal gyrus, located above the ear on the side of the head, was also more likely to yield recognizable speech than decoding signals from other parts of the brain.

Brian Pasley, a postdoctoral researcher at UC Berkeley and the paper’s first author, explained, “If we come to fully understand the relationship between the brain’s memory and the sounds a person hears, it will become possible to turn the sounds someone is thinking of into actual speech, or to record them using a connected device.”

Co-author Robert Knight, a professor of psychology and neuroscience at UC Berkeley, predicted that as the research advances it will be of great help to people who have lost the ability to speak because of brain-related disorders.

Addressing concerns that such technology, as it develops, could be abused to invade privacy, Knight noted that achieving results like this study’s requires a subject’s cooperation in having his or her skull opened, calling such worries “the realm of science fiction.”

The paper was published in the journal PLoS Biology. (Yonhap News)

 
