October 26, 2022 – “We eat first with our eyes.”
Roman foodie Apicius is believed to have uttered these words in the first century AD. Now, about 2,000 years later, scientists may prove him right.
MIT researchers have discovered a previously unknown part of the brain that lights up when we see food. This part, called the ventral food component, is located in the brain's visual cortex, in an area already known to play a role in recognizing faces, scenes, and words.
The study, published in the journal Current Biology, involved using artificial intelligence (AI) to build a computer model of this part of the brain. Similar models are emerging across fields of research to simulate and study complex body systems. A computer model of the digestive system, for example, was recently used to determine the best body position for taking a pill.
“The research is still developing,” says study author Meenakshi Khosla, Ph.D. “There is a lot that needs to be done to understand whether this area is the same or different in different individuals, and how it is modified by experience or familiarity with different types of foods.”
Khosla says identifying these differences can provide insight into how people choose what to eat, or even help us learn about the causes of eating disorders.
Part of what makes this study unique is the researchers' "hypothesis-neutral" approach. Instead of setting out to prove or disprove a fixed hypothesis, they simply began exploring the data to see what they could find. The goal, as the paper puts it, was to move beyond the specific hypotheses scientists had already thought to test. So they began sifting through a public database called the Natural Scenes Dataset, an inventory of brain scans from eight volunteers viewing 56,720 images.
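To make the idea of a hypothesis-neutral analysis concrete, here is a minimal sketch in Python. It assumes a matrix of fMRI responses (images by voxels) like those in the Natural Scenes Dataset and uses non-negative matrix factorization, a generic decomposition technique, to let components emerge from the data; the arrays are synthetic placeholders, and the study's actual method may differ in its details.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Placeholder response matrix: 1,000 images x 5,000 voxels of non-negative
# activation estimates (real recorded brain responses would go here).
responses = rng.random((1000, 5000))

# Factorize the responses into a handful of components without specifying
# in advance what any component should respond to.
model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
image_weights = model.fit_transform(responses)  # how strongly each image drives each component
voxel_weights = model.components_               # where each component lives across voxels

# Inspecting the images that most strongly drive each component is what
# lets something like a "food" component emerge from the data rather than
# from a prior hypothesis.
for k in range(image_weights.shape[1]):
    top_images = np.argsort(image_weights[:, k])[::-1][:10]
    print(f"component {k}: top-driving image indices {top_images}")
```

With real data, a component whose top-driving images all happen to contain food is exactly the kind of unexpected pattern this approach is designed to surface.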
As expected, the software analyzing the data set detected areas of the brain already known to respond to images of faces, bodies, words and scenes. But to the researchers' surprise, the analysis also revealed a previously unknown part of the brain that appeared to respond to images of food.
“Our first reaction was, ‘That’s nice and all, but it can’t be right,'” Khosla says.
To confirm their discovery, the researchers used the data to train a computer model of this part of the brain, a process that took less than an hour. They then fed the model more than 1.2 million new images.
Sure enough, the model lit up in response to food. Color didn't matter: even black-and-white photos of food made it respond, though not as strongly as color photos. And the model could tell the difference between food and things that merely look like food: a banana versus a crescent moon, or a blueberry muffin versus a puppy whose face resembles a muffin.
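A rough sketch of what training and testing such a model can look like, again in Python with synthetic placeholder data: it assumes precomputed image features (for instance, from a vision network) and the component's measured response to each training image, fits a regularized linear map between them, and then scores brand-new images. This is an illustration of the general technique, not the study's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

n_images, n_features = 5000, 512
features = rng.normal(size=(n_images, n_features))   # stand-in image features
true_weights = rng.normal(size=n_features)           # hidden "ground truth" for the demo
component_response = features @ true_weights + rng.normal(scale=0.5, size=n_images)

X_train, X_test, y_train, y_test = train_test_split(
    features, component_response, test_size=0.2, random_state=0
)

# Fitting a regularized linear map from image features to the component's
# response is fast, consistent with the article's "less than an hour".
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# Once fit, the model can be "fed" arbitrary new images: compute their
# features and read off the predicted component response.
new_features = rng.normal(size=(3, n_features))  # e.g., banana, crescent moon, muffin
print("predicted responses:", model.predict(new_features))
```

The key property being tested in the study is exactly what this setup allows: predictions for images the brain data never included, such as food lookalikes.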
From the human data, the researchers found that some people respond slightly more to processed foods like pizza than to unprocessed foods like apples. They hope to explore how other factors, such as liking or disliking a food, influence a person's response to it.
This technology could open up other areas of research as well. Khosla hopes to use it to explore how the brain responds to social cues such as body language and facial expressions.
Khosla has already begun validating the computer model against real people by scanning the brains of a new group of volunteers. "We collected empirical data on a few subjects recently and were able to localize this component," she says.