"Seeing eye to eye" is a harmonious expression, but when different people see the same outside world, do they really see the same thing? "The simple answer is no," said Dr. Liron Gruber. "Even the same person, every time they look at the same thing, sees it differently."
Gruber and Ehud Ahissar of the Department of Brain Sciences at the Weizmann Institute of Science reached these conclusions in a study that investigated intriguing differences between human vision and computer vision first uncovered by Weizmann mathematicians. Those researchers, led by Professor Shimon Ullman of the Department of Computer Science and Applied Mathematics, found that computer algorithms, no matter how clever, are much worse than humans at interpreting image fragments known as minimal recognizable configurations (MIRCs) (middle row in the image above), that is, at determining from which objects these fragments are derived (top row).
In addition, when the researchers gradually cropped or blurred the MIRCs, the computers' recognition declined linearly, while in human participants it dropped abruptly at a certain cut-off point (bottom row).
Gruber realized that experiments involving MIRCs could provide a wealth of data
about the workings of the human visual system.
In an earlier study, she and her doctoral supervisor, Ahissar, demonstrated that, contrary to widely accepted belief, the human eye doesn't work like a camera that passively takes snapshots.
In a study later published in the Proceedings of the National Academy of Sciences, she and Ahissar teamed up with the computer scientist Ullman to put human vision to the test.
Identifying MIRCs usually takes people a relatively long time: more than two seconds, over six times longer than the roughly 300 milliseconds it takes to recognize an entire object.
The researchers recorded eye movements during participants' attempts to identify MIRCs and used computational models to simulate the activity of neurons in the retina.
These activity patterns not only varied with eye movements but also depended on whether people could recognize the objects in the pictures.
On average, recognition required scanning four different points in the picture with the eyes; at each point, the eye drifted locally in all directions for hundreds of milliseconds.
The results show that the interaction between eye movements and objects is critical
for recognition.
In fact, when the researchers eliminated the interaction between objects and eye movements, for example by moving the picture in sync with the eye, study participants were unable to recognize the object.
"The retina doesn't replicate the outside world the way a camera does, reproducing external patterns on film or in digital memory. Instead, human vision is an active process that includes interactions between external objects and eye movements," Ahissar said.
"When different people look at the same thing, their eyes follow different paths, and even the eyes of the same person never trace the same trajectory twice. So in a way, every time we look at something, it is a one-time experience."
So how does the brain encode visual reality? More precisely, how does this coding arise from the interactions between eye movements and objects? Gruber said: "When we look at an object or scene, the intensity of the light received by each receptor on the retina changes with each movement of the eye. The resulting patterns of neuronal activity can be interpreted, and perhaps stored, by the brain."
The findings represent a new direction in the search for neural codes, namely how information is encoded in the brain
.
Unlike the ubiquitous genetic code, the neural code can differ between brain regions.
The findings suggest that retinal coding stems from a dynamic process in which the brain interacts with external reality through the senses
.
They explain why it takes time to recognize a blurry object or to decipher an optical illusion, for example, finding the "hidden" Dalmatian dog in black patches on a white surface: capturing such a complex image requires scanning with the eyes.
Once human vision, from eye movements to neural coding, is better understood, it may become possible to develop effective artificial aids for visually impaired people and to teach robots to match humans at recognizing objects under challenging conditions.