Scientists have identified a parallel between machine and human vision that may explain why the images we see in our mind's eye are so different from the information our eyes take in when we actually look at something.
Using an artificial neural network, a program loosely modeled on the human brain, together with an fMRI scanner, the new research draws a parallel between the inner workings of the human brain and a computer system. The findings help explain why the dog you picture in your head does not match a photograph of an actual dog, and they could inform further studies of mental health issues as well as the development of artificial intelligence through neural networks.
Thomas Naselaris, a neuroscientist affiliated with the Medical University of South Carolina, explained that researchers already know mental imagery is highly similar to regular vision in some ways, but that the two cannot be completely identical. Naselaris added that the team wanted to pinpoint exactly how the two differ.
The team used a generative neural network, which, given enough training data, can both create images and recognize them with ease. The researchers then studied how the network behaved while analyzing sample images and producing images of its own.
This analysis was then compared with activity in human brains, measured using an fMRI scanner. At different stages of the study, volunteers either looked at images on a screen or imagined mental pictures of their own.
The results were striking. The human brain and the artificial network matched up to a certain extent: scientists observed similarities in the way signals are passed in both. It also appears that imagined images are far less precise than images that are actually seen.
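The precision gap described above can be illustrated with a toy sketch. This is not the study's actual pipeline; it simply simulates a "seen" response as a faithful copy of a stimulus pattern and an "imagined" response as a weaker, noisier copy, then measures how well each one correlates with the original. All patterns and noise levels here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical "stimulus" activation pattern, e.g. one layer of a network.
stimulus = rng.normal(size=500)

# "Seen" response: the stimulus pattern plus a little measurement noise.
seen = stimulus + 0.3 * rng.normal(size=500)

# "Imagined" response: the same pattern, but weaker and much noisier,
# mimicking the lower precision of mental imagery.
imagined = 0.5 * stimulus + 1.5 * rng.normal(size=500)

def similarity(a, b):
    """Pearson correlation between two activation patterns."""
    return float(np.corrcoef(a, b)[0, 1])

print(f"seen vs stimulus:     {similarity(seen, stimulus):.2f}")
print(f"imagined vs stimulus: {similarity(imagined, stimulus):.2f}")
```

Running this shows the "seen" pattern correlating strongly with the stimulus while the "imagined" pattern correlates only weakly, which is the qualitative shape of the finding: imagery carries the same signal, just with lower fidelity.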