
Researchers from the University of Osnabrück propose an innovative approach in the esteemed journal 'Nature Machine Intelligence'


Researchers from the University of Osnabrück have contributed to an article in Nature Machine Intelligence that reinterprets how human vision works.

In a groundbreaking study published in Nature Machine Intelligence, researchers from the University of Osnabrück, together with colleagues in Montreal and Minnesota, have proposed a novel approach to understanding human vision using language models from artificial intelligence (AI). The paper, titled "High-level visual representations in the human brain are aligned with large language models," suggests that the visual system of the human brain may converge on a common format, a lingua franca, shared across the senses and language.

The research team, led by Prof. Dr. Adrien Doerig, now at Freie Universität Berlin, and Prof. Dr. Tim C. Kietzmann from the University of Osnabrück, demonstrates that large language models (LLMs) can predict patterns of brain activity recorded with fMRI while participants viewed various visual scenes. This means that the human brain encodes visual information not just as raw images or objects, but in a way that is rich in meaning, context, and the relationships between objects, much as LLMs encode the meaning of text.
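To make this concrete, the analysis can be pictured as a voxelwise encoding model: a linear mapping from LLM embeddings of scene descriptions to measured fMRI responses. The following Python sketch uses placeholder data and an off-the-shelf ridge regression; the array shapes, penalty, and scoring are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch of a voxelwise encoding model (placeholder data, not the paper's pipeline).
# caption_embeddings: (n_images, d)        LLM embeddings of scene descriptions
# voxel_responses:    (n_images, n_voxels) fMRI responses to the same images
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
caption_embeddings = rng.standard_normal((1000, 768))  # stand-in for real embeddings
voxel_responses = rng.standard_normal((1000, 2000))    # stand-in for real fMRI data

X_tr, X_te, Y_tr, Y_te = train_test_split(
    caption_embeddings, voxel_responses, test_size=0.2, random_state=0
)

# Fit one linear map from LLM embedding space to all voxels at once
encoder = Ridge(alpha=1.0).fit(X_tr, Y_tr)
pred = encoder.predict(X_te)

# Score each voxel by the correlation between predicted and measured responses
r = np.array([np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y_te.shape[1])])
print(f"median held-out voxel correlation: {np.median(r):.3f}")
```

With random placeholder data these correlations hover around zero; with real embeddings and recordings, above-chance correlations in high-level visual areas would indicate the kind of alignment the study reports.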

The study found that linguistic scene descriptions, as represented in large language models, show remarkable similarities to activity in the brain's visual system while participants view the corresponding images in an MRI scanner. To validate this, the researchers trained artificial neural networks to predict language-model representations directly from images, creating models that process visual information in a linguistically decodable way. These newly developed models predict participants' brain activity more accurately than many of the field's current leading AI models.
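In outline, that second step amounts to training a vision network to regress the LLM embedding of each image's description, so that the network's output lives in a linguistically decodable space. The sketch below is a minimal illustration under assumed choices (a ResNet-18 backbone, a 768-dimensional embedding, a cosine loss); the study's actual architecture and training recipe may differ.

```python
# Hedged sketch: map images into an assumed 768-d LLM embedding space.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ImageToLLMEmbedding(nn.Module):
    def __init__(self, embed_dim: int = 768):
        super().__init__()
        backbone = resnet18(weights=None)  # backbone choice is an assumption
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)  # project into LLM space
        self.backbone = backbone

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.backbone(images)

model = ImageToLLMEmbedding()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on dummy data: images paired with the
# LLM embeddings of their (hypothetical) scene descriptions.
images = torch.randn(8, 3, 224, 224)
target_embeddings = torch.randn(8, 768)

optimizer.zero_grad()
predicted = model(images)
loss = 1 - nn.functional.cosine_similarity(predicted, target_embeddings, dim=1).mean()
loss.backward()
optimizer.step()
print(f"cosine loss after one step: {loss.item():.3f}")
```

Once trained on real image-caption pairs, such a network's internal representations could be compared with fMRI data in the same way as the LLM embeddings above.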

This ability to "read minds", at least in this limited sense, hints at potential improvements for brain-computer interfaces. According to Prof. Dr. Doerig, such a common language would greatly simplify communication between brain regions, potentially revolutionizing our understanding of human vision and of how the brain processes visual information. The study also suggests paths for improving future AI systems and points to medical applications: the technology could one day contribute to the development of visual prosthetics for people with severe visual impairments.

Prof. Dr. Tim C. Kietzmann, a co-first author of the study, explains that using language models to understand visual processing might seem counterintuitive at first. However, the correspondence between representations in AI language models and activation patterns in the brain is significant for understanding complex semantic processing in the brain. The researchers believe that language models are extremely good at processing contextual information and have a semantically rich understanding of objects and actions.

The paper (DOI: 10.1038/s42256-025-01072-0) is available at https://www.nature.com/articles/s42256-025-01072-0. For further information, Prof. Dr. Tim C. Kietzmann can be contacted at [email protected].

References:
1. Doerig, A. et al. High-level visual representations in the human brain are aligned with large language models. Nature Machine Intelligence (2025). DOI: 10.1038/s42256-025-01072-0.
2. Paper: https://www.nature.com/articles/s42256-025-01072-0
3. Personal communication with Prof. Dr. Tim C. Kietzmann, University of Osnabrück.
