Shared expectations help shape visual memory
In a recent PNAS study, researchers explored the distortions that bias our memory of the locations of objects and details within a scene. These distortions creep in because our visual system cannot process the torrent of information constantly pouring in as we view the world around us. Our brains accordingly boil things down, focusing on only the most important bits.

An essential function of the human visual system is to locate objects in space and navigate the environment. Due to limited resources, the visual system achieves this by combining imperfect sensory information with a belief state about locations in a scene, resulting in systematic distortions and biases. These biases can be captured by a Bayesian model in which internal beliefs are expressed in a prior probability distribution over locations in a scene.
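The core of such a Bayesian model can be sketched in a few lines. This is an illustrative one-dimensional example, not the paper's implementation: a noisy sensory reading of a location is combined with a Gaussian prior belief, and the posterior mean is a precision-weighted average that is pulled toward the prior, which is exactly the systematic bias described above.

```python
def bayes_estimate(obs, obs_var, prior_mean, prior_var):
    """Posterior mean and variance for a Gaussian prior x Gaussian likelihood.

    The weight w on the observation shrinks as sensory noise (obs_var)
    grows, so noisier percepts are pulled harder toward the prior.
    """
    w = prior_var / (prior_var + obs_var)          # weight on the observation
    post_mean = w * obs + (1 - w) * prior_mean     # precision-weighted average
    post_var = 1.0 / (1.0 / obs_var + 1.0 / prior_var)
    return post_mean, post_var

# Example: true location 10, prior centered at 0. With equally precise
# prior and sensory evidence, the estimate lands halfway between them.
mean, var = bayes_estimate(obs=10.0, obs_var=4.0, prior_mean=0.0, prior_var=4.0)
print(mean)  # 5.0 -- biased toward the prior mean
```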

The researchers introduce a paradigm that measures these priors by iterating a simple memory task in which the response of one participant becomes the stimulus for the next.
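The logic of this serial-reproduction chain can be simulated under simple assumptions (this is a sketch, not the authors' code): each simulated participant observes the previous response with Gaussian noise, forms a Bayesian estimate under a shared Gaussian prior, and responds with the posterior mean. Over iterations the chain drifts away from the initial stimulus and toward the shared prior, which is how the paradigm exposes it.

```python
import random

# Assumed toy parameters: a shared Gaussian prior and Gaussian sensory noise.
PRIOR_MEAN, PRIOR_VAR = 0.0, 1.0
NOISE_VAR = 1.0

def one_participant(stimulus, rng):
    """Observe the stimulus noisily, then respond with the posterior mean."""
    percept = stimulus + rng.gauss(0.0, NOISE_VAR ** 0.5)  # noisy encoding
    w = PRIOR_VAR / (PRIOR_VAR + NOISE_VAR)                # Bayesian weight
    return w * percept + (1 - w) * PRIOR_MEAN              # posterior mean

rng = random.Random(0)
x = 8.0  # initial stimulus placed far from the prior mean
for _ in range(20):
    x = one_participant(x, rng)  # this response is the next stimulus

# The deterministic pull of 8.0 decays geometrically; after 20 steps the
# chain fluctuates around the prior mean rather than the starting point.
print(x)
```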

This approach reveals an unprecedented richness and level of detail in these priors, suggesting a different way to think about biases in spatial memory. The prior distribution over locations in a visual scene can reflect the selective allocation of coding resources to different visual regions during encoding ("efficient encoding"). This selective allocation predicts that locations in the scene will be encoded with variable precision, in contrast to previous work that has assumed fixed encoding precision regardless of location.
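A toy illustration of this prediction, under assumptions of my own rather than the paper's model: if coding resources follow the prior, encoding noise is lower where prior density is high, so those regions show both smaller bias toward the prior and finer discrimination, while sparse regions show the opposite.

```python
import numpy as np

# Hypothetical prior density over five scene regions (assumed values).
prior_density = np.array([0.05, 0.15, 0.60, 0.15, 0.05])

# Assumption: encoding variance is inversely proportional to allocated
# resources, which track the prior density.
base_var = 1.0
enc_var = base_var / (prior_density / prior_density.max())

# The Bayesian shrinkage weight toward the prior grows with encoding
# variance, so low-density regions are both noisier and more biased.
prior_var = 1.0
bias_weight = enc_var / (enc_var + prior_var)  # fraction of pull toward prior

print(np.round(enc_var, 2))      # lowest in the high-density center region
print(np.round(bias_weight, 2))  # bias and imprecision covary, as in the text
```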

They demonstrate that perceptual biases covary with variations in discrimination accuracy, a finding that aligns with simulations of their efficient encoding model but not with the traditional fixed encoding view.

This work demonstrates the promise of using nonparametric data-driven approaches that combine crowdsourcing with the careful curation of information transmission within social networks to reveal the hidden structure of shared visual representations.

PNAS
Source: https://doi.org/10.1073/pnas.2012938118