Date:
Location:
Patrick Cavanagh, PhD
Professor, Université Paris Descartes http://cavlab.net
Professor, Dartmouth College http://pbs.dartmouth.edu/people/patrick-cavanagh
Professor Emeritus, Harvard University http://visionlab.harvard.edu/Members/Patrick/cavanagh.html
Topic: Where
Abstract: How do we know where things are? Recent results indicate that an object’s location is constructed at a high level, where the motions of the object, our eyes, head, and body are discounted, much as perceived color discounts the illumination or inferred intentions discount the situation. In this coding of position, locations can be updated predictively, so that an object is represented at its expected location even before it arrives there. These predictions operate differently for eye movements and for perception, establishing two distinct formats for this high-level coding. Perception and action draw on one format, while eye movements and spatial attention operate on the other. This means that, at times, our eyes or our attention are directed not to where we see a target but to another, often quite distant, location. In contrast, other actions (grasping, pointing), unlike eye movements, are reliably and reassuringly directed to the locations where we actually see their targets.