I am interested in vision and decision making. Here is a selection of my past and current projects. See also my publications.

Gambling in the visual periphery: analysis of the human ability to judge visual uncertainty

Bayesian decision theory, visual uncertainty, ideal observer, eye movements

Vision is accurate at the center of the retina, where photoreceptors are densely packed, but becomes noisy as eccentricity (the distance from the center of the retina) increases. We therefore move our eyes constantly to gather high-resolution information from our visual surroundings. In a series of two studies, my collaborators and I explored how good humans are at estimating this acuity fall-off, and whether the specific way in which acuity drops is taken into account when planning eye movements.
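The acuity fall-off can be made concrete with a standard toy model in which visual noise grows linearly with eccentricity (the common E2 parameterization). This is a minimal sketch with illustrative constants, not the model or values fit in the studies:

```python
def visual_noise_sd(eccentricity_deg, sd_fovea=0.5, e2=2.0):
    """Standard deviation of visual noise as a function of retinal
    eccentricity (degrees), growing linearly with eccentricity:
    sd(E) = sd_fovea * (1 + E / e2). Constants are illustrative."""
    return sd_fovea * (1.0 + eccentricity_deg / e2)

def reliability(eccentricity_deg):
    """An ideal observer weights each visual sample by its
    reliability (inverse variance), so a foveal sample dominates
    a peripheral one."""
    return 1.0 / visual_noise_sd(eccentricity_deg) ** 2
```

Under such a model, where a saccade lands determines how informative the next glimpse will be, which is why an accurate internal estimate of the fall-off matters for eye-movement planning.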

Study 1. Gambling. PLoS Computational Biology: paper

Study 2. Optimality of saccade planning. Under review: Cosyne abstract.

Eye-hand coordination

eye-hand coordination, motor decision, visual search, eye movement, statistical decision theory, Bayesian decision theory

Hand movements are usually yoked to the eyes: the eyes explore, and the hand then reaches. In many situations, though (driving, reading, typing, playing music, etc.), the two systems should be decoupled for greater efficiency. We found that humans are able to decouple the two systems, but do so in a suboptimal way.


Sequential decision making. When should I stop searching?

foraging, sequential decision making, visual search, statistical decision theory, Bayesian decision theory

We used visual search to study the stopping rules humans apply when they forage for a prize. We found that humans largely undersample, underestimating what wonders the world has to offer.
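To make the stopping problem concrete, here is a toy version (my own sketch, not the task used in the study): offers drawn from a known Uniform(0, 1) distribution with a fixed cost per sample, and the optimal threshold ("reservation value") that balances the expected gain of one more sample against its cost. Undersampling corresponds to behaving as if the threshold were lower than this optimum, accepting too early.

```python
import math
import random

def reservation_value(cost):
    """Optimal stopping threshold for offers ~ Uniform(0, 1) at a
    per-sample cost. Keep searching while the expected improvement
    from one more draw, E[max(X - R, 0)] = (1 - R)^2 / 2, exceeds
    the cost; the optimal R solves (1 - R)^2 / 2 = cost."""
    return 1.0 - math.sqrt(2.0 * cost)

def search(cost, rng=None):
    """Sample offers until one exceeds the reservation value;
    return (accepted offer, number of samples drawn)."""
    rng = rng or random.Random(0)
    r = reservation_value(cost)
    n = 0
    while True:
        n += 1
        offer = rng.random()
        if offer >= r:
            return offer, n
```

The same threshold logic carries over, with different distributions and costs, to real-life search problems.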


I am also studying the potential implications of this research for real-life search, such as job search, negotiation, and bargaining.

Do we perceive motion on the retina or in the world?

motion perception, reference frames, visual search, smooth pursuit, representation of space

Motion pops out, which is useful because moving objects are potential threats. Given the rapidity of this detection, one might think that it occurs at a low level of processing -- the retina or V1 -- but motion at those stages is not yet compensated (i.e., not corrected for eye movements -- see here for more details). Where, then, does pop-out occur: at a low level, on uncompensated motion, or at a high level, on motion in the world? In Morvan & Wexler (2005), by coupling object motion to eye motion, we created stimuli that moved fast on the retina but slowly in an eye-independent reference frame, or vice versa. In the first 100 ms after stimulus onset, motion detection is dominated by retinal motion, uncompensated for eye movements. As early as 130 ms, compensated signals become available: objects that move slowly on the retina but fast in an eye-independent frame are detected as easily as those that move fast on the retina.
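The stimulus construction relies on the elementary frame-of-reference relation between world, eye, and retinal velocities, sketched here in one dimension (the example velocities are illustrative):

```python
def retinal_velocity(world_velocity, eye_velocity):
    """Image velocity on the retina: the object's velocity in an
    eye-independent (world) frame minus the eye's velocity.
    All velocities in deg/s along a single axis, for simplicity."""
    return world_velocity - eye_velocity
```

Yoking a stimulus to the eye (world velocity close to eye velocity) makes it nearly stationary on the retina while it moves fast in the world; conversely, a stimulus fixed in the world moves fast on the retina during an eye movement. These are the two classes of stimuli contrasted above.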

Compensation for smooth pursuit eye movements

motion perception, smooth pursuit, extra-retinal signal, non-linear model of compensation

When we move our eyes, for instance to follow a moving object, the background slips on the retina with a velocity equal and opposite to that of the eyes. To perceive the real position and speed of objects in space, this retinal slip must be compensated for. The classical model of compensation states that an extraretinal signal encodes the eye velocity and is subtracted from the retinal image motion. This linear model has been widely used and is considered the reference. In Morvan & Wexler (2009), we show that this model does not explain compensation for motion that is not collinear with the pursuit. More details here.
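The classical linear model can be written in one line. The sketch below adds a gain parameter on the extraretinal signal (gain = 1 means full compensation; empirically the gain is below 1, which produces effects such as the Filehne illusion, where a stationary background appears to move during pursuit). The function name and sign convention are mine, for illustration:

```python
def perceived_velocity(retinal_slip, eye_velocity, gain=1.0):
    """Classical linear compensation during smooth pursuit: an
    extraretinal signal estimates eye velocity (scaled by `gain`)
    and is added back to the retinal slip -- equivalent to
    subtracting the slip caused by the eye movement. A stationary
    background slips at -eye_velocity, so with gain = 1 it is
    correctly perceived as stationary."""
    return retinal_slip + gain * eye_velocity
```

Because this model acts along a single axis tied to the pursuit direction, it has nothing to say about motion components orthogonal to the pursuit, which is where its predictions break down in Morvan & Wexler (2009).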