The history of eye-movement research extends back at least to 1794, when Erasmus Darwin (Charles's grandfather)
published Zoonomia, which includes descriptions of eye movements due to self-motion. But research on eye movements was
restricted to the laboratory for 200 years, until Michael Land built the first wearable eyetracker at the University of
Sussex and published the seminal paper "Where we look when we steer" [1]. In the intervening centuries, we learned a
tremendous amount about the mechanics of the oculomotor system and how it responds to isolated stimuli, but virtually
nothing about how we actually use our eyes to explore, gather information, navigate, and communicate in the real world.
Inspired by Land's work, we have been working to extend knowledge in these areas by developing hardware, algorithms,
and software that have allowed researchers to ask questions about how we actually use vision in the real world. Central
to that effort are new methods for analyzing the volumes of data that come from the experiments made possible by the
new systems. We describe a number of recent experiments and SemantiCode, a new program that supports assisted
coding of eye-movement data collected in unrestricted environments.