Conventional CCD or CMOS cameras create complete two-dimensional images of a scene. In contrast, event cameras generate signals only at the pixels and times where changes in brightness occur. Reported changes come in the form of a stream of events, which are digital packets containing the pixel’s location, a time stamp, and additional information, such as the direction of the change. While successive images of conventional frame-based cameras may be partially redundant, the measurements of event cameras are sparse. They do not contain any static information, and thus adapt to the dynamics of an observed scene. Event cameras also offer high temporal precision without the high bandwidth that would be required for comparable high-speed frame-based cameras. We have developed a 3D measurement setup that consists of a pair of event cameras in stereo configuration and a specialized projector. We designed the projector so that it probes the measurement volume by means of a horizontally moving, vertically oriented contrast edge, i.e., a sharp transition between two levels of illumination. We argue that this method of structured illumination is well adapted to the sparse sampling and radiometric properties of event cameras. We present 3D measurements, performed within ∼200 ms, with quantified uncertainties to demonstrate the capabilities of our setup, which enabled us to reconstruct an entire scene with ∼55,000 3D points. At a working distance of ∼700 mm, we achieved a spatial uncertainty of ∼0.6 mm. These results are achieved through triangulation of temporally corresponding events without any smoothing or similar post-processing. Based on our work and previous research, we suggest areas of future investigation in the field of event-based 3D and 4D (3D + time) measurements.
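The core idea of triangulating temporally corresponding events can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes rectified stereo cameras (so corresponding events share a row and depth follows from disparity), and the event fields, tolerances, and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: float       # pixel column
    y: float       # pixel row
    t: float       # time stamp in seconds
    polarity: int  # +1 brightness increase, -1 decrease

def match_events(left, right, dt_max=1e-4, dy_max=1.0):
    """Pair left/right events of equal polarity whose time stamps agree
    within dt_max and whose rows agree within dy_max (rectified stereo).
    Illustrative tolerances, not the paper's values."""
    pairs, used = [], set()
    for el in left:
        best, best_dt = None, dt_max
        for j, er in enumerate(right):
            if j in used or el.polarity != er.polarity:
                continue
            dt = abs(el.t - er.t)
            if dt <= best_dt and abs(el.y - er.y) <= dy_max:
                best, best_dt = j, dt
        if best is not None:
            used.add(best)
            pairs.append((el, right[best]))
    return pairs

def triangulate(el, er, focal_px, baseline_mm):
    """Depth from disparity for a rectified stereo pair,
    assuming the principal point at the image origin."""
    disparity = el.x - er.x           # positive for points in front of the rig
    z = focal_px * baseline_mm / disparity
    return (el.x * z / focal_px, el.y * z / focal_px, z)
```

Because each event carries its own microsecond-scale time stamp, the temporal tolerance can be made very tight, which is what makes this correspondence search tractable without image-patch matching.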