Assessment of Multi-Camera Calibration Algorithms for Two-Dimensional Camera Arrays Relative to Ground Truth Position and Orientation

28 June 2016
Lab setup of a multi-camera rig for calibration algorithm evaluation
Left: a scene state (central checkerboard position) captured in our dataset from 15 camera viewpoints. Right: three Canon EOS M cameras rigidly affixed to a moving dolly, shown at position 1 of 5.

Camera calibration methods are commonly evaluated using cumulative reprojection error metrics on disparate one-dimensional datasets. To evaluate the calibration of cameras in two-dimensional arrays, assessments need to be made on two-dimensional datasets with constraints on camera parameters. In this study, the accuracy of several multi-camera calibration methods was evaluated on the camera parameters that affect view projection the most.
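Reprojection error, the metric mentioned above, measures how far known 3-D points land from their observed image locations when re-projected through the estimated camera model. A minimal sketch with a synthetic pinhole camera (all parameter values are illustrative placeholders, not values from the dataset):

```python
import numpy as np

# Pinhole projection: world points -> pixel coordinates.
def project(points, K, R, t):
    cam = points @ R.T + t                       # world -> camera frame
    norm = cam[:, :2] / cam[:, 2:3]              # perspective divide
    return norm * np.diag(K)[:2] + K[:2, 2]      # apply focal lengths, principal point

# Illustrative ground-truth parameters (assumed, not from the dataset).
K_true = np.array([[800., 0., 320.],
                   [0., 800., 240.],
                   [0., 0., 1.]])
R = np.eye(3)
t = np.array([0., 0., 5.])                       # board 5 units in front of camera

# A flat 6x4 grid of points standing in for checkerboard corners (z = 0 plane).
obj = np.zeros((24, 3))
obj[:, :2] = np.mgrid[0:6, 0:4].T.reshape(-1, 2)

detected = project(obj, K_true, R, t)            # "observed" corner locations

# A slightly wrong focal-length estimate, as a calibration method might return.
K_est = K_true.copy()
K_est[0, 0] += 8.0
reproj = project(obj, K_est, R, t)

# RMS reprojection error in pixels: the cumulative metric calibration papers report.
rms = np.sqrt(np.mean(np.sum((detected - reproj) ** 2, axis=1)))
print(round(rms, 3))  # → 4.844
```

A low reprojection error does not by itself guarantee accurate extrinsics, which is why the study compares estimated parameters against measured ground-truth positions instead.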

As input data, we used a 15-viewpoint two-dimensional dataset with intrinsic and extrinsic parameter constraints and extrinsic ground truth. The assessment showed that self-calibration methods using structure-from-motion match the intrinsic and extrinsic parameter estimation accuracy of a standard checkerboard calibration algorithm, and surpass a well-known self-calibration toolbox, BlueCCal. These results show that self-calibration is a viable approach to calibrating two-dimensional camera arrays, but improvements to state-of-the-art multi-camera feature matching are necessary to make BlueCCal as accurate as other self-calibration methods for two-dimensional camera arrays.

About the Dataset

The main dataset contains images of 17 checkerboard configurations taken from 15 coplanar viewpoints, plus two additional "checkerboard-less" scene states. The intended use of the dataset is to facilitate evaluation and research of multi-camera calibration methods.

Capture was performed with three Canon EOS M cameras mounted in a rigid vertical stack on a calibrated dolly. For each scene configuration, the camera triplet was translated horizontally to 5 pre-set positions, and images were taken at each position (resulting in 15 viewpoints). Position measurement data is included in the dataset archive.
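The 15-viewpoint layout (3 stacked cameras × 5 dolly stops) can be sketched as a simple enumeration; the spacing values below are assumed placeholders, not the measured positions shipped in the dataset archive:

```python
# Enumerate the 15 viewpoints as (camera, dolly position) pairs.
CAMERAS = 3          # vertical stack of Canon EOS M bodies
POSITIONS = 5        # pre-set horizontal dolly stops
CAM_SPACING = 0.10   # assumed vertical camera spacing in metres (placeholder)
DOLLY_STEP = 0.10    # assumed horizontal dolly step in metres (placeholder)

viewpoints = [
    {"id": cam * POSITIONS + pos,
     "x": pos * DOLLY_STEP,    # horizontal dolly translation
     "y": cam * CAM_SPACING}   # vertical offset within the camera stack
    for cam in range(CAMERAS)
    for pos in range(POSITIONS)
]
print(len(viewpoints))  # → 15
```

Because the same three physical cameras are reused at every dolly stop, each column of five viewpoints shares one set of intrinsics, which is the kind of parameter constraint the dataset is designed to exercise.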


This dataset may be used for academic and research purposes. If you use this dataset, please cite:

@inproceedings{dima2016assessment,
  title={Assessment of Multi-Camera Calibration Algorithms for Two-Dimensional Camera Arrays Relative to Ground Truth Position and Direction},
  author={Dima, Elijs and Sjöström, Mårten and Olsson, Roger},
  booktitle={3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON), 2016},
  year={2016}
}


The dataset can be downloaded from here (1.3 GB).