Camera Self-Calibration

I am something of a "calibration expert": I recover camera parameters simply by looking at the environment. This can be done at no cost, without assuming anything about either the cameras or the environment. Calibration is the preliminary and crucial task before any work with images, view synthesis, virtual and augmented reality, and more, so my work may interest people from any of these areas.

Here are some contributions I have made so far:

 1. Developed a two-stage non-linear method for camera self-calibration using points as features.

 2. Developed a robust method to handle outliers in the self-calibration procedure.

 3. Developed an ellipse-based self-calibration algorithm.

        I would like to mention that ellipses, unlike points and lines, had not been used for self-calibration before. The new method proved that camera self-calibration can be achieved using ellipses as features. A paper based on this research was published at the IEEE International Conference on Robotics and Automation, May 2001.

       Since the algorithm is based on ellipse-to-ellipse correspondences rather than point-to-point correspondences, it does not suffer from mismatches when determining point matches between different images.

       Both linear and non-linear methods are discussed and implemented in the C language. They have been validated on a great deal of synthetic and real data.

       The following are some figures and images from the experiments.
 

Image 1 Image 2 Image 3
