State-of-the-art stitching software can stitch images reasonably well and produce visually pleasing, acceptable results. Most of these tools and libraries estimate camera parameters for the images in a scene by computing point-to-point correspondences between images and then minimizing the distances between them while varying known camera properties such as focal length, field of view, and lens offset. Once the camera parameters are known, we can compute a homography between the images, then warp and place them accordingly in a single panorama.
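To make that pipeline concrete, here is a minimal sketch of the correspondence-then-homography flow using OpenCV. The file names, the Lowe-ratio threshold, and the RANSAC reprojection threshold are illustrative assumptions, not values from any particular product.

```python
import cv2
import numpy as np

img_a = cv2.imread("left.jpg")
img_b = cv2.imread("right.jpg")
gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

# 1. Detect features and compute descriptors in both images.
sift = cv2.SIFT_create()
kp_a, des_a = sift.detectAndCompute(gray_a, None)
kp_b, des_b = sift.detectAndCompute(gray_b, None)

# 2. Match descriptors and keep confident correspondences (Lowe ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_a, des_b, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# 3. Estimate a 3x3 homography from the correspondences with RANSAC.
src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=4.0)

# 4. Warp image A into image B's frame and paste both onto one canvas.
h, w = img_b.shape[:2]
canvas = cv2.warpPerspective(img_a, H, (w * 2, h))
canvas[:h, :w] = img_b
cv2.imwrite("panorama.jpg", canvas)
```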

It is important to note that homography-based stitching techniques minimize error rather than eliminate it. In theory the system above works well for camera rigs that rotate about a single point, but this is almost never the case for real-life rigs, where a small parallax is almost always introduced. When a rig is assembled in practice, the cameras may not be perfectly aligned and lens offsets creep in: aligning everything pixel-perfectly is extremely expensive, parts can shift slightly in transit, and there is rarely time between shots to recalibrate. This slight error, combined with the fact that it is not always possible to compute a single 2D transformation satisfying all point correspondences between images, makes these techniques unreliable for robust image stitching.
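The parallax problem is easy to reproduce synthetically. In the sketch below (all camera numbers are made up), two views are separated by a small translation rather than a pure rotation, and the scene points sit at two different depths; the best-fit homography still leaves pixel-level residuals that no choice of H can remove.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # intrinsics

# 3D points on two planes at different depths: the depth variation is
# exactly what a single homography cannot account for under translation.
pts = np.vstack([
    np.column_stack([rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50), np.full(50, 4.0)]),
    np.column_stack([rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50), np.full(50, 8.0)]),
])

def project(points, t):
    """Pinhole projection of 3D points seen from a camera translated by t."""
    cam = points + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:]

src = project(pts, np.zeros(3))                # reference camera
dst = project(pts, np.array([0.1, 0.0, 0.0]))  # camera offset by a 10 cm baseline

H, _ = cv2.findHomography(src.astype(np.float32), dst.astype(np.float32),
                          cv2.RANSAC, 3.0)
mapped = cv2.perspectiveTransform(src.reshape(-1, 1, 2).astype(np.float32),
                                  H).reshape(-1, 2)
residuals = np.linalg.norm(mapped - dst, axis=1)
print(f"mean residual {residuals.mean():.2f} px, max {residuals.max():.2f} px")
```

With the 10 cm baseline above, points at the two depths shift by different amounts (roughly 20 px versus 10 px), so a single homography can match at most one of them exactly and the printed residuals stay well above zero.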

We created a proprietary framework for more robust image stitching that combines camera properties and positions with local warping. The resulting stitch is smoother: it selectively warps the misaligned sections of a panorama while preserving the aligned sections.

Specific details about our framework are currently private and will be released upon patent filing.
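While the specifics stay under wraps, the general idea of local, mesh-based warping is well known in the stitching literature, so the sketch below illustrates only that generic concept; it is explicitly not our framework. The `local_warp` helper, the grid spacing, and the Gaussian falloff are all hypothetical choices: a coarse displacement field is pulled toward per-feature alignment residuals with a spatial falloff, so misaligned regions move while well-aligned regions keep an identity mapping.

```python
import cv2
import numpy as np

def local_warp(image, pts, residuals, grid=32, sigma=80.0):
    """Backward-warp `image`: pixels near pts[i] sample the source offset by
    residuals[i], with a Gaussian falloff so distant regions are untouched."""
    h, w = image.shape[:2]
    # Coarse displacement field on a grid (cheap), one 2D offset per vertex.
    ys, xs = np.mgrid[0:h:grid, 0:w:grid].astype(np.float32)
    disp = np.zeros(ys.shape + (2,), np.float32)
    for (px, py), (rx, ry) in zip(pts, residuals):
        # Weight decays with distance from the misaligned feature, so
        # already-aligned areas of the panorama are left essentially fixed.
        wgt = np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
        disp[..., 0] += wgt * rx
        disp[..., 1] += wgt * ry
    # Upsample to full resolution and resample the image along the field.
    full = cv2.resize(disp, (w, h), interpolation=cv2.INTER_LINEAR)
    map_x, map_y = np.mgrid[0:h, 0:w][::-1].astype(np.float32)
    return cv2.remap(image, map_x + full[..., 0], map_y + full[..., 1],
                     cv2.INTER_LINEAR)
```

In practice the residuals driving such a warp would be the per-feature errors left over after the global homography alignment, as measured in the previous sketch.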
