UTimelapse: a tool for creating high-quality timelapse videos
May 4, 2014
Here's yet another fun application: UTimelapse, a tool that creates timelapse videos from a sequence of still images. Its main goal is to produce high-quality videos without requiring high-end equipment. The tool addresses three common problems in timelapse photography: camera shake, flickering, and single-frame artifacts.
Before going into details, let's take a look at the demos:
Camera Shake Reduction
The camera shake reduction feature reduces camera shake by estimating a homography between consecutive frames from point correspondences and applying transforms to align the frames. To compute the point correspondences, the app provides two options: a fast mode and a robust mode. The fast mode detects a set of good features in a reference frame, then estimates the optical flow between consecutive frames to locate those features in the next frame. The robust mode uses the Scale Invariant Feature Transform [1] to describe keypoint features, performs an approximate nearest-neighbor search (OpenCV's FLANN-based matcher) to match features between consecutive frames, and uses Random Sample Consensus [2] to eliminate outlying matches. The robust mode tends to produce more stable results, especially when a tripod is not used, although it is more computationally expensive.
Using the two sets of matching keypoints between consecutive frames, the program estimates a homography between the frames and checks whether the estimated homography matrix is stable. A homography matrix is labeled unstable if its determinant is either close to zero or too large. An unstable homography is usually observed at scene changes, where there are no valid point correspondences between consecutive frames. The app therefore interprets an unstable homography matrix as a new scene and applies no transformation. Otherwise, it warps the current frame to align it with the previous frame using an affine or perspective transform, depending on user preference. The app also provides an option to zoom in and crop away the margins introduced by warping.
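The determinant-based stability check might look like the following; the threshold values here are guesses for illustration, not the ones the app actually uses.

```python
import numpy as np

def is_stable_homography(H, det_min=0.1, det_max=10.0):
    """Label a 3x3 homography unstable when its determinant is near zero
    or very large -- a sign of a scene change rather than camera motion.
    Thresholds are illustrative assumptions, not the app's real values."""
    if H is None:
        return False  # estimation failed: treat as unstable
    det = abs(np.linalg.det(H))
    return det_min < det < det_max
```

When the check fails, the frame is treated as the start of a new scene and passed through unwarped.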
Deflickering
The deflickering feature works as a brightness and contrast normalizer that reduces frame-to-frame illumination and contrast differences. Two deflickering modes are available in the software: global and local. The global mode computes the mean and standard deviation of pixel intensities for each frame and keeps them in circular buffers. The circular buffers act as running-average filters on the mean and standard deviation of the intensities, smoothing out brightness and contrast fluctuations. The local mode smooths local variations in image brightness, such as cloud shadows, under the assumption that flicker occurs at lower spatial frequencies. To achieve smooth transitions at those low frequencies, it keeps and blends the discrete cosine transform (DCT) coefficients of consecutive frames in a circular buffer. The local mode is experimental and not available in the stable release of the software.
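The global mode can be sketched as follows: each frame's mean and standard deviation are pushed into fixed-length buffers, and the frame is remapped so its statistics match the running averages. The buffer length is an assumption on my part.

```python
from collections import deque
import numpy as np

class GlobalDeflicker:
    """Sketch of the global deflicker: normalize each frame's brightness
    and contrast toward the running average of recent frames.
    The window size is an illustrative assumption."""
    def __init__(self, window=15):
        self.means = deque(maxlen=window)  # circular buffer of frame means
        self.stds = deque(maxlen=window)   # circular buffer of frame stds

    def process(self, frame):
        frame = frame.astype(np.float64)
        mu, sigma = frame.mean(), frame.std()
        self.means.append(mu)
        self.stds.append(sigma)
        # Targets are the running averages over the circular buffers.
        target_mu = np.mean(self.means)
        target_sigma = np.mean(self.stds)
        # Affine intensity remap: match the frame's mean/std to the targets.
        out = (frame - mu) / max(sigma, 1e-6) * target_sigma + target_mu
        return np.clip(out, 0, 255).astype(np.uint8)
```

Because `deque(maxlen=...)` discards the oldest entry automatically, the targets track slow lighting trends while flattening frame-to-frame flicker.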
Temporal Smoothing and Artifact Removal
This feature smooths frames temporally to attenuate temporal jitter and spatial noise. It implements temporal median and Gaussian filters using a buffer that keeps the frames before and after the current frame, and computes the median or the Gaussian-weighted average of each pixel across those consecutive frames.
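A minimal sketch of the temporal median filter, operating on an in-memory list of frames (the actual tool presumably streams through a bounded buffer; the window radius here is an assumption):

```python
import numpy as np

def temporal_median(frames, radius=1):
    """Replace each frame with the pixel-wise median over a window of
    2*radius+1 neighboring frames; edge frames use a shrunken window.
    Sketch only -- window size and edge handling are assumptions."""
    out = []
    n = len(frames)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        stack = np.stack(frames[lo:hi]).astype(np.float64)
        out.append(np.median(stack, axis=0).astype(frames[i].dtype))
    return out
```

The Gaussian variant would replace `np.median` with a Gaussian-weighted average along the time axis; the median is the one that removes single-frame artifacts (a bird, a dust speck) outright, since an outlier appearing in only one frame of the window never becomes the median.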
I would like to note that most of the methods used here are well known, so I do not claim novelty for the most part. The DCT-based deflicker, on the other hand, has not, to the best of my knowledge, been proposed in the literature before.
[1] Lowe, David G. "Distinctive image features from scale-invariant keypoints." International Journal of Computer Vision 60.2 (2004).
[2] Fischler, Martin A., and Robert C. Bolles. "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography." Communications of the ACM 24.6 (1981).