Video motion stabilization with awareness of lens projection

Dewobble is a library for video motion stabilization and camera projection changes. It is written in C++, with headers provided for C++ and C.

It is named dewobble because its accurate camera models avoid the wobbling effect produced by many other video stabilization libraries when they are applied to videos with a wide field of view.


See the detailed Doxygen documentation.


See the repository on Sourcehut.


Features

  • Accurate motion detection and wobble-free output due to accurate camera models
  • Option to choose a different camera projection for the output
  • Single pass of pixel interpolation to perform both projection change and motion stabilization
  • Choice of pixel interpolation algorithms including bilinear, cubic, and Lanczos4
  • Options to simulate a fixed camera position or to skip stabilization and only change projection
  • Fast processing with the majority of work done in OpenCL
  • Adjust output dimensions and focal length to include as much or as little of the input as you want
  • Robust motion estimation even in relatively blurry or low contrast images
  • Interpolation of camera motion if detection fails for some frames


Performance

The majority of processing (with some exceptions, depending on settings) is done using OpenCL. Depending on the OpenCL implementation, this gives good performance with minimal CPU load. On weaker integrated graphics the process is usually GPU bound, whereas on discrete GPUs it tends to be CPU bound.


Hardware and OpenCL implementation                         1920x1440, default settings   1920x1440, no stabilization
Intel i7-8750H (Core/Xeon runtime)                         22fps                         42fps
Intel UHD Graphics 630 (intel-compute-runtime) + i7-8750H  51fps                         94fps
Nvidia GTX 1050 Ti mobile (proprietary driver) + i7-8750H  88fps                         374fps
AMD Radeon RX 5600XT (ROCm or AMDGPU-PRO) + i3-8100        61fps                         165fps

These benchmarks were collected using the in-development dewobble_opencl FFmpeg filter. For GPUs other than Intel (which has a zero-copy VA-API to OpenCL interop), the times include copying the video frames from the GPU (where the input video is decoded) to the CPU, and then back to the GPU again for the filter.

The first test is for the default settings including stabilization, and the second for projection change only (which happens entirely in OpenCL).



Build and run

Build dependencies

  • OpenCV - used extensively, especially for feature detection, tracking, and homography estimation.
  • gram_savitzky_golay - used for camera path optimisation

Runtime dependencies

  • A working implementation of OpenCL



C++ example

#include <dewobble/filter_threaded.hpp>

#include <cmath>
#include <memory>

// Note: the projection constant names below are illustrative; check
// camera.hpp for the exact enum values.
dewobble::Camera input_camera(
    dewobble::PROJECTION_EQUIDISTANT_FISHEYE, // input projection
    145.8 * M_PI / 180,                       // diagonal field of view
    1920,                                     // width
    1440,                                     // height
    (1920 - 1.0) / 2,                         // focal point x
    (1440 - 1.0) / 2);                        // focal point y
dewobble::Camera output_camera(
    dewobble::PROJECTION_RECTILINEAR,         // output projection
    145.8 * M_PI / 180,
    1920,
    1440,
    (1920 - 1.0) / 2,
    (1440 - 1.0) / 2);
auto stabilizer =
    std::make_shared<dewobble::StabilizerSavitzkyGolay>(input_camera, 60, 30);
dewobble::FilterConfig config(input_camera, output_camera, stabilizer);
dewobble::FilterThreaded filter(config);
while (...) {
    cl_mem input_frame = filter.get_input_frame_buffer();
    // ... put data in input frame
    filter.push_frame(input_frame, NULL);
    while (filter.frame_ready()) {
        cl_mem output_frame = NULL, input_frame = NULL;
        filter.pull_frame(&output_frame, &input_frame, NULL);
        // ... retrieve data from output frame
    }
}


C example

#include <dewobble/filter_threaded.h>

#include <math.h>

DewobbleCamera input_camera = NULL, output_camera = NULL;
DewobbleStabilizer stabilizer = NULL;
DewobbleFilterConfig config = NULL;
DewobbleFilter filter = NULL;

// Note: the projection constant names below are illustrative; check
// camera.h for the exact enum values.
input_camera = dewobble_camera_create(
    DEWOBBLE_PROJECTION_EQUIDISTANT_FISHEYE, // input projection
    145.8 * M_PI / 180,                      // diagonal field of view
    1920,                                    // width
    1440,                                    // height
    (1920 - 1.0) / 2,                        // focal point x
    (1440 - 1.0) / 2);                       // focal point y
output_camera = dewobble_camera_create(
    DEWOBBLE_PROJECTION_RECTILINEAR,         // output projection
    145.8 * M_PI / 180,
    1920,
    1440,
    (1920 - 1.0) / 2,
    (1440 - 1.0) / 2);
stabilizer = dewobble_stabilizer_create_savitzky_golay(input_camera, 60, 30);
config = dewobble_filter_config_create(input_camera, output_camera, stabilizer);
filter = dewobble_filter_create_threaded(config);
while (...) {
    cl_mem input_frame = dewobble_filter_get_input_frame_buffer(filter, NULL);
    // ... put data in input frame
    dewobble_filter_push_frame(filter, input_frame, NULL);
    while (dewobble_filter_frame_ready(filter)) {
        cl_mem output_frame = NULL, input_frame = NULL;
        dewobble_filter_pull_frame(filter, &output_frame, &input_frame, NULL);
        // ... retrieve data from output frame
    }
}

Input/output format

Currently the only accepted input and output is an OpenCL buffer containing an NV12 image with full range BT.709 colours. To assist with tracking other data associated with frames, it is also possible to attach a void * pointer called extra to each frame.

Choosing an OpenCL platform

The filter config class can be used to configure the OpenCL context and device used by the filter. By default, the OpenCL platform is chosen by OpenCV. The configured context and device must match those of the buffers you pass in.

Note: OpenCV uses a global/implicit OpenCL context which is local to the current thread. If you use the threaded variant of the filter, all usage of OpenCV will occur in a separate thread, and will therefore not interfere with the use of OpenCV in your application threads. The API is the same, although the threaded variant will keep one extra frame in the pipeline to assist with keeping the worker thread busy.

Supported camera projection models


Rectilinear

This is the most commonly used camera projection, and it has the property that straight lines in the real world appear straight in the image. Dewobble supports rectilinear projections with a configurable field of view and focal point.

Equidistant fisheye

This is a popular projection used in very wide angle lenses. If desired it is able to project the entire sphere of potential real world object angles into a circular image. Dewobble supports equidistant fisheye projections with a configurable field of view and focal point.

Example: GoPro Hero 5 Black

FoV setting   Stabilization   Diagonal FoV
4x3 Wide      disabled        145.8°
4x3 Wide      enabled         131.5°
16x9 Wide     disabled        127.9°
16x9 Wide     enabled         109.5°

Measuring the camera field of view

If you don't know the field of view of your camera, you can measure it. In order to work with dewobble, it will need to have a supported type of projection.

Note that the following will affect the measurement:

  • The lens used.
  • The zoom level.
  • Any settings on the camera related to the projection or field of view.
  • Any built-in stabilization on the camera (some cameras apply dynamic zooming/cropping when stabilization is enabled!).
  • The refractive index of any gas or liquid that the camera is submerged in (e.g. water).

Physical method

  1. Set up the camera at a fixed position facing a wall.
  2. Mark the positions on the wall that are shown at two diagonally opposite corners of the image.
  3. Measure the three pairwise distances between those two points and the camera sensor, and use trigonometry (e.g. the cosine rule) to calculate the angle subtended at the camera sensor by the two points. This is the diagonal field of view.

Programmatic method

  1. Compile the OpenCV camera_calibration example and familiarise yourself with its operation.
  2. Configure it to use the fisheye model or not depending on the projection of your camera.
  3. Fix the focal point at the center unless you suspect that your camera is not centered.
  4. Fix the calibration coefficients to 0 (Dewobble does not support these).
  5. Run the calibration.
  6. Read the measured focal length from the <camera_matrix /> element. The horizontal and vertical focal lengths should match closely, and be at position (0, 0) and (1, 1) in the matrix.
  7. Convert this focal length (in pixels) to the diagonal field of view (in radians). For rectilinear projections this is 2*atan(sqrt(width^2+height^2)/(2*focal_length)). For equidistant fisheye projections it is sqrt(width^2+height^2)/focal_length.

Supported camera path smoothing algorithms

Savitzky-Golay

Use a Savitzky-Golay filter to find a smooth camera path. The window size is adjustable, expressed as the number of frames before/after each frame that are considered.


Fixed

Fix the camera orientation as it was in the first frame.


None

Do not apply motion stabilization (perform only camera projection changes).


License

GPL version 3 or later


Contributing

Contributions are welcome and encouraged!

For bug reports, please make a ticket on the issue tracker:

To send patches, please post to the mailing list:

For more complex changes please open a ticket to discuss the idea.