The Dark Room

Pictures were stacking up, and a solution had to be found for processing them into films and removing as much of the extraneous camera motion as possible.

Up until now I had been using FFmpeg to assemble image sequences into films. Its documentation is very poor, but something can usually be made to work without pixellating the whole result.
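For orientation, the basic assembly step is a one-liner along these lines (a minimal sketch; the filename pattern, the 25 fps frame rate and the libx264 codec are my assumptions, not a record of the exact command):

    # assemble numbered JPEGs (img0001.jpg, img0002.jpg, ...) into a film
    ffmpeg -framerate 25 -i img%04d.jpg -c:v libx264 -pix_fmt yuv420p film.mp4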

Not many video processing programs provide digital image stabilisation, but I did find that VirtualDub had a filter that could be used. The only problem with this program was that it was virtually (get it!) impossible to have it compress the video with a codec; as a result, uncompressed video had to be produced and then recoded, an entirely unnecessary extra step.

Oh, and before we start, there was the problem of timestamps to be solved. It would be nice if the video had timestamps on it, just to give a feeling for time as the kayak tour proceeds. Now, you could try placing the timestamps (read out from the EXIF data) directly onto the images as they were being processed into the video, or shortly before that step. That way, transparency, colour, size, position and so on can be controlled to a large degree. The only problem is that the subsequent stabilisation would shift the timestamps around, leaving them all over the place like the dog’s breakfast.
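For illustration, reading a timestamp out of the EXIF data and burning it straight into a frame could look like this (exiftool and FFmpeg’s drawtext filter are my choices for the sketch, not necessarily what was used here; drawtext needs an FFmpeg build with libfreetype, and the date is a placeholder):

    # read the capture time of one image (prints e.g. 2012:06:17 14:23:05)
    exiftool -s3 -DateTimeOriginal img0001.jpg

    # burn a timestamp into the image, with control over position,
    # size, colour and transparency (here white at 60% opacity)
    ffmpeg -i img0001.jpg -vf "drawtext=text='17.06.2012 14\:23':x=20:y=h-40:fontsize=28:fontcolor=white@0.6" stamped0001.jpg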

As it turns out, VirtualDub also possesses a filter for placing timestamps onto the finished film by means of SubStation Alpha (SSA) subtitles. Instead of stamping the timestamps onto the images at the beginning, the timestamp data is collected and written to an SSA script file, the video is processed, and the timestamps are added in the final stage. SSA’s syntax is highly redundant, but basically you define a style (font, colour, size, shadow, etc.), then a sequence of events, each with a starting and finishing time, the style to use and the text you want in the subtitle, and save the result as a text file. The only drawback is that the ability to control transparency is lost, so you end up with some pretty determined timestamps on the images, but that’s as good as it gets.
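A minimal SSA script of this kind looks roughly as follows (the style values, times and texts are placeholders of mine, one Dialogue event per stretch of frames):

    [Script Info]
    ScriptType: v4.00
    PlayResX: 1360
    PlayResY: 1020

    [V4 Styles]
    Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, TertiaryColour, BackColour, Bold, Italic, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, AlphaLevel, Encoding
    Style: Stamp,Arial,28,&HFFFFFF,&HFFFFFF,&H000000,&H000000,0,0,1,1,2,1,30,30,30,0,0

    [Events]
    Format: Marked, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
    Dialogue: Marked=0,0:00:00.00,0:00:00.40,Stamp,,0,0,0,,17.06.2012 14:23:05
    Dialogue: Marked=0,0:00:00.40,0:00:00.80,Stamp,,0,0,0,,17.06.2012 14:23:08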

The whole sequence is not easy and moderately error-prone, so for documentation’s sake, here goes. First, FFmpeg assembles the images into a good-quality (24 MB/s) film in a two-pass process; in the course of this, the frames are scaled from 1600*1200 to 1360*1020 and deshaken with what little FFmpeg can offer (edge=3), the timestamps are read out, and the initial and final images are extracted (a hedged sketch of this command line follows below). The video thus produced is loaded into VirtualDub and the Deshaker filter (by Gunnar Thalin) is added (Video → Filters → Add → Deshaker). This is a two-pass filter, meaning it has to be run once (Pass 1) to analyse the motion in the sequence before the stabilisation itself is applied (Pass 2). In Pass 1, just two parameters are changed: “Deep analysis if < 8% of vectors are OK” (instead of 0%), and “Skip frame if < 16% of all blocks are OK” (instead of 8%). Then click on “OK” in two windows to close them, and start a dummy run by pressing the play-O (output playback) button, third from the left at the bottom. Unless you have a Hubble Space Telescope handy you won’t be able to read the on-screen output from Deshaker, but it doesn’t matter all that much.
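The FFmpeg stage just mentioned, reconstructed as a sketch (the filename pattern, frame rate and libx264 are my assumptions, and -b:v 24M is my reading of the “24 MB/s” above; on Windows, NUL replaces /dev/null):

    # pass 1: analyse only, write the rate-control log, discard the output
    ffmpeg -framerate 25 -i img%04d.jpg -vf "scale=1360:1020,deshake=edge=3" \
        -c:v libx264 -b:v 24M -pass 1 -f null /dev/null

    # pass 2: encode for real at the target bitrate
    ffmpeg -framerate 25 -i img%04d.jpg -vf "scale=1360:1020,deshake=edge=3" \
        -c:v libx264 -b:v 24M -pass 2 raw.mp4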

For Pass 2, reset the slider to Frame 0, go back to the filters, choose Deshaker, click “Configure”, then click on “Pass 2” and change the following parameters:

  • Edge compensation: Adaptive zoom average (some borders);
  • Use previous and future frames to fill in borders: Activate (also activates “Soft borders”);
  • Extrapolate colors into border: Activate;
  • Motion smoothness: Zoom: set to 0.

There is no need to remove the filter and add it afresh, as is claimed in some YouTube videos on the subject; just reconfigure the same filter instance for Pass 2.

Click on “OK” to return to the filters. Now add two further filters. First, “Resize”, with the resize set to 100% (i.e. no change). When this filter is configured, click on “Cropping” and crop the video to its final destination size, 1280*720: crop left and right by 40 pixels each (1360 - 2*40 = 1280), the top by perhaps 100 and the bottom by 200 (1020 - 100 - 200 = 720). This removes most of the edges produced by Pass 2 of Deshaker.

Secondly, add the Subtitler filter (it has to be downloaded separately from Avery Lee’s site) and point it at the subtitle file that was created from the timestamps. OK everything and then save the video as uncompressed frames.

The final step is to hand the video back to FFmpeg for suitable compression and extraction of the first and final frames (sketched below), after which it’s on to AVS’s disastrous Video Editor for assembling everything into the final product.
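Hedged as before (the file names and the CRF quality setting are mine):

    # recompress the uncompressed AVI that VirtualDub wrote
    ffmpeg -i stabilised.avi -c:v libx264 -crf 18 -preset slow final.mp4

    # extract the first frame ...
    ffmpeg -i final.mp4 -frames:v 1 first.png
    # ... and (roughly) the last one; -sseof seeks from the end of the input
    ffmpeg -sseof -1 -i final.mp4 -frames:v 1 last.png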

Some videos that have been produced in this way can be found on my YouTube channel:
[youtube=https://www.youtube.com/watch?v=aDzwWTC5ipU]
[youtube=https://www.youtube.com/watch?v=vb4sgr2pO1U]
[youtube=https://www.youtube.com/watch?v=UBGKTkizerk]
[youtube=https://www.youtube.com/watch?v=oW1xP2fZVQ0]

I’d like to thank Gunnar Thalin for the advice he gave on configuring Deshaker, and all the brain donors for transplants.