I have a Canon T3i with a Canon EF 50mm f/1.4 lens that I use for the vast majority of my day-to-day photography. I’ve been using Magic Lantern, a firmware add-on for Canon cameras that provides some interesting (and useful!) functions. One of them is HDR video. Here’s a beautiful example of what can be done:
I tried my hand at processing the HDR video output and was able to get a reasonably nice tone-mapped video:
After the break, you’ll find how I processed the initial Magic Lantern video using MATLAB and exiftool and tone-mapped the output using Luminance HDR.
First, we need to process the video with a function I (poorly) named ‘Step1MovieToInterpolatedFrames.m’ to separate the dark and light frames. The video is loaded using the VideoReader object, and we then check whether the first frame is darker or lighter than the second. This is admittedly a bit of a hack, but given the large differences in exposure, it seems to work well enough. Once we know whether the first frame is light or dark, we loop through all the frames of the movie, saving the real frames (appending an “L” to signify they are “light”) and also interpolating between frames.

Why go to the bother of interpolation? If we simply assume that a given dark frame matches the earlier or later light frame, the tone-mapping will suffer from image registration problems, especially with high-speed motion. Interpolation helps us “smooth” these errors out. Note that ideally we would use a morphing algorithm (similar to the one used by Twixtor), but this is the quickest method for the time being.

After saving each frame, I use exiftool to assign an aperture value. This has nothing to do with the real aperture value; it simply helps Luminance HDR tone-map the composite image. We do this for the dark frames as well, but there we take into account the EV shift in the video’s ISO when writing the aperture value.
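The gist of that first step can be sketched in a few lines. This is a Python stand-in for the MATLAB logic described above, not the post’s actual code: frames are modeled as flat lists of brightness values for illustration, and while `-ApertureValue=` uses exiftool’s real `-TAG=VALUE` syntax, the f-numbers passed in are arbitrary placeholders.

```python
# Python sketch of the light/dark separation, interpolation, and EXIF-tagging
# logic described above (the original implementation is MATLAB).

def first_frame_is_light(frame_a, frame_b):
    # Compare mean brightness: with alternating-ISO exposures the light
    # frame is dramatically brighter, so this crude test suffices.
    mean = lambda f: sum(f) / len(f)
    return mean(frame_a) > mean(frame_b)

def interpolate(frame_a, frame_b, alpha=0.5):
    # Linear blend between two same-exposure frames, so each dark frame can
    # be paired with a light frame "captured" at (nearly) the same instant.
    return [(1 - alpha) * a + alpha * b for a, b in zip(frame_a, frame_b)]

def exiftool_cmd(path, f_number):
    # exiftool's -TAG=VALUE syntax; ApertureValue is a standard writable
    # EXIF tag. The value is fake -- it is only an exposure hint for the
    # HDR assembler, not the lens's real aperture.
    return ["exiftool", f"-ApertureValue={f_number}", path]
```

For the dark frames, the same command is issued with a different f-number reflecting the ISO’s EV shift (each full stop of EV corresponds to a factor of √2 in f-number).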
The second function, ‘Step2FramesToHDRFrames.m’, takes the individual light and dark frames and generates tone-mapped images. We go through every frame and use the Luminance HDR CLI (command line interface) to generate an HDR image and tone-map it (here using the mantiuk08 tone-mapping operator).
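The per-frame call in that second step might be assembled along these lines. Note the flag names below are assumptions about the luminance-hdr-cli interface rather than details from the post; verify them against `luminance-hdr-cli --help` on your install before relying on them.

```python
import subprocess

def tonemap_cmd(light_frame, dark_frame, out_path, tmo="mantiuk08"):
    # Hypothetical luminance-hdr-cli invocation: merge the two exposures,
    # apply the chosen tone-mapping operator, and write the LDR result.
    # Flag names are assumptions, not confirmed against the real CLI.
    return ["luminance-hdr-cli", "-t", tmo, "-o", out_path,
            light_frame, dark_frame]

def tonemap_all(pairs, run=subprocess.run):
    # Loop over (light, dark, output) triples, shelling out once per frame;
    # `run` is injectable so the loop can be exercised without the CLI.
    for light, dark, out in pairs:
        run(tonemap_cmd(light, dark, out), check=True)
```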
And the final function (‘Step3HDRFramesToVideos.m’) compiles the saved frames into three videos: one for the light frames, one for the dark frames, and one for the tone-mapped frames.
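The MATLAB function presumably writes the videos directly; an equivalent way to stitch a numbered image sequence into a video is ffmpeg’s image-sequence input. This is a sketch, not the post’s code, and the filename pattern is a made-up example:

```python
def ffmpeg_cmd(pattern, fps, out_path):
    # ffmpeg image-sequence input: -framerate sets the input rate and -i
    # takes a printf-style filename pattern (e.g. "hdr_%04d.jpg");
    # libx264 + yuv420p produce a widely playable output.
    return ["ffmpeg", "-framerate", str(fps), "-i", pattern,
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out_path]
```

One such command per output video (light, dark, and tone-mapped) reproduces the three files the MATLAB step generates.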
The code can be found at the bottom of the post.
So, what does each of the Luminance HDR tone-mapping operators look like (with its default parameters) when applied to a video? Here’s the source (note that YouTube strips out the alternating frames; you can find the original MOV here):