# Superposing video frames in a single picture

Paper (dead trees) is not the best format to publish video content, but one doesn't always choose her medium. In this post, I will explain how I used FFmpeg and ImageMagick to convert a small video sequence into a superposition of pictures with varying hue and opacity. Apart from figuring out the appropriate command lines, what I found interesting here is the progression from command line to shell script, Makefile, and then some Python scripts. The result is a quick-and-dirty pipeline: a one-shot solution that I can save and reconsider later, if needed.

## Swinging a pendulum

So I had this one-second video showing a pendulum swinging up in four successive swings. I chose to convert it to a single picture in the following way:

• the center of the pendulum is fixed (at the center of the picture);
• opacity (alpha channel) goes from 30% to 100% as time goes by;
• the hue of the pendulum changes during each of the successive swings.

You can see the result below: the pendulum first goes right (green), then left (blue), then further right (magenta) and finally all the way up (red).

## Extracting video frames

A single call to FFmpeg does the trick:

```
ffmpeg -i output.mpg -r 24 -ss 00:00:00.550 tmp/image-%03d.jpg
```

This skips the first 0.55 second of output.mpg and dumps the remaining frames, at 24 frames per second, as tmp/image-001.jpg, tmp/image-002.jpg, and so on.
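If you prefer driving the extraction from Python, the same call can be built as an argument list and handed to subprocess; a minimal sketch (the function name and the subprocess wrapper are mine, assuming ffmpeg is on the PATH):

```python
import subprocess

def extraction_cmd(video, workdir, frame_rate=24, start="00:00:00.550"):
    """Build the ffmpeg command that dumps `video` to numbered JPEG frames."""
    return ["ffmpeg", "-i", video,
            "-r", str(frame_rate),
            "-ss", start,
            "%s/image-%%03d.jpg" % workdir]

# To actually run it:
# subprocess.run(extraction_cmd("output.mpg", "tmp"), check=True)
```

Passing a list (rather than a shell string) avoids any quoting issues with file names.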

## Makefile

It took me some time to figure out the right calls to convert. After devastating my pictures folder several times, I realized I ought to automate the process of starting again from the beginning, so I put everything in a Makefile:

```makefile
SHELL=/bin/bash  # otherwise Make uses /bin/sh
WORKDIR=./tmp
VIDEO=output.mpg
FRATE=24

all: clean ffmpeg transparency

$(WORKDIR):
	mkdir $(WORKDIR)

clean: $(WORKDIR)
	find $(WORKDIR) -mindepth 1 -delete

ffmpeg: clean
	ffmpeg -i $(VIDEO) -r $(FRATE) -ss 00:00:00.550 $(WORKDIR)/image-%03d.jpg

transparency:
	for i in $(WORKDIR)/*.jpg; do \
		convert $$i -fuzz 5% -transparent white $${i/jpg/png}; \
	done
	rm -f $(WORKDIR)/*.jpg
```

## Crossing the alpha channel

The next step was to set the opacity of the pictures in the working folder, from 30% for image-001.png to 100% for the last one. Since it involves computing a fraction for each file, I thought a small Python script would be a good way to go:

```python
import os
import sys

src_dir = sys.argv[1]
all_files = sorted(os.listdir(src_dir))
nb_files = len(all_files)
min_alpha = 30.  # %

for i, fname in enumerate(all_files):
    # interpolate from min_alpha (first frame) to 100% (last frame)
    alpha = min_alpha + i * (100. - min_alpha) / (nb_files - 1)
    fpath = os.path.join(src_dir, fname)
    cmd = "mogrify -alpha on -channel a -evaluate and "
    cmd += "%02d%% %s" % (alpha, fpath)
    os.system(cmd)
    print(cmd)
```

I guess one can perform the same operation within the command line, but I don't have sufficient shell proficiency to do it ;)

Here, I called mogrify, which is the same as convert but stores the result in place instead of saving it to a separate output file. The call for each file is

```
mogrify -alpha on -channel a -evaluate and XX image.png
```

where XX is the desired opacity, from 30% to 100%. -alpha on turns on the alpha channel, while -channel a selects it. -evaluate specifies a pixel-wise operator: for instance, -evaluate set 50% sets all pixels' alpha values to 50%. However, we previously set a transparent background here: using set would override it and put a semi-transparent white instead. With and, 0% (transparent) stays at 0%, and 100% (opaque) becomes the new value.

## Hue make me feel

For this operation, I needed to set the hue of each picture based on its position in the sequence: after identifying (manually) the key frames at which I wanted the hue to change, I resorted to a small Python script again:

```python
import os
import sys

key_frames = [10, 24, 37]

src_dir = sys.argv[1]
all_files = sorted(os.listdir(src_dir))
key_files = ["image-%03d.png" % fid for fid in key_frames]
print(key_files)

hue_step = 120 // len(key_files)
hue = 0

for fname in all_files:
    fpath = os.path.join(src_dir, fname)
    outpath = os.path.join(src_dir, fname + '.hued')
    cmd = "convert -modulate 100,100,%d %s %s" % (hue, fpath, outpath)
    os.system(cmd)
    print(cmd)
    # bump the hue after each key frame
    if fname in key_files:
        hue += hue_step
```

Again, shell power users, feel free to let me know how to do it in Bash/Zsh :)

## Merging all frames

The command line to merge all layers into one is the following:

```
convert tmp/image-* -background none -compose dst_over -flatten output.png
```

The two important parts are the compose method, -compose dst_over, which merges files atop each other in the order of the list (the first ones going under), and -flatten, which merges all layers into one.

## It's the final Makefile

My final Makefile looks like this:

```makefile
SHELL=/bin/bash  # otherwise Make uses /bin/sh
VIDEO=output.mpg
FRATE=24
WORKDIR=./tmp
OUTPUT=pendulum_traj.png

clean:
	rm -rf $(WORKDIR)/

ffmpeg: clean
	mkdir $(WORKDIR)
	ffmpeg -i $(VIDEO) -r $(FRATE) -ss 00:00:00.550 $(WORKDIR)/image-%03d.jpg

transparency:
	for i in $(WORKDIR)/*.jpg; do \
		convert $$i -fuzz 30% -transparent white $${i/jpg/png}; \
	done
	rm $(WORKDIR)/*.jpg

alpha:
	python alpha.py $(WORKDIR)

all: clean ffmpeg transparency alpha
	python hue.py $(WORKDIR)
	convert $(WORKDIR)/image-*.hued -background none -compose dst_over -flatten $(OUTPUT)
```
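For reference, the per-frame interpolations performed by alpha.py and hue.py can be isolated as small pure functions; a minimal sketch (the function names are mine, not from the scripts above):

```python
def frame_alpha(i, nb_files, min_alpha=30.0):
    """Opacity (%) of frame i (0-based): min_alpha for the first frame,
    100% for the last one, linearly interpolated in between."""
    if nb_files < 2:
        return 100.0
    return min_alpha + i * (100.0 - min_alpha) / (nb_files - 1)


def frame_hues(fnames, key_files, hue_step=40):
    """Hue argument (as passed to `convert -modulate 100,100,<hue>`) for
    each frame, in sorted order; the hue is bumped after each key frame."""
    hue, hues = 0, []
    for fname in sorted(fnames):
        hues.append(hue)
        if fname in key_files:
            hue += hue_step
    return hues
```

Keeping the logic pure like this makes it easy to check the boundary cases (first and last frame, frames right after a key frame) without touching a single image.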