
Slit Scanning

Last updated Nov 21, 2024

# History

Slit scanning is a cool photographic technique that captures motion over time by imaging a scene through a narrow slit. Instead of exposing the whole frame at once, it exposes only a thin strip of the image at a time. By moving that slit, you expose different parts of the frame at different moments, recording movement in a way that creates a unique and often surreal effect.
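As a toy illustration (my own sketch, not from the article), the classic slit scan can be written in a few lines of NumPy: column x of the output image is read from input frame x, so horizontal position becomes a time axis. The frames here are synthetic flat images tagged with their own timestamp, just to make the mapping visible.

```python
import numpy as np

# Synthetic frame stack: each frame holds its normalized timestamp.
num_frames, height, width = 64, 32, 64
frames = np.zeros((num_frames, height, width))
for t in range(num_frames):
    frames[t, :, :] = t / (num_frames - 1)

# Classic vertical slit: column x of the result comes from frame x.
slitscan = np.zeros((height, width))
for x in range(width):
    slitscan[:, x] = frames[x, :, x]

# The left edge shows the earliest moment, the right edge the latest.
```

With real footage you would load the frames from disk instead; the indexing stays the same.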

Paul Bourke shared a great explanation on his blog. (worth checking out!)

2001: A Space Odyssey features the most well-known use of the effect. Here’s a screenshot I found in Golan Levin’s amazing collection of slit-scan artworks. The effect was created by Douglas Trumbull.

*Slit-scan sequence from 2001: A Space Odyssey, by Douglas Trumbull*

Source: Levin, Golan. *An Informal Catalogue of Slit-Scan Video Artworks, 2005-2015.* http://www.flong.com/texts/lists/slit_scan

The technique has roots in early photography, but it gained a lot of attention in the 20th century, especially with artists and filmmakers who wanted to explore time and space in their work. It’s often associated with experimental films and art installations, where the visuals can twist and stretch in unexpected ways. You can think of it as capturing a moment while also showing how it unfolds over time, kind of like a time-lapse compressed into a single frame.

It’s fascinating how something so simple can produce such mind-bending results!

# Slit Scanning on Videos using Houdini

Images are cool, but videos are much cooler! Houdini being Houdini, it’s easy enough to set something like this up. And you don’t even have to use a slit to do the “scanning”: you can use arbitrary values on each pixel to timeshift into the past and future, which lets you create some trippy videos like this:
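That per-pixel variant can be sketched the same way (again my own illustration, not the Houdini setup itself): instead of a straight slit, an arbitrary offset map decides which input frame each pixel samples from.

```python
import numpy as np

# Stack of flat frames, each tagged with its frame index as "colour".
num_frames, height, width = 10, 4, 4
frames = np.stack([np.full((height, width), float(t))
                   for t in range(num_frames)])

# Any per-pixel pattern works as the "slit"; here a diagonal ramp
# scaled into the valid frame index range 0..num_frames-1.
ys, xs = np.mgrid[0:height, 0:width]
offsets = (xs + ys) * (num_frames - 1) // (width + height - 2)

# Each pixel reads its value from its own point in time.
out = frames[offsets, ys, xs]
```

Swapping the ramp for noise, a radial gradient, or a luminance map is what produces the trippy warped-time look.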

Source:

The amazing RBD sim and render are from Gurmukh Sandhu, who was kind enough to let me use it for this experiment.

Inspiration:

Stephan Helbing presented a similar setup for XK Studio in this amazing HIVE talk. My setup is loosely inspired by it. The main addition is the “straight to COPs” part, which makes it much faster to iterate.

You can see the impact and destruction in some parts of the image before the ball even hits the violin. 3D video? Somewhat!

The main problem you run into when doing this is that you need A LOT of frames for it to look good. In my example you can still see a lot of stepping, because I didn’t have enough frames to fill the full range of timeshifts, so multiple different timeshift values get quantized to read from the same input image. Ideally, your input frame range would be the number of pixels on the longer side of your image plus the number of frames you want to end up with:

For a 1920x1080 input sequence, that would be 1920 + 25 = 1945 frames for a 1-second video at 25 fps. This lets you timeshift through the whole range without the same input image being read on two rows and causing stepping.
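The arithmetic is just pixels-on-the-longer-side plus output frames; a quick sanity check:

```python
# Required input frames = pixels on the longer image side + output frames.
width, height = 1920, 1080
fps, seconds = 25, 1

required = max(width, height) + fps * seconds
print(required)  # 1945, matching the 1920x1080 / 25 fps example
```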

The setup is pretty straightforward:

I was surprised how relatively fast Houdini samples the colours from all of these frames at the same time.

```vex
// point wrangle "norm_xy"
float norm_x = chramp("remap_x", @P.x);
float norm_y = chramp("remap_y", @P.y);

f@timeoffset = max(norm_x, norm_y);
```

```vex
// point wrangle "udim"
int firstframe = chi("first_frame");
int lastframe = chi("last_frame");
int framerange = chi("frame_range");
int currframe = (int)@Frame;

// Slide the sampling window along with the current frame.
firstframe += currframe - 1;
lastframe += currframe - 1;

// Clamp both ends to the available input frame range.
firstframe = select(firstframe < framerange, firstframe, framerange);
lastframe = select(lastframe < framerange, lastframe, framerange);

// Map the per-pixel timeoffset (0..1) to a concrete input frame.
int readframe = (int)fit01(@timeoffset, firstframe, lastframe);

// Offset the UVs into the matching tile of the 10-column UDIM mosaic.
@uv.x += (readframe - 1) % 10;
@uv.y += floor((readframe - 1) / 10.0);
```
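To double-check the mosaic indexing above, here is the same `(readframe - 1) % 10` / `floor((readframe - 1) / 10)` arithmetic in Python (my own sanity check, assuming the 10-column tile layout):

```python
# Frame n lands in tile column (n-1) % 10 and tile row (n-1) // 10,
# i.e. frames fill the mosaic left to right, ten tiles per row.
def tile_offset(readframe):
    return ((readframe - 1) % 10, (readframe - 1) // 10)

print(tile_offset(1))   # (0, 0): first frame, first tile
print(tile_offset(10))  # (9, 0): last tile of the first row
print(tile_offset(11))  # (0, 1): wraps to the second row
```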

```vex
// point wrangle "rasterize"
// Copy the colour from the nearest point of the sampled input (input 1).
int pt = nearpoint(1, v@P);
v@C = point(1, "Cd", pt);
```
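The `nearpoint()` / `point()` pair is just a nearest-neighbour lookup. In Python terms (a minimal sketch with made-up sample points, not the actual node):

```python
import math

# Source "points": (position, colour) pairs standing in for input 1.
source = [((0.0, 0.0), (1.0, 0.0, 0.0)),
          ((1.0, 0.0), (0.0, 1.0, 0.0))]

def sample_cd(p):
    # Equivalent of nearpoint() + point(): colour of the closest point.
    _, cd = min(source, key=lambda s: math.dist(s[0], p))
    return cd

print(sample_cd((0.1, 0.0)))  # (1.0, 0.0, 0.0), closest to the first point
```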

A couple of things to keep in mind for this to run:

Download: File


sources / further reading:

