January 2018
You can move the camera by dragging with the left mouse button.
Pressing number keys changes the color mode.
[ 1 ] = 1-bit monochrome
[ 2 ] = 8-bit monochrome
[ 3 ] = 1 bit per channel
[ 4 ] = 8 bits per channel
The goal of this project is to render a 3D scene such that each individual frame contains only noise. The color of every pixel must be independent of all other pixels in the same frame, but may depend on pixels from previous frames.
There are multiple ways to approach this problem. I decided to move the noise with displacement maps based on the screen-space velocity of the objects in the scene.
Since individual frames contain nothing but randomness, we can create a perception of depth only through parallax motion. What makes this challenging is that the pixel positions (and therefore their velocities) are quantized. Simply rounding the screen-space velocities to the nearest integer introduces artifacts equivalent to color banding: the boundaries between areas of different quantized speeds are very noticeable, and the perception of depth is diminished.
One solution to this problem is dithering. Adding a random bias between 0 and 1 to the original speed before rounding down yields dithered speeds that average to the original speed over time. If the bias is chosen randomly per pixel, however, we run into another problem: neighboring pixels with similar original speeds will often end up with different quantized speeds and move apart from one another. This opens up holes, which must be filled with new random values and cause the motion to drown in noise.
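In shader-style pseudocode, the per-pixel variant might look like this sketch, where rand() stands in for a hypothetical per-pixel hash returning values in [0, 1):

// per-pixel dithering bias (rand() is a hypothetical hash, not actual project code)
float bias = rand(gl_FragCoord.xy);
// neighboring pixels with nearly identical speeds can land on different
// integers here, drift apart, and tear open holes
quantized_x = floor(original_x + bias);
quantized_y = floor(original_y + bias);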
We can avoid this noisiness by choosing one random dithering bias per frame and applying it to all pixels. While the result is much improved, the motion appears somewhat irregular.
The irregularity is caused by the use of uniform randomness. Uniformly random samples tend to clump together and leave voids. To get a more even coverage of the possible bias values, we can simply add the golden ratio to the bias, modulo one, each frame: starting from zero, the bias takes the values 0.618, 0.236, 0.854, 0.472, and so on, spreading evenly across the interval. A fancy technical term for this is low-discrepancy additive recurrence.
TLDR:
// globally, once per frame:
dither_bias += 0.618; // golden ratio
if (dither_bias >= 1.0) dither_bias -= 1.0;

// screen-space velocity of each pixel:
quantized_x = floor(original_x + dither_bias);
quantized_y = floor(original_y + dither_bias);
On a CPU we can easily use the screen-space velocity to move each pixel to its correct destination, but on a GPU random-access writes are difficult. Pixel shaders can read texture samples from arbitrary coordinates, but the position of their output is predetermined. We therefore need to generate a displacement map that points to the source of a pixel instead of its destination. This displacement map needs to be constructed very carefully: if any two pixels share the same source, their colors are no longer independent and the image ceases to be pure noise.
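As a rough illustration, the gather could look like the following fragment-shader sketch (GLSL-style; the texture names and the encoding of the displacement map are assumptions, not the project's actual code):

// copy noise along a backward displacement map
uniform sampler2D previous_frame; // noise image rendered last frame
uniform sampler2D displacement;   // per-pixel offset to this pixel's source, in pixels
uniform vec2 resolution;          // framebuffer size in pixels
out vec4 color;

void main() {
    vec2 uv = gl_FragCoord.xy / resolution;
    vec2 offset = texture(displacement, uv).xy; // points from this pixel to its source
    vec2 source_uv = uv + offset / resolution;
    color = texture(previous_frame, source_uv);
}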
One way to guarantee unique sources is to require a match between the forward displacement of the source and the backward displacement of the destination. This takes two separate passes. First, we render the scene in the state it was in during the previous frame and project all the motion that happens between frames into screen space. The resulting displacement vectors are stored in a texture. Second, we render the scene in its state during the current frame and compute displacement vectors again to determine potential sources. If the displacement of a pixel's potential source points back to exactly that pixel, we allow the color of the previous frame to be copied. Otherwise the pixel is filled with a new random value.
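Sketched as the second-pass fragment shader (again GLSL-style, with illustrative names and a hypothetical rand() hash), the match test could look roughly like this:

// pass 1 (not shown) rendered the previous-frame state and stored each
// pixel's quantized forward motion in forward_displacement
uniform sampler2D previous_frame;       // last frame's noise image
uniform sampler2D forward_displacement; // screen-space motion of each old pixel
in vec2 velocity;                       // this pixel's quantized screen-space motion
out vec4 color;

void main() {
    ivec2 dest = ivec2(gl_FragCoord.xy);
    ivec2 source = dest - ivec2(velocity); // potential source of this pixel
    ivec2 forward = ivec2(texelFetch(forward_displacement, source, 0).xy);
    if (source + forward == dest) {
        // the source's forward displacement points back exactly here: copy it
        color = texelFetch(previous_frame, source, 0);
    } else {
        // no unique match: fill with a fresh random value
        // rand() is a hypothetical per-pixel hash returning values in [0, 1)
        color = vec4(vec3(rand(gl_FragCoord.xy)), 1.0);
    }
}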