Pixel Shader Basics

Introduction

This article assumes you've read through the wonderful Preset Authoring Guide and will attempt to build on the information there. By the end you should have a solid understanding of how pixel shaders work in Milkdrop.

Overview of Pixel Shaders

Simply put, a pixel shader is a tool that applies a set of instructions to every pixel in a display window. In almost every modern video game, pixel shaders are used to create realistic lighting and shading effects in 3D scenes. A pixel shader takes in information about models, textures, light sources and materials, applies a set of instructions to this data, and returns a single float4 value, which holds the current pixel's red, green, blue and alpha channel values.

Pixel shaders in Milkdrop are a bit simpler, because we're limited to working in 2 dimensions without things like models or light sources. In addition, the final return value of our pixel shaders is a float3, leaving out the alpha channel. But the principle is still the same: when we write the code for a pixel shader we are giving it a set of instructions to perform on every pixel in the display window, and it uses those instructions to find the final color value for that pixel, which is assigned to ret.

It's important to understand that the two pixel shaders in Milkdrop, the warp shader and the composite shader, do very different things. We'll go over exactly how they're different later in the article; for now, just keep in mind that they do not do the same thing.

Textures

In the Milkdrop pixel shaders a texture is simply a sampling source. Textures can be sampled using the tex2D() function, or the tex3D() function if you're sampling from a 3D noise volume. There are three types of textures we can access in Milkdrop: sampler_main, the noise textures and custom textures. The latter two are covered very well in the Preset Authoring Guide, so we'll just take a moment to say a few things about sampler_main.
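
Since the noise textures won't come up again in this article, here is a minimal sketch of sampling them. It assumes the standard sampler names from the Preset Authoring Guide (sampler_noise_lq for a 2D noise texture, sampler_noisevol_lq for a 3D noise volume):

  // sample the low-quality 2D noise texture, tiled 4 times across the screen
  float4 n2 = tex2D(sampler_noise_lq, uv*4);
  // sample one slice of the low-quality 3D noise volume
  float4 n3 = tex3D(sampler_noisevol_lq, float3(uv*4, 0.5));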

sampler_main

We can think of this texture as a screenshot of the display window. This snapshot is taken by Milkdrop just before it executes the Warp Shader, and is passed to the Warp Shader as sampler_main. Using sampler_main we can get the color value of the current pixel either by calling tex2D(sampler_main, uv), or by calling GetPixel(uv).
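
For example, assuming GetPixel() is simply Milkdrop's shorthand for sampling sampler_main, the following two lines should give the same color:

  float3 a = tex2D(sampler_main, uv).xyz;  // explicit sample, keeping only the rgb channels
  float3 b = GetPixel(uv);                 // shorthand for the same thing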

There is one important difference between the Warp Shader and the Composite Shader when it comes to sampler_main: Milkdrop takes a new snapshot of the scene after the Warp Shader has finished executing, and passes this new snapshot to the Composite Shader. This means the Warp Shader actively alters the scene, while the Composite Shader only returns a copy of the scene to display. In other words, nothing the Composite Shader does affects the rest of Milkdrop, because it only alters a copy of the scene instead of the scene itself. The Warp Shader alters the scene itself, and anything it does will affect the rest of Milkdrop.

UV Coordinates

A pixel shader needs some way of knowing which pixel it's currently working on, and which pixel you want to sample in a texture. In Milkdrop these concepts are represented by uv, a float2 with x and y values between 0..1. The top left corner of the scene has the coordinates (0.0,0.0), while the bottom right corner is (1.0,1.0). How Milkdrop generates these values is not so important; what's important is that it stores the dimensions of the display window and the size of a single pixel in the float4 variable texsize.xyzw. Let's suppose we have a display window of size 600x400; in this case texsize will hold the following values:

  • texsize.x = 600
  • texsize.y = 400
  • texsize.z = 1/600 = 0.0017
  • texsize.w = 1/400 = 0.0025
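
As a quick sketch of how texsize is typically used (the variable names here are just chosen for illustration, and the numbers only apply to our hypothetical 600x400 window):

  float2 pixelSize     = texsize.zw;                 // (0.0017, 0.0025) in our example
  float2 onePixelRight = uv + float2(texsize.z, 0);  // uv of the pixel to our right
  float2 onePixelDown  = uv + float2(0, texsize.w);  // uv of the pixel below us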

A pixel shader always moves from left to right, starting in the upper left corner with pixel (0,0). Next it increments to the next pixel, and because the size of a single pixel along the x axis of our hypothetical window is 0.0017, the uv value changes from (0,0) to (0.0017,0). For our window the pixel shader does this 600 times, each time adding 0.0017 to uv.x and keeping uv.y the same. After 600 increments it has reached the right side of the screen at (1,0), so it moves back to the left side and starts on the next line of pixels; the uv value is now (0,0.0025). This process is repeated for the entire display window, in our hypothetical case a total of 240,000 times.

This also means the shader code you write will be executed 240,000 times for that frame; for a window size of 1680x1050 that number increases to about 1.7 million times per frame! At 60 fps this means your graphics card is executing your instructions roughly 106 million times per second, per shader! Something to think about the next time one of the more intense presets brings your machine to its knees.

Getting back to our example, remember that sampler_main contains a copy of the scene in the form of a texture. We know enough now to read the following instruction:

  ret = tex2D(sampler_main, uv);

as "Set ret to the rgb values from the sampler_main texture at the coordinates of the current pixel".

Modifying the uv coordinates

Some very interesting effects can be achieved by modifying the uv coordinates before you sample a texture. The theory behind this is very straightforward, because there are really only two things we can do: translate and scale.

Translation

Let's say we want to create a movement effect similar to dx (aka Translation (X) in the Motion menu) that moves the entire scene one pixel to the left each frame. In mathematical terms this is called a translation, and you achieve it by modifying the uv values like so:

  ret = tex2D(sampler_main, float2(uv.x + texsize.z, uv.y));

which reads as "Set ret to the rgb values from the sampler_main texture at one pixel to the right of the current pixel". When we do this for every pixel we've in effect moved the entire scene one pixel to the left. Conversely, subtracting texsize.z from uv.x would move the scene one pixel to the right. To move the scene 2 pixels to the left we just multiply texsize.z by 2.
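
For instance, a minimal sketch of the two-pixel version:

  // move the scene two pixels to the left
  ret = tex2D(sampler_main, float2(uv.x + 2*texsize.z, uv.y));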

Scaling

In the same way that translation is achieved by addition and subtraction, scaling is achieved by multiplication and division. An important note, though: by dividing we really mean multiplying by a decimal fraction, uv*0.5 instead of uv/2 for example. This is because a computer can perform multiplication much faster than division, so we multiply whenever we can.

When we scale uv what we're really doing is increasing or decreasing the sampling area. For example, the instruction uv*0.5 halves the sampling range along each axis, so the sampling box now goes from (0,0) to (0.5,0.5). Notice this also means that the pixel ratio between the display window and the sampling area is no longer 1:1; one pixel from the sampling area gets scaled up to cover 4 pixels in the display window. The effect of this is that the sampling area is blown up to fit the display window, and you lose some resolution in the process. The entire process is less confusing if we see it in action in the Composite Shader:
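
As a rough sketch, putting this single line in the composite shader samples only the top-left quarter of the texture and stretches it across the whole window:

  ret = tex2D(sampler_main, uv*0.5);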

We can also move the scaled sampling area to the center of the texture. It's easy to see why this works: all we've done is add 0.25 to the coordinates, so the box now extends from (0.25,0.25) to (0.75,0.75).
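
A minimal sketch of that centered version:

  // sample the box from (0.25,0.25) to (0.75,0.75), centered on the texture
  ret = tex2D(sampler_main, uv*0.5 + 0.25);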

Comp shader vs Warp shader

It's important to note that the effects of these transformations are very different in the two Milkdrop shaders. Remember, the composite shader treats sampler_main as a static texture and returns a modified copy of it that Milkdrop displays. Think of it as a camera: when we apply a translation to the uv coordinates we are moving the camera, and when we scale the coordinates we are zooming the camera in or out. At the end, the camera takes a picture of what it sees and returns that picture.

In the warp shader, on the other hand, we're really changing the scene when we transform the uv coordinates. And this transformation gets compounded in each successive frame. A lot of cool effects like fractals and error diffusion dither take advantage of this. Consider the Zoom effect in the Motion menu. To replicate this with pixel shaders we add the following instruction to the warp shader:

  uv = 0.5 + (uv-0.5)*0.95;

This causes the scene to zoom in and does essentially the same thing as setting Zoom Amount to about 1.05 in the Motion menu (scaling by 0.95 is roughly the same as dividing by 1.05).
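
The same idea can be written with an explicit zoom factor. zoomAmount below is just a name chosen for this sketch, not a built-in Milkdrop variable:

  float zoomAmount = 1.05;                 // >1 zooms in, <1 zooms out
  uv = 0.5 + (uv - 0.5)*(1.0/zoomAmount);  // same effect as multiplying by roughly 0.95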

Texture Wrap

Remember that uv coordinates are always values between 0..1, so what happens if we multiply (0.75,0.75) by 2 to get the coordinate (1.5,1.5)? The answer depends on which sampling mode you've chosen to use on your texture: wrap or clamp. Clamp mode (fc and pc) will use 1.0 for any coordinate value larger than 1, and 0 for any value smaller than 0, so (1.5,-0.4) becomes (1,0). Wrap mode (fw and pw) wraps the coordinates around, keeping only the part past the integer boundary, so (1.5,-0.4) becomes (0.5,0.6). Note that when the values are greater than 1 wrap mode always returns just the fractional component, so writing this:

  uv = float2(1.5, 1.5);
  ret = tex2D(sampler_fc_main, frac(uv));

will return exactly the same result as

  uv = float2(1.5, 1.5);
  ret = tex2D(sampler_fw_main, uv);

because by encasing uv inside frac() we ensure the coordinate values always land in the 0..1 range. (HLSL's frac() is floor-based, so a negative value like -0.4 becomes 0.6, just as it does in wrap mode, which means the trick holds for negative coordinates too.) In practical terms, wrap mode will tile a texture whenever the uv coordinates go out of bounds.

In clamp mode any x or y coordinate value larger than 1.0 will cause tex2D() to return the color value at the right edge or bottom edge of the screen respectively. Any x or y coordinate smaller than 0 (negative) will return the color value at the left edge or top edge of the screen respectively.
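
A small sketch of clamp mode in action:

  // out-of-range coordinates stick to the nearest edge in clamp mode,
  // so this returns the same color as sampling at (1.0, 0.0)
  ret = tex2D(sampler_fc_main, float2(1.5, -0.4));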

Color Values

All of the work done inside a pixel shader is done to find a single set of values: the rgb color values of the current pixel. This is what you assign to ret at the end of your shader code, and this is what Milkdrop draws onto the screen for that pixel. All the fancy mathematics we see in shader code is done to manipulate color values in one form or another, because that's all a pixel shader can do.

RGB basics

In Milkdrop shaders color values are always stored as a float3 that represents the amount of red, green and blue color at a particular point. In the body of your shader code these values can be any number at all, but the final values assigned to ret at the end of your shader must be between 0..1 or they will be clamped to either 0 or 1: a negative value becomes 0, a value greater than 1 becomes 1. Here are a few other notes about working with the color values in ret (a short sketch follows the list):

  • 1-ret returns the inverse of the color, assuming it is clamped to the 0..1 range
  • saturate(ret) clamps the color to the 0..1 range
  • ret.zyx causes the red and blue channels to trade places
  • ret.x gives a float1 with the value of the red channel
  • lum(ret) basically returns the greyscale version of the color
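
A minimal sketch putting a few of these together; the last line assumes lum() behaves as described in the Preset Authoring Guide:

  ret = 1 - ret;        // invert the color (assuming ret is already in the 0..1 range)
  ret = ret.zyx;        // swap the red and blue channels
  ret = saturate(ret);  // clamp each channel to the 0..1 range
  ret = lum(ret);       // greyscale: the single luminance value fills all three channels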

One important note is that Milkdrop uses 8 bit values for the color channels, which is a fancy way of saying that there are 256 possible shades in each channel, giving a smallest representable step of about 0.004 (1/255).

Color in Comp shader vs Warp shader

Just as with uv transformations, color manipulation works differently in the warp shader than it does in the composite shader. Adding the following instruction,

  ret += 0.01;

to the warp shader will quickly white out the entire scene, while adding the same line to the composite shader will make hardly any difference. Again, this is because the effects in the warp shader are compounded with each successive frame: it will keep adding 0.01 to the pixel's color every frame until all three channels reach 1.0 and get clamped there. This is why any color correction (like giving the entire scene a green tint) should be done in the composite shader.

Basic color examples

To give the entire preset a green tint we could add this instruction to the composite shader:

  ret *= float3(0.5, 1.0, 0.5);

Which would cut the red and blue color values in half, allowing more of the green to show through.

To make the entire scene brighter we could add this instruction to the composite shader:

  ret *= 3;

Which simply multiplies each channel by 3, making them brighter. Keep in mind the final output will still be clamped to 1.0; you can't get any brighter than pure white!


For a more advanced use of color, why don't we combine it with uv translation? Consider the following instructions in the warp shader:

  float3 temp = tex2D(sampler_main, uv);
  float2 myuv = float2( uv.x + 0.05*temp.x, uv.y);
  ret = tex2D(sampler_main, myuv);

Here we first sample the colors of the current pixel from sampler_main and assign them to temp. In the next line we create a new coordinate value that multiplies 0.05 by the value of temp's red channel and adds it to uv.x, while keeping uv.y the same. Where temp's red channel is 1.0 we're adding 0.05 to uv.x, where the red channel is 0.5 we're only adding 0.025 to x, and where the red channel is 0 we're keeping uv.x the same. The effect of this is a translation based on how much red is in the current pixel. A lot of neat effects can be done with this, including the painterly effect.

Important note about ATI cards

Due to some obscure bug in either Milkdrop's source or ATI's drivers, PS 3.0 in Milkdrop will not work on any ATI card. As a temporary fix to this problem, PS 2.X support has been added to Milkdrop. There aren't many differences between the two versions as far as we're concerned, so try to keep your presets limited to PS 2.0 or PS 2.X, which should give you more than enough instruction slots to play with.

A second bug, affecting the Xx00 and 9x00 lines of Radeon cards, has to do with the _fw_ and _pw_ sampling modes for textures (made worse by the fact that _fw_ is the default sampling mode). To have textures display properly, encase the uv coordinates in frac() like this:

  tex2D(sampler_main, frac(uv));

This bug seems to be fixed on the Radeon HD line of cards (both the 3xxx's and the 4xxx's).