WaitUntil Component

Here’s a little something that I hope to use increasingly in the future : elements of functional programming to facilitate state modification over time or across game loops, without using threads all over the place. It’s nothing new, and there are other solutions (like Nick Gravelyn’s Interpolators and Timers), but I tried to make it as concise and generic as I could.

Here it is, more comments after the listing.

using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.GamerServices;

namespace Foo
{
    public class WaitUntil : IGameComponent, IUpdateable
    {
        public static Game Game { private get; set; }

        public static void GuideDisappears(Action onValid)
        {
            Game.Components.Add(new WaitUntil(_ => !Guide.IsVisible, onValid));
        }

        public static void TimePassed(float secondsToWait, Action onValid)
        {
            Game.Components.Add(new WaitUntil(waited => (waited as TimeKeeper).Elapsed.TotalSeconds > secondsToWait,
                                              (elapsed, waited) => (waited as TimeKeeper).Elapsed += elapsed,
                                              onValid, new TimeKeeper()));
        }
        class TimeKeeper { public TimeSpan Elapsed; }

        readonly Func<object, bool> condition;
        readonly Action<TimeSpan, object> whileWaiting;
        readonly Action onValid;
        readonly object state;

        WaitUntil(Func<object, bool> condition, Action onValid) : this(condition, ActionHelper.NullAction, onValid) { }
        WaitUntil(Func<object, bool> condition, Action<TimeSpan, object> whileWaiting, Action onValid) : this(condition, whileWaiting, onValid, null) { }
        WaitUntil(Func<object, bool> condition, Action<TimeSpan, object> whileWaiting, Action onValid, object state) 
        {
            this.condition = condition;
            this.whileWaiting = whileWaiting;
            this.onValid = onValid;
            this.state = state;
        }

        public void Update(GameTime gameTime)
        {
            // Check the condition every frame; when it passes, fire the callback and remove ourselves
            if (condition(state))
            {
                onValid();
                Game.Components.Remove(this);
            }
            else
                whileWaiting(gameTime.ElapsedGameTime, state);
        }

        #region Stuff we don't care about
        public void Initialize() { }
        public bool Enabled { get { return true; } }
        public event EventHandler EnabledChanged;
        public event EventHandler UpdateOrderChanged;
        public int UpdateOrder { get { return 0; } }
        #endregion
    }

    public static class ActionHelper
    {
        public static void NullAction<T, U>(T t, U u) { }
    }
}

What I ended up with is basically a GameComponent, which means I need access to a Game instance to add it to and remove it from the component collection. I decided to use a static field (that you assign in the Game’s constructor) to avoid passing it every time. It’s very unlikely that the Game instance will change or be destroyed… and I already static-ified it in my ServiceHelper earlier anyway.

I also wrote a couple (okay, two) static factory methods that are slightly fluent-interface-ey.

// (from the context of your Game class implementation)
{
    // Say "OK" when the guide stops being visible, like this?
    // (note : this form would need the WaitUntil constructors to be made public)
    Components.Add(new WaitUntil(_ => !Guide.IsVisible, () => Console.WriteLine("OK!")));
    // ...or like this!
    WaitUntil.GuideDisappears(() => Console.WriteLine("OK!"));

    // And while we're at it...
    WaitForTwoSeconds();
}
void WaitForTwoSeconds()
{
    Console.WriteLine("Will wait for two seconds...");
    // ...recursive timers!
    WaitUntil.TimePassed(2, WaitForTwoSeconds);
}

I’ll probably add new factory methods as needs arise (one possible addition is sketched below), and make the class more useful overall, but I feel like it’s a good start.
I used the “GuideDisappears” method when a gamer signs out and I want to show a warning message using the Xbox Guide before going back to a sign-up screen… but since signing out is usually performed from the Guide itself, you have to wait for it to close before doing anything. This seemed like the simplest solution, and it works great.
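
Speaking of new factory methods, here’s a sketch of a general-purpose one that waits on any parameterless condition. It’s not part of the listing above, just an example of what a future addition could look like :

// Hypothetical addition to WaitUntil : wait on an arbitrary condition
public static void ConditionMet(Func<bool> condition, Action onValid)
{
    Game.Components.Add(new WaitUntil(_ => condition(), onValid));
}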

Anaglyph Stereoscopic Rendering in First Person

I decided to finally finish up my anaglyph stereoscopy sample, and cut the depth-of-field component that was adding too much complexity for my limited spare time right now.

[Screenshots : final anaglyph renders]

Download

Binaries : stereoscopy_bin.zip (1.9 Mb – Binaries)
Code : stereoscopy_src.zip (524 Kb – Source Only, get DLLs for TV3D and SlimDX from the Binaries)

Description

This sample demonstrates a couple of things :

  • Optimized anaglyph filters for red/cyan stereoscopic rendering, to reduce eye strain by minimizing retinal rivalry while still keeping some color information (for contrast, a naive combine is sketched after this list). As my source suggests, I’ve also implemented red channel gamma correction in the shader.
  • Auto-focus of both eyes on a focal plane. The camera is in first person and the two “virtual cameras” act like human eyes, as if they were connected to a single brain that wants to look at a single point. So the center of the screen is assumed to be that focal point, and both eyes will look at it. I used a depth rendering pass to achieve this, and a weighted sum of a certain portion of the screen near the center (this is all tweakable in realtime).
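
The naive combine mentioned above takes the red channel from one eye and the green/blue (cyan) channels from the other; the optimized filters build on that idea with channel mixing and gamma correction. A minimal sketch (my own, using an XNA-style Vector3 for linear RGB; not code from the sample) :

// Naive red/cyan anaglyph combine, for contrast with the optimized filters.
// left and right are the linear RGB colors of the same pixel as seen by each eye.
static Vector3 NaiveAnaglyph(Vector3 left, Vector3 right)
{
    // red from the left eye, green and blue (cyan) from the right eye
    return new Vector3(left.X, right.Y, right.Z);
}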

The distance between the eyes is also tweakable, if you want to give yourself a headache.

I wanted to do a stereoscopy sample to show how I did it in Super HYPERCUBE, but I can’t/don’t want to release its source, so I just re-did it properly. Auto-focus is a bonus feature that I wanted to play with; sHC didn’t need that since the focal plane was always the backing wall.

This sample, like all my recent ones, uses the latest version of my components framework. There are some differences between this release and the Stencil Rendering one, but it’s mostly the same. The biggest thing is that base components (Keyboard, Sound, etc.) are not auto-loaded anymore, and must be added in the Core.Initialize() implementation. This way, if I don’t need the sound engine, I just don’t load it… makes more sense.

Hope you like it!

Stencil Rendering

Here’s a little demo to show off a technique that Farbs posted about earlier this week.


Download

Binaries : stencilrendering_bin.zip (3.6 Mb – Binaries)
Code : stencilrendering_src.zip (1.8 Mb – Source Only, get DLLs for TV3D and SlimDX from the Binaries)

Description

Every frame, a random color from the target image is sampled. This color will be used as a stencil, such that every pixel whose target color is close enough to that stencil color will be painted. It’s a constructive painting process; every frame paints a single color, but if you wait long enough in a single spot you’ll end up with the target image.

Actually, the post-processing draw code is so concise that I can post it here :

public override void PostDraw()
{
    targetBuffer.BltFromMainBuffer();
    targetBuffer.SetSystemMemCopy(true, true); // Lolworkaround

    // 1) Pick a colour from target image
    var pickedColor = Globals.DecodeRGBA(TextureFactory.GetPixel(targetBuffer.GetTexture(), RandomHelper.Random.Next(0, targetBuffer.GetWidth()), RandomHelper.Random.Next(0, targetBuffer.GetHeight())));
    stencilShader.SetEffectParamVector3("stencilColor", new TV_3DVECTOR(pickedColor.r, pickedColor.g, pickedColor.b));
    Screen2DImmediate.Draw_FullscreenQuadWithShader(stencilShader, 0, 0, 1, 1, mainBuffer.GetTexture(), targetBuffer.GetTexture());
    mainBuffer.BltFromMainBuffer(); // keep the painted result around for next frame's MainBufferSampler
}

“targetBuffer” and “mainBuffer” are just two TVRenderSurfaces as big as the viewport. Since I sample from targetBuffer, it needs to be flagged with “system memory copy”. I thought this would slow things down, but it runs at very interactive framerates (60 and more).

And uh… the “lolworkaround” addresses a bug in the current build of TV3D. Usually you only need to set this once at initialization time, but BltFromMainBuffer() does not flag the surface as dirty, which prevents updates to the pixels that I sample. Resetting the memory copy mode makes the changes effective. Sylvain tells me it’s fixed in the current development build. :)

A pixel shader does the rest :

float4 PS(PS_INPUT IN) : COLOR 
{
	float3 bufferSample = tex2D(MainBufferSampler, IN.TexCoord).rgb;
	float3 targetSample = tex2D(TargetSampler, IN.TexCoord).rgb;

	// 2) Calc difference between current screen and target (per channel subtraction, abs, then accumulate all three into one)
	float sampleDiff = distance(bufferSample, targetSample);

	// 3) As above, but between colour picked in step 1 and target image
	float stencilDiff = distance(stencilColor, targetSample);

	// 4) Where result of step 2 > result of step 3, draw colour picked in step 1
	float4 color;
	if (stencilDiff < sampleDiff)
		color = float4(stencilColor, 1);
	else
		color = float4(bufferSample, 1);

	return color;
}

I think it’s a lovely effect. It’s very dependent on how colorful and contrasted the scene is, and it works differently for sharply-defined shapes or gradients… and of course camera movement is a big factor. In the video it gets confusing; the effect is more “painterly” if you just rotate the camera in small circles and wait for the effect to accumulate.

Stimergy

Stimergy is a game that I made with Heather Kelley of Kokoromi for the Bivouac Urbain gamejam/competition last weekend in Québec City. Our team name was EMERGENCY HAMMER… don’t ask?

The point of the jam was to make a game in 3 days – 36 hours. And so we did.


Download

stimergy_final.zip (2.1 Mb – Binaries)
stimergy_src.zip (471 Kb – Source Only, get DLLs for TV3D, SlimDX and IrrKlang from the Binaries)

I thought it was so cool that Petri used Chronolapse to film the making of his game Post I.T. Shooter, so I used the same thing for Stimergy :

[Timelapse video]

Details

Heather did most of the game design, defined the graphics style and did the sound effects. I did the programming and some game design.

The game was made from scratch in C# 3.0 using the Truevision3D engine with no prior design, graphics or sound work. All the graphics in the game are procedural, and the gameplay itself is based on AI rules, basically a cellular automaton plus the notion of “stigmergy” from the insect world. (We dropped the G in the game title.)

I used a more recent version of my components system, which now has a sound interface via IrrKlang. It’s the same system I used for Super HYPERCUBE and Trouble in Euclidea; I’m growing quite fond of it.

Goals

  • Guide the ants using suggestive pheromone trails, towards the picnic blanket or away from the killer antlions.

There are 4 levels. Each level has different goals, which are explained when it begins. Some have time limits. If you fail the objective, the level will restart until you get it.

Controls

Mouse Left Button : Attraction Pheromone
Mouse Right Button : Repulsion Pheromone

Escape key to restart the game from Level 1.

Requirements

This game requires the .NET Framework 3.5; installing it is mandatory. I suggest you install the SP1 version just in case.
You will also need a bunch of DirectX DLLs that are provided by the DirectX Web Setup.

Because of the fancy blur effect, the game requires a Shader Model 2.0 compatible graphics card. This type of card is fairly commonplace now.


Post Mortem


This was my first real rapid game prototyping experience. The shortest project span I had seen for a game before was about 15 days… so this was something else. Here’s a couple of random thoughts about that :

  • I need a proper system for timers and interpolators, similar to what Nick Gravelyn posted, or maybe just steal that. It’s really annoying to have a dozen TimeSpan variables to keep track of what changes over time and how long the transition lasts.
  • I need to learn other languages than C#, and other frameworks than TV3D/XNA. I’m making big efforts in my “engines” to cut down the redundant code, but even then I feel like I’m programming much more than is needed to describe the game mechanics.
  • A big part of what makes a game actually fun, is how much direct feedback you get from interacting in the gameworld, and how much you feel like you can control these interactions. Stimergy is a pretty slow game, almost an RTS (funny, because I hate this genre), and I’ve seen other competitors value the responsiveness of player input more than the complexity of game systems, and it ended up being a lot more fun. Maybe it just fit the “jam” context better, too.
  • I need a system for game screens. Something that puts game components in a context, that gives them lifetime. I tend to make all the major components global and automatically-loaded, which is the easiest way, but it makes game state management pretty hard. And it’s dangerous for memory usage. There’s actually a good XNA example that I just need to look at in detail.
  • If I’m going to do more 2D games in TV3D, I need to build something that will load GIF animations and non-power-of-two textures. Material management for stuff that changes color or opacity was kind of a mess too; I may need to wrap it in something more concise. Switching engines would probably be the more logical choice.
  • I spent silly amounts of time tweaking graphical things that I ended up not using. This time would have been better spent balancing the difficulty level or adding more gameplay elements. I’m used to making demos look pretty… I have to remind myself that I’m making a game here.

I may sound like I’m complaining about everything, but I’m actually really happy with how the game turned out. It’s fairly fun/challenging and it looks pretty good. I was hesitant to release the code because it’s kind of a mess, but I did anyway : the source is available up in the Download section!

Fast Uniform Poisson-Disk Sampling in C#

Updated July 26th : Added a circle sampling method.

This appears to be tool-class weekend! Here’s another that I just finished up.

I’ve been wanting uniform Poisson-disk distribution generation code for ages, ever since I started working with shadow map filtering in October 2006. I had difficulty decrypting the whitepapers that popped up in the first Google results at the time, so I gave up for a little while until… well… someone did a simple implementation for me to grab.

Thankfully, it happened!

The fine gentlemen at Luma labs (Herman Tulleken is credited for the class I converted) released Java, Python and Ruby implementations of uniform and non-uniform Poisson-disk sampling. It’s based on an oddly straightforward whitepaper by Robert Bridson at UBC (in Canada!).

It took me under an hour to make it work in C#. This will prove very useful in my PCF filtering code, and I will be using it immediately in a Depth-of-Field shader I’m working on.

[Screenshots : four rectangle-domain distributions, two circle-based ones]

Download

UniformPoissonDiskSampler.cs (4 kb – C# class)

This one uses Vector2’s from the SlimDX namespace, but it would be easy to just replace them with XNA Vector2’s or TV3D TV_2DVECTOR’s.

The original Java code used a non-static class; I decided to make it static and use structures for settings and state. This will reduce garbage if it’s used repeatedly, and since I pass the structures by reference, it’s just as fast as instance variables. I feel it’s a bit cleaner too.

The pointsPerIteration parameter defaulted to 30 in the original code; I kept that default and it works well. You can reduce it for potentially less dense distributions but much faster generation, or raise it to make sure the whole space is filled.

Again, I won’t post a full sample because the class usage is as simple as can be : it returns a List<Vector2> in the specified domain. A hypothetical call is sketched below.
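
For illustration only, a call might look like this. The method and parameter names here are my guesses based on the class description (a rectangle-domain sampler, plus the circle method from the update), so check the actual file :

// Hypothetical usage : 'SampleRectangle' and its parameters are assumptions,
// not confirmed names from UniformPoissonDiskSampler.cs
List<Vector2> points = UniformPoissonDiskSampler.SampleRectangle(
    new Vector2(0, 0),      // top-left corner of the domain
    new Vector2(512, 512),  // bottom-right corner of the domain
    10);                    // minimum distance between samples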

Enjoy!

Flash-style Tween/Easing Functions in C#

Easing functions make any movement look pretty. So you’re going to need them at some point.

And implementations are so widely available that you don’t have an excuse not to use them. I found this ActionScript reference to be very helpful in creating my own C# easing functions library, which I’d like to share… so here it is!

[Screenshot : first line is EaseIn, second is EaseOut, third is EaseInOut]

Download

Easing.cs (4 kb – C# class)

It’s a static class and it’s framework-agnostic, completely standalone. The screenshot you see above is my TV3D test class. I won’t post the whole sample, but here’s the code if you want to see how I used it. In order to run, one would need a version of my components framework that I haven’t released yet.

I also didn’t implement each and every function that Robert Penner presents, just the ones I figured I’d use. Bounce and Elastic sound like physics to me; I don’t think I’d use easing functions to achieve that. To give an idea of what these functions look like inside, the classic quadratic trio is sketched below.
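
This is only a sketch of the well-known Penner-style quadratic easings; the signatures in my Easing.cs may differ :

// t is normalized time in [0, 1] ; the return value is the eased progression in [0, 1]
static float QuadEaseIn(float t)  { return t * t; }
static float QuadEaseOut(float t) { return t * (2 - t); }
static float QuadEaseInOut(float t)
{
    return t < 0.5f
        ? 2 * t * t                 // first half : accelerate
        : -1 + (4 - 2 * t) * t;     // second half : decelerate
}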

Quick Tip : Stencil Buffer Comparisons

I just realized something today about the stencil buffer, which I love and use extensively in Fez to do tricky screen-space effects and keep everything single-pass.
Even though the way it works is really standard, documented in the APIs and even has the exact same behavior in both DirectX and OpenGL… it kinda goes against what you’d expect.

Let’s say you want to only draw pixels where the current stencil buffer value is greater than 4. In your mind, it probably goes like this (in a fictitious “foreach pixel in screen” loop) :

if (stencilBuffer > 4) draw();

…and if so, I’m totally with you. You already wrote to the stencil buffer, and now you want to compare against it. Intuitively, the reference value would be on the right-hand side of the comparison operator.

But nope, wrong way around! This is how the GPU expects you to think :

if (4 > stencilBuffer) draw();

This OpenGL page on the stencil buffer states it clearly :

GL_GREATER : [stencil test] passes if reference value is greater than stencil buffer

This applies to the Less, LessEqual, Greater and GreaterEqual operators. When you test a reference value against the stencil buffer, the test happens in that precise order : the reference value goes on the left-hand side of the comparison operator. In XNA, the example above maps to something like the snippet below.
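
A minimal sketch using XNA 4-style render states (earlier XNA versions expose the same settings through RenderState) :

// Passes where 4 > stencil buffer value : the reference sits on the LEFT of the comparison
var stencilTest = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.Greater, // reference OP stencil
    ReferenceStencil = 4,
    StencilPass = StencilOperation.Keep
};
GraphicsDevice.DepthStencilState = stencilTest;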

Hope I saved you some confusion! I sure wasted an hour on this one…

Full Xbox 360 Gamepad Support In C#, Without XNA

I think that XNA’s implementation of the XInput API for Xbox 360 controllers is terrific. Simple, complete, ready to use. So when I wanted to add gamepad support (including analog input and vibration) to Super HYPERCUBE, a TV3D 6.5 game, I just added a reference to the XNA Framework assembly and carried on like I did in Fez. But I realized that referencing XNA just for controller support is silly, and I don’t want to assume that it’s installed on clients, nor force-install the redistributable.

So I recently looked for a more proper solution. It looks like MDX 2.0 Beta had support for XInput, but it’s been discontinued in favor of XNA for some time. There is also a handful of managed wrappers around the native XInput DLL that use P/Invokes… I wasn’t too keen on that either.

Then I recalled that SlimDX is the community-maintained successor to MDX, and that it’s awesome. And as a matter of fact, it has complete support for XInput! So might as well use that.

But SlimDX is, well, slim. It’s a very thin wrapper and doesn’t do any normalization or dead zone detection for thumbsticks. It’s not much of a bother to do yourself, but I figured I’d post my code as a reference to people who want to use SlimDX’s XInput implementation in the real world.

Details

My dead zone code is based around the “Getting Started With XInput” guide on MSDN, but given the C#3/SlimDX beautifying treatment. :)
I also found that the analog triggers on my controllers did not need dead zones, but if you want to use them as binary buttons, you can check against the appropriate SlimDX constant, as shown below.
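
A one-line sketch, assuming SlimDX exposes the XInput trigger threshold constant alongside the thumbstick dead zone constants :

// Treat the analog trigger as a binary button once it crosses the suggested threshold
var gamepadState = Controller.GetState().Gamepad;
bool leftTriggerPressed = gamepadState.LeftTrigger > Gamepad.GamepadTriggerThreshold;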

My “state class” does not wrap everything that the SlimDX Controller class exposes, like voice support, battery information, etc. So I made the local Controller instance a public readonly member, and you can access it to query whatever other information you need. Or of course you can add the getters/state variables that you need.

Download

GamepadState.cs (C#3 – 5 Kb, SlimDX March 2009 SP1 SDK or later needed)

Just the dead zone code

If you’re only looking for that, here it is :

    var gamepadState = Controller.GetState().Gamepad;
    var leftStick = Normalize(gamepadState.LeftThumbX, gamepadState.LeftThumbY, Gamepad.GamepadLeftThumbDeadZone);
    var rightStick = Normalize(gamepadState.RightThumbX, gamepadState.RightThumbY, Gamepad.GamepadRightThumbDeadZone);

static Vector2 Normalize(short rawX, short rawY, short threshold)
{
    var value = new Vector2(rawX, rawY);
    var magnitude = value.Length();
    var direction = value / (magnitude == 0 ? 1 : magnitude); // guard against division by zero

    // Rescale the part of the magnitude that exceeds the dead zone back to [0, 1]
    var normalizedMagnitude = 0.0f;
    if (magnitude - threshold > 0)
        normalizedMagnitude = Math.Min((magnitude - threshold) / (short.MaxValue - threshold), 1);

    return direction * normalizedMagnitude;
}

Isotropic Specular Reflection Models Comparison

I’ve decided to repost all my remaining TV3D 6.5 samples to this blog (until I get bored). These are not new, but they were only downloadable from the TV3D forums until now!

This demo (originally released as VB.Net 2005 on February 13th, 2007 here) is a visual and performance comparison (and reference implementation) of five different per-pixel lighting models for isotropic specular reflections :

  • Phong reflection model
  • Blinn-Phong (Blinn D1, Phong) specular distribution
  • Lyon halfway method 1 (for k=2 and D = H* – L)
  • Trowbridge-Reitz (Blinn D3) specular distribution
  • Torrance-Sparrow (Blinn D2, Gaussian) specular distribution

My main goals were to :

  • Make an optimized HLSL implementation of each model that fits in a single Shader Model 2.0 pass and supports 3 lights
  • Evaluate the performance of each model in a multiple light, per-pixel rendering context
  • Determine which model keeps the most numerical precision and does not produce artifacts when used with normal-mapping

Download

IsotropicModels.zip [2.7 Mb] – C#3 (VS.NET 2008, TV3D 6.5 Prerelease .NET DLL Required)

Screenshots

[Screenshots : four captures of the Lyon model]

Details

The HLSL shader supplied with this sample was made to mimic the built-in TV3D offset-bumpmapping shader as closely as possible. As a result, almost all of its effect parameters are mapped to standard semantics. It supports :

  • One colored directional light
  • Two colored point lights in SM2.0, and four in more recent models (SM2a/b and SM3)
  • All types of vertex fog
  • Parallax mapping of texture coordinates using a grayscale heightmap
  • Diffuse mapping with alpha support (a.k.a. texturing)
  • Normal mapping
  • Specular mapping (using the alpha channel of the normalmap)
  • Emissive mapping (colored!)
  • Usage of all material terms (diffuse, ambient, specular, emissive, power and opacity)

There is no support for point light attenuation, as this would’ve gone over the 64-instruction limit of the ps_2_0 profile (also, TV3D doesn’t provide semantics for these).
There is no support for spot lights for the same reason, but I believe spots will be processed as regular point lights, ignoring the spot-specific parameters.

Techniques

With the realtime controls, you can choose from three different techniques : FiveLightsBranching, FiveLights and ThreeLights. On SM2.0 hardware, only the third option will be valid.

The FiveLightsBranching mode uses loops and “if” statements to produce dynamic branching on SM3.0-compatible hardware. This can (but may not) be beneficial because only the calculations for enabled lights are performed.
The FiveLights and ThreeLights modes do five and three lights respectively (WHAT YOU SAY !!), but in a static manner. It does not just unroll the loop! Most of the calculations are done with matrices, which makes it more efficient on most hardware.

To keep the shader “simple” (or to prevent it from becoming even more complex…), I decided not to implement a multipass 5-light technique for SM2.0… sorry!

Blinn vs. Phong

There are two major categories in the models I tested : the ones that use the halfway vector, and the Phong model, which works with the reflected vector (see the Wiki entry on Blinn-Phong for details on these vectors). Both terms are sketched in code below.
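
As a refresher, here’s my C# stand-in for what the HLSL does (not the sample’s actual shader code), using XNA-style vector math :

// n = surface normal, l = direction to the light, v = direction to the viewer (all normalized)
static float PhongSpecular(Vector3 n, Vector3 l, Vector3 v, float power)
{
    // reflect the light direction about the normal : r = 2(n·l)n - l
    var r = Vector3.Normalize(2 * Vector3.Dot(n, l) * n - l);
    return (float) Math.Pow(Math.Max(Vector3.Dot(r, v), 0), power);
}

static float BlinnPhongSpecular(Vector3 n, Vector3 l, Vector3 v, float power)
{
    var h = Vector3.Normalize(l + v); // the halfway vector
    return (float) Math.Pow(Math.Max(Vector3.Dot(n, h), 0), power);
}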

[Figure : a directional light reflecting on a surface with a power value of 64]

According to a paper from Siggraph 2004 called Experimental Validation of Analytical BRDF Models, the halfway methods generate specular highlights with more realistic shapes than the Phong model. I realized that myself when working on an ocean rendering shader that had Phong specular reflection; it was impossible to get a long grazing highlight when the sun was setting.

Normal Mapping Artifacts

One of the things that made me do this whole analysis is that I was dissatisfied with the image quality of Phong and Blinn-Phong when used with a normal map, i.e. per-pixel lighting. I had huge block artifacts on my water surface, and on everything with a high enough specular power value and a bumpy surface. So I found out about a reformulation of the Blinn-Phong model by Richard F. Lyon, written in 1993 (!) for Apple. (trivia : Mr. Lyon also invented the optical mouse… how awesome is that!)

[Screenshot : Lyon vs. Blinn-Phong highlights]

This reformulation is interesting because it does not use the specular power literally as an exponent; it uses a distance metric and a much lower power value to produce results very similar to Blinn-Phong. Using a high specular power (32 or more) hurts floating-point accuracy, even in full-precision mode.

That said, I have seen hardware that does not have this problem. I am starting to think that it may be a driver issue, or something about mobile GPUs… In any case, the safe thing to do is to choose the model that never produces artifacts, right?

The Other Blinn Distributions

The two other models I implemented (Trowbridge-Reitz and Torrance-Sparrow) were “ported” from the MATLAB code in Lyon’s reformulation paper. I wanted to test them out to see if they had the same artifact problems, and how different they looked from the classic models.

[Screenshot : Trowbridge-Reitz specular highlights]

Trowbridge-Reitz is an interesting model because of how it looks. It’s slower than Blinn-Phong, but it has a distinct smoothness to it. The falloff of its specular highlights is softer than the other models… I’m not sure if it’s more accurate, but it looks pretty. Sadly, it has the same problems with normal mapping.

Torrance-Sparrow is visually identical to the Blinn-Phong model. It’s the same thing, but slower and more instruction-heavy; it does not even fit in SM2.0 with 3 lights. So I suggest you disregard it for realtime graphics.

Performance

I found that performance varies a lot depending on which technique you use, which shader model you support, how much your GPU is fillrate-limited instead of arithmetic-limited… So I’ll just say this : the Lyon model looks great, and it’s simple and fast enough to be worth considering. If you don’t experience the artifacts I describe, then the Blinn-Phong model is your best shot, but test Trowbridge-Reitz to see if it’s fast on your hardware.

It’s also worth mentioning that many things could be optimized by factorizing equations into small 1D or 2D textures (or perhaps a normalization cubemap), if your GPU loves pixels and hates instructions. But I don’t believe that the shader can be optimized that much by reorganizing code or removing useless statements. At least not without hurting visual quality.

Component System Updates

This sample contains a major breaking change to my component framework : the Service base class is gone. This makes Components able to “be” services (and publish many service interfaces), and allows this sample to have a much simpler class structure… no more state classes! The components just publish whatever data they want via their service interfaces. And with the new Eventful<T> class (sketched below), it’s really easy to propagate changes from a controller to a view.
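
I haven’t detailed Eventful<T> anywhere yet, so here’s a minimal sketch of the idea; the real class that ships with the sample may differ :

// A value wrapper that notifies listeners (e.g. a view) whenever the value changes
public class Eventful<T>
{
    T value;
    public event Action<T> Changed;

    public T Value
    {
        get { return value; }
        set
        {
            if (Equals(this.value, value)) return;
            this.value = value;
            if (Changed != null) Changed(value);
        }
    }
}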

Post Processing/Fullscreen Texture Sampling

I’ve decided to repost all my remaining TV3D 6.5 samples to this blog (until I get bored). These are not new, but they were only downloadable from the TV3D forums until now!

This demo (originally released July 1st 2008 here) is a mixed bag of many techniques I wanted to demonstrate at once. It contains (all of which are described below) :

  • Pixel-perfect sampling of mainbuffer rendersurfaces for post-processing
  • A component/service system very similar to XNA’s and with support for IoC service injection
  • A minimally invasive, memory-conserving workflow for post-processing shaders
  • Many realtime surface downsampling methods (blit, blit 2x, tent, sinc+kaiser and bicubic)
  • An optimized Gaussian blur shader with up to 17 effective taps in a single pass

Download

FullscreenShaders_(final).zip [8.6 Mb] – C#3 (VS.NET 2008, TV3D 6.5 Prerelease .NET DLL Required)

Screenshots

[Screenshots : four captures of the demo]

Dissection

Pixel-Perfect Sampling

There is a well-known oddity in DirectX (I’ve read that Direct3D 10 finally fixes it… amazing!) : when drawing a fullscreen textured quad, the texture coordinates need to be shifted by a half-texel. That’s why, if you use hardware filtering (which is typically enabled by default), all your fullscreen quads are slightly blurred.
But the half-texel offset is related to the texel size of the texture you are sampling, as well as the viewport to which you are rendering! And it gets seriously weird when you’re downsampling (sampling a texture at a lower frequency than its resolution), or when you’re sampling different-sized textures in a single render pass…

So this demo addresses the simple cases; the basic half-texel shift is sketched below. I very recently found a more proper way to address this problem in a 2003 post from Simon Brown. I tested it and it works, but I don’t feel like updating the demo. ^_^
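
For the simplest case (texture and viewport the same size), the fix amounts to shifting the quad’s texture coordinates by half a texel. A sketch with a hypothetical helper, not code from the demo :

// D3D9-era quirk : shift UVs by half a texel so texel centers line up with pixel centers.
// Assumes the texture and the viewport have the same dimensions.
static void GetPixelPerfectUVs(int textureWidth, int textureHeight,
    out float uMin, out float vMin, out float uMax, out float vMax)
{
    float halfTexelU = 0.5f / textureWidth;
    float halfTexelV = 0.5f / textureHeight;
    uMin = halfTexelU;     vMin = halfTexelV;
    uMax = 1 + halfTexelU; vMax = 1 + halfTexelV;
}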

Component System

The component system was re-used in Trouble In Euclidea and Super HYPERCUBE, and I’m currently using it to prototype a culling system that uses hardware occlusion queries efficiently. The version bundled with this demo is slightly out of date, but very functional.
I blogged about the service injection idea a long time ago, if you want to read up on it.

Post-processing that just sits on top

This is how I think post-processing should be done : using the main buffer directly, without rendering to a rendersurface to begin with. This way, the natural rendering flow is not disturbed, and post-processing effects are just plugged in after the rendering is done.

If you can work directly with the main buffer (no downscaling beforehand), you can grab the main buffer onto a temporary rendersurface using BltFromMainBuffer() after all draw calls are performed, then call Draw_FullscreenQuadWithShader() using the temporary rendersurface as a texture. The post-processed result is output right onto the main buffer.
Any number of effects can be chained this way… blit, draw, blit, draw, as sketched below. Since the RS is only needed until the fullscreen draw call, you can re-use the same RS over and over again.
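
A sketch of a two-effect chain; I’m assuming a single-texture overload of Draw_FullscreenQuadWithShader here, and the shader names are made up :

// Post-processing chain on the main buffer, re-using one temporary rendersurface
tempRS.BltFromMainBuffer(); // grab the rendered frame
Screen2DImmediate.Draw_FullscreenQuadWithShader(firstShader, 0, 0, 1, 1, tempRS.GetTexture());

tempRS.BltFromMainBuffer(); // grab the result of the first effect
Screen2DImmediate.Draw_FullscreenQuadWithShader(secondShader, 0, 0, 1, 1, tempRS.GetTexture());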

If you want to scale the main buffer down before applying your effect (e.g. for performance reasons, or to widen the effect of a blur), then you’ll need to work “one frame late”. I described how this works in this TV3D forum post.

Downsampling Techniques

The techniques I implemented for downsampling are the following :

  • Blit : Simply uses BltFromRenderSurface onto a half- or quarter-sized rendersurface. A single-shot 4x downscale causes sampling issues because it ignores half of the source texels… but it’s fast!
  • Double blit : 4x downscaling via two successive blits, each receiving surface being half as big as the source. It has fewer artifacts and is still reasonably fast.
  • Bicubic : A more detail-conserving two-pass filter for downsampling that uses 4 linear taps for 2x downsampling, or 8 linear taps for 4x. Works great in 2x, but I’m not sure if the result is accurate in 4x. (reference document; parts of it, like the bits about interpolation/upsampling, are BS, but it worked to an extent)
  • Tent/Triangular/Bilinear 4x : I wasn’t sure of its exact name because it’s shaped in 2D like a tent, in 1D like a triangle, and it’s exactly the same as bilinear filtering… It’s accurate but detail-murdering 4x downsampling. Theoretically, it should produce the same result as a “double blit”, but the tent is a lot more stable, which shows that BltFromRenderSurface has sampling problems.
  • Sinc with Kaiser window : A silly and time-consuming experiment that pretty much failed. I found out about this filter in a technical column by Jonathan Blow (of Braid fame), which mentioned that it is the perfect low-pass filter and should be the most detail-conserving downsampling filter. There are very convincing experimental results in part 2 as well, so I gave it a shot. I get a lot of rippling artifacts and it’s way too slow for realtime, but it’s been fun to try out. (reference document)

Fast Gaussian Blur + Metrics

The gaussian blur shader (and its accompanying classes) in this demo is an implementation of the stuff I blogged about some months ago : the link between “lost light” in the weights calculation and how similar to a box filter the kernel becomes. You can change the kernel size dynamically and it’ll tell you how box-similar it is. The calculation for this box-similarity factor is still very arbitrary and you should take it with a grain of salt… but it’s a metric, an indicator.

But there’s something else : hardware filtering! I read about this technique in a GLSL Bloom tutorial by Philip Rideout, and it allows up to 17 effective horizontal and vertical samples in a ps_2_0 (SM2-compatible) pixel shader… resulting in 289 effective samples and a very wide blur! It speeds up 7-tap and 9-tap filters nicely too, by reducing the number of actual samples and instructions.
Philip’s tutorial contains all the details, but the idea is to sample in-between taps using interpolated weights, achieving the same visual effect even when the sample count is halved (the weight/offset math is sketched below). Very ingenious!
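
Concretely, two adjacent discrete taps collapse into a single linearly-filtered tap. A sketch of the math, not code from the demo :

// Two discrete Gaussian taps at texel offsets o1 and o2 = o1 + 1, with weights w1 and w2,
// become one bilinear tap : sample once at the returned offset, scale by the returned weight.
static void MergeTaps(float o1, float w1, float o2, float w2,
    out float mergedOffset, out float mergedWeight)
{
    mergedWeight = w1 + w2;
    mergedOffset = (o1 * w1 + o2 * w2) / mergedWeight;
}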


If you have more questions, I’ll be glad to answer them in comments.
Otherwise I’ll direct you to the original TV3D forum post for info on its development… there’s a link to a simpler earlier version of the demo there, too.