Squaring The Thumbsticks

Update (Oct. 26th) : Better interpolation = better shaping, and the simpler “inscribed square” method.
Update (Nov. 4th) : Exact and customizable solution!

The input values of thumbsticks on an Xbox 360 controller are always contained inside the unit circle, a disk centered on the origin and whose radius is 1. This is usually fine, because you want to interpret the values as a vector inside that circle… but if you want to provide an analog version of a D-Pad, for instance to control a character in a 2D platformer, then you’ll find that your character will never reach maximum horizontal speed unless you hit the (1, 0) or (-1, 0) vectors, exactly right or left. Anything tilted up or down will give a fractional speed, and if you hit a diagonal corner, you’ll end up with ±√½.

So what if the analog stick was a square area instead, such that the entire circumference has at least one component maxed out, like the contour of a square?
It may seem like a simple thing to do, but it proved to be a lot of trouble, and at first I settled for an approximation (an exact solution worked out in the end!)… Here’s how I researched the problem.

Attempt #1 : Wrong way around

I first googled “Mapping circle to square”, and ended up on a blog entry that describes the inverse transform. Fine, I’ll just invert it and isolate the x and y variables as functions of x’ and y’… But that proved to be a problem.

Wolfram|Alpha doesn’t know what to do with it, and after making it more manageable (for x, for y), it gives 4 different answers that seem to work only for specific intervals, but it doesn’t say which…
I tried doing it by hand and was stuck early on, because I suck at this.

And since I don’t have access to (nor know how to use) proper nonlinear equation solvers, I decided to give up.

Attempt #2 : Projections

By googling some more and varying search terms, I stumbled upon a cool Flickr image titled “Conformal Transformation: from Circle to Square”. That’s exactly what I need!
But alas, it involves something called elliptic integrals, and this goes beyond my math understanding, or the amount of effort I was willing to put into this. The description also links to the Peirce quincuncial projection, which maps a sphere to a square, but that math also goes over my head (complex number arithmetic and, again, elliptic integrals).

At this point I realized that what I was trying to do was not trivial.

Attempt #3 : Intuition serves best

So I just went back to what I usually do : analyze the problem, determine what I expect as a basic solution, and work my way there.

  • Problem : At full tilt, the horizontal component can be as low as ±√½, or ±~0.707, which happens when the θ angle is π/4, 3π/4, 5π/4 or 7π/4.
  • Solution : Scale this to ±1, so by a factor of √2.

The scale factor should be √2 for angles close to the diagonals and fall back down to identity (1) at the cardinal directions, though I’m not sure about the interpolation curve… Linear should do fine?

(code removed because it’s kinda shitty)
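
If you’re curious, here’s a minimal sketch of what that approach looks like (not the original, removed code) : the scale factor is interpolated linearly from 1 at the cardinals up to √2 at the diagonals, and the result is clamped to the unit square.

static Vector2 ScaleToSquareApprox(Vector2 point)
{
	if (point == Vector2.Zero) return point;

	// Angular distance from the nearest diagonal, normalized to [0, 1]
	var quadrantAngle = Math.Abs(Math.Atan2(point.Y, point.X)) % MathHelper.PiOver2;
	var fromDiagonal = Math.Abs(quadrantAngle - MathHelper.PiOver4) / MathHelper.PiOver4;

	// Lerp the scale between sqrt(2) at the diagonal and 1 at the cardinal
	var scale = MathHelper.Lerp((float)Math.Sqrt(2), 1, (float)fromDiagonal);

	var scaled = point * scale;
	// Linear interpolation overshoots the square in between, so clamp
	return new Vector2(MathHelper.Clamp(scaled.X, -1, 1), MathHelper.Clamp(scaled.Y, -1, 1));
}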

And for additional fun and testing, here’s how evenly-spaced random points inside the unit disk stretch to the unit square using this function (points generated by my Uniform Poisson-Disk code) :

[Image: Linear interpolation]

Not bad? Good enough for me.

Attempt #4 : Interpolation and variations

I then tried different interpolation/easing methods for the scaling factor, and tried raigan’s idea of just linearly scaling and clamping the circle to a square. Here’s what it looks like!

[Image: Quadratic interpolation]

[Image: Inscribed square]

I tried sine, quadratic and circular ease in/out/in-out from my easing functions library, and quadratic ease-in was the most balanced. It shapes almost everything to a square and very little clamping is necessary, so I’ll keep that one in.

The inscribed square method is nice and simple, but cuts off a lot of values and keeps circular gradients in its area. But in the real world, for 2D platformer input, it’s probably just fine…

Attempt #5 : Epic win

So, I decided to give this another shot after speaking on the bus with a friend about polar coordinates and trigonometry. Turns out there is a more exact solution, and by adding an “inner roundness” parameter you can also tweak how much of the center input will remain circular, while still reaching the sides for inputs on the circumference.

Here is my final C# function, and then I’ll explain how it works :

static Vector2 CircleToSquare(Vector2 point)
{
	return CircleToSquare(point, 0);
}
static Vector2 CircleToSquare(Vector2 point, double innerRoundness)
{
	const float PiOverFour = MathHelper.Pi / 4;

	// Determine the theta angle, shifted from [-pi, pi] into [0, 2*pi]
	// (this mirrors the walls below, but the scale factor is symmetric, so the result is identical)
	var angle = Math.Atan2(point.Y, point.X) + MathHelper.Pi;

	Vector2 squared;
	// Scale according to which wall we're clamping to
	// X+ wall
	if (angle <= PiOverFour || angle > 7 * PiOverFour)
		squared = point * (float)(1 / Math.Cos(angle));
	// Y+ wall
	else if (angle > PiOverFour && angle <= 3 * PiOverFour)
		squared = point * (float)(1 / Math.Sin(angle));
	// X- wall
	else if (angle > 3 * PiOverFour && angle <= 5 * PiOverFour)
		squared = point * (float)(-1 / Math.Cos(angle));
	// Y- wall
	else if (angle > 5 * PiOverFour && angle <= 7 * PiOverFour)
		squared = point * (float)(-1 / Math.Sin(angle));
	else throw new InvalidOperationException("Invalid angle...?");

	// Early-out for a perfect square output
	if (innerRoundness == 0)
		return squared;

	// Find the inner-roundness scaling factor and LERP
	var length = point.Length();
	var factor = (float) Math.Pow(length, innerRoundness);
	return Vector2.Lerp(point, squared, factor);
}
[Image: Inner roundness = 0]

[Image: Inner roundness = 1]

[Image: Inner roundness = 5]

There are two concepts here :

  1. The scaling factor can be calculated for every theta angle without any need for interpolation. It happens to be the inverse of the sine or cosine of the angle, depending on which wall you’re trying to clamp to. This gives a perfect square all around.
  2. By interpolating relatively to the length of the input vector, it’s possible to preserve the circular shape in the center and still clamp to the sides for extreme inputs.
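
As a quick sanity check, a full-tilt diagonal input stretches out to the corner of the square, while cardinal directions pass through untouched :

var diagonal = new Vector2((float)Math.Sqrt(0.5), (float)Math.Sqrt(0.5)); // on the unit circle
var corner = CircleToSquare(diagonal);          // ≈ (1, 1) : scaled by sqrt(2)
var right = CircleToSquare(new Vector2(1, 0));  // stays (1, 0) : scale factor is 1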

I think I’m done with this now! Yay.

A Render/SamplerState Stack and Manager for XNA

I’ve been meaning to make something like this for a long time, and I know that some engines like Xen offer a viable solution already, but I wanted to make my KISS solution that interferes with the standard XNA interface as little as possible.

It’s a pretty lengthy snippet of code, so I won’t paste it all here, but basically there are three classes : StateStack, ManagedRenderState and ManagedSamplerState.

Description

StateStack is a static extension class to the GraphicsDevice. It could be just another static class, or even a global XNA service, but since it’s going to be used in combination with the GraphicsDevice most of the time, I thought it was logical to use extensions.
You can push, pop and peek the current state; these operations take and return ManagedRenderState objects. It’s very similar to how OpenGL works with the attribute stack (glPushAttrib), except that you don’t choose what you push : the entire state gets pushed/popped every time. I didn’t think isolating states in categories was useful or natural in XNA.
The ResetState and CommitState methods are shorthands to the same methods in ManagedRenderState.

ManagedRenderState is like a wrapper over a RenderState object, such that it has the same interface : a read/write property for each render state and access to the SamplerState collection.
The difference is that it tracks changes to render states as you make them, marks properties as dirty and applies only what’s needed when you decide to Commit. The Reset method refreshes the object with the current state of the GraphicsDevice; this needs to be done at the beginning of every draw call, or every time you think the state has been tampered with.
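
Nothing fancy is going on there; here’s a minimal sketch of the dirty-checking pattern (with hypothetical field names, the real class repeats this for every state) :

bool depthBufferEnable;
bool depthBufferEnableDirty;

public bool DepthBufferEnable
{
    get { return depthBufferEnable; }
    set
    {
        if (depthBufferEnable == value) return; // redundant assignment, ignore it
        depthBufferEnable = value;
        depthBufferEnableDirty = true;          // remember it for the next Commit
    }
}

public void Commit(GraphicsDevice device)
{
    if (depthBufferEnableDirty)
    {
        device.RenderState.DepthBufferEnable = depthBufferEnable;
        depthBufferEnableDirty = false;
    }
    // ...and so on for every other tracked state
}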

ManagedSamplerState does the same thing but for sampler states : addressing, filtering, etc. You don’t call Reset or Commit on it directly, it gets done automatically by its host ManagedRenderState.

I added blending mode and texture filtering helper methods to both State classes, because I end up using that a lot… and with proper state management, you don’t have to worry about redundant assignments! So you can be permissive and set everything you need, every time.

Usage

So what does it look like to use it?
The Game class needs to do some initialization and per-draw-call stuff first…

protected override void Initialize()
{
    GraphicsDevice.PushState();

    // Other stuff
    base.Initialize();
}

protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.ResetState();
    SetDefaultRenderStates();

    GraphicsDevice.Clear(Color.CornflowerBlue);
    base.Draw(gameTime);
}

void SetDefaultRenderStates()
{
    var rs = GraphicsDevice.PeekState();

    rs.DepthBufferEnable = true;
    rs.SetBlendingMode(BlendingMode.Alphablending);

    rs.SamplerStates[0].SetFiltering(FilteringMode.Anisotropic);
    rs.SamplerStates[0].AddressU = rs.SamplerStates[0].AddressV = TextureAddressMode.Wrap;

    GraphicsDevice.CommitState();
}

Then to use the state stack, you don’t ever touch the GraphicsDevice.RenderState object : always use the Managed equivalent.

// Push a copy of the state on the stack
var rs = GraphicsDevice.PushState();

// Always-on-top, and don't disturb the depth buffer, and no culling
rs.DepthBufferFunction = CompareFunction.Always;
rs.DepthBufferWriteEnable = false;
rs.CullMode = CullMode.None;

// Commit changes
GraphicsDevice.CommitState();

// Draw some stuff
Draw();

// Pop back to the last state
GraphicsDevice.PopState();
// We are now back to the last state

Download & Conclusion

I think it makes a lot of sense to stack states in a component system, because each component can push its own local state and work with it, and just dispose of it afterwards. Combine this with dirty-checking to remove almost all redundant state changes, and you’ve got a very usable system!

The next step is batching draw calls to keep calls that use the same RenderStates together… in my system, this is left to the user’s discretion. Kevin Gadd presented a way of doing this (and many other things including threaded rendering) on his blog.

Another rather important note : ManagedRenderState contains a MaxSamplers constant that you can tweak depending on the number of sampler states you know you’ll use. Leaving it at 16 will make update/reset/refresh operations kind of slow… not sure yet if it’s noticeable.

And because of the immense amount of copy-paste that’s been needed to produce these classes, I can’t promise that it doesn’t have a typo or two… I haven’t tested every single state. But up to now, it works great, and I’ll update it if needed.

StateStack.cs (2 kb – C# class)
ManagedRenderState.cs (30 kb – C# class)
ManagedSamplerState.cs (7 kb – C# class)

Bonuses in the code files : a method to unpack a packed color from uint to Color, and a simplistic object Pool implementation (to eliminate garbage when stacking up states).
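
The color unpacking boils down to something like this (a sketch that assumes a packed 0xAARRGGBB layout; see the code file for the exact variant) :

static Color UnpackColor(uint packed)
{
    return new Color(
        (byte)(packed >> 16),  // red
        (byte)(packed >> 8),   // green
        (byte)packed,          // blue
        (byte)(packed >> 24)); // alpha
}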

Hope you find it useful!

WaitUntil Component

Here’s a little something that I hope to use increasingly in the future : elements of functional programming to facilitate modification of state over time or across game loops, without using threads all over the place. It’s nothing new, and there are other solutions (like Nick Gravelyn’s Interpolators and Timers), but I tried to make it as concise and generic as I could.

Here it is, more comments after the listing.

using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.GamerServices;

namespace Foo
{
    public class WaitUntil : IGameComponent, IUpdateable
    {
        public static Game Game { private get; set; }

        public static void GuideDisappears(Action onValid)
        {
            Game.Components.Add(new WaitUntil(_ => !Guide.IsVisible, onValid));
        }

        public static void TimePassed(float secondsToWait, Action onValid)
        {
            Game.Components.Add(new WaitUntil(waited => (waited as TimeKeeper).Elapsed.TotalSeconds > secondsToWait,
                                              (elapsed, waited) => (waited as TimeKeeper).Elapsed += elapsed,
                                              onValid, new TimeKeeper()));
        }
        class TimeKeeper { public TimeSpan Elapsed; }

        readonly Func<object, bool> condition;
        readonly Action<TimeSpan, object> whileWaiting;
        readonly Action onValid;
        readonly object state;

        WaitUntil(Func<object, bool> condition, Action onValid) : this(condition, ActionHelper.NullAction, onValid) { }
        WaitUntil(Func<object, bool> condition, Action<TimeSpan, object> whileWaiting, Action onValid) : this(condition, whileWaiting, onValid, null) { }
        WaitUntil(Func<object, bool> condition, Action<TimeSpan, object> whileWaiting, Action onValid, object state) 
        {
            this.condition = condition;
            this.whileWaiting = whileWaiting;
            this.onValid = onValid;
            this.state = state;
        }

        public void Update(GameTime gameTime)
        {
            if (condition(state))
            {
                onValid();
                Game.Components.Remove(this);
            }
            else
                whileWaiting(gameTime.ElapsedGameTime, state);
        }

        #region Stuff we don't care about
        public void Initialize() { }
        public bool Enabled { get { return true; } }
        public event EventHandler EnabledChanged;
        public event EventHandler UpdateOrderChanged;
        public int UpdateOrder { get { return 0; } }
        #endregion
    }

    public static class ActionHelper
    {
        public static void NullAction<T, U>(T t, U u) { }
    }
}

I ended up using basically a GameComponent, which means I need access to a Game instance to add it and remove it from the component collection. I decided to use a static field (that you assign in the Game’s constructor) to avoid passing it every time. It’s very unlikely that the Game instance will change or be destroyed… and I already static-ified it in my ServiceHelper earlier anyway.

I also wrote a couple (okay, two) static factory methods that are slightly fluent-interface-ey.

// (from the context of your Game class implementation)
{
    // Say "OK" when the guide stops being visible, like this?
    Components.Add(new WaitUntil(_ => !Guide.IsVisible, () => Console.WriteLine("OK!")));
    // ...or like this!
    WaitUntil.GuideDisappears(() => Console.WriteLine("OK!"));

    // And while we're at it...
    WaitForTwoSeconds();
}
void WaitForTwoSeconds()
{
    Console.WriteLine("Will wait for two seconds...");
    // ...recursive timers!
    WaitUntil.TimePassed(2, WaitForTwoSeconds);
}

I’ll probably add new factory methods as the needs arise, and make the class overall more useful, but I feel like it’s a good start.
I used the “GuideDisappears” method when a gamer signs out and I want to show a warning message using the Xbox Guide before going back to a sign-up screen… but since signing out is usually performed from the Guide itself, you have to wait for it to close before doing anything. This seemed like the simplest solution, and it works great.
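
New factories stay short thanks to the generic constructors. For instance, a hypothetical general-purpose one for parameterless conditions (not in the listing above) would look like this :

public static void ConditionMet(Func<bool> condition, Action onValid)
{
    Game.Components.Add(new WaitUntil(_ => condition(), onValid));
}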

Anaglyph Stereoscopic Rendering in First Person

I decided to finally finish up my anaglyph stereoscopy sample, and cut the depth-of-field component that was adding too much complexity for my limited spare time right now.


Download

Binaries : stereoscopy_bin.zip (1.9 Mb – Binaries)
Code : stereoscopy_src.zip (524 Kb – Source Only, get DLLs for TV3D and SlimDX from the Binaries)

Description

This sample demonstrates a couple of things :

  • Optimized anaglyph filters for red/cyan stereoscopic rendering, to reduce eye strain by minimizing retinal rivalry while still keeping some color information. As my source suggests, I’ve also implemented red channel gamma correction in the shader.
  • Auto-focus of both eyes on a focal plane. The camera is in the first person and the two “virtual cameras” act like human eyes, in the sense that they are connected to a single brain that wants to look at a single point. So the center of the screen is assumed to be that focal point, and both eyes will look at it. I used a depth rendering pass to achieve this, and a weighted sum of a certain portion of the screen near the center (this is all tweakable in realtime).

The distance between the eyes is also tweakable, if you want to give yourself a headache.
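
The eye setup essentially boils down to this (a sketch in XNA-style math rather than the sample’s actual code; the names are mine) :

static void MakeEyeViews(Vector3 head, Vector3 forward, Vector3 up,
                         float eyeDistance, float focalDistance,
                         out Matrix leftView, out Matrix rightView)
{
    // Both eyes converge on the same focal point ahead of the head
    var focalPoint = head + forward * focalDistance;
    var right = Vector3.Normalize(Vector3.Cross(forward, up));

    var leftEye = head - right * (eyeDistance / 2);
    var rightEye = head + right * (eyeDistance / 2);

    leftView = Matrix.CreateLookAt(leftEye, focalPoint, up);
    rightView = Matrix.CreateLookAt(rightEye, focalPoint, up);
}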

I wanted to do a stereoscopy sample to show how I did it in Super HYPERCUBE, but I can’t/don’t want to release its source, so I just re-did it properly. Auto-focus is a bonus feature that I wanted to play with; sHC didn’t need that since the focal plane was always the backing wall.

This sample, like all my recent ones, uses the latest version of my components framework. There are some differences between this release and the Stencil Rendering one, but it’s mostly the same. The biggest thing is that base components (Keyboard, Sound, etc.) are not auto-loaded anymore, and must be added in the Core.Initialize() implementation. This way, if I don’t need the sound engine, I just don’t load it… makes more sense.

Hope you like it!

Stencil Rendering

Here’s a little demo to show off a technique that Farbs posted about earlier this week.


Download

Binaries : stencilrendering_bin.zip (3.6 Mb – Binaries)
Code : stencilrendering_src.zip (1.8 Mb – Source Only, get DLLs for TV3D and SlimDX from the Binaries)

Description

Every frame, a random color from the target image is sampled. This color will be used as a stencil, such that every pixel whose target color is close enough to that stencil’s color will be painted. It’s a constructive painting process; every frame paints a single color, but if you wait long enough in a single spot you’ll end up with the target image.

Actually, the post processing draw code is so concise that I can post it here :

public override void PostDraw()
{
    targetBuffer.BltFromMainBuffer();
    targetBuffer.SetSystemMemCopy(true, true); // Lolworkaround

    // 1) Pick a colour from target image
    var pickedColor = Globals.DecodeRGBA(TextureFactory.GetPixel(
        targetBuffer.GetTexture(),
        RandomHelper.Random.Next(0, targetBuffer.GetWidth()),
        RandomHelper.Random.Next(0, targetBuffer.GetHeight())));
    stencilShader.SetEffectParamVector3("stencilColor", new TV_3DVECTOR(pickedColor.r, pickedColor.g, pickedColor.b));
    Screen2DImmediate.Draw_FullscreenQuadWithShader(stencilShader, 0, 0, 1, 1, mainBuffer.GetTexture(), targetBuffer.GetTexture());
    mainBuffer.BltFromMainBuffer();
}

“targetBuffer” and “mainBuffer” are just two TVRenderSurfaces as big as the viewport. Since I sample from targetBuffer, it needs to be flagged with “system memory copy”. I thought this would slow things down, but it runs at very interactive framerates (60 and more).

And uh… The “lolworkaround” is a bug in the current build of TV3D. Usually you only need to set this once at initialization time, but a BltFromMainBuffer does not flag the surface as dirty and it prevents updates to the pixels that I sample. Resetting the memory copy mode makes the changes effective. Sylvain tells me it’s fixed in the current development build. :)

A pixel shader does the rest :

float4 PS(PS_INPUT IN) : COLOR 
{
	float3 bufferSample = tex2D(MainBufferSampler, IN.TexCoord).rgb;
	float3 targetSample = tex2D(TargetSampler, IN.TexCoord).rgb;

	// 2) Calc difference between current screen and target (per channel subtraction, abs, then accumulate all three into one)
	float sampleDiff = distance(bufferSample, targetSample);

	// 3) As above, but between colour picked in step 1 and target image
	float stencilDiff = distance(stencilColor, targetSample);

	// 4) Where result of step 2 > result of step 3, draw colour picked in step 1
	float4 color;
	if (stencilDiff < sampleDiff)
		color = float4(stencilColor, 1);
	else
		color = float4(bufferSample, 1);

	return color;
}

I think it’s a lovely effect. It’s very dependent on how colorful and contrasted the scene is, and it works differently for sharply-defined shapes or gradients… And of course camera movement is a big factor. In the video it gets confusing; the effect is more “painterly” if you just rotate the camera in small circles and wait for the effect to accumulate.

Stimergy

Stimergy is a game that I made with Heather Kelley of Kokoromi for the Bivouac Urbain gamejam/competition last weekend in Québec City. Our team name was EMERGENCY HAMMER… don’t ask?

The point of the jam was to make a game in 36 hours, spread over 3 days. And so we did.


Download

stimergy_final.zip (2.1 Mb – Binaries)
stimergy_src.zip (471 Kb – Source Only, get DLLs for TV3D, SlimDX and IrrKlang from the Binaries)

I thought it was so cool that Petri used Chronolapse to film the making of his game Post I.T. Shooter, so I used the same thing for Stimergy.

Details

Heather did most of the game design, defined the graphics style and did the sound effects. I did the programming and some game design.

The game was made from scratch in C# 3.0 using the Truevision3D engine with no prior design, graphics or sound work. All the graphics in the game are procedural, and the gameplay itself is based on AI rules, basically a cellular automaton plus the notion of “stigmergy” from the insect world. (We dropped the G in the game title.)

I used a more recent version of my components system, which now has a sound interface via IrrKlang. It’s the same system I used for Super HYPERCUBE and Trouble in Euclidea; I’m growing quite fond of it.

Goals

  • Guide the ants using suggestive pheromone trails towards or away from the picnic blanket, or killer antlions.

There are 4 levels. Each level has different goals, which are explained when it begins. Some have time limits. If you fail the objective, the level will restart until you get it.

Controls

Mouse Left Button : Attraction Pheromone
Mouse Right Button : Repulsion Pheromone

Escape key to restart the game from Level 1.

Requirements

This game uses the .NET Framework 3.5, so installing it is mandatory. I suggest you install the SP1 version just in case.
You will also need a bunch of DirectX DLLs that are provided by the DirectX Web Setup.

Because of the fancy blur effect, the game requires a Shader Model 2.0 compatible graphics card. This type of card is fairly commonplace now.

Post Mortem

This was my first real rapid game prototyping experience. The shortest project span I had seen for a game before was about 15 days… so this was something else. Here’s a couple of random thoughts about that :

  • I need a proper system for timers and interpolators, similar to what Nick Gravelyn posted, or maybe just steal that. It’s really annoying to have a dozen TimeSpan variables to keep track of what changes over time and how long the transition lasts.
  • I need to learn other languages than C#, and other frameworks than TV3D/XNA. I’m making big efforts in my “engines” to cut down the redundant code, but even then I feel like I’m programming much more than is needed to describe the game mechanics.
  • A big part of what makes a game actually fun is how much direct feedback you get from interacting with the gameworld, and how much you feel like you can control these interactions. Stimergy is a pretty slow game, almost an RTS (funny, because I hate this genre), and I’ve seen other competitors value the responsiveness of player input more than the complexity of game systems; it ended up being a lot more fun. Maybe it just fit the “jam” context better, too.
  • I need a system for game screens. Something that puts game components in a context, that gives them lifetime. I tend to make all the major components global and automatically-loaded, which is the easiest way, but it makes game state management pretty hard. And it’s dangerous for memory usage. There’s actually a good XNA example that I just need to look at in detail.
  • If I’m going to do more 2D games in TV3D, I need to build something that will load GIF animations and non-power-of-two textures. Material management for stuff that changes color or opacity was kind of a mess too. I may need to wrap it in something more concise. Switching engine would probably be the more logical choice.
  • I spent silly amounts of time tweaking graphical things that I ended up not using. This time would have been better spent balancing the difficulty level or adding more gameplay elements. I’m used to making demos look pretty… I have to remind myself that I’m making a game here.

I may sound like I’m complaining about everything, but I’m actually really happy about how the game turned out. It’s fairly fun/challenging and it looks pretty good. I was wondering whether I’d release the code because it’s kind of a mess… but in the end, the source is available up in the Download section!

Fast Uniform Poisson-Disk Sampling in C#

Updated July 26th : Added a circle sampling method.

This appears to be tool-class weekend! Here’s another that I just finished up.

I’ve been wanting uniform poisson-disk distribution generation code for ages, ever since I started working with shadow map filtering in October 2006. I had difficulty deciphering the whitepapers that popped up in the first results of Google at the time, so I just gave up for a little bit until… well… someone did a simple implementation for me to grab.

Thankfully, it happened!

The fine gentlemen at Luma labs (Herman Tulleken is credited for the class I converted) released a Java, Python and Ruby implementation of uniform and non-uniform poisson-disk sampling. It’s based on an oddly straightforward whitepaper by Robert Bridson at the UBC (in Canada!).

It took me under an hour to make it work in C#. This will prove very useful in my PCF filtering code, and I will be using it immediately in a Depth-of-Field shader I’m working on.

[Images: Poisson-disk sample distributions in rectangular and circular domains]

Download

UniformPoissonDiskSampler.cs (4 kb – C# class)

This one uses Vector2’s from the SlimDX namespace, but it would be easy to just replace them with XNA Vector2’s or TV3D TV_2DVECTOR’s.

The original Java code used a non-static class; I decided to make it static and use structures for settings and state. This will reduce garbage if it’s used repeatedly, and since I pass the structures by reference, it’s just as fast as instance variables. I feel it’s a bit cleaner too.

The pointsPerIteration parameter defaulted to 30 in the original code; I kept that default and it works well. You can reduce it for potentially less dense distributions but much faster generation, or raise it to make sure the whole space is filled.

Again, I won’t post a sample because the class usage is as simple as can be : it returns a List<Vector2> in the specified domain.
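
For reference, usage looks something like this (hypothetical parameter values, and the exact signatures may differ slightly) :

// Fill a 1x1 rectangle, keeping a minimum distance of 0.05 between points
List<Vector2> points = UniformPoissonDiskSampler.SampleRectangle(
    new Vector2(0, 0), new Vector2(1, 1), 0.05f);

// ...or the circle variant from the July 26th update
List<Vector2> circlePoints = UniformPoissonDiskSampler.SampleCircle(
    Vector2.Zero, 1, 0.05f);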

Enjoy!

Flash-style Tween/Easing Functions in C#

Easing functions make any movement look pretty. So you’re going to need them at some point.

And implementations are so widely available that you don’t have an excuse not to use them. I found this ActionScript reference to be very helpful in creating my own C# easing functions library, which I’d like to share… so here it is!

[Image: easing curves. First line is EaseIn, second is EaseOut, third is EaseInOut]

Download

Easing.cs (4 kb – C# class)

It’s a static class and it’s framework-agnostic, completely standalone. The screenshot above is from my TV3D test class. I won’t post the whole sample, but here’s the code if you want to see how I used it. In order to run, it would need a version of my components framework that I haven’t released yet.

I also didn’t implement each and every function that Robert Penner presents, just the ones I figured I’d use. Bounce and Elastic sound like physics to me; I don’t think I’d use easing functions to achieve that.
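
To give an idea of the shapes involved, the quadratic variants look like this (a sketch with normalized time; the signatures in Easing.cs may differ) :

// t is the normalized progression of the transition, in [0, 1]
static float QuadEaseIn(float t) { return t * t; }
static float QuadEaseOut(float t) { return t * (2 - t); }
static float QuadEaseInOut(float t)
{
    return t < 0.5f ? 2 * t * t : 1 - 2 * (1 - t) * (1 - t);
}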

Full Xbox 360 Gamepad Support In C#, Without XNA

I think that XNA’s implementation of the XInput API for Xbox 360 controllers is terrific. Simple, complete, ready to use. So when I wanted to add gamepad support (including analog input and vibration) to Super HYPERCUBE, a TV3D 6.5 game, I just added a reference to the XNA Framework assembly and carried on like I did in Fez. But I realize that referencing XNA just for controller support is silly, and I don’t want to assume that it’s installed on clients, nor force-install the redistributable.

So I recently looked for a more proper solution. It looks like MDX 2.0 Beta had support for XInput, but it’s been discontinued in favor of XNA for some time. There is also a handful of managed wrappers around the XInput native DLL that use P/Invokes… I wasn’t too keen on that either.

Then I recalled that SlimDX is the community-maintained successor to MDX, and that it’s awesome. And as a matter of fact, it has complete support for XInput! So might as well use that.

But SlimDX is, well, slim. It’s a very thin wrapper and doesn’t do any normalization or dead zone detection for thumbsticks. It’s not much of a bother to do yourself, but I figured I’d post my code as a reference for people who want to use SlimDX’s XInput implementation in the real world.

Details

My dead zone code is based around the “Getting Started With XInput” guide on MSDN, but given the C#3/SlimDX beautifying treatment. :)
I also found that analog triggers on my controllers did not need dead zones, but if you want to use them as binary buttons, you can check against the appropriate SlimDX constant.

My “state class” does not wrap everything that the SlimDX Controller class exposes, like voice support, battery information, etc. So I made the local Controller instance a public readonly member, and you can access it to query whatever other information you need. Or of course you can add the getters/state variables that you need.

Download

GamepadState.cs (C#3 – 5 Kb, SlimDX March 2009 SP1 SDK or later needed)

Just the dead zone code

If you’re only looking for that, here it is :

    var gamepadState = Controller.GetState().Gamepad;
    var leftStick = Normalize(gamepadState.LeftThumbX, gamepadState.LeftThumbY, Gamepad.GamepadLeftThumbDeadZone);
    var rightStick = Normalize(gamepadState.RightThumbX, gamepadState.RightThumbY, Gamepad.GamepadRightThumbDeadZone);

static Vector2 Normalize(short rawX, short rawY, short threshold)
{
    var value = new Vector2(rawX, rawY);
    var magnitude = value.Length();
    // Unit-length direction; avoid dividing by zero when the stick is centered
    var direction = value / (magnitude == 0 ? 1 : magnitude);

    // Remap the magnitude so that the dead zone maps to 0 and full tilt maps to 1
    var normalizedMagnitude = 0.0f;
    if (magnitude - threshold > 0)
        normalizedMagnitude = Math.Min((magnitude - threshold) / (short.MaxValue - threshold), 1);

    return direction * normalizedMagnitude;
}

Isotropic Specular Reflection Models Comparison

I’ve decided to repost all my remaining TV3D 6.5 samples to this blog (until I get bored). These are not new, but they were only downloadable from the TV3D forums until now!

This demo (originally released as VB.Net 2005 on February 13th, 2007 here) is a visual and performance comparison (and reference implementation) of five different per-pixel lighting models for isotropic specular reflections :

  • Phong reflection model
  • Blinn-Phong (Blinn D1, Phong) specular distribution
  • Lyon halfway method 1 (for k=2 and D = H* – L)
  • Trowbridge-Reitz (Blinn D3) specular distribution
  • Torrance-Sparrow (Blinn D2, Gaussian) specular distribution

My main goals were to :

  • Make an optimized HLSL implementation of each model that fits in a single Shader Model 2.0 pass and supports 3 lights
  • Evaluate the performance of each model in a multiple light, per-pixel rendering context
  • Determine which model keeps the most numerical precision and does not produce artifacts when used with normal-mapping

Download

IsotropicModels.zip [2.7 Mb] – C#3 (VS.NET 2008, TV3D 6.5 Prerelease .NET DLL Required)

Screenshots

[Screenshots of the Lyon model]

Details

The HLSL shader supplied with this sample was made to mimic the built-in TV3D offset-bumpmapping shader as closely as possible. As a result, almost all of its effect parameters are mapped to standard semantics. It supports :

  • One colored directional light
  • Two colored point lights in SM2.0, and four in more recent models (SM2a/b and SM3)
  • Support for all types of vertex fog
  • Parallax mapping of texture coordinates using a grayscale heightmap
  • Diffuse mapping with alpha support (a.k.a. texturing)
  • Normal mapping
  • Specular mapping (using the alpha channel of the normalmap)
  • Emissive mapping (colored!)
  • Usage of all material terms (diffuse, ambient, specular, emissive, power and opacity)

There is no support for point light attenuation as this would’ve gone over the 64-instruction limit of the ps_2_0 profile. (also, TV3D doesn’t provide semantics for these)
There is no support for spot lights for the same reasons, but I believe spots will be processed as regular point lights, ignoring the specific parameters.

Techniques

With the realtime controls, you can choose from three different techniques : FiveLightsBranching, FiveLights and ThreeLights. On SM2.0 hardware, only the third option will be valid.

The FiveLightsBranching mode uses loops and “if” statements to produce dynamic branching on SM3.0 compatible hardware. This can (but may not) be beneficial because only the calculations for enabled lights are performed.
The FiveLights and ThreeLights modes respectively do five and three lights (WHAT YOU SAY !!) but all in a static manner. It does not just unroll the loop! Most of the calculations are done with matrices, which makes it more efficient on most hardware.

To keep the shader “simple” (or to prevent it from becoming even more complex…) I decided not to implement a multipass 5 lights technique for SM2.0… sorry!

Blinn vs. Phong

There are two major categories in the models I tested : the ones that use the halfway vector, and the Phong model that works with the reflected vector (see the Wikipedia entry on Blinn-Phong for details on these vectors).

[Image: A directional light reflecting on a surface with a power value of 64]
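
In code terms, the difference boils down to which vectors get dotted together (a C# sketch of the math, not the sample’s HLSL; lightDir and viewDir both point away from the surface) :

// Phong : reflect the light direction around the normal, compare with the view direction
static float PhongSpecular(Vector3 normal, Vector3 lightDir, Vector3 viewDir, float power)
{
    var reflected = Vector3.Reflect(-lightDir, normal);
    return (float)Math.Pow(Math.Max(Vector3.Dot(reflected, viewDir), 0), power);
}

// Blinn-Phong : compare the halfway vector between light and view with the normal
static float BlinnPhongSpecular(Vector3 normal, Vector3 lightDir, Vector3 viewDir, float power)
{
    var halfway = Vector3.Normalize(lightDir + viewDir);
    return (float)Math.Pow(Math.Max(Vector3.Dot(normal, halfway), 0), power);
}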

According to a paper from Siggraph 2004 called Experimental Validation of Analytical BRDF Models, the halfway methods generate specular highlights with more realistic shapes than the Phong model. I realized that myself when working on an ocean rendering shader that had a Phong specular reflection, and it was impossible to get a long grazing highlight when the sun was setting.

Normal Mapping Artifacts

One of the things that made me do this whole analysis is that I was dissatisfied with the image quality of Phong and Blinn-Phong when used with a normal map, i.e. with per-pixel lighting. I had huge block artifacts on my water surface, and on everything with a high enough specular power value and a bumpy surface. So I found out about a reformulation of the Blinn-Phong model by Richard F. Lyon, written in 1993 (!) for Apple. (trivia : Mr. Lyon also invented the optical mouse… how awesome is that!)

[Image: Lyon reformulation vs. Blinn-Phong]

This reformulation is interesting because it does not use the specular power literally as an exponentiation; it uses a distance metric and a much lower power value to produce very similar results to the Blinn-Phong model. Using a high specular power (32 or more) hurts floating-point accuracy, even in full-precision mode.

That said, I have seen hardware that does not have this problem. I am starting to think that it may be a driver issue, or something about mobile GPUs… In any case, the safe thing to do is choose the model that never produces artifacts, right?

The Other Blinn Distributions

The two other models I implemented (Trowbridge-Reitz and Torrance-Sparrow) were “ported” from the MATLAB code in Lyon’s reformulation paper. I wanted to test them out to see if they had the same artifact problems, and how different they looked from the classic models.

[Image: Trowbridge-Reitz highlights]

Trowbridge-Reitz is an interesting model because of how it looks. It’s slower than Blinn-Phong, but it has a distinct smoothness to it. The falloff of its specular highlights is softer than the other models… I’m not sure if it’s more accurate, but it looks pretty. Sadly, it has the same problems with normal mapping.

Torrance-Sparrow is visually identical to the Blinn-Phong model. It’s the same thing, but slower and more instruction-heavy. It does not even fit in SM2.0 with 3 lights… so I suggest you disregard it for realtime graphics.

Performance

I found that performance varies a lot depending on which technique you use, which shader model you support, how much your GPU is fillrate-limited instead of arithmetic-limited… So I’ll just say this : the Lyon model looks great, and it’s simple and fast enough to be worth considering. If you don’t experience the artifacts I describe, then the Blinn-Phong model is your best shot, but test Trowbridge-Reitz to see if it’s fast on your hardware.

It’s also worth mentioning that many things could be optimized by factorizing equations into small 1D or 2D textures (or perhaps a normalization cubemap), if your GPU loves pixels and hates instructions. But I don’t believe that the shader can be optimized that much by reorganizing code or removing useless statements. At least not without hurting visual quality.

Component System Updates

This sample contains a major breaking change to my component framework : The Service baseclass is gone. This makes Components able to “be” services (and publish many service interfaces), and allows this sample to have a much simpler class structure… no more state classes! The components just publish whatever data they want via their service interfaces. And with the new Eventful<T> class, it’s really easy to propagate changes from a controller to a view.