Reflective Floor

Nov. 13th Update : Added fresnel term, min/max reflection factors, pixel-perfect sampling.

Back in TV3D land for a minute!

[Screenshots]

(the reflection looks offset in these pics but it’s fine in realtime… not sure why it messed up here.)

Description

It might sound like an easy thing to do, but planar reflections are pretty challenging to do per-pixel. It involves a texture projection shader, some pretty scary matrix play, and so on and so forth. So I decided to make a proper sample that shows how I did it in my reflective water sample, but in its simplest form.

The floor uses two mesh groups, one for the reflection and one for the actual floor with texture. This makes it possible to only use a shader on the reflection group, while the other one uses standard TV3D rendering.
Sadly, it’s impossible to do per-pixel reflections without a shader. Thinking about it, an even simpler way would be to use the stencil buffer, but working per-pixel lets you apply fun effects like bump-mapping and fresnel attenuation.

Update Notes

The updated version has per-pixel fresnel attenuation, which means that grazing angles get full reflection while looking straight down at the floor gets next to none. The amount of reflection and the effect of the fresnel term can be tweaked with constants (a sketch of the term follows the pass list below).
In order to make this thing work, I had to make it 3-pass :

  1. The first pass is the standard texture mapping one, without shader;
  2. The second pass multiplies the texture-mapped mesh by the complement of the fresnel term (1 – F), so that the reflection won’t be overpowering when we blend it in;
  3. The third and last pass additively-blends the reflection with fresnel applied.

The two last passes use the shader, but since both shader passes are enabled, I only need a single group for both. Nice and clean!
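
For reference, here’s a minimal sketch of what such a fresnel term can look like, using Schlick’s approximation; the constant names are mine, and the actual shader in the sample may differ :

// Hedged sketch of fresnel attenuation between tweakable bounds,
// using Schlick's approximation. Names are illustrative.
float minReflection; // reflection factor when looking straight down
float maxReflection; // reflection factor at grazing angles

float FresnelTerm(float3 normal, float3 viewDir)
{
    float facing = saturate(dot(normal, viewDir)); // 1 = top-down, 0 = grazing
    float fresnel = pow(1 - facing, 5);            // Schlick's approximation
    return lerp(minReflection, maxReflection, fresnel);
}

In the pass breakdown above, the second pass would modulate the floor by (1 – F) and the third would add the reflection multiplied by F.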

I also removed the texel offset thing because if you point-sample the texture (which you should, unless you’re bump-mapping), it’s pixel-perfect without any fix needed. And the blending modes and depth-write flags are moved to the shader, to make things even cleaner!

Download

Binaries + Code : PlaneReflection_src_bin.zip (8.7 Mb – C#3 VS2008 Solution)

Just the code : Projection.fx (the shader) and PlaneReflection.cs (the component that draws everything)

Enjoy!

Anaglyph Stereoscopic Rendering in First Person

I decided to finally finish up my anaglyph stereoscopy sample, and cut the depth-of-field component that was adding too much complexity for my limited spare time right now.

[Screenshots]

Download

Binaries : stereoscopy_bin.zip (1.9 Mb – Binaries)
Code : stereoscopy_src.zip (524 Kb – Source Only, get DLLs for TV3D and SlimDX from the Binaries)

Description

This sample demonstrates a couple of things :

  • Optimized anaglyph filters for red/cyan stereoscopic rendering, which reduce eye-strain by minimizing retinal rivalry while still keeping some color information. As my source suggests, I’ve also implemented red channel gamma correction in the shader (sketched after this list).
  • Auto-focus of both eyes on a focal plane. The camera is in the first person and the two “virtual cameras” act like human eyes, as if they were connected to a single brain that wants to look at a single point. So the center of the screen is assumed to be that focal point, and both eyes look at it. I used a depth rendering pass to achieve this, plus a weighted sum over a portion of the screen near the center (this is all tweakable in realtime).
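
As a rough sketch of what the anaglyph combination step can look like (the mixing weights and gamma value here are placeholders, not the actual optimized coefficients from my source) :

sampler LeftEyeSampler;
sampler RightEyeSampler;
float redGamma; // tweakable, e.g. around 1.5

float4 PS_Anaglyph(float2 uv : TEXCOORD0) : COLOR
{
    float3 left  = tex2D(LeftEyeSampler, uv).rgb;
    float3 right = tex2D(RightEyeSampler, uv).rgb;

    // The left eye drives the red channel, the right eye drives green/blue.
    // An "optimized" filter mixes some green/blue into the red channel to
    // keep color information while reducing retinal rivalry.
    float red = dot(left, float3(0.7, 0.3, 0.0)); // illustrative weights
    red = pow(red, 1.0 / redGamma); // red channel gamma correction
    return float4(red, right.g, right.b, 1);
}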

The distance between the eyes is also tweakable, if you want to give yourself a headache.

I wanted to do a stereoscopy sample to show how I did it in Super HYPERCUBE, but I can’t/don’t want to release its source, so I just re-did it properly. Auto-focus is a bonus feature that I wanted to play with; sHC didn’t need that since the focal plane was always the backing wall.

This sample, like all my recent ones, uses the latest version of my components framework. There are some differences between this release and the Stencil Rendering one, but it’s mostly the same. The biggest thing is that base components (Keyboard, Sound, etc.) are not auto-loaded anymore, and must be added in the Core.Initialize() implementation. This way, if I don’t need the sound engine, I just don’t load it… makes more sense.

Hope you like it!

Stencil Rendering

Here’s a little demo to show off a technique that Farbs posted about earlier this week.

[Screenshots]

Download

Binaries : stencilrendering_bin.zip (3.6 Mb – Binaries)
Code : stencilrendering_src.zip (1.8 Mb – Source Only, get DLLs for TV3D and SlimDX from the Binaries)

Description

Every frame, a random color from the target image is sampled. This color is used as a stencil, such that every pixel whose target color is close enough to the stencil’s color gets painted. It’s a constructive painting process; every frame paints a single color, but if you wait long enough in a single spot you’ll end up with the target image.

Actually, the post processing draw code is so concise that I can post it here :

public override void PostDraw()
{
    targetBuffer.BltFromMainBuffer();
    targetBuffer.SetSystemMemCopy(true, true); // Lolworkaround

    // 1) Pick a colour from target image
    var pickedColor = Globals.DecodeRGBA(TextureFactory.GetPixel(targetBuffer.GetTexture(), RandomHelper.Random.Next(0, targetBuffer.GetWidth()), RandomHelper.Random.Next(0, targetBuffer.GetHeight())));
    stencilShader.SetEffectParamVector3("stencilColor", new TV_3DVECTOR(pickedColor.r, pickedColor.g, pickedColor.b));
    Screen2DImmediate.Draw_FullscreenQuadWithShader(stencilShader, 0, 0, 1, 1, mainBuffer.GetTexture(), targetBuffer.GetTexture());
    mainBuffer.BltFromMainBuffer();
}

“targetBuffer” and “mainBuffer” are just two TVRenderSurfaces as big as the viewport. Since I sample from targetBuffer, it needs to be flagged with “system memory copy”. I thought this would slow things down, but it runs at very interactive framerates (60 and more).

And uh… The “lolworkaround” deals with a bug in the current build of TV3D. Usually you only need to set this once at initialization time, but BltFromMainBuffer does not flag the surface as dirty, which prevents updates to the pixels that I sample. Resetting the memory copy mode makes the changes effective. Sylvain tells me it’s fixed in the current development build. :)

A pixel shader does the rest :

float4 PS(PS_INPUT IN) : COLOR 
{
	float3 bufferSample = tex2D(MainBufferSampler, IN.TexCoord).rgb;
	float3 targetSample = tex2D(TargetSampler, IN.TexCoord).rgb;

	// 2) Calc difference between current screen and target (per channel subtraction, abs, then accumulate all three into one)
	float sampleDiff = distance(bufferSample, targetSample);

	// 3) As above, but between colour picked in step 1 and target image
	float stencilDiff = distance(stencilColor, targetSample);

	// 4) Where result of step 2 > result of step 3, draw colour picked in step 1
	float4 color;
	if (stencilDiff < sampleDiff)
		color = float4(stencilColor, 1);
	else
		color = float4(bufferSample, 1);

	return color;
}

I think it’s a lovely effect. It’s very dependent on how colorful and contrasted the scene is, and it works differently for sharply-defined shapes versus gradients… And of course camera movement is a big factor. In the video it gets confusing; the effect is more “painterly” if you just rotate the camera in small circles and wait for it to accumulate.

Stimergy

Stimergy is a game that I have made with Heather Kelley of Kokoromi for the Bivouac Urbain gamejam/competition last weekend in Québec city. Our team name was EMERGENCY HAMMER… don’t ask?

The point of the jam was to make a game in 3 days – 36 hours. And so we did.

[Screenshots]

Download

stimergy_final.zip (2.1 Mb – Binaries)
stimergy_src.zip (471 Kb – Source Only, get DLLs for TV3D, SlimDX and IrrKlang from the Binaries)

I thought it was so cool that Petri used Chronolapse to film the making of his game Post I.T. Shooter, so I used the same thing for Stimergy.

Details

Heather did most of the game design, defined the graphics style and did the sound effects. I did the programming and some game design.

The game was made from scratch in C# 3.0 using the Truevision3D engine with no prior design, graphics or sound work. All the graphics in the game are procedural, and the gameplay itself is based on AI rules, basically a cellular automaton plus the notion of “stigmergy” from the insect world. (We dropped the G in the game title.)

I used a more recent version of my components system, which now has a sound interface via IrrKlang. It’s the same system I used for Super HYPERCUBE and Trouble in Euclidea, and I’m growing quite fond of it.

Goals

  • Guide the ants with suggestive pheromone trails, towards or away from the picnic blanket or the killer antlions.

There are 4 levels. Each level has different goals, which are explained when it begins. Some have time limits. If you fail the objective, the level will restart until you get it.

Controls

Mouse Left Button : Attraction Pheromone
Mouse Right Button : Repulsion Pheromone

Escape key to restart the game from Level 1.

Requirements

This game uses the .NET Framework 3.5, so installing it is mandatory. I suggest you install the SP1 version just in case.
You will also need a bunch of DirectX DLLs that are provided by the DirectX Web Setup.

Because of the fancy blur effect, the game requires a Shader Model 2.0 compatible graphics card. This type of card is fairly commonplace now.


Post Mortem


This was my first real rapid game prototyping experience. The shortest project span I had seen for a game before was about 15 days… so this was something else. Here are a couple of random thoughts about that :

  • I need a proper system for timers and interpolators, similar to what Nick Gravelyn posted, or maybe just steal that. It’s really annoying to have a dozen TimeSpan variables to keep track of what changes over time and how long the transition lasts.
  • I need to learn other languages than C#, and other frameworks than TV3D/XNA. I’m making big efforts in my “engines” to cut down the redundant code, but even then I feel like I’m programming much more than is needed to describe the game mechanics.
  • A big part of what makes a game actually fun is how much direct feedback you get from interacting with the gameworld, and how much you feel in control of those interactions. Stimergy is a pretty slow game, almost an RTS (funny, because I hate that genre). I’ve seen other competitors value the responsiveness of player input over the complexity of game systems, and their games ended up being a lot more fun. Maybe it just fit the “jam” context better, too.
  • I need a system for game screens. Something that puts game components in a context, that gives them lifetime. I tend to make all the major components global and automatically-loaded, which is the easiest way, but it makes game state management pretty hard. And it’s dangerous for memory usage. There’s actually a good XNA example that I just need to look at in detail.
  • If I’m going to do more 2D games in TV3D, I need to build something that will load GIF animations and non-power-of-two textures. Material management for stuff that changes color or opacity was kind of a mess too; I may need to wrap it in something more concise. Switching engines would probably be the more logical choice.
  • I spent silly amounts of time tweaking graphical things that I ended up not using. This time would have been better spent balancing the difficulty level or adding more gameplay elements. I’m used to making demos look pretty… I have to remind myself that I’m making a game here.

I may sound like I’m complaining about everything, but I’m actually really happy with how the game turned out. It’s fairly fun/challenging and it looks pretty good. I was wondering whether I’d release the code because it’s kind of a mess, but the source is available up in the Download section!

Isotropic Specular Reflection Models Comparison

I’ve decided to repost all my remaining TV3D 6.5 samples to this blog (until I get bored). These are not new, but they were only downloadable from the TV3D forums until now!

This demo (originally released as VB.Net 2005 on February 13th, 2007 here) is a visual and performance comparison (and reference implementation) of five different per-pixel lighting models for isotropic specular reflections :

  • Phong reflection model
  • Blinn-Phong (Blinn D1, Phong) specular distribution
  • Lyon halfway method 1 (for k=2 and D = H* – L)
  • Trowbridge-Reitz (Blinn D3) specular distribution
  • Torrance-Sparrow (Blinn D2, Gaussian) specular distribution

My main goals were to :

  • Make an optimized HLSL implementation of each model that fits in a single Shader Model 2.0 pass and supports 3 lights
  • Evaluate the performance of each model in a multiple light, per-pixel rendering context
  • Determine which model keeps the most numerical precision and does not produce artifacts when used with normal-mapping

Download

IsotropicModels.zip [2.7 Mb] – C#3 (VS.NET 2008, TV3D 6.5 Prerelease .NET DLL Required)

Screenshots

[Screenshots]

Details

The HLSL shader supplied with this sample was made to mimic the built-in TV3D offset-bumpmapping shader as closely as possible. As a result, almost all of its effect parameters are mapped to standard semantics. It supports :

  • One colored directional light
  • Two colored point lights in SM2.0, and four in more recent models (SM2a/b and SM3)
  • All types of vertex fog
  • Parallax mapping of texture coordinates using a grayscale heightmap
  • Diffuse mapping with alpha support (a.k.a. texturing)
  • Normal mapping
  • Specular mapping (using the alpha channel of the normalmap)
  • Emissive mapping (colored!)
  • Usage of all material terms (diffuse, ambient, specular, emissive, power and opacity)

There is no support for point light attenuation, as this would’ve gone over the 64-instruction limit of the ps_2_0 profile (also, TV3D doesn’t provide semantics for attenuation parameters).
There is no support for spot lights for the same reason, but I believe spots will be processed as regular point lights, ignoring their spot-specific parameters.

Techniques

With the realtime controls, you can choose from three different techniques : FiveLightsBranching, FiveLights and ThreeLights. On SM2.0 hardware, only the third option will be valid.

The FiveLightsBranching mode uses loops and “if” statements to produce dynamic branching on SM3.0 compatible hardware. This can (but may not) be beneficial because only the calculations for enabled lights are performed.
The FiveLights and ThreeLights modes respectively do five and three lights (WHAT YOU SAY !!), but in a static manner. It’s not just an unrolled loop! Most of the calculations are done with matrices, which makes them more efficient on most hardware.
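
To illustrate the idea with a sketch (not the sample’s actual code) : three light directions can be packed as the rows of a matrix, so a single mul() yields all three N·L terms at once.

// Illustrative fragment : three diffuse terms from one matrix multiply.
float3x3 lightDirs; // one normalized light direction per row
float3 N;           // surface normal, normalized

float3 nDotL = saturate(mul(lightDirs, N)); // = (N.L1, N.L2, N.L3)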

To keep the shader “simple” (or to prevent it from becoming even more complex…), I decided not to implement a multipass 5-light technique for SM2.0… sorry!

Blinn vs. Phong

There are two major categories in the models I tested : the ones that use the halfway vector, and the Phong model, which works with the reflected vector (see the Wikipedia entry on Blinn-Phong for details on these vectors).

[Image : Blinn vs. Phong highlight comparison]
A directional light reflecting on a surface with a power value of 64

According to a paper from Siggraph 2004 called Experimental Validation of Analytical BRDF Models, the halfway methods generate specular highlights with more realistic shapes than the Phong model. I realized that myself when working on an ocean rendering shader that had a Phong specular reflection, and it was impossible to get a long grazing highlight when the sun was setting.
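
For reference, here are the two conventions side by side (N = surface normal, L = direction to the light, V = direction to the viewer, all normalized; n = specular power) :

float3 H = normalize(L + V);                   // halfway vector
float blinnSpec = pow(saturate(dot(N, H)), n); // Blinn-Phong

float3 R = reflect(-L, N);                     // reflected light vector
float phongSpec = pow(saturate(dot(R, V)), n); // Phong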

Normal Mapping Artifacts

One of the things that made me do this whole analysis is that I was dissatisfied with the image quality of Phong and Blinn-Phong when used with a normal map, i.e. with per-pixel lighting. I had huge block artifacts on my water surface, and on everything with a high enough specular power value and a bumpy surface. So I found out about a reformulation of the Blinn-Phong model by Richard F. Lyon, written in 1993 (!) for Apple. (trivia : Mr. Lyon also invented the optical mouse… how awesome is that!)

[Image : Lyon vs. Blinn-Phong comparison]

This reformulation is interesting because it does not use the specular power literally as an exponent; it uses a distance metric and a much lower power value to produce results very similar to the Blinn-Phong model. Using a high specular power (32 or more) hurts floating-point accuracy, even in full-precision mode.
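
The bullet list above cites method 1 with D = H* – L; as I understand the paper, the k=2 idea boils down to something like the following sketch using D = H – N as the difference vector, so treat it as a hedged approximation and double-check against the paper before reusing it :

// Hedged sketch of Lyon's approximation of (N.H)^n, for k = 2.
float3 D = H - N;                   // difference between halfway vector and normal
float d2 = dot(D, D);               // squared distance metric
float x = saturate(1 - n * d2 / 4); // (1 - n*d2/(2k)) with k = 2
float lyonSpec = x * x;             // squared instead of pow(., n)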

That said, I have seen hardware that does not have this problem. I am starting to think that it may be a driver issue, or something about mobile GPUs… In any case, the safe thing to do is to choose the model that never produces artifacts, right?

The Other Blinn Distributions

The two other models I implemented (Trowbridge-Reitz and Torrance-Sparrow) were “ported” from the MATLAB code in Lyon’s reformulation paper. I wanted to test them out to see if they had the same artifact problems, and how different they looked from the classic models.

[Image : Trowbridge-Reitz highlights]

Trowbridge-Reitz is an interesting model because of how it looks. It’s slower than Blinn-Phong, but it has a distinct smoothness to it. The falloff of its specular highlights is softer than the other models… I’m not sure if it’s more accurate, but it looks pretty. Sadly, it has the same problems with normal mapping.

Torrance-Sparrow is visually identical to the Blinn-Phong model. It’s the same thing, but slower and more instruction-heavy; it does not even fit in SM2.0 with 3 lights… So I suggest you disregard it for realtime graphics.

Performance

I found that performance varies a lot depending on which technique you use, which shader model you support, how much your GPU is fillrate-limited instead of arithmetic-limited… So I’ll just say this : the Lyon model looks great, and it’s simple and fast enough to be worth considering. If you don’t experience the artifacts I describe, then the Blinn-Phong model is your best shot, but test Trowbridge-Reitz to see if it’s fast on your hardware.

It’s also worth mentioning that many things could be optimized by factorizing equations into small 1D or 2D textures (or perhaps a normalization cubemap), if your GPU loves pixels and hates instructions. But I don’t believe that the shader can be optimized that much by reorganizing code or removing useless statements. At least not without hurting visual quality.

Component System Updates

This sample contains a major breaking change to my component framework : the Service base class is gone. This makes Components able to “be” services (and publish many service interfaces), and allows this sample to have a much simpler class structure… no more state classes! The components just publish whatever data they want via their service interfaces. And with the new Eventful<T> class, it’s really easy to propagate changes from a controller to a view.

Post Processing/Fullscreen Texture Sampling

I’ve decided to repost all my remaining TV3D 6.5 samples to this blog (until I get bored). These are not new, but they were only downloadable from the TV3D forums until now!

This demo (originally released July 1st 2008 here) is a mixed bag of many techniques I wanted to demonstrate at once. It contains (all of which are described below) :

  • Pixel-perfect sampling of mainbuffer rendersurfaces for post-processing
  • A component/service system very similar to XNA’s and with support for IoC service injection
  • A minimally invasive, memory-conserving workflow for post-processing shaders
  • Many realtime surface downsampling methods (blit, blit 2x, tent, sinc+kaiser and bicubic)
  • An optimized Gaussian blur shader with up to 17 effective taps in a single pass

Download

FullscreenShaders_(final).zip [8.6 Mb] – C#3 (VS.NET 2008, TV3D 6.5 Prerelease .NET DLL Required)

Screenshots

[Screenshots]

Dissection

Pixel-Perfect Sampling

There is a well-known oddity in all DirectX versions (I think I’ve read somewhere that DX11 fixes it… amazing!) : when drawing a fullscreen textured quad, the texture coordinates need to be shifted by a half-texel. That’s why, if you use hardware filtering (which is typically enabled by default), all your fullscreen quads end up slightly blurred.
But the half-texel offset is relative to the texel size of the texture you are sampling, as well as the viewport you are rendering to! And it gets seriously weird when you’re downsampling (sampling a texture at a lower frequency than its resolution), or when you’re sampling differently-sized textures in a single render pass…

So this demo addresses the simple cases. I very recently found a more proper way to address this problem in a 2003 post from Simon Brown. I tested it and it works, but I don’t feel like updating the demo. ^_^
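
The simple case boils down to shifting the sampling coordinates by half a texel of the source texture, as in this sketch (texelSize is assumed to be 1 / the source texture dimensions, passed in as a shader constant) :

sampler SourceSampler;
float2 texelSize; // 1.0 / source texture dimensions

float4 PS_PassThrough(float2 uv : TEXCOORD0) : COLOR
{
    // Align texel centers with pixel centers, so that bilinear
    // filtering becomes a no-op on a fullscreen quad.
    return tex2D(SourceSampler, uv + 0.5 * texelSize);
}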

Component System

The component system was re-used in Trouble In Euclidea and Super HYPERCUBE, and I’m currently using it to prototype a culling system that uses hardware occlusion queries efficiently. The version bundled with this demo is slightly out of date, but very functional.
I blogged about the service injection idea a long time ago, if you want to read up on it.

Post-processing that just sits on top

This is how I think post-processing should be done : using the main buffer directly, without rendering to a rendersurface to begin with. This way the natural rendering flow is not disturbed, and post-processing effects are just plugged in after the rendering is done.

If you can work directly with the main buffer (no downscaling beforehand), you can grab the main buffer onto a temporary rendersurface using BltFromMainBuffer() after all draw calls are performed, and call Draw_FullscreenQuadWithShader() using the temporary rendersurface as a texture. The post-processed result is output right onto the main buffer.
Any number of effects can be chained this way… blit, draw, blit, draw. Since the RS usage only lasts until the fullscreen draw call, you can re-use the same RS over and over again.

If you want to scale the main buffer down before applying your effect (e.g. for performance reasons, or to widen the effect of a blur), then you’ll need to work “one frame late”. I described how this works in this TV3D forum post.

Downsampling Techniques

The techniques I implemented for downsampling are the following :

  • Blit : Simply uses BltFromRenderSurface onto a half- or quarter-sized rendersurface. A single-shot 4x downscale causes sampling issues because it ignores half of the source texels… but it’s fast!
  • Double blit : 4x downscaling via two successive blits, each receiving surface being half as big as its source. It has fewer artifacts and is still reasonably fast.
  • Bicubic : A more detail-conserving two-pass filter for downsampling that uses 4 linear taps for 2x downsampling, or 8 linear taps for 4x. Works great in 2x, but I’m not sure if the result is accurate in 4x. (reference document, parts of it like the bits about interpolation/upsampling are BS but it worked to an extent)
  • Tent/Triangular/Bilinear 4x : I wasn’t sure of its exact name because it’s shaped like a tent in 2D, like a triangle in 1D, and it’s exactly the same as bilinear filtering… It’s accurate but detail-murdering 4x downsampling. Theoretically it should produce the same result as a “double blit”, but the tent is a lot more stable, which shows that BltFromRenderSurface has sampling problems. (A sketch of this kind of four-tap reduction follows the list.)
  • Sinc with Kaiser window : A silly and time-consuming experiment that pretty much failed. I found out about this filter in a technical column by Jonathan Blow (of Braid fame), which mentioned that it is the perfect low-pass filter and should be the most detail-conserving downsampling filter. There are very convincing experimental results in part 2 as well, so I gave it a shot. I get a lot of rippling artifacts and it’s way too slow for realtime, but it was fun to try out. (reference document)
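
As an illustration of the four-tap idea mentioned above (a sketch; the demo’s filters may place their taps differently) : a bilinear tap placed on a texel corner averages a 2x2 quadrant, so four of them cover the whole 4x4 footprint of a destination pixel.

sampler SourceSampler;  // bilinear filtering must be enabled
float2 sourceTexelSize; // 1.0 / source texture dimensions

float4 PS_Downsample4x(float2 uv : TEXCOORD0) : COLOR
{
    float4 acc = tex2D(SourceSampler, uv + sourceTexelSize * float2(-1, -1));
    acc += tex2D(SourceSampler, uv + sourceTexelSize * float2( 1, -1));
    acc += tex2D(SourceSampler, uv + sourceTexelSize * float2(-1,  1));
    acc += tex2D(SourceSampler, uv + sourceTexelSize * float2( 1,  1));
    return acc * 0.25; // the average of 16 source texels, via 4 taps
}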

Fast Gaussian Blur + Metrics

The Gaussian blur shader (and its accompanying classes) in this demo is an implementation of the stuff I blogged about some months ago : the link between “lost light” in the weights calculation and how similar to a box filter it becomes. You can change the kernel size dynamically and it’ll tell you how box-similar it is. The calculation of this box-similarity factor is still very arbitrary and you should take it with a grain of salt… but it’s a metric, an indicator.

But there’s something else : hardware filtering! I read about this technique in a GLSL bloom tutorial by Philip Rideout, and it allows up to 17 effective horizontal and vertical samples in a ps_2_0 (SM2-compatible) pixel shader… resulting in 289 effective samples and a very wide blur! It speeds up 7-tap and 9-tap filters nicely too, by reducing the number of actual samples and instructions.
Philip’s tutorial contains all the details, but the idea is to sample in-between taps using interpolated weights, achieving the same visual effect with half the sample count. Very ingenious!
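
The core of the trick, sketched for one pair of adjacent taps in a horizontal pass (the weights w1/w2 and texel offsets o1/o2 come from the Gaussian kernel; the names are illustrative) :

// Two adjacent kernel taps collapsed into a single bilinear fetch.
float w = w1 + w2;                 // combined weight of the two taps
float o = (o1 * w1 + o2 * w2) / w; // weight-averaged offset, in texels
color += tex2D(SourceSampler, uv + float2(o * texelWidth, 0)) * w;

Because the fetch lands between the two texels at exactly the right fraction, the hardware’s bilinear interpolation reproduces the two weighted samples exactly.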


If you have more questions, I’ll be glad to answer them in comments.
Otherwise I’ll direct you to the original TV3D forum post for info on its development… there’s a link to a simpler earlier version of the demo there, too.

Super HYPERCUBE

Note : SUPERHYPERCUBE has been released by Kokoromi for PSVR, I did not work on this version at all, but this article shows its development history.

Super HYPERCUBE (capitalization may vary) is a game I made with the fine folks at Kokoromi for this year’s (2008) Gamma art/game show in Montréal. Gamma is a themed game party that’s been happening for three years now, each year with a design constraint that applies to all the games; this year’s Gamma 3D was about red/cyan stereoscopy, a.k.a. color anaglyphs.

SHC Logo/Splash Screen : It’s actually all 3D and animated.

The idea is that you have to fit a cluster of cubes through a wall that shows a projection of that cluster onto one of its faces, with a series of rotations applied. So it’s a bit like the Japanese “Human Tetris” game shows, which is the comparison our recent blog coverage has been using, and it’s exactly right. Except you’re handling a random cluster of cubes.

Development Timeline : From concept art, to Sketchup mockup, to early prototype, to final product!

The game, like all other Gamma games, was made to be easy to learn and fun within 5 minutes, because it was to be played by the public and we wanted to get as many people to play as possible. So the concept is fairly simple, but I was surprised at how competitive the gameplay got on the showfloor! Until the last minute, I had a fight with a fellow party-goer for the #1 high score, which I won by an unfair margin… something I assume was due to luck and, well, hours of testing the game while making it. ;)

Good Luck With That : The shapes get pretty crazy in the last moments.

But our game’s most awesome feature is not just stereoscopy, it’s wiimote headtracking! Which is a bummer, because even though the game is now available for download, I assume no one will have the setup to play it as it was meant to be played. (The most important part being the IR-LED-mounted glasses!)

You can still use an Xbox 360 gamepad or just the keyboard to play it, and that’s how I’ve been testing it most of the time. It’s just nowhere near as immersive without the headtracking… the combination of that and stereoscopy worked really well for us. There will probably be videos of people playing at Gamma 3D sometime soon; I’ll update this post with links.

Downloads

Binaries (this one isn’t open-source, sorry…) : sHC_final.zip (1.6 Mb)
Update 21/11 02h57 GMT-5 : Put the required font in a texture instead of looking up the TTF. I didn’t realize that Century Gothic wasn’t shipped with Windows anymore…

You will need the .NET 3.5 SP1 framework installed, and TV3D requires some oft-missing DirectX DLLs which you can get with the End-User Runtimes.

Acknowledgments

I have to say that the Wiimote headtracking technology is all thanks to Johnny Chung Lee’s inspiring work on the subject (and free code!), as well as Brian Peek’s C# Wiimote library, without which this would never have happened.

The game itself was programmed in C# 3.0 using the Truevision3D 6.5 engine and part of the XNA framework (I’ve bundled the DLL) for full Xbox controller support. There is no sound; this is voluntary… there was a DJ at the actual event. :)

And last but not least…

Credits (I’m not alone in this one!) :

  • Renaud Bédard – Polytron (Concept, Programming, Hardware)
  • Phil Fish – Kokoromi/Polytron (Concept, Design)
  • Jason DeGroot – Polytron (Concept, Hardware)
  • Cindy Poremba – Kokoromi (Design)
  • Heather Kelley – Kokoromi (Design)
  • Damien Di Fede – Kokoromi (Play-Testing)

Effect Compiler & Disassembler

Updated! Now supports nVidia’s ShaderPerf tool.

Downloads

EffectCompiler.zip [51.3kb] – XNA Game Studio Express 2.0 (Visual C# 2005 Express), Source + Binaries

Description

Yesterday I took apart my Effect Compiling Tool, which took an HLSL shader and converted it to Windows/Xbox360 bytecode, and made it into something more useful outside of XNA.

It’s always been somewhat of a hassle for me to compile and disassemble HLSL shaders. I can edit them pretty well in Visual Studio with code coloring and tabulations/undos/whatnot, but to compile them I always had to go with something else. I had read in the book Programming Vertex and Pixel Shaders by W. Engel how to compile them in VC++ 2005 using a Custom Build Step and fxc.exe, but when working in C# I had to keep a parallel C++ project just for shaders, which is dumb. Also, fxc.exe has become less and less stable for some reason… So I finally made my own compiler and disassembler using XNA 2.0.

Continue reading Effect Compiler & Disassembler

Static Ambient Occlusion

Traditional DirectX lighting models define ambient lighting as coming from all directions, added as a constant on all surfaces regardless of the geometry. Ambient occlusion acts as a factor on ambient lighting that takes into account the cavities and concave areas of a model, or how much a surface is hidden from its environment.

Downloads

StaticAmbientOcclusion.rar [6.8 Mb] – VB 2005 (VS.NET 2005)

Continue reading Static Ambient Occlusion

Realtime Gradient Sky

Downloads

SkyGradient.rar [818kb] – VB 2005 (VS.NET 2005)

Description

I had a request from the same MMORPG developer who asked me for Non-Reflective Water : to make a simpler version of my old “HLSL Sky Demo”, which I haven’t put on my blog yet because I’m not all that proud of the code.

Basically, I was asked to copy World of Warcraft’s skies : make a tweakable gradient-based day sky solution that renders fast and looks good, and most importantly behaves well in huge worlds with big height variations.

Continue reading Realtime Gradient Sky