Sunday, September 06, 2020

Is this progress?

Sure, it looks like a broken demo cube program. That's because that's what it is. But it represents me getting back on the bandwagon with this whole hobby of 3D graphics programming when I should be doing something to make myself useful. Today we begin with yet another ground-up rewrite, but this time with Vulkan instead of OpenGL, thanks to a hardware update.

I say today, but this has taken literally a few days short of a month to get to the point, just now, where it displayed anything but a blank window. Just before this, I got it to display a blank window with a dark grey background, and then, the broken cube above. (It's because I got the "stride" wrong in my vertex buffer.)
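For anyone who hasn't hit this bug themselves: the stride is the number of bytes from the start of one vertex to the start of the next in the buffer, and if it doesn't match the actual vertex layout, every vertex after the first reads garbage. A minimal sketch of the arithmetic, assuming a hypothetical interleaved position/normal/UV layout (my actual layout isn't shown in this post):

```python
import struct

# Hypothetical interleaved vertex: position (3 floats), normal (3 floats),
# UV (2 floats). The key point: the stride must be the size of one whole
# vertex, not the size of a single attribute.
VERTEX_FORMAT = "<3f 3f 2f"              # little-endian, 8 floats total
STRIDE = struct.calcsize(VERTEX_FORMAT)  # 8 * 4 = 32 bytes

# Attribute byte offsets within one vertex:
OFFSET_POSITION = 0
OFFSET_NORMAL = struct.calcsize("<3f")     # 12
OFFSET_UV = struct.calcsize("<3f 3f")      # 24
```

In Vulkan terms, STRIDE is what goes into VkVertexInputBindingDescription::stride, and the offsets into the matching VkVertexInputAttributeDescription entries.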

But in any case, this is my hobby and I will waste my time as I like. Forward!



Saturday, October 26, 2019

Transparent Materials

All the changes I have been making under the hood have started to bear fruit. Aside from the layer mask support in shadows I mentioned earlier, beams and billboards now support shadows and transparency. And ordinary meshes can be made transparent, including ones with layered materials, just by setting an opacity. (Although with layered materials I also need to make sure not to layer any properties, like gloss or metallic, that are not supported in the transparent shader.)

The resolution of transparent objects is reduced, and shadows for transparent materials are... a bit ugly. But what can you do? This is almost ten-year-old technology I'm working with. It's still pretty neat.

Tuesday, October 22, 2019

Holey Title, Batman

I have been busily rewriting the entire shader definition system to be more flexible and converting all the existing shaders over to the new system. The end result of all this is... nothing changes.

That's not exactly true. Some new features come out of this, just because a new framework makes them easier to implement. For example, masks selected by a layer selection texture can now cast shadows. There are also a lot fewer classes and less clutter around shadow shaders in general, but that's more difficult to illustrate with an image.

Monday, April 01, 2019

Text Too


I built a framework for caching letters, and a bunch of code for getting the positions of individual letters in a typeset string from the OS-level text rendering library (Core Text in OSX). And then I wrote a shader that can be called as something like a particle system, picking out the appropriate character data, placing the quad to render that character, getting the cached texture data for that character and drawing it to the screen.

After that, I refactored the character animation code to use what I had learned about buffer textures, eliminating the limits on joints (skeletal animation, mentioned here but implemented before I restarted this blog, some YouTube posts here and here) and shape keys (see this post), and saving a bunch of memory by compacting shape keys to only store data for vertexes that have useful data. It's all pretty cool but has absolutely no visible effect on the final product. Woo.
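As a sketch of the compaction idea, assuming a shape key is stored as one offset per vertex (the names here are made up for illustration, not my actual code):

```python
def compact_shape_key(deltas, epsilon=1e-6):
    """Keep only vertices whose shape-key offset is non-negligible.

    `deltas` is a list of (dx, dy, dz) offsets, one per vertex.
    Returns parallel lists: the indices of affected vertices and
    their offsets, which is all that's needed to apply the key.
    Most keys move only a small region of the mesh, so this throws
    away the long runs of (0, 0, 0) that would otherwise be stored.
    """
    indices, offsets = [], []
    for i, (dx, dy, dz) in enumerate(deltas):
        if abs(dx) > epsilon or abs(dy) > epsilon or abs(dz) > epsilon:
            indices.append(i)
            offsets.append((dx, dy, dz))
    return indices, offsets
```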

Today, I made improvements to the screen-space reflection shader (first implemented two years ago) I had been meaning to make for a long time, which means I'm getting close to the point where I can't avoid working on something new.

Saturday, March 02, 2019

Text


So, more fooling around on things which aren't what I was going to fool around with. I need text for debugging, among other things, so I decided I would "get that out of the way." Turns out, it's a little bit more complicated than that.

But, after some work, I have it more-or-less working. Well, a start of it. One part was just getting characters rendered onto textures. Then, of course, I had to start working on distance fields, because all the kids are talking about distance fields these days. The screen shot below might illustrate why.


The two words are rendered using textures of the same size (about 32 pixels high in total). The top one is anti-aliased text, and the bottom is a distance field. The distance field is generated from anti-aliased text at four times the resolution, and then scaled down. As you can probably see, this produces a nice clear shape even at the relatively low resolution of the final texture. Generating distance fields from anti-aliased text, caching font info, getting text and distance fields added to my material definition formats, that all took a bunch of time.
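For the curious, the core of distance-field generation is simple, if slow. A brute-force sketch, operating on a binary coverage mask for clarity (my actual generator works from anti-aliased coverage at four times the resolution, which this doesn't reproduce):

```python
def distance_field(mask, spread=4.0):
    """Brute-force signed distance field from a binary coverage mask.

    `mask` is a 2D list of 0/1 values (1 = inside the glyph).
    Returns values in [0, 1], with 0.5 falling at the glyph edge.
    O(n^2) per pixel, so only suitable for offline generation on
    small bitmaps.
    """
    h, w = len(mask), len(mask[0])

    def nearest_opposite(y, x):
        # Distance to the nearest pixel of the opposite state,
        # positive inside the glyph, negative outside.
        inside = mask[y][x]
        best = spread
        for yy in range(h):
            for xx in range(w):
                if mask[yy][xx] != inside:
                    d = ((yy - y) ** 2 + (xx - x) ** 2) ** 0.5
                    if d < best:
                        best = d
        return best if inside else -best

    return [[0.5 + 0.5 * max(-1.0, min(1.0, nearest_opposite(y, x) / spread))
             for x in range(w)] for y in range(h)]
```

At render time, the shader thresholds this value around 0.5 (with a bit of smoothstep) to recover a crisp edge at almost any scale, which is where the quality win in the screenshot comes from.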

And I still have to get this text stuff into the UI.

Saturday, January 05, 2019

LODs and Skeletons

This figure is familiar from my earlier implementation of skeletal animations, if anyone was actually reading any of this closely enough to be familiar with it. It's familiar to me, anyway.

The blue man group member has been washed to reveal an almost flesh-colored putty person. But there's a secret here. This isn't the same model as before, but one I've added a simplified LOD mesh to (in Blender 2.80, by the way, which seems a lot easier to use than 2.79, highly recommended). LOD scales are exaggerated, as usual, so you can see the switch as I move the camera around. The cool thing is that the skeletal animation attached to the original (base) model is automatically hooked up so that it works with other levels of detail. This might sound obvious, but, of course, it takes effort to make things that should work actually work for real.

Next, to check that it also works with shape keys (already implemented, but that just means the bugs are all in place).

Wednesday, January 02, 2019

Loads of LODs

Finally back to doing stuff. Another pile of work went into doing this thing which should normally be invisible.

In the previous post I showed some automatic LOD generation, which is OK I guess, but doesn't do a great job all the time. For more control, you need to be able to import multiple versions of the same mesh which you have built by hand. The cylinder above is actually three separate objects which were created in Blender and imported from one file. The scales for switching between LODs were assigned automatically (based on the size of the faces in the objects), and are exaggerated here, again, for debugging.
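The "based on the size of the faces" heuristic can be sketched like this: switch away from a mesh once its typical face would project to only a few pixels. The formula and constants here are illustrative guesses, not the ones in my code:

```python
import math

def lod_switch_distance(face_areas, fov_y=math.radians(60),
                        screen_height=1080, target_pixels=4.0):
    """Guess a camera distance beyond which a mesh's detail stops mattering.

    Hypothetical heuristic: take the square root of the average face
    area as a typical edge length, and find the distance at which that
    edge projects to `target_pixels` pixels. Projected size of a length
    L at distance d is roughly L * screen_height / (2 * d * tan(fov/2)).
    """
    avg_edge = math.sqrt(sum(face_areas) / len(face_areas))
    return avg_edge * screen_height / (2.0 * target_pixels * math.tan(fov_y / 2.0))
```

The useful property is that it's monotonic: a coarser mesh (bigger faces) stays acceptable out to a proportionally larger distance, so sorting the imported meshes by face size also sorts their switch distances.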

So, that's working. Now just to check it works in concert with some other features, and then I can move on to... other things.

Monday, September 24, 2018

Creepy Purple Head Is Creepy

This took a lot longer than I hoped. This post is about level of detail. That's where you take out some polygons and simplify a shape, usually based on how far away it is from the viewer, to save your graphics card the trouble of calculating things for polygons which are so small they only cover a few pixels or less anyway. I've had spheres doing this in my "engine" for a long time, because for spheres it's relatively easy to calculate the appropriate low(er)-poly mesh or meshes. For general meshes though, the easy approach is to create a mesh for each LOD using external tools or by hand. I was thinking of taking an automatic approach to generating these meshes and calculating the appropriate view distances. This would, in my mind, save me the trouble of having to do this outside the engine and tune each model's selection of meshes. (It would also save me from the problem of how to handle shape keys on lower-poly versions of a mesh, although I'd probably just ignore that.)

So, after finishing up fog and transparency illumination in the last post, I noticed that my sphere LOD scales seemed to be a bit off, so I started debugging. That led to bug fixing, and that led to me starting to implement that auto-LOD-generation that I had been thinking about. Like I said, it turned out to take longer than I had hoped. Part of this was finding some reasonably fast and simple algorithm to implement, and part of it was discovering that Blender had output a separate vertex for every triangle corner in my mesh, even though most triangles share corners with other triangles. This meant a detour to merge these vertexes so I could do a bit of topology analysis in the algorithm (also incidentally saving a bunch of memory after merging the vertex data, although I guess that's not such a big deal compared to things like texture memory).
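The vertex merge itself is the easy part. A sketch using rounded positions as the merge key (real code would also need to respect normals and UV seams, or the merge destroys shading data):

```python
def merge_vertices(positions, triangles, decimals=6):
    """Merge vertices that share (rounded) positions.

    An export with one vertex per triangle corner hides the mesh
    topology; collapsing duplicates both saves memory and restores
    the shared-corner adjacency needed for simplification analysis.
    `triangles` holds index triples into `positions`.
    """
    remap, unique = {}, []
    new_index = []
    for p in positions:
        key = tuple(round(c, decimals) for c in p)
        if key not in remap:
            remap[key] = len(unique)  # first time we see this position
            unique.append(p)
        new_index.append(remap[key])
    new_tris = [tuple(new_index[i] for i in tri) for tri in triangles]
    return unique, new_tris
```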

The final effect is shown in the video above. It works... Ok? I guess? The effect is exaggerated for testing in the video, and normally the changes are basically invisible. I think simplifying material shaders for distant objects will probably be a bigger win (and I'll see if I can automate that too). But anyway, now you know how I spent the last couple of weeks and why clouds still aren't implemented.

Monday, September 03, 2018

Dramatic Sunset Lighting With Forward Scattering

It sort of has something to do with clouds, honest.

Among a bunch of under-the-hood fixes and additions over the past little while, I went back to my transparency and fog shaders. Both of these effects reference a 3D texture "illumination map" that I generate in screen space. The illumination map makes it possible for all the lights to at least have some effect on transparent objects, including fog. The original illumination map only holds the response of a directly illuminated sphere to incoming light, summed up for all lights. This would be appropriate for clouds made of fairly large particles, which is why I talked about dust in the description of the first video. Maybe blowing sand would be a better description.


Notice how the transparencies in this older video are black when viewed against the sun, and compare to the behavior of the same type of object in the first part of the new video above.

I also wanted to be able to simulate things like steam or water clouds, which, because they are made of smaller (and transparent) particles, scatter light forward instead of just reflecting it back. And I also wanted to support a mix of different types of cloud or transparency in the same scene. I've attacked this problem by rendering two illumination maps: the original, pure-reflection map, and another which represents (mostly) forward scattering. (Right now, both maps are applied as-is, but I plan to make the response weighted by the transparent material in a later update.) This is what makes the transparencies shine when illuminated from behind in the top video, and gives the nice (if pixellated) murky sunset effect at the end of the video.
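For reference, a standard way to model that "mostly forward" response is the Henyey-Greenstein phase function. I'm not claiming my shader uses exactly this, but it's the usual starting point for a forward-scattering lobe:

```python
import math

def henyey_greenstein(cos_theta, g):
    """Henyey-Greenstein phase function.

    Models how strongly a particle scatters light at angle theta from
    the incoming direction. g > 0 favors forward scattering (small,
    transparent particles like water droplets); g = 0 is isotropic;
    g < 0 favors back-scattering. Integrates to 1 over the sphere.
    """
    denom = (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
    return (1.0 - g * g) / (4.0 * math.pi * denom)
```

With g somewhere around 0.7 or 0.8, light shining through the medium toward the camera is far brighter than light reflected back, which is exactly the against-the-sun glow the old black-silhouette video was missing.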

I was thinking of moving on to clouds next, but I could also implement an entire framework for visualizing and debugging the illumination maps, which would only take another week or two...

Thursday, August 23, 2018

Mr. Blinky

Well, that certainly is a creepy purple vaguely head-shaped lump. This is a slightly more complex model than the nondescript pillar of purple from the last post, and it blinks. It has separate shape keys to close each eye, and an animation linked to both of them. It also has shape keys for opening and closing the mouth, but I haven't created the tools to pick and choose which animation or key I am manipulating from the game engine's UI yet, so I'm stuck just showing one at a time.

Anyway, things move along. Maybe some additional UI tools before I go on to the clouds...