
Blood, fog, lighting and shadows - Adding love and detail to the Stolen Lands

Dear Pathfinders! It's time for another peek behind the curtains. Today's update is all about lighting, shaders, floral dispersion and all those fun details and effects that add a bit more life to our game world. Before we get all technical and start revealing our tricks, though, we have two announcements. First of all: the stage 2 alpha test is nearly here!

Our next big alpha build is almost ready and will be distributed as an update through Steam to all alpha testers today. What does this mean for you? If you're one of our alpha testers, you'll be getting more content, more gameplay features, more companions and more Pathfinder: Kingmaker to play. If you aren't part of the alpha, you'll be happy to hear that we're well on schedule, development is steadily progressing and there should be streams and videos of the new build popping up for your viewing pleasure in the near future. Keep your eyes peeled!

Second, we have a draft of the community achievement baron bonus portrait for you! Whew, that was a mouthful. As you might recall, we had several polls on our forums where we asked you to vote on your favorite race, class and appearance options, which were to be depicted in one new in-game character portrait. You have voted for a female half-elf sorcerer of the undead bloodline, with a touch of vampire, wraith and mummy in her appearance. Based on that description, our artist came up with something - and the result sure is creepy! Behold:

Eh, what the heck. *swipes right*

Now, let's talk more about graphics in Kingmaker. We've already told you about our water rendering techniques. Today we'll review a few other graphics features of our project.

Strange as it may sound, graphics development starts with market research. At an early stage of development, we researched the stats of video cards on the market and chose our target hardware, in order to let as many players as possible experience all the graphical content we produce. Then we selected rendering technologies that would be a perfect fit for this hardware. The lighting system is the basis of game rendering, unless the game is a match-3 title with abstract graphics. The lighting system determines the graphics pipeline architecture of a frame, so we'll start with that.

Forward rendering

There are two basic approaches to real-time lighting calculations:

  • Forward rendering
  • Deferred shading

And of course, there are a lot of variations and combinations of these.

Forward rendering is the most widely-used (and probably the first) technique for calculating lighting in real time in games. A model consisting of vertices is fed to the vertex shader, where its vertices are transformed to screen space, and all this data is passed down the video card pipeline to the pixel shader, where lighting happens. This is called forward rendering with per-pixel lighting. 

Of course, lighting can be calculated in the vertex shader as well, in which case it is forward rendering with vertex lighting. Vertex lighting is less GPU intensive, because there are far fewer vertices than pixels, but its visual quality is also lower.

Usually, a lighting shader works with only one light source at a time. But objects may be lit by multiple sources simultaneously, which means an object must be rendered as many times as there are sources lighting it. This is the most significant flaw of forward rendering.
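
For the curious, here's a rough CPU-side sketch of what the per-pixel forward path boils down to. This isn't our actual shader code (that lives on the GPU); the names and the simple Lambert model are purely illustrative, but it shows why the cost scales with objects * lights: each light means another pass over the same geometry.

```python
# Rough CPU-side illustration of forward per-pixel lighting (simple Lambert).
# All names are made up for this sketch; the real thing runs in a pixel shader.

def lambert(normal, light_dir, light_color, albedo):
    # Diffuse term: surface normal dot light direction, clamped to zero.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))

def forward_shade(pixels, lights):
    # One additive pass per light source: the same geometry is effectively
    # processed len(lights) times, which is the flaw described above.
    for light in lights:
        for p in pixels:
            contribution = lambert(p["normal"], light["dir"], light["color"], p["albedo"])
            p["color"] = tuple(c + d for c, d in zip(p["color"], contribution))
    return pixels

pixels = [{"normal": (0.0, 1.0, 0.0), "albedo": (0.8, 0.7, 0.6), "color": (0.0, 0.0, 0.0)}]
lights = [{"dir": (0.0, 1.0, 0.0), "color": (1.0, 0.95, 0.9)},
          {"dir": (0.0, 0.707, 0.707), "color": (0.2, 0.2, 0.4)}]
print(forward_shade(pixels, lights))
```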

 
 

Deferred shading, as the name suggests, calculates lighting not during geometry rendering but defers it to the end of the frame. In this case lighting works as a post-process, which means we don't have to render the same object multiple times. All geometry is rendered to a special G-buffer that contains the information required to calculate lighting. Then lighting is calculated for all the light sources, using the prepared G-buffer. This means lighting calculations are decoupled from scene complexity.
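
Again purely as an illustration, not production code: a toy version of the two passes. The G-buffer layout (albedo, normal, depth) and all names are assumptions made for this sketch.

```python
# Toy illustration of deferred shading: a geometry pass fills a G-buffer,
# then a single lighting pass reads it back.

def geometry_pass(objects, width, height):
    gbuffer = [[None] * width for _ in range(height)]
    for obj in objects:
        for x, y, frag in obj["fragments"]:          # frag: {"albedo", "normal", "depth"}
            current = gbuffer[y][x]
            if current is None or frag["depth"] < current["depth"]:
                gbuffer[y][x] = frag                  # keep the nearest fragment
    return gbuffer

def lighting_pass(gbuffer, lights):
    # Cost depends on pixels * lights, not on how many objects were drawn.
    image = []
    for row in gbuffer:
        out = []
        for frag in row:
            if frag is None:
                out.append((0.0, 0.0, 0.0))
                continue
            color = (0.0, 0.0, 0.0)
            for light in lights:
                n_dot_l = max(0.0, sum(n * d for n, d in zip(frag["normal"], light["dir"])))
                color = tuple(c + a * lc * n_dot_l
                              for c, a, lc in zip(color, frag["albedo"], light["color"]))
            out.append(color)
        image.append(out)
    return image
```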

 
 

This approach might seem much more effective than forward rendering, but it has its pitfalls:

  • To create a G-buffer, a video card must support Multiple Render Targets (MRT). Newer video cards do support it, of course, but until recently this was a significant limitation.
  • The G-buffer usually takes 4 render textures. If we take Full HD as the average resolution, that gives us 1920*1080*4*32 bits, which is roughly 32MB per frame. This much memory must pass through the video card's bus 30 times per second if we want 30 FPS, which comes to about 1GB/s (see the back-of-the-envelope sketch after this list). In fact the volume is even higher, because data can be written to the G-buffer multiple times (overdraw) and read back while it is being written (blending, depth/stencil tests), so the real figure can be several times larger. And if we increase the resolution to 4K, the load becomes substantial even for top video cards. And this is just the G-buffer; lots of other data must pass through the same bus as well (textures, geometry, intermediate post-process textures and so on). This puts a very heavy load on a video card, especially an average one.
  • Deferred shading doesn’t work with translucent objects, such as glass. That's why we have to use good old forward rendering to draw such objects. This leads to a large number of shader combinations.
  • Deferred shading does not support multisampling. Hence there are problems with anti-aliasing.
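
As promised, the back-of-the-envelope numbers behind the bandwidth point above. The resolution, render target count and bit depth come straight from that bullet; the rest is just arithmetic.

```python
# Back-of-the-envelope G-buffer bandwidth estimate (illustrative numbers only).
width, height = 1920, 1080      # Full HD
render_targets = 4              # typical G-buffer
bits_per_pixel_per_target = 32
fps = 30

bytes_per_frame = width * height * render_targets * bits_per_pixel_per_target // 8
print(f"G-buffer per frame: {bytes_per_frame / 2**20:.1f} MiB")                  # ~31.6 MiB
print(f"Write traffic at {fps} FPS: {bytes_per_frame * fps / 2**30:.2f} GiB/s")  # ~0.93 GiB/s
# Overdraw, blending and depth/stencil reads multiply this several times over,
# and 4K has four times as many pixels again.
```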

Having weighed all the pros and cons, we opted for forward rendering, but with some modifications, which we will talk about below.

Light indexed deferred shading

The most important light sources, such as the sun or shadow-casting torches, are rendered using classic forward rendering. The other, less important accent light sources are rendered using light indexed deferred shading.

Light indexed deferred shading is a method of feeding more information about light sources to the shader in order to decrease the number of object * light combinations. All light sources are rendered to a special texture, every pixel of which contains a light source's index in the global light array. Here is an example of how this texture looks in our debugger: colored pixels hold light source indices, while white means no light hits that pixel.

 
 

The texture has 4 channels, which is why we can tie up to 4 light sources to each pixel. The index texture is fed to the main object rendering shader along with the light source data array. After obtaining a light source's index from the texture, we can get all the information required to calculate its lighting from the common light source array.

 
 

Of course, we could simply feed 4 light sources to the shader, as Unity does in its vertex lighting mode, but that would be 4 light sources per object. With light indexed deferred shading it's 4 light sources per pixel. This is a significant difference that allows us to create more detailed lighting. There is one serious limitation, though: there can't be more than 4 light sources per pixel, otherwise the lighting will flicker.
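
To make the idea a bit more concrete, here's a small Python sketch of the principle. The texture layout and every name in it are illustrative assumptions, not our actual implementation, but the logic is the same: up to four indices per pixel, resolved against a global light array at shading time.

```python
# Illustrative sketch of light indexed deferred shading: each pixel of an
# index texture stores up to four indices into a global light array.

MAX_LIGHTS_PER_PIXEL = 4
NO_LIGHT = 255   # "white" in the debug view: no light touches this pixel

def build_index_texture(width, height, lights):
    tex = [[[NO_LIGHT] * MAX_LIGHTS_PER_PIXEL for _ in range(width)] for _ in range(height)]
    for idx, light in enumerate(lights):
        for x, y in light["covered_pixels"]:          # pixels inside the light's radius
            channels = tex[y][x]
            for c in range(MAX_LIGHTS_PER_PIXEL):
                if channels[c] == NO_LIGHT:
                    channels[c] = idx
                    break
            # If all four channels are taken, the extra light is dropped -
            # the flicker-prone limit mentioned above.
    return tex

def shade_pixel(x, y, tex, lights, frag):
    color = (0.0, 0.0, 0.0)
    for idx in tex[y][x]:
        if idx == NO_LIGHT:
            continue
        light = lights[idx]
        n_dot_l = max(0.0, sum(n * d for n, d in zip(frag["normal"], light["dir"])))
        color = tuple(c + a * lc * n_dot_l
                      for c, a, lc in zip(color, frag["albedo"], light["color"]))
    return color
```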

Decals

Decals are some of the most 'inconvenient' things in graphics. They don't fit into any of the pipelines described above and always come with significant limitations. There are three main ways to render decals:

  • Geometry decals are decals for which a copy of the geometry they are projected onto is created. If such decals are created in real time, there is a high chance that the decal geometry will have holes, missing triangles around the edges and similar artifacts whenever it lands on the border between geometry objects. Or, even worse, the decal won't be generated at all within a reasonable time, due to complex geometry. We use such decals in our project only during development, when we can safely pre-calculate them and bake them together with static geometry.
  • Projector decals - this approach is used with the forward rendering pipeline, and the name tells us how they work: they project a texture onto objects. This means every object onto which a decal is projected needs to be rendered again; if a decal is projected onto 10 objects, this adds 10 draw calls to the frame. The number of combinations thus grows to objects * lights * projectors. This is a highly inefficient way of rendering decals, and although it is available out of the box in Unity, we don't use it in our project.
  • Screen space decals - this approach is used with the deferred shading pipeline. Having a G-buffer, we can draw decals into it before rendering lighting, and then calculate lighting afterwards. This is the most effective method, but unfortunately it too has its flaws. First, it requires a rather heavy shader to project a decal in screen space. Second, that heavy shader forces us to keep the screen area covered by decals to a minimum, which is not trivial. Third, projecting to screen space leads to visual artifacts related to mip-mapping (here is a good article about this problem: https://bartwronski.com/2015/03/12/fixing-screen-space-deferred-decals/).

 

In our project we use our own modification of screen space decals, which we call screen space pre-pass decals. We have mentioned before that we create a depth texture before rendering the main image. This texture is enough to project decals to the screen, just as with the regular screen space approach. But since we don't have a proper G-buffer, we render decals into a separate texture. When we render the main image, our renderer uses this texture and blends it with an object's main albedo texture.
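
Here's a simplified sketch of the idea. The reconstruction and sampling callbacks are placeholders (the real version runs in shaders and handles the projection math, mip-mapping and blending far more carefully), but it shows the flow: depth, then world position, then the decal's local box, then a separate decal texture that gets blended with the albedo.

```python
# Illustrative sketch of screen-space pre-pass decals with made-up names.

def project_decals(depth_tex, reconstruct_world_pos, decal, width, height):
    decal_tex = [[(0.0, 0.0, 0.0, 0.0)] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            world = reconstruct_world_pos(x, y, depth_tex[y][x])
            local = decal["world_to_local"](world)        # into the decal's unit box
            if all(-0.5 <= c <= 0.5 for c in local):
                u, v = local[0] + 0.5, local[2] + 0.5     # project along the box's local Y
                decal_tex[y][x] = decal["sample"](u, v)    # RGBA, alpha = blend weight
    return decal_tex

def blend_albedo(albedo, decal_rgba):
    # During the main pass the decal texture is blended with the object's albedo.
    r, g, b, a = decal_rgba
    return tuple(base * (1.0 - a) + d * a for base, d in zip(albedo, (r, g, b)))
```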

 
 

Fog of war

The fog of war (FOW) is part of the game mechanics, and we wanted its visual side to be very subtle: the player shouldn't be inconvenienced by it at all. To make the FOW as detailed and responsive as possible, we decided to move all calculations from the CPU to the GPU. This allowed us to fine-tune almost pixel-perfect shadows while keeping performance at its maximum.

For the FOW we use a technique resembling shadow volumes, but in 2D. We create 2D objects called FogOfWarBlockers. They look like sets of segments (edges) connected to each other. In a special shader, each edge is stretched in the direction away from the character, forming a shadow volume.
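
Roughly speaking, the extrusion of a single blocker edge looks like this Python sketch (names and the extrusion distance are made up; in the game it happens in a vertex shader):

```python
# Illustrative 2D shadow-volume-style extrusion for the fog of war: each
# blocker edge is stretched away from the character to form a quad that is
# then rasterized into the FOW shadow map.

EXTRUDE_DISTANCE = 1000.0   # "far enough" to reach past the location's edge

def extrude_edge(edge, character_pos):
    (ax, ay), (bx, by) = edge
    extruded = []
    for px, py in ((ax, ay), (bx, by)):
        dx, dy = px - character_pos[0], py - character_pos[1]
        length = (dx * dx + dy * dy) ** 0.5 or 1.0
        extruded.append((px + dx / length * EXTRUDE_DISTANCE,
                         py + dy / length * EXTRUDE_DISTANCE))
    # The shadow quad: the original edge plus its two pushed-out endpoints.
    return [(ax, ay), (bx, by), extruded[1], extruded[0]]

print(extrude_edge(((1.0, 0.0), (2.0, 0.0)), (0.0, 0.0)))
```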

 
 

We've developed special tools to create FogOfWarBlockers in Unity3D. Our level designers adjust the 2D geometry of a FogOfWarBlocker directly in the scene. This is what a blocker for one of the houses in a goblin village looks like:

 
 

The FOW system looks for intersections between a character's visibility radius and FogOfWarBlockers, and builds a list of blockers to be displayed. At the beginning of each frame, before the depth texture is rendered, we draw all the shadow volumes from all the characters into a separate texture, the Fog Of War Shadow Map.

 
 

Then we can use this texture when rendering the scene. The Fog Of War Shadow Map is projected onto the whole game location, which is why it doesn't depend on the camera position.

The FOW is projected onto objects in two ways. For opaque geometry we use the depth texture and project the FOW in screen space after rendering all the opaque geometry, much like a post-process. For transparent geometry and water we can't use the depth texture, which is why we apply the FOW during the rendering of each such object. This somewhat increases the number of shader variants, but gives better performance, because the bulk of the work is done as a post-process. As a result, we get something like this:
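
For illustration, here's a sketch of how sampling such a location-wide map might look; the bounds, names and tint values are assumptions made for the example:

```python
# Illustrative sampling of a location-wide Fog Of War Shadow Map by world
# position (taken from the depth texture for opaque geometry, or per object
# for transparent geometry and water).

def fow_visibility(world_x, world_z, fow_map, bounds):
    (min_x, min_z), (max_x, max_z) = bounds
    u = (world_x - min_x) / (max_x - min_x)
    v = (world_z - min_z) / (max_z - min_z)
    h, w = len(fow_map), len(fow_map[0])
    px = min(w - 1, max(0, int(u * w)))
    py = min(h - 1, max(0, int(v * h)))
    return fow_map[py][px]              # 1.0 = visible, 0.0 = hidden

def apply_fow(color, visibility, hidden_tint=(0.05, 0.05, 0.08)):
    # Darken hidden areas toward a bluish-black tint.
    return tuple(c * visibility + t * (1.0 - visibility)
                 for c, t in zip(color, hidden_tint))
```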

 
 

By the way, since the shadow map is projected onto the whole location, we can easily use it to render the local map. The FOW remains dynamic even in this mode:

 
 

Foliage interaction

When creating game locations, we wanted to make them pleasant to look at and to move around in. We wanted subtle touches that make the picture feel alive, so we created interactive grass and bushes that sway when characters come close.

When characters walk around a location, they leave invisible particles behind; we literally use a particle system to create a trail behind the character. If we made it visible, it would look something like this:

 
 

We don't render this trail in the game, of course, but we use it to render the foliage interaction texture (FIT). Each pixel of the FIT is a 2D vector pointing in the direction the foliage needs to be pushed.

 
 

As with the FOW, we project the FIT onto the whole location. The foliage animation itself happens in the vertex shader of the foliage objects: each object projects the FIT onto its vertices and animates them in its shader.
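
A rough sketch of that idea (names, bounds and the height weighting are illustrative assumptions; in the game this runs in the foliage vertex shader):

```python
# Illustrative foliage interaction: the FIT stores a 2D push direction per
# texel, and each foliage vertex is bent by that vector, scaled so the base
# of a plant stays put while the tips sway.

def sample_fit(fit, world_x, world_z, bounds):
    (min_x, min_z), (max_x, max_z) = bounds
    u = (world_x - min_x) / (max_x - min_x)
    v = (world_z - min_z) / (max_z - min_z)
    h, w = len(fit), len(fit[0])
    px = min(w - 1, max(0, int(u * w)))
    py = min(h - 1, max(0, int(v * h)))
    return fit[py][px]                   # (dx, dz) push vector

def bend_vertex(vertex, fit, bounds, strength=0.3):
    x, y, z = vertex["position"]
    dx, dz = sample_fit(fit, x, z, bounds)
    # Weight by normalized height so roots stay anchored and tips sway most.
    weight = vertex["height01"] * strength
    return (x + dx * weight, y, z + dz * weight)
```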

 
 

Fluid simulation

We told you in the water update that we are working on a fluid simulation technology that will allow us to render blood in water. We are continuing to develop this technology so we can use it not only for water, but for fog, too. Both fog and blood are simulated using the same resources, which is why the two simulations run in parallel and cost as much as one, performance-wise. We are still experimenting, but we've already had some success.

 
 

Particle-snapping

Here's another unusual little thing we do with particles. We have a mechanism that we call Particle-snapping. It allows us to tie a particle system to a character's bones: each particle is glued to a corresponding bone and follows it during animation. We already have a lot of finished effects that use this technology, including Mirror Image.
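
Conceptually, snapping works something like this sketch (all names are made up; the actual implementation lives in the engine's particle and animation systems):

```python
# Illustrative particle snapping: each particle remembers the bone it was
# spawned on plus its offset in that bone's local space, and every frame it
# is moved to follow the animated bone.

def spawn_snapped_particle(bone, world_pos):
    # Store the offset in bone-local space at spawn time.
    local_offset = bone["world_to_local"](world_pos)
    return {"bone": bone, "local_offset": local_offset}

def update_snapped_particles(particles):
    for p in particles:
        # Re-apply the stored offset with the bone's current animated transform.
        p["world_pos"] = p["bone"]["local_to_world"](p["local_offset"])
```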

 
 

For this effect we used more than just Particle-snapping. Our FX artist Victor Demishev invented a way to create a visual copy of a character using particles. Before rendering this effect, we do a grab pass to get a copy of the screen. Then we shift the particles' world-space positions relative to the character, and in the effect's pixel shader we draw the previously grabbed texture with its UV coordinates shifted back in screen space.
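
In loose Python pseudocode the UV trick might look roughly like this; the conversion from the world-space shift to a screen-space offset is hand-waved here, and all names are illustrative:

```python
# Very loose sketch of the Mirror Image sampling trick.

def mirror_image_uv(particle_screen_uv, world_shift, world_to_screen_delta):
    # The particle was displaced by world_shift relative to the character, so we
    # sample the grabbed screen copy shifted back by the equivalent amount; the
    # copied image of the character then appears at the particle's new position.
    du, dv = world_to_screen_delta(world_shift)
    u, v = particle_screen_uv
    return (u - du, v - dv)

# Example with a placeholder world-to-screen conversion, purely for illustration:
print(mirror_image_uv((0.5, 0.5), (0.1, 0.0), lambda s: (s[0] * 0.1, s[1] * 0.1)))
```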

 
 

Work on Pathfinder: Kingmaker is in full swing. We keep improving our tools and rendering technologies: we've developed our own lighting system, built about a hundred custom tools, compiled over 150,000 shader variants and drunk several hundred gallons of coffee. There are many complicated and interesting things ahead of us, which we hope will make players happy when Pathfinder: Kingmaker is released.

 
 

 

Hail to the kings!
Owlcats