Up to now, the way we have been rendering a 3D scene is called forward rendering. We first render the 3D objects and apply the texture and lighting effects in a fragment shader. This method is not very efficient if we have a complex fragment shader pass with many lights and complex effects. In addition, we may end up applying these effects to fragments that are later discarded due to depth testing (although this is not exactly true if we enable early fragment testing).
In order to alleviate the problems described above, we can change the way that we render the scene by using a technique called deferred shading. With deferred shading we first render the geometry information that is required in later stages (in the fragment shader) to a buffer. The complex calculations required by the fragment shader are postponed, deferred, to a later stage that uses the information stored in those buffers.
You can find the complete source code for this chapter here.
Concepts
Deferred shading requires two rendering passes. The first one is the geometry pass, where we render the scene to a buffer that will contain the following information:
Depth value.
The diffuse colors and reflectance factor for each position.
The specular component for each position.
The normals at each position (also in view space coordinates).
All that information is stored in a buffer called G-Buffer.
The second pass is called the lighting pass. This pass takes a quad that fills up the whole screen and generates the color information for each fragment using the information contained in the G-Buffer. By the time we perform the lighting pass, the depth test (performed in the geometry pass) will already have removed all the scene data that cannot be seen. Hence, the number of operations to be done is restricted to what will be displayed on the screen.
You may be asking whether performing additional rendering passes results in a performance increase or not. The answer is that it depends. Deferred shading is usually used when you have many lights. In that case, the additional rendering steps are compensated by the reduction of operations performed in the fragment shader.
G-Buffer
So let’s start coding. The first task is to create a new class for the G-Buffer, named GBuffer.
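Since the full listing is in the source code, here is just a minimal sketch of the class skeleton (the field names are illustrative):

```java
public class GBuffer {

    // Maximum number of attachment textures (color attachments plus depth)
    private static final int TOTAL_TEXTURES = 4;

    // Handle of the OpenGL frame buffer object (the G-Buffer itself)
    private int gBufferId;
    // Handles of the individual attachment textures
    private int[] textureIds;
    // Size of the attachment textures
    private int width;
    private int height;
    ...
}
```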
The class defines a constant that models the maximum number of buffers to be used, the identifier associated with the G-Buffer itself, an array for the individual texture buffers and the size of those textures.
The first thing that we do in the constructor is create a frame buffer. Remember that a frame buffer is just an OpenGL object that render operations can target instead of the screen. Then we generate a set of textures (four of them) that will be associated with the frame buffer.
After that, we use a for loop to initialize the textures. We have the following types:
“Regular textures”, that will store albedo (diffuse) colors, normals, the specular component, etc.
A texture for storing the depth buffer. This will be our last texture.
Once the textures have been initialized, we enable sampling for them and attach them to the frame buffer. Each color texture is attached using an identifier which starts at GL_COLOR_ATTACHMENT0 and increments by one per texture, so the albedo is attached using GL_COLOR_ATTACHMENT0, the normals use GL_COLOR_ATTACHMENT1 (which is GL_COLOR_ATTACHMENT0 + 1), and so on. The depth texture is attached using GL_DEPTH_ATTACHMENT.
After all the textures have been created, we need to enable them to be used by the fragment shader for rendering. This is done with the glDrawBuffers call. We just pass an array with the identifiers of the color attachments used (GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, etc.).
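Putting the steps above together, the constructor could be sketched like this (static imports from org.lwjgl.opengl.GL30 and org.lwjgl.system.MemoryStack are assumed; the texture formats GL_RGBA32F and GL_DEPTH_COMPONENT32F are a reasonable but not mandatory choice):

```java
public GBuffer(Window window) {
    // Create the frame buffer that the geometry pass will render into
    gBufferId = glGenFramebuffers();
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gBufferId);

    textureIds = new int[TOTAL_TEXTURES];
    glGenTextures(textureIds);

    width = window.getWidth();
    height = window.getHeight();

    for (int i = 0; i < TOTAL_TEXTURES; i++) {
        glBindTexture(GL_TEXTURE_2D, textureIds[i]);
        int attachmentType;
        if (i == TOTAL_TEXTURES - 1) {
            // The last texture stores the depth buffer
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
                    GL_DEPTH_COMPONENT, GL_FLOAT, (ByteBuffer) null);
            attachmentType = GL_DEPTH_ATTACHMENT;
        } else {
            // "Regular" textures: albedo, normals, specular
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                    GL_RGBA, GL_FLOAT, (ByteBuffer) null);
            attachmentType = GL_COLOR_ATTACHMENT0 + i;
        }
        // Enable sampling for the texture
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        // Attach the texture to the frame buffer
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, attachmentType, GL_TEXTURE_2D,
                textureIds[i], 0);
    }

    // Enable the color attachments for writing from the fragment shader
    try (MemoryStack stack = MemoryStack.stackPush()) {
        IntBuffer intBuff = stack.mallocInt(TOTAL_TEXTURES);
        for (int i = 0; i < TOTAL_TEXTURES; i++) {
            intBuff.put(i, GL_COLOR_ATTACHMENT0 + i);
        }
        glDrawBuffers(intBuff);
    }

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
```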
The rest of the class just contains getter methods and the cleanup one.
Let's examine the changes that we need to apply for the geometry pass. We will apply these changes to the SceneRender class and the associated shaders. Starting with the SceneRender class, we need to remove the light constants and light uniforms, since they will not be used in this pass. (To simplify, we will also stop using the ambient color for materials, so we need to remove that uniform, and we will remove the selected entity uniform as well.)
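A sketch of the updated render method (the getter names on GBuffer and the uniform handling are assumptions carried over from previous chapters; the per-entity drawing loop is unchanged and elided):

```java
public void render(Scene scene, GBuffer gBuffer) {
    // Render to the G-Buffer instead of the screen
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, gBuffer.getGBufferId());
    glViewport(0, 0, gBuffer.getWidth(), gBuffer.getHeight());
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Transparent objects are tricky in deferred shading; disable blending here
    glDisable(GL_BLEND);

    shaderProgram.bind();

    uniformsMap.setUniform("projectionMatrix", scene.getProjection().getProjMatrix());
    uniformsMap.setUniform("viewMatrix", scene.getCamera().getViewMatrix());
    ...
    // Draw the meshes as in previous chapters; no light uniforms are set any more
    ...

    shaderProgram.unbind();
    glEnable(GL_BLEND);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
}
```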
You can see that we now receive a GBuffer instance as a method parameter. That buffer is where we will perform the rendering, therefore we first bind it by calling glBindFramebuffer. After that, we clear the buffer and disable blending. Transparent objects are a bit tricky when using deferred rendering; the usual approach is to render them in the lighting pass or to discard them in the geometry pass. As you can see, we have also removed all the light uniform set up code.
The only change in the vertex shader (scene.vert) is that the view position is now a four component vector (vec4).
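For instance (a sketch; the variable names are illustrative):

```glsl
out vec4 outViewPosition;
...
void main()
{
    ...
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    outViewPosition = mvPosition;
    gl_Position = projectionMatrix * mvPosition;
}
```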
In the fragment shader (scene.frag), we declare the textures that we will write to as output variables. As you can see, we just dump the diffuse color (which can come from the texture associated with the material or from one of its components), the specular component, the normal and the depth values for the shadow map. You may notice that we do not store the position in the textures. This is because we can reconstruct the fragment position using depth values; we will see how this can be done in the lighting pass.
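A sketch of those outputs (the names and exact assignments are assumptions; the attachment locations match the order in which the G-Buffer textures were attached):

```glsl
// One output per G-Buffer color attachment
layout (location = 0) out vec4 buffAlbedo;
layout (location = 1) out vec4 buffNormal;
layout (location = 2) out vec4 buffSpecular;
...
void main()
{
    ...
    buffAlbedo   = vec4(diffuseColor.xyz, reflectance);
    // Normals are in [-1, 1]; remap them to [0, 1] to store them in a color texture
    buffNormal   = vec4(0.5 * normal + 0.5, 1.0);
    buffSpecular = specularColor;
    // The depth value is written automatically to the depth attachment
}
```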
SIDE NOTE: We have simplified the Material class definition removing the ambient color component.
If you debug the sample execution with an OpenGL debugger (such as RenderDoc), you can view the textures generated during the geometry pass. The albedo texture will look like this:
The texture that holds the values for the normals will look like this:
The texture that holds the values for the specular colors will look like this:
And finally, the depth texture will look like this:
Lighting pass
In order to perform the lighting pass, we will create a new class named LightsRender which starts like this:
You can see that, in addition to creating a new shader program, we define a new attribute of the QuadMesh class (which has not been defined yet). Before analyzing the render method, let’s think a little bit about how we will render the lights. We need to use the contents of the G-Buffer, but in order to use them we first need to render something. However, we have already drawn the scene, so what are we going to render now? The answer is simple: we just need to render a quad that fills up the whole screen. For each fragment of that quad, we will use the data contained in the G-Buffer to generate the correct output color. This is where the QuadMesh class comes into play: it just defines a quad which will be used for rendering in the lighting pass.
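A sketch of such a class (following the VAO/VBO creation pattern used for meshes in previous chapters; MemoryStack comes from org.lwjgl.system):

```java
public class QuadMesh {

    private int numVertices;
    private int vaoId;
    private List<Integer> vboIdList;

    public QuadMesh() {
        try (MemoryStack stack = MemoryStack.stackPush()) {
            vboIdList = new ArrayList<>();
            // A quad that covers the whole screen, defined directly in NDC
            float[] positions = new float[]{
                    -1.0f,  1.0f, 0.0f,
                     1.0f,  1.0f, 0.0f,
                    -1.0f, -1.0f, 0.0f,
                     1.0f, -1.0f, 0.0f,};
            float[] textCoords = new float[]{
                    0.0f, 1.0f,
                    1.0f, 1.0f,
                    0.0f, 0.0f,
                    1.0f, 0.0f,};
            int[] indices = new int[]{0, 2, 1, 1, 2, 3};
            numVertices = indices.length;

            vaoId = glGenVertexArrays();
            glBindVertexArray(vaoId);

            // Positions VBO
            int vboId = glGenBuffers();
            vboIdList.add(vboId);
            FloatBuffer positionsBuffer = stack.callocFloat(positions.length);
            positionsBuffer.put(0, positions);
            glBindBuffer(GL_ARRAY_BUFFER, vboId);
            glBufferData(GL_ARRAY_BUFFER, positionsBuffer, GL_STATIC_DRAW);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);

            // Texture coordinates VBO
            vboId = glGenBuffers();
            vboIdList.add(vboId);
            FloatBuffer textCoordsBuffer = stack.callocFloat(textCoords.length);
            textCoordsBuffer.put(0, textCoords);
            glBindBuffer(GL_ARRAY_BUFFER, vboId);
            glBufferData(GL_ARRAY_BUFFER, textCoordsBuffer, GL_STATIC_DRAW);
            glEnableVertexAttribArray(1);
            glVertexAttribPointer(1, 2, GL_FLOAT, false, 0, 0);

            // Index VBO
            vboId = glGenBuffers();
            vboIdList.add(vboId);
            IntBuffer indicesBuffer = stack.callocInt(indices.length);
            indicesBuffer.put(0, indices);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboId);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer, GL_STATIC_DRAW);

            glBindBuffer(GL_ARRAY_BUFFER, 0);
            glBindVertexArray(0);
        }
    }
    ...
}
```

Since the quad is defined directly in normalized device coordinates, no transformation matrices are needed to render it.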
As you can see, we just need position and texture coordinate attributes (to properly access the G-Buffer textures). Going back to the LightsRender class, we need a method to create the uniforms which, as you will see, restores the light uniforms previously used in the SceneRender class plus a set of new ones to map the G-Buffer textures (albedoSampler, normalSampler, specularSampler and depthSampler). In addition to that, we will need new uniforms to calculate the fragment position from depth values, such as invProjectionMatrix and invViewMatrix. We will see in the shaders code how they will be used.
```java
public class LightsRender {
    ...
    private void createUniforms() {
        uniformsMap = new UniformsMap(shaderProgram.getProgramId());
        uniformsMap.createUniform("albedoSampler");
        uniformsMap.createUniform("normalSampler");
        uniformsMap.createUniform("specularSampler");
        uniformsMap.createUniform("depthSampler");
        uniformsMap.createUniform("invProjectionMatrix");
        uniformsMap.createUniform("invViewMatrix");
        uniformsMap.createUniform("ambientLight.factor");
        uniformsMap.createUniform("ambientLight.color");

        for (int i = 0; i < MAX_POINT_LIGHTS; i++) {
            String name = "pointLights[" + i + "]";
            uniformsMap.createUniform(name + ".position");
            uniformsMap.createUniform(name + ".color");
            uniformsMap.createUniform(name + ".intensity");
            uniformsMap.createUniform(name + ".att.constant");
            uniformsMap.createUniform(name + ".att.linear");
            uniformsMap.createUniform(name + ".att.exponent");
        }
        for (int i = 0; i < MAX_SPOT_LIGHTS; i++) {
            String name = "spotLights[" + i + "]";
            uniformsMap.createUniform(name + ".pl.position");
            uniformsMap.createUniform(name + ".pl.color");
            uniformsMap.createUniform(name + ".pl.intensity");
            uniformsMap.createUniform(name + ".pl.att.constant");
            uniformsMap.createUniform(name + ".pl.att.linear");
            uniformsMap.createUniform(name + ".pl.att.exponent");
            uniformsMap.createUniform(name + ".conedir");
            uniformsMap.createUniform(name + ".cutoff");
        }
        uniformsMap.createUniform("dirLight.color");
        uniformsMap.createUniform("dirLight.direction");
        uniformsMap.createUniform("dirLight.intensity");

        uniformsMap.createUniform("fog.activeFog");
        uniformsMap.createUniform("fog.color");
        uniformsMap.createUniform("fog.density");

        for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
            uniformsMap.createUniform("shadowMap_" + i);
            uniformsMap.createUniform("cascadeshadows[" + i + "]" + ".projViewMatrix");
            uniformsMap.createUniform("cascadeshadows[" + i + "]" + ".splitDistance");
        }
    }
    ...
}
```
The render method is defined like this:
```java
public class LightsRender {
    ...
    public void render(Scene scene, ShadowRender shadowRender, GBuffer gBuffer) {
        shaderProgram.bind();

        updateLights(scene);

        // Bind the G-Buffer textures
        int[] textureIds = gBuffer.getTextureIds();
        int numTextures = textureIds != null ? textureIds.length : 0;
        for (int i = 0; i < numTextures; i++) {
            glActiveTexture(GL_TEXTURE0 + i);
            glBindTexture(GL_TEXTURE_2D, textureIds[i]);
        }

        uniformsMap.setUniform("albedoSampler", 0);
        uniformsMap.setUniform("normalSampler", 1);
        uniformsMap.setUniform("specularSampler", 2);
        uniformsMap.setUniform("depthSampler", 3);

        Fog fog = scene.getFog();
        uniformsMap.setUniform("fog.activeFog", fog.isActive() ? 1 : 0);
        uniformsMap.setUniform("fog.color", fog.getColor());
        uniformsMap.setUniform("fog.density", fog.getDensity());

        int start = 4;
        List<CascadeShadow> cascadeShadows = shadowRender.getCascadeShadows();
        for (int i = 0; i < CascadeShadow.SHADOW_MAP_CASCADE_COUNT; i++) {
            glActiveTexture(GL_TEXTURE0 + start + i);
            uniformsMap.setUniform("shadowMap_" + i, start + i);
            CascadeShadow cascadeShadow = cascadeShadows.get(i);
            uniformsMap.setUniform("cascadeshadows[" + i + "]" + ".projViewMatrix", cascadeShadow.getProjViewMatrix());
            uniformsMap.setUniform("cascadeshadows[" + i + "]" + ".splitDistance", cascadeShadow.getSplitDistance());
        }
        shadowRender.getShadowBuffer().bindTextures(GL_TEXTURE0 + start);

        uniformsMap.setUniform("invProjectionMatrix", scene.getProjection().getInvProjMatrix());
        uniformsMap.setUniform("invViewMatrix", scene.getCamera().getInvViewMatrix());

        glBindVertexArray(quadMesh.getVaoId());
        glDrawElements(GL_TRIANGLES, quadMesh.getNumVertices(), GL_UNSIGNED_INT, 0);

        shaderProgram.unbind();
    }
    ...
}
```
After updating the lights, we activate the textures that hold the results of the geometry pass. After that, we set the fog and cascade shadow uniforms and just draw a quad.
So, how does the vertex shader for the lighting pass (lights.vert) look?
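It can be as simple as this pass-through shader (a sketch; attribute and varying names are illustrative):

```glsl
#version 330

layout (location=0) in vec3 inPos;
layout (location=1) in vec2 inCoord;

out vec2 outTextCoord;

void main()
{
    outTextCoord = inCoord;
    gl_Position = vec4(inPos, 1.0);
}
```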
The code above just dumps the vertices directly and passes the texture coordinates to the fragment shader. The fragment shader (lights.frag) is defined like this:
As you can see, it contains functions that should look familiar to you; they were used in previous chapters in the scene fragment shader. The important things to note here are the lines that sample the G-Buffer attachments and reconstruct the fragment position.
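They could look like this (a sketch; variable names are illustrative):

```glsl
// Sample the G-Buffer attachments at the current fragment
vec4 albedoSamplerValue = texture(albedoSampler, outTextCoord);
vec3 albedo = albedoSamplerValue.rgb;
float reflectance = albedoSamplerValue.a;
// Normals were stored in [0, 1]; map them back to [-1, 1]
vec3 normal = normalize(2.0 * texture(normalSampler, outTextCoord).rgb - 1.0);
vec4 specular = texture(specularSampler, outTextCoord);

// Reconstruct the fragment position from the depth value:
// build the clip space position of this fragment...
float depth = texture(depthSampler, outTextCoord).r * 2.0 - 1.0;
vec4 clip = vec4(outTextCoord.x * 2.0 - 1.0, outTextCoord.y * 2.0 - 1.0, depth, 1.0);
// ...undo the projection (including the perspective divide) to get view space...
vec4 view_w = invProjectionMatrix * clip;
vec3 view_pos = view_w.xyz / view_w.w;
// ...and undo the view transform to get world space (used for the shadow maps)
vec4 world_pos = invViewMatrix * vec4(view_pos, 1.0);
```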
We first sample the albedo, the normal map (converting from the [0, 1] to the [-1, 1] range) and the specular attachment according to the current fragment coordinates. In addition to that, there is a code fragment that may look new to you. We need the fragment position to perform the light calculations, but we have no position attachment. This is where the depth attachment and the inverse projection matrix come into play. With that information we can reconstruct the position in view space coordinates (and, with the inverse view matrix, in world space) without requiring another attachment that stores the position. You will see in other tutorials that they set up a specific attachment for positions, but it is much more efficient to do it this way. Always remember: the less memory consumed by the deferred attachments, the better. With all that information, we simply iterate over the lights to calculate each light's contribution to the final color.
The rest of the code is quite similar to the one in the fragment shader of the scene render.
Finally, we need to update the Render class to use the new classes.
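A sketch of the relevant changes (the method name lightRenderStart is illustrative, and the remaining render stages from previous chapters are elided with ...):

```java
public class Render {
    ...
    private GBuffer gBuffer;
    private LightsRender lightsRender;
    ...

    public Render(Window window) {
        ...
        lightsRender = new LightsRender();
        gBuffer = new GBuffer(window);
    }

    public void cleanup() {
        ...
        lightsRender.cleanup();
        gBuffer.cleanup();
    }

    private void lightRenderStart(Window window) {
        // The lighting pass renders to the default frame buffer (the screen)
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glViewport(0, 0, window.getWidth(), window.getHeight());
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }

    public void render(Window window, Scene scene) {
        shadowRender.render(scene);
        // Geometry pass: fill the G-Buffer
        sceneRender.render(scene, gBuffer);
        // Lighting pass: draw a screen-filling quad using the G-Buffer contents
        lightRenderStart(window);
        lightsRender.render(scene, shadowRender, gBuffer);
        ...
    }
    ...
}
```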