In this chapter, we will explain a technique that will dramatically improve how our 3D models look. By now we are able to apply textures to complex 3D models, but we are still far away from what real objects look like. Surfaces in the real world are not perfectly flat; they have imperfections that our 3D models currently do not have.
In order to render more realistic scenes, we are going to use normal maps. If you look at a flat surface in the real world you will see that its imperfections can be seen even at a distance by the way that light reflects on it. In a 3D scene, a flat surface will have no imperfections; we can apply a texture to it, but we won’t change the way that light reflects on it. That is what makes the difference.
We might think of increasing the detail of our models by increasing the number of triangles so they reflect those imperfections, but performance would degrade. What we need is a way to change how light reflects on surfaces to increase realism. This is achieved with the normal mapping technique.
You can find the complete source code for this chapter here.
Concepts
Let’s go back to the flat surface example: a plane can be defined by two triangles which form a quad. If you remember from the lighting chapters, the element that models how light reflects is the surface normal. In this case, we have a single normal for the whole surface; each fragment of the surface uses the same normal when calculating how light affects it. This is shown in the next figure.
If we could change the normals for each fragment of the surface we could model surface imperfections to render them in a more realistic way. This is shown in the next figure.
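To see why per-fragment normals matter, here is a minimal, hypothetical sketch in plain Java (not the shader code from this chapter) comparing the Lambert diffuse factor for a single shared normal versus slightly perturbed per-fragment normals:

```java
public class PerFragmentNormalDemo {

    // Lambert diffuse factor: max(0, dot(N, L)), both vectors assumed normalized
    static float diffuse(float[] n, float[] l) {
        float d = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
        return Math.max(0.0f, d);
    }

    static float[] normalize(float[] v) {
        float len = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new float[]{v[0] / len, v[1] / len, v[2] / len};
    }

    public static void main(String[] args) {
        float[] light = normalize(new float[]{0.0f, 0.0f, 1.0f});

        // A flat surface: every fragment shares the same normal -> uniform shading
        float[] flat = {0.0f, 0.0f, 1.0f};
        System.out.printf("flat: %.3f%n", diffuse(flat, light));

        // Perturbed per-fragment normals -> each fragment reflects light differently
        float[][] perturbed = {
                normalize(new float[]{0.2f, 0.0f, 1.0f}),
                normalize(new float[]{-0.2f, 0.1f, 1.0f}),
        };
        for (float[] n : perturbed) {
            System.out.printf("perturbed: %.3f%n", diffuse(n, light));
        }
    }
}
```

With one shared normal every fragment gets the same diffuse value; with perturbed normals the values vary per fragment, which is exactly the variation a normal map introduces.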
Let’s use the following texture to draw a quad.
An example of a normal map texture for the image above may be the following.
So, to render an object using normal maps we just need an extra texture and use it while rendering fragments to get the appropriate normal value.
Implementation
You can check a great tutorial on this aspect here.
Therefore, the first step is to add support for normal map loading in the ModelLoader class, including tangent and bitangent information. If you recall, when setting the model loading flags for assimp, we included this one: aiProcess_CalcTangentSpace. This flag tells assimp to automatically calculate tangent and bitangent data.
In the processMaterial method we will first query for the presence of a normal map texture. If so, we load that texture and associate that texture path to the material:
The processTangents and processBitangents methods are quite similar to the one that loads normals:
```java
public class ModelLoader {
    ...
    private static float[] processBitangents(AIMesh aiMesh, float[] normals) {
        AIVector3D.Buffer buffer = aiMesh.mBitangents();
        float[] data = new float[buffer.remaining() * 3];
        int pos = 0;
        while (buffer.remaining() > 0) {
            AIVector3D aiBitangent = buffer.get();
            data[pos++] = aiBitangent.x();
            data[pos++] = aiBitangent.y();
            data[pos++] = aiBitangent.z();
        }

        // Assimp may not calculate tangents with models that do not have texture
        // coordinates. Just create empty values
        if (data.length == 0) {
            data = new float[normals.length];
        }
        return data;
    }
    ...
    private static float[] processTangents(AIMesh aiMesh, float[] normals) {
        AIVector3D.Buffer buffer = aiMesh.mTangents();
        float[] data = new float[buffer.remaining() * 3];
        int pos = 0;
        while (buffer.remaining() > 0) {
            AIVector3D aiTangent = buffer.get();
            data[pos++] = aiTangent.x();
            data[pos++] = aiTangent.y();
            data[pos++] = aiTangent.z();
        }

        // Assimp may not calculate tangents with models that do not have texture
        // coordinates. Just create empty values
        if (data.length == 0) {
            data = new float[normals.length];
        }
        return data;
    }
    ...
}
```
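The fallback at the end of both methods deserves a note: when assimp produces no tangent data (for example, because the model has no texture coordinates), we substitute a zero-filled array sized like the normals array, so every vertex still gets a (degenerate) entry and the VBO layout stays consistent. A minimal, dependency-free sketch of that logic:

```java
public class TangentFallback {

    // Mirrors the fallback in processTangents/processBitangents: if assimp
    // produced no tangent data, emit zeros sized like the normals array so
    // every vertex still has an entry and buffer sizes stay consistent.
    static float[] withFallback(float[] data, float[] normals) {
        if (data.length == 0) {
            return new float[normals.length];
        }
        return data;
    }

    public static void main(String[] args) {
        // Two vertices worth of normals (3 floats each), but no tangents
        float[] normals = {0f, 0f, 1f, 0f, 0f, 1f};
        float[] empty = {};
        System.out.println(withFallback(empty, normals).length); // prints 6
    }
}
```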
As you can see, we also need to modify the Mesh and Material classes to hold the new data. Let's start with the Mesh class:
We need to create two new VBOs for tangent and bitangent data (which follow a structure similar to the normals data) and therefore update the position of the texture coordinates VBO.
In the Material class we need to store the path to the normal map texture:
As you can see, we need to define the new input data associated with the bitangent and tangent. We transform those elements in the same way that we handled the normal and pass that data as an input to the fragment shader (scene.frag):
We start by defining the new inputs from the vertex shader, including an additional element in the Material struct which signals whether a normal map is available (hasNormalMap). We also add a new uniform for the normal map texture (normalSampler). The next step is to define a function that updates the normal based on the normal map texture:
The calcNormal function takes the following parameters:
The vertex normal.
The vertex tangent.
The vertex bitangent.
The texture coordinates.
The first thing we do in that function is calculate the TBN matrix. After that, we get the normal value from the normal map texture and use the TBN matrix to transform it from tangent space to view space. Remember that the color we get holds the normal coordinates, but since they are stored as RGB values they are contained in the range [0, 1]. We need to transform them to the range [-1, 1], so we just multiply by two and subtract 1.
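The math inside calcNormal can be sketched in plain Java (a hypothetical, dependency-free illustration of the same steps, not the shader itself): decode the RGB sample from [0, 1] to [-1, 1], then multiply by the TBN matrix whose columns are the tangent, bitangent and normal vectors:

```java
public class NormalMapMath {

    // Decode an RGB-stored normal from [0, 1] to [-1, 1]: n = rgb * 2 - 1
    static float[] decode(float[] rgb) {
        return new float[]{rgb[0] * 2f - 1f, rgb[1] * 2f - 1f, rgb[2] * 2f - 1f};
    }

    // Apply the TBN matrix whose columns are the tangent (T), bitangent (B)
    // and normal (N) vectors: n' = T*x + B*y + N*z
    static float[] tbnTransform(float[] t, float[] b, float[] n, float[] ts) {
        return new float[]{
                t[0] * ts[0] + b[0] * ts[1] + n[0] * ts[2],
                t[1] * ts[0] + b[1] * ts[1] + n[1] * ts[2],
                t[2] * ts[0] + b[2] * ts[1] + n[2] * ts[2],
        };
    }

    public static void main(String[] args) {
        // The "flat" normal map color (0.5, 0.5, 1.0) decodes to (0, 0, 1)...
        float[] decoded = decode(new float[]{0.5f, 0.5f, 1.0f});

        // ...and the TBN transform maps it onto the surface normal N
        float[] t = {1f, 0f, 0f};
        float[] b = {0f, 1f, 0f};
        float[] n = {0f, 0f, 1f};
        float[] result = tbnTransform(t, b, n, decoded);
        System.out.printf("(%.1f, %.1f, %.1f)%n", result[0], result[1], result[2]); // (0.0, 0.0, 1.0)
    }
}
```

Note how the "flat" color (0.5, 0.5, 1.0) decodes to (0, 0, 1) and, after the TBN transform, ends up aligned with the surface normal, which is exactly the unperturbed case.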
Finally, we use that function only if the material defines a normal map texture.
We also need to modify the SceneRender class to create and set the new uniforms used in the shaders:
The last step is to update the Main class to show this effect. We will load two quads, one with and one without a normal map. We will also use the left and right arrow keys to control the light angle to show the effect.
As you can see, the quad that has a normal map applied gives the impression of having more volume. Although it is, in essence, a flat surface like the other quad, you can see the difference in how the light reflects on it.
The way we are going to achieve this is by loading another texture that stores the normals for the surface. Each pixel of the normal texture will contain the values of the x, y and z coordinates of the normal stored as an RGB value.
As you can see, it's as if we had applied a color transformation to the original texture. Each pixel stores normal information using the color components. One thing that you will usually see when viewing normal maps is that the dominant color tends to be blue. This is due to the fact that normals point towards the positive z axis. For flat surfaces the z component will usually have a much higher value than the x and y ones, since the normal points out of the surface. Since the x, y and z coordinates are mapped to RGB, the blue component will also have a higher value.
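As a hypothetical illustration of why normal maps look bluish, this plain-Java sketch encodes normals into RGB with the usual mapping rgb = (n + 1) / 2; a normal pointing straight out of the surface becomes the characteristic light blue (0.5, 0.5, 1.0):

```java
public class NormalEncodeDemo {

    // Encode a unit normal from [-1, 1] into RGB in [0, 1]: rgb = (n + 1) / 2
    static float[] encode(float[] n) {
        return new float[]{(n[0] + 1f) / 2f, (n[1] + 1f) / 2f, (n[2] + 1f) / 2f};
    }

    public static void main(String[] args) {
        // A normal pointing straight out of the surface (+z)
        float[] up = {0f, 0f, 1f};
        float[] rgb = encode(up);
        System.out.printf("%.1f %.1f %.1f%n", rgb[0], rgb[1], rgb[2]); // 0.5 0.5 1.0 -> mostly blue

        // A slightly tilted normal still has a dominant blue component
        float[] tilted = {0.2f, 0.1f, 0.97f};
        float[] rgb2 = encode(tilted);
        System.out.printf("%.2f %.2f %.2f%n", rgb2[0], rgb2[1], rgb2[2]);
    }
}
```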
Usually, normal maps are not defined in that way; they are defined in the so-called tangent space. The tangent space is a coordinate system that is local to each triangle of the model. In that coordinate space the z axis always points out of the surface. This is the reason why a normal map is usually bluish, even for complex models with opposing faces. In order to work in tangent space, we need normal, tangent and bitangent vectors. We already have the normal vector; the tangent and bitangent vectors are vectors perpendicular to the normal. We need these vectors to calculate the TBN matrix, which will allow us to transform data in tangent space to the coordinate system we are using in our shaders.