Chapter 11 - Lights
In this chapter we will learn how to add light to our 3D game engine. We will not implement a physically perfect light model because, setting aside the complexity, it would require a tremendous amount of computer resources. Instead we will implement an approximation which provides decent results: we will use an algorithm named Phong shading (developed by Bui Tuong Phong). Another important thing to point out is that we will only model lights; we won't model the shadows that should be generated by those lights (this will be done in another chapter).
Before we start, let us define some light types:
- Point light: This type of light models a light source that’s emitted uniformly from a point in space in all directions.
- Spot light: This type of light models a light source that’s emitted from a point in space, but, instead of emitting in all directions, its rays are restricted to a cone.
- Directional light: This type of light models the light that we receive from the sun; all the objects in the 3D space are hit by parallel light rays coming from a specific direction. No matter if the object is close or far away, all the rays hit the objects at the same angle.
- Ambient light: This type of light comes from everywhere in the space and illuminates all the objects in the same way.

Light types
Thus, to model light we need to take into consideration the type of light, its position, and some other parameters such as its color. Of course, we must also consider the way that objects hit by light rays absorb and reflect that light.
The Phong shading algorithm will model the effects of light for each point in our model, that is, for every vertex. This is why it’s called a local illumination simulation, and it is also the reason why this algorithm will not calculate shadows: it just calculates the light to be applied to every vertex without taking into consideration whether the vertex is behind an object that blocks the light. We will overcome this drawback in later chapters. In exchange, it is a simple and fast algorithm that provides very good results. We will use here a simplified version that does not model materials in depth.
The Phong algorithm considers three components for lighting:
- Ambient light: models light that comes from everywhere; it will serve us to illuminate (with the required intensity) the areas that are not hit by any light, like a background light.
- Diffuse reflectance: takes into consideration that surfaces that are facing the light source are brighter.
- Specular reflectance: models how light reflects on polished or metallic surfaces.
At the end what we want to obtain is a factor that, multiplied by the color assigned to a fragment, will make that color brighter or darker depending on the light it receives. Let’s name our components $A$ for ambient, $D$ for diffuse and $S$ for specular. That factor will be the addition of those components:

$$L = A + D + S$$
In fact, those components are colors: they are the color contributions of each light component. This is due to the fact that light components not only provide a degree of intensity, but can also modify the color of the model. In our fragment shader we just need to multiply that light color by the original fragment color (obtained from a texture or a base color).
We can also assign different colors, for the same materials, that will be used in the ambient, diffuse and specular components. Hence, these components will be modulated by the colors associated to the material. If the material has a texture, we will simply use a single texture for each of the components.
So the final color for a non-textured material will be:

$$L = A \cdot ambientColor + D \cdot diffuseColor + S \cdot specularColor$$

And the final color for a textured material will be:

$$L = A \cdot textureColor + D \cdot textureColor + S \cdot textureColor$$
Normals are a key element when working with lights. Let’s define them first: the normal of a plane is a vector perpendicular to that plane which has a length equal to one.

Normals
As you can see in the figure above, a plane can have two normals. Which one should we use? Normals in 3D graphics are used for lighting, so we should choose the normal which is oriented towards the source of light. In other words, we should choose the normal that points out from the external face of our model.
When we have a 3D model, it is composed of polygons, triangles in our case. Each triangle is composed of three vertices. The normal vector of a triangle is the vector perpendicular to the triangle’s surface which has a length equal to one.
A vertex normal is associated to a specific vertex and is the combination of the normals of the surrounding triangles (of course, its length is equal to one). Here you can see the vertex normals of a 3D mesh (taken from Wikipedia); a sketch of how such normals can be computed follows the figure.

Vertex normals
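How a vertex normal can be derived from the surrounding faces is easy to sketch in code. The following is a minimal illustration using JOML (the math library we already use); the helper names and the plain averaging strategy are illustrative choices, not part of the engine:

```java
import org.joml.Vector3f;

public class NormalUtils {

    // Normal of a triangle defined by points a, b, c: the normalized
    // cross product of two of its edges.
    public static Vector3f faceNormal(Vector3f a, Vector3f b, Vector3f c) {
        Vector3f edge1 = new Vector3f(b).sub(a);
        Vector3f edge2 = new Vector3f(c).sub(a);
        return edge1.cross(edge2).normalize();
    }

    // A vertex normal as the normalized sum of the normals of the faces
    // that share that vertex.
    public static Vector3f vertexNormal(Vector3f... surroundingFaceNormals) {
        Vector3f result = new Vector3f();
        for (Vector3f n : surroundingFaceNormals) {
            result.add(n);
        }
        return result.normalize();
    }
}
```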
Let’s talk now about diffuse reflectance. This models the fact that surfaces hit perpendicularly by light rays look brighter than surfaces that receive light at a more oblique angle. Those surfaces receive more light per unit of area; their light density (let me call it this way) is higher.

Diffuse Light
But how do we calculate this? This is where we first start using normals. Let’s draw the normals for three points in the previous figure. As you can see, the normal for each point is the vector perpendicular to the tangent plane at that point. Instead of drawing rays coming from the source of light, we will draw vectors from each point to the light source (that is, in the opposite direction).

Normals and light direction
As you can see, the normal associated to $P_1$, named $N_1$, is parallel to the vector that points to the light source, which models the opposite of the light ray ($N_1$ has been sketched displaced so you can see it, but it’s equivalent mathematically). $N_1$ has an angle equal to $0$ with the vector that points to the light source. Its surface is perpendicular to the light ray, and $P_1$ would be the brightest point.
The normal associated to $P_2$, named $N_2$, has an angle of around 30 degrees with the vector that points to the light source, so it should be darker than $P_1$. Finally, the normal associated to $P_3$, named $N_3$, is also parallel to the vector that points to the light source, but the two vectors point in opposite directions. $N_3$ has an angle of 180 degrees with the vector that points to the light source, and $P_3$ should not get any light at all.
So it seems that we have a good approach for determining the light intensity that gets to a point, and it is related to the angle that the normal forms with the vector that points to the light source. How can we calculate this?
There’s a mathematical operation that we can use: the dot product. This operation takes two vectors and produces a number (a scalar) that is positive if the angle between them is acute, and negative if the angle between them is obtuse. If both vectors are normalized, that is, they both have a length equal to one, the dot product will be between $-1$ and $1$. The dot product will be $1$ if both vectors point in exactly the same direction (angle $0$); it will be $0$ if both vectors form a right angle, and it will be $-1$ if they point in opposite directions.
Let’s define two vectors, $v_1$ and $v_2$, and let $\alpha$ be the angle between them. The dot product is defined by the following formula:

$$v_1 \cdot v_2 = |v_1| \, |v_2| \, \cos(\alpha)$$
If both vectors are normalized, that is, their length (their modulus) is equal to one, the dot product is equal to the cosine of the angle between them. We will use that operation to calculate the diffuse reflectance component.
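As a quick sanity check of those values, here is a minimal sketch using JOML; the vectors are arbitrary examples:

```java
import org.joml.Vector3f;

public class DotProductDemo {
    public static void main(String[] args) {
        Vector3f up = new Vector3f(0, 1, 0);

        // Same direction: dot product is 1
        System.out.println(up.dot(new Vector3f(0, 1, 0)));
        // Right angle: dot product is 0
        System.out.println(up.dot(new Vector3f(1, 0, 0)));
        // Opposite directions: dot product is -1
        System.out.println(up.dot(new Vector3f(0, -1, 0)));
    }
}
```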
So we need to calculate the vector that points to the light source. How do we do this? We have the position of each point (the vertex position) and we have the position of the light source. First of all, both coordinates must be in the same coordinate space. To simplify, let’s assume that they are both in world coordinate space: then those positions are the coordinates of the vectors that point to the vertex position ($VP$) and to the light source ($VS$), as shown in the next figure.

Diffuse Light calculation I
If we subtract $VP$ from $VS$ we get the vector that we are looking for, which is called $L$.
Now we can compute the dot product between the vector that points to the light source and the normal. This product is called the Lambert term, after Johann Lambert, who was the first to propose that relation to model the brightness of a surface.
Let’s summarize how we can calculate it. We define the following variables:
- $vPos$: Position of our vertex in model view space coordinates.
- $lPos$: Position of the light in view space coordinates.
- $intensity$: Intensity of the light (from $0$ to $1$).
- $lColor$: Color of the light.
- $normal$: The vertex normal.
First we need to calculate the vector that points to the light source from the current position: $toLightDirection = lPos - vPos$. The result of that operation needs to be normalized.
Then we need to calculate the diffuse factor (a scalar): $diffuseFactor = normal \cdot toLightDirection$. It’s calculated as the dot product of two vectors, and since we want it to be between $-1$ and $1$, both vectors need to be normalized. Colors need to be between $0$ and $1$, so if the value is lower than $0$ we will set it to $0$.
Finally we just need to modulate the light color by the diffuse factor and the light intensity:

$$color = diffuseColor \cdot lColor \cdot diffuseFactor \cdot intensity$$
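Just to make the math concrete, here is a CPU-side sketch of the diffuse calculation using JOML. In the engine this computation will live in the fragment shader, and all the names here are illustrative:

```java
import org.joml.Vector3f;

public class DiffuseDemo {

    // Diffuse contribution for a single light, following the formulas above
    public static Vector3f diffuseColor(Vector3f materialDiffuse, Vector3f lightColor,
                                        float intensity, Vector3f vPos, Vector3f lPos,
                                        Vector3f normal) {
        // Normalized vector from the vertex towards the light source
        Vector3f toLightDirection = new Vector3f(lPos).sub(vPos).normalize();
        // Lambert term, clamped to 0 so back-facing surfaces get no light
        float diffuseFactor = Math.max(normal.dot(toLightDirection), 0.0f);
        return new Vector3f(materialDiffuse).mul(lightColor)
                .mul(diffuseFactor * intensity);
    }
}
```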
Before considering the specular component, we first need to examine how light is reflected. When light hits a surface, some part of it is absorbed and the rest is reflected; as you may remember from physics class, reflection is when light bounces off an object.

Light reflection
Of course, surfaces are not totally polished, and if you look closely you will see a lot of imperfections. Besides that, you have many light rays (photons, in fact) hitting that surface, which get reflected in a wide range of angles. Thus, what we see is like a beam of light being scattered by the surface. That is, light is diffused when hitting a surface, and that’s the diffuse component that we have been talking about previously.

Surface
But when light hits a polished surface, for instance a metal, the light suffers much less diffusion, and most of it bounces off in a single, mirror-like direction.

Polished surface
This is what the specular component models, and it depends on the material characteristics. Regarding specular reflectance, it’s important to note that the reflected light will only be visible if the camera is in a proper position, that is, if it's in the area where the reflected light is emitted.

Specular lighting
Now that the mechanism behind specular reflection has been explained, we are ready to calculate that component. First we need a vector that points from the light source to the vertex, which is just the opposite of the $toLightDirection$ vector we calculated for the diffuse component, so $fromLightDirection = -(toLightDirection)$.
Then we need to calculate the reflected light that results from the impact of $fromLightDirection$ on the surface, taking its normal into consideration. There’s a GLSL function, `reflect`, that does exactly that. So, $reflectedLight = reflect(fromLightDirection, normal)$.
We also need a vector that points to the camera; let’s name it $cameraDirection$. It is calculated as the difference between the camera position and the vertex position: $cameraDirection = cameraPos - vPos$. The camera position vector and the vertex position need to be in the same coordinate system, and the resulting vector needs to be normalized. The following figure sketches the main components we have calculated up to now.

Specular lighting calculation
Now we need to calculate the light intensity that we see, which we will call $specularFactor$. This component will be higher if the $cameraDirection$ and the $reflectedLight$ vectors are parallel and point in the same direction, and will take its lowest value if they point in opposite directions. In order to calculate this, the dot product comes to the rescue again: $specularFactor = cameraDirection \cdot reflectedLight$. We only want this value to be between $0$ and $1$, so if it’s lower than $0$ it will be set to $0$.
We also need to take into consideration that this light must be more intense if the camera points into the reflected light cone. This will be achieved by raising $specularFactor$ to a parameter named $specularPower$:

$$specularFactor = specularFactor^{\,specularPower}$$
Finally, we need to model the reflectivity of the material, which will also modulate the intensity of the light reflected. This will be done with another parameter named $reflectance$. So the color of the specular component will be:

$$color = specularColor \cdot lColor \cdot reflectance \cdot specularFactor \cdot intensity$$
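Again, here is a CPU-side sketch of those formulas using JOML, which happens to offer a `reflect` method analogous to the GLSL one; the names and the specular power value are illustrative:

```java
import org.joml.Vector3f;

public class SpecularDemo {

    public static Vector3f specularColor(Vector3f materialSpecular, Vector3f lightColor,
                                         float intensity, float reflectance,
                                         Vector3f toLightDirection, Vector3f normal,
                                         Vector3f cameraPos, Vector3f vPos) {
        Vector3f fromLightDirection = new Vector3f(toLightDirection).negate();
        // Reflect the incoming light ray around the surface normal
        Vector3f reflectedLight = new Vector3f(fromLightDirection).reflect(normal).normalize();
        Vector3f cameraDirection = new Vector3f(cameraPos).sub(vPos).normalize();
        float specularFactor = Math.max(cameraDirection.dot(reflectedLight), 0.0f);
        float specularPower = 10.0f; // same constant the fragment shader will use
        specularFactor = (float) Math.pow(specularFactor, specularPower);
        return new Vector3f(materialSpecular).mul(lightColor)
                .mul(intensity * specularFactor * reflectance);
    }
}
```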
We now know how to calculate the three components that will serve us to model a point light with an ambient light. But our light model is still not complete: the light that an object reflects currently does not depend on its distance from the light source. That is, we still need to simulate light attenuation.
Attenuation is a function of the distance to the light source. The intensity of light is inversely proportional to the square of the distance. That fact is easy to visualize: light propagates its energy along the surface of a sphere with a radius equal to the distance traveled by the light, and the surface of a sphere is proportional to the square of its radius. We can calculate the attenuation factor with this formula:

$$\frac{1.0}{atConstant + atLinear \cdot dist + atExponent \cdot dist^{2}}$$
In order to simulate attenuation we just need to multiply that attenuation factor by the final color.
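A tiny sketch of that attenuation formula in Java; the `PointLight.Attenuation` holder it uses is the class we define later in this chapter, and the method itself is just an illustration:

```java
import org.lwjglb.engine.scene.lights.PointLight;

public class AttenuationDemo {

    // Attenuation factor for a light at distance 'dist', following the formula above
    public static float attenuationFactor(PointLight.Attenuation att, float dist) {
        return 1.0f / (att.getConstant() + att.getLinear() * dist
                + att.getExponent() * dist * dist);
    }
}
```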
Directional light hits all the objects with parallel rays, all coming from the same direction. It models light sources that are far away but have a high intensity, such as the Sun.

Directional Light
Another characteristic of directional light is that it is not affected by attenuation. Think again about sunlight: all objects that are hit by rays of light are illuminated with the same intensity, as the distance from the sun is so huge that the position of the objects is irrelevant. In fact, directional lights are modeled as light sources placed at infinity; if they were affected by attenuation, they would have no effect on any object (their color contribution would be equal to $0$).
Besides that, directional light also has diffuse and specular components. The only differences with point lights are that it does not have a position but a direction, and that it is not affected by attenuation. Let’s get back to the direction attribute of directional light and imagine we are modeling the movement of the sun across our 3D world. If we assume that north is placed towards the increasing z-axis, the following picture shows the direction to the light source at dawn, midday and dusk.

Sun as a directional light
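To make the figure concrete, here is a hedged sketch of how the sun direction could be animated from a single angle; the names and the axis convention are illustrative, and the actual update will depend on your game logic:

```java
import org.joml.Vector3f;

public class SunSimulation {

    // Angle 0 would be midday (sun straight up); -90 / 90 would be dawn / dusk.
    public static Vector3f sunDirection(float angleDegrees) {
        double angRad = Math.toRadians(angleDegrees);
        // Direction *to* the light source, swept through the x-y plane
        return new Vector3f((float) Math.sin(angRad), (float) Math.cos(angRad), 0);
    }
}
```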
Now we will implement spot lights, which are very similar to point lights but whose emitted light is restricted to a 3D cone. They model the light that comes out of spotlights or any other light source that does not emit in all directions. A spot light has the same attributes as a point light but adds two new parameters: the cone angle and the cone direction.

Spot Light
The spot light contribution is calculated in the same way as a point light, with some exceptions. Points for which the vector from the vertex position to the light source is not contained inside the light cone are not affected by the spot light.

Spot Light II
How do we calculate whether a point is inside the light cone or not? We need to do a dot product again, between the vector that points from the light source and the cone direction vector (both of them normalized).

Spot Light calculation
The dot product between the $L$ and $C$ vectors is equal to: $L \cdot C = |L| \cdot |C| \cdot \cos(\alpha)$. If, in our spot light definition, we store the cosine of the cutoff angle, then if the dot product is higher than that value we will know that the point is inside the light cone (recall the cosine graph: when the angle is $0$, the cosine is $1$; the smaller the angle, the higher the cosine).
The second difference is that points that are far away from the cone direction vector will receive less light; that is, the attenuation will be higher. There are several ways of calculating this; we will choose a simple approach, multiplying the attenuation by the following factor:

$$1 - \frac{1 - \cos(\alpha)}{1 - \cos(cutOffAngle)}$$

(In our fragment shaders we won’t have the angle but the cosine of the cutoff angle. You can check that the formula above produces values from $0$ to $1$: $0$ when the angle is equal to the cutoff angle and $1$ when the angle is $0$.)
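A minimal Java sketch of that cone test and attenuation factor; the method and parameter names are illustrative:

```java
import org.joml.Vector3f;

public class SpotFactorDemo {

    // Extra attenuation factor for a spot light, following the formula above.
    // Returns 0 for points outside the light cone.
    public static float spotFactor(Vector3f fromLightDir, Vector3f coneDirection,
                                   float cutOff /* cosine of the cutoff angle */) {
        // Cosine of the angle between the light-to-point vector and the cone axis
        float spotAlfa = new Vector3f(fromLightDir).normalize()
                .dot(new Vector3f(coneDirection).normalize());
        if (spotAlfa <= cutOff) {
            return 0.0f; // outside the light cone
        }
        return 1.0f - (1.0f - spotAlfa) / (1.0f - cutOff);
    }
}
```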

Spot Light Sample
Let's start by creating a set of classes to model the different types of lights, beginning with the class that models point lights:
package org.lwjglb.engine.scene.lights;

import org.joml.Vector3f;

public class PointLight {

    private Attenuation attenuation;
    private Vector3f color;
    private float intensity;
    private Vector3f position;

    public PointLight(Vector3f color, Vector3f position, float intensity) {
        attenuation = new Attenuation(0, 0, 1);
        this.color = color;
        this.position = position;
        this.intensity = intensity;
    }

    public Attenuation getAttenuation() {
        return attenuation;
    }

    public Vector3f getColor() {
        return color;
    }

    public float getIntensity() {
        return intensity;
    }

    public Vector3f getPosition() {
        return position;
    }

    public void setAttenuation(Attenuation attenuation) {
        this.attenuation = attenuation;
    }

    public void setColor(Vector3f color) {
        this.color = color;
    }

    public void setColor(float r, float g, float b) {
        color.set(r, g, b);
    }

    public void setIntensity(float intensity) {
        this.intensity = intensity;
    }

    public void setPosition(float x, float y, float z) {
        position.set(x, y, z);
    }

    public static class Attenuation {

        private float constant;
        private float exponent;
        private float linear;

        public Attenuation(float constant, float linear, float exponent) {
            this.constant = constant;
            this.linear = linear;
            this.exponent = exponent;
        }

        public float getConstant() {
            return constant;
        }

        public float getExponent() {
            return exponent;
        }

        public float getLinear() {
            return linear;
        }

        public void setConstant(float constant) {
            this.constant = constant;
        }

        public void setExponent(float exponent) {
            this.exponent = exponent;
        }

        public void setLinear(float linear) {
            this.linear = linear;
        }
    }
}
As you can see, a point light is defined by a color, an intensity, a position and an attenuation model.
Ambient lights are defined by just a color and an intensity:
package org.lwjglb.engine.scene.lights;

import org.joml.Vector3f;

public class AmbientLight {

    private Vector3f color;
    private float intensity;

    public AmbientLight(float intensity, Vector3f color) {
        this.intensity = intensity;
        this.color = color;
    }

    public AmbientLight() {
        this(1.0f, new Vector3f(1.0f, 1.0f, 1.0f));
    }

    public Vector3f getColor() {
        return color;
    }

    public float getIntensity() {
        return intensity;
    }

    public void setColor(Vector3f color) {
        this.color = color;
    }

    public void setColor(float r, float g, float b) {
        color.set(r, g, b);
    }

    public void setIntensity(float intensity) {
        this.intensity = intensity;
    }
}
Directional lights are defined like this:
package org.lwjglb.engine.scene.lights;

import org.joml.Vector3f;

public class DirLight {

    private Vector3f color;
    private Vector3f direction;
    private float intensity;

    public DirLight(Vector3f color, Vector3f direction, float intensity) {
        this.color = color;
        this.direction = direction;
        this.intensity = intensity;
    }

    public Vector3f getColor() {
        return color;
    }

    public Vector3f getDirection() {
        return direction;
    }

    public float getIntensity() {
        return intensity;
    }

    public void setColor(Vector3f color) {
        this.color = color;
    }

    public void setColor(float r, float g, float b) {
        color.set(r, g, b);
    }

    public void setDirection(Vector3f direction) {
        this.direction = direction;
    }

    public void setDirection(float x, float y, float z) {
        direction.set(x, y, z);
    }

    public void setIntensity(float intensity) {
        this.intensity = intensity;
    }
}
Finally, spot lights just include a point light reference plus the light cone parameters:
package org.lwjglb.engine.scene.lights;

import org.joml.Vector3f;

public class SpotLight {

    private Vector3f coneDirection;
    private float cutOff;
    private float cutOffAngle;
    private PointLight pointLight;

    public SpotLight(PointLight pointLight, Vector3f coneDirection, float cutOffAngle) {
        this.pointLight = pointLight;
        this.coneDirection = coneDirection;
        setCutOffAngle(cutOffAngle);
    }

    public Vector3f getConeDirection() {
        return coneDirection;
    }

    public float getCutOff() {
        return cutOff;
    }

    public float getCutOffAngle() {
        return cutOffAngle;
    }

    public PointLight getPointLight() {
        return pointLight;
    }

    public void setConeDirection(float x, float y, float z) {
        coneDirection.set(x, y, z);
    }

    public void setConeDirection(Vector3f coneDirection) {
        this.coneDirection = coneDirection;
    }

    public final void setCutOffAngle(float cutOffAngle) {
        this.cutOffAngle = cutOffAngle;
        // Store the cosine of the cutoff angle, which is what the shader compares against
        cutOff = (float) Math.cos(Math.toRadians(cutOffAngle));
    }

    public void setPointLight(PointLight pointLight) {
        this.pointLight = pointLight;
    }
}
All the lights will be stored in the `Scene` class. For that, we will create a new class named `SceneLights` which will store references to all the types of lights (note that we only need one ambient light instance and one directional light):

package org.lwjglb.engine.scene.lights;

import org.joml.Vector3f;

import java.util.*;

public class SceneLights {

    private AmbientLight ambientLight;
    private DirLight dirLight;
    private List<PointLight> pointLights;
    private List<SpotLight> spotLights;

    public SceneLights() {
        ambientLight = new AmbientLight();
        pointLights = new ArrayList<>();
        spotLights = new ArrayList<>();
        dirLight = new DirLight(new Vector3f(1, 1, 1), new Vector3f(0, 1, 0), 1.0f);
    }

    public AmbientLight getAmbientLight() {
        return ambientLight;
    }

    public DirLight getDirLight() {
        return dirLight;
    }

    public List<PointLight> getPointLights() {
        return pointLights;
    }

    public List<SpotLight> getSpotLights() {
        return spotLights;
    }

    public void setSpotLights(List<SpotLight> spotLights) {
        this.spotLights = spotLights;
    }
}
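As a usage sketch, setting up the lights for a scene could look like the following; all the positions, colors and angles are arbitrary sample values:

```java
import org.joml.Vector3f;
import org.lwjglb.engine.scene.lights.*;

public class LightsSetupExample {

    // Hypothetical helper invoked from the application's init code
    public static SceneLights createSampleLights() {
        SceneLights sceneLights = new SceneLights();
        sceneLights.getAmbientLight().setIntensity(0.3f);

        // A white point light slightly in front of the camera's default position
        PointLight pointLight = new PointLight(new Vector3f(1, 1, 1), new Vector3f(0, 0, -1.4f), 1.0f);
        sceneLights.getPointLights().add(pointLight);

        // A spot light pointing down the negative z-axis with a wide cone
        SpotLight spotLight = new SpotLight(
                new PointLight(new Vector3f(1, 1, 1), new Vector3f(0, 0, -1.4f), 0.0f),
                new Vector3f(0, 0, -1), 140.0f);
        sceneLights.getSpotLights().add(spotLight);

        return sceneLights;
    }
}
```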
We will have a reference to `SceneLights` in the `Scene` class:

public class Scene {
    ...
    private SceneLights sceneLights;
    ...
    public SceneLights getSceneLights() {
        return sceneLights;
    }
    ...
    public void setSceneLights(SceneLights sceneLights) {
        this.sceneLights = sceneLights;
    }
}
We need to modify the `ModelLoader` class to:
- Get more properties of the material, in particular the ambient color, specular color and shininess factor.
- Load normals data for each mesh.
In order to get more properties of the material, we need to modify the `processMaterial` method:

public class ModelLoader {
    ...
    private static Material processMaterial(AIMaterial aiMaterial, String modelDir, TextureCache textureCache) {
        Material material = new Material();
        try (MemoryStack stack = MemoryStack.stackPush()) {
            AIColor4D color = AIColor4D.create();

            int result = aiGetMaterialColor(aiMaterial, AI_MATKEY_COLOR_AMBIENT, aiTextureType_NONE, 0,
                    color);
            if (result == aiReturn_SUCCESS) {
                material.setAmbientColor(new Vector4f(color.r(), color.g(), color.b(), color.a()));
            }

            result = aiGetMaterialColor(aiMaterial, AI_MATKEY_COLOR_DIFFUSE, aiTextureType_NONE, 0,
                    color);
            if (result == aiReturn_SUCCESS) {
                material.setDiffuseColor(new Vector4f(color.r(), color.g(), color.b(), color.a()));
            }

            result = aiGetMaterialColor(aiMaterial, AI_MATKEY_COLOR_SPECULAR, aiTextureType_NONE, 0,
                    color);
            if (result == aiReturn_SUCCESS) {
                material.setSpecularColor(new Vector4f(color.r(), color.g(), color.b(), color.a()));
            }

            float reflectance = 0.0f;
            float[] shininessFactor = new float[]{0.0f};
            int[] pMax = new int[]{1};
            result = aiGetMaterialFloatArray(aiMaterial, AI_MATKEY_SHININESS_STRENGTH, aiTextureType_NONE, 0, shininessFactor, pMax);
            if (result == aiReturn_SUCCESS) {
                reflectance = shininessFactor[0];
            }
            material.setReflectance(reflectance);

            AIString aiTexturePath = AIString.calloc(stack);
            aiGetMaterialTexture(aiMaterial, aiTextureType_DIFFUSE, 0, aiTexturePath, (IntBuffer) null,
                    null, null, null, null, null);
            String texturePath = aiTexturePath.dataString();
            if (texturePath != null && texturePath.length() > 0) {
                material.setTexturePath(modelDir + File.separator + new File(texturePath).getName());
                textureCache.createTexture(material.getTexturePath());
                material.setDiffuseColor(Material.DEFAULT_COLOR);
            }

            return material;
        }
    }
    ...
}
As you can see, we get the material ambient color by querying the `AI_MATKEY_COLOR_AMBIENT` property. The specular color is obtained through the `AI_MATKEY_COLOR_SPECULAR` property, and shininess is queried using the `AI_MATKEY_SHININESS_STRENGTH` flag.

In order to load normals, we need to create a new method named `processNormals` and invoke it in the `processMesh` method:

public class ModelLoader {
    ...
    private static Mesh processMesh(AIMesh aiMesh) {
        float[] vertices = processVertices(aiMesh);
        float[] normals = processNormals(aiMesh);
        float[] textCoords = processTextCoords(aiMesh);
        int[] indices = processIndices(aiMesh);

        // Texture coordinates may not have been populated. We need at least the empty slots
        if (textCoords.length == 0) {
            int numElements = (vertices.length / 3) * 2;
            textCoords = new float[numElements];
        }

        return new Mesh(vertices, normals, textCoords, indices);
    }

    private static float[] processNormals(AIMesh aiMesh) {
        // Note: this assumes the model defines normals (mNormals may be null otherwise)
        AIVector3D.Buffer buffer = aiMesh.mNormals();
        float[] data = new float[buffer.remaining() * 3];
        int pos = 0;
        while (buffer.remaining() > 0) {
            AIVector3D normal = buffer.get();
            data[pos++] = normal.x();
            data[pos++] = normal.y();
            data[pos++] = normal.z();
        }
        return data;
    }
    ...
}
As you can see, we also need to modify the `Material` and `Mesh` classes to store the new information. The changes in the `Material` class are as follows:

public class Material {
    ...
    private Vector4f ambientColor;
    ...
    private float reflectance;
    private Vector4f specularColor;
    ...
    public Material() {
        ...
        ambientColor = DEFAULT_COLOR;
        ...
    }
    ...
    public Vector4f getAmbientColor() {
        return ambientColor;
    }
    ...
    public float getReflectance() {
        return reflectance;
    }

    public Vector4f getSpecularColor() {
        return specularColor;
    }
    ...
    public void setAmbientColor(Vector4f ambientColor) {
        this.ambientColor = ambientColor;
    }
    ...
    public void setReflectance(float reflectance) {
        this.reflectance = reflectance;
    }

    public void setSpecularColor(Vector4f specularColor) {
        this.specularColor = specularColor;
    }
    ...
}
The `Mesh` class now accepts a new float array for normals data, and thus creates a new VBO for it:

public class Mesh {
    ...
    public Mesh(float[] positions, float[] normals, float[] textCoords, int[] indices) {
        ...
        // Normals VBO
        vboId = glGenBuffers();
        vboIdList.add(vboId);
        FloatBuffer normalsBuffer = stack.callocFloat(normals.length);
        normalsBuffer.put(0, normals);
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glBufferData(GL_ARRAY_BUFFER, normalsBuffer, GL_STATIC_DRAW);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 3, GL_FLOAT, false, 0, 0);

        // Texture coordinates VBO
        ...
        glEnableVertexAttribArray(2);
        glVertexAttribPointer(2, 2, GL_FLOAT, false, 0, 0);

        // Index VBO
        ...
    }
    ...
}
Now it is time to use the lights while rendering. Let's start with the shaders, in particular with the vertex shader (`scene.vert`):

#version 330

layout (location=0) in vec3 position;
layout (location=1) in vec3 normal;
layout (location=2) in vec2 texCoord;

out vec3 outPosition;
out vec3 outNormal;
out vec2 outTextCoord;

uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;

void main()
{
    mat4 modelViewMatrix = viewMatrix * modelMatrix;
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * mvPosition;
    outPosition = mvPosition.xyz;
    outNormal = normalize(modelViewMatrix * vec4(normal, 0.0)).xyz;
    outTextCoord = texCoord;
}
As you can see, we now have normal data as another input attribute, and we just pass it on to the fragment shader. Before we continue with the fragment shader, there’s a very important concept that must be highlighted. From the code above you can see that `outNormal`, the variable that contains the vertex normal, is transformed into model view space coordinates. This is done by multiplying the normal by the `modelViewMatrix`, as with the vertex position. But there’s a subtle difference: the w component of the normal is set to 0 before multiplying it by the matrix: `vec4(normal, 0.0)`. Why are we doing this? Because we do want the normal to be rotated and scaled, but we do not want it to be translated; we are only interested in its direction, not in its position. This is achieved by setting its w component to 0, and it is one of the advantages of using homogeneous coordinates: by setting the w component we can control which transformations are applied. You can do the matrix multiplication by hand and see why this happens.
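You can also verify this with JOML instead of doing it by hand; this small sketch shows that a translation affects a vector with w = 1 but not one with w = 0:

```java
import org.joml.Matrix4f;
import org.joml.Vector4f;

public class NormalTransformDemo {
    public static void main(String[] args) {
        Matrix4f modelViewMatrix = new Matrix4f().translate(10, 0, 0);

        // w = 1: the translation is applied, as for positions
        Vector4f position = new Vector4f(0, 1, 0, 1).mul(modelViewMatrix);
        // w = 0: the translation is ignored; only rotation/scaling would apply
        Vector4f normal = new Vector4f(0, 1, 0, 0).mul(modelViewMatrix);

        System.out.println(position); // (10, 1, 0, 1)
        System.out.println(normal);   // ( 0, 1, 0, 0)
    }
}
```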
Changes in the `scene.frag` fragment shader are more complex, so let's go one step at a time:

#version 330

const int MAX_POINT_LIGHTS = 5;
const int MAX_SPOT_LIGHTS = 5;
const float SPECULAR_POWER = 10;
in vec3 outPosition;
in vec3 outNormal;
in vec2 outTextCoord;
out vec4 fragColor;
...
First, we define some constants for the maximum number of point and spot lights we will support. We need this since the data for those lights will be passed as arrays of uniforms, which need to have a well-defined size at compile time. You can also see that we receive the normal data from the vertex shader. After that, we define the structures that will model the lights' data:
...
struct Attenuation
{
    float constant;
    float linear;
    float exponent;
};
struct Material
{
    vec4 ambient;
    vec4 diffuse;
    vec4 specular;
    float reflectance;
};
struct AmbientLight
{
    float factor;
    vec3 color;
};
struct PointLight {
    vec3 position;
    vec3 color;
    float intensity;
    Attenuation att;
};
struct SpotLight
{
    PointLight pl;
    vec3 conedir;
    float cutoff;
};
struct DirLight
{
    vec3 color;
    vec3 direction;
    float intensity;
};
...
After that, we define new uniforms for lights data:
...
uniform sampler2D txtSampler;
uniform Material material;
uniform AmbientLight ambientLight;
uniform PointLight pointLights[MAX_POINT_LIGHTS];
uniform SpotLight spotLights[MAX_SPOT_LIGHTS];
uniform DirLight dirLight;
...
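For reference, each struct field and array element of these uniforms is addressed by its full dotted name when locating uniforms from the Java side. A minimal sketch using raw LWJGL calls (your engine may wrap this in a uniforms helper class):

```java
import static org.lwjgl.opengl.GL20.glGetUniformLocation;

public class LightUniformsSketch {

    // Struct members and array elements are located one by one via their dotted names
    public static void locateLightUniforms(int programId) {
        int posLoc = glGetUniformLocation(programId, "pointLights[0].position");
        int attLoc = glGetUniformLocation(programId, "pointLights[0].att.constant");
        int dirLoc = glGetUniformLocation(programId, "dirLight.direction");
        // ... one location per field we want to set
    }
}
```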
We will now define some functions to calculate the effect of each light type, starting with ambient light:
...
vec4 calcAmbient(AmbientLight ambientLight, vec4 ambient) {
    return vec4(ambientLight.factor * ambientLight.color, 1) * ambient;
}
...
As you can see, we just modulate the ambient light color by a factor and apply it to the material ambient color. Now we will define a function which determines how the light color is calculated for all types of lights:
...
vec4 calcLightColor(vec4 diffuse, vec4 specular, vec3 lightColor, float light_intensity, vec3 position, vec3 to_light_dir, vec3 normal) {
    vec4 diffuseColor = vec4(0, 0, 0, 1);
    vec4 specColor = vec4(0, 0, 0, 1);

    // Diffuse Light
    float diffuseFactor = max(dot(normal, to_light_dir), 0.0);
    diffuseColor = diffuse * vec4(lightColor, 1.0) * light_intensity * diffuseFactor;

    // Specular Light