Laboration 2

Texture, viewing and shader variations

Goal: Learn how to use texture mapping, to apply projection and viewing, how to work with multiple models in a scene, and try some alternative ways to use shaders.

If you run into problems, you can either look in the textbook or visit the on-line references. There you will, among many other things, find the entire OpenGL Programming Guide in on-line version.

Important: The makefile for C++ lacks one part that will be important now: You need to add -lstdc++ at the end of the compilation line!

1) Procedural texture mapping

Using the results from lab 1, you should now add texture mapping. First of all, we need texture coordinates. These may be stored in the model. Some models, like the bunny you used before, come without texture coordinates. We need a new model, and to make things simple I prepared a bunny that has texture coordinates. That file is in the archive with files for this lab, which is part of the lab 1 download.

2023 material is part of the download in lab 1!

Old versions:


2021 version (too many files and some obsolete ones, not recommended unless there are problems with the archive above):


The archive includes no new source files. It contains a number of models and textures, most of which are optional.

For now, we only need one model. The model file you should use is called "bunnyplus.obj".



Why “plus”? Can you see what the difference is to the original “bunny.obj”?

Load the bunny model and draw it. To use its texture coordinates, you need to add a few lines to the upload of the model:

    glGenBuffers(1, &bunnyTexCoordBufferObjID);

    if (m->texCoordArray != NULL)
    {
        glBindBuffer(GL_ARRAY_BUFFER, bunnyTexCoordBufferObjID);
        glBufferData(GL_ARRAY_BUFFER, m->numVertices*2*sizeof(GLfloat), m->texCoordArray, GL_STATIC_DRAW);
        glVertexAttribPointer(glGetAttribLocation(program, "inTexCoord"), 2, GL_FLOAT, GL_FALSE, 0, 0);
        glEnableVertexAttribArray(glGetAttribLocation(program, "inTexCoord"));
    }


This will deliver your texture coordinates to a vec2 named "inTexCoord" in the vertex shader.

Pass the texture coordinates to an interpolated ("out") variable, and use that (as "in") in the fragment shader to produce some kind of visual effect. Optionally, you can also pass time information to get an animated pattern. There are infinite possibilities here, be creative!
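As one possible starting point, here is a minimal fragment shader sketch that turns the interpolated coordinates into a checkerboard. The interpolated variable name "texCoord" and the output name "outColor" are assumptions; use whatever names match your own shaders.

```glsl
#version 150

in vec2 texCoord;      // interpolated from the vertex shader
out vec4 outColor;

void main(void)
{
    // 8x8 checkerboard: sum the integer parts of the scaled
    // coordinates and test whether the sum is even or odd.
    float check = mod(floor(texCoord.s * 8.0) + floor(texCoord.t * 8.0), 2.0);
    outColor = vec4(vec3(check), 1.0);
}
```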


What kind of procedural texture did you make?

2) Texture mapping

Remember to copy files, including shaders, and update your makefiles accordingly.

Mapping a texture image is slightly trickier, but not so bad when we only use a single texture at a time.

The file "LoadTGA.c" loads TGA images to textures. Load a texture with 

void LoadTGATextureSimple(char *filename, GLuint *tex);

That is, you must declare a texture reference as GLuint, and pass it by reference together with a filename. So the call can look like this:

LoadTGATextureSimple("maskros512.tga", &myTex);

A set of textures is available in the archive linked in part 1 above.

LoadTGATextureSimple will create the texture object, so you don't need to initialize the reference. You activate a texture object using

glBindTexture(GL_TEXTURE_2D, myTex);

This will bind the texture to the current texture unit. For now, we can safely assume that that is number 0.

In order to use it in a shader, we need a texture sampler. This is really just the texture unit number. In order not to assume too much, we pass that number from the CPU to a sampler variable. We set it from the CPU like this:

glUniform1i(glGetUniformLocation(program, "texUnit"), 0); // Texture unit 0

In the shader, it should be declared:

uniform sampler2D texUnit;

and you can get texture data like this:

outColor = texture(texUnit, texCoord);

Finally, when you want several textures on a surface (and you will, sooner or later), you select the current texture unit like this:

glActiveTexture(GL_TEXTURE0);

This call will make texture unit 0 the active unit, so you can use glBindTexture to bind a texture to that specific unit. But for now you don't need to use it, as long as we only use one texture.

Put a texture on the bunny!


How are the texture coordinates mapped on the bunny? Can you see how they vary over the model?

How can you make a texture repeat multiple times over the bunny?

Why can't we just pass the texture object to the shader? There is a specific reason for this, a limited resource. What? (No, it is not that we must avoid re-uploading from CPU. The texture object is on the GPU!)

3) Projection

Remember to copy files, including shaders, and update your makefiles accordingly.

So far, we have been limited to a cube world from -1 to 1 in all directions, and parallel projection. That is not much fun; we want a realistic perspective projection. Use this matrix:

#define near 1.0

#define far 30.0

#define right 0.5

#define left -0.5

#define top 0.5

#define bottom -0.5

GLfloat projectionMatrix[] = { 2.0f*near/(right-left), 0.0f, (right+left)/(right-left), 0.0f,
                               0.0f, 2.0f*near/(top-bottom), (top+bottom)/(top-bottom), 0.0f,
                               0.0f, 0.0f, -(far + near)/(far - near), -2*far*near/(far - near),
                               0.0f, 0.0f, -1.0f, 0.0f };

In order to make this editable for you, the frustum dimensions are given as #defines. Pass this to a uniform matrix in the vertex shader. (Warning! #define is OK for constants, but do not mix it up with variables or you can get complex errors.)

When you run, the bunny disappears. Why? Add a translation to put the bunny in view.

Time to move to VectorUtils!

To make this easy, you can use the VectorUtils3/4 package. This is a simple vector/matrix package with the bare essentials such as dot and cross product and matrix multiplication. Add #include "VectorUtils3.h" to your source code (VectorUtils3.c should already be used by your makefile). C++: Add #include "VectorUtils4.h". It does not go into the makefile since it is header only.

To work with matrices, you can do like this:

Declare matrices:

    mat4 rot, trans, total;

Set matrices to a rotation and translation (note: these are examples, not your exact transformations!):

trans = T(1, 2, 3);

rot = Ry(a);

Multiply these matrices:

total = Mult(trans, rot); (C style)


total = trans * rot; (C++ style)

Upload to GPU!

    glUniformMatrix4fv(glGetUniformLocation(program, "mdlMatrix"), 1, GL_TRUE, total.m);

Note: Previous matrices, like the projection matrix above, were not given as mat4's. There are calls in VectorUtils3 that create the same matrices as mat4's, e.g. the call frustum(). (See the header file for specification.)


How did you move the bunny to get it in view?

4) Viewing using the lookat function

Remember to copy files, including shaders, and update your makefiles accordingly.

The task above should give you a useful model-to-world matrix, but for the complete chain you also want a world-to-view matrix, for camera placement. Although it is possible to make one with rotations and translations, it is better to use a "look-at" matrix as described in the book (section 6.5, page 56 in the 2012 edition).

VectorUtils3 includes two "LookAt" functions, defined like this:

    mat4 lookAtv(vec3 p, vec3 l, vec3 v);

    mat4 lookAt(GLfloat px, GLfloat py, GLfloat pz, 

            GLfloat lx, GLfloat ly, GLfloat lz,

            GLfloat vx, GLfloat vy, GLfloat vz);

These functions mimic the old (but extremely useful) gluLookAt function from the now deprecated GLU library. Use these functions for creating a world-to-view matrix.

Upload the matrix to the vertex shader and use it as world-to-view matrix. Try placing the camera some distance from origin, looking at origin.


Given a certain vector for v, is there some direction you can't look?

5) Diffuse shading

Goal: To render a model with diffuse shading.

Remember to copy files, including shaders, and update your makefiles accordingly.

Finally, we want the bunny to look a bit better, with somewhat realistic light. A good start is diffuse shading.

You need to transform the normal vectors to make them follow the rotation of the model. You do that by removing the translation, which is equivalent to casting the 4x4 matrix to a 3x3 one.

If you have a transformation for your models, it should then be applied to normals as well. Example:

    uniform mat4 myMatrix;

    mat3 normalMatrix1 = mat3(myMatrix);

    transformedNormal = normalMatrix1 * inNormal;

That normal vector is now ready to use for light calculations. Use (for now) a hard-coded light source in your shader, like this:

    const vec3 light = vec3(0.58, 0.58, 0.58);

You will need to use the following built-in functions in the shaders:

normalize() returns a normalized version of a vector.

dot() takes the dot product.

max() and clamp()... OK, you get the picture?

With 3D models like this, it is more important than ever to use visible surface detection. Try turning Z buffering off. What happens?

Note: Usually, we divide the transformation in three parts: Model to world, world to view, and projection. Normal vectors should only be affected by the first two. 


Did you implement your light calculations in the vertex or fragment shader? So, which kind of shading did you implement?

Some geometry data must be vec4, others are just as well vec3's. Which ones, and why? How about vertices, light source, normal vectors...?

6) Extra: Gouraud vs Phong

Goal: To evaluate the difference between Gouraud and Phong shading. (Note: This will be more meaningful when we get to specular shading, and more of a curiosity at this point.)

Remember to copy files, including shaders, and update your makefiles accordingly.

In the previous task, you implemented one shader of the two main types. Now, implement the other one and compare the difference.

Note that for a correct Phong shading, you must make sure to normalize the interpolated normal vectors properly.


Was the difference big? If not, why?

You are doing almost the same operations in both cases, so what is the difference performance-wise? Compare the two methods from a performance standpoint.

7) Building a scene + camera movement with simplified model loading

Remember to copy files, including shaders, and update your makefiles accordingly.

Important: The makefile for C++ lacks one part that will be important now: You need to add -lstdc++ at the end of the compilation line! Or use g++.

Next step is to build a simple scene. It should include at least two different models, and the camera should circle around them.

You will need to create one model-to-world matrix for each model. Use translations and rotations for that.

These matrices should be handled by the same code in your shader. The shader only has to see one matrix at a time; you change it by uploading a different matrix before drawing each model.

1) Update model-to-view matrix for model 1

2) Upload this matrix to model-to-view matrix in shader

3) Draw model 1

4) Update model-to-view matrix for model 2

5) Upload this matrix to model-to-view matrix in shader

6) Draw model 2

If you just duplicate the model loading code, you will most likely find it tedious, and your code will explode in size. To avoid that, "LoadModel" actually manages the upload to the GPU as well as the loading from disc. Using that information, there is a matching function for drawing, "DrawModel". Look in LittleOBJLoader.h for the exact syntax.
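Putting the six steps together, the display function can look roughly like this sketch. The matrix names, the uniform name "mdlMatrix" and the attribute names passed to DrawModel are assumptions here; check LittleOBJLoader.h and your own shaders for the exact names and arguments.

```c
// In init, load each model once; LoadModel also uploads it to the GPU:
// Model *m1 = LoadModel("bunnyplus.obj");
// Model *m2 = LoadModel("someothermodel.obj");   // hypothetical second model

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    mat4 worldToView = lookAt(0, 0, 4,  0, 0, 0,  0, 1, 0);

    // Model 1: build its model-to-view matrix, upload, draw.
    mat4 m1matrix = Mult(worldToView, Mult(T(-1, 0, 0), Ry(angle)));
    glUniformMatrix4fv(glGetUniformLocation(program, "mdlMatrix"), 1, GL_TRUE, m1matrix.m);
    DrawModel(m1, program, "inPosition", "inNormal", "inTexCoord");

    // Model 2: a different matrix uploaded to the same uniform.
    mat4 m2matrix = Mult(worldToView, T(1, 0, 0));
    glUniformMatrix4fv(glGetUniformLocation(program, "mdlMatrix"), 1, GL_TRUE, m2matrix.m);
    DrawModel(m2, program, "inPosition", "inNormal", "inTexCoord");

    glutSwapBuffers();
}
```

To make the camera circle the scene, let the camera position passed to lookAt depend on time, e.g. (R*cos(t), 0, R*sin(t)).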


If you rotate an object or rotate the camera, what matrices are affected?

Extra) Vertex shader fun

We have been using the vertex shader in the standard way, for vertex transformations using the usual chain of matrices and for sending data to interpolations. But you can also do other things with it. Try deforming the bunny with a vertex shader. The deformation should not just be a global scaling but vary over the shape. You can, for example, make it wave like the "wavy teapot" animation I showed at the lecture.
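As one possible starting point, here is a vertex shader sketch that displaces each vertex along x by a sine of its y coordinate, so the deformation varies over the shape. The uniform "time" and the matrix name are assumptions; wire them up to match your own code.

```glsl
#version 150

in vec3 inPosition;
uniform mat4 mdlMatrix;
uniform float time;

void main(void)
{
    vec3 p = inPosition;
    // Displace along x with a wave that varies over the height
    // of the model and over time.
    p.x += 0.1 * sin(5.0 * p.y + time);
    gl_Position = mdlMatrix * vec4(p, 1.0);
}
```

Remember that the normals are no longer correct for the deformed surface; that is acceptable for this experiment, but worth keeping in mind if you combine it with lighting.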

That concludes lab 2. In the next lab, you will expand the scene and make even better light.

This page is maintained by Ingemar Ragnemalm