Laboration 1

Introduction to OpenGL

Goal: In this lab, you will get acquainted with how OpenGL is designed. At the end of the lab, you should be able to display a textured and lit 3D object.

If you run into problems, you can either look in the textbook or visit the course web page. There you will, among many other things, find the entire OpenGL Programming Guide in on-line version (old version) and, most importantly, the OpenGL 3.2 Quick Reference Card.

Note that you should write down answers to all questions before you get examined!

We will use the C language for this lab, plus GLSL for shader programs.

0) Some important notes about C and GLSL

Some notes about the C language

C is a "pointer processing language" that is sometimes referred to as a glorified assembler. It is a compiled language, which makes it fast. C has some quirks that you need to get used to, especially if you come from seemingly similar languages like C++ or Java. (Similarity between languages is not just a matter of syntax. Beyond the syntax, Java and C are about as related as birds and insects: both fly, but that is all.)

& takes the address of a variable. C has no pass-by-reference; when you want to pass a variable "by reference", you always pass a pointer.

An array name and a pointer to the array's first element are, for most purposes, the same thing.

* is used both for declaring pointer variables and dereferencing them:

int *a; declares the variable "a" as a pointer to an int.

b = *a; dereferences the variable "a" and gets the value it points to and assigns it to b.

#define and #include are preprocessor directives. Be very careful with them. A #define that makes a constant is safe, but a #define that takes arguments quickly gets out of hand.

You never pass entire arrays to functions; an array argument is always passed as a pointer. Structs can technically be passed by value, but passing pointers to them is the norm.

int a[5]; is an array of int

GetSomeData(a); passes a to a function. No & is needed: a is an array, so a is really a pointer.

vec3 p; is a point struct, as defined in the VectorUtils library.

In some cases, you only need a pointer: you pass a pointer to the pointer, and the function will allocate memory and set the pointer to point to this memory.

We will provide you with examples wherever we see a need. Ask us if you feel you need one.

Shader programs

Every OpenGL program must include at least one vertex and one fragment shader. These are small program kernels executed on the GPU as part of rendering, for specifying the behaviour of the vertex and fragment processing steps.

Vertex shaders perform per-vertex calculations. That's where vertices are transformed, per-vertex lighting calculations are done, and where skeletal animation systems do most of their work.

Fragment shaders perform per-pixel calculations. That's where texture and lighting colours are combined into one final pixel colour value.


GLSL, the OpenGL Shading Language, is the shader language used by OpenGL. GLSL code is similar to C/C++ code, but with a strong emphasis on vector computations.

Most GLSL code performs floating-point calculations. Common datatypes used are float, vec2, vec3 and vec4. These datatypes represent 1D, 2D, 3D and 4D vectors. Arithmetic operations can be performed directly on these datatypes.

For integer calculations (such as counting loop iterations), int is available. The bool datatype is also available.

A small GLSL function can look like:

vec4 applyDirectionalLight(vec3 normal, vec4 originalColor)
{
  vec3 lightDirection = normalize(vec3(0.5, 0.8, 0.7));
  float strength = dot(lightDirection, normal);
  if (strength < 0.0)
    strength = 0.0;
  vec4 color = originalColor.xyxx * strength;
  return color;
}
vec3(0.5, 0.8, 0.7) constructs a new vec3 from three floating-point values.

dot() calls a predefined math function.

originalColor.xyxx performs "swizzling" on the original vector: the result is a vec4 whose XYZW elements are taken from the X, Y, X and X elements of originalColor, respectively.

You can find a complete list of built-in mathematical functions in the GLSL Language Specification.

GLSL program structure and variables

The code for a shader program is enclosed inside the main() function. It takes no arguments and returns nothing. Communication between OpenGL, the vertex shader and the fragment shader is done by reading/writing global variables.

Variables can have a few different qualifiers:

uniform - the value is constant over an entire primitive (in practice, over a whole draw call); it is written by the main program through OpenGL calls, and read-only in both vertex and fragment shaders.

in/out - input and output. In vertex shaders, the "in" variables are attributes, which can be unique for every vertex (passed as arrays). The resulting color from a fragment shader is an "out" variable.

in/out between shaders - "out" from the vertex shader, "in" to the fragment shader. These are "varying" variables: the value will be interpolated over the surface of a polygon; you write them in the vertex shader and read them in the fragment shader.

All variables whose names begin with "gl_" are predefined by OpenGL. These are always present, and they can be used without declaring them first. For now, you only need to care about gl_Position, which is a vec4 with the resulting vertex from your vertex shader after transformation and projection. Writing this in your vertex shader is mandatory.

OpenGL will take the output from the vertex shader, interpolate the resulting values over the surface of any neighboring polygons, and then run the fragment shader once for every pixel which the polygon is supposed to render to. Any extra out variables in the vertex shader will also be interpolated over the polygon, and the result is available to the fragment shader in in variables.
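As a sketch of the structure above, here is a minimal vertex/fragment shader pair. The names in_Position, shadedColor, out_Color and myMatrix are assumptions for illustration; match them to the names used in your own lab files.

```glsl
// --- vertex shader (e.g. lab1-1.vert) ---
#version 150

in vec3 in_Position;     // per-vertex attribute, fed from a buffer
out vec4 shadedColor;    // "varying": interpolated over the triangle
uniform mat4 myMatrix;   // constant over the whole draw call

void main(void)
{
    shadedColor = vec4(1.0, 1.0, 1.0, 1.0);
    gl_Position = myMatrix * vec4(in_Position, 1.0);  // mandatory output
}

// --- fragment shader (e.g. lab1-1.frag) ---
#version 150

in vec4 shadedColor;     // arrives interpolated, once per fragment
out vec4 out_Color;      // the resulting pixel color

void main(void)
{
    out_Color = shadedColor;
}
```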

You can find a full list of pre-defined variables in the GLSL Language Specification. And, of course, the course book also holds more information.

Debugging shaders

Debugging a shader is a story of its own. We don't have any full shader debugger installed in the lab, so we have to resort to other methods. On the positive side, shaders are often very simple (especially in this lab). However, debugging takes some special tricks.

Compilation errors are reported to stdout. This is your main source of information.

You can also play some tricks in the shaders. If your shader is running, but produces the wrong data, you can use its output for extra information. For example, you can output bright red to signal the result of some test comparison.
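For example, in the fragment shader (sketched here with assumed variable names; out_Color stands for your shader's output variable):

```glsl
// If a computed value looks suspect, paint the pixel bright red
// instead of the normal result, so the failure is visible on screen.
if (strength < 0.0)
    out_Color = vec4(1.0, 0.0, 0.0, 1.0);  // signal: this test failed here
else
    out_Color = normalResult;
```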

1) Setup and getting acquainted with the lab shell

Download the lab package below and unpack it at a suitable location. We start from the first example from Chapter 3.



(Old place, old version, kept just as backup: common.tar.gz)

There are several files included in the lab environment:

makefile - contains rules for how the executable should be built; read by make.

lab1-1.c - the actual lab code; this is where the main program resides.

lab1-1.vert - Minimal vertex shader.

lab1-1.frag - Minimal fragment shader.

The folder "common" contains a set of reusable utility files:

GL_utilities.c - utilities for loading shaders and more.

MicroGlut.c - a stripped-down version of the classic GLUT user interface library, trimmed to improve code transparency, remove obsolete code, and add a few convenient features.

VectorUtils3.c - Simple vector/matrix package.

loadobj.c - Loader for "OBJ" models.

LoadTGA.c - Loader for "TGA" images (for textures), used from Lab 2 and onward.

All these files are relatively small and fairly self-explanatory (except loadobj). Throughout the lab material, we strive for code transparency: code that can easily be edited by you any time you need to modify or change a behavior, add a missing feature, or, not least, go "behind the scenes" and see how things work.

You will be using makefile, lab1-1.c, shaders and respective .h files directly.

Compile the test program by entering the lab1 directory and running make on the command line. This should produce a new executable file called lab1-1.

Run lab1-1 by typing ./lab1-1 on the command-line. It should show a white triangle against a dark background.

Open lab1-1.c and have a look inside it. There are two functions of interest to you, init() and display(). init() is called once during program startup and display() is called every time it is time to render a new frame of graphics.

Currently, display() does three things:

* Clears the screen and the Z-buffer

* Renders a triangle using OpenGL rendering commands

* Swaps the front and back buffers

The init() function does work critical for rendering:

* Sets the background color and activates the Z-buffer

* Uploads the vertex list to the GPU

* Loads the vertex and fragment shaders

The upload of the vertex list may require some explanation. We will return to it in section 4 (color shading).

Try changing the triangle data, by moving the vertices.

Change the color of the triangle and the background.

For those of you on other systems, you may need other makefiles or project files. Let us know if you need them.


Where is the origin placed in the on-screen coordinate system?

Which direction are the X and Y axes pointing in the on-screen coordinate system?

The triangle color is controlled from the fragment shader. Would it be possible to control it from the main program? How?

2) Transformations in the vertex shader

Goal: To transform your polygon with 2D transforms defined by matrices.

Copy lab1-1.c to lab1-2.c and add a new entry to the makefile. Make this section's changes to lab1-2.c. Also copy the shaders similarly.

Define transformation matrices, somewhat like this:

GLfloat myMatrix[] = {    1.0f, 0.0f, 0.0f, 0.5f,

                        0.0f, 1.0f, 0.0f, 0.0f,

                        0.0f, 0.0f, 1.0f, 0.0f,

                        0.0f, 0.0f, 0.0f, 1.0f };

What does this matrix do? Define other 2D transformations.

Use the following call to send your matrix to your shaders.

    glUniformMatrix4fv(glGetUniformLocation(program, "myMatrix"), 1, GL_TRUE, myMatrix);

The "program" variable is a reference to your shaders, returned when you first loaded them.

In your vertex shader, declare your matrices and apply them to your vertices as you see fit. For the example above, there should be a matrix declared like this:

uniform mat4 myMatrix;
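A complete vertex shader using such a matrix could look like the sketch below (in_Position is an assumed attribute name; use the one from your own C code):

```glsl
#version 150

in vec3 in_Position;    // assumed attribute name; match your C code
uniform mat4 myMatrix;  // uploaded with glUniformMatrix4fv

void main(void)
{
    // Apply the uploaded transformation to every vertex.
    gl_Position = myMatrix * vec4(in_Position, 1.0);
}
```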


What is the purpose of the "in", "out" and "uniform" modifiers?

What is the output of the vertex shader?

What does the function glUniformMatrix4fv do?

3) Simple animation

Goal: To add time-based rotation/translation of the object.

Copy lab1-2.c to lab1-3.c and add a new entry to the makefile. Make this section's changes to lab1-3.c. Also copy the shaders similarly.

You can get the current time using

GLfloat t = (GLfloat)glutGet(GLUT_ELAPSED_TIME);

The function returns an integer, a milliseconds value. We cast it to float to avoid truncation when scaling it.

In order to render new images repeatedly, you should use glutTimerFunc(). When you start it, it may look like this:

  glutTimerFunc(20, &OnTimer, 0);

which refers to a call in your code that can look like this:

void OnTimer(int value)
{
    glutPostRedisplay();
    glutTimerFunc(20, &OnTimer, value);
}

Thus, OnTimer orders a new call to the display function (through glutPostRedisplay) and starts the next round with the timer. You need to install the timer with glutTimerFunc() before calling glutMainLoop().

Modify matrices using a time-varying variable to produce an animation. Note that you will now need to upload your matrices in the display() callback, not in init().

When animating objects, you may want to use the sin() and cos() functions. To do that, you should include this header file:

#include <math.h>

and link with the math library using -lm.


What is the frame rate of the animation?

4) Color shading

Goal: To interpolate data between vertices.

Copy lab1-3.c to lab1-4.c and add a new entry to the makefile. Make this section's changes to lab1-4.c. Also copy the shaders similarly.

Now we are going to modify the data upload, so let us look deeper into what actually happens.

The uploading of the vertex list is more complex than you might expect. First, you need a vertex array object (VAO). This is merely a container, which refers to a number of buffers that are to be used together. It is created by glGenVertexArrays(), activated by glBindVertexArray(), and there should be exactly one per model.

Then you need one or more buffers. For now, we will only care about vertex buffers, that is, buffers holding the actual per-vertex data. This includes the vertex coordinates we already have, but it is not limited to that, as we shall see here. These buffers will be fed to the vertex shader one item at a time, to the "in" variable of your choice.

A buffer is referred to by a vertex buffer object (VBO), allocated by glGenBuffers() and activated - thereby bound to the active VAO - with glBindBuffer(). This is just a reference, but we can upload data to it with glBufferData(). We must then connect it to the "in" variable in the shader, which is done with glVertexAttribPointer(). Notice that you specify both the type and the amount of data to send for each activation of the shader, in our case 3 GL_FLOAT, which fits a vec3 in the shader. Finally, we make sure that this array is active with glEnableVertexAttribArray().
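As a sketch, here are those steps applied to a hypothetical color array. The names colors, colorBufferObjID and in_Color are assumptions for illustration; this fragment also assumes a valid OpenGL context, a loaded shader program in "program", and an already bound VAO, so it is not runnable on its own.

```c
GLfloat colors[] = { 1.0f, 0.0f, 0.0f,    /* red for vertex 0 */
                     0.0f, 1.0f, 0.0f,    /* green for vertex 1 */
                     0.0f, 0.0f, 1.0f };  /* blue for vertex 2 */
GLuint colorBufferObjID;

glGenBuffers(1, &colorBufferObjID);                 /* allocate the VBO */
glBindBuffer(GL_ARRAY_BUFFER, colorBufferObjID);    /* bind it to the VAO */
glBufferData(GL_ARRAY_BUFFER, sizeof(colors), colors, GL_STATIC_DRAW);

/* Connect the buffer to the "in" variable: 3 GL_FLOATs per vertex = vec3. */
glVertexAttribPointer(glGetAttribLocation(program, "in_Color"),
                      3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(glGetAttribLocation(program, "in_Color"));
```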

Add a new array, similar to the vertex array, but this time for colors. Each vertex should have its own color.

Upload this array to the shaders just like you did with the vertices.

Pass the colors to "out" variables in the vertex shader, and as "in" variables in the fragment shader.

Use the interpolated color for the fragment color.


Did you need to do anything different when uploading the color data?

The "in" and "out" modifiers are now used for something different. What?

What is this kind of shading called? What could we use otherwise?

5) Building a cube, visible surface detection

Goal: To build a cube, using more 3D data.

Copy lab1-4.c to lab1-5.c and add a new entry to the makefile. Make this section's changes to lab1-5.c. Also copy the shaders similarly.

Build a cube by creating twelve triangles. Keep the cube coordinates within +/- 0.5 units from the origin.

As in part 4, give each vertex a unique color and interpolate between these colors.

Use a transformation as in part 3 to rotate the model. Does something look strange?

It is likely that it looks strange in some orientations. We need some kind of visible surface detection (VSD). We will try one of the most widely used VSD methods: Z buffering. To use that, you need to do three things:

1) Set up with Z buffer:

    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);

2) Activate the Z buffer:

    glEnable(GL_DEPTH_TEST);

3) Erase the Z buffer before rendering (modify the existing glClear call):

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

Enable/disable Z-buffering. Compare the difference.


What problems did you encounter while building the cube?

How do you change the facing of a polygon?

6) Load a 3D model from disc

Goal: To render a complex 3D model read from disc.

Copy lab1-5.c to lab1-6.c and add a new entry to the makefile. Make this section's changes to lab1-6.c. Also copy the shaders similarly.

The file loadobj.c will load a Wavefront OBJ file from disc. Include "loadobj.h" in your source to use it, and add "loadobj.c" to the makefile.

Model *m;

m = LoadModel("bunny.obj");

However, from there we need to upload it to the GPU ourselves, using a vertex array object as the main reference. We also need a few vertex buffer objects.

unsigned int bunnyVertexArrayObjID;

unsigned int bunnyVertexBufferObjID;

unsigned int bunnyIndexBufferObjID;

unsigned int bunnyNormalBufferObjID;

Uploading it is similar to what we did with simpler models before:

    glGenVertexArrays(1, &bunnyVertexArrayObjID);

    glGenBuffers(1, &bunnyVertexBufferObjID);

    glGenBuffers(1, &bunnyIndexBufferObjID);

    glGenBuffers(1, &bunnyNormalBufferObjID);


    glBindVertexArray(bunnyVertexArrayObjID);

    // VBO for vertex data

    glBindBuffer(GL_ARRAY_BUFFER, bunnyVertexBufferObjID);

    glBufferData(GL_ARRAY_BUFFER, m->numVertices*3*sizeof(GLfloat), m->vertexArray, GL_STATIC_DRAW);

    glVertexAttribPointer(glGetAttribLocation(program, "in_Position"), 3, GL_FLOAT, GL_FALSE, 0, 0); 

    glEnableVertexAttribArray(glGetAttribLocation(program, "in_Position"));

    // VBO for normal data

    glBindBuffer(GL_ARRAY_BUFFER, bunnyNormalBufferObjID);

    glBufferData(GL_ARRAY_BUFFER, m->numVertices*3*sizeof(GLfloat), m->normalArray, GL_STATIC_DRAW);

    glVertexAttribPointer(glGetAttribLocation(program, "in_Normal"), 3, GL_FLOAT, GL_FALSE, 0, 0);

    glEnableVertexAttribArray(glGetAttribLocation(program, "in_Normal"));


    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bunnyIndexBufferObjID);

    glBufferData(GL_ELEMENT_ARRAY_BUFFER, m->numIndices*sizeof(GLuint), m->indexArray, GL_STATIC_DRAW);

Don't forget to add error checks.

You can draw the model like this:

    glBindVertexArray(bunnyVertexArrayObjID);    // Select VAO

    glDrawElements(GL_TRIANGLES, m->numIndices, GL_UNSIGNED_INT, 0L);

There are no colors, so you need to edit your shaders. You can use the normal vector in any way you like (be creative!) to select colors by vertex. Then these colors should be interpolated over the triangles.
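One simple possibility out of many, sketched here with assumed names (in_Normal, shadedColor): map each normal component from the range [-1, 1] to [0, 1] in the vertex shader and use the result as an RGB color.

```glsl
// Vertex shader: derive a per-vertex color from the normal vector.
#version 150

in vec3 in_Position;
in vec3 in_Normal;
out vec4 shadedColor;

void main(void)
{
    // Map each normal component from [-1, 1] to [0, 1] and use it as RGB.
    shadedColor = vec4(in_Normal * 0.5 + 0.5, 1.0);
    gl_Position = vec4(in_Position, 1.0);
}
```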


Why do we need normal vectors for a model?

What did you do in your fragment shader?

Should a normal vector always be perpendicular to a certain triangle? If not, why?

Now we are using glBindBuffer and glBufferData again. They deal with buffers, but in what way?

That concludes lab 1. Good work! In the next lab, you will experiment with texture mapping, scenes containing multiple objects, and camera placement.

This page is maintained by Ingemar Ragnemalm