Introduction to OpenGL
Note: This is a brand new version of the lab for TSBK11. There may be remains of the ordinary lab.
Goal: In this lab, you will work with 3D data, shaders and matrices. At the end of the lab, you should be able to display a 3D object.
If you run into problems, you can either look in the textbook or visit http://www.opengl.org. There you will, among many other things, find the entire OpenGL Programming Guide in an on-line version (an old edition) and, most importantly, the OpenGL 3.2 Quick Reference Card.
Note that you should write down answers to all questions before you get examined!
We will use the C++ language for this lab, plus GLSL for shader programs.
0) Some important notes about C and GLSL
NOTE: For 2023 we will also provide C++ versions of the labs. They will be mostly the same but you can use operator overloading for many matrix and vector operations, which can be quite convenient.
Some notes about the C/C++ language
We are using C++ for the labs. Most of the time, we stay pretty close to straight C, using only the parts of C++ that really help the lab, like operator overloading.
& takes the address of a variable. When passing variables by reference, C always passes pointers.
An array name and a pointer to the array's first element behave the same way in most expressions.
* is used both for declaring pointer variables and dereferencing them:
int *a; declares the variable "a" as a pointer to an int.
b = *a; dereferences the variable "a" and gets the value it points to and assigns it to b.
#define and #include are preprocessor directives. Be very careful with them. A #define that makes a constant is safe, but a #define with arguments in it quickly gets out of hand.
You never pass entire arrays to functions. You always pass pointers.
int a[5]; is an array of int
GetSomeData(a); passes a to a function. No & is needed, since a is an array and therefore already decays to a pointer.
We pass arrays into OpenGL using glUniformMatrix4fv, but as a simplification we use uploadMat4ToShader().
VectorUtils defines several types that work like those in GLSL. The two most important ones are:
vec3 p; is a struct representing a 3D vector.
mat4 m; is a struct representing a 4x4 matrix.
For C++, these can be initialized with C++ constructors.
We will provide you with examples wherever we see a need. Ask us if you feel you need one.
Notes about the new C++ versions:
We do not use much of the large and complex C++ language. The main difference from the C code is that you have the very convenient operator overloading, so you can do things like multiplying matrices with the * symbol, just like you can in the shaders.
In order to make the common code as portable as possible, VectorUtils and LittleObjLoader are here given as header-only units. That means that all code is in the .h file, but the implementation part is only compiled in the one unit that defines the “MAIN” symbol; all other units that include the file see just the declarations.
Both the C and C++ versions are available. Note that the C version currently contains the more conventional version of “common” where .c and .h files are separate. We will most likely phase out the C version, but please let us know if it is of interest to you.
Shader programs
Every OpenGL program must include at least one vertex and one fragment shader. These are small program kernels executed on the GPU as part of rendering, for specifying the behaviour of the vertex and fragment processing steps.
Vertex shaders perform per-vertex calculations. That's where vertices are transformed, per-vertex lighting calculations are done, and where skeletal animation systems do most of their work.
Fragment shaders perform per-pixel calculations. That's where texture and lighting colours are combined into one final pixel colour value.
GLSL
GLSL, OpenGL Shading Language, is the shader language used by OpenGL. GLSL code is similar to C/C++ code, but with a strong emphasis on computation.
Most GLSL code performs floating-point calculations. Common datatypes used are float, vec2, vec3, vec4, mat3 and mat4. These datatypes represent scalars, 2D, 3D, 4D vectors, 3x3 and 4x4 matrices. Arithmetic operations can be performed directly on these datatypes.
For integer calculations (such as counting loop iterations), int is available. The bool datatype is also available.
A small GLSL function can look like:
vec4 applyDirectionalLight(vec3 normal, vec4 originalColor)
{
vec3 lightDirection = normalize(vec3(0.5, 0.8, 0.7));
float strength = dot(lightDirection, normal);
if (strength < 0.0)
strength = 0.0;
vec4 color = originalColor.xyxx * strength;
return color;
}
vec3(0.5, 0.8, 0.7) constructs a new vec3 from three floating-point values.
dot() calls a predefined math function.
originalColor.xyxx performs "swizzling" on the original vector: the result is a vec4 whose XYZW elements are taken from the X, Y, X and X elements of originalColor, respectively.
You can find a complete list of built-in mathematical functions in the GLSL Language Specification.
GLSL program structure and variables
The code for a shader program is enclosed in the main() function. It takes no arguments and returns nothing. Communication between OpenGL, the vertex shader and the fragment shader is done by reading/writing global variables.
Variables can have a few different qualifiers:
uniform - the value is constant over an entire primitive (in practice, over a whole draw call); it is written by OpenGL (the host program), and read-only for vertex and fragment shaders.
in/out - input and output. In vertex shaders, the "in" variables are "attributes", which can be unique for every vertex (passed as arrays). The resulting color from a fragment shader is an out variable.
in/out between shaders - out from vertex, in to fragment, "varying" variables: the value will be interpolated over the surface of a polygon; write in vertex shader, read in fragment shader.
All variables whose names begin with "gl_" are predefined by OpenGL. These are always present, and they can be used without declaring them first. For now, you only need to care about gl_Position, which is a vec4 with the resulting vertex from your vertex shader after transformation and projection. Writing this in your vertex shader is mandatory.
OpenGL will take the output from the vertex shader, interpolate the resulting values over the surface of any neighboring polygons, and then run the fragment shader once for every pixel which the polygon is supposed to render to. Any extra out variables in the vertex shader will also be interpolated over the polygon, and the result is available to the fragment shader in in variables.
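As a sketch of this in/out pairing (the variable names in_Position, exColor and out_Color are our own, not taken from the lab files, and the version directive is omitted):

```glsl
// Vertex shader: write the "varying"
in vec3 in_Position;
out vec3 exColor;                         // interpolated over the polygon

void main(void)
{
    exColor = vec3(1.0, 0.0, 0.0);        // e.g. something computed per vertex
    gl_Position = vec4(in_Position, 1.0); // mandatory output
}

// Fragment shader: read the interpolated value
in vec3 exColor;
out vec4 out_Color;

void main(void)
{
    out_Color = vec4(exColor, 1.0);
}
```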
You can find a full list of pre-defined variables in the GLSL Language Specification. And, of course, the course book also holds more information.
Debugging shaders
Debugging a shader is a story of its own. We don't have any full shader debugger installed in the lab, so we have to resort to other methods. On the positive side, shaders are often very simple (especially in this lab). However, debugging takes some special tricks.
Compilation errors are reported to stdout. This is a main source for information.
You can also play some tricks in the shaders. If your shader is running, but produces the wrong data, you can use its output for extra information. For example, you can output bright red to signal the result of some test comparison.
1) Setup and getting acquainted with the lab shell
Download the lab package below and unpack it at a suitable location. We start from the first example from Chapter 3. All material for all labs is in one archive.
New version 2025: The makefile now includes several stages so you don’t need to edit it. The common folder is a separate folder that must go in the right place, next to the lab folders.
New version 2026: This lab is now simplified to match the G2 level better!
There are several files included in the lab environment:
makefile - contains rules for how the executable should be built; read by make.
lab1-1.cpp - the actual lab code; this is where the main program resides.
lab1-1.vert - Minimal vertex shader.
lab1-1.frag - Minimal fragment shader.
The folder "common" contains a set of reusable utility files
GL_utilities.c - utilities for loading shaders and more.
MicroGlut.c - a package that uses an API similar to the classic user interface library GLUT, but smaller and in a single file, in order to improve code transparency and avoid obsolete code. It also adds a few convenient features. Note: MicroGlut is not GLUT; you cannot expect GLUT documentation online to be relevant.
VectorUtils4.h - Simple header-only vector/matrix package. It is similar to the library glm but, like MicroGlut, small and transparent.
LittleOBJLoader.h - Header-only loader for "OBJ" models and for uploading custom data as arrays.
LoadTGA.c - Loader for "TGA" images (for textures), used from Lab 2 and onward.
All these files are relatively small and fairly self-explanatory (except parts of LittleOBJLoader). Throughout the lab material, we strive for code transparency: code that you can easily edit any time you need to change a behavior or add a missing feature, and, not least, any time you want to go "behind the scenes" and see how things work.
You will be using makefiles, lab1-1.cpp, shaders and respective .h files directly.
Compile the test program by entering the lab1 directory and performing make on the command line. This should produce a new executable file called lab1-1.
Run lab1-1 by typing ./lab1-1 on the command-line. It should show a white triangle against a dark background.
Open lab1-1.cpp and have a look inside it. There are two functions of interest to you, init() and display(). init() is called once during program startup and display() is called every time it is time to render a new frame of graphics.
Currently, display() does three things:
* Clearing of screen
* Rendering of a triangle using the DrawModel() call
* Swapping front- and backbuffer
The init() function does work critical for rendering:
* Sets the background color and activates the Z-buffer
* Uploading the vertex list to the GPU with LoadDataToModel()
* Loading the vertex and fragment shaders
The upload of the vertex list may require some explanation. We will return to it in section 4 (color shading).
Try changing the triangle data, by moving the vertices.
Change the color of the triangle and the background.
For those of you on other systems, you may need other makefiles or project files. Let us know if you need them.
Questions:
Where is the origin placed in the on-screen coordinate system?
Which direction are the X and Y axes pointing in the on-screen coordinate system?
2) Transformations in the vertex shader
Goal: To transform your polygon with 2D transforms defined by matrices.
Copy lab1-1.cpp to lab1-2.cpp. Make this section's changes to lab1-2.cpp. Also copy the shaders similarly.
Now we will work with matrices with explicit contents. You can create your matrix like this:
mat4 myMatrix = mat4(1, 0, 0, 0.5,
0, 1, 0, 0,
0, 0, 1, 0,
0, 0, 0, 1);
Use the following call to send your matrix to your shaders.
uploadMat4ToShader(shader, "myMatrix", myMatrix);
FYI, this call packages the OpenGL call glUniformMatrix4fv. uploadMat4ToShader() is not an efficient call, but a convenient one when learning.
The "shader" variable is a reference to your shaders, returned when you first loaded them.
In your vertex shader, declare your matrices and apply them to your vertices as you see fit. For the example above, there should be a matrix declared like this:
uniform mat4 myMatrix;
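Put together, a minimal vertex shader using the uniform could look like this (a sketch; in_Position is an assumed attribute name, and the version directive is omitted):

```glsl
in vec3 in_Position;
uniform mat4 myMatrix;    // set from the host program with uploadMat4ToShader()

void main(void)
{
    gl_Position = myMatrix * vec4(in_Position, 1.0);
}
```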
Questions:
What is the purpose of the "in", "out" and "uniform" modifiers? Be more specific than saying “input” and “output”.
What is the output of the fragment shader?
3) Simple animation
Goal: To add time-based rotation/translation of the object.
Copy lab1-2.cpp to lab1-3.cpp. Make this section's changes to lab1-3.cpp. Also copy the shaders similarly.
You can get the current time using
GLfloat t = (GLfloat)glutGet(GLUT_ELAPSED_TIME);
The function returns an integer, a milliseconds value. We cast it to float to avoid truncation when scaling it.
In order to render new images repeatedly, you should use
glutRepeatingTimer(d)
where d is an integer. This will run a timer that will cause a redisplay every d milliseconds. You call it once during the startup of the program.
Modify matrices using a time-varying variable to produce an animation. Note that you will now need to upload your updated matrices in the display() callback, not in init(). The components of a matrix called myMatrix are written myMatrix.m[i].
When animating objects, you will want to use the sin() and cos() functions. To do that, you should include this header file:
#include <math.h>
and link with the math library using -lm.
Questions:
What is the frame rate of the animation?
4) Color shading
Goal: To interpolate data between vertices.
Copy lab1-3.cpp to lab1-4.cpp. Make this section's changes to lab1-4.cpp. Also copy the shaders similarly.
Now we are going to modify the data upload, so let us look deeper into what actually happens.
You now need an additional buffer for the color data. If you look at the LoadDataToModel function, it has multiple slots for several buffers.
LoadDataToModel is defined like this:
Model* LoadDataToModel(vec3 *vertices, vec3 *normals, vec2 *texCoords, vec3 *colors, GLuint *indices, int numVert, int numInd);
For our purposes, the “normals” slot will work even though it is not really meant for this. (Advanced students: feel free to extend LittleOBJLoader to support the color buffer. It is just a placeholder now.)
The DrawModel call will now need to know the name of the color buffer. Since we are using the normal vector slot, that is where you put that variable name. DrawModel is defined like this:
void DrawModel(Model *m, GLuint program, const char* vertexVariableName, const char* normalVariableName, const char* texCoordVariableName)
Pass the colors to "out" variables in the vertex shader, and as "in" variables in the fragment shader.
Use the interpolated color for the fragment color.
Interpolating shades over the surface like this can be considered a kind of Gouraud shading (although that term usually refers to interpolated lighting).
Questions:
Did you need to do anything different when uploading the color data?
The "in" and "out" modifiers are now used for something different. What?
5) Building a pyramid, visible surface detection
Goal: To build a pyramid, using more 3D data. See the issues with VSD.
We have not covered VSD (visible surface detection) in the lectures yet. Here you will explore it yourself and then we cover it more formally in the lectures.
Copy lab1-4.cpp to lab1-5.cpp. Make this section's changes to lab1-5.cpp. Also copy the shaders similarly.
Build a pyramid by creating six triangles (4 sides and a square bottom made of 2 triangles). Keep the coordinates within +/- 0.5 units from the origin. For this case, define each triangle separately, and let the index list simply contain consecutive numbers (0, 1, 2, ...).
Give all triangles different colors.
Use a transformation as in part 3 to rotate the model. Does something look strange?
It is likely that it looks strange in some orientations. We need some kind of visible surface detection (VSD). We will try one of the most widely used VSD methods: Z buffering. To use that, you need to do three things:
1) Set up a Z buffer:
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
This call is part of the context configuration, so it must be called before the OpenGL context is created, while all GL calls can only be called after. So, where in the code should this be?
2) Activate the Z buffer:
glEnable(GL_DEPTH_TEST);
3) Erase the Z buffer before rendering (modify existing call):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Try enabling and disabling Z-buffering and compare the results.
Now, turn off Z-buffering and enable back-face culling:
glDisable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
Hint: In order to see all parts of the pyramid, it can be good to rotate it around the X axis.
Any problems this time? How can you fix it?
When you are done, your pyramid should render correctly with back-face culling enabled!
Questions:
What problems did you encounter while building the pyramid?
How do you change the facing of a polygon?
6) Load a 3D model from disc
Goal: To render a complex 3D model read from disc.
Copy lab1-5.cpp to lab1-6.cpp. Make this section's changes to lab1-6.cpp. Also copy the shaders similarly.
The file LittleOBJLoader.h will load a Wavefront OBJ file from disc.
Model *m;
m = LoadModel("bunny.obj");
As before, you draw it with DrawModel().
There are no colors, so you need to edit your shaders. You can use the normal vector in any way you like (be creative!) to select colors by vertex. Then these colors should be interpolated over the triangles.
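One simple idea, as a sketch (exNormal is an assumed varying name carrying the interpolated normal; your own names may differ):

```glsl
in vec3 exNormal;         // interpolated normal from the vertex shader
out vec4 out_Color;

void main(void)
{
    // Normal components are roughly in -1..1; remap them to 0..1 for color.
    out_Color = vec4(exNormal * 0.5 + 0.5, 1.0);
}
```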
Finally, disable back-face culling for now. (Without projection it will not be correct. We return to that later.)
glDisable(GL_CULL_FACE);
Questions:
Why do we need normal vectors for a model?
What did you do in your fragment shader?
Should a normal vector always be perpendicular to a certain triangle? If not, why?
That concludes lab 1. Good work! In the next lab, you will experiment with texture mapping, scenes containing multiple objects, and camera placement.
