Lab 1, Vulkan version PRELIMINARY

As of 2025, we are introducing new versions of the labs using Vulkan instead of OpenGL. These are all new, entirely optional, and recommended only for ambitious students.

WORK IN PROGRESS! There are bound to be unclear parts and errors. Please let us know, and ask David for help to get around them.

We have an “engine” that is a layer to simplify some aspects of Vulkan. This will be used for all labs.

Introduction to Vulkan and the Simpler Vulkan API

Goal: In this lab, you will get acquainted with how Vulkan is designed through the help of the Simpler Vulkan API. At the end of the lab, you should be able to display a 3D object.

If you run into problems, you can either look in the textbook, or visit https://registry.khronos.org/vulkan/specs/latest/man/html/. There you will find all the functions and structs and enums used in Vulkan.

Note that you should write down answers to all questions before you get examined!

We will use the C++ language for this lab, plus GLSL for shader programs.

0) Some important notes about C++ and GLSL

& takes the address of a variable.

An array name decays to a pointer to its first element, so the two can usually be used interchangeably.

* is used both for declaring pointer variables and dereferencing them:

int *a; declares the variable "a" as a pointer to an int.

b = *a; dereferences the variable "a" and gets the value it points to and assigns it to b.

#define and #include are preprocessor directives. Be very careful with them. A #define that makes a constant is safe, but a #define with arguments (a function-like macro) quickly gets out of hand.

int a[5]; is an array of 5 ints.

svk_vector_utils defines several types that work like those in GLSL. The two most important ones are:

vec3 is a struct representing a 3D vector.

mat4 is a struct representing a 4x4 matrix.
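
A minimal sketch of their use (assuming GLSL-style constructors and operators; check svk_vector_utils for the exact interface):

vec3 v = vec3(1.0f, 2.0f, 3.0f);       // construct a 3D vector
vec3 w = v + vec3(0.5f, 0.0f, 0.0f);   // GLSL-style vector arithmetic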

We will provide you with examples wherever we see a need. Ask us if you feel you need one.

Shader programs

Every Vulkan program in this course must include at least one vertex and one fragment shader. These are small program kernels executed on the GPU as part of rendering, for specifying the behaviour of the vertex and fragment processing steps.

Vertex shaders perform per-vertex calculations. That's where vertices are transformed, per-vertex lighting calculations are done, and where skeletal animation systems do most of their work.

Fragment shaders perform per-pixel calculations. That's where texture and lighting colours are combined into one final pixel colour value.

GLSL

GLSL is the OpenGL Shading Language, and it is the same language that we use for Vulkan. Your GLSL files are compiled into the binary format SPIR-V. GLSL code is similar to C/C++ code, but with built-in vector and matrix types and a strong emphasis on numeric computation.

Most GLSL code performs floating-point calculations. Common datatypes used are float, vec2, vec3, vec4, mat3 and mat4. These datatypes represent scalars, 2D, 3D, 4D vectors, 3x3 and 4x4 matrices. Arithmetic operations can be performed directly on these datatypes.

For integer calculations (such as counting loop iterations), int is available. The bool datatype is also available.

A small GLSL function can look like:

vec4 applyDirectionalLight(vec3 normal, vec4 originalColor)
{
  vec3 lightDirection = normalize(vec3(0.5, 0.8, 0.7));
  float strength = dot(lightDirection, normal);
  if (strength < 0.0)
    strength = 0.0;
  vec4 color = originalColor.xyxx * strength;
  return color;
}

vec3(0.5, 0.8, 0.7) constructs a new vec3 from three floating-point values.

dot() calls a predefined math function.

originalColor.xyxx performs "swizzling" on the original vector: the result is a vec4 whose XYZW elements are taken from the X, Y, X and X elements of originalColor, respectively.

You can find a complete list of built-in mathematical functions in the GLSL Language Specification.

GLSL program structure and variables

The code for a shader program is enclosed inside the main() function. It takes no arguments and returns nothing. Communication between Vulkan, the vertex shader and the fragment shader is done by reading/writing global variables.

Variables can have a few different qualifiers:

uniform - the value is constant over every object using the related buffer; it is read-only for fragment and vertex shaders.

push constant - a type of uniform that is unique to each object in the scene.

in/out - input and output. In vertex shaders, inputs are "attributes", which can be unique for every vertex (passed as arrays). The resulting color from a fragment shader is an out variable.

in/out between shaders - out from vertex, in to fragment, "varying" variables: the value will be interpolated over the surface of a polygon; write in vertex shader, read in fragment shader.

All variables whose names begin with "gl_" are predefined in GLSL. These are always present, and they can be used without declaring them first. For now, you only need to care about gl_Position, which is a vec4 with the resulting vertex from your vertex shader after transformation and projection. Writing this in your vertex shader is mandatory.

Vulkan will take the output from the vertex shader, interpolate the resulting values over the surface of any neighboring polygons, and then run the fragment shader once for every pixel which the polygon is supposed to render to. Any extra out variables in the vertex shader will also be interpolated over the polygon, and the result is available to the fragment shader in in variables.
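
Putting this together, a minimal vertex/fragment shader pair could look like this (a sketch; the variable names are ours, not necessarily those in the lab files):

Vertex shader:

#version 450

layout(location = 0) in vec3 inPosition;  // per-vertex attribute
layout(location = 0) out vec3 fragColor;  // interpolated on its way to the fragment shader

void main()
{
  gl_Position = vec4(inPosition, 1.0);    // mandatory output
  fragColor = vec3(1.0);                  // white
}

Fragment shader:

#version 450

layout(location = 0) in vec3 fragColor;   // interpolated value from the vertex shader
layout(location = 0) out vec4 outColor;   // resulting pixel color

void main()
{
  outColor = vec4(fragColor, 1.0);
}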

You can find a full list of pre-defined variables in the GLSL Language Specification. And, of course, the course book also holds more information.

Debugging shaders

Debugging a shader is a story of its own. We don't have any full shader debugger installed in the lab, so we have to resort to other methods. On the positive side, shaders are often very simple (especially in this lab). However, debugging takes some special tricks.

Vulkan uses validation layers to report errors. These can sometimes be helpful when debugging shader code, but there is no guarantee.

You can also play some tricks in the shaders. If your shader is running, but produces the wrong data, you can use its output for extra information. For example, you can output bright red to signal the result of some test comparison.
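
For example, in a fragment shader you might write something like this (a sketch; the variable names are placeholders for whatever your shader uses):

if (dot(normal, lightDirection) < 0.0)
  outColor = vec4(1.0, 0.0, 0.0, 1.0);  // bright red marks fragments that fail the test
else
  outColor = shadedColor;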


1) Setup and getting acquainted with the lab shell

Download the lab package below and unpack it at a suitable location. We start from a modified version of the first example from Chapter 3. All material for all labs is in one archive.

TSBK07_11_labs.tar.gz

There are several files included in the lab environment. For lab 1 we have:

Makefile - contains rules for how the executable should be built; read by make.

lab1-1.cpp - the actual lab code; this is where the main program resides.

shaders/lab1-1.vert - Minimal vertex shader.

shaders/lab1-1.frag - Minimal fragment shader.

assets/bunny.obj - An obj file representing a bunny.

The folder "shared" contains the Simpler Vulkan API and some third-party files:

svk.cpp - the main entry point of the API.

svk_vector_utils.cpp - a modified version of VectorUtils4, suited for C++ and the API. It contains vector and matrix classes for use in linear algebra.

The other files are helper files for svk.cpp and are of little concern in this course, as you will call almost no functions in them directly. If you are interested in how the API works under the hood, you are more than welcome to take a look.

Note: Before running, you need to change the permissions of the shader compiler:

cd to shared/third_party/, then type the command

chmod 777 glslangValidator

Compile the test program by entering the lab1 directory and running the following command in the terminal:

make lab1-1

This will produce an executable file. Run it by typing the following into the terminal:

./lab1-1

You should now have a program open that shows a white triangle against a dark-blue background.

Open lab1-1.cpp and have a look inside it. There are two functions of interest to you, init() and display(VkCommandBuffer cmd). init() is called once during program startup and display(VkCommandBuffer cmd) is called every time it is time to render a new frame of graphics.

Currently, display() does four things to render the triangle. For each frame, it is:

* Obtaining the rendering info, and beginning it.

* Binding the pipeline and vertex buffer using Vulkan commands.

* Drawing the triangle with another Vulkan command.

* Ending the rendering.

The init() function initializes the data used in the rendering. It:

* Sets the appropriate binding description and attribute description to be used when initializing the pipeline.

* Initializes an empty descriptor set layout (more on this later).

* Initializes the pipeline layout based on the descriptor set layout.

* Initializes the pipeline based on the pipeline layout, the aforementioned descriptions, and paths to the two shaders.

* Allocates and uploads the vertex data to a vertex buffer.

* Sets the background color.

The upload of the vertex data may require some explanation. We will return to it in section 4 (color shading).

Try changing the triangle data by moving the vertices.

Change the color of the triangle and the background.

Questions:

What is the output of the fragment shader?

Where is the origin placed in the on-screen coordinate system?

Which direction are the X and Y axes pointing in the on-screen coordinate system?

2) Transformations in the vertex shader

Goal: To transform your polygon with 2D transforms defined by matrices.

Copy lab1-1.cpp to lab1-2.cpp. Make this section's changes to lab1-2.cpp. Also copy the shaders similarly. To compile, simply run

make lab1-2

Define transformation matrices, somewhat like this:

float myMatrix[] = {1.0f, 0.0f, 0.0f, 0.0f,
                    0.0f, 1.0f, 0.0f, 0.0f,
                    0.0f, 0.0f, 1.0f, 0.0f,
                    0.5f, 0.0f, 0.0f, 1.0f};

What does this matrix do? Define other 2D transformations.

To upload this variable to the shader, you will utilize the push constant uniform.

First, we need to declare how big our push constant is. Do this when initializing the pipeline layout as follows:

initPipelineLayout(pipelineLayout, descriptorSetLayout, sizeof(myMatrix));

Then we also need to declare the push constant in the vertex shader like this:

layout(push_constant) uniform constants
{
    mat4 myMatrix;
};

Finally, we need to upload the push constant with the following function:

vkCmdPushConstants(cmd, pipelineLayout, VK_SHADER_STAGE_VERTEX_BIT, 0, sizeof(myMatrix), &myMatrix);

Call this before calling the vkCmdDraw() function.

In your vertex shader, you can now use the matrix as you see fit.
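
For example, assuming the position attribute is named inPosition (as in the example in section 4), the vertex shader could apply the matrix like this:

gl_Position = myMatrix * vec4(inPosition, 1.0);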

Questions:

What is the purpose of the "in", "out" and "uniform" modifiers? Be more specific than saying “input” and “output”.

For the matrix, you need to modify the bottom row rather than the right column to move the triangle. Why is this?

3) Simple animation

Goal: To add time-based rotation/translation of the object.

Copy lab1-2.cpp to lab1-3.cpp. Make this section's changes to lab1-3.cpp. Also copy the shaders similarly.

You can get the time using

double time = svk.getTimeSinceStart();

Alternatively, you can get the delta (time since last frame) by using

double delta = svk.getDelta();

Modify matrices using a time-varying variable to produce an animation.

When animating objects, you may want to use the std::sin() and std::cos() functions.
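
As a sketch, a time-varying rotation about the Z axis could be pushed from display() like this (it assumes the push-constant setup from section 2 and that <cmath> is included; remember that GLSL matrices are column-major, so each line below is one column):

float t = (float)svk.getTimeSinceStart();
float myMatrix[] = { std::cos(t),  std::sin(t), 0.0f, 0.0f,   // column 0
                    -std::sin(t),  std::cos(t), 0.0f, 0.0f,   // column 1
                     0.0f,         0.0f,        1.0f, 0.0f,   // column 2
                     0.0f,         0.0f,        0.0f, 1.0f }; // column 3
vkCmdPushConstants(cmd, pipelineLayout, VK_SHADER_STAGE_VERTEX_BIT, 0, sizeof(myMatrix), &myMatrix);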

Questions:

What is the frame rate (FPS) of the program?

Can you get the time by only calling the svk.getDelta() function and vice versa? How so?

4) Color shading

Goal: To interpolate data between vertices.

Copy lab1-3.cpp to lab1-4.cpp. Make this section's changes to lab1-4.cpp. Also copy the shaders similarly.

Now we are going to modify the data upload, so let us look deeper into what actually happens.

When we want to upload a vertex attribute to the shader we must first declare a binding description and an attribute description for the data. For the attribute description we first declare the following:

* The location in the vertex shader file

* The binding number

* The format of the value

* The offset of the value

Attributes are identified by location numbers. You set the location as a number in a layout statement in the shader like this:

layout(location = 0) in vec3 inPosition;

and refer to it by that same number in the CPU code.

In the case of the position in our file, the location/binding number is set to 0. The format is the type of the value. For a vec3 we can use VK_FORMAT_R32G32B32_SFLOAT, since that gives us three 32-bit signed float values. For a vec2, we can likewise use VK_FORMAT_R32G32_SFLOAT.

The offset is where to start reading values in the buffer. This is useful if we have a buffer that contains data for several different attributes.

For the binding description, we declare the following:

* The binding number

* The stride

* The input rate

The binding is what connects it to an attribute description of the same binding number. The stride tells Vulkan how many bytes to advance in the buffer between consecutive vertices. For our position we want it to advance by 3 floats (i.e. the size of a vec3), so we set it to sizeof(float) * 3. The input rate specifies the rate at which attributes are pulled from the buffer. In our case we always want VK_VERTEX_INPUT_RATE_VERTEX, so you don't need to pass this value.

These descriptions are then passed to the initialization of the pipeline. To make it easier, we use the VertexInputDescriptionHandler to help add binding and attribute descriptions; the order in which you pass the values as parameters is the same as listed above. If you want to read more, look up VkVertexInputBindingDescription and VkVertexInputAttributeDescription in the Vulkan manual pages mentioned in the introduction.
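
For reference, the raw Vulkan structs that these descriptions end up in look like this for our position attribute (the handler fills them in for you):

VkVertexInputBindingDescription binding{};
binding.binding = 0;                              // binding number
binding.stride = sizeof(float) * 3;               // one vec3 per vertex
binding.inputRate = VK_VERTEX_INPUT_RATE_VERTEX;  // advance once per vertex

VkVertexInputAttributeDescription attribute{};
attribute.location = 0;                           // matches layout(location = 0) in the shader
attribute.binding = 0;                            // matches the binding above
attribute.format = VK_FORMAT_R32G32B32_SFLOAT;    // three 32-bit floats
attribute.offset = 0;                             // start reading at the beginning of each element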

Once we have initialized the pipeline, we need to allocate space for a buffer and then upload the data to it. For that, we use the following functions:

SVK::allocateVertexBuffer(AllocatedBuffer& buffer, size_t const& allocationSize)

SVK::uploadToGPUBuffer(AllocatedBuffer& buffer, void const* bufferData, size_t const& bufferSize)

First, we allocate the buffer by calling svk.allocateVertexBuffer(), passing our buffer as a reference as well as the size we want to allocate.

Then we upload our data to the buffer by calling uploadToGPUBuffer(). This is similar to the memcpy() function from C and C++: the first parameter is a reference to the buffer we want to copy our data to, the second is a pointer to the data we want to copy from, and the third is the size of the data we want to copy. Remember that the size for both functions must be the total size, i.e. the number of elements multiplied by the size of each element.
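
As a sketch, uploading the triangle positions from section 1 could look like this (variable names and vertex values are ours):

vec3 vertices[] = { vec3(-0.5f, -0.5f, 0.0f),
                    vec3( 0.5f, -0.5f, 0.0f),
                    vec3( 0.0f,  0.5f, 0.0f) };
AllocatedBuffer vertexBuffer;
svk.allocateVertexBuffer(vertexBuffer, sizeof(vertices));        // total size = 3 * sizeof(vec3)
svk.uploadToGPUBuffer(vertexBuffer, vertices, sizeof(vertices));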

Now we have both declared what data we want in the vertex shader, and stored that data in a buffer on the GPU. All we need to do now is bind them together. We do that by calling the Vulkan function:

vkCmdBindVertexBuffers(VkCommandBuffer commandBuffer, uint32_t firstBinding, uint32_t bindingCount, const VkBuffer* pBuffers, const VkDeviceSize* pOffsets);

First we pass the command buffer, then the starting index of the bindings we want to bind, then the number of bindings. After this, we pass an array of buffers and an array with the offset into each buffer. This means you can pass several buffers, each with its own binding (in incrementing order from firstBinding to firstBinding + bindingCount - 1) and its own offset. But we don't need to store our buffers in an array and bind them all at the same time; we can simply call this function several times, modifying the firstBinding and pBuffers parameters each time.
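
For example, two buffers at bindings 0 and 1 could be bound in separate calls like this (a sketch; the buffer names are ours):

VkDeviceSize offsets[] = { 0 };
vkCmdBindVertexBuffers(cmd, 0, 1, &vertexBuffer.buffer, offsets);  // binding 0
vkCmdBindVertexBuffers(cmd, 1, 1, &secondBuffer.buffer, offsets);  // binding 1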

Add a new buffer, similar to the vertex buffer, but this time for colors. You need to add an appropriate binding and attribute description, allocate and upload to the buffer, and then bind it. Don't forget to also create it in the vertex shader with an appropriate location.

In the vertex shader, you pass the colors on through "out" variables, which arrive as "in" variables in the fragment shader, again with appropriate locations.

An important note: the locations of "in" variables do not correspond to the locations of "out" variables, i.e. you can have an "in" variable and an "out" variable with the same location but referring to different data.

Use the interpolated color for the fragment color.

Interpolating shades over the surface like this can be considered a kind of Gouraud shading (although that usually refers to light).

Questions:

Did you need to do anything different when uploading the color data?

The "in" and "out" modifiers are now used for something different. What for?

If you want to modify the layout(location) of an attribute in the vertex shader, where do you also need to modify it in the main program?

5) Building a pyramid, visible surface detection

Goal: To build a pyramid, using more 3D data.

Copy lab1-4.cpp to lab1-5.cpp. Make this section's changes to lab1-5.cpp. Also copy the shaders similarly.

Before doing anything else, you need to call svk.enableDepthClipControl() before calling svk.init(). This allows the program to use OpenGL's depth range instead of Vulkan's.

Build a pyramid by creating six triangles (4 for the sides and 2 for the square bottom). Make the coordinates within +/- 0.5 units from the origin.

Set all vertices for each triangle to the same color (that is, different for each triangle).

Use a transformation as in part 3 to rotate the model. Does something look strange?

It is likely that it looks strange in some orientations. We need some kind of visible surface detection (VSD). The two most basic ones are Z buffering and back-face culling. We will start without the Z buffer and focus on the back-face culling. To do that, you need to do two things:

1) Disable the depth buffer. You can do this by calling svk.setDepthTestEnabled(false) before initializing the pipeline.

2) Enable culling. You can do this by calling svk.setCullMode(VK_CULL_MODE_FRONT_BIT) before initializing the pipeline.

This will disable the depth buffer for the pipeline and set the cull mode so that any polygon facing away from the screen is not drawn. This means that if your triangles face the correct way, the pyramid should look correct from all angles; but if you have constructed any of the triangles incorrectly, it will look messed up.

Hint: In order to see all parts of the pyramid, it can be good to rotate it around the X axis.

Note: vkCmdDraw() takes 5 parameters. The first is the command buffer, then the number of vertices, then the instance count, the index of the first vertex, and the index of the first instance. The last three should always be 1, 0, 0. But the number of vertices is different when building the pyramid instead of a single triangle. Set this value accordingly.
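
For the pyramid (6 triangles of 3 vertices each), the call would thus be:

vkCmdDraw(cmd, 18, 1, 0, 0);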

When you are done, your pyramid should render correctly with culling enabled!

To see the result in action (if culling works correctly you won't notice it, because the removed polygons are behind other polygons), you can set the polygon mode to render only the triangle edges (wireframe) by calling svk.setPolygonMode(VK_POLYGON_MODE_LINE) before initializing the pipeline.

Questions:

Why are we using OpenGL's depth range instead of Vulkan's?

What problems did you encounter while building the pyramid?

How do you change the facing of a polygon?

6) Load a 3D model from disc

Goal: To render a complex 3D model read from disc.

Copy lab1-5.cpp to lab1-6.cpp. Make this section's changes to lab1-6.cpp. Also copy the shaders similarly.

From now on, we will be dealing with models. Because models usually carry position, normal, and texture coordinates, a special container is used that can hold these values. As such, we know ahead of time what binding and attribute descriptions we want, and how to set up a buffer. This means you no longer need to manually declare the descriptions or allocate and upload to a buffer. Instead, declare a model variable like this:

Model<VertexType> model;

where VertexType is the type of attributes you want to work with. It can either be VERTEX_P, VERTEX_PN, VERTEX_PT, or VERTEX_PNT.

P stands for position, N for normal, and T for, you guessed it, texture.

For now, declare a model that has a position and normal. Next, we want to initialize the pipeline with this information. For this you write:

svk.initVertexPipeline<VertexType>(pipeline, pipelineLayout, "shaders/lab1-6.vert", "shaders/lab1-6.frag");

This is similar to the previous initialization of the pipeline, but instead of passing a handler as a parameter, we declare the VertexType, which is the same as for the model. Depending on which VertexType you pass, the layout locations in the vertex shader will differ, but they always follow the pattern "start at 0, and increment by 1 for each attribute". So if we have VERTEX_PN, position will be 0 and normal will be 1. If we have VERTEX_PT, position will be 0 and the texture coordinate will be 1. Finally, if we have VERTEX_PNT, position will be 0, normal will be 1, and the texture coordinate will be 2.
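
For example, with VERTEX_PN the vertex shader inputs would be declared like this (the names are ours):

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;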

To load the model, you simply type the following:

svk.loadModel(model, "assets/bunny.obj");

To bind the vertex buffer, you simply pass &model.vertexBuffer.buffer as a parameter, with only a single binding, starting at 0.

Next you also need to bind the index buffer and this is done as follows:

vkCmdBindIndexBuffer(VkCommandBuffer commandBuffer, VkBuffer buffer, VkDeviceSize offset, VkIndexType indexType);

Again we pass the command buffer as our first parameter, then the index buffer handle itself (not a pointer), which in our case is model.indexBuffer.buffer, then an offset, which is always going to be 0 in this course, and finally the index type, which will always be VK_INDEX_TYPE_UINT32 in this course.
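
With our model, the call thus becomes:

vkCmdBindIndexBuffer(cmd, model.indexBuffer.buffer, 0, VK_INDEX_TYPE_UINT32);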

Next, we want to draw and we will use vkCmdDrawIndexed() instead of vkCmdDraw() and it looks like the following:

void vkCmdDrawIndexed(VkCommandBuffer commandBuffer, uint32_t indexCount, uint32_t instanceCount, uint32_t firstIndex, int32_t vertexOffset, uint32_t firstInstance);

which looks even more daunting than vkCmdBindVertexBuffers(), but it is not: we only care about two parameters, the command buffer and the indexCount. The remaining parameters will always follow the pattern 1, 0, 0, 0.

So the call will look like this every time; only the indexCount differs between calls for different models:

vkCmdDrawIndexed(cmd, model.indices.size(), 1, 0, 0, 0);

Now that you've added all these parts (and removed the old initialization, binding, and drawing code), the program should work, and you should see a bunny on the screen.

The normals might now be treated as colors by the shader (provided you put the color attribute at location 1), but you can use the normals in any way you like.

Questions:

Why do we need normal vectors for a model?

What did you do in your fragment shader?

Should a normal vector always be perpendicular to a certain triangle? If not, why?

Why are we binding an index buffer and calling vkCmdDrawIndexed() instead of vkCmdDraw()?

 That concludes lab 1. Good work! In the next lab, you will experiment with texture mapping, scenes containing multiple objects, and camera placement.


Finally, a survey! The new Vulkan labs are made by David and he needs your feedback, regardless of whether you did the Vulkan or OpenGL version of the lab.

https://forms.office.com/e/r6a8CcuHNV

This page is maintained by Ingemar Ragnemalm