Virtual world and specular shading
Goal: In this lab, you will expand your scene to a richer virtual world, including hierarchical models and a skybox. You will also implement the Phong lighting model.
This lab can be pretty demanding, parts 3 and 4 in particular.
If you run into problems, you can either look in the textbook or visit https://registry.khronos.org/vulkan/specs/latest/man/html/, where you will find all the functions, structs, and enums used in Vulkan.
1) Hierarchical modelling, the windmill
In the asset folder, there is a windmill in four parts: three making up the mill's body (walls, roof, balcony), and one wing. Build a working windmill from the parts. Four wings should be placed at the appropriate point on the body and rotate around it, as in the sketch below. Create appropriate rotation and translation matrices to produce suitable model-to-world transformations.
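Hierarchical modelling boils down to composing each wing's transform from the body's transform. A minimal sketch, where T() and Rz() are hypothetical translation/rotation helpers from your matrix library, and hubOffset, angle, and the rotation axis are placeholders that depend on the actual model:

mat4 bodyToWorld = T(0.0f, 0.0f, 0.0f);   // mill body at the origin
mat4 timeRot = Rz(angle);                 // one time-dependent rotation
for (int i = 0; i < 4; i++)
{
    // Each wing inherits the body's transform, is moved to the hub,
    // and gets the common rotation plus a fixed 90-degree offset.
    mat4 wingToWorld = bodyToWorld * T(hubOffset) * timeRot * Rz(i * 3.14159f / 2.0f);
    // ... draw the wing with wingToWorld as its model-to-world matrix
}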
Hint in case you have trouble: You may have to tweak some numbers to get the placement right. How can you do that without making lots of small numeric changes in the code, each followed by a recompilation?
Questions:
How can you get all four blades to rotate with just one time-dependent rotation matrix?
How do you make the wings follow the body's movements?
2) Manual viewing controls
The lookAt() function is useful for more than placing the camera in some fixed place. Write controls for moving the camera. You can use keyboard only, or mouse and keyboard.
If you want to use the mouse, you can call svk.bindMouseMotionFunction() with a function that returns void and takes two int32_t parameters. It can look something like this:
void mouse(int32_t xRel, int32_t yRel)
{
    // some code
}

svk.bindMouseMotionFunction(mouse);
This allows you to modify variables by moving the mouse.
For keyboard controls, you can use the function svk.isKeyDown(). For example:
if (svk.isKeyDown("W"))
{
    // move forward
}
Note: Each frame, the program checks whether a key is down (provided you call isKeyDown() somewhere). Compare this with mouse motion, where the program doesn't poll every frame but instead receives an event when the mouse has moved. This means that a variable updated from mouse motion should not be scaled by the frame time (delta), whereas per-frame keyboard movement should be.
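As a sketch of the difference (cameraPos, forward, speed, deltaTime, yaw, and sensitivity are assumed variables, not part of the lab framework):

// Event-driven: xRel is already "motion since the last event",
// so it should not be scaled by the frame time.
void mouse(int32_t xRel, int32_t yRel)
{
    yaw += xRel * sensitivity;
}

// Polled per frame: scale by the frame time to stay frame-rate independent.
if (svk.isKeyDown("W"))
    cameraPos += forward * speed * deltaTime;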
The manual controls should allow you to move around in the "world".
Questions:
What kind of control did you implement?
3) Virtual world and skybox
Using the manual controls above, you should expand your virtual universe to a simple "virtual world" with a set of basic features.
Add a "ground" as a fairly large square. You can do that with a few vectors:
float groundSize = 100.0f;
std::vector<vec3> vertices = {
    vec3{-groundSize, 0.0f, -groundSize},
    vec3{-groundSize, 0.0f,  groundSize},
    vec3{ groundSize, 0.0f, -groundSize},
    vec3{ groundSize, 0.0f,  groundSize}
};
std::vector<vec3> vertexnormals = {
    vec3{0.0f, 1.0f, 0.0f},
    vec3{0.0f, 1.0f, 0.0f},
    vec3{0.0f, 1.0f, 0.0f},
    vec3{0.0f, 1.0f, 0.0f}
};
std::vector<uint32_t> indices = {
    0, 1, 2, 1, 3, 2
};
and pass this data to svk.loadCustomModel().
Next, add a "skybox". For this purpose, a skybox model and skybox texture are provided in the asset folder for lab 3. The skybox should follow the camera and appear to be drawn behind everything else. To achieve this, draw the skybox first, with the depth test turned off (which means the skybox needs a separate pipeline with depth testing disabled). Culling should also be turned off, since you always want to draw the skybox (and the model is not designed for culling). The skybox should be rotated by the world-to-view matrix but not translated. You can do this with a copy of the world-to-view matrix in which you zero out the translation component.
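Zeroing the translation can look something like this sketch, assuming a column-major mat4 indexed as m[column][row] with the translation in the fourth column (if your matrix library is row-major, zero the fourth row instead):

mat4 skyboxWorldToView = worldToView;  // copy, keep the rotation
skyboxWorldToView[3][0] = 0.0f;        // zero out the translation
skyboxWorldToView[3][1] = 0.0f;
skyboxWorldToView[3][2] = 0.0f;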
Questions:
How did you handle the camera matrix for the skybox?
How did you represent the objects? Is this a good way to manage a scene or would you do it differently for a "real" application?
What special considerations are needed when rendering a skybox?
4) Specular shading, external light sources
Now you have a nice scene, but you need better light. Implement Phong shading in your shaders (moving the light calculations to the fragment shader), including the specular component of the Phong model, with light sources specified from the CPU. (This is a challenging task, but I find it quite rewarding.) We omit some parts, such as most material parameters and light attenuation, and we make no attempt at shadow effects.
The scene
The scene should have the windmill at the origin. There should be a flat ground, this time without a texture on it. The windmill, the ground, and the teapot should all be lit in later stages, so use the same shader for all three. Keep the skybox from the earlier parts.
Furthermore, put the Utah Teapot at coordinates (20.0, 0.0, 20.0). This places the positional light sources between the teapot and the windmill.
The scene, with fake light only
The light sources
You should define your lights and matrices as follows:
struct Uniforms
{
    vec4 lightSourcesDirPosArr[4] = {vec4{10.0f, 5.0f, 0.0f},   // Red light, positional
                                     vec4{0.0f, 5.0f, 10.0f},   // Green light, positional
                                     vec4{-1.0f, 0.0f, 0.0f},   // Blue light along X
                                     vec4{0.0f, 0.0f, -1.0f}};  // White light along Z
    vec4 lightSourcesColorArr[4] = {vec4{1.0f, 0.0f, 0.0f},   // Red light
                                    vec4{0.0f, 1.0f, 0.0f},   // Green light
                                    vec4{0.0f, 0.0f, 1.0f},   // Blue light
                                    vec4{1.0f, 1.0f, 1.0f}};  // White light
    int isDirectional[4] = {0, 0, 1, 1};
};
struct Matrices
{
    mat4 modelToWorld;
    mat4 worldToView;
    mat4 viewToProjection;
    float specularExponent;
};
The specular exponent should be set as follows for each of the objects:
Floor: 100.0
Windmill: 200.0
Teapot: 60.0
You still want to upload the matrices (and the specular exponent) as push constants, since modelToWorld as well as specularExponent vary from model to model.
For the lights defined in the Uniforms struct, you instead want to upload them to a uniform buffer. The reason is that push constants have a limited size: on the graphics cards used in the lab rooms for this course, the limit should be 256 bytes, which is not a lot but enough to hold the three matrices and the specularExponent (64 * 3 + 4 = 196 bytes).
Since you want to have the matrices in both the vertex and the fragment shader, you should call svk.initPipelineLayoutCombinedPushConstants() instead of svk.initPipelineLayout(). This ensures that both shaders can access the same push-constant range (196 bytes in this case). Next, to upload to both stages, call vkCmdPushConstants() with VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT as the third parameter.
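For reference, the call can look like this, where commandBuffer, pipelineLayout, and matrices are placeholder names for your own handles and data:

vkCmdPushConstants(commandBuffer, pipelineLayout,
                   VK_SHADER_STAGE_VERTEX_BIT | VK_SHADER_STAGE_FRAGMENT_BIT,
                   0, sizeof(Matrices), &matrices);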
For the uniform buffer, you need an AllocatedBuffer just like in lab 1. To allocate and upload, call svk.allocateUniformBuffer() and svk.uploadToUniformBuffer(), with the same parameter layout as svk.allocateVertexBuffer() and svk.uploadToGPUBuffer(). For the size of the buffer, you can simply use sizeof(Uniforms). Finally, call object.addUniformBuffer(), similarly to how you added textures to an object. The parameters are the following:
* binding number
* pointer to uniform buffer
* size of uniform buffer
The binding number serves the same purpose here as for textures: it must match whatever you set it to in the shader.
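Put together, the host-side setup can look roughly like this sketch (the exact svk signatures follow the vertex buffer functions from lab 1, and the binding number 1 is only an example):

Uniforms uniforms;  // the light data defined above
AllocatedBuffer lightBuffer = svk.allocateUniformBuffer(sizeof(Uniforms));
svk.uploadToUniformBuffer(lightBuffer, &uniforms, sizeof(Uniforms));
object.addUniformBuffer(1, &lightBuffer, sizeof(Uniforms));  // binding, buffer, size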
In the fragment shader, you declare the uniform block like this:
layout(binding = ??) uniform colordata
{
    vec4 inLightSourcesDirPosArr[4];
    vec4 inLightSourcesColorArr[4];
    ivec4 isDirectional;
};
Notice how we're using arrays of vec4 instead of vec3, despite not dealing with alpha in this lab. This is because Vulkan's std140 uniform layout aligns variables to 16 bytes. A mat4 fulfills this by being 4 * 16 bytes, and an array of 4 vec4 occupies 4 * 16 bytes as well. An array of 4 vec3 would be tightly packed as 4 * 12 bytes on the C++ side but padded to 16 bytes per element on the GPU side, so the two layouts would not match; thus we have to use vec4 instead. To avoid any incorrect calculations, simply convert these to arrays of 4 vec3 inside the fragment shader before performing operations on them (see the sketch below).
Additionally, the array of 4 ints has been turned into an ivec4 inside the fragment shader, which serves the same purpose. The reason is the same alignment rule: in std140 layout, each element of an int array gets a 16-byte stride, so an int[4] array on the GPU would not match the tightly packed int[4] in the C++ struct, whereas a single ivec4 occupies exactly one 16-byte slot.
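Inside the fragment shader, the conversion can look like this:

// Unpack the vec4 arrays into vec3 arrays before doing the lighting math.
vec3 lightDirPos[4];
vec3 lightColor[4];
for (int i = 0; i < 4; i++)
{
    lightDirPos[i] = vec3(inLightSourcesDirPosArr[i]);
    lightColor[i] = vec3(inLightSourcesColorArr[i]);
}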
Step 1: Convert data to view coordinates.
I assume that you calculate the lighting in view coordinates. It is vital not to mix up coordinate systems, so make sure you are in the right one at every step.
Take special care to keep track of which coordinate system you are working in: model data starts in model coordinates, while the light sources are given in world coordinates.
You will need to have the following data in view coordinates:
• Camera position.
Question: Where is the camera in view coordinates?
• Surface position.
• Normal vectors.
• Light direction.
For each of these, apply the model-to-view or world-to-view transformation as appropriate. The light direction is handled in the fragment shader. Normal vectors and surface positions must be passed as varyings (out from the vertex shader, in to the fragment shader), as in the sketch below.
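A minimal vertex shader sketch (the attribute locations and the push-constant block are assumptions; match them to your own code):

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 0) out vec3 viewPos;
layout(location = 1) out vec3 viewNormal;

layout(push_constant) uniform PushConstants
{
    mat4 modelToWorld;
    mat4 worldToView;
    mat4 viewToProjection;
    float specularExponent;
};

void main()
{
    mat4 modelToView = worldToView * modelToWorld;
    viewPos = vec3(modelToView * vec4(inPosition, 1.0));
    // mat3() drops the translation, which is what we want for normals.
    // (Use the inverse transpose if you add non-uniform scaling.)
    viewNormal = mat3(modelToView) * inNormal;
    gl_Position = viewToProjection * vec4(viewPos, 1.0);
}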
How do you know that they are correct? Put them on the surfaces! Do this for both the normal vectors and the surface positions. The normal vectors should be blue for surfaces facing you, and the ground should be green. For the surface positions, you should get a cross in the middle of the screen.
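For the check, the fragment shader can simply output the varying as a color, for example:

// Debug sketch: surfaces facing the camera have view-space normals
// near (0, 0, 1), which shows up as blue.
outColor = vec4(normalize(viewNormal), 1.0);
// ... and separately, for the second check:
// outColor = vec4(viewPos, 1.0);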
The scene visualizing normal vectors in world coordinates. This is not what we want.
The scene with normal vectors in view coordinates. We will now always get blue facing the camera.
The scene visualizing surface positions. We should get this cross in the middle, and it will stay there as we move.
This is what you get if you put the surface positions in world coordinates. This is not what we want.
Step 2: Directional diffuse light.
Start with light source 2, the blue directional light along the X axis. If it is correct, you get diffuse light on the X side of both objects, and none on the floor, since the light is parallel to the floor. Note that the stored vector gives the direction towards the light.
Normalize what needs to be normalized. Remember that a varying vector doesn’t stay normalized under interpolation.
Add light source 3. If #2 was correct, this should pose no problem.
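A sketch of the diffuse term for light 2 (using the vec3 arrays unpacked earlier, and assuming the stored vector points towards the light and is given in world coordinates, so it is rotated into view coordinates first):

vec3 N = normalize(viewNormal);
vec3 L = normalize(mat3(worldToView) * lightDirPos[2]);  // towards the light, view coords
float diffuse = max(dot(N, L), 0.0);
vec3 shade = diffuse * lightColor[2];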
The picture below was made with 0.5 times each light.
Both directional lights, diffuse only.
Step 3: Positional diffuse light.
For a positional light, you need to create a local vector from the surface to the light source.
The positional lights are red and green and should produce diffuse spots between the windmill and the teapot. They do not move as you move the camera.
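A sketch for positional light 0 (the light position is given in world coordinates, so it is transformed into view coordinates first):

vec3 lightPosView = vec3(worldToView * vec4(lightDirPos[0], 1.0));
vec3 L = normalize(lightPosView - viewPos);  // surface towards light
float diffuse = max(dot(normalize(viewNormal), L), 0.0);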
Diffuse positional light. Note the spots on the ground!
Step 4: Specular light.
For the moment, only use light source 3 (white directional).
Now you need to use the direction towards the viewer and mirror the surface-to-light vector over the normal vector.
You should get some highlights on the models, but also a big lit area on the ground, in the distance.
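A sketch of the specular term for light 3, using GLSL's reflect(); in view coordinates the camera sits at the origin, which gives the view vector for free:

vec3 N = normalize(viewNormal);
vec3 L = normalize(mat3(worldToView) * lightDirPos[3]);  // towards the light
vec3 V = normalize(-viewPos);  // towards the viewer (camera at origin)
vec3 R = reflect(-L, N);       // surface-to-light mirrored over the normal
float specular = pow(max(dot(R, V), 0.0), specularExponent);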
Specular light from light source 3.
Directional specular light will give a lit area in the distance.
Now turn on all lights. You should get two sharper spots on the ground, red and green. They move as you move the camera. Note: I used 0.5 times the diffuse and 0.8 times the specular.
This is what we want to see!
We can note that these separate spots do not feel very realistic. This is because diffuse and specular light are two separate components, and a better lighting model could handle this more gracefully. It is also a question of how sharp the highlights should be, of the missing ambient light, and of light attenuation. Let's not worry; we have pretty decent lighting with full control from the main program.
Interesting side note on the realism of the separate spots: This is actually not unrealistic, this does happen for some materials!
Questions:
Why was blue facing the camera when visualizing the normal vectors?
For light source 3, why did we get a white area in the distance for the specular light but not for the diffuse?
How do you generate a vector from the surface to the eye?
Which vectors need renormalization in the fragment shader?
Extra) Managing transparency
In your virtual world, make at least two objects semi-transparent. Move around. You should be able to find locations where the transparency (combined with Z buffering) causes problems. Solve these problems.
Questions:
How did you remove the errors caused by transparency?
That concludes lab 3. Good work! In the next lab, we will make the ground more interesting, a 3D terrain.