{"id":28,"date":"2025-05-06T21:15:22","date_gmt":"2025-05-06T21:15:22","guid":{"rendered":"https:\/\/blog.metu.edu.tr\/e251884\/?p=28"},"modified":"2025-05-06T21:19:24","modified_gmt":"2025-05-06T21:19:24","slug":"ceng-469-homework-2","status":"publish","type":"post","link":"https:\/\/blog.metu.edu.tr\/e251884\/2025\/05\/06\/ceng-469-homework-2\/","title":{"rendered":"CENG 469 Homework 2"},"content":{"rendered":"\n<p>This  assignment was about putting together a basic deferred renderer with HDR cubemap support, tone mapping, motion blur and some basic camera movements. I rolled up my sleeves, dove into OpenGL, and here&#8217;s how it all unfolded<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Code Cleanup<\/strong><\/h4>\n\n\n\n<p>I started by refactoring the given code to understand what\u2019s what and build a more organized codebase. Clean code makes life way easier, because when things break (and they <em>will<\/em>), messy code makes debugging intense.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>HDR Cubemap Loading<\/strong><\/h4>\n\n\n\n<p>Next, I focused on initializing the HDR cubemap textures. Parsing the <code>.hdr<\/code> files was thankfully painless thanks to the <code>stb_image<\/code> library. Then came integrating it into the scene. This was easy again. I followed <a href=\"https:\/\/learnopengl.com\/Advanced-OpenGL\/Cubemaps\">OpenGL&#8217;s tutorial<\/a> and changed the given quad.obj file to create a cubemap instead of a single 2D quad.<\/p>\n\n\n\n<p><br><strong>A Weird Issue:<\/strong> Once I added the OBJ model, the texture looked broken, only the middle part showed and it completely occluded the model. I tried tweaking the quad.obj vertices at first (bad idea), then realized the issue was with the field of view. Turns out it should\u2019ve been 90\u00b0, not 45\u00b0. 
Fixed it, and everything looked as expected.<br>Also updated the OpenGL depth functions (<code>glDepthFunc<\/code>, <code>glDepthMask<\/code>) before and after cubemap rendering to properly handle occlusion.<br><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"647\" height=\"481\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/first-screenshot.png\" alt=\"\" class=\"wp-image-30\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/first-screenshot.png 647w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/first-screenshot-300x223.png 300w\" sizes=\"auto, (max-width: 647px) 100vw, 647px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Tone Mapping + Gamma Correction<\/strong><\/h4>\n\n\n\n<p>I added gamma correction and manually inspected every cubemap face to verify it (simply by changing the gaze vector). Then I implemented <strong>Reinhard tone mapping<\/strong> and added an exposure control with on-screen text feedback. The current state:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"642\" height=\"478\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/after-step-7.png\" alt=\"\" class=\"wp-image-31\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/after-step-7.png 642w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/after-step-7-300x223.png 300w\" sizes=\"auto, (max-width: 642px) 100vw, 642px\" \/><\/figure>\n\n\n\n<p>I realized that I was supposed to use global Reinhard tone mapping for the homework, but I implemented a simpler version with just an exposure value. Probably should\u2019ve watched the video before jumping into the homework. \ud83d\ude42<\/p>\n\n\n\n<p>I also added the last pressed key to the screen, displayed in the bottom-left corner for 30 frames. It&#8217;s hardcoded for now, but I could adapt it based on FPS if needed. 
With that, the basic cubemap setup was done: an armadillo object, a tone-mapped cubemap with adjustable exposure, and key press indicators. Here is an example video:<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"482\" style=\"aspect-ratio: 644 \/ 482;\" width=\"644\" controls src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/step10.mp4\"><\/video><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Armadillo Movement<\/strong><\/h4>\n\n\n\n<p>The armadillo\u2019s movement was looking strange. I stripped out all the movement logic except the modeling matrix and simply rotated the model around the Y-axis using quaternions. The armadillo was already facing +Y, so I just adjusted its Z position to bring it closer and made it spin. I added a toggle key (<code>R<\/code>) to pause\/resume rotation, as the homework requires. Example video:<br><\/p>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"482\" style=\"aspect-ratio: 644 \/ 482;\" width=\"644\" controls src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/step11.mp4\"><\/video><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>FPS Counter<\/strong><\/h4>\n\n\n\n<p>From there, I moved on to the FPS counter and fixed the key press display logic. Instead of counting a fixed number of frames, I switched to showing the key for about 1\/3 of a second (i.e., for about fps \/ 3 frames). I also added a rendering mode indicator to the top-right corner of the screen. 
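<\/p>\n\n\n\n<p>The timing logic is tiny, but worth writing down. Sketched in Python for readability (the actual code is C++, and the names here are mine):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>

```python
def frames_to_show(fps, seconds=1.0 / 3.0):
    # Show the last pressed key for roughly a third of a second,
    # regardless of how fast the renderer is currently running.
    return max(1, round(fps * seconds))

print(frames_to_show(60))    # 20 frames at 60 FPS
print(frames_to_show(1100))  # 367 frames with vsync off
```

<\/code><\/pre>\n\n\n\n<p>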
Right now, it just says &#8220;TONEMAPPED,&#8221; but it&#8217;s set up for more options later.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"644\" height=\"483\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/render-mode.png\" alt=\"\" class=\"wp-image-34\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/render-mode.png 644w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/render-mode-300x225.png 300w\" sizes=\"auto, (max-width: 644px) 100vw, 644px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Proper Reinhard Tone Mapping + Little Additions<\/strong><\/h4>\n\n\n\n<p>Before going further, I decided to fix my Reinhard tone mapping. Initially, I was just using the exposure value directly, but it turns out we are supposed to compute the scene\u2019s log-average luminance and scale each pixel by the key value divided by that average. I corrected that and added more UI text: gamma, exposure, and key value. I also gave users the option to view just the cubemap, with no object or tone mapping. For now I am still rendering the object, but I will fix that soon.<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"482\" style=\"aspect-ratio: 644 \/ 482;\" width=\"644\" controls src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/step14.mp4\"><\/video><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>Then, I added a <code>V<\/code> key to toggle vsync. With vsync off, FPS jumped to ~1100. Seems good.<\/p>\n\n\n\n<p>After watching the homework presentation video again, I implemented some of the expected functionality: <code>Space<\/code> toggles fullscreen, <code>Up\/Down<\/code> scale the key value, and exposure no longer affects the tonemapped output; only the key value does now. 
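<\/p>\n\n\n\n<p>To make the difference concrete, here is the before\/after in Python (a sketch of the math only; the real version lives in the tone mapping shader, and the function and variable names are mine):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>

```python
import math

def reinhard_exposure_only(luminance, exposure):
    # My first, too-simple version: scale by a fixed exposure factor.
    s = luminance * exposure
    return s / (1.0 + s)

def reinhard_global(luminances, key, delta=0.0001):
    # Corrected global Reinhard: compute the log-average luminance of
    # the whole image, scale each pixel by key over that average,
    # then compress into 0..1.
    log_avg = math.exp(sum(math.log(delta + l) for l in luminances) / len(luminances))
    out = []
    for l in luminances:
        scaled = l * key / log_avg
        out.append(scaled / (1.0 + scaled))
    return out

# A uniformly bright image maps to about key / (1 + key) everywhere,
# independent of its absolute luminance.
print(reinhard_global([2.0, 2.0, 2.0], key=0.18))
```

<\/code><\/pre>\n\n\n\n<p>That normalization is exactly why the key value, not the exposure, is the knob that matters in the corrected version.<\/p>\n\n\n\n<p>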
The program started looking much better after these fixes.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"641\" height=\"486\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/after16.png\" alt=\"\" class=\"wp-image-36\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/after16.png 641w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/after16-300x227.png 300w\" sizes=\"auto, (max-width: 641px) 100vw, 641px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Deferred Rendering<\/strong><\/h4>\n\n\n\n<p>Next, I tackled <strong>deferred rendering<\/strong>. I followed a great YouTube <a href=\"https:\/\/www.youtube.com\/watch?v=0ckE-CZpXAo&amp;t=290s&amp;ab_channel=BrianWill\">tutorial by Brian Will<\/a>. At first, I ran into an issue where the geometry render pass seemed to override the cubemap output. To fix that, I adjusted the render order so the geometry pass runs afterward. That helped, but the visuals still looked pretty bad at first. As I rotated the scene, the colors were flickering and jumping in strange ways, which made me suspect that depth testing wasn&#8217;t working correctly or that something was off with how the G-buffer was being used. 
It looked like z-fighting, as you can see:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"641\" height=\"486\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/after17.png\" alt=\"\" class=\"wp-image-37\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/after17.png 641w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/after17-300x227.png 300w\" sizes=\"auto, (max-width: 641px) 100vw, 641px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"641\" height=\"486\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/z-fighting-1.png\" alt=\"\" class=\"wp-image-51\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/z-fighting-1.png 641w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/z-fighting-1-300x227.png 300w\" sizes=\"auto, (max-width: 641px) 100vw, 641px\" \/><\/figure>\n\n\n\n<p>To fix it, I started with the normal calculations. I had forgotten to include the view matrix in the normal transform. Also, the object was disappearing on window resize, which I fixed by reinitializing the gBuffer as stated in the homework PDF (since it depends on screen dimensions).<br><\/p>\n\n\n\n<p>Z-fighting was a particularly annoying bug, caused by how I had defined <code>gPosition<\/code> and <code>gNormal<\/code> in the fragment shader. When they were <code>vec3<\/code>, depth values weren&#8217;t initialized properly, causing Z-fighting. I changed them to <code>vec4<\/code> and the problem was magically solved. I had tried a lot of things up to that point because the problem was so unexpected.<\/p>\n\n\n\n<p>Another bug: I noticed that the <code>gPosition<\/code> and <code>gNormal<\/code> textures looked identical in the geometry visualization shaders. 
Both the <code>gPosition<\/code> and <code>gNormal<\/code> data were being written correctly to the G-buffer, so the issue wasn\u2019t with the rendering, just with how the textures were being attached to the shader.<br>It turned out that whichever texture I attached <em>first<\/em> (normals or positions) would end up being used for both, which clued me in that the problem was in the fragment shader bindings.<br>I eventually realized I had forgotten to set the uniform for the <code>gNormal<\/code> sampler in the shader. Adding this line fixed it:<br><code>glUniform1i(glGetUniformLocation(visualizeProgram, \"gNormal\"), 1);<\/code><br>That ensured the <code>gNormal<\/code> texture was correctly bound to texture unit 1. After that, both the position and normal visualizations rendered properly.<\/p>\n\n\n\n<p>After a few small additions, here is a showcase video along with some screenshots. Sorry for the flashbang :).<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"480\" style=\"aspect-ratio: 642 \/ 480;\" width=\"642\" controls src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/step22.mp4\"><\/video><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"642\" height=\"478\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/21-positions.png\" alt=\"\" class=\"wp-image-41\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/21-positions.png 642w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/21-positions-300x223.png 300w\" sizes=\"auto, (max-width: 642px) 100vw, 642px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"647\" height=\"481\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/21-normals-1.png\" alt=\"\" class=\"wp-image-40\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/21-normals-1.png 647w, 
https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/21-normals-1-300x223.png 300w\" sizes=\"auto, (max-width: 647px) 100vw, 647px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Shading with G-Buffers<\/strong><\/h4>\n\n\n\n<p>With that resolved, I moved on to <strong>Blinn-Phong shading<\/strong> with four point lights: one near the head, one at the back, and two around the legs. The initial result was decent, but I noticed lighting didn\u2019t behave consistently with earlier stages. It turned out that I had stored vertex positions in object space instead of world space during the geometry pass. Fixing that brought lighting back to expected behavior.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"642\" height=\"481\" data-id=\"45\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/bp-shading-1.png\" alt=\"\" class=\"wp-image-45\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/bp-shading-1.png 642w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/bp-shading-1-300x225.png 300w\" sizes=\"auto, (max-width: 642px) 100vw, 642px\" \/><\/figure>\n<\/figure>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"642\" height=\"481\" data-id=\"46\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/bp-shadin2-1.png\" alt=\"\" class=\"wp-image-46\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/bp-shadin2-1.png 642w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/bp-shadin2-1-300x225.png 300w\" sizes=\"auto, (max-width: 642px) 100vw, 642px\" \/><\/figure>\n\n\n\n<figure 
class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"642\" height=\"481\" data-id=\"47\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/bp-shading3-1.png\" alt=\"\" class=\"wp-image-47\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/bp-shading3-1.png 642w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/bp-shading3-1-300x225.png 300w\" sizes=\"auto, (max-width: 642px) 100vw, 642px\" \/><\/figure>\n<\/figure>\n\n\n\n<p>At the end of the homework I used the same lighting setup with the given shaders to make the scene consistent.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Mouse Movements + HDR Image Fix<\/strong><\/h4>\n\n\n\n<p>Next up: mouse movement. I implemented it using <code>glm::lookAt<\/code>, and it worked nicely. But then I saw a mysterious black patch on the sun in the tonemapped image. My cubemap texture was using <code>GL_RGB16F<\/code>, not enough precision. Switching to <code>GL_RGB32F<\/code> fixed it. 
You can clearly see the black patch when tone mapping with a low key value.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"485\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/result-with-small-key-value.png\" alt=\"\" class=\"wp-image-48\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/result-with-small-key-value.png 640w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/result-with-small-key-value-300x227.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/figure>\n\n\n\n<p>After the fix:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"485\" src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/corrected-tonemapping-25.png\" alt=\"\" class=\"wp-image-49\" srcset=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/corrected-tonemapping-25.png 640w, https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/corrected-tonemapping-25-300x227.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Motion Blur<\/strong><\/h4>\n\n\n\n<p>To finish the homework, I implemented a <strong>motion blur effect<\/strong> as a post-processing step. I did this by sampling the color of several neighboring pixels in the fragment shader and blending them with different weights (simple linear motion blur). The closer a neighboring pixel was to the center, the more influence it had on the final color (a simple linear falloff). This effect is particularly noticeable when the camera or object is rotating quickly. 
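<\/p>\n\n\n\n<p>The blend itself is just a normalized kernel with linearly decreasing weights along the blur direction. Here it is sketched in Python over a 1-D row of pixels (the real version samples along a 2-D direction in the fragment shader; the function and names are mine):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>

```python
def motion_blur_1d(row, radius):
    # Weight each neighbor by how close it is to the center pixel
    # (simple linear falloff), then normalize so the weights sum to 1.
    out = []
    n = len(row)
    for i in range(n):
        total, weight_sum = 0.0, 0.0
        for offset in range(-radius, radius + 1):
            j = min(max(i + offset, 0), n - 1)  # clamp at the edges
            w = radius + 1 - abs(offset)        # linear falloff
            total += row[j] * w
            weight_sum += w
        out.append(total / weight_sum)
    return out

# A single bright pixel gets smeared into its neighbors:
print(motion_blur_1d([0.0, 0.0, 1.0, 0.0, 0.0], radius=1))
# -> [0.0, 0.25, 0.5, 0.25, 0.0]
```

<\/code><\/pre>\n\n\n\n<p>A larger radius, scaled by how fast the camera or object is moving, gives a stronger blur.<\/p>\n\n\n\n<p>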
Tuning the motion blur parameters and the movement-speed detection was actually harder than the blur algorithm itself.<\/p>\n\n\n\n<p>To apply the effect, I render the scene to a framebuffer and then draw that framebuffer to the screen through the motion blur shaders.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Known Issues<\/strong><br><\/h4>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The current motion blur is simple; as stated in the homework PDF, log-average luminance could be used for a better effect.<br><br><\/li>\n<\/ul>\n\n\n\n<h4 class=\"wp-block-heading\"><strong>Final Video<br><\/strong><\/h4>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"488\" style=\"aspect-ratio: 644 \/ 488;\" width=\"644\" controls src=\"https:\/\/blog.metu.edu.tr\/e251884\/files\/2025\/05\/final.mp4\"><\/video><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>This assignment was about putting together a basic deferred renderer with HDR cubemap support, tone mapping, motion blur and some basic camera movements. 
I rolled up my sleeves, dove into OpenGL, and here&#8217;s how it all unfolded Code Cleanup I started by refactoring the given code to understand what\u2019s what and build a more organized [&hellip;]<\/p>\n","protected":false},"author":8372,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","_links_to":"","_links_to_target":""},"categories":[1],"tags":[],"class_list":["post-28","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/blog.metu.edu.tr\/e251884\/wp-json\/wp\/v2\/posts\/28","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.metu.edu.tr\/e251884\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.metu.edu.tr\/e251884\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e251884\/wp-json\/wp\/v2\/users\/8372"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e251884\/wp-json\/wp\/v2\/comments?post=28"}],"version-history":[{"count":0,"href":"https:\/\/blog.metu.edu.tr\/e251884\/wp-json\/wp\/v2\/posts\/28\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.metu.edu.tr\/e251884\/wp-json\/wp\/v2\/media?parent=28"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e251884\/wp-json\/wp\/v2\/categories?post=28"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e251884\/wp-json\/wp\/v2\/tags?post=28"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}