{"id":5,"date":"2025-05-08T14:24:17","date_gmt":"2025-05-08T14:24:17","guid":{"rendered":"https:\/\/blog.metu.edu.tr\/e252135\/?p=5"},"modified":"2025-05-08T14:24:19","modified_gmt":"2025-05-08T14:24:19","slug":"deferred-and-multi-pass-rendering","status":"publish","type":"post","link":"https:\/\/blog.metu.edu.tr\/e252135\/2025\/05\/08\/deferred-and-multi-pass-rendering\/","title":{"rendered":"Deferred and Multi-Pass Rendering"},"content":{"rendered":"<p>\u00a0<\/p>\n<h2><strong>Introduction<\/strong><\/h2>\n<p>In this blog, I will share my experience completing\u00a0 the second assigment for course Ceng469 Computer Graphics 2.<\/p>\n<p class=\"ds-markdown-paragraph\">For this assignment, I implemented a deferred and multi-pass renderer with HDR environment lighting, motion blur, and tone mapping.I rendered the given cubemap texture an armadillo model with multiple modes and multiple passes.These modes include rendering the world positions and normals of the model, rendering the model with deferred lighting, showing only the cubemap background, the combined results, the combined results with motion blur, and a final result with tonemapping and motion blur applied to both the model and the cubemap.<\/p>\n<h2><strong>Rendering A Cubemap Texture and Looking Around<\/strong><\/h2>\n<p data-start=\"79\" data-end=\"245\">I started with a key feature of this assignment ,which was rendering a cubemap\u00a0 to create a 360\u00b0 environment around the viewer. The idea was to simulate looking around in a 3D space.<\/p>\n<p data-start=\"0\" data-end=\"145\">The cubemap rendering was one of the more straightforward parts of the assignment, because OpenGL\u00a0 has native cubemap support.<\/p>\n<p data-start=\"147\" data-end=\"499\">I started by setting up a vertex buffer for the cubemap and reading the cubemap texture .Then, I created a shader program for rendering the cubemap. 
I also set up the exposure values to linearly scale the input color.<\/p>\n<p>Next, I bound the cubemap texture and used it as input for the shader. I cleared the color and depth buffers before rendering the cubemap. Finally, I drew the cubemap using a simple cube geometry.<\/p>\n<p>After rendering the cubemap, I implemented view rotation using the middle mouse button: by holding it and moving the mouse, you can smoothly rotate the view. The scene adjusts accordingly, making it feel like you&#8217;re navigating within a virtual world. The background remains static while the view shifts, which gives the illusion of being inside a 360-degree environment.<\/p>\n\n\n<h2 class=\"wp-block-heading\"><strong>Visualizing the World Positions and Normals<\/strong><\/h2>\n\n\n\n<p>This part was also relatively straightforward to implement, though I ran into an issue while rendering the world positions. I started by creating a framebuffer object (FBO) for the geometry pass and writing the geometry shaders. Then, I set up a function to handle the geometry pass and created separate shaders for visualizing world positions and normals.<\/p>\n\n\n\n<p>At first, the world position output didn\u2019t look smooth; it came out looking like a scattered point cloud. After some trial and error, I realized the problem was likely related to how the data was being stored or accessed from the G-buffer. To work around it, I temporarily solved the issue by rendering the world positions directly in the shader instead of sampling from the geometry buffer. 
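<\/p>
<p>One likely cause I considered (an assumption on my part, not something I verified): writing world positions into a default 8-bit color attachment instead of a float format such as <code>GL_RGBA16F<\/code>. An 8-bit UNORM texture clamps values to [0, 1] and quantizes them to 256 steps, which destroys world-space coordinates. The round-trip can be simulated on the CPU:<\/p>

```cpp
#include <algorithm>
#include <cmath>

// Simulate storing a float in one channel of an 8-bit UNORM texture
// (e.g. GL_RGBA8): the value is clamped to [0, 1] and quantized to 256 levels.
float storeUnorm8(float v) {
    float clamped = std::clamp(v, 0.0f, 1.0f);
    return std::round(clamped * 255.0f) / 255.0f;
}
```

<p>Any coordinate outside the unit range comes back clamped, which would explain broken-looking position data; a floating-point G-buffer format avoids this.<\/p>
<p>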
(I was supposed to fix the issue properly later, probably something to do with the G-buffer setup.)<\/p>\n\n\n\n<p>I also animated the model by continuously rotating it around the Y-axis for better visualization.<\/p>\n\n\n\n<p>Normal visualization, on the other hand, worked as expected without much trouble.<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"1080\" style=\"aspect-ratio: 1920 \/ 1080;\" width=\"1920\" controls src=\"https:\/\/blog.metu.edu.tr\/e252135\/files\/2025\/05\/worldpos.mp4\"><\/video><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Deferred Lighting<\/strong><\/h2>\n\n\n\n<p>To display the result, I set up a full-screen quad and a separate set of shaders that render the final lighting output. However, when I first ran it, nothing appeared on the screen. After some debugging, I found that the issue was likely caused by invalid or empty position data. I fixed it by adding a condition to the lighting fragment shader:<\/p>\n\n\n\n<p><code>if (length(FragPos) == 0.0) { discard; }<\/code><\/p>\n\n\n\n<p>For deferred lighting, I reused the geometry pass I had already written to store world positions and normals into textures. After that, I created a lighting shader that uses these textures to calculate lighting in screen space. 
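<\/p>
<p>The core of such a lighting pass is plain vector math. As a minimal sketch (Lambertian diffuse only, written on the CPU for clarity; the real shader also adds ambient and specular terms):<\/p>

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Diffuse term much as a deferred lighting shader computes it, using the
// world position and normal that were fetched from the G-buffer textures.
float diffuse(Vec3 fragPos, Vec3 normal, Vec3 lightPos) {
    Vec3 toLight = normalize({ lightPos.x - fragPos.x,
                               lightPos.y - fragPos.y,
                               lightPos.z - fragPos.z });
    return std::max(0.0f, dot(normalize(normal), toLight));
}
```

<p>A fragment facing the light receives the full contribution, while one facing away is clamped to zero, which is why correct normals in the G-buffer matter so much here.<\/p>
<p>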
I added two light sources with different positions and used simple ambient, diffuse, and specular lighting calculations to light the scene based on the normal and position data from the G-buffer.<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"1080\" style=\"aspect-ratio: 1920 \/ 1080;\" width=\"1920\" controls src=\"https:\/\/blog.metu.edu.tr\/e252135\/files\/2025\/05\/deferred.mp4\"><\/video><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Composite<\/strong><\/h2>\n\n\n\n<p>For the composite step, I modified my earlier functions that handled deferred lighting and cubemap rendering so they could take an output FBO as a parameter. This allowed me to render everything offscreen instead of directly to the screen.<\/p>\n\n\n\n<p>Once I had offscreen rendering working, I created a new shader to combine the lighting result with reflections from the cubemap. The shader takes in the position, normal, and lighting textures, along with the cubemap (both in 2D and cube formats). Using this data, it computes a reflection vector and samples the cubemap for environment reflections.<\/p>\n\n\n\n<p>This adds a layer of realism to the scene by blending direct lighting with dynamic reflections from the environment, making surfaces respond more naturally to both light and the viewer\u2019s perspective.<\/p>\n\n\n\n<p>If the position at a fragment was invalid (length zero), I simply rendered the 2D cubemap texture as a fallback. 
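<\/p>
<p>The reflection lookup mentioned above boils down to mirroring the incident direction about the surface normal, the same formula GLSL\u2019s <code>reflect()<\/code> built-in uses. A small CPU-side sketch:<\/p>

```cpp
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror an incident direction I about unit normal N, matching GLSL's
// reflect(): R = I - 2 * dot(N, I) * N. R is then used to sample the cubemap.
Vec3 reflectDir(Vec3 I, Vec3 N) {
    float d = dot(N, I);
    return { I.x - 2.0f * d * N.x,
             I.y - 2.0f * d * N.y,
             I.z - 2.0f * d * N.z };
}
```

<p>Fragments with valid G-buffer data sample the cube texture along this reflected direction; fragments that fail the length-zero position check fall through to the flat background texture instead.<\/p>
<p>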
This handled rendering of the cubemap background.<\/p>\n\n\n\n<p>With this, both the model and the background are rendered correctly.<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"1080\" style=\"aspect-ratio: 1920 \/ 1080;\" width=\"1920\" controls src=\"https:\/\/blog.metu.edu.tr\/e252135\/files\/2025\/05\/compos.mp4\"><\/video><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Composite and Motion Blur<\/strong><\/h2>\n\n\n\n<p>Once the lighting and reflections were working, I moved on to implementing the composite and motion blur step. This part was more tedious than expected. I first modified my composite function to support offscreen rendering by accepting a framebuffer parameter. I then added a motion blur pass on top of it, which uses a simple blur shader that blends nearby pixels in a diagonal direction based on the camera movement.<\/p>\n\n\n\n<p>The motion blur strength is controlled by tracking mouse movement velocity. I used a Gaussian-weighted blur that samples neighboring texels and fades with distance. At first, I was getting strange visual glitches, but after experimenting with different shaders I managed to write a working one. Next, I needed to write the log luminance to the alpha channel for later tone mapping, and this brought its own problems: when the blur amount was zero, the result was supposed to be identical to the plain composite, but it looked different. After hours of debugging, I realized the problem was related to how I was writing the log luminance to the alpha channel. The issue was OpenGL\u2019s default blending state: since I was storing a non-standard value in the alpha channel, it interfered with how the colors were blended on-screen. Adjusting the blend state fixed the issue. 
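<\/p>
<p>For reference, the Gaussian weights for such a blur are cheap to precompute. A minimal sketch (the tap count and sigma below are illustrative values, not my exact settings):<\/p>

```cpp
#include <cmath>
#include <vector>

// Normalized Gaussian weights for a blur with `taps` samples on each side
// of the center texel; `sigma` controls how quickly the weights fade.
std::vector<float> gaussianWeights(int taps, float sigma) {
    std::vector<float> w;
    float sum = 0.0f;
    for (int i = -taps; i <= taps; ++i) {
        float v = std::exp(-(i * i) / (2.0f * sigma * sigma));
        w.push_back(v);
        sum += v;
    }
    for (float& v : w) v /= sum; // normalize so overall brightness is preserved
    return w;
}
```

<p>Normalizing the weights is what keeps the blurred image from getting brighter or darker than the input.<\/p>
<p>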
<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"1080\" style=\"aspect-ratio: 1920 \/ 1080;\" width=\"1920\" controls src=\"https:\/\/blog.metu.edu.tr\/e252135\/files\/2025\/05\/blur.mp4\"><\/video><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Tonemapping and Gamma Correction<\/strong><\/h2>\n\n\n\n<p>To finalize the image, I added tone mapping and gamma correction. The tone mapping shader uses the logarithmic luminance stored in the alpha channel of the HDR image to estimate the average scene brightness. This value is then passed into a Reinhard tone mapping function to compress the dynamic range of the scene. I also applied gamma correction afterward using <code>glEnable(GL_FRAMEBUFFER_SRGB)<\/code> to ensure proper brightness on standard displays.<\/p>\n\n\n\n<p>Before applying the tone mapping, I first ran the composite and motion blur pass using the functions I had defined earlier for rendering the motion-blurred composite scene. After that, I called <code>glGenerateMipmap<\/code> on the motion blur texture. This was a crucial step because the tone mapping shader samples the lowest-resolution mipmap level to approximate the scene\u2019s average luminance.<\/p>\n\n\n\n<p>However, a key issue I ran into was that the average luminance was calculated dynamically from the current frame&#8217;s contents. As a result, the overall brightness of the scene would shift depending on what was visible on screen. I asked the teaching assistant about it, and he stated that this is normal behavior.<\/p>\n\n\n\n<p>Here is the tonemapped result.<\/p>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"1080\" style=\"aspect-ratio: 1920 \/ 1080;\" width=\"1920\" controls src=\"https:\/\/blog.metu.edu.tr\/e252135\/files\/2025\/05\/tonemapped.mp4\"><\/video><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Implementing Controls and Rendering Info<\/strong><\/h2>\n\n\n\n<p>To handle user controls, I used GLFW\u2019s keyboard callbacks. 
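<\/p>
<p>Before going through the individual keys, the tone mapping math from the previous section is worth spelling out. A minimal CPU-side sketch of the global Reinhard operator with a log-average luminance (the quantity the mipmap trick approximates on the GPU); the key value here is only an illustrative parameter:<\/p>

```cpp
#include <cmath>
#include <vector>

// Log-average (geometric mean) luminance of an HDR image; the small delta
// avoids log(0) on black pixels.
float logAverageLuminance(const std::vector<float>& lum, float delta = 1e-4f) {
    float sum = 0.0f;
    for (float l : lum) sum += std::log(delta + l);
    return std::exp(sum / lum.size());
}

// Global Reinhard operator: scale luminance by key / logAvg, then
// compress the dynamic range with L / (1 + L).
float reinhard(float lum, float logAvg, float key) {
    float scaled = key * lum / logAvg;
    return scaled / (1.0f + scaled);
}
```

<p>The compression step maps any HDR luminance into [0, 1), and raising the key brightens the image before compression.<\/p>
<p>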
Each key press updates internal state variables that control things like exposure, gamma, motion blur, VSync, and render mode. For example, pressing the up or down arrows doubles or halves the exposure value, while left\/right adjusts the tone mapping key. Other keys toggle features like VSync (<code>V<\/code>), gamma correction (<code>G<\/code>), motion blur (<code>B<\/code>), and animation playback (<code>F<\/code>). Pressing space toggles between fullscreen and windowed mode.<\/p>\n\n\n\n<p>I also briefly display the most recently pressed key on screen. Mouse input is used to control the camera view by holding the middle mouse button and dragging.<\/p>\n\n\n\n<p>For displaying real-time info like FPS and current settings, I wrote a simple <code>renderInfo()<\/code> function that renders text overlays. This includes FPS, exposure, gamma value, tone mapping key, motion blur status, and VSync state. I calculate FPS using a frame counter and a 1-second timer based on <code>glfwGetTime()<\/code>.<\/p>\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper fourthree\" style=\"width:420px;max-width:100%;margin:5px;\"><div class=\"lyMe\" id=\"WYL_cYC-M8SKEuM\"><div id=\"lyte_cYC-M8SKEuM\" data-src=\"\/\/i.ytimg.com\/vi\/cYC-M8SKEuM\/hqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\"><\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/cYC-M8SKEuM\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/i.ytimg.com\/vi\/cYC-M8SKEuM\/0.jpg\" alt=\"YouTube video thumbnail\" width=\"420\" height=\"295\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:420px;margin:5px;\"><\/div><figcaption><\/figcaption><\/figure>\n\n\n<h2 
class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>This Deferred and Multi-Pass Rendering assignment took me quite a bit of time to work through due to the complexity of the tasks involved. I started by implementing a system to render objects with cubemaps and control the exposure of HDR images. Then, I worked on deferred shading by first rendering world positions and normals into off-screen textures and applying lighting in a separate pass. Along the way, I added motion blur based on camera rotation speed and implemented tone mapping and gamma correction for a more realistic final image.  Although it took a lot of time to get everything working smoothly,  I had a chance to practice advanced rendering techniques.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\">Interesting Bug Visuals<\/h2>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"470\" style=\"aspect-ratio: 630 \/ 470;\" width=\"630\" controls src=\"https:\/\/blog.metu.edu.tr\/e252135\/files\/2025\/05\/bug1.mp4\"><\/video><\/figure>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"1080\" style=\"aspect-ratio: 1920 \/ 1080;\" width=\"1920\" controls src=\"https:\/\/blog.metu.edu.tr\/e252135\/files\/2025\/05\/bug3.mp4\"><\/video><\/figure>\n\n\n\n<figure class=\"wp-block-video\"><video height=\"850\" style=\"aspect-ratio: 474 \/ 850;\" width=\"474\" controls src=\"https:\/\/blog.metu.edu.tr\/e252135\/files\/2025\/05\/bug4.mp4\"><\/video><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>\u00a0 Introduction In this blog, I will share my experience completing\u00a0 the second assigment for course Ceng469 Computer Graphics 2. 
For this assignment, I implemented a deferred and multi-pass renderer with HDR environment lighting, motion blur, and tone mapping. I rendered the given cubemap texture and an armadillo model with multiple modes and multiple passes. These modes include [&hellip;]<\/p>\n","protected":false},"author":9367,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","_links_to":"","_links_to_target":""},"categories":[1],"tags":[],"class_list":["post-5","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/blog.metu.edu.tr\/e252135\/wp-json\/wp\/v2\/posts\/5","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.metu.edu.tr\/e252135\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.metu.edu.tr\/e252135\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e252135\/wp-json\/wp\/v2\/users\/9367"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e252135\/wp-json\/wp\/v2\/comments?post=5"}],"version-history":[{"count":0,"href":"https:\/\/blog.metu.edu.tr\/e252135\/wp-json\/wp\/v2\/posts\/5\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.metu.edu.tr\/e252135\/wp-json\/wp\/v2\/media?parent=5"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e252135\/wp-json\/wp\/v2\/categories?post=5"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e252135\/wp-json\/wp\/v2\/tags?post=5"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}