Deferred and Multi-Pass Rendering


Introduction

In this blog post, I will share my experience completing the second assignment of the CENG469 Computer Graphics II course.

For this assignment, I implemented a deferred, multi-pass renderer with HDR environment lighting, motion blur, and tone mapping. I rendered an armadillo model against the given cubemap texture in multiple modes and passes. These modes include visualizing the model's world positions and normals, rendering the model with deferred lighting, showing only the cubemap background, the combined result, the combined result with motion blur, and a final result with tone mapping and motion blur applied to both the model and the cubemap.

Rendering A Cubemap Texture and Looking Around

I started with a key feature of this assignment: rendering a cubemap to create a 360° environment around the viewer. The idea was to simulate looking around in a 3D space.

The cubemap rendering was one of the more straightforward parts of the assignment, because OpenGL has native cubemap support.

I started by setting up a vertex buffer for the cubemap and reading the cubemap texture. Then, I created a shader program for rendering the cubemap. I also set up an exposure value that linearly scales the input color.

Next, I bound the cubemap texture and used it as input for the shader. I cleared the color and depth buffers before rendering the cubemap. Finally, I drew the cubemap using a simple cube geometry.
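Since the shader only supplies a direction vector and the hardware does the rest, it may help to see what a cubemap lookup actually does under the hood. The sketch below is not from my assignment code; it is a CPU-side reimplementation of the face-selection rule described in the OpenGL specification: the direction component with the largest magnitude picks one of the six faces, and the other two components become the 2D coordinates on that face.

```cpp
#include <cassert>
#include <cmath>

// CPU sketch of cubemap face selection (per the OpenGL spec).
// face: 0..5 = +X, -X, +Y, -Y, +Z, -Z; s and t are in [0, 1].
struct CubeTexel { int face; float s, t; };

CubeTexel sampleDir(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    int face; float ma, sc, tc;
    if (ax >= ay && ax >= az) {        // X-major direction
        ma = ax;
        face = x > 0 ? 0 : 1;
        sc = x > 0 ? -z :  z;
        tc = -y;
    } else if (ay >= az) {             // Y-major direction
        ma = ay;
        face = y > 0 ? 2 : 3;
        sc = x;
        tc = y > 0 ? z : -z;
    } else {                           // Z-major direction
        ma = az;
        face = z > 0 ? 4 : 5;
        sc = z > 0 ? x : -x;
        tc = -y;
    }
    // Map sc/ma and tc/ma from [-1, 1] into [0, 1] texture space.
    return { face, (sc / ma + 1.0f) * 0.5f, (tc / ma + 1.0f) * 0.5f };
}
```

For example, the direction (1, 0, 0) lands exactly in the center of the +X face.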

After rendering the cubemap, I implemented view rotation with the middle mouse button: while holding it, you can smoothly rotate the view by moving the mouse. The scene adjusts accordingly, making it feel like you're navigating a virtual world. The background remains static while the view shifts, which gives the illusion of being inside a 360-degree environment.
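The look controls boil down to accumulating mouse deltas into yaw and pitch angles. Here is a minimal sketch of that update (the struct and parameter names are my own, not from the assignment code); the pitch clamp prevents the camera from flipping over the poles.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Sketch of middle-mouse look controls: mouse deltas update yaw and
// pitch, and pitch is clamped so the view cannot flip upside down.
struct Camera {
    float yaw = 0.0f;    // degrees, horizontal look angle
    float pitch = 0.0f;  // degrees, vertical look angle

    void onMouseDrag(float dx, float dy, float sensitivity = 0.1f) {
        yaw   += dx * sensitivity;
        pitch += dy * sensitivity;
        pitch = std::clamp(pitch, -89.0f, 89.0f);  // stop at the poles
    }
};
```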


Visualizing the World Positions and Normals

This part was also relatively straightforward, though I ran into an issue while rendering the world positions. I started by creating a framebuffer object (FBO) for the geometry pass and writing the geometry-pass shaders. Then, I set up a function to handle the geometry pass and created separate shaders for visualizing world positions and normals.

At first, the world position output didn't look smooth; it came out looking like a scattered point cloud. After some trial and error, I realized the problem was likely related to how the data was being stored in, or sampled from, the G-buffer. As a workaround, I rendered the world positions directly in the visualization shader instead of sampling them from the geometry buffer, intending to fix the underlying G-buffer issue properly later.

I also animated the model by continuously rotating it around the Y-axis, which made the visualizations easier to inspect.

Normal visualization, on the other hand, worked as expected without much trouble.

Deferred Lighting

For deferred lighting, I reused the geometry pass I had already written to store world positions and normals into textures. After that, I created a lighting shader that uses these textures to calculate lighting in screen space. I added two light sources with different positions and used simple ambient, diffuse, and specular lighting calculations to light the scene based on the normal and position data from the G-buffer.

To display the result, I rendered the lighting output onto a full-screen quad using a separate set of shaders. However, when I first ran it, nothing appeared on the screen. After some debugging, I found that the issue was likely caused by invalid or empty position data. I fixed it by adding a condition to the lighting fragment shader:

if (length(FragPos) == 0.0) {
    discard;
}
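The per-fragment lighting math can be sketched on the CPU like this. This is my own illustrative version, not the assignment shader: it uses the Blinn-Phong half-vector form of the specular term and collapses color to a single scalar intensity, where a real shader would work per channel and sum the two lights.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  normalize(Vec3 v) {
    float l = std::sqrt(dot(v, v));
    return { v.x / l, v.y / l, v.z / l };
}

// Ambient + diffuse + specular for one light, using the position and
// normal that the deferred pass fetches from the G-buffer textures.
float shade(Vec3 fragPos, Vec3 normal, Vec3 lightPos, Vec3 eyePos) {
    Vec3 N = normalize(normal);
    Vec3 L = normalize(sub(lightPos, fragPos));   // to the light
    Vec3 V = normalize(sub(eyePos, fragPos));     // to the viewer
    Vec3 H = normalize({ L.x + V.x, L.y + V.y, L.z + V.z }); // half vector
    float ambient  = 0.1f;
    float diffuse  = std::max(0.0f, dot(N, L));
    float specular = std::pow(std::max(0.0f, dot(N, H)), 32.0f);
    return ambient + diffuse + specular;
}
```

With the light and the eye directly above an upward-facing surface, both diffuse and specular reach their maximum of 1, so the intensity is 0.1 + 1 + 1 = 2.1.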

Composite

For the composite step, I modified my earlier functions that handled deferred lighting and cubemap rendering so they could take an output FBO as a parameter. This allowed me to render everything offscreen instead of directly to the screen.

Once I had offscreen rendering working, I created a new shader to combine the lighting result with reflections from the cubemap. The shader takes in the position, normal, and lighting textures, along with the cubemap (both in 2D and cube formats). Using this data, it computes a reflection vector and samples the cubemap for environment reflections.
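The reflection vector uses the same formula as GLSL's built-in reflect(), R = I − 2·dot(N, I)·N, where I points from the eye toward the fragment and N is the unit surface normal from the G-buffer. A small CPU sketch of it (helper names are mine):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror the incident direction I about the unit normal N; the result
// is the direction used to sample the environment cubemap.
Vec3 reflectDir(Vec3 I, Vec3 N) {
    float d = 2.0f * dot(N, I);
    return { I.x - d * N.x, I.y - d * N.y, I.z - d * N.z };
}
```

For instance, a ray going straight into a surface facing the viewer, I = (0, 0, −1) against N = (0, 0, 1), bounces straight back as (0, 0, 1).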

This adds a layer of realism to the scene by blending direct lighting with dynamic reflections from the environment, making surfaces respond more naturally to both light and the viewer’s perspective.

If the position at a fragment was invalid (length zero), I simply rendered the 2D cubemap texture as a fallback. This handled rendering of the cubemap background.

With this, both the model and the background are rendered correctly.

Composite and Motion Blur

Once the lighting and reflections were working, I moved on to implementing the composite and motion blur step. This part was more tedious than expected. I first modified my composite function to support offscreen rendering by accepting a framebuffer parameter. I then added a motion blur pass on top of it, which uses a simple blur shader that blends nearby pixels in a diagonal direction based on the camera movement.

The motion blur strength is controlled by tracking the mouse movement velocity. I used a Gaussian-weighted blur that samples neighboring texels and fades with distance. At first, I was getting strange visual glitches; after experimenting with different shaders, I managed to write a working one. Once the shader was correct, I needed to write the log luminance into the alpha channel for the later tone mapping pass, but this brought its own problems: when my blur amount was zero, the result was supposed to be identical to the plain composite, yet it looked different. After hours of debugging, I realized part of the problem was how I was writing the log luminance to the alpha channel. The root cause turned out to be OpenGL's default blending state: since I was storing a non-standard value in the alpha channel, it interfered with how the colors were blended on screen. Adjusting the blend state fixed the issue.
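Two pieces of that pass are easy to sketch on the CPU (my own illustrative helpers, not the assignment code): the Gaussian tap weights, normalized so they sum to 1 and the blur doesn't brighten or darken the image, and the log-luminance value written to the alpha channel, with a small epsilon guarding against log(0) on black pixels.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Normalized Gaussian weights for `taps` samples taken along the
// blur direction, centered on the current pixel.
std::vector<float> gaussianWeights(int taps, float sigma) {
    std::vector<float> w(taps);
    int half = taps / 2;
    float sum = 0.0f;
    for (int i = 0; i < taps; ++i) {
        float x = float(i - half);
        w[i] = std::exp(-(x * x) / (2.0f * sigma * sigma));
        sum += w[i];
    }
    for (float& v : w) v /= sum;  // normalize so the weights sum to 1
    return w;
}

// Log luminance stored in the alpha channel for the tone mapper.
// Uses Rec. 709 luma weights; eps avoids log(0) on black pixels.
float logLuminance(float r, float g, float b) {
    const float eps = 1e-4f;
    float lum = 0.2126f * r + 0.7152f * g + 0.0722f * b;
    return std::log(eps + lum);
}
```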

Tonemapping and Gamma Correction

To finalize the image, I added tone mapping and gamma correction. The tone mapping shader uses the logarithmic luminance stored in the alpha channel of the HDR image to estimate the average scene brightness. This value is then passed into a Reinhard tone mapping function to compress the dynamic range of the scene. I also applied gamma correction using glEnable(GL_FRAMEBUFFER_SRGB) afterward to ensure proper brightness on standard displays.

Before applying the tone mapping, I first ran the composite and motion blur pass using the functions I had defined earlier for rendering the motion-blurred composite scene. After that, I called glGenerateMipmap on the motion blur texture. This was a crucial step, because the tone mapping shader samples the lowest mipmap level to approximate the scene's average luminance.
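The math of that step can be sketched like this (hypothetical helper names). The average luminance is the exponential of the mean log-luminance; in the renderer that mean comes from the lowest mip level of the alpha channel, while here it is computed directly. The scaled luminance is then compressed into [0, 1) by the Reinhard curve.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Geometric-mean scene luminance from stored log-luminance values.
float averageLuminance(const std::vector<float>& logLum) {
    float sum = 0.0f;
    for (float l : logLum) sum += l;
    return std::exp(sum / logLum.size());
}

// Reinhard tone mapping: scale by key / average luminance, then
// compress with x / (1 + x), which always stays below 1.
float reinhard(float lum, float avgLum, float key) {
    float scaled = (key / avgLum) * lum;
    return scaled / (1.0f + scaled);
}
```

This also makes the behavior I describe below easy to see: the output for a given pixel depends on avgLum, so the same pixel gets brighter or darker as the rest of the frame changes.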

However, a key issue I ran into was that the average luminance is calculated dynamically from the current frame's contents. As a result, the overall brightness of the scene shifts depending on what is visible on screen. I asked the teaching assistant about it, and he confirmed that this is normal behavior.

Here is the tonemapped result.

Implementing Controls and Rendering Info

To handle user controls, I used GLFW’s keyboard callbacks. Each key press updates internal state variables that control things like exposure, gamma, motion blur, VSync, and render mode. For example, pressing the up or down arrows doubles or halves the exposure value, while left/right adjusts the tone mapping key. Other keys toggle features like VSync (V), gamma correction (G), motion blur (B), and animation playback (F). Pressing space toggles between fullscreen and windowed mode.

I also briefly display the most recently pressed key on screen. Mouse input controls the camera view by holding the middle mouse button and dragging.

For displaying real-time info like FPS and current settings, I wrote a simple renderInfo() function that renders text overlays. This includes FPS, exposure, gamma value, tone mapping key, motion blur status, and VSync state. I calculate FPS using a frame counter and a 1-second timer based on glfwGetTime().
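The FPS counter described above amounts to counting frames and reporting once per second. A minimal sketch (my own struct, fed with timestamps such as those returned by glfwGetTime()):

```cpp
#include <cassert>

// Frame counter that reports FPS once per second of elapsed time.
struct FpsCounter {
    double lastReport = 0.0;  // time of the last FPS update, seconds
    int frames = 0;           // frames since the last update
    int fps = 0;              // most recently computed FPS value

    // Call once per frame with the current time in seconds
    // (e.g. glfwGetTime()). Returns true when fps was refreshed.
    bool tick(double now) {
        ++frames;
        if (now - lastReport >= 1.0) {
            fps = frames;
            frames = 0;
            lastReport = now;
            return true;
        }
        return false;
    }
};
```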

Conclusion

This Deferred and Multi-Pass Rendering assignment took me quite a bit of time to work through due to the complexity of the tasks involved. I started by implementing a system to render objects with cubemaps and control the exposure of HDR images. Then, I worked on deferred shading by first rendering world positions and normals into off-screen textures and applying lighting in a separate pass. Along the way, I added motion blur based on camera rotation speed and implemented tone mapping and gamma correction for a more realistic final image. Although it took a lot of time to get everything working smoothly, I had a chance to practice advanced rendering techniques.


Interesting Bug Visuals
