Category: Uncategorized

  • Particle Simulation

    Introduction

    In this blog, I will share my experience completing the third assignment for the course Ceng469 Computer Graphics 2.

    For this assignment, I implemented a particle system with attractors, using OpenGL and compute shaders.

    Initializing Window and Buffers

    I first set up the OpenGL context and window using GLFW. I configured OpenGL to use version 4.3, which provides modern features such as compute shaders.

    Creating Buffers and TBOs

    After setting up the OpenGL context, I created GPU buffers for the particle system—one for positions and one for velocities. Each particle is represented as a vec4: the xyz components store the 3D position, and the w component stores the particle's age, which drives its color. I also created a vec4 buffer for attractors, whose w component stores mass instead.

    In initComputeBuffers(), I allocated and filled the position buffer with random 3D vectors using random_vector(), and assigned a random initial value to the w component, which the simulation treats as age. Velocities were initialized with small random values in xyz, while the w component was set to zero.
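    As a rough CPU-side illustration of this layout (the value ranges and helper names here are my assumptions—the actual code uses random_vector()):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// CPU-side sketch of the particle buffer layout described above.
struct Vec4 { float x, y, z, w; };

inline float randSigned() {               // uniform in [-1, 1]
    return rand() / (float)RAND_MAX * 2.0f - 1.0f;
}

// Fills positions (xyz = random point, w = age) and velocities
// (xyz = small random velocity, w = unused, set to zero).
void initParticles(std::vector<Vec4>& positions,
                   std::vector<Vec4>& velocities, int n) {
    positions.resize(n);
    velocities.resize(n);
    for (int i = 0; i < n; ++i) {
        positions[i]  = { randSigned(), randSigned(), randSigned(), 1.0f };
        velocities[i] = { randSigned() * 0.01f, randSigned() * 0.01f,
                          randSigned() * 0.01f, 0.0f };
    }
}
```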

    To make these buffers accessible to the compute shader, I created texture buffer objects (TBOs) in initTBOs(). Each TBO binds its corresponding buffer with glTexBuffer(), and the compute shader then reads and writes it as an image buffer via imageLoad()/imageStore().

    Rendering Points

    Before running any physics simulation, I started by rendering the particles as simple GL points to confirm everything was set up correctly. The vertex shader takes each particle’s position (vec4), transforms it using the mvp matrix, and assigns a gl_PointSize for rendering.

    // vertex shader
    #version 430
    
    layout(location = 0) in vec4 in_position;
    
    uniform float pointSize;
    uniform mat4 mvp;
    
    void main()
    {
        gl_Position = mvp * vec4(in_position.xyz, 1.0);
        gl_PointSize = pointSize;
    }
    
    // fragment shader
    #version 430
    
    layout(location = 0) out vec4 color;
    
    void main() {
        color = vec4(0.0, 0.8, 1.0, 1.0);
    }
    

    At first, I couldn’t see anything on the screen. It looked like rendering was broken. After checking everything, I realized I was missing a memory barrier after the compute shader had written to the position buffer.

    Adding this barrier between the compute dispatch and the draw call solved the problem:

    glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

    Simulating Particles

    For the particle simulation, I modified a sample compute shader from the course slides to make it fit my project’s setup. The idea is simple: each particle moves under the influence of multiple attractors, and its velocity gets updated every frame. If a particle “dies” (based on age or going out of bounds), it respawns at the origin with a new random velocity.

    #version 430
    
    layout(std140, binding = 0) uniform AttractorBlock {
        vec4 attractor[12];
    };
    
    uniform int numAttractors;
    uniform vec3 origin;
    
    layout(local_size_x = 100) in;
    
    layout(rgba32f, binding = 0) uniform imageBuffer velocityBuffer;
    layout(rgba32f, binding = 1) uniform imageBuffer positionBuffer;
    
    uniform float dt;
    float rand(vec2 co) {
        return fract(sin(dot(co, vec2(14.9898, 78.23))) * 4378.5453);
    }
    
    
    void main() {
        uint index = gl_GlobalInvocationID.x;
    
        vec4 vel = imageLoad(velocityBuffer, int(index));
        vec4 pos = imageLoad(positionBuffer, int(index));
    
        pos.xyz += vel.xyz * dt;
        pos.w -= 0.005 * dt;
    
        for (int i = 0; i < numAttractors; i++) {
            vec3 dist = attractor[i].xyz - pos.xyz;
            vel.xyz += dt * dt * attractor[i].w * normalize(dist) / (dot(dist, dist) + 10.0);
        }
    
        if (pos.w <= 0.0 || pos.x < -4.0 || pos.x > 4.0 || pos.y < -2.0 || pos.y > 2.0 || pos.z < -2.0 || pos.z > 2.0) {
            vec2 seed = vec2(float(gl_GlobalInvocationID.x), pos.w);
            float r1 = rand(seed);
            float r2 = rand(seed + 1.0);
            float r3 = rand(seed + 2.0);
    
            vel.xyz = vec3(r1, r2, r3) * 2.0 - 1.0;
            vel.xyz *= 0.01;
    
            pos.xyz = origin;
            pos.w = 1.0;
        }
    
        imageStore(positionBuffer, int(index), pos);
        imageStore(velocityBuffer, int(index), vel); 
    }
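    To sanity-check the force and integration math on the host, the per-particle update (without the respawn branch) can be mirrored in plain C++ like this—the helper names are mine, but the constants match the shader above:

```cpp
#include <cassert>
#include <cmath>

// CPU re-implementation of one compute-shader invocation's force step.
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize3(Vec3 v) {
    float len = std::sqrt(dot3(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// One step for a single particle under a single attractor (w = mass).
void stepParticle(Vec4& pos, Vec4& vel, Vec4 attractor, float dt) {
    pos.x += vel.x * dt; pos.y += vel.y * dt; pos.z += vel.z * dt;
    pos.w -= 0.005f * dt;                       // age decays over time

    Vec3 dist = sub({ attractor.x, attractor.y, attractor.z },
                    { pos.x, pos.y, pos.z });
    Vec3 dir = normalize3(dist);
    // Softened inverse-square pull toward the attractor, as in the shader.
    float scale = dt * dt * attractor.w / (dot3(dist, dist) + 10.0f);
    vel.x += scale * dir.x; vel.y += scale * dir.y; vel.z += scale * dir.z;
}
```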

    Age-Based Rendering

    To visualize the particle “lifespan,” I used the w component of the position (which I treat as age) to interpolate color. As particles get older, they fade from green to red.

    #version 430
    
    layout(location = 0) out vec4 color;
    
    in float intensity;
    void main() {
        color = mix(vec4(0.0, 0.8, 0.2, 1.0), vec4(0.8, 0.0, 0.2, 1.0), intensity);
    }

    with no attractors

    Controls

    🖱 Mouse Controls

    • Left Click
      • In Attractor Mode: Adds an attractor at the point you click in the 3D space. Attractors are limited to a predefined capacity of 12.
      • In Origin Mode: Sets the simulation’s origin point to the clicked location.
    • Right Click
      • Removes the most recently added attractor from the scene, if any exist.

    🔄 Mouse Scroll

    • Scroll Up / Down
      • Increases or decreases the mass of the attractor that will be added next.

    ⌨️ Keyboard Controls

    • Q – Quits the simulation.
    • V – Toggles V-Sync on or off.
    • W / S – Speeds up or slows down the simulation time scale.
    • T – Toggles UI visibility.
    • R – Starts or pauses the animation.
    • F – Toggles fullscreen mode.
    • G – Switches between Origin Mode and Attractor Mode.

    Final Result

  • Deferred and Multi-Pass Rendering


    Introduction

    In this blog, I will share my experience completing the second assignment for the course Ceng469 Computer Graphics 2.

    For this assignment, I implemented a deferred and multi-pass renderer with HDR environment lighting, motion blur, and tone mapping. I rendered the given cubemap texture and an armadillo model with multiple modes and multiple passes. These modes include rendering the world positions and normals of the model, rendering the model with deferred lighting, showing only the cubemap background, the combined result, the combined result with motion blur, and a final result with tone mapping and motion blur applied to both the model and the cubemap.

    Rendering A Cubemap Texture and Looking Around

    I started with a key feature of this assignment, which was rendering a cubemap to create a 360° environment around the viewer. The idea was to simulate looking around in a 3D space.

    The cubemap rendering was one of the more straightforward parts of the assignment, because OpenGL has native cubemap support.

    I started by setting up a vertex buffer for the cubemap and reading in the cubemap texture. Then, I created a shader program for rendering the cubemap. I also set up an exposure value that linearly scales the input color.

    Next, I bound the cubemap texture and used it as input for the shader. I cleared the color and depth buffers before rendering the cubemap. Finally, I drew the cubemap using a simple cube geometry.
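    The cubemap pass can be sketched with a pair of shaders like the following—the uniform and attribute names are my assumptions, not the exact code:

```glsl
// vertex shader (sketch)
#version 430
layout(location = 0) in vec3 in_position;
uniform mat4 viewProj;        // view matrix with translation removed
out vec3 texDir;
void main() {
    texDir = in_position;     // cube vertex doubles as the sample direction
    vec4 p = viewProj * vec4(in_position, 1.0);
    gl_Position = p.xyww;     // depth forced to 1.0 so the sky stays behind everything
}

// fragment shader (sketch)
#version 430
in vec3 texDir;
uniform samplerCube skybox;
uniform float exposure;       // linear scale on the HDR input color
out vec4 color;
void main() {
    color = vec4(texture(skybox, texDir).rgb * exposure, 1.0);
}
```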

    After I rendered the cubemap, I implemented view rotation with the middle mouse button: while holding it, you can smoothly rotate the view by moving the mouse. The scene adjusts accordingly, making it feel like you're navigating a virtual world. The background remains static while the view shifts, which gives the illusion of being inside a 360-degree environment.


    Visualizing the World Positions and Normals

    This part was also relatively straightforward to implement, though I ran into an issue while rendering the world positions. I started by creating a framebuffer object (FBO) for the geometry pass and creating the geometry shaders. Then, I set up a function to handle the geometry pass and created separate shaders for visualizing world positions and normals.
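    A geometry-pass fragment shader along these lines writes the two G-buffer attachments (the in/out names here are assumptions; the matching vertex shader would apply the model matrix and pass worldPos/worldNormal through):

```glsl
// geometry-pass fragment shader (sketch)
#version 430
in vec3 worldPos;                          // world-space position from the vertex shader
in vec3 worldNormal;                       // world-space normal
layout(location = 0) out vec4 gPosition;   // color attachment 0 of the G-buffer FBO
layout(location = 1) out vec4 gNormal;     // color attachment 1
void main() {
    gPosition = vec4(worldPos, 1.0);
    gNormal   = vec4(normalize(worldNormal), 0.0);
}
```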

    At first, the world position output didn’t look smooth—it came out looking like a scattered point cloud. After some trial and error, I realized the problem was likely related to how the data was being stored or accessed from the G-buffer. To work around it, I temporarily solved the issue by rendering the world positions directly in the shader instead of sampling from the geometry buffer. (I was supposed to fix the issue properly later—probably something to do with the G-buffer setup.)

    I also animated the model by continuously rotating it around the Y-axis to visualize better.

    Normal visualization, on the other hand, worked as expected without much trouble.

    Deferred Lighting

    For deferred lighting, I reused the geometry pass I had already written to store world positions and normals into textures. After that, I created a lighting shader that uses these textures to calculate lighting in screen space. I added two light sources with different positions and used simple ambient, diffuse, and specular lighting calculations to light the scene based on the normal and position data from the G-buffer.

    To display the result, I set up a full-screen quad and a separate set of shaders that render the final lighting output onto it. However, when I first ran it, nothing appeared on the screen. After some debugging, I found that the issue was likely caused by invalid or empty position data. I fixed it by adding a condition to the lighting fragment shader:

    if (length(FragPos) == 0.0) {
        discard;
    }
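    A screen-space lighting shader in this spirit might look like the sketch below—Blinn-Phong is my choice for the specular term, and all names and constants are assumptions:

```glsl
// deferred lighting fragment shader (sketch)
#version 430
uniform sampler2D gPosition;    // world positions from the geometry pass
uniform sampler2D gNormal;      // world normals from the geometry pass
uniform vec3 lightPos[2];       // the two light sources
uniform vec3 camPos;
in vec2 uv;                     // full-screen quad texture coordinates
out vec4 color;
void main() {
    vec3 FragPos = texture(gPosition, uv).xyz;
    if (length(FragPos) == 0.0) discard;      // no geometry written here
    vec3 N = normalize(texture(gNormal, uv).xyz);
    vec3 V = normalize(camPos - FragPos);
    vec3 result = vec3(0.1);                  // ambient
    for (int i = 0; i < 2; i++) {
        vec3 L = normalize(lightPos[i] - FragPos);
        vec3 H = normalize(L + V);
        result += max(dot(N, L), 0.0) * vec3(0.8);             // diffuse
        result += pow(max(dot(N, H), 0.0), 32.0) * vec3(0.3);  // specular
    }
    color = vec4(result, 1.0);
}
```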

    Composite

    For the composite step, I modified my earlier functions that handled deferred lighting and cubemap rendering so they could take an output FBO as a parameter. This allowed me to render everything offscreen instead of directly to the screen.

    Once I had offscreen rendering working, I created a new shader to combine the lighting result with reflections from the cubemap. The shader takes in the position, normal, and lighting textures, along with the cubemap (both in 2D and cube formats). Using this data, it computes a reflection vector and samples the cubemap for environment reflections.

    This adds a layer of realism to the scene by blending direct lighting with dynamic reflections from the environment, making surfaces respond more naturally to both light and the viewer’s perspective.
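    The reflection lookup itself boils down to a few lines inside the composite fragment shader (variable names here are assumptions):

```glsl
// inside the composite fragment shader (sketch)
vec3 I = normalize(FragPos - camPos);     // view direction, camera to surface
vec3 R = reflect(I, normalize(Normal));   // reflect about the surface normal
vec3 envColor = texture(environmentCube, R).rgb;  // sample the cubemap
```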

    If the position at a fragment was invalid (length zero), I simply rendered the 2D cubemap texture as a fallback. This handled rendering of the cubemap background.

    With this, the model and the background are rendered correctly.

    Composite and Motion Blur

    Once the lighting and reflections were working, I moved on to implementing the composite and motion blur step. This part was more tedious than expected. I first modified my composite function to support offscreen rendering by accepting a framebuffer parameter. I then added a motion blur pass on top of it, which uses a simple blur shader that blends nearby pixels in a diagonal direction based on the camera movement.

    The motion blur strength is controlled by tracking mouse movement velocity. I used a Gaussian-weighted blur that samples neighboring texels and fades with distance. At first, I was getting strange visual glitches; after experimenting with different shaders, I managed to write a working one. After correcting the shader, I needed to write the log luminance to the alpha channel for later tone mapping, but this brought its own problems: when the blur amount was zero, the result was supposed to be identical to the composite, yet it looked different. After hours of debugging, I realized part of the problem was how I was writing the log luminance to the alpha channel. The issue was OpenGL's default blending state—since I was using a non-standard value in the alpha channel, it was interfering with how colors were blended on screen. Adjusting the OpenGL blend settings fixed the issue.
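    The two pieces above can be checked on the host with plain C++—the kernel parameters, the luminance epsilon, and the function names are my assumptions:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Gaussian weights that fade with distance, for a symmetric blur kernel.
// Returns weights for taps 0..radius; taps 1..radius are mirrored.
std::vector<float> gaussianWeights(int radius, float sigma) {
    std::vector<float> w(radius + 1);
    float sum = 0.0f;
    for (int i = 0; i <= radius; ++i) {
        w[i] = std::exp(-(i * i) / (2.0f * sigma * sigma));
        sum += (i == 0) ? w[i] : 2.0f * w[i];   // count mirrored taps twice
    }
    for (float& v : w) v /= sum;                // normalize: kernel sums to 1
    return w;
}

// Log luminance written to alpha; the epsilon avoids log(0) on black pixels.
float logLuminance(float r, float g, float b) {
    float lum = 0.2126f * r + 0.7152f * g + 0.0722f * b;  // Rec. 709 weights
    return std::log(lum + 1e-4f);
}
```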

    Tonemapping and Gamma Correction

    To finalize the image, I added tonemapping and gamma correction. The tonemapping shader uses the logarithmic luminance stored in the alpha channel of the HDR image to estimate average scene brightness. This value is then passed into a Reinhard tonemapping function to compress the dynamic range of the scene. I also applied gamma correction using glEnable(GL_FRAMEBUFFER_SRGB) afterward to ensure proper brightness on standard displays.
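    The core of this step is small enough to sketch on the CPU—a log-average luminance estimate feeding Reinhard's global operator. The function names are mine, and the key value stands in for the tone mapping key mentioned in the controls below:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Geometric mean of luminance, as approximated by the lowest mip level
// of the log-luminance channel in the actual renderer.
float logAverageLuminance(const std::vector<float>& lums) {
    float sum = 0.0f;
    for (float l : lums) sum += std::log(l + 1e-4f);
    return std::exp(sum / lums.size());
}

// Reinhard global operator: scale by key / average, then compress to [0, 1).
float reinhard(float lum, float key, float logAvg) {
    float scaled = key / logAvg * lum;
    return scaled / (1.0f + scaled);
}
```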

    Before applying the tonemapping, I first ran the composite and motion blur pass using the functions I had defined earlier for rendering the motion-blurred composite scene. After that, I called glGenerateMipmap on the motion blur texture. This was a crucial step because the tonemapping shader samples the lowest mipmap level to approximate the scene's average luminance.

    However, a key issue I ran into was that the average luminance was calculated dynamically from the current frame's contents. As a result, the overall brightness of the scene would shift depending on what was visible on screen. I asked the teaching assistant about it, and he said this is normal behavior.

    Here is the tonemapped result.

    Implementing Controls and Rendering Info

    To handle user controls, I used GLFW’s keyboard callbacks. Each key press updates internal state variables that control things like exposure, gamma, motion blur, VSync, and render mode. For example, pressing the up or down arrows doubles or halves the exposure value, while left/right adjusts the tone mapping key. Other keys toggle features like VSync (V), gamma correction (G), motion blur (B), and animation playback (F). Pressing space toggles between fullscreen and windowed mode.

    I also briefly display the most recently pressed key on screen. Mouse input controls the camera view by holding the middle mouse button and dragging.

    For displaying real-time info like FPS and current settings, I wrote a simple renderInfo() function that renders text overlays. This includes FPS, exposure, gamma value, tone mapping key, motion blur status, and VSync state. I calculate FPS using a frame counter and a 1-second timer based on glfwGetTime().
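    The FPS logic described above can be isolated into a small struct—here time is injected as a parameter instead of calling glfwGetTime(), purely so the counting logic can be checked off-line; the type and member names are mine:

```cpp
#include <cassert>

// Frame counter that reports an FPS value once per second.
struct FpsCounter {
    int frames = 0;        // frames since the last report
    double lastReport = 0.0;
    int fps = 0;           // last computed FPS value

    // Call once per frame with the current time in seconds.
    // Returns true when a fresh FPS value was just computed.
    bool tick(double now) {
        ++frames;
        if (now - lastReport >= 1.0) {
            fps = frames;
            frames = 0;
            lastReport = now;
            return true;
        }
        return false;
    }
};
```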

    Conclusion

    This Deferred and Multi-Pass Rendering assignment took me quite a bit of time to work through due to the complexity of the tasks involved. I started by implementing a system to render objects with cubemaps and control the exposure of HDR images. Then, I worked on deferred shading by first rendering world positions and normals into off-screen textures and applying lighting in a separate pass. Along the way, I added motion blur based on camera rotation speed and implemented tone mapping and gamma correction for a more realistic final image. Although it took a lot of time to get everything working smoothly, I had a chance to practice advanced rendering techniques.


    Interesting Bug Visuals
