{"id":36,"date":"2025-11-01T17:21:12","date_gmt":"2025-11-01T17:21:12","guid":{"rendered":"https:\/\/blog.metu.edu.tr\/e244824\/?p=36"},"modified":"2025-11-01T20:12:27","modified_gmt":"2025-11-01T20:12:27","slug":"simple-ray-tracer-in-c","status":"publish","type":"post","link":"https:\/\/blog.metu.edu.tr\/e244824\/2025\/11\/01\/simple-ray-tracer-in-c\/","title":{"rendered":"Simple Ray Tracer in C++"},"content":{"rendered":"<p>For the first homework of my CENG 795: Advanced Ray Tracing course, I implemented a simple ray tracer in C++ that reads scene data from a JSON file and renders images using basic ray tracing methods. My ray tracer performs ambient, diffuse, and specular shading calculations for intersected objects, as well as shadow calculations when the light source is obstructed by another object. Additionally, it performs reflection calculations for mirror, conductor, and dielectric objects, and unfortunately, it incorrectly calculates refraction for dielectric objects. In this post I will explain my implementation process, the errors I came across, and the resulting images with their rendering time. I will also provide some explanations that might be redundant. This is totally for the sake of future visitors who might not be familiar with ray tracing and absolutely not to increase the length of this post.<\/p>\n<p>\u00a0<\/p>\n<h2>Parsing and Storing the Scene Data<\/h2>\n<p data-start=\"746\" data-end=\"1081\">According to the homework specifications, I was required to parse both JSON and PLY files for scene data. I used Niels Lohmann\u2019s JSON library to parse the scene information and stored it using a set of custom structs. 
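These structs are built on top of small vector types such as Vec3f; the sketch below shows roughly what such a helper looks like (the operator set shown here is illustrative rather than my exact implementation):

```cpp
#include <cmath>

// Minimal 3-component float vector, the building block of the scene structs.
struct Vec3f
{
    float x, y, z;

    Vec3f operator+(const Vec3f& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3f operator-(const Vec3f& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3f operator*(float s) const { return {x * s, y * s, z * s}; }
};

// Dot product: projection of one vector onto another.
inline float dot(const Vec3f& a, const Vec3f& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Cross product: vector perpendicular to both inputs (used for triangle normals).
inline Vec3f cross(const Vec3f& a, const Vec3f& b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Scale a vector to unit length.
inline Vec3f normalize(const Vec3f& v)
{
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}
```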
Unfortunately, I wasn\u2019t able to implement PLY parsing in time, so my ray tracer currently only works with JSON files.<\/p>\n<p data-start=\"1083\" data-end=\"1341\">All scene-related data is contained within a single <code data-start=\"1135\" data-end=\"1142\">Scene<\/code> struct, which includes several other structs corresponding to each field in the JSON file. While the homework document already explains each field, I\u2019ll briefly summarize them here for completeness.<\/p>\n<p>Below is the Scene struct and a brief description of each field:<\/p>\n<div>\u00a0<\/div>\n\n\n<pre class=\"wp-block-code\"><code>struct Scene\n\u00a0 \u00a0 {\n\u00a0 \u00a0 \u00a0 \u00a0 \/\/Data\n\u00a0 \u00a0 \u00a0 \u00a0 Vec3i background_color;\n\u00a0 \u00a0 \u00a0 \u00a0 float shadow_ray_epsilon;\n\u00a0 \u00a0 \u00a0 \u00a0 float intersection_test_epsilon;\n\u00a0 \u00a0 \u00a0 \u00a0 int max_recursion_depth;\n\u00a0 \u00a0 \u00a0 \u00a0 std::vector&lt;Camera&gt; cameras;\n\u00a0 \u00a0 \u00a0 \u00a0 Vec3f ambient_light;\n\u00a0 \u00a0 \u00a0 \u00a0 std::vector&lt;PointLight&gt; point_lights;\n\u00a0 \u00a0 \u00a0 \u00a0 std::vector&lt;Material&gt; materials;\n\u00a0 \u00a0 \u00a0 \u00a0 std::vector&lt;Vec3f&gt; vertex_data;\n\u00a0 \u00a0 \u00a0 \u00a0 std::vector&lt;Mesh&gt; meshes;\n\u00a0 \u00a0 \u00a0 \u00a0 std::vector&lt;Triangle&gt; triangles;\n\u00a0 \u00a0 \u00a0 \u00a0 std::vector&lt;Sphere&gt; spheres;\n\u00a0 \u00a0 \u00a0 \u00a0 std::vector&lt;Plane&gt; planes;\n\u00a0 \u00a0 \u00a0 \u00a0 \/\/Functions\n\u00a0 \u00a0 \u00a0 \u00a0 void loadFromJSON(std::string filepath);\n\u00a0 \u00a0 };\n<\/code><\/pre>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>background_color:<\/strong> A vector of integers containing the RGB value of the scene background.<\/li>\n\n\n\n<li><strong>shadow_ray_epsilon:<\/strong>\u00a0 A small float value used to offset the intersection points in order to prevent 
self-intersection.<\/li>\n\n\n\n<li><strong>intersection_test_epsilon:<\/strong> A small float value that was unspecified in the homework documentation. I initially assumed it was supposed to be used to offset the intersection points while calculating reflections. This caused quite a headache for me while implementing the reflection calculations.<\/li>\n\n\n\n<li><strong>max_recursion_depth:\u00a0<\/strong>The maximum number of bounces that will be calculated for a reflected ray.<\/li>\n\n\n\n<li><strong>cameras:\u00a0<\/strong>An array of Camera objects, each containing information about the camera vectors and the image plane.<\/li>\n\n\n\n<li><strong>ambient_light:\u00a0<\/strong>A vector of floats defining how much light an object receives even in shadow.<\/li>\n\n\n\n<li><strong>point_lights:\u00a0<\/strong>An array of PointLight objects, each containing a vector of floats for position and a vector of floats for intensity.<\/li>\n\n\n\n<li><strong>materials:\u00a0<\/strong>An array of Material objects, each containing the type of material and the necessary values for shading.<\/li>\n\n\n\n<li><strong>vertex_data:\u00a0<\/strong>An array of float vectors, each containing the position of the vertex at the given index.<\/li>\n\n\n\n<li><strong>meshes:\u00a0<\/strong>An array of Mesh objects, each containing the shading type, the index of the material, and an array of Face objects.<\/li>\n\n\n\n<li><strong>triangles:\u00a0<\/strong>An array of Triangle objects, each containing the indices of its vertices, the index of its material, and a float vector representing its normal.<\/li>\n\n\n\n<li><strong>spheres:\u00a0<\/strong>An array of Sphere objects, each containing the index of its material, the index of its center vertex, and its radius.<\/li>\n\n\n\n<li><strong>planes:\u00a0<\/strong>An array of Plane objects, each containing the index of its material, the index of its point vertex, and a float vector representing its 
normal.<\/li>\n\n\n\n<li><strong>loadFromJSON:<\/strong> The function to parse and store the scene data.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Ray Calculation and Object Intersection<\/h2>\n\n\n\n<p>To give a bare-bones explanation, ray tracing works by tracing a ray for each pixel of an image, checking for any objects intersected along that ray, and calculating the corresponding color based on the material and lighting. My ray tracer is just as bare-bones as this explanation: it simply calculates a ray direction for each pixel, checks if any object is intersected, and computes the shading of the object at that point. Despite these simple steps, I made a lot of mistakes in my initial implementation. I&#8217;ll first show the results of some minor ones. The images below are all rendered from <em>simple.json<\/em>:<\/p>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/int_s_u_and_s_v.png\" alt=\"\" class=\"wp-image-37\" style=\"width:840px;height:auto\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/int_s_u_and_s_v.png 800w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/int_s_u_and_s_v-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/int_s_u_and_s_v-150x150.png 150w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/int_s_u_and_s_v-768x768.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\"><em>Result of casting the horizontal and vertical steps to int instead of float<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_s_u_and_s_v.png\" alt=\"\" class=\"wp-image-38\" 
srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_s_u_and_s_v.png 800w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_s_u_and_s_v-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_s_u_and_s_v-150x150.png 150w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_s_u_and_s_v-768x768.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\"><em>Result of setting an incorrect minimum t after fixing the horizontal and vertical steps<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/t_min_set_to_0.png\" alt=\"\" class=\"wp-image-39\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/t_min_set_to_0.png 800w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/t_min_set_to_0-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/t_min_set_to_0-150x150.png 150w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/t_min_set_to_0-768x768.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\"><em>Result of overflowing color values at certain pixels after  fixing the minimum t value<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_overflow.png\" alt=\"\" class=\"wp-image-40\" style=\"width:840px;height:auto\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_overflow.png 800w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_overflow-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_overflow-150x150.png 150w, 
https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_overflow-768x768.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\"><em>Result of minor mistakes in intersection functions after fixing the overflow issue<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/Fixed_intersections.png\" alt=\"\" class=\"wp-image-41\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/Fixed_intersections.png 800w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/Fixed_intersections-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/Fixed_intersections-150x150.png 150w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/Fixed_intersections-768x768.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\"><em>Result of incorrect half vector calculations for specular shading after correcting the intersection functions<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_half_vector_calculation.png\" alt=\"\" class=\"wp-image-42\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_half_vector_calculation.png 800w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_half_vector_calculation-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_half_vector_calculation-150x150.png 150w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/fixed_half_vector_calculation-768x768.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\"><em>Result of fixing the half vector 
calculation<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<p>The image above is the correct rendering of simple.json. When I finally obtained it, I thought that the base of my ray tracer was now complete. However, when I ran the ray tracer on other sample scenes, I got completely black images. This scene was the only one that was being rendered correctly. Debugging this problem took quite a bit of time.<\/p>\n\n\n\n<p>My first thought was to check if there was something wrong with my intersection functions, as no intersection occurred at any pixel. I went over my implementations several times, rewrote the triangle intersection function with matrix structs to make it more readable, and tested them with my own sample values. There was no issue with them.<\/p>\n\n\n\n<p>My second thought was to check if there was something wrong with my calculations for the camera vectors and the image plane. I went over my gaze calculations for cameras with &#8220;<em>lookAt<\/em>&#8221; types, checked the corner calculations for the first pixel, yet nothing seemed to be wrong. Since the calculations for the image plane were correct, the ray directions should have been correct by extension, or so I thought.<\/p>\n\n\n\n<p>When I checked the ray directions individually while rendering, they turned out to be way off in some scenes. At this point I finally realised the issue: after calculating the pixel position I never subtracted the camera position from it, which gave me a position vector of the pixel instead of the ray direction vector. Since the camera of the <em>simple.json<\/em> scene was located at (0, 0, 0), the image was rendered correctly, while all the other scenes, which had cameras at different positions, appeared completely black. 
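In code, the fix boiled down to a single subtraction. Here is a simplified sketch of the per-pixel ray generation, using illustrative names rather than my exact ones:

```cpp
struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Direction of the primary ray through pixel (i, j).
// e: camera position, q: top-left corner of the image plane,
// u/v: camera basis vectors, s_u/s_v: per-pixel step sizes
// (these must stay float -- casting them to int produced the
// first broken image above).
Vec3 rayDirection(Vec3 e, Vec3 q, Vec3 u, Vec3 v,
                  float s_u, float s_v, int i, int j)
{
    Vec3 pixel = add(q, add(mul(u, (i + 0.5f) * s_u),
                            mul(v, -(j + 0.5f) * s_v)));
    // The bug: returning `pixel` directly. That is a *position*, which
    // only equals the direction when the camera sits at the origin.
    return sub(pixel, e); // direction = pixel position - camera position
}
```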
Fixing this mistake gave me the results below:<\/p>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"720\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_spheres.png\" alt=\"\" class=\"wp-image-43\" style=\"width:840px;height:auto\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_spheres.png 720w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_spheres-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_spheres-150x150.png 150w\" sizes=\"auto, (max-width: 720px) 100vw, 720px\" \/><figcaption class=\"wp-element-caption\"><em>Rendering of spheres.json scene<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_two_spheres.png\" alt=\"\" class=\"wp-image-44\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_two_spheres.png 800w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_two_spheres-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_two_spheres-150x150.png 150w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_two_spheres-768x768.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\"><em>Rendering of two_spheres.json scene<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_cornellbox.png\" alt=\"\" class=\"wp-image-45\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_cornellbox.png 800w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_cornellbox-300x300.png 300w, 
https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_cornellbox-150x150.png 150w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_cornellbox-768x768.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\"><em>Rendering of cornellbox.json scene<\/em><\/figcaption><\/figure>\n\n\n\n<p><\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_bunny.png\" alt=\"\" class=\"wp-image-46\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_bunny.png 512w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_bunny-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/10\/working_bunny-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><figcaption class=\"wp-element-caption\"><em>Rendering of bunny.json scene<\/em><\/figcaption><\/figure>\n\n\n\n<p class=\"has-medium-font-size\"><\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Reflection and Refraction Calculations<\/h2>\n\n\n\n<p>Next came the task of calculating the reflections and refractions for mirrors, conductors, and dielectrics. I initially just focused on implementing reflections for mirrors to make sure that I got the reflections working correctly first. 
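The reflection direction itself comes from the standard formula r = d - 2(d.n)n; a minimal sketch, assuming d is the incoming unit direction and n is the unit surface normal, looks like this:

```cpp
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Reflect incoming direction d about unit normal n: r = d - 2(d.n)n.
// Getting a sign or factor wrong here is exactly the kind of mistake
// that can produce the distorted mirror renders shown in this section.
Vec3 reflect(Vec3 d, Vec3 n)
{
    return sub(d, mul(n, 2.0f * dot(d, n)));
}
```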
The image below is rendered from <em>spheres_mirror.json<\/em> after I finished my initial mirror implementation:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"720\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/horrific_mirror_spheres.png\" alt=\"\" class=\"wp-image-47\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/horrific_mirror_spheres.png 720w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/horrific_mirror_spheres-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/horrific_mirror_spheres-150x150.png 150w\" sizes=\"auto, (max-width: 720px) 100vw, 720px\" \/><figcaption class=\"wp-element-caption\"><em>First rendering of spheres_mirror.json<\/em><\/figcaption><\/figure>\n\n\n\n<p>The result was a little off-putting. The reflections of the center sphere even looked a little like a creepy, smiling clown. I believe the main reason for this was that I calculated the reflection rays incorrectly, but I wasn&#8217;t too sure, as my initial implementation was quite messy. So I decided to redo the mirror implementation from scratch in a clearer way. 
My second mirror implementation gave the result below:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"720\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/somewhat_fixed_mirrors.png\" alt=\"\" class=\"wp-image-48\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/somewhat_fixed_mirrors.png 720w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/somewhat_fixed_mirrors-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/somewhat_fixed_mirrors-150x150.png 150w\" sizes=\"auto, (max-width: 720px) 100vw, 720px\" \/><figcaption class=\"wp-element-caption\"><em>Second rendering of spheres_mirror.json<\/em><\/figcaption><\/figure>\n\n\n\n<p>The result was a lot better than the first one, but there were still some noisy pixels scattered in certain areas. Tracking down the cause of this issue took a very long time; I almost decided to just move on and leave it as it was. I went over everything several times, but I couldn&#8217;t find anything wrong with my implementation. I redid the mirror implementation, but it still gave me the same result. <br>Eventually, I decided to tinker with the values inside the scene file. I tried changing almost every value without success until I finally decided to change the epsilon value. It turned out that the <em>intersection_test_epsilon<\/em> value, which I was using to offset the intersection point, was far too small and still caused self-intersection at certain points. 
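The offset in question is just a nudge of the secondary ray's origin along the surface normal before tracing it; a minimal sketch of the idea (the epsilon value comes straight from the scene file):

```cpp
struct Vec3 { float x, y, z; };

Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Nudge the secondary-ray origin off the surface along the normal so the
// ray cannot immediately re-hit the object it just left. If epsilon is
// too small relative to the floating-point error at the hit point, some
// pixels still self-intersect, which shows up as scattered noise.
Vec3 offsetOrigin(Vec3 hitPoint, Vec3 normal, float epsilon)
{
    return add(hitPoint, mul(normal, epsilon));
}
```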
When I used <em>shadow_ray_epsilon<\/em> instead, I got the result below:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"720\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/spheres_mirror_with_propper_offset.png\" alt=\"\" class=\"wp-image-49\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/spheres_mirror_with_propper_offset.png 720w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/spheres_mirror_with_propper_offset-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/spheres_mirror_with_propper_offset-150x150.png 150w\" sizes=\"auto, (max-width: 720px) 100vw, 720px\" \/><figcaption class=\"wp-element-caption\"><em>Third rendering of spheres_mirror.json<\/em><\/figcaption><\/figure>\n\n\n\n<p>At last, my mirror implementation seemed to be working correctly. After that, implementing Fresnel reflections for dielectrics and conductors didn&#8217;t take much time at all.<br>Now it was time for the final hurdle: calculating refractions. At this point I had very little time left, so I decided to just implement a single refraction ray that did not bounce after leaving the object, yet even that proved challenging. Once again, the issue was noisy values at certain points, but this time I could not come up with a solution. No matter how I changed the offset values or redid the refraction implementation, the result remained the same. 
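For reference, the refraction direction I was aiming for follows Snell's law. Below is a minimal sketch, assuming d and n are unit vectors and eta is the ratio n1/n2 of refractive indices; it also has to detect total internal reflection, which I suspect is one of the cases my implementation mishandles:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// Refract unit direction d through unit normal n with eta = n1 / n2.
// Returns false on total internal reflection, in which case the ray
// should be reflected instead of refracted.
bool refract(Vec3 d, Vec3 n, float eta, Vec3& t)
{
    float cosI = -dot(d, n);                        // cosine of incidence angle
    float sin2T = eta * eta * (1.0f - cosI * cosI); // Snell: sin^2 of transmission angle
    if (sin2T > 1.0f)
        return false;                               // total internal reflection
    float cosT = std::sqrt(1.0f - sin2T);
    t = add(mul(d, eta), mul(n, eta * cosI - cosT));
    return true;
}
```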
Below is the rendering of <em>cornellbox_recursive.json<\/em>, which contains a single conductor sphere on the left and a single dielectric sphere on the right:<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"800\" height=\"800\" src=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/cornellbox_recursive_faulty_refraction.png\" alt=\"\" class=\"wp-image-50\" srcset=\"https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/cornellbox_recursive_faulty_refraction.png 800w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/cornellbox_recursive_faulty_refraction-300x300.png 300w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/cornellbox_recursive_faulty_refraction-150x150.png 150w, https:\/\/blog.metu.edu.tr\/e244824\/files\/2025\/11\/cornellbox_recursive_faulty_refraction-768x768.png 768w\" sizes=\"auto, (max-width: 800px) 100vw, 800px\" \/><figcaption class=\"wp-element-caption\"><em>Rendering of cornellbox_recursive.json<\/em><\/figcaption><\/figure>\n\n\n\n<p>Finally, here are the rendering times for various scenes:<\/p>\n\n\n\n<figure class=\"wp-block-table aligncenter\"><table class=\"has-fixed-layout\"><tbody><tr><td class=\"has-text-align-center\" data-align=\"center\"><strong>Scene<\/strong><\/td><td class=\"has-text-align-center\" data-align=\"center\"><strong>Render Time<\/strong><\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">simple.json<\/td><td class=\"has-text-align-center\" data-align=\"center\">1 second<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">spheres.json<\/td><td class=\"has-text-align-center\" data-align=\"center\">1.35 seconds<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">spheres_with_plane.json<\/td><td class=\"has-text-align-center\" data-align=\"center\">1 second<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">spheres_mirror.json<\/td><td class=\"has-text-align-center\" 
data-align=\"center\">1.36 seconds<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">two_spheres.json<\/td><td class=\"has-text-align-center\" data-align=\"center\">0.2 seconds<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">cornellbox.json<\/td><td class=\"has-text-align-center\" data-align=\"center\">3.9 seconds<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">cornellbox_recursive.json<\/td><td class=\"has-text-align-center\" data-align=\"center\">4.7 seconds<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">bunny.json<\/td><td class=\"has-text-align-center\" data-align=\"center\">312 seconds<\/td><\/tr><tr><td class=\"has-text-align-center\" data-align=\"center\">chinese_dragon<\/td><td class=\"has-text-align-center\" data-align=\"center\">&gt;12 hours<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Despite a few issues along the way, I learned a lot through this assignment. The mistakes I made were simple but taught me valuable lessons about the fundamentals of ray tracing. I\u2019m excited to keep improving my renderer and explore more advanced techniques in the future.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>For the first homework of my CENG 795: Advanced Ray Tracing course, I implemented a simple ray tracer in C++ that reads scene data from a JSON file and renders images using basic ray tracing methods. 
My ray tracer performs ambient, diffuse, and specular shading calculations for intersected objects, as well as shadow calculations when [&hellip;]<\/p>\n","protected":false},"author":9060,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":"","_links_to":"","_links_to_target":""},"categories":[1],"tags":[],"class_list":["post-36","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/blog.metu.edu.tr\/e244824\/wp-json\/wp\/v2\/posts\/36","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.metu.edu.tr\/e244824\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.metu.edu.tr\/e244824\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e244824\/wp-json\/wp\/v2\/users\/9060"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e244824\/wp-json\/wp\/v2\/comments?post=36"}],"version-history":[{"count":0,"href":"https:\/\/blog.metu.edu.tr\/e244824\/wp-json\/wp\/v2\/posts\/36\/revisions"}],"wp:attachment":[{"href":"https:\/\/blog.metu.edu.tr\/e244824\/wp-json\/wp\/v2\/media?parent=36"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e244824\/wp-json\/wp\/v2\/categories?post=36"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.metu.edu.tr\/e244824\/wp-json\/wp\/v2\/tags?post=36"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}