Project 14: Final Project

This is a fun scene with candy. There are two main components to this scene: the glass teapot and the candies. I spent over 30 hours creating this teapot practically from scratch. I started with the Bezier patch description, which I used to create a quad mesh. I then went to work duplicating surfaces, shrinking them to create the inner surfaces, doing some boolean work to cut out holes, and then fusing together all the seams, vertex by vertex. I also added a shelf for the lid to sit on so the model would be physically correct.

The candies started as a single prototype that I sculpted from a cube. I then created a huge array of candy copies and used a dynamics simulation to drop them into one teapot and onto the ground in front of the other. The backdrop is just a ground with a single wall (a.k.a. an infinite plane). There are two area lights and an environment image, which creates the beige color of the ground and some interesting reflections. The environment image is behind the camera; it shows a city street at night with office buildings, street lamps, and a tree. Can you spot the reflection of the tree in the left teapot's handle?

The challenge with rendering this scene is all the fully specular paths: rays that connect the camera to a light while hitting only specular surfaces such as glass or mirrors. The only way to render these with the methods we learned in class is brute-force path tracing, which takes an extraordinary amount of time; neither bidirectional path tracing nor photon mapping can do anything for this type of light contribution. To improve the rendering I would need a more advanced method such as Metropolis Light Transport. With this scene I also learned the hard way that it is a really bad idea to use saturated colors. After rendering I realized the image was blown out and impossible to fix with gamma correction, because the rest of the scene would become too dark. I had to restart the render with lowered color values (from 1.0 to 0.7), which made the image look much more realistic.

This image has roughly 30,000 samples per pixel. It took almost 3 days to render, using almost 1,000 CPUs spread over several different clusters. The main features of my raytracer that let me use so many computing nodes are uniform random sampling and incremental rendering. Each pixel renders a certain number of samples per loop (usually 32), with the sample locations chosen randomly within the 2D area of the pixel. After each loop over all the pixels, the intermediate image is saved (for example, image_32.exr, image_64.exr, image_96.exr, etc.). Each computing cluster saves its own version of the image, so one cluster might go through 20 loops while a smaller cluster finishes only 5 loops in the same time. To get the composite image, I add the different images together, weighting each by its total number of samples. This reduces the noise considerably; effectively, it is as though a single image had been rendered with the combined number of samples. This is highly advantageous because there is no limit on the number of compute nodes I can use and no communication is needed between them. Note that this is different from breaking up the image and having each cluster render only part of it: each cluster rendered the entire image, but because the sampling locations are random, the different images can simply be combined. For anyone wanting to create a similar image, know that my raytracer was extremely inefficient, so you would likely need less time and far fewer computing resources.
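The merging step is simple enough to sketch. Below is a minimal illustration in Python, assuming each saved EXR stores the running average of its own samples (so a weighted mean by sample count recovers the average over all samples combined); the combine_renders function and the load_exr helper in the usage comment are hypothetical stand-ins, not part of my raytracer.

```python
import numpy as np

def combine_renders(images, sample_counts):
    """Merge independent renders of the same scene into one image.

    Each entry in `images` is an (H, W, 3) float array holding the
    average radiance of that cluster's samples, and `sample_counts`
    holds the number of samples behind each image. Because every
    cluster used independent random sample positions, the weighted
    mean is equivalent to one render with the combined sample count.
    """
    total = float(sum(sample_counts))
    combined = np.zeros_like(images[0], dtype=np.float64)
    for image, count in zip(images, sample_counts):
        combined += image * (count / total)
    return combined

# Example: one cluster finished 20 loops of 32 samples (640 spp),
# a smaller one only 5 loops (160 spp):
#   combined = combine_renders(
#       [load_exr("image_640.exr"), load_exr("image_160.exr")],
#       [640, 160])
```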

For most of the scenes I created during the class, I used a modeling program called Modo (by The Foundry), which let me prototype the locations of objects and lights along with the camera settings. I believe knowing the basics of a 3D modeling program (Modo, Maya, Blender, 3ds Max, ...) is absolutely necessary for setting up scenes. Once I was happy with a scene setup, I exported all of the models as OBJ files and referenced them in the scene file for my raytracer. I used only path tracing to create this image. The reason it looks so nice is that I spent a great deal of time and energy creating the exact scene geometry, placing the lights, choosing a good environment map, and positioning the camera. I would say 90% of the effort went into creating the scene (the art) and only 10% into the renderer. It is far better to view scenes from an artistic perspective than through the technical aspects of the renderer. Ultimately it is about art, not technical capabilities.

Candy


Here I added some teapots to the mother/child scene. I used path tracing with an area light and a night sky environment map.

Mother/Child

This is the same image without caustics.

Mother/Child - No Caustics

This image is made of three layers. With my path tracer set up for use in bidirectional path tracing, I saved the contribution of each surface hit in its own path vertex. A ray's contribution to a pixel is then computed after tracing it until it either exits the scene or runs out of allowed bounces. For each vertex I recorded the type of light contribution: caustics, environment light, or explicit light sampling from a diffuse surface. The caustics converge far too slowly: even though the images have 100,000 samples per pixel, it is obvious that plain path tracing isn't good enough to resolve them.
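Here is a sketch of the layer bookkeeping; the names (PathVertex, Kind, splat) are my own hypothetical stand-ins, not the raytracer's actual data structures. Each path vertex stores its radiance contribution tagged by type, and after the path terminates, each tagged contribution is added to its own image buffer. Summing the three buffers reproduces the full path-traced image.

```python
import numpy as np
from dataclasses import dataclass
from enum import Enum, auto

class Kind(Enum):
    CAUSTIC = auto()         # light reaching a diffuse surface via specular bounces
    ENVIRONMENT = auto()     # path escaped the scene into the environment map
    EXPLICIT_LIGHT = auto()  # direct light sampled from a diffuse surface

@dataclass
class PathVertex:
    kind: Kind
    radiance: np.ndarray     # RGB contribution, already weighted by path throughput

def splat(layers, pixel, path):
    """Add each vertex's contribution to the layer matching its type.

    `layers` maps each Kind to an (H, W, 3) accumulation buffer, and
    `pixel` is a (row, column) index into those buffers.
    """
    for vertex in path:
        layers[vertex.kind][pixel] += vertex.radiance

# The full image is just the sum of the three layers:
#   full = sum(layers[k] for k in Kind)
```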

Mother/Child - Caustics
Mother/Child - Environment Light
Mother/Child - Explicit Light


Models


WT-teapot with separate lid

Complete candy scene (180MB)


