CS184 Spring 2019 Project 3-2 Pathtracer

Abizer Lokhandwala - cs184-aao - ocf.io/abizer/cs184-sp19/p3-2/

Part 1

The following images were rendered at 64 samples per pixel and 4 samples per light using the staff binary for part 3-1. Each row corresponds to an image rendered with max_ray_depth equal to 0, 1, 2, 3, 4, 5, and 100, respectively.

At max_ray_depth=0, the only light in the scene comes from the area light, or zero_bounce_radiance.

With 1 bounce, we can begin to see the detail on the walls, but not the ceiling, as the ceiling is illuminated only by light bouncing off the walls. The spheres appear black because, as delta BSDFs, they only show light that has already bounced off something else, which is not possible with a single ray bounce.

At 2 ray bounces, we can begin to resolve more details. The ceiling is now visible, as is the mirror sphere, whose image is composed of rays that first bounce off the mirror, then a wall. The image of the glass sphere on the surface of the mirror sphere is black, because once the ray bounces off the mirror toward the glass sphere, its remaining bounces are consumed inside the glass. The small amount of lighting we can see on the surface of the glass sphere itself comes from the fraction of light reflected at the glass surface; refraction through the glass consumes more bounces than we have, so the glass sphere is mostly black. The ceiling is also black in the mirror sphere's reflection, because collecting light from the ceiling via the mirror requires one more bounce than we have.
At three bounces, we get most of the effect of the glass sphere. This is because it takes at least two bounces to traverse the sphere itself (one to enter, one to exit), so a third bounce is needed for the ray to bring light back to the camera. We still cannot see the image of the glass sphere on the surface of the mirror sphere, because bouncing off the mirror requires another bounce that we don't yet have. We can, however, begin to see the ceiling reflected in the mirror sphere.
At 4 ray bounces, we notice three primary effects. First, the walls are brighter. Second, we begin to resolve a proper image of the glass sphere on the surface of the mirror sphere, because we now have enough bounces to go from mirror -> into glass -> out of glass -> off a wall and back to the camera. Finally, we can see the bright spot inside the glass sphere's shadow on the floor, composed of light that leaves the area light, enters the glass, exits the glass toward the floor, and bounces off the floor toward the camera.

The most prominent effect at 5 ray bounces is the light concentrating on the right wall, reflected off the mirror and refracted through the glass. Traced from the camera, the ray bounces off the wall, exits the glass, enters the glass, bounces off the mirror, and finally reaches the area light.

At 100 ray bounces, there isn't much difference compared to 5 ray bounces, besides the scene (primarily the ceiling) being brighter and somewhat noisier. The light effect on the right wall is also slightly more pronounced, since longer paths contribute additional light.
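The diminishing returns from extra bounces can be illustrated with a toy model: if each bounce reflects a fixed fraction (the albedo) of the incoming light, the total gathered radiance is a geometric series that converges quickly. This is a hypothetical 1D sketch, not the project's actual Monte Carlo estimator:

```python
def gathered_radiance(max_ray_depth, emission=1.0, albedo=0.5):
    """Toy 1D model of bounce accumulation: the camera sees the light's
    emission directly (zero-bounce radiance), plus a geometrically
    shrinking contribution for each additional permitted bounce."""
    total = 0.0
    contribution = emission
    for _ in range(max_ray_depth + 1):
        total += contribution
        contribution *= albedo  # each bounce keeps only a fraction of the light
    return total
```

With an albedo of 0.5, going from 5 bounces to 100 adds less than 2% more light, which matches the observation that the 100-bounce render is only slightly brighter than the 5-bounce one.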

Part 2

Alpha Values

The following images were rendered at 128 samples per pixel, 1 sample per light, and 5 ray bounces.



The alpha parameter controls 'glossiness'. At the lowest end of its acceptable range, the surface is nearly mirror-like, which appears here as harsh highlights and the dark spots seen in the render.



At a 10x higher alpha value, we can see that the dragon is much brighter than the previous rendering, though the dark spots remain. The dragon could reasonably be described as ‘glossy’ or perhaps shiny.



Halfway to the maximum value, the dragon is still somewhat shiny, but decidedly brighter. The skin appears matte enough to reflect a decent quantity of light to the viewer.



With alpha as high as it goes, the dragon is quite bright, and the skin is rather matte.
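The behavior of alpha can be seen in the Beckmann normal distribution function used by the microfacet model: small alpha clusters microfacet normals tightly around the macro normal (mirror-like), large alpha spreads them out (matte). A sketch assuming the standard Beckmann form, where `cos_theta_h` is the cosine of the angle between the half vector and the surface normal:

```python
import math

def beckmann_ndf(cos_theta_h, alpha):
    """Beckmann NDF: density of microfacet normals at angle theta_h from
    the macro normal, D(h) = exp(-tan^2(theta_h)/alpha^2)
                             / (pi * alpha^2 * cos^4(theta_h))."""
    cos2 = cos_theta_h * cos_theta_h
    tan2 = (1.0 - cos2) / cos2
    return math.exp(-tan2 / (alpha * alpha)) / (math.pi * alpha ** 2 * cos2 ** 2)
```

For small alpha the function is sharply peaked at the normal and nearly zero elsewhere; for large alpha it is much flatter, which is why the high-alpha dragon reflects light over a broad range of directions and looks matte.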

Hemisphere vs Importance Sampling


This bunny is rendered with standard hemisphere sampling. The image is generally about as ‘bright’ as the other bunny, but it is much noisier and grainier. This is expected for hemisphere vs. importance sampling.


The importance-sampled bunny looks much smoother and less noisy at the same sampling levels as hemisphere sampling. From the rate images (img/bunny_importance_rate.png), it looks like the algorithm spends more time on the edges than the hemisphere sampled one, which results in more details being resolved.
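The reduced noise comes from drawing half-vectors according to the Beckmann distribution itself rather than cosine-weighting the whole hemisphere. A sketch of the standard inversion-method sampler, where `u1` and `u2` are uniform random numbers in [0, 1) (function name and signature are my own, not the project's API):

```python
import math

def sample_beckmann_half_angle(alpha, u1, u2):
    """Importance-sample the Beckmann NDF via inversion: theta_h is
    drawn with density proportional to D(h) * cos(theta_h) * sin(theta_h),
    so the directions the BRDF weights most heavily receive most of
    the samples."""
    theta_h = math.atan(math.sqrt(-alpha * alpha * math.log(1.0 - u1)))
    phi_h = 2.0 * math.pi * u2
    return theta_h, phi_h
```

Note that the sampled half-angle shrinks with alpha: a shiny (low-alpha) surface concentrates samples near the mirror direction, which is exactly where almost all of its reflected light goes.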


I chose beryllium as my metal - it results in a dragon with this interesting blue hue.
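The hue comes from the conductor Fresnel term: metals have a complex index of refraction (eta, k) that differs per color channel, so they tint their reflections. A sketch of the common air-to-conductor Fresnel approximation, evaluated once per channel (the eta/k values in the test are illustrative placeholders, not beryllium's actual tabulated constants):

```python
def fresnel_conductor(cos_theta, eta, k):
    """Approximate Fresnel reflectance for an air-conductor interface,
    for one color channel, averaging the s- and p-polarized terms."""
    c2 = cos_theta * cos_theta
    ek = eta * eta + k * k
    rs = (ek - 2.0 * eta * cos_theta + c2) / (ek + 2.0 * eta * cos_theta + c2)
    rp = (ek * c2 - 2.0 * eta * cos_theta + 1.0) / (ek * c2 + 2.0 * eta * cos_theta + 1.0)
    return 0.5 * (rs + rp)
```

Plugging in per-channel (eta, k) values from a table such as refractiveindex.info yields three different reflectances, and their ratio is what gives a metal like beryllium its characteristic tint.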

Part 3

As I understand it, environment lighting is the light emanating from the environment the object finds itself in: the source is effectively infinitely far away, and light arrives from all directions at varying intensities. This lighting imparts shadows and radiance that can produce interesting lighting patterns on the object.

I have opted to use field.exr. An image can be found linked here.

The probability debug file is as shown:
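The debug image visualizes the probability density the importance sampler draws from. A sketch of how such a table can be built from an equirectangular luminance map (nested-list layout is my own; the sin(theta) factor accounts for the shrinking solid angle of rows near the poles):

```python
import math

def build_env_cdfs(luminance):
    """Build the marginal CDF over rows (theta) and the conditional CDFs
    over columns (phi) from a luminance map, weighting each row by
    sin(theta) so the density is proportional to radiance per solid angle."""
    h = len(luminance)
    w = len(luminance[0])
    pdf = [[luminance[y][x] * math.sin(math.pi * (y + 0.5) / h)
            for x in range(w)] for y in range(h)]
    total = sum(sum(row) for row in pdf)
    row_sums = [sum(row) for row in pdf]
    # Marginal CDF over theta (rows).
    marginal_cdf, acc = [], 0.0
    for s in row_sums:
        acc += s / total
        marginal_cdf.append(acc)
    # Conditional CDF over phi within each row.
    cond_cdfs = []
    for y in range(h):
        cdf, acc = [], 0.0
        for x in range(w):
            acc += pdf[y][x] / row_sums[y] if row_sums[y] > 0 else 0.0
            cdf.append(acc)
        cond_cdfs.append(cdf)
    return marginal_cdf, cond_cdfs
```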


Uniform vs Importance Sampling

Uniform Sampling, normal image

Importance Sampling, normal image

Between the two images, one can see that the importance-sampled render has less noise, as one might expect. The uniformly sampled bunny has noise pretty much across the board, whereas the importance-sampled render has much less noise on the face of the bunny and the sides facing the viewer directly, though the back of the bunny still appears fairly noisy.
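At sample time, the environment-map importance sampler inverts precomputed CDFs: a marginal CDF over rows (theta) and a conditional CDF over columns (phi), stored here as sorted lists (a hypothetical layout for illustration). A binary search picks bright texels proportionally more often:

```python
import bisect

def sample_env_direction(marginal_cdf, cond_cdfs, u1, u2):
    """Inversion sampling of an environment map: u1 selects a row
    (theta) through the marginal CDF, then u2 selects a column (phi)
    through that row's conditional CDF."""
    y = bisect.bisect_left(marginal_cdf, u1)
    x = bisect.bisect_left(cond_cdfs[y], u2)
    return y, x
```

Because bright texels occupy a larger share of the CDF, they are hit by more samples, which is why the brightly lit parts of the bunny clean up fastest in the importance-sampled render.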

Uniform Sampling, microfacet image

Importance Sampling, microfacet image

A similar effect can be seen here, though the microfacet image is far noisier than the previous one. In particular, the brighter areas, such as the side of the bunny's head, the tips of the ears, and the bunny's flanks, are much less noisy in the importance-sampled image than in the uniformly sampled one.

Part 4

What's the difference between a thin lens and a pinhole camera? A pinhole camera has no concept of focal distance: because the aperture is effectively a single point, every ray entering the pinhole maps directly to its destination pixel behind the camera origin, and everything is in focus. With a lens, rays leaving a single scene point can pass through many points on the lens; only points at the plane of focus have all their rays converge to a single point on the sensor, while points at other distances spread over multiple pixels and appear blurred.
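The thin-lens ray generation can be sketched as follows (camera space, looking down -z; `sensor_dir` is the direction the equivalent pinhole ray would take, and the function name and signature are my own, not the project's actual API):

```python
import math

def thin_lens_ray(sensor_dir, lens_radius, focal_distance, u1, u2):
    """Instead of shooting every ray through a pinhole at the origin,
    sample a point on the lens disk and aim at the point where the
    pinhole ray would pierce the plane of focus. Scene points on that
    plane stay sharp; everything else blurs across the lens samples."""
    # Uniformly sample a point on the lens disk.
    r = lens_radius * math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    p_lens = (r * math.cos(phi), r * math.sin(phi), 0.0)
    # Where the unperturbed pinhole ray hits the plane z = -focal_distance.
    t = -focal_distance / sensor_dir[2]
    p_focus = tuple(t * c for c in sensor_dir)
    # New ray: from the lens sample, through the focus point.
    d = tuple(p_focus[i] - p_lens[i] for i in range(3))
    n = math.sqrt(sum(c * c for c in d))
    return p_lens, tuple(c / n for c in d)
```

Setting `lens_radius` to 0 recovers the pinhole camera exactly, which is why an aperture of 0.0 below renders everything in focus.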

Focus Stack

Apologies for the cell-rendered box - it was difficult to get the dragon into position and find the correct focal distance and aperture size combination to achieve these results without the viewer segfaulting. I opted to render the cell alone (with the important details) to increase the in-cell image quality while maintaining a reasonable render time on my 1.9 GHz laptop.

Focal Distance: 0.0

The plane of focus sits immediately ahead of the lens, before the snout of the dragon.

Focal Distance: 1.1

The dragon's mouth comes into focus at a focal distance of 1.1.

Focal Distance: 1.3

At 1.3, the dragon’s entire head comes into focus.

Focal Distance: 2.0

At 2.0, the head goes out of focus as the mid-body and parts of the tail come into focus.

Focal Distance: 2.5

At 2.5, the focal point is moving past the tail, and the dragon goes out of focus again.

Aperture Stack

Aperture Size: 0.0 (pinhole camera)

At an aperture size of 0.0, the view is that of a pinhole camera: everything is in focus.

Aperture Size: 0.044194

At a small aperture, a similar effect to the above can be seen, where it appears the dragon's head is coming into focus.

Aperture Size: 0.176777

However, this isn't really true. As the aperture size increases further, only the closest details of the dragon remain resolvable, the rest of the face being lost to noise. This can perhaps be understood as the sampled lens rays diverging more sharply from the central pinhole ray as the lens radius grows.

Aperture Size: 0.50000

At an aperture size of 0.5, the dragon instead looks like a lion's mane, except for the small amount of resolvable detail near the tongue.