pbrt : a Tutorial
Luís Paulo Santos
Departamento de Informática
Universidade do Minho
Janeiro, 2008
Abstract
This document is a hands-on tutorial for using pbrt (PHYSICALLY BASED RAY TRACER) and
developing plug-ins for it. Some basic examples are used to illustrate how to achieve this.
This tutorial is neither an in-depth manual of pbrt nor a discussion of the concepts behind
plug-ins or pluggable components. The reader is referred to the pbrt book (Pharr &
Humphreys, 2004) for further details.
Installation
Before trying these tutorials you should install pbrt on your machine.
Access http://www.pbrt.org and follow the instructions.
Note that pbrt saves images in the EXR format. In order to be able to successfully compile pbrt you
need to install OpenEXR beforehand. Please access http://www.openexr.com and install the
latest stable version of IlmBase and OpenEXR. In order to view the images you might need the
OpenEXR viewers – you might also use some freeware viewer that supports EXR, such as
IrfanView for Windows (http://www.irfanview.com/).
Introduction
pbrt is a ray tracing system with very solid theoretical foundations. It is thus used to illustrate
ray tracing and global illumination algorithms throughout this course.
The key characteristic of the pbrt architecture is that it is organized as a ray tracing core and a set
of components. The core coordinates interactions among components, while the components
are responsible for all remaining tasks, such as shooting rays, space traversal, object
intersections, integration, image generation, etc. These components are separate object files,
loaded by the core at run time – they are referred to as plug-ins. pbrt can thus be easily
extended by writing new plug-ins for the various functional blocks. The core imposes a strict
API and enforces a strict interface protocol, thus helping to ensure a clean component design.
The pbrt executable consists of the core code that drives the system’s main flow of control, but
contains no code related to specific elements, such as objects, light sources or cameras. These
are provided as separate components, loaded by the core at run time according to the scene
being rendered. There are 13 different types of components, or plug-ins, briefly described in
the following table. By writing new plug-ins the renderer can be extended in a straightforward
manner.
Type                Description
Shape               Raw geometry: spheres, triangle meshes, etc.
Primitive           Geometry bound to a material (and possibly an emission profile); also aggregates such as acceleration structures
Camera              Generates the primary (camera) rays for each image sample
Sampler             Generates the sample positions on the image plane and the extra sample values used by integrators
Filter              Reconstruction filter used to compute pixel values from the image samples
Film                Stores the image being rendered and writes it to file
ToneMap             Maps the high dynamic range image to a displayable range
Material            Describes how light is scattered at surfaces (BSDFs)
Texture             Spatially varying surface properties used by materials
VolumeRegion        Participating media (smoke, fog, etc.)
Light               Light sources and their emission
SurfaceIntegrator   Computes the radiance reflected from surfaces
VolumeIntegrator    Computes the radiance along rays traversing participating media
Table 1 - pbrt plug-in types
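Plug-ins are selected by name in the scene description file. The excerpt below is a hypothetical minimal scene, not one of the files shipped with pbrt; each quoted name (perspective, lowdiscrepancy, image, whitted, point, matte, sphere) selects a plug-in of the corresponding type, which the core loads at run time:

    Camera "perspective" "float fov" [45]
    Sampler "lowdiscrepancy" "integer pixelsamples" [4]
    Film "image" "string filename" ["out.exr"]
    SurfaceIntegrator "whitted"
    WorldBegin
      LightSource "point" "point from" [0 5 0]
      Material "matte" "color Kd" [0.8 0.8 0.8]
      Shape "sphere" "float radius" [1]
    WorldEnd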
Tutorial 1 – Rendering and visualization
To follow this small, first tutorial you will need access to pbrt, exrdisplay and exrtotiff. Make
sure these are installed on your system. You also need the cornell box scene and a few High
Dynamic Range (HDR) images in .exr format.
Rendering
The scene description is given in a .pbrt file. This includes geometry, materials’ properties, light
positions and radiant power, etc.
Open a shell, change to the scenes directory and render the cornell box by writing:
> pbrt cornell.pbrt
You can now visualize the image by writing:
> exrdisplay cornell.exr
pbrt is a physically based renderer, thus it generates High Dynamic Range images. In practice,
the pixel values stored in the output file are floating point values that can range from 0 up to
any maximum. To visualize the image using exrdisplay you may need to adjust the
exposure, which you can do using the top slider on the view window.
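Alternatively, the exrtotiff tool mentioned above can convert the HDR image into a low dynamic range TIFF that any regular viewer can open. The exact options differ between versions, but the basic invocation should look like this (the output file name is just an example):

> exrtotiff cornell.exr cornell.tiff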
The quality of the rendered image depends on many parameters and on the particular choice
of algorithms used. The noise you see in the image depends, among other things, on the number of
samples, i.e. rays, taken per pixel. Increase this number to 16 by changing the "pixelsamples"
parameter of the sampler in the scene description file.
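As a sketch, assuming the scene declares a lowdiscrepancy sampler (other samplers, such as the stratified one, expose different parameters like xsamples and ysamples), the corresponding line in the .pbrt file would become:

    Sampler "lowdiscrepancy" "integer pixelsamples" [16]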
Example: Flat integrator

The Flat integrator returns the BSDF for the pair of directions given by the primary ray and the
normal of the intersected surface at the intersection point p:

L = f(-ray.direction, n)

This does not depend on the light sources. For perfect diffuse surfaces the returned radiance
will be different from zero for all possible ray directions; for perfect specular surfaces the
returned radiance will only be different from zero if the ray direction is coincident with the
normal at the intersection point.

Flat Surface Integrator

Spectrum flat::Li(const Scene *scene, const RayDifferential &ray,
                  const Sample *sample, float *alpha) const {
    Intersection isect;
    Spectrum L(0.);
    bool hitSomething;
    // Search for ray-primitive intersection
    hitSomething = scene->Intersect(ray, &isect);
    if (!hitSomething) {
        // Handle ray with no intersection
        if (alpha) *alpha = 0.;
        return L;
    }
    else {
        // Initialize _alpha_ for ray hit
        if (alpha) *alpha = 1.;
        // Compute emitted and reflected light at ray intersection point
        // Evaluate BSDF at hit point
        BSDF *bsdf = isect.GetBSDF(ray);
        // Initialize common variables for Whitted integrator
        const Point &p = bsdf->dgShading.p;    // intersection point
        const Normal &n = bsdf->dgShading.nn;  // normal at p
        Vector wo = -ray.d;                    // outgoing direction
        Vector wi;                             // incident direction
        // Compute emitted light if ray hit an area light source
        L += isect.Le(wo);
        wi = Vector(n);
        // get bsdf for this pair of directions
        Spectrum f = bsdf->f(wo, wi);
        L += f;
        return L;
    }
}
Example 3 - Flat Surface Integrator code

Figure 2 - cornell box with flat integrator (T=0.8 secs)
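The Li() method above is only part of the plug-in. A complete surface integrator plug-in also declares a class derived from SurfaceIntegrator and exports a factory function that the core calls when it loads the plug-in. The sketch below follows the convention used by pbrt's surface integrator plug-ins; the exact header list, the empty parameter handling and the file name are assumptions, not the author's original file:

// flat.cpp - minimal sketch of the plug-in skeleton around the Li() method shown above
#include "pbrt.h"
#include "transport.h"   // SurfaceIntegrator base class
#include "scene.h"
#include "paramset.h"

class flat : public SurfaceIntegrator {
public:
    // Returns the BSDF value for the primary ray direction and the surface normal
    Spectrum Li(const Scene *scene, const RayDifferential &ray,
                const Sample *sample, float *alpha) const;
};

// Factory function called by the pbrt core at run time; this integrator takes no parameters
extern "C" DLLEXPORT SurfaceIntegrator *CreateSurfaceIntegrator(const ParamSet &params) {
    return new flat();
}

Once compiled into a shared object (e.g. flat.so or flat.dll) and placed where pbrt looks for plug-ins, the integrator would be selected in the scene file with a line such as: SurfaceIntegrator "flat".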
Example: DirectNoShadows integrator
The DirectNoShadows integrator evaluates the reflected radiance at the intersection point due
to direct lighting (directly from the light sources) without taking into consideration occlusions,
i.e., it does not shoot shadow rays. The returned radiance is given by
L = sum over all light sources i of f(-ray.direction, wi) * Li * cos(wi, n)

where wi is the direction from the intersection point p towards light source i and Li is the radiance arriving at p from that light.
DirectNoShadows Surface Integrator
Spectrum directNoShadows::Li(const Scene *scene, const RayDifferential &ray,
                             const Sample *sample, float *alpha) const {
    Intersection isect;
    Spectrum L(0.);
    bool hitSomething;
    // Search for ray-primitive intersection
    hitSomething = scene->Intersect(ray, &isect);
    if (!hitSomething) {
        // Handle ray with no intersection
        if (alpha) *alpha = 0.;
        // account for infinitely far away light sources
        for (u_int i = 0; i < scene->lights.size(); ++i)
            L += scene->lights[i]->Le(ray);
        if (alpha && !L.Black()) *alpha = 1.;
        return L;
    }
    else {
        // Initialize _alpha_ for ray hit
        if (alpha) *alpha = 1.;
        // Compute emitted and reflected light at ray intersection point
        // Evaluate BSDF at hit point
        BSDF *bsdf = isect.GetBSDF(ray);
        // Get the intersection point and the surface normal there
        const Point &p = bsdf->dgShading.p;    // intersection point
        const Normal &n = bsdf->dgShading.nn;  // normal at p
        Vector wo = -ray.d;                    // outgoing direction
        // Compute emitted light if ray hit an area light source
        L += isect.Le(wo);
        Vector wi;                             // incident direction
        // Add contribution of each light source
        for (u_int i = 0; i < scene->lights.size(); ++i) {
            VisibilityTester visibility;
            // Light::Sample_L() will return the incident radiance along wi due to this particular light source;
            // it does not check for visibility, but returns a VisibilityTester object that can later be used to do so
            Spectrum Li = scene->lights[i]->Sample_L(p, &wi, &visibility);
            if (Li.Black()) continue;
            // get bsdf for this pair of directions
            Spectrum f = bsdf->f(wo, wi);
            // if bsdf != 0
            if (!f.Black())
                // bsdf * Li * cos(wi, n)
                L += f * Li * AbsDot(wi, n);
        }
        return L;
    }
}
Example 4 - DirectNoShadows surface integrator code
This surface integrator calls Light::Sample_L() to get the incident radiance at the
intersection point, p, coming directly from the light source. This method stochastically selects
a point on the surface of area light sources and returns both the direction from p to this point
and a visibility tester object that might be used to test if the light source is visible from p.
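As a sketch of how that tester would be used (this is not part of the DirectNoShadows integrator, which deliberately ignores occlusion), the light loop could account for shadows by querying the tester before accumulating each contribution; VisibilityTester::Unoccluded() traces the corresponding shadow ray:

// Hypothetical variant of the light loop above, now with shadow rays
for (u_int i = 0; i < scene->lights.size(); ++i) {
    VisibilityTester visibility;
    Spectrum Li = scene->lights[i]->Sample_L(p, &wi, &visibility);
    if (Li.Black()) continue;
    Spectrum f = bsdf->f(wo, wi);
    // only add the contribution if the sampled point on the light is actually visible from p
    if (!f.Black() && visibility.Unoccluded(scene))
        L += f * Li * AbsDot(wi, n);
}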
Figure 3 - cornell box with direct no shadows integrator (1 rpp, T=1.2 secs)

The image resulting from applying this integrator is full of noise. This occurs because the point
on the light sources' surfaces is selected stochastically and will thus vary from intersection
point to intersection point. This noise is the visible effect of the variance inherent to stochastic
methods.

There are several different ways to either reduce or eliminate variance. One approach is to
always select the same point on the area light source; the sampling then becomes
deterministic and variance goes away. However, determinism has several problems, such as
aliasing, and is not a good solution. Another alternative is to shoot several primary rays per
pixel and then integrate their contributions. However, variance is inversely proportional to the
square root of the number of samples. This means that in order to halve variance, four times
more samples, i.e., primary rays, have to be evaluated. Unfortunately, rendering time is linear
with the number of primary rays – see the images below with 4, 16 and 64 rays per pixel and
6.5, 25.7 and 104 seconds rendering time, respectively.
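To make the trade-off explicit, this is the standard Monte Carlo scaling argument, stated with the timings reported above (each quadrupling of the sample count halves the noise but roughly quadruples the rendering time):

\[
\sigma_N \propto \frac{1}{\sqrt{N}}
\quad\Rightarrow\quad
\sigma_{4N} = \tfrac{1}{2}\,\sigma_N ,
\qquad
T(N) \propto N
\quad\text{(e.g. } T \approx 6.5\,\text{s}, 25.7\,\text{s}, 104\,\text{s for } N = 4, 16, 64\text{)}.
\]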
Bibliographic References
Pharr, M., & Humphreys, G. (2004). Physically Based Rendering: From Theory to Implementation. Morgan Kaufmann.