Date of Award

10-2010

Embargo Period

5-20-2011

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Robotics Institute

Advisor(s)

Srinivasa G. Narasimhan

Second Advisor

Takeo Kanade

Third Advisor

Martial Hebert

Fourth Advisor

Shree K. Nayar

Comments

Light interacts with the world around us in complex ways. These interactions can broadly be classified as direct illumination, when a scene point is illuminated directly by the light source, or indirect illumination, when a scene point receives light that is reflected, refracted, or scattered off other scene elements. Several computer vision techniques make the unrealistic assumption that scenes receive only direct illumination. In many real-world scenarios, such as indoors, in underground caves, underwater, in foggy conditions, and for objects made of translucent materials like human tissue, fruits, and flowers, the amount of indirect illumination is significant, often exceeding the direct illumination. In these scenarios, vision techniques that do not account for indirect illumination produce strong and systematic errors in the recovered scene properties.
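
To make the distinction concrete, the radiance L recorded at a camera pixel is commonly written as the sum of a direct and a global (indirect) component; this standard light-transport decomposition is stated here for orientation and is not quoted from the thesis itself:

    L = L_d + L_g

where L_d is light that travels from the source to the scene point and directly to the camera (a single bounce), and L_g aggregates all remaining paths: interreflections, subsurface scattering, and volumetric scattering.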

This assumption is made because computational models for indirect illumination (also called global illumination or global light transport) are complex, even for relatively simple scenes. The goal of this thesis is to build simple, tractable models of global light transport that can be used for a variety of scene recovery and rendering applications.

This thesis makes three contributions. The first is recovering scene geometry and appearance despite the presence of global light transport. We show that two different classes of shape recovery techniques, structured light triangulation and shape from projector defocus, can be made robust to the effects of global light transport. We demonstrate our approaches on scenes with complex shapes and optically challenging materials. We then investigate the problem of recovering scene appearance in common poor-visibility scenarios, such as murky water, bad weather, dust, and smoke. Computer vision systems deployed in such conditions suffer from the scattering and attenuation of light. We show that by controlling the incident illumination, the loss of image contrast due to scattering can be significantly reduced. Our framework can be used to improve visibility in a variety of outdoor applications, such as designing headlights for vehicles, both terrestrial and underwater.
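
As a rough illustration of why scattering degrades contrast, the sketch below uses the standard single-scattering image-formation model, in which scene radiance is attenuated along the line of sight while scattered ambient light is blended in. The symbols and numbers here are illustrative assumptions, not parameters from the thesis.

    import numpy as np

    def observed_radiance(J, d, beta, A):
        # Standard haze/underwater model: scene radiance J is attenuated
        # over distance d, while scattered ambient light A (the "airlight")
        # is added in.
        t = np.exp(-beta * d)          # transmission along the line of sight
        return J * t + A * (1.0 - t)   # attenuated signal + scattered veil

    # Contrast between a bright and a dark scene point collapses as the
    # medium gets denser (larger scattering coefficient beta).
    J_bright, J_dark, d, A = 0.9, 0.1, 10.0, 0.5
    for beta in (0.01, 0.1, 0.3):
        I1 = observed_radiance(J_bright, d, beta, A)
        I2 = observed_radiance(J_dark, d, beta, A)
        print(f"beta={beta}: observed contrast {I1 - I2:.3f}")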

Global light transport is not always noise. In numerous scenarios, measuring global light transport can actually provide useful information about the scene. The second contribution is to recover material and scene properties by measuring global light transport. We present a simple device and technique for robustly measuring the volumetric scattering properties of a broad class of participating media. We have constructed a dataset of these scattering properties, which can be used immediately by the computer graphics community to render realistic images. Next, we model the effects of defocused illumination on the process of measuring global light transport in general scenes. Modeling the effects of defocus is important because projectors, which have limited depth of field, are increasingly being used as programmable illumination in vision applications. With our techniques, we can separate the direct and global components of light transport for scenes whose depth ranges are significantly greater than the depth of field of projectors (less than 0.3 m).
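
The separation referred to here builds on the high-frequency illumination method of Nayar et al. (SIGGRAPH 2006): the scene is lit by shifted checkerboard patterns in which half the projector pixels are on, and per-pixel extrema over the shifted captures yield the two components. Below is a minimal sketch, assuming a stack of such captures is already available; the thesis additionally models projector defocus, which this sketch ignores.

    import numpy as np

    def separate_direct_global(images):
        # images: list of arrays captured under shifted high-frequency
        # patterns, each lighting half the projector pixels.
        stack = np.stack(images)     # shape: (num_patterns, H, W)
        L_max = stack.max(axis=0)    # pixel lit:    L_d + L_g / 2
        L_min = stack.min(axis=0)    # pixel unlit:        L_g / 2
        direct = L_max - L_min       # L_d
        global_ = 2.0 * L_min        # L_g
        return direct, global_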

The third contribution of this thesis is the fast rendering of dynamic and non-homogeneous volumetric media, such as fog, smoke, and dust. Rendering such media requires simulating the fluid properties (density and velocity fields) and rendering volumetric scattering effects. Unfortunately, fluid simulation and volumetric rendering have traditionally been treated as two disparate problems in computer graphics, making it hard to leverage the advances made in the two fields together. In particular, reduced-space methods have been developed separately in both fields; these exploit the observation that the associated fields (density, velocity, and intensity) can be faithfully represented with a relatively small number of parameters. We develop a unified reduced-space framework for both fluid simulation and volumetric rendering. Since both are performed in a reduced space, our technique achieves computational speed-ups of one to three orders of magnitude over traditional spatial-domain methods. We demonstrate complex visual effects resulting from volumetric scattering in dynamic and non-homogeneous media, including fluid simulation effects such as particles inserted in turbulent wind fields.
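
As a schematic of what "reduced space" means here, the sketch below builds a low-dimensional basis from example density fields via an SVD and represents any field by a handful of coefficients. The basis construction and dimensions are illustrative assumptions; the thesis's actual choice of basis may differ.

    import numpy as np

    rng = np.random.default_rng(0)
    snapshots = rng.random((100, 64 * 64))   # 100 frames of a 64x64 density field
    mean = snapshots.mean(axis=0)

    # Low-dimensional basis from the snapshots (top-k right singular vectors).
    k = 16
    _, _, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    basis = Vt[:k]                           # shape: (k, 4096)

    # A field is now k coefficients instead of 4096 samples; simulation and
    # rendering updates operate on these coefficients directly.
    field = snapshots[0]
    coeffs = basis @ (field - mean)          # project into reduced space
    approx = mean + basis.T @ coeffs         # reconstruct for display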
