
Technical Report


Abstract

Liquefied Natural Gas (LNG) processing facilities contain large, complex networks of pipes of varying diameter and orientation, intermixed with control valves, processes, and sensors. Regular inspection of these pipes for corrosion, caused by impurities in the gas processing chain, is critical for safety. Popular existing non-destructive technologies used for corrosion inspection in LNG pipes include Magnetic Flux Leakage (MFL), radiography (X-ray), and ultrasound, among others. These methods can be used to obtain measurements of pipe wall thickness, and by monitoring for changes in wall thickness over time, the rate of corrosion can be estimated. For LNG pipes, unlike large mainstream gas pipelines, the complex infrastructure means that these sensors are currently employed external to the pipe itself, making comprehensive, regular coverage of the pipe network difficult or impossible. As a result, a sampling-based approach is taken: parts of the pipe network are sampled regularly, and the corrosion estimate is extrapolated to the remainder of the pipe using predictive corrosion models derived from metallurgical properties. We argue that a robot crawler that moves a suite of sensors inside the pipe network can provide a mechanism for achieving more comprehensive and effective coverage. In this technical report, we explore a vision-based system for building 2D registered appearance maps of the pipe surface whilst simultaneously localizing the robot in the pipe. Such a system is essential for providing a localization estimate with which readings from other non-destructive sensors can be overlaid and changes registered over time; the resulting 2D metric appearance maps may also themselves be useful for corrosion detection. For this work, we restrict ourselves to linear pipe formations.

Two distinct classes of algorithms for estimating this pose are investigated, both of which are visual odometry systems that estimate motion by observing how the appearance of images changes between frames. The first is a class of dense algorithms that use the greyscale intensity values, and their derivatives, of all pixels in adjacent images. The second is a sparse algorithm that uses the change in position (sparse optical flow) of salient point-feature correspondences between adjacent images. Pose estimate results obtained using the dense and sparse algorithms are presented for a number of image sequences captured by different cameras as they moved through two pipes having diameters of 152.40mm (6”) and 406.40mm (16”), and lengths of 6 and 4 meters respectively. These results show that accurate pose estimates can be obtained, with errors consistently less than 1 percent of the distance traveled down the pipe. Examples of the stitched images are also presented, highlighting the accuracy of these pose estimates.
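To make the sparse approach concrete, the following is a minimal sketch (not the report's implementation) of how per-frame motion can be recovered from sparse point-feature correspondences. It assumes the feature tracks are already given, uses synthetic data in place of real images, and substitutes a simple robust median over the per-feature displacement vectors for the full pose estimation, which in the actual system would also account for the camera model and the pipe's cylindrical geometry.

```python
import numpy as np

def estimate_translation(prev_pts, curr_pts):
    """Estimate 2D inter-frame translation from sparse correspondences.

    prev_pts, curr_pts: (N, 2) arrays of matched feature positions in
    two adjacent frames. The median of the displacement vectors gives a
    simple outlier-robust motion estimate (a stand-in for the sparse
    optical-flow pose estimation described in the report).
    """
    return np.median(curr_pts - prev_pts, axis=0)

# Hypothetical data: 50 tracked features undergoing a common image-plane
# motion of (3.0, 0.5) pixels, with small tracking noise and a few
# simulated bad tracks (outliers).
rng = np.random.default_rng(0)
prev_pts = rng.uniform(0, 640, size=(50, 2))
true_motion = np.array([3.0, 0.5])
curr_pts = prev_pts + true_motion + rng.normal(0, 0.1, size=(50, 2))
curr_pts[:5] += rng.uniform(-20, 20, size=(5, 2))  # outlier tracks

motion = estimate_translation(prev_pts, curr_pts)
```

Accumulating these per-frame estimates over the sequence yields the odometry used to register each image to a position along the pipe; the median keeps a handful of mistracked features from corrupting the estimate.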



