SIAM IS14 - MS33 - Models and Methods for Imaging through Turbulence - Supplementary material
The focus of this minisymposium is the general problem of seeing through
turbulent media of nonuniform density, for example the warm atmosphere or the
oscillating surface of water. This is a wide field of research separated into a
few sub-communities: astronomical seeing, underwater imaging, and long-distance
surveillance over hot terrain. Each of these problems has different
challenges and constraints, but there are still many ideas that could be shared
between them. The objective of this minisymposium is to bring together
researchers from each of these backgrounds and discuss the differences and
similarities between their turbulence models.
-
10:30-11:00 Atomic Models of Video Turbulence
(online demo)
We recall some simple methods to simulate turbulent videos
of static objects. Each method is motivated by a physical
model, but we describe them as image processing operators
acting on the latent image. The latent image can be
recovered easily from each simulation. Thus, we claim that
the main difficulty in the correction of turbulence must come
from a combination of different effects. Indeed, we find
several of these individual effects appearing in real examples.
Enric Meinhardt-Llopis, ENS Cachan, France
Tristan Dagobert, Direction Générale de l'Armement, France
-
10:30-11:00 Independent Components in Dynamic Refraction
Refraction causes random dynamic distortions in atmospheric
turbulence and in views across a water interface. The latter
scenario is experienced by submerged animals seeking to
detect prey or avoid predators, which may be airborne or on
land. People encounter it when surveying a scene from a
submarine or as divers, while wishing to avoid using an
attention-drawing periscope. The problem of inverting
random refracted dynamic distortions is difficult, particularly
when some of the objects in the field of view are moving. On
the other hand, in many cases, just those moving objects are
of interest, as they reveal animal, human, or machine activity.
Furthermore, detecting and tracking these objects does not
necessitate handling the difficult task of complete recovery of
the scene. We show that moving objects can be detected very
simply, with low false-positive rates, even when the
distortions are very strong and dominate the object motion.
Localizing objects in three dimensions (3D) despite this
random distortion is also important to some predators and
also to submariners avoiding the salient use of periscopes.
Refracted distortion statistics induce a probabilistic relation
between any pixel location and a line of sight in space.
Measurements of an object's random projection from multiple
views and times lead to a likelihood function of the object's
3D location. The likelihood leads to estimates of the 3D
location and its uncertainty.
Marina Alterman, Technion, Israel
Yoav Schechner, Technion, Israel
Pietro Perona, Caltech, USA
Joseph Shamir, Technion, Israel
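The likelihood-based localization can be caricatured in a few lines. Under the simplifying assumption (ours, not necessarily the authors') that each observation perturbs the line of sight with isotropic Gaussian noise, the maximum-likelihood 3D location minimizes the sum of squared distances to the observed rays, which has a closed form:

```python
import numpy as np

def triangulate(origins, directions):
    """Point minimizing the sum of squared distances to a set of
    lines of sight (origin + direction). Under isotropic Gaussian
    perturbations of the rays, this is the maximum-likelihood
    estimate of the 3D location."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float) / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# Two views of a point at (0.3, 0.4, 2.0); with noise-free rays the
# estimate is exact, while perturbed rays give nearby estimates whose
# spread reflects the localization uncertainty.
p = np.array([0.3, 0.4, 2.0])
origins = [np.zeros(3), np.array([1.0, 0.0, 0.0])]
estimate = triangulate(origins, [p - o for o in origins])
```

Accumulating rays from more views and times sharpens the likelihood, exactly as the abstract describes.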
-
11:00-11:30 Video Restoration of Turbulence Distortion
When video is captured by a long-range imaging system,
atmospheric turbulence can corrupt the sequence and
objects can appear distorted. Blurring and diffeomorphic
deformation are two of the main effects of atmospheric
turbulence. We propose methods to stabilize the video
sequence and produce a good reference image. We
reconstruct a new video sequence using Sobolev gradient
sharpening with temporal smoothing, and one latent image
is then found using the lucky-region method.
Yifei Lou, University of California, Irvine, USA
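The lucky-region idea mentioned above can be sketched in a few lines: partition the frames into patches and, for each patch, keep the frame whose patch looks sharpest. This toy version is our own, uses gradient energy as the sharpness proxy, and omits the Sobolev sharpening and temporal smoothing steps:

```python
import numpy as np

def lucky_region_fuse(frames, patch=8):
    """Fuse a (T, H, W) stack by keeping, for each patch, the frame
    whose patch has the largest gradient energy (sharpness proxy)."""
    t, h, w = frames.shape
    gy, gx = np.gradient(frames, axis=(1, 2))
    energy = gx**2 + gy**2
    out = np.empty((h, w))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            scores = energy[:, i:i+patch, j:j+patch].sum(axis=(1, 2))
            best = scores.argmax()  # the "luckiest" frame for this patch
            out[i:i+patch, j:j+patch] = frames[best, i:i+patch, j:j+patch]
    return out

# A flat frame (no detail) and a ramp frame (uniform detail): the
# fused image should come entirely from the sharper ramp frame.
frames = np.stack([np.zeros((16, 16)),
                   np.tile(np.arange(16.0), (16, 1))])
fused = lucky_region_fuse(frames)
```

In practice the selection is done after geometric stabilization, so that patches from different frames align before fusion.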
-
11:30-12:00 A Geometric Method for Image Recovery Through Optical Turbulence
The phenomenon that is commonly referred to as optical
“turbulence” in imaging is caused by the time and
space-varying refractive index of the air, which is due, among
other factors, to temperature, air pressure, humidity, and
wind conditions between the acquired scene and the
image-capturing device. The resulting image sequence is also
affected by the different and changing lighting conditions
within the scene, by the actual distance between the observed
objects and the camera, and by other artifacts introduced by
the device itself. The distortion described above may be
modeled, at least to a first approximation, as the combined
effect of (i) a blur with an anisoplanatic point spread function
and (ii) a time-dependent deformation of the image domain.
In this talk I will describe an algorithm that, starting from
this observation, first employs a geometric method for
restoring the structure of the scene, and then uses variational
deconvolution techniques to yield a crisp final result. The
algorithm may be viewed as an alternate minimization
procedure of a functional that includes a data matching term,
a regularization term for the deformations, and a
regularization term for the recovered image. The algorithm
has proven very effective for the recovery of images
affected both by ground-level atmospheric blur and by
underwater turbulence caused by temperature gradients.
Mario Micheli, Department of Mathematics, University of Washington, USA
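In our own notation (not taken from the talk), a functional with the three ingredients listed above, where $f_t$ are the observed frames, $\phi_t$ the domain deformations, $k$ the blur kernel, and $u$ the recovered image, might read:

```latex
E\bigl(u, \{\phi_t\}\bigr)
  = \sum_t \int_\Omega \bigl( f_t(\phi_t(x)) - (k * u)(x) \bigr)^2 \, dx
  + \lambda \sum_t R_{\mathrm{def}}(\phi_t)
  + \mu \, R_{\mathrm{img}}(u)
```

Alternating minimization then fixes $u$ to update the deformations (a registration step) and fixes the $\phi_t$ to update $u$ (a deconvolution step).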