Model based image restoration for underwater images


Thomas Stephan a, Peter Frühberger b, Stefan Werling b and Michael Heizmann b

a KIT Vision and Fusion Laboratory (IES), Adenauerring 4, Karlsruhe, Germany;

b Fraunhofer IOSB, Fraunhoferstr. 1, Karlsruhe, Germany;

ABSTRACT

The inspection of offshore parks, dam walls and other infrastructure under water is expensive and time-consuming, because such constructions must be inspected manually by divers. Underwater buildings have to be examined visually to find small cracks, spallings or other deficiencies.

Automation of underwater inspection depends on established waterproof imaging systems. Most underwater imaging systems are based on acoustic sensors (sonar). The disadvantage of such an acoustic system is the loss of the complete visual impression: all information embedded in texture and surface reflectance gets lost. Therefore acoustic sensors are mostly insufficient for this kind of visual inspection task.

Imaging systems based on optical sensors feature an enormous potential for underwater applications. The range of applications for visual imaging systems reaches from the inspection of underwater buildings via marine biological applications through to the exploration of the seafloor. The reason for the lack of established optical systems for underwater inspection tasks lies in the technical difficulties of underwater image acquisition and processing. Poor lighting and highly degraded images make computational postprocessing absolutely essential.

Keywords: image restoration, image models, underwater imaging

1. INTRODUCTION

Underwater imaging was hardly researched in the past. Only in the last ten years has this topic become more and more interesting to the image processing community. Until then, terrain and infrastructure under water were examined by other types of sensors. Especially acoustic sensors were applied for underwater inspection, navigation and exploration. This is mainly because sonar sensors have advantageous characteristics under water, in particular long ranges and relatively high signal-to-noise ratios. Nevertheless the disadvantages of such sensors are significant: low lateral resolution and no visual-optical information like color or texture. Thus the interpretation of acoustic data is highly complex, and many tasks in underwater inspection, navigation and exploration cannot be accomplished by such sensor systems.

The reason for the absence of optical systems must be sought in the influence of water on the process of image formation. In contrast to air, water immensely influences the propagation of a light beam. Water and the particles therein increase the absorption and scattering of the transported light. Thus the resulting images are disappointing concerning their quality: they are often low in contrast, color-shifted, blurred and noisy. Therefore underwater images can hardly be used without image postprocessing like image enhancement or image restoration.

These two suggested postprocessing steps try to increase the quality of images by taking different approaches. The purpose of image enhancement is to improve the subjective impression of an image. Thus, heuristic approaches are used to enhance the visual quality, where the increase of quality is subjective and therefore cannot be quantified objectively. In contrast, image restoration aims at the restoration of an original image signal, which was degraded by the imaging process. For this purpose the degradation of the original image signal must be modeled.

In the field of automatic or semi-automatic analysis of underwater images it is essential to increase the image quality on the basis of an objective, model-based quality measure. Decisions concerning inspection, navigation and exploration under water must be based on reproducible and verifiable imaging results. On the basis of merely 'beautiful' images an objective interpretation is not possible. The basis of this approach is a physically deduced model.

Further author information: (Send correspondence to Thomas Stephan)

Thomas Stephan: E-mail: thomas.stephan@..., Telephone: +49 721 6091-436


1.1 STATE OF THE ART

In recent years, research on camera techniques under water has received more and more attention. This is because the ocean is gaining in importance, and therefore inspection, navigation and exploration can benefit from visual-optical sensors under water. In the medium water, a highly absorbing, scattering and turbid medium, it is essential to find new ways to capture images with a high signal-to-noise ratio and to extract and amplify the signal coming from the scene rather than from the medium itself. This field of research is still in an early stage of development and requires sophisticated methods of computational imaging. There are only few research groups working on the underwater imaging topic. Schettini and Corchs1 give an overview of underwater image processing.

Underwater imaging as shown in this paper is based on the model of light transportation through scattering media. This can be physically described by the radiative transfer equation (RTE), which was researched extensively from the 1960s until the 1990s by Chandrasekhar,2 Ishimaru3,4 and Mobley.5 The RTE provides the basis of underwater imaging today, but first computer models had already been derived in the early 1980s and 1990s.6,7 They are based on the strongly simplifying small-angle scattering assumptions of Wells.8,9 McGlamery6 and Jaffe7 simulated the visibility conditions under water in order to support the design of camera-based underwater vision systems.

The first image restoration algorithms for underwater purposes are described by Schechner10 and Weidemann.11 Weidemann researched the point spread function of a viewed point in scattering water; he measured this function and fitted it to a parametric model in order to invert it. Schechner10 experimented with the angle of polarization and described the relationship between the difference of the 'best' and the 'worst' polarization angle and the distance to the objects in the scene. To capture images, the angle of the polarizer has to be changed; thus an automated image restoration is quite hard to realize with this technique, because only static scenes can be captured. The approach of He et al.,12 based on the dark channel prior, has brought new ideas to underwater researchers.13 The original approach of He et al.12 was designed to restore images taken in foggy or hazy environments.

It can be assumed that the topic of underwater imaging will attract more attention in the near future. The computational capacities have increased immensely since the first underwater restoration algorithms, thus more computationally expensive imaging models can be used to restore the original image information.

2. UNDERWATER IMAGING MODEL

In order to describe the imaging process under water, it is important to understand the propagation of light in a scattering medium and the interaction of light with object surfaces. In the macroscopic field of inspection, navigation and exploration, the nature of light can be described adequately through light beams and their transportation. Thus the radiative transfer equation (RTE)2 is the model of choice to specify imaging under water.

The RTE is a recursive linear integro-differential equation which quantifies the change of a light beam described as spectral radiance $L_\lambda(x, r)$ for a wavelength $\lambda$ at a position $x \in \mathbb{R}^3$ into a direction $r \in S^2$, where $S^2 \subset \mathbb{R}^3$ is the unit sphere. The RTE can be written as

$$r^T \nabla_x L_\lambda(x, r) = -(a_\lambda + b_\lambda)\, L_\lambda(x, r) + b_\lambda \int_{S^2} \tilde\beta_\lambda(x, r' \to r)\, L_\lambda(x, r')\, \mathrm{d}r', \qquad (1)$$

where $a_\lambda$ is the wavelength-dependent absorption coefficient, $b_\lambda$ is the wavelength-dependent scattering coefficient and $\tilde\beta_\lambda(x, r' \to r)$ is the normalized phase function, which quantifies the amount of scattering of a single light beam at a specified position $x$ from direction $r'$ into another direction $r$. Because of the reciprocity

$$\tilde\beta_\lambda(x, r' \to r) = \tilde\beta_\lambda(x, r \to r') \qquad (2)$$

of light transportation, the phase function can be written as $\tilde\beta_\lambda(x, r' \cdot r)$. The phase function mostly does not change within the field of view. Hence the dependency on the position can be neglected, and the phase function can be written as

$$\tilde\beta_\lambda(x, r' \cdot r) = \tilde\beta_\lambda(r' \cdot r). \qquad (3)$$

If light hits the surface of an object, it will be reflected. The shape of the reflection can be described by the bidirectional reflectance distribution function (BRDF), which is, similar to the phase function, defined as

$$\rho_\lambda(x, r' \cdot r) = \frac{\mathrm{d}L_\lambda(x, r)}{\mathrm{d}E_\lambda(x, r')}, \qquad (4)$$

where $E_\lambda(x, r')$ is the irradiance of the incident light beam and $L_\lambda(x, r)$ is the radiance of the outgoing light beam. Under water, object surfaces are mostly covered with algae or mud and therefore hardly reflect specularly. Thus surfaces under water are approximately Lambertian, i.e.

$$\rho_\lambda(x, r' \cdot r) = \rho_\lambda(x). \qquad (5)$$

2.1 DERIVATION OF THE MODEL

To clarify the quantities used in this section, figure 1 illustrates the underlying geometry.

Figure 1: Illustration of the used quantities and their context.

In order to describe the camera, a pinhole model is used. Thus the image intensity $g_\lambda(u)$ for a specified wavelength $\lambda$ at a pixel position $u$ is proportional to the spectral radiance $L_\lambda(x_C, r_u)$ at the pinhole $x_C$ into the direction $r_u$ to the sensor pixel $u$:

$$g_\lambda(u) \propto L_\lambda(x_C, r_u). \qquad (6)$$

For each sensor pixel $u$ one can determine an unambiguous point $x_u$ on the object surface, so that

$$x_u + d(u)\, r_u = x_C, \qquad (7)$$

where $d(u) := \lVert x_C - x_u \rVert$. Because of the linearity of the RTE (1), one can divide it into different parts and solve these separately. For the so-called direct component, the light beam is regarded which comes from the surface and reaches the image sensor without scattering. At each traversed point, the decrease of spectral radiance can be written as the first part of the RTE (see eq. (1)):

$$r^T \nabla_x L_\lambda(x, r) = -(a_\lambda + b_\lambda)\, L_\lambda(x, r). \qquad (8)$$

The solution is known as the Lambert-Beer law and reads in this specific case

$$g^{\mathrm{direct}}_\lambda(u) := L_\lambda(x_u, r_u)\, e^{-c_\lambda d(u)}, \qquad (9)$$

where $c_\lambda = a_\lambda + b_\lambda$.

Natural illumination coming from the water surface is scattered multiple times before reaching the object surface. Thus the scene illumination is often very homogeneous. With this fact, and by neglecting surface interreflections, the surface radiance $L_\lambda(x_u, r_u)$ can be determined as

$$L_\lambda(x_u, r_u) = \int_{S^2} \rho_\lambda(x_u)\, \mathrm{d}E_\lambda(x_u, r') = E^{\mathrm{nat}}_\lambda\, \rho_\lambda(x_u). \qquad (10)$$

From this and (9) it follows that

$$g^{\mathrm{direct}}_\lambda(u) = E^{\mathrm{nat}}_\lambda\, \rho_\lambda(x_u)\, e^{-c_\lambda d(u)}. \qquad (11)$$

The second part, i.e. the indirect component, can neither be computed efficiently nor be formulated in closed form,5 because of its specific recursive nature. Hence it must be approximated:

$$r^T \nabla_x L_\lambda(x, r) = b_\lambda \int_{S^2} \tilde\beta_\lambda(x, r' \to r)\, L_\lambda(x, r')\, \mathrm{d}r'. \qquad (12)$$

On the assumption that the light field is dominated by a homogeneous natural illumination, the inscattering at a point on the sight line between $x_u$ and $x_C$ (see figure 1) can be condensed to

$$\mathrm{d}L_\lambda(x_u + \tau r_u, r_u) = b_\lambda \int_{S^2} \tilde\beta_\lambda(r' \cdot r_u)\, L_\lambda(x_u + \tau r_u, r')\, \mathrm{d}r' = b_\lambda k_\lambda, \qquad (13)$$

where $k_\lambda$ is an illumination constant. The amount of inscattered radiance traveling through the attenuating medium and reaching the sensor pixel can be expressed by the Lambert-Beer law as

$$\mathrm{d}L_\lambda(x_C, r_u) = \mathrm{d}L_\lambda(x_u + \tau r_u, r_u)\, e^{-c_\lambda (d(u) - \tau)} = b_\lambda k_\lambda\, e^{-c_\lambda (d(u) - \tau)}. \qquad (14)$$

Integrating the inscattered radiance along the sight line gives the indirect component

$$g^{\mathrm{indirect}}_\lambda(u) := \int_0^{d(u)} b_\lambda k_\lambda\, e^{-c_\lambda (d(u) - \tau)}\, \mathrm{d}\tau = \frac{b_\lambda}{c_\lambda}\, k_\lambda \left(1 - e^{-c_\lambda d(u)}\right). \qquad (15)$$
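For completeness, the integral in (15) can be verified in one elementary step (this short check is added here and is not part of the original text):

$$\int_0^{d(u)} b_\lambda k_\lambda\, e^{-c_\lambda (d(u) - \tau)}\, \mathrm{d}\tau
= b_\lambda k_\lambda\, e^{-c_\lambda d(u)} \int_0^{d(u)} e^{c_\lambda \tau}\, \mathrm{d}\tau
= b_\lambda k_\lambda\, e^{-c_\lambda d(u)}\, \frac{e^{c_\lambda d(u)} - 1}{c_\lambda}
= \frac{b_\lambda}{c_\lambda}\, k_\lambda \left(1 - e^{-c_\lambda d(u)}\right).$$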

Combining both the direct and the indirect component gives the total image intensity:

$$g_\lambda(u) \propto g^{\mathrm{direct}}_\lambda(u) + g^{\mathrm{indirect}}_\lambda(u) = E^{\mathrm{nat}}_\lambda\, \rho_\lambda(x_u)\, e^{-c_\lambda d(u)} + \frac{b_\lambda}{c_\lambda}\, k_\lambda \left(1 - e^{-c_\lambda d(u)}\right). \qquad (16)$$

To handle the spectral nature of light transportation, it is assumed that the inherent optical properties like scattering, absorption and reflectance do not change too much within one color channel of the camera. Thus the different color channels can be treated like different wavelengths $\lambda \in \{r, g, b\}$.
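To illustrate the image formation model, a minimal forward simulation of eq. (16) can be written as follows, for instance to generate synthetic test data. It is only a sketch: the per-channel coefficients $c_\lambda$, the veiling-light constant $\kappa_\lambda = b_\lambda k_\lambda / c_\lambda$ and the illumination values below are arbitrary example values, not measurements from the paper.

```python
import numpy as np

def simulate_underwater_image(reflectance, distance, E_nat, c, kappa):
    """Forward model of eq. (16): g = E_nat * rho * exp(-c d) + kappa * (1 - exp(-c d)).

    reflectance: (H, W, 3) surface reflectance map rho_lambda(x_u) in [0, 1]
    distance:    (H, W) distance map d(u)
    E_nat, c, kappa: per-channel (3,) natural illumination, attenuation c = a + b,
                     and veiling-light constant kappa = b * k / c (assumed example values)
    """
    d = distance[..., None]                       # broadcast the distance over the channels
    transmission = np.exp(-c * d)                 # Lambert-Beer attenuation e^{-c_lambda d(u)}
    direct = E_nat * reflectance * transmission   # direct component, eq. (11)
    indirect = kappa * (1.0 - transmission)       # inscattered veiling light, eq. (15)
    return direct + indirect                      # total intensity, eq. (16)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    rho = rng.uniform(0.0, 1.0, size=(120, 160, 3))     # synthetic reflectance map
    d = np.tile(np.linspace(1.0, 8.0, 160), (120, 1))   # distance ramp from 1 m to 8 m
    c = np.array([0.60, 0.12, 0.18])        # assumed attenuation, red channel strongest
    kappa = np.array([0.15, 0.35, 0.45])    # assumed bluish-green veiling light
    E_nat = 1.0 * kappa                     # eq. (23): E_nat proportional to kappa
    g = simulate_underwater_image(rho, d, E_nat, c, kappa)
    print(g.shape, float(g.min()), float(g.max()))
```

With increasing distance the direct term vanishes and the simulated intensity converges towards the veiling light, which reproduces the typical loss of contrast and color shift described above.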

3. PARAMETER ESTIMATION

In order to restore the reflectance map $\rho_\lambda(x_u)$ of the surface, several parameters have to be estimated. These are in particular the natural surface illumination $E^{\mathrm{nat}}_\lambda$, the spectral attenuation coefficient $c_\lambda$, the natural ambient light constant $\kappa_\lambda := \frac{b_\lambda}{c_\lambda} k_\lambda$ and the distance map $d(u)$.

Without knowing much about the underwater scene, it is nevertheless possible to determine these quantities with the aid of the dark channel prior assumptions.12 These are:

Dark Channel Prior: It is assumed that in every image patch $U(u)$ around the image point $u$ there exists at least one spectral signal value which is vanishingly low, i.e.

$$\forall u\ \exists \lambda \in \{r, g, b\},\ \xi \in U(u):\ \rho_\lambda(\xi) = 0. \qquad (17)$$

Distant Object Surface: Furthermore it is assumed that there is a patch whose sight lines never intersect the scene, i.e.

$$\exists u\ \forall \xi \in U(u):\ d(\xi) \to \infty. \qquad (18)$$

If $\forall u\ \forall \xi \in U(u):\ d(\xi) \approx d(u)$, one can derive an estimate for the ambient illumination constant $\kappa_\lambda$. For this purpose the functions

$$g^{\mathrm{dark}}_\lambda(u) := \min_{\xi \in U(u)} \{ g_\lambda(\xi) \} \qquad (19)$$

and

$$\gamma(u) := \min_{\lambda \in \{r, g, b\}} \left\{ g^{\mathrm{dark}}_\lambda(u) \right\} \qquad (20)$$

have to be defined. With these functions and the assumptions (17) and (18), $\kappa_\lambda$ can be approximately determined as

$$\kappa_\lambda \approx \hat\kappa_\lambda := g^{\mathrm{dark}}_\lambda\!\left( \arg\max_u \{ \gamma(u) \} \right). \qquad (21)$$
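As a sketch of how the estimate $\hat\kappa_\lambda$ of equations (19)-(21) could be computed in practice, the following code uses a minimum filter to realize the patch minimum of (19). The patch size and the use of SciPy are implementation choices made here for illustration and are not prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch_size=15):
    """Per-channel patch minimum g^dark_lambda(u), eq. (19)."""
    # minimum over the spatial patch U(u), computed separately for each color channel
    return minimum_filter(img, size=(patch_size, patch_size, 1))

def estimate_kappa(img, patch_size=15):
    """Ambient light estimate kappa_hat_lambda, eqs. (20) and (21)."""
    g_dark = dark_channel(img, patch_size)                    # shape (H, W, 3)
    gamma = g_dark.min(axis=2)                                # eq. (20): minimum over {r, g, b}
    u_star = np.unravel_index(np.argmax(gamma), gamma.shape)  # arg max_u gamma(u)
    return g_dark[u_star]                                     # eq. (21): per-channel kappa_hat
```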

Another parameter is the natural illumination at the surface. It is quite hard to estimate it exactly, but it seems natural to assume that its wavelength components are weighted in the same way as those of $\kappa_\lambda$, i.e.

$$E^{\mathrm{nat}}_\lambda = \kappa_\lambda E^{\mathrm{nat}}, \qquad (22)$$

thus

$$E^{\mathrm{nat}}_\lambda \propto \kappa_\lambda. \qquad (23)$$

3.1 DISTANCE ESTIMATION

The most crucial parameter to estimate is the distance map $d(u)$. A wrongly estimated distance between the camera $x_C$ and the object surface $x_u$ can cause a large amount of disturbing artifacts in the restoration result. The distance estimation of He et al.12 cannot be used in its original form, because He et al. assume the absorption and the scattering to be wavelength-independent; under water these assumptions are unsustainable. Therefore another distance estimation method has to be developed. First of all, a distance-like function has to be defined:

$$R(u) := -\ln\!\left(1 - \frac{\gamma(u)}{\kappa_{\lambda_{\max}}}\right) \approx c_{\lambda_{\max}}\, d(u), \qquad (24)$$

where $\lambda_{\max} := \arg\max_{\lambda \in \{r, g, b\}} (c_\lambda)$ is most often the red channel. The idea behind this is that the channel with the greatest attenuation indicates the distance better than the other channels. In figure 2 an example with an initial estimation $R(u)$ of the distance map is presented.

The distance estimation contains areas with obviously wrong distance values. This comes from violated assumptions, especially of the dark channel assumption (17). If the dark channel assumption holds for each channel separately, the function

$$R_\lambda(u) := -\ln\!\left(1 - \frac{g^{\mathrm{dark}}_\lambda(u)}{\kappa_\lambda}\right) \qquad (25)$$

is proportional to the distance estimation, $R(u) \overset{!}{\propto} R_\lambda(u)$. Thus the quotient map

$$\eta_\lambda(u) := \frac{R(u)}{R_\lambda(u)} \qquad (26)$$

should be constant over the whole map. To enforce better distance estimates, the final distance map is calculated by

$$d(u) \approx \hat d(u) := \eta_\lambda(u)\, R(u), \qquad (27)$$

where the quotient map with the highest variance is used. The reciprocal of the variance of the quotient map, $\mathrm{Var}\{\eta_\lambda(u)\}^{-1}$, indicates the quality of the distance estimation.

The choice of the size of the patch $U(u)$ is essential for the quality of the distance estimation. If the patch size is too large, details get lost; if it is too small, the distance cannot be estimated correctly because the dark channel prior is violated. To determine the optimal patch size, it can be increased gradually until the quality measure $\mathrm{Var}\{\eta_\lambda(u)\}^{-1}$ is greater than a predefined threshold. A threshold of $\mathrm{Var}\{\eta_\lambda(u)\}^{-1} > 1$ turned out to be a good choice. In figure 3 a distance estimation with different quality measures is illustrated.
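The distance estimation of this section might be implemented along the following lines, building on the dark channel computation from the previous sketch. The clipping before the logarithm and the fixed list of candidate patch sizes are added safeguards and assumptions; the choice of the correction channel $\lambda$ is left to the caller, since the paper selects it via the variance criterion described above.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_distance(img, kappa, lam_max=0, lam=0, patch_size=15, eps=1e-6):
    """Distance-like map d_hat(u) = eta_lambda(u) * R(u), eqs. (24)-(27).

    kappa:   per-channel ambient light estimate kappa_hat_lambda
    lam_max: index of the most strongly attenuated channel (usually red)
    lam:     channel used for the correction map eta_lambda(u)
    """
    g_dark = minimum_filter(img, size=(patch_size, patch_size, 1))           # eq. (19)
    gamma = g_dark.min(axis=2)                                               # eq. (20)
    R = -np.log(np.clip(1.0 - gamma / kappa[lam_max], eps, 1.0))             # eq. (24)
    R_lam = -np.log(np.clip(1.0 - g_dark[..., lam] / kappa[lam], eps, 1.0))  # eq. (25)
    eta = R / np.maximum(R_lam, eps)                                         # eq. (26)
    quality = 1.0 / max(np.var(eta), eps)                                    # Var{eta}^-1
    return eta * R, eta, R, quality                                          # eq. (27)

def choose_patch_size(img, kappa, sizes=(7, 9, 11, 13, 15, 17), threshold=1.0):
    """Increase the patch size until the quality measure exceeds the threshold (section 3.1)."""
    for size in sizes:
        d_hat, eta, R, quality = estimate_distance(img, kappa, patch_size=size)
        if quality > threshold:
            break
    return size, d_hat, eta, R
```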

4. IMAGE RESTORATION

The restoration of the original signal $\rho_\lambda(u)$ seems to be quite easy at first. Solving equation (16) for $\rho_\lambda(u)$ gives

$$\rho_\lambda(u) = \frac{1}{E^{\mathrm{nat}}_\lambda} \left( \frac{g_\lambda(u) - \hat\kappa_\lambda}{e^{-\eta_\lambda(u) R(u)}} + \hat\kappa_\lambda \right) \qquad (28)$$

and with (23)

$$\rho_\lambda(u) \propto 1 + \frac{g_\lambda(u) - \hat\kappa_\lambda}{\hat\kappa_\lambda\, e^{-\eta_\lambda(u) R(u)}}. \qquad (29)$$

When applying this inverse model to an underwater image, the results are not very satisfying (see figure 4). This is mainly because the inverse equation (29) is not numerically stable for large distances: the greater the distance, the more susceptible the restoration becomes to small errors in the depth estimation or to noise. To avoid this behavior, the inverse model has to be regularized in order to obtain a more stable approximation.

4.1 REGULARIZATION

The main reason for the instability of eq. (29) is the exponent $R(u)$ in the denominator. If the distance $R(u)$ becomes large, the denominator converges to zero. Consequently the distance $R(u)$ has to be regularized. In this approach the following regularization term is suggested:

$$\pi(u) := \frac{R(u)}{(\epsilon R(u))^q + 1}, \qquad (30)$$

where $\epsilon$ and $q$ are regularization parameters. The lower the SNR, the higher these parameters have to be chosen. Figure 5 shows some plots of the behavior of $\pi(u)$ for different regularization parameters.
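A minimal sketch of the regularized restoration is given below: it evaluates eq. (29) with the distance term $R(u)$ replaced by the regularized $\pi(u)$ of eq. (30). The default values of $\epsilon$ and $q$ as well as the final normalization are illustrative assumptions only, not values taken from the paper.

```python
import numpy as np

def restore(img, kappa, eta, R, epsilon=0.1, q=2):
    """Regularized inverse model: eq. (29) with R(u) replaced by pi(u) from eq. (30).

    img:   observed image g_lambda(u), shape (H, W, 3)
    kappa: ambient light estimate kappa_hat_lambda, shape (3,)
    eta:   correction map eta_lambda(u), shape (H, W)
    R:     distance-like map R(u), shape (H, W)
    """
    pi = R / ((epsilon * R) ** q + 1.0)                  # regularization term, eq. (30)
    transmission = np.exp(-eta * pi)[..., None]          # regularized version of e^{-eta(u) R(u)}
    rho = 1.0 + (img - kappa) / (kappa * transmission)   # eq. (29), restored signal up to a factor
    return np.clip(rho / rho.max(), 0.0, 1.0)            # normalization for display (assumption)
```

For $\epsilon = 0$ and $q = 1$ the term $\pi(u)$ reduces to $R(u)$, i.e. the unregularized inverse model, which corresponds to the middle image in figure 4.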

5. RESULTS

Through the adaptation of the original dark channel prior approach12 to the situation of underwater imaging, impressive restoration results can be achieved. The depth estimation is more robust and accurate for underwater scenery. This approach is able to restore images without any prior knowledge about the water inherent properties, the ambient lighting, the distance to the surface or the depth under the water surface.

The result of this approach is, on the one hand, an estimation of the distance to the object surface and, on the other hand, a restoration of the original signal $\rho_\lambda(u)$ of the image. Thereby the contrast can be increased considerably and the color shift reduced while keeping the noise level low.

This approach is very robust, because all parameters, except the regularization parameters, are estimated or determined within the algorithm. Nevertheless the regularization parameters can be adjusted easily. For the restored images in figure 6 the same regularization parameters were chosen. The restoration results of all images are impressive.

Figure 2: Original image (left), initial distance estimation $R(u)$ (middle) and corrected distance estimation $\eta_\lambda(u) R(u)$ (right). The distance can be estimated with the dark channel prior, but the assumption (see eq. (17)) is often violated. To correct the distance estimation, the correction term $\eta_\lambda(u)$ is introduced in (26).

Figure 3: Original image (left), low-quality distance estimation with $\mathrm{Var}\{\eta_\lambda(u)\}^{-1} = 0.009$ and a 13×13 patch size (middle) and accurate distance estimation with $\mathrm{Var}\{\eta_\lambda(u)\}^{-1} = 95.8$ and a 15×15 patch size (right). Too small image patches $U(u)$ result in bright white artifacts in the distance estimation map. The quality measure is a good indicator for such wrong estimations.

Figure 4: Original image (left), image restoration with too low regularization parameters ($\epsilon = 0$, $q = 1$) (middle) and accurate restoration (right). Without regularization the restoration result contains a lot of artifacts.

Figure 5: Plots of $\pi(u) = \frac{R(u)}{(\epsilon R(u))^q + 1}$ over $R(u)$ for different regularization parameters.

6. CONCLUSION

In order to inspect, navigate or explore automatically or semi-automatically in an underwater environment, it is essential to postprocess the captured images. This paper describes a model based image restoration approach which is able to restore the visibility of the original colors. It is derived from the radiative transfer equation,2 thus it guarantees approximate physical correctness. The assumptions that are made are known, and thus restoration errors caused by violated assumptions are explainable. Apart from the restoration results, a distance estimation is also determined and can be used for other purposes.

The used model and the restricting assumptions limit the potential restoration results. Based on the model described in (16) it is not possible to improve underwater image restoration much further. Thus more sophisticated image restoration approaches need better, but more expensive, models as well as other acquisition and lighting techniques. Presumably there is no way around measuring the water inherent properties and the distance to the object surfaces in order to obtain much better restoration results.

REFERENCES

[1] Schettini, R. and Corchs, S., "Underwater Image Processing: State of the Art of Restoration and Image Enhancement Methods," EURASIP Journal on Advances in Signal Processing 2010 (2009).

[2] Chandrasekhar, S., [Radiative transfer], Dover, New York (1960).

[3] Ishimaru, A., [Wave propagation and scattering in random media], Academic Press, New York (1978).

[4] Ishimaru, A., "Pulse propagation, scattering, and diffusion in scatterers and turbulence," Radio Science 14(2), 269-276 (1979).

[5] Mobley, C. D., [Light and water: Radiative transfer in natural waters], Academic Press, San Diego (1994).

[6] McGlamery, B. L., "A computer model for underwater camera systems," Ocean Optics 208(208) (1979).

[7] Jaffe, J. S., "Computer Modeling and the Design of Optimal Underwater Imaging Systems," IEEE Journal of Oceanic Engineering 15(2), 101-111 (1990).

[8] Wells, W. H., "Loss of Resolution in Water as a Result of Multiple Small-Angle Scattering," Journal of the Optical Society of America 59(6), 686 (1969).

[9] Jaffe, J. S., "Monte Carlo modeling of underwater-image formation: validity of the linear and small-angle approximations," Applied Optics 34(24), 5413 (1995).

[10] Schechner, Y. Y. and Karpel, N., "Clear Underwater Vision," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 1 (2004).

[11] Hou, W., Gray, D. J. and Weidemann, A. D., "Automated underwater image restoration and retrieval of related optical properties," IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2007), 1889-1892 (2007).

[12] He, K., Sun, J. and Tang, X., "Single Image Haze Removal Using Dark Channel Prior," IEEE Transactions on Pattern Analysis and Machine Intelligence 33 (2011).

[13] Chiang, J. Y. and Chen, Y.-C., "Underwater Image Enhancement by Wavelength Compensation and Dehazing," IEEE Transactions on Image Processing 21(4), 1756-1769 (2012).

Figure 6: Original images (left), distance estimations (middle) and restored results (right). All these images are restored with the same parameter set.
