Richardson–Lucy Algorithm With Total Variation Regularization for 3D Confocal Microscope Deconvolution


The "Princeton Algorithm" (Dijkstra's Algorithm)

The algorithm described here under the name "Princeton algorithm" is in fact the classic algorithm for the single-source shortest path problem, better known as Dijkstra's algorithm.

It is a greedy algorithm that incrementally builds a shortest-path tree: starting from the source node, it repeatedly selects the unvisited node with the smallest tentative distance and updates the distances from that node to its neighbors.

By repeatedly choosing the next node on a shortest path, it eventually obtains the shortest paths from the source to every node.

The basic steps of Dijkstra's algorithm are as follows:
1. Create a distance list `distances`, holding the shortest known distance from the start node to every node, initialized to infinity (meaning no known path).

2. Create a predecessor list `predecessors`, holding each node's predecessor on the path, initialized to None.

3. Set the distance of the start node to 0, i.e. distances[start_node] = 0.

4. Select the unvisited node with the smallest distance as the current node.

5. Update the distances from the current node to its neighbors: if a new distance is smaller than the stored one, update both the distance and the predecessor.

6. Mark the current node as visited.

7. Repeat steps 4–6 until all nodes have been visited.

8. Reconstruct the shortest paths from `distances` and `predecessors`.

This basic implementation of Dijkstra's algorithm runs in O(V²) time, where V is the number of nodes.

It is adequate for graphs with a moderate number of nodes, but performance degrades when the graph is very large.

To improve efficiency, the heap-optimized variant of Dijkstra's algorithm uses a priority queue to select the node with the smallest tentative distance, reducing the time complexity to O((V+E) log V), where E is the number of edges.
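The heap-optimized variant described above can be sketched in Python; the adjacency-list graph representation (a dict mapping each node to `(neighbor, weight)` pairs) is an assumption made for illustration:

```python
import heapq

def dijkstra(graph, start):
    """Heap-optimized Dijkstra. graph maps node -> list of (neighbor, weight)."""
    distances = {node: float("inf") for node in graph}
    predecessors = {node: None for node in graph}
    distances[start] = 0
    heap = [(0, start)]  # priority queue of (tentative distance, node)
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue  # stale queue entry; a shorter path was already found
        visited.add(node)
        for neighbor, weight in graph[node]:
            new_d = d + weight
            if new_d < distances[neighbor]:  # relaxation (step 5 above)
                distances[neighbor] = new_d
                predecessors[neighbor] = node
                heapq.heappush(heap, (new_d, neighbor))
    return distances, predecessors

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
dist, pred = dijkstra(graph, "A")
print(dist)  # {'A': 0, 'B': 1, 'C': 3}
```

Instead of deleting outdated queue entries, this sketch simply skips nodes that were already visited, which keeps the implementation short at the cost of some stale entries in the heap.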

The Richardson Formula


Abstract: an overview of the Richardson formula and its applications — 1. Introduction to the Richardson formula; 2. Derivation and meaning; 3. Application scenarios and examples; 4. Comparison with other predictive models; 5. Limitations and directions for improvement.

The Richardson formula, as presented here, is a statistical model for predicting the probability that a random variable is greater than or equal to some threshold.

It is widely used in probability theory, statistics, and a variety of practical fields.

This article introduces the Richardson formula in detail, covering its derivation, meaning, application scenarios, and its limitations and directions for improvement.

1. Introduction. The Richardson formula, also called the Richardson interpolation formula, is attributed to the English mathematician Lewis Fry Richardson in 1910.

The formula is:

P(X ≥ k) = 1 / (1 + b · (log k − log θ))

where P(X ≥ k) is the probability that the random variable X is greater than or equal to the threshold k; θ is the smallest threshold, i.e. P(X ≥ θ) = 1; and b is a scale coefficient that can be obtained by fitting data.

2. Derivation and meaning. The derivation of the Richardson formula is fairly involved and is not reproduced here.

From the formula itself, however, one can see a certain resemblance to the logistic function.

The logistic (sigmoid) function is commonly used in probability to describe how a probability changes near a threshold.

The Richardson formula generalizes this kind of probability change to an arbitrary threshold, so it can be used to predict the probability that a random variable is greater than or equal to that threshold.

3. Applications and an example. The Richardson formula is applied widely in fields such as meteorology, biology, and finance.

A meteorological example: suppose we want to predict the probability of rainfall in some region over the coming week.

We first collect historical data and compute the rainfall probabilities corresponding to various thresholds (1 mm, 5 mm, 10 mm, and so on).

We can then use the Richardson formula, together with current meteorological data, to predict the probability of exceeding each rainfall threshold over the coming week.

4. Comparison with other predictive models. The Richardson formula has certain limitations: it places high demands on data quality, and its predictive accuracy is limited.
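Evaluating the formula exactly as stated above is straightforward; in this sketch the parameter values (b = 0.8, θ = 1 mm) are invented for illustration and would in practice be obtained by fitting historical data:

```python
import math

def richardson_p(k, b, theta):
    """P(X >= k) = 1 / (1 + b * (log(k) - log(theta))), as given in the text."""
    return 1.0 / (1.0 + b * (math.log(k) - math.log(theta)))

# Hypothetical parameters: theta = 1 mm (so P(X >= 1) = 1); b fitted elsewhere.
b, theta = 0.8, 1.0
for k in (1, 5, 10):  # rainfall thresholds in mm
    print(k, round(richardson_p(k, b, theta), 3))
```

Note the formula as stated is only meaningful for k ≥ θ (for k < θ the denominator can drop below 1 and the "probability" exceeds 1), which is consistent with θ being described as the smallest threshold.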

Introduction_to_Algorithms


Bounding Summations

Part One: Foundations

When studying this part, we suggest that readers need not digest all of the mathematical material at once. Skim the chapters of this part to see what they contain, then go directly to the chapters that focus on algorithms. While reading those chapters, return to this part whenever a better understanding of the mathematical tools used in algorithm analysis is needed. Of course, readers may also work through these chapters in order, to master the relevant mathematical techniques. The chapters of this part are organized as follows:
- Chapter 1 introduces the framework for algorithm analysis and design used throughout the book.
- Chapter 2 precisely defines several asymptotic notations; its purpose is to align the reader's notation with that of the book, not to introduce new mathematical concepts.
- Chapter 3 gives methods for evaluating and bounding summations, which arise frequently in algorithm analysis.
- Chapter 4 presents several methods for solving recurrences. These methods were already used in Chapter 1 to analyze merge sort, and will be used many times again. One effective technique is the "master method," which can be used to solve the recurrences arising in divide-and-conquer algorithms. Much of Chapter 4 is devoted to proving the correctness of the master method; readers not interested in the proof may skip it.
- Chapter 5 contains basic definitions and notation for sets, relations, functions, graphs, and trees, along with some basic properties of these mathematical objects. Readers who have taken a course in discrete mathematics may skip this material.
- Chapter 6 begins with the basic principles of counting: permutations, combinations, and so on. The rest of the chapter contains definitions and properties of basic probability.

Contents of the summations chapter: summation formulas and properties (linearity; arithmetic series; geometric series; harmonic series; integrating and differentiating series; telescoping series; products); bounding summations (mathematical induction; bounding the terms; splitting summations; approximation by integrals).

Translations of Foreign Textbooks on Mathematical Methods in Physics

The following are some common foreign textbooks on mathematical methods in physics that have Chinese translations:
1. "Mathematical Methods in the Physical Sciences" by Mary L. Boas
2. "Mathematical Methods for Physicists" by George B. Arfken and Hans J. Weber
3. "Mathematical Physics" by Robert G. Brown
4. "Advanced Mathematical Methods for Scientists and Engineers" by Carl M. Bender and Steven A. Orszag
5. "Mathematical Methods for Physics and Engineering" by Ken Riley, Michael Hobson, and Stephen Bence

These translations provide detailed explanations of mathematical methods together with worked examples, and are suitable for learners from the introductory through the advanced level.

They typically contain a large amount of mathematical theory and technique, along with concrete applications to physics.

A Novel Richardson–Lucy-Based Image Deblurring Algorithm

[The English abstract of this Chinese paper is heavily garbled by OCR; the recoverable information is summarized below.]

ZHAO Bo et al. Novel image deblurring algorithm based on Richardson–Lucy. The authors discuss the algorithm's parameters and choose reasonable settings to achieve the best results; experiments demonstrate notable improvements on noisy blurred images using the method.

From the Chinese abstract: the method effectively suppresses ringing artifacts while preserving image detail well, and it also restores blurred images containing noise well. Keywords: image restoration; deblurring; Richardson–Lucy (RL) algorithm; gain map.

Computational Thinking and Practical Technology

Computational Thinking and Practical Technology is a textbook for computer science programs that introduces the techniques and ideas of using computer science to solve practical problems.

Written by computer science professors and machine learning experts Charles Isbell and Michael Littman, the book covers modern computing technologies and algorithms, together with practical methods for analyzing data using search engines, Google tools, and the like. Its stated aim is to help practitioners and learners apply computer science, step by step, to real problems.

The book has three main parts: concepts and principles, everyday technologies, and advanced technologies.

The concepts-and-principles part surveys the foundations of modern computing, touching on encryption, machine learning, and search engines. The everyday-technologies part introduces practical programming languages and development tools, programming patterns, and data mining. The advanced-technologies part covers network-oriented server architectures, big-data processing, and machine learning and deep learning.

In addition, the book includes hands-on material that helps readers understand and put these ideas into practice, improving their ability to apply computer science to a wide range of problems.


Newton's Method

Newton's method, also known as the Newton–Raphson method, is a method for approximately solving equations over the real and complex numbers.

The method uses the leading terms of a Taylor series to find a root of an equation.

Origins: Newton's method was first described by Isaac Newton in Method of Fluxions (completed in 1671, and published in 1736 after Newton's death).

Joseph Raphson also described the method, in Analysis aequationum universalis in 1690.

Finding extrema: Newton's method is also used to find the extrema of a function.

Since the derivative is zero at an extremum, Newton's method can be applied to find the zeros of the derivative. Finding extrema and finding roots are essentially the same problem: to find an extremum one looks for a point where the derivative is 0, and that is precisely solving an equation.

In this article we present Newton's method for finding extrema.

In SIGAI's earlier articles on optimization, "Understanding gradient descent" and "Understanding convex optimization," we introduced the basic concepts and principles of optimization and the idea of iterative methods; readers unfamiliar with these ideas should read those two articles first.

Like gradient descent, Newton's method looks for points where the derivative is 0, and it too is an iterative method.

Its core idea is to approximate the objective function near the current point with a quadratic function, derive the equation that sets the derivative of this quadratic to 0, and solve that equation to obtain the next iterate.

Because the quadratic is only an approximation, error remains, so the iteration is repeated until a point with zero derivative is reached.

Below we work through the derivation, first for a function of one variable and then generalized to several variables.

The complete flow of the algorithm is as follows (the formulas were lost from the source; this is a standard reconstruction of the quasi-Newton flow the text describes):
1. Given an initial value x₀ and a precision threshold ε, set k = 0.
2. Determine the search direction dₖ (for Newton's method, dₖ = −Hₖ⁻¹gₖ, where gₖ and Hₖ are the gradient and Hessian at xₖ).
3. Perform a line search to obtain a step size λₖ.
4. If the stopping criterion is met (for example, ‖gₖ‖ < ε), the iteration ends.
5. Compute the new gradient.
6. Compute (or update) the Hessian or its approximation.
7. Set xₖ₊₁ = xₖ + λₖdₖ and k ← k + 1, then return to step 2.

Each iteration must compute and store an n × n matrix; when n is large, storing this matrix is very costly in memory.

The improved scheme L-BFGS was proposed for this reason: rather than storing the full matrix, it stores only a small number of vectors.
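In one dimension the core idea above (quadratic approximation, solve for zero derivative) reduces to the update x ← x − f′(x)/f″(x). A minimal sketch, with a hypothetical objective chosen for illustration:

```python
def newton_minimize(f_prime, f_double_prime, x0, eps=1e-8, max_iter=100):
    """Find a point where f'(x) = 0 by Newton iteration x <- x - f'(x)/f''(x)."""
    x = x0
    for _ in range(max_iter):
        g = f_prime(x)
        if abs(g) < eps:  # derivative (nearly) zero: stationary point reached
            break
        x = x - g / f_double_prime(x)
    return x

# Example: f(x) = x^4 - 3x^3 + 2, so f'(x) = 4x^3 - 9x^2, f''(x) = 12x^2 - 18x.
x_star = newton_minimize(lambda x: 4 * x**3 - 9 * x**2,
                         lambda x: 12 * x**2 - 18 * x,
                         x0=3.0)
print(x_star)  # converges to 9/4 = 2.25, the minimizer
```

As the text notes, the iteration only finds a point with zero derivative; whether that point is a minimum still depends on the starting point and the sign of the second derivative there.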

Image Restoration Based on the Lucy–Richardson Algorithm

Restoration of real-scene images. I. Significance and purpose of the design.

Significance: image restoration is an important topic in digital image processing.

Its main purpose is to improve the quality of a given image and to recover the original image as far as possible.

During formation, transmission, and recording, images are affected by many factors and their quality degrades to varying degrees, typically as blur, distortion, or noise; this quality-loss process is called image degradation.

The purpose of image restoration is to recover, as far as possible, the original appearance of the degraded image.

In an imaging system, degradation has many causes.

For example: defocus of the imaging system, relative motion between the imaging device and the object, inherent defects of the imaging equipment, and external interference.

Motion of the imaged target produces motion blur in the captured image.

When taking photographs, shake of a hand-held camera leaves the scene on the picture blurred.

Light scattering within the imaging system also blurs the image.

Further, nonlinearity of the sensor characteristics and aberrations of the optical system cause the image to differ from the original scene; this is called distortion.

On top of this, various environmental factors introduce noise into the image.

The human visual system is more sensitive to noise than the auditory system: noise in sound transmission lowers quality but often goes unnoticed.

Even small noise in a scene image, however, is easily perceived by the sharp-eyed visual system.

Image restoration is the process of returning the image to its true appearance, i.e. recovering, from the degraded image, an image that faithfully reflects the scene.

Purpose: the purpose of image restoration is likewise to improve image quality.

Restoration can be viewed as the inverse of degradation: the degradation process is estimated, a mathematical model of it is built, and the distortion it caused is compensated, so as to obtain the undegraded original image, or an optimal estimate of it, thereby improving image quality.

Image restoration is built on a mathematical model of the degradation, and it seeks the optimal estimate of the original image under some optimality criterion; different criteria yield different restorations, and restoration quality is usually judged by a prescribed objective criterion. The main tasks of image restoration are therefore to build the mathematical model of the inverse process and to determine the point spread function responsible for the degradation.

II. Design principles. 1. Image degradation: during acquisition, a digital image degrades to some extent owing to optical aberrations, diffraction in optical imaging, nonlinear distortion of the imaging system, relative motion during imaging, random environmental noise, and so on.

The Richardson–Lucy Formula

The Richardson–Lucy formula (Richardson–Lucy algorithm) is an iterative maximum-likelihood deconvolution algorithm widely used in image processing for image restoration.

1. Background. Image restoration is one of the important problems in digital image processing: recovering the original image from a degraded one.

Image degradation is the loss of image quality, such as blur and noise, caused by the optical system, the camera sensor, and so on.

Blind restoration refers to restoring an image without knowledge of the degradation process; the standard Richardson–Lucy algorithm, by contrast, assumes a known point spread function, although blind variants exist.

The Richardson–Lucy formula is one of the most widely applied methods in this field.

2. Principle. The Richardson–Lucy formula is an iterative algorithm: by repeatedly updating the estimate of the image, it makes the estimate approach the original image step by step.

The standard update rule is:

\[ \hat{f}^{(k+1)}(x) = \hat{f}^{(k)}(x) \cdot \left[ \frac{g}{\hat{f}^{(k)} * h} * \check{h} \right](x), \qquad \check{h}(x) = h(-x) \]

where \( \hat{f}^{(k)}(x) \) is the image estimate after the k-th iteration, g and h denote the degraded image and the point spread function respectively, \(*\) denotes convolution, and x is the pixel coordinate.

3. Algorithm flow. The flow of the Richardson–Lucy algorithm is as follows:
- Initialization: set the estimate \( \hat{f}^{(0)} \) to the degraded image g, and choose the number of iterations k and a convergence threshold ε;
- Iterative update: apply the update rule for k iterations, or until the convergence condition is satisfied;
- Output: return the final estimate \( \hat{f}^{(k)} \) as the restored image.
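The flow above can be sketched in 1D with NumPy; convolving with the flipped kernel implements the \( h(-x) \) term, and the spike-train signal and small PSF are invented for illustration:

```python
import numpy as np

def richardson_lucy(g, h, iterations=50, eps=1e-12):
    """Standard RL deconvolution: f <- f * ((g / (f*h)) convolved with flipped h)."""
    f = np.full_like(g, g.mean())  # flat nonnegative initial estimate
    h_flipped = h[::-1]            # convolution with h(-x) == correlation with h
    for _ in range(iterations):
        blurred = np.convolve(f, h, mode="same")
        ratio = g / np.maximum(blurred, eps)  # eps guards against division by zero
        f = f * np.convolve(ratio, h_flipped, mode="same")
    return f

# Toy example: blur a spike train with a small symmetric PSF, then deconvolve.
h = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
f_true = np.zeros(32)
f_true[10] = 1.0
f_true[20] = 0.5
g = np.convolve(f_true, h, mode="same")
f_est = richardson_lucy(g, h, iterations=200)
print(np.argmax(f_est))  # the dominant spike is recovered near index 10
```

Because the update is multiplicative and all quantities are nonnegative, the estimate stays nonnegative throughout, which is one of the properties that makes RL attractive for photon-count data.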

4. Characteristics. Advantages: the method applies to a variety of degradation models, is fairly robust, and is well adapted to Poisson noise. Disadvantage: it places strong demands on knowledge of the degradation process; the point spread function must be modeled in advance (and, without regularization, the iteration amplifies noise).

5. Application areas. The Richardson–Lucy formula is widely applied in astronomy, medical imaging, and other fields.

Foreign Computer Science Textbook Series

There are many foreign computer science textbook series; listed here are some widely used and highly regarded ones, covering various areas and levels of computer science:

1. **Introduction to Computer Science series** — Authors: Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein. Represented chiefly by Introduction to Algorithms, a classic of the field covering algorithms, data structures, computational complexity, and other core concepts.

2. **Computer Organization and Design series** — Authors: David A. Patterson, John L. Hennessy. Multiple editions covering computer architecture, digital logic, and instruction set architecture; suited to hardware and low-level programming.

3. **Artificial Intelligence: A Modern Approach series** — Authors: Stuart Russell, Peter Norvig. The core book, Artificial Intelligence: A Modern Approach, covers the fundamental concepts, algorithms, and methods of AI.

4. **Computer Graphics series** — Authors: James D. Foley, Andries van Dam, Steven K. Feiner, John F. Hughes. Multiple editions covering the basic principles, algorithms, and techniques of computer graphics.

5. **Operating System Concepts series** — Authors: Abraham Silberschatz, Peter B. Galvin, Greg Gagne. Introduces the basic concepts, design, and implementation of operating systems; one of the classic texts for operating systems courses.


Richardson–Lucy Algorithm With Total Variation Regularization for 3D Confocal Microscope Deconvolution

NICOLAS DEY,¹ LAURE BLANC-FÉRAUD,¹* CHRISTOPHE ZIMMER,² PASCAL ROUX,³ ZVI KAM,⁴ JEAN-CHRISTOPHE OLIVO-MARIN,² AND JOSIANE ZERUBIA¹
¹Ariana Group, INRIA/I3S, 2004 route des Lucioles – BP 93, 06902 Sophia Antipolis, France
²Quantitative Image Analysis Group, Institut Pasteur, 25–28 rue du Dr. Roux, 75015 Paris, France
³Dynamic Imaging Platform, Institut Pasteur, 25–28 rue du Dr. Roux, 75015 Paris, France
⁴Department of Molecular Cell Biology, Weizmann Institute of Science, Rehovot, Israel 76100

KEY WORDS: image deconvolution; total variation regularization; Poisson noise; fluorescence confocal microscopy

ABSTRACT: Confocal laser scanning microscopy is a powerful and popular technique for 3D imaging of biological specimens. Although confocal microscopy images are much sharper than standard epifluorescence ones, they are still degraded by residual out-of-focus light and by Poisson noise due to photon-limited detection. Several deconvolution methods have been proposed to reduce these degradations, including the Richardson–Lucy iterative algorithm, which computes maximum likelihood estimation adapted to Poisson statistics. As this algorithm tends to amplify noise, regularization constraints based on some prior knowledge on the data have to be applied to stabilize the solution. Here, we propose to combine the Richardson–Lucy algorithm with a regularization constraint based on Total Variation, which suppresses unstable oscillations while preserving object edges. We show on simulated and real images that this constraint improves the deconvolution results as compared with the unregularized Richardson–Lucy algorithm, both visually and quantitatively. Microsc. Res. Tech. 69:260–266, 2006. © 2006 Wiley-Liss, Inc.

INTRODUCTION

The confocal laser scanning microscope (CLSM) is an optical fluorescence microscope that uses a laser to scan and image point-by-point a sample in 3D, and where a pinhole is used to reject most out-of-focus light. The
ability of the CLSM to image optical sections explains its widespread use in biological research. Nevertheless, the quality of confocal microscopy images still suffers from two basic physical limitations. First, out-of-focus blur due to the diffraction-limited nature of the optics remains substantial, even though it is reduced compared to epifluorescence microscopy. Second, the confocal pinhole drastically reduces the amount of light detected by the photomultiplier, leading to Poisson noise. The images produced by the CLSM can therefore benefit from postprocessing by deconvolution methods designed to reduce blur and noise. Several deconvolution methods have been proposed for 3D microscopy. In wide-field microscopy, Agard and Sedat (1983) and many others achieved out-of-focus deblurring by constrained iterative deconvolution. For confocal images, Tikhonov–Miller inverse filter (van Kempen et al., 1997), Carrington (van Kempen and van Vliet, 2000a) and Richardson–Lucy (RL) algorithms (Lucy, 1974; Richardson, 1972) have also been applied. The latter has been used extensively in astrophysical or microscopic imaging (van Kempen et al., 1997), and is of particular interest for confocal microscopy, because it is adapted to Poisson noise. An important drawback of RL deconvolution, however, is that it does not converge to the solution because the noise is amplified after a small number of iterations. This sensitivity to noise can be avoided with the help of regularization constraints, leading to much improved results.
van Kempen et al. have applied a RL algorithm using energy-based regularization to biological images (van Kempen and van Vliet, 2000a). The term based on the Tikhonov–Miller expression suppresses noise amplification, but it has the drawback to smooth edges. In this study, we propose to use the total variation (TV) as a regularization term. The main advantages of the TV are that it preserves the edges in the image and smoothes homogeneous areas. It has been proposed for 2D optical gray level image restoration in Rudin et al. (1992). This kind of nonlinear image restoration has been extensively used in image processing inverse problems for the last 10 years, and has generated mathematical studies on the Bounded Variation (BV) function space (Ambrosio et al., 2000). In this study, we propose to apply this nonlinear regularization to the deconvolution of 3D confocal microscopy images. Considering the statistical Poisson nature of the noise, this leads to a RL algorithm with nonlinear regularization. The paper is organized as follows. The first section presents the image formation model, by describing the point spread function and the noise distribution. In the second section, we first recall the RL algorithm in

[*Correspondence to: Laure Blanc-Féraud, Ariana Group, INRIA/I3S, 2004 route des Lucioles – BP 93, 06902 Sophia Antipolis, France. E-mail: Laure.Blanc_Feraud@sophia.inria.fr. Contract grant sponsors: INRIA ARC DéMiTri, P2R Franco-Israeli Program.]
[Received 28 July 2005; accepted in revised form 13 October 2005. DOI 10.1002/jemt.20294. Published online in Wiley InterScience.]

the context of microscope imaging, and then describe the nonlinear TV regularized RL algorithm. Next, we present the results of the proposed algorithm on synthetic and real data, and compare them with the standard algorithm without regularization, both in terms of visual image quality and a numerical criterion. The last section concludes the paper.

IMAGE FORMATION MODEL

The image formation can be modeled with the Point Spread Function (PSF) and the statistical distribution of the detection noise. Such a general model can be written as:

\[ i = \mathcal{P}(o * h) \tag{1} \]

where i is the observed (measured) image, o is the object we want to retrieve (corresponding to the distribution of fluorescent markers inside the specimen), h is the PSF, and \(\mathcal{P}\) models the noise distribution.

Noise

In a confocal microscope, a pinhole is used to reject most out-of-focus light. The amount of light reaching the detector is therefore low, and the noise statistics is well described by a Poisson process. The distribution of i at point s, i(s), knowing (o ∗ h)(s), is a Poisson distribution of the parameter (o ∗ h)(s):

\[ P\big(i(s) \mid (o*h)(s)\big) = \frac{[(o*h)(s)]^{\,i(s)}\, e^{-(o*h)(s)}}{i(s)!} \tag{2} \]

Assuming that the noise is spatially uncorrelated, the statistics of the observed image i, knowing the object o (and the PSF h, which is considered here as a parameter), is the likelihood distribution given by:

\[ P(i \mid o) = \prod_{s \in S} \frac{[(o*h)(s)]^{\,i(s)}\, e^{-(o*h)(s)}}{i(s)!} \tag{3} \]

where s is the coordinate of a 3D voxel and S is the total set of voxel coordinates in the imaged volume.

Confocal Point Spread Function

Under the assumptions of a translation invariant PSF, incoherent imaging, and monochromatic light (excitation and emission), we use a well-known image formation model of the CLSM, which has been proposed by van Kempen and van Vliet (1999), and is derived from Sheppard and Cogswell (1990). For a defocusing Z, the 3D
PSF (Sheppard and Cogswell, 1990) is given by:

\[ h(X,Y,Z) = \big| A_R(X,Y) * \hat{P}_{\lambda_{em}}(X,Y,Z) \big|^2 \times \big| \hat{P}_{\lambda_{ex}}(X,Y,Z) \big|^2 \tag{4} \]

where \(\lambda_{ex}\) and \(\lambda_{em}\) are the excitation and emission wavelengths, respectively. The pinhole (Born and Wolf, 1999; Goodman, 1996) is represented by \(A_R(X,Y)\), and \(\hat{P}_\lambda\) is the 2D Fourier transform of the pupil function \(P_\lambda\) given by:

\[ P_\lambda(U,V,Z) = P\!\left(\sqrt{U^2+V^2}\right) e^{\frac{2 i \pi}{\lambda} W(U,V,Z)} \tag{5} \]

Here \(W(U,V,Z) = \tfrac{1}{2} Z (1 - \cos 2\alpha)\) is the aberration phase derived from Goodman (1996). It gives a measure of the wavefront path difference between a focused and a defocused beam (Born and Wolf, 1999; Stokseth, 1969). \(\rho = \frac{NA}{2\lambda}\) is the lateral cutoff frequency, and \(NA = n_o \sin\alpha\) the numerical aperture, which is related to the cone of light collected by the objective and the immersion medium refractive index \(n_o\). The model takes into account the finite size of the pinhole, which is used for the image acquisition. This PSF model will be used both for the creation of the simulated images and for the deconvolution.

In van Kempen and van Vliet (1999) and van Kempen and van Vliet (2000a), the authors use an image formation model for confocal microscopy which is slightly different from Eq. (1): namely \(i = \mathcal{P}(o * h + b)\), where b models the background signal. In a previous work (Dey et al., 2004a,b), we showed that the use of a background term in the model is not necessary if a well suited regularization term is included. In fact, it happens that using a background model is equivalent to imposing a support constraint to regularize the solution. We also showed that when using a regularization term, no improvement is reached by adding the background term, and that it is therefore not needed.

CONFOCAL MICROSCOPY IMAGE DECONVOLUTION

The Multiplicative RL Algorithm

The RL algorithm finds o from the observation i, knowing the PSF h, by maximizing the likelihood distribution (Eq. (3)) with respect to o. The RL algorithm can be derived by using the Expectation–Maximization (EM) algorithm (Dempster et al., 1977), or by minimizing the functional \(-\log p(i|o)\), using a multiplicative type algorithm. Let us briefly describe the second approach. Minimizing \(-\log p(i|o)\) is equivalent to minimizing \(J_1(o)\) given by:

\[ J_1(o) = \sum_s \big( -i(s)\log[(o*h)(s)] + (o*h)(s) \big) \tag{6} \]

Since the functional \(J_1(o)\) is convex in o, searching for a minimum is equivalent to searching for a zero of the gradient of \(J_1(o)\), which results in solving:

\[ \frac{i(s)}{(o*h)(s)} * h(-s) = 1 \tag{7} \]

Let \(o^k(s)\) be the estimate of o at iteration k. Then one iteration of a multiplicative gradient type algorithm (for more details see Lucy (1974), Richardson (1972), or Appendix C of Dey et al. (2004a)) is given by:

\[ o^{k+1}(s) = \left[ \frac{i(s)}{(o^k*h)(s)} * h(-s) \right] \cdot o^k(s) \tag{8} \]
the edges are less smoothed.However,the trade-off between good regularization with a good reduction of the noise,and edge preservation still exists.This compromise can be improved by using wavelet coeffi-cient thresholding,as proposed by de Monvel et al. (2003).Here,we introduce an other a priori constraint on the object by adding to the energy J1,a regularization term J reg,defined by the TV of the solution o as in Rudin et al.(1992):J regðoÞ¼k TVXsjr oðsÞjð9ÞThe total functional to be minimized is then:J1ðoÞþJ regðoÞ¼XsðÀiðsÞlog½ðoÃhÞðsÞþðoÃhÞðsÞÞþk TVXs jr oðsÞjð10ÞUsing the L1norm over!o rather than the L2norm as in TM regularization(van Kempen and van Vliet, 2000b)allows to recover a smooth solution with sharp edges.It can be shown(Rudin et al.,1992)that the smoothing process introduced by J reg acts only in the direction tangential to the image level lines and not in the orthogonal direction,so that edges are preserved. The derivative of J reg with respect to o is a nonlinearterm@@o J reg¼Àk TV divr ojr o jwhere div stands for thedivergence.We minimize Eq.(10)using the multiplica-tive gradient-based algorithm(or equivalently by using EM algorithm for the penalized criterion of Eq.(10)),and we adopt an explicit scheme,as in Green (1990),defined by:o kþ1ðsÞ¼iðsÞðo kÃhÞðsÞÃhðÀsÞ3o kðsÞ1Àk TV div r okðsÞjr okðsÞjð11ÞNumerically we noticed that the regularization param-eter k TV should be neither too small nor too large:if k TV<10À6,RL is dominated by the data model;if k TV $1,RL is dominated by the regularization term.For larger k TV,the denominator of Eq.(8)can become zero or negative.This must be avoided because small de-nominators create points of very high intensity,which are amplified at each iteration.A negative value vio-lates the nonnegativity constraint.We use a param-eter k TV¼0.002for computations,and iterations are stopped at convergence,defined when the relative dif-ference between two images is less than a threshold (10À5in the experiments presented 
below).

RESULTS

Evaluation Criteria

In the following, we report results on three synthetic (simulated) objects and two types of real specimens. To quantify the quality of the deconvolution we used the I-divergence criterion, which is the best measure in the presence of a nonnegativity constraint (Csiszár, 1991). The I-divergence between two 3D images A and B is defined by:

\[ I_{A,B} = \sum_{s \in S} \left[ A_s \ln\frac{A_s}{B_s} - (A_s - B_s) \right] \tag{12} \]

The I-divergence is nonsymmetric, i.e., \(I_{A,B} \neq I_{B,A}\). In the experiments with synthetic data, image A is the known undegraded initial image (the one we want to retrieve) and image B is the restored one. The I-divergence then quantifies the difference between the ideal image and the reconstructed one. Ideally, a perfect deconvolution should end with I-divergence equal to zero. This numerical criterion cannot be computed on results from real data because the undegraded object is not known. In order to display the visual quality, we present cross sections of the objects before and after deconvolution with the RL algorithm and with the TV regularized RL algorithm.

Results on Synthetic Data

We show results on three different synthetic objects, shown in Figures 1–3 (column a). These objects are artificially blurred and corrupted by Poisson noise using the image formation model described in the section of image formation model. The resulting degraded images are shown in Figures 1–3 (column b). Deconvolution results of the degraded images using standard RL are shown in Figures 1–3 (column c). For standard RL, we used the number of iterations that achieves minimum I-divergence. The deconvolution results
and axial views there is still some blur,and intensity oscillation artifacts are clearly apparent.The RL–TV algorithm greatly improves the results (Fig.1d):the cylinder is sharp (the edges are sharp in lateral and axial views)and there are no intensity oscillations (see the corresponding in-tensity profile in Figure 6a).The axial view shows that the heights of the deconvolved cylinder (Fig.1d)and of the initial cylinder (Fig.1a)are similar.The final I-divergence is 0.766for standard RL and 0.220for RL–TV ,resulting in a more than 3-fold improvement.Figure 2a presents another 3D synthetic image com-posed of five geometrical shapes.These are more com-plex objects with corners and homogeneous regions of different intensity levels.The deconvolution result,with standard RL,is shown in Figure 2c.The different shapes are still blurred and appear thinner than before degradations.Many intensity oscillations appear inside the shapes.All of these artifacts are avoided when using the RL–TV algorithm (Fig.2d):all shapes are sharp,the thickness is preserved,and there are no more intensity oscillations (see the intensity profile in Fig.6b).The cross-section displays a slight effect of rounding corners.In the axial view,the borders are not perfectly sharp.Nevertheless,the final I-divergence is 0.691for the pro-posed method,which represents nearly a 2-fold im-provement compared with standard RL (1.365).In Figure 3a,we present a 3D synthetic object com-bining texture and fine structures.The object is inho-mogeneous and the fine structure exhibits sharp tran-sitions.The degradation (Fig.3b)is sufficiently strong to almost hide these details.The deconvolution ob-tained with standard RL still shows strong axial blur.With RL–TV ,this blur is almost completely removed,and the fine structures are discernable again.However,a drawback of TV regularization is apparent on the XY section:a stair-casing effect.It is due to thenonderiv-Fig.1.Deconvolution of a binary cylinder.(a )the original 3D 
syn-thetic image;(b )the same image,degraded by blur and Poisson noise according to the image formation model;(c )deconvolution by stand-ard RL (I-divergence is 0.766);(d )deconvolution by RL–TV with k TV ¼0.002(I-divergence is 0.220).The top row shows the lateral (XY )section through the center of the image stack.The bottom row shows the axial (YZ )section corresponding to the dotted line shown in (a).The image size is 1283128364with voxel size 30330350nm in X ,Y ,and Z.Fig.2.Deconvolution of a composite image.(a )original image;(b )degraded by blur and Poisson noise;(c )deconvolved with standard RL;(d )deconvolved with RL–TV.The objects exhibit different inten-sities:255for the cylinder,221for the annulus,170for the cross,102for the triangle,238and 102for the two bars,10for the background.Final I-divergence is 1.365for standard RL (c )and 0.691for RL–TV with k TV ¼0.002(d ).The size of the stack is 1283128364with voxel size 30330350nm in X ,Y ,and Z .263TOTAL VARIATION BASED DECONVOLUTIONability of the absolute value in zero,and appears mainly in parts of the image which contain oscillations of small amplitudes,as textures.The intensity profiles in Figure 6c do not show significant differences be-tween standard RL and RL–TV results.Final I-diver-gence is 0.462for standard RL and 0.404for RL–TV ,which is an improvement by 12.6%.Results on Real DataTo evaluate the performances of our method on real data,we chose two types of objects,beads and shells,whose size and shape are well characterized and which can therefore be used as bona fide references.Thebeads are FocalCheck TM F-24634(Molecular Probes)fluorescent microspheres,with a diameter of 6l m.The shells are 15l m in diameter,with a fluorescent layer of thickness between 0.5and 0.7l m.The fluorescence excitation wavelength is k ex ¼488nm;the emission wavelength is k em ¼520nm;the pinhole size was set to 1Airy unit;in object space,the XY dimension of each voxel was 89nm and the Z dimension 230nm.The microscope 
was a Zeiss LSM 510, equipped with an immersion oil Apochromat 63× objective with numerical aperture NA = 1.4, and an internal magnification of 3.3×. Image acquisition was performed using the Zeiss Vision software.

Fig. 3. Deconvolution of a textured image with fine structure. (a) original image; (b) degraded by blur and Poisson noise; (c) deconvolved with standard RL; (d) deconvolved with RL–TV. Final I-divergence is 0.462 for standard RL (c) and 0.404 for RL–TV with λTV = 0.002 (d). Note that axial blur is much reduced with RL–TV compared to RL. The image size is 128 × 128 × 32.

Fig. 4. Deconvolution of a spherical shell image. (a) shows the raw image, (b) the result of standard RL deconvolution, (c) the result of RL–TV deconvolution, with λTV = 0.002. RL–TV deconvolution allows a better estimate of the shell thickness than RL (see text). The image size is 256 × 256 × 128 with voxel size 89 × 89 × 230 nm in X, Y, and Z. First row represents the central XY image, and second row the central axial section of this stack corresponding to the dotted line. There is no resampling to obtain the same scale in X, Y, and Z, only a resizing of the image.

Fig. 5. Deconvolution of a bead cluster image. (a) original image; (b) deconvolved with standard RL; (c) deconvolved with RL–TV. The result of RL deconvolution (b) presents a high level of noise and the axial view shows oscillation artifacts. These degradations are much reduced with RL–TV deconvolution (λTV = 0.005). The stack size is 256 × 256 × 128 with voxels of size 89 × 89 × 230 nm.

Fig. 6. Intensity profiles of four test images. In each panel, the dashed curve corresponds to images deconvolved with standard RL, and the solid curve corresponds to images deconvolved with RL–TV. (a) intensity profile of the synthetic cylinder image (Fig. 1); (b) profile of the synthetic composite image (Fig. 2); (c) profile of the textured synthetic object with fine structure (Fig. 3). The profile corresponds to a section in the middle of the object; (d) profile of the real spherical shell image
(Fig. 4). We show only a profile through the right part of the shell. Note that RL–TV deconvolution produces a sharper profile than standard RL. Values on the horizontal axes are in microns.

The results of applying the deconvolution algorithm to a spherical shell image are shown in Figure 4. The improvements achieved with RL–TV are a better localization of the borders and a better estimation of the shell thickness. In Figure 6d, we show a zoom of the intensity profiles through the right portion of the shell image deconvolved with standard RL and RL–TV. These profiles can be used to estimate the thickness of the shell. On the raw image data (Fig. 4a), we measured a shell thickness of 0.93 μm, which is larger than the true thickness; on the result of RL deconvolution (Fig. 4b), the measured thickness is 0.26 μm, which is too small. The thickness measured using RL–TV deconvolution (Fig. 4c) is 0.40 μm and thus also too small, but this value lies closer to the real thickness (between 0.5 and 0.7 μm).

We also tested the proposed deconvolution algorithm on a cluster of beads of diameter 6 μm (Fig. 5). For the deconvolution with RL–TV, we used a regularization parameter λTV = 0.005. The improvement of RL–TV deconvolution compared to standard RL can be appreciated in Figure 5c, as most of the blur has been removed, and the artifacts produced by standard RL deconvolution (visible in the X–Z view of Fig. 5b) are no longer present.

CONCLUSION

In this paper, we have presented a new deconvolution approach for 3D confocal microscopy. It is based on the RL algorithm that is regularized using the TV. RL with no regularization suffers from unstable noise amplification. Many authors have proposed to regularize the energy using different functionals. Tikhonov–Miller functional is often used, but it over-smoothes the edges of objects in the image. We have proposed to use a regularization term based on Total Variation, which is nonquadratic and does not smooth edges. We have presented results
on simulated and real data that demonstrate, both in qualitative and in quantitative terms, the advantage of the RL–TV algorithm over its nonregularized version.

REFERENCES

Agard D, Sedat J. 1983. Three-dimensional architecture of a polytene nucleus. Nature 302:676–681.
Ambrosio L, Fusco N, Pallara D. 2000. Functions of Bounded Variation and Free Discontinuity Problems. Oxford: Oxford University Press.
Born M, Wolf E. 1999. Principles of Optics, 7th ed. Cambridge, UK: Cambridge University Press.
Csiszár I. 1991. Why least squares and maximum entropy? Ann Stat 19:2032–2066.
Dempster A, Laird N, Rubin D. 1977. Maximum likelihood from incomplete data via the EM algorithm. J Roy Stat Soc B 39:1–38.
Dey N, Blanc-Féraud L, Zimmer C, Roux P, Kam Z, Olivo-Marin JC, Zerubia J. 2004a. 3D microscopy deconvolution using Richardson–Lucy algorithm with total variation regularization. INRIA Technical Report 5272.
Dey N, Blanc-Féraud L, Zimmer C, Kam Z, Olivo-Marin JC, Zerubia J. 2004b. A deconvolution method for confocal microscopy with total variation regularization. In: Proceedings of IEEE International Symposium on Biomedical Imaging (ISBI), April 2004.
Goodman JW. 1996. Introduction to Fourier Optics, 2nd ed. New York: McGraw-Hill.
Green P. 1990. On use of the EM algorithm for penalized likelihood estimation. J Roy Stat Soc B 52:443–452.
van Kempen G, van Vliet L. 1999. The influence of the background estimation on the superresolution properties of nonlinear image restoration algorithms. In: Cabib D, Cogswell CJ, Conchello J-A, Lerner JM, Wilson T, editors. Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing IV, Proceedings of the SPIE Conference, vol. 3605; pp. 179–189.
van Kempen G, van Vliet L. 2000a. Background estimation in nonlinear image restoration. J Opt Soc Am A 17:425–433.
van Kempen G, van Vliet L. 2000b. The influence of the regularization parameter and the first estimate on the performance of Tikhonov regularized nonlinear image restoration algorithms. J Microsc 198:63–75.
van Kempen G, van Vliet L, Verveer P, van der Voort H. 1997. A quantitative comparison of image restoration methods for confocal microscopy. J Microsc 185:354–365.
Lucy L. 1974. An iterative technique for the rectification of observed distributions. Astron J 79:745–765.
de Monvel JB, Scarfone E, Calvez SL, Ulfendahl M. 2003. Image-adaptive deconvolution for three-dimensional deep biological imaging. Biophys J 85:3991–4001.
Richardson WH. 1972. Bayesian-based iterative method of image restoration. J Opt Soc Am 62:55–59.
Rudin LI, Osher S, Fatemi E. 1992. Nonlinear total variation based noise removal algorithms. Physica D 60:259–268.
Sheppard C, Cogswell C. 1990. Three-dimensional image formation in confocal microscopy. J Microsc 159:179–194.
Stokseth P. 1969. Properties of a defocused optical system. J Opt Soc Am 59:1314–1321.
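The RL–TV update of Eq. (11) and the I-divergence criterion of Eq. (12) can be sketched in 1D with NumPy. This is a sketch under stated assumptions, not the authors' implementation: the signal is 1D, the TV divergence term uses simple forward/backward differences, and the test data are invented:

```python
import numpy as np

def div_tv(f, eps=1e-8):
    """1D discretization of div(grad f / |grad f|) via forward/backward differences."""
    grad = np.diff(f, append=f[-1])         # forward difference
    unit = grad / np.maximum(np.abs(grad), eps)
    return np.diff(unit, prepend=unit[0])   # backward difference (adjoint)

def rl_tv_step(f, g, h, lam=0.002, eps=1e-12):
    """One RL-TV iteration (Eq. 11): multiplicative RL update over a TV denominator."""
    blurred = np.convolve(f, h, mode="same")
    rl = np.convolve(g / np.maximum(blurred, eps), h[::-1], mode="same") * f
    return rl / np.maximum(1.0 - lam * div_tv(f), eps)

def i_divergence(a, b, eps=1e-12):
    """I-divergence (Eq. 12) between nonnegative images a and b."""
    return float(np.sum(a * np.log(np.maximum(a, eps) / np.maximum(b, eps))
                        - (a - b)))

# Toy data: a single spike blurred by a symmetric PSF.
h = np.array([0.25, 0.5, 0.25])
f_true = np.zeros(16)
f_true[8] = 1.0
g = np.convolve(f_true, h, mode="same")
f = np.full_like(g, g.mean())               # flat nonnegative initial estimate
for _ in range(100):
    f = rl_tv_step(f, g, h)
print(np.argmax(f))  # the spike location is recovered at index 8
```

With λTV = 0.002, as in the paper, the denominator stays close to 1 (here |div| ≤ 2, so it is at least 0.996), which matches the paper's observation that too large a λTV can drive the denominator to zero or below.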
