Data-driven Approaches for Texture and Motion


Texture Feature Extraction Simulating the Structure of V1 in the Cortex


Received 2016-06-29; published online 2016-12-04. Supported by the National Natural Science Foundation of China (61372167, 61379104). First author: ZOU Hongzhong (1992-), M.S. candidate, Air Force Engineering University, e-mail: hongzhongz@126.com. Online at http://www.cnki.net/kcms/detail/61.1076.TN.20161204.0227.034.html. doi: 10.3969/j.issn.1001-2400.2017.03.017. CLC number: TP183; Article ID: 1001-2400(2017)03-0096-05. Journal of Xidian University (Natural Science), Jun. 2017, Vol. 44, No. 3.

Texture Feature Extraction Simulating the Structure of V1 in the Cortex

ZOU Hongzhong, XU Yuelei, MA Shiping, LI Shuai, ZHANG Wenda
(Institute of Aeronautics and Astronautics Engineering, Air Force Engineering University, Xi'an 710038, China)

Abstract: To address the limitations of traditional models in extracting image texture features, and drawing on the sensitivity of primary visual cortex (V1) cells to edges and shape, a texture feature extraction model based on a V1 model is proposed. Two-dimensional Gabor filters and an exponential delay function are adopted to simulate the receptive fields of V1 simple cells, and a V1 cell model is then obtained by summing absolute values and normalizing. By analyzing the response of this model to natural images, image texture features are obtained by decoding its output. Simulation results show that the V1 cell model fits the biological experimental data reasonably well and extracts motion features from grating and random-dot stimuli; with the best model parameters chosen by comparison, the proposed model extracts texture features from both simple synthetic images and natural images, effectively building a bridge between biological vision and computer vision.

Key words: texture; visual feature; primary visual cortex model; natural image; motion feature

Texture is an important visual feature that carries the detail and structural information of an image. Texture feature extraction is a key branch of image processing, with applications in image retrieval, image analysis, and computer vision [1]. Biological visual systems have powerful information-processing abilities on which most organisms depend for survival, and brain-inspired computation, which helps people process incoming information quickly and accurately, lies at the core of artificial intelligence. A large body of biological experimental data shows that primary visual cortex (V1) cells not only extract the motion features of moving targets but also encode edge and orientation information [2]. We draw on the texture-extraction ability of V1 cells to build a biologically inspired model that effectively extracts visual texture information from natural images and lays a foundation for further research.
Since the middle of the 20th century, a large number of researchers have entered the computer vision field and built many kinds of texture feature extraction models, which fall mainly into four categories: statistical, model-based, signal-processing, and structural methods. Statistical methods describe texture through the gray-level statistics of a pixel and its neighborhood, studying the first-, second-, and higher-order statistics of a texture region [3]. Model-based methods fit a parameter-controlled distribution model to the texture image and use the estimated parameters as features for practical tasks such as image segmentation [4]. Signal-processing methods, built on time-frequency and multiscale analysis, apply specific transforms to local regions of the texture image and extract relatively stable feature values that express within-region homogeneity and between-region dissimilarity [5]. Structural methods analyze texture in terms of texture primitives, regarding an image texture as composed of primitives of different types whose orientation and number determine the appearance of the texture [6]. The four categories are related yet distinct and have produced many well-known texture extraction methods, such as the autocorrelation function, the gray-level co-occurrence matrix, and Markov random field models. However, because texture is complex, used in widely different applications, and loosely defined, existing texture feature extraction methods do not reach the texture-extraction ability of biological vision and are usually limited to one aspect or domain. Although several methods loosely combine biological and computer vision at the model level, few obtain image texture features directly from a biological vision model. We therefore study the structure and functional properties of V1 cells and, building on existing V1 cell models, construct a bridge between biological vision and computer vision with the goal of extracting image texture features.

1 A V1 Model for Texture Feature Extraction

1.1 The V1 model

Studies show that more than 80% of the information received by the human brain is processed through the visual system [7]. V1 neurons are a key stage of this processing; they are commonly divided into simple cells and complex cells and effectively encode static information such as edges and shape. Simple cells have elongated receptive fields that respond strongly to bar-like stimuli and can detect simple shapes such as lines and edges; complex-cell receptive fields are formed by superposing simple-cell receptive fields [8]. Based on the receptive-field properties of V1 simple cells, many researchers have simulated simple cells with 3-D Gabor filters, edge detection, or wavelet transforms [9]. Considering the practical needs of texture feature extraction and a low computational cost, we model the simple-cell receptive field with separate spatial and temporal filters: the spatial part is a simple 2-D Gabor filter and the temporal part is an exponential delay function [10].

Let the gray-level video sequence be $I(\mathbf{p}, t)$, where $\mathbf{p} = (x, y)$ denotes spatial position. The spatial filter is defined as

$$H(\mathbf{p}, \theta, f_s) = \exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)\exp\!\big(j\,2\pi(f_s\cos\theta\,x + f_s\sin\theta\,y)\big), \tag{1}$$

and the temporal filter as

$$T(t, f_t) = \exp\!\left(-\frac{t}{\tau}\right)\exp(j\,2\pi f_t t), \tag{2}$$

where $\sigma$ and $\tau$ are constants, $\theta$ is the preferred direction of the simple cell, and $f_s$ and $f_t$ are the spatial and temporal frequencies. The preferred speed $V_c$ is then

$$V_c = \frac{f_t}{f_s}. \tag{3}$$

Denoting the real and imaginary parts of the spatial and temporal filters by $H_r, T_r$ and $H_i, T_i$, the odd and even spatiotemporal filters are

$$G_o(\mathbf{p}, t, \theta, V_c) = H_i(\mathbf{p}, \theta, f_s)\,T_r(t, f_t) + H_r(\mathbf{p}, \theta, f_s)\,T_i(t, f_t), \tag{4}$$

$$G_e(\mathbf{p}, t, \theta, V_c) = H_r(\mathbf{p}, \theta, f_s)\,T_i(t, f_t) - H_i(\mathbf{p}, \theta, f_s)\,T_r(t, f_t). \tag{5}$$

These odd- and even-symmetric filters characterize V1 simple cells, and the response of a simple cell tuned to a particular direction $\theta$ and speed $V_c$ is

$$R_{o/e}(\mathbf{p}, t, \theta, V_c) = \big(G_{o/e}(\cdot, \cdot, \theta, V_c) \ast_{(x,y,t)} I\big)(\mathbf{p}, t). \tag{6}$$

To preserve good edge-extraction behavior, the complex-cell response is taken as the sum of the absolute values of the odd and even simple-cell responses [11]:

$$E(\mathbf{p}, t, \theta, V_c) = \left|R_o(\mathbf{p}, t, \theta, V_c)\right| + \left|R_e(\mathbf{p}, t, \theta, V_c)\right|. \tag{7}$$

The responses are then normalized. Assuming $N$ evenly spaced directions $\theta = \theta_1, \dots, \theta_N$, the final V1 cell model output is

$$E_{V1}(\mathbf{p}, t, \theta, V_c) = \frac{E(\mathbf{p}, t, \theta, V_c)}{\sum_{i=1}^{N} E(\mathbf{p}, t, \theta_i, V_c) + \varepsilon}, \tag{8}$$

where $0 < \varepsilon < 1$ is a small constant that keeps the denominator of eq. (8) from being zero.

1.2 Decoding for texture feature extraction

Because image textures take widely varied forms, the definition of texture has long been disputed and has not been settled; most proposed definitions have not gained general acceptance. Most researchers study texture according to their own understanding and definition, which has produced a stream of extraction methods that are mostly suited to a single domain and lack the generality of biological vision [12]. Focusing on the edge sensitivity of V1 cells, and after analyzing the response of the V1 cell model to natural images (see Section 3.1), we define an edge-texture strength function Q:

$$Q(\mathbf{p}, t) = \sum_{i=1}^{N} E_{V1}(\mathbf{p}, t, \theta_i, 0). \tag{9}$$

Q expresses, to a degree, the activation state of the population of V1 model cells at a given spatial position and thus represents the strength of the local texture. Setting $V_c = 0$ indicates that $I(\mathbf{p}, t)$ is a static image. With MATLAB 2010b as the simulation platform and a controlled-variable procedure, the parameters were determined experimentally as $\sigma = 2.6$, $\tau = 6.4$, $f_t = [-2.43, -1.62, -0.81, -0.27, 0.00, 0.27, 0.81, 1.62, 2.43]$, $f_s = 0.27$, and $N = 12$. (An illustrative Python sketch of eqs. (1)-(9) for the static case is given after Section 2.)

2 Motion feature extraction from grating and random-dot sequences

Studies show that V1 cells can extract motion features from grating and random-dot video sequences. To verify the biological plausibility of the V1 model, a model with preferred direction (PD) 0 degrees and preferred speed 1 pixel/frame was used to extract motion information from the input stimuli. The stimuli were a grating, a plaid (produced by superposing two gratings moving in different directions), and random dots (density 0.1), all moving at 1 pixel/frame and covering directions evenly distributed in space (36 directions for the grating and the plaid, 12 for the random dots); the arrows in Fig. 1(a) and Fig. 1(d) indicate only one of these directions. Fig. 1(b) shows the direction-tuning polar plot measured by Movshon in biological experiments for a V1 cell with preferred direction 0 degrees responding to moving gratings and plaids [13]; Fig. 1(c) and Fig. 1(e) show the simulation results of our model for the three types of stimuli.

Fig. 1 Polar responses to grating, plaid, and random-dot stimuli.

Figs. 1(a)-(c) show that the model agrees with the biological data on the whole: it responds unimodally to the grating stimulus and bimodally to the plaid stimulus, with some response in the non-peak directions, and the model data are more symmetric and regular than the experimental data. Fig. 1(e) shows that the model responds unimodally to the random-dot stimulus, which agrees with the biological finding of [14] that, for slowly moving random dots, a V1 cell responds most strongly to dots moving in its preferred direction. These results indicate that the model roughly fits the biological data and has the basic properties of V1 cells. The reason is that the combination of the 2-D Gabor filter and the exponential delay function gives the model a receptive-field structure similar to that of V1 cells, including direction selectivity, which is the basis for simulating V1 function. The agreement between the simulation results and the biological data supports the soundness of the model and provides a basis for further study.
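The experiments above were run in MATLAB 2010b; the following Python/NumPy fragment is only an illustrative re-implementation of eqs. (1)-(9) for the static-image case ($V_c = 0$, so the temporal filter of eq. (2) drops out and only the spatial Gabor of eq. (1) remains). The kernel size argument, the concrete value of epsilon, the DC-removal step, and all function names are choices made for this sketch and are not taken from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, sigma, f_s, theta):
    """2-D Gabor receptive field H(p, theta, f_s) of eq. (1): a Gaussian envelope
    times a complex sinusoid along direction theta. The DC component is removed so
    that uniform areas give zero response (an implementation choice motivated by
    the behaviour described in Section 3.2, not stated in the paper)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(1j * 2.0 * np.pi * f_s * (np.cos(theta) * x + np.sin(theta) * y))
    g = envelope * carrier
    return g - g.mean()

def v1_texture_strength(image, size=3, sigma=2.6, f_s=0.27, n_orient=12, eps=0.1):
    """Edge-texture strength map Q(p) of eq. (9) for a static image (V_c = 0):
    per-direction complex-cell energy (eq. (7)), normalized across the N preferred
    directions (eq. (8)) and summed (eq. (9)). eps is illustrative."""
    image = np.asarray(image, dtype=float)
    thetas = np.arange(n_orient) * 2.0 * np.pi / n_orient   # evenly spaced preferred directions
    energies = []
    for theta in thetas:
        g = gabor_kernel(size, sigma, f_s, theta)
        r_odd = fftconvolve(image, np.imag(g), mode='same')   # odd simple cell, eq. (6)
        r_even = fftconvolve(image, np.real(g), mode='same')  # even simple cell, eq. (6)
        energies.append(np.abs(r_odd) + np.abs(r_even))       # complex cell, eq. (7)
    energies = np.stack(energies, axis=0)
    normalized = energies / (energies.sum(axis=0, keepdims=True) + eps)   # eq. (8)
    return normalized.sum(axis=0)                                         # eq. (9)
```

With the parameters reported above (a 3x3 spatial filter, sigma = 2.6, f_s = 0.27, N = 12), Q stays near zero in uniform areas and approaches one on strong edges, which matches the qualitative behaviour described in Section 3.1.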
3 Texture feature extraction from natural images

3.1 Response characteristics in strong and weak texture regions

The previous section showed that the V1 model simulates V1 cell function reasonably well and largely accords with the biological properties. Natural images, however, have more complex structure. To extract clear texture information, the spatial filter size is set to 3x3 and the number of directions N to 12 (see Section 3.3), and the responses in strong and weak texture regions are analyzed; the results are shown in Fig. 2. Figs. 2(b)-(d) correspond to points a-c in Fig. 2(a), which lie in a texture-free region, a strong-texture region, and a weak-texture region, respectively; the polar values are the responses of the 12 classes of model cells with different preferred directions.

Fig. 2 Response characteristics in different texture regions.

The results in Fig. 2 show that in the texture-free region the responses in all directions are zero and the V1 cell population is inactive; in the weak-texture region the model cells tuned to different directions respond at roughly the same, low level and the population is half-activated; in the strong-texture region the model usually reaches its maximum in two symmetric preferred directions and decreases in the others, the overall response is high, and the population is highly active. This is closely related to the structure of local texture: when the receptive field is small, a pixel in the middle of a texture edge generally has two symmetric, collinear extension directions along the edge (except at corners). This property is the key to edge feature extraction and the theoretical basis of the decoding part of the model.

Fig. 3 Texture extraction from simple images.

3.2 Texture feature extraction from simple synthetic images

The analysis in Section 3.1 shows that the V1 model responds very differently in texture and non-texture regions. To verify that the final model can extract image texture features (with the same parameters as in Section 3.1), three simple synthetic images were constructed as model input; the results are shown in Fig. 3. Whether the image is pure black or pure white, as long as the pixels do not vary across the image, i.e., there is no texture, the model output is zero, indicating that the image contains no local texture features. When black or white stripes differ from the background pixels, i.e., local pixels vary in a regular way, the model clearly extracts the edge texture. This comparison shows that the proposed texture extraction model has edge-extraction ability. Texture features generally occur where the gray level or brightness changes strongly; by introducing 2-D Gabor filters the model is sensitive to texture, responding strongly where the gray level changes sharply and weakly where it is nearly constant. The model thus expresses the strength of regional texture through the magnitude of its response, an essential step in texture feature extraction.

3.3 Texture feature extraction from natural images and parameter analysis

3.3.1 Parameter analysis

Natural images are more complex and varied than synthetic ones. Experiments show that, for texture extraction from static images, the choice of spatial filter size and of the number of filters (the number of evenly spaced directions) strongly affects the results. The first frame of the Wooden sequence from the Middlebury database was used as input, as shown in Fig. 4(a); the results of changing only the filter size or the filter number (other parameters as in Section 3.1) are shown in Figs. 4(b)-(c). Fig. 4(b) shows that as the spatial filter grows, more local texture is lost and edges become more blurred; detail is preserved best with a 3x3 filter. Fig. 4(c) shows that with 6 filters the extracted texture does not separate different regions well, which hinders further processing; with 12 and 36 filters the extraction results are nearly identical and the separation between regions is much better than with 6 filters, but increasing the number of filters further does not improve the texture-extraction ability and only increases the computational load. From this comparison, the model extracts texture best with a 3x3 spatial filter and 12 directions, accurately extracting the texture information of the image and laying a solid foundation for subsequent applications such as image segmentation. (An illustrative parameter-sweep sketch is given after Section 4.)

Fig. 4 Texture feature extraction under different parameters.

Fig. 5 Texture feature extraction results for different natural images.

3.3.2 Texture feature extraction from natural images

The above analysis shows that, with suitable filter size and number, the model can extract texture features from simple natural images. To show its broader applicability, several other typical images from the Middlebury database were used as input, with the extraction results shown in Fig. 5. The model extracts texture features from all of these natural images and thus has a degree of generality, but in regions of heavily overlapping texture or weak edges the results are somewhat blurred and poorly separated, as in the Evergreen result in Fig. 5. Texture is complex and diverse, which is both the difficulty and the focus of related research. Limited by current biological experimental techniques, the details of biological visual information processing are still being explored. This work is an attempt at applying a biological vision model directly to image processing, but the model structure still differs considerably from V1 and needs further refinement.

4 Conclusion

Successful texture feature extraction directly affects texture classification and segmentation. Inspired by the sensitivity of V1 neurons to edges and shape, we proposed an image texture feature extraction method based on a V1 model, effectively combining a biological model with natural image processing. A 2-D Gabor filter and an exponential delay function simulate the receptive-field properties of V1 simple cells, and absolute-value summation and normalization yield a V1 cell model suited to edge extraction; the response to natural images is analyzed, and an edge-texture strength function decodes the model output into image texture features. With moving random-dot and grating stimuli as input and a comparison against biological data, the model was shown to have biologically plausible motion feature extraction; with simple synthetic images it was shown to extract edges; with different natural images the influence of the model parameters was analyzed and the texture-extraction ability further verified. This work establishes an initial link between biological and computer vision, supported by extensive experiments. Future work will address texture evaluation, classification, and segmentation, and then target recognition and classification.
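The comparison of Section 3.3.1 can be reproduced in spirit with a small sweep over filter sizes and direction counts. The sketch below reuses v1_texture_strength() from the earlier fragment on a synthetic striped image in the style of Section 3.2; the alternative filter sizes, the image, and the "spread" measure are illustrative choices, not values from the paper.

```python
import numpy as np

# A synthetic striped image in the spirit of Section 3.2: white bars on a black
# background, so Q should be near zero in the uniform areas and high at the bar edges.
frame = np.zeros((64, 64))
frame[:, 20:24] = 1.0
frame[:, 40:44] = 1.0

for size in (3, 7, 11):                 # candidate spatial filter sizes (illustrative)
    for n_orient in (6, 12, 36):        # direction counts compared in Section 3.3.1
        q = v1_texture_strength(frame, size=size, n_orient=n_orient)
        print(f"filter {size}x{size}, {n_orient:2d} directions: "
              f"Q at bar edges {q[:, 18:26].mean():.3f}, in background {q[:, :15].mean():.3f}")
```

On such a toy input the smallest filter keeps the edge response most localized, consistent with the paper's observation that a 3x3 filter with 12 directions preserves detail best.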
References:
[1] LIU Li, KUANG Gangyao. Overview of Image Textural Feature Extraction Methods [J]. Journal of Image and Graphics, 2009, 14(4): 622-635.
[2] NAMBOODIRI V M K, HUERTAS M A, MONK K, et al. Visually Cued Action Timing in the Primary Visual Cortex [J]. Neuron, 2015, 86(1): 319-330.
[3] ZOU Z, YANG J, MEGALOOIKONOMOU V, et al. Trabecular Bone Texture Classification Using Wavelet Leaders [C]//Proceedings of SPIE: 9788. Bellingham: SPIE, 2016: 97880E.
[4] AHMADVAND A, KABIRI P. Multispectral MRI Image Segmentation Using Markov Random Field Model [J]. Signal, Image and Video Processing, 2016, 10(2): 251-258.
[5] ERGEN B. Scale Invariant and Fixed-length Feature Extraction by Integrating Discrete Cosine Transform and Autoregressive Signal Modeling for Palmprint Identification [J]. Turkish Journal of Electrical Engineering and Computer Sciences, 2016, 24(3): 1768-1781.
[6] WANG Weiwei, XI Dengyan, YANG Gongpeng, et al. Warship Target Detection Algorithm Based on Cartoon-texture Decomposition [J]. Journal of Xidian University, 2012, 39(4): 131-137.
[7] PURVES D, MONSON B B, SUNDARARAJAN J, et al. How Biological Vision Succeeds in the Physical World [J]. Proceedings of the National Academy of Sciences, 2014, 111(13): 4750-4755.
[8] CHEONG S K, TAILBY C, SOLOMON S G, et al. Cortical-like Receptive Fields in the Lateral Geniculate Nucleus of Marmoset Monkeys [J]. The Journal of Neuroscience, 2013, 33(16): 6864-6876.
[9] ODAIBO S G. The Gabor-Einstein Wavelet: a Model for the Receptive Fields of V1 to MT Neurons [J]. arXiv, 2014, 1: 1-39.
[10] SOLARI F, CHESSA M, MEDATHATI N V K, et al. What Can We Expect from a V1-MT Feedforward Architecture for Optical Flow Estimation? [J]. Signal Processing: Image Communication, 2015, 39: 342-354.
[11] BHATEJA V, SRIVASTAVA A, SINGH G, et al. A Modified Speckle Suppression Algorithm for Breast Ultrasound Images Using Directional Filters [C]//Advances in Intelligent Systems and Computing: 249, Volume II. Berlin: Springer Verlag, 2014: 219-226.
[12] GAO S, PENG Y H, GUO H Z, et al. Texture Analysis and Classification of Ultrasound Liver Images [J]. Bio-Medical Materials and Engineering, 2014, 24(1): 1209-1216.
[13] SIMONCELLI E P, HEEGER D J. A Model of Neuronal Responses in Visual Area MT [J]. Vision Research, 1998, 38(5): 743-761.
[14] AN X, GONG H, MCLOUGHLIN N, et al. The Mechanism for Processing Random-dot Motion at Various Speed in Early Cortices [J]. PLOS ONE, 2014, 9(3): e93115.

An English Composition: An Experiment That Makes Paper Confetti Dance


让纸屑飞舞的实验英文作文全文共2篇示例,仅供读者参考让纸屑飞舞的实验英文作文1:Title: "Experiment: Making Paper Confetti Dance"Introduction:In this experimental essay, we delve into the captivating world of paper confetti. Paper confetti, often seen as a festive decoration or a playful accessory, can also serve as an intriguing subject for scientific inquiry. Through a series of experiments, we aim to explore the dynamics of paper confetti and uncover the factors that influence its behavior, ultimately revealing the secrets behind making paper confetti dance.Experiment 1: The Effect of Airflow on Paper Confetti MovementTo investigate how airflow affects the movement of paper confetti, we set up a controlled experiment using a fan. First, we scattered a layer of paper confetti evenly on a flat surface. Then, we positioned the fan at various distances and angles relative to the confetti. By adjusting the fan's speed anddirection, we observed how different airflow conditions influenced the motion of the confetti. Our results revealed that higher fan speeds and direct airflow angles produced more pronounced movements in the confetti, creating a mesmerizing dance-like effect.Experiment 2: Surface Texture and Paper Confetti Mobility Next, we explored the role of surface texture in determining the mobility of paper confetti. We prepared two surfaces: one smooth and one rough. After sprinkling confetti onto each surface, we gently tilted the platforms to simulate subtle movements. Surprisingly, we found that the confetti on the rough surface exhibited greater mobility compared to the smooth surface. This observation suggests that surface texture plays a crucial role in facilitating the movement of paper confetti, potentially due to increased friction between the confetti and the rough surface.Experiment 3: The Influence of Vibrations on Paper Confetti BehaviorIn our third experiment, we investigated how vibrations affect the behavior of paper confetti. To induce vibrations, weutilized a simple setup consisting of a vibrating platform powered by a motor. We placed the confetti on the vibrating surface and gradually increased the vibration frequency. As expected, higher vibration frequencies resulted in more vigorous movements among the confetti particles. Furthermore, we observed interesting patterns emerging within the confetti clusters, indicating a complex interplay between vibration intensity and confetti cohesion.Discussion:Our experiments have shed light on the multifaceted dynamics of paper confetti movement. From the influence of airflow and surface texture to the effects of vibrations, we have uncovered several key factors that contribute to the mesmerizing dance of paper confetti. Beyond their aesthetic appeal, these findings hold implications for various fields, including physics, materials science, and event planning. By understanding the underlying principles governing paper confetti behavior, we can harness its potential in innovative applications ranging from kinetic sculptures to interactive installations. Through continued exploration and experimentation, we can unlock even more secrets of theenchanting world of paper confetti.Conclusion:In conclusion, our experimental journey into the realm of paper confetti has provided valuable insights into its dynamic behavior. By systematically exploring the effects of airflow, surface texture, and vibrations, we have gained a deeper understanding of what makes paper confetti dance. 
As we continue to unravel the mysteries of this seemingly simple yet surprisingly complex phenomenon, we open doors to new possibilities for creativity, science, and exploration. So let us embrace the magic of paper confetti and let it dance freely, inspiring wonder and delight wherever it goes.让纸屑飞舞的实验英文作文2:Title: "Experiment: Letting Paper Flakes Dance"Abstract:In this experiment, we explore the fascinating world of aerodynamics by investigating the factors that influence the flight of paper flakes. Through a series of controlled tests, we manipulate variables such as shape, size, and weight to observe their effects on the motion of paper flakes in the air.By conducting this experiment, we aim to gain insights into the principles of aerodynamics and to inspire curiosity about the science of flight.Introduction:Aerodynamics, the study of how objects move through the air, plays a crucial role in various aspects of our daily lives, from the flight of birds and insects to the design of aircraft and vehicles. One simple yet effective way to explore aerodynamics is through the observation and experimentation with paper flakes. In this experiment, we delve into the dynamics of paper flakes in flight, examining how different factors influence their movement.Materials and Methods:To conduct this experiment, the following materials are required:- Sheets of paper- Scissors- Ruler- Tape- Fan (optional)- StopwatchProcedure:1. Cut the paper into flakes flakes of various shapes and sizes, such as squares, rectangles, triangles, and irregular shapes.2. Label each paper flake with a unique identifier for easy identification during testing.3. Set up a controlled environment for testing, preferably a room with minimal air currents.4. Attach each paper flake to a lightweight object, such as a paper clip, to simulate real-world conditions.5. Position the fan at a fixed distance from the testing area to create a consistent airflow.6. Release each paper flake into the airflow generated by the fan and observe its flight pattern.7. Use a stopwatch to record the time taken for each paper flake to travel a predetermined distance.8. Repeat the experiment multiple times for each paper flake to ensure accuracy and reliability of results.9. Analyze the data collected and draw conclusions regarding the influence of shape, size, and weight on the flightof paper flakes.Results:The results of the experiment revealed several interesting findings:- Paper flakes with larger surface areas tend to experience greater air resistance, resulting in slower flight times.- Flakes with streamlined shapes, such as triangles and streamlined rectangles, exhibit smoother and more stable flight trajectories compared to irregularly shaped flakes.- Increasing the weight of paper flakes leads to faster descent rates due to the force of gravity.- The distance from the fan also plays a significant role in determining the flight characteristics of paper flakes, with flakes positioned closer to the fan experiencing stronger airflow and faster flight speeds.Discussion:The findings of this experiment align with the principles of aerodynamics, demonstrating how factors such as shape, size, weight, and airflow affect the flight of paper flakes. By manipulating these variables, we can gain insights into theunderlying mechanisms governing flight and apply this knowledge to real-world applications, such as aircraft design and environmental monitoring. 
Additionally, this experiment serves as a valuable educational tool for teaching students about aerodynamics in a hands-on and engaging manner.Conclusion:In conclusion, this experiment offers a captivating exploration of aerodynamics through the observation and experimentation with paper flakes. By investigating the factors that influence the flight of paper flakes, we gain a deeper understanding of the principles governing aerodynamic motion. Furthermore, this experiment highlights the importance of curiosity-driven inquiry and hands-on experimentation in the study of science and engineering. Through continued exploration and experimentation, we can further unravel the mysteries of aerodynamics and inspire future generations of scientists and engineers.。

Autodesk Softimage 2011 Service Pack 1 (SP1) Release Notes


The latest service pack for Autodesk® Softimage® 2011 software includes 130 fixes.Crosswalk 5.1Softimage 2011 SP1 ships with Crosswalk v.5.1.CoexistenceSoftimage 2011 SP1 is a full build and can co-exist with Softimage 2011. Softimage 2011 is not required to be uninstalled first. LicensingSoftimage 2011 SP1 uses the same license as Softimage 2011. When installing Softimage 2011 SP1:•Enter the product key 590C1 and your Softimage 2011 serial number, and then click Next.•If you have a Network license, click Configure to switch to the Network licensing method.DocumentationSoftimage 2011 SP1 includes full English and Japanese documentationThe full list of fixes is as follows:ModelingUDEV00255606 Crash while transforming boolean elementsUDEV00256310 Regression in Boolean behaviorUDEV00246940 Cannot see painted vertex colors after vertex color property modified by another toolUDEV00247045 ClusterProperties do not resize properly on topo change if a CustomOperator are set to the texture projection UDEV00257973 extreme lag when moving pivot using alt key in scale modeUDEV00257773 Boolean operation crashes SoftimageData managementUDEV00257530 Cache Manager - Adding a cache to an object that has an Audio Clip returns an errorUDEV00257308 Cache Manager - Write - Branch select is not considered in Multiple selectionUDEV00257307 Cache Manager - Write - When Attribute list is empty selected attributes stay identicalUDEV00256625 Cache Manager - Write - "Attributes" list doesn't clearly display which attributes is stored for each object UDEV00256623 Cache Manager - Spaced out the numbers preceding the object names in the object listsUDEV00257172 Cache Manager - *.cacheslist should not be default file extension for the read cache browserUDEV00257169 Caching Save Cache to Mixer button create an unused imageclipUDEV00255923 ReadAnimationCache - Can add badly formed path sequenceUDEV00256620 Cache Manager - Write - been able to select a group or hierarchy (in branch) and add the contents to the list UDEV00257539 [CER] Refmodels - Crash with Scene explorer OpenUDEV00257564 [CER] Crash when testing the Displacement Ocean shader node.UDEV00257278 Put a PointCloud on a RefModel, do some color modifs. 
then update the RefModel -> Simulation is wrong UDEV00257310 CRASH when undoing delete model ( specific )UDEV00257231 Refmodel we cannot export a ref model’s delta in the current project.UDEV00257346 The "Models to Reconnect To" parameter of RefModel not working with multiple models targetsUDEV00257362 Crash when adding CustomPSet to offloaded RefModelUDEV00257180 New Layer or Partition configuration is not persisted with Reference Model objectsUDEV00257688 Texture Layers ports are disconnected when loading an older scene in Softimage 2011CrosswalkUDEV00257368 FBX - imported file from previous version of the format has mesh facetedUDEV00257166 XSI Crosswalk FBX export hard/soft edge problemUDEV00257951 FBX - Linux - Crash importing using HardEdges EnableUDEV00257726 FBX - Crash importing Shape AnimationUDEV00257715 Crosswalk needs to increment to version 5.1Face RobotUDEV00256437 FaceRobot > Assemble doesn't work on Korean LocaleUDEV00257200 FR Script Error pop when doing Clear Fast Playback CacheCompositingUDEV00257358 Fx : plug-in operators are missingUIUDEV00257149 Hang with scene searchUDEV00257519 Crash when editing new LayoutUDEV00257483 Regression - unable to dolly when looking through a spot light without an interest constraintUDEV00257217 Regression from 2010 | items missing in the Modify texture menuUDEV00257374 Softimage does not display proper texture in VM when user changes texture imageclip in an override property. UDEV00255873 memo cam causes XSI to crash ( scene specific )UDEV00257651 Default character models' Keyable Params and Key Sets are not correctRenderingUDEV00257657 Shader preset manager mixes up shaders and class IDs when you have SPDL custom shaders in a workgroup UDEV00258079 Custom realtime shaders does not load in SP1UDEV00257908 Crash | ICE cache files with Motion Blur CrashUDEV00257618 Regression - ICE volume rendering - Particle_Gradient_FCurve is not workingUDEV00257208 Enable to load a volume shader in Mental RayUDEV00257507 Crash when rendering an ICE TreeUDEV00257677 Changes to pass shader parameters force the renderer cache to emptyUDEV00257621 invert flag in texture layer editor does not get set in 2011 from 2010 resulting in different rendered result UDEV00257499 Regression: Crash when using Render Region with "Toon Paint and Host"UDEV00257315 Crash when selecting a clip if the rendertree is activeUDEV00257216 Setting frost samples to 1 produces artifactsUDEV00256792 RTS | Ultimap_Preview node logs SiWarning upon creationUDEV00246516 Exporting Shader Compounds Loses Japanese StringsUDEV00257573 CgFx Shader - Crash Loading specific ShaderUDEV00257207 DirectX Viewport freeze after Screen saver when "On resume, display logon screen" is enableUDEV00257610 Crash when setting Photometric cd/m^2 Factor to zeroUDEV00257606 Display problem with the Projection Lookup NodeUDEV00257425 There is no "SimpleShader_CGFX.scn"UDEV00257271 python stops working correctly when I use xsibatchUDEV00257114 Fast Light Effects - Projected texture flips as you move/animate the light interestUDEV00257254 Editable option in the ShaderCompound properties doesn't anymoreUDEV00257292 multiple camera groups aren't picked up inside modelsUDEV00257677 Changes to pass shader parameters force the renderer cache to emptyUDEV00257571 [CER ] 36427081 | race condition on render cleanup/restart which can lead to a crashUDEV00257572 [CER] Crash when viewing lightmap previewUDEV00257825 32-bit Softimage does not load shaders from "nt-x86" folder of workgroup addonsUDEV00257319 Crash when 
switching from mib_photmetric with profile to mia_photometricUDEV00255527 Shaders disappear after changing workgroupUDEV00257882 Rendering | Pointcloud is not reading attributes being used by the clipUDEV00257204 Crash when rendering standins and scrubbing the time lineUDEV00257418 Crash: Render region with standins and GIUDEV00257734 Crash when Rendering StandinUDEV00255524 Problems when loading shaders from Plugin ManagerUDEV00254446 Material overrides only works with the first layerUDEV00257781 Camera Lens Shader Items are duplicated each time you undo a camera movementUDEV00257874 Crash changing flow texture color with render region openUDEV00257807 Rendering | Black renders when 2 shaders classes uses the same DLL/shaderUDEV00257743 Softimage hangs indefinitely loading a scene containing shaders which are not installed.UDEV00257771 CER | 37267480 | Crash when changing hair strands with preview render window openUDEV00257236 Satellite rendering is not properly opening up a connection with SoftimageSDK/ScriptingUDEV00257972 SDK ICE: CIndexSet iterator not initializing properlyUDEV00257954 SDK ICE: ICENode.ExposedPorts[0] drills down through all compounds (SDK)UDEV00257953 SDK ICE: ICE Port Dirty States Not Working with Multiphase/Element Generator NodeUDEV00258113 ICEAttribute::IsDefined() always returns trueUDEV00257533 SDK ICE: Few problems with ICE nodesUDEV00256186 XSIApplication.ActiveToolName doesn't return the precise BrushTool mode when using PaintVertexColorTool UDEV00257626 XSI::ClusterProperty::GetValues() crashes Softimage when querying an EnvelopeWeight property for values UDEV00256436 Folded Lines of code not displayed in script editorUDEV00256968 RefGuide: ClusterProperty.Elements.1.vbs Example - for loop index variable is wrong, and script doesn't work. UDEV00257745 SDK Command: XSI.Application cannot access XSI commands not created by the FrameworkSimulationUDEV00258051 RBD | Freeze loading a specific sceneUDEV00257925 RBD | 2010 Client Scene Crash on load in Softimage 2011UDEV00258112 ICE-RBD | Wrong default values for static, dynamic and elasticity coefficientsUDEV00257701 ICE | Crash renaming Set Data nodeUDEV00257506 ICE | crash duplicating bone with ICE tree that sets length (branch connection)UDEV00257642 ICE | Kinematics workgroup : fixed compounds and new sample sceneUDEV00256992 ICE |connecting to texture_projection_def brokenUDEV00257613 Regression - unable to write to a custom param from ICE treeUDEV00257598 ICE-RBD | Angular velocity is clamped at ± PI rad/sUDEV00257432 ICE particles - alpha blended point sprites not depth sorted properlyUDEV00257403 ICE | Crash! 
Create an ICE graph then put one of the object in a ModelUDEV00257369 ICE | GetData chaining invalidation issuesUDEV00257417 ICE | Graph is not updated when CopyPaste ICE TreeUDEV00257153 selecting the weightmap name rather than the .weights in 'Filter by Weightmap' ICE compound causes crash UDEV00256913 cannot create symmetry mapping templates if deformer's kine is ice drivenUDEV00256099 ICE-Kine | Crash when relocating ICE operator in stack.UDEV00256432 ICE | Sometimes the graph is invalidated when dragging some nodes on it.UDEV00255965 ICE | Tree does not reconnect after save and reloadUDEV00257279 ICE | kinematics : compound needs updateUDEV00257132 ICE | Crash when deleting Scalar ICE node under certain conditionsUDEV00257496 Cannot set implicit bone's length attribute using this.lengthUDEV00257946 ICE | Crash when deleting Compound after exporting privateUDEV00257864 ICE AngularVelocity rotation is inversedUDEV00257751 ICE Cache crash when loading point locator attributesUDEV00257752 ICE Cache | Unable to cache persistable attributesUDEV00257833 XSI can crash while connecting ICE nodesUDEV00257404 ICE | Tree not evaluated correctly after writing to custom parameterUDEV00257753 ICE | Negative zeros can be displayedUDEV00258260 Multi-phase Ice Node Fails to ExecuteLinuxUDEV00256388 I can't run 2011on FC12UDEV00257201 LINUX | No Icons visible on the Essential Skills videos pageUDEV00257094 LINUX | Audio defects when scrubbing or playing frame by frameUDEV00257094 Linux : Now using OpenAL for Audio (resolves issue when scrubbing or playing frame-by-frame)Tools /SetupUDEV00257631 JA localization: Japanese character truncated for Help > About Autodesk Softimage > Build Version (UI) UDEV00257240 XSI crashes when starting from command line with a scene parameterUDEV00257965 Essential movies don't open from NetviewUDEV00257964 Windows7 | 64-Bit | Scripting engine would fail to initialize on certain W7 systems causing a crash on startup UDEV00257775 CER call stacks for 64-bit systems are often corrupt or incorrectDocumentationUDEV00254555 RefGuide: SITransformation.AddLocalRotation > in_bIncrementEulerAngles arg info is missing in the SDK Guide UDEV00254399 RefGuide: The example for AddVertexColor is badly indentedUDEV00249925 SDK Doc: ArrayParameter not shown in Parameter pageUDEV00245887 missing HW shaders exampleUDEV00257126 Japanese Softimage Guide : translation incorrect in "Working with OpenEXR HDR Images in the Fx Tree"。

An English Composition: Creative Painting


创意绘画英文作文全文共2篇示例,仅供读者参考创意绘画英文作文1:Title: Unleashing Creativity through Art: Exploring the World of Creative PaintingIntroduction:Art has long been celebrated as a medium through which individuals can express themselves, evoke emotions, and transcend boundaries. Within the realm of art, creative painting stands out as a vibrant canvas where imagination knows no limits. In this essay, we delve into the captivating world of creative painting, exploring its essence, significance, and the myriad ways it inspires and empowers individuals to unleash their creativity.Understanding Creative Painting:Creative painting encompasses a spectrum of techniques, styles, and approaches that go beyond traditional methods ofrepresentation. It transcends the confines of realism, inviting artists to explore abstract concepts, experiment with unconventional materials, and challenge established norms. From surreal landscapes to abstract expressions of emotions, creative painting offers a platform for artists to channel their innermost thoughts and visions onto the canvas.Importance of Creative Painting:At its core, creative painting fosters innovation, critical thinking, and self-expression. By encouraging individuals to think outside the box, it cultivates a sense of curiosity and openness to new possibilities. Moreover, creative painting serves as a therapeutic outlet, allowing individuals to alleviate stress, express complex emotions, and gain insights into their subconscious mind. Additionally, it plays a pivotal role in cultural enrichment, serving as a medium for social commentary, cultural reflection, and historical documentation.Inspiration and Sources of Creativity:Creativity is a multifaceted phenomenon, influenced byvarious factors such as personal experiences, cultural influences, and environmental stimuli. For artists engaged in creative painting, inspiration can be found in the beauty of nature, the complexities of human emotions, and the intricacies of everyday life. Moreover, artists often draw inspiration from other forms of art, literature, music, and even scientific discoveries. By integrating diverse sources of inspiration, artists can infuse their work with depth, meaning, and originality.Techniques and Approaches:Creative painting embraces a wide range of techniques and approaches, each offering unique opportunities for artistic exploration. From bold brushstrokes to delicate textures, artists employ a myriad of techniques to convey their message and evoke emotions. Some artists may prefer spontaneous improvisation, allowing their intuition to guide the creative process, while others may meticulously plan each element of their composition. Whether working with acrylics, watercolors, oils, or mixed media, artists have the freedom to experiment with different materials and techniques to achievetheir desired effects.Challenges and Rewards:While creative painting offers boundless opportunities for self-expression and innovation, it is not without its challenges. Artists may grapple with self-doubt, creative blocks, and the pressure to meet external expectations. Moreover, the process of translating abstract ideas into tangible forms can be inherently challenging. However, overcoming these obstacles often leads to profound insights, personal growth, and a deeper understanding of one's artistic voice. 
Ultimately, the rewards of creative painting far outweigh the challenges, as it offers a means of connecting with others, leaving a lasting impact, and leaving a mark on the world.Conclusion:Creative painting transcends the boundaries of traditional art forms, offering a vibrant canvas for self-expression, innovation, and exploration. As artists embark on their creative journey, they tap into the depths of their imagination, infusingtheir work with meaning, emotion, and authenticity. Through creative painting, individuals not only enrich their own lives but also contribute to the cultural tapestry of society, inspiring others to see the world through a new lens. In a world where conformity often stifles creativity, creative painting serves as a beacon of freedom, empowering individuals to unleash their imagination and create art that resonates with the soul.创意绘画英文作文2:Title: Creative Painting: Unleashing Artistic IngenuityArt is a universal language that transcends boundaries and speaks to the depths of human emotion and imagination. Among the myriad forms of artistic expression, painting stands out as a canvas for boundless creativity. In this discourse, we delve into the realm of creative painting, exploring its significance, techniques, and the profound impact it leaves on both artists and viewers alike.To embark on a journey of creative painting is to embark on a voyage of self-discovery and innovation. It is a process wherein artists transform mundane blank canvases into vibrant tapestries of color, texture, and meaning. Everybrushstroke is a manifestation of their thoughts, emotions, and experiences, woven together to convey a narrative that resonates with observers on a visceral level.At the heart of creative painting lies the fusion of imagination and technique. Artists harness a diverse array of tools and mediums, from traditional brushes and acrylics to digital platforms and mixed media, to bring their visions to life. Experimentation becomes second nature as they explore unconventional methods and push the boundaries of conventional artistry. Through trial and error, they uncover new possibilities, breathing life into their creations with each daring stroke.One of the most captivating aspects of creative painting is its ability to evoke a myriad of emotions and interpretations. Each piece serves as a mirror reflecting the artist's psyche, inviting viewers to immerse themselves in its depths and derive their own meanings. Whether it be an abstract composition bursting with energy or a serene landscape imbued with tranquility, the power of artistic expression lies in its ability to forge connections and foster dialogue acrosscultural and linguistic divides.Moreover, creative painting serves as a catalyst for social change and introspection. Artists often utilize their craft as a means of shedding light on pressing issues, sparking conversations, and challenging societal norms. Through poignant imagery and symbolism, they confront injustices, advocate for marginalized communities, and provoke thought-provoking discourse. In doing so, they harness the transformative power of art to incite positive change and inspire collective action.Furthermore, the act of creating art is inherently therapeutic, serving as a sanctuary for the mind and soul. Engaging in the process of painting allows individuals to escape the rigors of daily life and immerse themselves in a world of boundless possibility. 
It fosters a sense of mindfulness and presence, enabling artists to channel their innermost thoughts and emotions onto the canvas with cathartic release. In this way, creative painting becomes a form of self-care, nurturing mental well-being and fostering resilience in the face of adversity.In conclusion, creative painting stands as a testament to the enduring power of human creativity and imagination. It transcends linguistic and cultural barriers, forging connections and fostering empathy among individuals from all walks of life. Through experimentation, innovation, and introspection, artists breathe life into their creations, leaving an indelible mark on the world and inspiring generations to come. As we navigate the complexities of the modern world, let us embrace the transformative potential of creative painting as a catalyst for change, growth, and enlightenment.。

Image Parsing: Unifying Segmentation, Detection, and Recognition


Image Parsing:Unifying Segmentation,Detection,and Recognition Zhuowen Tu,Xiangrong Chen,Alan L.Yuille,Song-Chun ZhuUniversity of California,Los AngelesLos Angeles,CA,90095ztu,xrchen,yuille,sczhu@AbstractWe propose a general framework for parsing images into regions and objects.In this framework,the detection and recognition of objects proceed simultaneously with image segmentation in a competitive and cooperative manner.We illustrate our approach on natural images of complex c-ity scenes where the objects of primary interest are faces and text.This method makes use of bottom-up proposals combined with top-down generative models using the Data Driven Markov Chain Monte Carlo(DDMCMC)algorith-m which is guaranteed to converge to the optimal estimate asymptotically.More precisely,we define generative model-s for faces,text,and generic regions–e.g.shading,texture, and clutter.These models are activated by bottom-up pro-posals.The proposals for faces and text are learnt using a probabilistic version of AdaBoost.The DDMCMC com-bines reversible jump and diffusion dynamics to enable the generative models to explain the input images in a competi-tive and cooperative manner.Our experiments illustrate the advantages and importance of combining bottom-up and top-down models and of performing segmentation and ob-ject detection/recognition simultaneously.1.IntroductionThis paper presents an framework for parsing images into regions and objects.We demonstrate a specific application on outdoor/indoor scenes where image segmentation,the detection of faces,and the detection and reading of text are combined in an integrated framework.Fig.1shows an ex-ample in which a natural image is decomposed into gener-ic regions(e.g.texture or shading),text,and faces.The tasks of obtaining these three constituents have tradition-ally been studied separately sometimes with detection and recognition being performed after segmentation[10],and sometimes with detection being a separate process,see for example[20].But there is no commonly accepted method of combining segmentation with recognition.In this paper we show that our image parsing approach gives a princi-pled way for addressing all three tasks simultaneously in a common framework which enables them to be solved inaa.An example imageb.Genericregionsc.Textd.FacesFigure1:Illustration of parsing an image into generic re-gions(e.g.texture and shading)and objects.An example image(a)is decomposed into two layers:(b).the region layer and the object layer which is further divided into text (c)and faces(d).cooperative and competitive manner.There are clear ad-vantages to solving these tasks at the same time.For exam-ple,examination of the Berkeley dataset[11]suggests that human observers sometimes use object specific knowledge to perform segmentation but this knowledge is not used by current computer vision segmentation algorithms[9,18].In addition,as we will show,segmentation algorithms can help object detection by“explaining away”shadows and occlud-ers.The application in this paper is motivated by the goal of designing a computer vision system for the blind that can segment images and detect and recognize important objects such as faces and text.We formulate the problem as Bayesian inference.Top-down generative models are used to describe how objects and generic region models(e.g.texture and shading)gener-ate the image intensities.The goal of image parsing is to in-vert this process and represent an input image by the param-eters of the generative models that best describe it 
together with the boundaries of the regions and objects.It is crucial that all the generative models generate raw image intensi-ties.This enables us to directly compare different models (e.g.by model selection)and thereby treat segmentation, detection and recognition in an integrated framework.For example,this requirement prevents us from using Hinton et al’s generative models for text[14]because these models generate image features and not raw intensities.In order to estimate these parameters we use bottom-up proposals,based on low-level cues,to guide the search through the parameter space.More specifically,we com-bine bottom-up and top-down cues using the Data Driv-en Markov Chain Monte Carlo(DDMCMC)algorithm [18,19]which is,in theory,guaranteed to converge to the MAP estimate asymptotically.The bottom-up proposals for faces and text are learnt from training data by using a variant of the AdaBoost al-gorithm that outputs conditional probabilities[5]instead of classifications[20].The use of conditional probabili-ties means that we do not have to make afirm decision based on AdaBoost and can instead use evidence from the generative models to resolve difficult cases.This improves performance particularly in the presence of occluders and shadows(which can be explained away by the other region models).The top-down generative models for faces and text are based on models with parameters estimated from train-ing data.The bottom-up proposals and top-down generative models for generic regions are those used in previous work [18,19]where they were tested on several hundred images.The structure of this paper is as follow.Section(2) briefly reviews previous work on segmentation,face de-tection,and text detection and reading.In section(3),we describe the representation and the DDMCMC algorithm. 
Section (4) describes the generative models for faces and text. In section (5), we describe the use of the AdaBoost algorithm to learn conditional probability distributions. The design of the DDMCMC jump and diffusion dynamics is briefly discussed in section (6). Section (7) shows the results of using AdaBoost by itself and then the results obtained by our image parsing approach.

2. Related Work on Segmentation, Detection and Recognition

No existing work, to the best of our knowledge, combines segmentation, detection, and recognition in an integrated framework. These tasks have often been treated independently and/or sequentially. For example, Marr ([10]) proposed performing high-level tasks, such as object recognition, on intermediate representations obtained by segmentation and grouping. Current segmentation algorithms [9, 18] perform well on large datasets although they do not yet achieve the ground truth results obtained by human subjects [11]. From one perspective, the work in this paper extends the DDMCMC segmentation algorithm ([18]) by introducing object-specific models.

There has also been impressive work using image features for face detection [3, 15, 17, 21, 22, 20] and for text detection and recognition [8, 16, 1]. These approaches can all be used to specify bottom-up proposals for object detection in DDMCMC. It is most convenient for us to use the AdaBoost approach ([20]) because of its effectiveness and its probabilistic interpretation, see section (5). The generative models we use are based on generic region models (e.g. texture and shading) [18] and deformable templates [6, 7]. Similar models were proposed for text ([14]) but cannot be used here because they generate image features and not intensities.

3. Bayesian Formulation

We formulate image parsing as Bayesian inference. A scene interpretation includes a number of generic regions, letters and digits, and faces, denoted by $W_r$, $W_t$, and $W_f$ respectively. The region representation includes the number of regions $K_r$, and each region has a label $l_i$ and parameters $\Theta_i$ for its intensity model, so that
$$W_r = \big(K_r, \{(l_i, \Theta_i) : i = 1, \dots, K_r\}\big).$$
Similarly, we have $W_t = (K_t, \{(l_j, \Theta_j)\})$ and $W_f = (K_f, \{(l_k, \Theta_k)\})$ for the text and face layers. Thus, the solution vector is of the form
$$W = (W_r, W_t, W_f).$$
The goal is to estimate the most probable interpretation $W^{\ast}$ of an input image $I$. This requires computing the $W$ that maximizes the a posteriori probability over $\Omega$, the solution space of $W$,
$$W^{\ast} = \arg\max_{W \in \Omega} p(W \mid I) \propto p(I \mid W)\, p(W). \tag{1}$$
The likelihood $p(I \mid W)$ specifies the image generating processes from $W$ to $I$ and the prior probability $p(W)$ represents our prior knowledge of the world. By assuming mutual independence between $W_r$, $W_t$, and $W_f$ we have the prior model
$$p(W) = p(W_r)\, p(W_t)\, p(W_f).$$
To make generic regions, text, and faces directly comparable, we give the priors for the three layers the same form; details of the generic region model can be found in [18], and analogous priors $p(W_t)$ and $p(W_f)$ are defined for the text and face layers. The likelihood function can be written as the product of the generative models for the individual regions, letters, and faces.

We use the DDMCMC algorithm for estimating $W^{\ast}$. DDMCMC [18] is a version of the Metropolis-Hastings algorithm and hence is guaranteed to converge to samples from the posterior. It employs data-driven bottom-up proposals to drive the convergence of top-down generative models. Moves from a state $W$ to a state $W'$ are selected by sampling from a proposal distribution $q(W \rightarrow W')$ and they are accepted with probability
$$\alpha(W \rightarrow W') = \min\!\left(1,\; \frac{q(W' \rightarrow W)\, p(W' \mid I)}{q(W \rightarrow W')\, p(W \mid I)}\right).$$

Figure 2: Illustration of the DDMCMC approach for segmentation, detection, and recognition.

These moves can be subdivided into two basic types: jumps, which realize moves between different dimensions, and diffusions, which realize moves within a fixed dimension.
Firstly,jump moves which are discrete and correspond to the birth/death of region hypotheses,splitting and merging of regions,and switching the model for a region(e.g.changing from a texture model to a spline model),changing a generic region into a face,creating a letter,etc.Secondly,diffusion processes which correspond to continuous changes such as altering the boundary shape of a region,text or a face and changing the parameters of a model used to describe a re-gion.Fig.2gives a schematic illustration of howthe jump and diffusion dynamics proceed driven by bottom-up pro-posals.The bottom up proposals for faces and text are learnt us-ing a probabilistic version of AdaBoost,see section(5).The bottom up proposals for generic regions(e.g.shading and texture)were described in[18].In summary,bottom-up proposals drive top-down gener-ative models which compete with each other to explain the image.4.Generative ModelsThis section describes our generative models.We will con-centrate on our text model for space.The models will be used for text detection and reading.Figure3:Random samples drawn from the generative mod-els for letters and digits.In natural scenes,text such as street signs and store names are usually painted in regular fonts,which can be modeled by deformable templates.We define a set of tem-plates,,corresponding to ten digits and twenty six letters in upper case and lower case. Each template is represented by an outer boundary and0or up to2inner boundaries,each of which is modeled by twentyfive control points.Given an input image,we need to inference how many text symbols there are,which type they are and what deformations they have.From the standard shape of each text,we denote its shape by where is the index of template,in-cludes positions of control points,and denotes the affine transformation of.Thus,the prior distribution on can be specified asHere is a uniform distribution on all the digits and letters.is the probability of perturbation of control points w.r.t.the template and it is computed by the distance between contour points of and the template .Using quadratic B-Splines,the contour points can be computed as and.Thus the distribution are expressed aswhere is the distance between con-tour point and.The prior on affine trans-formation is defined such that severe rotation and distor-tion are penalized.Figure(3)shows some samples drawnfrom the above model.The intensities of the text exhibit smooth shading pattern and we use a quadratic formwith parameters.Therefore,the gen-erative model for pixel on the text isFigure4:Samples drawn from the PCA face model.The generative model for faces is simpler and uses tech-niques like Principal Component Analysis(PCA)to obtain representations of the faces.Lower level features,also mod-eled by PCA,can be added[12].Fig.4shows some faces sampled from the PCA model.We also add other features such as occlusion process,as described in Hallinan et al[7].5.AdaBoost and Conditional Proba-bilitiesThe standard AdaBoost algorithm,see for example[20], produces a binary decision–e.g.face or non-face.Here we follow Friedman et al[5]and allow AdaBoost to esti-mate the conditional probabilities instead.Standard AdaBoost learns a“strong classifier”by combining a set of“weak classifiers”using a set of weights:where the selection of features and weights are learned through supervised training off-line[4].Our variant of AdaBoost outputs conditional probabili-ties and is based on the following theorem[5].Theorem.The AdaBoost algorithm trained on data from two 
classes converges,in probability,to estimates of the conditional distributions of the data:(3)(4)We use AdaBoost to learn these conditional probability distributions so that they can activate our generative mod-els(in practice,the conditional probabilities are extremely small for almost all parts of an image).This allows us to avoid premature decisions about the presence or absence of a face.By contrast,standard AdaBoost can be thought of as using these conditional distributions for classification by the log-likelihood ratio test.5.1.AdaBoost TrainingWe used standard AdaBoost training methods[4,5]com-bined with Viola and Jones’cascade approach using asym-metric weighting[20].The cascade enables the algorithm to rule out most of the image as face,or text,locations with a few tests and allows computational resources to be con-centrated on the more challenging parts of the images(i.e. in our terminology,regions where the conditional probabil-ities arenon-negligible).a.Text(From these,we extracted textsegments.)b.FacesFigure5:Positive training examples for AdaBoost.Our text database contains561text images,some of which can be seen in Fig.5.They are extracted by hand from162static images of San Francisco street sceens.More than half of the images were taken by blind volunteers(so as to simulate the conditions under which our system will eventually be used).We divided each text image into sev-eral overlapping text segments withfixed width-to-height ration2:1.There are in total7,000text segments in the positive training set.The negative examples were obtained by a bootstrap process similar to Drucker et al[2].First we selected negative examples by randomly sampling from windows in the image dataset.After training with these samples,we applied the AdaBoost algorithm to classify all windows in the training images(at a range of sizes).Those misclassified as text were then used as negative examplesfor learning conditional distributions.The image regions most easily confused with text were vegetation,repetitive structures such as railings or building facades,and some chance patterns.The features used for AdaBoost were im-age tests corresponding to the statistics of elementaryfilters –see technical report for more details.The AdaBoost for faces was trained in a similar way. This time we used Haar basis vectors[20]as elementary features.We used the FERET[13]database for our positive examples,see Fig.5,and by allowing small rotation and translation transformation we had5,000positive examples. 
We used the same strategy as described above(for text)to obtain negative examples.In both cases,we tested AdaBoost for detection(i.e.for classification)using a number of different thresholds.In a-greement with previous work on faces[20],AdaBoost gave very high performance with low false positives and false negatives,see table(1).But the low error rates are slightly misleading because of the enormous number of windows in each image,see table(1).This means that by varying the threshold,we can either eliminate the false positives or the false negatives but not both at the same time.We illustrate this by showing the face regions and text regions proposed by AdaBoost infigure(6).If we attempt classification by putting a threshold then we can only correctly detect all the faces at the expense of false positives.Object False Positive False Negative Images Subwindows Face6526162355,960,040 Face91814162355,960,040 Face75421162355,960,040 Text118273520,183,316 Text187953520,183,316 Table1:Performance of AdaBoost at different thresholds.Instead,we prefer to use AdaBoost as proposals to gen-erative models.Also,generic region proposals canfind text that AdaBoost misses,for example,the‘9’in the bottom panel offigure(6)will fail to be detected by AdaBoost for text,but will be detected as a generic“shading region”and later recognized as a‘9’.putation and algorithmGiven the mixture models in the formulation and our inter-est in obtaining nearly globally optimal solutions,we design Markov chains to simulate walks in the solution space.6.1.Diffusion equationsGiven withfixed number of generic regions,text,and faces,and their model parameters,the interactions between these elements are governed by PDEs for the boundary and template deformation.Fig.7illustrates the motion.The Figure6:The boxes show faces and text as detected by AdaBoost.Observe the false positives due to vegetation, tree structure,and random image patterns.It is impossible to select a threshold which has no false positives and false negatives for this image.Instead we use AdaBoost to out-put conditional probabilities,which will take their biggest values in the boxes,which are used in the DDMCMC algo-rithm.Figure7:The diffusion and evolution of the boundaries is driven by the competition PDEs between regions.PDEs are derived as greedy steps for minimizing the en-ergy functions(or minus log-posterior probability)through variational calculus,especially the Green’s theory.For a boundary whose left and right components are regions or faces,its motion equation is similar as the one in the region competition algorithm[23].There are three energy terms for region:one for the likelihood,and two for the prior on area and perimeter defined in eqns.(2).Likewise,for a letterLet be a point on the boundary of and,i.e.The motion equation for control points can be obtained as where is the Jacobian matrix for the spline function. Thus,control points are moved by the forces transferredfrom boundary points through this motion equation.6.2.Jump dynamicsStructural changes in the solution are realized by Markov chain jumps(see[18]).We design the following reversible jumps between:(i)two regions–model switching:(ii)a region and a text:(iii)a region and a face:(iv)split or merge a region:(v)birth or death of a text:The Markov chain selects one of the above moves at each time,triggered by bottom-up compatibility conditions. 
7. Experiments

We test the proposed image parsing algorithm on a number of outdoor/indoor images. The speed is comparable to segmentation methods such as normalized cuts [9]. A detailed description and demonstrations of convergence of the basic DDMCMC paradigm can be seen in [18].

The results of our experiments are shown in three ways: (i) synthesized images sampled from the generative models using the parameters and boundaries estimated by the DDMCMC algorithm, (ii) the segmentation boundaries of the image, and (iii) the text and faces extracted from the image, with text symbols indicating the text that has been correctly read by the algorithm. Fig. 9 shows that we can obtain segmentation, face detection (at a range of scales), and text detection with correct text reading. Moreover, the synthesized images are fairly realistic.

High-level knowledge helps segmentation to overcome the problem of oversegmentation and provides better synthesis in comparison to [18]. Segmentation supports the recognition of objects. Intuitively, the generative models for faces, text, texture, and shading compete to explain the image data. But this competition also enables cooperation. For example, the dark glasses on the two women in Fig. 8a are detected as generic "shading regions" and not as part of the faces. They are then treated as "outlier" data which the face model does not need to explain, and this increases the robustness of the face detection. In Fig. 8d, we show the synthesized faces with the sun-glasses removed. The Parking image in the third row of Fig. 9 illustrates another example of cooperativity. For this image, the bottom-up AdaBoost text model failed to propose the digit "9" as a text region, see Fig. 9. However, the generic region processes detected it as a homogeneous image region and then proposed it as a letter "9", which was confirmed by the generative model.

Figure 8: Parsing a close-up of the Parking Image. a. Input image. b. Boundaries. c. Synthesis 1. d. Synthesis 2. Generic "shading region" processes detect the dark glasses, so the face model does not have to explain this part of the data. Otherwise the face model would have difficulty because it would try to fit the glasses to eyes. Standard AdaBoost would only correctly classify these faces at the expense of false positives, see Fig. 6.

The Street Image, see the fourth row of Fig. 9, shows an example where the generative models for faces were required to reject face regions wrongly proposed by AdaBoost, see Fig. 6. Moreover, this example shows cooperativity because the shading region models were used to "explain away" shadows that otherwise would have disrupted the detection and reading of the text (observe the heavy shading patterns on the text "Heights Optical").

The ability to synthesize the image after estimating the parameters is an advantage of our Bayesian approach, see [18]. The synthesis helps illustrate the successes, and sometimes the weaknesses, of our generative models. Moreover, the synthesized images show how much information about the image has been captured by our models. In Table 2, we show the number of bytes used in our representation and compare them to the jpeg compression of the equivalent images. Image encoding is not the goal of our current work, however, and more sophisticated generative models would be needed to synthesize very realistic images. Nevertheless, our synthesized images are fair approximations, and we could reduce the coding cost of our representation substantially by encoding the boundaries more efficiently (at present, we code boundary pixels independently).
Table 2: Comparison of the number of bytes required by jpeg and by our representation for each image.

  Image            Stop     Soccer   Parking   Street   Westwood
  jpeg (bytes)     23,998   19,563   23,311    26,170   27,790
  ours (bytes)      4,886    3,971    5,013     6,346    9,687

8. Summary and Conclusions

This paper has introduced a framework for image parsing by defining generative models for the processes that create images, including specific objects and generic regions such as shading and texture. Bottom-up proposals are learnt by the AdaBoost algorithm, which provides conditional probabilities for the presence of objects in the image. These conditional probabilities enable inference by rapid search through the parameters of the generative models, and the segmentation boundaries, using the DDMCMC algorithm.

We implement our system using generative models for text and faces combined with generic models for shaded and textured regions. Our approach enables these different models to compete and cooperate to describe the input images. We were able to segment the images, detect faces, and detect and read text in city scenes. Our experiments showed several cases where the shading models helped face and text detection by explaining away shadows and occluders (sun-glasses). In turn, the text and face models improved the quality of the segmentations. The current limitation of our approach lies in the limited class of objects we currently model. This limitation was motivated by our application goal of detecting text and faces for the visually disabled. But, in principle, our approach can include broad types of objects.

Acknowledgments

This work is supported by the National Institutes of Health (NEI) RO1-EY012691-04 and NSF grant 0240148. The authors thank the Smith-Kettlewell research institute for providing us with text training images.

References

[1] S. Belongie, J. Malik, and J. Puzicha, "Matching shapes", Proc. of ICCV, 2001.
[2] H. Drucker, R. Schapire, and P. Simard, "Boosting performance in neural networks," Intl. J. Pattern Rec. and Artificial Intelligence, vol. 7, no. 4, 1993.
[3] F. Fleuret and D. Geman, "Coarse-to-fine face detection", IJCV, June 2000.
[4] Y. Freund and R. Schapire, "Experiments with a new boosting algorithm", Proc. of ICML, 1996.
[5] J. Friedman, T. Hastie, and R. Tibshirani, "Additive logistic regression: a statistical view of boosting", Dept. of Statistics, Stanford Univ., Technical Report, 1998.
[6] U. Grenander, Y. Chow, and D. Keenan, HANDS: A Pattern Theoretic Study of Biological Shapes, Springer-Verlag, 1990.
[7] P. Hallinan, G. Gordon, A. Yuille, P. Giblin, and D. Mumford, Two and Three Dimensional Patterns of the Face, A K Peters, 1999.
[8] A. K. Jain and B. Yu, "Automatic text localization in images and video frames", Pattern Recognition, 31(12), 1998.
[9] J. Malik, S. Belongie, T. Leung, and J. Shi, "Contour and texture analysis for image segmentation", IJCV, vol. 43, no. 1, 2001.
[10] D. Marr, Vision, W. H. Freeman and Co., San Francisco, 1982.
[11] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics", Proc. of ICCV, 2001.
[12] B. Moghaddam and A. Pentland, "Probabilistic visual learning for object representation", IEEE Trans. PAMI, vol. 19, no. 7, 1997.
[13] P. J. Phillips, H. Wechsler, J. Huang, and P. Rauss, "The FERET database and evaluation procedure for face recognition algorithms", Image and Vision Computing J., vol. 16, no. 5, 1998.
[14] M. Revow, C. K. I. Williams, and G. E. Hinton, "Using generative models for handwritten digit recognition", IEEE Trans. PAMI, vol. 18, 1996.
[15] H. Rowley, S. Baluja, and T. Kanade, "Neural network-based face detection", IEEE Trans. PAMI, vol. 20, 1998.
[16] T. Sato, T. Kanade, E. Hughes, and M. Smith, "Video OCR for digital news archives," IEEE Intl. Workshop on Content-Based Access of Image and Video Databases, Jan. 1998.
[17] H. Schneiderman and T. Kanade, "A statistical method for 3D object detection applied to faces and cars", Proc. of Computer Vision and Pattern Recognition, 2000.
[18] Z. Tu and S. C. Zhu, "Image segmentation by Data Driven Markov chain Monte Carlo", IEEE Trans. PAMI, vol. 24, no. 5, 2002.
[19] Z. Tu and S. C. Zhu, "Parsing images into regions and curve processes", Proc. of ECCV, June 2002.
[20] P. Viola and M. Jones, "Fast and robust classification using asymmetric AdaBoost and a detector cascade", Proc. of NIPS, 2001.
[21] M. Weber, W. Einhäuser, M. Welling, and P. Perona, "Viewpoint-invariant learning and detection of human heads", Proc. of Int. Conf. Automatic Face and Gesture Recognition, 2000.
[22] Ming-Hsuan Yang, N. Ahuja, and D. Kriegman, "Face detection using mixtures of linear subspaces", Proc. of Int. Conf. Automatic Face and Gesture Recognition, 2000.
[23] S. C. Zhu and A. L. Yuille, "Region competition," IEEE Trans. PAMI, vol. 18, no. 9, 1996.

Figure 9: Results of segmentation and recognition on several outdoor/indoor images: Stop sign (row 1), Soccer (row 2), Parking (row 3), Street (row 4), and Westwood (row 5). a. Input image. b. Region layer. c. Object layer. d. Synthesis image.

Abstract— Extensive literature has been written on occupancy grid mapping for different sensors. When stereo vision is applied to the occupancy grid framework it is common, however, to use sensor models that were originally conceived for other sensors such as sonar. Although sonar provides a distance to the nearest obstacle for several directions, stereo has confidence measures available for each distance along each direction. The common approach is to take the highest-confidence distance as the correct one, but such an approach disregards mismatch errors inherent to stereo.

M. Brandão is with the Graduate School of Advanced Science and Engineering, Waseda University, 41-304, 17 Kikui-cho, Shinjuku-ku, Tokyo 162-0044, JAPAN. contact@takanishi.mech.waseda.ac.jp

Our contribution is as follows:

• A new occupancy grid formulation is proposed to better deal with sensors modeled by multimodal probability distributions such as stereo, integrating not only the least-cost estimate but the whole cost-curve.
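As a rough illustration of what "integrating the whole cost-curve" can mean in practice (a sketch under our own assumptions, not the authors' formulation): turn the per-depth matching costs of one stereo ray into a probability distribution, and let every candidate depth contribute to the log-odds update of the cells along that ray, instead of committing to the single least-cost depth. The softmax temperature `beta` and the hit/free probabilities below are illustrative values.

```python
import numpy as np

def costs_to_pdf(costs, beta=1.0):
    """Softmax over negative matching cost: low cost -> high probability."""
    w = np.exp(-beta * (costs - np.min(costs)))
    return w / np.sum(w)

def log_odds(p):
    return np.log(p / (1.0 - p))

def update_ray(cell_log_odds, costs, p_occ=0.7, p_free=0.3):
    """Update the cells along one ray of the grid (one cell per candidate depth).

    A winner-take-all model would apply the usual inverse sensor model only at
    the least-cost depth; here every candidate depth contributes with the weight
    the cost curve gives it, so ambiguous or multimodal stereo matches spread
    their evidence over several cells instead of being forced into one.
    """
    pdf = costs_to_pdf(costs)
    n = len(cell_log_odds)
    for k, w in enumerate(pdf):          # hypothesis: the obstacle sits at depth index k
        for i in range(n):
            if i < k:
                cell_log_odds[i] += w * log_odds(p_free)   # in front: likely free
            elif i == k:
                cell_log_odds[i] += w * log_odds(p_occ)    # at the obstacle: occupied
            # cells behind the hypothesised obstacle are left unobserved
    return cell_log_odds
```

Winner-take-all behaviour is recovered in the limit of a large `beta`, where the softmax collapses onto the least-cost depth.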

An English essay on the complete steps of growing garlic

英语种植大蒜全步骤作文全文共3篇示例,供读者参考篇1How to Plant Garlic - A Step-by-Step GuideAs a student interested in gardening, I've learned that growing your own food can be an incredibly rewarding experience. One crop that I've found particularly easy and satisfying to cultivate is garlic. Not only is it a versatile ingredient used in countless dishes, but the process of planting and harvesting garlic is relatively straightforward. In this essay, I'll walk you through the complete steps for planting garlic, from selecting the right bulbs to harvesting and curing your fragrant bounty.Step 1: Choose Your Garlic VarietyThe first step in planting garlic is to select the variety you want to grow. There are two main types: softneck and hardneck. Softneck garlic, such as artichoke and silverskin, is well-suited for braiding and has a longer shelf life. Hardneck varieties like rocambole and porcelain produce a hardy flower stalk called ascape and typically have more robust flavors. Consider your climate, as some varieties thrive better in certain regions.Step 2: Prepare the SoilGarlic grows best in well-drained, nutrient-rich soil with a pH between 6.0 and 7.0. Before planting, it's essential to prepare the soil by removing any weeds and working in compost orwell-rotted manure. This will provide the necessary nutrients for your garlic to thrive. If your soil is particularly heavy or clay-like, consider adding sand or perlite to improve drainage.Step 3: Break Apart the BulbsAbout a week before planting, separate the garlic bulbs into individual cloves, being careful not to remove the papery skins. Discard any cloves that appear damaged or moldy. This process, called "cracking," encourages the formation of new roots and shoots.Step 4: Plant the ClovesThe optimal time for planting garlic varies depending on your location, but generally, you'll want to plant in the fall before the ground freezes. In most regions, mid-October tomid-November is ideal. Plant the individual cloves with the pointed end facing up, about 2 inches deep and 6 inches apart,in rows spaced 12 inches apart. If you're planting hardneck garlic, increase the spacing to 8-10 inches between cloves to accommodate the larger bulbs.Step 5: Mulch and WaterAfter planting, water the area thoroughly to settle the soil around the cloves. Then, apply a 4-6 inch layer of mulch, such as straw, leaves, or bark chips. Mulching helps retain moisture, suppress weeds, and insulate the soil during winter.Step 6: Monitor Growth and CareThroughout the growing season, keep an eye on your garlic and water whenever the top inch of soil becomes dry. Hardneck varieties will send up a flower stalk called a scape in late spring or early summer. Remove these scapes to redirect the plant's energy into bulb growth. Softneck varieties do not typically produce scapes.Step 7: Harvest and CureAs summer approaches, the garlic leaves will begin to yellow and fall over, signaling that the bulbs are ready for harvest. Carefully dig around each plant with a garden fork, being careful not to damage the bulbs. Brush off any excess soil and allow thegarlic to cure in a warm, dry, well-ventilated area for 2-4 weeks. This curing process helps increase shelf life and improves flavor.Step 8: Storage and EnjoymentOnce the garlic has cured, you can trim the roots and stalks, leaving about an inch attached to the bulb. Store the cured garlic in a cool, dry place with plenty of air circulation, such as a mesh bag or open container. 
With proper care, your homegrown garlic can last for several months, adding flavor and aroma to countless dishes.Planting and harvesting garlic is a rewarding process that not only provides you with a fresh, flavorful crop but also teaches valuable lessons about patience, perseverance, and the joys of growing your own food. As a student, I've found that tending to a garlic patch is a therapeutic break from studies, reminding me of the simple pleasures in life. So why not give it a try? With a little effort and these step-by-step instructions, you too can experience the satisfaction of harvesting your very own garlic bulbs.篇2Growing Garlic: A Step-by-Step GuideAs a student, I've always been fascinated by the process of growing my own food. There's something incredibly satisfying about nurturing a tiny seed or clove into a bountiful harvest. This year, I decided to try my hand at growing garlic, a culinary staple that adds flavor and aroma to countless dishes. Little did I know that cultivating this humble allium would turn into such an enriching learning experience.Introduction to GarlicBefore we dive into the nitty-gritty of garlic planting, let's take a moment to appreciate this incredible plant. Garlic (Allium sativum) is a member of the Amaryllidaceae family, closely related to onions, leeks, and chives. It's believed to have originated in Central Asia and has been cultivated for thousands of years, prized for its pungent flavor and purported medicinal properties.There are two main types of garlic: hardneck and softneck. Hardneck varieties produce a rigid central stem and fewer, but larger, cloves. Softneck garlic, on the other hand, doesn't form a central stem and yields smaller, more numerous cloves that store better. For my first garlic-growing adventure, I opted for a hardneck variety called 'Music,' known for its rich flavor and easy-to-peel cloves.Step 1: Choosing and Preparing the SiteGarlic thrives in well-drained, nutrient-rich soil with a pH ranging from 6.0 to 7.0. I chose a sunny spot in my backyard that received at least six hours of direct sunlight per day. Before planting, I removed any weeds and worked the soil to a depth of about eight inches, incorporating compost or well-rotted manure to improve its fertility and drainage.Step 2: Selecting and Separating the ClovesGarlic is typically planted in the fall, around six to eight weeks before the ground freezes. I purchased a pound of 'Music' garlic bulbs from a local nursery, ensuring that they were certified disease-free and intended for planting (not grocery store garlic, which may have been treated to prevent sprouting).With great care, I separated the bulbs into individual cloves, being cautious not to remove the protective papery skins. I selected the largest, healthiest cloves for planting, setting aside any damaged or small ones for culinary use.Step 3: Planting the ClovesArmed with a trowel and a measuring tape, I embarked on the planting process. Garlic cloves should be planted pointy-end up, about two inches deep and six inches apart, with rows spaced12 to 18 inches apart. I dug a shallow trench, placed the cloves at the appropriate intervals, and covered them with soil, gently tamping it down.To my surprise, planting an entire pound of garlic took quite some time and effort. 
But as I worked, I couldn't help but marvel at the simplicity and beauty of the process – taking a humble clove and giving it the chance to flourish into a magnificent bulb.Step 4: Mulching and WateringAfter planting, I applied a two-to-three-inch layer of mulch around the garlic bed. Mulching helps retain soil moisture, suppress weeds, and insulate the cloves during the winter months. I used a mixture of shredded leaves and straw, being careful not to bury the planting site too deeply.Proper watering is crucial during the early stages of garlic growth. I made sure to keep the soil consistently moist but not waterlogged, providing about an inch of water per week if there was no significant rainfall. As the winter set in, I cut back on watering, allowing the garlic to go dormant and conserve its energy for the spring.Step 5: Monitoring and MaintenanceThroughout the winter, I kept a watchful eye on my garlic bed. As the first green shoots emerged, I carefully removed any mulch covering them, allowing them to bask in the sun. I also took care to weed the area regularly, ensuring that my precious garlic plants didn't have to compete for nutrients and water.As spring arrived, the garlic plants grew rapidly, sending up sturdy stems and developing their distinctive flat, strap-like leaves. I continued to water them regularly, adjusting the amount based on rainfall and soil moisture levels.Step 6: Scaping and HarvestingOne of the most exciting moments in the garlic-growing journey is the appearance of the scape – a long, curled stem that emerges from the center of the plant. These scapes, if left unchecked, can divert the plant's energy away from bulb development. So, as instructed, I carefully snipped them off, leaving about an inch protruding from the stem.Little did I know that these scapes would become a culinary treasure in their own right! I sautéed them with a bit of olive oil and salt, reveling in their mild garlic flavor and crunchy texture.As the summer solstice approached, the leaves of my garlic plants began to yellow and die back – a sign that the bulbs werenearing maturity. I carefully dug up a few test bulbs, checking for the desired tight wrapping of the outer skins and awell-developed clove structure.Finally, the day of the main harvest arrived! With great excitement, I loosened the soil around each plant and gently lifted the bulbs, brushing off any excess dirt. The sight of those plump, aromatic garlic heads was a true reward for my months of patient tending.Step 7: Curing and StorageAfter the harvest, the garlic bulbs needed to be cured – a process that dries and toughens the outer skins, ensuring better storage life. I carefully tied the plants together in small bundles and hung them in a well-ventilated, shaded area for two to three weeks.Once the tops had turned brown and papery, I trimmed the roots and cut the stems, leaving about an inch attached to the bulb. These beautifully cured garlic heads were then stored in a cool, dry place with good air circulation, ready to flavor my culinary creations throughout the year.ConclusionAs I reflect on my garlic-growing journey, I can't help but feel a sense of pride and accomplishment. What started as a curious experiment turned into a profound lesson in patience, perseverance, and the wonders of nature's cycles.Cultivating garlic taught me the importance of attention to detail, from carefully selecting and planting the cloves to monitoring and maintaining the plants throughout their growth. 
It also instilled in me a deep appreciation for the hard work and dedication of farmers and gardeners who put food on our tables.But beyond the practical lessons, growing garlic was a reminder of the simple joys in life – the satisfaction of nurturing a living thing, the excitement of witnessing its growth and transformation, and the gratification of enjoying the fruits (or, in this case, bulbs) of one's labor.As I savor the rich, pungent flavor of my homegrown garlic, I can't wait to embark on my next gardening adventure, eager to continue learning and exploring the wonders of the natural world, one seed (or clove) at a time.篇3How to Grow Garlic from Start to FinishHey there! Today I'm going to take you through the entire process of growing garlic from start to finish. Garlic is one of the most rewarding crops you can grow at home. Not only is it super low maintenance, but you get to enjoy that amazing garlicky flavor all year round freshly picked from your own garden. Preparing dishes with garlic you've grown yourself is just so satisfying.I got into growing garlic a couple of years ago when my grandpa showed me how to do it. He's been cultivating his own garlic for decades and makes the most delicious garlicky everything - garlicky mashed potatoes, garlicky pasta, you name it. When I tasted his homegrown garlic for the first time, I was blown away by how much more flavorful and robust it was compared to the stuff you get at the grocery store. That's when I decided I needed to learn his ways and start my own garlic patch.Getting StartedThe first step is to get your hands on some good quality garlic bulbs for planting. Don't just use the garlic from the supermarket as those are often treated to last longer on the shelves. Instead, buy certified disease-free garlic bulbs from a nursery or garden supplier. You'll want to plant the largest cloves from each bulb for the best results.Next, you'll need to prepare your planting bed. Garlic needs lots of nutrient-rich, well-draining soil to really thrive. My grandpa always says "If you want fat bulbs, start with fat soil!" So work in plenty of compost, manure or other organic matter into your planting area a few weeks before you plan to put the cloves in the ground. This gives everything time to break down nicely.When and How to PlantThe ideal planting time for garlic is in the fall, around late September to early November in most climates. You want to get those cloves in the ground about 6 weeks before the really cold winter temperatures hit. This allows the roots to establish themselves while the soil is still workable.To plant, just take each individual clove and push it vertically into the soil with the pointed end facing upwards. Space them about 6 inches apart in rows spaced a foot apart. The cloves should be buried with the tip just below the surface. I like to add a mulch of straw or leaves over the bed after planting to help retain moisture and protect the young sprouts from frost heave.Caring for Your GarlicOne of the best parts about growing garlic is how little maintenance it requires. The plants are extremely hardy and canshrug off cold temperatures, pests and disease way better than most crops. Still, there are a few things you can do to help them along:Weed regularly. 
Garlic doesn't like competition from other plants stealing its nutrients and moisture, especially when the bulbs start forming.Stop watering completely about 2 weeks before you plan to harvest to help cure and dry the bulbs.Add a layer of mulch each fall and spring to retain moisture and keep weeds down.Use an organic nitrogen fertilizer in early spring when the leaves start growing rapidly.Harvesting TimeMost garlic varieties take around 8-9 months from planting time until they're ready for harvesting. You'll know it's harvest season when about 60% of the plant's leaves have turned brown and dried up, usually sometime in late June or July.To harvest, gently loosen the soil with a garden fork and pull the entire plant up by the leaves, taking care to leave the stem attached to the bulb. Brush off any excess dirt clinging to thebulbs, but be careful not to knock off that outer papery covering as this helps protect the garlic.The Curing ProcessFreshly harvested garlic still needs to go through a curing process before you can store it long-term. This drying period helps lock in flavor and allows the wrappers to fully dry out so the garlic keeps longer. It's an essential step.To cure garlic, you'll need a warm, dry, well-ventilated spot out of direct sunlight. I like to tie the plants together in loose bundles and hang them from rafters in our shed. Let the bundles cure for 3-4 weeks until the wrappers are completely dry and papery.After curing, you can give the bulbs a gentle cleaning by removing any excess stems and roots and brushing off any remaining soil if needed. The garlic is now ready for long-term storage! I usually braid the stems together and hang the braids in our pantry. Properly cured garlic can keep for up to 8 months this way.So there you have it - the full rundown on growing garlic from planting to harvesting to curing for storage. It's such a satisfying process watching those tiny little cloves transform intobeautiful big bulbs bursting with flavor. Not to mention garlic is an insanely useful ingredient to always have on hand in the kitchen! I highly recommend giving it a try if you've never grown it before. Just make sure to save some of those biggest bulbs to replant next fall and keep the cycle going. Trust me, once you taste fresh homegrown garlic, you'll never go back to the stuff from the grocery store!。

simulink-3d-animation

Visualization of Simulink based applications, clockwise from bottom left: self-balancing robot, aircraft over terrain, automotive vehicle dynamics, and wind farm.Authoring and Importing 3D WorldsSimulink 3D Animation provides two editors for authoring and importing virtual reality worlds: V-Realm Builder and 3D World Editor.Building 3D WorldsV-Realm Builder in Simulink 3D Animation is a native VRML authoring tool that enables you to create 3D views and images of physical objects using VRML. 3D World Editor offers a hierarchical, tree-style view of VRML objects that make up the virtual world. It contains a set of object, texture, transform, and material libraries that are stored locally for reuse.3D World Editor showing a hierarchical, tree-style view (left) and scene preview (right) of components of a lunar module.Importing 3D Content from the WebYou can build 3D worlds with several3D authoring tools and export them to the VRML97 format for use with Simulink 3D Animation. In addition, you can download 3D content from the Web and use it to assemble detailed 3D scenes.Importing CAD Models3D World Editor lets you manipulate 3D VRML objects imported from most CAD packages for developing detailed 3D worlds that animate dynamic systems modeled in Simscape™,SimMechanics™, and Aerospace Blockset™. Simulink 3D Animation enables you to process VRML files created by CAD tools such as SolidWorks®and Pro/ENGINEER®. You can use the SimMechanics Link utility to automatically create SimMechanics models from CAD tools and add associated Simulink 3D Animation visualization to them.3D animation of the dynamics of an internal combustion engine modeled in SimMechanics (top) and trajectory trace of an aircraft computed using coordinate transformations from Aerospace Blockset (bottom).Animating 3D WorldsSimulink 3D Animation provides bidirectional MATLAB and Simulink interfaces to 3D worlds.MATLAB Interface to 3D WorldsFrom MATLAB, you can read and change the positions and other properties of VRML objects, read signals from VRML sensors, create callbacks from graphical tools, record animations, and map data onto 3D objects. You can use MATLAB Compiler™to generate standalone applications with Simulink 3D Animation functionality forroyalty-free deployment.MATLAB based 3D application compiled as an executable using MATLAB Compiler and deployed on an end-user machine running MATLAB Compiler Runtime.Simulink Interface to 3D WorldsYou can control the position, rotation, and size of a virtual object in a scene to visualize its motion and deformation. During simulation, VRML object properties in the scene can also be read into Simulink. A set of vector and matrix utilities for axis transformations enables associations of Simulink signals with properties of objects in your virtual world. You can adjust views relative to objects and display Simulink signals as text in the virtual world. You can also trace the 3D trajectory, generated using Curve Fitting Toolbox™, of an object in theassociated virtual scene. For example, you can perform flight-path visualization for the launch of a spacecraft.Modeling and simulation in Simulink of a multi-agent system animated with Simulink 3D Animation. The virtual world is linked through the VR Sink block (middle) and viewed with the Simulink 3D animation viewer (bottom).Viewing and Interacting with 3D WorldsSimulink 3D Animation provides VRML viewers that display your virtual worlds and record scene data. 
It also provides Simulink blocks and MATLAB functions for user interaction and virtual prototyping with 3D input devices, including 3D mice and force-feedback joysticks.VRML ViewersSimulink 3D Animation includes viewers that let you navigate the virtual world by zooming, panning, moving sideways, and rotating about points of interest known as viewpoints. In the virtual world, you can establish viewpoints that emphasize areas of interest, guide visitors, or observe an object in motion from different positions. During a simulation, you can switch between these viewpoints.Integrating with MATLAB Handle GraphicsThe Simulink 3D Animation viewer integrates with MATLAB figures so that you can combine virtual scenes withMATLAB Handle Graphics®and multiple views of one or more virtual worlds.Example of a graphical interface authored with MATLAB Handle Graphics. The screen shows a car suspension test on a race track that combines multiple 3D views (top), including speed data and visualizations of the steering wheel and force triads, with 2D graphics for trend analysis (bottom).Recording and Sharing AnimationsSimulink 3D Animation enables you to record scene data and share your work.Recording Scene DataSimulink 3D Animation enables you to control frame snapshots (captures) of a virtual scene, or record animations into video files. You can save a frame snapshot of the current viewer scene as a TIFF or PNG file. You can schedule and configure recordings of animation data into AVI video files and VRML animation files for future playback. You can use video and image processing techniques on frame snapshot and animation data. These approaches enable the development of control algorithms using a visual feedback loop through the link with a virtual reality environment instead of physical experimental setups.Enabling Collaborative EnvironmentsSimulink 3D Animation lets you view and interact with simulated virtual worlds on one machine that is running Simulink or on networked computers that are connected locally or via the Internet. In a collaborative work environment, you can view an animated virtual world on multiple client machines connected to a host server through TCP/IP protocol. When you work in an individual (nonnetworked) environment, your modeled system and the 3D visualization run on the same host.Visualizing Real-Time SimulationsSimulink 3D Animation contains functionality to visualize real-time simulations and connect with input hardware. You can use C code generated from Simulink models using Simulink Coder™to drive animations. This approach enhances your hardware-in-the-loop simulations or rapid prototyping applications on xPC Target™and Real-Time Windows Target™by providing a visual animation of your dynamic system model as it connects withreal-time hardware.Product Details, Demos, and System Requirements/products/3d-animationTrial Software/trialrequestSales/contactsalesTechnical Support/support Components of an xPC Target real-time testing environment that includes Simulink 3D Animation for rapid prototyping (top) and hardware-in-the-loop simulation (bottom).ResourcesOnline User Community /matlabcentral Training Services /training Third-Party Products and Services /connections Worldwide Contacts /contact。

An English essay introducing ceramics

英语作文介绍陶瓷Ceramics, as one of the oldest forms of art and craftsmanship, hold a significant place in human history. From utilitarian vessels to intricate decorative pieces, ceramics have evolved alongside human civilization, reflecting cultural, technological, and aesthetic advancements. In this essay, we will explore the multifaceted world of ceramics, delving into its history, production techniques, cultural significance, and contemporary relevance.Historical Significance:The history of ceramics spans millennia, with evidence of pottery production dating back to ancient civilizations such as the Chinese, Mesopotamians, Egyptians, and Greeks. These early ceramic artifacts served various purposes, ranging from storing food and water to religious rituals and burial practices. The development of ceramic techniques such as wheel-throwing, glazing, and firing revolutionizedthe craft, enabling artisans to create more refined and durable pieces.Production Techniques:Modern ceramic production encompasses a wide range of techniques, each with its own unique characteristics and applications. Wheel-throwing remains a popular method for creating symmetrical vessels, allowing artisans to shape clay on a rotating wheel with precision and control. Handbuilding techniques, such as coiling and slab construction, offer alternative approaches for creating sculptural forms and textured surfaces.Glazing plays a crucial role in ceramic production, providing both decorative and functional properties to the finished pieces. Glazes are composed of various minerals and oxides that melt and fuse to the surface of the clay during firing, creating a durable and often glossy finish. Different firing methods, such as electric, gas, and wood firing, produce distinct effects on the final appearance of ceramics, ranging from vibrant colors to subtle variationsin texture and sheen.Cultural Significance:Ceramics have played a central role in shaping cultural identities and traditions around the world. From the delicate porcelain of China to the vibrant majolica of Spain, each culture has developed its own unique ceramic styles and techniques, reflecting local aesthetics, materials, and craftsmanship. Ceramics have been used to express religious beliefs, social status, and artistic expression, serving as both functional objects and works of art.In many cultures, ceramic traditions are passed down through generations, preserving ancient techniques and designs while also evolving to suit contemporary tastes and lifestyles. Ceremonial vessels, ritualistic figurines, and ornamental tiles are just a few examples of how ceramics have been intertwined with religious, ceremonial, and domestic practices throughout history.Contemporary Relevance:Despite the advent of modern materials andmanufacturing processes, ceramics continue to thrive as a form of artistic expression and functional design. Contemporary ceramic artists explore a wide range of themes, styles, and techniques, pushing the boundaries oftraditional craftsmanship and experimenting with new forms and materials. From minimalist porcelain sculptures toavant-garde installations, ceramics remain a dynamic and evolving medium in the world of contemporary art.In addition to its artistic value, ceramics also play a vital role in various industries, including architecture, aerospace, and biomedical engineering. 
Advanced ceramics, such as alumina, zirconia, and silicon carbide, offer exceptional strength, heat resistance, and electrical insulation, making them ideal materials for high-tech applications ranging from aerospace components tobiomedical implants.In conclusion, ceramics embody a rich tapestry ofhistory, craftsmanship, and cultural significance that continues to inspire and captivate people around the world. From ancient pottery traditions to cutting-edge innovations, ceramics remain a timeless art form that bridges the past, present, and future.。

Stylized rendering techniques for scalable real-time 3d animation


“We’re searching here, trying to get away from the cut and dried handling of things all the way through—everything—and the only way to do it is to leave things open until we have completely explored every bit of it.”
CR Categories and Subject Descriptors: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.5 [Computer Graphics]: Three-Dimensional Graphics and Realism – Color, Shading, Shadowing, and Texture. Additional Key Words: real-time nonphotorealistic animation and rendering, silhouette edge detection, cartoon rendering, pencil sketch rendering, stylized rendering, cartoon effects.
1 Introduction
Recent advances in consumer-level graphics card performance are making a new domain of real-time 3D applications available for commodity platforms. Since the processor is no longer required to spend time rasterizing polygons, it can be utilized for other effects, such as the computation of subdivision surfaces, real-time physics, cloth modeling, realistic animation and inverse kinematics.
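As a hedged illustration of two of the techniques named in the keywords above, cartoon (cel) shading and silhouette edge detection, the sketch below shows the core per-vertex and per-edge tests in plain NumPy. It is not code from the paper; the band values, the mesh data layout, and the `edge_to_faces` adjacency map are assumptions, and a real-time renderer would typically fold these tests into the graphics pipeline rather than run them on the CPU.

```python
import numpy as np

def toon_shade(normals, light_dir, bands=(0.2, 0.6, 1.0)):
    """Quantize Lambertian shading into a few flat bands (basic cel shading)."""
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    lambert = np.clip(normals @ light, 0.0, 1.0)            # per-vertex n . l
    idx = np.minimum((lambert * len(bands)).astype(int), len(bands) - 1)
    return np.asarray(bands)[idx]                           # one flat tone per vertex

def silhouette_edges(face_normals, edge_to_faces, view_dir):
    """An edge is a silhouette edge when one adjacent face is front-facing and
    the other back-facing with respect to the viewer."""
    view = np.asarray(view_dir, dtype=float)
    facing = face_normals @ view > 0.0
    return [edge for edge, (fa, fb) in edge_to_faces.items() if facing[fa] != facing[fb]]
```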

Data-driven Approaches for Texture and Motion
by Alexei A. Efros

B.S. (University of Utah) 1997
M.S. (University of California, Berkeley) 1999

A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy

in Computer Science in the GRADUATE DIVISION of the UNIVERSITY OF CALIFORNIA, BERKELEY

Committee in charge:
Professor Jitendra Malik, Chair
Professor David A. Forsyth
Professor Stephen Palmer

Fall 2003

The dissertation of Alexei A. Efros is approved:

Chair                                   Date
                                        Date
                                        Date

University of California, Berkeley
Fall 2003

Data-driven Approaches for Texture and Motion
Copyright 2003 by Alexei A. Efros

Abstract

Data-driven Approaches for Texture and Motion
by Alexei A. Efros
Doctor of Philosophy in Computer Science

University of California, Berkeley
Professor Jitendra Malik, Chair

Access to vast amounts of visual information and the increase in computing power are facilitating the emergence of a new class of simple data-driven approaches for image analysis and synthesis. Instead of building an explicit parametric model of a phenomenon, the data-driven techniques rely on the underlying data to serve as its own representation. This dissertation presents work in two different domains, visual texture and human motion, where data-driven approaches have been particularly successful.

Part I describes two algorithms for texture synthesis, the problem of synthesizing visual texture (e.g. grass, bricks, pebbles) from an example image. The goal is to produce novel samples of a given texture, which, while not identical, will be perceived by humans as the same texture. The proposed texture synthesis process grows a new image outward from an initial seed, one pixel/patch at a time. A Markov random field model is assumed, and the conditional distribution of a pixel/patch given all its neighbors synthesized so far is estimated by querying the sample image and finding all similar neighborhoods. The degree of randomness is controlled by a single perceptually intuitive parameter. The method aims at preserving as much local structure as possible and produces good results for a wide variety of synthetic and real-world textures. One discussed application of the method is texture transfer, a novel technique that allows texture from one object to be "painted" onto a different object.

Part II presents an algorithm for the analysis and synthesis of human motion from video. Instead of reconstructing a 3D model of the human figure (which is hard), the idea is to "explain" a novel motion sequence with bits and pieces from a large collection of stored video data. This simple method can be used for both action recognition as well as motion transfer: synthesizing a novel person imitating the actions of another person ("Do as I Do" synthesis) or performing actions according to the specified action labels ("Do as I Say" synthesis).
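As a concrete illustration of the pixel-based process described in the abstract (a simplified sketch, not the dissertation's actual implementation): to fill one pixel, the partially synthesized neighborhood around it is compared against every neighborhood of the example texture, the close matches are kept, and the center of one of them is copied at random. Grayscale images, a square window, brute-force search, and at least one already-filled neighbor are simplifying assumptions; `eps` stands in for the single randomness parameter mentioned above.

```python
import numpy as np

def synthesize_pixel(sample, out, filled, y, x, win=11, eps=0.1):
    """Pick a value for output pixel (y, x) by non-parametric sampling.

    sample : 2-D float array, the example texture
    out    : 2-D float array, the partially synthesized image
    filled : boolean mask marking which pixels of `out` are already synthesized
    (y, x) is assumed to lie at least win//2 pixels from the border of `out`.
    """
    h = win // 2
    nb = out[y - h:y + h + 1, x - h:x + h + 1]
    nb_mask = filled[y - h:y + h + 1, x - h:x + h + 1].astype(float)

    matches = []
    H, W = sample.shape
    for i in range(h, H - h):
        for j in range(h, W - h):
            patch = sample[i - h:i + h + 1, j - h:j + h + 1]
            # distance on the filled part of the neighborhood only
            d = np.sum(((patch - nb) ** 2) * nb_mask) / nb_mask.sum()
            matches.append((d, sample[i, j]))

    d_min = min(d for d, _ in matches)
    candidates = [v for d, v in matches if d <= d_min * (1.0 + eps)]
    return float(np.random.choice(candidates))
```

Keeping all sufficiently close matches and sampling among them, rather than always copying the single best match, is what lets one parameter trade off fidelity against randomness.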

Professor Jitendra Malik
Dissertation Committee Chair

To my chair, for supporting me all these years.

Contents

List of Figures
1 Introduction
  1.1 Learning by Plagiarism
  1.2 Overview

I Texture Synthesis
2 On Texture and its Synthesis
  2.1 What is texture?
  2.2 Texture Synthesis
    2.2.1 Previous work

3 Pixel-based Synthesis Algorithm
  3.1 The Algorithm
    3.1.1 Synthesizing one pixel
    3.1.2 Synthesizing texture
    3.1.3 Algorithm details
  3.2 Results
  3.3 Limitations
  3.4 Applications
  3.5 Follow-up Work

4 Image Quilting: Patch-based Synthesis Algorithm
  4.1 Image Quilting
    4.1.1 Minimum Error Boundary Cut
    4.1.2 The Image Quilting Algorithm
    4.1.3 Synthesis Results
  4.2 Conclusion

5 Application: Texture Transfer
  5.1 Texture Transfer
    5.1.1 Comparison to Pixel-based Approaches
  5.2 Other Applications

II Human Actions
6 Action Recognition at a Distance
    6.0.1 Related Work
  6.1 Measuring Motion Similarity
    6.1.1 Computing Motion Descriptors
  6.2 Classifying Actions
  6.3 Conclusion

7 Querying the Action Database
  7.1 Skeleton Transfer
  7.2 Action Synthesis
  7.3 Context-based Figure Correction
  7.4 Application: Actor Replacement
  7.5 Conclusion
