Fractal Image Coding (English)
Chapter 3: Image Compression Techniques in PACS

• 3.1 Image compression coding methods and their classification
• 3.2 Evaluation criteria for image coding: image quality and coding efficiency
• 3.3 Common image coding methods: 3.3.1 Statistical coding; 3.3.2 Predictive coding; 3.3.3 Transform coding
• 3.4 Overview of image compression standards commonly used in PACS: 3.4.1 The JPEG standard; 3.4.2 Overview of JPEG 2000

Medical images are generally described as bitmaps, i.e., the color, brightness, and other information at each position is recorded point by point.
A monochrome image carries only brightness information, expressed as gray levels (medical images often have more than 256 gray levels).
Color images are usually represented with the three RGB primaries: each point is given by the values of its red, green, and blue components.
The three RGB components can be stored as three separate matrices, or the whole image can be stored as a single matrix in which each entry consists of three values.
To guarantee accurate and reliable clinical diagnosis, digitized medical images are typically acquired with high spatial resolution and quantization precision, and therefore have large data volumes.
A typical medical image resolution is 2048 × 2048 pixels or higher, with a pixel depth of 8 to 16 bits per pixel.
For example, chest and mammographic X-ray films generally require a spatial resolution of 50 DPI (dots per inch) and a grayscale resolution of 4096 levels; such an image is typically 2048 × 2048 × 12 bits. MRI and similar modalities usually digitize tomographic images at a spatial resolution of 512 × 512 pixels with 12-bit gray levels, acquiring 40 or 80 slices per scan; at 512 × 512 points per frame, 40 frames total about 20 MB and 80 frames about 40 MB.
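The 20 MB figure above can be checked with quick arithmetic, under the common assumption that 12-bit samples are stored padded to 2 bytes each:

```python
frames = 40
bytes_per_pixel = 2  # 12-bit samples are normally padded to 16 bits in storage
size_bytes = frames * 512 * 512 * bytes_per_pixel
print(size_bytes / 2**20, "MiB")  # 20.0
```

With 80 frames the same calculation gives 40 MiB, matching the text.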
Printer resolution (DPI) • A printer's resolution is the number of dots it can print per inch (dots per inch, DPI), i.e., its print precision. It is an important measure of print quality and a basic index for judging a printer.
• Printers for home and small-to-medium office use should reach at least 300 to 720 DPI, but a higher DPI is not always better.
A Fractal Image Coding Algorithm Based on Pixel Sampling

Abstract: Fractal image coding is a novel and newly developed potential image compression technique based on local self-similarity. This paper proposes an improved scheme for fractal image coding based on pixel sampling. The improved scheme not only does not need any complex theoretical analysis, but also does not need to change the existing fractal decoding procedure; thus it can be brought into other fast fractal image encoding algorithms in a straightforward manner. Computer simulations on a set of standard images show that match searching time can be reduced substantially while the subjective image quality remains.
Applications of Fractal Geometry in Image Processing

Fractal geometry is a mathematical theory describing self-similar structures, with many fields of application; one of them is image processing.
Applying fractal geometry to image processing yields more accurate and efficient algorithms for image analysis, recognition, and transformation.
This article introduces applications of fractal geometry in image processing and discusses their advantages and challenges.
1. Fractal coding. Fractal coding applies fractal geometry to image compression.
Fractal coding searches for self-similar regions in an image and uses their characteristics for encoding and decoding; although the method is lossy, it can reconstruct images with little visible degradation even at high compression ratios.
Fractal coding divides the image into small blocks and achieves compression by computing the similarity between blocks.
By exploiting this self-similarity, fractal coding can reconstruct high-quality images from a small amount of data.
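The block-similarity search at the heart of fractal coding can be sketched in a few lines. The following is a minimal illustration, not the method of any particular paper: `best_match` is a hypothetical helper that, for one range block, finds the domain block (assumed already downsampled to the range-block size) whose affine gray-level transform s·D + o fits best in the mean-square sense.

```python
import numpy as np

def best_match(range_block, domain_blocks):
    """Return (mse, index, s, o) for the domain block whose affine
    gray-level transform s*D + o best approximates the range block.
    Illustrative sketch; all blocks are assumed to have equal shape."""
    r = range_block.ravel().astype(float)
    n = r.size
    best = None
    for idx, D in enumerate(domain_blocks):
        d = D.ravel().astype(float)
        # least-squares contrast s and brightness o for r ≈ s*d + o
        denom = n * (d @ d) - d.sum() ** 2
        s = 0.0 if denom == 0 else (n * (d @ r) - d.sum() * r.sum()) / denom
        o = (r.sum() - s * d.sum()) / n
        err = float(np.mean((s * d + o - r) ** 2))  # mean square error
        if best is None or err < best[0]:
            best = (err, idx, s, o)
    return best
```

A real encoder would additionally try the eight block isometries and quantize s and o before transmitting them.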
2. Fractal image generation. Fractal image generation uses fractal principles and algorithms to generate and transform images.
Through self-similarity, fractal generation can produce images with the diversity and complexity of natural scenes, such as trees and clouds.
Fractal generation can also zoom an image in or out without limit through iteration, giving fine control over image detail.
3. Texture synthesis. Fractal geometry is widely used in texture synthesis.
Texture synthesis generates new texture images that look like real images bearing a given texture.
The self-similarity and diversity of fractals can be exploited to generate realistic texture images.
Texture synthesis has important applications in game development, virtual reality, and related fields, improving the user experience.
4. Image segmentation and edge detection. Fractal geometry is also applied to image segmentation and edge detection.
Image segmentation partitions an image into regions or objects, while edge detection identifies edge information in the image.
By analyzing the geometric features of an image, fractal geometry supports effective segmentation and edge detection, providing strong support for image analysis and recognition.
Fractal geometry thus offers additional tools and methods for processing and analyzing images more effectively.
In practice, however, it also faces challenges such as high computational complexity and the selection and optimization of parameters, which call for further research.
In summary, fractal geometry has broad application prospects in image processing.
Through fractal coding, fractal image generation, texture synthesis, and image segmentation and edge detection, images can be processed and analyzed with high quality.
A Fast Fractal Coding Method Based on Hybrid Classification and Rectangular Partition

A hybrid classification method was proposed and used in a fast method for fractal coding with HV partition. Experimental results indicate that, in contrast with exhaustive search, the hybrid classification method can improve the speed of fractal coding much more without lowering the compression ratio or the quality of the decoded images. In contrast with the original uniform classification, the hybrid classification method can improve both the speed of fractal coding and the quality of decoded images; moreover, under some conditions it can achieve a higher compression ratio. Keywords: fractal image compression; iterated function system; HV partition; adaptive classification; hybrid classification
Fractal Image Coding Combined with DCT Compensation

(College of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China)
Abstract: Fractal coding is an image compression method built on the theory of fractal iterated function systems; it achieves high compression ratios, but encoding takes a long time and has high computational complexity.
To address these shortcomings of fractal compression while ensuring a clear improvement in image quality at high compression ratios, this paper combines fractal image compression with a discrete-cosine-transform approximation: by adjusting the gray-level transform, the best parent block and mapping are found so that the mean square error falls below the allowed tolerance, completing the encoding process.
Fractal Image Coding Combined with Discrete Cosine Transform Complement
ZHANG Ai-hua, CHANG Kang-kang
(College of Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China)
Abstract: Fractal coding is a new image compression method based on the fractal iterated function system. Although the method can achieve high compression, it takes a long encoding time and the coding has high computational complexity. In view of these deficiencies
7. Image Compression

Neighboring pixels are strongly correlated; compression is obtained by reducing or removing this correlation. (higher-order entropy)
Removing redundancy permits lossless compression of image data, but redundancy removal alone gives a far from sufficient compression ratio.
Why images can be compressed
Some information can be lost with little effect on visual quality: the human eye is insensitive to many variations in an image.
The lossless coding theorem establishes the lower bound on the bit rate that independent coding of each symbol can approach with zero coding-decoding error; the bound is approached to within ε, an arbitrarily small positive number.
Theoretical basis and compression limit of image compression
For a discrete source with memory, the bit-rate lower bound can be approached either by encoding N symbols jointly (e.g., for an N-th order Markov source), or by transforming the source into a memoryless one (removing the correlation) and then encoding each symbol independently.
The maximum compression ratio C an image can achieve is C = B / H(X), the ratio of the original average code length B (bits per symbol) to the source entropy H(X).
Decorrelation methods
Transform coding, predictive coding, vector quantization, run-length coding, etc.
Multiresolution image coding
Wavelet image coding, subband coding
Fractal image transform coding
Digital image compression
Introduction
What image compression is; why it is necessary and possible
Theoretical basis
Lossless image compression: Shannon's first (noiseless coding) theorem and entropy coding. Lossy image compression: Shannon's rate-distortion theory. Basic methods: image predictive coding and image transform coding.
Experiments confirm that the higher-order entropy of a high-order Markov process is clearly lower than its lower-order entropy.
How can the low-order entropy be made to approach the high-order entropy? The key is to remove the correlation in the image.
Lossless coding theory and entropy coding
Shannon's first theorem applied to image coding: the minimum bit rate achievable by lossless coding of a discrete memoryless source X is
min R = H(X) bits/symbol
where R is the transmission rate and H(X) is the entropy of the source X.
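As a concrete instance of this bound, the sketch below (an illustrative computation of the standard first-order entropy, not taken from the slides) evaluates H(X) for a small source and compares it with fixed-length coding:

```python
from collections import Counter
from math import log2

def entropy(symbols):
    """First-order entropy of a memoryless source, in bits/symbol."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in counts.values())

data = "aaaabbcd"      # p(a)=1/2, p(b)=1/4, p(c)=p(d)=1/8
H = entropy(data)      # 0.5*1 + 0.25*2 + 2*(0.125*3) = 1.75 bits/symbol
# Fixed-length coding of 4 symbols needs 2 bits/symbol, so the maximum
# lossless compression ratio for this source is C = 2 / 1.75 ≈ 1.14.
```

This also illustrates why redundancy removal alone compresses so little: the gain is bounded by the gap between the fixed code length and the entropy.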
Error Detection and Image Recovery for Fractal Image Coding

Yang Shouyi; Luo Weixiong; Mu Xiaomin
[Journal] Systems Engineering and Electronics (《系统工程与电子技术》)
[Year (Volume), Issue] 2002, 24(9)
[Abstract] For the bad image blocks produced when fractal-coded data is transmitted over a noisy channel, a detection method is proposed; a genetic algorithm then searches the image for blocks similar to each bad block, and the similar block replaces the bad one, achieving error concealment. Simulation results show that the proposed method detects bad blocks with a high detection rate, and that image quality improves markedly after the proposed error-concealment method is applied.
[Pages] 4 (pp. 9-12)
[Authors] Yang Shouyi; Luo Weixiong; Mu Xiaomin
[Affiliations] Department of Electronic Engineering, Beijing Institute of Technology, Beijing 100081; Department of Electronic Engineering, Beijing Institute of Technology, Beijing 100081; Department of Electronic Engineering, Zhengzhou University, Zhengzhou, Henan 450052
[Language] Chinese
[CLC Classification] TP751.1
Digital Image Processing (3rd Edition), He Dongjian, Chapter 9

Chapter 9: Image Coding
Vector quantization organizes scalar data into a sequence of k-dimensional vectors. Using a chosen distortion measure (mean square error, the lp norm, the maximum norm, etc.), it searches the codebook for the codeword with minimum distortion from the input vector and transmits only that codeword's index; the receiver looks up the codeword by index to reproduce the input vector. The core of vector quantization is codebook design. The classical design algorithm is LBG (from the initials of Linde, Buzo, and Gray), also known as the K-means algorithm. Codebook design seeks an optimal partition (e.g., with minimum mean square error) of M training vectors into N classes (N < M), taking the centroid of each class as a codeword.
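The LBG iteration described above can be sketched as plain k-means on the training vectors. This is a toy version under simplifying assumptions (random initialization, fixed iteration count); real codebook design adds splitting initialization and a distortion-based stopping rule.

```python
import numpy as np

def lbg_codebook(training, n_codes, iters=20, seed=0):
    """Toy LBG / k-means codebook design: alternately assign each training
    vector to its nearest codeword (squared-error distortion), then move
    each codeword to the centroid of its class."""
    training = np.asarray(training, dtype=float)
    rng = np.random.default_rng(seed)
    init = rng.choice(len(training), size=n_codes, replace=False)
    codebook = training[init].copy()
    for _ in range(iters):
        # nearest-codeword assignment
        dist = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        # centroid update; empty classes keep their previous codeword
        for k in range(n_codes):
            members = training[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, labels
```

Only the indices produced by `labels` would be transmitted; the decoder reproduces each vector by a table lookup in the same codebook.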
9.1.2
New image coding methods continue to appear, such as coding based on artificial neural networks, subband coding, fractal coding, wavelet coding, model-based coding, object-based coding, and semantic-based coding.
(2) Predictive coding. Predictive coding exploits the spatial or temporal redundancy of image data: it predicts the value of the current pixel (or block) from neighboring known pixels (or blocks), then quantizes and encodes the prediction error. It divides into intra-frame and inter-frame prediction; common schemes are differential pulse code modulation (DPCM) and motion compensation. Figures 9-1 and 9-2 show the block diagrams of lossless and lossy predictive coding systems respectively, each comprising an encoder and a decoder; the symbol encoder usually uses variable-length codes.
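A minimal sketch of the lossless predictive idea, using the simplest previous-sample predictor on one scan line (illustrative only; practical DPCM uses 2-D predictors and, in the lossy case, quantizes the residual inside the prediction loop):

```python
import numpy as np

def dpcm_encode(row):
    """Lossless 1-D DPCM: predict each sample by its left neighbor and
    keep the prediction residual; the first sample is sent as-is."""
    row = np.asarray(row, dtype=np.int32)
    residual = np.empty_like(row)
    residual[0] = row[0]
    residual[1:] = row[1:] - row[:-1]
    return residual

def dpcm_decode(residual):
    # the cumulative sum exactly undoes the differencing (lossless)
    return np.cumsum(residual)
```

The residuals cluster around zero for correlated pixels, so the variable-length symbol encoder that follows can represent them with fewer bits on average than the raw samples.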
Entropy is the theoretical limit of lossless coding: whenever the average code length is at least the entropy, a distortion-free code can be designed, and this is the theoretical basis of entropy coding. If all source symbols are represented by codewords of the same length, the code is fixed-length; otherwise it is variable-length. Variable-length coding assigns short codewords to frequent symbols and long codewords to rare ones, making the final average code length small. Huffman coding and Shannon-Fano coding are two such variable-length methods.
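The variable-length principle can be made concrete with a small Huffman construction (a compact sketch; a production coder would also handle the single-symbol edge case and emit a canonical code):

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman prefix code by repeatedly merging the two least
    probable nodes. freqs: {symbol: count}. Returns {symbol: bitstring}."""
    # each heap entry: [total weight, tie-break id, {symbol: partial code}]
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for s in lo[2]:
            lo[2][s] = "0" + lo[2][s]   # left branch
        for s in hi[2]:
            hi[2][s] = "1" + hi[2][s]   # right branch
        heapq.heappush(heap, [lo[0] + hi[0], next_id, {**lo[2], **hi[2]}])
        next_id += 1
    return heap[0][2]
```

For counts {a: 4, b: 2, c: 1, d: 1} the code lengths come out as 1, 2, 3, 3 bits, giving an average length of 1.75 bits/symbol, which equals the entropy of this particular source.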
FRACTAL IMAGE CODING
Po-kai Chen, Lizabeth Li
pokai@, lizli@

1. INTRODUCTION
Fractal coding employs an unconventional method of representing the original image with a series of transformations that map image blocks to smaller, similar blocks within the image. When recursively iterated on any initial image, the contractive transformations produce a sequence of images that will converge to an approximation of the original [1, 2]. The mappings considered in this paper are discrete, contractive block transformations involving either spatial contraction or the transformation of pixel values and locations, explored in Section 2. The bulk of the encoding time is spent finding block transformations that produce the lowest mean square error. To improve coding efficiency and time, in Section 3 we classify certain types of blocks in order to reduce the number of allowable transformations performed on those blocks. After encoding, the transformations with the least root mean square error are transmitted using a method we describe in Section 4. On the decoder end, the application of these transformations on an initial image will reach convergence in roughly 4 iterations. Section 5 contains sample images and results, including rate-distortion curves compared with JPEG and JPEG-2000. We present our concluding remarks in Section 6.

2. BLOCK MAPPING AND PROCESSING
2.1. Image Partitions
The first step in encoding the image involves partitioning the image into non-overlapping B x B pixel blocks called range blocks. For each range block, we search a pool of 2B x 2B domain blocks from the original M x M image for the most optimally similar domain block. These domain blocks are generated by sliding a 2B x 2B window across the original image, with spacing 1 < δ < B. For an M x M image, there will be ((M − 2B)/δ + 1)² domain blocks. We may utilize δ > 1 in order to speed up the algorithm. Another parameter imposed by our algorithm on the search for domain blocks is the allowable search distance. In order to decrease
encoding time, we also limit the search distance for domain blocks so that the algorithm does not search through the entire image for a minimal distortion domain block. In the event that a suitable domain block cannot be found that produces a root mean square error below an error threshold, we split the B x B range parent block into four B/2 x B/2 child blocks and rerun the search, now searching for domain blocks that are B x B in size. Child blocks are especially useful in capturing more detail in complex parts of an image, while parent blocks represent an efficient way of encoding areas of uniform pixel value. Fig. 1 represents the partitioning and mapping explained above.

Fig. 1. Parent and child block partitioning and mapping.

2.2. Transformations
Two basic transformations exist in fractal image coding. The first, the geometric transformation, maps from the larger domain block to the range block. If no other operations occur, this transformation results in the range block pixels being the average of the 4 corresponding pixels in the domain block. In addition to the geometric transformation performed for every range block, massic transformations may be employed. Massic transformations include transformations that either change pixel values or pixel locations, as listed below:
1. Change of Pixel Values
(a) Absorption to gray level β
(b) DC gray level shift by ∆c
(c) Contrast scaling by α
2. Change of Pixel Locations
(a) Identity
(b) Reflection about vertical axis
(c) Reflection about horizontal axis
(d) Reflection about first diagonal
(e) Reflection about second diagonal
(f) 90° rotation around center
(g) 180° rotation around center
(h) −90° rotation around center
The algorithm searches for the transformations that produce the lowest distortion measure of root mean square error between the transformed domain block and the target range block.

3. DOMAIN BLOCK POOLS
To improve efficiency of the algorithm, we perform a directed search by classifying the source domain blocks into 3 different
types: shade, midrange, and edge. Shade blocks refer to blocks that are approximately uniform in pixel values. They are fundamentally different from other domain blocks because we can directly evaluate whether or not a range block belongs to this category rather than searching for a domain block to map from. Our algorithm calculates the difference between the maximum and minimum pixel value within a block, and classifies it as a shade block if the difference is under a certain threshold. For the sample images presented in this paper, the threshold was strictly set to less than 2, on a gray scale between 0 and 255. Given classification into a shade block, the algorithm transmits the DC gray level β of that block, quantized to 6 bits within the range [0, 255]. This transformation is the quickest mapping of the algorithm since no search has to be made. Midrange blocks refer to textured domain blocks that exhibit a slight gradient in pixel values but no defined edge. Edge blocks have a strong gradient in pixel values. The actual processing of midrange and edge blocks overlaps. Everything that is not a shade block is classified as a midrange or edge block. Both midrange and edge blocks undergo contrast scaling and DC shift operations. The contrast scaling factor, α, is chosen to equalize the dynamic range between the domain block and range block, where dynamic range is defined as the difference between maximum and minimum pixel values within the block. To maintain contractivity and lower entropy respectively, we choose α to be strictly less than 1, and we quantize its value to 2 bits. The DC shift level, ∆c, is chosen afterwards so that the mean pixel values of the domain and range blocks (after contrast scaling) are equal. This DC shift level is chosen to be in the range [-128, 128] and quantized to 6 bits. At this point, our algorithm chooses the one of the eight transformations outlined in Section 2 that minimizes the root mean square error between the domain block and range block. If the identity
transformation is chosen, the domain block has essentially been classified as a midrange block. Any other transformation indicates the block as an edge block. Fig. 2 shows examples of operations performed on possible domain blocks.

Fig. 2. Example transformations performed on shade, midrange, and edge blocks respectively.

4. TRANSMISSION
This section contains information on the actual data transmitted after the transformation and encoding process. For each shade block encoded, only the absorption gray level β is sent. For midrange and edge blocks, the algorithm sends the position of the referenced domain block, the DC shift level ∆c, the scaling factor α, and the index of the transform used. We can differentially encode the location of the domain block since we search through the image sequentially. The number of bits used to encode each of these parameters is indicated in Fig. 3.
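The eight pixel-location transformations of Section 2.2 map directly onto array operations. The sketch below is our own illustrative reconstruction, not the authors' code; `best_isometry` picks the transform index with least RMS error, the same criterion the paper describes.

```python
import numpy as np

# The eight block isometries of fractal coding (identity, four
# reflections, three rotations), as operations on a square array.
ISOMETRIES = [
    lambda b: b,                 # (a) identity
    lambda b: b[:, ::-1],        # (b) reflection about vertical axis
    lambda b: b[::-1, :],        # (c) reflection about horizontal axis
    lambda b: b.T,               # (d) reflection about first diagonal
    lambda b: b[::-1, ::-1].T,   # (e) reflection about second diagonal
    lambda b: np.rot90(b, -1),   # (f) 90° rotation
    lambda b: np.rot90(b, 2),    # (g) 180° rotation
    lambda b: np.rot90(b, 1),    # (h) −90° rotation
]

def best_isometry(domain, target):
    """Index of the isometry minimizing RMS error against the target block."""
    errs = [np.sqrt(np.mean((t(domain) - target) ** 2)) for t in ISOMETRIES]
    return int(np.argmin(errs))
```

Three bits suffice to transmit the chosen index, matching the bit allocation described in Section 4.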
128primarily for en-coding time reasons.The RMSE threshold between split-ting from parent to child blocks was chosen to be 1,to ensure high PSNR in the resulting image.We limited our search distance to across 16pixels to also reduce encod-ing time.We will point out in a later figure that search distance has a slight effect on PSNR but a considerable impact on encoding time.The below images were iter-ated on a uniformly gray image.The first 4iterations of ”Lena”encoded with the above parameters are shown in Fig.4.Fig.4.”Lena”decoded after 4iterations,with PSNR =33.583,rate =5.353,encoding time 3248s.We see that within the first iteration,the reconstruc-tion has captured the essence of the original with a few inaccurate pixel blocks scattered through the image.Vi-sually,we see very little difference between the second and last iteration,although root mean square error de-creases with each iteration.Our algorithm has the bene-fit of decoding very quickly and converging in very few iterations.We wanted to try the decoder on a fractal image as well.The first 4iterations of ”Flower”encoded with the above parameters are shown in Fig. 
5.We found that the fractal encoder did not perform better on the fractal image as expected.Several possible reasons exist for this finding.The encoder must still search through the possi-ble domain blocks to find a match even though the image contains a lot of redundancy.This fractal also has a great level of detail -it contains few shade blocks which would normally reduce encoding time.To demonstrate the ease with which the fractal coder3Fig.5.”Flower”decoded after 4iterations,with PSNR =31.536,rate =5.576,encoding time 4374s.works with shade blocks,we encoded a generally uni-form computer generated image ”1up.”For this encod-ing,we decreased our search distance to 0,increased our error threshold to 10,increased block size to 8x 8,and still managed to obtain reconstruction with PSNR =32.53and bit rate of 0.032bpp.The reconstruction con-verged in 1iteration,and is shown compared with the original in Fig.6.Fig.6.”1up”reconstruction (right)compared with orig-inal (left)with PSNR =32.53,rate =0.032,encoding time 0.99s.The ”1up”image was encoded in only 0.99s.This very fast encoding is due to the decreased search dis-tance,increased error threshold,and increased block size.Decreasing the search distance limits the number of do-main blocks searched,and increased error threshold en-sures obtaining a match early without searching for a bet-ter distortion measure.Increased block size also reduces coding time because it also reduces the number of range blocks.In general,making these parameter tweaks will lower PSNR in final image while speeding up the encod-ing time,but in the case of ”1up,”the presence of ma-jority shade blocks ensures almost perfect visual recon-struction.We also measured the rate distortion curve for our fractal encoder,specifically for ”Lena.”To obtain data points on the curve,we varied both the error threshold and quantization of the coding parameters such as β,∆c ,and α.Fig.7and 8compare our rate distortion results for ”Lena”64x 64and 128x 128to JPEG 
andJPEG-2000.Fig.7.Rate distortion curve for ”Lena”64x64.Fig.8.Rate distortion curve for ”Lena”128x128.4We generated a curve for”Lena”64x64largely to save coding time.For this curve,we kept search distance 64so as to search the entire image for a matching do-main block(and thereby increasingfinal PSNR).In this case,we see that the fractal encoder approaches JPEG and JPEG-2000in both PSNR and rate within the rel-evant range of PSNR over28dB.Especially for lower PSNR,we see the fractal coder performs within1bpp of JPEG.In the curve for”Lena”128x128,we graph the rate distortion curves for our fractal coder using different search distances to demonstrate the effect search distance has on PSNR.We see that increasing search distance increases the PSNR obtained for a given rate.We used relatively small search distances due to the exponential dependence of time on search distance.For reference,a search dis-tance of16took1114seconds to encode,as opposed to 540s and325s for search distances of10and6respec-tively.Due to the prohibitively long encoding time,we did not produce a rate distortion curve for128x128using maximum search distance,but the inclusion of the curve for64x64should be sufficient to indicate that as we in-crease search distance in the128x128image,PSNR will increase and approach JPEG results within a few dB as better domain to range block matches are found.The curve also indicates that the performance of our fractal encoder will surpass that of JPEG for low bit rates.6.CONCLUSIONIn this paper,we have demonstrated a successful imple-mentation of a fractal coder.Fractal coders can perform very well in terms of bit rate and PSNR for images with large areas of uniform color,as demonstrated with the encoding and reconstruction of”1up.”We did notfind the coder to perform any better on fractal images of high complexity,due to the fact that the full search distance must still be traversed in the encoding process.We have also illustrated the many trade-offs associ-ated 
with fractal coding.The PSNR tops out at around35 dB due to the lossy compression utilized by fractal cod-ing.As we increase search distance,which will increase possible block matches and conceivably lower the rms er-ror,encoding time increases exponentially.Other coding parameters such as quantization levels and error thresh-old settings for parent-child block splitting also influence the encoding time,bit rate,and PSNR.With less quanti-zation levels,PSNR and bit rate generally decrease,al-though not at the same rates,since the entropy of the transmission decreases.Changing the error threshold af-fects the percentage of parent and child blocks used,which factors into both encoding time and bit rate.We have also demonstrated comparable bit rates and PSNR to JPEG at lower bit rates.We conclude that although fractal encoding can pro-duce reconstructed images with very good PSNR and de-cent bit rate,fractal encoding is not attractive over other compression schemes like JPEG and JPEG-2000,which can achieve the same PSNR and rate in a fraction of the time.The fractal coder described in this paper al-ready utilizes directed search to increase speed,but en-coding time is still prohibitive especially for common im-age sizes greater than128x128.Unless further optimiza-tions to the encoding process can be made,we recom-mend that fractal encoding not be used in favor of quicker compression algorithms.7.REFERENCES[1]A.E.Jacquin,”A novel fractal block-coding tech-nique for digital images,”International Conference on Acoustics,Speech,and Signal Processing,1990.[2]A.E.Jacquin,”Fractal image coding:a review,”Pro-ceedings of the IEEE,vol.81,no.10,pp.1451-1465, October1993.8.APPENDIXPokai worked on the base encoder and decoder,including the massic transformations and parent-child block split-ting.Liz worked on coding the geometric transforma-tions and domain block classifications.Both of us worked onfine tuning the code,including adding various speed optimizations and tweaking 
search and coding parame-ters.Pokai worked on the transmission format of coded parameters and batch encoding.Both of us worked on generating animations and sample images,including col-lecting data for the rate distortion curves.Liz worked on the presentation,and we shared work on the report.5。