图像处理外文翻译
图像处理单词

图像处理核心单词

Photoreceptor cells: 感光细胞
Rod: 杆状细胞
Cone: 锥状细胞
Retina: 视网膜
Iris: 虹膜
Fovea: 中央凹
Visual cortex: 视觉皮层
CCD (charge-coupled device): 电荷耦合器件
Scanning: 扫描
Continuous: 连续的
Discrete: 离散的
Digitization: 数字化
Sampling: 采样
Quantization: 量化
Band-limited function: 带宽有限函数
ADC (analog-to-digital converter): 模数转换器
Pixel (picture element): 象素
Gray-scale: 灰度
Gray level: 灰度级
Gray-scale resolution: 灰度分辨率
Resolution: 分辨率
Sample density: 采样密度
Bit: 比特
Byte: 字节
Pixel spacing: 象素间距
Contrast: 对比度
Noise: 噪声
SNR (signal-to-noise ratio): 信噪比
Frame: 帧
Field: 场
Line: 行、线
Interlaced scanning: 隔行扫描
Frame grabber: 帧抓取器
Image enhancement: 图象增强
Image quality: 图象质量
Algorithm: 算法
Global operation: 全局运算
Local operation: 局部运算
Point operation: 点运算
Spatial: 空间的
Spatial domain: 空间域
Spatial coordinate: 空间坐标
Linear: 线性
Nonlinear: 非线性
Frequency: 频率
Frequency variable: 频率变量
Frequency domain: 频域
Fourier transform: 傅立叶变换
One-dimensional Fourier transform: 一维傅立叶变换
Two-dimensional Fourier transform: 二维傅立叶变换
Discrete Fourier transform (DFT): 离散傅立叶变换
Fast Fourier transform (FFT): 快速傅立叶变换
Inverse Fourier transform: 傅立叶反变换
Contrast enhancement: 对比度增强
Contrast stretching: 对比度扩展
Gray-scale transformation (GST): 灰度变换
Logarithm transformation: 对数变换
Exponential transformation: 指数变换
Threshold: 阈值
Thresholding: 二值化、门限化
False contour: 假轮廓
Histogram: 直方图
Multivariable histogram: 多变量直方图
Histogram modification: 直方图调整、直方图修改
Histogram equalization: 直方图均衡化
Histogram specification: 直方图规定化
Histogram matching: 直方图匹配
Histogram thresholding: 直方图门限化
Probability density function (PDF): 概率密度函数
Cumulative distribution function (CDF): 累积分布函数
Slope: 斜率
Normalized: 归一化
Inverse function: 反函数
Calculus: 微积分
Derivative: 导数
Integral: 积分
Monotonic function: 单调函数
Infinite: 无穷大
Infinitesimal: 无穷小
Equation: 方程
Numerator: 分子
Denominator: 分母
Coefficient: 系数
Image smoothing: 图象平滑
Image averaging: 图象平均
Expectation: 数学期望
Mean: 均值
Variance: 方差
Median filtering: 中值滤波
Neighborhood: 邻域
Filter: 滤波器
Lowpass filter: 低通滤波器
Highpass filter: 高通滤波器
Bandpass filter: 带通滤波器
Bandreject filter, bandstop filter: 带阻滤波器
Ideal filter: 理想滤波器
Butterworth filter: 巴特沃思滤波器
Exponential filter: 指数滤波器
Trapezoidal filter: 梯形滤波器
Transfer function: 传递函数
Frequency response: 频率响应
Cut-off frequency: 截止频率
Spectrum: 频谱
Amplitude spectrum: 幅值谱
Phase spectrum: 相位谱
Power spectrum: 功率谱
Blur: 模糊
Random: 随机
Additive: 加性的
Uncorrelated: 互不相关的
Salt & pepper noise: 椒盐噪声
Gaussian noise: 高斯噪声
Speckle noise: 斑点噪声
Grain noise: 颗粒噪声
Bartlett window: 巴特雷窗
Hamming window: 汉明窗
Hanning window: 汉宁窗
Blackman window: 布莱克曼窗
Convolution: 卷积
Convolution kernel: 卷积核
图像处理外文翻译 (2)

附录一 英文原文

The Difference between the Illustrator and Photoshop Software

Photoshop and Illustrator are both products of Adobe. Photoshop, the more familiar of the two, is a graphics and image processing application that integrates image scanning, editing and retouching, image composition, creative advertising design, and image input and output, and it is a favorite of graphic designers and computer art enthusiasts.

Photoshop's expertise is image processing, not graphics creation. Its field of application is very extensive, touching images, graphics, text, video, and publishing. In terms of function, Photoshop can be divided into four parts: image editing, image compositing, color correction, and special effects production.

Image editing is the foundation of image processing. An image can be transformed in all kinds of ways, such as enlarging, reducing, rotating, skewing, mirroring, and changing perspective; it can also be copied, have stains removed, and have damaged areas repaired. This is very useful in wedding photography and portrait work: unsatisfactory parts of a portrait can be removed or beautified to obtain very satisfying results.

Image compositing combines several images, through layer operations and tool applications, into a complete image that conveys a definite meaning, which is a standard technique of graphic design. Photoshop's drawing tools let external images fuse well with creative elements, making a seamless composite possible.

Color correction is one of Photoshop's most powerful functions. The colors of an image can be quickly adjusted, corrected for color casts, and faithfully reproduced, and an image can be switched between different color modes to meet the needs of different applications such as web image design, printing, and multimedia.

Special effects in Photoshop are produced mainly through the combined application of filters, channels, and tools. Creative image effects and special text effects such as oil painting, relief, plaster painting, and sketching, which traditionally required manual artistic skill, can all be produced with Photoshop, and the production of such effects is one of the reasons many graphic designers are keen to study it.

When using Photoshop's color functions, users will meet several different color modes: RGB, CMYK, HSB, and Lab. The RGB and CMYK color modes remind users that natural color, the color on a monitor, and the color on a printed page are created in totally different ways. A monitor creates color by emitting red, green, and blue beams of light: it uses the RGB (red/green/blue) color mode. To reproduce the continuous tones of a complex color photograph, printing technology uses combinations of cyan, magenta, yellow, and black inks, which reflect or absorb light of various wavelengths; the color created by overprinting these four inks is the CMYK (cyan/magenta/yellow/black) color mode. The HSB (hue/saturation/brightness) color model is based on the way humans perceive color, so it provides an intuitive way to translate natural color into the colors a computer creates. The Lab color mode provides a device-independent way of creating color, that is, the same color no matter what monitor is used.
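The relationship between RGB and CMYK described above can be made concrete with the standard naive conversion formula. This is only a sketch: real print workflows convert through ICC device profiles, and the function below (its name and interface are this document's own illustration, not an Adobe API) ignores ink and paper behavior entirely.

import numpy as np

def rgb_to_cmyk(rgb):
    # Naive RGB -> CMYK for values normalized to [0, 1].
    rgb = np.asarray(rgb, dtype=float)
    k = 1.0 - rgb.max(axis=-1)                # black ink replaces the common part
    denom = np.where(k < 1.0, 1.0 - k, 1.0)   # avoid division by zero for pure black
    c = (1.0 - rgb[..., 0] - k) / denom
    m = (1.0 - rgb[..., 1] - k) / denom
    y = (1.0 - rgb[..., 2] - k) / denom
    return np.stack([c, m, y, k], axis=-1)

print(rgb_to_cmyk([1.0, 0.0, 0.0]))  # pure red -> C=0, M=1, Y=1, K=0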
Photoshop's expertise is image processing, not graphics creation, and it is necessary to distinguish between the two concepts. Image processing edits existing bitmap images and applies special effects to them; the key lies in processing. Graphics creation software, by contrast, is used to design graphics from one's own creative ideas using vector graphics; the main representatives of this kind of software are Adobe Illustrator and FreeHand from Macromedia.

Adobe Illustrator, the world's most famous vector graphics package, excels at graphics creation, not image processing. It is the industry-standard vector illustration software for publishing, multimedia, and online graphics. Whether you are a designer producing line art for print, a professional illustrator, an artist producing multimedia graphics, or a producer of web pages or online content, you will find that Illustrator is not only an art tool: it provides unprecedented precision and control over line art and is suitable for producing anything from small designs to large, complex projects.

With its powerful functions and considerate user interface, Adobe Illustrator has occupied most of the global market share for vector editing software. According to incomplete statistics, 37% of designers worldwide use Adobe Illustrator for art design. In particular, the use of Adobe's patented PostScript technology has allowed Illustrator to fully occupy the professional printing field. Once you have used Illustrator, you will find that only FreeHand can compare with its formidable functions and concise interface design. (FreeHand is vector graphics software launched by Macromedia; after Macromedia was merged into Adobe, the decision was made to discontinue development of the software, and it has been withdrawn from the market.)

Adobe launched Illustrator 1.1 in 1987, and version 2.0 followed the next year. Illustrator's real start, it should be said, was the Illustrator 88 version introduced on the Mac in 1988. It was upgraded on the Mac to version 3.0 in 1991 and spread to Unix platforms. Version 4.0, the first to appear on the PC, came in 1992 and was also the earliest version ported to Japanese. On the Mac the most widely used releases were 5.0/5.5, because they adopted Dan Clark's anti-aliasing display engine, which gave the previously jagged on-screen display of vector graphics a qualitative leap. At the same time the interface was significantly reworked, in a style very similar to Photoshop's, so it was fairly easy for long-time Adobe users to pick up; it is no wonder that before long a Japanese edition was launched and became popular in the publishing industry, although no PC version was offered. Adobe then promptly launched version 6.0 on the Mac and Unix platforms. PC users really came to know Illustrator through version 7.0, launched in 1997 for both Mac and Windows. Because version 7.0 used the complete PostScript page description language, the quality of text and graphics on the page took another leap, and its good interoperability with Photoshop won it a good reputation. The only pity was that 7.0's support for Chinese was abysmal.
In 1998 Adobe launched the landmark Illustrator 8.0, a version that made Illustrator a very complete drawing application. Relying on its powerful strength, Adobe completely solved double-byte support for languages such as Chinese and Japanese, and added powerful functions such as the gradient mesh tool (Corel Draw 9.0 has a corresponding function, but with poorer results) and text editing tools, so that Illustrator fully occupied the position of supremacy among professional vector graphics software.

The biggest characteristic of Adobe Illustrator is its use of Bezier curves, which makes simple operation of powerful vector graphics possible. It now also integrates functions such as word processing and coloring, and it is widely used not only in illustration production but also in the design and manufacture of printed products (such as advertising leaflets and booklets); in fact it has become the default standard of the desktop publishing (DTP) industry. Its main competitor was Macromedia FreeHand, until Macromedia was merged into Adobe in 2005.

The so-called Bezier curve method is realized in this software by setting "anchor points" and "direction lines" with the pen tool. Average users feel unaccustomed to it at the beginning, and some practice is required, but once it is mastered one can draw all sorts of lines at will, intuitively and reliably.

Illustrator is also an important component of the Creative Suite software bundle. It has an interface similar to that of its sibling, the bitmap graphics software Photoshop, can share some plug-ins and functions with it, and connects with it seamlessly. It can also output files in the Flash format; therefore Adobe products can be connected with Flash through Illustrator.

Adobe Illustrator CS5 was issued on May 17, 2010. The new Adobe Illustrator CS5 software can draw accurately in perspective, create variable-width strokes, paint with lifelike brushes, and make full use of integration with the new Adobe CS Live online services. Illustrator CS5 has full control over variable-width strokes scaled along a path, as well as arrowheads, dashes, and art brushes. Shapes can be merged, edited, and filled directly on the artboard without reaching for multiple tools and panels. Illustrator CS5 can handle up to 100 artboards of different sizes in one file, and it will organize and display them according to your wishes.

Taking Adobe Illustrator CS5 as an example, here is a brief introduction to some basic functions of Adobe Illustrator.

Quick background layer. When a design finished in Illustrator is stored and then opened in Photoshop, the artwork is often on a transparent layer with no background layer at the bottom. To produce a background layer, one generally adds a layer and then executes Merge Down or Flatten Image. Here is a quicker method: click the menu button at the upper right of the Layers palette, choose New Layer, and select the background option to produce a background layer quickly. In Photoshop 5 and later this operation has been merged into a single command: choose Layer > New > Background From Layer to finish.

Removing unused swatches. When you open a file created by a version of Illustrator earlier than version 5, Illustrator brings in obsolete swatches that are not needed. To remove them, click the All Swatches icon in the Swatches palette, then choose Select All Unused in the pop-up menu and click the Trash icon to delete the irrelevant swatches. Sometimes you must repeat the select-and-delete process to make sure the palette is cleared.
Note that cleaning up a complex document can take a relatively long time.

Defining swatches as spot colors. In Illustrator 5, spot colors have two distinct advantages over process colors: they make it easy to create tints, and when you edit the definition of a spot color, every object filled with that color is automatically updated to the new color. Because process colors neither let you build tints nor provide automatic updates, you may want to define all your swatches as spot colors. But be sure to convert them back into process colors when you place the artwork in QuarkXPress or PageMaker for four-color printing.

Preferring CMYK. Because Illustrator 7 lets you use the CMYK, RGB, and HSB (hue, saturation, brightness) color modes, you must be careful when creating colors: a document can now contain objects created in a combination of these modes, and when you output it, all kinds of unexpected things may happen. A file intended for print output should use CMYK; use RGB only for artwork that will be displayed on screen. If your artwork will be used both for printing and for screen display, first create the print file in CMYK, then use Save As to make a copy and change the copy to the appropriate color mode.

Information source: Baidu Encyclopedia

附录二 中文译文

Illustrator软件与Photoshop软件的区别

Photoshop与Illustrator都是由Adobe公司出品的。作为大家比较熟悉的软件,Photoshop是集图像扫描、编辑修改、图像制作、广告创意、图像输入与输出于一体的图形图像处理软件,深受广大平面设计人员和电脑美术爱好者的喜爱。
图像处理-毕设论文外文翻译(翻译+原文)

英文资料翻译

Image processing is not a one-step process. We are able to distinguish between several steps which must be performed one after the other until we can extract the data of interest from the observed scene. In this way a hierarchical processing scheme is built up, as sketched in the figure, which gives an overview of the different phases of image processing. Image processing begins with the capture of an image with a suitable, not necessarily optical, acquisition system. In a technical or scientific application, we may choose to select an appropriate imaging system. Furthermore, we can set up the illumination system, choose the best wavelength range, and select other options to capture the object feature of interest in the best way in an image. Once the image is sensed, it must be brought into a form that can be treated with digital computers. This process is called digitization.

As traffic problems become more and more serious, Intelligent Transport Systems (ITS) have emerged. Automatic license plate recognition is one of the most significant subjects to have grown out of the combination of computer vision and pattern recognition. The image fed into the computer is processed and analyzed in order to locate the position of the license plate and recognize the characters on it, expressing those characters in text-string form. The license plate recognition system (LPRS) has important applications in ITS. In an LPRS, the first step is to locate the license plate in the captured image, which is very important for character recognition: the recognition rate is governed by the accuracy of license plate location. In this paper, several image-manipulation methods are compared and analyzed, and solutions for the localization of the car plate are derived. Experiments show that good results have been obtained with these methods.
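The border-location idea used in the next paragraph (finding the plate band from the black-white variation of the pixels in each row) reduces to counting intensity transitions per row. Below is a minimal sketch of that step only, assuming a numpy grayscale array; the mean-based binarization and the min_jumps threshold are illustrative choices, not the tuned values of the paper.

import numpy as np

def plate_band(gray, min_jumps=20):
    # Count black/white transitions per row of a thresholded image;
    # rows crossing the plate contain many character strokes, hence
    # many transitions. min_jumps is an illustrative threshold.
    binary = (gray > gray.mean()).astype(np.int8)         # crude binarization
    jumps = np.abs(np.diff(binary, axis=1)).sum(axis=1)   # transitions per row
    rows = np.where(jumps >= min_jumps)[0]
    if rows.size == 0:
        return None
    return rows.min(), rows.max()   # upper and lower borders of the band

# usage: top, bottom = plate_band(gray_image)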
Methods based on edge maps and frequency analysis are used in the process of localizing the license plate; that is to say, the characteristics of the license plate are extracted from the car image after edge detection, and then analyzed and processed until the probable license plate area is extracted. Automated license plate location is a part of image processing and an important part of intelligent traffic systems; it is the key step in vehicle License Plate Recognition (LPR). A method for the recognition of images with different backgrounds and different illuminations is proposed in the paper: the upper and lower borders are determined through the gray-level variation pattern of the character distribution, and the left and right borders are determined through the black-white variation of the pixels in every row.

The first steps of digital processing may include a number of different operations and are known as image processing. If the sensor has nonlinear characteristics, these need to be corrected. Likewise, brightness and contrast of the image may require improvement. Commonly, too, coordinate transformations are needed to restore geometrical distortions introduced during image formation. Radiometric and geometric corrections are elementary pixel processing operations. It may be necessary to correct known disturbances in the image, for instance caused by defocused optics, motion blur, errors in the sensor, or errors in the transmission of image signals. We also deal with reconstruction techniques which are required with many indirect imaging techniques, such as tomography, that deliver no direct image.

A whole chain of processing steps is necessary to analyze and identify objects. First, adequate filtering procedures must be applied in order to distinguish the objects of interest from other objects and the background. Essentially, from an image (or several images), one or more feature images are extracted. The basic tools for this task are averaging, edge detection, and the analysis of simple neighborhoods and complex patterns known in image processing as texture. An important feature of an object is also its motion, so techniques to detect and determine motion are necessary. Then the object has to be separated from the background, which means that regions of constant features and discontinuities must be identified. This process leads to a label image. Once we know the exact geometrical shape of the object, we can extract further information such as the mean gray value, the area, the perimeter, and other parameters for the form of the object [3]. These parameters can be used to classify objects. This is an important step in many applications of image processing, as the following examples show. In a satellite image showing an agricultural area, we would like to distinguish fields with different fruits and obtain parameters to estimate their ripeness or to detect damage by parasites. There are many medical applications where the essential problem is to detect pathological changes; a classic example is the analysis of aberrations in chromosomes. Character recognition in printed and handwritten text is another example which has been studied since image processing began and still poses significant difficulties. You hopefully do more, namely try to understand the meaning of what you are reading. This is also the final step of image processing, where one aims to understand the observed scene. We perform this task more or less unconsciously whenever we use our visual system. We recognize people, we can easily
distinguish between the image of a scientific lab and that of a living room, and we watch the traffic to cross a street safely. We all do this without knowing how the visual system works.

For some time now, image processing and computer graphics have been treated as two different areas. Knowledge in both areas has increased considerably and more complex problems can now be treated. Computer graphics is striving to achieve photorealistic computer-generated images of three-dimensional scenes, while image processing is trying to reconstruct one from an image actually taken with a camera. In this sense, image processing performs the inverse procedure to that of computer graphics. We start with knowledge of the shape and features of an object, at the bottom of the figure, and work upwards until we get a two-dimensional image. To handle image processing or computer graphics, we basically have to work from the same knowledge. We need to know the interaction between illumination and objects, how a three-dimensional scene is projected onto an image plane, etc. There are still quite a few differences between an image processing and a graphics workstation. But we can envisage that, when the similarities and interrelations between computer graphics and image processing are better understood and the proper hardware is developed, we will see some kind of general-purpose workstation in the future which can handle computer graphics as well as image processing tasks [5]. The advent of multimedia, i.e., the integration of text, images, sound, and movies, will further accelerate the unification of computer graphics and image processing.

In January 1980 Scientific American published a remarkable image called Plume 2, the second of eight volcanic eruptions detected on the Jovian moon Io by the spacecraft Voyager 1 on 5 March 1979. The picture was a landmark image in interplanetary exploration: the first time an erupting volcano had been seen in space. It was also a triumph for image processing. Satellite imagery and images from interplanetary explorers have until fairly recently been the major users of image processing techniques, where a computer image is numerically manipulated to produce some desired effect, such as making a particular aspect or feature in the image more visible. Image processing has its roots in photo reconnaissance in the Second World War, where processing operations were optical and interpretation operations were performed by humans who undertook such tasks as quantifying the effect of bombing raids. With the advent of satellite imagery in the late 1960s, much computer-based work began, and the color composite satellite images, sometimes startlingly beautiful, have become part of our visual culture and the perception of our planet. Like computer graphics, it was until recently confined to research laboratories which could afford the expensive image processing computers that could cope with the substantial processing overheads required to process large numbers of high-resolution images. With the advent of cheap powerful computers and image collection devices like digital cameras and scanners, we have seen a migration of image processing techniques into the public domain. Classical image processing techniques are routinely employed by graphic designers to manipulate photographic and generated imagery, either to correct defects, change color and so on, or creatively to transform the entire look of an image by subjecting it to some operation such as edge enhancement. A recent mainstream application of image processing is the compression of
images, either for transmission across the Internet or the compression of moving video images in video telephony and video conferencing. Video telephony is one of the current crossover areas that employ both computer graphics and classical image processing techniques to try to achieve very high compression rates. All this is part of an inexorable trend towards the digital representation of images. Indeed, that most powerful image form of the twentieth century, the TV image, is also about to be taken into the digital domain.

Image processing is characterized by a large number of algorithms that are specific solutions to specific problems. Some are mathematical or context-independent operations that are applied to each and every pixel. For example, we can use Fourier transforms to perform image filtering operations. Others are "algorithmic": we may use a complicated recursive strategy to find those pixels that constitute the edges in an image. Image processing operations often form part of a computer vision system. The input image may be filtered to highlight or reveal edges prior to a shape detection step; such steps are usually known as low-level operations. In computer graphics, filtering operations are used extensively to avoid aliasing or sampling artifacts.

中文翻译
图像处理不是一步就能完成的过程。
2018年关于图像的英文翻译

关于图像的英文翻译
图像的英文:
image
picture
graphic
参考例句:
Image scanner 图像扫描仪
Picture processing 图像处理
Animated graphics 活动图像
Contiguous graphics 连通图像
vision transmitter 图像发射机
vision amplifier 图像放大器
veiled sounds; the image is veiled or foggy. 模糊的声音;图像模糊、朦胧。
The computer programmer morphed the image. 计算机程序设计员使图像变形了。
No one was able to make head or tail of the figures. 这些图像谁也看不明白。
It is the microcosmic image of the macrocosm of the entire planet. 它是整个行星宏观世界的微观图像。
图像处理英文

GLCM (gray-level co-occurrence matrix): 灰度共生矩阵
indexing service: 索引服务
Binary image 二值图像;只有两级灰度的数字图像(通常为0和1,黑和白)。
Blur 模糊;由于散焦、低通滤波、摄像机运动等引起的图像清晰度的下降。
Shape: 形状
shape from texture: 从纹理恢复形状
mathematical morphology: 数学形态学
Border 边框;一幅图像的首、末行或列。
Boundary chain code 边界链码;定义一个物体边界的方向序列。
Boundary pixel 边界像素;至少和一个背景像素相邻接的内部像素。
Boundary tracking 边界跟踪;一种图像分割技术,通过沿弧从一个像素顺序探索到下一个像素,将弧检测出来。
Brightness 亮度;和图像中一个点相关的值,表示从该点的物体发射或反射的光的量。
Change detection 变化检测;通过相减等操作将两幅配准图像的像素加以比较。
Closed curve 封闭曲线;一条首尾点处于同一位置的曲线。
Cluster 聚类、集群;在空间(如特征空间)中位置接近的点的集合。
Cluster analysis 聚类分析;对空间中聚类的检测、度量和描述。
Concave 凹的;物体是凹的,是指至少存在两个物体内部的点,其连线不能完全包含在物体内。
Connected 连通的
neural network: 神经网络
Contour encoding 轮廓编码;对具有均匀灰度的区域,只将其边界进行编码的一种图像压缩技术。
Contrast 对比度;物体平均亮度(或灰度)与其周围背景的差别程度。
Contrast stretch 对比度扩展;一种线性的灰度变换。
Convolution 卷积;一种将两个函数组合成第三个函数的运算,卷积刻画了线性移不变系统的运算。
Deblurring 去模糊;(1)一种降低图像模糊、锐化图像细节的运算;(2)消除或降低图像的模糊,通常是图像复原或重构的一个步骤。
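Of the terms above, convolution is the most mechanical, so it is worth one concrete illustration. Below is a minimal sketch that applies the textbook definition directly, assuming numpy; the function name is this document's own, and real code would call an optimized library routine instead.

import numpy as np

def convolve2d(image, kernel):
    # Direct 2-D convolution by the definition ("valid" output size).
    # Convolution flips the kernel; correlation would skip the flip.
    k = np.flipud(np.fliplr(kernel))
    kh, kw = k.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kh, j:j + kw] * k).sum()
    return out

# A 3x3 averaging kernel smooths the image (cf. the Blur entry above):
# smoothed = convolve2d(gray_image, np.ones((3, 3)) / 9.0)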
图像处理外文翻译

附录A

3 Image Enhancement in the Spatial Domain

The principal objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application. The word specific is important, because it establishes at the outset that the techniques discussed in this chapter are very much problem oriented. Thus, for example, a method that is quite useful for enhancing X-ray images may not necessarily be the best approach for enhancing pictures of Mars transmitted by a space probe. Regardless of the method used, however, image enhancement is one of the most interesting and visually appealing areas of image processing.

Image enhancement approaches fall into two broad categories: spatial domain methods and frequency domain methods. The term spatial domain refers to the image plane itself, and approaches in this category are based on direct manipulation of pixels in an image. Frequency domain processing techniques are based on modifying the Fourier transform of an image. Spatial methods are covered in this chapter, and frequency domain enhancement is discussed in Chapter 4. Enhancement techniques based on various combinations of methods from these two categories are not unusual. We note also that many of the fundamental techniques introduced in this chapter in the context of enhancement are used in subsequent chapters for a variety of other image processing applications.

There is no general theory of image enhancement. When an image is processed for visual interpretation, the viewer is the ultimate judge of how well a particular method works. Visual evaluation of image quality is a highly subjective process, thus making the definition of a "good image" an elusive standard by which to compare algorithm performance. When the problem is one of processing images for machine perception, the evaluation task is somewhat easier. For example, in dealing with a character recognition application, and leaving aside other issues such as computational requirements, the best image processing method would be the one yielding the best machine recognition results. However, even in situations when a clear-cut criterion of performance can be imposed on the problem, a certain amount of trial and error usually is required before a particular image enhancement approach is selected.

3.1 Background

As indicated previously, the term spatial domain refers to the aggregate of pixels composing an image. Spatial domain methods are procedures that operate directly on these pixels. Spatial domain processes will be denoted by the expression

g(x, y) = T[f(x, y)]        (3.1-1)

where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f, defined over some neighborhood of (x, y). In addition, T can operate on a set of input images, such as performing the pixel-by-pixel sum of K images for noise reduction, as discussed in Section 3.4.2.

The principal approach in defining a neighborhood about a point (x, y) is to use a square or rectangular subimage area centered at (x, y). The center of the subimage is moved from pixel to pixel starting, say, at the top left corner. The operator T is applied at each location (x, y) to yield the output, g, at that location. The process utilizes only the pixels in the area of the image spanned by the neighborhood. Although other neighborhood shapes, such as approximations to a circle, sometimes are used, square and rectangular arrays are by far the most predominant because of their ease of implementation.

The simplest form of T is when the neighborhood is of size 1×1 (that is, a single pixel).
In this case, g depends only on the value of f at (x, y), and T becomes a gray-level (also called an intensity or mapping) transformation function of the form

s = T(r)        (3.1-2)

where, for simplicity in notation, r and s are variables denoting, respectively, the gray level of f(x, y) and g(x, y) at any point (x, y). Some fairly simple, yet powerful, processing approaches can be formulated with gray-level transformations. Because enhancement at any point in an image depends only on the gray level at that point, techniques in this category often are referred to as point processing.

Larger neighborhoods allow considerably more flexibility. The general approach is to use a function of the values of f in a predefined neighborhood of (x, y) to determine the value of g at (x, y). One of the principal approaches in this formulation is based on the use of so-called masks (also referred to as filters, kernels, templates, or windows). Basically, a mask is a small (say, 3×3) 2-D array, in which the values of the mask coefficients determine the nature of the process. Enhancement techniques based on this type of approach often are referred to as mask processing or filtering. These concepts are discussed in Section 3.5.

3.2 Some Basic Gray Level Transformations

We begin the study of image enhancement techniques by discussing gray-level transformation functions. These are among the simplest of all image enhancement techniques. The values of pixels, before and after processing, will be denoted by r and s, respectively. As indicated in the previous section, these values are related by an expression of the form s = T(r), where T is a transformation that maps a pixel value r into a pixel value s. Since we are dealing with digital quantities, values of the transformation function typically are stored in a one-dimensional array, and the mappings from r to s are implemented via table lookups. For an 8-bit environment, a lookup table containing the values of T will have 256 entries.

As an introduction to gray-level transformations, consider three basic types of functions used frequently for image enhancement: linear (negative and identity transformations), logarithmic (log and inverse-log transformations), and power-law (nth power and nth root transformations). The identity function is the trivial case in which output intensities are identical to input intensities. It is included in the graph only for completeness.

3.2.1 Image Negatives

The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative transformation, which is given by the expression

s = L - 1 - r        (3.2-1)

Reversing the intensity levels of an image in this manner produces the equivalent of a photographic negative. This type of processing is particularly suited for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.

3.2.2 Log Transformations

The general form of the log transformation is

s = c log(1 + r)        (3.2-2)

where c is a constant, and it is assumed that r ≥ 0. The shape of the log curve shows that this transformation maps a narrow range of low gray-level values in the input image into a wider range of output levels. The opposite is true of higher values of input levels. We would use a transformation of this type to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation. Any curve having the general shape of the log functions would accomplish this spreading/compressing of gray levels in an image.
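The table-lookup implementation described above is short enough to show concretely. Below is a minimal sketch for an 8-bit image, assuming numpy; the scaling constant c is chosen here so that the output range also spans [0, 255].

import numpy as np

L = 256                                    # gray levels in an 8-bit image
levels = np.arange(L)

# Negative transformation, Eq. (3.2-1): s = L - 1 - r
neg_table = (L - 1 - levels).astype(np.uint8)

# Log transformation, Eq. (3.2-2): s = c * log(1 + r),
# with c = (L - 1) / log(L) so that r = 255 maps to s = 255.
c = (L - 1) / np.log(L)
log_table = np.clip(c * np.log1p(levels), 0, L - 1).astype(np.uint8)

# Applying a point operation is then one table lookup per pixel:
# negative_image = neg_table[image]
# log_image      = log_table[image]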
In fact, the power-law transformations discussed in the next section are much more versatile for this purpose than the log transformation. However, the log function has the important characteristic that it compresses the dynamic range of images with large variations in pixel values. A classic illustration of such values is the Fourier spectrum. It is not unusual to encounter spectrum values that range from 0 to 10^6 or higher. While processing numbers such as these presents no problems for a computer, image display systems generally will not be able to reproduce faithfully such a wide range of intensity values. The net effect is that a significant degree of detail will be lost in the display of a typical Fourier spectrum.

3.2.3 Power-Law Transformations

Power-law transformations have the basic form

s = c r^γ        (3.2-3)

where c and γ are positive constants. Sometimes Eq. (3.2-3) is written as s = c(r + ε)^γ to account for an offset (that is, a measurable output when the input is zero). However, offsets typically are an issue of display calibration, and as a result they are normally ignored in Eq. (3.2-3). Plots of s versus r for various values of γ are shown in Fig. 3.6. As in the case of the log transformation, power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels. Unlike the log function, however, we notice here a family of possible transformation curves obtained simply by varying γ. As expected, we see in Fig. 3.6 that curves generated with values of γ > 1 have exactly the opposite effect as those generated with values of γ < 1. Finally, we note that Eq. (3.2-3) reduces to the identity transformation when c = γ = 1.

A variety of devices used for image capture, printing, and display respond according to a power law. By convention, the exponent in the power-law equation is referred to as gamma [hence our use of this symbol in Eq. (3.2-3)]. The process used to correct these power-law response phenomena is called gamma correction. Gamma correction is important if displaying an image accurately on a computer screen is of concern. Images that are not corrected properly can look either bleached out or, what is more likely, too dark. Trying to reproduce colors accurately also requires some knowledge of gamma correction, because varying the value of gamma changes not only the brightness, but also the ratios of red to green to blue. Gamma correction has become increasingly important in the past few years, as the use of digital images for commercial purposes over the Internet has increased. It is not unusual that images created for a popular Web site will be viewed by millions of people, the majority of whom will have different monitors and/or monitor settings. Some computer systems even have partial gamma correction built in. Also, current image standards do not contain the value of gamma with which an image was created, thus complicating the issue further. Given these constraints, a reasonable approach when storing images in a Web site is to preprocess the images with a gamma that represents an "average" of the types of monitors and computer systems that one expects in the open market at any given point in time.
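A minimal sketch of gamma correction based on Eq. (3.2-3), assuming numpy and an 8-bit image; gamma = 2.2 below is only the conventional example for CRT-like displays, not a universal value.

import numpy as np

def gamma_correct(image, gamma=2.2):
    # Apply s = c * r**gamma with intensities normalized to [0, 1].
    # gamma < 1 brightens dark regions; gamma > 1 darkens them.
    r = image.astype(np.float64) / 255.0        # normalize to [0, 1]
    s = r ** gamma                              # c = 1 after normalization
    return (s * 255.0 + 0.5).astype(np.uint8)   # round back to 8-bit levels

# Pre-compensating a display whose response is r**2.2 means applying 1/2.2:
# corrected = gamma_correct(raw, gamma=1/2.2)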
3.2.4 Piecewise-Linear Transformation Functions

A complementary approach to the methods discussed in the previous three sections is to use piecewise linear functions. The principal advantage of piecewise linear functions over the types of functions we have discussed thus far is that the form of piecewise functions can be arbitrarily complex. In fact, as we will see shortly, a practical implementation of some important transformations can be formulated only as piecewise functions. The principal disadvantage of piecewise functions is that their specification requires considerably more user input.

Contrast stretching. One of the simplest piecewise linear functions is a contrast-stretching transformation. Low-contrast images can result from poor illumination, lack of dynamic range in the imaging sensor, or even wrong setting of a lens aperture during image acquisition. The idea behind contrast stretching is to increase the dynamic range of the gray levels in the image being processed.

Gray-level slicing. Highlighting a specific range of gray levels in an image often is desired. Applications include enhancing features such as masses of water in satellite imagery and enhancing flaws in X-ray images. There are several ways of doing level slicing, but most of them are variations of two basic themes. One approach is to display a high value for all gray levels in the range of interest and a low value for all other gray levels.

Bit-plane slicing. Instead of highlighting gray-level ranges, highlighting the contribution made to total image appearance by specific bits might be desired. Suppose that each pixel in an image is represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit-plane 0 for the least significant bit to bit-plane 7 for the most significant bit. In terms of 8-bit bytes, plane 0 contains all the lowest order bits in the bytes comprising the pixels in the image and plane 7 contains all the high-order bits.

3.3 Histogram Processing

The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(r_k) = n_k, where r_k is the kth gray level and n_k is the number of pixels in the image having gray level r_k. It is common practice to normalize a histogram by dividing each of its values by the total number of pixels in the image, denoted by n. Thus, a normalized histogram is given by p(r_k) = n_k / n, for k = 0, 1, ..., L-1. Loosely speaking, p(r_k) gives an estimate of the probability of occurrence of gray level r_k. Note that the sum of all components of a normalized histogram is equal to 1.

Histograms are the basis for numerous spatial domain processing techniques. Histogram manipulation can be used effectively for image enhancement, as shown in this section. In addition to providing useful image statistics, we shall see in subsequent chapters that the information inherent in histograms also is quite useful in other image processing applications, such as image compression and segmentation. Histograms are simple to calculate in software and also lend themselves to economic hardware implementations, thus making them a popular tool for real-time image processing.

附录B

第三章 空间域图像增强

增强的首要目标是处理图像,使处理结果对特定应用而言比原始图像更加适用。
CV专业名词中英文对照

Common
人工智能 Artificial Intelligence
认知科学与神经科学 Cognitive Science and Neuroscience
图像处理 Image Processing
计算机图形学 Computer Graphics
模式识别 Pattern Recognition
图像表示 Image Representation
立体视觉与三维重建 Stereo Vision and 3D Reconstruction
物体(目标)识别 Object Recognition
运动检测与跟踪 Motion Detection and Tracking
边缘 edge
边缘检测 edge detection
区域 region
图像分割 segmentation
轮廓与剪影 contour and silhouette
纹理 texture
纹理特征提取 texture feature extraction
颜色 color
局部特征 local features or blob
尺度 scale
摄像机标定 Camera Calibration
立体匹配 stereo matching
图像配准 Image Registration
特征匹配 feature matching
物体识别 Object Recognition
人工标注 Ground-truth
自动标注 Automatic Annotation
运动检测与跟踪 Motion Detection and Tracking
背景剪除 Background Subtraction
背景模型与更新 background modeling and update
运动跟踪 Motion Tracking
多目标跟踪 multi-target tracking
颜色空间 color space
色调 Hue
色饱和度 Saturation
明度 Value
颜色不变性 Color Constancy(人类视觉具有颜色不变性)
照明 illumination
反射模型 Reflectance Model
明暗分析 Shading Analysis
成像几何学与成像物理学 Imaging Geometry and Physics
全像摄像机 Omnidirectional Camera
激光扫描仪 Laser Scanner
透视投影 Perspective projection
正交投影 Orthographic projection
表面方向半球 Hemisphere of Directions
立体角 solid angle
透视缩小效应 foreshortening
辐射度 radiance
辐照度 irradiance
亮度 intensity
漫反射表面、Lambertian(朗伯)表面 diffuse surface
镜面 Specular Surfaces
漫反射率 diffuse reflectance
明暗模型 Shading Models
环境光照 ambient illumination
互反射 interreflection
反射图 Reflectance Map
纹理分析 Texture Analysis
元素 elements
基元 primitives
纹理分类 texture classification
从纹理中恢复形状 shape from texture
纹理合成 texture synthesis
图形绘制 graph rendering
图像压缩 image compression
统计方法 statistical methods
结构方法 structural methods
基于模型的方法 model-based methods
分形 fractal
自相关性函数 autocorrelation function
熵 entropy
能量 energy
对比度 contrast
均匀度 homogeneity
相关性 correlation
上下文约束 contextual constraints
吉布斯随机场 Gibbs random field
边缘检测、跟踪、连接 Detection, Tracking, Linking
LoG边缘检测算法(墨西哥草帽算子) LoG = Laplacian of Gaussian
霍夫变换 Hough Transform
链码 chain code
B-样条 B-spline
有理B-样条 Rational B-spline
非均匀有理B-样条 Non-Uniform Rational B-Spline
控制点 control points
节点 knot points
基函数 basis function
控制点权值 weights
曲线拟合 curve fitting
内插 interpolation
逼近 approximation
回归 Regression
主动轮廓 Active Contour Model or Snake
图像二值化 Image thresholding
连通成分 connected component
数学形态学 mathematical morphology
结构元 structuring elements
膨胀 Dilation
腐蚀 Erosion
开运算 opening
闭运算 closing
聚类 clustering
分裂合并方法 split-and-merge
区域邻接图 region adjacency graphs
四叉树 quad tree
区域生长 Region Growing
过分割 over-segmentation
分水岭 watershed
金字塔 pyramid
亚采样 sub-sampling
尺度空间 Scale Space
局部特征 Local Features
背景混淆 clutter
遮挡 occlusion
角点 corners
强纹理区域 strongly textured areas
二阶矩矩阵 Second moment matrix
视觉词袋 bag-of-visual-words
类内差异 intra-class variability
类间相似性 inter-class similarity
生成学习 Generative learning
判别学习 discriminative learning
人脸检测 Face detection
弱分类器 weak learners
集成分类器 ensemble classifier
被动测距传感 passive sensing
多视点 Multiple Views
稠密深度图 dense depth map
稀疏深度图 sparse depth map
视差 disparity
外极 epipole
外极几何 Epipolar Geometry
校正 Rectification
归一化互相关 NCC (Normalized Cross Correlation)
平方差的和 SSD (Sum of Squared Differences)
绝对值差的和 SAD (Sum of Absolute Differences)
俯仰角 pitch
偏航角 yaw
扭转角 twist
高斯混合模型 Gaussian Mixture Model
运动场 motion field
光流 optical flow
贝叶斯跟踪 Bayesian tracking
粒子滤波 Particle Filters
颜色直方图 color histogram
尺度不变特征变换 SIFT (Scale-Invariant Feature Transform)
孔径问题 Aperture problem
A
Aberration 像差Accessory 附件Accessory Shoes 附件插座、热靴Achromatic 消色差的Active 主动的、有源的Acutance 锐度Acute-matte 磨砂毛玻璃Adapter 适配器Advance system 输片系统AE Lock(AEL) 自动曝光锁定AF(Autofocus) 自动聚焦AF Illuminator AF照明器AF spotbeam projector AF照明器Alkaline 碱性Ambient light 环境光Amplification factor 放大倍率Angle finder 弯角取景器Angle of view 视角Anti-Red-eye 防红眼Aperture 光圈Aperture priority 光圈优先APO(APOchromat) 复消色差APZ(Advanced Program zoom) 高级程序变焦Arc 弧形ASA(American Standards Association) 美国标准协会Astigmatism 像散Auto bracket 自动包围Auto composition 自动构图Auto exposure 自动曝光Auto exposure bracketing 自动包围曝光Auto film advance 自动进片Auto flash 自动闪光Auto loading 自动装片Auto multi-program 自动多程序Auto rewind 自动退片Auto wind 自动卷片Auto zoom 自动变焦Automatic exposure(AE) 自动曝光Automation 自动化Auxiliary 辅助
B
Back 机背Back light 逆光、背光Back light compensation 逆光补偿Background 背景Balance contrast 反差平衡Bar code system 条形码系统Barrel distortion 桶形畸变BAse-Stored Image Sensor (BASIS) 基存储影像传感器Battery check 电池检测Battery holder 电池手柄Bayonet 卡口Bellows 皮腔Blue filter 蓝色滤光镜Body-integral 机身一体化Bridge camera 桥梁相机Brightness control 亮度控制Built in 内置Bulb B 门Button 按钮
C
Cable release 快门线Camera 照相机Camera shake 相机抖动Cap 盖子Caption 贺辞、祝辞、字幕Card 卡Cartridges 暗盒Case 机套CCD(Charge Coupled Device) 电荷耦合器件CdS cell 硫化镉元件Center spot 中空滤光镜Center weighted averaging 中央重点加权平均Chromatic Aberration 色差Circle of confusion 弥散圆Close-up 近摄Coated 镀膜Compact camera 袖珍相机Composition 构图Compound lens 复合透镜Computer 计算机Contact 触点Continuous advance 连续进片Continuous autofocus 连续自动聚焦Contrast 反差、对比Convertor 转换器Coreless 无线圈Correction 校正Coupler 耦合器Coverage 覆盖范围CPU(Central Processing Unit) 中央处理器Creative expansion card 艺术创作软件卡Cross 交叉Curtain 帘幕Customized function 用户自选功能
D
Data back 数据机背Data panel 数据面板Dedicated flash 专用闪光灯Definition 清晰度Delay 延迟、延时Depth of field 景深Depth of field preview 景深预测Detection 检测Diaphragm 光阑Diffuse 柔光Diffusers 柔光镜DIN (Deutsche Industrie Normen) 德国工业标准Diopter 屈光度Dispersion 色散Display 显示Distortion 畸变Double exposure 双重曝光Double ring zoom 双环式变焦镜头Dreams filter 梦幻滤光镜Drive mode 驱动方式Duration of flash 闪光持续时间DX-code DX编码
E
ED(Extra low Dispersion) 超低色散Electro selective pattern(ESP) 电子选择模式EOS(Electronic Optical System) 电子光学系统Ergonomic 人体工程学EV(Exposure value) 曝光值Evaluative metering 综合评价测光Expert 专家、专业Exposure 曝光Exposure adjustment 曝光调整Exposure compensation 曝光补偿Exposure memory 曝光记忆Exposure mode 曝光方式Exposure value(EV) 曝光值Extension tube 近摄接圈Extension ring 近摄接圈External metering 外测光Extra wide angle lens 超广角镜头Eye-level fixed 眼平固定Eye-start 眼启动Eyepiece 目镜Eyesight correction lenses 视力校正镜
F
Field curvature 像场弯曲Fill in 填充(式)Film 胶卷(片)Film speed 胶卷感光度Film transport 输片、过片Filter 滤光镜Finder 取景器First curtain 前帘、第一帘幕Fish eye lens 鱼眼镜头Flare 耀斑、眩光Flash 闪光灯、闪光Flash range 闪光范围Flash ready 闪光灯充电完毕Flexible program 柔性程序Focal length 焦距Focal plane 焦点平面Focus 焦点Focus area 聚焦区域Focus hold 焦点锁定Focus lock 焦点锁定Focus prediction 焦点预测Focus priority 焦点优先Focus screen 聚焦屏Focus tracking 焦点跟踪Focusing 聚焦、对焦、调焦Focusing stages 聚焦级数Fog filter 雾化滤光镜Foreground 前景Frame 张数、帧Freeze 冻结、凝固Fresnel lens 菲涅尔透镜、环状透镜Frontground 前景Fuzzy logic 模糊逻辑
G
Glare 眩光GN(Guide Number) 闪光指数GPD(Gallium Photo Diode) 镓光电二极管Graduated 渐变
H
Half frame 半幅Halfway 半程Hand grip 手柄High eye point 远视点、高眼点High key 高调Highlight 高光、高亮Highlight control 高光控制High speed 高速Honeycomb metering 蜂巢式测光Horizontal 水平Hot shoe 热靴、附件插座Hybrid camera 混合相机Hyper manual 超手动Hyper program 超程序Hyperfocal 超焦距
I
IC(Integrated Circuit) 集成电路Illumination angle 照明角度Illuminator 照明器Image control 影像控制Image size lock 影像放大倍率锁定Infinity 无限远、无穷远Infra-red(IR) 红外线Instant return 瞬回式Integrated 集成Intelligence 智能化Intelligent power zoom 智能化电动变焦Interactive function 交互式功能Interchangeable 可更换Internal focusing 内调焦Interval shooting 间隔拍摄ISO(International Organization for Standardization) 国际标准化组织
J
JIS(Japanese Industrial Standards) 日本工业标准
L
Landscape 风景Latitude 宽容度LCD data panel LCD数据面板LCD(Liquid Crystal Display) 液晶显示LED(Light Emitting Diode) 发光二极管Lens 镜头、透镜Lens cap 镜头盖Lens hood 镜头遮光罩Lens release 镜头释放钮Lithium battery 锂电池Lock 闭锁、锁定Low key 低调Low light 低亮度、低光LSI(Large Scale Integrated) 大规模集成
M
Macro 微距、巨像Magnification 放大倍率Main switch 主开关Manual 手动Manual exposure 手动曝光Manual focusing 手动聚焦Matrix metering 矩阵式测光Maximum 最大Metered manual 测光手动Metering 测光Micro prism 微棱Minimum 最小Mirage 倒影镜Mirror 反光镜Mirror box 反光镜箱Mirror lens 折反射镜头Module 模块Monitor 监视、监视器Monopod 独脚架Motor 电动机、马达Mount 卡口MTF (Modulation Transfer Function) 调制传递函数Multi beam 多束Multi control 多重控制Multi-dimensional 多维Multi-exposure 多重曝光Multi-image 多重影Multi-mode 多模式Multi-pattern 多区、多分区、多模式Multi-program 多程序Multi sensor 多传感器、多感光元件Multi spot metering 多点测光Multi task 多任务
N
Negative 负片Neutral 中性Neutral density filter 中灰密度滤光镜Ni-Cd battery 镍镉(可充电)电池
O
Off camera 离机Off center 偏离中心OTF(Off The Film) 偏离胶卷平面One ring zoom 单环式变焦镜头One touch 单环式Orange filter 橙色滤光镜Over exposure 曝光过度
P
Panning 摇拍Panorama 全景Parallel 平行Parallax 平行视差Partial metering 局部测光Passive 被动的、无源的Pastels filter 水粉滤光镜PC(Perspective Control) 透视控制Pentaprism 五棱镜Perspective 透视的Phase detection 相位检测Photography 摄影Pincushion distortion 枕形畸变Plane of focus 焦点平面Point of view 视点Polarizing 偏振、偏光Polarizer 偏振镜Portrait 人像、肖像Power 电源、功率、电动Power focus 电动聚焦Power zoom 电动变焦Predictive 预测Predictive focus control 预测焦点控制Preflash 预闪Professional 专业的Program 程序Program back 程序机背Program flash 程序闪光Program reset 程序复位Program shift 程序偏移Programmed Image Control (PIC) 程序化影像控制
Q
Quartz data back 石英数据机背
R
Rainbows filter 彩虹滤光镜Range finder 测距取景器Release priority 释放优先Rear curtain 后帘Reciprocity failure 倒易律失效Reciprocity Law 倒易律Recompose 重新构图Red eye 红眼Red eye reduction 红眼减少Reflector 反射器、反光板Reflex 反光Remote control terminal 快门线插孔Remote cord 遥控线、快门线Resolution 分辨率Reversal films 反转胶片Rewind 退卷Ring flash 环形闪光灯ROM(Read Only Memory) 只读存储器Rotating zoom 旋转式变焦镜头RTF(Retractable TTL Flash) 可收缩TTL闪光灯
S
Second curtain 后帘、第二帘幕Secondary Image Registration(SIR) 辅助影像重合Segment 段、区Selection 选择Self-timer 自拍机Sensitivity 灵敏度Sensitivity range 灵敏度范围Sensor 传感器Separator lens 分离镜片Sepia filter 褐色滤光镜Sequence zoom shooting 顺序变焦拍摄Sequential shoot 顺序拍摄Servo autofocus 伺服自动聚焦Setting 设置Shadow 阴影、暗位Shadow control 阴影控制Sharpness 清晰度Shift 偏移、移动Shutter 快门Shutter curtain 快门帘幕Shutter priority 快门优先Shutter release 快门释放Shutter speed 快门速度Shutter speed priority 快门速度优先Silhouette 剪影Single frame advance 单张进片Single shot autofocus 单次自动聚焦Skylight filter 天光滤光镜Slide film 幻灯胶片Slow speed synchronization 慢速同步SLD(Super Lower Dispersion) 超低色散SLR(Single Lens Reflex) 单镜头反光照相机SMC(Super Multi Coated) 超级多层镀膜Soft focus 柔焦、柔光SP(Super Performance) 超级性能SPC(Silicon Photo Cell) 硅光电池SPD(Silicon Photo Diode) 硅光电二极管Speedlight 闪光灯、闪光管Split image 裂像Sport 体育、运动Spot metering 点测光Standard 标准Standard lens 标准镜头Starburst 星光镜Stop 档Synchronization 同步
T
Tele converter 增距镜、望远变换器Telephoto lens 长焦距镜头Trailing-shutter curtain 后帘同步Trap focus 陷阱聚焦Tripod 三脚架TS(Tilt and Shift) 倾斜及偏移TTL flash TTL闪光TTL flash metering TTL闪光测光TTL(Through The Lens) 通过镜头、镜后Two touch 双环
U
UD(Ultra-low Dispersion) 超低色散Ultra wide 超阔、超广Ultrasonic 超声波UV(Ultra-Violet) 紫外线Under exposure 曝光不足
V
Vari-colour 变色Vari-program 变程序Variable speed 变速Vertical 垂直Vertical traverse 纵走式View finder 取景器
W
Warm tone 暖色调Wide angle lens 广角镜头Wide view 广角预视、宽区预视Wildlife 野生动物Wireless remote 无线遥控World time 世界时间
X
X-sync X-同步
Z
Zoom 变焦Zoom lens 变焦镜头Zoom clip 变焦剪裁Zoom effect 变焦效果
Other
TTL 镜后测光
NTTL 非镜后测光
UM 无机内测光,手动测光
MM 机内测光,但需手动设定
AP 光圈优先
SP 快门优先
PR 程序曝光
ANCILLARY DEVICES 辅助产品
BACKPLANES 底板CABLES AND CONNECTORS 连线及连接器ENCLOSURES 围圈FACTORY AUTOMATION 工厂自动化POWER SUPPLIES 电源
APPLICATION-SPECIFIC SOFTWARE 应用软件
INDUSTRIAL-INSPECTION SOFTWARE 工业检测软件MEDICAL-IMAGING SOFTWARE 医药图象软件SCIENTIFIC-ANALYSIS SOFTWARE 科学分析软件SEMICONDUCTOR-INSPECTION SOFTWARE 半导体检测软件
CAMERAS 相机
AREA-ARRAY CAMERAS 面阵相机CAMERA LINK CAMERAS Camera Link相机CCD CAMERAS, COLOR CCD彩色相机CCD CAMERAS, COOLED 制冷型CCD相机CHARGE-INJECTION-DEVICE CAMERAS 电荷注入器件相机CMOS CAMERAS CMOS相机DIGITAL-OUTPUT CAMERAS 数码相机FIREWIRE(1394) CAMERAS 1394接口相机HIGH-SPEED VIDEO CAMERAS 高速摄象机INFRARED CAMERAS 红外相机LINESCAN CAMERAS 行扫描相机LOW-LIGHT-LEVEL CAMERAS 暗光相机MULTISPECTRAL CAMERAS 多光谱相机SMART CAMERAS 智能相机TIME-DELAY-AND-INTEGRATION CAMERAS 时间延迟积分相机USB CAMERAS USB接口相机VIDEO CAMERAS 摄象机
DIGITIZERS 数字转换器
MEASUREMENT DIGITIZERS 数字测量器MOTION-CAPTURE DIGITIZERS 数字运动捕捉器
DISPLAYS 显示器
CATHODE-RAY TUBES(CRTs) 阴极射线管INDUSTRIAL DISPLAYS 工业用型显示器LIQUID-CRYSTAL DISPLAYS 液晶显示器
ILLUMINATION SYSTEMS 光源系统
BACKLIGHTING DEVICES 背光源FIBEROPTIC ILLUMINATION SYSTEMS 光纤照明系统FLUORESCENT ILLUMINATION SYSTEMS 荧光照明系统INFRARED LIGHTING 红外照明LED LIGHTING LED照明STRUCTURED LIGHTING 结构化照明ULTRAVIOLET ILLUMINATION SYSTEMS 紫外照明系统WHITE-LIGHT ILLUMINATION SYSTEMS 白光照明系统XENON ILLUMINATION SYSTEMS 氙气照明系统
IMAGE-PROCESSING SYSTEMS 图象处理系统
AUTOMATION/ROBOTICS 自动化/机器人技术DIGITAL IMAGING SYSTEMS 数字图象系统DOCUMENT-IMAGING SYSTEMS 文档图象系统GUIDANCE/TRACKING SYSTEMS 制导/跟踪系统INFRARED IMAGING SYSTEMS 红外图象系统INSPECTION/NONDESTRUCTIVE TESTING SYSTEMS 检测/非破坏性测试系统INSTRUMENTATION SYSTEMS 测试设备系统INTELLIGENT TRANSPORTATION SYSTEMS 智能交通系统MEDICAL DIAGNOSTICS SYSTEMS 医疗诊断系统METROLOGY/MEASUREMENT/GAUGING SYSTEMS 测绘系统MICROSCOPY SYSTEMS 显微系统MOTION-ANALYSIS SYSTEMS 运动分析系统OPTICAL-CHARACTER-RECOGNITION/OPTICAL-CHARACTER-VERIFICATION SYSTEMS 光学字符识别/校验系统PROCESS-CONTROL SYSTEMS 过程控制系统QUALITY-ASSURANCE SYSTEMS 质量保证系统REMOTE SENSING SYSTEMS 遥感系统WEB-SCANNING SYSTEMS 卷材扫描系统
IMAGE-PROCESSING TOOLKITS 图象处理工具包
COMPILERS 编译器DATA-ACQUISITION TOOLKITS 数据采集工具套件DEVELOPMENT TOOLS 开发工具DIGITAL-SIGNAL-PROCESSOR(DSP) DEVELOPMENT TOOLKITS 数字信号处理开发工具套件REAL-TIME OPERATING SYSTEMS(RTOSs) 实时操作系统WINDOWS 窗口
IMAGE SOURCES 图象光源
FLASHLAMPS 闪光灯FLUORESCENT SOURCES 荧光源LASERS 激光器LIGHT-EMITTING DIODES(LEDs) 发光二极管STROBE ILLUMINATION 频闪照明TUNGSTEN LAMPS 钨灯ULTRAVIOLET LAMPS 紫外灯WHITE-LIGHT SOURCES 白光灯XENON LAMPS 氙气灯X-RAY SOURCES X射线源
IMAGE-STORAGE DEVICES 图象存储器
HARD DRIVES 硬盘设备OPTICAL STORAGE DEVICES 光存储设备RAID STORAGE DEVICES RAID存储设备(廉价磁盘冗余阵列设备)
INTEGRATED CIRCUITS 集成电路
ASICs 专用集成电路ANALOG-TO-DIGITAL CONVERTERS 模数转换器COMMUNICATIONS CONTROLLERS 通信控制器DIGITAL-SIGNAL PROCESSORS 数字信号处理器DIGITAL-TO-ANALOG CONVERTERS 数模转换器DISPLAY CONTROLLERS 显示器控制器FIELD-PROGRAMMABLE GATE ARRAYS 现场可编程门阵列GRAPHICS-DISPLAY CONTROLLERS 图形显示控制器IMAGE-PROCESSING ICs 图象处理芯片MIXED-SIGNAL ICs 混合信号芯片VIDEO-PROCESSING ICs 视频处理芯片
LENSES 镜头
CAMERA LENSES 相机镜头ENLARGING LENSES 放大镜头HIGH-RESOLUTION LENSES 高分辨率镜头IMAGE-SCANNING LENSES 图象扫描镜头PROJECTION LENSES 投影镜头TELECENTRIC LENSES 远心镜头VIDEO LENSES 摄象机镜头
MONITORS 监视器
CATHODE-RAY-TUBE(CRT) MONITORS, COLOR CRT彩色监视器CATHODE-RAY-TUBE(CRT) MONITORS, MONOCHROME 单色CRT监视器LIQUID-CRYSTAL-DISPLAY(LCD) MONITORS LCD监视器
数字图像处理与边缘检测中英文对照外文翻译文献

中英文资料对照外文翻译

Digital Image Processing and Edge Detection

Digital Image Processing

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images.
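Low-level operations of the kind just described are typically a few lines of array code. Below is a minimal sketch of noise reduction by 3x3 neighborhood averaging, assuming numpy; the window size is an arbitrary illustrative choice.

import numpy as np

def mean_filter3x3(image):
    # Replace each pixel by the average of its 3x3 neighborhood.
    # Edges are handled by replicating the border pixels; averaging
    # suppresses additive noise at the cost of blurring fine detail.
    padded = np.pad(image.astype(np.float64), 1, mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : padded.shape[0] - 1 + dy,
                          1 + dx : padded.shape[1] - 1 + dx]
    return (out / 9.0).astype(image.dtype)

# usage: denoised = mean_filter3x3(noisy_gray_image)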
Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.

Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.

The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.

Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in the figure below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other.
Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.

Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.

Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.

Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.

Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.
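Before moving on to representation and description, the enhancement stage surveyed above is easy to illustrate. The sketch below shows linear contrast stretching, one common enhancement technique (not one singled out by the text); the synthetic low-contrast image is an assumption for demonstration.

    import numpy as np

    def stretch_contrast(img):
        """Linearly remap gray levels so the darkest pixel becomes 0
        and the brightest becomes 255 (a simple enhancement)."""
        lo, hi = float(img.min()), float(img.max())
        if hi == lo:                      # flat image: nothing to stretch
            return img.copy()
        out = (img.astype(np.float64) - lo) * 255.0 / (hi - lo)
        return out.astype(np.uint8)

    # Example: a low-contrast image occupying only gray levels 100..150.
    img = np.random.randint(100, 151, size=(64, 64), dtype=np.uint8)
    enhanced = stretch_contrast(img)
    print(enhanced.min(), enhanced.max())   # now 0 and 255

Whether the stretched result "looks better" is, as the text stresses, a subjective judgment.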
Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.

Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.

So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base can also be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem, or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.

Edge Detection

Edge detection is a term used in image processing and computer vision, particularly in the areas of feature detection and feature extraction, to refer to algorithms that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level.

Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not ideal step edges at all. Instead they are normally affected by one or several of the following effects:
1. focal blur caused by a finite depth-of-field and finite point spread function;
2. penumbral blur caused by shadows created by light sources of non-zero radius;
3. shading at a smooth object edge;
4. local specularities or inter-reflections in the vicinity of object edges.

A typical edge might for instance be the border between a block of red color and a block of yellow. In contrast, a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background.
For a line, there may therefore usually be one edge on each side of the line.

To illustrate why edge detection is not a trivial task, consider the problem of detecting edges in the following one-dimensional signal. Here, we may intuitively say that there should be an edge between the 4th and 5th pixels:

5 7 6 4 152 148 149

If the intensity difference between the 4th and the 5th pixels were smaller, and if the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges. Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.

There are many methods for edge detection, but most of them can be grouped into two categories: search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian or the zero-crossings of a non-linear differential expression, as described in the section on differential edge detection below. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also noise reduction).

The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.

Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise and to picking out irrelevant features from the image. Conversely, a high threshold may miss subtle edges, or result in fragmented edges.

If edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick, and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression, however, the edge curves are thin by definition, and the edge pixels can be linked into edge polygons by an edge linking (edge tracking) procedure.
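The search-based scheme just described, and the hysteresis thresholding discussed in the paragraphs that follow, can both be sketched in a few lines. The following is a hedged illustration rather than any published detector: it estimates Sobel gradients, forms the gradient magnitude, and then keeps weak edge responses only where their region touches a strong one. The synthetic image and both threshold values are arbitrary assumptions.

    import numpy as np
    from scipy import ndimage

    def gradient_magnitude(img):
        """Edge strength as the magnitude of Sobel gradient estimates."""
        gx = ndimage.sobel(img.astype(np.float64), axis=1)  # x-derivative
        gy = ndimage.sobel(img.astype(np.float64), axis=0)  # y-derivative
        return np.hypot(gx, gy)

    def hysteresis_edges(magnitude, low, high):
        """Keep pixels above `low` only if their connected region also
        contains a pixel above `high` (assumes high >= low)."""
        weak = magnitude > low
        labels, _ = ndimage.label(weak)               # group weak responses
        strong = np.unique(labels[magnitude > high])  # labels holding a strong pixel
        return np.isin(labels, strong)

    # A synthetic step edge: dark left half, bright right half.
    img = np.zeros((16, 16), dtype=np.uint8)
    img[:, 8:] = 200
    mag = gradient_magnitude(img)
    simple = mag > 100.0                         # single global threshold
    linked = hysteresis_edges(mag, 50.0, 100.0)  # two-threshold variant
    print(simple.sum(), linked.sum())            # edge pixels found by each

A lower single threshold admits more noise, while the hysteresis variant can follow a faint continuation of a strong edge, which is exactly the trade-off the surrounding text describes.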
On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, rounding the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.

A commonly used approach to handle the problem of appropriate thresholds is thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to lie in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without marking every noisy pixel in the image as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image.

Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient.

We can come to the conclusion that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is "significant" or not is to use a threshold. Thus we define a point in an image as being an edge point if its two-dimensional first-order derivative is greater than a specified threshold; a set of such points that are connected according to a predefined criterion of connectedness is by definition an edge. The term edge segment generally is used if the edge is short in relation to the dimensions of the image. A key problem in segmentation is to assemble edge segments into longer edges. An alternative definition, if we elect to use the second derivative, is simply to define the edge points in an image as the zero crossings of its second derivative, with the definition of an edge being the same as above. It is important to note that these definitions do not guarantee success in finding edges in an image; they simply give us a formalism to look for them. First-order derivatives in an image are computed using the gradient; second-order derivatives are obtained using the Laplacian.

Chinese Translation: Digital Image Processing and Edge Detection

Digital Image Processing

Research in digital image processing methods stems from two principal application areas: first, improving pictorial information so that it is easier for people to analyze; and second, storing, transmitting, and displaying image data so that machines can understand it automatically.
English Source Text

Image processing is not a one-step process. We are able to distinguish between several steps which must be performed one after the other until we can extract the data of interest from the observed scene. In this way a hierarchical processing scheme is built up, as sketched in the figure, which gives an overview of the different phases of image processing.

Image processing begins with the capture of an image with a suitable, not necessarily optical, acquisition system. In a technical or scientific application, we may choose to select an appropriate imaging system. Furthermore, we can set up the illumination system, choose the best wavelength range, and select other options to capture the object feature of interest in the best way in an image. Once the image is sensed, it must be brought into a form that can be treated with digital computers. This process is called digitization.

As the problems of traffic become more and more serious, Intelligent Transport Systems (ITS) have emerged. Automatic license plate recognition is one of the most significant subjects arising from the combination of computer vision and pattern recognition. The image input to the computer is processed and analyzed in order to locate the license plate and recognize the characters on it, expressing those characters in text-string form. The license plate recognition (LPR) system has important applications in ITS. In LPR, the first step is to locate the license plate in the captured image, which is very important for character recognition: the recognition rate is governed by how accurately the plate is located. In this paper, several image-manipulation methods are compared and analyzed, and solutions for localizing the car plate are derived from them. Experiments show that good results have been obtained with these methods.
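The plate-localization idea elaborated in the next paragraph rests on profiles of edge activity. The sketch below is a hedged illustration of that general idea, not a reconstruction of the paper's actual algorithm; the 50% cutoff and the use of SciPy's Sobel operator are assumptions made for the example.

    import numpy as np
    from scipy import ndimage

    def plate_row_band(img):
        """Illustrative projection-based localization: license plate rows
        are rich in vertical edges, so find the horizontal band whose
        rows carry the most vertical-edge energy."""
        edges = np.abs(ndimage.sobel(img.astype(np.float64), axis=1))
        row_profile = edges.sum(axis=1)     # vertical-edge energy per row
        cutoff = 0.5 * row_profile.max()    # arbitrary 50% cutoff
        rows = np.nonzero(row_profile >= cutoff)[0]
        return rows.min(), rows.max()       # candidate upper and lower borders

The left and right borders could be estimated the same way from a column profile of the cropped band.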
The methods based on edge maps and frequency analysis are used in the process of localizing the license plate; that is to say, the characteristics of the license plate are extracted from the car image after edge detection, and then analyzed and processed until the probable license plate area is obtained. Automated license plate location is a part of image processing, and it is also an important part of intelligent traffic systems; it is the key step in Vehicle License Plate Recognition (LPR). A method for the recognition of images with different backgrounds and different illuminations is proposed in the paper. The upper and lower borders are determined through the gray-variation regularity of the character distribution. The left and right borders are determined through the black-white variation of the pixels in every row.

The first steps of digital processing may include a number of different operations and are known as image processing. If the sensor has nonlinear characteristics, these need to be corrected. Likewise, brightness and contrast of the image may require improvement. Commonly, too, coordinate transformations are needed to restore geometrical distortions introduced during image formation. Radiometric and geometric corrections are elementary pixel processing operations. It may be necessary to correct known disturbances in the image, for instance those caused by defocused optics, motion blur, errors in the sensor, or errors in the transmission of image signals. We also deal with reconstruction techniques, which are required with many indirect imaging techniques, such as tomography, that deliver no direct image.

A whole chain of processing steps is necessary to analyze and identify objects. First, adequate filtering procedures must be applied in order to distinguish the objects of interest from other objects and the background. Essentially, from an image (or several images), one or more feature images are extracted. The basic tools for this task are averaging and edge detection and the analysis of simple neighborhoods and complex patterns known as texture in image processing. An important feature of an object is also its motion, so techniques to detect and determine motion are necessary. Then the object has to be separated from the background. This means that regions of constant features and discontinuities must be identified. This process leads to a label image. Now that we know the exact geometrical shape of the object, we can extract further information such as the mean gray value, the area, perimeter, and other parameters for the form of the object [3]. These parameters can be used to classify objects. This is an important step in many applications of image processing, as the following examples show. In a satellite image showing an agricultural area, we would like to distinguish fields with different fruits and obtain parameters to estimate their ripeness or to detect damage by parasites. There are many medical applications where the essential problem is to detect pathological changes; a classic example is the analysis of aberrations in chromosomes. Character recognition in printed and handwritten text is another example which has been studied since image processing began and still poses significant difficulties. You hopefully do more, namely try to understand the meaning of what you are reading. This is also the final step of image processing, where one aims to understand the observed scene. We perform this task more or less unconsciously whenever we use our visual system. We recognize people, we can easily distinguish between the image of a scientific lab and that of a living room, and we watch the traffic to cross a street safely. We all do this without knowing how the visual system works.
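Returning to the label-image step described above, the following sketch measures the mean gray value, area, and perimeter of a labeled object. scikit-image is one possible library choice, not one named by the text, and the toy image is an assumption for demonstration.

    import numpy as np
    from skimage import measure

    # A toy image: background is 0, one bright object of gray level 180.
    img = np.zeros((10, 10), dtype=np.uint8)
    img[3:7, 3:8] = 180
    labels = (img > 0).astype(int)   # label image produced by segmentation

    # Measure the object parameters discussed above for each region.
    for region in measure.regionprops(labels, intensity_image=img):
        print("area:", region.area)                    # number of pixels
        print("perimeter:", region.perimeter)          # boundary length
        print("mean gray value:", region.mean_intensity)

Such parameters are exactly the kind of quantities that a subsequent classification stage would operate on.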
For some time now, image processing and computer graphics have been treated as two different areas. Knowledge in both areas has increased considerably, and more complex problems can now be treated. Computer graphics is striving to achieve photorealistic computer-generated images of three-dimensional scenes, while image processing is trying to reconstruct one from an image actually taken with a camera. In this sense, image processing performs the inverse procedure to that of computer graphics. We start with knowledge of the shape and features of an object, at the bottom of the figure, and work upwards until we get a two-dimensional image. To handle image processing or computer graphics, we basically have to work from the same knowledge. We need to know the interaction between illumination and objects, how a three-dimensional scene is projected onto an image plane, and so on. There are still quite a few differences between an image processing and a graphics workstation. But we can envisage that, when the similarities and interrelations between computer graphics and image processing are better understood and the proper hardware is developed, we will see some kind of general-purpose workstation in the future which can handle computer graphics as well as image processing tasks [5]. The advent of multimedia, i.e., the integration of text, images, sound, and movies, will further accelerate the unification of computer graphics and image processing.

In January 1980, Scientific American published a remarkable image called Plume 2, the second of eight volcanic eruptions detected on the Jovian moon Io by the spacecraft Voyager 1 on 5 March 1979. The picture was a landmark image in interplanetary exploration: the first time an erupting volcano had been seen in space. It was also a triumph for image processing. Satellite imagery and images from interplanetary explorers have until fairly recently been the major users of image processing techniques, where a computer image is numerically manipulated to produce some desired effect, such as making a particular aspect or feature in the image more visible. Image processing has its roots in photo reconnaissance in the Second World War, where processing operations were optical and interpretation operations were performed by humans who undertook such tasks as quantifying the effect of bombing raids. With the advent of satellite imagery in the late 1960s, much computer-based work began, and color composite satellite images, sometimes startlingly beautiful, have become part of our visual culture and the perception of our planet. Like computer graphics, image processing was until recently confined to research laboratories which could afford the expensive image processing computers that could cope with the substantial processing overheads required to process large numbers of high-resolution images. With the advent of cheap powerful computers and image collection devices like digital cameras and scanners, we have seen a migration of image processing techniques into the public domain. Classical image processing techniques are routinely employed by graphic designers to manipulate photographic and generated imagery, either to correct defects, change color, and so on, or creatively to transform the entire look of an image by subjecting it to some operation such as edge enhancement.
A recent mainstream application of image processing is the compression of images, either for transmission across the Internet or the compression of moving video images in video telephony and video conferencing. Video telephony is one of the current crossover areas that employ both computer graphics and classical image processing techniques to try to achieve very high compression rates. All this is part of an inexorable trend towards the digital representation of images. Indeed, that most powerful image form of the twentieth century, the TV image, is also about to be taken into the digital domain.

Image processing is characterized by a large number of algorithms that are specific solutions to specific problems. Some are mathematical or context-independent operations that are applied to each and every pixel. For example, we can use Fourier transforms to perform image filtering operations. Others are "algorithmic": we may use a complicated recursive strategy to find those pixels that constitute the edges in an image. Image processing operations often form part of a computer vision system. The input image may be filtered to highlight or reveal edges prior to a shape detection stage; such filtering steps are usually known as low-level operations. In computer graphics, filtering operations are used extensively to avoid aliasing or sampling artifacts.

Chinese Translation

Image processing is not a process that can be completed in a single step.