Image Processing English Translations

Image Processing Median Filter: Chinese-English Foreign Literature Translation

Chinese-English Foreign Literature Translation

Part I. English Original

A NEW CONTENT BASED MEDIAN FILTER

ABSTRACT

In this paper the hardware implementation of a content-based median filter suitable for real-time impulse noise suppression is presented. The function of the proposed circuitry is adaptive; it detects the existence of impulse noise in an image neighborhood and applies the median filter operator only when necessary. In this way, the blurring of the image being processed is avoided and the integrity of edge and detail information is preserved. The proposed digital hardware structure is capable of processing gray-scale images of 8-bit resolution and is fully pipelined, whereas parallel processing is used to minimize computational time. The architecture presented was implemented in FPGA and it can be used in industrial imaging applications, where fast processing is of the utmost importance. The typical system clock frequency is 55 MHz.

1. INTRODUCTION

Two applications of great importance in the area of image processing are noise filtering and image enhancement [1]. These tasks are an essential part of any image processor, whether the final image is utilized for visual interpretation or for automatic analysis. The aim of noise filtering is to eliminate noise and its effects on the original image, while corrupting the image as little as possible. To this end, nonlinear techniques (like the median and, in general, order statistics filters) have been found to provide more satisfactory results in comparison to linear methods. Impulse noise exists in many practical applications and can be generated by various sources, including a number of man-made phenomena, such as unprotected switches, industrial machines and car ignition systems. Images are often corrupted by impulse noise due to a noisy sensor or channel transmission errors. The most common method used for impulse noise suppression for gray-scale and color images is the median filter (MF) [2]. The basic drawback of the application of the MF is the blurring of the image being processed. In the general case, the filter is applied uniformly across an image, modifying pixels that are not contaminated by noise. In this way, the effective elimination of impulse noise is often at the expense of an overall degradation of the image and blurred or distorted features [3].

In this paper an intelligent hardware structure of a content-based median filter (CBMF) suitable for impulse noise suppression is presented. The function of the proposed circuit is to detect the existence of noise in the image window and apply the corresponding MF only when necessary. The noise detection procedure is based on the content of the image and computes the differences between the central pixel and the surrounding pixels of a neighborhood. The main advantage of this adaptive approach is that image blurring is avoided and the integrity of edge and detail information is preserved [4, 5]. The proposed digital hardware structure is capable of processing gray-scale images of 8-bit resolution and performs both positive and negative impulse noise removal. The architecture chosen is based on a sequence of four basic functional pipelined stages, and parallel processing is used within each stage. A moving window of a 3×3 or 5×5-pixel image neighborhood can be selected. However, the system can be easily expanded to accommodate windows of larger sizes. The proposed structure was implemented using field programmable gate arrays (FPGA).
The digital circuit was designed, compiled and successfully simulated using the MAX+PLUS II Programmable Logic Development System by Altera Corporation. The EPF10K200SFC484-1 FPGA device of the FLEX10KE device family was utilized for the realization of the system. The typical clock frequency is 55 MHz and the system can be used for real-time imaging applications where fast processing is required [6]. As an example, the time required to perform filtering of a gray-scale image of 260×244 pixels is approximately 10.6 msec.

2. ADAPTIVE FILTERING PROCEDURE

The output of a median filter at a point x of an image f depends on the values of the image points in the neighborhood of x. This neighborhood is determined by a window W that is located at point x of f and includes n points x1, x2, ..., xn of f, with n = 2k + 1. The proposed adaptive content-based median filter can be utilized for impulse noise suppression in gray-scale images. A block diagram of the adaptive filtering procedure is depicted in Fig. 1. The noise detection procedure for both positive and negative noise is as follows:

(i) We consider a neighborhood window W that is located at point x of the image f. The differences between the central pixel at point x and the pixel values of the n-1 surrounding points of the neighborhood (excluding the value of the central pixel) are computed.

(ii) The sum of the absolute values of these differences is computed, denoted as fabs(x). This value provides a measure of closeness between the central pixel and its surrounding pixels.

(iii) The value fabs(x) is compared to fthreshold(x), which is an appropriately selected positive integer threshold value and can be modified. The central pixel is considered to be noise when the value fabs(x) is greater than the threshold value fthreshold(x).

(iv) When the central pixel is considered to be noise, it is substituted by the median value of the image neighborhood, denoted as fk+1, which is the normal operation of the median filter. In the opposite case, the value of the central pixel is not altered and the procedure is repeated for the next neighborhood window.

From the noise detection scheme described, it should be mentioned that the noise detection level can be controlled, and a range of pixel values (and not only the fixed values of 0 and 255, i.e. salt-and-pepper noise) is considered as impulse noise.

In Fig. 2 the results of the application of the median filter and the CBMF to the gray-scale image "Peppers" are depicted. More specifically, in Fig. 2(a) the original, uncorrupted image "Peppers" is depicted. In Fig. 2(b) the original image degraded by 5% both positive and negative impulse noise is illustrated. In Figs 2(c) and 2(d) the resultant images of the application of the median filter and the CBMF for a 3×3-pixel window are shown, respectively. Finally, the resultant images of the application of the median filter and the CBMF for a 5×5-pixel window are presented in Figs 2(e) and 2(f). It can be noticed that the application of the CBMF preserves the edges and details of the images much better, in comparison to the median filter.
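The four-step detection procedure described above can be summarized in a short software sketch. This is an illustrative Python/NumPy rendering, not the paper's implementation (the paper realizes the same steps as pipelined parallel hardware); the function name, the brute-force double loop and the edge padding are assumptions made for the sketch.

```python
import numpy as np

def cbmf(image, threshold, k=1):
    """Content-based median filtering of an 8-bit gray-scale image.

    image     -- 2-D uint8 array
    threshold -- positive integer playing the role of fthreshold(x)
    k         -- half window size: k=1 -> 3x3 window, k=2 -> 5x5 window
    """
    padded = np.pad(image.astype(np.int32), k, mode="edge")
    out = image.copy()
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols):
            window = padded[r:r + 2 * k + 1, c:c + 2 * k + 1]
            center = int(padded[r + k, c + k])
            # steps (i)-(ii): fabs(x) = sum of absolute differences between
            # the central pixel and the surrounding pixels of the window
            f_abs = int(np.abs(window - center).sum())
            # steps (iii)-(iv): substitute the window median only when
            # fabs(x) exceeds the threshold; otherwise keep the pixel
            if f_abs > threshold:
                out[r, c] = np.median(window)
    return out
```

Calling cbmf(img, threshold, k=1) corresponds to the 3×3 case and k=2 to the 5×5 case; the threshold is an application-dependent choice, as noted in step (iii).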
A number of different objective measures can be utilized for the evaluation of these results. The most widely used measures are the Mean Square Error (MSE) and the Normalized Mean Square Error (NMSE) [1]. The results of the estimation of these measures for the two filters are depicted in Table I. For the estimation of these measures, the resultant images of the filters are compared to the original, uncorrupted image. From Table I it can be noticed that the MSE and NMSE estimated for the application of the CBMF are considerably smaller than those estimated for the median filter, in all the cases.

Table I. Similarity measures (impulse noise 5%).

              MSE                   NMSE (×10^-2)
  Filter      3×3        5×5        3×3        5×5
  Median      57.554     130.496    0.317      0.718
  CBMF        35.287     84.788     0.194      0.467

3. HARDWARE ARCHITECTURE

The structure of the adaptive filter comprises four basic functional units: the moving window unit, the median computation unit, the arithmetic operations unit, and the output selection unit. The input data of the system are the gray-scale values of the pixels of the image neighborhood and the noise threshold value. For the computation of the filter output a 3×3 or 5×5-pixel image neighborhood can be selected. Image input data are serially imported into the first stage. In this way, the total number of input pins is 24 (21 inputs for the input data and 3 inputs for the clock and the control signals required). The output data of the system are the resultant gray-scale values computed for the operation selected (8 pins).

The moving window unit is the internal memory of the system, used for storing the input values of the pixels and for realizing the moving window operation. The pixel values of the input image, denoted as "IMAGE_INPUT[7..0]", are imported into this unit serially. For the representation of the threshold value used for the detection of a noise pixel, 13 bits are required. For the moving window operation a 3×3 (5×5)-pixel serpentine type memory is used, consisting of 9 (25) registers. In this way, when the window is moved to the next image neighborhood, only 3 or 5 pixel values stored in the memory are altered. The "en5×5" control signal is used for the selection of the size of the image window; when "en5×5" is equal to "0" ("1") a 3×3 (5×5)-pixel neighborhood is selected. It should be mentioned that the modules of the circuit used for the 3×3-pixel window are utilized for the 5×5-pixel window as well. For these modules, 2-to-1 multiplexers are utilized to select the appropriate pixel values, where necessary. The modules that are utilized only in the case of the 5×5-pixel neighborhood are enabled by the "en5×5" control signal. The outputs of this unit are rows of pixel values (3 or 5, respectively), which are the inputs to the median computation unit.

The task of the median computation unit is to compute the median value of the image neighborhood in order to substitute the central pixel value, if necessary. For this purpose a 25-input sorter is utilized. The structure of the sorter has been proposed by Batcher and is based on the use of CS blocks. A CS block is a max/min module; its first output is the maximum of the inputs and its second output the minimum. The implementation of a CS block includes a comparator and two 2-to-1 multiplexers. The output values of the sorter, denoted as "OUT_0[7..0]" ... "OUT_24[7..0]", produce a "sorted list" of the 25 initial pixel values.
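The CS block and the way a network of such blocks yields the median can also be sketched in software. The sketch below uses a simple odd-even transposition network rather than Batcher's network (which the hardware uses because it needs fewer CS blocks); both are sorting networks, so the median is simply the middle output of the sorted list.

```python
def cs(a, b):
    """CS block: first output is the maximum of the two inputs, second is
    the minimum (in hardware, one comparator plus two 2-to-1 multiplexers)."""
    return (a, b) if a >= b else (b, a)

def median_from_network(values):
    """Sort with an odd-even transposition network of CS blocks and
    return the middle element of the resulting sorted list."""
    v = list(values)
    n = len(v)
    for stage in range(n):                      # n stages of independent CS blocks
        for i in range(stage % 2, n - 1, 2):
            hi, lo = cs(v[i], v[i + 1])
            v[i], v[i + 1] = lo, hi             # keep the list in ascending order
    return v[n // 2]
```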
A 2-to-1 multiplexer is used for the selection of the median value for a 3×3 or 5×5-pixel neighborhood.

The function of the arithmetic operations unit is to compute the value fabs(x), which is compared to the noise threshold value in the final stage of the adaptive filter. The inputs of this unit are the surrounding pixel values and the central pixel of the neighborhood. For the implementation of the mathematical expression of fabs(x), the circuit of this unit contains a number of adder modules. Note that registers have been used to achieve a pipelined operation. An additional 2-to-1 multiplexer is utilized for the selection of the appropriate output value, depending on the "en5×5" control signal. From the implementation point of view, the use of arithmetic blocks makes this stage hardware demanding.

The output selection unit is used for the selection of the appropriate output value of the performed noise suppression operation. For this selection, the corresponding noise threshold value calculated for the image neighborhood, "NOISE_THRESHOLD[12..0]", is employed. This value is compared to fabs(x) and the result of the comparison classifies the central pixel either as impulse noise or not. If the value fabs(x) is greater than the threshold value fthreshold(x), the central pixel is positive or negative impulse noise and has to be eliminated. For this reason, the output of the comparison is used as the selection signal of a 2-to-1 multiplexer whose inputs are the central pixel and the corresponding median value for the image neighborhood. The output of the multiplexer is the output of this stage and the final output of the circuit of the adaptive filter. The structure of the CBMF, the computation procedure and the design of the four aforementioned units are illustrated in Fig. 3.

Figure 1: Block diagram of the filtering method.
Figure 2: Results of the application of the CBMF: (a) original image, (b) noise-corrupted image, (c) restored image by a 3×3 MF, (d) restored image by a 3×3 CBMF, (e) restored image by a 5×5 MF and (f) restored image by a 5×5 CBMF.

4. IMPLEMENTATION ISSUES

The proposed structure was implemented in FPGA, which offers an attractive combination of low cost, high performance and apparent flexibility, using the MAX+PLUS II software package of Altera Corporation. The FPGA used is the EPF10K200SFC484-1 device of the FLEX10KE device family, a device family suitable for designs that require high densities and high I/O count. 99% of the logic cells (9965/9984 logic cells) of the device were utilized to implement the circuit. The typical operating clock frequency of the system is 55 MHz. As a comparison, the time required to perform filtering of a gray-scale image of 260×244 pixels using Matlab® software on a Pentium 4/2.4 GHz computer system is approximately 7.2 sec, whereas the corresponding time using hardware is approximately 10.6 msec.

The modification of the system to accommodate windows of larger sizes can be done in a straightforward way, requiring only a small number of changes. More specifically, in the first unit the size of the serpentine memory and the corresponding number of multiplexers increase following a square law. In the second unit, the sorter module should be modified, and in the third unit the number of the adder devices increases following a square law. In the last unit no changes are required.
5. CONCLUSIONS

This paper presents a new hardware structure of a content-based median filter, capable of performing adaptive impulse noise removal for gray-scale images. The noise detection procedure takes into account the differences between the central pixel and the surrounding pixels of a neighborhood. The proposed digital circuit is capable of processing gray-scale images of 8-bit resolution, with 3×3 or 5×5-pixel neighborhoods as options for the computation of the filter output. However, the design of the circuit is directly expandable to accommodate larger-size image windows. The adaptive filter was designed and implemented in FPGA. The typical clock frequency is 55 MHz and the system is suitable for real-time imaging applications.

REFERENCES

[1] W. K. Pratt, Digital Image Processing. New York: Wiley, 1991.
[2] G. R. Arce, N. C. Gallagher and T. Nodes, "Median filters: Theory and applications," in Advances in Computer Vision and Image Processing, Greenwich, CT: JAI, 1986.
[3] T. A. Nodes and N. C. Gallagher, Jr., "The output distribution of median type filters," IEEE Transactions on Communications, vol. COM-32, pp. 532-541, May 1984.
[4] T. Sun and Y. Neuvo, "Detail-preserving median based filters in image processing," Pattern Recognition Letters, vol. 15, pp. 341-347, Apr. 1994.
[5] E. Abreau, M. Lightstone, S. K. Mitra, and K. Arakawa, "A new efficient approach for the removal of impulse noise from highly corrupted images," IEEE Transactions on Image Processing, vol. 5, pp. 1012-1025, June 1996.
[6] E. R. Dougherty and P. Laplante, Introduction to Real-Time Imaging, Bellingham: SPIE/IEEE Press, 1995.

Part II. Translation

A New Content-Based Median Filter

Abstract: In this design, a hardware implementation based on the median filter is presented for the suppression of impulse noise.

Image Processing Vocabulary

图像处理核心单词Photoreceptor cells:感光细胞Rod: 杆状细胞Cone: 锥状细胞Retina: 视网膜Iris: 虹膜Fovea: 中央凹Visual cortex: 视觉皮层CCD: charge-coupled devices电荷耦合器件Scanning: 扫描Continuous: 连续的Discrete: 离散的Digitization: 数字化Sampling: 采样Quantization: 量化Band-limited function: 带宽有限函数ADC: analog-to-digital converter 模数转换器Pixel: picture element 象素Gray-scale :灰度Gray level:灰度级Gray-scale resolution: 灰度分辨率Resolution: 分辨率Sample density: 采样密度Bit: 比特Byte: 字节Pixel spacing: 象素间距Contrast: 对比度Noise: 噪声SNR: signal-to-noise ratio 信噪比Frame: 帧Field: 场Line: 行,线Interlaced scanning: 隔行扫描Frame grabber: 帧抓取器Image enhancement:图象增强Image quality:图象质量Algorithm: 算法Globe operation: 全局运算Local operation: 局部运算Point operation: 点运算Spatial: 空间的Spatial domain:空间域Spatial coordinate:空间坐标Linear: 线性Nonlinear: 非线性Frequency: 频率Frequency variable: 频率变量Frequency domain: 频域Fourier transform: 傅立叶变换One-dimensional Fourier transform: 一维傅立叶变换Two-dimensional Fourier transform: 二维傅立叶变换Discrete Fourier transform(DFT): 离散傅立叶变换Fast Fourier transform(FFT): 快速傅立叶变换Inverse Fourier transform: 傅立叶反变换Contrast enhancement: 对比度增强Contrast stretching: 对比度扩展Gray-scale transformation(GST): 灰度变换Logarithm transformation: 对数变换Exponential transformation: 指数变换Threshold: 阈值Thresholding: 二值化、门限化False contour: 假轮廓Histogram: 直方图Multivariable histogram: 多变量直方图Histogram modification: 直方图调整、直方图修改Histogram equalization: 直方图均衡化Histogram specification: 直方图规定化Histogram matching: 直方图匹配Histogram thresholing: 直方图门限化Probability density function(PDF): 概率密度函数Cumulative distribution function(CDF): 累积分布函数Slope: 斜率Normalized: 归一化Inverse function: 反函数Calculus: 微积分Derivative: 导数Integral: 积分Monotonic function: 单调函数Infinite: 无穷大Infinitesimal: 无穷小Equation: 方程Numerator: 分子Denominator: 分母Coefficient: 系数Image smoothing: 图象平滑Image averaging: 图象平均Expectation: 数学期望Mean: 均值Variance: 方差Median filtering: 中值滤波Neighborhood: 邻域Filter: 滤波器Lowpass filter: 低通滤波器Highpass filter: 高通滤波器Bandpass filter: 带通滤波器Bandreject filter、Bandstop filter: 带阻滤波器Ideal filter: 理想滤波器Butterworth filter: 巴特沃思滤波器Exponential filter: 指数滤波器Trapezoidal filter: 梯形滤波器Transfer function: 传递函数Frequency response: 频率响应Cut-off frequency: 截止频率Spectrum: 频谱Amplitude spectrum: 幅值谱Phase spectrum: 相位谱Power spectrum: 功率谱Blur: 模糊Random: 随机Additive: 加性的Uncorrelated: 互不相关的Salt & pepper noise: 椒盐噪声Gaussian noise: 高斯噪声Speckle noise: 斑点噪声Grain noise: 颗粒噪声Bartlett window: 巴特雷窗Hamming window: 汉明窗Hanning window: 汉宁窗Blackman window: 布赖克曼窗Convolution: 卷积Convolution kernel: 卷积核。

Image Processing Foreign Literature Translation (2)

附录一英文原文Illustrator software and Photoshop software difference Photoshop and Illustrator is by Adobe product of our company, but as everyone more familiar Photoshop software, set scanning images, editing modification, image production, advertising creative, image input and output in one of the image processing software, favored by the vast number of graphic design personnel and computer art lovers alike.Photoshop expertise in image processing, and not graphics creation. Its application field, also very extensive, images, graphics, text, video, publishing various aspects have involved. Look from the function, Photoshop can be divided into image editing, image synthesis, school tonal color and special effects production parts. Image editing is image processing based on the image, can do all kinds of transform such as amplifier, reducing, rotation, lean, mirror, clairvoyant, etc. Also can copy, remove stain, repair damaged image, to modify etc. This in wedding photography, portrait processing production is very useful, and remove the part of the portrait, not satisfied with beautification processing, get let a person very satisfactory results.Image synthesis is will a few image through layer operation, tools application of intact, transmit definite synthesis of meaning images, which is a sure way of fine arts design. Photoshop provide drawing tools let foreign image and creative good fusion, the synthesis of possible make the image is perfect.School colour in photoshop with power is one of the functions of deep, the image can be quickly on the color rendition, color slants adjustment and correction, also can be in different colors to switch to meet in different areas such as web image design, printing and multimedia application.Special effects production in photoshop mainly by filter, passage of comprehensive application tools and finish. Including image effects of creative and special effects words such as paintings, making relief, gypsum paintings, drawings, etc commonly used traditional arts skills can be completed by photoshop effects. And all sorts of effects of production aremany words of fine arts designers keen on photoshop reason to study.Users in the use of Photoshop color function, will meet several different color mode: RGB, CMY K, HSB and Lab. RGB and CMYK color mode will let users always remember natural color, users of color and monitors on the printed page color is a totally different approach to create. The monitor is by sending red, green, blue three beams to create color: it is using RGB (red/green/blue) color mode. In order to make a complex color photographs on a continuous colour and lustre effect, printing technology used a cyan, the red, yellow and black ink presentation combinations from and things, reflect or absorb all kinds of light wavelengths. Through overprint) this print (add four color and create color is CMYK (green/magenta/yellow/black) yan color part of a pattern. HSB (colour and lustre/saturation/brightness) color model is based on the way human feelings, so the color will be natural color for customer computer translation of the color create provides an intuitive methods. The Lab color mode provides a create "don't rely on equipment" color method, this also is, no matter use what monitors.Photoshop expertise in image processing, and not graphics creation. It is necessary to distinguish between the two concepts. 
Image processing of the existing bitmap image processing and use edit some special effects, the key lies in the image processing processing; Graphic creation software is according to their own idea originality, using vector graphics to design graphics, this kind of software main have another famous company Adobe Illustrator and Macromedia company software Freehand.As the world's most famous Adobe Illustrator, feat graphics software is created, not graphic image processing. Adobe Illustrator is published, multimedia and online image industry standard vector illustration software. Whether production printing line draft of the designers and professional Illustrator, production multimedia image of artists, or Internet page or online content producers Illustrator, will find is not only an art products tools. This software for your line of draft to provide unprecedented precision and control, is suitable for the production of any small design to large complex projects.Adobe Illustrator with its powerful function and considerate user interface has occupied most of the global vector editing software share. With incomplete statistics global 37% of stylist is in use Adobe Illustrator art design. Especially the patent PostScript Adobe companybased on the use of technology, has been fully occupied professional Illustrator printed fields. Whether you're line art designers and professional Illustrator, production multimedia image of artists, or Internet page or online content producers, had used after Illustrator, its formidable will find the function and concise interface design style only Freehand to compare. (Macromedia Freehand is launched vector graphics software company, following the Macromedia company after the merger by Adobe Illustrator and will decide to continue the development of the software have been withdrawn from market).Adobe company in 1987 when they launched the Illustrator1.1 version. In the following year, and well platform launched 2.0 version. Illustrator really started in 1988, should say is introduced on the Mac Illustrator 88 version. A year after the upgrade to on the Mac version3.0 in 1991, and spread to Unix platforms. First appeared on the platform in the PC version4.0 version of 1992, this version is also the earliest Japanese transplant version. And in the MAC is used most is5.0/5.5 version, because this version used Dan Clark's do alias (anti-aliasing display) display engine is serrated, make originally had been in graphic display of vector graphics have a qualitative leap. At the same time on the screen making significant reform, style and Photoshop is very similar, so for the Adobe old users fairly easy to use, it is no wonder that did not last long, and soon also popular publishing industry launched Japanese. But not offering PC version. Adobe company immediately Mac and Unix platforms in launched version6.0. And by Illustrator real PC users know is introduced in 1997, while7.0 version of Mac and Windows platforms launch. Because the 7.0 version USES the complete PostScript page description language, make the page text and graphics quality got again leap. The more with her and Photoshop good interchangeability, won a good reputation. The only pity is the support of Chinese 7.0 abysmal. 
In 1998 the company launched landmark Adobe Illustrator8.0, making version - Illustrator became very perfect drawing software, is relying on powerful strength, Adobe company completely solved of Chinese characters and Japanese language support such double byte, more increased powerful "grid transition" tool (there are corresponding Draw9.0 Corel, but the effect the function of poor), text editing tools etc function, causes its fully occupy the professional vector graphics software's supremacy.Adobe Illustrator biggest characteristics is the use of beisaier curve, make simpleoperation powerful vector graphics possible. Now it has integrated functions such as word processing, coloring, not only in illustrations production, in printing products (such as advertising leaflet, booklet) design manufacture aspect is also widely used, in fact has become desktop publishing or (DTP) industry default standard. Its main competitors are in 2005, but MacromediaFreehand Macromedia had been Adobe company mergers.So-called beisaier curve method, in this software is through "the pen tool" set "anchor point" and "direction line" to realize. The average user in the beginning when use all feel not accustomed to, and requires some practice, but once the master later can follow one's inclinations map out all sorts of line, and intuitive and reliable.It also as Creative Suite of software suit with important constituent, and brother software - bitmap graphics software Photoshop have similar interface, and can share some plug-ins and function, realize seamless connection. At the same time it also can put the files output for Flash format. Therefore, can pass Illustrator let Adobe products and Flash connection.Adobe Illustrator CS5 on May 17, 2010 issue. New Adobe Illustrator CS5 software can realize accurate in perspective drawing, create width variable stroke, use lifelike, make full use of paint brush with new Adobe CS Live online service integration. AI CS5 has full control of the width zoom along path variable, and stroke, arrows, dashing and artistic brushes. Without access to multiple tools and panel, can directly on the sketchpad merger, editing and filling shape. AI CS5 can handle a file of most 100 different size, and according to your sketchpad will organize and check them.Here in Adobe Illustrator CS5, for example, briefly introduce the basic function: Adobe IllustratorQuick background layerWhen using Illustrator after making good design, stored in Photoshop opens, if often pattern is in a transparent layer, and have no background ground floor. Want to produce background bottom, are generally add a layer, and then executed merge down or flatten, with background ground floor. We are now introducing you a quick method: as long as in diagram level on press the upper right version, choose new layer, the arrow in the model selection and bottom ", "background can quickly produce. However, in Photoshop 5 after the movementmerged into one instruction, select menu on the "new layer is incomplete incomplete background bottom" to finish.Remove overmuch type clothWhen you open the file, version 5 will introduce the Illustrator before Illustrator version created files disused zone not need. In order to remove these don't need in the zone, click on All Swatches palette Swatches icon and then Select the Select clause in the popup menu, and Trash Unused. Click on the icon to remove irrelevant type cloth. Sometimes you must repeat selection and delete processes to ensure that clear palette. 
Note that complex documents will take a relatively long time doing cleanup.Put the fabric to define the general-screeningIn Illustrator5 secondary color and process color has two distinct advantages compared to establish for easy: they provide HuaGan tonal; And when you edit the general-screening prescription, be filled some of special color objects will be automatically updated into to the new color. Because process color won't let you build tonal and provides automatic updates, you may want to put all the fabric is defined as the general-screening. But to confirm Illustrator, when you are in QuarkXPress or when PageMaker quaclrochramatic must keep their into process of color.Preferred using CMYKBecause of Illustrator7 can let you to CMYK, RGB and HSB (hue, saturation, bright) color mode, so you want to establish color the creation of carefully, you can now contains the draft with the combination of these modes created objects. When you do, they may have output various kinds of unexpected things will happen. Printing output file should use CMYK; Only if you don't use screen display manuscript RGB. If your creation draft will also be used for printing and screen display, firstly with CMYK create printing output file, then use to copy it brings As ordered the copy and modify to the appropriate color mode.Information source:" Baidu encyclopedia "附录二中文译文Illustrator软件与Photoshop软件的区别Photoshop与Illustrator都是由Adobe公司出品的,而作为大家都比较熟悉的Photoshop软件,集图像扫描、编辑修改、图像制作、广告创意,图像输入与输出于一体的图形图像处理软件,深受广大平面设计人员和电脑美术爱好者的喜爱。

DIP International Terminology

DIP International Terminology: Basic Concepts of Digital Image Processing

Digital image processing (DIP) refers to the techniques and methods used to process, analyze, and manipulate digital images.

Many terms are in wide use in the DIP field, covering image acquisition, image enhancement, image segmentation, image compression, and related topics.

This article introduces the basic concepts and applications behind some of these international DIP terms.

1. Image Acquisition

Image acquisition is the process of converting optical information from the real world into a digital image using a sensor or other device.

Common acquisition devices include digital cameras, scanners, and medical imaging equipment.

Acquisition quality strongly affects every later processing step, so the device must be chosen appropriately, the lighting controlled, and the parameters adjusted.

2. Image Enhancement

Image enhancement adjusts properties such as brightness, contrast, and color so that an image becomes visually clearer, more vivid, or easier to analyze.

Common enhancement methods include histogram equalization, filtering, and sharpening.

Enhancement improves the visual impression of an image and raises its quality and clarity.
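As an illustration of one of the enhancement methods just mentioned, here is a minimal histogram-equalization sketch in Python/NumPy. It assumes an 8-bit gray-scale image stored as a 2-D uint8 array and skips the degenerate case of a constant image.

```python
import numpy as np

def equalize_hist(gray):
    """Spread the gray levels by remapping each level through the
    normalized cumulative histogram (CDF) of the image."""
    hist = np.bincount(gray.ravel(), minlength=256)    # 256-bin histogram
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0][0]                           # first occupied level
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0), 0, 255)
    return lut.astype(np.uint8)[gray]                   # apply the lookup table
```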

3. Image Segmentation

Image segmentation divides an image into distinct regions or objects. Its goal is to extract the regions of interest and to provide the basis for subsequent image analysis and understanding.

Common segmentation methods include thresholding, edge detection, and region growing.

Segmentation is widely used in medical imaging, computer vision, object detection, and related fields.
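To make the thresholding idea concrete, here is a small sketch of global threshold selection with Otsu's method, one common (but not the only) way to pick the threshold automatically. Python/NumPy, 8-bit gray-scale input assumed.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level that maximizes the between-class variance
    of the foreground/background split (Otsu's criterion)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_score = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()          # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0         # class means
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        score = w0 * w1 * (mu0 - mu1) ** 2               # between-class variance
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# binary = gray >= otsu_threshold(gray)   # the actual segmentation step
```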

4. Image Compression

Image compression reduces the storage space or transmission bandwidth required for image data, together with the corresponding reconstruction of the data.

Compression methods are either lossy or lossless.

Lossy compression shrinks the image file at the cost of some information loss, whereas lossless compression allows the original image to be recovered exactly.

JPEG and PNG are commonly used compressed image formats.

5. Morphological Processing

Morphological processing is an image processing approach based on the shape and structure of image content, used mainly for feature extraction and morphological operations.

Its principal operations are erosion, dilation, opening, and closing.
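A small sketch of these operations for binary images, built from SciPy's erosion and dilation routines; the 3×3 structuring element is an illustrative choice.

```python
import numpy as np
from scipy import ndimage

structure = np.ones((3, 3), dtype=bool)          # 3x3 structuring element

def opening(binary):
    """Erosion followed by dilation: removes small bright specks while
    roughly preserving the shape of larger objects."""
    eroded = ndimage.binary_erosion(binary, structure=structure)
    return ndimage.binary_dilation(eroded, structure=structure)

def closing(binary):
    """Dilation followed by erosion: fills small holes and gaps."""
    dilated = ndimage.binary_dilation(binary, structure=structure)
    return ndimage.binary_erosion(dilated, structure=structure)
```

SciPy also provides ndimage.binary_opening and ndimage.binary_closing, which perform the same compositions in a single call.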

Digital Image Processing: English Original and Translation

Digital Image Processing and Edge DetectionDigital Image ProcessingInterest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pixels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spec- trum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultra- sound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vi- sion, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields asingle number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in be- tween image processing and computer vision.There are no clearcut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high level processes. Low-level processes involve primitive opera- tions such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. 
A midlevel process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher level processing involves “making sense” of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting(segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement “making sense.”As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnet- ic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in fig. below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to theother.Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. 
Generally, the image acquisition stage involves preprocessing, such as scaling.Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because “it looks better.” It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a “good”enhancement result.Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks indigital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. 
Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for trans- forming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.Recognition is the process that assigns a label (e.g., “vehicle”) to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where theinformation of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.Edge detectionEdge detection is a terminology in image processing and computer vision, particularly in the areas of feature detection and feature extraction, to refer to algorithms which aim at identifying points in a digital image at which the image brightness changes sharply or more formally has discontinuities.Although point and line detection certainly are important in any discussion on segmentation,edge detection is by far the most common approach for detecting meaningful discounties in gray level.Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not at all ideal step edges. Instead they are normally affected by one or several of the following effects:1.focal blur caused by a finite depth-of-field and finite point spread function; 2.penumbral blur caused by shadows created by light sources of non-zero radius; 3.shading at a smooth object edge; 4.local specularities or interreflections in the vicinity of object edges.A typical edge might for instance be the border between a block of red color and a block of yellow. In contrast a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there maytherefore usually be one edge on each side of the line.To illustrate why edge detection is not a trivial task, let us consider the problem of detecting edges in the following one-dimensional signal. 
Here, we may intuitively say that there should be an edge between the 4th and 5th pixels.If the intensity difference were smaller between the 4th and the 5th pixels and if the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges.Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.There are many methods for edge detection, but most of them can be grouped into two categories,search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian of the zero-crossings of a non-linear differential expression, as will be described in the section on differential edge detection following below. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also noise reduction).The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold, to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise, and also to picking out irrelevant features from the image. Conversely a high threshold may miss subtle edges, or result in fragmented edges.If the edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression however, the edge curves are thin by definition and the edge pixels can be linked into edge polygon by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding off the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.A commonly used approach to handle the problem of appropriate thresholds for thresholding is by using thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. 
Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to be in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without meaning that every noisy pixel in the image is marked down as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image.

Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient.

We can come to the conclusion that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is "significant" or not is to use a threshold. Thus we define a point in an image as being an edge point if its two-dimensional first-order derivative is greater than a specified threshold; a set of such points that are connected according to a predefined criterion of connectedness is by definition an edge. The term edge segment generally is used if the edge is short in relation to the dimensions of the image. A key problem in segmentation is to assemble edge segments into longer edges. An alternate definition, if we elect to use the second derivative, is simply to define the edge points in an image as the zero crossings of its second derivative. The definition of an edge in this case is the same as above. It is important to note that these definitions do not guarantee success in finding edges in an image. They simply give us a formalism to look for them. First-order derivatives in an image are computed using the gradient. Second-order derivatives are obtained using the Laplacian.

Digital Image Processing and Edge Detection (translation)

Digital image processing: Interest in digital image processing methods stems from two principal application areas: the improvement of pictorial information for human interpretation, and the processing of image data for storage, transmission, and representation for autonomous machine perception.
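Returning to the edge-detection discussion above, here is a compact gradient-magnitude detector with hysteresis thresholding in Python/NumPy + SciPy. It is a sketch, not a full Canny implementation: non-maximum suppression is omitted, and the low/high thresholds and the smoothing sigma are image-dependent choices.

```python
import numpy as np
from scipy import ndimage

def hysteresis_edges(gray, low, high, sigma=1.0):
    """Smooth, take the Sobel gradient magnitude, then keep weak edge
    pixels only if they belong to a connected group that also contains
    at least one strong pixel (thresholding with hysteresis)."""
    smoothed = ndimage.gaussian_filter(gray.astype(np.float64), sigma)
    gx = ndimage.sobel(smoothed, axis=1)          # horizontal derivative
    gy = ndimage.sobel(smoothed, axis=0)          # vertical derivative
    magnitude = np.hypot(gx, gy)

    strong = magnitude >= high                    # definitely edges
    weak = magnitude >= low                       # possibly edges
    labels, num = ndimage.label(weak)             # connected weak regions
    keep = np.zeros(num + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True        # regions touching a strong pixel
    keep[0] = False                               # background label stays off
    return keep[labels]
```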

Computer Vision Terminology: Chinese-English Glossary

Common人工智能Artificial Intelligence认知科学与神经科学Cognitive Science and Neuroscience 图像处理Image Processing计算机图形学Computer graphics模式识别Pattern Recognized图像表示Image Representation立体视觉与三维重建Stereo Vision and 3D Reconstruction 物体(目标)识别Object Recognition运动检测与跟踪Motion Detection and Tracking边缘edge边缘检测detection区域region图像分割segmentation轮廓与剪影contour and silhouette纹理texture纹理特征提取feature extraction颜色color局部特征local features or blob尺度scale摄像机标定Camera Calibration立体匹配stereo matching图像配准Image Registration特征匹配features matching物体识别Object Recognition人工标注Ground-truth自动标注Automatic Annotation运动检测与跟踪Motion Detection and Tracking背景剪除Background Subtraction背景模型与更新background modeling and update运动跟踪Motion Tracking多目标跟踪multi-target tracking颜色空间color space色调Hue色饱和度Saturation明度Value颜色不变性Color Constancy(人类视觉具有颜色不变性)照明illumination反射模型Reflectance Model明暗分析Shading Analysis成像几何学与成像物理学Imaging Geometry and Physics 全像摄像机Omnidirectional Camera激光扫描仪Laser Scanner透视投影Perspective projection正交投影Orthopedic projection表面方向半球Hemisphere of Directions立体角solid angle透视缩小效应foreshortening辐射度radiance辐照度irradiance亮度intensity漫反射表面、Lambertian(朗伯)表面diffuse surface 镜面Specular Surfaces漫反射率diffuse reflectance明暗模型Shading Models环境光照ambient illumination互反射interreflection反射图Reflectance Map纹理分析Texture Analysis元素elements基元primitives纹理分类texture classification从纹理中恢复图像shape from texture纹理合成synthetic图形绘制graph rendering图像压缩image compression统计方法statistical methods结构方法structural methods基于模型的方法model based methods分形fractal自相关性函数autocorrelation function熵entropy能量energy对比度contrast均匀度homogeneity相关性correlation上下文约束contextual constraintsGibbs随机场吉布斯随机场边缘检测、跟踪、连接Detection、Tracking、LinkingLoG边缘检测算法(墨西哥草帽算子)LoG=Laplacian of Gaussian 霍夫变化Hough Transform链码chain codeB-样条B-spline有理B-样条Rational B-spline非均匀有理B-样条Non-Uniform Rational B-Spline控制点control points节点knot points基函数basis function控制点权值weights曲线拟合curve fitting内插interpolation逼近approximation回归Regression主动轮廓Active Contour Model or Snake 图像二值化Image thresholding连通成分connected component数学形态学mathematical morphology 结构元structuring elements膨胀Dilation腐蚀Erosion开运算opening闭运算closing聚类clustering分裂合并方法split-and-merge区域邻接图region adjacency graphs四叉树quad tree区域生长Region Growing过分割over-segmentation分水岭watered金字塔pyramid亚采样sub-sampling尺度空间Scale Space局部特征Local Features背景混淆clutter遮挡occlusion角点corners强纹理区域strongly textured areas 二阶矩阵Second moment matrix 视觉词袋bag-of-visual-words类内差异intra-class variability类间相似性inter-class similarity生成学习Generative learning判别学习discriminative learning 人脸检测Face detection弱分类器weak learners集成分类器ensemble classifier被动测距传感passive sensing多视点Multiple Views稠密深度图dense depth稀疏深度图sparse depth视差disparity外极epipolar外极几何Epipolor Geometry校正Rectification归一化相关NCC Normalized Cross Correlation平方差的和SSD Sum of Squared Differences绝对值差的和SAD Sum of Absolute Difference俯仰角pitch偏航角yaw扭转角twist高斯混合模型Gaussian Mixture Model运动场motion field光流optical flow贝叶斯跟踪Bayesian tracking粒子滤波Particle Filters颜色直方图color histogram尺度不变特征转换SIFT scale invariant feature transform 孔径问题Aperture problemAAberration 像差Accessory 附件Accessory Shoes 附件插座、热靴Achromatic 消色差的Active 主动的、有源的Acutance 锐度Acute-matte 磨砂毛玻璃Adapter 适配器Advance system 输片系统AE Lock(AEL) 自动曝光锁定AF(Autofocus) 自动聚焦AF Illuminator AF照明器AF spotbeam projector AF照明器Alkaline 碱性Ambient light 环境光Amplification factor 放大倍率Angle finder 弯角取景器Angle of view 视角Anti-Red-eye 防红眼Aperture 光圈Aperture priority 光圈优先APO(APOchromat) 复消色差APZ(Advanced Program zoom) 高级程序变焦Arc 弧形ASA(American Standards Association) 美国标准协会Astigmatism 像散Auto bracket 自动包围Auto composition 自动构图Auto exposure 自动曝光Auto exposure bracketing 自动包围曝光Auto film advance 自动进片Auto flash 自动闪光Auto 
loading 自动装片Auto multi-program 自动多程序Auto rewind 自动退片Auto wind 自动卷片Auto zoom 自动变焦Automatic exposure(AE) 自动曝光Automation 自动化Auxiliary 辅助BBack 机背Back light 逆光、背光Back light compensation 逆光补偿Background 背景Balance contrast 反差平衡Bar code system 条形码系统Barrel distortion 桶形畸变BAse-Stored Image Sensor (BASIS) 基存储影像传感器Battery check 电池检测Battery holder 电池手柄Bayonet 卡口Bellows 皮腔Blue filter 蓝色滤光镜Body-integral 机身一体化Bridge camera 桥梁相机Brightness control 亮度控制Built in 内置Bulb B 门Button 按钮CCable release 快门线Camera 照相机Camera shake 相机抖动Cap 盖子Caption 贺辞、祝辞、字幕Card 卡Cartridges 暗盒Case 机套CCD(Charge Coupled Device) 电荷耦合器件CdS cell 硫化镉元件Center spot 中空滤光镜Center weighted averaging 中央重点加权平均Chromatic Aberration 色差Circle of confusion 弥散圆Close-up 近摄Coated 镀膜Compact camera 袖珍相机Composition 构图Compound lens 复合透镜Computer 计算机Contact 触点Continuous advance 连续进片Continuous autofocus 连续自动聚焦Contrast 反差、对比Convetor 转换器Coreless 无线圈Correction 校正Coupler 耦合器Coverage 覆盖范围CPU(Central Processing Unit) 中央处理器Creative expansion card 艺术创作软件卡Cross 交叉Curtain 帘幕Customized function 用户自选功能DData back 数据机背Data panel 数据面板Dedicated flash 专用闪光灯Definition 清晰度Delay 延迟、延时Depth of field 景深Depth of field preview 景深预测Detection 检测Diaphragm 光阑Diffuse 柔光Diffusers 柔光镜DIN (Deutsche Industrische Normen) 德国工业标准Diopter 屈光度Dispersion 色散Display 显示Distortion 畸变Double exposure 双重曝光Double ring zoom 双环式变焦镜头Dreams filter 梦幻滤光镜Drive mode 驱动方式Duration of flash 闪光持续时间DX-code DX编码EED(Extra low Dispersion) 超低色散Electro selective pattern(ESP) 电子选择模式EOS(Electronic Optical System) 电子光学系统Ergonomic 人体工程学EV(Exposure value) 曝光值Evaluative metering 综合评价测光Expert 专家、专业Exposure 曝光Exposure adjustment 曝光调整Exposure compensation 曝光补偿Exposure memory 曝光记忆Exposure mode 曝光方式Exposure value(EV) 曝光值Extension tube 近摄接圈Extension ring 近摄接圈External metering 外测光Extra wide angle lens 超广角镜头Eye-level fixed 眼平固定Eye-start 眼启动Eyepiece 目镜Eyesight correction lenses 视力校正镜FField curvature 像场弯曲Fill in 填充(式)Film 胶卷(片)Film speed 胶卷感光度Film transport 输片、过片Filter 滤光镜Finder 取景器First curtain 前帘、第一帘幕Fish eye lens 鱼眼镜头Flare 耀斑、眩光Flash 闪光灯、闪光Flash range 闪光范围Flash ready 闪光灯充电完毕Flexible program 柔性程序Focal length 焦距Focal plane 焦点平面Focus 焦点Focus area 聚焦区域Focus hold 焦点锁定Focus lock 焦点锁定Focus prediction 焦点预测Focus priority 焦点优先Focus screen 聚焦屏Focus tracking 焦点跟踪Focusing 聚焦、对焦、调焦Focusing stages 聚焦级数Fog filter 雾化滤光镜Foreground 前景Frame 张数、帧Freeze 冻结、凝固Fresnel lens 菲涅尔透镜、环状透镜Frontground 前景Fuzzy logic 模糊逻辑GGlare 眩光GN(Guide Number) 闪光指数GPD(Gallium Photo Diode) 稼光电二极管Graduated 渐变HHalf frame 半幅Halfway 半程Hand grip 手柄High eye point 远视点、高眼点High key 高调Highlight 高光、高亮Highlight control 高光控制High speed 高速Honeycomb metering 蜂巢式测光Horizontal 水平Hot shoe 热靴、附件插座Hybrid camera 混合相机Hyper manual 超手动Hyper program 超程序Hyperfocal 超焦距IIC(Integrated Circuit) 集成电路Illumination angle 照明角度Illuminator 照明器Image control 影像控制Image size lock 影像放大倍率锁定Infinity 无限远、无穷远Infra-red(IR) 红外线Instant return 瞬回式Integrated 集成Intelligence 智能化Intelligent power zoom 智能化电动变焦Interactive function 交互式功能Interchangeable 可更换Internal focusing 内调焦Interval shooting 间隔拍摄ISO(International Standard Association) 国际标准化组织JJIS(Japanese Industrial Standards)日本工业标准LLandscape 风景Latitude 宽容度LCD data panel LCD数据面板LCD(Liquid Crystal Display) 液晶显示LED(Light Emitting Diode) 发光二极管Lens 镜头、透镜Lens cap 镜头盖Lens hood 镜头遮光罩Lens release 镜头释放钮Lithium battery 锂电池Lock 闭锁、锁定Low key 低调Low light 低亮度、低光LSI(Large Scale Integrated) 大规模集成MMacro 微距、巨像Magnification 放大倍率Main switch 主开关Manual 手动Manual exposure 手动曝光Manual focusing 手动聚焦Matrix metering 矩阵式测光Maximum 最大Metered manual 测光手动Metering 测光Micro prism 微棱Minimum 最小Mirage 倒影镜Mirror 反光镜Mirror box 
反光镜箱Mirror lens 折反射镜头Module 模块Monitor 监视、监视器Monopod 独脚架Motor 电动机、马达Mount 卡口MTF (Modulation Transfer Function 调制传递函数Multi beam 多束Multi control 多重控制Multi-dimensional 多维Multi-exposure 多重曝光Multi-image 多重影Multi-mode 多模式Multi-pattern 多区、多分区、多模式Multi-program 多程序Multi sensor 多传感器、多感光元件Multi spot metering 多点测光Multi task 多任务NNegative 负片Neutral 中性Neutral density filter 中灰密度滤光镜Ni-Cd battery 镍铬(可充电)电池OOff camera 离机Off center 偏离中心OTF(Off The Film) 偏离胶卷平面One ring zoom 单环式变焦镜头One touch 单环式Orange filter 橙色滤光镜Over exposure 曝光过度PPanning 摇拍Panorama 全景Parallel 平行Parallax 平行视差Partial metering 局部测光Passive 被动的、无源的Pastels filter 水粉滤光镜PC(Perspective Control) 透视控制Pentaprism 五棱镜Perspective 透视的Phase detection 相位检测Photography 摄影Pincushion distortion 枕形畸变Plane of focus 焦点平面Point of view 视点Polarizing 偏振、偏光Polarizer 偏振镜Portrait 人像、肖像Power 电源、功率、电动Power focus 电动聚焦Power zoom 电动变焦Predictive 预测Predictive focus control 预测焦点控制Preflash 预闪Professional 专业的Program 程序Program back 程序机背Program flash 程序闪光Program reset 程序复位Program shift 程序偏移Programmed Image Control (PIC) 程序化影像控制QQuartz data back 石英数据机背RRainbows filter 彩虹滤光镜Range finder 测距取景器Release priority 释放优先Rear curtain 后帘Reciprocity failure 倒易律失效Reciprocity Law 倒易律Recompose 重新构图Red eye 红眼Red eye reduction 红眼减少Reflector 反射器、反光板Reflex 反光Remote control terminal 快门线插孔Remote cord 遥控线、快门线Resolution 分辨率Reversal films 反转胶片Rewind 退卷Ring flash 环形闪光灯ROM(Read Only Memory) 只读存储器Rotating zoom 旋转式变焦镜头RTF(Retractable TTL Flash) 可收缩TTL闪光灯SSecond curtain 后帘、第二帘幕Secondary Imaged Registration(SIR) 辅助影像重合Segment 段、区Selection 选择Self-timer 自拍机Sensitivity 灵敏度Sensitivity range 灵敏度范围Sensor 传感器Separator lens 分离镜片Sepia filter 褐色滤光镜Sequence zoom shooting 顺序变焦拍摄Sequential shoot 顺序拍摄Servo autofocus 伺服自动聚焦Setting 设置Shadow 阴影、暗位Shadow control 阴影控制Sharpness 清晰度Shift 偏移、移动Shutter 快门Shutter curtain 快门帘幕Shutter priority 快门优先Shutter release 快门释放Shutter speed 快门速度Shutter speed priority 快门速度优先Silhouette 剪影Single frame advance 单张进片Single shot autofocus 单次自动聚焦Skylight filter 天光滤光镜Slide film 幻灯胶片Slow speed synchronization 慢速同步SLD(Super Lower Dispersion) 超低色散SLR(Single Lens Reflex) 单镜头反光照相机SMC(Super Multi Coated) 超级多层镀膜Soft focus 柔焦、柔光SP(Super Performance) 超级性能SPC(Silicon Photo Cell) 硅光电池SPD(Silicon Photo Dioxide) 硅光电二极管Speedlight 闪光灯、闪光管Split image 裂像Sport 体育、运动Spot metering 点测光Standard 标准Standard lens 标准镜头Starburst 星光镜Stop 档Synchronization 同步TTele converter 增距镜、望远变换器Telephoto lens 长焦距镜头Trailing-shutter curtain 后帘同步Trap focus 陷阱聚焦Tripod 三脚架TS(Tilt and Shift) 倾斜及偏移TTL flash TTL闪光TTL flash metering TTL闪光测光TTL(Through The Lens) 通过镜头、镜后Two touch 双环UUD(Ultra-low Dispersion) 超低色散Ultra wide 超阔、超广Ultrasonic 超声波UV(Ultra-Violet) 紫外线Under exposure 曝光不足VVari-colour 变色Var-program 变程序Variable speed 变速Vertical 垂直Vertical traverse 纵走式View finder 取景器WWarm tone 暖色调Wide angle lens 广角镜头Wide view 广角预视、宽区预视Wildlife 野生动物Wireless remote 无线遥控World time 世界时间XX-sync X-同步ZZoom 变焦Zoom lens 变焦镜头Zoom clip 变焦剪裁Zoom effect 变焦效果OtherTTL 镜后测光NTTL 非镜后测光UM 无机内测光,手动测光MM 机内测光,但需手动设定AP 光圈优先SP 快门优先PR 程序暴光ANCILLARY DEVICES 辅助产品BACKPLANES 底板CABLES AND CONNECTORS 连线及连接器ENCLOSURES 围圈FACTORY AUTOMATION 工厂自动化POWER SUPPLIES 电源APPLICATION-SPECIFIC SOFTWARE 应用软件INDUSTRIAL-INSPECTION SOFTWARE 工业检测软件MEDICAL-IMAGING SOFTWARE 医药图象软件SCIENTIFIC-ANALYSIS SOFTWARE 科学分析软件SEMICONDUCTOR-INSPECTION SOFTWARE 半导体检测软件CAMERAS 相机AREA-ARRAY CAMERAS 面阵相机CAMERA LINK CAMERAS CAMERA-LINK相机CCD CAMERAS-COLOR ccd彩色相机CCD CAMERAS COOLED ccoled型ccd相机CHARGE-INJECTION-DEVICE CAMERAS 充电相机CMOS CAMERAS cmos相机DIGITAL-OUTPUT CAMERAS 数码相机FIREWIRE(1394) CAMERAS 1394接口相机HIGH-SPEED VIDEO CAMERAS 高速摄象机INFRARED 
CAMERAS 红外相机LINESCAN CAMERAS 行扫描相机LOW-LIGHT-LEVEL CAMERAS 暗光相机MULTISPECTRAL CAMERAS 多光谱相机SMART CAMERAS 微型相机TIME-DELAY-AND-INTEGRATION CAMERAS 时间延迟集成相机USB CAMERAS usb接口相机VIDEO CAMERAS 摄象机DIGITIZERS 数字转换器MEASUREMENT DIGITIZERS 数字测量器MOTION-CAPTURE DIGITIZERS 数字运动捕捉器DISPLAYS 显示器CATHODE-RAY TUBES(CRTs) 阴极摄像管INDUSTRIAL DISPLAYS 工业用型显示器LIQUID-CRYSTAL DISPLAYS 液晶显示器ILLUMINATION SYSTEMS 光源系统BACKLIGHTING DEVICES 背光源FIBEROPTIC ILLUMINATION SYSTEMS 光纤照明系统FLUORESCENT ILLUMINATION SYSTEMS荧光照明系统INFRARED LIGHTING 红外照明LED LIGHTING led照明STRUCTURED LIGHTING 结构化照明ULTRAVIOLET ILLUMINATION SYSTEMS 紫外照明系统WHITE-LIGHT ILLUMINATION SYSTEMS 白光照明系统XENON ILLUMINATION SYSTEMS 氙气照明系统IMAGE-PROCESSING SYSTEMS 图象处理系统AUTOMATION/ROBOTICS 自动化/机器人技术DIGITAL IMAGING SYSTEMS 数字图象系统DOCUMENT-IMAGING SYSTEMS 数据图象系统GUIDANCE/TRACKING SYSTEMS 制导/跟踪系统INFRARED IMAGING SYSTEMS 红外图象系统INSPECTION/NONDESTRUCTIVE TESTING SYSTEMS 检测/非破坏性测试系统INSTRUMENTATION SYSTEMS 测试设备系统INTELLIGENT TRANSPORTATION SYSTEMS 智能交通系统MEDICAL DIAGNOSTICS SYSTEMS 医疗诊断系统METROLOGY/MEASUREMENT/GAUGING SYSTEMS 测绘系统MICROSCOPY SYSTEMS 微观系统MOTION-ANALYSIS SYSTEMS 运动分析系统OPTICAL-CHARACTER-RECOGNITION/OPTICAL-CHARACTER-VERIFICATION SYSTEMS 光学文字识别系统PROCESS-CONTROL SYSTEMS 处理控制系统QUALITY-ASSURANCE SYSTEMS 高保真系统REMOTE SENSING SYSTEMS 遥感系统WEB-SCANNING SYSTEMS 网状扫描系统IMAGE-PROCESSING TOOLKITS 图象处理工具包COMPILERS 编译器DATA-ACQUISITION TOOLKITS 数据采集工具套件DEVELOPMENT TOOLS 开发工具DIGITAL-SIGNAL-PROCESSOR(DSP) DEVELOPMENT TOOLKITS 数字信号处理开发工具套件REAL-TIME OPERATING SYSTEMS(RTOSs) 实时操作系统WINDOWS 窗口IMAGE SOURCES 图象资源FLASHLAMPS 闪光灯FLUORESCENT SOURCES 荧光源LASERS 激光器LIGHT-EMITTING DIODES(LEDs) 发光二极管STROBE ILLUMINATION 闪光照明TUNGSTEN LAMPS 钨灯ULTRAVIOLET LAMPS 紫外灯WHITE-LIGHT SOURCES 白光灯XENON LAMPS 氙气灯X-RAY SOURCES x射线源IMAGE-STORAGE DEVICES 图象存储器HARD DRIVES 硬盘设备OPTICAL STORAGE DEVICES 光存储设备RAID STORAGE DEVICES RAID存储设备(廉价磁盘冗余阵列设备)INTEGRATED CIRCUITS 综合电路ASICS 专用集成电路ANALOG-TO-DIGITAL CONVERTERS 模数转换器COMMUNICATIONS CONTROLLERS 通信控制器DIGITAL-SIGNAL PROCESSORS 数字信号处理器DIGITAL-TO-ANALOG CONVERTERS 数模转换器DISPLAY CONROLLERS 显示器控制器FIELD-PROGRAMMABLE GATE 现场可编程门阵列ARRAYS 阵列GRAPHICS-DISPLAY CONTROLLERS 图形显示控制器IMAGE-PROCESSING ICs 图象处理芯片MIXED-SIGNAL ICs 混合信号芯片VIDEO-PROCESSING ICs 视频处理芯片LENSES 镜头CAMERA LENSES 相机镜头ENLARGING LENSES 放大镜HIGH-RESOLUTION LENSES 高分辨率镜头IMAGE-SCANNING LENSES 图象扫描镜头PROJECTION LENSES 聚光透镜TELECENTRIC LENSES 望远镜VIDEO LENSES 摄象机镜头MONITORS 显示器CATHODE-RAY-TUBE(CRT) MONITORS, COLOR crt彩色监视器CATHODE-RAY-TUBE(CRT) MONITORS, MONOCHROME 单色crt监视器LIQUID-CRYSTAL-DISPLAY(LED) MONITORS lcd监视器。

CCD图像处理中英文对照外文翻译文献


附录1 翻译部分

Raw CCD images are exceptional but not perfect. Due to the digital nature of the data, many of the imperfections can be compensated for or calibrated out of the final image through digital image processing.

Composition of a Raw CCD Image
A raw CCD image consists of the following signal components:
IMAGE SIGNAL - The signal from the source. Electrons are generated from the actual source photons.
BIAS SIGNAL - Initial signal already on the CCD before the exposure is taken. This signal is due to biasing the CCD offset slightly above zero A/D counts (ADU).
THERMAL SIGNAL - Signal (dark-current thermal electrons) due to the thermal activity of the semiconductor. Thermal signal is reduced by cooling the CCD to low temperature.

Sources of Noise
CCD images are susceptible to the following sources of noise:
PHOTON NOISE - Random fluctuations in the photon signal of the source. The rate at which photons are received is not constant.
THERMAL NOISE - Statistical fluctuations in the generation of thermal signal. The rate at which electrons are produced in the semiconductor substrate due to thermal effects is not constant.
READOUT NOISE - Errors in reading the signal, generally dominated by the on-chip amplifier.
QUANTIZATION NOISE - Errors introduced in the A/D conversion process.
SENSITIVITY VARIATION - Sensitivity variations from photosite to photosite on the CCD detector or across the detector. Modern CCDs are uniform to better than 1% between neighboring photosites and to better than 10% across the entire surface.

Noise Corrections
REDUCING NOISE - Readout noise and quantization noise are limited by the construction of the CCD camera and cannot be improved upon by the user. Thermal noise, however, can be reduced by cooling the CCD (temperature regulation). The sensitivity variations can be removed by proper flat fielding.
CORRECTING FOR THE BIAS AND THERMAL SIGNALS - The bias and thermal signals can be subtracted out from the raw image by taking what is called a dark exposure. The dark exposure is a measure of the bias signal and thermal signal and may simply be subtracted from the raw image.
FLAT FIELDING - A record of the photosite-to-photosite sensitivity variations can be obtained by taking an exposure of a uniformly lit "flat field". These variations can then be divided out of the raw image to produce an image essentially free from this source of error. Any length of exposure will do, but ideally one which saturates the pixels to the 50% or 75% level is best.

The Final Processed Image
The final processed image, which removes unwanted signals and reduces noise as best we can, is computed as follows:
Final Processed Image = (Raw - Dark) / Flat
All of the digital image processing functions described above can be accomplished by using the CCDOPS software furnished with each SBIG imaging camera. The steps to accomplish them are described in the Operating Manual furnished with each SBIG imaging camera. At SBIG we offer our technical support to help you with questions on how to improve your images.

HOW TO SELECT THE CORRECT CCD IMAGING CAMERA FOR YOUR TELESCOPE
When new customers contact SBIG we discuss their imaging camera application. We try to get an idea of their interests. We have found this method is an effective way of insuring that our customers get the right imaging camera for their purposes. Some of the questions we ask are as follows:
What type of telescope do you presently own?
Having this information allows us to match the CCD imaging Camera's parameters, pixel size and field of view to your telescope. We can also help you interface the CCD imaging camera's automatic guiding functions to your telescope.Are you a MAC or PC user? Since our software supports both of these platforms we can insure that you receive the correct software. We can also answer questions about any unique functions in one or the other. We can send you a demonstration copy of the appropriate software for your review.Do you have a telescope drive base with an autoguider port? Do you want to operate from a remote computer? Companies like Software Bisque fully support our products with telescope control and imaging camera software.Do you want to take photographic quality images of deep space objects, image planets, or perform wide field searches for near earth asteroids or supernovas? In learning about your interests we can better guide you to the optimum CCD pixel size and imaging area for the application.Do you want to make photometric measurements of variable stars or determine precise asteroid positions? From this information we can recommend a CCD imaging camera model and explain how to use the specific analysis functions to perform these tasks. We can help you characterize your imaging camera by furnishing additional technical data.Do you want to automatically guide long uninterrupted astrophotographs? As the company with the most experience in CCD autoguiding we can help you install and operate a CCD autoguider on your telescope. The Model STV has a worldwide reputation for accurate guiding on dim guide stars. No matter what type of telescope you own we can help you correctly interface it and get it working properly.SBIG CCD IMAGING CAMERASThe SBIG product line consists of a series of thermoelectrically cooled CCD imaging cameras designed for a wide range of applications ranging from astronomy, tricolor imaging, color photometry, spectroscopy, medical imaging, densitometry, to chemiluminescence and epifluorescence imaging, etc. This catalog includes information on astronomical imaging cameras, scientific imaging cameras,autoguiding, and accessories. We have tried to arrange the catalog so that it is easy to compare products by specifications and performance. The tables in the product section compare some of the basic characteristics on each CCD imaging camera in our product line. You will find a more detailed set of specifications with each individual imaging camera description.HOW TO GET STARTED USING YOUR CCD IMAGING CAMERAIt all starts with the software. If there's any company well known for its outstanding imaging camera software it's SBIG. Our CCDOPS Operating Software is well known for its user oriented camera control features and stability. CCDOPS is available for free download from our web site along with sample images that you can display and analyze using the image processing and analysis functions of the CCDOPS software. You can become thoroughly familiar with how our imaging cameras work and the capabilities of the software before you purchase an imaging camera. We also include CCDSoftV5 and TheSky from Software Bisque with most of our cameras at no additional charge. Macintosh users receive a free copy of EquinoX planetarium and camera control software for the MacOS-X operating system. No other manufacturer offers better software than you get with SBIG cameras. 
New customers receiving their CCD imaging camera should first read the installation section in their CCDOPS Operating Manual. Once you have read that section you should have no difficulty installing CCDOPS software on your hard drive, connecting the USB cable from the imaging camera to your computer, initiating the imaging camera and within minutes start taking your first CCD images. Many of our customers are amazed at how easy it is to start taking images. Additional information can be found by reading the image processing sections of the CCDOPS and CCDSoftV5 Manuals. This information allows you to progress to more advanced features such as automatic dark frame subtraction of images, focusing the imaging camera, viewing, analyzing and processing the images on the monitor, co-adding images, taking automatic sequences of images, photometric and astrometric measurements, etc.A PERSONAL TOUCH FROM SBIGAt SBIG we have had much success with a program in which we continually review customer's images sent to us on disk or via e-mail. We can often determine the cause of a problem from actual images sent in by a user. We review the images and contacteach customer personally. Images displaying poor telescope tracking, improper imaging camera focus, oversaturated images, etc., are typical initial problems. We will help you quickly learn how to improve your images. You can be assured of personal technical support when you need it. The customer support program has furnished SBIG with a large collection of remarkable images. Many customers have had their images published in SBIG catalogs, ads, and various astronomy magazines. We welcome the chance to review your images and hope you will take advantage of our trained staff to help you improve your images.TRACK AND ACCUMULATE (U.S. Patent # 5,365,269)Using an innovative engineering approach SBIG developed an imaging camera function called Track & Accumulate (TRACCUM) in which multiple images are automatically registered to create a single long exposure. Since the long exposure consists of short images the total combined exposure significantly improves resolution by reducing the cumulative telescope periodic error. In the TRACCUM mode each image is shifted to correct guiding errors and added to the image buffer. In this mode the telescope does not need to be adjusted. The great sensitivity of the CCD virtually guarantees that there will be a usable guide star within the field of view. This feature provides dramatic improvement in resolution by reducing the effect of periodic error and allowing unattended hour long exposures. SBIG has been granted U.S. Patent # 5,365,269 for Track & Accumulate.DUAL CCD SELF-GUIDING (U.S. Patent # 5,525,793)In 1994 with the introduction of Models ST-7 and ST-8 CCD Imaging Cameras which incorporate two separate CCD detectors, SBIG was able to accomplish the goal of introducing a truly self-guided CCD imaging camera. The ability to select guide stars with a separate CCD through the full telescope aperture is equivalent to having a thermoelectrically cooled CCD autoguider in your imaging camera. This feature has been expanded to all dual sensor ST series cameras (ST-7/8/9/10/2000) and all STL series cameras (STL-1001/1301/4020/6303/11000). One CCD is used for guiding and the other for collecting the image. They are mounted in close proximity, both focused at the same plane, allowing the imaging CCD to integrate while the PC uses the guiding CCD to correct the telescope. 
Using a separate CCD for guiding allows 100% of the primary CCD's active area to be used to collect the image. The telescope correction rate and limiting guide star magnitude can be independentlyselected. Tests at SBIG indicated that 95% of the time a star bright enough for guiding will be found on a TC237 tracking CCD without moving the telescope, using an f/6.3 telescope. The self-guiding function quickly established itself as the easiest and most accurate method for guiding CCD images. Placing both detectors in close proximity at the same focal plane insures the best possible guiding. Many of the long integrated exposures now being published are taken with this self-guiding method, producing very high resolution images of deep space objects. SBIG has been granted U.S. Patent # 5,525,793 for the dual CCD Self-Guiding function.COMPUTER PLATFORMSSBIG has been unique in its support of both PC and Macintosh platforms for our cameras. The imaging cameras in this catalog communicate with the host computer through standard serial or USB ports depending on the specific models. Since there are no external plug-in boards required with our imaging camera systems we encourage users to operate with the new family of high resolution graphics laptop computers. We furnish Operating Software for you to install on your host computer. Once the software is installed and communication with the imaging camera is set up complete control of all of the imaging camera functions is through the host computer keyboard. The recommended minimum requirements for memory and video graphics are as shown below.GENERAL CONCLUSION(1) of this item from the theoretical analysis of the use of CCD technology for real-time non-contact measuring the diameter of the feasibility of measuring it is fast, efficient, accurate, high degree of automation, off-production time and so on.(2) projects to test the use of CCD technology to achieve real-time, online non-contact measurement, developed by the CCD-line non-contact diameter measurement system has a significant technology advanced and practical application of significance. (3) from the theoretical and experimental project on the summary of the utilization of CCD technology developed by SCM PV systems improve the measurement accuracy of several ways: improving crystal, a multi-pixel CCD devices and take full advantage of CCD-like device Face width.译文原料CCD图像是例外,但并非十全十美。
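The dark-subtraction and flat-fielding calibration described above (Final Processed Image = (Raw - Dark)/Flat) can be sketched in a few lines of MATLAB. This is only an illustration, not the CCDOPS implementation: the file names are hypothetical, the frames are assumed to be grayscale images taken at matching exposure time and temperature, and the flat is normalized to unit mean so that the division preserves the image scale.

raw  = double(imread('raw_frame.tif'));    % light frame: image + bias + thermal signal
dark = double(imread('dark_frame.tif'));   % dark frame of matching exposure and temperature
flat = double(imread('flat_frame.tif'));   % exposure of a uniformly lit flat field

flatCorr = flat - dark;                    % remove the bias/thermal signal from the flat as well
flatCorr = flatCorr / mean(flatCorr(:));   % normalize to unit mean so division preserves scale

% Final Processed Image = (Raw - Dark) / Flat; guard against zero or negative flat pixels
processed = (raw - dark) ./ max(flatCorr, eps);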
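Track & Accumulate, described earlier in this section, registers a sequence of short exposures by shifting each frame to undo the measured guiding error and adding it to an accumulation buffer. The sketch below only illustrates that shift-and-add idea; the file names and whole-pixel offsets are hypothetical, and SBIG's patented implementation is not reproduced here.

% Hypothetical short exposures and the measured guide-star drift of each
% frame relative to the first one (whole-pixel [row col] offsets).
files   = {'short_1.tif', 'short_2.tif', 'short_3.tif'};
offsets = [0 0; 2 -1; -1 3];

accum = zeros(size(imread(files{1})));
for k = 1:numel(files)
    frame = double(imread(files{k}));
    % Shift each frame back by its drift, then add it to the accumulation buffer.
    % circshift wraps at the borders; a real implementation would crop the edges.
    accum = accum + circshift(frame, -offsets(k, :));
end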

图像处理 常见英文名词解释


Algebraic operation 代数运算;一种图像处理运算,包括两幅图像对应像素的和、差、积、商。

Aliasing 走样(混叠);当图像像素间距和图像细节相比太大时产生的一种人工痕迹。

Arc 弧;图的一部分;表示一曲线一段的相连的像素集合。

Binary image 二值图像;只有两级灰度的数字图像(通常为0和1,黑和白)。

Blur 模糊;由于散焦、低通滤波、摄像机运动等引起的图像清晰度的下降。

Border 边框;一幅图像的首、末行或列。

Boundary chain code 边界链码;定义一个物体边界的方向序列。

Boundary pixel 边界像素;至少和一个背景像素相邻接的内部像素(比较:外部像素、内部像素)。

Boundary tracking 边界跟踪;一种图像分割技术,通过沿弧从一个像素顺序探索到下一个像素将弧检测出。

Brightness 亮度;和图像一个点相关的值,表示从该点的物体发射或反射的光的量。

Change detection 变化检测;通过相减等操作将两幅配准图像的像素加以比较从而检测出其中物体差别的技术。

Class 类;见模式或类。

Closed curve 封闭曲线;一条首尾点处于同一位置的曲线。

Cluster 聚类、集群;在空间(如在特征空间)中位置接近的点的集合。

Cluster analysis 聚类分析;在空间中对聚类的检测,度量和描述。

Concave 凹的;物体是凹的是指至少存在两个物体内部的点,其连线不能完全包含在物体内部(反义词为凸)。

Connected 连通的。

Contour encoding 轮廓编码;对具有均匀灰度的区域,只将其边界进行编码的一种图像压缩技术。

Contrast 对比度;物体平均亮度(或灰度)与其周围背景的差别程度。

Contrast stretch 对比度扩展;一种线性的灰度变换。

Convex 凸的;物体是凸的是指连接物体内部任意两点的直线均落在物体内部。
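上面若干术语描述的是简单的逐像素运算。下面的MATLAB示例仅作说明,演示其中三个概念——对比度扩展(线性灰度变换)、阈值化得到的二值图像,以及通过两幅配准图像相减实现的变化检测;文件名和阈值均为假设值,输入假定为灰度图像。

I = im2double(imread('scene.png'));        % hypothetical grayscale input image

% Contrast stretch: a linear gray-level transform mapping [lo, hi] onto [0, 1]
lo = 0.2; hi = 0.8;
stretched = min(max((I - lo) / (hi - lo), 0), 1);

% Binary image: thresholding yields a two-level (0/1) image
bw = stretched > 0.5;

% Change detection: subtract two registered frames and keep the large differences
J = im2double(imread('scene_later.png'));  % hypothetical second, registered frame
changed = abs(J - I) > 0.1;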


数字图像处理英文翻译(Matlab帮助信息简介) xxxxxxxxx xxx

Introduction
MATLAB is a high-level technical computing language and interactive environment for algorithm development, data visualization, data analysis, and numeric computation. Using the MATLAB product, you can solve technical computing problems faster than with traditional programming languages, such as C, C++, and Fortran.
You can use MATLAB in a wide range of applications, including signal and image processing, communications, control design, test and measurement, financial modeling and analysis, and computational biology. Add-on toolboxes (collections of special-purpose MATLAB functions, available separately) extend the MATLAB environment to solve particular classes of problems in these application areas.
The MATLAB system consists of these main parts:

Desktop Tools and Development Environment
This part of MATLAB is the set of tools and facilities that help you use and become more productive with MATLAB functions and files. Many of these tools are graphical user interfaces. It includes: the MATLAB desktop and Command Window, an editor and debugger, a code analyzer, and browsers for viewing help, the workspace, and folders.

Mathematical Function Library
This library is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

The Language
The MATLAB language is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick programs you do not intend to reuse. You can also do "programming in the large" to create complex application programs intended for reuse.

Graphics
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.

External Interfaces
The external interfaces library allows you to write C/C++ and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), for calling MATLAB as a computational engine, and for reading and writing MAT-files.

MATLAB provides a number of features for documenting and sharing your work. You can integrate your MATLAB code with other languages and applications, and distribute your MATLAB algorithms and applications. Features include:
High-level language for technical computing
Development environment for managing code, files, and data
Interactive tools for iterative exploration, design, and problem solving
Mathematical functions for linear algebra, statistics, Fourier analysis, filtering, optimization, and numerical integration
2-D and 3-D graphics functions for visualizing data
Tools for building custom graphical user interfaces
Functions for integrating MATLAB based algorithms with external applications and languages, such as C, C++, Fortran, Java™, COM, and Microsoft® Excel

The basic data structure in MATLAB is the array, an ordered set of real or complex elements. This object is naturally suited to the representation of images, real-valued ordered sets of color or intensity data. MATLAB stores most images as two-dimensional arrays (i.e., matrices), in which each element of the matrix corresponds to a single pixel in the displayed image. (Pixel is derived from picture element and usually denotes a single dot on a computer display.) For example, an image composed of 200 rows and 300 columns of different colored dots would be stored in MATLAB as a 200-by-300 matrix. Some images, such as truecolor images, require a three-dimensional array, where the first plane in the third dimension represents the red pixel intensities, the second plane represents the green pixel intensities, and the third plane represents the blue pixel intensities. This convention makes working with images in MATLAB similar to working with any other type of matrix data, and makes the full power of MATLAB available for image processing applications.

The Image Processing Toolbox software is a collection of functions that extend the capability of the MATLAB numeric computing environment. The toolbox supports a wide range of image processing operations, including:
Spatial image transformations
Morphological operations
Neighborhood and block operations
Linear filtering and filter design
Transforms
Image analysis and enhancement
Image registration
Deblurring
Region of interest operations

Many of the toolbox functions are MATLAB files with a series of MATLAB statements that implement specialized image processing algorithms. You can view the MATLAB code for these functions using the statement:
type function_name
You can extend the capabilities of the toolbox by writing your own files, or by using the toolbox in combination with other toolboxes, such as the Signal Processing Toolbox™ software and the Wavelet Toolbox™ software.

Configuration Notes
To determine if the Image Processing Toolbox software is installed on your system, type this command at the MATLAB prompt:
ver
When you enter this command, MATLAB displays information about the version of MATLAB you are running, including a list of all toolboxes installed on your system and their version numbers.
For information about installing the toolbox, see the installation guide. For the most up-to-date information about system requirements, see the system requirements page, available in the products area at the MathWorks Web site ().

Related Products
MathWorks provides several products that are relevant to the kinds of tasks you can perform with the Image Processing Toolbox software and that extend the capabilities of MATLAB. For information about these related products, see /products/image/related.html.

Compilability
The Image Processing Toolbox software is compilable with the MATLAB Compiler except for the following functions that launch GUIs: cpselect, implay, imtool.
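As a small illustration of the toolbox in use (and of the median filtering discussed elsewhere in this collection), the following sketch assumes the Image Processing Toolbox is installed. cameraman.tif is a sample image shipped with the toolbox; the 5% noise density and the 3-by-3 window are arbitrary choices for the example.

I = imread('cameraman.tif');              % 8-bit grayscale sample image from the toolbox
J = imnoise(I, 'salt & pepper', 0.05);    % corrupt 5% of the pixels with impulse noise
K = medfilt2(J, [3 3]);                   % 3x3 median filter suppresses the impulses

figure;
imshowpair(J, K, 'montage');              % noisy image on the left, filtered on the right
title('Salt & pepper noise (left) and 3x3 median filter output (right)');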
