Edge Feature Extraction Based on Digital Image Processing Techniques (Translation)


Foreign-Language Translation and References for Digital Image Processing

(The document contains the English original and its Chinese translation.)

Original: Application of Digital Image Processing in the Measurement of Casting Surface Roughness

Abstract: This paper presents a surface image acquisition system based on digital image processing technology. The image acquired by a CCD camera is pre-processed through image editing, histogram equalization, binarization, and feature parameter extraction to measure casting surface roughness. A three-dimensional evaluation method built on the extracted feature parameters yields the evaluation parameters and the casting surface roughness. An automatic detection interface for casting surface roughness is compiled in MATLAB, which provides a solid foundation for online, fast detection of casting surface roughness based on image processing technology.

Keywords: casting surface; roughness measurement; image processing; feature parameters

I. INTRODUCTION

Nowadays the demand for machining quality and low surface roughness keeps increasing, and machine-vision inspection based on image processing has become one of the hotspots of measurement technology in the mechanical industry thanks to advantages such as non-contact operation, high speed, adequate precision, and strong immunity to interference [1,2]. Because cast surfaces follow no regular law and their roughness covers a wide range, detection parameters related only to the height direction cannot meet the current requirements of developing photoelectric technology; the horizontal spacing of the roughness also requires quantitative representation. Therefore, with a three-dimensional evaluation system for casting surface roughness as the goal [3,4], a surface roughness measurement based on image processing technology is presented. Image preprocessing is carried out through image enhancement and binarization, and a three-dimensional roughness evaluation based on the feature parameters is performed. An automatic detection interface for casting surface roughness is compiled in MATLAB, providing a solid foundation for online, fast detection of casting surface roughness.

II. CASTING SURFACE IMAGE ACQUISITION SYSTEM

The acquisition system consists of a sample carrier, a microscope, a CCD camera, an image acquisition card, and a computer. The sample carrier holds the castings under test. Depending on the experimental requirements, either a fixed carrier whose sample position is adjusted manually or cured specimens on a movable sampling stage can be selected. Figure 1 shows the whole procedure. First, the casting under test is placed against as uniformly illuminated a background as possible; then, after adjusting the optical lens and setting the CCD camera's resolution and exposure time, the pictures collected by the CCD are saved to computer memory through the acquisition card. Image preprocessing and feature-value extraction on the casting surface follow in the corresponding software, and finally the detection result is output.

III. CASTING SURFACE IMAGE PROCESSING

Casting surface image processing includes image editing, equalization, image enhancement, binarization, and so on. The original and clipped images of the measured casting are given in Figure 2, where a) is the original image and b) the clipped image.

A. Image Enhancement

Image enhancement is a class of processing that highlights certain image information according to specific needs while weakening or removing unwanted information at the same time [5]. To obtain a clearer contour of the casting surface, equalization of the image, namely correction of the image histogram, should be carried out before segmentation. Figure 3 shows the original grayscale image, the equalized image, and their histograms. As shown in the figure, after gray-level equalization each gray level of the histogram holds roughly the same number of pixels and the histogram becomes flatter; the corrected image appears clearer and its contrast is enhanced.

Fig. 2 Casting surface image
Fig. 3 Equalized image

B. Image Segmentation

Image segmentation is, in essence, pixel classification, and thresholding is a very important segmentation technique. The optimal threshold is obtained through the instruction thresh = graythresh(I). Figure 4 shows the binarized image: the black areas mark the portion of the contour with gray values below the threshold (0.43137), while the white areas mark gray values above it. The shadows and shading that appear in the bright region may be caused by noise or by surface depressions.

Fig. 4 Binarized image

IV. ROUGHNESS PARAMETER EXTRACTION

To detect surface roughness it is necessary to extract roughness feature parameters. The histogram mean and variance characterize the texture scale of the surface contour; the peak area per unit surface reflects the roughness of the workpiece in the horizontal direction; and the kurtosis parameter characterizes roughness in both the vertical and horizontal directions. This paper therefore adopts the histogram mean and variance, the peak area per unit surface, and the kurtosis as the roughness evaluation parameters for the three-dimensional assessment of castings. An image preprocessing and feature extraction interface is compiled in MATLAB; Figure 5 shows the surface roughness detection interface. The software performs the preprocessing of the clipped casting image, including filtering, enhancement, segmentation, and histogram equalization, and also displays the extracted surface roughness evaluation parameters.

Fig. 5 Automatic roughness measurement interface

V. CONCLUSIONS

This paper investigates a casting surface roughness measurement method based on digital image processing technology. The method comprises image acquisition, image enhancement, binarization, and extraction of the characteristic roughness parameters of the casting surface. An interface for image preprocessing and roughness parameter extraction is compiled in MATLAB, providing a solid foundation for online, fast detection of casting surface roughness.

REFERENCES
[1] Xu Deyan, Lin Zunqi. The optical surface roughness research progress and direction [J]. Optical Instruments, 1996, 18(1): 32-37.
[2] Wang Yujing. Turning surface roughness based on image measurement [D]. Harbin: Harbin University of Science and Technology.
[3] Bradley C. Automated surface roughness measurement [J]. The International Journal of Advanced Manufacturing Technology, 2000, 16(9): 668-674.
[4] Li Chenggui, Li Xingshan, Qiang Xifu. 3D surface topography measurement method [J]. Aerospace Measurement Technology, 2000, 20(4): 2-10.
[5] Liu He. Digital Image Processing and Application [M]. China Electric Power Press, 2005.

Translation (Chinese, truncated): Application of Digital Image Processing in the Measurement of Casting Surface Roughness. Abstract: This paper presents a surface image acquisition system based on digital image processing technology.
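The processing chain described above (histogram equalization, Otsu thresholding via graythresh, then histogram-based evaluation parameters) could be sketched in MATLAB roughly as follows. The file name, the use of rgb2gray, and the exact feature definitions (in particular the "peak area per unit surface") are assumptions for illustration, not the paper's actual code:

    I  = rgb2gray(imread('casting.png'));  % hypothetical casting image (RGB assumed)
    J  = histeq(I);                        % histogram equalization (Section III.A)
    t  = graythresh(J);                    % Otsu threshold, e.g. 0.43137 in the paper
    BW = imbinarize(J, t);                 % binarization (Section III.B)
    h  = imhist(J) / numel(J);             % normalized gray-level histogram
    g  = 0:255;
    m  = sum(g' .* h);                     % histogram mean         (Section IV)
    v  = sum(((g' - m).^2) .* h);          % histogram variance     (Section IV)
    k  = sum(((g' - m).^4) .* h) / v^2;    % kurtosis               (Section IV)
    peakArea = nnz(BW) / numel(BW);        % white-pixel fraction as an assumed
                                           % stand-in for peak area per unit surface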

Image Edge Detection and Extraction Design Based on CCS

Authors: Wang Nian, Ying Jun. Source: Modern Electronics Technique, 2012, No. 10.

Abstract: MATLAB's digital image processing falls short when it comes to engineering deployment.

To address this shortcoming, the hardware simulation platform CCS is used, and the maximum gray-gradient method and the Sobel edge detection algorithm are applied to detect digital images, achieving edge extraction.

Experiments show that the Sobel edge detection algorithm gives fairly ideal edge detection and extraction results on digital images, and the design provides a hardware implementation route for image processing.

Keywords: edge detection; gradient; Sobel operator; CCS. CLC number: ; Document code: A; Article ID:

Edges carry the most basic information about an object and are an important attribute for extracting object features in image segmentation, recognition, and analysis.

In image analysis, working from the edges greatly reduces the amount of information to be processed while retaining the shape information of the objects in the image.

Edges outline the target object, making it apparent to an observer at a glance, and they carry rich intrinsic information (direction, shape, and so on).

In essence, an image edge is the manifestation of discontinuities in local image features (abrupt changes in gray level, color, texture, and so on); it marks the end of one region and the beginning of another [1].

An edge necessarily exists between two adjacent regions with different gray values; edges are the result of discontinuous gray values. Such discontinuity can usually be detected conveniently with derivatives, so edge detection generally uses the extrema of an image's first derivative or the zero crossings of its second derivative as the basic evidence for judging edge points [2].

Classical edge detection constructs an edge detection operator over a small neighborhood of each pixel in the original image [3].

Most edge detection designs are based on MATLAB software, which leaves some distance to engineering practice.

To address this shortcoming, a digital image edge detection design based on CCS (Code Composer Studio) is proposed, providing a hardware design method for image processing.

1 Sobel Algorithm Description

The Sobel operator is a gradient operator; the first derivative of a digital image is based on various approximations of the two-dimensional gradient []. The algorithm convolves two 3×3 templates with windows of the same size in the selected two-dimensional image to obtain the image gradient, then compares the gradient magnitude with a preset threshold: if the result exceeds the threshold, the pixel is an edge point, and the gray value of the pixel at the center of the 3×3 window is replaced with 255.
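As a minimal MATLAB sketch of the scheme just described (the threshold value here is an assumed choice; a CCS deployment would run the same arithmetic as C code on the DSP):

    I  = double(imread('cameraman.tif'));   % built-in grayscale test image
    Gx = [-1 0 1; -2 0 2; -1 0 1];          % horizontal 3x3 Sobel template
    Gy = Gx';                               % vertical 3x3 Sobel template
    gx = conv2(I, Gx, 'same');              % convolve with each template
    gy = conv2(I, Gy, 'same');
    g  = sqrt(gx.^2 + gy.^2);               % gradient magnitude per pixel
    T  = 128;                               % preset threshold (assumed value)
    E  = uint8(255 * (g > T));              % center pixel set to 255 where g > T
    imshow(E)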

Foreign-Language Translation: MATLAB Application in Image Edge Detection

English source text: MATLAB Application in Image Edge Detection

Since MathWorks brought MATLAB to market in 1984, more than a decade of development has made it internationally recognized as leading technical application software. MATLAB is not only a direct and efficient computer language but also a scientific computing platform: it provides the core mathematical and advanced graphics tools for data analysis, data visualization, and algorithm and application development. With the more than 500 mathematical and engineering functions it provides, engineers and scientific workers can complete their computations by developing or programming in its integrated environment.

MATLAB is highly open and adaptable. While keeping the kernel unchanged, MathWorks releases toolboxes for different application subjects; image processing, signal processing, wavelet, neural network, and communications toolboxes, among others, have been released, serving the research work of different disciplines.

The MATLAB Image Processing Toolbox is composed of a series of functions supporting image processing operations: geometric operations and region operations; linear filtering and filter design; transforms (such as the DCT); image analysis and enhancement; binary image manipulation, and so on. The toolbox functions can be divided into the following categories: image display; image file input and output; geometric operations; pixel statistics; image analysis and enhancement; image filtering; linear 2-D filter design; image transforms; neighborhood and block operations; binary image operations; color mapping and color-space conversion; image types and type conversion; toolbox parameter acquisition and setting.

1. Edge detection

Computer image processing serves two purposes: producing images more suitable for human observation and identification, and enabling automatic recognition and understanding of images by computer. For either purpose, a key step is decomposing the image information that contains a variety of scenery. The end result of decomposition is a set of smallest components with certain characteristics, known as image primitives. Relative to the whole image, these primitives are easier to process rapidly.

Image features are the attributes of the image field that can be used as identifying marks. They can be divided into two types, statistical features and visual features. Statistical features are artificially defined and obtained through transforms, such as the image histogram, moments, and spectrum; visual features are natural features that can be perceived directly by human vision, such as the brightness of a region, texture, or contour. The process of turning these two kinds of features into a series of meaningful targets or regions is called image segmentation.

The edge is a basic feature of an image: an edge is the set of pixels whose gray levels exhibit a step change or a roof-like change. Edges exist between target and background, target and target, region and region, primitive and primitive. Therefore edges are the most important characteristic that image segmentation depends on, an important source of information for texture features, and the basis of shape features; the extraction of image texture and shape in turn often depends on segmentation. Image edge extraction is also the basis of image matching, because the edge marks position, is not sensitive to changes in the original gray levels, and can be used as feature points for matching.

The edge of an image is reflected in gray-level discontinuity. The classic edge extraction method examines the gray-level change of each pixel within a neighborhood and uses the change rule of the first- or second-order directional derivative near the edge to detect edges with a simple method; this is called the local-operator edge detection method.

Edges can be divided into two types: (1) step edges, where the pixel gray values on the two sides differ significantly; (2) roof edges, located at the turning point where the gray value changes from increasing to decreasing. For a step edge, the second-order directional derivative has a zero crossing at the edge; for a roof edge, the second-order directional derivative takes an extreme value at the edge.

If a pixel falls on the boundary of some object in the image, its neighborhood becomes a zone of gray-level change. The two most useful features of this change are the rate of change and its direction, expressed as the magnitude and direction of the gradient vector. Edge detection operators examine the gray-level neighborhood of every pixel and quantify the rate of change, including determining its direction, mostly using convolution with directional-derivative masks.

Digital image processing techniques have been widely applied in the biomedical field. Using computer image processing and analysis to detect and recognize cancer cells can help doctors diagnose tumors. Identification of cancer cells requires quantitative results that the human eye can hardly produce accurately, whereas computer image processing for analyzing and identifying microscopic images has made great progress. In recent years, researchers at home and abroad have proposed many theories and methods for detecting and identifying cancer cells in medical images, which are of great significance and practical value for the diagnosis of cancer.

Cell edge detection is the basis for counting cells and for computing and analyzing their area, roundness, color, shape, and chromaticity; its results directly affect the analysis and diagnosis of disease. Classical edge detection operators such as the Sobel operator and the Laplacian operator detect edges from the gray-level changes in each pixel's neighborhood. Although these operators are simple and fast, they are sensitive to noise, produce isolated pixels or short sections of discontinuous edges, and suffer from overlapping adjacent cell edges. By contrast, a method combining optimal-threshold segmentation with contour extraction obtains the optimal threshold for image segmentation by an iterative algorithm, digs out the pixels inside the cell, and leaves the cell edge as the remaining part of the image, changing the processing order of the traditional edge detection algorithm. Programmed in MATLAB, the experimental results show that this method can effectively suppress the influence of noise while objectively and correctly selecting the edge detection threshold, detecting cell edges precisely.

2. Edge detection in MATLAB

The MATLAB image processing toolkit defines the edge() function for detecting the edges of gray images.

(1) BW = edge(I, 'method') returns a binary image BW of the same size as I, in which elements of 1 mark edge points and elements of 0 mark non-edge points. method is one of the following strings:
1) 'sobel': the default; detects edges with the Sobel derivative approximation and returns edges where the gradient is maximal;
2) 'prewitt': detects edges with the Prewitt derivative approximation and returns edges where the gradient is maximal;
3) 'roberts': detects edges with the Roberts derivative approximation and returns edges where the gradient is maximal;
4) 'log': filters I with a Laplacian-of-Gaussian filter and detects edges by looking for zero crossings;
5) 'zerocross': filters I with a specified filter and detects edges by looking for zero crossings.
(2) BW = edge(I, 'method', thresh) specifies the sensitivity threshold thresh; all edges weaker than thresh are ignored.
(3) BW = edge(I, 'method', thresh, direction) specifies, for the 'sobel' and 'prewitt' methods, the direction as a string: 'horizontal' for horizontal edges, 'vertical' for vertical edges, and 'both' for both directions (the default).
(4) BW = edge(I, 'log', thresh, sigma) specifies the standard deviation sigma.
(5) [BW, thresh] = edge(...): the function in fact has multiple return values (BW and thresh), but because the bracketed list can be regarded as one matrix, it may be thought of as returning a single parameter, which also shows the unity and superiority of MATLAB's matrix concept.

Last word: MATLAB has strong image processing functionality and provides simple function calls that realize many classic image processing methods. Not only in image edge detection but also in transform-domain processing, image enhancement, mathematical morphology, and other areas of study, MATLAB can greatly improve efficiency when rapidly testing new ideas.

Chinese translation (begins): MATLAB Application in Image Edge Detection. Since MathWorks brought MATLAB to market in 1984, after more than ten years of development it has become internationally recognized as outstanding technical application software.
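The call patterns above can be exercised directly; a short usage sketch follows (the image and parameter values are arbitrary choices, the signatures as documented for the Image Processing Toolbox):

    I = imread('coins.png');                   % built-in grayscale sample image
    BW1 = edge(I, 'sobel');                    % (1) default method
    BW2 = edge(I, 'prewitt', 0.04);            % (2) explicit sensitivity threshold
    BW3 = edge(I, 'sobel', [], 'horizontal');  % (3) horizontal edges only
    BW4 = edge(I, 'log', [], 2.0);             % (4) LoG with sigma = 2.0
    [BW5, t] = edge(I, 'canny');               % (5) also return the threshold used
    montage({BW1, BW2, BW3, BW4, BW5})         % compare the five results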

Edge Detection and Feature Extraction Methods in Image Processing

Image processing is one of the key technologies in the field of computer vision, and edge detection and feature extraction are important basic operations within it.

Edge detection helps us analyze the contours and structures in an image, while feature extraction helps with recognizing and classifying images.

This article introduces common methods for edge detection and feature extraction.

1. Edge detection methods

Edge detection refers to techniques for finding the edges or transitions between different regions in an image.

Commonly used edge detection methods include the Sobel, Prewitt, and Canny operators.

The Sobel operator is a gradient-based edge detection algorithm: by convolving the image it obtains the gradient values in the horizontal and vertical directions and computes the strength and direction of the edges.

The Prewitt operator is also a gradient-based edge detection algorithm, similar to the Sobel operator, but the weights in its convolution kernels are set slightly differently.

The Prewitt operator can likewise extract the edge information of an image.

The Canny operator is a widely used, classical edge detection algorithm.

It combines gradient information with non-maximum suppression, detects the edges in an image effectively, and suppresses the noise in the image at the same time.

In practice these edge detection algorithms are often used in combination; choosing the right one depends on the task requirements and the characteristics of the image.

2. Feature extraction methods

Feature extraction refers to extracting representative features from the original image for subsequent tasks such as image analysis, recognition, or classification.

Commonly used feature extraction methods include texture features, shape features, and color features.

Texture features describe the texture information in an image; common ones include the gray-level co-occurrence matrix (GLCM), local binary patterns (LBP), and the histogram of oriented gradients (HOG).

GLCM describes texture by the statistical distribution of gray-level transitions between pairs of pixels; LBP extracts texture by comparing each pixel's gray value with those of its neighboring pixels; HOG extracts texture by computing the direction and magnitude of the gradients in the image.

These texture features can be used for tasks such as image classification and object detection; a short sketch of all three follows below.
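A hedged MATLAB sketch of the three descriptors just named (graycomatrix and graycoprops belong to the Image Processing Toolbox; extractLBPFeatures and extractHOGFeatures are assumed available from the Computer Vision Toolbox):

    I     = imread('cameraman.tif');                   % built-in grayscale image
    glcm  = graycomatrix(I, 'Offset', [0 1]);          % GLCM: pixel pairs at distance 1
    stats = graycoprops(glcm, {'Contrast','Homogeneity'});  % statistics from the GLCM
    lbp   = extractLBPFeatures(I);                     % uniform LBP histogram
    hog   = extractHOGFeatures(I, 'CellSize', [8 8]);  % gradient-orientation histogram
    feat  = [stats.Contrast, stats.Homogeneity, lbp, hog];  % combined feature vector

A classifier for image categories or object detection can then be trained on such feature vectors gathered from labeled images.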

Shape features describe the shape of objects in an image; common ones include boundary descriptors (BDS), the scale-invariant feature transform (SIFT), and speeded-up robust features (SURF).

BDS describes shape by extracting feature points along an object's boundary, while SIFT and SURF describe shape by extracting keypoints and their descriptors from the image.

Edge Detection: English Original with Chinese Translation

Digital Image Processing and Edge Detection

Digital Image Processing

Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception.

An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images, including ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.

There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.

Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing here. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.

The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.

Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy, and each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in the figure below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to the other.

Fig. 1

Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.

Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is increasing the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.

Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is also used as the basis for extracting features of interest in an image.

Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.

Fig. 2

Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.

Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. This material begins a transition from processes that output images to processes that output image attributes.

Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.

Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region. Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.

Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.

So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.

Edge Detection

Edge detection is a term in image processing and computer vision, particularly in the areas of feature detection and feature extraction, referring to algorithms that aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level.

Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not ideal step edges at all. Instead they are normally affected by one or several of the following effects:
1. focal blur caused by a finite depth-of-field and finite point spread function;
2. penumbral blur caused by shadows created by light sources of non-zero radius;
3. shading at a smooth object edge;
4. local specularities or interreflections in the vicinity of object edges.

A typical edge might for instance be the border between a block of red color and a block of yellow. In contrast, a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there may therefore usually be one edge on each side of the line.

To illustrate why edge detection is not a trivial task, consider the problem of detecting edges in the following one-dimensional signal. Here, we may intuitively say that there should be an edge between the 4th and 5th pixels. If the intensity difference were smaller between the 4th and the 5th pixels and if the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges. Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.

There are many methods for edge detection, but most of them can be grouped into two categories, search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero crossings of the Laplacian or the zero crossings of a non-linear differential expression. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied (see also noise reduction). The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.

Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise and to picking out irrelevant features from the image. Conversely, a high threshold may miss subtle edges, or result in fragmented edges.

If the edge thresholding is applied to just the gradient-magnitude image, the resulting edges will in general be thick, and some type of edge-thinning post-processing is necessary. For edges detected with non-maximum suppression, however, the edge curves are thin by definition and the edge pixels can be linked into an edge polygon by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.

A commonly used approach to the problem of choosing appropriate thresholds is thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold. This approach makes the assumption that edges are likely to be in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without marking every noisy pixel in the image as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image; a small sketch of hysteresis follows this section.

Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero crossings in the second derivative captures local maxima in the gradient.

We can conclude that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is "significant" or not is to use a threshold. Thus we define a point in an image as an edge point if its two-dimensional first-order derivative is greater than a specified threshold; a set of such points that are connected according to a predefined criterion of connectedness is by definition an edge. The term edge segment is generally used if the edge is short in relation to the dimensions of the image; a key problem in segmentation is to assemble edge segments into longer edges. An alternative definition, if we elect to use the second derivative, is simply to define the edge points in an image as the zero crossings of its second derivative; the definition of an edge in this case is the same as above. It is important to note that these definitions do not guarantee success in finding edges in an image; they simply give us a formalism to look for them. First-order derivatives in an image are computed using the gradient; second-order derivatives are obtained using the Laplacian.

Chinese translation (begins): Digital Image Processing and Edge Detection. Research on digital image processing methods stems from two principal application areas: improving pictorial information for human analysis, and storing, transmitting, and displaying image data for autonomous machine understanding.
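A minimal MATLAB sketch of the hysteresis idea described above, using morphological reconstruction to keep only the weak-edge pixels connected to strong ones (a standard but assumed equivalence; the two threshold fractions are assumed values):

    I        = im2double(imread('coins.png'));
    [gx, gy] = imgradientxy(I);              % first-order derivatives
    mag      = hypot(gx, gy);                % gradient magnitude
    lowT     = 0.10 * max(mag(:));           % lower threshold (assumed)
    highT    = 0.25 * max(mag(:));           % upper threshold (assumed)
    weak     = mag > lowT;                   % candidate edge pixels
    strong   = mag > highT;                  % definite edge pixels
    edges    = imreconstruct(strong, weak);  % weak pixels reachable from strong ones
    imshow(edges)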

How to Use Digital Image Processing for Surveying Data Extraction and Analysis

Digital image processing refers to the use of computer technology to operate on and process digital images.

In recent years, with the rapid development of digital technology, digital image processing has been widely applied in many fields, one of which is the extraction and analysis of surveying and mapping data.

Surveying data extraction refers to extracting survey-related data from images.

In the past, traditional surveying work usually required manual measurement and drafting, which was time-consuming, labor-intensive, and error-prone.

With digital image processing technology, the required data can be extracted by analyzing and processing images of a specific area, which greatly improves the efficiency and accuracy of surveying work.

First, before data extraction, the digital image must be preprocessed.

Preprocessing includes steps such as denoising, enhancement, and geometric correction.

Denoising removes noise from the image, for example with filters, improving image quality to a certain extent.

Enhancement adjusts the brightness, contrast, and color of the image to make it clearer and more definite.

Geometric correction removes geometric distortion from the image so that its shape and scale match the actual measurement conditions.

Next, digital image processing techniques can be used for feature extraction.

Feature extraction refers to extracting from the image the feature information that is representative for surveying purposes.

In surveying data extraction, commonly used features include boundaries, corner points, and line segments.

Boundaries are the outlines of objects in the image and can be extracted with edge detection algorithms.

Corner points are the corner positions of objects in the image and can be extracted with corner detection algorithms.

Line segments are lines of a certain length in the image and can be extracted with straight-line detection algorithms.

By extracting these features, the morphology and structure of the surveying data can be analyzed further; a sketch of all three extractors follows below.
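A hedged MATLAB sketch of the three extractors just listed (edge, corner, and the Hough-transform functions are standard Image Processing Toolbox calls; the parameter values are assumed choices):

    I  = imread('gantrycrane.png');            % built-in RGB sample image
    G  = rgb2gray(I);
    BW = edge(G, 'canny');                     % 1) boundaries via edge detection
    C  = corner(G, 50);                        % 2) up to 50 Harris corner points
    [H, theta, rho] = hough(BW);               % 3) straight lines via Hough transform
    P  = houghpeaks(H, 10);                    % strongest 10 line candidates
    L  = houghlines(BW, theta, rho, P, 'MinLength', 30);  % keep segments >= 30 px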

In addition, digital image processing can be used to classify and recognize surveying data.

Classification means dividing the image data into different categories.

In surveying, commonly used classification methods include pixel-value thresholding and feature-based classification.

Thresholding sets a threshold and assigns the pixels above it and the pixels below it to separate classes.

Feature-based classification extracts feature information from the image and classifies it with machine-learning algorithms.

Through classification, different ground objects and targets can be extracted from the image, providing the basis for subsequent surveying analysis; both styles are sketched below.
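A minimal MATLAB sketch of the two classification styles above. The thresholds come from multithresh; the k-NN part uses toy feature vectors invented purely for illustration (fitcknn is assumed available from the Statistics and Machine Learning Toolbox):

    I = imread('coins.png');
    % (a) pixel-value thresholding into three classes
    levels  = multithresh(I, 2);              % two Otsu thresholds
    classes = imquantize(I, levels);          % label 1..3 per pixel
    % (b) feature-based classification with a k-NN model
    X = [20 1; 25 2; 200 9; 210 8];           % toy features [mean gray, texture]
    y = categorical({'background'; 'background'; 'object'; 'object'});
    mdl  = fitcknn(X, y, 'NumNeighbors', 1);
    pred = predict(mdl, [205 7]);             % classify a new feature vector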

Finally, digital image processing can be used for the measurement and analysis of surveying data.

Measurement refers to determining the size and position of a target object in the image.

By calibrating the relationship between the image and the target object, digital image processing can compute the size and position of the target object in the real world.

DIP Job Responsibilities

A DIP (Digital Image Processing) engineer is an applied specialist in digital image processing and computer vision. Their main responsibility is to apply digital image processing techniques, such as feature extraction, image enhancement, image segmentation, and image recognition, in fields such as medical imaging, remote sensing imagery, video, and computer vision.

The detailed responsibilities of a DIP engineer are as follows:

1. Design and implement digital image processing algorithms and complete all kinds of image processing tasks. While ensuring algorithm robustness and performance, carry out object detection, object tracking, image enhancement, image segmentation, 3-D reconstruction, and similar processing work through sensible parameter settings and algorithm optimization.

2. Research and develop digital image processing technology, track and try out new technology trends, and improve the efficiency and effectiveness of digital image processing.

3. Deploy and manage digital image processing hardware and software systems, and maintain and optimize them to keep the systems stable and free of failures and other safety risks.

4. Apply digital image processing techniques to analyze and process image data, effectively extract the feature information in it, and conduct extensive research.

5. Process large volumes of image data and use new technologies such as artificial intelligence and deep learning for data mining, providing data support.

6. Coordinate and communicate with other departments, discover new needs and new problems from the industry and the market, and expand the breadth and depth of applications while completing the core tasks.

7. Study, organize, and consult the literature on digital image processing to obtain cutting-edge technical information, deepen exchanges with colleagues in the field, and provide theoretical and practical support for professional development in academia and practice.

8. Report regularly or as needed on the progress of digital image processing work, summarize and analyze the problems encountered, propose improvements, and compile experimental data.

9. Communicate with other relevant departments to ensure the overall quality of image processing, supporting related application fields such as medicine, defense, logistics, security, and environmental protection, and moving toward broader social application.

Those are the detailed responsibilities of a DIP engineer. Besides a solid foundation in computer technology, DIP engineers need good innovation skills and logical thinking, a genuine interest in digital images and computer vision, and familiarity with the basic theory and applied techniques of modern computing; they are among the important technical talents of the modern science and technology sector.

How to Use Digital Image Processing Techniques for Ground Object Extraction

Overview: Digital image processing is the process of operating on and processing images with computer technology, and it can help us extract target ground objects from an image.

Ground object extraction is of great significance in fields such as land use, environmental research, and urban planning.

This article discusses how to use digital image processing techniques to extract ground objects.

1. Image preprocessing

Before ground object extraction, the original image must first be preprocessed.

Commonly used preprocessing steps include histogram equalization, noise removal, and edge enhancement.

Histogram equalization improves the contrast of the image, making the boundaries of ground objects clearer.

Noise removal can be carried out with filters; for example, a median filter effectively removes salt-and-pepper noise and Gaussian noise.

Edge enhancement can be achieved with edge detection algorithms such as the Sobel and Canny operators.

These preprocessing methods help us perform ground object extraction more effectively; a brief sketch of the chain follows.
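A brief MATLAB sketch of the preprocessing chain just described; the added noise level is an assumed value, included only to make the example self-contained:

    I = imread('pout.tif');                  % built-in low-contrast sample image
    J = imnoise(I, 'salt & pepper', 0.02);   % simulate noise (assumed level)
    J = medfilt2(J, [3 3]);                  % median filtering removes the specks
    J = histeq(J);                           % histogram equalization for contrast
    E = edge(J, 'sobel');                    % edge enhancement / detection
    montage({I, J, E})                       % compare original, cleaned, edges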

2. Color space conversion

In ground object extraction, the image often needs to be converted from the RGB color space to another color space, such as grayscale or HSV.

Grayscale keeps only the brightness information of the image and can effectively bring out the shape of ground objects.

The HSV color space separates color information from brightness information, which helps us classify ground objects better.

Converting the color space can improve the effectiveness of ground object extraction.

3. Threshold segmentation

Threshold segmentation is a commonly used method for ground object extraction.

A threshold is set, and the pixels in the image brighter than the threshold are classified as ground objects while those darker than it are classified as background.

The choice of threshold strongly affects the extraction result and usually has to be tuned by trial and error.

In addition, to further improve threshold segmentation, an adaptive thresholding algorithm can be used, which determines the threshold automatically from the local characteristics of the image and makes the extraction more accurate; both variants are sketched below.
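A hedged MATLAB sketch comparing a single global threshold with adaptive thresholding (the sensitivity value passed to adaptthresh is an assumed choice):

    I   = imread('rice.png');              % built-in, unevenly lit sample image
    BW1 = imbinarize(I, graythresh(I));    % one global Otsu threshold
    T   = adaptthresh(I, 0.5);             % per-pixel threshold from local statistics
    BW2 = imbinarize(I, T);                % adaptive segmentation
    montage({BW1, BW2})                    % adaptive copes better with uneven lighting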

4. Edge detection

Edge detection extracts the contour information in an image and can help us extract ground objects better.

Commonly used edge detection algorithms include the Sobel and Canny operators.

The Sobel operator extracts the horizontal and vertical edges in an image, while the Canny operator extracts the edges whose strength exceeds a certain level.

Edge detection helps us determine the shape and position of ground objects and thereby extract them better.

5. Morphological operations

Morphological operations are operations based on a pixel's surrounding neighborhood and can help us refine the extracted ground objects further, as in the sketch below.
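A minimal MATLAB sketch of morphological cleanup on a binary ground-object mask (the structuring-element radius is an assumed choice):

    I  = imread('rice.png');
    BW = imbinarize(I, adaptthresh(I, 0.5)); % binary mask from the previous step
    se = strel('disk', 2);                   % disk-shaped structuring element
    BW = imopen(BW, se);                     % opening removes small noise specks
    BW = imclose(BW, se);                    % closing fills small gaps in objects
    BW = imfill(BW, 'holes');                % fill enclosed holes inside objects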


Edge Feature Extraction Based on Digital Image Processing Techniques

Abstract

Edge detection is a basic and important subject in computer vision and image processing. This paper discusses several digital image processing techniques applied in edge feature extraction. Firstly, the wavelet transform is used to remove noise from the collected image. Secondly, several edge detection operators, namely differential edge detection, Log edge detection, Canny edge detection, and binary morphology, are analyzed, and, according to the simulation results, their advantages and disadvantages are compared. It is shown that the binary morphology operator obtains better edge features. Finally, in order to gain a clear and integral image profile, a method of closing the borderline is given. Experiments show that the edge detection method proposed in this paper is feasible.

Index Terms: edge detection, digital image processing, operator, wavelet analysis

I. INTRODUCTION

The edge is the set of those pixels whose gray levels have step changes or roof changes, and it exists between object and background, object and object, region and region, and element and element. An edge always lies between two neighboring areas with different gray levels; it is the result of the gray level being discontinuous. Edge detection is a method of image segmentation based on range discontinuity. Image edge detection is one of the fundamental topics of image processing and analysis, and also a kind of problem that cannot be resolved completely so far. When an image is acquired, factors such as projection, mixing, distortion, and noise are introduced. These factors blur and distort the image features, so it is very difficult to extract image features, and for the same reasons it is also difficult to detect edges. Methods for detecting and extracting image edges and outline characteristics have been a research hotspot in the field of image processing and analysis.

Edge feature extraction has been widely applied in many areas. This paper mainly discusses the advantages and disadvantages of several edge detection operators applied to cable insulation parameter measurement. In order to obtain a more legible image outline, the acquired image is first filtered and denoised; in the denoising process, the wavelet transform is used. Then different operators are applied to detect the edge, including the differential operator, the Log operator, the Canny operator, and the binary morphology operator. Finally the edge pixels of the image are connected using a borderline-closing method, and a clear and complete image outline is obtained.

II. IMAGE DENOISING

As we all know, actually gathered images contain noise from the processes of formation, transmission, reception, and processing. Noise deteriorates the quality of the image: it blurs the image and covers up many important features, which brings many difficulties to the analysis. Therefore, the main purpose of the pretreatment stage is to remove the noise of the image.

The traditional denoising method uses a low-pass or band-pass filter. Its shortcoming is that the signal is blurred when noise is removed; there is an irreconcilable contradiction between noise removal and edge preservation. Wavelet analysis, however, has been proved to be a powerful tool for image processing, because wavelet denoising filters the signal with band-pass filters of different frequencies. It removes the coefficients of those scales that mainly reflect the noise frequencies, then integrates the coefficients of every remaining scale for the inverse transform, so that noise can be suppressed well. Wavelet analysis can therefore be widely used in many aspects such as image compression and image denoising.

Fig. 1 The sketch of removing image noise with the wavelet transform

The basic process of denoising with the wavelet transform is shown in Fig. 1; its main steps are as follows:
1) The image is preprocessed (gray-scale adjustment, etc.).
2) Wavelet multi-scale decomposition is applied to the image.
3) In each scale, the wavelet coefficients belonging to noise are removed, and the remaining wavelet coefficients are retained and enhanced.
4) The enhanced, denoised image is obtained by the inverse wavelet transform.

The simulation of wavelet denoising in MATLAB is shown in Fig. 2 (original noisy image; image after median filtering; image after wavelet denoising).

Fig. 2 The comparison of two denoising methods

Compared with the traditional matched filter, the high-frequency components of the image are not destroyed when the wavelet transform is used for denoising. In addition, it has many advantages such as strong adaptive ability, fast computation, and complete reconstruction. So the signal-to-noise ratio of the image can be improved effectively by means of the wavelet transform.

III. EDGE DETECTION

The edge detection of a digital image is a very important foundation in the field of image analysis, including image segmentation, identification of the object region, and extraction of the region shape. Edge detection is essential in digital image processing, because the edge is the boundary between the target and the background, and only when the edge is obtained can the target and the background be differentiated.

The basic idea of edge detection is first to make the local edges of the image stand out with an edge enhancement operator, then to define the "edge intensity" of the pixels and extract the set of edge points by setting a threshold. But the detected borderline may break off as a result of noise and uneven image brightness. Thus edge detection contains the following two parts:
1) The edge point set is extracted using edge operators.
2) Some points are removed from the edge point set, a number of edge points are filled in, and the points obtained are connected into a line.

The commonly used operators are the differential, Log, and Canny operators and binary morphology.

A. Differential operator

The differential operator can make gray-level changes stand out. At points where the gray-level change is larger, the value calculated by the derivative operator is higher, so these differential values may be regarded as the relevant "edge intensity", and the set of edge points can be gathered by setting thresholds on the differential values.

The first derivative is the simplest differential coefficient. Suppose that the image is f(x, y); its first-order partial derivatives ∂f/∂x and ∂f/∂y represent the rate of change of the gray level f in the x and y directions. The rate of change of the gray level in a direction α is shown in equation (1):

∂f/∂α = (∂f/∂x) cos α + (∂f/∂y) sin α   (1)

Under continuous circumstances, the differential of the function is df = (∂f/∂x) dx + (∂f/∂y) dy. The directional derivative of the function has a maximum at a certain point, in the direction arctan[(∂f/∂y) / (∂f/∂x)], and the maximum of the directional derivative is [(∂f/∂x)² + (∂f/∂y)²]^(1/2). The vector with this direction and modulus is called the gradient of the function f, ∇f. So the gradient modulus operator is designed as in equation (2):

|∇f| = [(∂f/∂x)² + (∂f/∂y)²]^(1/2)   (2)

For a digital image, the gradient template operator is designed with finite differences over the pixel grid in place of the partial derivatives (equation (3)). The differential operator mostly includes the Roberts operator and the Sobel operator.

(1) Roberts operator

The Roberts operator is one of the simplest operators, which looks for edges using a local difference operator. Its effect is best for images with steep edges and low noise, but the borderline of the extracted image is quite thick when the Roberts operator is used, so the edge location is not very accurate. The Roberts operator is defined as in equation (4):

g(x, y) = {[f(x, y) − f(x+1, y+1)]² + [f(x+1, y) − f(x, y+1)]²}^(1/2)   (4)

In practice, the absolute-deviation algorithm is usually used to simplify equation (4); equations (5) and (6) give the reduction:

g(x, y) ≈ |f(x, y) − f(x+1, y+1)| + |f(x+1, y) − f(x, y+1)|   (5), (6)

The template of the Roberts operator is shown in Fig. 3.

Fig. 3 Roberts operator

(2) Sobel and Prewitt operators

To reduce the influence of noise when detecting edges, the Prewitt operator enlarges the edge detection template from 2×2 to 3×3 to compute the difference operator. The Prewitt operator not only detects edge points but also restrains noise. The Sobel operator functions similarly to the Prewitt operator, but the edge detected by the Sobel operator is wider.

Suppose the pixel values in the 3×3 sub-domain of the image are given (the neighborhood layout was a figure in the original), and let the horizontal and vertical differences over this neighborhood be computed. The Prewitt operator then combines the two directional differences, either as the maximum of their absolute values (equation (7)) or as the sum of their absolute values (equation (8)). The Prewitt operator templates are shown in Fig. 4.

Fig. 4 Prewitt operator

The Sobel operator handles images with gradually changing gray levels and considerable noise well. With the same differences weighted so that the center row and column count double, the Sobel operator follows as equation (9) or (10). The template of the Sobel operator is shown in Fig. 5.

Fig. 5 Sobel operator

The original image of the cable insulation layer and the edge detection drawing of the Sobel operator obtained by MATLAB simulation are shown in Fig. 6 and Fig. 7.

Fig. 6 The original image
Fig. 7 The edge detection drawing of the Sobel operator

From the simulation drawing in Fig. 7 we can see that the edge position is very accurate, and the effect of Sobel edge detection is very satisfying. In a word, the Sobel and Prewitt operators have a better effect on images whose gray level changes gradually and which contain more noise.

B. Log operator

The Log operator is a linear, time-invariant operator. It detects edge points by searching for the spots where the second-order differential coefficient of the image gray levels is zero. For a continuous function f(x, y), the Log operator at a point (x, y) is defined as in equation (11):

∇²G(x, y) = [(x² + y² − 2σ²) / σ⁴] exp[−(x² + y²) / (2σ²)]   (11)

The Log operator is the process of filtering and then differentiating the image. It determines the zero-crossing position of the filter output using the convolution of the rotationally symmetric Log template with the image. The Log operator's template is shown in Fig. 8.

Fig. 8 Log operator

In the detection process of the Log operator, the image is first pre-smoothed with a Gaussian low-pass filter, and then the steep edges in the image are found with the Log operator. Finally, binarization at zero gray level gives birth to closed, connected outlines and eliminates all internal spots. But double-pixel boundaries usually appear when the Log operator is used to detect edges, and the operator is very sensitive to noise. So the Log operator is often employed to judge whether edge pixels lie in the bright or the dark section of the image.

C. Canny operator

The Canny operator is a newer edge detection operator with good detection performance and wide application. Canny edge detection searches for the local maxima of the image gradient, and the gradient is computed from the derivative of a Gaussian filter. The Canny operator uses two thresholds to detect strong edges and weak edges respectively, and a weak edge is contained in the output only when it is connected to a strong edge. The theoretical basis of the Canny operator is shown in equations (12) to (15):

Gaussian: G(x, y) = exp[−(x² + y²) / (2σ²)]   (12)
Edge normal: n = ∇(G ∗ f) / |∇(G ∗ f)|   (13)
Edge strength: Gn = ∂(G ∗ f) / ∂n   (14)
Maximal strength: ∂²(G ∗ f) / ∂n² = 0   (15)

For a two-dimensional image, the Canny operator produces two pieces of information: the gradient direction and the gradient intensity of the border. The Canny operator actually convolves the image with templates of different directions and takes the dominant direction. From the viewpoint of positioning accuracy, the Canny operator is better than the other operators. This method is therefore not easily disturbed by noise, keeps a good balance between noise suppression and edge detection, and can detect true weak edges.

D. Binary morphology

Mathematical morphology is a newer method applied in image processing. The basic idea is to measure and extract the corresponding shape from the image with structuring elements of a stated form, so that image processing and analysis can be completed. Using mathematical morphology to detect the edge is better than using differential treatment, because it is not sensitive to noise and the extracted edge is relatively smooth. A binary image, also known as a black-and-white image, lets the object be easily identified against the image background. So we adopt the combination of binary images and mathematical morphology to detect edges; this is called binary morphology.

Suppose that the region is given in the form of a set A, whose border is β(A), and that B is an appropriate structuring element, symmetric around the origin. First we erode A with B, written A ⊖ B, where the erosion keeps the points whose translate of B lies inside A. The interior of the region is given by A ⊖ B, and the borderline is naturally what remains: β(A) = A − (A ⊖ B). The larger the structuring element, the wider the edge obtained will be.

E. Analysis of simulation results

In order to understand the advantages and disadvantages of these edge detection operators, we detect edges using each of the different operators. The simulation results are shown in Fig. 9 and Fig. 10.

Fig. 9 Detecting edges with binary morphology (panels: original image; binary image; edge extraction)
Fig. 10 Comparison of several edge detection algorithms (panels: original image; Roberts; Sobel; Prewitt; Canny; Log)

From the simulation results we can conclude that the effects of detecting edges with the Sobel operator after wavelet denoising, and with binary morphology directly, are better, so either of these two methods can be used. Based on the specific measurement errors, we finally choose the binary morphology method.

IV. BORDERLINE CLOSING

Although the image is denoised before edge detection, noise is still introduced during detection. When noise exists, the borderline obtained with a derivative algorithm usually breaks. In this situation the edge pixels need to be connected; we therefore introduce a method of closing the borderline using the magnitude and direction of the pixel gradient.

The basis for connecting edge pixels is that they have a definite similarity. Two pieces of information can be obtained when the image is processed with a gradient algorithm: the magnitude of the gradient and the direction of the gradient. Edge pixels can be connected according to the similarity of their gradients in these two respects. Specifically, if the pixel (s, t) is in the neighborhood of the pixel (x, y) and their gradient magnitudes and gradient directions satisfy conditions (16) and (17) respectively, then the pixel at (s, t) and the pixel at (x, y) can be connected:

| |∇f(x, y)| − |∇f(s, t)| | ≤ T   (16)
|θ(x, y) − θ(s, t)| < A   (17)

where T is the magnitude threshold and A the angle threshold. The closed boundary is obtained when all edge pixels have been judged and connected.

V. CONCLUSION

These edge detection operators can give a better edge effect under the circumstances of obvious edges and low noise. But the actually collected image carries a lot of noise, and much of that noise may be mistaken for edges. In order to solve this problem, the wavelet transform is used for denoising in this paper. The effect would be still better if the simulation images processed above were further processed through edge thinning and tracking. Although there are various edge detection methods in the field of image edge detection, certain disadvantages always exist; for example, restraining noise and keeping detail cannot achieve an optimal effect simultaneously. Hence a satisfactory result will be obtained if a suitable edge detection operator is chosen according to the specific situation in practice.
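As a rough MATLAB sketch of the binary-morphology edge extraction of Section III.D, i.e. the boundary as the set difference between a region and its erosion, followed by a simple closing of small breaks in the borderline (the structuring-element sizes are assumed choices, not the paper's):

    I  = imread('coins.png');                 % built-in sample image
    BW = imbinarize(I, graythresh(I));        % binary region A
    se = strel('square', 3);                  % structuring element B (assumed 3x3)
    Ae = imerode(BW, se);                     % erosion A (-) B gives the interior
    Eg = BW & ~Ae;                            % borderline: A minus its erosion
    Eg = imclose(Eg, strel('disk', 1));       % close small breaks in the borderline
    imshow(Eg)

A larger structuring element in the erosion step widens the extracted edge, matching the paper's observation.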
