Image Science Review: Foreign Literature and Translations


Image Recognition: Foreign Literature with Chinese-English Translation


Elastic image matching

Abstract
One fundamental problem in image recognition is to establish the resemblance of two images. This can be done by searching the best pixel-to-pixel mapping taking into account monotonicity and continuity constraints. We show that this problem is NP-complete by reduction from 3-SAT, thus giving evidence that the known exponential time algorithms are justified, but approximation algorithms or simplifications are necessary.
Keywords: Elastic image matching; Two-dimensional warping; NP-completeness

1. Introduction
In image recognition, a common problem is to match two given images, e.g. when comparing an observed image to given references. In that process, elastic image matching, two-dimensional (2D-)warping (Uchida and Sakoe, 1998) or similar types of invariant methods (Keysers et al., 2000) can be used. For this purpose, we can define cost functions depending on the distortion introduced in the matching and search for the best matching with respect to a given cost function. In this paper, we show that it is an algorithmically hard problem to decide whether a matching between two images exists with costs below a given threshold. We show that the problem image matching is NP-complete by means of a reduction from 3-SAT, which is a common method of demonstrating a problem to be intrinsically hard (Garey and Johnson, 1979). This result shows the inherent computational difficulties in this type of image comparison, while interestingly the same problem is solvable for 1D sequences in polynomial time, e.g. the dynamic time warping problem in speech recognition (see e.g. Ney et al., 1992). This has the following implications: researchers who are interested in an exact solution to this problem cannot hope to find a polynomial time algorithm, unless P=NP. Furthermore, one can conclude that exponential time algorithms as presented and extended by Uchida and Sakoe (1998, 1999a,b, 2000a,b) may be justified for some image matching applications. On the other hand, this shows that those interested in faster algorithms––e.g. for pattern recognition purposes––are right in searching for sub-optimal solutions. One method to do this is the restriction to local optimizations or linear approximations of global transformations as presented in (Keysers et al., 2000). Another possibility is to use heuristic approaches like simulated annealing or genetic algorithms to find an approximate solution. Furthermore, methods like beam search are promising candidates, as these are used successfully in speech recognition, although linguistic decoding is also an NP-complete problem (Casacuberta and de la Higuera, 1999).

2. Image matching
Among the varieties of matching algorithms, we choose the one presented by Uchida and Sakoe (1998) as a starting point to formalize the problem image matching. Let the images be given as (without loss of generality) square grids of size M×M with gray values (respectively node labels) from a finite alphabet Σ={1,…,G}. To define the problem, two distance functions are needed: one acting on gray values, d_g: Σ×Σ→N, measuring the match in gray values, and one acting on displacement differences, d_d: Z×Z→N, measuring the distortion introduced by the matching. For these distance functions we assume that they are monotonous functions (computable in polynomial time) of the commonly used squared Euclidean distance, i.e. d_g(g1,g2)=f1(||g1−g2||²) and d_d(z)=f2(||z||²) with f1, f2 monotonously increasing.
Now we call the following optimization problem the image matching problem (let μ={1,…,M}).
Instance: The pair (A, B) of two images A and B of size M×M.
Solution: A mapping function f: μ×μ → μ×μ.
Measure:
c(A,B,f) = Σ_{(i,j)∈μ×μ} d_g(A_{i,j}, B_{f(i,j)})
         + Σ_{(i,j)∈{1,…,M−1}×μ} d_d( f((i,j)+(1,0)) − f(i,j) − (1,0) )
         + Σ_{(i,j)∈μ×{1,…,M−1}} d_d( f((i,j)+(0,1)) − f(i,j) − (0,1) )
Goal: min_f c(A,B,f).
In other words, the problem is to find the mapping from A onto B that minimizes the distance between the mapped gray values together with a measure for the distortion introduced by the mapping. Here, the distortion is measured by the deviation from the identity mapping in the two dimensions. The identity mapping fulfills f(i,j)=(i,j), and therefore f((i,j)+(x,y))=f(i,j)+(x,y).
The corresponding decision problem is fixed by the following
Question: Given an instance of image matching and a cost c′, does there exist a mapping f such that c(A,B,f)≤c′?
In the definition of the problem some care must be taken concerning the distance functions. For example, if either one of the distance functions is a constant function, the problem is clearly in P (for d_g constant, the minimum is given by the identity mapping, and for d_d constant, the minimum can be determined by sorting all possible matchings for each pixel by gray value cost and mapping to one of the pixels with minimum cost). But these special cases are not those we are concerned with in image matching in general.
We choose the matching problem of Uchida and Sakoe (1998) to complete the definition of the problem. Here, the mapping functions are restricted by continuity and monotonicity constraints: the deviations from the identity mapping may locally be at most one pixel (i.e. limited to the eight-neighborhood with squared Euclidean distance less than or equal to 2). This can be formalized in this approach by choosing the functions f1, f2 as e.g. f1 = id and f2(x) = step(x) := 0 for x ≤ 2, and 10·G·M² for x > 2.
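To make the cost function concrete, the following is a minimal Python sketch of c(A,B,f) for this formulation, with d_g and d_d built from the f1 = id and f2 = step choices above. The function names and the exact penalty constant returned by the step function are illustrative assumptions, not the authors' code.

```python
import numpy as np

def d_g(g1, g2):
    # Gray-value distance: f1 = id applied to the squared Euclidean distance.
    return (int(g1) - int(g2)) ** 2

def d_d(dz, G, M):
    # Displacement-difference distance: step function of the squared norm,
    # zero inside the eight-neighborhood (squared distance <= 2),
    # a large penalty (here 10*G*M^2, an assumed constant) otherwise.
    sq = int(dz[0]) ** 2 + int(dz[1]) ** 2
    return 0 if sq <= 2 else 10 * G * M * M

def matching_cost(A, B, f, G):
    # A, B: M x M integer images; f: M x M x 2 integer array, f[i, j] = target pixel in B.
    M = A.shape[0]
    cost = 0
    # Gray-value term: compare each pixel of A with its image under f in B.
    for i in range(M):
        for j in range(M):
            ti, tj = f[i, j]
            cost += d_g(A[i, j], B[ti, tj])
    # Distortion terms: deviation of f from the identity mapping along both axes.
    for i in range(M - 1):
        for j in range(M):
            cost += d_d(f[i + 1, j] - f[i, j] - np.array([1, 0]), G, M)
    for i in range(M):
        for j in range(M - 1):
            cost += d_d(f[i, j + 1] - f[i, j] - np.array([0, 1]), G, M)
    return cost
```

With this choice the identity mapping incurs only the gray-value term, and any locally non-smooth mapping is penalized so heavily that a zero-cost mapping exists only when gray values can be matched within the eight-neighborhood constraint.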
3. Reduction from 3-SAT
3-SAT is a very well-known NP-complete problem (Garey and Johnson, 1979), where 3-SAT is defined as follows:
Instance: A collection of clauses C={c1,…,cK} on a set of variables X={x1,…,xL} such that each ck consists of 3 literals for k=1,…,K. Each literal is a variable or the negation of a variable.
Question: Is there a truth assignment for X which satisfies each clause ck, k=1,…,K?
The dependency graph D(Φ) corresponding to an instance Φ of 3-SAT is defined to be the bipartite graph whose independent sets are formed by the set of clauses C and the set of variables X. Two vertices ck and xl are adjacent iff ck involves xl or ¬xl. Given any 3-SAT formula Φ, we show how to construct in polynomial time an equivalent image matching problem I(Φ)=(A(Φ),B(Φ)). The two images of I(Φ) are similar according to the cost function (i.e. there exists f with c(A(Φ),B(Φ),f)≤0) iff the formula Φ is satisfiable. We perform the reduction from 3-SAT using the following steps:
• From the formula Φ we construct the dependency graph D(Φ).
• The dependency graph D(Φ) is drawn in the plane.
• The drawing of D(Φ) is refined to depict the logical behaviour of Φ, yielding two images (A(Φ),B(Φ)).
For this, we use three types of components: one component to represent variables of Φ, one component to represent clauses of Φ, and components which act as interfaces between the former two types. Before we give the formal reduction, we introduce these components.

3.1. Basic components
For the reduction from 3-SAT we need five components from which we will construct the instances for image matching, given a Boolean formula in 3-CNF, respectively its graph. The five components are the building blocks needed for the graph drawing and will be introduced in the following, namely the representations of connectors, crossings, variables, and clauses. The connectors represent the edges and have two varieties, straight connectors and corner connectors. Each of the components consists of two parts, one for image A and one for image B, where blank pixels are considered to be of the 'background' color. We will depict possible mappings in the following using arrows indicating the direction of displacement (where displacements within the eight-neighborhood of a pixel are the only cases considered). Blank squares represent mapping to the respective counterpart in the second image. For example, some displacements of neighboring pixels can be used with zero cost, while other displacements result in costs greater than zero.
Fig. 1 shows the first component, the straight connector component, which consists of a line of two different interchanging colors, here denoted by the two symbols ◇ and □. Given that the outside pixels are mapped to their respective counterparts and the connector is continued infinitely, there are two possible ways in which the colored pixels can be mapped, namely to the left (i.e. f(2,j)=(2,j−1)) or to the right (i.e. f(2,j)=(2,j+1)), where the background pixels have different possibilities for the mapping, not influencing the main property of the connector. This property, which justifies the name 'connector', is the following: it is not possible to find a mapping which yields zero cost where the relative displacements of the connector pixels are not equal, i.e. one always has f(2,j)−(2,j)=f(2,j′)−(2,j′), which can easily be observed by induction over j′. That is, given an initial displacement of one pixel (which will be ±1 in this context), the remaining end of the connector has the same displacement if the overall cost of the mapping is zero. Given this property and the direction of a connector, which we define to be directed from variable to clause, we can define the state of the connector as carrying the 'true' truth value if the displacement is +1 pixel in the direction of the connector, and as carrying the 'false' truth value if the displacement is −1 pixel in the direction of the connector. This property then ensures that the truth value transmitted by the connector cannot change at mappings of zero cost.
Fig. 1. The straight connector component with two possible zero cost mappings (image A, image B; mapping 1, mapping 2).
For the drawing of arbitrary graphs, clearly one also needs corners, which are represented in Fig. 2. By considering all possible displacements which guarantee overall cost zero, one can observe that the corner component also ensures the basic connector property. For example, consider the first depicted mapping, which has zero cost. On the other hand, the second mapping shows that it is not possible to construct a zero cost mapping with both connectors 'leaving' the component. In that case, the pixel at the position marked '?' either has a conflict (that is, introduces a cost greater than zero in the criterion function because of mapping mismatch) with the pixel above or to the right of it, if the same color is to be met, and otherwise a cost in the gray value mismatch term is introduced.
Fig. 2. The corner connector component and two example mappings (image A, image B; mapping 1, mapping 2).
Fig. 3 shows the variable component, in this case with two positive outputs (to the left) and one negated output (to the right) leaving the component as connectors. Here, a fourth color is used, denoted by ·. This component has two possible zero cost mappings for the colored pixels, which map the vertical element of the source image to the left or the right vertical element in the target image, respectively. (In both cases the second vertical element in the target image is not a target of the mapping.) This ensures ±1 pixel relative displacements at the entry to the connectors. This property again can be deduced by regarding all possible mappings of the two images. The property that follows (which is necessary for the use as a variable) is that all zero cost mappings ensure that all positive connectors carry the same truth value, which is the opposite of the truth value of all the negated connectors. It is easy to see from this example how variable components for arbitrary numbers of positive and negated outputs can be constructed.
Fig. 3. The variable component with two positive and one negated output and two possible mappings (for true and false truth value).
Fig. 4 shows the most complex of the components, the clause component. This component consists of two parts. The first part is the horizontal connector with a 'bend' in it to the right. This part has the property that zero cost mappings are possible for all truth values of x and y with the exception of both carrying the 'false' value. This two-input disjunction can be extended to a three-input disjunction using the part in the lower left. If the z connector carries a 'false' truth value, this part can only be mapped one pixel downwards at zero cost. In that case the junction pixel (the fourth pixel in the third row) cannot be mapped upwards at zero cost and the 'two-input clause' behaves as described above. On the other hand, if the z connector carries a 'true' truth value, this part can only be mapped one pixel upwards at zero cost, and the junction pixel can be mapped upwards, thus allowing both x and y to carry a 'false' truth value in a zero cost mapping. Thus there exists a zero cost mapping of the clause component iff at least one of the input connectors carries a 'true' truth value.
Fig. 4. The clause component with three incoming connectors x, y, z and zero cost mappings for the two cases (true, true, false) and (false, false, true).
The described components are already sufficient to prove NP-completeness by reduction from planar 3-SAT (which is an NP-complete sub-problem of 3-SAT where the additional constraint on the instances is that the dependency graph is planar), but in order to derive a reduction from 3-SAT, we also include the possibility of crossing connectors. Fig. 5 shows the connector crossing, whose basic property is to allow zero cost mappings if the truth values are consistently propagated. This is assured by a color change of the vertical connector and a 'flexible' middle part, which can be mapped to four different positions depending on the truth value distribution.
Fig. 5. The connector crossing component and one zero cost mapping (image A, image B).
3.2. Reduction
Using the previously introduced components, we can now perform the reduction from 3-SAT to image matching.
Proof of the claim that the image matching problem is NP-complete: Clearly, the image matching problem is in NP since, given a mapping f and two images A and B, the computation of c(A,B,f) can be done in polynomial time. To prove NP-hardness, we construct a reduction from the 3-SAT problem. Given an instance of 3-SAT we construct two images A and B, for which a mapping of cost zero exists iff all the clauses can be satisfied.
Given the dependency graph D, we construct an embedding of the graph into a 2D pixel grid, placing the vertices at a large enough distance from each other (say 100(K+L)²). This can be done using well-known methods from graph drawing (see e.g. di Battista et al., 1999). From this image of the graph D we construct the two images A and B, using the components described above. Each vertex belonging to a variable is replaced with the respective parts of the variable component, having a number of leaving connectors equal to the number of incident edges, under consideration of the positive or negative use in the respective clause. Each vertex belonging to a clause is replaced by the respective clause component, and each crossing of edges is replaced by the respective crossing component. Finally, all the edges are replaced with connectors and corner connectors, and the remaining pixels inside the rectangular hull of the construction are set to the background gray value. Clearly, the placement of the components can be done in such a way that all the components are at a large enough distance from each other, where the background pixels act as an 'insulation' against the mapping of pixels which do not belong to the same component. It can easily be seen that the size of the constructed images is polynomial with respect to the number of vertices and edges of D and thus polynomial in the size of the instance of 3-SAT, at most of the order (K+L)². Furthermore, it can obviously be constructed in polynomial time, as the corresponding graph drawing algorithms are polynomial.
Let there exist a truth assignment to the variables x1,…,xL which satisfies all the clauses c1,…,cK. We construct a mapping f that satisfies c(f,A,B)=0 as follows. For all pixels (i,j) belonging to variable component l with A(i,j) not of the background color, set f(i,j)=(i,j−1) if xl is assigned the truth value 'true', and set f(i,j)=(i,j+1) otherwise. For the remaining pixels of the variable component set f(i,j)=(i,j) if A(i,j)=B(i,j); otherwise choose f(i,j) from {(i,j+1),(i+1,j+1),(i−1,j+1)} for xl 'false', respectively from {(i,j−1),(i+1,j−1),(i−1,j−1)} for xl 'true', such that A(i,j)=B(f(i,j)). This assignment is always possible and has zero cost, as can be easily verified.
For the pixels (i,j) belonging to (corner) connector components, the mapping function can only be extended in one way without the introduction of nonzero cost, starting from the connection with the variable component. This is ensured by the basic connector property. By choosing f(i,j)=(i,j) for all pixels of background color, we obtain a valid extension for the connectors.
For the connector crossing components the extension is straightforward, although here––as in the variable mapping––some care must be taken with the assignment of the background value pixels, but a zero cost assignment is always possible using the same scheme as presented for the variable mapping. It remains to be shown that the clause components can be mapped at zero cost if at least one of the input connectors x, y, z carries a 'true' truth value. For a proof we regard all seven possibilities and construct a mapping for each case. In the description of the clause component it was already argued that this is possible, and due to space limitations we omit the formalization of the argument here. Finally, for all the pixels (i,j) not belonging to any of the components, we set f(i,j)=(i,j), thus arriving at a mapping function which has c(f,A,B)=0.

Foreign Literature and Translation: Digital Image Processing and Edge Detection


Digital Image Processing and Edge Detection

Digital Image Processing
Interest in digital image processing methods stems from two principal application areas: improvement of pictorial information for human interpretation; and processing of image data for storage, transmission, and representation for autonomous machine perception.
An image may be defined as a two-dimensional function, f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. The field of digital image processing refers to processing digital images by means of a digital computer. Note that a digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as picture elements, image elements, pels, and pixels. Pixel is the term most widely used to denote the elements of a digital image.
Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the electromagnetic (EM) spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate on images generated by sources that humans are not accustomed to associating with images. These include ultrasound, electron microscopy, and computer-generated images. Thus, digital image processing encompasses a wide and varied field of applications.
There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.
There are no clearcut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects.
A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects). Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.
Based on the preceding comments, we see that a logical place of overlap between image processing and image analysis is the area of recognition of individual regions or objects in an image. Thus, what we call in this book digital image processing encompasses processes whose inputs and outputs are images and, in addition, encompasses processes that extract attributes from images, up to and including the recognition of individual objects. As a simple illustration to clarify these concepts, consider the area of automated analysis of text. The processes of acquiring an image of the area containing the text, preprocessing that image, extracting (segmenting) the individual characters, describing the characters in a form suitable for computer processing, and recognizing those individual characters are in the scope of what we call digital image processing in this book. Making sense of the content of the page may be viewed as being in the domain of image analysis and even computer vision, depending on the level of complexity implied by the statement "making sense." As will become evident shortly, digital image processing, as we have defined it, is used successfully in a broad range of areas of exceptional social and economic value.
The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field. One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source (e.g., visual, X-ray, and so on). The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic (in the form of electron beams used in electron microscopy). Synthetic images, used for modeling and visualization, are generated by computer. In this section we discuss briefly how images are generated in these various categories and the areas in which they are applied.
Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon. If spectral bands are grouped according to energy per photon, we obtain the spectrum shown in the figure below, ranging from gamma rays (highest energy) at one end to radio waves (lowest energy) at the other. The bands are shown shaded to convey the fact that bands of the EM spectrum are not distinct but rather transition smoothly from one to the other.
Image acquisition is the first process. Note that acquisition could be as simple as being given an image that is already in digital form.
Generally, the image acquisition stage involves preprocessing, such as scaling.
Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. A familiar example of enhancement is when we increase the contrast of an image because "it looks better." It is important to keep in mind that enhancement is a very subjective area of image processing. Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.
Color image processing is an area that has been gaining in importance because of the significant increase in the use of digital images over the Internet. It covers a number of fundamental concepts in color models and basic color processing in a digital domain. Color is used also in later chapters as the basis for extracting features of interest in an image.
Wavelets are the foundation for representing images in various degrees of resolution. In particular, this material is used in this book for image data compression and for pyramidal representation, in which images are subdivided successively into smaller regions.
Compression, as the name implies, deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Although storage technology has improved significantly over the past decade, the same cannot be said for transmission capacity. This is true particularly in uses of the Internet, which are characterized by significant pictorial content. Image compression is familiar (perhaps inadvertently) to most users of computers in the form of image file extensions, such as the jpg file extension used in the JPEG (Joint Photographic Experts Group) image compression standard.
Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. The material in this chapter begins a transition from processes that output images to processes that output image attributes.
Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually. On the other hand, weak or erratic segmentation algorithms almost always guarantee eventual failure. In general, the more accurate the segmentation, the more likely recognition is to succeed.
Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region (i.e., the set of pixels separating one image region from another) or all the points in the region itself. In either case, converting the data to a form suitable for computer processing is necessary. The first decision that must be made is whether the data should be represented as a boundary or as a complete region.
Boundary representation is appropriate when the focus is on external shape characteristics, such as corners and inflections. Regional representation is appropriate when the focus is on internal properties, such as texture or skeletal shape. In some applications, these representations complement each other. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing. A method must also be specified for describing the data so that features of interest are highlighted. Description, also called feature selection, deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.
Recognition is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors. As detailed before, we conclude our coverage of digital image processing with the development of methods for recognition of individual objects.
So far we have said nothing about the need for prior knowledge or about the interaction between the knowledge base and the processing modules in Fig. 2 above. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications. In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules. This distinction is made in Fig. 2 above by the use of double-headed arrows between the processing modules and the knowledge base, as opposed to single-headed arrows linking the processing modules.

Edge detection
Edge detection is a terminology in image processing and computer vision, particularly in the areas of feature detection and feature extraction, referring to algorithms which aim at identifying points in a digital image at which the image brightness changes sharply or, more formally, has discontinuities. Although point and line detection certainly are important in any discussion on segmentation, edge detection is by far the most common approach for detecting meaningful discontinuities in gray level.
Although certain literature has considered the detection of ideal step edges, the edges obtained from natural images are usually not at all ideal step edges. Instead they are normally affected by one or several of the following effects: 1. focal blur caused by a finite depth-of-field and finite point spread function; 2. penumbral blur caused by shadows created by light sources of non-zero radius; 3. shading at a smooth object edge; 4. local specularities or interreflections in the vicinity of object edges.
A typical edge might for instance be the border between a block of red color and a block of yellow. In contrast a line (as can be extracted by a ridge detector) can be a small number of pixels of a different color on an otherwise unchanging background. For a line, there may therefore usually be one edge on each side of the line.
To illustrate why edge detection is not a trivial task, let us consider the problem of detecting edges in the following one-dimensional signal.
5 7 6 4 152 148 149
Here, we may intuitively say that there should be an edge between the 4th and 5th pixels: the adjacent differences are 2, 1, 2, 148, 4, 1, so the jump of 148 gray levels stands out clearly against its neighbours. If the intensity difference between the 4th and 5th pixels were smaller, and the intensity differences between the adjacent neighbouring pixels were higher, it would not be as easy to say that there should be an edge in the corresponding region. Moreover, one could argue that this case is one in which there are several edges. Hence, to firmly state a specific threshold on how large the intensity change between two neighbouring pixels must be for us to say that there should be an edge between these pixels is not always a simple problem. Indeed, this is one of the reasons why edge detection may be a non-trivial problem unless the objects in the scene are particularly simple and the illumination conditions can be well controlled.
There are many methods for edge detection, but most of them can be grouped into two categories, search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. The zero-crossing based methods search for zero crossings in a second-order derivative expression computed from the image in order to find edges, usually the zero-crossings of the Laplacian or the zero-crossings of a non-linear differential expression, as will be described further below. As a pre-processing step to edge detection, a smoothing stage, typically Gaussian smoothing, is almost always applied.
The edge detection methods that have been published mainly differ in the types of smoothing filters that are applied and the way the measures of edge strength are computed. As many edge detection methods rely on the computation of image gradients, they also differ in the types of filters used for computing gradient estimates in the x- and y-directions.
Once we have computed a measure of edge strength (typically the gradient magnitude), the next stage is to apply a threshold, to decide whether edges are present or not at an image point. The lower the threshold, the more edges will be detected, and the result will be increasingly susceptible to noise, and also to picking out irrelevant features from the image. Conversely a high threshold may miss subtle edges, or result in fragmented edges.
If the edge thresholding is applied to just the gradient magnitude image, the resulting edges will in general be thick and some type of edge thinning post-processing is necessary. For edges detected with non-maximum suppression, however, the edge curves are thin by definition and the edge pixels can be linked into edge polygons by an edge linking (edge tracking) procedure. On a discrete grid, the non-maximum suppression stage can be implemented by estimating the gradient direction using first-order derivatives, then rounding off the gradient direction to multiples of 45 degrees, and finally comparing the values of the gradient magnitude in the estimated gradient direction.
A commonly used approach to handle the problem of choosing appropriate thresholds is thresholding with hysteresis. This method uses multiple thresholds to find edges. We begin by using the upper threshold to find the start of an edge. Once we have a start point, we then trace the path of the edge through the image pixel by pixel, marking an edge whenever we are above the lower threshold. We stop marking our edge only when the value falls below our lower threshold.
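As a rough illustration of this search-based pipeline (gradient magnitude as the edge strength measure, followed by double-threshold hysteresis), here is a minimal NumPy sketch. It assumes a grayscale image that has already been smoothed (e.g. with a Gaussian), omits non-maximum suppression, and uses illustrative function names; it is not taken from any particular edge detector implementation.

```python
import numpy as np

def hysteresis_edges(img, low, high):
    """Edge map from gradient magnitude plus double-threshold hysteresis.
    `img` is a 2-D float array (ideally pre-smoothed); `low` < `high`."""
    # 1. Edge strength: central-difference gradient magnitude.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)

    # 2. Strong pixels (above the high threshold) seed the edges;
    #    weak candidates are pixels above the low threshold.
    strong = mag >= high
    candidate = mag >= low

    # 3. Hysteresis: keep a candidate only if it is 8-connected,
    #    directly or through other candidates, to a strong pixel.
    edges = np.zeros(img.shape, dtype=bool)
    stack = list(zip(*np.nonzero(strong)))
    while stack:
        i, j = stack.pop()
        if edges[i, j]:
            continue
        edges[i, j] = True
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]
                        and candidate[ni, nj] and not edges[ni, nj]):
                    stack.append((ni, nj))
    return edges
```

Lowering `low` lets the tracker follow fainter sections of an edge, while lowering `high` admits more seed points and hence more (and noisier) edges, mirroring the threshold trade-off discussed above.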
This approach makes the assumption that edges are likely to lie in continuous curves, and allows us to follow a faint section of an edge we have previously seen, without meaning that every noisy pixel in the image is marked down as an edge. Still, however, we have the problem of choosing appropriate thresholding parameters, and suitable thresholding values may vary over the image.
Some edge-detection operators are instead based upon second-order derivatives of the intensity. This essentially captures the rate of change in the intensity gradient. Thus, in the ideal continuous case, detection of zero-crossings in the second derivative captures local maxima in the gradient.
We can come to the conclusion that, to be classified as a meaningful edge point, the transition in gray level associated with that point has to be significantly stronger than the background at that point. Since we are dealing with local computations, the method of choice to determine whether a value is "significant" or not is to use a threshold. Thus, we define a point in an image as being an edge point if its two-dimensional first-order derivative is greater than a specified threshold. A set of such points that are connected according to a predefined criterion of connectedness is by definition an edge. The term edge segment generally is used if the edge is short in relation to the dimensions of the image. A key problem in segmentation is to assemble edge segments into longer edges. An alternate definition, if we elect to use the second derivative, is simply to define the edge points in an image as the zero crossings of its second derivative; the definition of an edge in this case is the same as above. It is important to note that these definitions do not guarantee success in finding edges in an image; they simply give us a formalism to look for them. First-order derivatives in an image are computed using the gradient; second-order derivatives are obtained using the Laplacian.
Digital Image Processing and Edge Detection. Digital Image Processing: Research on digital image processing methods stems from two principal application areas: one is to improve pictorial information so that it is easier for people to analyse; the other is to store, transmit, and display image data so that machines can understand it automatically.

Foreign Literature on Machine Vision


Title: Overview of Machine Vision: Technologies and Applications
Abstract: Machine vision is an important technology that enables computers to interpret visual information from images or videos. This paper provides an overview of the technologies behind machine vision and their various applications. The first section covers the basics of machine vision, including image acquisition, processing, and analysis. The second section discusses some common machine vision techniques such as pattern recognition, object detection and classification. The third section gives examples of machine vision applications, including manufacturing, medical diagnosis, and surveillance.
Introduction: Machine vision has become increasingly important in many fields, from manufacturing to medical diagnosis to robotics. It is a technology that allows computers to interpret visual information from images or videos, enabling them to make automated decisions based on that information. This paper will provide an overview of machine vision technologies and their applications.
Section 1: Basics of Machine Vision
1.1 Image Acquisition: This section discusses the different ways in which images or videos can be captured and stored for machine vision applications. It covers topics such as cameras, sensors, and storage devices.
1.2 Image Processing: This section explains the different methods used to clean up and enhance images before analysis. Techniques such as filtering and image segmentation are discussed.
1.3 Image Analysis: This section discusses the different algorithms used to analyze images and extract important features such as edges, corners, and textures.
Section 2: Machine Vision Techniques
2.1 Pattern Recognition: This section covers the basics of pattern recognition and its importance in machine vision. Methods such as template matching and machine learning are discussed.
2.2 Object Detection: This section explains the different ways in which objects can be detected in a scene, including feature-based methods and deep learning.
2.3 Object Classification: This section covers the different algorithms used to classify objects based on their features or attributes. Methods such as decision trees and support vector machines are discussed.
Section 3: Applications of Machine Vision
3.1 Manufacturing: This section discusses the various applications of machine vision in manufacturing, including quality control, inspection, and assembly.
3.2 Medical Diagnosis: This section covers the different ways in which machine vision can be used for medical diagnosis, including pathology, radiology, and ophthalmology.
3.3 Surveillance: This section explains the different ways in which machine vision can be used for surveillance, including face recognition and crowd monitoring.
Conclusion: Machine vision is a powerful technology that is becoming increasingly important in many fields. With its ability to interpret visual information and make automated decisions based on that data, it has the potential to revolutionize the way we live and work. As the technology continues to evolve and improve, we can expect to see even more exciting applications of machine vision in the future.

Optics: Foreign Literature and Translation


In digital holography, the recording CCD is placed on the ξ–η plane in order to register the hologram, while the object lies in the x–y plane (the reconstructed image plane is denoted x′–y′). For the reconstruction of the information of the object wave, phase-shifting digital holography includes two steps: (1) getting the object wave on the hologram plane, and (2) reconstructing the original object wave.

2.1 Getting information of the object wave on the hologram plane
Doing phase shifting N−1 times and capturing N holograms. Supposing the interferogram after k−1 phase shifts is
I(ξ,η,δ_k) = a(ξ,η) + b(ξ,η)·cos[φ(ξ,η) − δ_k]    (1)
Phase detection can apply two kinds of algorithms: synchronous phase detection algorithms [9] and the least-squares iterative algorithm [10]. Among the synchronous phase detection algorithms, the four-step algorithm is in common use. The calculation equation is
E(ξ,η) = 1/2 {[I(ξ,η,0) − I(ξ,η,π)] + i[I(ξ,η,π/2) − I(ξ,η,3π/2)]}

2.2 Reconstructing the original object wave by a reverse-transform algorithm
The object wave spreads forward from the original object. This process has an exact and clear description and expression in physics and mathematics. By the phase-shifting technique, we have obtained the information of the object wave after it has propagated a certain distance from the original object. Therefore, in order to get the information of the object wave at its initial position, what we need to do is the reverse work.
Fig. 1. Geometric coordinate of digital holography.
An auto-focusing evaluation function is used to determine the exact registering distance. The focusing functions normally applied can be divided into four types: gray and gradient functions, frequency-domain functions, informatics functions and statistics functions. The gray evaluation function is easy to calculate and also robust; it can satisfy the demand of common focusing precision. We apply the intensity sum of the reconstruction image as the evaluation function:
S = Σ_{k=1}^{M} Σ_{l=1}^{N} I(k,l) = min
The calculation is described in Fig. 2. The position at which the turning point occurs corresponds to the best registration distance d, which also equals the reconstructing distance d′. It should be indicated that if we only need to reconstruct the phase map of the object wave, the registration distance substituted into the calculation equation is permitted to have a departure from its true value.
Fig. 2. Determining the reconstructing distance.
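A compact Python sketch of these two stages, four-step phase retrieval on the hologram plane followed by a scan of candidate distances with an intensity-based focus criterion, is given below. It assumes a monochromatic setup and an angular-spectrum back-propagation; the propagation sign convention, the use of the summed reconstructed amplitude as the focus score, and all function names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def object_wave_four_step(I0, I90, I180, I270):
    # Four-step synchronous phase detection:
    # E = 1/2 * [(I(0) - I(pi)) + i*(I(pi/2) - I(3pi/2))]
    return 0.5 * ((I0 - I180) + 1j * (I90 - I270))

def angular_spectrum(E, wavelength, pitch, distance):
    # Propagate the hologram-plane field by `distance` (same units as
    # wavelength/pitch) with the angular-spectrum transfer function.
    ny, nx = E.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(2j * np.pi * distance / wavelength * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(E) * H)

def best_registration_distance(E, wavelength, pitch, candidates):
    # Focus criterion: sum of the reconstructed amplitude over all pixels,
    # used here as a stand-in for the intensity-sum function S in the text;
    # the turning point (minimum of the curve) marks the best distance d.
    scores = [np.abs(angular_spectrum(E, wavelength, pitch, -d)).sum()
              for d in candidates]
    return candidates[int(np.argmin(scores))], scores
```

Scanning `candidates` on a coarse grid first and then refining around the minimum mirrors the adjustable controlling precision (e.g. 5 mm, then 0.1 mm) used later in the coin experiment.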
4 Spatial resolution of digital holography
4.1 Affecting factors of the spatial resolution of digital holography
It should be considered in three respects: (1) the sizes of the object and the registering material, and the direction of the reference beam; (2) the resolution of the registering material; and (3) the diffraction limit.
For a point x₂ on the object shown in Fig. 3, the limits of the spatial frequency are
f_max = (sin θ_R − sin θ_max)/λ = {sin θ_R − sin[tan⁻¹((x₂ − ξ₁)/z₀)]}/λ
f_min = (sin θ_R − sin θ_min)/λ = {sin θ_R − sin[tan⁻¹((x₂ − ξ₁′)/z₀)]}/λ
The frequency range is
Δf = f_max − f_min = {sin[tan⁻¹((x₂ − ξ₁′)/z₀)] − sin[tan⁻¹((x₂ − ξ₁)/z₀)]}/λ
so the range is unrelated to the reference beam.
Fig. 3. Relationship between the object and the CCD.
Considering the resolution of the registering material, in order to satisfy the sampling theorem the phase difference between adjacent points on the recording plate should be less than π, namely
f_max − f_min ≤ f_c = 1/(2Δη)
where f_c is the cut-off frequency (resolution) of the registration material and Δη is its pixel pitch.
4.2 Expanding the spatial resolution of the reconstruction image
Expanding the spatial resolution can be realized in at least three ways: (1) Reducing the registration distance z₀ can improve the reconstruction resolution, but the reconstruction area shrinks at the same ratio; therefore this method has its limitations. (2) Increasing the resolution and the imaging size of the CCD, which is expensive. (3) Applying an image-synthesizing technique [11]: the CCD captures a few images between which there is a small displacement (usually a fraction of the pixel size) within the CCD plane, as shown in Fig. 4 (the schematic for moving in the vertical direction is the same). This method has two disadvantages. First, it is unsuitable for dynamic testing and can only be applied to static image reconstruction. Second, because the pixel size is small (usually 5 μm to 10 μm) and the displacement should be a fraction of this size (for example 2 μm), it needs a moving stage with high resolution and precision. It also needs high stability throughout the test. In general, improvement of the spatial resolution of the digital reconstruction is still a big problem for the application of digital holography.
Fig. 4. Image capturing by moving the CCD along the horizontal direction.

5 Testing results
Fig. 5 is a photo of the testing system. The paper carries out testing on two coins. The pixel size of the CCD is 4.65 μm and there are 1392×1040 pixels. The first is a one-Yuan RMB coin (Φ25 mm) used for image reconstruction by phase-shifting digital holography. The second is a one-Jiao RMB coin (Φ20 mm) used for the deformation measurement test, also by phase-shifting digital holography.
Fig. 5. Photo of the testing system.
5.1 Result of image reconstruction
The diameter of the one-Yuan coin is 25 mm. The registration distance measured by a ruler is about 385 mm. We capture our phase-shifting holograms and reconstruct the image by phase-shifting digital holography. Fig. 6 is the reconstructed image. Fig. 7 is the curve of the auto-focus function, from which we determine the real registration distance of 370 mm. We can also change the controlling precision, for example 5 mm, 0.1 mm, etc., to get a coarser or more precise reconstruction position.
5.2 Deformation measurement
In digital holography, the method of deformation measurement differs from that of traditional holography. It obtains the object wave before and after deformation and then subtracts their phases to obtain the deformation. The study tested the effect of heating deformation on the one-Jiao coin. The results are shown in Fig. 8, where (a) is the interference signal of the object waves before and after deformation, and (b) is the wrapped phase difference.
5.3 Improving the spatial resolution
For the tested coin, we applied four low-resolution sub-holograms to reconstruct a high-resolution image by the image-synthesizing technique. Fig. 9(a) is the image reconstructed from one low-resolution hologram, and (b) is the high-resolution image reconstructed from four low-resolution holograms.
Fig. 6. Reconstructed image. Fig. 7. Auto-focus function. Fig. 8. Heating deformation results. Fig. 9. Comparison between the low- and high-resolution reconstructed images.

6 Summary
Digital holography can obtain the phase and amplitude of the object wave at the same time, which is a big advantage compared to other techniques. Phase-shifting digital holography can realize image reconstruction and deformation measurement with less noise, but it is unsuitable for dynamic testing. Applying the intensity sum of the reconstruction image as the auto-focusing function to evaluate the registering distance is easy, the computation is fast, and its precision is also sufficient. The image-synthesizing technique can improve the spatial resolution of digital holography, but its static characteristic reduces its practicability.
The limited dimensions and the still relatively large pixel size of CCDs remain the main obstacles to the wide application of digital holography.
Translation of the foreign literature. Title: Phase-shifting digital holography for image reconstruction. Abstract: Phase-shifting digital holography is used to study internal defects of artworks.

English-Language Literature on Medical Imaging


English answer:
Within the realm of medical imaging, sophisticated imaging techniques empower healthcare professionals with the ability to visualize and comprehend anatomical structures and physiological processes in the human body. These techniques are instrumental in diagnosing diseases, guiding therapeutic interventions, and monitoring treatment outcomes.
Computed tomography (CT) and magnetic resonance imaging (MRI) are two cornerstone imaging modalities widely employed in medical practice. CT utilizes X-rays and advanced computational algorithms to generate cross-sectional images of the body, providing detailed depictions of bones, soft tissues, and blood vessels. MRI, on the other hand, harnesses the power of powerful magnets and radiofrequency waves to create intricate images that excel in showcasing soft tissue structures, including the brain, spinal cord, and internal organs.
Positron emission tomography (PET) and single-photon emission computed tomography (SPECT) are nuclear medicine imaging techniques that involve the administration of radioactive tracers into the body. These tracers accumulate in specific organs or tissues, enabling the visualization and assessment of metabolic processes and disease activity. PET is particularly valuable in oncology, as it can detect the presence and extent of cancerous lesions.
Ultrasound, also known as sonography, utilizes high-frequency sound waves to produce images of internal structures. It is a versatile technique commonly employed in obstetrics, cardiology, and abdominal imaging. Ultrasound offers real-time visualization, making it ideal for guiding procedures such as biopsies and injections.
Interventional radiology is a specialized field that combines imaging guidance with minimally invasive procedures. Interventional radiologists utilize imaging techniques to precisely navigate catheters and other instruments through the body, enabling the diagnosis and treatment of conditions without the need for open surgery. This approach offers reduced invasiveness and faster recovery times compared to traditional surgical interventions.
Medical imaging has revolutionized healthcare by providing invaluable insights into the human body. The ability to visualize anatomical structures and physiological processes in exquisite detail has transformed the practice of medicine, leading to more accurate diagnoses, targeted treatments, and improved patient outcomes.
Chinese answer: Medical imaging is an indispensable part of modern medicine. It uses various imaging techniques to visualize and understand the anatomical structures and physiological processes of the human body, and it plays a crucial role in disease diagnosis, treatment planning, and the evaluation of treatment outcomes.

Foreign Literature Translation: Digital Image Processing and Pattern Recognition Techniques for the Detection of Cancer


Original English text
Digital image processing and pattern recognition techniques for the detection of cancer
Cancer is the second leading cause of death for both men and women in the world, and is expected to become the leading cause of death in the next few decades. In recent years, cancer detection has become a significant area of research activities in the image processing and pattern recognition community. Medical imaging technologies have already made a great impact on our capabilities of detecting cancer early and diagnosing the disease more accurately. In order to further improve the efficiency and veracity of diagnoses and treatment, image processing and pattern recognition techniques have been widely applied to analysis and recognition of cancer, evaluation of the effectiveness of treatment, and prediction of the development of cancer. The aim of this special issue is to bring together researchers working on image processing and pattern recognition techniques for the detection and assessment of cancer, and to promote research in image processing and pattern recognition for oncology. A number of papers were submitted to this special issue and each was peer-reviewed by at least three experts in the field. From these submitted papers, 17 were finally selected for inclusion in this special issue. These selected papers cover a broad range of topics that are representative of the state-of-the-art in computer-aided detection or diagnosis (CAD) of cancer. They cover several imaging modalities (such as CT, MRI, and mammography) and different types of cancer (including breast cancer, skin cancer, etc.), which we summarize below.
Skin cancer is the most prevalent among all types of cancers. Three papers in this special issue deal with skin cancer. Yuan et al. propose a skin lesion segmentation method. The method is based on region fusion and narrow-band energy graph partitioning. The method can deal with challenging situations with skin lesions, such as topological changes, weak or false edges, and asymmetry. Tang proposes a snake-based approach using multi-direction gradient vector flow (GVF) for the segmentation of skin cancer images. A new anisotropic diffusion filter is developed as a preprocessing step. After the noise is removed, the image is segmented using a GVF snake. The proposed method is robust to noise and can correctly trace the boundary of the skin cancer even if there are other objects near the skin cancer region. Serrano et al. present a method based on Markov random fields (MRF) to detect different patterns in dermoscopic images. Different from previous approaches to automatic dermatological image classification with the ABCD rule (Asymmetry, Border irregularity, Color variegation, and Diameter greater than 6 mm or growing), this paper follows a new trend to look for specific patterns in lesions which could lead physicians to a clinical assessment.
Breast cancer is the most frequently diagnosed cancer other than skin cancer and a leading cause of cancer deaths in women in developed countries. In recent years, CAD schemes have been developed as a potentially efficacious solution to improving radiologists' diagnostic accuracy in breast cancer screening and diagnosis. The predominant approach of CAD in breast cancer and medical imaging in general is to use automated image analysis to serve as a "second reader", with the aim of improving radiologists' diagnostic performance.
Thanks to intense research and development efforts, CAD schemes have now been introduced in screening mammography, and clinical studies have shown that such schemes can result in higher sensitivity at the cost of a small increase in recall rate. In this issue, we have three papers in the area of CAD for breast cancer. Wei et al. propose an image-retrieval based approach to CAD, in which retrieved images similar to that being evaluated (called the query image) are used to support a CAD classifier, yielding an improved measure of malignancy. This involves searching a large database for the images that are most similar to the query image, based on features that are automatically extracted from the images. Dominguez et al. investigate the use of image features characterizing the boundary contours of mass lesions in mammograms for classification of benign vs. malignant masses. They study and evaluate the impact of these features on diagnostic accuracy with several different classifier designs when the lesion contours are extracted using two different automatic segmentation techniques. Schaefer et al. study the use of thermal imaging for breast cancer detection. In their scheme, statistical features are extracted from thermograms to quantify bilateral differences between left and right breast regions, which are used subsequently as input to a fuzzy-rule-based classification system for diagnosis.
Colon cancer is the third most common cancer in men and women, and also the third most common cause of cancer-related death in the USA. Yao et al. propose a novel technique to detect colonic polyps using CT colonography. They use ideas from geographic information systems to employ topographical height maps, which mimic the procedure used by radiologists for the detection of polyps. The technique can also be used to measure consistently the size of polyps. Hafner et al. present a technique to classify and assess colonic polyps, which are precursors of colorectal cancer. The classification is performed based on the pit pattern in zoom-endoscopy images. They propose a novel color wavelet cross co-occurrence matrix which employs the wavelet transform to extract texture features from color channels.
Lung cancer occurs most commonly between the ages of 45 and 70 years, and has one of the worst survival rates of all the types of cancer. Two papers are included in this special issue on lung cancer research. Pattichis et al. evaluate new mathematical models that are based on statistics, logic functions, and several statistical classifiers to analyze reader performance in grading chest radiographs for pneumoconiosis. The technique can potentially be applied to the detection of nodules related to early stages of lung cancer. El-Baz et al. focus on the early diagnosis of pulmonary nodules that may lead to lung cancer. Their methods monitor the development of lung nodules in successive low-dose chest CT scans. They propose a new two-step registration method to align globally and locally two detected nodules. Experiments on a relatively large data set demonstrate that the proposed registration method contributes to precise identification and diagnosis of nodule development.
It is estimated that almost a quarter of a million people in the USA are living with kidney cancer and that the number increases by 51,000 every year. Linguraru et al.
propose a computer-assisted radiology tool to assess renal tumors in contrast-enhanced CT for the management of tumor diagnosis and response to treatment. The tool accurately segments, measures, and characterizes renal tumors, and has been adopted in clinical practice. Validation against manual tools shows high correlation.

Neuroblastoma is a cancer of the sympathetic nervous system and one of the most malignant diseases affecting children. Two papers in this field are included in this special issue. Sertel et al. present techniques for classification of the degree of Schwannian stromal development as either stroma-rich or stroma-poor, which is a critical decision factor affecting the prognosis. The classification is based on texture features extracted using co-occurrence statistics and local binary patterns. Their work is useful in helping pathologists in the decision-making process. Kong et al. propose image processing and pattern recognition techniques to classify the grade of neuroblastic differentiation on whole-slide histology images. The presented technique is promising for facilitating high-throughput grading of whole-slide images of neuroblastoma biopsies.

This special issue also includes papers which are not directly focused on the detection or diagnosis of a specific type of cancer but deal with the development of techniques applicable to cancer detection. Ta et al. propose a framework of graph-based tools for the segmentation of microscopic cellular images. Based on the framework, automatic or interactive segmentation schemes are developed for color cytological and histological images. Tosun et al. propose an object-oriented segmentation algorithm for biopsy images for the detection of cancer. The proposed algorithm uses a homogeneity measure based on the distribution of the objects to characterize tissue components. Colon biopsy images were used to verify the effectiveness of the method; the segmentation accuracy was improved as compared to its pixel-based counterpart. Narasimha et al. present a machine-learning tool for automatic texton-based joint classification and segmentation of mitochondria in MNT-1 cells imaged using an ion-abrasion scanning electron microscope. The proposed approach requires minimal user intervention and can achieve high classification accuracy. El Naqa et al. investigate intensity-volume histogram metrics as well as shape and texture features extracted from PET images to predict a patient's response to treatment. Preliminary results suggest that the proposed approach could provide better tools and discriminant power for functional imaging in clinical prognosis.

We hope that the collection of selected papers in this special issue will serve as a basis for inspiring further rigorous research in CAD of various types of cancer. We invite you to explore this special issue and benefit from these papers. On behalf of the Editorial Committee, we take this opportunity to gratefully acknowledge the authors and the reviewers for their diligence in abiding by the editorial timeline. Our thanks also go to the Editors-in-Chief of Pattern Recognition, Dr. Robert S. Ledley and Dr. C. Y. Suen, for their encouragement and support for this special issue.

英文文献译文

数字图像处理和模式识别技术关于检测癌症的应用——世界上,癌症是人类(不论男女)生命的第二大杀手。
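As a concrete illustration of the texture descriptors mentioned above (co-occurrence statistics and local binary patterns, as used in the neuroblastoma papers), the following Python sketch computes a small gray-level co-occurrence matrix with two classic statistics and a basic 8-neighbour LBP histogram for a grayscale patch. This is a minimal illustrative implementation added for this compilation, not the authors' code; the offset, the number of gray levels, and the random test patch are assumptions.

import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one (dx, dy) offset (illustrative only)."""
    q = (img.astype(np.float64) / 256.0 * levels).astype(int).clip(0, levels - 1)
    m = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1   # count co-occurring gray levels
    return m / max(m.sum(), 1)                   # normalize to joint probabilities

def glcm_features(p):
    """Contrast and energy, two common co-occurrence statistics."""
    i, j = np.indices(p.shape)
    return {"contrast": float(((i - j) ** 2 * p).sum()), "energy": float((p ** 2).sum())}

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern histogram."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.int32) << bit)   # set one bit per neighbour
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

if __name__ == "__main__":
    patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in tissue patch
    print(glcm_features(glcm(patch)), lbp_histogram(patch)[:8])

In a CAD pipeline, feature vectors of this kind would typically be passed to a classifier, for example the fuzzy-rule-based or retrieval-based schemes described in the papers above.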

外文翻译 外文文献 英文文献:国内混合动力汽车发展


China Hybrid Electric Vehicle Development

With the depletion of oil resources and growing environmental awareness, hybrid and electric vehicles will become the mainstream of automobile development in the first decades of the new century; this has become the consensus of the Chinese automotive industry. The Chinese government has also listed electric vehicles, including hybrids, as major projects under the National High Technology Research and Development Program (863 Program). In its independent innovation in new energy vehicles, China adheres to the principle of government support focused on core technologies, key components, and system integration. R&D is organized around hybrid electric vehicles, pure electric vehicles, and fuel cell vehicles as the "three verticals", and vehicle control systems, motor drive systems, and power batteries/fuel cells as the "three horizontals". Through close cooperation between production and research, China's independently developed hybrid cars have made significant progress.

A power-system technology platform with completely independent intellectual property rights has been formed, establishing the basis for hybrid electric vehicle technology development. The core of hybrid vehicle technology is the battery (including the battery management system); engine technology, motor control, vehicle control, the interface between engine and electrical systems, and power conversion are also key. China has established hybrid power-system technology platforms and systems through cooperative R&D and has made a series of breakthroughs, laying a solid foundation for vehicle development. As of January 31, 2009, the China Intellectual Property Office had received and published 1,116 Chinese patent applications on hybrid vehicle technology, of which 782 were invention patents (107 granted) and 334 were utility models.

Key vehicle-level development capabilities have been mastered, forming the ability to develop various types of electric vehicles. China's hybrid cars have made marked progress in system integration, reliability, and fuel economy, achieving fuel savings of 10%-40% depending on the technical solution. Meanwhile, automotive enterprises have significantly increased R&D investment in hybrid vehicles and accelerated the pace of industrialization. Domestic automakers now give hybrid vehicles a high strategic priority as the next major competitive product; FAW, Dongfeng, SAIC, Changan, Chery, BYD, and others have invested heavily in manpower and materials, completed hybrid prototypes, and in some cases brought models to market in low volumes.

FAW Group

Development goal: by 2012, the Group plans to build an annual production capacity of 11,000 hybrid cars and 1,000 hybrid buses.

FAW Group began theoretical research and development on new energy vehicles in 1999 and developed a Hongqi (Red Flag) hybrid prototype with good performance. During the "10th Five-Year Plan" period, FAW undertook the national "863" major project task of research and development of the Red Flag series hybrid electric vehicle, formally beginning its new energy vehicle R&D. Starting in 2006, FAW carried out hybrid technology research based on the Besturn B70, converting the original longitudinal engine assembly into a transverse one and adopting a transverse engine with dual-motor hybrid technology.
At the same time, FAW also paid close attention to the development of the engine, mechatronic integration, the transmission, the vehicle control network, and the vehicle control system. FAW's current hybrid electric car achieves a 42% fuel saving, reaching an internationally advanced level.

Jiefang CA6100HEV Hybrid Electric Bus

FAW's "Jiefang (Liberation) brand CA6100HEV hybrid electric bus" is a project funded under the national "863" electric vehicle major program. It has five basic operating modes: pure electric drive, engine-only drive (with charging), combined drive, motor-assisted engine start, and regenerative braking while coasting. The power and economy of this hybrid electric bus are at a leading level: fuel economy is 38% better than a conventional bus and emissions are reduced by 30%.

Red Flag CA7180AE hybrid car

The Red Flag CA7180AE hybrid car, developed under the national "863 Plan", is the first complete vehicle with industrialization prospects. It is built on the Red Flag sedan, which has good performance and smooth operation. It is a series hybrid luxury sedan with a 0-100 km/h acceleration time of 14 s, roughly 50% better fuel economy than a conventional car, and Euro III emissions.

Besturn B70 hybrid car

The Besturn B70 hybrid uses a petrol-electric hybrid approach with a dual-motor power system. With a hybridization degree of 40/103, it is a full hybrid (also known as a strong hybrid) configuration. The hybrid Besturn B70 currently costs two to three times as much as the petrol version; mass production will gradually reduce costs. Even when the hybrid version reaches the market at a price higher than existing Besturn models, the premium over the petrol version is expected not to exceed 30%.

SAIC

Development goals: launch medium-hybrid cars in 2010; in 2012, SAIC's plug-in strong-hybrid cars and pure electric cars will be on the market.

In new energy vehicle R&D, SAIC has made clear that hybrid and fuel cell vehicles are the direction of focus, and it is speeding up the development of alternative products. Hybrid vehicles, fuel cell vehicles, and alternative fuel vehicles are the three key elements of SAIC's new energy strategy.

In 2010 SAIC's medium-hybrid Roewe 750 will be put on the market, and during the World Expo in Shanghai SAIC will run 150 hybrid cars on the Expo line along the river. The strong-hybrid plug-in Roewe 550 will reach the market in 2012; early development of its power system is already under way.

"New Dynamic Application No. 1" hybrid bus

The "New Dynamic Application No. 1" hybrid bus was jointly developed, with independent intellectual property rights, by the engineering institute of SAIC, Shanghai Jiao Tong University, and other units. Built on the existing Sunwin city bus platform, it uses a parallel hybrid drive scheme, so that the electric air conditioning, steering, braking, and other accessories can still work under hybrid operating conditions without an additional electric system. At the same time, supercapacitors are used to improve starting power and the efficiency of braking energy recovery, thereby enhancing vehicle dynamic performance and reducing fuel consumption. The bus is 10 m long, 2.5 m wide, and 3.2 m high, and can accommodate 76 people.

Roewe 750 hybrid car

The Roewe 750 mild hybrid uses a BSG (belt-driven starter-generator) system, with two prominent features: "smart stop, zero emission" and a combination of environmental protection and power. Its top speed is 205 km/h and its maximum driving range is up to 500 km.
As SAIC's first own-brand hybrid car to be industrialized, the Roewe 750 hybrid achieves integrated fuel savings of around 20%.

Dongfeng Motor Group

Development goal: plans to invest 33 billion over 10 years to develop a range of environmentally friendly hybrid vehicles, including cars.

EQ7200HEV hybrid car

The EQ7200HEV hybrid car is a major project of the "863" program and a major strategic project of Dongfeng Motor Corporation. The car is based on the EQ7200-U model (Fengshen Bluebird sedan), uses an electronically controlled automatic transmission with an innovative parallel electromechanical coupling scheme, and is configured with a brushless DC motor and nickel-metal hydride batteries; industrialization was planned for the "10th Five-Year Plan" period. After industrialization, the increase in vehicle cost over the EQ7200 is no more than 30%.

EQ61100HEV hybrid electric bus

The EQ61100HEV hybrid electric bus was jointly developed by Dongfeng Electric Vehicle Company Limited together with Beijing Jiaotong University, Beijing China Textile Co., Ltd., and Hunan Shenzhou Sharp Electromechanical Technology Co., Ltd. It uses a switched reluctance motor, a Cummins ISBe150 4-cylinder common-rail electronically injected diesel engine, a newly designed chassis system, an electronically controlled automatic transmission, and an innovative parallel electromechanical coupling scheme. At an annual output of 200 units, the increase in vehicle cost over a bus equipped with a 6CT engine is no more than 30%.

China Changan

Development goals: within the next three years, form R&D capability for hybrid platforms of different grades and different purposes carrying different hybrid systems, with mild hybrids produced at scale and strong hybrids industrialized, covering commercial vehicles and A-, B-, and C-class products. By 2014, sales of new energy vehicles will reach 150,000; by 2020, more than 500,000.

During the "Eleventh Five-Year Plan" period, Changan increased investment in clean energy vehicles and carried out exploratory research on diversified energy technologies. By continually introducing energy-saving and environmentally friendly models with new technology to lead the upgrading of the industry, and by fully utilizing and mobilizing global resources, Changan has explored fields such as medium-hybrid and strong-hybrid cars. Changan's first hybrid car, the Changan Jiexun HEV, was successfully launched in June 2009; the first batch of 20 Changan Zhixiang hybrid taxis officially went into operation in Chongqing in January of this year.

Chery

Development goal: after 2010, more than half of Chery's products will carry hybrid systems of different levels.

From 2003 to 2008, Chery mainly developed and industrialized moderately hybrid cars and energy-saving systems; trial operation as taxis has been carried out in Wuhu, with fuel consumption reduced by 10% to 30% and emissions reaching the Euro IV standard. Since 2004, Chery has focused mainly on the development and industrialization of strong hybrids, with a fuel consumption target of 3 liters per 100 km and compliance with European and US emissions regulations.

Chery A5 BSG

The Chery A5 BSG is a mild parallel hybrid electric car in which the fuel engine and the electric motor complement each other: the two power sources work together or separately while driving, and this combination minimizes fuel consumption and exhaust emissions in order to achieve fuel efficiency and environmental protection.
Compared with a conventional car, it saves 10%-15% of fuel under urban conditions and reduces carbon dioxide emissions by about 12%, while the cost increases by only about 25%-30%.

Chery A5 ISG

The Chery A5 ISG hybrid power system consists of a 1.3L gasoline engine, a 5-speed manual transmission, a 10 kW motor, and a 144 V Ni-MH battery. The battery system is a "plug-in" nickel-metal hydride (Ni-MH) pack developed by Johnson Controls; the motor is a permanent magnet synchronous machine with its motor control system, inverter, and DC/DC converter. The system gives the vehicle the power of a 1.6L-displacement car, saves about 30% of fuel, significantly reduces emissions, and meets the Euro V standard.

Chery A3 ISG

The Chery A3 ISG has a 1.3L 473F gasoline engine and is equipped with a 10 kW motor. Torque from the gasoline engine and the electric motor is superimposed to provide hybrid drive, balancing vehicle power, operating efficiency, energy saving, and environmental protection. The Chery A3 ISG also has a stop-restart idling-stop function (BSG function) that shuts off and restarts the engine, reducing fuel consumption and emissions while the vehicle is stopped, for example at red lights.

FY-2 BSG

The FY-2 BSG carries a 1.5L SQR477F inline four-cylinder engine and a BSG start/stop electric motor. When the vehicle stops at a red light and the driver shifts into neutral, the system automatically enters standby mode and shuts off the engine; the engine restarts automatically the moment a gear is engaged. The FY-2 BSG reduces average fuel consumption by about 5-10% compared with a 1.5L petrol car, and average fuel consumption can be reduced by up to 15%.

BYD Auto

Development goal: with dual-mode electric vehicles as a transitional form and pure electric vehicles as the ultimate goal, develop BYD's new energy cars.

BYD follows a development path of "independent research and development, independent production, independent brands" and a development strategy of "core technology and vertical integration", taking dual-mode electric vehicles as the transition and pure electric vehicles as the ultimate goal in developing its new energy vehicles.

国内混合动力汽车发展

随着石油资源的枯竭、人们环保意识的提高,混合动力汽车及电动汽车将成为新世纪前几十年汽车发展的主流,并成为我国汽车界所有业内人士的共识。
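To make the "torque overlay" and idling-stop (BSG) behaviour described above more concrete, here is a deliberately simplified, rule-based torque-split sketch in Python. It is an illustrative model added for this compilation, not code or a control strategy from FAW, SAIC, Chery, or BYD; all component limits and thresholds are invented for the example.

from dataclasses import dataclass

@dataclass
class HybridState:
    speed_kmh: float          # vehicle speed
    driver_torque_nm: float   # torque requested by the driver (+ drive, - brake)
    battery_soc: float        # battery state of charge, 0.0 .. 1.0

# Invented component limits for the sketch
ENGINE_MAX_NM = 120.0
MOTOR_MAX_NM = 40.0
SOC_LOW, SOC_HIGH = 0.3, 0.8

def torque_split(s: HybridState):
    """Very simplified parallel-hybrid logic: idle-stop, torque overlay,
    regenerative braking. Returns (engine_torque, motor_torque, engine_on)."""
    # Idle-stop (BSG-style): vehicle stopped, no drive request -> engine off
    if s.speed_kmh < 1.0 and s.driver_torque_nm <= 0.0:
        return 0.0, 0.0, False
    # Regenerative braking: recover energy with the motor if the battery has room
    if s.driver_torque_nm < 0.0:
        regen = max(s.driver_torque_nm, -MOTOR_MAX_NM) if s.battery_soc < SOC_HIGH else 0.0
        return 0.0, regen, False
    # Torque overlay: engine supplies the base load, motor assists when
    # demand exceeds the engine limit and the battery allows it
    engine = min(s.driver_torque_nm, ENGINE_MAX_NM)
    assist = s.driver_torque_nm - engine
    motor = min(assist, MOTOR_MAX_NM) if s.battery_soc > SOC_LOW else 0.0
    return engine, motor, True

if __name__ == "__main__":
    print(torque_split(HybridState(0.0, 0.0, 0.6)))     # idle-stop: engine off
    print(torque_split(HybridState(60.0, 150.0, 0.6)))  # overlay: engine + motor assist
    print(torque_split(HybridState(40.0, -30.0, 0.5)))  # regenerative braking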

外文文献及翻译:FPGA实现实时自适应图像阈值(其他专业)


FPGA实现实时自适应图像阈值

Elham Ashari,滑铁卢大学电气与计算机工程系;理查德·霍恩西,纽约大学计算机科学与工程系

摘要:本文提出了一种用于实时阈值处理的通用FPGA结构。

硬件架构基于一种加权聚类算法,该算法的核心是通过对前景像素和背景像素进行聚类来确定阈值。

该方法采用带有两个权值的聚类神经网络来寻找两组像素的质心。

图像的阈值是两个质心的平均值。

由于对每个输入像素只更新与其最接近的那个权值,因此这是一种自适应的阈值技术。

权值的更新量取决于输入像素灰度级与相应权值之差,并由学习速率因子加以缩放。

硬件系统是在FPGA平台上实现的,它包含两个功能模块。

第一个模块计算图像帧的阈值,另一个模块将该阈值应用于图像帧。

两个模块的并行性和简单的硬件组成使其适用于实时应用,其性能可与常用的离线阈值技术相媲美。

通过仿真以及在FPGA上对大量算例进行实验,给出了该算法的结果。

这项工作的基本应用是确定激光的质心,但接下来将会讨论它在其他方面的应用。
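下面给出一段按摘要思路写成的Python示意代码:两个权值分别作为背景与前景像素组的质心估计,阈值取二者的平均值;每个输入像素只更新与其最接近的那个权值,更新量由学习速率缩放。该代码仅为本译文补充的软件示意,并非原文作者的FPGA实现;其中的学习速率、初始权值和测试图像均为假设。

import numpy as np

def adaptive_threshold(pixels, lr=0.05, w_init=(64.0, 192.0)):
    """两质心加权聚类的自适应阈值(软件示意版)。
    pixels 为按扫描顺序送入的灰度值序列(0~255);lr 为假设的学习速率;
    w_init 为背景/前景质心权值的假设初始值。返回每个像素处理后的阈值序列。"""
    w = np.array(w_init, dtype=np.float64)   # 两个权值 = 两组像素的质心估计
    thresholds = []
    for p in np.asarray(pixels, dtype=np.float64).ravel():
        k = int(np.argmin(np.abs(w - p)))    # 选出与该像素最接近的权值
        w[k] += lr * (p - w[k])              # 仅更新被选中的权值
        thresholds.append(w.mean())          # 阈值 = 两个质心的平均值
    return np.array(thresholds)

def binarize(frame, threshold):
    """第二个功能模块:将得到的阈值应用于图像帧。"""
    return (np.asarray(frame) >= threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 人工合成一帧:暗背景 + 少量亮前景
    frame = np.where(rng.random((32, 32)) < 0.2,
                     rng.normal(200, 10, (32, 32)),
                     rng.normal(60, 10, (32, 32))).clip(0, 255)
    th = adaptive_threshold(frame)[-1]       # 用整帧像素流得到的最终阈值
    print("threshold =", round(float(th), 1))
    print("foreground pixels:", int(binarize(frame, th).sum()))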

关键词:实时阈值,自适应阈值,FPGA实现,神经网络

1 简介

图像二值化是图像处理的一个主要问题。

如果要从一张图像中提取有用的信息,我们需要将它分割成不同的部分(例如背景和前景)来进行更为详细的分析。

一般来说,前景像素的灰度级与背景像素的灰度级是不同的。

现在已有一些较好的图像二值化算法,这些算法的主要目标是性能(而非速度)上的高效;然而对于某些应用,尤其是定制硬件和实时应用,速度才是最关键的要求。
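作为上文所说“以性能为目标”的全局二值化算法的一个例子,下面给出Otsu(大津)法的简短Python示意:在灰度直方图上穷举候选阈值,选取使类间方差最大者。该示例为本译文补充的说明,并非原文内容。

import numpy as np

def otsu_threshold(img):
    """Otsu 法:在 0~255 的候选阈值中选取类间方差最大者(输入为 0~255 灰度图)。"""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()            # 两类的概率
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0         # 两类的灰度均值
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2           # 类间方差
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

if __name__ == "__main__":
    img = np.concatenate([np.full(500, 50), np.full(500, 200)]).astype(np.uint8)
    print(otsu_threshold(img))   # 对这种双峰分布,阈值应落在 50 与 200 之间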

可实现的快速而简单的阈值技术在实际成像系统中得到广泛应用。

例如,结合了CMOS图像传感器的片上图像处理技术普遍存在于各种各样的成像系统当中。

在这样一个系统当中,图像的实时处理及其得到的相关信息是至关重要的。

实时阈值技术的应用领域包括机器人、汽车、目标追踪以及激光测距。

在激光测距(即确定目标的距离)的过程中,所捕获的图像为二值图像。
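结合上文提到的“确定激光光斑质心”这一应用,下面给出一个简短的Python示意:先用阈值将图像二值化,再取前景像素坐标的均值作为质心。仅为说明性示例,阈值与测试图像均为假设,并非原文算法。

import numpy as np

def laser_centroid(img, threshold):
    """对图像做阈值二值化后,返回前景(激光光斑)像素的质心 (row, col)。"""
    mask = np.asarray(img) >= threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None                      # 没有超过阈值的像素
    return float(ys.mean()), float(xs.mean())

if __name__ == "__main__":
    img = np.zeros((48, 48))
    img[20:24, 30:34] = 250              # 模拟一个亮的激光光斑
    print(laser_centroid(img, threshold=128))   # 约为 (21.5, 31.5)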


附录 图像科学综述

近几年来,图像处理与识别技术得到了迅速的发展,现在人们已充分认识到图像处理和识别技术是认识世界、改造世界的重要手段。

目前它已应用于许多领域,成为21世纪信息时代的一门重要的高新科学技术。

1. 图像处理与识别技术概述

图像就是用各种观测系统以不同形式和手段观测客观世界而获得的,可以直接或间接作用于人眼而产生视知觉的实体。

科学研究和统计表明,人类从外界获得的信息约有75%来自于视觉系统,也就是说,人类的大部分信息都是从图像中获得的。

图像处理是人类视觉延伸的重要手段,可以使人们看到任意波长上所测得的图像。

例如,借助伽马相机、X光机,人们可以看到红外和超声图像;借助CT可看到物体内部的断层图像;借助相应工具可看到立体图像和剖视图像。

1964年,美国在太空探索中拍回了大量月球照片,但是由于种种环境因素的影响,这些照片是非常不清晰的,为此,美国喷射推进实验室(JPL)使用计算机对图像进行处理,使照片中的重要信息得以清晰再现。

这是这门技术发展的重要里程碑。

此后,图像处理技术在空间研究方面得到广泛的应用。

总体来说,图像处理技术的发展大致经历了初创期、发展期、普及期和实用化期4个阶段。

初创期开始于20世纪60年代,当时的图像采用像素型光栅进行扫描显示,大多采用中、大型机对其进行处理。

在这一时期,由于图像存储成本高,处理设备造价高,因而其应用面很窄。

20世纪70年代进入了发展期,开始大量采用中、小型机进行处理,图像处理也逐渐改用光栅扫描显示方式,特别是出现了CT和卫星遥感图像,对图像处理技术的发展起到了很好的促进作用。

到了20世纪80年代,图像处理技术进入普及期,此时微机已经能够担当起图形图像处理的任务。

VLSI的出现更使得处理速度大大提高,其造价也进一步降低,极大地促进了图形图像系统的普及和应用。

20世纪90年代是图像技术的实用化时期,图像处理的信息量巨大,对处理速度的要求极高。

21世纪的图像技术要向高质量化方面发展,主要体现在以下几点:①高分辨率、高速度,图像处理技术发展的最终目标是要实现图像的实时处理,这在移动目标的生成、识别和跟踪上有着重要意义;②立体化,立体化所包括的信息最为完整和丰富,数字全息技术将有利于达到这个目的;③智能化,其目的是实现图像的智能生成、处理、识别和理解。

2. 图像处理与识别技术的应用领域

目前,图像处理与识别技术的主要应用领域有生物医学、文件处理、工业检测、机器人视觉、货物检测、邮政编码、金融、公安、银行、机械、交通、电子商务和多媒体网络通信等领域。

3. 图像处理与识别

数字图像处理和识别学科所涉及的知识非常广泛,具体的方法种类繁多,应用也极为普遍,但从学科研究内容上可以分为以下几个方面:图像数字化、图像变换、图像增强、图像分割、图像分析。

4. 图像识别技术

图像识别是近20年来发展起来的一门新型技术科学,它以研究某些对象或过程(统称图像)的分类与描述为主要内容。

图像识别所研究的领域十分广泛,它可以是医学图像中的癌细胞识别;机械加工中零部件的识别、分类;可以是在遥感图片中辨别农作物、森林、湖泊和军事设施,以及判断农作物的长势,预测收获量等;可以是自导引小车中的路径识别;邮政系统中自动分拣信函;交通管制、识别违章行驶的汽车牌照;银行的支票识别、身份证识别等。

上述都是图像识别研究的课题。

总体来说所研究的问题,主要是分类问题。

5. 图像处理

在研究图像时,首先要对获得的图像信息进行预处理(前处理)以滤去干扰、噪声,作几何、彩色校正等。

这样可提高信噪比;有时由于信息微弱,无法辨识,还得进行增强处理。

增强的作用,在于提供一个满足一定要求的图像,或对图像进行变换,以便人或计算机分析。

并且为了从图像中找到需要识别的东西,还得对图像进行分割,也就是进行定位和分离,以分出不同的物体。

为了给观察者以清晰的图像,还要对图像进行改善,即进行复原处理,它是把已经退化了的图像加以重建或恢复的过程,以便改进图像的保真度。

在实际处理中,由于图像信息量非常大,在存储及传送时,还要对图像信息进行压缩。

上述工作必须用计算机进行,因而要进行编码等工作。

编码的作用,是用最少数量的编码位(亦称比特),表示单色和彩色图像,以便更有效地传输和存储。

以上所述都属图像处理的范畴。

因此,图像处理包括图像编码、图像增强、图像压缩、图像复原、图像分割等。

对图像处理环节来说,输入是图像,输出也是图像。

由图像处理的内容可见,图像处理的目的主要在于解决两个问题:一是判断图像中有无需要的信息;另一是确定这些信息是什么。
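为直观说明本节所述“预处理(滤噪)—增强—分割”的流程,下面给出一个极简的Python示意:用均值滤波去噪、线性拉伸增强对比度、再用固定阈值分割。滤波窗口、拉伸方式和阈值都是假设的选择,仅作示意,并非文中特指的方法。

import numpy as np

def mean_filter(img, k=3):
    """预处理:k x k 均值滤波,滤去部分噪声(边界处直接截断窗口)。"""
    img = np.asarray(img, dtype=np.float64)
    out = np.zeros_like(img)
    r = k // 2
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1].mean()
    return out

def stretch(img):
    """增强:把灰度线性拉伸到 0~255。"""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9) * 255.0

def segment(img, threshold=128):
    """分割:固定阈值,将图像分为前景(1)与背景(0)。"""
    return (img >= threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    raw = rng.normal(100, 20, (32, 32))
    raw[8:16, 8:16] += 120                      # 叠加一个待识别的亮目标
    result = segment(stretch(mean_filter(raw)))
    print("前景像素个数:", int(result.sum()))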

6. 图像理解

所谓图像理解是一个总称。

上述图像处理及图像识别的最终目的,就在于对图像作描述和解释,以便最终理解它是什么图像。

所以它是在图像处理及图像识别的基础上,再根据分类作结构句法分析,去描述图像和解释图像。

因而图像理解包括图像处理、图像识别和结构分析。

对理解部分来说,输入是图像,输出则是图像的描述与解释。

7. 图像识别与图像处理及图像理解的关系

上面说过,图像理解是一个总称。

图像识别是一个系统。

其中每一部分和其前面的一部分都有一定的关系,也可以说有一种反馈作用,例如分割可以在预处理中进行。

并且,该系统不是孤立的,为了发挥其功能,它时时刻刻需要来自外界的必要信息,以便使每个部分能有效地工作。

这些外界信息是指处理问题及解决问题的看法、设想、方法等。

例如,根据实际图像,在处理部分需要采用什么样的预处理,在识别部分需要怎样分割,抽取什么样的特征及怎样抽取特征,怎样进行分类,要分多少类,以及最后提供结构分析所需的结构信息等。

在该系统中,图像预处理、图像分割为图像处理;图像特征提取和图像分类属图像识别;而结构句法分析涉及的内容则是从图像分割到图像结构分析这一过程;整个系统所得到的结果是图像的描述及解释。

当某个新的对象(图像)送进系统时,就可以进行解释,说明它是什么。

8. Matlab技术简介

Matlab是MathWorks公司于1982年推出的一套高性能的数值计算和可视化数学软件。

被誉为“巨人肩上的工具”。

由于使用Matlab编程运算与人进行科学计算的思路和表达方式完全一致,所以不像学习其它高级语言(如Basic和C等)那样难于掌握,用Matlab编写程序犹如在演算纸上排列出公式与求解问题,所以又被称为演算科学算法语言。

Matlab将一般数值分析、矩阵运算、数字信号处理、建模、系统控制和优化等应用程序,与图形功能集成在一个便于使用的集成环境中。

在这个环境下,对所要求解的问题,用户只需简单地列出数学表达式,其结果便以数值或图形方式显示出来。

Matlab的含义是矩阵实验室(MATRIX LABORATORY),主要用于方便矩阵的存取,其基本元素是无须定义维数的矩阵。

Matlab自问世以来,就是以数值计算称雄。

Matlab进行数值计算的基本单位是复数数组(或称阵列),这使得Matlab高度“向量化”。

经过十几年的完善和扩充,它现已发展成为线性代数课程的标准工具。

由于它不需定义数组的维数,并给出矩阵函数、特殊矩阵专门的库函数,使之在求解诸如信号处理、建模、系统识别、控制、优化等领域的问题时,显得大为简捷、高效、方便,这是其它高级语言所不能比拟的。

美国许多大学的实验室都安装有Matlab供学习和研究之用。

在那里,Matlab是攻读学位的大学生、硕士生、博士生必须掌握的基本工具。

Matlab中包括了被称作工具箱(TOOLBOX)的各类应用问题的求解工具。

工具箱实际上是对Matlab进行扩展应用的一系列Matlab函数(称为M文件),它可用来求解各类学科的问题,包括信号处理、图像处理、控制系统辨识、神经网络等。

随着Matlab版本的不断升级,其所含的工具箱的功能也越来越丰富,因此,应用范围也越来越广泛,成为涉及数值分析的各类工程师不可不用的工具。

Image Science Review

In recent years, image processing and recognition technology has developed rapidly, and people have now fully recognized that it is an important means of understanding and transforming the world. It has already been applied in many fields and has become an important high technology of the information age in the 21st century.

1. Overview of image processing and recognition technology

An image is an entity obtained by observing the objective world with various observation systems, in different forms and by different means, that can act directly or indirectly on the human eye and produce visual perception. Scientific research and statistics show that about 75 percent of the information human beings obtain from the outside world comes from the visual system; that is, most human information is obtained from images.

Image processing is an important extension of human vision, enabling people to see images measured at arbitrary wavelengths. For example, with gamma cameras and X-ray machines people can see infrared and ultrasound images; with CT, tomographic images of the interior of objects; and with corresponding tools, stereoscopic and cutaway images. In 1964, the United States brought back a large number of photographs of the moon from space exploration, but because of various environmental factors they were very unclear. The Jet Propulsion Laboratory (JPL) therefore used computers to process the images so that the important information in the photographs could be clearly reproduced. This was an important milestone in the development of the technology; since then, image processing has been widely applied in space research.

Generally speaking, the development of image processing technology has gone through four stages: a start-up period, a development period, a popularization period, and a practical-application period. The start-up period began in the 1960s, when images were scanned and displayed with pixel rasters and were mostly processed on medium and large mainframes. In this period, the high cost of image storage and of processing equipment kept the range of applications very narrow. The 1970s were the development period: medium and small computers came into wide use for processing, image display gradually switched to raster scanning, and in particular the appearance of CT and satellite remote sensing images greatly promoted the development of image processing technology. By the 1980s, image processing technology entered the popularization period, when microcomputers were already able to take on graphics and image processing tasks. The emergence of VLSI further increased processing speed and reduced cost, greatly promoting the popularization and application of graphics and imaging systems. The 1990s were the period of practical application; the amount of image information is huge, and the requirements on processing speed are extremely high.

In the 21st century, image technology must develop toward higher quality, mainly in the following respects: (1) high resolution and high speed — the ultimate goal of image processing technology is real-time processing, which is of great significance for the generation, recognition, and tracking of moving targets; (2) three-dimensionality — three-dimensional images contain the most complete and abundant information, and digital holography will help achieve this goal; (3) intelligence — the aim is intelligent generation, processing, recognition, and understanding of images.
2. Applications of image processing and recognition technology

At present, the main application fields of image processing and recognition technology include biomedicine, document processing, industrial inspection, robot vision, cargo inspection, postal codes, finance, public security, banking, machinery, transportation, e-commerce, and multimedia network communication.

3. Image processing and recognition

The discipline of digital image processing and recognition involves a very wide range of knowledge, with many specific methods and very common applications, but in terms of research content it can be divided into the following areas: image digitization, image transformation, image enhancement, image segmentation, and image analysis.

4. Image recognition technology

Image recognition is a new technical science developed over the past 20 years; its main content is the study of the classification and description of certain objects or processes (collectively called images). The fields studied in image recognition are very broad: recognition of cancer cells in medical images; recognition and classification of parts and components in machining; identification of crops, forests, lakes, and military installations in remote sensing images, as well as judging crop growth and forecasting harvests; path recognition for self-guided vehicles; automatic letter sorting in the postal system; traffic control and recognition of the license plates of vehicles violating regulations; bank check recognition and identity card recognition; and so on. All of these are subjects of image recognition research. Overall, the problems studied are mainly classification problems.

5. Image processing

When studying an image, the acquired image information must first be preprocessed to filter out interference and noise and to perform geometric and color corrections, which improves the signal-to-noise ratio. Sometimes the information is too weak to be recognized, and enhancement processing is needed. The role of enhancement is to provide an image that meets certain requirements, or to transform the image, so that it can be analyzed by people or by computers. In order to find what needs to be recognized in the image, the image must also be segmented, that is, located and separated, so as to distinguish different objects. To give the observer a clear image, the image must also be improved, that is, restored: restoration is the process of reconstructing or recovering a degraded image in order to improve its fidelity. In actual processing, because the amount of image information is very large, the image information must also be compressed for storage and transmission. This work must be done by computer, so coding and related work are required. The role of coding is to represent monochrome and color images with the fewest possible bits so that they can be transmitted and stored more efficiently. All of the above belongs to the scope of image processing, which therefore includes image coding, image enhancement, image compression, image restoration, image segmentation, and so on. For the image processing stage, the input is an image and the output is also an image. As can be seen from its content, the purpose of image processing is mainly to solve two problems: one is to determine whether the image contains the needed information; the other is to determine what that information is.

6. Image understanding

"Image understanding" is a general term. The ultimate purpose of the image processing and image recognition described above is to describe and interpret an image so as to finally understand what it is. Image understanding therefore builds on image processing and image recognition, carrying out structural and syntactic analysis on the basis of the classification in order to describe and interpret the image.
Image understanding thus includes image processing, image recognition, and structural analysis. For the understanding stage, the input is an image and the output is the description and interpretation of the image.

7. The relationship between image recognition, image processing, and image understanding

As stated above, image understanding is a general term, and image recognition is a system. Each part of the system has a certain relationship with the part that precedes it; one can also say there is a feedback effect — for example, segmentation can be carried out during preprocessing. Moreover, the system is not isolated: in order to perform its function, it constantly needs necessary information from the outside world so that each part can work effectively. This external information refers to views, ideas, and methods concerning the problem to be handled and how to solve it — for example, for an actual image, what kind of preprocessing to use in the processing stage; how to segment, what features to extract and how to extract them, how to classify and into how many classes in the recognition stage; and finally what structural information to provide for structural analysis. In this system, image preprocessing and image segmentation belong to image processing; image feature extraction and image classification belong to image recognition; and structural and syntactic analysis covers the process from image segmentation to image structure analysis. The result obtained by the whole system is the description and interpretation of the image. When a new object (image) is fed into the system, it can be interpreted, explaining what it is.

8. Matlab Technical Overview

Matlab is a high-performance numerical computing and visualization mathematics package launched by MathWorks in 1982 and praised as "a tool on the shoulders of giants". Because programming and computation in Matlab follow exactly the same line of thought and form of expression as scientific computation done by hand, it is not as difficult to master as other high-level languages such as Basic and C; writing a Matlab program is like laying out formulas and solving problems on scratch paper, so it is also called a calculation-oriented scientific algorithm language. Matlab integrates applications such as general numerical analysis, matrix computation, digital signal processing, modeling, system control, and optimization, together with graphics, in an easy-to-use integrated environment. In this environment, for the problem to be solved, the user simply writes out the mathematical expressions, and the results are displayed numerically or graphically.

The meaning of Matlab is "matrix laboratory" (MATRIX LABORATORY); it is mainly designed for convenient handling of matrices, and its basic element is a matrix whose dimensions need not be declared. Since its appearance, Matlab has excelled at numerical computation. The basic unit of numerical computation in Matlab is the complex array, which makes Matlab highly "vectorized". After more than ten years of refinement and expansion, it has developed into a standard tool for linear algebra courses. Because it does not require the dimensions of arrays to be defined, and because it provides dedicated library functions for matrix functions and special matrices, it is far simpler, more efficient, and more convenient than other high-level languages for solving problems in fields such as signal processing, modeling, system identification, control, and optimization. Laboratories at many American universities have Matlab installed for study and research; there, Matlab is a basic tool that undergraduate, master's, and doctoral students must master.

Matlab includes tools for solving various kinds of application problems, known as toolboxes. A toolbox is in fact a series of Matlab functions (called M-files) that extend Matlab's applications; it can be used to solve problems in various disciplines, including signal processing, image processing, control system identification, and neural networks.
As Matlab versions continue to be upgraded, the functions contained in its toolboxes become richer and richer; its range of application therefore becomes wider and wider, and it has become an indispensable tool for all kinds of engineers whose work involves numerical analysis.
