Face Recognition: English Original Text (for Chinese Translation)


Copyright 1998 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Neural Network-Based Face Detection

Henry A. Rowley, Shumeet Baluja, and Takeo Kanade

Abstract

We present a neural network-based upright frontal face detection system. A retinally connected neural network examines small windows of an image, and decides whether each window contains a face. The system arbitrates between multiple networks to improve performance over a single network. We present a straightforward procedure for aligning positive face examples for training. To collect negative examples, we use a bootstrap algorithm, which adds false detections into the training set as training progresses. This eliminates the difficult task of manually selecting nonface training examples, which must be chosen to span the entire space of nonface images. Simple heuristics, such as using the fact that faces rarely overlap in images, can further improve the accuracy. Comparisons with several other state-of-the-art face detection systems are presented, showing that our system has comparable performance in terms of detection and false-positive rates.

Keywords: Face detection, Pattern recognition, Computer vision, Artificial neural networks, Machine learning

1 Introduction

In this paper, we present a neural network-based algorithm to detect upright, frontal views of faces in gray-scale images. The algorithm works by applying one or more neural networks directly to portions of the input image, and arbitrating their results. Each network is trained to output the presence or absence of a face. The algorithms and training methods are designed to be general, with little customization for faces.

Many face detection researchers have used the idea that facial images can be characterized directly in terms of pixel intensities. These images can be characterized by probabilistic models of the set of face images [4, 13, 15], or implicitly by neural networks or other mechanisms [3, 12, 14, 19, 21, 23, 25, 26]. The parameters for these models are adjusted either automatically from example images (as in our work) or by hand. A few authors have taken the approach of extracting features and applying either manually or automatically generated rules for evaluating these features [7, 11].

Training a neural network for the face detection task is challenging because of the difficulty in characterizing prototypical “nonface” images. Unlike face recognition, in which the classes to be discriminated are different faces, the two classes to be discriminated in face detection are “images containing faces” and “images not containing faces”. It is easy to get a representative sample of images which contain faces, but much harder to get a representative sample of those which do not. We avoid the problem of using a huge training set for nonfaces by selectively adding images to the training set as training progresses [21]. This “bootstrap” method reduces the size of the training set needed. The use of arbitration between multiple networks and heuristics to clean up the results significantly improves the accuracy of the detector.

Detailed descriptions of the example collection and training methods, network architecture, and arbitration methods are given in Section 2. In Section 3, the performance of the system is examined. We find that the system is able to detect 90.5% of the faces over a test set of 130 complex images, with an acceptable number of false positives. Section 4 briefly discusses some techniques that can be used to make the system run faster, and Section 5 compares this system with similar systems. Conclusions and directions for future research are presented in Section 6.

2 Description of the System

Our system operates in two stages: it first applies a set of neural network-based filters to an image, and then uses an arbitrator to combine the outputs. The filters examine each location in the image at several scales, looking for locations that might contain a face. The arbitrator then merges detections from individual filters and eliminates overlapping detections.

2.1 Stage One: A Neural Network-Based Filter

The first component of our system is a filter that receives as input a 20x20 pixel region of the image, and generates an output ranging from 1 to -1, signifying the presence or absence of a face, respectively. To detect faces anywhere in the input, the filter is applied at every location in the image. To detect faces larger than the window size, the input image is repeatedly reduced in size (by subsampling), and the filter is applied at each size. This filter must have some invariance to position and scale. The amount of invariance determines the number of scales and positions at which it must be applied. For the work presented here, we apply the filter at every pixel position in the image, and scale the image down by a factor of 1.2 for each step in the pyramid.
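The scan-and-subsample procedure described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: `classify` (a function returning a value in [-1, 1] for a 20x20 window) stands in for the trained filter, and the window size and 1.2 scale step are the values stated in the text.

```python
import numpy as np

def pyramid_scan(image, classify, window=20, scale_step=1.2):
    """Slide a window-sized region over every pixel position at each
    level of an image pyramid, collecting positions where `classify`
    (a stand-in for the trained filter) reports a face. Detections are
    mapped back to coordinates and sizes in the original image."""
    detections = []
    scale = 1.0
    while min(image.shape) >= window:
        h, w = image.shape
        for y in range(h - window + 1):
            for x in range(w - window + 1):
                if classify(image[y:y + window, x:x + window]) > 0:
                    detections.append((int(x * scale), int(y * scale),
                                       int(window * scale)))
        # next pyramid level: subsample, shrinking by scale_step
        new_h, new_w = int(h / scale_step), int(w / scale_step)
        rows = (np.arange(new_h) * scale_step).astype(int)
        cols = (np.arange(new_w) * scale_step).astype(int)
        image = image[np.ix_(rows, cols)]
        scale *= scale_step
    return detections
```

Applying the filter at every position, at scales spaced by a factor of 1.2, is what makes the network's limited built-in invariance to position and scale sufficient.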

The filtering algorithm is shown in Fig. 1. First, a preprocessing step, adapted from [21], is applied to a window of the image. The window is then passed through a neural network, which decides whether the window contains a face. The preprocessing first attempts to equalize the intensity values across the window. We fit a function which varies linearly across the window to the intensity values in an oval region inside the window. Pixels outside the oval (shown in Fig. 2a) may represent the background, so those intensity values are ignored in computing the lighting variation across the face. The linear function approximates the overall brightness of each part of the window, and can be subtracted from the window to compensate for a variety of lighting conditions. Then histogram equalization is performed, which non-linearly maps the intensity values to expand the range of intensities in the window. The histogram is computed for pixels inside an oval region in the window. This compensates for differences in camera input gains, as well as improving contrast in some cases. The preprocessing steps are shown in Fig. 2.
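A minimal sketch of the two preprocessing steps follows. The elliptical mask here is our own approximation of the paper's oval (the exact mask is not specified in the text), and the rank-based histogram equalization is one standard way to expand the intensity range.

```python
import numpy as np

def preprocess(window):
    """Lighting correction followed by histogram equalization for one
    window, as described above. The oval mask is an assumed ellipse
    inscribed in the window, excluding the background corners."""
    h, w = window.shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (((xx - (w - 1) / 2) / (w / 2)) ** 2 +
            ((yy - (h - 1) / 2) / (h / 2)) ** 2) <= 1.0
    # fit I(x, y) ~ a*x + b*y + c over the oval by least squares,
    # then subtract the fitted plane to remove the lighting gradient
    A = np.column_stack([xx[mask], yy[mask], np.ones(mask.sum())])
    coef, *_ = np.linalg.lstsq(A, window[mask].astype(float), rcond=None)
    corrected = window - (coef[0] * xx + coef[1] * yy + coef[2])
    # histogram equalization computed from the oval pixels only:
    # map each pixel to its rank among the oval intensities
    vals = np.sort(corrected[mask].ravel())
    ranks = np.searchsorted(vals, corrected)
    return ranks / float(len(vals))  # intensities expanded to [0, 1]
```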

The preprocessed window is then passed through a neural network. The network has retinal connections to its input layer; the receptive fields of hidden units are shown in Fig. 1. There are three types of hidden units: 4 which look at 10x10 pixel subregions, 16 which look at 5x5 pixel subregions, and 6 which look at overlapping 20x5 pixel horizontal stripes of pixels. Each of these types was chosen to allow the hidden units to detect local features that might be important for face detection. In particular, the horizontal stripes allow the hidden units to detect such features as mouths or pairs of eyes, while the hidden units with square receptive fields might detect features such as individual eyes, the nose, or corners of the mouth. Although the figure shows a single hidden unit for each subregion of the input, these units can be replicated. For the experiments which are described later, we use networks with two and three sets of these hidden units. Similar input connection patterns are commonly used in speech and character recognition tasks [10, 24]. The network has a single, real-valued output, which indicates whether or not the window contains a face.

Examples of output from a single network are shown in Fig. 3. In the figure, each box represents the position and size of a window to which the neural network gave a positive response. The network has some invariance to position and scale, which results in multiple boxes around some faces. Note also that there are some false detections; they will be eliminated by methods presented in Section 2.2.

To train the neural network used in stage one to serve as an accurate filter, a large number of face and nonface images are needed. Nearly 1050 face examples were gathered from face databases at CMU, Harvard, and from the World Wide Web. The images contained faces of various sizes, orientations, positions, and intensities. The eyes, tip of nose, and corners and center of the mouth of each face were labelled manually. These points were used to normalize each face to the same scale, orientation, and position, as follows:

1. Initialize F̄, a vector which will be the average positions of each labelled feature over all the faces, with the feature locations in the first face F_1.

2. The feature coordinates in F̄ are rotated, translated, and scaled, so that the average locations of the eyes will appear at predetermined locations in a 20x20 pixel window.

3. For each face i, compute the best rotation, translation, and scaling to align the face’s features F_i with the average feature locations F̄. Such transformations can be written as a linear function of their parameters. Thus, we can write a system of linear equations mapping the features from F_i to F̄. The least squares solution to this over-constrained system yields the parameters for the best alignment transformation. Call the aligned feature locations F′_i.

4. Update F̄ by averaging the aligned feature locations F′_i for each face i.

5. Go to step 2.
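The least-squares step can be written out concretely. A rotation, scaling, and translation maps a point (x, y) to (ax − by + t_x, bx + ay + t_y), which is linear in the parameters (a, b, t_x, t_y); the sketch below solves the resulting over-constrained system with ordinary least squares. This is an illustrative reconstruction of step 3, not the authors' implementation.

```python
import numpy as np

def align_features(F, Fbar):
    """Solve for the rotation+scale (a, b) and translation (tx, ty)
    that best map feature points F (n x 2) onto the average positions
    Fbar, in the least-squares sense. Each transformed point is
        x' = a*x - b*y + tx,   y' = b*x + a*y + ty,
    so stacking one pair of equations per feature gives a linear
    system in (a, b, tx, ty)."""
    n = len(F)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([F[:, 0], -F[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([F[:, 1],  F[:, 0], np.zeros(n), np.ones(n)])
    rhs = Fbar.reshape(-1)  # interleaved x'_0, y'_0, x'_1, y'_1, ...
    params, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    a, b, tx, ty = params
    aligned = np.column_stack([a * F[:, 0] - b * F[:, 1] + tx,
                               b * F[:, 0] + a * F[:, 1] + ty])
    return aligned, params
```

With more than two feature points the system is over-constrained, which is why the least-squares solution (rather than an exact solve) is needed.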

The alignment algorithm converges within five iterations, yielding for each face a function which maps that face to a 20x20 pixel window. Fifteen face examples are generated for the training set from each original image, by randomly rotating the images (about their center points) up to 10°, scaling between 90% and 110%, translating up to half a pixel, and mirroring. Each 20x20 window in the set is then preprocessed (by applying lighting correction and histogram equalization). A few example images are shown in Fig. 4. The randomization gives the filter invariance to translations of less than a pixel and scalings of 20%. Larger changes in translation and scale are dealt with by applying the filter at every pixel position in an image pyramid, in which the images are scaled by factors of 1.2.

Practically any image can serve as a nonface example because the space of nonface images is much larger than the space of face images. However, collecting a “representative” set of nonfaces is difficult. Instead of collecting the images before training is started, the images are collected during training, in the following manner, adapted from [21]:

1. Create an initial set of nonface images by generating 1000 random images. Apply the preprocessing steps to each of these images.

2. Train a neural network to produce an output of 1 for the face examples, and -1 for the nonface examples. The training algorithm is standard error backpropagation with momentum [8]. On the first iteration of this loop, the network’s weights are initialized randomly. After the first iteration, we use the weights computed by training in the previous iteration as the starting point.

3. Run the system on an image of scenery which contains no faces. Collect subimages in which the network incorrectly identifies a face (an output activation > 0).

4. Select up to 250 of these subimages at random, apply the preprocessing steps, and add them into the training set as negative examples. Go to step 2.
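The bootstrap loop above can be sketched as follows. `train_step` and `run_on_scenery` are hypothetical hooks standing in for network training and for scanning the scenery images; they are not part of the paper's code.

```python
import numpy as np

def bootstrap_negatives(train_step, run_on_scenery, n_rounds=3, cap=250):
    """Sketch of the bootstrap loop above. `train_step(nonfaces)`
    stands in for retraining the network on the current example set;
    `run_on_scenery()` returns the subimages the current network
    wrongly labels as faces."""
    rng = np.random.default_rng(0)
    # step 1: 1000 random (already preprocessed) nonface windows
    nonfaces = [rng.random((20, 20)) for _ in range(1000)]
    for _ in range(n_rounds):
        train_step(nonfaces)                # step 2: (re)train network
        false_positives = run_on_scenery()  # step 3: collect mistakes
        # step 4: add up to `cap` of them, chosen at random
        order = rng.permutation(len(false_positives))[:cap]
        nonfaces.extend(false_positives[i] for i in order)
    return nonfaces
```

The key property is that each round's negatives are selected by the current network itself, so the training set concentrates on exactly the nonface patterns the network still confuses with faces.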

Some examples of nonfaces that are collected during training are shown in Fig. 5. Note that some of the examples resemble faces, although they are not very close to the positive examples shown in Fig. 4. The presence of these examples forces the neural network to learn the precise boundary between face and nonface images. We used 120 images of scenery for collecting negative examples in the bootstrap manner described above. A typical training run selects approximately 8000 nonface images from the 146,212,178 subimages that are available at all locations and scales in the training scenery images. A similar training algorithm was described in [5], where at each iteration an entirely new network was trained with the examples on which the previous networks had made mistakes.

2.2 Stage Two: Merging Overlapping Detections and Arbitration

The examples in Fig. 3 showed that the raw output from a single network will contain a number of false detections. In this section, we present two strategies to improve the reliability of the detector: merging overlapping detections from a single network and arbitrating among multiple networks.

2.2.1 Merging Overlapping Detections

Note that in Fig. 3, most faces are detected at multiple nearby positions or scales, while false detections often occur with less consistency. This observation leads to a heuristic which can eliminate many false detections. For each location and scale, the number of detections within a specified neighborhood of that location can be counted. If the number is above a threshold, then that location is classified as a face. The centroid of the nearby detections defines the location of the detection result, thereby collapsing multiple detections. In the experiments section, this heuristic will be referred to as “thresholding”.

If a particular location is correctly identified as a face, then all other detection locations which overlap it are likely to be errors, and can therefore be eliminated. Based on the above heuristic regarding nearby detections, we preserve the location with the higher number of detections within a small neighborhood, and eliminate locations with fewer detections. In the discussion of the experiments, this heuristic is called “overlap elimination”. There are relatively few cases in which this heuristic fails; however, one such case is illustrated by the left two faces in Fig. 3B, where one face partially occludes another.

The implementation of these two heuristics is illustrated in Fig. 6. Each detection at a particular location and scale is marked in an image pyramid, labelled the “output” pyramid. Then, each location in the pyramid is replaced by the number of detections in a specified neighborhood of that location. This has the effect of “spreading out” the detections. Normally, the neighborhood extends an equal number of pixels in the dimensions of scale and position, but for clarity in Fig. 6 detections are only spread out in position. A threshold is applied to these values, and the centroids (in both position and scale) of all above-threshold regions are computed. All detections contributing to a centroid are collapsed down to a single point. Each centroid is then examined in order, starting from the ones which had the highest number of detections within the specified neighborhood. If any other centroid locations represent a face overlapping with the current centroid, they are removed from the output pyramid. All remaining centroid locations constitute the final detection result. In the face detection work described in [3], similar observations about the nature of the outputs were made, resulting in the development of heuristics similar to those described above.
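As a simplified illustration of thresholding and overlap elimination at a single scale (the real system also spreads detections across the scale dimension), the two heuristics might be sketched as follows; the point-list representation is our own simplification of the output pyramid.

```python
import numpy as np

def merge_detections(points, threshold, radius):
    """Sketch of "thresholding" and "overlap elimination" at one
    scale. For each raw detection, count detections within `radius`
    (Chebyshev distance); keep clusters whose count clears
    `threshold`, collapse each to its centroid, then drop centroids
    that overlap a stronger (higher-count) one."""
    pts = np.asarray(points, dtype=float)
    centroids = []
    for p in pts:
        near = pts[np.max(np.abs(pts - p), axis=1) <= radius]
        if len(near) >= threshold:
            centroids.append((len(near), tuple(near.mean(axis=0))))
    centroids.sort(reverse=True)  # strongest clusters first
    final = []
    for _, c in centroids:
        if all(max(abs(c[0] - f[0]), abs(c[1] - f[1])) > radius
               for f in final):
            final.append(c)
    return final
```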

2.2.2 Arbitration among Multiple Networks

To further reduce the number of false positives, we can apply multiple networks, and arbitrate between their outputs to produce the final decision. Each network is trained in a similar manner, but with random initial weights, random initial nonface images, and permutations of the order of presentation of the scenery images. As will be seen in the next section, the detection and false positive rates of the individual networks will be quite close. However, because of different training conditions and because of self-selection of negative training examples, the networks will have different biases and will make different errors.

The implementation of arbitration is illustrated in Fig. 7. Each detection at a particular position and scale is recorded in an image pyramid, as was done with the previous heuristics. One way to combine two such pyramids is by ANDing them. This strategy signals a detection only if both networks detect a face at precisely the same scale and position. Due to the different biases of the individual networks, they will rarely agree on a false detection of a face. This allows ANDing to eliminate most false detections. Unfortunately, this heuristic can decrease the detection rate because a face detected by only one network will be thrown out. However, we will see later that individual networks can all detect roughly the same set of faces, so that the number of faces lost due to ANDing is small.

Similar heuristics, such as ORing the outputs of two networks, or voting among three networks, were also tried. Each of these arbitration methods can be applied before or after the “thresholding” and “overlap elimination” heuristics. If applied afterwards, we combine the centroid locations rather than actual detection locations, and require them to be within some neighborhood of one another rather than precisely aligned.
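Once each network's detections are recorded as binary maps, the AND, OR, and voting combinations are straightforward. A sketch, under the assumption that the maps are already aligned in position and scale:

```python
import numpy as np

def arbitrate(pyramid_maps, mode):
    """Combine binary detection maps (one per network, all at the
    same scale) by ANDing, ORing, or majority vote, as described in
    the text."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in pyramid_maps])
    if mode == "and":
        return stack.all(axis=0)   # every network must agree
    if mode == "or":
        return stack.any(axis=0)   # any single network suffices
    if mode == "vote":
        return stack.sum(axis=0) * 2 > len(pyramid_maps)  # majority
    raise ValueError(mode)
```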

Arbitration strategies such as ANDing, ORing, or voting seem intuitively reasonable, but perhaps there are some less obvious heuristics that could perform better. To test this hypothesis, we applied a separate neural network to arbitrate among multiple detection networks. For a location of interest, the arbitration network examines a small neighborhood surrounding that location in the output pyramid of each individual network. For each pyramid, we count the number of detections in a 3x3 pixel region at each of three scales around the location of interest, resulting in three numbers for each detector, which are fed to the arbitration network, as shown in Fig. 8. The arbitration network is trained to produce a positive output for a given set of inputs only if that location contains a face, and to produce a negative output for locations without a face. As will be seen in the next section, using an arbitration network in this fashion produced results comparable to (and in some cases, slightly better than) those produced by the heuristics presented earlier.
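The input representation for the arbitration network can be sketched as follows. The coordinate handling is simplified: this sketch assumes (x, y) indexes every pyramid level directly, whereas a real pyramid would rescale coordinates between levels.

```python
import numpy as np

def arbitration_inputs(network_pyramids, level, x, y):
    """Build the arbitration network's input vector: for each
    detection network, the number of detections in a 3x3 region at
    the scale of interest and its two neighbors, i.e. three numbers
    per network (Fig. 8). Each pyramid is a list of 2D binary maps."""
    features = []
    for pyramid in network_pyramids:
        for lvl in (level - 1, level, level + 1):
            if 0 <= lvl < len(pyramid):
                region = pyramid[lvl][max(y - 1, 0):y + 2,
                                      max(x - 1, 0):x + 2]
                features.append(int(region.sum()))
            else:
                features.append(0)  # scale missing at pyramid boundary
    return np.array(features)
```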

3 Experimental Results

A number of experiments were performed to evaluate the system. We first show an analysis of which features the neural network is using to detect faces, then present the error rates of the system over two large test sets.

3.1 Sensitivity Analysis

In order to determine which part of its input image the network uses to decide whether the input is a face, we performed a sensitivity analysis using the method of [2]. We collected a positive test set based on the training database of face images, but with different randomized scales, translations, and rotations than were used for training. The negative test set was built from a set of negative examples collected during the training of other networks. Each of the 20x20 pixel input images was divided into 100 2x2 pixel subimages. For each subimage in turn, we went through the test set, replacing that subimage with random noise, and tested the neural network. The resulting root mean square error of the network on the test set is an indication of how important that portion of the image is for the detection task. Plots of the error rates for two networks we trained are shown in Fig. 9. Network 1 uses two sets of the hidden units illustrated in Fig. 1, while Network 2 uses three sets.
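The noise-substitution procedure can be sketched directly. `net` is a stand-in for a trained network mapping a 20x20 array to a value in [-1, 1]; the 2x2 block size and RMS error measure are those stated above.

```python
import numpy as np

def sensitivity_map(net, images, targets, seed=0):
    """For each of the 100 2x2 subimages of the 20x20 inputs, replace
    that subimage with random noise across the whole test set and
    record the network's RMS error; a high error marks a region the
    network relies on for its decision."""
    rng = np.random.default_rng(seed)
    errors = np.zeros((10, 10))
    for by in range(10):
        for bx in range(10):
            sq_err = 0.0
            for img, target in zip(images, targets):
                noisy = img.copy()
                noisy[2 * by:2 * by + 2, 2 * bx:2 * bx + 2] = rng.random((2, 2))
                sq_err += (net(noisy) - target) ** 2
            errors[by, bx] = np.sqrt(sq_err / len(images))
    return errors
```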

The networks rely most heavily on the eyes, then on the nose, and then on the mouth (Fig. 9). Anecdotally, we have seen this behavior on several real test images. In cases in which only one eye is visible, detection of a face is possible, though less reliable, than when the entire face is visible. The system is less sensitive to the occlusion of the nose or mouth.

3.2 Testing

The system was tested on two large sets of images, which are distinct from the training sets. Test Set 1 consists of a total of 130 images collected at CMU, including images from the World Wide Web, scanned from photographs and newspaper pictures, and digitized from broadcast television. It also includes 23 images used in [21] to measure the accuracy of their system. The images contain a total of 507 frontal faces, and require the networks to examine 83,099,211 20x20 pixel windows. The images have a wide variety of complex backgrounds, and are useful in measuring the false alarm rate of the system. Test Set 2 is a subset of the FERET database [16, 17]. Each image contains one face, and has (in most cases) a uniform background and good lighting. There are a wide variety of faces in the database, which are taken at a variety of angles. Thus these images are more useful for checking the angular sensitivity of the detector, and less useful for measuring the false alarm rate.

The outputs from our face detection networks are not binary. The neural networks produce real values between 1 and -1, indicating whether or not the input contains a face. A threshold value of zero is used during training to select the negative examples (if the network outputs a value greater than zero for any input from a scenery image, it is considered a mistake). Although this value is intuitively reasonable, by changing this value during testing, we can vary how conservative the system is. To examine the effect of this threshold value during testing, we measured the detection and false positive rates as the threshold was varied from 1 to -1. At a threshold of 1, the false detection rate is zero, but no faces are detected. As the threshold is decreased, the number of correct detections will increase, but so will the number of false detections. This tradeoff is presented in Fig. 10, which shows the detection rate plotted against the number of false positives as the threshold is varied, for the two networks presented in the previous section. Since the zero threshold locations are close to the “knees” of the curves, as can be seen from the figure, we used a zero threshold value throughout testing.

Table 1 shows the performance of different versions of the detector on Test Set 1. The four columns show the number of faces missed (out of 507), the detection rate, the total number of false detections, and the false detection rate. The last rate is in terms of the number of 20x20 pixel windows that must be examined, which is approximately 3.3 times the number of pixels in an image (taking into account all the levels in the input pyramid). First we tested four networks working alone, then examined the effect of overlap elimination and collapsing multiple detections, and tested arbitration using ANDing, ORing, voting, and neural networks. Networks 3 and 4 are identical to Networks 1 and 2, respectively, except that the negative example images were presented in a different order during training. The results for ANDing and ORing networks were based on Networks 1 and 2, while voting and network arbitration were based on Networks 1, 2, and 3. The neural network arbitrators were trained using the images from which the face examples were extracted. Three different architectures for the network arbitrator were used. The first used 5 hidden units, as shown in Fig. 8. The second used two hidden layers of 5 units each, with complete connections between each layer, and additional connections between the first hidden layer and the output. The last architecture was a simple perceptron, with no hidden units.

As discussed earlier, the “thresholding” heuristic for merging detections requires two parameters, which specify the size of the neighborhood used in searching for nearby detections, and the threshold on the number of detections that must be found in that neighborhood. In the table, these two parameters are shown in parentheses after the word “threshold”. Similarly, the ANDing, ORing, and voting arbitration methods have a parameter specifying how close two detections (or detection centroids) must be in order to be counted as identical.

Systems 1 through 4 show the raw performance of the networks. Systems 5 through 8 use the same networks, but include the thresholding and overlap elimination steps, which decrease the number of false detections significantly at the expense of a small decrease in the detection rate. The remaining systems all use arbitration among multiple networks. Using arbitration further reduces the false positive rate, and in some cases increases the detection rate slightly. Note that for systems using arbitration, the ratio of false detections to windows examined is extremely low, ranging from 1 false detection per 449,184 windows down to 1 in 41,549,605, depending on the type of arbitration used. Systems 10, 11, and 12 show that the detector can be tuned to make it more or less conservative. System 10, which uses ANDing, gives an extremely small number of false positives, and has a detection rate of about 77.9%. On the other hand, System 12, which is based on ORing, has a higher detection rate of 90.3% but also has a larger number of false detections. System 11 provides a compromise between the two. The differences in performance of these systems can be understood by considering the arbitration strategy. When using ANDing, a false detection made by only one network is suppressed, leading to a lower false positive rate. On the other hand, when ORing is used, faces detected correctly by only one network will be preserved, improving the detection rate.

人脸识别英文专业词汇教学提纲

人脸识别英文专业词 汇

gallery set参考图像集 Probe set=test set测试图像集 face rendering Facial Landmark Detection人脸特征点检测 3D Morphable Model 3D形变模型 AAM (Active Appearance Model)主动外观模型Aging modeling老化建模 Aging simulation老化模拟 Analysis by synthesis 综合分析 Aperture stop孔径光标栏 Appearance Feature表观特征 Baseline基准系统 Benchmarking 确定基准 Bidirectional relighting 双向重光照 Camera calibration摄像机标定(校正)Cascade of classifiers 级联分类器 face detection 人脸检测 Facial expression面部表情 Depth of field 景深 Edgelet 小边特征 Eigen light-fields本征光场 Eigenface特征脸 Exposure time曝光时间

Expression editing表情编辑 Expression mapping表情映射 Partial Expression Ratio Image局部表情比率图(,PERI) extrapersonal variations类间变化 Eye localization,眼睛定位 face image acquisition 人脸图像获取 Face aging人脸老化 Face alignment人脸对齐 Face categorization人脸分类 Frontal faces 正面人脸 Face Identification人脸识别 Face recognition vendor test人脸识别供应商测试Face tracking人脸跟踪 Facial action coding system面部动作编码系统 Facial aging面部老化 Facial animation parameters脸部动画参数 Facial expression analysis人脸表情分析 Facial landmark面部特征点 Facial Definition Parameters人脸定义参数 Field of view视场 Focal length焦距 Geometric warping几何扭曲

中文地址翻译英文详细讲解以及例句

中文地址翻译英文详细讲解以及例句 No.3, Building 2, Jilong Lignum Market, Dalingshan Town, DongGuan City, Guangdong Prov, China 或者 No.3, Building 2,Jilong Mucai Shichang, Dalingshan Town, DongGuan City, Guangdong Prov, China 翻译原则:先小后大。 中国人喜欢先说小的后说大的,如**区**路**号 而外国人喜欢先说大的后说小的,如**号**路**区,因此您在翻译时就应该先写小的后写大的。 例如:中国广东深圳市华中路1023号5栋401房,您就要从房开始写起,Room 401, Buliding 5, No.1023,HuaZhong Road, ShenZhen, GuangDong Prov., China(逗号后面有空格)。注意其中路名、公司名、村名等均不用翻译成同意的英文,只要照写拼音就行了。因为您的支票是中国的邮递员送过来,关键是要他们明白。技术大厦您写成Technology Building,他们可能更迷糊呢。 现在每个城市的中国邮政信件分拣中心都有专人负责将外国来信地址翻译成中文地址,并写在信封上交下面邮递员送过来. 重要: 你的邮政编码一定要写正确,因为外国信件中间的几道邮政环节都是靠邮政编码区域投递的。 常见中英文对照: ***室/ 房Room *** ***村*** Village ***号No. *** ***号宿舍*** Dormitory ***楼/ 层*** F ***住宅区/ 小区*** Residential Quater 甲/ 乙/ 丙/ 丁 A / B / C / D ***巷/ 弄Lane *** ***单元Unit *** ***号楼/ 栋*** Building ***公司***Com./*** Crop/***CO.LTD ***厂*** Factory ***酒楼/酒店*** Hotel ***路*** Road ***花园*** Garden ***街*** Street ***信箱Mailbox *** ***区*** District ***县*** County ***镇*** Town ***市*** City ***省*** Prov. ***院***Yard ***大学***College **表示序数词,比如1st、2nd、3rd、4th……如果不会,就用No.***代替,或者直接填数字吧! 另外有一些***里之类难翻译的东西,就直接写拼音*** Li。而***东(南、西、

常用英文缩写大全(全)

企业各职位英文缩写: GM(General Manager)总经理 VP(Vice President)副总裁 FVP(First Vice President)第一副总裁 AVP(Assistant Vice President)副总裁助理 CEO(Chief Executive Officer)首席执行官,类似总经理、总裁,是企业的法人代表。 COO(Chief Operations Officer)首席运营官,类似常务总经理 CFO(Chief Financial Officer)首席财务官,类似财务总经理 CIO(Chief Information Officer)首席信息官,主管企业信息的收集和发布CTO(Chief technology officer)首席技术官类似总工程师 HRD(Human Resource Director)人力资源总监 OD(Operations Director)运营总监 MD(Marketing Director)市场总监 OM(Operations Manager)运作经理 PM(Production Manager)生产经理 (Product Manager)产品经理 其他: CAO: Art 艺术总监 CBO: Business 商务总监 CCO: Content 内容总监 CDO: Development 开发总监 CGO: Gonverment 政府关系 CHO: Human resource 人事总监 CJO: Jet 把营运指标都加一个或多个零使公司市值像火箭般上升的人 CKO: Knowledge 知识总监 CLO: Labour 工会主席 CMO: Marketing 市场总监 CNO: Negotiation 首席谈判代表CPO: Public relation 公关总监 CQO: Quality control 质控总监 CRO: Research 研究总监 CSO: Sales 销售总监 CUO: User 客户总监 CVO: Valuation 评估总监 CWO: Women 妇联主席 CXO: 什么都可以管的不管部部长 CYO: Yes 什么都点头的老好人 CZO: 现在排最后,等待接班的太子 常用聊天英语缩写

人脸识别论文文献翻译中英文

人脸识别论文中英文 附录(原文及译文) 翻译原文来自 Thomas David Heseltine BSc. Hons. The University of York Department of Computer Science For the Qualification of PhD. -- September 2005 - 《Face Recognition: Two-Dimensional and Three-Dimensional Techniques》 4 Two-dimensional Face Recognition 4.1 Feature Localization Before discussing the methods of comparing two facial images we now take a brief look at some at the preliminary processes of facial feature alignment. This process typically consists of two stages: face detection and eye localisation. Depending on the application, if the position of the face within the image is known beforehand (for a cooperative subject in a door access system for example) then the face detection stage can often be skipped, as the region of interest is already known. Therefore, we discuss eye localisation here, with a brief discussion of face detection in the literature review(section 3.1.1). The eye localisation method is used to align the 2D face images of the various test sets used throughout this section. However, to ensure that all results presented are representative of the face recognition accuracy and not a product of the performance of the eye localisation routine, all image alignments are manually checked and any errors corrected, prior to testing and evaluation. We detect the position of the eyes within an image using a simple template based method. A training set of manually pre-aligned images of faces is taken, and each image cropped to an area around both eyes. The average image is calculated and used as a template. Figure 4-1 - The average eyes. Used as a template for eye detection. Both eyes are included in a single template, rather than individually searching for each eye in turn, as the characteristic symmetry of the eyes either side of the nose, provides a useful feature that helps distinguish between the eyes and other false positives that may be picked up in the background. Although this method is highly susceptible to scale(i.e. 
subject distance from the

最新中文地址如何翻译成英文(精)

5栋 Building No.5 ----------- 请看相关资料 翻译原则:先小后大。 中国人喜欢先说小的后说大的,如 **区 **路 **号 而外国人喜欢先说大的后说小的,如 **号 **路 **区,因此您在翻译时就应该先写小的后写大的 . 中文地址的排列顺序是由大到小, 如:X 国 X 省 X 市 X 区 X 路 X 号, 而英文地址则刚好相反, 是由小到大。如上例写成英文就是:X 号, X 路, X 区, X 市, X 省, X 国。掌握了这个原则,翻译起来就容易多了! X 室 Room X X 号 No. X X 单元 Unit X X 号楼 Building No. X X 街 X Street X 路 X Road X 区 X District X 县 X County X 镇 X Town

X 市 X City X 省 X Province 请注意:翻译人名、路名、街道名等,最好用拼音。 中文地址翻译范例: 宝山区示范新村 37号 403室 Room 403, No. 37, SiFang Residential Quarter, BaoShan District 虹口区西康南路 125弄 34号 201室 Room 201, No. 34, Lane 125, XiKang Road(South, HongKou District 473004河南省南阳市中州路 42号李有财 Li Youcai Room 42 Zhongzhou Road, Nanyang City Henan Prov. China 473004 434000湖北省荆州市红苑大酒店李有财 Li Youcai Hongyuan Hotel Jingzhou city Hubei Prov. China 434000 473000河南南阳市八一路 272号特钢公司李有财

英文翻译中文

Accounting ethics Barron's Kathleen Elliott Abstract Accounting ethics is primarily a field of applied ethics, the study of moral values and judgments as they apply to accountancy. It is an example of professional ethics. Accounting ethics were first introduced by Luca Pacioli, and later expanded by government groups, professional organizations, and independent companies. Ethics are taught in accounting courses at higher education institutions as well as by companies training accountants and auditors. Key words:Accounting Ethics Education Contents 1 Importance of ethics 2 History 3 Teaching ethics 4 Accounting scandals 1.Importance of ethics The nature of the work carried out by accountants and auditors requires a high level of ethics. Shareholders, potential shareholders, and other users of the financial statements rely heavily on the yearly financial statements of a company as they can use this information to make an informed decision about investment. They rely on the opinion of the accountants who prepared the statements, as well as the auditors that verified it, to present a true and fair view of the company. Knowledge of ethics can help accountants and auditors to overcome ethical dilemmas, allowing for the right choice that, although it may not benefit the company, will benefit the public who relies on the accountant/auditor's reporting. Most countries have differing focuses on enforcing accounting laws. In Germany, accounting legislation is governed by "tax law"; in Sweden, by "accounting law"; and in the United Kingdom, by the "company law". In addition, countries have their own organizations which regulate accounting. For example, Sweden has the Bokf?ringsn?mden (BFN - Accounting Standards Board), Spain the Instituto de Comtabilidad y Auditoria de Cuentas (ICAC), and the United States the Financial Accounting Standards Board (FASB). 
2. History
Luca Pacioli, the "Father of Accounting", wrote on accounting ethics in his first book, Summa de arithmetica, geometria, proportioni, et proportionalita, published in 1494. Ethical standards have since then been developed through government groups, professional organizations, and independent companies. These various groups have led accountants to follow several codes of ethics to perform their duties in a professional work environment. Accountants must follow the code of ethics set out by the professional body of which they are a member. United States accounting societies such as the Association of Government Accountants, the Institute of Internal Auditors, and the National Association of Accountants all have codes of ethics, and

Common Abbreviations on the Internet (common English abbreviations used online, with explanations)

DARPA: Defense Advanced Research Projects Agency
ARPANET (Internet): the ARPA network
ICCC: International Conference on Computer Communication
CCITT: International Telegraph and Telephone Consultative Committee
SNA: Systems Network Architecture (IBM)
DNA: Digital Network Architecture (DEC)
CSMA/CD: Carrier Sense Multiple Access with Collision Detection (Xerox)
NGI: Next Generation Internet
Internet2: the second-generation Internet
TCP/IP, SNA, SPX/IPX, AppleTalk: network protocols
NII: National Information Infrastructure (the "information superhighway")
GII: Global Information Infrastructure
MIPS: millions of instructions per second (a measure of a PC's processing power)
Petabit: 10^15 bits per second
Cu chip: copper (chip interconnect)
OC48: optical-fiber communication (an optical carrier line rate)
SDH: Synchronous Digital Hierarchy
WDM: Wavelength Division Multiplexing
ADSL: Asymmetric Digital Subscriber Line
HFC: hybrid fiber-coax architecture, with cable modems and set-top boxes
PCS: portable intelligent terminal (personal communications services)
CODEC: coder-decoder
ASK (amplitude shift keying)
FSK (frequency shift keying)
PSK (phase shift keying)
NRZ (non-return to zero)
PCM (pulse code modulation); nonlinear encoding
FDM: frequency division multiplexing
TDM: time division multiplexing
STDM: statistical time division multiplexing
DS0: 64 kb/s
DS1: 24 DS0
DS1C: 48 DS0
DS2: 96 DS0
DS3: 672 DS0
DS4: 4032 DS0
CSU (channel service unit)
SONET/SDH: synchronous optical network interface
LRC: longitudinal redundancy check
CRC: cyclic redundancy check
ARQ: automatic repeat request
ACK: acknowledgment
NAK: negative acknowledgment

Poetic Chinese Lines Translated into English

Abandon 放弃
Disguise 伪装
Abiding 持久的,不变的 ~friendship
Indifferent 无所谓
Forever 最爱
I know what you want 我知道你想要什么
See you forget the breathe 看见你忘了呼吸
Destiny takes a hand. 命中注定
anyway 不管怎样
sunflower, high-profile 向日葵,高姿态
look like love 看起来像爱
Holding my hand, eyes closed you would not get lost 牵着我的手,闭着眼睛走你也不会迷路
If one day the world betrayed you, at least I betray the world for you! 假如有一天世界背叛了你,至少还有我为你背叛这个世界!
This was spoiled child, do not know the heart hurts, naive cruel. 这样被宠惯了的小孩子,不知道人心是会伤的,天真的残忍。

How I want to see you, have a look you changed recently, no longer said once, just greetings, said one to you, just say the word, long time no see. 我多么想和你见一面,看看你最近的改变,不再去说从前,只是寒暄,对你说一句,只说这一句,好久不见。
In fact, not wine, but when the thought of drinking the unbearable past. 其实酒不醉人,只是在喝的时候想起了那不堪的过去。
The wind does not know clouds drift, day not know rain down, eyes do not understand the tears of weakness, so you don't know me. 风不懂云的漂泊,天不懂雨的落魄,眼不懂泪的懦弱,所以你不懂我。
Some people a lifetime to deceive people, but some people a lifetime to cheat a person. 有些人一辈子都在骗人,而有些人用一辈子去骗一个人。
Alone and lonely, is always better than sad together. 独自寂寞,总好过一起悲伤。
You are my one city, one day, you go, my city, also fell. 你是我的一座城,有一天,你离开了,我的城,也就倒了。

Classic Chinese-to-English Translations

但愿人长久,千里共婵娟。 We wish each other a long life so as to share the beauty of this graceful moonlight, even though miles apart.
独在异乡为异客,每逢佳节倍思亲。 A lonely stranger in a strange land I am cast, I miss my family all the more on every festive day.
大江东去,浪淘尽,千古风流人物。 The endless river eastward flows; with its huge waves are gone all those gallant heroes of bygone years.
二人同心,其利断金。 If two people are of the same mind, their sharpness can cut through metal.
富贵不能淫,贫贱不能移,威武不能屈,此之谓大丈夫。 It is a true great man whom no money and rank can confuse, no poverty and hardship can shake, and no power and force can suffocate.
海内存知己,天涯若比邻。 A bosom friend afar brings distance near.

合抱之木,生于毫末;九层之台,起于累土;千里之行,始于足下。 A huge tree that fills one's arms grows from a tiny seedling; a nine-storied tower rises from a heap of earth; a thousand-li journey starts with the first step.
祸兮,福之所倚;福兮,祸之所伏。 Misfortune, that is where happiness depends; happiness, that is where misfortune underlies.
见贤思齐焉,见不贤而内自省也。 On seeing a man of virtue, try to become his equal; on seeing a man without virtue, examine yourself not to have the same defects.
江山如此多娇,引无数英雄竞折腰。 This land so rich in beauty has made countless heroes bow in homage.
举头望明月,低头思故乡。 Raising my head, I see the moon so bright; withdrawing my eyes, my nostalgia comes around.
俱往矣,数风流人物,还看今朝。 All are past and gone; we look to this age for truly great men.

Common English Abbreviations

I. Common spoken contractions
In spoken English we often run into contractions such as wanna and gonna. Where do they come from, and what do they mean?
1. wanna (= want to) [American colloquial]: "to want to". Wanna grab a drink tonight? wanna is used very widely, from everyday speech to song titles; one well-known example is the popular song "B What U Wanna B".
2. gonna (= going to) [American colloquial]: used for the future tense, normally with the verb be in a "be gonna" construction, though be is sometimes dropped in speech. Who's gonna believe you?
3. kinda (= kind of): "somewhat", "a bit". I'm kinda freaking out!
4. sorta (= sort of) [American colloquial]: "somewhat", "in a way". I'm sorta excited.
5. gotta (= got to) [American colloquial]: "have to", "must". I gotta go now.
II. Common written abbreviations
[Brand abbreviations] IKEA: the Swedish brand is an acronym of the four words Ingvar Kamprad Elmtaryd Agunnaryd. In Swedish it really is pronounced "ee-key-a", but English speakers say "eye-key-a", presumably following the letter-I pronunciation pattern of acronyms such as IBM and ISO, with the "KEA" run together, much like "idea".
[English exam abbreviations] ① IELTS: International English Language Testing System

Methods and Tips for Translating Chinese Addresses into English

A Chinese address runs from the largest unit to the smallest, e.g.: X Country, X Province, X City, X District, X Road, No. X. An English address is the reverse, from smallest to largest; the example above becomes: No. X, X Road, X District, X City, X Province, X Country.
1. How to write each part
●X室: Room X
●X号: No. X
●X单元: Unit X
●X楼/层: X/F
●X号楼: Building No. X
●住宅区/小区: Residential Quarter
●X街: X Street
●X路: X Road (East/Central/West for 东路/中路/西路, e.g. 芙蓉西二路 = West 2nd Furong Road, 大连中路 = Central Dalian Rd.; for the 中 in 芙蓉中路, Central and Middle are both used, and Mid is the most concise)
●X区: X District
●X镇: X Town
●X县: X County
●X市: X City
●X省: X Province
●Country: 中华人民共和国 = The People's Republic of China, P.R. China, P.R.C., or China
●X信箱: Mailbox X
Note: personal names, road names, and street names are best rendered in pinyin. Separate the address units with commas.

2. Common address elements in English correspondence
201室/房: Room 201
二单元: Unit 2
马塘村: Matang Village
一号楼/栋: Building 1
华为科技公司: Huawei Technologies Co., Ltd.

xx公司: xx Corp. / xx Co., Ltd.
宿舍: Dormitory
厂: Factory
楼/层: Floor
酒楼/酒店: Hotel
住宅区/小区: Residential Quarter
县: County
甲/乙/丙/丁: A/B/C/D
镇: Town
巷/弄: Lane
市: City
路: Road (also abbreviated Rd.; note that the trailing period must not be omitted); 一环路 = 1st Ring Road
省: Province (also abbreviated Prov.)
花园: Garden
院: Yard
街: Street/Avenue
大学: College/University
信箱: Mailbox
区: District
A座: Suite A
广场: Square
州: State
大厦/写字楼: Tower/Center/Plaza
胡同: Alley (in Beijing place names, 条 also means alley)
Some Chinese administrative divisions:
自治区: Autonomous Region
直辖市: Municipality
特别行政区: Special Administrative Region (abbreviated SAR)
自治州: Autonomous Prefecture
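The largest-to-smallest versus smallest-to-largest rule above is mechanical enough to sketch in a few lines of code. The following Python snippet is only an illustration of the reordering (the function name and the sample address units are invented for the example); it simply reverses the order of the units and joins them with commas, as the tips above recommend:

```python
def to_english_order(units):
    """Take address units listed in Chinese order (largest unit first)
    and return them in English order (smallest unit first),
    separated by commas."""
    return ", ".join(reversed(units))

# Chinese order: district -> residential quarter -> building number -> room
units = ["BaoShan District", "ShiFan Residential Quarter", "No. 37", "Room 403"]
print(to_english_order(units))
# Room 403, No. 37, ShiFan Residential Quarter, BaoShan District
```

Rendering each unit itself (Room X, No. X, pinyin names, and so on) still has to follow the tables above; the code only captures the ordering rule.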

Some Commonly Used English Abbreviations

Abbreviations common in the press:
ADB: Asian Development Bank
AMC: asset management company
BOT: build-operate-transfer
CBRC: China Banking Regulatory Commission
CGC: credit guarantee company
CGS: credit guarantee scheme
CIRC: China Insurance Regulatory Commission
CPDF: China Project Development Facility (World Bank)
CSRC: China Securities Regulatory Commission
DFID: Department for International Development (of the United Kingdom)
DRC: Development Research Center (of the China State Council)
EU: European Union
FDI: foreign direct investment
FIE: foreign-invested enterprise
GDP: gross domestic product
IFC: International Finance Corporation
IPO: initial public offering (of shares)
IPR: intellectual property right
JV: joint venture
MOC: Ministry of Commerce
MOF: Ministry of Finance

English Translated into Chinese

Part IV: Translation
Part I: English to Chinese
Exercises:
Unit 1
1. His casual attitude toward his studies when he was young, together with his long-standing refusal to consider any career other than athletics, had finally caught up with him.
2. Nurses may deeply resent having to take part, day after day, in deceiving patients, yet feel powerless to resist.
3. I would no more scribble all over a first edition of Paradise Lost than I would hand an original Rembrandt to my baby together with a box of crayons.
4. This phenomenon can be explained only by assuming that the earth's surface is curved.
5. It makes biological sense for deer to reduce the energy they need to survive, so as to improve their chances of getting through the winter.
6. For better or worse, whatever the outcome, Americans will not only accept it but also root out those who oppose it, even though for thousands of people the decision runs against their own wishes.
7. Have you ever dashed dripping from the shower to answer the phone, or risen from the dinner table with food still in your mouth, or stumbled groggily out of bed, only to find that someone had dialed the wrong number?
8. In fact, the satisfaction of spending money freely outweighs the pleasure the goods themselves bring.
9. But blue can also mean sadness ("I'm feeling blue"), and white often stands for purity, although in China people wear white at weddings and black at funerals.
10. Between ten o'clock and midnight, America is in a power vacuum: apart from the broadcasting company headquarters in New York and two large news agencies, there is no other information center anywhere in the country.
Unit 2
1) In 1800 a great war lasting fifteen years will break out between Britain and France.
2) I believe that by 1816 Britain will have won a great battle near the village of Waterloo.
3) By 1870, however, Germany will have become a more dangerous country to Britain than France.
4) At the beginning of the twentieth century, Russia, the United States, and Japan will become great powers, and Britain will no longer be the strongest country in the world.
5) In turn, how well farmers perform depends on closely related factors such as the organization of agriculture, the economic environment, and the structure of markets.
6) When he was brought back, he kept telling people that horrible staring-eyed monsters had taken him aboard a spaceship.
7) Most scalds happen to the old and the young, usually because the water in the bathroom is too hot.
8) Learn as much as you can about what may happen, so that you can prepare in advance.
9) Changes in the market forced many websites to close, while others are barely hanging on.
10) Because in countries where farmers' productivity is low, most of the working population is needed to grow food, few people are left to produce investment goods or to carry out the other activities necessary for economic growth.
Unit 3
1. Before Newton, Aristotle had concluded that the natural state of a body is at rest unless a force acts on it, so that a moving body comes to a stop.
2. People feel most intimate at home or in home-like settings, together with one or a few close companions, that is, in private conversation.
3. When a person drives for a long time on a trunk road or motorway, two problems arise: how to keep a steady speed, and how to make sure he does not run into the car ahead.
4. The system is especially useful in heavy traffic, because the computer controls not only the speed and the distance to the car in front, but also the steering.

Common English Abbreviations in Mobile Phones

IDGIC: logic
LOOPFILTER: loop filter
LO: local oscillator
LOCKED: (phone) locked
LPF: low-pass filter; usually found in the frequency-synthesizer loop, where it removes the high-frequency components of the phase detector's output so they cannot disturb the VCO.
MAINCLK (MCLK): the 13 MHz clock, used in Motorola phones. Some use MAGIC-13MHz; Nokia phones often label this signal RFC, Ericsson phones MCLK, and Panasonic phones 13MHzCLK.
MDM: modulation/demodulation
MEMORY: memory
MENU: menu
MF: ceramic filter
MIC: microphone; an acoustic-to-electric transducer that converts the voice signal into an analog electrical signal.
MIX: mixer; in phone circuits this usually means the receiver's mixer, the core component of a superheterodyne receiver, which converts the received high-frequency signal into a lower intermediate-frequency signal.
MIX-275: generally used in Motorola phones for the 2.75 V mixer supply; some phones label the mixer supply VCCMIX.
MIXOUT: mixer output
MOBILE: mobile
MOD: modulation
MODIP: modulation I signal, positive
MODIN: modulation I signal, negative
MODQP: modulation Q signal, positive
MODQN: modulation Q signal, negative
MODEM: modem; used in Motorola phones as the logic/RF interface circuit, providing AFC, AOC, GMSK modulation/demodulation, and so on.
MS: mobile station
MSC: mobile switching center
MSIN: mobile station identification number
MSRN: (mobile station) roaming number
MUTE: mute
NAM: number assignment module
NC: not connected
NEG: negative voltage
NI-H: nickel-metal hydride
NI-G: nickel-cadmium
NONETWORK: no network

OFSET: offset (bias)
OMC: operation and maintenance center
ONSRQ: hands-free switch control
ONSWAN: power-on trigger signal
ON/OFF: power on/off control
OSC: oscillator; converts a DC signal into an AC signal for the circuits that need it.
OUT: output
PA: power amplifier, the final stage of the transmitter
PAC: power control
PA-ON: power-amplifier enable control
PCB: printed circuit board; phone circuits all use multilayer boards.
PCH: paging channel
PCM: pulse code modulation
PCMDCLK: PCM data clock
PCMRXDATA: PCM receive data
PCMSCLK: PCM sampling clock
PCMTXDATA: PCM transmit data
PCN: personal communication network, a kind of digital communication system; the name is not yet standardized, and some books call it PCS. In Nokia phones the 1800 MHz system is usually labeled PCN, while other phones label it DCS.
PCS: personal communications system
PD: phase detector; usually part of a phase-locked loop. It is a phase comparator that converts changes in signal phase into changes in voltage, called the phase-difference signal. In a frequency synthesizer, the PD's output is the VCO's control signal.
PDATA: parallel data
PHASE: phase
PIN: personal identification number
PLL: phase-locked loop; common in control and frequency-synthesis circuits
PM: phase modulation
POWCONTROL: power control
POWLEV: power level
POWRSRC: power-source selection
POWER: power supply
Phone circuits use a great many English abbreviations, and knowing them is a great help when analyzing circuits. Below are some symbols commonly used in phones, for reference when analyzing circuits and doing repairs.
A/D: analog-to-digital conversion
AC: alternating current
ADDRESS: address lines

A Roundup of English Abbreviations Common in Everyday Life

Just as Chinese speakers shorten 生日快乐 ("happy birthday") to 生快, English speakers' trick is the abbreviation. English abbreviations are mostly formed by joining the initial letters of the words in a phrase or sentence into a new "word", or by replacing part of a word with similar-sounding letters, all in the interest of brevity. Here is a roundup of the abbreviations most commonly used in everyday life.
Everyday abbreviations
RPS: short for "Rock, Paper, Scissors". Yes, the world-famous, evergreen game, and a lifesaver for the chronically indecisive: when you and a partner need to decide who does what, or in what order, a quick "RPS" settles it simply and clearly.
RIP: short for "Rest in Peace". In the United States, "R.I.P." is carved on gravestones. It is a fairly common abbreviation, used in a specific context: when someone has died, "RIP" expresses condolences. Not long ago, when David Bowie, the legendary British rock musician, passed away, many people tweeted or posted on Facebook: David Bowie, R.I.P.
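The initial-letter rule described above is concrete enough to demonstrate in code. This short Python sketch (purely illustrative; the function name is invented for the example) builds an initialism by taking the first letter of each word and upper-casing the result:

```python
def initialism(phrase):
    """Join the first letter of each word and upper-case the result."""
    return "".join(word[0] for word in phrase.split()).upper()

print(initialism("Rock Paper Scissors"))  # RPS
print(initialism("Let me know"))          # LMK
print(initialism("as soon as possible"))  # ASAP
```

Note that this covers only the initial-letter type of abbreviation; sound-alike contractions such as gonna or CTN do not follow a rule this simple.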

GG: this abbreviation has several meanings; here are the two most common in daily life.
1. Gotta Go. "Gotta" is itself a colloquial contraction (gotta = got to) that turns up constantly in American TV shows. The phrase means "I have to go" and can be used when saying goodbye to friends. Until the next meaning appeared, this was the mainstream sense of GG.
2. Good Game. Gamers will know this one: it is often said in games, meaning "nice game", as a compliment from the losing side to the winner. It is a gracious compliment and a dignified way to concede. Although over time the "losing" sense has come to dominate, remember that GG is, at heart, a polite expression.
Similar abbreviations that often appear in games include GL (Good Luck) and HF (Have Fun).
Abbreviations in email and text messages
The whole point of an abbreviation is that it is short, so using abbreviations in texts and emails gets the message across with half the effort.
LMK: "Let me know". Very widely applicable: whenever you are waiting for the other person's reply, you can add an "LMK".
ASAP: a very common abbreviation, "As soon as possible". Mind your manners when using it: "I will be back ASAP" is fine, but try not to use ASAP to make demands of other people; its sense of urgency is very strong, and used directly it can sound impolite. To ask politely for a quick reply, combine "at your earliest convenience" with a "Would/Could you" or "I'd appreciate" construction: I'd appreciate it if you could come here at your earliest convenience.
CTN: CTN = Can't talk now. If someone calls while you happen to be busy and cannot talk, you can reply directly by text
