Microcontroller and Sensor Technology: Undergraduate Thesis Foreign-Literature Translations and Originals
Infrared Sensor Paper: Chinese-English Parallel Foreign-Literature Translation

Moving Object Counting with an Infrared Sensor Network
By KI, Chi Keung

Abstract
Wireless Sensor Network (WSN) has become a hot research topic recently. Great benefit can be gained through the deployment of the WSN over a wide range of applications, covering the domains of commercial, military as well as residential. In this project, we design a counting system which tracks people who pass through a detecting zone as well as the corresponding moving directions. Such a system can be deployed in traffic control, resource management, and human flow control. Our design is based on our self-made cost-effective Infrared Sensing Module board which co-operates with a WSN. The design of our system includes Infrared Sensing Module design, sensor clustering, node communication, system architecture and deployment. We conduct a series of experiments to evaluate the system performance, which demonstrates the efficiency of our Moving Object Counting system.

Keywords: Infrared radiation, Wireless Sensor Node

1.1 Introduction to Infrared
Infrared radiation is a part of the electromagnetic radiation with a wavelength lying between visible light and radio waves. Infrared has been widely used nowadays, including in data communications, night vision, object tracking and so on. People commonly use infrared in data communication, since it is easily generated and suffers only little from electromagnetic interference. Take the TV remote control as an example, which can be found in everyone's home. Infrared remote control systems use infrared light-emitting diodes (LEDs) to send out an IR (infrared) signal when the button is pushed. A different pattern of pulses indicates the corresponding button being pushed. To allow the control of multiple appliances such as a TV, VCR, and cable box without interference, systems generally have a preamble and an address to synchronize the receiver and identify the source and location of the infrared signal. To encode the data, systems generally vary the width of the pulses (pulse-width modulation) or the width of the spaces between the pulses (pulse space modulation). Another popular system, bi-phase encoding, uses signal transitions to convey information. Each pulse is actually a burst of IR at the carrier frequency. A 'high' means a burst of IR energy at the carrier frequency and a 'low' represents an absence of IR energy. There is no encoding standard. However, while a great many home entertainment devices use their own proprietary encoding schemes, some quasi-standards do exist. These include RC-5, RC-6, and REC-80. In addition, many manufacturers, such as NEC, have also established their own standards.
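To make the pulse-encoding idea concrete, here is a minimal sketch in C of pulse space modulation (pulse-distance) transmission. The timing constants and the two hardware helpers are assumptions for illustration only; they are not taken from the paper or from any particular remote-control standard.

#include <stdint.h>

#define CARRIER_BURST_US   560   /* assumed burst length at the carrier frequency */
#define SPACE_ZERO_US      560   /* a short space encodes a logical 0             */
#define SPACE_ONE_US      1690   /* a long space encodes a logical 1              */

/* Hypothetical hardware helpers: emit a modulated IR burst, or stay idle. */
void ir_carrier_on_us(uint32_t us);
void ir_idle_us(uint32_t us);

/* Send one byte, least-significant bit first: every bit is a fixed-width
 * burst followed by a space whose width carries the information. */
void ir_send_byte(uint8_t value)
{
    for (int i = 0; i < 8; i++) {
        ir_carrier_on_us(CARRIER_BURST_US);
        ir_idle_us((value & (1u << i)) ? SPACE_ONE_US : SPACE_ZERO_US);
    }
}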
1.2 Wireless Sensor Network
A wireless sensor network (WSN) is a wireless network which consists of a vast number of autonomous sensor nodes using sensors to monitor physical or environmental conditions, such as temperature, acoustics, vibration, pressure, motion or pollutants, at different locations. Each node in a sensor network is typically equipped with a wireless communications device, a small microcontroller, one or more sensors, and an energy source, usually a battery. The size of a single sensor node can be as large as a shoebox or as small as a grain of dust, depending on the application. The cost of sensor nodes is similarly variable, ranging from hundreds of dollars to a few cents, depending on the size of the sensor network and the complexity requirements of the individual sensor nodes. These size and cost constraints, in turn, impose corresponding limitations on resources such as energy, memory, computational speed and bandwidth. The development of wireless sensor networks was originally motivated by military applications such as battlefield surveillance. Due to advances in micro-electro-mechanical systems (MEMS) technology, embedded microprocessors, and wireless networking, WSNs now benefit many civilian application areas, including habitat monitoring, healthcare applications, and home automation.

1.3 Types of Wireless Sensor Networks
Operating systems for wireless sensor network nodes are typically less complex than general-purpose operating systems, both because of the special requirements of sensor network applications and because of the resource constraints of sensor network hardware platforms. The operating system does not need to include support for user interfaces. Furthermore, the resource constraints in terms of memory and memory-mapping hardware support make mechanisms such as virtual memory either unnecessary or impossible to implement. TinyOS [TinyOS] is possibly the first operating system specifically designed for wireless sensor networks. Unlike most other operating systems, TinyOS is based on an event-driven programming model instead of multithreading. TinyOS programs are composed of event handlers and tasks with run-to-completion semantics. When an external event occurs, such as an incoming data packet or a sensor reading, TinyOS calls the appropriate event handler to handle the event. The TinyOS system and programs are both written in a special programming language called nesC [nesC], which is an extension to the C programming language. NesC is designed to detect race conditions between tasks and event handlers. There are also operating systems that allow programming in C. Examples of such operating systems include Contiki [Contiki] and MANTIS. Contiki is designed to support loading modules over the network and supports run-time loading of standard ELF files. The Contiki kernel is event-driven, like TinyOS, but the system supports multithreading on a per-application basis. Unlike the event-driven Contiki kernel, the MANTIS kernel is based on preemptive multithreading. With preemptive multithreading, applications do not need to explicitly yield the microprocessor to other processes.
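The run-to-completion model described above can be made concrete with a small sketch. The following is plain C with invented event and handler names (TinyOS itself is written in nesC, not C); each handler runs to completion and never blocks, so no thread scheduler is needed.

typedef enum { EVT_NONE, EVT_PACKET, EVT_SENSOR } event_t;

event_t next_event(void);            /* hypothetical: fetch the next pending event   */
void    handle_packet(void);         /* hypothetical handlers: each one runs to      */
void    handle_sensor_reading(void); /* completion and must never block or spin      */

void scheduler_loop(void)
{
    for (;;) {
        switch (next_event()) {
        case EVT_PACKET:  handle_packet();          break;
        case EVT_SENSOR:  handle_sensor_reading();  break;
        case EVT_NONE:    /* nothing pending: the node could sleep here */ break;
        }
    }
}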
1.4 Introduction to the Wireless Sensor Node
A sensor node, also known as a mote, is a node in a wireless sensor network that is capable of performing processing, gathering sensory information and communicating with other connected nodes in the network. A sensor node should be small in size, consume extremely little energy, operate autonomously and unattended, and adapt to its environment. As wireless sensor nodes are micro-electronic devices, they can only be equipped with a limited power source. The main components of a sensor node are its sensors, microcontroller, transceiver, and power source.

Sensors are hardware devices that produce a measurable response to a change in a physical condition such as light intensity or sound intensity. The continuous analog signal collected by the sensors is digitized by an analog-to-digital converter and then passed to the controller for further processing. Most of the theoretical work on WSNs considers passive, omnidirectional sensors: they sense the data without actually manipulating the environment by active probing, and no notion of "direction" is involved in their measurements. Commonly deployed sensors detect heat (e.g. thermal sensors), light (e.g. infrared sensors), ultrasound (e.g. ultrasonic sensors), or electromagnetism (e.g. magnetic sensors). In practice, a sensor node can be equipped with more than one sensor.

The microcontroller performs tasks, processes data and controls the operation of the other components in the sensor node. It is responsible for the signal processing upon detection of physical events, as needed or on demand; it handles interrupts from the transceiver; and it deals with internal behavior, such as application-specific computation.

The functions of transmitter and receiver are combined into a single device known as a transceiver. Transceivers allow a sensor node to exchange information with neighboring sensors and with the sink node (a central receiver). The operational states of a transceiver are Transmit, Receive, Idle and Sleep.

Power is stored either in batteries or in capacitors. Batteries are the main source of power supply for sensor nodes. They may be rechargeable or non-rechargeable, and they are also classified according to the electrochemical material used for the electrodes, such as NiCd (nickel-cadmium), NiZn (nickel-zinc), NiMH (nickel metal hydride), and lithium-ion. Sensors are also being developed that can renew their energy from sources such as solar power or vibration. The two major power-saving policies used are Dynamic Power Management (DPM) and Dynamic Voltage Scaling (DVS). DPM shuts down parts of the sensor node which are not currently used or active. DVS varies the power levels depending on the non-deterministic workload; by varying the voltage along with the frequency, it is possible to obtain a quadratic reduction in power consumption.
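The quadratic claim follows from the usual dynamic-power relation for CMOS circuits, P ≈ C·V²·f: scaling the supply voltage down (and the clock frequency with it) cuts power much faster than linearly. A tiny sketch with illustrative, assumed component values:

#include <stdio.h>

/* Dynamic CMOS power: switched capacitance times voltage squared times frequency. */
static double dynamic_power(double cap_farads, double volts, double freq_hz)
{
    return cap_farads * volts * volts * freq_hz;
}

int main(void)
{
    double c    = 1e-9;                           /* assumed switched capacitance  */
    double full = dynamic_power(c, 3.0, 8e6);     /* full voltage and frequency    */
    double half = dynamic_power(c, 1.5, 4e6);     /* half voltage and frequency    */
    printf("power ratio: %.3f\n", half / full);   /* prints 0.125: an 8x saving    */
    return 0;
}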
1.5 Challenges
The major challenges in the design and implementation of wireless sensor networks are mainly the energy limitation, hardware limitations and the area of coverage. Energy is the scarcest resource of WSN nodes, and it determines the lifetime of a WSN. WSNs are meant to be deployed in large numbers in various environments, including remote and hostile regions, with ad-hoc communication as a key feature. For this reason, algorithms and protocols need to address lifetime maximization, robustness and fault tolerance, and self-configuration. The challenge in hardware is to produce low-cost and tiny sensor nodes. With respect to these objectives, current sensor nodes usually have limited computational capability and memory space; consequently, the application software and algorithms in a WSN should be well optimized and condensed. In order to maximize the coverage area with high stability and robustness at each single node, multi-hop communication with low power consumption is preferred. Furthermore, to deal with large network sizes, any protocol designed for a large-scale WSN must be distributed.

1.6 Research Issues
Researchers are interested in various areas of wireless sensor networks, covering their design, implementation, and operation. These include hardware, software and middleware, meaning the primitives between the software and the hardware. As WSNs are generally deployed in resource-constrained environments with battery-operated nodes, researchers mainly focus on the issues of energy optimization, coverage-area improvement, error reduction, sensor network applications, data security, sensor node mobility, and data packet routing algorithms among the sensors. In the literature, a large group of researchers has devoted a great amount of effort to WSNs. They have focused on various areas, including physical properties, sensor training, security through intelligent node cooperation, medium access, sensor coverage with random and deterministic placement, object locating and tracking, sensor location determination, addressing, energy-efficient broadcasting and active scheduling, energy-conserving routing, connectivity, data dissemination and gathering, sensor-centric quality of routing, topology control and maintenance, etc.
Microcontroller: English Reference Translation

Tianjin University of Science and Technology, Graduate Foreign-Literature Translation
Name: College: College of Electronic Information and Automation Major: Measurement and Control Technology and Instruments

I. English Original

Progress in Computers
Prestige Lecture delivered to the IEE, Cambridge, on 5 February 2009
Maurice Wilkes

The first stored program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term "families of logic" came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s
By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful.
The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry 'going the other way'. As time goes on people get more for their money, not less.

Research in Computer Hardware
The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath
Early computers had simple instruction sets. As time went on designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instruction Set Computer. Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one.
They refer to such a roomful by the attractive name of computer farm.

The x86 Instruction Set
Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.

In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 Instruction Set
Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant. Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost.
Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64 bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]

The Relentless Drive towards Smaller Transistors
The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Law
In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's law (or Sod's law, as it is usually called in the UK). Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949.
However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem: "A cyclist sets out on a circular cycling tour. Derive an equation giving the direction of the wind at any time."

The single-chip computer
At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor, except for the caches, could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From the above time the high density CMOS silicon chip was Cock of the Roost. Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution, and other forms of prediction followed.

Equally surprising is what it has been found possible to put on a single-chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91 (the giant computer at the top of the System 360 range) are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek for ways to get them made. For a time, this was possible, if not easy.

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map
The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.

At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However, about 1980 significant and far-reaching changes took place in the laws. The concept of pre-competitive research was introduced.
Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semiconductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.

Every two years the SIA produces a new version of a document known as the International Technology Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title 'Roadmap' was issued in 1994, but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15-year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months (that is, if Moore's law is to be maintained) and if the cost per chip is to fall.

In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.

The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner.

It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was in later Roadmaps moved forward to 2004, and in the event the industry got there a little sooner.

I presented the above information from the 1994 Roadmap, along with such other information as I could obtain, in a lecture to the IEE in London, entitled The CMOS end-point and related topics in Computing, delivered on 8 February 1996.

The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster.
In fact the physical limitations that are now beginning to make themselves felt do not arise through shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that "the continuation of progress at the present rate will be at risk as we approach 2005, when the roadmap projects that progress will stall without research breakthroughs in most technical areas". This was the most specific statement about the Red Brick Wall that had so far come from the SIA, and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming ahead, it radiates immense confidence. Prevailing opinion reflects that confidence and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.
Microcontroller: English Literature and Translation

Microcontroller

A microcontroller is a small computer on a single integrated circuit that contains a processor core, memory, and programmable input/output peripherals. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications.

A microcontroller's processor core is typically a small, low-power computer dedicated to controlling the operation of the device in which it is embedded. It is often designed to provide efficient and reliable control of simple and repetitive tasks, such as switching lights on and off, or monitoring temperature or pressure sensors.

MEMORY
Microcontrollers typically have a limited amount of memory, divided into program memory and data memory. The program memory is where the software that controls the device is stored, and is often a type of read-only memory (ROM). The data memory, on the other hand, is used to store data that is used by the program, and is often volatile, meaning that it loses its contents when power is removed.

INPUT/OUTPUT
Microcontrollers typically have a number of programmable input/output (I/O) pins that can be used to interface with external sensors, switches, actuators, and other devices. These pins can be programmed to perform specific functions, such as reading a sensor value, controlling a motor, or generating a signal (a minimal sketch of this style of pin programming is given at the end of this section). Many microcontrollers also support communication protocols like serial, parallel, and USB, allowing them to interface with other devices, including other microcontrollers, computers, and smartphones.

APPLICATIONS
Microcontrollers are widely used in a variety of applications, including:
- Home automation systems
- Automotive electronics
- Medical devices
- Industrial control systems
- Consumer electronics
- Robotics

CONCLUSION
In conclusion, microcontrollers are powerful and versatile devices that have become an essential component in many embedded systems. With their small size, low power consumption, and high level of integration, microcontrollers offer an effective and cost-efficient solution for controlling a wide range of devices and applications.
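The sketch referred to above illustrates the register-level style in which such I/O pins are typically programmed. It is written for a hypothetical part: the register addresses and bit assignments are invented for illustration, since real microcontrollers define them in vendor-supplied headers.

#include <stdint.h>

#define PORT_DIR  (*(volatile uint8_t *)0x1000u)  /* direction register (assumed) */
#define PORT_OUT  (*(volatile uint8_t *)0x1001u)  /* output latch (assumed)       */
#define ADC_DATA  (*(volatile uint8_t *)0x1002u)  /* ADC result register (assumed)*/

#define LED_PIN   (1u << 0)

void led_init(void)   { PORT_DIR |= LED_PIN; }    /* configure the pin as output  */
void led_toggle(void) { PORT_OUT ^= LED_PIN; }    /* flip the LED state           */

/* Read a sensor through the ADC and switch the LED on above a threshold. */
void poll_sensor(uint8_t threshold)
{
    if (ADC_DATA > threshold)
        PORT_OUT |= LED_PIN;
    else
        PORT_OUT &= (uint8_t)~LED_PIN;
}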
Sensor Technology Paper: Chinese-English Parallel Foreign-Literature Translation

Development of New Sensor Technologies

Sensors are devices that can convert physical, chemical, biological and other quantities into electrical signals. The output signals can take different forms, such as voltage, current, frequency and pulse, and can meet the requirements of information transmission, processing, recording, display, and control. Sensors are indispensable components in automatic detection systems and automatic control systems. If computers are compared to brains, then sensors are like the five senses. Sensors can correctly sense the measured quantity and convert it into a corresponding output, playing a decisive role in the quality of the system. The higher the degree of automation, the higher the requirements for sensors. In today's information age, the information industry includes three parts: sensing technology, communication technology, and computer technology.
Microcontroller: English Reference and Translation

Appendix A: English Original

Temperature Control Using a Microcontroller: An Interdisciplinary Undergraduate Engineering Design Project
James S. McDonald
Department of Engineering Science
Trinity University
San Antonio, TX 78212

Abstract
This paper describes an interdisciplinary design project which was done under the author's supervision by a group of four senior students in the Department of Engineering Science at Trinity University. The objective of the project was to develop a temperature control system for an air-filled chamber. The system was to allow entry of a desired chamber temperature in a prescribed range and to exhibit overshoot and steady-state temperature error of less than 1 degree Kelvin in the actual chamber temperature step response. The details of the design developed by this group of students, based on a Motorola MC68HC05-family microcontroller, are described. The pedagogical value of the problem is also discussed through a description of some of the key steps in the design process. It is shown that the solution requires broad knowledge drawn from several engineering disciplines including electrical, mechanical, and control systems engineering.

1 Introduction
The design project which is the subject of this paper originated from a real-world application. A prototype of a microscope slide dryer had been developed around an Omega™ model CN-390 temperature controller, and the objective was to develop a custom temperature control system to replace the Omega system. The motivation was that a custom controller targeted specifically for the application should be able to achieve the same functionality at a much lower cost, as the Omega system is unnecessarily versatile and equipped to handle a wide variety of applications.

The mechanical layout of the slide dryer prototype is shown in Figure 1. The main element of the dryer is a large, insulated, air-filled chamber in which microscope slides, each with a tissue sample encased in paraffin, can be set on caddies. In order that the paraffin maintain the proper consistency, the temperature in the slide chamber must be maintained at a desired (constant) temperature. A second chamber (the electronics enclosure) houses a resistive heater and the temperature controller, and a fan mounted on the end of the dryer blows air across the heater, carrying heat into the slide chamber.

This design project was carried out during academic year 1996-97 by four students under the author's supervision as a Senior Design project in the Department of Engineering Science at Trinity University. The purpose of this paper is to describe the problem and the students' solution in some detail, and to discuss some of the pedagogical opportunities offered by an interdisciplinary design project of this type. The students' own report was presented at the 1997 National Conference on Undergraduate Research [1]. Section 2 gives a more detailed statement of the problem, including performance specifications, and Section 3 describes the students' design. Section 4 makes up the bulk of the paper, and discusses in some detail several aspects of the design process which offer unique pedagogical opportunities. Finally, Section 5 offers some conclusions.

2 Problem Statement
The basic idea of the project is to replace the relevant parts of the functionality of an Omega CN-390 temperature controller using a custom-designed system.
The application dictates that temperature settings are usually kept constant for long periods of time, but it's nonetheless important that step changes be tracked in a "reasonable" manner. Thus the main requirements boil down to
· allowing a chamber temperature set-point to be entered,
· displaying both set-point and actual temperatures, and
· tracking step changes in set-point temperature with acceptable rise time, steady-state error, and overshoot.
Although not explicitly a part of the specifications in Table 1, it was clear that the customer desired digital displays of set-point and actual temperatures, and that set-point temperature entry should be digital as well (as opposed to, say, through a potentiometer setting).

3 System Design
The requirements for digital temperature displays and set-point entry alone are enough to dictate that a microcontroller-based design is likely the most appropriate. Figure 2 shows a block diagram of the students' design. The microcontroller, a Motorola MC68HC705B16 (6805 for short), is the heart of the system. It accepts inputs from a simple four-key keypad which allow specification of the set-point temperature, and it displays both set-point and measured chamber temperatures using two-digit seven-segment LED displays controlled by a display driver. All these inputs and outputs are accommodated by parallel ports on the 6805. Chamber temperature is sensed using a pre-calibrated thermistor and input via one of the 6805's analog-to-digital inputs. Finally, a pulse-width modulation (PWM) output on the 6805 is used to drive a relay which switches line power to the resistive heater off and on.

Figure 3 shows a more detailed schematic of the electronics and their interfacing to the 6805. The keypad, a Storm 3K041103, has four keys which are interfaced to pins PA0-PA3 of Port A, configured as inputs. One key functions as a mode switch. Two modes are supported: set mode and run mode. In set mode two of the other keys are used to specify the set-point temperature: one increments it and one decrements it. The fourth key is unused at present. The LED displays are driven by a Harris Semiconductor ICM7212 display driver interfaced to pins PB0-PB6 of Port B, configured as outputs. The temperature-sensing thermistor drives, through a voltage divider, pin AN0 (one of eight analog inputs). Finally, pin PLMA (one of two PWM outputs) drives the heater relay.

Software on the 6805 implements the temperature control algorithm, maintains the temperature displays, and alters the set-point in response to keypad inputs. Because it is not complete at this writing, software will not be discussed in detail in this paper. The control algorithm in particular has not been determined, but it is likely to be a simple proportional controller and certainly not more complex than a PID. Some control design issues will be discussed in Section 4, however.
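As an aside, reading the thermistor channel just described might look something like the following sketch. The ADC helper and the linear calibration constants are hypothetical (the paper does not give the students' calibration); a real thermistor is nonlinear and needs a proper calibration curve over the operating range.

#include <stdint.h>

/* Hypothetical helper: start a conversion on the given analog channel
 * and return the 8-bit result. */
uint8_t adc_read_channel(uint8_t channel);

/* Convert the ADC code from the thermistor's voltage divider on AN0 to
 * degrees Celsius, assuming a locally linear calibration T = m*code + b.
 * The constants m and b below are placeholders, not measured values. */
double chamber_temperature(void)
{
    const double m = 0.35, b = -5.0;    /* assumed calibration constants */
    uint8_t code = adc_read_channel(0); /* channel 0 corresponds to AN0  */
    return m * (double)code + b;
}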
4 The Design Process
Although essentially the project is just to build a thermostat, it presents many nice pedagogical opportunities. The knowledge and experience base of a senior engineering undergraduate are just enough to bring him or her to the brink of a solution to various aspects of the problem. Yet, in each case, real-world considerations complicate the situation significantly. Fortunately these complications are not insurmountable, and the result is a very beneficial design experience. The remainder of this section looks at a few aspects of the problem which present the type of learning opportunity just described. Section 4.1 discusses some of the features of a simplified mathematical model of the thermal properties of the system and how it can be easily validated experimentally. Section 4.2 describes how realistic control algorithm designs can be arrived at using introductory concepts in control design. Section 4.3 points out some important deficiencies of such a simplified modeling/control design process and how they can be overcome through simulation. Finally, Section 4.4 gives an overview of some of the microcontroller-related design issues which arise and the learning opportunities offered.

4.1 Mathematical Model
Lumped-element thermal systems are described in almost any introductory linear control systems text, and just this sort of model is applicable to the slide dryer problem. Figure 4 shows a second-order lumped-element thermal model of the slide dryer. The state variables are the temperatures T_a of the air in the box and T_b of the box itself. The inputs to the system are the power output q(t) of the heater and the ambient temperature T_∞. m_a and m_b are the masses of the air and the box, respectively, and C_a and C_b their specific heats. μ_1 and μ_2 are heat transfer coefficients from the air to the box and from the box to the external world, respectively. It's not hard to show that the (linearized) state equations corresponding to Figure 4 are

\[ m_a C_a \dot{T}_a = q(t) - \mu_1 (T_a - T_b), \quad (1) \]
\[ m_b C_b \dot{T}_b = \mu_1 (T_a - T_b) - \mu_2 (T_b - T_\infty). \quad (2) \]

Taking Laplace transforms of (1) and (2) and solving for T_a(s), which is the output of interest, gives an open-loop model of the thermal system of the form

\[ T_a(s) = \frac{K(\tau_z s + 1)}{D(s)}\, Q(s) + (\text{terms in } T_\infty(s)), \quad (3) \]

where K is a constant and D(s) is a second-order polynomial. K, τ_z, and the coefficients of D(s) are functions of the various parameters appearing in (1) and (2).

Of course the various parameters in (1) and (2) are completely unknown, but it's not hard to show that, regardless of their values, D(s) has two real zeros. Therefore the main transfer function of interest (which is the one from Q(s), since we'll assume constant ambient temperature) can be written

\[ G_{aq}(s) = \frac{T_a(s)}{Q(s)} = \frac{K(\tau_z s + 1)}{(\tau_{p1} s + 1)(\tau_{p2} s + 1)}. \]

Moreover, it's not too hard to show that 1/τ_p1 < 1/τ_z < 1/τ_p2, i.e., that the zero lies between the two poles. Both of these are excellent exercises for the student, and the result is the open-loop pole-zero diagram of Figure 5.

Obtaining a complete thermal model, then, is reduced to identifying the constant K and the three unknown time constants in (3). Four unknown parameters is quite a few, but simple experiments show that 1/τ_p1 ≪ 1/τ_z, 1/τ_p2, so that τ_z ≈ 0 and τ_p2 ≈ 0 are good approximations. Thus the open-loop system is essentially first-order and can therefore be written

\[ G_{aq}(s) = \frac{K}{\tau s + 1} \quad (4) \]

(where the subscript p1 has been dropped). Simple open-loop step response experiments show that, for a wide range of initial temperatures and heat inputs, K ≈ 0.14 °C/W and τ ≈ 295 s.

4.2 Control System Design
Using the first-order model of (4) for the open-loop transfer function G_aq(s), and assuming for the moment that linear control of the heater power output q(t) is possible, the block diagram of Figure 6 represents the closed-loop system. T_d(s) is the desired, or set-point, temperature, C(s) is the compensator transfer function, and Q(s) is the heater output in watts. Given this simple situation, introductory linear control design tools such as the root locus method can be used to arrive at a C(s) which meets the step response requirements on rise time, steady-state error, and overshoot specified in Table 1. The upshot, of course, is that a proportional controller with sufficient gain can meet all specifications. Overshoot is impossible, and increasing gain decreases both steady-state error and rise time. Unfortunately, sufficient gain to meet the specifications may require larger heat outputs than the heater is capable of producing. This was indeed the case for this system, and the result is that the rise time specification cannot be met. It is quite revealing to the student how useful such an oversimplified model, carefully arrived at, can be in determining overall performance limitations.
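As a quick check on those gain claims, consider a sketch of the steady-state error calculation (not from the original paper) using model (4) with an assumed proportional compensator C(s) = K_p and a step set-point change of size A. The final value theorem gives

\[ e_{ss} = \lim_{s \to 0} s \cdot \frac{A/s}{1 + K_p\, G_{aq}(s)} = \frac{A}{1 + K_p K}, \]

so increasing K_p drives the steady-state error down without ever eliminating it, while the closed-loop time constant τ/(1 + K_p K) shrinks at the same time, which is the rise-time improvement noted above. And since the closed-loop system remains first-order, no overshoot is possible.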
4.3 Simulation Model
Gross performance and its limitations can be determined using the simplified model of Figure 6, but there are a number of other aspects of the closed-loop system whose effects on performance are not so simply modeled. Chief among these are
· quantization error in analog-to-digital conversion of the measured temperature and
· the use of PWM to control the heater.
Both of these are nonlinear and time-varying effects, and the only practical way to study them is through simulation (or experiment, of course). Figure 7 shows a Simulink™ block diagram of the closed-loop system which incorporates these effects. A/D converter quantization and saturation are modeled using standard Simulink quantizer and saturation blocks. Modeling PWM is more complicated and requires a custom S-function to represent it. This simulation model has proven particularly useful in gauging the effects of varying the basic PWM parameters and hence selecting them appropriately. (I.e., the longer the period, the larger the temperature error PWM introduces. On the other hand, a long period is desirable to avoid excessive relay "chatter," among other things.) PWM is often difficult for students to grasp, and the simulation model allows an exploration of its operation and effects which is quite revealing.
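The same two effects can be explored with a few lines of code. The sketch below simulates the identified first-order plant (K ≈ 0.14 °C/W, τ ≈ 295 s) under proportional control with an on/off PWM heater and 1-degree measurement quantization; the heater power, PWM period, and gain are assumptions for illustration, not values from the students' design.

#include <stdio.h>

int main(void)
{
    const double K = 0.14, tau = 295.0;  /* identified first-order plant model   */
    const double q_max = 500.0;          /* assumed heater power in watts        */
    const double period = 10.0;          /* assumed PWM period in seconds        */
    const double kp = 50.0;              /* assumed proportional gain, W/degree  */
    const double set_point = 20.0;       /* desired rise above ambient, degrees  */
    double temp = 0.0;                   /* temperature rise above ambient       */

    for (int step = 0; step < 300; step++) {
        double measured = (double)(int)temp;            /* 1-degree quantization */
        double duty = kp * (set_point - measured) / q_max;
        if (duty < 0.0) duty = 0.0;                     /* heater cannot cool    */
        if (duty > 1.0) duty = 1.0;                     /* or exceed full power  */

        /* Integrate dT/dt = (K*q - T)/tau across one PWM period: the heater is
         * on for the first duty*period seconds and off for the remainder. */
        for (double t = 0.0; t < period; t += 1.0) {
            double q = (t < duty * period) ? q_max : 0.0;
            temp += (K * q - temp) / tau;
        }
        if (step % 50 == 0)
            printf("t = %4d s  T = %6.2f degrees\n", (int)(step * period), temp);
    }
    return 0;
}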
4.4 The Microcontroller
Simple closed-loop control, keypad reading, and display control are some of the classic applications of microcontrollers, and this project incorporates all three. It is therefore an excellent all-around exercise in microcontroller applications. In addition, because the project is to produce an actual packaged prototype, it won't do to use a simple evaluation board with the I/O pins jumpered to the target system. Instead, it's necessary to develop a complete embedded application. This entails the choice of an appropriate part from the broad range offered in a typical microcontroller family and learning to use a fairly sophisticated development environment. Finally, a custom printed-circuit board for the microcontroller and peripherals must be designed and fabricated.

Microcontroller Selection. In view of existing local expertise, the Motorola line of microcontrollers was chosen for this project. Still, this does not narrow the choice down much. A fairly disciplined study of system requirements is necessary to specify which microcontroller, out of scores of variants, is required for the job. This is difficult for students, as they generally lack the experience and intuition needed, as well as the perseverance to wade through manufacturers' selection guides. Part of the problem is in choosing methods for interfacing the various peripherals (e.g., what kind of display driver should be used?). A study of relevant Motorola application notes [2, 3, 4] proved very helpful in understanding what basic approaches are available and what microcontroller/peripheral combinations should be considered. The MC68HC705B16 was finally chosen on the basis of its available A/D inputs and PWM outputs as well as 24 digital I/O lines. In retrospect this is probably overkill, as only one A/D channel, one PWM channel, and 11 I/O pins are actually required (see Figure 3). The decision was made to err on the safe side because a complete development system specific to the chosen part was necessary, and the project budget did not permit a second such system to be purchased should the first prove inadequate.

Microcontroller Application Development. Breadboarding of the peripheral hardware, development of microcontroller software, and final debugging and testing of a custom printed-circuit board for the microcontroller and peripherals all require a development environment of some kind. The choice of a development environment, like that of the microcontroller itself, can be bewildering and requires some faculty expertise. Motorola makes three grades of development environment, ranging from simple evaluation boards (at around $100) to full-blown real-time in-circuit emulators (at more like $7500). The middle option was chosen for this project: the MMEVS, which consists of
· a platform board (which supports all 6805-family parts),
· an emulator module (specific to B-series parts), and
· a cable and target head adapter (package-specific).
Overall, the system costs about $900 and provides, with some limitations, in-circuit emulation capability. It also comes with the simple but sufficient software development environment RAPID [5]. Students find learning to use this type of system challenging, but the experience they gain in real-world microcontroller application development greatly exceeds the typical first-course experience using simple evaluation boards.

Printed-Circuit Board. The layout of a simple (though definitely not trivial) printed-circuit board is another practical learning opportunity presented by this project. The final board layout, with package outlines, is shown (at 50% of actual size) in Figure 8. The relative simplicity of the circuit makes manual placement and routing practical (in fact, it likely gives better results than automatic routing in an application like this), and the student is therefore exposed to fundamental issues of printed-circuit layout and basic design rules. The layout software used was the very nice package pcb, and the board was fabricated in-house with the aid of our staff electronics technician.

5 Conclusion
The aim of this paper has been to describe an interdisciplinary, undergraduate engineering design project: a microcontroller-based temperature control system with digital set-point entry and set-point/actual temperature display. A particular design of such a system has been described, and a number of design issues which arise, from a variety of engineering disciplines, have been discussed. Resolution of these issues generally requires knowledge beyond that acquired in introductory courses, but realistically accessible to advanced undergraduate students, especially with the advice and supervision of faculty. Desirable features of the problem, from a pedagogical viewpoint, include the use of a microcontroller with simple peripherals, the opportunity to usefully apply introductory-level modeling of physical systems and design of closed-loop controls, and the need for relatively simple experimentation (for model validation) and simulation (for detailed performance prediction).
Also desirable are some of the technology-related aspects of the problem, including practical use of resistive heaters and temperature sensors (requiring knowledge of PWM and calibration techniques, respectively), microcontroller selection and use of development systems, and printed-circuit design.

Acknowledgements
The author would like to acknowledge the hard work, dedication, and ability shown by the students involved in this project: Mark Langsdorf, Matt Rall, Pam Rinehart, and David Schuchmann. It is their project, and credit for its success belongs to them.

References
[1] M. Langsdorf, M. Rall, D. Schuchmann, and P. Rinehart, "Temperature control of a microscope slide dryer," in 1997 National Conference on Undergraduate Research, (Austin, TX), April 1997. Poster presentation.
[2] Motorola, Inc., Phoenix, AZ, Temperature Measurement and Display Using the MC68HC05B4 and the MC14489, 1990. Motorola Semiconductor Application Note AN431.
[3] Motorola, Inc., Phoenix, AZ, HC05 MCU LED Drive Techniques Using the MC68HC705J1A, 1995. Motorola Semiconductor Application Note AN1238.
[4] Motorola, Inc., Phoenix, AZ, HC05 MCU Keypad Decoding Techniques Using the MC68HC705J1A, 1995. Motorola Semiconductor Application Note AN1239.
[5] Motorola, Inc., Phoenix, AZ, RAPID Integrated Development Environment User's Manual, 1993. (RAPID was developed by P & E Microcomputer Systems, Inc.)
Sensor Technology: Undergraduate Thesis Foreign-Literature Translation and Original

Graduation Design (Thesis) Foreign Literature Translation
Chinese title: Sensor Technology
English title: Sensor-technology
Source:
Publication date:
College (Department):
Major:
Class:
Name:
Student ID:
Supervisor:
Translation date: 2017.02.14

Sensor Technology
A sensor is a device which produces a signal in response to its detecting or measuring a property, such as position, force, torque, pressure, temperature, humidity, speed, acceleration, or vibration. Traditionally, sensors (together with actuators and switches) have been used to set limits on the performance of machines. Common examples are (a) stops on machine tools to restrict work table movements, (b) pressure and temperature gages with automatic shut-off features, and (c) governors on engines to prevent excessive speed of operation. Sensor technology has become an important aspect of manufacturing processes and systems. It is essential for proper data acquisition and for the monitoring, communication, and computer control of machines and systems. Because they convert one quantity to another, sensors often are referred to as transducers. Analog sensors produce a signal, such as a voltage, which is proportional to the measured quantity. Digital sensors have numeric or digital outputs that can be transferred to computers directly. Analog-to-digital converters (ADCs) are available for interfacing analog sensors with computers.

Classifications of Sensors
Sensors that are of interest in manufacturing may be classified generally as follows:
Mechanical sensors measure quantities such as position, shape, velocity, force, torque, pressure, vibration, strain, and mass.
Electrical sensors measure voltage, current, charge, and conductivity.
Magnetic sensors measure magnetic field, flux, and permeability.
Thermal sensors measure temperature, flux, conductivity, and specific heat.
Other types are acoustic, ultrasonic, chemical, optical, radiation, laser, and fiber-optic.
Depending on its application, a sensor may consist of metallic, nonmetallic, organic, or inorganic materials, as well as fluids, gases, plasmas, or semiconductors. Using the special characteristics of these materials, sensors convert the quantity or property measured to analog or digital output. The operation of an ordinary mercury thermometer, for example, is based on the difference between the thermal expansion of mercury and that of glass. Similarly, a machine part, a physical obstruction, or a barrier in a space can be detected by the breaking of a beam of light sensed by a photoelectric cell. A proximity sensor (which senses and measures the distance between it and an object or a moving member of a machine) can be based on acoustics, magnetism, capacitance, or optics. Other sensors contact the object and take appropriate action (usually by electromechanical means). Sensors are essential to the operation of intelligent robots, and sensors are being developed with capabilities that resemble those of humans (smart sensors; see below).

Telesurgery in the United States offers an example: guided by a screen, an operator uses a remote controller to drive a second manipulator and so performs abdominal surgery. Such a system was shown at an exhibition in our country a few years ago, and in the United States robots of this kind have already been used successfully for heart valve and bypass operations.
This robot caused a great sensation in the field. AESOP's surgical robot, for instance, inspects lesions through its instruments and can operate on certain parts of the body through a manipulator; with remotely operated manipulators, many doctors can take part in robot-assisted surgery. A robot assistant can hold the pliers, tweezers, or knife in place of a nurse, and the lighting is linked automatically to the doctor's movements: when the doctor's hands withdraw, the light goes off. It makes a very good assistant for the surgeon.

Tactile sensing is the continuous sensing of variable contact forces, commonly by an array of sensors. Such a system is capable of performing within an arbitrary three-dimensional space.

Robotics has gradually shifted from manufacturing to non-manufacturing and service industries. Car making belongs to manufacturing, whereas services such as cleaning, refueling, rescue, and disaster relief belong to the non-manufacturing and service sectors; this is a very important difference from the industrial robot. A service robot is primarily a mobile platform: it can move about, it has arms with which to operate, and it carries sensors such as force sensors, visual sensors, and ultrasonic ranging sensors. It identifies its surrounding environment in order to plan its movements and complete its tasks; this is one of the basic characteristics of a service robot.

In visual sensing (machine vision, computer vision), cameras optically sense the presence and shape of the object. A microprocessor then processes the image (usually in less than one second), the image is measured, and the measurements are digitized (image recognition). Machine vision is particularly suitable for inaccessible parts, for hostile manufacturing environments, for measuring a large number of small features, and for situations where physical contact with the part may cause damage.

Smart sensors have the capability to perform a logic function, to conduct two-way communication, and to make decisions and take appropriate actions. The necessary input and the knowledge required to make a decision can be built into a smart sensor. For example, a computer chip with sensors can be programmed to turn a machine tool off when a cutting tool fails. Likewise, a smart sensor can stop a mobile robot or a robot arm from accidentally coming into contact with an object or people by using quantities such as distance, heat, and noise.

Sensor fusion. Sensor fusion basically involves the integration of multiple sensors in such a manner that the individual data from each of the sensors (such as force, vibration, temperature, and dimensions) are combined to provide a higher level of information and reliability. A common application of sensor fusion occurs when someone drinks a cup of hot coffee. Although we take such a quotidian event for granted, it can readily be seen that this process involves data input from the person's eyes, lips, tongue, and hands. Through our basic senses of sight, hearing, smell, taste, and touch, there is real-time monitoring of relative movements, positions, and temperatures. Thus, if the coffee is too hot, the movement of the cup toward the lips is controlled and adjusted accordingly. The earliest applications of sensor fusion were in robot movement control, missile flight tracking, and similar military applications,
primarily because these activities involve movements that mimic human behavior. Another example of sensor fusion is a machining operation in which a set of different but integrated sensors monitors (a) the dimensions and surface finish of the workpiece, (b) tool forces, vibrations, and wear, (c) the temperature in various regions of the tool-workpiece system, and (d) the spindle power.

An important aspect of sensor fusion is sensor validation: the failure of one particular sensor is detected so that the control system maintains high reliability. For this application, receiving redundant data from different sensors is essential. (A short code sketch of redundancy-based validation follows this article.) It can be seen that receiving and integrating all the data from the various sensors can be a complex problem. With advances in sensor size, quality, and technology, and with continued developments in computer-control systems and artificial neural networks, sensor fusion has become practical and available at low cost.

The mobility of a robot is relatively independent of the number of its components. By analogy with our own bodies, the waist provides one rotary degree of freedom; raising and bending the arm adds three degrees of freedom; and the wrist contributes three more for posture adjustment, so a general-purpose robot has six degrees of freedom. With six degrees of freedom the robot can fully reach any of the three positions and three orientations in space; of course, some robots have fewer than six.

Fiber-optic sensors are being developed for gas-turbine engines. These sensors will be installed in critical locations and will monitor the conditions inside the engine, such as temperature, pressure, and the flow of gas. Continuous monitoring of the signals from these sensors will help detect possible engine problems and also provide the necessary data for improving the efficiency of the engines.
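To make the redundancy-based sensor validation described above concrete, here is a minimal sketch. It is illustrative only: the three-channel setup, the read_sensor() stub, and the disagreement threshold are assumptions of this document, not part of the original article.

/*
 * sensor_validation_sketch.c: illustrative only. read_sensor() is a
 * hypothetical stub for reading one of three redundant ADC channels.
 */
#include <math.h>
#include <stdio.h>

#define DISAGREE_LIMIT 5.0  /* max tolerated deviation from the vote (assumed units) */

extern double read_sensor(int channel);  /* redundant channels 0, 1, 2 */

/* Median of three readings: the simplest vote that masks one bad sensor. */
static double median3(double a, double b, double c)
{
    if ((a <= b && b <= c) || (c <= b && b <= a)) return b;
    if ((b <= a && a <= c) || (c <= a && a <= b)) return a;
    return c;
}

/* Returns the fused value and flags any sensor that strays from the vote. */
double fused_reading(void)
{
    double r[3];
    for (int i = 0; i < 3; i++) r[i] = read_sensor(i);

    double m = median3(r[0], r[1], r[2]);
    for (int i = 0; i < 3; i++)
        if (fabs(r[i] - m) > DISAGREE_LIMIT)
            printf("sensor %d suspect: %.2f vs vote %.2f\n", i, r[i], m);
    return m;
}

The median vote lets the control system keep running on two good sensors while reporting the one whose failure would otherwise go unnoticed, which is exactly the validation role described in the text.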
Microcontrollers: Foreign Literature and Chinese Translation
A single-chip microcomputer (SCM) is an integrated circuit chip that uses large-scale integration to combine, on one piece of silicon, a CPU with data-processing capability, random-access memory (RAM), read-only memory (ROM), a variety of I/O ports, an interrupt system, and timer/counter functions (and possibly also display-driver circuitry, pulse-width-modulation circuitry, an analog multiplexer, and an A/D converter), forming a small but complete computer system.
The SCM is also known as a microcontroller, because it was first used in industrial control. It developed from processor chips that contained only a CPU.
The original design idea was to put a large number of peripherals and the CPU together on one chip, making the computer system smaller and easier to integrate into complex control devices with strict volume requirements. The Z80 was the first processor designed according to this idea; from that point on, the development of microcontrollers and of dedicated processors parted ways. (The sketch below illustrates what this on-chip integration looks like to software.)
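The integration described above, peripherals and CPU on one die, shows up in software as direct access to memory-mapped registers. The sketch below is a generic illustration only: the addresses and bit names are invented, not those of any particular microcontroller.

/*
 * mcu_register_sketch.c: illustrative only. The addresses and bit layouts
 * are invented; real values come from the specific MCU's datasheet.
 */
#include <stdint.h>

#define TIMER_CTRL  (*(volatile uint8_t *)0x4000u)  /* hypothetical timer control register */
#define TIMER_COUNT (*(volatile uint8_t *)0x4001u)  /* hypothetical free-running 8-bit counter */
#define PORT_OUT    (*(volatile uint8_t *)0x4010u)  /* hypothetical GPIO output latch */

#define TIMER_ENABLE 0x01u
#define LED_BIT      0x08u

void blink_forever(void)
{
    TIMER_CTRL |= TIMER_ENABLE;      /* start the on-chip timer */
    for (;;) {
        while (TIMER_COUNT != 0) { } /* wait for the counter to wrap to zero */
        PORT_OUT ^= LED_BIT;         /* toggle an LED on the on-chip port */
        while (TIMER_COUNT == 0) { } /* wait to leave zero before re-arming */
    }
}

Because the timer and the port live on the same die as the CPU, no external bus logic is needed; the program simply reads and writes fixed addresses.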
Microcontroller Foreign Literature and Chinese Translation
Validation and Testing of Design Hardening for Single Event Effects Using the 8051 Microcontroller

Abstract: With the dearth of dedicated radiation-hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we discuss the implications of validating these methods for single event effects (SEE) in the space environment. Topics include the types of tests that are required and the design coverage (i.e., design libraries: do they need validating for each application?). Finally, an 8051 microcontroller core from the NASA Institute of Advanced Microelectronics (IAμE) CMOS Ultra Low Power Radiation Tolerant (CULPRiT) design is evaluated for SEE mitigative techniques against two commercial 8051 devices.

Index Terms: Single Event Effects, Hardened-By-Design, microcontroller, radiation effects.
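The paper above validates hardening done at the design level. Purely to illustrate the redundancy principle behind many SEE-mitigation schemes, here is a sketch of software triple modular redundancy (TMR); the function names and the software setting are this document's assumptions, not the CULPRiT design itself.

/*
 * tmr_sketch.c: illustrative software TMR only. The hardened designs
 * discussed in the paper implement such redundancy in hardware.
 */
#include <stdint.h>

/* Three copies of a critical variable; a single event upset (SEU) is
 * assumed to corrupt at most one copy between consecutive votes. */
static volatile uint8_t copy_a, copy_b, copy_c;

/* Bitwise majority: each output bit is set iff at least two inputs agree. */
static uint8_t vote(uint8_t a, uint8_t b, uint8_t c)
{
    return (uint8_t)((a & b) | (a & c) | (b & c));
}

void tmr_store(uint8_t value)
{
    copy_a = copy_b = copy_c = value;
}

uint8_t tmr_load(void)
{
    uint8_t v = vote(copy_a, copy_b, copy_c);
    tmr_store(v);  /* scrub: rewrite all copies to repair a corrupted one */
    return v;
}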
Microcontrollers: Foreign Literature Translation (Graduation Project)
Introduction to Microcontrollers

The single-chip microcomputer is also known as the microcontroller (Microcontroller Unit), commonly abbreviated MCU; it was first used in the field of industrial control. The microcontroller grew out of dedicated processor chips that contained only a CPU. The earliest design idea was to integrate a large number of peripherals together with a CPU on one chip, so that the computer system became smaller and easier to build into complex control equipment with strict volume requirements. The Z80 was the earliest processor designed along these lines, and from then on microcontrollers and dedicated processors developed along separate paths.

Early microcontrollers were all 8-bit or 4-bit. The most successful was INTEL's 8031, which won wide praise for being simple, reliable, and respectably capable. The MCS-51 series of microcontrollers was then developed from the 8031, and systems based on that series remain in wide use to this day. As the demands of industrial control rose, 16-bit microcontrollers appeared, but they never found very wide application because their price/performance ratio was unattractive. After the 1990s, with the great expansion of consumer electronics, microcontroller technology advanced enormously. With the wide adoption of INTEL's i960 series, and especially of the later ARM series, 32-bit microcontrollers rapidly displaced 16-bit ones at the high end and moved into the mainstream market. The performance of the traditional 8-bit microcontroller has also risen sharply, with processing power hundreds of times greater than in the 1980s. Today a high-end 32-bit microcontroller runs at over 300 MHz, approaching the performance of a mid-1990s dedicated processor, while ordinary models leave the factory at prices as low as one US dollar and even the highest-end models cost only about ten.

Contemporary microcontroller systems are no longer developed and used only on bare hardware; a great many dedicated embedded operating systems are in wide use across the whole range of microcontrollers, and the high-end microcontrollers at the heart of PDAs and mobile phones can even run dedicated Windows and Linux operating systems directly. Microcontrollers suit embedded systems better than dedicated processors do, and so they have found the widest application. Indeed, the microcontroller is the most numerous computer in the world: almost every electronic or mechanical product in modern life has one built in, and mobile phones, telephones, calculators, household appliances, electronic toys, PDAs, and computer accessories such as mice each carry one or two microcontrollers.
A Brief History of the Microcomputer
(an IEEE paper; Maurice Wilkes, Computer Laboratory, University of Cambridge, 5 February 2004)

The first stored-program computers began to appear around 1950. Among them was the EDSAC, the Electronic Delay Storage Automatic Calculator, which we built at Cambridge in the summer of 1949.
The earliest experimental computers were built by people of wide-ranging knowledge, like myself. We all had a solid background in electronic engineering, and we were convinced that this experience would stand us in good stead. So it proved, although we also had much that was new to learn. Above all, transients had to be treated with care: on a television screen a transient causes no more than a harmless flash, but in a computer it leads to a whole train of errors. In circuit design we repeatedly found ourselves facing dilemmas. For example, one could use vacuum diodes as gates, as in the EDSAC, or pentodes with control signals on two grids, as was widely done in other designs; choices of this kind persisted until ready-made logic gates came into use. Anyone who has worked in the computer field will remember TTL, ECL, and CMOS; by now CMOS has won the dominant position.

In the early years, the IEE (Institution of Electrical Engineers) was still dominated by power engineering. We had to contend with a number of obstacles before the IEE would recognize that radio engineering, and the rapidly developing field of electronic engineering alongside it, was a branch of the profession in its own right. We also met many difficulties because the power engineers went about things differently from us. One rather infuriating point was that every paper published by the IEE was expected to open with a lengthy statement of earlier work, amounting to little more than an account of the difficulties met in the early stages through lack of experience.
Consolidation in the 1960s

In the early 1960s the age of individual heroes came to an end, and computers began to be taken really seriously. The number of computers in the world had grown greatly, and they were far more reliable than before. I attribute this to the start of high-level languages and to the birth of the first operating systems. Time-sharing began to get under way, and computer graphics came along with it. Above all, the transistor began to replace the vacuum tube. This change was an unavoidable challenge to the engineers of the day: they had to forget the circuits they knew so well and start afresh. One can only say that they rose to the challenge, even though the transition was by no means plain sailing.
Small-scale integration and the minicomputer

Soon it became possible to put more than one transistor on a single piece of silicon, and thus the integrated circuit was born. As time went on, the degree of integration reached the then-practical maximum number of transistors, or a somewhat smaller number of logic gates and flip-flops, that one chip could hold. Out of this came the well-known 7400 series of chips. Each gate or flip-flop was independent and had its own pins; they could be wired together to make a computer or anything else. These chips made a new kind of computer possible: it was called the minicomputer. It fell somewhat short of a mainframe, but it was powerful nonetheless, and far more affordable. A business department or a university could own a minicomputer, instead of having to obtain the expensive mainframe that only a large organization could justify.
As minicomputers became popular and their capabilities matured, the world was eager for their computing power, but it was constantly frustrated by the industry's inability to supply them in volume and by their considerable price. The arrival of the microcomputer resolved that situation. The fall in the cost of computing did not originate with the microcomputer; it was always going to happen. This is the sense of my remark in the abstract that "inflation" in the computer industry has gone the wrong way: as time passes, people get more for their money, not less.
Hardware research

The period I have been describing was a wonderful one for those engaged in computer hardware research. Users of the 7400 series could work at the level of gates and flip-flops, with far greater reliability than individual transistors had offered. Researchers in universities and elsewhere could give free rein to their imaginations and build any digital device that could be attached to a minicomputer. In the laboratory at Cambridge we built the CAP, a machine of astonishing logical capability. The 7400 series was still going strong in the mid-1970s and was adopted for the Cambridge Ring, a pioneer among broadband local area networks. The research on the ring design was published before Ethernet appeared. Before either of these systems existed, people had largely made do with local networks based on telegraph-style exchanges. A ring network demands high reliability: as pulses circulate around the ring, they must be continually amplified and regenerated. It was the high reliability of the 7400 series that gave us the courage to embark on the Cambridge Ring project.
The birth of the reduced instruction set computer

Early computers had simple instruction sets. As time went on, the designers of commercial computers added further features that they believed would improve performance. Few ways of testing such claims had been established; on the whole, the choice of features depended largely on the designers' intuition. In 1980 the RISC movement changed the computer world. The movement was set off by Patterson and Ditzel's paper entitled "The Case for the Reduced Instruction Set Computer". Besides giving us the eye-catching acronym RISC, the title conveyed insights into instruction-set design, and the RISC movement followed. In one sense it promoted pipelining, in which several instructions are at different stages of execution in the processor at the same time. Pipelining was not a new concept, but applying it to small computers was.

RISC benefited from the recent emergence of a method that made it possible to estimate a computer's performance without actually building the design: I mean the use of existing, powerful computers to simulate the new design. By simulating their designs, the advocates of RISC could predict with confidence that a RISC machine built from the same circuits as a conventional computer would match the performance of the best conventional machines. Simulation speeded up development and was widely adopted by computer designers. From then on, computer design became rather more rational and rather less of an art. Today, designers expect to have a room full of computers available for their simulations, not just one.
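The simulation method described above can be made concrete with a toy example. The sketch below is this document's own illustration, not the simulators the RISC researchers used: it interprets an invented three-instruction machine and counts cycles, which is the essence of estimating a design's performance before building it.

/*
 * toy_sim_sketch.c: a deliberately tiny instruction-level simulator.
 * The ISA and the one-cycle-per-instruction cost model are invented
 * for illustration; real design simulations model far more detail.
 */
#include <stdio.h>
#include <stdint.h>

enum op { LOADI, ADD, JNZ, HALT };         /* invented opcodes */
struct insn { enum op op; int a, b, c; };  /* up to three operands */

int main(void)
{
    int32_t  reg[4] = {0};
    uint64_t cycles = 0;

    /* Sum 5+4+3+2+1 into r0, then halt. */
    const struct insn prog[] = {
        { LOADI, 1,  5, 0 },  /* r1 = 5              */
        { LOADI, 2, -1, 0 },  /* r2 = -1             */
        { ADD,   0,  0, 1 },  /* r0 = r0 + r1        */
        { ADD,   1,  1, 2 },  /* r1 = r1 + r2        */
        { JNZ,   1,  2, 0 },  /* if r1 != 0 goto 2   */
        { HALT,  0,  0, 0 },
    };

    for (int pc = 0; ; ) {
        const struct insn *i = &prog[pc];
        cycles++;  /* assumed cost model: one cycle per instruction */
        switch (i->op) {
        case LOADI: reg[i->a] = i->b; pc++;                  break;
        case ADD:   reg[i->a] = reg[i->b] + reg[i->c]; pc++; break;
        case JNZ:   pc = reg[i->a] ? i->b : pc + 1;          break;
        case HALT:  printf("r0=%d after %llu cycles\n",
                           reg[0], (unsigned long long)cycles);
                    return 0;
        }
    }
}

Swapping in a different cost model (say, extra cycles for taken branches) and rerunning is exactly how competing design choices can be compared without building hardware.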
The x86 instruction set

Barring some major surprise, one rarely hears nowadays of computers that use the early RISC instruction sets.
The INTEL 8086 and its descendants are all closely tied to the x86, and the x86 architecture has come to dominate as the core instruction set of computing. The RISC instruction sets, for all their acknowledged success, now find their living space shrinking. For those of us engaged in academic computer research, the dominance of the x86 is a disappointment. No doubt commercial considerations go a long way toward explaining the x86's survival, but there are other reasons too, however much we might wish people would weigh the alternatives. High-level languages have not entirely eliminated work at the raw machine level, and we must keep reminding ourselves that strict machine-level compatibility with earlier applications has to be maintained. Things might have been different, however, had Intel's main aim been to produce a good RISC chip. There was one that achieved considerable success: I mean the i860 (not the i960, which was somewhat different). In many ways the i860 was an excellent chip, but its software interface did not suit it to use in a workstation.

There is one final, interesting point about the victory of the x86. Straightforward implementations in the earlier x86 manner could not possibly meet the ever-growing speed requirements set by RISC processors. Consequently the designers, although this is not at all obvious, did not implement the instruction set directly. Under the surface, a modern x86 chip contains a hidden implementation much like that of a chip implementing a RISC instruction set. On the fly, the incoming x86 code is, after suitable transformation, converted into this internal code and executed by the RISC core.

For the above summary of the RISC movement, I am much indebted to the latest editions of Hennessy and Patterson's books on computer design.
See in particular Computer Architecture, third edition, 2003, pp. 146, 151-154, and 157-158.

The IA-64 instruction set

Some while ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was originally intended chiefly to answer the general problem of a 64-bit address space, in which it followed the MIPS R4000 and the Alpha. However, whereas it was widely assumed that Intel would remain compatible with the x86 architecture, puzzlingly the opposite proved true. Furthermore, the design of IA-64 differs from all other instruction sets in its principal style of implementation. In particular, it requires an extra 6 bits with every instruction. This upsets the traditional balance between instruction length and information content, and it changes the ground rules that compiler writers had worked to. Although IA-64 is an entirely new instruction set, Intel issued the puzzling statement that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to work out what this was supposed to mean. The latest IA-64 processor, known as the Itanium, evidently needs special compatibility hardware; even so, x86 code runs on it rather slowly. Because of the complications described above, an IA-64 implementation needs more silicon area than a conventional instruction set does, and that implies greater cost. Such, at any rate, is common sense and the received standard of judgment, as Gordon Moore stressed repeatedly when he visited Cambridge for the recent opening of the Betty and Gordon Moore Library. I did not quite follow him when he said that from inside Intel the problem might look different, but I am prepared to accept that I no longer understand semiconductor economics at all. AMD has defined a 64-bit instruction set that is much more compatible with the x86, and they have made progress with it; the chips are not very large. Many people think this is what Intel should have done.
(Before this talk was delivered, Intel indicated that they would market a range of chips essentially compatible with AMD's.)

The advent of smaller transistors

Integration has gone on increasing, achieved by shrinking the original transistor so that more and more can be placed on one chip. Furthermore, the laws of physics have been on the manufacturers' side: as transistors shrink they become faster and simpler as well as smaller, so higher integration and higher speed have come together. There has been a further, more obvious advantage. Chips are made on silicon discs known as wafers. Each wafer carries a large number of individual chips, which are processed together and then separated. Because shrinkage puts more chips on each wafer, the price of each chip falls. Falling unit cost matters to the computer industry because, if the latest chips have the same performance as their predecessors but cost less, there is no reason to keep supplying the old products, at least not indefinitely; one product can serve the whole market. However, when all the costs are worked out in detail, once chips have shrunk to a certain point it becomes necessary, in order to keep the product competitive, to move to a larger wafer. The steady increases in size mean that wafers are no longer small things: at first they were only 1 to 2 inches in diameter, and by 2000 they had reached 12 inches. At first I did not quite see why the industry, already facing a string of problems from shrinking the chips, should take on the further problems of making larger wafers. I now understand that reducing unit cost is every bit as important to the industry as increasing the number of transistors on a chip, and that the venturesome investment in larger wafer fabs has proved justified.
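The unit-cost argument above is easy to quantify roughly. The following back-of-the-envelope sketch is this document's own illustration; the 1 cm^2 die is an assumed example, and edge losses and yield are ignored.

/*
 * wafer_math_sketch.c: rough chips-per-wafer arithmetic, ignoring edge
 * effects and yield. The 100 mm^2 die area is an assumed example value.
 */
#include <stdio.h>

static const double PI = 3.141592653589793;

static double chips_per_wafer(double wafer_diameter_mm, double die_area_mm2)
{
    double r = wafer_diameter_mm / 2.0;
    return PI * r * r / die_area_mm2;  /* wafer area / die area */
}

int main(void)
{
    double die = 100.0;  /* 1 cm^2 die = 100 mm^2 (assumed) */
    printf("2-inch (51 mm) wafer:   ~%.0f chips\n", chips_per_wafer(51.0,  die));
    printf("12-inch (300 mm) wafer: ~%.0f chips\n", chips_per_wafer(300.0, die));
    return 0;
}

With these assumptions the 12-inch wafer carries roughly 35 times as many chips as the 2-inch wafer (about 707 versus about 20), so each processing step is spread over far more parts, which is the unit-cost argument in the text.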
Integration is measured by the feature size, which, for a given technology, is half the distance between the wires on the densest chips. At present, chips with a feature size of 90 nanometers are going into production.
Doubts about Murphy's Law

In March 1997, Gordon Moore was an invited speaker at the celebrations of the centenary of the Cavendish Laboratory. It was during his talk that I first learned that silicon chips can be made both fast and low in power consumption, in defiance of what in Britain is called Murphy's Law, or Sod's Law. Moore said that in other fields you may have to trade the one off against the other, but that on silicon it is in fact possible to have both at once. According to a book available on the web, Murphy was an engineer in the US Air Force who worked on the effects of acceleration on the human body. But the law was already thoroughly familiar in my student days, when we knew it by a more prosaic name than either of those given above: we called it the Law of General Cussedness. It even appeared on one of our examination papers. The question had two parts, the first asking for a definition of the law and the second applying it to a problem. It read: (1) give a definition of the Law of General Cussedness; (2) for a cyclist riding around a circular course, derive a formula for his equilibrium, taking the wind into account in every case.