Microcontroller English Literature and Translation

A Brief Introduction to the Single-Chip Microcomputer
(Source: library electronic resources)

The single-chip microcomputer, also called the single-chip microcontroller, is not a chip that performs one particular logic function, but an entire computer system integrated onto a single chip. In short: one chip becomes a computer. It is small in size, light in weight, and cheap in price, which provides convenient conditions for study, application, and development. At the same time, learning to use the single-chip microcomputer is the best choice for understanding the principles and structure of computers.

Internally, the single-chip microcomputer uses modules similar to those of a computer, for instance a CPU, memory, a parallel bus, and storage devices that function like a hard disk. The difference is that the performance of these parts is much weaker than in our home computers, but the price is correspondingly low, generally not more than about ten yuan, which is quite sufficient for not-very-complex work such as controlling electrical appliances. You can see it inside the appliances we now use, such as fully automatic drum washing machines, range hoods, and VCD players, where it mainly serves as the core of the control section.

It is a kind of on-line, real-time control computer. "On-line" means field control: it must have strong anti-interference capability and low cost. This is also the main difference between it and an off-line computer, such as a home PC.

The single-chip microcomputer depends on its program, and the program can be modified. Different functions are realized through different programs, especially certain special, unique functions that other components would need great effort to achieve, and that some could hardly achieve at all even with great effort. A function that is not very complex, if implemented purely in hardware with the US 74 series developed in the 1950s, or the CD4000 series of the 1960s, would certainly require a large PCB. But if it were done with one of the series single-chip microcomputers the US successfully put on the market in the 1970s, the result would be vastly different, because the single-chip microcomputer can realize high intelligence, high efficiency, and high reliability simply through the program you write.

The CPU is the key component of a digital computer. Its purpose is to decode instructions received from memory and perform transfers, arithmetic, logic, and control operations with data stored in internal registers, memory, or I/O interface units. Externally, the CPU provides one or more buses for transferring instructions, data, and control information to and from components connected to it. A microcontroller is present in the keyboard and in the monitor in the generic computer; thus these components are also shaded. In such microcontrollers, the CPU may be quite different from those discussed in this chapter. The word lengths may be short, the number of registers small, and the instruction sets limited. Performance, relatively speaking, is poor, but adequate for the task.
Most important, the cost of these microcontrollers is very low, making their use cost-effective.

Because the single-chip microcomputer is cost-sensitive, the software that currently dominates is the most elementary assembly language, the lowest-level language above binary machine code. Why must something so primitive be used? Why not a high-level language, which has already reached the level of visual programming? The reason is simple: the single-chip microcomputer has neither a CPU like that of a home computer nor mass-storage equipment like a hard disk. Even a visual high-level-language script containing nothing but a single button reaches several tens of kilobytes in size. That is nothing for a home PC's hard disk, but it is unacceptable for a single-chip microcomputer. The utilization of a single-chip microcomputer's hardware resources must be very high, so assembly, primitive as it is, is still in massive use. By the same token, if a supercomputer's operating system and application software were put onto a home PC to run, the home PC could not withstand it either.

It can be said that the twentieth century spanned three "electric" eras: the electrical age, the electronic age, and now the computer age. However, this kind of computer usually refers to the personal computer, the PC, composed of a host machine, a keyboard, a monitor, and so on. There is another kind of computer that most people are not so familiar with: the single-chip microcomputer (also called a microcontroller) that endows all kinds of machinery with intelligence. As the name suggests, the smallest system of this kind of computer uses only a single integrated circuit to carry out simple computation and control. Because it is small, it is usually hidden in the "belly" of the controlled machinery. In the whole installation it plays a role like that of the human brain: if it goes wrong, the whole installation is paralyzed. Today the application domain of the single-chip microcomputer is already very widespread, including intelligent instruments, real-time industrial control, communications equipment, navigation systems, household appliances, and so on. Once a product adopts a single-chip microcomputer, it has the effect of raising the product to a new generation, and the adjective "intelligent" is often prefixed to the product name, as in the intelligent washing machine. Some factory technicians and amateur electronics developers make products in which either the circuit is too complex, or the function is too simple and extremely easily imitated. Investigating the reason, the sticking point is probably that the product does not use a single-chip microcomputer or another programmable logic device.
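The point that the program alone defines the function can be made concrete with a short example. The following is a minimal sketch for a standard 8051-family part, assuming the free SDCC toolchain and its 8051.h register declarations; the pin choice and delay constant are illustrative only, not taken from any text above.

```c
/* Minimal sketch of "the program defines the function": the same pin
   becomes a blinker purely through software. Assumes SDCC. */
#include <8051.h>   /* SDCC's standard 8051 SFR declarations */

static void delay(unsigned int n)
{
    /* crude software delay; real timing depends on the crystal frequency */
    while (n--) {
        volatile unsigned char i;
        for (i = 0; i < 255; i++)
            ;
    }
}

void main(void)
{
    while (1) {
        P1 ^= 0x01;   /* toggle P1.0 - e.g. an LED, a relay, a valve */
        delay(1000);  /* the product's "function" lives entirely in this loop */
    }
}
```

Reflashing the same chip with a different loop (scanning a keypad, driving a display) changes the product without touching the hardware, which is exactly the flexibility the passage describes.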
Microcontroller English Literature and Translation

Validation and Testing of Design Hardening for Single Event Effects Using the 8051 Microcontroller

Abstract — With the dearth of dedicated radiation-hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we will discuss the implications of validating these methods for single event effects (SEE) in the space environment. Topics include the types of tests that are required and the design coverage (i.e., design libraries: do they need validating for each application?). Finally, an 8051 microcontroller core from the NASA Institute of Advanced Microelectronics (IAμE) CMOS Ultra Low Power Radiation Tolerant (CULPRiT) design is evaluated for SEE mitigative techniques against two commercial 8051 devices.

Index Terms — Single Event Effects, Hardened-By-Design, microcontroller, radiation effects.

I. INTRODUCTION

NASA constantly strives to provide the best capture of science while operating in a space radiation environment using a minimum of resources [1,2]. With a relatively limited selection of radiation-hardened microelectronic devices that are often two or more generations of performance behind commercial state-of-the-art technologies, NASA's performance of this task is quite challenging. One method of alleviating this is the use of commercial foundry alternatives with no or minimally invasive design techniques for hardening. This is often called hardened-by-design (HBD).

Building custom-type HBD devices using design libraries and automated design tools may provide NASA the solution it needs to meet stringent science performance specifications in a timely, cost-effective, and reliable manner. However, one question still exists: traditional radiation-hardened devices have lot and/or wafer radiation qualification tests performed; what types of tests are required for HBD validation?

II. TESTING HBD DEVICES: CONSIDERATIONS

Test methodologies exist in the United States to qualify individual devices through standards and organizations such as ASTM, JEDEC, and MIL-STD-883. Typically, TID (Co-60) and SEE (heavy ion and/or proton) tests are required for device validation. So what is unique to HBD devices?

As opposed to a "regular" commercial-off-the-shelf (COTS) device or application-specific integrated circuit (ASIC) where no hardening has been performed, one needs to determine how validated the design library is, as opposed to determining the device hardness. That is, by using test chips, can we "qualify" a future device using the same library?

Consider if Vendor A has designed a new HBD library portable to foundries B and C. A test chip is designed, tested, and deemed acceptable. Nine months later a NASA flight project enters the mix by designing a new device using Vendor A's library. Does this device require complete radiation qualification testing? To answer this, other questions must be asked.

How complete was the test chip? Was there sufficient statistical coverage of all library elements to validate each cell? If the new NASA design uses a partially or insufficiently characterized portion of the design library, full testing might be required. Of course, if part of the HBD was relying on the inherent radiation hardness of a process, some of the tests (like SEL in the earlier example) may be waived.

Other considerations include speed of operation and operating voltage. For example, if the test chip was tested statically for SEE at a power supply voltage of 3.3 V, is the data applicable to a 100 MHz operating frequency at 2.5 V?
Dynamic considerations (i.e., non-static operation) include the propagated effects of Single Event Transients (SETs). These can be a greater concern at higher frequencies.

The point of these considerations is that the design library must be known, the coverage used during testing must be known, the test application must be thoroughly understood, and the characteristics of the foundry must be known. If all these are applicable or have been validated by the test chip, then no testing may be necessary. A task within NASA's Electronic Parts and Packaging (NEPP) Program was performed to explore these types of considerations.

III. HBD TECHNOLOGY EVALUATION USING THE 8051 MICROCONTROLLER

With their increasing capabilities and lower power consumption, microcontrollers are increasingly being used in NASA and DoD system designs. There are existing NASA and DoD programs that are doing technology development to provide HBD. Microcontrollers are one such vehicle being investigated to quantify the radiation hardness improvement. Examples of these programs are the 8051 microcontrollers being developed by Mission Research Corporation (MRC) and by IAμE (the focus of this study). As these HBD technologies become available, validation of the technology in the natural space radiation environment is required for NASA's use in spaceflight systems.

The 8051 microcontroller is an industry-standard architecture that has broad acceptance, wide-ranging applications, and development tools available. There are numerous commercial vendors that supply this controller or have it integrated into some type of system-on-a-chip structure. Both MRC and IAμE chose this device to demonstrate two distinctly different technologies for hardening. The MRC approach uses temporal latches that require specific timing to ensure that single event effects are minimized. The IAμE technology uses ultra-low power together with layout and architecture HBD design rules to achieve its results. These are fundamentally different from the approach of Aeroflex-United Technologies Microelectronics Center (UTMC), the commercial vendor of a radiation-hardened 8051, which built its 8051 microcontroller using radiation-hardened processes. This broad range of technology within one device structure makes the 8051 an ideal vehicle for performing this technology evaluation.

The objective of this work is the technology evaluation of the CULPRiT process [3] from IAμE. The process has been baselined against two other processes: the standard commercial 8051 device from Intel and a version using state-of-the-art processing from Dallas Semiconductor. By performing this side-by-side comparison, the cost-benefit, performance, and reliability trade study can be done.

In the performance of the technology evaluation, this task developed hardware and software for testing microcontrollers. A thorough process was followed to optimize the testing and obtain as complete an evaluation as possible. This included taking advantage of the available hardware and writing software that exercised the microcontroller such that all substructures of the processor were evaluated. This process is also leading to a more complete understanding of how to test complex structures, such as microcontrollers, and how to test these structures more efficiently in the future.

IV. TEST DEVICES

Three devices were used in this test evaluation. The first is the NASA CULPRiT device, which is the primary device to be evaluated.
The other two devices are two versions of a commercial 8051, manufactured by Intel and Dallas Semiconductor, respectively.

The Intel devices are the ROMless CMOS version of the classic MCS-51 8052 microcontroller. They are rated for operation at +5 V over a temperature range of 0 to 70 °C and at clock speeds of 3.5 MHz to 24 MHz. They are manufactured in Intel's P629.0 CHMOS III-E process.

The Dallas Semiconductor devices are similar in that they are ROMless 8052 microcontrollers, but they are enhanced in various ways. They are rated for operation from 4.25 to 5.5 V over 0 to 70 °C at clock speeds up to 25 MHz. They have a second full serial port built in, seven additional interrupts, a watchdog timer, a power-fail reset, dual data pointers, and variable-speed peripheral access. In addition, the core is redesigned so that the machine cycle is shortened for most instructions, resulting in an effective processing ability roughly 2.5 times greater (faster) than that of the standard 8052 device. None of these features, other than those inherent in the device operation, were utilized, in order to maximize the similarity between the Dallas and Intel test codes.

The CULPRiT technology device is a version of the MCS-51-family-compatible C8051 HDL core licensed from the Ultra Low Power (ULP) process foundry. The CULPRiT technology C8051 device is designed to operate at a supply voltage of 500 mV and includes an on-chip input/output signal level-shifting interface to conventional higher-voltage parts. The CULPRiT C8051 device therefore requires two separate supply voltages: the 500 mV supply and the desired interface voltage. The CULPRiT C8051 is ROMless and is intended to be instruction-set compatible with the MCS-51 family.

V. TEST HARDWARE

The 8051 Device Under Test (DUT) was tested as a component of a functional computer. Aside from the DUT itself, the other components of the DUT computer were removed from the immediate area of the irradiation beam.

A small card (one per DUT package type) with a unique hard-wired identifier byte contained the DUT, its crystal, and bypass capacitors (and voltage level shifters for the CULPRiT DUTs). This "DUT Board" was connected to the "Main Board" by a short 60-conductor ribbon cable. The Main Board had all other components required to complete the DUT Computer, including some which nominally are not necessary in some designs (such as external RAM, external ROM, and an address latch). The DUT Computer and the Test Control Computer were connected via a serial cable, and communications were established between the two by the Controller (which runs custom-designed serial interface software). This Controller software allowed commanding of the DUT, downloading of DUT code to the DUT, and real-time error collection from the DUT during and after irradiation. A 1 Hz signal source provided an external watchdog timing signal to the DUT, whose watchdog output was monitored via an oscilloscope. The power supply was monitored to provide indication of latchup.

VI. TEST SOFTWARE

The 8051 test software concept is straightforward. It was designed as a modular series of small test programs, each exercising a specific part of the DUT. Since each test was stand-alone, the tests were loaded independently of each other for execution on the DUT. This ensured that only the desired portion of the 8051 DUT was exercised during the test and helped pinpoint the location of errors that occurred during testing. All test programs resided on the controller PC until loaded via the serial interface to the DUT computer.
In this way, individual tests could be modified at any time without the necessity of burning PROMs. Additional tests could also be developed and added without impacting the overall test design. The only permanent code resident on the DUT was the boot code and the serial code-loader routines that established communications between the controller PC and the DUT.

All test programs implemented:

• An external Universal Asynchronous Receiver and Transmitter (UART) for transmission of error information and communication to the controller computer.
• An external real-time clock for tagging data errors.
• A watchdog routine designed to provide visual verification of 8051 health and to restart the test code if necessary.
• A "foul-up" routine to reset the program counter if it wandered out of code space.
• An external telemetry data storage memory to provide backup of data in the event of an interruption in data transmission.

A brief description of each of the software tests used is given below. It should be noted that for each test, the returned telemetry (including time tag) was sent to both the test controller and the telemetry memory, giving the highest reliability that all data would be captured.

Interrupt – This test used 4 of the 6 available interrupt vectors (Serial, External, Timer0 Overflow, and Timer1 Overflow) to trigger routines that sequentially modified a value in the accumulator, which was periodically compared to a known value. Unexpected values were transmitted together with register information.

Logic – This test performed a series of logic and math computations and provided three types of error identification: 1) addition/subtraction, 2) logic, and 3) multiplication/division. All miscompares between computations and expected results were transmitted with other relevant register information.

Memory – This test loaded internal data memory at locations D:0x20 through D:0xFF (or D:0x20 through D:0x80 for the CULPRiT DUT), indirectly, with a 0x55 pattern. Compares were performed continuously, and miscompares were corrected while error information and register values were transmitted.

Program Counter – The program counter was used to continuously fetch constants at various offsets in the code. Constants were compared with known values, and miscompares were transmitted along with relevant register information.

Registers – This test loaded each of the four banks (0, 1, 2, 3) of general-purpose registers with either 0xAA (for banks 0 and 2) or 0x55 (for banks 1 and 3). The pattern was alternated in order to test the Program Status Word (PSW) special function register, which controls general-purpose register bank selection. The general-purpose register banks were then compared with their expected values. All miscompares were corrected, and error information was transmitted.

Special Function Registers (SFR) – This test used learned static values of 12 of the 21 available SFRs and then constantly compared the learned value with the current one. Miscompares were reloaded with the learned value, and error information was transmitted.

Stack – This test performed arithmetic by pushing and popping operands on the stack. Unexpected results were attributed to errors on the stack or in the stack pointer itself and were transmitted with relevant register information.
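As an illustration of what the Memory test's inner loop might look like, here is a minimal C sketch for the commercial DUTs (internal data RAM 0x20–0xFF). It assumes the SDCC 8051 toolchain; report_error() is a hypothetical stand-in for the paper's serial telemetry routine, whose actual source is not published.

```c
#include <8051.h>        /* SDCC's 8051 register declarations */
#include <stdint.h>

#define PATTERN 0x55

/* hypothetical telemetry hook standing in for the paper's UART reporting */
extern void report_error(uint8_t addr, uint8_t bad_value);

void memory_test(void)
{
    uint8_t a;
    __idata uint8_t *p;

    /* load internal data memory 0x20-0xFF, indirectly, with the 0x55 pattern */
    for (a = 0x20; ; a++) {
        p = (__idata uint8_t *)a;
        *p = PATTERN;
        if (a == 0xFF) break;            /* stop before the 8-bit counter wraps */
    }

    for (;;) {                           /* compare continuously during irradiation */
        for (a = 0x20; ; a++) {
            p = (__idata uint8_t *)a;
            if (*p != PATTERN) {
                report_error(a, *p);     /* transmit error info and register values */
                *p = PATTERN;            /* correct the miscompare */
            }
            if (a == 0xFF) break;
        }
    }
}
```

The continuous compare-and-correct structure is what lets a single test distinguish transient bit flips from stuck errors while the beam is on.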
VII. TEST METHODOLOGY

The DUT Computer booted by executing the instruction code located at address 0x0000. Initially, the device at this location was an EPROM previously loaded with "Boot/Serial Loader" code. This code initialized the DUT Computer and interfaced through a serial connection to the controlling computer, the "Test Controller". The DUT Computer downloaded the test code and put it into Program Code RAM (located on the Main Board of the DUT Computer). It then activated a circuit which simultaneously performed two functions: it held the DUT reset line active for some time (~10 ms), and it remapped the test code residing in the Program Code RAM to address 0x0000 (the EPROM was no longer accessible in the DUT Computer's memory space). Upon waking from the reset, the DUT computer again booted by executing the instruction code at address 0x0000, except this time that code was not the Boot/Serial Loader code but the test code.

The Test Control Computer always retained the ability to force the reset/remap function, regardless of the DUT Computer's functionality. Thus, if the test ran without a Single Event Functional Interrupt (SEFI), either the DUT Computer itself or the Test Controller could terminate the test and allow the post-test functions to be executed. If a SEFI occurred, the Test Controller forced a reboot into the Boot/Serial Loader code and then executed the post-test functions. During any test of the DUT, the DUT exercised a portion of its functionality (e.g., register operations, internal RAM checks, or timer operations) at the highest utilization possible, while making a minimal periodic report to the Test Control Computer to convey that the DUT Computer was still functional. If this report ceased, the Test Controller knew that a SEFI had occurred. This periodic data was called "telemetry". If the DUT encountered an error that did not interrupt its functionality (e.g., a data register miscompare), it sent a more lengthy report through the serial port describing that error and continued with the test.

VIII. DISCUSSION

A. Single Event Latchup

The main argument for why latchup is not an issue for the CULPRiT devices is that the operating voltage of 0.5 V should be below the holding voltage required for latchup to occur. In addition, the cell library used also incorporates a heavy dual guard-barring scheme [4]. This scheme has been demonstrated multiple times to be very effective in rendering CMOS circuits completely immune to SEL up to test limits of 120 MeV-cm²/mg. This is true in circuits operating at 5, 3.3, and 2.5 V, as well as in the 0.5 V CULPRiT circuits. In one case, a 5 V circuit fabricated on non-epitaxial wafers even exhibited such SEL immunity.

B. Single Event Upset

The primary structure of the storage unit used in the CULPRiT devices is the Single Event Resistant Topology (SERT) [5]. Given the SERT cell topology and a single-upset-node assumption, it is expected that the SERT cell will be completely immune to SEUs occurring internal to the memory cell itself. Obviously, there are other things going on. The CULPRiT 8051 results reported here are quite similar to some results obtained with a CULPRiT CCSDS lossless compression chip (USES) [6]. The CULPRiT USES was synthesized using exactly the same tools and library as the CULPRiT 8051.

With the CULPRiT USES, the SEU cross-section data [7] was taken as a function of frequency at two LET values, 37.6 and 58.5 MeV-cm²/mg. In both cases the data fit well to a linear model in which the cross section is proportional to clock frequency.
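In symbols (our paraphrase of that model, not notation taken from the paper), the measured cross section at clock frequency f can be written as a frequency-proportional SET term sitting on top of a constant intercept:

```latex
% sigma_0 : zero-frequency ("dc-bias") intercept, attributed to upsets
%           occurring inside the SERT storage cell itself
% k f     : SETs generated in the combinational logic and captured by a
%           storage cell, with capture probability scaling with clock rate
\sigma(f) \;=\; \sigma_0 + k\,f
```

Fitting this line at each LET separates the two mechanisms discussed next: a vanishing σ₀ means essentially all observed upsets are captured SETs.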
In the LET 37.6 case, the zero-frequency intercept occurred essentially at the zero-cross-section point, indicating that virtually all of these SEUs were captured SETs from the combinational logic. The LET 58.5 data indicated that the SET (frequency-dependent) component was sitting on top of a "dc-bias" component; presumably a second upset mechanism occurs internal to the SERT cells only at a second, higher LET threshold.

The SET mitigation scheme used in the CULPRiT devices is based on the SERT cell's fault-tolerant input property when redundant input data is provided to separate storage nodes. The idea is that the redundant input data is provided through a total duplication of the combinational logic (referred to as "dual rail design"), such that a simple SET on one rail cannot produce an upset. Therefore, some other upset mechanism must be at work. It is possible that a single particle strike is placing an SET on both halves of the logic streams, allowing an SET to produce an upset. Care was taken to separate the dual sensitive nodes in the SERT cell layouts, but the automated place-and-route of the combinational logic paths may have placed dual sensitive nodes close enough together.

At this point, the theory for the CULPRiT SEU response is that at an LET of about 20, the energy deposition is sufficiently wide (and in the right locations) to produce an SET in both halves of the combinational logic streams. Increasing LET allows more regions to be sensitive to this effect, yielding a larger cross section. Further, the second SEU mechanism that starts at an LET of about 40–60 occurs when the charge-collection disturbance cloud becomes large enough to effectively upset multiple redundant storage nodes within the SERT cell itself. In this 0.35 μm library, the node separation is several microns. However, since it takes less charge to upset a node operating at 0.5 V, with transistors having effective thresholds around 70 mV, this is likely the effect being observed. Also, the fact that the per-bit memory upset cross sections for the CULPRiT devices and the commercial technologies are approximately equal, as shown in Figure 9, indicates that the cell itself has become sensitive to upset.

IX. SUMMARY

A detailed comparison of the SEE sensitivity of an HBD technology (CULPRiT), utilizing the 8051 microcontroller as a test vehicle, has been completed. This paper discusses the test methodology used and presents a comparison of the commercial versus CULPRiT technologies based on the data taken. The CULPRiT devices consistently show significantly higher threshold LETs and an immunity to latchup. In all but the memory test at the highest LETs, the cross-section curves for all upset events are one to two orders of magnitude lower than those of the commercial devices. Additionally, a theory is presented, based on the CULPRiT technology, that explains these results.

This paper also demonstrates a test methodology for quantifying the level of hardness designed into an HBD technology. By using the HBD technology in a real-world device structure (i.e., not just a test chip) and comparing results to equivalent commercial devices, one can have confidence in the level of hardness that would be available from that HBD technology in any circuit application.

ACKNOWLEDGEMENTS

The authors would like to acknowledge the sponsors of this work: the NASA Electronic Parts and Packaging Program (NEPP), NASA Flight Programs, and the Defense Threat Reduction Agency (DTRA).
Microcontroller English Literature and Translation

Appendix A: Original English Text

Temperature Control Using a Microcontroller: An Interdisciplinary Undergraduate Engineering Design Project

James S. McDonald
Department of Engineering Science
Trinity University
San Antonio, TX 78212

Abstract

This paper describes an interdisciplinary design project which was done under the author's supervision by a group of four senior students in the Department of Engineering Science at Trinity University. The objective of the project was to develop a temperature control system for an air-filled chamber. The system was to allow entry of a desired chamber temperature in a prescribed range and to exhibit overshoot and steady-state temperature error of less than 1 degree Kelvin in the actual chamber temperature step response. The details of the design developed by this group of students, based on a Motorola MC68HC05-family microcontroller, are described. The pedagogical value of the problem is also discussed through a description of some of the key steps in the design process. It is shown that the solution requires broad knowledge drawn from several engineering disciplines including electrical, mechanical, and control systems engineering.

1 Introduction

The design project which is the subject of this paper originated from a real-world application. A prototype of a microscope slide dryer had been developed around an Omega model CN-390 temperature controller, and the objective was to develop a custom temperature control system to replace the Omega system. The motivation was that a custom controller targeted specifically for the application should be able to achieve the same functionality at a much lower cost, as the Omega system is unnecessarily versatile and equipped to handle a wide variety of applications.

The mechanical layout of the slide dryer prototype is shown in Figure 1. The main element of the dryer is a large, insulated, air-filled chamber in which microscope slides, each with a tissue sample encased in paraffin, can be set on caddies. In order that the paraffin maintain the proper consistency, the temperature in the slide chamber must be maintained at a desired (constant) temperature. A second chamber (the electronics enclosure) houses a resistive heater and the temperature controller, and a fan mounted on the end of the dryer blows air across the heater, carrying heat into the slide chamber.

This design project was carried out during academic year 1996–97 by four students under the author's supervision as a Senior Design project in the Department of Engineering Science at Trinity University. The purpose of this paper is to describe the problem and the students' solution in some detail, and to discuss some of the pedagogical opportunities offered by an interdisciplinary design project of this type. The students' own report was presented at the 1997 National Conference on Undergraduate Research [1]. Section 2 gives a more detailed statement of the problem, including performance specifications, and Section 3 describes the students' design. Section 4 makes up the bulk of the paper and discusses in some detail several aspects of the design process which offer unique pedagogical opportunities. Finally, Section 5 offers some conclusions.

2 Problem Statement

The basic idea of the project is to replace the relevant parts of the functionality of an Omega CN-390 temperature controller using a custom-designed system.
The application dictates that temperature settings are usually kept constant for long periods of time, but it's nonetheless important that step changes be tracked in a "reasonable" manner. Thus the main requirements boil down to:

· allowing a chamber temperature set-point to be entered,
· displaying both set-point and actual temperatures, and
· tracking step changes in set-point temperature with acceptable rise time, steady-state error, and overshoot.

Although not explicitly a part of the specifications in Table 1, it was clear that the customer desired digital displays of set-point and actual temperatures, and that set-point temperature entry should be digital as well (as opposed to, say, through a potentiometer setting).

3 System Design

The requirements for digital temperature displays and set-point entry alone are enough to dictate that a microcontroller-based design is likely the most appropriate. Figure 2 shows a block diagram of the students' design.

The microcontroller, a Motorola MC68HC705B16 (6805 for short), is the heart of the system. It accepts inputs from a simple four-key keypad which allow specification of the set-point temperature, and it displays both set-point and measured chamber temperatures using two-digit seven-segment LED displays controlled by a display driver. All these inputs and outputs are accommodated by parallel ports on the 6805. Chamber temperature is sensed using a pre-calibrated thermistor and input via one of the 6805's analog-to-digital inputs. Finally, a pulse-width modulation (PWM) output on the 6805 is used to drive a relay which switches line power to the resistive heater off and on.

Figure 3 shows a more detailed schematic of the electronics and their interfacing to the 6805. The keypad, a Storm 3K041103, has four keys which are interfaced to pins PA0–PA3 of Port A, configured as inputs. One key functions as a mode switch. Two modes are supported: set mode and run mode. In set mode, two of the other keys are used to specify the set-point temperature: one increments it and one decrements it. The fourth key is unused at present. The LED displays are driven by a Harris Semiconductor ICM7212 display driver interfaced to pins PB0–PB6 of Port B, configured as outputs. The temperature-sensing thermistor drives, through a voltage divider, pin AN0 (one of eight analog inputs). Finally, pin PLMA (one of two PWM outputs) drives the heater relay.

Software on the 6805 implements the temperature control algorithm, maintains the temperature displays, and alters the set-point in response to keypad inputs. Because it is not complete at this writing, software will not be discussed in detail in this paper. The control algorithm in particular has not been determined, but it is likely to be a simple proportional controller and certainly not more complex than a PID. Some control design issues will be discussed in Section 4, however.

4 The Design Process

Although essentially the project is just to build a thermostat, it presents many nice pedagogical opportunities. The knowledge and experience base of a senior engineering undergraduate are just enough to bring him or her to the brink of a solution to various aspects of the problem. Yet, in each case, real-world considerations complicate the situation significantly. Fortunately these complications are not insurmountable, and the result is a very beneficial design experience. The remainder of this section looks at a few aspects of the problem which present the type of learning opportunity just described.
Section 4.1 discusses some of the features of a simplified mathematical model of the thermal properties of the system and how it can be easily validated experimentally. Section 4.2 describes how realistic control algorithm designs can be arrived at using introductory concepts in control design. Section 4.3 points out some important deficiencies of such a simplified modeling/control design process and how they can be overcome through simulation. Finally, Section 4.4 gives an overview of some of the microcontroller-related design issues which arise and the learning opportunities they offer.

4.1 Mathematical Model

Lumped-element thermal systems are described in almost any introductory linear control systems text, and just this sort of model is applicable to the slide dryer problem. Figure 4 shows a second-order lumped-element thermal model of the slide dryer. The state variables are the temperatures Ta of the air in the box and Tb of the box itself. The inputs to the system are the power output q(t) of the heater and the ambient temperature T∞. ma and mb are the masses of the air and the box, respectively, and Ca and Cb their specific heats. μ1 and μ2 are heat transfer coefficients from the air to the box and from the box to the external world, respectively.

It's not hard to show that the (linearized) state equations corresponding to Figure 4 are

    ma Ca (dTa/dt) = q(t) − μ1 (Ta − Tb)                        (1)
    mb Cb (dTb/dt) = μ1 (Ta − Tb) − μ2 (Tb − T∞)                (2)

Taking Laplace transforms of (1) and (2) and solving for Ta(s), which is the output of interest, gives the following open-loop model of the thermal system:

    Ta(s) = [K (τz s + 1) / D(s)] Q(s) + [K∞ / D(s)] T∞(s)

where K and K∞ are constants and D(s) is a second-order polynomial. K, τz, and the coefficients of D(s) are functions of the various parameters appearing in (1) and (2).

Of course the various parameters in (1) and (2) are completely unknown, but it's not hard to show that, regardless of their values, D(s) has two real roots. Therefore the main transfer function of interest (which is the one from Q(s), since we'll assume constant ambient temperature) can be written

    Gaq(s) = K (τz s + 1) / [(τp1 s + 1)(τp2 s + 1)]            (3)

Moreover, it's not too hard to show that 1/τp1 < 1/τz < 1/τp2, i.e., that the zero lies between the two poles. Both of these are excellent exercises for the student, and the result is the open-loop pole-zero diagram of Figure 5.

Obtaining a complete thermal model, then, is reduced to identifying the constant K and the three unknown time constants in (3). Four unknown parameters is quite a few, but simple experiments show that 1/τp1 ≪ 1/τz, 1/τp2, so that τz, τp2 ≈ 0 are good approximations. Thus the open-loop system is essentially first-order and can therefore be written

    Gaq(s) = K / (τ s + 1)                                      (4)

(where the subscript p1 has been dropped).

Simple open-loop step response experiments show that, for a wide range of initial temperatures and heat inputs, K ≈ 0.14 °/W and τ ≈ 295 s.

4.2 Control System Design

Using the first-order model of (4) for the open-loop transfer function Gaq(s), and assuming for the moment that linear control of the heater power output q(t) is possible, the block diagram of Figure 6 represents the closed-loop system. Td(s) is the desired, or set-point, temperature, C(s) is the compensator transfer function, and Q(s) is the heater output in watts.

Given this simple situation, introductory linear control design tools such as the root locus method can be used to arrive at a C(s) which meets the step response requirements on rise time, steady-state error, and overshoot specified in Table 1. The upshot, of course, is that a proportional controller with sufficient gain can meet all specifications.
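To make that conclusion concrete, here is a minimal sketch of such a proportional law in C. It is illustrative only (the paper notes the students' software was unfinished, so this is not their code), and the gain value and function names are hypothetical. The saturation branch is exactly where the heater-power limit discussed in the next paragraph enters.

```c
#include <stdint.h>

#define KP       40    /* proportional gain, duty counts per degree (illustrative) */
#define DUTY_MAX 255   /* full-scale PWM duty commanded to the heater relay */

/* map set-point minus measured temperature to a PWM duty command */
uint8_t control_update(int16_t setpoint, int16_t measured)
{
    int32_t u = (int32_t)KP * (setpoint - measured);

    if (u < 0)        u = 0;         /* the heater cannot cool */
    if (u > DUTY_MAX) u = DUTY_MAX;  /* finite heater power: actuator saturation */
    return (uint8_t)u;
}
```

With a first-order plant like (4), raising KP in this loop reduces steady-state error and rise time until the command pins at DUTY_MAX, which is the limitation described next.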
Overshoot is impossible, and increasing gain decreases both steady-state error and rise time. Unfortunately, sufficient gain to meet the specifications may require larger heat outputs than the heater is capable of producing. This was indeed the case for this system, and the result is that the rise-time specification cannot be met. It is quite revealing to the student how useful such an oversimplified model, carefully arrived at, can be in determining overall performance limitations.

4.3 Simulation Model

Gross performance and its limitations can be determined using the simplified model of Figure 6, but there are a number of other aspects of the closed-loop system whose effects on performance are not so simply modeled. Chief among these are:

· quantization error in analog-to-digital conversion of the measured temperature, and
· the use of PWM to control the heater.

Both of these are nonlinear and time-varying effects, and the only practical way to study them is through simulation (or experiment, of course).

Figure 7 shows a Simulink block diagram of the closed-loop system which incorporates these effects. A/D converter quantization and saturation are modeled using standard Simulink quantizer and saturation blocks. Modeling PWM is more complicated and requires a custom S-function to represent it.

This simulation model has proven particularly useful in gauging the effects of varying the basic PWM parameters and hence selecting them appropriately. (I.e., the longer the period, the larger the temperature error PWM introduces. On the other hand, a long period is desirable to avoid excessive relay "chatter," among other things.) PWM is often difficult for students to grasp, and the simulation model allows an exploration of its operation and effects which is quite revealing.

4.4 The Microcontroller

Simple closed-loop control, keypad reading, and display control are some of the classic applications of microcontrollers, and this project incorporates all three. It is therefore an excellent all-around exercise in microcontroller applications. In addition, because the project is to produce an actual packaged prototype, it won't do to use a simple evaluation board with the I/O pins jumpered to the target system. Instead, it's necessary to develop a complete embedded application. This entails the choice of an appropriate part from the broad range offered in a typical microcontroller family and learning to use a fairly sophisticated development environment. Finally, a custom printed-circuit board for the microcontroller and peripherals must be designed and fabricated.

Microcontroller Selection. In view of existing local expertise, the Motorola line of microcontrollers was chosen for this project. Still, this does not narrow the choice down much. A fairly disciplined study of system requirements is necessary to specify which microcontroller, out of scores of variants, is required for the job. This is difficult for students, as they generally lack the experience and intuition needed, as well as the perseverance to wade through manufacturers' selection guides. Part of the problem is in choosing methods for interfacing the various peripherals (e.g., what kind of display driver should be used?). A study of relevant Motorola application notes [2, 3, 4] proved very helpful in understanding what basic approaches are available and what microcontroller/peripheral combinations should be considered. The MC68HC705B16 was finally chosen on the basis of its available A/D inputs and PWM outputs as well as its 24 digital I/O lines.
In retrospect this is probably overkill, as only one A/D channel, one PWM channel, and 11 I/O pins are actually required (see Figure 3). The decision was made to err on the safe side because a complete development system specific to the chosen part was necessary, and the project budget did not permit a second such system to be purchased should the first prove inadequate.

Microcontroller Application Development. Breadboarding of the peripheral hardware, development of microcontroller software, and final debugging and testing of a custom printed-circuit board for the microcontroller and peripherals all require a development environment of some kind. The choice of a development environment, like that of the microcontroller itself, can be bewildering and requires some faculty expertise. Motorola makes three grades of development environment, ranging from simple evaluation boards (at around $100) to full-blown real-time in-circuit emulators (at more like $7500). The middle option was chosen for this project: the MMEVS, which consists of a platform board (which supports all 6805-family parts), an emulator module (specific to B-series parts), and a cable and target-head adapter (package-specific). Overall, the system costs about $900 and provides, with some limitations, in-circuit emulation capability. It also comes with the simple but sufficient software development environment RAPID [5]. Students find learning to use this type of system challenging, but the experience they gain in real-world microcontroller application development greatly exceeds the typical first-course experience using simple evaluation boards.

Printed-Circuit Board. The layout of a simple (though definitely not trivial) printed-circuit board is another practical learning opportunity presented by this project. The final board layout, with package outlines, is shown (at 50% of actual size) in Figure 8. The relative simplicity of the circuit makes manual placement and routing practical—in fact, it likely gives better results than automatic routing in an application like this—and the student is therefore exposed to fundamental issues of printed-circuit layout and basic design rules. The layout software used was the very nice package pcb, and the board was fabricated in-house with the aid of our staff electronics technician.

5 Conclusion

The aim of this paper has been to describe an interdisciplinary undergraduate engineering design project: a microcontroller-based temperature control system with digital set-point entry and set-point/actual temperature display. A particular design of such a system has been described, and a number of design issues which arise—from a variety of engineering disciplines—have been discussed. Resolution of these issues generally requires knowledge beyond that acquired in introductory courses, but realistically accessible to advanced undergraduate students, especially with the advice and supervision of faculty.

Desirable features of the problem, from a pedagogical viewpoint, include the use of a microcontroller with simple peripherals, the opportunity to usefully apply introductory-level modeling of physical systems and design of closed-loop controls, and the need for relatively simple experimentation (for model validation) and simulation (for detailed performance prediction).
Also desirable are some of the technology-related aspects of the problem, including practical use of resistive heaters and temperature sensors (requiring knowledge of PWM and calibration techniques, respectively), microcontroller selection and use of development systems, and printed-circuit design.

Acknowledgements

The author would like to acknowledge the hard work, dedication, and ability shown by the students involved in this project: Mark Langsdorf, Matt Rall, Pam Rinehart, and David Schuchmann. It is their project, and credit for its success belongs to them.

References

[1] M. Langsdorf, M. Rall, D. Schuchmann, and P. Rinehart, "Temperature control of a microscope slide dryer," in 1997 National Conference on Undergraduate Research, (Austin, TX), April 1997. Poster presentation.
[2] Motorola, Inc., Phoenix, AZ, Temperature Measurement and Display Using the MC68HC05B4 and the MC14489, 1990. Motorola Semiconductor Application Note AN431.
[3] Motorola, Inc., Phoenix, AZ, HC05 MCU LED Drive Techniques Using the MC68HC705J1A, 1995. Motorola Semiconductor Application Note AN1238.
[4] Motorola, Inc., Phoenix, AZ, HC05 MCU Keypad Decoding Techniques Using the MC68HC705J1A, 1995. Motorola Semiconductor Application Note AN1239.
[5] Motorola, Inc., Phoenix, AZ, RAPID Integrated Development Environment User's Manual, 1993. (RAPID was developed by P & E Microcomputer Systems, Inc.)
PLC and Microcontroller Graduation Thesis Literature Translation (Chinese-English)

Foreign Literature Translation: Microcontroller Code Protection and Attack Techniques

To prevent unauthorized access to or copying of a microcontroller's on-chip program, most microcontrollers have encryption lock bits or encryption bytes that protect the internal program. If the encryption lock bit is enabled (locked) during programming, the program in the microcontroller cannot be read out directly with an ordinary programmer. This is so-called copy protection, or locking. In fact, such protective measures are very fragile and easily broken. With the aid of special-purpose or home-made equipment, and by exploiting loopholes in the chip design or flaws in the software, an attacker can use a variety of technical means to extract key information from the chip and obtain the program inside the microcontroller. Therefore, as an electronic-product design engineer it is essential to understand the latest microcontroller attack technologies, so as to know oneself and one's adversary; only then can one effectively prevent a product on which one has laboriously spent a great deal of money and time from being counterfeited by others overnight.

Microcontroller attack techniques

At present there are four main techniques for attacking microcontrollers:

(1) Software attack. This technique typically exploits the processor's communication interfaces and the protocols, encryption algorithms, or security holes in those algorithms. A case in point of a successful software attack is the attack on the early ATMEL AT89C series microcontrollers. The attacker exploited a loophole in the sequencing of the erase operation in this series: after erasing the encryption lock bit, the attacker stopped the next step, the erasure of the on-chip program memory data, thereby turning an encrypted microcontroller into an unencrypted one, and then used a programmer to read out the on-chip program.

(2) Electronic detection attack. This technique monitors, with high temporal resolution, the analog characteristics of all power supply and interface connections during normal operation, and implements the attack by monitoring the device's electromagnetic radiation characteristics. Because a microcontroller is an active electronic device, its supply-current consumption changes correspondingly as it executes different instructions. By analyzing and measuring these changes with special electronic measuring instruments and statistical methods, specific key information in the microcontroller can be obtained.

(3) Fault-generation attack. This technique uses abnormal operating conditions to make the processor malfunction and then provides additional access for the attack. The most widely used fault-generation attacks employ voltage surges and clock surges. Low-voltage and high-voltage attacks can be used to disable the protection circuitry or to force the processor to execute erroneous operations. A clock transient may reset the protection circuit without destroying the protected information. Power and clock transients can also affect the decoding and execution of individual instructions in certain processors.

(4) Probe technology. This technique directly exposes the chip's internal wiring and then observes, manipulates, and interferes with the microcontroller to achieve the goal of the attack.

For convenience, people divide the above four attack techniques into two categories. The first category is the invasive attack (physical attack); this kind of attack requires destroying the package and then, with the aid of semiconductor test equipment, microscopes, and micropositioners, can take hours or even weeks to complete in a specialized laboratory. All microprobing techniques are invasive attacks. The other three methods are non-invasive attacks; the attacked microcontroller is not physically damaged. Non-invasive attacks are especially dangerous in certain situations, because the equipment they require can usually be home-built and upgraded, and is therefore very cheap.

Most non-invasive attacks require the attacker to have good knowledge of the processor and of software. Invasive probe attacks, by contrast, do not require much initial knowledge, and a whole set of similar techniques can usually be used against a wide range of products. Attacks on a microcontroller therefore often start from invasive reverse engineering, and the accumulated experience helps to develop cheaper and faster non-invasive attack techniques.
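Before turning to the invasive process, the statistical idea behind the electronic-detection attack in (2) can be sketched in a few lines of C. This is a conceptual illustration of the difference-of-means technique used in power analysis, not a working attack tool; the trace data and the guessed-bit classification are assumed to come from elsewhere.

```c
#include <stddef.h>

#define SAMPLES 1000   /* points per captured power-consumption trace */

/* Average the traces in two groups (split by a guessed key/data bit) and
   return the pointwise difference of means. A pronounced peak suggests the
   guess correlates with the chip's instruction-dependent power consumption. */
void diff_of_means(const double traces[][SAMPLES], const int guess_bit[],
                   size_t n, double out[SAMPLES])
{
    static double sum1[SAMPLES], sum0[SAMPLES];
    size_t n1 = 0, n0 = 0, i, t;

    for (t = 0; t < SAMPLES; t++) { sum1[t] = 0.0; sum0[t] = 0.0; }

    for (i = 0; i < n; i++) {
        const double *tr = traces[i];
        if (guess_bit[i]) { n1++; for (t = 0; t < SAMPLES; t++) sum1[t] += tr[t]; }
        else              { n0++; for (t = 0; t < SAMPLES; t++) sum0[t] += tr[t]; }
    }
    for (t = 0; t < SAMPLES; t++)
        out[t] = (n1 ? sum1[t] / n1 : 0.0) - (n0 ? sum0[t] / n0 : 0.0);
}
```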
General process of an invasive attack

The first step of an invasive attack is to remove the chip package. There are two methods of achieving this. The first is to dissolve the package completely, exposing the metal interconnect. The second is to remove only the plastic above the silicon die. The first method requires the chip to be mounted on a test fixture and operated with the aid of a bonding station. The second method, besides requiring the attacker to have a certain amount of knowledge and the necessary skill, also demands personal ingenuity and patience, but it is relatively convenient to carry out.

The plastic above the chip can be opened with a knife, and the epoxy resin around the chip can be etched away with fuming nitric acid. Hot fuming nitric acid dissolves the chip package without affecting the chip and its wiring. This process is generally carried out under very dry conditions, because the presence of water could corrode the exposed aluminum wire connections.

The chip is then cleaned with acetone in an ultrasonic bath to remove the residual nitric acid, rinsed with clean water to remove the salts, and dried. Without an ultrasonic bath, this step is generally skipped. In that case the chip surface will be a little dirty, but this does not much affect the operation of the ultraviolet light on the chip.

The final step is to find the location of the protection fuse and expose it to ultraviolet light. A microscope with a magnification of at least 100× is used, generally tracking the wiring that runs from the programming-voltage input pin, to find the protection fuse. If no microscope is available, a simple search can be carried out by exposing different parts of the chip to ultraviolet light and observing the results. During this operation, an opaque piece of paper is placed over the chip to protect the program memory from being erased by the ultraviolet light. Exposing the protection fuse to ultraviolet light for 5 to 10 minutes destroys the protective function of the lock bit; afterwards, the contents of the program memory can be read out directly with a simple programmer.

For microcontrollers that use a protective layer to shield the EEPROM cells, resetting the protection circuit with ultraviolet light is not feasible. For this type of microcontroller, microprobing techniques are generally used to read out the memory contents. After the chip package has been opened, placing the chip under a microscope makes the structures of interest easy to find.
Foreign Literature on Microcontrollers with Chinese Translation

The single-chip microcomputer (SCM) is an integrated circuit chip. It uses large-scale integration technology to combine, on one piece of silicon, a CPU with data-processing capability, random-access memory (RAM), read-only memory (ROM), a variety of I/O ports, an interrupt system, and timer/counter functions (possibly also including display-driver circuitry, a pulse-width modulation circuit, an analog multiplexer, and an A/D converter circuit), constituting a small but complete computer system. The SCM is also known as a microcontroller, because it was first used in industrial control.
The single-chip microcomputer developed from dedicated processor chips containing only a CPU. The original design idea was to place a large number of peripherals and the CPU together on one chip, making the computer system smaller and easier to integrate into complex control equipment with strict demands on volume. The Z80 was one of the first processors designed according to this idea; from then on, the development of microcontrollers and that of dedicated processors parted ways.
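The component list above becomes tangible in even the smallest program. The sketch below, written for a standard 8051-family part under the assumption of the SDCC toolchain and its 8051.h declarations, uses nothing but on-chip resources: Timer 0, the interrupt system, and a parallel port. The reload constants assume an illustrative 12 MHz crystal.

```c
#include <8051.h>

static volatile unsigned char ticks;    /* 8-bit, so reads are atomic */

void timer0_isr(void) __interrupt(1)    /* Timer 0 overflow vector */
{
    TH0 = 0x3C; TL0 = 0xB0;             /* reload 65536-50000: ~50 ms at 12 MHz */
    ticks++;
}

void main(void)
{
    TMOD = 0x01;                        /* Timer 0 in 16-bit mode */
    TH0 = 0x3C; TL0 = 0xB0;
    ET0 = 1; EA = 1;                    /* enable Timer 0 and global interrupts */
    TR0 = 1;                            /* start the timer */
    while (1) {
        if (ticks >= 20) {              /* ~1 s elapsed */
            ticks = 0;
            P1 ^= 0x01;                 /* heartbeat on P1.0 */
        }
    }
}
```

Everything the loop relies on — time base, interrupt dispatch, output drive — is on the same die, which is precisely what separates a microcontroller from a CPU-only dedicated processor.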
Microcontroller design: foreign literature translation (Chinese and English)

Appendix A: Translated document: AT89S52/AT89S51 datasheet

AT89S52 (translation)

Main features:
- Compatible with MCS-51 microcontroller products
- 8K bytes of in-system programmable Flash memory
- Endurance: 1000 write/erase cycles
- Fully static operation: 0 Hz to 33 MHz
- Three-level program memory lock
- 32 programmable I/O lines
- Three 16-bit timer/counters
- Eight interrupt sources
- Full-duplex UART serial channel
- Low-power idle and power-down modes
- Interrupt recovery from power-down mode
- Watchdog timer
- Dual data pointer
- Power-off flag

Function description

The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K bytes of in-system programmable Flash memory. The device is manufactured using Atmel's high-density non-volatile memory technology and is fully compatible with the industry-standard 80C51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system, or by a conventional non-volatile memory programmer. By combining a versatile 8-bit CPU with in-system programmable Flash on a monolithic chip, the AT89S52 provides a highly flexible and cost-effective solution for many embedded control applications.

The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of RAM, 32 I/O lines, a watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector two-level interrupt architecture, a full-duplex serial port, and on-chip oscillator and clock circuitry. In addition, the AT89S52 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. In idle mode the CPU stops working while the RAM, timer/counters, serial port, and interrupt system continue to function. In power-down mode the RAM contents are retained and the oscillator is frozen, disabling all other chip functions until the next interrupt or a hardware reset.
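On 8051-family parts such as the AT89S52, the two power-saving modes are selected through bits in the PCON register (bit 0, IDL, for idle; bit 1, PD, for power-down). A minimal sketch in Keil-C51-style C is shown below, assuming the standard <reg52.h> register definitions; exact wake-up behaviour should be checked against the datasheet.

```c
#include <reg52.h>              /* Keil C51 register definitions */

#define PCON_IDL 0x01           /* PCON bit 0: idle mode         */
#define PCON_PD  0x02           /* PCON bit 1: power-down mode   */

void enter_idle(void)
{
    /* CPU halts; RAM, timer/counters, serial port and interrupts
       keep running. Any enabled interrupt or a reset clears the
       bit and execution resumes after this statement.           */
    PCON |= PCON_IDL;
}

void enter_power_down(void)
{
    /* Oscillator frozen, RAM retained; exit by hardware reset or
       (on the AT89S52) a wake-up interrupt.                      */
    PCON |= PCON_PD;
}
```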
Pin description (see the block diagram)

VCC: supply voltage. GND: ground.

Port 0: Port 0 is an 8-bit open-drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to the Port 0 pins, the pins can be used as high-impedance inputs. Port 0 is also multiplexed as the low-order address/data bus during accesses to external program and data memory; in this mode it has internal pull-ups. During Flash programming, Port 0 receives the code bytes, and during program verification it outputs the code bytes; external pull-ups are required during program verification.

Port 1: Port 1 is an 8-bit bidirectional I/O port with internal pull-ups; the Port 1 output buffers can drive four TTL inputs.
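Because a port pin must first be written with a 1 before it can be used as an input (as described above for Port 0, and as applies equally to the quasi-bidirectional Port 1), reading external switches in C typically looks like the sketch below (Keil C51 style; the LED-on-P1.0 assignment is assumed purely for illustration).

```c
#include <reg52.h>              /* Keil C51 register definitions */

/* Write 1s first so the quasi-bidirectional pins are released
   and external signals can pull them low; then read the levels. */
unsigned char read_switches(void)
{
    P1 = 0xFF;
    return P1;
}

/* Drive an LED assumed, for illustration, to sit on P1.0. */
void set_led(unsigned char on)
{
    if (on)
        P1 |= 0x01;             /* set P1.0 high */
    else
        P1 &= 0xFE;             /* pull P1.0 low */
}
```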
The Development and Application of the Microcontroller (foreign literature with translation)

The Application and Development of the Microcontroller Unit

The monolithic integrated circuit (single-chip microcomputer) is a computer on a chip. Using integration technology, it combines a microprocessor (CPU) with data-processing capability, program storage in ROM, data storage in RAM, and input/output interface circuits (I/O) on one chip, constituting a small but complete computer hardware system. Under the control of a program supplied in advance, a microcontroller in an application can complete its assigned tasks accurately, quickly, and efficiently. A single chip thus has all the functions of a computer.

In this way the microcontroller differs from a bare microprocessor (CPU) chip, which by itself has no such functions: the microcontroller can independently carry out the intelligent control functions required in modern industrial control, and this is its most important characteristic.

The microcontroller is nevertheless different from a single-board microcomputer (a microprocessor chip, memory chips, and input/output interface chips assembled on one printed circuit board). Before application development, a microcontroller is only a VLSI device with strong potential functions; after application development it becomes a small microcomputer control system. But it remains essentially different from a single-board machine or a personal computer (PC).

Applying a microcontroller is application at chip level. The user (the learner and user of the microcontroller) must understand the structure of the chip and its instruction system, and must apply the theory and techniques of integration and of system design to the design of applications for this particular chip, thereby giving the chip a particular function.

Different microcontrollers have different hardware and software, that is, different technical features. The hardware features depend on the internal structure of the chip; to use a given microcontroller, the user must know whether that product meets the required facilities and application indicators. The technical features include functional characteristics, control characteristics, and electrical characteristics; manufacturers give this information in their technical manuals. The software features refer to the instruction system and the development support environment: the quality of the instruction set, the data-processing and logic-processing capability, the I/O characteristics, the power supply requirements, and so on. The development support environment includes the compatibility and portability of the instructions, the support software (whether it can support the development of application software), and the hardware resources.
To take advantage of a given model when developing a microcontroller application system, it is necessary to learn its structural features and technical characteristics.

With a microcontroller, control systems that formerly needed complex electronic or electrical circuits can realise software control and become intelligent. This is why the microcontroller dominates control fields such as communication products, household appliances, instruments and meters, process control, and control devices; these are the sectors in which microcontrollers are most widely applied.

Of course, the significance of microcontroller applications is not limited to the range of applications or to their economic performance. More importantly, the microcontroller has fundamentally changed the traditional methods of control system design and the traditional ideas of control technology; this is a revolution and an important milestone.

It can be said that microcontrollers today are in a period in which "a hundred schools of thought contend": chip companies all over the world have brought out their own microcontrollers, from 8-bit and 16-bit to 32-bit, with the C51 series as the mainstream. Some are mutually incompatible, but they complement one another and give the world of microcontroller applications a broad space in which to develop.

Looking over the development of the microcontroller, the trends are as follows.

1. Low-power-consumption CMOS. The 8031 of the MCS-51 series had a power consumption of 630 mW, whereas present-day microcontrollers generally consume around 100 mW. To meet the demand for ever lower power consumption, microcontrollers now basically all use CMOS (complementary metal-oxide semiconductor) technology; the 80C51, for example, uses the HMOS (high-density metal-oxide semiconductor) and CHMOS (complementary high-density metal-oxide semiconductor) processes. Although plain CMOS has low power consumption, its physical characteristics keep its operating speed from being high enough, whereas CHMOS offers both high speed and low power consumption. These features make it well suited to battery-powered applications where low power consumption is required, so this process will be the main road for microcontrollers for some time to come.
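As a rough illustration of why the fall from 630 mW to about 100 mW matters in battery-powered applications, the back-of-the-envelope estimate below compares run times; the battery figures are assumed for the example and do not come from the text.

```c
#include <stdio.h>

int main(void)
{
    /* Assumed battery: a 3 V pack rated at 1000 mAh = 3000 mWh. */
    const double battery_mwh = 3.0 * 1000.0;
    const double old_mw = 630.0;    /* MCS-51 8031 figure from the text */
    const double new_mw = 100.0;    /* typical modern figure            */

    printf("8031-class part: %5.1f hours\n", battery_mwh / old_mw);
    printf("CMOS part:       %5.1f hours\n", battery_mwh / new_mw);
    return 0;
}
```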
Microcontroller: English reference literature with translation

Tianjin University of Science and Technology: foreign literature translation for graduating students
Name:
College: College of Electronic Information and Automation
Major: Measurement and Control Technology and Instruments

I. English original

Progress in Computers

Prestige Lecture delivered to the IEE, Cambridge, on 5 February 2004

Maurice Wilkes

The first stored-program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term "families of logic" came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s

By the late 50s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful.
The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry "going the other way". As time goes on people get more for their money, not less.

Research in Computer Hardware

The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath

Early computers had simple instruction sets. As time went on designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instruction Set Computer.

Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.
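The benefit of the pipelining described above can be put into numbers with the standard textbook model (not from the lecture itself): n instructions flowing through a k-stage pipeline take about k + n - 1 cycles instead of n * k, so the speedup approaches k for long instruction streams.

```c
#include <stdio.h>

int main(void)
{
    const long k = 5;                     /* pipeline stages            */
    const long n = 1000000;               /* instructions executed      */
    const long unpipelined = n * k;       /* one instruction at a time  */
    const long pipelined   = k + n - 1;   /* stages overlapped          */

    printf("speedup: %.2fx\n", (double)unpipelined / (double)pipelined);
    return 0;
}
```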
The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one. They refer to such a roomful by the attractive name of computer farm.

The x86 Instruction Set

Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt, business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor where the critical execution is performed.

In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 Instruction Set

Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64 bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant. Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost.
Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64 bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]

The Relentless Drive towards Smaller Transistors

The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Law

In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron, held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's law (or Sod's law, as it is usually called in the UK). Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949.
However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem: a cyclist sets out on a circular cycling tour; derive an equation giving the direction of the wind at any time.

The single-chip computer

At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor, except for the caches, could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From the above time, the high density CMOS silicon chip was Cock of the Roost. Shrinkage went on until millions of transistors could be put on a single chip and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution, and other forms of prediction followed.
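A typical example of the branch-prediction methods referred to above is the classic two-bit saturating counter kept per branch; the sketch below is a generic textbook version, not a scheme attributed to any particular processor in the lecture.

```c
#include <stdio.h>

/* Two-bit saturating counter: 0,1 predict not-taken; 2,3 predict taken. */
static int counter = 2;                 /* start weakly taken */

static int predict(void) { return counter >= 2; }

static void train(int taken)
{
    if (taken  && counter < 3) counter++;
    if (!taken && counter > 0) counter--;
}

int main(void)
{
    /* A loop branch: taken nine times, then falls through once. */
    int correct = 0, total = 10;
    for (int iter = 0; iter < total; iter++) {
        int taken = (iter != 9);
        correct += (predict() == taken);
        train(taken);
    }
    printf("accuracy: %d/%d\n", correct, total);  /* 9/10 */
    return 0;
}
```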
Equally surprising is what it has been found possible to put on a single chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91 (the giant computer at the top of the System 360 range) are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek for ways to get them made. For a time, this was possible, if not easy.

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map

The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.

At one time US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However, about 1980 significant and far-reaching changes took place in the laws. The concept of pre-competitive research was introduced. Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semiconductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.

Every two years SIA produces a new version of a document known as the International Technology Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title "Roadmap" was issued in 1994, but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15-year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months (that is, if Moore's law is to be maintained) and if the cost per chip is to fall.
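The doubling rule in the Roadmap targets is easy to put into numbers: one doubling every eighteen months over the Roadmap's fifteen-year horizon is ten doublings, a factor of about 1024. A minimal check:

```c
#include <stdio.h>

int main(void)
{
    const int months = 15 * 12;        /* the Roadmap's 15-year horizon */
    double factor = 1.0;

    for (int m = 18; m <= months; m += 18)
        factor *= 2.0;                 /* one doubling every 18 months  */

    printf("growth factor over 15 years: %.0fx\n", factor); /* 1024x */
    return 0;
}
```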
In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.

The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner.

It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was in later Roadmaps moved forward to 2004, and in the event the industry got there a little sooner.

I presented the above information from the 1994 Roadmap, along with such other information as I could obtain, in a lecture to the IEE in London, entitled The CMOS end-point and related topics in Computing, delivered on 8 February 1996.

The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster.

In fact, the physical limitations that are now beginning to make themselves felt do not arise through a shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that "the continuation of progress at the present rate will be at risk as we approach 2005, when the roadmap projects that progress will stall without research breakthroughs in most technical areas". This was the most specific statement about the Red Brick Wall that had so far come from the SIA, and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming ahead, it radiates immense confidence. Prevailing opinion reflects that confidence and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.
Microcontroller: English literature and translation
Microcontroller (Chinese: 单片机)
A microcontroller is a small computer on a single integrated circuit that contains a processor core, memory, and programmable input/output peripherals. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers and other general-purpose applications.
A microcontroller's processor core is typically a small, low-power computer dedicated to controlling the operation of the device in which it is embedded. It is often designed to provide efficient and reliable control of simple and repetitive tasks, such as switching lights on and off or monitoring temperature and pressure sensors.
MEMORY
Microcontrollers typically have a limited amount of memory, divided into program memory and data memory. The program memory is where the software that controls the device is stored, and is often a type of Read-Only Memory (ROM). The data memory, on the other hand, is used to store data that is used by the program, and is often volatile, meaning that it loses its contents when power is removed.
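This program/data split is visible directly in C source. On many embedded toolchains, const-qualified tables are placed in non-volatile program memory while ordinary variables occupy volatile RAM, though the exact placement is toolchain-dependent; the sketch below illustrates the idea under that assumption.

```c
#include <stdint.h>

/* Constant table: on many toolchains this is placed in program
   memory (flash) and survives power loss.                       */
static const uint8_t sine_table[4] = { 0, 90, 180, 255 };

/* Working buffer: lives in data memory (RAM) and is lost when
   power is removed.                                             */
static uint8_t sample_buffer[4];

void capture(uint8_t index)
{
    /* Copy a constant from program memory into working RAM. */
    sample_buffer[index & 3] = sine_table[index & 3];
}
```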
INPUT/OUTPUT
Microcontrollers typically have a number of programmable input/output (I/O) pins that can be used to interface with external sensors, switches, actuators, and other devices. These pins can be programmed to perform specific functions, such as reading a sensor value, controlling a motor, or generating a signal. Many microcontrollers also support communication protocols like serial, parallel, and USB, allowing them to interface with other devices, including other microcontrollers, computers, and smartphones.
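Such pins are commonly driven from C through memory-mapped registers. In the sketch below, the register names and addresses are invented for illustration; real values come from the specific microcontroller's reference manual.

```c
#include <stdint.h>

/* Hypothetical register addresses: replace with the real ones
   from your part's reference manual.                           */
#define GPIO_DIR  (*(volatile uint32_t *)0x40020000u) /* 1 = output */
#define GPIO_OUT  (*(volatile uint32_t *)0x40020004u)
#define GPIO_IN   (*(volatile uint32_t *)0x40020008u)

#define LED_PIN    (1u << 5)
#define BUTTON_PIN (1u << 6)

void gpio_init(void)
{
    GPIO_DIR |= LED_PIN;        /* LED pin as output   */
    GPIO_DIR &= ~BUTTON_PIN;    /* button pin as input */
}

void update_led(void)
{
    if (GPIO_IN & BUTTON_PIN)   /* read the switch level */
        GPIO_OUT |= LED_PIN;    /* drive the pin high    */
    else
        GPIO_OUT &= ~LED_PIN;
}
```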
APPLICATIONS
Microcontrollers are widely used in a variety of applications, including:
- Home automation systems
- Automotive electronics
- Medical devices
- Industrial control systems
- Consumer electronics
- Robotics
CONCLUSION
In conclusion, microcontrollers are powerful and versatile devices that have become an essential component in many embedded systems. With their small size, low power consumption, and high level of integration, microcontrollers offer an effective and cost-efficient solution for controlling a wide range of devices and applications.