Microcontrollers: English References

English Literature on Microcontrollers, with Translation

A microcontroller is a small computer on a single integrated circuit that contains a processor core, memory, and programmable input/output peripherals. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general-purpose applications.

A microcontroller's processor core is typically a small, low-power computer dedicated to controlling the operation of the device in which it is embedded. It is often designed to provide efficient and reliable control of simple and repetitive tasks, such as switching lights on and off, or monitoring temperature or pressure sensors.

MEMORY
Microcontrollers typically have a limited amount of memory, divided into program memory and data memory. The program memory, where the software that controls the device is stored, is often a type of read-only memory (ROM). The data memory is used to store the data that the program works on, and is often volatile, meaning that it loses its contents when power is removed.

INPUT/OUTPUT
Microcontrollers typically have a number of programmable input/output (I/O) pins that can be used to interface with external sensors, switches, actuators, and other devices. These pins can be programmed to perform specific functions, such as reading a sensor value, controlling a motor, or generating a signal (a minimal example appears at the end of this article). Many microcontrollers also support communication protocols like serial, parallel, and USB, allowing them to interface with other devices, including other microcontrollers, computers, and smartphones.

APPLICATIONS
Microcontrollers are widely used in a variety of applications, including:
- Home automation systems
- Automotive electronics
- Medical devices
- Industrial control systems
- Consumer electronics
- Robotics

CONCLUSION
In conclusion, microcontrollers are powerful and versatile devices that have become an essential component of many embedded systems. With their small size, low power consumption, and high level of integration, microcontrollers offer an effective and cost-efficient solution for controlling a wide range of devices and applications.
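The I/O example promised above: a minimal sketch of reading one input pin and driving one output pin on an 8051-class microcontroller in C. It assumes the Keil C51 toolchain and its reg51.h header; the pin assignments (a sensor on P1.0, an LED on P1.1) are illustrative assumptions, not taken from the text.

    #include <reg51.h>        /* 8051 special function register definitions (Keil C51) */

    sbit SENSOR = P1^0;       /* assumed input pin: a switch or digital sensor output */
    sbit LED    = P1^1;       /* assumed output pin driving an indicator LED */

    void main(void)
    {
        SENSOR = 1;           /* write 1 so the quasi-bidirectional pin can be read as an input */
        while (1) {
            LED = SENSOR;     /* mirror the sensor state onto the LED pin */
        }
    }

Note that classic 8051 ports have no data-direction register: writing a 1 to a pin lets external hardware pull it low, which is why the sensor pin is "configured" simply by writing 1 to it.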

English Literature on Microcontrollers and Translation

Validation and Testing of Design Hardening for Single Event Effects Using the 8051 Microcontroller

Abstract
With the dearth of dedicated radiation-hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we will discuss the implications of validating these methods for single event effects (SEE) in the space environment. Topics include the types of tests that are required and the design coverage (i.e., design libraries: do they need validating for each application?). Finally, an 8051 microcontroller core from the NASA Institute of Advanced Microelectronics (IAμE) CMOS Ultra Low Power Radiation Tolerant (CULPRiT) design is evaluated for SEE mitigative techniques against two commercial 8051 devices.

Index Terms
Single event effects, hardened-by-design, microcontroller, radiation effects.

I. INTRODUCTION
NASA constantly strives to provide the best capture of science while operating in a space radiation environment using a minimum of resources [1,2]. With a relatively limited selection of radiation-hardened microelectronic devices that are often two or more generations of performance behind commercial state-of-the-art technologies, NASA's performance of this task is quite challenging. One method of alleviating this is the use of commercial foundry alternatives with no or minimally invasive design techniques for hardening. This is often called hardened-by-design (HBD).

Building custom-type HBD devices using design libraries and automated design tools may provide NASA the solution it needs to meet stringent science performance specifications in a timely, cost-effective, and reliable manner. However, one question still exists: traditional radiation-hardened devices have lot and/or wafer radiation qualification tests performed; what types of tests are required for HBD validation?

II. TESTING HBD DEVICES CONSIDERATIONS
Test methodologies exist in the United States to qualify individual devices through standards and organizations such as ASTM, JEDEC, and MIL-STD-883. Typically, TID (Co-60) and SEE (heavy ion and/or proton) tests are required for device validation. So what is unique to HBD devices?

As opposed to a "regular" commercial-off-the-shelf (COTS) device or application-specific integrated circuit (ASIC) where no hardening has been performed, one needs to determine how well validated the design library is, as opposed to determining the hardness of an individual device. That is, by using test chips, can we "qualify" a future device using the same library?

Consider if Vendor A has designed a new HBD library portable to foundries B and C. A test chip is designed, tested, and deemed acceptable. Nine months later a NASA flight project enters the mix by designing a new device using Vendor A's library. Does this device require complete radiation qualification testing? To answer this, other questions must be asked.

How complete was the test chip? Was there sufficient statistical coverage of all library elements to validate each cell? If the new NASA design uses a partially or insufficiently characterized portion of the design library, full testing might be required. Of course, if part of the HBD relies on the inherent radiation hardness of a process, some of the tests (like SEL) may be waived.

Other considerations include speed of operation and operating voltage. For example, if the test chip was tested statically for SEE at a power supply voltage of 3.3 V, is the data applicable to a 100 MHz operating frequency at 2.5 V?
Dynamic considerations (i.e., non-static operation) include the propagated effects of single event transients (SETs). These can be a greater concern at higher frequencies.

The point of these considerations is that the design library must be known, the coverage used during testing must be known, the test application must be thoroughly understood, and the characteristics of the foundry must be known. If all of these are applicable or have been validated by the test chip, then no further testing may be necessary. A task within NASA's Electronic Parts and Packaging (NEPP) Program was performed to explore these types of considerations.

III. HBD TECHNOLOGY EVALUATION USING THE 8051 MICROCONTROLLER
With their increasing capabilities and lower power consumption, microcontrollers are increasingly being used in NASA and DoD system designs. Existing NASA and DoD programs are doing technology development to provide HBD. Microcontrollers are one such vehicle being investigated to quantify the radiation hardness improvement. Examples of these programs are the 8051 microcontrollers being developed by Mission Research Corporation (MRC) and by IAμE (the focus of this study). As these HBD technologies become available, validation of the technology in the natural space radiation environment is required before NASA can use it in spaceflight systems.

The 8051 microcontroller is an industry-standard architecture that has broad acceptance, wide-ranging applications, and readily available development tools. Numerous commercial vendors supply this controller or have it integrated into some type of system-on-a-chip structure. Both MRC and IAμE chose this device to demonstrate two distinctly different hardening technologies. MRC uses temporal latches that require specific timing to ensure that single event effects are minimized. The IAμE technology uses ultra-low power together with layout and architecture HBD design rules to achieve its results. These are fundamentally different from the approach of Aeroflex-United Technologies Microelectronics Center (UTMC), the commercial vendor of a radiation-hardened 8051, which built its 8051 microcontroller using radiation-hardened processes. This broad range of technology within one device structure makes the 8051 an ideal vehicle for performing this technology evaluation.

The objective of this work is the technology evaluation of the CULPRiT process [3] from IAμE. The process has been baselined against two other processes: the standard 8051 commercial device from Intel and a version using state-of-the-art processing from Dallas Semiconductor. By performing this side-by-side comparison, the cost-benefit, performance, and reliability trade study can be done.

In the performance of the technology evaluation, this task developed hardware and software for testing microcontrollers. A thorough process was followed to optimize the testing and obtain as complete an evaluation as possible. This included taking advantage of the available hardware and writing software that exercised the microcontroller such that all substructures of the processor were evaluated. This process is also leading to a more complete understanding of how to test complex structures, such as microcontrollers, and how to test such structures more efficiently in the future.

IV. TEST DEVICES
Three devices were used in this test evaluation. The first is the NASA CULPRiT device, which is the primary device to be evaluated.
The other two devices are versions of a commercial 8051, manufactured by Intel and Dallas Semiconductor, respectively.

The Intel devices are the ROMless, CMOS version of the classic 8052 MCS-51 microcontroller. They are rated for operation at +5 V, over a temperature range of 0 to 70 °C, at clock speeds of 3.5 MHz to 24 MHz. They are manufactured in Intel's P629.0 CHMOS III-E process.

The Dallas Semiconductor devices are similar in that they are ROMless 8052 microcontrollers, but they are enhanced in various ways. They are rated for operation from 4.25 to 5.5 V over 0 to 70 °C at clock speeds up to 25 MHz. They have a second full serial port built in, seven additional interrupts, a watchdog timer, a power-fail reset, dual data pointers, and variable-speed peripheral access. In addition, the core is redesigned so that the machine cycle is shortened for most instructions, resulting in an effective processing ability roughly 2.5 times greater (faster) than that of the standard 8052 device. None of these features, other than those inherent in the device operation, were utilized, in order to maximize the similarity between the Dallas and Intel test codes.

The CULPRiT technology device is a version of the MCS-51-family-compatible C8051 HDL core licensed from the Ultra Low Power (ULP) process foundry. The CULPRiT C8051 device is designed to operate at a supply voltage of 500 mV and includes an on-chip input/output signal level-shifting interface to conventional higher-voltage parts. The CULPRiT C8051 therefore requires two separate supply voltages: the 500 mV core supply and the desired interface voltage. The CULPRiT C8051 is ROMless and is intended to be instruction-set compatible with the MCS-51 family.

V. TEST HARDWARE
The 8051 Device Under Test (DUT) was tested as a component of a functional computer. Aside from the DUT itself, the other components of the DUT Computer were removed from the immediate area of the irradiation beam.

A small card (one per DUT package type) with a unique hard-wired identifier byte contained the DUT, its crystal, and bypass capacitors (and voltage level shifters for the CULPRiT DUTs). This "DUT Board" was connected to the "Main Board" by a short 60-conductor ribbon cable. The Main Board had all the other components required to complete the DUT Computer, including some which nominally are not necessary in some designs (such as external RAM, external ROM, and an address latch). The DUT Computer and the Test Control Computer were connected via a serial cable, and communications were established between the two by the Controller (which runs custom-designed serial interface software). This Controller software allowed commanding of the DUT, downloading of DUT code, and real-time error collection from the DUT during and after irradiation. A 1 Hz signal source provided an external watchdog timing signal to the DUT, whose watchdog output was monitored on an oscilloscope. The power supply was monitored to provide an indication of latchup.

VI. TEST SOFTWARE
The 8051 test software concept is straightforward. It was designed as a modular series of small test programs, each exercising a specific part of the DUT. Since each test was stand-alone, the tests were loaded independently of one another for execution on the DUT. This ensured that only the desired portion of the 8051 DUT was exercised during a given test and helped pinpoint the location of errors that occurred during testing. All test programs resided on the controller PC until loaded via the serial interface to the DUT Computer.
In this way, individual tests could be modified at any time without the need to burn PROMs. Additional tests could also be developed and added without impacting the overall test design. The only permanent code resident on the DUT was the boot code and serial loader routines that established communications between the controller PC and the DUT.

All test programs implemented:
• An external Universal Asynchronous Receiver and Transmitter (UART) for transmission of error information and communication with the controller computer.
• An external real-time clock for tagging data errors.
• A watchdog routine designed to provide visual verification of 8051 health and to restart the test code if necessary.
• A "foul-up" routine to reset the program counter if it wanders out of code space.
• An external telemetry data storage memory to provide a backup of the data in the event of an interruption in data transmission.

A brief description of each software test is given below. Note that for each test, the returned telemetry (including the time tag) was sent to both the test controller and the telemetry memory, giving the highest confidence that all data were captured.

Interrupt – This test used 4 of the 6 available interrupt vectors (Serial, External, Timer0 Overflow, and Timer1 Overflow) to trigger routines that sequentially modified a value in the accumulator, which was periodically compared to a known value. Unexpected values were transmitted with register information.

Logic – This test performed a series of logic and math computations and provided three types of error identification: 1) addition/subtraction, 2) logic, and 3) multiplication/division. All miscompares between computations and expected results were transmitted with other relevant register information.

Memory – This test loaded internal data memory at locations D:0x20 through D:0xFF (or D:0x20 through D:0x80 for the CULPRiT DUT), indirectly, with a 0x55 pattern. Compares were performed continuously; miscompares were corrected while error information and register values were transmitted.

Program Counter – The program counter was used to continuously fetch constants at various offsets in the code. Constants were compared with known values, and miscompares were transmitted along with relevant register information.

Registers – This test loaded each of the four banks (0, 1, 2, 3) of general-purpose registers with either 0xAA (for banks 0 and 2) or 0x55 (for banks 1 and 3). The pattern was alternated in order to test the Program Status Word (PSW) special function register, which controls general-purpose register bank selection. The general-purpose register banks were then compared with their expected values. All miscompares were corrected, and error information was transmitted.

Special Function Registers (SFR) – This test used learned static values of 12 of the 21 available SFRs and constantly compared the learned value with the current one. Miscompares were reloaded with the learned value, and error information was transmitted.

Stack – This test performed arithmetic by pushing and popping operands on the stack. Unexpected results were attributed to errors on the stack or in the stack pointer itself and were transmitted with relevant register information.
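As an illustration of the Memory test described above, here is a minimal sketch of what its fill/compare/scrub loop could look like in 8051 C (Keil C51 style). This is a reconstruction for illustration only: the report_error telemetry hook is hypothetical, and only the 0x55 pattern, the D:0x20 through D:0xFF range, the indirect access, and the correct-and-report behaviour come from the text.

    #include <reg51.h>

    #define PATTERN   0x55
    #define MEM_START 0x20
    #define MEM_END   0xFF                /* 0x80 for the CULPRiT DUT */

    /* hypothetical telemetry hook: reports the failing address and value over the UART */
    extern void report_error(unsigned char addr, unsigned char value);

    void memory_test(void)
    {
        unsigned char idata *p;           /* 'idata' pointer: indirect access to internal RAM */
        unsigned char a;

        a = MEM_START;                    /* fill D:0x20..D:0xFF with the 0x55 pattern */
        do {
            p = (unsigned char idata *)a;
            *p = PATTERN;
        } while (a++ != MEM_END);

        for (;;) {                        /* compare continuously */
            a = MEM_START;
            do {
                p = (unsigned char idata *)a;
                if (*p != PATTERN) {
                    report_error(a, *p);  /* transmit error information */
                    *p = PATTERN;         /* correct the miscompare */
                }
            } while (a++ != MEM_END);
        }
    }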
VII. TEST METHODOLOGY
The DUT Computer booted by executing the instruction code located at address 0x0000. Initially, the device at this location was an EPROM previously loaded with "Boot/Serial Loader" code. This code initialized the DUT Computer and its interface through a serial connection to the controlling computer, the "Test Controller". The DUT Computer downloaded the Test Code and put it into Program Code RAM (located on the Main Board of the DUT Computer). It then activated a circuit which simultaneously performed two functions: it held the DUT reset line active for some time (~10 ms), and it remapped the Test Code residing in the Program Code RAM to address 0x0000 (the EPROM was no longer accessible in the DUT Computer's memory space). Upon waking from the reset, the DUT Computer again booted by executing the instruction code at address 0x0000, except this time that code was not the Boot/Serial Loader code but the Test Code.

The Test Control Computer always retained the ability to force the reset/remap function, regardless of the DUT Computer's functionality. Thus, if the test ran without a Single Event Functional Interrupt (SEFI), either the DUT Computer itself or the Test Controller could terminate the test and allow the post-test functions to be executed. If a SEFI occurred, the Test Controller forced a reboot into the Boot/Serial Loader code and then executed the post-test functions.

During any test of the DUT, the DUT exercised a portion of its functionality (e.g., register operations, internal RAM checks, or timer operations) at the highest utilization possible, while making a minimal periodic report to the Test Control Computer to convey that the DUT Computer was still functional. If this report ceased, the Test Controller knew that a SEFI had occurred. This periodic data was called "telemetry". If the DUT encountered an error that did not interrupt functionality (e.g., a data register miscompare), it sent a lengthier report through the serial port describing that error and continued with the test.

VIII. DISCUSSION
A. Single Event Latchup
The main argument for why latchup is not an issue for the CULPRiT devices is that the operating voltage of 0.5 V should be below the holding voltage required for latchup to occur. In addition, the cell library used incorporates the heavy dual guard-barring scheme [4]. This scheme has been demonstrated multiple times to be very effective in rendering CMOS circuits completely immune to SEL up to test limits of 120 MeV·cm²/mg. This is true of circuits operating at 5, 3.3, and 2.5 V, as well as of the 0.5 V CULPRiT circuits. In one case, a 5 V circuit fabricated on non-epi wafers even exhibited such SEL immunity.

B. Single Event Upset
The primary structure of the storage unit used in the CULPRiT devices is the Single Event Resistant Topology (SERT) [5]. Given the SERT cell topology and a single-upset-node assumption, the SERT cell is expected to be completely immune to SEUs occurring internal to the memory cell itself. Obviously, other things are going on. The CULPRiT 8051 results reported here are quite similar to results obtained with a CULPRiT CCSDS lossless compression chip (USES) [6]. The CULPRiT USES was synthesized using exactly the same tools and library as the CULPRiT 8051.

With the CULPRiT USES, the SEU cross-section data [7] was taken as a function of frequency at two LET values, 37.6 and 58.5 MeV·cm²/mg. In both cases the data fit well to a linear model in which the cross section is proportional to the clock frequency.
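Written out explicitly (the symbols here are ours, not the paper's), that linear model is

    \sigma(f) \approx \sigma_{\mathrm{dc}} + k_{\mathrm{SET}} \cdot f

where \sigma is the SEU cross section, f is the clock frequency, the k_{\mathrm{SET}} \cdot f term is the frequency-dependent contribution of captured single event transients from the combinational logic, and \sigma_{\mathrm{dc}} is a frequency-independent offset.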
In the LET 37.6 case, the zero-frequency intercept occurred essentially at the zero-cross-section point, indicating that virtually all of these SEUs are captured SETs from the combinational logic. The LET 58.5 data indicated that the SET (frequency-dependent) component sits on top of a "dc-bias" component; presumably a second upset mechanism occurs internal to the SERT cells only above a second, higher LET threshold.

The SET mitigation scheme used in the CULPRiT devices is based on the SERT cell's fault-tolerant input property when redundant input data is provided to separate storage nodes. The idea is that the redundant input data is provided through a total duplication of the combinational logic (referred to as "dual rail design"), such that a simple SET on one rail cannot produce an upset. Therefore, some other upset mechanism must be happening. It is possible that a single particle strike is placing an SET on both halves of the logic streams, allowing an SET to produce an upset. Care was taken to separate the dual sensitive nodes in the SERT cell layouts, but the automated place-and-route of the combinational logic paths may have placed dual sensitive nodes close enough together.

At this point, the theory for the CULPRiT SEU response is that at an LET of about 20, the energy deposition is sufficiently wide (and in the right locations) to produce an SET in both halves of the combinational logic streams. Increasing LET allows more regions to be sensitive to this effect, yielding a larger cross section. Further, the second SEU mechanism that starts at an LET of about 40-60 occurs when the charge-collection disturbance cloud gets large enough to upset multiple redundant storage nodes within the SERT cell itself. In this 0.35 μm library, the node separation is several microns. However, since it takes less charge to upset a node operating at 0.5 V, with transistors having effective thresholds around 70 mV, this is likely the effect being observed. Also, the fact that the per-bit memory upset cross sections for the CULPRiT devices and the commercial technologies are approximately equal, as shown in Figure 9, indicates that the cell itself has become sensitive to upset.

IX. SUMMARY
A detailed comparison of the SEE sensitivity of an HBD technology (CULPRiT), utilizing the 8051 microcontroller as a test vehicle, has been completed. This paper discusses the test methodology used and presents a comparison of the commercial versus CULPRiT technologies based on the data taken. The CULPRiT devices consistently show significantly higher threshold LETs and an immunity to latchup. In all but the memory test at the highest LETs, the cross-section curves for all upset events are one to two orders of magnitude lower than those of the commercial devices. Additionally, a theory is presented, based on the CULPRiT technology, that explains these results.

This paper also demonstrates a test methodology for quantifying the level of hardness designed into an HBD technology. By using the HBD technology in a real-world device structure (i.e., not just a test chip), and comparing results to equivalent commercial devices, one can have confidence in the level of hardness that would be available from that HBD technology in any circuit application.

ACKNOWLEDGEMENTS
The authors would like to acknowledge the sponsors of this work: the NASA Electronic Parts and Packaging Program (NEPP), NASA Flight Programs, and the Defense Threat Reduction Agency (DTRA).

Foreign Literature with Translation: The Development and Application of Microcontrollers

The Application and Development of the Microcontroller Unit

A microcontroller unit (MCU) is a computer on a chip. Using large-scale integration, it combines a microprocessor (CPU) with data-processing capability, program memory (ROM), data memory (RAM), and input/output (I/O) interface circuits on a single small chip, forming a complete computer hardware system. Under the control of a program stored in advance, a microcontroller can complete its assigned tasks accurately, quickly, and efficiently. A microcontroller chip thus has all the functions of a computer.

In this way a microcontroller differs from a bare microprocessor (CPU) chip, which is not functional on its own: a microcontroller can independently carry out the intelligent control functions required by modern industrial control, and this is its biggest characteristic.

A microcontroller also differs from a single-board microcomputer, in which a microprocessor chip, memory chips, and input/output interface chips are assembled on one printed circuit board. Before application development, a microcontroller chip is only a VLSI device with strong potential functions; once an application has been developed around it, it becomes a small microcomputer control system. This is the essential difference between a microcontroller and a single-board computer or a personal computer (PC).

Applying a microcontroller means working at the chip level. The user (the learner or application developer) must understand the structure of the chip and its instruction system, and must combine this knowledge with system design theory and techniques in order to design an application around that particular chip and thereby give the chip a particular function.

Different microcontrollers have different hardware and software features, that is, different technical characteristics. The hardware characteristics depend on the internal structure of the chip: to use a particular microcontroller, the user must know whether that product provides the facilities and meets the application targets required. These technical characteristics include functional characteristics, control attributes, and electrical attributes, and this information is given in the manufacturer's technical manual. The software characteristics refer to the instruction system and the development support environment. The quality of the instruction set covers data-processing and logical-processing ability, input/output characteristics, and power supply requirements. The development support environment includes instruction compatibility and portability, together with the supporting software and hardware resources available for developing application software and hardware.
To use a given model of microcontroller to develop an application system, one must therefore learn its structural features and technical characteristics.

Microcontrollers replace the sophisticated electronic circuits of traditional control systems with software control, making control systems intelligent. This is why microcontrollers are applied so widely in control fields such as communication products, household appliances, instrumentation, process control, and control devices.

Of course, the significance of microcontroller applications is not limited to these categories or to their economic performance. More important is the fundamental change they bring to traditional control system design methods and control techniques: they are a revolution and an important milestone.

Today the microcontroller market can be described as "a hundred schools of thought contend". Chip companies all over the world have unveiled their own microcontrollers, from 8-bit and 16-bit to 32-bit devices. The C51 series is the mainstream, and although the various families are not compatible with each other, they complement one another and offer the world's microcontroller applications a broad choice.

Looking at the development process of microcontrollers, the trends have been:

1. Low-power CMOS
The 8031 of the MCS-51 series had a power consumption of 630 mW, while today's microcontrollers generally consume around 100 mW. To meet the demand for ever lower power consumption, microcontrollers now basically all use CMOS (complementary metal-oxide-semiconductor) technology; the 80C51, for example, uses HMOS (high-density metal-oxide-semiconductor) and CHMOS (complementary high-density metal-oxide-semiconductor) technology. Although CMOS has low power consumption, its physical characteristics keep its operating speed from being high enough, whereas CHMOS combines high speed with low power consumption. These features make it better suited to applications requiring low power consumption, such as battery-powered devices, so this process will remain the main route of microcontroller development for some time.

An English Reference on Microcontrollers

Progress in Computers
Prestige Lecture delivered to the IEE, Cambridge, on 5 February 2004
Maurice Wilkes, Computer Laboratory, University of Cambridge

The first stored-program computers began to work around 1950. The one we built in Cambridge, the EDSAC, was first used in the summer of 1949.

These early experimental computers were built by people like myself with varying backgrounds. We all had extensive experience in electronic engineering and were confident that that experience would stand us in good stead. This proved true, although we had some new things to learn. The most important of these was that transients must be treated correctly; what would cause a harmless flash on the screen of a television set could lead to a serious error in a computer.

As far as computing circuits were concerned, we found ourselves with an embarras de richesses. For example, we could use vacuum tube diodes for gates, as we did in the EDSAC, or pentodes with control signals on both grids, a system widely used elsewhere. This sort of choice persisted and the term "families of logic" came into use. Those who have worked in the computer field will remember TTL, ECL and CMOS. Of these, CMOS has now become dominant.

In those early years, the IEE was still dominated by power engineering, and we had to fight a number of major battles in order to get radio engineering, along with the rapidly developing subject of electronics (dubbed in the IEE "light current electrical engineering"), properly recognised as an activity in its own right. I remember that we had some difficulty in organising a conference because the power engineers' ways of doing things were not our ways. A minor source of irritation was that all IEE published papers were expected to start with a lengthy statement of earlier practice, something difficult to do when there was no earlier practice.

Consolidation in the 1960s
By the late 1950s or early 1960s, the heroic pioneering stage was over and the computer field was starting up in real earnest. The number of computers in the world had increased and they were much more reliable than the very early ones. To those years we can ascribe the first steps in high-level languages and the first operating systems. Experimental time-sharing was beginning, and ultimately computer graphics was to come along.

Above all, transistors began to replace vacuum tubes. This change presented a formidable challenge to the engineers of the day. They had to forget what they knew about circuits and start again. It can only be said that they measured up superbly well to the challenge and that the change could not have gone more smoothly.

Soon it was found possible to put more than one transistor on the same bit of silicon, and this was the beginning of integrated circuits. As time went on, a sufficient level of integration was reached for one chip to accommodate enough transistors for a small number of gates or flip-flops. This led to a range of chips known as the 7400 series. The gates and flip-flops were independent of one another and each had its own pins. They could be connected by off-chip wiring to make a computer or anything else.

These chips made a new kind of computer possible. It was called a minicomputer. It was something less than a mainframe, but still very powerful, and much more affordable. Instead of having one expensive mainframe for the whole organisation, a business or a university was able to have a minicomputer for each major department.

Before long minicomputers began to spread and become more powerful.
The world was hungry for computing power and it had been very frustrating for industry not to be able to supply it on the scale required and at a reasonable cost. Minicomputers transformed the situation.

The fall in the cost of computing did not start with the minicomputer; it had always been that way. This was what I meant when I referred in my abstract to inflation in the computer industry 'going the other way'. As time goes on people get more for their money, not less.

Research in Computer Hardware
The time that I am describing was a wonderful one for research in computer hardware. The user of the 7400 series could work at the gate and flip-flop level and yet the overall level of integration was sufficient to give a degree of reliability far above that of discrete transistors. The researcher, in a university or elsewhere, could build any digital device that a fertile imagination could conjure up. In the Computer Laboratory we built the Cambridge CAP, a full-scale minicomputer with fancy capability logic.

The 7400 series was still going strong in the mid 1970s and was used for the Cambridge Ring, a pioneering wide-band local area network. Publication of the design study for the Ring came just before the announcement of the Ethernet. Until these two systems appeared, users had mostly been content with teletype-based local area networks.

Rings need high reliability because, as the pulses go repeatedly round the ring, they must be continually amplified and regenerated. It was the high reliability provided by the 7400 series of chips that gave us the courage needed to embark on the project for the Cambridge Ring.

The RISC Movement and Its Aftermath
Early computers had simple instruction sets. As time went on, designers of commercially available machines added additional features which they thought would improve performance. Few comparative measurements were done and on the whole the choice of features depended upon the designer's intuition.

In 1980, the RISC movement that was to change all this broke on the world. The movement opened with a paper by Patterson and Ditzel entitled The Case for the Reduced Instruction Set Computer.

Apart from leading to a striking acronym, this title conveys little of the insights into instruction set design which went with the RISC movement, in particular the way it facilitated pipelining, a system whereby several instructions may be in different stages of execution within the processor at the same time. Pipelining was not new, but it was new for small computers.

The RISC movement benefited greatly from methods which had recently become available for estimating the performance to be expected from a computer design without actually implementing it. I refer to the use of a powerful existing computer to simulate the new design. By the use of simulation, RISC advocates were able to predict with some confidence that a good RISC design would be able to out-perform the best conventional computers using the same circuit technology. This prediction was ultimately borne out in practice.

Simulation made rapid progress and soon came into universal use by computer designers. In consequence, computer design has become more of a science and less of an art. Today, designers expect to have a roomful of computers available to do their simulations, not just one.
They refer to such a roomful by the attractive name of "computer farm".

The x86 Instruction Set
Little is now heard of pre-RISC instruction sets, with one major exception, namely that of the Intel 8086 and its progeny, collectively referred to as x86. This has become the dominant instruction set, and the RISC instruction sets that originally had a considerable measure of success are having to put up a hard fight for survival.

This dominance of x86 disappoints people like myself who come from the research wings, both academic and industrial, of the computer field. No doubt business considerations have a lot to do with the survival of x86, but there are other reasons as well. However much we research-oriented people would like to think otherwise, high-level languages have not yet eliminated the use of machine code altogether. We need to keep reminding ourselves that there is much to be said for strict binary compatibility with previous usage when that can be attained. Nevertheless, things might have been different if Intel's major attempt to produce a good RISC chip had been more successful. I am referring to the i860 (not the i960, which was something different). In many ways the i860 was an excellent chip, but its software interface did not fit it to be used in a workstation.

There is an interesting sting in the tail of this apparently easy triumph of the x86 instruction set. It proved impossible to match the steadily increasing speed of RISC processors by direct implementation of the x86 instruction set as had been done in the past. Instead, designers took a leaf out of the RISC book; although it is not obvious on the surface, a modern x86 processor chip contains hidden within it a RISC-style processor with its own internal RISC coding. The incoming x86 code is, after suitable massaging, converted into this internal code and handed over to the RISC processor, where the critical execution is performed.

In this summing up of the RISC movement, I rely heavily on the latest edition of Hennessy and Patterson's books on computer design as my supporting authority; see in particular Computer Architecture, third edition, 2003, pp 146, 151-4, 157-8.

The IA-64 Instruction Set
Some time ago, Intel and Hewlett-Packard introduced the IA-64 instruction set. This was primarily intended to meet a generally recognised need for a 64-bit address space. In this, it followed the lead of the designers of the MIPS R4000 and Alpha. However, one would have thought that Intel would have stressed compatibility with the x86; the puzzle is that they did the exact opposite.

Moreover, built into the design of IA-64 is a feature known as predication which makes it incompatible in a major way with all other instruction sets. In particular, it needs 6 extra bits with each instruction. This upsets the traditional balance between instruction word length and information content, and it changes significantly the brief of the compiler writer.

In spite of having an entirely new instruction set, Intel made the puzzling claim that chips based on IA-64 would be compatible with earlier x86 chips. It was hard to see exactly what was meant.

Chips for the latest IA-64 processor, namely the Itanium, appear to have special hardware for compatibility. Even so, x86 code runs very slowly.

Because of the above complications, implementation of IA-64 requires a larger chip than is required for more conventional instruction sets. This in turn implies a higher cost.
Such, at any rate, is the received wisdom, and, as a general principle, it was repeated as such by Gordon Moore when he visited Cambridge recently to open the Betty and Gordon Moore Library. I have, however, heard it said that the matter appears differently from within Intel. This I do not understand. But I am very ready to admit that I am completely out of my depth as regards the economics of the semiconductor industry.

AMD have defined a 64-bit instruction set that is more compatible with x86 and they appear to be making headway with it. The chip is not a particularly large one. Some people think that this is what Intel should have done. [Since the lecture was delivered, Intel have announced that they will market a range of chips essentially compatible with those offered by AMD.]

The Relentless Drive towards Smaller Transistors
The scale of integration continued to increase. This was achieved by shrinking the original transistors so that more could be put on a chip. Moreover, the laws of physics were on the side of the manufacturers. The transistors also got faster, simply by getting smaller. It was therefore possible to have, at the same time, both high density and high speed.

There was a further advantage. Chips are made on discs of silicon, known as wafers. Each wafer has on it a large number of individual chips, which are processed together and later separated. Since shrinkage makes it possible to get more chips on a wafer, the cost per chip goes down.

Falling unit cost was important to the industry because, if the latest chips are cheaper to make as well as faster, there is no reason to go on offering the old ones, at least not indefinitely. There can thus be one product for the entire market.

However, detailed cost calculations showed that, in order to maintain this advantage as shrinkage proceeded beyond a certain point, it would be necessary to move to larger wafers. The increase in the size of wafers was no small matter. Originally, wafers were one or two inches in diameter, and by 2000 they were as much as twelve inches. At first, it puzzled me that, when shrinkage presented so many other problems, the industry should make things harder for itself by going to larger wafers. I now see that reducing unit cost was just as important to the industry as increasing the number of transistors on a chip, and that this justified the additional investment in foundries and the increased risk.

The degree of integration is measured by the feature size, which, for a given technology, is best defined as half the distance between wires in the densest chips made in that technology. At the present time, production of 90 nm chips is still building up.

Suspension of Law
In March 1997, Gordon Moore was a guest speaker at the celebrations of the centenary of the discovery of the electron, held at the Cavendish Laboratory. It was during the course of his lecture that I first heard the fact that you can have silicon chips that are both fast and low in cost described as a violation of Murphy's law, or Sod's law as it is usually called in the UK. Moore said that experience in other fields would lead you to expect to have to choose between speed and cost, or to compromise between them. In fact, in the case of silicon chips, it is possible to have both.

In a reference book available on the web, Murphy is identified as an engineer working on human acceleration tests for the US Air Force in 1949.
However, we were perfectly familiar with the law in my student days, when we called it by a much more prosaic name than either of those mentioned above, namely, the Law of General Cussedness. We even had a mock examination question in which the law featured. It was the type of question in which the first part asks for a definition of some law or principle and the second part contains a problem to be solved with the aid of it. In our case the first part was to define the Law of General Cussedness and the second was the problem:

A cyclist sets out on a circular cycling tour. Derive an equation giving the direction of the wind at any time.

The single-chip computer
At each shrinkage the number of chips was reduced and there were fewer wires going from one chip to another. This led to an additional increment in overall speed, since the transmission of signals from one chip to another takes a long time.

Eventually, shrinkage proceeded to the point at which the whole processor, except for the caches, could be put on one chip. This enabled a workstation to be built that out-performed the fastest minicomputer of the day, and the result was to kill the minicomputer stone dead. As we all know, this had severe consequences for the computer industry and for the people working in it.

From that time on, the high-density CMOS silicon chip was cock of the roost. Shrinkage went on until millions of transistors could be put on a single chip, and the speed went up in proportion.

Processor designers began to experiment with new architectural features designed to give extra speed. One very successful experiment concerned methods for predicting the way program branches would go. It was a surprise to me how successful this was. It led to a significant speeding up of program execution, and other forms of prediction followed.

Equally surprising is what it has been found possible to put on a single-chip computer by way of advanced features. For example, features that had been developed for the IBM Model 91 (the giant computer at the top of the System 360 range) are now to be found on microcomputers.

Murphy's Law remained in a state of suspension. No longer did it make sense to build experimental computers out of chips with a small scale of integration, such as that provided by the 7400 series. People who wanted to do hardware research at the circuit level had no option but to design chips and seek ways to get them made. For a time, this was possible, if not easy.

Unfortunately, there has since been a dramatic increase in the cost of making chips, mainly because of the increased cost of making masks for lithography, a photographic process used in the manufacture of chips. It has, in consequence, again become very difficult to finance the making of research chips, and this is currently a cause for some concern.

The Semiconductor Road Map
The extensive research and development work underlying the above advances has been made possible by a remarkable cooperative effort on the part of the international semiconductor industry.

At one time, US monopoly laws would probably have made it illegal for US companies to participate in such an effort. However, about 1980, significant and far-reaching changes took place in the laws. The concept of pre-competitive research was introduced.
Companies can now collaborate at the pre-competitive stage and later go on to develop products of their own in the regular competitive manner.

The agent by which the pre-competitive research in the semiconductor industry is managed is known as the Semiconductor Industry Association (SIA). This has been active as a US organisation since 1992 and it became international in 1998. Membership is open to any organisation that can contribute to the research effort.

Every two years the SIA produces a new version of a document known as the International Technology Roadmap for Semiconductors (ITRS), with an update in the intermediate years. The first volume bearing the title 'Roadmap' was issued in 1994, but two reports, written in 1992 and distributed in 1993, are regarded as the true beginning of the series.

Successive roadmaps aim at providing the best available industrial consensus on the way that the industry should move forward. They set out in great detail, over a 15-year horizon, the targets that must be achieved if the number of components on a chip is to be doubled every eighteen months (that is, if Moore's law is to be maintained) and if the cost per chip is to fall.

In the case of some items, the way ahead is clear. In others, manufacturing problems are foreseen and solutions to them are known, although not yet fully worked out; these areas are coloured yellow in the tables. Areas for which problems are foreseen, but for which no manufacturable solutions are known, are coloured red. Red areas are referred to as Red Brick Walls.

The targets set out in the Roadmaps have proved realistic as well as challenging, and the progress of the industry as a whole has followed the Roadmaps closely. This is a remarkable achievement and it may be said that the merits of cooperation and competition have been combined in an admirable manner.

It is to be noted that the major strategic decisions affecting the progress of the industry have been taken at the pre-competitive level in relative openness, rather than behind closed doors. These include the progression to larger wafers.

By 1995, I had begun to wonder exactly what would happen when the inevitable point was reached at which it became impossible to make transistors any smaller. My enquiries led me to visit ARPA headquarters in Washington DC, where I was given a copy of the recently produced Roadmap for 1994. This made it plain that serious problems would arise when a feature size of 100 nm was reached, an event projected to happen in 2007, with 70 nm following in 2010. The year for which the coming of 100 nm (or rather 90 nm) was projected was moved forward in later Roadmaps to 2004, and in the event the industry got there a little sooner.

I presented the above information from the 1994 Roadmap, along with such other information as I could obtain, in a lecture to the IEE in London, entitled 'The CMOS end-point and related topics in Computing', delivered on 8 February 1996.

The idea that I then had was that the end would be a direct consequence of the number of electrons available to represent a one being reduced from thousands to a few hundred. At this point, statistical fluctuations would become troublesome, and thereafter the circuits would either fail to work, or if they did work would not be any faster.
In fact, the physical limitations that are now beginning to make themselves felt do not arise through a shortage of electrons, but because the insulating layers on the chip have become so thin that leakage due to quantum-mechanical tunnelling has become troublesome.

There are many problems facing the chip manufacturer other than those that arise from fundamental physics, especially problems with lithography. In an update to the 2001 Roadmap published in 2002, it was stated that "the continuation of progress at the present rate will be at risk as we approach 2005, when the roadmap projects that progress will stall without research breakthroughs in most technical areas". This was the most specific statement about the Red Brick Wall that had so far come from the SIA, and it was a strong one. The 2003 Roadmap reinforces this statement by showing many areas marked red, indicating the existence of problems for which no manufacturable solutions are known.

It is satisfactory to report that, so far, timely solutions have been found to all the problems encountered. The Roadmap is a remarkable document and, for all its frankness about the problems looming ahead, it radiates immense confidence. Prevailing opinion reflects that confidence, and there is a general expectation that, by one means or another, shrinkage will continue, perhaps down to 45 nm or even less.

However, costs will rise steeply and at an increasing rate. It is cost that will ultimately be seen as the reason for calling a halt. The exact point at which an industrial consensus is reached that the escalating costs can no longer be met will depend on the general economic climate as well as on the financial strength of the semiconductor industry itself.

English Literature about Microcontrollers

The General Situation of the AT89C51

Microcontrollers are used in a multitude of commercial applications such as modems, motor-control systems, air conditioner control systems, automotive engine control systems and antilock braking systems (ABS), among others. The high processing speed and enhanced peripheral set of these microcontrollers make them suitable for such high-speed, event-based applications. However, these critical application domains also require that these microcontrollers be highly reliable. High reliability and low market risk can be ensured by a robust testing process and a proper tools environment for the validation of these microcontrollers, both at the component level and at the system level. Intel's Platform Engineering department developed an object-oriented, multi-threaded test environment for the validation of its AT89C51 automotive microcontrollers. The goal of this environment was not only to provide a robust testing environment for the AT89C51 automotive microcontrollers, but to develop an environment which can be easily extended and reused for the validation of several other future microcontrollers. The environment was developed in conjunction with the Microsoft Foundation Classes (MFC). This paper describes the design and mechanism of this test environment, its interactions with various hardware/software environmental components, and how to use the AT89C51.

1.1 Introduction
The 8-bit AT89C51 CHMOS microcontrollers are designed to handle high-speed calculations and fast input/output operations. MCS 51 microcontrollers are typically used for high-speed event control systems. Commercial applications include modems, motor-control systems, printers, photocopiers, air conditioner control systems, disk drives, and medical instruments. The automotive industry uses MCS 51 microcontrollers in airbags, suspension systems, and antilock braking systems (ABS). The AT89C51 is especially well suited to applications that benefit from its processing speed and enhanced on-chip peripheral function set, such as automotive power-train control, vehicle dynamic suspension, antilock braking, and stability control applications.

Because of these critical applications, the market requires a reliable, cost-effective controller with a low interrupt latency response, the ability to service the large number of time- and event-driven integrated peripherals needed in real-time applications, and a CPU with above-average processing power in a single package. The financial and legal risk of having devices that operate unpredictably is very high. Once in the market, particularly in mission-critical applications such as an autopilot or an anti-lock braking system, mistakes are financially prohibitive. Redesign costs can run as high as $500K, and much more if the fix means back-annotating it across a product family that shares the same core and/or peripheral design flaw. In addition, field replacement of components is extremely expensive, as the devices are typically sealed in modules with a total value several times that of the component. To mitigate these problems, it is essential that comprehensive testing of the controllers be carried out at both the component level and the system level, under worst-case environmental and voltage conditions. This complete and thorough validation necessitates not only a well-defined process but also a proper environment and tools to facilitate and execute the mission successfully. The Intel Chandler Platform Engineering group provides post-silicon system validation (SV) of various microcontrollers and processors.
The system validation process can be broken into three major parts. The type of the device and its application requirements determine which types of testing are performed on the device.

1.2 The AT89C51 provides the following standard features:
4K bytes of Flash, 128 bytes of RAM, 32 I/O lines, two 16-bit timer/counters, a five-vector two-level interrupt architecture, a full duplex serial port, and on-chip oscillator and clock circuitry. In addition, the AT89C51 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port and interrupt system to continue functioning. The Power-down Mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next hardware reset.
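To make the two power-saving modes concrete, the sketch below shows how they are conventionally entered on 8051-family parts by setting bits in the PCON special function register. This is a generic illustration assuming a Keil C51 toolchain and its reg51.h header, not code from the text above.

    #include <reg51.h>

    #define IDL 0x01    /* PCON.0: Idle Mode - stops the CPU, peripherals keep running */
    #define PD  0x02    /* PCON.1: Power-down Mode - freezes the oscillator */

    void enter_idle(void)
    {
        PCON |= IDL;    /* any enabled interrupt (or a reset) wakes the CPU again */
    }

    void enter_power_down(void)
    {
        PCON |= PD;     /* on the classic AT89C51, only a hardware reset wakes the part */
    }

PCON is not bit-addressable, which is why the bits are set with a read-modify-write OR rather than with sbit access.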

English References on Microcontrollers (120 Selected)

Although China started relatively late in the microcontroller field, decades of development have brought great achievements.

Every aspect of industrial production and social life now depends on the use of microcontrollers.

Below is a selection of English references on microcontrollers, collected and organized here for reference.

English references on microcontrollers, part 1:

[1] Hui Wang. Optimal Design of Single Chip Microcomputer Multi-machine Serial Communication Based on Signal Verification Technology [J]. International Journal of Intelligent Information and Management Science, 2020, 9(1).

[2] Philip J. Basford, Steven J. Johnston, Colin S. Perkins, Tony Garnock-Jones, Fung Po Tso, Dimitrios Pezaros, Robert D. Mullins, Eiko Yoneki, Jeremy Singer, Simon J. Cox. Performance Analysis of Single Board Computer Clusters [J]. Future Generation Computer Systems, 2020, 102.

[3] Computers; Reports from University of Southampton Describe Recent Advances in Computers (Performance Analysis of Single Board Computer Clusters) [J]. Computers, Networks & Communications, 2020.

[4] Yunyu Cao, Jinjin Dang, Chenxu Cao. Design of Automobile Digital Tire Pressure Detector [J]. Journal of Scientific Research and Reports, 2019.

[5] Sudad J. Ashaj, Ergun Erçelebi. Reduce Cost Smart Power Management System by Utilize Single Board Computer Artificial Neural Networks for Smart Systems [J]. International Journal of Computational Intelligence Systems, 2019.

[6] Hanhong Tan, Yanfei Teng. Design of PWM Lighting Brightness Control Based on LAN QIAO Cup Single Chip Microcomputer [J]. International Journal of Computational and Engineering, 2019, 4(3).

An English Article on Microcontrollers

Microcontrollers, often referred to as the "brains" of a system, are small computers integrated into a single chip. They are designed to perform a specific set of tasks and are widely used in various applications, from household appliances to automotive systems and industrial automation.

The evolution of microcontrollers has been remarkable, with continuous advancements in technology leading to increased processing power, reduced size, and improved energy efficiency. Modern microcontrollers are equipped with various features such as integrated memory, peripherals for communication, and support for multiple programming languages, making them versatile tools for developers.

One of the key aspects of microcontroller development is the ability to program them. Programming a microcontroller involves writing code that defines the operations the device will perform. This code is typically written in languages such as C, C++, or assembly, and is then compiled and uploaded to the microcontroller's memory (a small example appears at the end of this article).

Debugging is another critical component of microcontroller development. It involves testing the code to ensure that it functions as intended and identifying any errors that may occur. Debugging tools and techniques are essential for developers to refine their code and achieve optimal performance.

In addition to programming and debugging, microcontroller development also encompasses the design of the hardware that the microcontroller will control. This includes selecting appropriate sensors, actuators, and other components that will interact with the microcontroller to achieve the desired functionality.

The field of microcontroller development is constantly evolving, with new technologies and techniques emerging regularly. As a result, it is important for developers to stay informed about the latest advancements and to continually update their skills and knowledge.

In conclusion, microcontrollers are integral to the functioning of many modern systems and devices. Their development involves a combination of programming, debugging, and hardware design, and requires a deep understanding of both software and hardware concepts. As technology continues to advance, the role of microcontrollers in our daily lives is likely to become even more significant.
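The small example promised above: a classic "blink an LED" program illustrating the write-compile-upload workflow for an 8051-class part. The pin choice and the delay constant are illustrative assumptions, and the Keil C51 header is assumed.

    #include <reg51.h>

    sbit LED = P1^0;                      /* assumed wiring: LED on port pin P1.0 */

    static void delay(unsigned int n)     /* crude software delay, roughly n milliseconds */
    {
        volatile unsigned int i;          /* volatile so the empty loop is not optimized away */
        while (n--)
            for (i = 0; i < 120; i++)
                ;
    }

    void main(void)
    {
        while (1) {                       /* embedded programs run forever */
            LED = !LED;                   /* toggle the pin */
            delay(500);                   /* about half a second per state */
        }
    }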


Principle of the MCU

A single-chip microcomputer (MCU) is a complete computer system integrated on a single chip. Even though most of its features fit on one small chip, it contains the majority of the components a complete computer needs: a CPU, memory, an internal and external bus system, and on most chips a core set of peripherals such as communication interfaces, timers, and a real-time clock. The most powerful single-chip systems today can even integrate speech, image processing, networking, and complex input/output subsystems on a single chip.

The single-chip microcomputer is also known as the MCU (Microcontroller), because it was first used in the field of industrial control. MCUs developed out of dedicated processors built around a single-chip CPU. The original design concept was to integrate a large number of peripherals together with the CPU on one chip, making the computer system smaller and easier to embed in complex, volume-constrained control devices. The Zilog Z80 was one of the first processors designed along these lines; from then on, MCUs and dedicated processors developed along separate paths.

Early MCUs were all 4-bit or 8-bit devices. One of the most successful was Intel's 8031, which earned wide praise for its simple, reliable performance. The MCS-51 series of single-chip systems was later developed from the 8031, and systems based on it are still in widespread use today. As the requirements of industrial control grew, 16-bit MCUs appeared, but they were never used very widely because of their unattractive pricing. In the 1990s, with the boom in consumer electronics, single-chip technology improved enormously. As Intel's i960 series and, above all, the later ARM cores found broad application, 32-bit MCUs quickly displaced 16-bit parts from the high end and entered the mainstream market. The performance of traditional 8-bit MCUs has also risen rapidly, with processing power several hundred times that of the 1980s. At present, high-end 32-bit MCUs run at frequencies above 300 MHz, close on the heels of mid-1990s dedicated processors, while ordinary models have dropped in price to one U.S. dollar and even the most high-end models cost only about ten U.S. dollars. Contemporary single-chip systems are no longer developed and used only on bare metal: dedicated embedded operating systems are widely used across the full range of MCUs, and the high-end parts that serve as the processing cores of PDAs and mobile phones can even run the Windows and Linux operating systems directly.

Being better suited than dedicated processors to embedded systems, the MCU has found the widest possible application. In fact, single-chip microcomputers are the most numerous computers in the world: almost every electronic or mechanical product in modern life integrates at least one. Mobile phones, telephones, calculators, home appliances, electronic toys, handheld computers, and computer accessories such as mice each carry one or two MCUs, and personal computers also contain a good number of them. An ordinary car is equipped with more than forty, and a complex industrial control system may have hundreds working at the same time!
The number of MCUs in service not only far exceeds that of PCs and all other computers combined; it even exceeds the number of human beings.

Hardware introduction

The 8051 family of microcontrollers is based on an architecture highly optimized for embedded control systems. It is used in a wide variety of applications, from military equipment to automobiles to the keyboard on your PC. Second only to the Motorola 68HC11 in eight-bit processor sales, the 8051 family is available in a wide array of variations from manufacturers such as Intel, Philips, and Siemens. These manufacturers have added numerous features and peripherals to the 8051, such as I2C interfaces, analog-to-digital converters, watchdog timers, and pulse-width-modulated outputs. Variations of the 8051 with clock speeds up to 40 MHz and voltage requirements down to 1.5 volts are available. This wide range of parts based on one core makes the 8051 family an excellent choice as the base architecture for a company's entire line of products, since it can perform many functions and developers will only have to learn this one platform.

The basic architecture consists of the following features:

·an eight-bit ALU
·32 discrete I/O pins (4 groups of 8) which can be individually accessed
·two 16-bit timer/counters
·a full-duplex UART
·6 interrupt sources with 2 priority levels
·128 bytes of on-board RAM
·separate 64K-byte address spaces for DATA and CODE memory

One 8051 processor cycle consists of twelve oscillator periods. Each of the twelve oscillator periods is used for a special function by the 8051 core, such as opcode fetches and samples of the interrupt daisy chain for pending interrupts. The time required for any 8051 instruction can be computed by dividing the clock frequency by 12, inverting that result, and multiplying it by the number of processor cycles required by the instruction in question. Therefore, if you have a system using an 11.059 MHz clock, you can compute the number of single-cycle instructions per second by dividing this value by 12, which gives an instruction frequency of 921,583 instructions per second. Inverting this gives the amount of time taken by each instruction cycle: about 1.085 microseconds.
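As a quick check of that arithmetic, the short host-side C program below reproduces the figures quoted above; the 11.059 MHz clock value is taken from the text, and the program is only an illustration of the calculation.

```c
#include <stdio.h>

int main(void)
{
    const double clock_hz = 11.059e6;       /* clock frequency from the text */
    const double periods_per_cycle = 12.0;  /* oscillator periods per cycle  */

    double cycles_per_second = clock_hz / periods_per_cycle;
    double cycle_time_us = 1e6 / cycles_per_second;

    printf("instruction cycles per second: %.0f\n", cycles_per_second); /* ~921583 */
    printf("instruction cycle time: %.3f us\n", cycle_time_us);         /* ~1.085  */
    return 0;
}
```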


Structure and function of the MCS-51 series

MCS-51 is the name of a family of single-chip microcomputers produced by Intel. After introducing the 8-bit MCS-48 series in 1976, the company introduced the higher-grade 8-bit MCS-51 series in 1980. The family comprises many chips, such as the 8051, 8031, 8751, 80C51BH, and 80C31BH; their basic composition, basic performance, and instruction set are all the same, and the 8051 is customarily taken as the representative of the 51 series.

An MCS-51 single-chip system is made up of the following parts:

(1) An 8-bit microprocessor (CPU).
(2) On-chip data memory (128 B or 256 B of RAM), used for readable and writable data such as intermediate results of computations, final results, and data to be displayed.
(3) Program memory (4 KB or 8 KB of ROM/EPROM), used to store the program, some initial data, and on-chip lookup tables. Some devices, such as the 8031, 8032, and 80C31, contain no on-chip ROM/EPROM at all.
(4) Four 8-bit parallel I/O ports, P0 through P3; each port line can be used as an input or as an output.
(5) Two timer/counters; each may be set up in counter mode to count external events, or in timer mode, and the count or timing result can be used to control the machine.
(6) An interrupt control system with five interrupt sources.
(7) One full-duplex UART (universal asynchronous receiver/transmitter) serial I/O port, used for serial communication between MCUs, or between an MCU and a computer.
(8) The oscillator and clock-generating circuit; the quartz crystal and fine-tuning capacitors must be connected externally, and the maximum permitted oscillation frequency is 12 MHz.

All of these parts are joined through the internal data bus. Among them, the CPU is the core of the MCU, its control and command center, made up of an arithmetic unit, a controller, and related parts. The arithmetic unit contains an ALU capable of 8-bit arithmetic and logic operations, two 8-bit temporary storage registers, the 8-bit accumulator ACC, register B, and the program status word register PSW. The ALU takes two operands: one is usually the accumulator ACC, the other comes from a temporary storage register; the result of an operation is normally looped back into ACC. In addition, ACC often serves as the relay station for data transfers inside the 8051; as in a general-purpose microprocessor, it is the busiest register, and it is written as A in instruction mnemonics.

The controller includes the program counter, the instruction register, the instruction decoder, the oscillator, and the timing circuitry. The program counter is built from two 8-bit counters, 16 bits in all; it is in fact the byte-address counter of the program, and its content (PC) is the address of the next instruction to be executed. Changing its content changes the direction in which the program proceeds.

For the oscillator circuit of the 8051, only the external quartz crystal and frequency fine-tuning capacitors are needed; the permitted frequency range is 1.2 MHz to 12 MHz. The resulting pulse signal serves as the basic working beat of the 8051, i.e., its minimum unit of time.
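The timer/counters listed above count these same machine-cycle beats when run in timer mode. As an illustrative sketch (again assuming the SDCC compiler and its <8051.h> register names, plus an 11.0592 MHz crystal, which is an assumption rather than a figure from the text), the routine below preloads Timer 0 in 16-bit mode 1 so that it overflows after roughly 50 ms:

```c
#include <8051.h>

/* Busy-wait about 50 ms using Timer 0 in 16-bit mode (mode 1).
 * At 11.0592 MHz one machine cycle is ~1.085 us, so 50 ms is about
 * 46080 cycles; the timer counts up, so preload 65536 - 46080 = 0x4C00. */
static void delay_50ms(void)
{
    TMOD = (TMOD & 0xF0) | 0x01; /* Timer 0 in mode 1; keep Timer 1 bits */
    TH0 = 0x4C;                  /* high byte of the preload value */
    TL0 = 0x00;                  /* low byte of the preload value  */
    TF0 = 0;                     /* clear the overflow flag        */
    TR0 = 1;                     /* start counting                 */
    while (!TF0)                 /* spin until the timer overflows */
        ;
    TR0 = 0;                     /* stop the timer                 */
}
```

The preload value follows directly from the cycle arithmetic given earlier: 50 ms ÷ 1.085 µs ≈ 46080 machine cycles, and 65536 − 46080 = 19456 = 0x4C00.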
The 8051, like any other computer, works in harmony under the control of this basic clock beat, much as an orchestra plays to the beat of its conductor.

Inside the 8051 there are two kinds of memory: ROM (program memory, read-only) and RAM (data memory, readable and writable), each with its own independent address space, configured in the same way as the memory of a general-purpose computer. The program memory of the 8051 and 8751 has a capacity of 4 KB, with addresses starting from 0000H, and is used to store the program and constant tables. The on-chip data memory of the 8051, 8751, and 8031 is 128 B of RAM, at addresses 00H-7FH, used to hold intermediate results of computations, temporary data, and data buffers. Within this 128 B of RAM there is a region of 32 bytes that can be designated as working registers; unlike a general-purpose microprocessor, the 8051 arranges its on-chip RAM and its working registers in one and the same address map.

The memory configuration of the MCS-51 series also differs from that of a general-purpose computer. A general-purpose computer has a single address space, within which ROM and RAM may be placed anywhere at will: when the memory is accessed, an address corresponds to one and only one memory unit, which may be ROM or RAM, and both are accessed with the same instructions. This kind of memory organization is called the Princeton structure. The 8051, by contrast, physically divides its memory into program-memory space and data-memory space, four memory spaces in all: on-chip and off-chip program memory, and on-chip and off-chip data memory. This organization, in which program memory is separated from data memory, is called the Harvard structure.

From the user's point of view, however, the 8051 memory address space is divided into three kinds:

(1) a unified on-chip and off-chip program-memory space of 64 KB, addressed from 0000H to FFFFH (16-bit addresses);
(2) an off-chip data-memory space of 64 KB, likewise addressed from 0000H to FFFFH (16-bit addresses);
(3) an on-chip data-memory space of 256 B (8-bit addresses).

Because these three address spaces overlap, the 8051 instruction set distinguishes them with different data-transfer mnemonics: the CPU uses MOVC to access program ROM, MOVX to access off-chip RAM, and MOV to access on-chip RAM.

The 8051 has four 8-bit parallel I/O ports, called P0, P1, P2, and P3. Each is an 8-bit quasi-bidirectional port, occupying 32 pins altogether. Every I/O line can be used independently as an input or as an output. Each port comprises a latch (namely a special function register), an output driver, and an input buffer, so that data can be latched on output and buffered on input; the functions of the four ports, however, are not identical. All four may serve as ordinary quasi-bidirectional I/O ports; but in a system expanded with off-chip memory, P2 outputs the high 8 bits of the address, while P0 becomes a bidirectional bus, time-multiplexing the low 8 address bits with the data input/output. The output stages of ports P1 and P3 have internal pull-up load resistors, and each of their lines can drive 4 LS TTL loads.
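The three overlapping address spaces, and the MOVC/MOVX/MOV instructions that select among them, surface directly in C compilers for the 8051. The fragment below is a minimal sketch assuming SDCC's memory-space qualifiers; the variable names are purely illustrative.

```c
#include <8051.h>

__code const unsigned char table[4] = {1, 2, 4, 8}; /* program memory: read with MOVC   */
__xdata unsigned char ext_buf[64];                  /* off-chip RAM: accessed via MOVX  */
__data unsigned char counter;                       /* on-chip RAM (00H-7FH): plain MOV */

void main(void)
{
    counter = table[2];   /* compiler emits a MOVC fetch from code space */
    ext_buf[0] = counter; /* compiler emits a MOVX write to external RAM */
    for (;;)
        ;
}
```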
When used as inputs, ports P1 through P3 of the 8051 can be driven normally by any TTL or NMOS circuit; and because their output stages have internal pull-up resistors, they can also be driven by open-collector or open-drain outputs without needing external pull-ups. All of these ports are quasi-bidirectional: before a line is used as an input, a 1 must first be written to the corresponding bit of the port latch. For 80C51 devices, a port can only supply output currents on the order of milliamps, so when a port pin is used to drive the base of an ordinary transistor, a series resistor should be placed between the port and the base to limit the current when the output is at the high level.

Reset is the initialization operation of the MCU. Its main function is to set the PC to 0000H, so that the MCU begins executing the program from address 0000H. Besides the normal initialization performed when the system powers up, a reset button is also needed to restart the machine when the program runs into an error or an operating mistake leaves it stuck. On the 8051, the RST pin is the input for the reset signal. The reset signal is active high, and it must be held for at least 24 oscillator periods (namely 2 machine cycles) to take effect. If a 6 MHz crystal is used, the reset signal must therefore last more than 4 microseconds for the reset operation to complete.

[Figure: logic diagram of the circuit that produces the reset signal, not reproduced here.]

The complete reset circuit consists of two parts, one outside and one inside the chip. The external circuit generates the reset signal (RST) and delivers it to a Schmitt trigger; the internal reset circuit samples the output of the Schmitt trigger once per machine cycle, at state S5P2, and from this derives the internal signals needed to carry out the reset. With the resistor and capacitor of the reset circuit chosen appropriately for a 6 MHz crystal, the reset signal can be guaranteed to stay high for more than 2 machine cycles. Although the reset circuit is simple, its function is very important: before a single-chip system can run normally, one should first check whether it resets successfully. A preliminary check is to probe the RST pin with an oscilloscope, press the reset button, and observe that a transient pulse of sufficient amplitude is produced; one can also experiment by changing the RC values of the reset circuit.
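The write-1-before-reading rule for the quasi-bidirectional ports can also be shown in code. The sketch below again assumes SDCC and its <8051.h> definitions; the switch on P1.2 and the LED on P1.0 are hypothetical wiring, not taken from the text.

```c
#include <8051.h>

void main(void)
{
    P1 |= 0x04;                         /* write 1 to the P1.2 latch before using it as input */
    for (;;) {
        if ((P1 & 0x04) == 0)           /* an external switch is assumed to pull P1.2 low */
            P1 &= (unsigned char)~0x01; /* drive P1.0 low, e.g. to light an LED */
        else
            P1 |= 0x01;                 /* release P1.0 high */
    }
}
```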
