Building Systems from Commercial Components Using Model Problems
英语长篇阅读文章

英语长篇阅读文章对于语言学习者而言,阅读是语言输入的重要方式。
阅读策略是语言学习者为了提高阅读理解而采取的技巧和方法。
下面是店铺带来的英语长篇阅读文章,欢迎阅读!英语长篇阅读文章1科技与自然Technology that imitates natureBiomimetics: Engineers are increasingly taking a leaf out of nature's book when looking for solutions to design problems AFTER taking his dog for a walk one day in the early 1940s, George de Mestral, a Swiss inventor, became curious about the seeds of the burdock plant that had attached themselves to his clothes and to the dog's fur. Under a microscope, he looked closely at the hook-and-loop system that the seeds have evolved to hitchhike on passing animals and aid pollination, and he realised that the same approach could be used to join other things together. The result was Velcr a product that was arguably more than three billion years in the making, since that is how long the natural mechanism that inspired it took to evolve.Velcro is probably the most famous and certainly the most successful example of bio logical mimicry, or “biomimetics”. In fields from robotics to materials science, technologists are increasingly borrowing ideas from nature, and with good reason: nature's designs have, by definition, stood the test of time, so it would be foolish to ignore them. Yet transplanting natural designs into man-made technologies is still a hit-or-miss affair.Engineers depend on biologists to discover interesting mechanisms for them to exploit, says Julian Vincent, the director of the Centre for Biomimetic and Natural Technologies at theUniversity of Bath in England. So he and his colleagues have been working on a scheme to enable engineers to bypass the biologists and tap into nature's ingenuity directly, via a database of “biological patents”. The idea is that this database will let anyone search through a wide range of biological mechanisms and properties to find natural solutions to technological problems.How not to reinvent the wheelSurely human intellect, and the deliberate application of design knowledge, can devise better mechanisms than the mindless, random process of evolution? Far from it. Over billions of years of trial and error, nature has devised effective solutions to all sorts of complicated real-world problems. Take the slippery task of controlling a submersible vehicle, for example. Using propellers, it is incredibly difficult to make refined movements. But Nekton Research, a company based in Durham, North Carolina, has developed a robot fish called Madeleine that manoeuvres using fins instead.In some cases, engineers can spend decades inventing and perfecting a new technology, only to discover that nature beat them to it. The Venus flower basket, for example, a kind of deep-sea sponge, has spiny skeletal outgrowths that are remarkably similar, both in appearance and optical properties, to commercial optical fibres, notes Joanna Aizenberg, a researcher at Lucent Technology's Bell Laboratories in New Jersey. And sometimes the systems found in nature can make even the most advanced technologies look primitive by comparison, she says.The skeletons of brittlestars, which are sea creatures related to starfish and sea urchins, contain thousands of tiny lenses that collectively form a single, distributed eye. This enables brittlestarsto escape predators and distinguish between night and day. Besides having unusual optical properties and being very small—each is just one-twentieth of a millimetre in diameter—the lenses have another trick of particular relevance to micro-optical systems. Although the lenses are fixed in shape, they are connected via a network of fluid-filled channels, containing a light-absorbing pigment. The creature can vary the contrast of the lenses by controlling this fluid. The same idea can be applied in man-made lenses, says Dr Aiz enberg. “These are made from silicon and so cannot change their properties,” she says. But by copying the brittlestar's fluidic system, she has been able to make biomimetic lens arrays with the same flexibility.Another demonstration of the power of biomimetics comes from the gecko. This lizard's ability to walk up walls and along ceilings is of much interest, and not only to fans of Spider-Man. Two groups of researchers, one led by Andre Geim at Manchester University and the other by Ron Fearing at the University of California, Berkeley, have independently developed ways to copy the gecko's ability to cling to walls. The secret of the gecko's success lies in the tiny hair-like structures, called setae, that cover its feet. Instead of secreting a sticky substance, as you might expect, they owe their adhesive properties to incredibly weak intermolecular attractive forces. These van der Waals forces, as they are known, which exist between any two adjacent objects, arise between the setae and the wall to which the gecko is clinging. Normally such forces are negligible, but the setae, with their spatula-like tips, maximise the surface area in contact with the wall. The weak forces, multiplied across thousands of setae, are then sufficient to hold the lizard's weight.Both the British and American teams have shown that theintricate design of these microscopic setae can be reproduced using synthetic materials. Dr Geim calls the result “gecko tape”. The technology is still some years away from commercialisation, says Thomas Kenny of Stanford University, who is a member of Dr Fearing's group. But when it does reach the market, rather than being used to make wall-crawling gloves, it will probably be used as an alternative to Velcro, or in sticking plasters. Indeed, says Dr Kenny, it could be particularly useful in medical applications where chemical adhesives cannot be used.While it is far from obvious that geckos' feet could inspire a new kind of sticking plaster, there are some fields—such as robotics—in which borrowing designs from nature is self-evidently the sensible thing to do. The next generation of planetary exploration vehicles being designed by America's space agency, NASA, for example, will have legs rather than wheels. That is because legs can get you places that wheels cannot, says Dr Kenny. Wheels work well on flat surfaces, but are much less efficient on uneven terrain. Scientists at NASA's Ames Research Centre in Mountain View, California, are evaluating an eight-legged walking robot modelled on a scorpion, and America's Defence Advanced Research Projects Agency (DARPA) is funding research into four-legged robot dogs, with a view to applying the technology on the battlefield.Having legs is only half the story—it's how you control them that counts, says Joseph Ayers, a biologist and neurophysiologist at Northeastern University, Massachusetts. He has spent recent years developing a biomimetic robotic lobster that does not just look like a lobster but actually emulates parts of a lobster's nervous system to control its walking behaviour. The control system of the scorpion robot, which is being developed by NASAin conjunction with the University of Bremen in Germany, is also biologically inspired. Meanwhile, a Finnish technology firm, Plustech, has developed a six-legged tractor for use in forestry. Clambering over fallen logs and up steep hills, it can cross terrain that would be impassable in a wheeled vehicle.Other examples of biomimetics abound: Autotype, a materials firm, has developed a plastic film based on the complex microstructures found in moth eyes, which have evolved to collect as much light as possible without reflection. When applied to the screen of a mobile phone, the film reduces reflections and improves readability, and improves battery life since there is less need to illuminate the screen. Researchers at the University of Florida, meanwhile, have devised a coating inspired by the rough, bristly skin of sharks. It can be applied to the hulls of ships and submarines to prevent algae and barnacles from attaching themselves. At Penn State University, engineers have designed aircraft wings that can change shape in different phases of flight, just as birds' wings do. And Dr Vincent has devised a smart fabric, inspired by the way in which pine cones open and close depending on the humidity, that could be used to make clothing that adjusts to changing body temperatures and keeps the wearer cool.From hit-and-miss to point-and-clickYet despite all these successes, biomimetics still depends far too heavily on serendipity, says Dr Vincent. He estimates that there is only a 10% overlap between biological and technological mechanisms used to solve particular problems. In other words, there is still an enormous number of potentially useful mechanisms that have yet to be exploited. The problem is that the engineers looking for solutions depend on biologists havingalready found them—and the two groups move in different circles and speak very different languages. A natural mechanism or property must first be discovered by biologists, described in technological terms, and then picked up by an engineer who recognises its potential.This process is entirely the wrong way round, says Dr Vincent. “To be effective, biomimetics should be providing examples of suitable technologies from biology which fulfil the requirements of a particular engineering problem,” he explains. That is why he and his colleagues, with funding from Britain's Engineering and Physical Sciences Research Council, have spent the past three years building a database of biological tricks which engineers will be able to access to find natural solutions to their design problems. A search of the database with the keyword “propulsion”, for example, produces a range of propulsion mechanisms used by jellyfish, frogs and crustaceans.The database can also be queried using a technique developed in Russia, known as the theory of inventive problem solving, or TRIZ. In essence, this is a set of rules that breaks down a problem into smaller parts, and those parts into particular functions that must be performed by components of the solution. Usually these functions are compared against a database of engineering patents, but Dr Vincent's team have substituted their database of “biological patents” instead. Thes e are not patents in the conventional sense, of course, since the information will be available for use by anyone. By calling biomimetic tricks “biological patents”, the researchers are just emphasising that nature is, in effect, the patent holder.One way to use the system is to characterise an engineering problem in the form of a list of desirable features that thesolution ought to have, and another list of undesirable features that it ought to avoid. The database is then searched for any biological patents that meet those criteria. So, for example, searching for a means of defying gravity might produce a number of possible solutions taken from different flying creatures but described in engineering terms. “If you want flight, you don't copy a bird, but you do copy the use of wings and aerofoils,” says Dr Vincent.He hopes that the database will store more than just blueprints for biological mechanisms that can be replicated using technology. Biomimetics can help with software, as well as hardware, as the robolobster built by Dr Ayers demonstrates. Its physical design and control systems are both biologically inspired. Most current robots, in contrast, are deterministically programmed. When building a robot, the designers must anticipate every contingency of the robot's environment and tell it how to respond in each case. Animal models, however, provide a plethora of proven solutions to real-world problems that could be useful in all sorts of applications. “The set of behavioural acts that a lobster goes through when searching for food is exactly what one would want a robot to do to search for underwater mines,” says Dr Ayers. It took nature millions of years of trial and error to evolve these behaviours, he says, so it would be silly not to take advantage of them.Although Dr Vincent's database will not be capable of providing such specific results as control algorithms, it could help to identify natural systems and behaviours that might be useful to engineers. But it is still early days. So far the database contains only 2,500 patents. To make it really useful, Dr Vincent wants to collect ten times as many, a task for which he intends to ask theonline community for help. Building a repository of nature's cleverest designs, he hopes, will eventually make it easier and quicker for engineers to steal and reuse them.英语长篇阅读文章2 Lessons from a feminist paradise on Equal Pay DayOn the surface, Sweden appears to be a feminist paradise. Look at any global survey of gender equity and Sweden will be near the top. Family-friendly policies are its norm —with 16 months of paid parental leave, special protections for part-time workers, and state-subsidized preschools where, according to a government website, “gender-awareness education is increasingly common.” Due to an u nofficial quota system, women hold 45 percent of positions in the Swedish parliament. They have enjoyed the protection of government agencies with titles like the Ministry of Integration and Gender Equality and the Secretariat of Gender Research. So why are American women so far ahead of their Swedish counterparts in breaking through the glass ceiling?In a 2012 report, the World Economic Forum found that when it comes to closing the gender gap in “economic participation and opportunity,” the United States is ahead of not only Sweden but also Finland, Denmark, the Netherlands, Iceland, Germany, and the United Kingdom. Sweden’s rank in the report can largely be explained by its political quota system. Though the United States has fewer women in the workforce (68 percent compared to Sweden’s 77 percent), American women who choose to be employed are far more likely to work full-time and to hold high-level jobs as managers or professionals. Compared to their European counterparts, they own more businesses, launch more start start-ups, and more often work in traditionallymale fields. As for breaking the glass ceiling in business, American women are well in the lead, as the chart below shows.What explains the American advantage? How can it be that societies like Sweden, where gender equity is relentlessly pursued and enforced, have fewer female managers, executives, professionals, and business owners than the laissez-faire United States? A new study by Cornell economists Francine Blau and Lawrence Kahn gives an explanation.Generous parental leave policies and readily available part-time options have unintended consequences: instead of strengthening women’s attachment to the workplace, they appear to weaken it. In addition to a 16-month leave, a Swedish parent has the right to work six hours a day (for a reduced salary) until his or her child is eight years old. Mothers are far more likely than fathers to take advantage of this law. But extended leaves and part-time employment are known to be harmful to careers — for both genders. And with women a second factor comes into play: most seem to enjoy the flex-time arrangement (once known as the “mommy track”) and never find their way back to full-time or high-level employment. In sum: generous family-friendly policies do keep more women in the labor market, but they also tend to diminish their careers.According to Blau and Kahn, Swedish-style paternal leave policies and flex-time arrangements pose a second threat to women’s progress: they make employers wary of hiring wom en for full-time positions at all. Offering a job to a man is the safer bet. He is far less likely to take a year of parental leave and then return on a reduced work schedule for the next eight years.I became aware of the trials of career-focused European women a few years ago when I met a post-doctoral student fromGermany who was then a visiting fellow at Johns Hopkins. She was astonished by the professional possibilities afforded to young American women. Her best hope in Germany was a government job ––prospects for women in the private sector were dim. “In Germany,” she told me, “we have all the benefits, but employers don’t want to hire us.”Swedish economists Magnus Henrekson and Mikael Stenkula addressed the following question in their 2009 study: why are there so few female top executives in the European egalitarian welfare states? Their answer: “Broad-based welfare-state policies impede women’s representation in elite competitive positions.”It is tempting to declare the Swedish policies regressive and hail the American system as superior. But that would be shortsighted. The Swedes can certainly take a lesson from the United States and look for ways to clear a path for their high-octane female careerists. But most women are not committed careerists. When the Pew Research Center recently asked American parents to identify their "ideal" life arrangement, 47 percent of mothers said they would prefer to work part-time and 20 percent said they would prefer not to work at all. Fathers answered differently: 75 percent preferred full-time work. Some version of the Swedish system might work well for a majority of American parents, but the United States is unlikely to fully embrace the Swedish model. Still, we can learn from their experience.Despite its failure to shatter the glass ceiling, Sweden has one of the most powerful and innovative economies in the world. In its 2011-2012 survey, the World Economic Forum ranked Sweden as the world’s third most competitive economy; the UnitedStates came in fifth. Sweden, dubbed the "rockstar of the recovery" in the Washington Post, also leads the world in life satisfaction and happiness. It is a society well worth studying, and its efforts to conquer the gender gap impart a vital lesson —though not the lesson the Swedes had in mind.Sweden has gone farther than any nation on earth to integrate the sexes and to offer women the same opportunities and freedoms as men. For decades, these descendants of the Vikings have been trying to show the world that the right mix of enlightened policy, consciousness raising, and non-sexist child rearing would close the gender divide once and for all. Yet the divide persists.A 2012 press release from Statistics Sweden bears the title “Gender Equality in Sweden Treading Water” and notes: The total income from employment for all ages is lower for women than for men.One in three employed women and one in ten employed men work part-time.Women’s working time is influenced by the number and age of their children, but men’s working time is not affected by these factors.Of all employees, only 13 percent of the women and 12 percent of the men have occupations with an even distribution of the sexes.Confronted with such facts, some Swedish activists and legislators are demanding more extreme and far-reaching measures, such as replacing male and female pronouns with a neutral alternative and monitoring children more closely to correct them when they gravitate toward gendered play. When it came to light last year that mothers, far more than fathers, choseto stay home from work to care for their sick toddlers, Ulf Kristersson, minister of social security, quickly commissioned a study to determine the causes of and possible cures for this disturbing state of affairs.I have another suggestion for Kristersson and his compatriots: acknowledge the results of your own 40-year experiment. The sexes are not interchangeable. When Catherine Hakim, a sociologist at the London School of Economics, studied the preferences of women and men in Western Europe, her results matched those of the aforementioned Pew study. Women, far more than men, give priority to domestic life. The Swedes should consider the possibility that the current division of labor is not an artifact of sexism, but the triumph of liberated preference.In the 1940s, the American playwright, congresswoman, and conservative feminist Clare Boothe Luce made a prediction about what would happen to men and women under conditions of freedom:It is time to leave the question of the role of women in society up to Mother Nature — a difficult lady to fool. You have only to give women the same opportunities as men, and you will soon find out what is or is not in their nature. What is in women’s nature to do they will do, and you won’t be able to stop them. But you will also find, and so will they, that what is not in their nature, even if they are given every opportunity, they will not do, and you won’t be able to make them do it.In Luce’s day, sex-role stereotypes still powerfully limited women’s choices. More than half a century later, women in the Western democracies enjoy the equality of opportunity of which she spoke. Nowhere is this more true than Sweden. And althoughit was not the Swedes’ intention, they have demonstrated to the world what the sexes will and will not do when offered the same opportunities.Today is Equal Pay Day. But as most feminists know by now, the wage gap is largely the result of women’s vocational choices and how they prefer to balance home and family. To close the gap, it won’t be e nough to change society or reform the workplace ––it is women’s elemental preferences that will have to change. But look to Sweden: women’s preferences remain the same.Not only feminists, but also liberal and conservative policymakers should pay attentio n. Sweden is not the “tax and spend” welfare state of old ––while the rest of the world is floundering in debt, Sweden (along with its Nordic neighbors) has been downsizing, reforming entitlements, and balancing its books. The budget deficit in Sweden is about 0.2 percent of its GDP; in the United States, it’s 7 percent. But Sweden’s generous family-friendly policies remain in place. The practical, problem-solving Swedes have judged them to be a good investment. They may be right.Swedish family policies, by accommodating women’s preferences so effectively, are reducing the number of women in elite competitive positions. The Swedes will find this paradoxical and try to find solutions. Let us hope these do not include banning gender pronouns, policing childr en’s play, implementing more gender quotas, or treating women’s special attachment to home and family as a social injustice. Most mothers do not aspire to elite, competitive full-time positions: the Swedish policies have given them the freedom and opportunity to live the lives they prefer. Americans should lookpast the gender rhetoric and consider what these Scandinavians have achieved. On their way to creating a feminist paradise, the Swedes have inadvertently created a haven for normal mortals.英语长篇阅读文章3科学家告诉你:这样学才记得牢The older we get, the harder it seems to remember names, dates, facts of all kinds. It takes longer to retrieve the information we want, and it often pops right up a few minutes or hours later when we are thinking about something else. The experts say that keeping your mind sharp with games like Sudoku and crossword puzzles slows the aging process, and that may be true, but we found three other things you can do to sharpen your memory.随着年龄的增长,我们似乎越来越记不住人名、日期、还有各种事情。
软件工程(外文翻译文献)

外文文献资料1、Software EngineeringSoftware is the sequences of instructions in one or more programming languages that comprise a computer application to automate some business function. Engineering is the use of tools and techniques in problem solving. Putting the two words together, software engineering is the systemtic application of tools and techniques in the development of computer-based applications.The software engineering process describes the steps it takes to deelop the system. We begin a development project with the notion that there is a problem to be solved via automation. The process is how you get from problem recognition to a working solution. A quality process is desirable because it is more likely to lead to a quality product. The process followed by a project team during the development life cycle of an application should be orderly, goal-oriented, enjoyable, and a learning experience.Object-oriented methodology is an approach to system lifecycle development that takes a top-down view of data objects, their allowable actions, and the underlying communication requirement to define a system architecture. The data and action components are encapsulated, that is , they are combined together, to form abstract data types Encapsulation means that if I know what data I want ,I also know the allowable processes against that data. Data are designed as lattice hierarchies of relationships to ensure that top-down, hierarchic inheritance and side ways relationships are accommodated. Encapsulated objects are constrained only to communicate via messages. At a minimum, messages indicate the receiver and action requested. Messages may be more elaborate, including the sender and data to be acted upon.That we try to apply engineering discipline to software development does not mean that we have all the answers about how to build applications. On the contrary, we still build systems that are not useful and thus are not used. Part of the reason for continuing problems in application development, is that we are constantly trying to hita moving target. Both the technology and the type of applications needed by businesses are constantly changing and becoming more complex. Our ability to develop and disseminate knowledge about how to successfully build systems for new technologies and new application types seriously lags behind technological and business changes.Another reason for continuing problems in application development is that we aren’t always free to do what we like and it is hard to change habits and cultures from the old way of doing things, as well as get users to agree with a new sequence of events or an unfamiliar format for documentation.You might ask then, if many organizations don’t use good software engineering practices, why should I bother learning them? There are two good answers to this question. First, if you never know the right thing to do, you have no chance of ever using it. Second, organizations will frequently accept evolutionary, small steps of change instead of revolutionary, massive change. You can learn individual techniques that can be applied without complete devotion to one way of developing systems. In this way, software engineering can speed changee in their organizations by demonstrating how the tools and techniques enhance th quality of both the product and the process of building a system.2、Data Base System1、IntroductionThe development of corporate databases will be one of the most important data-processing activities for the rest of the 1970s. Date will be increasingly regarded as a vital corporate resource, which must be organized so as to maximize their value. In addition to the databases within an organization, a vast new demand is growing for database services, which will collect, organize, and sell data.The files of data which computers can use are growing at a staggering rate. The growth rate in the size of computer storage is greater than the growth in the size or power of any other component in the exploding data processing industry. The more data the computers have access to, the greater is their potential power. In all walks of life and in all areas of industry, data banks will change the areas of what it is possiblefor man to do. In the end of this century, historians will look back to the coming of computer data banks and their associated facilities as a step which changed the nature of the evolution of society, perhaps eventually having a greater effect on the human condition than even the invention of the printing press.Some most impressive corporate growth stories of the generation are largely attributable to the explosive growth in the need of information.The vast majority of this information is not yet computerized. However, the cost of data storage hardware is dropping more rapidly than other costs in data processing. It will become cheaper to store data on computer files than to store them on paper. Not only printed information will be stored. The computer industry is improving its capability to store line drawing, data in facsimile form, photo-graphs, human speech, etc. In fact, any form of information other than the most intimate communications between humans can be transmitted and stored digitally.There are two main technology developments likely to become available in the near future. First, there are electromagnetic devices that will hold much more data than disks but have much longer access time. Second, there are solid-state technologies that will give microsecond access time but capacities are smaller than disks.Disks themselves may be increased in capacity somewhat. For the longer term future there are a number of new technologies which are currently working in research labs which may replace disks and may provide very large microsecond-access-time devices. A steady stream of new storage devices is thus likely to reach the marketplace over the next 5 years, rapidly lowering the cost of storing data.Given the available technologies, it is likely that on-line data bases will use two or three levels of storage. One solid-state with microsecond access time, one electromagnetic with access time of a fraction of a second. If two ,three ,or four levels of storage are used, physical storage organization will become more complex ,probably with paging mechanisms to move data between the levels; solid-state storage offers the possibility of parallel search operation and associativememory.Both the quantity of data stored and the complexity of their organization are going up by leaps and bounds. The first trillion bit on-line stores are now in use . in a few year’s time ,stores of this size may be common.A particularly important consideration in data base design is to store the data so that the can be used for a wide variety of applications and so that the way they can be changed quickly and easily. On computer installation prior to the data base era it has been remarkably difficult to change the way data are used. Different programmers view the data in different ways and constantly want to modify them as new needs arise modification , however ,can set off a chain reaction of changes to existing programs and hence can be exceedingly expensive to accomplish .Consequently , data processing has tended to become frozen into its old data structures .To achieve flexibility of data usage that is essential in most commercial situations . Two aspects of data base design are important. First, it should be possible to interrogate and search the data base without the lengthy operation of writing programs in conventional programming languages. Second ,the data should be independent of the programs which use them so that they can be added to or restructured without the programs being changed .The work of designing a data base is becoming increasing difficult , especially if it is to perform in an optimal fashion . There are many different ways in which data can be structured ,and they have different types of data need to be organized in different ways. Different data have different characteristics , which ought to effect the data organization ,and different users have fundamentally different requirements. So we need a kind of data base management system(DBMS)to manage data.Data base design using the entity-relationship model begins with a list of the entity types involved and the relationships among them. The philosophy of assuming that the designer knows what the entity types are at the outset is significantly different from the philosophy behind the normalization-based approach.The entity-relationship(E-R)approach uses entity-relationship diagrams. The E-Rapproach requires several steps to produre a structure that is acceptable by the particular DBMS. These steps are:(1) Data analysis(2) Producing and optimizing the entity model.(3) Logical schema development(4) Physical data base design process.Developing a data base structure from user requirements is called data bases design. Most practitioners agree that there are two separate phases to the data base design process. The design of a logical database structure that is processable by the data base management system(DBMS)d escribes the user’s view of data, and is the selection of a physical structure such as the indexed sequential or direct access method of the intended DBMS.Current data base design technology shows many residual effects of its outgrowth from single-record file design methods. File design is primarily application program dependent since the data has been defined and structured in terms of individual applications to use them. The advent of DBMS revised the emphasis in data and program design approaches.There are many interlocking questions in the design of data-base systems and many types of technique that one can use is answer to the question so many; in fact, that one often sees valuable approaches being overlooked in the design and vital questions not being asked.There will soon be new storage devices, new software techniques, and new types of data bases. The details will change, but most of the principles will remain. Therefore, the reader should concentrate on the principles.2、Data base systemThe conception used for describing files and data bases has varied substantially in the same organization.A data base may be defined as a collection of interrelated data stored together with as little redundancy as possible to serve on or more applications in an optimal fashion; the data are stored so that they are independent of programs which use thedata; a common and controlled approach is used in adding new data and in modifying and retrieving existing data within the data base. One system is said to contain a collection of data bases if they are entirely separate in structure.A data base may be designed for batch processing, real-time processing, or in-line processing. A data base system involve application program, DBMS, and data base.One of the most important characteristics of most data bases is that they will constantly need to change and grow. Easy restructuring of the data base must be possible as new data types and new applications are added. The restructuring should be possible without having to rewrite the application program and in general should cause as little upheaval as possible. The ease with which a data base can be changed will have a major effect on the rate at which data-processing application can be developed in a corporation.The term data independence is often quoted as being one of the main attributes of a data base. It implies that the data and the application programs which use them are independent so that either may be changed without changing the other. When a single set of data items serves a variety of applications, different application programs perceive different relationships between the data items. To a large extent, data-base organization is concerned with the representation of relationship between data items and records as well as how and where the data are stored. A data base used for many applications can have multiple interconnections between the data item about which we may wish to record. It can describes the real world. The data item represents an attribute, and the attribute must be associated with the relevant entity. We design values to the attributes, one attribute has a special significance in that it identifies the entity.An attribute or set of attribute which the computer uses to identify a record or tuple is referred to as a key. The primary key is defined as that key used to uniquely identify one record or tuple. The primary key is of great importance because it is used by the computer in locating the record or tuple by means of an index or addressing algorithm.If the function of a data base were merely to store data, its organization would be simple. Most of the complexities arise from the fact that is must also show the relationships between the various items of data that are stored. It is different to describe the data in logical or physical.The logical data base description is referred to as a schema .A schema is a chart of the types of data that one used. It gives the names of the entities and attributes, and specifics the relations between them. It is a framework into which the values of the data-items can be fitted.We must distinguish between a record type and a instance of the record. When we talk about a “personnel record”,this is really a record type.There are no data values associated with it.The term schema is used to mean an overall chart of all of the dataitem types and record types stored in a data he uses. Many different subschema can be derived from one schema.The schema and the subschema are both used by the data-base management system, the primary function of which is to serve the application programs by executing their data operations.A DBMS will usually be handing multiple data calls concurrently. It must organize its system buffers so that different data operations can be in process together. It provides a data definition language to specify the conceptual schema and most likely, some of the details regarding the implementation of the conceptual schema by the physical schema. The data definition language is a high-level language, enabling one to describe the conceptual schema in terms of a “data model” .The choice of a data model is a difficult one, since it must be rich enough in structure to describe significant aspects of the real world, yet it must be possible to determine fairly automatically an efficient implementation of the conceptual schema by a physical schema. It should be emphasized that while a DBMS might be used to build small data bases, many data bases involve millions of bytes, and an inefficient implementation can be disastrous.We will discuss the data model in the following.3、Three Data ModelsLogical schemas are defined as data models with the underlying structure of particular database management systems superimposed on them. At the present time, there are three main underlying structures for database management systems. These are :RelationalHierarchicalNetworkThe hierarchical and network structures have been used for DBMS since the 1960s. The relational structure was introduced in the early 1970s.In the relational model, the entities and their relationships are represented by two-dimensional tables. Every table represents an entity and is made up of rows and columns. Relationships between entities are represented by common columns containing identical values from a domain or range of possible values.The last user is presented with a simple data model. His and her request are formulated in terms of the information content and do not reflect any complexities due to system-oriented aspects. A relational data model is what the user sees, but it is not necessarily what will be implemented physically.The relational data model removes the details of storage structure and access strategy from the user interface. The model provides a relatively higher degree of data. To be able to make use of this property of the relational data model however, the design of the relations must be complete and accurate.Although some DBMS based on the relational data model are commercially available today, it is difficult to provide a complete set of operational capabilities with required efficiency on a large scale. It appears today that technological improvements in providing faster and more reliable hardware may answer the question positively.The hierarchical data model is based on a tree-like structure made up of nodes and branches. A node is a collection of data attributes describing the entity at that point.The highest node of the hierarchical tree structure is called a root. The nodes at succeeding lower levels are called children .A hierarchical data model always starts with a root node. Every node consists of one or more attributes describing the entity at that node. Dependent nodes can follow the succeeding levels. The node in the preceding level becomes the parent node of the new dependent nodes. A parent node can have one child node as a dependent or many children nodes. The major advantage of the hierarchical data model is the existence of proven database management systems that use the hierarchical data model as the basic structure. There is a reduction of data dependency but any child node is accessible only through its parent node, the many-to –many relationship can be implemented only in a clumsy way. This often results in a redundancy in stored data.The network data model interconnects the entities of an enterprise into a network. In the network data model a data base consists of a number of areas. An area contains records. In turn, a record may consist of fields. A set which is a grouping of records, may reside in an area or span a number of areas. A set type is based on the owner record type and the member record type. The many-to many relation-ship, which occurs quite frequently in real life can be implemented easily. The network data model is very complex, the application programmer must be familiar with the logical structure of the data base.4、Logical Design and Physical DesignLogical design of databases is mainly concerned with superimposing the constructs of the data base management system on the logical data model. There are three mainly models: hierarchical, relational, network we have mentioned above.The physical model is a framework of the database to be stored on physical devices. The model must be constructed with every regard given to the performance of the resulting database. One should carry out an analysis of the physical model with average frequencies of occurrences of the grou pings of the data elements, with expected space estimates, and with respect to time estimates for retrieving and maintaining the data.The database designer may find it necessary to have multiple entry points into a database, or to access a particular segment type with more than one key. To provide this type of access; it may be necessary to invert the segment on the keys. Thephysical designer must have expertise in knowledge of the DBMS functions and understanding of the characteristics of direct access devices and knowledge of the applications.Many data bases have links between one record and another, called pointers. A pointer is a field in one record which indicates where a second record is located on the storage devices.Records that exist on storage devices is a given physical sequence. This sequencing may be employed for some purpose. The most common pupose is that records are needed in a given sequence by certain data-processing operations and so they are stored in that sequences.Different applications may need records in different sequences.The most common method of ordering records is to have them in sequence by a key —that key which is most commonly used for addressing them. An index is required to find any record without a lengthy search of the file.If the data records are laid out sequentially by key, the index for that key can be much smaller than they are nonsequential.Hashing has been used for addressing random-access storages since they first came into existence in the mid-1950s. But nobody had the temerity to use the word hashing until 1968.Many systems analysis has avoided the use of hashing in the suspicion that it is complicated. In fact, it is simple to use and has two important advantages over indexing. First, it finds most records with only one seek and second, insertion and deletions can be handled without added complexity. Indexing, however, can be used with a file which is sequential by prime key and this is an overriding advantage, for some batch-pro-cessing applications.Many data-base systems use chains to interconnect records also. A chain refers to a group of records scatters within the files and interconnected by a sequence of pointers. The software that is used to retrive the chained records will make them appear to the application programmer as a contiguous logical file.The primary disadvantage of chained records is that many read operations areneeded in order to follow lengthy chains. Sometimes this does not matter because the records have to be read anyway. In most search operations, however, the chains have to be followed through records which would not otherwise to read. In some file organizations the chains can be contained within blocked physical records so that excessive reads do not occur.Rings have been used in many file organizations. They are used to eliminate redundancy. When a ring or a chain is entered at a point some distance from its head, it may be desirable to obtain the information at the head quickly without stepping through all the intervening links.5、Data Description LanguagesIt is necessary for both the programmers and the data administrator to be able to describe their data precisely; they do so by means of data description languages. A data description language is the means of declaring to data-base management system what data structures will be used.A data description languages giving a logical data description should perform the folloeing functions:It should give a unique name to each data-item type, file type, data base and other data subdivision.It should identify the types of data subdivision such as data item segment , record and base file.It may define the type of encoding the program uses in the data items (binary , character ,bit string , etc.)It may define the length of the data items and the range of the values that a data item can assume .It may specify the sequence of records in a file or the sequence of groups of record in the data base .It may specify means of checking for errors in the data .It may specify privacy locks for preventing unauthorized reading or modification of the data .These may operate at the data-item ,segment ,record, file or data-base level and if necessary may be extended to the contents(value) of individual data items .The authorization may , on the other hand, be separate defined .It is more subject to change than the data structures, and changes in authorization proceduresshould not force changes in application programs.A logical data description should not specify addressing ,indexing ,or searching techniques or specify the placement of data on the storage units ,because these topics are in the domain of physical ,not logical organization .It may give an indication of how the data will be used or of searching requirement .So that the physical technique can be selected optimally but such indications should not be logically limiting.Most DBMS have their own languages for defining the schemas that are used . In most cases these data description languages are different to other programmer language, because other programmer do not have the capability to define to variety of relationship that may exit in the schemas.附录 B 外文译文1、软件工程软件是指令的序列,该指令序列由一种或者多种程序语言编写,它能使计算机应用于某些事物的运用自动化。
福禄克 冲击力量测试仪操作说明书

The Fluke 62 Mini Infrared ThermometerFor quick, basic temperature checksAdvances in technology have made the smallest infrared ther-mometers, such as the Fluke 62 Mini, especially practical. They’re convenient to carry and afford-able enough for everyone on an entire crew to own one, so that infrared temperature measure-ment isn’t limited to specialists. And the latest models are more accurate and measure greater temperature ranges than earlier “mini” generations.Point, shoot and readTo use the Fluke 62 Mini Infrared Thermometer, use the laser sight-ing to pinpoint the target, and pull the trigger to see the tem-perature on the built-in display.Increases in temperature are often the first sign of trouble for mechanical equipment, electri-cal circuits and building systems such as heating, ventilation and air conditioning (HVAC). A quick temperature check of key compo-nents and equipment can detect potential problems and prevent catastrophic failures. Regular contact measurement with a thermometer and probe takes time and can require gettingclose to dangerous or inaccessible operational equipment or shutting equipment down. Non-contact infrared (IR) thermometers take quick, safe measurements from a distance while equipment is operational.Application NoteFor more information on Fluke Predictive PART TWOof a predictive maintenance seriesCheck motor temperatures quickly, without contact.The thermometer works by measuring the infrared energy emitted from surfaces and con-verting the information into a temperature reading. It measures temperatures from -30 °C to +500 °C (-20 °F to +932 °F), is accurate to ± 1% of reading and can capture the maximum reading among a range of readings.While there are endless ways to use an infrared thermometer, here are the three primary ones:1) M easure the absolutetemperature at a spot. This is useful for trending thetemperature of an object such as a bearing housing over time. With a repeatability of ± 0.5 %, the new thermom-eters make this practice quite accurate.2) C ompare the temperature differential of two spots. For example to compare the run-ning temperatures of two like objects to determine if one is overheating.3) S can an object and detect changes within a continuous area on it, to find hot or cold spots on housings, panels and structures.Securing accurate measurementsThe uses for handheld infrared thermometers are limited only by the nature of infrared technology. The key restriction is the sur-face of the target object. Simply stated, these instruments cannot accurately measure shiny sur-faces. The issue is emitted versusreflected energy.CHAPTER 2.3EmissivityOf the kinds of energy—reflected, transmitted and emitted—emanat-ing from an object, only emitted infrared energy indicates the object’s surface temperature. Transmitted and reflected energy do not. When IR thermometers measure surface temperatures, they sense all three kinds of en-ergy. Therefore, they have to be adjusted to read emitted energy only. The Fluke 62 Mini Infrared Thermometer has a fixed, pre-set emissivity of 0.95, which is the emissivity value for most organic materials as well as painted or oxidized surfaces.To accurately measure the surface temperature of a shinyobject, cover the target surfacewith masking tape or flat blackpaint and allow enough timefor the tape or paint to reachthe temperature of the material underneath.Distance-to-spot ratioThe optical system of an infrared thermometer collects the infrared energy from a circular area or spot and focuses it on the detec-tor. The farther a target is fromcreated on the target will be. Optical resolution is defined by the ratio of the distance from theglass and, as noted, will be inac-curate if used to measure shiny or polished metal surfaces (stainless steel, aluminum, etc.).Users of IR thermometers also must be alert to environmen-tal conditions. Steam, dust and smoke, for example, can prevent accurate temperature readings by obstructing a unit’s optics. A dirty lens can also affect read-ings. Lenses should be cleaned with dry, clean plant air or a fluid made specifically for cleaning lenses. Also, changes in ambi-ent temperature can influence a thermometer’s performance. If an IR unit is exposed to abrupt temperature changes of 11 °C (20 °F) or more, the user should allow at least 20 minutes for the unit to adjust to the new ambienttemperature.instrument to the object com-pared to the size of the spot (“dis-tance-to-spot” or D:S ratio). For the Fluke 62 Mini the distance-to-spot ratio is 10:1. This means that at a distance of 10 inches the spot is about one inch in diam-eter. The larger its ratio number the better is the instrument’s resolution. Resolution is important becauseit relates directly to getting good readings by ensuring that the target is larger than the spot size. The smaller the target, the closer one must be to it. When accuracy is critical, the target should be at least twice as large as the spot. Other factors to consider These instruments measure only surface temperatures, not internal temperatures. Furthermore, they cannot take readings throughEven considering the limitations of infrared temperature moni-toring, there are still so many possible uses for this technology that trying to list them all would be fruitless. Here are some of the most common and particularly successful applications. Predictive maintenance Regular maintenance in industrial and institutional locations keeps motors, pumps and gearboxes from experiencing catastrophic failures that can halt production or pose safety problems. In an infrared maintenance program, technicians set up an inspection route and measurement param-eters for each piece of key equip-ment and/or component. Then, they take an infrared temperature measurement on a regular basis, record the measurement, and compare against previous read-ings for any changes.As an example, a technician can use a Fluke 62 Mini to check the operation of an induction motor on a critical piece of equip-ment. She or he would start by reading the unit’s specifications on the plate attached to it. The plate will reveal either a Temper-ature Rise Rating or a Motor Class Rating for the motor. The rise rat-ing gives the maximum allowable operating temperature above am-bient. The motor class rating, e.g. “Class A,” will reveal an absolute maximum operating temperature. Both pertain to internal-winding temperatures. Of course, a contact thermometer cannot measure these temperatures while the motor is running. However, an operator or technician can use a non-contact IR thermometer to measure the temperature of the motor case. She or he should add 10 °C (18 °F) to surface scans to determine the internal operating temperature. For each 10 °C (18 °F) above the maximum operating temperature, the life of the motor is likely to decrease by 50%.If the motor is extremely hot it could be a fire hazard.Using infrared thermometryfor plant maintenance reducesrepair costs and avoids equip-ment stoppages. Industrialmaintenance personnel, buildingmanagers, HVAC technicians andeven homeowners can reducecosts by repairing only whatneeds to be fixed. They can avoidunplanned equipment stoppagesby making specific, necessaryrepairs before equipment fails.Then, after repairs, they can per-form new temperature measure-ments on the same equipment todetermine whether the repairswere successful.Electrical inspectionsElectrical systems supply essen-tial power to every industrial,commercial and residential set-ting. With degradation over timeand the general vulnerability ofelectrical connections, it’s impor-tant to monitor electrical sys-tems for loose, dirty or corrodedconnections, flaws in transformerwindings, hot spots in panelboxes and other telltale signs oftrouble.The Fluke 62 Mini can beinvaluable for finding developinghotspots in electrical equipmentthat may indicate a short circuit,a fused switch or an overload. Ingeneral, higher operating temper-atures reduce the life of electricalcomponents by damaging insula-tion and raising the resistance ofconductor materials. Pinpointedby a non-contact IR thermometer,these situations signal that actionis required.Measure moving targets easily.Use unit in close range fornear-distance targets.HVAC inspectionsHeating and cooling systems, whether for maintaining pro-duction parameters or human comfort, are easily monitored with the Fluke 62 Mini Infrared Thermometer. Check air stratifica-tion, supply and return registers, furnace performance and steam distribution systems and conduct energy audits to pinpoint system upgrade opportunities.For example, IR non-contact thermometers can be used to troubleshoot steam traps, which are designed to remove water (condensate) that has condensed from the steam as it travels in transfer pipes. If a steam trap fails while open, it will leak steam, causing an energy loss. If it fails while closed, it won’t remove condensate from the steam line, making it useless. A faulty steam trap can cost a plant $500 USD or more per year, and in any given year, 10 % of all industrial steam traps fail. Since many plants have as many as 1,000 traps, they can quickly become high-value main-tenance targets.To verify whether a steam trap is working properly, use a non-contact thermometer like the 62 Mini to measure from input to output. On a properly operating trap, the temperature should drop significantly. If the temperature doesn’t drop, the steam trap has failed open and is passing super-heated steam into the condensate line. If the temperature drop is overly large, the trap may be stuck closed and is not ejecting heated condensate. Condensate in steam lines reduces the effec-tive energy of the steam and can cause difficulties in steam drivenprocesses.Use non-contact temperature measurements for inaccessible targets.Fluke.Keeping your worldup and running.Fluke CorporationPO Box 9090Everett, WA USA 98206Fluke Europe B.V.PO Box 1186, 5602 BDEindhoven, The NetherlandsFor more information call:U.S.A. (800) 443-5853 orFax (425) 446-5116Europe/M-East/Africa (31 40) 2 675 200 orFax (31 40) 2 675 222Canada (800) 36-FLUKE orFax (905) 890-6866Other countries (425) 446-5500 orFax (425) 446-5116Web access: ©2005 Fluke Corporation. All rights reserved.Printed in U.S.A. 6/2005 2517382 A-EN-N Rev A。
信息系统建设英文课件 (3)

Problem Solving and Systems Development
Developing an information system solution is based on the problemsolving process.
Figure 11-1
Problem Solving and Systems Development
• What caused the problem? • Why does it persist? • Why hasn’t it been solved? • What are the objectives of a solution? • Information requirements
Problem Solving and Systems Development
satisfaction with systems • Also poses risks because systems are created so quickly,
without formal development methodology, testing, documentation
Alternative Systems-Building Approaches
A New Ordering System for Girl Scout Cookies
• Problem: inefficient manual procedures, high error rate.
• Solutions: eliminate manual procedures, design new ordering process, and implement database building software to batch and track orders automatically and schedule order pickups.
Pressurization Control in Large Commercial Buildings

Pressurization Control in Large Commercial Buildings The role of outside-air-static-pressure-measurement termination and control sequences in system optimizationIn many large commercial buildings, central-station air handlers are used to maintain occupant comfort. Often, these units include return fans, which draw air from occupied spaces for recirculation or exhaust. Commonly, the return fans are controlled with variable-frequency drives (VFDs), which receive a speed signal based on building differential static pressure — the difference between interior-space static pressure and outside-air static pressure.Building differential static pressure commonly is measured with a sensor with two ports — a high-pressure port and a low-pressure port. Tubing extends from both ports. Commonly, low-port tubing is extended to the outside of a building, while high-port tubing is extended to an interior space. The sensor determines the difference in static pressure between the two ports, which it can report to a building-automation system. Typically, this is done as part of a strategy to keep interior-space pressure slightly positive relative to outdoor pressure, reducing infiltration that could lead to comfort and/or indoor-air-quality (IAQ) issues.The recent retrocommissioning of two office buildings — one 25 stories and 500,000 sq ft, the other 24 stories and 200,000 sq ft — served by central-station variable-air-volume air handlers in California revealed substantial opportunities for optimizing pressurization-control systems in large commercial buildings. This article will discuss two such opportunities: outside-air-static-pressure-measurement termination and control sequences.Outside-air-pressure-tubing terminationIn both retrocommissioned buildings, the low port of each pressure sensor was open to the control panel in which the sensor was installed, not extended to the outside via tubing.In one building, the control panel was in the penthouse mechanical room, which was open to the outside via boiler outside-air-intake louvers. While that location was good for minimizing wind effects and protecting the sensor from rain, readings likely were not representative of true outside-air static pressure because the panel was in aroom containing equipment (boiler burner fan, exhaust fans, etc.) that could have a significant effect on pressure in the space.In the other building, the control panel containing the sensor was located in the return-air plenum, on the discharge side of the return fan. At this location, with the low-side port open to the panel and, thus, the plenum, the pressure sensed by the low-side port was higher than the pressure sensed by the high-side port. There was intent to extend the tubing to the outside, as evidenced by a hole in the adjacent exterior wall, but the tubing never was installed (Photo A).It is worth mentioning that neither of the buildings was commissioned originally.The termination of low-port tubing is important, as wind can have a significant impact on measured outside-air pressure. For example, a 10-mph outside breeze translates to wind pressure of 0.05 in., per Bernoulli's Equation. This light breeze can have a significant impact on pressure control, as pressurization systems typically operate to maintain a slightly positive building pressure in the same range (0.05 in.).Many outside-air-pressure-sensing devices are available for installation at low-port-tubing terminations. Designed to minimize the effects of wind speed, these devices typically consist of two plates, with the sensing element in the middle of the inside surface of one of the plates. Most manufacturers, as well as the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE),1 recommendinstallation of the devices 10 to 15 ft above a building to further minimize wind effects.Building-Pressurization-Control SchemeThe building-pressurization-control scheme for the two retrocommissioned buildings is similar to the one employed in many buildings with central-station air-handling systems: Return-fan speed is varied to maintain a set differential between interior-space and outside-air pressure. The two buildings also employ an economizer-damper-control scheme that is fairly typical: The economizer-damper controller sends a signal to control outside-, return-, and relief-air dampers, with the return-air dampers operating opposite of the outside- and relief-air dampers (Figure 1).These control schemes can cause performance issues because the control sequences overlap somewhat: The relief-air damper operates based on the economizer-damper signal (temperature- or enthalpy-based control), but is an integral part of the building-pressurization-control scheme (pressure-based control). For example, on a past retrocommissioning project, the pressure in the main air handler's mixed-air plenum was positive during minimum-outside-air operation. (Normally, a mixed-air plenum should be at a negative pressure to draw in outside air.) Instead of outside air being drawn in, return air was being exhausted through the outside-air damper; in other words, the system was operating in 100-percent recirculation mode. Thisissue was attributable to oversized return-air dampers — typically, return-air dampers should be smaller than outside- and relief-air dampers in return-fan systems2— and the overlap of temperature- and pressure-control strategies.A better method may be to control the relief damper to maintain building differential pressure and control the return fan to maintain return-air-plenum pressure (Figure 2).3With this scheme, the relief-air damper is controlled directly by pressure instead of the economizer cycle, and the return fan operates to ensure there is enough pressure in the return-air plenum to push air through either the relief-air damper or the return-air damper. The outside- and return-air dampers would continue to be controlled based on the economizer cycle.1 Note that proper sizing of the outside, return, and relief dampers is required to help ensure sufficient outside-air intake during non-economizer operation.Converting a control system from the arrangement in Figure 1 to the arrangement in Figure 2 would require minimal capital outlay, as no major system modifications beyond the addition of a pressure sensor and related points and the modification of control sequences would be required. Depending on the as-found performance of a system, such a conversion could result in reduced energy use through return-fan-speed reduction and increased indoor environmental quality through adequate outside-air intake.Another possible improvement involves the use of a relief fan instead of a return fan.A relief fan is located downstream of the return-air plenum, drawing air from the plenum and pushing it through the relief-air damper (Figure 3). The relief fan should be controlled to maintain building pressurization, not used as part of an economizer-control scheme. Typically, it will not need to operate during minimum-outside-air mode.3 For a relief fan with a VFD, the relief damper is closed when the fan is off and open when the fan is on. In buildings utilizing return fans, changing to a relief-fan system may not be feasible because of space constraints and cost. In buildings with a relief-fan arrangement already in place, an investigation of system performance is worthwhile; as with a return-fan system, check the low-port-tubing termination and control sequences.Another method of building-pressurization control is indirect, by which supply- and return-air volumes or intake- and exhaust-air volumes are measured instead of interior/outside pressures. For an existing building with a direct-measurement control system already in place, this may not be feasible because of the space required for proper installation of airflow-measuring stations.Common CluesFollowing are indicators of a possible issue with a direct-measurement building-pressurization-control system's operation:∙Return or relief fans operate near 100-percent speed most of the time.∙Building differential static pressure fluctuates widely and/or rapidly.∙Return- or relief-fan speed fluctuates widely and/or rapidly.∙Exterior doors are hard to open or do not close securely.If any of these is noticed:∙Inspect the low- and high-port-tubing terminations. Ensure the high-port tubing terminates at a location representative of the space pressure and away from doors and that the low-port tubing terminates in an accessible location and a manner minimizing wind effects. Consider installing an outside-air-pressure-sensing device at the low-port termination to mitigate wind effects.∙Check and, if necessary, adjust the calibration of the differential-pressure sensor.∙Inspect the integrity of the tubing between the sensor and interior and outdoor spaces.∙Review the control sequences. If building differential pressure is controlling the return fan(s), consider using the signal to control the relief-air damper instead and installing a pressure sensor in the return-air plenum to control return-fan speed.The economizer-damper-control sequence would need to be modified as well.∙Ensure the outside-, return-, and relief-air dampers are sized properly.2∙Trace the return-air pathway from the interior space to the fan. Ensure return-air grilles/openings are sufficient and that the pathway is not obstructed by interior walls extending to the floor deck above (for return-air plenums), closedfire/smoke dampers, or damaged internal duct insulation.For more information on building-pressurization-control systems, including the diagnosis of problems and testing of performance, see “Functional Testing Guide: From the Fundamentals to the Field.”4ConclusionMany large commercial buildings utilize direct-measurement building-pressurization-control systems. Optimizing the performance of those systems as part of a retrocommissioning project can yield energy savings and improved IAQ through reduced return/relief-fan speeds, reduced infiltration, and proper outside-air intake.ASHRAE President Gordon V.R. Holness, PE, FASHRAE, has urged ASHRAEmembers to focus on increasing energy efficiency in existing buildings.5 Optimizing the performance of building-pressurization systems can be a low-cost method of doing that.References1.ASHRAE. (2007). 2007 ASHRAE handbook - HVAC applications (ch. 46).Atlanta: American Society of Heating, Refrigerating and Air-Conditioning Engineers.2.Lizardos, E., & Elovitz, K.M. (2000, April). Practical guide: Damper sizingusing damper authority. ASHRAE Journal, pp. 37-43.3.Taylor, S.T. (2000, September). Comparing economizer reliefsystems. ASHRAE Journal, pp. 33-40, 42.4.PECI. (n.d.). Functional testing guide: From the fundamentals to the field.Portland, OR: Portland Energy Conservation Inc. Availableat /ftguide/ftg/index.htm5.Holness, G.V.R. (2009, August). Sustaining our future by rebuilding our past:Energy efficiency in existing buildings — Our greatest opportunity for a sustainable future.ASHRAE Journal, pp. 16-21.。
D-Wave Ocean程序入门指南说明书

CONTACTCorporate Headquarters 3033Beta AveBurnaby,BC V5G 4M9CanadaTel.604-630-1428US O ice2650E Bayshore Rd Palo Alto,CA 94303Email:*****************OverviewThis guide provides a gentle introduction to Ocean programs for begin-ners new to D-Wave’s Ocean SDK.Here you will find the building blocks and basic components required in an Ocean program in order to suc-cessfully run problems using D-Wave’s hardware and so waretools.Ocean Programs for BeginnersWHITEPAPER2021-09-28Notice and DisclaimerD-Wave Systems Inc.(“D-Wave”)reserves its intellectual property rights in and to this document,any doc-uments referenced herein,and its proprietary technology,including copyright,trademark rights,industrial design rights,and patent rights.D-Wave trademarks used herein include D-WA VE®,Leap™quantum cloud service,Ocean™,Advantage™quantum system,D-Wave2000Q™,D-Wave2X™,and the D-Wave logo(the“D-Wave Marks”).Other marks used in this document are the property of their respective owners.D-Wave does not grant any license,assignment,or other grant of interest in or to the copyright of this document or any referenced documents,the D-Wave Marks,any other marks used in this document,or any other intellectualproperty rights used or referred to herein,except as D-Wave may expressly provide in a written agreement.Contents1Introduction1 2Core Components12.1Setting Up The Problem (1)2.2Building a Quadratic Model (2)2.3Interacting with a Sampler or Solver (3)2.4Calling a Sampler (4)2.5Examining Results from a Sampler (4)3A More Complex Example6 4Using Matrices for Binary Quadratic Models8 5Further Example Programs9 References101IntroductionD-Wave’s Ocean software development kit(SDK)allows users to interact with the D-Wave quan-tum processing units(QPUs)and hybrid solvers through Python programs.The purpose of this guide is to step new Ocean users through the basic components of an Oceanprogram.Before working through this guide,please review our introduction to binary quadraticmodels(BQMs)[1].A user interacts with D-Wave solvers by formulating a quadratic model(QM)for their problem,writing a Python program that uses the Ocean SDK,running that Python program,and reviewingthe results returned.The Python program introduces a QM and submits it to the selected solver tofind the minimum energy value for that model.For example,if the solver is the QPU,the OceanSDK provides the proper inputs to the physical QPU so that the energy landscape matches theBQM provided.2Core ComponentsAn Ocean program has several core components.First,we must build a quadratic model(QM)that will be provided to the solver.This can be done in a number of ways,such as either quadraticunconstrained binary optimization(QUBO)or Ising form.Second,we need to select a samplerto run our problem and provide results.In this section we will step you through each of thesecomponents and demonstrate using the maximum cut problem from D-Wave’s collection of codeexamples[2].2.1Setting Up The ProblemFor this guide,we will work with a simple example called the maximum cut problem.In the maxi-mum cut problem our objective is to maximize the number of cut edges in a graph.In other words,we want to divide the set of nodes in the graph into two subsets so that we have as many edges aspossible that go between the two sets.In the image below,the two sets are denoted using blue andwhite nodes and the cut edges are represented with dashed lines.For this example,the maximumcut size is5.We can represent this problem as a binary quadratic model(BQM)with the objective:min(i,j)∈E−x i−x j+2x i x jFor the full details on how this formulation is derived,see the full description in the Collectionof Code Examples[2].This particular problem consists only of one single objective and no con-straints.2.2Building a Quadratic ModelWe need to define a quadratic model(QM)that represents our problem.To learn about how toformulate a QM for your problem as a QUBO or Ising model,check out our“Learning to For-mulate Problems”guide[1].That guide will take us through the steps of how to formulate yourproblem as a QM and how to represent it in mathematical and matrix form.Once we have our QM,we need to build it in our Python program.Binary Quadratic Models:The simplest way to build a binary quadratic model(BQM)is usingOcean’s symbolic variables.For each mathematical variable in your BQM,we define a symbolicbinary variable using Ocean.x={n:Binary(n)for n in G.nodes}This defines a Python dictionary in which we have a symbolic binary variable defined for each nodein our graph.In this dictionary,each key is a node in the graph while the value is the correspondingsymbolic binary variable.Now that we have binary variables that we can use in our Python program,we can build our BQMusing these binary variables.Recall that our BQM is represented by the following mathematicalexpression.min(i,j)∈E−x i−x j+2x i x jUsing our symbolic binary variables,we define a QM that matches this mathematical expression.bqm=sum(-x[i]-x[j]+2*x[i]*x[j]for i,j in G.edges)Once the QM is defined,it is stored as a BinaryQuadraticModel object.This object stores the linear and quadratic coefficients of the mathematical expression,any constant term or offset,and the type of variables used to build the model.In this case,printing out the object bqm that we have constructed reveals the following.BinaryQuadraticModel({1:-2.0,2:-2.0,3:-3.0,4:-3.0,5:-2.0},{(2,1):2.0,(3,1):2.0,(4,2):2.0,(4,3):2.0,(5,3):2.0,(5,4):2.0}, 0.0,’BINARY’)For the5-node example graph,simplifying the mathematical expression for our objective produces linear and quadratic terms that match the constructed BinaryQuadraticModel object.Constrained Quadratic Models:A constrained quadratic model(CQM)allows us to explicitly setan objective and constraints to model our problem.We begin in the same way as the BQM exampleby defining our variables.For the maximum cut problem,these are all binary.x={n:Binary(n)for n in G.nodes}Next,we initialize our CQM object and set the objective for our CQM to represent the minimiza-tion problem that we are looking to solve.#Initialize a CQM objectcqm=ConstrainedQuadraticModel()#Set an objective function for the CQMcqm.set_objective(sum(-x[i]-x[j]+2*x[i]*x[j]for i,j in G.edges))2.3Interacting with a Sampler or SolverTofind the minimum energy state for a QM(the assignment of variable values that gives us theminimum energy value for our QM),the Ocean SDK provides samplers and solvers.A solver isa resource that runs a problem.Samplers are processes that run a problem many times to obtaina collection of samples,each of which is a possible solution to our problem.For convenience,wewill generally refer to Ocean’s samplers as a whole,to include solvers as well.The Ocean SDK provides a variety of different samplers that we can use to examine our prob-lem.These range from the D-Wave QPU(DWaveSampler)to classical algorithms like tabu search(TabuSampler)and even hybrid tools(LeapHybridSampler).More information on samplers canbe found in the full Ocean documentation[3].Defining a Sampler.To define a sampler,we need to understand which package it belongs toin the Ocean SDK.In Figure1is a list of some commonly used samplers for beginners.Eachsampler obtains samples in a different way and can be useful at different stages of your applicationdevelopment.To define a sampler for our program,wefirst import the package that contains the sampler usingthe same syntax as we would for any other Python package import.Then we can instantiate oursampler object in our program so that it is ready to be called.Example.To use the D-Wave QPU,we might use the following lines of code.#Import the packages requiredfrom dwave.system.samplers import DWaveSampler,EmbeddingComposite#Define the samplersampler=EmbeddingComposite(DWaveSampler())In these lines we see a few different things taking place.Sampler Description Ocean Package Useful for: DWaveSampler D-Wave QPU dwave-system Running problems directly onthe QPUExactSolver Considers all possible an-swers tofind the optimal so-lution dimod Running very small problemsclassically(<10variables)SimulatedAnnealingSampler Classical algorithm for sim-ulated annealing dwave-neal Running medium-sized prob-lems classicallyLeapHybridSampler Quantum-Classical hybridsampler dwave-system Running large problems acrossa portfolio of quantum and clas-sical hardwareFigure1:List of commonly used Ocean samplersFirst,DWaveSampler tells the system that we want to use the D-Wave quantum computer.Wrappedaround our call to DWaveSampler we see EmbeddingComposite.This tool lets the Ocean SDKfind the best way to map,or embed,our logical problem(our QM)onto the physical QPU.It de-cides which qubits to map our variables onto and will unembed solutions so that they are returnedto us in terms of our variables.2.4Calling a SamplerOnce we have established our sampler in our program,we can call it for our QM.Each type of QMmodel has its own method for interacting with the sampler,whether it be QUBO,BinaryQuadr-ticModel,or any other QM.We call the sampler to sample our QM using one of Ocean’s samplefunctions,depending on what type of QM we are using.For example,the code snippet belowdemonstrates how we can sample a BinaryQuadraticModel object named bqm using the QPU.#Define the samplersampler=EmbeddingComposite(DWaveSampler())#Sample the BQM and store the results in the SampleSet objectsampleset=sampler.sample(bqm,num_reads=100)Note that each sampler has its own set of parameters,or additional settings,in the sample methodsavailable.In the previous code snippet,the parameter num_reads is used to run the BQM100times on the QPU.A list of the properties and parameters specific to the QPU(DWaveSampler)is available here[4].Beginners should pay particular attention to the chainstrength and num_-reads(number of reads)parameters,as discussed in the documentation.2.5Examining Results from a SamplerAfter we have sampled our QM,the sampler returns a SampleSet object.This object contains allof the samples returned along with their corresponding energy value,number of occurrences,andmore.The additional information varies depending on which sampler is used.As users get more comfortable with the Ocean SDK and the variety of samplers available,it is often useful to take some time to explore the wealth of information provided in the SampleSet object.Some of the key properties and methods of a SampleSet that we access are the following.SampleSet.record:The full set of samples.Each line shows a sample(solution)that was returned,along with the corresponding energy value, number of occurrences,and additional information like chain break fraction(QPU samplers), feasibility(CQM solver),or other sampler-specific information.Example from a QPU sampler:[([0,1,1,0,0],-5.,26,0.)([0,1,1,0,1],-5.,33,0.)([1,0,0,1,0],-5.,24,0.)([1,0,0,1,1],-5.,17,0.)]SampleSet.first:The sample with the lowest energy.Example from a QPU sampler:Sample(sample={1:0,2:1,3:1,4:0,5:0},energy=-5.0,num_occurrences=26, chain_break_fraction=0.0)SampleSet.data:The complete information about the solutions and sampler.Example from a QPU sampler:<bound method SampleSet.data ofSampleSet(rec.array([([0,1,1,0,0],-5.,26,0.),([0,1,1,0,1],-5.,33,0.),([1,0,0,1,0],-5.,24,0.),([1,0,0,1,1],-5.,17,0.)],dtype=[(’sample’,’i1’,(5,)),(’energy’,’<f8’),(’num_occurrences’,’<i8’),(’chain_break_fraction’,’<f8’)]),[1,2,3,4,5],{’timing’:{’qpu_sampling_time’:2389,’qpu_anneal_time_per_sample’:20,’qpu_readout_time_per_sample’:198,’qpu_access_time’:13078,’qpu_access_overhead_time’:4062,’qpu_programming_time’:10689,’qpu_delay_time_per_sample’:21,’total_post_processing_time’:426,’post_processing_overhead_time’:426,’total_real_time’:13078,’run_time_chip’:2389,’anneal_time_per_run’:20,’readout_time_per_run’:198},’problem_id’:’9925c084-f2e6-4124-a361-11607a92439c’},’BINARY’)>3A More Complex ExampleSimilar to the maximum cut problem is another problem from graph theory called the graph par-titioning problem.In this problem,our objective is to minimize the number of cut edges and ourconstraint is that both subsets of nodes must have equal size.Mathematically,this can be expressedas the following.Objective:min(i,j)∈Ex i+x j−2x i x jConstraint:v∈Gx v=|G|/2Binary Quadratic Model:We will again begin by defining a binary variable for each node in our graph.x={n:Binary(n)for n in G.nodes}We begin the construction of our QM by building a BinaryQuadraticModel object that consists of our objective expression.bqm =sum(-x[i]-x[j]+2*x[i]*x[j]for i,j in G.edges)Next,we add in our constraint using the add_linear_equality_constraint function fromOcean.To use this function we must have a constraint of the form (i a i x i )+C =0.To map our mathematical constraint to this form,we move the constant term |G |/2to the left-hand side of the equation.When using this function,the first parameter is a list of tuples representing (vari-able,coefficient),or (x i ,a i ).The constant parameter is the constant term in the equation,and the lagrange_multiplier parameter provides a weighting coefficient to effectively balance the objective and constraint for the problem.Note that for this constraint to be satisfied,we must have an even number of nodes in the graph.bqm.add_linear_equality_constraint([(n,1)for n in G.nodes],constant =-G.number_of_nodes()/2,lagrange_multiplier =1)Now the BinaryQuadraticModel called bqm completely models our optimization problem and can be sent over to one of the available samplers.Constrained Quadratic Model:As before,we define a binary variable for each node in our graph.x ={n:Binary(n)for n in G.nodes}To build the CQM for the graph partitioning problem,we initialize our ConstrainedQuadraticModel object,set our objective,and add in the constraint.This is shown in the code snippet below.#Import the required packagefrom dwave.system import LeapHybridCQMSampler #Initialize the CQMcqm =ConstrainedQuadraticModel()#Set the objectivecqm.set_objective(sum (-x[i]-x[j]+2*x[i]*x[j]for i,j in G.edges))#Add a constraintcqm.add_constraint(sum (x[i]for i in G.nodes)==(G.number_of_nodes()/2),label=’partition-size’)Note that a label is provided for the constraint.This allows the SampleSet returned from the sam-pler to check whether or not each individual constraint is satisfied.For example,in the followingcode snippet we define our sampler to be LeapHybridCQMSampler,sample the CQM object,andprint out the SampleSet object returned.#Define the samplersampler=LeapHybridCQMSampler()#Sample the CQM objectsampleset=sampler.sample_cqm(cqm)#Print the resultsprint(sampleset.first)This displays for the user the following SampleSet.Sample(sample={1:0.0,2:1.0,3:1.0,4:0.0,5:0.0,6:1.0},energy=-6.0, num_occurrences=1,is_feasible=True,is_satisfied=array([True]))First we see the sample with binary variable assignments,followed by the energy and numberof stly,we see twofields unique to constrained quadratic models:is_feasible:True if all constraints are satisfied,and is_satisfied:an array with an entry for each constraintindicating True if the constraint is satisfied.4Using Matrices for Binary Quadratic Models An alternative to building binary quadratic models symbolically is to build them using a matrixrepresentation.A matrix representation of a BQM contains linear coefficients along the diagonaland quadratic coefficients on the off-diagonal.An easy way to think of a BQM in matrix form is toimagine variable names across the rows and columns of our ing Ocean,we can encodethis matrix in a variety of ways such as with a Python dictionary or with a NumPy array.Belowwe show converting a BQM matrix to a Python dictionary.x1x2is equivalent to{(x1,x2):2,(x2,x2):1}.x102x201When we enter our BQM matrix into a dictionary,we generally only include the non-zero entries,which allows us to save space and run larger problems more efficiently.Maximum Cut Example.In the maximum cut example,the BQM equation for a graph with edgeset E was determined to be:(−x i−x j+2x i x j).min(i,j)∈EIn maximum_cut.py,the Python dictionary for our matrix is built in the following code snippet.from collections import defaultdict#Initialize our Q matrixQ=defaultdict(int)#Update Q matrix for every edge in the graphfor u,v in G.edges:Q[(u,u)]+=-1Q[(v,v)]+=-1Q[(u,v)]+=2In this example,we use defaultdict(int)to initialize the dictionary Q.This allows us to createnew dictionary elements that are initialized with the value0as they are added to the dictionary[5].5Further Example ProgramsFor some simple Ocean program examples,check out our Collection of Code Examples[2].Theseexamples can be fully explored through our Leap cloud platform and with our integrated developerenvironment(IDE),available at /leap.References1Learn to formulate problems,https:///media/bu0lh5ee/bqm_dev_guide.pdf(2020).2Collection of code examples,https:///dwave-examples(2020).3Ocean documentation,https://(2020).4D-Wave system documentation,https:///docs/latest/c_solver_parameters.html(2020).5Python documentation:collections,https:///3/library/collections.html(2020).。
green building 外文文献翻译
外文文献:Green buildingGreen building (also known as green construction or sustainable building) refers to a structure and using process that is environmentally responsible and resource-efficient throughout a building's life-cycle: from sitting to design, construction, operation, maintenance, renovation, and demolition. This requires close cooperation of the design team, the architects, the engineers, and the client at all project stages. The Green Building practice expands and complements the classical building design concerns of economy, utility, durability, and comfort.Although new technologies are constantly being developed to complement current practices in creating greener structures, the common objective is that green buildings are designed to reduce the overall impact of the built environment on human health and the natural environment by:Efficiently using energy, water, and other resourcesProtecting occupant health and improving employee productivityReducing waste, pollution and environmental degradationA similar concept is natural building, which is usually on a smaller scale and tends to focus on the use of natural materials that are available locally. Other related topics include sustainable design and green architecture. Sustainability may be defined as meeting the needs of present generations without compromising the ability of future generations to meet their needs. Although some green building programs don't address the issue of the retrofitting existing homes, others do. Green construction principles can easily be applied to retrofit work as well as new construction.A 2009 report by the U.S. General Services Administration found 12 sustainably designed buildings cost less to operate and have excellent energy performance. In addition, occupants were more satisfied with the overall building than those in typical commercial buildings.Green building practices aim to reduce the environmental impact of buildings, so the very first rule is: the greenest building is the building that doesn't get built. New construction almost always degrades a building site, so not building is preferable to building. The second rule is: every building should be as small as possible. The third rule is: do not contribute to sprawl (the tendency for cities to spread out in a disordered fashion). No matter how much grass you put onyour roof, no matter how many energy-efficient windows, etc., you use, if you 1 contribute to sprawl, you've just defeated your purpose. Urban infill sites are preferable to suburban "Greenfield" sites.Buildings account for a large amount of land. According to the National Resources Inventory, approximately 107 million acres (430,000 km2) of land in the United States are developed. The International Energy Agency released a publication that estimated that existing buildings are responsible for more than 40% of the world’s total primary energy consumption and for 24% of global carbon dioxide emissions.The concept of sustainable development can be traced to the energy (especially fossil oil) crisis and the environment pollution concern in the 1970s. The green building movement in the U.S. originated from the need and desire for more energy efficient and environmentally friendly construction practices. There are a number of motives for building green, including environmental, economic, and social benefits. However, modern sustainability initiatives call for an integrated and synergistic design to both new construction and in theretrofiring of existing structures. Also known as sustainable design, this approach integrates the building life-cycle with each green practice employed with a design-purpose to create a synergy among the practices used.Green building brings together a vast array of practices, techniques, and skills to reduce and ultimately eliminate the impacts of buildings on the environment and human health. It often emphasizes taking advantage resources, e.g., using sunlight through passive solar, active solar, and photovoltaic techniques and using plants and trees through green roofs, rain gardens, and reduction of rainwater run-off. Many other techniques are used, such as using wood as a building material, or using packed gravel or permeable concrete instead of conventional concrete or asphalt to enhance replenishment of ground water.While the practices, or technologies, employed in green building are constantly evolving and may differ from region to region, fundamental principles persist from which the method is derived: Sitting and Structure Design Efficiency, Energy Efficiency, Water Efficiency, Materials Efficiency, Indoor Environmental Quality Enhancement, Operations and Maintenance Optimization, and Waste and Toxics Reduction. The essence of green building is an optimizationof one or more of these principles. Also, with the proper synergistic design, individual green building technologies may work together to produce a greater cumulative effect.On the aesthetic side of green architecture or sustainable design is the philosophy of designing a building that is in harmony with the natural features and resources surrounding the site. There are several key steps in designing sustainable buildings: specify 'green' building materials from local sources, reduce loads, optimize systems, and generate on-site renewable energy.The foundation of any construction project is rooted in the concept and design stages. The concept stage, in fact, is one of the major steps in a project life cycle, as it has the largest impact on cost and performance. In designing environmentally optimal buildings, the objective is to minimize the total environmental impact associated with all life-cycle stages of the building project. However, building as a process is not as streamlined as an industrial process, and varies from one building to the other, never repeating itself identically. In addition, buildings are much more complex products, composed of a multitude of materials and components each constituting various design variables to be decided at the design stage. A variation of every design variable may affect the environment during all the building's relevant life-cycle stages.Green buildings often include measures to reduce energy consumption – both the embodied energy required to extract, process, transport and install building materials and operating energy to provide services such as heating and power for equipment.As high-performance buildings use less operating energy, embodied energy has assumed much greater importance – and may make up as much as 30% of the overall life cycle energy consumption. Studies such as the U.S. LCI Database Project show buildings built primarily with wood will have a lower embodied energy than those built primarily with brick, concrete or steel.To reduce operating energy use, designers use details that reduce air leakage through the building envelope (the barrier between conditioned and unconditioned space). They also specify high-performance windows and extra insulation in walls, ceilings, and floors. Another strategy, passive solar building design, is often implemented in low-energy homes. Designers orient windows and walls and place awnings, porches, and trees to shade windows and roofs during the summer while maximizing solar gain in the winter. In addition, effective window placement(daylighting) can provide more natural light and lessen the need for electric lighting during the day.Onsite generation of renewable energy through solar power, wind power, hydro power, or biomass can significantly reduce the environmental impact of the building. Power generation is generally the most expensive feature to add to a building.Reducing water consumption and protecting water quality are key objectives in sustainable building. One critical issue of water consumption is that in many areas, the demands on the supplying aquifer exceed its ability to replenish itself. To the maximum extent feasible, facilities should increase their dependence on water that is collected, used, purified, and reused on-site. The protection and conservation of water throughout the life of a building may be accomplished by designing for dual plumbing that recycles water in toilet flushing. Waste-water may be minimized by utilizing water conserving fixtures such as ultra-low flush toilets and low-flow shower heads. Bidets help eliminate the use of toilet paper, reducing sewer traffic and increasing possibilities of re-using water on-site. Point of use water treatment and heating improves both water quality and energy efficiency while reducing the amount of water in circulation. The use of non-sewage and grey water for on-site use such as site-irrigation will minimize demands on the local aquifer.Building materials typically considered to be 'green' include lumber from forests that have been certified to a third-party forest standard, rapidly renewable plant materials like bamboo and straw, dimension stone, recycled stone, recycled metal (see: copper sustainability and recyclability), and other products that are non-toxic, reusable, renewable, and/or recyclable (e.g., Trass, Linoleum, sheep wool, panels made from paper flakes, compressed earth block, adobe, baked earth, rammed earth, clay, vermiculite, flax linen, sisal, sea grass, cork, expanded clay grains, coconut, wood fibre plates, calcium sand stone, concrete (high and ultra high performance, roman self-healing concrete, etc.) The EPA (Environmental Protection Agency) also suggests using recycled industrial goods, such as coal combustion products, foundry sand, and demolition debris in construction projects Building materials should be extracted and manufactured locally to the building site to minimize the energy embedded in their transportation. Where possible, building elements should be manufactured off-site and delivered to site, to maximise benefits of off-site manufacture including minimising waste, maximising recycling (because manufacture isin one location), high quality elements, better OHS management, less noise and dust. Energy efficient building materials and appliances are promoted in the United States through energy rebate programs, which are increasingly communicated to consumers through energy rebate database services such as GreenOhm.The Indoor Environmental Quality (IEQ) category in LEED standards, one of the five environmental categories, was created to provide comfort, well-being, and productivity of occupants. The LEED IEQ category addresses design and construction guidelines especially: indoor air quality (IAQ), thermal quality, and lighting quality.Indoor Air Quality seeks to reduce volatile organic compounds, or VOCs, and other air impurities such as microbial contaminants. Buildings rely on a properly designed ventilation system (passively/naturally or mechanically powered) to provide adequate ventilation of cleaner air from outdoors or recirculated, filtered air as well as isolated operations (kitchens, dry cleaners, etc.) from other occupancies. During the design and construction process choosing construction materials and interior finish products with zero or low VOC emissions will improve IAQ. Most building materials and cleaning/maintenance products emit gases, some of them toxic, such as many VOCs including formaldehyde. These gases can have a detrimental impact on occupants' health, comfort, and productivity. Avoiding these products will increase a building's IEQ. LEED. HQE and Green Star contain specifications on use of low-emitting interior. Draft LEED 2012 is about to expand the scope of the involved products. BREEA Mlimits formaldehyde emissions, no other VOCs.Also important to indoor air quality is the control of moisture accumulation (dampness) leading to mold growth and the presence of bacteria and viruses as well as dust mites and other organisms and microbiological concerns. Water intrusion through a building's envelope or water condensing on cold surfaces on the building's interior can enhance and sustain microbial growth.A well-insulated and tightly sealed envelope will reduce moisture problems but adequate ventilation is also necessary to eliminate moisture from sources indoors including human metabolic processes, cooking, bathing, cleaning, and other activities.Personal temperature and airflow control over the HVAC system coupled with a properly designed building envelope will also aid in increasing a building's thermal quality. Creating ahigh performance luminous environment through the careful integration of daylight and electrical light sources will improve on the lighting quality and energy performance of a structure.Solid wood products, particularly flooring, are often specified in environments where occupants are known to have allergies to dust or other particulates. Wood itself is considered to be hypo-allergenic and its smooth surfaces prevent the buildup of particles common in soft finishes like carpet. The Asthma and Allergy Foundation of American recommends hardwood, vinyl, linoleum tile or slate flooring instead of carpet. The use of wood products can also improve air quality by absorbing or releasing moisture in the air to moderate humidity.No matter how sustainable a building may have been in its design and construction, it can only remain so if it is operated responsibly and maintained properly. Ensuring operations and maintenance(O&M) personnel are part of the project's planning and development process will help retain the green criteria designed at the onset of the project. Every aspect of green building is integrated into the O&M phase of a building's life. The addition of new green technologies also falls on the O&M staff. Although the goal of waste reduction may be applied during the design, construction and demolition phases of a building's life-cycle, it is in the O&M phase that green practices such as recycling and air quality enhancement take place. Waste reduction Green architecture also seeks to reduce waste of energy, water and materials used during construction. For example, in California nearly 60% of the state's waste comes from commercial buildings. During the construction phase, one goal should be to reduce the amount of material going to landfills. Well-designed buildings also help reduce the amount of waste generated by the occupants as well, by providing on-site solutions such as compost bins to reduce matter going to landfills.To reduce the amount of wood that goes to landfill, Neutral Alliance (a coalition of government, NGOs and the forest industry) created the website . The site includes a variety of resources for regulators, municipalities, developers, contractors, owner/operators and individuals/homeowners looking for information on wood recycling.When buildings reach the end of their useful life, they are typically demolished and hauled to landfills. Deconstruction is a method of harvesting what is commonly considered "waste" and reclaiming it into useful building material. Extending the useful life of a structure also reduceswaste – building materials such as wood that are light and easy to work with make renovations easier.To reduce the impact on wells or water treatment plants, several options exist. "Grey water", wastewater from sources such as dishwashing or washing machines, can be used for subsurface irrigation, or if treated, for non-potable purposes, e.g., to flush toilets and wash cars. Rainwater collectors are used for similar purposes.Centralized wastewater treatment systems can be costly and use a lot of energy. An alternative to this process is converting waste and wastewater into fertilizer, which avoids these costs and shows other benefits. By collecting human waste at the source and running it to a semi-centralized biogas plant with other biological waste, liquid fertilizer can be produced. This concept was demonstrated by a settlement in Lubeck Germany in the late 1990s. Practices like these provide soil with organic nutrients and create carbon sinks that remove carbon dioxide from the atmosphere, offsetting greenhouse gas emission. Producing artificial fertilizer is also more costly in energy than this process.中文译文:绿色建筑绿色建筑(也被称为绿色建筑或可持续建筑)是指一个结构和使用的过程,是对环境负责和资源节约型整个建筑物的循环生活:从选址到设计,施工,运行,维护,改造和拆迁。
软件工程评估和管理英文讲义process3
The debugging process
Locate error
Design error repair
Repair error
Re-test program
CSEM01 - Wk 3 - Process
12
Software validation
CSEM01 - Wk 3 - Process
8
The software design process
CSEM01 - Wk 3 - Process
9
Structured methods
• Systematic approaches to developing a software design. • The design is usually documented as a set of graphical models. • Possible models
CSEM01 - Wk 3 - Process 15
Testing phases
R equirements specification
System specification
System design
Detailed design
Acceptance test plan
System integration test plan
Software Processes
CSEM01
SE Evolution & Management
Anne Comer Helen Edwards
CSEM01 - Wk 3 - Process 1
Objectives
• To describe outline process models for the activities: requirements engineering, software development, testing and evolution • To describe three generic process models and when they may be used • To introduce software process models • To overview the Rational Unified Process model • To overview Software Process Improvement CMMI – and mention Software Measurement
WindowsNT文件系统技术内幕(前言)
Overview Part I introduces the Windows NT Operating System and some of the issues of file system driver development.• Chapter 1, Windows NT System Components• Chapter 2, File System Driver Development• Chapter 3, Structured Driver DevelopmentIn ibis chapter:• The Basics« The Windows NT* The Windows NTWindows NT SystemComponentsThe focus of this book is the Windows NT file system and the interaction of the file system with the other core operating system components. If you are interested in providing value-added software for the Windows NT platform, the topics on filter driver design and development should provide you with a good under-standing of some of the mechanics involved in designing such software.File systems and filter drivers don't exist in a vacuum, but interact heavily with the rest of the operating system. This chapter provides an overview of the main components of the Windows NT operating system.The BasicsOperating systems deal with issues that users prefer to forget, including initial-izing processor states, coordinating multiple CPUs, maintaining CPU cache coherency, managing the local bus, managing physical memory, providing virtual memory support, dealing with external devices, defining and scheduling user processes/threads, managing user data stored on external devices, and providing the foundation for an easily manageable and user-friendly computing system. Above all, the operating system must be perceived as reliable and efficient, since any perceived lack of these qualities will almost certainly result in the universal rejection and thereby in the quick death of the operating system.Contrary to what you may have heard, Windows NT is not a state-of-the-art oper-ating system by any means. It employs concepts and principles that have been known for years and have actually been implemented in many other commercial operating systems. You can envision the Windows NT platform as the result of a confluence of ideas, principles, and practices obtained from a wide variety ofChapter 1: Windows NT System Components sources, from both commercial products and research projects conducted by universities.Design principles and methodologies from the venerable UNIX and OpenVMS operating system platforms, as well as the MACH operating system developed at CMU, are obvious in Windows NT. You can also see the influence of less sophisti-cated systems, such as MS-DOS and OS/2. However, do not be misled into thinking that Windows NT can be dismissed as just another conglomeration of rehashed design principles and ideas. The fact that the designers of Windows NT were willing to learn from their own experiences in designing other operating systems and the experiences of others has led to the development of a fairly stable and serious computing platform.The Core ArchitectureCertain philosophies derived from the MACH operating system are visible in the design of Windows NT. These include an effort to minimize the size of the kernel and to implement parts of the operating system using the client-server model, with a message-passing method to transfer information between modules. Further-more, the designers have tried to implement a layered operating system, where each component interacts with other layers via a well-defined interface.The operating system was designed specifically to run in both single-processor and symmetric multiprocessor environments.Finally, one of the primary goals was to make the operating system easily portable across many different hardware architectures. The designers tried to achieve this goal by using an object-based model to design operating system components and by abstracting out those small pieces of the operating system that are hardware-dependent and therefore need to be reimplemented for each supported platform; the more portable components can, theoretically, simply be recompiled for the different architectures.Figure 1-1 illustrates how the Windows NT operating system is structured. The figure shows that Windows NT can be broadly divided into two main compo-nents: user mode and kernel mode.User modeThe operating system provides support for protected subsystems. Each protected subsystem resides in its own process with its memory protected from other subsystems. Memory protection support is provided by the Windows NT Virtual Memory Manager.The BasicsFigure 1-1. Overview of the Windows NT operating system environmentThe subsystems provide well-defined Application Programming Interfaces (APIs) that can be used by user-mode processes to obtain desired functionality. The subsystems then communicate with the kernel-mode portion of the operating system using well-defined system service calls.Chapter 1: Windows NT System Components NOTE Microsoft has never really documented the operating-system-provid-ed system-service calls. They instead encourage application develop-ers for the Windows NT platform to use the services of one of thesubsystems provided by the operating system environment.By not documenting the native Windows NT system service APIs,the designers have tried to maintain an abstract view of the operat-ing system. Therefore, applications only interact with their preferrednative subsystem, leaving the subsystem to interact with the operat-ing system. The benefit to Microsoft of using this philosophy is totie most applications to the easily portable Win32 subsystem (thesubsystem of choice and, sometimes, the subsystem of necessity),and also to allow the operating system to evolve more easily thanwould be possible if major applications depended on certain specif-ic native Windows NT system services.However, it is sometimes more efficient (or necessary) for WindowsNT applications and kernel-mode driver developers to be able to ac-cess the system services directly. In Appendix A, Windows NT Sys-tem Services, you'll find a list of the system services provided by theWindows NT I/O Manager to perform file I/O operations.Environment subsystems provide an API and an execution environment to user processes that emulates some specific operating system (e.g., an OS/2 or UNIX or Windows 3.x operating system). Think of a subsystem as the personality of the operating system as viewed by a user process. The process can execute comfort-ably within the safe and nurturing environment provided by the specific subsystem without having to worry about the capabilities, programming inter-faces, and requirements of the core Windows NT operating system.The following environment subsystems are provided with Windows NT:Win32The native execution environment for Windows NT. Microsoft actively encour-ages application developers to use the Win32 API in their software to obtain operating system services.This subsystem is also more privileged than the others.* It is solely respon-sible for managing the devices used to interact with users; the monitor, keyboard, and mouse are all controlled by the Win32 subsystem. It is also the * In reality, this is the only subsystem that is actively encouraged by Microsoft for use by third-party ap-plication program designers. The other subsystems work (more often than not) but seem to exist only as checklist items. If, for example, you decided to develop an application using the POSIX subsystem in-stead, you will undoubtedly encounter limitations and frustrations due to the very visible lack of commit-ment on behalf of Microsoft in making the subsystem fully functional and full featured.The Basics________________________________________________7 sole Window Manager for the system and defines the policies that control the appearance of graphical user interfaces.POSIXThis exists to provide support to applications conforming to the POSIX 1003.1 source-code standard. If you have applications that were developed to use the APIs defined in that standard, you should theoretically be able to compile, link, and execute them on a Windows NT platform.There are severe restrictions on functionality provided by the POSIX subsystem that your applications must be prepared to accept. For example, no networking support is provided by the POSIX subsystem.OS/2Provides API support for 16-bit OS/2 applications on the Intel x86 hardware platform.WOW (Windows on Windows)This provides support for 16-bit Windows 3.x applications. Note, however, that 16-bit applications that try to control or access hardware directly will not execute on Windows NT platforms.VDM (Virtual DOS Machine)Provided to support execution of 16-bit DOS applications. As in the case of 16-bit Windows 3.x applications, any process attempting to directly control or access system hardware will not execute on Windows NT.Integral subsystems extend the operating system into user space and provide important system functionality. These include the user-space components of the Security subsystem (e.g., the Local Service Authority); the user-space components of the Windows NT LAN Manager Networking software; and the Service Control Manager responsible for loading, unloading, and managing kernel-mode drivers and system services, among others.Kernel modeThe difference between executing code in kernel mode and in user mode is the hardware privilege level at which the CPU is executing the code.Most CPU architectures provide at least two hardware privilege levels, and many provide multiple levels. The hardware privilege level of the CPU determines the possible set of instructions that code can execute. For example, when executing in user mode, processes cannot directly modify or access CPU registers or page-tables used for virtual memory management. Allowing all user processes access to such privileges would quickly result in chaos and would preclude any serious tasks from being performed on the CPU.8___________________________Chapter 1: Windows NT System Components Windows NT uses a simplified hardware privilege model that has two privilege levels: kernel mode, which allows code to do anything necessary on the processor;* and user mode, where the process is tightly constrained in the range of allowed operations.If you're familiar with the Intel x86 architecture set, kernel mode is equivalent to the Ring 0 privilege level for processors in the set and user mode to Ring 3-The terms kernel mode and user mode, although often used to describe code (functions), are actually privilege levels associated with the processor. Therefore, the term kernel-mode code simply means that the CPU will always be in kernel-mode privilege level when it executes that particular code, and the term user-mode code means that the CPU will execute the code at user-mode privilege level. Typically, as a third-party developer, you cannot execute Windows NT programs while the CPU is at kernel-mode privilege level unless you design and develop Windows NT kernel-mode drivers.The kernel-mode portion of Windows NT is composed of the following:The Hardware Abstraction Layer (HAL)The Windows NT operating system was designed to be portable across multiple architectures. In fact, you can run Windows NT on Intel x86 plat-forms, DEC Alpha platforms, and also the MIPS-based platforms (although support for this architecture has recently been discontinued by Microsoft).Furthermore, there are many kinds of external buses that you could use with Windows NT, including (but not limited to) ISA, EISA, VL-Bus, and PCI bus architectures. The Windows NT developers created the HAL to isolate hard-ware-specific code. The HAL is a relatively thin layer of software that interfaces directly with the CPU and other hardware system components and is responsible for providing appropriate abstractions to the rest of the system.The rest of the Windows NT Kernel sees an idealized view of the hardware, as presented by the HAL. All differences across multiple hardware architec-tures are managed internally by the HAL. The set of functions exported by the HAL are invoked by both the core operating system components (e.g., the Windows NT Kernel component), and device drivers added to the operating system.The HAL exports functions that allow access to system timers, I/O buses, DMA and Interrupt controllers, device registers, and so on.* Code that executes in kernel mode can do virtually anything with the system. This includes crashing the system or corrupting user data. Therefore, with the flexibility of kernel-mode privileges comes a lot of responsibility that kernel-mode designers must be aware of.The Windows NT KernelThe Windows NT KernelThe Windows NT Kernel provides the fundamental operating system function-ality that is used by the rest of the operating system. Think of the kernel as the module responsible for providing the building blocks that can subse-quently be used by the Windows NT Executive to provide all of the powerful functionality offered by the operating system. The kernel is responsible for providing process and thread scheduling support, support for multiprocessor synchronization via spin lock structures, interrupt handling and dispatching, and other such functionality.The Windows NT Kernel is described further in the next section.The Windows NT ExecutiveThe Executive comprises the largest portion of Windows NT. It uses the services of the kernel and the HAL, and is therefore highly portable across architectures and hardware platforms. It provides a rich set of system services to the various subsystems, allowing them to access the operating system functionality.The major components of the Windows NT Executive include the Object Manager, the Virtual Memory Manager, the Process Manager, the I/O Manager, the Security Reference Monitor, the Local Procedure Call facility, the Configuration Manager, and the Cache Manager.File systems, device drivers, and intermediate drivers form a part of the I/O subsystem that is managed by the I/O Manager and are part of the Windows NT Executive.The Windows NT KernelThe Windows NT Kernel has been described as the heart of the operating system, although it is quite small compared to the Windows NT Executive. The kernel is responsible for providing the following basic functionality:• Support for kernel objects• Thread dispatching• Multiprocessor synchronization• Hardware exception handling• Interrupt handling and dispatching• Trap handling• Other hardware specific functionalityChapter 1: Windows NT System Components The Windows NT Kernel code executes at the highest privilege level on the processor.* It is designed to execute concurrently on multiple processors in a symmetric multiprocessing environment.The kernel cannot take page faults; therefore, all of the code and data for the kernel is always resident in system memory. Furthermore, the kernel code cannot be preempted; therefore, context switches are not allowed when a processor executes code belonging to the kernel. However, all code executing on any processor can always be interrupted, provided the interrupt level is higher than the level at which the code is executing.IRQ LevelsThe Windows NT Kernel defines and uses Interrupt Request Levels (IRQLs) to prioritize execution of kernel-mode components. The particular IRQL at which a piece of kernel-mode code executes determines its hardware priority. All inter-rupts with an IRQL that is less than or equal to the IRQL of the currently executing kernel-mode code are masked off (i.e., disabled) by the Windows NT Kernel. However, the currently executing code on a processor can be interrupted by any software or hardware interrupt with an IRQL greater than that of the executing code. IRQLs are hierarchically ordered and are defined as follows (in increasing order of priority):PASSIVE_LEVELNormal thread execution interrupt levels. Most file system drivers are asked to provide functionality by a thread executing at IRQL PASSIVE_LEVEL, though this is not guaranteed. Most lower-level drivers, such as device drivers, are invoked at a higher IRQL than PASSIVE_LEVEL.This IRQL is also known as LOW_LEVEL.APC_LEVELAsynchronous Procedure Call (APC) interrupt level. Asynchronous Procedure Calls are invoked by a software interrupt, and affect the control flow for a target thread. The thread to which an APC is directed will be interrupted, and the procedure specified when creating the APC will be executed in the context of the interrupted thread at APC_LEVEL IRQL.DISPATCH_LEVELThread dispatch (scheduling) and Deferred Procedure Call (DPC) interrupt level. DPCs are defined in Chapter 3, Structured Driver Development. Once a* The highest privilege level is defined as the level at which the operating system software has complete and unrestricted access to all capabilities provided by the underlying CPU architecture.The Windows NT Kernelthread IRQL has been raised to DPC level or greater, thread scheduling is automatically suspended.Device Interrupt Levels (DIRQLs)Platform-specific number and values of the device IRQ levels.PROFILE_LEVELTimer used for profiling.CLOCK1_LEVELInterval timer clock 1.CLOCK2_LEVELInterval timer clock 2.IPI_LEVELInterprocessor interrupt level used only on multiprocessor systems.POWER_LEVELPower failure interrupt.HIGHEST_LEVELTypically used for machine check and bus errors.APC_LEVEL and DISPATCH_LEVEL interrupts are software interrupts. They are requested by the kernel-mode code and are lower in priority than any of the hard-ware interrupt levels. The interrupts in the range CLOCK1_LEVEL to HIGH_ LEVEL are the most time-critical interrupts, and they are therefore the highest priority levels for thread execution.Support f or Kernel ObjectsThe Windows NT Kernel also tries to maintain an object-based environment. It provides a core set of objects that can be used by the Windows NT Executive and also provides functions to access and manipulate such objects. Note that the Windows NT Kernel does not depend upon the Object Manager (which forms part of the Executive) to manage the kernel-defined object types.The Windows NT Executive uses objects exported by the kernel to construct even more complex objects made available to users.Kernel objects are of the following two types:Dispatcher objectsThese objects control the synchronization and dispatching of system threads.Dispatcher objects include thread, event, timer, mutex, and semaphore object types. You will find a description of most of these object types in Chapter 3./2___________________________Chapter 1: Windows NT System Components Control objectsThese objects affect the operation of kernel-mode code but do not affect dispatching or synchronization. Control objects include APC objects, DPC objects, interrupt objects, process objects, and device queue objects.The Windows NT Kernel also maintains the following data structures:Interrupt Dispatcher TableThis is a table maintained by the kernel to associate interrupt sources with appropriate Interrupt Service Routines.Processor Control Blocks (PRCBs)There is one PRCB for each processor on the system. This structure contains all sorts of processor-specific information, including pointers to the thread currently scheduled for execution, the next thread to be scheduled, and the idle thread.NOTE Each processor has an idle thread that executes •whenever no otherthread is available. The idle thread has a priority below that of allother threads on the system. The idle thread continuously loopslooking for work such as processing the DPC queue and initiating acontext switch whenever another thread becomes ready to executeon the processor.Processor Control RegionThis is a hardware architecture-specific kernel structure that contains pointers to the PRCB structure, the Global Descriptor Table (GOT), the Interrupt Descriptor Table (IDT), and other information.DPC queueThis global queue contains a list of procedures to be invoked whenever the IRQL on a processor falls below IRQL DISPATCH_LEVEL.Timer queueA global timer queue is also maintained by the NT Kernel. This queuecontains the list of timers that are scheduled to expire at some future time. Dispatcher databaseThe thread dispatcher module maintains a database containing the execution state of all processors and threads on the system. This database is used by the dispatcher module to schedule thread execution.In addition to the object types mentioned above, the Windows NT Kernel main-tains device queues, power notification queues, processor requester queues, and other such data structures required for the correct functioning of the kernel itself.The Windows NT Kernel_______________________________________13Processes and ThreadsA process is an object* that represents an instance of an executing program. In Windows NT, each process must have at least one thread of execution. The process abstraction is composed of the process-private virtual address space for the process, the code and data that is private to the process and contained within the virtual address space, and system resources that have been allocated to the process during the course of execution.Note that process objects are not schedulable entities in themselves. Therefore you cannot actually schedule a process to execute. However, each process contains one or more schedulable threads of execution.Each thread object executes program code for the process and is therefore sched-uled for execution by the Windows NT Kernel. As noted above, more than one thread can be associated with any process, and each thread is scheduled for execution individually.The context of a thread object consists of user- and kernel-stack pointers for the thread, a program counter for the thread indicating the current instruction being executed, system registers (including integer and floating-point registers) containing state information, and other processor status maintained while the thread is executing.Each thread has a scheduling state associated with it. The possible states are initialized, ready-to-run, standby, running, waiting, and terminated. Only one thread can be in the running state on any processor at any given instant, though multiple threads can be in this state on multiprocessor systems (one per processor).Threads have execution priority levels associated with them; higher priority threads are always given preference during scheduling decisions and always preempt the execution of lower priority threads. Priority levels are categorized into the real-time priority class and the variable priority class.* The Windows NT Kernel defines the fundamental thread and process objects. The Windows NT Exec-utive uses the core structures defined by the kernel to define Executive thread and process object abstrac-tions.14 ____ __________Chapter 1: Windows NT System ComponentsNOTE It is possible to encounter situations of priority-inversion on Win-dows NT systems, where a lower-priority thread may be holding acritical resource required by a higher-priority thread (even a threadexecuting with real-time priority). Any thread that is of higher-priori-ty than the one holding the critical resource would then get the op-portunity to execute even if it has a priority lower than that of thethread waiting for the resource.*The scenario described above violates the assumption that higherpriority threads will always preempt and execute before any lowerpriority threads are allowed to execute. This could lead to incorrectbehavior, especially in situations where thread priorities must bemaintained (e.g., for real-time processes). Kernel-mode designersmust anticipate and understand that these situations can occur un-less they ensure that resource acquisition hierarchies are correctlydefined and maintained.Windows NT does not provide support for features such as priorityinheritance that could automatically help avoid the priority inver-sion problem.Most kernel-provided routines for programmatically manipulating or accessing thread or process structures are not exposed to third-party driver developers. Thread Context and TrapsA trap is the processor-provided mechanism for capturing the context of an executing thread when certain events occur. Events that cause a trap include inter-rupts, exception conditions (described in Chapter 3), or a system service call causing a change in processor mode from user mode to kernel mode of execution. When a trap condition occurs, the operating system trap handler is invoked.t The Windows NT trap handler code saves the information for an executing thread in the form of a call frame before invoking an appropriate routine to process the trap condition. Here are two components of a call frame:A trap frameThis contains the volatile register state.* Priority inversion requires three threads to be running concurrently: the high-priority thread that re-quires the critical resource, the low-priority thread that has the resource acquired, and the intermediate-priority thread that does not want or need the resource and therefore gets the opportunity to preempt the low-priority thread (because it has a higher relative priority) but also (in the process) prevents the high-priority thread from executing even though it has a relatively lower priority.t The trap handler is written in assembly, is highly processor- and architecture-specific, and is a core piece of functionality provided by the Windows NT Kernel.The Windows NT Executive_____________________________________75 An exception frameWhen exception conditions occur that cause the trap handler to be invoked, the nonvolatile register state is also saved.In addition, the trap handler also saves the previous machine state and any infor-mation that will allow the thread to resume execution after the trap condition has been processed appropriately.The Windows NT ExecutiveThe Windows NT Executive is composed of distinct modules, or subsystems, each of which assumes responsibility for a primary piece of functionality. Typically, references to Windows NT kernel-mode code actually refer to modules in the Executive.The Executive provides a rich set of system service calls (an API) for subsystems to access its services. In addition, the Executive also provides comprehensive support to developers who wish to extend the existing functionality. Develop-ment is usually in the form of third-party device drivers, installable file system drivers, and other intermediate and filter drivers used to provide value-added services.The various components that comprise the Windows NT Executive maintain more or less strict boundaries around themselves. Once again, the object-based nature of the operating system manifests itself in the prolific use of abstract data types and methods. Modules do not directly access the internal data structures of other modules; note that, although the designers have managed to stick to well-defined interfaces internally, modules still make many assumptions when they invoke each other. The assumptions are often in the form of expectations of what processing the called module will perform and how error conditions will be handled and/or reported. Finally, as you will observe later in this book, the synchronization hierarchy employed by the Executive components when they recursively invoke each other is more than just a little complicated.The Windows NT Object ManagerAll components of the Windows NT Executive that export data types for use by other kernel-mode modules use the services of the Object Manager to define, create, and manage the data types, as well as instances of the data types.The NT Object Manager manages objects. An object is defined as an opaque data structure implemented and manipulated by a specific kernel-mode (Executive) component. Each object may have a set of operations defined for it; these include。
adams液压系统仿真
1Getting Started Using ADAMS/HydraulicsOverviewWelcome to ADAMS/Hydraulics. ADAMS/Hydraulics is a plugin to ADAMS/View, ADAMS/Solver, ADAMS/Car, and ADAMS/Engine.ADAMS/Hydraulics is a plugin to ADAMS/View, ADAMS/Solver, ADAMS/Car, and ADAMS/Engine. It lets you model and simulate fluid-powered circuits and control how the circuits interact with mechanical models.ADAMS/Hydraulics contains all of the hydraulic components you need to model your hydraulic circuits: valves, pumps, cylinders, and so on. The components take advantage of the parameterization and function capabilities of ADAMS/View and ADAMS/Solver. The result is a powerful open environment for complete modeling of complex hydraulic-driven mechanisms and systems.Introducing ADAMS/Hydraulics 3Introducing the Problem 5Creating a Hydraulic Circuit 11Testing Your Model 392Getting Started Using ADAMS/HydraulicsCopyrighThe information in this document is furnished for informational use only, may be revised from time to time, and should not be construed as a commitment by MSC.Software Corporation. MSC.Software Corporation assumes no responsibility or liability for any errors or inaccuracies that may appear in this document.Copyright InformationThis document contains proprietary and copyrighted information. MSC.Software Corporation permits licensees of MSC.ADAMS?software products to print out or copy this document or portions thereof solely for internal use in connection with the licensed software. No part of this document may be copied for any other purpose or distributed or translated into any other language without the prior written permission of MSC.Software Corporation。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
Building Systems from Commercial ComponentsUsing Model ProblemsRobert C. SeacordModel problems are focused experimental prototypes that reveal technology/product capabilities, benefits, and limitations in well-bounded ways. They are useful for adding rigor to component-based system design as well as reforming the means by which component evaluation and selection is performed.In 1982 my employer presented me with a new toy, an IBM 5051 computer, which became better known as the “IBM PC.” I was also provided with a list of all the software programs available for the platform—on a single sheet of paper.For some time, developing software for this platform (and others) meant developing custom software. Commercial components were simply not available. While this situation was not particularly conducive to productivity (it seemed the first step of each project in those days was to create a systems services layer), it was a comfortable situation for developers. Once you had mastered the DOS and BIOS interfaces, and had a comfortable understanding of 8086/8088 assembler (or a higher-level programming language), there were not many surprises.Today, developers do not have this comfort zone when they build systems from commercial components. The dynamics of the component market practically guarantee that engineers are, at least in part, unfamiliar with the capabilities and limitations of individual software components and their interactions when combined into a component ensemble.Fundamental IdeasPrototyping is a fundamental technique of software engineering, as it has been incorporated into Boehm’s Spiral Model [Boehm 88] and institutionalized in various industrial-strength software processes [Cusumano 95, Jacobson 99]. In spiral process models, a project is conceived as a series of iterations over a prescribed development process. Usually, each iteration (except the last, of course) produces a prototype that is further refined by succeeding iterations.Prototypes may be developed for the customer of the system or they may be built for the designers of the system. Ultimately, all prototyping is motivated by the desire to reduce risk. For example, the development team might build a prototype to reduce the risk that a customer will be unsatisfied with the interface or planned functionality of a system.There are three motivations for designers to build prototypes. The first is for the designer to develop a basic level of competence in a component, or in an ensemble of components. Typically, this is best accomplished through unstructured “play” with the component. The second motivation is to answer a specific design question: for example, can I use my Web browser to launch an external application? The use of model problems—focused experimental prototypes that reveal technology/product capabilities, benefits, and limitations in well-bounded ways—is well suited to this purpose. The third motivation for building prototypes for thedesigner is persuasion—after all, good ideas are irrelevant if they are not adopted. Model problems are also an effective mechanism for advocating a particular design approach.Model ProblemsThe use of model problems in evaluating technology and product capabilities is fully explored in our new book, Building Systems from Commercial Components, published by Addison-Wesley [Wallnau 01]. In the remainder of this article, I provide an introduction to model problems including the terminology and an overview of the process.A model problem is actually a description of the design context. The design context defines the constraints on the implementation. For example, if the software under development must provide a Web-based interface for both Netscape Navigator and Microsoft Internet Explorer, this is part of the design context that constrains the solution space.A prototype situated in a specific design context is called a model solution. A model problem may have any number of model solutions. The number of model solutions developed depends on the severity of the risk inherent in the design context, and the relative success of the model solutions in addressing this risk.Model problems are normally used by design teams. Optimally, the design team consists of an architect who is the technical lead on the project and makes the principal design decisions, as well as a number of designers/engineers who may be tasked by the architect to execute the model problem.The overall process consists of the following steps that can be executed in sequence:1. The architect and engineer identify a design question. The design question initiates the model problem. It refers to an unknown that is expressed as a hypothesis.2. The architect and engineer define the a priori evaluation criteria. These criteria describe how the model solution will be shown to support or contradict the hypothesis.3. The architect and engineer define the implementation constraints. The implementation constraints specify the fixed (i.e., inflexible) part of the design context that governs the implementation of the model solution. These constraints might include such things as platform requirements, component versions, and business rules.4. The engineer produces a model solution situated in the design context. The model solution is a minimal spanning application that uses only those features of a component (or components) that are necessary to support or contradict the hypothesis.5. The engineer identifies a posteriori evaluation criteria. A posteriori evaluation criteria include the a priori criteria plus criteria that are discovered as a byproduct of implementing the model solution.6. Finally, the architect performs an evaluation of the model solution against the a posteriori criteria. The evaluation may result in the design solution being rejected or adopted, but often leads to the generation of new design questions that must be resolved in similar fashion. SummaryModel problems have been successfully applied by the SEI in collaboration with organizations over the past five years. These can be applied with varying degrees of formality, but successful model problems always consist of the steps outlined in this article (that is a design question, a priori evaluation criteria, implementation constraints, one or more model solutions, a posteriori evaluation criteria, and evaluation).Model problems are extensively covered in Building Systems from Commercial Components, including an extensive case study describing multiple applications of this technique in the Web-based system and security domains. Generally speaking, model problems have proven to be a successful technique for adding rigor to component-based system design as well as reforming the means by which component evaluation and selection is performed.References[Boehm 88] Barry Boehm. “A Spiral Model of Software Development andEnhancement.” Computer, May 1988: pp. 61-72.[Cusumano 95] Michael A. Cusumano and Richard W. Selby. Microsoft Secrets. NewYork: The Free Press, 1995.[Jacobson 99] Ivar Jacobson, Grady Booch, and James Rumbaugh. The UnifiedSoftware Development Process. Reading, MA: Addison-Wesley,1999.[Wallnau 01] Kurt Wallnau, Scott Hissam, Robert Seacord. Building Systems fromCommercial Components. Reading, MA: Addison-Wesley, 2001.About the AuthorRobert C. Seacord is a senior member of the technical staff at the SEI and an eclectic technologist. He is coauthor of the book Building Systems from Commercial Components as well as more than 30 papers on component-based software engineering, Web-based system design, legacy system modernization, component repositories and search engines, security, and user interface design and development.Copyright © 2001 Carnegie Mellon UniversityThe Software Engineering Institute (SEI) is a federally funded research and development center sponsored by the U.S. Department of Defense and operated by Carnegie Mellon University.® CMM, Capability Maturity Model, Capability Maturity Modeling, Carnegie Mellon, CERT, and CERT Coordination Center are registered in the U.S. Patent Trademark Office.SM ATAM; Architecture Tradeoff Analysis Method; CMMI; CMM Integration; CURE; IDEAL; Interim Profile; OCTAVE; Operationally Critical Threat, Asset, and Vulnerability Evaluation; Personal Software Process; PSP; SCAMPI; SCAMPI Lead Assessor; SCE; Team Software Process; and TSP are servicemarks of Carnegie Mellon University.TM Simplex is a trademark of Carnegie Mellon University.。