Key points of the '92 Yellow Book supplement (Tan)


IOA: A Language for Specifying, Programming, and Validating Distributed Systems (Draft)
Stephen J. Garland, Nancy A. Lynch, and Mandana Vaziri
MIT Laboratory for Computer Science, September 30, 1997

Contents

I  IOA Tutorial

1  Introduction
   1.1  I/O automata
   1.2  Executions and traces
   1.3  Operations on automata
   1.4  Properties of automata

IV  IOA Reference Manual

10  Lexical syntax
11  Automaton definitions
12  Type definitions
13  Primitive automata

Edited by Lorna Carter


Aggregates from Natural and Recycled Sources
Economic Assessments for Construction Applications—A Materials Flow Analysis

By David R. Wilburn and Thomas G. Goonan

U.S. GEOLOGICAL SURVEY CIRCULAR 1176
U.S. DEPARTMENT OF THE INTERIOR, BRUCE BABBITT, Secretary
U.S. GEOLOGICAL SURVEY, Thomas J. Casadevall, Acting Director

Any use of trade, product, or firm names in this publication is for descriptive purposes only and does not imply endorsement by the U.S. Government.

CONTENTS

Abstract
Executive Summary
Introduction
Structure of the Aggregates Industry
Aggregates Processing Technology
Technical Factors Affecting Aggregates Recycling
Transportation Factors
Locating an Aggregates Recycling Facility
Costs of Producing Recycled Aggregates
  Methodology
  Costs
  Sensitivity Analysis
Public Policy
Conclusions
Selected References
Appendix 1. State Concrete Recycling Activity
Appendix 2. Aggregates Production Technology

FIGURES

1.  Construction aggregates flow system
2.  Pie charts showing consumption of aggregates by source and market sector
3.  Generalized flow diagram for an aggregates recycling operation
4.  Diagram illustrating locating a concrete recycling facility
5.  Histogram showing estimated 1996 costs for a 110,000 t/yr recycled aggregates operation
6.  Histogram showing estimated 1996 costs for a 253,000 t/yr recycled aggregates operation
7.  Histogram showing estimated 1996 costs for a 312,000 t/yr recycled aggregates operation
8.  Diagram showing estimated costs and revenues of recycled aggregates
9.  Graph showing profitability of a 110,000 t/yr recycled aggregates operation
10. Graph showing profitability of a 253,000 t/yr recycled aggregates operation
11. Graph showing profitability of a 312,000 t/yr recycled aggregates operation
12. Photograph showing typical natural aggregates operation
13. Photograph showing typical recycled aggregates operation

TABLES
1. Significant technological aspects of natural and recycled aggregates
2. Material requirements for a typical highway project
3. Assumptions used in this evaluation
4. Estimated 1996 costs for recycled aggregate operations
5. Crusher combinations commonly used in concrete and asphalt recycling

AGGREGATES FROM NATURAL AND RECYCLED SOURCES
Economic Assessments for Construction Applications—A Materials Flow Study
By David R. Wilburn and Thomas G. Goonan

ABSTRACT

Increased amounts of recycled materials are being used to supplement natural aggregates (derived from crushed stone, sand and gravel) in road construction. An understanding of the economics and factors affecting the level of aggregates recycling is useful in estimating the potential for recycling and in assessing the total supply picture of aggregates. This investigation includes a descriptive analysis of the supply sources, technology, costs, incentives, deterrents, and market relationships associated with the production of aggregates. Results derived from cash flow analyses indicate that under certain conditions aggregates derived from construction and demolition debris or reclaimed asphalt pavement can economically meet the needs of certain markets, but this material can only supplement the use of natural aggregates in construction applications because the available supply is much less than total demand for aggregates. Producers of natural aggregates benefit from their ability to sell a wide, higher valued range of aggregate products and will continue to dominate high-end product applications such as portland cement concrete and top-course asphalt.

Although recycled aggregates can be used in a variety of road construction applications, product variability and strength characteristics usually limit their use to road base, backfill, and asphalt pavement.
Quality of the products containing recycled material is often source dependent, and indiscriminate blending may lead to inferior performance. Careful feed monitoring, testing, and marketing can broaden the use of recycled aggregates into other applications.

Aggregates recycling is most likely to be successful where transportation dynamics, disposal and tipping fee structures, resource supply/product markets, and municipal support are favorable. Recycling operations often must overcome risks associated with feed and product availability, pricing, and quality.

Costs for three representative operations of different sizes were modeled in this study. Under study conditions, all were found to be profitable and highly dependent upon local tipping fees and market prices, which can vary significantly by location. Smaller operations were found to have different operational dynamics, often requiring creative marketing or incentives to maintain profitability.

Nationally, consumption of recycled aggregates from crushed concrete increased 170 percent between 1994 and 1996, but constituted less than 0.4 percent of total aggregates consumed in 1995. The supply of construction debris is regional, and is determined by local infrastructure decay and replacement rates. Aggregate recycling rates are greatest in urban areas where replacement of infrastructure is occurring, natural aggregate resources are limited, disposal costs are high, or strict environmental regulations prevent disposal. Consumption is expected to grow as construction contractors recycle as a means of saving on transportation, disposal, and new material costs, and as natural aggregate producers include recycled material as part of their product mix in order to prolong the life of their reserves and improve product revenues. In some locations, the amount of material available for recycling is insufficient to meet present industry demand.
The use of recycled aggregates should be evaluated locally based upon relative cost, quality, and market factors. Policy makers often must weigh the potential benefits of recycling against competing land use, development issues, and economic and societal pressures.

EXECUTIVE SUMMARY

Much of our Nation's infrastructure (roads, buildings, and bridges) built during the middle twentieth century is in need of repair or replacement. A large volume of cement- and asphalt-concrete aggregates will be required to rebuild this infrastructure and support new construction. Use of construction and demolition debris and reclaimed asphalt pavement as sources of aggregates is increasing. What are the factors that influence the aggregates recycling industry? How much does it cost to produce recycled aggregates? What are the incentives and deterrents for recycling? Where is the niche for recycled aggregates? These are some of the questions addressed in this study.

What are the factors that influence the aggregates recycling industry?

Urbanization has generated a high demand for construction aggregates and increased quantities of construction debris that may provide an additional source for aggregates. Recycling is affected by local and regional conditions and market specifications. Relative transportation distances and costs among construction and demolition sites, recyclers, competing natural aggregate producers, local landfills, and markets influence how much material is available for recycling and set local pricing and fee structures. Plant location, design, and efficiency can have a significant impact on economic performance. The quantity, consistency, and quality of feed material and a skilled labor force also affect plant efficiency and the market options available to the recycler.
Costs associated with equipment, labor, and overhead are important to operational economics, but revenues generated by product pricing and tipping fees are even more significant. There will continue to be opportunities for new entrants, but adding new recycling capacity to a market with limited resources impacts the profitability of all participants.

How much does it cost to produce recycled aggregates?

Entry into the aggregates recycling business requires a capital investment of $4 to $8 per metric ton of annual capacity, a cost that is most significant for a small producer because of economies of scale. Processing costs for the aggregates recycler range from about $2.50 to $6 per metric ton. Operating rate and revenues generated from tipping charges and product prices are the most important factors affecting profitability, but can vary considerably by operation and region. Transportation costs associated with feedstock acquisition, while significant to the regional dynamics of the industry, were assumed to affect the profitability of a recycler only indirectly, because such costs are typically incurred by the construction contractor that supplies the material rather than by the recycler that processes it.

Cash flow analyses indicate that all operations except the small recycler could achieve at least a 12 percent rate of return on total investment. Most larger recyclers are more profitable under study conditions because of economies of scale. Recycling operations benefit from tipping fee revenues and relatively low net production costs. Where market forces permit, smaller recyclers can increase their economic viability by, for example, increasing tipping fees or charging higher product prices, or by positioning themselves to gain transportation cost advantages over competitors, acting as subcontractors, operating ad hoc supplementary businesses, or receiving government subsidies or recycling mandates.
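The capital and processing cost ranges above can be combined into a crude cash-flow check. The sketch below is an undiscounted one-year calculation, not the study's discounted cash flow model; the tipping fee, product price, and operating rate are invented inputs chosen only to fall near the ranges quoted in the text.

```python
# Illustrative cash-flow sketch for a recycled-aggregates operation.
# Capex/opex values are hypothetical but sit inside the ranges quoted in
# the text ($4-$8/t of annual capacity to build; $2.50-$6/t to process).

def simple_return(capacity_tpy, capex_per_t, opex_per_t,
                  tipping_fee_per_t, price_per_t, operating_rate=0.9):
    """Crude one-year return on total investment (no discounting, no tax)."""
    throughput = capacity_tpy * operating_rate            # tons processed
    revenue = throughput * (tipping_fee_per_t + price_per_t)
    cost = throughput * opex_per_t
    investment = capacity_tpy * capex_per_t
    return (revenue - cost) / investment                  # fraction per year

# A mid-size plant like the 253,000 t/yr case discussed in the study
# (the fee and price below are assumptions, not the study's inputs):
r = simple_return(253_000, capex_per_t=6.0, opex_per_t=4.0,
                  tipping_fee_per_t=3.0, price_per_t=4.5)
print(f"simple return on investment: {r:.1%}")
```

Because revenue counts both the tipping fee and the product price per ton, a recycler can clear a hurdle rate even when its sale price alone would not cover processing costs, which matches the text's point that tipping fee revenues are central to profitability.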
The economic benefits for a natural aggregates producer that begins recycling are substantial.

What are the incentives and deterrents for recycling?

The success of aggregates recycling varies by region and municipality. Recycling may reduce the amount of construction debris disposed of in landfills, may reduce the rate of natural resource depletion and environmental disturbance, and has the potential to provide energy and cost savings. Mobile, job-site recycling is becoming common for larger construction projects as a means of avoiding high transportation, disposal, and new material costs. Successful operations must have a favorable transportation and tipping fee structure when compared to alternatives. An abundant local supply and varied markets make recycling easy and financially attractive for the supplier and construction contractor, and can provide an increase in economic activity to the local community.

A recycling operation may not be the most appropriate alternative in all situations. Without proper site design and layout, equipment and operator efficiency, and creative marketing, many recycling operations could easily fail. An abundant supply of consistent feed material is essential. High capital requirements, inadequate public support, and quality problems or perceptions can also make it difficult for a recycler to compete effectively. Recyclers often have little control over product demand and pricing, which are influenced by the amount of natural aggregates locally available.

Where is the niche for recycled aggregates?

Natural aggregate producers benefit from their ability to sell a wide, higher valued range of aggregate products and will continue to dominate high-end product applications such as portland cement concrete and top-course asphalt. Presently the recycling rate for asphalt pavement is approximately 85 percent.
Recycled aggregates are, however, increasingly being used to supplement natural aggregates in road construction in a variety of applications; 44 States allow their use in road base applications, 15 States for backfill, 8 States for portland cement mix, and 7 States for top-course asphalt and selected other applications. Recycled aggregates are commonly used in lower quality product applications such as road base, where recycled aggregates meet or exceed State specifications. This material is presently often not considered acceptable for higher quality product applications such as high-strength concrete because of performance considerations and the perceptions of some decision makers.

Aggregate recycling rates are greatest in urban areas where replacement of infrastructure is occurring, natural aggregate resources are limited, disposal costs are high, or strict environmental regulations prevent disposal. Consumption is expected to grow as construction contractors recycle as a means of saving on transportation, disposal, and new material costs, and as aggregate producers include recycled material in order to prolong the life of their reserves and improve their product mix. In some locations, the amount of material available for recycling is insufficient to meet present industry demand. Although recycled aggregates are a supplement or substitute for natural aggregates in selected road applications, their use should be evaluated locally based upon relative cost, quality, and market factors. Policy makers often must weigh the potential benefits of recycling against competing land use, development issues, and societal pressures.

This study is intended to provide insights for resource decision making and to provide a framework for future studies on construction materials, a vital sector in the U.S. economy.
Further research is needed to improve the quality or expand the markets of recycled aggregates, but limits to locally available construction debris could restrict significant growth in the use of recycled aggregates in construction. Additional work is also needed to determine the future local supply of such material. Improved technology in combination with expanded education, specification changes, or legislative mandates could make the use of recycled aggregates a more attractive option and broaden product markets.

INTRODUCTION

Since the beginning of the twentieth century, our Nation's infrastructure has grown tremendously. Much of the core infrastructure, including roads, bridges, water systems, and sewers, was put in place during the first half of this century. The Interstate Highway System was constructed during the 1950's, 1960's, and 1970's. Much of this infrastructure has now deteriorated to a point that extensive repair or replacement is required. In areas of rapid population growth, new infrastructure is necessary to meet growing needs.

Construction materials in general, and aggregates in particular, are important components of infrastructure. Development and extraction of natural aggregate resources (primarily crushed stone and sand and gravel) are increasingly being constrained by urbanization, zoning regulations, increased costs, and environmental concerns, while use of recycled materials from roads and buildings is growing as a supplement to natural aggregates in road construction. Recycling represents one way to convert a waste product into a resource. It has the potential to (1) extend the life of natural resources by supplementing resource supply, (2) reduce environmental disturbance around construction sites, and (3) enhance sustainable development of our natural resources.

This study was undertaken to provide an understanding of the options for aggregates supply in construction.
Technical and economic information on the aggregates recycling industry is developed in order to analyze the factors influencing aggregates recycling, determine why recycling is occurring, and assess the effects of recycling on the natural aggregates industry. Although data on aggregates recycling are available, no concise data source exists for this important emerging industry. A discussion of the technological, social, and economic factors influencing this industry is intended to provide background information for informed decisions by those interacting with this industry (operators, suppliers, consumers, or regulators), and for those interested in developing sustainable U.S. natural resource and land-use planning and policies.

Related work currently being conducted by the U.S. Geological Survey (USGS) includes the Aggregates Automation conference, the Construction Debris Recycling conference, Construction Materials Flow studies, the Mid-Atlantic Geology and Infrastructure Case Study, Infrastructure project studies, and the Front Range Corridor Initiative. For information on any of these projects, access the World Wide Web (WWW) at /minerals/pubs/commodity/aggregates or direct inquiries to the Minerals Information Team, 983 National Center, U.S. Geological Survey, Reston, VA 20192; telephone 703-648-6941.

Information for this study was gathered from a variety of published sources, site visits, and personal contacts. Cost data were developed from representative industry data. Appreciation is conveyed to Russel Hawkins of Allied Recycled Aggregates, Larry Horwedel of Excel Recycling & Manufacturing, Inc., William Langer, USGS, and Gregory Norris of Sylvatica Inc. for their contributions of data and technical reviews of this paper.

Specific cost assumptions are documented. Costs and prices for the Denver, Colo., metropolitan area were used in some cases to represent the industry.
Although costs and prices in other regions of the country may differ from those assumed in this study, guidance is provided for drawing inferences using values different from those used here.

STRUCTURE OF THE AGGREGATES INDUSTRY

Aggregates are defined in this study as materials, either natural or manufactured, that are either crushed and combined with a binding agent to form bituminous or cement concrete, or treated alone to form products such as railroad ballast, filter beds, or fluxed material (Langer, 1988). The most common forms of concrete are prepared using portland cement and asphalt as binding agents. About 87 percent of portland cement concrete and about 95 percent of asphaltic concrete are composed of aggregates (Herrick, 1994).

Figure 1 illustrates a generalized version of the flow of aggregate materials in construction. Most natural aggregates are derived from crushed stone and sand and gravel, recovered from widespread, naturally occurring mineral deposits. Vertical arrows represent losses to the environment, which occur throughout the flow system. More than 2 billion metric tons (tons*) of crushed stone and sand and gravel were consumed as aggregates in the United States in 1996, much of which was used in road construction and maintenance (Tepordei, 1997a; Bolen, 1997). Recycled material used to produce construction aggregates for concrete comes from two primary sources: (1) road construction and maintenance debris, and (2) structural construction and demolition debris (buildings, bridges, and airport runways). Virtually all the asphalt for recycling comes from roads and parking lots.

* …in accordance with USGS practice. The term "tons" is used to refer to the metric ton unit of 2,205 pounds.
Some asphaltic concrete is milled and relaid in place as base material, but most recycled material goes through the process of recovery (demolition, breaking, and collecting), transportation (to a local collection point), processing (crushing, screening, separating, and stockpiling), and marketing (as sized products with multiple uses). Recycled aggregates currently account for less than 1 percent of the total demand for construction aggregates, but the amount recycled is thought to be increasing. Precise consumption statistics for the recycled materials are not available, but estimates for each source and market sector are shown in figure 2. A more detailed analysis of construction aggregates substitution is currently being conducted by the USGS.

[Figure 2. Consumption of aggregates by source and market sector.]

As shown in figure 2, most of the demand for aggregates is supplied by sand and gravel or crushed stone producers. Aggregates derived from crushed stone are consumed in portland cement concrete, road base, asphaltic concrete, and other applications, whereas almost half of the aggregates derived from sand and gravel is consumed in portland cement concrete. Currently, more than 50 percent of all cement concrete debris and about 20 percent of all asphalt pavement debris end up in landfills. An estimated 85 percent of all cement concrete debris that is recycled is used as road base, with minor amounts used in asphaltic concrete and fill material. About 90 percent of asphalt pavement debris that is recycled is reused to make asphaltic concrete.

As costs, regulations, land-use policies, and social acceptance of more sustainable natural resource practices have a greater impact on the natural aggregates industry, increased aggregates recycling in urban areas is likely to occur.
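The debris-flow percentages above can be turned into a simple mass balance. The sketch below splits a hypothetical one million tons of cement concrete debris using the shares quoted in the text; the input tonnage is an invented illustration, not a figure from the study.

```python
# Back-of-envelope destination split for cement concrete debris, using the
# shares quoted in the text: "more than 50 percent" landfilled, and 85% of
# the recycled remainder used as road base. Input tonnage is hypothetical.

def split_concrete(debris_t):
    """Approximate destinations of cement concrete debris, in tons."""
    landfilled = 0.50 * debris_t           # treated as exactly half here
    recycled = debris_t - landfilled
    return {
        "landfill": landfilled,
        "road base": 0.85 * recycled,      # 85% of recycled tonnage
        "other uses": 0.15 * recycled,     # asphaltic concrete, fill, etc.
    }

flows = split_concrete(1_000_000)
for dest, tons in flows.items():
    print(f"{dest:>10}: {tons:,.0f} t")
```

The point of the arithmetic is that even with an 85 percent road-base share, recycled concrete reaches the market only after the landfill split, so roughly 42 to 43 tons of every 100 tons of debris end up as road base under these assumptions.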
Producers of natural aggregates and independent entrepreneurs are beginning to consider the recycling of construction and demolition debris as one option for material use; it has the potential to (1) extend the life of natural resources by supplementing resource supply, (2) reduce environmental disturbance around construction sites, and (3) enhance sustainable development of our natural resources, and it can be profitable as well. In some urban areas, recycling of concrete and asphalt has reduced the flow of waste to landfill areas and reduced road construction and maintenance costs. In less urbanized areas, aggregates recycling is expensive or impractical on a large scale. Because of the high transportation cost associated with disposal of construction waste materials and the demand for this material in new construction, the aggregates recycling industry has developed locally or regionally, most often in urban areas. As each region has its own particular needs, a thorough understanding of the factors affecting the aggregates industry in a particular area is necessary to determine whether aggregates recycling is advantageous.

Because the aggregates industry is a high-volume, low-unit-value industry, a small variation in operating economics can have a significant impact on the profitability of an operation. Entry into this business often requires significant capital investment, particularly for small operators, and equipment suitable for processing natural aggregates may not be suitable for processing recycled aggregates.
The relative distance and associated cost of transporting material between construction, mining, processing, and disposal (landfill) sites influence production site location.

AGGREGATES PROCESSING TECHNOLOGY

The technology required for raw material acquisition and processing of aggregates from both natural and recycled sources is summarized in table 1, which focuses on technical factors that provide both incentives and deterrents to aggregates recycling. A detailed description of processing technology and the technical factors influencing equipment selection is reported in Appendix 2 for the production of aggregates from crushed stone, sand and gravel, recycled aggregates from concrete, and recycled aggregates from reclaimed asphalt pavement.

Natural aggregates: About 2 billion tons of sand and gravel and crushed stone were reported to have been consumed as aggregates in the United States in 1996 (Tepordei, 1997a).
Recycled aggregates: Less than 80 million tons of recycled material were estimated to have been consumed in construction applications in the United States in 1996 (T. D. Kelly, oral commun., 1997).

Natural: Aggregates are derived from a variety of source rocks and mined primarily by surface methods.
Recycled: Aggregates are derived from debris of road and building construction projects.

Natural: Mining requires environmental monitoring and reclamation. Costs for exploration, permitting, overburden removal, site preparation, and both ongoing and final site reclamation must be considered.
Recycled: Recycling requires limited monitoring and reclamation.
Costs for exploration, mining, or stripping are not incurred, but costs for ongoing reclamation, site cleanup, and dust and noise reduction may be incurred.

Natural: Quality depends primarily upon the physical and chemical properties of the source deposit.
Recycled: Quality varies significantly due to large variation in the type and impurities of debris sources.

Natural: Must conform to Federal, State, or local technical specifications for each product application.
Recycled: Must conform to Federal, State, or local technical specifications for each product application.

Natural: Currently used in road base, concrete, and asphalt applications in all States (see Appendix 1).
Recycled: Forty-four States allow its use as road base; other permissible applications vary by State (see Appendix 1).

Natural: Processing primarily consists of crushing, sizing, and blending.
Recycled: Processing is similar to natural aggregates, but increased wear of equipment may result because of the variable size and angularity of feed and the presence of deleterious material.

Natural: Location is dependent upon the resource. Equipment selection depends upon numerous technical, economic, and market factors. Transportation distances and costs among resources, processing facilities, and markets affect end uses.
Recycled: Location is determined by feed sources and markets. Location, equipment selection, and plant layout affect operational economics. Transportation distances and costs affect both feed supply and markets.

Natural: Mine and plant layout in part determine the efficiency of an operation.
Recycled: The recycler must be able to adjust material feed and output to meet changing product requirements.

Natural: Processing generally occurs at the mine site, often outside city limits. The resource is suitable for multiple products.
Recycled: Processing often occurs at a centrally located site in an urban area using mobile equipment.
Product mix is often limited.

Natural: Mobile, on-site plants may be used for large projects; time is required for takedown, transport, and setup.
Recycled: Mobile plants commonly relocate 4 to 20 times each year, affecting productivity; time is required for takedown, transport, and setup.

Natural: Products are marketed locally or regionally, mostly in urban areas. Higher valued products may have a larger marketing area.
Recycled: Products are marketed locally in urban areas. A lower valued product mix may constrain markets.

Table 1. Significant technological aspects of natural and recycled aggregates.

Figure 3 illustrates the typical steps required to process recycled material. Technology primarily involves crushing, sizing, and blending to provide aggregates suitable for a variety of applications. Concrete and asphalt recycling plants can be used to process natural sand and gravel, but sand and gravel plants usually cannot process recycled material efficiently. Construction concrete often contains metal and waste materials that must be detected and removed at the start of processing by manual picking or magnetic separation. Feed for recycling is not uniform in size or composition, so equipment must be capable of handling variations in feed materials.

TECHNICAL FACTORS AFFECTING AGGREGATES RECYCLING

Based upon data from reference documents, personal communications, and site visits, the following technical factors were determined to affect the profitability of an aggregates recycling operation. Not all factors apply in every case, but they have been found to apply in many.

Product Sizes: Screen product-size distributions determine the amount of each product available for sale. Regional supply and demand considerations often dictate local prices for various size products. Because different products have different values in any given market, the operation that is able to market high-value size distributions is likely to improve its cash flow position.
Screen configurations can be adjusted to reflect changing market conditions for different size products. Experienced operators have the ability to maximize production of high-value products and to respond to changes in product requirements.

Operational Design: In order to maximize efficiency and profitability, careful consideration must be given to operational layout and design, production capacity, and equipment sizing. Although economy-of-scale efficiencies benefit larger operations, the higher capital cost of equipment and the limited availability of feed material may limit the size of an operation. Equipment configuration also affects product mix (which products are produced, and in what combinations) and plant efficiency. Equipment selection is influenced by the decision on whether to be a fixed or mobile recycler. Mobile plants must meet roadway restrictions to be allowed to move from site to site. Fixed-site equipment can be somewhat larger and perhaps more durable, trading the mobile unit's reduced transportation costs for lower unit production costs. Busse (1993, p. 52) explained, "The smaller processing plants are a great concept. They work well for asphalt recycling. But for concrete, the preparation cost is enormous when using small crushers because the material needs to be broken down tremendously. If only flat work or roadwork is being processed, perhaps it can be done. If bridges, parapets, demolition debris, or building columns are being processed, the small plants won't work. The wear cost is too high."
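The Product Sizes factor above can be illustrated numerically: for a fixed throughput, shifting the screened size distribution toward higher-value products raises revenue. The product names, prices, and splits below are invented for illustration; the study does not publish this breakdown.

```python
# Revenue comparison for two hypothetical screen configurations. The $/ton
# prices and size splits are assumptions; the point is only that marketing
# a higher-value size distribution improves cash flow, as the text states.

PRICES = {"coarse base": 4.00, "fine base": 3.25, "fill": 1.50}  # $/t, assumed

def revenue(throughput_t, size_split):
    """size_split maps product -> fraction of throughput (fractions sum to 1)."""
    assert abs(sum(size_split.values()) - 1.0) < 1e-9
    return sum(throughput_t * frac * PRICES[p] for p, frac in size_split.items())

default = {"coarse base": 0.40, "fine base": 0.40, "fill": 0.20}
retuned = {"coarse base": 0.55, "fine base": 0.35, "fill": 0.10}  # re-screened

print(f"default screens: ${revenue(250_000, default):,.0f}")
print(f"retuned screens: ${revenue(250_000, retuned):,.0f}")
```

With these assumed prices, the re-screened split earns roughly 9 percent more from the same tonnage, which is why the text stresses adjustable screen configurations and experienced operators.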

Ship Repair Engineering Rate Norms


Ship Repair Price List, 2006 Edition
Compiled by: China Ship Engineering Industry Association
Publisher: Communications Science and Technology Press, 2006
Format: two volumes, 16mo
List price: ¥628.00 (discounted price: ¥380)

Contents of the Ship Repair Price List:

Part 1: Ministry of Communications of the People's Republic of China, 1993 ship repair prices
- Docking work: entering/leaving dry dock, lay days in dock, repositioning in dock, block arrangements for special hull forms, inclining tests; entering/leaving floating dock, lay days, alignment in dock, block arrangements for special hull forms, inclining tests; hauling out and launching on slipways and building berths, lay days, fitting temporary shores, ...
- Steel hull and superstructure work: removal and refitting of flat plating; renewal of flat plating; renewal of flanged plating; renewal of corrugated bulkhead plating, ...
- Wooden furniture work: new fabrication of hold ceiling boards, bilge-channel covers, compartment partitions, and fenders; new fabrication of hold ceiling boards, bilge-channel covers, and brackets; new fabrication of wooden hatch boards, gangplanks, staging planks, cargo boards, and tapered shims; flooring fabrication, replacement, and removal, ...
- Derusting and painting work: cutting, weld build-up, and repainting of draft marks and load-line rings, leveling of original characters; filling draft and freeboard outline characters with white paint and lettering ship name and port of registry; derusting and painting of the outer hull, ...
- Deck outfitting work: removal/refitting, pull testing, and derusting/painting of anchors and anchor chains; removal/refitting, pull and drop testing, weighing, and painting of spare anchors; removal/refitting of anchor davits and chain stoppers, ...
- Piping and sheet-metal work: removal and installation of system piping and flanges; pipe bending, hydraulic testing of pipes and systems, clearing and cleaning; fabrication, fitting, and replacement of system piping
- Steam auxiliary machinery work: disassembly and repair of steam windlasses, capstans, mooring winches, and cargo winches; complete removal/refitting of steam windlasses, mooring winches, and cargo winches; disassembly and repair of steam-driven water and air pumps, ...
- Diesel engine and auxiliary machinery work: diesel engine repair; removal/refitting and repair of diesel engines, clutches, and reduction gearboxes; removal/refitting, repair, and chemical cleaning of tube-type water (oil) coolers, ...
- Boiler work: boiler hydraulic tests; boiler removal, repositioning, transport to the yard, and reinstallation; boiler weld build-up and fusion welding, ...
- Electrical work: general notes on motor repair; preventive maintenance of DC machines; minor repair of DC machines, ...
- Chapter 11, Work-vessel repair: removal/refitting and repair of dredge pumps; removal/refitting and repair of high-pressure flushing centrifugal pumps; removal/refitting and repair of inverted hydraulic cylinders for dredge bottom doors, ...
- Chapter 12, Other work: fabrication of springs; fabrication, removal/refitting, and fitting of brake wheels and flexible/rigid couplings; fabrication, fitting, and removal/refitting of forged steel and cast iron spur gears

Part 2: China State Shipbuilding Corporation (CSSC), 1992 domestic civil ship repair prices: service items; docking work; steel work; cargo-handling gear; painting work; anchor and anchor chain work; engine work; electrical work. CSSC '92 Yellow Book supplement (second draft for comment): service items; work items (supplementing the Yellow Book); auxiliary machinery work

Part 3: China Ship Engineering Industry Association, 2001 ship repair prices: service fees; docking work; hull work; mechanical work; electrical work; refrigeration work; renewal of fish-processing conveyors

Appendix 1: Ship repair rules
Appendix 2: Ship repair directory

Amos Fiat


Haim Kaplan
School of Computer Science Tel Aviv University Tel Aviv 69978, Israel
edith@
fiat@cs.tau.ac.il
haimk@cs.tau.ac.il
The success of a P2P file-sharing network depends highly on the scalability and versatility of its search mechanism. Two particularly desirable search features are scope (the ability to find infrequent items) and support for partial-match queries (queries that contain typos or include a subset of keywords). While centralized-index architectures (such as Napster) can support both of these features, existing decentralized architectures seem to support at most one: prevailing protocols (such as Gnutella and FastTrack) support partial-match queries, but since search is unrelated to the query, they have limited scope. Distributed Hash Tables (such as CAN and CHORD) constitute another class of P2P architectures promoted by the research community. DHTs couple index location with the item's hash value and are able to provide scope, but cannot effectively support partial-match queries; another hurdle in DHT deployment is their tight control over the overlay structure and data placement, which makes them more sensitive to failures.

Associative overlays are a new class of decentralized P2P architectures. They are designed as a collection of unstructured P2P networks (based on popular architectures such as Gnutella or FastTrack), and the design retains many of their appealing properties, including support for partial-match queries and relative resilience to peer failures. Yet the search process is orders of magnitude more effective in locating rare items. Our design exploits associations inherent in human selections to steer the search process to peers that are more likely to have an answer to the query.

Peer-to-peer (P2P) networks have become, in a short period of time, one of the fastest growing and most popular Internet applications. As for any heavily used large distributed source of data, the effectiveness of a P2P network is largely a function of the versatility and scalability of its search mechanism.
Peer-to-peer networks came to fame with the advent of Napster [23], a centralized architecture in which the shared items of all peers are indexed in a single location. Queries were sent to the Napster Web site and results were returned after a local search of the central index; subsequent downloads were performed directly from peers. The legal issues that led to Napster's demise exposed all centralized architectures to a similar fate. Internet users and the research community subsequently turned to decentralized P2P architectures, in which the search index and query processing, as well as the downloads, are distributed among peers.

Network Visualisation With 3D Metaphors

Thesis submitted in partial fulfilment of the requirements of the degree Bachelor of Science (Honours) of Rhodes University

By Melekam Asrat Tsegaye
Computer Science Department
November 2001

Abstract

Large amounts of data flow over today's networks. Tracking this data, processing it and visualising it will enable the identification of problem areas and better usage of network devices. Currently most network analysis tools present their data using tables populated with text or, at best, 2D graphs. A better alternative is to use 3D metaphors for visualising network data. This paper investigates the use of a number of 3D metaphors for visualising network data in a VR environment.

Acknowledgements

Thanks to Prof. Shaun Bangay, my supervisor, for his guidance throughout the project. A special thanks to Guy Antony Halse for proof reading drafts of this document and for his input during the course of the year.

Contents

1 Introduction
1.1 Overview
1.2 The Need for 3D Visualisation of Networks
1.3 Our Approach
1.4 The Test Visualisation System
1.5 Data Sources
1.6 3D Visualisation Methods Investigated
1.7 Issues and Problems Involved
2 3D Network Visualisation, a Theoretical Background
2.1 Metaphors
2.1.1 Source and Target Domains
2.1.2 Magic Features
2.1.3 Mismatches
2.1.4 The Desktop Metaphor
2.1.5 From the Desktop to Virtual Reality
2.1.5.1 Virtual Reality
2.2 3D Visualisation
2.2.1 Problems with 3D Visualisations
2.2.2 Colour Selection for Visualisation
2.2.2.1 Weaknesses of the RGB Colour Model
2.2.2.2 The HSV Colour Model
2.2.2.3 Selecting Colours
2.2.3 3D Visualisation Software
2.3 Network Monitoring
2.3.1 SNMP
2.3.1.1 Different Versions of SNMP
2.3.1.2 UCD-SNMP
2.3.2 Web Server and Proxy Log File Monitoring
2.3.3 Packet Monitoring
2.3.4 The Round Robin Tool (RRDtool)
2.3.5 Greatdane
2.3.5.1 Implementation
2.3.5.1 Components
2.3.5.2 Threading
2.4 Visualisation Research
3.0 Designing a 3D Network Visualisation System
3.1 Network Visualisation Methods
3.1.1 Data Management
3.1.1.1 Long Term Network Data
3.1.1.2 Live Network Data
3.1.1.3 Log File Data
3.1.2 Visualisation Strategies
3.1.2.1 Block View of Interface Data
3.1.2.2 3D Graph View of Interface Data
3.1.2.3 Polar View of Interface Data
3.1.2.4 Proxy Server Log File View
3.1.2.5 Web Server Log File Visualisation with Particles
3.1.2.6 Bars and Spheres
3.1.2.7 More Host Status Visualisation
3.1.2.8 Animated Packets
3.1.2.9 Chernoff Faces
3.1.2.10 Visualising Data with 3D Character Faces
3.2 The Test Visualisation System
3.2.1 The VR System
3.2.2 Components of a Visualisation Module
3.2.2.1 Module Identification
3.2.2.2 Registration
3.2.2.3 Status
3.2.2.3 Data Processing
3.2.2.4 Rendering
3.2.3 The Module Manager
3.2.4 Miscellaneous Services
3.2.5 Reasons for Designing a Modular System
4.0 Creating the Visualisation System
4.1 Data Collection
4.1.1 The SNMP Interface
4.1.2 Calculating Rates
4.1.3 Managing Interface Data in a Round Robin Database
4.1.3.1 Collecting Interface Data from Many Hosts
4.1.3.2 Formatting RRD Database Data for Visualising
4.1.4 Processing Log File Data
4.1.4.1 Proxy Server Log Files
4.1.4.2 Web Server Log Files
4.1.5 Collecting Host Resource Usage Data
4.1.5.1 General Host Resource Data
4.1.5.2 Host Network Status Data
4.1.5.3 Storage Usage Data
4.1.6 Packet Grabbing Interface
4.2 Visualising the Collected Data
4.2.1 Colour Map Application
4.2.2 Blocks
4.2.3 3D Graph View
4.2.4 Polar View
4.2.5 Log File Data Views
4.2.5.1 Visualising Data with Particles
4.2.5.2 Proxy Log File Summary View
4.2.6 Bars and Spheres
4.2.7 Animated Pyramids
4.2.8 Visualisation with Facial Expressions
4.2.9 Visualisation with Other 3D Objects
4.3 The Visualisation System
4.3.1 Interfacing with the VR System
4.3.1.1 User Navigation
4.3.2 A Visualisation Module
4.3.2.1 Visualisation Module Interface (VMI)
4.3.3 The Module Manager
4.3.3.1 Dynamic Object Loading
4.3.3.2 Dynamic Loading of Visualisation Modules
4.3.3.3 The Module Control GUI (mcGUI)
4.3.4 Common System Objects
4.3.4.1 The Texture Manager
4.3.4.2 Text to Speech (TTS)
4.3.4.3 MotionControl
4.3.4.3.1 BezierCoordinate
4.3.4.3.2 FallingBody
4.3.4.3.3 MotionControl use Example
4.3.4.4 Camera Control
4.3.4.5 Utility Objects
4.3.4.6 Configuration Management
5.0 Results
5.1 3D Graph View
5.2 Polar Graph View
5.3 Interface Data Visualisation
5.4 System Resource Visualisation with Blocks and Cubes
5.5 Web Server Log File Visualisation with Particles
5.6 Storage Device Usage Visualisation
5.7 IP State Visualisation Using a Chernoff Face
5.8 ICMP State Visualisation with a Water Tank
5.9 Packet Visualisation
5.10 Presenting a 3D Proxy Log File Summary
5.11 Interface Device Visualisation
5.12 The Visualisation System
6 Conclusion
7 Future Research
References

1 Introduction

1.1 Overview

• Chapter 1, this chapter, is a summary of the paper.
• Chapter 2 presents a theoretical background on 3D network visualisation.
Here we discuss the use of metaphors, virtual reality, colour selection for visualisation, current research, 3D visualisation software, and the standards and tools we have chosen for monitoring networks.
• Chapter 3 focuses on network data collection methods, visualisation approaches and the design of a 3D visualising system.
• Chapter 4 describes the implementation of the components that collect data and visualise it, and of the visualisation system itself.
• Chapter 5 presents results from network data visualisations performed using the various visualisation approaches discussed in chapter 3.
• Chapter 6 presents our conclusions.
• Chapter 7 lists future 3D visualisation research areas.

1.2 The Need for 3D Visualisation of Networks

A large amount of data flows through our networks. If this data could be seen in visual form, it would help greatly in understanding how our networks are affected by it. A growing number of applications attempt to do this. The large majority offer two-dimensional representations of various network data; others offer three-dimensional views. Although the addition of the third dimension helps, it still does not allow for easy analysis of data with a large number of independent variables, and it does not offer users a 3D environment in which they can move around and examine their data closely.

1.3 Our Approach

This paper investigates the use of 3D objects, their properties and characteristics, as metaphors for visualising network data.
By using 3D objects already familiar to human beings, we want to make looking at network data and understanding it a much simpler process than it is at the moment. In this paper we focus on:
• network data collection;
• the design and implementation of visualisations of this network data using 3D metaphors and traditional 3D graphs;
• the design and implementation of a modular visualisation system.

1.4 The Test Visualisation System

A modular system was designed and implemented to allow easy exploration and creation of visualisation modules. All visualisation modules implement a uniform interface that allows them to be recognised by the visualisation control system. This system is built on top of a Virtual Reality system, code-named Greatdane, developed by the Rhodes University Virtual Reality Special Interest Group (VRSIG). This enables the output from visualisation modules to be observed through a VR output device such as a head-mounted display (HMD).

1.5 Data Sources

The data for this investigation consists of host resource usage data such as memory, CPU and disk space usage; log file data from web and proxy servers; packets grabbed live off the wire; and network traffic data flowing through network interfaces. Each log file was processed internally by a specific module written for analysing and visualising it. SNMP was used to monitor network devices such as servers and switches, and the data collected was stored in a round robin database. Packet capturing was done using Libpcap, with visualisation based on packet header information.

1.6 3D Visualisation Methods Investigated

Using the test visualisation system, visualisation modules were developed that feed off the data collected. We looked at representation of system load using spheres and bars, a web server log file view using a particle system, wire and polar views of RRD data, and visualisation of data using facial expressions.
We also animated packets as we grabbed them off the wire and classified them. A set of system variables that affect system load was identified, and 3D visual representations based on these were created.

1.7 Issues and Problems Involved

The VR system used is constructed using the Java programming language. Parts of the system are written in C/C++, which makes the test system platform dependent. SNMP version one was used throughout, since this is the most widely implemented version of the protocol on SNMP agents. The use of existing log file analysers was considered, but most produced summarised output in HTML format and were not very helpful except for checking the correctness of the analysis done by the test system. Multithreading was introduced to Greatdane, which is a single-threaded system, using Java's default synchronisation.

2 3D Network Visualisation, a Theoretical Background

2.1 Metaphors

A metaphor is a mapping of knowledge about a familiar domain, in terms of elements and their relationships with each other, to elements and their relationships in an unfamiliar domain [Preece et al, 1994]. The aim is that familiarity with one domain will enable easier understanding of the elements and their relationships in the unfamiliar one. For example, suppose an individual comes in contact with a PC for the first time and does not know what it is. Perhaps the individual recognises the monitor and thinks it is a TV screen, and hears the PC's sound card producing a sound similar to that produced by a fire truck. Based on the individual's prior knowledge, the most likely interpretation of the event would be that the situation is not quite right and it would be best to vacate the building. The common element between the familiar domain (the fire truck) and the unfamiliar domain (the PC) is the siren.

Table 2.1 Examples of applications and associated metaphors
[Preece et al, 1994]

Application           Metaphor              Familiar knowledge
OS GUI                Desktop               Office tasks, file management
Word processor        Typewriter            Document processing with a typewriter
Spreadsheet           Ledger sheet          Experience working with an accounting ledger/Math sheet
Email Client/Server   Postman/Post office   Use of postal services

2.1.1 Source and Target Domains

Gentner et al [Gentner et al, 1996] discuss the familiar domain as the source domain and the unfamiliar domain as the target domain (Figure 2.1) and highlight three problems with metaphors. As an example, take a book from the real world and a software document.

1. The target domain has features not in the source domain, e.g. we can conveniently store the software version of a large book as a file on a floppy disk or email it to our friends.
2. The source domain has features not in the target domain, e.g. we can carry the book around and read it at leisure anywhere, without the need to switch on a computer and fire up a document viewer application.
3. Some features exist in both domains but work very differently, e.g.:
a) We can flip quickly through all 1000 pages of a book in the real world. Using a document viewer application we would have to scroll with the mouse or use the page up and down keys, and the process would be slow.
b) We can use the find feature of a document viewer to locate any block of text very quickly. In the real world we would check the index, and if the word was not there, a tedious search through the entire text would be required.

Figure 2.1 shows the relationship between source and target domains. [Ellis et al, 2000]

2.1.2 Magic Features

If tasks can be achieved in the target domain of a metaphor that would otherwise be impossible in the source domain, while adding convenience at the same time, then the metaphor has a magic feature [Dieberger, 1995]. The email system used on the Internet is an example.
It enables instantaneous sending and receipt of messages across regions of the world spanning thousands of kilometres; this feature is attributable to the properties of electricity in the underlying physical implementation of the network hardware. In the source domain, by contrast, instantaneous mail delivery does not happen.

2.1.3 Mismatches

The user's familiarity with elements and their interactions in the source domain of a metaphor will not always match the behaviour of elements in the target domain. When this happens, a metaphorical mismatch has occurred [Ellis et al, 2000]. This often happens on computer systems. When a computer user deletes a confidential file from his computer's hard drive, he expects it to have been erased. In reality that does not happen: the deleted file is recoverable, and when the user learns of this fact he begins to distrust the metaphor. Ellis et al point out that the effect of metaphoric mismatches is not always negative and may lead users to better understand their system. In the above example the user might enquire further and learn of tools that enable him to erase his files completely.

2.1.4 The Desktop Metaphor

The desktop metaphor is the most widely used metaphor on computer systems that have graphical user interfaces. Desktops, icons, windows, scrollbars and folders are some of the objects used. Users store their useful data in files, and those files are placed in folders, just as they would be in an office cabinet in the real world. There are obvious differences from real-world counterparts, such as space limitations: a computer system provides a large amount of "virtual" storage space occupying a small area, whereas considerably large areas would be required to store the same amount of data in an office. Some desktops have an icon representing a trashcan onto which unwanted data can be dropped, again simulating the real-world process of throwing rubbish into a trashcan.
Apple's Mac OS extended this metaphor, making its desktop such that users could drag icons representing their floppy drives onto the trashcan and the system would then eject the disk from its drive. This extension was criticised by Hayes [Hayes, 2000], as it led to confusion amongst users, who misinterpreted the metaphor as meaning "delete the contents of this disk by dragging it onto the trashcan".

2.1.5 From the Desktop to Virtual Reality

The problem with systems based on the desktop metaphor is that they attempt to represent three-dimensional objects from the real world as two-dimensional objects on a flat 2D screen. This problem is compounded by the fact that many computer systems come equipped with a keyboard and mouse, both 2D input devices, which encourage application developers to continue to develop 2D applications. Nielsen [Nielsen, 1998] discusses how difficult it is to control 3D space with interaction techniques such as scrolling. Representing real-world objects on screens whose resolution does not allow sufficient detail for the objects being rendered also does not help. This could all change with the availability of powerful CPUs and GPUs on the market, such as the Pentium 4 and GeForce 3.

2.1.5.1 Virtual Reality

Virtual reality offers presence simulation to users [Gobbetti et al, 1998] and is much more than just a 3D interface. Ideally it immerses users in an environment that provides sensory feedback allowing visual, depth, sound, touch, position, force and olfactory perception. The advantage of virtual reality is that it enables users to interact closely with objects in a virtual environment in the same way as they would in the real world.
For example, scientists studying DNA strands can manipulate virtual representations of DNA molecules, surgeons can operate remotely on patients, and gamers can play their games in worlds that simulate environments close to reality. In order to have a virtual world integrated with the physical world, there need to be improvements in hardware such as displays and input devices, integration of VR systems with existing systems such as AI, voice and DBMSs, and better ways to visualise data along with effective abstractions for visualisation [Hibbard, 1999].

2.2 3D Visualisation

The volume of data that flows through computer systems over a period of time is huge. Traditional approaches to tracking and analysing the flow of this data, such as log files and databases that yield textual results, are not sufficient. 3D visualisation is the process of constructing 3D graphical representations of data to enable its analysis and manipulation in 3D space, and it is well suited to this task. By selecting a suitable visualisation metaphor, a 3D representation can be constructed that gives meaning to the data.

2.2.1 Problems with 3D Visualisations

Although 3D visualisations can be of great help in looking at data, they may not always present an accurate view of it [Zeleznik, 1998], owing to the complexities involved in rendering a representation for every piece of data being processed. Often it is necessary to summarise the dataset, trading detail for a summary of the dataset's properties. Depending on the application area, losing detail may or may not be acceptable. 3D visualisations use the three axes x, y, z to display datasets that have three variables. For datasets with a larger number of variables, colour and the shape, characteristics and behaviour of objects can be used. If colour and object characteristics are not selected carefully, they can confuse users. The colour map shown in Figure 2.2.1a is one of the most frequently used in visualisations [Rogowitz et al, 2001].
The problem with it is that it produces discrete transitions on a diagram (Figure 2.2.1a) that is composed of continuously varying data.

Figure 2.2.1a The colours of the rainbow used as a colour map for a 2D visualisation of fluid density from a simulation of the noise produced by a jet aircraft engine. Figure 2.2.1b A different colour map, highlighting values of interest in the visualisation, constructed using a rule-based colour map selection tool developed by Bergman et al [Bergman et al, 2001].

2.2.2 Colour Selection for Visualisation

Colour is one of the most important attributes used in 3D visualisations, and proper selection of colour is essential for successfully displaying the properties of a dataset. Colour is the perceptual result of light in the visible region of the spectrum, with wavelengths in the region of 400 nm to 700 nm, incident upon the retina [Poynton, 1997]. Computer hardware uses colour from the RGB colour space. This colour space is represented as a cube in 3D space with axes x, y, z ranging from 0 to 1 (Figure 2.2.2).

2.2.2.1 Weaknesses of the RGB Colour Model

Using this colour space for visualisation is not recommended for the following reasons:
1. It is device dependent: different monitors do not produce equivalent colours for the same RGB values.
2. It is not a perceptually uniform colour space, i.e. there is no relation between the distance between any two values on the RGB cube (Figure 2.2.2.1) and how different the two colours appear to an observer [Watt, 1989]. It is thus necessary to use a perceptual colour model.
3. The RGB cube does not describe all colours perceivable by humans.

Figure 2.2.2.1 The RGB colour space forms a unit cube. Figure 2.2.2.2a The HSV colour model is described by a cone.

2.2.2.2 The HSV Colour Model

We use the hue-saturation-value (HSV) model, which is controlled by perceptually based variables, to select colours. The H value determines colour distinctions, e.g. red, blue, yellow. The S value determines the intensity of the colour.
The V value determines the lightness of the colour [Watt, 1989]. Figure 2.2.2.2b shows one possible user interface implementation. In reality the HSV space is represented by the cone shown in Figure 2.2.2.2a.

Figure 2.2.2.2b A user interface for the HSV colour model (GTK).

Varying the hue will select colours around the face of the cone; hence the values of hue range from 0 to 360. Varying the saturation will select colours from the centre of the cone towards the outside of its circular face, the direction depending on the hue value. Varying the value will select colours on the line from the apex of the cone up to its face, the radius depending on the saturation value.

2.2.2.3 Selecting Colours

Appropriate colours for use in a visualisation need to be selected and a colour map constructed so that they can be used by the visualisation system at runtime. Bergman et al [Bergman et al, 2001] at the IBM Thomas J. Watson Research Center have constructed a taxonomy of colour map construction that depends on the data types to be visualised, the representation task, and users' perception of colour (Table 2.2.2.3). They have experimental evidence (Figure 2.2.2.3) showing that the luminance mechanism in human vision is tuned for data with high spatial frequency and the hue mechanism for data with low spatial frequency.

Table 2.2.2.3 Taxonomy of colour map selection

Data type        Spatial frequency   Highlighting
Ratio/interval   High                Large colour range for highlighted features
Ratio/interval   Low                 Small colour range for highlighted features
Ordinal          High                Increase luminance of highlighted area
Ordinal          Low                 Increase saturation of highlighted area
Nominal          Any                 Increase luminance or saturation of highlighted area

Figure 2.2.2.3 A plot of human vision sensitivity against varying spatial frequency for visual representations of data that have either more hue or luminance [Bergman et al, 2001].
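The HSV parameterisation described above can be sketched with Python's standard-library colorsys module (an illustrative sketch only; the thesis's own system is Java-based). Note that colorsys normalises hue to the range 0–1 rather than the 0–360 degrees used in the text, so the helper below converts degrees first.

```python
import colorsys

def hsv_to_rgb(h_deg, s, v):
    """Convert an HSV triple (hue in degrees, s and v in [0, 1])
    to an RGB triple in [0, 1], as a colour-map helper might."""
    return colorsys.hsv_to_rgb((h_deg % 360) / 360.0, s, v)

# Pure red sits at hue 0 with full saturation and value.
print(hsv_to_rgb(0, 1.0, 1.0))   # (1.0, 0.0, 0.0)
# Reducing V darkens the colour without changing its hue,
# matching the "value selects along the cone's axis" description.
print(hsv_to_rgb(0, 1.0, 0.5))   # (0.5, 0.0, 0.0)
```

A colour map for a visualisation can then be built by sampling such triples along a perceptually meaningful path through the cone, rather than interpolating raw RGB values.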
2.2.3 3D Visualisation Software

Commercial applications such as 3dv8 (Figure 2.2.3a) take in 2D data and create 3D representations of it [3dv8, 2001]. 3dv8 accepts 2D data from any problem domain arranged as tab-delimited lines of text. For example, data from a spreadsheet can be viewed in 3D using one of a range of available 3D views, with 360 degrees of motion around the visual representation. The 3D presentation makes the data very clear to look at and analyse, compared to the columnar data available in spreadsheet cells or 2D bar graphs.

Figure 2.2.3a 3dv8's Cone View representation of employee data. Figure 2.2.3b 3dv8's Planet View representation of employee data.

Another powerful data visualisation tool is OpenDX. It is an open source visualisation application based on IBM's commercial data visualisation application, Data Explorer. OpenDX contains numerous modules suited to different types of visualisations [OpenDX, 2001]. Figure 2.2.3c shows an example of how one possible visual program is set up. Various categories of modules are available, and on the right of the diagram a visual program is constructed from selected modules. The example shows:
• The file selector module, which pops up a file selection dialog for the user and obtains the location of a data file. Its output is connected to the input of an import module.
• The import module, which knows how to import the data and process it. Its output is connected to the input of an autocolour module.
• The autocolour module, which assigns colours to the values imported by the import module. Its output is connected to the input of a rubbersheet module.
• The rubbersheet module, which warps 2D data to create the 3D representation shown in Figure 2.2.3d.
Its output is connected to the input of the image module.
• The image module, which renders a 2D image that can be visually examined (Figure 2.2.3d).

Figure 2.2.3c OpenDX's visual editor. Figure 2.2.3d A 2D image warped to form a 3D representation based on the values of 2D data; the surface is produced by the rubbersheet module and the colouring by the autocolour module.

OpenDX's architecture makes visualising data very easy for users, by allowing them to manipulate modules as shown in Figure 2.2.3c.

2.3 Network Monitoring

In section 2.2 we mentioned the need for visualising the data that flows to and from network devices in 3D. To accomplish this task we investigated commonly available standards and tools for monitoring networks. This section presents a summary of these.

2.3.1 SNMP

The Simple Network Management Protocol (SNMP) is a widely used network management protocol. A manager operates a console from which he controls various SNMP-enabled network devices (SNMP agents), e.g. PCs, switches and routers. A Management Information Base (MIB), a virtual database, is used to store state values of the various devices of the SNMP agents (RFC 1213). The MIB is organised like an upside-down tree. The leaf nodes of the tree contain the object instances whose values are read from and written to by a manager. This scheme is known as the Structure of Management Information (SMI) and is defined in RFC 1155 [Stevens, 1994].

Figure 2.3.1a Full OID for the sysDescr variable (dod.internet.mgmt.mib-2.system.sysDescr.0), and retrieval of its value using UCD SNMP's snmpget application:
> snmpget rucus.ru.ac.za public 1.1.0
> system.sysDescr.0 = FreeBSD rucus.ru.ac.za 4.4-RELEASE FreeBSD 4.4-RELEASE #0 i386

A leaf node is identified by the sequence of numbers used to traverse the tree. The SNMP v1 protocol (RFC 1157) defines five operations that can be used for interaction with an SNMP agent. These are Get, Set, GetNext, GetResponse and Trap.
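The numeric tree traversal just described can be sketched in a few lines (an illustrative toy, not part of UCD SNMP): a nested dictionary stands in for a small fragment of MIB-II, and a dotted OID string is resolved by walking the tree one sub-identifier at a time.

```python
# A toy MIB fragment: interior nodes map a sub-identifier to a child,
# leaves hold a symbolic object name. The path 1.3.6.1.2.1.1 is the
# standard MIB-II "system" group; real MIBs are defined in ASN.1 modules.
MIB = {1: {3: {6: {1: {2: {1: {1: {1: "sysDescr", 3: "sysUpTime"}}}}}}}}

def resolve(oid):
    """Walk the MIB tree along the dotted OID and return the leaf's
    symbolic name; the trailing sub-identifier (.0) selects the
    scalar instance, so traversal stops once a leaf is reached."""
    node = MIB
    for part in map(int, oid.split(".")):
        if not isinstance(node, dict):
            break  # reached a leaf; remaining parts index the instance
        node = node[part]
    return node

print(resolve("1.3.6.1.2.1.1.1.0"))  # sysDescr
```

A Get request carries exactly such an OID; the agent performs the equivalent traversal of its own MIB and returns the instance value in a GetResponse PDU.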
For example, the value of the system description variable (sysDescr) in Figure 2.3.1a can be accessed as Get 1.3.6.1.2.1.1.1.0. The SNMP management station and agents exchange Protocol Data Units (PDUs), which are defined using Abstract Syntax Notation One (ASN.1).

2.3.1.1 Different Versions of SNMP

The initial SNMP standard, SNMPv1, defined simple operations such as get and set. There was no way to get multiple object values with a single request: to retrieve 100 values from an agent we would have to send 100 get requests. SNMP PDUs were exchanged by agents and managers using a very weak form of authentication, based on the hostname of the SNMP entity and a string value referred to as a community string. Before replying to a request, an agent would check that the community string was valid and that the hostname of the manager was from a valid network block. The community string is exchanged over the wire in clear text, so anyone can get hold of it.

To improve on v1, an SNMPv2 standard was proposed in 1996. It added bulk transfer operations such as snmpbulkget and offered other enhancements such as manager-to-manager communication, but still did not incorporate better authentication mechanisms [Stallings, 2001]. To address this, another standard, SNMPv3, was proposed in 1998; it adds better access control and encryption. Despite its problems, SNMPv1 remains the most widely implemented version and the only one that has been fully standardised.

2.3.1.2 UCD-SNMP

There are a number of SNMP libraries available, and one of the most popular is UCD SNMP [UCD SNMP, 2001]. The UCD SNMP project is an implementation of SNMP v1, v2 and v3. It is available for a wide variety of platforms, on both Windows and Unix. It consists of an agent, a C library, applications for reading or setting object values in an MIB, and other tools such as a GUI for browsing an MIB.
UCD SNMP is the library used in this project.

2.3.2 Web Server and Proxy Log File Monitoring

SNMP allows us to monitor network devices remotely, but for monitoring local traffic, such as web server and proxy server traffic, local analysis of the log files generated by web and proxy servers is necessary. The proxy and web servers we looked at use the common log file format (CLF).
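The common log file format records one request per line: remote host, identity, authenticated user, timestamp, request line, status code and response size. A minimal parser can be sketched with a regular expression (an illustrative helper, not the thesis's actual log-analysis module):

```python
import re

# One CLF entry: host ident authuser [date] "request" status bytes
CLF = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<date>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_clf(line):
    """Return a dict of CLF fields, or None if the line does not match."""
    m = CLF.match(line)
    return m.groupdict() if m else None

entry = parse_clf(
    '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
    '"GET /index.html HTTP/1.0" 200 2326'
)
print(entry["status"], entry["size"])  # 200 2326
```

Fields parsed this way can then be aggregated (e.g. bytes per host, requests per status code) before being handed to a visualisation module.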

Harvard Brain Tissue Resource Center
National Brain Databank
Neuroscience Gene Expression Repository
Research on Standards and Platforms
Working Technical Report
August 11, 2003

Project Lead: Nitin Sawhney, Ph.D.
Technical Development: Tom Hickerson, Shai Sachs, Dmitriy Andreyev

Abstract

The Harvard Brain Tissue Resource Center (or the Brainbank) at the McLean Hospital is one of three federally funded centers for the collection and distribution of human brain specimens for research, and the only designated acquisition center. The Brainbank seeks to establish a publicly accessible repository (the National Brain Databank) to collect and disseminate results of postmortem studies of neurological and psychiatric disorders. The National Brain Databank will primarily provide neuropathology information, including gene expression data, which will be accessed and queried using a web-based interface. The project will utilize key microarray metadata standards such as MIAME and MAGE-ML, and best practices employed by existing gene expression repositories such as NIH's Gene Expression Omnibus (GEO) and ArrayExpress at the European Bioinformatics Institute.

The National Brain Databank initiative requires a long-term perspective to develop an appropriate application platform with a scalable and robust database while incorporating suitable microarray standards and ontologies. In this technical paper, we survey the overall lifecycle of research at the Brainbank with respect to microarray experiments. We also review the main gene expression repositories and analytic tools, as well as the emerging MIAME and MAGE-ML standards being adopted by the research community. We propose a system architecture that allows integration of existing Affymetrix-based microarray data using the MAGE object model and interfaces, while retaining the data in its raw form.
We believe the proposed repository will benefit from an architecture using the Java J2EE application framework and the Oracle 9i relational database running on a secure and high-performance Linux-based server. This architecture enables an open, scalable and extensible approach towards development and deployment of the repository in conjunction with existing software tools and standards in academic settings. We believe that the basic framework outlined in this technical report should serve as a robust foundation for the evolving gene expression repository at the Brainbank.

Table of Contents

Key Recommendations
1 Introduction: Objectives of the National Brain Databank
2 Lifecycle of Research at the Brainbank
2.1 Acquisition and Curation of Brain Tissue Samples
2.2 Gene Expression Experiments using Microarrays
2.3 Analysis of Expression Data: Software Tools and Data Standards
2.4 Current Computing Infrastructure and Databases at the Brainbank
2.5 Basic Requirements for National Brain Databank
3 Public Gene Expression Repositories
4 Microarray Standards and Ontologies
4.1 Motivation for Microarray Standards
4.2 What is an Ontology?
4.3 Understanding the Role of MIAME
4.4 Understanding MAGE-OM and MAGE-ML
4.5 Software Support for MAGE
4.5.1 Affymetrix GDAC Exporter
4.5.2 MGED's MAGE-stk
4.5.3 Commercial Software: Rosetta Resolver
4.6 Data Formats used by Gene Expression Repositories
4.6.1 SOFT Format at GEO
4.6.2 MAGE Standards at ArrayExpress
4.6.3 GeneXML at GeneX
4.7 Historical Evolution of MAGE Standards
4.8 Proposed Use of MIAME/MAGE and Related Technologies
4.8.1 National Brain Databank Database Structure
4.8.2 Importing Experimental Data
4.8.3 Curating the Brainbank Data
4.8.4 Searching the Data
4.8.5 Browsing the Data
4.8.6 Exporting the Data
5 National Brain Databank: Proposed Model and Approach
5.1 Summary of Preliminary Requirements
5.2 Proposed Application Model and System Architecture
5.3 Designing the Application Platform: Adopting Java J2EE
5.3.1 What is J2EE?
5.3.2 Case Study: PhenoDB Project at Massachusetts General Hospital
5.3.3 Available Java Tools and Comparison with Other Languages
5.4 Adopting a UNIX Operating Environment for the National Brain Databank Server
5.5 Adopting a Relational Database: Comparison of Database Platforms
6 Summary of Ongoing Requirements Analysis
7 Conclusions
References
Appendix: Comparison of Databases and Security Issues

Key Recommendations

• To support the large volume of heterogeneous data generated from microarray experiments at the Brainbank, the system must provide a range of mechanisms for indexing, annotating and linking the datasets with clinical and diagnostic data on the brain tissue samples. Hence, use of standardized approaches such as the MIAME ontology is important, along with a robust and scalable database.

• To ensure standardized submission, export and exchange with other gene expression repositories, the system should support the MAGE-OM object model and the XML-based MAGE-ML data exchange standards.
These standards are increasingly being adopted by many databases and software tools.ØWhile the MAGE standards are becoming popular, many existing databases and analytic tools are only now beginning to adapt to these standards. Hence, for the foreseeable future the Brainbank mustcontinue to provide gene expression data in their native formats to enable analysis by current software.The system must export data using MAGE while providing access to raw data files stored in the server.ØTo maintain the high standards for archiving and disseminating data to the neuroscience community, the Brainbank must carefully curate data submitted from internal experiments and external investigators.Hence software tools and workflows should be provided to annotate, validate, cross-reference, and map data to the internal representation. These data submission and curation mechanisms should be MAIME compliant and can be adapted form existing software tools.ØSimilar to existing gene expression repositories, the National Brain Databank must provide adequate tools for querying the diagnostic and gene expression data along a number of searchable parameters.This requires that the experimental data be submitted using MAIME compliant processes as well asindexing the raw data and clinical reports to extract relevant keywords and terms for extensive queries.ØTo allow data to be usable it must be referenced to standardized Gene sequences in GenBank and linked to relevant publications in online resources such as PubMed. The system must support mechanisms to cross-link and reference these online sources using a combination of manual and automated methods.ØSince the brain samples collected and gene expression data generated are based on patient profiles and the online repository is designed to be a publicly accessible resource, data must be selectivelydisseminated to comply with HIPAA guidelines. 
Hence, the system should support user authenticationmechanisms, a range of user roles and privileges for certain datasets and files, while enforcing adequate security measures in a robust and secure database.ØExtracting and archiving gene expression data in the online repository requires acquiring data from specialized software like Affymetrix using export tools like GDAC and other utilities for converting content to MAIME and MAGE-ML-based formats. The system must support extensible interfaces and APIs toallow integration with such tools. It is important to use nonproprietary platforms, open standards andmethodologies in the design of the system architecture.ØThe deployment architecture for the National Brain Databank must ensure long term scalability, robustness, performance, extensibility and interoperability with other systems and platforms. We proposea system architecture using the Java J2EE application framework and the Oracle 9i relational databaserunning on a secure and high-performance Linux-based server. We believe this architecture provides the most secure and extensible foundation in the long term for deploying a public gene expression repository.1 Introduction: Objectives of the National Brain DatabankThe Harvard Brain Tissue Resource Center1 (or The Brainbank) directed by Dr. Francine M. Benes at the McLean Hospital is one of three federally funded centers for the collection and distribution of human brain specimens for research, and the only designated acquisition center. The center’s brain tissue provides a critical resource for scientists worldwide to assist in their investigations into the functioning of the nervous system and the evolution of many psychiatric diseases.The Brainbank seeks to establish a publicly accessible repository (The National Brain Databank) to collect and disseminate results of postmortem studies of neurological and psychiatric disorders. 
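The MAGE-ML export recommended above is XML-based. As a minimal illustration of what an exporter might emit, the sketch below writes an XML fragment using the standard Java StAX API. Only the MAGE-ML root element name comes from the standard; the Experiment/ArrayDesign structure, the accession "NBD-0001", and the array name are illustrative placeholders — a real exporter would use the MAGE-stk toolkit and the full MAGE-ML DTD.

```java
import java.io.StringWriter;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamWriter;

// Illustrative sketch only: emits a minimal XML fragment in the spirit of a
// MAGE-ML export. Element names below the root are placeholders, not the
// actual MAGE-ML DTD.
public class MageExportSketch {
    public static String exportExperiment(String accession, String arrayType) {
        try {
            StringWriter out = new StringWriter();
            XMLStreamWriter w =
                XMLOutputFactory.newInstance().createXMLStreamWriter(out);
            w.writeStartDocument("UTF-8", "1.0");
            w.writeStartElement("MAGE-ML");       // actual MAGE-ML root element
            w.writeStartElement("Experiment");    // placeholder structure
            w.writeAttribute("identifier", accession);
            w.writeStartElement("ArrayDesign");
            w.writeCharacters(arrayType);
            w.writeEndElement();                  // ArrayDesign
            w.writeEndElement();                  // Experiment
            w.writeEndElement();                  // MAGE-ML
            w.writeEndDocument();
            w.close();
            return out.toString();
        } catch (XMLStreamException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Generating the document through a streaming writer, rather than string concatenation, keeps the output well-formed regardless of what appears in the accession or array fields.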
For this project, Akaza Research has been contracted to conduct research, design, and development of the public gene expression repository for the Brainbank's National Brain Databank. Akaza Research is an informatics consulting firm based in Cambridge, MA that provides its academic and nonprofit clients with open and customized solutions to facilitate public research in the life sciences. The National Brain Databank will primarily provide neuropathology information, including gene expression data along with anonymous demographics, which will be accessed and queried using a web-based interface. While general information will be publicly available, authorized researchers will have access to detailed results and will be able to export data into relevant standardized formats. As the system evolves, distributed researchers will have the ability to upload their own results using a specified metadata format, pending a process of approval and curation by administrators at the National Brain Databank.

The project will utilize key microarray metadata standards such as MIAME and MAGE-ML, and best practices employed by existing gene expression repositories like NIH's Gene Expression Omnibus (GEO) and ArrayExpress at the European Bioinformatics Institute. Akaza is conducting requirements analysis to identify the core specifications of the system over several phases of software releases that address the near-term needs and long-term vision of the National Brain Databank. This research and analysis effort, conducted in conjunction with the Brainbank, will be distilled into technical papers (such as this one) and formal specifications.
Based on feedback from the Brainbank, Akaza will commence the design and development of the system's first release, which will include a project website, implementation of the new database schema, data migration from existing MS Access and MS SQL Server databases at the Brainbank, and deployment of the core Java J2EE-based web application framework for the online repository.

A key aspect of the National Brain Databank project is the specification and design of appropriate metadata formats and related import/export mechanisms. In addition, several workflow processes will be implemented to provide administrators with mechanisms for selective authorization of users, data import/depositing, and curation/administration of the repository. The Brainbank eventually intends to support the neuroscience research community by expanding the scope of neuropathology information available to include SNP and proteomics data, while providing additional online tools for advanced search and cross-indexing, and supporting the ability to exchange relevant data with other online repositories. As the system is deployed, Akaza will continue to conduct ongoing evaluation, documentation, training, and testing with lead users and administrators for iterative refinement of the system, to ensure a useful and robust repository for the neuroscience research community.

This working technical paper, based on preliminary requirements gathering and background research, summarizes the key goals of the National Brain Databank, the process of research at the Brainbank, existing gene expression repositories and metadata standards, and relevant software tools and databases. The paper proposes a high-level implementation approach for the National Brain Databank's online gene expression repository, including the conceptual database model and application framework and the rationale for adopting Java J2EE, Oracle, and Linux as the basis for the system, and outlines the ongoing requirements analysis work.
Based on review and feedback from the Brainbank, key decisions and tradeoffs indicated here will be resolved to finalize the key requirements and specifications for development of the first system release of the National Brain Databank.

2 Lifecycle of Research at the Brainbank

The Harvard Brain Tissue Resource Center (the Brainbank) was established at McLean Hospital as a centralized, federally funded resource for the collection and distribution of human brain specimens for research. As a designated "NIH National Resource", the Brainbank provides a vital public service by collecting and disseminating postmortem brain tissue samples to the neuroscience research community (at no charge). These brain tissues are typically related to neurological disorders including Huntington's, Parkinson's, and Alzheimer's, and psychiatric disorders like schizophrenia or manic depression (bipolar disorder), together with normal control specimens, which are essential for comparative work. Collectively, these specimens are used for a wide variety of applications, including receptor binding, immunocytochemistry, in situ hybridization, virus detection, polymerase chain reaction (PCR), DNA sequencing, mRNA isolation, and a broad range of neurochemical assays.

2.1 Acquisition and Curation of Brain Tissue Samples

Having been established for over 20 years, the Brainbank has built a strong reputation as an NIH National Resource for brain tissue collection, archiving, and dissemination in aid of neuroscience research. To maintain this high standard, the Brainbank takes meticulous care in receiving, documenting, caring for, and collecting background data for its cases. Samples are examined by neuropathologists, and extensive case histories and family interviews are performed wherever possible, given privacy and practical limitations. There are currently over 5800 brains stored in the Brainbank.
Previously, brain tissue samples for Huntington's, Parkinson's, and Alzheimer's disease were collected; now the Brainbank additionally collects samples from patients with psychiatric disorders such as schizophrenia or manic depression, as well as normal control tissue. The Brainbank also houses private collections of brain tissue samples for the Tourette Syndrome Association (TSA), which are managed by that organization. Over the years, the Brainbank has compiled a representative brain tissue sample set for research called the "McLean 66" cohort (with samples from about 66-67 brains), which includes roughly equal numbers of schizophrenic, bipolar (hardest to obtain), and control cases. Gene expression data is now being derived from this set and will be included in the online repository initially.

The Brainbank's website currently provides password-based access to an anonymized catalog of brain tissue samples, with demographic information, diagnosis information, some neuropathological and clinical information, and related images. Investigators can browse and query the database and request additional demographic information as well as the actual samples from the Brainbank. Requests for samples are handled by an independent committee that provides a recommendation to the Brainbank before it supplies tissue samples to investigators. Currently the Brainbank supplies nearly 100 investigators with about 4000 samples every year.

2.2 Gene Expression Experiments using Microarrays

In addition to providing brain tissue samples with the relevant patient demographic information, the Brainbank is currently extracting gene expression levels from thousands of DNA samples of its tissue specimens. Over the last two years the Brainbank has expanded its capability to extract gene expression data using newly acquired microarray technologies, primarily from Affymetrix, including GeneChip® microarrays.
(The "McLean 66" cohort mentioned above originated as the "McLean 60" cohort, which has since been slightly expanded to include additional brain samples.)

Previously, all gene expression experiments were contracted out to external labs; however, the results were neither consistent nor of high quality, so the decision was made to bring this capability in-house.

Affymetrix offers high-density microarrays for human, mouse, and rat genomes. These arrays are clustered into sets of GeneChips containing probe pairs for up to 12,000 transcripts. For example, the Human U133 Genome Set of more than 39,000 transcripts is divided over two GeneChips labeled A (composed of known genes) and B (composed of expressed sequence tags, or ESTs, with unknown function; an EST is a single-pass sequence from either end of a cDNA clone, approximately 200 to 600 base pairs in length). Affymetrix matching uses 25 base pair (bp) probes affixed to known regions on a DNA chip, which has between 8,900 and 33,000 probes. (A probe is a labeled, single-stranded DNA or RNA molecule of specific base sequence used to detect the complementary base sequence by hybridization; at Affymetrix, probe refers to unlabeled oligonucleotides synthesized on a GeneChip probe array.) The microarray scanner uses lasers to detect DNA stained with fluorescence, to help analyze binding of complementary cDNA/RNA sequences from the tissue sample. Each chip costs about $600 in materials (including reagents) to prepare and run, and takes about a week to prepare (as part of a batch).

Researchers at the Brainbank create 5-7 gene expression profiles for each case, corresponding to the various brain regions that need to be studied. A gene expression experiment is rarely repeated for the same tissue sample to create another profile, unless the first one yields poor-quality data.
For example, over-washing or straining of the tissue samples in preparation for gene expression experiments can yield uniformly white images, which are not useful for analysis. Hybridization quality is verified using background calculations in the generated report files, particularly by examining the 3'/5' signals at housekeeping genes (values of 2 are considered good). Before the Brainbank acquired in-house capacity to conduct microarray experiments, it provided the RNA solutions for the McLean 60 cohort study to a commercial firm, Psychiatric Genomics in Maryland, to allow them to replicate these experiments using their own approach and unique procedures. This data may be provided to the Brainbank in the future, and hence it must be archived as a replicated data set accordingly. Experimental replicates may be assigned the same or different accession numbers.

Gene expression experiments can generate nearly 72 MB of data for just a single typical array, according to Affymetrix; hence the storage and management of such data becomes a crucial task. Each sample hybridized to an Affymetrix GeneChip generates five Absolute Analysis files:

1. EXP: The experiment file (in ASCII text) stores laboratory information for each array, including the experiment name, sample, and the type of GeneChip array being used.
2. DAT: The data file contains the raw image of the scanned GeneChip array, corresponding to the raw hybridization data. These data are not processed, and no scaling factors or normalization is embedded. (40-70 MB)
3. CEL: The cell intensity file assigns X, Y coordinates to each cell on the array and calculates the average intensity of each cell. This file can be used to re-analyze data with different expression algorithm parameters, and provides a normalized image. (The ASCII/Excel file is around 10-12 MB.)
4. CHP: The chip file is generated using information in the CEL file to determine the presence or absence of each transcript and its relative expression level. (Binary file, around 7 MB for rats and 14 MB for humans.)
5. RPT: The report file (in ASCII text) provides quick access to the quality control information for each hybridization, including noise, scale factor, target values, percent present, absent, or marginal, average signal values, and housekeeping controls such as Sig(3'/5').

The report file is often examined first after running an experiment to ensure the quality of results, and then the image files are used to check for any artifacts. Affymetrix software uses the EXP file together with the DAT file to process the raw data and generate the data analysis files. The chip file is primarily used for statistical analysis. Although the EXP, DAT, CEL, CHP, and RPT files can only be read using Affymetrix software, the quantitative and qualitative expression values in each CHP file can be exported as text (tab-delimited) files. The DAT and CHP image files can be saved in TIFF format and later converted to JPEG for easy viewing. Optionally, a mask file can also be generated to provide additional information on microarray chip quality.

The Affymetrix Absolute Analysis text files contain a row for each transcript represented on the microarray and columns of the raw expression data for that transcript (indicating mRNA expression levels). The Affymetrix platform contains multiple pairs of perfect match and mismatch oligonucleotides for each transcript examined.
The software uses the pattern and intensity of hybridization to these oligos (short lengths of single-stranded nucleotides, used as probes on GeneChip® arrays) to calculate a relative expression value for each transcript (referred to as 'Signal' in version 5.0 of Microarray Suite and 'Average Difference' in previous software versions). The algorithms also determine whether each transcript was detected during hybridization; this qualitative information is reported as "Present", "Absent", or "Marginal".

Each Affymetrix microarray contains thousands of different oligonucleotide probes. The sequences of these probes are available at the Affymetrix NetAffx website, which provides background and annotation information on the Affymetrix probes (based on probe ID) and also maps relationships between Affymetrix microarray chip probe IDs and those of repositories like GenBank. Currently, GenBank and dChip do not read Affymetrix IDs. Generating and using these IDs from the NetAffx website is somewhat confusing, as they do not always correspond and the relationships between them can be many-to-one, one-to-many, or many-to-many.

Affymetrix software includes Microarray Suite for cataloging microarray data, the MicroDB database, and Data Mining Tools, which perform statistical tests and run on MicroDB. The Affymetrix Analysis Data Model (AADM) is the relational database schema provided along with a set of Application Programming Interfaces (APIs), implemented as views, to provide access to data stored in Affymetrix-based local gene expression databases. While the raw microarray gene expression data may be stored in an internal database, the results are valuable to neuroscience researchers only if the data is shared along with the relevant experimental metadata and demographic details.
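The tab-delimited CHP export described above — one row per transcript, carrying the Signal value and the Present/Absent/Marginal call — is the kind of file the repository would need to ingest. The sketch below parses one such row; the assumption that the first three columns are probe-set ID, Signal, and Detection is illustrative, since the real export carries many more columns and their order depends on export settings.

```java
// Sketch of parsing one row of a tab-delimited expression export.
// Hypothetical column layout: probe-set ID, Signal, Detection call.
public class ChpRow {
    public final String probeSetId;
    public final double signal;     // relative expression value ('Signal')
    public final String detection;  // "P" (Present), "A" (Absent), "M" (Marginal)

    public ChpRow(String probeSetId, double signal, String detection) {
        this.probeSetId = probeSetId;
        this.signal = signal;
        this.detection = detection;
    }

    public static ChpRow parse(String tabDelimitedLine) {
        String[] cols = tabDelimitedLine.split("\t");
        return new ChpRow(cols[0], Double.parseDouble(cols[1]), cols[2]);
    }
}
```

Parsed rows like this could be bulk-loaded into the relational schema, with the raw CEL/CHP files kept alongside for tools that need them.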
Hence it is important to consider standards and ontologies for sharing microarray data among the databases and analytic tools used by the research community.

2.3 Analysis of Expression Data: Software Tools and Data Standards

A number of software tools are used for analysis of gene expression data generated by microarray experiments. In addition to Affymetrix's own Data Mining Tool (DMT) and a number of proprietary commercial tools, several freely available tools are used within the research community, including dChip, BioConductor, and GeneCluster.

Affymetrix provides the Data Mining Tool (DMT) v3.0 to allow filtering and sorting of expression results from microarray experiments, cluster and matrix analysis, and gene annotation (manually or from the NetAffx website). The DMT software runs on Windows NT and allows multiple queries to be performed on multiple GeneChip experiments simultaneously. To load data, one must register and select the MicroDB database in order to query and view the CHP files generated by Affymetrix; these can then be filtered to perform the relevant analysis. Despite having been developed for Affymetrix users, the software interface does not appear to be intuitive, and many of its features have now been incorporated into publicly available analysis tools.

The DNA-Chip Analyzer (dChip) is the most commonly used microarray analysis software, and the one primarily used at the Brainbank. It was developed by Dr. Cheng Li (2003) at the Harvard School of Public Health and is freely available from Harvard. dChip requires the CDF chip file and the CEL files for conducting analysis. The software can normalize the data, export expression values, filter genes, and perform hierarchical clustering or compare genes between groups of samples.
The authors of dChip encourage researchers to make their gene expression results available publicly for analysis by others:

"We encourage researchers who generate Affymetrix data to also put the CEL or DAT files available with the paper. This will enhance the efforts of improving on the low-level analysis of Affymetrix microarray such as feature extraction, normalization and expression indexes, as well as ease the data-sharing and cross-reference among researchers since CEL level files can be pooled to analyze in a more controlled manner. CEL files have text format and contain summarized probe-level (PM, MM) data of Affymetrix array. dChip software uses the raw CEL files. If CEL files are stored in a central database system (containing the raw CEL files or directory links to CEL files), such a function would be convenient (as implemented in the Whitehead Xchip database): users query the database through web interface for their experiments, and request the raw CEL files to be stored temporarily on a ftp site for downloading."

BioConductor is collaborative open source software developed by researchers at the Dana-Farber Cancer Institute and the Harvard Medical School/Harvard School of Public Health. It provides a range of statistical and graphical methods for analysis of genomic data and facilitates integration of biological metadata from PubMed and LocusLink. It is based on the "R" statistical programming language.
The system handles Affymetrix data by allowing users to provide CEL files, as well as phenotypic and MIAME information, through graphical widgets for data entry.

GeneCluster 2.0 is a Java-based software tool developed by the Whitehead Institute/MIT Center for Genome Research (WICGR). GeneCluster allows data analysis using supervised classification methods such as k-nearest-neighbor, gene selection, and permutation tests. GeneCluster supports two data formats: the WICGR RES file format (*.res) and the GCT (Gene Cluster Text) file format (*.gct). The main difference between the two is that the RES format contains labels for each gene's absent (A) versus present (P) calls as generated by Affymetrix's GeneChip software (these are currently ignored by GeneCluster). Data files for use in GeneCluster can be created automatically by a special tool such as WICGR's Res File Creation Tool, or manually with standard tools such as Microsoft Excel and text editors.

To support data exchange with a range of analytic tools, the online repository for the National Brain Databank must provide the CHP (for Affymetrix DMT), CDF, and, most importantly, the CEL files in raw form for downloading. In addition, some researchers may also want the report and experiment files to gain confidence in the experiments, and experimental metadata in accordance with MIAME will be useful for analysis as well. All files generated by Affymetrix can be placed in a secure directory on the server and referenced in the sample metadata, such that they can be easily accessed if the online user has appropriate privileges.
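The privilege check just described — raw files live in a secure server directory and are only resolved for suitably authorized users — can be sketched as follows. The role names, directory layout, and file name here are hypothetical, not part of any existing Brainbank system; a J2EE deployment would typically express the role check declaratively instead.

```java
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Set;

// Hypothetical sketch: raw Affymetrix files (CEL, CHP, RPT, ...) sit under a
// secure directory and are only resolved for roles granted raw-data access.
public class RawFileGate {
    private static final Set<String> RAW_DATA_ROLES =
        Set.of("curator", "authorized-researcher");

    public static boolean hasRawAccess(String role) {
        return RAW_DATA_ROLES.contains(role);
    }

    public static Path resolve(String role, String secureDir, String fileName) {
        if (!hasRawAccess(role)) {
            throw new SecurityException("role lacks raw-data privilege: " + role);
        }
        Path base = Paths.get(secureDir).normalize();
        Path p = base.resolve(fileName).normalize();
        if (!p.startsWith(base)) {   // reject ../-style escapes from the directory
            throw new SecurityException("path escapes secure directory");
        }
        return p;
    }
}
```

The normalization step matters: because file names come from sample metadata, the resolver must refuse paths that would climb out of the secure directory.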
In the future, many analytic tools such as GeneSpring and GenePix will begin to support MIAME metadata and microarray data import/export in MAGE-ML formats.

2.4 Current Computing Infrastructure and Databases at the Brainbank

The Brainbank currently houses its databases on two main servers (Brainservers 1 and 2), while a third server is being deployed for the National Brain Databank and an additional machine will be provided for development.

The Clinical Server (Brainserver-1) hosts the primary brain tissue and clinical data. As it contains the initial unanonymized patient data (Brains DB), it maintains restricted access in compliance with HIPAA guidelines. The server is an HP Proliant ML370 G2 with a 1 GHz processor, 256 MB RAM, and 37.8 GB storage (RAID5 with six 9.1 GB removable hard drives). It runs Windows NT 4.00.1381 with MS SQL Server 7.00.839 and MS Access databases. Clinical demographic and diagnostic data for brain samples are archived in these databases, along with brain tissue information, freezer inventory, and neuropathology reports. This server is isolated from other machines on the network to maintain the security of sensitive data.

The Public Web Server (Brainserver-2) hosts the publicly accessible website for the Brainbank and the Harvard Image Database v1.00, which allows restricted access to query the anonymized data on brain tissue samples. The server is an HP Proliant ML370 G3 with a 2.4 GHz processor, 1.5 GB RAM, and 90.2 GB storage (RAID5 with six 18.2 GB removable hard drives). It runs Windows NT 4.00.1381 and IIS, with MS SQL Server 7.00.839 and Webhunter v4.0 databases. Webhunter is a database product developed by ADS Image, Inc. which is used for querying and indexing brain tissue images stored in SQL Server (previously in Access).
The database (Anonymous Brains) contains anonymized brain tissue and clinical data, which is bulk-imported manually, using SQL Server scripts, from the databases on the clinical server.

The National Brain Databank server (Brainserver-3 or National-DB) will host the public gene expression repository for the Brainbank. Some data from other Brainbank databases will be imported into the database running on this server. The server is an HP Proliant ML370 G3 with dual 2.4 GHz processors, 1.5 GB RAM, and 90.2 GB storage.

1992 Yellow Book Supplement — Ship Repair

IV. Sea valve grinding
2. Valve seat machining: 25% of the removal/refit price; valve disc machining: 25% of the removal/refit price
3. Valve removal and transport to the workshop: 100% of the stop-valve overhaul price
4. Sea valve pressure test: 50% of the overhaul price
5. Scraping and painting inside the sea valve casing: 15% of the stop-valve overhaul price
6. Storm valve overhaul: stop-valve overhaul price plus 50%
7. Coarse-water strainer removal, cleaning, and inspection: priced as gate-valve overhaul

V. Bottom plug grinding
1. Bottom plug, single cement box: 11 yuan each
No.   Bore   Clean sludge & paint   Grind ridge wear
1     400    47                     58
2     500    50                     94
3     600    54                     130
4     700    62                     170
5     800    67                     218
6     900    72                     270

9. High-pressure fuel pumps and injectors
C. Separate verification of injection timing: 35% of the removal/refit price
10. Air distributor
No.:  1    2    3    4    5    6
Bore: 400  500  600  700  800  900
D. Cleaning oil and refuse under the engine-room floor plates: 4.80 yuan/m²
VIII. Chalking and marking light and heavy load waterlines
No.   Ship length   Chalk and mark
1     50~75         328
2     76~100        382
3     101~125       437
4     126~150       491
5     151~175       546
6     176~200       600
7     201~225       655
1. If only one waterline is chalked and marked: 60% of the price
3   500~1000    19877  15900  13251  11027  10340
4   1000~5000   15157  12630  10419  10073

Magic Phonics Yellow Book — Reference Lesson Plan

Teaching content: the initial consonant blends bl, br, pl, pr.
Teaching aids: student cards, quiz sheets, little readers, student card tape, quiz tape.
Teaching procedure: presenting the new material.
1. The teacher opens the Magic Phonics cards and uses the pictures on the cards to present the sounds to be learned in this lesson: the initial blends bl, br, pl, pr. The students look at the pictures, listening and understanding.

The specific steps are as follows:
1) The teacher opens the first row of the card and finds the blend bl. Pointing at the blue picture, the teacher reads: blue, blue, /bl/ /bl/. Then, pointing at the bleeding picture, reads: bleed, bleed, /bl/ /bl/. The teacher points at bl on the card and reads: /bl/ /bl/ bl. The teacher gives the command: Magic fingers, Magic fingers up and down, Magic fingers, Magic fingers write in the air. The teacher leads the students in writing the letters in the air to help them remember the letter shapes.

2) The teacher opens the first row of the card and finds the blend br (10 minutes). Pointing at the bride picture, the teacher reads: bride, bride, /br/ /br/. Then, pointing at the brain picture, reads: brain, brain, /br/ /br/. The teacher points at br on the card and reads: /br/ /br/ br. The teacher gives the command: Magic fingers, Magic fingers up and down, Magic fingers, Magic fingers write in the air. The teacher leads the students in writing the letters in the air to help them remember the letter shapes.

3) The teacher opens the first row of the card and finds the blend pl. Pointing at the card-playing picture, the teacher reads: play, play, /pl/ /pl/. Then, pointing at the plate picture, reads: plate, plate, /pl/ /pl/. The teacher points at pl on the card and reads: /pl/ /pl/ pl. The teacher gives the command: Magic fingers, Magic fingers up and down, Magic fingers, Magic fingers write in the air. The teacher leads the students in writing the letters in the air to help them remember the letter shapes.


China State Shipbuilding Corporation 1992 Yellow Book Supplement, Second Draft for Comment
Guangzhou Ocean Shipping Ship Repair Yard Co., Ltd., 1999.12.18

Notes:
1. This pricing book is a supplement to the CSSC 1992 Yellow Book.

2. This pricing book is still at the comment-soliciting stage and has not yet been put into use.

3. This pricing book is still being refined and adjusted.

4. Many work items are not yet covered in this pricing book; we are compiling them.

5. This second draft notes the Company's comments on the first draft and the shipyard's explanations and revisions.

I. Service items
1. Berthing and unberthing: line handling*; wharfage*; tugs: under 160 m, 10560 yuan/call; 160-200 m, 14880 yuan/call; over 200 m, 16800 yuan/call.
Company: We suggest referring to the Wenchong Shipyard supplement.

Shipyard: The tug rates are based on what Wenchong and Boluomiao charge our yard.

As far as we know, Wenchong itself no longer uses the tug item in its own supplement.

Pilotage: 0.46 yuan/net ton. Express pilotage fee: ship length under 150 m, 637 yuan/call; 150-200 m, 1092 yuan/call; over 200 m, 1820 yuan/call.
Company: (1) This concept should not be established: ships entering and leaving the yard and dock always have a pilot, and the time is scheduled in advance, so there is no such thing as "express pilotage".

(2) Is the express pilotage fee charged even when pilotage has been properly scheduled?
Shipyard: This fee is imposed by the pilotage authority; it is not the yard's choice.

2. Dock services: docking and undocking*; tugs as in item 1; pilotage as in item 1; dock attendance*; special keel-block arrangement*: 50-75 m, 5228; 75-100 m, 6578; 126-150 m, 9967; 151-175 m, 11999; 176-200 m, 13351; 201-225 m, 15383.
Company: What counts as a special block arrangement?
Shipyard: Ships whose bottom lines are not of the ordinary form, such as V-shaped bottoms.

3. Fire watch*; fire patrol inspection*; temporary placement of fire-fighting equipment*.
4. Connecting and disconnecting water hoses: refrigeration plant*; air conditioning*.
5. Supplying cooling water: refrigeration plant*; air conditioning*.
6. Connecting and disconnecting cables*; shore power supply*: 2.0 yuan/kWh; frequency converter rental: 640 yuan/day.
Company: This is shipyard equipment and should not be charged for.
Shipyard: For many years our yard has depended on renting docks, so we know that every yard makes this charge.

(Gangways, for example, are also yard equipment, but are charged for.)
7. Rigging and removing gangways: wharf*; dock*.
8. Installing a temporary telephone: 30 yuan/day.
Company: (1) We suggest following the 92 Yellow Book. (2) 200 yuan per set per call-out, with local calls provided free and no per-day charge; we understand that installing a telephone is fairly labor-intensive.

Shipyard: (1) In the Yellow Book, 200 yuan per set refers only to the installation fee.

Call charges vary with time and place, so the Yellow Book does not fix them.

Our current rate of 30 yuan/day, with installation and removal included, is quite reasonable.

(2) Most calls made by workers on board are also for work purposes.

(3) In future, telephones can be installed at a location designated by the ship.

9. Removal of domestic refuse*.
10. Fresh water supply: 4.5 yuan/ton; ballast water supply: 2.0 yuan/ton.
11. Pumping out bilge water and oily water: bilge water 24 yuan/ton; oily water 45 yuan/ton; sludge 105 yuan/ton.
12. Survey fees to be determined on site according to circumstances.

13. Special-purpose ships: docking/undocking and dock fees plus 20%.
Note: Items marked * follow the 1992 Yellow Book (with a coefficient applied).

II. Work items (supplementing the Yellow Book)
1. Tailshaft work
(1) Packing renewal.
Company: We suggest referring to the Wenchong supplement.
Shipyard: It is in fact based on the Wenchong supplement.
a. Ship-supplied spares charged separately.
b. Packing renewal outside the dock charged separately.
(2) Tailshaft removal and refitting:
a. Removing and refitting the propeller alone: 50% of the removal/refit price.
b. Controllable-pitch propeller and tailshaft removal and refitting: 450% of the removal/refit price.
c. Ducted propeller and tailshaft removal and refitting: 200% of the removal/refit price.
d. Transporting the tailshaft to the workshop: 15% of the removal/refit price (ancillary work charged separately); transporting the propeller to the workshop: 10% of the removal/refit price.
(3) Tailshaft magnetic-particle inspection: diameter under Φ400, 280 yuan; Φ400 and above, 400 yuan.
Company: (1) What does this cover? (2) Does it include re-casting the white metal? If it includes re-casting the white metal the price is acceptable, but if it only covers removal/refitting and renewing the bushing it is very expensive.

Shipyard: It includes the white metal.
a. Removing and refitting the outer liner alone: 70% of the removal/refit price; the inner liner alone: 40%.
b. Renewing the lower half of the lignum vitae: K = 0.75; renewing all lignum vitae in the workshop: K = 0.75.
c. Lignum vitae/white metal renewal up to 1 m: priced as 1 m; over 1 m: priced by actual length.
(5) a. Tailshaft alignment alone: K = 0.3; machining the three tailshaft bearing journals: K = 1.5.
b. Tailshaft taper fitting in dock: K = 1.5; taper contact blue-check: 30% of the fitting price.
(6) a. Three-bladed propellers: K = 0.86; five-bladed propellers: K = 1.3.
b. Scraping and cleaning propeller blade surfaces: diameter under Φ4200, 210 yuan; Φ4200 and above, 328 yuan.
2. Rudder work
(1) Transporting the rudder stock to the yard: 10% of the rudder blade removal/refit price; transporting the rudder blade: 15%.
(2) Rudder stock machining.
(3) Rudder blade tightness test: ship length ≤100 m, 227 yuan/blade; 101-150 m, 240 yuan/blade; 151-200 m, 254 yuan/blade; over 200 m, 267 yuan/blade.
3. Hull sighting and measurement records.
4. Steel work
1. Single plates over 50 kg: per the 92 Yellow Book with a coefficient.
(1) Pieces under 5 kg: 68 yuan each.
(2) Plate removal and refitting: 30% of the renewal price; plate removal only: 10% of the renewal price.
(3) Railing renewal: Φ1/2", 72 yuan/m; Φ2", 88 yuan/m; Φ20 (bar), 110 yuan/m.
(4) Gouging out weld seams and re-welding: plate thickness 10 mm, 48 yuan/m; 16 mm, 65 yuan/m; 25 mm, 86 yuan/m.
a. Lengths under 5 m charged as 5 m.
Company: A 5 m minimum is too long.
Shipyard: It is not: the Wenchong book specifies that lengths under 10 m are charged as 10 m.

b. Weld seam radiographic inspection: 120 yuan per film.
5. Sea valve grinding
(2) Valve seat machining: 25% of the removal/refit price; valve disc machining: 12% of the removal/refit price.
(3) Valve removal, refitting, and transport to the workshop: 100% of the stop-valve overhaul price.
(4) Sea valve pressure test: 50% of the overhaul price.
(5) Scraping and painting inside the sea valve casing: 15% of the stop-valve overhaul price.
(6) Storm valve overhaul: stop-valve overhaul price plus 35%.
(7) Coarse-water strainer removal, cleaning, and inspection: priced as gate-valve overhaul.
6. Zinc anode renewal
(1) Renewal of ship-supplied and yard-supplied zinc anodes.
(2) Renewal of zinc anodes in ballast tank double bottoms: ship-supplied plus 25%, yard-supplied plus 10%.
(3) Renewal of bolted zinc anodes: ship-supplied plus 15%, yard-supplied plus 5%.
7. Sea grid removal, refitting, and painting, with cleaning and painting inside the valve chest: ship length under 100 m, 139 yuan/piece; 101-150 m, 152 yuan/piece; 151-200 m, 171 yuan/piece; 201-225 m, 206 yuan/piece.
5. Steel plate thickness gauging: (1) ordinary gauging, 10 yuan/point; (2) drilled gauging, 30 yuan/point.
8. Cargo-handling gear
(7) Triple-sheave block overhaul: plus 50%; quadruple-sheave block overhaul: plus 100%.
(11) Transporting components to the yard: 25% of the original price.
(12) Crane work: original price plus 30%.
(13) Derrick removal, refitting, and transport to the yard: 130% of the gooseneck pin inspection price. Derrick bending test: 5 t, 318 yuan each; 10 t, 372 yuan each; 25 t, 515 yuan each. Derrick straightening: 5 t, 1044 yuan each; 10 t, 1146 yuan each; 25 t, 1482 yuan each.
Note: "original price" means the 92 Yellow Book price.
9. Machinery work
(1) Main engine work
a. Cylinder covers: crack detection, 220 yuan each regardless of size; spare cylinder head transported to the yard, overhauled, and pressure tested: 55% of the removal/refit/transport price.
b. Intake and exhaust valves: valve machining, 20% of the removal/refit price; valve seat machining, 30%; valve seat removal/refitting, 25%; plus 40% where there are two or more valves; separate verification of intake/exhaust valve timing, 35% of the removal/refit price.
c. Cylinder head fittings requiring machining: 25% of the removal/refit price; attached valve parts removal/refitting: 25% of the removal/refit price.
d. Trunk pistons: renewal of piston rings with clearance fitting, 20% of the removal/refit price.
e. Crosshead pistons: renewal of piston rings with clearance fitting, 25% of the removal/refit price; fitting of piston rod stuffing-box sealing rings, 35%; overhaul of built-up piston cooling-water standpipes, 30%.
f. Piston dismantling: spare piston transported to the yard, 30% of the piston removal/refit price; piston mounted on a lathe for truing and run-out measurement, 60% of the dismantling price; machining piston ring grooves, 90% of the dismantling price.
g. Crosshead bearings and connecting-rod big-end bearings: crosshead bearing transported to the yard, 20% of the removal/refit price; connecting-rod bearing, 25%; crankcase door removal/refitting, 10% of the piston removal/refit price.
h. Cylinder liners and frames: cleaning sludge and painting the liner water spaces.
i. High-pressure fuel pumps and injectors: separate verification of injection timing, 35% of the removal/refit price; underslung main bearing removal/refitting, removal/refit price plus 60%; V-type engines, plus 20%.
l. Thrust bearings (Yellow Book price with coefficient): removal/refitting of the bearings at both ends of the thrust bearing, 60% of the removal/refit price.
m. Preliminary sea trials and engine trials: light-load sea trial, 50% of the engine trial price; light-load trial certificate fee: under 1000 t, 65 yuan; 1001-5000 t, 135 yuan; 5001-10000 t, 195 yuan; over 10000 t, 260 yuan.
n. Medium-speed engines: low-speed engine price plus 20%; high-speed engines: plus 30%.
o. Removal and refitting of engine-room skylights, railings, ladders, and other obstructing parts to allow machinery in and out of the engine room: ship length ≤100 m, 900; 101-150 m, 1180; 151-200 m, 1440; over 200 m, removal, refitting, and transport to the yard plus 40%.
s. Main engine harmonica valve group removal, refitting, transport to the yard, dismantling, inspection, and adjustment (A)(B).
u. Main engine tie-bolt inspection and tightening, with adjustment, including inspection of main bearing, crosshead, guide shoe, and astern guide clearances.
(2) Air bottle safety valve overhaul; machining of parts, 25% of the overhaul price.
(3) Air bottle work
a. Hoisting and transporting the air bottle to the yard charged separately.
b. Tumbling the air bottle on a lathe for de-rusting charged separately.
c. Internal cleaning of the air bottle, painting not included.
d. Main valve and auxiliary valve overhaul, machining not included; machining, 25% of the overhaul price.
Coolers (overhaul not included): plate coolers, cleaning price plus 80%; length-to-diameter ratio L:D 3-4, plus 10%; 4-5, plus 20%; 5-6, plus 30%; 6-7, plus 40%; cleaning of fuel oil and purifier heaters, lubricating-oil cooler price plus 20%.
10. Piping
a. Valve disc machining: 20% of the removal/refit price; valve seat machining: 35%.
b. Valve removal/refitting, line pressure 1.6-2.5 MPa: plus 5%; 2.5-4 MPa: plus 10%.
c. Crude oil, heavy oil, and residual oil valves: plus 30%; sewage and sanitary valves: plus 35%; boiler-mounted valves: plus 50%; steam and air valves: plus 25%.
d. Butterfly valves (manual), overhauled in place: priced as gate valves; removal/refitting of a butterfly valve alone: valve removal/refit price plus 50%; hydraulic and pneumatic butterfly valves, and repair of hydraulic/pneumatic systems, charged separately.
e. Valve pressure tests: stop valves, 50% of the removal/refit price; gate valves, 30%.
f. Other surcharge coefficients follow those for pipe removal and renewal.
g. Boiler safety valves: valve disc machining, 15% of the removal/refit price; for twin valves, machining one disc only, 60% of the disc machining price; valve seat machining, 30% of the removal/refit price; for twin valves, machining one seat only, 60% of the seat machining price.
h. Pipe removal, refitting, and renewal: per the 92 Yellow Book with coefficients.

k. Crude oil, heavy oil, and residual oil piping renewal: plus 30%. Pipe runs up to 50 m: full price; 51-100 m: 90% of that price; 101-300 m: 80%; 301-500 m: 60%; over 500 m: 50%.
n. Line pressure 1.6-2.5 MPa: plus 5%; 2.5-4 MPa: plus 10%.
o. Pipe removal and refitting: 20% of the removal-and-renewal price for lengths over one metre.

11. Electrical work
a. AC motor minor repair; DC motor minor repair: 200% of the AC motor minor repair price; AC motor stator or rotor rewind: 400% of the minor repair price; DC motor stator or rotor rewind: 600%; AC motor stator and rotor complete rewind: 540%; DC motor stator and rotor rewind: 740%.
b. AC generator minor repair; DC generator minor repair: 200% of the AC motor minor repair price; AC generator stator or rotor rewind: 480% of the minor repair price; DC generator stator or rotor rewind: 680%; AC generator stator and rotor complete rewind: 650%; DC generator stator and rotor rewind: 850%.
c. Measurement and adjustment of auxiliary engine clearances after generator reassembly: auxiliary engine bore 150, 558 yuan; 200, 726 yuan; 250, 854 yuan; 300, 949 yuan.
12. Boiler work
a. Manhole and handhole removal/refitting: manholes 228 yuan each; handholes 98 yuan each; furnace doors 332 yuan each.
b. Flue door removal/refitting: door area ≤0.5 m², 191 yuan each; ≤1 m², 218 yuan each; ≤1.5 m², 286 yuan each; ≤2 m², 395 yuan each; ≤2.5 m², 513 yuan each. Doors are priced by area whether round or square.
c. Hydraulic pressure test: capacity ≤1 t, 535 yuan per boiler per test; ≤2 t, 664; ≤3 t, 750; ≤4 t, 810; ≤5 t, 907; ≤7 t, 1200; ≤10 t, 1598; over 10 t, 2078. The test includes rigging pumping gear and fitting blanks, and draining all water afterwards; it does not include valve removal/refitting. Each subsequent test: 80% of the price.
13. Miscellaneous
Machining and renewal of parts charged separately; rigging and dismantling scaffolding not included.
b. Hatch cover work
(1) Hatch cover removal/refitting (including hoisting): 680 yuan/panel; hydraulic hatch cover removal/refitting: 2100 yuan/cylinder.
(2) Hatch cover rubber packing renewal: straight packing 24 yuan/m; corner pieces 20 yuan each; packing and corner-piece materials charged separately.
(3) Hatch cover hose test: 237 yuan/panel; preliminary hose test: 176 yuan/panel.
c. Watertight door work
(1) Watertight door removal/refitting: 166 yuan/door.
(2) Watertight door removal/refitting with rubber packing renewal: 17 yuan/m; packing material charged separately.
(3) Watertight door hose test: 72 yuan/door; freeing up door dogs: 64 yuan/door.
d. Porthole work
(1) Glass and rubber material prices charged separately.
(2) Glass renewal includes removing and refitting the glass frame.
(3) Rubber renewal includes the hose test.
(4) Rigging and dismantling scaffolding charged separately.
e. Accommodation ladder work
(1) Accommodation ladder winch dismantled in place or in the yard, cleaned, inspected, and adjusted: 372 yuan/unit.
(2) Accommodation ladder hoisting gear strength test: 544 yuan/unit.
(3) Accommodation ladder removal, refitting, and transport to the yard: 20 steps, 621 yuan/unit; 28 steps, 670 yuan/unit; 36 steps, 774 yuan/unit.
f. Lifeboat work
(1) Lifeboat removal/refitting: 372 yuan/boat.
(2) Boat winch dismantled in place or in the yard, cleaned, inspected, and tested: 564 yuan/unit.
(3) Davit load test: 765 yuan/unit.
(4) Lifeboat weight test and tightness check: 911 yuan/boat.
(5) Lifeboat removal, refitting, and transport to the yard: 124 yuan/boat.
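Most items in this supplement are priced as a percentage of a base price, with surcharges stacked on top (for example, valve-seat machining at 25% of the removal/refit price, or a further 30% surcharge for crude- and heavy-oil valve work). A minimal sketch of how such percentage-of-base rules combine, using those two rates from the supplement; the combination order (rate first, then multiplicative surcharges) is an assumption, since the supplement does not state it explicitly:

```java
// Sketch of percentage-of-base pricing: a job is priced as base × rate,
// with each surcharge then applied multiplicatively. The 25% rate and 30%
// surcharge used in the tests come from the supplement's valve items.
public class RepairPricing {
    public static double price(double basePrice, double rate, double... surcharges) {
        double p = basePrice * rate;
        for (double s : surcharges) {
            p *= (1.0 + s);   // e.g. +30% for crude/heavy/residual oil valves
        }
        return p;
    }
}
```

For a removal/refit base price of 1000 yuan, valve-seat machining at 25% would come to 250 yuan, and 325 yuan with the 30% heavy-oil surcharge applied.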
