Pesticide Residues in Essential Oils

Determination of Pesticide Minimum Residue Limits in Essential Oils
Report No 3

A report for the Rural Industries Research and Development Corporation
By Professor R. C. Menary & Ms S. M. Garland
June 2004

RIRDC Publication No 04/023
RIRDC Project No UT-23A

© 2004 Rural Industries Research and Development Corporation. All rights reserved.
ISBN 0642 58733 7
ISSN 1440-6845

'Determination of pesticide minimum residue limits in essential oils', Report No 3
Publication No 04/023
Project No UT-23A

The views expressed and the conclusions reached in this publication are those of the author and not necessarily those of persons consulted. RIRDC shall not be responsible in any way whatsoever to any person who relies in whole or in part on the contents of this report.

This publication is copyright. However, RIRDC encourages wide dissemination of its research, providing the Corporation is clearly acknowledged. For any other enquiries concerning reproduction, contact the Publications Manager on phone 02 6272 3186.

Researcher Contact Details
Professor R. C. Menary & Ms S. M. Garland
School of Agricultural Science
University of Tasmania
GPO Box 252-54
Hobart, Tasmania 7001, Australia
Phone: (03) 6226 2723
Fax: (03) 6226 7609
Email: r.menary@.au

In submitting this report, the researcher has agreed to RIRDC publishing this material in its edited form.

RIRDC Contact Details
Rural Industries Research and Development Corporation
Level 1, AMA House, 42 Macquarie Street
BARTON ACT 2600
PO Box 4776, KINGSTON ACT 2604
Phone: 02 6272 4819
Fax: 02 6272 5877
Email: rirdc@.au
Website: .au

Published in June 2004
Printed on environmentally friendly paper by Canprint.

FOREWORD

International regulatory authorities are standardising the acceptable levels of pesticide residues in products on the world market, together with the analytical methods used to confirm residue levels. To participate constructively in these processes, Australia must maintain a research base capable of contributing to the establishment of methodologies, and must be in a position to assess the levels of contamination within our own products.

Published methods for pesticide residue analysis rarely deal with detection within the matrix of essential oils. This project was designed to develop and validate analytical methods and to apply that methodology to monitor pesticide levels in oils produced from commercial harvests. This provides an overview of the residue levels to be expected in our produce when normal pesticide management programs are adhered to.

The proposed manual dealing with the specific problems of detecting pesticide residues in essential oils is intended to benefit the essential oil industry throughout Australia and may prove useful for other horticultural products.

This report is the third in a series of four project reports presented to RIRDC on this subject. It is accompanied by a technical manual detailing methodologies appropriate to the analysis of pesticide residues in essential oils.

This project was part funded from RIRDC Core Funds, which are provided by the Australian Government.
Funding was also provided by Essential Oils of Tasmania and Natural Plant Extracts Cooperative Society Ltd.

This report, an addition to RIRDC's diverse range of over 1000 research publications, forms part of our Essential Oils and Plant Extracts R&D program, which aims for an Australian essential oils and plant extracts industry that has established international leadership in production, value adding and marketing.

Most of our publications are available for viewing, downloading or purchasing online through our website:
• downloads at .au/fullreports/index.html
• purchases at .au/eshop

Simon Hearn
Managing Director
Rural Industries Research and Development Corporation

Acknowledgements

Our gratitude and recognition are extended to Dr. Noel Davies (Central Science Laboratories, University of Tasmania), who provided considerable expertise in establishing procedures for chromatography mass spectrometry. The contribution of Mr Garth Oliver, Research Assistant, to extraction methodologies and experimental work-up cannot be overestimated, and we gratefully acknowledge his enthusiasm and novel approaches. Financial and 'in kind' support was provided by Essential Oils of Tasmania (EOT).

Abbreviations

ADI      Acceptable Daily Intake
AGAL     Australian Government Analytical Laboratories
ai       active ingredient
APCI     Atmospheric Pressure Chemical Ionisation
BAP      Best Agricultural Practice
CE       collision energy
DETA     diethylenetriamine
ECD      Electron Capture Detector
ESI      Electrospray Ionisation
FPD      Flame Photometric Detection
GC       Gas Chromatography
HR       High Resolution
LC       Liquid Chromatography
LC MSMS  Liquid Chromatography with detection monitoring the fragments of mass-selected ions
MRL      Maximum Residue Limit
MS       Mass Spectrometry
NRA      National Registration Authority
R.S.D.   Relative Standard Deviation
SFE      Supercritical Fluid Extraction
SIM      Single Ion Monitoring
SPE      Solid Phase Extraction
TIC      Total Ion Chromatogram

Contents

FOREWORD
ACKNOWLEDGEMENTS
ABBREVIATIONS
CONTENTS
EXECUTIVE SUMMARY
1. INTRODUCTION
   1.1 Background to the Project
   1.2 Objectives
   1.3 Methodology
2. EXPERIMENTAL PROTOCOLS & DETAILED RESULTS
   2.1 Method Development
   2.2 Monitoring of Harvests
   2.3 Production of Manual
3. CONCLUSIONS
IMPLICATIONS & RECOMMENDATIONS
BIBLIOGRAPHY

Executive Summary

The main objectives of this project were to continue method development for the detection of pesticide residues in essential oils, to apply those methodologies to screen oils produced by major growers in the industry, and to produce a manual consolidating and coordinating the results of the research. Method development focussed on the effectiveness of clean-up techniques, validation of existing techniques, and assessment of gas chromatography (GC) with detection by electron capture detectors (ECD) and flame photometric detectors (FPD), and of high pressure liquid chromatography (HPLC) with ion trap mass selective (MS) detection.

The capacity of disposable C18 cartridges to separate components of boronia oil was found to be limited: the majority of boronia components eluted on the solvent front, with little to no separation achieved. The cartridges were useful, however, in establishing the likely interaction of reverse phase (RP) C18 columns with components of essential oils under polar mobile phases. The loading of large amounts of oil onto RP HPLC columns presents the risk of permanently contaminating the bonded phases.
The lack of retention of components on disposable SPE C18 cartridges, despite the highly polar mobile phase, was a good indication that essential oils would not accumulate on RP HPLC columns.

The removal of non-polar essential oil components by solvent partitioning of distilled oils was minimal, with the recovery of pesticides equivalent to that recorded for the essential oil components. Application of this technique was of advantage, however, in the analysis of solvent-extracted essential oils such as those produced from boronia and blackcurrant.

ECD was successful in the detection of terbacil, bromacil, haloxyfop ester, propiconazole, tebuconazole and difenoconazole. However, analysis of pesticide residues in essential oils by GC ECD is not sufficiently sensitive to allow a definitive identification of any contaminant. As a screen, ECD is only effective in establishing that, in the absence of a peak eluting at the correct retention time, no gross contamination of an essential oil by pesticide residues has occurred. Where a peak is recorded with the correct elution characteristics, and is enhanced when the sample is fortified with the target analyte, a second means of contaminant identification is required. ECD, then, can only be used to rule out significant contamination and is not in itself adequate for a positive identification of pesticide contamination.

Benchtop GC daughter-ion mass spectrometry (MSMS) was assessed and was not considered practical for the detection of pesticide residues within the matrix of essential oils without comprehensive clean-up methodologies: the elution of all components into the mass spectrometer would quickly lead to detector contamination.

Method validation for the detection of six common pesticides in boronia oil using GC high resolution mass spectrometry was completed. An analytical technique for the detection of monocrotophos in essential oils was developed using LC with MSMS detection. The methodology included an aqueous extraction step which removed many essential oil components from the sample.

Further method development for LC MSMS included the assessment of electrospray ionisation (ESI) and atmospheric pressure chemical ionisation (APCI). For the chemicals trialed, ESI had limited application: no response was recorded for some of the most commonly used pesticides in the essential oil industry, such as linuron, oxyfluorfen and bromacil. Overall, there was very little difference in sensitivity between ESI and APCI. However, APCI was slightly more sensitive for the commonly used pesticides tebuconazole and propiconazole, and showed a response, though poor, to linuron and oxyfluorfen. In addition, APCI was the preferred ionisation method for the following reasons:
♦ APCI uses less nitrogen gas than ESI, making overnight runs less costly;
♦ APCI does not have the high back pressure associated with ESI, so it can be run in conjunction with a pressure-sensitive UV-VIS cell without risk of fracturing the cell.

Analytes that ionised in the negative APCI mode were incorporated into a separate screen which included bromacil, terbacil, and the esters of the fluazifop and haloxyfop acids. Further work using APCI in the positive mode formed the basis for the inclusion of monocrotophos, pirimicarb, propazine and difenoconazole into the standard screen already established.
Acephate, carbaryl, dimethoate, ethofumesate and pendimethalin all required further work for enhanced ionisation and/or improved elution profiles. Negative ionisation mode for APCI gave improved characteristics for dicamba, procymidone, MCPA and mecoprop.

The thirteen pesticides included in this general screen were monocrotophos, simazine, cyanazine, pirimicarb, propazine, sethoxydim, prometryn, tebuconazole, propiconazole, difenoconazole and the esters of fluroxypyr, fluazifop and haloxyfop. Bromacil and terbacil were not included, as both require negative ionisation and elute within the same time window as simazine, which requires positive ionisation; cycling the MS between the two modes was not practical.

The method validation was tested against three oils: peppermint, parsley and fennel. Detection limits ranged from 0.1 to 0.5 mgkg-1 within the matrix of the essential oils, with a linear relationship established between pesticide concentration and peak height (r2 greater than 0.997) and repeatabilities, as described by the relative standard deviation (r.s.d.), ranging from 3 to 19%. The type of oil analysed had minimal effect on the response function as expressed by the slope of the standard curve.

Pesticides with a carboxylic acid moiety, such as fluazifop, haloxyfop and fluroxypyr, present several complications for analytical method development. The commercial preparations usually carry the carboxylic acid in ester form, which is hydrolysed to the active acidic form on contact with soil and vegetation. In addition, the esters may be present in several forms, such as the ethoxyethyl or butyl esters. Detection using ESI was tested; preliminary results indicated that ESI is unsuitable for the haloxyfop and fluroxypyr esters. Fluazifop possessed good ionisation characteristics using ESI, with responses approximately thirty times those recorded for haloxyfop. Poor chromatography and response necessitated an improved mobile phase, and the effect of pH on elution characteristics was considered the most critical parameter; the inclusion of acetic acid improved peak resolution.

The LC MSMS method for the detection of dicamba, fluroxypyr, MCPA, mecoprop and haloxyfop in peppermint and fennel distilled oils underwent the validation process. Detection limits ranged from 0.01 to 0.1 mgkg-1.

Extraction protocols and LC MSMS methods for the detection of paraquat and diquat were developed. ESI produced excellent responses for both paraquat and diquat after some modification of the mobile phase. Extraction methodologies using aqueous phases were developed; extraction with carbonate buffer proved the most effective in terms of recovery and robustness. A total ion chromatogram of the LC run of an aqueous extract of essential oil was recorded, and detection using a photodiode array detector confirmed that very little essential oil matrix was co-extracted. The low background noise indicated that samples could be introduced directly into the MS. This presented a most efficient and rapid means of analysis for paraquat and diquat, avoiding the need for specialised columns or mobile phase modifiers to instigate ion exchange.

The adsorption of paraquat and diquat onto glass and other surfaces was reduced by the inclusion of diethylenetriamine (DETA), which preferentially accumulates on the surfaces of sample containers, competitively binding to the adsorption sites.
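The figures of merit quoted for these validations, standard-curve linearity (r2) and repeatability (r.s.d.), are simple to compute from injections of spiked oils. The following minimal sketch illustrates that arithmetic with hypothetical peak heights and concentrations; it is not the report's data or software.

import numpy as np

conc = np.array([0.1, 0.25, 0.5, 1.0, 2.0])               # spike level, mg/kg in oil (hypothetical)
peak = np.array([410.0, 1030.0, 2110.0, 4190.0, 8350.0])  # peak height, arbitrary units (hypothetical)

# Least-squares standard curve: peak = slope * conc + intercept
slope, intercept = np.polyfit(conc, peak, 1)
fitted = slope * conc + intercept
r2 = 1 - np.sum((peak - fitted) ** 2) / np.sum((peak - peak.mean()) ** 2)

# Repeatability: relative standard deviation of replicate injections
# at a single spike level (hypothetical replicates).
replicates = np.array([2110.0, 2040.0, 2230.0, 2150.0, 1990.0])
rsd = 100 * replicates.std(ddof=1) / replicates.mean()

print(f"slope = {slope:.1f}, r^2 = {r2:.4f}, r.s.d. = {rsd:.1f}%")

A validated screen of the kind described above would report the slope (response function), r2 and r.s.d. separately for each oil type, which is how the minimal matrix effect on slope was established.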
All glassware used in the paraquat / diquat analysis was washed in a 5% solution of 0.1 M DETA, DETA was included in all standard curve preparations, oils were extracted with aqueous DETA, and the mobile phase was changed to 50:50 DETA / methanol. The stainless steel tubing on the switching valve was replaced with teflon, further improving reproducibility. Method validation of the paraquat and diquat analysis was undertaken using the protocols established. The relationship between analyte concentration and peak area was not linear at low concentrations, with adsorption more pronounced for paraquat, such that the response for this analyte was half that seen for diquat at the 0.1 mgkg-1 level.

The development of a method for the detection of the dithiocarbamate mancozeb was commenced. Disodium N,N'-ethylenebis(dithiocarbamate) was synthesised as a standard for the derivatised final analytical product, and an LC method with MSMS detection was successfully completed. However, the phase transfer reagent required in the derivatisation step, tetrabutylammonium hydrogen sulfate, contaminated the LC MSMS system such that any signal from the target analyte was masked. Alternatives to the phase transfer reagent are now being investigated.

Monitoring of harvests was undertaken for the years spanning 1998 to 2001. Screens were conducted covering a range of solvent-extracted and distilled oils. Residues tested for included tebuconazole, simazine, terbacil, bromacil, sethoxydim, prometryn, oxyfluorfen, pirimicarb, difenoconazole, the herbicides with acidic moieties, and paraquat and diquat. Problems continued for residues of propiconazole in boronia in the 1998/1999 year, with levels up to 1 mgkg-1 still being detected. Prometryn residues were detected in a large number of samples of parsley oil.

Finally, the information gleaned over years of research was collated into a manual designed to allow intending analysts to determine the methodologies and equipment most suited to the pesticide of interest and the applicability of generally available analytical equipment.

1. Introduction

1.1 Background to the Project

Research by the Horticultural Research Group at the University of Tasmania into pesticide residues in essential oils has been ongoing for several years and has dealt with the problems specific to the analysis of residues within the matrix of essential oils. Analytical methods for pesticides have been developed exploiting the high degree of specificity and selectivity afforded by high resolution gas chromatography mass spectrometry. Standard curves, reproducibility and detection limits were established for each. Chemicals otherwise not amenable to gas chromatography were derivatised and incorporated into a separate screen to cover pesticides with acidic moieties.

Research has also been conducted into low resolution GC mass selective detection (MSD) and GC ECD.
Low resolution GC MSD achieved detection to levels of 1 mgkg-1 in boronia oil, whilst analysis using GC ECD required a clean-up step to effectively detect halogenated chemicals below 1 mgkg-1. Dithane (mancozeb) residues were digested using acidified stannous chloride, and the carbon disulphide generated from this reaction was analysed by GC coupled to FPD in the sulphur mode.

Field trials in peppermint crops were established in accordance with the guidelines published by the National Registration Authority (NRA), monitoring the dissipation of Tilt and Folicur residues in peppermint leaves, and the co-distillation of these residues with hydro-distilled peppermint oils was assessed.

Development of extraction protocols, analytical methods, harvest monitoring and field trials continued and was detailed in a subsequent report. Solvent-based extractions and supercritical fluid extraction (SFE) were found to have limited application in the clean-up of essential oils.

In conjunction with Essential Oils of Tasmania (EOT), the contamination risk associated with the introduction of a range of herbicides was assessed through a series of field trials. This required analytical method development to detect residues in boronia flowers, leaf and oil. The methodology for a further nine pesticides was successfully applied; detection limits for these chemicals ranged from 0.002 mgkg-1 to 0.1 mgkg-1. In addition, methods were developed to analyse for herbicides whose active ingredients (ai) contain acidic functional groups.

Two methods of pesticide application were trialed. Directed sprays were applied to the stems and leaves of weeds at the base of boronia trees throughout the trial plot; cover sprays were applied over the entire canopy. For all herbicides for which significant residues were detected, cover sprays resulted in contamination levels in some instances ten times those occurring as a result of directed spraying. Chlorpropham, terbacil and simazine presented potentially serious residue problems, with translocation of the chemical from vegetative material to the flower clearly evident.

Directed spray applications of diuron and dimethenamid presented only low residue levels in extracted flowers, with adequate control of weeds. Oxyfluorfen and the mixture of bromacil and diuron (Krovar) presented only low levels of residues when used as directed sprays and were effective as both post- and pre-emergent herbicides. Only very low levels of residues of both sethoxydim and norflurazon were detected in boronia oil produced from crops treated with directed spray applications. Sethoxydim was effective as a cover spray for grasses, whilst norflurazon showed potential as a herbicide to be used in combination with other chemicals such as diuron, paraquat and diquat. Little contamination of boronia oils by herbicides with acidic moieties was found; this advantage, however, appears to be offset by relatively poor weed control. Both pendimethalin and haloxyfop showed good weed control; both, however, present problems with chemical residues in boronia oil and should only be used as directed sprays.

The stability of tebuconazole, monocrotophos and propiconazole in boronia under standard storage conditions was investigated. Field trials of tebuconazole and propiconazole were established in commercial boronia crops and the dissipation of both was monitored over time.
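Dissipation data of this kind are commonly summarised by fitting a first-order decay model and reporting a half-life. The report does not state its fitting procedure, so the sketch below is a generic illustration on hypothetical residue data, not the trial results.

import numpy as np

days = np.array([0, 7, 14, 28, 42])              # days after application (hypothetical)
residue = np.array([5.2, 3.1, 1.9, 0.8, 0.3])    # mg/kg in leaf (hypothetical)

# First-order model: C(t) = C0 * exp(-k t), so ln C is linear in t.
slope, ln_c0 = np.polyfit(days, np.log(residue), 1)
k = -slope
half_life = np.log(2) / k

print(f"rate constant k = {k:.3f} per day, half-life = {half_life:.1f} days")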
The amount of pesticide detected in the oils was related to that originally present in the flowers from which the oils were produced.

Experiments were conducted to determine whether the accumulation of terbacil residues in peppermint was retarding plant vigour. The levels recorded in the peppermint leaves were comparatively low, and it is unlikely that terbacil carry-over is the cause of the lack of vigour in young peppermint plants.

Boronia oils produced in 1996, 1997 and 1998 were screened for pesticides using the analytical methods developed. High levels of propiconazole residues were shown to persist in crops harvested up until 1998. Field trials have shown that propiconazole residues should not present problems if the fungicide is used as recommended by the manufacturers.

1.2 Objectives

♦ Provide the industry, including the Standards Association of Australia Committee CH21, with a concise practical reference, immediately relevant to the Australian essential oil industry.
♦ Facilitate the transfer of technology from a research base to practical application in routine monitoring programs.
♦ Continue the development of analytical methods for the detection of metabolites of the active ingredients of pesticides in essential oils.
♦ Validate the methods developed.
♦ Provide industry with data supporting assurances of quality for all exported products.
♦ Provide a benchmark from which Australia may negotiate the setting of realistic maximum residue limits (MRLs).
♦ Determine whether the rate of uptake is related to the concentration of active ingredient on the leaf surface, which may establish the minimum application rates for effective pest control.

1.3 Methodology

Three approaches were used to achieve the objectives set out above.

♦ Continue the development and validation of analytical methods for the detection of pesticide residues in essential oils. Analytical methods were developed using gas chromatography high resolution mass spectrometry (GC HR MS), GC ECD, GC FPD and high pressure liquid chromatography with MSMS detection.
♦ Provide industry with data supporting assurances of quality for all exported products.
♦ Coordinate research results into a comprehensive manual outlining practical approaches to the development of analytical procedures.

One aspect of the commissioning of this project was to provide a cost-effective analytical resource to assess the degree of pesticide contamination already occurring in the essential oils industry under standard pesticide regimens. Oil samples from annual harvests were analysed for the presence of pesticide residues. Data from preceding years were collated to determine progress, or otherwise, in the application of best agricultural practice (BAP).

2. Experimental Protocols & Detailed Results

The experimental conditions and results are presented under the following headings:
♦ Method Development
♦ Monitoring of Commercial Harvests
♦ Production of a Manual

2.1 Method Development

Method development focussed on the effectiveness of clean-up techniques, validation of existing techniques, and assessment of GC ECD, GC FPD and high pressure liquid chromatography with ion trap MSMS detection.

2.1.1 Clean-up Methodologies

2.1.1.i Application of disposable SPE cartridges in the clean-up of pesticide residues in essential oils

Literature reviews provided limited information regarding the separation of contaminants within essential oils.
The retention characteristics of disposable C18 cartridges were therefore trialed.

Experiment 1

Aim: To assess the capacity of disposable C18 cartridges to separate boronia oil components.

Experimental: Boronia concrete (49.8 mg) was dissolved in 0.5 mL of acetone and 0.4 mL of chloroform was added. 1 mg of octadecane was added as an internal standard. A C18 Sep-Pak Classic cartridge (short body) was pre-conditioned with 1.25 mL of methanol, passed through the column at 7.5 mLmin-1, followed by 1.25 mL of acetone at the same flow rate. The boronia sample was then applied to the column at 2 mLmin-1 and eluted with 1.25 mL of acetone / chloroform (5/4), then with a further 2.5 mL of chloroform. Five fractions of 25 drops each were collected. The fractions were analysed by GC FID using the following parameters:

GC: Hewlett Packard 6890
column: Hewlett Packard 5MS, 30 m, i.d. 0.32 mm
carrier gas: instrument grade nitrogen
injection volume: 1 µL (split)
injector temp: 250°C
detector temp: 280°C
initial temp: 50°C (3 min), 10°Cmin-1 to 270°C (7 min)
head pressure: 10 psi

Results: Table 1 records the percentage of volatiles detected in the fractions collected.

Fraction                  1     2     3     4     5
% components eluting      18    67    13    2     -
% monoterpenes            36    15    -     -     -
% sesquiterpenes          33    65    2     -     -
% high M.W. components    1     43    47    9     -

Table 1. Percentage volatiles eluting from SPE C18 cartridges

Discussion: The majority of boronia components eluted on the solvent front, effecting minimal separation. This area of SPE clean-up of essential oils requires a wide-ranging investigation, varying parameters such as cartridge type and polarity of the mobile phase.

Experiment 2

Aim: For the development of LC MSMS methods without clean-up steps, the potential for oil components to accumulate on the reverse phase (RP) column must be assessed. The retention of essential oil components on SPE C18 cartridges, using the same mobile phase as that to be used in the LC system, would provide a good indication of the risk of contaminating the LC columns with oil components.

Experimental: Parsley oil (20-30 mg) was weighed into a GC vial. 200 µL of a 10 µgmL-1 solution (equivalent to 100 mgkg-1 in oil) of each of sethoxydim, simazine, terbacil, prometryn, tebuconazole and propiconazole was used to spike the oil, which was then dissolved in 1.0 mL of acetonitrile. The solution was slowly introduced onto the C18 cartridge (Waters Sep-Pak 'classic' C18 #51910) using a disposable luer lock 10 mL syringe under constant manual pressure, and eluted with 9 mL of acetonitrile. Ten 1 mL fractions were collected and transferred to GC vials. 1 mg of octadecane was added to each vial and the samples were analysed by GC FID under the conditions described in Experiment 1.

The experiment was repeated using C18 cartridges which had been pre-conditioned with distilled water for 15 min. Again, parsley oil spiked with pesticides was eluted with acetonitrile and 5 x 1 mL fractions were collected.

Results: The majority of oil components and pesticides eluted from the C18 cartridge in the first two fractions. Little to no separation of the target pesticides from the oil matrix was achieved. Table 2 lists the distribution of essential oil components in the fractions collected.
Not pre-conditioned:
Fraction                  1     2     3     4     5
% components eluting      18    67    13    2     -
% monoterpenes            36    15    -     -     -
% sesquiterpenes          33    65    2     -     -
% high M.W. components    1     43    47    9     -

Water pre-conditioned:
Fraction                  1     2     3     4     5
% components eluting      35    56    8     1     2
% monoterpenes            30    68    -     -     -
% sesquiterpenes          60    39    1     0     -
% high M.W. components    0     50    42    7     -

Table 2. Percentage volatiles eluting from SPE C18 cartridges

Figure 1 shows a histogram of the percentage distribution of components from the oil in each of the fractions.

Figure 1. Histogram of the percentage of volatiles of distilled oils in each fraction eluted on SPE C18 cartridges (not pre-conditioned)
Figure 2. Histogram of the percentage of volatiles of distilled oils in each fraction eluted on SPE C18 cartridges (pre-conditioned)

Discussion: The chemical properties of many of the target pesticides, including polarity, solubility in organic solvents and chromatographic behaviour, are similar to those of the majority of essential oil components. This precludes the effective separation of analytes from such matrices by standard techniques, whose major focus is the pre-concentration of pesticide residues from water or water-based vegetative material. However, this experiment provided a good indication that under HPLC conditions, where a reverse phase C18 column is used in conjunction with acetonitrile / water mobile phases, essential oil components do not remain on the column.
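The percentage distributions reported in Tables 1 and 2 follow from comparing GC FID peak areas across fractions after normalising each fraction to its octadecane internal standard. A minimal sketch of that bookkeeping, using hypothetical peak areas rather than the experimental data:

# Percentage distribution of a component class across SPE fractions,
# normalising each fraction's GC FID areas to its octadecane internal
# standard. All areas below are hypothetical illustrative values.
fractions = {
    # fraction number: (summed monoterpene peak area, octadecane area)
    1: (5200.0, 980.0),
    2: (2150.0, 1010.0),
    3: (160.0, 995.0),
    4: (0.0, 990.0),
    5: (0.0, 1005.0),
}

# Normalise to the internal standard, then express each fraction as a
# percentage of the total recovered amount.
normalised = {f: area / istd for f, (area, istd) in fractions.items()}
total = sum(normalised.values())
percent = {f: 100 * v / total for f, v in normalised.items()}

for f in sorted(percent):
    print(f"fraction {f}: {percent[f]:.0f}% of monoterpenes")

Normalising to the internal standard before computing percentages corrects for injection-to-injection variation between the fraction analyses.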
Managing Culture

MANAGING CULTURAL INTEGRATION IN CROSS-BORDER MERGERS AND ACQUISITIONS

Daniel R. Denison, Bryan Adkins and Ashley M. Guidroz

Advances in Global Leadership, Volume 6, 95-115
Copyright © 2011 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 1535-1203/doi:10.1108/S1535-1203(2011)0000006008

ABSTRACT

Cross-border M&A has become one of the leading approaches for firms to gain access to global markets. Yet there has been little progress in the research literature exploring the role that culture may play in the success of these ventures. Poor culture-fit has often been cited as one reason why M&A has not produced the outcomes organizations hoped for (Cartwright & Schoenberg, 2006). Cross-border M&A has the added challenge of having to deal with both national and organizational culture differences. In this chapter we review the literature on cultural integration in cross-border M&A and provide a framework designed to help manage the integration process throughout the M&A lifecycle. This framework presents culture assessment and integration as a crucial component to reducing poor culture-fit as a barrier to M&A success.

Mergers and acquisitions (M&A) have become a central part of most corporate growth strategies, and an increasing portion of that M&A activity now spans national borders. Indeed, beyond a certain scale, one might say that all M&A is now cross-border M&A. For example, even a merger between two large American corporations such as HP and EDS requires an integration plan that affects operations in many countries. Furthermore, the success of the merger depends not only on the integration of operations at the center, where the national culture is presumably the same, but also on the integration process in many locations around the world where the national cultures differ from that in the center. Despite this trend, relatively little research has focused directly on cross-border M&A, and even less has addressed the topic of cultural integration in cross-border deals. This chapter explores these issues by first reviewing the existing literature on cross-border M&A, and then presenting a framework that highlights some of the important dynamics in the cultural integration process. This analysis is then used to pose both a set of research questions for future study and a set of practical recommendations for managing cultural integration in future cross-border mergers.

UNDERSTANDING CROSS-BORDER MERGERS AND ACQUISITIONS

Cross-border mergers and acquisitions (M&A) are a large component of global foreign direct investment (FDI) activities (United Nations Conference on Trade and Development [UNCTAD], 2006). Cross-border M&As are often used as a means for gaining entry into a foreign market, a method for engaging in a dynamic learning process, or a value-creating strategy (Shimizu, Hitt, Vaidyanath, & Pisano, 2004). Although there are many similarities between cross-border M&As and within-border M&As, the international scope poses additional challenges to the cultural integration process (Hofstede, 1980; Shimizu et al., 2004). Acquiring companies outside of one's own country carries what Zaheer (1995) called a "liability of foreignness": the costs incurred by a firm operating in a foreign market in addition to what a local firm would incur. Cross-border M&As also have the added challenge of double-layered acculturation, where national culture must be integrated in addition to organizational culture (Barkema, Bell, & Pennings, 1996). Despite these challenges, cross-border M&As continue to be a popular business
strategy. The financial value of M&A activity has steadily increased from the late 20th century into the present decade. Global FDI activity peaked at US$1.7 trillion during the fiscal year of 2008, with cross-border M&As accounting for US$707 billion (UNCTAD, 2010).

One challenge inherent in the M&A literature that we reviewed for this chapter is the fragmentation of the research across disciplines. Researchers in finance, strategy, organizational behavior, and human resources have all studied M&A from different perspectives. Each field has produced a considerable amount of research delineating the factors that lead to the success or failure of M&A. Poor culture-fit has been an oft-cited reason by researchers from many different disciplines, albeit without much statistical evidence (Cartwright & Schoenberg, 2006). In recent years, researchers have begun to develop more comprehensive assessments of the role of organizational culture in the M&A process (Cartwright & Schoenberg, 2006; Stahl & Voigt, 2008). One review concludes that the study of culture in M&A is still in its infancy and that current research is too inconsistent to support clear conclusions about the positive or negative role that culture can play during M&A (Teerikangas & Very, 2006). These authors make several propositions about the culture literature in the M&A field and conclude that (i) culture is a multilevel variable that includes organizational, industrial, functional, national, occupational, and professional cultures; (ii) these cultures are interconnected and present a dynamic challenge to organizations in the M&A process; and (iii) the quality of the firm's integration strategy will influence the effect that culture has on firm performance (Teerikangas & Very, 2006). They conclude their paper stating that "instead of asking if 'yes or no' cultural differences impact the performance of M&A, researchers should focus on 'how' do they impact the performance of M&A" (Teerikangas & Very, 2006, p. 46). We agree that culture does play a key role in the M&A process and that more research needs to be conducted to understand "how" cultural integration affects M&A performance. We also contend that the outcomes could be improved if culture is positioned as a central component of the M&A process from the beginning.

To help illustrate the multilevel nature of the cross-border cultural integration process, consider the example of the merger between Finnish Merita Bank and Swedish Nordbanken described by Piekkari, Vaara, Tienari, and Santti (2005). This merger experienced many cultural integration challenges due to the decision to adopt Swedish as the corporate language. This decision had a disintegrating and fragmenting effect among employees, particularly because it disadvantaged those Finnish employees who did not speak Swedish. It negatively impacted performance appraisals, language training and management development, and the career paths and promotion of native Finnish-speaking employees who could not operate as well in the Swedish-language environment (Piekkari et al., 2005).
The authors write that the "chosen corporate language is likely to send an implicit symbolic message regarding the division of power between the merging parties" (Piekkari et al., 2005, p. 331). In this situation, the decision to adopt Swedish as the corporate language sent an implicit signal that the needs of Finnish employees were not at the forefront of management's attention. This also struck a deep chord with Finnish employees, since Finland was ruled by Sweden for nearly 500 years, up until 1809; thus there is a long history of the Swedish business elite requiring Finns to speak Swedish in order to survive economically.

The Finnish-Swedish merger also brings to mind the importance of communication with employees, particularly those employed at lower levels or within front-line positions (Budhwar, Varma, Katou, & Narayan, 2009). Pioch's (2007) and Larsson and Lubatkin's (2001) case studies of United States, United Kingdom, and Swedish cross-border M&As highlight the importance of low-level employee integration. As an example, Pioch (2007) discovered that a management-imposed corporate culture was not well received by all employees within a UK-based retailer that was acquired by a larger international corporation. Employees consented to the company values, but overall cultural integration was not achieved (Pioch, 2007). Larsson and Lubatkin (2001) observed similar findings in their study of 50 cross-border and domestic M&As in the United States and Sweden. They stress that integration should include a balance of company-sponsored socialization activities, such as introduction programs, training, cross visits, retreats, and celebrations, as well as allowing for employee autonomy to create a joint organizational culture (Larsson & Lubatkin, 2001).

Several additional themes emerge from the research literature that are of particular interest for cross-border M&A. These themes include (i) the firm's previous experience, in M&A activity in general and in previous business activity within the target country; (ii) the similarity in national culture between the host and target company; and (iii) the integration strategies adopted during the M&A process. These variables can all impact the success of cross-border M&As.

Experience Counts

Researchers have shown that both prior experience in M&A activity and prior experience within the target country can increase the frequency and success of subsequent cross-border M&A activities (Barkema et al., 1996; Finkelstein & Haleblian, 2002; Haleblian & Finkelstein, 1999; Vermeulen & Barkema, 2001; Very & Schweiger, 2001). Using organizational learning
theory as a framework (Levitt & March, 1988), Collins, Holcomb, Certo, Hitt, and Lester (2009) reasoned that as firms engage in M&A activity, either domestically or internationally, they learn a great deal about what is needed to make an M&A successful and use that experience to pursue additional international M&A activity. Using a sample of Fortune 500 firms, they found that prior domestic and international acquisitions influenced the likelihood of acquisitions in foreign markets by US-based firms. They also found that prior international experience and prior experience within the target country were stronger predictors of subsequent international acquisitions, generally, and acquisitions within a target country, specifically (Collins et al., 2009). Similar patterns have been observed in the Chinese business market: equity joint ventures (EJVs), which were once the primary vehicle for foreign firms to enter the Chinese market, have decreased over the years, giving rise to more M&A activity (Xia, Tan, & Tan, 2008). Nonetheless, firms entering into EJVs in China learned a great deal about how to conduct business within that marketplace, making it more likely that firms would engage in M&As for their subsequent business ventures. Thus, previous M&A experience prepares firms for the challenges involved in making a merger work. Firms that already have specific experience within a target country will already be familiar with the legal and regulatory requirements of that country, for example, and will not be held back by their learning curve.

Which is More Important: Similarities or Differences?

A second line of research unique to cross-border M&As in particular is understanding the effect of differences in national culture. Cultural familiarity theory argues that firms are less likely to invest in organizations in culturally distant countries, and subsequently have poorer performance postintegration (Lee, Shenkar, & Li, 2008; Li & Guisinger, 1991; Shenkar, 2001). The resource-based view of the firm, in contrast, hypothesizes exactly the opposite: that more culturally distant M&As will actually be more successful because the cultural differences enhance potential synergies between the two partners (Chakrabarti, Mukherjee, & Jayaraman, 2009). The research on this issue, however, has been inconclusive. Datta and Puia (1995) found that cultural distance had a negative effect on subsequent shareholder wealth of the acquiring firm, whereas Chakrabarti and colleagues (2009) found a positive effect of cultural distance on firm performance 36 months after integration. Slangen (2006) argues that it is not the distance between the cultures that impacts performance but the level of integration that the firms seek to achieve. For example, using a sample of Dutch acquisitions across 30 countries, Slangen (2006) found that higher levels of integration negatively impacted firm performance as cultural distance increased.

Understanding the Choices for Integration Strategy

This research on the cultural distance between countries reminds us that these cultural differences need to be viewed in the context of a more general approach to integration. For this, we look to the classic typology developed by Mirvis and Marks (1992). They viewed integration in terms of the degree of change required by the acquiring firm and the acquired firm. This typology distinguishes "stand alone" mergers that require little change by either firm, from "absorption" mergers that require fundamental change in the acquired firm but little change in the acquiring firm, from "reverse acquisitions" that require a high degree of change in the acquiring firm as they adopt the ways of the acquired firm. Finally, they distinguish these from "best of both" mergers requiring substantial changes in both firms, and "transformations" that require more fundamental change for both firms (Fig. 1).

Fig. 1. Different Types of Mergers. Source: Mirvis and Marks (1992). (A two-by-two matrix of degree of change in the acquired company against degree of change in the acquiring company, low to high, with cells Stand Alone, Absorption, Reverse Acquisition, Best of Both, and Transformation.)

Mirvis and Marks go on to elaborate the options for integration outlined in Fig. 2. As two firms go from being separate entities in a holding company with little interdependence to being fully merged and consolidated, they have a series of intermediate choices. Like the typology in Fig. 1, this framework helps to clarify two points: First, this framework makes it clear that there are many different approaches to choose in the
integration process. The degree of integration and the speed of integration are, in most cases, well within the control of the leaders of the acquiring firm. The biggest problems often come when the choices, and their consequences, are not understood clearly from the outset. Mergers that began as "absorptions" but turned into "reverse mergers" as they unfolded are likely to encounter problems. Or, mergers that were presented in the beginning as a "merger of equals" in order to disguise the fact that one (or both) firms was trying to dominate have generated some spectacular failures. Making a clear choice, with full understanding of the consequences, and then clearly communicating this throughout the organization seems to be a prerequisite for success.

Fig. 2. How Much Integration? Source: Mirvis and Marks (1992). (A continuum of structures and management implications: decentralized planning and monitoring with autonomy of line management; centralized planning and monitoring with coordination of line management; and integrated operations and controls with cooperation of line management; spanning corporate functions, production or marketing, and companywide integration.)

Second, the Mirvis and Marks framework also poses a key dilemma in planning the integration process in any merger: Integration requires a lot of resources. But, at the same time, the creation of new dynamic capabilities requires integration. Thus, combining two organizations into one holding
company requires minimal resources. But, in turn, it can also not be expected to generate any new dynamic capabilities. A full transformation with integrated operational control has the potential to create many new dynamic capabilities, but it will not be cheap! This dilemma is compounded when a merger spans national boundaries. The added complexity of national differences adds a third dimension of cultural distance to the Mirvis and Marks matrix presented in Fig. 1 and underscores the point that the quality and quantity of organizational resources that can be devoted to integrating an acquisition will be a key determinant of its success. How much complexity is the acquiring organization capable of managing?

In addition to these basic integration strategy decisions, it is also important to consider the integration process itself. Integration is a multistage process. In the next section, we consider the role that organizational culture can play during each of these stages, and how cross-border M&As in particular can add both increased opportunity and increased complexity.

MANAGING THE CULTURAL INTEGRATION PROCESS IN MERGERS AND ACQUISITIONS

A number of approaches have been proposed for managing postmerger integration across borders. Quah and Young (2005), for example, prescribe a five-year timeline that divides postacquisition activity into four phases, beginning with very slow absorption for the first year postacquisition and accelerating until the two firms are fully integrated. Similarly, our model in Fig. 3 views integration as a multistage process in which organizational culture plays an important role at each stage. Considering cultural issues early in the M&A process can increase the likelihood of a positive outcome. A crucial, and often overlooked, first step is to begin the M&A process with an understanding of how M&A activity fits with the culture and growth strategy of the organization. Beginning here ensures that cultural issues remain on the table through the acquisition and integration process rather than emerging toward the end, when an integrated and unified organization is desired.

Growth Strategy

The starting point
specific growth DANIEL R.DENISON ET AL.104Managing Cultural Integration105 strategy an M&A target might support,that information may include opportunities to quickly expand product lines or move into new geographies or perhaps to eliminate a competitive threat.As information is collected regarding the potential target’s operations,it is also important to consider the expected degree of integration that the merger or acquisition will require.Will it be a holding company that is allowed a great deal of operating independence and thus less intensive cultural integration?Is it being absorbed into the acquiringfirm in a way that will require significant change for the acquiredfirm?Which parts of the acquiring organization will be most influenced by the acquisition?Will the merger or acquisition require the transformation of the cultures of allfirms involved?Other factors to consider for the success of cross-border M&As include thefirm’s previous business experience outside the borders of their country. Previous experience within a foreign environment,either through partner-ships,joint ventures,or M&As,has been shown to increase the likelihood and success of later M&A activity(Barkema et al.,1996;Finkelstein& Haleblian,2002;Haleblian&Finkelstein,1999;Vermeulen&Barkema, 2001;Very&Schweiger,2001).By the same token,the acquiredfirm’s experience counts,too,both with regard to M&A in general,and in regard to their experience with the national culture of the acquiringfirm.Firms with previous joint venture activity in China,for example,had already moved up the learning curve on how to conduct business within that country and were more likely to engage in successful M&A activity within China for their subsequent business ventures(Xia et al.,2008).Identifying potential targets might include,for example,compiling all of the foreign locations thefirm has worked within and evaluating which of those markets would be best to target.Due DiligenceDuring the due diligence period,a targetfirm has been identified and leaders of the respectivefirms begin sharingfinancial and legal information to guide the decision regarding the potential benefits and liabilities of the merger. 
This is an opportune time to investigate the culture of the target organization and identify similarities and differences between the twofirms.During the due diligence phase of Twentieth Century Advisors acquisition of Benham Capital Management Group,for example,the twofirms exchanged corporate values statements that revealed that they both shared some of the same guiding principles(Marks&Mirvis,2001).This made the subsequent integration ofthe two firms much easier as they already had shared perceptions on corporate values and behavior.Thus,the due diligence phase is a critical stage in the M&A process,and a time when cultural due diligence should be central within the overall due diligence process.Including human resource or organizational development experts with-in the M&A team is crucial to ensuring that cultural assessment is not overlooked (Marks &Mirvis,2001).Harding and Rouse (2007)recommend that all organizations take proactive steps to evaluate the culture of organizations they are considering acquiring and suggest that M&A teams either conduct interviews or use a cultural assessment tool to gather information.Developing a detailed understanding of how the leaders and employees in the firms develop strategies and goals,engage with the marketplace,and reward behaviors will offer critical insights regarding the potential synergies and areas of conflict that might arise during the cultural integration effort.Due diligence with respect to the cross-border manage-ment capabilities of both firms is particularly important at this stage.As another example,during the due diligence phase of Dow Chemical’s 1999acquisition of Union Carbide,the 25-person integration team did an assessment of their perceptions of the culture of both organizations.They also did a careful comparison of the perceptions of the Dow members of the integration team with the Carbide members of the integration team.The convergence of the perspectives of these two halves of the integration team was taken as a clear indication that the integration team had reached a consensus regarding the strengths and challenges of the two parts of their future organization.Cross-border acquisitions often have a way of turning from ‘‘absorption’’to ‘‘reverse acquisition’’in the acquired firm’s home country as the merger unfolds.The view from HQ may regard ownership as the most important aspect of control,or may see the adoption of global processes and procedures as the most important objective,and may not appreciate the realities of the business on the ground,and the importance of these activities for building the corporation’s presence and brand in the new marketplace.It is extremely important for these potential dynamics to be anticipated by the integration team.Cultural IntegrationLarsson and Finkelstein’s (1999)case study research provides some of the earliest documentation on the importance of postacquisition integration to DANIEL R.DENISON ET AL.106Managing Cultural Integration107 the success of a merger.In their analysis,integration was found to be the single most important predictor of synergy realization in the M&As they studied.Interestingly enough,theirfindings seem to apply equally well to both within-country and to cross-border mergers.They hypothesized that cross-border M&As would negatively affect organizational integration, positively impact the potential for combination in the merger,but increase employee resistance.The results,however,showed no relationship between cross-border M&As and increased employee resistance 
or a decrease in organizational integration,but did show a positive relationship with combination potential,a precursor to positive synergy.Thus,integration seems equally important for both within-border and cross-border M&As, but cross-border M&As do seem to provide increased opportunities for expanding new market access or promoting complementary globalization synergies.As mentioned previously,the decision to adopt Swedish as the company language in the Merita Bank–Nordbanken merger had long-lasting effects on Finnish employees(Piekkari et al.,2005).But national culture does not always have to be a barrier to integration.Slangen(2006) argues that national culture only hinders M&A success when acquisition companies are too tightly integrated into the acquirer.More research is needed in this area,but the growth strategy of thefirm will largely influence the integration strategy that is adopted.There are a number of factors that can affect postacquisition integration. Epstein(2004)suggests that successful integrations includefive components: a coherent integration strategy,a strong integration team,frequent communication,speed in implementation,and measurement alignment across all departments to gauge success.Provided that some culture data has been collected during the due diligence phase,this information can be used by leaders and integration teams to create clarity and alignment among the employees regarding direction,processes,and expected behavior.Leader-ship team alignment is also important to assure that common messages and priorities are communicated,and that relationship-building activities and role-clarity efforts are implemented.If the due diligence phase does not include a detailed examination of the respective cultures,the period just after closing and prior to the integration activities should be used to gather important data about how the respective organizations operate.That data highlights the differences and possible synergies of thefirms and is used to proactively facilitate the culture integration process.As an example,Fig.4includes an analysis of our culture data from a merger in the petrochemical industry.1The acquiringfirm,in this case,is an American petrochemicalfirm that has acquired a German speciality。
Product Carbon Footprint and Neutralization Accounting Guide

Carbon Footprint and Neutralization Accounting Guide

Introduction

In the face of the escalating climate crisis, organizations are increasingly recognizing the importance of understanding and mitigating their environmental impact. Carbon footprinting and neutralization accounting provide a comprehensive framework for quantifying and reducing the greenhouse gas (GHG) emissions associated with an organization's activities. This guide outlines the principles, methodologies, and best practices for developing and implementing a robust carbon footprint and neutralization accounting program.

1. Establishing a Carbon Footprint

The first step towards carbon footprinting is establishing an accurate baseline of an organization's GHG emissions. This involves identifying all relevant emission sources and quantifying their contributions using recognized industry standards and methodologies, such as the Greenhouse Gas Protocol (GHG Protocol). Key emission sources include:

Scope 1: Direct emissions from owned or controlled sources (e.g., fuel combustion, industrial processes).
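At its core, the quantification step is activity data multiplied by an emission factor, summed across sources and expressed in tonnes of CO2-equivalent (tCO2e). The sketch below illustrates that arithmetic for a small Scope 1 inventory; the emission factors are illustrative placeholders, not published GHG Protocol values.

# Scope 1 inventory sketch: activity data * emission factor, summed to tCO2e.
# The emission factors below are illustrative placeholders; a real inventory
# would take them from published national or GHG Protocol factor tables.
activity_data = {
    "diesel_litres": 12_000.0,
    "natural_gas_kwh": 250_000.0,
}

emission_factors_kg_co2e = {
    "diesel_litres": 2.68,     # kg CO2e per litre (illustrative)
    "natural_gas_kwh": 0.18,   # kg CO2e per kWh (illustrative)
}

scope1_kg = sum(
    quantity * emission_factors_kg_co2e[source]
    for source, quantity in activity_data.items()
)
print(f"Scope 1 total: {scope1_kg / 1000:.1f} tCO2e")

Keeping activity data and emission factors as separate tables makes it straightforward to update factors when a new edition of the factor tables is published without touching the measured activity data.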
China University of Geosciences (Beijing) Doctoral Entrance Examination: Specialized English Review Material

A rock in which isolated large crystals (斑晶) are set in a finer-grained matrix is said to have a porphyritic texture (斑状结构). The classification of fine-grained rocks, then, is based on the proportion of minerals which form phenocrysts, and these phenocrysts (斑晶) reflect the general composition of the remainder (残留) of the rock. The fine-grained portion of a porphyritic (斑岩) rock is generally referred to as the groundmass (基质) of the phenocrysts. The terms "porphyritic" and "phenocrysts" are not restricted to fine-grained rocks but may also apply to coarse-grained rocks which contain a few crystals distinctly larger than the remainder. The term obsidian (黑曜岩) refers to a glassy rock of rhyolitic (流纹岩) composition. In general, fine-grained rocks consisting of small crystals cannot readily be distinguished from glassy rocks in which no crystalline material is present at all. The obsidians, however, are generally easily recognized by their black and highly glossy appearance. Pumice (浮石) is a frothy volcanic glass of the same composition as obsidian. Apparently the difference between the modes of formation of obsidian and pumice is that in pumice the entrapped water vapors have been able to escape by a frothing (起泡) process which leaves a network of interconnected pore (气孔) spaces, thus giving the rock a highly porous (多孔的) and open appearance (外观较为松散). Pegmatite (结晶花岗岩) is a rock which is texturally (构造上地) the exact opposite of obsidian. Pegmatites are generally formed as dikes associated with major bodies of granite (花岗岩). They are characterized by extremely large individual crystals (单个晶体); in some pegmatites crystals up to several tens of feet in length (宽达几十英尺) have been identified, but the average size is measured in inches (英寸). Most mineralogical museums contain a large number of spectacular (壮观的) crystals from pegmatites. Peridotite (橄榄岩) is a rock consisting primarily of olivine, though some varieties contain pyroxene (辉石) in addition. It occurs only as coarse-grained intrusives (侵入), and no extrusive (喷出的) rocks of equivalent chemical composition have ever been found. Tuff (凝灰岩) is a rock which is igneous in one sense (在某种意义上) and sedimentary in another. A tuff is a rock formed from pyroclastic (火成碎屑的) material which has been blown out of a volcano and accumulated on the ground as individual fragments called ash. Two terms, silicic and basic, are useful to refer solely to the composition of igneous rocks regardless of their textures. The term silicic (硅质的) signifies an abundance of silica-rich (富硅) and light-colored minerals (浅色矿物), such as quartz, potassium feldspar (钾长石), and sodic plagioclase (钠长石). The term basic (基性) signifies (意味着) an abundance of dark-colored minerals relatively low in silica and high in calcium, iron, and magnesium.
GRID Systems

Legacy Code Support for Production Grids

T. Kiss*, G. Terstyanszky*, G. Kecskemeti*, Sz. Illes*, T. Delaittre*, S. Winter*, P. Kacsuk**, G. Sipos**
*Centre of Parallel Computing, University of Westminster, 115 New Cavendish Street, London W1W 6UW, United Kingdom, e-mail: gemlca-discuss@
**MTA SZTAKI, 1111 Kende u. 13, Budapest, Hungary

CoreGRID Technical Report Number TR-0011, 6th June 2005
Institute on Problem Solving Environment, Tools and GRID Systems
CoreGRID - Network of Excellence, funded by the European Commission under the Sixth Framework Programme, Project no. FP6-004265

Abstract

In order to improve reliability and to deal with the high complexity of existing middleware solutions, today's production Grid systems restrict the services to be deployed on their resources. On the other hand, end-users require a wide range of value-added services to fully utilize these resources. This paper describes a solution whereby legacy code support is offered as a third-party service for production Grids. The introduced solution, based on the Grid Execution Management for Legacy Code Architecture (GEMLCA), does not require the deployment of additional applications on the Grid resources, or any extra effort from Grid system administrators. The implemented solution was successfully connected to and demonstrated on the UK National Grid Service.

1 Introduction

The vision of Grid computing is to enable anyone to offer resources to be utilised by others via the network. This original aim, however, has not been fulfilled so far. Today's production Grid systems, like the EGEE Grid, the NorduGrid or the UK National Grid Service (NGS), apply very strict rules towards service providers, hence restricting the number of sites and resources in the Grid. The reason for this is the very high complexity of installing and maintaining existing Grid middleware solutions. In a production Grid environment strong guarantees are needed that system administrators keep the resources up and running. In order to offer a reliable service, only a limited range of software is allowed to be deployed on the resources. On the other hand, these Grid systems aim to serve a large and diverse user community with different needs and goals. These users require a wide range of tools in order to make it easier to create and run Grid-enabled applications.
As system administrators are reluctant to install any software on the production Grid that could compromise reliability, the only way to make these utilities available for users is to offer them as third-party services. These services run on external resources, maintained by external organisations, and they are not an integral part of the production Grid system. However, users can freely select and utilise these additional services based on their requirements and experience with the service. This previously described scenario was utilised to connect GEMLCA (Grid Execution Management for Legacy Code Architecture) [11] to the UK National Grid Service. (This research work is carried out under the FP6 Network of Excellence CoreGRID funded by the European Commission, Contract IST-2002-004265.) GEMLCA enables legacy code programs written in any source language (Fortran, C, Java, etc.) to be easily deployed as a Grid Service without significant user effort. A user-level understanding, describing the necessary input and output parameters and environmental values such as the number of processors or the job manager used, is all that is required to port the legacy application binary onto the Grid. GEMLCA does not require any modification of, or even access to, the original source code. The architecture is also integrated with the P-GRADE portal and workflow [13] solutions to offer a user-friendly interface, and to create complex applications including legacy and non-legacy components.

In order to connect GEMLCA to the NGS two main tasks have been completed:

- First, a portal server has been set up at the University of Westminster running the P-GRADE Grid portal and offering access to the NGS resources for authenticated and authorised users. With the help of their Grid certificates and NGS accounts, portal users can utilise NGS resources in a much more convenient and user-friendly way than previously.

- Second, the GEMLCA architecture has been redesigned in order to support the third-party service provider scenario. There is no need to install GEMLCA on any NGS resource. The architecture is deployed centrally on the portal server but still offers the same legacy code functionality as the original solution: users can easily deploy legacy applications as Grid services, can access these services from the portal interface, and can create, execute and visualise complex Grid workflows.

This paper describes two different scenarios in which GEMLCA is redesigned in order to support a production Grid system. The first scenario supports traditional job-submission-like task execution, and the second offers the legacy codes as pre-deployed services on the appropriate resources. In both cases GEMLCA runs on an external server, and neither compromises the reliability of the production Grid system nor requires extra effort from the Grid system administrators. The service is transparent from the Grid operators' point of view but offers essential functionality for the end-users.

2 The UK National Grid Service

The National Grid Service (NGS) is the UK production Grid operated by the Grid Operation Support Centre (GOSC).
It offers a stable, highly-available, production-quality Grid service to the UK research community, providing compute and storage resources for users. The core NGS infrastructure consists of four cluster nodes at Cambridge, CCLRC-RAL, Leeds and Manchester, and two national High Performance Computing (HPC) services: HPCx and CSAR. NGS provides compute resources for the compute Grid through compute clusters at Leeds and Oxford, and storage resources for the data Grid through data clusters at CCLRC-RAL and Manchester. This core NGS infrastructure has recently been extended with two further Grid nodes at Bristol and Cardiff, and will be further extended by incorporating UK e-Science Centres through separate Service Level Agreements (SLA).

NGS is based on GT2 middleware. Its security is built on the Globus Grid Security Infrastructure (GSI) [14], which supports authentication, authorization and single sign-on. NGS uses GridFTP to transfer input and output files to and from nodes, and the Storage Resource Broker (SRB) [6] with OGSA-DAI [3] to provide access to data on NGS nodes. It uses the Globus Monitoring and Discovery Service (MDS) [7] to handle information about NGS nodes. Ganglia [12], the Grid Integration Test Script (GITS) [4] and Nagios [2] are used to monitor both the NGS and its nodes. Nagios checks nodes and services while GITS monitors communication among NGS nodes. Ganglia collects and processes information provided by Nagios and GITS in order to generate an NGS-level view.

NGS uses a centralised user registration model. Users have to obtain certificates and open accounts to be able to use any NGS service. The certificates are issued by the UK Core Programme Certification Authority (e-Science certificate) or by other CAs. NGS accounts are allocated from a central pool of generic user accounts to enable users to register with all NGS nodes at the same time. User management is based on the Virtual Organisation Membership Service (VOMS) [1]. VOMS supports central management of user registration and authorisation, taking into consideration local policies on resource access and usage.
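For illustration, the GridFTP file staging that NGS relies on can be driven from the standard GT2 command-line clients; the commands below are the usual Globus tools, but the host and path names are invented for this sketch.

```sh
# Create a GSI proxy credential from the user's e-Science certificate,
# then stage an input file to an NGS node over GridFTP.
# (hostname and paths are hypothetical)
grid-proxy-init
globus-url-copy file:///home/user/input.dat \
    gsiftp://grid-data.rl.ac.uk/home/ngsuser/input.dat
```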
3 Grid Execution Management for Legacy Code Architecture

The Grid computing environment requires special Grid-enabled applications capable of utilising the underlying Grid middleware and infrastructure. Most Grid projects so far have either developed new applications from scratch, or significantly re-engineered existing ones in order to be run on their platforms. However, as the Grid becomes commonplace in both scientific and industrial settings, the demand for porting a vast legacy of applications onto the new platform will grow. Companies and institutions can ill afford to throw such applications away for the sake of a new technology, and there is a clear business imperative for them to be migrated onto the Grid with the least possible effort and cost. The Grid Execution Management for Legacy Code Architecture (GEMLCA) enables legacy code programs written in any source language (Fortran, C, Java, etc.) to be easily deployed as a Grid Service without significant user effort. In this chapter the original GEMLCA architecture is outlined. This architecture has been modified, as described in chapters 4 and 5, in order to create a centralised version for production Grids.

GEMLCA represents a general architecture for deploying legacy applications as Grid services without re-engineering the code or even requiring access to the source files. The high-level GEMLCA conceptual architecture is represented in Figure 1. As shown in the figure, there are four basic components in the architecture:

1. The Compute Server is a single or multiple processor computing system on which several legacy codes are already implemented and available. The goal of GEMLCA is to turn these legacy codes into Grid services that can be accessed by Grid users.
2. The Grid Host Environment implements a service-oriented OGSA-based Grid layer, such as GT3 or GT4. This layer is a pre-requisite for connecting the Compute Server into an OGSA-built Grid.
3. The GEMLCA Resource layer provides a set of Grid services which expose legacy codes as Grid services.
4. The fourth component is the GEMLCA Client that can be installed on any client machine through which a user would like to access the GEMLCA resources.

Figure 1: GEMLCA Conceptual Architecture

The novelty of the GEMLCA concept compared to other similar solutions like [10] or [5] is that it requires minimal effort from both Compute Server administrators and end-users of the Grid. The Compute Server administrator should install the GEMLCA Resource layer on top of an available OGSA layer (GT3/GT4). It is also their task to deploy existing legacy applications on the Compute Servers as Grid services, and to make them accessible for the whole Grid community. End-users do not have to do any installation or deployment work if a GEMLCA portal is available for the Grid, and they only need those legacy code services that were previously deployed by the Compute Server administrators. In such a case end-users can immediately use all these legacy code services, provided they have access to the GEMLCA Grid resources. If they would like to deploy legacy code services on GEMLCA Grid resources they can do so, but these services cannot be accessed by other Grid users. As a last resort, if no GEMLCA portal is available for the Grid, a user must install the GEMLCA Client on their client machine. However, since it requires some IT skills to do this, it is recommended that a GEMLCA portal is installed on every Grid where GEMLCA Grid resources are deployed.

The deployment of a GEMLCA legacy code service assumes that the legacy application runs in its native environment on a Compute Server. It is the task of the GEMLCA Resource layer to present the legacy application as a Grid service to the user, to communicate with the Grid client and to hide the legacy nature of the application. The deployment process of a GEMLCA legacy code service requires only a user-level understanding of the legacy application, i.e., to know what the parameters of the legacy code are and what kind of environment is needed to run the code (e.g. a multiprocessor environment with n processors). The deployment defines the execution environment and the parameter set for the legacy application in an XML-based Legacy Code Interface Description (LCID) file that should be stored in a pre-defined location. This file is used by the GEMLCA Resource layer to handle the legacy application as a Grid service.
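The LCID schema itself is not reproduced in this report. Purely as an illustration of the kind of information such a file captures (binary location, job manager, processor limits, parameter list), a hypothetical LCID-style description might look as follows; the element and attribute names are invented for this sketch and do not reproduce the real GEMLCA schema.

```xml
<!-- Illustrative LCID-style description; element names are invented
     for this sketch and are not the actual GEMLCA schema. -->
<legacyCode name="madcity">
  <executable path="/opt/legacy/madcity/bin/madcity"/>
  <environment jobManager="Fork" minProcessors="1" maxProcessors="16"/>
  <parameters>
    <parameter name="network" type="input"  fileType="file"/>
    <parameter name="turns"   type="input"  fileType="file"/>
    <parameter name="trace"   type="output" fileType="file"/>
  </parameters>
</legacyCode>
```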
GEMLCA provides the capability to convert legacy codes into Grid services just by describing the legacy parameters and environment values in the XML-based LCID file. However, an end-user without specialist computing skills still requires a user-friendly Web interface (portal) to access the GEMLCA functionalities: to deploy, execute and retrieve results from legacy applications. Instead of developing a new custom Grid portal, GEMLCA was integrated with the workflow-oriented P-GRADE Grid portal, extending its functionalities with new portlets. Following this integration, end-users can easily construct workflow applications built from legacy code services running on different GEMLCA Grid resources. The workflow manager of the portal contacts the selected GEMLCA Resources and passes them the actual parameter values of the legacy code; it is then the task of the GEMLCA Resource to execute the legacy code with these parameter values. The other important task of the GEMLCA Resource is to deliver the results of the legacy code service back to the portal. The overall structure of the GEMLCA Grid with the Grid portal is shown in Figure 2.

Figure 2: GEMLCA with Grid Portal

4 Connecting GEMLCA to the NGS

Two different scenarios were identified in order to execute legacy code applications on NGS sites. In each scenario both GEMLCA and the P-GRADE portal are installed on the Parsifal cluster of the University of Westminster. As a result, there is no need to deploy any GEMLCA or P-GRADE portal code on the NGS resources:

scenario 1: legacy codes are stored in a central repository and GEMLCA submits these codes as jobs to NGS sites;
scenario 2: legacy codes are installed on NGS sites and executed through GEMLCA.

The two scenarios support different user needs, and each of them increases the usability of the NGS for end-users in different ways. The GEMLCA research team implemented the first scenario in May 2005, and is currently working on the implementation of the second. This chapter briefly describes these two scenarios, and the next chapter explains in detail the design and implementation aspects of the first, already implemented solution. As the design and implementation of the second scenario is currently work in progress, its detailed description will be the subject of a future publication.

4.1 Scenario 1: Legacy Code Repository for NGS

There are several legacy applications that would be useful for users within the NGS community. These applications were developed by different institutions and are currently not available to other members of the community. According to this scenario, legacy codes can be uploaded into a central repository and made available for authorised users through a Grid portal. The solution extends the usability of the NGS, as users can submit not only their own applications but can also utilise other legacy codes stored in the repository. Users can access the central repository, managed by GEMLCA, through the P-GRADE portal and upload their applications into this repository. After uploading legacy applications, users with valid certificates and existing NGS accounts can select and execute legacy codes through the P-GRADE portal on different NGS sites. In this scenario the binary codes of legacy applications are transferred from the GEMLCA server to the NGS sites, and executed as jobs.
Figure 3: Scenario 1 - Legacy Code Repository for NGS

4.2 Scenario 2: Pre-deployed Legacy Code Services

This solution extends the NGS Grid towards the service-oriented Grid concept. Users can not only submit and execute jobs on the resources but can also access legacy applications deployed on the NGS and include these in their workflows. This scenario is the logical extension of the original GEMLCA concept in order to use it with the NGS. In this scenario the legacy codes are already deployed on the NGS sites and only the parameters (input or output) are submitted. Users contact the central GEMLCA resource through the P-GRADE portal, and can access the legacy codes that are deployed on the NGS sites. In this scenario the NGS system administrators have full control of the legacy codes that they deploy on their own resources.

Figure 4: Scenario 2 - Pre-Deployed Legacy Code on NGS Sites

5 Legacy Code Repository for the NGS

5.1 Design objectives

The currently implemented solution that enables users to deploy, browse and execute legacy code applications on the NGS sites is based on scenario 1, as described in the previous chapter. This solution utilises the original GEMLCA architecture with the necessary modifications in order to execute the tasks on the NGS resources. The primary aims of the solution are the following:

- The owners of legacy applications can publish their codes in the central repository, making them available for other authorised users within the UK e-Science community. The publication is no different from the original method used in GEMLCA, and it is supported by the administration Grid portlet of the P-GRADE portal, as described in [9]. After publication the code is available for other, non-computer-specialist end-users.

- Authorised users can browse the repository, select the necessary legacy codes, set their input parameters, and can even create workflows from compatible components. These workflows can then be mapped onto the NGS resources, submitted, and the execution visualised.

- The deployment of a new legacy application requires some high-level understanding of the code (like the name and types of input and output parameters) and its execution environment (e.g. supported job managers, maximum number of processors). However, once the code is deployed, end-users with no Grid-specific knowledge can easily execute it and analyse the results using the portal interface.

As GEMLCA is integrated with the P-GRADE Grid portal, NGS users have two different options for executing their applications. They can submit their own code directly, without the described publication process, using the original facilities offered by the portal. This solution is suggested if the execution is only on an ad-hoc basis, when publication would put too much overhead on the process. However, if they would like to make their code available for a larger community, and would like to make the execution simple enough for any end-user, they can publish the code with GEMLCA in the repository. In order to execute a legacy code on an NGS site, users should have a valid user certificate, for example an e-Science certificate, an NGS account and also an account for the P-GRADE portal running at Westminster. After logging in to the portal they download their user certificate from an appropriate myProxy server. The legacy code, submitted to the NGS site, utilises this certificate to authenticate the user.
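The proxy retrieval the portal performs here corresponds to what the standard MyProxy command-line client does. A hypothetical invocation is sketched below; the server name, account name and lifetime are invented for illustration, but the tool and its flags are the standard MyProxy client.

```sh
# Retrieve a short-lived proxy credential previously uploaded to a
# MyProxy server (server and username are hypothetical examples).
myproxy-logon -s myproxy.grid-support.ac.uk -l ngsuser -t 12
```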
Figure 5: Comparison of the Original and the NGS GEMLCA Concept

5.2 Implementation of the Solution

To fulfil these objectives some modifications and extensions of the original GEMLCA architecture were necessary. Figure 5 compares the original and the extended GEMLCA architectures. As shown in the figure, an additional layer, representing the remote NGS resource where the code is executed, appears. The deployment of a legacy code is no different from the original GEMLCA concept; however, the execution has changed significantly in the NGS version. Transferring the executable and the input parameters to the NGS site, and instructing the remote GT2 GRAM to execute the jobs, required the modification of the GEMLCA architecture, including the development of a special script that interfaces with Condor-G. The major challenge when connecting GEMLCA to the NGS was that NGS sites use Globus Toolkit version 2 (GT2), whereas the current GEMLCA implementations are based on service-oriented Grid middleware, namely GT3 and GT4. The interfacing between the different middleware platforms is supported by a script, called the NGS script, that provides the additional functionality required for executing legacy codes on NGS sites. Legacy codes and input files are stored in the central repository but executed on the remote NGS sites. To execute the code on a remote site, first the NGS script, executed as a GEMLCA legacy code, instructs the portal to copy the binary and input files from the central repository to the NGS site. Next, the NGS script, using Condor-G, submits the legacy code as a job to the remote site.

The other major part of the architecture where modifications were required is the config.xml file and its related Java classes. GEMLCA uses an XML-based description file, called config.xml, in order to describe the environmental parameters of the legacy code. This file had to be extended and modified in order to take into consideration a second-level job manager, namely the job manager used on the remote NGS site. The config.xml should also notify the GEMLCA resource that it has to submit the NGS script instead of a legacy code to the GT4 MMJFS (Master Managed Job Factory Service) when the user wants to execute the code on an NGS site. The implementation of these changes also required the modification of the GEMLCA core layer.

In order to utilise the new GEMLCA NGS solution:
1. The owner of the legacy application deploys the code as a GEMLCA legacy code in the central repository.
2. The end-user selects and executes the appropriate legacy applications on the NGS sites.

As the deployment process is virtually identical to the one used by the original GEMLCA solution, here we concentrate on the second step, the code execution. The following steps are performed by GEMLCA when executing a legacy code on the NGS sites (Fig. 6):
1. The user selects the appropriate legacy codes from the portal, defines input files and parameters, and submits an "execute a legacy code on an NGS site" request.
2. The GEMLCA portal transfers the input files to the NGS site.
3. The GEMLCA portal forwards the user's request to a GEMLCA Resource.
4. The GEMLCA resource creates and submits the NGS script as a GEMLCA job to the MMJFS.
5. The MMJFS starts the NGS script.
6. Condor-G contacts the remote GT2 GRAM, sends the binary of the legacy code and its parameters to the NGS site, and submits the legacy code as a job to the NGS site job manager.

Figure 6: Execution of Legacy Codes on an NGS Site

When the job has been completed on the NGS site, the results are transferred from the NGS site back to the user in the same way.
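Step 6 relies on Condor-G to reach the remote GT2 gatekeeper. A minimal submit description of the kind the NGS script might generate is sketched below; the gatekeeper contact string, job manager and file names are hypothetical, and the real NGS script is not reproduced here.

```
# Hypothetical Condor-G submit file: route a legacy binary to a GT2 gatekeeper
universe             = globus
globusscheduler      = grid-compute.leeds.ac.uk/jobmanager-pbs
executable           = madcity
transfer_input_files = network.dat, turns.dat
arguments            = network.dat turns.dat
output               = madcity.out
error                = madcity.err
log                  = madcity.log
queue
```

Condor-G then handles GSI authentication against the gatekeeper and monitors the job in the remote site's local job manager on the user's behalf.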
6 Results: Traffic simulation on the NGS

A working prototype of the described solution has been implemented and tested by creating and executing a traffic simulation workflow on the different NGS resources. The workflow consists of three types of components:

1. The Manhattan legacy code is an application that generates inputs for the MadCity simulator: a road network file and a turn file. The MadCity road network file is a sequence of numbers representing the topology of a road network. The MadCity turn file describes the junction manoeuvres available in a given road network. Traffic light details are also included in this file.

2. MadCity [8] is a discrete-time microscopic traffic simulator that simulates traffic on a road network at the level of individual vehicles' behaviour on roads and at junctions. After completing the simulation, a macroscopic trace file, representing the total dynamic behaviour of vehicles throughout the simulation run, is created.

3. Finally, a traffic density analyser compares the traffic congestion of several runs of the simulator on a given network, with different initial road traffic conditions specified as input parameters. The component presents the results of the analysis graphically.

Each of these applications was published in the central repository at Westminster as a GEMLCA legacy code. The publication was done using the administration portlet of the GEMLCA P-GRADE portal. During this process the types of input and output parameters, and environmental values, like job managers and the maximum number of processors used for parallel execution, were set. Once published, the codes are ready to be used by end-users, even those with very limited computing knowledge. Figure 7 shows the workflow graph and the execution of the different components on NGS resources:

- Job 0 is a road network generator mapped at Leeds,
- jobs 1 and 2 are traffic simulators running in parallel at Leeds and Oxford, respectively,
- finally, job 3 is a traffic density analyser executed at Leeds.

Figure 7: Workflow Graph and Visualisation of its Execution on NGS Resources

When creating the workflow, the end-user selected the appropriate applications from the repository, set the input parameters and mapped the execution to the available NGS resources. During execution the NGS script ran, contacted the remote GT2 GRAM, and instructed the portal to pass executables and input parameters to the remote site. When the execution finished, the output files were transferred back to Westminster and made available to the user.

7 Conclusion and Future Work

The implemented solution successfully demonstrated that additional services, like legacy code support, run and maintained by third-party service providers, can be added to production Grid systems. The major advantage of this solution is that the reliability of the core Grid infrastructure is not compromised, and no additional effort is required from Grid system administrators. On the other hand, by utilizing these services the usability of these Grids can be significantly improved. Utilising and re-engineering the GEMLCA legacy code solution, two different scenarios were identified to provide legacy code support for the UK NGS. The first, providing legacy code repository functionality and allowing the submission of legacy applications as jobs to NGS resources, was successfully implemented and demonstrated. The final production version of this architecture and its official release for NGS users is scheduled for June 2005. The second scenario, which extends the NGS with pre-deployed legacy code services, is currently in the
design phase. Challenges have been identified concerning its implementation, especially the creation and management of virtual organizations that could utilize these pre-deployed services.

References

[1] R. Alfieri, R. Cecchini, V. Ciaschini, L. dell'Agnello, A. Frohner, A. Gianoli, K. Lorentey, and F. Spataro. VOMS, an authorization system for virtual organizations. af.infn.it/voms/VOMS-Santiago.pdf.
[2] S. Andreozzi, S. Fantinel, D. Rebatto, L. Vaccarossa, and G. Tortone. A monitoring tool for a grid operation center. In CHEP 2003, La Jolla, California, March 2003.
[3] Mario Antonioletti, Malcolm Atkinson, Rob Baxter, Andrew Borley, Neil P. Chue Hong, Brian Collins, Neil Hardman, Ally Hume, Alan Knox, Mike Jackson, Amy Krause, Simon Laws, James Magowan, Norman W. Paton, Dave Pearson, Tom Sugden, Paul Watson, and Martin Westhead. The design and implementation of grid database services in OGSA-DAI. Concurrency and Computation: Practice and Experience, 17:357-376, 2005.
[4] David Baker, Mark Baker, Hong Ong, and Helen Xiang. Integration and operational monitoring tools for the emerging UK e-Science grid infrastructure. In Proceedings of the UK e-Science All Hands Meeting (AHM 2004), East Midlands Conference Centre, Nottingham, 2004.
[5] B. Balis, M. Bubak, and M. Wegiel. A solution for adapting legacy code as web services. In V. Getov and T. Kiellmann, editors, Component Models and Systems for Grid Applications, pages 57-75. Springer, 2005. ISBN 0-387-23351-2.
[6] C. Baru, R. Moore, A. Rajasekar, and M. Wan. The SDSC Storage Resource Broker. In Proc. CASCON'98 Conference, November 1998.
[7] Karl Czajkowski, Steven Fitzgerald, Ian Foster, and Carl Kesselman. Grid information services for distributed resource sharing. In Proceedings of the Tenth IEEE International Symposium on High-Performance Distributed Computing (HPDC-10), /research/papers/MDS-HPDC.pdf.
[8] A. Gourgoulis, G. Terstyansky, P. Kacsuk, and S. C. Winter. Creating scalable traffic simulation on clusters. In PDP 2004: Conference Proceedings of the 12th Euromicro Conference on Parallel, Distributed and Network-based Processing, La Coruna, Spain, February 2004.
[9] A. Goyeneche, T. Kiss, G. Terstyanszky, G. Kecskemeti, T. Delaitre, P. Kacsuk, and S. C. Winter. Experiences with deploying legacy code applications as grid services using GEMLCA. In P. M. A. Sloot, A. G. Hoekstra, T. Priol,
A Comprehensive Analysis of Carbon Peaking and Carbon Neutrality: Definitions, Their Relationship, and Pathways to Achievement (English Version)

Comprehensive Analysis of the Meaning of Carbon Peaking and Carbon Neutrality: Definitions, Their Connection, and Achievement Methods

Carbon peaking and carbon neutrality have become important concepts in the global effort to combat climate change.

Definition: Carbon peaking refers to the point in time when a country's or region's carbon emissions reach their maximum level before starting to decline. Carbon neutrality, also known as net-zero emissions, means that the amount of carbon dioxide released into the atmosphere is balanced by the amount removed or offset.

Connection: The connection between carbon peaking and carbon neutrality lies in the shared goal of reducing greenhouse gas emissions. Carbon peaking marks the first step towards achieving carbon neutrality, as it signifies the beginning of a downward trend in emissions. Both concepts are crucial in the transition to a low-carbon economy and the mitigation of climate change.

Achievement Methods: There are several ways to achieve carbon peaking and carbon neutrality. Strategies include transitioning to renewable energy sources, improving energy efficiency, implementing carbon capture and storage technologies, promoting sustainable land-use practices, and investing in nature-based solutions. Additionally, carbon pricing mechanisms, such as carbon taxes or cap-and-trade systems, can incentivize businesses and individuals to reduce their emissions.

In conclusion, understanding the definitions, connection, and achievement methods of carbon peaking and carbon neutrality is essential in addressing the urgent need to reduce greenhouse gas emissions and combat climate change on a global scale.
Materials Science and Engineering Specialized English, Unit 2: Classification of Materials (with Translation)

Unit 2 Classification of Materials

Solid materials have been conveniently grouped into three basic classifications: metals, ceramics, and polymers. This scheme is based primarily on chemical makeup and atomic structure, and most materials fall into one distinct grouping or another, although there are some intermediates. In addition, there are three other groups of important engineering materials: composites, semiconductors, and biomaterials.

Composites consist of combinations of two or more different materials, whereas semiconductors are utilized because of their unusual electrical characteristics; biomaterials are implanted into the human body. A brief explanation of the material types and representative characteristics is offered next.
Handbook of Element Abundance Data for Applied Geochemistry (Original Edition)

Handbook of Element Abundance Data for Applied Geochemistry
Compiled by Chi Qinghua and Yan Mingcai
Geological Publishing House, Beijing, December 2007. ISBN 978-7-116-05536-0.

Synopsis: This handbook compiles the chemical compositions and element abundances of igneous rocks, sedimentary rocks, metamorphic rocks, soils, stream sediments, floodplain sediments, shallow-sea sediments and the continental crust as proposed by researchers in China and abroad, and also lists the certified values of the main Chinese geochemical reference materials commonly used in exploration geochemistry and environmental geochemistry research. All of the contents are fundamental geochemical data on the important geological media that geochemical workers need to understand.

The book is intended for researchers in geochemistry, petrology, exploration geochemistry, ecological, environmental and agricultural geochemistry, geological sample analysis and testing, mineral exploration and basic geology; it may also be used by researchers in other fields of the earth sciences.

On the Handbook of Element Abundance Data for Applied Geochemistry (in lieu of a preface): Geochemical element abundance data are statistics on the contents of many elements, in various media and at various scales, within the five spheres of the Earth. They are important reference material when applied geochemistry is used to address resource and environmental problems. Compiling these data in one place saves researchers a great deal of the labour and time otherwise spent searching the literature. This small volume was compiled with exactly that in mind.
Integration of GEMLCA and the P-GRADE Portal

T. Kiss(1), G. Sipos(2), P. Kacsuk(2), K. Karoczkai(2), G. Terstyanszky(1), T. Delaitre(1)
(1) Centre of Parallel Computing, Cavendish School of Computer Science, University of Westminster, 115 New Cavendish Street, London W1W 6UW
(2) MTA SZTAKI Laboratory of Parallel and Distributed Systems, H-1518 Budapest, P.O. Box 63, Hungary

This is an electronic version (University of Westminster Eprints) of a paper presented at the CoreGRID Workshop on Grid Systems, Tools and Environments (WP7 Workshop), in conjunction with GRIDS@Work, 12-14 October 2005, Sophia Antipolis, France.

1. Introduction

There are many efforts all over the world to provide new Grid middleware concepts for constructing large production Grids. As a result, the Grid community is in the phase of producing third-generation Grid systems that are represented by the OGSA (Open Grid Services Architecture) and WSRF (Web Services Resource Framework) specifications. On the other hand, relatively little attention has been paid to how end-users can survive in the rapidly changing world of Grid generations. Moreover, the efforts in this field have remained isolated, resulting only in limited-functionality prototypes for specific user domains rather than serving a wider user community.

This paper describes how the integration of two different architectures, the P-GRADE Grid portal [1] and GEMLCA (Grid Execution Management for Legacy Code Architecture) [2], resulted in a more generic solution serving a large variety of Grid systems and application domains. The integrated solution provides a high-level, user-friendly Grid application environment that supports users of GT2-based second-generation and GT3/GT4-based third-generation Grid systems from the same user interface. It also allows the integration of legacy code applications into complex Grid workflows which can be mapped to Grid nodes running this wide variety of middleware.

The integration has happened at different levels. In the first step, the GEMLCA clients were added to the portal, providing a user-friendly interface for legacy code deployment, execution and visualisation. This integration also enhanced the usability of the originally GT2-based P-GRADE portal, making it capable of handling GT3/GT4 Grid services. In the second step, GEMLCA was extended to handle legacy code submission to current GT2-based production Grids, like the UK National Grid Service or the EGEE Grid.
This resulted in a legacy code repository that makes it even easier for P-GRADE portal end-users to create and execute workflows from previously published legacy components. Finally, the research teams are currently working on a more loosely coupled integration that will allow incorporating advanced functionality, like GEMLCA-based legacy code support, into the portal as a plug-in, resulting in a more flexible solution that depends on actual user requirements.

The paper introduces GEMLCA and the P-GRADE portal and describes the integration activities outlined above.

2. Baseline technologies

2.1 P-GRADE Portal

The P-GRADE portal [1] is a workflow-oriented Grid portal whose main goal is to cover the whole lifecycle of workflow-oriented computational Grid applications. It enables the graphical development of workflows consisting of various types of executable components (sequential, MPI or PVM programs), executing these workflows in Globus-based Grids [4] relying on user credentials, and finally analyzing the correctness and performance of applications by the built-in visualization facilities. Workflow applications can be developed in the P-GRADE portal by its graphical Workflow Editor.

A P-GRADE portal workflow is an acyclic dependency graph that connects sequential and parallel programs into an interoperating set of jobs. The nodes of such a graph are jobs, while the arc connections define the execution order of the jobs and the data dependencies between them that must be resolved by the workflow manager during execution. An example of a P-GRADE portal workflow can be seen in the middle part of Figure 2. Large rectangles represent jobs, while the small rectangles around the jobs are called ports and represent data files that the corresponding jobs expect or produce. Directed arcs interconnect pairs of input and output files if an output file of a job serves as an input file for another job.

The semantics of the workflow execution mean that a job (a node of the workflow) can be executed if, and only if, all of its input files are available, i.e. all the jobs that produce input files for this job have successfully terminated, and all the user-defined input files are available either on the portal server or at the pre-defined Grid storage resources. Therefore, the workflow describes both the control-flow and the data-flow of the application. Managing the transfer of files and recognising the availability of the necessary files is the task of the workflow manager portal subsystem, currently implemented on top of Condor DAGMan [3]. The workflow manager is capable of transferring data among Globus VOs [4]; thus the different components of the same workflow can be mapped onto different Globus VOs. These VOs can be part of the same Grid, or can belong to multiple Grids.
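Since the workflow manager is implemented on top of Condor DAGMan [3], the acyclic graph built in the Workflow Editor is ultimately expressible as a DAGMan input file. The sketch below is illustrative only (job names and submit-file names are invented); it shows a diamond-shaped dependency of the kind described above, where one job feeds two parallel jobs whose outputs feed a final job.

```
# Illustrative DAGMan input file for a diamond-shaped portal workflow:
# job A produces inputs for jobs B and C, whose outputs feed job D.
JOB  A  jobA.submit
JOB  B  jobB.submit
JOB  C  jobC.submit
JOB  D  jobD.submit
PARENT A   CHILD B C
PARENT B C CHILD D
```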
2.2 GEMLCA

GEMLCA represents a general architecture for deploying legacy applications as Grid services without re-engineering the code or even requiring access to the source files. The high-level GEMLCA conceptual architecture is represented in Figure 1. As shown in the figure, there are four basic components in the architecture:

Figure 1: GEMLCA conceptual architecture

1) The Compute Server is a single or multiple processor computing system on which several legacy codes are already implemented and available. The goal of GEMLCA is to turn these legacy codes into Grid services that can be accessed by Grid users.
2) The Grid Host Environment implements a service-oriented OGSA-based Grid layer, such as GT3 or GT4. This layer is a pre-requisite for connecting the Compute Server into an OGSA-built Grid.
3) The GEMLCA Resource layer provides a set of Grid services which expose legacy codes as Grid services.
4) The fourth component is the GEMLCA Client that can be installed on any client machine through which a user would like to access the GEMLCA resources.

The deployment of a GEMLCA legacy code service assumes that the legacy application runs in its native environment on a Compute Server. It is the task of the GEMLCA Resource layer to present the legacy application as a Grid service to the user, to communicate with the Grid client and to hide the legacy nature of the application. The deployment process of a GEMLCA legacy code service requires only a user-level understanding of the legacy application, i.e., to know what the parameters of the legacy code are and what kind of environment is needed to run the code (e.g. a multiprocessor environment with 'n' processors). The deployment defines the execution environment and the parameter set for the legacy application in an XML-based Legacy Code Interface Description (LCID) file that should be stored in a pre-defined location. This file is used by the GEMLCA Resource layer to handle the legacy application as a Grid service.

3. Integrating GEMLCA and the P-GRADE portal

GEMLCA provides the capability to convert legacy codes into Grid services just by describing the legacy parameters and environment values. However, an end-user without specialist computing skills still requires a user-friendly Web interface to access the GEMLCA functionalities: to deploy, execute and retrieve results from legacy applications. The P-GRADE portal offers these functionalities besides other capabilities like Grid certificate management, workflow creation, execution visualization and monitoring. This section describes how the integration enhanced the functionalities of both environments and how this integration can be made even more effective in the future.

3.1 Extending the P-GRADE portal towards service-oriented Grids

The P-GRADE portal originally supported only GT2-based Grids. On the other hand, GEMLCA aims to expose legacy applications as GT3/GT4 Grid services. The integration of GEMLCA and the portal extended the GT2-based P-GRADE portal towards service-oriented Grids. Users can still utilise GT2 resources through traditional job submission, and can also use GT3/GT4 resources by including GEMLCA legacy code services in their workflows. The generic architecture of the GEMLCA - P-GRADE portal Grid, in which jobs and service invocations reach legacy applications on the Grid sites, is shown in Figure 2.

Integrating the P-GRADE portal with GEMLCA required several modifications in the P-GRADE portal. These are as follows:

1. In the original P-GRADE portal a workflow component can be a sequential or MPI program. The portal was modified in order to include legacy code Grid services as GEMLCA components in the workflow.

2. The Job properties window of the P-GRADE portal was changed in order to extend it with the necessary legacy code support. The user can select a GEMLCA Grid resource from a drop-down list. Once the Grid resource is selected, the portal retrieves the list of legacy code services available on the selected Grid resource. Next, the user can choose a legacy code service from this list. Once the legacy code service is selected, the portal fetches the parameter list belonging to the selected legacy code service with default parameter values.
The user can either keep these values or modify them.

3. The P-GRADE portal was extended with the GEMLCA Administration Portlet, which hides the syntax and structure of the LCID file from users. After the user fills in a simple Web form, the LCID file is created automatically and uploaded by the portal to the appropriate directory of the GEMLCA resource.

After these modifications in the portal, end-users can easily construct workflow applications built from both GT2 jobs and legacy code services, and can map their execution to different Grid resources, as shown in Figure 3.

3.2 Legacy code repository for production Grids

Creating a workflow in the P-GRADE portal requires the user to define input and output parameters and identify ports. For the owner of the code this task is not too complex. However, if another end-user wants to use the same code in a workflow, the process has to be repeated by a user who has no deeper understanding of the code itself. In order to help these end-users, a legacy code repository based on GEMLCA was created that can be connected to GT2-based production Grid services, like the UK National Grid Service (NGS). The GEMLCA repository enables code owners to publish their applications as GEMLCA legacy codes in the same way as described in Section 3.1. After this publication, other authorised users can browse the repository and simply select the application they would like to use. Parameter values can be set by the user. However, there is no need to define parameter types or input/output ports, as these are created automatically by the portal based on the GEMLCA description.

The P-GRADE portal extended with the GEMLCA repository has been successfully implemented and offered to UK NGS users as a third-party service. This implementation of the integrated GEMLCA - P-GRADE portal solution extends the capabilities of both tools. On the one hand, GEMLCA is now capable of working with GT2-based Grids by submitting the legacy executables as jobs to the remote GT2 gatekeeper. On the other hand, the usability of the P-GRADE portal has also been enhanced by making it much easier for end-users to create workflows using legacy codes published in the repository. The integrated GEMLCA - P-GRADE portal solution for GT2-based production Grids is shown in Figure 3.

3.3 GEMLCA as a plug-in in the P-GRADE portal

Both GEMLCA and the P-GRADE portal are continuously developing products. In order to present their latest features in a uniform way, the two systems must be integrated into a single software package from time to time. Although the integration could be done with reasonable effort in the case of the first few GEMLCA and portal releases, the task became quite cumbersome with the appearance of additional features and Grid middleware technologies.

The P-GRADE portal and GEMLCA developer teams elaborated a new concept to handle this issue. According to this idea, the portal will be transformed into a component-based environment that can be extended with dynamically downloadable plug-ins on the fly. The approach exploits the fact that GEMLCA and P-GRADE jobs are quite similar to each other. They both consist of a binary executable and 1-N input files, a set of input parameters, a resource that has been selected for the execution, etc. The only difference is that while the executable and the input files of a GEMLCA job can come from different users, the executable and the input files of a P-GRADE job must be provided by the same party.
Nevertheless, this difference can be hidden from the higher software layers during execution by the workflow manager subsystem (an intelligent script can transfer executable and input files to the executor site uniformly, even if those files are coming from different parties), due to the different concept GEMLCA and P-GRADE job developers must be provided with different user interfaces. (As it was described in Section 3.1 other types of parameters must be set on the “Properties” windows for a GEMLCA and for a P-GRADE job.)While job property windows are hard coded parts of the current version of the workflow editor, the new plug-in based concept enables the dynamic download of these graphical elements, making the P-GRADE and GEMLCA integration taskalmost self evident. The plug-in based editor works in the following way: (See also Figure 4) When the user drops a new job onto the editor canvas and opens its property window, the editor obtains a list of the currently available job types from the portal server (2). Each entry of this list consists of a name (e.g. GEMLCA job) and a reference pointing to a dynamically downloadable plug-in. In subject to the choice of the user (3) the editor downloads the plug-in that belongs to the selected job type and presents it on its GUI as the content of the job property window (4). Since each plug-in can encapsulate customised graphical elements to support the development of one specific type of job (5), no property windows must be hard coded into the workflow editor any more.Because the workflow editor is a Web Start application plug-ins can be implemented as Java objects. While the list of plug-ins can be retrieved from the portal server using a text format (e.g. an XML message), a binary protocol is needed between the editor and the plug-in provider to make object transmission possible. Java Web Services [6], Jini services [5], EJBs [7] or RMI servers [8] are all suitable for this purpose, thus the plug-in provider can be implemented with any of these technologies. Besides GEMLCA jobs other types of computational jobs can be plugged into the portal in this way. The plug-in provider has to only register its service at the portal server (1). 4. Conclusions and future workWith the integration of the GEMLCA and the P-GRADE portal tools a complex environment has been created that provides solution for a wide range of grid-related problems. Using the integrated system Grid users can deploy legacy and newly developed sequential and parallel applications as software services. They can test and then share these services with a larger community. The members of the community can develop workflows that connect their own computational tasks and the pre-deployed services into an acyclic graph. These workflows can be submitted into Globus-2, 3 and 4 based Grids, can be monitored in a real-time fashion, moreover, the different components can be executed in different Globus VOs.As the next step, the P-GRADE portal will be transformed into a plug-in based computational framework. With the plug-in concept the portal will be able to support other types of computational services than GEMLCA, without modifying any source code or even stopping the portal server.References[1] G. Sipos and P. Kacsuk: Classification and Implementations of Workflow-Oriented Grid Portals, To appear inthe Proc. of The 2005 International Conference on High Performance Computing and Communications (HPCC2005), Sorrento, Italy[2] T. Delaittre, T. Kiss, A. Goyeneche, G. Terstyanszky, S.Winter, P. 
4. Conclusions and future work

With the integration of the GEMLCA and P-GRADE portal tools, a complex environment has been created that provides a solution for a wide range of Grid-related problems. Using the integrated system, Grid users can deploy legacy and newly developed sequential and parallel applications as software services. They can test and then share these services with a larger community. The members of the community can develop workflows that connect their own computational tasks and the pre-deployed services into an acyclic graph. These workflows can be submitted into Globus-2, -3 and -4 based Grids and can be monitored in a real-time fashion; moreover, the different components can be executed in different Globus VOs.

As the next step, the P-GRADE portal will be transformed into a plug-in-based computational framework. With the plug-in concept the portal will be able to support other types of computational services than GEMLCA, without modifying any source code or even stopping the portal server.

References

[1] G. Sipos and P. Kacsuk: Classification and Implementations of Workflow-Oriented Grid Portals. To appear in Proc. of the 2005 International Conference on High Performance Computing and Communications (HPCC 2005), Sorrento, Italy.
[2] T. Delaittre, T. Kiss, A. Goyeneche, G. Terstyanszky, S. Winter, P. Kacsuk: GEMLCA: Running Legacy Code Applications as Grid Services. To appear in Journal of Grid Computing, Vol. 3, No. 1.
[3] T. Tannenbaum, D. Wright, K. Miller, and M. Livny: Condor - A Distributed Job Scheduler. Beowulf Cluster Computing with Linux, The MIT Press, MA, USA, 2002.
[4] I. Foster, C. Kesselman: Globus: A Toolkit-Based Grid Architecture. In I. Foster, C. Kesselman (eds.), "The Grid: Blueprint for a New Computing Infrastructure", Morgan Kaufmann, 1999, pp. 259-278.
[5] J. Waldo: The Jini architecture for network-centric computing. Communications of the ACM, 42(10):76-82, Oct. 1999.
[6] D. Chappell and T. Jewell: "Java Web Services", O'Reilly Press, 2002.
[7] Sun Microsystems: Enterprise JavaBeans.
[8] Troy Bryan Downing: Java RMI: Remote Method Invocation. ISBN 0764580434. IDG Books, 1998.