
Informatica-UsingPowerCenterWebServices


How to Use PowerCenter Web Services to Extend the Power of Data Integration

Abstract

This article shows how to extend PowerCenter's ETL infrastructure to expose integrated data as web services. It discusses the benefits of the enhancements and new features of web services in PowerCenter version 8.5 or later. It also provides sizing guidelines and sample performance results generated from web service testing within the Informatica labs.

Table of Contents

Executive Summary
Business Use Case
PowerCenter Solution for Use Case
Architecture
Client
Web Services Hub
Data Integration
Deployment
Security
Performance and Scalability
Performance Tuning Parameters
Benefits of Upgrading from PowerCenter 8.1.1
Sizing Guidelines
Sample Performance Results

Executive Summary

For many years, IT organizations have used extract, transform, and load (ETL) technology for traditional batch-oriented data integration projects such as building data warehouses or migrating or consolidating data. PowerCenter's rich metadata framework has provided the ability to reuse data transformation, integration, and cleansing rules seamlessly across the enterprise for different projects. High performance and scalability and rich connectivity to sources such as relational databases, ERP systems, and hosted applications have proven PowerCenter to be ideal for managing large volumes of diverse data.

As enterprises have become more agile, the ability to exchange operational data between applications has become more critical. Data transformation, integration, and cleansing rules need to be applied to the data on the fly and provided to the consuming application in real time.

Despite obvious shortcomings, some IT organizations have used enterprise application integration (EAI) techniques and J2EE application server technology to provide data to applications. Such technology lacks a strong metadata repository capability and integration with the ETL infrastructure.
The integration logic in the EAI approach commonly involves hand coding, which is complex and very expensive to maintain. The transformation, integration, or cleansing rules developed for batch ETL are often similar to the ones needed for operational data integration. However, because the EAI technology is decoupled from the PowerCenter repository, the integration rules cannot be reused. Furthermore, the technology falls short when the volume of data grows. As a result, this approach to operational data integration is inflexible and not scalable.

Exposing integrated data as data services makes the ETL infrastructure more extensible and reusable across IT projects. Data services enable access, integration, and delivery of enterprise data throughout the enterprise and across corporate firewalls. Data services can be exposed in many modalities, including web services, SQL, and Java/C++ APIs.

Web services offer a method for data exchange among applications based on common SOA standards and protocols. The basic web service platform is XML plus HTTP: XML provides a language that can be used across different platforms and programming languages, and HTTP is the most commonly used Internet protocol. Extending PowerCenter's ETL infrastructure to expose integrated data as web services addresses the shortcomings of the EAI technology described above. Furthermore, PowerCenter can integrate data from external web service providers, allowing it to participate fully in SOA architectures. The strength of ETL technology combines with web services standards to let IT organizations extend the reach of data integration from traditional data warehouse projects to operational applications on smaller budgets.

Business Use Case

Although the traditional enterprise data warehouse is seen as the information store for all enterprise data, it is often not capable of fulfilling the demands for more real-time information.
The general practice within IT has been to implement the various data integration tasks for access, quality, manipulation, and delivery using disparate tools tuned for different data latency modes and data volumes. This "hairball" of data integration logic has resulted in a lack of agility, little to no reusability, data inconsistency, poor data manageability, and complicated change management.

IT organizations are looking for a scalable, flexible, real-time data integration infrastructure that can form the basis of an enterprise information management framework for delivering business agility. This framework should help IT organizations better manage the creation, management, manipulation, and delivery of enterprise data in a scalable, consistent, accurate, secure, and timely manner.

PowerCenter Solution for Use Case

Informatica's data services offering delivers proven value to the end user for enabling large-volume data integration in the enterprise. At the heart of Informatica's data services platform is a high-performance engine for delivering scalable and sophisticated metadata-aware data integration services for access, cleansing, transformation, and delivery of data. The platform offers the flexibility of a variety of data delivery mechanisms, including web services.

Architecture

Web services provide a distributed computing platform that allows access to computational logic and data by other applications over the Internet and intranet using standard XML protocols and formats.
Web services leverage open Internet standards:

- Web Services Description Language (WSDL) to describe the operations available in a web service.
- Universal Description, Discovery, and Integration (UDDI) to advertise and syndicate the web services.
- Simple Object Access Protocol (SOAP) to send and receive web service messages.
- Web Services Flow Language (WSFL) to define the web service processes.

The use of standard XML protocols eliminates the interoperability issues of existing solutions, such as CORBA and DCOM, and makes web services platform, language, and vendor neutral. Many commercial software products already provide, or have started providing, support for web service interfaces.

The PowerCenter web service solution is based on a three-tier architecture:

- Client
- Web Services Hub
- Data Integration

Figure 1. PowerCenter Web Services Architecture

Client

The first tier is the client tier and consists of two types of web clients:

- A web service client that accesses PowerCenter web services by sending web service requests and receiving web service responses in the form of SOAP messages over HTTP.
- A web client, typically a web browser, that accesses PowerCenter services and metadata by sending HTTP requests and receiving HTTP responses.

The web service client or web client can also send requests and receive responses through a secure HTTPS connection.

Web Services Hub

The second tier contains the Web Services Hub, the web service gateway to PowerCenter. The Web Services Hub uses the services in the third tier to serve the clients' requests.

The following are some of the architectural features of the Web Services Hub:

- The Web Services Hub is built on a Tomcat web service container and runs as an independent process, and is therefore resilient to the failure of other Informatica services.
- The Web Services Hub processes requests and responses in blocks for improved throughput. It bundles multiple requests into data blocks and sends them to the DTM engine. Likewise, it receives multiple responses in data blocks from the DTM engine.
- Each connection from the Web Services Hub to the DTM engine is independent. Each data block of requests or responses is sent through an independent connection and is not queued behind other requests.
- The Web Services Hub limits the number of context switches by reusing active client threads to send the requests from different clients to the DTM engine.
- The Web Services Hub provides a range of web service operations that allow clients to access PowerCenter services and metadata. Operations such as startWorkflow and startTask allow clients to remotely start workflows and tasks. Operations such as getAllFolders and getWorkflowLog allow clients to access metadata from the repository and information on workflow runs.

Data Integration

The third tier consists of the PowerCenter data integration components, including the Integration Service and the Repository Service. These application services handle any data transformation required in a request.

After the Web Services Hub receives and authenticates a web service request, it sends the request to the Integration Service. The Integration Service starts a Data Transformation Manager (DTM) process to handle any data transformation required in the request. The Integration Service sends the results of the data transformation to the Web Services Hub, which sends the response to the web service client.

You can set up a grid in the PowerCenter domain and configure workflows and sessions to run on the grid to improve performance and scalability. When you run a workflow on a grid, the Integration Service runs a DTM process on each available node of the grid.
When you run a session on a grid, the Integration Service distributes session threads to multiple DTM processes on nodes in the grid.

External Web Services

The PowerCenter web service solution also allows you to consume information from external web service providers for use in a PowerCenter data transformation. PowerCenter provides a Web Services Consumer transformation, as part of PowerExchange for Web Services, that can act as a web service client within a PowerCenter workflow. When a workflow contains a Web Services Consumer transformation, the workflow can send a request to an external web service and use the response in the data transformation.

Deployment

Figure 2 shows a typical PowerCenter web services deployment:

Figure 2. PowerCenter Web Services Deployment

To ensure that the Web Services Hub does not become a bottleneck or a single point of failure, deploy an external load balancer to manage the requests going into the Web Services Hub. A load balancer can ensure that requests for a web service are balanced across multiple Web Services Hubs associated with the web service. The load balancer can also route requests through HTTP or HTTPS efficiently.

Security

The PowerCenter web services solution provides the following levels of security:

- Transport layer security. The SSL protocol provides security for the SOAP message transport. Using HTTPS ensures the integrity and confidentiality of SOAP messages and provides point-to-point security. To enable transport layer security, set up a keystore file for the SSL keys and certificate, then use the Administration Console to configure the Web Services Hub to use HTTPS.
- Message layer security. To provide security at the message level, configure a web service workflow as a protected web service. A protected web service requires a security token to be included in the request. The security token is a session ID that is generated when a web service client logs in with a user name and password. The session ID can be used for subsequent requests from the client; it expires after a period of client inactivity. The batch web service operations provided by the Web Services Hub require the client to log in and obtain a session ID.

Transport layer security ensures SOAP message integrity and confidentiality, while SOAP message layer security provides client authentication. To maximize security for PowerCenter web services, use protected web services and run them over a secure HTTPS connection.

Performance and Scalability

The PowerCenter web services solution is designed to be highly scalable and available, enabling PowerCenter customers to manage high volumes of requests with minimum downtime.

PowerCenter 8.5 includes a number of performance enhancements:

- The DTM engine can run with multiple partitions, allowing it to process multiple requests concurrently. To enable this feature, create multiple horizontal partitions when you design a mapping.
- A Web Services Hub can dynamically launch multiple DTM processes to process requests concurrently, based on the request load and expected service time. The Web Services Hub monitors the quality of service for each web service workflow. You can set the maximum number of DTM processes that can be launched and the service time threshold at which a new DTM process will be launched. When the service time threshold is crossed, the Web Services Hub launches a new DTM process to handle requests.
- To identify performance bottlenecks, the Web Services Hub monitors the minimum, maximum, and average processing time per request (both including and excluding Web Services Hub processing), the number of lost connections, and the percentage of partitions used. The Web Services Hub uses these statistics to determine when a new DTM process needs to be launched to keep data transformation performance at an optimum level.
- If the Web Services Hub is the performance bottleneck, you can create multiple Web Services Hubs and improve performance in the following ways:
  - Associate multiple Web Services Hubs with the same set of web service workflows. When multiple Web Services Hubs can run the same web services, clients can access any of the Web Services Hubs to run a web service.
  - Associate each Web Services Hub with a different set of web service workflows. When each Web Services Hub runs a different set of web services, clients must access a specific Web Services Hub to run a web service. You can balance the load for each Web Services Hub by associating the Web Services Hub with web services that vary in levels or times of demand.
  To further improve performance, use a software or hardware load balancer to manage the volume of requests sent to the Web Services Hubs.
- Multiple Web Services Hubs can be associated with a Repository Service in a domain. You can set up multiple Web Services Hubs to run a web service workflow, so that if one Web Services Hub fails, another Web Services Hub can run the web service workflow. When you set up a load balancer to manage the request load across multiple Web Services Hubs, the URL of the load balancer is the service access point for all managed Web Services Hubs. Web service clients access the load balancer URL to run any of the web service workflows associated with the managed Web Services Hubs.

Performance Tuning Parameters

You can configure the following properties to enable the Web Services Hub to perform at an optimum level for the required workload:

- MaxISConnections. Maximum number of connections that can be open at one time from the Web Services Hub to the Integration Service.
- MaxConcurrentRequests. Maximum number of request processing threads allowed, which determines the maximum number of simultaneous requests that can be handled. Before you set this parameter, check the memory available on the machine that hosts the Web Services Hub. This parameter allocates 64 KB of memory to each request.
- MaxQueueLength. Maximum number of requests that can be held in the queue when the Web Services Hub reaches the maximum concurrent request limit and all request processing threads are in use. Any request received when the queue is full is rejected.

Use the PowerCenter Administration Console to configure these advanced Web Services Hub properties.

Benefits of Upgrading from PowerCenter 8.1.1

When you upgrade to PowerCenter version 8.5 or later, you can take advantage of the following enhancements and new features:

- Improved performance and reliability. PowerCenter 8.5 has made tremendous improvements in web service performance. The following charts compare service time and throughput performance for PowerCenter 8.1.1 and 8.5.

Figure 3. Average Service Time for PowerCenter 8.1.1 and 8.5
Figure 4. Average Throughput for PowerCenter 8.1.1 and 8.5

- Dynamic scalability. The Web Services Hub monitors web services performance and dynamically starts new DTM processes to handle an increase in web services requests. A larger number of DTM processes increases the number of client requests that can be processed within the target service time. When the load decreases, the Web Services Hub shuts down the additional DTM processes. This means that, at any given time, the DTM engine's usage of system resources is optimized and reflects the overall load on the system.

Figure 5 shows that a larger number of DTM processes running concurrently can handle more client requests in less time:

Figure 5. Average Service Time for Single and Multiple DTM Processes

- Multiple Web Services Hubs associated with a repository.
  When you associate multiple Web Services Hubs with one repository, multiple Web Services Hubs can run the same web services concurrently. Distributing the service request load across multiple Web Services Hubs optimizes the performance of web services.
- New methods to create web service mappings. You can create a web service mapping by defining the source and target manually or by basing the source and target definitions on existing relational or flat file sources and targets. You can also create a service mapping based on a reusable transformation or a mapplet.
- Web service reports. On the Administration Console, you can run a report on the activities of a Web Services Hub and the web services running on it. You can view statistics on the requests received by the Web Services Hub and the average time it took to process messages.
- Try-It application. You can use the Try-It application to test an operation in a web service published on the Web Services Hub console. Provide the request as a SOAP message or as parameter values and then run the web service.

Sizing Guidelines

The service time threshold parameter determines the expected length of time for an average service request to be processed; in effect, it is the maximum response time acceptable for an average service request. In most situations, the service time threshold should be measured in sub-seconds.

If the service load increases and the response time exceeds the service time threshold, the Web Services Hub launches additional workflow instances to handle the load and to maintain the service time threshold. Each additional workflow instance consumes additional memory and resources.
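As a rough illustration of this memory math, the sketch below combines the 64 KB-per-concurrent-request figure from the MaxConcurrentRequests description with a purely hypothetical per-workflow-instance footprint. The 200 MB figure is a placeholder; measure your own mapping's footprint before sizing real hardware.

```python
# Back-of-the-envelope memory estimate for a Web Services Hub host.
# 64 KB per concurrent request comes from the MaxConcurrentRequests
# parameter description; the per-workflow-instance cost is a made-up
# placeholder, since it depends on mapping complexity.
KB = 1024
MB = 1024 * KB


def request_thread_memory(max_concurrent_requests: int) -> int:
    """Memory reserved for request-processing threads (64 KB each)."""
    return max_concurrent_requests * 64 * KB


def workflow_instance_memory(instances: int, per_instance_bytes: int) -> int:
    """Memory consumed by concurrently running workflow instances."""
    return instances * per_instance_bytes


# Example: 100 concurrent requests plus 4 workflow instances at ~200 MB each.
total = request_thread_memory(100) + workflow_instance_memory(4, 200 * MB)
print(round(total / MB, 2), "MB")  # prints "806.25 MB"
```

The request-thread term is small (6.25 MB here); the workflow instances dominate, which is why mapping complexity is the first sizing factor.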
The complexity of the web service mapping determines how much additional memory and resources a workflow instance consumes.

Sizing for PowerCenter web services depends on the following factors:

- Complexity of the web service mapping
- Service time threshold
- Number of concurrent clients

Sample Performance Results

The following example shows performance results generated from web service testing within the Informatica labs. The test environment consisted of two nodes: one node running the Web Services Hub and another node running the Integration Service. The two nodes were running on an isolated 1 Gbps network.

Each node had the following specification:

- 2 CPUs (clock speed 1.79 GHz; AMD Opteron™ Processor 844; 64-bit)
- 4 GB RAM
- Linux 2.6.9-34.0.1.ELsmp

Note: You can also use one machine to host the Web Services Hub and the PowerCenter services. It is not necessary to host the services on separate machines.

Mapping

The mapping used for the test was a simple pass-through mapping with a 2 KB payload. This mapping was chosen so that the capacity of the Integration Service node does not affect throughput. As the mapping logic gets more complex, the Integration Service node may require additional hardware.

Response Time

The following table shows the average response time generated during the testing. Note that, with a larger number of clients, the response time benefits from having messages bundled together.

Number of Clients    Average Response Time (ms)
1                    111
10                   105
100                  86
200                  77
500                  72

Scalability

The following table shows the throughput generated during the testing:

Number of Clients    Throughput (msgs/sec)
1                    8.6
10                   82.9
100                  239.8
200                  313.7
500                  448.5

Authors

Kiran Mehta, Director, Research and Development
Raymond To, Development Manager
Marissa Johnston, Principal Technical Writer
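The throughput figures reported in the Scalability table above reduce to a quick scaling summary relative to the single-client baseline. The numbers below are taken directly from that table; nothing new is measured.

```python
# Speedup over the single-client baseline, computed from the throughput
# table in the "Scalability" section above (clients -> msgs/sec).
throughput = {1: 8.6, 10: 82.9, 100: 239.8, 200: 313.7, 500: 448.5}
baseline = throughput[1]

for clients, rate in sorted(throughput.items()):
    speedup = rate / baseline
    print(f"{clients:>3} clients: {rate:6.1f} msgs/sec ({speedup:.1f}x baseline)")
```

The speedup grows sub-linearly (roughly 52x at 500 clients), consistent with the request bundling and shared DTM resources described in the architecture section.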

GRID Systems


Optimization Techniques for Implementing Parallel Skeletons in Distributed Environments
M. Aldinucci, M. Danelutto {aldinuc,marcod}@di.unipi.it
UNIPI, Dept. of Computer Science, University of Pisa, Largo B. Pontecorvo 3, Pisa, Italy
J. Dünnweber, S. Gorlatch {duennweb,gorlatch}@math.uni-muenster.de
WWU Muenster, Dept. of Computer Science, University of Münster, Einsteinstr. 62, Münster, Germany
CoreGRID Technical Report Number TR-0001
January 21, 2005
Institute on Programming Model
CoreGRID - Network of Excellence

Legacy Code Support for Production Grids
T. Kiss*, G. Terstyanszky*, G. Kecskemeti*, Sz. Illes*, T. Delaittre*, S. Winter*, P. Kacsuk**, G. Sipos**
*Centre of Parallel Computing, University of Westminster, 115 New Cavendish Street, London W1W 6UW, United Kingdom, e-mail: gemlca-discuss@
**MTA SZTAKI, 1111 Kende u. 13, Budapest, Hungary
CoreGRID Technical Report Number TR-0011
6th June 2005
Institute on Problem Solving Environment, Tools and GRID Systems
CoreGRID - Network of Excellence
CoreGRID is a Network of Excellence funded by the European Commission under the Sixth Framework Programme, Project no. FP6-004265

Abstract

In order to improve reliability and to deal with the high complexity of existing middleware solutions, today's production Grid systems restrict the services to be deployed on their resources. On the other hand, end-users require a wide range of value-added services to fully utilize these resources. This paper describes a solution in which legacy code support is offered as a third-party service for production Grids. The introduced solution, based on the Grid Execution Management for Legacy Code Architecture (GEMLCA), does not require the deployment of additional applications on the
Grid resources, or any extra effort from Grid system administrators. The implemented solution was successfully connected to and demonstrated on the UK National Grid Service.

1 Introduction

The vision of Grid computing is to enable anyone to offer resources to be utilised by others via the network. This original aim, however, has not been fulfilled so far. Today's production Grid systems, like the EGEE Grid, the NorduGrid or the UK National Grid Service (NGS), apply very strict rules towards service providers, hence restricting the number of sites and resources in the Grid. The reason for this is the very high complexity of installing and maintaining existing Grid middleware solutions. In a production Grid environment, strong guarantees are needed that system administrators keep the resources up and running. In order to offer a reliable service, only a limited range of software is allowed to be deployed on the resources. On the other hand, these Grid systems aim to serve a large and diverse user community with different needs and goals. These users require a wide range of tools in order to make it easier to create and run Grid-enabled applications.
As system administrators are reluctant to install any software on the production Grid that could compromise reliability, the only way to make these utilities available for users is to offer them as third-party services. These services run on external resources, maintained by external organisations, and they are not an integral part of the production Grid system. However, users can freely select and utilise these additional services based on their requirements and experience with the service.

This scenario was utilised to connect GEMLCA (Grid Execution Management for Legacy Code Architecture) [11] to the UK National Grid Service. GEMLCA enables legacy code programs written in any source language (Fortran, C, Java, etc.) to be easily deployed as a Grid Service without significant user effort. A user-level understanding, describing the necessary input and output parameters and environmental values such as the number of processors or the job manager used, is all that is required to port the legacy application binary onto the Grid. GEMLCA does not require any modification of, or even access to, the original source code. The architecture is also integrated with the P-GRADE portal and workflow [13] solutions to offer a user-friendly interface and create complex applications including legacy and non-legacy components.

(This research work is carried out under the FP6 Network of Excellence CoreGRID funded by the European Commission, Contract IST-2002-004265.)

In order to connect GEMLCA to the NGS, two main tasks have been completed:

• First, a portal server has been set up at the University of Westminster running the P-GRADE Grid portal and offering access to the NGS resources for authenticated and authorised users. With the help of their Grid certificates and NGS accounts, portal users can utilise NGS resources in a much more convenient and user-friendly way than previously.
• Second, the GEMLCA architecture has been redesigned in order to support the third-party service provider scenario. There is no need to install GEMLCA on any NGS resource. The architecture is deployed centrally on the portal server but still offers the same legacy code functionality as the original solution: users can easily deploy legacy applications as Grid services, can access these services from the portal interface, and can create, execute and visualise complex Grid workflows.

This paper describes two different scenarios in which GEMLCA is redesigned to support a production Grid system. The first scenario supports traditional job-submission-like task execution, and the second offers the legacy codes as pre-deployed services on the appropriate resources. In both cases GEMLCA runs on an external server, and neither compromises the reliability of the production Grid system nor requires extra effort from the Grid system administrators. The service is transparent from the Grid operators' point of view but offers essential functionality for the end-users.

2 The UK National Grid Service

The National Grid Service (NGS) is the UK production Grid operated by the Grid Operation Support Centre (GOSC).
It offers a stable, highly-available, production-quality Grid service to the UK research community, providing compute and storage resources for users. The core NGS infrastructure consists of four cluster nodes at Cambridge, CCLRC-RAL, Leeds and Manchester, and two national High Performance Computing (HPC) services: HPCx and CSAR. NGS provides compute resources for the compute Grid through compute clusters at Leeds and Oxford, and storage resources for the data Grid through data clusters at CCLRC-RAL and Manchester. This core NGS infrastructure has recently been extended with two further Grid nodes at Bristol and Cardiff, and will be further extended by incorporating UK e-Science Centres through separate Service Level Agreements (SLA).

NGS is based on GT2 middleware. Its security is built on the Globus Grid Security Infrastructure (GSI) [14], which supports authentication, authorization and single sign-on. NGS uses GridFTP to transfer input and output files to and from nodes, and the Storage Resource Broker (SRB) [6] with OGSA-DAI [3] to provide access to data on NGS nodes.
It uses the Globus Monitoring and Discovery Service (MDS) [7] to handle information on NGS nodes. Ganglia [12], the Grid Integration Test Script (GITS) [4] and Nagios [2] are used to monitor both the NGS and its nodes. Nagios checks nodes and services, while GITS monitors communication among NGS nodes. Ganglia collects and processes information provided by Nagios and GITS in order to generate an NGS-level view.

NGS uses a centralised user registration model. Users have to obtain certificates and open accounts to be able to use any NGS service. The certificates are issued by the UK Core Programme Certification Authority (e-Science certificate) or by other CAs. NGS accounts are allocated from a central pool of generic user accounts to enable users to register with all NGS nodes at the same time. User management is based on the Virtual Organisation Membership Service (VOMS) [1]. VOMS supports central management of user registration and authorisation, taking into consideration local policies on resource access and usage.

3 Grid Execution Management for Legacy Code Architecture

The Grid computing environment requires special Grid-enabled applications capable of utilising the underlying Grid middleware and infrastructure. Most Grid projects so far have either developed new applications from scratch, or significantly re-engineered existing ones in order to run on their platforms. However, as the Grid becomes commonplace in both scientific and industrial settings, the demand for porting a vast legacy of applications onto the new platform will emerge. Companies and institutions can ill afford to throw such applications away for the sake of a new technology, and there is a clear business imperative for them to be migrated onto the Grid with the least possible effort and cost. The Grid Execution Management for Legacy Code Architecture (GEMLCA) enables legacy code programs written in any source language (Fortran, C, Java, etc.) to be easily deployed as a Grid Service without significant user effort. In this chapter the original GEMLCA architecture is
outlined. This architecture has been modified, as described in chapters 4 and 5, in order to create a centralised version for production Grids.

GEMLCA represents a general architecture for deploying legacy applications as Grid services without re-engineering the code or even requiring access to the source files. The high-level GEMLCA conceptual architecture is represented in Figure 1. As shown in the figure, there are four basic components in the architecture:

1. The Compute Server is a single or multiple processor computing system on which several legacy codes are already implemented and available. The goal of GEMLCA is to turn these legacy codes into Grid services that can be accessed by Grid users.
2. The Grid Host Environment implements a service-oriented OGSA-based Grid layer, such as GT3 or GT4. This layer is a pre-requisite for connecting the Compute Server into an OGSA-built Grid.
3. The GEMLCA Resource layer provides a set of Grid services which expose legacy codes as Grid services.
4. The fourth component is the GEMLCA Client that can be installed on any client machine through which a user would like to access the GEMLCA resources.

Figure 1: GEMLCA Conceptual Architecture

The novelty of the GEMLCA concept compared to other similar solutions like [10] or [5] is that it requires minimal effort from both Compute Server administrators and end-users of the Grid. The Compute Server administrator should install the GEMLCA Resource layer on top of an available OGSA layer (GT3/GT4). It is also their task to deploy existing legacy applications on the Compute Servers as Grid services, and to make them accessible for the whole Grid community. End-users do not have to do any installation or deployment work if a GEMLCA portal is available for the Grid and they only need those legacy code services that were previously deployed by the Compute Server administrators. In such a case end-users can immediately use all these legacy code services, provided they have access to the GEMLCA Grid resources. If they would like
to deploy legacy code services on GEMLCA Grid resources they can do so, but these services cannot be accessed by other Grid users. As a last resort, if no GEMLCA portal is available for the Grid, a user must install the GEMLCA Client on their client machine. However, since this requires some IT skills, it is recommended that a GEMLCA portal is installed on every Grid where GEMLCA Grid resources are deployed.

The deployment of a GEMLCA legacy code service assumes that the legacy application runs in its native environment on a Compute Server. It is the task of the GEMLCA Resource layer to present the legacy application as a Grid service to the user, to communicate with the Grid client, and to hide the legacy nature of the application. The deployment process of a GEMLCA legacy code service requires only a user-level understanding of the legacy application, i.e., to know what the parameters of the legacy code are and what kind of environment is needed to run the code (e.g. a multiprocessor environment with n processors). The deployment defines the execution environment and the parameter set for the legacy application in an XML-based Legacy Code Interface Description (LCID) file that should be stored in a pre-defined location. This file is used by the GEMLCA Resource layer to handle the legacy application as a Grid service.

GEMLCA provides the capability to convert legacy codes into Grid services just by describing the legacy parameters and environment values in the XML-based LCID file. However, an end-user without specialist computing skills still requires a user-friendly Web interface (portal) to access the GEMLCA functionalities: to deploy, execute and retrieve results from legacy applications. Instead of developing a new custom Grid portal, GEMLCA was integrated with the workflow-oriented P-GRADE Grid portal, extending its functionalities with new portlets. Following this integration, end-users can easily construct workflow applications built from legacy code services running on
different GEMLCA Grid resources. The workflow manager of the portal contacts the selected GEMLCA Resources, passes them the actual parameter values of the legacy code, and then it is the task of the GEMLCA Resource to execute the legacy code with the actual parameter values. The other important task of the GEMLCA Resource is to deliver the results of the legacy code service back to the portal. The overall structure of the GEMLCA Grid with the Grid portal is shown in Figure 2.

Figure 2: GEMLCA with Grid Portal

4 Connecting GEMLCA to the NGS

Two different scenarios were identified in order to execute legacy code applications on NGS sites. In each scenario both GEMLCA and the P-GRADE portal are installed on the Parsifal cluster of the University of Westminster. As a result, there is no need to deploy any GEMLCA or P-GRADE portal code on the NGS resources.

Scenario 1: legacy codes are stored in a central repository and GEMLCA submits these codes as jobs to NGS sites.
Scenario 2: legacy codes are installed on NGS sites and executed through GEMLCA.

The two scenarios support different user needs, and each of them increases the usability of the NGS in different ways for end-users. The GEMLCA research team implemented the first scenario in May 2005, and is currently working on the implementation of the second scenario. This chapter briefly describes these two different scenarios, and the next chapter explains in detail the design and implementation aspects of the first, already implemented, solution. As the design and implementation of the second scenario is currently work in progress, its detailed description will be the subject of a future publication.

4.1 Scenario 1: Legacy Code Repository for NGS

There are several legacy applications that would be useful for users within the NGS community. These applications were developed by different institutions and are currently not available for other members of the community. According to this scenario legacy codes can be uploaded into a central repository and made
available for authorised users through a Grid portal. The solution extends the usability of the NGS as users can submit not only their own applications but can also utilise other legacy codes stored in the repository. Users can access the central repository, managed by GEMLCA, through the P-GRADE portal and upload their applications into this repository. After uploading legacy applications, users with valid certificates and existing NGS accounts can select and execute legacy codes through the P-GRADE portal on different NGS sites. In this scenario the binary codes of legacy applications are transferred from the GEMLCA server to the NGS sites, and executed as jobs.

Figure 3: Scenario 1 - Legacy Code Repository for NGS

4.2 Scenario 2: Pre-deployed Legacy Code Services

This solution extends the NGS Grid towards a service-oriented Grid. Users can not only submit and execute jobs on the resources but can also access legacy applications deployed on NGS and include these in their workflows. This scenario is the logical extension of the original GEMLCA concept in order to use it with the NGS. In this scenario the legacy codes are already deployed on the NGS sites and only the parameters (input or output) are submitted. Users contact the central GEMLCA resource through the P-GRADE portal, and can access the legacy codes that are deployed on the NGS sites. In this scenario the NGS system administrators have full control of the legacy codes that they deploy on their own resources.

Figure 4: Scenario 2 - Pre-Deployed Legacy Code on NGS Sites

5 Legacy Code Repository for the NGS

5.1 Design objectives

The currently implemented solution that enables users to deploy, browse and execute legacy code applications on the NGS sites is based on scenario 1, as described in the previous chapter. This solution utilises the original GEMLCA architecture with the necessary modifications in order to execute the tasks on the NGS resources. The primary aims of the solution are the following:

• The owners of legacy applications can publish their codes in
the central repository, making them available for other authorised users within the UK e-Science community. The publication is no different from the original method used in GEMLCA, and it is supported by the administration Grid portlet of the P-GRADE portal, as described in [9]. After publication the code is available for other non-computer-specialist end-users.

• Authorised users can browse the repository, select the necessary legacy codes, set their input parameters, and can even create workflows from compatible components. These workflows can then be mapped onto the NGS resources, submitted and the execution visualised.

• The deployment of a new legacy application requires some high-level understanding of the code (like the name and types of input and output parameters) and its execution environment (e.g. supported job managers, maximum number of processors). However, once the code is deployed, end-users with no Grid-specific knowledge can easily execute it, and analyse the results using the portal interface.

As GEMLCA is integrated with the P-GRADE Grid portal, NGS users have two different options in order to execute their applications. They can submit their own code directly, without the described publication process, using the original facilities offered by the portal. This solution is suggested if the execution is only on an ad-hoc basis, when the publication puts too much overhead on the process. However, if they would like to make their code available for a larger community, and would like to make the execution simple enough for any end-user, they can publish the code with GEMLCA in the repository.

In order to execute a legacy code on an NGS site, users should have a valid user certificate, for example an e-Science certificate, an NGS account and also an account for the P-GRADE portal running at Westminster. After logging in to the portal they download their user certificate from an appropriate MyProxy server. The legacy code, submitted to the NGS site, utilises this certificate to authenticate users.
Figure 5: Comparison of the Original and the NGS GEMLCA Concept

5.2 Implementation of the Solution

To fulfil these objectives some modifications and extensions of the original GEMLCA architecture were necessary. Figure 5 compares the original and the extended GEMLCA architectures. As shown in the figure, an additional layer, representing the remote NGS resource where the code is executed, appears. The deployment of a legacy code is no different from the original GEMLCA concept; however, the execution has changed significantly in the NGS version. Transferring the executable and the input parameters to the NGS site, and instructing the remote GT2 GRAM to execute the jobs, required the modification of the GEMLCA architecture, including the development of a special script that interfaces with Condor-G.

The major challenge when connecting GEMLCA to the NGS was that NGS sites use Globus Toolkit version 2 (GT2), whereas the current GEMLCA implementations are based on service-oriented Grid middleware, namely GT3 and GT4. The interfacing between the different middleware platforms is supported by a script, called the NGS script, that provides the additional functionality required for executing legacy codes on NGS sites. Legacy codes and input files are stored in the central repository but executed on the remote NGS sites. To execute the code on a remote site, first the NGS script, executed as a GEMLCA legacy code, instructs the portal to copy the binary and input files from the central repository to the NGS site. Next, the NGS script, using Condor-G, submits the legacy code as a job to the remote site.

The other major part of the architecture where modifications were required is the config.xml file and its related Java classes. GEMLCA uses an XML-based description file, called config.xml, in order to describe the environmental parameters of the legacy code. This file had to be
extended and modified in order to take into consideration a second-level job manager, namely the job manager used on the remote NGS site. The config.xml should also notify the GEMLCA resource that it has to submit the NGS script instead of a legacy code to the GT4 MMJFS (Master Managed Job Factory Service) when the user wants to execute the code on an NGS site. The implementation of these changes also required the modification of the GEMLCA core layer.

In order to utilise the new GEMLCA NGS solution:

1. The owner of the legacy application deploys the code as a GEMLCA legacy code in the central repository.
2. The end-user selects and executes the appropriate legacy applications on the NGS sites.

As the deployment process is virtually identical to the one used by the original GEMLCA solution, here we concentrate on the second step, the code execution. The following steps are performed by GEMLCA when executing a legacy code on the NGS sites (Figure 6):

1. The user selects the appropriate legacy codes from the portal, defines input files and parameters, and submits an "execute a legacy code on an NGS site" request.
2. The GEMLCA portal transfers the input files to the NGS site.
3. The GEMLCA portal forwards the user's request to a GEMLCA Resource.
4. The GEMLCA resource creates and submits the NGS script as a GEMLCA job to the MMJFS.
5. The MMJFS starts the NGS script.
6. Condor-G contacts the remote GT2 GRAM, sends the binary of the legacy code and its parameters to the NGS site, and submits the legacy code as a job to the NGS site job manager.

Figure 6: Execution of Legacy Codes on an NGS Site

When the job has been completed on the NGS site the results are transferred from the NGS site to the user in the same way.

6 Results - Traffic simulation on the NGS

A working prototype of the described solution has been implemented and tested by creating and executing a traffic simulation workflow on the different NGS resources. The workflow consists of three types of components:

1. The Manhattan legacy code is an application to
generate inputs for the MadCity simulator: a road network file and a turn file. The MadCity road network file is a sequence of numbers, representing the topology of a road network. The MadCity turn file describes the junction manoeuvres available in a given road network. Traffic light details are also included in this file.

2. MadCity [8] is a discrete-time microscopic traffic simulator that simulates traffic on a road network at the level of individual vehicle behaviour on roads and at junctions. After completing the simulation, a macroscopic trace file, representing the total dynamic behaviour of vehicles throughout the simulation run, is created.

3. Finally, a traffic density analyser compares the traffic congestion of several runs of the simulator on a given network, with different initial road traffic conditions specified as input parameters. The component presents the results of the analysis graphically.

Each of these applications was published in the central repository at Westminster as a GEMLCA legacy code. The publication was done using the administration portlet of the GEMLCA P-GRADE portal. During this process the types of input and output parameters, and environmental values, like job managers and the maximum number of processors used for parallel execution, were set. Once published, the codes are ready to be used by end-users even with very limited computing knowledge.

Figure 7 shows the workflow graph and the execution of the different components on NGS resources:

• Job 0 is a road network generator mapped at Leeds,
• jobs 1 and 2 are traffic simulators running in parallel at Leeds and Oxford, respectively,
• finally, job 3 is a traffic density analyser executed at Leeds.

Figure 7: Workflow Graph and Visualisation of its Execution on NGS Resources

When creating the workflow the end-user selected the appropriate applications from the repository, set input parameters and mapped the execution to the available NGS resources. During execution the NGS script ran, contacted the remote GT2 GRAM, and instructed the portal to
pass executables and input parameters to the remote site. When the execution finished, the output files were transferred back to Westminster and were made available for the user.

7 Conclusion and Future Work

The implemented solution successfully demonstrated that additional services, like legacy code support, run and maintained by third-party service providers, can be added to production Grid systems. The major advantage of this solution is that the reliability of the core Grid infrastructure is not compromised, and no additional effort is required from Grid system administrators. On the other hand, utilising these services the usability of these Grids can be significantly improved.

Utilising and re-engineering the GEMLCA legacy code solution, two different scenarios were identified to provide legacy code support for the UK NGS. The first, providing legacy code repository functionality and allowing the submission of legacy applications as jobs to NGS resources, was successfully implemented and demonstrated. The final production version of this architecture and its official release for NGS users is scheduled for June 2005. The second scenario, which extends the NGS with pre-deployed legacy code services, is currently in the design phase. Challenges have been identified concerning its implementation, especially the creation and management of virtual organizations that could utilise these pre-deployed services.

References

[1] R. Alfieri, R. Cecchini, V. Ciaschini, L. dell'Agnello, A. Frohner, A. Gianoli, K. Lorentey, and F. Spata. VOMS, an authorization system for virtual organizations. af.infn.it/voms/VOMS-Santiago.pdf.
[2] S. Andreozzi, S. Fantinel, D. Rebatto, L. Vaccarossa, and G. Tortone. A monitoring tool for a grid operation center. In CHEP 2003, La Jolla, California, March 2003.
[3] Mario Antonioletti, Malcolm Atkinson, Rob Baxter, Andrew Borley, Neil P. Chue Hong, Brian Collins, Neil Hardman, Ally Hume, Alan Knox, Mike Jackson, Amy Krause, Simon Laws, James Magowan, Norman W. Paton, Dave Pearson, Tom Sugden, Paul Watson, and Martin Westhead. The design and implementation of grid database services in OGSA-DAI. Concurrency and Computation: Practice and Experience, 17:357–376, 2005.
[4] David Baker, Mark Baker, Hong Ong, and Helen Xiang. Integration and operational monitoring tools for the emerging UK e-Science grid infrastructure. In Proceedings of the UK e-Science All Hands Meeting (AHM 2004), East Midlands Conference Centre, Nottingham, 2004.
[5] B. Balis, M. Bubak, and M. Wegiel. A solution for adapting legacy code as web services. In V. Getov and T. Kiellmann, editors, Component Models and Systems for Grid Applications, pages 57–75. Springer, 2005. ISBN 0-387-23351-2.
[6] C. Baru, R. Moore, A. Rajasekar, and M. Wan. The SDSC storage resource broker. In Proc. CASCON'98 Conference, November 1998.
[7] Karl Czajkowski, Steven Fitzgerald, Ian Foster, and Carl Kesselman. Grid information services for distributed resource sharing. In Proceedings of the Tenth IEEE International Symposium on High-Performance Distributed Computing (HPDC-10), /research/papers/MDS-HPDC.pdf.
[8] A. Gourgoulis, G. Terstyansky, P. Kacsuk, and S. C. Winter. Creating scalable traffic simulation on clusters. In PDP 2004. Conference Proceedings of the 12th Euromicro Conference on Parallel, Distributed and Network based Processing, La Coruna, Spain, February 2004.
[9] A. Goyeneche, T. Kiss, G. Terstyanszky, G. Kecskemeti, T. Delaitre, P. Kacsuk, and S. C. Winter. Experiences with deploying legacy code applications as grid services using GEMLCA. In P. M. A. Sloot, A. G. Hoekstra, T. Priol,

IBM Grid Computing


IBM BladeCenter chassis:
- Hot-swap Ethernet Switch Module in bay 1 provides: 4-port 1 Gbps Ethernet connection to the network, 2 internal 10/100 Mbps (Serial over LAN) links to the Management Modules, and 14 1 Gbps links to the blades
- An additional or redundant Ethernet Switch Module may be installed in bay 2
- Up to four hot-swap 1800/2000 watt Power Modules
- Up to eight Ethernet ports
- Up to four Fibre Channel ports
- Single IBM management console
- Resource sharing allows power and cooling savings
326 high-performance rack servers:
- 2U: supports dual Xeon™ processors, up to 16 GB of memory, 6 hot-swap SCSI disks, integrated systems management
- 1U: supports dual Opteron™ processors, up to 16 GB of memory, 2 hot-swap SCSI or SATA disks, integrated systems management

JS20 blade server:
- PowerPC-based blade server using dual 2.2 GHz PPC 970 chips
- Up to 4 GB of memory per blade, dual IDE disks
- Supports two daughter cards (Ethernet, Fibre Channel, etc.)

SUMMARY


SOFTWARE—PRACTICE AND EXPERIENCE
Softw. Pract. Exper. 2005; 35:817–826
Published online in Wiley InterScience. DOI: 10.1002/spe.689

The case for using Bridge Certificate Authorities for Grid computing

Marty Humphrey1,*,†, Jim Basney2 and Jim Jokl1

1 Department of Computer Science, University of Virginia, 151 Engineer's Way, Charlottesville, VA 22904, U.S.A.
2 National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, 605 East Springfield Avenue, Champaign, IL 61820, U.S.A.

SUMMARY

As Grid deployments increase, the challenge remains to create a scalable, multi-organizational authentication infrastructure. Public key infrastructures (PKIs) are widely used for authentication in Grids, due in large part to the success of the Globus toolkit, despite the challenges and difficulties both for PKI administrators and users. The Bridge Certificate Authority (CA) is a compromise between a strictly hierarchical PKI and a mesh PKI and achieves many of the benefits of the hierarchical PKI and mesh PKI, but has been untested for use with Grid software. This paper reports on the use of a Bridge CA with two representative Grid software packages: the Globus Toolkit v2 and .NET. We find that both packages support Bridge CAs sufficiently today to be usable in Grid software architectures, although not without limitations. In this paper, and through these experiments, we build the case for using Bridge CAs for Grid computing. Copyright © 2005 John Wiley & Sons, Ltd.

KEY WORDS: Grid computing; authentication; PKI; Bridge Certificate Authority

INTRODUCTION

Authentication is the foundation of secure computing and is defined as the act of ensuring that someone or something is whom they claim to be. In general distributed computing scenarios, users might

* Correspondence to: Marty Humphrey, Department of Computer Science, School of Engineering and Applied Science, University of Virginia, 151 Engineer's Way, P.O. Box 400740, Charlottesville, VA 22904, U.S.A.
† E-mail: humphrey@
Contract/grant sponsor: National Science Foundation under
grants (Next Generation Software program); contract/grant number: ACI-0203960
Contract/grant sponsor: NSF Middleware Initiative; contract/grant number: ANI-0222571
Contract/grant sponsor: the National Partnership for Advanced Computational Infrastructure (NPACI) (through a subcontract to SURA), and Microsoft Research; contract/grant number: SCI-0123937

Received 18 October 2004; Revised 23 January 2005; Accepted 11 February 2005

be required to authenticate to remote resources, such as when attempting to spawn a remote shell. Similarly, a remote FTP server often requires a username and a password in order for the client to retrieve one or more files. Note that the decision to allow a particular request to proceed—which is referred to as authorization—generally follows only after authentication. The goal of Grid computing is to create a virtual organization across one or more physical organizations or administrative domains [...] hierarchical PKI model requires that every service and every client have the same, single CA as the 'trust anchor', while the bridge model facilitates more pairwise trust relationships via a Bridge CA.
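The difference between hierarchical and bridge path validation can be illustrated with a toy graph model. This is a sketch only — real X.509 path building also verifies signatures, validity periods, name constraints and policies — and the CA names simply follow the paper's figures (Root A, Mid-A, Bridge CA, and so on).

```python
from collections import deque

# Toy model of certification path building, not real X.509 validation.
# An edge "subject -> issuer" means the subject's certificate can be
# verified with that issuer's key.  The cross-certificate pairs are
# what the Bridge CA contributes: they connect otherwise separate
# hierarchies into one graph.
EDGES = {
    "UserA1": ["Mid-A"],
    "Mid-A": ["Root A"],
    "UserB1": ["Mid-B"],
    "Mid-B": ["Root B"],
    # Cross-certificate pairs: each root and the bridge certify each other.
    "Root A": ["Bridge CA"],
    "Root B": ["Bridge CA"],
    "Bridge CA": ["Root A", "Root B"],
}

def find_path(cert, trust_anchor):
    """Breadth-first search for a certification path ending at the anchor."""
    queue = deque([[cert]])
    seen = {cert}
    while queue:
        path = queue.popleft()
        if path[-1] == trust_anchor:
            return path
        for issuer in EDGES.get(path[-1], []):
            if issuer not in seen:
                seen.add(issuer)
                queue.append(path + [issuer])
    return None

# A relying party in organisation B, trusting only Root B, can still
# build a path for UserA1 via the bridge:
print(find_path("UserA1", "Root B"))
# ['UserA1', 'Mid-A', 'Root A', 'Bridge CA', 'Root B']
```

Without the cross-certificate edges, no path from UserA1 to Root B would exist, which is exactly the hierarchical-PKI situation in which both parties must share a single trust anchor.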
To simplify this study of Bridge CAs for Grids, the focus of this paper is largely on the mechanics of Bridge CAs with existing Grid software and not on the policy issues that are being addressed by other projects. We assume that the Bridge CA enforces a set of minimum policy requirements for the CAs in the PKI as agreed upon by the collaborating entities (for example, by the EU Grid PMA [...]).

Figure 1. Typical hierarchical PKI.

(e.g. by configuring the software to explicitly trust the appropriate intermediate certificate of Figure [...])

Figure 2. Typical Bridge PKI (Root A/Mid-A/User A1/User A2 and Root B/Mid-B/User B1 through Root n, connected to a Bridge CA by cross-certificate pairs).
Figure 3. Path validation for Hierarchical CA.
Figure 4. Path validation for Bridge CA.

One of the important areas in which the hierarchical CA is different from the Bridge CA is path validation, as seen through an example in which Fred from CampusU attempts to validate machine1 from Org1. Figure [...]

Figure 5. Schematic of Grid testbed PKI Bridge (testbed CAs connected by cross-certificate pairs).

[...] is easy. In other words, the chain of Figure [...]

GLOBUS TOOLKIT V2

Version 2 of the Globus Toolkit implements its PKI operations using OpenSSL and so inherits some of the limitations of the OpenSSL software. Although we cross-certified with UVa, UAB, USC, and TACC, we decided initially to focus on only a single institution, UAB, when we tested the ability of Globus V2 to correctly process the Bridge CA structure. The first result we found was that Campus CA integration is complicated by the Globus
interface. Whereas Campus CA and OS-exported certificates are generally in PKCS-12 format [...]

Windows XP treatment of certificates

Because .NET inherits many of its behaviors from the underlying operations of the platform (such as Windows XP), of particular interest is how Windows XP uses the URIs in the AIA field of the certificates in the chain being validated. There were a number of key questions that were addressed in the experiments, including whether or not Windows XP would simply accept a single certificate for the immediately superior CA for each AIA URI found in a certificate, whether or not Windows XP would accept an object containing multiple certificates, and whether or not more information could be provided using LDAP URLs instead of HTTP.

We used the first Bridge CA—which contained OrgCA, CorpCA, and RootCA and was established as part of our HEPKI-TAG effort—to determine Windows support for a Bridge CA. We then simulated the users of these three organizations using their certificates by applying digital signatures on Microsoft Word documents. Our tests consisted of having a real user simulate a particular user in one of the three organizations by installing the certificate/key of that particular user, and then attempting to read the document that was signed by a user in the other organizations. The challenge to Windows XP was to recognize the certificate bridging structure by downloading the appropriate certificates as contained in the AIA fields. Our tests produced the following results, which are instrumental in understanding how Windows, and thus .NET, processes a Bridge CA.

AIA URI lookups. Windows XP reads a single object using HTTP URLs in the AIA field of the certificate. Windows XP will try all of the URLs in the AIA field in order but will stop after it reads and caches the first entry found. For example, it is not possible to simply add a URL for each object needed and expect Windows to
read and cache all of the certificates. Windows XP will stop after downloading the first one and appears to assume that the multiple URLs in the certificate all point to the same information, so further lookups after the first successful download are not needed. Windows 2000 reads objects using AIA pointers the same way as Windows XP. The caching strategy appears to be a little different but the overall functionality is similar.

Download of simple certificates. When referenced via an HTTP URL in the AIA field in a certificate, Windows XP will download and cache a single certificate. The certificate should be in DER format.

Download of PKCS-7 objects. When referenced via an HTTP URL in the AIA field of a certificate, Windows XP will accept a PKCS-7 object [...]

LDAP server log. So, early indications are that not 100% of the PKI functionality in Microsoft applications comes directly from operating system libraries. Testing bridge functionality will need to be done on a per-application basis.

CRL checking. If CRL checking is enabled in the operating system, Office XP appears to want to check CRLs while performing document signature verification even if there are not any CRL distribution points present in the certificates.

Based on these experiments, and through separate experiments, we have been able to verify that .NET and/or WSE is correctly able to process the bridge CA, although only by preloading intermediate CAs into the client-side machine (a similar requirement to the one we saw in Globus V2). In addition, because of robust processing on the part of Windows XP of the AIA field in X.509 certificates, we believe that the potential for Bridged PKIs in .NET is strong.
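The first-hit AIA behaviour described above can be modelled in a few lines. This is purely an illustrative sketch of the observed semantics, not Windows code; `fetch` stands in for an HTTP download of a DER certificate or PKCS-7 object.

```python
# Toy model of the AIA lookup behaviour observed in the experiments:
# try the AIA URLs in order, stop after the first successful download,
# and cache the result so later validations skip the network entirely.
_cache = {}

def resolve_aia(aia_urls, fetch):
    """Return the first object fetched from aia_urls, caching it."""
    for url in aia_urls:
        if url in _cache:
            return _cache[url]
        obj = fetch(url)
        if obj is not None:
            _cache[url] = obj
            return obj  # first hit wins; remaining URLs are never tried
    return None

# fake_fetch records which URLs were actually requested.
calls = []
def fake_fetch(url):
    calls.append(url)
    return "object-from-" + url

resolve_aia(["http://a/ca.der", "http://b/chain.p7b"], fake_fetch)
print(calls)  # ['http://a/ca.der'] -- the second URL is never fetched
```

This is why listing one URL per needed certificate does not work as a distribution strategy: as the paper notes, all URLs in the AIA field are assumed to point to the same object.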
CONCLUSION

We have reported on the use of the Bridge CA as the means by which to create a scalable and persistent authentication infrastructure for Grids, without incurring the difficult policy decisions and the single point of failure of hierarchical PKIs and without the arbitrary complexity of mesh PKIs. Both of the two representative Grid software packages we report on in this document were able to correctly process the Bridge CA path validation, but only if the architecture of the Bridge PKI is straightforward and with limited use of the AIA field in X.509 certificates. Our continuing work is to further expand the operations of our existing Bridge CA at the University of Virginia by cross-certifying with more institutions and PKIs.

ACKNOWLEDGEMENTS

We thank the many people at UVa, UAB, TACC, USC, and others on the SURA testbed team who worked to pull all of this bridge work together, particularly John-Paul Robinson of UAB and Ashok Adiga of TACC. We additionally thank Norm Beekwilder and Mark Morgan at the University of Virginia for their help with .NET. Finally, we thank the reviewers for their many constructive comments, which we used to greatly improve the quality of this paper.

REFERENCES

1. Foster I, Kesselman C, Tuecke S. The physiology of the Grid: An Open Grid Services Architecture for distributed systems integration. Draft of 6/22/02. Available at: /ogsi-wg/drafts/ogsa draft2.9 2002-06-22.pdf.
2. Humphrey M, Thompson M. Security implications of typical Grid computing usage scenarios. Proceedings of the 10th International Symposium on High Performance Distributed Computing (HPDC-10), San Francisco, CA, 7–9 August 2001.
3. Public-key infrastructure (X.509) (pkix) page. /html.charters/pkix-charter.html [15 October 2004].
4. Neuman BC, Ts'o T. Kerberos: An authentication service for computer networks. IEEE Communications 1994; 32(9):33–38.
5. ITU-T Recommendation X.509 (1997 E). Information technology—Open Systems Interconnection—The Directory: Authentication framework, June 1997.
6. Dierks T, Rescorla E. The TLS Protocol
Version 1.1. RFC 2246, March 2004. Available at: /internet-drafts/draft-ietf-tls-rfc2246-bis-06.txt.
7. Foster I, Kesselman C, Tsudik G, Tuecke S. A security architecture for computational grids. Proceedings of the 5th ACM Conference on Computer and Communications Security Conference. ACM Press: New York, 1998; 83–92.
8. Butler R, Engert D, Foster I, Kesselman C, Tuecke S, Volmer J, Welch V. A national-scale authentication infrastructure. IEEE Computer 2000; 33(12):60–66.
9. Globus project page. / [15 October 2004].
10. Housely R, Polk W, Ford W, Solo D. Internet X.509 public key infrastructure certificate and certificate revocation list (CRL) profile. RFC 3280, April 2002. Available at: /rfc/rfc3280.txt.
11. Myers M, Ankney R, Malpani A, Galperin S, Adams C. X.509 Internet public key infrastructure online certificate status protocol (OCSP). RFC 2560, June 1999. Available at: /rfc/rfc2560.txt.
12. Clarke R. The fundamental inadequacies of conventional public key infrastructure. Proceedings of the European Conference on Information Systems, Bled, Slovenia, June 2001. Available at: .au/people/Roger.Clarke/II/ECIS2001.html.
13. Ellison C, Schneier B. Ten risks of PKI: What you're not being told about Public Key Infrastructure. Computer Security Journal 2000; 16(1):1–7.
14. European Policy Management Authority for Grid Authentication page. / [15 October 2004].
15. Higher education PKI technical activities group (HEPKI-TAG) page. /hepki-tag/ [15 October 2004].
16. Higher education bridge certificate authority (HEBCA) page. /hebca/ [15 October 2004].
17. Federal bridge certificate authority page. /pki/fbca/welcome.html [15 October 2004].
18. Web services resource framework page. /developerworks/webservices/library/ws-resource/ [15 October 2004].
19. .NET Web services resource framework page. [15 October 2004].
20. Humphrey M, Wasson G, Morgan M, Beekwilder N. An early evaluation of WSRF and WS-Notification via .NET. 2004 Grid Computing Workshop (associated with Supercomputing 2004), Pittsburgh, PA, 8 November 2004.
21. Jokl J, Basney
J, Humphrey M. Experiences using Bridge CAs for Grids. Proceedings of Workshop on Grid Security Practice and Experience (U.K. e-Science Security Task Force), Oxford, 8–9 July 2004. Technical Report YCS-2004-380, Department of Computer Science, University of York. Available at: /ftpdir/reports/.
22. Novotny J, Tuecke S, Welch V. An online credential repository for the Grid: MyProxy. Proceedings of the 10th International Symposium on High Performance Distributed Computing (HPDC-10), San Francisco, CA, 7–9 August 2001.
23. MyProxy online credential repository page. /myproxy/ [15 October 2004].
24. Polk T, Hastings N. Bridge certification authorities: Connecting B2B public key infrastructures. PKI Forum Meeting Proceedings, June 2000.
25. TeraGrid page. [15 October 2004].
26. Internet2 Windows XP PKI bridge path validation experiment page. /bridge/ [15 October 2004].
27. RSA Laboratories. PKCS 12 v1.0: Personal Information Exchange Syntax, June 1999. Available at: ftp:///pub/pkcs/pkcs-12/pkcs-12v1.pdf.
28. Libpkix sourceforge page. / [15 October 2004].
29. Microsoft Web Services Enhancements (WSE) page. /webservices/building/wse/default.aspx [15 October 2004].
30. RSA Laboratories. Cryptographic Message Syntax Standard. PKCS 7. An RSA Laboratories technical note, version 1.5. Revised 1 November 1993.

TIPTOP GP 5.25 Web Services Development Guide


TIPTOP Services Gateway (aws_ttsrv2 / aws_efsrv2, the ERP Web login channel): TIPTOP Service Gateway V2, with TIPTOP acting as the server side, provides a generic interface through which external systems such as CRM, PDM and GPM access ERP service functions.

ERP service functions exposed through the gateway include, for example:
- Get item master data / create item master data
- Create product structure (BOM)
- Create quotation
- Get employee master data
- Get supplier master data
- Get customer master data / create customer master data

Creating a service function:
1. Define the input parameters the service function accepts and the return parameters it produces after processing.
2. Write the service function's program logic as a 4GL function.
3. Register the 4GL function as a Service and generate the WSDL file for the service list.
4. Provide the ERP service WSDL file and the Service function's input/output parameter definitions.
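Once a 4GL function is registered and its WSDL generated, an external system invokes it as an ordinary SOAP web service. The envelope below is a purely illustrative sketch: the service name `GetItemMasterData`, the namespace and the parameter names are hypothetical and would in practice come from the generated WSDL.

```xml
<!-- Hypothetical SOAP request to a registered TIPTOP service function.
     Service name, namespace and parameter names are illustrative only. -->
<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:tip="http://example.com/tiptop/services">
  <soapenv:Body>
    <tip:GetItemMasterData>
      <tip:ItemNo>A001</tip:ItemNo>
    </tip:GetItemMasterData>
  </soapenv:Body>
</soapenv:Envelope>
```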
Grids_in_OPM


Grids in OPM

Abstract

Grid information can be stored in the OPM database and managed by the Project Manager application. Information is written from Grid Utility to the OPM. The Save in OPM option in Grid Utility ensures that corresponding Master Grid and Processing Grid information reside in the OPM database. Certain SFMs can import the grid information from the OPM database into the SFM parameter setup.

NOTICE

Copyright protection as an unpublished work is claimed by WesternGeco. The work was created in 2008. Should publication of the work occur, the following notice shall apply: "©2008 WesternGeco". This work contains valuable trade secrets; disclosure without written authorization is prohibited.

Contents

1.0 Overview
1.1 Grid Utility
1.2 SFM Parameter Setup

1.0 Overview

The Grid Utility program provides an option to add grid information to the OPM database. Grid data saved in OPM includes both the Master Grid and Processing Grids. The Save in OPM option in Grid Utility ensures that corresponding Master Grid and Processing Grid information reside in the OPM database. When a Processing Grid is added to OPM, Grid Utility first checks that the Master Grid resides in OPM. If necessary, Grid Utility writes the Master Grid information to OPM before saving the Processing Grid information. The Project Manager directory tree lists all the Master Grid names saved in the OPM (Figure 1). Grid information can be read from the OPM database by the GRID_DEFINE and VEL_GRID_DEFINE SFMs. Master Grid Definition parameter values can be imported from the OPM database into the SFM setup by using the Parameter Analysis submenu option.

Figure 1. Project Manager

1.1 Grid Utility

Use Grid Utility to select a grid to be saved to the OPM database. If a Processing Grid is selected, Grid Utility checks that corresponding Master Grid information resides in the OPM database.
If the Master Grid is not in OPM, then Grid Utility saves both the Master Grid and Processing Grid information.To save a grid to the OPM, the user:•Selects the grid•Choose the Save in OPM option from the File menu•Provide the grid name. This name will appear in the Project Manager directory tree.1.2 SFM Parameter SetupThe user can import grid information from the OPM database into the SFM parameter setup. The GRID_DEFINE and VEL_GRID_DEFINE SFMs contain a feature to activate Parameter Analysis on the right mouse button (MB3) popup menu the SeisFlow Editor (Figure 2).To pull grid information into the SFM setup:•From the SeisFlow parameter setup area for the GRID_DEFINE (or VEL_GRID_DEFINE) SFM, open the Master Grid Definition parameter set.•On the SFM box in the flow area, click the right mouse button (MB3) to show the popup menu (Figure 2).•Select Parameter Analysis to cascade the submenu.•Choose the Populate Master Grid from OPM option.• A Master Grids in OPM dialog box will show the names of all Master Grids stored in the OPM database. Choose the desired Master Grid from the list and click the OK button (Figure 3). The parameter setup will fill with the grid values from OPM (Figure 4).The active Parameter Analysis feature provides two options to populate grid definition parameters from the OPM: one option populates a Master Grid, the other option populates a Processing Grid.Figure 2.The Master Grids in OPM dialog box lists the master grid names that appear in the ProjectManager directory tree. Expand a Master Grid name to show the Processing Grid list.Master Grids in OPMThe parameter set in the SeisFlow Editor accepts the input values from the OPM.Setup for Master Grid Definition Parameter Set。

[Computer Systems & Applications] Information services: recommended hot keywords in journal articles, by year (2014-07-26)

[Table residue: the original lists, for each year (2008, 2010, ...), some fifty research keywords with a recommendation index each. Only the leading entries are recoverable: web services (4), service-oriented architecture (3), enterprise service bus (3), grid (2), XML (2), SOA (2), QoS (2); the remaining keywords (domain engineering, privacy protection, semantic web services, software architecture, mobile ad hoc networks, and others) each carry an index of 1.]

GridTime 3000 GNSS Time Server System Release Notes

GridTime™ 3000 GNSS Time Server System Release Notes

Introduction

This System Release Note (SRN) provides information about the GridTime™ 3000 GNSS Time Server version 1.0r1.0 (release version 1.0r1.0), released in November 2022. Release v1.0r1.0 is the official release for the GridTime 3000 v1.0r1.0. A user guide for this release (v1.0r1.0) is available along with this SRN. The software release is available at the Microchip Support web page.

Table of Contents

Introduction
1. Summary of Features
1.1. New Features in v1.0r1.0
1.2. Known Issues
2. Upgrading the Firmware
2.1. Upgrading the Firmware
3. Contacting Technical Support
4. References
5. Revision History
The Microchip Website
Product Change Notification Service
Customer Support
Microchip Devices Code Protection Feature
Legal Notice
Trademarks
Quality Management System
Worldwide Sales and Service

1. Summary of Features

This section describes the new features and known issues included with this software release notice.

1.1 New Features in v1.0r1.0

The following are the latest features released in v1.0r1.0 for GridTime 3000:
• Authentication
  – Support for RFC 2865: Remote Authentication Dial-In User Service (RADIUS)
  – Support for RFC 4511: Lightweight Directory Access Protocol (LDAP)

1.2 Known Issues

The following tables list the known issues in v1.0r1.0, their operating constraints, and workarounds.

1.2.1 Additional Device Functions

The following table lists the known issues with the general function of the device, including any issues related to output suppression, daylight savings rollover, licenses, or expansion oscillator usage.

Table 1-1. Known Issues: General Functions of the Device

1.2.2 Clock Management Tool (CMT)

The following table lists the known issues with the clock management tool, including any field validation errors and issues with any buttons or configurable options.

Table 1-2.
Known Issues: CMT

1.2.3 Authentication

The following table lists the known issues with the authentication of users through a local or external authentication provider and any issues with access control.

Table 1-3. Known Issues: Authentication

1.2.4 Simple Network Management Protocol (SNMP) and Syslog

The following table lists the known issues with SNMP, including any issues with SNMP authentication, SNMP traps, or SNMP configuration.

Table 1-4. Known Issues: SNMP

1.2.5 IPv4 Ethernet

The following table lists the known issues with the network ports, including both the administrator and timing ports, their interfacing protocols, and behavior (DHCP, Link Local, ARP, VLANs, and so on).

Table 1-5. Known Issues: Network Ports

1.2.6 Precision Time Protocol

The following table lists the known issues with the precision time protocol input or output, including all known issues with PTP profile implementation.

Table 1-6. Known Issues: Precision Time Protocol Input or Output

1.2.7 Network Time Protocol

The following table lists the known issues with the network time protocol input and output implementation.

Table 1-7. Known Issues: Network Time Protocol Input and Output

1.2.8 Parallel Redundancy Protocol

The following table lists the known issues with the parallel redundancy protocol implementation.

Table 1-8. Known Issues: Parallel Redundancy Protocol Implementation

1.2.9 T1/E1/J1

The following table lists the known issues with T1/E1/J1 outputs.

Table 1-9. Known Issues: T1/E1/J1 Outputs

1.2.10 Fixed Frequency

The following lists the known issues with the fixed frequency outputs. No issues found.

1.2.11 DCLS IRIG-B and AM IRIG-B

The following table lists the known issues with DCLS IRIG-B or AM IRIG-B.

Table 1-10. Known Issues: DCLS IRIG-B or AM IRIG-B

1.2.12 Programmable Pulse Outputs

The following table lists the known issues with the programmable pulse outputs.

Table 1-11.
Known Issues: Programmable Pulse Outputs

1.2.13 DCF77 Output

No issues found.

1.2.14 Serial String

The following table lists the known issues with the RS232 and RS422 serial string outputs.

Table 1-12. Known Issues: RS232 and RS422 Serial String Outputs

1.2.15 Alarms

The following table lists the known issues with the device alarm generation or conditions.

Table 1-13. Known Issues: Device Alarm Generation or Conditions

1.2.16 Global Navigation Satellite System (GNSS)

No issues found.

2. Upgrading the Firmware

This section describes how to upgrade the GridTime 3000 firmware.

2.1 Upgrading the Firmware

This section describes how to upgrade the GridTime 3000's firmware using the CMT. The CMT is built into the device and can be accessed by navigating with a web browser to either of the IP addresses shown on the LCD.

Note: Only an administrator user can upgrade the GridTime 3000's firmware.

1. Log in to the CMT as an administrator user.
2. Click the Settings icon.
3. Click the Firmware icon to open the firmware window.
4. Click CHOOSE FILE on the right pane to select the new firmware image.
5. After selecting the firmware image from your local system, click on the image and click Open.
6. Upload the image to the device. The upload tracking percentage of the image to the device is shown.
7. Once the percentage has reached 100%, an APPLY FIRMWARE UPDATE button appears. Click this button to upload the device's firmware.
8. An INSTALLING PLEASE WAIT... icon appears. Then, the GridTime 3000 reboots.
9. If the upgrade is successful, then the device shows the new firmware on its LCD during boot up and in the firmware tab of the CMT, once you have logged out and then logged back in.

If the firmware version remains the same, then this might indicate a potential upgrade failure. Retry the upgrade and, if the upgrade failure persists, contact technical support. For details, see 3. Contacting Technical Support.
3. Contacting Technical Support

If you encounter any difficulty in installing this firmware update or operating the product, contact Microchip Frequency and Time Division (FTD) Services and Support at:

• USA Call Center (including Americas, Asia and Pacific Rim):
  Microsemi FTD Services and Support
  3870 N. First Street, San Jose, CA 95134
  Toll-free in North America: 1-888-367-7966
  Telephone: 408-428-7907
  Email: *****************************
  Internet: /en-us/products/synchronization-and-timing-systems

• Europe, Middle East, Africa (EMEA) Call Center:
  Microsemi FTD Services and Support EMEA
  Altlaufstrasse 42, 83635 Hoehenkirchen-Siegertsbrunn, Germany
  Telephone: +49 700 3288 6435
  Fax: +49 8102 8961 533
  Email: ***************************** and ****************************

• Australia and New Zealand:
  Tekron International Ltd
  Level 1, 47 The Esplanade, Lower Hutt, 5022, New Zealand
  Telephone: +64 566 7722
  Toll-free in Australia: 1 800 506 311
  Email: *************************

4. References

• GridTime™ 3000 GNSS Time Server SRN (this document)
• GridTime™ 3000 GNSS Time Server Installation Manual (DS00004572A)
• GridTime™ 3000 SNMP MIB

This SRN and the User Guide are provided in PDF format. The SNMP MIB is provided as an ASCII text file. Note: To view and print PDFs, download Adobe Acrobat Reader from /reader/.

5. Revision History

The revision history describes the changes that were implemented in the document. The changes are listed by revision, starting with the most current publication.

The Microchip Website

Microchip provides online support via our website at /. This website is used to make files and information easily available to customers.
© 2022, Microchip Technology Incorporated and its subsidiaries. All Rights Reserved.
ISBN: 978-1-6683-1533-0

Developing Data Grid Applications with OGSA-DAI and Flex

Abstract: OGSA-DAI is an open-source data grid middleware for simplifying data access and integration. It supports data sources such as relational databases, XML databases and file/directory systems, but it does not provide a presentation-layer solution.

This paper discusses a framework, designed with OGSA-DAI combined with Flex, for integrating multiple heterogeneous databases.

Keywords: OGSA-DAI; Flex; heterogeneous databases
CLC number: TP393.09  Document code: A  Article ID: 1672-7800(2011)05-0094-02

0 Introduction

OGSA-DAI (Open Grid Service Architecture Data Access and Integration) is often used to implement access to, and integration of, distributed heterogeneous relational databases.

However, OGSA-DAI provides no presentation-layer solution; application developers are free to choose their own and to interact with OGSA-DAI through Web Services.

Flex is currently one of the most popular RIA (Rich Internet Application) solutions; compared with traditional web applications, it offers far better interactivity and richer visual presentation.

Combining OGSA-DAI with Flex therefore plays to the strengths of both and can greatly improve development efficiency.

1 Characteristics of OGSA-DAI and Flex

1.1 Flex

Flex development is event-driven, and Flex offers developers a rich library of components for displaying data and interacting with users.

Flex components are simple to deploy, secure, flexibly extensible, richly interactive and easy to program, freeing programmers from tedious interface debugging and speeding up the development of web application systems.

1.2 OGSA-DAI

OGSA-DAI effectively hides the heterogeneity of the underlying platforms and provides uniform access to heterogeneous databases. Its main characteristics are: (1) configurability, i.e., "plug-and-play" data sources; (2) extensibility, improving the transparency of data access and easing maintenance and updates; (3) a global database view, which integrates the shared information of multiple heterogeneous databases in the grid environment into a single global view, so that users access multiple heterogeneous databases as if they were a single database, achieving transparent distributed access; and (4) a unified query model.
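The "global database view" characteristic above can be sketched in a few lines: two heterogeneous sources (one relational-style, one XML) are wrapped behind a single query interface so the caller sees one logical table. This is only a toy illustration of the idea, not the OGSA-DAI API; every name below is invented for the sketch.

```python
import xml.etree.ElementTree as ET

# Two "heterogeneous" sources: rows from a relational table, and XML documents.
relational_rows = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
xml_records = ['<user id="3"><name>carol</name></user>']

def wrap_relational():
    # Wrapper for the relational source: already row-shaped.
    return list(relational_rows)

def wrap_xml():
    # Wrapper for the XML source: normalize each document into a row.
    out = []
    for doc in xml_records:
        node = ET.fromstring(doc)
        out.append({"id": int(node.get("id")), "name": node.findtext("name")})
    return out

def global_view():
    """Unified view: the caller cannot tell which source a row came from."""
    return sorted(wrap_relational() + wrap_xml(), key=lambda r: r["id"])

print([r["name"] for r in global_view()])  # → ['alice', 'bob', 'carol']
```

A presentation layer such as Flex would then query only this unified interface (over Web Services) rather than each underlying database.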

An LDAP-Based Grid Monitoring System

Cha Li; Xu Zhiwei; Lin Guozhang; Liu Yushu; Liu Donghua; Li Wei
Journal: Journal of Computer Research and Development, 2002, 39(8), pp. 930-936
Affiliations: Department of Computer Science and Engineering, Beijing Institute of Technology, Beijing 100081; Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080
Language: Chinese. CLC number: TP393.4

Abstract: For the grid, a high-performance distributed computing environment, monitoring the state of computing resources is essential: monitoring allows faults to be detected and removed in time, and analysis of monitoring data reveals performance bottlenecks, providing a reliable basis for system tuning. GridMon is a distributed grid monitoring system based on the LDAP directory service. Departing from the usual practice of storing no dynamic information in a directory service, it flexibly combines static and dynamic information within the directory hierarchy, reducing the number of client-server interactions, and it uses middleware to solve the security and interface problems of accessing monitored hosts directly. Using the LDAP directory hierarchy, a basic tree structure of the grid system is built. The concepts of grid monitoring objects and monitoring events, and their representations, are proposed, forming a complete grid monitoring structural model. The prototype grid monitoring system implemented from this model, GridMon, is discussed in detail. Finally, starting from the structural differences between grids and cluster systems, the key criteria for evaluating a grid monitoring system are set out, and on that basis, together with its application prospects, GridMon is objectively evaluated.

Related literature:
1. Design of a grid information service based on Web Service and LDAP (Chu Rui, Huang Yongzhong, Hu Jianwei)
2. Application of security technology in an LDAP-based grid computing environment (Xu Xiu, Mao Yanchun, Zhang Shen)
3. Design and implementation of an LDAP-based resource monitoring system for grid computing pools (Han Zhiwei, Wang Bingfeng, Huang Qi)
4. An LDAP- and agent-based network performance monitoring system for grid environments (Pan Jingshan, Wang Yinglong, Zhang Ruichao, Wang Meiqin, Wang Shaohui)
5. Research on PKI- and LDAP-based photonic grid authentication mechanisms (Wei Jin, Ding Jinjin, Shao Haixia)
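The abstract's central idea, keeping static and dynamic resource information in one directory hierarchy so that a single lookup replaces several client-server round trips, can be sketched without a live LDAP server. The tree, entry names, and attributes below are invented for illustration and stand in for an actual LDAP deployment.

```python
# A dict tree standing in for an LDAP hierarchy: each leaf entry carries
# both static attributes (hardware facts) and dynamic ones (current load).
directory = {
    "o=grid": {
        "ou=clusterA": {
            "cn=node1": {"static": {"cpuModel": "x86", "memMB": 2048},
                         "dynamic": {"load": 0.4, "up": True}},
            "cn=node2": {"static": {"cpuModel": "x86", "memMB": 1024},
                         "dynamic": {"load": 0.9, "up": True}},
        }
    }
}

def search(tree, predicate, path=()):
    """Walk the tree; return DN-like paths of entries matching predicate."""
    hits = []
    for rdn, entry in tree.items():
        if isinstance(entry, dict) and "static" in entry:
            if predicate(entry):
                hits.append(",".join((rdn,) + path))
        elif isinstance(entry, dict):
            hits.extend(search(entry, predicate, (rdn,) + path))
    return hits

# One query answers a question needing both static and dynamic data,
# avoiding a second round trip to fetch the dynamic part separately.
idle_big_nodes = search(directory,
                        lambda e: e["static"]["memMB"] >= 2048
                        and e["dynamic"]["load"] < 0.5)
print(idle_big_nodes)  # → ['cn=node1,ou=clusterA,o=grid']
```

In a real GridMon-style system the dynamic attributes would be refreshed by middleware on the monitored hosts rather than stored in a plain in-memory structure.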


Grid Market Directory: A Web Services based Grid Service Publication Directory

Jia Yu, Srikumar Venugopal, and Rajkumar Buyya Grid Computing and Distributed Systems (GRIDS) Laboratory Department of Computer Science and Software Engineering The University of Melbourne, Australia Email: {jiayu, srikumar, raj}@cs.mu.oz.au

Abstract

As Grids are emerging as the next generation of service-oriented computing platforms, they need to support a Grid economy that helps manage the supply of and demand for resources and offers an economic incentive to Grid resource providers. To enable this Grid economy, a market-like Grid environment is needed, including an infrastructure that supports the publication of services and their discovery. As part of the Gridbus project, we have proposed and developed a Grid Market Directory (GMD) that serves as a registry for high-level service publication and discovery in Virtual Organisations.

1. Introduction

Computational Grids [1] are emerging as the next-generation computing platform and global cyber-infrastructure for solving large-scale problems in science, engineering and business. They enable the sharing, exchange, discovery, selection and aggregation of geographically distributed, heterogeneous resources, such as computers, data sources, visualization devices and scientific instruments. As the Grid comprises a wide variety of resources owned by different organizations with different goals, resource management and quality-of-service provision in Grid computing environments are challenging tasks. Grid economy [9] facilitates the management of supply and demand for resources. It also enables the sustained sharing of resources by providing an incentive for Grid Service Providers (GSPs).

It has been envisioned that Grids enable the creation of Virtual Organizations (VOs) [19] and Virtual Enterprises (VEs) [18] or computing marketplaces [20]. A group of participants with a common objective can form a VO. Organizations, businesses or individuals can participate in one or more VOs by sharing some or all of their resources. To realize this vision, Grids need to support diverse infrastructures/services [19], including an infrastructure that allows: (a) the creation of one or more VO registries with which participants can register themselves; (b) participants to register themselves as GSPs and to publish the resources or application services that they are interested in sharing; (c) GSPs to register themselves in one or more VOs and specify the kind of resources/services that they would like to share in the VOs of their interest; and (d) the discovery of resources/services and their attributes (e.g., access price and constraints) by higher-level Grid applications or services such as Grid resource brokers. These services are among the fundamental requirements for the realisation of Grid economy.
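The four registry capabilities (a) through (d) can be illustrated with a minimal in-memory sketch. The class, method and field names below are invented for the illustration; they are not the GMD's actual interface.

```python
# Hypothetical in-memory sketch of the VO registry operations (a)-(d)
# described above; names and fields are illustrative, not the GMD API.

class VORegistry:
    def __init__(self):
        self.vos = {}       # VO name -> set of provider names
        self.services = []  # published service entries

    def create_vo(self, name):                  # (a) create a VO registry
        self.vos.setdefault(name, set())

    def register_provider(self, vo, provider):  # (b)/(c) a GSP joins a VO
        self.vos[vo].add(provider)

    def publish(self, vo, provider, service_type, price, location):
        # (b) publish a shared service together with its attributes
        self.services.append({"vo": vo, "provider": provider,
                              "type": service_type, "price": price,
                              "location": location})

    def discover(self, vo, service_type):       # (d) attribute-based discovery
        return [s for s in self.services
                if s["vo"] == vo and s["type"] == service_type]

registry = VORegistry()
registry.create_vo("physics-vo")
registry.register_provider("physics-vo", "uni-melb")
registry.publish("physics-vo", "uni-melb", "compute", 2.5, "grid.example.org")
matches = registry.discover("physics-vo", "compute")
print(matches[0]["provider"])  # → uni-melb
```

A resource broker playing the consumer role would call only `discover`, then rank the returned entries by its own QoS criteria.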

Several Grid economy models drawn from conventional markets have been proposed for organizing the Grid market [9]: commodity, posted-price, bargaining, tender/contract and auction models. In these models, a trusted third party, a Service Publication Directory, is needed as a central service linking resource providers and consumers. For example, in the commodity model, resource providers publish their services to a directory, giving the service location, service type, service price and so on, while resource brokers query the directory and select a suitable service according to the quality-of-service (QoS) requirements (e.g., deadline and budget) of their delegating consumers.
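The broker-side selection step of the commodity model just described can be sketched as follows. The service entries, their fields, and the estimated-time heuristic are made-up examples, not the actual GMD schema or broker algorithm.

```python
# Illustrative broker-side selection for the commodity model: pick the
# cheapest published service that meets the consumer's deadline and budget.

def select_service(entries, budget, deadline):
    """Return the cheapest entry within budget whose estimated completion
    time meets the deadline, or None if no entry is feasible."""
    feasible = [e for e in entries
                if e["price"] <= budget and e["est_time"] <= deadline]
    return min(feasible, key=lambda e: e["price"]) if feasible else None

# Entries a broker might retrieve from the directory (hypothetical values).
published = [
    {"location": "hostA.example.org", "type": "compute", "price": 4.0, "est_time": 30},
    {"location": "hostB.example.org", "type": "compute", "price": 2.0, "est_time": 90},
    {"location": "hostC.example.org", "type": "compute", "price": 3.0, "est_time": 45},
]

choice = select_service(published, budget=3.5, deadline=60)
print(choice["location"])  # → hostC.example.org (cheapest feasible option)
```

hostA is over budget and hostB misses the deadline, so the broker settles on hostC even though hostB is cheaper: both QoS constraints must hold before price is compared.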

Figure 1: Gridbus GMD Architecture

[Figure residue; recoverable component labels: the Grid Market Directory (GMD) comprises a GMD Portal Manager and a GMD Query Webservice, hosted on a web server (Tomcat) over a Grid Service Info store (RDBMS); a Provider web client publishes and manages entries; a Consumer web client browses via the portal; a Consumer Grid Resource Broker queries the GMD via SOAP+XML and submits jobs to Grid nodes.]

In this paper, we propose a service publication and discovery registry called the Grid Market Directory (GMD) that meets the above requirements. The GMD has been developed using emerging web services technologies as part of the Gridbus Project, which is developing technologies that provide end-to-end support for resource allocation based on the QoS requirements of resource providers and consumers.
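Since the GMD is queried over web services, a consumer request is ultimately a SOAP+XML message. The sketch below builds such a message with the Python standard library; the operation name (`queryServiceByType`), the element names, and the namespace are invented for illustration and may differ from the GMD's real WSDL-defined interface.

```python
# Build the kind of SOAP 1.1 request a consumer might send to a directory
# service. Operation and namespace are placeholders, not the real GMD API.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
GMD_NS = "http://gmd.example.org/query"  # placeholder namespace

def build_query(service_type):
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{GMD_NS}}}queryServiceByType")
    ET.SubElement(op, f"{{{GMD_NS}}}serviceType").text = service_type
    return ET.tostring(env, encoding="unicode")

request = build_query("compute")
# Round-trip parse to show the message is well-formed XML.
parsed = ET.fromstring(request)
print(parsed.find(f".//{{{GMD_NS}}}serviceType").text)  # → compute
```

In practice the serialized envelope would be POSTed over HTTP to the GMD Query Webservice endpoint, and the response body parsed the same way.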

The rest of this paper is organized as follows. The related work within Grid and Web services communities is presented in Section 2. The detailed system architecture and design issues are described in Section 3. Section 4 describes technologies that are used in the current implementation. A use-case study is presented in Section 5. We conclude in Section 6 with a discussion of current system status and future work.
