Event dissemination service for pervasive computing


Introduction to EventValues

1. What is EventValues?
EventValues is a versatile programming library that lets developers pass information between different components or functions within an application. It provides a framework for event-based communication and enables data sharing across various parts of the code.

2. Why is EventValues important?
Communication and data sharing are essential in software development. As applications grow more complex, so does the need to transmit information reliably between different parts of the program. EventValues simplifies this by providing a unified approach to event-based communication, making it easier for developers to manage data flow and coordination within their application.

3. How does EventValues work?
EventValues operates on the principle of event-driven programming, where events trigger specific actions or updates within the application. The library uses event handlers, which are functions that execute in response to specific events. These event handlers can access and modify the data contained in the EventValues object.

4. What are the key features of EventValues?
- Flexibility: Developers can define custom events and associate them with specific data. This flexibility enables seamless integration and synchronization of various parts of the application.
- Data sharing: Developers can store and share data throughout the application using a centralized data store. This eliminates the need for explicit data transfers between components, reducing dependencies and improving overall code organization.
- Asynchronous communication: Different parts of the application can operate independently and exchange data asynchronously. This is particularly useful in user interfaces, where simultaneous processing of events is common.
- Scalability: EventValues scales easily to applications of varying complexity. As new components are added, developers can define additional events and associated data to meet evolving requirements.

5. How to use EventValues?
To start using EventValues, follow these steps (a minimal code sketch appears after the conclusion below):
Step 1: Define EventValues objects. Begin by defining one or more EventValues objects that will serve as data stores for the application. These objects hold the data to be shared across different parts of the program.
Step 2: Create event handlers. Define the event handlers that should execute in response to specific events. These handlers have access to the relevant EventValues objects and can read, modify, or update the stored data.
Step 3: Trigger events. Trigger events wherever necessary in the application code. This signals the associated event handlers to execute and perform the required actions.
Step 4: Handle events. Implement the logic within each event handler to process the data stored in the EventValues objects. This may involve retrieving data, modifying it, or transmitting it to other components.
Step 5: Update data. As the application progresses, modify the data contained in the EventValues objects whenever necessary. This may involve updating values, adding new entries, or deleting existing ones.

6. What are the advantages of using EventValues?
- Enhanced code organization: EventValues promotes modular and loosely coupled code by facilitating the sharing and communication of data between components. This improves code organization and makes the application easier to maintain and extend.
- Simplified debugging: Developers can focus on specific events and their associated data, reducing the complexity of debugging and easing the identification of potential issues.
- Better collaboration: Multiple developers can work on different parts of the application simultaneously without worrying about data dependencies. This improves collaboration within development teams and helps deliver projects more efficiently.
- Increased reusability: By decoupling components through event-based communication, EventValues promotes code reusability. Developers can create components that are adaptable to different contexts and easily integrate them into other projects.
- Improved performance: Asynchronous communication and event-driven processing can enhance application performance, as different parts of the code can execute concurrently without blocking each other.

In conclusion, EventValues provides a powerful and flexible framework for event-driven communication and data sharing in software applications. Its ease of use, scalability, and ability to simplify code organization make it an invaluable tool for developers seeking to enhance their application's functionality and maintainability.
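The article describes EventValues only in general terms and never shows its actual API, so the following is a minimal sketch of the five steps from section 5, assuming a hypothetical EventValues class with on/trigger methods; all names are illustrative:

```python
# Minimal sketch of the EventValues pattern described above.
# The EventValues class here is hypothetical; the article does not
# specify the library's real API.

class EventValues:
    def __init__(self):
        self._data = {}      # centralized data store (Step 1)
        self._handlers = {}  # event name -> list of handlers

    def on(self, event, handler):
        """Register an event handler (Step 2)."""
        self._handlers.setdefault(event, []).append(handler)

    def trigger(self, event, **updates):
        """Update the shared data and signal handlers (Steps 3-5)."""
        self._data.update(updates)
        for handler in self._handlers.get(event, []):
            handler(self._data)

# Usage: two decoupled components share data through events.
store = EventValues()
store.on("user_logged_in", lambda data: print("Hello,", data["user"]))
store.trigger("user_logged_in", user="alice")  # prints: Hello, alice
```

Note how the components never reference each other directly: the handler and the trigger site are coupled only through the event name and the shared store, which is the loose coupling the article credits for easier maintenance and reuse.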

Priority Datasheet

Priority

Power a smarter, more flexible way to work that maximizes fast, frequent, high-value outcomes from your Citrix solutions. Priority accelerates the results that drive your business.

We partner with you to unlock value

Accelerate outcomes
Our Customer Success team partners with you to help your organization maximize its return on your Citrix investment. We guide you towards a path of accelerated and continuous value by monitoring, measuring and optimizing your journey every step of the way.

Improve time-to-value
Your assigned Technical Account Manager will become a trusted advisor, advocate and partner in your success. They'll regularly make recommendations that optimize your environment and operations to leverage best practices and amplify results.

Improve uptime
Get fast, proactive support that provides continuous insights and recommendations to prevent unplanned downtime and reduce technical hiccups. Plus, leverage faster reactive support to get you back up and running in business-critical moments.

Optimize ROI
We maximize the return that your Citrix solutions generate via continual performance and supportability improvements across your infrastructure and operations. IT leaders get open access to a wealth of resources, tools and advice that chart a path towards compounding value.

What Priority delivers
Citrix Priority service injects proactive advice, expertise and support into the heart of your operations. It's a full-time advocate for your success. From issue prevention to faster response times and speedy resolution, we continually optimize your solutions to help your users, your IT team and your business win with Citrix.

What you get with Priority
All the benefits of Select, plus the following additional benefits.

Assigned technical account management
Partner with your assigned Technical Account Manager to define goals, allocate resources and co-create the high-value outcomes that drive your success.

Priority routing and queuing
Get direct access to Priority Technical Support Engineers who know your environment inside out, for rapid response to Severity 1 issues within 15 minutes and resolution in less than 6 hours.

Critical situation expedition
Your Priority Critical Situation Manager handles your most critical cases to accelerate time to resolution. They take ownership of Severity 1 issues and provide proactive updates for increased transparency and faster restoration.

Scheduled support
Get up to 40 hours of scheduled support for any migration, update and implementation you need. We empower your engineers to enact changes at convenient off-peak hours to minimize disruption and maximize business performance.

Prioritize your potential

Leverage high-value expertise
90% of Priority customers cite their Technical Account Manager as their most valued aspect of Priority.

Get back to winning, faster
91% of IT organizations who upgraded to Priority renewed due to increased uptime and faster issue resolution.

Realize rapid ROI
90% of Priority customers realized direct value in the first six months.

Improve your business outcomes
73% of customers improved business success measures and accelerated utilization since upgrading to Priority.

HPE SmartMemory Data Sheet

Data sheet: HPE SmartMemory

Choose the right memory with HPE SmartMemory

As the insatiable demand for applications, data, and digital content grows exponentially, traditional server infrastructure, the digital foundation of business and society, has become resource constrained. At the same time, businesses demand greater performance and maximum uptime. IT trends such as server virtualization, cloud computing, and high-performance computing have increased the average gigabytes per server six-fold in the last six years. As a result, DRAM manufacturers are increasing chip component densities to support higher memory capacities. For example, today a single 4 GB DRAM chip contains more than 4 billion memory cells, and a single 32 GB DDR3 memory module has more than 288 billion memory cells.

Choose the right memory
The combination of increased memory demand and component complexity has raised the stakes, as businesses require continuous availability of IT infrastructure. Memory is a critical system component, significantly defining the system's reliability and performance, and increasingly the server and data center power footprint. Choosing the right memory is therefore key to ensuring utmost reliability and highest performance, and to delivering a faster return on your IT investment.

As one of the largest DRAM buyers, HPE chooses only the highest quality memory. HPE SmartMemory is performance tested and tuned on HPE ProLiant servers using proprietary software that simulates extreme operating environments and conditions. HPE Qualified Options means every genuine HPE DIMM has been through our rigorous qualification process that extends beyond standard industry practice to help ensure compatibility, performance, and reliability.

Achieve reliability, performance, and manageability with HPE SmartMemory
HPE SmartMemory is a unique technology introduced for HPE ProLiant Gen8 and Gen8 v2 servers. Unlike third-party memory, HPE SmartMemory authenticates whether memory has passed HPE's rigorous qualification and test process, providing you with the highest quality, genuine HPE SmartMemory. More importantly, verification of HPE SmartMemory unlocks certain performance and high-efficiency features optimized for HPE ProLiant Gen8 and Gen8 v2 servers. In addition, HPE SmartMemory provides future enhanced support through HPE Active Health System and manageability software.

Key features and benefits
HPE SmartMemory is ideal for ProLiant Gen8 and Gen8 v2 customers who are looking to extract all the memory performance, dependability, and power savings that the ProLiant Gen8 server is designed to deliver. Memory plays an increasingly large part in the server's power consumption, and choosing the most efficient memory is a critical component in reducing your data center's overall power and cooling requirements. These savings translate to reduced operating cost and a faster return on investment (ROI), freeing up IT budget spent on power and cooling.

HPE SmartMemory introduces the new 1866 MHz standard voltage and 1600 MHz low voltage SmartMemory, designed to provide better performance and capacity at a competitive price.

Reduce power without sacrificing performance
HPE is committed to helping you achieve the maximum benefit per watt from your IT infrastructure. With HPE SmartMemory, power utilization is up to 20 percent less when compared to third-party memory or other OEMs' memory.¹
• 1600 MHz low voltage memory is available in all sizes for both RDIMM and UDIMM technologies.
• An industry first: HPE has introduced a 24 GB DDR3-1333 Registered DIMM (RDIMM) at 1.35 V.
• While the industry supports DDR3-1866 RDIMM at 1.5 V at one DIMM per channel, our ProLiant Gen8 servers support DDR3-1866 RDIMM up to two DIMMs per channel at 1866 MT/s running at 1.5 V. Our ProLiant Gen8 v2 servers support DDR3-1600 LV RDIMM up to two DIMMs per channel at 1600 MT/s running at 1.35 V. This equates to up to 20 percent less power at the DIMM level with no performance penalty.²
• HPE SmartMemory 1.35 V DDR3-1600 registered memory is engineered to achieve the same performance level as 1.5 V memory. This also simplifies the HPE memory portfolio, making it easier to select the right memory.
• HPE HyperCloud SmartMemory 32 GB 1.5 V DDR3-1333 registered memory is engineered to achieve 3 DPC @ 1333 in HPE ProLiant DL380p Gen8 servers.

Better performance
HPE SmartMemory supports DDR3 unbuffered ECC DIMMs (UDIMMs) at 2 DIMMs per channel at 1866 MT/s while the industry supports UDIMM at 2 DIMMs per channel at 1600 MT/s, or 25 percent greater bandwidth.² In addition, DDR3 UDIMMs are capable of low voltage operation, with up to 20 percent less power usage:
• ****************
• ****************

HPE SmartMemory now supports DDR3-1866 LRDIMMs:
• ***************
• ***************
• ***************

HPE SmartMemory now supports DDR3-1866 RDIMMs:
• ***************
• ***************
• ***************

HPE SmartMemory supports low-voltage DDR3-1600 RDIMMs at the same memory bandwidth as 1.5 V DDR3-1600 RDIMMs, with up to 20 percent less power usage:
• ****************
• ****************
• ***************

HPE SmartMemory supported on HPE ProLiant Gen8 v2 Intel® Xeon® E5-2600 Series Processor Family
With the latest generation Intel® series based ProLiant Gen8 and Gen8 v2 servers, DDR3 provides additional features:
• 24 GB DDR3-1333 RDIMM 1.35 V support for maximum bandwidth on blade servers
• New 32 GB Load Reduced DIMM (LRDIMM) increases capacity by 50 percent, enabling HPE ProLiant Gen8 and Gen8 v2 Intel 2-socket servers to support up to 768 GB
• DDR3-1600 Registered DIMMs support either low-voltage 1.35 V or 1.5 V with no performance penalty
• HPE Advanced Memory Error Detection Technology increases memory-related uptime by up to 35 percent by pinpointing errors more likely to cause unplanned downtime

1 "HPE Proprietary Software Test," August 2011.
2 "HPE Proprietary Software Test," September 2011.
3 AMD 1 TB max capacity: AMD population rules require 4-rank DIMMs in the center slot only for 3 DPC servers.

DDR3 memory comparison for HPE ProLiant Gen8 and Gen8 v2 servers

| | RDIMMs | LRDIMM | UDIMMs | HDIMM |
|---|---|---|---|---|
| Maximum DIMM capacity | 24 GB | 32 GB | 8 GB | 32 GB |
| Maximum server capacity (AMD) | 1 TB max³ (48 slots; 32 GB quad rank DIMMs) | N/A | 64 GB (16 slots; 4 GB dual rank DIMMs) | N/A |
| Maximum server capacity (Intel) | 4-socket: 2 TB max (64 slots; 32 GB quad rank DIMMs) | 2-socket: 768 GB (24 slots; 32 GB LRDIMMs) | 48 GB max (12 slots; 4 GB dual rank DIMMs) | 2-socket (DL360p and DL380p Gen8 only): 384 GB max |
| Maximum number of DIMMs/channel | 3 dual rank | 3 quad rank (LRDIMM only) | 2 dual rank | 3 dual rank |
| Low power option | 4 GB, 8 GB, 16 GB, 24 GB, 32 GB | 32 GB | 2 GB, 4 GB, 8 GB | 32 GB only |
| Address error detection | Yes | Yes | Yes | Yes |

RDIMMs
647893-B21: HPE 4 GB (1x4 GB) Single Rank x4 PC3L-10600R (DDR3-1333) Registered CAS-9 Low Voltage Memory Kit
647895-B21: HPE 4 GB (1x4 GB) Single Rank x4 PC3-12800R (DDR3-1600) Registered CAS-11 Memory Kit
647897-B21: HPE 8 GB (1x8 GB) Dual Rank x4 PC3L-10600R (DDR3-1333) Registered CAS-9 Low Voltage Memory Kit
647899-B21: HPE 8 GB (1x8 GB) Single Rank x4 PC3-12800R (DDR3-1600) Registered CAS-11 Memory Kit
647901-B21: HPE 16 GB (1x16 GB) Dual Rank x4 PC3L-10600R (DDR3-1333) Registered CAS-9 Low Voltage Memory Kit
672631-B21: HPE 16 GB (1x16 GB) Dual Rank x4 PC3-12800R (DDR3-1600) Registered CAS-11 Memory Kit
700404-B21: HPE 24 GB (1x24 GB) Three Rank x4 PC3L-10600R (DDR3-1333) Registered CAS-9 Low Voltage FIO Memory Kit
713981-B21: HPE 4 GB (1x4 GB) Single Rank x4 PC3L-12800R (DDR3-1600) Registered CAS-11 Low Voltage Memory Kit
713983-B21: HPE 8 GB (1x8 GB) Dual Rank x4 PC3L-12800R (DDR3-1600) Registered CAS-11 Low Voltage Memory Kit
731765-B21: HPE 8 GB (1x8 GB) Single Rank x4 PC3L-12800R (DDR3-1600) Registered CAS-11 Low Voltage Memory Kit
713985-B21: HPE 16 GB (1x16 GB) Dual Rank x4 PC3L-12800R (DDR3-1600) Registered CAS-11 Low Voltage Memory Kit
708637-B21: HPE 4 GB (1x4 GB) Single Rank x4 PC3-14900R (DDR3-1866) Registered CAS-13 Memory Kit
708639-B21: HPE 8 GB (1x8 GB) Dual Rank x4 PC3-14900R (DDR3-1866) Registered CAS-13 Memory Kit
731761-B21: HPE 8 GB (1x8 GB) Single Rank x4 PC3-14900R (DDR3-1866) Registered CAS-13 Memory Kit
708641-B21: HPE 16 GB (1x16 GB) Dual Rank x4 PC3-14900R (DDR3-1866) Registered CAS-13 Memory Kit

UDIMMs
647905-B21: HPE 2 GB (1x2 GB) Single Rank x8 PC3L-10600E (DDR3-1333) Unbuffered CAS-9 Low Voltage Memory Kit
647907-B21: HPE 4 GB (1x4 GB) Dual Rank x8 PC3L-10600E (DDR3-1333) Unbuffered CAS-9 Low Voltage Memory Kit
647909-B21: HPE 8 GB (1x8 GB) Dual Rank x8 PC3L-10600E (DDR3-1333) Unbuffered CAS-9 Low Voltage Memory Kit
713975-B21: HPE 2 GB (1x2 GB) Single Rank x8 PC3L-12800E (DDR3-1600) Unbuffered CAS-11 Low Voltage Memory Kit
713977-B21: HPE 4 GB (1x4 GB) Dual Rank x8 PC3L-12800E (DDR3-1600) Unbuffered CAS-11 Low Voltage Memory Kit
713979-B21: HPE 8 GB (1x8 GB) Dual Rank x8 PC3L-12800E (DDR3-1600) Unbuffered CAS-11 Low Voltage Memory Kit
708631-B21: HPE 2 GB (1x2 GB) Single Rank x8 PC3-14900E (DDR3-1866) Unbuffered CAS-13 Memory Kit
708633-B21: HPE 4 GB (1x4 GB) Dual Rank x8 PC3-14900E (DDR3-1866) Unbuffered CAS-13 Memory Kit
708635-B21: HPE 8 GB (1x8 GB) Dual Rank x8 PC3-14900E (DDR3-1866) Unbuffered CAS-13 Memory Kit

LRDIMMs
647903-B21: HPE 32 GB (1x32 GB) Quad Rank x4 PC3L-10600L (DDR3-1333) Load Reduced CAS-9 Low Voltage Memory Kit
708643-B21: HPE 32 GB (1x32 GB) Quad Rank x4 PC3-14900L (DDR3-1866) Load Reduced CAS-13 Memory Kit

HDIMMs
715166-B21: HPE 32 GB (1x32 GB) Dual Rank x4 PC3-10600H (DDR3-1333) HyperCloud CAS-9 Memory Kit

• RDIMMs offer higher capacities, memory register support, and address error detection
• UDIMMs offer lower price points, smaller capacities, and basic ECC
• LRDIMMs support maximum capacity
• New HyperCloud DIMMs (HDIMMs) support a maximum capacity of 768 GB at 3 DPC @ 1333, only for ProLiant DL380p Gen8 servers

Other resources
For more information on HPE SmartMemory, visit /products/memory.
For more information on qualified memory support for your HPE ProLiant server, visit HPE QuickSpecs: /products/quickspecs.
For the latest news, videos, podcasts, and updates on HPE Qualified Options, visit /go/hpqo.

HPE ProLiant DDR3 Memory Configuration Tool
HPE provides a unique Web-based tool to help configure DDR3 memory in your HPE ProLiant server. The latest version supports HPE ProLiant servers based on the Intel E5-2600 Series Processors. For more information, visit /products/servers/options/tool/hp_memtool.html.

HPE Factory Express
HPE Factory Express provides customization and deployment services along with your storage and server purchases. You can customize hardware to your exact specifications in the factory, helping to speed deployment. Visit /go/factoryexpress.

Customer technical training
Gain the skills you need with HPE ExpertOne training and certification. With HPE ProLiant training, you will accelerate your technology transition, improve operational performance, and get the best return on your HPE investment. Our training is available when and where you need it, through flexible delivery options and a global training capability. /learn/proliant

HPE Services
HPE Technology Services offers a set of consultancy, deployment, and support solutions designed to meet the lifecycle needs of your IT environments. HPE Care Pack Services for industry-standard servers includes support for qualified options at no additional cost. HPE Foundation Care Services delivers scalable support packages for HPE industry-standard servers and software. You can choose the type and level of service that is most suitable for your business needs. New to this portfolio is HPE Collaborative Support Service. This service offers a single point of contact for server problem diagnosis, hardware problem resolution, basic software problem diagnosis, fault isolation, and resolution. If the issue is with an HPE or supported third-party software product and cannot be resolved by applying known fixes, HPE contacts the third-party vendor and creates a problem incident on your behalf. If you are running business-critical environments, we offer HPE Proactive Care or HPE Critical Advantage. These services help you deliver high levels of application availability through proactive service management. All service options include HPE Insight Remote Support for secure remote monitoring, diagnosis, and problem resolution. Also included is the HPE Support Center, which provides access to the information, tools, and experts to support HPE business products. For more information, visit /services/proliant or /services/bladesystem.

HPE Financial Services
HPE Financial Services provides innovative financing and financial asset management programs to help you acquire, manage, and ultimately retire your HPE solutions. Visit /go/hpfinancialservices.

Learn more at /products/memory

© Copyright 2012–2013, 2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein. Intel and Intel Xeon are trademarks of Intel Corporation in the U.S. and other countries. AMD is a trademark of Advanced Micro Devices, Inc. 4AA3-9210ENW, October 2016, Rev. 5

A Unified Model of Internet Scale Alerting Services

Annika Hinze, Daniel Faensen
Institute of Computer Science
Freie Universität Berlin, Germany
faensen,hinze@inf.fu-berlin.de

Abstract. In recent years, alerting systems have gained increased attention, and several systems have been implemented. For the evaluation and cooperation of these systems, the following problems arise: the systems and their models are not compatible, and existing models are only appropriate for a subset of conceivable application domains. Due to modeling differences, a simple integration of different alerting systems is impossible. What is needed is a unified model that covers the whole variety of alerting service applications. This paper provides a unified model for alerting services that captures the special constraints of most application areas. The model can serve as a basis for an evaluation of alerting service implementations. In addition to the unified model, we define a general profile structure by which clients can specify their interest. This structure is independent of underlying profile definition languages. To eliminate drawbacks of the existing non-cooperating solitary services, we introduce a new technique, the Mediating Alerting Service (MediAS). It establishes the cooperation of alerting services in a hierarchical and parallel way.

1 Introduction

Electronic publication is becoming more and more popular. Since readers do not want to be forced to regularly search for information about new documents, there is a strong need for alerting services. An alerting service keeps its clients informed about new documents and events they are interested in. Currently, several implementations of alerting services exist for different application domains, such as Salamander [12], Siena [2], or Keryx [1]. The underlying models of these services do not meet all requirements found in applications suitable for wide area networks, such as digital libraries. Additionally, the models for existing alerting services mainly cover the applications the services are designed for. In this paper, we provide a unified model for alerting services that considers the special constraints of different application domains. The interests of clients are defined as so-called profiles. Since several profile definition languages are used in the different services, we give a general structure of profiles for alerting services, independent of the profile definition language. The large number of existing alerting services for a certain application domain has several drawbacks: users have to define their interest at different services in different ways, the available notifications are mostly bound to the supply of individual services, and information from different suppliers is not combined. We therefore introduce and propose the use of a Mediating Alerting Service (MediAS) that connects several suppliers and clients.

The remainder of this paper is structured as follows: In Section 2, we provide an overview of the structure and tasks of an alerting service. We introduce scenarios for the conceivable application domains of alerting services and name the problems with existing alerting services in detail. Section 3 introduces our architectural model for alerting services and Section 4 outlines our event-based model. In Section 5, we propose the use of a mediating alerting service. Section 6 provides an overview of some related systems and models. Section 7 gives some directions for our future work.

2 Event Notification Service

In this section, we introduce the general structure and tasks of an alerting service. We present a collection of possible scenarios in which Internet-scale event services are applicable. The main purpose of this collection is the motivation and validation of an event model that covers all applications. Based on the scenarios and the derived common requirements for alerting services, we will point out the problems with existing alerting services and their models.

Tasks of an Alerting Service. Alerting services connect suppliers of information and interested clients. In our example of scientific papers, the suppliers are publishing houses and the clients are the interested scientists. Alerting services inform the clients about the occurrence of events on objects of interest. Objects of interest are located at the supplier's side. Events can be changes on existing objects or the creation of new objects, e.g., the publication of a journal article. Clients define their interest by personal profiles. The information about the occurring events is filtered according to these profiles, and notifications are sent to the interested users (clients). Fig. 1 depicts the data flow in a high-level architecture of an alerting service. Keep in mind that the data flow is independent of the delivery mode, such as push or pull. The tasks of an alerting service can be subdivided into the following steps: First, the observable event classes are determined and offered to the clients. Then, the clients' profiles are defined and stored. The occurring events are observed and filtered. Before creating notifications, the events are integrated in order to detect combinations of events (e.g., two conferences happen to be at the same time). After duplicate recognition, the messages can be buffered in order to enable efficient notification (e.g., by merging several messages into one notification). According to a given schedule, the notifications are then delivered to the clients.

Figure 1. Data flow in an Alerting Service

Scenarios. We present a selection of application domains of alerting services to demonstrate the need for a unified and extended event model and a Mediating Alerting Service.

Stock ticker. Selected stock values are pushed to registered clients. Delivery can be immediate (to paying customers) or deferred. Clients can be offline; in that case, notifications are lost without consequences. A client can be a PC with analysis software that reacts to events like the threshold crossing of a share value by notifying its user or by reacting autonomously. The cardinality of the relation between supplier and client is usually 1:n; the size of notification messages is small (a few bytes), but they are sent with high frequency. Encryption can be required. Objects of interest are identifiable in advance; the clients subscribe to objects by selecting from a given list of objects (n out of m).

Digital Library. In a digital library, users want to be notified about new publications they are interested in. They define their interest by specifying certain bibliographical metadata (e.g., a journal or an author) or by Information Retrieval-like queries. Information suppliers can be publishing houses or universities' technical report servers. The offered documents reside on the publisher's side within a database, file systems, or other repositories. Within the profiles, the clients have to specify the source. Notifications can include the full document or a pointer to it (DOI, URL). To avoid an unnecessarily high frequency of notification deliveries, users can specify a time interval (e.g., weekly) within which notifications are collected and then delivered all together. Since users do not know each supplier and do not want to register at different suppliers' interfaces, a service covering many suppliers and giving unified access to their repositories operates between clients and information sources. Departments or working groups may have overlapping user profiles. Cardinality between suppliers and clients can be m:n, and notifications can be large (several MB). The frequency of notifications is low; delivery has to be guaranteed. Objects of interest are unknown at the time of profile definition and usually come into existence later. Clients can be interested in only parts of objects, e.g., in a mathematical proof that is part of an article, which is part of a journal issue.

Software Update. Registered users of software (programs, data) automatically get updates pushed from their vendors via the Internet. To avoid too frequent delivery, users can specify that only every second update is really of interest. While notification frequency is low, notification size can become huge. Whether an update event is to be forwarded to the client depends on previous events. We call the dependence of events on other events event patterns.

Remote monitoring and control. A power station is equipped with a variety of probes and sensors. Multiple devices of the same type ensure reliability by redundant measurement. Measured values are pushed in real time to the monitors. Monitoring is done at different places, in several control rooms, and by replicated software agents. A client can request information from certain probes or probe classes. Additionally, it can require to be notified only if two or more probes deliver the same value, or if a probe did not deliver new measurements for a certain time interval. Cardinality of supplier to client can be 1 out of n, meaning that one of n redundant clients has to handle an event. Reliable connections and real-time delivery are required. Notification delivery in the case of at least two related event occurrences is another example of an event pattern. Events that indicate that nothing happened in a time interval are called passive events.

Other conceivable example scenarios are mobile computing, traveller information systems, and replication services.

Dimensions for Model Evaluation. From the scenarios in the previous section, the following dimensions to classify and evaluate event models and alerting services emerge:

Cardinality: Associations between suppliers and clients cover the range from 1:1 to m:n. The Remote Monitoring and Control scenario shows that the notion of a "1 out of n" cardinality is useful.
Notification size: Depending on the application type, the size of a notification can range from a few bytes to several megabytes. By delivering pointers to objects instead of the objects themselves, the size can be reduced significantly.
Notification frequency: Can vary from highly frequent (in the range of seconds) to, say, once a year or only once at all.
Guaranteed delivery: In a digital library, for instance, it is necessary to guarantee delivery of notifications even if clients are offline.
Real time: Remote monitoring and control can require real-time delivery of notifications.
Passive events: In some cases, it is useful to be notified if during a specified interval nothing happened, e.g., if a server does not handle requests anymore.
Event patterns: Clients register for events that depend on other events.
Composed objects: Objects need not be atomic, but can consist of other objects.
Object repositories: Clients can subscribe to repositories to get informed about changes within that repository. To subscribe to information objects that do not exist at the time of the profile definition, clients refer to the repository the object will appear in.
Profile definition: Clients can subscribe to concrete objects (e.g., by referring to their identifier), by specifying metadata that describe the objects of interest, or (in the case of digital libraries) using an IR-like query.
Scalability: Can be achieved by redundant alerting services, duplicated parts of services, or a hierarchy of services if profiles of different clients are overlapping.
Encryption: Scenarios that cover delivery of private data, or data that are liable for costs, can require encrypted delivery. Encryption is handled on the protocol level.
Reliability and Acknowledgment: Acknowledgments can be used to implement reliable delivery. These characteristics will not be considered in our model, since they have no influence at the modeling level used here.

Drawbacks of Existing Models. In the following, we show the drawbacks of the existing models for alerting services with regard to the requirements described above.

Terminology: The term alerting service itself bears the problem that, on the one hand, there exist several names for this kind of service (Alerting Service, Notification Service, Profile Service, etc.), while on the other hand several different concepts are called notification service (see Section 6). Additionally, the different models for alerting systems use identical terms to describe different concepts. For example, consider the term Channel: In the CORBA model, an event channel is an intervening object that allows multiple suppliers to communicate with multiple consumers asynchronously [14]. CDF [7] or Netcaster channels [13] are similar to television broadcast channels. In contrast to CORBA, a CDF Channel has an observer function for the channel objects. A further evaluation of the implementation of alerting services with channels can be found in [8].

States of non-existing objects: In most event-based models, an event is defined as a state transition of an object of interest at a particular time, where the state of an object is the current value of all its attributes (e.g., [14, 11]). The binding of events to the object of interest cannot be weakened in general, as the events are strongly related to the objects (in contrast to clock-time events). Consider the case of a scientific paper or article that is published. This object appears at a specific time; the state of the object is then the content of the paper and its metadata, such as author and title. But what is the state of the object before it exists? The notion of an event as a state transition of the object is not appropriate here. Rosenblum and Wolf [15] define an event as an instantaneous effect of the termination of an invocation of an operation on an object. This definition associates the event with the invoker of the operation instead of with the object of interest. As a consequence, the invoker has to communicate with the observer in order to announce the event. But it cannot generally be presupposed that invokers actively announce events to observers, for several reasons.

Composed Objects: The notifications sent to the clients are the messages that are seen as the physical representation of the events [2] that the clients are subscribed to. Since the events relate to identifiable objects of interest, the notifications contain or refer to these objects. However, clients are often not interested in whole documents or sites (they are interested in an article instead of the whole journal, or even in a single mathematical proof instead of the article); therefore, substructures need to be identifiable as objects.

How to register that nothing happened: "Send a message if the value of share S does not change for a period of n days." Existing models of alerting services cannot handle this kind of profile, as neither an operation is performed on the object of interest, nor does the object of interest change its state. We are aware of the fact that this construct is contradictory to the intuitive notion of an event as something that happens.

Figure 2. Architectural Model of an Alerting Service

3 Architecture

In this section, we introduce a general architectural model for alerting services that is used for the identification of the components involved in the event model presented in Section 4. The involved components and their relations are shown in Fig. 2. Objects of interest are so-called information objects that are located at the supplier's side, optionally in an object repository. Information objects can be persistent (e.g., documents) or transient (e.g., measured values). The objects can be organized hierarchically. Changes of these objects (creation, update, deletion) are induced by an invoker. The responsibility of the observer is the detection of changes of single objects or in the object repositories. For the observer, change detection can be an active task, or it is passively informed by the invoker. Any change is an event. A detailed definition of the term event will be given in Section 4.1. Events are reported as materialized event messages to the filter. The filter has knowledge of the clients' profiles and compares the event with the query part of the profiles. If a profile and event match, the filter delivers the event message to the notifier. For the detection of event patterns, events are stored in the event repository. The notifier in turn checks the schedule part of the profile. The event messages are buffered until the notifications become due; they are then edited according to the format specified by the client and delivered. The buffering is also needed for the notification of offline clients to guarantee delivery. The clients' profiles contain information about the events the client is interested in, a schedule, a notification protocol, and a notification format. A detailed description of the profile structure is given in Section 4.1. For scalability and reliability, the components of the alerting service can be duplicated. While the invoker and the object repository usually reside at the supplier's side, not all information suppliers implement an observer. An alerting service that covers this type of supplier could implement an observer as a wrapper for each supplier. The observer is forced to keep various information on the repository if it is not notified by the invoker and the supplier's interface does not offer a search for changes since a specified date. Alternatively, the observer can be moved to the supplier's side and perform its tasks as an agent of the alerting service there.

4 Unified Event Model

In this section, we introduce our event-based model for alerting services. First, we define the terms used within the model. Then, we formally describe the tasks of an alerting service. Due to space limitations, the event model is presented in a shortened version. An extended version with all details can be found in [10].

4.1 Terms and Definitions

Object. In correspondence to other models, we use objects to encapsulate the functionality of model participants. In our model, an object can be any logical entity residing on a hardware component within a network. Hardware and human beings can also participate but are represented by software-based proxies. Each object is uniquely identifiable; for simplicity, we refer to the identifier as a handle (e.g., a URL). Considerations of a naming model for alerting services can be found in [15]. The objects offered by suppliers, such as journals or news pages, we call objects of interest. Objects have a state. The state of an object is the value of its attributes. A set of objects offered by a supplier is referred to as a repository. A supplier can offer one or more repositories. Since repositories can also be seen as objects of interest, we consider a hierarchy of objects, where the items within the repositories are called information objects. Information objects can also be composed of other objects, e.g., journals consist of articles. Therefore, objects need to carry information about their position within the hierarchy.

Event. Based on the scenarios, we get a set of possible events regarding information objects: a new information object appears; existing information objects are changed; existing information objects are deleted; for a certain interval of time an information object remains unchanged. Similar to the model used for Event Action Systems [11], we divide events into two classes: time events and object events. Time events involve clock times, dates, and time intervals. Object events involve changes of non-temporal objects. We additionally distinguish active and passive events. Active events are state transitions of the repository at a particular time; they are observer-independent. State transitions can be actions such as insertion, deletion, or change of a data object. In the context of databases, as in digital libraries, the notion of state transition is closely related to integrity constraints. Each transaction on the database underlies several constraints (e.g., the key of a tuple has to be unique), and the operations are accepted only if the constraints are fulfilled. Allowed operations transfer the database from one valid state to another valid state. This process is called a state transition. As a final consequence, both the constraints and a given set of values ensure an invariant state space. A state transition of a particular information object occurs if its attribute values are changed. Passive events involve counters and object properties at a specified time. They have to be observed. Passive events model the fact that for a given time interval an object did not change. Examples are given in Section 2.

Profile. A profile is the formally defined information need of a client. Each profile consists of two parts: a description of the events the client is interested in (query-profile) and the conditions for the notification (meta-profile).

Query-profile: In addition to primitive events (time and object events), query-profiles can specify event patterns. Event patterns are Boolean combinations of events. For time events, we distinguish between points, intervals, and frequencies (e.g., weekly) in time (absolute). Time events can also be given in relation to another event, e.g., "X weeks after the conference" (relative), or by using combinations. For time events, client and server need to define a reference (e.g., a common time zone). Passive events need to specify the objects and their attributes (see active events), and constraints such as time or counters. For active events, clients have to define the objects and the type of state transition to be observed on the object, and possibly further attribute values they are interested in. Additionally, the repository of the objects has to be defined. Identification of objects can be done by giving (1) the object handles, (2) metadata about the objects, or (3) values of attributes of the objects. For example, the subscription to subject-based Internet channels is covered by (2). For composed objects, the level of the concerned attribute within the hierarchy should also be given. Possible state transitions are the occurrence of a new object, the change of an object, or the removal of an object. The change of an object can concern the structure (changing set of attributes, changing range of attributes) or the values of the attributes. Additionally, clients can be interested in the changing number of values, or in different changes of the value itself.

The following items have to be defined within a meta-profile (a code sketch of the two-part profile structure follows this list):
1. content of the notification (e.g., object handle, the object itself, information about the object/event and/or their numbers),
2. structure of the notification (number of events reported in the message, instructions for the merging of notifications, ranking mechanism),
3. notification protocol (e.g., e-mail, desktop icon, download),
4. time policy for event detection (frequency of observations),
5. time policy for notification, such as scheduled (e.g., daily), event-dependent (e.g., on events), or depending on event attributes ("X weeks before the conference").
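The paper deliberately keeps the profile structure independent of any profile definition language, so the following is only a minimal sketch of how the two parts just described might be represented; all class and field names are illustrative assumptions, not part of the paper's model:

```python
# Illustrative sketch of the two-part profile structure described above.
# All names are assumptions; the paper defines the structure, not a syntax.
from dataclasses import dataclass, field

@dataclass
class QueryProfile:
    repository: str                                           # source repository
    object_handles: list = field(default_factory=list)       # identification option (1)
    metadata: dict = field(default_factory=dict)              # identification option (2)
    attribute_values: dict = field(default_factory=dict)      # identification option (3)
    transitions: list = field(default_factory=list)           # e.g. ["new", "change", "remove"]

@dataclass
class MetaProfile:
    content: str = "handle"         # 1. what the notification carries
    merge: bool = True              # 2. merge several events into one message
    protocol: str = "e-mail"        # 3. notification protocol
    observe_every: str = "1h"       # 4. time policy for event detection
    notify_schedule: str = "daily"  # 5. time policy for notification

@dataclass
class Profile:
    query: QueryProfile   # which events are of interest
    meta: MetaProfile     # how and when to notify

# A digital-library profile: notify weekly about new articles in a journal.
p = Profile(QueryProfile("publisherA", metadata={"journal": "TODS"},
                         transitions=["new"]),
            MetaProfile(notify_schedule="weekly"))
```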
Notification. A notification is a message reporting about events. Clients are notified according to the time policy given in their profiles. Notifications created by observers have to be evaluated to discover patterns or duplicates. Before notifications are sent, depending on the profile, they have to be edited (e.g., duplicate removal, merging, formatting) in order to ensure that clients get only one notification at a time.

4.2 Model

The clients of an alerting service get information about the possible profile types from the service. Suppliers inform the service (and so the clients) about possible events by means of advertisements. An alert relation describes a connection between repositories of information objects, clients, and the events the clients want to be notified about. The alert relation A is therefore a relation over a set of repositories R, a set of clients C, and a set of profiles P:

A ⊆ R × C × P

Clients may register or unregister for certain events happening on certain repositories, for instance by subscribing to an event channel, such as CDF channels. The corresponding tuples (r, c, p), where r denotes the repository, c the client, and p the profile regarding an event on r, are inserted into, or deleted from, the relation. An event channel is a projection of the alert relation onto the repositories and the profiles: π(R,P)(A). Here, we employ the notion of the relational algebra; within a projection, duplicate tuples are deleted. According to this model, a CORBA channel is only a communication medium and is not covered by the definition of a channel given above. Another conclusion is that clients that are interested in the same events at the same repository are subscribed to the same event channel, as in CDF channels or mailing lists. Events can happen at any time; they are caused by invokers that perform actions on the repository. Let t be the time a state transition in repository r has been caused by an invoker. An observer can learn of events in two ways: either the observer is notified by the invoker at time t, or the observer proceeds according to a time schedule and registers all state changes on r in the interval (t_{i-1}, t_i] at time t_i. The observer creates a message reporting the occurrence of the event and forwards it to the filter. The profile filter compares the reported events to the profiles. Following Carzaniga [2], e ⊑ p denotes that an event e matches a profile p. Notifications to the following clients must be produced: {c | (r, c, p) ∈ A_t ∧ e ⊑ p}, where A_t is the set of alert relation tuples affected by events in r at t. A similar sequence is conceivable for passive events, as well as for the subsequent handling of event patterns and notifications. For the complete formalism, again, we refer to the extended version of this paper [10].

Several conclusions can be drawn from the model: The observers either have to be informed by the invokers, or they have to be aware of the state of the repository (and the objects contained in the repository). In the latter case, observers therefore need to be initialized with the states of all existing objects in the repository. They can only detect changes with frequencies less than or equal to the observation frequency; otherwise, observers can miss events within their observation interval. If two events regarding the same object happen within the same observation interval, these events can weaken or intensify each other. For example, if the events that a value of an attribute increases and decreases happen shortly one after another, these two events neutralize each other. For passive events, filters initially have to know about the existence of objects, and therefore they have to be initialized with all existing objects. For the recognition of changed objects, different update strategies have to be considered (versions, in-place update and shadow update, see [5]).

5 Mediating Alerting Service

In this section, we motivate and propose the use of a mediating alerting service. Consider the application domain of a digital library. For full coverage of scientific publications, users have to register at a variety of suppliers (cardinality m:n), some even unknown to the user, with different interfaces for profile definition. Notification format and protocol are heterogeneous, and many suppliers do not offer an alerting service at all. In addition, the notifications of different providers cannot be combined. As a solution to the problems mentioned above and pointed out in previous sections, and as an implementation of both the architectural and the event model, we propose a Mediating Alerting Service (MediAS). In MediAS, multiple alerting services cooperate in a hierarchical and parallel manner. That means that alerting services are clients (or suppliers) of other alerting services. An alerting service can also cover multiple suppliers employing multiple observers. In turn, one observer can deliver event messages to multiple filters. MediAS integrates the view onto the information suppliers. Its observers serve as wrappers for the suppliers' interfaces. For suppliers that implement their own alerting services, the observer acts as a client and is notified of all changes in the suppliers' repositories. Other suppliers are queried regularly by the observer that wraps the suppliers' interfaces. Profiles are stored in MediAS' repository. If profiles of different clients overlap (as is expected in a digital library environment, e.g., profiles owned by library users of a working group or university department), several instances of MediAS can cooperate hierarchically, which improves scalability. A MediAS that is the client of a different MediAS submits an integration of its profiles to its supplier. That ensures that the client MediAS will be notified of all information objects its clients are interested in. Problems arise when an alerting service is notified by other alerting services whose coverages are overlapping: notifications of the same event can be delivered more than once. MediAS must take this into account and filter out duplicate events. A similar problem can be caused by observers covering the same repository, or by suppliers providing similar objects (e.g., two digital libraries that offer the same journals). In addition to the problems that are well examined in the area of non-sequential processing with parallel tasks, the duplication of the event repository requires replication to ensure reliable detection of event patterns.
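To illustrate the duplicate-filtering problem just described, here is a minimal sketch of how a mediator might suppress repeated notifications of the same event. Treating the tuple (repository, handle, transition, time) as the event identity is our assumption for illustration; the paper leaves the identity criterion open:

```python
# Minimal sketch: duplicate suppression in a mediating alerting service.
# The event-identity rule (repository, handle, transition, time) is an
# assumption made for illustration; the paper does not fix it.

class MediatorFilter:
    def __init__(self):
        self._seen = set()  # identities of already-forwarded events

    def forward(self, event, notify):
        """Forward an event to clients unless an equal event was already seen."""
        identity = (event["repository"], event["handle"],
                    event["transition"], event["time"])
        if identity in self._seen:
            return  # duplicate from an overlapping supplier; drop it
        self._seen.add(identity)
        notify(event)

# Two observers with overlapping coverage report the same publication event:
f = MediatorFilter()
e = {"repository": "dl1", "handle": "doi:10.1000/x", "transition": "new", "time": 42}
f.forward(e, print)        # delivered once
f.forward(dict(e), print)  # suppressed as a duplicate
```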

Webroot Spy Sweeper Enterprise Product Datasheet

Spyware describes programs that monitor online activity and often secretly transmit information to a third party without end user’s knowledge. The most nefarious forms of spyware continue toappear on enterprise networks at an alarming rate - and just a single undiscovered intruder can cause extensive damage to your business:nThreatened security of intellectual propertyn Legal exposure due to compromised consumer or employee data n Risk of violating compliance regulationsn Increased user downtime and costly drain on IT resources n Paralyzed productivity at mission-critical workstationsn Diminished workstation performance and network stabilityWebroot Spy Sweeper EnterpriseAdding a dedicated desktop anti-spyware solution to yoursecurity infrastructure is critical for maintaining optimal system performance and protecting intellectual property. Webroot SpySweeper Enterprise is an award-winning, centrally managed, scalable enterprise solution that provides best of breed protection against all types of malicious spyware, adware, and other harmful intruders.Webroot SSE provides financial institutions with:nMost effective spyware detection, removal and blocking technologynCentralized management from anywhere using a Web-enabled admin consolenUnsurpassed performance and scalability for enterprise-wide spyware protectionnAutomated or manual deployment of threat definitions and software updatesnSweep setting enforcement and easy access to definition downloads for remote and mobile usersnConfigurable sweep schedules with ability to scan select workstations on demandnAbility to create and enforce flexible protection policies by group or workstationnCustomizable reports, summaries and alerting capabilities on detected threatsnDashboard view to quickly identify top threats, critical workstations, and more.Webroot Solution for FinanceWebroot understands the challenges financial institutions face as they strive to avoid harmful security breaches while maintaining compliance with government regulations and implementing industry-leading spyware protection.With Spy Sweeper™ Enterprise, Webroot's award winning anti-spyware solution, financial institutions are able to quickly and effectively mitigate their risks to evolving security threats.Webroot Spy Sweeper Enterprise: Key Features Spy Sweeper Enterprise is the most trusted name in enterprise antispyware protection. This comprehensive solution provides:Real-time Blocking with Smart ShieldsOnly Spy Sweeper Enterprise offers Smart Shields to proactively defend against spyware infections. Smart Shields continuously protect the most common spyware behaviors to change a system, including changes to system memory, registry entries, host files, startup processes, browser-hijackings and many other security settings.The Most Accurate Threat DetectionThe detection of false positives in an enterprise environment can cause detrimental effects. Through its combination of thorough research and detection processes, Spy Sweeper Enterprise provides the most accurate threat detection — minimizing the potential risk from misidentifying a mission-critical application as spyware.Patent-pending Comprehensive Removal T echnology (CRT)The Webroot Comprehensive Removal T echnology (CRT) is the backbone behind the most advanced spyware removal engine inthe industry. 
CRT ensures that all traces of spyware are completely disabled from a system PC by using adaptive recognition practices to remove processes, applications or files that may have changed during the remediation process or may not have been previously detected. Using CRT, Spy Sweeper Enterprise assures system stability during and after the spyware removal process.Industry Leading Threat Database — Backed by the Power of Phileas™With the most technically advanced spyware detection process, theWebroot Threat Research Center delivers a standard of protection that is unmatched in the industry. Only Webroot offers the benefits of Phileas — the industry’s first automated spyware surveillance system designed to proactively detect spyware on the Internet. Phileas rapidly identifies potential spyware and provides protection from new threats before users can unwittingly infect their PCs. This technology dramatically enhances Webroot’s anti-spyware definition database and detection capabilities.Centralized Management from any Internet-connected PCUsing the Web-enabled admin console, IT administrators centrally configure and automate the deployment of definitions, policies, sweep schedules and program updates to the desktops from anywhere. The Admin Consoleallows multiple administrators to be simultaneously logged in with full audit logging of all user actions. Administrators are able to configure the client tobe invisible to end users, allow user control over specific settings, or run in administrative mode with full control for advanced users.Seamless, Scalable DeploymentSpy Sweeper Enterprise is scalable to fit companies of any size.Spy Sweeper Enterprise is seamlessly deployed throughout the organization via login script, an internal software management solution, or using Group Policy in Active Directory.Powerful Sweep SettingsIT administrators specify sweep schedules, set policies for automatedquarantine and removal of spyware, configure settings for coverage of files, memory, and registry in sweeps, and determine any software that should not be removed for selected groups (e.g. authorized system monitoring tools used by IT). The lockdown feature allows administrators to dictate network and workstation settings for full compliance with the corporate security ptop and Remote User ManagementSpy Sweeper Enterprise maintains the enforcement of administrator-set policies for laptop and remote users while they are away from the network. Laptop and remote users directly check the Webroot update server for definition updates while not connected to the corporate network, to ensure continuous protection from spyware threats.Reporting and AlertingExtensive reporting features provide graphical executive summaries, spy reports, and status updates. Administrators can customize reports to provide detailed analysis of the spyware threat by workstation, group, type detected, and time. In addition, alert settings allow administrators to configure multiple email addresses and notification options to ensure the correct people in the organization are alerted when a threat is detected. System RequirementsServer:OS: Windows 2000, Windows XP , Windows Server 2003CPU: 200 MHz minimum,350 MHz or better recommended Memory: 512 MB recommendedClient:OS: Windows 98 SE, Me (all require Internet Explorer 6.0 with Service Pack 1), NT 4.0, 2000 or XPCPU: 300 MHz or better recommendedMemory: 128 MB RAM minimum; 256 MB RAM or better recommended© 2006 Webroot Software, Inc. — All rights reserved. 
No part of this publication may be reproduced, photocopied, stored on a retrieval system, or transmitted without the prior consent of Webroot Software, Inc. Webroot, Spy Sweeper, Webroot Spy Sweeper Enterprise and their logos are registered trademarks of Webroot Software, Inc.

"Estimates suggest that 70% of all companies go out of business after a major data loss and about 20% of all businesses experience a major disaster that affects IT every five years. Approximately 35% of disaster-recovery plans work when tested, leaving a significant opportunity for improvement. As stated earlier, only about 45% of businesses are using an intrusion detection system."
Computer World, June 15, 2005

DataScope Product Suite Connection Options and Requirements Guide

ACCESSING DATASCOPE: OPTIONS & REQUIREMENTS

This document identifies the current connection options and requirements for accessing:
∙ DataScope Select
∙ Tick History
∙ DataScope Plus
∙ DataScope Equities & DataScope Fixed Income
in both the Production and Disaster Recovery environments via the Internet and Private Network through Delivery Direct/Financial Community Network (FCN).

Clients are encouraged to access DataScope products via host-name-based URLs, not IP addresses. This is the recommended best practice, as it requires no changes to client scripts or browser bookmarks in the event of a failover of the Production environment to the Disaster Recovery (DR) site. A failover automatically changes the Domain Name System (DNS) URL entries to point to the Disaster Recovery site for both the Internet and Private Network URLs. (Clients must ensure that their firewall configurations do not block DataScope Disaster Recovery IP addresses, which are listed later in this document.)

Actions that may trigger a failover to the Disaster Recovery site include, but are not limited to:
∙ Major hardware or network failure
∙ Power problems
∙ Acts of nature (tornado, hurricane, etc.)
∙ Malicious acts

Clients who access DataScope products via IP addresses will need to configure the Disaster Recovery IP addresses and repoint their access to the Disaster Recovery site if a failover is invoked. IP clients will also be responsible for repointing their access back to the primary Production environment site if notified by Refinitiv.

Clients who access DataScope via IP addresses should access the Disaster Recovery site only under one of these conditions:
∙ Upon notification from Refinitiv of an actual failover. Service alerts are provided to clients if a failover is invoked, and also when the Disaster Recovery site is ready for client access.
∙ During a DataScope Disaster Recovery testing period. Refinitiv schedules an annual client testing opportunity to support preparedness for Disaster Recovery site failover, notifying clients in advance via service alerts.

DataScope clients are encouraged to subscribe to service alerts. Please see the remaining sections of this document for connection options and requirements. This information is current as of the document issue date identified below. Contact one of our support contacts with questions:
∙ Frontline Support
∙ Online Solutions

Version 3.0 / Date of issue: May 2019
© Refinitiv 2019

DataScope Select
Clients can connect to DataScope Select from four interfaces (GUI, FTP, SOAP API and REST API) accessed over the Internet or Private Network through Delivery Direct/FCN.

Internet
DataScope Select officially supports Internet Explorer 11, Mozilla Firefox and Google Chrome. Clients are encouraged to browse DataScope Select using modern versions of Internet Explorer to ensure optimal performance and reliability.

Host Name Access (recommended) is available for the GUI, SOAP API, REST API (path /RestApi/v1) and FTP interfaces.

IP Access:
GUI: 192.165.219.225, 159.220.29.23
SOAP API: 192.165.219.152, 159.220.29.21
REST API: 192.165.219.152, 159.220.29.21
FTP: 192.165.219.225, 159.220.29.23
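Since the host-name recommendation above hinges on DNS-based failover, the following minimal Python sketch illustrates the difference in practice: a client that resolves the service host name at connect time picks up a failover automatically, whereas a hard-coded IP address requires manual repointing. The host name used here is a hypothetical placeholder, not an actual DataScope endpoint.

import socket

# Hypothetical placeholder host name; substitute the DataScope host name
# provided for your account. Hard-coding the resolved IP instead would
# require manual repointing after a Disaster Recovery failover.
SERVICE_HOST = "select.datascope.example"

def current_endpoint(host):
    # Resolve the host name at connect time so a DNS failover to the
    # Disaster Recovery site is followed without any client-side change.
    return socket.gethostbyname(host)

if __name__ == "__main__":
    try:
        print(SERVICE_HOST, "currently resolves to", current_endpoint(SERVICE_HOST))
    except socket.gaierror as exc:
        print("Resolution failed; check DNS and firewall configuration:", exc)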
Private Network - Delivery Direct/FCN
DataScope Select access is supported over Private Networks through Delivery Direct/FCN. These access methods also support an FTP Push option for extraction retrieval, as described in the next section.

Host Name Access (recommended) is available for the GUI, SOAP API, REST API (path /RestApi/v1) and FTP interfaces.

IP Access:
GUI: 159.43.0.1, 159.43.7.36
SOAP API: 159.43.0.2, 159.43.7.35
REST API: 159.43.0.2, 159.43.7.35
FTP: 159.43.0.1, 159.43.7.36

FTP Push
This option enables Private Network clients to connect DataScope Select to their site via FTP using the following IP addresses and the instructions below. Additional configuration through the DataScope Select User Interface (Preferences screen > FTP Automated Delivery tab) is required. Please see the DataScope Select FTP User Guide for details. This document is available on MyRefinitiv.

FTP: 159.43.0.6, 159.43.7.46

Clients must also configure their firewalls as follows to allow the FTP/SFTP connections to be initiated from the DataScope Select FTP hosts:
1. Launch the DataScope Select console and change the global static NAT IP address that the Refinitiv hosts will target (provided by the Delivery Direct carrier).
2. Open the following firewall ports: 80, 443, 20, 21 and 22/TCP (Source: 159.43.0.0/29; Destination: DataScope Select servers).
3. Selectively forward * on the client DNS.

Tick History
Tick History v2 is hosted on the DataScope Select platform, and you can access it via the GUI and REST API interfaces using the same connection options as those for DataScope Select. (Note that Tick History does not support the SOAP API and FTP interfaces.) Tick History also has the additional connection requirement described in this section.

If your company's firewall requires sites to be whitelisted, and your company downloads Tick History output directly from Amazon Web Services (AWS) as described below, then you should whitelist several locations. This applies to downloading output directly from AWS via:
∙ The REST API, by specifying the x-direct-connect HTTP header.
∙ The graphical user interface (GUI), by selecting Enable Direct Download From S3 in the Tick History File Delivery section of the Preferences screen.
It applies to directly downloading Venue by Day packages and custom reports from AWS.

If your firewall allows the use of wildcard characters, whitelist the following locations:
∙ https://*/tickhistory*/*
∙ https://tickhistory*/*

If your firewall does not allow wildcards, whitelist the following locations:
∙ https:///
∙ https:///
∙ https:///
∙ https:///
∙ https:///
∙ https:///tickhistory.query.production.pln-results/
∙ https:///tickhistory.query.production.pln-results/
∙ https:///tickhistory.query.production.hdc-results/
∙ https:///tickhistory.query.production.pln-results/
∙ https:///tickhistory.query.production.hdc-results/
∙ https:///tickhistory.query.production.hdc-ebd (Venue By Day only)
∙ https:///tickhistory.query.production.pln-ebd (Venue By Day only)

However, if either of the following applies to you, we recommend that you download Tick History output from Refinitiv, in the standard way, not directly from AWS:
∙ If your firewall requires whitelisting, but you cannot whitelist domain-based URLs, only IP addresses, then do not download directly from AWS.
∙ If you access Tick History not via the Internet, but via a private network such as Delivery Direct (DD) or Financial Community Network (FCN), then do not download directly from AWS.

DataScope Plus
Clients can connect to the DataScope Plus interface over the Internet, Private Network through Delivery Direct/FCN, or via a standard FTP connection. File retrieval is available via FTP Pull only. FTP Push is not supported.
Internet
Host Name Access (recommended) is available for the GUI and FTP interfaces.
IP Access:
GUI: 192.165.219.225, 159.220.29.23
FTP: 159.220.50.21, 164.57.208.61

Private Network - Delivery Direct/FCN
Host Name Access (recommended) is available for the GUI and FTP interfaces.
IP Access:
GUI: 159.43.0.1, 159.43.7.36
FTP: 159.43.5.13, 159.43.12.130

DataScope Equities & DataScope Fixed Income
Clients can access DataScope Equities and DataScope Fixed Income files over a standard FTP connection via the Internet or Private Network through Delivery Direct or FCN. File retrieval is available via FTP Pull only. FTP Push is not supported.

Production
Internet: 159.220.29.24
Delivery Direct/FCN: 159.43.7.49

Disaster Recovery
Internet: 159.220.34.140
Delivery Direct/FCN: 159.43.0.25

Miscedence (Translation)

Miscedence

1. Introduction
In the realm of technology, the term "Miscedence" refers to the occurrence of unexpected events or situations. These events often disrupt the normal flow of operations and can have various impacts on different aspects of life. This article delves into the concept of Miscedence, exploring its causes, effects, and ways to mitigate its consequences.

2. Causes of Miscedence
Miscedence can arise from a multitude of factors, including:

2.1 Natural Disasters
Natural disasters, such as earthquakes, floods, hurricanes, or wildfires, can cause significant Miscedence. These events may damage critical infrastructure, disrupt communication networks, and halt transportation systems, resulting in widespread chaos and confusion.

2.2 Human Error
Human error is another major cause of Miscedence. Mistakes made by individuals, whether accidental or intentional, can have far-reaching consequences. Examples include system administrators misconfiguring servers, employees mishandling sensitive data, or technicians improperly installing equipment.

2.3 Cyber Attacks
With the increasing reliance on technology, cyber attacks have become a leading cause of Miscedence. Malicious hackers can breach security defenses, gain unauthorized access to systems, and disrupt essential services. Such attacks can paralyze government institutions, businesses, and even the everyday lives of individuals.

2.4 Complex Systems
Complex systems, such as financial networks, transportation grids, or healthcare infrastructures, are vulnerable to Miscedence due to their intricate nature. A single point of failure or an unexpected interaction between components can trigger a chain reaction, leading to cascading failures and widespread disruption.

3. Effects of Miscedence
The effects of Miscedence can be significant and diverse, impacting various domains:

3.1 Economic Consequences
Miscedence can have severe economic implications. Disruptions to supply chains, halts in production, or the loss of critical infrastructure can result in financial losses for businesses and individuals. Additionally, restoring systems, recovering data, and implementing preventive measures can incur substantial costs.

3.2 Social and Humanitarian Impact
Miscedence can disrupt the daily lives of individuals, causing social and humanitarian issues. For instance, communication breakdowns during natural disasters can hamper rescue efforts, delay medical assistance, and make it challenging for affected communities to reach out for help.

3.3 National Security Risks
Miscedence can pose significant national security risks. Cyber attacks on critical infrastructure, such as power grids or transportation systems, can undermine a country's defense capabilities. Moreover, the disruption of essential services can create vulnerabilities, making it easier for external threats to exploit the situation.

3.4 Psychological Effects
Miscedence can also have psychological effects on individuals and communities. The uncertainty caused by unexpected events can lead to fear, stress, and anxiety. Moreover, the loss of personal data, livelihoods, or infrastructure can have long-lasting emotional impacts.

4. Mitigating Miscedence
To minimize the adverse effects of Miscedence, several mitigation strategies can be employed:

4.1 Redundancy and Resilience
Designing systems with redundancy and resilience can help mitigate the impact of unexpected events.
By implementing multiple layers of backup, alternative communication channels, or fail-safe mechanisms, organizations can ensure that essential services continue even in the face of disruptions.

4.2 Disaster Preparedness and Response
To enhance resilience, governments, organizations, and individuals must develop comprehensive disaster preparedness and response plans. These plans should include regular drills, training programs, and clear communication protocols to ensure an efficient and coordinated response during Miscedence events.

4.3 Cybersecurity Measures
Given the increasing threat of cyber attacks, robust cybersecurity measures are essential to mitigate Miscedence. These include regular security audits, strong password policies, encryption protocols, and employee education on identifying and responding to cyber threats.

4.4 Education and Awareness
Raising public awareness about Miscedence and its potential impacts is crucial. Educational campaigns can empower individuals to take necessary precautions, such as backing up data, securing personal information, and reporting suspicious activities. Additionally, promoting a culture of resilience and preparedness can contribute to a more resilient society.

Conclusion
Miscedence represents the unexpected occurrences that can disrupt our lives in numerous ways. Understanding its causes, effects, and strategies for mitigation is essential for individuals, organizations, and governments alike. By acknowledging the reality of Miscedence and taking proactive measures, we can minimize its impact and ensure a more secure and resilient future.


Abstracting Execution Logs to Execution Events for Enterprise Applications
Zhen Ming Jiang and Ahmed E. Hassan (Queen's University), Parminder Flora and Gilbert Hamann (Research In Motion)
{zmjiang, ahmed}@cs.queensu.ca, {pflora, ghamann}@

Abstract
Monitoring the execution of large enterprise systems is needed to ensure that such complex systems are performing as expected. However, common techniques for monitoring, such as code instrumentation and profiling, have significant performance overhead and require access to the source code and to system experts. In this paper, we propose using execution logs to monitor the execution of applications. Unfortunately, execution logs are not designed for monitoring purposes. Each occurrence of an execution event results in a different log line, since a log line contains dynamic information which varies for each occurrence of the event. We propose an approach which abstracts log lines to a set of execution events. Our approach can handle log lines without strict requirements on the format of a log line. Through a case study on a large enterprise application, we demonstrate that our approach performs well when abstracting execution logs for large enterprise applications. We compare our approach against the SLCT tool, which is commonly used to find line patterns in logs.

1. Introduction
Large commercial software systems require runtime monitoring to ensure Quality of Service [20, 22]. Erroneous behavior or long waiting times will result in user dissatisfaction and loss of revenue. Runtime monitoring compares the runtime behavior of an application against a documented or expected behavior. Runtime monitoring techniques, such as source code instrumentation and profiling, require a good understanding of an application to determine the most appropriate subsystems or components to monitor. Such understanding is rarely available in practice.

Source code instrumentation [5, 6] is commonly used to insert monitoring points in large software systems. However, code instrumentation requires rebuilding and redeploying the monitored system. This is a time-consuming task which causes service disruption. Furthermore, code instrumentation is not feasible when the source code of the application is not available. This is the case for many large enterprise systems, which make use of Commercial-Off-the-Shelf (COTS) components. COTS components are usually purchased from a third party and are shipped without the source code.

Code profiling [8, 18, 19] is another common technique for monitoring an application. Code profiling avoids the need for rebuilding the source code. However, profiling introduces significant slowdowns and is not desirable in a production setting.

Software developers mark the execution of high-level events in an application by outputting log lines during development. For example, the event of charging a user account may be marked by outputting the following log line: "Charged User Account, account=1000, amount=200". Such log lines indicate high-level information about the progress of execution for an application, in contrast to code instrumentation events, which are at a much lower level. These log lines are helpful during remote issue resolution by customer support representatives. Also, recent legal acts, such as the Sarbanes-Oxley Act of 2002, have made the logging of execution a legal requirement for telecommunication and financial companies [4].

We propose using these commonly available logs to monitor the execution of an application.
Monitoring techniques which make use of execution logs are the least invasive (no disruption or degradation of an application) and the most applicable (logs are available). However, execution logs are not designed for monitoring. Although each log line is an instantiation of an execution event, each log line contains dynamic information that is specific to the particular occurrence of an event. The dynamic information causes the same execution event to result in different log lines. For example, a log line indicating the event of charging a user's credit account would contain different account numbers and charge amounts for each instantiation of that particular execution event, as shown in Table 1. All three log lines correspond to different instantiations of the same execution event.

Table 1. Sample log lines
1: Charged User Account, account=1000, amount=200
2: Charged User Account, account=1234, amount=100
3: Charged User Account, account=500, amount=250

Abstracting log lines to execution events is a challenging and time-consuming task. A system expert is needed to determine that two log lines represent the same execution event. Such system experts are rarely available, and if they are available they have limited time. Due to the large size of enterprise systems, this process is time consuming. Moreover, the system expert must update the abstractions for each version of the software system.

At first glance it may appear that the process of abstracting log lines to execution events could be solved if the source code of an application is available. Simply searching for "LOG" statements (e.g., "printf" or "System.out") is not sufficient, since in many instances an execution event may be generated dynamically using several output statements in the source code. Table 2 shows one example.

Table 2. An example of an execution event generated by multiple output statements
Log line: User Shopping Basket contains: 2, 3, 5
Source code:
LOG("User Shopping Basket contains: ");
for (int i = 0; i < shoppingBasket.size(); i++) {
  itemId = shoppingBasket[i];
  if (i > 0) {
    LOG(" , " + itemId);
  } else {
    LOG(itemId);
  }
}

In this paper, we introduce a lightweight approach to abstract log lines to execution events. Our approach can handle free-form log lines with limited requirements on the format of the log lines. Experiments on a large enterprise application show that our approach abstracts log lines with high precision and recall.

Organization of the Paper
This paper is organized as follows. Section 2 overviews related work in the field and places our contributions relative to prior work. Section 3 presents our approach for abstracting log lines to execution events. Section 4 demonstrates the effectiveness of our approach through a case study using log files from a large enterprise software application. Section 5 concludes the paper.

2. Related Work
Prior approaches for abstracting log lines to execution events can be grouped under three general headings: rule-based, codebook-based and AI-based approaches. Rule-based approaches [13, 14, 15, 26, 27] use a set of hard-coded rules to abstract log lines to execution events. These approaches are commonly used in practice since they are very accurate. However, they require substantial effort for encoding and updating the rules. Codebook-based approaches [7, 24, 28] are similar to the rule-based approach. However, codebook approaches process a subset of execution events ("alarms") instead of all events.
The subset of events, which forms the codebook, is used in real time to match the observed symptoms. AI-based approaches [1, 2, 3, 9, 10, 12, 21] use various artificial intelligence techniques, such as Bayesian networks and frequent-itemset mining, to abstract execution logs to events.

Our approach, presented in Section 3, is similar to a rule-based approach. However, it requires less system knowledge and effort. Rather than encoding rules to recognize specific execution events, our approach uses a few general heuristics to recognize static and dynamic parts in log lines. Log lines with identical static parts are grouped together to abstract log lines to execution events.

Approach | Transparency | System knowledge needed | Effort | Coverage
Rule-based [13, 14, 15, 26, 27] | Y | High | High | N
Codebook-based [7, 24, 28] | Y | Medium | High | N
AI-based [1, 2, 3, 9, 10, 12, 21] | N | Low | Low | N
Our approach | Y | Low | Low | Y

We define several criteria to evaluate a log abstraction approach:
1. Transparency: Can a user easily understand the rationale for abstracting a log line to an execution event? For example, in a neural-network AI approach, the user cannot determine the rationale for abstracting a log line to a particular event. We desire an approach with high transparency so users will trust it.
2. Amount of system knowledge: What is the amount of knowledge needed about the system to use the approach? For example, in a rule-based approach a domain expert is needed to encode all rules.
3. Amount of required effort: What is the amount of effort required for the approach to work properly? Rule-based and codebook-based approaches require a large amount of human effort to encode the rules or alarms. These encodings must be updated with every version of a software system.
4. Coverage: Is each log line abstracted to a unique log event? For example, AI-based approaches may not abstract log lines which do not occur above a particular threshold.

3. Our Log Abstraction Approach
Log lines are generated by output statements in the source code. Log lines generated by the same set of output statements correspond to the same execution event. Such log lines will be similar to each other. Table 1 shows three log lines generated by the same execution event. The identical (i.e., static) parts in each log line are static information describing the execution event (i.e., the context), whereas the varying (i.e., dynamic) parts are parameter values generated at runtime.

Not all log lines contain both static and dynamic parts. For example, "All accounts initialized" contains only a static part. Most of the time, a log line has both dynamic and static parts. Log lines with the same static parts and the same structure of dynamic parts belong to the same execution event. If we can parameterize each log line, we can abstract log lines to execution events properly.

3.1. Clone Detection Approach
Log lines generated by the same execution events should look the same if we can properly parameterize the dynamic contents of each log line. This process is similar to the "parameterized token-matching algorithms" used in source code clone detection research [11]. To verify the feasibility of using clone detection techniques to abstract log lines, we used the CCFinder [11] tool, which implements the parameterized token-matching approach. The tool can detect similarities in multiple programming languages and plain text.

Although CCFinder works on large source code bases, it is not able to process large log files.
For example, CCFinder performs clone detection across multiple lines and cannot handle large files. Based on a closer analysis, we believe CCFinder cannot process large log files as well as large source files for the following reasons:
1. Programming languages and plain text wrap around lines but have delimiters for each statement (like ";", "." or "!"), whereas a log line does not use similar delimiters. Thus, CCFinder cannot find the end of each log line and treats all log lines as one chunk.
2. Source code contains control keywords like if, else, for, while, etc. These keywords are the static parts in the source code and are used by CCFinder in the lexical analysis and transformation steps to analyze and mark up the source code. Log lines have a less strict grammar and an unlimited vocabulary; CCFinder cannot mark up any specific parts when processing them.

3.2. Our Approach
Based on lessons learned from running CCFinder on log files, we have derived a new approach to detect similarities among log lines, then parameterize and abstract these lines. Our approach treats end-of-line characters as the delimiter for each log line. It scales up to process log files which contain thousands or millions of log lines.

Table 4. Sample log lines
1. Start check out
2. Paid for, item=bag, quality=1, amount=100
3. Paid for, item=book, quality=3, amount=150
4. Check out, total amount is 250
5. Check out done

As shown in Figure 3, our approach consists of three steps: Anonymize, Tokenize and Categorize. In the rest of this section, we demonstrate our approach using the small example shown in Table 4, which has 5 log lines.

The Anonymize step
The Anonymize step uses heuristics to recognize tokens in log lines which correspond to dynamic parts. Once the tokens are recognized, they are replaced with generic tokens ($v). Heuristics can be added to or removed from our approach. We use the following two heuristics to recognize dynamic parts in log lines:
1. Assignment pairs like "word=value";
2. Phrases like "is[are|was|were] value"

Table 5 shows the sample log lines after the Anonymize step. In the second and third lines, the contents after the equal signs (=) are replaced with the generic term $v. In the fourth line, the phrase "is 250" is replaced with the term "=$v". No changes are made to the first and last lines.

Table 5. The sample logs after the Anonymize step
1. Start check out
2. Paid for, item=$v, quality=$v, amount=$v
3. Paid for, item=$v, quality=$v, amount=$v
4. Check out, total amount=$v
5. Check out done

The Tokenize step
The Tokenize step separates the anonymized log lines into different groups (i.e., bins) according to the number of words and parameters in each log line. Separating log lines into different bins narrows down the search space during the Categorize step. The bins enable us to process large log files in a timely fashion by limiting our search space. We estimate the number of parameters in a log line by counting the number of generic terms (i.e., $v). Log lines with the same number of tokens and the same number of parameters are placed in the same bin. Table 6 shows the sample log lines after the Anonymize and Tokenize steps. The left column indicates the name of a bin. Each bin is named with a tuple: the number of words and the number of parameters contained in the log lines associated with that bin. The right column in Table 6 shows the log lines; each row shows a bin and its corresponding log lines. The second and third log lines contain 8 words and are likely to contain 3 parameters.
Thus the second and third log lines are grouped together in the 8_3 bin. Similarly, the first and last log lines are grouped together in the 3_0 bin, as they both contain 3 words and are likely to contain no parameters.

Table 6. Sample logs after the Tokenize step
Bin 3_0: 1. Start check out; 5. Check out done
Bin 5_1: 4. Check out, total amount=$v
Bin 8_3: 2. Paid for, item=$v, quality=$v, amount=$v; 3. Paid for, item=$v, quality=$v, amount=$v

Figure 3. High-level overview of our approach for abstracting execution logs to execution events

The Categorize step
The Categorize step compares log lines in each bin and abstracts them to the corresponding execution events. The inferred execution events are stored in an execution event database for future reference. Our algorithm goes through the log lines bin by bin. After this step, each log line should be abstracted to an execution event. Table 7 tabulates the results of our working example after the Categorize step. Our algorithm starts with the 3_0 bin. Initially, there are no execution events which correspond to this bin. Therefore, the execution event corresponding to the first log line becomes the first execution event, namely 3_0_1 (the first execution event corresponding to the bin which has 3 words and no parameters). The algorithm then moves to the next log line in the 3_0 bin, which is the fifth log line. The algorithm compares the fifth log line with all the existing execution events corresponding to the 3_0 bin. Currently, there is only one execution event: 3_0_1. As the fifth log line is not similar to the 3_0_1 execution event, we create a new execution event 3_0_2 for the fifth log line. With all the log lines in the 3_0 bin processed, we move on to the 5_1 bin. As there are initially no execution events which correspond to the 5_1 bin, the fourth log line gets assigned to a new execution event 5_1_1. The log lines in the 8_3 bin are processed with the same algorithm.

Table 7. Sample logs after the Categorize step
3_0_1: 1. Start check out
3_0_2: 5. Check out done
5_1_1: 4. Check out, total amount=$v
8_3_1: 2. Paid for, item=$v, quality=$v, amount=$v
8_3_1: 3. Paid for, item=$v, quality=$v, amount=$v
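The three steps are straightforward to prototype. The following Python sketch is an illustrative reconstruction of the Anonymize/Tokenize/Categorize pipeline on the Table 4 example, not the authors' implementation; the two anonymization regexes correspond to the heuristics listed above, and the similarity check within a bin is simplified to exact equality of anonymized lines.

import re
from collections import defaultdict

# Heuristics from the Anonymize step (illustrative regexes, not the paper's code):
# assignment pairs "word=value" and phrases "is|are|was|were value" become "=$v".
ANONYMIZE_RULES = [
    (re.compile(r"=\s*[^,\s]+"), "=$v"),
    (re.compile(r"\s+(?:is|are|was|were)\s+[^,\s]+"), "=$v"),
]

def anonymize(line):
    for pattern, replacement in ANONYMIZE_RULES:
        line = pattern.sub(replacement, line)
    return line

def bin_name(anonymized):
    # Tokenize step: the bin name is <word count>_<parameter count>,
    # counting "$v" tokens as parameters.
    tokens = re.findall(r"[\w$]+", anonymized)
    return "%d_%d" % (len(tokens), tokens.count("$v"))

def categorize(lines):
    # Categorize step: within a bin, lines whose anonymized forms are
    # identical are abstracted to the same execution event.
    events_by_bin = defaultdict(dict)  # bin name -> {anonymized line: event id}
    result = []
    for raw in lines:
        anon = anonymize(raw)
        name = bin_name(anon)
        bin_events = events_by_bin[name]
        if anon not in bin_events:
            bin_events[anon] = "%s_%d" % (name, len(bin_events) + 1)
        result.append((bin_events[anon], raw))
    return result

log = [
    "Start check out",
    "Paid for, item=bag, quality=1, amount=100",
    "Paid for, item=book, quality=3, amount=150",
    "Check out, total amount is 250",
    "Check out done",
]
for event_id, raw in categorize(log):
    print(event_id, "<-", raw)  # reproduces the 3_0_1 ... 8_3_1 labels of Table 7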
4. Case Study
We conducted a case study to evaluate the effectiveness of our approach. We ran an implementation of our approach against logs from a large enterprise software system. Since we have limited knowledge of the software system, we could not use rule-based or codebook-based approaches; we could only use an AI-based approach. There are two AI tools which we could compare our approach against: teirify [16] and SLCT (Simple Logfile Clustering Tool) [12]. The teirify tool uses a bio-informatics algorithm [17] to detect line patterns, whereas the SLCT tool makes use of frequent-itemset mining to cluster similar log lines. Unfortunately, teirify requires a large amount of memory and cannot handle large log files (exceeding 10,000 log lines). Thus, we compared the performance of our log line parameterization approach against the results obtained from SLCT, which was able to scale to handle large files. We briefly describe the SLCT approach, explain the setup of our case study, report the results of our study, and discuss lessons learned from it. In this paper, we use average precision and average recall [25] to measure the performance of the different approaches. An approach with high recall classifies most of the log lines. An approach with high precision misclassifies few log lines.

4.1. SLCT
Abstracting log lines to execution events can be considered the process of discovering common patterns among log lines. Consequently, we can consider using a data mining algorithm which analyzes large volumes of data to report interesting patterns and relations in the data. SLCT (Simple Logfile Clustering Tool) [12] is an open source tool which uses the frequent-itemset mining technique [23] to detect patterns and spot abnormal events in streams of logs. SLCT outputs execution events as regular expressions. Table 8 shows one example. SLCT by default attempts to create patterns which do not intersect. If the support count is 3, then only "In Checkout, user is Tom" will be shown. However, if the support count is 4, only the first pattern (In Checkout, user is *) will be reported. Each reported line pattern can be considered an execution event.

Table 8. An example of multiple line patterns
Log lines: In Checkout, user is Tom; In Checkout, user is Jerry; In Checkout, user is Tom; In Checkout, user is Tom
Line patterns: In Checkout, user is *; In Checkout, user is Tom

4.2. Case Study Setup
We conducted three experiments on the log files of a large enterprise application. In the first experiment, we study the feasibility of an approach by running it against small-size log files. Small log files are generated by randomly picking 100 log lines from a larger log file. We ran each approach (our approach and the SLCT approach) on 100 different randomly generated small-size log files. Since the files are small, we can manually verify the correctness of each approach and gain a better understanding of the limitations and strengths of each approach. We also use this experiment to fine-tune the input parameters. In the second experiment, we examine the stability of an approach by running it against medium-size log files. Each medium-size log file consists of 10,000 log lines randomly picked from a larger log file. We ran each approach on 100 different log files. We use these experiments to measure the average, minimum and maximum performance of an approach. In the third experiment, we test the scalability of an approach by running it against a large log file with around three quarters of a million (723,608) log lines. The studied application was internationalized, and part of the internationalization effort involved manually abstracting log lines to execution events. The execution events are stored in a separate file which is translated to different languages. We use this file as the gold standard in our performance evaluation of each approach. We now present the results of the three experiments using the two approaches.

4.3. Results
In this section, we present the performance results of SLCT and our log abstraction approach.

SLCT
For our experiments, we used support counts of 10, 100, and 100 for the small, medium and large log files, respectively. Unfortunately, as the log files get bigger, SLCT suffers from an ambiguity problem, as it reports general patterns such as lines beginning with "Start". To avoid ambiguity, we remove the general patterns before abstracting log lines to patterns (i.e., events). Table 9 tabulates the precision and recall values for SLCT. SLCT can handle log files of various sizes with stable performance. However, the performance is not satisfying. The low precision and recall are due to the following two reasons.
First, SLCT will not abstract every log line to an execution event, since a log line must occur often enough for a frequent pattern to emerge. Second, SLCT reports many sub-patterns. Table 8 shows one example: the pattern "In Checkout, user is Tom" is a sub-pattern of "In Checkout, user is *". In our case study, if log lines have more than one reported line pattern, we match these log lines with the line pattern that has the highest support count.

Table 9. The performance for SLCT
Small: precision 3.9% ± 0.45%, recall 11.4% ± 4.8%
Medium: precision 2.6% ± 0.14%, recall 12.3% ± 1.8%
Large: precision 2.4%, recall 18.4%

Our Log Abstraction Approach
For our approach, we first examined the logs and added an additional rule to anonymize email addresses in log lines. For the Categorize step, we assign log lines to a particular execution event when the log line and the event have the same number of tokens after the Anonymize step. Table 10 shows the results of the three experiments for our approach. As we can see, our approach can handle log files of different sizes with high precision and recall.

Table 10. The performance for our approach
Small: precision 95.9% ± 3.1%, recall 99.9% ± 0.31%
Medium: precision 90.0% ± 2.5%, recall 97.8% ± 0.44%
Large: precision 90.0%, recall 98.4%

We manually investigated the remaining 5-10% of the log lines which our approach could not correctly abstract to their corresponding execution events. Our investigation revealed that these log lines followed a peculiar pattern for their dynamic information which our approach did not account for. For example, our approach incorrectly abstracts the log lines shown in Table 11 to different execution events. We then updated the heuristics in the Anonymize step to account for such a pattern by adding the line pattern ("Start processing for user $v").

Table 11. An example of log lines which our approach fails to abstract
1: Start processing for user Jen
2: Start processing for user Tom Lee
3: Start processing for user Henry

4.4. Limitations and Discussion
Our approach considerably outperforms SLCT, since it does not suffer from the problems of limited frequencies and sub-pattern merging. However, our case study was performed on a single application, so we must explore the performance of our approach on applications from various domains. In addition, our approach requires some human involvement. It requires users to go through some log lines to compose the anonymization rules, and to quickly go through the abstracted execution events to make sure that all log lines have been abstracted. The example shown in Table 11 can be easily spotted by skimming through the abstracted events, and can be resolved by adding one rule to replace the words after "for user".

5. Conclusion
Complex enterprise applications must be monitored to ensure that they are functioning properly. However, many enterprise applications were not built with monitoring in mind, so monitoring must be added to them. Traditional techniques for adding monitoring to an application require access to the source code and may result in unacceptable performance degradation. Moreover, all such techniques require extensive knowledge of the application. Such knowledge rarely exists in practice.

We propose monitoring applications by means of the execution logs which are used by support staff and developers to understand the execution of an application. To use such logs, we must abstract each log line to its corresponding execution event. We call this process the log abstraction problem.
In this paper, we developed an approach that addresses many of the shortcomings and limitations of other approaches. We conducted a case study using logs from a large enterprise application. Our case study shows that our approach abstracts events with high precision and recall.

Acknowledgement
We are grateful to Research In Motion (RIM) for providing access to the execution logs of the large telecom applications used in this study. The findings and opinions expressed in this paper are those of the authors and do not necessarily represent or reflect those of RIM and/or its subsidiaries and affiliates. Moreover, our results do not in any way reflect the quality of RIM's software products.

References
[1] Huard, J., and Lazar, A. Fault isolation based on decision-theoretic troubleshooting. Technical Report 442-96-08, Columbia University, New York, NY, 1996.
[2] Huard, J. Probabilistic reasoning for fault management on XUNET. Technical Report, AT&T Bell Labs, 1994.
[3] Guerer, D. W., Khan, I., Ogler, R., Keffer, R. An Artificial Intelligence Approach to Network Fault Management. SRI International, 1996.
[4] Summary of Sarbanes-Oxley Act of 2002. /Resources/Sarbanes+Oxley/Summary+of+the+Provisions+of+the+Sarbanes-Oxley+Act+of+2002.htm
[5] Jerding, D. F., Stasko, J. T., and Ball, T. Visualizing Interactions in Program Executions. In Proc. Int'l Conf. Software Eng. (ICSE), pp. 360-370, 1997.
[6] Briand, L. C., Labiche, Y., and Miao, Y. Towards the Reverse Engineering of UML Sequence Diagrams. In Proc. IEEE Working Conference on Reverse Eng., pp. 57-66, 2003.
[7] Yemini, S. A., Sliger, S., Eyal, M., Yemini, Y., and Ohsie, D. High Speed and Robust Event Correlation. IEEE Communications Magazine, 1996, 82-90.
[8] The Eclipse Test and Performance Tools Platform. /tptp.
[9] Lin, A. A hybrid approach to fault diagnosis in network and system management. In Proceedings of the ACM SIGCOMM Conference, 2002.
[10] Wietgrefe, H., Tuchs, K.-D., Jobmann, K., Carls, G., Froehlich, P., Nejdl, W., and Steinfeld, S. Using Neural Networks for Alarm Correlation in Cellular Phone Networks. International Workshop on Applications of Neural Networks in Telecommunications, 1997.
[11] Kamiya, T., Kusumoto, S., and Inoue, K. CCFinder: A Multi-Linguistic Token-based Code Clone Detection System for Large Scale Source Code. IEEE Transactions on Software Engineering, July 2002.
[12] Vaarandi, R. A Data Clustering Algorithm for Mining Patterns From Event Logs. In Proceedings of the 2003 IEEE Workshop on IP Operations and Management, 2003.
[13] Vaarandi, R. Simple Event Correlator for real-time security log monitoring. Hakin9 Magazine 1/2006 (6), 2006.
[14] Hansen, S. E. and Atkins, E. T. Automated System Monitoring and Notification With Swatch. In Proceedings of the 7th USENIX Conference on System Administration, Berkeley, CA, November 1993.
[15] Ley, W. and Ellerman, U. 1996. http://www.cert.dfn.de/eng/logsurf/.
[16] Stearley, J. Towards informatic analysis of syslogs. In Proceedings of the 2004 IEEE International Conference on Cluster Computing, Washington, DC, September 2004.
[17] Teiresias. /Tspd.html
[18] Java Virtual Machine Profiler Interface (JVMPI). /j2se/1.4.2/docs/guide/jvmpi/jvmpi.html
[19] Salah, M. and Mancoridis, S. Toward an environment for comprehending distributed systems. In Proceedings of the 10th Working Conference on Reverse Engineering, November 2003.
[20] Steinle, M., Aberer, K., Girdzijauskas, S., and Lovis, C.
Mapping moving landscapes by mining mountains of logs: novel techniques for dependency model generation. In Proceedings of the 32nd International Conference on Very Large Data Bases, Seoul, Korea, September 2006.
[21] Sheppard, J. W. and Simpson, W. R. Improving the accuracy of diagnostics provided by fault dictionaries. In Proceedings of the 14th IEEE VLSI Test Symposium, 1996.
[22] Oliner, A. and Stearley, J. What Supercomputers Say: A Study of Five System Logs. In Proceedings of the 37th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, Washington, DC, June 2007.
[23] Tan, P.-N., Steinbach, M., and Kumar, V. Introduction to Data Mining. Addison-Wesley, 2005.
[24] Gupta, M. and Subramanian, M. Preprocessor algorithm for network management codebook. In Proceedings of the 1st Workshop on Intrusion Detection and Network Monitoring, Berkeley, CA, 1999.
[25] Hassan, A. E. and Holt, R. C. Replaying development history to assess the effectiveness of change propagation tools. Empirical Software Engineering, September 2006.
[26] Vaarandi, R. SEC - a lightweight event correlation tool. In Proc. of the IEEE Workshop on IP Operations & Management (IPOM 2002), 2002.
[27] Damásio, C. V., Fröhlich, P., Nejdl, W., Pereira, L. M., and Schroeder, M. Using Extended Logic Programming for Alarm-Correlation in Cellular Phone Networks. Applied Intelligence 17(2): 187-202, 2002.
[28] Klinger, S., Yemini, S., Yemini, Y., Ohsie, D., and Stolfo, S. A coding approach to event correlation. In Proceedings of the Fourth International Symposium on Integrated Network Management, London, UK, 1995.


EVENT DISSEMINATION SERVICE FOR PERVASIVE COMPUTING
Sasu Tarkoma (Helsinki Institute for Information Technology, sasu.tarkoma@hiit.fi)

Abstract
Event-based computing is a generic enabler for the next generation of mobile services and applications that need to meet user requirements irrespective of time and location. The event paradigm and publish/subscribe systems allow clients to asynchronously receive information that matches their interests. We outline an event architecture for mobile computing that addresses two key requirements: terminal mobility and user mobility. The system consists of access servers, event channels and a mechanism for locating event channels. The architecture uses filter covering and merging to support high accuracy in event delivery, reduce communication cost, and improve event processing on terminals and servers. Experimental results based on the merging system are also examined.

1. Introduction
Pervasive computing creates new possibilities for applications and services; however, it also presents new requirements for software that need to be taken into account in applications and in the service infrastructure. In order to support the development and deployment of intelligent applications, a number of fundamental enabling middleware services are needed [4]. Two important services are event monitoring and event notification, which are vital for supporting adaptation in applications. Event monitoring and notification are used to realize a number of pervasive applications, such as smart rooms, sensor networks, presence applications, and device tracking and management.

Most research on event systems has focused on event dissemination in the fixed network, where clients are usually stationary and have reliable, low-latency, high-bandwidth communication links. Recently, mobility and wireless communication have been an active topic in many research projects working with event systems, such as Siena [1], JEcho [2], and Rebeca [6]. These systems have focused on fixed-network infrastructure supporting mobile entities. Ad hoc environments are an emerging research area.

Pervasive computing creates new requirements for event systems, such as mobility support, context-awareness, interoperability, and support for heterogeneous devices. The proposed event framework addresses these issues by providing a Web Services-based fixed-network event routing infrastructure that supports user and terminal mobility. User mobility occurs when a user becomes disconnected or changes the device. Terminal mobility occurs when a terminal moves to a new location and connects to a new access point. Mobility transparency is a key requirement for the system, and the middleware should hide the complexity of subscription management due to mobility. The hypothesis is that efficient mobility support may be provided using an explicit-join rendezvous scheme [5]. The proposed scheme uses event channels as meeting points in the server cloud. In addition, covering relations and filter merging are central to minimizing the size of routing tables at the event routers or servers.

The proposed event framework differs from existing work in several ways. The handover protocol uses subscription covering to prevent unnecessary update messages. The filtering system is based on disjunctive attribute filters, which are more expressive than single-predicate or conjunctive attribute filters. Features such as client-side filtering and sessions are not supported in many systems.
Client-side filtering is important in pervasive environments in order to prevent unnecessary signaling over wireless networks. The system groups client subscriptions into sessions, which are manageable units and may be shared between applications.

Features such as mobility support and merging are not visible to pervasive applications, but they make the system more efficient by delivering information where it is needed with the highest possible precision. API features such as support for client-side filtering and session sharing are interesting for pervasive applications. For example, smart sensors may reduce unnecessary network signaling by retrieving or receiving merged sets of filters and sending only matching events to the network.

The rest of the paper is organized as follows. In Section 2 we present the system architecture. In Section 3 we examine filter merging, and in Section 4 we present the current status and future work.

2. System Architecture
The architecture is based on the notion of a domain of event servers that provide the event service for a number of wireless and mobile clients and for other entities that use events. The architectural components of the system are:
• event channels (EC), which are rendezvous points for subscribers and publishers,
• resolution servers (RS), which are responsible for the event channels,
• sessions, which consist of zero or more subscriptions and buffered notifications,
• access servers (AS), which maintain connections with client systems, and
• event domains, which are collections of access and resolution servers.

The architecture aims to meet the requirements of mobile users by supporting bounded delivery times and disconnected operation. User mobility is supported by the session concept, which stores undelivered notifications at access servers. A handover protocol is used to transfer sessions between access servers, which facilitates client mobility and may also be used for load balancing of sessions. Access servers associate notifications with client subscriptions so that client systems do not necessarily have to filter incoming notifications. This is especially useful for low-end client systems, such as mobile phones and lightweight PDAs. Resolution servers are responsible for event channels that contain a more generic set of filters, while the access servers maintain the full set of filters. Server-to-server communication may take advantage of IP multicast, but if it is not available, application-level multicast is realized using point-to-point messaging.

The motivation for the work stems from the high cost of mobility in a hop-by-hop routed event infrastructure. The cost of mobility support in terms of exchanged messages is high, because the source and target servers need to synchronize and update the event routing topology, which may be arbitrary [5]. When non-covered or merged subscriptions are propagated, servers can no longer identify the source server of a subscription, because this information is lost in the covering or merging process, and servers need to use flooding to find the route to the source server [6].

The system supports two notification models: ordered delivery with logging, and decentralized delivery with causal ordering. In ordered delivery, events are routed through the event channel, which provides total ordering of events within an event domain as well as event history and logging features.
In decentralized delivery, the event channel is used to synchronize the subscription status of the access servers, and the access servers forward events to other access servers. Within a single event domain, filtering is done in three phases: first on client systems (which is not mandatory), then at the resolution servers (event channels) or access servers (decentralized delivery), and in the last phase on the destination access servers.

The decentralized notification mechanism does not rely on a centralized dispatcher, which makes it less prone to problems related to scalability and network partitions. This approach synchronizes subscriptions by using a proxy channel, which resides on the access servers. The event channel has the merged subscriptions for the access servers, and upon subscriptions/unsubscriptions it updates the proxy channels. The event channel has a global, high-level view of the subscriptions and advertisements in the channel, and may further optimize and merge the set of filters. For example, using advertisement semantics, subscriptions need to be sent only to those servers that have matching advertisements. Access servers forward events to other access servers based on the proxy channel's routing table. The proxy channel contains the filters for the other access servers, which are needed in order to make the forwarding decision.

Event channels partition the subscription space into orthogonal or near-orthogonal sets of subscriptions. Event channels are located using a built-in directory service, which maps event types to channels. A load balancer component is used to relocate channels and assign channels to resolution servers. A key property of the system is that the lookup cost for an event channel is constant or near constant. This means that subscription management operations may be performed efficiently, and the number of hops required by the operations is bounded both within an event domain and between event domains. Event channels may also be connected on a higher level, for example to form hierarchies, which may affect the publication cost of events, but this does not affect the routing table update cost for a single channel. Access servers update event channels only when a subscription is not already covered by an existing subscription; in addition, the event channel update protocol uses either perfect or imperfect merging.

Filter merging removes unnecessary redundancy and reduces the routing table sizes and processing overhead of event channels. Access servers join the event domain by sending a subscription message to an event channel. An access server may leave the domain by unsubscribing all subscriptions.

2.1. Federation
Federation of event domains may be accomplished by connecting event channels of the same type in different domains. A basic assumption is that in order for an event to be delivered to another domain, an event channel corresponding to the event type must exist in the foreign domain. There are several ways to implement communication between event channels. The basic method is to multicast updates between all channels. Another method is to map or hash the event channel name over a set of backbone servers to find a rendezvous server, as sketched below. Each event channel of the same type in a different domain updates the merged set of subscriptions to this rendezvous server. The server delivers this information to the other channels, and therefore the channels know which events should be forwarded and where.
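As a rough illustration of the hash-based rendezvous lookup, the following Python sketch shows how a channel name can be mapped deterministically onto a backbone server, so that every domain computes the same rendezvous server at constant cost. The server names and the choice of hash function are illustrative assumptions, not part of the proposed system.

import hashlib

# Hypothetical backbone servers; in the proposed architecture these would be
# the servers over which event channel names are hashed.
BACKBONE_SERVERS = ["backbone-1.example", "backbone-2.example", "backbone-3.example"]

def rendezvous_server(channel_name):
    # Map an event channel name to a backbone server. Every domain computes
    # the same mapping locally, so the lookup needs no network round trips.
    digest = hashlib.sha1(channel_name.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(BACKBONE_SERVERS)
    return BACKBONE_SERVERS[index]

# Channels of the same type in different domains all update the same server.
print(rendezvous_server("PresenceChannel"))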
Federated resolution servers may also calculate a shortest-distance spanning tree for event channels of the same type. In this case the event channels would form a more traditional routed event infrastructure with a higher cost for mobility.

2.2. Handover Procedure for Client Mobility
The mobility protocol supports both client-initiated and server-initiated handovers. There are two variants of the protocol: handover within a domain and handover between domains. With client-initiated terminal mobility, the protocol proceeds in both cases as follows: the target server of the mobility initiates the protocol and contacts the source server, and the client subscriptions are sent to the target access server. If the target server has already subscribed a covering set of subscriptions, the event channels are not updated unless the source server has no remaining subscribers for the relocated subscriptions. Buffering needs to be done both at the source server and at the target server in order to avoid false negatives. Handover between domains differs from the domain-specific operation, because the relevant event channels in the two domains, source and target, may need to be updated. In the final phase of the handover, the client session (containing buffered notifications) is moved from the source to the target server and duplicates are removed. Initial results with the handover protocol within an event domain indicate that the handover procedure may benefit from support for filter covering in scenarios where subscriptions are saturated and subscribed by other clients both at the source and target access servers.

3. Filter Merging
We define a notification to be a set of 3-tuples: N = {t1, t2, ..., tm}, where each tuple is defined by <name, type, value>. The set of elementary types is defined as T = {String, Integer, Double, Boolean, Date}. A filter is a set of attribute filters, which are 3-tuples defined by <name, type, filter clause>. Each attribute filter must match a tuple in a notification for the filter to match the notification. The filter clause is a constraint in disjunctive normal form constructed from elementary atomic predicates. Our system supports basic comparison and matching predicates for the different types. Notifications and filters are represented using XML.

We have developed a merging framework that uses filter covering and merging rules for predicates to remove redundancy. The covering algorithm also takes semi-structured events into account by supporting quantification over lists. The framework supports perfect and imperfect merging. Both merging approaches use the same principle for filters that have the same structure²: the conjuncts of each mergeable attribute filter are merged either using merging rules or combined using disjunctions if they are not covered by other conjuncts. A conjunct that is covered by another conjunct may be removed. The algorithms have polynomial time complexity. Imperfect merging simply fuses the set of filters that have the same structure. Perfect merging differs from imperfect merging by a stricter mergeability condition: a filter F1 may be merged with another filter F2 only if it has at least n-1 identical attribute filters, where n is the number of attribute filters in F1 and F2 [3]. The disjunctive formulas guarantee that the merging of the distinctive attribute filter can be performed. Perfectly merged filters have a precision of one, and imperfectly merged filters have a precision in the range [0,1].
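To make the covering and merging rules concrete, here is a toy Python sketch using integer range predicates as the atomic constraints. The actual framework supports disjunctive filter clauses over several elementary types; this sketch only illustrates how a covered conjunct can be dropped and how imperfect merging trades precision for a smaller filter set.

from dataclasses import dataclass

@dataclass(frozen=True)
class Range:
    # One atomic predicate on a single attribute: low <= value <= high.
    # (Integer ranges stand in here for the framework's richer predicate types.)
    low: int
    high: int

    def covers(self, other):
        # F1 covers F2 if everything matching F2 also matches F1.
        return self.low <= other.low and other.high <= self.high

def merge_disjuncts(ranges):
    # Perfect-style simplification: drop every disjunct that is covered by
    # another, so the result matches exactly the same notifications.
    kept = []
    for r in sorted(set(ranges), key=lambda x: (x.low, x.high)):
        if any(k.covers(r) for k in kept):
            continue
        kept = [k for k in kept if not r.covers(k)]
        kept.append(r)
    return kept

def imperfect_merge(ranges):
    # Imperfect merging: fuse all disjuncts into one covering range. The
    # routing table shrinks, but precision may drop below one (false positives).
    return Range(min(r.low for r in ranges), max(r.high for r in ranges))

subs = [Range(0, 10), Range(2, 5), Range(20, 30)]
print(merge_disjuncts(subs))  # Range(2, 5) is covered by Range(0, 10) and removed
print(imperfect_merge(subs))  # Range(0, 30): also matches 11..19, i.e. false positives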
3. Filter Merging

We define a notification to be a set of tuples, N = {t1, t2, ..., tm}, where each tuple is a 3-tuple <name, type, value>. The set of elementary types is T = {String, Integer, Double, Boolean, Date}. A filter is a set of attribute filters, which are 3-tuples <name, type, filter clause>. Each attribute filter must match a tuple in a notification for the filter to match the notification. The filter clause is a constraint in disjunctive normal form constructed from elementary atomic predicates. Our system supports basic comparison and matching predicates for the different types. Notifications and filters are represented using XML.

We have developed a merging framework that uses filter covering and merging rules for predicates to remove redundancy. The covering algorithm also takes semi-structured events into account by supporting quantification over lists. The framework supports perfect and imperfect merging. Both merging approaches use the same principle for filters that have the same structure (i.e., the names and types of the attribute filters are identical, but the constraints may differ): the conjuncts of each mergeable attribute filter are merged using merging rules, or combined using disjunctions if they are not covered by other conjuncts. A conjunct that is covered by another conjunct may be removed. The algorithms have polynomial time complexity. Imperfect merging simply fuses the set of filters that have the same structure. Perfect merging differs from imperfect merging by a stricter mergeability condition: a filter F1 may be merged with another filter F2 only if it has at least n-1 identical attribute filters, where n is the number of attribute filters in F1 and F2 [3]. The disjunctive formulas guarantee that the merging of the differing attribute filter can be performed. The two sketches below illustrate the data model and the perfect-merging condition.
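The following minimal sketch transcribes the data model above into Java. It is illustrative only; in particular, a notification is represented here as a map keyed by attribute name for fast lookup, which assumes unique names within a notification.

```java
import java.util.List;
import java.util.Map;

/** Elementary types of the model: T = {String, Integer, Double, Boolean, Date}. */
enum ElementaryType { STRING, INTEGER, DOUBLE, BOOLEAN, DATE }

/** One <name, type, value> tuple of a notification. */
class Tuple {
    final String name; final ElementaryType type; final Object value;
    Tuple(String name, ElementaryType type, Object value) {
        this.name = name; this.type = type; this.value = value;
    }
}

/** A filter clause: a DNF constraint over a single attribute value. */
interface FilterClause { boolean eval(Object value); }

/** An attribute filter <name, type, filter clause>. */
class AttributeFilter {
    final String name; final ElementaryType type; final FilterClause clause;
    AttributeFilter(String name, ElementaryType type, FilterClause clause) {
        this.name = name; this.type = type; this.clause = clause;
    }
    boolean matches(Tuple t) {
        return name.equals(t.name) && type == t.type && clause.eval(t.value);
    }
}

/** A filter matches a notification iff every attribute filter matches a tuple. */
class Filter {
    final List<AttributeFilter> attributeFilters;
    Filter(List<AttributeFilter> attributeFilters) { this.attributeFilters = attributeFilters; }
    boolean matches(Map<String, Tuple> notification) {
        for (AttributeFilter af : attributeFilters) {
            Tuple t = notification.get(af.name);
            if (t == null || !af.matches(t)) return false;
        }
        return true;
    }
}
```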
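The second sketch checks the perfect-merging precondition [3], reusing the AttributeFilter class from the previous sketch. It assumes that attribute filters are kept in a canonical order, so that two same-structure filters can be compared position by position; a filter pair is perfectly mergeable only if at most one attribute filter differs, and the two differing clauses are then combined into a disjunction, keeping the merged filter exactly equivalent to the pair.

```java
import java.util.List;

class PerfectMerger {
    /** True iff f1 and f2 have n attribute filters each and at least n-1 are identical. */
    static boolean perfectlyMergeable(List<AttributeFilter> f1, List<AttributeFilter> f2) {
        if (f1.size() != f2.size()) return false;      // same structure is required
        int differing = 0;
        for (int i = 0; i < f1.size(); i++) {
            if (!identical(f1.get(i), f2.get(i))) {
                if (++differing > 1) return false;     // more than one difference: not mergeable
            }
        }
        return true;
    }

    /** Identity check; a real implementation would compare canonical clause forms. */
    static boolean identical(AttributeFilter a, AttributeFilter b) {
        return a.name.equals(b.name) && a.type == b.type && a.clause.equals(b.clause);
    }
}
```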
Perfectly merged filters have a precision of one, and imperfectly merged filters have a precision in the range [0,1]. Experimentation with the framework indicates that covering and merging may be used to reduce the size of propagated subscriptions, matching time, and signaling overhead. Merging performance depends on the nature of the constraints, their distributions, and the structure of the filters.

Figure 1 summarizes the initial results for perfect and imperfect merging with simple integer, double, boolean, and string constraints generated from 100 random strings and a number range of 1000. We used 10 schemas, 200 subscriptions, and 20 replications for these results, with subscriptions and schemas generated randomly using the uniform distribution. Perfect merging is especially useful for filters with a few attribute filters. Imperfect merging gives good performance even when the number of attribute filters grows, and fuses the filters provided they have the same structure, at the cost of a number of false positives. For 200 subscriptions, the precision was 100% with 2 tuples, 92%-99% with 3 tuples, and 60%-82% with 4 tuples. The benchmark is based on a number of predefined schemas; if the filter structure is random, the merging schemes may not be able to perform merging.

Figure 1. Initial results using perfect and imperfect merging with 2, 3, and 4 tuples

Results with gzip compression for both individual filters and filter streams are also presented in Figure 1. The results show that gzip gives a good compression ratio for both merged and unmerged filter streams. On the other hand, gzip compression of individual filters is relatively inefficient compared with streaming compression. The major benefit of covering and merging is not only shorter messages, but also more compact routing tables and faster event matching and processing.

4. Current Status and Future Work

A prototype implementation of the proposed system has been developed in the Java language, building on existing technologies such as SOAP and Apache Axis. We have also developed a lightweight version of the client-side API, featuring pub/sub and session management operations, for J2ME MIDP (Java 2 Micro Edition, Mobile Information Device Profile). The implementation has been used to create two demonstration applications: a mobile presence application and a context-sensitive ticker for mobile phones that allows the end-point of a session to be transferred to a dormant instance of the application. The presence application uses the event system to disseminate changes in users' presence information. Load balancing and federation support are currently under development. The framework scales to a number of event servers, but wide-area scalability remains an open issue. Dynamic filtering and merging in both client-server and server-server environments seem to be promising research topics: how to balance the precision and the size of filter sets using different algorithms. These mechanisms may also be used in ad hoc environments on devices that have enough processing power. The proposed event architecture is envisaged as a supporting layer for a compound event detection system that supports the detection of complex event sequences in time. We also plan to compare different mobility models and load balancing techniques for session/channel relocation using simulation and the prototype implementation.

5. References

[1] CAPORUSCIO, M., INVERARDI, P., PELLICCIONE, P., Formal Analysis of Clients Mobility in the Siena Publish/Subscribe Middleware, Technical Report, Department of Computer Science, University of Colorado, October 2002.
[2] CHEN, Y., SCHWAN, K., ZHOU, D., Opportunistic Channels: Mobility-Aware Event Delivery, Middleware 2003, LNCS Vol. 2672, pp. 182-201, Springer-Verlag, 2003.
[3] MÜHL, G., Generic Constraints for Content-based Publish/Subscribe, in: C. Batini et al. (Eds.), CoopIS 2001, LNCS 2172, pp. 211-225, Springer-Verlag, 2001.
[4] RAATIKAINEN, K., CHRISTENSEN, H., NAKAJIMA, T., Application Requirements for Middleware for Mobile and Pervasive Systems, ACM SIGMOBILE Mobile Computing and Communications Review, Volume 6, Issue 4 (October 2002).
[5] TARKOMA, S., KANGASHARJU, J., RAATIKAINEN, K., Client Mobility in Rendezvous-Notify, 2nd Intl. Workshop on Distributed Event-Based Systems (DEBS'03).
[6] ZEIDLER, A., FIEGE, L., Mobility Support with Rebeca, Proceedings of the 23rd International Conference on Distributed Computing Systems (ICDCS) Workshop on Mobile Computing Middleware.
