An Efficient Consistency Algorithm for the Temporal Constraint Satisfaction Problem


An Efficient Algorithm for OWL-S Based Semantic Search in UDDI


An Efficient Algorithm for OWL-S Based Semantic Search in UDDI

Naveen Srinivasan, Massimo Paolucci, and Katia Sycara
Robotics Institute, Carnegie Mellon University, USA
{naveen, paolucci, katia}@

Abstract. The increasing availability of web services demands a discovery mechanism to find services that satisfy our requirements. UDDI provides a web-wide registry of web services, but its lack of an explicit capability representation and its syntax-based search produce results that are coarse in nature. We propose to base the discovery mechanism on OWL-S. OWL-S allows us to semantically describe web services in terms of the capabilities they offer and to perform logic inference to match the capabilities requested with the capabilities offered. We propose an OWL-S/UDDI matchmaker that combines the best of the two technologies. We also implemented it and analyzed its performance.

1 Introduction

Web services have promised to change the Web from a database of static documents to an e-business marketplace. Web service technology is being adopted by business-to-business applications and even by some business-to-consumer applications. The widespread adoption of web services is due to its simplicity and the data interoperability provided by its components, namely XML [7], SOAP [10] and WSDL [11].

With the proliferation of web services, it is becoming increasingly difficult to find a web service that satisfies our requirements. Universal Description, Discovery and Integration [8] (hereafter UDDI) is an industry standard developed to solve the web service discovery problem. UDDI is a registry that allows businesses to describe and register their web services. It also allows businesses to discover services that fit their requirements and to integrate them with their business components.

While UDDI has many features that make it an appealing registry for web services, its discovery mechanism has two crucial limitations. The first limitation is its search mechanism.
In UDDI a web service can describe its functionality using classification schemes such as NAICS and UNSPSC. For example, a domestic air cargo transport service can use the UNSPSC code 78.10.15.01.00 to describe its functionality. Although we can discover web services using the classification mechanism, the search yields coarse results with high precision and recall errors. The second shortcoming of UDDI is its use of XML to describe its data model. UDDI guarantees syntactic interoperability, but it fails to provide a semantic description of its content. Therefore, two identical XML descriptions may have very different meanings, and vice versa; XML data is not machine understandable. XML's lack of explicit semantics proves to be an additional barrier to UDDI's discovery mechanism.

The semantic web initiative [5] addresses the problem of XML's lack of semantics by creating a set of XML-based languages, such as RDF and OWL, which rely on ontologies that explicitly specify the content of the tags. In this paper we adopt OWL-S [3], an OWL [15] based ontology that can be used to describe the capabilities of web services. Like UDDI, OWL-S allows a web service to describe itself using classification schemes. In addition, OWL-S provides a capability-based description mechanism [6]: using it, we can express the functionality of a web service in terms of the inputs and preconditions it requires and the outputs and effects it produces. Capability-based search overcomes the limitations of UDDI and yields better search results.

In this paper we propose an OWL-S/UDDI matchmaker that takes advantage of UDDI's proliferation in the web service technology infrastructure and of OWL-S's explicit capability representation.
In order to achieve this symbiosis we need to store OWL-S profile descriptions inside a UDDI registry, so we provide a mapping between the OWL-S profile and the UDDI data model based on [1]. We also enhance the UDDI registry with an OWL-S matchmaker module that can process the OWL-S descriptions present in UDDI advertisements. The matchmaking component is completely embedded in the UDDI registry. We believe that such an architecture brings these two technologies, which work toward similar goals, together and realizes the co-dependency between them. We also added a capability port to the UDDI registry, which can be used to search for web services based on their capabilities.

The contributions of this paper are: an efficient implementation of the matching algorithm proposed in [2]; an architecture that is tightly integrated with UDDI; an extension of the UDDI registry and API to add capability search functionality; preliminary experiments showing the scalability of our implementation; and an update of the mapping described in [1] to address the latest developments in OWL-S and UDDI.

The rest of the paper is organized as follows: we first describe UDDI and OWL-S, followed by the UDDI search mechanism. In Section 3 we describe the architecture of the OWL-S/UDDI matchmaker and an updated mapping between the OWL-S profile and UDDI. In Section 4 we present our efficient implementation of the matching algorithm, followed by experimental results comparing the performance of our OWL-S/UDDI matchmaker implementation with a standard UDDI registry. Finally, we conclude.

2 UDDI and OWL-S

UDDI [8] is an industrial initiative that aims to create an Internet-wide network of registries of web services, enabling businesses to quickly, easily, and dynamically discover web services and interact with one another. OWL-S is an ontology, based on OWL, for semantically describing web services. OWL-S is characterized by three modules: Service Profile, Process Model and Grounding.
The Service Profile describes the capabilities of web services and is hence crucial in the web service discovery process. For brevity we do not go into the details of OWL-S and UDDI; we assume the reader is familiar with them. For more information see [3] and [8].

2.1 UDDI Search Mechanism

UDDI allows a wide range of searches: services can be searched by name, by location, by business, by bindings or by TModels. For example, it is possible to look for all services that have a WSDL representation, or for services that adhere to the RosettaNet specification. Unfortunately, the search mechanism supported by UDDI is limited to keyword matches and does not support any inference based on the taxonomies referred to by the TModels. For example, a car selling service may describe itself as a "New Car Dealers" service, which is an entry in NAICS, but a search for "Automobile Dealers" services will not identify the car selling service, despite the fact that "New Car Dealers" is a subtype of "Automobile Dealers". Such semantic matching problems can be solved if we use OWL or RDF instead of XML.

The second problem with UDDI is the lack of a powerful search mechanism. Searching by category information is the only way to search for services; however, the search may produce many results that are of no interest. For example, when searching for an "Automobile Dealer", you may not be interested in dealers who do not accept a pre-authorized loan or credit cards as methods of payment. In order to produce more precise search results, the search mechanism should take into account not only the taxonomy information but also the inputs and outputs of web services. A search mechanism that combines semantics-based matching and capability search is far more effective than the current search mechanism. OWL-S provides both semantic matching and capability-based search, and is hence a perfect candidate.

Fig. 1.
Architecture of the OWL-S/UDDI Matchmaker

3 OWL-S/UDDI Matchmaker Architecture

In order to combine OWL-S and UDDI, we need to embed an OWL-S profile description in a UDDI data structure (we discuss this embedding in Section 3.1), and we need to augment the UDDI registry with an OWL-S matchmaking component for processing OWL-S profile information. The architecture of the combined OWL-S/UDDI registry is shown in Fig 1. The matchmaker component in this architecture, unlike the previous version discussed in [2], is tightly coupled with the UDDI registry. By tightly coupled we mean that the matchmaker component relies on the UDDI registry's ports (publish and inquiry) for its operations.

On receiving an advertisement through the publish port, the UDDI component in the OWL-S/UDDI matchmaker processes it like any other UDDI advertisement. If the advertisement contains OWL-S Profile information, it forwards the advertisement to the matchmaking component. The matchmaker component classifies the advertisement based on the semantic information present in it.

A client can use UDDI's inquiry port to access the search functionality provided by the UDDI registry; however, these searches use neither the semantic information present in the advertisement nor the capability description provided by the OWL-S Profile. Hence we extended the UDDI registry by adding a capability port (see Fig 1) to solve this problem. As a consequence, we also extended the UDDI API to access the capability search functionality of the OWL-S/UDDI matchmaker. Using the capability port, we can search for services based on their capability descriptions, i.e. the inputs, outputs, preconditions and effects (IOPEs) of a service. The queries received through the capability port are processed by the matchmaker component, hence the queries are semantically matched based on the OWL-S Profile information.
The query response contains the list of Business Service keys of the advertisements that match the client's query. Apart from the service keys, it also contains useful information about each matched advertisement, such as the matching level and the mapping. The matching level signifies the level of match between the client's request and the matched advertisement. The mapping contains information about the semantic mapping between the request's IOPEs and the advertisement's IOPEs. Both pieces of information can be used for selecting and invoking an appropriate service from the results.

Fig. 2. TModel for Stock Quote Service

3.1 Embedding OWL-S in UDDI

The OWL-S/UDDI registry requires embedding OWL-S profile information inside UDDI advertisements. We adopt the OWL-S/UDDI mapping mechanism described in [1]. The mechanism uses a one-to-one mapping if an OWL-S profile element has a corresponding UDDI element, such as, for example, the contact information in the OWL-S Profile. For OWL-S profile elements with no corresponding UDDI element, it uses a TModel-based mapping. The TModel mapping is loosely based on the WSDL-to-UDDI mapping proposed by the OASIS committee [13]. It defines specialized UDDI TModels for each unmapped element in the OWL-S Profile, such as OWL-S Input, Output, Service Parameter and so on. These specialized TModels are used just like the NAICS TModel is used to describe the category of a web service. Fig 2 illustrates an OWL-S/UDDI mapping of a stock quoting service whose input is a company ticker symbol and whose output is the company's latest quotes. In our work we extended the OWL-S/UDDI mapping to reflect the latest developments in both UDDI and OWL-S. Fig 3 shows the resulting OWL-S/UDDI mapping. Furthermore, we enhanced the UDDI API with the OWL-S/UDDI mapping functionality, so that OWL-S Profiles can be converted into UDDI advertisements and published using the same API.

Fig. 3.
Mapping between OWL-S Profile and UDDI

4 Achieving Matching Performance

A naive implementation of the matching algorithm described in [2] would match the inputs and outputs of the request against the inputs and outputs of all the advertisements in the matchmaker. Clearly, as the number of advertisements in the matchmaker increases, the time taken to process each query also increases. To overcome this limitation, when an advertisement is published we annotate all the ontology concepts in the matchmaker with the degree of match that they have with the concepts in each published advertisement. As a consequence, the effort needed to answer a query is reduced to little more than a lookup. The rationale behind our approach is that since the publishing of an advertisement is a one-time event, it makes sense to spend time processing the advertisement and storing partial results in order to speed up query processing, which may occur many times and whose response time is critical. First we briefly discuss the matching algorithm, then our enhancements to the publish and query phases.

4.1 Matching Algorithm

The matching algorithm we use in our matchmaker is based on the algorithm presented in [2]. The algorithm defines a flexible matching mechanism based on OWL's subsumption mechanism. When a request is submitted, the algorithm finds an appropriate service by first matching the outputs of the request against the outputs of the published advertisements; then, if any advertisement is matched in the output phase, the inputs of the request are matched against the inputs of the advertisements matched during the output phase.

In the matching algorithm, the degree of match between two outputs or two inputs depends on the match between the concepts they represent.
The matching between concepts is not syntactic; it is based on the relation between the concepts in their OWL ontologies. For example, consider an advertisement of a vehicle selling service whose output is specified as Vehicle, and a request whose output is specified as Car. Although there is no exact match between the output of the request and that of the advertisement, given an ontology fragment such as the one shown in Fig 4, the matching algorithm recognizes a match because Vehicle subsumes Car.

The matching algorithm recognizes four degrees of match between two concepts. Let OutR represent the concept of an output of a request, and OutA that of an advertisement. The degree of match between OutR and OutA is as follows.

exact: OutR and OutA are the same, or OutR is an immediate subclass of OutA. For example, given an ontology fragment like Fig 4, the degree of match between a request whose output is Sedan and an advertisement whose output is Car is exact.

plug in: OutA subsumes OutR, so OutA is assumed to encompass OutR; in other words, OutA can be plugged in instead of OutR. For example, we can assume a service selling Vehicles would also sell SUVs. However, this match is inferior to the exact match because there is no guarantee that a Vehicle seller will sell every type of Vehicle.

subsume: OutR subsumes OutA, so the provider may or may not completely satisfy the requester. Hence this match is inferior to the plug in match.

fail: A match fails if there is no subsumption relation between OutA and OutR.

Fig. 4. Vehicle Ontology
Fig. 5. Advertisement Propagation

4.2 Publishing Phase

Publishing a web service is not a time-critical task; therefore we exploit this time to pre-compute the degree of match between the advertisement and possible requests.
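This pre-computation relies on the four degrees of match defined in Section 4.1, which can be sketched as follows. This is a minimal Python sketch, not the paper's implementation: the ontology is reduced to a hypothetical child-to-parent map over the Vehicle fragment of Fig 4, and subsumption is plain ancestor lookup instead of OWL reasoning.

```python
# A minimal sketch (not the paper's implementation) of the four degrees of
# match from Section 4.1, over a hypothetical child -> parent taxonomy.
PARENT = {"Vehicle": "Thing", "Car": "Vehicle",
          "Coupe": "Car", "Sedan": "Car",
          "Luxury": "Sedan", "Mid-Size": "Sedan"}

def ancestors(concept):
    """All concepts that subsume `concept` (excluding itself)."""
    result = []
    while concept in PARENT:
        concept = PARENT[concept]
        result.append(concept)
    return result

def degree_of_match(out_r, out_a):
    """Degree of match between a requested concept and an advertised one."""
    if out_r == out_a or PARENT.get(out_r) == out_a:
        return "exact"      # same concept, or immediate subclass of OutA
    if out_a in ancestors(out_r):
        return "plug-in"    # OutA (non-immediately) subsumes OutR
    if out_r in ancestors(out_a):
        return "subsume"    # OutR subsumes OutA
    return "fail"           # no subsumption relation either way
```

For instance, degree_of_match("Sedan", "Car") is exact, while degree_of_match("Vehicle", "Car") is subsume, mirroring the paper's examples.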
To perform this pre-computation, the matchmaker maintains a taxonomy that represents the subsumption relationships between all the concepts in the ontologies it has loaded. Each concept in this taxonomy is annotated with two lists, output_node_information and input_node_information, that specify to what degree any request pointing to that concept would match each advertisement. For example, output_node_information is represented as a vector [<Adv1, exact>, <Adv2, subsume>, ...], where AdvX points to the advertisement and "subsume" specifies the degree of match. The advantage of the pre-computation is that at query time the matchmaker can extract the correct value with just a lookup, with no need for inference.

In more detail, at publishing time the matchmaker loads the ontologies that are used by the advertisement's inputs and outputs and updates its taxonomy. Then, for each output in the advertisement, the matchmaker performs the following steps.

• The matchmaker locates the node corresponding to the concept that represents the output in the hierarchical structure; let us call this node curr_node. The degree of match between curr_node's concept and the output of the advertisement is exact, so the matchmaker updates its output_node_information. Let us assume Fig 5 represents the hierarchical structure maintained by the matchmaker, and let an output of an advertisement Adv1 be 'Car'. The matchmaker records in the output_node_information of the 'Car' node that it matches Adv1 exactly.

• The matchmaker updates the output_node_information of all the nodes that are immediate children of curr_node to record that the published advertisement matches them exactly, because the algorithm states that the degree of match between an output and the concept's immediate subclasses is also exact.
Following our example, the matchmaker updates the output_node_information of the 'Coupe' node and the 'Sedan' node to record that they match the advertisement Adv1 exactly.

• The matchmaker updates the output_node_information of all the parents of curr_node to record that the degree of match between those nodes and the published advertisement is subsume. Following our example, the degree of match between Adv1's output concept 'Car' and the parent nodes of curr_node, 'Thing' and 'Vehicle', is subsume.

• Similarly, the matchmaker updates the output_node_information of all the deeper child nodes of curr_node to record that the degree of match between those nodes and the published advertisement is plug-in. Following our example, the degree of match between the advertisement's output and the child nodes of curr_node, 'Luxury' and 'Mid-Size', is plug-in.

Similar steps are followed for each input of the published advertisement: the matchmaker updates the input_node_information of the appropriate nodes.

As we can observe, we perform most of the work required by the matching algorithm during the publishing phase itself, thereby spending a considerable amount of time in this phase. Nevertheless, we can show that the time spent in this phase does not depend linearly on the number of concepts present in the data structure, but is on the order of the logarithm of the number of concepts in the tree structure, and hence our implementation is scalable.

Since we use a hierarchical data structure, the time required to insert a node is on the order of log_d N, where d is the degree of the tree.
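The publish-time propagation steps above can be sketched as follows, over a hypothetical child-list encoding of the Fig 5 taxonomy; the node annotations are plain lists of (advertisement key, degree) pairs rather than the paper's actual data structures.

```python
# Sketch of the publish-time propagation (assumed encoding, not the paper's).
from collections import defaultdict

CHILDREN = {"Thing": ["Vehicle"], "Vehicle": ["Car"],
            "Car": ["Coupe", "Sedan"], "Sedan": ["Luxury", "Mid-Size"]}
PARENT = {child: parent for parent, kids in CHILDREN.items() for child in kids}

output_node_information = defaultdict(list)  # concept -> [(adv, degree), ...]

def descendants(node):
    for child in CHILDREN.get(node, []):
        yield child
        yield from descendants(child)

def publish_output(adv, concept):
    """Annotate the taxonomy for one output `concept` of advertisement `adv`."""
    # exact: the concept itself and its immediate subclasses
    output_node_information[concept].append((adv, "exact"))
    for child in CHILDREN.get(concept, []):
        output_node_information[child].append((adv, "exact"))
    # subsume: every ancestor of the concept
    node = concept
    while node in PARENT:
        node = PARENT[node]
        output_node_information[node].append((adv, "subsume"))
    # plug-in: every deeper descendant, below the immediate children
    for child in CHILDREN.get(concept, []):
        for deep in descendants(child):
            output_node_information[deep].append((adv, "plug-in"))

publish_output("Adv1", "Car")
```

After the call, 'Car', 'Coupe' and 'Sedan' record an exact match with Adv1, 'Vehicle' and 'Thing' a subsume match, and 'Luxury' and 'Mid-Size' a plug-in match, exactly as in the running example.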
Similarly, the time required to traverse between any two nodes in a particular branch is also on the order of log_d N. The time required for publishing an advertisement equals the time required to classify the ontologies used by the inputs and outputs of the advertisement, plus the time required to update the hierarchical structure with the newly added concepts, plus the time required to propagate information about the newly added advertisement through the hierarchical structure. In the best case, when no ontology needs to be loaded, the publishing time is just the time required for updating and propagating.

Time_publish = Time_classification + Time_update + Time_propagate    (1)

The time required by Racer for classification depends directly neither on the number of concepts nor on the number of advertisements present in the matchmaker. The time required by the other two operations, update and propagate, is on the order of log_d N. Hence the publishing time does not depend linearly on the number of concepts or advertisements present in the matchmaker.

4.3 Querying Phase

Since most of the matching information is pre-computed in the publishing phase, the matchmaker's query phase is reduced to simple lookups in the hierarchical data structure. We also save time by not allowing a query to load ontologies. Although loading the ontologies required by the query may appear to be a good idea, we do not allow it for three reasons. First, loading an ontology is an expensive process, and the number of ontologies to load is in principle unbounded. Second, if the request requires loading a new ontology, it is very likely that the new concepts will have no relation to the concepts already present in the matchmaker, and therefore the matching process would fail anyway. Third, the ontologies loaded for a query may be used only once, so over time we may end up storing information about many unused concepts.
Note that the decision not to load ontologies at query time introduces incompleteness in the matching process: it is possible that the requested ontology bears some relation to the loaded ontologies, and therefore that the matching process would have succeeded. Still, the likelihood of this event is small, and the cost of loading ontologies is so large that we opted not to load them.

When the matchmaker receives a query, it retrieves the output_node_informations (the sets of advertisements and their degrees of match with the concept) of all the nodes corresponding to the outputs of the request. For example, if the outputs of the request are 'Car' and 'Price', the matchmaker fetches the output_node_informations of Car (ONI1) and of Price (ONI2). The matchmaker then finds the advertisements that are common to the retrieved sets of advertisements, i.e. ONI1 ∩ ONI2. If no intersection is found, the query fails. If common advertisements are found, say ADVSo, they are selected for further processing.

The matchmaker then performs a lookup operation and fetches the input_node_informations (the sets of advertisements and their degrees of match with the concept) of all the nodes corresponding to the inputs of the request. The matchmaker keeps only the input_node_information of the advertisements that were selected during the output processing phase; other advertisements are discarded. For example, let IN1, IN2 and IN3 be the input_node_informations; then only the input_node_information of the advertisements ADVSo ∩ IN1 ∩ IN2 ∩ IN3 is kept. This input_node_information and the match level of each output are used to score the advertisements that were selected during the output processing phase, i.e. ADVSo.

We can see that the time required for processing a query does not depend on the number of advertisements published in the matchmaker. As we also see, the querying phase involves lookups and intersections between the selected advertisements.
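The lookup-and-intersect procedure just described can be sketched as follows, under the assumption that each concept's node information is a map from advertisement key to degree of match; the data shapes and names here are illustrative, not the registry's actual API.

```python
# A minimal sketch of the query phase (assumed data shapes).
def match_query(request_outputs, request_inputs, out_info, in_info):
    # Output phase: intersect the advertisement sets of all requested outputs.
    sets = [set(out_info.get(c, {})) for c in request_outputs]
    candidates = set.intersection(*sets) if sets else set()
    if not candidates:
        return set()  # no advertisement matches every requested output
    # Input phase: keep only candidates that also cover every requested input.
    for c in request_inputs:
        candidates &= set(in_info.get(c, {}))
    return candidates

# Hypothetical pre-computed annotations, as left behind by the publish phase.
out_info = {"Car":   {"Adv1": "exact", "Adv2": "plug-in"},
            "Price": {"Adv1": "exact"}}
in_info  = {"Ticker": {"Adv1": "exact", "Adv2": "exact"}}
```

A request producing 'Car' and 'Price' from a 'Ticker' then selects only Adv1; scoring the survivors by their recorded degrees of match is a further constant-time step per advertisement.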
In our implementation, lookups can be performed in constant time. Hence the time to process a query depends on the time to perform intersections between the selected advertisements.

Time_query = (Num_out + Num_in) * (Time_lookup + Time_intersection)    (2)

Here Num_out and Num_in are the numbers of outputs and inputs of a request, Time_lookup is the time required to extract information about an input or an output, and Time_intersection is the time to compute the intersection between the lists extracted during the lookups. We can see that the computation required for the querying process does not depend on the number of advertisements, and therefore it is scalable.

Table 1. Publishing time without loading ontologies
Fig. 6. Time distribution during publishing of an advertisement

5 Preliminary Experimental Results

We conducted a preliminary evaluation comparing the performance of our OWL-S/UDDI registry and a UDDI registry, to show that adding an OWL-S matchmaker component does not hinder the performance and scalability of a UDDI registry. We extended jUDDI [14], an open source UDDI registry, with the OWL-S matchmaking component. We used RACER [4] to perform OWL inferences. In our experiments, we measured the processing time of an advertisement by calculating the difference between the time the UDDI registry receives an advertisement and the time the result is delivered, to eliminate network latency.

5.1 Performance – Publishing Time

In our first experiment we compared the time taken to publish an advertisement in an OWL-S/UDDI registry and in a UDDI registry. We assumed that the ontologies required by the inputs and outputs of the advertisements are already present in the OWL-S/UDDI registry. The advertisements may have different inputs and outputs, but they are all present in one ontology file, hence the ontology has to be loaded only once; however, our registry still has to load 50 advertisements.
Table 1 shows the average time taken to publish 50 advertisements in a UDDI registry and in an OWL-S/UDDI registry. We can see that the OWL-S/UDDI registry spends around 6-7 times more time; since publishing is a one-time event, we are not concerned about the time taken.

However, we took a closer look at the time taken to publish an advertisement in the OWL-S/UDDI registry. Fig 6 shows the time spent in the different phases of publishing an advertisement. The publishing process has the following five phases:

• UDDI - time required by the UDDI component to process an advertisement.
• Validation - time required by Racer to validate the advertisement.
• Loading - time required by Racer to load the advertisement.
• Updating - time required to extract the ontology tree from Racer.
• Propagating - time required to propagate the input/output information.

As we can see, most of the time (around 70%) is spent loading and validating the ontology, compared to the matchmaking operations.

5.2 Performance – Ontology Loading

In the second experiment, we analyzed the performance of our registry when we load advertisements that require loading new ontologies, hence significantly updating the taxonomy maintained by the matchmaking component. We published 50 advertisements that use different ontologies to describe their inputs and outputs in our OWL-S/UDDI registry and measured the time taken to publish each advertisement. Each of these advertisements has three inputs and one output and requires loading an ontology containing 30 concepts.

Fig. 7. Publishing time for advertisements that require loading new ontologies

In Fig 7, we can see that the time taken to publish an advertisement increases linearly with the number of advertisements, and we can also see that this linear increase is contributed by 'new-concept'. This linear increase of 'new-concept' is attributed to a limitation of the Racer system.
Whenever we load a new ontology into Racer, we have to determine whether we need to update the taxonomy maintained by the matchmaker and, if so, which concepts should be updated. The Racer system does not provide any direct means to obtain this information, so we have to discover it through a series of interactions. The 'new-concept' curve in Fig 7 represents the time required to perform this operation. We could substantially reduce the publishing time if Racer provided this information directly, or if we had direct access to Racer and maintained the taxonomy inside Racer itself. If we ignore the time taken by 'new-concept', the resulting graph shows no such drastic increase in publishing time, concurring with our discussion in Section 4.2.

Table 2. Query processing time

              Time in ms    Standard Deviation
OWL-S/UDDI    1.306         0.54

5.3 Performance – Querying Time

In our final experiment, we calculated the time required to process a query. The queries we used do not load new ontologies into the matchmaker; they use the ontologies that are already present. We used 50 queries, each with three inputs.

Sample English Essay on Algorithm Competitions


I love algorithm competitions. The thrill of solving complex problems under time pressure is just so exciting. It's like a mental workout that keeps my brain sharp and focused.

The best part about algorithm competitions is the sense of accomplishment when you finally crack a tough problem. It's like a victory dance in your head, and it feels amazing. Plus, it's a great way to showcase your problem-solving skills and creativity.

When I first started participating in algorithm competitions, I was intimidated by the level of competition. But as I kept practicing and learning from others, I realized that everyone starts somewhere. It's all about the journey of improvement and growth.

The community aspect of algorithm competitions is also something I really enjoy. It's awesome to meet like-minded people who share the same passion for problem-solving. We exchange tips, tricks, and strategies, and it's a great way to learn from each other.

The adrenaline rush during the competition is unbeatable. The clock is ticking, and you're frantically trying to come up with the most efficient solution. It's like a race against time, and the feeling of solving a problem just before the buzzer goes off is indescribable.

Overall, algorithm competitions have been an incredibly rewarding experience for me. It's not just about the competition itself, but the journey of self-improvement, the friendships made, and the sheer joy of problem-solving. I can't wait for the next competition!

Algorithmic Efficiency in Computational Problems


Algorithmic efficiency refers to the ability of an algorithm to solve a problem in the most efficient manner possible. In computer science, algorithmic efficiency is a key concept that plays a crucial role in the design and analysis of algorithms. It is important to analyze and compare the efficiency of different algorithms in order to determine the best algorithm for a given problem.

There are several factors that contribute to the efficiency of an algorithm, including time complexity, space complexity, and the quality of the algorithm design. Time complexity refers to the amount of time it takes for an algorithm to solve a problem, while space complexity refers to the amount of memory required by an algorithm to solve a problem. The quality of algorithm design includes factors such as the choice of data structures and the way the algorithm is implemented.

One important measure of algorithmic efficiency is big O notation, which provides an upper bound on the growth rate of an algorithm's running time. Big O notation allows us to compare the efficiency of different algorithms and make informed decisions about which algorithm to use for a particular problem. For example, an algorithm with a time complexity of O(n) is considered more efficient than an algorithm with a time complexity of O(n^2) for large input sizes.

In order to improve the efficiency of algorithms, it is important to understand the theory behind algorithm design and analysis. This includes understanding algorithm design techniques such as divide and conquer, dynamic programming, and greedy algorithms. By using these techniques, it is possible to design algorithms that solve problems in a faster and more resource-efficient manner.

In addition to understanding algorithm design techniques, it is also important to consider the specific characteristics of the problem at hand when designing algorithms.
For example, some problems have specific constraints that can be exploited to improve algorithm efficiency. By taking these constraints into account, it is possible to design algorithms that are tailored to a specific problem and solve it more efficiently.

Another key aspect of algorithmic efficiency is the implementation of algorithms. The choice of programming language, data structures, and optimization techniques can all impact the efficiency of an algorithm. By optimizing the implementation of an algorithm, it is possible to reduce its time and space complexity and improve its overall efficiency.

Overall, algorithmic efficiency is a fundamental concept in computer science that plays a crucial role in the design and analysis of algorithms. By understanding the theory behind algorithm design and analysis, and by carefully considering the specific characteristics of the problem at hand, it is possible to design algorithms that are efficient, fast, and resource-efficient. This can lead to significant improvements in the performance of computational problems and the development of more effective software applications.
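To ground the O(n) versus O(n^2) comparison made earlier, here is a small illustrative example of our own (not from the text): two ways of checking a list for duplicates, where the hash-set version trades a little memory for linear expected time.

```python
# Illustrative only: contrasting quadratic and linear growth.
def has_duplicates_quadratic(items):
    """O(n^2): compare every pair of elements."""
    n = len(items)
    return any(items[i] == items[j]
               for i in range(n) for j in range(i + 1, n))

def has_duplicates_linear(items):
    """O(n) expected: remember seen elements in a hash set."""
    seen = set()
    for x in items:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both functions return the same answers, but for a million items the pairwise version performs on the order of 5 * 10^11 comparisons while the hash-set version makes a single pass.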

An efficient algorithm for online management of 2D area of partially reconfigurable FPGAs


Algorithm 1: Basic SLA for finding MERs based on MKE at (i, j).
Figure: (a) FPGA area configuration and its matrix values. (b) MKEs (circles), key elements that are not maximal (lozenges), and valley points (shaded cells) on column 5.
An Efficient Algorithm for Online Management of 2D Area of Partially Reconfigurable FPGAs
Jin Cui and Qingxu Deng, Department of Computer Science, Northeastern University, Shenyang, China; Xiuqiang He and Zonghua Gu, Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, China
Abstract
Partially Runtime-Reconfigurable (PRTR) FPGAs allow hardware tasks to be placed and removed dynamically at runtime. We present an efficient algorithm for finding the complete set of maximal empty rectangles on a 2D PRTR FPGA, which is useful for online placement and scheduling of HW tasks. The algorithm is incremental and only updates the local region affected by each task addition or removal event. We use simulation experiments to evaluate its performance and compare it with related work.
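The paper's incremental algorithm is not reproduced in this excerpt. For intuition only, a naive brute-force enumeration of maximal empty rectangles (MERs) on a small occupancy grid can be sketched as below; it rechecks emptiness from scratch for every candidate and is far slower than the incremental approach the abstract describes (all names and the example grid are invented):

```python
def empty(grid, r1, r2, c1, c2):
    """True if every cell in rows r1..r2, cols c1..c2 is free (0)."""
    return all(grid[r][c] == 0
               for r in range(r1, r2 + 1)
               for c in range(c1, c2 + 1))

def maximal_empty_rects(grid):
    """Brute-force enumeration of maximal empty rectangles.

    A rectangle is maximal if it is empty and cannot be extended by one
    row or column in any direction while staying empty and in bounds.
    """
    R, C = len(grid), len(grid[0])
    mers = set()
    for r1 in range(R):
        for r2 in range(r1, R):
            for c1 in range(C):
                for c2 in range(c1, C):
                    if not empty(grid, r1, r2, c1, c2):
                        continue
                    grow = (
                        (r1 > 0 and empty(grid, r1 - 1, r2, c1, c2)) or
                        (r2 < R - 1 and empty(grid, r1, r2 + 1, c1, c2)) or
                        (c1 > 0 and empty(grid, r1, r2, c1 - 1, c2)) or
                        (c2 < C - 1 and empty(grid, r1, r2, c1, c2 + 1))
                    )
                    if not grow:
                        mers.add((r1, r2, c1, c2))
    return sorted(mers)

# 1 marks an occupied cell (a placed HW task), 0 a free cell.
grid = [
    [0, 0, 1],
    [0, 0, 1],
    [0, 0, 0],
]
print(maximal_empty_rects(grid))  # [(0, 2, 0, 1), (2, 2, 0, 2)]
```

On this grid the two MERs are the 3x2 block of columns 0-1 and the full bottom row; every other empty rectangle can be grown and is therefore not maximal.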

Algorithm Adjustment Tips

As a computer science student, I understand the importance of honing my algorithm adjustment skills.

This skill is crucial in various aspects of the field, from problem-solving to optimization. Being able to fine-tune algorithms can make a significant difference in the efficiency and effectiveness of software programs.

One of the most important aspects of algorithm adjustment is understanding time complexity: analyzing the amount of time an algorithm takes to run as a function of the length of its input. By understanding time complexity, developers can make informed decisions about which algorithms to use based on the size of the input data.
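The idea of running time as a function of input length can also be observed empirically by timing the same routines on growing inputs. A rough sketch (absolute timings depend on the machine; the two functions are invented examples of linear and quadratic behavior):

```python
import timeit

def linear_search(xs, target):
    """O(n): scan until the target is found."""
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def quadratic_pairs(xs):
    """O(n^2): touches every (x, y) pair of the input."""
    return sum(abs(x - y) for x in xs for y in xs)

for n in (100, 200, 400):
    xs = list(range(n))
    t_lin = timeit.timeit(lambda: linear_search(xs, -1), number=50)
    t_quad = timeit.timeit(lambda: quadratic_pairs(xs), number=50)
    print(f"n={n:4d}  linear={t_lin:.4f}s  quadratic={t_quad:.4f}s")
```

Doubling n roughly doubles the linear time but roughly quadruples the quadratic time, matching the O(n) and O(n^2) predictions.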

Essay Material on Algorithms

Algorithms have become an integral part of our daily lives, influencing everything from social media feeds to autonomous vehicles.

These complex mathematical instructions dictate how technology functions, often making decisions for us without our awareness. One of the benefits of algorithms is their ability to sift through vast amounts of data quickly and efficiently. However, this efficiency can also lead to bias and discrimination, as algorithms are often programmed with the biases of their creators. For example, a study by the University of Washington found that online ads for high-paying jobs were shown more frequently to men than to women.

Optimization Algorithms

Optimization algorithms play a crucial role in engineering, data science, finance, and many other fields. They are designed to find the best solution from a set of possible options, often with the goal of maximizing or minimizing a particular objective function. Their effectiveness varies with the specific problem they are applied to, and several factors should guide the choice of the most suitable algorithm for a given task.

One key consideration is the nature of the problem itself. Different problem classes, such as linear programming, nonlinear optimization, or integer programming, may require different algorithms to achieve optimal results. Linear programming problems can be solved effectively with the simplex method or interior-point methods, while nonlinear optimization problems may benefit from gradient descent, genetic algorithms, or simulated annealing.

Another important factor is the computational complexity of the problem. Some algorithms are more frugal with computational resources, while others require more time and memory to reach a solution. In real-world applications, where time and resources are often limited, it is crucial to select an algorithm that delivers results within acceptable time frames and resource constraints.

The presence of constraints in the problem also influences the choice of algorithm: constraints shape the search space and the feasibility of solutions, and certain algorithms are better suited to specific kinds of constraints. For instance, interior-point methods handle constraints well in linear programming, while evolutionary algorithms are known for their ability to handle constraints in nonlinear optimization.

Beyond the technical aspects, the interpretability of the results and the ease of implementation matter as well. In some applications it is essential to understand and interpret the solution the algorithm produces, and the ease of integration with existing systems or software can decide the choice, especially in industrial or commercial settings.

Performance also depends on the quality and characteristics of the data: noisy or incomplete data poses challenges for some algorithms while others are more robust, and scalability to large problems is a crucial consideration in the era of big data.

In conclusion, the selection of an optimization algorithm should be a well-informed decision that weighs the nature of the problem, computational complexity, constraints, interpretability, ease of implementation, and data quality. By carefully considering these factors, practitioners can choose the most suitable algorithm and achieve the best possible outcomes.
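As a concrete instance of one technique named above, a minimal gradient-descent sketch that minimizes a one-dimensional quadratic. The learning rate, step count, and test function are arbitrary illustrative choices:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a differentiable function given its gradient.

    grad: function returning df/dx at x
    x0:   starting point
    lr:   learning rate (step size)
    """
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step against the gradient
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges toward the minimizer x = 3
```

Each step shrinks the distance to the minimizer by a constant factor (here 0.8), so the iterate approaches 3 geometrically; too large a learning rate would instead make the iterates diverge.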

An Efficient Algorithm for Computing an Irreducible Rule Set in Active Databases

Journal of Computer Research and Development (计算机研究与发展), ISSN 1000-1239, CN 11-1777/TP, 43(2): 281-287, 2006. Received 2004-10-12; revised 2005-01-19. Supported by the Natural Science Foundation of Heilongjiang Province (F00-06).

An Efficient Algorithm for Computing an Irreducible Rule Set in Active Database

Hao Zhongxiao 1,2,3 and Xiong Zhongmin 1 (zhmxiong123@yahoo.com.cn)
1 School of Computer and Control, Harbin University of Science and Technology, Harbin 150080
2 College of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001
3 Department of Computer Science and Technology, Qiqihar University, Qiqihar 161006

Abstract: Termination decision in active databases is an important problem and has become a focus for many researchers. Several works suggest proving termination by using triggering and activation graphs at compile time, and computing an irreducible rule set is the key technique. But due to the various conservativeness of the existing approaches, the irreducible rule set computed by them can be reduced again. This defect impairs not only the correctness of termination decision at compile time but also the efficiency of run-time rule analysis. In this paper, the characteristic of nontermination for an activation rule is analyzed in detail, and such concepts as activation path, inhibited activation cycle, and inhibited activation rule are proposed. Based on these concepts, an efficient algorithm for computing an irreducible rule set is presented, which can make the irreducible rule set computed by the existing approaches be reduced again.

Key words: active database; rule analysis; activation path; inhibited activation cycle; irreducible rule set
1 Introduction

After integrating ECA rules, traditional databases gain automatic reactive behavior, which has found applications in integrity maintenance, workflow management, and elsewhere [1]. Because rule sets are unstructured, deciding their termination is a well-known undecidable problem [2]. Existing methods address it at compile time using triggering graphs (TG) and activation graphs (AG); a key technique is computing an irreducible subset of the active rule set [1,3]. The algorithms of [1] and [3], however, do not account for the fact that an activation rule must not only be able to execute infinitely on its own but must also do so in synchrony with the rules it activates. As a result, the "irreducible" rule sets they compute can in fact be reduced further, which hurts both the accuracy of compile-time termination decision and the efficiency of run-time rule analysis. Later work sought more accurate termination analyses [4,5] but did not consider this point either. Termination is an important characteristic of a well-behaved active rule set [6], and the difficulty of guaranteeing it has become a bottleneck for active databases [7]. This paper introduces the concepts of activation path, inhibited activation cycle, and inhibited activation rule, and on that basis gives an efficient algorithm for computing an irreducible rule set.

Example 1. A rule set contains two triggering cycles R1 = {r1, r2, r3} and R2 = {r4, r5, r6} and one activation cycle Rc1 = {r1, r2, r4, r3} (triggering edges drawn solid, activation edges dashed in Fig. 1). The reduction algorithm of [1] removes nothing. Yet Rc1 is contained in neither R1 nor R2; under the immediate-and-recursive execution mode with all-sequential scheduling [8], R1 and R2 cannot run in parallel, so when R1 is triggered, Rc1 cannot loop forever because r4 is never triggered. Hence r4, and therefore r3, executes only finitely often and R1 terminates; by a symmetric argument so does R2. The whole set {r1, ..., r6} therefore terminates and should reduce to the empty set.

2 Basic concepts

Definition 1 [1]. The transition value associated with a particular execution of an active rule is the instantaneous value of the operations (insert, delete, or update) the rule monitors.

Definition 2 [1]. An active rule is a triple: (1) a set of monitored data-manipulation events; (2) a condition, a predicate over the current database state and the rule's transition value; (3) an action, a set of data-manipulation operations.

Definition 3 [1]. For a rule set R, the triggering graph TG is a directed graph {V, E} with one node per rule; an edge <rj, rk> means that rj generates an event that triggers rk.

Definition 4 [1]. The activation graph AG is a directed graph {V, E} with one node per rule; an edge <rj, rk> with j ≠ k means the action of rj may change the truth value of rk's condition from false to true, and a self-loop <rj, rj> means rj's condition remains true after its own action.

Definition 5 [1]. A self-disactivating rule is one whose action makes its own condition false. Definition 6 [1]. A function-free rule introduces no new symbols into the database. Definition 7. A path of the form (r0, ..., rn, r0) is a triggering cycle in TG and an activation cycle in AG.

Two assumptions are made throughout: (1) all rules are self-disactivating and function-free, which guarantees that the AG provides useful information when analyzing termination of triggering cycles [1]; (2) the execution mode is immediate and recursive with all-sequential scheduling, the semantics supported by SQL-3 and most research prototypes [8], under which all triggering cycles execute sequentially.

Algorithm 1 (rule reduction [1]). Input: rule set R with its TG and AG; output: irreducible set R. Repeatedly remove from R every rule whose in-degree in TG or in AG is zero, decrementing the in-degrees of its TG and AG successors as it is removed, until no such rule remains. Definition 8 [1]: an irreducible rule set I is the subset of R returned by this algorithm. Definition 9 [1]: a subset S of I is a nonterminating rule set if the algorithm removes no rule from S. Definition 10 [1]: rule rj is directly reachable from R if R contains rules rc and rd (not necessarily distinct) with <rc, rj> in AG and <rd, rj> in TG; rj is reachable from R if it is directly reachable or reachable through rules that are.

3 Definitions, theorems, and proofs

Theorem 1 [1]. Every nonterminating rule set contains at least one cycle present in both TG and AG, and every rule in it is reachable from some rule of such a cycle.

Definitions 11-14. Rule rj is triggering-reachable (resp. activation-reachable) from R if some rule of R, or a rule itself triggering-reachable (resp. activation-reachable) from R, has a TG (resp. AG) edge to rj; ri is a triggering (resp. activation) rule of rj if <ri, rj> is an edge of TG (resp. AG).

Theorem 2. In an irreducible rule set R, a rule r contained in no activation cycle and activation-reachable from no activation cycle executes only finitely often.

Definition 15. For a rule r0 in R with activation rule ra, the activation path Pa associated with the edge <ra, r0> is: (1) the activation cycle Ra containing ra, if one exists; otherwise (2) an activation cycle Ra from which ra is activation-reachable, together with the shortest such path containing no rule of Ra. Definition 16: the activation-path set of r consists of the activation paths of all its activation rules. Corollary 1: every path in AG that can activate r infinitely often lies in r's activation-path set.

Theorems 3-5. Let r0 be any rule of a triggering cycle RT in an irreducible rule set R. If for r0 one can always find an activation path Pa of one of its activation rules such that either Pa is contained in RT, or every rule of Pa outside RT is triggering-reachable from RT (regardless of whether those rules lie in no triggering cycle, a terminating one, or a nonterminating one), then RT can be nonterminating.

Theorem 6. If every activation path of some r0 in RT contains rules that are not triggering-reachable from RT but are contained in, or triggering-reachable from, another triggering cycle R'T, then: (1) no rule of R'T is triggering-reachable from RT; (2) if R'T terminates, so does RT; (3) if RT does not terminate, neither does R'T; (4) RT can therefore always be regarded as terminating, which makes the termination decision for R more accurate and efficient — at design time, ensuring termination of R'T simultaneously ensures termination of RT.

Corollary 2. RT can be nonterminating if and only if for every rule r0 of RT there is an activation path Pa all of whose rules are contained in or triggering-reachable from RT; otherwise RT must terminate.

Theorem 7. An activation cycle RA in R can be nonterminating if and only if every rule of RA is contained in or triggering-reachable from a single triggering cycle RT; otherwise RA must terminate.

Theorem 8. A rule r contained in no activation cycle but activation-reachable from an activation cycle RA may execute infinitely often only if r and every rule of RA are contained in or triggering-reachable from one triggering cycle RT; otherwise r executes only finitely often.

Definitions 17-19. In an irreducible rule set R, a terminating activation cycle is an inhibited activation cycle; an activation rule that can execute only finitely often is an inhibited activation rule; an activation path containing no inhibited activation cycle and no inhibited activation rule is an effective activation path.

4 The new algorithm for computing an irreducible rule set

Let R0 be an arbitrary rule set with its TG and AG, and let R be the result of Algorithm 1. When R is nonempty, the new algorithm reduces it further. Let ST be the set of triggering cycles and SA the set of activation cycles of R.

Algorithm 2 (refined rule reduction). Input: rule set R with ST and SA; output: irreducible set R.
(1) Mark as inhibited every activation cycle RA in SA for which no triggering cycle RT in ST contains or triggering-reaches every rule of RA.
(2) Mark as inhibited every rule r of R not contained in any activation cycle for which there is no activation cycle RA in SA that activation-reaches r with r and every rule of RA contained in or triggering-reachable from one triggering cycle of ST.
(3) For every triggering cycle RT in ST and every rule r in RT: if r's activation-path set contains no effective activation path whose rules are all contained in or triggering-reachable from RT, set r's AG in-degree to 0. Then invoke Algorithm 1.
(4) Return R.

Theorem 9. Algorithm 2 terminates and is correct; its time complexity is O(p·m·n + p^2), where p is the number of rules, m the number of triggering cycles, and n the number of activation cycles. Termination follows because the numbers of rules and cycles are finite; correctness follows from Theorem 7 with Definition 17 (step 1), Theorem 8 with Definition 18 (step 2), and Corollary 2 (step 3). Step (3) dominates the cost: in the worst case the p rules have at most p·n activation paths and all m triggering cycles must be checked, giving O(p·m·n) plus the O(p^2) cost of Algorithm 1 [1].

5 Conclusion

The proposed algorithm obtains more accurate results than the method of [1]. Computing the irreducible rule set accurately and efficiently is decisive for compile-time static termination analysis based on triggering and activation graphs, and since the irreducible rule set is equally key to run-time dynamic analysis [1], the algorithm is significant for the termination decision of active rule sets.

References
[1] E. Baralis, S. Ceri, et al. Compile-time and runtime analysis of active behaviors. IEEE Trans. Knowledge and Data Engineering, 1998, 10(3): 353-370
[2] J. Bailey, Guozhu Dong, et al. On the decidability of the termination problem of active database systems. Journal of Computer and System Sciences, 2004, 311(1-3): 389-437
[3] E. Baralis, S. Ceri, et al. Improved rule analysis by means of triggering and activation graphs. In: T. Sellis, ed. Proc. Second Workshop on Rules in Database Systems, LNCS 985. Berlin: Springer, 1995. 165-181
[4] E. Baralis, J. Widom. An algebraic approach to static analysis of active database rules. ACM Trans. Database Systems, 2000, 25(3): 269-332
[5] S. Comai, L. Tanca. Termination and confluence by rule prioritization. IEEE Trans. Knowledge and Data Engineering, 2003, 15(2): 257-270
[6] A. Aiken, J. Widom, J. Hellerstein. Behavior of database production rules: Termination, confluence, and observable determinism. In: Proc. ACM SIGMOD Int'l Conf. Management of Data. New York: ACM Press, 1992. 59-68
[7] K. R. Dittrich, H. Fritschi, S. Gatziu, et al. SAMOS in hindsight: Experiences in building an active object-oriented DBMS. Information Systems, 2003, 28(5): 369-392
[8] Norman W. Paton, et al. Active database systems. ACM Computing Surveys, 1999, 31(1): 63-103

Hao Zhongxiao, born in 1940. Professor and Ph.D. supervisor. His main research interests focus on database theory and application. Xiong Zhongmin, born in 1972. Ph.D. candidate. His main research interests include active databases and object-oriented databases.

Research Background. This work is supported by the Natural Science Foundation of Heilongjiang Province of China under grant No. F00-06. Termination is one of the characteristics of an active rule set with desirable behavior, but most research on active behaviors has focused on compile-time static analysis, and very little on run-time dynamic analysis. Computing the irreducible rule set of an active rule set is the key to both static and dynamic rule analysis based on triggering and activation graphs. This paper presents a novel algorithm that can make the irreducible rule set computed by the existing approaches be completely reduced again.
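Algorithm 1, the reduction step that Algorithm 2 invokes, can be sketched in Python. This version recomputes in-degrees to a fixpoint rather than using the paper's worklist, but it removes the same rules (those whose in-degree in TG or AG is, or becomes, zero); the example rule set is invented for illustration:

```python
def reduce_rules(rules, tg_edges, ag_edges):
    """Rule reduction: repeatedly delete every rule with no incoming
    triggering edge or no incoming activation edge; the survivors form
    the irreducible rule set.

    rules:    iterable of rule names
    tg_edges: triggering-graph edges as (src, dst) pairs
    ag_edges: activation-graph edges as (src, dst) pairs (self-loops allowed)
    """
    rules = set(rules)

    def in_degrees(edges):
        # Count only edges whose endpoints both survive so far.
        deg = {r: 0 for r in rules}
        for s, d in edges:
            if s in rules and d in rules:
                deg[d] += 1
        return deg

    changed = True
    while changed:
        t_deg, a_deg = in_degrees(tg_edges), in_degrees(ag_edges)
        dead = {r for r in rules if t_deg[r] == 0 or a_deg[r] == 0}
        changed = bool(dead)
        rules -= dead
    return rules

# Rules a, b form a triggering cycle and self-activate; c is triggered
# by a but has no incoming activation edge, so it is removed.
survivors = reduce_rules({"a", "b", "c"},
                         tg_edges=[("a", "b"), ("b", "a"), ("a", "c")],
                         ag_edges=[("a", "a"), ("b", "b")])
print(sorted(survivors))  # ['a', 'b']
```

Because removals can zero out the in-degree of further rules, the loop must run to a fixpoint; the worklist of the original Algorithm 1 achieves the same cascade in O(p^2).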

An Efficient Consistency Algorithm for the Temporal Constraint Satisfaction Problem
Berthe Y. Choueiry a,∗ and Lin Xu a
a Constraint Systems Laboratory, Department of Computer Science & Engineering, University of Nebraska-Lincoln. E-mail: {choueiry,lxu}@
Fig. 2. A TCSP: the edge e_{i,j} between time points t_i and t_j is labeled with the interval set I_{i,j} = {[3, 5], [6, 9], ...}.
The superscript k of an interval l^k_{ij} denotes the position of the interval in the domain. This ordering scheme is important for the specification of our algorithm. Solving a temporal constraint network corresponds to assigning a value to each time point such that all the constraints are simultaneously satisfied. Finding the equivalent minimal network can be accomplished by removing from the edge labels the values that do not participate in any solution. Solving an STP and finding its minimal network can be done in polynomial time. For example, the Floyd-Warshall algorithm for computing all-pairs shortest paths computes the minimal network in O(n^3), where n is the number of nodes, or time points, in the network. Solving the TCSP is NP-complete and finding its minimal network is NP-hard [5].
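The O(n^3) computation mentioned above can be sketched with Floyd-Warshall on the STP's distance graph, where a constraint t_j - t_i ∈ [a, b] contributes the edges d[i][j] = b and d[j][i] = -a. This is a generic textbook sketch with invented data, not the authors' implementation:

```python
INF = float("inf")

def stp_minimal_network(n, edges):
    """Tighten an STP via all-pairs shortest paths (Floyd-Warshall, O(n^3)).

    n:     number of time points 0..n-1
    edges: dict (i, j) -> w encoding the constraint t_j - t_i <= w
    Returns (d, consistent) where d[i][j] is the strongest implied
    bound t_j - t_i <= d[i][j]; a negative diagonal entry would mean
    a negative cycle, i.e. an inconsistent STP.
    """
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for (i, j), w in edges.items():
        d[i][j] = min(d[i][j], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    consistent = all(d[i][i] >= 0 for i in range(n))
    return d, consistent

# t1 - t0 in [3, 5] and t2 - t1 in [1, 2]:
d, ok = stp_minimal_network(3, {(0, 1): 5, (1, 0): -3,
                                (1, 2): 2, (2, 1): -1})
print(ok, d[0][2], -d[2][0])  # implied constraint: t2 - t0 in [4, 7]
```

Reading the minimal network off d: the interval for t_j - t_i is [-d[j][i], d[i][j]], which is exactly the label tightening described in the text.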
* Corresponding author: B.Y. Choueiry, 256 Avery Hall, Lincoln, NE, 68588-0115, USA.
AI Communications. ISSN 0921-7126, IOS Press. All rights reserved.

1. Introduction

In this paper we study constraint propagation in networks of metric temporal constraints, which are an essential tool for building systems that reason about time and actions. These networks model events and their relationships (as distances between events), and provide the means to specify the temporal elements of an episode with a temporal extent. Examples of such an episode are as diverse as a story, a discourse, a manufacturing process, the measurements executed by the Hubble space telescope, the activities of a robot, or the scheduling of a summer vacation. The ability to efficiently process temporal networks is a prerequisite for enabling computers to support human users in decision making and to automate the planning and execution of complex engineering tasks.

This paper describes an efficient consistency algorithm for the meta-CSP modeling the Temporal Constraint Satisfaction Problem (TCSP) [5]. A major research effort in the Constraint Processing community is the development of efficient filtering algorithms. These algorithms propagate the constraints in a problem in order to reduce its size and enhance the performance of the algorithms used for solving it. Although particularly simple at the conceptual level, the basic mechanism for ensuring arc-consistency (AC) [12,7,8] has witnessed several refinements [9,6,1,2] and remains the subject of intensive research [3,15]. The unusual attention to a mechanism executable in polynomial time is justified by the fact that this simple mechanism is at the heart of many procedures for solving CSPs. To the best of our knowledge, the only work reported in the literature on applying consistency algorithms to the meta-CSP is a study by Schwalb and Dechter [11], which we discuss in Section 3.3. Shortly stated, the above study changes the end points of the temporal intervals. In contrast, our approach considers each interval as an atomic value, which is either kept or removed, but whose extent is never modified. In this paper we argue that arc-consistency of the meta-CSP is NP-hard. We define a weaker consistency property of the meta-CSP and propose an efficient algorithm for achieving it. This algorithm, which guarantees the weaker property but not full arc-consistency, drastically reduces the size of the meta-CSP and enhances the search process used for solving it. While the basic idea behind our filtering algorithm is simple, the value of our contribution lies in the design of polynomial-time and space data structures, reminiscent of AC-4
[9] and AC-2001 [3] for general CSPs, that make the algorithm particularly efficient and perhaps even optimal for achieving this consistency property. Note that optimality still needs to be formally established. This paper is structured as follows. Section 2 introduces our notation, the task we address, and its complexity. Section 3 introduces the consistency property of the meta-CSP and the algorithm for achieving it. Section 4 describes our experiments and observations. Finally, Section 5 concludes this paper.
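The authors' meta-CSP algorithm and its data structures are not given in this excerpt. The underlying arc-consistency propagation it refines can be illustrated with a generic AC-3-style filter for binary CSPs; this is a textbook sketch with invented variables and constraints, not the paper's method:

```python
from collections import deque

def ac3(domains, constraints):
    """Generic AC-3 arc-consistency filtering for a binary CSP.

    domains:     dict var -> set of values (pruned in place)
    constraints: dict (x, y) -> predicate(vx, vy); arcs are directed,
                 so both (x, y) and (y, x) should be present.
    Returns False if some domain is wiped out, True otherwise.
    """
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Remove values of x that have no support in y's domain.
        unsupported = {vx for vx in domains[x]
                       if not any(pred(vx, vy) for vy in domains[y])}
        if unsupported:
            domains[x] -= unsupported
            if not domains[x]:
                return False  # domain wipe-out: inconsistent
            # D(x) shrank, so revisit every arc pointing at x.
            queue.extend((z, w) for (z, w) in constraints if w == x)
    return True

doms = {"a": {1, 2, 3}, "b": {1, 2, 3}}
cons = {("a", "b"): lambda va, vb: va < vb,
        ("b", "a"): lambda vb, va: va < vb}
ac3(doms, cons)
print(doms)  # a loses 3 (no larger b), b loses 1 (no smaller a)
```

In the paper's setting each meta-CSP value is an entire interval that is kept or removed atomically, so a filter of this shape prunes interval labels rather than interval end points, in contrast to the Schwalb-Dechter approach discussed above.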