An Efficient Algorithm for OWL-S Based Semantic Search in UDDI

Naveen Srinivasan, Massimo Paolucci, and Katia Sycara
Robotics Institute, Carnegie Mellon University, USA
{naveen, paolucci, katia}@

Abstract. The increasing availability of web services demands a discovery mechanism to find services that satisfy our requirements. UDDI provides a web-wide registry of web services, but its lack of an explicit capability representation and its syntax-based search produce results that are coarse in nature. We propose to base the discovery mechanism on OWL-S. OWL-S allows us to semantically describe web services in terms of the capabilities they offer and to perform logic inference to match the capabilities requested with the capabilities offered. We propose an OWL-S/UDDI matchmaker that combines the best of the two technologies. We have also implemented the matchmaker and analyzed its performance.

1 Introduction

Web services promise to change the Web from a database of static documents into an e-business marketplace. Web service technology is being adopted in business-to-business applications and even in some business-to-consumer applications. The widespread adoption of web services is due to their simplicity and to the data interoperability provided by their components, namely XML [7], SOAP [10] and WSDL [11].

With the proliferation of web services, it is becoming increasingly difficult to find a web service that satisfies our requirements. Universal Description, Discovery and Integration [8] (hereafter UDDI) is an industry standard developed to solve the web service discovery problem. UDDI is a registry that allows businesses to describe and register their web services. It also allows businesses to discover services that fit their requirements and to integrate them with their business components.

While UDDI has many features that make it an appealing registry for web services, its discovery mechanism has two crucial limitations. The first limitation is its search mechanism.
In UDDI a web service can describe its functionality using classification schemes such as NAICS and UNSPSC. For example, a Domestic Air Cargo Transport Service can use the UNSPSC code 78.10.15.01.00 to describe its functionality. Although we can discover web services using this classification mechanism, the search yields coarse results with high precision and recall errors. The second shortcoming of UDDI is its use of XML to describe its data model. UDDI guarantees syntactic interoperability, but it fails to provide a semantic description of its content. Therefore, two identical XML descriptions may have very different meanings, and vice versa; XML data is not machine understandable. XML's lack of explicit semantics is an additional barrier to UDDI's discovery mechanism.

The semantic web initiative [5] addresses the problem of XML's lack of semantics by creating a set of XML-based languages, such as RDF and OWL, which rely on ontologies that explicitly specify the content of the tags. In this paper, we adopt OWL-S [3], an OWL [15] based ontology that can be used to describe the capabilities of web services. Like UDDI, OWL-S allows a web service to describe itself using classification schemes. In addition, OWL-S provides a capability-based description mechanism [6]: using it, we can express the functionality of a web service in terms of the inputs and preconditions it requires and the outputs and effects it produces. Capability-based search overcomes the limitations of UDDI and yields better search results.

In this paper we propose an OWL-S/UDDI matchmaker that takes advantage of UDDI's proliferation in the web service technology infrastructure and of OWL-S's explicit capability representation.
In order to achieve this symbiosis we need to store the OWL-S profile descriptions inside a UDDI registry; hence we provide a mapping between the OWL-S profile and the UDDI data model based on [1]. We also enhance the UDDI registry with a matchmaker module that can process the OWL-S descriptions present in UDDI advertisements. The matchmaking component is completely embedded in the UDDI registry. We believe that such an architecture brings these two technologies, which work toward similar goals, together and realizes the co-dependency between them. We also added a capability port to the UDDI registry, which can be used to search for web services based on their capabilities.

The contributions of this paper are: an efficient implementation of the matching algorithm proposed in [2]; an architecture that is tightly integrated with UDDI; an extension of the UDDI registry and its API to add capability search functionality; preliminary experiments showing the scalability of our implementation; and an update of the mapping described in [1] to address the latest developments in OWL-S and UDDI.

The rest of the paper is organized as follows: we first describe UDDI and OWL-S, followed by the UDDI search mechanism. In Section 3 we describe the architecture of the OWL-S/UDDI matchmaker and an updated mapping between the OWL-S profile and UDDI. In Section 4 we present our efficient implementation of the matching algorithm, followed by experimental results comparing the performance of our OWL-S/UDDI matchmaker implementation with that of a standard UDDI registry. Finally, we conclude.

2 UDDI and OWL-S

UDDI [8] is an industrial initiative aimed at creating an Internet-wide network of registries of web services, enabling businesses to quickly, easily, and dynamically discover web services and interact with one another. OWL-S is an ontology, based on OWL, for semantically describing web services. OWL-S is characterized by three modules: Service Profile, Process Model and Grounding.
The Service Profile describes the capabilities of web services and is hence crucial in the web service discovery process. For the sake of brevity we do not go into the details of OWL-S and UDDI; we assume that the reader is familiar with them. For more information see [3] and [8].

2.1 UDDI Search Mechanism

UDDI allows a wide range of searches: services can be searched by name, by location, by business, by bindings or by TModels. For example, it is possible to look for all services that have a WSDL representation, or for services that adhere to the RosettaNet specification. Unfortunately, the search mechanism supported by UDDI is limited to keyword matches and does not support any inference based on the taxonomies referred to by the TModels. For example, a car selling service may describe itself as "New Car Dealers", which is an entry in NAICS, but a search for "Automobile Dealers" services will not identify the car selling service, despite the fact that "New Car Dealers" is a subtype of "Automobile Dealers". Such semantic matching problems can be solved if we use OWL or RDF instead of XML.

The second problem with UDDI is the lack of a powerful search mechanism. Searching by category information is the only way to search for services; however, such a search may produce many results that are of no interest. For example, when searching for "Automobile Dealers", you may not be interested in dealers who do not accept a pre-authorized loan or credit cards as methods of payment. In order to produce more precise search results, the search mechanism should take into account not only the taxonomy information but also the inputs and outputs of web services. A search mechanism that combines semantics-based matching and capability search is far more effective than the current one. OWL-S provides both semantic matching and capability-based search, and is hence a perfect candidate.

Fig. 1.
Architecture of the OWL-S/UDDI Matchmaker

3 OWL-S/UDDI Matchmaker Architecture

In order to combine OWL-S and UDDI, we need to embed OWL-S profile descriptions in UDDI data structures (we discuss this embedding in Section 3.1), and we need to augment the UDDI registry with an OWL-S matchmaking component for processing OWL-S profile information. The architecture of the combined OWL-S/UDDI registry is shown in Fig. 1. The matchmaker component in this architecture, unlike the previous version discussed in [2], is tightly coupled with the UDDI registry. By tightly coupled we mean that the matchmaker component relies on the UDDI registry's ports (publish and inquiry) for its operations.

On receiving an advertisement through the publish port, the UDDI component of the OWL-S/UDDI matchmaker processes it like any other UDDI advertisement. If the advertisement contains OWL-S profile information, it forwards the advertisement to the matchmaking component, which classifies the advertisement based on the semantic information it contains.

A client can use the UDDI inquiry port to access the search functionality provided by the UDDI registry; however, these searches use neither the semantic information present in the advertisement nor the capability description provided by the OWL-S profile. Hence we extended the UDDI registry by adding a capability port (see Fig. 1) to solve this problem. As a consequence, we also extended the UDDI API to access the capability search functionality of the OWL-S/UDDI matchmaker. Using the capability port, we can search for services based on their capability descriptions, i.e. the inputs, outputs, preconditions and effects (IOPEs) of a service. The queries received through the capability port are processed by the matchmaker component, hence the queries are semantically matched based on the OWL-S profile information.
The query response contains a list of Business Service keys of the advertisements that match the client's query. Apart from the service keys, it also contains useful information about each matched advertisement, namely the matching level and the mapping. The matching level signifies the level of match between the client's request and the matched advertisement. The mapping contains information about the semantic mapping between the request's IOPEs and the advertisement's IOPEs. Both pieces of information can be used for selecting and invoking an appropriate service from the results.

Fig. 2. TModel for Stock Quote Service

3.1 Embedding OWL-S in UDDI

The OWL-S/UDDI registry requires the embedding of OWL-S profile information inside UDDI advertisements. We adopt the OWL-S/UDDI mapping mechanism described in [1]. The mechanism uses a one-to-one mapping if an OWL-S profile element has a corresponding UDDI element, such as, for example, the contact information in the OWL-S profile. For OWL-S profile elements with no corresponding UDDI elements, it uses a TModel-based mapping. The TModel mapping is loosely based on the WSDL-to-UDDI mapping proposed by the OASIS committee [13]. It defines specialized UDDI TModels for each unmapped element in the OWL-S profile, such as OWL-S Input, Output, Service Parameter and so on. These specialized TModels are used just like the NAICS TModel is used to describe the category of a web service. Fig. 2 illustrates the OWL-S/UDDI mapping of a stock quoting service whose input is a company ticker symbol and whose output is the company's latest quotes.

In our work we extended the OWL-S/UDDI mapping to reflect the latest developments in both UDDI and OWL-S. Fig. 3 shows the resulting OWL-S/UDDI mapping. Furthermore, we enhanced the UDDI API with the OWL-S/UDDI mapping functionality, so that OWL-S profiles can be converted into UDDI advertisements and published using the same API.

Fig. 3.
Mapping between OWL-S Profile and UDDI

4 Achieving Matching Performance

A naive implementation of the matching algorithm described in [2] would match the inputs and outputs of the request against the inputs and outputs of all the advertisements in the matchmaker. Clearly, as the number of advertisements in the matchmaker increases, the time taken to process each query also increases. To overcome this limitation, when an advertisement is published we annotate all the ontology concepts in the matchmaker with the degree of match that they have with the concepts in each published advertisement. As a consequence, the effort needed to answer a query is reduced to little more than a lookup. The rationale behind our approach is that since the publishing of an advertisement is a one-time event, it makes sense to spend time processing the advertisement and storing partial results in order to speed up query processing, which may occur many times and whose response time is critical. First we briefly discuss the matching algorithm, then our enhancements to the publish and query phases.

4.1 Matching Algorithm

The matching algorithm we use in our matchmaker is based on the algorithm presented in [2]. The algorithm defines a flexible matching mechanism based on OWL's subsumption mechanism. When a request is submitted, the algorithm finds an appropriate service by first matching the outputs of the request against the outputs of the published advertisements and then, if any advertisement is matched in the output phase, matching the inputs of the request against the inputs of the advertisements matched during the output phase.

In the matching algorithm, the degree of match between two outputs or two inputs depends on the match between the concepts they represent.
The matching between concepts is not syntactic; it is based on the relation between these concepts in their OWL ontologies. For example, consider an advertisement of a vehicle selling service whose output is specified as Vehicle, and a request whose output is specified as Car. Although there is no exact match between the output of the request and that of the advertisement, given an ontology fragment as shown in Fig. 4, the matching algorithm recognizes a match because Vehicle subsumes Car.

The matching algorithm recognizes four degrees of match between two concepts. Let OutR represent the concept of an output of a request, and OutA that of an advertisement. The degree of match between OutR and OutA is as follows.

exact: OutR and OutA are the same, or OutR is an immediate subclass of OutA. For example, given an ontology fragment like Fig. 4, the degree of match between a request whose output is Sedan and an advertisement whose output is Car is exact.

plug in: OutA subsumes OutR, so OutA is assumed to encompass OutR; in other words, OutA can be plugged in instead of OutR. For example, we can assume that a service selling Vehicles would also sell SUVs. However, this match is inferior to the exact match because there is no guarantee that a Vehicle seller will sell every type of Vehicle.

subsume: OutR subsumes OutA, so the provider may or may not completely satisfy the requester. Hence this match is inferior to the plug in match.

fail: There is no subsumption relation between OutA and OutR.

Fig. 4. Vehicle Ontology
Fig. 5. Advertisement Propagation

4.2 Publishing Phase

Publishing a web service is not a time-critical task; therefore we exploit this time to pre-compute the degree of match between the advertisement and possible requests.
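The four degrees of match above can be sketched in a few lines of Python. This is an illustrative sketch only: the hard-coded PARENT table (mirroring the vehicle ontology fragment of Fig. 4) and the is_subclass helper stand in for the OWL subsumption reasoning that the matchmaker delegates to a reasoner such as Racer.

```python
# Sketch of the four match degrees (exact, plug in, subsume, fail).
# PARENT encodes the immediate-superclass relation of the vehicle
# ontology fragment; a real matchmaker would query an OWL reasoner.
PARENT = {"Vehicle": "Thing", "Car": "Vehicle", "SUV": "Vehicle",
          "Sedan": "Car", "Coupe": "Car"}

def is_subclass(sub, sup):
    """True if sub equals sup or sup is an ancestor of sub."""
    while sub is not None:
        if sub == sup:
            return True
        sub = PARENT.get(sub)
    return False

def degree_of_match(out_r, out_a):
    """Degree of match between a requested (out_r) and an advertised
    (out_a) output concept, following the definitions above."""
    if out_r == out_a or PARENT.get(out_r) == out_a:
        return "exact"      # same concept or immediate subclass
    if is_subclass(out_r, out_a):
        return "plug in"    # out_a subsumes out_r
    if is_subclass(out_a, out_r):
        return "subsume"    # out_r subsumes out_a
    return "fail"           # no subsumption relation

print(degree_of_match("Sedan", "Car"))      # exact
print(degree_of_match("Sedan", "Vehicle"))  # plug in
print(degree_of_match("Vehicle", "Car"))    # subsume
```

Note how the ordering of the checks encodes the preference exact > plug in > subsume stated above.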
To perform this pre-computation, the matchmaker maintains a taxonomy that represents the subsumption relationships between all the concepts in the ontologies it has loaded. Each concept in this taxonomy is annotated with two lists, output_node_information and input_node_information, that specify to what degree any request pointing to that concept would match each advertisement. For example, output_node_information is represented as a vector such as [<Adv1, exact>, <Adv2, subsume>, ...], where AdvX points to the advertisement and "exact" or "subsume" specifies the degree of match. The advantage of this pre-computation is that at query time the matchmaker can extract the correct value with just a lookup, with no need for inference.

In more detail, at publishing time the matchmaker loads the ontologies used by the advertisement's inputs and outputs and updates its taxonomy. Then, for each output in the advertisement, the matchmaker performs the following steps.

- The matchmaker locates the node in the hierarchical structure corresponding to the concept that represents the output; let us call this node curr_node. The degree of match between curr_node's concept and the output of the advertisement is exact, so the matchmaker updates its output_node_information. For example, let Fig. 5 represent the hierarchical structure maintained by the matchmaker and let an output of an advertisement Adv1 be 'Car'. The matchmaker records in the output_node_information of the 'Car' node that it matches Adv1 exactly.
- The matchmaker updates the output_node_information of all the immediate children of curr_node to record that the published advertisement matches them exactly, because the algorithm states that the degree of match between an output and the concept's immediate subclasses is also exact.
Following our example, the matchmaker updates the output_node_information of the 'Coupe' node and the 'Sedan' node to record that they match the advertisement Adv1 exactly.

- The matchmaker updates the output_node_information of all the parents of curr_node to record that the degree of match between these nodes and the published advertisement is subsume. Following our example, the degree of match between Adv1's output concept 'Car' and the parent nodes 'Thing' and 'Vehicle' is subsume.
- Similarly, the matchmaker updates the output_node_information of all the deeper child nodes of curr_node to record that the degree of match between these nodes and the published advertisement is plug-in. Following our example, the degree of match between the advertisement's output and the child nodes 'Luxury' and 'Mid-Size' is plug-in.

Similar steps are followed for each input of the published advertisement: the matchmaker updates the input_node_information of the appropriate nodes.

As we can observe, we perform most of the work required by the matching algorithm during the publishing phase itself, thereby spending a considerable amount of time in this phase. Nevertheless, we can show that the time spent during this phase does not depend linearly on the number of concepts present in the data structure but is in the order of the logarithm of the number of concepts in the tree structure, hence showing that our implementation is scalable.

Since we use a hierarchical data structure, the time required to insert a node is in the order of log_d N, where d is the degree of the tree.
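The publish-time propagation steps above can be sketched as follows. The dictionary-based taxonomy and the (advertisement, degree) tuples are simplified assumptions about the matchmaker's internal layout, not its actual data structures; they mirror the output_node_information annotations described in the example.

```python
# Sketch of publish-time propagation: when advertisement `adv` offers
# output `concept`, annotate the taxonomy nodes with (adv, degree)
# pairs. The taxonomy below follows the Fig. 5 example (assumed shape).
PARENT = {"Vehicle": "Thing", "Car": "Vehicle", "SUV": "Vehicle",
          "Coupe": "Car", "Sedan": "Car",
          "Luxury": "Sedan", "Mid-Size": "Sedan"}

CHILDREN = {}
for child, parent in PARENT.items():
    CHILDREN.setdefault(parent, []).append(child)

def descendants(node):
    """All nodes strictly below `node` in the taxonomy."""
    for c in CHILDREN.get(node, []):
        yield c
        yield from descendants(c)

def publish_output(info, adv, concept):
    """Record adv's degree of match on every affected taxonomy node."""
    info.setdefault(concept, []).append((adv, "exact"))
    for c in CHILDREN.get(concept, []):          # immediate children: exact
        info.setdefault(c, []).append((adv, "exact"))
        for d in descendants(c):                 # deeper children: plug-in
            info.setdefault(d, []).append((adv, "plug-in"))
    p = PARENT.get(concept)
    while p is not None:                         # all ancestors: subsume
        info.setdefault(p, []).append((adv, "subsume"))
        p = PARENT.get(p)
    return info

info = publish_output({}, "Adv1", "Car")
print(info["Sedan"], info["Luxury"])  # [('Adv1', 'exact')] [('Adv1', 'plug-in')]
```

The tree walk touches only the ancestors and descendants of curr_node, which is what keeps the propagation cost logarithmic-per-branch rather than linear in the number of concepts.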
Similarly, the time required to traverse between any two nodes in a particular branch is also in the order of log_d N. The time required for publishing an advertisement equals the time required for classifying the ontologies used by the inputs and outputs of the advertisement, plus the time required to update the hierarchical structure with the newly added concepts, plus the time required to propagate information about the newly added advertisement through the hierarchical structure. In the best case, when no ontology needs to be loaded, the publishing time is just the time required for updating and propagating.

Time_publish = Time_Classification + Time_Update + Time_Propagate   (1)

The time required by Racer for classification depends directly on neither the number of concepts nor the number of advertisements present in the matchmaker. The time required by the other two operations, update and propagate, is in the order of log_d N. Hence the publishing time does not depend linearly on the number of concepts or advertisements present in the matchmaker.

4.3 Querying Phase

Since most of the matching information is pre-computed in the publishing phase, the matchmaker's query phase is reduced to simple lookups in the hierarchical data structure. We also save time by not allowing a query to load ontologies. Although loading the ontologies required by a query appears to be a good idea, we do not allow it for the following three reasons. First, loading an ontology is an expensive process, and the number of ontologies to load is in principle unbounded. Second, if the request requires loading a new ontology, it is very likely that the new concepts will have no relation with the concepts already present in the matchmaker, so the matching process would fail anyway. Third, the ontologies loaded for a query may be used only once, and over time we may end up storing information about many unused concepts.
Note that the decision not to load ontologies at query time introduces incompleteness in the matching process: it is possible that the requested ontology bears some relation to the loaded ontologies, in which case the matching process might have succeeded. Still, the likelihood of this event is small, and the cost of loading ontologies is so big that we opted not to load them.

When the matchmaker receives a query, it retrieves the output_node_information (the set of advertisements and their degrees of match with the concept) of every node corresponding to an output of the request. For example, if the outputs of the request are 'Car' and 'Price', the matchmaker fetches the output_node_information of Car, ONI1, and of Price, ONI2. The matchmaker then finds the advertisements that are common to the retrieved sets, i.e. ONI1 ∩ ONI2. If the intersection is empty, the query fails. If common advertisements are found, say ADVSo, they are selected for further processing.

The matchmaker then performs a lookup operation and fetches the input_node_information (the set of advertisements and their degrees of match with the concept) of every node corresponding to an input of the request. The matchmaker keeps only the input_node_information of the advertisements that were selected during the output processing phase; other advertisements are discarded. For example, let IN1, IN2, and IN3 be the input_node_information; then only the input_node_information of the advertisements ADVSo ∩ IN1 ∩ IN2 ∩ IN3 is kept. This input_node_information and the match level of each output are used to score the advertisements selected during the output processing phase, i.e. ADVSo.

We can see that the time required for processing a query does not depend on the number of advertisements published in the matchmaker. As we also see, the querying phase involves lookups and intersections between the selected advertisements.
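The query phase described above, reduced to per-concept lookups followed by set intersections, might look like the following sketch. Here output_info and input_info stand in for the per-node output_node_information and input_node_information lists, and the degrees of match are omitted for brevity; the data layout is an assumption, not the paper's actual implementation.

```python
# Sketch of the query phase: one lookup per requested concept, then
# set intersections, as in ONI1 ∩ ONI2 and ADVSo ∩ IN1 ∩ IN2 ∩ IN3.
# output_info / input_info map a concept to the set of advertisements
# annotated on its taxonomy node at publish time (assumed layout).

def query(output_info, input_info, req_outputs, req_inputs):
    """Return advertisements matching every requested output and input."""
    matched = None
    for concept in req_outputs:                  # ONI1 ∩ ONI2 ∩ ...
        advs = set(output_info.get(concept, ()))
        matched = advs if matched is None else matched & advs
    if not matched:
        return set()                             # query fails on outputs
    for concept in req_inputs:                   # ADVSo ∩ IN1 ∩ IN2 ∩ ...
        matched &= set(input_info.get(concept, ()))
    return matched

out_info = {"Car": {"Adv1", "Adv2"}, "Price": {"Adv1", "Adv3"}}
in_info = {"TickerSymbol": {"Adv1", "Adv3"}}
print(query(out_info, in_info, ["Car", "Price"], ["TickerSymbol"]))  # {'Adv1'}
```

No subsumption reasoning happens here at all: everything the query needs was materialized onto the taxonomy nodes at publish time, which is precisely why query time is independent of the number of published advertisements.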
In our implementation lookups can be performed in constant time, hence the time to process a query depends on the time to perform intersections between the selected advertisements.

Time_query = (Num_out + Num_in) * (Time_Lookup + Time_Intersection)   (2)

where Num_out and Num_in are the number of outputs and inputs of the request, Time_Lookup is the time required to extract information about an input or an output, and Time_Intersection is the time to compute the intersection between the lists extracted during lookup. We can see that the computation required for query processing does not depend on the number of advertisements, and therefore it is scalable.

5 Preliminary Experimental Results

We conducted a preliminary evaluation comparing the performance of our OWL-S/UDDI registry with that of a UDDI registry, to show that adding an OWL-S matchmaking component does not hinder the performance and scalability of a UDDI registry. We extended jUDDI [14], an open source UDDI registry, with the OWL-S matchmaking component, and used RACER [4] to perform OWL inferences. In our experiments, we measured the processing time of an advertisement as the difference between the time the registry receives an advertisement and the time the result is delivered, to eliminate network latency.

5.1 Performance: Publishing Time

In our first experiment we compared the time taken to publish an advertisement in an OWL-S/UDDI registry and in a UDDI registry. We assumed that the ontologies required by the inputs and outputs of the advertisements were already present in the OWL-S/UDDI registry. The advertisements may have different inputs and outputs, but these are present in one ontology file; hence the ontology has to be loaded only once, although our registry still has to process 50 advertisements.

Table 1. Publishing time without loading ontologies

Fig. 6. Time distribution during the publishing of an advertisement
Table 1 shows the average time taken to publish 50 advertisements in a UDDI registry and in an OWL-S/UDDI registry. We can see that the OWL-S/UDDI registry spends around 6-7 times more time; since publishing is a one-time event, we are not overly concerned about this cost. However, we took a closer look at the time taken to publish an advertisement in the OWL-S/UDDI registry. Fig. 6 shows the time spent in the different phases of publishing an advertisement. The five phases in the publishing process are:

- UDDI: time required by the UDDI component to process an advertisement.
- Validation: time required by Racer to validate the advertisement.
- Loading: time required by Racer to load the advertisement.
- Updating: time required to extract the ontology tree from Racer.
- Propagating: time required to propagate the input/output information.

As we can see, most of the time (around 70%) is spent loading and validating the ontology, compared with the matchmaking operations.

5.2 Performance: Ontology Loading

In the second experiment, we analyzed the performance of our registry when publishing advertisements that require loading new ontologies and hence significantly updating the taxonomy maintained by the matchmaking component. We published 50 advertisements that use different ontologies to describe their inputs and outputs in our OWL-S/UDDI registry and measured the time taken to publish each advertisement. Each of these advertisements has three inputs and one output and requires loading an ontology containing 30 concepts.

Fig. 7. Publishing time for advertisements that require loading new ontologies

In Fig. 7, we can see that the time taken to publish an advertisement increases linearly with the number of advertisements, and that this linear increase is contributed by 'new-concept'. The linear increase of 'new-concept' is attributed to a limitation of the Racer system.
Whenever we load a new ontology into Racer, we have to determine whether the taxonomy maintained by the matchmaker needs to be updated and, if so, which concepts should be updated. The Racer system does not provide any direct means of obtaining this information, so we have to find it through a series of interactions. The 'new-concept' series in Fig. 7 represents the time required to perform this operation. We could substantially reduce the publishing time if Racer provided this information directly, or if we had direct access to Racer and maintained the taxonomy inside Racer itself. If we ignore the time taken by 'new-concept', the resulting graph shows no such drastic increase in publishing time, consistent with our discussion in Section 4.2.

Table 2. Query processing time

              Time in ms   Standard deviation
OWL-S/UDDI    1.306        0.54

5.3 Performance: Querying Time

In our final experiment, we measured the time required to process a query. The queries we used do not load new ontologies into the matchmaker; they use the ontologies that are already present. We used 50 queries, each with three inputs.
Vocabulary and Example Sentences Beginning with D and E

Vocabulary and example sentences beginning with D: 1800 key words for postgraduate English reading comprehension and translation

deceitful a. deceptive, fraudulent [Note] This word is related in sound and meaning to "deceive".

deduction n. 1. subtraction, deduction, amount deducted 2. inference, deduction [Example] a natural deduction from the evidence

deem v. to consider, regard as, believe [Example] The government ~ed the country's future power needs more important. [Note] This word is related in pronunciation and meaning to the verb "doom" (to judge).

default v. 1. to fail to fulfill an obligation; to be absent 2. to fail to pay a debt, to be in arrears [Example] A total of 125 contracts went into ~ up to May 25.

defect n. 1. defect, flaw, fault 2. lack, deficiency [Example] The fundamental ~ of fathers is that they want their children to be a credit to them.

defensive a. defensive, for defense, self-protective [Example] The slightest delay in defensive dispositions might sound the country's death-knell.

deference n. (~ to) compliance, respect [Example] The respect with which he had always treated me deepened into deference.

deficient a. (~ in) lacking, insufficient, imperfect [Example] a new boy graded dull, if not actually deficient

deficit n. 1. deficit, shortfall, loss 2. lack, insufficiency; defect [Example] The baseball team erased a 6-0 ~.
Algorithmic Efficiency in Computational Problems

Algorithmic efficiency in computational problems refers to the ability of an algorithm to solve a problem in the most efficient manner possible. In computer science, algorithmic efficiency is a key concept that plays a crucial role in the design and analysis of algorithms. It is important to analyze and compare the efficiency of different algorithms in order to determine the best algorithm for a given problem.

There are several factors that contribute to the efficiency of an algorithm, including time complexity, space complexity, and the quality of the algorithm design. Time complexity refers to the amount of time it takes for an algorithm to solve a problem, while space complexity refers to the amount of memory space required by an algorithm to solve a problem. The quality of algorithm design includes factors such as the choice of data structures and the way the algorithm is implemented.

One important measure of algorithmic efficiency is big O notation, which provides an upper bound on the growth rate of an algorithm. Big O notation allows us to compare the efficiency of different algorithms and make informed decisions about which algorithm to use for a particular problem. For example, an algorithm with a time complexity of O(n) is considered more efficient than an algorithm with a time complexity of O(n^2) for large input sizes.

In order to improve the efficiency of algorithms, it is important to understand the theory behind algorithm design and analysis. This includes understanding different algorithm design techniques such as divide and conquer, dynamic programming, and greedy algorithms. By using these techniques, it is possible to design algorithms that are more efficient and can solve problems in a faster and more resource-efficient manner.

In addition to understanding algorithm design techniques, it is also important to consider the specific characteristics of the problem at hand when designing algorithms.
For example, some problems may have specific constraints that can be exploited to improve algorithm efficiency. By taking these constraints into account, it is possible to design algorithms that are tailored to a specific problem and can solve it more efficiently.

Another key aspect of algorithmic efficiency is the implementation of algorithms. The choice of programming language, data structures, and optimization techniques can all impact the efficiency of an algorithm. By optimizing the implementation of an algorithm, it is possible to reduce its time and space complexity and improve its overall efficiency.

Overall, algorithmic efficiency is a fundamental concept in computer science that plays a crucial role in the design and analysis of algorithms. By understanding the theory behind algorithm design and analysis, and by carefully considering the specific characteristics of the problem at hand, it is possible to design algorithms that are efficient, fast, and resource-efficient. This can lead to significant improvements in the performance of computational problems and the development of more effective software applications.
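The O(n) versus O(n^2) contrast discussed above can be made concrete with a small, self-contained comparison: counting duplicate pairs in a list with a nested loop (quadratic) versus a hash-based counter (linear). The function names are ours for illustration; both compute the same value, and only the growth rate differs as the input grows.

```python
# Two implementations of the same task with different time complexity:
# a nested loop that compares every pair (O(n^2)) and a single-pass
# counter (O(n)).
from collections import Counter

def count_duplicate_pairs_quadratic(items):
    """O(n^2): examine every pair of positions."""
    pairs = 0
    for i, x in enumerate(items):
        for y in items[i + 1:]:
            if x == y:
                pairs += 1
    return pairs

def count_duplicate_pairs_linear(items):
    """O(n): count occurrences once, then sum the pairs per value."""
    return sum(c * (c - 1) // 2 for c in Counter(items).values())

data = [1, 2, 3, 2, 1, 2]
print(count_duplicate_pairs_quadratic(data),
      count_duplicate_pairs_linear(data))  # 4 4
```

On six elements the difference is invisible, but on a million elements the quadratic version performs on the order of 10^12 comparisons while the linear version performs about 10^6 dictionary updates.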

For details on running disparity rules, see the IEEE 802.3z specification, paragraph 36.2.4.4.
Altera Corporation
8b10b Encoder/Decoder MegaCore Function (ED8B10B) Data Sheet
The Altera® 8b10b Encoder/Decoder MegaCore® Function (ED8B10B) is a compact, high-performance core capable of encoding and decoding at Gigabit Ethernet rates (125 MHz; 1 Gbps). The ED8B10B is optimized for the APEX™ 20K, FLEX 10K®, and Mercury™ devices.
The ED8B10B is designed to maintain a neutral average disparity. Average disparity determines the DC component of a serial line. Running disparity is a record of the cumulative disparity of every encoded word, and is tracked by the encoder. To guarantee neutral average disparity, a positive running disparity must be followed by neutral or negative disparity; a negative running disparity must be followed by neutral or positive disparity. If these conditions are not met, the decoder flags an error by asserting its rderr output.
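The running-disparity rule described above can be sketched in a few lines of code. This is an illustrative model only, not Altera's implementation: the per-word disparity values (+2, 0, -2, as in 8b10b, where a 10-bit word is either balanced or has a two-bit imbalance) and the negative starting polarity are our assumptions.

```python
def check_running_disparity(word_disparities):
    """Check the running-disparity rule over a sequence of encoded words.

    word_disparities: per-word disparities (+2, 0, or -2).
    Returns (ok, history), where history records the running disparity
    (+1 or -1) after each word.
    """
    rd = -1  # assume the link starts with negative running disparity
    history = []
    for d in word_disparities:
        if rd > 0 and d > 0:
            return False, history  # positive RD must be followed by neutral/negative
        if rd < 0 and d < 0:
            return False, history  # negative RD must be followed by neutral/positive
        if d != 0:
            rd = 1 if d > 0 else -1  # non-neutral words flip the running disparity
        history.append(rd)
    return True, history
```

For example, `check_running_disparity([2, -2, 0, 2])` accepts the sequence, while two consecutive positive-disparity words are rejected; a violation of this kind is the condition under which the decoder described above would assert its rderr output.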
Algorithm Tuning Techniques

As a computer science student, I understand the importance of honing my algorithm tuning skills. This skill is crucial in many aspects of the field, from problem solving to optimization, and being able to fine-tune algorithms can make a significant difference in the efficiency and effectiveness of software programs.

One of the most important aspects of algorithm tuning is understanding the concept of time complexity: the amount of time an algorithm takes to run as a function of the length of its input. By understanding time complexity, developers can make informed decisions about which algorithms to use based on the size of the input data.
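One way to build intuition for time complexity is to measure running time as the input size doubles. The sketch below is illustrative (wall-clock timing is noisy, so it only reveals trends, and the helper names are ours):

```python
import time

def time_algorithm(fn, sizes):
    """Run fn on inputs of increasing size; return a list of (size, seconds)."""
    results = []
    for n in sizes:
        data = list(range(n))
        start = time.perf_counter()
        fn(data)
        results.append((n, time.perf_counter() - start))
    return results

def sum_linear(xs):
    """O(n): a single pass over the input."""
    total = 0
    for x in xs:
        total += x
    return total

def count_pairs_quadratic(xs):
    """O(n^2): visits every ordered pair of elements."""
    count = 0
    for a in xs:
        for b in xs:
            count += 1
    return count

# Doubling n roughly doubles the O(n) time but quadruples the O(n^2) time.
```

Plotting the `(size, seconds)` pairs for each function makes the growth rates visible without any formal analysis.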
Glossary of Technical English for Digital Electronics

数字电子技术专业词汇表此专业词汇表由中山大学微电子2加2专业10级全体同学于2011年春协作查找和翻译而成ABEL一种逻辑电路设计语言Absolute value绝对值abstraction抽象academic学术的,理论的accommodate容纳acquisition获得物acronym首字母略缩词activate使活动actuator促动器addend加数adder加法器additional附加的adhering粘附的adjacency邻接物adnauseum令人作呕的advantage优势aforethought预谋alarm警告albeit虽然algebra代数algebraic代数的algorithm算法aligned对齐的alphabet字母表alphabetic照字母次序的alphanumeric字母数字的ambient周围ambiguity歧义性ambiguous引起歧义的AMI传号交替反转码amiss出差错的amplifier放大器amplitude幅度analog模拟analogous类似的anode阳极anthologies精选的appealing有魅力的append附加applicable合适的application应用appropriate恰当的aptly适当地arbitrary任意的arc电弧architecture架构archive存档arcs弧光灯arithmetic算术array数组arrow箭头artifact人工制品ascend上升ascending上升的ASIC专用集成电路assemble装配assembly装配线assert断言asserted生效的assign分配assignment分配assumption假设asterisk星号asymmetrical不对称的asynchronous异步的attenuated衰减的attenuator衰减器attic阁楼attribute属性automatic自动的automatically自动地auxiliary辅助的axiom原理balanced code平衡码bandwidth带宽barrel shifter柱式位移器base基极behavioral行为的benchmark基准bidirctional双向的binary二进制的binary digit二进制数字binomial二项式的bipolar双极的biquinary二五混合进制的bistable双稳态的bit位bit 
vectors位向量bitwise按位计算block阻塞;阻止;限制blow熔断boolean布尔数学体系的boolean-expression布尔表达式boost提升borrow借位bounce弹回bracket括号内的brute-force暴力bubble气泡buffer缓冲器bug故障bundling集束Bypass旁路cable电缆cache快速缓存CAD计算机辅助设计CAE计算机辅助工程cajole诱使capability能力capable能胜任的capacitance电容capacitive电容性的capacitor电容器capture占领carburetor化油器,汽化器carry进位cascade级联cascading级联catalog目录catchy易记住的categorize分类cathode负极causality因果性ceiling顶函数centerpoint中点character字符characteristic特性charge充电checksum校验和chunk相当大的数量circuitry电路的circuits电路circular环形的circumstance条件citation引用clause条款cleanup清理整顿clock时钟clockwise顺时针方向的closure封闭coaxial同轴的code编码codeword码字coding编码coefficient系数coil线圈collapse倒塌collector集电极colon列column列combinational联合的commercial商业的commutativity交换性compact紧凑的comparison比较compatibility兼容性compatible兼容的compilation编著compile编译compiler编译器complement补complementary互补的complexity复杂性complicated复杂的component成员;成分comprehensive广泛的compromise折中computation计算compute计算comsumption功率消耗concatenate连结concatenation连结conceivably令人信服的conceive构思,以为concept概念conceptual观念的,概念的concise简明的concurrent同时发生的configuration配置,构造conjure用魔术变出consecutive连续的consensus一致conservative保守的consistency一致性consistent一致的constant常数的constraint约束construct构成consume消费consumption消费contemplate深思熟虑contextual前后关系的contiguous连续的continuously连续的control logic控制逻辑controller控制器convention惯例conversely相反地convert转变converter转换器co-opted协同优化correction-factor校正因子correspond对应corresponding对应的corrupt败坏cosmic宇宙的cosmic ray宇宙射线counter计数器counterclockwise逆时针方向的counterpart与另一方相当的事物couple耦合,一对coverage覆盖范围CPLD复杂可编程逻辑器件crank柄criteria判断标准critical决定性的crosshatch画出交叉线阴影crosspoint相交点crosstalk色度亮度干扰crucial决定性的crystalline晶态的cumulative累积的curricula课程customization专用化customize定制cut off截断cycle周期cylinder圆柱daisy菊花daisy chain串级链database资料库dataflow数据流data-transmission数据传输DC 
balanced直流平衡deactivate使无效debounce去除抖动debug调试decade十年decibel分贝decimal十进制的declaration宣布decode解码decoder解码器decompose分解decomposed分解了的decomposition分解decoupling去耦合dedicate奉献deduce推断deem认为default默认值defect缺点definite明确的definition定义delay延迟deleterious有害的delve钻研demonstrate展示demultiplexer信号分离器denote表示density密度depicte描写derivation推导derivative导数derive导出derived导出descend下降descending下行的detect探测detector检测器deterministically确定地diagram图表dicey不确定的dictate支配difference差Differential差分dim昏暗的dimension纬度diminish减小DIP package双列直插封装diphase code二相编码directive指令disabled使失效discard丢弃discharge释放电荷discipline纪律discontinuity不连续discrete离散的disk磁盘,光盘disparity不平衡性disposition处置disregard忽视dissipate损耗dissipation损耗distinct不同的distortion扭曲disturbance干扰dividend被除数documentation文档纪录domain范围dominant占优势的dominate主导DPLL数字锁相环drain漏极dramatically戏剧地dual二重的duality二元性的duel斗争,抗争duplicate复制duration持续的时间dutifully忠实地;忠贞地dye染色dynamic动态edge边缘edge-trigger边缘触发electromechanical机电的elegance优雅的element元件,基础elevator电梯eliminate消除elimination消除embed嵌入embody体现emitter射极emphatically强调地emulate模仿encode编码encompass围绕encompassing围绕encounter冲突encyclopedic百科传书的end-around首尾循环entidy实体enumerate列举enumerated列举ephemeral短暂的equation方程式equilibrium平衡equivalence等同equivalent相当于erasable可擦除的erasable可消除的erase清除erroneous错误的error-detecting检错esoteric神秘的essence本质essential本质上essentially根本上estimate估计ethernet以太网event事件evident明显的exceed超越excess额外的excess-3余3excessive过度的excitation激发exclusive排外的execute执行exhaustive详尽的,彻底的expansion展式explanatory解释的explicit清楚的exploit开发exponentially以指数方式extension扩展external外部的extract提取eyestrain眼睛疲劳fabric构造fabricate建造fabrication建造fanout扇出fascinate使着迷feasibility可行性feasible可行的feature特征feedback反馈feedback回馈fiber-optic光纤的figure象征finite有限的finite field有限域finite-memory有限存储器fixed固定的flag标记flexibility灵活性flip翻转flip-flop触发器fluid流体;流动的flux磁通font字体foolproof万无一失的foregoing前面提到的foreseeable能预测的forestall预先阻止formula公式formulate表达formulate用公式表示formulation用公式表示formule公式fortuitous偶然发生的FPGA现场可编程逻辑阵列fraction分数frill褶边function功能,函数functionality功能fundamental基本的furthermore此外fuse熔丝fuse 
blown保险丝熔断garage车库gear传动装置generality一般性generalize推广generate产生generation产生,代generator发生器generic通用genuine真实的geometry几何学geopolitical地理政治学的geopolitics地缘政治学glance一瞥glitch短时脉冲波global全局的graph图表graphcially用图表表示的graphical绘画的Gray code格雷码guarantee保证halve二等分hamming哈明hardware硬件havoc毁坏hazard冒险HDL硬件描述语言heir后人hence因此hermetically密封地hexadecimal十六进制的hierarchical分层的hierarchy层次结构high-order高位的hint暗示holography全息术hook钩住horizontal水平的house收藏hydraulic液压的hype大肆宣传hypothetical假设的hysteresis滞后作用IC集成电路idempotency幂等identical同一的Identifier标识符idle空闲的illuminate照亮illustrate阐明immune免于……的immunity免疫力impedance阻抗imperfection缺点implement执行,手段implementation执行implicant蕴含项Implication暗指implicitly含蓄地;暗中地impose强加inadvertently无意地;不经意地incandescent白炽inclination趋向inclined倾向于incompatible不相容的inconsistency不一致性inconsistent不一致的incorporate加上increment增长incrementer增量器indecipherable难辨认的indeterminate不确定的Index指标indicate指示indicative标示的,指示的indicator指示器indice指数indistinguishable不易察觉的individual单个的induction归纳法Infinite无限的Infinitesimally极小地inherent固有的Inhibit抑制;禁止INIT初始化initial初始的initialization初始化initialize初始化innocuous-looking看似乏味的innovation革新insanely疯狂地insightful富有洞察力的inspection检查instantaneously瞬时地instantiate举例说明instantiation实例化instructive教育性的instrument仪器insulate使隔绝insulating绝缘的intact完好的integer整数integration完成intensity亮度,强度interchange交替变化interchangeable可互换的interconnect互相连接interconnection互连interface接口Interfacing接口intermediate中间的intermittent间歇的intermix混合internal内部的interoperability互用性,协同工作的能力interpret解释interpretation解释intersection交点intimidate威吓intriguing吸引人的intuitive直觉的intuitively直觉地,直观地invalid无效的inverse相反的inversion反转invert倒置inverter反相器invoke引起irrelevant不相关的isolation隔离itemized详细列举的iterative迭代的jitter紧张不安judiciously明智的jumble混乱的juxtaposition并列Karnaugh卡诺Karnaugh map卡诺图knob球形柄label标签lag落后于;延迟latch锁存器latch 
up闩锁launch发起layout布局,版面设计leakage漏leftmost最左边的legend图例legible清楚的Legitimate合法的legitimate合理的legitimately正当合理的library文库likewise同样的linear线型的linearly呈直线的literal文字的lithium锂logic逻辑lookup查找lowercase小写字母的LSI大规模集成macro宏macrocell宏单元magnetic磁的magnetization磁化magnitude幅度maintain维持maintainability可维护性的maintenance维护malice恶意Manchester code曼彻斯特编码mandatory强制的,义务的manifest显然的manipulate操作manipulation操作manual手工的manufactory工厂manufacturability可生产性manufacture制造manufacturer制造商map映射margin容限marginal在页边的matrix矩阵maximize增至最大限度maximum极大,最大限度MCM多次接触混相Mealy米里型状况mechanical机械的,固定的mechanism机制,原理media介质medium手段megohm百万欧姆merge使合并merit长处messier混乱的metastability亚稳定性metastable亚稳的methodology一套方法microampere微安microprocessor微处理器migrate移往mil千分之一military军事的mimic毫米波单片集成电路minimal最小的minimize减至最小minimizer最小化器minterm最小项minuend被减数misleading误导的mismatch使配合不当mnemonic记忆的moderate适度的moderate中等的moderate-speed中等速度modify修改modular模的modulo以…为模molecular分子的molecule分子momentary短暂的monolithic整体的;庞大的MSI中规模集成multibit多位multichip多片状multiphase多相multiple复合的multiplexer多路复用器multiplicand被乘数multiplicant被乘数multiplication乘法multiplier乘数multiply相乘multisource多源mutual相互的NAND gate与非门nanosecond纳秒narrative叙述negate求反negation否定negative logic负逻辑negligible可忽略的negotiate谈判,商议nerd讨厌的人nested嵌套的nesting嵌套nibble半字节noise margin噪声容限nomenclature术语nominal名义上的noncritical非关键的noncrystalline非晶体的nondeterministically非确定地nonideal非理想的noninverting同相nonlogic非逻辑nonnumeric非数值的nonoverlap不重合nontrivial重要的nonvolatile非挥发性的nonzero非零的NOR或非notation符号notion概念notwithstanding尽管noxious有害的;有毒的NRE负阻元件NRZ不归零逢1翻转制obsolescence废弃obsolete淘汰obtain获得, occasional不经常的octal八进制的omit省略,忽略open-drain漏极开路operand操作数operation操作optimal最佳的optimization优化optimize使最优化optional可选择的orientation定位oriented面向……的original原件oscillate震荡oscillator震荡器outcome结果outline提纲overall总体的overflow溢出overflow溢出overlay覆盖overly过度的overridden无效的overstress过应力package封装pad焊盘painstaking勤勉的pairwise成对地panel平板、控电板paradigm范例parallel平行的parameter参数,参量parenthesen括号parenthesize加括号parity奇偶校验parlance说法partial部分的partial 
product部分积partition分割partitioned分段的patent专利pattern模式PCB印刷电路板perform执行perfume香料peril危机periodic周期性的periphery边缘Perl一种CGI脚本语言permanent永久的,固定的persistent不断的pertain关于,perverse错误的pessimistic悲观的phase阶段philosopher哲人philosophy哲学pictorial绘画的;形象化的pin引脚pin number引脚号pinout引出线pipeline流水线pit凹陷pitfall缺陷pixel像素PLA可编程逻辑阵列PLD可编程逻辑器件pneumatic气动pointy-haired秃头polarity极polymer聚合体portion部分portray画像postpone延迟postulate假设potential有可能的potentially潜在地,可能地precede在之前,先于precedence优先权preceding前述的precise精确的precomputed预先计算的predecessor前身predefined预先确定predetermined先已决定的prefix前缀preliminary初步的premise前提preponderance优先preset预置prime基本的primitive原始的probing探测procedural程序上的procedure程序proceed前进,进行programmable可编程的propagate传播propagation传播property特性proportional比例的proportionally按比例pros and cons赞成和反对的理由protocol条款,协议prototype雏形,蓝本provision供应prudent谨慎的pseudo假的pseudorandom伪随机的pulse脉搏;脉冲pump泵punctuation标点符号puncture穿孔purist纯化论者pushbuttons按键PWB线路板quad四倍的quadruple四倍的quartz石英quiescent静态的quote引述;举证;报价quotient商quotient商,商数radically根本地,本质上radices根,基数radix基数rail轨random随意的Range范围规格rating标称值rationale基本原理readability可读性reciprocal互惠的Recommended值得推荐的Reconvened再聚会;再集会rectangle矩形recursively回归地,递归地redeeming补偿的redefined重新定义的redo改装redraw重画reduced约简的redundant多余的reference参考refinement提纯reflected反射reg寄存器regardless不顾regenerate使再生regenerative恢复的register寄存器reinforce加强relay中转relay logic继电器逻辑remainder余数replica复制品replicate复制representation表示法representative代表reprogrammability可重复编程request要求reserved保留的reset重置resistive抵抗性的resistor电阻器respectively分别地restrict受限制的restriction约束retard阻碍retrieve检索reveal显示revenue收入reverse反转Reverse-polarity逆极性;逆偏振revise修改revolution革命rightmost最右边的ripple波纹ripple 
adder行波加法器robust强健的robustness鲁棒性rollover翻转rotary旋转的rotation旋转roughly大致row一行RZ归零制salient显著的sanity明智satisfy令人满意的saturate浸湿scale规模scan扫描scenario局面schematic原理图scheme计划scope范围sector部门segment部分selectively选择性的self-complementing自补码semantics语义学semicolon分号semiconductor半导体seminumerical半数值的sensitivity敏感度sensor感应器separate使分离sequence顺序,序列sequential时序的,相继的serially连续的shift移位shifter转换机构shortcut捷径shrapnel榴弹shuffle弄混;乱堆sign符号signal信号signed-magnitude符号量值significant重要的silver-halide卤化银simple-minded头脑简单的simplify简化simulate仿真simulation仿真simulator仿真器simultaneous同时的sink使下沉sketch画图skew偏斜slant倾斜的slash斜杠SMT表面装配技术sneak path潜通路Snext综合近端串扰socket插口solder焊接solely唯一地sophisticate复杂的sourcing供应sparsest稀疏的spec规格specification规格specified指定的specify指定sponsor赞助商sponsored赞助的sprinkled点缀Sreg状态寄存器SSI指令基本格式stabilize使稳固static静态stereo立体声stimulate刺激storage贮藏straightforward直接的strategy策略stray杂散的stream流stream流structural结构化的structure结构stubborn顽固的subarray子阵列subcube子集subdirectory子目录subexpression子表达式subrange子范围subroutine子程序subscript下标符号subsection分段subsequent随后的subset子集subsidiary附属的substantial实际的substitute替换substituting取代subsystem子系统subtle细微的subtlety微妙subtract减subtraction减法subtractor减法器subtrahend减数subtype子类型successive连续的sufficient足够的suffix下标summarize总结superset扩展集supplant取代suppression抑制susceptible易受影响的swallow吞swap交换switching-algebra开关代数symmetric对称的symmetry对称性synchronize使同步synchronizer同步器synchronous同步的syndrome典型表现synopsys电子设计软件商名syntactical语法的syntax语法synthesis综合synthesize综合synthesized合成的synthesizer合成器table图表tabular列成表格的tabulated制成表的Tailored定做的tedious单调的tedium沉闷telecommunication电信template模版temporarily暂时地tempt使想要tentacle触角terminal终点termination终止Test试验test 
bench测试台text文本textual文本的theorem定理theoretical理论的threshold阈值thru-hole通孔tick时钟跳动timescale时间标尺toggle跳转tolerant忍受的topology拓扑,布局track记录track路径trademark商标transceiver收发器transcribing抄写transfer迁移transform转换,改变transformer-coupled变压器耦合transient瞬态的transistor晶体管transition转变transmission传送transmit传输transparent透明的transponder收发机transpose移项treatise专著treelike树状tricky难处理的trigger触发trivial琐细的truncate截短truth table真值表trvial没价值不重要的TTL晶体管-晶体管逻辑(电路) turn-the-crank实现tweak调整tweaking补偿ultraviolet紫外线unambiguous不含糊的uncompressed无压缩的unconstrained不受拘束的underscore强调undetectable不可探测性undue过度的unidirectional单向的unique唯一的unprimed未装填的unsigned无符号的unspecified非特指的unwieldy笨重的upgrade升级、提高品质uppercase大写字母的valid有效的vaporize使蒸发variables变量variation变化vector矢量vendor卖主verbal言语的Verification确认verify证实verilog一种逻辑电路设计语言versa电磁阀versatile多用途的version版本versus对vertical垂直的VHDL一种逻辑电路设计语言vice versa反之亦然victim牺牲者vinyl乙烯树脂violate违反visualize在脑中使形象化visually视觉上VLSI超大规模集成电路volatile易变的wafer晶片waveform波形weighted code加权码weird离奇的wire电线withstand承受wrapper包装wreak诉诸XOR异或yield产出。
COMPUTER ARCHITECTURE
Ronald A. Thisted
Departments of Statistics, Health Studies, and Anesthesia & Critical Care
The University of Chicago
7 April 1997
To appear, Encyclopedia of Biostatistics.

A computer architecture is a detailed specification of the computational, communication, and data storage elements (hardware) of a computer system, how those components interact (machine organization), and how they are controlled (instruction set). A machine's architecture determines which computations can be performed most efficiently, and which forms of data organization and program design will perform optimally.

The term architecture, as applied to computer design, was first used in 1964 by Gene Amdahl, G. Anne Blaauw, and Frederick Brooks, Jr., the designers of the IBM System/360. They coined the term to refer to those aspects of the instruction set available to programmers, independent of the hardware on which the instruction set was implemented. The System/360 marked the introduction of families of computers, that is, a range of hardware systems all executing essentially the same basic machine instructions. The System/360 also precipitated a shift from the preoccupation of computer designers with computer arithmetic, which had been the main focus since the early 1950s. In the 1970s and 1980s, computer architects focused increasingly on the instruction set. In the current decade, however, designers' main challenges have been to implement processors efficiently, to design communicating memory hierarchies, and to integrate multiple processors in a single design. It is instructive to examine how this transition from concentrating on instruction sets to high-level integration has taken place.

Instruction-set architecture

In the late 1970s, statisticians often had to be skilled FORTRAN programmers. Many were also sufficiently conversant with assembly language programming for a particular computer that they wrote subprograms directly using the computer's basic instruction set. The Digital VAX 11/780 was a typical scientific
computer of the era. The VAX had over 300 different machine-level instructions, ranging in size from 2 to 57 bytes in length, and 22 different addressing modes. Machines such as the VAX, the Intel 80x86 family of processors (the processors on which the IBM PC and its successors are based), and the Motorola 680x0 processors (on which the Apple Macintosh is based) all had multiple addressing modes, variable-length instructions, and large instruction sets. By the middle of the 1980s, such machines were described as "complex instruction-set computers" (CISC). These architectures did have the advantage that each instruction/addressing-mode combination performed its special task efficiently, making it possible to fine-tune performance on large tasks with very different characteristics and computing requirements.

In CISC computers, 80% or more of the computation time typically is spent executing only a small fraction (10–20%) of the instructions in the instruction set, and many of the cycles are spent performing operating system services. Patterson and Ditzel [3] proposed the "reduced instruction-set computer" (RISC) in 1980. The RISC idea is to design a small set of instructions which make implementation of these most frequently performed tasks maximally efficient. The most common features of RISC designs are a single instruction size, a small number of addressing modes, and no indirect addressing. RISC architectures became popular in the middle to late 1980s. Examples of RISC architectures include Digital's Alpha processor family, Hewlett-Packard's PA series, the Motorola/Apple/IBM PowerPC family, and Sun's Ultra-SPARC family. RISC architectures are particularly suitable for taking advantage of compiler-based optimization, which means that programs such as computationally-intensive statistical procedures written in compiled languages are likely to see best performance in RISC environments.

The operating system for a computer is a program that allocates and schedules the computer's computational resources such as
processors and memory. For instance, the operating system controls how much of the computer's main memory is available to programs and how much is used to cache portions of files being read from or written to disk storage. Operating systems also determine which programs will be processed at any given time, and how long the processor will "belong" to the program currently claiming it. Computers such as the VAX and System/360 were designed in conjunction with an architecture-specific operating system (VMS and OS/360, respectively). Indeed, until the late 1980s, most operating systems were written specifically for computers with a particular architecture, and only a small number of these operating systems were ever converted to control machines based on other designs.

Early RISC architectures were designed to make Unix operating systems perform efficiently, and today most major Unix-based computer manufacturers rely on a RISC architecture. However, it is important to note that many different operating systems can run on a single computer, whether that computer's architecture is a CISC or a RISC design. For instance, Unix, Linux, MacOS, Windows NT, OS/2, and other operating systems all have versions that operate on PowerPC processors. Increasingly, architectures are designed to run multiple operating systems efficiently, and the most common operating systems do operate on many different processors. This means that choice of operating system is becoming less closely tied to the processor family on which a computer is based.

Hardware and Machine Organization

Up to this point we have talked largely about instruction-set aspects of computer architecture. The architectural advances of primary interest to statisticians today involve hardware and machine organization. The hardware architecture consists of low-level details of a machine, such as timing requirements of components, layouts of circuit boards, logic design, power requirements, and the like. Fortunately, few of these details affect the day-to-day work of
statisticians aside from their consequences: processors continue to get smaller and faster, and memory continues to get larger and less expensive.

At the level of machine organization, the computers we use are built of inter-dependent systems, of which the processor itself is just one. Others include memory and memory management systems, specialized instruction processors, busses for communication within and between systems and with peripheral devices, and input/output controllers. In multiprocessing architectures, the protocols for interaction between multiple processors (the multiprocessing control system) is included as well.

In a typical computer based on Intel's Pentium Pro processor, the processor itself is tightly linked to a memory cache consisting of synchronous static random access memory (SRAM). This memory component is very fast, but because it is part of the same chip as the processor itself, it must also be quite small. This level-one cache memory is the first element of the Pentium Pro's memory system. This system in the Pentium architecture consists of a hierarchy of successively larger (but slower) memory layers: a level-two cache, coupled to main memory (dynamic RAM, or extended data-out (EDO) DRAM). The main memory bank is backed up in turn by virtual memory residing on disk; a memory management chipset (separate from the Pentium processor) controls transactions between these two layers of the memory system.

Input and output are handled via plug-in slots for processor cards attached to the system bus, which also serves the graphics controller. Typically, graphics output has its own, separate, graphics memory for preparing and displaying the contents of graphics displays.
In addition, the graphics subsystem may incorporate specialized processors that accelerate graphical computations as well as specialized memory modules linked to the graphics processors.

For most statisticians using a computer on the desktop (or increasingly, on the laptop), then, improving performance can often be achieved by increasing the amount of level-two cache memory, by increasing the amount of main memory, by acquiring faster and larger disk drives, by adding graphics accelerators and graphics memory, or by adding a high-speed input/output controller. On more powerful workstation systems, many of these features are integrated directly on the main processor chip. The UltraSPARC-1, for instance, has the memory management unit, the floating-point processor, as well as graphics and imaging support on the same chip as the instruction processor.

Floating-point computation

From the earliest days of digital computers, statistical computation has been dominated by floating-point calculations. "Scientific computers" are often defined as those which deliver high performance for floating-point operations. The aspects of computer design that make these operations possible have followed the same evolutionary path as the rest of computer architecture. Early computer designs incorporated no floating-point instructions. Since all numbers were treated as integers, programmers working with nonintegers had to represent a number using one integer to hold the significant digits coupled to a second integer to record a scaling factor. In effect, each programmer had to devise his or her own floating-point representation.
By the 1960s, some designs introduced instructions for floating-point operations, but many had none. (The IBM 1620 computer, for example, had fixed-precision integer arithmetic, but the programmer could control the number of digits of precision.) By the mid-1970s, virtually all scientific computers had floating-point instructions. Unfortunately for the practical statistician, the representation of floating-point numbers, the meaning of floating-point operations, and the results of an operation differed substantially from one machine to the next. The burden of knowing numerical analysis and good numerical algorithms fell heavily on the data analyst's shoulders.

In the early 1980s the Institute of Electrical and Electronics Engineers (IEEE) developed a standard for floating-point arithmetic [1]. To implement these standards, computer architects of that time developed a floating-point architecture separate from that of the principal processor, in effect moving the floating-point issues from the level of machine-specific definition of arithmetic to a common set of operations (an "instruction set") whose output could be strictly defined. Examples include the Motorola 6888x and the Intel 80x87 floating-point processors (FPPs), which were designed in parallel with the 680x0 and 80x86 central processors, respectively.

In later RISC-based architectures, floating-point processes are tightly coupled to the central instruction processor. The UltraSPARC-1 design incorporates an IEEE-compliant FPP which performs all floating-point operations (including multiplies, divides, and square roots).
The PowerPC family also includes an integrated IEEE-compliant FPP. These processors illustrate the increased role of hardware and machine organization. The FPPs are logically external to the basic instruction processor. Therefore the computer design must include channels for communication of operands and results, and must incorporate machine-level protocols to guarantee that the mix of floating-point and non-floating-point instructions are completed in the correct order. Today, many of these FPPs are part of the same integrated-circuit package as the main instruction processor. This keeps the lines of communication very short and achieves additional effective processor speed.

Parallel and Vector Architectures

The discussion above concerns the predominant computer architecture used by practicing statisticians, one based on the sequential single processor. The notion that speed and reliability could be enhanced by coupling multiple processors in a way that enabled them to share work is an obvious extension, and one that has been explored actively since at least the 1950s. Vector computers include instructions (and hardware!) that make it possible to execute a single instruction (such as an add) simultaneously on a vector of operands. In a standard scalar computer, computing the inner product of two vectors x and y of length p requires a loop within which the products x_i y_i are calculated. The time required is that of p multiplications, together with the overhead of the loop. On a vector machine, a single instruction would calculate all p products at once.

The highest performance scientific computers available since 1975 incorporate vector architecture. Examples of early machines with vector capabilities include the CRAY-1 machine and its successors from Cray Research and the CYBER-STAR computers from CDC. Vector processors are special cases of parallel architecture, in which multiple processors cooperatively perform computations. Vector processors are examples of machines which can execute a
single instruction on multiple data streams (SIMD computers). In computers with these architectures, there is a single queue of instructions which are executed in parallel (that is, simultaneously) by multiple processors, each of which has its own data memory cache. Except for special purposes (such as array processing), the SIMD model is neither sufficiently flexible nor economically competitive for general-purpose designs. Computers with multiple processors having individual data memory, and which fetch and execute their instructions independently (MIMD computers), are more flexible than SIMD machines and can be built by taking regular scalar microprocessors and organizing them to operate in parallel. Such multiprocessors are rapidly approaching the capabilities of the fastest vector machines, and for many applications already have supplanted them.

Additional Reading and Summary

An elementary introduction to computer architecture is contained in Patterson and Hennessy [4]; more advanced topics are covered in [5]. Thisted [6] gives an overview of floating-point arithmetic and the IEEE standard with a view toward common statistical computations. Goldberg [2] presents an overview of floating-point issues for computer scientists. Patterson and Hennessy [5] contains a lengthy chapter on parallel computers based on multiprocessors, as well as a short appendix on vector computers. Both sections contain historical notes and detailed pointers to the literature for additional reading.

In the early days of digital computers, few useful computations could be accomplished without a thorough understanding of the computer's instruction set and a detailed knowledge of its architecture. Advances in computer architecture, coupled with advances in programming languages, compiler technology, operating systems, storage, and memory, have made such specific knowledge of much reduced importance for producing high-quality scientific work.

References

[1] Institute of Electrical and Electronics Engineers (IEEE). (1985). IEEE Standard
for Binary Floating-Point Arithmetic (Standard 754-1985). IEEE, New York. Reprinted in SIGPLAN Notices, 22(2), 5–48.
[2] Goldberg, D. (1991). "What every computer scientist should know about floating-point arithmetic," Computing Surveys, 23(1), 5–48.
[3] Patterson, David A., and Ditzel, D. R. (1980). "The case for the reduced instruction set computer," Computer Architecture News, 8(6), 25–33.
[4] Patterson, David A., and Hennessy, John L. (1994). Computer Organization and Design: The Hardware/Software Interface. Morgan Kaufmann: San Francisco.
[5] Patterson, David A., and Hennessy, John L. (1996). Computer Architecture: A Quantitative Approach, Second Edition. Morgan Kaufmann: San Francisco.
[6] Thisted, Ronald A. (1988). Elements of Statistical Computing: Numerical Computation. Chapman and Hall: London.
Computational Vision: Disparity Computation

Background. Stereo matching, also called disparity estimation or binocular depth estimation, takes as input a pair of epipolar-rectified left and right images captured at the same instant. Its output is a disparity map d formed by the disparity value of every pixel in the reference image (the left image is usually taken as the reference). The disparity of a point in the 3D scene is the pixel-level difference between the positions of its corresponding points in the left and right images. Given the camera baseline b and focal length f, depth can be computed automatically from the disparity map, so depth and disparity are mutually convertible and equivalent.
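The depth-disparity relation for a rectified pair is the standard pinhole formula Z = f·b/d. A minimal sketch (the function name and the unit choices, focal length in pixels and baseline in metres, are our assumptions):

```python
def disparity_to_depth(d, f, b):
    """Convert a disparity d (pixels) to depth via Z = f * b / d,
    where f is the focal length in pixels and b the baseline in metres.
    Disparity must be positive; d -> 0 corresponds to infinite depth."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f * b / d
```

For example, with f = 1000 px and b = 0.1 m, a disparity of 50 px corresponds to a depth of 2 m; halving the disparity doubles the depth.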
A stereo matching algorithm comprises four steps: matching cost computation, cost aggregation, disparity computation, and disparity refinement. Traditional disparity estimation algorithms fall into two classes: local algorithms, which compute matching costs over a local sliding window, and global algorithms, which compute the correlation between the stereo views by optimizing an energy function made up of a local data term and a smoothness term. Traditional algorithms give comparably good disparity estimates across application scenarios, without large scene-dependent differences, so some papers use disparity maps estimated by traditional methods as noisy labels to improve unsupervised disparity estimation, as mentioned later in this article.

Driven by advances in deep learning and by large-scale synthetic/simulated data, CNNs have unified the above four steps into a single end-to-end network, further improving disparity estimation performance. This article focuses on the application of self-supervised learning to CNN-based disparity estimation algorithms.
NCC disparity matching. For any pixel (p_x, p_y) in the original image, build an n×n neighbourhood around it as the matching window. Then build an n×n matching window of the same size around the target pixel position (p_x + d, p_y) and measure the similarity between the two windows; note that d ranges over a set of candidate values.
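The window comparison just described can be sketched as follows. We follow the convention above that the candidate window in the second image sits at (p_x + d, p_y); the function names are ours, and this is an illustration rather than a reference implementation.

```python
import numpy as np

def ncc(img1, img2, x, y, d, n):
    """Normalized cross-correlation between the n-by-n window centred at
    (x, y) in img1 and the window centred at (x + d, y) in img2.
    Arrays are indexed [row, col]; windows are mean-centred, so the score
    is invariant to local brightness offsets and lies in [-1, 1]."""
    r = n // 2
    w1 = img1[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    w2 = img2[y - r:y + r + 1, x + d - r:x + d + r + 1].astype(float)
    w1 -= w1.mean()
    w2 -= w2.mean()
    denom = np.sqrt((w1 ** 2).sum() * (w2 ** 2).sum())
    return float((w1 * w2).sum() / denom) if denom > 0 else 0.0

def best_disparity(img1, img2, x, y, n, d_max):
    """Winner-take-all: the candidate d in [0, d_max] with the highest NCC."""
    return max(range(d_max + 1), key=lambda d: ncc(img1, img2, x, y, d, n))
```

On a synthetic pair where the second image is the first shifted by a constant number of columns, the winner-take-all search recovers that shift exactly, because NCC reaches 1.0 only at the true offset.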
Principles and Methods of Stereo Matching

Stereo matching is a fundamental problem in computer vision that aims to establish correspondences between points in a pair of stereo images. It is a crucial step in tasks such as depth estimation, visual odometry, and 3D reconstruction. The principle of stereo matching is to find corresponding points in two images taken from different viewpoints; by comparing these points, the depth information of the scene can be inferred. One common approach is the use of pixel-based matching algorithms, which compare the intensity or color of pixels in the two images to find correspondences. However, pixel-based methods often struggle with textureless regions or occlusions in the images.
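A minimal pixel/window-based matcher of the kind described is the sum of absolute differences (SAD) with winner-take-all selection. The sketch below is our illustration, not a reference implementation; it uses the common left-reference convention where the right-image window is shifted left by the candidate disparity.

```python
import numpy as np

def sad_disparity_row(left, right, y, n, d_max):
    """Per-pixel winner-take-all disparity for one image row using SAD
    over n-by-n windows. Pixels too close to the borders for a full
    window and search range are left at disparity 0."""
    r = n // 2
    h, w = left.shape
    disp = np.zeros(w, dtype=int)
    for x in range(r + d_max, w - r):
        costs = []
        for d in range(d_max + 1):
            wl = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            wr = right[y - r:y + r + 1, x - d - r:x - d + r + 1].astype(float)
            costs.append(np.abs(wl - wr).sum())
        disp[x] = int(np.argmin(costs))
    return disp
```

Note how this simple matcher exhibits exactly the failure mode mentioned above: in a textureless region every candidate window has nearly the same cost, so the argmin is essentially arbitrary.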
Efficient motion planning strategies for large-scale sensor networks
1 Introduction
Consider the initial deployment of a wireless sensor network (WSN). Ideally, the WSN is fully connected with a topology that facilitates coverage, sensing, localization, and data routing. Unfortunately, since deployment methods can vary from aerial to manual, the initial configuration could be far from ideal. As a result, the WSN may be congested, disconnected, and incapable of localizing itself in the environment. Node failures in established networks could have similar effects. Such limitations in static networks have led to increased research interest in improving network efficiency via nodes that support at least limited mobility [2]. Also of fundamental importance to WSN research is resource management, and (perhaps most importantly) power management. Energy consumption is the most limiting factor in the use of wireless sensor networks, as service life is limited by onboard battery capacity. This constraint has driven research into power-sensitive routing protocols, sleeping protocols, and even network architectures for minimizing
UNGER, BENHIMANE, WAHL, NAVAB: EFFICIENT DISPARITY COMPUTATION

Selim Benhimane
Eric Wahl (Eric.Wahl@BMW.de)
Nassir Navab (http://campar.in.tum.de/Main/NassirNavab)
Abstract

In order to improve the performance of correlation-based disparity computation in stereo vision algorithms, standard methods need to choose in advance the value of the maximum disparity (MD). This value corresponds to the maximum displacement expected between the two images of the projection of a physical point. It generally depends on the motion model, the camera intrinsic parameters and the depths of the observed scene. In this paper, we show that there is no optimal MD value that minimizes the matching errors in all image regions simultaneously, and we propose a novel approach to disparity computation that does not rely on any a priori MD. Two variants of this approach are presented. Compared to traditional correlation-based methods, our approach improves not only the accuracy of the results but also the efficiency of the algorithm. A local energy minimization is also proposed for fast refinement of the results. An extensive comparative study with ground truth is carried out on classical stereo images, and the results show that the proposed method gives clearly more accurate results and is two times faster than the fastest possible implementation of traditional correlation-based methods.
Figure 1: Curves plotting the percentage of incorrect disparities estimated with classical methods on the different image regions of the Teddy dataset as a function of the maximum disparity value used. The curves show that the maximum disparity needs to be set according to the region type of the image. When considering all pixels (all), the optimal value of the maximum disparity is equal to 43. The same value is optimal when considering non-occluded regions (nonocc) and regions close to discontinuities (discnt). However, this value is not optimal when considering textureless regions (textrls), where the value 37 yields a lower percentage of wrong disparities.

Typical commercial implementations of such systems use one or two cameras together with a computationally feasible algorithm to compute depth information [8, 22]. If two cameras are used, local methods based on correlation can be implemented very efficiently [12]. On dedicated hardware, methods such as local correlation, dynamic programming, semi-global matching or even belief propagation can be implemented for real-time applications [6, 8, 18, 24]. However, optimized local methods are among the fastest ways to perform dense matching solely on general-purpose CPUs without special hardware. In this case, decisions must be made about the values of some parameters, particularly the maximum disparity (MD). We show that the choice of a fixed MD influences the quality of the depth map: setting it too high introduces false matches, and setting it too low produces gross errors at close objects. Even a seemingly ideal value will not yield the best possible result. The relationship between MD and matching errors is shown in Fig. 1 for the standard Teddy dataset. The figure shows that there is no optimal fixed MD setting that minimizes all individual errors at once.
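For reference, a fixed-MD winner-take-all correlation matcher of the kind analysed here can be sketched as follows. This is a minimal SAD-based NumPy sketch, not the authors' implementation; the synthetic image pair and the 3-pixel shift are illustrative:

```python
import numpy as np

def block_match(left, right, max_disp, half_win=2):
    """Dense winner-take-all SAD block matching with a fixed maximum
    disparity. The runtime grows linearly with max_disp, and any true
    disparity above max_disp is necessarily estimated wrongly."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half_win, h - half_win):
        for x in range(half_win, w - half_win):
            ref = left[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
            best_d, best_cost = 0, np.inf
            # Only candidate windows fully inside the right image are tested.
            for d in range(min(max_disp, x - half_win) + 1):
                cand = right[y - half_win:y + half_win + 1,
                             x - d - half_win:x - d + half_win + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Synthetic check: the right view is the left view shifted by 3 pixels,
# so interior pixels should receive disparity 3.
rng = np.random.default_rng(1)
left = rng.random((12, 30))
right = np.roll(left, -3, axis=1)  # right[:, x - 3] == left[:, x]
disp = block_match(left, right, max_disp=6)
```

The inner loop runs max_disp + 1 times per pixel, which is why both the cost and the number of potential false matches of such methods grow with the chosen MD.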
For example, if the value 37 is used, the errors in textureless regions are minimized, but this value causes higher errors in the other regions. The optimum may be obtained if the MD is set to 37 in textureless regions and to 43 in the rest of the image. In general, the MD should be variable and as close as possible to the true disparity, since a higher MD only increases the possibility of false matches, especially in regions with weak texture. In some cases, the choice of MD is complicated. Generally, the disparity is proportional to the distance between the camera centers and the focal length, and inversely proportional to the depth of the point. Especially in motion-stereo [15], a choice of a fixed MD either restricts the practical applicability (the minimum allowed depth of the points is a function of the camera interframe displacement) or results in an increased number of errors and an inefficient use of processing power (if the MD is set too high). We propose a novel method for efficient dense stereo matching without the need of