Graduation Project Foreign Literature Translation


Graduation Design (Thesis) Foreign Literature Translation. Department: Major: Class: Name: Student ID: Source: Attachments: 1. Original text; 2. Translation. March 2013.

Attachment 1:

A Rapidly Deployable Manipulator System

Christiaan J.J. Paredis, H. Benjamin Brown, Pradeep K. Khosla

Abstract: A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly create a manipulator which is custom-tailored for a given task. This article describes two main aspects of such a system, namely, the Reconfigurable Modular Manipulator System (RMMS) hardware and the corresponding control software.

1 Introduction

Robot manipulators can be easily reprogrammed to perform different tasks, yet the range of tasks that can be performed by a manipulator is limited by its mechanical structure. For example, a manipulator well-suited for precise movement across the top of a table would probably not be capable of lifting heavy objects in the vertical direction. Therefore, to perform a given task, one needs to choose a manipulator with an appropriate mechanical structure.

We propose the concept of a rapidly deployable manipulator system to address the above-mentioned shortcomings of fixed-configuration manipulators. As is illustrated in Figure 1, a rapidly deployable manipulator system consists of software and hardware that allow the user to rapidly build and program a manipulator which is custom-tailored for a given task.

The central building block of a rapidly deployable system is a Reconfigurable Modular Manipulator System (RMMS). The RMMS utilizes a stock of interchangeable link and joint modules of various sizes and performance specifications. One such module is shown in Figure 2. By combining these general-purpose modules, a wide range of special-purpose manipulators can be assembled. Recently, there has been considerable interest in the idea of modular manipulators [2, 4, 5, 7, 9, 10, 14], for research applications as well as for industrial applications. However, most of these systems lack the property of reconfigurability, which is key to the concept of rapidly deployable systems. The RMMS is particularly easy to reconfigure thanks to its integrated quick-coupling connectors described in Section 3.

Effective use of the RMMS requires Task Based Design software. This software takes as input descriptions of the task and of the available manipulator modules; it generates as output a modular assembly configuration optimally suited to perform the given task. Several different approaches have been used successfully to solve simplified instances of this complicated problem.

A third important building block of a rapidly deployable manipulator system is a framework for the generation of control software. To reduce the complexity of software generation for real-time sensor-based control systems, a software paradigm called software assembly has been proposed in the Advanced Manipulators Laboratory at CMU. This paradigm combines the concept of reusable and reconfigurable software components, as is supported by the Chimera real-time operating system [15], with a graphical user interface and a visual programming language, implemented in Onika. Although the software assembly paradigm provides the software infrastructure for rapidly programming manipulator systems, it does not solve the programming problem itself. Explicit programming of sensor-based manipulator systems is cumbersome due to the extensive amount of detail which must be specified for the robot to perform the task.
The software synthesis problem for sensor-based robots can be simplified dramatically by providing robust robotic skills, that is, encapsulated strategies for accomplishing common tasks in the robot's task domain [11]. Such robotic skills can then be used at the task-level planning stage without having to consider any of the low-level details.

As an example of the use of a rapidly deployable system, consider a manipulator in a nuclear environment where it must inspect material and space for radioactive contamination, or assemble and repair equipment. In such an environment, widely varied kinematic (e.g., workspace) and dynamic (e.g., speed, payload) performance is required, and these requirements may not be known a priori. Instead of preparing a large set of different manipulators to accomplish these tasks—an expensive solution—one can use a rapidly deployable manipulator system. Consider the following scenario: as soon as a specific task is identified, the task based design software determines the optimal manipulator configuration for the task. This optimal configuration is then assembled from the RMMS modules by a human or, in the future, possibly by another manipulator. The resulting manipulator is rapidly programmed by using the software assembly paradigm and our library of robotic skills. Finally, the manipulator is deployed to perform its task.

Although such a scenario is still futuristic, the development of the reconfigurable modular manipulator system, described in this paper, is a major step toward our goal of a rapidly deployable manipulator system. Our approach could form the basis for the next generation of autonomous manipulators, in which the traditional notion of sensor-based autonomy is extended to configuration-based autonomy. Indeed, although a deployed system can have all the sensory and planning information it needs, it may still not be able to accomplish its task because the task is beyond the system's physical capabilities. A rapidly deployable system, on the other hand, could adapt its physical capabilities based on task specifications and, with advanced sensing, control, and planning strategies, accomplish the task autonomously.

2 Design of self-contained hardware modules

In most industrial manipulators, the controller is a separate unit housing the sensor interfaces, power amplifiers, and control processors for all the joints of the manipulator. A large number of wires is necessary to connect this control unit with the sensors, actuators and brakes located in each of the joints of the manipulator. The large number of electrical connections and the non-extensible nature of such a system layout make it infeasible for modular manipulators. The solution we propose is to distribute the control hardware to each individual module of the manipulator. These modules then become self-contained units which include sensors, an actuator, a brake, a transmission, a sensor interface, a motor amplifier, and a communication interface, as is illustrated in Figure 3. As a result, only six wires are required for power distribution and data communication.

2.1 Mechanical design

The goal of the RMMS project is to have a wide variety of hardware modules available. So far, we have built four kinds of modules: the manipulator base, a link module, three pivot joint modules (one of which is shown in Figure 2), and one rotate joint module. The base module and the link module have no degrees-of-freedom; the joint modules have one degree-of-freedom each.
The mechanical design of the joint modules compactly fits a DC motor, a fail-safe brake, a tachometer, a harmonic drive and a resolver. The pivot and rotate joint modules use different outside housings to provide the right-angle or in-line configuration, respectively, but are identical internally. Figure 4 shows in cross-section the internal structure of a pivot joint. Each joint module includes a DC torque motor and a 100:1 harmonic-drive speed reducer, and is rated at a maximum speed of 1.5 rad/s and a maximum torque of 270 Nm. Each module has a mass of approximately 10.7 kg. A single, compact, X-type bearing connects the two joint halves and provides the needed overturning rigidity. A hollow motor shaft passes through all the rotary components, and provides a channel for passage of cabling with minimal flexing.

2.2 Electronic design

The custom-designed on-board electronics also follow the principle of modularity. Each RMMS module contains a motherboard which provides the basic functionality and onto which daughtercards can be stacked to add module-specific functionality.

The motherboard consists of a Siemens 80C166 microcontroller, 64K of ROM, 64K of RAM, an SMC COM20020 universal local area network controller with an RS-485 driver, and an RS-232 driver. The function of the motherboard is to establish communication with the host interface via an RS-485 bus and to perform the low-level control of the module, as is explained in more detail in Section 4. The RS-232 serial bus driver allows for simple diagnostics and software prototyping.

A stacking connector permits the addition of an indefinite number of daughtercards with various functions, such as sensor interfaces, motor controllers, RAM expansion, etc. In our current implementation, only modules with actuators include a daughtercard. This card contains a 16-bit resolver-to-digital converter, a 12-bit A/D converter to interface with the tachometer, and a 12-bit D/A converter to control the motor amplifier; we have used an off-the-shelf motor amplifier (Galil Motion Control model SSA-8/80) to drive the DC motor. For modules with more than one degree-of-freedom, for instance a wrist module, more than one such daughtercard can be stacked onto the same motherboard.

3 Integrated quick-coupling connectors

For a modular manipulator to be reconfigurable, the modules must be easy to connect with each other. We have developed a quick-coupling mechanism with which a secure mechanical connection between modules can be achieved by simply turning a ring hand-tight; no tools are required. As shown in Figure 5, keyed flanges provide precise registration of the two modules. Turning the locking collar on the male end produces two distinct motions: first, the fingers of the locking ring rotate (with the collar) about 22.5 degrees and capture the fingers on the flanges; second, the collar rotates relative to the locking ring, while a cam mechanism forces the fingers inward to securely grip the mating flanges. A ball-transfer mechanism between the collar and locking ring automatically produces this sequence of motions.

At the same time the mechanical connection is made, pneumatic and electronic connections are also established. Inside the locking ring is a modular connector that has 30 male electrical pins plus a pneumatic coupler in the middle. These correspond to matching female components on the mating connector. Sets of pins are wired in parallel to carry the 72 V, 25 A power for motors and brakes, and 48 V, 6 A power for the electronics.
Additional pins carry signals for two RS-485 serial communication buses and four video buses. A plastic guide collar plus six alignment pins prevent damage to the connector pins and assure proper alignment. The plastic block holding the female pins can rotate in the housing to accommodate the eight different possible connection orientations (8 @ 45 degrees). The relative orientation is automatically registered by means of an infrared LED in the female connector and eight photodetectors in the male connector.

4 ARMbus communication system

Each of the modules of the RMMS communicates with a VME-based host interface over a local area network called the ARMbus; each module is a node of the network. The communication is done in a serial fashion over an RS-485 bus which runs through the length of the manipulator. We use the ARCNET protocol [1] implemented on a dedicated IC (SMC COM20020). ARCNET is a deterministic token-passing network scheme which avoids network collisions and guarantees each node its time to access the network. Blocks of information called packets may be sent from any node on the network to any one of the other nodes, or to all nodes simultaneously (broadcast). Each node may send one packet each time it gets the token. The maximum network throughput is 5 Mb/s.

The first node of the network resides on the host interface card, as is depicted in Figure 6. In addition to a VME address decoder, this card contains essentially the same hardware one can find on a module motherboard. The communication between the VME side of the card and the ARCNET side occurs through dual-port RAM.

There are two kinds of data passed over the local area network. During the manipulator initialization phase, the modules connect to the network one by one, starting at the base and ending at the end-effector. On joining the network, each module sends a data packet to the host interface containing its serial number and its relative orientation with respect to the previous module. This information allows us to automatically determine the current manipulator configuration.

During the operation phase, the host interface communicates with each of the nodes at 400 Hz. The data that is exchanged depends on the control mode—centralized or distributed. In centralized control mode, the torques for all the joints are computed on the VME-based real-time processing unit (RTPU), assembled into a data packet by the microcontroller on the host interface card and broadcast over the ARMbus to all the nodes of the network. Each node extracts its torque value from the packet and replies by sending a data packet containing the resolver and tachometer readings. In distributed control mode, on the other hand, the host computer broadcasts the desired joint values and feed-forward torques. Locally, in each module, the control loop can then be closed at a frequency much higher than 400 Hz. The modules still send sensor readings back to the host interface to be used in the computation of the subsequent feed-forward torque.

5 Modular and reconfigurable control software

The control software for the RMMS has been developed using the Chimera real-time operating system, which supports reconfigurable and reusable software components [15]. The software components used to control the RMMS are listed in Table 1. The trjjline, dls, and grav_comp components require knowledge of certain configuration-dependent parameters of the RMMS, such as the number of degrees-of-freedom, the Denavit-Hartenberg parameters, etc.
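For illustration, the damped least-squares update computed by a component like dls admits a compact numerical form. The following is a minimal Python sketch, not the RMMS implementation; the damping constant is illustrative:

import numpy as np

def dls_ik_step(J, dx, damping=0.05):
    # One damped least-squares step: dq = J^T (J J^T + lambda^2 I)^(-1) dx.
    # The damping term keeps joint velocities bounded near singular
    # configurations, where an undamped pseudo-inverse would blow up.
    J = np.asarray(J, dtype=float)
    JJt = J @ J.T
    reg = (damping ** 2) * np.eye(JJt.shape[0])
    return J.T @ np.linalg.solve(JJt + reg, dx)

Trading a small tracking error for bounded joint velocities is what makes such a scheme usable for arbitrary module assemblies, for which no closed-form inverse kinematics may exist.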
During the initialization phase, the RMMS interface establishes contact with each of the hardware modules to determine automatically which modules are being used and in which order and orientation they have been assembled. For each module, a data file with a parametric model is read. By combining this information for all the modules, kinematic and dynamic models of the entire manipulator are built.

After the initialization, the rmms software component operates in a distributed control mode in which the microcontrollers of each of the RMMS modules perform PID control locally at 1900 Hz. The communication between the modules and the host interface is at 400 Hz, which can differ from the cycle frequency of the rmms software component. Since we use a triple-buffer mechanism [16] for the communication through the dual-port RAM on the ARMbus host interface, no synchronization or handshaking is necessary. Because closed-form inverse kinematics do not exist for all possible RMMS configurations, we use a damped least-squares kinematic controller to do the inverse kinematics computation numerically.

6 Seamless integration of simulation

To assist the user in evaluating whether an RMMS configuration can successfully complete a given task, we have built a simulator. The simulator is based on the TeleGrip robot simulation software from Deneb Inc., and runs on an SGI Crimson which is connected with the real-time processing unit through a Bit3 VME-to-VME adaptor, as is shown in Figure 6. A graphical user interface allows the user to assemble simulated RMMS configurations very much like assembling the real hardware. Completed configurations can be tested and programmed using the TeleGrip functions for robot devices. The configurations can also be interfaced with the Chimera real-time software running on the same RTPUs used to control the actual hardware. As a result, it is possible to evaluate not only the movements of the manipulator but also the real-time CPU usage and load balancing. Figure 7 shows an RMMS simulation compared with the actual task execution.

7 Summary

We have developed a Reconfigurable Modular Manipulator System which currently consists of six hardware modules, with a total of four degrees-of-freedom. These modules can be assembled in a large number of different configurations to tailor the kinematic and dynamic properties of the manipulator to the task at hand. The control software for the RMMS automatically adapts to the assembly configuration by building kinematic and dynamic models of the manipulator; this is totally transparent to the user. To assist the user in evaluating whether a manipulator configuration is well suited for a given task, we have also built a simulator.

Acknowledgment

This research was funded in part by DOE under grant DE-F902-89ER14042, by Sandia National Laboratories under contract AL-3020, by the Department of Electrical and Computer Engineering, and by The Robotics Institute, Carnegie Mellon University. The authors would also like to thank Randy Casciola, Mark DeLouis, Eric Hoffman, and Jim Moody for their valuable contributions to the design of the RMMS system.

Attachment 2: A Rapidly Deployable Manipulator System

Authors: Christiaan J.J. Paredis, H. Benjamin Brown, Pradeep K. Khosla

Abstract: A rapidly deployable manipulator system combines the flexibility of reconfigurable modular hardware with modular programming tools, allowing the user to rapidly build a manipulator custom-tailored for a given task.


Beijing Union University Graduation Design (Thesis) Assignment. Topic: Design and Simulation of OFDM Modulation and Demodulation Technology. Major: Communication Engineering. Advisor: Zhang Xuefen. College: College of Information. Student ID: 2011080331132. Class: 1101B. Name: Xu Jiaming.

I. Original Text

Evolution Towards 5G Multi-tier Cellular Wireless Networks: An Interference Management Perspective

Ekram Hossain, Mehdi Rasti, Hina Tabassum, and Amr Abdelnasser

Abstract—The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, e.g., higher data rates, excellent end-to-end performance and user-coverage in hot-spots and crowded areas with lower latency, energy consumption and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum- and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g., power control, cell association) in these networks with shared spectrum access (i.e., when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context, a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.

Index Terms—5G cellular wireless, multi-tier networks, interference management, cell association, power control.

I. INTRODUCTION

To satisfy the ever-increasing demand for mobile broadband communications, the IMT-Advanced (IMT-A) standards were ratified by the International Telecommunications Union (ITU) in November 2010, and the fourth generation (4G) wireless communication systems are currently being deployed worldwide. The standardization for LTE Rel-12, also known as LTE-B, is also ongoing and expected to be finalized in 2014. Nonetheless, existing wireless systems will not be able to deal with the thousand-fold increase in total mobile broadband data [1] contributed by new applications and services such as pervasive 3D multimedia, HDTV, VoIP, gaming, e-Health, and Car2x communication. In this context, the fifth generation (5G) wireless communication technologies are expected to attain 1000 times higher mobile data volume per unit area, 10-100 times higher number of connected devices and user data rate, 10 times longer battery life and 5 times reduced latency [2]. While for 4G networks the single-user average data rate is expected to be 1 Gbps, it is postulated that a cell data rate of the order of 10 Gbps will be a key attribute of 5G networks.

5G wireless networks are expected to be a mixture of network tiers of different sizes, transmit powers, backhaul connections, and different radio access technologies (RATs) that are accessed by an unprecedented number of smart and heterogeneous wireless devices.
This architectural enhancement, along with advanced physical communications technology such as high-order spatial multiplexing multiple-input multiple-output (MIMO) communications, will provide higher aggregate capacity for more simultaneous users, or higher-level spectral efficiency, when compared to the 4G networks. Radio resource and interference management will be a key research challenge in multi-tier and heterogeneous 5G cellular networks. The traditional methods for radio resource and interference management (e.g., channel allocation, power control, cell association or load balancing) in single-tier networks (even some of those developed for two-tier networks) may not be efficient in this environment, and a new look into the interference management problem will be required.

First, the article outlines the visions and requirements of 5G cellular wireless systems. Major research challenges are then highlighted from the perspective of interference management when the different network tiers share the same radio spectrum. A comparative analysis of the existing approaches for distributed cell association and power control (CAPC) is then provided, followed by a discussion on their limitations for 5G multi-tier cellular networks. Finally, a number of suggestions are provided to modify the existing CAPC schemes to overcome these limitations.

II. VISIONS AND REQUIREMENTS FOR 5G MULTI-TIER CELLULAR NETWORKS

5G mobile and wireless communication systems will require a mix of new system concepts to boost the spectral and energy efficiency. The visions and requirements for 5G wireless systems are outlined below.

·Data rate and latency: For dense urban areas, 5G networks are envisioned to enable an experienced data rate of 300 Mbps and 60 Mbps in downlink and uplink, respectively, in 95% of locations and time [2]. The end-to-end latencies are expected to be on the order of 2 to 5 milliseconds. The detailed requirements for different scenarios are listed in [2].

·Machine-type Communication (MTC) devices: The number of traditional human-centric wireless devices with Internet connectivity (e.g., smart phones, super-phones, tablets) may be outnumbered by MTC devices which can be used in vehicles, home appliances, surveillance devices, and sensors.

·Millimeter-wave communication: To satisfy the exponential increase in traffic and the addition of different devices and services, additional spectrum beyond what was previously allocated to the 4G standard is sought. The use of millimeter-wave frequency bands (e.g., the 28 GHz and 38 GHz bands) is a potential candidate to overcome the problem of scarce spectrum resources, since it allows transmission at wider bandwidths than the conventional 20 MHz channels of 4G systems.

·Multiple RATs: 5G is not about replacing the existing technologies, but it is about enhancing and supporting them with new technologies [1]. In 5G systems, the existing RATs, including GSM (Global System for Mobile Communications), HSPA+ (Evolved High-Speed Packet Access), and LTE, will continue to evolve to provide a superior system performance. They will also be accompanied by some new technologies (e.g., beyond LTE-Advanced).

·Base station (BS) densification: BS densification is an effective methodology to meet the requirements of 5G wireless networks. Specifically, in 5G networks, there will be deployments of a large number of low-power nodes, relays, and device-to-device (D2D) communication links with much higher density than today's macrocell networks.
Fig. 1 shows such a multi-tier network with a macrocell overlaid by relays, picocells, femtocells, and D2D links. The adoption of multiple tiers in the cellular network architecture will result in better performance in terms of capacity, coverage, spectral efficiency, and total power consumption, provided that the inter-tier and intra-tier interferences are well managed.

·Prioritized spectrum access: The notions of both traffic-based and tier-based priorities will exist in 5G networks. Traffic-based priority arises from the different requirements of the users (e.g., reliability and latency requirements, energy constraints), whereas the tier-based priority is for users belonging to different network tiers. For example, with shared spectrum access among macrocells and femtocells in a two-tier network, femtocells create "dead zones" around them in the downlink for macro users. Protection should, thus, be guaranteed for the macro users. Consequently, the macro and femto users play the role of high-priority users (HPUEs) and low-priority users (LPUEs), respectively. In the uplink direction, the macrocell users at the cell edge typically transmit with high powers, which generates high uplink interference to nearby femtocells. Therefore, in this case, the user priorities should get reversed. Another example is a D2D transmission where different devices may opportunistically access the spectrum to establish a communication link between them, provided that the interference introduced to the cellular users remains below a given threshold. In this case, the D2D users play the role of LPUEs whereas the cellular users play the role of HPUEs.

·Network-assisted D2D communication: In LTE Rel-12 and beyond, focus will be on network-controlled D2D communications, where the macrocell BS performs control signaling in terms of synchronization, beacon signal configuration and providing identity and security management [3]. This feature will extend in 5G networks to allow other nodes, rather than the macrocell BS, to have the control. For example, consider a D2D link at the cell edge where the direct link between the D2D transmitter UE and the macrocell is in deep fade; then the relay node can be responsible for the control signaling of the D2D link (i.e., relay-aided D2D communication).

·Energy harvesting for energy-efficient communication: One of the main challenges in 5G wireless networks is to improve the energy efficiency of the battery-constrained wireless devices. To prolong the battery lifetime as well as to improve the energy efficiency, an appealing solution is to harvest energy from environmental energy sources (e.g., solar and wind energy). Also, energy can be harvested from ambient radio signals (i.e., RF energy harvesting) with reasonable efficiency over small distances. The harvested energy could be used for D2D communication or communication within a small cell. In this context, simultaneous wireless information and power transfer (SWIPT) is a promising technology for 5G wireless networks. However, practical circuits for harvesting energy are not yet available, since the conventional receiver architecture is designed for information transfer only and, thus, may not be optimal for SWIPT. This is due to the fact that information and power transfer operate with different power sensitivities at the receiver (e.g., -10 dBm and -60 dBm for energy and information receivers, respectively) [4].
Also, due to the potentially low efficiency of energy harvesting from ambient radio signals, a combination of different energy harvesting technologies may be required for macrocell communication.

[Fig. 1. A multi-tier network composed of macrocells, picocells, femtocells, relays, and D2D links. Arrows indicate wireless links, whereas the dashed lines denote the backhaul connections.]

III. INTERFERENCE MANAGEMENT CHALLENGES IN 5G MULTI-TIER NETWORKS

The key challenges for interference management in 5G multi-tier networks will arise due to the following reasons, which affect the interference dynamics in the uplink and downlink of the network: (i) heterogeneity and dense deployment of wireless devices, (ii) coverage and traffic load imbalance due to varying transmit powers of different BSs in the downlink, (iii) public or private access restrictions in different tiers that lead to diverse interference levels, and (iv) the priorities in accessing channels of different frequencies and resource allocation strategies. Moreover, the introduction of carrier aggregation, cooperation among BSs (e.g., by using coordinated multi-point transmission (CoMP)) as well as direct communication among users (e.g., D2D communication) may further complicate the dynamics of the interference. The above factors translate into the following key challenges.

·Designing optimized cell association and power control (CAPC) methods for multi-tier networks: Optimizing the cell associations and transmit powers of users in the uplink or the transmit powers of BSs in the downlink are classical techniques to simultaneously enhance the system performance in various aspects such as interference mitigation, throughput maximization, and reduction in power consumption. Typically, the former is needed to maximize spectral efficiency, whereas the latter is required to minimize the power (and hence minimize the interference to other links) while keeping the desired link quality. Since it is not efficient to connect to a congested BS despite its high achieved signal-to-interference ratio (SIR), cell association should also consider the status of each BS (load) and the channel state of each UE. The increase in the number of available BSs along with multi-point transmissions and carrier aggregation provide multiple degrees of freedom for resource allocation and cell-selection strategies. For power control, the priority of different tiers needs also to be maintained by incorporating the quality constraints of HPUEs. Unlike the downlink, the transmission power in the uplink depends on the user's battery power, irrespective of the type of BS with which users are connected. The battery power does not vary significantly from user to user; therefore, the problems of coverage and traffic load imbalance may not exist in the uplink. This leads to considerable asymmetries between the uplink and downlink user association policies. Consequently, the optimal solutions for downlink CAPC problems may not be optimal for the uplink. It is therefore necessary to develop joint optimization frameworks that can provide near-optimal, if not optimal, solutions for both uplink and downlink.
Moreover, to deal with this issue of asymmetry, separate uplink and downlink optimal solutions are also useful insofar as mobile users can connect with two different BSs for uplink and downlink transmissions, which is expected to be the case in 5G multi-tier cellular networks [3].

·Designing efficient methods to support simultaneous association to multiple BSs: Compared to existing CAPC schemes in which each user can associate to a single BS, simultaneous connectivity to several BSs could be possible in a 5G multi-tier network. This would enhance the system throughput and reduce the outage ratio by effectively utilizing the available resources, particularly for cell edge users. Thus the existing CAPC schemes should be extended to efficiently support simultaneous association of a user to multiple BSs and determine under which conditions a given UE is associated to which BSs in the uplink and/or downlink.

·Designing efficient methods for cooperation and coordination among multiple tiers: Cooperation and coordination among different tiers will be a key requirement to mitigate interference in 5G networks. Cooperation between the macrocell and small cells was proposed for LTE Rel-12 in the context of soft cell, where the UEs are allowed to have dual connectivity by simultaneously connecting to the macrocell and the small cell for uplink and downlink communications or vice versa [3]. As has been mentioned before in the context of asymmetry of transmission power in uplink and downlink, a UE may experience the highest downlink power transmission from the macrocell, whereas the highest uplink path gain may be toward a nearby small cell. In this case, the UE can associate to the macrocell in the downlink and to the small cell in the uplink. CoMP schemes based on cooperation among BSs in different tiers (e.g., cooperation between macrocells and small cells) can be developed to mitigate interference in the network. Such schemes need to be adaptive and consider user locations as well as channel conditions to maximize the spectral and energy efficiency of the network. This cooperation, however, requires tight integration of low-power nodes into the network through the use of reliable, fast and low-latency backhaul connections, which will be a major technical issue for upcoming multi-tier 5G networks.

In the remainder of this article, we will focus on the review of existing power control and cell association strategies to demonstrate their limitations for interference management in 5G multi-tier prioritized cellular networks (i.e., where users in different tiers have different priorities depending on the location, application requirements and so on). Design guidelines will then be provided to overcome these limitations. Note that issues such as channel scheduling in the frequency domain, time-domain interference coordination techniques (e.g., based on almost blank subframes), coordinated multi-point transmission, and spatial-domain techniques (e.g., based on smart antenna techniques) are not considered in this article.

IV. DISTRIBUTED CELL ASSOCIATION AND POWER CONTROL SCHEMES: CURRENT STATE OF THE ART

A. Distributed Cell Association Schemes

The state-of-the-art cell association schemes that are currently under investigation for multi-tier cellular networks are reviewed and their limitations are explained below.

·Reference Signal Received Power (RSRP)-based scheme [5]: A user is associated with the BS whose signal is received with the largest average strength.
A variant of RSRP, Reference Signal Received Quality (RSRQ), is also used for cell selection in LTE single-tier networks; it is similar to signal-to-interference ratio (SIR)-based cell selection, in which a user selects the BS that yields the highest SIR. In single-tier networks with uniform traffic, such a criterion may maximize the network throughput. However, due to the varying transmit powers of different BSs in the downlink of multi-tier networks, such cell association policies can create a huge traffic load imbalance. This phenomenon leads to overloading of high-power tiers while leaving low-power tiers underutilized.

·Bias-based Cell Range Expansion (CRE) [6]: The idea of CRE has emerged as a remedy to the problem of load imbalance in the downlink. It aims to increase the downlink coverage footprint of low-power BSs by adding a positive bias to their signal strengths (i.e., RSRP or RSRQ). Such BSs are referred to as biased BSs. This biasing allows more users to associate with low-power or biased BSs and thereby achieve better cell load balancing. Nevertheless, such off-loaded users may experience unfavorable channel conditions from the biased BSs and strong interference from the unbiased high-power BSs. The trade-off between cell load balancing and system throughput therefore strictly depends on the selected bias values, which need to be optimized in order to maximize the system utility. In this context, a baseline approach in LTE-Advanced is to "orthogonalize" the transmissions of the biased and unbiased BSs in the time/frequency domain such that an interference-free zone is created.

·Association based on Almost Blank Sub-frame (ABS) ratio [7]: The ABS technique uses time-domain orthogonalization in which specific sub-frames are left blank by the unbiased BS and off-loaded users are scheduled within these sub-frames to avoid inter-tier interference. This improves the overall throughput of the off-loaded users by sacrificing the time sub-frames and throughput of the unbiased BS. Larger bias values result in a higher degree of offloading and thus require more blank subframes to protect the offloaded users. Given a specific number of ABSs, or the ratio of blank to total number of sub-frames (i.e., the ABS ratio) that ensures the minimum throughput of the unbiased BSs, this criterion allows a user to select the cell with the maximum ABS ratio and may even associate with the unbiased BS if the ABS ratio decreases significantly.

A qualitative comparison among these cell association schemes is given in Table I. The specific key terms used in Table I are defined as follows: channel-aware schemes depend on the knowledge of the instantaneous channel and transmit power at the receiver. The interference-aware schemes depend on the knowledge of instantaneous interference at the receiver. The load-aware schemes depend on the traffic load information (e.g., number of users). The resource-aware schemes require the resource allocation information (i.e., the chance of getting a channel or the proportion of resources available in a cell). The priority-aware schemes require information regarding the priority of different tiers and allow protection of HPUEs. All of the above-mentioned schemes are independent, distributed, and can be incorporated with any type of power control scheme.
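As a concrete illustration of RSRP-based association and bias-based CRE, consider the following minimal Python sketch; the RSRP and bias numbers are invented for illustration and are not taken from the article:

def associate(rsrp_dbm, bias_db):
    # Pick the BS maximizing the biased RSRP; low-power BSs are given a
    # positive bias so that they capture more users (cell range expansion).
    return max(rsrp_dbm, key=lambda bs: rsrp_dbm[bs] + bias_db.get(bs, 0.0))

rsrp = {"macro": -80.0, "pico": -92.0}   # received powers at one UE (dBm)
bias = {"pico": 15.0}                    # bias of the low-power BS (dB)

print(associate(rsrp, {}))     # -> 'macro' (plain RSRP association)
print(associate(rsrp, bias))   # -> 'pico'  (CRE off-loads the UE)

The sketch makes the trade-off discussed above concrete: the bias moves the UE to the lightly loaded picocell even though the macrocell offers 12 dB more received power.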
Although simple and tractable, the standard cell association schemes, i.e., RSRP, RSRQ, and CRE, are unable to guarantee optimum performance in multi-tier networks unless critical parameters, such as bias values, transmit power of the users in the uplink and of the BSs in the downlink, resource partitioning, etc., are optimized.

[Table I. Qualitative comparison of existing cell association schemes for multi-tier networks.]

B. Distributed Power Control Schemes

From a user's point of view, the objective of power control is to support a user with its minimum acceptable throughput, whereas from a system's point of view it is to maximize the aggregate throughput. In the former case, it is required to compensate for the near-far effect by allocating higher power levels to users with poor channels as compared to UEs with good channels. In the latter case, high power levels are allocated to users with the best channels and very low (even zero) power levels are allocated to others. The aggregate transmit power, the outage ratio, and the aggregate throughput (i.e., the sum of achievable rates by the UEs) are the most important measures to compare the performance of different power control schemes. The outage ratio of a particular tier can be expressed as the ratio of the number of UEs supported by a tier with their minimum target SIRs to the total number of UEs in that tier. Numerous power control schemes have been proposed in the literature for single-tier cellular wireless networks. According to the corresponding objective functions and assumptions, the schemes can be classified into the following four types.

·Target-SIR-tracking power control (TPC) [8]: In TPC, each UE tracks its own predefined fixed target-SIR. TPC enables the UEs to achieve their fixed target-SIRs at minimal aggregate transmit power, assuming that the target-SIRs are feasible. However, when the system is infeasible, all non-supported UEs (those that cannot obtain their target-SIRs) transmit at their maximum power, which causes unnecessary power consumption and interference to other users, and therefore increases the number of non-supported UEs.

·TPC with gradual removal (TPC-GR) [9], [10], and [11]: To decrease the outage ratio of TPC in an infeasible system, a number of TPC-GR algorithms were proposed in which non-supported users reduce their transmit power [10] or are gradually removed [9], [11].

·Opportunistic power control (OPC) [12]: From the system's point of view, OPC allocates high power levels to users with good channels (experiencing high path-gains and low interference levels) and very low power to users with poor channels. In this algorithm, a small difference in path-gains between two users may lead to a large difference in their actual throughputs [12]. OPC improves the system performance at the cost of reduced fairness among users.

·Dynamic-SIR-tracking power control (DTPC) [13]: When the target-SIR requirements for users are feasible, TPC causes users to exactly hit their fixed target-SIRs even if additional resources are still available that could otherwise be used to achieve higher SIRs (and thus better throughputs). Besides, the fixed-target-SIR assignment is suitable only for voice service, for which reaching a SIR value higher than the given target does not affect the service quality significantly. In contrast, for data services, a higher SIR results in a better throughput, which is desirable.
The DTPC algorithm was proposed in [13] to address the problem of system throughput maximization subject to a given feasible lower bound on the achieved SIRs of all users in cellular networks. In DTPC, each user dynamically sets its target-SIR by using TPC and OPC in a selective manner. It was shown that when the minimum acceptable target-SIRs are feasible, the actual SIRs received by some users can be dynamically increased (to a value higher than their minimum acceptable target-SIRs) in a distributed manner, as long as the required resources are available and the system remains feasible (meaning that reaching the minimum target-SIRs for the remaining users is guaranteed). This enhances the system throughput (at the cost of higher power consumption) as compared to TPC.

The aforementioned state-of-the-art distributed power control schemes for satisfying various objectives in single-tier wireless cellular networks are unable to address the interference management problem in prioritized 5G multi-tier networks. This is due to the fact that they do not guarantee that the total interference caused by the LPUEs to the HPUEs remains within tolerable limits, which can lead to SIR outage for some HPUEs. Thus there is a need to modify the existing schemes such that LPUEs track their objectives while limiting their transmit power to maintain a given interference threshold at HPUEs. A qualitative comparison among various state-of-the-art power control problems with different objectives and constraints, and their corresponding existing distributed solutions, is shown in Table II. This table also shows how these schemes can be modified and generalized for designing CAPC schemes for prioritized 5G multi-tier networks.

C. Joint Cell Association and Power Control Schemes

Very few works in the literature have considered the problem of distributed CAPC jointly with guaranteed convergence (e.g., [14]). For single-tier networks, a distributed framework for the uplink was developed in [14], which performs cell selection based on the effective interference (the ratio of instantaneous interference to channel gain) at the BSs and minimizes the aggregate uplink transmit power while attaining the users' desired SIR targets. Following this approach, a unified distributed algorithm was designed in [15] for two-tier networks. The cell association is based on the effective-interference metric and is integrated with a hybrid power control (HPC) scheme which is a combination of the TPC and OPC power control algorithms.

Although the above frameworks are distributed and optimal/suboptimal with guaranteed convergence in conventional networks, they may not be directly compatible with 5G multi-tier networks. The interference dynamics in multi-tier networks depend significantly on the channel access protocols (or scheduling), QoS requirements and priorities at different tiers. Thus, the existing CAPC optimization problems should be modified to include various types of cell selection methods (some examples are provided in Table I) and power control methods with different objectives and interference constraints (e.g., interference constraints for macrocell UEs, picocell UEs, or D2D receiver UEs). A qualitative comparison among the existing CAPC schemes along with the open research areas is given in Table II. A discussion on how these open problems can be addressed is provided in the next section.
V. DESIGN GUIDELINES FOR DISTRIBUTED CAPC SCHEMES IN 5G MULTI-TIER NETWORKS

Interference management in 5G networks requires efficient distributed CAPC schemes such that each user can possibly connect simultaneously to multiple BSs (possibly different for uplink and downlink), while achieving load balancing in different cells and guaranteeing interference protection for the HPUEs. In what follows, we provide a number of suggestions to modify the existing schemes.

A. Prioritized Power Control

To guarantee interference protection for HPUEs, a possible strategy is to modify the existing power control schemes listed in the first column of Table II such that the LPUEs limit their transmit power to keep the interference caused to the HPUEs below a predefined threshold, while tracking their own objectives. In other words, as long as the HPUEs are protected against the existence of LPUEs, the LPUEs could employ an existing distributed power control algorithm to satisfy a predefined goal. This offers some fruitful directions for future research and investigation, as stated in Table II. To address these open problems in a distributed manner, the existing schemes should be modified so that the LPUEs, in addition to setting their transmit power to track their objectives, limit their transmit power to keep their interference on the receivers of HPUEs below a given threshold. This could be implemented by sending a command from an HPUE to its nearby LPUEs (like a closed-loop power control command used to address the near-far problem) when the interference caused by the LPUEs to the HPUE exceeds a given threshold. We refer to this type of power control as prioritized power control. Note that the notion of priority, and thus the need for prioritized power control, exists implicitly in different scenarios of 5G networks, as briefly discussed in Section II. Along this line, some modified power control optimization problems are formulated for 5G multi-tier networks in the second column of Table II.

To compare the performance of existing distributed power control algorithms, let us consider a prioritized multi-tier cellular wireless network where a high-priority tier consisting of 3×3 macrocells, each of which covers an area of 1000 m × 1000 m, coexists with a low-priority tier consisting of n small cells per high-priority macrocell, each ...
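A minimal Python sketch of the prioritized power control idea described above, applied to the TPC update [8] of a single LPUE, is given below; the interference cap is the modification suggested in this section, and all names and parameters are illustrative:

def prioritized_tpc_step(p, sir, target_sir, p_max, gain_to_hpue, i_cap):
    # Classic TPC: scale power by the ratio of target SIR to measured SIR.
    p_next = p * target_sir / sir
    # Hardware/battery limit.
    p_next = min(p_next, p_max)
    # Prioritized cap: the interference this LPUE creates at the protected
    # HPUE receiver is gain_to_hpue * p, so cap p at i_cap / gain_to_hpue.
    p_next = min(p_next, i_cap / gain_to_hpue)
    return p_next

Without the last constraint this is plain TPC; with it, the LPUE keeps tracking its target SIR whenever possible but never drives the interference at the HPUE receiver above the threshold i_cap.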


Encoding the Java Virtual Machine's Instruction Set

1 Introduction

The development of programs that parse and analyze Java Bytecode [9] has a long history, and new programs are still developed [2,3,4,7,13]. When developing such tools, however, a lot of effort is spent on developing a parser for the bytecode and on (re-)developing standard control- and data-flow analyses which calculate, e.g., the control-flow graph or the data-dependency graph.

To reduce these efforts, we have developed a specification language (OPAL SPL) for encoding the instructions of stack-based intermediate languages. The idea is that—once the instruction set is completely specified using OPAL SPL—generating both bytecode parsers and standard analyses is much easier than their manual development. To support this goal, OPAL SPL supports the specification of both the format of bytecode instructions and the effect on the stack and registers these instructions have when executed. An alternative use of an OPAL SPL specification is as input to a generic parser or to generic analyses, as illustrated by Fig. 1.

Though the language was designed with Java Bytecode specifically in mind and is used to encode the complete instruction set of the Java Virtual Machine (JVM), we have striven for a Java-independent specification language. In particular, OPAL SPL focuses on specifying the instruction set rather than the complete class file format, not only because the former's structure is much more regular than the latter's, but also because a specification of the instruction set promises to be most beneficial. Given the primary focus of OPAL SPL—generating parsers and facilitating basic analyses—we explicitly designed the language such that it is possible to group related instructions. This makes specifications more concise and allows analyses to treat similar instructions in nearly the same way. For example, the JVM's iload_5 instruction, which loads the integer value stored in register #5, is a special case of the generic iload instruction where the instruction's operand is 5. We also designed OPAL SPL in such a way that specifications do not prescribe how a framework represents or processes information; i.e., OPAL SPL is representation agnostic.

The next section describes the specification language. In Section 3 we reason about the language's design by discussing the specification of selected JVM instructions. In Section 4 the validation of specifications is discussed. The evaluation of the approach is presented in Section 5. The paper ends with a discussion of related work and a conclusion.

2 Specifying Bytecode Instructions

The language for specifying bytecode instructions (OPAL SPL) was primarily designed to enable a concise specification of the JVM's instruction set. OPAL SPL supports the specification of both an instruction's format and its effect on the stack and local variables (registers) when the instruction is executed. It is thus possible to specify which kinds of values are popped from and pushed onto the stack, as well as which local variables are read or written. Given a specification of the complete instruction set, the information required by standard control- and data-flow analyses is then available. However, OPAL SPL is not particularly tied to Java, as it abstracts from the particularities of the JVM Specification.
For example, the JVM's type system is part of an OPAL SPL specification rather than an integral part of the OPAL SPL language itself. Next, we first give an overview of the language before we discuss its semantics.

2.1 Syntax

The OPAL Specification Language (OPAL SPL) is an XML-based language. Its grammar is depicted in Fig. 2 using an EBNF-like format. Non-terminals are written in capital letters (INSTRUCTIONS, TYPES, etc.), the names of XML elements are written in small letters (types, stack, etc.), and the names of XML attributes start with "@" (@type, @var, etc.). We refer to the content of an XML element using symbols that start with "/" (/VALUEEXPRESSION, /EXPECTEDVALUE, etc.). "<>" is used to specify nesting of elements. "( ),?,+,*,{},|" have the usual semantics. For example, exceptions<(exception @type)+> specifies that the XML element exceptions has one or more exception child elements that always have the attribute type.

2.2 Semantics

Format Specification

Each specification written in OPAL SPL consists of four major parts (line 1 in Fig. 2). The first part (types, lines 2–3) specifies the type system that is used by the underlying virtual machine. The second part (exceptions, line 4) declares the exceptions that may be thrown when instructions are executed. The third part (functions, line 5) declares the functions that are used in instruction specifications. The fourth part is the specification of the instructions themselves (lines 6–12), each of which may resort to the declared functions to access information not simply stored along with the instruction. For example, invoke instructions do not store the signature and declaring class of the called methods. Instead, a reference to an entry in the so-called constant pool is stored. Only this constant pool entry has all information about the method. To obtain, e.g., the return type of the called method, an abstract function TYPE methodref_return_type(methodref) is declared that takes a reference to the entry as input and returns the method's return type. Using abstract function declarations, we abstract—in the specification of the instructions—from the concrete representation of such information by the enclosing bytecode toolkit.

The specification of an instruction consists of up to four parts: the instruction's format (lines 7–8), a description of the effect the instruction has on the stack when executed (lines 9–10), a description of the registers it affects upon execution (lines 11–12), and information about the exceptions that may be thrown during execution (end of line 6). An instruction's format is specified by sequences which describe how an instruction is stored. The u1, u2 and u4 elements (line 8) of each format sequence specify that the current value is an unsigned integer value with 1, 2 and 4 bytes, respectively. Similarly, the i1, i2 and i4 elements (line 8) are used to specify that the current value is a (1-, 2- or 4-byte) signed integer value. The values can be bound to variables using the var attribute and can be given a second semantics using the type attribute. For example, <i2 type="short" var="value"/> is a two-byte signed integer value that is bound to the variable value and has type short with respect to the instruction set's type system. Additionally, it is possible to specify expected values (line 8). This enables the selection of the format sequence to be used for reading in the instruction. E.g., <sequence><u1 var="opcode">171</u1>... specifies that this sequence matches if the value of the first byte is 171.
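To illustrate how such format sequences translate into a parser, the following Python sketch shows the kind of table-driven reader a generator could emit; the tables and names are ours, not part of OPAL SPL, and only bipush (opcode 16) and sipush (opcode 17) are covered:

import struct

FORMATS = {
    16: [("i1", "value")],   # bipush: one signed 1-byte operand
    17: [("i2", "value")],   # sipush: one signed 2-byte operand
}
READERS = {"u1": (1, ">B"), "i1": (1, ">b"), "u2": (2, ">H"), "i2": (2, ">h")}

def parse_instruction(code, pc):
    # The opcode (a u1 with an expected value) selects the format sequence,
    # mirroring the unique prefix rule; the remaining elements are then read.
    opcode = code[pc]
    pc += 1
    operands = {}
    for element, var in FORMATS[opcode]:
        size, fmt = READERS[element]
        operands[var] = struct.unpack(fmt, code[pc:pc + size])[0]
        pc += size
    return opcode, operands, pc

print(parse_instruction(bytes([16, 5]), 0))        # bipush 5
print(parse_instruction(bytes([17, 1, 44]), 0))    # sipush 300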
A sequence's list element is used to specify that a variable number of values needs to be read. The concrete number of elements is determined by the count attribute. The attribute's value is an expression that can use values that were previously assigned to a variable. The sequence elements implicit and implicit_type are used to bind implicit value and type information to variables that can later on be used in type or value expressions (lines 7, 10 and 11). To make it possible to aggregate related bytecode instructions into one logical instruction, several format sequences can be defined. The effect on the stack is determined by the number and type of stack operands that are popped (line 9) and pushed (line 10). If multiple stack layouts are specified, the effect on the stack is determined by the first before-execution stack layout that matches; i.e., to determine the effect on the stack, a data-flow analysis is necessary.

Unique Prefix Rule

One constraint placed upon specifications written in OPAL SPL is that a format sequence can be identified unambiguously by parsing only a prefix of the instruction; no lookahead is necessary. In other words, if each format sequence is considered a production and each u1, u2, etc. is considered a terminal, then OPAL SPL requires the format sequences to constitute an LR(0) grammar. This unique prefix rule is checked automatically (cf. Sec. 4); furthermore, this rule facilitates generating fast parsers from the specification, e.g., using nested switch statements.

Type System

OPAL SPL does not have a hard-coded type hierarchy. Instead, each specification written in SPL contains a description of the type system used by the bytecode language being described. The only restriction is that all types have to be arranged in a single, strict hierarchy. The type hierarchy of the Java Virtual Machine Specification [9] is shown in Fig. 3 (1). It captures all runtime types known to the Java virtual machine, as well as those types that are used only at link- or compile-time, e.g., branchoffset, fieldref and methodref. The hierarchy is a result of the peculiarities of the JVM's instruction set. The byte-or-boolean type, e.g., is required to model the baload and bastore instructions, which operate on arrays of byte or boolean alike.

OPAL SPL's type system implicitly defines a second type hierarchy ((2) in Fig. 3). The declared hierarchy of types (1) is mirrored by a hierarchy of kinds (2); for every (lower-case) type there automatically exists an (upper-case) kind. This convention ensures their consistency and keeps the specification itself brief. The values of kind INT_LIKE are int, short, etc., just as the values of type int_like are 1, 2, etc. Kinds enable parameterizing logical instructions like areturn with types, thus making a concise specification of related instructions (e.g., freturn, ireturn, and areturn) possible (cf. Sec. 3.12).

Information Flow

In OPAL SPL, the flow of information (values, types, register IDs) is modeled by means of named variables and expressions using the variables. In general, the flow of information is subject to the constraints illustrated by Fig. 4. For example, variables defined within a specific format sequence can only be referred to by later elements within the same format sequence; a variable cannot be referred to across format sequences. If the same variable is bound by all format sequences, i.e., it is common to all format sequences, then the variable can be used to identify register IDs, the values pushed onto the stack, etc.
Similarly, if an instruction defines multiple stack layouts, then a value can only flow from the i-th stack layout before execution to the i-th stack layout after execution, and only information that is common to all stack layouts before execution may be stored in a register.

3 Design Discussion

The design of the OPAL specification language (OPAL SPL) is influenced by the peculiarities of the JVM's instruction set [9, Chapter 6]. In the following, we discuss those instructions that had a major influence on the design.

3.1 Modeling the Stack Bottom (athrow)

All JVM instructions, with the exception of athrow, specify only the number and types of operands popped from and pushed onto the stack; they do not determine the layout of the complete stack. In case of the athrow instruction, however, the stack layout after its execution is completely determined (Fig. 5, line 6); the single element on the stack is the thrown exception. This necessitates explicit modeling of the stack's contents beyond the operands that are pushed and popped by a particular instruction. The explicit modeling of the rest of the stack (line 5) hereby allows for the (implicit) modeling of stacks of a fixed size (line 6).

3.2 Pure Register Instructions (iinc)

The flow of information for instructions that do not affect the stack, e.g., the JVM's iinc instruction, is depicted in Fig. 7 and adheres to the general scheme of information flow (cf. Fig. 4). After parsing the instruction according to the format sequence (Fig. 6, lines 3-5), the two variables lvIndex and increment are initialized.

3.3 Interpretation of Arithmetic Instructions (iinc, add, sub, etc.)

The specification of iinc (Fig. 6) also illustrates OPAL SPL's ability to model computed values, e.g., add(value, increment). This information can subsequently be used, e.g., by static analyses to determine data dependencies or to perform abstract interpretations.

3.4 Constant Pool Handling (ldc)

The Java class file format achieves its compactness in part through the use of a constant pool. Hereby, immediate operands of an instruction are replaced by an index into the (global) pool. For example, in case of the load constant instruction ldc, the operand needs to be programmatically retrieved from the constant pool (Fig. 8, line 5). To obtain the value's type, one uses the reflective type of function that the enclosing toolkit has to provide (line 14).

3.5 Multiple Format Sequences, Single Logical Instruction

An instruction such as ldc, which may refer to an integer value in the constant pool, is conceptually similar to instructions such as iconst_0 or sipush; all of them push a constant value onto the operand stack. The primary difference between the format sequences of ldc (Fig. 8, lines 3-5) and iconst_0 (lines 6-7) is that the former's operand resides in the constant pool. In contrast, sipush encodes its operand explicitly in the bytecode stream as an immediate value (line 9). To facilitate standard control- and data-flow analyses, OPAL SPL abstracts away from such details, so that similar instructions can be subsumed by more generic instructions using explicit or implicit type and value bindings. A generic push instruction (Fig. 8), e.g., subsumes all JVM instructions that just push a constant value onto the stack. In this case the pushed value is either a computed value (line 5), an implicit value (line 7), or an immediate operand (line 9).
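The reflective functions form the boundary between the specification and the enclosing toolkit. A minimal Java sketch of what that boundary could look like, with invented names (the actual OPAL toolkit API is not shown in the text):

```java
// Invented sketch of the toolkit-side interface behind the abstract
// functions: the specification only names the functions; the toolkit
// decides how the constant pool is actually represented and accessed.
interface ToolkitFunctions {
    Object constantValue(int poolIndex);  // programmatic retrieval for ldc (Fig. 8, line 5)
    String typeOf(int poolIndex);         // the reflective "type of" function (line 14)
}
```

Because instruction specifications only refer to such functions, the same specification works whether a toolkit keeps the pool as raw bytes or as resolved objects.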
3.6 Variable Operand Counts (invokevirtual, invokespecial, etc.)

Some instructions pop a variable number of operands, e.g., the four invoke instructions invokevirtual, invokespecial, invokeinterface, and invokestatic. In their case the number of popped operands directly depends on the number of arguments of the called method. To support instructions that pop a variable number of operands, OPAL SPL provides the list element (Fig. 9, line 8). Using the list element's count attribute, it is possible to specify a function that determines the number of operands actually popped from the stack. It is furthermore possible, by using the loopvar attribute, to specify a variable iterating over these operands. The loop variable (i) can then be used inside the list element to specify the expected operands (line 10). This enables specification of both the expected number and type of operands, i.e., of the method arguments (lines 8-10). Using functions (methodrefargcount, methodrefargtype, ...) offloads the intricate handling of the constant pool to externally supplied code (cf. Sec. 3.4), i.e., the enclosing toolkit; the OPAL specification language itself remains independent of how the framework or toolkit under development stores such information.
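Functions such as methodrefargcount are likewise toolkit-supplied. As a hedged illustration of what such a function has to compute for the JVM, the following Java sketch counts the arguments encoded in a method descriptor; the class and method names are invented, but the descriptor grammar is the one defined in the JVM specification.

```java
final class Descriptors {
    // Counts the arguments in a JVM method descriptor, e.g.
    // "(ILjava/lang/String;[J)V" -> 3. Error handling omitted for brevity.
    static int argumentCount(String descriptor) {
        int count = 0;
        int i = descriptor.indexOf('(') + 1;
        while (descriptor.charAt(i) != ')') {
            while (descriptor.charAt(i) == '[') i++;     // skip array dimensions
            if (descriptor.charAt(i) == 'L') {
                i = descriptor.indexOf(';', i) + 1;      // reference type: L<class>;
            } else {
                i++;                                     // primitive type: one character
            }
            count++;
        }
        return count;
    }
}
```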
3.7 Exceptions

The specification of invokevirtual (Fig. 9) also makes explicit which exceptions the instruction may throw (line 16). This information is required by control-flow analyses and thus needs to be present in specifications. To identify the instructions which may handle the exception, the function caughtby needs to be defined by the toolkit. This function computes, given both the instruction's address and the type of the exception, the addresses of all instructions in the same method that handle the exception. Similar to the handling of the constant pool, OPAL SPL thus offloads the handling of the exceptions attribute.

3.8 Variable-length Instructions (tableswitch, lookupswitch)

The support for variable-length instructions (tableswitch, lookupswitch) is similar to the support for instructions with a variable stack size (cf. Sec. 3.6). In this case, an elements element can be used to specify how many times (Fig. 10, line 7) which kind of values (lines 8-9) need to be read. Hereby, the elements construct can accommodate multiple sequence elements (lines 7-10). The variable number of cases is, however, just one reason why tableswitch and lookupswitch are classified as variable-length instructions; the JVM Specification mandates that up to three padding bytes are inserted, to align the following format elements on a four-byte boundary (line 4).

3.9 Single Instruction, Multiple Operand Stacks (dup2)

The JVM specification defines several instructions that operate on the stack independent of their operands' types or, if we change the perspective, that behave differently depending on the type of the operands present on the stack prior to their execution. For example, the dup2 instruction (Fig. 11) duplicates the contents of two one-word stack slots. Instructions such as dup2 and dup2_x1 distinguish their operands by their computational type (category 1 or 2) rather than by their actual type (int, reference, etc.). This makes it possible to compactly encode instructions such as dup2 and motivates the corresponding level in the type hierarchy (cf. Sec. 2.2). Additionally, this requires that OPAL SPL supports multiple stack layouts.

In OPAL SPL, the stack is modeled as a list of operands, not as a list of slots as discussed in the JVM specification. While the effect of an instruction such as dup2 is more easily expressed in terms of stack slots, the vast majority of instructions naturally refers to operands. In particular, the decision to base the stack model on operands rather than slots avoids explicit modeling of the higher and lower halves of category-2 values, e.g., the high and low word of a 64-bit long operand.

3.10 (Conditional) Control Transfer Instructions (if, goto, jsr, ret)

To perform control-flow analyses it is necessary to identify those instructions that may transfer control, either by directly manipulating the program counter or by terminating the current method. This information is specified using the instruction element's optional transferscontrol attribute (Fig. 12, line 1). It specifies whether control is transferred conditionally or always. The target instruction to which control is transferred is identified by the values of type branchoffset or absoluteaddress. For these two types the type system contains the meta-information (cf. Fig. 3) that the values have to be interpreted either as relative or absolute program counters.

3.11 Multibyte Opcodes and Modifiers (wide instructions, newarray)

The JVM instruction set consists mostly of instructions whose opcode is a single byte, although a few instructions have longer opcode sequences. In most cases this is due to the wide modifier, a single-byte prefix to the instruction. In case of the newarray instruction, however, a suffix is used to determine its precise effect. As can be seen in Fig. 13, the parser needs to examine two bytes to determine the correct format sequence.

3.12 Implicit Types and Type Constructors

The specification of newarray (Fig. 13) also illustrates the specification of implied types and type constructors. As the JVM instruction set is a typed assembly language, many instructions exist in a variety of formats, e.g., as iadd, ladd, fadd, and dadd. The implicittype construct is designed to eliminate this kind of redundancy in the specification, resulting in a single, logical instruction: add. Similarly, newarray makes use of type bindings (lines 5, 8). But, to precisely model the effect of newarray on the operand stack, an additional function that constructs a type is needed. Given a type and an integer, the function array constructs a new type; here, a one-dimensional array of the base type (line 14).

3.13 Extension Mechanism

OPAL SPL has been designed with extensibility in mind. The extension point for additional information is the instruction element's appinfo child, whose content can consist of arbitrary elements with a namespace other than OPAL SPL's own. To illustrate the mechanism, suppose that we want to create a Prolog representation for Java bytecode in which information about operators is explicit, i.e., in which the ifgt instruction is an if instruction which compares two values using the greater-than operator, as illustrated by Fig. 14.

4 Validating Specifications

To validate an OPAL SPL specification, we have defined an XML Schema which ensures syntactic correctness of the specification and performs basic identity checking. It checks, for example, that each declared type and each instruction's mnemonic is unique.
Additionally, we have developed a program which analyzes a specification and detects the following errors: (a) a format sequence does not have a unique prefix path, (b) multiple format sequences of a single instruction do not agree in the variables bound by them, (c) the number or type of a function's arguments is wrong or its result is of the wrong type.

5 Evaluation

We have used the specification of the JVM's instruction set [9] for the implementation of a highly flexible bytecode toolkit. The toolkit supports four representations of Java bytecode: a native representation, which is a one-to-one representation of the Java bytecode; a higher-level representation, which abstracts away some details of Java bytecode, in particular the constant pool; an XML representation, which uses the higher-level representation; and a Prolog-based representation of Java bytecode, which is also based on the higher-level representation.

6 Related Work

Applying XML technologies to Java bytecode is not a new idea [5]. The XML serialization of class files, e.g., allows for their declarative transformation using XSLT. The XMLVM [11] project aims to support not only the JVM instruction set [9], but also the CLR instruction set [8]. This requires that at least the CLR's operand stack is transformed [12], as the JVM requires. The description of the effect that individual CLR instructions have on the operand stack is, however, not specified in an easily accessible format like OPAL SPL, but rather embedded within the XSL transformations.

7 Conclusion and Future Work

In future work, we will investigate the use of OPAL SPL for the encoding of other bytecode languages, such as the Common Intermediate Language. This would make it possible to develop (control- and data-flow) analyses with respect to OPAL SPL and to use the same analyses to analyze bytecode of different languages.

From: Encoding the Java Virtual Machine's Instruction Set

Translation: Encoding the Java Virtual Machine's Instruction Set. 1 Introduction. Interpreting and analyzing Java bytecode programs has a long history, and new approaches are still being investigated.

Graduation Thesis Foreign Literature Translation


Graduation Project (Thesis) Foreign Literature Translation
Department: School of Finance and Accounting
Year and Major: Financial Management, Class of 201*
Name:
Student ID: 132148***
Attachment: Financial Risk Management

【Abstract】Although financial risk has increased significantly in recent years, risk and risk management are not contemporary issues.

The result of increasingly global markets is that risk may originate with events thousands of miles away that have nothing to do with the domestic market.

Information is available instantaneously, which means that change and subsequent market reactions occur very quickly.

The economic climate and markets can be affected very quickly by changes in exchange rates, interest rates, and commodity prices.

Counterparties can rapidly become problematic.

As a result, it is important to ensure financial risks are identified and managed appropriately. Preparation is a key component of risk management.

【Key Words】Financial risk, Risk management, Yields

I. Financial risks arising

1.1 What Is Risk

1.1.1 The concept of risk

Risk provides the basis for opportunity. The terms risk and exposure have subtle differences in their meaning. Risk refers to the probability of loss, while exposure is the possibility of loss, although they are often used interchangeably.

Graduation Project Foreign Literature + Translation 1


Foreign translation. Original text: CHANGING ROLES OF THE CLIENTS, ARCHITECTS AND CONTRACTORS THROUGH BIM

Abstract

Purpose – This paper aims to present a general review of the practical implications of building information modelling (BIM) based on literature and case studies. It seeks to address the necessity for applying BIM and re-organising the processes and roles in hospital building projects. This type of project is complex due to complicated functional and technical requirements, decision making involving a large number of stakeholders, and long-term development processes.

Design/methodology/approach – Through desk research and referring to the ongoing European research project InPro, the framework for integrated collaboration and the use of BIM are analysed.

Findings – One of the main findings is the identification of the main factors for a successful collaboration using BIM, which can be recognised as "POWER": product information sharing (P), organisational roles synergy (O), work processes coordination (W), environment for teamwork (E), and reference data consolidation (R).

Originality/value – This paper contributes to the actual discussion in science and practice on the changing roles and processes that are required to develop and operate sustainable buildings with the support of integrated ICT frameworks and tools. It presents the state-of-the-art of European research projects and some of the first real cases of BIM application in hospital building projects.

Keywords: Europe, Hospitals, The Netherlands, Construction works, Response flexibility, Project planning

Paper type: General review

1. Introduction

Hospital building projects are of key importance; they involve significant investment and usually take a long-term development period. Hospital building projects are also very complex due to the complicated requirements regarding hygiene, safety, special equipment, and handling of a large amount of data. The building process is very dynamic and comprises iterative phases and intermediate changes. Many actors with shifting agendas, roles and responsibilities are actively involved, such as: the healthcare institutions, national and local governments, project developers, financial institutions, architects, contractors, advisors, facility managers, and equipment manufacturers and suppliers. Such building projects are very much influenced by the healthcare policy, which changes rapidly in response to the medical, societal and technological developments, and varies greatly between countries (World Health Organization, 2000). In The Netherlands, for example, the way a building project in the healthcare sector is organised is undergoing a major reform due to a fundamental change in the Dutch health policy that was introduced in 2008.

The rapidly changing context poses a need for a building with flexibility over its lifecycle. In order to incorporate life-cycle considerations in the building design, construction technique, and facility management strategy, a multidisciplinary collaboration is required. Despite the attempts to establish integrated collaboration, healthcare building projects still face serious problems in practice, such as: budget overruns, delays, and sub-optimal quality in terms of flexibility, end-user dissatisfaction, and energy inefficiency. It is evident that the lack of communication and coordination between the actors involved in the different phases of a building project is among the most important reasons behind these problems.
The communication between different stakeholders becomes critical, as each stakeholder possesses a different set of skills. As a result, the processes for extraction, interpretation, and communication of complex design information from drawings and documents are often time-consuming and difficult. Advanced visualisation technologies, like 4D planning, have tremendous potential to increase the communication efficiency and interpretation ability of the project team members. However, their use as an effective communication tool is still limited and not fully explored. There are also other barriers to information transfer and integration, for instance: many existing ICT systems do not support the openness of the data and structure that is a prerequisite for an effective collaboration between different building actors or disciplines.

Building information modelling (BIM) offers an integrated solution to the previously mentioned problems. Therefore, BIM is increasingly used as an ICT support in complex building projects. An effective multidisciplinary collaboration supported by an optimal use of BIM requires changing roles of the clients, architects, and contractors; new contractual relationships; and re-organised collaborative processes. Unfortunately, there are still gaps in the practical knowledge on how to manage the building actors so that they collaborate effectively in their changing roles, and how to develop and utilise BIM as an optimal ICT support of the collaboration.

This paper presents a general review of the practical implications of building information modelling (BIM) based on literature review and case studies. In the next sections, based on literature and recent findings from the European research project InPro, the framework for integrated collaboration and the use of BIM are analysed. Subsequently, through the observation of two ongoing pilot projects in The Netherlands, the changing roles of clients, architects, and contractors through BIM application are investigated. In conclusion, the critical success factors as well as the main barriers of a successful integrated collaboration using BIM are identified.

2. Changing roles through integrated collaboration and life-cycle design approaches

A hospital building project involves various actors, roles, and knowledge domains. In The Netherlands, the changing roles of clients, architects, and contractors in hospital building projects are inevitable due to the new healthcare policy. Previously, under the Healthcare Institutions Act (WTZi), healthcare institutions were required to obtain both a license and a building permit for new construction projects and major renovations. The permit was issued by the Dutch Ministry of Health. The healthcare institutions were then eligible to receive financial support from the government. Since 2008, new legislation on the management of hospital building projects and real estate has come into force. In this new legislation, a permit for a hospital building project under the WTZi is no longer obligatory, nor obtainable (Dutch Ministry of Health, Welfare and Sport, 2008). This change allows more freedom from the state-directed policy and, respectively, allocates more responsibilities to the healthcare organisations to deal with the financing and management of their real estate. The new policy implies that the healthcare institutions are fully responsible to manage and finance their building projects and real estate.
The government's support for the costs of healthcare facilities will no longer be given separately, but will be included in the fee for healthcare services. This means that healthcare institutions must earn back their investment on real estate through their services. This new policy intends to stimulate sustainable innovations in the design, procurement and management of healthcare buildings, which will contribute to effective and efficient primary healthcare services.

The new strategy for building projects and real estate management endorses an integrated collaboration approach. In order to assure sustainability during construction, use, and maintenance, the end-users, facility managers, contractors and specialist contractors need to be involved in the planning and design processes. The implications of the new strategy are reflected in the changing roles of the building actors and in the new procurement method.

In the traditional procurement method, the design, and its details, are developed by the architect and design engineers. Then, the client (the healthcare institution) sends an application to the Ministry of Health to obtain an approval on the building permit and the financial support from the government. Following this, a contractor is selected through a tender process that emphasises the search for the lowest-price bidder. During the construction period, changes often take place due to constructability problems of the design and new requirements from the client. Because of the high level of technical complexity, and moreover, decision-making complexities, the whole process from initiation until delivery of a hospital building project can take up to ten years. After the delivery, the healthcare institution is fully in charge of the operation of the facilities. Redesigns and changes also take place in the use phase to cope with new functions and developments in the medical world.

The integrated procurement pictures a new contractual relationship between the parties involved in a building project. Instead of a relationship between the client and architect for design, and the client and contractor for construction, in an integrated procurement the client only holds a contractual relationship with the main party that is responsible for both design and construction. The traditional borders between tasks and occupational groups become blurred, since architects, consulting firms, contractors, subcontractors, and suppliers all stand on the supply side in the building process while the client stands on the demand side. Such a configuration puts the architect, engineer and contractor in a very different position that influences not only their roles, but also their responsibilities, tasks and communication with the client, the users, the team and other stakeholders.

The transition from the traditional to the integrated procurement method requires a shift of mindset of the parties on both the demand and supply sides. It is essential for the client and contractor to have a fair and open collaboration in which both can optimally use their competencies. The effectiveness of integrated collaboration is also determined by the client's capacity and strategy to organise innovative tendering procedures. A new challenge emerges when an architect is positioned in a partnership with the contractor instead of with the client.
If the architect enters a partnership with the contractor, an important issue is how to ensure the realisation of the architectural values as well as innovative engineering through an efficient construction process. In another case, the architect can stand at the client's side in a strategic advisory role instead of being the designer. In this case, the architect's responsibility is translating the client's requirements and wishes into the architectural values to be included in the design specification, and evaluating the contractor's proposal against this. In either of these new roles, the architect holds the responsibilities of stakeholder interest facilitator, custodian of customer value and custodian of design models.

The transition from the traditional to the integrated procurement method also brings consequences for the payment schemes. In the traditional building process, the honorarium for the architect is usually based on a percentage of the project costs; this may simply mean that the more expensive the building is, the higher the honorarium will be. The engineer receives the honorarium based on the complexity of the design and the intensity of the assignment. A highly complex building, which takes a number of redesigns, is usually favourable for the engineers in terms of honorarium. A traditional contractor usually receives the commission based on the tender to construct the building at the lowest price while meeting the minimum specifications given by the client. Extra work due to modifications is charged separately to the client. After the delivery, the contractor is no longer responsible for the long-term use of the building. In the traditional procurement method, all risks are placed with the client.

In the integrated procurement method, the payment is based on the achieved building performance; thus, the payment is non-adversarial. Since the architect, engineer and contractor have a wider responsibility for the quality of the design and the building, the payment is linked to a measurement system of the functional and technical performance of the building over a certain period of time. The honorarium becomes an incentive to achieve the optimal quality. If the building actors succeed in delivering a higher added value that exceeds the client's minimum requirements, they will receive a bonus in accordance with the client's extra gain. The level of transparency is also improved. Open book accounting is an excellent instrument provided that the stakeholders agree on the information to be shared and on its level of detail (InPro, 2009).

Next to the adoption of the integrated procurement method, the new real estate strategy for hospital building projects addresses innovative product development and life-cycle design approaches. A sustainable business case for the investment and exploitation of hospital buildings relies on dynamic life-cycle management that includes considerations and analysis of the market development over time next to the building life-cycle costs (investment/initial cost, operational cost, and logistic cost). Compared to the conventional life-cycle costing method, dynamic life-cycle management encompasses a shift from focusing only on minimising the costs to focusing on maximising the total benefit that can be gained.
One of the determining factors for a successful implementation of dynamic life-cycle management is the sustainable design of the building and building components, which means that the design carries sufficient flexibility to accommodate possible changes in the long term (Prins, 1992).

Designing based on the principles of life-cycle management affects the role of the architect, as he needs to be well informed about the usage scenarios and related financial arrangements, the changing social and physical environments, and new technologies. Design needs to integrate people activities and business strategies over time. In this context, the architect is required to align the design strategies with the organisational, local and global policies on finance, business operations, health and safety, environment, etc.

The combination of process and product innovation, and the changing roles of the building actors, can be accommodated by integrated project delivery or IPD (AIA California Council, 2007). IPD is an approach that integrates people, systems, business structures and practices into a process that collaboratively harnesses the talents and insights of all participants to reduce waste and optimise efficiency through all phases of design, fabrication and construction. IPD principles can be applied to a variety of contractual arrangements. IPD teams will usually include members well beyond the basic triad of client, architect, and contractor. At a minimum, though, an Integrated Project should include a tight collaboration between the client, the architect, and the main contractor ultimately responsible for construction of the project, from the early design until the project handover. The key to a successful IPD is assembling a team that is committed to collaborative processes and is capable of working together effectively. IPD is built on collaboration. As a result, it can only be successful if the participants share and apply common values and goals.

3. Changing roles through BIM application

Building information model (BIM) comprises ICT frameworks and tools that can support the integrated collaboration based on a life-cycle design approach. BIM is a digital representation of physical and functional characteristics of a facility. As such it serves as a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its lifecycle from inception onward (National Institute of Building Sciences NIBS, 2007). BIM facilitates time- and place-independent collaborative working. A basic premise of BIM is collaboration by different stakeholders at different phases of the life cycle of a facility to insert, extract, update or modify information in the BIM to support and reflect the roles of that stakeholder. BIM in its ultimate form, as a shared digital representation founded on open standards for interoperability, can become a virtual information model to be handed from the design team to the contractor and subcontractors and then to the client.

BIM is not the same as the earlier known computer aided design (CAD). BIM goes further than an application to generate digital (2D or 3D) drawings. BIM is an integrated model in which all process and product information is combined, stored, elaborated, and interactively distributed to all relevant building actors. As a central model for all involved actors throughout the project lifecycle, BIM develops and evolves as the project progresses.
Using BIM, the proposed design and engineering solutions can be measured against the client's requirements and expected building performance. The functionalities of BIM to support the design process extend to multidimensional (nD), including: three-dimensional visualisation and detailing, clash detection, material scheduling, planning, cost estimation, production and logistic information, and as-built documents. During the construction process, BIM can support the communication between the building site, the factory and the design office, which is crucial for effective and efficient prefabrication and assembly processes as well as to prevent or solve problems related to unforeseen errors or modifications. When the building is in use, BIM can be used in combination with the intelligent building systems to provide and maintain up-to-date information on the building performance, including the life-cycle cost.

To unleash the full potential of more efficient information exchange in the AEC/FM industry in collaborative working using BIM, both high quality open international standards and high quality implementations of these standards must be in place. The IFC open standard is generally agreed to be of high quality and is widely implemented in software. Unfortunately, the certification process allows poor quality implementations to be certified and essentially renders the certified software useless for any practical usage with IFC. IFC-compliant BIM is actually used less than manual drafting for architects and contractors, and shows about the same usage for engineers. A recent survey shows that CAD (as a closed system) is still the major form of technique used in design work (over 60 per cent), while BIM is used in around 20 per cent of projects for architects and in around 10 per cent of projects for engineers and contractors.

The application of BIM to support an optimal cross-disciplinary and cross-phase collaboration opens a new dimension in the roles and relationships between the building actors. The most relevant issues are: the new role of a model manager; the agreement on access rights and Intellectual Property Rights (IPR); the liability and payment arrangement according to the type of contract and in relation to the integrated procurement; and the use of open international standards.

Collaborative working using BIM demands a new expert role of a model manager who possesses ICT as well as construction process know-how (InPro, 2009). The model manager deals with the system as well as with the actors. He provides and maintains technological solutions required for BIM functionalities, manages the information flow, and improves the ICT skills of the stakeholders.
The model manager does not take decisions on design and engineering solutions, nor on the organisational processes, but his roles in the chain of decision making are focused on: the development of BIM, the definition of the structure and detail level of the model, and the deployment of relevant BIM tools, such as for model checking, merging, and clash detection; the contribution to collaboration methods, especially decision making and communication protocols, task planning, and risk management; and the management of information, in terms of data flow and storage, identification of communication errors, and decision or process (re-)tracking.

Regarding the legal and organisational issues, one of the actual questions is: "In what way does the intellectual property right (IPR) in collaborative working using BIM differ from the IPR in a traditional teamwork?" In terms of combined work, the IPR of each element is attached to its creator. Although it seems to be a fully integrated design, BIM actually results from a combination of works/elements; for instance: the outline of the building design is created by the architect, the design for the electrical system is created by the electrical contractor, etc. Thus, in case of BIM as a combined work, the IPR is similar to traditional teamwork. Working with BIM with authorship registration functionalities may actually make it easier to keep track of the IPR.

How does collaborative working using BIM affect the contractual relationship? On the one hand, collaborative working using BIM does not necessarily change the liability position in the contract, nor does it obligate an alliance contract. The General Principles of BIM Addendum confirm: "This does not effectuate or require a restructuring of contractual relationships or shifting of risks between or among the Project Participants other than as specifically required per the Protocol Addendum and its Attachments" (ConsensusDOCS, 2008). On the other hand, changes in terms of payment schemes can be anticipated. Collaborative processes using BIM will lead to a shifting of activities to the early design phase. Much, if not all, of the activities in the detailed engineering and specification phase will be done in the earlier phases. It means that significant payment for the engineering phase, which may count up to 40 per cent of the design cost, can no longer be expected. As engineering work is done concurrently with the design, a new proportion of the payment in the early design phase is necessary.

4. Review of ongoing hospital building projects using BIM

In The Netherlands, the changing roles in hospital building projects are part of the strategy which aims at achieving sustainable real estate in response to the changing healthcare policy. Referring to the literature and previous research, the main factors that influence the success of the changing roles can be summarised as: the implementation of an integrated procurement method and a life-cycle design approach for a sustainable collaborative process; the agreement on the BIM structure and the intellectual rights; and the integration of the role of a model manager. The preceding sections have discussed the conceptual thinking on how to deal with these factors effectively.
This section observes two actual projects and compares the actual practice with the conceptual view. The main issues observed in the case studies are: the selected procurement method and the roles of the involved parties within this method; the implementation of the life-cycle design approach; the type, structure, and functionalities of BIM used in the project; the openness in data sharing and transfer of the model, and the intended use of BIM in the future; and the roles and tasks of the model manager.

The pilot experience of hospital building projects using BIM in The Netherlands can be observed at University Medical Centre St Radboud (further referred to as UMC) and Maxima Medical Centre (further referred to as MMC). At UMC, the new building project for the Faculty of Dentistry in the city of Nijmegen has been dedicated as a BIM pilot project. At MMC, BIM is used in designing new buildings for the Medical Simulation and Mother-and-Child Centre in the city of Veldhoven.

The first case is a project at the University Medical Centre (UMC) St Radboud. UMC is more than just a hospital. UMC combines medical services, education and research. More than 8,500 staff and 3,000 students work at UMC. As part of its innovative real estate strategy, UMC has considered using BIM for its building projects. The new development of the Faculty of Dentistry and the surrounding buildings on the Kapittelweg in Nijmegen has been chosen as a pilot project to gather practical knowledge and experience on collaborative processes with BIM support.

The main ambitions to be achieved through the use of BIM in the building projects at UMC can be summarised as follows: using 3D visualisation to enhance the coordination and communication among the building actors, and the user participation in design; integrating the architectural design with structural analysis, energy analysis, cost estimation, and planning; interactively evaluating the design solutions against the programme of requirements and specifications; reducing redesign/remake costs through clash detection during the design process; and optimising the management of the facility through the registration of medical installations and equipment, fixed and flexible furniture, product and output specifications, and operational data.

The second case is a project at the Maxima Medical Centre (MMC). MMC is a large hospital resulting from a merger between the Diaconessenhuis in Eindhoven and St Joseph Hospital in Veldhoven. Annually, the 3,400 staff of MMC provide medical services to more than 450,000 visitors and patients. A large-scale extension project of the hospital in Veldhoven is part of its real estate strategy. A medical simulation centre and a women-and-children medical centre are among the most important new facilities within this extension project. The design has been developed using 3D modelling with several functionalities of BIM.

The findings from both cases and the analysis are as follows. Both UMC and MMC opted for a traditional procurement method in which the client directly contracted an architect, a structural engineer, and a mechanical, electrical and plumbing (MEP) consultant in the design team. Once the design and detailed specifications are finished, a tender procedure will follow to select a contractor. Despite the choice for this traditional method, many attempts have been made towards a closer and more effective multidisciplinary collaboration.
UMC dedicated a relatively long preparation phase with the architect, structural engineer and MEP consultant before the design commenced. This preparation phase was aimed at creating a common vision on the optimal way of collaborating using BIM as an ICT support. Some results of this preparation phase are: a document that defines the common ambition for the project and the collaborative working process, and a semi-formal agreement that states the commitment of the building actors to collaboration. Unlike UMC, MMC selected an architecture firm with an in-house engineering department. Thus, the collaboration between the architect and the structural engineer can take place within the same firm using the same software application.

Regarding the life-cycle design approach, the main attention is given to life-cycle costs, maintenance needs, and facility management. Using BIM, both hospitals intend to get a much better insight into these aspects over the life-cycle period. The life-cycle sustainability criteria are included in the assignments for the design teams. Multidisciplinary designers and engineers are asked to collaborate more closely and to interact with the end-users to address life-cycle requirements. However, ensuring that the building actors engage in an integrated collaboration to generate sustainable design solutions that meet the life-cycle...

Graduation Project Foreign Literature Translation (Original + Translation)


Environmental problems caused by Istanbul subway excavation and suggestions for remediation (Chinese title: 伊斯坦布尔地铁开挖引起的环境问题及补救建议)

Ibrahim Ocak

Abstract: Many environmental problems caused by subway excavations have inevitably become an important point in city life. These problems can be categorised as transporting and stocking of excavated material, traffic jams, noise, vibrations, piles of dust and mud, and lack of supplies. Although these problems cause many difficulties, the most pressing for a big city like Istanbul is excavation, since the other listed difficulties result from it. Moreover, these problems are environmentally and regionally restricted to the period over which construction projects are underway and disappear when construction is finished. Currently, in Istanbul, there are nine subway construction projects in operation, covering approximately 73 km in length; over 200 km are to be constructed in the near future. The amount of material excavated from ongoing construction projects covers approximately 12 million m3. In this study, the problems caused by subway excavation, primarily the problem of excavation waste (EW), are analysed and suggestions for remediation are offered.

Abstract (translation): Many environmental problems caused by subway excavation have inevitably become an important part of city life.

Graduation Project (Thesis) Foreign Translation [Template]


Guangzhou College of South China University of Technology, Undergraduate Graduation Project (Thesis) Translation
English title: Review of Vibration Analysis Methods for Gearbox Diagnostics and Prognostics
Chinese title: 对变速箱振动分析的诊断和预测方法综述
School: School of Automotive Engineering
Major and class: Vehicle Engineering, Class 7
Student name: Liu Jiaxian
Student ID: 201130085184
Supervisor: Li Liping
Date: March 15, 2015
Source of the English original: Proceedings of the 54th Meeting of the Society for Machinery Failure Prevention Technology, Virginia Beach, VA, May 1-4, 2000, p. 623-634
Translation grade:
Supervisor (group leader) signature:

Translation: Introduction. Feature extraction techniques are described in the literature; however, most seem to gloss over the specific preprocessing functions that are required.

Some papers do not provide enough detail to reproduce their results, and there is no comprehensive comparison of the traditional features on transitional gearbox data.

Commonly used terms, such as "residual signal", refer to different techniques in different papers. This paper attempts to define the terms commonly used in the condition-based maintenance community and to establish the specific preprocessing each feature requires.

The focus of this paper is on the features used for gear fault detection.

The features are divided into five different groups based on the required preprocessing.

The first part of the paper provides an overview of the preprocessing flow and the processing scheme in which each feature is calculated.

In the next section, on feature extraction techniques, each feature is discussed in more detail.

The final section gives a brief overview of the Penn State University Army Research Laboratory CBM toolbox used for gear fault diagnosis.

Overview of Feature Extraction. Many types of defects or damage increase machinery vibration levels.

These vibration levels are then converted by accelerometers into electrical signals for data measurement.

In principle, information about the health of the monitored machine is contained in this vibration signature.

Therefore, the new or current vibration signature can be compared with previous signatures to determine whether the component is behaving normally or showing signs of failure.

In practice, such a direct comparison is not effective.

Because of large variations, direct comparison of the signatures is difficult.

Instead, a more useful technique involving extracting features from the vibration signature data can be used.
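As a toy Java illustration of this idea (the feature choice, threshold, and names are invented for the example; real gear-fault features require far more preprocessing): instead of comparing raw signatures, each signature is reduced to a scalar feature, and the features are compared.

```java
// Toy feature-based comparison: reduce a vibration signature to a single
// RMS value and flag the component when it drifts from the baseline.
// The alert ratio of 1.5 is invented for illustration only.
final class RmsFeature {
    static double rms(double[] signature) {
        double sumOfSquares = 0.0;
        for (double sample : signature) sumOfSquares += sample * sample;
        return Math.sqrt(sumOfSquares / signature.length);
    }

    static boolean showsFaultIndication(double[] current, double baselineRms) {
        return rms(current) > 1.5 * baselineRms;   // hypothetical alert ratio
    }
}
```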

Software Engineering Graduation Project Foreign Literature Translation


Software Engineering Graduation Project Foreign Literature Translation (about 1000 words). This article translates foreign literature for software engineering graduation projects and can provide a reference for relevant students.

Foreign Literature 1: Software Engineering Practices in Industry: A Case Study

Abstract

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The software engineering practices were identified through a survey questionnaire and a series of interviews with the company's software development managers, software engineers, and testers. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Introduction

Software engineering is the discipline of designing, developing, testing, and maintaining software products. There are a number of software engineering practices that are used in industry to ensure that software products are of high quality, reliable, and maintainable. These practices include software development processes, software configuration management, software testing, requirements engineering, and project management. Software engineering practices have evolved over the years as a result of the growth of the software industry and the increasing demands for high-quality software products. The software industry has developed a number of software development models, such as the Capability Maturity Model Integration (CMMI), which provides a framework for software development organizations to improve their software development processes and practices.

This paper reports a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The objective of the study was to identify the software engineering practices used by the company and to investigate how these practices contribute to the production of quality software.

Research Methodology

The case study was conducted with a large US software development company that produces software for aerospace and medical applications. The study was conducted over a period of six months, during which a survey questionnaire was administered to the company's software development managers, software engineers, and testers. In addition, a series of interviews were conducted with the company's software development managers, software engineers, and testers to gain a deeper understanding of the software engineering practices used by the company. The survey questionnaire and the interview questions were designed to investigate the software engineering practices used by the company in relation to software development processes, software configuration management, software testing, requirements engineering, and project management.

Findings

The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI).
The company's software development process consists of five levels of maturity, starting with an ad hoc process (Level 1) and progressing to a fully defined and optimized process (Level 5). The company has achieved Level 3 maturity in its software development process. The company follows a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The software engineering practices used by the company include:

Software Configuration Management (SCM): The company uses SCM tools to manage software code, documentation, and other artifacts. The company follows a branching and merging strategy to manage changes to the software code.

Software Testing: The company has adopted a formal testing approach that includes unit testing, integration testing, system testing, and acceptance testing. The testing process is automated where possible, and the company uses a range of testing tools.

Requirements Engineering: The company has a well-defined requirements engineering process, which includes requirements capture, analysis, specification, and validation. The company uses a range of tools, including use case modeling, to capture and analyze requirements.

Project Management: The company has a well-defined project management process that includes project planning, scheduling, monitoring, and control. The company uses a range of tools to support project management, including project management software, which is used to track project progress.

Conclusion

This paper has reported a case study of software engineering practices in industry. The study was conducted with a large US software development company that produces software for aerospace and medical applications. The study investigated the company's software development process, practices, and techniques that lead to the production of quality software. The research found that the company has a well-defined software development process, which is based on the Capability Maturity Model Integration (CMMI). The company uses a set of software engineering practices that ensure quality, reliability, and maintainability of the software products. The findings of this study provide a valuable insight into the software engineering practices used in industry and can be used to guide software engineering education and practice in academia.

Foreign Literature 2: Agile Software Development: Principles, Patterns, and Practices

Abstract

Agile software development is a set of values, principles, and practices for developing software. The Agile Manifesto represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. This paper presents an overview of agile software development, including its principles, patterns, and practices. The paper also discusses the benefits and challenges of agile software development.

Introduction

Agile software development is a set of values, principles, and practices for developing software. Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. The manifesto emphasizes the importance of individuals and interactions, working software, customer collaboration, and responding to change.
Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases.

Agile Software Development Principles

Agile software development is based on a set of principles. These principles are:

Customer satisfaction through early and continuous delivery of useful software.
Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
Deliver working software frequently, with a preference for the shorter timescale.
Collaboration between the business stakeholders and developers throughout the project.
Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
Working software is the primary measure of progress.
Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
Continuous attention to technical excellence and good design enhances agility.
Simplicity, the art of maximizing the amount of work not done, is essential.
The best architectures, requirements, and designs emerge from self-organizing teams.

Agile Software Development Patterns

Agile software development patterns are reusable solutions to common software development problems. The following are some typical agile software development patterns:

The Single Responsibility Principle (SRP)
The Open/Closed Principle (OCP)
The Liskov Substitution Principle (LSP)
The Dependency Inversion Principle (DIP)
The Interface Segregation Principle (ISP)
The Model-View-Controller (MVC) Pattern
The Observer Pattern
The Strategy Pattern
The Factory Method Pattern

Agile Software Development Practices

Agile software development practices are a set of activities and techniques used in agile software development. The following are some typical agile software development practices (a minimal TDD sketch follows at the end of this article):

Iterative Development
Test-Driven Development (TDD)
Continuous Integration
Refactoring
Pair Programming

Agile Software Development Benefits and Challenges

Agile software development has many benefits, including: increased customer satisfaction, increased quality, increased productivity, increased flexibility, increased visibility, and reduced risk. Agile software development also has some challenges: it requires discipline and training, an experienced team, good communication, and a supportive management culture.

Conclusion

Agile software development is a set of values, principles, and practices for developing software. Agile software development is based on the Agile Manifesto, which represents the values and principles of the agile approach. Agile software development practices include iterative development, test-driven development, continuous integration, and frequent releases. Agile software development has many benefits, including increased customer satisfaction, increased quality, increased productivity, increased flexibility, increased visibility, and reduced risk. It also has some challenges, including the requirement for discipline and training, the requirement for an experienced team, the requirement for good communication, and the requirement for a supportive management culture.
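As a concrete sketch of the test-driven development practice listed above (JUnit 5; the class and the discount rule are invented for the example): the test is written first and fails, and only then is the production code added to make it pass.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Step 1 (red): the test below is written before any production code exists.
class PriceCalculatorTest {
    @Test
    void appliesTenPercentDiscountFromOneHundredUp() {
        assertEquals(90.0, PriceCalculator.discountedPrice(100.0), 1e-9);
        assertEquals(50.0, PriceCalculator.discountedPrice(50.0), 1e-9);
    }
}

// Step 2 (green): the simplest production code that makes the test pass;
// refactoring follows, with the test acting as a safety net.
class PriceCalculator {
    static double discountedPrice(double price) {
        return price >= 100.0 ? price * 0.9 : price;
    }
}
```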


xxxxxxxxx Graduation Project (Thesis) Foreign Literature Translation (for undergraduate students)
Title: Product Line Engineering: The State of the Practice (Chinese title: 生产线工程:实践的形态)
Student name:
Student ID:
Department:
Major and year:
Supervisor:
Title or degree:
March 10, 2011

Foreign literature translation (into Chinese, about 1000 words): [At least five main readings; bibliographic information is appended after the translation, including: author, book title (or paper title), publisher (or journal name), publication date (or issue number), and page numbers.

Provide the translated foreign material as an attachment (for printed sources, include copies of the front cover, back cover, table of contents, and the translated part; for web sources, attach the URL and the original text).]

Requirements engineering practices

A precise requirements engineering process, a main driver for successful software development, is even more important for product line engineering. Usually, the product line's scope addresses various domains simultaneously. This makes requirements engineering more complex. Furthermore, SPL development involves more tasks than single-product development. Many product line requirements are complex, interlinked, and divided into common and product-specific requirements. So, several requirements engineering practices are important specifically in SPL development:

- Domain identification and modeling, as well as commonalities and variations across product instances
- Separate specification and verification for platform and product requirements
- Management of integrating future requirements into the platform and products
- Identification, modeling, and management of requirement dependencies

The first two practices are specific to SPL engineering. The latter two are common to software development but have much higher importance for SPLs. Issues with performing these additional activities can severely affect the product line's long-term success. During the investigation, we found that most organizations today apply organizational and procedural measures to master these challenges. The applicability of more formal requirements engineering techniques and tools appeared rather limited, partly because such techniques are not yet designed to cope with product line development's inherent complexities. The investigation determined that the following three SPL requirements engineering practices were most important to SPL success.

Domain analysis and domain description. Before starting SPL development, organizations should perform a thorough domain analysis. A well-understood domain is a prerequisite for defining a suitable scope for the product line. It's the foundation for efficiently identifying and distinguishing platform and product requirements. Among the five participants in our investigation, three explicitly modeled the product line requirements. The others used experienced architects and domain experts to develop the SPL core assets without extensive requirements elicitation. Two organizations from the first group established a continuous requirements management that maintained links between product line and product instance requirements. The three other organizations managed their core assets' evolution using change management procedures and versioning concepts. Their business did not force them to maintain more detailed links between the requirements on core assets and product instances.

The impact of architectural decisions on requirements negotiations. A stable but flexible architecture is important for SPL development. However, focusing SPL evolution too much on architectural issues will lead to shallow or even incorrect specifications. It can cause core assets to ignore important SPL requirements, so that the core assets lose relevance for SPL development. Organizations can avoid this problem by establishing clear responsibilities for requirements management in addition to architectural roles. The work group participants reported that a suitable organizational tool for balancing requirements and architecture is roundtable meetings in which requirements engineers, lead architects, and marketing and sales personnel discuss SPL implementation.
Also, integrating the architects into customer negotiations will solve many problems that can arise from conflicting requirements. Another measure is to effectively document requirements and the architectural vision so that product marketing and SPL architects can understand each other and agree on implementation.

Effective tool support. We often discussed tool support for SPL requirements engineering during the investigation. Because requirements engineering for SPL can become highly complex, effective tool support is important. Existing tools don't satisfactorily support aspects such as variability management, version management for requirements collections, management of different views on requirements, or dependency modeling and evolution. So, an SPL organization must design custom solutions for these issues. Specifically, the two participants in the investigation that had established continuous requirements management had to maintain expensive customization and support infrastructures for their tool environment. The other organizations tried to avoid these costs by mitigating insufficient tool support through organizational measures such as strict staging of the requirements specification.

Translation: Engineering practice requires a precise requirements engineering process; it is a main driver of successful software development and is even more important for product line engineering.
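Since the participating organizations had to script such checks themselves, a dependency check of the kind described above can be approximated with little code. A hedged Java sketch, with the data model and the rule invented for illustration: it flags product requirements whose dependencies are not covered by the platform.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Invented illustration of a custom dependency check on top of a
// requirements tool: product-specific requirements may only depend on
// requirements that are already part of the platform (core assets).
final class DependencyCheck {
    record Requirement(String id, boolean inPlatform, List<String> dependsOn) {}

    static List<String> violations(Map<String, Requirement> all) {
        List<String> broken = new ArrayList<>();
        for (Requirement r : all.values()) {
            if (r.inPlatform()) continue;                  // check product requirements only
            for (String dep : r.dependsOn()) {
                Requirement target = all.get(dep);
                if (target == null || !target.inPlatform()) {
                    broken.add(r.id() + " -> " + dep);     // unresolved or product-level dependency
                }
            }
        }
        return broken;
    }
}
```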
