Foreign Original Text and Translation


What Makes a Salary Seem Reasonable? [Foreign Literature Translation]


Foreign Literature Translation

I. Original Text

What Makes a Salary Seem Reasonable?
Scott Highhouse, Margaret E. Brooks-Laber

Although considerable research attention has been directed at understanding perceptions of salary fairness, very little attention has been given to how salary expectations are formed or how trivial elements of the job-search context may influence these expectations. Two experiments demonstrated how the simple manipulation of response options for a multiple-choice item may influence subsequent salary expectations and salary satisfaction. Results are discussed in light of Parducci's (1995) contextual theory.

It has been repeatedly shown that the way in which a question is asked can influence perceptions of what is normal. For example, Harris found that people who were asked 'How short was the basketball player?' estimated lower heights than people who were asked 'How tall was the basketball player?' Similarly, Loftus found that people who were asked 'Do you get headaches frequently?' reported more headaches than people asked 'Do you get headaches occasionally?' More recently, Norbert Schwarz and his colleagues identified numerous examples of how the wording of survey items can strongly affect self-reports. For example, Schwarz reported research showing that people claimed higher life success when the numeric values of a life-success item ranged from -5 to +5 than when they ranged from 0 to 10. He described another study showing that psychosomatic patients reported symptom frequencies of more than twice a month when the item's response scale ranged from 'twice a month or less' to 'several times per day', but did not do so when the scale ranged from 'never' to 'more than twice a month'. Schwarz suggested that survey respondents assume that the values in the middle range of a scale reflect the 'typical' value in the 'real world', whereas the extremes of the scale correspond to the extremes of the distribution.
More provocative is the finding that, in addition to affecting respondents' behavioural reports on surveys, these simple context effects may also affect subsequent judgments. For example, patients in Schwarz and Scheuring's study reported higher health satisfaction when the response scale suggested that their symptom frequency was below average. Similar to the Schwarz and Scheuring study, our research examined the effects of response scales on subsequent judgments. However, the response-scale paradigm was used to examine broader theoretical issues regarding the impact of the job-seeking context on expectations about pay. Despite the importance of starting salary in the job-choice process, very little research has focused on the factors that shape job seekers' salary expectations. A review of the research that has focused on incumbent satisfaction with organizational pay noted that the existing meagre research on job-seeker expectations for starting salary is more fragmented than programmatic. The authors suggested that researchers draw from the vast literature on decision-making to understand how individual and situational factors influence salary perceptions. Our failure to find effects consistent with adaptation-level theory for our sample of job seekers was consistent with Parducci, Calfee, Marshall, and Davidson's (1960) failure to support adaptation-level theory predictions in the lab. Parducci and his colleagues found that, holding all else constant, variation in the mean of a distribution of numbers had no effect on student perceptions of the typical number. Similarly, Ordóñez et al. (2000), in a study of distributive-fairness perceptions, found that MBAs presented with two reference salaries (i.e., the salary of a peer paid higher and the salary of a peer paid lower) did not average these reference points in the manner predicted by adaptation-level theory.
Ordóñez and her colleagues recommended that future research examine what happens when people are presented with more than two reference salaries. Parducci's (1995) contextual theory proposes that attribute judgments reflect a compromise between a range principle and a frequency principle. Most relevant to the present concern is the frequency principle, a tendency for people to assign the same number of contextual representations to equal segments of the scale of judgment. For example, if fewer than one half of the salaries available in the immediate context (e.g., salaries in classified advertisements) are below a particular salary, that salary is perceived to be in the bottom half of the scale of judgment. As another simple example, consider a summer job seeker whose three friends have accepted jobs with hourly rates of $5, $10, and $11. An offer of $8.50 may be perceived to be near the bottom of the salaries available in the market, because two of the three available salaries are above that offer, even though it is objectively above the midpoint of the salaries of his friends. A related phenomenon, called the alternative-outcomes effect (Windschitl & Wells, 1998), has been observed for people judging the likelihood of winning a raffle. In one study, participants were presented with two different raffles involving 10 tickets. In the first situation, they were told 'You hold 3 tickets and seven other people each hold 1.' In the second situation, they were told 'You hold 3 tickets and one other person holds 7.' People felt much more certain of winning in the situation where they held more tickets than any individual competitor (3-1-1-1-1-1-1-1) than when they held fewer tickets (3-7), despite the fact that the probability of winning either raffle is the same. In both this example and the earlier hypothetical salary example, people are influenced by the context in which information is presented, ignoring the absolute value of the current situation.
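The frequency principle and the alternative-outcomes effect described above can be illustrated with a small numerical sketch. This is a toy reconstruction, not code from the original study; the dollar figures are the hypothetical ones given in the text:

```python
def context_position(offer, context_salaries):
    """Fraction of contextual salaries strictly below the offer.
    Under the frequency principle, judgment tracks this rank,
    not the offer's absolute value."""
    below = sum(s < offer for s in context_salaries)
    return below / len(context_salaries)

friends = [5.00, 10.00, 11.00]  # hourly rates accepted by three friends
offer = 8.50

# Objectively, the offer sits above the midpoint of the friends' range...
midpoint = (min(friends) + max(friends)) / 2   # 8.0
# ...yet only one of the three contextual salaries lies below it,
# so it is perceived as being near the bottom of the market:
rank = context_position(offer, friends)        # 1/3

# Alternative-outcomes effect: holding 3 of 10 raffle tickets gives the
# same probability whether the other 7 are split 1-1-1-1-1-1-1 or held by
# one competitor, yet the 3-7 split "feels" far less winnable.
p_win = 3 / 10
```

The point of the sketch is that `rank` (1/3), not the comparison of `offer` with `midpoint`, is what the frequency principle predicts will drive the judgment.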
Contextual theory posits that when events that elicit hedonic judgments are concentrated at the upper endpoints of their contexts, they elicit greater happiness, regardless of the absolute levels of the events. This means that an important factor influencing satisfaction with any particular outcome is the placement of that outcome relative to other possible outcomes, or the proportion of contextual representations below that outcome. Other researchers have also suggested that reference points are not combined into a single comparison point (Kahneman, 1992) and that satisfaction is determined instead by the relative frequencies of positive and negative events (Diener, Sandvik, & Pavot, 1990). When applied to starting-salary expectations, these models suggest that the frequency of salary options above and below a target salary, not the lower and upper bounds of the salary distribution, will influence starting-salary expectations. The experiments were designed to examine this proposition within the context of a simple multiple-choice item on a career-expectations survey.

Participants were business students (N = 204) enrolled at a medium-sized public university in the Midwestern United States. Participation occurred during class time in seven marketing classes ranging in size from 21 to 37 students. The majority (90%) of the students were juniors or seniors, and most (52%) were male. The average participant was 21 years of age. A 'career attitudes survey' was designed that contained seven typical items inquiring about students' plans after graduation, along with demographic questions. Items were in multiple-choice and open-ended formats, and addressed issues such as how many jobs the students planned to apply for, what methods they planned to use to find jobs, and the nature of their expected first job. Embedded within the survey was the starting-salary item that was the focus of the endpoint-level and option-frequency manipulations.
The item read 'What do you expect your starting salary to be?' Participants received one of four response scales differing on the two factors. Table 1 presents the response options by endpoints of the range (low = $15,000-50,000; high = $30,000-65,000) and frequency of multiple-choice options above a target salary (low frequency; high frequency). The overall range roughly corresponded with the range of annual starting salaries of recent graduates from the business school (i.e., $21,000-65,000). Note that the manipulation of range endpoints (see Fig. 1) is distinct from a manipulation of range width, which appeared in earlier research by Rynes et al. (1983) and Highhouse et al. (1999). The width of the salary range (i.e., $35,000) remained constant across experimental conditions in our study. The option-frequency manipulation was designed using $40,000 as the target salary (low frequency = 1 response option above $40,000; high frequency = 4 response options above $40,000). The target salary is the midpoint of the entire salary range (i.e., half the distance between the lowest salary in the low-endpoint condition and the highest salary in the high-endpoint condition). Each participant received one endpoint-level condition and one frequency condition in a 2 x 2 between-subjects factorial design.

Although previous research has shown that the social environment can have an impact on perceptions of a fair starting salary, far fewer studies have investigated the contextual features of the decision-making environment that may influence salary expectations. The research that does exist has focused on the width of the range of salaries available in the market. Our research builds on this work by showing, drawing on Parducci's contextual theory, that factors other than range width may influence salary expectations.
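The endpoint and target-salary arithmetic of the design described earlier can be checked directly. This is a sketch of the reported design parameters only; the individual response options from Table 1 are not reproduced here:

```python
# Endpoints of the two range conditions (annual salaries in dollars).
low_condition  = (15_000, 50_000)   # low-endpoint response scale
high_condition = (30_000, 65_000)   # high-endpoint response scale

# The width of the salary range is held constant across conditions:
width_low  = low_condition[1]  - low_condition[0]    # 35,000
width_high = high_condition[1] - high_condition[0]   # 35,000

# The target salary is the midpoint of the entire salary range, i.e. half
# the distance between the lowest salary in the low-endpoint condition and
# the highest salary in the high-endpoint condition:
target = (low_condition[0] + high_condition[1]) / 2  # 40,000.0

# Option-frequency manipulation, counted relative to the target:
options_above_target = {"low frequency": 1, "high frequency": 4}
```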
We expected that, holding salary range constant, the frequency of salary options above and below a target salary would influence salary expectations independent of the level of the endpoints of salaries in the market. Our first experiment, using the manipulated response-option paradigm, showed that the frequency of response options above a target salary in the response categories for an item on a typical career survey influenced later reports of expected starting salary for a group of business majors. Contrary to Helson's adaptation-level theory, the salary endpoint level had no effect on expectations for this group. Thus, consistent with Parducci's proposition, our findings showed that one must consider not only the level of the endpoints of salaries in the immediate context but also the perceived frequency of salaries. The second experiment showed that these effects can extend beyond salary expectations to influence satisfaction with job offers. The frequency of response options above the midpoint salary in a multiple-choice item had a main effect on salary satisfaction and job attractiveness for psychology students presented 20 min later with a hypothetical job advertisement. These results suggest that salary expectations, at least for naive job seekers, can be influenced by simple features of the contextual environment.
Salary expectations are, however, highly dynamic, and future research is needed that employs more moment-to-moment assessments (see Stone, Shiffman, & DeVries, 1999) of job seekers' salary expectations. We do suspect that our results are not limited to the simple numerical anchors set up in our experiments. Considerable research has shown that simple numerical anchors can strongly influence the judgments of experts as well as novices (e.g., Northcraft & Neale, 1987). Similarly, studies using such varied experts as agricultural judges (Gaeth & Shanteau, 1984), parole officers (Carroll & Payne, 1976), and court judges (Ebbesen & Konecni, 1975) have concluded that the experience of these judges does not make them less susceptible to simple context effects. Barber and Bretz (2000) noted that understanding how different contexts can evoke differences in how a given salary offer will be evaluated is important for organizations, 'as organizations cannot predict employee reactions to pay practices without knowledge of the standards against which those practices will be evaluated'. We believe that, in addition to being important for organizations, such knowledge is important for job seekers. Job seekers need to be aware that their salary expectations can be inadvertently affected by exposure to salaries that may or may not be meaningful to their situation. People are constantly faced with skewed distributions of salaries, because they tend to hear more about fellow job seekers who were paid high salaries than about fellow job seekers who were paid the industry average. This creates a cognitive context in which offered salaries are likely to be perceived as being in the bottom half of the scale of judgement, even when they are objectively near the middle of the distribution of salaries.
This could be positive if it leads employees to hold high expectations of pay, as these expectations may be associated with higher negotiated salaries. Generally, though, it is important for job seekers to realize when they are being affected by context. Job seekers need to be aware of the danger of making inferences from small samples, as small samples of salaries may not represent the salary distribution in the population of relevant jobs. Demonstrating context dependence may be as simple as showing people how a multiple-choice option on a typical survey can influence their standards for appropriate pay. Future research might consider whether such basic training techniques can inoculate job seekers against simple context effects.

Source: Frequency Context Effects on Starting-Salary Expectations. Journal of Occupational & Organizational Psychology, Volume 67, 2003(1): p. 69.

II. Translated Text

What Makes a Salary Seem Reasonable? Although considerable research attention has been directed at understanding perceptions of salary fairness, very little attention has been given to how salary expectations are formed or how trivial elements of the job-search context may influence these expectations.

Graduation Project: Foreign Original Text and Translation


Beijing Union University Graduation Design (Thesis) Task Book

Title: Design and Simulation of OFDM Modulation and Demodulation Technology
Major: Communication Engineering
Supervisor: Zhang Xuefen
College: College of Information
Student ID: 2011080331132
Class: 1101B
Name: Xu Jiaming

I. Original Text

Evolution Towards 5G Multi-tier Cellular Wireless Networks: An Interference Management Perspective
Ekram Hossain, Mehdi Rasti, Hina Tabassum, and Amr Abdelnasser

Abstract—The evolving fifth generation (5G) cellular wireless networks are envisioned to overcome the fundamental challenges of existing cellular networks, e.g., higher data rates, excellent end-to-end performance, and user coverage in hot spots and crowded areas with lower latency, energy consumption, and cost per information transfer. To address these challenges, 5G systems will adopt a multi-tier architecture consisting of macrocells, different types of licensed small cells, relays, and device-to-device (D2D) networks to serve users with different quality-of-service (QoS) requirements in a spectrum- and energy-efficient manner. Starting with the visions and requirements of 5G multi-tier networks, this article outlines the challenges of interference management (e.g., power control, cell association) in these networks with shared spectrum access (i.e., when the different network tiers share the same licensed spectrum). It is argued that the existing interference management schemes will not be able to address the interference management problem in prioritized 5G multi-tier networks where users in different tiers have different priorities for channel access. In this context, a survey and qualitative comparison of the existing cell association and power control schemes is provided to demonstrate their limitations for interference management in 5G networks. Open challenges are highlighted and guidelines are provided to modify the existing schemes in order to overcome these limitations and make them suitable for the emerging 5G systems.

Index Terms—5G cellular wireless, multi-tier networks, interference management, cell association, power control.
I. INTRODUCTION

To satisfy the ever-increasing demand for mobile broadband communications, the IMT-Advanced (IMT-A) standards were ratified by the International Telecommunication Union (ITU) in November 2010, and fourth generation (4G) wireless communication systems are currently being deployed worldwide. The standardization of LTE Rel-12, also known as LTE-B, is ongoing and expected to be finalized in 2014. Nonetheless, existing wireless systems will not be able to deal with the thousand-fold increase in total mobile broadband data [1] contributed by new applications and services such as pervasive 3D multimedia, HDTV, VoIP, gaming, e-Health, and Car2x communication. In this context, fifth generation (5G) wireless communication technologies are expected to attain 1000 times higher mobile data volume per unit area, 10-100 times higher numbers of connected devices and user data rates, 10 times longer battery life, and 5 times reduced latency [2]. While for 4G networks the single-user average data rate is expected to be 1 Gbps, it is postulated that a cell data rate of the order of 10 Gbps will be a key attribute of 5G networks. 5G wireless networks are expected to be a mixture of network tiers of different sizes, transmit powers, backhaul connections, and radio access technologies (RATs), accessed by unprecedented numbers of smart and heterogeneous wireless devices. This architectural enhancement, along with advanced physical-layer communication technology such as high-order spatial-multiplexing multiple-input multiple-output (MIMO) communications, will provide higher aggregate capacity for more simultaneous users, or higher spectral efficiency, when compared to 4G networks. Radio resource and interference management will be a key research challenge in multi-tier and heterogeneous 5G cellular networks.
The traditional methods for radio resource and interference management (e.g., channel allocation, power control, cell association or load balancing) in single-tier networks (even some of those developed for two-tier networks) may not be efficient in this environment, and a new look at the interference management problem will be required. First, the article outlines the visions and requirements of 5G cellular wireless systems. Major research challenges are then highlighted from the perspective of interference management when the different network tiers share the same radio spectrum. A comparative analysis of the existing approaches for distributed cell association and power control (CAPC) is then provided, followed by a discussion of their limitations for 5G multi-tier cellular networks. Finally, a number of suggestions are provided to modify the existing CAPC schemes to overcome these limitations.

II. VISIONS AND REQUIREMENTS FOR 5G MULTI-TIER CELLULAR NETWORKS

5G mobile and wireless communication systems will require a mix of new system concepts to boost spectral and energy efficiency. The visions and requirements for 5G wireless systems are outlined below.

· Data rate and latency: For dense urban areas, 5G networks are envisioned to enable an experienced data rate of 300 Mbps in the downlink and 60 Mbps in the uplink in 95% of locations and time [2]. The end-to-end latencies are expected to be of the order of 2 to 5 milliseconds.
The detailed requirements for different scenarios are listed in [2].

· Machine-type communication (MTC) devices: The number of traditional human-centric wireless devices with Internet connectivity (e.g., smart phones, super-phones, tablets) may be outnumbered by MTC devices, which can be used in vehicles, home appliances, surveillance devices, and sensors.

· Millimeter-wave communication: To satisfy the exponential increase in traffic and the addition of different devices and services, additional spectrum beyond what was previously allocated to the 4G standard is sought. The use of millimeter-wave frequency bands (e.g., the 28 GHz and 38 GHz bands) is a potential candidate to overcome the problem of scarce spectrum resources, since it allows transmission at wider bandwidths than the conventional 20 MHz channels of 4G systems.

· Multiple RATs: 5G is not about replacing the existing technologies; it is about enhancing and supporting them with new technologies [1]. In 5G systems, the existing RATs, including GSM (Global System for Mobile Communications), HSPA+ (Evolved High-Speed Packet Access), and LTE, will continue to evolve to provide superior system performance. They will also be accompanied by some new technologies (e.g., beyond LTE-Advanced).

· Base station (BS) densification: BS densification is an effective methodology to meet the requirements of 5G wireless networks. Specifically, in 5G networks, there will be deployments of a large number of low-power nodes, relays, and device-to-device (D2D) communication links with much higher density than today's macrocell networks. Fig. 1 shows such a multi-tier network with a macrocell overlaid by relays, picocells, femtocells, and D2D links.
The adoption of multiple tiers in the cellular network architecture will result in better performance in terms of capacity, coverage, spectral efficiency, and total power consumption, provided that the inter-tier and intra-tier interference is well managed.

· Prioritized spectrum access: The notions of both traffic-based and tier-based priorities will exist in 5G networks. Traffic-based priority arises from the different requirements of the users (e.g., reliability and latency requirements, energy constraints), whereas tier-based priority is for users belonging to different network tiers. For example, with shared spectrum access among macrocells and femtocells in a two-tier network, femtocells create "dead zones" around them in the downlink for macro users. Protection should, thus, be guaranteed for the macro users. Consequently, the macro and femto users play the roles of high-priority users (HPUEs) and low-priority users (LPUEs), respectively. In the uplink direction, macrocell users at the cell edge typically transmit with high power, which generates high uplink interference to nearby femtocells. Therefore, in this case, the user priorities should be reversed. Another example is D2D transmission, where different devices may opportunistically access the spectrum to establish a communication link between them, provided that the interference introduced to the cellular users remains below a given threshold. In this case, the D2D users play the role of LPUEs whereas the cellular users play the role of HPUEs.

· Network-assisted D2D communication: In LTE Rel-12 and beyond, the focus will be on network-controlled D2D communications, where the macrocell BS performs control signaling in terms of synchronization, beacon-signal configuration, and identity and security management [3]. This feature will be extended in 5G networks to allow nodes other than the macrocell BS to have control.
For example, if a D2D link is at the cell edge and the direct link between the D2D transmitter UE and the macrocell is in deep fade, the relay node can be responsible for the control signaling of the D2D link (i.e., relay-aided D2D communication).

· Energy harvesting for energy-efficient communication: One of the main challenges in 5G wireless networks is to improve the energy efficiency of battery-constrained wireless devices. To prolong battery lifetime as well as to improve energy efficiency, an appealing solution is to harvest energy from environmental energy sources (e.g., solar and wind energy). Energy can also be harvested from ambient radio signals (i.e., RF energy harvesting) with reasonable efficiency over small distances. The harvested energy could be used for D2D communication or communication within a small cell. In this context, simultaneous wireless information and power transfer (SWIPT) is a promising technology for 5G wireless networks. However, practical circuits for harvesting energy are not yet available, since the conventional receiver architecture is designed for information transfer only and, thus, may not be optimal for SWIPT. This is due to the fact that information and power transfer operate with different power sensitivities at the receiver (e.g., -10 dBm and -60 dBm for energy and information receivers, respectively) [4]. Also, due to the potentially low efficiency of energy harvesting from ambient radio signals, a combination of different energy harvesting technologies may be required for macrocell communication.
III. INTERFERENCE MANAGEMENT CHALLENGES IN 5G MULTI-TIER NETWORKS

The key challenges for interference management in 5G multi-tier networks will arise due to the following factors, which affect the interference dynamics in the uplink and downlink of the network: (i) heterogeneity and dense deployment of wireless devices, (ii) coverage and traffic-load imbalance due to the varying transmit powers of different BSs in the downlink, (iii) public or private access restrictions in different tiers that lead to diverse interference levels, and (iv) priorities in accessing channels of different frequencies and resource allocation strategies. Moreover, the introduction of carrier aggregation, cooperation among BSs (e.g., by using coordinated multi-point transmission (CoMP)), as well as direct communication among users (e.g., D2D communication), may further complicate the dynamics of the interference. The above factors translate into the following key challenges.

· Designing optimized cell association and power control (CAPC) methods for multi-tier networks: Optimizing the cell associations and transmit powers of users in the uplink, or the transmit powers of BSs in the downlink, are classical techniques to simultaneously enhance the system performance in various aspects such as interference mitigation, throughput maximization, and reduction in power consumption. Typically, the former is needed to maximize spectral efficiency, whereas the latter is required to minimize the power (and hence the interference to other links) while keeping the desired link quality. Since it is not efficient to connect to a congested BS despite its high achieved signal-to-interference ratio (SIR), cell association should also consider the load status of each BS and the channel state of each UE.

Fig. 1. A multi-tier network composed of macrocells, picocells, femtocells, relays, and D2D links. Arrows indicate wireless links, whereas the dashed lines denote the backhaul connections.
The increase in the number of available BSs, along with multi-point transmission and carrier aggregation, provides multiple degrees of freedom for resource allocation and cell-selection strategies. For power control, the priority of different tiers also needs to be maintained by incorporating the quality constraints of HPUEs. Unlike the downlink, the transmission power in the uplink depends on the user's battery power, irrespective of the type of BS to which the user is connected. Battery power does not vary significantly from user to user; therefore, the problems of coverage and traffic-load imbalance may not exist in the uplink. This leads to considerable asymmetry between the uplink and downlink user association policies. Consequently, the optimal solutions for downlink CAPC problems may not be optimal for the uplink. It is therefore necessary to develop joint optimization frameworks that can provide near-optimal, if not optimal, solutions for both uplink and downlink. Moreover, to deal with this asymmetry, separate uplink and downlink optimal solutions are also useful insofar as mobile users can connect to two different BSs for uplink and downlink transmissions, which is expected to be the case in 5G multi-tier cellular networks [3].

· Designing efficient methods to support simultaneous association with multiple BSs: Compared to existing CAPC schemes, in which each user can associate with a single BS, simultaneous connectivity to several BSs could be possible in a 5G multi-tier network. This would enhance the system throughput and reduce the outage ratio by effectively utilizing the available resources, particularly for cell-edge users.
Thus, the existing CAPC schemes should be extended to efficiently support simultaneous association of a user with multiple BSs and to determine under which conditions a given UE is associated with which BSs in the uplink and/or downlink.

· Designing efficient methods for cooperation and coordination among multiple tiers: Cooperation and coordination among different tiers will be a key requirement to mitigate interference in 5G networks. Cooperation between the macrocell and small cells was proposed for LTE Rel-12 in the context of the soft cell, where UEs are allowed dual connectivity by simultaneously connecting to the macrocell and the small cell for uplink and downlink communications, or vice versa [3]. As mentioned before in the context of the asymmetry of transmission power in the uplink and downlink, a UE may experience the highest downlink power from the macrocell, whereas the highest uplink path gain may be toward a nearby small cell. In this case, the UE can associate with the macrocell in the downlink and with the small cell in the uplink. CoMP schemes based on cooperation among BSs in different tiers (e.g., cooperation between macrocells and small cells) can be developed to mitigate interference in the network. Such schemes need to be adaptive and to consider user locations as well as channel conditions in order to maximize the spectral and energy efficiency of the network. This cooperation, however, requires tight integration of low-power nodes into the network through reliable, fast, and low-latency backhaul connections, which will be a major technical issue for upcoming multi-tier 5G networks. In the remainder of this article, we focus on a review of existing power control and cell association strategies to demonstrate their limitations for interference management in 5G multi-tier prioritized cellular networks (i.e., where users in different tiers have different priorities depending on location, application requirements, and so on).
Design guidelines will then be provided to overcome these limitations. Note that issues such as channel scheduling in the frequency domain, time-domain interference coordination techniques (e.g., based on almost blank subframes), coordinated multi-point transmission, and spatial-domain techniques (e.g., based on smart antennas) are not considered in this article.

IV. DISTRIBUTED CELL ASSOCIATION AND POWER CONTROL SCHEMES: CURRENT STATE OF THE ART

A. Distributed Cell Association Schemes

The state-of-the-art cell association schemes currently under investigation for multi-tier cellular networks are reviewed, and their limitations explained, below.

· Reference Signal Received Power (RSRP)-based scheme [5]: A user is associated with the BS whose signal is received with the largest average strength. A variant of RSRP, Reference Signal Received Quality (RSRQ), is also used for cell selection in LTE single-tier networks; it is similar to signal-to-interference-ratio (SIR)-based cell selection, in which a user selects the BS that gives the highest SIR. In single-tier networks with uniform traffic, such a criterion may maximize the network throughput. However, due to the varying transmit powers of different BSs in the downlink of multi-tier networks, such cell association policies can create a huge traffic-load imbalance. This phenomenon leads to overloading of high-power tiers while leaving low-power tiers underutilized.

· Bias-based Cell Range Expansion (CRE) [6]: The idea of CRE has emerged as a remedy to the problem of load imbalance in the downlink. It aims to increase the downlink coverage footprint of low-power BSs by adding a positive bias to their signal strengths (i.e., RSRP or RSRQ). Such BSs are referred to as biased BSs. This biasing allows more users to associate with low-power (biased) BSs and thereby achieves better cell load balancing.
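The RSRP rule and its biased (CRE) variant just described can be sketched as follows. The dBm figures and the 6 dB bias are illustrative assumptions, and the sketch omits the filtering and measurement details of the actual LTE procedure:

```python
def associate(rsrp_dbm, bias_db):
    """Cell Range Expansion: serve the UE from the BS maximizing
    RSRP + bias; unbiased (e.g., macro) BSs carry zero bias."""
    return max(rsrp_dbm, key=lambda bs: rsrp_dbm[bs] + bias_db.get(bs, 0.0))

# A UE measures a strong macro BS and a weaker nearby picocell:
rsrp = {"macro": -80.0, "pico": -85.0}   # dBm, illustrative values

plain = associate(rsrp, {})               # no bias: the macro wins
biased = associate(rsrp, {"pico": 6.0})   # 6 dB pico bias off-loads the UE
```

With the bias, the pico's effective strength (-79 dBm) exceeds the macro's -80 dBm, so the UE is off-loaded to the low-power tier.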
Nevertheless, such off-loaded users may experience unfavorable channels from the biased BSs and strong interference from the unbiased high-power BSs. The trade-off between cell load balancing and system throughput therefore depends strictly on the selected bias values, which need to be optimized in order to maximize the system utility. In this context, a baseline approach in LTE-Advanced is to "orthogonalize" the transmissions of the biased and unbiased BSs in the time/frequency domain such that an interference-free zone is created.

· Association based on Almost Blank Subframe (ABS) ratio [7]: The ABS technique uses time-domain orthogonalization, in which specific subframes are left blank by the unbiased BS and off-loaded users are scheduled within these subframes to avoid inter-tier interference. This improves the overall throughput of the off-loaded users by sacrificing the time subframes and throughput of the unbiased BS. Larger bias values result in a higher degree of off-loading and thus require more blank subframes to protect the off-loaded users. Given a specific number of ABSs, or ratio of blank subframes to the total number of subframes (i.e., the ABS ratio), that ensures the minimum throughput of the unbiased BSs, this criterion allows a user to select the cell with the maximum ABS ratio, and the user may even associate with the unbiased BS if the ABS ratio decreases significantly.

A qualitative comparison among these cell association schemes is given in Table I. The key terms used in Table I are defined as follows: channel-aware schemes depend on knowledge of the instantaneous channel and transmit power at the receiver; interference-aware schemes depend on knowledge of the instantaneous interference at the receiver; load-aware schemes depend on traffic-load information (e.g., the number of users); resource-aware schemes require resource allocation information (i.e., the chance of getting a channel or the proportion of resources available in a cell).
Priority-aware schemes require information regarding the priority of different tiers and allow protection of HPUEs. All of the above-mentioned schemes are independent and distributed, and can be combined with any type of power control scheme. Although simple and tractable, the standard cell association schemes, i.e., RSRP, RSRQ, and CRE, are unable to guarantee optimum performance in multi-tier networks unless critical parameters, such as bias values, transmit power of the users in the uplink and of the BSs in the downlink, resource partitioning, etc., are optimized.

B. Distributed Power Control Schemes

From a user's point of view, the objective of power control is to support a user with its minimum acceptable throughput, whereas from a system's point of view it is to maximize the aggregate throughput. In the former case, it is necessary to compensate for the near-far effect by allocating higher power levels to users with poor channels than to UEs with good channels. In the latter case, high power levels are allocated to users with the best channels and very low (even zero) power levels are allocated to the others. The aggregate transmit power, the outage ratio, and the aggregate throughput (i.e., the sum of the achievable rates of the UEs) are the most important measures for comparing the performance of different power control schemes. The outage ratio of a particular tier can be expressed as the ratio of the number of UEs supported by that tier at their minimum target-SIRs to the total number of UEs in that tier. Numerous power control schemes have been proposed in the literature for single-tier cellular wireless networks. According to their objective functions and assumptions, these schemes can be classified into the following four types.

·Target-SIR-tracking power control (TPC) [8]: In TPC, each UE tracks its own predefined fixed target-SIR.
TABLE I: QUALITATIVE COMPARISON OF EXISTING CELL ASSOCIATION SCHEMES FOR MULTI-TIER NETWORKS

TPC enables the UEs to achieve their fixed target-SIRs at minimal aggregate transmit power, assuming that the target-SIRs are feasible. However, when the system is infeasible, all non-supported UEs (those that cannot obtain their target-SIRs) transmit at their maximum power, which causes unnecessary power consumption and interference to other users, and therefore increases the number of non-supported UEs.

·TPC with gradual removal (TPC-GR) [9], [10], [11]: To decrease the outage ratio of TPC in an infeasible system, a number of TPC-GR algorithms were proposed in which non-supported users reduce their transmit power [10] or are gradually removed [9], [11].

·Opportunistic power control (OPC) [12]: From the system's point of view, OPC allocates high power levels to users with good channels (experiencing high path gains and low interference levels) and very low power to users with poor channels. In this algorithm, a small difference in path gains between two users may lead to a large difference in their actual throughputs [12]. OPC improves the system performance at the cost of reduced fairness among users.

·Dynamic-SIR-tracking power control (DTPC) [13]: When the target-SIR requirements of the users are feasible, TPC causes users to exactly hit their fixed target-SIRs even if additional resources are still available that could otherwise be used to achieve higher SIRs (and thus better throughputs). Besides, fixed-target-SIR assignment is suitable only for voice service, for which reaching an SIR value higher than the given target does not significantly affect the service quality. In contrast, for data services a higher SIR results in a better throughput, which is desirable. The DTPC algorithm was proposed in [13] to address the problem of system throughput maximization subject to a given feasible lower bound on the achieved SIRs of all users in cellular networks.
In DTPC, each user dynamically sets its target-SIR by using TPC and OPC in a selective manner. It was shown that when the minimum acceptable target-SIRs are feasible, the actual SIRs received by some users can be dynamically increased (to values higher than their minimum acceptable target-SIRs) in a distributed manner, as long as the required resources are available and the system remains feasible (meaning that reaching the minimum target-SIRs of the remaining users is guaranteed). This enhances the system throughput (at the cost of higher power consumption) compared to TPC.

The aforementioned state-of-the-art distributed power control schemes for satisfying various objectives in single-tier wireless cellular networks are unable to address the interference management problem in prioritized 5G multi-tier networks. This is because they do not guarantee that the total interference caused by the LPUEs to the HPUEs remains within tolerable limits, which can lead to SIR outage of some HPUEs. Thus, there is a need to modify the existing schemes such that LPUEs track their objectives while limiting their transmit power to maintain a given interference threshold at the HPUEs. A qualitative comparison among various state-of-the-art power control problems with different objectives and constraints, together with their corresponding existing distributed solutions, is shown in Table II. This table also shows how these schemes can be modified and generalized for designing CAPC schemes for prioritized 5G multi-tier networks.

C. Joint Cell Association and Power Control Schemes

Very few works in the literature have considered the problem of distributed CAPC jointly (e.g., [14]) with guaranteed convergence.
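The fixed-target update used by TPC, reviewed in Section IV-B above, can be sketched in a few lines. This is an illustrative toy (all path gains, the noise level, and the target are made-up numbers, and the two-user single-channel uplink is an assumption, not a setup from the article): each user scales its power by the ratio of its target-SIR to its current SIR, clipped at the maximum power.

```python
# Minimal sketch of target-SIR-tracking power control (TPC) for a
# two-user uplink on one channel. All gains below are illustrative.
#   p_i <- min(P_MAX, (TARGET / SIR_i) * p_i)
# At a feasible fixed point, every user hits TARGET exactly.

G = [[1.0, 0.1], [0.2, 0.8]]   # G[i][j]: path gain from user j to user i's BS
NOISE, P_MAX, TARGET = 0.01, 1.0, 2.0

def sir(p, i):
    """SIR of user i given the power vector p."""
    interference = NOISE + sum(G[i][j] * p[j] for j in range(len(p)) if j != i)
    return G[i][i] * p[i] / interference

def tpc_step(p):
    """One synchronous TPC update for all users."""
    return [min(P_MAX, TARGET / sir(p, i) * p[i]) for i in range(len(p))]

p = [0.5, 0.5]
for _ in range(200):           # iterate the standard interference function
    p = tpc_step(p)

print([round(sir(p, i), 3) for i in range(2)])  # -> [2.0, 2.0]
```

With these (feasible) gains the iteration converges to a low-power fixed point at which both users meet the target-SIR; if the target were infeasible, the powers would saturate at P_MAX instead, which is exactly the failure mode that motivates TPC-GR.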
For single-tier networks, a distributed framework for the uplink was developed in [14], which performs cell selection based on the effective interference (the ratio of instantaneous interference to channel gain) at the BSs and minimizes the aggregate uplink transmit power while attaining the users' desired SIR targets. Following this approach, a unified distributed algorithm was designed in [15] for two-tier networks. The cell association is based on the effective-interference metric and is integrated with a hybrid power control (HPC) scheme, which is a combination of the TPC and OPC power control algorithms.

Although the above frameworks are distributed and optimal/suboptimal with guaranteed convergence in conventional networks, they may not be directly compatible with 5G multi-tier networks. The interference dynamics in multi-tier networks depend significantly on the channel access protocols (or scheduling), the QoS requirements, and the priorities at different tiers. Thus, the existing CAPC optimization problems should be modified to include various types of cell selection methods (some examples are provided in Table I) and power control methods with different objectives and interference constraints (e.g., interference constraints for macrocell UEs, picocell UEs, or D2D receiver UEs). A qualitative comparison among the existing CAPC schemes, along with the open research areas, is given in Table II. A discussion of how these open problems can be addressed is provided in the next section.

V. DESIGN GUIDELINES FOR DISTRIBUTED CAPC SCHEMES IN 5G MULTI-TIER NETWORKS

Interference management in 5G networks requires efficient distributed CAPC schemes such that each user can possibly connect simultaneously to multiple BSs (which can be different for the uplink and downlink), while achieving load balancing in different cells and guaranteeing interference protection for the HPUEs. In what follows, we provide a number of suggestions for modifying the existing schemes.

A. Prioritized Power Control

To guarantee interference protection for HPUEs, a possible strategy is to modify the existing power control schemes listed in the first column of Table II such that the LPUEs limit their transmit power to keep the interference caused to the HPUEs below a predefined threshold, while tracking their own objectives. In other words, as long as the HPUEs are protected against the existence of LPUEs, the LPUEs can employ an existing distributed power control algorithm to satisfy a predefined goal. This offers fruitful directions for future research and investigation, as stated in Table II. To address these open problems in a distributed manner, the existing schemes should be modified so that the LPUEs, in addition to setting their transmit power to track their objectives, limit their transmit power to keep their interference at the receivers of the HPUEs below a given threshold. This could be implemented by sending a command from an HPUE to its nearby LPUEs (like the closed-loop power control commands used to address the near-far problem) when the interference caused by the LPUEs to that HPUE exceeds a given threshold. We refer to this type of power control as prioritized power control. Note that the notion of priority, and thus the need for prioritized power control, exists implicitly in different scenarios of 5G networks, as briefly discussed in Section II. Along this line, some modified power control optimization problems are formulated for 5G multi-tier networks in the second column of Table II.

To compare the performance of existing distributed power control algorithms, let us consider a prioritized multi-tier cellular wireless network where a high-priority tier consisting of 3×3 macro cells, each of which covers an area of 1000 m × 1000 m, coexists with a low-priority tier consisting of n small cells per high-priority macro cell, each…
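The prioritized power control strategy described above can be sketched as a one-line cap on an otherwise standard update. This is a hedged illustration, not the article's algorithm: all gains and thresholds are invented, and the LPUE is assumed to know (e.g., via the closed-loop command mentioned above) the cross gain toward one nearby HPUE receiver.

```python
# Hedged sketch of prioritized power control: an LPUE computes the usual
# target-SIR-tracking power, then caps it so that the interference it
# creates at a nearby HPUE receiver stays below a threshold.
# All gains and thresholds below are illustrative assumptions.

P_MAX, TARGET_SIR = 1.0, 3.0
G_OWN = 0.9            # path gain, LPUE -> its serving BS
G_CROSS = 0.05         # path gain, LPUE -> HPUE receiver
I_TH = 0.005           # interference the HPUE can tolerate from this LPUE
NOISE_PLUS_I = 0.05    # interference-plus-noise at the LPUE's serving BS

def prioritized_power():
    p_track = TARGET_SIR * NOISE_PLUS_I / G_OWN  # power that hits the target-SIR
    p_cap = I_TH / G_CROSS                       # cap that protects the HPUE
    return min(P_MAX, p_track, p_cap)

p = prioritized_power()
print(p)  # -> 0.1: the HPUE-protection cap binds (0.1 < 0.1667 < 1.0)
```

When the cap binds, as here, the LPUE sacrifices its own target-SIR to protect the HPUE, which is precisely the priority ordering the section argues for.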

Graduation Thesis Foreign-Language Translation (Chinese and English)


Translation: Traffic Congestion and the Sustainable Development of Urban Transport Systems

Abstract: Rapid urbanization and motorization generally drive the development of urban transport systems, which are expected to be economically, environmentally, and socially sustainable; the result, however, is a relentless increase in traffic volume, leading to congestion.

Road congestion pricing has been proposed many times as an economic measure for relieving urban traffic congestion, but it has not yet seen widespread use in practice, because some of the potential impacts of road charging remain unclear.

This paper first reviews the concept of a sustainable transport system, which should jointly meet the goals of economic development, environmental protection, and social justice. It then examines, in light of the characteristics of sustainable transport systems, how congestion pricing can promote economic growth, environmental protection, and social justice.

The results show that congestion pricing is a practical and effective way to promote the sustainable development of urban transport systems.

1. Introduction

Urban traffic is a topic of pressing concern in large cities around the world.

With the rapid urbanization and motorization of China, traffic congestion has become an increasingly serious problem, causing large time delays, increased energy consumption and air pollution, and reduced reliability of the road network. In many cities, congestion is seen as an obstacle to economic development. A variety of approaches can be used to address congestion, including building new infrastructure, improving the maintenance and operation of existing infrastructure, and using existing infrastructure more efficiently through demand management strategies, including pricing mechanisms, to reduce traffic density. Congestion pricing was proposed long ago as an effective measure for relieving traffic congestion.

The principle and aim of congestion pricing is to relieve congestion by imposing an additional charge on the use of congested facilities during peak periods. By shifting some trips to off-peak times, away from congested facilities, or to high-occupancy vehicles, or by suppressing some trips altogether, a congestion pricing scheme saves time and reduces operating costs while improving air quality, reducing energy consumption, and improving transit productivity.

Such schemes have been applied successfully in many countries and cities around the world.

Following the toll rings introduced in Singapore and Norway in the early 1970s and the mid-1980s, the City of London launched area charging in February 2003; to this day it remains one of the best-known examples of a metropolitan area that has implemented congestion pricing.

However, congestion pricing has not been widely used in practice, for both theoretical and political reasons.

Some of the potential impacts of road charging are still unclear, and the sustainability of congestion pricing for urban development requires further study.

Original or Photocopy of Foreign-Language Material, with Translation


1. Original

Cyclone Introduction

The Cyclone™ field-programmable gate array family is based on a 1.5-V, 0.13-μm, all-layer-copper SRAM process, with densities up to 20,060 logic elements (LEs) and up to 288 Kbits of RAM. With features like phase-locked loops (PLLs) for clocking and a dedicated double data rate (DDR) interface to meet DDR SDRAM and fast cycle RAM (FCRAM) memory requirements, Cyclone devices are a cost-effective solution for data-path applications. Cyclone devices support various I/O standards, including LVDS at data rates up to 640 megabits per second (Mbps), and 66- and 33-MHz, 64- and 32-bit peripheral component interconnect (PCI), for interfacing with and supporting ASSP and ASIC devices. Altera also offers new low-cost serial configuration devices to configure Cyclone devices.

Features

The Cyclone device family offers the following features:
■ 2,910 to 20,060 LEs, see Table 1–1
■ Up to 294,912 RAM bits (36,864 bytes)
■ Supports configuration through low-cost serial configuration device
■ Support for LVTTL, LVCMOS, SSTL-2, and SSTL-3 I/O standards
■ Support for 66- and 33-MHz, 64- and 32-bit PCI standard
■ High-speed (640 Mbps) LVDS I/O support
■ Low-speed (311 Mbps) LVDS I/O support
■ 311-Mbps RSDS I/O support
■ Up to eight global clock lines with six clock resources available per logic array block (LAB) row
■ Support for external memory, including DDR SDRAM (133 MHz), FCRAM, and single data rate (SDR) SDRAM

Description

Cyclone devices contain a two-dimensional row- and column-based architecture to implement custom logic. Column and row interconnects of varying speeds provide signal interconnects between LABs and embedded memory blocks. The logic array consists of LABs, with 10 LEs in each LAB. An LE is a small unit of logic providing efficient implementation of user logic functions. LABs are grouped into rows and columns across the device. Cyclone devices range from 2,910 to 20,060 LEs.
M4K RAM blocks are true dual-port memory blocks with 4K bits of memory plus parity (4,608 bits). These blocks provide dedicated true dual-port, simple dual-port, or single-port memory up to 36 bits wide at up to 250 MHz. These blocks are grouped into columns across the device in between certain LABs. Cyclone devices offer between 60 and 288 Kbits of embedded RAM. Each Cyclone device I/O pin is fed by an I/O element (IOE) located at the ends of LAB rows and columns around the periphery of the device. I/O pins support various single-ended and differential I/O standards, such as the 66- and 33-MHz, 64- and 32-bit PCI standard and the LVDS I/O standard at up to 640 Mbps. Each IOE contains a bidirectional I/O buffer and three registers for registering input, output, and output-enable signals. Dual-purpose DQS, DQ, and DM pins, along with delay chains (used to phase-align DDR signals), provide interface support for external memory devices such as DDR SDRAM and FCRAM devices at up to 133 MHz (266 Mbps). Cyclone devices provide a global clock network and up to two PLLs. The global clock network consists of eight global clock lines that drive throughout the entire device. The global clock network can provide clocks for all resources within the device, such as IOEs, LEs, and memory blocks. The global clock lines can also be used for control signals. Cyclone PLLs provide general-purpose clocking with clock multiplication and phase shifting, as well as external outputs for high-speed differential I/O support.

2. Translation

Introduction to the Cyclone family: The Cyclone series of programmable chips has a core voltage of 1.5 V, uses a 0.13-μm process with all-copper interconnect, integrates 20,060 logic elements, and has 288 Kbits of built-in RAM.

Foreign Reference: Translation and Original


Contents

1 Introduction (1)
This chapter provides an introduction to NS2. In particular, information on installing NS2 is given in Chapter 2. Chapter 3 introduces the NS2 directories and conventions. Chapter 4 describes the main steps in an NS2 simulation. A simple simulation example is given in Chapter 5. Finally, Chapter 6 concludes the discussion.

2 Installation (1)
The idea behind the component approach is to obtain the pieces above and install them individually. This option saves downloading time and a great deal of memory space. However, it can be troublesome for beginners, and is therefore recommended only for experienced users.

Installing an all-in-one NS2 suite on Unix-based systems (2)
Installing an all-in-one NS2 suite on Windows-based systems (3)
3 Directories and Convention (4)
Directories (4)
4 Running an NS2 Simulation (6)
NS2 program invocation (6)
Main NS2 simulation steps (6)
5 A Simulation Example (8)
6 Summary (12)
1 Introduction (13)
2 Installation (15)
Installing an All-In-One NS2 Suite on Unix-Based Systems (15)
Installing an All-In-One NS2 Suite on Windows-Based Systems (16)
3 Directories and Convention (17)
Directories and Convention (17)
Convention (17)
4 Running NS2 Simulation (20)
NS2 Program Invocation (20)
Main NS2 Simulation Steps (20)
5 A Simulation Example (22)
6 Summary (27)

1 Introduction

Network Simulator (Version 2), widely known as NS2, is an event-driven simulation tool that has proved useful in studying the dynamic nature of communication networks. Simulation of wired as well as wireless network functions and protocols (e.g., routing algorithms, TCP, UDP) can be carried out using NS2. In general, NS2 provides users with a way of specifying such network protocols and simulating their corresponding behaviors.
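NS2 itself is scripted in OTcl, but the "event-driven" principle described above is language-neutral: a simulator keeps a queue of timestamped events and executes them in time order. The toy scheduler below is an illustration of that principle only, not NS2's actual API.

```python
# Toy event-driven scheduler: events are (time, action) pairs executed in
# timestamp order, regardless of the order in which they were scheduled.
# This only illustrates the principle behind simulators such as NS2.
import heapq

class Scheduler:
    def __init__(self):
        self.now = 0.0
        self._queue = []
        self._seq = 0          # tie-breaker so equal times pop in FIFO order

    def at(self, time, action):
        """Schedule `action` to fire at simulated `time`."""
        heapq.heappush(self._queue, (time, self._seq, action))
        self._seq += 1

    def run(self):
        """Pop and execute events in increasing time order."""
        while self._queue:
            self.now, _, action = heapq.heappop(self._queue)
            action()

log = []
sim = Scheduler()
sim.at(1.0, lambda: log.append("start traffic"))
sim.at(0.5, lambda: log.append("bring up link"))
sim.at(2.0, lambda: log.append("stop traffic"))
sim.run()
print(log)  # -> ['bring up link', 'start traffic', 'stop traffic']
```

Note that the actions fire in simulated-time order, not insertion order, which is exactly how an NS2 script's scheduled events ("at 1.0 start traffic", etc.) behave.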

Foreign Original and Translation


I. Original Text

Subject: Financial Analysis with the DuPont Ratio: A Useful Compass
Source: Steven C. Isberg, Ph.D.

Financial Analysis and the Changing Role of Credit Professionals

In today's dynamic business environment, it is important for credit professionals to be prepared to apply their skills both within and outside the specific credit management function. Credit executives may be called upon to provide insights regarding issues such as strategic financial planning, measuring the success of a business strategy, or determining the viability of an acquisition candidate. Even so, the normal duties involved in credit assessment and management call for the credit manager to be equipped to conduct financial analysis in a rapid and meaningful way.

Financial statement analysis is employed for a variety of reasons. Outside investors seek information as to the long-run viability of a business and its prospects for providing an adequate return in consideration of the risks being taken. Creditors desire to know whether a potential borrower or customer can service the loans being made. Internal analysts and management utilize financial statement analysis as a means to monitor the outcome of policy decisions, predict future performance targets, develop investment strategies, and assess capital needs. As the role of the credit manager is expanded cross-functionally, he or she may be required to answer the call to conduct financial statement analysis under any of these circumstances. The DuPont ratio is a useful tool in providing both an overview and a focus for such analysis.

A comprehensive financial statement analysis will provide insights as to a firm's performance and/or standing in the areas of liquidity, leverage, operating efficiency, and profitability. A complete analysis will involve both time-series and cross-sectional perspectives. Time-series analysis will examine trends using the firm's own performance as a benchmark.
Cross-sectional analysis will augment the process by using external performance benchmarks for comparison purposes. Every meaningful analysis will begin with a qualitative inquiry into the strategy and policies of the subject company, creating a context for the investigation. Next, the goals and objectives of the analysis will be established, providing a basis for interpreting the results. The DuPont ratio can be used as a compass in this process by directing the analyst toward significant areas of strength and weakness evident in the financial statements.

The DuPont ratio is calculated as follows:

ROE = (Net Income/Sales) × (Sales/Average Assets) × (Average Assets/Average Equity)

The ratio provides measures in three of the four key areas of analysis, each representing a compass bearing, pointing the way to the next stage of the investigation.

The DuPont Ratio Decomposition

The DuPont ratio is a good place to begin a financial statement analysis because it measures the return on equity (ROE). A for-profit business exists to create wealth for its owner(s). ROE is, therefore, arguably the most important of the key ratios, since it indicates the rate at which owner wealth is increasing. While the DuPont analysis is not an adequate replacement for detailed financial analysis, it provides an excellent snapshot and starting point, as will be seen below.

The three components of the DuPont ratio, as represented in the equation, cover the areas of profitability, operating efficiency, and leverage. In the following paragraphs, we examine the meaning of each of these components by calculating and comparing the DuPont ratio using the financial statements and industry standards for Atlantic Aquatic Equipment, Inc. (Exhibits 1, 2, and 3), a retailer of water sporting goods.

Profitability: Net Profit Margin (NPM: Net Income/Sales)

Profitability ratios measure the rate at which either sales or capital is converted into profits at different levels of the operation.
The most common are gross, operating, and net profitability, which describe performance at different activity levels. Of the three, net profitability is the most comprehensive, since it uses the bottom-line net income in its measure.

A proper analysis of this ratio would include at least three to five years of trend and cross-sectional comparison data. The cross-sectional comparison can be drawn from a variety of sources. Most common are the Dun & Bradstreet Index of Key Financial Ratios and the Robert Morris Associates (RMA) Annual Statement Studies. Each of these volumes provides key ratios estimated for business establishments grouped according to industry (i.e., SIC codes). More will be discussed in regard to comparisons as our example is continued below. As is, over the two years, Whitbread has become less profitable.

Leverage: The Leverage Multiplier (Average Assets/Average Equity)

Leverage ratios measure the extent to which a company relies on debt financing in its capital structure. Debt is both beneficial and costly to a firm. The cost of debt is lower than the cost of equity, an effect which is enhanced by the tax deductibility of interest payments, in contrast to taxable dividend payments and stock repurchases. If debt proceeds are invested in projects which return more than the cost of debt, owners keep the residual, and hence the return on equity is "leveraged up." The debt sword, however, cuts both ways. Adding debt creates a fixed payment required of the firm whether or not it is earning an operating profit, and therefore payments may cut into the equity base. Further, the risk of the equity position is increased by the presence of debt holders having a superior claim to the assets of the firm.

II. Translation

Title: The DuPont Analysis System
Source: Steven C. Isberg, Ph.D.

The DuPont Analysis System: Financial Analysis and the Changing Role of Credit Professionals. In today's dynamic business environment, it is very important for credit professionals to be able to apply their skills both within and outside the specific credit management function.
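The three-factor DuPont identity above can be checked numerically. The figures below are made up purely for illustration (they are not from the Atlantic Aquatic exhibits); the point is that the product of margin, turnover, and the leverage multiplier collapses algebraically to net income over average equity.

```python
# Illustrative DuPont decomposition with invented figures (not from the text).
net_income, sales = 120.0, 2400.0
avg_assets, avg_equity = 1600.0, 800.0

npm = net_income / sales              # profitability: net profit margin
turnover = sales / avg_assets         # operating efficiency: asset turnover
leverage = avg_assets / avg_equity    # leverage multiplier

roe = npm * turnover * leverage       # sales and assets cancel in the product
assert abs(roe - net_income / avg_equity) < 1e-12  # identity: ROE = NI / equity
print(f"ROE = {npm:.3f} x {turnover:.2f} x {leverage:.1f} = {roe:.2%}")
```

Running the sketch gives ROE = 0.050 × 1.50 × 2.0 = 15%, and the same decomposition tells the analyst which "compass bearing" (margin, turnover, or leverage) is driving a change in ROE between periods.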

Graduation Project (Thesis): Foreign Original and Translation


I. Original Text

MCU

A microcontroller (or MCU) is a computer-on-a-chip. It is a type of microcomputer emphasizing self-sufficiency and cost-effectiveness, in contrast to a general-purpose microprocessor (the kind used in a PC). With the development of technology and of control systems across a wide range of applications, and with the trend toward small, intelligent equipment, the single-chip microcontroller, with its advantages of small size, powerful functions, low cost, and flexible use, has shown strong vitality. It generally has better anti-interference ability than comparable integrated circuits and better adaptability to ambient temperature and humidity, and it can operate stably under industrial conditions. Single-chip microcontrollers are widely used in a variety of instruments and meters, making instrumentation intelligent, improving measurement speed and accuracy, and strengthening control functions. In short, with the advent of the information age, the inherent structural weaknesses of the traditional single-chip microcontroller have exposed many drawbacks: its speed, scale, and performance indicators increasingly fail to meet users' needs, and the development and upgrading of single-chip chipsets face new challenges.

The Description of the AT89S52

The AT89S52 is a low-power, high-performance CMOS 8-bit microcontroller with 8K bytes of In-System Programmable Flash memory. The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard 80C51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer.
By combining a versatile 8-bit CPU with In-System Programmable Flash on a monolithic chip, the Atmel AT89S52 is a powerful microcontroller which provides a highly flexible and cost-effective solution to many embedded control applications.

The AT89S52 provides the following standard features: 8K bytes of Flash, 256 bytes of RAM, 32 I/O lines, Watchdog timer, two data pointers, three 16-bit timer/counters, a six-vector two-level interrupt architecture, a full-duplex serial port, on-chip oscillator, and clock circuitry. In addition, the AT89S52 is designed with static logic for operation down to zero frequency and supports two software-selectable power-saving modes. The Idle Mode stops the CPU while allowing the RAM, timer/counters, serial port, and interrupt system to continue functioning. The Power-down mode saves the RAM contents but freezes the oscillator, disabling all other chip functions until the next interrupt or hardware reset.

Features
• Compatible with MCS-51® Products
• 8K Bytes of In-System Programmable (ISP) Flash Memory
– Endurance: 1000 Write/Erase Cycles
• 4.0V to 5.5V Operating Range
• Fully Static Operation: 0 Hz to 33 MHz
• Three-level Program Memory Lock
• 256 x 8-bit Internal RAM
• 32 Programmable I/O Lines
• Three 16-bit Timer/Counters
• Eight Interrupt Sources
• Full Duplex UART Serial Channel
• Low-power Idle and Power-down Modes
• Interrupt Recovery from Power-down Mode
• Watchdog Timer
• Dual Data Pointer
• Power-off Flag

Pin Description

VCC
Supply voltage.

GND
Ground.

Port 0
Port 0 is an 8-bit open-drain bidirectional I/O port. As an output port, each pin can sink eight TTL inputs. When 1s are written to Port 0 pins, the pins can be used as high-impedance inputs. Port 0 can also be configured to be the multiplexed low-order address/data bus during accesses to external program and data memory. In this mode, P0 has internal pullups. Port 0 also receives the code bytes during Flash programming and outputs the code bytes during program verification.
External pullups are required during program verification.

Port 1
Port 1 is an 8-bit bidirectional I/O port with internal pullups. The Port 1 output buffers can sink/source four TTL inputs. When 1s are written to Port 1 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 1 pins that are externally being pulled low will source current (IIL) because of the internal pullups. In addition, P1.0 and P1.1 can be configured to be the Timer/Counter 2 external count input (P1.0/T2) and the Timer/Counter 2 trigger input (P1.1/T2EX), respectively. Port 1 also receives the low-order address bytes during Flash programming and verification.

Port 2
Port 2 is an 8-bit bidirectional I/O port with internal pullups. The Port 2 output buffers can sink/source four TTL inputs. When 1s are written to Port 2 pins, they are pulled high by the internal pullups and can be used as inputs. As inputs, Port 2 pins that are externally being pulled low will source current (IIL) because of the internal pullups. Port 2 emits the high-order address byte during fetches from external program memory and during accesses to external data memory that use 16-bit addresses (MOVX @DPTR). In this application, Port 2 uses strong internal pullups when emitting 1s. During accesses to external data memory that use 8-bit addresses (MOVX @Ri), Port 2 emits the contents of the P2 Special Function Register. Port 2 also receives the high-order address bits and some control signals during Flash programming and verification.

Port 3
Port 3 is an 8-bit bidirectional I/O port with internal pullups. The Port 3 output buffers can sink/source four TTL inputs. When 1s are written to Port 3 pins, they are pulled high by the internal pullups and can be used as inputs.
As inputs, Port 3 pins that are externally being pulled low will source current (IIL) because of the pullups. Port 3 also serves the functions of various special features of the AT89S52, as shown in the following table. Port 3 also receives some control signals for Flash programming and verification.

RST
Reset input. A high on this pin for two machine cycles while the oscillator is running resets the device. This pin drives high for 96 oscillator periods after the Watchdog times out. The DISRTO bit in SFR AUXR (address 8EH) can be used to disable this feature. In the default state of bit DISRTO, the RESET HIGH out feature is enabled.

ALE/PROG
Address Latch Enable (ALE) is an output pulse for latching the low byte of the address during accesses to external memory. This pin is also the program pulse input (PROG) during Flash programming. In normal operation, ALE is emitted at a constant rate of 1/6 the oscillator frequency and may be used for external timing or clocking purposes. Note, however, that one ALE pulse is skipped during each access to external data memory. If desired, ALE operation can be disabled by setting bit 0 of SFR location 8EH. With the bit set, ALE is active only during a MOVX or MOVC instruction. Otherwise, the pin is weakly pulled high. Setting the ALE-disable bit has no effect if the microcontroller is in external execution mode.

PSEN
Program Store Enable (PSEN) is the read strobe to external program memory. When the AT89S52 is executing code from external program memory, PSEN is activated twice each machine cycle, except that two PSEN activations are skipped during each access to external data memory.

EA/VPP
External Access Enable. EA must be strapped to GND in order to enable the device to fetch code from external program memory locations starting at 0000H up to FFFFH. Note, however, that if lock bit 1 is programmed, EA will be internally latched on reset.
EA should be strapped to VCC for internal program executions. This pin also receives the 12-volt programming enable voltage (VPP) during Flash programming.

XTAL1
Input to the inverting oscillator amplifier and input to the internal clock operating circuit.

XTAL2
Output from the inverting oscillator amplifier.

Special Function Registers

Note that not all of the addresses are occupied, and unoccupied addresses may not be implemented on the chip. Read accesses to these addresses will in general return random data, and write accesses will have an indeterminate effect. User software should not write 1s to these unlisted locations, since they may be used in future products to invoke new features. In that case, the reset or inactive values of the new bits will always be 0.

Timer 2 Registers: Control and status bits are contained in registers T2CON and T2MOD for Timer 2. The register pair (RCAP2H, RCAP2L) are the Capture/Reload registers for Timer 2 in 16-bit capture mode or 16-bit auto-reload mode.

Interrupt Registers: The individual interrupt enable bits are in the IE register. Two priorities can be set for each of the six interrupt sources in the IP register.

Dual Data Pointer Registers: To facilitate accessing both internal and external data memory, two banks of 16-bit Data Pointer Registers are provided: DP0 at SFR address locations 82H-83H and DP1 at 84H-85H. Bit DPS = 0 in SFR AUXR1 selects DP0 and DPS = 1 selects DP1. The user should always initialize the DPS bit to the appropriate value before accessing the respective Data Pointer Register.

Power Off Flag: The Power Off Flag (POF) is located at bit 4 (PCON.4) in the PCON SFR. POF is set to "1" during power-up. It can be set and reset under software control and is not affected by reset.

Memory Organization

MCS-51 devices have a separate address space for Program and Data Memory.
Up to 64K bytes each of external Program and Data Memory can be addressed.

Program Memory
If the EA pin is connected to GND, all program fetches are directed to external memory. On the AT89S52, if EA is connected to VCC, program fetches to addresses 0000H through 1FFFH are directed to internal memory, and fetches to addresses 2000H through FFFFH are directed to external memory.

Data Memory
The AT89S52 implements 256 bytes of on-chip RAM. The upper 128 bytes occupy a parallel address space to the Special Function Registers. This means that the upper 128 bytes have the same addresses as the SFR space but are physically separate from SFR space. When an instruction accesses an internal location above address 7FH, the addressing mode used in the instruction specifies whether the CPU accesses the upper 128 bytes of RAM or the SFR space. Instructions which use direct addressing access the SFR space. For example, the following direct-addressing instruction accesses the SFR at location 0A0H (which is P2):

MOV 0A0H, #data

Instructions that use indirect addressing access the upper 128 bytes of RAM. For example, the following indirect-addressing instruction, where R0 contains 0A0H, accesses the data byte at address 0A0H, rather than P2 (whose address is 0A0H):

MOV @R0, #data

Note that stack operations are examples of indirect addressing, so the upper 128 bytes of data RAM are available as stack space.

Timer 0 and 1
Timer 0 and Timer 1 in the AT89S52 operate the same way as Timer 0 and Timer 1 in the AT89C51 and AT89C52.

Timer 2
Timer 2 is a 16-bit Timer/Counter that can operate as either a timer or an event counter. The type of operation is selected by bit C/T2 in the SFR T2CON (shown in Table 2). Timer 2 has three operating modes: capture, auto-reload (up or down counting), and baud rate generator. The modes are selected by bits in T2CON. Timer 2 consists of two 8-bit registers, TH2 and TL2. In the Timer function, the TL2 register is incremented every machine cycle.
Since a machine cycle consists of 12 oscillator periods, the count rate is 1/12 of the oscillator frequency. In the Counter function, the register is incremented in response to a 1-to-0 transition at its corresponding external input pin, T2. In this function, the external input is sampled during S5P2 of every machine cycle. When the samples show a high in one cycle and a low in the next cycle, the count is incremented. The new count value appears in the register during S3P1 of the cycle following the one in which the transition was detected. Since two machine cycles (24 oscillator periods) are required to recognize a 1-to-0 transition, the maximum count rate is 1/24 of the oscillator frequency. To ensure that a given level is sampled at least once before it changes, the level should be held for at least one full machine cycle.

Interrupts
The AT89S52 has a total of six interrupt vectors: two external interrupts (INT0 and INT1), three timer interrupts (Timers 0, 1, and 2), and the serial port interrupt. These interrupts are all shown in Figure 10. Each of these interrupt sources can be individually enabled or disabled by setting or clearing a bit in Special Function Register IE. IE also contains a global disable bit, EA, which disables all interrupts at once. Note that Table 5 shows that bit position IE.6 is unimplemented. In the AT89S52, bit position IE.5 is also unimplemented. User software should not write 1s to these bit positions, since they may be used in future AT89 products. The Timer 2 interrupt is generated by the logical OR of bits TF2 and EXF2 in register T2CON. Neither of these flags is cleared by hardware when the service routine is vectored to. In fact, the service routine may have to determine whether it was TF2 or EXF2 that generated the interrupt, and that bit will have to be cleared in software. The Timer 0 and Timer 1 flags, TF0 and TF1, are set at S5P2 of the cycle in which the timers overflow. The values are then polled by the circuitry in the next cycle.
However, the Timer 2 flag, TF2, is set at S2P2 and is polled in the same cycle in which the timer overflows.

II. Translation

The Single-Chip Microcomputer

A single-chip microcomputer is a microcomputer that integrates the central processing unit, memory, timer/counters, and input/output interfaces on a single integrated-circuit chip.
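As a quick numerical check of the count rates in the timer/counter discussion above (one increment per 12-oscillator-period machine cycle in timer mode; a 24-period recognition time for a 1-to-0 transition in counter mode), a small Python sketch with an assumed 12 MHz crystal:

```python
def timer_rates(osc_hz):
    # Timer mode: TL2 increments once per machine cycle (12 osc periods).
    timer_rate = osc_hz / 12
    # Counter mode: recognizing a 1-to-0 transition on T2 takes two machine
    # cycles (24 osc periods), so the maximum count rate is halved again.
    max_counter_rate = osc_hz / 24
    return timer_rate, max_counter_rate

timer, counter = timer_rates(12_000_000)  # common 12 MHz crystal (assumed)
assert timer == 1_000_000                 # 1 MHz increment rate in timer mode
assert counter == 500_000                 # 500 kHz max external count rate
```

The 12 MHz figure is only a convenient example frequency; the 1/12 and 1/24 ratios hold for any oscillator within the part's rated range.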

Fujian Tea Export Thesis: Foreign Original and Translation

Beijing Union University Graduation Thesis: Foreign Original and Translation
Title: Research on the Current Status of Fujian Tea Exports and Countermeasures
Major: International Economics and Trade
Supervisor:
College:
Student ID:
Class:
Name:

I. Foreign Original

Current Status and Future Development of Global Tea Production and Tea Products
Alastair Hicks
FAO Regional Office for Asia and the Pacific

Tea is globally one of the most popular and lowest-cost beverages, next only to water. Tea is consumed by a wide range of age groups at all levels of society. More than three billion cups of tea are consumed daily worldwide. Tea is considered to be a part of the huge beverage market, not to be seen in isolation just as a 'commodity'. Tea's active ingredients are of interest to functional-foods markets. Africa, South America, the Near East and especially the Asian region produce a varied range of teas; this, together with a reputation in international markets for high quality, has resulted in Asia enjoying a share of every importing market in the world. Huge populations in Asia, the Middle East, Africa, the UK, the EU, and the countries of the CIS consume tea regularly and throughout the day. The main tea-producing countries globally are, in Africa: Burundi, Kenya, Malawi, Rwanda, Tanzania, Uganda, Zimbabwe and others; in South America: Argentina, Brazil and others; in the Near East: Iran and Turkey; in Asia: Bangladesh, China, India, Indonesia, Sri Lanka, Viet Nam and others. In addition, the Russian Federation and CIS countries produce quantities of tea. Numerous types of tea are produced in the countries listed above. In China, for example, the country with the largest tea plantings and second-largest output, green tea accounts for around half of total exports, black tea around one third and other teas one fifth. Depending on the manufacturing technique, tea may be described as green, black, oolong, white, yellow or even compressed tea. The Intergovernmental Group on Tea monitors market conditions and provides an update of potential market prospects for tea over the medium term.
It examines the current situation and medium-term prospects for the production, consumption and trade of tea, and their impact on the world tea market. In summary, tea is considered as having a share of the global beverage market, a highly competitive field. A wide range of tea products continues to be developed, through product and process development for added value, as market shares become more sophisticated and competitive. The tea industry must rise to these challenges, facing the future with confidence.

Introduction

The Asian region produces a varied range of teas and this, together with a reputation in the international markets for high quality, has resulted in Asia enjoying a share of every importing market in the world. Africa, South America and the Near East also produce quantities of tea. Huge populations of Asia, the UK, the EU, the Middle East, Africa and the countries of the CIS consume tea regularly and throughout the day.

The common tea plant is the evergreen shrub Camellia sinensis. There are several varieties of this species, a well-known one being the Indian Assam tea (C. sinensis var. assamica Kitamura). Traditionally, tea is prepared from its dried young leaves and leaf buds, made into a beverage by steeping the leaves in boiling water. China is credited with introducing tea to the world, though the evergreen tea plant is in fact native to Southern China, North India, Myanmar and Cambodia.

Although there are a growing number of countries that produce teas in a multiplicity of blends, there are essentially three main types of Camellia tea: Green, 'Oolong' and Black. The difference lies in the 'fermentation', which actually refers to oxidative and enzymatic changes within the tea leaves during processing. Green tea is essentially unfermented, Oolong tea is partially fermented and Black tea is fully fermented.
Black tea, which represents the majority of international trade, yields an amber-coloured, full-flavour liquid without bitterness. For example, both Orange Pekoe and Pekoe are black teas; the name refers to the silver-tipped Assam teas. Orange Pekoe is made from the very young top leaves and traditionally comes from India or Sri Lanka. Pekoe tea comes from India, Indonesia or Sri Lanka and is made from leaves even smaller than those characteristically used for Orange Pekoe.

In addition to these conventional teas, many countries of Asia have a number of herbal teas, made by brewing plant leaves or other plant parts, including flowers. For example, Gymnema sylvestre, a member of the botanical family Asclepiadaceae found mainly in India, has been used as a healthy and nutritive herbal tea which is claimed to have a number of medicinal properties. Numerous other herbal teas have been gaining popularity recently.

Current Situation

The global tea production growth rate in 2006 was more than 3%, reaching an estimated 3.6 million t. The expansion was mainly due to record crops in China, Viet Nam and India. Production in China increased 9.5% over the 2005 record, to 1.05 million t in 2006, through Government policies to increase rural household incomes. Expansion of 28 percent in Viet Nam gave an output of 133,000 t as tea bushes reached optimum yields. India had a 3% increase in harvest output, to 945,000 t for the year. This growth offset declines in other major countries, Kenya and Sri Lanka, where output fell by 6 and 1.6%, respectively.

Exports

Exports in 2006 reached 1.55 million t, compared to 1.53 million t in 2005 (Table 2). Increased shipments from Sri Lanka, India and Viet Nam offset major declines in Kenya and Indonesia, down by 12.4 and 7%. Tea exports from Sri Lanka reached 314,900 t in 2006, a gain of 5.4%, while exports from Viet Nam and India expanded by 24 and 14%.
The increase was due to expansion in trade to the Near East, with the growth and strength of the economies in that region. Significant growth was also achieved by Rwanda and Tanzania, while shipments from China were relatively unchanged. The decline in exports from Kenya was affected by political uncertainty in Pakistan, its major market. Pakistan's uncertainty also affected shipments from Indonesia and Bangladesh, where exports declined and structural problems plague the industry (FAO 2008).

Imports

World net imports of tea declined by 1.7% to 1.57 million t in 2006 (Table 3), reflecting reduced tea imports by Pakistan, the Russian Federation, and the Netherlands. Increased imports by traditional markets such as the United Kingdom, United States, Egypt and Germany did not offset these declines. Imports by Pakistan declined by 3%, the Russian Federation by 2%, and the Netherlands by 25%, with imports increasing by 7% in the United Kingdom, United States, and Egypt. In Germany a 9 percent increase was recorded.

Consumption

World tea consumption grew by 1% in 2006, reaching 3.64 million t, less than the annual average growth of 2.7% over the previous decade (Table 4). The biggest influence has been the growth in consumption of agricultural products, tea included, in China and India, as their economies expanded dramatically. In 2006, China recorded a spectacular annual increase of 13.6% in total consumption, which reached 776,900 t; whilst annual growth in tea consumption in India was less, it was higher than in the previous decade. Income gains in India, China and other developing countries translate into more demand for higher value-added items.

Tea Added-Value Product and Process Development

Traditional loose tea has been largely replaced by bagged tea in many forms, for convenience. There is a range of preferences for tea styles and drinking habits among different consumers in various countries.
Green and black tea will remain the major forms of tea; however, instant tea, flavoured tea, decaffeinated tea, organically grown tea, 'foamy' tea, roasted tea, herbal tea and ready-to-drink tea (canned and bottled) are entering the market. Food products being developed include tea-rice, tea-noodles, tea-cake, tea-biscuits, tea-wine, tea-candy and tea-ice cream. In particular, new types of herbal, fruit-flavour and decaffeinated teas, as well as ready-to-drink teas, are becoming popular. The organically grown and healthful image of tea can be exploited, as can the utilization of the active ingredients of tea as their functional properties and nature become better known. Ready-to-drink tea is cheaper than Coca-Cola derivatives, and this is perceived as a main competitor. There is a risk that tea consumption may drop as other drinks come on the market, made from e.g. rice, potatoes or mulberry leaves. Diversified products such as tea chewing gum have been developed (Hicks 2001).

Some Conclusions

The review of the world tea market indicates some improvement in the fundamental oversupply situation that has persisted in recent years. However, in the medium term, projections suggest that although supply will continue to outstrip demand, the gap could be closer to equilibrium if consumption improves in traditional markets. Strategies must be devised to continue the improvement in demand. Opportunities for an expansion in consumption and an improvement in prices exist in producing countries themselves, as per capita consumption levels there are relatively low. For example, per capita consumption in major importing countries such as the Russian Federation is 1.26 kg, and for the UK 2.20 kg, whilst per capita consumption in India is 0.65 kg and in Kenya 0.40 kg. The results of research into the health benefits of tea consumption should also be used more extensively in promoting consumption in both producing and importing countries.
In addition, strategies to exploit demand in value-added market segments, including specialty and organic teas, should be more aggressively pursued. In targeting potential growth markets, recognition of and compliance with food safety and quality standards is essential. Even the impact of imposing a minimum quality standard as a means of improving the quality of tea traded internationally would, by default, reduce the quantity of tea on the world market and improve prices, at least in the short to medium term (FAO 2008).

In summary, tea can be considered as having a share of the soft drink/beverages market, as well as having functional-food potential. A wide range of tea products will continue to be developed through product and process development for added value as market shares become more sophisticated and competitive. The industry must rise to these challenges and face the future with confidence (Hicks 2001).

Article ID:/ducument/d11.pdf

II. Translation

Current Status and Future Development of the World Tea Industry

Tea is one of the world's most popular and lowest-cost beverages, second only to water.


Beijing Union University Graduation Project (Thesis): Foreign Original and Translation
Title: China Mobile Business Management System
Major: Computer Science and Technology
Supervisor: Shang Xinna
College: College of Information
Student ID: 2009080405108
Class: 20090804051
Name: Song Jing

I. Foreign Original

Selecting the Right Platform Components

Every software developer needs to be able to make decisions. Software development involves decision making, from deciding on the right programming language to use, to selecting the developer tools, to deciding how best to deploy and distribute an application. This chapter introduces the process that we will use throughout the rest of the book for selecting components for our enterprise platform.

Decisions, Decisions, Decisions

As we discuss in Chapter 1, the focus of this book is on building an enterprise software platform. As demonstrated in that chapter, an enterprise platform comprises a large number of independent components, each of which contributes capabilities to the overall platform. Selecting the components used to establish a platform is typically the role of a software architect. The architect uses his or her knowledge of the system to be delivered and the needs of the stakeholders to help make choices about which components to use. The architect must be able to explain the choices to others, such as programmers, customers, and management, in order to achieve the acceptance needed to make the project a successful one.

As software architects, we need to select which components will compose our enterprise platform. For example, if you know that the applications created for this platform will use a Web-based interface, you will want to make sure that you have a servlet container as one of the platform components. After you determine the need for a servlet container, you need to decide among the many choices of servlet containers. There may be a wide variety of choices for any single component in your platform.
Because you have a large number of components that can be applied to your architecture, you want to make sure that you have a consistent means of deciding which component implementation you will select and use, and then be able to communicate the reasons for your choices to other stakeholders in the project. A consistent and documented product-selection process eases this communication task by making it possible to explain to all involved parties how the decision to use a specific component was reached. This decision-making process is not unique to the use of open source software, but applies to the selection of any software component or application. However, this book is primarily about using open source software, so the first decision we will examine is the decision to use open source software.

Choosing Open Source Software

Why choose to use open source software? Although this question may offend some open source zealots, open source software is not the right choice in every situation. It is good to know when open source software is the right choice and to be able to defend the decision to other stakeholders in a project. So, in this light, let's examine why people choose to use, or not to use, open source solutions. What are the factors that push companies to choose the open source alternative to commercial software? How does open source software compare to commercial software in the areas of cost, fitness, quality, risk, and time? When you are making a decision to purchase or use any product, there are several things of interest to consider:

■ Cost
■ Suitability
■ Quality
■ Risk
■ Time

Weigh the needs and features of your product against each of these points when making a product decision. In some cases, cost may be the driving factor in making a product decision; in other cases, it may be suitability, quality, level of risk, or time.
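One way to make the weighing of these factors explicit and repeatable is a weighted decision matrix. The sketch below uses hypothetical weights, candidate names, and scores (none of them come from the text) just to show the mechanics:

```python
# Minimal weighted decision matrix sketch. Weights sum to 1.0; each candidate
# is scored 1-5 on every factor. All numbers here are made-up placeholders.
def score(candidate, weights):
    return sum(weights[k] * candidate[k] for k in weights)

weights = {"cost": 0.30, "suitability": 0.30, "quality": 0.20,
           "risk": 0.10, "time": 0.10}

# Two hypothetical servlet-container candidates for illustration only.
open_source_option = {"cost": 5, "suitability": 4, "quality": 4,
                      "risk": 4, "time": 4}
commercial_option = {"cost": 2, "suitability": 4, "quality": 4,
                     "risk": 3, "time": 3}

best = max([("open-source", open_source_option),
            ("commercial", commercial_option)],
           key=lambda kv: score(kv[1], weights))
assert best[0] == "open-source"   # wins 4.3 to 3.2 under these weights
```

Recording the weights and scores alongside the decision gives the architect exactly the kind of documented, explainable selection process the text calls for: stakeholders can argue about the inputs rather than the conclusion.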
The following sections examine each of these factors briefly to help you understand how they affect a project decision.

Cost

Cost reflects the entire cost of owning and using the software. This is much more than just the initial cost of purchase. Cost includes the cost of acquisition, cost of use, cost of training, and cost of maintenance. Taking all these costs into account, you come up with what's called the total cost of ownership for the time that the product will be used. Cost applies directly to a project's budget, and so in general we want to minimize costs when possible.

During the height of the "dot-com" era, Linux was heralded by many as the operating system of choice for many of the newly formed and cash-tight startup companies. The rationale was simple: Linux met the software needs of these companies and could be acquired for free. In these cases, the initial cost of product acquisition was a driving factor in the decision-making process. Most open source software is available for free. There may be restrictions placed on how the software or resulting products are used commercially, but the software is typically free of acquisition costs. That is not to say that there are no costs involved in using open source software, but the initial cost of purchase is zero.

Let's examine these costs as they relate to open source software. The cost of purchase is, of course, zero. There are, however, costs associated with:

Training. Initially, the cost of training for an open source application is likely to be higher than for commercial applications, because there are often limited training resources specializing in open source applications. However, for a large organization with many users, an initial investment in the development of training materials may be less costly than the purchase cost of the alternative commercial software.

Operations. It is difficult to determine operational cost differences.
Initially, any new software will have higher costs associated with operations as the staff becomes more familiar with the software and the way it can be configured and used. Commercial products often offer or work with add-on tools (such as monitoring programs) that can be used to reduce operational costs. These tools often support open source products as well. For example, Apache is well supported by third-party Web monitoring tools. In general, the operational costs of open source software should not be substantially different from those of commercial equivalents.

Maintenance. Commercial software products normally charge annual fees to cover maintenance of the software and provide regular updates. Active open source projects maintain the software and provide updates for free. If a company wants to pay a fee for software maintenance, most open source products have developers who are willing to accept the money and address company-specific problems.

To summarize, open source software wins on initial acquisition costs but loses on training costs. It ties with commercial software on operational costs but wins on maintenance costs. Overall, in most cases open source software will be less expensive for companies to use.

Suitability

Suitability is the measure of how well suited a given product is to the requirements it is being applied against. In general, you want the product that best satisfies your project's needs. As long as we are focused on enterprise Java and we select components that conform to the standards, we know that we have achieved a certain amount of suitability for our purpose of building an enterprise platform. Suitability has to be measured against each project's individual needs. As a very basic example, if you want to pound a nail into a board, you don't use a pot, you use a hammer.
If you're making soup, though, a hammer will do you no good at all, but a pot will be quite useful. For a platform developer, suitability is normally based on product features that relate to conformance to standards and performance metrics. If the primary purpose is to use the open source platform as a development environment and deployment will be done on a different platform, then most performance considerations typically do not apply.

Quality

Obviously, the higher the quality of your chosen software, the better. There are three major quality concerns you need to consider when evaluating software: robustness, reliability, and reparability. These concerns are discussed in detail in the following sections.

Robustness

Robustness is a measure of how well a product withstands abnormal situations. Sure, we've established that open source software is typically zero initial cost and consequently fast to acquire. But does it follow, as the saying goes, that "you get what you pay for"? In other words, can we expect low quality from free software just because we pay nothing for it? The answer is that quality is just as much an issue with open source software as it is with commercial software, and that the money you pay for a given piece of software does not in any way determine its quality.

Reliability

Reliability is a measure of how often the product breaks. First, the source code of an open source product is open to scrutiny. Many people can review and comment on the source code. You can even review the source code for a product before you decide to use it. Because of this, developers are often able to offer fixes as software problems are found. This means that problems are often found and fixed faster in open source programs. It also means that, if needed, you can fix the software yourself. Now, compare this to commercial software. The software is reviewed internally by the vendor and depends on the quality assurance (QA) process within the vendor.
In this model, with many fewer people testing the software, bugs may or may not be caught, just as with open source software. The difference is that the eyes looking over the code are those of the QA employees of the company that makes the software; with open source software, the reviewers are typically those who use the software. If a bug does make it into a release, the size of the vendor's available workforce limits its ability to fix the bug and make the fix available to the user community. This means that a typical vendor has to decide which bugs to fix and which ones to ignore or postpone fixing until a later release. Even worse, the vendor may decide to no longer support a particular version of the software, and a fix may never be released, or you may be forced to upgrade the software to a newer release. Microsoft, for example, collects all fixes (in particular those that don't have to do with security) into yearly releases called service packs, meaning that if you find a bug in Windows you may well have to wait up to a year to get the fix, if not longer. Or sometimes the company will fix the bug in a new release of Windows, forcing you to pay for the upgrade if you want a fix for a problem you had with a previous release.

Reparability

Reparability is a measure of how long it takes to repair a product once it is broken. In general, open source software is more repairable because the source code is available, and this property typically means that bugs have often been discovered and repaired before you encounter them. As you can see, open source products can have some advantages over typical commercial software products. The bottom line is that any particular open source product may or may not have a quality advantage over any specific commercial product. In many cases, the quality of open source products is at least comparable to that of their commercial counterparts. Often, it is superior.
The issue is not black and white; you have to judge each piece of software on its own merits against its competition to determine its quality, regardless of how much it costs to acquire.

II. Translation

Selecting the Right Platform Components

Every software developer needs the ability to make decisions.
