Simultaneous Design Optimization of Permanent Magnet, Coils, and


Putting Power in the Cage of the System (把权力关进制度的笼子)

The report of the Party's 18th National Congress called for "promoting the open and standardized operation of power, so that power operates in the sunshine." In his speech at the second plenary session of the Central Commission for Discipline Inspection, General Secretary Xi Jinping made clear the need to establish effective disciplinary, prevention, and safeguard mechanisms and to "lock power into the cage of the system." In recent years, the municipal government, starting from key areas and key positions, has innovated its modes of supervision to restrict power and explored a new path for standardizing and checking it.

Tightening the "reins" on power: regulating floor area ratio changes. On the basis of in-depth investigation and analysis of typical cases, the city introduced measures for administering floor area ratio adjustments on urban construction land. First, the measures raise the "threshold": an adjustment may be applied for only where the public interest or a policy change is involved. Second, they impose strict procedures: experts must conduct feasibility studies and review; where the interests of neighboring residents are closely involved, public hearings must be held; only if no objection is raised is the adjustment submitted to the government for approval. Third, they tighten authority: any adjustment must be discussed and decided by the executive meeting of the municipal government, and no other unit or individual has the right to decide a change to the plan. To manage risks in existing plans, the municipal government moved the point of control forward, issuing technical regulations for urban planning administration that set out the rules for calculating floor area ratio at the design-approval and construction stages, together with standards for controlling building height, floor area, and volume, effectively limiting discretion. Within the planning department, certification and approval, post-issuance tracking inspection, and completion verification of the floor area ratio are undertaken by different offices, so that verification is "double-insured" and the offices supervise and restrain one another. At the same time, a filing system for floor area ratio management was established: every adjustment must be registered in detail and reported to the discipline inspection and supervision organs for the record.

Clarifying the operating process: selling land "in the sunshine." Taking regulation of the land market as its breakthrough, the municipal government creatively established a land-transfer management model of "net transfer, lottery-selected auction hosts, live broadcast, and follow-up tracking," with process controls at three stages: before listing, during the transaction, and afterwards. "Net transfer" means that before a plot is listed, demolition and resettlement are completed first, turning "raw" land into "net" land. "Lottery-selected hosts" means that the auctioneer is drawn at random on site, by lottery, from a pool of candidates. "Live broadcast" means that the auction is recorded and broadcast live on the city land-market network and television, with NPC deputies, CPPCC members, and public supervisors observing on site. "Tracking" means that after a plot is transacted, the contract is managed and completion is filed online, with real-time supervision and tracking of the land's development.

In addition, for sensitive land-acquisition issues the municipal government stipulates that, under "net listing," decisions on whether and how to demolish require the consent of more than 90% of residents, and that every plot to be listed must have compensation, resettlement, recovery of land-use rights, and dispute resolution "in place" in all four respects; otherwise the listing is adjusted or firmly refused.

Weaving a "fence" of mechanisms: scientific planning approval. In 2011 the municipal government promulgated the working rules of the Municipal Planning Commission, setting out new provisions on the Commission's membership, plan review, design review, and rules of procedure. The rules provide that architectural design schemes submitted for review must be optimized; that public buildings, large buildings along main roads, and buildings along the city's scenic rivers must submit at least three alternative schemes by different designers for comparison; that schemes must reflect the surrounding built environment; and that especially important projects must use models to express the buildings' spatial relationships. The names of the designer and the design unit are to be inscribed on the building. In project bidding, an owner-withdrawal mechanism was introduced: the construction unit invites tenders, bids are registered, judges are drawn at random, and the owner sends no personnel to take part in the evaluation, making the process more standardized and the competition fairer.

Separating investigation from handling: open and transparent law enforcement. The city's land department formulated opinions on further strengthening urban land management and measures for land and resources law enforcement, focusing on a long-term mechanism of "prevention first, timely discovery, effective stopping, and investigation in place." The opinions standardize supervision before, during, and after approval, and clearly establish a mechanism that separates the investigation of administrative violations from their handling.

This reform, which tackled the hard problem of standardizing and restricting public power, achieved governance at three sources: it made new institutional arrangements for preventing corruption at its source; it provided a new mechanism for preventing and controlling social contradictions at their source; and it supplied new experience for strengthening law-based urban social management at its source. Standardizing and balancing public power is an important means of advancing the rule of law in urban social management. This innovation and practice is a useful exploration of bringing urban social management onto a legal track, offers new experience for standardizing and balancing public power in administration generally, and yields three insights:

(1) The rule of law is the fundamental way to check and balance public power. The separation and balancing of powers is an effective means that human society has found to regulate the operation of public power; it is also a mode of operating power in which separation is the premise of balance, for without separation there can be no restriction of power. Through separation and checks, the rule of law ensures that decision-making, execution, and supervision restrain and coordinate with one another in a closed loop, guarantees the people's supervision of power and the standardized operation of public power, and effectively answers the question of how those who hold power should exercise it.

(2) The rule of law is the most reliable cage. It grips the key links and key positions where corruption most easily breeds. On the one hand, following the principle of checks and balances, power is reasonably decomposed both horizontally and vertically, so that chiefs and deputies, upper and lower levels, departments, and staff all exercise power according to law, authority is matched with responsibility and subject to supervision, and excessive concentration of power is prevented. On the other hand, responsibilities are clearly divided: holders of power must bear responsibility and must abide by law and ethics alike, with accountability mechanisms in place. At the same time, through the construction of the rule of law, a supervision mechanism with a reasonable structure, scientific configuration, strict procedures, and effective restraint has been established, weaving a reliable cage for power.

(3) The rule of law is the finest calling card of a famous city. Around the world, cities with international competitiveness and influence share one very important trait: the rule of law has become the cornerstone of the city's operation and the core pillar of its citizens' faith in the city's spirit. It may be said that a world-famous city is at once a city of culture, a city of ecology, a city of economic strength, and a city of the rule of law.

Key Construction Techniques for Synchronous Excavation of Adjacent Large Deep Foundation Pits (相邻大型深基坑同步开挖关键施工技术)

Kang Jun
Shanghai Construction No.7 (Group) Co., Ltd., Shanghai 200050, China

Abstract: Compared with the independent excavation of a single deep foundation pit, the synchronous excavation of adjacent deep foundation pits imposes a substantially different loading pattern on the pit enclosure structures, the surrounding soil, and nearby roads and pipelines: the deformations of the two pits superpose, which raises the requirements for protecting the surrounding environment. How to reduce the impact of synchronous excavation on the enclosures, the surrounding soil, and the surroundings, and to keep the deformation of the pits and their environment within the design limits, is the key control task of the excavation stage.

In the background project, the excavation area is large, the excavation is deep, and the two pits are separated only by a municipal road. Because of the tight schedule, the two pits could not be built in stages and had to be excavated simultaneously, which placed high demands on the protection of the surrounding roads and underground pipelines during construction. Through comparison and selection of design schemes, optimization of the construction scheme, and informatized monitoring of the entire excavation stage, the deformation of the pits and of the surrounding roads and pipelines under synchronous excavation was effectively controlled, providing a reference for similar projects.

Keywords: adjacent large deep foundation pits; synchronous excavation; deformation control

1 Introduction
In recent years, as cities have continued to expand, underground space has been developed ever more intensively. The high building density designed in urban areas readily produces dense clusters of foundation pits, so situations in which two or even more deep foundation pits must be excavated at the same time are bound to arise.

Multidisciplinary Design Optimization

Multidisciplinary Design Optimization (MDO) is the process of integrating several engineering disciplines to reach the best possible design. It demands a holistic approach that accounts for the interactions and trade-offs among design parameters drawn from structural, thermal, aerodynamic, and control-system analyses. MDO matters in modern engineering because it enables more efficient and cost-effective designs, and ultimately better products and systems.

One key challenge in MDO is balancing conflicting design requirements. In aircraft design, for example, engineers must weigh trade-offs among weight, aerodynamics, and structural integrity: optimizing one aspect may degrade another, so the right balance must satisfy all requirements at once. This requires close collaboration among engineers from different disciplines, as well as advanced modeling and simulation tools to evaluate the design space and identify the best solutions.

A second challenge is the complexity of the design space itself. With multiple interacting disciplines and a large number of design variables, the search for the optimum is extremely hard, and traditional design optimization methods often struggle with this complexity and return suboptimal solutions. MDO therefore relies on advanced optimization algorithms and techniques, such as genetic algorithms, neural networks, and multi-objective optimization, to explore the design space efficiently.

MDO also demands significant computational resources. Integrating multiple disciplines with advanced optimization techniques is computationally intensive, which can be a barrier for small engineering teams or organizations with limited access to large-scale computing infrastructure. With the advance of cloud computing and high-performance computing, however, these barriers are gradually being overcome, making MDO accessible to a wider range of teams.

Despite these challenges, the benefits of MDO are significant. By considering multiple disciplines simultaneously, MDO can yield designs that are more efficient, reliable, and cost-effective, and it can surface innovative design solutions that would not be apparent when each discipline is considered in isolation. As the supporting technology continues to advance, MDO is expected to play an increasingly important role in engineering design across a wide range of industries.
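To make the trade-off idea concrete, here is a toy Python sketch: two made-up stand-in "discipline" objectives (structural mass and drag as functions of one design variable) are combined in a weighted sum, and scanning the weight changes which design wins. Every function and number here is an illustrative assumption, not output of a real analysis.

```python
# Toy illustration of a multidisciplinary trade-off: scanning the weight
# placed on each objective traces out different "best" designs. The two
# objectives are hypothetical stand-ins for structural and aerodynamic
# discipline analyses.
def structural_mass(t):   # thicker skin -> heavier structure
    return 100 + 80 * t

def drag(t):              # thicker skin -> stiffer wing -> less drag, say
    return 50 / (0.5 + t)

designs = [i / 10 for i in range(1, 21)]          # candidate skin thicknesses
for alpha in (0.2, 0.5, 0.8):                     # priority on mass vs. drag
    best = min(designs, key=lambda t: alpha * structural_mass(t)
                                      + (1 - alpha) * drag(t))
    print(f"alpha={alpha}: best thickness {best}")
```

As alpha rises from 0.2 to 0.8, the preferred thickness drops from about 1.1 to 0.1: no single design is best for all weightings, which is exactly why MDO must manage trade-offs rather than optimize one discipline at a time.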

1_WE3Paper_OptimizingOnDieDecap

DesignCon 2014
Optimizing On Die Decap in a System at Early Stage of Design Cycle

Naresh Dhamija, LSI India R&D Pvt. Ltd. (Naresh.Dhamija@)
Pramod Parameswaran, LSI India R&D Pvt. Ltd. (Pramod.Parameswaran@)
Sarika Jain (Sarika.Jain@)
Makeshwar Kothandaraman (Makesh.Kothandaraman@)
Praveen Soora (Praveen.Soora@)

Abstract
In a parallel interface like NAND, which supports different combinations of speed and load, it has been very hard to determine the optimum on-chip decap (OCD) requirement for a given system to reduce the SSO-induced jitter. In this session, we determine the optimum value of OCD based on load capacitance, transmission line length, package loop inductance, and IO architecture. This covers the effects of most of the variables that engineers would want to see from time-domain simulation. The advantages of this approach are multifold. It does not require complete time-domain simulation, which can be very time consuming. By knowing the OCD requirement at an early stage of design, one can avoid unnecessary OCDs and save die area; adding OCDs that are not needed can actually affect the system adversely in both reads and writes. A further advantage, which can lead to cost savings, is that one can also predict what kind of package the design can support at a given load, frequency, and OCD value.

Author(s) Biography
Naresh Dhamija is a Staff Engineer in Signal Integrity at LSI Bangalore. In his present job, he is responsible for system SI/PI activities in both serial and parallel interfaces. His responsibilities at LSI include developing new methodologies to make simulations faster and better correlated to silicon. His 12 years of experience include RF system characterization and analog validation of serdes, DDR, and data converters. He received his Bachelors degree in 2001 in Electronics and Communication from C.R. State Govt. College of Engineering, Murthal, Haryana (India). He holds a U.S. patent in a DDR training method.

Pramod Parameswaran is an IO design/Signal Integrity Engineer at LSI Bangalore. He has around 14 years of industrial experience in various domains. He has worked on various high-speed, low-power, and high-voltage IO buffer designs, as well as memory circuit and analog circuit design. Recently he has been working on high-speed signal integrity analysis for parallel and serial interfaces. He holds around ten US patents in circuit design.

Sarika Jain is a Circuit Design Engineer II at LSI Corporation, a member of the signal integrity team for the last year, responsible for IO buffer and chip-level IBIS modeling and other signal integrity analysis work. She also has three years of experience in standard cell/IO buffer front-end modeling such as Verilog and test views. She completed the M.Tech degree in VLSI Design at the Indian Institute of Information Technology, Gwalior.

Makesh Kothandaraman manages the cells and IO development team at LSI Bangalore. His technical interests include IO design, signal integrity, and modeling. He has over 17 years of experience in this area. He received his Bachelors degree from the Indian Institute of Technology, Madras in 1990, then completed his Masters in Electrical Engineering from North Carolina State University at Raleigh before joining the AT&T Microelectronics group in Allentown, PA in 1996. This group subsequently became part of Lucent Technologies, then Agere Systems, and finally LSI Corp in 2007.
He has been a key architect of many IO circuits at LSI and holds several U.S. and international patents in the area of IO circuit design.

Praveen Soora is a senior engineering manager at LSI Corporation leading signal integrity teams developing high-frequency serdes, DDR, and flash interfaces. He has more than twelve years of experience in signal integrity, and worked at Apple and at Intel prior to joining LSI Corporation. He is passionate about incorporating new methodologies into product design flows, correlating simulation models with physical product performance, and driving chip/package/system co-design teams. He has managed teams with members spread across multiple countries, and excels at identifying talented engineers and developing them into key contributors and technical leaders. He completed the MSEE degree at West Virginia University, Morgantown.

1. Introduction
In a parallel interface like NAND or DDR, the non-zero power-ground loop inductance means that simultaneously switching aggressor IOs (SSO) cause power supply noise that distorts the timing of the signal coming out of the IO pad. A voltage droop slows the edges down, whereas a voltage bounce makes them faster. This produces peak-to-peak period jitter on a victim signal whenever the victim and aggressor patterns differ from each other, which is the most typical case.

OCDs (on-chip decaps) act as a local reservoir of charge for the IOs and hence provide an alternate current path while an IO charges and discharges. This reduces the sudden change of current through the inductive path of the package, and hence reduces L·di/dt and its effect. OCDs, though they help in reducing SSO jitter, require a significant amount of die area and hence need optimization. The more common practice for optimizing decap is to run full-blown SSO simulations with a sweep of OCD values, which happens at a very late stage of the design cycle. In many cases, where die area is not limited by OCDs, decap optimization is not considered at all.

In this paper, we present the factors that need to be considered during OCD optimization. We show that it is not only die area or silicon cost that drives optimization but also performance: in some cases, adding more OCDs degrades the signal integrity of the signals. We show test cases where, in the early design phase itself, it can be determined whether decap is required and, if so, what the right choice of decap should be. The rest of the paper is organized as follows. In section 2, we first bring out the two effects of PG loop inductance that distort the signal at the IO pad; we then introduce the concept of the transformed load at the IO pad, showing when and how the load affects the power noise and SSO jitter. In section 3, we see how adding decap affects the voltage droop and SSO jitter: adding decap shifts the system impedance resonance peak to a lower frequency range, which can cause the timing jitter to increase. We show two complete systems with two different packages; the effect of adding the same decap to the two packages is exactly opposite, and the decap value found optimal for one case is the worst choice for the other. In section 4, we conclude with the methodology of this paper as a step-by-step procedure to optimize OCD, along with takeaways, assumptions, and disclaimers.
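The mechanism the introduction describes is a first-order L·di/dt drop across the package power-ground loop. A back-of-envelope sketch of that estimate follows; the aggressor count, per-IO current, and edge rate are illustrative assumptions, and only the 0.21 nH loop inductance comes from the paper's later setup.

```python
# Toy estimate of SSO supply droop: V = L * di/dt across the package
# power-ground loop. Numbers are assumptions for illustration.
n_aggressors = 8          # simultaneously switching IOs (assumed)
i_per_io = 0.020          # peak switching current per IO, in A (assumed)
t_rise = 300e-12          # edge rate, in s (assumed)
l_pkg = 0.21e-9           # package loop inductance, in H (paper's setup)

di_dt = n_aggressors * i_per_io / t_rise   # total current slew (A/s)
droop = l_pkg * di_dt                      # first-order droop estimate (V)
print(f"di/dt = {di_dt:.3g} A/s, droop ~ {droop * 1e3:.0f} mV")
```

With these numbers the droop comes out near 112 mV, enough to visibly push out a rising edge, which is why the paper treats SSO noise as a timing problem rather than only a supply-quality problem.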
2. Effect of load and transmission line on power noise and SSO jitter
Before examining the effect of the load and transmission line on power noise and jitter, we first look at how power noise and SSO jitter are related. In section 2.1, we show that there are two ways in which power noise can cause jitter.

2.1 Power noise causing SSO jitter at the IO pad
Consider the setup shown in Fig. 1, with RTT = 50 ohms and Cload = 2 pF. L_pkg is 0.21 nH. The drive strength of the IOs is 33 ohms.

Figure 1: Setup used for SSO simulation with a lumped load at the IO pad

Fig. 2 is the simulated result of Fig. 1, comparing the ideal edge (pink) with the distorted edge (blue) of a victim signal due to the power noise produced by the simultaneously switching aggressors of the setup in Fig. 1. The two causes of timing distortion are:

Cause a) With all aggressors rising, a voltage droop (red) occurs due to L·di/dt. The droop pushes out the victim's rising edges.

Cause b) When Cload starts charging, the current initially increases; later the slope (di/dt) inverts and the voltage noise changes direction. This voltage bounce distorts the latter half of the signal's rise. Cause b) tends to make the edge faster and compensate for the effect of cause a). If the lumped load at the IO pad is small or the rise time is fast, it is cause a) that is primarily responsible for distorting the edge as a delay, as shown in Fig. 2. Cause b) acts on the latter portion of the rise, so it mainly affects slowly rising edges.

Figure 2: Two causes of jitter from an SSO event

2.2 Effect of transmission line and far-end load on power noise
Consider the setup shown in Fig. 3. The settings remain the same as in Fig. 1, but we have introduced a transmission line between the load and the IO pad. The characteristic impedance of the transmission line (Z0) is 50 ohms. The data rate of the input signal is 1067 Mbps, so 1 UI is ~937 ps.

Figure 3: Setup for measuring SSO noise and jitter in the presence of a transmission line

To see the effect of transmission line length, we swept the line's delay (TD) in terms of the data bit period (UI). As we also want to see the effect of the load, we changed the far-end capacitive load (Cload) and the termination RTT at each line length. The complete matrix of variables is shown in Table 1.

The results of Table 1 can be explained by observing the reflected waveform at the near end (IO pad). The reflection at the near end arrives after 2·TD [1],[2]. Whenever the reflected waveform produces a hump at the pad during a low-to-high transition, the voltage droop on vddio_chip is smaller; whenever the reflection produces a dip at the pad, the droop on vddio_chip during a low-to-high transition is larger. With Cload = 2 pF, TD < UI/2 has negligible effect on the vddio_chip droop, as the reflections die out and do not fall on the next edge. For the same case, if TD ~ UI/2 (pink curve in Fig. 4), the hump from the preceding high-to-low transition lands right on the edge, reducing the potential difference between vddio_chip and the pad and hence the current demanded from vddio_chip; this causes a significantly smaller droop. For TD ~ UI (fluorescent green curve in Fig. 4), the droop on vddio_chip while the aggressors transition low to high (the case always considered here) depends on how the aggressors' edge transitioned 2 UI earlier: if it was high to low, the pad sees a hump from the reflection, so the droop on vddio_chip is smaller, as shown in circle a) of Fig. 4.
If the transition 2 UI earlier was low to high, there is a dip at the pad at the current low-to-high transition. This causes more droop (circle b) in Fig. 4), as the demand for current increases with the larger potential difference between vddio_chip and the pad. So if TD ~ UI, or in general TD ~ (UI + N·UI/2) where N is an integer, the worst-case droop on vddio_chip is highest.

Figure 4: Effect of transmission line length on power noise in the presence of a light (2 pF) Cload

With Cload = 16 pF and RTT unchanged, i.e. equal to Z0 (50 ohms here), the trend of droop versus TD remains the same, with the difference that whereas with a 2 pF Cload the droop began to shrink at TD ~ UI/2, it now starts doing so much earlier, at about TD ~ UI/4, as shown in the figure. Similarly, the droop begins to grow again only at TD ~ 2UI/3. So in this case (which has ISI effects), the worst-case vddio_chip droop is least at TD = UI/2 and increases thereafter.

Now consider the case RTT = 120 > Z0 = 50. The near-end reflection changes somewhat: there is initially a negative reflection due to Cload and then a positive reflection because RTT (120 ohms) is higher than Z0 (50 ohms), but the overall trend remains the same as with RTT = 50 ohms, as seen in the table, since the steady-state reflection now corresponds to the 120 ohm RTT.

The key takeaway from this section is that for a small Cload, the droop does not change much for TD < UI/2, whereas for a large Cload (which limits bandwidth), the droop decreases for TD < UI/2 and increases for TD > UI/2.

2.3 Effect of power noise on SSO jitter at Cload
As seen in section 2.2, the droop is least at TD = UI/2 and largest at TD = 1 UI, so we expect less SSO jitter at UI/2 than at 1 UI. We found almost a 50% increase in jitter at Cload = 2 pF when the length TD is increased from UI/2 to 1 UI: Fig. 5 shows 40 ps at UI/2 becoming 59 ps at TD = 1 UI.

Figure 5: Effect of transmission line length on SSO jitter

To see the effect of Cload on SSO jitter, we consider the test case where the droop differs most between Cload = 2 pF and Cload = 16 pF: TD = 2UI/3, with the jitter measured at Cload. Table 2 compares the two cases. Although the droop with the 16 pF load is high, the SSO jitter at Cload is almost nothing. This is because the effective rise time at the load is given by equation (1) [3]; the heavier the load, the less the rise-time degradation at the near end matters at the far end.

(1)

The corresponding effect on jitter is shown through the eye-diagram comparison at the far end between an ideal supply (blue) and an SSO-affected supply (red): Fig. 6 shows the eye diagram of the victim signal in a lightly loaded (Cload = 2 pF) system, and Fig. 7 shows the eye diagram in a heavily loaded (Cload = 16 pF) system.

Figure 6: Data eye comparison of ideal supply (blue) vs. SSO-affected supply (red) at Cload = 2 pF
Figure 7: Data eye comparison of ideal supply (blue) vs. SSO-affected supply (red) at Cload = 16 pF

Note that the droop at Cload also depends on L_pkg, since the input rise time at the near end in eq. (1) depends directly on the droop, which is directly proportional to L_pkg. So if L_pkg is increased, the jitter increases even for a heavier load.

Key takeaways from section 2:
- Supply noise is a function of transmission line length and load configuration, as revealed in Table 1.
- The jitter contributed by power noise at the far end of a given package depends on Cload: the bigger the Cload, the less the impact of droop on SSO jitter, as revealed in Table 2.
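The body of equation (1) did not survive extraction in this copy. A common rule of thumb from high-speed design references such as [3] is to combine rise times root-sum-of-squares, with the load-limited 10-90% rise time approximated as 2.2·Z0·Cload; the sketch below uses that assumption, plus made-up near-end edge rates, to show why a heavy Cload swallows near-end degradation. Treat it as an illustration of the trend, not the paper's actual formula.

```python
import math

# Why a heavy Cload attenuates near-end edge degradation at the far end.
# Assumes the usual root-sum-of-squares rise-time combination and the
# 10-90% RC rise-time rule t = 2.2*R*C; both are textbook rules of thumb,
# not equation (1) from the paper.
def far_end_rise(t_near, z0, cload):
    t_load = 2.2 * z0 * cload           # rise-time limit set by the load
    return math.sqrt(t_near**2 + t_load**2)

t_clean, t_droopy = 250e-12, 310e-12    # assumed near-end edges without / with SSO droop
for cl in (2e-12, 16e-12):
    delta = far_end_rise(t_droopy, 50, cl) - far_end_rise(t_clean, 50, cl)
    print(f"Cload = {cl * 1e12:.0f} pF: far-end edge shift ~ {delta * 1e12:.0f} ps")
```

Under these assumptions a 60 ps near-end degradation appears as roughly 47 ps at a 2 pF load but only about 9 ps at 16 pF, mirroring the Table 2 observation that the heavily loaded case shows almost no SSO jitter despite the larger droop.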
3. Effect of decap on power noise and jitter
3.1 Effect of decap on power noise
Power noise is a function of the current through the package and the self-impedance (Z11) of the system. Fig. 8 shows the result of AC analysis on the same package used in the setup of section 2, i.e. L_pkg = 0.21 nH. As in [4], adding decaps lowers the impedance at high frequency but brings the resonance peak down to a lower frequency, as shown in Fig. 8. The decap values used in this AC analysis are 25 pF/IO, 50 pF/IO, 75 pF/IO, and 100 pF/IO.

Figure 8: System Z11 seen from the die side with different values of OCD

While the system Z11 is one of the parameters with a direct impact on voltage noise, it is the product of the current FFT and Z11(f) that gives the power noise [5].

3.2 Effect of decap on timing
If the amplitude of the power noise is such that it cannot cause a glitch on a static or asynchronous signal but primarily degrades timing, then we need to see how timing is impacted by different noise profiles. Consider the setup shown in Fig. 9. The data rate of the input signal is 1067 Mbps, as in section 2. We use three IOs here: one as the victim signal (VCT), carrying a PRBS7 pattern, and the other two as a strobe pair (DQSP and DQSN). The applied V_ac is 100 mV. The frequency of the AC source (w) is swept from 11 MHz to 5111 MHz and the jitter is measured at a Cload of 2 pF.

Figure 9: Setup to see the effect of voltage noise of different frequencies on a lumped Cload

In the plot shown in Fig. 10, the X-axis is the noise frequency (w) and the Y-axis is the pk-pk jitter measured at Cload.

Figure 10: Pk-pk jitter vs. input sinusoidal noise of 100 mV amplitude at different frequencies

We can see that voltage noise affects jitter more at lower frequencies. This is the IO's response curve to the injected input noise: because of its inherent stray capacitance, the IO itself acts as a decap, so high-frequency noise is filtered out by the IO itself. The slope can differ between IOs depending on the IO design, but taking this as typical IO behavior, we can say that low-frequency noise causes more jitter than high-frequency noise of the same amplitude. So one needs to be careful while adding decap: if the noise amplitude with decap has not reduced significantly, the decap can actually cause more pk-pk jitter than the no-decap case.

If we only care about the timing uncertainty of source-synchronous signals like data and strobe, we need to look at the relative jitter between DQ and DQSP/DQSN. In the plot of Fig. 10, it is noticed that although the jitter in the low-to-mid frequency range is high, all three signals (DQ, DQSP, and DQSN) track each other, so the relative jitter between data and strobe would be almost zero. But since in most cases the strobe is 90 degrees phase shifted with respect to the data, the mid-range noise frequencies close to the data rate see the highest relative jitter, as shown in the plot of Fig. 11. Since the data rate considered here is 1067 Mbps, the relative jitter increases up to a noise frequency of 1067 MHz; beyond that, as the standalone jitter also decreases (Fig. 10), the relative jitter starts decreasing faster, as shown in Fig. 11.

Figure 11: Relative jitter between DQ and differential DQSP/DQSN

To see whether the load has any effect on this curve, we swept Cload from 5 pF to 20 pF and found only a constant offset of jitter between the different load conditions.
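Section 3.1 states that the power noise is the product of the current FFT and Z11(f) [5]. A minimal numpy sketch of that frequency-domain estimate follows; the current waveform and the single R-L-C network standing in for the real package/decap model are simplified placeholders, not the paper's models.

```python
import numpy as np

# Sketch of the frequency-domain noise estimate of section 3.1 (cf. [5]):
# supply noise = IO current spectrum * system self-impedance Z11(f),
# taken back to the time domain. All component values are placeholders.
fs, n = 100e9, 4096
t = np.arange(n) / fs
i_t = 0.1 * (np.sin(2 * np.pi * 533e6 * t) > 0)     # crude square-wave draw (A)
f = np.fft.rfftfreq(n, 1 / fs)

# placeholder PDN: R in series with L_pkg, with on-die decap C at the die
r, l_pkg, c_ocd = 0.05, 0.21e-9, 400e-12
w = 2 * np.pi * np.maximum(f, 1.0)                   # avoid divide-by-zero at DC
z_series = r + 1j * w * l_pkg
z_cap = 1 / (1j * w * c_ocd)
z11 = z_series * z_cap / (z_series + z_cap)          # impedance seen from the die

v_noise = np.fft.irfft(np.fft.rfft(i_t) * z11, n)    # time-domain supply noise
print(f"peak-to-peak noise ~ {(v_noise.max() - v_noise.min()) * 1e3:.1f} mV")
```

Re-running the sketch with a larger c_ocd lowers the high-frequency impedance but pulls the L-C resonance down in frequency, which is exactly the mechanism Fig. 8 and the following paragraphs blame for decap making mid-frequency jitter worse.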
The plot in Fig. 12 shows that the change of jitter with respect to the change in voltage noise is almost linear and is independent of Cload.

Figure 12: Effect of Cload on relative jitter between DQ and differential DQSP/DQSN

Now we consider the same setup as Fig. 3 with Cload = 16 pF and measure the relative jitter, in terms of eye opening, for different values of decap. The results are shown in Fig. 13: the left Y-axis is the eye opening (purple bars), the right Y-axis is the voltage droop (yellow trend line), and the red dotted horizontal line is the spec for a passing eye. The results in Fig. 13 are consistent with those of Fig. 8 and Fig. 11. From Fig. 8, with a decap value of 25 pF/IO the resonance peak comes at 682 MHz, and with 50 pF/IO the resonance comes down to 479 MHz. Per Fig. 11, the resonance caused by the 682 MHz peak has the higher impact on jitter, which is why 25 pF/IO shows the worst results. Adding decap beyond 50 pF/IO also reduces the amplitude of the resonance peaks, causing both droop and timing jitter to improve.

Figure 13: Effect of decap on relative timing uncertainty between DQ and DQSP/DQSN

This is a good example of resonance peaks at mid frequencies worsening the relative eye opening. Given this, one can choose either not to add decap, or to add it such that the resonance does not hit the mid-frequency region of the graph shown in Fig. 11.

As a second test case, in the same setup of Fig. 3, we replace the L = 0.21 nH package model with a package of higher inductance (L = 0.47 nH) and again examine the effect of decaps.

Figure 14: Effect of decap on relative timing uncertainty with a higher value of inductance

With this package model, the results in Fig. 14 show that although the voltage droop keeps reducing as decap is added, the eye opening at Cload improves only from 0 decap to 25 pF/IO. Beyond 25 pF/IO the eye degrades up to 75 pF/IO; as the decap reaches 100 pF/IO some improvement begins, but it still does not reach the eye opening observed at 25 pF/IO. In this case a low-frequency resonance is degrading the eye. This is a classic example of the high-Cload effect, where an improvement in voltage droop does not improve the far-end eye.

As a third example, we examine the effect of both packages (L = 0.21 nH and L = 0.47 nH) while driving a lighter load: Cload = 16 pF is replaced by Cload = 6 pF. The results are shown in Fig. 15. Both packages now show improvement in droop as well as in eye. This is consistent with the fact that for lighter loads, whatever happens to the near-end waveform is reflected at the far-end Cload: as the droop reduces with added decap, the near-end waveform improves, and so does the far-end waveform at Cload.

Figure 15: Effect of package decaps on a lighter load (6 pF)

Key takeaways from section 3:
- Low-frequency noise has a higher impact on jitter, so decap should be added such that either the resonant peak of the system Z11(w) lies in the higher frequency band, or the resonant peak in the low-to-mid frequency range has a significantly low value. For better accuracy, the IFFT approach of [5] should be followed.
- In a heavily loaded case, the improvement in jitter should not be judged by the improvement in power noise.
- The effect of resonance on a higher Cload is greater.

4. Conclusions and future work
In this paper we have shown that power noise can be reduced by altering the transmission line length.
For lightly loaded systems, a transmission line delay of UI/2 gives the least droop and the least jitter, so systems with these characteristics do not need a large amount of decap. The voltage droop's slowing of the edges has very little impact on systems with a high capacitive load. PDN resonances should be studied before deciding on the on-die decap. Generally, a PDN study is done to meet the voltage droop requirement; we have shown in this paper how the PDN curve should be analyzed for optimizing on-die decap from a timing perspective, which is the main role of on-die decap. The PDN curve should be read together with the noise-to-jitter / relative-uncertainty curve. We have shown test cases where adding unnecessarily large decaps caused problems in systems with a high capacitive load. As future work, the effect of cause b) mentioned in section 2 should be studied in detail so that the optimum decap value can be predicted more accurately.

5. Acknowledgements
The authors would like to express their appreciation to Sreenivasulu Ramavath from LSI India for support of this project.

References:
[1] William H. Hayt, "Engineering Electromagnetics", Sixth Edition, McGraw-Hill.
[2] James R. Andrews, "Time Domain Reflectometry (TDR) and Time Domain Transmission (TDT) Measurement Fundamentals", Application Note AN-15, November 2004.
[3] Johnson and Graham, "High-Speed Digital Design: A Handbook of Black Magic", Prentice-Hall, 1993.
[4] Istvan Novak, "Power Distribution Network Design Methodologies", Professional Education International, Inc.
[5] Iliya Zamek, "Modeling FPGA Current Waveform and Spectrum and PDN Noise Estimation", DesignCon 2008.

Translations of Selected Words and Sentences from Mechanical Engineering English (机械工程专业英语的某些单词和句子翻译)

English to Chinese:
1. tolerance 公差
2. computer-aided manufacturing 计算机辅助制造
3. numerically controlled machines 数控机床
4. necking 颈缩
5. turning, drilling and boring operations 车削、钻削和镗削加工
6. formability and machinability 成形性与可加工性
7. assembly lines 装配线
8. dimensional accuracy 尺寸精度
9. cross-sectional area 横截面积
10. percentage elongation 伸长率
11. structural strength 结构强度
12. stress-strain curve 应力应变曲线
13. quenching and internal stresses 淬火和内应力
14. earthmoving and construction equipment 土建设备
15. straightening operation 矫正操作
16. cracking and distortion 断裂和扭曲变形
17. light service at fractional horsepower 小马力轻载
18. screw pump 螺杆泵
19. steel sheet and rolled-steel shapes 钢板和滚压成型钢
20. sensing devices 传感器
21. digital or pulse motor 数字或脉冲马达
22. drilling 钻削; boring 镗削; reaming 铰削; gear-cutting operations 齿轮加工

Chinese to English:
1. 切削刀具 cutting tools
2. 紧固件(如螺母) fasteners such as nuts
3. 钢和铸铁 steels and cast irons
4. 马氏体和奥氏体 martensite and austenite
5. 机械特性 mechanical properties
6. 计算机辅助制造 computer-aided manufacturing
7. 数控系统 numerically controlled systems
8. 大批量生产技术 mass production technology
9. 控制单元 control units
10. 靠模附件 profiling attachment
11. 弹性模量和伸长率 elastic modulus and percentage elongation
12. 规模经济 economy of scale
13. 闭环系统 closed-loop system
14. 有色金属 non-ferrous metals
15. 液压系统 hydraulic system
16. 弹性和屈服极限 elastic and yield limit
17. 低碳钢和合金钢 low-carbon steel and alloy steel
18. 龙门刨工作台 planer table

Sentence:
1. Low carbon steels do not become hard when subjected to such a heat treatment, because of the small amount of carbon contained. (当经历这种热处理时,低碳钢不会变硬,因为含碳量少。)

Molecular_Docking

What is Docking?
Docking attempts to find the “best” matching between two molecules
… a more serious definition…
Given two biological molecules, determine:
Lenhoff technique
Computes a “complementary” surface for the receptor instead of the Connolly surface, i.e. computes possible positions for the atom centers of the ligand
DOCK: Example
- HIV-1 protease is the target receptor
- Aspartyl groups are its active site
DOCK works in 5 steps:
Step 1: Start with crystal coordinates of the target receptor
Step 2: Generate the molecular surface for the receptor
Step 3: Generate spheres to fill the active site of the receptor: the spheres become potential locations for ligand atoms
Step 4: Matching: sphere centers are then matched to the ligand atoms, to determine possible orientations for the ligand
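The matching in Step 4 rests on distance compatibility: a pair of sphere centers can host a pair of ligand atoms only if the two internal distances agree. Below is a toy Python sketch of just that test, with made-up coordinates and a hypothetical helper name; real DOCK goes further, assembling larger sets of mutually compatible pairs and solving for the rigid transform that places the ligand.

```python
import numpy as np
from itertools import combinations

def compatible_pairs(sphere_centers, ligand_atoms, tol=0.5):
    """List pairings ((i, j), (k, l)) where the sphere i-j distance matches
    the ligand atom k-l distance within tol angstroms; this distance
    compatibility test is the core idea behind DOCK-style matching."""
    matches = []
    for i, j in combinations(range(len(sphere_centers)), 2):
        d_s = np.linalg.norm(sphere_centers[i] - sphere_centers[j])
        for k, l in combinations(range(len(ligand_atoms)), 2):
            d_a = np.linalg.norm(ligand_atoms[k] - ligand_atoms[l])
            if abs(d_s - d_a) < tol:
                matches.append(((i, j), (k, l)))
    return matches

# toy data: 4 sphere centers filling a pocket, 3 ligand atoms
spheres = np.array([[0.0, 0.0, 0], [1.5, 0.0, 0], [0.0, 2.0, 0], [1.5, 2.0, 0]])
ligand = np.array([[0.0, 0.0, 0], [1.4, 0.0, 0], [0.0, 2.1, 0]])
print(compatible_pairs(spheres, ligand))
```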

Reliability Engineering and System Safety 91 (2006) 992–1007

Multi-objective optimization using genetic algorithms: A tutorial
Abdullah Konak (a), David W. Coit (b), Alice E. Smith (c)
(a) Information Sciences and Technology, Penn State Berks, USA; (b) Department of Industrial and Systems Engineering, Rutgers University; (c) Department of Industrial and Systems Engineering, Auburn University
Available online 9 January 2006

Abstract
Multi-objective formulations are realistic models for many complex engineering optimization problems. In many real-life problems, objectives under consideration conflict with each other, and optimizing a particular solution with respect to a single objective can result in unacceptable results with respect to the other objectives. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. In this paper, an overview and tutorial is presented describing genetic algorithms (GA) developed specifically for problems with multiple objectives. They differ primarily from traditional GA by using specialized fitness functions and introducing methods to promote solution diversity.

1. Introduction
The objective of this paper is to present an overview and tutorial of multiple-objective optimization methods using genetic algorithms (GA). For multiple-objective problems, the objectives are generally conflicting, preventing simultaneous optimization of each objective. Many, or even most, real engineering problems actually do have multiple objectives, i.e., minimize cost, maximize performance, maximize reliability, etc. These are difficult but realistic problems. GA are a popular meta-heuristic that is particularly well suited for this class of problems. Traditional GA are customized to accommodate multi-objective problems by using specialized fitness functions and introducing methods to promote solution diversity.

There are two general approaches to multiple-objective optimization. One is to combine the individual objective functions into a single composite function, or to move all but one objective to the constraint set. In the former case, determination of a single objective is possible with methods such as utility theory, the weighted sum method, etc., but the problem lies in the proper selection of the weights or utility functions to characterize the decision-maker's preferences. In practice, it can be very difficult to precisely and accurately select these weights, even for someone familiar with the problem. Compounding this drawback is that scaling amongst objectives is needed, and small perturbations in the weights can sometimes lead to quite different solutions. In the latter case, the problem is that to move objectives to the constraint set, a constraining value must be established for each of these former objectives, which can be rather arbitrary. In both cases, an optimization method would return a single solution rather than a set of solutions that can be examined for trade-offs. For this reason, decision-makers often prefer a set of good solutions considering the multiple objectives.

The second general approach is to determine an entire Pareto optimal solution set or a representative subset. A Pareto optimal set is a set of solutions that are nondominated with respect to each other. While moving from one Pareto solution to another, there is always a certain amount of sacrifice in one objective(s) to achieve a certain amount of gain in the other(s). Pareto optimal solution sets are often preferred to single solutions because they can be practical when considering real-life problems, since the final solution of the decision-maker is always a trade-off. Pareto optimal sets can be of varied sizes, but the size of the Pareto set usually increases with the number of objectives.

2. Multi-objective optimization formulation
Consider a decision-maker who wishes to optimize K objectives such that the objectives are non-commensurable and the decision-maker has no clear preference of the objectives relative to each other. Without loss of generality, all objectives are of the minimization type (a maximization-type objective can be converted to minimization by multiplying by negative one). A minimization multi-objective decision problem with K objectives is defined as follows: given an n-dimensional decision variable vector x = {x_1, ..., x_n} in the solution space X, find a vector x* that minimizes a given set of K objective functions z(x*) = {z_1(x*), ..., z_K(x*)}. The solution space X is generally restricted by a series of constraints, such as g_j(x*) = b_j for j = 1, ..., m, and by bounds on the decision variables.

In many real-life problems, objectives under consideration conflict with each other. Hence, optimizing x with respect to a single objective often results in unacceptable results with respect to the other objectives. Therefore, a perfect multi-objective solution that simultaneously optimizes each objective function is almost impossible. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution.

If all objective functions are for minimization, a feasible solution x is said to dominate another feasible solution y (x ≻ y) if and only if z_i(x) ≤ z_i(y) for i = 1, ..., K and z_j(x) < z_j(y) for at least one objective function j. A solution is said to be Pareto optimal if it is not dominated by any other solution in the solution space. A Pareto optimal solution cannot be improved with respect to any objective without worsening at least one other objective. The set of all feasible non-dominated solutions in X is referred to as the Pareto optimal set, and for a given Pareto optimal set, the corresponding objective function values in the objective space are called the Pareto front.
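The dominance definition above translates directly into code. Here is a minimal Python sketch of the dominance test and a brute-force non-dominated filter over toy objective vectors (both objectives minimized); function names and data are illustrative, not from the paper.

```python
# Minimal sketch of the dominance definition above (minimization):
# x dominates y iff x is no worse in every objective and strictly
# better in at least one.
def dominates(zx, zy):
    return (all(a <= b for a, b in zip(zx, zy))
            and any(a < b for a, b in zip(zx, zy)))

def pareto_set(solutions):
    """Keep the non-dominated solutions; their objective vectors form
    the (best-known) Pareto front."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

pts = [(1, 5), (2, 2), (3, 1), (4, 4)]   # toy objective vectors
print(pareto_set(pts))                   # -> [(1, 5), (2, 2), (3, 1)]
```

Note that (4, 4) drops out because (2, 2) is at least as good in both objectives and strictly better in each, while the three survivors each trade one objective against the other.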
For many problems, the number of Pareto optimal solutions is enormous (perhaps infinite). The ultimate goal of a multi-objective optimization algorithm is to identify solutions in the Pareto optimal set. However, identifying the entire Pareto optimal set is, for many multi-objective problems, practically impossible due to its size. In addition, for many problems, especially combinatorial optimization problems, proof of solution optimality is computationally infeasible. Therefore, a practical approach to multi-objective optimization is to investigate a set of solutions (the best-known Pareto set) that represents the Pareto optimal set as well as possible. With these concerns in mind, a multi-objective optimization approach should achieve the following three conflicting goals [1]:

1. The best-known Pareto front should be as close as possible to the true Pareto front. Ideally, the best-known Pareto set should be a subset of the Pareto optimal set.
2. Solutions in the best-known Pareto set should be uniformly distributed and diverse over the Pareto front in order to provide the decision-maker a true picture of trade-offs.
3. The best-known Pareto front should capture the whole spectrum of the Pareto front. This requires investigating solutions at the extreme ends of the objective function space.

For a given computational time limit, the first goal is best served by focusing (intensifying) the search on a particular region of the Pareto front. On the contrary, the second goal demands that the search effort be uniformly distributed over the Pareto front. The third goal aims at extending the Pareto front at both ends, exploring new extreme solutions. This paper presents common approaches used in multi-objective GA to attain these three conflicting goals while solving a multi-objective optimization problem.

3. Genetic algorithms
The concept of GA was developed by Holland and his colleagues in the 1960s and 1970s [2]. GA are inspired by the evolutionist theory explaining the origin of species. In nature, weak and unfit species within their environment are faced with extinction by natural selection. The strong ones have a greater opportunity to pass their genes to future generations via reproduction. In the long run, species carrying the correct combination in their genes become dominant in their population. Sometimes, during the slow process of evolution, random changes may occur in genes. If these changes provide additional advantages in the challenge for survival, new species evolve from the old ones. Unsuccessful changes are eliminated by natural selection.

In GA terminology, a solution vector x ∈ X is called an individual or a chromosome. Chromosomes are made of discrete units called genes. Each gene controls one or more features of the chromosome. In the original implementation of GA by Holland, genes are assumed to be binary digits. In later implementations, more varied gene types have been introduced. Normally, a chromosome corresponds to a unique solution x in the solution space. This requires a mapping mechanism between the solution space and the chromosomes. This mapping is called an encoding.
In fact, GA work on the encoding of a problem, not on the problem itself. GA operate with a collection of chromosomes, called a population. The population is normally randomly initialized. As the search evolves, the population includes fitter and fitter solutions, and eventually it converges, meaning that it becomes dominated by a single solution. Holland also presented a proof of convergence (the schema theorem) to the global optimum where chromosomes are binary vectors.

GA use two operators to generate new solutions from existing ones: crossover and mutation. The crossover operator is the most important operator of GA. In crossover, generally two chromosomes, called parents, are combined together to form new chromosomes, called offspring. The parents are selected among existing chromosomes in the population with preference towards fitness, so that the offspring are expected to inherit the good genes which make the parents fitter. By iteratively applying the crossover operator, genes of good chromosomes are expected to appear more frequently in the population, eventually leading to convergence to an overall good solution.

The mutation operator introduces random changes into the characteristics of chromosomes. Mutation is generally applied at the gene level. In typical GA implementations, the mutation rate (probability of changing the properties of a gene) is very small and depends on the length of the chromosome. Therefore, the new chromosome produced by mutation will not be very different from the original one. Mutation plays a critical role in GA. As discussed earlier, crossover leads the population to converge by making the chromosomes in the population alike. Mutation reintroduces genetic diversity back into the population and assists the search in escaping from local optima.

Reproduction involves selection of chromosomes for the next generation. In the most general case, the fitness of an individual determines the probability of its survival for the next generation. There are different selection procedures in GA depending on how the fitness values are used.
Proportional selection, ranking, and tournament selection are the most popular selection procedures. The procedure of a generic GA [3] is given as follows:

Step 1: Set t = 1. Randomly generate N solutions to form the first population, P_1. Evaluate the fitness of the solutions in P_1.
Step 2: Crossover: Generate an offspring population Q_t as follows:
  2.1. Choose two solutions x and y from P_t based on the fitness values.
  2.2. Using a crossover operator, generate offspring and add them to Q_t.
Step 3: Mutation: Mutate each solution x ∈ Q_t with a predefined mutation rate.
Step 4: Fitness assignment: Evaluate and assign a fitness value to each solution x ∈ Q_t based on its objective function value and infeasibility.
Step 5: Selection: Select N solutions from Q_t based on their fitness and copy them to P_{t+1}.
Step 6: If the stopping criterion is satisfied, terminate the search and return the current population; else, set t = t + 1 and go to Step 2.
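A minimal runnable Python sketch of Steps 1-6 follows, using a bitstring encoding and the onemax toy fitness (maximize the number of 1s). Fitness-weighted random choice stands in for the Step 2.1 selection, and simple truncation implements Step 5; all parameters are illustrative choices, not values from [3].

```python
import random

# A minimal single-objective GA following Steps 1-6 above; encoding,
# operators, and parameters are illustrative choices.
def ga(fitness, n_bits=20, pop_n=40, p_mut=0.02, gens=100):
    # Step 1: random initial population of bitstrings
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_n)]
    for _ in range(gens):
        offspring = []
        while len(offspring) < pop_n:                       # Step 2: crossover
            x, y = random.choices(pop, weights=[fitness(c) for c in pop], k=2)
            cut = random.randrange(1, n_bits)               # one-point crossover
            offspring += [x[:cut] + y[cut:], y[:cut] + x[cut:]]
        for c in offspring:                                  # Step 3: mutation
            for i in range(n_bits):
                if random.random() < p_mut:
                    c[i] ^= 1
        # Steps 4-5: evaluate and keep the N fittest for the next generation
        pop = sorted(offspring, key=fitness, reverse=True)[:pop_n]
    return max(pop, key=fitness)                             # Step 6: terminate

best = ga(fitness=sum)   # onemax toy problem
print(sum(best), best)
```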
4. Multi-objective GA
Being a population-based approach, GA are well suited to solve multi-objective optimization problems. A generic single-objective GA can be modified to find a set of multiple non-dominated solutions in a single run. The ability of GA to simultaneously search different regions of a solution space makes it possible to find a diverse set of solutions for difficult problems with non-convex, discontinuous, and multi-modal solution spaces. The crossover operator of GA may exploit structures of good solutions with respect to different objectives to create new non-dominated solutions in unexplored parts of the Pareto front. In addition, most multi-objective GA do not require the user to prioritize, scale, or weigh objectives. Therefore, GA have been the most popular heuristic approach to multi-objective design and optimization problems. Jones et al. [4] reported that 90% of the approaches to multi-objective optimization aimed to approximate the true Pareto front for the underlying problem. A majority of these used a meta-heuristic technique, and 70% of all meta-heuristic approaches were based on evolutionary approaches.

The first multi-objective GA, called vector evaluated GA (or VEGA), was proposed by Schaffer [5]. Afterwards, several multi-objective evolutionary algorithms were developed, including Multi-objective Genetic Algorithm (MOGA) [6], Niched Pareto Genetic Algorithm (NPGA) [7], Weight-based Genetic Algorithm (WBGA) [8], Random Weighted Genetic Algorithm (RWGA) [9], Nondominated Sorting Genetic Algorithm (NSGA) [10], Strength Pareto Evolutionary Algorithm (SPEA) [11], improved SPEA (SPEA2) [12], Pareto-Archived Evolution Strategy (PAES) [13], Pareto Envelope-based Selection Algorithm (PESA) [14], Region-based Selection in Evolutionary Multiobjective Optimization (PESA-II) [15], Fast Non-dominated Sorting Genetic Algorithm (NSGA-II) [16], Multi-objective Evolutionary Algorithm (MEA) [17], Micro-GA [18], Rank-Density Based Genetic Algorithm (RDGA) [19], and Dynamic Multi-objective Evolutionary Algorithm (DMOEA) [20]. Note that although there are many variations of multi-objective GA in the literature, these cited GA are well-known and credible algorithms that have been used in many applications, and their performances were tested in several comparative studies.

Several survey papers [1,11,21–27] have been published on evolutionary multi-objective optimization. Coello lists more than 2000 references on his website [28]. Generally, multi-objective GA differ based on their fitness assignment procedure, elitism, or diversification approaches. In Table 1, highlights of the well-known multi-objective GA, with their advantages and disadvantages, are given. Most survey papers on multi-objective evolutionary approaches introduce and compare different algorithms. This paper takes a different course and focuses on important issues in designing a multi-objective GA, describing common techniques used in multi-objective GA to attain the three goals in multi-objective optimization. This approach is also taken in the survey paper by Zitzler et al. [1]. However, the discussion in this paper is aimed at introducing the components of multi-objective GA to researchers and practitioners without a background in multi-objective GA. It is also important to note that although several state-of-the-art algorithms exist, as cited above, many researchers who applied multi-objective GA to their problems have preferred to design their own customized algorithms by adapting strategies from various multi-objective GA. This observation is another motivation for introducing the components of multi-objective GA rather than focusing on several algorithms. However, pseudo-code for some of the well-known multi-objective GA is also provided in order to demonstrate how these procedures are incorporated within a multi-objective GA.

Table 1. A list of well-known multi-objective GA.

VEGA [5]. Fitness assignment: each subpopulation is evaluated with respect to a different objective. Diversity mechanism: none. Elitism: no. External population: no. Advantages: first MOGA; straightforward implementation. Disadvantages: tends to converge to the extreme of each objective.

MOGA [6]. Fitness assignment: Pareto ranking. Diversity mechanism: fitness sharing by niching. Elitism: no. External population: no. Advantages: simple extension of single-objective GA. Disadvantages: usually slow convergence; problems related to the niche size parameter.

WBGA [8]. Fitness assignment: weighted average of normalized objectives. Diversity mechanism: niching. Elitism: no. External population: no. Advantages: simple extension of single-objective GA. Disadvantages: difficulties in nonconvex objective function spaces; predefined weights.

NPGA [7]. Fitness assignment: none; tournament selection. Diversity mechanism: niche count as tie-breaker in tournament selection. Elitism: no. External population: no. Advantages: very simple selection process with tournament selection. Disadvantages: problems related to the niche size parameter; extra parameter for tournament selection.

RWGA [9]. Fitness assignment: weighted average of normalized objectives with randomly assigned weights. Elitism: yes. External population: yes. Advantages: efficient and easy to implement. Disadvantages: difficulties in nonconvex objective function spaces.

PESA [14]. Fitness assignment: none. Diversity mechanism: cell-based density. Elitism: pure elitist. External population: yes. Advantages: easy to implement; computationally efficient. Disadvantages: performance depends on cell sizes; prior information needed about the objective space.

PAES [29]. Fitness assignment: Pareto dominance is used to replace a parent if the offspring dominates. Diversity mechanism: cell-based density as tie-breaker between offspring and parent. Elitism: yes. External population: yes. Advantages: random mutation hill-climbing strategy; easy to implement; computationally efficient. Disadvantages: not a population-based approach; performance depends on cell sizes.

NSGA [10]. Fitness assignment: ranking based on non-domination sorting. Diversity mechanism: fitness sharing by niching. Elitism: no. External population: no. Advantages: fast convergence. Disadvantages: problems related to the niche size parameter.

NSGA-II [30]. Fitness assignment: ranking based on non-domination sorting. Diversity mechanism: crowding distance. Elitism: yes. External population: no. Advantages: single parameter (N); well tested; efficient. Disadvantages: crowding distance works in objective space only.

SPEA [11]. Fitness assignment: ranking based on the external archive of non-dominated solutions. Diversity mechanism: clustering to truncate the external population. Elitism: yes. External population: yes. Advantages: well tested; no parameter for clustering. Disadvantages: complex clustering algorithm.

SPEA-2 [12]. Fitness assignment: strength of dominators. Diversity mechanism: density based on the k-th nearest neighbor. Elitism: yes. External population: yes. Advantages: improved SPEA; makes sure extreme points are preserved. Disadvantages: computationally expensive fitness and density calculation.

RDGA [19]. Fitness assignment: the problem is reduced to a bi-objective problem with solution rank and density as objectives. Diversity mechanism: forbidden-region cell-based density. Elitism: yes. External population: yes. Advantages: dynamic cell update; robust with respect to the number of objectives. Disadvantages: more difficult to implement than others.

DMOEA [20]. Fitness assignment: cell-based ranking. Diversity mechanism: adaptive cell-based density. Elitism: yes (implicitly). External population: no. Advantages: includes efficient techniques to update cell densities; adaptive approaches to set GA parameters. Disadvantages: more difficult to implement than others.
5. Design issues and components of multi-objective GA

5.1. Fitness functions

5.1.1. Weighted sum approaches

The classical approach to solve a multi-objective optimization problem is to assign a weight w_i to each normalized objective function z'_i(x) so that the problem is converted to a single-objective problem with a scalar objective function as follows:

min z = w_1 z'_1(x) + w_2 z'_2(x) + ... + w_k z'_k(x),   (1)

where z'_i(x) is the normalized objective function z_i(x) and Σ w_i = 1. This is called the a priori approach since the user is expected to provide the weights. Solving a problem with the objective function (1) for a given weight vector w = {w_1, w_2, ..., w_k} yields a single solution, and if multiple solutions are desired, the problem must be solved multiple times with different weight combinations. The main difficulty with this approach is selecting a weight vector for each run. To automate this process, Hajela and Lin [8] proposed the WBGA for multi-objective optimization (WBGA-MO); in the WBGA-MO, each solution x_i in the population uses a different weight vector w_i = {w_1, w_2, ..., w_k} in the calculation of the summed objective function (1). The weight vector w_i is embedded within the chromosome of solution x_i. Therefore, multiple solutions can be simultaneously searched in a single run. In addition, weight vectors can be adjusted to promote diversity of the population.

Other researchers [9,31] have proposed a MOGA based on a weighted sum of multiple objective functions in which a normalized weight vector w_i is randomly generated for each solution x_i during the selection phase at each generation. This approach aims to stipulate multiple search directions in a single run without using additional parameters. The general procedure of the RWGA using random weights is given as follows [31]:

Procedure RWGA:
E = external archive storing the non-dominated solutions found during the search so far; n_E = number of elitist solutions immigrating from E to P in each generation.
Step 1: Generate a random population.
Step 2: Assign a fitness value to each solution x ∈ P_t by performing the following steps:
Step 2.1: Generate a random number u_k in [0, 1] for each objective k, k = 1, ..., K.
Step 2.2: Calculate the random weight of each objective k as w_k = u_k / (u_1 + ... + u_K).
Step 2.3: Calculate the fitness of the solution as f(x) = Σ_{k=1}^{K} w_k z_k(x).
Step 3: Calculate the selection probability of each solution x ∈ P_t as p(x) = (f(x) − f_min) / Σ_{y ∈ P_t} (f(y) − f_min), where f_min = min{f(x) | x ∈ P_t}.
Step 4: Select parents using the selection probabilities calculated in Step 3. Apply crossover on the selected parent pairs to create N offspring. Mutate the offspring with a predefined mutation rate. Copy all offspring to P_{t+1}. Update E if necessary.
Step 5: Randomly remove n_E solutions from P_{t+1} and add the same number of solutions from E to P_{t+1}.
Step 6: If the stopping condition is not satisfied, set t = t + 1 and go to Step 2. Otherwise, return E.
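A minimal sketch of RWGA's Step 2 fitness assignment follows, assuming K normalized objectives to be minimized; each solution receives its own normalized random weight vector, so a single generation pulls the search in many directions at once. The names are illustrative.

```python
import random

def rwga_fitness(pop_objectives):
    """pop_objectives: one vector of K normalized objective values per solution."""
    fitness = []
    for z in pop_objectives:
        u = [random.random() for _ in range(len(z))]        # Step 2.1
        w = [u_k / sum(u) for u_k in u]                     # Step 2.2: weights sum to 1
        fitness.append(sum(w_k * z_k for w_k, z_k in zip(w, z)))  # Step 2.3
    return fitness

# Three solutions, two objectives; lower weighted sum is better here
print(rwga_fitness([[0.2, 0.9], [0.5, 0.5], [0.9, 0.1]]))
```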
The main advantage of the weighted sum approach is its straightforward implementation. Since a single objective is used in fitness assignment, a single-objective GA can be used with minimal modifications. In addition, this approach is computationally efficient. The main disadvantage is that not all Pareto-optimal solutions can be investigated when the true Pareto front is non-convex. Therefore, multi-objective GA based on the weighted sum approach have difficulty finding solutions uniformly distributed over a non-convex trade-off surface [1].

5.1.2. Altering objective functions

As mentioned earlier, VEGA [5] is the first GA used to approximate the Pareto-optimal set by a set of non-dominated solutions. In VEGA, population P_t is randomly divided into K equal-sized sub-populations P_1, P_2, ..., P_K. Then, each solution in subpopulation P_i is assigned a fitness value based on objective function z_i. Solutions are selected from these subpopulations using proportional selection for crossover and mutation. Crossover and mutation are performed on the new population in the same way as for a single-objective GA. A small sketch of this slicing appears at the end of this subsection.

Procedure VEGA:
N_S = subpopulation size (N_S = N/K).
Step 1: Start with a random initial population P_0. Set t = 0.
Step 2: If the stopping criterion is satisfied, return P_t.
Step 3: Randomly sort population P_t.
Step 4: For each objective k, k = 1, ..., K, perform the following steps:
Step 4.1: For i = 1 + (k − 1)N_S, ..., kN_S, assign fitness value f(x_i) = z_k(x_i) to the i-th solution in the sorted population.
Step 4.2: Based on the fitness values assigned in Step 4.1, select N_S solutions between the (1 + (k − 1)N_S)-th and (kN_S)-th solutions of the sorted population to create subpopulation P_k.
Step 5: Combine all subpopulations P_1, ..., P_K and apply crossover and mutation on the combined population to create P_{t+1} of size N. Set t = t + 1 and go to Step 2.

A similar approach to VEGA is to use only a single objective function, randomly chosen each time in the selection phase [32]. The main advantage of the alternating objectives approach is that it is easy to implement and computationally as efficient as a single-objective GA. In fact, this approach is a straightforward extension of a single-objective GA to solve multi-objective problems. The major drawback of objective switching is that the population tends to converge to solutions which are superior in one objective but poor in others.
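The slice-per-objective selection of VEGA (Steps 3 and 4 above) can be sketched as follows, assuming K objective functions to be minimized; proportional selection within each slice is approximated with weights proportional to (worst − f), and the helper names are illustrative.

```python
import random

def vega_select(pop, objective_fns):
    """Return N parents, N/K chosen per objective from a shuffled population."""
    K = len(objective_fns)
    n_s = len(pop) // K                      # subpopulation size N_S
    random.shuffle(pop)                      # Step 3: randomly sort P_t
    parents = []
    for k, z in enumerate(objective_fns):    # Step 4: one slice per objective
        chunk = pop[k * n_s:(k + 1) * n_s]
        f = [z(x) for x in chunk]
        worst = max(f)
        # proportional selection on (worst - f), since z_k is minimized
        weights = [worst - fi + 1e-12 for fi in f]
        parents += random.choices(chunk, weights=weights, k=n_s)
    return parents                           # Step 5 applies crossover/mutation

pop = [[random.random(), random.random()] for _ in range(20)]
parents = vega_select(pop, [lambda x: x[0], lambda x: x[1]])
```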
5.1.3. Pareto-ranking approaches

Pareto-ranking approaches explicitly utilize the concept of Pareto dominance in evaluating fitness or assigning selection probability to solutions. The population is ranked according to a dominance rule, and then each solution is assigned a fitness value based on its rank in the population, not its actual objective function value. Note that herein all objectives are assumed to be minimized; therefore, a lower rank corresponds to a better solution in the following discussions. The first Pareto ranking technique was proposed by Goldberg [3] as follows:

Step 1: Set i = 1 and TP = P.
Step 2: Identify the non-dominated solutions in TP and assign them to set F_i.
Step 3: Set TP = TP \ F_i. If TP = ∅, go to Step 4; else set i = i + 1 and go to Step 2.
Step 4: For every solution x ∈ P at generation t, assign rank r_1(x, t) = i if x ∈ F_i.

In the procedure above, F_1, F_2, ... are called non-dominated fronts, and F_1 is the Pareto front of population P.

NSGA [10] also classifies the population into non-dominated fronts using an algorithm similar to that given above. A dummy fitness value is then assigned to each front using a fitness sharing function such that the worst fitness value assigned to F_i is better than the best fitness value assigned to F_{i+1}. In NSGA-II [16], a more efficient algorithm, called the fast non-dominated-sort algorithm, was developed to form the non-dominated fronts.

Fonseca and Fleming [6] used a slightly different rank assignment than ranking based on non-dominated fronts:

r_2(x, t) = 1 + nq(x, t),   (2)

where nq(x, t) is the number of solutions dominating solution x at generation t. This ranking method penalizes solutions located in regions of the objective function space which are dominated (covered) by densely populated sections of the Pareto front. For example, in Fig. 1b solution i is dominated by solutions c, d, and e; therefore, it is assigned a rank of 4, although it is in the same front as solutions f, g, and h, which are dominated by only a single solution.

SPEA [11] uses a ranking procedure to assign better fitness values to non-dominated solutions in underrepresented regions of the objective space. In SPEA, an external list E of a fixed size stores the non-dominated solutions investigated thus far during the search. For each solution y ∈ E, a strength value is defined as

s(y, t) = np(y, t) / (N_P + 1),

where np(y, t) is the number of solutions that y dominates in P. The rank of a solution y ∈ E is assigned as r_3(y, t) = s(y, t), and the rank of a solution x ∈ P is calculated as

r_3(x, t) = 1 + Σ_{y ∈ E, y dominates x} s(y, t).

Fig. 1c illustrates an example of the SPEA ranking method. In the former two methods, all non-dominated solutions are assigned a rank of 1. This method, however, favors solution a (in the figure) over the other non-dominated solutions since it covers the least number of solutions in the objective function space. Therefore, a wide, uniformly distributed set of non-dominated solutions is encouraged.

The accumulated ranking density strategy [19] also aims to penalize redundancy in the population due to overrepresentation. This ranking method is given as

r_4(x, t) = 1 + Σ_{y ∈ P, y dominates x} r(y, t).

To calculate the rank of a solution x, the ranks of the solutions dominating it must be calculated first. Fig. 1d shows an example of this ranking method (based on r_2). Using ranking method r_4, solutions i, l, and n are ranked higher than their counterparts on the same non-dominated front since the portion of the trade-off surface covering them is crowded by three nearby solutions c, d, and e.
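The two simplest rules above, Goldberg's front-peeling rank r_1 and Fonseca and Fleming's dominance count r_2, can be sketched directly from their definitions, assuming all objectives are minimized; the function names are illustrative.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def rank_r1(objs):
    """Goldberg ranking: rank = index of the non-dominated front."""
    ranks, remaining, front_no = {}, set(range(len(objs))), 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = front_no
        remaining -= front
        front_no += 1
    return [ranks[i] for i in range(len(objs))]

def rank_r2(objs):
    """Fonseca-Fleming ranking: 1 + number of dominating solutions."""
    return [1 + sum(dominates(objs[j], objs[i]) for j in range(len(objs)) if j != i)
            for i in range(len(objs))]

objs = [[1, 5], [2, 2], [5, 1], [3, 3], [4, 4]]
print(rank_r1(objs))   # [1, 1, 1, 2, 3]
print(rank_r2(objs))   # [1, 1, 1, 2, 3]
```

On this small example the two rules agree; they differ once a front contains solutions covered by different numbers of dominators, as in the Fig. 1b discussion above.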
Although some of the ranking approaches described in this section can be used directly to assign fitness values to individual solutions, they are usually combined with various fitness sharing techniques to achieve the second goal in multi-objective optimization: finding a diverse and uniform Pareto front.

5.2. Diversity: fitness assignment, fitness sharing, and niching

Maintaining a diverse population is an important consideration in multi-objective GA for obtaining solutions uniformly distributed over the Pareto front. Without preventive measures, the population tends to form relatively few clusters in multi-objective GA. This phenomenon is called genetic drift, and several approaches have been devised to prevent it, as follows.

5.2.1. Fitness sharing

Fitness sharing encourages the search in unexplored sections of a Pareto front by artificially reducing the fitness of solutions in densely populated areas. To achieve this goal, densely populated areas are identified and a penalty is applied to the fitness of the solutions located in them.
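One classical form of such a penalty is the niche-count sharing function, sketched below under the assumptions that distances are measured in objective space, a sharing radius sigma_share is given, and the raw fitness is to be maximized; the parameter values are illustrative.

```python
import math

def niche_count(i, objs, sigma_share, alpha=1.0):
    """Sum of sharing-function values over all solutions near solution i."""
    nc = 0.0
    for z in objs:
        d = math.dist(objs[i], z)                    # Euclidean distance
        if d < sigma_share:
            nc += 1.0 - (d / sigma_share) ** alpha   # includes the d = 0 self term
    return nc

def shared_fitness(raw_fitness, objs, sigma_share):
    """Divide each raw fitness by its niche count to penalize crowding."""
    return [f / niche_count(i, objs, sigma_share)
            for i, f in enumerate(raw_fitness)]

objs = [[0.1, 0.9], [0.12, 0.88], [0.9, 0.1]]        # two crowded, one isolated
print(shared_fitness([1.0, 1.0, 1.0], objs, sigma_share=0.2))
```

The two crowded solutions see their fitness roughly halved, while the isolated solution keeps its full fitness, which is exactly the pressure toward an evenly spread front described above.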

Translated foreign text for the Mechatronics Technology Education program: Computer-Aided Design and Manufacturing

Original text: Modern design and manufacturing

CAD/CAM

CAD/CAM is a term which means computer-aided design and computer-aided manufacturing. It is the technology concerned with the use of digital computers to perform certain functions in design and production. This technology is moving in the direction of greater integration of design and manufacturing, two activities which have traditionally been treated as distinct and separate functions in a production firm. Ultimately, CAD/CAM will provide the technology base for the computer-integrated factory of the future.

Computer-aided design (CAD) can be defined as the use of computer systems to assist in the creation, modification, analysis, or optimization of a design. The computer systems consist of the hardware and software to perform the specialized design functions required by the particular user firm. The CAD hardware typically includes the computer, one or more graphics display terminals, keyboards, and other peripheral equipment. The CAD software consists of the computer programs to implement computer graphics and to facilitate the engineering functions of the user company. Examples of these application programs include stress-strain analysis of components, dynamic response of mechanisms, heat-transfer calculations, and numerical control part programming. The collection of application programs will vary from one user firm to the next because their product lines, manufacturing processes, and customer markets are different; these factors give rise to differences in CAD system requirements.

Computer-aided manufacturing (CAM) can be defined as the use of computer systems to plan, manage, and control the operations of a manufacturing plant through either direct or indirect computer interface with the plant's production resources. As indicated by the definition, the applications of computer-aided manufacturing fall into two broad categories:

1. Computer monitoring and control.
2. Manufacturing support applications.

The distinction between the two categories is fundamental to an understanding of computer-aided manufacturing.

In addition to the applications involving a direct computer-process interface for the purpose of process monitoring and control, computer-aided manufacturing also includes indirect applications in which the computer serves a support role in the manufacturing operations of the plant. In these applications, the computer is not linked directly to the manufacturing process. Instead, the computer is used "off-line" to provide plans, schedules, forecasts, instructions, and information by which the firm's production resources can be managed more effectively. The form of the relationship between the computer and the process is represented symbolically in the figure given below. Dashed lines are used to indicate that the communication and control link is an off-line connection, with human beings often required to consummate the interface. Human beings are presently required in these applications either to provide input to the computer programs or to interpret the computer output and implement the required action.

Figure: CAM for manufacturing support

What is CAD/CAM software?

Many toolpaths are simply too difficult and expensive to program manually. For these situations, we need the help of a computer to write an NC part program.

The fundamental concept of CAD/CAM is that we can use a Computer-Aided Drafting (CAD) system to draw the geometry of a workpiece on a computer.
Once the geometry is completed, we can use a Computer-Aided Manufacturing (CAM) system to generate an NC toolpath based on the CAD geometry. The progression from a CAD drawing all the way to working NC code is illustrated as follows:

Step 1: The geometry is defined in a CAD drawing. This workpiece contains a pocket to be machined. It might take several hours to manually write the code for this pocket. However, we can use a CAM program to create the NC code in a matter of minutes.

Step 2: The model is next imported into the CAM module. We can then select the proper geometry and define the style of toolpath to create, which in this case is a pocket. We must also tell the CAM system which tools to use, the type of material, feed, and depth of cut information.

Step 3: The CAM model is then verified to ensure that the toolpaths are correct. If any mistakes are found, it is simple to make changes at this point.

Step 4: The final product of the CAD/CAM process is the NC code. The NC code is produced by post-processing the model; the code is customized to accommodate the particular variety of CNC control.

Another acronym that we may run into is CAPP, which stands for Computer-Aided Part Programming. CAPP is the process of using computers to aid in the programming of NC toolpaths. However, the acronym CAPP never really gained widespread acceptance, and today we seldom hear this term. Instead, the more marketable CAD/CAM is used to express the idea of using computers to help generate NC part programs. This is unfortunate because CAM is an entire group of technologies related to manufacturing design and automation, not just the software that is used to program CNC machine tools.

Description of CAD/CAM Components and Functions

CAD/CAM systems contain both CAD and CAM capabilities, each of which has a number of functional elements. It will help to take a short look at some of these elements in order to understand the entire process.

1. CAD Module

The CAD portion of the system is used to create the geometry as a CAD model. The CAD model is an electronic description of the workpiece geometry that is mathematically precise. The CAD system, whether standalone or part of a CAD/CAM package, tends to be available in several different levels of sophistication.

2-D line drawings: Geometry is represented in two axes, much like drawing on a sheet of paper. Z-level depths will have to be added on the CAM end.

3-D wireframe models: Geometry is represented in three-dimensional space by connecting elements that represent edges and boundaries. Wireframes can be difficult to visualize, but all Z-axis information is available for the CAM operations.

3-D surface models: These are similar to wireframes except that a thin skin has been stretched over the wireframe model to aid in visualization. Inside, the model is empty. Complex contoured surfaces are possible with surface models.

3-D solid modeling: This is the current state-of-the-art technology used by all high-end software. The geometry is represented as a solid feature that contains mass. Solid models can be sliced open to reveal internal features, not just a thin skin.

2. CAM Module

The CAM module is used to create the machining process model based upon the geometry supplied in the CAD model.
For example, the CAD model may contain a feature that we recognize as a pocket. We could apply a pocketing routine to the geometry, and then all of the toolpaths would be automatically created to produce the pocket. Likewise, the CAD model may contain geometry that should be produced with drilling operations. We can simply select the geometry and instruct the CAM system to drill holes at the selected locations.

The CAM system will generate a generic intermediate code that describes the machining operations, which can later be used to produce G & M code or conversational programs. Some systems create intermediate code in their own proprietary language, while others use open standards such as APT for their intermediate files.

The CAM modules also come in several classes and levels of sophistication. First, there is usually a different module available for milling, turning, wire EDM, and fabrication. Each of these processes is unique enough that the modules are typically sold as add-ins. Each module may also be available with different levels of capability. For example, CAM modules for milling are often broken into stages as follows, starting with very simple capabilities and ending with complex, multi-axis toolpaths:

● 2 1/2-axis machining
● Three-axis machining with fourth-axis positioning
● Surface machining
● Simultaneous five-axis machining

Each of these represents a higher level of capability that may not be needed in all manufacturing environments. A job shop might only require 3-axis capability. An aerospace contractor might need a sophisticated 5-axis CAM package that is capable of complex machining. This class of software might start at $5,000 per installation, but the most sophisticated modules can cost $15,000 or more. Therefore, there is no need to buy software at such a high level that we will not be able to use it to its full potential.

3. Geometry vs. toolpath

One important concept we must understand is that the geometry represented by the CAD drawing may not be exactly the same geometry that is produced on the CNC machine. CNC machine tools are equipped to produce very accurate toolpaths as long as the toolpaths are either straight lines or circular arcs. CAD systems are also capable of producing highly accurate geometry of straight lines and circular arcs, but they can also produce a number of other classes of curves. Most often these curves are represented as Non-Uniform Rational B-Splines (NURBS). NURBS curves can represent virtually any geometry, ranging from a straight line or circular arc to complex surfaces.

Take, for example, the geometric entity that we call an ellipse. An ellipse is a class of curve that is mathematically different from a circular arc. An ellipse is easily produced on a CAD system with the click of the mouse. However, a standard CNC machine tool cannot directly produce an ellipse; it can only create lines and circular arcs. The CAM system will reconcile this problem by estimating the curve with line segments.

CNC machine tools usually only understand circular arcs or straight lines. Therefore, the CAM system must estimate curved surfaces with line segments. The curve in this illustration is that of an ellipse, and the toolpath generated consists of tangent line segments that are contained within a tolerance zone. The CAM system will generate a bounding geometry on either side of the true curve to form a tolerance zone. It will then produce a toolpath from line segments that stays contained within the tolerance zone. The resulting toolpath will not be mathematically correct; the CAM system will only be able to estimate the surface. This basic method is used to produce estimated toolpaths for both 2-D curves and 3-D surface curves. Some CAM programs also have the ability to convert the line segments into arc segments. This can reduce the number of blocks in the program and lead to smoother surfaces.

The programmer can control the size of the tolerance zone to create a toolpath that is as accurate as is needed. Smaller tolerance zones will produce finer toolpaths and more numerous line segments, while larger tolerance zones will produce fewer line segments and coarser toolpaths. Each line segment will require a block of code in the NC program, so the NC part program can grow very large when using this technique. A short sketch of this chordal-approximation idea follows below.
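As a rough illustration of the tolerance-zone idea, the sketch below breaks one quadrant of an ellipse into chords whose deviation from the true curve stays under a given tolerance; tightening the tolerance multiplies the number of toolpath points, and hence NC blocks, as described above. The recursive bisection used here is illustrative, not the algorithm of any particular CAM system.

```python
import math

def ellipse_point(a, b, t):
    """Point on an ellipse with semi-axes a and b at parameter t."""
    return (a * math.cos(t), b * math.sin(t))

def chordal_segments(a, b, tol, t0=0.0, t1=math.pi / 2):
    """Endpoints of chords approximating the quadrant within tolerance tol."""
    tm = 0.5 * (t0 + t1)
    x0, y0 = ellipse_point(a, b, t0)
    x1, y1 = ellipse_point(a, b, t1)
    xm, ym = ellipse_point(a, b, tm)
    # distance from the mid-curve point to the chord estimates the deviation
    dx, dy = x1 - x0, y1 - y0
    dev = abs(dy * (xm - x0) - dx * (ym - y0)) / math.hypot(dx, dy)
    if dev <= tol:
        return [(x1, y1)]                   # chord stays inside the tolerance zone
    return chordal_segments(a, b, tol, t0, tm) + chordal_segments(a, b, tol, tm, t1)

for tol in (0.1, 0.01):                     # each point becomes one NC block
    pts = [(50.0, 0.0)] + chordal_segments(50.0, 25.0, tol)
    print(tol, len(pts), "toolpath points")
```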
We must use caution when machining surfaces. It is easy to rely on the computer to generate the correct toolpath, but finished surfaces are further estimated during machining with ball end mills. If we do not pay attention to the limitations of these techniques, then the accuracy of the finished workpiece may be compromised.

4. Tool and material libraries

To create the machining operations, the CAM system will need to know which cutting tools are available and what material we are machining. CAM systems take care of this by providing customizable libraries of cutting tools and materials. Tool libraries contain information about the shape and style of the tool. Material libraries contain information that is used to optimize the cutting speeds and feeds. The CAM system uses this information together to create the correct toolpaths and machining parameters. The format of these tool and material libraries is often proprietary and can present some portability issues. Proprietary tool and material files cannot be easily modified or used on another system. More progressive CAM developers tend to produce their tool and material libraries as database files that can be easily modified and customized for other applications.

5. Verification and post-processor

CAM systems usually provide the ability to verify that the proposed toolpaths are correct. This can be via a simple backplot of the tool centerline or via a sophisticated solid model of the machining operations. The solids verification is often third-party software that the CAD/CAM software company has licensed; however, it may be available as a standalone package. The post-processor is a software program that takes the generic intermediate code and formats the NC code for each particular machine tool control. The post-processor can often be customized through templates and variables to provide the required customization.

6. Portability

Portability of electronic data is the Achilles' heel of CAD/CAM systems and continues to be a time-consuming concern. CAD files are created in a number of formats and have to be shared between many organizations.
It is very expensive to create a complex model on a CAD system; therefore, we want to maximize the portability of our models and minimize the need for recreating the geometry on another system. DXF, DWG, IGES, SAT, STL, and Parasolid are a few of the common formats for CAD data exchange.

CAM process models are not nearly as portable as CAD models. We cannot usually take a CAM model developed in one system and transfer it to another platform. The only widely accepted standard for CAM model interchange is a version of Automatically Programmed Tool (APT). APT is a programming language used to describe machining operations. APT is an open standard that is well documented and can be accessed by third-party software developers. A number of CAD/CAM systems can export to this standard, and the CAM file can later be used by post-processors and verification software.

There are some circumstances when the proprietary intermediate files created by certain CAD/CAM systems can be fed directly into a machine tool without any additional post-processing. This is an ideal solution, but there is not currently any standard governing this exchange.

One other option for CAD/CAM model exchange is to use a reverse post-processor. A reverse post-processor can create a CAD/CAM model from the G & M code of an NC part program. These programs do work; however, the programmer must spend a considerable amount of time determining the design intent of the model and separating the toolpaths from the geometry. Overall, reverse post-processing has very limited applications.

Software issues and trends

Throughout industry, numerous software packages are used for CAD and CAD/CAM. Pure CAD systems are used in all areas of design, and virtually any product today is designed with CAD software; gone are the days of pencil and paper drawings.

CAD/CAM software, on the other hand, is more specialized. CAD/CAM is a small but important niche confined to machining and fabrication organizations, and it is found in much smaller numbers than its CAD big brother. CAD/CAM systems contain both the software for CAD design and the CAM software for creating toolpaths and NC code. However, the CAD portion is often weak and unrefined when compared to much of the leading pure CAD software. This mismatch sets up the classic argument between CAD designers and CAD/CAM programmers over the best way to approach CAD/CAM.

A great argument can be made for creating all geometry on an industry-leading CAD system and then importing the geometry into a CAD/CAM system. A business is much better off if its engineers only have to create a CAD model one time and in one format. The geometry can then be imported into the CAD/CAM package for process modeling. Furthermore, industry-leading CAD software tends to set an unofficial standard. The greater the acceptance of the standard, the greater the return on investment for the businesses that own the software.

The counter-argument comes from small organizations that do not have the need or resources to own both an expensive, industry-standard CAD package and an expensive CAD/CAM package. They tend to have to redraw the geometry from the paper engineering drawing or import models with imperfect translators. Any original models will end up being stored as highly non-standardized CAD/CAM files.
These models will have dubious prospects of ever being translated to a more standardized version.

Regardless of the path that is chosen, organizations and individuals tend to become entrenched in a particular technology. If they have invested tremendous effort and time into learning and assimilating a technology, then it becomes very difficult to change to a new technology, even when presented with overwhelming evidence of a better method. It can be quite painful to change. Of course, if we had a crystal ball and could see into the future, this would never happen; but the fact is that we cannot always predict what the dominant technology will be even a few years down the road. The result is technology entrenchment that can be very difficult and expensive to get out from under. About the only protection we can find is to select the technology that appears to be the most standardized (even if it is imperfect) and stay with it; then, if major changes appear down the road, we will be in a better position to adapt.


IEEE Transactions on Magnetics, Vol. 47, No. 12, December 2011

Simultaneous Design Optimization of Permanent Magnet, Coils, and Ferromagnetic Material in Actuators

Jaewook Lee, Ercan M. Dede, and Tsuyoshi Nomura
Toyota Research Institute, Toyota Motor Engineering & Manufacturing North America, Ann Arbor, MI 48105 USA

Abstract: This paper presents structural topology optimization of an electro/permanent magnet linear actuator. The optimization goal is to maximize the average magnetic force acting on a plunger that travels over a distance of 20 mm. To achieve this goal, the magnetic field sources (i.e., the permanent magnet and the positive and negative direction coils) and the ferromagnetic material of the yoke are simultaneously co-designed using four design variables for each finite element. The magnetic force is calculated using the Maxwell stress tensor method coupled with finite-element analysis. The optimization sensitivity analysis is performed using the adjoint method, and the optimization problem is solved using a sequential linear programming method. To illustrate the utility of the proposed design approach, linear actuators are designed, and the optimal shapes and locations of the yoke permanent magnet, coils, and ferromagnetic part are provided. In addition, the effects of the PM magnetization direction and the current density strength on the design results are described.

Index Terms: Finite-element method, linear actuators, permanent magnets, structural topology optimization.

I. INTRODUCTION

The development of high-grade permanent magnets (PMs) has resulted in their use in many modern electromechanical devices, including linear actuators and electric motors. For example, actuators using PMs have been designed for oscillators in [1], [2] and for a voltage circuit breaker in [3]. Additionally, motors with PMs installed inside of the rotor structure are currently the most common solution for advanced hybrid-electric vehicle drivetrains [4]. Generally, the shape and location of the magnetic field sources (i.e., PM and coils) and the associated ferromagnetic material greatly influence electromechanical device performance. Structural topology optimization [5] has been identified as a promising approach for performance enhancement, and such optimization techniques have been successfully applied to the design of various devices. For example, the ferromagnetic material inside of a linear actuator was designed using the level-set method in [6], [7]. Similarly, the ferromagnetic material inside the stator of a PM motor was optimized to mitigate torque ripples in [8], while the ferromagnetic material in the rotor was designed in [9]. In [10], the ferromagnetic material and coil element of a magnetic recording head were simultaneously designed. The ferromagnetic material and the PM were also co-designed for an actuator and magnetostrictive sensor in [11] and for a PM motor in [12]. The PM material itself was also designed using structural topology optimization in [13], [14], where the magnetization direction of the PM was determined for an electromagnetic system in [13] and for 3-D Halbach arrays in [14]. These prior studies established the groundwork for the simultaneous optimization and co-design of the magnetic field sources (i.e., PM, positive and negative direction coils) and ferromagnetic portions of an actuator, as described in this work.

Fig. 1. Configuration of electro/PM linear actuator.

Accordingly, the structural topology optimization of an electro/PM linear actuator, as shown in Fig. 1, is the focus of this paper.
This actuator is composed of a plunger and a yoke that consists of a PM, coils (negative and positive directions), and ferromagnetic material. The PM, coils, and ferromagnetic material are designed using four optimization variables associated with each finite element. The goal of the study is to maximize the average magnetic force acting on the linear actuator plunger as it travels through a distance of 20 mm. The magnetic force is calculated using the Maxwell stress tensor (MST) method coupled with finite-element analysis. In addition, the effect of the PM magnetization direction and the external current density strength on the optimization results is also investigated.

The paper is organized as follows. In Section II, an optimization method is proposed for the simultaneous co-design of the PM, coils, and ferromagnetic material. The actuator design results are presented in Section III. Finally, a summary is provided in Section IV.

II. DESIGN OPTIMIZATION METHOD

This section presents a structural topology optimization method for the design of an electro/PM linear actuator yoke with the goal of maximizing the average magnetic force acting on the plunger. First, the finite-element analysis and the magnetic force calculation procedure are explained; then the strategy for the simultaneous design of the PM, coils, and ferromagnetic material is described. Next, the optimization problem is formulated, and finally the sensitivity analysis is presented.

A. Finite-Element Analysis and Magnetic Force Calculation

Maxwell's equation for the magnetic analysis of the PM and coil is written as

∇ × (ν ∇ × A) = J + ∇ × (ν B_r),   (1)

where ν is the magnetic reluctivity, A is the magnetic vector potential, J is the external current density in the coils, and B_r is the residual magnetic flux density of the PM. In two dimensions, (1) reduces to

−∂/∂x (ν ∂A_z/∂x) − ∂/∂y (ν ∂A_z/∂y) = J_z + ∂(ν B_ry)/∂x − ∂(ν B_rx)/∂y,   (2)

where the index x or y behind each quantity refers to the component direction of the vector. Switching to the finite-element formulation, (2) may be represented in matrix form as

K A = F,   (3)

where the element stiffness matrix K_e and force vector F_e, given by (4) and (5), follow from the Galerkin discretization of (2) in terms of the shape functions N_i of the finite-element nodes. From the solution of (3), the magnetic flux density is calculated as B = ∇ × A. Finally, the magnetic force F acting on the plunger is calculated using the Maxwell stress tensor formulation, per [15],

F = ∮ [ (1/(2 μ0)) (B_n² − B_t²) n + (1/μ0) B_n B_t t ] dΓ,   (6)

where n and t are, respectively, unit vectors normal and tangential to the integration path enveloping the body subject to the magnetic force (i.e., the actuator plunger), and B_n and B_t are the corresponding components of the flux density. The above process is repeated as the plunger is stepped through 20 equal increments corresponding to the 20 mm plunger movement. In this way, the average magnetic force acting on the plunger is computed over its range of travel.
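As an illustration of (6), the sketch below accumulates the Maxwell stress along a discretized integration path, given flux density samples and outward normals along the contour; the data format is illustrative, and in practice the samples would come from the finite-element field solution.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def mst_force(samples):
    """samples: list of (Bx, By, nx, ny, dl); returns (Fx, Fy) per unit depth."""
    fx = fy = 0.0
    for bx, by, nx, ny, dl in samples:
        tx, ty = -ny, nx                       # unit tangent along the path
        bn = bx * nx + by * ny                 # normal component of B
        bt = bx * tx + by * ty                 # tangential component of B
        f_n = (bn * bn - bt * bt) / (2 * MU0)  # normal (tensile) stress
        f_t = (bn * bt) / MU0                  # tangential (shear) stress
        fx += (f_n * nx + f_t * tx) * dl       # integrate stress over segment
        fy += (f_n * ny + f_t * ty) * dl
    return fx, fy

# Uniform 1 T field crossing a 10 mm segment of the integration path
print(mst_force([(1.0, 0.0, 1.0, 0.0, 0.01)]))
```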
Thus, the four constrained design through , determine the material state for each variables, finite element and consequently yield the optimal design for the PM, coils and ferromagnetic components of the yoke. C. Optimization Problem Formulation(4)(5) where is the shape function of th finite-element node. From the solution of (3), the magnetic flux density, , is calculated . Finally, the magnetic force, , acting on the as plunger is calculated using the Maxwell stress tensor formulation, per [15] (6) where and are, respectively, unit vectors normal and tangential to the integration path enveloping the body subject to the magnetic force, i.e., the actuator plunger. The above process is repeated as the plunger is stepped through 20 equal increments which correspond to the 20 mm plunger movement. In such a way, the average magnetic force acting on the plunger is computed over its range of travel. B. Design Strategy Simultaneously designing the PM, coils and ferromagnetic material using topology optimization requires that the material properties in the design domain be interpolated using a method similar to that proposed in [10] and [11]. Four design variables,The design goal for the optimization problem is to maximize , acting on the the average axial direction magnetic force plunger subject to a volume constraint on the PM, coils and ferromagnetic portions of the design. To achieve this design goal, the optimization problem is formulated as (10) (11) (12) (13) (14) (15) (16) (17)4714IEEE TRANSACTIONS ON MAGNETICS, VOL. 47, NO. 12, DECEMBER 2011The objective function (11) is calculated as the average of the axial direction force at 20 equally spaced plunger locations spanning the 20 mm movement:(18) The volume of the ferromagnetic material, , the positive , the negative direction coil, , and direction coil, , is calculated as the PM, (19) (20) (21) (22) is the volume of th design domain finite element. where In terms of implementation, the formulated optimization problem (10)–(17) is solved using a Sequential Linear Programming (SLP) method [16]. Thus, to solve the above optimization problem using a gradient-based method, the sensitivity of the objective and constraint function must be derived. D. Sensitivity Analysis Here, the sensitivity of the objective function (18) with respect to the design variables, through , can be written as (23) The sensitivity of the magnetic force at th plunger location is analytically derived using the adjoint variable with the adjoint term may be method. The magnetic force written as (24) is the adjoint variable. First, the sensitivity of (24) where with respect to is derived as (25) and The terms are calculated from (4), (5), and (7). The adjoint variable, is then obtained by solving the adjoint equation derived asFig. 2. Sensitivity verification. (a) (d Avg (F ))=(d ), (b) (d Avg (F ))= (d ), (c) (d Avg (F ))=(d ), (d) (d Avg (F ))=(d ).(29) in (27) is calculated and in (28) and (29) is calculated from (5) and (9). The adjoint equation is derived following the prior procedure to arrive at (26). Furthermore, the sensitivity of the constraint (13)–(17) is easily derived since they are linear and thus not described here. To validate the derived analytical sensitivity expressions introduced above, comparisons are made with corresponding finite difference sensitivities. Fig. 2(a)–(d) compares the sensitivity with respect to, through , which is derived using (23)–(29). 
III. DESIGN OPTIMIZATION RESULTS

The proposed optimization method is applied to the design of the yoke in the electro/PM linear actuator model, as shown in Fig. 3. The PM, coils (positive and negative), and ferromagnetic material of the yoke are designed to maximize the average force during the 20 mm plunger movement, which is simulated by stepping the plunger through its travel in 20 equal increments. The finite-element analysis domain is also shown in Fig. 3. Due to symmetry, one half of the actuator is analyzed using appropriate boundary conditions. A zero Neumann condition is applied to the vector potential A at the bottom edge for zero tangential (i.e., x-direction) magnetic flux density.

Fig. 3. Design domain and finite-element analysis domain with boundary conditions.

The optimal design is found by solving the optimization problem (10)–(17) using the SLP method. The move limit for the SLP method is set to 0.015. The volume constraints for the PM, the positive-direction coil, the negative-direction coil, and the ferromagnetic material are set to 0.015, 0.025, 0.025, and 0.25, respectively. An external current of 3 A flows through the coil, which is assumed to consist of 200 turns, and the residual magnetic flux density of the permanent magnet, B_r0, is set to 0.4 tesla (T).

Using the proposed method, the yoke is successfully designed to maximize the average magnetic force. The benchmark initial design and the optimization result are presented in Fig. 4(a) and (b), respectively. In the optimal structure, both the PM (black structure) and the coils (dark gray structure) appear near the air gap. The optimal structure maximizes the magnetic force by reducing the leakage magnetic field outside of the air gap and plunger, which can be observed by comparing the magnetic flux lines shown in Fig. 4(c) and (d). Note that the magnetic field surrounding the air gap mainly influences the magnetic force magnitude.

Fig. 4. (a) Benchmark design (d = 10 mm), (b) optimized design (d = 10 mm); magnetic flux lines at various plunger locations: (c) benchmark design, (d) optimized design.

Fig. 5(a) compares the magnetic force profiles versus the plunger location d, together with the average forces. The average force of the optimized design is 41.9% higher than that of the benchmark initial design. The convergence history of the objective function is shown in Fig. 5(b), and all volume constraints were satisfied during the optimization process.

Fig. 5. (a) Magnetic force profile versus plunger location d and average force; (b) convergence history of the average force (i.e., the objective function).

A. Effect of Magnetization Direction

The influence of the PM magnetization direction on the optimization result is presented here. The optimization process was performed for four different magnetization directions (45°, 90°, 135°, and 180° with respect to the x-axis).
Fig. 6(a)–(d) shows the optimization results with magnetic flux lines for each magnetization direction. In every result, the coils appear near the air gap to minimize leakage, and the PM appears such that its magnetization direction is aligned with the direction of the magnetic field generated by the coil and passing through the ferromagnetic material. Fig. 6(e) shows the magnetic force profile and average force for each actuator design. When the magnetization direction is 180°, as in Fig. 6(d), the PM is located closest to the air gap; consequently, the smallest leakage occurs, along with the highest average force on the plunger.

Fig. 6. Optimization results with magnetic flux lines (at d = 10 mm) for different PM magnetization directions: (a) 45°, (b) 90°, (c) 135°, (d) 180°; (e) magnetic force profile.

B. Effect of External Current Strength

It is hypothesized that the ratio of the PM strength to the external current density strength affects the optimization result. Thus, in order to investigate this effect, the optimization process is performed using four different current strengths (i.e., 1 A, 3 A, 5 A, and 7 A) with a fixed PM strength of 0.4 T. Here, the magnetization direction is held fixed at 90° with respect to the x-axis.

Fig. 7(a)–(d) shows the various optimization results with magnetic flux lines. As the external coil current strength increases, the optimization process tends to generate a structure with smaller magnetic reluctance in order to maximize the magnetic field generated by the coils. Typically, a PM has high magnetic reluctance since its permeability is almost the same as that of air. Compared to the result in Fig. 7(a), the structure in Fig. 7(b) contains a thin PM structure, which has smaller magnetic reluctance despite higher leakage of the PM magnetic field. In Fig. 7(c), a thick ferromagnetic structure is formed to accommodate the magnetic field generated by the coil, and a thin additional structure is created for the PM magnetic field. Even for the higher-current case shown in Fig. 7(d), the PM is a minor magnetic field source, and consequently the ferromagnetic material is used only to minimize the magnetic reluctance associated with the coil source.

Fig. 7. Optimization results with magnetic flux lines (d = 10 mm) for different external coil currents: (a) 1 A, (b) 3 A, (c) 5 A, (d) 7 A.

IV. CONCLUSION

An electro/PM linear actuator was designed using structural topology optimization. The optimal locations and shapes of the PM, coils, and ferromagnetic material portions of the actuator yoke were simultaneously co-designed to maximize the average magnetic force acting on the plunger. The designed actuators are expected to generate a much higher magnetic force when compared with a benchmark initial design. The proposed method may be applied to the advanced development of interior permanent magnet motors for further enhancement of performance in future vehicle systems.

REFERENCES

[1] L. N. Tutelea, M. C. Kim, M. Topor, J. Lee, and I. Boldea, "Linear permanent magnet oscillatory machine: Comprehensive modeling for transients with validation by experiments," IEEE Trans. Ind. Electron., vol. 55, no. 2, pp. 492–500, 2008.
[2] Z. Q. Zhu and X. Chen, "Analysis of an E-core interior permanent magnet linear oscillating actuator," IEEE Trans. Magn., vol. 45, no. 10, pp. 4384–4387, Oct. 2009.
[3] S. Fang, H. Lin, and S. L. Ho, "Transient co-simulation of low voltage circuit breaker with permanent magnet actuator," IEEE Trans. Magn., vol. 45, no. 3, pp. 1242–1245, Mar. 2009.
[4] K. T. Chau, C. C. Chan, and C. Liu, "Overview of permanent-magnet brushless drives for electric and hybrid electric vehicles," IEEE Trans. Ind. Electron., vol. 55, no. 6, pp. 2246–2257, 2008.
[5] M. P. Bendsøe and O. Sigmund, Topology Optimization: Theory, Methods and Applications, 2nd ed. Berlin, Germany: Springer, 2004.
[6] S. Park and S. Min, "Magnetic actuator design for maximizing force using level set based topology optimization," IEEE Trans. Magn., vol. 45, no. 5, pp. 2336–2339, May 2009.
[7] S. Park and S. Min, "Design of magnetic actuator with nonlinear ferromagnetic materials using level-set based topology optimization," IEEE Trans. Magn., vol. 45, no. 5, pp. 2336–2339, May 2009.
[8] J. Kwack, S. Min, and J. Hong, "Optimal stator design of interior permanent magnet motor to reduce torque ripple using the level set method," IEEE Trans. Magn., vol. 46, no. 6, pp. 2108–2111, Jun. 2010.
[9] N. Takahashi, T. Yamada, and D. Miyagi, "Examination of optimal design of IPM motor using ON/OFF method," IEEE Trans. Magn., vol. 46, no. 8, pp. 3149–3152, Aug. 2010.
[10] S. Park, J. Yoo, and J. S. Choi, "Simultaneous optimal design of the yoke and the coil in the perpendicular magnetic recording head," IEEE Trans. Magn., vol. 45, no. 10, pp. 3668–3671, Oct. 2009.
[11] J. S. Choi and J. Yoo, "Simultaneous structural topology optimization of electromagnetic sources and ferromagnetic materials," Comput. Methods Appl. Mech. Eng., vol. 198, pp. 2111–2121, 2009.
[12] T. Labbe, B. Dehez, M. Markovic, and Y. Perriard, "Torque-to-weight ratio maximization in PMSM using topology optimization," in Proc. XIX Int. Conf. Electrical Machines (ICEM), Rome, Italy, 2010.
[13] S. Wang, D. Youn, H. Moon, and J. Kang, "Topology optimization of electromagnetic systems considering magnetization direction," IEEE Trans. Magn., vol. 41, no. 5, pp. 1808–1811, May 2005.
[14] J. S. Choi, J. Yoo, S. Nishiwaki, and K. Izui, "Optimization of magnetization directions in a 3-D magnetic structure," IEEE Trans. Magn., vol. 45, no. 5, pp. 2336–2339, May 2009.
[15] A. Benhama, A. C. Williamson, and A. B. J. Reece, "Force and torque computation from 2-D and 3-D finite element field solutions," IEE Proc. Electr. Power Appl., vol. 146, no. 1, pp. 25–31, 1999.
[16] J. S. Arora, Introduction to Optimum Design, 2nd ed. New York: Elsevier, 2004.
