生命线工程的发展外文翻译
外文翻译原文

A Strategy of Reactive Power Optimization for Comprehensive Economy of Operating and Adjusting in Distribution Systems
Renjun Zhou, Hongming Yang, Yongfei Ma, Shiping Su
resulting in their stop of action in later time intervals and even deteriorating optimization effect. and too frequent, a novel strategy is proposed as the two-stage In the current references on the times and number of process, i.e. the off-line pre-optimization and real-time regulation and control action, the Dynamic Programming optimization. Each time intervals within a dispatch period are Approach [2] can hardly be used in a actual large-scale pre-optimized one by one using conventional model, while for real-time optim
12 生命线简介及震害(1)

无线通信:无线电广播、电视、移动通讯、卫星通 讯等。
传输介质:空间,省去传输线的花费,但是容易受 外界干扰。
对地震应急救灾来说,无线通讯显然优越。
通信网的基本结构
1. 有线通信
(1)公用电话网(PSTN) 电话是最常用的通信方式,公用电话网分层管 理,包括用户接入网、本地网、国内长途网、 国际长途网四层。
▪ 分区供水,把城市分区,各区泵站和管网独立,适当连 接,减少加压和漏水,但增加管网费用。
▪ 分压供水,对于山区较适用,减少高压管的数量。
▪ 循环或循序给水,为节约用水,废水适当处理后作为生 产用水再利用。
供水系统主要分为三个环节:
1)取水工程,包括水源、取水构筑物、一级泵站;
2)净水工程,包括反应池、沉淀池、过滤池等构筑物、二级泵 站;
唐山大地震,毁坏
台湾大地震,毁坏
送电线路的破坏,输电线塔是送电线路中 的关键结构,它的破坏主要与场地有关, 如果输电线塔处在滑坡、滚石、地表破裂、 严重液化场地就会遭受破坏。
天津市大沽化工厂钢筋混凝土电线杆破坏,唐山大地震,1976
沿陡河水库大坝电线杆倾倒,唐山大地震,1976
二、供水和排水系统
3)输配水工程,包括各种管网、中途加压泵站、水塔以及必要 的清水池、消毒设备等,它是整个系统的重要部分,占总投资 的50%-80%。
泵站在供水系统中起连接作用,如果水源位于高处,可利用重 力送水,但实际上离不开水泵,一级泵站是取水用的,二级泵 站是清水泵,抽取净化后的水送到输水管,在以后的管网中视 实际需要设加压泵站或调节泵站。
影响管道抗震的因素
管径 管材
越大越抗震 见前
年代 埋深
越久远越不抗震 越浅越好
生命线工程的进展外文翻译

生命线工程的研究进展李杰(同济大学建筑工程系,上海 200092,中国)摘要:本篇文章概述了生命线工程研究中的关键事件和假设干进展。
研究话题包括:随机地震动场的波动数值模拟、工程结构非线性地震反映的概率密度演化分析方式、大型生命线工程网络系统的抗震靠得住性分析与优化等。
在论述研究进展的同时,对假设干相关的国内外研究进展状况作了简要的评述。
对生命线工程研究的以后进展给出了一些建议。
关键词:生命线工程;地震;结构;靠得住性;网络;优化引言生命线工程是维系现代城市与区域经济、社会功能的基础性工程设施与系统,其典型对象包括区域电力与交通系统、城市供水、供气系统、通信系统等。
在强烈灾害(如地震、风暴等)攻击下,生命线工程的破坏能够致使城市乃至区域社会、经济功能的瘫痪。
例如:在1995年日本阪神大地震中,神户地域供水系统骨干供水管网破坏1610处,致使11万用户断水,一周后仅修复三分之一,全数修复工作持续三个半月。
与此同时,该地域供气系统、供电系统也都蒙受了严峻破坏。
由于生命线工程系统的耦联作用,还致使了严峻的次生灾害。
20世纪70年代中期,由于美国圣费尔南多地震的阻碍,一批美国地震工程学家正式提出了生命线地震工程的概念旧。
3J。
事实上,假设对生命线工程的研究对象与研究问题加以认真地考察,那么不难发觉:在研究对象上,生命线工程研究包括了生命线工程结构、生命线工程网络、复合生命线工程系统三个大体层次。
而在研究问题上,那么能够分为抗灾设计号|生态操纵两大大体领域H J。
近三十年来,活着界范围内,尤其是西方发达国家,对生命线工程研究的重视程度有增无减。
例如:1998年,美国联邦紧急事务署与美国土木工程师学会联合成立美国生命线工程联合会(ALA),统一和谐生命线工程科学研究、技术开发与工程实践等方面的工作。
2004年,在加拿大召开的世界地震工程大会上,生命线工程的研究进展被列为大会10个专题报告之一。
由于生命线工程涉及到重大土木工程抗灾、工程系统的靠得住性与耐久性、工程结构与工程系统的平安性监测与操纵等土木工程进展中的一系列关键科学与技术问题,从必然意义上说,生命线工程研究已成为现代土木工程研究的大体推动力量。
船舶工程施工英文

Shipbuilding EngineeringShipbuilding engineering is a specialized field that encompasses the design, construction, and maintenance of ships and other marine vessels. It involves a combination of mechanical, electrical, and structural engineering principles to ensure the safe and efficient operation of ships. The shipbuilding industry plays a crucial role in global trade, transportation, and defense, making it an essential sector in many countries.The process of ship construction begins with the design phase, where engineers and architects develop plans for the ship based on itsintended purpose. This includes determining the size, shape, andtechnical specifications of the vessel. The design phase also involves selecting the appropriate materials and technologies to be used in the construction process.Once the design is finalized, the shipbuilding process moves into the manufacturing phase. This involves the assembly of various components, such as the hull, superstructure, and mechanical systems, to form the complete ship. The hull, which is the main body of the ship, istypically constructed from steel or aluminum, depending on the size and type of vessel. The superstructure, which includes decks, cabins, and other non-hull components, is often made from reinforced concrete or steel.During the manufacturing phase, engineers must also install various mechanical systems, such as the propulsion system, electrical system,and navigation system. The propulsion system, which can be diesel-powered or electric, provides the ship with the necessary power to move through the water. The electrical system ensures that the ship's lights, communication devices, and other electrical appliances function properly. The navigation system, which includes radar, GPS, and other sensors, helps the ship navigate safely and efficiently.Once the ship is completed, it undergoes a series of tests and inspections to ensure that it meets all relevant safety and regulatory standards. These tests include sea trials, where the ship is taken outto sea to test its performance and stability, and dry dock inspections, where the ship is inspection for any structural or mechanical issues.After the ship is certified and ready for operation, it is typically delivered to its owner or operator. The ship can then be used for a variety of purposes, such as commercial shipping, passenger transportation, or military missions.In addition to the construction and maintenance of new ships, shipbuilding engineering also involves the repair and modification of existing vessels. This can include tasks such as replacing worn-out components, upgrading mechanical systems, or extending the overall length of the ship.Overall, shipbuilding engineering is a complex and challenging field that requires a strong background in engineering and technical skills. It is a vital industry that supports global trade and maritime activities, and provides employment opportunities for skilled workers around the world.。
外文翻译

The credit channel of monetary policy:Evidence from the housing marketMatteo Iacovielloa,*,Raoul Minetti b aBoston College,Economics,140Commonwealth Ave,Chestnut Hill,MA 02467,United States b Michigan State University,110Marshall Hall,East Lansing,MI 48824-1038,United StatesReceived 13January 2006;accepted 5December 2006Available online 7March 2007AbstractThis paper tests a credit channel of monetary policy (especially a bank-lending channel)in thehousing market.We argue that the relevance of the credit channel depends on the structural featuresof the housing finance system,in particular efficiency and institutional organisation.We employ aVAR approach to analyse this issue in four housing markets (Finland,Germany,Norway and theUK).Our findings show across countries a clear-cut relationship between presence of the creditchannel,efficiency of housing finance and type of institutions active in mortgage provision.Ó2007Elsevier Inc.All rights reserved.JEL classification:E44;E51;E52;G21;C22Keywords:Monetary transmission;Bank-lending channel;Housing market;Vector autoregressions1.IntroductionSince Bernanke and Blinder (1988),the literature has shown a renewed interest in thecredit channel of monetary policy.According to this view,widespread imperfections inthe credit market,such as asymmetric information or imperfect contract enforceability,result for consumers and firms in a wedge between the opportunity cost of internal fundsand the cost of external funds.In turn,this external finance premium depends on monetary0164-0704/$-see front matter Ó2007Elsevier Inc.All rights reserved.doi:10.1016/j.jmacro.2006.12.001*Corresponding author.Tel.:+1617884467.E-mail address:iacoviel@ (M.Iacoviello).Available online at Journal of Macroeconomics 30(2008)69–9670M.Iacoviello,R.Minetti/Journal of Macroeconomics30(2008)69–96policy.Tight monetary policy not only raises market rates of interest but also the external finance premium,thus discouraging investment and consumption.The explanations of this link are twofold.The balance-sheet view argues that the bridge between monetary policy and the externalfinance premium is represented by thefinancial position of borrowers. Tight money affects borrowers’net worth,either reducing their current cashflows(increas-ing interest on debt burdens)or the value of their pledgeable assets.This feeds back on the externalfinance premium required by external lenders.The bank-lending channel view,on the other hand,focuses on lenders’financial status.Tight money drains reserves and retail deposits on the liability side of banks’balance-sheets.Faced with this deposit drain,banks can react by increasing their funding through managed liabilities(such as certificates of deposit)or shrinking assets(loans and securities).In the presence of an upward sloping supply for managed liabilities,banks mayfind too costly to fully offset the reduction in retail deposits and opt to reduce their assets.The lending view argues that the impact is relatively stronger on loans than on securities.In fact loans and securities are imperfect substitutes because loans are riskier and less liquid.Therefore tight money causes an inward shift of credit supply that especially affects borrowers with limited access to non-bank sources of external funding.The credit channel literature has produced mixed results(see Bernanke and Gertler, 1995;Baum et al.,2003).A strong focus has been placed on identifying contractions in credit aggregates resulting from inward shifts in the demand for funds(fully consistent with the traditional monetary transmission mechanism)from shifts in supply resulting from a credit channel.A second crucial issue of this empirical literature has been to disentangle the bank-lending from the balance-sheet channel(Kashyap et al.,1993).In this sense,much work has been done on the relative impact of monetary policy onfirms with different depen-dence on bank funds,such as small and bigfirms(see Gertler and Gilchrist,1994).1 This paper analyses the credit channel of monetary transmission on the households’demand side focusing on the housing market.Our aim is twofold.One the one hand, we want to assess the presence of such a channel in the housing market,possibly disentan-gling a bank-lending from a balance-sheet channel.On the other hand,we want to relate its presence to the structural characteristics of the housingfinance system,especially its institutional organisation and its efficiency.Clearly,the paper has implications that go beyond the housing market.Housing plays an important role in the business cycle,not only because housing investment is a very volatile component of demand(Bernanke and Gertler,1995),but also because changes in house prices can have important wealth effects on consumption(International Monetary Fund,2000)and investment(Topel and Rosen,1988).There are three main motivations for our paper.First,housing markets feature puzzles in terms of quantity and price dynamics hard to reconcile with the traditional monetary1Other studies use microeconomic data and exploit cross-sectional differences among banks orfirms to disentangle a bank-lending ing data from the Call Reports submitted by insured banks to the Federal Reserve,Kashyap and Stein(2000)find that small and illiquid banks react more strongly to monetary shocks, concluding that these banks cannot protect their loan portfolios by shrinking their stock of securities.Baum et al. (2003)show that the results of Kashyap and Stein(2000)can be explained by a different behaviour of banks in the presence offinancial sector uncertainty rather than by a bank-lending channel.Ashcraft(2006)argues that the result that small banks react to monetary shocks more strongly than big ones could be driven by the fact that large banks fund mainly largefirms.In general,a shortcoming of these studies using microeconomic data is that they do not ascertain whether the bank-lending channel affects aggregate economic activity.M.Iacoviello,R.Minetti/Journal of Macroeconomics30(2008)69–9671 transmission mechanism.For instance,as Bernanke and Gertler(1995)observe,the response of residential investment to innovations in short-term rates is generally sharp and persistent.This feature does not match the dynamic response of long-term rates(those that mainly drive residential expenditure)that traditionally under-react to innovations in short-term rates and revert fast to their initial level.Second,as argued in Section2,there are reasons to expect that the housing market is particularly exposed to the credit channel, hence representing a better environment to capture its presence than the broader economy. Finally,by exploiting the cross-country heterogeneity in housingfinance systems,we can verify whether there exists a‘‘reasonable’’link between institutional context and evidence of a credit channel,thus offering an important robustness check for ourfindings.The paper is organised as follows.Section2analyses the credit channel in the housing market emphasising the role of the structural features of the housingfinance system,espe-cially its institutional framework and its efficiency.Section3presents the empirical meth-odology while Section4presents the results.In Section5,we perform robustness checks. Finally,Section6concludes.2.Institutional backgroundThe credit channel of monetary policy can be expected to be relatively effective in the housing market.Starting from the balance-sheet channel,‘‘housing demand is linked directly to consumer balance-sheets by features like down-payment requirements,up-front transaction costs,like closing costs and‘points’and minimum income-to-interest-payment ratios’’(Bernanke and Gertler,1995,p.45).2The lending channel is also likely to be rel-atively strong both at the source(depository institutions)and at the destination(house-holds).At the source,in countries where mortgage standardisation and securitisation are not widespread,the relative illiquidity of mortgages could matter.If banks want to keep a buffer against liquidity shocks,they might be encouraged to shift from less to more liquid loans or to securities.At the destination a fall in bank mortgages will probably result in actual lack of funds for house purchases whenever mortgage funding from spe-cialist mortgage lenders or from the State is not a sufficient buffer.In fact,households have lessfinancing opportunities thanfirms.2.1.Credit channel and the institutions for housingfinanceThefirst structural aspect that can affect the credit channel in the housing market(espe-cially the bank-lending channel)is the institutional organisation of the housingfinance system.The bank model is characterised by a strong presence of depository institutions (e.g.,banks)in mortgage provision.The bank model is the strongest candidate for a bank-lending channel:the dependence of borrowers on depository institutions is generally high;moreover,the amount of loanable funds is likely to depend strongly on monetary policy,because of the reliance of banks on reservable retail deposits.In particular,as stressed by Guiso et al.(1999),banking systems with low concentration are more exposed to a bank-lending channel,given the traditional difficulty of small banks in accessing 2In countries where equity withdrawal is not widespread,we can also expect that homeowners’housing demand is strongly tied to their housing wealth.72M.Iacoviello,R.Minetti/Journal of Macroeconomics30(2008)69–96wholesale funding(however,see the analysis of Germany in Section4.2.for a qualification of this argument when small banks can form networks).The mortgage bond model is characterised by the strong role of specialist mortgage institutions(mortgage banks)which fund themselves mainly through the wholesale mar-ket.Because of this funding mechanism,the mortgage bond model is less likely to be char-acterised by a bank-lending channel.In fact,monetary policy is likely to have limited credit supply effects if specialist mortgage lenders with easy access to wholesale funding are major players and offer contracts highly substitutable with those of depository institu-tions.Finally,the State model is characterised by a relevant State involvement(directly or indirectly trough public banks).Whether the State model is exposed to a bank-lending channel depends on the substitutability between State mortgages and mortgages from depository institutions.State mortgages are often restricted to social housing or to funding particular types of house purchases;this implies low substitutability and,possibly,the presence of a bank-lending channel.2.2.Credit channel and the efficiency of housingfinanceThe second structural aspect that can affect the credit channel is the efficiency of the housingfinance system.In particular,three aspects are relevant for the presence of a credit channel:(i)depth of the funding system for housingfinance institutions;(ii)pres-ence of a diversified range of mortgage lenders;and(iii)sharing of credit risk.A deep market for wholesale funding can undermine at the source the effectiveness of a bank-lending channel by reducing the dependence of housingfinance institutions on retail deposits.A wide,diversified range of mortgagefinance institutions can weaken at the destination the bank-lending channel by reducing the dependence of households’house purchases on bank credit.The sharing of credit risk,instead,is mainly reflected in the level of minimum income-to-interest-payment ratios and down-payment requirements. These quantitative controls affect the link between borrowers’net worth and the avail-ability of funds from bank and non-bank intermediaries,determining the strength of the balance-sheet channel.The efficiency of a housingfinance system is the result of the historical evolution of the system and of regulatory constraints.After tight money,a regulatory ceiling on deposit rates can prevent banks from offsetting the drain in deposits by increasing the return paid to depositors.Similar arguments apply for restrictions on market funding.In the past,depos-itory institutions in some countries have been prevented from issuing bonds in the open mar-ket,which has implied a strong link between retail deposits and assets.Entry restrictions are again likely to strengthen the bank-lending channel by allowing a small range of lenders alternative to depository institutions.For these reasons,the lending channel is likely to have become weaker after thefinancial liberalisation that occurred in many countries during the 1980s.3It is instead unclear whetherfinancial liberalisation has significantly altered the strength of balance-sheet effects(see Bernanke and Gertler,1995;for a discussion).Table1classifies the housingfinance systems of Finland,Germany,Norway,and the UK according to the institutional framework and the level of efficiency,in the three3The abolition of interest rate ceilings and of portfolio and entry restrictions would have respectively deepened the market for banks’liabilities and reduced the dependence of households on banks for mortgage funding.Table1Structural features of housingfinance systemsCountry Institutional framework EfficiencyFunding market*Mortgage market Risk-bearingFinland Bank model•Strong reliance of banks on retaildeposits and limited use of generalwholesale funding(like bank bonds)•Limited use of mortgage bonds;no useof mortgage backed securities(EMF)•Limited possibility of diversifying away frombanks•State funding limited to particular types of mort-gages/borrowers(BGMR)•LTV ratios around70–80%•Strong role of banks •State funding restricted in scope and beneficiariesUK Bank model Competitive(DL)•Weak role of non-depository mortgage lenders•Integrated and competitive system•No restrictions on contracts(DL)•LTV ratios up to80%(without insurance)and100%with insurance•Strong role of depository institutions(banks and building societies)•Good access of depository institutions to wholesale general funding •Building societies can issue mortgage backed securitiesSources of inefficiency•Limits on building societies unsecured debt•Capital requirements unfavourable to issuing mortgage-backed securities (DL and EMF)Germany Bank and mortgage bondsystem Segmented(DL)•Strongly competitive•Well diversified range of alternative mortgage-lenders•Commercial and savings banks have overcome thefunding segmentation through ownership of thespecialised institutional funding sources(DL)•LTV ratios>80%restrictedonly to repeat buyers•Regulator constrains LTVratio below80%for mortgagebank and Bausparkassenmortgages•Low concentration in banking system •Strong reliance of banks on retail deposits(mortgage backed securities issued at a very small rate)Sources of inefficiency•Deposit rates sluggish below market rates•Banks cannot issue mortgage bonds •Only Bausparkassen can issue contract savings•Limits on insurers favour mortgage bonds(DL and EMF)Norway Bank and state model•Good access of commercial and savingsbanks to wholesale market(bankbonds and other general funding)(EMF)•Strong and increasing competition in market formortgage loans(LWD)LTV ratios around80%Note:DL refers to Diamond and Lea(1992);LWD refers to Lea et al.(1997);BGMR refers to Booth et al.(1994),EMF refers to European Mortgage Federation(2000).M. Iacoviello, R. Minetti / Journal of Macroeconomics 30 (2008) 69–96 7374M.Iacoviello,R.Minetti/Journal of Macroeconomics30(2008)69–96aspects indicated above.4For this purpose,we refer mainly to the works by Diamond and Lea(1992),Booth et al.(1994),Lea et al.(1997)and European Mortgage Federation (2000).As the Table shows,we choose these countries because they display strongly diverse housingfinance systems,hence fulfilling the heterogeneity criterion mentioned among the motivations of the paper.In Appendix A,we provide additional evidence in support of this argument.In Section4,we discuss the institutional features of the countries under exam and draw empirical predictions about the presence of a balance-sheet or bank-lending channel.3.Empirical methodology3.1.OverviewSeveral studies provide a theoretical background for our econometric analysis.Aoki et al.(2004)and Iacoviello(2005)analyse the transmission of monetary policy in a general equilibrium framework in which the strength of borrowers’balance-sheets affects their debt capacity.Bernanke and Blinder(1988)provide a theoretical analysis of the bank-lending channel in an extended IS-LM framework.For each country,we run four VARs in order to assess the presence of a credit channel of monetary transmission and to disentangle a balance-sheet from a bank-lending channel (see Table2).5As explained in the next subsection,we follow Gali(1992),Gerlach and Smets(1995)and Angeloni et al.(2003)in identifying periods of tight money using a com-bination of long-run restrictions(corresponding to the long-run neutrality of monetary shocks)and of the more widely used short-run restrictions,namely delays in the effects of interest rate shocks on GDP and prices.6(1)Thefirst VAR includes:GDP,CPI inflation,short-term interest rate,real houseprices,housing loans by banks and other depository institutions,and total loans by banks and other depository institutions.This VAR is substantially uninformative for detecting a credit channel.A reduction in loans after tight money could reflecta fall in loan demand,thus being consistent with the traditional monetary transmis-sion mechanism.7Yet,the change in housing loans can give a clue on the quantita-tive relevance of a possible credit channel.(2)The second VAR includes:GDP,CPI inflation,short-term interest rate,real houseprices and the Spread between a mortgage interest rate on housing loans and a bench-mark interest rate.A rise in the Spread between the mortgage rate and a safe rate of comparable maturity(e.g.,a government bond yield)could capture the increase in4Given the impossibility of determining,even at a qualitative level,whether the presence of the state affects the effectiveness of the bank-lending channel,state and bank model are bundled together.5The variables used and the identification scheme are summarised in Table2.Appendix B describes data sources and time periods used in the regressions.6See Christiano et al.(1998)and Rotemberg and Woodford(1997)for models that generate long-run monetary neutrality while being consistent with the assumption that contemporaneous output and the price level do not respond to a monetary policy shock.7A reduction in loans is not even a necessary condition for a credit channel:households could try to compensate a reduction in wealth by borrowing more from external sources.Hence,tight money could elicit an increase in loan demand that,if strong enough,could overwhelm any contraction in loan supply resulting from a credit channel.the external finance premium associated with a credit channel.However,the analysis of the Spread encounters three problems.First,the price is only one of the terms of mortgage contracts:for instance,an increase in the default probability of the bor-rower could result in higher required collateral rather than higher mortgage rate.Second,if quantity rationing were pervasive in the credit market,the Spread would fail to capture an increase in non-price rationing of mortgage demand.Finally,in the 1980s some of the analysed countries experienced a progressive shift from long-term,fixed mortgage rates to variable,reviewable and renegotiable rates.The Spread between a variable mortgage rate and a long-term benchmark rate could also reflect a liquidity premium (possibly time-varying)not associated with agency or monitor-ing costs.We tackle this issue by matching the maturity of the benchmark safe rate with the actual length of fixity of the mortgage rate.For this purpose,for all the countries we reviewed the extant studies (Diamond and Lea,1992;Lea et al.,1997;Booth et al.,1994;European Mortgage Federation,2000)and identified the typical duration of mortgage contracts and the nature –fixed or renegotiable –of the mortgage rates.For example,for Finland we found that mortgage loans have typically adjustable rates with adjustment periods of 3–5years and therefore we con-sidered a 3-year benchmarking interest rate.Finally,note that the unavailability of detailed data on mortgage rates charged by different lenders prevents us from using the analysis of the Spread to disentangle a bank-lending from a balance-sheet channel –for instance detecting whether the Spread on bank mortgages increases more than that on mortgages from non-depos-itory institutions.Hence,we generally focus on the spread on mortgages by depos-itory institutions or the spread on an average mortgage rate (Germany)inferring from its behaviour only information on the existence of a broad credit channel (bal-ance-sheet and/or bank-lending).(3)The third VAR includes:GDP,consumer price inflation,short-term nominal interestrate,real house prices,and the ratio of housing loans by all ‘‘non-depository’’financial institutions and the State to all housing loans .We argue that the analysis of the exter-nal finance Mix –that is,the fraction of housing loans by ‘‘non-banks’’–is the best way to disentangle a lending channel.If managed liabilities are not a perfect substi-tute for deposits,a drain in reserves and deposits will lead to a relatively strong Table 2Overview of the econometric specificationsVARVariables (regression)Identification of Identification scheme 1Y ,DP,R ,HP,HL,BL (Loans regression)Monetary policy shock Combinations of short and long-run restrictions;monetary shock does not affect contemporaneously Y and DP and has zero impact on all the variables in the long run 2Y ,DP,R ,HP,SP (Spread regression)3Y ,DP,R ,HP,MIX (Mix regression)4Y ,DP,MIX,HP Mix shock Recursive.The MIX shock does not affectcontemporaneously Y and DPVariables :Y (real GDP),DP (consumer price inflation),R (money market rate),HP (real house prices),HL (real housing loans from banks),BL (real total loans from banks),SP (mortgage rate,RM,minus benchmark safe rate,RL),MIX (ratio of housing loans from ‘‘non-banks’’to total housing loans).M.Iacoviello,R.Minetti /Journal of Macroeconomics 30(2008)69–967576M.Iacoviello,R.Minetti/Journal of Macroeconomics30(2008)69–96 contraction in bank mortgages and to an increase in the Mix.The Mix will plausibly increase also as households try to compensate the reduction in bank mortgages with mortgages by other institutions.However,in the presence of imperfect substitutabil-ity between bank and other mortgages,this compensation is only partial and the reduction in bank supply affects housing demand.Therefore the analysis of the Mix requires two steps:to analyse whether monetary policy affects the Mix(VAR3)and if so to analyse whether changes in the Mix affect the housing market(VAR4).(4)If monetary policy affects the Mix,we run a fourth VAR with GDP,CPI inflation,externalfinance Mix and real house prices.We look at the effects of an exogenous Mix increase,what we call externalfinance shock.If the Mix has any explanatory power in a house price reduced form equation that already includes income and inflation as controls,its incremental explanatory power supports the existence of an independent bank-lending channel.8The analysis of thefinance Mix wasfirst proposed by Kashyap et al.(1993)(who ana-lysed the response of the Mix between bank loans and commercial paper to innova-tions in the Fed Funds rate)and has been used in the analysis of a lending channel in the automobile market(Ludvigson,1998).As stressed by Oliner and Rudebusch (1996)the Mix does not completely solve the endogeneity problem because a change of the Mix could capture a change in the quality composition of borrowers.Suppose that banks specialize in funding households with a weakfinancial position.An increase of the Mix after tight money could reflect a‘‘flight to quality’’from risky households to households with a strongerfinancial position.In this case,the increase of the Mix would be the result of the working of a households’balance-sheet channel rather than a bank-lending channel.Therefore,whenever the combined evidence from the third and the fourth VARs hints at the presence of a bank-lending channel,we will carry out a robustness analysis to rule out this alternative explanation.In particular, in order to assess whether depository institutions fund riskier households than non-depository ones,we will use evidence on the risk of mortgages,as proxied,for exam-ple,by the default ratio of mortgages,by the number of repossessions,or by the amount of loan loss provisions made by mortgagefinanciers.Note that we can also exclude that changes in the Mix reflect the heterogeneous demand pattern of different cohorts of households.In fact,for all the countries the extant studies(Diamond and Lea,1992;Lea et al.,1997;Booth et al.,1994;European Mortgage Federation,2000) indicate that depository and non-depository institutions have no systematic tendency tofinance groups of households with different structural characteristics.In all the specifications we use house prices as a cyclical indicator in the housing market. In principle,another way to test for the presence of a credit channel in the housing market would be to analyse the behaviour of housing investment.There are reasons to believe that8Following Ludvigson(1998),we do not include the interest rate in this equation.If the interest rate indicates monetary policy,then including it would mean that changes in the Mix marginally reflect non-monetary effects.If the bank-lending channel is operative,then monetary policy should affect the Mix,and the Mix should affect house prices,but there should be no reason to expect that the Mix affects house prices when some variable that captures monetary policy stance is included in the VAR.Therefore the innovation in the Mix captures both monetary policy shocks and non-policy induced shocks,like,for instance,credit crunch episodes.M.Iacoviello,R.Minetti/Journal of Macroeconomics30(2008)69–9677 house prices are more suitable to our analysis.First,since in the housing market quantities adjust sluggishly,prices could be more informative in capturing changes in housing demand in the short run.Second,house prices can play a crucial role in the transmission of monetary policy through credit supply shifts.On the one hand,house prices affect bor-rowers’wealth and credit capacity(for theoretical models see Aoki et al.,2004and Iaco-viello,2005).On the other hand,they influence lenders’net worth and,potentially,the amount of credit they extend.Specifying the VARs using quantities rather than prices would omit these interactions.3.2.Identifying the shocksWe identify the monetary shocks in VARs1–3using a combination of short and long-run restrictions.In particular,we adopt the common trends approach as developed by King et al.(1991).The approach uses the cointegration properties of the data to achieve identification using both short and long-run restrictions.When a group of variables in a VAR is cointegrated,a useful specification for their dynamics is a vector–error–correction model(VECM).A VECM places reduced rank restrictions on the matrix of long-run impacts from a VAR.KPSW distinguish between structural shocks with permanent effects on the level of the variables from shocks with only temporary effects.The permanent shocks are the sources of the so-called common stochastic trends among the series.The number of these shocks equals the number of variables in the system less the cointegrating relationships between them.The remaining transitory shocks equal the number of cointe-grating relationships(intuitively,a cointegrating vector identifies a linear combination of the variables that is stationary,so that shocks to it do not eliminate the steady state in such a system).The VAR model needs not to be fully identified:partial identification of either the tran-sitory or permanent shocks is possible.Furthermore,one can separate the transitory shocks by adding some untested restriction on their impact effect.We identify the mone-tary shock as the transitory innovation that does not affect contemporaneously GDP and CPI inflation,but that can have impact effects on all the other variables.In addition,the shock has also to satisfy long-run neutrality,both by having zero long-run effect on GDP (and the other real variables)and by keeping relative prices of houses and consumer goods constant.9Therefore,GDP,inflation,real house prices and all other variables will revert back to their initial steady state once the effects of the shock die out.We run augmented Dickey–Fuller unit root tests on the levels of the series.10The tests suggest that the variables are integrated of order1.The results from the cointegration tests are mixed,but tend to indicate,in thefirst three VARs,at least three cointegrating vectors: one vector could correspond to a long-run stationary real interest rate(cointegration between nominal interest rate and inflation),another to a long-run housing supply curve (cointegration between house prices and GDP).The third cointegrating vector could hint, depending on the VAR,at a stable long-run ratio between housing loans and total loans (VAR1),stationary spread(VAR2),stationary Mix(VAR3).For this reason in our 9The monetary shock will not affect the relative prices of the two goods in the long run,but the permanent shocks in the VAR(which we do not focus upon here)in general will.However,it can affect the CPI and house price index(by the same amount),since we impose the zero long-run restriction on CPI changes,not on levels. 10More details on this and on the cointegration tests are available from the authors upon request.。
建筑工程及给排水专业中英文对照翻译

建筑工程及给排水专业中英文对照翻译Laminar and Turbulent FlowObservation shows that two entirely different types of fluid flow exist. This was demon- strated by Osborne Reynolds in 1883 through an experiment in which water was discharged from a tank through a glass tube. The rate of flow could be controlled by a valve at the outlet, and a fine filament of dye injected at the entrance to the tube. At low velocities, it was found that the dye filament remained intact throughout the length of the tube, showing that the particles of water moved in parallel lines. This type of flow is known as laminar, viscous or streamline, the particles of fluid moving in an orderly manner and retaining the same relative positions in successive cross- sections.As the velocity in the tube was increased by opening the outlet valve, a point was eventually reached at which the dye filament at first began to oscillate and then broke up so that the colour was diffused over the whole cross-section, showing that the particles of fluid no longer moved in an orderly manner but occupied different relative position in successive cross-sections. This type of flow is known as turbulent and is characterized by continuous small fluctuations in the magnitude and direction of the velocity of the fluid particles, which are accompanied by corresponding small fluctuations of pressure.When the motion of a fluid particle in a stream is disturbed, its inertiawill tend to carry it on in the new direction, but the viscous forces due to the surrounding fluid will tend to make it conform to the motion of the rest of the stream. In viscous flow, the viscous shear stresses are sufficient to eliminate the effects of anydeviation, but in turbulent flow they are inadequate. The criterion which determines whether flow will be viscous of turbulent is therefore the ratio of the inertial force to the viscous force acting on the particle. The ratioμρvl const force Viscous force Inertial ?= Thus, the criter ion which determines whether flow is viscous or turbulent is the quantity ρvl /μ, known as the Reynolds number. It is a ratio of forces and, therefore, a pure number and may also be written as ul /v where is the kinematic viscosity (v=μ/ρ).Experiments carried out with a number of different fluids in straight pipes of different diameters have established that if the Reynolds number is calculated by making 1 equal to the pipe diameter and using the mean velocity v , then, below a critical value of ρvd /μ = 2000, flow will normally be laminar (viscous), any tendency to turbulence being damped out by viscous friction. This value of the Reynolds number applies only to flow in pipes, but critical values of the Reynolds number can be established for other types of flow, choosing a suitable characteristic length such as the chord of an aerofoil in place of the pipe diameter. For a given fluid flowing in a pipe of a given diameter, there will be a critical velocity of flow corresponding to the critical value of the Reynolds number, below which flow will be viscous.In pipes, at values of the Reynolds number > 2000, flow will not necessarily be turbulent. Laminar flow has been maintained up to Re = 50,000, but conditions are unstable and any disturbance will cause reversion to normal turbulent flow. In straight pipes of constant diameter, flow can be assumed to be turbulent if the Reynolds number exceeds 4000.Pipe NetworksAn extension of compound pipes in parallel is a case frequently encountered in municipal distribution system, in which the pipes are interconnected so that the flow to a given outlet may come by several different paths. Indeed, it is frequently impossible to tell by inspection which way the flow travels. Nevertheless, the flow in any networks, however complicated, must satisfy the basic relations of continuity and energy as follows:1. The flow into any junction must equal the flow out of it.2. The flow in each pipe must satisfy the pipe-friction laws for flow in a single pipe.3. The algebraic sum of the head losses around any closed circuit must be zero.Pipe networks are generally too complicated to solve analytically, as was possible in the simpler cases of parallel pipes.A practical procedure is the method of successive approximations, introduced by Cross. It consists of the following elements, in order:1. By careful inspection assume the most reasonable distribution of flows that satisfies condition 1.2. Write condition 2 for each pipe in the formh L = KQ n(7.5) where K is a constant for each pipe. For example, the standard pipe-friction equation would yield K= 1/C2and n= 2 for constant f. Minor losses within any circuit may be included, but minor losses at the junction points are neglected.3. To investigate condition 3, compute the algebraic sum of the head losses around each elementary circuit. ∑h L= ∑KQ n. Consider losses from clockwise flows as positive, counterclockwise negative. Only by good luck will these add tozero on the first trial.4. Adjust the flow in each circuit by a correction, ΔQ , to balance the head in that circuit and give ∑KQ n = 0. The heart of this method lies in the determination of ΔQ . For any pipe we may writeQ = Q 0 +ΔQwhere Q is the correct discharge and Q 0 is the assumed discharge. Then, for a circuit100/Q h n h Q Kn Q K Q L L n n ∑∑∑∑?-=-=- (7.6) It must be emphasized again that the numerator of Eq. (7.6) is to be summed algebraically, with due account of sign, while the denominator is summed arithmetically. The negative sign in Eq.(7.6) indicates that when there is an excess of head loss around a loop in the clockwise direction, the ΔQ must be subtracted from clockwise Q 0’s and added to counterclockwise ones. The reverse is true if there is a deficiency of head loss around a loop in the clockwise direction.5. After each circuit is given a first correction, the losses will still not balance because of the interaction of one circuit upon another (pipes which are common to two circuits receive two independent corrections, one for each circuit). The procedure is repeated, arriving at a second correction, and so on, until the corrections become negligible.Either form of Eq. (7.6) may be used to find ΔQ . As values of K appear in both numerator and denominator of the first form, values proportional to the actual K may be used to find the distribution. Thesecond form will be found most convenient for use with pipe-friction diagrams for water pipes.An attractive feature of the approximation method is thaterrors in computation have the same effect as errors in judgment and will eventually be corrected by the process.The pipe-networks problem lends itself well to solution by use of a digital computer. Programming takes time and care, but once set up, there is great flexibility and many man-hours of labor can be saved.The Future of Plastic Pipe at Higher PressuresParticipants in an AGA meeting panel on plastic pipe discussed the possibility of using polyethylene gas pipe at higher pressures. Topics included the design equation, including work being done by ISO on an updated version, and the evaluation of rapid crack propagation in a PE pipe resin. This is of critical importance because as pipe is used at higher pressure and in larger diameters, the possibility of RCP increases.Se veral years ago, AGA’s Plastic Pipe Design Equation Task Group reviewed the design equation to determine if higher operating pressurescould be used in plastic piping systems. Members felt the performance of our pipe resins was not truly reflected by the design equation. It was generally accepted that the long-term properties of modern resins far surpassed those of older resins. Major considerations were new equations being developed and selection of an appropriate design factor.Improved pipe performanceMany utilities monitored the performance of plastic pipe resins. Here are some of the long-term tests used and the kinds of performance change they have shown for typical gas pipe resins.Elevated temperature burst testThey used tests like the Elevated Temperature Burst T est, inwhich the long-term performance of the pipe is checked by measuring the time required for formation of brittle cracks in the pipe wall under high temperatures and pressures (often 80 degrees C and around 4 to 5-MPa hoop stress). At Consumers Gas we expected early resins to last at least 170 hrs. at 80 degrees C and a hoop stress of 3 MPa. Extrapolation showed that resins passing these limits should have a life expectancy of more than 50 yrs. Quality control testing on shipments of pipe made fromthese resins sometimes resulted in product rejection for failure to meet this criterion.At the same temperature, today’s resins last thousands of hours at hoop stresses of 4.6 MPa. Tests performed on pipe made from new resins have been terminated with no failure at times exceeding 5,700 hrs. These results were performed on samples that were squeezed off before testing. Such stresses were never applied in early testing. When extrapolated to operating conditions, this difference in test performance is equivalent to an increase in lifetime of hundreds (and in some cases even thousands) of years.Environmental stress crack resistance testSome companies also used the Environmental Stress Crack Resistance test which measured brittle crack formation in pipes but which used stress cracking agents to shorten test times.This test has also shown dramatic improvement in resistance brittle failure. For example, at my company a test time of more than 20 hrs. at 50 degrees C was required on our early resins. Today’s resins last well above 1,000 hrs. with no failure.Notch testsNotch tests, which are quickly run, measure brittle crack formation in notched pipe or molded coupon samples. This isimportant for the newer resins since some other tests to failure can take very long times. Notch test results show that while early resins lasted for test times ranging between 1,000 to 10,000 min., current resins usually last for longer than 200,000 min.All of our tests demonstrated the same thing. Newer resins are much more resistant to the growth of brittle crack than their predecessors. Since brittle failure is considered to be the ultimate failure mechanism in polyethylene pipes, we know that new materials will last much longer than the old. This is especially reassuring to the gas industry since many of these older resins have performed very well in the field for the past 25 yrs. with minimal detectable change in properties.While the tests showed greatly improved performance, the equation used to establish the pressure rating of the pipe is still identical to the original except for a change in 1978 to a single design factor for all class locations.To many it seemed that the methods used to pressure rate our pipe were now unduly conservative and that a new design equation was needed. At this time we became aware of a new equation being balloted atISO. The methodology being used seemed to be a more technically correct method of analyzing the data and offered a number of advantages.Thermal Expansion of Piping and Its CompensationA very relevant consideration requiring careful attention is the fact that with temperature of a length of pipe raised or lowered, there is a corresponding increase or decrease in its length and cross-sectional area because of the inherent coefficient of thermal expansion for the particular pipe material. The coefficient of expansion for carbon steel is 0.012 mm/m?Cand for copper 0.0168mm/m?C. Respective module of elasticity a re for steel E = 207×1.06kN/m2 and for copper E = 103×106 kN/m2. As an example, assuming a base temperature for water conducting piping at 0?C, a steel pipe of any diameter if heated to 120?C would experience a linear extension of 1.4 mm and a similarly if heated to copper pipe would extend by 2.016 mm for each meter of their respective lengths. The unit axial force in the steel pipe however would be 39% greater than for copper. The change in pipe diameter is of no practical consequence to linear extension but the axial forces created by expansion or contractionare con- siderable and capable of fracturing any fitments which may tend to impose a restraint;the magnitude of such forces is related to pipe size. As an example,in straight pipes of same length but different diameters, rigidly held at both ends and with temperature raised by say 100?C, total magnitude of linear forces against fixed points would be near enough proportionate to the respective diameters.It is therefore essential that design of any piping layout makes adequate com- pensatory provision for such thermal influence by relieving the system of linear stresses which would be directly related to length of pipework between fixed points and the range of operational temperatures.Compensation for forces due to thermal expansion. The ideal pipework as far as expansion is concerned, is one where maximum free movement with the minimum of restraint is possible. Hence the simplest and most economical way to ensure com- pensation and relief of forces is to take advantage of changes in direction, or where this is not part of the layout and long straight runs are involved it may be feasible to introducedeliberate dog-leg offset changes in direction at suitable intervals.As an alternative,at calculated intervals in a straight pipe run specially designed expansion loops or “U” bends should be inserted. Depending upon design and space availability, expansion bends within a straight pipe run can feature the so called double offset “U” band or thehorseshoe typ e or “lyre” loop.The last named are seldom used for large heating networks; they can be supplied in manufacturers’ standard units but require elaborate constructional works for underground installation.Anchored thermal movement in underground piping would normally be absorbed by three basic types of expansion bends and these include the “U”bend, the “L”bend and the “Z”bend.In cases of 90 changes indirection the “L” and “Z”bends are used.Principles involved in the design of provision for expansion between anchor points are virtually the same for all three types of compensator. The offset “U” bend is usually made up from four 90° elbows and straight pipes; it permits good thermal displacement and imposes smaller anchor loads than the other type of loop. This shape of expansion bend is the standardised pattern for prefabricated pipe-in-pipe systems.All thermal compensators are installed to accommodate an equal amount of expansion or contraction; therefore to obtain full advantage of the length of thermal movement it is necessary to extend the unit during installation thus opening up the loop by an extent roughly equal the half the overall calculated thermal movement.This is done by “cold-pull” or other mechanical means. The total amount of extension between two fixed pointshas to be calculated on basis of ambient temperature prevailing and operational design temperatures so that distribution of stresses and reactions at lower and higher temperatures are controlledwithin permissible limits. Pre-stressing does not affect the fatigue life of piping therefore it does not feature in calculation of pipework stresses .There are numerous specialist publication dealing with design and stressing calculations for piping and especially for proprietary piping and expansion units; comprehensive experience back design data as well as charts and graphs may be obtained in manufacturers’publications, offering solutions for every kind of pipe stressing problem.As an alternative to above mentioned methods of compensation for thermal expansion and useable in places where space is restricted, is the more expensive bellows or telescopic type mechanical compensator. There are many proprietary types and models on the market and the following types of compensators are generally used.The bellows type expansion unit in form of an axial compensator provides for expansion movement in a pipe along its axis; motion in this bellows is due to tension or compression only.There are also articulated bellows units restrained which combine angular and lateral movement; they consist of double compensator units restrained by straps pinned over the center of each bellowsor double tied thus being restrained over its length.Such compensators are suitable for accommodating very pipeline expansion and also for combinations of angular and lateral movements.层流与紊流有两种完全不同的流体流动形式存在,这一点在1883年就由Osborne Reynolds 用试验演示证明。
cast_iron

Materials Science and Engineering A413–414(2005)322–333Solidification and modeling of cast iron—A shorthistory of the defining momentsDoru M.StefanescuThe Ohio State University,Columbus,Ohio,USAReceived in revised form2August2005AbstractHuman civilization has evolved from the Stone Age,through the Bronze Age to reach the Iron Age around1500B.C.There are many to contend that today we are living in the age of engineered materials,yet the importance of iron castings continues to support the thesis that we are still in the Iron Age.Cast iron,thefirst man-made composite,is at least2500years old.It remains the most important casting material,with over70%of the total world tonnage.The main reasons for cast iron longevity are its wide range of mechanical and physical properties coupled with its competitive price.This paper is a review of the fundamentals of solidification of iron-base materials and of the mathematical models that describe them,starting with the seminal paper by Oldfield,thefirst to attempt modeling of microstructure evolution during solidification,to the prediction of mechanical properties.The latest analytical models for irregular eutectics such as cast iron as well as numerical models with microstructure output are discussed. However,since the space does not permit an extensive description of the multitude of models available today,the emphasis is on model performance rather than the mathematics of model formulation.Also,because of space constrains,white iron and defect occurrence will not be covered.©2005Elsevier B.V.All rights reserved.Keywords:Cast iron;Microstructure;Mechanical properties;Solidification;Analytical and computational modelling of solidification1.IntroductionWhile the primeval potter was thefirst to modify the state of matter,he left little if any trace in the mythological and archeo-logical record.Thus,according to Eliade[1],the starting point in understanding the behavior of primitive societies in relation to matter must be the relationship of primitive man to mineral substances,in particular that of the iron-worker.Primitive people worked with meteoric iron long before learning to extract iron from iron ore.The Sumerian word AN.BAR,the oldest word designating iron,is made up of the pictogram‘sky’and‘fire’.Similar terminology is found in Egypt ‘metal from heaven’and with the Hittites‘black iron from sky’. Yet metallurgy did not establish itself until the secret of smelt-ing magnetite or hematite was discovered,followed by the art of hardening the metal through quenching.The beginning of this metallurgy on an industrial scale can be situated at1200–1000 B.C.in the mountains of Armenia[1].In the European tradition it was St.P´e ran,the patron saint of mines,who invented smelting of metals.E-mail address:doru@.Metal workers were so important in early history that some-times they raised to the level of royalty.According to certain sources,Genghis Khan was a simple smith before acceding to power.In ancient Java,the genealogy of metallurgists,like that of princes,goes back to god.And,in most ancient cultures,the metallurgist was believed to have a direct link to the divine,if not of divine origin himself.Thus,it is with a certain reverence that I approached the task of reviewing the long history of thefirst man-made compos-ite,cast iron,from its archeologically documented beginning some2500years ago,to the age of virtual cast iron,where its structure and properties are the outcome of computational exercises.2.A short history of an old materialThe earliest dated iron casting is a lion produced in China in 502B.C.Introduction of cast iron in Europe did not occur until about1200–1450A.D.Remarkable European cast iron artifacts include the sewer pipes in Versailles(1681)and the iron bridge near Coalbrookdale in England(1779).Before the invention of microscope in1860,only two types of iron were known,based0921-5093/$–see front matter©2005Elsevier B.V.All rights reserved. doi:10.1016/j.msea.2005.08.180D.M.Stefanescu/Materials Science and Engineering A413–414(2005)322–333323Fig.1.Correlation between the Mg residual and graphite shape[3].on the appearance of their fracture:white and gray.Our knowl-edge of cast iron was extremely limited for a long time.In1896, thefirst paper on cast iron to be published in the newly created Journal of the American Foundrymen’s Association[2]stated the following:“The physical properties of cast iron are shrink-age,strength,deflection,set,chill,grain and hardness.Tensile test should not be used for cast iron,but should be confined to steel and other ductile pression test should be made,but is generally neglected,from the common erro-neous impression that the resistance of a small cube or cylinder, which is enormous,is always in excess of loads which can be applied”.It took another50years for ductile iron to be discov-ered(1938–1940independently by Adey,Millis and Morrogh). The major discoveries of cast iron ended in the1970s with the recognition of compacted graphite(CG)iron as a grade in its own merit.With that,the dependency of graphite shape on mag-nesium or cerium content was fully understood(see for example Fig.1[3]).Today,cast iron remains the most important casting material accounting for about70%of the total world casting tonnage. The main reasons for cast iron longevity are the wide range of mechanical and physical properties associated with its compet-itive price.3.Critical discoveries in understanding thesolidification of cast ironBefore society accepts to continue sinking resources in the study of solidification rather than of global warming it is important to understand why solidification is important.Some of the quick answers include:solidification processing allows microstructure engineering;solidification determines casting soundness;heat treatment is scarcely used for cast iron;most solidification defects cannot be corrected through heat treat-ment.In summary,solidification is the main driver of casting properties.A good resource for the early discoveries that propelled cast iron in its present position is Piwowarsky’s famous monograph published in1942[4].According to this source,by1892Ledebur recognized the role of silicon on the solidification structure of cast iron,proposing thefirst equation correlating the carbon and silicon content:(C+Si)/1.5=4.2–4.4.Then,in1924,Maurer designed his famous structural dia-gram that established direct correlation between the C and Si content of the iron and its as-cast microstructure.Thefirst attempt to understand the solidification microstructure was apparently that of Roll,who in1934outlined the“primary crys-tals”using Baumann etching to show the position of Mn sulfides (Fig.2).3.1.Nucleation and undercoolingSolidification starts with nucleation,which is strongly affected by undercooling.Extensive work by Patterson and Ammann[5]demonstrated that the effect of undercooling on the eutectic cell count depends on the way the undercooling occurs.If undercooling is the result of increased cooling rate, then the number of cells increases(Fig.3).The opposite is trueif Fig.2.Roll’s schematic representation of position of MnS around grains and dendrites(after[4]).324 D.M.Stefanescu /Materials Science and Engineering A 413–414(2005)322–333Fig.3.The effect of undercooling on the eutectic cell count [5].undercooling is a consequence of the depletion of nuclei through superheating.While the analysis of solidification events was based for many years on indirect observations,it was not until 1961when through quenching from semisolid state,Oldfield [6]was able to quantify the nucleation and growth of eutectic grains.These experiments are the beginning of the effort of building the exten-sive database required for solidification modeling of cast iron.Understanding nucleation was and continues to be the sub-ject of extensive studies.Attempting to explain the efficiency of metals such as Ca,Ba and Sr in the inoculation of lamel-lar graphite (LG)iron,Lux [7]suggested in 1968that,when introduced in molten iron,these metals form saltlike carbides that develop epitaxial planes with the graphite,and thus consti-tute nuclei for graphite (Fig.4).Later,Weis [8]assumed that nucleation of LG occurs on SiO 2oxides formed by heteroge-neous catalysis of CaO,Al 2O 3,and oxides of other alkaline metals.A similar theory of double-layered nucleation was proposed at the same time for spheroidal graphite (SG).Using the results of SEM analysis,Jacobs et al.[9]contended that SG nucleates on duplex sulfide-oxide inclusions (1m dia.);the core is made of Ca Mg or Ca Mg Sr sulfides,while the outer shell is made of complex Mg Al Si Ti oxides.This idea was further devel-oped by Skaland et al.[10].They argued that SG nuclei are sulfides (MgS,CaS)covered by Mg silicates (e.g.,MgO ·SiO 2)or oxides that have low potency (large disregistry).After inocu-lation with FeSi that contains another metal (Me)such as Al,Ca,Sr or Ba,hexagonal silicates (MeO ·SiO 2or MeO ·Al 2O 3·2SiO 2)form at the surface of the oxides,with coherent/semicoherent low energy interfaces between substrate and graphite (Fig.5).Since graphite is in most cases an eutectic phase,a clear possibility of its nucleation on the primary austenite exist.Rejec-tion of C and Si by the solidifying austenite imposes a high solutal undercooling in the proximity of the γphase,favor-able to graphite nucleation.Yet,little is known on this subject,mostly because of the difficulties to outline the primary austenite through metallographic techniques.3.2.Crystallization of graphite from the liquidThe debate on the preferred growth direction of graphite seems to have been initiated by Herfurth [11]who in 1965postu-lated that the change from lamellar to spheroidal graphite occurs because of the change in the ratio between growth on the [1010]face (A direction)and growth on the [0001]face of the graphite prism (C direction).Experimental evidence for growth on both of these directions was provided by Lux et al.[12]in 1974(Fig.6).Assuming that the preferred growth direction for the SG is the A direction,Sadocha and Gruzleski [13]postulated the circumfer-ential growth of graphite spheroids,which seems to be the mostcommon.Fig.4.Growth of graphite on the epitaxial planes of saltlike carbides [7].D.M.Stefanescu/Materials Science and Engineering A413–414(2005)322–333325Fig.5.Low potency(left)and high potency(right)nuclei for SG iron[10].Today it is generally accepted that the spheroidal shape is the natural growth habit of graphite in liquid iron.LG is a modi-fied shape,the modifiers being sulfur and oxygen.They affect graphite growth through some surface adsorption mechanism [14].3.3.Solidification of the iron–graphite eutecticWhile considerable effort was deployed to understand the solidification of the stable(Fe–graphite)and metastable (Fe Fe3C)eutectics,because of space restrictions only the for-mer will be discussed in some detail.One of the most important concepts in understanding the vari-ety of microstructures that can occur during the solidification of cast iron is that of the asymmetric coupled phase diagram, which describes non-equilibrium solidification.Such diagrams explain for example the presence of primary austenite dendrites in the microstructure of hypereutectic irons.The theoretical construction of these types of diagrams for cast iron wasfirst demonstrated by Lux et al.[15]in1975,and then documented experimentally by Jones and Kurz[16]in1980.They succeeded in constructing such diagrams for pure Fe C alloys solidifying white or withflake graphite.For a more detailed discussion on this subject the reader could use reference[14].In1949,which is very early after the discovery of SG iron, Patterson and Scheil used experimentalfindings to state that SG forms in the melt and is later encapsulated in aγshell.This was later confirmed by Sch¨o bel[17]through quenching and centrifuging experiments.In1953,Scheil and H¨u tter[18]mea-sured the radii of the graphite and theγshell and concluded that they develop such as to conserve a constant ratio(rγ/r Gr=2.3) throughout the microstructure.This ratio was confirmed theo-retically by Wetterfall et al.[19]who preformed calculations for the steady-state diffusion-controlled growth of graphite.Many other theories that did not gain wide acceptance in the science community were advanced over the years.Anexam-Fig.6.Experimental evidence of graphite growth along the A or C direction and schematic representation of possible mechanisms.(a)Growth of graphite along the A direction and(b)growth of graphite along the C direction[12].326 D.M.Stefanescu /Materials Science and Engineering A 413–414(2005)322–333Fig.7.Influence of composition and solidification velocity on the morphology of the S/L interface.(a)Schematic representation [23,26]and (b)DS experiments [27].ple is the gas bubble theory postulated by Karsay [20],which infers that a precipitating gas phase provides the phase boundary required for graphite crystallization.Austenite precipitates then at the graphite–gas interface.Directional solidification (DS)experiments generated signifi-cant information on the mechanism of microstructure keland and Hogan [21]produced the first composition versus thermal gradient/solidification velocity ratio (C –G /V )diagram for FG iron in 1968.The compositional variable was sulfur.It took another 18years before the diagram was expanded to include SG and compacted graphite (CG)iron (%Mg–V )[22]and then extended to incorporate white iron (%Ce–G /V )[23].Measurements of the average eutectic lamellar spacing in LG iron [21,24]demonstrated that it does not behave like a regular eutectic,since the average spacing was about an order of magnitude higher than predicted by Jackson–Hunt for regular eutectics.Using the knowledge accumulated from DS experiments per-formed by others as well as by themselves,and some ideas from the earlier work of Rickert and Engler [25],Stefanescu and collaborators [23,26]summarized the influence of the amount of solute on the morphology of the solid–liquid (S/L)inter-face of graphitic iron as shown in Fig.7a.This concept was partially validated through DS experiments by Li et al.[27](Fig.7b).Some interesting analogies were made by comparing images obtained from SEM analysis of microshrinkage in SG iron [28]with results of phase-filed modeling of dendrites.The austen-ite growing into the liquid will tend to grow anisotropically in its preferred crystallographic orientation (Fig.8a).However,restrictions imposed by isotropic diffusion growth will impose an increased isotropy on the system.Consequently,the den-dritic shape of the austenite will be altered and the γ-liquid interface will exhibit only small protuberances instead of clear secondary arms (Fig.8c)[14].This interpretation is consis-tent with the results of phase-filed modeling [29]shown in Fig.8b and d.Alternatively,to understand the interaction between austenite dendrites and graphite nodules in the early stages of solidifica-tion,the concepts developed for particle engulfment and pushing may be used.For a description of this approach Refs.[14]and [28]are suggested.Oldfield’s name surfaces again when attempting to under-stand the influence of a third element on the stable (T st )and metastable (T met )temperatures.Indeed,using cooling curve analysis,Oldfield [6]demonstrated that Si increases the T st −T met interval,while chromium decreases it.This informa-tion was used to correlate microstructure to the beginning and end of the eutectic solidification.It became a truism [30]that if both the beginning and end of solidification occur above the metastable temperature,the solidification microstructure is gray.If both temperatures are under T met ,the iron is white,while if only one temperature is lower than T met the iron is mottled.3.4.The gray-to-white structural transition (GWT)The first rationalization of the GWT was based on the influ-ence of cooling rate on the stable and metastable eutectic tem-peratures.As shown in Fig.9,as the cooling rate increases,both temperatures decrease.However,since the slope of T st is steeper than that of T met ,the two intersect at a cooling rate which is the critical cooling rate (d T /d t )cr ,for the GWT.At cooling rates smaller than (d T /d t )cr the iron solidifies gray,while at higher cooling rates it solidifies white.Magnin and Kurz [31]further developed this concept by using solidification velocity rather than cooling rate as a variable,and considering the influence of nucleation undercooling for both the stable and metastable eutectics.Thus,a critical velocity for the white-to-gray transition and one for the gray-to-white transition were defined.D.M.Stefanescu /Materials Science and Engineering A 413–414(2005)322–333327Fig.8.SEM images of dendrites and SG iron in microshrinkage regions (left)and phase-filed calculated images of dendrites (right).(a)Primary austenite dendrite [28],(b)simulated high anisotropy [29],(c)eutectic austenite dendrite and SG aggregate [28]and (d)simulated no anisotropy [29].Fig.9.Critical cooling rate for the GTW transition.3.5.Dimensional variation during solidificationSoon after the discovery of SG iron researchers noted that its dimensional variation during solidification is quite different than that of LG iron.In 1954Gittus [32]measured the expansion of SG iron over the eutectic interval and showed that it was five times higher than that of LG iron.Hillert [33]explained this surprising finding by noting that most graphite forms when surrounded by austenite.Graphite expansion occurring during solidification imposes considerable plastic deformation on the austenite.Yet,specific volume calculations suggest that graphite expansion should be the same for FG and SG irons.Some 20years later,using a different experimental device that included a riser feeding the test casting,Margerie [34]found thatLG iron expands about 0.2–0.5%during eutectic solidification,while no significant expansion occurs in SG iron because of mass expulsion into the riser.This expulsion occurs because SG iron undergoes mushy solidification while LG iron solidifies with a skin (Fig.10).3.6.Melt controlThe progress in the understanding of the correlation between the solidification microstructure and temperature undercooling generated interest in the possibility of using cooling curves (CC)to predict not only the chemical composition but even the microstructure.After initial work by Loper et al.[35],Naro and Wallace [36]showed that eutectic undercooling continuously decreases as the cerium addition to the iron increases,and that this is directly related to the change in microstructure from LG,to SG,to white.Then,it was found that compacted graphite (CG)iron solidifies with larger recalescence than either LG or SG iron [37,38].This proved to be a significant discovery since it is currently used for process control in at least two patented technologies for the manufacturing of CG iron.In 1972Rabus and Polten [39]used the first derivative of the CC,which is the cooling rate,to attempt to precisely identify the points of interest on the CC such as beginning and start of solidification.Other researchers followed [40]and attempted to use the CC and its derivative to predict microstructure details such as 80%nodularity [41]and then the latent heat of fusion [42].This proved to be an elusive goal,in spite attempts to improve the standard Newtonian analysis [43]or to use Fourier analysis [44].Today CC analysis is a standard control tool in iron foundries for evaluating the chemical composition as well as graphite328 D.M.Stefanescu /Materials Science and Engineering A 413–414(2005)322–333Fig.10.Schematic illustration of solidification mechanisms of continuously cooled lamellar and spheroidal graphite cast iron [14].shape,inoculation efficiency,shrinkage propensity and others.The ATAS equipment developed by NovaCast has the added fea-ture that it can store information developed in a specific foundry and incorporate it into an expert system.It outputs 20of the most important thermal parameters of the CC.As both the CC and the dimensional variation are strong indicators of the phase transformation occurring in the solidi-fying alloy,Stefanescu et al.[45]combined the two methods by adding quartz rods to a standard sand cup for CC,and using a displacement transducer to simultaneously measure temperature and dimensional variation (Fig.11).The method proved to be very efficient in the characterization of graphite shape and was patented as part of a technology for CG iron production with in-process operative control.A similar approach was promoted later by Yang and Aalhainen [46]that even used the derivative of the dimensional variation curve to predict the amount of car-bides.Fig.11.Results of measurement of temperature,cooling rate,and dimensional variation for a CG iron.4.Critical innovations in the development of mathematical models for cast ironIn this section we will present a summary of the main ana-lytical and computational models developed for cast iron.4.1.Analytical modeling of cast ironTwo years after the development of the Jackson–Hunt model for regular eutectics,Tiller [47]attempted to avoid one of the limitations of the JH model,which is that it could only be used for directional solidification.He developed a model for the cooperative growth of a eutectic spherical grain of LG and austenite.The model predicted that the correlation between solidification velocity and lamellar spacing obeys the relation-ship λV 1/2=4×10−6.This theoretical result was confirmed experimentally by Lakeland in 1968.The first analytical model to describe growth of the eutec-tic in SG iron was proposed in 1972by Wetterfall et al.[48].The model assumed diffusion controlled steady-state growth of graphite through the γshell.This model has survived the test of time and is used today in most computational mod-els for microstructure evolution.Under the assumption that the ratio between the radii of γand graphite remains constant dur-ing solidification,the equation derived for the growth velocity of graphite was simplified by Svensson and Wessen [49]to d r Gr /d t =2.87×10−11 T /r Gr .The irregular nature of the LG-γeutectic was not confronted until 1987,when Magnin and Kurz [50]proposed their irregu-lar faceted/non-faceted eutectic model assuming non-isothermal interface.They further assumed that the γphase that has a diffuse interface grows faster than the graphite phase that is faceted,and that branching occurs when a depression forms on the faceted phase.To impose a non-isothermal coupling condition over the interface,they ascribed a cubic function.They demonstrated that the smallest spacing of the lamellar eutectic is dictated by the extremum condition,but that a larger spacing will also exist,λbr ,dictated by a branching condition.λbr can be calculated as the product between a function of the physical constants of the faceted phase and a material constant.This constant must be postulated (guessed)which limits the generality of the model.D.M.Stefanescu/Materials Science and Engineering A413–414(2005)322–333329Recently,Catalina et al.[51,52]proposed a modified Jackson–Hunt model for eutectic growth applicable to both reg-ular and irregular eutectics.The model relaxes the assumption of isothermal interface and accounts for the density difference between the liquid and the two solid phases.Four character-istic spacings for which the undercooling exhibits a minimum were identified:λ␣,λ,λSL(for the average undercooling of the S/L interface),andλiso=λex(spacing at which the inter-face is isothermal equal to the one derived from the extremum criterion).It is remarkable thatλiso=λex was derived without invoking the extremum criterion.However,isothermal growth is not possible in all eutectic system.Fe–C alloys do not grow with an isothermal interface.The minimum spacing is determined by λSL,while the average spacing byλGr.Spacing adjustment of irregular eutectics occurs through the branching of the faceted phase.putational modeling of cast iron—analytical heat transport+transformation kineticsThe era of computational modeling of cast iron was started by the brilliancy of a scientist whose name has already been quoted several times in this paper.It is that of Oldfield[53],who,in 1966developed a computer model that could calculate the cool-ing curves of LG iron(Fig.12).His seminal paper included many innovations including parabolic laws with experimentally derived constants for nucleation and growth of spherical eutec-tic grains,correction for grain impingement against one another and against the wall,and a computer model for heatflow across a cylinder similar to FDM.Validation against published experi-ments was also included.Oldfield’s model is indisputably the basis of the current advances in computational modeling of microstructural evolution during solidification.Nobody ever remembers number2in any human endeavor. Yet,the author of this paper will have to take credit for this position,since in1973he was thefirst one to continue Oldfield’s work[54].Using an analytical model for heat transport and time stepping procedure to generate cooling curves,Stefanescu and Trufinescu[55]studied the effects of inoculants on the cooling curves and the nucleation constants.A third paper followed in 1978when Aizawa[56]used Oldfield’s model to examinethe Fig.12.Experimental and calculated cooling curves,quenched iron sample and equations for nucleation and growth proposed by Oldfield[53].influence of nucleation and growth rate constants on the width of the mushy zone in LG and SG iron.The next significant development in thefield belongs to Fredriksson and Svensson[57,58]who combined an analytical model for heat transfer with parabolic growth law for LG and white iron,carbon diffusion throughγshell for SG iron,and a model for cylindrical shape CG.They were also thefirst to introduce the Johnson–Mehl approximation for spherical grain impingement.At the same time and using similar procedures,Stefanescu and Kanetkar[59]included in the model primary and eutectic solidification,as well as the eutectoid transformation,calcu-lating for thefirst time the room temperature microstructure (Fig.13).Incremental improvements were contributed by various caze et al.[60]modified the mass balance equa-tion in the carbon diffusion model for SG iron to include calcula-tion of the off-eutectic austenite.Fras et al.[61]further improved the carbon diffusion model by solving for non-stationarydiffu-Fig.13.Calculated cooling curves(left)and fraction of phases(right).M is the cylindrical bar modulus.Full lines are for pearlite,dotted lines are for ferrite[59].330 D.M.Stefanescu /Materials Science and Engineering A 413–414(2005)322–333sion,including diffusion in liquid,and considering the ternary Fe C Si system.The next challenge of significant industrial interest was the prediction of the GWT.Fredriksson et al.[62]and Stefanescu and Kanetkar [63]approached it in 1986.By including both the stable and metastable phases in the calculation of the fraction solid,it was possible to output the solid fractions of gray and white eutectics.The basic equation was:f S =1−exp −4π3 N Gr r 3Gr +N Fe 3C r 3Fe3C where N is the number of grains and r is their putational modeling of cast iron—numericaltransport +transformation kineticsThe first coupled FDM energy transport–solidification kinet-ics model for SG iron was proposed in 1985by Su et al.[64].They used Oldfield’s nucleation model,carbon diffusion con-trolled growth through the γshell,and performed some valida-tion against experiment.It was not until 1991that a FDM energy transport–solidification kinetics model for SG iron was extended to room temperature by Chang et al.[65].They modeled the γ⇒αtransformation as a continuous cooling transformation and attempted some validation against experimental work.The first attempt to use a numerical model to predict the GWT appears to belong to Stefanescu and Kanetkar [66]who in 1987developed an axisymmetric implicit FDM heat transport model coupled with the description of the solidification kinetics of the stable and metastable eutectics.They validated model predictions against cast pin tests.A few years later,Nastac and Stefanescu [67]produced a complete FDM model for the prediction of the GWT,which was incorporated in ProCast.The model included the nucleation and growth of the stable and metastable phases and accounted for microsegregation.The model demonstrated such phenomena as the influence of Si segregation on the T st −T met interval for gray and white irons,and the influence of cooling rate and amount of Si on the gray-to-white and white-to-gray transitions (Fig.14).Mampey [68]included fluid flow in the transport calculations,compared filling simulation with experiment,and demonstrated the influence of mold filling on the final distribution of nod-ule count.He also illustrated the shifting of thermal center and the reduction of radial temperature differences when flow was included (Fig.15).putational modeling of cast iron—visualization of microstructureThe transformation of the computer into a dynamic micro-scope that transformed cast iron into a virtual material was spearheaded by Rappaz and his collaborators with their applica-tion of the cellular automaton (CA)technique to microstructure evolution modeling.Not surprisingly,the first application of the CA technique to cast iron is due to Charbon and Rappaz [69]who used the classic model for diffusion-controlled graphite growth through the austenite shell to describe SG ironsolidification.Fig.14.The influence of Si and initial cooling rate on structural transition in a 3.6%C,0.5%Mn,0.05%P,0.025%S cast iron [67].Two selected computer generated pictures at some intermedi-ate f S and at f S =1are presented in Fig.16.The reader will notice that each nodule is surrounded by an austenite grain.Yet experimental evidence suggests that more than one graphite spheroid is found in the eutectic austenite grains (see for exam-ple microshrinkage SEM images in Ref.[28]or color etching microstructures in Refs.[28,70]).Beltran-Sanchez and Stefanescu [71]improved on the previ-ous model by including solidification of primary austenite grains and by initiating graphite growth once graphite nuclei came in contact with the austenite grains.After contact,graphite was allowed to grow through the diffusion-controlled growth mech-anism.Fig.15.Calculated effect of fluid flow on the thermal profile of a cylindrical casting [68].。
lifelines cohort study -回复

lifelines cohort study -回复什么是生命线队列研究(lifelines cohort study)?生命线队列研究(lifelines cohort study)是荷兰一项重要的长期人群研究项目。
该项目以世代间通常被称为生活线或年轮仪的队列研究为基础,目的是通过分析多种因素对人群健康、疾病发生和预后的影响,从而提供指导公共卫生政策和健康促进策略的科学依据。
该研究项目起源于2006年,由格罗宁根大学医学中心领导,通过对35,000名居住在荷兰北部的人进行观察和数据收集。
参与人群代表了全荷兰人口的多样性,包括不同年龄、性别、职业等不同的背景因素。
生命线研究项目旨在收集大量的生活、基因、复旦、心理健康和临床状况数据,以了解和预测人类健康和疾病的发生、发展和预后。
生命线队列研究的主要目标是通过收集长期的纵向数据,探索各种因素对人群健康的影响。
该研究涵盖了从最早的生命阶段到晚年的各个阶段,其目的是明确因素与健康问题之间的关联,比如生活方式、遗传、环境和社会因素等。
研究中的一项重要工具是生命线(lifelines),它代表了人的生命历程中的不同事件和影响。
这些事件可能包括出生、成长、受教育、就业、生活方式、社会环境、疾病发作和治疗等。
通过对这些事件的记录和分析,研究人员可以得出结论,了解这些因素如何影响人群的健康状况。
生命线队列研究被广泛应用于荷兰的卫生政策制定和健康服务改进。
研究结果为政府、保健组织和医疗机构提供了指导,以改善公众健康和疾病管理。
此外,生命线队列研究在科学界也有重要影响,为研究人员提供了丰富的数据资源,促进了各种健康相关研究领域的发展。
通过生命线队列研究,研究人员希望能够更好地了解疾病的发生和发展机制,探索各种因素对疾病的影响,并为患者提供个性化的医疗服务。
此外,生命线队列研究对于制定健康政策和预防疾病也具有重要的意义,通过提供科学依据,为公共卫生政策的改进提供支持。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
生命线工程的研究进展李杰(同济大学建筑工程系,上海 200092,中国)摘要:本篇文章概述了生命线工程研究中的关键事件和若干进展。
研究话题包括:随机地震动场的波动数值模拟、工程结构非线性地震反应的概率密度演化分析方法、大型生命线工程网络系统的抗震可靠性分析与优化等。
在论述研究进展的同时,对若干相关的国内外研究发展状况作了简要的评述。
对生命线工程研究的未来发展给出了一些建议。
关键词:生命线工程;地震;结构;可靠性;网络;优化引言生命线工程是维系现代城市与区域经济、社会功能的基础性工程设施与系统,其典型对象包括区域电力与交通系统、城市供水、供气系统、通讯系统等。
在强烈灾害(如地震、风暴等)袭击下,生命线工程的破坏可以导致城市乃至区域社会、经济功能的瘫痪。
例如:在1995年日本阪神大地震中,神户地区供水系统主干供水管网破坏1610处,导致11万用户断水,一周后仅修复三分之一,全部修复工作持续三个半月。
与此同时,该地区供气系统、供电系统也都遭受了严重破坏。
由于生命线工程系统的耦联作用,还导致了严重的次生灾害。
20世纪70年代中期,由于美国圣费尔南多地震的影响,一批美国地震工程学家正式提出了生命线地震工程的概念旧。
3J。
实际上,若对生命线工程的研究对象与研究问题加以认真地考察,则不难发现:在研究对象上,生命线工程研究包括了生命线工程结构、生命线工程网络、复合生命线工程系统三个基本层次。
而在研究问题上,则可以分为抗灾设计号|生态控制两大基本领域H J。
近三十年来,在世界范围内,尤其是西方发达国家,对生命线工程研究的重视程度有增无减。
例如:1998年,美国联邦紧急事务署与美国土木工程师学会联合成立美国生命线工程联合会(ALA),统一协调生命线工程科学研究、技术开发与工程实践等方面的工作。
2004年,在加拿大召开的世界地震工程大会上,生命线工程的研究进展被列为大会10个专题报告之一。
由于生命线工程涉及到重大土木工程抗灾、工程系统的可靠性与耐久性、工程结构与工程系统的安全性监测与控制等土木工程发展中的一系列关键科学与技术问题,从一定意义上说,生命线工程研究已成为现代土木工程研究的基本推动力量。
在过去十年间,国内外在生命线工程研究中取得了系列的重要进展。
以生命线工程抗震为例,主要进展包括:(1)在大尺度地震动场的物理模拟与随机地震动场研究中取得了有意义的进步;(2)在复杂结构的非线性破坏机理方面开始有了深入的认识。
与材料本构关系研究相结合,确定了结构全过程、全寿命设计的基本概念,并逐步把关于结构非线性破坏机理的规律性认识应用于结构基于性态的设计理念之中;(3)对工程结构与工程系统中存在的不确定性进行了广泛的探索。
概率密度演化分析理论的提出,有可能为大型复杂工程结构的抗灾可靠性设计奠定理论基础;(4)在大型复杂工程网络可靠性分析方面取得重要进展。
以此为基础,现有研究工作已经将触角延伸到网络抗灾可靠性优化领域,从而可能实现工程系统层面的优化设计;(5)复合生命线工程系统的灾害模拟逐步得到重视,并在现代城市的综合灾害防御中开始占据一席之地。
本文拟摘要概述上述研究进展,与此同时,简要评述国际上的相关研究状况。
1 工程场地地震动随机场的波动数值模拟在一般中小尺度结构的地震反应分析中,经常假定在结构基底各点处的地震动输入完全相同,即所谓的一致输人。
然而,在研究实践中人们发现:对于大型桥梁、大跨空间结构(如机场建筑)、工程管线等生命线工程结构,采用一致输人假定可能导致结构地震反应的显著偏差。
因此,如何正确地确定地震动空间变化模型,成为生命线工程抗震研究中带有基础性的课题之一。
20世纪90年代中期以前,关于地震动场的研究大多基于对密集地震动台阵强震记录的统计分析。
其中,最具代表性的模型当推1996年Kiureian提出的多因子乘积形式的相干函数模型-8 J。
研究发现:地震动相干函数模型大多存在对于特定强震观测台阵的依赖性问题。
这在一定程度上揭示出场地因素对于地震动相干函数的影响。
实际台阵观测记录表明:密集台阵基岩处的相干函数统计结果具有一定的稳定性,而非基岩场地的地震动相干函数,即使对于同一台阵记录也会表现出明显差异归。
因此,在国际上出现了利用场地地震反应分析方法研究地震动相干函数的研究实例。
例如,1997年,美国zeⅣa教授和她的合作者采用弹性半空间模型,解析求解了具有随机介质场地的地震动相干函数。
在我们近年来的研究工作中,则从发展工程场地的随机波动分析方法人手,逐步完成了确定性介质场地的随机波动分析、随机介质场地的随机波动分析等系列研究工作,初步实现了一般工程场地的地震动随机场的波动分析技术。
这一工作的基本内容如图1所示。
对于确定性工程场地,由廖振鹏院士提出的近场波动的有限元数值分析技术提供了场地地震反应分析的良好工具。
当采用人工边界模拟无限域时,场地地震动的控制方程为:式中:M、C、K分别为有限元方程决定的局部场地质量矩阵、阻尼矩阵与刚度矩阵;R(t)为随机波动输入;五为场地位移反应、速度反应和加速度反应向量;矗为人工边界点的位移向量,其上标表示计算时刻,下标表示空间离散计算点的C为二项式组合系数。
坐标;Nj考虑工程场地介质的随机性,式(1)中的C与K皆应为随机矩阵,这就引出了随机场地的波动分析问题。
采用本文作者所发展的正交分解分析方法,可以通过引入关于随机响应向量的次序正交展随机波动方程及其边界条件转化为如下的扩阶系统方程与扩阶边界条件:式中:,,M K C A A A 分别为扩阶质量矩阵、扩阶阻尼矩阵和扩阶刚度矩阵;F(t)为扩阶波动输入;,,X x x 为扩阶位移向量、速度向量和加速度向量;x 为与人工边界点位移相对应的扩阶向量;j T 为插值系数矩阵。
原则上,引用一般随机振动分析方法,即可求解式(3)、(4)所构成的基本方程,例如采用振型分解方法。
然而,鉴于传统随机振动分析方法计算工作量较大,应用于扩阶系统方程将导致计算上的复杂性。
因此,我们进一步引入林家浩教授提出的虚拟激励法求解上述扩阶系统方程,从而使计算工作量大大降低。
现有研究表明,采用上述随机地震动场的波动数值分析方法,可以分析任意类型的工程场地地震动场,给出以数值结果表示的地震动相干函数解答。
例如,对于图2所示的阶梯地形变化场地,计算得到的地震动谱密度与相干函数结果如图3所示。
分析这一结果可知,计算地震动相干函数定量地揭示了相干函数幅值随频率下降的趋势。
这一现象在密集台阵观测中屡次出现而不能找到其合理的解释。
我们的研究结果表明,这一现象源于两个物理背景:场地的随机性与基岩的非一致地震输入。
2 结构非线性地震反应的概率密度演化分析多数生命线工程结构在遭遇强烈地震时会进人非线性工作阶段。
结构非线性地震响应分析的关键难题是:如何正确反映结构的非线性与地震输入、结构物理参数的随机性。
在经过长期研究之后,我们发现:在结构非线性地震响应分析问题中,结构非线性表现与结构本质随机性影响是互相耦联的。
这种相互耦联作用使得结构层次的非线性反应在本质上具有不可精确预测的特性。
深入研究结构的随机非线性反应问题,是解决这一问题的关键。
经过长期探索,我们发展了概率密度演化分析理论,初步解决了上述问题。
设一般多自由度非线性反应控制方程式中:M 、C 分别为结构质量矩阵和阻尼矩阵;x 、x 与x 分别为结构加速度、速度与位移反应;(),f X ξ为因随机本构反应关系导致的结构非线性恢复力,亭为随机本构关系中的随机参数;F(t)为随机动力荷载。
实际工程中的结构动力学问题一般是适定的,即其解答存在且惟一。
设在给定初始条件下,式(5)的解答 为: (),X X t ξ= (6)其任一分量为:(),l l x t ϕξ= (7) 由于ξ的随机变量性质,l x 与ξ的联合概率密度可以表示为:()()(),,,l l x x p x x t p x t x p x ξξξξξξξ== (8)式中:,x x ξ分别对应于l x 与ξ的实现值;ξ为Dirac 符号;()p x ξξ为ξ的概率密度分布函数。
将式(8)两边关于t 求导,可得:称式(9)为广义概率密度演化方程。
“广义”二字意味着菇;可以是任何设定的状态量(如位移、变形、内力、应变、应力等等)。
采用差分方法,可以求解上述概率密度演化方程汹]。
而结构反应的概率密度,可以由式(10)给出:式中:ξΩ为关于ξ的积分区域。
利用上述概率密度解答,容易给出结构反应的均值解答与方差范围。
由于可以给出概率密度分布,便给出了结构非线性反应的精细化描述。
由此,不难给出结构可靠度的准确刻划。
非常有意义的是,上述概率密度演化分析方法恰恰是建立在确定性结构非线性分析基础上的。
事实上,式(9)中的(),l x x t ξ的求取,正是通过经典确定性结构非线性分析获得的。
从一定意义上说,概率性非线性反应分析的结果,可以视为是确定性非线性分析结果的某种综合。
将这种理性总结与结构线性分析和非线性分析、结构静力分析和动力分析在方法论上的联系相类比,不难发现其中深有意味的相似之处。
图4是一个大型混凝土消化池的工程背景。
对这二结构,采用概率密度演化分析给出的部分地震反应分析结果见于图5与图6。
这一实例表明:随机结构地震反应的概率密度具有复杂的演化进程。
3 大型生命线工程网络的抗震可靠度分析与优化为了提高生命线工程网络的抗震能力,需要进行生命线工程网络的抗震可靠度分析方法的研究。
在国际上,这类研究可以划分为随机模拟算法与概率解析算法两种基本类别。
关于网络系统可靠度随机模拟方法的研究,可以上溯至20世纪70年代末期。
经过长期的研究与实践,人们在肯定其对于复杂网络的普遍适用性的同时,也发现这一方法存在若干不容忽视的弱点。
举其要者,包括:(1)计算效率不高,计算精度不易估计。
在研究实践中我们发现:对同一规模的不同问题,为提高一位有效数字,其计算时间可以相差数倍乃至10倍之多;(2)一般不能用于解决相关失效网络的可靠度分析问题;(3)不适合于进行网络系统的单元重要度分析,因之难以利用它进行系统的抗震优化设计。
由于上述缺陷,近年来,对于网络系统可靠度分析的概率解析算法研究构成了系统可靠度研究的一条主线。
就网络可靠度分析的概率解析算法而言,20世纪80年代末期以后的研究,多集中于不交最小路算法,如廖炯生、Abmham等人的研究。
由于受系统复杂性的影响,此类算法很难适应于大型网络的抗震可靠度分析。
事实上,在有关研究文献中,甚至看不到大于50个节点的网络分析实例。
究其原因,在于计算中的组合爆炸问题难以有效地加以解决。
在我们近年来的研究中,采用实时不交化的研究技术路线,发展了一类递推分解算法,现将其基本思想略述于下。
设网络系统G的结构函数:式中:A。