Fiscal Policy Effectiveness in Japan

Journal of the Japanese and International Economies 16, 536–558 (2002). doi:10.1006/jjie.2002.0512

Fiscal Policy Effectiveness in Japan [1]

Kenneth N. Kuttner, Federal Reserve Bank of New York, New York
and Adam S. Posen, Institute for International Economics, Washington, DC

Received January 18, 2002; revised August 30, 2002

Kuttner, Kenneth N., and Posen, Adam S.: Fiscal Policy Effectiveness in Japan

The effectiveness of fiscal policy in Japan over the past decade has been a matter of great controversy. We investigate the effectiveness of Japanese fiscal policy over the 1976–1999 period using a structural VAR analysis of real GDP, tax revenues, and public expenditures. We find that expansionary fiscal policy, whether in the form of tax cuts or of public works spending, had significant stimulative effects. Using a new method of computing policy multipliers from structural VARs, we calculate that the multiplier on tax cuts is about 25% higher at a four-year horizon than that on public works spending, though both are well in excess of one. A historical decomposition reveals that Japanese fiscal policy was contractionary over much of the 1990s, and a significant proportion of the variation in growth can be attributed to fiscal policy shocks; accordingly, most of the run-up in public debt is attributable to declining tax revenues due to the recession. Examining savings behavior directly, we find limited evidence of Ricardian effects, insufficient to offset the short-term effects of discretionary fiscal policy. J. Japan. Int. Econ., December 2002, 16(4), pp. 536–558.

[1] Correspondence should be addressed to Adam Posen, Institute for International Economics, 1750 Massachusetts Avenue N.W., Washington, D.C. 20036. Fax: 202-454-5432. E-mail: aposen@. An earlier version was presented at the CEPR-NBER-TCER Conference on Issues in Fiscal Adjustment, December 13–14, 2001, Tokyo, Japan. We are grateful to Stanley Fischer, Fumio Hayashi, Takeo Hoshi, Richard Jerram, John Makin, George Perry, Mitsuru Taniuchi, and Tsutomu Watanabe for helpful comments and advice. Samantha Davis provided excellent research assistance during the revisions. The views expressed here and any errors are those of the authors, and not necessarily those of the Federal Reserve Bank of New York, the Federal Reserve System, or the IIE.
© 2002 Federal Reserve Bank of New York and Institute for International Economics. Published by Elsevier Science (USA). All rights reserved.
Federal Reserve Bank of New York, New York, and Institute for International Economics, Washington, DC.
Journal of Economic Literature Classification Numbers: E62, E65, E21.

The effectiveness of fiscal policy in Japan in the 1990s has been at least as controversial as the currently more public disputes over monetary policy. There has been open debate over the degree to which expansionary fiscal policy has even been tried, let alone whether it has been effective, along with widespread assertions about the degree of forward-looking behavior by Japanese savers. The highly visible and rapid more-than-doubling of Japanese public debt in less than a decade speaks for itself to a surprising number of observers: the fiscal deficit has grown sharply, yet the economy has continued to stagnate, so fiscal stabilization failed. No less an economist than Milton Friedman recently wrote, "[D]oes fiscal stimulus stimulate? Japan's experience in the '90s is dramatic evidence to the contrary. Japan resorted repeatedly to large doses of fiscal stimulus in the form of extra government spending.... The result: stagnation at best, depression at worst, for most of the past decade."[2]

But it is easy to demonstrate from just charting publicly available data that the bulk of the increase in Japanese public debt is due to a plateau in tax revenue rather than to increased public expenditure or even discretionary tax cuts. This of course reflects the inverse cyclical relationship between output and tax revenue. If one applied a plausible tax elasticity of 1.25 to reasonable measures of the widening output gap (e.g., those estimated in Kuttner and Posen (2001)), the result would be a much-reduced estimate of the structural budget deficit. In fact, using the measure of potential based on a constant productivity trend growth rate of 2.5% a year all but eliminates the non-social-security portion of the deficit. Moreover, as measured by the fiscal shocks derived from our estimates in this paper, fiscal policy has been generally contractionary since 1997.

More tellingly, the massive increase in Japanese government debt outstanding over the period has had little apparent effect to date on the level of long-term interest rates, the steepness of the yield curve, or the yen–dollar exchange rate. This is commonly attributed to the passivity of Japanese savers, and there surely has been no sign of crowding out or of inflation fears. This fact has not gone unremarked upon in the financial press.[3]

[2] Friedman, "No More Economic Stimulus Needed," Wall Street Journal, October 10, 2001, p. A17.
See also Ian Campbell, "Friedman Opposes Stimulus Package," UPI Newswire, October 9, 2001.

Nevertheless, citing the eventual need to pay obligations, including those off of the government balance sheet (such as pensions), the ratings agencies downgraded Japanese local currency sovereign debt to AA− (by Standard and Poor's, April 15, 2002) and A2 (by Moody's, May 30, 2002).[4] But with the exception of a brief panic-induced spike in rates in January 1999, more than half of which was reversed within two months, holders of Japanese government bonds have yet to take any significant capital losses.

Against this background of declining tax revenues and relatively stable long-term nominal interest rates, the actual course of Japanese fiscal policy has been almost tumultuous, rather than one of unremitting spend-spend-spend, as often assumed. The divergence of common perceptions from reality may be due in part to the fact that Japan has a centralized, if arcane, fiscal system.[5] Every year since 1994 has brought announcements of various tax reforms, but their actual impact is difficult to ascertain.[6] On the public spending side, estimating the mamizu ("true water") of any Japanese fiscal stimulus requires great care, given institutional complications.[7] Meanwhile, in terms of revenue collection, the Japanese tax base is rather small by developed-economy standards, especially on the household side, where salaried urban workers pay a disproportionate share of the taxes, and small business owners and rural residents pay almost none.[8]

The absence of obvious interest rate, inflation, or crowding-out effects from the fiscal measures undertaken leads us to examine what really happened with fiscal policy in Japan in the 1990s. If standard theory tells us that expansionary fiscal policy drives up interest rates, limiting that policy's effectiveness, then perhaps the absence of an interest rate rise is indicative of the opposite. Our first consideration therefore is simply whether the fiscal impulses had Keynesian countercyclical signs and what impact those impulses had. As many observers have stressed, traditional public works in Japan more closely approximate the building of pyramids in hinterlands, famous to macroeconomics undergraduates, than do those in any other OECD country.[9] Some have indicated that they would expect the multiplier on such wasteful expenditures to be less than one.[10] Of course, although Keynes

[3] The Economist observed, "[government bond yields] fell as the government pumped the economy with ... fiscal stimulus, as the yen plummeted by 40% from its high in the middle of 1995, and even as the government's debt climbed to 100% of GDP. By late [1997] the Japanese government was able to borrow more cheaply than any other government in recorded history." "Japanese Bonds: That Sinking Feeling," The Economist, February 21, 1998, pp. 74–75.

[4] See Arkady Ostrovsky and Christopher Swann, "Japan hit by downgrade in credit rating," Financial Times, April 16, 2002, p. 13, and David Ibison, "Japan's sovereign debt rating downgraded," May 31, 2002.

[5] See Ishi (2000) for a historical perspective; Balassa and Noland (1988), Bayoumi (1998), and OECD Economic Survey: Japan (1999) for institutional descriptions; and Schick (1996) for a comparison of U.S. and Japanese budget processes. Tax Bureau (2000) gives the official account of the tax system.

[6] See Watanabe et al. (2001) and Tax Bureau (2000).

[7] See Posen (1998).

[8] See Balassa and Noland (1988).

[9] Sixty percent of the Japanese coastline is today reportedly encased in concrete (Ian Buruma, "The Japanese Malaise," New York Review of Books, July 5, 2001, p. 5). Similar examples are easy to come by: see, for example, Martin Wolf, "Japan's Economic Black Holes," Financial Times, January 17, 2001, p. 21, and Bergsten, Ito, and Noland (2001, pp. 64–65).
maintained that even overtly wasteful public works projects were an effective source of fiscal expansion, several observers have stressed that in the Japanese context tax cuts are likely to be more effective.

[10] In June 1998 the then Vice Minister of Finance for International Affairs, Eisuke Sakakibara (1999, p. 45), expressed a contrary point of view: "Concerning the current fiscal package, I know that there have been various criticisms of it, but I think there is now a wider acceptance, even in the international community, of public works as a more effective means than tax cuts. In addition, under current circumstances, a strong multiplier effect can be expected...."

We then turn to historical decompositions of the effect of fiscal policy on the Japanese economy in the 1990s. The ample variation in Japanese fiscal policy, moving from contractionary to expansionary and back to contractionary, with some tax measures temporary and others permanent, provides a rich basis for econometric investigation. Upon that investigation, it becomes clear that fiscal policy provides an apparent explanation for a surprisingly large amount of the variation in Japanese economic growth over the period. Meanwhile, on the tax side, all tax cuts were preceded and accompanied by loud declarations by government officials that eventually taxes would have to go up, whether due to the looming demographic threat, to the unsustainability of Japanese public debt, or to the supposedly declining potential growth rate. Even though we find that these well-publicized dangers from debt did not have any obvious short-run effect on multipliers, we also directly examine the possibility of Ricardian equivalence. Finally, we conclude by considering some of the questions raised by the apparent fiscal power granted through savers' passivity in Japan.

The analysis here builds on earlier work applying a structural VAR approach to fiscal policy in Japan (Kuttner and Posen, 2001), but extends that paper's investigations in four important ways. First, impulse response functions and their standard errors are calculated, allowing a clear sense of the significance and interaction of fiscal policy shocks. Second, fiscal shocks and their contributions to GDP growth in the 1990s are computed and plotted, yielding an analysis of the historical record. Third, a new approach is introduced to compute "pure" policy multipliers from structural vector autoregressions (VARs) in order to give a clearer picture of the effects of tax and expenditure changes in isolation. And fourth, throughout the paper, a variety of robustness checks are considered, especially with regard to the results' sensitivity to the identifying assumptions.

1. THE SHORT-RUN EFFECTS OF FISCAL POLICY

To assess the impact of fiscal policy on the economy, we employ a structural three-variable VAR model adapted from Blanchard and Perotti (1999), which is designed to identify the impact of fiscal policy while explicitly allowing for contemporaneous interdependence among output, taxes, and spending. The one-lag version of the structural VAR can be expressed succinctly as

    $A_0 Y_t = A_1 Y_{t-1} + B \varepsilon_t$,    (1)

where $Y_t = (T_t, E_t, X_t)'$ is the vector of the logarithms of real tax revenue, real expenditure, and real GDP, and $\varepsilon_t$ is interpreted as a vector of mutually orthogonal shocks to the three jointly endogenous variables.

Following Blanchard and Perotti, a key identifying assumption is that real GDP is allowed to have a contemporaneous effect on tax receipts, but not on expenditure.
(As discussed below, however, plausible changes to this assumption make no substantive difference to the results.) The model also assumes that taxes do not depend contemporaneously on expenditure (or vice versa), although tax shocks are allowed to affect spending within the year. This assumption reflects the institutional setup of fiscal policy in Japan, where taxes are mostly collected from withholding and consumption, spending is mostly implemented with a lag, and both automatic stabilizers and the size of the public sector are limited. With these assumptions imposed, the model can be written as

    $T_t = a^{13}_0 X_t + a^{11}_1 T_{t-1} + a^{12}_1 E_{t-1} + a^{13}_1 X_{t-1} + \varepsilon^T_t$
    $E_t = a^{21}_1 T_{t-1} + a^{22}_1 E_{t-1} + a^{23}_1 X_{t-1} + b^{21} \varepsilon^T_t + \varepsilon^E_t$    (2)
    $X_t = a^{31}_0 T_t + a^{32}_0 E_t + a^{31}_1 T_{t-1} + a^{32}_1 E_{t-1} + a^{33}_1 X_{t-1} + \varepsilon^X_t$,

where $a^{ij}_0$, $a^{ij}_1$, and $b^{ij}$ represent the $(i,j)$th elements of the $A_0$, $A_1$, and $B$ matrices. Thus $a^{13}_0$ captures the within-period elasticity of tax receipts with respect to GDP, $b^{21}$ is the effect of tax shocks on expenditure, and $a^{31}_0$ and $a^{32}_0$ allow taxes and expenditure to affect real GDP contemporaneously. With seven parameters to estimate from the six unique elements of the covariance matrix of reduced-form VAR residuals, however, the model in (2) is not identified.[11] Our strategy, like that of Blanchard and Perotti, is to use independent information on the elasticity of tax revenue with respect to real GDP (i.e., $a^{13}_0$) to identify the model. Drawing on Giorno et al. (1995), we set this parameter equal to 1.25, yielding an exactly identified model.
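The identification can be made concrete in a few lines of code. The following is a minimal sketch (our own illustration, not the authors' published code) of how the structural shocks could be recovered from the reduced-form VAR residuals with the tax elasticity imposed a priori; function and variable names are hypothetical, and the post-1990 trend interaction used in the paper is omitted for brevity.

```python
import numpy as np

def ols(y, X):
    """OLS coefficients and residuals via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

def structural_shocks(T, E, X, tax_elasticity=1.25):
    """T, E, X: annual log real taxes, expenditures, GDP (1-D arrays)."""
    n = len(T)
    trend = np.arange(n, dtype=float)
    # Reduced-form VAR(1) regressors: constant, trend, one lag of each series.
    Z = np.column_stack([np.ones(n - 1), trend[1:], T[:-1], E[:-1], X[:-1]])
    uT = ols(T[1:], Z)[1]
    uE = ols(E[1:], Z)[1]
    uX = ols(X[1:], Z)[1]
    # Tax shock: strip the contemporaneous GDP effect, with a13_0 = 1.25 imposed.
    eT = uT - tax_elasticity * uX
    # Expenditure shock: tax shocks may move spending within the year (b21).
    b21 = ols(uE, eT[:, None])[0][0]
    eE = uE - b21 * eT
    # Output equation: regress uX on uT and uE by 2SLS, using the structural
    # shocks as instruments for the endogenous fiscal residuals.
    W = np.column_stack([eT, eE])                    # instruments
    R = np.column_stack([uT, uE])                    # endogenous regressors
    R_hat = W @ np.linalg.lstsq(W, R, rcond=None)[0] # first stage
    a30 = ols(uX, R_hat)[0]                          # (a31_0, a32_0)
    eX = uX - R @ a30
    return eT, eE, eX
```

The instrumental-variables step reflects the observation, due to Blanchard and Perotti (1999), that estimating the output equation this way is equivalent to using cyclically adjusted taxes and spending as instruments in a two-stage least squares regression.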
Reliable comprehensive quarterly fiscal data for Japan are unfortunately not available to the public or to the international financial organizations, and so we fit the model instead to annual consolidated central, state, and local fiscal data, compiled by the IMF, spanning fiscal years 1976 through 1999.[12] Tax receipts are defined as direct and indirect tax revenue, excluding social security contributions. Expenditure corresponds to the sum of current and capital expenditure, less social security and interest payments.[13] The estimated model also includes a linear trend and a trend interacted with a post-1990 dummy to capture the post-1990 slowdown in trend GDP growth.[14] The estimated parameters are displayed in Table I. Interpreting individual coefficients of a simultaneous equation model is difficult, of course, but it is worth noting that expenditures have a positive, statistically significant impact on real GDP.

[11] See Hamilton (1994, chapter 11) for a complete discussion of identification and estimation of structural VARs.

[12] This lack of timely higher-frequency data is of course of policy significance, as well as presenting a difficulty for research. As Stanley Fischer (2001, p. 163) observes, "Indeed, there is a general problem of fiscal transparency in Japan ... the key issues are lack of consolidation among different fiscal units and the absence of quarterly data, which means that fiscal information is on average about eight months out of date."

[13] As noted by Blanchard and Perotti (1999), estimating the third equation in the structural VAR is equivalent to using a measure of "cyclically adjusted" tax receipts (and a similarly adjusted measure of spending) as instruments for taxes and spending in a two-stage least squares regression.

[14] The model makes no explicit distinction between temporary and permanent tax and expenditure changes, in part because the temporary tax changes enacted in Japan have been much smaller in magnitude than the permanent ones (see Watanabe et al., 2001). Many of the supposedly permanent tax changes were offset by subsequent tax legislation, however, and this pattern should be picked up by the model's dynamics.

TABLE I
The Relationship between Taxes, Spending, and GDP: Estimated Parameters of the Structural VAR

                                          Equation
Independent variable    Lag     Tax         Expenditure    GDP
Tax receipts            t       —           —              −0.03
Expenditures            t       —           —              0.17**
Real GDP                t       1.25        —              —
Tax receipts            t−1     0.71***     −0.12          −0.25**
Expenditures            t−1     0.03        0.78***        0.02
Real GDP                t−1     −0.58**     0.66*          0.59***
Tax shock               t       —           −0.03          —
Trend                           −0.004      −0.002         0.033***
Trend × (t > 1990)              −0.018      −0.010         −0.038***
Adjusted R²                     0.996       0.995          0.997
Durbin–Watson                   1.66        2.30           1.85

Source. Authors' calculations, based on a trivariate structural VAR including real tax revenue, real government expenditures, and real GDP, estimated on 24 annual observations spanning fiscal years 1976 through 1999.
Note. Asterisks indicate statistical significance: *** for 0.01, ** for 0.05, and * for 0.10. The coefficient of 1.25 on real GDP in the tax equation is imposed a priori as an identifying assumption. The adjusted R-squared and Durbin–Watson statistics are from the reduced-form VAR equations. Further details can be found in the text.
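Given the estimated $A_0$, $A_1$, and $B$ matrices, the impulse responses discussed next are obtained by propagating a structural shock through the system's dynamics. A minimal sketch (hypothetical helper, not the authors' code; the trend terms drop out of responses to shocks):

```python
import numpy as np

def impulse_responses(A0, A1, B, shock, horizon=4):
    """Response of (T, E, X) to one structural shock over `horizon` years.
    A0, A1, B: 3x3 arrays from A0 @ Y_t = A1 @ Y_{t-1} + B @ eps_t."""
    A0_inv = np.linalg.inv(A0)
    C = A0_inv @ A1                # reduced-form autoregressive matrix
    y = A0_inv @ B @ shock         # year-0 impact of the structural shock
    path = [y.copy()]
    for _ in range(horizon):
        y = C @ y                  # let the shock propagate one more year
        path.append(y.copy())
    return np.array(path)          # rows: years 0..horizon; columns: T, E, X

# Example: a unit tax cut, as in the first column of panels in Fig. 1.
# irf = impulse_responses(A0, A1, B, shock=np.array([-1.0, 0.0, 0.0]))
```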
Figure 1 plots the impulse response functions for the four-year time horizon relevant for policy analysis, along with 90% confidence bands associated with the estimates. As shown in the first two panels of the last row of the figure, tax cuts and expenditure increases both have expansionary effects. Moreover, the estimated effects are statistically significant at a one- to two-year horizon, as well as for the current year in the case of expenditure shocks. The estimated magnitudes of both tax and expenditure effects are comparable as well. The upper left-hand panel of Fig. 1 shows that tax revenue shocks tend to be relatively transitory, effectively vanishing after one year, notwithstanding the characterization of most Japanese tax law changes as permanent in intent.[15],[16] In contrast, the center panel of the figure shows that public works spending shocks are highly persistent, in keeping with institutional and journalistic accounts of government behavior in Japan (and elsewhere).

[Figure 1: nine panels showing the effects of tax, spending, and GDP shocks on tax, spending, and GDP; horizontal axes give years after shock (0–4), vertical axes percent.]

FIG. 1. Estimated impulse responses from structural VAR. Standard errors were computed via Monte Carlo. Dashed lines represent 90% confidence intervals. No standard errors are given for the contemporaneous effects of GDP and spending shocks on spending, as these are fixed by assumption. The tax shock represents a tax cut, and the spending shock represents a spending increase.

The dynamic effects of tax and spending shocks, including the expansionary effect of tax cuts on GDP, are easier to interpret (and more dramatic) when put in yen terms, as is done in Table II. To do so requires scaling up the response by the inverse of the share of taxes in GDP, which averaged 19% during the 1990s. This adjustment results in a cumulative ¥484 increase in GDP in response to a ¥100 tax cut. One explanation for the size of the response is that, over the sample period, tax cuts have tended to be associated with spending increases; in fact, the cumulative increase in spending is roughly equal to the decrease in taxes (although this effect is estimated rather imprecisely).[17] Overall, GDP rises by more than twice the sum of the spending and tax effects.

The immediate impact of a 10% positive spending shock on GDP is 1.6%, however, which translates into ¥84 for a ¥100 spending increase, and the stimulus builds only slightly over time. One reason for the smaller estimated effect of spending than of tax shocks is that taxes tend to rise in response to positive spending shocks in this sample, partly offsetting the expansionary impact of the spending increase. This can be interpreted as evidence of the expensive maintenance of unproductive Japanese public works projects. Overall, the increase in GDP is about 1.75 times the net effect of the spending minus the tax increases: smaller than the effect of tax shocks, but still a respectable economic impact.

[15] This pattern is documented in Watanabe et al. (2001). Because of the feedback between tax revenues and GDP, and the greater-than-unit elasticity of tax revenue with respect to GDP, the impact of a 10% tax shock on tax revenue is slightly less than 10%.

[16] The lack of a significant response of expenditures to tax shocks may appear at first to contradict the results of Ihori et al. (2001), who found Granger causality from taxes to expenditures, expressed as a share of GDP. A closer look shows that the results are consistent, however: in our model, positive tax shocks decrease the level of real GDP while leaving expenditures largely unchanged in the near term, which leads in turn to an increase in expenditures as a share of GDP.
Deriving a model with sufficient structure to assess the impact of fiscal policy clearly requires a number of strong identifying assumptions. As noted above, three such assumptions are embedded in a Blanchard–Perotti framework: first, that current taxes do not depend directly on current expenditures; second, that current expenditures do not respond directly to current GDP; and third, that the within-year elasticity of tax revenues with respect to GDP is 1.25. Since the model is exactly identified, these restrictions are not formally testable, of course, but the reported results are robust to plausible changes in all three of these assumptions.[18] In particular, allowing for a contemporaneous effect from spending shocks to tax revenues (instead of the other way around) has virtually no effect on the results. The results are slightly more sensitive to changes in the assumed elasticity of tax revenues, but for plausible values of the parameter (i.e., ranging from 1.0 to 1.5), the estimates are qualitatively similar to those reported above. And it turns out that assuming a plausible, negative elasticity of expenditures with respect to GDP (reflecting a presumed countercyclical use of fiscal policy) actually increases the estimated effects of fiscal shocks. These robustness checks therefore indicate that the findings are not merely an artifact of the model's identifying assumptions.

This analysis shows that, when it has been used, discretionary fiscal policy in Japan has in fact had the effects predicted in standard closed-economy macroeconomic analyses. Both tax cuts and spending increases lead to higher real GDP, although the tendency for taxes and spending to move together has reduced the impact of spending increases.[19] The commonly held perception of fiscal policy's ineffectiveness in all likelihood stems from a failure to recognize the dependence of tax receipts on GDP: as GDP falls, tax revenue shrinks, but to conclude from this that changes in the deficit have not affected growth would be incorrect.

[17] Blanchard and Perotti (1999) found a qualitatively similar pattern in the U.S. data.

[18] The full results obtained under these alternative assumptions are available from the authors upon request.

[19] Further work is needed to reconcile our results on the sizable effects of fiscal policy in Japan with the findings (using very different econometric approaches) of Bayoumi (2001) and Perri (1999) that fiscal policy had the expected sign but very small effects, and of Ramaswamy and Rendu (2000) that "public consumption had a dampening impact on activity in the 1990s." A likely explanation is that these analyses did not take full account of the dynamic interactions among GDP, tax revenue, and expenditure in the way that we were able to.

2. MULTIPLIERS ON TAX CUTS AND PUBLIC SPENDING

The difficulty in reading off a simple multiplier from our estimations is that in the data (and therefore in Japanese reality over the period) tax cuts generally have been accompanied by spending increases; expenditure increases, on the other hand, have generally been accompanied by tax increases. So, for example, in Table II, where we list the ¥484 estimate of the effect on GDP of a ¥100 tax cut, we are actually reporting the four-year cumulative effect of the tax cut and of the accompanying expenditure increase seen in the data. A fair comparison of the effects of (or multiplier on) tax cuts and expenditure increases therefore requires taking into account any correlation between taxes and expenditures.

To do this, we examine the responses to linear combinations of tax and spending shocks calculated to generate a cumulative 1% change in the variable of interest, and a cumulative zero response to the other variable, measured at a four-year horizon. The response of GDP to this combination of shocks is then used to calculate a "pure" multiplier on tax or spending shocks.
For example, a −0.66% (expansionary) tax shock combined with a −0.21% (contractionary) spending shock gives a 1% reduction in tax revenues over four years, with no cumulative impact on spending, and a net 0.47% increase in real GDP. Scaling this response by the inverse of the share of taxes in GDP (using the 1990–1999 average of 19%) yields a multiplier for tax cuts of 2.5; a similar calculation for spending increases gives a multiplier of 2.0. As a result of this difference in magnitudes, the cumulative four-year gain to Japanese GDP from a revenue-neutral shift of ¥100 from public works spending to tax cuts is ¥47. The calculation is sketched in the code following Table II.

TABLE II
The Dynamic Impact of Fiscal Policy: Estimated Yen-Denominated Impulse Responses
(Effects of expansionary ¥100 shocks, in yen)

                        Impact of −¥100 tax shock        Impact of +¥100 spending shock
                        Taxes    Spending   GDP          Taxes    Spending   GDP
Year 0                  −96      3          16           20       100        84
Year 1                  −32      16         158          34       87         105
Year 2                  0        36         168          37       77         89
Four-year cumulative    −111     104        484          127      332        353

Source. Authors' calculations based on the estimated structural VAR.
Note. The impact of ¥100 tax and spending shocks is computed assuming taxes and spending represent 19% of GDP.
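A sketch of this "pure multiplier" calculation (our reconstruction for illustration, not the authors' code): here cum_tax and cum_spend are assumed to be the four-year cumulative responses of (taxes, spending, GDP) to unit tax and spending shocks, e.g. column sums of the impulse responses computed earlier.

```python
import numpy as np

def pure_multiplier(cum_tax, cum_spend, target="tax", share=0.19):
    """Solve for shock weights giving a cumulative -1% change in the target
    variable and 0% in the other, then scale the GDP response to yen terms."""
    M = np.array([[cum_tax[0], cum_spend[0]],     # cumulative tax responses
                  [cum_tax[1], cum_spend[1]]])    # cumulative spending responses
    goal = np.array([-1.0, 0.0]) if target == "tax" else np.array([0.0, 1.0])
    w = np.linalg.solve(M, goal)                  # weights on (tax, spend) shocks
    gdp = w[0] * cum_tax[2] + w[1] * cum_spend[2] # net cumulative GDP response, %
    # Divide by the 1990-1999 share of taxes (or spending) in GDP, 19%, to
    # express the response as yen of GDP per yen of fiscal impulse.
    return gdp / share
```

With the numbers in the text (a −0.66% tax shock plus a −0.21% spending shock yielding a net 0.47% GDP gain), the function returns 0.47/0.19 ≈ 2.5, the tax-cut multiplier reported above.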
These estimates also understate the beneficial effects of tax cuts, because they do not directly capture the allocative efficiency gains from changes in the Japanese tax code, just the immediate macroeconomic impact. Though such gains can be exaggerated, there is good reason to believe that such supply-side effects would be large in Japan today.

These effects at first glance may seem rather large relative to other published estimates; in fact they are quite close to comparably calculated multipliers for the United States, such as those of Blanchard and Perotti (1999). The "multipliers" reported there, however, are defined differently from those we calculate. Blanchard and Perotti reported multipliers defined as the ratio of the peak response of GDP to the size of the initial shock to taxes or spending. That method can be misleading, however, as it fails to take into account either the dynamics of the response or the tendency for taxes and spending to move together.[20] Using our method to calculate comparable multipliers from Blanchard and Perotti's trend-stationary estimates, we obtain a multiplier of roughly 4.0 for tax shocks, considerably larger than our estimate for Japan. Our estimated spending multiplier for Japan is somewhat higher than the comparable multiplier for the United States calculated from the Blanchard–Perotti results, but quite close to similar calculations based on their estimated response to military spending shocks.

In contrast to these results, the Economic Planning Agency (EPA) of the Japanese government (now the Cabinet and Fiscal Office) has published declining estimates of the multiplier on fiscal policy for the past several years. In May 1995, the EPA World Economic Model 5th Version reported cumulative multipliers on government investment of 1.32 in the first year, 1.75 in the second year, and 2.13 in the third year (down from 1.39, 1.88, and 2.33 in the 4th Version), and far lower multipliers on income tax reductions (0.46, 0.91, and 1.26, down from 0.53, 1.14, and 1.56 in the 4th Version).[21] In October 2001, the EPA released the multipliers from the 1998 revised version of the model, with the cumulative multipliers on government investment declining to 1.12, 1.31, and 1.10, and on income tax reductions of 0.62, 0.59, and 0.05.[22] Leaving aside the question of whether these changes represent statistically significant differences, given the difficulties of estimating these multipliers, it is worth considering the source of this divergence from our results.

The difficulty in making a strict comparison lies in the unavailability (at least publicly, in English) of the details of the EPA's large-scale macro model, particularly with regard to the assumed response of monetary policy built in. As the discussion in OECD (2000, pp. 60–64) makes clear, while there are a number of

[20] Basing the multiplier on the peak response could, for example, yield a nonzero multiplier even if the effect on GDP were completely reversed in subsequent periods.

[21] See "The EPA World Economic Model 5th Version: Basic Structure and Multipliers," Economic Analysis Series 139, May 1995, www.esri.cao.go.jp/en/archive/bun/abstract/139-e.html.

[22] See "The ESRI Short-Run Macroeconometric Model of Japanese Economy: Basic Structure and Multipliers," October 2001, www.esri.cao.go.jp/en/archive/e-dis/abstract/006-e.html.

SequenceManager: Logix Controller-based Batch and Sequencing Solution

A Scalable Batch Solution for Process Control Applications

A modern batch system must account for the growing need for architecture flexibility, true distribution of control, and scalability. SequenceManager software provides batch sequencing in the Logix family of controllers by adding powerful new capability closer to the process and opening new possibilities for skids, off-network systems, and single-unit control. SequenceManager allows you to configure operations in Studio 5000 Logix Designer®, run sequences in FactoryTalk® View SE, and capture and display batch results.

SequenceManager directs PhaseManager™ programs inside a Logix-based controller in an ordered sequence to implement process-oriented tasks for single-unit or multiple independent unit operations. Using industry-standard ISA-88 methodology, SequenceManager enables powerful and flexible sequencing capabilities that allow for the optimal control of sequential processes.

With SequenceManager, you can deliver fast and reliable sequence execution while reducing infrastructure costs for standalone units and complete skid-based system functionality.

Key Benefits

SequenceManager™ software significantly reduces engineering time for system integrators and process equipment builders while providing key controller-based batch management capabilities for end users. Key benefits include:

• Enables distributed sequence execution
• Fast and reliable sequence execution native to the controller
• Efficient sequence development and monitoring in the core product
• Integrated control and HMI solution for intuitive operation
• Reduced infrastructure costs for small systems
• Provides data necessary for sequence reporting

Distributed Batch Management Based on Proven Technology

Built Upon Rockwell Automation Integrated Architecture

SequenceManager was built using the standard control and visualization capabilities found in Rockwell Automation® Integrated Architecture® software. SequenceManager is a new capability built into Logix firmware that uses visualization through FactoryTalk® View SE to create an integrated sequencing solution. Combined with event and reporting tools, SequenceManager software is a complete batch solution for single-unit and skid-based process applications.

Scalable Controller-based Solution

SequenceManager allows flexible designs for skid-based equipment to be developed, tested, and delivered as a fully functioning standalone solution and, if needed, seamlessly integrated into a larger control system. This strategy gives the end user the option to integrate equipment without imposing design constraints on the OEM delivering the skid. It also enables the end user to deploy equipment as a standalone system without constraining its ability to scale to a larger process solution in the future. This batch solution offers scalability to help prevent costly redesign and engineering.

Flexibility to Meet Process Needs

SequenceManager lets you expand process control on skid-based equipment that performs repetitive tasks and decision-making. By using the ISA-88 methodology, SequenceManager allows for control designs that can be adapted to fit the needs of the process industries without the constraints of custom application code. Built-in state model handling provides for fast and easy configuration while maintaining control of the process, as sketched below.
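The ISA-88 procedural state model behind that built-in handling can be pictured with a short sketch (illustrative only, not Rockwell code; the state and command names follow ISA-88 convention, and the transition table is deliberately simplified):

```python
from enum import Enum, auto

class PhaseState(Enum):
    IDLE = auto()
    RUNNING = auto()
    COMPLETE = auto()
    HOLDING = auto()
    HELD = auto()
    STOPPING = auto()
    STOPPED = auto()
    ABORTING = auto()
    ABORTED = auto()

# Operator command -> {current state: next state}. Automatic completions
# (e.g. HOLDING -> HELD when the hold logic finishes) are simplified away.
TRANSITIONS = {
    "start":   {PhaseState.IDLE: PhaseState.RUNNING},
    "hold":    {PhaseState.RUNNING: PhaseState.HOLDING},
    "restart": {PhaseState.HELD: PhaseState.RUNNING},
    "stop":    {PhaseState.RUNNING: PhaseState.STOPPING,
                PhaseState.HELD: PhaseState.STOPPING},
    "abort":   {PhaseState.RUNNING: PhaseState.ABORTING,
                PhaseState.HELD: PhaseState.ABORTING},
    "reset":   {PhaseState.COMPLETE: PhaseState.IDLE,
                PhaseState.STOPPED: PhaseState.IDLE,
                PhaseState.ABORTED: PhaseState.IDLE},
}

def apply_command(state, command):
    """Apply an operator command, rejecting moves the state model disallows."""
    allowed = TRANSITIONS.get(command, {})
    if state not in allowed:
        raise ValueError(f"{command!r} is not allowed in state {state.name}")
    return allowed[state]
```

Enforcing legal transitions in the controller is what guarantees that a sequence can always be held, restarted, or aborted in a predictable way from the operator interface.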
Editor and Viewer

As a brand-new program type in Studio 5000 Logix Designer®, SequenceManager™ software gives the user the power and flexibility necessary to create dynamic recipes that maximize the effectiveness of the process control system. Without limitations on steps and parameters, and with the ability to run parallel phases, to branch, and to loop back and rerun steps, SequenceManager removes the barriers to achieving effective batch control within the controller.

Sequence Execution

Procedural sequences are executed through native functions in the controller. With an integrated ISA-88 state model, the control and states of phases can be assured. Standard batch functionality, such as manual control and active step changes, is included to give the operational flexibility needed to respond to abnormal process conditions.

Allowing for an Intuitive Batch Application

Responsive batch interactions between the controller and equipment, along with intuitive operator interfaces, provide the core of a truly distributed batching strategy that drives ISA-88 procedural models.

Operator Viewer

FactoryTalk® View SE and ActiveX controls monitor and interact with a running procedural sequence through the HMI. Advanced ActiveX controls provide an intuitive interface for controlling sequences and changing parameters from the operational environment. Improved capabilities allow the user to perform manual step changes and acquire control easily.

Reporting and Analytics

SequenceManager data generates events that are used to produce batch reports and procedural analysis. A separate event client transfers the event data from the Logix controller to a historical database. SequenceManager uses the same data structure and reports as FactoryTalk Batch, which provides a consistent and intuitive batch reporting tool across Rockwell Automation® Batch Solutions.

Additional Information

Visit us at /process

Publication PROCES-PP001A-EN-E – June 2016. Copyright © 2016 Rockwell Automation, Inc. All Rights Reserved.

Phys. Med. Biol. 53 (2008) 4733–4746. doi:10.1088/0031-9155/53/17/018

Stochastic versus deterministic kernel-based superposition approaches for dose calculation of intensity-modulated arcs

Grace Tang1,2, Matthew A Earl1, Shuang Luan3, Chao Wang4, Daliang Cao5, Cedric X Yu1 and Shahid A Naqvi1

1 Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD, USA
2 Department of Medical Physics and Bioengineering, University College London, London, UK
3 Department of Computer Science, University of New Mexico, NM, USA
4 Department of Computer Science and Engineering, University of Notre Dame, IN, USA
5 Swedish Cancer Institute, Seattle, USA

E-mail: snaqvi@

Received 31 March 2008, in final form 9 July 2008
Published 13 August 2008

Abstract
Dose calculations for radiation arc therapy are traditionally performed by approximating continuous delivery arcs with multiple static beams. For 3D conformal arc treatments, the shape and weight variation per degree is usually small enough to allow arcs to be approximated by static beams separated by 5°–10°. But with intensity-modulated arc therapy (IMAT), the variation in shape and dose per degree can be large enough to require a finer angular spacing. With the increase in the number of beams, a deterministic dose calculation method, such as collapsed-cone convolution/superposition, will require proportionally longer computational times, which may not be practical clinically. We propose to use a homegrown Monte Carlo kernel-superposition technique (MCKS) to compute doses for rotational delivery. The IMAT plans were generated with 36 static beams, which were subsequently interpolated into finer angular intervals for dose calculation to mimic the continuous arc delivery. Since MCKS uses random sampling of photons, the dose computation time increased only insignificantly for the interpolated-static-beam plans, which may involve up to 720 beams. Ten past IMRT cases were selected for this study. Each case took approximately 15–30 min to compute on a single CPU running Mac OS X using the MCKS method. The need for a finer beam spacing is dictated by how fast the beam weights and aperture shapes change between the adjacent static planning beam angles. MCKS, however, obviates the concern by allowing hundreds of beams to be calculated in practically the same time as for a few beams. For more than 43 beams, MCKS usually takes less CPU time than the collapsed-cone algorithm used by the Pinnacle3 planning system.

(Some figures in this article are in colour only in the electronic version)

1. Introduction

Traditional rotational techniques such as three-dimensional (3D) conformal arc therapy (Laing et al 1993) deliver a dose distribution to the target by rotating the linear accelerator (linac) gantry continuously about the patient. The field aperture shaped by a multileaf collimator (MLC) conforms to the two-dimensional (2D) projection of the target
at each beam angle. The weightings of the beams within the same arc are the same, as the dose rate is kept constant. In essence, this technique achieves a conformal dose distribution to the tumour without explicitly considering normal tissue toxicity. With the advent of intensity-modulated radiation therapy (IMRT), more sophisticated rotational therapy techniques such as tomotherapy (Mackie et al 1993) and intensity-modulated arc therapy (IMAT) (Yu 1995) have emerged. While tomotherapy delivers highly conformal dose to the target and low dose to normal tissues with a dedicated unit comprised of a CT-like gantry and a binary collimation system, IMAT delivers a comparable dose distribution (Cao et al 2007) with a general-purpose linac equipped with a standard dynamic multileaf collimation system. In contrast to tomotherapy, in which the patient slides through a rotating fan beam of radiation, IMAT irradiates the patient in a cone-beam geometry. A continuous dose distribution is delivered to the target by dynamically changing the MLC-defined fields as the gantry rotates. Intensity modulation from each beam angle is achieved with overlapping arcs, and the number of arcs may increase with the complexity of the intensity modulation.

The treatment planning of rotational IMRT is very similar to conventional IMRT planning, but with a few additional procedures and delivery constraints. For example, in IMAT, the continuous arc in the dynamic delivery is discretized into a large number of evenly spaced static beam angles for optimization purposes. In one approach, the optimization process follows the traditional two-step approach: first, an ideal fluence map for each static beam angle is generated with an optimization algorithm (Webb 1994, Spirou and Chui 1996, 1998). Then, the ideal fluence maps are translated into a set of IMAT sequences with an IMAT leaf sequencing algorithm, which considers additional MLC transitioning constraints in delivery (Cao et al 2006, Shepard et al 2007, Gladwish et al 2007, Luan et al 2008, Oliver et al 2008). In another approach, instead of optimizing intensity maps, the aperture shapes and weightings can be optimized directly in one step. Earl et al have adapted their original direct aperture optimization (DAO) algorithm for step-and-shoot IMRT to IMAT planning (Shepard et al 2002, Earl et al 2003). Considering the MLC constraints and other physical factors, the MLC leaf positions and aperture weightings are simultaneously optimized to obtain a deliverable IMAT sequence. A final dose calculation is performed following the IMAT sequencing to account for collimator-specific dosimetric effects.

Although deterministic dose calculation methods such as convolution/superposition (CVSP) algorithms are widely used due to their dosimetric efficacy, they may be impractical for rotational therapy because of the proportional dependence of CPU time on the number of beams. For rotational treatments, which are typically planned with coarsely and evenly spaced static beams, the dose calculation problem may be more complex. In 3D conformal arc therapy, the MLC-shaped aperture dynamically conforms to the beam's-eye-view (BEV) of the target. Dose calculations with the static planning beams ignore the fact that apertures in between these beams are linearly interpolated by the MLC controller during the continuous arc delivery. As a result, the error in dose calculation may increase. For example, a discretized calculation of a 3D conformal arc therapy plan produces well-known ripple-like artefacts, particularly in
the low isodose lines, increasing the dose uncertainty in regions which may be occupied by an organ-at-risk (OAR) (Yu et al 2002). However, because the BEV of the target is not likely to change drastically from one angle to another, such an approximation is generally adequate. With intensity modulation in IMAT, however, the larger shape variation between neighbouring angles increases dose calculation errors. The error may further increase if the aperture weightings are also allowed to vary in the arc delivery by modulating the machine dose rate. In principle, these errors can be virtually eliminated by discretizing the arcs more finely; but with a large increase in the number of beams, a deterministic dose calculation method may be too time-consuming to be feasible clinically, as the CPU time increases proportionally with the number of beams. In contrast, Monte Carlo based (MC) methods do not suffer from such time scaling: for a given geometry, the rate of statistical convergence depends mainly on the total number of photons used in the simulation, not on the number of beams or segments.

In this paper, we employ a homegrown dose engine that uses the MC kernel-superposition method (Naqvi et al 2003) and compare the calculations with the deterministic CVSP method used in the Pinnacle3 system (Philips Medical Systems, WI, USA). We evaluate whether the two methods can faithfully calculate modulated arc deliveries in clinically feasible time. In addition, dosimetric plan comparisons were obtained for ten clinical cases to examine any dosimetric differences between static-beam planning and continuous arc delivery.

2. Materials and methods

2.1. Monte Carlo kernel-based convolution/superposition dose engine (MCKS)

It is widely accepted that MC methods provide the best accuracy for radiotherapy dose calculations at present. However, the computational times for traditional MC simulations may be too long for clinical implementation because of the large time taken by explicit secondary electron transport. Nonetheless, due to the random sampling nature of MC, the simulation time is theoretically independent of the number of beams. Thus, if the calculation involves a large number of beams, the computational efficiency may be better than that of deterministic CVSP methods, such as the collapsed-cone method (Ahnesjö 1989), employed in commercial planning systems. In contrast, the MC kernel superposition method (MCKS) used in this work performs full primary photon transport in place of a calculation of the total energy released per unit mass (TERMA). Unlike traditional MC codes, it does not carry out an explicit secondary electron transport. Instead it scores the energy deposited by monoenergetic point kernels issued from the photon interaction sites. Hence it gains slightly in accuracy in the detail of photon fluence modelling compared to deterministic polyenergetic CVSP methods, and obviates the need for fluence or kernel hardening corrections.

A typical MCKS simulation starts by sampling the beam angles and field segments randomly with probabilities depending on their monitor unit (MU) weightings. A photon is then randomly sampled from a dual-source model, although other types of source models, such as a phase-space source, are also possible.
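As an illustration of this first step, the MU-weighted sampling could look like the following sketch (our own Python illustration, not the actual MCKS implementation; names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng()

def sample_segment(segment_mus):
    """Pick a (beam, segment) index with probability proportional to its MU."""
    p = np.asarray(segment_mus, dtype=float)
    p /= p.sum()                       # MU weights -> sampling probabilities
    return rng.choice(len(p), p=p)
```

Because every photon history re-samples the segment, refining the arc into more interpolated segments lengthens the list of weights but leaves the cost per history, and hence the total CPU time for a fixed number of histories, essentially unchanged.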
The sampled photon is propagated through the collimator structure and the simulation phantom, and interacts somewhere in the phantom with a probability depending on the photon's energy and the mass density profile along the photon direction. From each interaction site, kernel directions are randomly sampled and energy is deposited in the voxels along these kernel ray directions. Properties such as the MLC transmission, the tongue-and-groove effect and the curved-end geometry of the leaves are taken into account, as each photon is raytraced through the physical model of the MLC and the jaws. Inhomogeneities are handled by rectilinear density scaling (Woo et al 1990), which leads to inaccuracies in regions of severe disequilibrium, as in the case of deterministic CVSP. However, the main advantage of MCKS over deterministic CVSP methods in our current context is its non-serial random sampling of variables, which renders the CPU time for dose calculation virtually independent of the complexity of the treatment plan. Further details of MCKS have been discussed in the literature (Naqvi et al 2003).

2.2. IMAT planning

Ten IMRT cases were studied: a lung, two brain, three prostate and four head-and-neck (HN). For each case, an ideal static-beam IMRT plan utilizing 36 equi-spaced beams was optimized with the P3IMRT module in the Pinnacle3 treatment planning system. The ideal intensity maps for all beam angles were then transferred to either the k-link segmentation algorithm (Luan et al 2008) or the continuous intensity map optimization (CIMO) algorithm (Cao et al 2006, Shepard et al 2007) in order to convert them into a deliverable IMAT sequence. For both sequencers, the maximum distance the MLC is allowed to travel between adjacent planning beam angles, $d_{\max}$, is governed by the maximum MLC speed $v_{\max}$, the static planning beam spacing $\theta$ and the angular speed $\omega$ of the linac gantry:

    $d_{\max} = v_{\max}\,\theta / \omega$.    (1)

With a maximum leaf speed of 3 cm/s and a maximum gantry speed of 6 degrees/s, the maximum displacement the MLC can travel between successive static beam angles 10° apart is 5 cm. Both segmentation algorithms allow variable weightings for the segments within the same arc, implying that dose-rate variation will be required during delivery.
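Equation (1) amounts to a one-line feasibility check; a sketch (hypothetical helper) with the numbers quoted above:

```python
def max_leaf_travel(v_max=3.0, spacing=10.0, gantry_speed=6.0):
    """d_max = v_max * theta / omega, in cm (inputs in cm/s, deg, deg/s)."""
    return v_max * spacing / gantry_speed

assert max_leaf_travel() == 5.0   # 3 cm/s * 10 deg / (6 deg/s) = 5 cm
```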
The above algorithms, henceforth referred to as sequencer A and sequencer B, will be used to illustrate how the dose calculation accuracy depends not only on the plan complexity but also on the sequencing algorithm.

The final output of a leaf sequencing algorithm is an MLC sequence for the continuous arc delivery, derived from the 36-beam static plan. Suppose this plan starts from the gantry angle of 175° and ends at 185° (i.e., an anti-clockwise arc); the optimization algorithms will then optimize at the static beam angles of [175, 165, ..., 185] degrees during planning. For an unbiased delivery, the beam weightings or MUs are delivered in a uniform 'sector' spreading from the individual static beams. Thus, in the delivery sequence file, a kick-off beam is added half of the static beam angular interval before the first beam, and an ending beam is added half of the static beam angular interval beyond the last beam. The transformation from static-beam planning to IMAT delivery sequence is demonstrated in table 1.

Table 1. This table illustrates the transformation from static-beam planning to continuous arc delivery. Optimizations (ideal intensity maps and leaf sequencing) are based on the static planning beams; the MUs are delivered in sectors during the arc delivery. Note that this arbitrary plan displays different MUs or weightings for different beams, indicating the necessity of dose-rate variation in delivery.

Static planning beam angle   Cumulative MU   Absolute MU   Delivery sector   Cumulative MU   Absolute MU
175°                         10              10            180°→170°         10              10
165°                         30              20            170°→160°         30              20
155°                         40              10            160°→150°         40              10
...                          ...             ...           ...               ...             ...
175°                         795             15            160°→170°         795             15
185°                         800             5             170°→180°         800             5

2.3. Rediscretizing the continuous delivery sequence

Since we are not performing a truly continuous arc dose calculation, it is necessary to rediscretize these continuous delivery beams into finely spaced static beams for accurate dose calculation. To closely mimic the continuous arc delivery for dose calculation purposes, we sub-sampled the planning intervals of 10° into every 0.5°. The MLC shape at any angle is obtained by linearly interpolating the shapes at the nearest neighbouring planning angles. Note that every interpolated MLC aperture is simulated in MCKS: the physical characteristics of the collimation system are modelled and photons are raytraced through each aperture. The resultant dose distribution is therefore an accurate representation of the continuous delivery.

Table 2. An example of the rediscretization procedure. The sequence in table 1 is rediscretized to 2° separation for interpolated dose calculation.

Delivery sector   Cumulative MU   Absolute MU   Rediscretization   Cumulative MU   Absolute MU
180°→170°         10              10            179°               2               2
                                                177°               4               2
                                                175°               6               2
                                                173°               8               2
                                                171°               10              2
...               ...             ...           ...                ...             ...
170°→180°         800             5             171°               796             1
                                                173°               797             1
                                                175°               798             1
                                                177°               799             1
                                                179°               800             1
The planned MUs of each aperture are shared evenly with the newly interpolated apertures on both sides of the planned aperture. Depending on the fineness of the interpolation, the MUs distributed over the delivery beam intervals are combined proportionally into an absolute MU value at each rediscretized beam, as depicted in table 2.
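A sketch of the rediscretization (our reconstruction of the procedure in table 2, not the authors' code; apertures are taken to be arrays of leaf positions, the names are hypothetical, and the sector bookkeeping at the arc end-points is simplified):

```python
import numpy as np

def rediscretize(angles, apertures, sector_mus, step=0.5):
    """Linearly interpolate MLC apertures between adjacent planning angles and
    share each sector's MU evenly among the interpolated beams (cf. table 2)."""
    fine_angles, fine_apertures, fine_mus = [], [], []
    for i in range(len(angles) - 1):
        a0, a1 = angles[i], angles[i + 1]
        n = int(round(abs(a1 - a0) / step))       # interpolated beams per sector
        for k in range(n):
            f = (k + 0.5) / n                     # midpoint of each sub-sector
            fine_angles.append(a0 + f * (a1 - a0))
            # The MLC controller moves leaves linearly with gantry angle.
            fine_apertures.append((1 - f) * apertures[i] + f * apertures[i + 1])
            fine_mus.append(sector_mus[i] / n)    # even MU split within a sector
    return np.array(fine_angles), np.array(fine_apertures), np.array(fine_mus)
```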
beams.For both the static-beam calculation and interpolated-static-beam calculation,the CPU time for MCKS to reach 2%statistics in both the high dose and low dose regions is approximately 15–30min,depending on the field size.Table 3shows the CPU time ratio of MCKS and Pinnacle 3,T (v),for various voxel sizes for a 36-beam plan.For the larger voxel sizes,the CPU time of MCKS is slightly longer than Pinnacle 3;but as the voxel size decreases,the computational efficiency of MCKS becomes more apparent,indicating that with MCKS higher resolution dose distribution can be calculated in clinically acceptable times.In figure 2,we plot the time required for computation against number of beams,using the 2%standard deviation criteria,σt ,in the high dose region for MCKS.It was found that whenDose calculations of intensity-modulated arcs473950100150200250300350400450500550024681012141618D o s e (c G y )CPU time (min)Figure 1.The standard deviation (σt )of the dose per voxel versus CPU time in the high dose region,i.e.where the dose is greater than 90%of the prescription dose;and the standard deviation of the dose per voxel versus CPU time in the low dose region,i.e.where the dose is 25–50%of the prescription dose.The curves follow Poisson statistics and also illustrate the independency of CPU time on angular spacing.Table 3.Efficiency versus voxel size in MCKS and Pinnacle 3.A 36-field static-beam plan was calculated in Pinnacle 3and MCKS with different voxel sizes.The end point of MCKS simulation is σt =2%for D >0.9D 0,where D 0is the prescription dose.The ratios T (v)are determined with the CPU time of MCKS and collapsed-cone convolution in Pinnacle 3,T (v)=CPU time MCKS /CPU time Pinnacle .V oxelCPU time (min)CPU time (min)T (v)size (mm 3)Pinnacle 3MCKS 1×1×3109630.582×2×32024 1.203×3×3712 1.714×4×348 2.005×5×3273.50more than about 43beams are involved,MCKS is a more efficient dose calculation method.In recent single-arc techniques,such as the Varian RapidArc TM ,there are up to 177beams in a treatment plan.These plans are calculated utilizing four CPUs with an analytical dose computation algorithm.However,with the same number of CPUs,MCKS can perform an even faster dose computation and gain in dose calculation accuracy at the same time.3.2.Dosimetric comparison of static-beam calculation and interpolated-static-beam calculationTable 4summarizes the differences in dose distributions observed between the static-beam and interpolated-static-beam calculations for all ten cases that were generated with sequencer B.4740G Tang et al100200300400500600700050100150200250300350400C P U t i m e (m i n )N beamsFigure 2.Required CPU time as a function of the number of beams using MCKS and collapsed-cone convolution in Pinnacle 3with a 2×2×3mm 3dose grid;the two lines intersect at about 43beams (σt =0.02for high dose region is used as the stopping criteria for MCKS).Significant dosimetric differences,up to 17%,occurred in the high dose regions,especially for the complex cases such as the head and rge discrepancies can also be seen in the critical organs.For example,in the prostate cases,the maximum dose to the rectum is significantly lower in the interpolated-static-beam calculations.With the increased number of beams utilized in dose calculation,the ripple-like artefacts are smoothed out in the corresponding interpolated-static-beam plans.The significance of this is illustrated in figure 3where the static-beam calculation overestimates the brainstem dose in a brain case.With such significant degradation in plan 
quality, the treatment planning approach might need to be adjusted to gain efficiency. If an optimization algorithm incorporates a similar continuous arc dose calculation approach, one can update the algorithm with the true dose distribution using a set of finely interpolated beams at a certain iteration, such that the degradation between static-beam planning and continuous arc delivery can be minimized.

3.3. Effects of angular spacing in dose calculation

A HN case was randomly selected from the ten cases to demonstrate the influence of sampling frequency on the accuracy of arc dose calculation. This HN case was planned with 36 beams and sequenced with sequencer B. Five separate dose distributions were computed with beams spaced every 0.5° (720 beams), 1° (360 beams), 2° (180 beams), 5° (72 beams) and 10° (36 beams). Table 5 summarizes the differences between the five scenarios, where all the calculations are normalized to the 36-beam calculations. The dose distributions calculated with 0.5° spacing are significantly different from that calculated with 10° spacing, while the dose distributions calculated with 2° spacing and 1° spacing are similar to the 0.5° spacing calculations. However, for the computation with a coarser spacing of 5°, discrepancies are observed when compared to all the other sets of calculations. This is because the resultant dose distributions are affected by the additional interpolated MLC apertures. It is not obvious how these additional interpolated MLC shapes influence the accuracy of dose calculation.

Table 4. Dosimetric differences in the static-beam calculation and the interpolated-static-beam calculation for all ten patient case studies. The interpolated-static-beam plans were normalized at the mean dose of the PTV of the corresponding static-beam plans. Dose differences are defined as [(D720-beam − D36-beam)/D36-beam] × 100 (%).

Case | ROI | Dose metric | Difference (%)
Lung | GTV | D95 (a) | −1.8
 | PTV | D95 | −10.1
 | Left lung | V20 (b) | −10.7
 | Right lung | V20 | −9.9
 | Heart | Max | −11.8
 | Spinal cord | Max | 0.6
Brain 1 | GTV | D95 | −1.0
 | PTV | D95 | −7.2
 | Left eye | Max | −6.6
 | Right eye | Max | −6.8
 | Left optic nerve | Max | −9.4
 | Brainstem | Max | −1.5
 | Spinal cord | Max | −8.7
Brain 2 | PTV | D95 | −3.9
 | Left eye | Max | −6.5
 | Right eye | Max | −2.0
 | Left optic nerve | Max | −0.5
 | Right optic nerve | Max | −3.6
 | Brainstem | Max | −7.2
Prostate 1 | GTV | D95 | −0.9
 | PTV | D95 | −6.4
 | Bladder | V66.7 (c) | −27.4
 | Rectum | V71.4 (c) | −94.2
Prostate 2 | GTV | D95 | −0.5
 | PTV | D95 | −6.5
 | Bladder | V66.7 | −32.2
 | Rectum | V71.4 | −99.2
Prostate 3 | GTV | D95 | −1.2
 | PTV | D95 | −7.4
 | Bladder | V66.7 | −22.2
 | Rectum | V71.4 | −99.9
HN 1 | CTV1 | D95 | −4.0
 | CTV2 | D95 | −9.1
 | CTV3 | D95 | −15.0
 | Left parotid | Mean | −16.5
 | Right parotid | Mean | −10.7
 | Spinal cord | Max | 3.1
HN 2 | GTV | D95 | 0.7
 | PTV | D95 | −6.4
 | Optical chiasm | Max | −13.8
 | Left parotid | Mean | −10.8
 | Right parotid | Mean | −9.3
 | Right optic nerve | Max | −13.5
 | Brainstem | Max | −7.5
 | Spinal cord | Max | −13.2
HN 3 | GTV | D95 | −4.4
 | PTV | D95 | −16.9
 | Left parotid | Mean | −19.1
 | Right parotid | Mean | −18.2
 | Optical chiasm | Max | −25.5
HN 4 | GTV | D95 | −1.1
 | PTV | D95 | −6.7
 | Vocal cord | Max | −5.0
 | Spinal cord | Max | −3.7

(a) Volume of ROI that receives 95% of the prescription dose.
(b) The dose that is received by 20% volume of the ROI.
(c) Bladder and rectum dose limits are referenced from the RTOG protocol 0126.

Figure 3. A MCKS-calculated isodose distribution comparison of a brain case illustrating that the ripple artefacts in the low dose region of the 36-beam calculation are smoothed out in the 720-beam calculation. Apparent dosimetric discrepancies at the brainstem (magenta region) are also observed. The thin lines represent the 36-beam calculation and the thick lines represent the 720-beam calculation. Larger dosimetric differences can be seen in the low dose region. (The isodose lines counting from the innermost correspond to 50%, 40%, 30% and 20% of the prescription dose.)
Table 5(a) shows that with more than 360 beams in dose calculation, the dose differences in both the targets and normal tissues do not differ much from the calculations with higher sampling frequency. Furthermore, the accuracy of arc dose calculation does not directly depend on the number of beams used in the calculation but on the variation in MLC apertures and their weightings between the static planning beams. Table 5(b) displays a comparison of the dosimetric discrepancies of a set of calculations with different numbers of beams compared to a 36-beam calculation.

Table 5. A comparison of a set of dose calculations for a HN case utilizing different numbers of beams in calculation. The HN case was planned with sequencer B. All doses are normalized to the 36-beam calculation. A MLC constraint of 5 cm was used in planning for (a) and a MLC constraint of 3 cm was used in planning for (b). Dose differences are defined as [(Dx-beam − D36-beam)/D36-beam] × 100 (%).

ROI | Dose metric | 72-beam | 180-beam | 360-beam | 720-beam
(a)
CTV | D95 | −1.2 | −4.0 | −4.3 | −4.4
PTV | D95 | −21.0 | −16.6 | −16.5 | −16.9
Left parotid | Mean | −20.2 | −18.2 | −19.1 | −19.1
Right parotid | Mean | −18.8 | −17.8 | −18.5 | −18.2
Optical chiasm | Max | −27.5 | −25.6 | −25.8 | −25.5
(b)
CTV | D95 | 0.3 | 0.2 | 0.5 | 0.1
PTV | D95 | −3.7 | −2.6 | −2.5 | −2.7
Left parotid | Mean | −11.3 | −9.8 | −10.3 | −10.6
Right parotid | Mean | −11.1 | −9.6 | −10.1 | −9.8
Optical chiasm | Max | −18.6 | −14.3 | −15.7 | −14.2

The plan in table 5(b) is similar to that in table 5(a), except that the maximum MLC travel allowed between any two planning angles is 3 cm instead of 5 cm. Hence, MLC aperture shape variation is reduced for the plan in table 5(b) compared to that in table 5(a). It is shown that with smaller MLC shape variation, the differences of all sets of calculations from the 36-beam calculation are substantially lower compared to table 5(a), and at least 180 beams may be used to provide a dose distribution that appropriately and sufficiently approximates the continuous arc delivery.

3.4. Dependence on the optimization/leaf sequencing algorithm

In general, the more the aperture shapes vary between the adjacent planning angles, the more we expect the continuous arc delivery to differ dosimetrically from the planned arc. However, many plans do not require complex aperture shape variation to produce an acceptable plan quality, such as those cases with small and symmetrical targets. Furthermore, different sequencing algorithms produce different MLC shape and weight variations even when the same MLC constraints are used. For the same HN case used in section 3.3, two plans were generated by each of the sequencers with MLC constraints of 3 cm and 5 cm. Given that both sequencers produce similar plan quality, B-sequenced plans (plan Bs) show more significant dosimetric differences between the static-beam calculation and the interpolated-static-beam calculation than A-sequenced plans (plan As), because the aperture shapes and beam weights vary more sharply in plan Bs, as shown in figure 4. For plan A and plan B with a maximum MLC displacement constraint of 3 cm, plan B shows a significant difference in the PTV coverage between the static-beam and the interpolated-static-beam calculation. Note that the PTV dose is coincidentally more homogeneous in the interpolated-static-beam calculation, as depicted in figure 4(b). However, it is generally difficult to predict whether the interpolated-static-beam calculation would produce positive or negative dose differences. Nonetheless, it is known that with largely different aperture shapes between the adjacent planning beams, the
interpolated beams in between the beam intervals will inherit the large shape variation,
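The interpolated-static-beam calculation discussed in sections 3.2–3.4 amounts to linear interpolation of the machine parameters between adjacent planning angles. The following C sketch illustrates that idea for a single leaf bank; the Beam structure, the leaf count, and all numbers are illustrative assumptions, not details from the paper.

```c
#include <stdio.h>

#define NLEAVES 4   /* toy leaf count; a real MLC has dozens of leaf pairs */

/* One control point of the arc: gantry angle, relative weight, and the
 * positions of one bank of MLC leaves. Hypothetical data layout. */
typedef struct {
    double angle;            /* gantry angle (degrees) */
    double weight;           /* relative beam weight   */
    double leaf[NLEAVES];    /* leaf positions (cm)    */
} Beam;

/* Linearly interpolate leaf positions and weight between two planning
 * beams; renormalization of weights over the whole arc is assumed to
 * happen elsewhere. */
static Beam interpolate(const Beam *b1, const Beam *b2, double angle)
{
    double t = (angle - b1->angle) / (b2->angle - b1->angle);
    Beam b;
    b.angle = angle;
    b.weight = (1.0 - t) * b1->weight + t * b2->weight;
    for (int k = 0; k < NLEAVES; k++)
        b.leaf[k] = (1.0 - t) * b1->leaf[k] + t * b2->leaf[k];
    return b;
}

int main(void)
{
    Beam b1 = {  0.0, 1.0, { -2.0, -1.0, 0.5, 3.0 } };
    Beam b2 = { 10.0, 1.0, { -4.0,  0.0, 2.5, 1.0 } };
    /* Refine a 10-degree planning spacing to a 2-degree calculation
     * spacing, the level table 5 suggests is adequate. */
    for (double a = 2.0; a < 10.0; a += 2.0) {
        Beam b = interpolate(&b1, &b2, a);
        printf("angle %4.1f deg: leaves %5.2f %5.2f %5.2f %5.2f\n",
               a, b.leaf[0], b.leaf[1], b.leaf[2], b.leaf[3]);
    }
    return 0;
}
```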
The Leading Development of Artificial Intelligence (English essay, equivalent-education qualification exam)

The Rapid Development of Artificial Intelligence

Artificial Intelligence (AI) has been a topic of fascination and speculation for decades. In recent years, the rapid advancements in this field have been truly remarkable. AI systems are now capable of performing tasks that were once thought to be the exclusive domain of human intelligence. From playing complex games like chess and Go to diagnosing medical conditions and driving vehicles, AI has proven its versatility and potential to revolutionize various industries.

One of the most significant developments in AI has been the progress made in machine learning. This approach to AI involves the creation of algorithms that can learn from data and improve their performance over time without being explicitly programmed. This has led to the development of powerful AI models, such as deep learning neural networks, which have demonstrated remarkable abilities in areas like image recognition, natural language processing, and predictive analytics.

The applications of AI are vast and diverse. In healthcare, AI-powered systems are being used to assist in the early detection of diseases, personalize treatment plans, and streamline administrative tasks. In the financial sector, AI is being leveraged to detect fraud, optimize investment portfolios, and automate trading decisions. In the transportation industry, autonomous vehicles equipped with AI-powered navigation and decision-making capabilities are being developed, with the potential to improve safety, reduce traffic congestion, and lower emissions.

Another area where AI is making a significant impact is in the field of scientific research. AI algorithms can analyze vast amounts of data, identify patterns, and generate hypotheses that can guide further investigation. This has led to breakthroughs in fields like materials science, drug discovery, and climate modeling, where AI is helping researchers uncover new insights and accelerate the pace of discovery.

The rapid development of AI has also raised important ethical and societal concerns. As AI systems become more sophisticated and integrated into our daily lives, there are concerns about the potential for job displacement, algorithmic bias, and the need for robust privacy and security measures. Policymakers, researchers, and industry leaders are actively working to address these challenges and ensure that the benefits of AI are realized in a responsible and equitable manner.

One of the key drivers of the rapid development of AI has been the exponential growth in computing power and data availability. The advent of powerful graphics processing units (GPUs), the proliferation of sensors and connected devices, and the availability of large-scale datasets have all contributed to the rapid advancement of AI technologies. Additionally, significant investments in AI research and development by both the public and private sectors have fueled innovation and pushed the boundaries of what is possible.

As AI continues to evolve, it is likely that we will witness even more remarkable advancements in the years to come. Some experts predict that AI will soon surpass human-level performance in many tasks, leading to a new era of "artificial general intelligence" (AGI) – AI systems that can match or exceed human intelligence across a wide range of domains. However, the path to AGI is not without its challenges.
Significant technical hurdles, such as the development of robust and reliable AI systems, the ability to handle complex and uncertain environments, and the challenge of imbuing AI with human-like common sense and reasoning, must be overcome. Additionally, the ethical and societal implications of AGI must be carefully considered to ensure that its development and deployment are aligned with human values and priorities.

Despite these challenges, the potential benefits of advanced AI are vast. Imagine a world where AI-powered systems can help us solve some of the most pressing global challenges, from climate change and disease to poverty and conflict. AI could assist in the development of sustainable energy solutions, the discovery of new medical treatments, and the design of more efficient and equitable social systems. The possibilities are truly endless.

In conclusion, the rapid development of AI is a testament to the ingenuity and creativity of human beings. As we continue to push the boundaries of what is possible, it is crucial that we do so in a responsible and ethical manner, ensuring that the benefits of AI are shared equitably and that its risks are mitigated. By embracing the potential of AI while addressing its challenges, we can unlock a future that is more prosperous, sustainable, and fulfilling for all.
Foreign literature on shredders

Designing and Manufacturing an Appropriate Technology Shredder in a Developing Country

Jeffrey P. Weiss

An Engineering Project submitted to the faculty of the School of Engineering in partial fulfillment of the requirements of the Masters of Manufacturing Systems Engineering degree

University of St. Thomas
St. Paul, Minnesota
December 2005

Abstract

The focus of this project was to redesign a simple manual shredding machine used to shred breadfruit for the Republic of Haiti. A breadfruit shredder previously designed by a student senior design team was used as the basis for this project. The objective was to apply manufacturing principles, such as Design for Manufacturing and Assembly (DFMA), to simplify and reduce the cost of this machine so that it would be more accessible to poor farmers in Haiti. Each part of the shredder was examined using the DFMA methodology to determine if it could be eliminated or redesigned to simplify it while still making a quality product that met the performance criteria. The limitations of manufacturing a product in a developing country were also taken into consideration and played a key role in the outcome of the design. The result was a design that had a reduced number of parts, was more robust, easier to clean, simpler to build in a developing country, used materials that were more commonly available, and cost less to make.

Revised Tommy Breadfruit Shredder

Acknowledgements

I would like to acknowledge and send my sincerest thanks to my Project Committee of Dr. Camille George, Dr. Fred Zimmerman, and Mr. John Walker. They contributed numerous ideas during both the project phase and during the writing process. This resulted in a much better product that will hopefully improve the lives of people around the world. Dr. George also spent a great deal of time correcting and critiquing the writing of someone who was unaccustomed to writing in the academic thesis style.

Many other people also lent a voice to the project during the research and design review phases. These include Karl Mueller, Bruce Humphrey, Hank Garwick, Dave Elton, John Schevenius, Gary Olmstead, Fred Hegele, Pat O'Malley, Troy Pontgras, Yvonne Ng, and Clay Solberg. These people took the time to help and offered ideas that had previously been missed, resulting in a better product.

I would also like to acknowledge the contribution of Dr. Mike Hennessey at the University of St. Thomas and the work of five of his undergraduate students. Justin Jackelen, Michael Boston, Angela Wachira, Keli Lais, and Matt Ellision took on the task of turning the revised breadfruit shredder drawings into computer-animated Solidworks models. This contributed greatly to the visual understanding of the project and presentation.
They also provided the fabrication prints that accompany this paper.

Table of Contents

Chapter I: Introduction
  The Haitian Situation
  Breadfruit
  The Tommy Shredder
  The Beneficiaries
  Project Motivation
Chapter II: Research and Prior Work
  UST Senior Design Team Work
  Literature Search
  Compatible Technology, International
  Institutional Libraries
  Research and International Organizations
  Expert Inquiries
Chapter III: Project Proposal
  Project Objectives
  Alternative Methods
  Project Constraints
  Project Budget
  Financial Justification
Chapter IV: Findings and Results
  Redesign Process
  Design for Manufacture and Assembly Process
  Alternative Designs
  Design Reviews
  Design Modifications
  Fabrication Lessons
  Design Variations
  Shredder Blade Project
  Testing the Redesigned Shredder
  Redesign Results
  Schedule
  Final Budget
Chapter V: Discussion and Ramifications
  Project Dissemination
  Implementing the Shredder in Developing Countries
  Project Obstacles
Bibliography
Appendices
  Appendix 1: Revisions 1 and 2
  Appendix 2: Revisions 3 and 4
  Appendix 3: Revisions 5 and 6
  Appendix 4: Revisions 7 and 8
  Appendix 5: Breadfruit Shredder Exploded Layout
  Appendix 6: Bill of Materials – Breadfruit Shredder
  Appendix 7: Frame Plate Fabrication
  Appendix 8: Drive Shaft Fabrication
  Appendix 9: Feeder Tube Fabrication
  Appendix 10: Blade Mount Fabrication
  Appendix 11: Shredder Press Weight Fabrication
  Appendix 12: Shredder Assembly Instructions
  Appendix 13: Original Project Schedule
  Appendix 14: Revised Project Schedule
  Appendix 15: Preliminary Sketch by John Walker
  Appendix 16: Contributions by Karl Mueller
  Appendix 17: Drawing #001 – Frame Plate
  Appendix 18: Drawing #002 – Drive Shaft
  Appendix 19: Drawing #003 – Feeder Tube
  Appendix 20: Drawing #004 – Drive Shaft Bearing
  Appendix 21: Drawing #005 – Handle
  Appendix 22: Drawing #006 – Blade Mount
  Appendix 23: Drawing #007 – Center Divider
  Appendix 24: Drawing #008 – Center Divider Spacer Tube
  Appendix 25: Drawing #009 – Shredder Press Weight

Table of Figures
  Figure 1: Map of the Republic of Haiti (CIA Fact Book, 2005)
  Figure 2: Fruit of the Breadfruit Tree ()
  Figure 3: Senior Design Team Shredder
  Figure 4: Garwick/Elton Breadfruit Shredder
  Figure 5: Garwick/Elton Bicycle Drive Mechanism
  Figure 6: Original Tommy Shredder Exploded View
  Figure 7: Handle/Drive Shaft Changes
  Figure 8: Drive Shaft Bearing Changes
  Figure 9: Frame Plate Changes
  Figure 10: Center Divider Changes
  Figure 11: Blade Mount Changes
  Figure 12: Combined Feeder Tube Hoop and Spacer
  Figure 13: Alignment of Bushing Supports
  Figure 14: Wooden Bushing Variation
  Figure 15: Shredder Blade Profile Die, Profile Punch, and Hole Template
  Figure 16: Fabricated Blade
  Figure 17: The Revised Tommy Shredder

Table of Tables
  Table 1: Haiti Facts (CIA Fact Book, 2005)
  Table 2: Proposed Budget
  Table 3: Shredder Punch Hole Test
  Table 4: Final Budget

Chapter I: Introduction

This project will focus on redesigning for manufacture a simple breadfruit shredder for the Republic of Haiti. As one of the poorest nations in the Western hemisphere, Haiti is a country that lacks a stable government, education system, manufacturing base, or infrastructure.
Malnutrition is a problem to the extent that the United States Department of State estimated that the child malnutrition rate was 22 percent in 2000 (). Breadfruit is a natural food resource that is underutilized because it rots quickly and is difficult to store using traditional methods. Drying breadfruit can extend its shelf life, and this process is best done when the shreds are even and consistent.

A simple manual shredder was developed to produce consistent shreds for the inhabitants of Haiti by a group of senior engineering students at the University of Saint Thomas (UST) in conjunction with Compatible Technology International (CTI), an international non-profit organization. The student version of the shredder was designed and tested and found to meet all of the criteria that they had established. Despite meeting the requirements, the machine had the potential to be optimized to better reflect the manufacturing capabilities available in a developing country. This paper will document the redesign process and look at the manufacturing principles that drove this process. The end result was a machine that was simpler to build with the basic machine tools that would normally be found in a developing country such as Haiti, used materials that were more commonly available, had a reduced number of parts, was more robust, was easier to clean, and had a reduced cost.

The Haitian Situation

Haiti is considered to be the poorest and most destitute country in the Western hemisphere (CIA Fact Book, 2005). A majority of its population lives in poverty and relies on subsistence farming for survival. It has a long history of political upheaval and unrest since it gained its independence from France in 1804. The rotation of various governments and civil wars has hindered investment in the country and led to high unemployment and dismal living conditions for its inhabitants. The education system is broken or non-existent and there has been an exodus of knowledge from the island as people flee the dire conditions and turmoil.

Table 1: Haiti Facts (CIA Fact Book, 2005)
Population (Estimate, 2004): 8,121,622
Land Area: 27,750 sq km
Average Life Expectancy: 53 years
Population Below Poverty Line: 80%
Percentage of Population in Agriculture: 66%
Unemployment Rate (no formal job): 66%
Average Literacy Rate: 52%

Figure 1: Map of the Republic of Haiti (CIA Fact Book, 2005)

Most of the original Haitian forests have been cut down for fuel and the desire to cultivate more land. The weak governments have been unable or unwilling to confront this problem and it has continued unchecked. This deforestation has resulted in massive land erosion in the mountainous country and a net loss of arable land (CIA Fact Book, 2005). Breadfruit trees are abundant throughout the island and are one of the few trees that have survived the deforestation process.

Breadfruit

Breadfruit is an important food source and has become a staple for the inhabitants of warmer islands in the Caribbean Sea and Pacific Ocean. It has some nutritional value and a high starch content (Adebowale, 2005). Typical ways of preparing breadfruit are grilling, roasting, adding it to soups, and mashing.

Figure 2: Fruit of the Breadfruit Tree ()

One of the unique properties of breadfruit is its limited shelf life. Once it ripens and comes off of the tree, it will last between one and three days (). The breadfruit trees of Haiti produce fruit twice a year for a three-week period (six weeks per year).
Much of the fruit rots on the ground because of the inability to consume it all during the short time that it is in season (Capecchi, 2005). Typical preservation methods for fruit, such as canning, can be done, but these value-adding processes are not common in Haiti and will increase the price of the food. A more economical way of preserving the breadfruit needed to be developed to utilize its potential to alleviate long-term hunger on the island.

The Tommy Shredder

The development of a breadfruit harvesting process was taken on by two groups of senior mechanical engineering students as their Senior Design Projects in the 2003-2004 academic year. The first team attempted to devise a solar drier to quickly dehydrate the shredded breadfruit. The drying project showed that the shredded fruit could be successfully air-dried with an optimal shred size of 1/2" wide (Emiliusen, Mauritzen, McGruder, and Torgerson, 2004). The dried product can be stored for up to a year.

The second team worked on developing a small, economical shredder that could efficiently and quickly process the breadfruit down into shreds so that it could be dried (Anderson, Fox, Rick, and Spah, 2004). The concept and methodology for the basic shredder design was done by the senior design team, as was the testing to prove out the final design, and will not be repeated in this paper. The purpose of this project was to examine and simplify the design, focusing primarily on its manufacturability.

The Beneficiaries

The target beneficiaries of this shredder will be women's cooperative groups based in Haiti. CTI, whose mission is to bring appropriate technologies to help increase food supplies and storage capacities in the developing world, has been working with the Methodist Church missions in Haiti on preserving breadfruit. Dried breadfruit can be ground into flour, and local CTI volunteers have created several recipes using this breadfruit flour as the bulk material. UST teamed up with CTI to develop a simple shredder that could be used to shred the breadfruit. The goal of this joint project was to create a shredder that was simple to use and economical to manufacture so that local versions could be bought with micro-loans managed by the Methodist Church of Haiti. CTI also planned on helping set up a program to buy the shredded/ground breadfruit and process it into a cereal for Haitian school children (Capecchi, 2004). The plan was to take a resource, preserve it and add value, and then process it to create a commercial good. The objective of this undertaking is to give the women's co-ops a starter model shredder that would allow them to generate some income from a readily available raw material.

This shredder is also capable of processing a variety of different produce. There have been inquiries into its ability to shred cassava, sweet potatoes, and red peppers. The alternative uses of the shredder will not be explored here, but the final design for this project will be made readily available and has potential uses worldwide. It will also be submitted to appropriate technology journals to broaden its dissemination.

Project Motivation

The author of this master's engineering project has spent time in developing countries and realized that there are often raw materials that are not fully utilized and exploited. The people generally lack the knowledge to manufacture items in large volumes and have limited manufacturing equipment, start-up money, a reliable source of power, or an infrastructure to transport the goods (Obi, 1999).
However, these people are extremely creative and will adapt what they have on hand to work in almost any situation (Humphrey, 2005). The motivation for this project was to help the people develop their own economy and hopefully raise their standard of living. This project will not only benefit the women of Haiti; it will help the local machine shops, provide work at the processing plant, and give the children of Haiti a stable, year-round diet.

Chapter II: Research and Prior Work

The research for this project consisted of searching major journals, books on manufacturing in developing countries, contacting major research libraries, and personal contacts with experts in various fields. Many avenues for help were explored to gather information to improve the final design. The research phase of this project found that the work done by the UST senior design team was one of the few to address the issue of constructing a simple shredder for manufacture and use in a developing country.

UST Senior Design Team Work

This project is based on the work previously done by a University of Saint Thomas (UST) senior engineering design team whose goal was to develop the original breadfruit shredder based on the needs of the country of Haiti and the criteria established by Compatible Technology, International (CTI). The purpose of the original project was to "find the most efficient means of mechanically shredding breadfruit to best prepare the fruit for the drying process" (Anderson et al, 2004). The team developed concepts and tested many different methods of shredding the breadfruit and the mechanical actuators that would be needed for each prototype. The concepts were evaluated and ranked, and the team chose the method best suited for their needs. The 'Tommy Shredder' developed by the student senior design team is shown in Figure 3, and their paper can be found on the UST website at /cmgeorge/breadfruit_shredder/.

Figure 3: Senior Design Team Shredder

The senior design team had originally planned on testing the shredder in its target environment of Haiti, but that country was not accessible at the time due to political unrest. A prototype shredder was built and brought to the Caribbean island of St. Vincent, where there was an ample supply of breadfruit and established contacts. On the island of St. Vincent, the design was field-tested using breadfruit and the results recorded. The shredder met all of the target criteria established by CTI and the design team. It produced an average shred rate of 200 pounds/hour and cost less than $100 U.S. to build (Anderson et al, 2004). This shredder became the baseline for the current project.

Literature Search

A literature search done using the Compendex database at the University of Minnesota found several articles that were possibly related or relevant to the design of the breadfruit shredder. These articles were retrieved and analyzed, with the result being that a majority were not related or did not contain information relevant to the design of an appropriate technology machine. Many of the applicable articles are referenced throughout this paper, while those with less relevance to the project are cited in this section.

In 'Functional Properties of Native, Physically and Chemically Modified Breadfruit (Artocarpus Artilis) Starch', Adebowale, Olu-Owolabi, Olawumi, and Lawal (2005) dealt with extracting starch from breadfruit.
In 'Rediscovery of Local Raw Materials: New Opportunities for Developing Countries', El-Mously (1997) discussed ways that developing countries could use local, undervalued resources to reduce their dependence on foreign imports. Breadfruit would be an undervalued resource on most Caribbean islands, but the article did not provide information that would be relevant to the design of a shredder or this project. In 'Framework for Selecting and Introducing Appropriate Production Technology in Developing Countries', Bruun and Mefford (1996) looked at working with the culture and education of developing countries when setting up a production facility. These are issues that will not be dealt with in this paper. In 'Role of Materials in Developing Countries', Villas-Boas (1990) discussed the lack of use of new, high-tech materials in developing countries due to their cost and availability. Every effort was made to design the shredder using only common materials that would typically be available in a poor, developing country. In 'Supplier Selection in Developing Countries: a Model Development', Motwani, Youssef, Kathawala, and Futch (1999) discussed issues involving selecting or qualifying vendors to produce a product. This will be the responsibility of the organization having the shredder built, and is beyond the scope of this project.

A search of the Internet using Google Advanced Scholar provided more papers that had some relevance. Thakur, Varma, and Goldey (2001), in 'Perceptions of Drudgery in Agriculture and Animal Husbandry Operations: A Gender Analysis From Haryana State, India', discuss the fact that women in developing countries spend much more time working in agriculture than men and that the tasks given to them are more monotonous and tedious. The article supports the need for a device like the breadfruit shredder, which has the potential to lift them out of that situation. In 'A Framework for Implementing Appropriate Manufacturing Systems in Developing Economies', Obi (1999) looked for explanations of why the Industrial Revolution passed by most developing countries and explored ways that these countries can start utilizing their vast manpower resources. He discusses the need to change workers' attitudes. Finally, in 'Meeting a Pressing Need', Hynd and Smith (2004) discuss a simple oilseed ram press as an appropriate technology device for small-scale extraction of oil from seeds and nuts. They examine some of the cultural issues that were associated with implementing the oilseed ram. The insights of this article could be used as a guide for undertaking the next phase of the shredder project: implementation into the Haitian culture. They briefly talk about some of the manufacturing difficulties, such as poor quality, associated with producing goods in a developing country.

The best book relating to appropriate technology equipment used in developing countries is the 'Appropriate Technology Sourcebook' compiled by Darrow and Saxenian (1993). It is considered 'The Bible' by people in the appropriate technology field, such as those at CTI (Humphreys, 2005). The book is a resource listing appropriate technology machine books and papers that are available for purchase from other sources. It does not contain any designs of its own, but it does give a brief description of the contents of the papers and designs that are available for order.
A search of this book and the updated website did not reveal any designs for manual shredders or grinders (/atnetwork/atsourcebook/index).

Compatible Technology, International

Compatible Technology, International (CTI) () is an excellent local resource for dealing with appropriate technology in developing countries and has extensive connections throughout the world. It is an organization dedicated to using simple devices to improve food production and storage in the third world. They are a stakeholder in the design and development of the original shredder. The director of CTI is Bruce Humphreys, who granted an interview on issues dealing with manufacturing in developing countries (2005). Some of the key points that he brought up were:

- Manufacturers in developing countries do not necessarily build parts to a fabrication print. Everything is custom and will look similar to what is desired, but is not quite the same.
- Creativity is not rewarded in many cultures and there is a desire to continue doing things the old way.
- Expectations in quality and standards will probably not be met. They do not typically produce to the same quality as is expected in the U.S.
- There are cultural norms and practices that will be slow to change and may not be overcome. This would primarily relate to the target market of women. Women tend to not use machines, thus the design must be easy to use and relatively tool free.

These assertions by Mr. Humphrey were reinforced in other literature relating to the topic (Obi, 1999).

Hank Garwick and Dave Elton are the two CTI volunteers who are most closely tied into the Haiti mission. They have made several trips to Haiti on humanitarian missions associated with both CTI and the Methodist Church. The two offered insight into the Haitian mindset, manufacturing capabilities in Haiti, and experience in shredding breadfruit. Their comments on the manufacturing capabilities in Haiti were that "we would be lucky to find someone who could read a print, and even if they can they probably won't follow it" (Garwick, 2005).

Garwick and Elton were not satisfied with the work of the UST senior design team and continued to develop the shredder after the senior design team's project ended. They made several small modifications to the design, built a prototype, and brought it down to Haiti to be tested (Fig. 4). The Garwick/Elton version of the shredder did not work as well as intended and did not produce the desired shred rate found by the UST engineering team (Garwick, 2005). It is unclear why this was the case. Several of the better design changes that they made to their shredder were incorporated into the current shredder design. These would include the sheet metal center divider and ideas on the retainer for the shredding blade.

Figure 4: Garwick/Elton Breadfruit Shredder

Garwick and Elton believed strongly that the prime power for the operation of the shredder should be a leg-driven bicycle type mechanism instead of the current hand-powered crank. Figure 5 shows a bicycle drive assembly that they added to a shredder (Garwick, 2005). This project is focused on producing a shredder for the poorest of people in Haiti, and it was felt that a bicycle type mechanism would significantly add to the cost of the machine while making it unnecessarily complex. It is expected that this shredder will only be fully utilized for several weeks a year during the breadfruit harvest and would not justify the higher cost.
The current design is one such that a bicycle type drive could be added to the shredder at a later date if desired by the user.

Figure 5: Garwick/Elton Bicycle Drive Mechanism

Institutional Libraries

The United States Military Academy at West Point has an extensive library relating to military manuals and papers. The U.S. military routinely performs operations in developing countries, and the units typically tasked with helping the local population are the Civil Affairs units and the Special Operations Forces. These units are often involved in nation building and community development and have close contact with the people. Daniel Prichard, a research librarian at the library, was contacted about any pamphlets, articles, or papers that the library may have on a shredder or appropriate technologies in developing countries. Mr. Prichard found nothing relevant at the Academy's library (Prichard, 2004).

A search of the University of St. Thomas's and the University of Minnesota's library systems found no books or on-site literature that was relevant to the design of the breadfruit shredder.

Research and International Organizations

The Hawaiian Breadfruit Institute is an organization based in Hawaii whose mission is "to promote the study and use of Breadfruit for food and reforestation" (). It tracks and propagates the 120 known varieties of breadfruit found on the islands of the Pacific Ocean and Caribbean Sea. Dr. Diane Ragone, director of the Hawaiian Breadfruit Institute, was contacted regarding the shredding of breadfruit and the possible existence of similar devices. Dr. Ragone responded that she had not heard of any similar processing methods for breadfruit. Her primary concern was that the latex found naturally in breadfruit would 'gum up' the machine and clog the shredding blade (Ragone, 2005). This issue was raised with Hank Garwick of CTI, and he stated that most of the latex in breadfruit was found in the skin. The skin is removed before processing, so this did not appear to be a concern for the shredder. The field tests in St. Vincent by the senior engineering student team did not report any excessive latex build-up on the blades.

The International Research Development Centre (IRDC) is a Canadian-based organization whose purpose is 'to build healthier, more equitable, and more prosperous societies' (www.irdc.ca). An e-mail was sent to IRDC explaining the project and asking about any information that they might have on shredders. The response was a link to their website, which brought up nothing of value. A similar search of the United Nations Development Program (UNDP) provided no additional information ().

Research was done with the United States Food and Drug Administration (FDA) to see if there were requirements or recommendations for the food industry regarding food processing equipment or the components used in them. The purpose was to find out which materials were considered "Food Grade" and suitable for food contact. The goal is to make the shredder as sanitary and safe as possible regardless of the standards that may be present in a developing country. It was found that the FDA does not keep a list of recommended materials, but has established a list of requirements that manufacturers must meet in order to state that a material is approved for food contact. The premise of the requirements is that if any of the material could 'migrate' to the food, it must not pose a threat to humans (FDA, 1999).

Expert Inquiries

The Minneapolis/St.
Paul area is home to several large food producing companies such as General Mills. Food Safety personnel at General Mills were contacted to ask about standards for their food production equipment and any suggestions that would help to make the shredder more sanitary and suitable for food contact. These inquiries covered guidelines that are typical of the food processing industry. Gary Olmstead, Food Safety Instructor at General Mills, stated that equipment should be durable and easy to clean (Olmstead, 2005). General Mills avoids having any pieces of equipment over the product because of the risk of parts falling into the food. Fred Hegele, also part of food safety at General Mills, was concerned about the durability of any plastics used in the equipment. He emphasized that the machine cannot have any recessed pockets or hard-to-clean areas. These would trap bacteria and make it unsafe and unsanitary (Hegele, 2005). John Schevenius, a former General Mills Engineer and founder of CTI, was contacted about suggestions for the shredder. Although he was familiar with the breadfruit program, he could not offer any suggestions for improvement (Schevenius, 2005).

The research done here showed that there is a lack of availability of information regarding the design of an appropriate technology machine. The design methodology varies from organization to organization, and no standardized process appears to have been completed and published in a major journal regarding the topic. Appropriate
The Role of Technology in Eliminating Poverty and Inequality (English essay)

The Role of Technology in Eliminating Poverty and Inequality

With the rapid development of the times, technology has become an important force driving social progress. Technology is playing an increasingly important role in eliminating poverty and inequality.

Firstly, technology provides more development opportunities for impoverished areas. With the popularization of the Internet, information is no longer blocked. Residents in poor areas can learn about the advanced technology and management experience of the outside world through the Internet, learn new knowledge and skills, and improve their own quality. At the same time, the rise of e-commerce has also provided broader sales channels for products from impoverished areas, helping them open up markets and increase income.

Secondly, technology has also played an important role in promoting educational equity. The emergence of online education platforms has enabled children in impoverished areas to receive high-quality educational resources. They can learn various courses online, interact with renowned teachers, and improve their academic performance. This not only contributes to their personal growth, but also lays a solid foundation for their future career development.

In addition, the application of technology in the medical field has also contributed to eliminating poverty and inequality. The development of remote medical technology has enabled residents in impoverished areas to enjoy high-quality medical services. They can communicate with distant doctors through video consultations, online consultations, and other means to obtain professional medical advice and treatment plans. This not only improves the efficiency of medical services, but also reduces medical costs in impoverished areas.

However, technology also faces some challenges in eliminating poverty and inequality. For example, the imbalance in technological development may lead to the emergence of new inequalities. Therefore, in the process of promoting and applying technology, we need to pay attention to fairness and inclusiveness, ensuring that everyone can enjoy the benefits brought by technology.

In short, technology plays an important role in eliminating poverty and inequality. We should fully utilize the advantages of technology to provide more assistance and support to impoverished areas and vulnerable groups, and promote comprehensive social progress and development.
Human Resource Management (English edition)
Can be used to develop individual HR systems
Recruitment and Selection
Based on past behaviour as the most valid predictor of future behaviour
US - input oriented – what the individual brings to the job
UK - output oriented – the skills, attitudes and knowledge, expressed in behaviours for effective job performance
Underlying traits, motives, skills, characteristics and knowledge related to superior performance in a job or situation (McClelland 1993; Boyatzis 1982)
UK v. US definitions
Armstrong 1991
Features of HRM
Management focussed and top management driven
Line management role key
Emphasises strategic fit – integration with business strategy
Commitment oriented
Two perspectives – 'hard' and 'soft'
Involves strong cultures and values
Experience and Inspiration from Schlumberger's Digital Transition
Experience and inspiration from Schlumberger's digital transition

ZENG Tao (1), LIU Hanguang (2), GAO Jian (1)
(1. CNPC GreatWall Drilling Company; 2. The Library of China University of Petroleum (Huadong))

Abstract: Digital technologies have provided key measures for the oil and gas industry to reduce cost and increase productivity. In recent years, Schlumberger and fellow oilfield service companies have actively used new technologies such as cloud computing, data analysis, machine learning, and artificial intelligence to combine data with work flows in order to advance digital transition and intelligent development, meeting the demands of oil companies. Schlumberger has set a clear strategy and optimized its internal system of technological innovation and its incentive and training mechanisms, advanced its digital transition along 3 dimensions inside the company, established an open data ecosystem, transformed E&P work flows, enabled operations in edge environments, advanced the integration of digital technologies, hardware equipment, software applications and domain knowledge, and facilitated its customers' digital transition. Chinese oilfield service companies should learn from the experience of Schlumberger by optimizing the organizational structure, putting people first in technological innovation, focusing on customer needs, and expanding their cooperation network.

Key words: Schlumberger; data ecosystem; work flow; edge environment; integration; digital transition

Reducing production costs, improving production efficiency, and cutting carbon emissions are the primary goals of the oil industry.
Whitepaper: Five priorities for the adoption of new and emerging technologies
Five priorities for successful adoption of new and emerging technologies

Emerging technologies in a Hybrid IT environment: put the use-case first

Innovation depends on two elements: a need, and a group of skilled people who recognize that need and work together to satisfy it. That's how we get new technologies. And they emerge to become successful when their ability to fulfil a human need is proven. That's why it's important to judge every new technology by what it can do for you and for your organization. Adopting emerging technologies is not just about purchasing tech and then finding a use for it. It's about having the right mindset and defining specific use-cases for how those new technologies can help your business. When you do that, you get a much better return on your investment if both the business-case and the use-case are defined before the tech is sourced.

Contents
Priority 1: Don't adopt for the sake of adopting
Priority 2: Transcend the skills gap
Priority 3: Make sure the price is right
Priority 4: Think opex not capex
Priority 5: Security should always be front-of-mind
Conclusion: It takes a partnership

In a fast-moving world where disruption is always just around the corner, it can be hard to step back and consult with experts inside and outside your business. But taking the time to consult will yield benefits because you will avoid falling into a trap that many organizations do. They rush to modernize their hybrid IT infrastructure and end up applying new technologies to their legacy processes and behaviors which have not been updated. A change of mindset is needed to drive the cultural shift and change management. Fujitsu believes that this new transformative mindset is vital. It's how you can drive innovation, agility, and improved customer experience. We have set out five priorities which we believe need to be addressed as you plan your strategy for leveraging the value of the new technologies you need to thrive.

Priority 2: Transcend the skills gap

What really matters are skills and experience. Not just of the new technologies you want to deploy, but also of those technologies which are already in place and need to be attuned to work seamlessly across the business. One of the biggest challenges is that both of those attributes are in short supply. Very few organizations have large teams of skilled professionals and experienced IT managers who can spend time focused on bedding in new technologies. Most need to work with partners who can bring those qualities to the table. It's always been a priority for organizations, and competition for great people was always intense. But over the period of the pandemic an alarming shortage of skills emerged. Gartner stated the problem starkly: talent shortages are the biggest barrier to emerging technologies adoption. When they published their survey in 2020, only 4% of IT executives said they were worried about the talent shortage. The 2021 results revealed that figure had jumped to 64% due to the push to remote working. Gartner emphasized that the projects that suffer most are those related to compute infrastructure and platform services, network, security, digital workplace, IT automation and storage and database. Partners like Fujitsu actively seek and nurture expertise and talent to be able to offer an ecosystem of minds as well as technologies to enable you to leapfrog the assessment and development phase of any emerging technology.
Working to co-create specific solutions for clearly defined needs linked to concrete outcomes is the best way to deal with talent shortages. The external view which the right partner can bring enables you to understand how you can strengthen the IT foundations of the business so that it's fit for purpose over the long term. You also benefit from a mix of skills – specialists and generalists – across the entire value chain from consulting, to design and build, through to managing technologies in operation.

Priority 3: Make sure the price is right

Where teams are empowered to look at adopting technologies directly, there can be a lack of strict cost control, and as technologies are brought online complexity can rise, which in turn generates cost. Again, that's why working with the right partner enables you to understand costs. And when you understand them, you can control them. Co-creation is key to balancing what you adopt, how you use that tech, and controlling costs against desired outcomes.

Priority 4: Think opex not capex

It's now an accepted principle that most technologies, processes, services, and platforms can be used as a service. You are probably already leveraging the value of cloud in a range of ways. You can do the same with new and emerging technologies. It really is the smart way to capitalize on the great functionality of emerging technologies, without the need for big up-front capex spending. For instance, quantum-inspired computing is fast becoming a key technology in the fight against data overload and complexity. True quantum computing is decades away, but its principles can be used to accelerate the way you analyze data to make faster, more insightful decisions. Our Digital Annealer solution is, for example, being used in the financial sector to speed analysis of market trends, by the public sector to create smart cities and manage complex transport networks, and by pharmaceutical companies as they seek path-breaking drugs. That kind of new technology becomes an opex expense, not a capex investment. What you're really investing in is the outcome you want to achieve – greater returns from the market, cities that really work for everyone, drugs that keep us healthy – rather than a technology. Fujitsu focuses on developing its quantum-inspired capabilities and making them available as a service, so you pay only for what you need. It's the realization of achieving the right balance between your enterprise and a key partner in your ecosystem. We enable our customers to avoid costly, long-term licensing arrangements and move to a pay-per-transaction or usage model. Good for the balance sheet, great for the business.

Priority 5: Security should always be front-of-mind

Security is a priority for every aspect of your business. There's been an unprecedented rise in cybercrime over the past few years. Headlines about devastating ransomware attacks by shadowy gangs have become commonplace. So, when you adopt new technologies via the cloud, you have to consider that new vulnerabilities appear within those new technologies because they are in the cloud. Both issues need to be addressed. Hackers look for vulnerabilities at every point of an ecosystem. Those can be within procedures, or due to human behaviors, or the result of configuration errors and so on. Good security is all about asking the right questions, and using the answers to have a detailed overview of every stage of the function or operation as well as every possible risk. Our dedicated security experts work with customers to ask those questions and do the legwork of checking the entire ecosystem, end-to-end.
Because we understand the platforms, we can advise on best practices and protocols within your business. That's especially important when you're handling sensitive and personal data. The compliance demands of ever more stringent regulations can be onerous, but they can't be avoided. We share the burden to keep you safe and protected from attack, as well as from the bad headlines that always follow a breach.

Conclusion: It takes a partnership

Deciding which emerging technologies to adopt, when and how to make the most of them, and ensuring that they enhance rather than undermine your security, takes partnership. The alternative is to do it all internally. The skills gap and expense make that difficult to achieve. No enterprise is an island. Success is always a collective collaboration. So, leverage the expertise that is out there and focus on your core objectives by leveraging the power of the right technologies for use-cases that will deliver rapid success. We're here to help. Talk to us…
Teaching Practice and Exploration of Integrating Intangible Cultural Heritage into Art and Design Majors from the Perspective of Curriculum-Based Ideological and Political Education
TEACHING PRACTICE AND EXPLORATION OF INTEGRATING NON-LEGACY INTO ART DESIGN MAJOR IN THE PERSPECTIVE OF CURRICULUM IDEOLOGY AND POLITICS

Introduction

As an important part of fine traditional Chinese culture, intangible cultural heritage enjoys unique national cultural resources and is an important vehicle for enhancing national cohesion and strengthening cultural confidence. Integrating it into art and design education at higher vocational colleges is a concrete practice of advancing curriculum-based ideological and political education in line with the characteristics of each discipline [1] and of incorporating fine traditional Chinese culture into every link of art education [2]. Making the two interoperable helps strengthen cultural confidence, implement the fundamental task of fostering virtue through education, and comprehensively improve the quality of independent talent cultivation.

1. The necessity and feasibility of integrating intangible cultural heritage into art and design education

Intangible cultural heritage embodies the unique value pursuits, ideas, humanistic spirit and moral norms of the Chinese nation. It contains profound and rich educational content and is an inexhaustible source for curriculum-based ideological and political education. Its inherent cultural and artistic value offers major inspiration for modern art and design, providing art and design education with richer creative elements and artistic material.

Integrating intangible cultural heritage into art and design programs at higher vocational colleges is an effective way to achieve its inheritance and innovative development, and the teaching results can in turn feed back into local economic and cultural development, putting the effects of curriculum-based ideological and political education into practice, as shown in figure 1.

The fine traditional Chinese culture embodied in intangible cultural heritage is highly consistent with the educational goals pursued by higher education. The two overlap in teaching content, align in educational function, and depend on each other for mutual development, so integrating intangible cultural heritage into art and design majors at local higher vocational colleges is both necessary and feasible.

2. A survey on the current state of integrating intangible cultural heritage into art and design majors at local higher vocational colleges

To better understand how intangible cultural heritage is integrated into the education and teaching of art and design majors at higher vocational colleges, the author conducted a questionnaire survey of students at higher vocational colleges in Yunnan Province and interviewed art and design teachers in Yunnan Province. The findings are as follows.

(1) Student questionnaire. The respondents were students majoring in art and design at vocational colleges in Yunnan Province. The survey was mainly administered online; 480 questionnaires were distributed and 467 valid questionnaires were returned, so the feedback data are authentic and credible.

Among the valid questionnaires, 64.49% of the students were willing to have intangible cultural heritage incorporated into their specialized courses to enrich design material, spark creative inspiration, innovate design concepts and enhance cultural depth, while 31.16% said it would depend on the circumstances; yet only 20% of teachers apply knowledge of intangible cultural heritage in classroom teaching.
Efficient Implementation of Genus Three Hyperelliptic Curve Cryptography over F_2^n

Izuru Kitamura and Masanobu Katagi
Sony Corporation, 6-7-35 Kitashinagawa, Shinagawa-ku, Tokyo, 141-0001 Japan
{Izuru.Kitamura, Masanobu.Katagi}@

Abstract. The optimization of the Harley algorithm is an active area of hyperelliptic curve cryptography. We propose an efficient method for software implementation of the genus three Harley algorithm over F_2^n. Our method is based on fast finite field multiplication using one SIMD operation, SSE2 on Pentium 4, and a parallelized Harley algorithm. We demonstrate that a software implementation using the proposed method is about 11% faster than a conventional implementation.

Keywords. hyperelliptic curve arithmetic, scalar multiplication, Harley algorithm, SIMD operation, SSE2

1 Introduction

Hyperelliptic curve cryptography (HECC) was proposed by Koblitz [1]. A practical addition algorithm was proposed by Cantor [2] and Koblitz [1]. HECC has the advantage of a shorter operand length than elliptic curve cryptography (ECC), with the same level of security. The addition algorithm for divisor class groups of hyperelliptic curves, however, is more complicated and therefore its calculation is slower. This algorithm can be applied to curves of arbitrary genus. On the other hand, efficient attacks against curves of genus higher than or equal to four exist [3,4]. It is advisable therefore to focus on genus two and genus three curves. Recently, a new addition algorithm for genus two curves over odd prime characteristic was proposed by Harley [5,6]. This algorithm drastically reduced the cost of calculating divisor addition and doubling by specifying the genus of the curves. His result triggered other research, which eventually brought improvements and extensions to the algorithm [7–16].

The Harley algorithm has long and involved calculation procedures. In spite of this disadvantage, its parallelization has not been studied enough. Most recently, this approach was proposed by Mishra et al. [17]. They parallelized the affine and the inversion-free formulae of genus 2 curves. Their work targets hardware implementation on the assumption of a multiprocessor environment, e.g. 4, 8 and 12 multipliers. However, speeding up the calculations based on a multiprocessor environment requires larger chip area, which is an area/speed tradeoff. We are interested in 2- or 3-way parallel field operations in SIMD (Single Instruction, Multiple Data) style, which means that multiple data sets are processed at the same time, because we also target software implementation. In particular, genus three HECC is suitable for SIMD because of the size of its definition field. We propose a fast software implementation technique for genus 3 curves over F_2^n using SIMD-style operations. Many processors have SIMD architectures, such as MMX and SSE for Pentium, AltiVec for PowerPC, and VIS for SPARC.

Several studies of parallel arithmetic with SIMD for ECC have been reported [18–21]. In these works, independent finite field operations of the ECC algorithm are executed at the same time. These approaches are easily extended to the HECC addition algorithm. In the ECC case, however, the size of the finite field is larger than the present computer word size. Therefore finite field operations have to be divided several times. On the other hand, finite field operations for genus three HECC can be executed at one time because the definition field of genus three is smaller than 64 bits. In addition, SSE2 instructions provide 2×64-bit shift and logical operations. We propose a parallel finite field multiplication, which means that two finite field
We propose a parallel finite field multiplication, in which two finite field multiplications AB and AC are executed at one time over binary fields using SSE2. To apply this parallel finite field multiplication to the Harley algorithm, we manually optimized parallelized sequences of HECADD and HECDBL.

In this paper, we report an efficient method for the software implementation of the genus three Harley algorithm over F2n. Section 3 proposes the finite field multiplication using SSE2, and Section 4 gives parallel sequences for the genus three addition algorithm over binary fields. Section 5 presents the software implementation results obtained with the proposed method.

2 Background

A genus g hyperelliptic curve C over F2n is defined as C: y^2 + h(x)y = f(x), where h(x), f(x) ∈ F2n[x], deg h ≤ g, deg f = 2g + 1, f is a monic polynomial, and C is a non-singular curve. J_C(F2n) is the Jacobian variety of C over F2n. Any divisor class of J_C(F2n) can be represented by a semi-reduced divisor. A semi-reduced divisor can be expressed by two polynomials a, b ∈ F2n[x] which satisfy:

1. a is a monic polynomial,
2. deg b < deg a,
3. f + hb + b^2 ≡ 0 mod a.

This representation is due to Mumford [22]. A semi-reduced divisor with deg a ≤ g is called a reduced divisor, and any divisor class of J_C(F2n) is uniquely represented by a reduced divisor. Hereafter we denote D ∈ J_C(F2n) by a reduced divisor D = (a, b). J_C(F2n) has an additive group structure. In [5] and [6], Harley defined a group operation for genus two curves over F_p, where p is an odd prime greater than three. In [16], Sugizaki et al. extended the Harley algorithm to curves over F2n for computing D1 + D2 = D3 (HECADD) and 2D1 = D3 (HECDBL). The extended Harley algorithm over F2n is as follows.

Algorithm 1 HECADD
Input: D1 = (u1, v1), D2 = (u2, v2), deg u1 = deg u2 = 2, gcd(u1, u2) = 1
Output: D3 = (u3, v3)
1. U ← u1u2
2. S ← (v2 + v1)/u1 mod u2
3. V ← Su1 + v1 mod U
4. U ← (f + hV + V^2)/U
5. Make U monic
6. V ← V mod U
7. u3 ← U, v3 ← U + V + h
8. return (u3, v3)

Algorithm 2 HECDBL
Input: D1 = (u1, v1), deg u1 = 2, gcd(u1, h) = 1
Output: D3 = (u3, v3)
1. U ← u1^2
2. S ← h^-1(f + hv1 + v1^2)/u1 mod u1
3. V ← Su1 + v1 mod U
4. U ← (f + hV + V^2)/U
5. Make U monic
6. V ← V mod U
7. u3 ← U, v3 ← U + V + h
8. return (u3, v3)

These algorithms are analogous to the elliptic curve chord-tangent law. We explain Algorithm 1. Steps 1 to 3 are called the composition part, and steps 4 to 7 the reduction part. In the composition part we compute a semi-reduced divisor D = (U, V) such that D ~ -D3, where ~ denotes linear equivalence. In step 1 we compute U = u1u2. In steps 2 and 3 we compute V such that f + hV + V^2 ≡ 0 mod U; V is obtained from V ≡ v1 mod u1 and V ≡ v2 mod u2 via the Chinese remainder theorem. In the reduction part we compute a reduced divisor D'3 = (u'3, v'3) such that D'3 ~ D: we compute u'3 = (f + hV + V^2)/(u1u2), make u'3 monic, and then compute v'3 ≡ V mod u'3. Finally, we output the divisor D3 = -D'3 as D3 = (u3, v3) = (u'3, u'3 + v'3 + h). HECDBL is similar to HECADD, with the Chinese remainder theorem replaced by Newton iteration. In both algorithms the Karatsuba method is used to reduce the number of multiplications: HECADD takes 25 multiplications and 1 inversion, and HECDBL takes 27 multiplications and 1 inversion.

We deal with the most common situation, i.e., deg u1 = deg u2 = 2 and gcd(u1, u2) = 1 in HECADD, and deg u1 = 2 and gcd(u1, h) = 1 in HECDBL. We consider only this situation because the probability that it does not occur is O(1/2^n) [23].

3 Fast Finite Field Multiplication for Genus Three Using SSE2

On a 64-bit architecture, the elements of F2^59 are represented in single precision.
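To make this representation concrete, here is a minimal C sketch (our illustration, not from the paper; the type and helper names are ours): an element of F2^59 is a binary polynomial of degree at most 58 whose coefficient bits fit in one 64-bit word, and addition is a plain XOR since the field has characteristic two.

#include <stdint.h>

typedef uint64_t gf59;                 /* bit i holds the coefficient of t^i */

#define GF59_MASK ((1ULL << 59) - 1)   /* the 59 coefficient bits in use */

/* addition in characteristic two is coefficient-wise XOR, with no carries */
static gf59 gf59_add(gf59 a, gf59 b) { return a ^ b; }

/* example element: the x-coefficient of h(x) from Sec. 3.1, given in hex */
static const gf59 h1_example = 0x6723B8D13BC30C7ULL;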
Streaming SIMD Extensions 2 (SSE2) [24,25], included in the Intel Pentium 4 processor, can deal with two F2^59 elements in parallel. In this section we discuss a parallel finite field multiplication algorithm over F2n, with elements represented in 64-bit single precision, using SSE2. This multiplication accelerates the addition algorithm for genus three HECC over F2^59.

3.1 Genus Three Hyperelliptic Curve over F2^59

In this work we choose from the literature an example of a genus three hyperelliptic curve over F2n which is suitable for cryptography, i.e., whose Jacobian has a large prime-order subgroup. We use an isomorphic curve C1 of the hyperelliptic curve described in Section 4.2 of [26]: y^2 + h(x)y = f(x) over F2^59, where F2^59 is defined by t^59 + t^6 + t^5 + t^4 + t^3 + t + 1 = 0 and, with coefficients in hexadecimal,

h(x) = x^3 + x^2 + 6723B8D13BC30C7 x + 72D7EE15A5C9CF5,
f(x) = x^7 + x^6 + 6723B8D13BC30C7 x^5 + 72D7EE15A5C9CF4 x^4 + 24198E10C3B7566 x^3 + 1EB9AF07BD3B303.

The order of the Jacobian of C1 is

2 x 95780971304118053647396689122057683977359360476125197.

The curve has the same level of security as 176-bit ECC.

3.2 SMUL: Conventional Finite Field Multiplication over F2n

Several algorithms for finite field multiplication over F2n have been proposed [27]. We recall a conventional one. Let A, B ∈ F2n, and let F(x) be an irreducible binary polynomial of degree n; A and B are represented by binary polynomials of degree at most n - 1. The conventional finite field multiplication AB mod F(x) is calculated in the following two steps.

Step 1. A fast algorithm for the multiplication of two polynomials is presented as Algorithm 5 in [28]. We build a table T_B[j] for 0 ≤ j < 16 as

T_B[j] ← (j3 x^3 + j2 x^2 + j1 x + j0) B, where j = (j3 j2 j1 j0)_2.

To calculate AB, this table is consulted while scanning A four bits at a time.

Step 2. The reduction AB mod F(x) is accomplished as explained in Algorithm 6 of [27]. In this work we use F(x) = x^59 + x^6 + x^5 + x^4 + x^3 + x + 1 as the irreducible polynomial, together with the congruence

x^64 ≡ x^11 + x^10 + x^9 + x^8 + x^6 + x^5 mod F(x).

Writing AB = AB_h x^64 + AB_l, this congruence gives

AB ≡ AB_h (x^11 + x^10 + x^9 + x^8 + x^6 + x^5) + AB_l mod F(x).

The reduction of AB_h is performed by adding AB_h x^i, for the six exponents above, into AB_l, which yields a polynomial AB_l of degree at most 63. The remaining terms of degree 63 down to 59 are then reduced by adding AB_l/x^i six times, leaving AB mod F(x). In this paper we call this finite field multiplication algorithm, composed of Algorithm 5 in [28] and Algorithm 6 in [27], SMUL.
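As a concrete sketch of these two steps, the following C routine (ours; the name smul59 and the layout are assumptions, not the authors' code) multiplies in F2^59 with the heptanomial F(x): step 1 is the 16-entry table and the 4-bit scan of A, step 2 is the reduction via x^64 ≡ x^11+x^10+x^9+x^8+x^6+x^5, with the final degree-63..59 fold written in the equivalent shift-down-then-shift-up form.

#include <stdint.h>

static uint64_t smul59(uint64_t a, uint64_t b)
{
    uint64_t tb[16], lo = 0, hi = 0;

    /* step 1a: T_B[j] = (j3 x^3 + j2 x^2 + j1 x + j0) * B; deg <= 61 fits in 64 bits */
    tb[0] = 0;
    for (int j = 1; j < 16; j++)
        tb[j] = ((j & 1) ? b : 0) ^ ((j & 2) ? b << 1 : 0)
              ^ ((j & 4) ? b << 2 : 0) ^ ((j & 8) ? b << 3 : 0);

    /* step 1b: scan A four bits at a time (15 nibbles cover 59 bits) */
    for (int j = 0; j < 15; j++) {
        uint64_t t = tb[(a >> (4 * j)) & 0xF];
        lo ^= t << (4 * j);
        if (j > 0)
            hi ^= t >> (64 - 4 * j);       /* overflow into the upper word */
    }

    /* step 2a: x^64 = x^11+x^10+x^9+x^8+x^6+x^5 (mod F); deg hi <= 52, no overflow */
    lo ^= (hi << 11) ^ (hi << 10) ^ (hi << 9) ^ (hi << 8) ^ (hi << 6) ^ (hi << 5);

    /* step 2b: fold bits 59..63 using x^59 = x^6+x^5+x^4+x^3+x+1 (mod F) */
    uint64_t t = lo >> 59;
    lo ^= (t << 6) ^ (t << 5) ^ (t << 4) ^ (t << 3) ^ (t << 1) ^ t;
    return lo & ((1ULL << 59) - 1);
}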
3.3 PMUL: Parallel Finite Field Multiplication Using SSE2

We extend SMUL to a finite field multiplication that works in parallel using SSE2. Before explaining the details of the algorithm, we review the SSE2 instructions.

SSE2 instructions: Intel processors have SIMD instructions called MMX [24,25], which introduced four data types: 8x8-bit, 4x16-bit, 2x32-bit, and 64-bit blocks. The Intel Pentium 4 processor additionally includes the SSE2 technology, which allows MMX-style instructions to work on a 128-bit data block. SSE2 instructions can therefore handle two F_q elements, |q| ≤ 64, at once. SSE2 can multiply 4x16-bit by 4x16-bit operands or 2x32-bit by 2x32-bit operands, but it cannot multiply a 64-bit operand by a 64-bit operand. It can, however, shift a 2x64-bit block and perform 128-bit bitwise logical operations. Multiplication in F2n needs only shift and logical operations, so these characteristics suit our setting, in which each F2n element is represented in single precision.

We propose a parallel finite field multiplication using SSE2 (PMUL). In the following, (X ∥ Y) denotes a 2x64-bit block whose two 64-bit lanes hold X and Y; X ≫ i (X ≪ i) shifts both lanes of a 2x64-bit block to the right (left) by i bits, and ⊕ (∧) is the 128-bit bitwise exclusive OR (AND). The algorithm is:

Algorithm 3 PMUL: Parallel Finite Field Multiplication using SSE2
Input: A, B, C
Output: D = AB mod F, E = AC mod F, where F = x^59 + x^6 + x^5 + x^4 + x^3 + x + 1
1. H ← 0, L ← 0
2. For j = 0 to 15:
     T_BC[j] ← ( (j3 x^3 + j2 x^2 + j1 x + j0) B ∥ (j3 x^3 + j2 x^2 + j1 x + j0) C ), where j = (j3 j2 j1 j0)_2
3. For j = 0 to 14:
     i ← A/x^4j mod x^4  (i.e., (A ≫ 4j) ∧ (x^3 + x^2 + x + 1))
     H ← H ⊕ (T_BC[i] ≫ (64 - 4j))
     L ← L ⊕ (T_BC[i] ≪ 4j)
4. L ← L ⊕ (H ≪ 11) ⊕ (H ≪ 10) ⊕ (H ≪ 9) ⊕ (H ≪ 8) ⊕ (H ≪ 6) ⊕ (H ≪ 5)
5. L ← L ⊕ (L ≫ 53) ⊕ (L ≫ 54) ⊕ (L ≫ 55) ⊕ (L ≫ 56) ⊕ (L ≫ 58) ⊕ (L ≫ 59)
6. L ← L ∧ ( x^58 + x^57 + ... + x + 1 ∥ x^58 + x^57 + ... + x + 1 )
7. D ← L_h, E ← L_l   /* L = (AB mod F ∥ AC mod F) */
8. return D, E

Algorithm 3 computes AB and AC in parallel with a single table T_BC, scanning A once. Note that the algorithm cannot compute AB and CD in parallel, because the table lookups are indexed by the 4-bit digits of the one common operand A. In step 2 we build the table T_BC as in SMUL, but with each entry holding a 2x64-bit block. In step 3 we scan A every 4 bits to select the appropriate entry of T_BC; H then retains the upper 64 bits of AB and AC, and L the lower 64 bits:

H ← (AB_h ∥ AC_h),  L ← (AB_l ∥ AC_l).

In step 4 the reduction of AB and AC from degree 116 down to degree 64 is accomplished:

L ← ( AB_h (x^11 + ... + x^5) + AB_l ∥ AC_h (x^11 + ... + x^5) + AC_l ).

In step 5 the remaining terms of AB and AC from degree 63 down to degree 59 are reduced, and step 6 finally clears everything above degree 58.

3.4 Another Choice of Irreducible Polynomial

In Algorithm 3 we defined F2^59 by the heptanomial x^59 + x^6 + x^5 + x^4 + x^3 + x + 1. The performance of finite field operations generally depends on the weight (the number of nonzero coefficients) of the irreducible polynomial. The lowest-weight irreducible polynomial for F2^59 is the pentanomial F(x) = x^59 + x^7 + x^4 + x^2 + 1 [29]. Algorithm 3 can be extended to the pentanomial case. Note that the reduction step of Algorithm 3 depends on the second-highest degree t of the irreducible polynomial: if t ≥ 7, the 2x64-bit register overflows, and Algorithm 3 must be modified. For F(x) = x^59 + x^7 + x^4 + x^2 + 1 we use the congruence

x^64 ≡ x^12 + x^9 + x^7 + x^5 mod F(x),

so that AB ≡ AB_h (x^12 + x^9 + x^7 + x^5) + AB_l mod F(x).

AB_h has degree at most 52, so AB_h x^12 has degree at most 64. Since the most significant bit of AB_h x^12 overflows a 64-bit lane, steps 4 and 5 of Algorithm 3 are modified as:

H ← H ⊕ ((H ∧ x^52) ≫ 52)
L ← L ⊕ (H ≪ 12) ⊕ (H ≪ 9) ⊕ (H ≪ 7) ⊕ (H ≪ 5)
L ← L ⊕ (L ≫ 52) ⊕ (L ≫ 55) ⊕ (L ≫ 57) ⊕ (L ≫ 59).

In spite of the extra step for the overflow bit, PMUL with the pentanomial is faster than with the heptanomial, because the low weight of the irreducible polynomial reduces the number of XOR and shift operations in steps 4 and 5 of Algorithm 3.
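For illustration, a C sketch of Algorithm 3 with SSE2 intrinsics might look as follows (our reconstruction under the stated conventions, not the authors' code; pmul59 is a hypothetical name). Both 64-bit lanes of each __m128i are shifted and XORed by single SSE2 instructions, which is the source of the speedup; the degree-63..59 fold of step 5 is written in a shift-down-then-shift-up form that computes the same value as steps 5 and 6.

#include <emmintrin.h>
#include <stdint.h>

static void pmul59(uint64_t a, uint64_t b, uint64_t c,
                   uint64_t *ab, uint64_t *ac)
{
    __m128i tbc[16], lo = _mm_setzero_si128(), hi = _mm_setzero_si128();

    /* step 2: T_BC[j] holds the nibble multiple of B in one lane and of C in the other */
    __m128i bc = _mm_set_epi64x((long long)b, (long long)c);
    tbc[0] = _mm_setzero_si128();
    for (int j = 1; j < 16; j++) {
        __m128i t = _mm_setzero_si128();
        for (int k = 0; k < 4; k++)
            if (j & (1 << k))
                t = _mm_xor_si128(t, _mm_slli_epi64(bc, k));
        tbc[j] = t;
    }

    /* step 3: scan A every 4 bits; each shift/XOR acts on both lanes at once */
    for (int j = 0; j < 15; j++) {
        __m128i t = tbc[(a >> (4 * j)) & 0xF];
        lo = _mm_xor_si128(lo, _mm_slli_epi64(t, 4 * j));
        if (j > 0)
            hi = _mm_xor_si128(hi, _mm_srli_epi64(t, 64 - 4 * j));
    }

    /* step 4: x^64 = x^11+x^10+x^9+x^8+x^6+x^5 (mod F), applied to both lanes */
    lo = _mm_xor_si128(lo, _mm_slli_epi64(hi, 11));
    lo = _mm_xor_si128(lo, _mm_slli_epi64(hi, 10));
    lo = _mm_xor_si128(lo, _mm_slli_epi64(hi, 9));
    lo = _mm_xor_si128(lo, _mm_slli_epi64(hi, 8));
    lo = _mm_xor_si128(lo, _mm_slli_epi64(hi, 6));
    lo = _mm_xor_si128(lo, _mm_slli_epi64(hi, 5));

    /* steps 5-6: fold bits 59..63 via x^59 = x^6+x^5+x^4+x^3+x+1, then mask */
    __m128i t5 = _mm_srli_epi64(lo, 59);
    lo = _mm_xor_si128(lo, _mm_slli_epi64(t5, 6));
    lo = _mm_xor_si128(lo, _mm_slli_epi64(t5, 5));
    lo = _mm_xor_si128(lo, _mm_slli_epi64(t5, 4));
    lo = _mm_xor_si128(lo, _mm_slli_epi64(t5, 3));
    lo = _mm_xor_si128(lo, _mm_slli_epi64(t5, 1));
    lo = _mm_xor_si128(lo, t5);
    lo = _mm_and_si128(lo, _mm_set1_epi64x((1LL << 59) - 1));

    /* step 7: the high lane carries the B-products, the low lane the C-products */
    uint64_t out[2];
    _mm_storeu_si128((__m128i *)out, lo);
    *ab = out[1];
    *ac = out[0];
}

A pentanomial variant would swap in the x^64 ≡ x^12 + x^9 + x^7 + x^5 constants together with the overflow fix of Sec. 3.4.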
parameter f(x),h(x)as1M.Table1.Number of PMUL,SMUL,and inversion of HECADD and HECDBLHECADD HECDBLSequential Arithmetic77M+1I78M+1IParallel Arithmetic30P+17M+1I30P+18M+1I5Implementation ResultsWe present implementation results.All implementation was run under Windows XP on a1.6GHz Pentium4processor,using Microsoft VC++6.0with inline assembler MMX and SSE2.We applied PMUL using SSE2to the parallelized addition algorithm for genus three HECC over F259.First,Table2presents timing results for operation in thefields F259.Irre-ducible polynomials are pentanomial and heptanomial shown in Sec.3.4and Sec.3.1,respectively.Squaring and inversion are implemented via Algorithm7in[27]and Algorithm10in[27],respectively.We compare two types offinite field multiplication algorithm,SMUL and PMUL.If irreducible polynomial is pentanomial,PMUL is approximately14%faster than twice SMUL operations. Note that PMUL using pentanomial irreducible polynomial is faster than that using heptanomial as we mentioned in the previous section.In addition,timing of a successive two instruction SMUL and squaring is330ns.On the other hand, timing of PMUL is256ns.Squaring is generally faster than multiplication,but in our experiment,we had better deal with squaring as multiplication because PMUL instructions including squaring,which meanfinitefield multiplication A2 and AB at the same time,is fast.Second,Table3presents timing results of addition algorithm and scalar mul-tiplication using binary method for genus three hyperelliptic curve C1shown in Sec.3.1.The proposed method using PMUL is11%faster than the conventional method.Table2.Timings offinitefield operations over F259Irreducible Polynomial SMUL×2PMUL Squaring Inversionpentanomial290ns256ns158ns 1.29µsheptanomial288ns249ns158ns 1.18µsTable3.Timings of addition algorithm and scalar multiplicationAddition Doubling Scalar Multiplication(176bit,binary method) conventional15.9µs14.7µs 4.37msproposed13.5µs13.2µs 3.91ms6ConclusionWe propose a software optimization method of the genus three addition algo-rithm of HECC by combining parallelized Harley Algorithm and the parallel finitefield multiplication using SSE2.We achieved11%faster scalar multiplica-tion than the usual implementation.Our idea is based on one of good feature of HECC.Its smaller definitionfield than ECC has comparable size with recent CPU register size.However,another major CPU,e.g.ARM processor,does not have the2×64 SIMD instruction.It is a further work to optimize the Harley algorithm using SIMD instructions.AcknowledgementsThe authors would like to thank Toru Akishita for valuable comments and sug-gestions.References1.Koblitz,N.:Hyperelliptic Cryptosystems.J.Cryptology1(1989)139–1502.Cantor,D.:Computing in the Jacobian of a Hyperelliptic Curve.Mathematics ofComputation48(1987)95–1013.Adelman,L.M.,DeMarrais,J.,Huang,M.D.:A Subexponential Alogrithm forDiscrete Logarithms over the Rational Subgroup of the Jacobian of Large Genus Hyperelliptic Curves over Finite Fields.In:ANTS-I,LNCS877,Springer-Verlag (1994)28–404.Gaudry,P.:An Algorithm for Solving the Discrete Log Problem on HyperellipticCurves.In:EUROCRYPT2000,LNCS1807,Springer-Verlag(2000)19–345.Harley,R.:Adding.text.http://cristal.inria.fr/˜harley/hyper/(2000)6.Harley,R.:Doubling.c.http://cristal.inria.fr/˜harley/hyper/(2000)7.Matsuo,K.,Chao,J.,Tsuji,S.:Fast Genus Two Hyperelliptic Curve Cryptosys-tems.Technical Report ISEC2001-31,IEICE Japan(2001)89–968.Pelzl,J.,Wollinger,T.,Guajardo,J.,Paar,C.:Hyperelliptic Curve Cryptosys-tems:Closing the Performance Gap 
6 Conclusion

We have proposed a software optimization method for the genus three HECC addition algorithm that combines the parallelized Harley algorithm with the parallel finite field multiplication PMUL using SSE2. We achieved scalar multiplication 11% faster than the usual implementation. Our idea exploits a good feature of HECC: its definition field, smaller than that of ECC, is comparable in size to recent CPU registers.

Other major CPUs, however, e.g. the ARM processors, do not provide a 2x64-bit SIMD instruction. Optimizing the Harley algorithm with such SIMD instruction sets is left as future work.

Acknowledgements

The authors would like to thank Toru Akishita for valuable comments and suggestions.

References

1. Koblitz, N.: Hyperelliptic Cryptosystems. J. Cryptology 1 (1989) 139-150
2. Cantor, D.: Computing in the Jacobian of a Hyperelliptic Curve. Mathematics of Computation 48 (1987) 95-101
3. Adleman, L. M., DeMarrais, J., Huang, M. D.: A Subexponential Algorithm for Discrete Logarithms over the Rational Subgroup of the Jacobians of Large Genus Hyperelliptic Curves over Finite Fields. In: ANTS-I, LNCS 877, Springer-Verlag (1994) 28-40
4. Gaudry, P.: An Algorithm for Solving the Discrete Log Problem on Hyperelliptic Curves. In: EUROCRYPT 2000, LNCS 1807, Springer-Verlag (2000) 19-34
5. Harley, R.: adding.text. http://cristal.inria.fr/~harley/hyper/ (2000)
6. Harley, R.: doubling.c. http://cristal.inria.fr/~harley/hyper/ (2000)
7. Matsuo, K., Chao, J., Tsujii, S.: Fast Genus Two Hyperelliptic Curve Cryptosystems. Technical Report ISEC2001-31, IEICE Japan (2001) 89-96
8. Pelzl, J., Wollinger, T., Guajardo, J., Paar, C.: Hyperelliptic Curve Cryptosystems: Closing the Performance Gap to Elliptic Curves. Cryptology ePrint Archive, 2003/026, IACR (2003)
9. Lange, T.: Efficient Arithmetic on Genus 2 Hyperelliptic Curves over Finite Fields via Explicit Formulae. Cryptology ePrint Archive, 2002/121, IACR (2002)
10. Lange, T.: Inversion-free Arithmetic on Genus 2 Hyperelliptic Curves. Cryptology ePrint Archive, 2002/147, IACR (2002)
11. Lange, T.: Weighted Coordinates on Genus 2 Hyperelliptic Curves. Cryptology ePrint Archive, 2002/153, IACR (2002)
12. Takahashi, N., Morimoto, H., Miyaji, A.: Efficient Exponentiation on Genus Two Hyperelliptic Curves (II). Technical Report ISEC2002-145, IEICE Japan (2003) in Japanese
13. Takahashi, M.: Improving Harley Algorithms for Jacobians of Genus 2 Hyperelliptic Curves. In: Proc. of SCIS2002 (2002) in Japanese
14. Miyamoto, Y., Doi, H., Matsuo, K., Chao, J., Tsujii, S.: A Fast Addition Algorithm of Genus Two Hyperelliptic Curves. In: Proc. of SCIS2002 (2002) in Japanese
15. Kuroki, J., Gonda, M., Matsuo, K., Chao, J., Tsujii, S.: Fast Genus Three Hyperelliptic Curve Cryptosystems. In: Proc. of SCIS2002 (2002)
16. Sugizaki, T., Matsuo, K., Chao, J., Tsujii, S.: An Extension of Harley Addition Algorithm for Hyperelliptic Curves over Finite Fields of Characteristic Two. Technical Report ISEC2002-9, IEICE Japan (2002) 49-56
17. Mishra, P. K., Sarkar, P.: Parallelizing Explicit Formula for Arithmetic in the Jacobian of Hyperelliptic Curves. Cryptology ePrint Archive, Report 2003/180 (2003)
18. Aoki, K., Hoshino, F., Kobayashi, T.: The Fastest ECC Implementations. In: Proc. of SCIS2000 (2000) in Japanese
19. Smart, N.: The Hessian Form of an Elliptic Curve. In: CHES 2001, LNCS 2162, Springer-Verlag (2001) 118-125
20. Aoki, K., Hoshino, F., Kobayashi, T., Oguro, H.: Elliptic Curve Arithmetic Using SIMD. In: ISC 2001, LNCS 2200, Springer-Verlag (2001) 235-247
21. Izu, T., Takagi, T.: Fast Elliptic Curve Multiplications with SIMD Operations. In: ICICS 2002, LNCS 2513, Springer-Verlag (2002) 217-230
22. Mumford, D.: Tata Lectures on Theta II. Progress in Mathematics, Number 43, Birkhäuser (1984)
23. Nagao, N.: Improving Group Law Algorithms for Jacobians of Hyperelliptic Curves. In: ANTS-IV, LNCS 1838, Springer-Verlag (2000) 439-448
24. Intel Corporation: IA-32 Intel Architecture Software Developer's Manual, Volume 1: Basic Architecture (2003)
25. Intel Corporation: IA-32 Intel Architecture Software Developer's Manual, Volume 2: Instruction Set Reference (2003)
26. Hess, F., Seroussi, G., Smart, N.: Two Topics in Hyperelliptic Cryptography. Technical report, /techreports/2000/HPL-2000-118.html (2000) 181-189
27. Hankerson, D., Hernandez, J. L., Menezes, A.: Software Implementation of Elliptic Curve Cryptography over Binary Fields. In: CHES 2000, LNCS 1965, Springer-Verlag (2000) 1-24
28. López, J., Dahab, R.: High-speed Software Multiplication in F2m. In: Indocrypt 2000, LNCS 1977, Springer-Verlag (2000) 203-212
29. Seroussi, G.: Table of Low-weight Binary Irreducible Polynomials. Technical report, /techreports/98/HPL-98-135.html (1998)
30. Pelzl, J.: Hyperelliptic Cryptosystems on Embedded Microprocessors. Diploma thesis, Communication Security Group, Ruhr-Universität Bochum (2002)

Appendices

Table 4. Parallel arithmetic of HECADD for genus three

HECADD: 44M + 58A + I with 16 variables.
Input: D1 = (u1, v1) and D2 = (u2, v2) with
  u1 = x^3 + u12 x^2 + u11 x + u10;  u2 = x^3 + u22 x^2 + u21 x + u20;
  v1 = v12 x^2 + v11 x + v10;  v2 = v22 x^2 + v21 x + v20;
  h(x) = x^3 + h2 x^2 + h1 x + h0;
  f(x) = x^7 + f6 x^6 + f5 x^5 + f4 x^4 + f3 x^3 + f2 x^2 + f1 x + f0.
Output: D3 = (u3, v3) = D1 + D2 with
  u3 = x^3 + u32 x^2 + u31 x + u30;  v3 = v32 x^2 + v31 x + v30.
Each line below lists the operations issued in parallel on the two streams; ∗ marks a pair computed with one PMUL.

R00←u12        R01←u11
R02←u10        R03←u22
R04←u21        R05←u20
R06←R04×R00    R07←R04×R02 ∗
R08←R03×R01    R09←R03×R02 ∗
R10←R05×R01    R11←R05×R00 ∗
R05←R05+R02    R04←R04+R01
R02←R05×R05    R12←R05×R04 ∗
R04←R04×R04    R03←R03+R00
R10←R10+R07
R13←R10×R04    R10←R10×R03 ∗
R05←R05+R06    R11←R11+R09
R05←R05+R08    R02←R02+R10
R08←R03×R11
R11←R08×R11
R03←R05×R03    R05←R05×R02 ∗
R05←R05+R11    R04←R04+R03
R05←R05+R13    R08←R08+R12
R00←u22        R01←u21
R03←R04×R01    R01←R04×R00 ∗
R01←R01+R08    R03←R03+R02
R08←R08×R00
R03←R03+R08
R00←v12        R02←v11
R06←v10        R07←v22
R08←v21        R09←v20
R00←R00+R07    R02←R02+R08
R06←R06+R09    R07←R01+R04
R08←R03+R04    R09←R03+R01
R01←R01×R02    R03←R03×R06
R02←R02+R00    R06←R06+R00
R07←R07×R02    R08←R08×R06
R02←R02+R06    R07←R07+R01
R02←R02×R09    R00←R00×R04
R02←R02+R01    R09←R01+R08
R02←R02+R03    R09←R09+R03
R07←R07+R00    R09←R09+R00
R04←u22        R10←u21
R11←u20
R12←R00×R04    R13←R00×R10 ∗
R12←R12+R07    R09←R09+R13
R10←R10+R11    R02←R02+R13
R13←R12×R11    R04←R12×R04 ∗
R03←R03+R13    R00←R00+R12
R02←R02+R13    R09←R09+R04
R10←R10×R00
R02←R02+R10
R01←R09×R05    R06←R09×R09 ∗
R01←1/R01
R04←R01×R05    R06←R01×R06 ∗
R05←R05×R04
R12←R05×R05
R00←R04×R03    R01←R04×R02 ∗
R08←R02×R00    R09←R02×R01 ∗
R02←R02+R01    R09←R09+R04
R07←R04×R00    R04←R04×R01 ∗
R08←R08+R04    R09←R09+R00
R11←R10×R00    R13←R10×R01 ∗
R07←R07+R13    R08←R08+R10
R04←u22        R14←u21
R10←R02+R01    R03←R14+R09
R10←R10+R04    R03←R03+R00
R13←R10×R04    R15←R10×R14 ∗
R13←R13+R03    R15←R15+R12
R04←R01×R02    R14←R01×R09 ∗
R13←R13+R04    R15←R15+R14
R03←R01×h2     R04←R01×R08 ∗
R13←R13+R05    R03←R03+h1
R01←R01+h2     R03←R03+R00
R01←R05×R01    R03←R05×R03 ∗
R15←R15+R01    R04←R04+R03
R01←R00×R02    R03←R00×R09 ∗
R15←R15+R01    R04←R04+R03
R05←u22        R14←u21
R01←R13×R05    R03←R13×R14 ∗
R15←R15+R01    R04←R04+R03
R15←R15+R08    R04←R04+R07
R00←u20
R03←R00×R10
R15←R15+R00    R04←R04+R03
R03←R05×R15
R04←R04+R03
R00←u12
R03←R00+f6

Table 4. Parallel arithmetic of HECADD for genus three - continued

R03←R03×R12
R04←R04+R03
R00←v12        R01←v11
R03←v10
R02←R02+R10
R05←R02×R04    R12←R02×R15 ∗
R05←R05+R11    R12←R12+R07
R12←R12+R04
R05←R06×R05    R12←R06×R12 ∗
R05←R05+h0     R12←R12+h1
R05←R05+R03    R12←R12+R01
R01←R02×R13    R03←R02×R10 ∗
R01←R01+R15    R03←R03+R13
R01←R01+R08    R03←R03+R09
R01←R06×R01    R03←R06×R03 ∗
R01←R01+h2     R03←R03+1
R01←R01+R00
R00←R03×R03    R02←R03×h1 ∗
R06←R00+f6     R07←R15+R02
R00←h2×R03     R02←h2×R01 ∗
R08←R13+R00    R07←R07+R02
R06←R06+R10    R08←R08+f5
R06←R06+R03    R08←R08+R01
R00←R06×R10    R02←R06×R13 ∗
R08←R08+R00    R07←R07+R02
R00←R01×R01
R00←R00+f4     R07←R07+R12
R07←R07+R00    R03←R03+1
R00←R08×R10    R02←R08×R03 ∗
R07←R07+R00    R12←R12+R02
R00←R03×R06    R02←R03×R07 ∗
R01←R01+R00    R05←R05+R02
R01←R01+h2     R12←R12+h1
R05←R05+h0
u32←R06        u31←R08
u30←R07        v32←R01
v31←R12        v30←R05

Table 5. Parallel arithmetic of HECDBL for genus three

HECDBL: 43M + 55A + I with 16 variables.
Input: D1 = (u1, v1) with
  u1 = x^3 + u12 x^2 + u11 x + u10;  v1 = v12 x^2 + v11 x + v10;
  h(x) = x^3 + h2 x^2 + h1 x + h0;
  f(x) = x^7 + f6 x^6 + f5 x^5 + f4 x^4 + f3 x^3 + f2 x^2 + f1 x + f0.
Output: D3 = (u3, v3) = 2 D1 with
  u3 = x^3 + u32 x^2 + u31 x + u30;  v3 = v32 x^2 + v31 x + v30.
Same conventions as Table 4.

R00←u12        R01←u11
R02←u10
R06←h1×R00     R07←h1×R02 ∗
R08←h2×R01     R09←h2×R02 ∗
R10←h0×R01     R11←h0×R00 ∗
R05←R02+h0     R04←R01+h1
R02←R05×R05    R12←R05×R04 ∗
R04←R04×R04    R03←R00+h2
R10←R10+R07
R13←R10×R04    R10←R10×R03 ∗
R05←R05+R06    R11←R11+R09
R05←R05+R08    R02←R02+R10
R08←R03×R11
R11←R08×R11
R03←R05×R03    R05←R05×R02 ∗
R05←R05+R11    R04←R04+R03
R05←R05+R13    R08←R08+R12
R03←R04×R01    R01←R04×R00 ∗
R01←R01+R08    R03←R03+R02
R08←R08×R00
R03←R03+R08
R00←u12        R02←u11
R06←u10        R07←v12
R08←v11        R09←v10
R10←R07×R07    R11←R07×h1 ∗
R12←R00+f6     R13←R07+f5
R14←R12×R02    R15←R12×R00 ∗
R13←R13+R15
R15←R10+f4
R13←R13+R02    R15←R15+R08
R08←h2×R08     R10←h2×R07 ∗
R14←R14+R06    R08←R08+R09
R15←R15+R14    R07←R07+f5
R09←R13×R00    R13←R13×R02 ∗
R15←R15+R10
R15←R15+R09    R08←R08+R13
R14←R14+R15    R08←R08+f3
R12←R12×R06    R15←R15×R00
R02←R00×R02    R00←R00×R00 ∗
R08←R08+R12    R11←R11+R15
R07←R07+R00    R14←R14+R02
R08←R08+R11
R00←u12        R02←u11
R06←u10
R09←R01+R04    R10←R14+R07
R09←R09×R10    R14←R14×R01
R11←R03+R04    R12←R08+R07
R11←R11×R12    R08←R08×R03
R13←R03+R01    R10←R10+R12
R13←R13×R10    R07←R07×R04
R13←R13+R14    R09←R09+R14
R11←R11+R14    R12←R08+R07
R13←R13+R08    R09←R09+R07
R01←R07×R00    R03←R07×R02 ∗
R01←R01+R09    R13←R13+R03
R07←R07+R01    R02←R02+R06
R06←R01×R06    R00←R01×R00 ∗
R08←R08+R06    R00←R00+R03
R02←R02×R07
R11←R11+R12    R13←R13+R02
R13←R13+R06    R11←R11+R00
R01←R11×R05    R06←R11×R11 ∗
R01←1/R01
R03←R01×R05    R06←R01×R06 ∗
R05←R05×R03
R00←R03×R08    R01←R03×R13 ∗
R02←u12        R04←u11
R10←u10
R08←R02×R00    R09←R02×R01 ∗
R02←R02+R01    R09←R09+R04
R07←R04×R00    R04←R04×R01 ∗
R08←R08+R04    R09←R09+R00
R11←R10×R00    R12←R10×R01 ∗
R07←R07+R12    R08←R08+R10
R04←u12        R14←u11
R10←R04+h2     R12←R00+h1
R10←R10+R01    R12←R12+R14
R04←R10×R04    R10←R10×R05 ∗
R13←R01×h2     R01←R01×R01 ∗
R12←R12+R04    R01←R01+R05
R12←R12+R13
R12←R05×R12    R05←R05×R05 ∗
R03←R05×f6     R00←R00×R00
R10←R10+R05    R03←R03+R00

Table 5. Parallel arithmetic of HECDBL for genus three - continued

R12←R12+R03
R03←v12        R04←v11
R05←v10
R09←R09+R01    R08←R08+R10
R00←R02×R01    R13←R02×R10 ∗
R14←R02×R12    R12←R12+R07
R14←R14+R11    R00←R00+R08
R13←R13+R12
R00←R06×R00    R13←R06×R13 ∗
R14←R06×R14    R15←R06×R09 ∗
R15←R15+1      R00←R00+h2
R13←R13+h1     R14←R14+h0
R00←R00+R03    R13←R13+R04
R14←R14+R05    R04←R15+f6
R02←R15×R15    R03←R15×h1 ∗
R03←R03+R10    R02←R02+R04
R04←R00×R00    R05←R00×h2 ∗
R03←R03+R04    R06←R01+f5
R03←R03+f4     R05←R05+R13
R07←R15×h2
R06←R06+R07    R03←R03+R05
R15←R15+1      R06←R06+R00
R01←R02×R01    R04←R02×R15 ∗
R03←R03+R01    R04←R04+R00
R05←R15×R06    R07←R15×R03 ∗
R05←R05+R13    R07←R07+R14
R04←R04+h2     R05←R05+h1
R07←R07+h0
u32←R02        u31←R06
u30←R03        v32←R04
v31←R05        v30←R07