MCM Paper (Final Version)


Notation: LEO, low Earth orbit; MEO, medium Earth orbit; GEO, geosynchronous orbit; risk-profit rate; fixed-profit rate.

A sound commercial plan would let us seize this business opportunity. We build four models to analyze three removal alternatives (water jets, lasers, and satellites) and their combinations, and then determine whether an economically attractive opportunity exists; the four models analyze the risk, cost, and profit of space-debris removal and forecast its future scale.

First, we build a profit model based on net present value (NPV) and, through qualitative analysis, determine the best combination of alternatives in three cases: 1) when the amount of debris is huge, a combination of all three alternatives; 2) when the debris pieces are not too large, a combination of water jets and lasers; 3) when the pieces are large enough, a combination of satellites and lasers.
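The NPV-based profit model compares alternatives by discounting their future cash flows. A minimal sketch of the computation (the discount rate and cash-flow figures below are illustrative placeholders, not the paper's data):

```python
def npv(rate, cash_flows):
    """Net present value: cash_flows[0] occurs now (t = 0),
    cash_flows[t] at the end of year t, discounted at `rate`."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

# Hypothetical removal venture: an up-front cost followed by yearly revenue.
project = [-500.0, 150.0, 180.0, 200.0, 220.0]
value = npv(0.08, project)  # 8% discount rate is an assumption
```

An alternative (or combination of alternatives) is economically attractive when its NPV is positive, and candidate combinations can be ranked by this value.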

Second, we build a qualitative risk-analysis model, analyze the factors affecting the risk of each alternative, and conclude that the risk will decline gradually until it reaches a stable level.

To analyze quantitatively how investment in technology and equipment affects cost, we build a two-factor learning-curve model and find the law by which cost changes over time.
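The two-factor learning curve describes unit cost falling with both accumulated production experience and accumulated technology investment. The paper's exact functional form is not reproduced here, so the power-law form below (a common choice for two-factor learning curves) is an assumption:

```python
import math

def unit_cost(c0, cum_output, alpha, knowledge=1.0, beta=0.0):
    """Two-factor learning curve (assumed power-law form): cost falls
    with cumulative output (learning-by-doing, exponent alpha) and with
    the knowledge stock from R&D (learning-by-researching, exponent beta)."""
    return c0 * cum_output ** (-alpha) * knowledge ** (-beta)

# A 20% cost reduction per doubling of output corresponds to
# alpha = log2(1 / 0.8).
alpha = math.log(1 / 0.8, 2)
```

With `beta = 0` this reduces to the classical single-factor (Wright) learning curve; the second factor lets technology investment lower cost even at fixed cumulative output.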

Then we develop a difference-equation prediction model to forecast the number of vehicles launched each year over the next four years.

Combining the results of this forecast, we can determine the best removal option.

Finally, we analyze the sensitivity of our models, discuss their strengths and weaknesses, present a non-technical letter, and point out future work.

Contents
1 Introduction
  1.1 Background of the problem
  1.2 Feasible alternatives
  1.3 General assumptions
  1.4 Outline of our approach
2 Our models
  2.1 Time-profit model
    2.1.1 Notation
    2.1.2 Model building
    2.1.3 Results and analysis
  2.2 Difference-equation prediction model
    2.2.1 Model building
    2.2.2 Analysis of results
  2.3 Two-factor technology learning-curve model
    2.3.1 Background
    2.3.2 Notation
    2.3.3 Model building
    2.3.4 Analysis of results
  2.4 Qualitative risk-analysis model
    2.4.1 Background
    2.4.2 Model building
    2.4.3 Results and analysis
3 Sensitivity analysis of our models
  3.1 Difference-equation prediction model
    3.1.1 Stability analysis
    3.1.2 Sensitivity analysis
  3.2 Two-factor technology learning-curve model
    3.2.1 Stability analysis
    3.2.2 Sensitivity analysis
4 Strengths and weaknesses
  Difference-equation prediction model: strengths; weaknesses
  Two-factor technology learning-curve model: strengths; weaknesses
  Time-profit model: strengths; weaknesses
5 Conclusion
6 Future work
7 References

A Win-Win Model: Save the Earth, Seize the Opportunity

1 Introduction
1.1 Background of the problem
Space was once clean and tidy.

34040 (MCM paper, final)

Key Words: Cluster; Principal Component Analysis; AHP; System Dynamics
Contents
1 Introduction
  1.1 Problem Restatement
  1.2 Problem Analysis
    1.2.1 Part One
    1.2.2 Part Two
    1.2.3 Part Three
2 Models
  2.1 Part One
    2.1.1 Clustering Model
    2.1.2 Principal Component Analysis
  2.2 Part Two

Outstanding Paper of the Mathematical Contest in Modeling (MCM/ICM)

Team Control Number: 7018
Problem Chosen: C

Summary

This article researches the potential impact of marine garbage debris on the marine ecosystem and on human beings, and how we can deal with the substantial problems caused by the aggregation of marine wastes.

In task one, we give a definition of the potential long-term and short-term impacts of marine plastic garbage. We regard the toxin-concentration effect caused by marine garbage as the long-term impact, to be tracked and monitored. We establish a composite indicator model based on the density of plastic toxins and the content of toxins absorbed by plastic fragments in the ocean to express the impact of marine garbage on the ecosystem, and take the sea near Japan as an example to examine our model.

In task two, we design an algorithm that uses the marine-plastic density measured each year at the discrete measuring points given in the references to plot the plastic density of the whole area at various locations. Based on the changes in marine-plastic density across different years, we determine that the center of the plastic vortex lies roughly at 140°W-150°W, 30°N-40°N. With our algorithm, a sea area can be monitored reasonably well by regular observation of only part of the specified measuring points.

In task three, we classify the plastic into three types: surface-layer plastic, deep-layer plastic, and the interlayer between the two. We then analyze the degradation mechanism of plastic in each layer and finally obtain the reason why those plastic fragments come to a similar size.

In task four, we classify the sources of marine plastic into three types: land, accounting for 80%; fishing gear, accounting for 10%; and boating, accounting for 10%. We then build an optimization model according to the dual-target principle of emissions reduction and management.
Finally, we arrive at a more reasonable optimization strategy.

In task five, we first analyze the mechanism of formation of the Pacific Ocean trash vortex, and conclude that marine garbage gyres will also emerge in the South Pacific, the South Atlantic, and the Indian Ocean. According to diffusion theory, we establish a differential prediction model of future marine-garbage density and predict the density of garbage in the South Atlantic Ocean, obtaining the stable density at eight measuring points.

In task six, using data on the annual national consumption of polypropylene plastic packaging and data fitting, we predict the environmental benefit generated by prohibiting polypropylene take-away food packaging in the next decade. By means of this model and our prediction, each nation would release 1.31 million tons less plastic garbage in the next decade.

Finally, we submit a report to the expedition leader, summarize our work, and make some feasible suggestions to policy makers.

Task 1:

Definitions:
● Potential short-term effects of the plastic: hazardous effects that show up in the short term.
● Potential long-term effects of the plastic: potential effects whose hazards are great but appear only after a long time.

Under our definition, the short-term and long-term effects of the plastic on the ocean environment are as follows.

Short-term effects:
1) The plastic is eaten by marine animals or birds.
2) Animals are entangled in plastics, such as fishing nets, which hurt or even kill them.
3) Plastics obstruct the way of passing vessels.

Long-term effects:
1) Enrichment of toxins through the food chain: the waste plastic in the ocean undergoes no natural degradation in the short term; it is first broken down into tiny fragments through the action of light, waves, and micro-organisms, while its molecular structure remains unchanged.
These "plastic sands", easily eaten by plankton, fish, and others because they look very similar to marine life's food, cause the enrichment and delivery of toxins.
2) Acceleration of the greenhouse effect: after long-term accumulation and pollution by plastics, the water becomes turbid, which seriously affects the photosynthesis of marine plants (such as phytoplankton and algae). The deaths of large numbers of plankton would also lower the ocean's ability to absorb carbon dioxide, intensifying the greenhouse effect to some extent.

To monitor the impact of plastic rubbish on the marine ecosystem: according to the relevant literature, plastic resin pellets accumulate toxic chemicals, such as PCBs, DDE, and nonylphenols, and may serve as a transport medium and source of toxins to marine organisms that ingest them [2]. As it is difficult for plastic garbage in the ocean to degrade completely in the short term, the plastic resin pellets in the water will increase over time and absorb more toxins, resulting in the enrichment of toxins and causing serious impact on the marine ecosystem. Therefore, we track and monitor the concentrations of PCBs, DDE, and nonylphenols contained in the plastic resin pellets in seawater as an indicator to compare the extent of pollution in different regions of the sea, thus reflecting the impact of plastic rubbish on the ecosystem.

To establish a pollution-index evaluation model: for purposes of comparison, we unify the concentration indexes of PCBs, DDE, and nonylphenols into one comprehensive index.

Preparations:
1) Data standardization
2) Determination of the index weights

Because Japan has done research on the contents of PCBs, DDE, and nonylphenols in plastic resin pellets, we use the survey of Japanese waters conducted by the University of Tokyo between 1997 and 1998 to standardize the concentration indexes of PCBs, DDE, and nonylphenols.
We take Kasai Seaside Park, Keihin Canal, Kugenuma Beach, and Shioda Beach in the survey as the first, second, third, and fourth regions, and PCBs, DDE, and nonylphenols as the first, second, and third indicators. The standardized model is

V_{ij} = (V_{ij} - V_j^{min}) / (V_j^{max} - V_j^{min}),   (i = 1, 2, 3, 4; j = 1, 2, 3)

where V_j^{max} is the maximum and V_j^{min} the minimum of the measurements of indicator j over the four regions, and the left-hand side is the standardized value of indicator j in region i.

According to the literature [2], the Japanese observational data are shown in Table 1 (contents of PCBs, DDE, and nonylphenols in marine polypropylene). Standardizing with the model above gives Table 2. In Table 2, the three indicators of the Shioda Beach area are all 0, because the contents of PCBs, DDE, and nonylphenols in polypropylene plastic resin pellets in this area are the least, while 0 only relatively represents the smallest; similarly, 1 indicates that in some area the value of an indicator is the largest.

To determine the index weights of PCBs, DDE, and nonylphenols, we use the Analytic Hierarchy Process (AHP). AHP is an effective method that transforms semi-qualitative and semi-quantitative problems into quantitative calculation; it uses ideas of analysis and synthesis in decision-making and is ideally suited for multi-index comprehensive evaluation. The hierarchy is shown in Figure 1 (hierarchy of index factors).

We then determine the weight of each concentration indicator in the general pollution indicator as follows. To analyze the role of each concentration indicator, we establish a matrix P of relative proportions:

P = ( 1     P_12  P_13
      P_21  1     P_23
      P_31  P_32  1 )

where P_mn represents the relative importance of the concentration indicators B_m and B_n. Usually we use 1, 2, ..., 9 and their reciprocals to represent different degrees of importance.
The greater the number, the more important the indicator; similarly, the relative importance of B_n to B_m is 1/P_mn (m, n = 1, 2, 3). Suppose the maximum eigenvalue of P is λ_max; then the consistency index is

CI = (λ_max - n) / (n - 1),

and, with RI the average random consistency index, the consistency ratio is

CR = CI / RI.

For a matrix P with n ≥ 3, if CR < 0.1 the consistency is thought to be acceptable, and the eigenvector can be used as the weight vector.

According to the harmful levels of PCBs, DDE, and nonylphenols and the requirements of the EPA on the maximum concentration of the three toxins in seawater, we get the comparison matrix

P = ( 1    3    4
      1/3  1    6/5
      1/4  5/6  1 ).

By MATLAB calculation, the maximum eigenvalue of P is λ_max = 3.0012, and the corresponding eigenvector is W = (0.9243, 0.2975, 0.2393), with

CR = CI / RI = 0.047 / 1.12 = 0.042 < 0.1.

Therefore, the degree of inconsistency of matrix P is within the permissible range. Taking the eigenvector of P as the weight vector and normalizing, we get the final weight vector W' = (0.6326, 0.2036, 0.1638).
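The eigenvector and consistency computation above can be checked numerically. A sketch using NumPy (the example matrix in the test is a perfectly consistent one chosen for illustration, not the paper's judgment matrix; the RI values are the standard Saaty table):

```python
import numpy as np

# Saaty's average random consistency index, indexed by matrix order n.
SAATY_RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(P):
    """Return (weights, lambda_max, CR) for a pairwise comparison
    matrix P: the principal eigenvector normalized to sum 1, its
    eigenvalue, and the consistency ratio CR = CI / RI."""
    vals, vecs = np.linalg.eig(P)
    k = int(np.argmax(vals.real))
    lam = float(vals.real[k])
    w = np.abs(vecs[:, k].real)   # eigenvector sign is arbitrary
    w = w / w.sum()
    n = P.shape[0]
    ci = (lam - n) / (n - 1)
    cr = ci / SAATY_RI[n] if SAATY_RI[n] > 0 else 0.0
    return w, lam, cr
```

A matrix passes the consistency check when the returned CR is below 0.1, matching the criterion stated above.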
Defining the overall pollution target of region i as Q_i, with the standardized values of the three indicators of region i written V_i = (V_i1, V_i2, V_i3) and the weight vector W', we form the model for the overall target of marine pollution assessment:

Q_i = W' V_i^T,   (i = 1, 2, 3, 4)

By the model above, we obtain the values of the total pollution index for the four regions of the Japanese ocean in Table 3. In Table 3, the highest value of the total pollution index means the concentration of toxins in the polypropylene plastic resin pellets there is the highest, whereas the value of the total pollution index for Shioda Beach is the lowest (we point out that 0 is only a relative value and does not mean the area is free of plastic pollution).

Through the assessment method above, we can monitor the concentrations of PCBs, DDE, and nonylphenols in the plastic debris to reflect the influence on the ocean ecosystem: the higher the concentration of toxins, the bigger the influence on marine organisms, and the enrichment along the food chain becomes more and more dramatic. Above all, the variation of toxin concentration simultaneously reflects the spatial distribution and time variation of marine litter. We can predict the future development of marine litter by regularly monitoring the content of these substances, providing data for sea expeditions detecting marine litter and a reference for government departments making ocean-governance policies.

Task 2:

In the North Pacific, the clockwise flow forms a never-ending maelstrom that rotates the plastic garbage.
Over the years, the subtropical eddy current in the North Pacific has gathered garbage from the coast or from fleets, entrapped it in the whirlpool, and brought it to the center under the action of the centripetal force, forming an area of 3.43 million square kilometers (more than one-third the size of Europe). As time goes by, the garbage in the whirlpool tends to increase year by year in breadth, density, and distribution. To describe clearly the variability of these increases over time and space, according to "Count Densities of Plastic Debris from Ocean Surface Samples North Pacific Gyre 1999-2008", we analyze the data, exclude points with great dispersion, and retain those with concentrated distribution. The longitude values of the garbage locations in the sampled regions of each year serve as the x-coordinate of a three-dimensional coordinate system, the latitude values as the y-coordinate, and the plastic count per cubic meter of water at the position as the z-coordinate. Further, we establish an irregular grid in the xy plane according to the obtained data and draw a grid line through all the data points. Using the inverse-distance-squared method with a trend factor, which can not only estimate the plastic count per cubic meter of water at any position but also capture the trend of the plastic counts between two original data points, we can obtain the unknown grid points approximately.
When the data at all the irregular grid points are known (or approximately known, or obtained from the original data), we can draw the three-dimensional image with Matlab, which fully reflects the variability of the increases in garbage density over time and space.

Preparations:

First, we determine the coordinates of each year's sampled garbage. The distribution range of the garbage is about 120°W-170°W, 18°N-41°N, as shown in "Count Densities of Plastic Debris from Ocean Surface Samples North Pacific Gyre 1999-2008"; we divide a square in the picture into 100 grids, as in Figure (1). According to the position of the grid containing each measuring point's center, we identify the latitude and longitude of each point, which respectively serve as the x- and y-coordinates of the three-dimensional coordinates.

Second, we determine the plastic count per cubic meter of water. As the plastic counts provided by the reference are given as 5 density intervals, to identify exact values of the garbage density at one year's different measuring points we assume that the density is a random variable uniformly distributed in each interval:

f(x) = 1 / (b - a),  x ∈ (a, b);   f(x) = 0, otherwise.

We use the uniform function in Matlab to generate continuous uniformly distributed random numbers in each interval, which approximately serve as the exact values of the garbage density and as z-coordinates of the year's measuring points.

Assumptions:
(1) The data we obtained are accurate and reasonable.
(2) The plastic count per cubic meter of water changes continuously over the ocean area.
(3) The density of the plastic in the gyre varies by region. The density in the gyre and that in its surrounding area are interdependent, but this dependence decreases with increasing distance.
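The interval-to-value step above can be sketched directly: each reported density interval is replaced by a uniform random draw from that interval (the interval endpoints below are illustrative, not the reference's values):

```python
import random

def sample_density(interval, rng):
    """Draw a representative plastic-count density from a reported
    interval (a, b), under the assumption made above that the density
    is uniformly distributed within each interval."""
    a, b = interval
    return rng.uniform(a, b)

rng = random.Random(0)  # fixed seed for reproducibility
samples = [sample_density((0.1, 0.5), rng) for _ in range(1000)]
```

Each sampled value then serves as the z-coordinate of the corresponding measuring point.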
For our problem, each given data point influences every unknown point around it, and every unknown point is influenced by each given data point; the nearer a given data point is to an unknown point, the larger its role.

Establishing the model:

Following the method described above, we take the garbage-density distributions in "Count Densities of Plastic Debris from Ocean Surface Samples North Pacific Gyre 1999-2008" as coordinates (x, y, z), as in Table 1. Through analysis and comparison, we excluded a number of data points with very large dispersion and retained those with a more concentrated distribution, as can be seen in Table 2; in this way we obtain a more accurate density-distribution map. We then segment the x- and y-coordinate values of the n known data points, arranged from small to large, to form a non-equidistant grid with n nodes. On this grid we know the plastic density only at the n known nodes, so we must find the density of plastic garbage at the other nodes. Since only a sampling survey of the garbage density of the North Pacific vortex was done, it is logical that each known data point affects the unknown nodes to a certain extent, and that nearby known points have a higher impact on the density of plastic garbage than distant ones. In this respect we use a weighted-average format, with weights inversely proportional to the squared distance, to express the more important effects of close known points. Suppose there are two known points Q1 and Q2 on a line, that is, we already know the plastic-litter density at Q1 and Q2, and we want to estimate the density at a point G on the connection of Q1 and Q2.
It can be expressed by a weighted-average algorithm:

Z_G = ( Z_{Q1} / GQ1² + Z_{Q2} / GQ2² ) / ( 1/GQ1² + 1/GQ2² )

where GQ expresses the distance between the points G and Q. A weighted average of the nearby known points alone cannot reflect the trend between them, so we assume that the change of density between any two given points also influences the density at an unknown point, reflecting the density change as a linear trend. Therefore, to estimate the density at an unknown point, we introduce trend terms into the weighted-average formula; and because close points have greater impact, the density trend between close points is also stronger. For the one-dimensional case, the formula for Z_G in the previous example is modified to the following format:

Z_G = ( Z_{Q1} / GQ1² + Z_{Q2} / GQ2² + Z_{Q1Q2} / (GQ1² + Q1Q2²) ) / ( 1/GQ1² + 1/GQ2² + 1/(GQ1² + Q1Q2²) )

where Q1Q2 is the separation distance between the known points and Z_{Q1Q2} is the density of plastic garbage at point G given by the linear trend between the densities at Q1 and Q2.
For a two-dimensional area, point G is not on the line Q1Q2, so we drop a perpendicular from point G to the line connecting Q1 and Q2 and obtain the foot P. The impact of P on Q1 and Q2 is just like the one-dimensional case, and the closer G is to P the larger the impact, and the farther the smaller; so the weighting factor should also be inversely related to GP in a certain way. We adopt the following format:

Z_G = ( Z_{Q1} / GQ1² + Z_{Q2} / GQ2² + Z_{Q1Q2} / (GP² + GQ1² + Q1Q2²) ) / ( 1/GQ1² + 1/GQ2² + 1/(GP² + GQ1² + Q1Q2²) )

Taken together, we postulate the following roles: (1) each known data point influences the density of plastic garbage at each unknown point in inverse proportion to the square of the distance; (2) the change of density of plastic garbage between any two known data points affects each unknown point, and this influence diffuses along the straight line through the two known points; (3) the influence of the density change between two known data points on a specific unknown point depends on three distances: a. the vertical distance from the specific point to the straight line linking the two known points; b. the distance from the nearest known point to the specific unknown point; c.
the separation distance between the two known data points.

If we mark Q1, Q2, ..., QN as the locations of the known data points, G as an unknown node, P_{ijG} as the intersection of the line connecting Q_i and Q_j with the perpendicular from G to that line, and Z(Q_i, Q_j, G) as the density at G given by the trend between Q_i and Q_j, with Z(Q_i, Q_i, G) prescribed to be the measured density of plastic garbage at point Q_i, then the calculation formula is

Z_G = [ Σ_{i=1}^{N} Σ_{j=i}^{N} Z(Q_i, Q_j, G) / (GP_{ijG}² + GQ_i² + Q_iQ_j²) ] / [ Σ_{i=1}^{N} Σ_{j=i}^{N} 1 / (GP_{ijG}² + GQ_i² + Q_iQ_j²) ]

Here we plug each year's observational data from Schedule 1 into our model and draw the three-dimensional images of the spatial distribution of the marine garbage density with Matlab, shown in Figure (2) for 1999, 2000, 2002, 2005, 2006, and 2007-2008.

It is observed and analyzed that, from 1999 to 2008, the density of plastic garbage increased year by year, most significantly in the region 140°W-150°W, 30°N-40°N. Therefore, we can be confident that this region is probably the center of the marine litter whirlpool. The gathering process should be that dispersed garbage floating in the ocean moves with the ocean currents and gradually approaches the whirlpool region.
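The core of the interpolation scheme, the base inverse-distance-squared weighted average (without the pairwise trend terms), can be sketched as follows; the sample points are illustrative, not survey data:

```python
def idw_estimate(known, gx, gy):
    """Inverse-distance-squared weighted average at (gx, gy) from
    known points [(x, y, z), ...]. This sketches only the base
    weighted average; the full model above adds linear trend terms
    between every pair of known points."""
    num = den = 0.0
    for x, y, z in known:
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        if d2 == 0.0:
            return z  # G coincides with a known measuring point
        num += z / d2
        den += 1.0 / d2
    return num / den
```

Evaluating this estimator at every node of the irregular grid yields the density field from which the three-dimensional surface is drawn.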
At the beginning, the area close to the vortex shows an obvious increase in plastic-litter density; because of the centripetal force, the litter keeps moving toward the center of the vortex, and as time accumulates, the garbage density at the center grows bigger and bigger, until it becomes the Pacific rubbish island we see today. It can be seen that, through our algorithm, as long as an expedition can detect the density at a number of discrete measuring points in an area and track their changes, our model can estimate the density over all the waters. This will reduce significantly the workload of marine expedition teams monitoring marine pollution, and also save costs.

Task 3: The degradation mechanism of marine plastics

We know that light, mechanical force, heat, oxygen, water, microbes, chemicals, etc. can result in the degradation of plastics. Mechanistically, the factors resulting in degradation can be summarized as optical, biological, and chemical.

MCM Paper Template

The Keep-Right-Except-To-Pass Rule

Summary

As for the first question, it provides a traffic rule of keep right except to pass, requiring us to verify its effectiveness. Firstly, we define a traffic rule different from the keep-right rule in order to frame the problem clearly; then, we build a cellular automaton model and a NaSch model from the massive data we collected; next, we make full use of numerical simulation over several influence factors of traffic flow; at last, by analyzing the graphs we obtain, we reach the following conclusion: when vehicle density is lower than 0.15 (light traffic), the lane-speed-control rule is more effective in terms of safety; when vehicle density is greater than 0.15 (heavy traffic), the keep-right-except-to-pass rule is more effective.

As for the second question, it requires us to test whether the conclusion we obtained for the first question applies equally to a keep-left rule. First of all, we build a stochastic multi-lane traffic model; from the viewpoint of vehicle-flow stress, we propose, making full use of a Bernoulli process, that the probability of moving to the right is 0.7 and to the left otherwise; from the viewpoint of the ping-pong effect, the conclusion is that the choice of the changing lane is random. On the whole, the fundamental reason is the formation of driving habit, so the conclusion remains effective under the keep-left rule.

As for the third question, it requires us to demonstrate the effectiveness of the rule advised in the first question under an intelligent vehicle-control system. Firstly, taking the speed limits into consideration, we build a microscopic traffic simulator model for traffic-simulation purposes. Then, we implement a METANET model for state prediction, used by the MPC traffic controller.
Afterwards, we certify that the dynamic speed-control measure can improve traffic flow. Lastly, neglecting the safety factor, combining the keep-right rule with dynamic speed control is the best solution to accelerate traffic flow overall.

Key words: cellular automaton model; Bernoulli process; microscopic traffic simulator model; MPC traffic control

Contents
1. Introduction
2. Analysis of the problem
3. Assumptions
4. Symbol definition
5. Models
  5.1 Building of the cellular automaton model
    5.1.1 Verify the effectiveness of the keep-right-except-to-pass rule
    5.1.2 Numerical simulation results and discussion
    5.1.3 Conclusion
  5.2 The solving of the second question
    5.2.1 The building of the stochastic multi-lane traffic model
    5.2.2 Conclusion
  5.3 Taking an intelligent vehicle system into account
    5.3.1 Introduction of the Intelligent Vehicle Highway Systems
    5.3.2 Control problem
    5.3.3 Results and analysis
    5.3.4 The comprehensive analysis of the result
6. Improvement of the model
  6.1 Strengths and weaknesses
    6.1.1 Strengths
    6.1.2 Weaknesses
  6.2 Improvement of the model
7. References

1. Introduction

As is known to all, driving automobiles is essential for us, so driving rules are crucially important. In many countries, such as the USA and China, drivers obey the rule called "keep right except to pass" (that is, when driving automobiles, the rule requires drivers to drive in the right-most lane unless they are passing another vehicle).

2. Analysis of the problem

For the first question, we decide to use a cellular automaton to build models, then analyze the performance of this rule in light and heavy traffic.
Firstly, we mainly use vehicle density to distinguish light and heavy traffic; secondly, we consider traffic flow and safety as the representative variables that denote light or heavy traffic; thirdly, we build and analyze a cellular automaton model; finally, we judge the rule by comparing two different driving rules and then draw conclusions.

3. Assumptions

In order to streamline our model we have made several key assumptions:
● The double-row, three-lane highway that we study can represent multi-lane freeways.
● The data that we refer to are representative and descriptive.
● The operating condition of the highway is not influenced by blizzards or accidental factors.
● We ignore drivers' own abnormal factors, such as drunk driving and fatigue driving.
● The operating form of the highway intelligent system in our analysis can reflect an intelligent system.
● In the intelligent vehicle system, the sampled data have high accuracy.

4. Symbol definition

i: the index of a vehicle
t: the time

5. Models

By analyzing the problem, we decided to propose a solution by building a cellular automaton model.

5.1 Building of the cellular automaton model

Thanks to its simple rules and convenience for computer simulation, the cellular automaton model has been widely used in the study of traffic flow in recent years. Let x_i(t) be the position of vehicle i at time t, v_i(t) be the speed of vehicle i at time t, p be the random slowing-down probability, and R be the proportion of trucks and buses. The distance between vehicle i and the front vehicle at time t is:

gap_i(t) = x_{i-1}(t) - x_i(t) - 1, if the front vehicle is a small vehicle;
gap_i(t) = x_{i-1}(t) - x_i(t) - 3, if the front vehicle is a truck or bus.

5.1.1 Verify the effectiveness of the keep-right-except-to-pass rule

In addition, with reference to the keep-right-except-to-pass rule, we define a new rule called "control rules based on lane speed". The concrete explanation of the new rule is as follows: there is no special passing lane under this rule.
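The cellular automaton dynamics defined above can be sketched for a single circular lane using the standard NaSch update steps (accelerate, brake to the gap, random slowdown with probability p, move); this is an illustrative single-lane sketch with a one-cell vehicle length, not the paper's full three-lane implementation with lane changing:

```python
import random

def nasch_step(positions, speeds, road_len, v_max, p_slow, rng):
    """One Nagel-Schreckenberg update on a circular single-lane road.
    `positions` are sorted cell indices; the gap matches
    gap_i(t) = x_{i-1}(t) - x_i(t) - 1 for one-cell vehicles."""
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # gap to the vehicle ahead (next higher index, wrapping around)
        gap = (positions[(i + 1) % n] - positions[i] - 1) % road_len
        v = min(speeds[i] + 1, v_max)          # 1. accelerate
        v = min(v, gap)                        # 2. brake to the gap
        if v > 0 and rng.random() < p_slow:    # 3. random slowdown
            v -= 1
        new_speeds.append(v)
    new_positions = [(positions[i] + new_speeds[i]) % road_len
                     for i in range(n)]
    return new_positions, new_speeds
```

Because the braking step never lets a speed exceed the gap, vehicles cannot collide; iterating this step produces the evolution that the numerical simulations average over.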
The speed of the first lane (the far-left lane) is 120-100 km/h (including 100 km/h); the speed of the second lane (the middle lane) is 100-80 km/h (including 80 km/h); the speed of the third lane (the far-right lane) is below 80 km/h. The speeds of the lanes decrease from left to right.

● Lane-changing rules based on lane speed control

If a vehicle on a high-speed lane satisfies v < v_control, gap_i^f(t) ≥ min(v_i(t) + 1, v_max), and gap_i^b(t) ≥ gap_safe, the vehicle will turn into the adjacent right lane, and the speed of the vehicle after the lane change remains unchanged, where v_control is the minimum speed of the corresponding lane.

● The application of the NaSch model evolution

Let P_d be the lane-changing probability (taking into account the actual situation that some drivers like driving in a certain lane and will not take the initiative to change lanes), gap_i^f(t) the distance between the vehicle and the nearest front vehicle, and gap_i^b(t) the distance between the vehicle and the nearest following vehicle. In this article, we assume that the minimum safe distance gap_safe for lane changing equals the maximum speed of the following vehicle in the adjacent lane.

● Lane-changing rules based on keeping right except to pass

In general, traffic flow going through a passing zone (Fig.
5.1.1) involves three processes: the diverging process (one traffic flow diverging into two flows), the interacting process (interaction between the two flows), and the merging process (the two flows merging into one) [4].

Fig. 5.1.1 Control plan of the overtaking process

(1) If a vehicle on the first lane (the passing lane) satisfies gap_i^f(t) ≥ min(v_i(t) + 1, v_max) and gap_i^b(t) ≥ gap_safe, the vehicle will turn into the second lane, and its speed after the lane change remains unchanged.

5.1.2 Numerical simulation results and discussion

In order to facilitate the subsequent discussion, we define the space occupation rate as

p = (N_CAR + 3 N_truck) / (3 L),

where N_CAR is the number of small vehicles on the driveway, N_truck is the number of trucks and buses on the driveway, and L is the total length of the road. The vehicle flow volume Q is the number of vehicles passing a fixed point per unit time, Q = N_T / T, where N_T is the number of vehicles observed in time duration T. The average speed is

V_a = (1 / (N T)) Σ_i Σ_t v_i^t,

where v_i^t is the speed of vehicle i at time t. We take the overtaking ratio P_f, the ratio of the total number of overtakings to the number of vehicles observed, as the evaluation indicator of the safety of traffic flow. After 20,000 evolution steps, averaging the last 2,000 steps based on time, we obtained the following experimental results. In order to eliminate the effect of randomness, we take the systemic average of 20 samples [5].

● Overtaking ratio under different control-rule conditions

Because different control conditions of the road produce different overtaking ratios, we first observe the relationships among vehicle density, the proportion of large vehicles, and the overtaking ratio under different control conditions.

Fig. 5.1.3 Relationships among vehicle density, proportion of large vehicles, and overtaking ratio under different control conditions: (a) based on passing-lane control; (b) based on speed control.

It can be seen from Fig.
5.1.3 that: (1) when the vehicle density is less than 0.05, the overtaking ratio continues to rise with the increase of vehicle density; when the vehicle density is larger than 0.05, the overtaking ratio decreases with the increase of vehicle density; when the density is greater than 0.12, overtaking becomes difficult due to the crowding, so the overtaking ratio is almost 0. (2) When the proportion of large vehicles is less than 0.5, the overtaking ratio rises with the increase of large vehicles; when the proportion of large vehicles is about 0.5, the overtaking ratio reaches its peak value; when the proportion of large vehicles is larger than 0.5, the overtaking ratio decreases with the increase of large vehicles, and under the lane-based control condition the decline is very clear.

● Concrete impact of different control rules on the overtaking ratio

Fig. 5.1.4 Relationships among vehicle density, proportion of large vehicles, and overtaking ratio under different control conditions. (Figures on the left indicate passing-lane control, figures on the right indicate speed control; P_f1 is the overtaking ratio of small vehicles over large vehicles, P_f2 of small vehicles over small vehicles, P_f3 of large vehicles over small vehicles, and P_f4 of large vehicles over large vehicles.)

It can be seen from Fig. 5.1.4 that: (1) the overtaking ratio of small vehicles over large vehicles under passing-lane control is much higher than that under the speed-control condition, because under passing-lane control high-speed small vehicles have to surpass low-speed large vehicles via the passing lane, while under speed control small vehicles are designed to travel on the high-speed lane with no low-speed vehicle in front, so there is no need to overtake.

● Impact of different control rules on vehicle speed

Fig.
5.1.5 Relationships among vehicle density, proportion of large vehicles and average speed under different control conditions. (Figures on the left correspond to passing-lane control, figures on the right to speed control. X_a is the average speed of all vehicles, X_a1 the average speed of all small vehicles, and X_a2 the average speed of all buses and trucks.)

It can be seen from Fig. 5.1.5:

(1) The average speed decreases as the vehicle density and the proportion of large vehicles increase.

(2) When the vehicle density is less than 0.15, X_a, X_a1 and X_a2 are almost the same under both control conditions.

● Effect of the different control conditions on traffic flow

Fig. 5.1.6 Relationships among vehicle density, proportion of large vehicles and traffic flow under different control conditions. (Figure a1 corresponds to passing-lane control, figure a2 to speed control, and figure b shows the traffic-flow difference between the two conditions.)

It can be seen from Fig. 5.1.6:

(1) When the vehicle density is below 0.15 and the proportion of large vehicles is between 0.4 and 1, the traffic flows under the two control conditions are basically the same.

(2) Otherwise, the traffic flow under passing-lane control is slightly larger than under speed control.

5.1.3 Conclusion

In this paper, we have established a three-lane model under different control conditions and studied the overtaking ratio, speed and traffic flow as functions of the control condition, vehicle density and proportion of large vehicles.

5.2 The solving of the second question

5.2.1 The building of the stochastic multi-lane traffic model

5.2.2 Conclusion

On the one hand, from the analysis of the model, in the case where the stress is positive, we also consider the jam situation while making the decision. More specifically, if a driver is in a jam situation, applying B(2, P_R(x)) results in a tendency for this driver to move to the right lane. 
However, in reality, drivers tend to find an emptier lane in a jam situation. For this reason, we apply a Bernoulli process B(2, 0.7), where the probability of moving to the right is 0.7 and of moving to the left is 0.3; the conclusion holds under the rule of keep left except to pass, so the fundamental reason is the formation of the driving habit.

5.3 Taking an intelligent vehicle system into account

For the third question, if vehicle transportation on the same roadway were fully under the control of an intelligent system, we make some improvements to our proposed solution, based on extensive analysis, to perfect the performance of the freeway.

5.3.1 Introduction to Intelligent Vehicle Highway Systems

We will use the microscopic traffic simulator model for traffic simulation purposes. The MPC traffic controller implemented in Matlab needs a traffic model to predict the states when the speed limits are applied (Fig. 5.3.1). We implement a METANET model for prediction purposes [14].

5.3.2 Control problem

As a constraint, the dynamic speed limits are given maximum and minimum allowed values. The upper bound for the speed limits is 120 km/h, and the lower bound is 40 km/h. For the calculation of the optimal control values, all speed limits are constrained to this range. When the optimal values are found, they are rounded to a multiple of 10 km/h, since this is clearer for human drivers and also technically feasible without large investments.

5.3.3 Results and analysis

When the density is high, it is more difficult to control the traffic, since the mean speed might already be below the control speed. Therefore, simulations are done using densities at which the shock wave can dissolve without control, and at densities where the shock wave remains. For each scenario, five simulations are done for three different cases, each with a duration of one hour. 
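The constraint handling of Section 5.3.2 (clamping to [40, 120] km/h, then rounding to a multiple of 10 km/h) can be sketched as follows; the function name and the sample inputs are ours, not the paper's:

```python
V_MIN, V_MAX, STEP = 40, 120, 10  # km/h bounds and display granularity

def constrain_limit(v_opt):
    """Clamp an optimal dynamic speed limit to [40, 120] km/h, then
    round it to a multiple of 10 km/h for display to human drivers."""
    v = min(max(v_opt, V_MIN), V_MAX)
    return int(round(v / STEP) * STEP)

print(constrain_limit(93.4))   # 90
print(constrain_limit(135.0))  # 120
print(constrain_limit(12.0))   # 40
```

Clamping first and rounding second keeps the displayed limit inside the feasible range even for optimal values near the bounds.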
The results of the simulations are reported in Tables 5.1, 5.2 and 5.3.

● Enforced speed limits

● Intelligent speed adaptation

For the ISA scenario, the desired free-flow speed is about 100% of the speed limit. The desired free-flow speed is modeled as a Gaussian distribution with a mean value of 100% of the speed limit and a standard deviation of 5% of the speed limit. Based on this percentage, the influence of the dynamic speed limits is expected to be good [19].

5.3.4 Comprehensive analysis of the results

From the analysis above, we conclude that adopting the intelligent speed control system can effectively decrease travel times; in other words, dynamic speed control measures can improve the traffic flow.

Evidently, under the intelligent speed control system, the effect of the dynamic speed control measure is better than under the lane speed control mentioned in the first problem, because the intelligent speed control system can provide the optimal speed limit in time. 
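The ISA desired-speed model described above can be sketched as a draw from a Gaussian whose mean is the limit and whose standard deviation is 5% of the limit (the seed and sample size are illustrative choices of ours):

```python
import random

def isa_desired_speed(limit_kmh, rng=random):
    """ISA scenario: a driver's desired free-flow speed is drawn from a
    Gaussian with mean 100% of the speed limit and a standard
    deviation of 5% of the limit."""
    return rng.gauss(limit_kmh, 0.05 * limit_kmh)

random.seed(7)  # fixed seed so the sketch is reproducible
speeds = [isa_desired_speed(100.0) for _ in range(10_000)]
mean = sum(speeds) / len(speeds)
print(99.0 < mean < 101.0)  # True: the sample mean is near the 100 km/h limit
```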
In addition, the intelligent speed system can guarantee safe conditions through its various detection devices and sensors.

On the whole, taking into account all the analysis from the first problem onward, in light traffic we can neglect the safety factor with the help of the intelligent speed control system. Thus, in light traffic we propose a new conclusion different from that of the first problem: the rule of keep right except to pass is more effective than lane speed control. In heavy traffic, sparing no effort to improve the operational efficiency of the freeway, we combine the dynamic speed control measure with the rule of keep right except to pass, concluding that applying dynamic speed control can improve the performance of the freeway.

We should highlight that, with the Intelligent Vehicle Highway Systems, different speed limits can be set for different road sections or different vehicle sizes. In fact, freeway traffic operation is extremely complex; thereby, with the application of the Intelligent Vehicle Highway Systems and by adjusting our original solution, we keep it effective for freeway traffic.

6. 
Improvement of the model

6.1 Strengths and weaknesses

6.1.1 Strengths

● The model is easy to simulate on a computer and can be modified flexibly to account for actual traffic conditions; moreover, a large number of images make the model more visual.

● The results effectively achieve all of the goals we set initially; meanwhile, the conclusion is more persuasive because we used the Bernoulli process.

● We can obtain more accurate results by using Matlab.

6.1.2 Weaknesses

● The relationship between traffic flow and safety is not comprehensively analyzed.

● There are many traffic factors and we studied only some of them, so our model needs further improvement.

6.2 Improvement of the model

While comparing models under two kinds of traffic rules, we established the efficiency of driving on the right for improving traffic flow in some circumstances. Because too few rules were compared, the conclusion is inadequate. To improve accuracy, we further propose another kind of traffic rule: speed limits for different types of vehicles.

The possibility of a traffic accident is larger for some vehicles, which brings hidden safety troubles. So we need to consider different or specific vehicle types separately, from the angle of speed limiting, in order to reduce the occurrence of traffic accidents; a highway speed-limit sign is shown in Fig. 6.1.

Fig. 6.1

An advantage of the improved model is that it helps improve the running safety of specific vehicle types while considering the differences between vehicle types. However, our analysis shows that the rule may reduce the road traffic flow. In implementation, the V85 speed of each model should be taken as the main reference basis. 
In recent years, researchers have given V85 models for typical countries, collected in Table 6.1 [21]:

Table 6.1 V85 models from typical countries

Ottesen and Krammes (2000), America:
    V85 = 102.44 − 1.57·DC − 0.012·L − 0.01·DC·LC

Andueza (2000), Venezuela:
    V85 = 98.25 − 2795/R − 894/Ra + 7.486·DC + 9.308·L  [horizontal curve]
    V85 = 100.69 − 3032/R + 27.819·L  [tangent]

Jessen (2001), America:
    V85 = 86.80 + 0.279·Vp − 0.614·G − 0.00239·ADT  [LSD]
    V85 = 72.10 + 0.432·Vp − 0.00212·ADT  [NLSD]

Donnell (2001), America:
    V85 = 78.4 + 0.014·R − 1.40·G − 0.00724·LT²  (model 2)
    V85 = 75.1 + 0.0176·R − 1.48·G − 0.008369·LT²  (model 3)
    V85 = 74.5 + 0.0176·R − 1.69·G − 0.00810·LT²  (model 4)
    V85 = 83.1 − 2.08·G − 0.00934·LT²  (model 5)

Bucchi A., Biasuzzi K. and Simone A. (2005), Italy:
    V85 = 66.164 − 0.124·DC
    V85 = 55.66 − 33.46·E − 0.4·DC
    V85 = 65.745 − 0.119·DC − 1.35·E − 0.855·DC²

Fitzpatrick, America:
    V85 = 111.07 − 175.98·K

Meanwhile, there are other driving rules, such as speed limits in adverse weather conditions. Such a rule can improve the safety factor of the vehicles to some extent while limiting speed at different levels.

7. References

[1] M. Rickert, K. Nagel, M. Schreckenberg, A. Latour, Two lane traffic simulations using cellular automata, Physica A 231 (1996) 534–550.

[20] J.T. Fokkema, Lakshmi Dhevi, Tamil Nadu, Traffic Management and Control in Intelligent Vehicle Highway Systems, 18 (2009).

[21] Yang Li, New Variable Speed Control Approach for Freeway (2011) 1–66.

2013 MCM Paper


1. Introduction

The electrical oven is a kind of sealed electrical equipment used to bake food or dry products. With the emergence of the oven, eating delicious baked food at home, such as cakes, desserts, biscuits, turkey, duck, chicken wings and so on, became possible. Industrially, the oven can also be used for drying industrial products. Because ovens serve different uses and individual needs differ, the shape, capacity, power, etc. of ovens vary. With the rapid development of the economy and technology, the types and functions of ovens are also becoming more diverse and specific. The development of the oven brings great convenience to our production and daily life.

Then how does an oven work? The temperature in an oven can usually be set manually through the oven's temperature control system. When you open an oven, there are racks and heating elements. The items bake in baking trays, with the trays placed on the racks. Since the shape of the oven is generally a cube, the shape of the baking tray is generally rectangular. When rectangular baking trays are used in the baking process, the heat is mainly concentrated in the four corners, so that the products placed in the four corners, and to some smaller extent at the edges, are overcooked. In contrast, round baking pans are used much less because their space utilization is not high, but the heat over their entire outer edge is distributed evenly, and the goods do not get overcooked.

So, how do we make the temperature distribution at the edges of pans of different shapes in the oven as uniform as possible while fitting the maximum number of baking pans in the oven, so as to balance the uniformity of the temperature against the space utilization efficiency and obtain "the ultimate brownie pan"?

To solve the problem: first of all, we want to establish a model to represent the heat distribution at the baking pan 
edges of different shapes. In order to achieve this goal, we need to describe the transfer of heat within the oven and explain how it influences a pan during the transfer process. Moreover, we need to design a pan model to optimize the shape of the baking pans. Considering the two conditions, with weights p and (1 − p), of maximizing the number of pans that fit in the limited space and of an even distribution of heat at the edge of the pan, we optimize the shape of the baking pans by calculating the uniformity of the heat distribution at the pan's edge. Working in this way, we obtained the optimal program in theory.

2. Model 1: Heat distribution model

The heating pipe is the heat source of the oven; heat from it, including heat conduction, heat radiation and thermal convection, transfers to the space in the oven. Then which kind or kinds of transfer play the leading role in the baking process? The answer to this question decides our modeling direction. Which factors influence the heat transfer? How do we simplify the impact of these factors to describe the heat distribution at a baking pan's edge? These were the main problems we considered at the beginning of the model building. Of course, our aim is to describe the distribution of heat at the edge of the pans, which means we must take into consideration the impacts affecting the heat distribution at the edge, for example, the thermal conductivity of different items, the thickness of the pans, etc.

Further considering the oven baking process: when the oven has preheated to a certain temperature, it stops heating, and the temperature is maintained in the vicinity of this value. We can assume there is a stable temperature field inside the oven; then the food baking process can be regarded as a process of heat conduction. The goal of our model is solving the heat distribution under different edge shapes of the pan; then, in terms of the relationship between temperature and heat, we can find the 
regularity of the heat. Along this line of thinking, we begin to build the model.

2.1 Assumptions

● Assume the oven has no heat loss, so the temperature field in the oven is a stable field. Taking the actual situation into account, this can be interpreted as follows: from the moment the pan and the cake are put into the oven to the end of baking, the temperature of the air within the oven is constant, and this value is the temperature at the end of the warm-up (set at 200°C according to accessible statistics [1]).

● Assume that the wall thickness of the pan edge does not affect the temperature distribution. The accessible information shows that the material of the pan is usually a good conductor of heat, and the thickness of the tray is very small with respect to the length and width of the pan.

● Assume the side surface of the pan is perpendicular to the bottom surface.

● As is shown in Figure 1-1, using the two adjacent edges as the x axis and y axis respectively, and the upward direction perpendicular to the bottom as the z axis, create a three-dimensional Cartesian coordinate system. 
The assumption is that the temperature field of the side surface of the pan edge is uniformly distributed in the z-axis direction. Under this condition, we can reduce the three-dimensional temperature distribution at the edge of the pan to a two-dimensional problem related only to the x and y distribution.

● Assume that during the oven baking process heat radiation and heat convection are ignored, so that in solving the two-dimensional temperature field we consider only the heat conduction problem.

2.2 Symbols and notes

t: the distribution of the pan's internal temperature field;
∂²t/∂x² (∂²t/∂y², ∂²t/∂z²): the second-order partial derivatives of the temperature field along the x, y, z axes when solving the problem in Cartesian coordinates;
∂t/∂T: the rate of change of the temperature field with time T;
∂t/∂T|x=0 (∂t/∂T|x=l1, ∂t/∂T|y=0, ∂t/∂T|y=l2): the rate of change of the temperature field t with time on the respective boundaries;
∂²t/∂r² (∂²t/∂φ², ∂²t/∂z²): the second-order partial derivatives of the temperature field t in the r, φ, z directions when solving the problem in cylindrical coordinates;
∂t/∂r: the rate of change of the temperature field t in the radial direction;
q_v: volumetric rate of heat generation;
λ: thermal conductivity;
Φ(φ): in the process of separation of variables, the function of the single variable φ.

2.3 Model description

Through a series of assumptions, the goal of our model becomes describing a heat distribution problem in a two-dimensional plane in a stable temperature field; this problem can be solved via the heat conduction equation under certain boundary conditions. First, we consider the heat distribution at the edge of the tray at a certain moment during the baking process, so time is a constant in our model. Secondly, we ignore the heat change in the vertical direction of the pan, and we simplify the heat distribution of the edge of the pan to a heat distribution problem in a 
two-dimensional plane with a boundary, to describe the heat distribution of the pan edge clearly and to bring out the heat differences between the pan corners and the other portions. That is, we select a plan view of the heat distribution of the food and pan to visually depict the differences in heat distribution at the bakeware edge. According to the different pan shapes, we roughly divide the baking pans into three categories and give the respective descriptions:

Rectangular pan: When the horizontal cross-section of the pan is rectangular, as shown in the figure, we create a Cartesian coordinate system with one rectangle vertex as the origin and the length and width directions of the rectangle as the positive directions of the axes. Then the temperature field satisfies [2]:

∂²t/∂x² + ∂²t/∂y² + ∂²t/∂z² = ∂t/∂T

In our two-dimensional flat-space description it simplifies to:

∂²t/∂x² + ∂²t/∂y² = 0

Boundary conditions:

t|x=0 = t|x=l1 = t|y=0 = t|y=l2 = t0

∂t/∂T|x=0 = ∂t/∂T|x=l1 = ∂t/∂T|y=0 = ∂t/∂T|y=l2 = 0

Symmetric polygon pan: When the horizontal cross-section of the pan is between a rectangle and a circle, such as a symmetrical hexagon or a symmetrical octagon, the method of creating a Cartesian coordinate system is the same.

Circular pan: When the horizontal cross-section of the pan is a circle, we can take, as shown in the figure, the center of the disk as the origin and create cylindrical coordinates based on the disk plane. Then the temperature field satisfies:

∂²t/∂r² + (1/r)·∂t/∂r + (1/r²)·∂²t/∂φ² + ∂²t/∂z² = −q_v/λ

In our two-dimensional flat-space description it simplifies to:

∂²t/∂r² + (1/r)·∂t/∂r + (1/r²)·∂²t/∂φ² = 0

Boundary conditions:

2.4 Model solutions and analysis

According to the model equations we established, which the temperature distribution in the two-dimensional plane satisfies. 
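A minimal finite-difference sketch of the rectangular-pan model above: the transient heat equation is stepped explicitly on a grid whose edges are held at the oven temperature (200°C per our first assumption). The grid size, step count and diffusion number are illustrative choices of ours, not values from the paper:

```python
def step_heat(t, alpha=0.2):
    """One explicit finite-difference step of dt/dT = txx + tyy on a
    unit grid; alpha is the dimensionless diffusion number (<= 0.25
    for stability).  Boundary cells are left untouched (Dirichlet)."""
    nx, ny = len(t), len(t[0])
    new = [row[:] for row in t]
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            lap = (t[i-1][j] + t[i+1][j] + t[i][j-1] + t[i][j+1]
                   - 4 * t[i][j])
            new[i][j] = t[i][j] + alpha * lap
    return new

nx = ny = 21
oven, food = 200.0, 25.0  # edge (oven) and initial interior temperature
t = [[oven if i in (0, nx - 1) or j in (0, ny - 1) else food
      for j in range(ny)] for i in range(nx)]
for _ in range(50):
    t = step_heat(t)

# An interior cell next to a corner heats faster than one next to an
# edge midpoint: the corner-overcooking effect discussed in the text.
print(t[1][1] > t[1][ny // 2])  # True
```

The corner cell touches two hot boundary cells while an edge-midpoint cell touches only one, which is the discrete counterpart of heat concentrating in the corners.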
We use Matlab [3] software to simulate the temperature and temperature-gradient distributions for baking pans of different shapes, as shown below. For each pan shape, the solution images of the temperature distribution function and the corresponding temperature gradient function at the pan edge are shown below.

Rectangle:

Figure 1 (1-1, 1-2) Temperature distribution function and temperature gradient function image at the edge of the rectangular baking pan

Figure 1-1 is the temperature distribution image of the edge of the rectangular baking pan. We can learn from the figure that the outer edge of the bakeware has a higher temperature than the central region of the hotplate, the temperature distribution is clearly uneven along the extending direction of the sides of the rectangle, and the temperature in the four corners is higher than in the areas toward the middle.

Figure 1-2 is the image of the temperature gradient function at the edge of the rectangular baking pan. Overall, the temperature gradient on the outer edge tends to be higher than that in the intermediate zone of the bakeware. 
Analyzing the image specifically, the temperature gradients over the entire bakeware region are unevenly distributed; the temperature gradient at the four corners is smaller, and along the direction of the sides of the rectangle the gradient shows a clearly increasing trend from the right-angle vertex to the midpoints of the length and width.

Considering the distribution of the temperature and its gradient together, a regularity can be seen: the uneven distribution is most obvious along the edge of the rectangular baking pan; at the four corners of the rectangle the temperature is relatively higher while the gradient is smaller, so the heat in the four corners is the largest with respect to the entire baking pan, and from the outer edge inward the heat decreases gradually according to a certain function.

Axisymmetric hexagon:

Figure 2 (2-1, 2-2) Temperature distribution function and temperature gradient function image at the edge of the axisymmetric hexagonal baking pan

Figure 2-1 is the temperature distribution function image of the edge of the hexagonal bakeware. From the diagram it can be seen that in this case the hotplate edge temperature decreases gradually from outside to inside, and the temperature distribution on the outer edges is not completely uniform, as shown by the uneven blocks of color. 
Where the corner angle is sharper, the temperature tends to be especially higher than in the internal area of the bakeware; where the corner is less sharp, the temperature is still higher than in the middle area, but relative to the sharp corner, the temperature distribution in the less sharp area is more uniform to some extent.

From Figure 2-2, the temperature gradient function image of the hexagonal bakeware, we can conclude: overall, the bakeware temperature gradient shows a gradually decreasing trend from outside to inside, but the temperature gradient over the entire outer edge is uneven. Near the relatively sharper corners the temperature gradient is small, and along the extension of the corner sides it increases unevenly. Around the relatively less sharp corners, the temperature gradient is distributed relatively more evenly, and the difference between the corner and its surroundings is not very obvious.

Symmetrical octagon:

Figure 3 (3-1, 3-2) Temperature distribution function and temperature gradient function image at the edge of the symmetrical octagonal baking pan

To some degree, the temperature distribution at the edges of the octagonal bakeware is still uneven; the temperature at the corners is slightly higher than on the two sides of each corner. The temperature gradient function image of the octagon shows that the gradient at a corner and at the two sides constituting the corner is not the same; so it can be concluded that the temperature at an octagon corner is relatively higher while the temperature gradient there is relatively smaller, and therefore the heat is relatively larger. 
Extending from the outer space to the intermediate region the heat is gradually reduced, as it is from a corner to its two sides.

Contrasted with the rectangle, the temperature function and temperature gradient distributions of the symmetrical octagon tend to be more evenly distributed at the edge from an overall point of view. Looking at its temperature function image, the temperature along the entire outer edge, whether at the corners or the sides, essentially changes little, and the gradient change of the temperature gradient function at the corners and on the two sides of each corner is not as obvious as for the hexagon. Therefore, with respect to the rectangle and the hexagon, the heat distribution at the edge portion of the octagonal shape is more uniform.

Circle:

Figure 4 (4-1, 4-2) Temperature distribution function and temperature gradient function image at the edge of the circular baking pan

Figure 4-1 shows that the temperature of the circular-edged pan gets increasingly low from the outer edge toward the center, and, from the image, the hotplate temperature distribution is relatively uniform in the circumferential direction close to the outer edge.

Figure 4-2 shows that in a circular hotplate the temperature gradient decreases gradually along the radial direction from the outer edge toward the central region, and the gradient is distributed uniformly along the circumferential direction close to the outer edge. That is, the distribution of the temperature and its gradient in the circumferential direction can be described as uniform. In other words, the heat at the edge of the circular hotplate is uniformly distributed in the tangential direction, and in the radial direction the heat gradually decreases from the outer edge toward the center.

2.5 Conclusions

According to the solution function images we obtained by establishing a bakeware edge heat distribution model in the specific two-dimensional case for a pan of a specific shape 
, we can conclude:

● The heat distribution in a rectangular-edged pan: The heat at the edge of the rectangle is larger than that of the intermediate region overall; on each edge of the rectangle the heat gradually increases from the center of the edge to its two ends, and it gradually decreases from the edge region of the pan toward the intermediate region; the distribution of heat is uneven along the entire baking pan edge. The heat in the four corner areas is higher than at the nearby locations along the four sides, and the heat gradually decreases from outside to inside.

● The heat distribution in a polygonal pan whose shape is between a rectangle and a circle: For such a polygonal baking pan, the heat at the edge is larger than in the middle area; the outside boundary has an uneven heat distribution, and the corners and the borders near the corners have high heat values. But as the number of edges increases, the heat distributed in the vicinity of the corners gradually tends to be even.

● The heat distribution in a circle-edged pan: Near the outer edge, the pan has more heat than the internal central region, and the heat in the circumferential tangential direction is uniformly distributed in the vicinity of the pan's outer edge. In the radial direction, the heat gradually decreases from the outside inward.

The influence of the different heat distributions caused by different pan shapes during heating: because the rectangular-edged baking pan has an uneven heat distribution, with the four corners especially concentrating more heat, products in the four corners of the baking pan are easily baked overdone. 
In contrast, the heat distribution at the circular hotplate edge is uniform, so the heat on the entire boundary of the hotplate is uniform. Products at the edge of the pan are then not baked overdone by excessive localized heat, as they are at a rectangular bakeware edge.

2.6 Model extension

Figure 5 An illustration of the shape transition

We can conjecture from the model-building process that the heat distribution of a polygonal hotplate is related to the sizes of the various angles of the polygon. Ignoring other factors, the greater the angle, the more uniform the heat distribution. At the same time, a polygonal bakeware's heat distribution also has a certain relationship with the number of sides of the polygon: the circle can be regarded as a figure consisting of a myriad of segments. Obviously, the degree of uniformity of the heat distribution is proportional to the number of sides of the polygon. In addition, for the rectangular baking type, the ratio of the width to the length can also influence its heat distribution. We can apply an evaluation of the degree of uniformity in the model: by quantifying the degree of heat distribution uniformity at the edge of the polygon, we can establish a certain numerical relationship between the index and the angles of the polygon as well as the number of its edges. Referring to the water distribution uniformity of sprinkler irrigation [4], we introduce the uniformity coefficient proposed by Christiansen:

CU = (1 − Σ|h_i − h̄| / Σ h_i) × 100%

where h_i (i = 1, …, n) are the measured values and h̄ is their mean. This coefficient measures the uniformity of the heat distribution of the bakeware edge. For example, when we consider the relationship between the heat distribution and the symmetry of a polygon shape, we calculate the change of heat distribution uniformity with different aspect ratios of rectangular bakeware and put the results in the following table.

Table 1 The change of heat distribution uniformity with different aspect ratios

The data in the table reflect a rule: the greater the aspect ratio of the 
rectangle is, the more uneven the heat distribution of the edge. This conclusion is based on the node temperature distributions obtained via the PDE toolbox [5] in Matlab. We first pick points after refinement of the node diagram; the principle is to take points at the same distance from the nearest edge, while distributing the chosen points as evenly as possible on each side. Then we calculate the temperature values at each point, and from this temperature distribution we calculate the uniformity coefficient with the above formula. Because of the limited number of nodes, the result may not reflect the true uniformity of the temperature distribution perfectly reliably. But we tried to homogenize the data collection process at the access points and ensured the accuracy of the data in the calculation, which gives our data a certain credibility.

3. Model 2: Bakeware model

3.1 Problem analysis

We have shown the specific heat distribution at the edges of baking pans of different shapes during use. Although the temperature distribution at the edge of a rectangular baking pan is not uniform, for a rectangular oven, rectangular bakeware meeting certain length and width requirements can achieve 100% utilization of the space. A round baking pan's temperature distribution at the edge is basically uniform, but its biggest drawback is lower space utilization than the rectangle in a rectangular oven. If the rectangular baking tray were applied in our current life, the food in the baking pan would not be evenly heated because of the temperature difference caused by the baking pan's shape, and it would be impossible to make delicious food.
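The Christiansen uniformity coefficient used in the model extension of Model 1 is straightforward to compute from sampled edge temperatures (the sample values below are made up for illustration, not data from the paper):

```python
def christiansen_cu(h):
    """Christiansen uniformity coefficient:
    CU = (1 - sum|h_i - h_mean| / sum h_i) * 100%."""
    mean = sum(h) / len(h)
    deviation = sum(abs(x - mean) for x in h)
    return (1 - deviation / sum(h)) * 100

# Perfectly uniform edge temperatures score 100%.
print(christiansen_cu([200.0, 200.0, 200.0, 200.0]))  # 100.0
# A less uniform set of sampled edge temperatures scores lower.
print(christiansen_cu([210.0, 195.0, 205.0, 190.0]))  # about 96.25
```

A lower CU for a long, narrow rectangle than for a square would express numerically the aspect-ratio rule stated above.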

Mathematical Modeling: MCM Prize-Winning Paper

Some players believe that "corking" a bat enhances the "sweet spot" effect. There are some arguments for that, such as: a corked bat has (slightly) less mass; less mass (lower inertia) means a faster swing speed; and less mass means a less effective collision. These are just some people's views; other people may have different opinions. Whether corking is helpful in the baseball game has not been strongly confirmed yet, and experiments seem to have inconsistent results.
2010 Mathematical Contest in Modeling (MCM) Summary Sheet
Keywords: simple harmonic motion system, differential equations model, collision system

MCM Paper Template (Super Practical)


Team Control Number: 50930
Problem Chosen: A
2015 Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

Summary

Our goal is a model that can be used to control the water temperature while a person takes a bath. After a person fills a bathtub with hot water and gets in, the water gets cooler, which causes the person bodily discomfort. We construct models to analyze the temperature distribution in the bathtub space as time changes. Our basic heat-transfer differential equation model builds on Newton's law of cooling and Fourier's law of heat conduction. We assume that the person feels comfortable within a temperature interval; to save water, we set the temperature of the first injected water to the upper bound of that interval. The water gets cooler as time goes by; we assume a time period and stipulate the temperature range, and with this model we can obtain the volume of the first injected water from the temperature decline from the maximum value to the minimum value. Then we build a model with a partial differential equation; this model describes the cooling of the water after the bathtub is filled. It shows the temperature distribution and the water's cool-down behavior, and we can obtain the water temperature change over space and time with MATLAB. When the temperature declines to the lower limit, the person adds a constant trickle of hot water. At first the bathtub holds a certain volume of water at the minimum temperature; in order to make the temperature after mixing with the hot water closer to the original temperature while adding less hot water, we build a heat accumulation model. During the process of adding hot water, we can calculate the temperature change function with this model until the bathtub is full. After the bathtub fills up, the water volume is a constant value; some of the water overflows and takes away some heat. Now the temperature rise 
didn't quickly as fill it up before,it should make the inject heat and the air convection heat difference smallest.For the movement of people, can be seen as a simple mixing movement, It plays a very good role in promoting the evenly of heat mixture. so we put the human body's degree of motion as a function, and then establish the function and the heat transfer model of the contact, and draw the relationship between them. For the impact of the size of the bathtub, due to the insulation of the wall of the bathtub, the heat radiation of the whole body is only related to the area of the water surface, So the shape and size of the bath is just the area of the water surface. Thereby affecting the amount of heat radiation, thereby affecting the amount of water added and the temperature difference,So after a long and wide bath to determine the length of the bath. The surface area is also determined, and the heattransfer rate can be solved by the heat conduction equation, which can be used to calculate the amount of hot water. Finally, considering the effect of foaming agent, after adding the foam, the foam floats on the liquid surface, which is equivalent to a layer of heat transfer medium, This layer of medium is hindered by the convective heat transfer between water and air, thereby affecting the amount of hot water added. ,ContentTitile .............................................................................................. 错误!未定义书签。
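The coast-then-top-up strategy described in this summary can be illustrated with a lumped (well-mixed) energy balance: Newton cooling while the tap is closed, plus a hot-water inflow term once the temperature drops below the comfort lower bound. This is a minimal sketch only, not the paper's PDE model; every parameter value below (cooling constant, trickle rate, temperatures, volume) is a made-up assumption for illustration.

```python
# Minimal lumped-tank sketch of the bath strategy (illustrative values only):
#   dT/dt = -k (T - T_air)                       while coasting
#   dT/dt = -k (T - T_air) + (q/V)(T_hot - T)    while the tap trickles
def simulate_bath(T0=40.0, T_air=25.0, T_hot=60.0, T_low=37.0,
                  V=150.0, k=0.002, trickle=0.5, dt=1.0, steps=3600):
    """Return the water-temperature trace over `steps` Euler time steps."""
    T = T0
    trace = [T]
    for _ in range(steps):
        dT = -k * (T - T_air)                  # Newton cooling toward room air
        if T < T_low:                          # open the tap below the comfort bound
            dT += (trickle / V) * (T_hot - T)  # well-mixed hot-water inflow
        T += dT * dt
        trace.append(T)
    return trace

trace = simulate_bath()
```

With these made-up constants, the water coasts down from the initial 40 °C and then oscillates tightly around the 37 °C lower bound, which is the qualitative behaviour the strategy above aims for.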

Outstanding Papers from the 2016 MCM, Problems A and B

2016 MCM Problem A: A Hot Bath

A person fills a bathtub with hot water from a single faucet and settles into the bathtub to cleanse and relax. Unfortunately, the bathtub is not a spa-style tub with a secondary heating system and circulating jets, but rather a simple water containment vessel. After a while, the bath gets noticeably cooler, so the person adds a constant trickle of hot water from the faucet to reheat the bathing water. The bathtub is designed in such a way that when the tub reaches its capacity, excess water escapes through an overflow drain.

Develop a model of the temperature of the bathtub water in space and time to determine the best strategy the person in the bathtub can adopt to keep the temperature even throughout the bathtub and as close as possible to the initial temperature, without wasting too much water.

Use your model to determine the extent to which your strategy depends upon the shape and volume of the tub, the shape, volume, and temperature of the person in the bathtub, and the motions made by the person in the bathtub. If the person used a bubble-bath additive while initially filling the bathtub to assist in cleansing, how would this affect your model's results?

In addition to the required one-page MCM summary sheet, your report must include a one-page non-technical explanation for users of the bathtub that describes your strategy, while explaining why it is so difficult to keep the bathwater at an even temperature.

2016 MCM Problem B: Space Junk

The amount of small debris in Earth orbit has been of growing concern. It is estimated that more than 500,000 pieces of space debris, also called orbital debris, are currently being tracked as potential hazards to spacecraft. The issue received wider discussion in the news media after the Russian satellite Kosmos-2251 and the American satellite Iridium-33 collided on 10 February 2009.

A number of methods to remove the debris have been proposed. They include the use of small, space-based water jets and high-energy lasers to target specific pieces of debris, and the design of large satellites to sweep up the debris. The debris ranges in size and mass from flakes of paint to abandoned satellites, and its high-velocity orbital motion makes capture difficult.

Develop a time-dependent model to determine the best alternative, or combination of alternatives, that a private firm could adopt as a commercial opportunity to address the space debris problem. Your model should include quantitative and/or qualitative estimates of costs, risks, and benefits, and should also take other important factors into consideration. Your model should be able to assess a single alternative as well as combinations of alternatives, and should be able to explore a variety of important "What if?" scenarios.


Team Control Number: 46639 — Problem Chosen: C
2016 MCM/ICM Summary Sheet

An Optimal Investment Strategy Model

Summary

We develop an optimal investment strategy model that appears to hold promise for providing insight not only into how to sort schools by investment priority, but also into identifying the optimal investment amount for a specific school. The model considers a large number of parameters thought to be important to investment in the given College Scorecard Data Set.

To develop the required model, two sub-models are constructed:

1. Analytic Hierarchy Process (AHP) Model. We identify the prioritized candidate list of schools by synthesizing the factors that influence investment. First we define the relative importance of any two factors' effect on investment; from these comparisons, the weight of each factor's influence can be identified. Finally, we combine the relevant parameters with the calculated weights to obtain each school's recommended investment value.

2. Return On Investment (ROI) Model, built on top of the AHP model. Suppose that all the investment is used to help students pay tuition and fees. Optimal investment then means helping more students attend the universities with the higher rates of return. Because of dropout, however, each university has an optimal investment amount, so the problem becomes a nonlinear programming problem: we identify the optimal investment amount for each school by maximizing return on investment.

Specific attention is given to the stability and error analysis of our model, and we discuss how the model behaves when several fundamental parameters vary.
We use our model to prioritize the schools and to identify the investment amount for each candidate school, which yields an optimal investment strategy. To demonstrate how the model works, we apply it to the given College Scorecard Data Set and propose an optimal solution for various situations. We also analyze the strengths and weaknesses of our model, which we believe could be made more precise if more information were provided.

Contents

1. Introduction
   1.1 Restatement of the Problem
   1.2 Our Approach
2. Assumptions
3. Notations
4. The Optimal Investment Model
   4.1 Analytic Hierarchy Process Model
       4.1.1 Constructing the Hierarchy
       4.1.2 Constructing the Judgement Matrix
       4.1.3 Hierarchical Ranking
   4.2 Return On Investment Model
       4.2.1 Overview of the Investment Strategy
       4.2.2 Analysis of Net Income and Investment Cost
       4.2.3 Calculating Return On Investment
       4.2.4 Maximizing the Total Net Income
5. Testing the Model
   5.1 Error Analysis
   5.2 Stability Analysis
6. Results
   6.1 Results of the Analytic Hierarchy Process
   6.2 Results of the Return On Investment Model
7. Strengths and Weaknesses
   7.1 Strengths
   7.2 Weaknesses
References
Appendix A: Letter to the Chief Financial Officer, Mr. Alpha Chiang

1. Introduction

1.1 Restatement of the Problem

In order to help improve the educational performance of undergraduates attending colleges and universities in the US, the Goodgrant Foundation intends to donate a total of $100,000,000 to an appropriate group of schools per year, for five years, starting in July 2016. We are to develop a model that determines an optimal investment strategy, identifying the schools, the investment amount per school, the return on that investment, and the time duration over which the organization's money should be provided, so as to have the highest likelihood of producing a strong positive effect on student performance.
Considering that the foundation does not want to duplicate the investments and focus of other large grant organizations, we interpret optimal investment as a strategy that maximizes ROI on the premise that we help more students attend better colleges. The problems to be solved are therefore:

1. How to prioritize the schools by optimization level.
2. How to measure the ROI of a school.
3. How to determine the investment amount for a specific school.

1.2 Our Approach

We offer an optimal investment model that takes a great many factors in the College Scorecard Data Set into account. To begin with, we produce a 1-to-N optimized, prioritized candidate list of schools recommended for investment using the AHP model; so that we send more students to better schools, several factors are considered in the AHP model, such as SAT score and ACT score. We then set the investment amount of each university, in the order of the list, according to the criterion of maximized ROI. The implementation details of the model are described in Section 4.2.

2. Assumptions

We make the following basic assumptions in order to simplify the problem; each of them is justified.

1. The investment is mainly used for tuition and fees. Since the income of an undergraduate is usually much higher than that of a high-school graduate, we believe it is necessary to help more poor students get the chance to go to college.
2. Bank rates will not change during the investment period. The variation of the bank rates has only a small influence on the income we consider, so we make this assumption just to simplify the model.
3. The employment rates and dropout rates will not change, although they differ between schools.
4. For return on investment, we consider only monetary income, not intangible income.

3. Notations

We use a list of symbols to simplify the exposition.

4. The Optimal Investment Model

In this section, we first prioritize schools with the AHP model (Section 4.1) and then calculate the ROI value of the schools (Section 4.2). Ultimately, we identify the investment amount of every candidate school according to ROI (Section 4.2.4).

4.1 Analytic Hierarchy Process Model

In order to prioritize schools, we must consider every necessary factor in the College Scorecard Data Set. For each factor we calculate a weight, from which the investment necessity of each school can be identified. The model is developed in three steps, as follows.

4.1.1 Constructing the Hierarchy

We consider 19 elements to measure the priority of candidate schools, as shown in Fig. 1. The hierarchy is diagrammed as follows:

Fig. 1: AHP hierarchy for the investment decision

The goal is red, the criteria are green, and the alternatives are blue. All the alternatives are shown below the lowest level of each criterion; later in the process, each alternative will be rated with respect to the criterion directly above it.

In building the hierarchy, we should investigate the values or measurements of the different elements that make it up. If there are published fiscal policies or school policies, for example, they should be gathered as part of the process; this information will be needed later, when the criteria and alternatives are evaluated.

Note that the structure of the investment hierarchy might be different for other foundations.
It would certainly be different for a foundation that does not care about scores, knows its students will never drop out, and is intensely interested in mathematics, history, and the numerous other aspects of study [1].

4.1.2 Constructing the Judgement Matrix

The hierarchy reflects the relationships among the elements to consider, but the elements in the criteria layer do not all carry equal weight in measuring the goal: in the decision makers' minds, each element accounts for a particular proportion. To incorporate their judgments about the various elements in the hierarchy, the decision makers compare the elements two by two. The fundamental scale for pairwise comparison is shown in Fig. 2.

Fig. 2: The fundamental scale for pairwise comparisons

Right now, let's see which items are compared. Our example begins with the six criteria in the second row of the hierarchy in Fig. 1, though we could begin elsewhere if we wanted. The criteria are compared, pair by pair, as to how important they are to the decision makers with respect to the goal.

Fig. 3: Investment judgement matrix

In the next row, there is a group of 19 alternatives under each criterion. Within each subgroup, every pair of alternatives is compared regarding their importance with respect to the covering criterion; as always, their importance is judged by the decision makers.

Things change a bit when we get to the alternatives row. Here, the factors in each group of alternatives are compared pair by pair with respect to the covering criterion of the group, which is the node directly above them in the hierarchy. What we are doing here is evaluating the alternatives under consideration with respect to score, then with respect to income, expenditure, dropout rate, debt, and graduation rate.

The foundation can evaluate alternatives against their covering criteria in any order it chooses.
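A judgement matrix like those above can be converted into a priority vector even without specialized software. The sketch below uses the common geometric-mean (row) approximation of the principal eigenvector and checks Saaty's consistency ratio; it is a minimal illustration, and the 3×3 matrix is made up rather than one of the paper's 19-alternative matrices.

```python
import math

def ahp_priorities(A):
    """Priority vector and consistency ratio for a pairwise matrix A
    on Saaty's 1-9 scale (geometric-mean approximation)."""
    n = len(A)
    gm = [math.prod(row) ** (1.0 / n) for row in A]   # geometric mean of each row
    total = sum(gm)
    w = [g / total for g in gm]                       # normalised priorities
    # approximate the principal eigenvalue lambda_max via (A w)_i / w_i
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Aw[i] / w[i] for i in range(n)) / n
    CI = (lam - n) / (n - 1)                          # consistency index
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]
    CR = CI / RI if RI else 0.0                       # consistency ratio
    return w, CR

A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w, CR = ahp_priorities(A)
```

A CR below the conventional 0.1 threshold indicates the judgments are acceptably consistent, which is the check the AHP software mentioned below performs.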
In this case, the foundation chose the order of decreasing priority of the covering criteria.

Fig. 4: Score judgement matrix
Fig. 5: Expenditure judgement matrix
Fig. 6: Income judgement matrix
Fig. 7: Dropout judgement matrix
Fig. 8: Debt judgement matrix
Fig. 9: Graduation judgement matrix

4.1.3 Hierarchical Ranking

When the pairwise comparisons are as numerous as those in our example, specialized AHP software can help in making them quickly and efficiently. We assume that the foundation has access to such software, and that it allows the opinions of various stakeholders to be combined into an overall opinion for the group.

The AHP software uses mathematical calculations to convert these judgments into priorities for each of the six criteria. The details of the calculations are beyond the scope of this article but are readily available elsewhere [2][3][4][5]. The software also calculates a consistency ratio that expresses the internal consistency of the judgments that have been entered. In this case the judgments showed acceptable consistency, and the software used the foundation's inputs to assign new priorities to the criteria:

Fig. 10: AHP hierarchy for the foundation's investment decision, with priorities

In the end, the AHP software arranges and totals the global priorities for each of the alternatives. Their grand total is 1.000, which is identical to the priority of the goal. Each alternative has a global priority corresponding to its "fit" to all the foundation's judgments about all aspects of the factors. Here is a summary of the global priorities of the alternatives:

Fig. 11: Global priorities of the alternatives

4.2 Return On Investment Model

4.2.1 Overview of the Investment Strategy

Consider a foundation investing in a set of N geographically dispersed colleges and universities in the United States, D = {1, 2, 3, ..., N}. We select the top N schools from the candidate list sorted by the analytic hierarchy process. The total investment amount is M per year, donated by the Goodgrant Foundation.
The investment amount is m_j for each school j ∈ D, satisfying the balance constraint

    \sum_{j \in D} m_j = M    (1)

We cannot invest too much or too little money in any one school, because we want to help more students go to college and the students should have more choices. The investment amount for each school therefore has a lower limit lu and an upper limit bu:

    lu \le m_j \le bu    (2)

The tuition and fees are p_j, and the time duration is t_j ∈ {1, 2, 3, 4}. To simplify the model, we assume that our investment is used only for freshmen each year, because a freshman-oriented investment brings more benefit than the alternatives. For each school j ∈ D, the number of undergraduate students to be funded is n_j, calculated as

    n_j = \frac{m_j}{p_j \times t_j}, \quad j \in D    (3)

Fig. 12: Overview of the investment model

The foundation can use the ROI model to identify m_j and t_j so as to maximize the total net income. Fig. 12 gives an overview of our investment model; we now work out its principle and solution with a nonlinear programming method.

4.2.2 Analysis of Net Income and Investment Cost

In our return-on-investment model, we first analyze net income and investment cost. Obviously, the future earnings of undergraduates are not due to the investment alone: there are other meaningful factors, such as their own effort, money from their parents, and training from their employers. To simplify the model, we assume the investment cost is the dominant element and do not consider the other possible influences. The total cost of the investment is then m_j for each school j ∈ D.

Fig. 13: Composition of income and cost

For a single student, the benefit of the investment is his expected earnings in the future. Suppose first that the student does not go to college after graduating from high school and goes directly to work; his wage base is then b_0, that of a high-school graduate. If he works as a college graduate, his wage base is a_0. Let T denote the future earning lifetime and r the bank rate; we assume the bank rate does not change during the investment period and use the 2016 bank rate for r. The future earning lifetime of a single undergraduate differs with age, physical condition, environment, and so on; considering these differences would complicate the calculation, so for simplicity we set T to 20 years uniformly. Two standard economic formulas then give the total expected income over the working years for graduates and for high-school graduates:

    u = \sum_{k=4}^{T+4} \frac{a_0}{(1+r)^k}    (4)

    h = \sum_{k=4}^{T+4} \frac{b_0}{(1+r)^k}    (5)

where u is the total expected income of a college graduate and h that of a high-school graduate.

We next analyze the net income, which is

    NetIncome = TotalIncome - Cost    (6)

For each school j ∈ D, the net income is P_j, the total income is Q_j, and the cost is m_j, so Eq. (6) becomes

    P_j = Q_j - m_j    (7)

The key question is how to calculate Q_j. To do so, we estimate the number of future employed graduates ne_j. The total number of funded students is n_j, calculated above. Taking the dropout rate α_j and the employment rate β_j of each school j into account,

    ne_j = n_j \times \beta_j \times (1-\alpha_j)^{4-t_j}    (8)

so that

    Q_j = ne_j \times (u - h)    (9)

Finally, substituting Eqs. (3), (4), (5), and (8) into Eq. (7), we obtain

    P_j = \frac{m_j \beta_j}{p_j t_j}\,(1-\alpha_j)^{4-t_j}\left[\sum_{k=4}^{T+4}\frac{a_0}{(1+r)^k} - \sum_{k=4}^{T+4}\frac{b_0}{(1+r)^k}\right] - m_j    (10)

We reformulate this equation for concise presentation:

    P_j = \frac{c\,\lambda_j\,m_j}{t_j}\,(1-\alpha_j)^{4-t_j} - m_j    (11)

where \lambda_j = \beta_j / p_j and c = \sum_{k=4}^{T+4} a_0/(1+r)^k - \sum_{k=4}^{T+4} b_0/(1+r)^k.

4.2.3 Calculating Return On Investment

ROI, short for return on investment, is determined by net income and investment cost [7] and conveys a financial assessment. For each school j ∈ D, the net income is P_j and the investment cost is m_j, so

    ROI_j = \frac{P_j}{m_j} \times 100\%    (12)

Substituting Eq. (11) into Eq. (12) gives

    ROI_j = \left(\frac{c\,\lambda_j}{t_j}\,(1-\alpha_j)^{4-t_j} - 1\right) \times 100\%    (13)

4.2.4 Maximizing the Total Net Income

Given the net income of each school, we formulate the portfolio problem that maximizes the total net income:

    S = \max \sum_{j \in D} P_j = \max \sum_{j \in D}\left(\frac{c\,\lambda_j\,m_j}{t_j}\,(1-\alpha_j)^{4-t_j} - m_j\right)    (14)

    s.t. \sum_{j \in D} m_j = M, \quad t_j \in \{1,2,3,4\}, \quad lu \le m_j \le bu

Because the constraint \sum_{j \in D} m_j = M fixes the total cost, the model can be simplified further: S is equivalent to

    S' = \max \sum_{j \in D} \frac{c\,\lambda_j\,m_j}{t_j}\,(1-\alpha_j)^{4-t_j}    (15)

subject to the same constraints. Solving the nonlinear programming problem S' gives the same answer as problem S.

5. Testing the Model

5.1 Error Analysis

Since the advent of the analytic hierarchy process, it has attracted attention for its applicability, convenience, practicability, and systematization. Yet AHP has not reached an ideal state, in theory or in application, because its results depend largely on preference and subjective judgment. In this part, we analyze the human-error problem in AHP. Human error is mainly caused by human factors and is reflected chiefly in the structure of the judgment matrix. The main causes of this error are:

1.
The number of times a human must judge the factors' relative importance is excessive.
2. The calibration method is not perfect.

We then give some methods to reduce the errors:

1. Reduce the number of human judgments. One person can repeatedly give the same judgment between two factors, or many people can each give the judgment between two factors once; we then take the average as the result.
2. Break away from the original calibration method. Once we have defined the ranking vector a_1 = (a_11, a_12, ..., a_1n) between factor A_1 and the others, all the other ranking vectors follow. For example, under consistency, a_2 = (a_11/a_12, 1, ..., a_1n/a_12).

5.2 Stability Analysis

Because of the strong subjective factors, it is necessary to analyze the stability of the ranking result [6]. If the ranking result changes only a little while the judgments change a lot, we can conclude that the method is effective and the result is acceptable, and vice versa. Suppose the weight of one factor changes from \xi_i to \eta_i; the weights of the other factors then become [8]

    \eta_j = \frac{(1-\eta_i)\,\xi_j}{1-\xi_i}, \quad j = 1, 2, \ldots, n,\ j \ne i    (16)

It is simple to verify that

    \sum_{i=1}^{n} \eta_i = 1    (17)

and the new ranking vector \omega is

    \omega = A \times \eta    (18)

By this method, the relative importance among the other factors remains the same while one factor changes.

6. Results

6.1 Results of the Analytic Hierarchy Process

Ranking the colleges through the analytic hierarchy process, we obtain the top N = 20 schools as follows.

6.2 Results of the Return On Investment Model

Based on the results above, we use the ROI model to distribute the investment amount m_j and time duration t_j for each school j ∈ D by solving problem S' above. To solve it, we collected data from different sources, and in the end we solved the model with LINGO software.
The program code is as follows:

    model:
    sets:
    roi/1..20/: a, b, p, m, t;
    endsets
    data:
    a = 0.9642 0.9250 0.9484 0.9422 0.9402 0.9498 0.9049
        0.9263 0.9769 0.9553 0.9351 0.9123 0.9410 0.9861
        0.9790 0.9640 0.8644 0.9598 0.9659 0.9720;
    b = 0.8024 0.7339 0.8737 0.8308 0.8681 0.7998 0.7492
        0.6050 0.8342 0.8217 0.8940 0.8873 0.8495 0.8752
        0.8333 0.8604 0.8176 0.8916 0.7527 0.8659;
    p = 3.3484 3.7971 3.3070 3.3386 3.3371 3.4956 3.2220
        4.0306 2.8544 3.1503 3.2986 3.3087 3.3419 2.7845
        2.9597 2.9271 3.3742 2.7801 2.5667 2.8058;
    c = 49.5528;
    enddata
    max = @sum(roi(I): m(I)/t(I)/p(I) * ((1-b(I))^4) * c * (1-a(I)+0.05)^(4-t(I)));
    @for(roi: @gin(t));
    @for(roi(I): @bnd(1, t(I), 4));
    @for(roi(I): @bnd(0, m(I), 100));
    @sum(roi(I): m(I)) = 1000;
    END

Finally, we obtain the investment amount and time duration distribution as follows.

7. Strengths and Weaknesses

7.1 Strengths

1. Our investment strategy is distinct and clear, and it is convenient to implement.
2. Our model not only identifies the investment amount for each school, but also identifies the time duration over which the organization's money should be provided.
3. Data processing is convenient, because most of the data we use are constants, averages, or medians.
4. Data sources are reliable: our investment strategy is based on a meaningful and defendable subset of two data sets.
5. AHP is simple, effective, and universal.

7.2 Weaknesses

1. Fixing the bank rates during the investment period may prove wrong, but this has only a marginal influence.
2. For return on investment, we consider only monetary income, not intangible income. Quantifying intangible income is important but difficult: it requires much complicated technical analysis and too many variables. Since the investment persists for only a short time, this kind of random error is acceptable.
3. Because our investment is freshman-oriented, other students may feel it is unfair, which could provoke adverse reactions to our strategy.
4. The cost estimation is not impeccable: we consider only the monetary investment amount and ignore non-monetary investment.
5. AHP places high demands on the quality of the personnel involved.

References

[1] Saaty, Thomas L. (2008). Decision Making for Leaders: The Analytic Hierarchy Process for Decisions in a Complex World. Pittsburgh, Pennsylvania: RWS Publications. ISBN 0-9620317-8-X.
[2] Bhushan, Navneet; Rai, Kanwal (January 2004). Strategic Decision Making: Applying the Analytic Hierarchy Process. London: Springer-Verlag. ISBN 1-8523375-6-7.
[3] Saaty, Thomas L. (2001). Fundamentals of Decision Making and Priority Theory. Pittsburgh, Pennsylvania: RWS Publications. ISBN 0-9620317-6-3.
[4] Trick, Michael A. (1996-11-23). "Analytic Hierarchy Process". Class notes, Carnegie Mellon University Tepper School of Business. Retrieved 2008-03-02.
[5] Meixner, Oliver; Haas, Reiner (2002). Computergestützte Entscheidungsfindung: Expert Choice und AHP – innovative Werkzeuge zur Lösung komplexer Probleme (in German). Frankfurt/Wien: Redline Wirtschaft bei Ueberreuter. ISBN 3-8323-0909-8.
[6] Hazelkorn, E. (2007). "The Impact of League Tables and Ranking Systems on Higher Education Decision Making." Higher Education Management and Policy, 19(2), 87-110.
[7] Leslie (2002). Trainer Assessment: A Guide to Measuring the Performance of Trainers and Facilitators, Second Edition. Gower Publishing Limited.
[8] Aguarón, J.; Moreno-Jiménez, J. M. (2000). "Local stability intervals in the analytic hierarchy process." European Journal of Operational Research.

Appendix A: Letter to the Chief Financial Officer, Mr. Alpha Chiang

February 1st, 2016

I am writing this letter to introduce our optimal investment strategy. Before describing our model, I want to discuss our proposed concept of return on investment (ROI); I will then describe the optimal investment model, which is built from two sub-models, the AHP model and the ROI model.
Finally, I will present the major results of the model simulation.

Considering that the Goodgrant Foundation aims to help improve the educational performance of undergraduates attending colleges and universities in the US, we interpret return on investment as the increased income of undergraduates. Because the income of an undergraduate is generally much higher than that of a high-school graduate, we suggest that all the investment be used to pay tuition and fees. In that case, taking both undergraduates' income and the dropout rate into account, we obtain the return-on-investment value.

Our model begins by producing an optimized, prioritized candidate list of the schools we recommend for investment. This sorted list is constructed from specifications you would be fully qualified to provide, such as each school's score, graduates' income, and dropout rate. With this information, a precise list of schools is produced for donation selection.

Furthermore, we developed the second sub-model, the ROI model, which identifies the investment amount for each school per year. If we invest more money in a school, more students will have the chance to go to college; however, because of dropout, each school has an optimal investment amount. We therefore identify every candidate school's investment amount by solving a nonlinear programming problem. Ultimately, the model simulation shows that Washington University, New York University, and Boston College are the three schools most worth investing in; detailed simulation results can be found in our MCM contest article.

We hope this model is sufficient to meet your needs in any further donations and future philanthropic educational investments within the United States.
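As a numerical companion to Section 4.2 of the paper above, the income arithmetic of Eqs. (3)-(12) can be sketched in a few lines. This is a hedged illustration only: every input value (wage bases a_0 and b_0, rate r, tuition p, dropout rate α, employment rate β) is made up rather than taken from the College Scorecard Data Set, and the summation limits follow one plausible reading of Eqs. (4)-(5), with earnings starting after the four college years.

```python
def present_value(base, r, T=20):
    """Eqs. (4)-(5), as reconstructed: discounted wage stream over the
    T working years that start after the 4 college years."""
    return sum(base / (1 + r) ** k for k in range(4, T + 5))

def roi(m, p, t, alpha, beta, a0, b0, r, T=20):
    """Eq. (12) as a fraction, for one school with illustrative inputs."""
    n = m / (p * t)                          # Eq. (3): students funded
    ne = n * beta * (1 - alpha) ** (4 - t)   # Eq. (8): employed graduates
    c = present_value(a0, r, T) - present_value(b0, r, T)   # c = u - h
    P = ne * c - m                           # Eqs. (7) and (9): net income
    return P / m

r_val = roi(m=1.0, p=3.3, t=4, alpha=0.05, beta=0.9, a0=4.0, b0=2.0, r=0.03)
```

Because the objective of Eq. (15) is linear in each m_j, it is the dropout term (1-α)^(4-t) and the bounds of Eq. (2) that make the allocation non-trivial, which is why the paper turns to a nonlinear programming solver.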

相关文档
最新文档