MCM Mathematical Modeling Contest Paper Template


Outstanding Papers from the Mathematical Contest in Modeling (MCM/ICM)


For office use only
T1 ________ T2 ________ T3 ________ T4 ________

Team Control Number: 7018
Problem Chosen: C

For office use only
F1 ________ F2 ________ F3 ________ F4 ________

Summary

This article investigates the potential impact of marine plastic debris on the marine ecosystem and on human beings, and how we can deal with the substantial problems caused by the aggregation of marine waste.

In task one, we define the potential long-term and short-term impacts of marine plastic garbage. We regard the toxin-concentration effect caused by marine garbage as the long-term impact, and we track and monitor it. We establish a composite indicator model based on the density of plastic toxins and on the amount of toxin absorbed by plastic fragments in the ocean, expressing the impact of marine garbage on the ecosystem, and we take the sea around Japan as an example to test the model.

In task two, we design an algorithm that uses the plastic density values measured each year at the discrete measuring points given in the references to plot the plastic density of the whole area at various locations. Based on the changes of marine plastic density in different years, we determine that the center of the plastic vortex lies roughly between 140°W–150°W and 30°N–40°N. With this algorithm, a sea area can be monitored reasonably well by regular observation of only part of the specified measuring points.

In task three, we classify the plastic into three types: surface-layer plastic, deep-layer plastic, and the interlayer between the two. We then analyze the degradation mechanism of the plastic in each layer, and finally explain why those plastic fragments converge to a similar size.

In task four, we classify the sources of marine plastic into three types: land-based sources accounting for 80%, fishing gear accounting for 10%, and boating accounting for 10%, and we build an optimization model under the dual-target principle of emissions reduction and management.
Finally, we arrive at a more reasonable optimization strategy.

In task five, we first analyze the mechanism of the formation of the Pacific Ocean trash vortex, and conclude that marine garbage gyres will also emerge in the South Pacific, the South Atlantic, and the Indian Ocean. Using concentration-diffusion theory, we establish a differential prediction model of future marine garbage density and predict the density of the garbage in the South Atlantic, obtaining the stable density at eight measuring points.

In task six, using data on the annual national consumption of polypropylene plastic packaging together with data fitting, we predict the environmental benefit generated by the prohibition of polypropylene take-away food packaging in the next decade. By means of this model and our prediction, each nation would release 1.31 million fewer tons of plastic garbage in the next decade.

Finally, we submit a report to the expedition leader, summarize our work, and make some feasible suggestions to policy makers.

Task 1:

Definitions:
● Potential short-term effects of the plastic: harmful effects that appear in the short term.
● Potential long-term effects of the plastic: effects whose hazards are great but which only appear after a long time.

The short- and long-term effects of the plastic on the ocean environment:

In our definition, the short-term and long-term effects of the plastic on the ocean environment are as follows.

Short-term effects:
1) The plastic is eaten by marine animals or birds.
2) Animals are entangled in plastics, such as fishing nets, which hurt or even kill them.
3) Plastic debris obstructs the passage of vessels.

Long-term effects:
1) Enrichment of toxins through the food chain: the waste plastic in the ocean does not degrade naturally in the short term; it is first broken down into tiny fragments by the action of light, waves, and micro-organisms, while its molecular structure remains unchanged.
These "plastic sands", easily eaten by plankton, fish, and other organisms because they closely resemble marine life's food, cause the enrichment and transfer of toxins.
2) Acceleration of the greenhouse effect: after long-term accumulation and pollution by plastics, the water becomes turbid, which seriously hampers photosynthesis by marine plants (such as phytoplankton and algae). The death of large numbers of plankton would also lower the ability of the ocean to absorb carbon dioxide, intensifying the greenhouse effect to some extent.

To monitor the impact of plastic rubbish on the marine ecosystem:

According to the relevant literature, plastic resin pellets accumulate toxic chemicals such as PCBs, DDE, and nonylphenols, and may serve as a transport medium and source of toxins to marine organisms that ingest them [2]. Since it is difficult for plastic garbage in the ocean to degrade completely in the short term, the plastic resin pellets in the water will increase over time and thus absorb more toxins, resulting in the enrichment of toxins and a serious impact on the marine ecosystem. Therefore, we track and monitor the concentrations of PCBs, DDE, and nonylphenols contained in the plastic resin pellets in seawater as an indicator to compare the extent of pollution in different regions of the sea, thus reflecting the impact of plastic rubbish on the ecosystem.

To establish a pollution index evaluation model: for purposes of comparison, we unify the concentration indexes of PCBs, DDE, and nonylphenols into one comprehensive index.

Preparations:
1) Data standardization
2) Determination of the index weights

Because Japan has done research on the contents of PCBs, DDE, and nonylphenols in plastic resin pellets, we illustrate with the survey conducted in Japanese waters by the University of Tokyo between 1997 and 1998, and standardize the concentration indexes of PCBs, DDE, and nonylphenols.
We take Kasai Seaside Park, Keihin Canal, Kugenuma Beach, and Shioda Beach in the survey as the first, second, third, and fourth regions; PCBs, DDE, and nonylphenols are the first, second, and third indicators. We then establish the standardized model:

V'_ij = (V_ij − V_j^min) / (V_j^max − V_j^min),  i = 1, 2, 3, 4; j = 1, 2, 3

where V_j^max is the maximum of the measurements of indicator j in the four regions, V_j^min is the minimum, and V'_ij is the standardized value of indicator j in region i.

According to the literature [2], the Japanese observational data are shown in Table 1 (PCBs, DDE, and nonylphenols contents in marine polypropylene). Applying the standardized model, we obtain Table 2. In Table 2, the three indicators of the Shioda Beach area are all 0, because the contents of PCBs, DDE, and nonylphenols in the polypropylene plastic resin pellets in this area are the smallest; 0 only relatively represents the smallest value. Similarly, 1 indicates that the value of an indicator in some area is the largest.

To determine the index weights of PCBs, DDE, and nonylphenols, we use the Analytic Hierarchy Process (AHP). AHP is an effective method that transforms semi-qualitative and semi-quantitative problems into quantitative calculation. It combines analysis and synthesis in decision-making and is well suited for multi-index comprehensive evaluation. The hierarchy is shown in Fig. 1 (hierarchy of index factors).

We then determine the weight of each concentration indicator in the general pollution indicator as follows. To analyze the role of each concentration indicator, we establish a pairwise comparison matrix P:

P = [ 1    P12  P13
      P21  1    P23
      P31  P32  1  ]

where P_mn represents the relative importance of the concentration indicators B_m and B_n. Usually we use 1, 2, …, 9 and their reciprocals to represent different degrees of importance.
The greater the number, the more important the indicator. Correspondingly, the relative importance of B_n to B_m is 1/P_mn (m, n = 1, 2, 3).

Suppose the maximum eigenvalue of P is λ_max; then the consistency index is

CI = (λ_max − n) / (n − 1)

With RI denoting the average random consistency index, the consistency ratio is

CR = CI / RI

For a matrix P with n ≥ 3, if CR < 0.1 the consistency is considered acceptable, and the eigenvector can be used as the weight vector.

According to the harmful levels of PCBs, DDE, and nonylphenols and the requirements of the EPA on the maximum concentrations of the three toxins in seawater, we obtain the comparison matrix

P = [ 1    3    4
      1/3  1    6
      1/4  1/6  1 ]

By MATLAB calculation, the maximum eigenvalue of P is λ_max = 3.0012, and the corresponding eigenvector is

W = (0.9243, 0.2975, 0.2393)

CR = CI / RI = 0.047 / 1.12 = 0.042 < 0.1

Therefore, the degree of inconsistency of matrix P is within the permissible range. Taking the eigenvector of P as the weight vector and normalizing, we obtain the final weight vector W' = (0.6326, 0.2036, 0.1638).
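As a concrete sketch of the two preparation steps, the pure-Python code below performs the column-wise min-max standardization and computes AHP weights by power iteration. The comparison-matrix entries follow our reconstruction of the garbled original and should be treated as an assumption; the method itself is standard.

```python
def standardize(table):
    """Min-max standardize each column: (v - min) / (max - min)."""
    cols = list(zip(*table))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) for v, l, h in zip(row, lo, hi)]
            for row in table]

def ahp_weights(P, iters=500):
    """Dominant eigenvalue and normalized weight vector by power iteration."""
    n = len(P)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(P[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    # estimate lambda_max from P w ~ lambda w
    Pw = [sum(P[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Pw[i] / w[i] for i in range(n)) / n
    return lam, w

P = [[1, 3, 4],      # relative importance of PCBs, DDE, nonylphenols
     [1/3, 1, 6],    # (reconstructed entries -- treat as an assumption)
     [1/4, 1/6, 1]]
lam, weights = ahp_weights(P)
CI = (lam - len(P)) / (len(P) - 1)   # consistency index
RI = 0.58                            # Saaty's random index for n = 3 (the paper uses 1.12)
CR = CI / RI
```

With the normalized weights in hand, the overall pollution index of a region is simply the dot product of the weight vector with that region's standardized indicator row.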
Define the overall pollution target of the i-th region as Q_i, the standardized indicator vector of the i-th region as V_i = (V_i1, V_i2, V_i3), and the weight vector as W'. The model for the overall marine pollution assessment target is then

Q_i = W' · V_i^T,  i = 1, 2, 3, 4

With this model we obtain the values of the total pollution index for the four regions of the Japanese coast, shown in Table 3. In Table 3, the region with the highest total pollution index has the highest toxin concentration in its polypropylene plastic resin pellets, whereas the value for Shioda Beach is the lowest (again, 0 is only a relative value, not a claim that the area is free of plastic pollution).

Through this assessment method, we can monitor the concentrations of PCBs, DDE, and nonylphenols in the plastic debris so as to reflect the influence on the ocean ecosystem: the higher the toxin concentration, the greater the impact on marine organisms, and the more dramatic the enrichment along the food chain.

Above all, the variation of toxin concentrations simultaneously reflects the spatial distribution and temporal variation of marine litter. By regularly monitoring the content of these substances, we can predict the future development of marine litter, providing data for sea expeditions detecting marine litter and a reference for government departments making ocean-governance policies.

Task 2:

In the North Pacific, the clockwise current forms a never-ending maelstrom that rotates the plastic garbage.
Over the years, the subtropical gyre in the North Pacific has gathered together the garbage from the coasts and from fleets, entrapped it in the whirlpool, and carried it toward the center under centripetal action, forming an area of 3.43 million square kilometers (more than one third of Europe). As time goes by, the garbage in the whirlpool tends to increase year by year in breadth, density, and distribution.

In order to describe this variability over time and space clearly, we analyze the data in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre, 1999–2008": we exclude the points with great dispersion and retain those with concentrated distribution. The longitude of each sampled garbage location serves as the x-coordinate of a three-dimensional coordinate system, the latitude as the y-coordinate, and the plastic count per cubic meter of water at that position as the z-coordinate. We then establish an irregular grid in the xy-plane from the obtained data and draw grid lines through all the data points. Using an inverse-distance-squared method with a trend factor, which can not only estimate the plastic count per cubic meter of water at any position but also capture the trend of the counts between two original data points, we approximate the values at the unknown grid points.
When the data at all the irregular grid points are known (or approximately known, or obtained from the original data), we can draw the three-dimensional image with MATLAB, which fully reflects the variability of the garbage density over time and space.

Preparations:

First, determine the coordinates of each year's sampled garbage. The distribution range of the garbage is roughly 120°W–170°W and 18°N–41°N, as shown in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre, 1999–2008"; we divide the square in the picture into 100 grids, as in Figure 1. According to the grid cell containing each measuring point's center, we identify the latitude and longitude of each point, which respectively serve as the x- and y-coordinates of the three-dimensional coordinate system.

Second, determine the plastic count per cubic meter of water. Since the counts provided by the reference are given as five density intervals, to identify exact values of the garbage density at each year's measuring points we assume that the density is a random variable uniformly distributed on each interval:

f(x) = 1/(b − a) for x ∈ (a, b);  f(x) = 0 otherwise.

We use the uniform random number generator in MATLAB to produce continuous uniformly distributed random numbers in each interval, which approximately serve as the exact values of the garbage density and as the z-coordinates of that year's measuring points.

Assumptions:
(1) The data we obtained are accurate and reasonable.
(2) The plastic count per cubic meter of water varies continuously over the ocean area.
(3) The density of the plastic in the gyre varies by region. The density at a point and in its surrounding area are interdependent, but this dependence decreases with increasing distance.
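The interval-sampling step can be sketched as follows. The five density bins here are illustrative placeholders, since the reference's actual interval bounds are not reproduced in the text.

```python
import random

def sample_density(interval, rng=random):
    """Draw one Uniform(a, b) value as a point's approximate exact density."""
    a, b = interval
    return rng.uniform(a, b)

# illustrative density bins (plastic count per cubic meter of water),
# NOT the reference's actual five intervals
bins = [(0.0, 0.1), (0.1, 0.5), (0.5, 1.0), (1.0, 5.0), (5.0, 10.0)]
densities = [sample_density(b) for b in bins]
```

Each sampled value then serves as the z-coordinate of the corresponding measuring point for that year.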
For our problem, each known data point influences every unknown point around it, and each unknown point is influenced by every known data point; the nearer a known data point is to the unknown point, the larger its role.

Establishing the model:

Following the method described above, we treat the garbage density distributions in "Count Densities of Plastic Debris from Ocean Surface Samples, North Pacific Gyre, 1999–2008" as coordinates (x, y, z), as in Table 1. Through analysis and comparison, we excluded a number of data points with very large dispersion and retained those with a more concentrated distribution, which can be seen in Table 2; this helps us obtain a more accurate density distribution map.

We then partition the plane by arranging the x- and y-coordinates of the n known data points from small to large in the X and Y directions, forming a non-equidistant grid with n nodes. On this grid we know the plastic garbage density only at the n known nodes; therefore, we must estimate the density at the remaining nodes.

Since we only have a sampling survey of garbage density in the North Pacific gyre, it stands to reason that each known data point affects every unknown node to a certain extent, and that nearby known points have a higher impact on the density estimate than distant ones. In this respect, we use a weighted average in which the weights are inversely proportional to the squared distance, expressing the greater influence of closer known points. Suppose there are two known points Q1 and Q2 on a line, i.e., we already know the plastic litter densities at Q1 and Q2, and we want to estimate the plastic litter density at a point G on the segment connecting Q1 and Q2.
This can be expressed by the weighted average

Z_G = (Z_Q1/GQ1² + Z_Q2/GQ2²) / (1/GQ1² + 1/GQ2²)

where GQ denotes the distance between the points G and Q.

A weighted average of only the known points close to the unknown point cannot reflect the trend between the known points, so we assume that the change of plastic garbage density between any two given points affects the density at the unknown point, reflecting a linear trend of density change. We therefore introduce a trend term into the weighted average formula, and because closer points have greater impact, the trend between close points is weighted more strongly. For the one-dimensional case, the formula for Z_G in the previous example is modified into the following form:

Z_G = (Z_Q1/GQ1² + Z_Q2/GQ2² + Z_Q1Q2/Q1Q2²) / (1/GQ1² + 1/GQ2² + 1/Q1Q2²)

where Q1Q2 is the separation distance between the known points, and Z_Q1Q2 is the plastic garbage density at point G given by the linear trend between Q1 and Q2.
For a two-dimensional area, point G is not on the line Q1Q2, so we drop a perpendicular from point G to the line connecting Q1 and Q2, meeting it at point P. The influence of P on Q1 and Q2 is just as in the one-dimensional case, and the closer G is to P the larger the influence (the farther, the smaller), so the weighting factor should also be inversely related to GP in some way. We adopt the following form:

Z_G = (Z_Q1/GQ1² + Z_Q2/GQ2² + Z_Q1Q2/(GP² + Q1Q2²)) / (1/GQ1² + 1/GQ2² + 1/(GP² + Q1Q2²))

Taken together, we postulate the following:
(1) each known data point influences the density of plastic garbage at each unknown point in inverse proportion to the square of the distance;
(2) the change of density of plastic garbage between any two known data points affects every unknown point, and this influence diffuses along the straight line through the two known points;
(3) the influence of the density change between two known data points on a specific unknown point depends on three distances: a. the perpendicular distance from the unknown point to the straight line through the two known points; b. the distance between the nearest known point and the unknown point; c.
the separation distance between the two known data points.

Let Q1, Q2, …, QN be the locations of the known data points and G an unknown node. Let P_ijG be the intersection of the line through Qi and Qj with the perpendicular from G to that line, let Z(Qi, Qj, G) be the density at G given by the linear trend between Qi and Qj, and prescribe Z(Qi, Qi, G) to be the measured density of plastic garbage at Qi. Then the calculation formula is

Z_G = [ Σ_{i=1..N} Σ_{j=i..N} Z(Qi, Qj, G) / (GP_ijG² + QiQj²) ] / [ Σ_{i=1..N} Σ_{j=i..N} 1 / (GP_ijG² + QiQj²) ]

(for i = j the term reduces to the plain inverse-distance weight 1/GQi²).

Plugging each year's observational data from Schedule 1 into our model, we draw the three-dimensional images of the spatial distribution of the marine garbage density with MATLAB (Figure 2: panels for 1999, 2000, 2002, 2005, 2006, and 2007–2008).

It is observed from the analysis that, from 1999 to 2008, the density of plastic garbage increased year by year, and significantly in the region 140°W–150°W, 30°N–40°N. Therefore, we can be confident that this region is probably the center of the marine litter whirlpool. The gathering process should be that the dispersed garbage floating in the ocean moves with the ocean currents and gradually approaches the whirlpool region.
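The pairwise inverse-distance scheme with trend terms can be sketched in code as below. The point names and toy data are illustrative; the paper applies this to the gyre survey coordinates.

```python
# Inverse-distance-squared interpolation with linear trend terms:
# each known point contributes z_i / d^2, and each pair (Qi, Qj)
# contributes its linear-trend value weighted by 1 / (GP^2 + QiQj^2).

def idw_trend(points, g):
    """points: list of ((x, y), z); g: (x, y). Returns estimated z at g."""
    num = den = 0.0
    n = len(points)
    for i in range(n):
        (xi, yi), zi = points[i]
        d2 = (g[0] - xi) ** 2 + (g[1] - yi) ** 2
        if d2 == 0:
            return zi                      # G coincides with a known point
        num += zi / d2                     # i == j term: plain IDW weight
        den += 1.0 / d2
        for j in range(i + 1, n):
            (xj, yj), zj = points[j]
            # project G onto the line Qi-Qj to get the foot point P
            ux, uy = xj - xi, yj - yi
            qq2 = ux * ux + uy * uy        # |QiQj|^2
            t = ((g[0] - xi) * ux + (g[1] - yi) * uy) / qq2
            px, py = xi + t * ux, yi + t * uy
            gp2 = (g[0] - px) ** 2 + (g[1] - py) ** 2
            z_trend = zi + t * (zj - zi)   # linear trend value at P
            w = 1.0 / (gp2 + qq2)
            num += z_trend * w
            den += w
    return num / den

# toy data: three known densities on a unit triangle (illustrative only)
pts = [((0.0, 0.0), 1.0), ((1.0, 0.0), 3.0), ((0.0, 1.0), 2.0)]
z_mid = idw_trend(pts, (0.5, 0.5))
```

Evaluating `idw_trend` at every node of the irregular grid yields the surface that the paper renders as the three-dimensional density maps.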
At the beginning, the area close to the vortex shows an obvious increase in plastic litter density; under this centripetal action the litter keeps moving toward the center of the vortex, and as time accumulates the garbage density at the center grows larger and larger, until at last it becomes the Pacific garbage patch we see today.

It can be seen that with our algorithm, as long as the density can be detected at a number of discrete measuring points in an area, by tracking these density changes we can estimate the density over all of the waters through our model. This will significantly reduce the workload of a marine expedition team monitoring marine pollution, and also save costs.

Task 3:

The degradation mechanism of marine plastics

We know that light, mechanical force, heat, oxygen, water, microbes, chemicals, etc. can result in the degradation of plastics. By mechanism, the factors resulting in degradation can be summarized as optical, biological, and chemical.


The Keep-Right-Except-To-Pass Rule

Summary

The first question provides the traffic rule of keeping right except to pass and requires us to verify its effectiveness. Firstly, we define a traffic rule different from the keep-right rule in order to frame the problem clearly; then, we build a cellular automaton model and a NaSch model based on the data we collected; next, we run numerical simulations over several factors that influence traffic flow. Finally, from analysis of the resulting graphs, we reach the following conclusion: when the vehicle density is lower than 0.15 (light traffic), the lane-speed-control rule is more effective in terms of safety; when the vehicle density is greater than 0.15 (heavy traffic), the keep-right-except-to-pass rule is more effective.

The second question requires us to test whether the conclusion obtained in the first question also applies under a keep-left rule. First of all, we build a stochastic multi-lane traffic model. From the viewpoint of vehicle-flow stress, we propose that the probability of moving to the right is 0.7 (and to the left otherwise), modeled as a Bernoulli process; from the viewpoint of the ping-pong effect, the choice of lane change is random. On the whole, the fundamental driver is the formation of driving habit, so the conclusion remains effective under the keep-left rule.

The third question requires us to demonstrate the effectiveness of the solution advised in the first question under an intelligent vehicle control system. Firstly, taking speed limits into consideration, we build a microscopic traffic simulator model for traffic simulation purposes. Then, we implement a METANET model for state prediction, used by the MPC traffic controller.
Afterwards, we certify that the dynamic speed control measure can improve the traffic flow. Lastly, neglecting the safety factor, combining the keep-right rule with dynamic speed control is the best overall solution to accelerate the traffic flow.

Key words: cellular automaton model, Bernoulli process, microscopic traffic simulator model, MPC traffic control

Content
1. Introduction
2. Analysis of the problem
3. Assumption
4. Symbol Definition
5. Models
5.1 Building of the cellular automaton model
5.1.1 Verify the effectiveness of the keep right except to pass rule
5.1.2 Numerical simulation results and discussion
5.1.3 Conclusion
5.2 The solving of the second question
5.2.1 The building of the stochastic multi-lane traffic model
5.2.2 Conclusion
5.3 Taking an intelligent vehicle system into account
5.3.1 Introduction of the Intelligent Vehicle Highway Systems
5.3.2 Control problem
5.3.3 Results and analysis
5.3.4 The comprehensive analysis of the result
6. Improvement of the model
6.1 Strength and weakness
6.1.1 Strength
6.1.2 Weakness
6.2 Improvement of the model
7. Reference

1. Introduction

As is known to all, driving automobiles is essential to daily life, so driving rules are crucially important. In many countries, such as the USA and China, drivers obey the Keep-Right-Except-To-Pass rule (that is, when driving automobiles, the rule requires drivers to drive in the right-most lane unless they are passing another vehicle).

2. Analysis of the problem

For the first question, we decide to use a cellular automaton to build models, then analyze the performance of this rule in light and heavy traffic.
Firstly, we mainly use the vehicle density to distinguish light and heavy traffic; secondly, we take the traffic flow and safety as the representative variables characterizing light or heavy traffic; thirdly, we build and analyze a cellular automaton model; finally, we judge the rule by comparing the two different driving rules, and then draw conclusions.

3. Assumption

In order to streamline our model we have made several key assumptions:
● The dual-carriageway highway with three lanes in each direction that we study can represent multi-lane freeways.
● The data we refer to are representative and descriptive.
● Operation of the highway is not influenced by blizzards or accidental factors.
● We ignore abnormal driver factors, such as drunk driving and fatigued driving.
● The operation of the highway intelligent system in our analysis can reflect a real intelligent system.
● In the intelligent vehicle system, the sampled data have high accuracy.

4. Symbol Definition

i — the number of the vehicle
t — the time

5. Models

By analyzing the problem, we decided to propose a solution by building a cellular automaton model.

5.1 Building of the cellular automaton model

Thanks to its simple rules and convenience for computer simulation, the cellular automaton model has been widely used in the study of traffic flow in recent years. Let x_i(t) be the position of vehicle i at time t, v_i(t) be the speed of vehicle i at time t, p be the random slowing-down probability, and R be the proportion of trucks and buses. The distance between vehicle i and the vehicle in front at time t is:

gap_i(t) = x_{i−1}(t) − x_i(t) − 1, if the front vehicle is a small vehicle;
gap_i(t) = x_{i−1}(t) − x_i(t) − 3, if the front vehicle is a truck or bus.

5.1.1 Verify the effectiveness of the keep right except to pass rule

In addition to the keep-right-except-to-pass rule, we define a new rule called "control rules based on lane speed". The concrete explanation of the new rule is as follows: there is no special passing lane under this rule.
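For reference, a minimal single-lane NaSch update (acceleration, braking to the gap, random slow-down with probability p, movement on a ring road) can be sketched as below; the paper's multi-lane model builds on this basic step. All parameter values are illustrative, and positions are kept unwrapped for simplicity.

```python
import random

def nasch_step(x, v, L, vmax, p, rng=random):
    """One parallel NaSch update on a ring of length L.
    x: strictly increasing positions, v: speeds; returns new (x, v)."""
    n = len(x)
    new_v = []
    for i in range(n):
        # leader of the last car is the first car, one lap ahead
        lead = x[i + 1] if i < n - 1 else x[0] + L
        gap = lead - x[i] - 1                 # empty cells ahead
        vi = min(v[i] + 1, vmax, gap)         # accelerate, then brake to gap
        if vi > 0 and rng.random() < p:       # random slow-down
            vi -= 1
        new_v.append(vi)
    new_x = [x[i] + new_v[i] for i in range(n)]
    return new_x, new_v

# illustrative run: 10 cars on a 100-cell ring, vmax = 5, slow-down prob 0.3
x = list(range(0, 100, 10))
v = [0] * 10
for _ in range(50):
    x, v = nasch_step(x, v, 100, 5, 0.3)
```

Because the braking step bounds each speed by the gap, vehicles never overtake within a lane; overtaking only happens through the lane-changing rules described in the text.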
The speed of the first lane (the far-left lane) is 120–100 km/h (including 100 km/h); the speed of the second lane (the middle lane) is 100–80 km/h (including 80 km/h); the speed of the third lane (the far-right lane) is below 80 km/h. The lane speeds decrease from left to right.

● Lane-changing rules based on lane speed control

If a vehicle on a high-speed lane satisfies v < v_control, gap_i^f(t) ≥ min(v_i(t) + 1, v_max), and gap_i^b(t) ≥ gap_safe, the vehicle will turn into the adjacent right lane, and the speed of the vehicle after the lane change remains unchanged, where v_control is the minimum speed of the corresponding lane.

● The application of the NaSch model evolution

Let P_d be the lane-changing probability (taking into account the actual situation that some drivers like driving in a certain lane and will not take the initiative to change lanes), gap_i^f(t) the distance between the vehicle and the nearest front vehicle, and gap_i^b(t) the distance between the vehicle and the nearest following vehicle. In this article, we assume that the minimum safe distance gap_safe for lane changing equals the maximum speed of the following vehicle in the adjacent lane.

● Lane-changing rules based on keeping right except to pass

In general, traffic flow going through a passing zone (Fig.
5.1.1) involves three processes: the diverging process (one traffic flow diverging into two flows), the interacting process (interaction between the two flows), and the merging process (the two flows merging into one) [4].

Fig. 5.1.1 Control plan of the overtaking process

(1) If a vehicle on the first lane (passing lane) satisfies gap_i^f(t) ≥ min(v_i(t) + 1, v_max) and gap_i^b(t) ≥ gap_safe, the vehicle will turn into the second lane, and the speed of the vehicle after the lane change remains unchanged.

5.1.2 Numerical simulation results and discussion

In order to facilitate the subsequent discussion, we define the space occupation rate as p = (N_CAR + 3·N_truck) / (3·L), where N_CAR is the number of small vehicles on the driveway, N_truck is the number of trucks and buses on the driveway, and L is the total length of the road. The vehicle flow volume Q is the number of vehicles passing a fixed point per unit time, Q = N_T / T, where N_T is the number of vehicles observed in a time duration T. The average speed is V_a = (1/(N_T × T)) Σ_t Σ_i v_i^t, where v_i^t is the speed of vehicle i at time t. We take the overtaking ratio P_f, the ratio of the total number of overtakings to the number of vehicles observed, as the evaluation indicator of the safety of the traffic flow.

After 20,000 evolution steps, averaging the last 2,000 steps over time, we obtained the following experimental results. In order to eliminate the effect of randomness, we take the systemic average of 20 samples [5].

● Overtaking ratio under different control rule conditions

Because different road control conditions produce different overtaking ratios, we first observe the relationships among vehicle density, proportion of large vehicles, and overtaking ratio under different control conditions.

(a) Based on passing-lane control  (b) Based on speed control

Fig. 5.1.3 Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions.

It can be seen from Fig.
5.1.3:
(1) When the vehicle density is less than 0.05, the overtaking ratio continues to rise as density increases; when the vehicle density is larger than 0.05, the overtaking ratio decreases as density increases; when the density is greater than 0.12, crowding makes it difficult to overtake, so the overtaking ratio is almost 0.
(2) When the proportion of large vehicles is less than 0.5, the overtaking ratio rises as the proportion of large vehicles increases; at about 0.5 the overtaking ratio reaches its peak value; above 0.5 it decreases as the proportion of large vehicles increases, and under the lane-based control condition the decline is very clear.

● Concrete impact of the different control rules on the overtaking ratio

Fig. 5.1.4 Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions. (Figures on the left indicate passing-lane control; figures on the right indicate speed control. P_f1 is the overtaking ratio of small vehicles over large vehicles, P_f2 of small vehicles over small vehicles, P_f3 of large vehicles over small vehicles, and P_f4 of large vehicles over large vehicles.)

It can be seen from Fig. 5.1.4:
(1) The overtaking ratio of small vehicles over large vehicles under passing-lane control is much higher than under the speed control condition. This is because, under passing-lane control, high-speed small vehicles have to pass low-speed large vehicles via the passing lane, while under speed control small vehicles travel on the high-speed lane with no low-speed vehicle in front, so there is no need to overtake.

● Impact of different control rules on vehicle speed

Fig.
5.1.5 Relationships among vehicle density, proportion of large vehicles and average speed under different control conditions. (Figures on the left indicate passing-lane control; figures on the right indicate speed control. X_a is the average speed of all the vehicles, X_a1 of all the small vehicles, and X_a2 of all the buses and trucks.)

It can be seen from Fig. 5.1.5:
(1) The average speed decreases as the vehicle density and the proportion of large vehicles increase.
(2) When the vehicle density is less than 0.15, X_a, X_a1 and X_a2 are almost the same under both control conditions.

● Effect of different control conditions on traffic flow

Fig. 5.1.6 Relationships among vehicle density, proportion of large vehicles and traffic flow under different control conditions. (Figure a1 indicates passing-lane control, figure a2 indicates speed control, and figure b indicates the traffic flow difference between the two conditions.)

It can be seen from Fig. 5.1.6:
(1) When the vehicle density is lower than 0.15 and the proportion of large vehicles is between 0.4 and 1, the traffic flows of the two control conditions are basically the same.
(2) Otherwise, the traffic flow under the passing-lane control condition is slightly larger than that of the speed control condition.

5.1.3 Conclusion

In this paper, we have established a three-lane model under different control conditions, and studied the overtaking ratio, speed, and traffic flow under different control conditions, vehicle densities, and proportions of large vehicles.

5.2 The solving of the second question

5.2.1 The building of the stochastic multi-lane traffic model

5.2.2 Conclusion

On one hand, from the analysis of the model, in the case where the stress is positive we also consider the jam situation while making the decision. More specifically, if a driver is in a jam situation, applying B(2, P(x)) results in a tendency for this driver to move to the right lane.
However, in reality drivers tend to look for an emptier lane in a jam. For this reason, we apply a Bernoulli process B(2, 0.7), where the probability of moving to the right is 0.7 and of moving to the left is 0.3; the same conclusion holds under the rule of keep-left-except-to-pass. So the fundamental reason is the formation of driving habit.
5.3 Taking an intelligent vehicle system into account
For the third question, if vehicle transportation on the same roadway were fully under the control of an intelligent system, we make some improvements to our proposed solution to perfect the performance of the freeway, based on extensive analysis.
5.3.1 Introduction of the Intelligent Vehicle Highway Systems
We use a microscopic traffic simulator for traffic simulation purposes. The MPC traffic controller implemented in Matlab needs a traffic model to predict the states when the speed limits are applied (Fig. 5.3.1). We implement a METANET model for prediction [14].
5.3.2 Control problem
As a constraint, the dynamic speed limits are given maximum and minimum allowed values. The upper bound for the speed limits is 120 km/h, and the lower bound is 40 km/h. For the calculation of the optimal control values, all speed limits are constrained to this range. When the optimal values are found, they are rounded to a multiple of 10 km/h, since this is clearer for human drivers and also technically feasible without large investments.
5.3.3 Results and analysis
When the density is high, it is more difficult to control the traffic, since the mean speed may already be below the control speed. Therefore, simulations are done at densities at which the shock wave can dissolve without control, and at densities where the shock wave remains. For each scenario, five simulations for three different cases are done, each with a duration of one hour.
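The constrain-and-round step described in 5.3.2 can be sketched in a few lines (a minimal illustration, assuming plain clamping to [40, 120] km/h followed by rounding to the nearest multiple of 10; the MPC optimization itself is not reproduced here):

```python
def quantize_limit(v_opt, lo=40, hi=120, step=10):
    """Clamp an optimal speed-limit value to [lo, hi] km/h and round it
    to a multiple of `step`, so it is displayable to human drivers."""
    v = max(lo, min(hi, v_opt))
    return int(step * round(v / step))

# Examples: an optimizer output of 93.6 km/h becomes a 90 km/h sign,
# and out-of-range values are clipped before rounding.
print(quantize_limit(93.6), quantize_limit(131.0), quantize_limit(17.2))
```

The same quantization would be applied to every speed limit found by the controller before it is posted.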
The results of the simulations are reported in Tables 5.1, 5.2 and 5.3.
● Enforced speed limits
● Intelligent speed adaptation
For the ISA scenario, the desired free-flow speed is about 100% of the speed limit. It is modeled as a Gaussian distribution with a mean of 100% of the speed limit and a standard deviation of 5% of the speed limit. Based on this percentage, the influence of the dynamic speed limits is expected to be good [19].
5.3.4 The comprehensive analysis of the result
From the analysis above, we conclude that adopting the intelligent speed control system can effectively decrease travel times; in other words, dynamic speed control measures can improve the traffic flow.
Evidently, under the intelligent speed control system the effect of dynamic speed control is better than that of the lane speed control discussed in the first problem, because the intelligent speed control system can provide the optimal speed limit in time.
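The ISA desired-speed assumption above (Gaussian with mean 100% and standard deviation 5% of the posted limit) is easy to sample; the sketch below uses an arbitrary sample size purely for illustration:

```python
import random

def desired_speed(limit_kmh, rng):
    """Draw one driver's desired free-flow speed for the ISA scenario:
    Gaussian with mean 100% and std 5% of the posted speed limit."""
    return rng.gauss(limit_kmh, 0.05 * limit_kmh)

rng = random.Random(0)
samples = [desired_speed(100, rng) for _ in range(20_000)]
mean = sum(samples) / len(samples)
print(mean)  # close to the 100 km/h limit
```

With these parameters roughly 95% of drivers desire a speed within ±10% of the limit, which is why the dynamic limits are expected to be effective.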
In addition, with its various detection devices and sensors, the intelligent speed system can guarantee safe conditions.
On the whole, taking all the analysis from the first problem onward into account: in light traffic, we can neglect the safety factor with the help of the intelligent speed control system. Thus, under light traffic we propose a conclusion different from that in the first problem: the rule of keep-right-except-to-pass is more effective than lane speed control. In heavy traffic, sparing no effort to improve the operating efficiency of the freeway, we combine the dynamic speed control measure with the rule of keep-right-except-to-pass, concluding that applying dynamic speed control can improve the performance of the freeway.
What we should highlight is that, with the Intelligent Vehicle Highway Systems, different speed limits can be set for different road sections or different vehicle sizes.
In fact, how freeway traffic operates is extremely complex; nevertheless, with the Intelligent Vehicle Highway Systems and suitable adjustment of our original solution, it remains effective for freeway traffic.
6.
Improvement of the model
6.1 Strengths and weaknesses
6.1.1 Strengths
● The model is easy to simulate on a computer and can be modified flexibly to account for actual traffic conditions; moreover, a large number of images make the model more visual.
● The results effectively achieve all of the goals we set initially; meanwhile the conclusion is more persuasive because we used the Bernoulli process.
● We obtain more accurate results by using Matlab.
6.1.2 Weaknesses
● The relationship between traffic flow and safety is not comprehensively analyzed.
● Because there are many traffic factors, we studied only some of them, so our model needs further improvement.
6.2 Improvement of the model
We compared models under two kinds of traffic rules and thereby showed the efficiency of driving on the right for improving traffic flow in some circumstances. Because too few rules were compared, the conclusion is inadequate. To improve the accuracy, we further put forward another kind of traffic rule: speed limits for different types of vehicles.
The possibility of a traffic accident is larger for some vehicle types, which brings hidden safety troubles. So we consider different or specific vehicle types separately from the angle of speed limiting in order to reduce the occurrence of traffic accidents; highway speed limit signs are shown in Fig. 6.1.
Fig. 6.1
Advantages of the improved model are that it helps improve the running safety of specific vehicle types while accounting for the differences between vehicle types. However, our analysis shows that the rule may reduce the road traffic flow. In implementation, the V85 speed of each model should be the main reference basis.
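V85, used as the reference basis above, is the 85th-percentile speed: the speed not exceeded by 85% of observed vehicles. A minimal nearest-rank estimate (the speed observations below are invented for illustration):

```python
import math

def v85(speeds_kmh):
    """Nearest-rank 85th-percentile speed: sort the observations and
    take the value at rank ceil(0.85 * n)."""
    s = sorted(speeds_kmh)
    return s[math.ceil(0.85 * len(s)) - 1]

observed = [72, 80, 85, 88, 90, 92, 95, 97, 100, 110]
print(v85(observed))  # 100
```

The nearest-rank definition is one of several common percentile conventions; interpolating definitions give slightly different values on small samples.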
In recent years, researchers have estimated the V85 speed for typical countries; Table 6.1 lists the models [21]:
Table 6.1 V85 prediction models
Ottesen and Krammes (2000), America
Andueza (2000), Venezuela (horizontal-curve and tangent models)
Jessen (2001), America
Donnell (2001), America
Bucchi A., Biasuzzi K. and Simone A. (2005), Italy
Fitzpatrick, America
Meanwhile, there are other driving rules, such as speed limits in adverse weather conditions. Such a rule can improve the safety factor of vehicles to some extent while limiting speed at different levels.
7. Reference
[1] M. Rickert, K. Nagel, M. Schreckenberg, A. Latour, Two lane traffic simulations using cellular automata, Physica A 231 (1996) 534–550.
[20] J.T. Fokkema, Lakshmi Dhevi, Tamil Nadu Traffic Management and Control in Intelligent Vehicle Highway Systems, 18 (2009).
[21] Yang Li, New Variable Speed Control Approach for Freeway (2011) 1–66.

Excellent Mathematical Modeling Papers (8 General Examples)

Excellent Mathematical Modeling Paper No. 1
Abstract: Integrating mathematical modeling ideas into the teaching of advanced mathematics is an important approach in current university mathematics education.

Effective application of modeling ideas not only markedly improves students' ability to solve practical problems with mathematical models, but also plays an important role in cultivating their divergent thinking and overall competence.

Starting from the current state of advanced mathematics teaching, this paper analyzes the importance of integrating modeling ideas into advanced mathematics and proposes corresponding teaching methods drawn from practice, in the hope of offering some help to fellow teachers.

Keywords: mathematical modeling; advanced mathematics; teaching research
I. Introduction
Modeling ideas are the foundation and essence of advanced mathematics education.

From the current situation, the trend of integrating mathematical modeling ideas into advanced mathematics teaching is increasingly evident.

In actual teaching, however, mathematics education at most universities remains at the stage of simply transmitting traditional theoretical knowledge.

Its teaching outcomes are disconnected from social practice, making it hard for students to apply what they learn or to feel the charm of applied mathematics in real life; this way of teaching urgently needs improvement.

II. The current state of advanced mathematics teaching
Advanced mathematics is a foundational course in university mathematics education and a compulsory one.

It provides students of science and engineering majors with many solution methods and lines of reasoning, and is an indispensable foundational course for majors such as automation engineering, mechanical engineering, computer science and electrical engineering.

Meanwhile, many aspects of real life also involve calculations from advanced mathematics, such as the use of bank wealth-management funds or the probability calculations of lotteries; from these it is clear that advanced mathematics cannot be regarded merely as an academic subject, for it is closely connected with everyday life.
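The lottery example mentioned above is easy to make concrete. For a hypothetical "choose 6 numbers out of 33" lottery, the jackpot probability is one over the number of possible combinations:

```python
import math

# Hypothetical pick-6-of-33 lottery: count the equally likely tickets.
tickets = math.comb(33, 6)
p_jackpot = 1 / tickets
print(tickets, p_jackpot)  # 1107568 tickets, p ≈ 9.0e-7
```

A one-line combinatorial calculation like this is exactly the kind of everyday probability question that can make the course feel concrete to students.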

However, many schools still center on exam-oriented, spoon-fed teaching, and advanced mathematics textbooks have not kept pace with the times by weaving in their connection to daily life, so students cannot appreciate the importance and charm of advanced mathematics in everyday life; they develop resistance or even antagonism toward the subject and merely cram before exams.

Therefore, teaching reform of advanced mathematics is quite necessary; how to reform, and how to let students discover the charm of the subject and study it actively, is a major problem facing teachers.

III. The importance of integrating mathematical modeling ideas into advanced mathematics
First, it can stimulate students' interest in learning advanced mathematics.

Modeling is, in essence, the process of describing real-life phenomena in mathematical language.

Applying modeling ideas to the study of advanced mathematics lets students see the practical applications of mathematics in daily life and its convenience in solving everyday problems, and helps them realize that advanced mathematics is not merely a course but a foundation for the whole of daily life.

Mathematical Modeling: MCM Award-Winning Paper

Some players believe that "corking" a bat enhances the "sweet spot" effect. There are several arguments for this: a corked bat has (slightly) less mass; less mass (lower inertia) means a faster swing speed; and less mass means a less effective collision. These are just some people's views, and others may have different opinions. Whether corking is helpful in the baseball game has not been strongly confirmed yet; experiments seem to have inconsistent results.
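The two opposing effects can be compared with a standard one-dimensional bat-ball collision model (a sketch only; the masses, speeds and restitution coefficient below are assumptions for illustration, not data from the paper):

```python
def exit_speed(v_pitch, v_bat, m_ball, m_bat_eff, e):
    """1-D collision: with mass ratio r = m_ball / m_bat_eff, the collision
    efficiency is q = (e - r) / (1 + r), and the batted-ball speed is
    q * v_pitch + (1 + q) * v_bat."""
    r = m_ball / m_bat_eff
    q = (e - r) / (1 + r)
    return q * v_pitch + (1 + q) * v_bat

# Assumed: 0.145 kg ball pitched at 40 m/s; corking trades a little
# effective bat mass (0.90 -> 0.85 kg) for a little extra bat speed.
print(exit_speed(40, 30, 0.145, 0.90, 0.5))
print(exit_speed(40, 31, 0.145, 0.85, 0.5))
```

Whether corking wins depends on how much swing speed the mass reduction actually buys, which is exactly the open question described above.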
2010 Mathematical Contest in Modeling (MCM) Summary Sheet
Keywords: simple harmonic motion system, differential equations model, collision system

MCM Paper Template (Super Practical)


Team Control Number 50930
Problem Chosen A
2015 Mathematical Contest in Modeling (MCM/ICM) Summary Sheet
Summary
Our goal is a model that can be used to control the water temperature while a person takes a bath. After a person fills a bathtub with hot water and begins to bathe, the water gets cooler, causing bodily discomfort. We construct models to analyze the temperature distribution in the bathtub space as time changes. Our basic heat-transfer differential equation model rests on Newton's law of cooling and Fourier's law of heat conduction. We assume the person feels comfortable within a temperature interval; considering water saving, we let the temperature of the water first injected adopt the upper bound. As the water cools over time, we assume a time period during which the temperature spans this range; with this model we can obtain the volume of water first injected from the temperature decline from the maximum to the minimum value. We then build a partial differential equation model that explains the cooling of the water after the bathtub is filled. It shows the temperature distribution and the cooling behavior, and we can obtain the change of water temperature in space and time with MATLAB. When the temperature declines to the lower limit, the person adds a constant trickle of hot water. At first the bathtub holds a certain volume of water at the minimum temperature; in order to make the temperature after mixing with hot water closer to the original temperature while adding less hot water, we build a heat-accumulation model. During the addition of hot water, we can calculate the temperature-change function with this model until the bathtub is full. After the bathtub is full, the water volume is constant; some water overflows and takes away some heat. Now the temperature rise
is not as quick as during the initial filling; the control should minimize the difference between the injected heat and the heat lost to air convection. The movement of the person can be seen as a simple mixing motion, which plays a very good role in promoting an even mixture of heat; so we treat the degree of the body's motion as a function and establish its connection with the heat-transfer model, drawing the relationship between them. As for the influence of the size of the bathtub: because the bathtub wall is insulated, the heat radiated by the whole body relates only to the area of the water surface, so the shape and size of the bathtub matter only through the water-surface area. This area affects the amount of heat radiation, and thereby the amount of water added and the temperature difference. Once the length and width of the bathtub are determined, the surface area is determined, and the heat-transfer rate can be solved from the heat-conduction equation, which can be used to calculate the amount of hot water. Finally, considering the effect of a foaming agent: after adding foam, the foam floats on the liquid surface and acts as an extra layer of heat-transfer medium. This layer hinders the convective heat transfer between water and air, thereby affecting the amount of hot water that must be added.
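The two building blocks of the model above, Newton-law cooling and the heat balance when hot water is added, can be sketched in a few lines (the cooling constant, masses and temperatures are illustrative assumptions, not values from the paper):

```python
def cool_down(t0, t_env, k, minutes, dt=0.1):
    """Euler integration of Newton's law of cooling, dT/dt = -k (T - T_env)."""
    temp = t0
    for _ in range(int(minutes / dt)):
        temp += -k * (temp - t_env) * dt
    return temp

def mix_temperature(m_bath, t_bath, m_hot, t_hot):
    """Heat-accumulation balance for added hot water; the specific heat of
    water cancels, leaving a mass-weighted mean temperature."""
    return (m_bath * t_bath + m_hot * t_hot) / (m_bath + m_hot)

# Assumed: 42 C water in a 25 C room with k = 0.02 / min cools for 30 min;
# topping up 150 kg of cooled water with 20 kg at 70 C warms it again.
cooled = cool_down(42.0, 25.0, 0.02, 30)
print(cooled, mix_temperature(150, cooled, 20, 70.0))
```

The full model couples these pieces with the PDE for the spatial distribution; this sketch only captures the lumped time behavior.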

American MCM Paper Template [Sample]


Team Control Number 21432
Problem Chosen C
2012 Mathematical Contest in Modeling (MCM) Summary Sheet
Two models to make conspirators nowhere to hide in a social network
With the development of high technology, the number of white-collar, high-tech crimes grows by more than 4% a year [1]. Because of conspirators' high IQ and professional knowledge, they are hard to track down. Thus, we need special data-mining and analytical methods to analyze social networks' inherent laws and finally help prosecute criminal suspects.
Model I is used for calculating everyone's criminal possibility by the following 4 procedures: 1) derive topics' danger coefficients by the Analytic Hierarchy Process (AHP); 2) set the discriminant line by a Support Vector Machine (SVM); 3) use a weighted sum to calculate everyone's criminal possibility; 4) provide a nomination list of conspiracy leaders by the Pagerank algorithm.
Model II is an improved text analysis, used for more accurately analyzing the content and context of relevant information. The model includes four steps: 1) ascertain keywords and topics by counting their occurrences; 2) segment the messages' sentences; 3) match messages to topics intelligently; 4) obtain results with Model I.
We use the models to evaluate requirements 1 and 2. The results show fault rates of 8.33% and 12.5%, which is acceptable.
Table 1. The results of requirements 1 and 2.
              conspirators (criminal possibility)        leaders (rank)
Requirement 1 Seeri 0.494, Sherri 0.366, Dolores 0.323   Julia 0.137, Beth 0.099, Jerome 0.095
Requirement 2 Sherri 0.326, Paige 0.306, Melia 0.284     Alex 0.098, Paige 0.094, Sherri 0.092
To verify our two models and describe the ideas for requirement 3, we use the models to analyze the 10-people example.
The results of Model II show our topics contain 78.8% of the initial information, better than the former 5 topics' 57.7%. The results of Model I can identify two shadowy conspirators, Bob and Inez. Thus, the models are accurate and effective.
According to requirement 4, we specifically discuss the effect of the thorough network analysis on our models. Meanwhile, we try to extend our models to distinguishing safe and unsafe pages on the Internet, and the results derived from our models are reasonable.
Two models to make conspirators nowhere to hide
Team #13373
February 14th, 2012
Content
Introduction; The Description of the Problem (Analysis; What is the goal of the Modeling effort?; Flow chart; Assumptions; Terms, Definitions and Symbols); Model I (Overview; Model Built; Solution and Result; Analysis of the Result); Model II (Overview; Model Built; Result and Analysis); Conclusions (Technical summary; Strengths and Weaknesses; Extension); Reference; Appendix
Introduction
With the development of our society, more and more high-tech conspiracy crimes and white-collar crimes take place among business and government professionals. Unlike simple violent crime, this is a brand-new style of crime that gradually creates big fraud schemes to hurt others' interests and destroy companies.
In order to track down the culprits and stop scams before they start, we must make full use of effective simulation models and methodology to search their criminal information. We create a Criminal Priority Model (CPM) to evaluate every suspect's criminal possibility by analyzing text messages, and we obtain a priority line which is helpful to ICM's investigation.
In addition, semantic network analysis is one of the most effective search methods nowadays; it also helps us obtain and analyze semantic information by automatically extracting networks using co-occurrence, grammatical analysis, and sentiment analysis.
During our search for useful information and data [1], we developed a whole model of how to effectively search and analyze data in a network. In fact, the combination of text analysis and a classification model not only contributes to tracking down culprits but also provides an effective way to analyze other subjects. For example, we can use our models for the classification of web pages.
The conditions of page classification are similar to criminological analysis. First, starting from unsafe pages, we use a web crawler and hyperlinks to find the pages' content and the connections between pages. Second, we extract the messages and the relationships between pages with Model II. Third, from the available information we obtain the pages' priority list for security and the discriminant line separating safe pages from unsafe pages with Model I. Finally, we use the pages' relationships to adjust the result.
The Description of the Problem
Analysis
After reading the whole ICM problem, we made a deep analysis of the conspiracy and related information.
In fact, the goal of ICM leads us to research how to use thorough network, semantic, and text analyses of the message contents to work out personal criminal possibility.
First, we must develop a simulation model to analyze the current case's data and visualize the discriminant line separating conspirators from non-conspirators.
Then, by adding text analyses to study the possibly useful information in "Topic.xls", we can optimize our model and develop an integral process to automatically extract and operate on the database.
Finally, we use a new subject and database to verify our improved model.
What is the goal of the Modeling effort?
● Make a priority list presenting the most likely conspirators.
● Put forward criteria to discriminate conspirators from non-conspirators, creating a discriminant line.
● Nominate the possible conspiracy leaders.
● Improve the model's accuracy and the credit of ICM.
● Study the principles and steps of semantic network analysis.
● Describe how semantic network analysis could empower our model.
Flow chart
Figure 1
Assumptions
● The messages have no serious errors.
● The messages and text represent what they truly mean.
● Special people, such as spies, are ignored.
● The information provided by ICM is reasonable and reliable.
Terms, Definitions and Symbols
Table 2.
Model parameters, with their meanings:
- the rate of messages sent to conspirators to total sent messages
- the rate of messages received from conspirators to total received messages
- the dangerous possibility of one's total messages
- the rate of messages with known non-conspirators to total messages
- the danger coefficient of topics
- the number of one's sent messages
- the number of one's received messages
- the number of one's sent messages to criminals
- the number of one's received messages from criminals
- the number of one's sent messages to non-conspirators
- the number of one's received messages from non-conspirators
- the danger coefficient of people
Model I
Overview
Model I is used for calculating and analyzing everyone's criminal possibility. In fact, criminal possibility is the most important parameter for building a priority list and a discriminant line. Model I consists of the following 4 procedures: (1) derive topics' danger coefficients by the Analytic Hierarchy Process (AHP); (2) set the discriminant line by a Support Vector Machine (SVM); (3) use a weighted sum to calculate everyone's criminal possibility; (4) provide a nomination list of conspiracy leaders by the Pagerank algorithm.
Model Built
Step 1. Pretreatment
In order to decide the priority list and discriminant line, we must sufficiently study the data and factors given by ICM.
First, we focus on the phenomenon of repeated names. In name.xls there are three pairs of repeated names: node #7 and node #37 are both called Elsie, node #16 and node #34 are both called Jerome, and node #4 and node #32 are both called Gretchen. Thus, before developing simulation models, we must determine who are the real Elsie, Jerome and Gretchen, i.e., which node accords with the information the problem gives.
First we study the data in message.xls and analyze the numbers of messages of Elsie, Jerome and Gretchen.
Table 1 presents the correlation measure of their messages with criminal topics.
Figure 2
By studying these data and figures, we can calculate the rate of messages about criminal topics to total messages: for node #7 it is 0.45455, while for node #37 it is 0.27273. Furthermore, node #7 is higher than node #37 in the number of messages. Thus, we conclude that node #7 is more likely the Elsie that ICM points out.
In like manner, we think node #34 and node #32 are the senior managers ICM points out. In the following model and deduction, we assume node #7 is Elsie, node #34 is Jerome and node #32 is Gretchen.
Step 2. Derive topics' danger coefficients by the Analytic Hierarchy Process
We use the Analytic Hierarchy Process to calculate every topic's danger coefficient, evaluating with four factors [2]:
● Aim: evaluate the danger coefficient of every topic.
● Standards: the correlation with dangerous keywords; the importance of the topic itself; the relationship of the topic with known conspirators; the relationship of the topic with known non-conspirators.
● Schemes: the topics (1, 2, 3, ..., 15).
Figure 3
According to previous research, we decide the weights of the Standards relative to the Aim. These weights can be evaluated by pairwise comparison, building a matrix for each part: comparing criteria C_i and C_j yields an entry a_ij, giving the matrix A = (a_ij)_{n×n} with a_ij > 0 and a_ji = 1/a_ij. The other matrices can be evaluated in similar ways. Finally, we make a consistency check on matrix A and find it reasonable.
The results are shown in the table, and we can use the data to continue with the next model.
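The pairwise-comparison step can be sketched as follows; the judgement matrix here is an invented 3×3 example (the paper's own matrices are not reproduced), with weights obtained by power iteration and the standard consistency index CI = (λ_max − n)/(n − 1):

```python
def ahp_weights(a, iters=200):
    """Principal-eigenvector weights of a positive reciprocal AHP matrix
    (a[i][j] > 0, a[j][i] = 1/a[i][j]) via power iteration."""
    n = len(a)
    w = [1.0 / n] * n
    for _ in range(iters):
        aw = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(aw)
        w = [x / s for x in aw]
    return w

def consistency_index(a, w):
    """Estimate lambda_max from A w ~ lambda w, return (lambda - n)/(n - 1)."""
    n = len(a)
    aw = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n
    return (lam - n) / (n - 1)

A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 2.0],
     [1/5, 1/2, 1.0]]
w = ahp_weights(A)
print(w, consistency_index(A, w))  # weights sum to 1; CI near 0 = consistent
```

A CI close to zero (relative to the random index for the matrix size) is what the consistency check in the text verifies.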
Step 3. Use the weighted sum to calculate everyone's criminal possibility
We study everyone's danger coefficient using four factors [3]:
The first factor is the rate of someone's sent criminal messages to total sent messages.
The second factor is the rate of someone's received criminal messages to total received messages.
The third factor is the dangerous possibility of someone's total messages.
The fourth factor is the rate of someone's messages with non-conspirators to total messages.
Finally, we use a weighted sum of these factors to calculate everyone's criticality, namely the possibility of someone participating in the crime (the weighting parameters show each factor's weight).
After calculating the equations above, we derive everyone's criminal possibility and a priority list (see appendix for the complete table of the most likely conspirators). We also use a cratering technique first described by Rossmo [1999]: the two-dimensional crime points x_i are mapped to their radii from the anchor point a_i, that is, f: x_i → r_i, where f(x_i) = |x_i − a_i| (a shifted modulus). The set of r_i is then used to generate a crater around the anchor point.
Step 4. Provide a nomination list of conspiracy leaders by the Pagerank algorithm
Finally, we find the possible important leaders with the Pagerank model and, combined with the priority list, build a prioritized list of conspiracy leaders [4]. The essential idea of PageRank is that if node u has a link to node v, then the author of u implicitly confers some importance to node v, meaning node v has an important chance. We use B(u) to denote the set of nodes linking to node u, F(u) to denote the set of links received from node u, and C as a normalization factor.
In each iteration, the ranks propagate as follows; the PageRank of node u is
R(u) = C · Σ_{v ∈ B(u)} R(v) / |F(v)|.
Using the results of PageRank and the priority list, we can identify the possible conspiracy leaders.
Solution and Result
Requirement 1:
According to Model I above, we process the data offered by requirement 1 and build two lists. The following shows the result for requirement 1.
By running Step 2 of Model I, we derive the danger coefficients of the topics; the known conspiracy topics 7, 11 and 13 have high danger coefficients (see appendix Table 4 for complete information).
After running Step 3, we get a list of everyone's criticality. By comparing criticalities, we build a priority list of criminal suspects. In fact, we find that the criminal suspects are relatively concentrated and highly different from the known non-conspirators, which suggests our model is reasonable. Thus we use the SVM to get the discriminant line, namely to separate criminal suspects from possible non-conspirators (see appendix Table 5 for complete information). Finally, we use Pagerank to calculate the criminal suspects' rank and status; Table 4 shows the result. Accordingly, we nominate the 5 most likely criminal leaders: Julia, Beth, Jerome, Stephanie and Neal.
According to requirement 1, we underline the situations of the three senior managers Jerome, Dolores and Gretchen. Because the SVM model makes a deep analysis of conspirators, Jerome is chosen as an important conspirator, and Dolores and Gretchen both have high danger coefficients. We think Jerome could be a conspirator, while Dolores and Gretchen are regarded as important criminal suspects. Using the software Ucinet, we derive a social network of criminal suspects. The blue nodes represent non-conspirators; the red nodes represent conspirators.
The yellow nodes represent conspiracy leaders.
Figure 4
Requirement 2:
Using the same model as above, we continue the analysis though the conditions change. We derive three new tables (4, 5 and 6): the danger coefficients of topics, everyone's criticality, and the probability of nomination. We obtain a new priority list (Table 6) and the 5 most likely criminal leaders: Alex, Sherri, Yao, Elsie and Jerome. We sincerely hope that our analysis can help ICM's investigation. We produce a new figure showing the social network of criminal suspects for requirement 2.
Figure 5
Analysis of the Result
1) Analysis
In requirement 1, we find 24 possible criminal suspects. All 7 known conspirators are among the 24 suspects, and their danger coefficients are quite high. However, 2 known non-conspirators are also among these suspects, so the error rate is 8.33%. In all, we still have reason to consider the model reasonable. In addition, we find 5 suspects who are likely conspirators by the Support Vector Machine (SVM).
In requirement 2, we also choose the 24 most likely conspirators after running our CPM. All 8 known conspirators are among the 24 suspects, and their danger coefficients are quite high.
Because 3 known non-conspirators are among these suspects, the error rate is 12.5%, higher than in requirement 1.
2) Comparison
To research the effect of changing the number of criminal topics and conspirators on the results, we do an additional study. We separate the changes in the numbers of topics and crimes, and analyze the result's change for one factor at a time.
In order to analyze the change between requirements 1 and 2, we choose the people whose rank changed by more than 30.
Reference: the node. 1st result: the part of requirement 1's priority list. 2nd result: the part of requirement 2's priority list. 3rd result: the priority changes between requirements 1 and 2.
After investigating these people, we find the topics about them are not closely connected with node #0; thus, the change of node #0 does not greatly affect their change. However, more than half of these people talk about topic 1. According to the analysis, topic 1 has a great effect on their change; topic 1 is more important to node #0. Thus, we hypothesize that the choice of topics has a bigger effect on the determination of personal identity, and we research this in the following content.
Model II
Overview
According to requirement 3, we take text analysis into account to enhance our model. In this paper, text analysis is presented as a paradigm for syntactic-semantic analysis of natural language. The main characteristics of this approach are the message vectors of keywords, semantics and question formation. In like manner, we obtain three vectors for each topic. Then we use similarity to assign every message to its corresponding topic. Finally, we evaluate the effects of text analysis with Model I.
Model Built
Step 1. Pretreatment
In this step, we derive relatively accurate topics from the keywords in messages.
We not only build a database of topics, but also a small database for adjusting the topic classification of messages. The small adjustment database is used to study possible interpersonal relations between criminal suspects; e.g., if Bob always uses positive and supportive words to comment on topics and things about Jerry, then we consider Bob's messages closely connected with topics about Jerry [5].
First, we count how many keywords occur in the whole set of messages. Text analysis is word-oriented, i.e., the word plays the central role in language understanding, so we avoid stipulating a grammar in the traditional sense and focus on the concept of the word. In analyzing all words in the messages, we ignore punctuation and simple words such as "is" and "the", and extract the relatively important words. Then we count how many times every important word occurs, make a priority list, and choose the top part of these words. Finally, using some messages, we turn these keywords into relatively complete topics.
Step 2. Sentence segmentation
We make a deep study of every sentence in the messages by running our program. To begin with, we use the same method as in Step 1 to segment sentences and derive every message's keywords, collected in a keyword vector K = (k_1, ..., k_m), where m is the number of keywords in the message. To improve the accuracy and relevance of our keywords, we build, for each keyword, a vector of its synonyms and antonyms S_k = (s_k1, ..., s_kp), 1 < k < m, where p is the number of correlative words. According to the primary analysis, we can find important interpersonal relations between criminal suspects (e.g., Bob is closely connected with Jerry), so we also build a vector of interpersonal relations R = (r_1, ..., r_n), where n is the number of relationships in one sentence.
Step 3. Intelligent matching
To improve the accuracy of our classification model, we use the three vectors for intelligent matching. Every message has three vectors;
similarly, every topic also has three vectors. Finally, we perform intelligent matching for classification [6].
Step 4. Using the CPM
After deriving the new classification of messages, we make full use of the new topics to calculate everyone's criticality.
Result and Analysis
After calculating the 10-people example, we derive new topics. By verifying how much initial information the topics contain, we can evaluate the effect of the models. The results of Model II show our topics contain 78.8% of the initial information, better than the former 5 topics' 57.7%. Thus, the new topics contain more initial information. Meanwhile, we build a database of interpersonal relations and use it to optimize the results of everyone's criminal possibility.
Table 3
#node  primary  new     #node  primary  new
1      0        0.065   6      0.342    0.265
2      0.342    0.693   7      0.891    0.912
3      0.713    0.562   8      0.423    0.35
4      1        1       9      0.334    0.723
5      0.823    0.853   10     0.125    0.15
The results of Model I can identify the two shadowy conspirators, Bob and Inez, and in the table the error rate becomes smaller. From Table 3 we can derive some information:
1. Analyzing the danger coefficients of Bob and Inez: Bob is the person who self-admitted his involvement in a plea bargain for a reduced sentence; his value changes from 0.342 to 0.693. Inez is the person who got off; his value changes from 0.334 to 0.723. The models can identify these two shadowy people.
2. Carol, the person who was later dropped, changes from 0.713 to 0.562. Although the danger coefficient is still relatively high, the result is improved by our models.
3. The distance between high-degree and low-degree people becomes bigger, showing that the models can more distinctly separate conspirators from non-conspirators.
Thus, the models are more accurate and effective.
Conclusions
Technical summary
We present a whole model of how to extract and analyze plentiful network information and finally solve classification problems.
Four steps are used to make the classification problem easier:
1) Using the known conspirators and correlative information, apply a crawler-like procedure to extract the information and messages we may need. [7]
2) Use the second model to analyze and classify these messages and texts, obtaining the important topics.
3) Use the first model to calculate everyone's criminal possibility.
4) Use the interpersonal-relation database derived in step 2 to refine the results. [8]

Strengths and Weaknesses
Strengths:
1) We analyze the danger coefficients of topics and people by using different characteristics to describe them. The results have a margin of error of about 10 percentage points, so the models work well.
2) In the semantic analysis, besides obtaining topics from the messages in the social network, we also extract the relationships between people and use them to adjust the final result, which improves the model.
3) We use 4 characteristics to describe a person's danger coefficient. SVM has a great advantage in classification with few features, and using SVM to classify the unknown people gives good results.
Weaknesses:
1) For special people, such as spies and criminal researchers, the model does not work so well.
2) We can determine some criminals from the topics, and in turn use the new criminals to adjust the topics; the two react upon each other. Ideally we would iterate until the topics and the criminals are stable, but we only finish the first cycle.
3) We only test and verify the semantic analysis model on the example (a social network of 10 people). In a large social network the computational complexity becomes greater, so the classification quality remains to be surveyed.

Extension
According to our analysis, our model can be applied not only to analyzing criminal gangs but also to similar network models, such as cells in a biological network, safe pages on the Internet, and so on.
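As one concrete reading of the "intelligent matching" in Step 3 (here reduced to a single bag-of-words vector per message and per topic, rather than the paper's three vectors), a message can be assigned to the most similar topic by cosine similarity; the topic vocabularies below are invented for illustration:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(message_words, topics):
    """Assign the message to the topic whose keyword vector is most similar."""
    vec = Counter(message_words)
    return max(topics, key=lambda t: cosine(vec, topics[t]))

# Hypothetical topic keyword vectors (word -> weight).
topics = {
    "finance": Counter({"bank": 3, "money": 2, "transfer": 2}),
    "travel":  Counter({"flight": 3, "hotel": 2, "ticket": 1}),
}
print(classify(["money", "transfer", "bank"], topics))  # → finance
```

Extending this with synonym/antonym vectors and a relation vector, as the paper proposes, would amount to summing three such similarities with suitable weights.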
For page classification on the Internet, our model can make a contribution as follows. First, starting from a known unsafe page, we use the web crawler and hyperlinks to obtain each page's content and the connections between pages. Second, we extract the messages and the relationships between pages with Model II. Third, from the available information, Model I yields a priority list of pages by security and a discriminant line separating safe pages from unsafe ones. Finally, we use the relationships between pages to adjust the result.

References
1. http://books.google.pl/books?id=CURaAAAAYAAJ&hl=zh-CN, 2012.
2. AHP. /wiki/%E5%B1%82%E6%AC%A1%E5%88%86%E6%9E%90%E6%B3%95.
3. Schaller, J. and J.M.S. Valente, Minimizing the weighted sum of squared tardiness on a single machine. Computers & Operations Research, 2012. 39(5): p. 919-928.
4. Frahm, K.M., B. Georgeot, and D.L. Shepelyansky, Universal emergence of PageRank. Journal of Physics A: Mathematical and Theoretical, 2011. 44(46).
5. Park, S.-B., J.-G. Jung, and D. Lee, Semantic Social Network Analysis for Hierarchical Structured Multimedia Browsing. Information: An International Interdisciplinary Journal, 2011. 14(11): p. 3843-3856.
6. Yi, J., S. Tang, and H. Li, Data Recovery Based on Intelligent Pattern Matching. China Communications, 2010. 7(6): p. 107-111.
7. Nath, R. and S. Bal, A Novel Mobile Crawler System Based on Filtering off Non-Modified Pages for Reducing Load on the Network. International Arab Journal of Information Technology, 2011. 8(3): p. 272-279.
8. Xiong, F., Y. Liu, and Y. Li, Research on Focused Crawler Based upon Network Topology. Journal of Internet Technology, 2008. 9(5): p. 377-380.
9. Huang, D., et al., MyBioNet: interactively visualize, edit and merge biological networks on the Web. Bioinformatics, 2011. 27(23): p.
3321-3322.

Appendix

Table 4
requirement 1
topic  danger   topic  danger   topic  danger   topic  danger
7      1.65     4      0.78     5      0.47     8      0.17
13     1.61     10     0.77     15     0.46     14     0.17
11     1.60     12     0.47     9      0.19     6      0.14
1      0.81     2      0.47     3      0.18

requirement 2
topic  danger   topic  danger   topic  danger   topic  danger
1      0.40     2      0.26     15     0.15     14     0.11
7      0.37     9      0.23     8      0.15     3      0.09
13     0.37     10     0.21     5      0.14     6      0.06
11     0.30     12     0.18     4      0.12

Table 5
requirement 1
#node  danger   #node  danger   #node  danger   #node  danger
21     0.74     22     0.19     0      0.13     23     0.03
67     0.69     4      0.19     40     0.13     72     0.03
54     0.61     33     0.19     36     0.13     62     0.03
81     0.49     47     0.19     11     0.12     51     0.02
7      0.47     41     0.19     69     0.12     57     0.02
3      0.37     28     0.18     29     0.12     64     0.02
49     0.36     16     0.18     12     0.11     71     0.02
43     0.36     31     0.17     25     0.11     74     0.01
10     0.32     37     0.17     82     0.11     58     0.01
18     0.29     27     0.16     60     0.10     59     0.01
34     0.29     45     0.16     42     0.10     70     0.00
48     0.28     50     0.16     65     0.09     53     0.00
20     0.27     24     0.16     9      0.09     76     0.00
15     0.27     44     0.16     5      0.09     61     0.00
17     0.26     38     0.16     66     0.09     75     -0.01
2      0.23     13     0.16     26     0.08     77     -0.01
32     0.23     35     0.15     39     0.06     55     -0.02
30     0.20     1      0.15     80     0.04     68     -0.02
73     0.20     46     0.15     78     0.04     52     -0.03
19     0.20     8      0.14     56     0.03     63     -0.03
14     0.19     6      0.14     79     0.03

requirement 2
#node  danger       #node  danger      #node  danger      #node  danger
0      0.3988114    75     0.1757106   47     0.1090439   11     0.0692506
21     0.4477778    52     0.1749354   71     0.1089147   4      0.0682171
67     0.3990472    38     0.1738223   82     0.1088594   42     0.0483204
54     0.3537542    10     0.1656977   14     0.1079734   65     0.046124
81     0.3257364    19     0.1559173   27     0.1060724   60     0.0459948
2      0.3060543    40     0.1547065   23     0.105814    39     0.0286822
18     0.3031783    30     0.1517626   5      0.1039406   62     0.0245478
66     0.2837209    80     0.145155    8      0.10228     78     0.0162791
7      0.2798708    24     0.1447674   73     0.1         56     0.0160207
63     0.2618863    70     0.1425711   50     0.0981395   64     0.0118863
68     0.2485142    29     0.1425562   26     0.097213    72     0.0113695
48     0.2396683    45     0.1374667   1      0.0952381   79     0.0093023
49     0.2380768    37     0.1367959   69     0.0917313   51     0.0056848
34     0.2326149    17     0.1303064   33     0.0906977   57     0.0056848
3      0.2255076    6      0.1236221   31     0.0905131   74     0.0054264
35     0.2224352    22     0.1226934   36     0.0875452   76     0.005168
77     0.2144703    13     0.1222868   41     0.0822997   53     0.0028424
20     0.2137182    44     0.115007    46     0.0749354   58     0.0015504
43     0.2043282    12     0.1121447   28     0.0748708   59     0.0015504
32     0.1933115    15     0.1121447   16     0.074234    61     0.0007752
55     0.1826873    9      0.1117571   25     0.0701292

Table 6
requirement 1
#node  leader   #node  leader   #node  leader   #node  leader
15     0.1368   49     0.0481   7      0.0373   19     0.0089
14     0.0988   4      0.0423   21     0.0357   32     0.0073
34     0.0951   10     0.0422   18     0.029    22     0.0059
30     0.0828   67     0.0421   48     0.0236   81     0.0053
17     0.0824   54     0.0377   20     0.0232   73     0
43     0.0596   3      0.0377   2      0.0181   33     0

requirement 2
#node  leader      #node  leader      #node  leader      #node  leader
21     0.0981309   7      0.0714406   54     0.0526831   43     0.0140187
2      0.0942899   34     0.0707246   32     0.0464614   81     0.0097776
3      0.0916127   0      0.0706746   18     0.0411142
48     0.0855984   20     0.0658119   68     0.0285328
67     0.0782211   49     0.0561665   35     0.024741

MCM (American Collegiate Mathematical Contest in Modeling) Writing Template (Section by Section)


Abstract
Paragraph 1: state the problem the paper solves.
1. Restatement of the problem
a. Open by introducing the key terms:
Example 1: "Hand move" irrigation, a cheap but labor-intensive system used on small farms, consists of a movable pipe with sprinkler on top that can be attached to a stationary main.
Example 2: ...... is a real-life common phenomenon with many complexities.
Example 3: An (effective plan) is crucial to .........
b. State the problem directly:
Example 1: We find the optimal number of tollbooths in a highway toll-plaza for a given number of highway lanes: the number of tollbooths that minimizes average delay experienced by cars.
Example 2: A brand-new university needs to balance the cost of information technology security measures with the potential cost of attacks on its systems.
Example 3: We determine the number of sprinklers to use by analyzing the energy and motion of water in the pipe and examining the engineering parameters of sprinklers available in the market.
Example 4: After mathematically analyzing the ...... problem, our modeling group would like to present our conclusions, strategies, (and recommendations) to the .......
Example 5: Our goal is ... that (minimizes the time) .........
2. The great significance of solving this problem (argued from the negative case).

Excellent MCM Paper Template


For office use only                          Team Control Number 11111          For office use only
T1 ____ T2 ____ T3 ____ T4 ____              Problem Chosen A B C D             F1 ____ F2 ____ F3 ____ F4 ____

2015 Mathematical Contest in Modeling (MCM/ICM) Summary Sheet

In order to evaluate the performance of a coach, we describe metrics in five aspects: historical record, game gold content, playoff performance, honors, and contribution to the sport. Each aspect is further subdivided into several secondary metrics. Taking playoff performance as an example, we collect the postseason result (Sweet Sixteen, Final Four, etc.) per year from the NCAA official website, Wikimedia, and so on.

First, **** grade. To eval****, in turn, are John Wooden, Mike Krzyzewski, Adolph Rupp, Dean Smith and Bob Knight.

The time horizon does make a difference. According to turning points in NCAA history, we divide the previous century into six periods with different time weights, which leads to changes in the ranking.

We conduct sensitivity analysis on FSE to find the best membership function and calculation rule. Sensitivity analysis on the aggregation weight is also performed; it proves that AM performs better than a single model. As a creative use, the top 3 U.S. presidents are picked out: Abraham Lincoln, George Washington, Franklin D. Roosevelt.

At last, the strengths and weaknesses of our model are discussed, a non-technical explanation is presented, and future work is pointed out as well.

Key words: Ebola virus disease; Epidemiology; West Africa; ******

Contents
I. Introduction
  1.1  1.2  1.3  1.4  1.5  1.6
II. The Description of the Problem
  2.1 How do we approximate the whole course of paying toll?
  2.2 How do we define the optimal configuration?
  2.3 The local optimization and the overall optimization
  2.4 The differences in weights and sizes of vehicles
  2.5 What if there is no data available?
III.
Models
  3.1 Basic Model
    3.1.1 Terms, Definitions and Symbols
    3.1.2 Assumptions
    3.1.3 The Foundation of Model
    3.1.4 Solution and Result
    3.1.5 Analysis of the Result
    3.1.6 Strength and Weakness
  3.2 Improved Model
    3.2.1 Extra Symbols
    3.2.2 Additional Assumptions
    3.2.3 The Foundation of Model
    3.2.4 Solution and Result
    3.2.5 Analysis of the Result
    3.2.6 Strength and Weakness
IV. Conclusions
  4.1 Conclusions of the problem
  4.2 Methods used in our models
  4.3 Applications of our models
V. Future Work
  5.1 Another model
    5.1.1 The limitations of queuing theory
    5.1.2  5.1.3  5.1.4
  5.2 Another layout of toll plaza
  5.3 The newly-adopted charging methods
VI. References
VII. Appendix

I. Introduction
In order to indicate the origin of the toll way problems, the following background is worth mentioning.
1.1  1.2  1.3  1.4  1.5  1.6

II. The Description of the Problem
2.1 How do we approximate the whole course of paying toll?
●●●●
2.2 How do we define the optimal configuration?
1) From the perspective of the motorist:
2) From the perspective of the toll plaza:
3) Compromise:
2.3 The local optimization and the overall optimization
●●●
Virtually:
2.4 The differences in weights and sizes of vehicles
2.5 What if there is no data available?
III.
Models
3.1 Basic Model
3.1.1 Terms, Definitions and Symbols
The signs and definitions are mostly generated from queuing theory.
●●●●●
3.1.2 Assumptions
●●●●
3.1.3 The Foundation of Model
1) The utility function
● The cost of the toll plaza:
● The loss of the motorist:
● The weight of each aspect:
● Compromise:
2) The integer programming
According to queuing theory, we can calculate the statistical properties as follows.
3) The overall optimization and the local optimization
● The overall optimization:
● The local optimization:
● The optimal number of tollbooths:
3.1.4 Solution and Result
1) The solution of the integer programming:
2) Results:
3.1.5 Analysis of the Result
● Local optimization and overall optimization:
● Sensitivity: The result is quite sensitive to the change of the three parameters.
● Trend:
● Comparison:
3.1.6 Strength and Weakness
● Strength: In spite of this, the model has proved that .... Moreover, we have drawn some useful conclusions about .... The model is fit for ..., such as ....
● Weakness: This model just applies to .... As we have stated, .... That is just what we should do in the improved model.
3.2 Improved Model
3.2.1 Extra Symbols
●●●●
3.2.2 Additional Assumptions
●●●
Assumptions concerning the anterior process are the same as in the Basic Model.
3.2.3 The Foundation of Model
1) How do we determine the optimal number?
As we have concluded from the Basic Model, ...
3.2.4 Solution and Result
1) Simulation algorithm
Based on the analysis above, we design our simulation algorithm as follows.
● Step 1:  ● Step 2:  ● Step 3:  ● Step 4:  ● Step 5:  ● Step 6:  ● Step 7:  ● Step 8:  ● Step 9:
2) Flow chart
The figure below is the flow chart of the simulation.
3) Solution
3.2.5 Analysis of the Result
3.2.6 Strength and Weakness
● Strength: The Improved Model aims to make up for the neglect of .... The result seems to declare that this model is more reasonable than the Basic Model and much more effective than the existing design.
● Weakness: .... Thus the model is still an approximation on a large scale. This is doomed to limit its applications.
IV.
Conclusions
4.1 Conclusions of the problem
●●●
4.2 Methods used in our models
●●●
4.3 Applications of our models
●●●

V. Future Work
5.1 Another model
5.1.1 The limitations of queuing theory
5.1.2
5.1.4
1) ●●●●
2) ●●●
3) ●●●
4)
5.2 Another layout of toll plaza
5.3 The newly-adopted charging methods

VI. References
[1]
[2]
[4]

VII. Appendix
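Section 3.1.3 (2) of the template invokes queuing theory to "calculate the statistical properties". One standard choice for a toll plaza with c parallel booths is the M/M/c model, whose mean queueing delay follows from the Erlang C formula. A minimal sketch is given below; the arrival rate, service rate, and booth counts are invented for illustration:

```python
import math

def mmc_wait(lam, mu, c):
    """Mean waiting time in queue W_q for an M/M/c system (Erlang C formula).

    lam: arrival rate (vehicles per unit time)
    mu:  service rate of one tollbooth
    c:   number of tollbooths; requires lam < c * mu for stability
    """
    rho = lam / (c * mu)
    assert rho < 1, "unstable: arrivals exceed total service capacity"
    a = lam / mu  # offered load in Erlangs
    tail = a**c / (math.factorial(c) * (1 - rho))
    # Probability that an arriving vehicle has to wait (Erlang C).
    p_wait = tail / (sum(a**k / math.factorial(k) for k in range(c)) + tail)
    return p_wait / (c * mu - lam)  # W_q

# Average queueing delay with 3 vs 4 booths, lam = 2.5 veh/min, mu = 1 veh/min.
print(mmc_wait(2.5, 1.0, 3), mmc_wait(2.5, 1.0, 4))
```

Evaluating W_q over a range of booth counts c, and trading it off against the plaza's cost, is exactly the integer program the template's Section 3.1.3 outlines.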


The Keep-Right-Except-To-Pass Rule

Summary
The first question presents the keep-right-except-to-pass traffic rule and asks us to verify its effectiveness. Firstly, we define a traffic rule different from the keep-right rule in order to frame the problem clearly; then we build a cellular automaton model and a NaSch model from the collected data; next, we run numerical simulations over several factors that influence traffic flow. Finally, through extensive analysis of the resulting graphs, we reach the following conclusion: when the vehicle density is lower than 0.15 (light traffic), the lane-speed-control rule is more effective in terms of safety; when the vehicle density is greater than 0.15 (heavy traffic), the keep-right-except-to-pass rule is more effective.

The second question asks whether the conclusion obtained in the first question also applies under the keep-left rule. First of all, we build a stochastic multi-lane traffic model. Considering the vehicle-flow stress, we model lane choice as a Bernoulli process in which the probability of moving to the right is 0.7 (and to the left otherwise); considering the ping-pong effect, the choice of the changing lane is random. On the whole, the fundamental cause is the formation of the driving habit, so the conclusion remains effective under the keep-left rule.

The third question asks us to demonstrate the effectiveness of the solution advised in the first question under an intelligent vehicle control system. Firstly, taking the speed limits into consideration, we build a microscopic traffic simulator model for traffic simulation purposes. Then we implement a METANET model for state prediction, used by an MPC traffic controller.
Afterwards, we certify that the dynamic speed control measure can improve the traffic flow. Lastly, neglecting the safety factor, combining the keep-right rule with dynamic speed control is the best overall solution for accelerating the traffic flow.

Key words: cellular automaton model; Bernoulli process; microscopic traffic simulator model; MPC traffic control

Content
1. Introduction
2. Analysis of the problem
3. Assumption
4. Symbol Definition
5. Models
  5.1 Building of the cellular automaton model
    5.1.1 Verify the effectiveness of the keep right except to pass rule
    5.1.2 Numerical simulation results and discussion
    5.1.3 Conclusion
  5.2 The solving of second question
    5.2.1 The building of the stochastic multi-lane traffic model
    5.2.2 Conclusion
  5.3 Taking an intelligent vehicle system into account
    5.3.1 Introduction of the Intelligent Vehicle Highway Systems
    5.3.2 Control problem
    5.3.3 Results and analysis
    5.3.4 The comprehensive analysis of the result
6. Improvement of the model
  6.1 Strength and weakness
    6.1.1 Strength
    6.1.2 Weakness
  6.2 Improvement of the model
7. Reference

1. Introduction
As is known to all, driving automobiles is essential to us, so driving rules are crucially important. In many countries, such as the USA and China, drivers obey the keep-right-except-to-pass rule: when driving, the rule requires drivers to stay in the right-most lane unless they are passing another vehicle.

2. Analysis of the problem
For the first question, we decide to use a cellular automaton to build models, then analyze the performance of this rule in light and heavy traffic.
Firstly, we mainly use the vehicle density to distinguish light and heavy traffic; secondly, we take the traffic flow and safety as the representative variables that characterize light or heavy traffic; thirdly, we build and analyze a cellular automaton model; finally, we judge the rule by comparing the two driving rules, and then draw conclusions.

3. Assumption
In order to streamline our model we have made several key assumptions:
● The highway of double row three lanes that we study can represent multi-lane freeways.
● The data that we refer to are reasonably representative and descriptive.
● The operation of the highway is not influenced by blizzards or accidental factors.
● Drivers' own abnormal factors, such as drunk driving and fatigue driving, are ignored.
● The operation of the highway intelligent system in our analysis can reflect a real intelligent system.
● In the intelligent vehicle system, the sampled data have high accuracy.

4. Symbol Definition
i — the index of a vehicle
t — the time

5. Models
By analyzing the problem, we decided to propose a solution built on a cellular automaton model.

5.1 Building of the cellular automaton model
Thanks to its simple rules and convenience for computer simulation, the cellular automaton model has been widely used in the study of traffic flow in recent years. Let x_i(t) be the position of vehicle i at time t, v_i(t) the speed of vehicle i at time t, p the random slowing-down probability, and R the proportion of trucks and buses. The distance between vehicle i and the front vehicle at time t is:

gap_i(t) = x_{i-1}(t) − x_i(t) − 1, if the front vehicle is a small vehicle;
gap_i(t) = x_{i-1}(t) − x_i(t) − 3, if the front vehicle is a truck or bus.

5.1.1 Verify the effectiveness of the keep right except to pass rule
In addition to the keep-right-except-to-pass rule, we define a new rule called control rules based on lane speed.
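A minimal single-lane NaSch update consistent with the quantities just defined (positions x_i, speeds v_i, gap_i, random slowing probability p) might look as follows. The maximum speed v_max, ring length L, and initial configuration are illustrative assumptions; the paper's three-lane and lane-changing rules are not included here:

```python
import random

def nasch_step(x, v, L, vmax=5, p=0.3, rng=random):
    """One parallel NaSch update on a ring road of L cells.

    x: positions in ring order, v: speeds; each cell holds at most one car.
    """
    n = len(x)
    # Empty cells ahead of each car, computed from the current (old) positions.
    gaps = [(x[(i + 1) % n] - x[i] - 1) % L for i in range(n)]
    for i in range(n):
        v[i] = min(v[i] + 1, vmax)          # acceleration
        v[i] = min(v[i], gaps[i])           # braking to the available gap
        if v[i] > 0 and rng.random() < p:   # random slowing with probability p
            v[i] -= 1
    for i in range(n):
        x[i] = (x[i] + v[i]) % L            # movement
    return x, v

x, v = [0, 3, 7], [0, 0, 0]
for _ in range(10):
    x, v = nasch_step(x, v, L=20)
```

Because each car brakes to the gap measured before anyone moves, no two cars can ever occupy the same cell, which is the collision-free property the CA rules rely on.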
The concrete explanation of the new rule is as follows: there is no special passing lane under this rule. The speed of the first lane (the far-left lane) is 120–100 km/h (including 100 km/h); the speed of the second lane (the middle lane) is 100–80 km/h (including 80 km/h); the speed of the third lane (the far-right lane) is below 80 km/h. The speeds of the lanes decrease from left to right.

● Lane changing rules based on lane speed control
If a vehicle on the high-speed lane satisfies v < v_control, gap_i^f(t) ≥ min(v_i(t) + 1, v_max), and gap_i^b(t) ≥ gap_safe, the vehicle will turn into the adjacent right lane, and its speed after the lane change remains unchanged, where v_control is the minimum speed of the corresponding lane.

● The application of the NaSch model evolution
Let P_d be the lane-changing probability (taking into account the actual situation that some drivers like driving in a certain lane and will not change lanes on their own initiative), gap_i^f(t) the distance between the vehicle and the nearest front vehicle, and gap_i^b(t) the distance between the vehicle and the nearest following vehicle. In this article, we assume that the minimum safe lane-changing distance gap_safe equals the maximum speed of the following vehicle in the adjacent lane.

● Lane changing rules based on keeping right except to pass
In general, traffic flow going through a passing zone (Fig.
5.1.1) involves three processes: the diverging process (one traffic flow diverging into two flows), the interacting process (interaction between the two flows), and the merging process (the two flows merging into one) [4].

Fig. 5.1.1 Control plan of overtaking process

(1) If a vehicle on the first lane (the passing lane) satisfies gap_i^f(t) ≥ min(v_i(t) + 1, v_max) and gap_i^b(t) ≥ gap_safe, the vehicle will turn into the second lane, and its speed after the lane change remains unchanged.

5.1.2 Numerical simulation results and discussion
In order to facilitate the subsequent discussions, we define the space occupation rate as p = (N_CAR + 3·N_truck) / (3 × L), where N_CAR is the number of small vehicles on the driveway, N_truck is the number of trucks and buses on the driveway, and L is the total length of the road. The vehicle flow volume Q is the number of vehicles passing a fixed point per unit time, Q = N_T / T, where N_T is the number of vehicles observed in time duration T. The average speed is V_a = (1/(N×T)) Σ_i Σ_t v_i^t, where v_i^t is the speed of vehicle i at time t. We take the overtaking ratio P_f — the ratio of the total number of overtakings to the number of vehicles observed — as the evaluation indicator of the safety of the traffic flow. After 20,000 evolution steps, averaging the last 2,000 steps over time, we obtained the following experimental results. In order to eliminate the effect of randomness, we take the systemic average of 20 samples [5].

● Overtaking ratio under different control rules
Because different control conditions of the road produce different overtaking ratios, we first observe the relationships among vehicle density, the proportion of large vehicles, and the overtaking ratio under the different control conditions.

(a) Based on passing-lane control   (b) Based on speed control
Fig. 5.1.3 Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions.

It can be seen from Fig.
5.1.3 that:
(1) When the vehicle density is less than 0.05, the overtaking ratio rises with the vehicle density; when the vehicle density is larger than 0.05, the overtaking ratio decreases as density increases; when the density is greater than 0.12, overtaking becomes difficult because of crowding, so the overtaking ratio is almost 0.
(2) When the proportion of large vehicles is less than 0.5, the overtaking ratio rises with the proportion of large vehicles; at about 0.5 it reaches its peak; beyond 0.5 it decreases as the proportion of large vehicles grows, and under the lane-based control condition the decline is very clear.

● Concrete impact of the different control rules on the overtaking ratio
Fig. 5.1.4 Relationships among vehicle density, proportion of large vehicles and overtaking ratio under different control conditions. (Figures on the left indicate passing-lane control, figures on the right indicate speed control. P_f1 is the overtaking ratio of small vehicles over large vehicles, P_f2 that of small vehicles over small vehicles, P_f3 that of large vehicles over small vehicles, and P_f4 that of large vehicles over large vehicles.)

It can be seen from Fig. 5.1.4 that:
(1) The overtaking ratio of small vehicles over large vehicles under passing-lane control is much higher than under speed control. Under passing-lane control, high-speed small vehicles have to pass low-speed large vehicles via the passing lane, while under speed control small vehicles are designed to travel on the high-speed lane, where there is no low-speed vehicle in front and thus no need to overtake.

● Impact of different control rules on vehicle speed
Fig.
5.1.5 Relationships among vehicle density, proportion of large vehicles and average speed under different control conditions. (Figures on the left indicate passing-lane control, figures on the right indicate speed control. X_a is the average speed of all the vehicles, X_a1 that of all the small vehicles, and X_a2 that of all the buses and trucks.)

It can be seen from Fig. 5.1.5 that:
(1) The average speed decreases as the vehicle density and the proportion of large vehicles increase.
(2) When the vehicle density is less than 0.15, X_a, X_a1 and X_a2 are almost the same under both control conditions.

● Effect of the different control conditions on traffic flow
Fig. 5.1.6 Relationships among vehicle density, proportion of large vehicles and traffic flow under different control conditions. (Figure a1 indicates passing-lane control, figure a2 indicates speed control, and figure b indicates the traffic-flow difference between the two conditions.)

It can be seen from Fig. 5.1.6 that:
(1) When the vehicle density is lower than 0.15 and the proportion of large vehicles is between 0.4 and 1, the traffic flows under the two control conditions are basically the same.
(2) Otherwise, the traffic flow under passing-lane control is slightly larger than under speed control.

5.1.3 Conclusion
In this paper, we have established a three-lane model under different control conditions and studied the overtaking ratio, speed and traffic flow for various control conditions, vehicle densities and proportions of large vehicles.

5.2 The solving of second question
5.2.1 The building of the stochastic multi-lane traffic model
5.2.2 Conclusion
On one hand, from the analysis of the model, when the stress is positive we also consider the jam situation while making the decision. More specifically, if a driver is in a jam situation, applying B(2, P_R(x)) results in a tendency of moving to the right lane for this driver.
However, in reality drivers tend to find an emptier lane in a jam situation. For this reason, we apply a Bernoulli process B(2, 0.7), in which the probability of moving to the right is 0.7 and to the left 0.3, and the conclusion holds under the keep-left-except-to-pass rule. So the fundamental reason is the formation of the driving habit.

5.3 Taking an intelligent vehicle system into account
For the third question, if vehicle transportation on the same roadway were fully under the control of an intelligent system, we improve our proposed solution to perfect the performance of the freeway through extensive analysis.

5.3.1 Introduction of the Intelligent Vehicle Highway Systems
We use the microscopic traffic simulator model for traffic simulation purposes. The MPC traffic controller implemented in Matlab needs a traffic model to predict the states when the speed limits are applied (Fig. 5.3.1). We implement a METANET model for prediction purposes [14].

5.3.2 Control problem
As a constraint, the dynamic speed limits are given maximum and minimum allowed values. The upper bound for the speed limits is 120 km/h, and the lower bound is 40 km/h. For the calculation of the optimal control values, all speed limits are constrained to this range. When the optimal values are found, they are rounded to a multiple of 10 km/h, since this is clearer for human drivers and also technically feasible without large investments.

5.3.3 Results and analysis
When the density is high, it is more difficult to control the traffic, since the mean speed might already be below the control speed. Therefore, simulations are done using densities at which the shock wave can dissolve without control, and densities at which the shock wave remains. For each scenario, five simulations are done for three different cases, each with a duration of one hour.
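The constraint handling described in Section 5.3.2 — clamping each optimal speed limit to [40, 120] km/h and then rounding to a multiple of 10 km/h — can be sketched as follows; the function name and the sample values are ours, not from the paper:

```python
def admissible_speed_limit(v_opt, lo=40, hi=120, step=10):
    """Clamp an optimal speed limit to [lo, hi] km/h and round it to a
    multiple of `step`, so it can be displayed on a sign for human drivers."""
    v = max(lo, min(hi, v_opt))
    return int(round(v / step) * step)

print([admissible_speed_limit(v) for v in (37.2, 63.8, 94.0, 131.4)])
# → [40, 60, 90, 120]
```

In the MPC loop, this post-processing is applied after the optimizer returns the continuous-valued speed limits for each segment.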
The results of the simulations are reported in Tables 5.1, 5.2 and 5.3.
● Enforced speed limits
● Intelligent speed adaptation
For the ISA scenario, the desired free-flow speed is about 100% of the speed limit. It is modeled as a Gaussian distribution with a mean value of 100% of the speed limit and a standard deviation of 5% of the speed limit. Based on this percentage, the influence of the dynamic speed limits is expected to be good [19].

5.3.4 The comprehensive analysis of the result
From the analysis above, we indicate that adopting the intelligent speed control system can effectively decrease travel times; in other words, dynamic speed control measures can improve the traffic flow. Evidently, under the intelligent speed control system, the dynamic speed control measure performs better than the lane speed control mentioned in the first problem, because the intelligent speed control system can provide the optimal speed limit in time.
In addition, it can guarantee safe conditions with all kinds of detection devices and sensors under the intelligent speed system. On the whole, taking into account all the analysis from the first problem onward, in light traffic we can neglect the safety factor with the help of the intelligent speed control system. Thus, in light traffic, we propose a new conclusion different from that of the first problem: the keep-right-except-to-pass rule is more effective than lane speed control. In heavy traffic, sparing no effort to improve the operating efficiency of the freeway, we combine the dynamic speed control measure with the keep-right-except-to-pass rule, concluding that applying dynamic speed control can improve the performance of the freeway.

What we should highlight is that, with the Intelligent Vehicle Highway Systems, we can set different speed limits for different sections of road or different sizes of vehicle. In fact, how freeway traffic operates is extremely complex; thereby, with the Intelligent Vehicle Highway Systems, by adjusting our original solution, we keep it effective for freeway traffic.

6.
Improvement of the model
6.1 Strength and weakness
6.1.1 Strength
● The model is easy to simulate on a computer and can be modified flexibly to reflect actual traffic conditions; moreover, a large number of images make it more visual.
● The results effectively achieve all of the goals we set initially; meanwhile, the conclusion is more persuasive because we used the Bernoulli process.
● We can get more accurate results by applying Matlab.
6.1.2 Weakness
● The relationship between traffic flow and safety is not comprehensively analyzed.
● Since there are many traffic factors and we studied only some of them, our model needs further improvement.
6.2 Improvement of the model
By comparing models under two kinds of traffic rules, we arrived at the efficiency of driving on the right for improving traffic flow in some circumstances. Because too few rules are compared, the conclusion is inadequate. In order to improve accuracy, we further put forward another kind of traffic rule: speed limits for different types of cars. Some vehicles are more likely to be involved in traffic accidents, which brings hidden safety troubles, so we consider different or specific vehicle types separately from the angle of speed limiting in order to reduce the occurrence of traffic accidents; the highway speed-limit signs are shown in Fig. 6.1.

Fig. 6.1

The advantage of the improved model is that it helps improve the running safety of a specific type of vehicle while considering the differences among vehicle types. However, our analysis shows that the rule may reduce the road traffic flow. In the implementation, the V85 speed of each model should be the main reference basis.
In recent years, the V85 models of some researchers for typical countries are shown in Table 6.1 [21]:

Table 6.1 — V85 (85th-percentile speed) models from typical countries
Author | Country | Model
Ottesen and Krammes (2000) | America | V85 = 102.44 − 1.57·DC − 0.012·LDC − 0.01·LC×DC
Andueza (2000) | Venezuela | V85T = 98.25 − 2795/R − 894/Ra + 7.486·LDC + 9.308 [horizontal curve]; V85T = 100.69 − 3032/R + 27.819·L [tangent]
Jessen (2001) | America | V85P = 86.80 + 0.279·V − 0.614·G − 0.00239·ADT [LSD]; V85P = 72.10 + 0.432·V − 0.00212·ADT [NLSD]
Donnell (2001) | America | V85(2) = 78.4 + 0.40·R − 101.40·G − 0.00724·L·T²; V85(3) = 75.1 + 0.76·R − 101.48·G − 0.008369·L·T²; V85(4) = 74.5 + 0.76·R − 101.69·G − 0.00810·L·T²; V85(5) = 83.1 − 2.08·G − 0.00934·L·T²
Bucchi A., Biasuzzi K. and Simone A. (2005) | Italy | V85 = 66.164 − 0.124·DC; V85 = 55.66 − 33.46·E − 0.4·DC

Meanwhile, there are other driving rules, such as speed limits in adverse weather conditions. Such a rule can improve the safety factor of the vehicle to some extent; at the same time, it limits the speed at different levels.

7. Reference
[1] M. Rickert, K. Nagel, M. Schreckenberg, A. Latour, Two lane traffic simulations using cellular automata, Physica A 231 (1996) 534–550.
[20] J.T. Fokkema, Lakshmi Dhevi, Tamil Nadu Traffic Management and Control in Intelligent Vehicle Highway Systems, 18 (2009).
[21] Yang Li, New Variable Speed Control Approach for Freeway. (2011) 1-66.
