Network Working Group                               J. de Oliveira, Ed.
Request for Comments: 4829                            Drexel University
Category: Informational                                JP. Vasseur, Ed.
                                                    Cisco Systems, Inc.
                                                                L. Chen
                                                   Verizon Laboratories
                                                             C. Scoglio
                                                Kansas State University
                                                             April 2007

          Label Switched Path (LSP) Preemption Policies for
                       MPLS Traffic Engineering
Status of This Memo
This memo provides information for the Internet community. It does
not specify an Internet standard of any kind. Distribution of this
memo is unlimited.
Copyright Notice
Copyright (C) The IETF Trust (2007).
IESG Note
This RFC is not a candidate for any level of Internet Standard.  The
IETF disclaims any knowledge of the fitness of this RFC for any
purpose and, in particular, notes that the decision to publish is not
based on IETF review for such things as security, congestion control,
or inappropriate interaction with deployed protocols.  The RFC Editor
has chosen to publish this document at its discretion.  Readers of
this document should exercise caution in evaluating its value for
implementation and deployment.  See RFC 3932 for more information.

Abstract
When the establishment of a higher priority Traffic Engineering Label
Switched Path (TE LSP) requires the preemption of a set of lower
priority TE LSPs, a node has to make a local decision to select which
TE LSPs will be preempted.  The preempted LSPs are then rerouted by
their respective Head-end Label Switch Router (LSR).  This document
presents a flexible policy that can be used to achieve different
objectives: preempt the lowest priority LSPs; preempt the minimum
number of LSPs; preempt the set of TE LSPs that provide the closest
amount of bandwidth to the required bandwidth for the preempting TE
LSPs (to minimize bandwidth wastage); preempt the LSPs that will have
the maximum chance to get rerouted.  Simulation results are given,
and a comparison among several different policies, with respect to
preemption cascading, number of preempted LSPs, priority, wasted
bandwidth, and blocking probability, is also included.
Table of Contents

   1. Motivation
   2. Introduction
   3. LSP Setup Procedure and Preemption
   4. Preemption Cascading
   5. Preemption Heuristic
      5.1. Preempting Resources on a Path
      5.2. Preemption Heuristic Algorithm
   6. Examples
      6.1. Simple Case: Single Link
      6.2. Network Case
   7. Security Considerations
   8. Acknowledgements
   9. Informative References
1. Motivation
The IETF Traffic Engineering Working Group has defined the
requirements and protocol extensions for DiffServ-aware MPLS Traffic Engineering (DS-TE) [RFC3564] [RFC4124]. Several Bandwidth
Constraint models for use with DS-TE have been proposed [RFC4127]
[RFC4128] [RFC4126] and their performance was analyzed with respect
to the use of preemption.
Preemption can be used as a tool to help ensure that high priority
LSPs can always be routed through relatively favorable paths.
Preemption can also be used to implement various prioritized access
policies as well as restoration policies following fault events
[RFC2702].
Although not a mandatory attribute in the traditional IP world,
preemption becomes important in networks using online, distributed
Constrained Shortest Path First (CSPF) strategies for their Traffic
Engineering Label Switched Path (TE LSP) path computation to limit
the impact of bandwidth fragmentation. Moreover, preemption is an
attractive strategy in an MPLS network in which traffic is treated in a differentiated manner and high-importance traffic may be given
special treatment over lower-importance traffic [DEC-PREP, ATM-PREP]. Nevertheless, in the DS-TE approach, whose issues and requirements
are discussed in [RFC3564], the preemption policy is considered an
important piece of the bandwidth reservation and management puzzle,
but no preemption strategy is defined. Note that preemption also
plays an important role in regular MPLS Traffic Engineering
environments (with a single pool of bandwidth).
This document proposes a flexible preemption policy that can be
adjusted in order to give different weight to various preemption
criteria: priority of LSPs to be preempted, number of LSPs to be
preempted, amount of bandwidth preempted, blocking probability. The implications (cascading effect, bandwidth wastage, priority of
preempted LSPs) of selecting a certain order of importance for the
criteria are discussed for the examples given.
2. Introduction
In [RFC2702], issues and requirements for Traffic Engineering in an
MPLS network are highlighted. In order to address both traffic-
oriented and resource-oriented performance objectives, the authors
point out the need for priority and preemption parameters as Traffic Engineering attributes of traffic trunks. The notion of preemption
and preemption priority is defined in [RFC3272], and preemption
attributes are defined in [RFC2702] and [RFC3209].
A traffic trunk is defined as an aggregate of traffic flows belonging to the same class that are placed inside an LSP [RFC3564]. In this
context, preemption is the act of selecting an LSP that will be
removed from a given path in order to give room to another LSP with a higher priority (lower preemption number). More specifically, the
preemption attributes determine whether an LSP with a certain setup
preemption priority can preempt another LSP with a lower holding
preemption priority from a given path, when there is competition for available resources. Note that competing for resources is one
situation in which preemption can be triggered, but other situations may exist, themselves controlled by a policy.
For readability, a number of definitions from [RFC3564] are repeated here:
Class-Type (CT): The set of Traffic Trunks crossing a link that is
governed by a specific set of Bandwidth constraints. CT is used for the purposes of link bandwidth allocation, constraint-based routing, and admission control. A given Traffic Trunk belongs to the same CT on all links.
TE-Class: A pair of:
i. A Class-Type.
ii. A preemption priority allowed for that Class-Type. This means
that an LSP transporting a Traffic Trunk from that Class-Type can use that preemption priority as the set-up priority, as the holding
priority, or both.
By definition, there may be more than one TE-Class using the same CT, as long as each TE-Class uses a different preemption priority. Also, there may be more than one TE-Class with the same preemption
priority, provided that each TE-Class uses a different CT. The
network administrator may define the TE-Classes in order to support
preemption across CTs, to avoid preemption within a certain CT, or to avoid preemption completely, when so desired. To ensure coherent
operation, the same TE-Classes must be configured in every Label
Switched Router (LSR) in the DS-TE domain.
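The configuration rules above (TE-Classes are distinct pairs of
Class-Type and preemption priority, configured identically on every
LSR) can be expressed as a small check.  This is an illustrative
sketch only; the function names and data shapes are assumptions, not
anything prescribed by DS-TE.

```python
# Illustrative check of the TE-Class configuration rules: a TE-Class is
# a (Class-Type, preemption priority) pair; the pairs must be distinct,
# and every LSR in the DS-TE domain must use the same set of pairs.

def te_classes_valid(te_classes):
    """True if all (CT, priority) pairs are distinct."""
    return len(set(te_classes)) == len(te_classes)

def domain_consistent(per_lsr_configs):
    """True if every LSR is configured with the same set of TE-Classes."""
    sets = [frozenset(cfg) for cfg in per_lsr_configs]
    return len(set(sets)) <= 1
```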
As a consequence of a per-TE-Class treatment, the Interior Gateway
Protocol (IGP) needs to advertise separate Traffic Engineering
information for each TE-Class, which consists of the Unreserved
Bandwidth (UB) information [RFC4124]. The UB information will be
used by the routers, checking against the bandwidth constraint model parameters, to decide whether preemption is needed. Details on how
to calculate the UB are given in [RFC4124].
3. LSP Setup Procedure and Preemption
A new LSP setup request has two important parameters: bandwidth and
preemption priority. The set of LSPs to be preempted can be selected by optimizing an objective function that represents these two
parameters, and the number of LSPs to be preempted. More
specifically, the objective function could be any, or a combination, of the following [DEC-PREP, ATM-PREP]:
* Preempt the LSPs that have the least priority (preemption
priority). The Quality of Service (QoS) of high priority traffic
would be better satisfied, and the cascading effect described below can be limited.
* Preempt the least number of LSPs. The number of LSPs that need to be rerouted would be lower.
* Preempt the least amount of bandwidth that still satisfies the
request. Resource utilization could be improved. The preemption
of larger TE LSPs (more than requested) by the newly signaled TE
LSP implies a larger amount of bandwidth to be rerouted, which is
likely to increase the probability of blocking (inability to find a path for some TE LSPs).
* Preempt LSPs that minimize the blocking probability (risk that
preempted TE LSP cannot be rerouted).
After the preemption selection phase is finished, the selected LSPs
are signaled as preempted and the new LSP is established (if a new
path satisfying the constraints can be found). The UB information is then updated via flooding of an IGP-TE update and/or simply pruning
the link where preemption occurred. Figure 1 shows a flowchart that summarizes how each LSP setup request is treated in a preemption-
enabled scenario.
             LSP Setup Request
            (TE-Class i, bw=r)
                    |
                    |
                    v              NO
         UB[TE-Class i] >= r ?  -------> Reject LSP Setup and flood
                    |                    an updated IGP-TE LSA/LSP
                    | YES
                    v              NO
         Preemption Needed ?    -------> Setup LSP/Update UB if a
                    |                    threshold is crossed
                    | YES
                    v
          Preemption   ---->  Setup LSP/Reroute Preempted LSPs
          Algorithm           Update UB

          Figure 1: Flowchart for LSP setup procedure.
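The decision flow of Figure 1 can be sketched as follows.  This is a
minimal, hypothetical sketch: the data structures and the
run_preemption callback are assumptions for illustration, and the
actual signaling (RSVP-TE) and IGP-TE flooding are not modeled.

```python
# Sketch of the Figure 1 decision flow (illustrative only).

def handle_setup_request(unreserved, available_bw, te_class, r,
                         run_preemption):
    """Decide the fate of an LSP setup request of size r.

    unreserved:     dict TE-Class -> Unreserved Bandwidth (UB) on the link
    available_bw:   bandwidth currently free on the link, Abw(l)
    run_preemption: callback implementing the preemption algorithm;
                    returns the list of LSPs chosen for preemption
    """
    if unreserved[te_class] < r:
        # Not admissible at this TE-Class: reject the setup and
        # flood an updated IGP-TE LSA/LSP.
        return ("reject", [])
    if available_bw >= r:
        # Enough free bandwidth: set up the LSP and update UB if a
        # threshold is crossed.
        return ("setup", [])
    # Preemption needed: free the missing r - Abw(l) units, set up the
    # LSP, reroute the preempted LSPs, and update UB.
    victims = run_preemption(r - available_bw)
    return ("setup-after-preemption", victims)
```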
In [DEC-PREP], the authors propose connection preemption policies
that optimize the discussed criteria in a given order of importance: number of LSPs, bandwidth, and priority; bandwidth, priority, and
number of LSPs. The novelty in our approach is the use of an
objective function that can be adjusted by the service provider in
order to stress the desired criteria. No particular criteria order
is enforced. Moreover, a new criterion is added to the objective
function: optimize the blocking probability (to minimize the risk
that an LSP is not rerouted after being preempted).
4. Preemption Cascading
The decision to preempt an LSP may cause other preemptions in the
network.  This is called the preemption cascading effect, and
different cascading levels may result from the preemption of a single
LSP.  The cascading levels are defined in the following manner: when
an LSP is preempted and rerouted without causing any further
preemption, the cascading is said to be of level zero.  However, when
a preempted LSP is rerouted and, in order to be established on the
new route, causes the preemption of other LSPs, the cascading is said
to be of level 1, and so on.
Preemption cascading is not desirable, as it can result in severe
network instabilities; policies that minimize it are therefore of
interest.  In Section 5, a new versatile preemption heuristic is
presented.  In Section 6, preemption simulation results are discussed
and the cascading effect is analyzed.
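The level definition above can be stated recursively.  The sketch
below is a purely illustrative model in which each preemption event
is represented by the list of further preemption events its rerouting
causes.

```python
# Cascading level of a preemption event: 0 if the rerouted LSP preempts
# nothing further, otherwise one more than the deepest chain of
# preemptions it triggers.

def cascading_level(caused_events):
    """caused_events: list of events caused by rerouting, each itself a
    list of the events it causes in turn."""
    if not caused_events:
        return 0
    return 1 + max(cascading_level(c) for c in caused_events)
```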
5. Preemption Heuristic
5.1. Preempting Resources on a Path
It is important to note that once a request for an LSP setup arrives, each LSR along the TE LSP path checks the available bandwidth on its outgoing link. For the links in which the available bandwidth is not enough, the preemption policy needs to be activated in order to
guarantee the end-to-end bandwidth reservation for the new LSP. This is a distributed approach, in which every node on the path is
responsible for running the preemption algorithm and determining
which LSPs would be preempted in order to fit the new request. A
distributed approach may not lead to an optimal solution.
Alternatively, in a centralized approach, a manager entity runs the
preemption policy and determines the best LSPs to be preempted in
order to free the required bandwidth in all the links that compose
the path. The preemption policy would try to select LSPs that
overlap with the path being considered (preempt a single LSP that
overlaps with the route versus preempt a single LSP on every link
that belongs to the route).
Both centralized and distributed approaches have advantages and
drawbacks.  A centralized approach would be more precise, but
requires that the whole network state be stored and updated
accordingly, which raises scalability issues.  In a network where
LSPs are mostly
static, an offline decision can be made to reroute LSPs and the
centralized approach could be appropriate. However, in a dynamic
network in which LSPs are set up and torn down in a frequent manner
because of new TE LSPs, bandwidth increase, reroute due to failure,
etc., the correctness of the stored network state could be
questionable. Moreover, the setup time is generally increased when
compared to a distributed solution. In this scenario, the
distributed approach would bring more benefits, even when resulting
in a non-optimal solution (the gain in optimality of a centralized
approach over a distributed approach depends on many factors: network
topology, traffic matrix, TE strategy, etc.).  A distributed approach
is also easier to implement, given the distributed nature of the
current Internet protocols.
Since the current Internet routing protocols are essentially
distributed, a decentralized approach was selected for the LSP
preemption policy. The parameters required by the new preemption
policies are currently available for OSPF and Intermediate System to Intermediate System (IS-IS).
5.2. Preemption Heuristic Algorithm
Consider a request for a new LSP setup with bandwidth b and setup
preemption priority p. When preemption is needed, due to lack of
available resources, the preemptable LSPs will be chosen among the
ones with lower holding preemption priority (higher numerical value) in order to fit r=b-Abw(l). The variable r represents the actual
bandwidth that needs to be preempted (the requested, b, minus the
available bandwidth on link l: Abw(l)).
L is the set of active LSPs having a holding preemption priority
lower (numerically higher) than the setup priority p of the new
request, in order to fit r = b - Abw(l).  The variable r represents
the actual bandwidth that needs to be preempted: the requested
bandwidth, b, minus the bandwidth available on link l, Abw(l).
In order to represent a cost for each preemption priority, an
associated cost y(l) inversely related to the holding preemption
priority p(l) is defined. For simplicity, a linear relation
y(l) = 8 - p(l) is chosen.  y is a cost vector with one component
y(l) per LSP l in L, and b is the corresponding vector of reserved
bandwidths, with components b(l).

Concerning the objective function, four main objectives can be
pursued in the selection of preempted LSPs:
* minimize the priority of preempted LSPs,
* minimize the number of preempted LSPs,
* minimize the preempted bandwidth,
* minimize the blocking probability.
To give each service provider the widest choice of overall objective,
the following equation was defined (for simplicity, chosen as a
weighted sum of the above-mentioned criteria):

   H(l) = alpha y(l) + beta 1/b(l) + gamma (b(l)-r)^2 + theta b(l)
In this equation:
- alpha y(l) captures the cost of preempting high priority LSPs.
- beta 1/b(l) penalizes the preemption of low bandwidth LSPs,
capturing the cost of preempting a large number of LSPs.
- gamma (b(l)-r)^2 captures the cost of preemption of LSPs that are
much larger or much smaller than r.
- theta b(l) captures the cost of preempting large LSPs.
Coefficients alpha, beta, gamma, and theta can be chosen to emphasize one or more components of H.
The coefficient theta is defined such that theta = 0 if gamma > 0.
This is because, when trying to minimize the blocking probability of
preempted LSPs, the heuristic gives preference to preempting several
small LSPs (therefore gamma, the weight for minimizing the preempted
bandwidth by selecting LSPs with an amount of bandwidth similar to
the requested amount, needs to be set to zero).  The selection of
several small LSPs in a normally loaded portion of the network
increases the chance that such LSPs are successfully rerouted.
Moreover, selecting several small LSPs need not imply preempting much
more than the required bandwidth (resulting in low bandwidth
wastage), as will be seen in the discussed examples.  When preemption
is to happen in a heavily loaded portion of the network, to minimize
blocking probability, the heuristic will instead select fewer LSPs
for preemption in order to increase the chance of rerouting.
H is calculated for each LSP in L.  The LSPs to be preempted are
chosen as the ones with the smallest H that together add enough
bandwidth to accommodate r.  When sorting LSPs by H, LSPs with the
same value of H are ordered by bandwidth b, in increasing order.  For
each LSP in such a repeated-H sequence, the algorithm checks whether
the bandwidth b assigned to only that LSP is enough to satisfy r.  If
there is no such LSP, it checks whether the bandwidth of each of
those LSPs, added to the previously preempted LSPs' bandwidth, is
enough to satisfy r.  If that is not true for any LSP in the
sequence, the algorithm preempts the LSP that has the largest amount
of bandwidth in the sequence, and keeps preempting in decreasing
order of b until r is satisfied or the sequence is exhausted.  If the
sequence is exhausted and r is not satisfied, the algorithm again
selects LSPs to be preempted based on the increasing order of H.
More details on the algorithm are given in [PREEMPTION].
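As a concreteness check, the selection procedure described above can
be sketched in Python.  This is a simplified, hypothetical
implementation (the authoritative description is the prose above and
[PREEMPTION]); the sample data is Table 1 from the single-link
example of Section 6.1.

```python
# Simplified sketch of the Section 5.2 selection procedure, including
# the repeated-H tie-handling rule described in the text.

def cost_H(lsp, r, alpha, beta, gamma, theta):
    """H(l) = alpha y(l) + beta 1/b(l) + gamma (b(l)-r)^2 + theta b(l),
    with y(l) = 8 - p(l)."""
    b, y = lsp["b"], 8 - lsp["p"]
    return alpha * y + beta / b + gamma * (b - r) ** 2 + theta * b

def select_preempted(lsps, r, alpha=0.0, beta=0.0, gamma=0.0, theta=0.0):
    """Pick LSPs with the smallest H until at least r bandwidth is freed."""
    H = lambda l: cost_H(l, r, alpha, beta, gamma, theta)
    scored = sorted(lsps, key=lambda l: (H(l), l["b"]))  # ties: increasing b
    preempted, freed, i = [], 0, 0
    while i < len(scored) and freed < r:
        # Gather the run of candidates sharing the same H value.
        run = [scored[i]]
        while i + len(run) < len(scored) and H(scored[i + len(run)]) == H(run[0]):
            run.append(scored[i + len(run)])
        i += len(run)
        # Prefer a single LSP in the run that satisfies r by itself, or
        # on top of what has already been preempted.
        hit = next((l for l in run if l["b"] >= r), None) \
            or next((l for l in run if freed + l["b"] >= r), None)
        if hit:
            preempted.append(hit)
            freed += hit["b"]
        else:
            # Otherwise consume the run in decreasing order of bandwidth.
            for l in sorted(run, key=lambda l: -l["b"]):
                preempted.append(l)
                freed += l["b"]
                if freed >= r:
                    break
    return preempted, freed

# Table 1 of Section 6.1: bandwidth b (Mbps) and holding priority p.
TABLE_1 = [
    {"name": "L1", "b": 20, "p": 1}, {"name": "L2", "b": 10, "p": 2},
    {"name": "L3", "b": 60, "p": 3}, {"name": "L4", "b": 25, "p": 4},
    {"name": "L5", "b": 20, "p": 5}, {"name": "L6", "b": 1, "p": 6},
    {"name": "L7", "b": 75, "p": 7}, {"name": "L8", "b": 45, "p": 5},
    {"name": "L9", "b": 100, "p": 3}, {"name": "L10", "b": 5, "p": 6},
    {"name": "L11", "b": 40, "p": 4}, {"name": "L12", "b": 85, "p": 5},
    {"name": "L13", "b": 50, "p": 2}, {"name": "L14", "b": 20, "p": 3},
    {"name": "L15", "b": 70, "p": 4}, {"name": "L16", "b": 25, "p": 7},
]
```

With r=175 and the priority-only configuration of Section 6.1
(alpha=1, other coefficients zero), this sketch selects L6, L7, L10,
L12, and L16, freeing 191 bandwidth units; with beta=1 only, it
selects L9 and L12 (185 units), matching the worked examples there.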
When the objective is to minimize blocking, the heuristic will follow two options on how to calculate H:
* If the link in which preemption is to happen is normally loaded,
several small LSPs will be selected for preemption using H(l)=
alpha y(l) + theta b(l).
* If the link is overloaded, few LSPs are selected using H(l)= alpha y(l) + beta 1/b(l).
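The two load-dependent forms of H can be captured by a small
coefficient chooser.  The numeric values below are the HBlock
settings used in the simulations of Section 6.2, not mandated
defaults.

```python
# Coefficient choice when the objective is minimizing blocking
# probability (values taken from the HBlock configuration of
# Section 6.2; illustrative, not prescriptive).

def blocking_mode_coefficients(link_overloaded):
    if link_overloaded:
        # Few LSPs selected: H(l) = alpha y(l) + beta 1/b(l)
        return {"alpha": 1.0, "beta": 10.0, "gamma": 0.0, "theta": 0.0}
    # Several small LSPs selected: H(l) = alpha y(l) + theta b(l)
    return {"alpha": 1.0, "beta": 0.0, "gamma": 0.0, "theta": 0.01}
```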
6. Examples
6.1. Simple Case: Single Link
We first consider a very simple case, in which the path considered
for preemption is composed of a single hop.  The objective of this
example is to illustrate how the heuristic works. In the next
section, we will study a more complex case in which the preemption
policies are being tested on a network.
Consider a link with 16 LSPs with reserved bandwidth b in Mbps,
preemption holding priority p, and cost y, as shown in Table 1. In
this example, 8 TE-Classes are active. The preemption here is being performed on a single link as an illustrative example.
------------------------------------------------------------------
LSP             L1    L2    L3    L4    L5    L6    L7    L8
------------------------------------------------------------------
Bandwidth (b)   20    10    60    25    20     1    75    45
Priority (p)     1     2     3     4     5     6     7     5
Cost (y)         7     6     5     4     3     2     1     3
------------------------------------------------------------------
LSP             L9    L10   L11   L12   L13   L14   L15   L16
------------------------------------------------------------------
Bandwidth (b)  100     5    40    85    50    20    70    25
Priority (p)     3     6     4     5     2     3     4     7
Cost (y)         5     2     4     3     6     5     4     1
------------------------------------------------------------------

             Table 1: LSPs in the considered link.
A request for an LSP establishment arrives with r=175 Mbps and p=0
(highest possible priority, which implies that all LSPs with p>0 in
Table 1 will be considered when running the algorithm). Assume
Abw(l)=0.
If priority is the only important criterion, the network operator
configures alpha=1, beta=gamma=theta=0. In this case, LSPs L6, L7,
L10, L12, and L16 are selected for preemption, freeing 191 bandwidth units to establish the high-priority LSP. Note that 5 LSPs were
preempted, but all with a priority level between 5 and 7.
In a network in which rerouting is an expensive task to perform (and the number of rerouted TE LSPs should be as small as possible), one
might prefer to set beta=1 and alpha=gamma=theta=0. LSPs L9 and L12 would then be selected for preemption, adding up to 185 bandwidth
units (less wastage than the previous case). The priorities of the
selected LSPs are 3 and 5 (which means that they might themselves
preempt some other LSPs when rerouted).
Suppose the network operator decides that it is more appropriate to
configure alpha=1, beta=10, gamma=0, theta=0 (the parameters were set to values that would balance the weight of each component, namely
priority and number, in the cost function), because in this network
rerouting is very expensive, LSP priority is important, but bandwidth is not a critical issue. In this case, LSPs L7, L12, and L16 are
selected for preemption. This configuration results in a smaller
number of preempted LSPs when compared to the first case, and the
priority levels are kept between 5 and 7.
To take into account the number of LSPs preempted, the preemption
priority, and the amount of bandwidth preempted, the network operator may set alpha > 0, beta > 0, and gamma > 0. To achieve a balance
among the three components, the parameters need to be normalized.
Aiming for a balance, the parameters could be set as alpha=1, beta=10 (bringing the term 1/b(l) closer to the other parameters), and
gamma=0.001 (bringing the value of the term (b(l)-r)^2 closer to the
other parameters).  LSPs L7 and L9 are selected for preemption,
resulting in exactly 175 bandwidth units and with priorities 3 and 7
(note that fewer LSPs are preempted, but they have higher priority,
which may result in a cascading effect).
If the minimization of the blocking probability is the criterion of
most interest, the cost function could be configured with theta=1,
alpha=beta=gamma=0. In that case, several small LSPs are selected
for preemption: LSPs L2, L4, L5, L6, L7, L10, L14, and L16. Their
preemption will free 181 Mbps in this link, and because the selected LSPs have small bandwidth requirement there is a good chance that
each of them will find a new route in the network.
From the above example, it can be observed that when the priority was the highest concern and the number of preempted LSPs was not an
issue, 5 LSPs with the lowest priority were selected for preemption. When only the number of LSPs was an issue, the minimum number of LSPs was selected for preemption: 2, but the priority was higher than in
the previous case. When priority and number were important factors
and a possible waste of bandwidth was not an issue, 3 LSPs were
selected, adding more bandwidth than requested, but still with low
preemption priority. When considering all the parameters but the
blocking probability, the smallest set of LSPs was selected: 2,
adding just enough bandwidth, 175 Mbps, and with priority levels 3
and 7.
When the blocking probability was the criterion of interest, several (8) small LSPs were preempted. The bandwidth wastage is low, but the number of rerouting events will increase. Given the bandwidth
requirement of the preempted LSPs, it is expected that the chances of finding a new route for each LSP will be high.
6.2. Network Case
For these experiments, we consider a 150-node topology with an
average network connectivity of 3; 10% of the nodes have a degree of
connectivity of 6.  10% of the links are OC3, 70% are OC48, and 20%
are OC192.
Two classes of TE LSPs are in use: Voice LSPs and Data Internet/VPN
LSPs. For each class of TE LSP, the set of preemptions (and the
proportion of LSPs for each preemption) and the size distributions
are as follows (a total of T LSPs is considered):
T: total number of TE LSPs in the network (T = 18,306 LSPs)
Voice:
Number: 20% of T
Preemption: 0, 1 and 2
Size: uniform distribution between 30M and 50M
Internet/VPN TE:

Number: 4% of T
Preemption: 3
Size: uniform distribution between 20M and 50M

Number: 8% of T
Preemption: 4
Size: uniform distribution between 15M and 40M

Number: 8% of T
Preemption: 5
Size: uniform distribution between 10M and 20M

Number: 20% of T
Preemption: 6
Size: uniform distribution between 1M and 20M

Number: 40% of T
Preemption: 7
Size: uniform distribution between 1K and 1M
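The LSP mix above can be sampled with a short generator.  This is a
hypothetical helper sketching the simulation setup (sizes in Mbps,
drawn uniformly within each class's range; Voice priorities are
assumed equally likely among 0, 1, and 2, which the text does not
specify).

```python
import random

# Sketch reproducing the simulated LSP mix of this section.

MIX = [
    # (fraction of T, allowed preemption priorities, size range in Mbps)
    (0.20, (0, 1, 2), (30.0, 50.0)),   # Voice
    (0.04, (3,),      (20.0, 50.0)),   # Internet/VPN TE
    (0.08, (4,),      (15.0, 40.0)),
    (0.08, (5,),      (10.0, 20.0)),
    (0.20, (6,),      (1.0, 20.0)),
    (0.40, (7,),      (0.001, 1.0)),   # 1K to 1M
]

def generate_lsps(total=18306, seed=0):
    rng = random.Random(seed)
    lsps = []
    for fraction, priorities, (lo, hi) in MIX:
        for _ in range(round(fraction * total)):
            lsps.append({"p": rng.choice(priorities),
                         "b": rng.uniform(lo, hi)})
    return lsps
```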
LSP setup requests arise mainly from network failures: a link or a
node fails, and the affected LSPs are rerouted.
The network failure events were simulated with two functions:
- Constant: 1 failure chosen randomly among the set of links every 1 hour.
- Poisson process with interarrival average = 1 hour.
Table 2 shows the results for simulations with constant failure. The simulations were run with the preemption heuristic configured to
balance different criteria (left side of the table), and then with
different preemption policies that consider the criteria in a given
order of importance rather than balancing them (right side of the
table).
The proposed heuristic was configured to balance the following
criteria:
HPB: The heuristic with priority and bandwidth wastage as the most
important criteria (alpha=10, beta=0, gamma=0.001, theta=0).
HBlock: The heuristic minimizing the blocking probability (normally
loaded links: alpha=1, beta=0, gamma=0, theta=0.01; heavily loaded
links: alpha=1, beta=10, gamma=0, theta=0).
HNB: The heuristic with number of preemptions and wasted bandwidth in consideration (alpha=0, beta=10, gamma=0.001, theta=0).
Other algorithms that consider the criteria in a given order of
importance:
P: Sorts candidate LSPs by priority only.
PN: Sorts the LSPs by priority, and for cases in which the priority
is the same, orders those LSPs by decreasing bandwidth (selects
larger LSPs for preemption in order to minimize number of preempted
LSPs).
PB: Sorts the LSPs by priority, and for LSPs with the same priority,
sorts those by increasing bandwidth (selects smaller LSPs in order to
reduce bandwidth wastage).
-----------------------------------------------------------------
                |      Heuristic        |   Other algorithms    |
                -------------------------------------------------
                |  HPB  |HBlock |  HNB  |   P   |  PN   |  PB   |
-----------------------------------------------------------------
Need to be      |  532  |  532  |  532  |  532  |  532  |  532  |
Rerouted        |       |       |       |       |       |       |
-----------------------------------------------------------------
Preempted       |  612  |  483  |  619  |  504  |  477  |  598  |
-----------------------------------------------------------------
Rerouted        |467|76%|341|73%|475|77%|347|69%|335|70%|436|73%|
Blocked         |145|24%|130|27%|144|23%|157|31%|142|30%|162|27%|
-----------------------------------------------------------------
Max Cascading   |  4.5  |   2   |   5   |  2.75 |   2   |  2.75 |
-----------------------------------------------------------------
Wasted Bandwidth|       |       |       |       |       |       |
AVR (Mbps)      |  6638 |  532  |  6479 |  8247 |  8955 |  6832 |
Worst Case(Mbps)| 35321 | 26010 | 36809 | 28501 | 31406 | 23449 |
-----------------------------------------------------------------
Priority        |       |       |       |       |       |       |
Average         |   6   |  6.5  |  5.8  |  6.6  |  6.6  |  6.6  |
Worst Case      |  1.5  |  3.8  |  1.2  |  3.8  |  3.8  |  3.8  |
-----------------------------------------------------------------
Extra Hops      |       |       |       |       |       |       |
Average         |  0.23 |  0.25 |  0.22 |  0.25 |  0.25 |  0.23 |
Worst Case      |  3.25 |   3   |  3.25 |   3   |   3   |  2.75 |
-----------------------------------------------------------------

     Table 2: Simulation results for constant network failure:
              1 random failure every hour.
From Table 2, we can conclude that, among the heuristic results (HPB,
HBlock, HNB), HBlock resulted in the smallest number of LSPs being
preempted.  More importantly, it also resulted in the overall
smallest rejection rate and smallest average wasted bandwidth (and
the second smallest overall worst-case wasted bandwidth).

Although HBlock does not explicitly try to minimize the number of
preempted LSPs, it ends up doing so, because it mostly preempts LSPs
with lower priority and therefore does not propagate cascading much
further.  Its cascading was the overall lowest (preemption caused at
most two levels of further preemption, which was also the case for
policy PN).  The average and worst-case preempted priorities were
very satisfactory (mostly lowest-priority LSPs were preempted, as
with the other algorithms P, PN, and PB).
When HPB was in use, more LSPs were preempted as a consequence of the
higher cascading effect.  That is due to the heuristic's choice of
preempting LSPs that are very similar in bandwidth size to the
bandwidth size of the preemptor LSP (which can result in preempting a
higher priority LSP and therefore causing cascading).  The wasted
bandwidth was reduced when compared to the other algorithms (P, PN,
PB).
When HNB was used, cascading was higher than in the other cases,
because LSPs with higher priority could be preempted.  When compared
to P, PN, or PB, the heuristic HNB preempted more LSPs (in fact, it
preempted the largest number of LSPs overall, clearly showing the
cascading effect), but the average wasted bandwidth was smaller,
although not as small as HBlock's (the HNB heuristic tries to preempt
a single LSP, meaning it will preempt LSPs that have a reserved
bandwidth similar to the actual bandwidth needed; the algorithm is
not always successful, because such a match may not exist, and in
that case the wasted bandwidth can be high).  The preempted priority
was the highest on average and in the worst case, which also explains
why the cascading level was the highest (the heuristic selects LSPs
for preemption without looking at their priority levels).  In
summary, this policy resulted in poor performance.
Policy PN resulted in the smallest number of preempted LSPs overall
and a small number of LSPs that were not successfully rerouted.
Cascading is low, but bandwidth wastage is very high (the overall
highest).  Moreover, in several cases in which rerouting happened on
portions of the network that were underloaded, the heuristic HBlock
preempted a smaller number of LSPs than PN.
Policy P selects a larger number of low-priority LSPs for preemption
(when compared to PN), and therefore it is able to successfully
reroute fewer LSPs than HBlock, HPB, HNB, or PN.  The bandwidth
wastage is also higher than with any of the heuristic configurations
or with PB, and it could be worse if the network had LSPs with low
priority and large bandwidth, which is not the case.

Policy PB, when compared to PN, resulted in a larger number of
preempted LSPs and an overall larger number of blocked LSPs (not
rerouted) due to preemption.  Cascading was slightly higher.  Since
the selected LSPs have low priority, they are not able to preempt
much further and are blocked when the links are congested.  Bandwidth
wastage was smaller, since the policy tries to minimize wastage, and
the preempted priority was about the same on average and in the worst
case.
The simulation results show that when preemption is based on
priority, cascading is not critical since the preempted LSPs will not be able to propagate preemption much further. When the number of
LSPs is considered, fewer LSPs are preempted and the chances of
rerouting increases. When bandwidth wastage is considered, smaller de Oliveira, et al. Informational [Page 15]
LSPs are preempted in each link and the wasted bandwidth is low. The heuristic seems to combine these features, yielding the best results, especially in the case of blocking probability. When the heuristic
was configured to minimize blocking probability (HBlock), small LSPs with low priority were selected for preemption on normally loaded
links and fewer (larger) LSPs with low priority were selected on
congested links. Due to their low priority, cascading was not an
issue. Several LSPs were selected for preemption, but the rate of
LSPs that were not successfully rerouted was the lowest. Since the
LSPs are small, it is easier to find a new route in the network.
When selecting LSPs on a congested link, fewer, larger LSPs are
selected, improving load balance. Moreover, the bandwidth wastage
was the lowest overall. In summary, the heuristic is very flexible
and can be configured according to the network provider's best
interests regarding the criteria considered.
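The trade-off the heuristic balances, preempted priority versus
number of preempted LSPs versus bandwidth wastage, can be sketched as
a per-LSP cost with operator-tunable weights. The weights,
normalization, and greedy selection below are illustrative
assumptions, not the exact heuristic defined earlier in this
document:

```python
def preemption_cost(priority, bandwidth, required_bw,
                    w_priority=1.0, w_count=1.0, w_waste=1.0):
    """Illustrative per-LSP preemption cost; lower cost = better
    candidate.  MPLS-TE holding priorities run 0 (most important) to
    7 (least), so (7 - priority) penalizes preempting important LSPs;
    the constant 1.0 per LSP stands in for the 'fewer preempted LSPs'
    criterion; |bandwidth - required_bw| penalizes wastage."""
    return (w_priority * (7 - priority)
            + w_count * 1.0
            + w_waste * abs(bandwidth - required_bw))

def select_for_preemption(lsps, required_bw):
    """Greedy sketch: preempt lowest-cost LSPs until required_bw is
    freed.  Each lsp is a dict with 'id', 'priority', and 'bw' keys
    (hypothetical field names for this example)."""
    freed, chosen = 0.0, []
    for lsp in sorted(lsps, key=lambda l: preemption_cost(
            l["priority"], l["bw"], required_bw)):
        if freed >= required_bw:
            break
        chosen.append(lsp["id"])
        freed += lsp["bw"]
    return chosen, freed
```

Raising w_waste steers the selection toward the PB-like behavior
described above, while raising w_priority steers it toward preempting
only the lowest-priority LSPs; this is the kind of knob the text
refers to when it calls the heuristic configurable.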
In several cases, the failure of a link resulted in no preemption at
all (all LSPs were able to find an alternate path in the network), or
in the preemption of very few LSPs followed by their successful
rerouting, with no cascading effect.
It is also important to note that for all policies in use, the number of extra hops when LSPs are rerouted was not critical, showing that
preempted LSPs can be rerouted on a path with the same length or a
path that is slightly longer in number of hops.
7. Security Considerations
The practice described in this document does not raise specific
security issues beyond those of existing TE.
8. Acknowledgements
We would like to acknowledge the input and helpful comments from
Francois Le Faucheur (Cisco Systems) and George Uhl (Swales
Aerospace).
9. Informative References
[ATM-PREP] Poretsky, S. and Gannon, T., "An Algorithm for
Connection Precedence and Preemption in Asynchronous
Transfer Mode (ATM) Networks", Proceedings of IEEE ICC 1998.
[DEC-PREP] Peyravian, M. and Kshemkalyani, A. D., "Decentralized Network Connection Preemption Algorithms", Computer
Networks and ISDN Systems, vol. 30 (11), pp. 1029-1043, June 1998.
[PREEMPTION] de Oliveira, J. C. et al., "A New Preemption Policy for DiffServ-Aware Traffic Engineering to Minimize
Rerouting", Proceedings of IEEE INFOCOM 2002.
[RFC2702] Awduche, D., Malcolm, J., Agogbua, J., O'Dell, M., and J. McManus, "Requirements for Traffic Engineering Over MPLS", RFC 2702, September 1999.
[RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan,
V., and G. Swallow, "RSVP-TE: Extensions to RSVP for
LSP Tunnels", RFC 3209, December 2001.
[RFC3272] Awduche, D., Chiu, A., Elwalid, A., Widjaja, I., and X. Xiao, "Overview and Principles of Internet Traffic
Engineering", RFC 3272, May 2002.
[RFC3564] Le Faucheur, F. and W. Lai, "Requirements for Support
of Differentiated Services-aware MPLS Traffic
Engineering", RFC 3564, July 2003.
[RFC4124] Le Faucheur, F., "Protocol Extensions for Support of
Diffserv-aware MPLS Traffic Engineering", RFC 4124,
June 2005.
[RFC4126] Ash, J., "Max Allocation with Reservation Bandwidth
Constraints Model for Diffserv-aware MPLS Traffic
Engineering & Performance Comparisons", RFC 4126,
June 2005.
[RFC4127] Le Faucheur, F., "Russian Dolls Bandwidth Constraints
Model for Diffserv-aware MPLS Traffic Engineering",
RFC 4127, June 2005.
[RFC4128] Lai, W., "Bandwidth Constraints Models for
Differentiated Services (Diffserv)-aware MPLS Traffic
Engineering: Performance Evaluation", RFC 4128,
June 2005.
Authors' Addresses
Jaudelice C. de Oliveira (editor)
Drexel University
3141 Chestnut Street (ECE Dept.)
Philadelphia, PA 19104
USA
EMail: jau@ece.drexel.edu
JP Vasseur (editor)
Cisco Systems, Inc.
1414 Massachusetts Avenue
Boxborough, MA 01719
USA
EMail: jpv@cisco.com
Leonardo Chen
Verizon Laboratories
40 Sylvan Rd. LA0MS55
Waltham, MA 02451
USA
EMail: leonardo.c.chen@verizon.com
Caterina Scoglio
Kansas State University
2061 Rathbone Hall
Manhattan, Kansas 66506-5204
USA
EMail: caterina@eece.ksu.edu
Full Copyright Statement
Copyright (C) The IETF Trust (2007).
This document is subject to the rights, licenses and restrictions
contained in BCP 78, and except as set forth therein, the authors
retain all their rights.
This document and the information contained herein are provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Intellectual Property
The IETF takes no position regarding the validity or scope of any
Intellectual Property Rights or other rights that might be claimed to pertain to the implementation or use of the technology described in
this document or the extent to which any license under such rights
might or might not be available; nor does it represent that it has
made any independent effort to identify any such rights. Information on the procedures with respect to rights in RFC documents can be
found in BCP 78 and BCP 79.
Copies of IPR disclosures made to the IETF Secretariat and any
assurances of licenses to be made available, or the result of an
attempt made to obtain a general license or permission for the use of such proprietary rights by implementers or users of this
specification can be obtained from the IETF on-line IPR repository at http://www.ietf.org/ipr.
The IETF invites any interested party to bring to its attention any
copyrights, patents or patent applications, or other proprietary
rights that may cover technology that may be required to implement
this standard. Please address the information to the IETF at
ietf-ipr@ietf.org.
Acknowledgement
Funding for the RFC Editor function is currently provided by the
Internet Society.