Random Early Detection Gateways for Congestion Avoidance

Sally Floyd and Van Jacobson

Lawrence Berkeley Laboratory

University of California

floyd@ee.lbl.gov

van@ee.lbl.gov

To appear in the August 1993 IEEE/ACM Transactions on Networking

Abstract

This paper presents Random Early Detection (RED) gateways for congestion avoidance in packet-switched networks. The gateway detects incipient congestion by computing the average queue size. The gateway could notify connections of congestion either by dropping packets arriving at the gateway or by setting a bit in packet headers. When the average queue size exceeds a preset threshold, the gateway drops or marks each arriving packet with a certain probability, where the exact probability is a function of the average queue size.

RED gateways keep the average queue size low while allowing occasional bursts of packets in the queue. During congestion, the probability that the gateway notifies a particular connection to reduce its window is roughly proportional to that connection's share of the bandwidth through the gateway. RED gateways are designed to accompany a transport-layer congestion control protocol such as TCP. The RED gateway has no bias against bursty traffic and avoids the global synchronization of many connections decreasing their window at the same time. Simulations of a TCP/IP network are used to illustrate the performance of RED gateways.

1 Introduction

In high-speed networks with connections with large delay-bandwidth products, gateways are likely to be designed with correspondingly large maximum queues to accommodate transient congestion. In the current Internet, the TCP transport protocol detects congestion only after a packet has been dropped at the gateway. However, it would clearly be undesirable to have large queues (possibly on the order of a delay-bandwidth product) that were full much of the time; this would significantly increase the average delay in the network. Therefore, with increasingly high-speed networks, it is increasingly important to have mechanisms that keep throughput high but average queue sizes low.

In the absence of explicit feedback from the gateway, there are a number of mechanisms that have been proposed for transport-layer protocols to maintain high throughput and low delay in the network. Some of these proposed mechanisms are designed to work with current gateways [15, 23, 31, 33, 34], while other mechanisms are coupled with gateway scheduling algorithms that require per-connection state in the gateway [20, 22]. In the absence of explicit feedback from the gateway, transport-layer protocols could infer congestion from the estimated bottleneck service time, from changes in throughput, from changes in end-to-end delay, as well as from packet drops or other methods. Nevertheless, the view of an individual connection is limited by the timescales of the connection, the traffic pattern of the connection, the lack of knowledge of the number of congested gateways, the possibilities of routing changes, as well as by other difficulties in distinguishing propagation delay from persistent queueing delay.

The most effective detection of congestion can occur in the gateway itself. The gateway can reliably distinguish between propagation delay and persistent queueing delay. Only the gateway has a unified view of the queueing behavior over time; the perspective of individual connections is limited by the packet arrival patterns for those connections. In addition, a gateway is shared by many active connections with a wide range of roundtrip times, tolerances of delay, throughput requirements, etc.; decisions about the duration and magnitude of transient congestion to be allowed at the gateway are best made by the gateway itself.

The method of monitoring the average queue size at the gateway, and of notifying connections of incipient congestion, is based on the assumption that it will continue to be useful to have queues at the gateway where traffic from a number of connections is multiplexed together, with FIFO scheduling. Not only is FIFO scheduling useful for sharing delay among connections, reducing delay for a particular connection during its periods of burstiness [4], but it scales well and is easy to implement efficiently. In an alternate approach, some congestion control mechanisms that use variants of Fair Queueing [20] or hop-by-hop flow control schemes [22] propose that the gateway scheduling algorithm make use of per-connection state for every active connection. We would suggest instead that per-connection gateway mechanisms should be used only in those circumstances where gateway scheduling mechanisms without per-connection mechanisms are clearly inadequate.

The DECbit congestion avoidance scheme [18], described later in this paper, is an early example of congestion detection at the gateway; DECbit gateways give explicit feedback when the average queue size exceeds a certain threshold. This paper proposes a different congestion avoidance mechanism at the gateway, RED (Random Early Detection) gateways, with somewhat different methods for detecting congestion and for choosing which connections to notify of this congestion.

While the principles behind RED gateways are fairly general, and RED gateways can be useful in controlling the average queue size even in a network where the transport protocol cannot be trusted to be cooperative, RED gateways are intended for a network where the transport protocol responds to congestion indications from the network. The gateway congestion control mechanism in RED gateways simplifies the congestion control job required of the transport protocol, and should be applicable to transport-layer congestion control mechanisms other than the current version of TCP, including protocols with rate-based rather than window-based flow control.

However, some aspects of RED gateways are specifically targeted to TCP/IP networks. The RED gateway is designed for a network where a single marked or dropped packet is sufficient to signal the presence of congestion to the transport-layer protocol. This is different from the DECbit congestion control scheme, where the transport-layer protocol computes the fraction of arriving packets that have the congestion indication bit set.

In addition, the emphasis on avoiding the global synchronization that results from many connections reducing their windows at the same time is particularly relevant in a network with 4.3-Tahoe BSD TCP [14], where each connection goes through Slow-Start, reducing the window to one, in response to a dropped packet. In the DECbit congestion control scheme, for example, where each connection's response to congestion is less severe, it is also less critical to avoid this global synchronization.

RED gateways can be useful in gateways with a range of packet-scheduling and packet-dropping algorithms. For example, RED congestion control mechanisms could be implemented in gateways with drop preference, where packets are marked as either "essential" or "optional", and "optional" packets are dropped first when the queue exceeds a certain size. Similarly, for a gateway with separate queues for realtime and non-realtime traffic, for example, RED congestion control mechanisms could be applied to the queue for one of these traffic classes.

The RED congestion control mechanisms monitor the average queue size for each output queue, and, using randomization, choose connections to notify of that congestion. Transient congestion is accommodated by a temporary increase in the queue. Longer-lived congestion is reflected by an increase in the computed average queue size, and results in randomized feedback to some of the connections to decrease their windows. The probability that a connection is notified of congestion is proportional to that connection's share of the throughput through the gateway.

Gateways that detect congestion before the queue overflows are not limited to packet drops as the method for notifying connections of congestion. RED gateways can mark a packet by dropping it at the gateway or by setting a bit in the packet header, depending on the transport protocol. When the average queue size exceeds a maximum threshold, the RED gateway marks every packet that arrives at the gateway. If RED gateways mark packets by dropping them, rather than by setting a bit in the packet header, when the average queue size exceeds the maximum threshold, then the RED gateway controls the average queue size even in the absence of a cooperating transport protocol.

One advantage of a gateway congestion control mechanism that works with current transport protocols, and that does not require that all gateways in the internet use the same gateway congestion control mechanism, is that it could be deployed gradually in the current Internet. RED gateways are a simple mechanism for congestion avoidance that could be implemented gradually in current TCP/IP networks with no changes to transport protocols.

Section 2 discusses previous research on Early Random Drop gateways and other congestion avoidance gateways. Section 3 outlines design guidelines for RED gateways. Section 4 presents the RED gateway algorithm, and Section 5 describes simple simulations. Section 6 discusses in detail the parameters used in calculating the average queue size, and Section 7 discusses the algorithm used in calculating the packet-marking probability.

Section 8 examines the performance of RED gateways, including the robustness of RED gateways for a range of traffic and for a range of parameter values. Simulations in Section 9 demonstrate, among other things, the RED gateway's lack of bias against bursty traffic. Section 10 describes how RED gateways can be used to identify those users that are using a large fraction of the bandwidth through a congested gateway. Section 11 discusses methods for efficiently implementing RED gateways. Section 12 gives conclusions and describes areas for future work.

2 Previous work on congestion avoidance gateways

2.1 Early Random Drop gateways

Several researchers have studied Early Random Drop gateways as a method for providing congestion avoidance at the gateway.

Hashem [11] discusses some of the shortcomings of Random Drop and Drop Tail gateways, and briefly investigates Early Random Drop gateways. In the implementation of Early Random Drop gateways in [11], if the queue length exceeds a certain drop level, then the gateway drops each packet arriving at the gateway with a fixed drop probability. This is discussed as a rough initial implementation. Hashem [11] stresses that in future implementations the drop level and the drop probability should be adjusted dynamically, depending on network traffic.

Hashem [11] points out that with Drop Tail gateways each congestion period introduces global synchronization in the network. When the queue overflows, packets are often dropped from several connections, and these connections decrease their windows at the same time. This results in a loss of throughput at the gateway. The paper shows that Early Random Drop gateways have a broader view of traffic distribution than do Drop Tail or Random Drop gateways and reduce global synchronization. The paper suggests that because of this broader view of traffic distribution, Early Random Drop gateways have a better chance than Drop Tail gateways of targeting aggressive users. The conclusions in [11] are that Early Random Drop gateways deserve further investigation.

For the version of Early Random Drop gateways used in the simulations in [36], if the queue is more than half full then the gateway drops each arriving packet with probability 0.02. Zhang [36] shows that this version of Early Random Drop gateways was not successful in controlling misbehaving users. In these simulations, with both Random Drop and Early Random Drop gateways, the misbehaving users received roughly 75% higher throughput than the users implementing standard 4.3 BSD TCP.

The Gateway Congestion Control Survey [21] considers the versions of Early Random Drop described above. The survey cites the results in which the Early Random Drop gateway is unsuccessful in controlling misbehaving users [36]. As mentioned in [32], Early Random Drop gateways are not expected to solve all of the problems of unequal throughput given connections with different roundtrip times and multiple congested gateways. In [21], the goals of Early Random Drop gateways for congestion avoidance are described as "uniform, dynamic treatment of users (streams/flows), of low overhead, and of good scaling characteristics in large and loaded networks". It is left as an open question whether or not these goals can be achieved.

2.2 Other approaches to gateway mechanisms for congestion avoidance

Early descriptions of IP Source Quench messages suggest that gateways could send Source Quench messages to source hosts before the buffer space at the gateway reaches capacity [26], and before packets have to be dropped at the gateway. One proposal [27] suggests that the gateway send Source Quench messages when the queue size exceeds a certain threshold, and outlines a possible method for flow control at the source hosts in response to these messages. The proposal also suggests that when the gateway queue size approaches the maximum level the gateway could discard arriving packets other than ICMP packets.

The DECbit congestion avoidance scheme, a binary feedback scheme for congestion avoidance, is described in [29]. In the DECbit scheme the gateway uses a congestion-indication bit in packet headers to provide feedback about congestion in the network. When a packet arrives at the gateway, the gateway calculates the average queue length for the last (busy + idle) period plus the current busy period. (The gateway is busy when it is transmitting packets, and idle otherwise.) When the average queue length exceeds one, then the gateway sets the congestion-indication bit in the packet header of arriving packets.

The source uses window flow control, and updates its window once every two roundtrip times. If at least half of the packets in the last window had the congestion indication bit set, then the window is decreased exponentially. Otherwise, the window is increased linearly.
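
To make the source behavior concrete, the following is a minimal sketch of the window update just described; it is not code from any DECbit implementation, and the class name and the increase/decrease constants are our own illustrative assumptions.

```python
# Minimal sketch of the DECbit source window update described above.
# The decrease factor and increase amount are illustrative assumptions,
# not constants taken from the DECbit specification.

class DecbitSource:
    def __init__(self, window=1.0, decrease_factor=0.875, increase_amount=1.0):
        self.window = window
        self.decrease_factor = decrease_factor
        self.increase_amount = increase_amount

    def update_window(self, bits_set, packets_in_last_window):
        """Called once every two roundtrip times."""
        if bits_set >= packets_in_last_window / 2.0:
            self.window *= self.decrease_factor   # exponential (multiplicative) decrease
        else:
            self.window += self.increase_amount   # linear (additive) increase
        return self.window
```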

There are several significant differences between DECbit gateways and the RED gateways described in this paper. The first difference concerns the method of computing the average queue size. Because the DECbit scheme chooses the last (busy + idle) cycle plus the current busy period for averaging the queue size, the queue size can sometimes be averaged over a fairly short period of time. In high-speed networks with large buffers at the gateway, it would be desirable to explicitly control the time constant for the computed average queue size; this is done in RED gateways using time-based exponential decay. In [29] the authors report that they rejected the idea of a weighted exponential running average of the queue length because when the time interval was far from the roundtrip time, there was bias in the network. This problem of bias does not arise with RED gateways because RED gateways use a randomized algorithm for marking packets, and assume that the sources use a different algorithm for responding to marked packets. In a DECbit network, the source looks at the fraction of packets that have been marked in the last roundtrip time. For a network with RED gateways, the source should reduce its window even if there is only one marked packet.

A second difference between DECbit gateways and RED gateways concerns the method for choosing connections to notify of congestion. In the DECbit scheme there is no conceptual separation between the algorithm to detect congestion and the algorithm to set the congestion indication bit. When a packet arrives at the gateway and the computed average queue size is too high, the congestion indication bit is set in the header of that packet. Because of this method for marking packets, DECbit networks can exhibit a bias against bursty traffic [see Section 9]; this is avoided in RED gateways by using randomization in the method for marking packets. For congestion avoidance gateways designed to work with TCP, an additional motivation for using randomization in the method for marking packets is to avoid the global synchronization that results from many TCP connections reducing their window at the same time. This is less of a concern in networks with the DECbit congestion avoidance scheme, where each source decreases its window fairly moderately in response to congestion.

Another proposal for adaptive window schemes where the source nodes increase or decrease their windows according to feedback concerning the queue lengths at the gateways is presented in [25]. Each gateway has an upper threshold UT indicating congestion, and a lower threshold LT indicating light load conditions. Information about the queue sizes at the gateways is added to each packet. A source node increases its window only if all the gateway queue lengths in the path are below the lower thresholds. If the queue length is above the upper threshold for any queue along the path, then the source node decreases its window. One disadvantage of this proposal is that the network responds to the instantaneous queue lengths, not to the average queue lengths. We believe that this scheme would be vulnerable to traffic phase effects and to biases against bursty traffic, and would not accommodate transient increases in the queue size.

3 Design guidelines

This section summarizes some of the design goals and guidelines for RED gateways. The main goal is to provide congestion avoidance by controlling the average queue size. Additional goals include the avoidance of global synchronization and of a bias against bursty traffic, and the ability to maintain an upper bound on the average queue size even in the absence of cooperation from transport-layer protocols.

The first job of a congestion avoidance mechanism at the gateway is to detect incipient congestion. As defined in [18], a congestion avoidance scheme maintains the network in a region of low delay and high throughput. The average queue size should be kept low, while fluctuations in the actual queue size should be allowed to accommodate bursty traffic and transient congestion. Because the gateway can monitor the size of the queue over time, the gateway is the appropriate agent to detect incipient congestion. Because the gateway has a unified view of the various sources contributing to this congestion, the gateway is also the appropriate agent to decide which sources to notify of this congestion.

In a network with connections with a range of roundtrip times, throughput requirements, and delay sensitivities, the gateway is the most appropriate agent to determine the size and duration of short-lived bursts in queue size to be accommodated by the gateway. The gateway can do this by controlling the time constants used by the low-pass filter for computing the average queue size. The goal of the gateway is to detect incipient congestion that has persisted for a "long time" (several roundtrip times).

The second job of a congestion avoidance gateway is to decide which connections to notify of congestion at the gateway. If congestion is detected before the gateway buffer is full, it is not necessary for the gateway to drop packets to notify sources of congestion. In this paper, we say that the gateway marks a packet, and notifies the source to reduce the window for that connection. This marking and notification can consist of dropping a packet, setting a bit in a packet header, or some other method understood by the transport protocol. The current feedback mechanism in TCP/IP networks is for the gateway to drop packets, and the simulations of RED gateways in this paper use this approach.

One goal is to avoid a bias against bursty traffic. Networks contain connections with a range of burstiness, and gateways such as Drop Tail and Random Drop gateways have a bias against bursty traffic. With Drop Tail gateways, the more bursty the traffic from a particular connection, the more likely it is that the gateway queue will overflow when packets from that connection arrive at the gateway [7].

Another goal in deciding which connections to notify of congestion is to avoid the global synchronization that results from notifying all connections to reduce their windows at the same time. Global synchronization has been studied in networks with Drop Tail gateways [37], and results in loss of throughput in the network. Synchronization as a general network phenomenon has been explored in [8].

In order to avoid problems such as biases against bursty traffic and global synchronization, congestion avoidance gateways can use distinct algorithms for congestion detection and for deciding which connections to notify of this congestion. The RED gateway uses randomization in choosing which arriving packets to mark; with this method, the probability of marking a packet from a particular connection is roughly proportional to that connection's share of the bandwidth through the gateway. This method can be efficiently implemented without maintaining per-connection state at the gateway.

One goal for a congestion avoidance gateway is the ability to control the average queue size even in the absence of cooperating sources. This can be done if the gateway drops arriving packets when the average queue size exceeds some maximum threshold (rather than setting a bit in the packet header). This method could be used to control the average queue size even if most connections last less than a roundtrip time (as could occur with modified transport protocols in increasingly high-speed networks), and even if connections fail to reduce their throughput in response to marked or dropped packets.

4 The RED algorithm

This section describes the algorithm for RED gateways. The RED gateway calculates the average queue size, using a low-pass filter with an exponential weighted moving average. The average queue size is compared to two thresholds, a minimum threshold and a maximum threshold. When the average queue size is less than the minimum threshold, no packets are marked. When the average queue size is greater than the maximum threshold, every arriving packet is marked. If marked packets are in fact dropped, or if all source nodes are cooperative, this ensures that the average queue size does not significantly exceed the maximum threshold.

When the average queue size is between the minimum and the maximum threshold, each arriving packet is marked with probability p_a, where p_a is a function of the average queue size avg. Each time that a packet is marked, the probability that a packet is marked from a particular connection is roughly proportional to that connection's share of the bandwidth at the gateway. The general RED gateway algorithm is given in Figure 1.

Thus the RED gateway has two separate algorithms. The algorithm for computing the average queue size determines the degree of burstiness that will be allowed in the gateway queue. The algorithm for calculating the packet-marking probability determines how frequently the gateway marks packets, given the current level of congestion. The goal is for the gateway to mark packets at fairly evenly-spaced intervals, in order to avoid biases and to avoid global synchronization, and to mark packets sufficiently frequently to control the average queue size.

The detailed algorithm for the RED gateway is given in Figure 2. Section 11 discusses efficient implementations of these algorithms.

for each packet arrival
    calculate the average queue size avg
    if min_th ≤ avg < max_th
        calculate probability p_a
        with probability p_a:
            mark the arriving packet
    else if max_th ≤ avg
        mark the arriving packet

Figure 1: General algorithm for RED gateways.

The gateway's calculations of the average queue size take into account the period when the queue is empty (the idle period) by estimating the number m of small packets that could have been transmitted by the gateway during the idle period. After the idle period the gateway computes the average queue size as if m packets had arrived to an empty queue during that period.

As avg varies from min_th to max_th, the packet-marking probability p_b varies linearly from 0 to max_p:

p_b ← max_p (avg − min_th)/(max_th − min_th).

The final packet-marking probability p_a increases slowly as the count increases since the last marked packet:

p_a ← p_b/(1 − count · p_b).

As discussed in Section 7, this ensures that the gateway does not wait too long before marking a packet.

The gateway marks each packet that arrives at the gateway when the average queue size avg exceeds max_th.

One option for the RED gateway is to measure the queue in bytes rather than in packets. With this option, the average queue size accurately reflects the average delay at the gateway. When this option is used, the algorithm would be modified to ensure that the probability that a packet is marked is proportional to the packet size in bytes:

p_b ← max_p (avg − min_th)/(max_th − min_th)

p_b ← p_b · PacketSize/MaximumPacketSize

p_a ← p_b/(1 − count · p_b)

In this case a large FTP packet is more likely to be marked than is a small TELNET packet.
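
The calculation above can be written as a small helper; the sketch below is illustrative (the function name, the clamping of p_a to one, and the optional byte-mode arguments are our choices), with the formulas taken directly from this section.

```python
# Sketch of the packet-marking probability calculation described above.
# count is the number of unmarked packets since the last marked packet
# (count >= 0 when this calculation is reached in the RED algorithm).

def marking_probability(avg, count, min_th, max_th, max_p,
                        packet_size=None, max_packet_size=None):
    p_b = max_p * (avg - min_th) / (max_th - min_th)
    if packet_size is not None:
        # Optional byte mode: probability proportional to the packet size.
        p_b = p_b * packet_size / max_packet_size
    denom = 1.0 - count * p_b
    if denom <= 0:
        return 1.0                      # treat p_a >= 1 as certain marking
    return min(1.0, p_b / denom)
```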

Sections 6 and 7 discuss in detail the setting of the various parameters for RED gateways. Section 6 discusses the calculation of the average queue size. The queue weight w_q is determined by the size and duration of bursts in queue size that are allowed at the gateway. The minimum and maximum thresholds min_th and max_th are determined by the desired average queue size. The average queue size which makes the desired tradeoffs (such as the tradeoff between maximizing throughput and minimizing delay) depends on network characteristics, and is left as a question for further research. Section 7 discusses the calculation of the packet-marking probability.

In this paper our primary interest is in the functional operation of the RED gateways. Specific questions about the most efficient implementation of the RED algorithm are discussed in Section 11.

5 A simple simulation

This section describes our simulator and presents a simple simulation with RED gateways. Our simulator is a version of the REAL simulator [19] built on Columbia's Nest simulation package [1], with extensive modifications.

Initialization:
    avg ← 0
    count ← −1
for each packet arrival
    calculate the new average queue size avg:
        if the queue is nonempty
            avg ← (1 − w_q) avg + w_q q
        else
            m ← f(time − q_time)
            avg ← (1 − w_q)^m avg
    if min_th ≤ avg < max_th
        increment count
        calculate probability p_a:
            p_b ← max_p (avg − min_th)/(max_th − min_th)
            p_a ← p_b/(1 − count · p_b)
        with probability p_a:
            mark the arriving packet
            count ← 0
    else if max_th ≤ avg
        mark the arriving packet
        count ← 0
    else count ← −1
when queue becomes empty
    q_time ← time

Saved Variables:
    avg: average queue size
    q_time: start of the queue idle time
    count: packets since last marked packet

Fixed parameters:
    w_q: queue weight
    min_th: minimum threshold for queue
    max_th: maximum threshold for queue
    max_p: maximum value for p_b

Other:
    p_a: current packet-marking probability
    q: current queue size
    time: current time

Figure 2: Detailed algorithm for RED gateways.
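
For readers who prefer code to pseudocode, the following is an illustrative Python rendering of Figure 2. The function f is assumed here to divide the idle time by a typical packet transmission time s (the paper only requires f to be a linear function of the idle time), and the clamping of the denominator is a defensive detail of this sketch, not part of the algorithm.

```python
import random

class REDQueue:
    """Illustrative Python rendering of the detailed RED algorithm of Figure 2."""

    def __init__(self, w_q=0.002, min_th=5.0, max_th=15.0, max_p=1.0 / 50, s=0.001):
        self.w_q = w_q
        self.min_th = min_th
        self.max_th = max_th
        self.max_p = max_p
        self.s = s              # assumed typical packet transmission time used by f()
        self.avg = 0.0          # average queue size
        self.count = -1         # packets since the last marked packet
        self.q_time = 0.0       # start of the queue idle time

    def on_packet_arrival(self, q, time):
        """q: current queue size in packets; time: current time.  Returns True to mark."""
        # Calculate the new average queue size.
        if q > 0:
            self.avg = (1 - self.w_q) * self.avg + self.w_q * q
        else:
            m = (time - self.q_time) / self.s          # f(time - q_time)
            self.avg = ((1 - self.w_q) ** m) * self.avg

        # Decide whether to mark the arriving packet.
        if self.min_th <= self.avg < self.max_th:
            self.count += 1
            p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            p_a = p_b / max(1e-9, 1.0 - self.count * p_b)   # defensive clamp (ours)
            if random.random() < p_a:
                self.count = 0
                return True
        elif self.avg >= self.max_th:
            self.count = 0
            return True
        else:
            self.count = -1
        return False

    def on_queue_empty(self, time):
        self.q_time = time      # remember when the idle period began
```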

The gateways use FIFO queueing.

Source and sink nodes implement a congestion control algorithm equivalent to that in 4.3-Tahoe BSD TCP. Briefly, there are two phases to the window-adjustment algorithm. A threshold is set initially to half the receiver's advertised window. In the slow-start phase, the current window is doubled each roundtrip time until the window reaches the threshold. Then the congestion-avoidance phase is entered, and the current window is increased by roughly one packet each roundtrip time. The window is never allowed to increase to more than the receiver's advertised window, which this paper refers to as the "maximum window size". In 4.3-Tahoe BSD TCP, packet loss (a dropped packet) is treated as a "congestion experienced" signal. The source reacts to a packet loss by setting the threshold to half the current window, decreasing the current window to one packet, and entering the slow-start phase.
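
A per-ACK approximation of this window-adjustment algorithm is sketched below; the class, variable names, and per-ACK granularity are our simplifications, and 4.3-Tahoe BSD TCP differs in many details (timer management, retransmission handling, and segment accounting, among others).

```python
# Illustrative per-ACK approximation of the 4.3-Tahoe BSD window adjustment
# described above; not code from the BSD implementation.

class TahoeWindow:
    def __init__(self, advertised_window):
        self.max_window = float(advertised_window)   # receiver's advertised window
        self.ssthresh = self.max_window / 2.0        # threshold, initially half
        self.cwnd = 1.0                              # current window, in packets

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0                         # slow start: roughly doubles per RTT
        else:
            self.cwnd += 1.0 / self.cwnd             # congestion avoidance: +1 packet per RTT
        self.cwnd = min(self.cwnd, self.max_window)

    def on_packet_loss(self):
        # A dropped packet is treated as a "congestion experienced" signal.
        self.ssthresh = max(1.0, self.cwnd / 2.0)
        self.cwnd = 1.0                              # re-enter slow start
```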

Figure 3 shows a simple simulation with RED gateways. The network is shown in Figure 4. The simulation contains four FTP connections, each with a maximum window roughly equal to the delay-bandwidth product, which ranges from 33 to 112 packets. The RED gateway parameters are set as follows: w_q = 0.002, min_th = 5 packets, max_th = 15 packets, and max_p = 1/50. The buffer size is sufficiently large that packets are never dropped at the gateway due to buffer overflow; in this simulation the RED gateway controls the average queue size, and the actual queue size never exceeds forty packets.

For the charts in Figure 3, the x-axis shows the time in seconds. The bottom chart shows the packets from nodes 1-4. Each of the four main rows shows the packets from one of the four connections; the bottom row shows node 1 packets, and the top row shows node 4 packets. There is a mark for each data packet as it arrives at the gateway and as it departs from the gateway; at this time scale, the two marks are often indistinguishable. The y-axis is a function of the packet sequence number; for packet number n from node i, the y-axis shows n mod 90 + (i − 1)·100. Thus, each vertical 'line' represents 90 consecutively-numbered packets from one connection arriving at the gateway. Each 'X' shows a packet dropped by the gateway, and each 'X' is followed by a mark showing the retransmitted packet. Node 1 starts sending packets at time 0, node 2 starts after 0.2 seconds, node 3 starts after 0.4 seconds, and node 4 starts after 0.6 seconds.

The top chart of Figure 3 shows the instantaneous queue size q and the calculated average queue size avg. The dotted lines show min_th and max_th, the minimum and maximum thresholds for the average queue size. Note that the calculated average queue size avg changes fairly slowly compared to q. The bottom row of X's on the bottom chart shows again the time of each dropped packet.

This simulation shows the success of the RED gateway in controlling the average queue size at the gateway in response to a dynamically changing load. As the number of connections increases, the frequency with which the gateway drops packets also increases. There is no global synchronization. The higher throughput for the connections with shorter roundtrip times is due to the bias of TCP's window increase algorithm in favor of connections with shorter roundtrip times (as discussed in [6, 7]). For the simulation in Figure 3 the average link utilization is 76%. For the following second of the simulation, when all four sources are active, the average link utilization is 82%. (This is not shown in Figure 3.)

Because RED gateways can control the average queue size while accommodating transient congestion, RED gateways are well-suited to provide high throughput and low average delay in high-speed networks with TCP connections that have large windows. The RED gateway can accommodate the short burst in the queue required by TCP's slow-start phase; thus RED gateways control the average queue size while still allowing TCP connections to smoothly open their windows. Figure 5 shows the results of simulations of the network in Figure 6 with two TCP connections, each with a maximum window of 240 packets, roughly equal to the delay-bandwidth product. The two connections are started at slightly different times. The simulations compare the performance of Drop Tail and of RED gateways.

In Figure 5 the x-axis shows the total throughput as a fraction of the maximum possible throughput on the congested link. The y-axis shows the average queue size in packets (as seen by arriving packets).

Figure 3: A simulation with four FTP connections with staggered start times. (Top chart: queue size, solid line, and average queue size, dashed line, versus time. Bottom chart: packet number mod 90 for the four connections versus time.)

Five 5-second simulations were run for each of 11 sets of parameters for Drop Tail gateways, and for 11 sets of parameters for RED gateways; each mark in Figure 5 shows the results of one of these five-second simulations. The simulations with Drop Tail gateways were run with the buffer size ranging from 15 to 140 packets; as the buffer size is increased, the throughput and the average queue size increase correspondingly. In order to avoid phase effects in the simulations with Drop Tail gateways, the source node takes a random time drawn from the uniform distribution on [0, t] seconds to prepare an FTP packet for transmission, where t is the bottleneck service time of 0.17 ms. [7].

Figure 5: Comparing Drop Tail and RED gateways. (Average queue size versus throughput, with triangles for RED simulations and squares for Drop Tail simulations.)

Figure 6: Simulation network. (FTP sources, 100 Mbps links, a 45 Mbps congested link, and a sink.)

The simulations with RED gateways were all run with a buffer size of 100 packets, with min_th ranging from 3 to 50 packets. For the RED gateways, max_th is set to 3·min_th, with w_q = 0.002 and max_p = 1/50. The dashed lines show the average delay (as a function of throughput), approximated by 1.73/(1 − x) for the simulations with RED gateways, and approximated by 0.1/(1 − x)^3 for the simulations with Drop Tail gateways. For this simple network with TCP connections with large windows, the network power (the ratio of throughput to delay) is higher with RED gateways than with Drop Tail gateways. There are several reasons for this difference. With Drop Tail gateways with a small maximum queue, the queue drops packets while the TCP connection is in the slow-start phase of rapidly increasing its window, reducing throughput. On the other hand, with Drop Tail gateways with a large maximum queue the average delay is unacceptably large. In addition, Drop Tail gateways are more likely to drop packets from both connections at the same time, resulting in global synchronization and a further loss of throughput.

Later in the paper, we discuss simulation results from networks with a more diverse range of connections. The RED gateway is not specifically designed for a network dominated by bulk data transfer; this is simply an easy way to simulate increasingly-heavy congestion at a gateway.

6 Calculating the average queue length

The RED gateway uses a low-pass filter to calculate the average queue size. Thus, the short-term increases in the queue size that result from bursty traffic or from transient congestion do not result in a significant increase in the average queue size.

The low-pass filter is an exponential weighted moving average (EWMA):

avg ← (1 − w_q) avg + w_q q.     (1)

The weight w_q determines the time constant of the low-pass filter. The following sections discuss upper and lower bounds for setting w_q. The calculation of the average queue size can be implemented particularly efficiently when w_q is a (negative) power of two, as shown in Section 11.
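
As a quick illustration of the role of w_q (with an arbitrarily chosen step in queue size, not a value from the paper), the sketch below shows how slowly the average responds when the instantaneous queue jumps from 0 to a constant value: after k arrivals the average has covered a fraction 1 − (1 − w_q)^k of the step, so on the order of 1/w_q arrivals are needed for the average to catch up.

```python
# Step response of the EWMA of equation (1); w_q and Q chosen for illustration.

w_q = 0.002
Q = 20.0          # hypothetical constant instantaneous queue size, in packets
avg = 0.0
for k in range(1, 2001):
    avg = (1 - w_q) * avg + w_q * Q        # equation (1) with q = Q
    if k in (500, 1000, 2000):
        print(k, round(avg, 2))            # roughly 12.6, 17.3, and 19.6 packets
```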

6.1 An upper bound for w_q

If w_q is too large, then the averaging procedure will not filter out transient congestion at the gateway.

Assume that the queue is initially empty, with an average queue size of zero, and then the queue increases from 0 to L packets over L packet arrivals. After the Lth packet arrives at the gateway, the average queue size avg_L is

avg_L = Σ_{i=1}^{L} i · w_q (1 − w_q)^{L−i}
      = w_q (1 − w_q)^L Σ_{i=1}^{L} i (1/(1 − w_q))^i
      = L + 1 + ((1 − w_q)^{L+1} − 1)/w_q.     (2)

This derivation uses the following identity [9, p. 65]:

Σ_{i=1}^{L} i x^i = (x + (Lx − L − 1) x^{L+1}) / (1 − x)^2.

Figure 7: avg_L as a function of w_q and L.

Given a minimum threshold min_th, and given that we wish to allow bursts of L packets arriving at the gateway, then w_q should be chosen to satisfy the following equation for avg_L < min_th:

L + 1 + ((1 − w_q)^{L+1} − 1)/w_q < min_th.     (3)
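
Equations (2) and (3) are easy to evaluate numerically. The sketch below (with L and min_th chosen only for illustration) searches a grid for the largest w_q that still satisfies equation (3); for min_th = 5 and L = 50 the bound comes out to roughly 0.0042.

```python
# Numerical check of equations (2) and (3); L and min_th chosen for illustration.

def avg_after_burst(w_q, L):
    # Equation (2): average queue size after a burst of L packets.
    return L + 1 + ((1 - w_q) ** (L + 1) - 1) / w_q

def largest_w_q(L, min_th, step=1e-5):
    # Largest w_q on a grid for which a burst of L packets keeps avg_L below min_th.
    w, best = step, None
    while w < 0.5:
        if avg_after_burst(w, L) < min_th:
            best = w
        w += step
    return best

print(largest_w_q(L=50, min_th=5))
```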

7 Calculating the packet-marking probability

The initial packet-marking probability p_b is calculated as a linear function of the average queue size. In this section we compare two methods for calculating the final packet-marking probability, and demonstrate the advantages of the second method. In the first method, when the average queue size is constant the number of arriving packets between marked packets is a geometric random variable; in the second method the number of arriving packets between marked packets is a uniform random variable.

The initial packet-marking probability is computed as follows:

p_b ← max_p (avg − min_th)/(max_th − min_th).

The parameter max_p gives the maximum value for the packet-marking probability p_b, achieved when the average queue size reaches the maximum threshold.

Figure 8: Randomly-marked packets, comparing two packet-marking methods. (Top row: packets marked by Method 1; bottom row: packets marked by Method 2; the x-axis shows the packet number, from 0 to 5000.)

Method 1: Geometric random variables. In Method 1, let each packet be marked with probability p_b. Let the intermarking time X be the number of packets that arrive, after a marked packet, until the next packet is marked. Because each packet is marked with probability p_b,

Prob[X = n] = (1 − p_b)^{n−1} p_b.

Thus with Method 1, X is a geometric random variable with parameter p_b, and E[X] = 1/p_b.

With a constant average queue size, the goal is to mark packets at fairly regular intervals. It is undesirable to have too many marked packets close together, and it is also undesirable to have too long an interval between marked packets. Both of these events can result in global synchronization, with several connections reducing their windows at the same time, and both of these events can occur when X is a geometric random variable.

Method 2: Uniform random variables. A more desirable alternative is for X to be a uniform random variable from {1, 2, ..., 1/p_b} (assuming for simplicity that 1/p_b is an integer). This is achieved if the marking probability for each arriving packet is p_b/(1 − count · p_b), where count is the number of unmarked packets that have arrived since the last marked packet. Call this Method 2. In this case,

Prob[X = n] = (p_b/(1 − (n − 1) p_b)) · Π_{i=0}^{n−2} (1 − p_b/(1 − i · p_b))
            = p_b     for 1 ≤ n ≤ 1/p_b,

and

Prob[X = n] = 0     for n > 1/p_b.

For Method 2, E[X] = 1/(2 p_b) + 1/2.

Figure 8 shows an experiment comparing the two methods for marking packets. The top line shows Method 1, where each packet is marked with probability p, for p = 0.02. The bottom line shows Method 2, where each packet is marked with probability p/(1 − i·p), for p = 0.01, and for i the number of unmarked packets since the last marked packet. Both methods marked roughly 100 out of the 5000 arriving packets. The x-axis shows the packet number. For each method, there is a dot for each marked packet. As expected, the marked packets are more clustered with Method 1 than with Method 2.
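
The experiment in Figure 8 is straightforward to reproduce; the sketch below (using the same marking probabilities as above, but our own bookkeeping) draws marks with each method over 5000 packets and reports how many marks occurred and how spread out the intermarking times are.

```python
import random

# Compare the two marking methods described above: Method 1 marks each packet
# with fixed probability p; Method 2 marks with probability p/(1 - i*p),
# where i is the number of unmarked packets since the last marked packet.

def intermarking_times(n_packets, method, p):
    times, gap, i = [], 0, 0
    for _ in range(n_packets):
        gap += 1
        prob = p if method == 1 else p / max(1e-9, 1 - i * p)
        if random.random() < prob:
            times.append(gap)
            gap, i = 0, 0
        else:
            i += 1
    return times

for method, p in ((1, 0.02), (2, 0.01)):
    t = intermarking_times(5000, method, p)
    print(method, len(t), min(t), max(t))   # Method 2 gaps never exceed about 1/p
```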

For the simulations in this paper, we set max_p to 1/50. When the average queue size is halfway between min_th and max_th, the gateway drops, on the average, roughly one out of 50 (or one out of 1/max_p) of the arriving packets. RED gateways perform best when the packet-marking probability changes fairly slowly as the average queue size changes; this helps to discourage oscillations in the average queue size and in the packet-marking probability. There should never be a reason to set max_p greater than 0.1, for example. When max_p = 0.1, then the RED gateway marks close to 1/5th of the arriving packets when the average queue size is close to the maximum threshold (using Method 2 to calculate the packet-marking probability). If congestion is sufficiently heavy that the average queue size cannot be controlled by marking close to 1/5th of the arriving packets, then after the average queue size exceeds the maximum threshold, the gateway will mark every arriving packet.

8 Evaluation of RED gateways

In addition to the design goals discussed in Section 3, several general goals have been outlined for congestion avoidance schemes [14, 16]. In this section we describe how our goals have been met by RED gateways.

• Congestion avoidance. If the RED gateway in fact drops packets arriving at the gateway when the average queue size reaches the maximum threshold, then the RED gateway guarantees that the calculated average queue size does not exceed the maximum threshold. If the weight w_q for the EWMA procedure has been set appropriately [see Section 6.2], then the RED gateway in fact controls the actual average queue size. If the RED gateway sets a bit in packet headers when the average queue size exceeds the maximum threshold, rather than dropping packets, then the RED gateway relies on the cooperation of the sources to control the average queue size.

• Appropriate time scales. After notifying a connection of congestion by marking a packet, it takes at least a roundtrip time for the gateway to see a reduction in the arrival rate. In RED gateways the time scale for the detection of congestion roughly matches the time scale required for connections to respond to congestion. RED gateways don't notify connections to reduce their windows as a result of transient congestion at the gateway.

• No global synchronization. The rate at which RED gateways mark packets depends on the level of congestion. During low congestion, the gateway has a low probability of marking each arriving packet, and as congestion increases, the probability of marking each packet increases. RED gateways avoid global synchronization by marking packets at as low a rate as possible.

• Simplicity. The RED gateway algorithm could be implemented with moderate overhead in current networks, as discussed further in Section 11.

• Maximizing global power. The RED gateway explicitly controls the average queue size. Figure 5 shows that for simulations with high link utilization, global power is higher with RED gateways than with Drop Tail gateways. Future research is needed to determine the optimum average queue size for different network and traffic conditions.

• Fairness. One goal for a congestion avoidance mechanism is fairness. This goal of fairness is not well-defined, so we simply describe the performance of the RED gateway in this regard. The RED gateway does not discriminate against particular connections or classes of connections. (This is in contrast to Drop Tail or Random Drop gateways, as described in [7].) For the RED gateway, the fraction of marked packets for each connection is roughly proportional to that connection's share of the bandwidth. However, RED gateways do not attempt to ensure that each connection receives the same fraction of the total throughput, and do not explicitly control misbehaving users. RED gateways provide a mechanism to identify the level of congestion, and RED gateways could also be used to identify connections using a large share of the total bandwidth. If desired, additional mechanisms could be added to RED gateways to control the throughput of such connections during periods of congestion.

• Appropriate for a wide range of environments. The randomized mechanism for marking packets is appropriate for networks with connections with a range of roundtrip times and throughput, and for a large range in the number of active connections at one time. Changes in the load are detected through changes in the average queue size, and the rate at which packets are marked is adjusted correspondingly. The RED gateway's performance is discussed further in the following section.

Even in a network where RED gateways signal congestion by dropping marked packets, there are many occasions in a TCP/IP network when a dropped packet does not result in any decrease in load at the gateway. If the gateway drops a data packet for a TCP connection, this packet drop will be detected by the source, possibly after a retransmission timer expires. If the gateway drops an ACK packet for a TCP connection, or a packet from a non-TCP connection, this packet drop could go unnoticed by the source. However, even for a congested network with a traffic mix dominated by short TCP connections or by non-TCP connections, the RED gateway still controls the average queue size by dropping all arriving packets when the average queue size exceeds a maximum threshold.

8.1 Parameter sensitivity

This section discusses the parameter sensitivity of RED gateways. Unlike Drop Tail gateways, where the only free parameter is the buffer size, RED gateways have additional parameters that determine the upper bound on the average queue size, the time interval over which the average queue size is computed, and the maximum rate for marking packets. The congestion avoidance mechanism should have low parameter sensitivity, and the parameters should be applicable to networks with widely varying bandwidths.

The RED gateway parameters w_q, min_th, and max_th are necessary so that the network designer can make conscious decisions about the desired average queue size, and about the size and duration of queue bursts to be allowed at the gateway. The parameter max_p can be chosen from a fairly wide range, because it is only an upper bound on the actual marking probability p_b. If congestion is sufficiently heavy that the gateway cannot control the average queue size by marking at most a fraction max_p of the packets, then the average queue size will exceed the maximum threshold, and the gateway will mark every packet until congestion is controlled.

We give a few rules that give adequate performance of the RED gateway under a wide range of traffic conditions and gateway parameters.

1: Ensure adequate calculation of the average queue size: set w_q ≥ 0.001. The average queue size at the gateway is limited by max_th, as long as the calculated average queue size avg is a fairly accurate reflection of the actual average queue size. The weight w_q should not be set too low, so that the calculated average queue length does not delay too long in reflecting increases in the actual queue length [see Section 6]. Equation (3) describes the upper bound on w_q required to allow the queue to accommodate bursts of L packets without marking packets.

2: Set min_th sufficiently high to maximize network power. The thresholds min_th and max_th should be set sufficiently high to maximize network power. As we stated earlier, more research is needed on determining the optimal average queue size for various network conditions. Because network traffic is often bursty, the actual queue size can also be quite bursty; if the average queue size is kept too low, then the output link will be underutilized.

3: Make max_th − min_th sufficiently large to avoid global synchronization. Make max_th − min_th larger than the typical increase in the average queue size during a roundtrip time, to avoid the global synchronization that results when the gateway marks many packets at one time. One rule of thumb would be to set max_th to at least twice min_th. If max_th − min_th is too small, then the computed average queue size can regularly oscillate up to max_th; this behavior is similar to the oscillations of the queue up to the maximum queue size with Drop Tail gateways.
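
The three rules above (together with equation (3) and the max_p guidance of Section 7) can be collected into a small configuration check; the helper below is purely illustrative and not part of the RED algorithm.

```python
# Illustrative sanity check for RED parameters, following the rules of thumb
# above; burst_L is the burst size (in packets) the queue should absorb.

def check_red_parameters(w_q, min_th, max_th, max_p, burst_L=50):
    problems = []
    if w_q < 0.001:
        problems.append("rule 1: w_q < 0.001, avg may lag the actual queue")
    avg_after_burst = burst_L + 1 + ((1 - w_q) ** (burst_L + 1) - 1) / w_q
    if avg_after_burst >= min_th:
        problems.append("equation (3): a burst of %d packets would trigger marking" % burst_L)
    if max_th < 2 * min_th:
        problems.append("rule 3: max_th should be at least twice min_th")
    if max_p > 0.1:
        problems.append("Section 7: there should be no reason to set max_p above 0.1")
    return problems

print(check_red_parameters(w_q=0.002, min_th=5, max_th=15, max_p=1.0 / 50))
```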

To investigate the performance of RED gateways in a range of traffic conditions, this section discusses a simulation with two-way traffic, where there is heavy congestion resulting from many FTP and TELNET connections, each with a small window and limited data to send. The RED gateway parameters are the same as in the simple simulation in Figure 3, but the network traffic is quite different.

Figure 9 shows the simulation, which uses the network in Figure 10. Roughly half of the 41 connections go from one of the left-hand nodes 1-4 to one of the right-hand nodes 5-8; the other connections go in the opposite direction. The roundtrip times for the connections vary by a factor of 4 to 1. Most of the connections are FTP connections, but there are a few TELNET connections. (One of the reasons to keep the average queue size small is to ensure low average delay for the TELNET connections.) Unlike the previous simulations, in this simulation all of the connections have a maximum window of either 8 or 16 packets. The total number of packets for a connection ranges from 20 to 400 packets. The starting times and the total number of packets for each connection were chosen rather arbitrarily; we are not claiming to represent realistic traffic models. The intention is simply to show RED gateways in a range of environments.

Because of the effects of ack-compression with two-way traffic, the packets arriving at the gateway from each connection are somewhat bursty. When ack packets are 'compressed' in a queue, the ack packets arrive at the source node in a burst. In response, the source sends a burst of data packets [38].

The top chart in Figure 9 shows the queue for gateway A, and the next chart shows the queue for gateway B. For each chart, each 'X' indicates a packet dropped at that gateway. The bottom chart shows the packets for each connection arriving and departing from gateway A (and heading towards gateway B). For each connection, there is a mark for each packet arriving and departing from gateway A, though at this time scale the two marks are indistinguishable. Unlike the chart in Figure 3, in Figure 9 the packets for the different connections are displayed overlapped, rather than displayed on separate rows. The x-axis shows time, and the y-axis shows the packet number for that connection, where each connection starts at packet number 0. For example, the leftmost 'strand' shows a connection that starts at time 0, and that sends 220 packets in all. Each 'X' shows a packet dropped by one of the two gateways. The queue is measured in packets rather than in bytes; short packets are just as likely to be dropped as are longer packets. The bottom line of the bottom chart shows again an 'X' for each packet dropped by one of the two gateways.

Because Figure 9 shows many overlapping connections, it is not possible to trace the behavior of each of the connections. As Figure 9 shows, the RED gateway is effective in controlling the average queue size. When congestion is low at one of the gateways, the average queue size and the rate of marking packets is also low at that gateway. As congestion increases at the gateway, the average queue size and the rate of marking packets both increase. Because this simulation consists of heavy congestion caused by many connections, each with a small maximum window, the RED gateways have to drop a fairly large number of packets in order to control congestion. The average link utilization over the one-second period is 61% for the congested link in one direction, and 59% for the other direction. As the figure shows, there are periods at the beginning and the end of the simulation when the arrival rate at the gateways is low.

Note that the traffic in Figures 3 and 9 is quite varied, and in each case the RED gateway adjusts its rate of marking packets to maintain an acceptable average queue size. For the simulations in Figure 9 with many short connections, there are occasional periods of heavy congestion, and a higher rate of packet drops is needed to control congestion. In contrast, with the simulations in Figure 3 with a small number of connections with large maximum windows, the congestion can be controlled with a small number of dropped packets. For the simulations in Figure 9, the burstiness of the queue is dominated by short-term burstiness as packet bursts arrive at the gateway from individual connections. For the simulations in Figure 3, the burstiness of the queue is dominated by the window increase/decrease cycles of the individual connections. Note that the RED gateway parameters are unchanged in these two simulations.

The performance of a slightly different version of RED gateways with connections with different roundtrip times and with connections with multiple congested gateways has been analyzed and explored elsewhere [5].

9 Bursty traffic

This section shows that unlike Drop Tail or Random Drop gateways, RED gateways do not have a bias against bursty traffic. Bursty traffic at the gateway can result from an FTP connection with a long delay-bandwidth product but a small window; a window of traffic will be sent, and then there will be a delay until the ack packets return and another window of data can be sent. Variable-bit-rate video traffic and some forms of interactive traffic are other examples of bursty traffic seen by the gateway.

In this section we use FTP connections with infinite data, small windows, and small roundtrip times to model the less-bursty traffic, and we use FTP connections with smaller windows and longer roundtrip times to model the more-bursty traffic.

We consider simulations of the network in Figure 11. Node 5 packets have a roundtrip time that is six times that of the other packets. Connections 1-4 have a maximum window of 12 packets, while connection 5 has a maximum window of 8 packets. Because node 5 has a large roundtrip time and a small window, node 5 packets often arrive at the gateway in a loose cluster. By this, we mean that considering only node 5 packets, there is one long interarrival time, and many smaller interarrival times.

Figures 12 through 14 show the results of simulations of the network in Figure 11 with Drop Tail, Random Drop, and RED gateways respectively. The simulations in Figures 12 and 13 were run with the buffer size ranging from 8 packets to 22 packets. The simulations in Figure 14 were run many times with a minimum threshold ranging from 3 to 14 packets, and a buffer size ranging from 12 to 56 packets.

Each simulation was run for ten seconds, and each mark represents one one-second period of that simulation. For Figures 12 and 13, the x-axis shows the buffer size, and the y-axis shows node 5's throughput as a percentage of the total throughput through the gateway. In order to avoid traffic phase effects (effects caused by the precise timing of packet arrivals at the gateway), in the simulations with Drop Tail gateways the source takes a random time drawn from the uniform distribution on [0, t] seconds to prepare an FTP packet for transmission, where t is the bottleneck service time of 0.17 ms. [7]. In these simulations our concern is to examine the gateway's bias against bursty traffic.

For each set of simulations there is a second figure showing the average queue size (in packets) seen by arriving packets at the bottleneck gateway, and a third figure showing the average link utilization on the congested link. Because RED gateways are quite different from Drop Tail or Random Drop gateways, the gateways cannot be compared simply by comparing the maximum queue size; the most appropriate comparison is between a Drop Tail gateway and a RED gateway that maintain the same average queue size.

With Drop Tail or Random Drop gateways, the queue is more likely to overflow when the queue contains some packets from node 5. In this case, with either Random Drop or Drop Tail gateways, node 5 packets have a disproportionate probability of being dropped; the queue contents when the queue overflows are not representative of the average queue contents.

Figure 14 shows the result of simulations with RED gateways. The x-axis shows min_th and the y-axis shows node 5's throughput. The throughput for node 5 is close to the maximum possible throughput, given node 5's roundtrip time and maximum window. The parameters for the RED gateway are as follows: w_q = 0.002 and max_p = 1/50. The maximum threshold is twice the minimum threshold and the buffer size, which ranges from 12 to 56 packets, is four times the minimum threshold.

Figure 15 shows that with the simulations with Drop Tail or with Random Drop gateways, node 5 receives a disproportionate share of the packet drops. Each mark in Figure 15 shows the results from a one-second period of simulation. The boxes show the simulations with Drop Tail gateways from Figure 12, the triangles show the simulations with Random Drop gateways from Figure 13, and the dots show the simulations with RED gateways from Figure 14. For each one-second period of simulation, the x-axis shows node 5's throughput (as a percentage of the total throughput) and the y-axis shows node 5's packet drops (as a percentage of the total packet drops). The number of packets dropped in one one-second simulation period ranges from zero to 61; the chart excludes those one-second simulation periods with less than three dropped packets.

The dashed line in Figure 15 shows the position where node 5's share of packet drops exactly equals node 5's share of the throughput. The cluster of dots is roughly centered on the dashed line, indicating that for the RED gateways, node 5's share of dropped packets reflects node 5's share of the throughput. In contrast, for simulations with Random Drop (or with Drop Tail) gateways node 5 receives a small fraction of the throughput but a large fraction of the packet drops. This shows the bias of Drop Tail and Random Drop gateways against the bursty traffic from node 5.

Our simulations with an ISO TP4 network using the DECbit congestion avoidance scheme also show a bias against bursty traffic. With the DECbit congestion avoidance scheme node 5 packets have a disproportionate chance of having their congestion indication bits set. The DECbit congestion avoidance scheme's bias against bursty traffic would be corrected by DECbit congestion avoidance with selective feedback [28], which has been proposed with a fairness goal of dividing each resource equally among all of the users sharing it. This modification uses a selective feedback algorithm at the gateway. The gateway determines which users are using more than their "fair share" of the bandwidth, and only sets the congestion-indication bit in packets belonging to those users. We have not run simulations with this algorithm.

10 Identifying misbehaving users

In this section we show that RED gateways provide an efficient mechanism for identifying connections that use a large share of the bandwidth in times of congestion. Because RED gateways randomly choose packets to be marked during congestion, RED gateways could easily identify which connections have received a significant fraction of the recently-marked packets. When the number of marked packets is sufficiently large, a connection that has received a large share of the marked packets is also likely to be a connection that has received a large share of the bandwidth. This information could be used by higher policy layers to restrict the bandwidth of those connections during congestion.

The RED gateway notifies connections of congestion at the gateway by marking packets. With RED gateways, when a packet is marked, the probability of marking a packet from a particular connection is roughly proportional to that connection's current share of the bandwidth through the gateway. Note that this property does not hold for Drop Tail gateways, as demonstrated in Section 9.

For the rest of this section, we assume that each time the gateway marks a packet, the probability that a packet from a particular connection is marked exactly equals that connection's fraction of the bandwidth through the gateway. Assume that connection i has a fixed fraction p_i of the bandwidth through the gateway. Let S_{i,n} be the number of the n most-recently-marked packets that are from connection i. From the assumptions above, the expected value for S_{i,n} is n p_i.

From standard statistical results given in the appendix, S_{i,n} is unlikely to be much larger than its expected value for sufficiently large n:

Prob(S_{i,n} ≥ c p_i n) ≤ e^{-2n(c-1)^2 p_i^2}

for 1 ≤ c ≤ 1/p_i. The two lines in Figure 16 show the upper bound on the probability that a connection receives more than c times the expected number of marked packets, for c = 2, 4, and for n = 100; the x-axis shows p_i.
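As a rough numerical check of this bound (not part of the paper; the range and step of p_i are arbitrary illustration choices), the expression e^{-2n(c-1)^2 p_i^2} can be evaluated directly for the n = 100, c = 2, 4 case plotted in Figure 16:

```c
/* Evaluate the upper bound Prob(S_{i,n} >= c*p_i*n) <= exp(-2n(c-1)^2 p_i^2)
 * for n = 100 and c = 2, 4, over a few values of p_i. Illustrative only. */
#include <math.h>
#include <stdio.h>

int main(void) {
    const int n = 100;                 /* number of recently marked packets */
    const double c_vals[] = {2.0, 4.0};

    for (int j = 0; j < 2; j++) {
        double c = c_vals[j];
        printf("c = %.0f\n", c);
        for (double p = 0.05; p <= 0.50001; p += 0.05) {
            if (c > 1.0 / p)           /* the bound only applies for c <= 1/p_i */
                continue;
            double bound = exp(-2.0 * n * (c - 1.0) * (c - 1.0) * p * p);
            printf("  p_i = %.2f  bound = %.3g\n", p, bound);
        }
    }
    return 0;
}
```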

The RED gateway could easily keep a list of the n most recently-marked packets. If some connection has a large fraction of the marked packets, it is likely that the connection also had a large fraction of the average bandwidth. If some TCP connection is receiving a large fraction of the bandwidth, that connection could be a misbehaving host that is not following current TCP protocols, or simply a connection with either a shorter roundtrip time or a larger window than other active connections. In either case, if desired, the RED gateway could be modified to give lower priority to those connections that receive a large fraction of the bandwidth during times of congestion.
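One possible sketch of this bookkeeping in C, assuming a fixed-size history of connection identifiers and hypothetical names such as record_mark and marked_share (the paper does not prescribe a particular data structure):

```c
/* Hypothetical sketch: remember the connection ids of the N_MARKS most
 * recently marked packets in a circular buffer; marked_share() reports the
 * fraction of those packets that belong to a given connection. */
#define N_MARKS 100                  /* n: how many marked packets to remember */

static int mark_history[N_MARKS];    /* connection id of each marked packet */
static int mark_next = 0;            /* next slot to overwrite */
static int mark_count = 0;           /* how many slots are valid so far */

/* Called each time the gateway marks (or drops) a packet. */
void record_mark(int conn_id) {
    mark_history[mark_next] = conn_id;
    mark_next = (mark_next + 1) % N_MARKS;
    if (mark_count < N_MARKS)
        mark_count++;
}

/* Fraction of the recently marked packets belonging to conn_id. */
double marked_share(int conn_id) {
    int hits = 0;
    for (int i = 0; i < mark_count; i++)
        if (mark_history[i] == conn_id)
            hits++;
    return mark_count ? (double)hits / mark_count : 0.0;
}
```

A policy layer could compare marked_share(i) against a threshold such as c p_i before deciding to restrict a connection, in the spirit of the bound above.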

11 Implementation

This section considers efficient implementations of RED gateways. We show that the RED gateway algorithm can be implemented efficiently, with only a small number of add and shift instructions for each packet arrival. In addition, the RED gateway algorithm is not tightly coupled to packet forwarding and its computations do not have to be made in the time-critical packet forwarding path. Much of the work of the RED gateway algorithm, such as the computation of the average queue size and of the packet-marking probability p_b, could be performed in parallel with packet forwarding, or could be computed by the gateway as a lower-priority task as time permits. This means that the RED gateway algorithm need not impair the gateway's ability to process packets, and the RED gateway algorithm can be adapted to increasingly high-speed output lines.

If the RED gateway's method of marking packets is to set a congestion indication bit in the packet header, rather than dropping the arriving packet, then setting the congestion indication bit itself adds overhead to the gateway algorithm. However, because RED gateways are designed to mark as few packets as possible, the overhead of setting the congestion indication bit is kept to a minimum. This is unlike DECbit gateways, for example, which set the congestion indication bit in every packet that arrives at the gateway when the average queue size exceeds the threshold.

For every packet arrival at the gateway queue, the RED gateway calculates the average queue size. This can be implemented as follows:

avg ← avg + w_q (q − avg)

As long as w_q is chosen as a (negative) power of two, this can be implemented with one shift and two additions (given scaled versions of the parameters) [14].
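For instance, with w_q = 1/512 (close to the 0.002 used in the simulations) and the average held in a scaled fixed-point variable, the update reduces to a shift and two additions. The scaling constants in the sketch below are assumptions for illustration, not values taken from the paper or from [14]:

```c
/* Fixed-point sketch of  avg <- avg + w_q*(q - avg)  with w_q = 2^-WQ_SHIFT.
 * avg_fp holds the average queue size scaled by 2^AVG_SHIFT; q is the
 * instantaneous queue size in packets (already available at the gateway). */
#define WQ_SHIFT   9             /* w_q = 1/512, close to 0.002 */
#define AVG_SHIFT 16             /* fraction bits for the scaled average (assumed) */

static long avg_fp = 0;          /* avg * 2^AVG_SHIFT */

void update_avg(long q /* current queue size, packets */) {
    long q_fp = q << AVG_SHIFT;              /* scale q to match avg_fp */
    /* one shift and two additions; an arithmetic right shift is assumed
     * when (q_fp - avg_fp) is negative */
    avg_fp += (q_fp - avg_fp) >> WQ_SHIFT;
}
```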

Because the RED gateway computes the average queue size at packet arrivals, rather than at fixed time intervals, the calculation of the average queue size is modified when a packet arrives at the gateway to an empty queue. After the packet arrives at the gateway to an empty queue, the gateway calculates m, the number of packets that might have been transmitted by the gateway during the time that the line was free. The gateway calculates the average queue size as if m packets had arrived at the gateway with a queue size of zero. The calculation is as follows:

m ← (time − q_time)/s
avg ← (1 − w_q)^m · avg

where q_time is the start of the queue idle time, and s is a typical transmission time for a small packet. This entire calculation is an approximation, as it is based on the number of packets that might have arrived at the gateway during the time that the line was free.
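A sketch of this adjustment in floating point follows; an actual gateway would more likely use the scaled integer arithmetic above, for example with a small table of precomputed (1 − w_q)^m values. The constant S_TX and the function names are assumptions for illustration:

```c
/* Sketch of the empty-queue adjustment: estimate m, the packets the line
 * could have sent while idle, and age the average as if m packets had
 * arrived to an empty queue.  W_Q and S_TX are assumed example values. */
#include <math.h>

#define W_Q   0.002              /* queue weight, as in the simulations */
#define S_TX  0.001              /* typical small-packet transmission time, seconds (assumed) */

static double avg;               /* average queue size, packets */
static double q_time;            /* time at which the queue went idle */

/* Record the start of the idle period when the queue empties. */
void queue_went_idle(double now) {
    q_time = now;
}

/* Called when a packet arrives to an empty queue. */
void packet_arrival_to_empty_queue(double now) {
    double m = (now - q_time) / S_TX;   /* packets the line could have sent */
    avg *= pow(1.0 - W_Q, m);           /* m updates with a queue size of zero */
}
```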
