Linear programming bounds for unitary space-time codes
A Mathematical Theory of Communication

Reprinted with corrections from The Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, July, October, 1948.

A Mathematical Theory of Communication

By C. E. SHANNON

INTRODUCTION

The recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication. A basis for such a theory is contained in the important papers of Nyquist(1) and Hartley(2) on this subject. In the present paper we will extend the theory to include a number of new factors, in particular the effect of noise in the channel, and the savings possible due to the statistical structure of the original message and due to the nature of the final destination of the information.

(1) Nyquist, H., "Certain Factors Affecting Telegraph Speed," Bell System Technical Journal, April 1924, p. 324; "Certain Topics in Telegraph Transmission Theory," A.I.E.E. Trans., v. 47, April 1928, p. 617.
(2) Hartley, R. V. L., "Transmission of Information," Bell System Technical Journal, July 1928, p. 535.

The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages. The system must be designed to operate for each possible selection, not just the one which will actually be chosen since this is unknown at the time of design.

If the number of messages in the set is finite then this number or any monotonic function of this number can be regarded as a measure of the information produced when one message is chosen from the set, all choices being equally likely. As was pointed out by Hartley the most natural choice is the logarithmic function. Although this definition must be generalized considerably when we consider the influence of the statistics of the message and when we have a continuous range of messages, we will in all cases use an essentially logarithmic measure.

The logarithmic measure is more convenient for various reasons:

1. It is practically more useful. Parameters of engineering importance such as time, bandwidth, number of relays, etc., tend to vary linearly with the logarithm of the number of possibilities. For example, adding one relay to a group doubles the number of possible states of the relays. It adds 1 to the base 2 logarithm of this number. Doubling the time roughly squares the number of possible messages, or doubles the logarithm, etc.

2. It is nearer to our intuitive feeling as to the proper measure. This is closely related to (1) since we intuitively measure entities by linear comparison with common standards. One feels, for example, that two punched cards should have twice the capacity of one for information storage, and two identical channels twice the capacity of one for transmitting information.

3. It is mathematically more suitable. Many of the limiting operations are simple in terms of the logarithm but would require clumsy restatement in terms of the number of possibilities.

The choice of a logarithmic base corresponds to the choice of a unit for measuring information. If the base 2 is used the resulting units may be called binary digits, or more briefly bits, a word suggested by J. W. Tukey. A device with two stable positions, such as a relay or a flip-flop circuit, can store one bit of information. N such devices can store N bits, since the total number of possible states is $2^N$ and $\log_2 2^N = N$. If the base 10 is used the units may be called decimal digits. Since

\[ \log_2 M = \frac{\log_{10} M}{\log_{10} 2} = 3.32 \log_{10} M, \]
a decimal digit is about 3 1/3 bits. By a communication system we will mean a system of the type indicated schematically in Fig. 1, consisting essentially of five parts: an information source, a transmitter, the channel, the receiver, and the destination. To consider general problems involving communication systems it is first necessary to represent the various elements involved as mathematical entities, suitably idealized from their physical counterparts.

Fig. 1—Schematic diagram of a general communication system. (Blocks: INFORMATION SOURCE, TRANSMITTER, RECEIVER, DESTINATION, with a NOISE SOURCE acting between transmitter and receiver.)

We may roughly classify communication systems into three main categories: discrete, continuous and mixed. By a discrete system we will mean one in which both the message and the signal are a sequence of discrete symbols. A typical case is telegraphy where the message is a sequence of letters and the signal a sequence of dots, dashes and spaces. A continuous system is one in which the message and signal are both treated as continuous functions, e.g., radio or television. A mixed system is one in which both discrete and continuous variables appear, e.g., PCM transmission of speech.

We first consider the discrete case. This case has applications not only in communication theory, but also in the theory of computing machines, the design of telephone exchanges and other fields. In addition the discrete case forms a foundation for the continuous and mixed cases which will be treated in the second half of the paper.

PART I: DISCRETE NOISELESS SYSTEMS

1. THE DISCRETE NOISELESS CHANNEL

Teletype and telegraphy are two simple examples of a discrete channel for transmitting information. Generally, a discrete channel will mean a system whereby a sequence of choices from a finite set of elementary symbols $S_1, \dots, S_n$ can be transmitted from one point to another. Each of the symbols $S_i$ is assumed to have a certain duration in time $t_i$ seconds (not necessarily the same for different $S_i$, for example the dots and dashes in telegraphy). It is not required that all possible sequences of the $S_i$ be capable of transmission on the system; certain sequences only may be allowed. These will be possible signals for the channel. Thus in telegraphy suppose the symbols are: (1) A dot, consisting of line closure for a unit of time and then line open for a unit of time; (2) A dash, consisting of three time units of closure and one unit open; (3) A letter space consisting of, say, three units of line open; (4) A word space of six units of line open. We might place the restriction on allowable sequences that no spaces follow each other (for if two letter spaces are adjacent, it is identical with a word space). The question we now consider is how one can measure the capacity of such a channel to transmit information.

In the teletype case where all symbols are of the same duration, and any sequence of the 32 symbols is allowed the answer is easy. Each symbol represents five bits of information. If the system transmits n symbols per second it is natural to say that the channel has a capacity of 5n bits per second. This does not mean that the teletype channel will always be transmitting information at this rate—this is the maximum possible rate and whether or not the actual rate reaches this maximum depends on the source of information which feeds the channel, as will appear later.

In the more general case with different lengths of symbols and constraints on the allowed sequences, we make the following definition:

Definition: The capacity C of a discrete channel is given by

\[ C = \lim_{T \to \infty} \frac{\log N(T)}{T} \]

where N(T) is the number of allowed signals of duration T. If all sequences of the symbols $S_1, \dots, S_n$ are allowed and these symbols have durations $t_1, \dots, t_n$, the number of signals of duration t satisfies the difference equation $N(t) = N(t - t_1) + N(t - t_2) + \cdots + N(t - t_n)$, so that N(t) is asymptotic for large t to $X_0^t$ where $X_0$ is the largest real solution of the characteristic equation $X^{-t_1} + X^{-t_2} + \cdots + X^{-t_n} = 1$, and therefore

\[ C = \log X_0. \]

In case there are restrictions on allowed sequences we may still often obtain a difference equation of this type and find C from the characteristic equation. In the telegraphy case mentioned above

\[ N(t) = N(t-2) + N(t-4) + N(t-5) + N(t-7) + N(t-8) + N(t-10) \]

as we see by counting sequences of symbols according to the last or next to the last symbol occurring. Hence C is $\log \mu_0$ where $\mu_0$ is the positive root of $1 = \mu^{-2} + \mu^{-4} + \mu^{-5} + \mu^{-7} + \mu^{-8} + \mu^{-10}$. Solving this we find C = 0.539.
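The root of this characteristic equation is easy to check numerically. The short sketch below, in R (the language of the code examples later in this compilation), solves the telegraph case with uniroot and recovers C in bits per unit time; the bracketing interval is an assumption chosen to contain the root.

    # Telegraph channel capacity: C = log2(mu0), with mu0 the positive root of
    # 1 = mu^-2 + mu^-4 + mu^-5 + mu^-7 + mu^-8 + mu^-10 (durations from the text).
    durations <- c(2, 4, 5, 7, 8, 10)
    f <- function(mu) sum(mu^(-durations)) - 1
    mu0 <- uniroot(f, c(1.01, 2))$root
    log2(mu0)   # approximately 0.539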
A very general type of restriction which may be placed on allowed sequences is the following: We imagine a number of possible states $a_1, a_2, \dots, a_m$. For each state only certain symbols from the set $S_1, \dots, S_n$ can be transmitted (different subsets for the different states). When one of these has been transmitted the state changes to a new state depending both on the old state and the particular symbol transmitted. The telegraph case is a simple example of this. There are two states depending on whether or not a space was the last symbol transmitted. If so, then only a dot or a dash can be sent next and the state always changes. If not, any symbol can be transmitted and the state changes if a space is sent, otherwise it remains the same. The conditions can be indicated in a linear graph as shown in Fig. 2.

Fig. 2—Graphical representation of the constraints on telegraph symbols.

The junction points correspond to the states and the lines indicate the symbols possible in a state and the resulting state. In Appendix 1 it is shown that if the conditions on allowed sequences can be described in this form C will exist and can be calculated in accordance with the following result:

Theorem 1: Let $b_{ij}^{(s)}$ be the duration of the s-th symbol which is allowable in state i and leads to state j. Then the channel capacity C is equal to $\log W$ where W is the largest real root of the determinant equation

\[ \left| \sum_s W^{-b_{ij}^{(s)}} - \delta_{ij} \right| = 0 \]

where $\delta_{ij} = 1$ if $i = j$ and is zero otherwise. For example, in the telegraph case (Fig. 2) the determinant is

\[ \begin{vmatrix} -1 & W^{-2} + W^{-4} \\ W^{-3} + W^{-6} & W^{-2} + W^{-4} - 1 \end{vmatrix} = 0. \]

On expansion this leads to the equation given above for this case.

2. THE DISCRETE SOURCE OF INFORMATION

We have seen that under very general conditions the logarithm of the number of possible signals in a discrete channel increases linearly with time. The capacity to transmit information can be specified by giving this rate of increase, the number of bits per second required to specify the particular signal used.

We now consider the information source. How is an information source to be described mathematically, and how much information in bits per second is produced in a given source? The main point at issue is the effect of statistical knowledge about the source in reducing the required capacity of the channel, by the use of proper encoding of the information. In telegraphy, for example, the messages to be transmitted consist of sequences of letters. These sequences, however, are not completely random. In general, they form sentences and have the statistical structure of, say, English. The letter E occurs more frequently than Q, the sequence TH more frequently than XP, etc. The existence of this structure allows one to make a saving in time (or channel capacity) by properly encoding the message sequences into signal sequences. This is already done to a limited extent in telegraphy by using the shortest channel symbol, a dot, for the most common English letter E; while the infrequent letters, Q, X, Z are represented by longer sequences of dots and dashes. This idea is carried still further in certain commercial codes where common words and phrases are represented by four- or five-letter code groups with a considerable saving in average time. The standardized greeting and anniversary telegrams now in use extend this to the point of encoding a sentence or two into a relatively short sequence of numbers.

We can think of a discrete source as generating the message, symbol by
symbol. It will choose successive symbols according to certain probabilities depending, in general, on preceding choices as well as the particular symbols in question. A physical system, or a mathematical model of a system which produces such a sequence of symbols governed by a set of probabilities, is known as a stochastic process.(3) We may consider a discrete source, therefore, to be represented by a stochastic process. Conversely, any stochastic process which produces a discrete sequence of symbols chosen from a finite set may be considered a discrete source. This will include such cases as:

1. Natural written languages such as English, German, Chinese.

2. Continuous information sources that have been rendered discrete by some quantizing process. For example, the quantized speech from a PCM transmitter, or a quantized television signal.

3. Mathematical cases where we merely define abstractly a stochastic process which generates a sequence of symbols.

(3) See, for example, S. Chandrasekhar, "Stochastic Problems in Physics and Astronomy," Reviews of Modern Physics, v. 15, No. 1, January 1943, p. 1.

The following are examples of this last type of source.

(A) Suppose we have five letters A, B, C, D, E which are chosen each with probability .2, successive choices being independent. This would lead to a sequence of which the following is a typical example: B D C B C E C C C A D C B D D A A E C E E A A B B D A E E C A C E E B A E E C B C E A D. This was constructed with the use of a table of random numbers.(4)

(4) Kendall and Smith, Tables of Random Sampling Numbers, Cambridge, 1939.

(B) Using the same five letters let the probabilities be .4, .1, .2, .2, .1, respectively, with successive choices independent. A typical message from this source is then: A A A C D C B D C E A A D A D A C E D A E A D C A B E D A D D C E C A A A A A D.

(C) A more complicated structure is obtained if successive symbols are not chosen independently but their probabilities depend on preceding letters. In the simplest case of this type a choice depends only on the preceding letter and not on ones before that. The statistical structure can then be described by a set of transition probabilities $p_i(j)$, the probability that letter i is followed by letter j. The indices i and j range over all the possible symbols. A second equivalent way of specifying the structure is to give the "digram" probabilities $p(i, j)$, i.e., the relative frequency of the digram i j. The letter frequencies $p(i)$ (the probability of letter i), the transition probabilities $p_i(j)$ and the digram probabilities $p(i, j)$ are related by the following formulas:

\[ p(i) = \sum_j p(i, j) = \sum_j p(j, i) = \sum_j p(j)\,p_j(i), \qquad p(i, j) = p(i)\,p_i(j), \qquad \sum_j p_i(j) = \sum_i p(i) = \sum_{i, j} p(i, j) = 1. \]

As a specific example suppose there are three letters A, B, C with the probability tables:

    p_i(j)    j = A    B      C          p(i, j)   j = A    B      C
    i = A       0      4/5    1/5        i = A       0      4/15   1/15
    i = B       1/2    1/2    0          i = B       8/27   8/27   0
    i = C       1/2    2/5    1/10       i = C       1/27   4/135  1/135

A typical message from this source is the following: A B B A B A B A B A B A B A B B B A B B B B B A B A B A B A B A B B B A C A C A B B A B B B B A B B A B A C B B B A B A.

The next increase in complexity would involve trigram frequencies but no more. The choice of a letter would depend on the preceding two letters but not on the message before that point. A set of trigram frequencies $p(i, j, k)$ or equivalently a set of transition probabilities $p_{ij}(k)$ would be required. Continuing in this way one obtains successively more complicated stochastic processes. In the general n-gram case a set of n-gram probabilities $p(i_1, i_2, \dots, i_n)$ or of transition probabilities $p_{i_1 i_2 \cdots i_{n-1}}(i_n)$ is required to specify the statistical structure.

(D) Stochastic processes can also be defined which produce a text consisting of a sequence of
"words." Suppose there are five letters A, B, C, D, E and 16 "words" in the language with associated probabilities:

    .10 A       .16 BEBE    .11 CABED   .04 DEB
    .04 ADEB    .04 BED     .05 CEED    .15 DEED
    .05 ADEE    .02 BEED    .08 DAB     .01 EAB
    .01 BADD    .05 CA      .04 DAD     .05 EE

Suppose successive "words" are chosen independently and are separated by a space. A typical message might be: DAB EE A BEBE DEED DEB ADEE ADEE EE DEB BEBE BEBE BEBE ADEE BED DEED DEED CEED ADEE A DEED DEED BEBE CABED BEBE BED DAB DEED ADEB. If all the words are of finite length this process is equivalent to one of the preceding type, but the description may be simpler in terms of the word structure and probabilities. We may also generalize here and introduce transition probabilities between words, etc.

These artificial languages are useful in constructing simple problems and examples to illustrate various possibilities. We can also approximate to a natural language by means of a series of simple artificial languages. The zero-order approximation is obtained by choosing all letters with the same probability and independently. The first-order approximation is obtained by choosing successive letters independently but each letter having the same probability that it has in the natural language.(5) Thus, in the first-order approximation to English, E is chosen with probability .12 (its frequency in normal English) and W with probability .02, but there is no influence between adjacent letters and no tendency to form the preferred digrams such as TH, ED, etc. In the second-order approximation, digram structure is introduced. After a letter is chosen, the next one is chosen in accordance with the frequencies with which the various letters follow the first one. This requires a table of digram frequencies $p_i(j)$. In the third-order approximation, trigram structure is introduced. Each letter is chosen with probabilities which depend on the preceding two letters.

(5) Letter, digram and trigram frequencies are given in Secret and Urgent by Fletcher Pratt, Blue Ribbon Books, 1939. Word frequencies are tabulated in Relative Frequency of English Speech Sounds, G. Dewey, Harvard University Press, 1923.

3. THE SERIES OF APPROXIMATIONS TO ENGLISH

To give a visual idea of how this series of processes approaches a language, typical sequences in the approximations to English have been constructed and are given below. In all cases we have assumed a 27-symbol "alphabet," the 26 letters and a space.

1. Zero-order approximation (symbols independent and equiprobable): XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD.

2. First-order approximation (symbols independent but with frequencies of English text): OCRO HLI RGWR NMIELWIS EU LL NBNESEBY A TH EEI ALHENHTTPA OOBTTV ANAH BRL.

3. Second-order approximation (digram structure as in English): ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE.

4. Third-order approximation (trigram structure as in English): IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE.

5. First-order word approximation. Rather than continue with tetragram, …, n-gram structure it is easier and better to jump at this point to word units. Here words are chosen independently but with their appropriate frequencies. REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE.

6. Second-order word approximation. The word transition probabilities are correct but no further structure is included. THE HEAD AND IN
FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.

The resemblance to ordinary English text increases quite noticeably at each of the above steps. Note that these samples have reasonably good structure out to about twice the range that is taken into account in their construction. Thus in (3) the statistical process insures reasonable text for two-letter sequences, but four-letter sequences from the sample can usually be fitted into good sentences. In (6) sequences of four or more words can easily be placed in sentences without unusual or strained constructions. The particular sequence of ten words "attack on an English writer that the character of this" is not at all unreasonable. It appears then that a sufficiently complex stochastic process will give a satisfactory representation of a discrete source.

The first two samples were constructed by the use of a book of random numbers in conjunction with (for example 2) a table of letter frequencies. This method might have been continued for (3), (4) and (5), since digram, trigram and word frequency tables are available, but a simpler equivalent method was used. To construct (3) for example, one opens a book at random and selects a letter at random on the page. This letter is recorded. The book is then opened to another page and one reads until this letter is encountered. The succeeding letter is then recorded. Turning to another page this second letter is searched for and the succeeding letter recorded, etc. A similar process was used for (4), (5) and (6). It would be interesting if further approximations could be constructed, but the labor involved becomes enormous at the next stage.
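The book-sampling shortcut can also be mimicked directly from a transition table. The R sketch below draws a sequence from the source of example C above; the seed, starting letter and sequence length are arbitrary choices for illustration.

    # Sample a letter sequence from example C's transition probabilities p_i(j).
    P <- matrix(c(0,   4/5, 1/5,
                  1/2, 1/2, 0,
                  1/2, 2/5, 1/10),
                nrow = 3, byrow = TRUE,
                dimnames = list(c("A", "B", "C"), c("A", "B", "C")))
    set.seed(1)
    n <- 60
    out <- character(n)
    state <- "A"
    for (t in 1:n) {
      state <- sample(colnames(P), 1, prob = P[state, ])  # next letter given current
      out[t] <- state
    }
    paste(out, collapse = " ")   # a typical message, B-heavy like the text's sample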
4. GRAPHICAL REPRESENTATION OF A MARKOFF PROCESS

Stochastic processes of the type described above are known mathematically as discrete Markoff processes and have been extensively studied in the literature.(6) The general case can be described as follows: There exist a finite number of possible "states" of a system; $S_1, S_2, \dots, S_n$. In addition there is a set of transition probabilities, $p_i(j)$, the probability that if the system is in state $S_i$ it will next go to state $S_j$. To make this Markoff process into an information source we need only assume that a letter is produced for each transition from one state to another. The states will correspond to the "residue of influence" from preceding letters.

(6) For a detailed treatment see M. Fréchet, Méthode des fonctions arbitraires. Théorie des événements en chaîne dans le cas d'un nombre fini d'états possibles. Paris, Gauthier-Villars, 1938.

The situation can be represented graphically as shown in Figs. 3, 4 and 5. The "states" are the junction points in the graph and the probabilities and letters produced for a transition are given beside the corresponding line. Figure 3 is for the example B in Section 2, while Fig. 4 corresponds to the example C. In Fig. 3 there is only one state since successive letters are independent. In Fig. 4 there are as many states as letters. If a trigram example were constructed there would be at most $n^2$ states corresponding to the possible pairs of letters preceding the one being chosen. Figure 5 is a graph for the case of word structure in example D. Here S corresponds to the "space" symbol.

Fig. 3—A graph corresponding to the source in example B.
Fig. 4—A graph corresponding to the source in example C.

5. ERGODIC AND MIXED SOURCES

As we have indicated above a discrete source for our purposes can be considered to be represented by a Markoff process. Among the possible discrete Markoff processes there is a group with special properties of significance in communication theory. This special class consists of the "ergodic" processes and we shall call the corresponding sources ergodic sources. Although a rigorous definition of an ergodic process is somewhat involved, the general idea is simple. In an ergodic process every sequence produced by the process is the same in statistical properties. Thus the letter frequencies, digram frequencies, etc., obtained from particular sequences, will, as the lengths of the sequences increase, approach definite limits independent of the particular sequence. Actually this is not true of every sequence but the set for which it is false has probability zero. Roughly the ergodic property means statistical homogeneity.

All the examples of artificial languages given above are ergodic. This property is related to the structure of the corresponding graph. If the graph has the following two properties(7) the corresponding process will be ergodic:

(7) These are restatements in terms of the graph of conditions given in Fréchet.

1. The graph does not consist of two isolated parts A and B such that it is impossible to go from junction points in part A to junction points in part B along lines of the graph in the direction of arrows and also impossible to go from junctions in part B to junctions in part A.

2. A closed series of lines in the graph with all arrows on the lines pointing in the same orientation will be called a "circuit." The "length" of a circuit is the number of lines in it. Thus in Fig. 5 series BEBES is a circuit of length 5. The second property required is that the greatest common divisor of the lengths of all circuits in the graph be one.

If the first condition is satisfied but the second one violated by having the greatest common divisor equal to d > 1, the sequences have a certain type of periodic structure. The various sequences fall into d different classes which are statistically the same apart from a shift of the origin (i.e., which letter in the sequence is called letter 1). By a shift of from 0 up to d − 1 any sequence can be made statistically equivalent to any other. A simple example with d = 2 is the following: There are three possible letters a, b, c. Letter a is followed with either b or c with probabilities 1/3 and 2/3 respectively. Either b or c is always followed by letter a. Thus a typical sequence is

a b a c a c a c a b a c a b a b a c a c.

This type of situation is not of much importance for our work.

If the first condition is violated the graph may be separated into a set of subgraphs each of which satisfies the first condition. We will assume that the second condition is also satisfied for each subgraph. We have in this case what may be called a "mixed" source made up of a number of pure components. The components correspond to the various subgraphs. If $L_1$, $L_2$, $L_3, \dots$ are the component sources we may write

\[ L = p_1 L_1 + p_2 L_2 + p_3 L_3 + \cdots \]

where $p_i$ is the probability of the component source $L_i$.

Physically the situation represented is this: There are several different sources $L_1$, $L_2$, $L_3, \dots$ which are each of homogeneous statistical structure (i.e., they are ergodic). We do not know a priori which is to be used, but once the sequence
starts in a given pure component $L_i$, it continues indefinitely according to the statistical structure of that component. As an example one may take two of the processes defined above and assume $p_1 = .2$ and $p_2 = .8$. A sequence from the mixed source

\[ L = .2\,L_1 + .8\,L_2 \]

would be obtained by choosing first $L_1$ or $L_2$ with probabilities .2 and .8 and after this choice generating a sequence from whichever was chosen.

Except when the contrary is stated we shall assume a source to be ergodic. This assumption enables one to identify averages along a sequence with averages over the ensemble of possible sequences (the probability of a discrepancy being zero). For example the relative frequency of the letter A in a particular infinite sequence will be, with probability one, equal to its relative frequency in the ensemble of sequences.

If $P_i$ is the probability of state i and $p_i(j)$ the transition probability to state j, then for the process to be stationary it is clear that the $P_i$ must satisfy equilibrium conditions:

\[ P_j = \sum_i P_i\, p_i(j). \]

In the ergodic case it can be shown that with any starting conditions the probabilities $P_j(N)$ of being in state j after N symbols approach the equilibrium values as $N \to \infty$.

6. CHOICE, UNCERTAINTY AND ENTROPY

We have represented a discrete information source as a Markoff process. Can we define a quantity which will measure, in some sense, how much information is "produced" by such a process, or better, at what rate information is produced?

Suppose we have a set of possible events whose probabilities of occurrence are $p_1, p_2, \dots, p_n$. These probabilities are known but that is all we know concerning which event will occur. Can we find a measure of how much "choice" is involved in the selection of the event or of how uncertain we are of the outcome? If there is such a measure, say $H(p_1, p_2, \dots, p_n)$, it is reasonable to require of it the following properties:

1. H should be continuous in the $p_i$.

2. If all the $p_i$ are equal, $p_i = 1/n$, then H should be a monotonic increasing function of n. With equally likely events there is more choice, or uncertainty, when there are more possible events.

3. If a choice be broken down into two successive choices, the original H should be the weighted sum of the individual values of H. The meaning of this is illustrated in Fig. 6. At the left we have three possibilities $p_1 = 1/2$, $p_2 = 1/3$, $p_3 = 1/6$. On the right we first choose between two possibilities each with probability 1/2, and if the second occurs make another choice with probabilities 2/3, 1/3. The final results have the same probabilities as before. We require, in this special case, that $H(\tfrac12, \tfrac13, \tfrac16) = H(\tfrac12, \tfrac12) + \tfrac12 H(\tfrac23, \tfrac13)$. The coefficient 1/2 is because this second choice only occurs half the time.

In Appendix 2, the following result is established:

Theorem 2: The only H satisfying the three above assumptions is of the form

\[ H = -K \sum_{i=1}^{n} p_i \log p_i \]

where K is a positive constant.

This theorem, and the assumptions required for its proof, are in no way necessary for the present theory. It is given chiefly to lend a certain plausibility to some of our later definitions. The real justification of these definitions, however, will reside in their implications.

Quantities of the form $H = -\sum p_i \log p_i$ (the constant K merely amounts to a choice of a unit of measure) play a central role in information theory as measures of information, choice and uncertainty. The form of H will be recognized as that of entropy as defined in certain formulations of statistical mechanics(8) where $p_i$ is the probability of a system being in cell i of its phase space. H is then, for example, the H in Boltzmann's famous H theorem. We shall call $H = -\sum p_i \log p_i$ the entropy of the set of probabilities $p_1, \dots, p_n$. If x is a chance variable we will write H(x) for its entropy; thus x is not an argument of a function but a label for a number, to differentiate it from H(y) say, the entropy of the chance variable y.

The entropy in the case of two possibilities with probabilities p and q = 1 − p, namely

\[ H = -(p \log p + q \log q), \]

is plotted in Fig. 7 as a function of p.

Fig. 7—Entropy in the case of two possibilities with probabilities p and (1 − p). (The curve plots H in bits against p.)

The quantity H has a number of interesting properties which further substantiate it as a reasonable measure of choice or information.

1. H = 0 if and
only if all the $p_i$ but one are zero, this one having the value unity. Thus only when we are certain of the outcome does H vanish. Otherwise H is positive.

2. For a given n, H is a maximum and equal to $\log n$ when all the $p_i$ are equal (i.e., $1/n$). This is also intuitively the most uncertain situation.
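As a small illustration of these two properties, the R snippet below evaluates the entropy function in bits on the distributions used in the examples above; the function name H is chosen to mirror the text's notation.

    # H in bits; terms with p = 0 contribute 0 (consistent with property 1).
    H <- function(p) { p <- p[p > 0]; -sum(p * log2(p)) }
    H(c(1, 0, 0, 0, 0))        # 0 bits: outcome certain
    H(c(.4, .1, .2, .2, .1))   # about 2.12 bits: the source of example B
    H(rep(.2, 5))              # log2(5) = 2.32 bits: the equiprobable maximum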
Operations Research (5th Edition): Solutions to Exercises

Solutions to the Operations Research exercises, Chapter 1 (p. 39). 1.1 Solve the following linear programming problems graphically, and state whether each problem has a unique optimal solution, infinitely many optimal solutions, an unbounded solution, or no feasible solution.
(1) max z = x1 + 3x2, subject to 5x1 + 10x2 ≤ 50; x1 + x2 ≥ 1; x2 ≤ 4; x1, x2 ≥ 0.
(2) min z = x1 + 1.5x2, subject to x1 + 3x2 ≥ 3; x1 + x2 ≥ 2; x1, x2 ≥ 0.
(3) max z = 2x1 + 2x2, subject to x1 − x2 ≥ −1; −0.5x1 + x2 ≤ 2; x1, x2 ≥ 0.
(4) max z = x1 + x2, subject to x1 − x2 ≥ 0; 3x1 − x2 ≤ −3; x1, x2 ≥ 0.

Solution: (1) (graph omitted) a unique optimal solution, max z = 14. (2) (graph omitted) a unique optimal solution, min z = 9/4. (3) (graph omitted) unbounded. (4) (graph omitted) no feasible solution. A solver-based cross-check of these four answers appears after this item.

1.2 Convert the following linear programming problems into standard form and give the initial simplex tableau.
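The graphical answers above can be cross-checked with the Rsymphony interface documented later in this compilation. One assumption: the objective of (1) is read here as z = x1 + 3x2, which is consistent with the stated optimum of 14 attained at (2, 4).

    library(Rsymphony)
    # (1) unique optimum: objval 14 at (2, 4)
    Rsymphony_solve_LP(c(1, 3), matrix(c(5, 10, 1, 1, 0, 1), nrow = 3, byrow = TRUE),
                       c("<=", ">=", "<="), c(50, 1, 4), max = TRUE)
    # (2) unique optimum: objval 2.25 = 9/4
    Rsymphony_solve_LP(c(1, 1.5), matrix(c(1, 3, 1, 1), nrow = 2, byrow = TRUE),
                       c(">=", ">="), c(3, 2))
    # (3) unbounded and (4) infeasible: the solver returns a nonzero status code
    Rsymphony_solve_LP(c(2, 2), matrix(c(1, -1, -0.5, 1), nrow = 2, byrow = TRUE),
                       c(">=", "<="), c(-1, 2), max = TRUE)
    Rsymphony_solve_LP(c(1, 1), matrix(c(1, -1, 3, -1), nrow = 2, byrow = TRUE),
                       c(">=", "<="), c(0, -3), max = TRUE)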
(1) min z = −3x1 + 4x2 − 2x3 + 5x4, subject to 4x1 − x2 + 2x3 − x4 = −2; x1 + x2 + 3x3 − x4 ≤ 14; −2x1 + 3x2 − x3 + 2x4 ≥ 2; x1, x2, x3 ≥ 0, x4 unrestricted in sign.

(2) max z = Σ_{i=1}^{n} Σ_{k=1}^{m} (a_ik / p_k) x_ik, subject to Σ_{k=1}^{m} x_ik = 1 (i = 1, …, n); x_ik ≥ 0 (i = 1, …, n; k = 1, …, m).

(1) Solution: Let z = −z′ and x4 = x5 − x6 with x5, x6 ≥ 0. Multiplying the first constraint by −1 to make its right-hand side nonnegative, and adding slack variable x7, surplus variable x8 and artificial variables x9, x10, gives the standard form:

max z′ = 3x1 − 4x2 + 2x3 − 5(x5 − x6) + 0x7 + 0x8 − Mx9 − Mx10
s.t. −4x1 + x2 − 2x3 + x5 − x6 + x10 = 2
     x1 + x2 + 3x3 − x5 + x6 + x7 = 14
     −2x1 + 3x2 − x3 + 2x5 − 2x6 − x8 + x9 = 2
     x1, x2, x3, x5, x6, x7, x8, x9, x10 ≥ 0.

Initial simplex tableau (objective row c_j: 3, −4, 2, −5, 5, 0, 0, −M, −M):

    Basis  c_B   b    x1     x2     x3     x5     x6     x7   x8   x9   x10   θ
    x10    −M    2    −4     1      −2     1      −1     0    0    0    1     2
    x7     0     14   1      1      3      −1     1      1    0    0    0     14
    x9     −M    2    −2     [3]    −1     2      −2     0    −1   1    0     2/3
    −z′          4M   3−6M   4M−4   2−3M   3M−5   5−3M   0    −M   0    0

(2) Solution: Add artificial variables x̄1, …, x̄n with objective coefficient −M, where M is an arbitrarily large positive number:

max s = Σ_{i=1}^{n} Σ_{k=1}^{m} (a_ik / p_k) x_ik − M x̄1 − M x̄2 − ⋯ − M x̄n
s.t. Σ_{k=1}^{m} x_ik + x̄i = 1 (i = 1, 2, …, n); x_ik ≥ 0, x̄i ≥ 0 (i = 1, …, n; k = 1, …, m).

The initial tableau takes x̄1, …, x̄n as the basis, each with b = 1; the reduced cost of each column x_ik is a_ik/p_k + M and that of each artificial column is 0.

1.3 Find all basic solutions satisfying the constraints of the linear programming problem below.
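As a numerical sanity check on the reformulation of problem (1): after splitting the free variable as x4 = x5 − x6 with x5, x6 ≥ 0, every variable is nonnegative and the problem can be handed to a solver directly (the inequality rows are passed as such rather than via explicit slack columns). This is a check under that assumption, not the simplex iteration itself.

    library(Rsymphony)
    mat <- matrix(c( 4, -1,  2, -1,
                     1,  1,  3, -1,
                    -2,  3, -1,  2), nrow = 3, byrow = TRUE)
    mat2 <- cbind(mat, -mat[, 4])     # append -column 4: x4 becomes x5 - x6
    obj2 <- c(-3, 4, -2, 5, -5)
    sol <- Rsymphony_solve_LP(obj2, mat2, c("==", "<=", ">="), c(-2, 14, 2))
    sol$objval                                               # min z = 2
    c(sol$solution[1:3], sol$solution[4] - sol$solution[5])  # (0, 8, 0, -6)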
An Analysis of Channel Estimation in MIMO-OFDM Systems

Title: Research on Channel Estimation and Signal Detection Algorithms in MIMO-OFDM Systems. Statement of Originality (or Innovation): I declare that the thesis submitted is the product of research work carried out by myself under the guidance of my supervisor, together with the research results obtained.
To the best of my knowledge, except for content specifically cited in the text and listed in the acknowledgements, this thesis contains no research results previously published or written by others, nor any material used to obtain a degree or certificate at Beijing University of Posts and Telecommunications or any other educational institution.
Any contribution made to this research by colleagues who worked with me is explicitly acknowledged in the thesis.
Should the thesis or the application materials contain any false statement, I accept all related responsibility.
Signature: ___. Statement on Thesis Use Authorization: The author of this thesis fully understands the regulations of Beijing University of Posts and Telecommunications on retaining and using degree theses, namely: the intellectual property rights of thesis work done by a graduate student while enrolled belong to Beijing University of Posts and Telecommunications.
The university has the right to retain copies and disks of the thesis and to submit them to relevant national departments or institutions, and to allow the thesis to be consulted and borrowed; the university may publish all or part of the thesis, and may preserve and compile it by photocopying, reduced-format printing, or other means of reproduction.
(Classified theses follow these provisions after declassification.) This thesis does not fall within the scope of confidentiality, and this authorization applies.
Signature: ___. Abstract: Research on Channel Estimation and Signal Detection Algorithms in MIMO-OFDM Systems. Multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) are the two core technologies of LTE.
MIMO technology uses the diversity gains provided by various diversity techniques to increase the system's channel capacity, data transmission rate and spectral efficiency, all without increasing the system bandwidth or the transmit power. OFDM is a multi-carrier modulation technique whose physical channel consists of a number of parallel orthogonal subchannels, so it can effectively combat frequency-selective fading; at the same time, inserting a cyclic prefix (CP) effectively eliminates the inter-symbol interference (ISI) caused by multipath.
Because of MIMO's advantage in increasing system capacity and OFDM's advantage in combating multipath fading, MIMO-OFDM systems combining the two have attracted wide attention.
Channel estimation algorithms and signal detection algorithms are key technologies of MIMO-OFDM systems.
Among them, the channel estimation algorithm plays a crucial role in coherent demodulation and space-time detection at the MIMO-OFDM receiver, and the accuracy of channel estimation affects the overall performance of the system.
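To make the idea of pilot-aided estimation concrete, here is a toy R sketch of least-squares (LS) channel estimation at comb-type pilot tones for one OFDM symbol on a single TX-RX antenna pair. All parameters (64 subcarriers, a 4-tap channel, pilot spacing 8, the noise level) are illustrative assumptions, not values from the thesis.

    set.seed(7)
    N <- 64
    pilots <- seq(1, N, by = 8)                     # comb-type pilot positions
    h <- complex(real = rnorm(4), imaginary = rnorm(4)) / sqrt(8)  # 4-tap channel
    Hf <- fft(c(h, rep(0, N - 4)))                  # true frequency response
    X <- sample(c(-1, 1), N, replace = TRUE)        # BPSK; pilot symbols known
    W <- complex(real = rnorm(N), imaginary = rnorm(N)) * 0.05
    Y <- Hf * X + W                                 # received frequency samples
    H_ls <- Y[pilots] / X[pilots]                   # LS estimates at pilot tones
    H_hat <- complex(real = approx(pilots, Re(H_ls), xout = 1:N, rule = 2)$y,
                     imaginary = approx(pilots, Im(H_ls), xout = 1:N, rule = 2)$y)
    mean(Mod(H_hat - Hf)^2)                         # interpolation + noise error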
An Efficient Sphere Decoding Algorithm Approaching Maximum-Likelihood Performance

ZHANG Qing-chuan, SHU Feng, SUN Jin-tao
(School of Electronic Engineering and Optoelectronic Technology, NUST, Nanjing 210094, China)

Abstract: Based on the tree-search structure of the sphere decoder, an efficient detection algorithm is proposed which can approach the performance of maximum-likelihood detection in multi-antenna systems, providing a trade-off between performance and complexity.

Key words: wireless communication; multi-antenna system; maximum-likelihood detection; sphere decoding
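For illustration of the tree-search idea the abstract refers to (and not the paper's specific algorithm), the following R sketch implements a plain depth-first sphere decoder with radius pruning for a real-valued model y = Hs + n with s in {−1, +1}^m; the function name and test parameters are made up for the example.

    sphere_decode <- function(y, H, alphabet = c(-1, 1)) {
      qrH <- qr(H)
      R <- qr.R(qrH)
      z <- as.vector(t(qr.Q(qrH)) %*% y)       # rotate: ||y - Hs|| = ||z - Rs||
      m <- ncol(H)
      best <- NULL; best_d2 <- Inf
      search <- function(level, s, d2) {
        if (d2 >= best_d2) return(invisible(NULL))   # prune: outside current sphere
        if (level == 0) { best <<- s; best_d2 <<- d2; return(invisible(NULL)) }
        for (a in alphabet) {
          s[level] <- a
          r <- z[level] - sum(R[level, level:m] * s[level:m])  # partial residual
          search(level - 1, s, d2 + r^2)
        }
      }
      search(m, numeric(m), 0)
      list(s_hat = best, dist2 = best_d2)
    }
    set.seed(2)
    H <- matrix(rnorm(16), 4, 4)
    s <- sample(c(-1, 1), 4, replace = TRUE)
    y <- as.vector(H %*% s) + 0.1 * rnorm(4)
    sphere_decode(y, H)$s_hat    # should recover s at this noise level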
Beihang University, Optimization Methods: Complete Solutions, 2015 Edition

Reformulate this problem as a linear program.
minimize f(x), x ∈ R^n
subject to Ax = b,
           x ≥ 0.
Solution: Introduce a variable t; the given problem is equivalent to

minimize t
subject to f(x) = t,
           Ax = b, x ≥ 0.
Consider instead the problem

minimize t
subject to f(x) ≤ t, Ax = b, x ≥ 0.

At any optimal solution the inequality f(x) ≤ t is active (otherwise t could be decreased), so this problem has the same optimal value as the one above; and when f is a pointwise maximum of finitely many affine functions, the constraint f(x) ≤ t splits into finitely many linear inequalities, yielding a linear program.
4. Simplex method practice: Exercises 2.10, 2.11, 2.12, 2.13, Exercise 2.20 (the general example illustrating the efficiency of the simplex method, specialized to three variables), Exercise 2.21 (showing that when the simplex method chooses the entering variable by the smallest relative cost coefficient and the problem is degenerate, the simplex method can cycle!), Exercise 2.31.
5. Two-phase method practice: Exercises 2.14 to 2.16; big-M method practice: Exercise 2.18.
2u1 − 2v1 + u3 − v3 = 3, ui, vi, s ≥ 0, i = 1, 2, 3.
Method 2: Introduce nonnegative variables t1, t2, t3 and transform the original problem into the equivalent problem
minimize t1 + t2 + t3 subject to x + y ≤ 1,
2x + z = 3, |x| = t1, |y| = t2, |z| = t3.
(c)
minimize x1 + 4x2 + x3
subject to x1 − 2x2 + x3 = 4,
           x1 − x3 = 1,
           x2 ≥ 0, x3 ≥ 0.

Solution:
(c) Since the variable x1 is unrestricted in sign, it can be eliminated using the constraint x1 = x3 + 1. This yields the standard form shown below.
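Spelling the elimination out: substituting x1 = x3 + 1 into the objective and the first constraint gives

\[ \min\ (x_3 + 1) + 4x_2 + x_3 = 1 + 4x_2 + 2x_3, \qquad (x_3 + 1) - 2x_2 + x_3 = 4 \;\Longrightarrow\; -2x_2 + 2x_3 = 3, \]

so the standard form is: minimize 4x2 + 2x3 (plus the constant 1) subject to −2x2 + 2x3 = 3, x2 ≥ 0, x3 ≥ 0.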
A Full-Diversity Orthogonal Space-Time Code Based on Linear Receivers for MIMO Systems

CLC number: TN92    Document code: A    Article ID: 1000-2758(2010)06-0906-05
In the field of wireless communications, multiple-input multiple-output (MIMO), as a new wireless communication technology, offers very high system capacity and spectral efficiency.
The received signal can be expressed as

\[ y = \sqrt{\rho}\, H s + w. \tag{1} \]

For MMSE reception, the symbol estimate takes the form $\hat{s} = (H^H H + \frac{M}{\rho} I)^{-1} H^H y$.

2 Code Design

Let $h_{lm}$, $l = 1, 2, \dots, M$, $m = 1, 2, \dots, N$, denote the channel coefficients; then
Received: 2009-12-14. Author biography: LI Dong (b. 1981), Ph.D. candidate at Northwestern Polytechnical University, researching MIMO wireless communications and communication signal processing. Foundation item: supported by the Doctoral Program Foundation of the Ministry of Education (20050699037).
Journal of Northwestern Polytechnical University, December 2010, Vol. 28, No. 6.
A Low-Complexity Transmit Antenna Selection Technique for Linear Dispersion Codes

XU Li, ZHU Jin-kang (Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230027, China)

Abstract: To make full use of the performance gain of multi-antenna systems, the limitations of the criterion of maximizing the minimum posterior SNR (signal-to-noise ratio) in centralized MIMO (multiple-input multiple-output) systems are studied, and a low-complexity transmit antenna selection scheme based on linear dispersion codes is proposed. The influence on the channel eigenvalues of the different large-scale fading between a mobile station and different base stations in distributed MIMO systems is then analyzed, and the effectiveness of the proposed antenna selection criterion in distributed systems is proved. Simulation results show that, in quasi-static channels, the proposed antenna selection criterion performs better than the criterion of maximizing the minimum posterior SNR, with lower complexity.

Key words: wireless communication technology; antenna selection; linear dispersion codes; zero-forcing receiver

CLC number: TN929.2    Document code: A

Transmit Antenna Selection with Low Complexity for Linear Dispersion Codes

… is an important means, including antenna selection for space-time codes [8-9] and …
Computer Networks, 4th Edition: Exercises and Answers
Chapter 1: Introduction

1. Imagine that you have trained your dog Bernie to carry a box of three 8mm tapes instead of a flask of brandy. (When your disk fills up, you consider that an emergency.) Each tape holds 7 GB; the dog can travel to you, wherever you are, at 18 km/h. For what range of distances does Bernie have a higher data rate than a 150-Mbps transmission line?

Answer: The dog can carry 21 gigabytes, i.e. 168 gigabits. A speed of 18 km/h equals 0.005 km/s, so covering x km takes x/0.005 = 200x seconds, giving a data rate of 168/200x Gbps, i.e. 840/x Mbps. Thus the dog has the higher rate than the line whenever x < 5.6 km.
6. A client-server system uses a satellite network, with the satellite at a height of 40,000 km. What is the best-case delay in responding to a request?

Answer: Both the request and the reply must pass through the satellite, so the total transmission path is 160,000 km. The speed of light in air and vacuum is 300,000 km/s, so the best-case propagation delay is 160,000/300,000 s, about 533 ms.
9. In a centralized binary tree, 2^n − 1 routers are interconnected, with one router at each tree node. For router i to communicate with router j, it sends a message to the root of the tree; the root then sends the message down to j. Assuming all router pairs are equally likely, derive an approximate expression for the average number of hops per message for large n.

Answer: This means that a router-to-router path is about twice as long as a router-to-root path. If the root has depth 1 and the tree has depth n, a path from the root to level n requires n − 1 hops, and 0.50 of the routers are at that level; 0.25 of the routers are at level n − 1, reached from the root in n − 2 hops; and so on.
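Making the sum implicit in this argument explicit: the mean number of hops from the root is

\[ \bar{\ell} \;=\; \sum_{j \ge 1} 2^{-j}\,(n - j) \;=\; n - \sum_{j \ge 1} j\,2^{-j} \;=\; n - 2. \]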
Thus the path length ℓ averages n − 2 hops from the root, i.e. about 2(n − 2) hops per message in all.

18. Which OSI layer handles each of the following problems?
Answer: Dividing the transmitted bit stream into frames: the data link layer. Deciding which route through the subnet to use: the network layer.

28. An image has a resolution of 1024 × 768 pixels, with each pixel represented by 3 bytes.
R Language / Optimization: Solving Integer and Linear Programs (Rsymphony)
R 语⾔-最优化_整数规划、线性规划求解(Rsymphony )Rsymphony 包简介Rsymphony,混合整数线性规划SYMPHONY 求解器,其中主函数有:主要参数作⽤obj规划⽬标系数mat约束向量矩阵dir约束⽅向向量,有’>’、’<’、’=’构成rhs约束值bounds上下限的约束,默认0到INF type限定⽬标变量的类型,’B’指的是0-1规划,’C’代表连max 逻辑值,T为求最⼤值,F为最⼩值exampleexample1example2Rsymphony_solve_LP(obj, mat, dir, rhs, bounds = NULL , types = NULL , max = FALSE , verbosity = -2, time_limit = -1, node_limit = -1, gap_limit = -1, first_feasible = FALSE , write_lp = FALSE , write_mps = FALSE )12345# 求解library (Rsymphony)obj <- c(2, 4, 3)mat <- matrix(c(3, 2, 1, 4, 1, 3, 2, 1, 2), nrow = 3)# [,1] [,2] [,3]#[1,] 3 4 2#[2,] 2 1 1#[3,] 1 3 2dir <- c("<=", "<=", "<=")rhs <- c(60, 40, 80)max <- F Rsymphony_solve_LP(obj, mat, dir, rhs, max = max)#$solution #[1] 0 0 30#$objval #[1] 90#$status # 0表有解,1表找不到解#TM_OPTIMAL_SOLUTION_FOUND # 01234567891011121314151617181920obj <- c(3, 1, 3)mat <- matrix(c(-1, 0, 1, 2, 4, -3, 1, -3, 2), nrow = 3)# [,1] [,2] [,3]#[1,] -1 2 1#[2,] 0 4 -3#[3,] 1 -3 2dir <- c("<=", "<=", "<=")rhs <- c(4, 2, 3)max <- TRUE types <- c("I", "C", "I")Rsymphony_solve_LP(obj, mat, dir, rhs, types = types, max = max)#$solution #[1] 5.00 2.75 3.00#$objval #[1] 26.75#$status #TM_OPTIMAL_SOLUTION_FOUND # 0 12345678910111213141516171819。
Data Structures and Algorithms Question Bank (with Reference Answers)
Data Structures and Algorithms Question Bank (with Reference Answers). Part I: Multiple-choice questions (86 questions, 1 point each, 86 points total)

1. During one partition pass of quicksort, when an element equal to the pivot is encountered, if neither the left nor the right pointer stops moving, what is the time complexity when all elements are equal?
A. O(N log N)   B. O(N)   C. O(N²)   D. O(log N)
Correct answer: C

2. A complete binary tree with 1001 nodes has how many leaf nodes?
A. 254   B. 250   C. 501   D. 500
Correct answer: C

3. For an undirected graph with N vertices represented by an adjacency matrix, the size of the matrix is:
A. (N−1)²   B. N   C. N²   D. N−1
Correct answer: C

4. In a max-heap (big-root heap) with n (>1) elements, the array index of the smallest element can be:
A. ⌊n/2⌋−1   B. ⌊n/2⌋+2   C. 1   D. ⌊n/2⌋
Correct answer: B

5. For a non-empty binary tree whose preorder traversal sequence equals its inorder traversal sequence, the tree ▁▁▁▁▁.
A. has no node with a left child   B. has no node with a right child   C. has only one leaf node   D. can be any binary tree
Correct answer: A

6. When measuring the relevance of a result set, high precision with low recall means:
A. most retrieved documents are relevant, but many relevant documents were not retrieved
B. most relevant documents were retrieved, but the benchmark data set is not large enough
C. most retrieved documents are relevant, but the benchmark data set is not large enough
D. most relevant documents were retrieved, but many irrelevant documents are also in the results
Correct answer: A

7. If the most frequent operations on a list are inserting a node after the last node and deleting the last node, which storage scheme saves the most operation time?
A. singly linked circular list   B. doubly linked circular list with a head node   C. singly linked list   D. doubly linked list
Correct answer: B

8. Let the array S[] = {93, 946, 372, 9, 146, 151, 301, 485, 236, 327, 43, 892} be sorted into ascending order by least-significant-digit (LSD) radix sort. After the first distribution and collection pass, the elements immediately before and after 372 are:
A. 43, 892   B. 236, 301   C. 301, 892   D. 485, 301
Correct answer: C

9. During one partition pass of quicksort, when an element equal to the pivot is encountered, the left pointer stops moving but the right pointer, in the same situation, does not. What is the time complexity when all elements are equal?
A. O(N log N)   B. O(N²)   C. O(N)   D. O(log N)
Correct answer: B

10. During one partition pass of quicksort, when elements equal to the pivot are encountered, both the left and the right pointer stop moving. What is the time complexity when all elements are equal?
A. O(N log N)   B. O(N)   C. O(log N)   D. O(N²)
Correct answer: A

11. If an AVL tree has depth 6 (the depth of an empty tree being defined as −1), what is the minimum number of nodes it can have?
A. 12   B. 20   C. 33   D. 64
Correct answer: C

12. Given the head pointers ha and hb of two singly linked lists, the following algorithm joins the two lists end to end into one circular list (i.e., the last node of ha links to the first node of hb, and the last node of hb points back to ha), returning ha as the head pointer of the circular list.