Random Variables


Several Common Probability Distributions
Count (x)   Observed frequency (f)   x·f     P(x) = e^(−μ) · μ^x / x!  (μ = 3.87)        N × P(x)
0           57                       0       P(0) = e^(−3.87) · 3.87^0 / 0! = 0.0209     54.5072
1           203                      203     P(1) = e^(−3.87) · 3.87^1 / 1! = 0.0807     210.4656
2           283                      766     P(2) = e^(−3.87) · 3.87^2 / 2! = 0.1562     407.3696
3           525                      1575    P(3) = e^(−3.87) · 3.87^3 / 3! = 0.2015     525.5120
4           532                      2128    P(4) = e^(−3.87) · 3.87^4 / 4! = 0.1949     508.2992
5           408                      2040    P(5) = e^(−3.87) · 3.87^5 / 5! = 0.1509     393.5472
6           273                      1638    P(6) = e^(−3.87) · 3.87^6 / 6! = 0.0973     253.7584
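The last two columns can be reproduced directly from the Poisson formula. Below is a minimal Python sketch; μ = 3.87 is taken from the table, and the total count N = 2608 is inferred from the N × P(x) column rather than stated in the original.

```python
from math import exp, factorial

mu = 3.87   # mean count per unit, taken from the table
N = 2608    # total number of observations implied by the N * P(x) column

for x, observed in zip(range(7), [57, 203, 283, 525, 532, 408, 273]):
    p = exp(-mu) * mu**x / factorial(x)   # Poisson probability P(x)
    print(f"x={x}  observed={observed}  P(x)={p:.4f}  expected={N * p:.4f}")
```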
2. Poisson distribution: rare events (small-probability events, p ≤ 0.1) follow a Poisson-type distribution.

Estimating population size from marked individuals:
μ = nK / N
S² = nK(N − K)(N − n) / [N²(N − 1)]
N̂ = nK / x
where x is the number of individuals of the marked type among the n sampled, K is the number of marked individuals, and N̂ is the estimate of the population size N.
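A small Python sketch of these formulas; the sample numbers (n, K, x, and the true N used for the mean and variance) are hypothetical, chosen only to illustrate the computation.

```python
def mark_recapture(n, K, x, N=None):
    """Population estimate N_hat = nK/x; if a true N is supplied,
    also return the mean and variance of x given by the formulas above."""
    N_hat = n * K / x
    if N is None:
        return N_hat
    mu = n * K / N                                        # mu = nK/N
    var = n * K * (N - K) * (N - n) / (N**2 * (N - 1))    # S^2
    return N_hat, mu, var

# hypothetical numbers: 200 marked, sample of 150, 12 marked recaptures
print(mark_recapture(n=150, K=200, x=12))                 # N_hat = 2500.0
```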

Independence of Random Variables and Its Applications (undergraduate thesis)

摘要随机变量的独立性是统计学概率论中最基本的概念之一, 通过对它的研究可使很多实际问题的具体计算得到简化, 所以关于随机变量独立性的研究构成了概率论的重要课题.本论文首先对随机变量独立性进行定义, 然后分别对离散型随机变量和连续型随机变量独立性进行研究分析, 同时得出了一些相关的推论, 然后对独立随机变量与数字特征之间的关系以及独立随机变量和的分布进行论述证明.最后本论文对随机变量独立性的一些应用进行了整合分析.关键词:随机变量;独立性;数字特征ABSTRACTThe independence of the random variable is one of the most basic concept in theory of probability statistics. Through the study of it can make many simplifies the calculation of the actual problem. Study of independent random variables constitutes the important subject of probability theory.This paper first independence of random variables are defined. Then respectively to the discrete random variable and continuous random variables independence for this paper. Some relevant inferences are drawn at the same time, and then the relationship between the characteristics of independent random variables with digital and independent random variables and discusses the distribution of the certificate. Finally this thesis on the independence of random variable application integration analysis.Key words:Random variable; independence; numerical characteristics目录摘要 (1)ABSTRACT (2)前言 (4)第一章随机变量独立性及其判定 (5)1.1 随机变量独立性定义 (5)1.1.1随机变量及随机变量独立性的定义 (5)1.1.2随机变量独立性的两个简单定理 (6)1.2 离散型随机变量独立性的判定 (7)1.2.1离散型随机变量判别法一 (7)1.2.2离散型随机变量判别法二 (10)1.3 连续型随机变量独立性的判定 (13)1.3.1连续型随机变量判别法一 (13)1.3.2连续型随机变量判别法二 (14)第二章随机变量独立性的性质与应用 (16)2.1 随机变量与数字特征 (16)2.1.1随机变量独立性与数学期望 (16)2.1.2随机变量独立性与方差 (17)2.1.3随机变量独立性与协方差 (18)2.1.4随机变量独立性与相关系数 (18)2.2 随机变量和的分布 (20)2.2.1独立离散型随机变量和的分布 (20)2.2.2独立连续型随机变量和的分布 (21)2.2.3 独立的离散型随机变量与连续型随机变量和的分布 (22)2.3 随机变量独立性的应用 (24)2.3.1应用一利用离散型随机变量的独立性确定分布中的参数. (24)2.3.2应用二求离散型独立随机变量的联合分布列 (25)2.3.3应用三利用连续型随机变量的独立性求常用分布函数的联合概率密度 (26)总结 (28)致谢 (29)参考文献 (30)前言概率论是研究随机现象数量规律的数学分支, 而随机现象是相对于决定性现象而言的.由于随机现象的普遍性, 使得其在现实生活中具有极其广泛的应用, 特别是在科学技术、工业和农业生产等方面.而随机变量则是指随机事件的数量表现, 随机变量的独立性是概率统计中最基本的概念之一, 无论在科学理论研究还是在社会生产、生活等实际的应用中都具有非常重要的意义.当前概率论和数理统计很多已有的研究成果都是在随机变量独立性的前提下得到的, 因而对随机变量独立性的研究具有非常重要的现实意义.随机变量独立性的研究经历着缓慢的发展过程.在上世纪九十年代后, 有关随机变量独立性的研究进入了一个新的时期, 将随机变量分为离散型随机变量和连续型随机变量, 然后分别对其进行定向判定研究, 并对随机变量的应用也展开了一个新的局面.本文将在此基础上对随机变量独立性判定做详细、全面的论述, 并对随机变量独立性在求数字特征中的应用和独立随机变量和的分布等方面做详细的介绍.第一章随机变量独立性及其判定1.1 随机变量独立性定义在我们研究随机变量独立性判定时, 首先我们需要了解什么是随机变量独立独立性, 当然在此之前我们需要了解一个更为具体的概念, 即什么是随机变量.随机变量表示随机试验中各种结果的实值单值函数.如某一时间段经过火车站安全门的人数, 传真机在一定时间内收到的传真次数等等, 都是关于随机变量的实例.1.1.1随机变量及随机变量独立性的定义定义1.1.1设(Ω,F,P)为概率空间, ξ=ξ(ω)为Ω上定义的实值函数,如果有{ω:ξ(ω)<x}∈Ϝ, 任x∈R则称ξ(ω)为随机变量.随机变量是Ω上关于F可测的实值函数.一般我们省略ω, 将{ω:ξ(ω)<x}等简写成ξ,ξ(ω)等.随机变量在不同条件下因为偶然因素的影响, 其取值可能不同, 即随机变量具有不确定性、随机性.定义 1.1.2设ξ1,ξ2,···,ξn为概率空间(Ω,F,P)上的n个随机变量, 若其联合分布函数等于各自的边缘分布函数之积, 即F(x1,···,x n)=F1(x1)···F n(X n),其中x i∈R,i=1,···,n.称ξ1,ξ2,···,ξn相互独立.1.1.2随机变量独立性的两个简单定理定理1.1.1如果随机变量ξ1,ξ2,···,ξn相互独立, 则其中任何一部分随机变量仍然独立.证明如果ξ1,ξ2,···,ξn相互独立, 考虑其任意部分随机变量组成的子向量, 在F(x1,···,x n)=F1(x1)···F n(X n)中令与子向量无干的所有x i→+∞, 则左边可化为其子向量的边缘分布函数, 同样右边相应地化为子向量的各分量的边缘分布函数之积, 故定理1.1.1得证.定理1.1.2随机变量(ξ,η)相互独立, 当且仅当P{ξ∈B1,η∈B2}=P{ξ∈B1}P{η∈B2},任B1,B2∈B 证明充分性F(x1,···,x n)=F1(x1)···F n(X n)中仅仅是上式中B1=(−∞,x1),B2=(−∞,x2)的特殊情况, 充分性得证.必要性先固定x2, 记ℇ={B1:P{ξ∈B1,η<x2}=P{ξ∈B1}P{η<x2}}则由定理1.1.1知ℇ⊃P={(−∞,x1):x1∈R易见, ℇ为σ代数, 故ℇ⊃σ(P)=B.因而P{ξ∈B1,η<x2}=P{ξ∈B1}P{η<x2},任B1∈B ,x2∈R在固定B1∈B, 记μ={B2:P{ξ∈B1,η∈B2}=P{ξ∈B1}P{η∈B2}}同样地有μ⊃P且μ为σ代数, 故μ⊃σ(P)=B, 必要性得证.综上, 随机变量(ξ,η)相互独立, 当且仅当P{ξ∈B1,η∈B2}=P{ξ∈B1}P{η∈B2},任B1、B2∈B1.2 离散型随机变量独立性的判定受偶然因素影响, 随机变量在不同的条件下可能取各种随机变量不同的值, 即其具有不确定性、随机性, 但这些取值在某个范围的概率是确定的.随机变量既可以是离散型的, 也可以是连续型的.同时在研究随机变量的独立性时,我们也可分为离散型随机变量独立性和连续型随机变量独立性两种分别进行研究, 首先我们对离散型随机变量进行探讨研究, 当然在此之前我们要知道什么样的随机变量才是离散型随机变量.定义1.2.1设ξ为概率空间(Ω,Ϝ,P)上的随机变量, 如果存在数列{x k}和{p k}满足1.p k≥02.∑p k=1k使得P{ξ=x k}=p k , 
k=1,2,···,则称随机变量ξ(及概率分布)为离散型的.1.2.1离散型随机变量判别法一定理1.2.1设二维离散型随机变量(ξ,η)的联合分布列为P ij=P(ξ=x i,η=y j), 其中i=1,2,···,m;j=1,2,···,n,ξ的边际分布列为p i·=P(ξ=x i),i=1,2,···,m;η的边际分布列为p·j=P(η=y j),j=1,2,···,n.则ξ和η相互独立的充要条件是:对所有的取值(x i,y j)有p ij=p i··p·j ,其中i=1,2,···,m;j=1,2,···,n,证明充分性如果p ij=p i··p·j ,(i=1,2,···,m;j=1,2,···,n.)则对任意的x,y, 因为(ξ,η)是离散型随机变量, 所以F(x,y)=P(ξ≤x,η≤y)=∑x i≤x ∑P(ξ=x i,η=y j) y j≤y=∑x i≤x ∑p ijy j≤y=∑x i≤x∑p i··p·jy j≤y=∑p i·∑p·jy j≤yx i≤x =∑P(ξ=x i)∑P(η=y j)y j≤yx i≤x=P(ξ≤x)P(η≤y)=Fξ(x)Fη(y).即ξ和η是相互独立的, 充分性得证.必要性如果ξ和η相互独立, 不妨设x1<x2<x3<···<x m,y1<y2<y3<···<y n于是对任意x,y, 有F(x,y)=Fξ(x)Fη(y)即P(ξ≤x,η≤y)=P(ξ≤x)P(η≤y)当x=x1,y=y1时, 有P(ξ≤x1,η≤y1)=P(ξ≤x1)P(η≤y1)即P(ξ=x1,η=y1)=P(ξ=x1)P(η=y1)亦即p11=p1··p·1.当x=x1,y=y2时, 有P(ξ≤x1,η≤y2)=P(ξ≤x1)P(η≤y2)P(ξ=x1,η=y1)+P(ξ=x1,η=y2)=P(ξ=x1)P(η=y1)+P(ξ=x1)P(η=y2)=P(ξ=x1)·{P(η=y1)+P(η=y2)}p11+p12=p1·(p·1+p·2)=p1··p·1+p1··p·2由p11=p1··p·1.得p12=p1··p·2如此下去, 可得p1n=p1··p·n一般地有p1j=p1··p·j ,(j=1,2,···,n).同样, 如果取x=x2,y=y j ,(j=1,2,···,n), 可得出p2j=p2··p·j ,(j=1,2,···,n).最后可得p mj=p m··p·j ,(j=1,2,···,n).即有p ij=p i··p·j ,(i=1,2,···,m;j=1,2,···,n).充分性得证.综上所述, 定理得证.由定理1.2.1可以判定, 对于二维离散型随机变量(ξ,η), 等式F(x,y)=Fξ(x)Fη(y) (x,y∈R)成立与等式p ij=P(ξ=x i,η=y j)成立是等价的.因此p ij=P(ξ=x i,η=y j)可以直接用来判定二维离散型随机变量的独立性。
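The independence criterion p_ij = p_i· × p_·j derived above is easy to check numerically. The sketch below uses an invented two-by-two joint table (not an example from the thesis) and tests whether every joint probability factors into the product of its marginals.

```python
def is_independent(joint, tol=1e-12):
    """joint[i][j] = P(xi = x_i, eta = y_j).
    Returns True iff p_ij = p_i. * p_.j for every cell."""
    row = [sum(r) for r in joint]             # marginal distribution of xi
    col = [sum(c) for c in zip(*joint)]       # marginal distribution of eta
    return all(abs(joint[i][j] - row[i] * col[j]) < tol
               for i in range(len(joint)) for j in range(len(joint[0])))

print(is_independent([[0.10, 0.30], [0.15, 0.45]]))   # True: product table
print(is_independent([[0.20, 0.20], [0.05, 0.55]]))   # False: perturbed table
```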

5. Continuous Random Variables

5Continuous random variablesWe deviate from the order in the book for this chapter,so the subsections in this chapter do not correspond to those in the text.5.1Densities of continuous random variableRecall that in general a random variable X is a function from the sample space to the real numbers.If the range of X isfinite or countable infinite, we say X is a discrete random variable.We now consider random variables whose range is not countably infinite orfinite.For example,the range of X could be an interval,or the entire real line.For discrete random variables the probability mass function is f X(x)= P(X=x).If we want to compute the probability that X lies in some set, e.g.,an interval[a,b],we sum the pmf:P(a≤X≤b)= x:a≤x≤b f X(x)A special case of this isP(X≤b)= x:x≤b f X(x)For continuous random variables,we will have integrals instead of sums. Definition1.A random variable X is continuous if there is a non-negative function f X(x),called the probability density function(pdf)or just density, such thatP(X≤t)= t−∞f X(x)dxProposition1.If X is a continuous random variable with density f(x), then1.P(X=x)=0for any x∈R.2.P(a≤X≤b)= b a f(x)dx3.For any subset C of R,P(X∈C)= C f(x)dx14. ∞−∞f(x)dx=1Proof.First we observe that subtracting the two equationsP(X≤b)= b−∞f X(x)dx,P(X≤a)= a−∞f X(x)dxgivesP(X≤b)−P(X≤a)= b a f X(x)dxand we have P(X≤b)−P(X≤a)=P(a<X≤b),soP(a<X≤b)= b a f X(x)dx(1) Now for any nP(X=x)≤P(x−1/n<X≤x)= x x−1/n f X(t)dtAs n→∞,the integral goes to zero,so P(X=x)=0.Property2now follows from eq.(1)sinceP(a≤X≤b)=P(a<X≤b)+P(X=a)=P(a<X≤b)Note that since the probability X equals any single real number is zero, P(a≤X≤b),P(a<X≤b),P(a≤X<b),and P(a<X<b)are all the same.Property3is easy if C is a disjoint union of intervals.For more general sets,it is not clear what C even means.This is beyond the scope of this course.Property4is just the fact that P(−∞<X<∞)=1.Caution Often the range of X is not the entire real line.Outside of the range of X the density f X(x)is zero.So the definition of f x(x)will typically involves cases:in one region it is given by some formula,elsewhere it is simply0.So integrals over all of R which contain f X(x)will reduce to intervals overa subset of R.If you mistakenly integrate the formula over the entire real line you will of course get nonsense.25.2CatalogAs with discrete RV’s,two continuous RV’s defined on completely different probability spaces can have the same density.And there are certain densities that come up a lot.So we start a catalog of them.Uniform:(two parameters a,b∈R with a<b)The uniform density on [a,b]isf(x)= 1b−a dx=d−cσ√2 x−µπ(1+x2)3Example:Suppose X is a random variable with an exponential distribution with parameterλ=2.Find P(X≤2)and P(X≤1|X≤2).Example:Suppose X has the Cauchy distribution.Find the number c with the property that P(X≥c)=1/4.Example:Suppose X has the densityf(x)= c x(2−x)if0≤x≤20otherwisewhere c is a constant.Find the constant c and then compute P(1/2≤X).5.3Expected valueA rigorous treatment of the expected value of a continuous random variable requires the theory of abstract Lebesgue integration,so our discussion will not be rigorous.For a discrete RV X,the expected value isE[X]= x xf X(x)We will use this definition to derive the expected value for a continuous RV. The idea is to write our continuous RV as the limit of a sequence of discrete RV’s.Let X be a continuous RV.We will assume that it is bounded.So there is a constant M such that the range of X lies in[−M,M],i.e.,−M≤X≤M. 
Fix a positive integer n and divide the range into subintervals of width1/n. In each of these subintervals we“round”the value of X to the left endpoint of the interval and call the resulting RV X n.So X n is defined byX n(ω)=kn≤X(ω)<k+1nf X n(kNowf X n(kn)=P(kn)= k+1n f X(x)dxSoE[X n]=Mn−1k=−Mn k n knk nf X(x)dxWhen n is large,the integrals in the sum are over a very small interval.In this interval,x is very close to k/n.In fact,they differ by at most1/n.So the limit as n→∞of the above should beMn−1k=−Mn k+1n x f X(x)dx= M−M x f X(x)dx= ∞−∞x f X(x)dxThe last equality comes from the fact that f X(x)is zero outside[−M,M]. So we make the following definitionDefinition2.Let X be a continuous RV with density f X(x).The expected value of X isE[X]= ∞−∞x f X(x)dxprovided∞−∞|x|f X(x)dx<∞(If this last integral is infinite we say the expected value of X is not defined.) The variance of X isσ2=E[(X−µ)2],µ=E[X]provided the expected value is defined.5Just as with discrete RV’s,if X is a continuous RV and g is a function from R to R,then we can define a new RV by Y=g(X).How do we compute the mean of Y?One approach would be to work out the density of Y and then use the definition of expected value.We have not yet seen how tofind the density of Y,but for this question there is a shortcut just as there was for discrete RV.Theorem1.Let X be a continuous RV,g a function from R to R.Let Y=g(X).ThenE[Y]=E[g(X)]= ∞−∞g(x)f X(x)dxProof.Since we do not know how tofind the density of Y,we cannot prove this yet.We just give a non-rigorous derivation.Let X n be the sequence of discrete RV’s that approximated X defined above.Then g(X n)are discrete RV’s.They approximate g(X).In fact,if the range of X is bounded and g is continous,then g(X n)will converge uniformly to g(X).So E[g(X n)]should converges to E[g(X)].Now g(X n)]is a discrete RV,and by the law of the unconscious statisticianE[g(X n)]= x g(x)f X n(x)(2) Looking back at our previous derivation we see this isE[g(X n)]=Mn−1k=−Mn g(k n knk n)f X(x)dxwhich converges tog(x)f X(x)dx(3)6Just as in the discrete case,there is an application of this theorem that gives us a shortcut for computing the varianceCorollary1.If X is a continuous random variable withfinite varianceσ2 and meanµ,thenσ2=E[X2]−µ2= ∞−∞x2f X(x)dx−µ2Proof.By the theoremσ2=E[(X−µ)2]= (x−µ)2f X(x)dx= [x2−2µx+µ)2]f X(x)dx = x2f X(x)dx−2µ x f X(x)dx+µ2 f X(x)dx= x2f X(x)dx−2µ2+µ2= x2f X(x)dx−µ2Example:Find the mean and variance of the uniform distribution on[a,b].The mean isµ= b a x f(x)dx= b a x2b2−a22(4) For the variance we have tofirst computeE[X2]= b a x2f(x)dx(5) We then subtract the square of the mean andfindσ2=(b−a)2/12. Example:Find the mean and variance of the normal distribution. Example:Find the mean of the Cauchy distributionThe gamma function is defined byΓ(w)= ∞0x w−1e−x dx(6)7The gamma distribution has range[0,∞)and depends on two parameters λ>0,w>0.The density isf(x)= λwλ,σ2=w35f X(x)1/83/81/8 GRAPH8Example:Compute cdf of exponential distribution.Theorem2.Let X be a continuous RV with pdf f(x)and cdf F(x).Then they are related byF(x)= x−∞f(t)dt,f(x)=F′(x)Proof.Thefirst equation is immediate from the def of the cdf.To get the second equation,differentiate thefirst equation and remember that the fun-damental theorem of calculus saysdSince x n→∞,every outcome is in E n for large enough n.So∪∞n=1E n=Ω. 
Solim n→∞F(x n)=limn→∞P(E n)=1(9)The proof that the limit as x→−∞is0is similar.GAPNow consider a continuous random variable X with density f.ThenF(x)=P(X≤x)= x−∞f(t)dtSo given the density we can compute the cdf by doing the above integral. Differentiating the above we getF′(x)=f(x)So given the cdf we can compute the density by differentiating.Theorem4.Let F(x)be a function from R to[0,1]such that1.F(x)is non-decreasing.2.lim x→−∞F(x)=0,lim x→∞F(x)=1.3.F(x)is continuous from the right.Then F(x)is the cdf of some random variable,i.e.,there is a probability space(Ω,F,P)and a random variable X on it such that F(x)=P(X≤x). fThe proof of this theorem is way beyond the scope of this course.5.5Function of a random variableLet X be a continuous random variable and g:R→R.Then Y=g(X)is a new random variable.We want tofind its density.This is not as easy as in the discrete case.In particular f Y(y)is not x:g(x)=y f X(x).KEY IDEA:Compute the cdf of Y and then differentiate it to get the pdf of Y.10Example:Let X be uniform on[0,1].Let Y=X2.Find the pdf of Y.GAPExample:Let X be uniform on[−1,1].Let Y=X2.Find the pdf of Y.GAPExample:Let X be uniform on[0,1].Letλ>0.Y=−1Theorem5.Let X be a non-negative continuous random variable with cdf F(x).ThenE[X]= ∞0[1−F(x)]dx(12) provided the integral converges.Proof.We use integration by parts on the integral.Let u(x)=1−F(x)and dv=dx.So du=−fdx and v=x.So∞0[1−F(x)]dx=x(1−F(x))|∞x=0+ ∞0x f(x)dx=E[X](13)Note that the boundary term at∞is zero since F(x)→1as x→∞.We can use the above to prove the law of the unconscious statistician for a special case.We assume that X≥0and that the function g is from[0,∞) into[0,∞)and it strictly increasing.Note that this implies that g has an inverse.ThenE[Y]= ∞0[1−F Y(x)]dx= ∞0[1−P(Y≤x)]dx(14)= ∞0[1−P(g(X)≤x)]dx= ∞0[1−P(X≤g−1(x))]dx(15)= ∞0[1−F X(g−1(x))]dx(16)Now we do a change of variables.Let s=g−1(x).So x=g(s)and dx= g′(s)ds.So above becomes∞0[1−F X(s)]g′(s)ds(17) Now integrate this by parts to get[1−F X(s)]g(s)|∞s=0+ ∞0g(s)f(s)ds(18) which proves the theorem in this special case.12。
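The identity E[X] = ∫_0^∞ [1 − F(x)] dx proved above is easy to sanity-check numerically. A minimal sketch for an exponential random variable with rate λ = 2 (so E[X] = 1/λ = 0.5); the crude Riemann sum is only for illustration.

```python
from math import exp

lam = 2.0                       # rate of the exponential distribution
def F(x):                       # cdf: F(x) = 1 - e^(-lam*x) for x >= 0
    return 1.0 - exp(-lam * x)

# crude Riemann sum of the tail integral of (1 - F(x)) from 0 to a large cutoff
dx, upper = 1e-4, 50.0
expected_value = sum((1.0 - F(k * dx)) * dx for k in range(int(upper / dx)))
print(expected_value, 1 / lam)  # both approximately 0.5
```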

Random Variables That Are Neither Discrete Nor Continuous

目录中文摘要 (1)英文摘要 (1)一、引言 (2)二、随机变量及其分布 (2)(一)随机变量及其分布 (2)1.随机变量的概念 (2)2.分布函数的定义 (2)3.分布函数的性质 (3)(二)离散型随机变量 (3)1.离散型随机变量及其分布的定义 (3)2.分布列的基本性质 (3)3.用分布函数判别离散型随机变量的一种方法 (5)(三)非离散型随机变量 (6)1.连续型随机变量及密度函数的定义 (6)2.密度函数的性质 (7)3.连续型随机变量分布函数的特征 (8)4。

非离散非连续的随机变量 (8)三、既不离散也不连续的随机变量及其判别 (9)(一)随机变量的判别 (9)(二)既不离散也不连续的随机变量的判别 (9)(三)考研中常见的非离散非连续的随机变量示例 (11)四、结束语 (13)参考文献 (13)既不离散也不连续的随机变量彭惠敏摘要:通过对随机变量进行分类,借助离散型、连续型随机变量的分布函数、性质、数字特征及其必要条件的讨论,给出了判别既不离散也不连续的随机变量的方法,即用离散型和连续型随机变量分布函数必要条件的逆否命题加以判别,文中给出了大量例证,并给出了近几年考研中遇到的此类题目,使初学者对随机变量的分类有更为深刻的理解。

关键词:离散型随机变量;连续型随机变量;既不离散也不连续的随机变量;分布函数Neither Discrete Nor Continuous Random VariablePeng Hui-minAbstract: Through the study of the classification of random variables and the discussion of the distribution function, the nature, the digital characteristics, as well as the necessary conditions of both discrete and continuous random variable, this paper demonstrates the means of discriminating the neither discrete nor continuous random variable, that is, by virtue of the converse-negative proposition of the necessary conditions of the two variables’distribution function. A large number of examples and examination questions of this kind appeared in the recent few years of postgraduate entrance exams are given so as to render an in-depth understanding of the classification of the random variables to the beginners.Key words: discrete random variable; continuous random variable; neither discrete nor continuous random variable; distribution function一、引言除了离散型随机变量和连续型随机变量之外,还有既不离散也不连续的随机变量,有的教科书上称“由于这种情况比较复杂,一般不对这种情况加以讨论”,所以很多教科书上根本不提及既不离散也不连续的随机变量,以至于初学者认为只有离散型和连续型两类随机变量,造成很大的误解。
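A standard concrete instance of such a mixed random variable (a common textbook illustration, not an example taken from this paper) is a censored waiting time X = min(T, c) with T exponential: its distribution function increases continuously on [0, c) and then jumps at c, so X is neither discrete nor continuous. The sketch below estimates the size of the jump P(X = c) by simulation.

```python
import random
from math import exp

lam, c, n = 1.0, 2.0, 100_000
# X = min(T, c) with T ~ Exp(lam): continuous on [0, c), with an atom at c
samples = [min(random.expovariate(lam), c) for _ in range(n)]
p_atom = sum(1 for x in samples if x == c) / n
print(p_atom, exp(-lam * c))   # both close to P(T >= c) = e^(-lam*c) ≈ 0.135
```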

Ch8 Random Variables

Chapter 8 Random Variable8.1 What is a random variable?A random variable (abbreviation: r.v.) assigns a number value to each outcome (simple event, sample point) of a random circumstance. e.g. Toss a die and observe the number of face upRandom variables is denoted by X, Y (or X(ω), when a specific simple eventωis emphasized)●A continuous random variable can takeany value in an interval or a collectionof intervals●A discrete random variable can takeone of a countable list of distinct values 8.2 Discrete random variableExamples1. (Toss a fair coin –the random circumstance)X = the number of times the “head” occurs Then X can take 1(“head”occurs) or 0 (“tail” occurs)2. X = # of defectives in a package of 50 cellphones3. Y = # of calls needed to get through in a fixed time period4. Repeatedly tossing a fair coin, and define the r.v.:X =number of tosses until the head occursThe Probability Distribution of discrete random variablesThe Probability Distribution Function (pdf ) of a discrete r.v. X is a table ,or a rule that assigns probability )(i i x X P p ==to the possible values i x of the r.v. X , denoted by),(~1p 1p 0p p x x X i ii i 1i 1=<<⎪⎪⎭⎫ ⎝⎛∑ The pdf describes completely the probabilistic rule of a discrete r.v.In Example 4, one has P(X =k ) = k21⎪⎭⎫ ⎝⎛, i.e. ⎪⎪⎭⎫ ⎝⎛ k 2121k 1X ~Miniproblem What about tossing a biased coin, where the probability of occurring head is p8.3 Expected value (Mean) for a discrete r.v. Expected value (Expectation ) of a discrete r.v. X , denoted by EX , is a weighted average value of X :i i ip x EX ∑=, which is exactly the Mean value of X , Usually μ is used to represent EX● Describing the central tendency of pdf ● Weighted average of all possible values ● Approximated by the average when the experiment is repeated independently a very large number of timesStandard Deviation for a discrete r.v. Variance of X :i 2i i 2denoteby 2definedby p x EX X E X V )()()(μσ-==-=∑● Weighted average squared deviation fromthe meanStandard Deviation of X : σ=)(X VMeasures the variability of the distributionExample 8.7The randomized schedule I and II are employed respectively for investing $100 Net gains after 1 year for I and for II are⎪⎪⎭⎫ ⎝⎛994005001010005000X 1...$$$~,⎪⎪⎭⎫ ⎝⎛52341020X 2...$$$~. Then10EX EX 21$==,936X V 92172X V 21.$)(,.$)(==Plan II sounds better.8. 4 Binomial Random VariablesBinomial Experiments and Binomial r.v. Binomial Experiment characteristics: ● Sequence of n identical trials● Each trial has 2 outcomes: S(‘Success’) or F(‘failure’)● The probability of success of each trial is p● Trials are independentBinomial r.v.:X = number of S in the n trials Examples⏹# of games Gophers won in the last season ⏹# of defective items in a batch of 5 items ⏹# of correct on a 33 questions quiz ⏹# of customers who purchase out of 100 customers who enter storeThe Probability for Binomial r.v.For a Binomial r.v. X , one hasP (X=k )),()!(!!p 1q n k 0q p k n k n k n k -=≤≤-=-That showsX ~⎪⎪⎭⎫ ⎝⎛--n k n k n p q p k n k n q n k 0 )!(!!, where we say that X has a binomial distribution , and denote it by X ~B(n , p )Microsoft Excel Tips : Calculating P(X=k ) by BINOMDIST(k,n,p,false ) P(X ≤k ) by BINOMDIST(k,n,p,true )(“false ” stands for exactly k successes, “true ” indicates for a cumulative probability)Tendency (Mean) & Standard Deviation of B (n,p ))()(,p 1np X V np EX -==More ExamplesRandom circumstance r.v. 
distributionToss 3 fair coins, #of H B (3,21)Roll a die 8 times #of 4 or 6 B (8,31)Randomly sample #of seeing B (1000,p ) 1000 US adults UFO p =proportionRoll 2 dies once Sum is 7 B (1, 366)MiniproblemYou t oss 2 coins. You’re interested in the number of tails. What are the expected value & standard deviation of this random variable, number of tails?8.5 Continuous random variables● Infinite number of outcomes in interval, too many to list like discrete variable● Probabilities of taking any special values are 0.ExamplesX = length of time between customer arrivalsSo are height, time, weight, monetary valuesProbability Density FunctionDiffer from determining the probability, we are only able to find the probability that X falls between two values and do this by determining the area between the two values under a curve f(x), called the Probability Density Function of the r.v. X.i.e. f(x) is a probability density function of X, if for any c, d, we have)()(dXcPdXcP<<=≤≤=Area under the curve f(x) between c and df(x) is not probability , but density(密度)hh xXxPxfh )(lim)(+≤≤=→Continuous Random Variable Accumulated Probability Function F(x)=)(XF Area under the curve f(x) between -∞and xExpected value (Mean) and Variance Mean of X :EX =μ = 纵坐标在概率密度函数f (x )与0间,横坐标在),(∞-∞的面积片在横坐标x 处加一个大小为x 的力(如x 值为负, 则表示反向的力), 所得的平衡点的横坐标Variance of X :22EX X E X V σ=-=)()(Standard Deviation of X : σ=)(X V ( 附注 FormulaMean ⎰=dx x xf )(μVariance ⎰-=dx x f x 22)()(μσ )x f x fBasic Rules)()(,)(X V c X V c EX c X E =++=+ )()(,)(X V c cX V cEX cX E 2== StandardizedIf μ=EX , 2X V σ=)( , then0X E =⎪⎭⎫ ⎝⎛-σμ,1X V =⎪⎭⎫ ⎝⎛-σμ where σμ-X is called the Standardized Error of X (随机变量X 对于其均值的无量纲化的随机偏差)Uniform Distribution(均匀分布)If a r.v. X takes equally likely outcomes in interval [c , d ], which is equivalent to that the probability density function f (x ) of X has the form of⎪⎩⎪⎨⎧≤≤-=)()()(otherwise 0d x c if c d 1x f ,then X is said to submit a uniformdistribution on the interval [c , d ], and denoted by X ~U [c , d ]Example 8.13 If you arrive at a bus stop randomly, where the bus comes every 10 minutes. Let X =waiting time until the next bus arrives. Then X ~U [0,10]. The probability that the waiting time between 5 and 7 minutes isP (5 ≤≤X 7) = Base ⨯Height= (7 - 5) 101⨯ = 51Another exampleYou’re the production manager of a soft drink bottling company. You believe that when a machine is set to dispense 12 oz., it really dispenses 11.5 to 12.5 oz. inclusive. Suppose the amount dispensed has a uniform distribution. What is the probability that less than 11.8 oz. is dispensed?Solution: A randomly drawn machine is set to dispense X oz. Then X ~ U [11.5,12.5], andP (11.5 ≤≤X 11.8) = Base ⨯Height = (11.8 - 11.5) ⨯ 1 = 0.30Mean & standard deviation of Uniform r.v.s2d c +=μ,12c d 22)(-=σNormal DistributionProbability Density FunctionWe say that r.v. X submit Normal distribution with parameters2σμ,, if its probability density function has the form off(x) =2x21e21⎪⎭⎫⎝⎛--σμσπ, (0>σ),denoted by X ~N(2σμ,) .The calculation shows that the total area under the above curve is 1.Characters●Mean, median, mode are equal ( =μ)●σis the standard deviation of therandom variable X, which has infinite range, but effective width is 6 σThe area of the following shadow is 0.95 and the area between μ-3σ and μ+3σ is 0.997Effect of Varying Parameters (μ, 2σ)Small 2σ large 2σUsing probability Tables (Reducing to the Standard Normal Distribution N (0,1)) to calculate● If X is a normal r.v. , then aX +b is also anormal r.v. i.e. 
if),(~2N X σμ, then 22~(,)aX b N a b a μσ++ ● If X ~N (2σμ,), then σμ-X ~N (0,1)(Explain : change mean μ and standard deviation σ into 0 and 1. The value of σμ-=X Z is called z-score in statistics)Numerical Table of function Φ(z ) -- Cumulative Probability Function of the standard Normal Distribution (dx e 21z 2x 21z-∞-⎰=πΦ)() (标准正态分布累积概率函数的数值表: p.538-)(We have 500z 1z .)(),()(=-=-ΦΦΦ)A part of the value of Φ(z)-1/2 is shown asz ):Using z-score to solve problemsExampleP(3.8 ≤ X ≤ 5)= P(0.12 ≤ Z ≤ 0)).()(1200--=ΦΦ0478050120120150..).()].([.=-=--=ΦΦ Example),(~1005N X , then σμ-=X Z and).().().().(210301058Z 10517P 8X 17P ΦΦ-=-≤≤-=≤≤Real exampleYou work in Quality Control for GE. Light bulb life has a normal distribution with mean 2000 hours and standarddeviation 200 hours. What’s the probability that a bulb will last1. between 2000 & 2400 hours?2. less than 1470 hours?477222400X2000P.)()()(=-=≤≤ΦΦ00465216521470XP.).().()(=-=-=≤ΦΦMini-problem 1 (Reliability)Life testing has revealed that a particular type of TV picture tube has a length of life that is approximately normally distributed with a mean of 8000 hours and a standard deviation of 1000 hours. The manufacturer wants to set a guarantee period for the tube that will obligate the manufacturer to replace no more than 5% of all tubes sold. How long should the guarantee period be? Finding Percentiles (百分点)The 25th percentile x of a Normal r.v. isthe value having z-score zσμ-=xsatisfying 250z .)(=Φ. e.g. if the 25th percentile of pulse rate is 64, then there are 25% people has pulse rate below 64.The same way, the p -th percentile x is determined by 100p x =-)(σμΦ Microsoft Excel TipsNORMSDIST(z ) provides )(z ΦNORMSINV(p ) provides z satisfying p z =)(Φ8.7 Approximating Binomial Distribution ProbabilitiesNormal Approximation of a Binomial r.v.:A B (n , p ) r.v. X is approximately to follow the distribution N (np , np (1-p )), i.e. the N (EX , V(X))(Or say, )1(p np npX -- - the standardize of X , isapproximated by the standard normal distribution as n goes to infinite)Example 8.18 Number of Heads in 30 Flips of a fair coin ( B (30, 0.5))The Histogram looks quite like the density function of N (15,7.5)Approximating Cumulative Probabilitiesfor Binomial r.v.s()X np k np k np P X k P ---≤=≤≈ΦPractically, this approximation is applied when both np and n(1-p) are at least 5, and usually at least 10 is preferable8.8 Sums, difference, and Combinations of r.v.A linear combination of r.v. X ,Y ,…means aX+bY+…(including X+Y and X-Y ). ● Rule 1 E (aX+bY+…)=aEX+bEY+… ● Rule 2 If X ,Y ,…are independent , thenV (aX+bY+…)=2a V (X )+ 2b V (Y )+…● Warning: V (-X )(≠-V (X ))=V(X ) Combining Independent Normal r.v.sIf X ,Y are independent, and ),(~2X X N X σμ,),(~2Y Y N Y σμ, then ),(~2Y 2X Y X N Y X σσμμ+++Example (Missing flight or not)Meg leaves home 45 minutes before the last call for her flight will occur. Assume the driving time (minutes) ),(~925N X and the airport time ),(~415N Y , then the probability of missing her flight is))(()(49251545Z P 45Y X P ++->=>+082303911391Z P .).().(=-=>=ΦAdding Binomial r.v.s with the same Success probabilityIf X ,Y are independent, and ),(~p n B X X , ),(~p n B Y Y , then ),(~p n n B Y X Y X ++Example 8.23Strategies for Exam when out of timeTwo-part multiple-choice test with 10 questions in each part and having 4 choices for each question. You need get 13 questions or more right to pass. However, you don ’t have time to study all the materials. 
Fromexperience you know that if you study all the 20 questions well, then you can narrow all questions into 2 choices. If you study the first part carefully, then you will get right for these questions with probability 0.8, but you have to guess completely the other part. How can you do?Solution: you haveStrategy 1 - Study all the material wellLet S be the scores you will get, then,(=)),S=~,(.,10VS5ES520Ns.d. (standard deviation) =5= 2.24P≤S-=≥=131)((12SP)1- BINOMDIST(12,20,0.5,true) =0.1316 Strategy 2 - Study first part material only Let X and Y be the scores you will get for the part 1 and part 2 respectively, then X and Y are independent and,X.()(=,~=.,),EXX1NV68108(Y.),)(N=~=.,,.VY1105875225EY)(,.+)+=(=VY3475X25XY10E.s.d. of X +Y =473.= 1.86)(13Y X P ≥+is tedious to calculate, since Y X + no more follows the binomial distribution, but )(13Y X P ≥+can be estimated by simulation, which is, e.g. done 1000 times by randomly generating values of X +Y . e.g. there are 130 times of them not smaller than 13. Then we have 13013Y X P .)(≈≥+Conclusion: Neither one looks betterExponential Distribution⎩⎨⎧<≥=-)()()(0x 00x e x f xλλ,denoted by X ~exp λMean: EX =λ1, Variance: V (X ) = 21λConclusion of this chapter● Defined discrete and continuous random variable● Described the binomial, uniform, normal,& exponential random variables●Calculated probabilities for continuous random variable●Exploiting the normal approximation of the binomial distributionExercise of 7th week1. Mini-problem 12. A volunteer organization wants to find donors. 3volunteers agree to call potential donors. In past, about 20%of those called agreed independently to make a donation. If the 3volunteers make 10, 12and 18calls respectively, what is the probability that they get at least 10 donors?3. Alice and Julie each swam a mile a day. Alice’s times are normally distributed with mean =37 minutes and standard deviation =1 minute. Julie is faster but less consistent than Alice, and her times are normally distributed with mean =33minutes andstandard deviation =2 minutes. Their times are independent each other. Can Alice ever win?4. The standard medical treatment for a certain disease is successful in 60% of all cases.(1)The treatment is given to n =200 patients. What is the probability that the treatment is successful for 70% or more of these 200 patients?(2)How about the case of n=205 设),(~p n B X (二项分布).分别对于 12108n ,,=及902010p .,,.,. =,n 10k ,,, =,作出k n k k np 1p C k X P --==)()(的数值表. (若用Matlab ,则要求给出程序与数值表;如用Execel 则要求写出计算的要领与注释(傻瓜化!))6. The number of training units that must be passed before a complex computer software program is mastered varies from one to five, depending on the student. After much experience, the software manufacturer has determined the probability distribution that describes the fraction of usersmastering the software after each number of training units:a. Calculate mean μ, variance σ2, and the standard deviation σ. Interpret μ in the context of the problem.b. Graph p(x). What is the probability that X is in the interval (μ–2σ, μ+2σ )?c. If the firm wants to ensure that at least 70% of the students master the program, what is the minimum number of training units that must be administered?7. A weather forecaster predicts that May rainfall in a local area will be between 3 and 6 cm but has no idea where the amount will be within that interval. Let X be the amount of May rainfall in the local area, and assume that X is uniformly distributed in the interval of 3 to 6 cma. 
State the density function of X and plot it on a graph.b. Obtain the mean and the standard deviation of the probability distribution.c. What is the probability that the observed Mayrainfall will be less than 5 cm?8. Gauges are used to reject all packages crackerwhere a certain weight is not within the specificationoz. It is know that this weight is normally 225ddistributed with mean 225oz. and variance 441 squareoz. Determine the value d such that the specificationscover 95% of the weight.9. The average life of a certain type of small motor is10 years with standard deviation of 2 years. Assumethat the lifetime of a motor follows a normaldistribution. The manufacturer replaces free allmotors that fail while under guarantee. If he is willingto replace only 3% of the motors that fail, how long aguarantee should he offer?10. The IQs of 600 applicants of a certain college areapproximately normally distributed with a mean of115and a standard deviation of 12. If the collegerequires an IQ at least 95, how many of these studentswill be rejected on this basis regardless of theirqualification?11. The safety jackets produced by a manufacturerhave rating with an average 840newtons(unit of the force) and a standard deviation 15newtons. Suppose that it follows normal distribution. To check whether the process is operating correctly, a manager takes a sample of 100jackets, rates them, and calculates the bar graph, the mean rating for jackets in the sample.(1) What is the probability that the sample mean is less than or equal to 830 newtons?(2) Suppose that on one particular day, the manager observed sample mean = 830. Would you conclude that this indicates that the true process mean for that day is still 840 newtons? Why?。
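The Strategy 2 probability P(X + Y ≥ 13) in Example 8.23 can be estimated exactly as the text suggests, by simulation. A minimal sketch with X ~ B(10, 0.8) for the studied part and Y ~ B(10, 0.25) for the guessed part.

```python
import random

def binom(n, p):
    """Sample one value from B(n, p) by counting successes in n Bernoulli trials."""
    return sum(random.random() < p for _ in range(n))

trials = 100_000
passes = sum(binom(10, 0.8) + binom(10, 0.25) >= 13 for _ in range(trials))
print(passes / trials)   # estimate of P(X + Y >= 13), roughly 0.13 as in the text
```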

Probability Theory and Mathematical Statistics (in English), Chapter 3

3.R a n d o m V a r i a b l e s 3.1 Definition of Random VariablesIn engineering or scientific problems, we are not only interested in the probability of events, but also interested in some variables depending on sample points. (定义在样本点上的变量)For example, we maybe interested in the life of bulbs produced by a certain company, or the weight of cows in a certain farm, etc. These ideas lead to the definition of random variables.1. random variable definitionHere are some examples.Example 3.1.1 A fair die is tossed. The number X shown is a random variable, it takes values in the set {1,2,6}.Example 3.1.2The life t of a bulb selected at random from bulbs produced by company A is a random variable, it takes values in the interval (0,) .Since the outcomes of a random experiment can not be predicted in advance, the exact value of a random variable can not be predicted before the experiment, we can only discuss the probability that it takes some value or the values in some subset of R.2. Distribution functionNote The distribution function ()F X is defined on real numbers, not on sample space. Example 3.1.3Let X be the number we get from tossing a fair die. Then the distribution function of X is (Figure 3.1.1)Figure 3.1.1 The distribution function in Example 3.1.33. PropertiesThe distribution function ()F x of a random variable X has the following properties:SolutionBy definition,1≤==-=.(2000)(2000)10.6321P X F e-Question:What are the probabilities (2000)P X=?P X<and (2000)SolutionLet 1X be the total number shown, then the events 1{}X k = contains 1k - sample points, 2,3,4,5k =. Thus11()36k P X k -==, 2,3,4,5k = AndsoThusFigure 3.1.2 The distribution function in Example 3.1.53.2 Discrete Random Variables 离散型随机变量In this book, we study two kinds of random variables. ,,}n aAssume a discrete random variable X takes values from the set 12{,,,}n X a a a =. Let()n n P X a p ==,1,2,.n = (3.2.1)Then we have 0n p ≥, 1,2,,n = 1n n p=∑.the probability distribution of the discrete random variable X (概率分布)注意随机变量X 的分布所满足的条件(1) P i ≥0(2) P 1+P 2+…+P n =1离散型分布函数 And the distribution function of X is given by()()n n a xF x P X x p ≤=≤=∑ (3.2.2)Solutionn=3, p=1/2X p r1/8 13/8 23/8 31/8two-point distribution(两点分布)某学生参加考试得5分的概率是p , X 表示他首次得5分的考试次数,求X 的分布。
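For the closing question (X is the number of exams until the student first scores 5, each exam independently scoring 5 with probability p), the distribution is geometric: P(X = k) = (1 − p)^(k−1) p. A short sketch, with p chosen arbitrarily for illustration.

```python
p = 0.3   # assumed probability of scoring 5 on a single exam (illustrative value)

pmf = [(1 - p) ** (k - 1) * p for k in range(1, 11)]   # P(X = k) for k = 1..10
for k, prob in enumerate(pmf, start=1):
    print(f"P(X = {k}) = {prob:.4f}")
print("P(X <= 10) =", sum(pmf))   # the remaining tail mass is (1 - p)**10
```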

Nanjing University Stochastic Processes Exercises with Explanations and Answers
(Based on the 9th edition) Chapter 2: Random Variables
1. (2.16) An airline knows that 5 percent of the people making reservations on a certain flight will not show up. Consequently, their policy is to sell 52 tickets for a flight that can hold only 50 passengers. What is the probability that there will be a seat available for every passenger who shows up?
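If each of the 52 ticket holders shows up independently with probability 0.95, the number who show up is X ~ B(52, 0.95) and the required probability is P(X ≤ 50). A minimal sketch of that computation.

```python
from math import comb

n, p = 52, 0.95
# P(X <= 50) = 1 - P(X = 51) - P(X = 52)
answer = 1 - comb(n, 51) * p**51 * (1 - p) - p**52
print(answer)   # approximately 0.74
```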
8. (3.8) An unbiased die is successively rolled. Let X and Y denote, respectively, the number of rolls necessary to obtain a six and a five. Find (a) E[X], (b) E[X | Y = 1].
Important: E[E[X | Y]] = E[X].
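A quick simulation sketch of both quantities, assuming the fair die of the problem statement: E[X] = 6, and given that the first roll is a five, X is 1 plus a fresh geometric wait for a six, so E[X | Y = 1] = 7.

```python
import random

def rolls_until_six():
    """Number of rolls of a fair die needed to see the first six."""
    k = 1
    while random.randint(1, 6) != 6:
        k += 1
    return k

n = 200_000
print(sum(rolls_until_six() for _ in range(n)) / n)   # ~ 6, i.e. E[X]

# condition on Y = 1, i.e. the first roll is a five
total, count = 0, 0
while count < 50_000:
    if random.randint(1, 6) != 5:
        continue                          # discard runs where the first roll is not a five
    count += 1
    total += 1 + rolls_until_six()        # after the five, the wait for a six starts fresh
print(total / count)                      # ~ 7, i.e. E[X | Y = 1]
```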
3. (4.32) Each of two switches is either on or off during a day. On day n, each switch will independently be on with probability [1 + number of on switches during day n − 1]/4. For instance, if both switches are on during day n − 1, then each will independently be on during day n with probability 3/4. What fraction of days are both switches on? What fraction are both off?
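One way to answer this (a sketch, not the book's analytic solution) is to view the number of switches that are on as a three-state Markov chain on {0, 1, 2}, where from state k each switch is on the next day independently with probability (1 + k)/4, and to estimate the long-run fractions by simulation; both fractions should come out near 2/7 ≈ 0.29.

```python
import random

def step(k):
    """Given k switches on today, return the number on tomorrow."""
    p = (1 + k) / 4
    return sum(random.random() < p for _ in range(2))

counts = [0, 0, 0]
state, days = 0, 500_000
for _ in range(days):
    state = step(state)
    counts[state] += 1

print("fraction both on :", counts[2] / days)
print("fraction both off:", counts[0] / days)
```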

Definition of a Random Variable

Conditional Random Variables

Definition
A conditional random variable is the probability distribution of one variable given the values of certain other variables.

Description
Conditional random variables are commonly used to describe conditional relationships between two or more random events; in probabilistic graphical models, for example, they represent the conditional dependencies between nodes.

Applications
Conditional random variables appear in many practical problems. In natural language processing, the probability distribution of the next word given its context can be expressed as a conditional random variable; in recommender systems, a user's preferences given their past behavior can be modeled in the same way.
Definition of the correlation coefficient
The correlation coefficient is the covariance divided by the product of the two random variables' standard deviations; it measures the degree of linear dependence between them.

Properties of covariance and the correlation coefficient
Covariance is symmetric in its two arguments, and the correlation coefficient is normalized so that its absolute value never exceeds 1; these properties make them useful summaries of the linear relationship between two random variables.

Applications of Random Variables

Applications in statistics

Definition of variance
The variance is the average of the squared deviations of a random variable's values from its expected value; it measures how spread out those values are.

Properties of variance
Variance is non-negative and, for independent random variables, additive; these properties make it a key indicator of a random variable's dispersion.

Covariance and the correlation coefficient

Definition of covariance
Covariance measures the linear relationship between the values of two random variables and reflects how they fluctuate together.

Probability distributions
A random variable is one of the basic concepts of probability theory: it represents the outcome of a random phenomenon and has a specific probability distribution.

Probability of random events
Through random variables we can compute the probabilities of random events and gauge how likely they are to occur.

Describing stochastic processes
In a stochastic process, random variables describe how a random phenomenon evolves over time, helping us understand its nature.
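As a concrete illustration of the covariance and correlation definitions above, here is a small sketch that computes both for a pair of discrete random variables; the joint table is invented for the example.

```python
from math import sqrt

# joint probabilities P(X = x, Y = y) for an invented two-by-two example
table = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def expectation(f):
    return sum(p * f(x, y) for (x, y), p in table.items())

ex = expectation(lambda x, y: x)
ey = expectation(lambda x, y: y)
cov = expectation(lambda x, y: (x - ex) * (y - ey))     # covariance
var_x = expectation(lambda x, y: (x - ex) ** 2)
var_y = expectation(lambda x, y: (y - ey) ** 2)
rho = cov / sqrt(var_x * var_y)                         # correlation coefficient
print(cov, rho)
```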

x2   ⋯   xi   ⋯
p2   ⋯   pi   ⋯

Properties: (1) pi ≥ 0;  (2) p1 + p2 + ⋯ = 1.

Example: A car has to drive through four intersections, each controlled by a traffic light. Each light lets the car pass with probability p, and the lights operate independently of one another. Let X be the number of intersections the car has passed when it first has to stop. Find the distribution law of X.
Solution: The possible values of X are 0, 1, 2, 3, 4.
X = 0 means the car passes no intersection, i.e. it is stopped at the first light, so P(X = 0) = 1 − p.
X = 1 means the car passes exactly one intersection, i.e. the first light lets it through and the second does not; since the lights work independently, P(X = 1) = p(1 − p).
P(X = k) = C(n, k) p^k q^(n−k),  k = 0, 1, 2, …, n,  where q = 1 − p.
X is then said to follow the binomial distribution with parameters n and p, written
X ~ B(n, p)
Example 8. A batch of products has a first-grade rate of 0.2. Twenty items are drawn from the batch with replacement. What is the probability that exactly k (k = 0, 1, 2, …, 20) of the 20 items are first-grade?
Solution: This is a run of n = 20 Bernoulli trials in which the event A is that a randomly drawn item is first-grade, with P(A) = 0.2.

(4) Geometric distribution
Example 9. A marksman fires one bullet at the target on each shot, hitting with probability p (0 < p < 1) and missing with probability q = 1 − p. He fires independent, repeated shots until the target is hit. The number of bullets used, X, is then a discrete random variable with distribution
X:  1    2     3      ⋯    k           ⋯
P:  p    qp    q²p    ⋯    q^(k−1) p   ⋯
that is, P(X = k) = q^(k−1) p,  k = 1, 2, …
将取哪个值。 2. 定义域不同 其定义域为样本空间S,是一个集合,自变量是样 样本点,与数学上的定义方式有所区别
随机变量的引入,使我们能用随机变量来描 述各种随机现象,并且有可能利用数学分析 的方法来对随机试验的结果进行深入广泛的 研究和讨论。
引入随机变量后,对随机现象统计规律 的研究,就由对事件及事件概率的研究 扩大为对随机变量及其取值规律的研究.
X:  −1   2    3
P:  1/4  1/2  1/4
Find P(X ≤ 1/2), P(3/2 < X ≤ 5/2), and P(2 ≤ X ≤ 3).
Solution:
P(X ≤ 1/2) = P(X = −1) = 1/4;
P(3/2 < X ≤ 5/2) = P(X = 2) = 1/2;
P(2 ≤ X ≤ 3) = P(X = 2) + P(X = 3) = 1/2 + 1/4 = 3/4.
一般地, 设 L 是实数域上一集合,则有
m = 0,1…, l, l =min(M, n)
例11
已知某批产品的一级品率为0.2,现从中不放
回地抽取20只,问20只元件中恰有k (k =1, 2 ,…, 20)
只一级品的概率是多少?
解: (一)可看作超几何分布来计算 设产品总数为N,则一级品数为0.2N,20只 元件中一级品的个数为X: 则 k 20 k
(2) the probability of no more than 5 calls in an hour.
Solution: From the definition of the Poisson distribution,
(1) P(X = 4) = (3^4 / 4!) e^(−3) ≈ 0.168;
(2) P(X ≤ 5) = Σ_{k=0}^{5} P(X = k) = Σ_{k=0}^{5} (3^k / k!) e^(−3).
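Both numbers can be reproduced directly; a minimal sketch with λ = 3.

```python
from math import exp, factorial

lam = 3.0
def poisson(k):
    """Poisson probability P(X = k) with parameter lam."""
    return lam**k * exp(-lam) / factorial(k)

print(poisson(4))                          # P(X = 4) ≈ 0.168
print(sum(poisson(k) for k in range(6)))   # P(X <= 5) ≈ 0.916
```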
4 二项分布与超几何分布的关系,二项分布 与泊松分布的关系
E 中事件的概率 首先,如何能够系统而全面地描述 E 的随机事件呢?
—— 我们能否引入一个变量(即数),当它取不 同的值时,或许可以表达不同的随机事件?
S 的某些样本点组成的集合
因此,我们需要根据问题的性质,通过引入
一个变量,来描述随机试验的样本点。 即引入样本空间到实数域上的一个映射
s
.
X()
0 q
1 p
0<p<1, q=1-p,或记为
P(X=k)=pkq1-k , k =0,1
则称X 服从参数为p 的两点分布或参数为 p 的0-1分 布。
(3) 二项分布
设在一次伯努利试验中有两个可能的结果,A与 SA ,且有P (A)=p。则在 n 重伯努利试验中事件A 发生的次数 X 是一个离散型随机变量,其分布为
R
例1. 掷一枚硬币,观察其面朝上的情况 ( E )
样本空间: S={正面,反面}
定义映射
X:
SR1 , 满足:
X(正面)=1,X(反面)=0 其中
X 为掷一枚 硬币,出现 正面的次数
{: X()=1}={出现正面},{:X()=0}={出现反面}
X 的取值是随机的,但是我们知道它所有 的可能的取值为{0,1}
C0.2 N C0.8 N P( X k ) 20 CN
(二)当产品数量很大时近似看成 二项分布来进行计算
(6) Poisson distribution
Definition: If a discrete random variable X has the distribution law
P(X = k) = (λ^k / k!) e^(−λ),  k = 0, 1, 2, …,
where the constant λ > 0, then X is said to follow the Poisson distribution with parameter λ, written X ~ P(λ).
例2. 对于某型电子元件,任抽一件,观测其寿命( E)
样本空间,S={ t : t 0} 定义映射
其可能取值的 范围为 [0,+)
X:
X为任抽 一电子元 件的寿命。
S R
t t
X 在某一范围内的取值可以表达E中的事件,如 {: X()[a, b]}={t : t [a, b]}
§2-1 随机变量
(n + 1)p − 1 ≤ k ≤ (n + 1)p
When (n + 1)p is an integer, P(X = k) attains its maximum at both k = (n + 1)p and k = (n + 1)p − 1; when (n + 1)p is not an integer, the maximum is attained at k = [(n + 1)p]. For fixed n and p the probabilities P(X = k) are in general not symmetric in k; for fixed p, as n increases, the distribution of the values becomes more nearly symmetric.
应用场合 在一定时间间隔内: 电话总机接到的电话次数; 一匹布上的疵点个数; 大卖场的顾客数; 市级医院急诊病人数; 一个容器中的细菌数; 某一地区发生的交通事故的次数
放射性物质发出的粒子数;
一本书中每页印刷错误的个数; 等等
例12 假设电话交换台每小时接到的呼叫次数 X服从参 数=3的泊松分布,求 (1) 每小时恰有4次呼叫的概率
定义:设(S,,P)是一概率空间,若X为样本空间 S 到实数域 R1 上的映射: X:S R1
X()
满足:xR1, 有 {: X() x} 则称X()为(S, , P)上的一个随机变量。 常常将 {:X()x }简记为(Xx)。 随机变量常用大写字母X,Y,Z表示, 小写字母x,y,z表示实数
一个元素称为E的随机事件
P: R1 A P(A) 满足三条公理
问题 第一章研究的是对试验E求P(A),只是孤立的
研究一个个事件,对E的全貌不了解。同时,A是集 合,P(A)是数,无法用图形和其他数学工具,对其 研究受到限制。因此为了深入地研究随机现象,认 识随机现象的整体性质,需要全面地研究随机实验
In the same way,
P(X = 2) = p²(1 − p),  P(X = 3) = p³(1 − p),  P(X = 4) = p⁴.
Hence the distribution law of X is
X:  0       1          2           3           4
P:  1 − p   p(1 − p)   p²(1 − p)   p³(1 − p)   p⁴
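The distribution just derived is easy to check numerically: for any p in [0, 1] the five probabilities sum to 1. A small sketch with an arbitrary p.

```python
p = 0.7   # arbitrary probability that a light lets the car through

dist = {0: 1 - p, 1: p * (1 - p), 2: p**2 * (1 - p), 3: p**3 * (1 - p), 4: p**4}
print(dist)
print("total probability:", sum(dist.values()))   # equals 1 for any p in [0, 1]
print("expected intersections passed:", sum(k * q for k, q in dist.items()))
```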
例4 设随机变量X所有可能取的值为1,2,...,n,且已知
P ( X L)
xi L
P( X x ) p
i xi L
i
3 几种常见的离散型随机变量
(1) 单点分布
例 6 若随机变量 X 只取一个常数值 C ,即 P( X=C )=1,则称 X 服从单点分布。
(2) 0-1分布
例7 若随机变量 X 只取两个值0或1,其分布为
X P
P(X=xi)= pi , i=1, 2, ... (2.1)
则称(2.1)式为离散型随机变量X 的分布律。
(2.1)式也可以用表格的形式表示如下:
X P x1 p1 x2 p2 … … xi pi … …
上述表格称为离散型随机变量 X 的分布列, 分布列也可以表示成下列矩阵的形式
x1 p 1
P(X=k)与k成正比,求 X 的分布;
解:由题意知:P(X=k)=b.k,现在要求b 由离散性随机变量的性质知: b+2b+3b+ ┄ +nb=1 解得: b 2 n(n 1) 故X的分布律为
2k P( X k ) , k 0,1,2 n n(n 1)
例5
设随机变量 X 的分布律为
k e k!
Example 14. A shooter fires at a target; each shot hits with probability 0.001, and he fires 5000 independent shots. Find the probability that he hits the target at least twice.
Solution: Let X be the number of hits. Then X ~ B(5000, 0.001), so
P(X ≥ 2) = 1 − P(X = 0) − P(X = 1)
         = 1 − C(5000, 0) · 0.001^0 · 0.999^5000 − C(5000, 1) · 0.001^1 · 0.999^4999
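Because n is large and p is small, this is also the standard setting for the Poisson approximation with λ = np = 5 (the relationship between the binomial and Poisson distributions mentioned later in these notes). A small sketch comparing the exact binomial value with the approximation.

```python
from math import comb, exp

n, p = 5000, 0.001
lam = n * p   # 5

exact  = 1 - (1 - p)**n - comb(n, 1) * p * (1 - p)**(n - 1)
approx = 1 - exp(-lam) - lam * exp(-lam)   # Poisson approximation of P(X >= 2)
print(exact, approx)                       # both approximately 0.96
```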
引入随机变量 X 以后,就可以用 X 来描述事
件。一般地,设 L 是实数域上一集合,将X
在 L 上的取值写成{XL},它表示事件
{: X() L} 即 {X L} = {: X()L}
随机变量与一般实函数的差别:
1. X 随试验结果的不同而取不同的值,因而在试验
之前只知道它可能取值的范围,而不能预先肯定它
事件及 事件概率
随机变量及其 取值规律
§2-2 离散型随机变量
1 定义
若随机变量 X 所有可能的取值为有限 个或可列个,则称X为离散型随机变量。 否则称为非离散型随机变量。
2. 离散型随机变量的分布
Discrete Random Variables 定义:若随机变量X所有可能的取值为x1,