Empirical Likelihood for Right Censored and Left Truncated Data


Weibull Distribution Commands in MATLAB


Statistics Toolbox

wblcdf: Weibull cumulative distribution function (cdf)

Syntax

    P = wblcdf(X, A, B)
    [P, PLO, PUP] = wblcdf(X, A, B, PCOV, alpha)

Description

P = wblcdf(X, A, B) computes the cdf of the Weibull distribution with scale parameter A and shape parameter B, at each of the values in X. X, A, and B can be vectors, matrices, or multidimensional arrays that all have the same size. A scalar input is expanded to a constant array of the same size as the other inputs. The default values for A and B are both 1. The parameters A and B must be positive.

[P, PLO, PUP] = wblcdf(X, A, B, PCOV, alpha) returns confidence bounds for P when the input parameters A and B are estimates. PCOV is the 2-by-2 covariance matrix of the estimated parameters. alpha has a default value of 0.05, and specifies 100(1 - alpha)% confidence bounds. PLO and PUP are arrays of the same size as P containing the lower and upper confidence bounds. The function wblcdf computes confidence bounds for P using a normal approximation to the distribution of the estimate, and then transforms those bounds to the scale of the output P. The computed bounds give approximately the desired confidence level when you estimate mu, sigma, and PCOV from large samples, but in smaller samples other methods of computing the confidence bounds might be more accurate.

The Weibull cdf is

    p = F(x | a, b) = 1 - exp(-(x/a)^b),  x >= 0.

Examples

What is the probability that a value from a Weibull distribution with parameters a = 0.15 and b = 0.8 is less than 0.5?

    probability = wblcdf(0.5, 0.15, 0.8)
    probability =
        0.9272

How sensitive is this result to small changes in the parameters?

    [A, B] = meshgrid(0.1:0.05:0.2, 0.2:0.05:0.3);
    probability = wblcdf(0.5, A, B)
    probability =
        0.7484    0.7198    0.6991
        0.7758    0.7411    0.7156
        0.8022    0.7619    0.7319

wblinv: Inverse of the Weibull cumulative distribution function

Syntax

    X = wblinv(P, A, B)
    [X, XLO, XUP] = wblinv(P, A, B, PCOV, alpha)

Description

X = wblinv(P, A, B) returns the inverse cumulative distribution function (cdf) for a Weibull distribution with scale parameter A and shape parameter B, evaluated at the values in P. P, A, and B can be vectors, matrices, or multidimensional arrays that all have the same size. A scalar input is expanded to a constant array of the same size as the other inputs. The default values for A and B are both 1.

[X, XLO, XUP] = wblinv(P, A, B, PCOV, alpha) returns confidence bounds for X when the input parameters A and B are estimates. PCOV is a 2-by-2 matrix containing the covariance matrix of the estimated parameters. alpha has a default value of 0.05, and specifies 100(1 - alpha)% confidence bounds. XLO and XUP are arrays of the same size as X containing the lower and upper confidence bounds. The function wblinv computes confidence bounds for X using a normal approximation to the distribution of the estimate, where q is the Pth quantile from a Weibull distribution with scale and shape parameters both equal to 1. The computed bounds give approximately the desired confidence level when you estimate mu, sigma, and PCOV from large samples, but in smaller samples other methods of computing the confidence bounds might be more accurate.

The inverse of the Weibull cdf is

    x = F^(-1)(p | a, b) = a * (-ln(1 - p))^(1/b),  0 <= p < 1.

Examples

The lifetimes (in hours) of a batch of light bulbs have a Weibull distribution with parameters a = 200 and b = 6. What is the median lifetime of the bulbs?

    life = wblinv(0.5, 200, 6)
    life =
        188.1486

What is the 90th percentile?

    life = wblinv(0.9, 200, 6)
    life =
        229.8261
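The same calculations can be cross-checked outside MATLAB. The snippet below is our addition (Python with SciPy, not part of the original documentation); note that scipy.stats.weibull_min takes the shape parameter (MATLAB's B) as its first argument and the scale (MATLAB's A) as a keyword.

    # Cross-check of the wblcdf/wblinv examples with SciPy.
    from scipy.stats import weibull_min

    # wblcdf(0.5, 0.15, 0.8): P(X < 0.5) for scale a = 0.15, shape b = 0.8
    print(weibull_min.cdf(0.5, 0.8, scale=0.15))   # ~0.9272

    # wblinv(0.5, 200, 6): median lifetime for a = 200, b = 6
    print(weibull_min.ppf(0.5, 6, scale=200))      # ~188.1486

    # wblinv(0.9, 200, 6): the 90th percentile
    print(weibull_min.ppf(0.9, 6, scale=200))      # ~229.8261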

Availability Cascades and Risk Regulation

Timur Kuran
Public Law and Legal Theory Working Papers
University of Chicago Law School

Limited Attention, Information Disclosure, and Financial Reporting

David Hirshleifer∗ and Siew Hong Teoh∗
December 20, 2002
Abstract: This paper models firms' choices between alternative means of presenting information, and the effects of different presentations on market prices when investors have limited attention and processing power. In a market equilibrium with partially attentive investors, we examine the effects of alternative levels of discretion in pro forma earnings disclosure, methods of accounting for employee option compensation, and degrees of aggregation in reporting.


η-Scaling of dN_ch/dη at √s_NN = 200 GeV by the PHOBOS Collaboration and the Ornstein-Uhlenbeck Process


arXiv:nucl-th/0209004v2, 20 Dec 2002

Using the dN_ch/dη distributions at √s_NN = 200 GeV reported by the PHOBOS Collaboration, we have analyzed them by means of a stochastic theory named the Ornstein-Uhlenbeck process with two sources. Moreover, we display that z_r = η/η_rms scaling holds, just as it does for the data at √s_NN = 130 GeV by the PHOBOS Collaboration.2)

Those distributions have been explained by a stochastic approach named the Ornstein-Uhlenbeck (OU) process with the evolution parameter t, the frictional coefficient γ, and the variance σ²:

    ∂P(y,t)/∂t = γ [ ∂/∂y ( y P(y,t) ) + (σ²/2γ) ∂²P(y,t)/∂y² ].    (1)

With y_max = ln(√s_NN/m_N) at t = 0 and P(y,0) = 0.5[δ(y + y_max) + δ(y − y_max)], we obtain the following distribution function for dn/dη (assuming y ≈ η), using the probability density P(y,t):1)

    dn/dη = [1/(2√(2πV²(t)))] { exp[−(η + η_max e^(−γt))²/(2V²(t))] + exp[−(η − η_max e^(−γt))²/(2V²(t))] }.    (2)

In the variable z_r = η/η_rms this becomes a scaling function with z_max = η_max/η_rms and V_r²(t) = V²(t)/η_rms²:

    dn/dz_r = [1/(2√(2πV_r²(t)))] { exp[−(z_r + z_max e^(−γt))²/(2V_r²(t))] + exp[−(z_r − z_max e^(−γt))²/(2V_r²(t))] }.    (3)

2. Semi-phenomenological analyses of data. In Fig. 1 we show the distributions of dn/dη. As is seen in Fig. 1, the intercepts dn/dη|_{η=0} with different centrality cuts are located in a narrow interval (Eq. (4)) at √s_NN = 200 GeV. Next, we examine a power-like law in (0.5⟨N_part⟩)^(−1)(dN_ch/dη)|_{η=0} (⟨N_part⟩ being the number of participants), which was proposed by the WA98 Collaboration4) as

    (0.5⟨N_part⟩)^(−1) (dN_ch/dη)|_{η=0} = A ⟨N_part⟩^α.    (5)

As seen in Fig. 2, it can be stressed that the power-like law holds fairly well. Using the estimated parameters A and α, we can express c as

    c = 0.5 A ⟨N_part⟩^(1+α) / N_ch = [1/√(2πV²(t))] exp[−(η_max e^(−γt))²/(2V²(t))].    (6)

The empirical examination of Eq. (6) is given in Table I; the fitted parameters and the χ² values are shown in Table II, and the resulting fits in Fig. 3. To describe the dip structures, a finite evolution time is necessary in our approach.

[Fig. 1: dn/dη distributions with centrality cuts at √s_NN = 200 GeV.]
[Fig. 2: determination of the parameters A and α; the method of linear regression is used. The correlation coefficient (c.c.) is 0.991 (200 GeV). From the data at 130 GeV we have A = 1.79, α = 0.103 and c.c. = 0.993.]

Table I. Empirical examination of Eq. (6) (√s_NN = 200 GeV).
    ⟨N_part⟩:   93±5, 138±6, 200±7.5, 277±8.5, 344.5±11
    N_ch(Ex):   1230±60, 1870±90, 2750±140, 3860±190, 4960±250
    c(Ex):      0.123±0.012, 0.124±0.012, 0.127±0.012, 0.129±0.012, 0.130±0.012
    c(Eq.(6)):  0.122±0.009, 0.124±0.008, 0.127±0.008, 0.130±0.008, 0.128±0.008

Table II. Parameters of the fits of Eq. (2) for six centrality cuts (√s_NN = 200 GeV).
    p:          0.855, 0.861, 0.864, 0.868, 0.873, 0.876 (each ±δp)
    V²(t):      3.62±0.26, 3.49±0.23, 3.31±0.20, 3.09±0.17, 2.95±0.15, 2.79±0.13
    N_ch(Th):   780±13, 1270±21, 1930±30, 2821±43, 3951±60, 5050±77
    c*(Th):     0.121, 0.123, 0.124, 0.125, 0.127, 0.127 (each ±δc_t)
    χ²/n.d.f.:  1.07/51, 0.91/51, 0.88/51, 1.18/51, 1.06/51, 1.46/51

3.2 Comparison with other approaches. First we consider the relation between the role of the Jacobian and the dip structure at η ≈ 0. The authors of Refs. 5) and 6) have explained dN_ch/dη by means of the Jacobian between the rapidity variable (y) and the pseudorapidity (η). The following relation is well known:

    dn/dη = (p/E) dn/dy,    (7)

where dn/dy = (1/N_ch) dN_ch/dy is taken as a single Gaussian. We examine whether dn/dη at √s_NN = 200 GeV can be explained by Eq. (7). As is seen in Fig. 4, for dn/dη in the full phase space (|η| < 5.4) it is difficult to explain the η distribution. On the other hand, if we restrict ourselves to the central region (|η| < 4), i.e., neglecting the data in 4.0 < |η| < 5.4, we have a better description. These restricted fits are actually what is utilized in Refs. 5) and 6). In other words, this fact suggests that we have to consider other approaches to explain the dip structure in the central region as well as the behavior in the fragmentation region. In our case this is the stochastic theory named the OU process with two sources at ±y_max at t = 0.

[Fig. 3: analyses of dn/dη with centrality cuts using Eq. (2); see Table II.]
[Fig. 4: analyses of dn/dη by means of a single Gaussian and Eq. (7). (a) Data in the full η range are described by V²(t) = 5.27±0.26 and m/p_t = 1.13±0.10; the best χ² = 20.0/51. (b) Data in |η| < 4.0 are used, i.e., 4.0 < |η| < 5.4 is neglected; V²(t) = 7.41±0.85 and m/p_t = 0.82±0.13. The best χ² = 4.0/37. Introduction of renormalization is necessary, due to the Jacobian.]

3.3 z_r scaling in dn/dη distributions. The values of η_rms are computed from the fits at √s_NN = 200 GeV. To compare the z_r scaling at 200 GeV with that at 130 GeV,1) we show the latter in Fig. 5(b). It is difficult to distinguish them. This coincidence means that there is no change in dn/dz_r as the colliding energy increases, except for the region |z_r| ≳ 2.2.

[Fig. 5: normalized distributions dn/dz_r in the scaling variable z_r = η/η_rms, with parameters estimated using Eq. (3). (a) √s_NN = 200 GeV; (b) √s_NN = 130 GeV, with p = 0.854±0.002, V_r²(t) = 0.494±0.010, χ²/n.d.f. = 25.5/321. Dashed lines are the magnitudes of the error bars. Notice that ⟨z_r²⟩ = z_max²(1 − p) + V_r² = 1.0, due to the sum of the two Gaussian distributions.]

4. Interpretation of the evolution parameter t with γ. In our present treatment the evolution parameter t and the frictional coefficient γ are dimensionless. When we assign the meaning of seconds [s] to t, the frictional coefficient γ has the dimension of [s⁻¹ = (1/3)×10⁻²³ fm⁻¹]. The magnitude of the interaction region in an Au-Au collision is assumed to be 10 fm (see, for example, Ref. 7)). t is then estimated as

    t ≈ 10 fm/c ≈ 3.3×10⁻²³ s.    (8)

The frictional coefficient and the variance are given in Table III. They are comparable with the values τ_Y⁻¹ = 0.1−0.08 fm⁻¹ of Ref. 8), which were obtained from the proton spectra at the SPS collider.

Table III. Values of γ and σ² at √s_NN = 200 GeV (six centrality cuts; the last column is the average).
    γ [fm⁻¹]:   0.096, 0.099, 0.100, 0.101, 0.103, 0.104; average 0.101
    σ² [fm⁻¹]:  0.817, 0.800, 0.763, 0.720, 0.696, 0.666; average 0.744
    σ²/γ:       8.51, 8.08, 7.63, 7.13, 6.76, 6.40; average 7.42

5. Concluding remarks. 1) We have analyzed the dn/dη distributions by Eqs. (2) and (3) at √s_NN = 200 GeV and 130 GeV, and we have shown that the two scaled distributions coincide with each other: if there were no labels (200 GeV and 130 GeV) in Fig. 5, we could not distinguish them. This coincidence means that there is no particular change in dn/dη, in z_r units, between √s_NN = 130 GeV and 200 GeV. [A footnote comparing the two energies through the ratio of the intercepts, c·exp(−η_max² e^(−2γt)/2V²(t)) at 200 GeV over the same quantity at 130 GeV, gives ≈ 0.94.]

References
1) M. Biyajima, M. Ide, T. Mizoguchi and N. Suzuki, Prog. Theor. Phys. 108 (2002), 559, and Addenda (to appear). See also hep-ph/0110305.
2) B. B. Back et al. [PHOBOS Collaboration], Phys. Rev. Lett. 87 (2001), 102303.
3) R. Nouicer et al. [PHOBOS Collaboration], nucl-ex/0208003.
4) M. M. Aggarwal et al. [WA98 Collaboration], Eur. Phys. J. C 18 (2001), 651.
5) D. Kharzeev and E. Levin, Phys. Lett. B 523 (2001), 79.
6) K. J. Eskola, K. Kajantie, P. V. Ruuskanen and K. Tuominen, Phys. Lett. B 543 (2002), 208.
7) K. Morita, S. Muroya, C. Nonaka and T. Hirano, nucl-th/0205040, to appear in Phys. Rev. C.
8) G. Wolschin, Eur. Phys. J. A 5 (1999), 85.
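The two-source solution in Eq. (2) is straightforward to evaluate numerically. The sketch below is our addition in Python (not the authors' code); the parameter values are borrowed from one centrality bin of Table II, η_max = ln(√s_NN/m_N) with m_N ≈ 0.938 GeV, and the identification p = 1 − e^(−2γt) is our assumption, made only to turn the tabulated p into a source position.

    import numpy as np

    def dn_deta(eta, eta_max, exp_mgt, V2):
        # Eq. (2): sum of two Gaussians centred at +/- eta_max * e^{-gamma t},
        # each carrying half of the multiplicity.
        norm = 0.5 / np.sqrt(2.0 * np.pi * V2)
        return norm * (np.exp(-(eta + eta_max * exp_mgt) ** 2 / (2.0 * V2))
                       + np.exp(-(eta - eta_max * exp_mgt) ** 2 / (2.0 * V2)))

    eta = np.linspace(-5.4, 5.4, 109)
    p, V2 = 0.855, 3.62                  # first centrality bin of Table II
    eta_max = np.log(200.0 / 0.938)      # ~5.36
    exp_mgt = np.sqrt(1.0 - p)           # e^{-gamma t}, assuming p = 1 - e^{-2 gamma t}
    y = dn_deta(eta, eta_max, exp_mgt, V2)
    print(np.trapz(y, eta))              # ~1, up to the tails cut off at |eta| = 5.4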

Empirical Analysis and Normative Analysis


Empirical analysis and normative analysis are two different approaches to studying social and economic phenomena. Empirical analysis involves data analysis to provide descriptive or predictive insights about a given issue or phenomenon. Normative analysis involves making prescriptive judgments about what ought to be, based on a set of values or principles. In this article, we explore the differences between these two approaches and their importance in decision-making.

Empirical analysis typically involves collecting and analyzing data from a variety of sources, such as surveys, interviews, and observations. The goal of this analysis is to gather information about a particular phenomenon or trend, usually to understand its causes, effects, and patterns. For example, empirical analysis can be used to analyze the economic impact of a particular policy or to understand the relationship between education and poverty.

In contrast, normative analysis focuses on examining what should be the case, based on a set of values or principles. This approach is typically used to evaluate policies or decisions based on their adherence to certain ideals, such as social justice, equality, or economic efficiency. Normative analysis can be used to answer questions such as whether a particular policy is fair or whether it is equitable to a particular group.

While these two approaches to analysis are distinct, they are often used together to provide a more comprehensive understanding of a given issue. For example, empirical analysis can provide information regarding the economic and social impact of a particular policy, while normative analysis can be used to examine whether the policy aligns with certain ethical or moral principles.

The importance of empirical and normative analysis lies in their ability to support informed decision-making. Empirical analysis provides data-driven insights that can inform the development and implementation of policy, while normative analysis ensures that the policy aligns with the values and principles of society. Without these two approaches, decisions may be made without consideration of either the empirical evidence or the ethical implications.

In conclusion, empirical analysis and normative analysis are two different approaches to understanding social and economic phenomena. While empirical analysis focuses on descriptive or predictive insights, normative analysis focuses on prescriptive judgments based on a set of values or principles. Both approaches are important in decision-making, as together they provide a comprehensive understanding of a particular issue and ensure that policies align with both empirical evidence and social values.

An Exploratory Lattice Study of Spectator Effects in Inclusive Decays of the Λb Baryon

SHEP 99-07
hep-lat/9906031 (arXiv:hep-lat/9906031v2, 7 Dec 2001)

UKQCD Collaboration: Massimo Di Pierro and Chris T. Sachrajda, Department of Physics and Astronomy, University of Southampton, SO17 1BJ, UK; and Chris Michael, Theoretical Physics Division, Dept. of Math. Sciences, University of Liverpool, L69 3BX, UK

[Fragment from the discussion of the perturbative matching. Eq. (13), which relates the operators L1(a⁻¹) and L2(a⁻¹) to their continuum counterparts through a mixing matrix built from the ratio of couplings αs^MS(a⁻¹)/αs^MS(m_b) raised to the power 9/(2β₀), the factor δ, and colour factors such as 2C/N_c, was garbled beyond recovery in extraction. The quantity δ appearing in it is]

    δ = [ αs^MS(a⁻¹) / αs^MS(m_b) ]^(9/(2β₀)) − 1 = 0.40 ± 0.04.    (14)

In estimating δ we have used Λ_QCD = 250 MeV, a⁻¹ = 1.10 GeV, m_b = 4.5 GeV and β₀ = 9. The error quoted for δ includes a 20% uncertainty in Λ_QCD. In the second step of the matching we relate the matrix elements renormalised in the continuum to those regularized on the lattice, both at the same scale a⁻¹. Although this involves corrections of O(αs), which are, in principle, beyond the …
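The central value in Eq. (14) can be reproduced with one-loop running of the coupling. The snippet below is our addition (Python), and it rests on two assumptions of ours: the one-loop formula αs(μ) = 4π/(β₀ ln(μ²/Λ²)) and the reconstructed definition of δ given above.

    import math

    def alpha_s(mu, lam=0.25, beta0=9.0):
        # One-loop strong coupling at scale mu (GeV), with Lambda_QCD = 250 MeV.
        return 4.0 * math.pi / (beta0 * math.log(mu ** 2 / lam ** 2))

    beta0 = 9.0
    delta = (alpha_s(1.10) / alpha_s(4.5)) ** (9.0 / (2.0 * beta0)) - 1.0
    print(round(delta, 2))   # ~0.40, the central value quoted in Eq. (14)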

An Empirical Comparison between Direct and Indirect Test Result Checking Approaches


An Empirical Comparison between Direct and Indirect Test Result Checking Approaches∗†‡§

HKU CS Tech Report TR-2006-13

Peifeng Hu, The University of Hong Kong, Pokfulam, Hong Kong (pfhu@cs.hku.hk)
Zhenyu Zhang, The University of Hong Kong, Pokfulam, Hong Kong (zyzhang@cs.hku.hk)
W. K. Chan, City University of Hong Kong, Tat Chee Avenue, Hong Kong (wkchan@.hk)
T. H. Tse, The University of Hong Kong, Pokfulam, Hong Kong (thtse@cs.hku.hk)

ABSTRACT
An oracle in software testing is a mechanism for checking whether the system under test has behaved correctly for any execution. In some situations, oracles are unavailable or too expensive to apply. This is known as the oracle problem. It is crucial to develop techniques to address it, and metamorphic testing (MT) was one such proposal. This paper conducts a controlled experiment to investigate the cost effectiveness of using MT by 38 testers on three open-source programs. The fault detection capability and time cost of MT are compared with the popular assertion checking method. Our results show that MT is cost-efficient and has potential for detecting more faults than the assertion checking method.

∗ © ACM, 2006. This is the authors' version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in Proceedings of the 3rd International Workshop on Software Quality Assurance (SOQUA 2006) (in conjunction with the 14th ACM SIGSOFT International Symposium on Foundations of Software Engineering (SIGSOFT 2006/FSE-14)), pages 6-13. ACM Press, New York, 2006. doi: 10.1145/1188895.1188901.
† This research is supported in part by a grant of the Research Grants Council of Hong Kong (project no. HKU 7145/04E), a grant of City University of Hong Kong, and a grant of The University of Hong Kong.
‡ All correspondence should be addressed to Prof. T. H. Tse at Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong. Tel: (+852) 2859 2183. Fax: (+852) 2557 8447. Email: thtse@cs.hku.hk.
§ Part of the work was done when Chan was with The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SOQUA'06, November 6, 2006, Portland, OR, USA. Copyright 2006 ACM 1-59593-584-3/06/0011 ... $5.00.

Categories and Subject Descriptors: D.2.5 [Software Engineering]: Testing and Debugging - Testing tools; D.2.8 [Software Engineering]: Metrics - Product metrics

General Terms: Experimentation

Keywords: Metamorphic testing, test oracle, controlled experiment, empirical evaluation

1. INTRODUCTION
Software testing assures programs by executing test cases over the programs with the intent to reveal failures [3]. To do so, software testers need to evaluate test results through an oracle, which is a mechanism for checking whether a program has behaved correctly [35]. In many situations, however, oracles are unavailable or too expensive to apply. This is known as the oracle problem [35].
Usually, the main purpose of implementing a specific program is to compute unknown results. If the expected results could easily be computed through other automatic means, then there would not be a need to implement the program in the first place. On the other hand, manual checking of program outputs is slow, ineffective, and costly, especially for a large number of test cases. Assessing the correctness of program outcomes has, therefore, been recognized as "one of the most difficult tasks in software testing" [27].

As we shall review in Section 2, assertion checking [4] and metamorphic testing (MT) [9, 17, 18, 11] are techniques to alleviate the oracle problem. Assertion checking verifies the result or intermediate program states of a test case. It directly confirms the execution behavior of a program in terms of a checking condition. MT takes another direction, which verifies follow-up test cases based on existing test cases. It cross-checks the test results of existing test cases and their follow-up test cases. In other words, MT indirectly verifies the behaviors of multiple program executions in terms of a checking condition. It would be interesting to compare the two approaches on their effectiveness in identifying failures.

There have been various case studies in applying metamorphic testing to different types of programs, ranging from conventional programs and object-oriented programs to pervasive programs and web services. Chen et al. [16] reported on the testing of programs for solving partial differential equations. They further investigated the integration of metamorphic testing with fault-based testing and global symbolic evaluation [18]. Gotlieb and Botella [22] developed an automated framework to check against a restricted class of metamorphic relations. Tse and others applied the metamorphic approach to the unit testing [33] and integration testing [10] of context-sensitive middleware-based applications. Chan et al. [13, 14] developed a metamorphic approach to the online testing of service-oriented software applications. Throughout these studies, both the testing and the evaluation of experimental results were conducted by the researchers themselves. The programs under test were from academic sources and relatively small. There is a need for systematic empirical research on how well MT can be applied in practical situations and how effective it is compared with other testing strategies¹.

Like other comparisons of testing strategies, such as between control flow and data flow criteria [21] and among different data flow criteria [25], controlled experimental evaluations are essential. They should answer the following research questions: (a) Can testers be trained to apply MT properly? (b) How does the fault detection effectiveness of MT compare with other effective strategies? (c) What is the effort for applying MT?

This paper reports and discusses the results of such a controlled experiment. We restricted the scope to object-oriented testing at the class level [4]. The subjects were 38 postgraduate students enrolled in an advanced software testing course. Before doing the experiment, they were taught the concepts of MT and of a reference strategy, assertion checking [4], to alleviate the oracle problem. The training sessions for either concept were similar in duration. Three open-source programs were selected as target programs. The subjects were required to apply both the MT and the assertion checking strategies to test these programs independently.
We ran their test cases over faulty versions of the target programs to assess the capability of these two testing strategies in detecting faults [1]. Results were analyzed to compare the costs and effectiveness of MT and assertion checking. The main contributions of this paper are four-fold: (i) It is the first controlled experiment to study the above questions. (ii) The experiment shows that metamorphic testing is more effective than assertion checking for identifying faults in object-oriented programs. (iii) It confirms the belief that subjects can formulate metamorphic relations and implement MT without much difficulty. In fact, the experiment shows that all subjects managed to propose metamorphic relations after a brief introduction, and identical or very similar metamorphic relations were proposed by different subjects. (iv) It also indicates that metamorphic testing is worth applying in terms of time cost.

The paper is organized as follows: Section 2 discusses the related literature. Section 3 introduces the fundamental notions and procedures of metamorphic testing. Section 4 describes the experiment, and the results are presented and discussed in Section 5. Finally, Section 6 concludes the paper.

¹ Other researchers have evaluated the selection of metamorphic relations. However, their work is not yet publicly accessible at the time of submission of this paper. Thus, we shall exclude it from our discussions.

2. RELATED WORK
Many approaches have been proposed to alleviate the test oracle problem. Instead of checking the output directly, these approaches generate various types of oracle to verify the correctness of a program. Chapman [15] suggested that a previous version of a program could be used to verify the correctness of the current version. Weyuker [35] suggested checking whether some identity relations would be preserved by the program under test. Blum and others [6, 2] proposed a program checker, which is an algorithm for checking the output of computation for numerical programs. Their theory was subsequently extended into the theory of self-testing/correcting [5]. Xie and Memon [36] studied different types of oracle for graphical user interface (GUI) testing. Binder [4] discussed four categories and eighteen oracle patterns in object-oriented program testing.

Assertion checking [32] is another method to verify the execution results of programs. An assertion, which is embedded directly in the source code, is a Boolean expression that verifies whether the execution of a test case satisfies some necessary properties for correct implementation. Assertions are supported by many programming languages and are easy to implement. Assertion checking has been widely used in object-oriented testing. For example, state invariants [4, 23], represented by assertions, can be used to check the state-based behaviors of a system. Briand et al. [8] investigated the effectiveness of using state-invariant assertions as oracles and compared it with the results using precise oracles for object-oriented programs. It was shown that state-invariant assertions were effective in detecting state-related errors.
Since our target programs are also object-oriented programs, we have chosen assertion checking as the alternative testing strategy in our experimental comparison.

Some researchers have proposed to prepare test specifications, either manually or automatically, to alleviate the test oracle problem. Memon et al. [28] assumed that a test specification of internal object interactions was available and used it to identify non-conformance of the execution traces. This type of approach is common in conformance testing for telecommunication protocols. Sun et al. [31] proposed a similar approach to test the harnesses of … Last and others [24, 34] trained pattern classifiers to learn the causal input-output relationships of a legacy system. They then used the classifiers as test oracles. Podgurski and others [30, 20] classified failure reports into categories via classifiers, and then refined the classification by further means. Bowring et al. [7] used a progressive approach to train a classifier to help regression testing. Chan et al. [12] used classifiers to identify different types of behaviors related to the synchronization failures of objects in a multimedia application.

3. PRELIMINARIES OF METAMORPHIC RELATIONS AND TESTING
This section introduces metamorphic testing. As we have briefed in Section 1, metamorphic testing relies on a checking condition that relates multiple test cases and their results in order to verify whether any failures are revealed. Such a checking condition is known as a metamorphic relation. We shall first revisit metamorphic relations and then discuss how they are used in the metamorphic approach to software testing.

3.1 Metamorphic Relations
A metamorphic relation (MR) is an existing or expected relation over a set of distinct inputs and their corresponding outputs for multiple executions of the target function [17]. Consider, for instance, the sine function. For any inputs x1 and x2 such that x1 + x2 = π, we must have sin x1 = sin x2.

Definition 1 (Metamorphic Relation) [11]. Let x1, x2, ..., xk be a series of inputs to a function f, where k ≥ 1, and let ⟨f(x1), f(x2), ..., f(xk)⟩ be the corresponding series of results. Suppose ⟨f(xi1), f(xi2), ..., f(xim)⟩ is a subseries, possibly an empty subseries, of ⟨f(x1), f(x2), ..., f(xk)⟩. Let xk+1, xk+2, ..., xn be another series of inputs to f, where n ≥ k + 1, and let ⟨f(xk+1), f(xk+2), ..., f(xn)⟩ be the corresponding series of results. Suppose, further, that there exist relations r(x1, x2, ..., xk, f(xi1), f(xi2), ..., f(xim), xk+1, xk+2, ..., xn) and r′(x1, x2, ..., xn, f(x1), f(x2), ..., f(xn)) such that r′ must be true whenever r is satisfied. We say that

    MR = { (x1, x2, ..., xn, f(x1), f(x2), ..., f(xn)) |
           r(x1, x2, ..., xk, f(xi1), f(xi2), ..., f(xim), xk+1, xk+2, ..., xn)
           → r′(x1, x2, ..., xn, f(x1), f(x2), ..., f(xn)) }

is a metamorphic relation. When there is no ambiguity, we simply write the metamorphic relation as

    MR: If r(x1, x2, ..., xk, f(xi1), f(xi2), ..., f(xim), xk+1, xk+2, ..., xn),
        then r′(x1, x2, ..., xn, f(x1), f(x2), ..., f(xn)).
Furthermore, x1, x2, ..., xk are known as source test cases and xk+1, xk+2, ..., xn are known as follow-up test cases.

Similar to assertions in the mathematical sense, metamorphic relations are also necessary properties of the function to be implemented. They can, therefore, be used to detect inconsistencies in a program. They can be any relations involving the inputs and outputs of two or more executions of the target program. They may include inequalities, periodicity properties, convergence properties, subsumption relationships, and so on.

Intuitively, human testers are needed to study the problem domain related to a target program and formulate metamorphic relations accordingly. This is akin to requirements engineering, in which humans, instead of automatic requirements engines, are necessary for formulating system requirements. Is there a systematic methodology guiding testers to formulate metamorphic relations, like the methodologies that guide systems analysts to specify requirements? This remains an open question. We shall further investigate along this line in the future. We observe that other researchers are also beginning to formulate important properties in the form of specifications to facilitate the verification of system behaviors [19].

3.2 Metamorphic Testing
In practice, if the program is written by a competent programmer, most test cases are "successful test cases", which do not reveal any failure. These successful test cases have been considered useless in conventional testing. Metamorphic testing (MT) uses information from such successful test cases, which will be referred to as source test cases. Consider a program p for a target function f in the input domain D. A set of source test cases T = {t1, t2, ..., tk} can be selected according to any test case selection strategy. Executing the program p on T produces outputs p(t1), p(t2), ..., p(tk).
When there is an oracle, the test results can be verified against f(t1), f(t2), ..., f(tk). If these results reveal any failure, testing stops. On the other hand, when there is no oracle or when no failure is revealed, the metamorphic testing approach can continue to be applied to automatically generate follow-up test cases T′ = {tk+1, tk+2, ..., tn} based on the source test cases T, so that the program can be verified against metamorphic relations. For example, given a source test case x1 for a program that implements the sine function, we can construct a follow-up test case x2 based on the metamorphic relation x1 + x2 = π.

Definition 2 (Metamorphic Testing) [11]. Let P be an implementation of a target function f. The metamorphic testing of the metamorphic relation

    MR: If r(x1, x2, ..., xk, f(xi1), f(xi2), ..., f(xim), xk+1, xk+2, ..., xn),
        then r′(x1, x2, ..., xn, f(x1), f(x2), ..., f(xn))

involves the following steps: (1) Given a series of source test cases ⟨x1, x2, ..., xk⟩ and their respective results ⟨P(x1), P(x2), ..., P(xk)⟩, generate a series of follow-up test cases ⟨xk+1, xk+2, ..., xn⟩ according to the relation r(x1, x2, ..., xk, P(xi1), P(xi2), ..., P(xim), xk+1, xk+2, ..., xn) over the implementation P. (2) Check the relation r′(x1, x2, ..., xn, P(x1), P(x2), ..., P(xn)) over P. If r′ is false, then the metamorphic testing of MR reveals a failure.

3.3 Metamorphic Testing Procedure
Gotlieb and Botella [22] developed an automated framework for a subclass of metamorphic relations. The framework translates a specification into a constraint logic programming language program. Test cases can automatically be generated according to metamorphic testing. Their framework only works on a restricted subset of the C language and is not applicable to test cases involving objects. Since we want to apply MT to test real-world object-oriented programs, we adopt the original procedure [9], as follows: Firstly, testers identify and formulate metamorphic relations MR1, MR2, ..., MRn from the target function f. For each metamorphic relation MRi, testers construct a function geni to generate follow-up test cases from the source test cases. Next, for each metamorphic relation MRi, testers construct a function veri, which will be used to verify whether multiple inputs and the corresponding outputs satisfy MRi. After that, testers generate a set of source test cases T according to a preferred test case selection strategy. Finally, for every test case in T, the test driver invokes the function geni to generate follow-up test cases and applies the function veri to check whether the test cases satisfy the given metamorphic relation MRi. If a metamorphic relation MRi is violated by any test case, veri reports that an error is found in the program under test.
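As a concrete illustration of this procedure, here is a small sketch (our addition, not code from the paper) that instantiates gen_i and ver_i for the sine relation of Section 3.1: the follow-up test case is x2 = π − x1, and the output relation sin(x1) = sin(x2) is verified within a floating-point tolerance.

    import math
    import random

    def gen_follow_up(x1):
        # gen_i: construct the follow-up test case from the source test case
        # using the metamorphic relation x1 + x2 = pi.
        return math.pi - x1

    def verify_mr(program, x1, tol=1e-12):
        # ver_i: check the output relation sin(x1) = sin(x2).
        x2 = gen_follow_up(x1)
        return abs(program(x1) - program(x2)) <= tol

    random.seed(0)
    source_cases = [random.uniform(0.0, math.pi) for _ in range(100)]
    failures = [x for x in source_cases if not verify_mr(math.sin, x)]
    print(len(failures))   # 0 for a correct implementation; a mutant may violate the MR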
4. EXPERIMENT
This section describes the setup of the controlled experiment. It firstly formulates the research questions to be investigated and then describes the experimental design and the experimental procedure.

4.1 Research Questions
The research questions to be investigated are summarized as follows:
(a) Can the subjects properly apply MT after training? Can the subjects identify correct and useful metamorphic relations from target programs?
(b) Is MT an effective testing method? Does MT have a comparative advantage over other testing strategies, such as assertion checking, in terms of the number of mutants detected? To address this question, we shall use the standard statistical technique of null hypothesis testing.
Null Hypothesis H0: There is no significant difference between MT and assertion checking in terms of the number of mutants detected.
Alternative Hypothesis H1: There is a significant difference between MT and assertion checking in terms of the number of mutants detected.
We aim at applying the Mann-Whitney test to find out whether H0 should be rejected, with a view to showing that the difference between the two strategies is statistically significant rather than by chance.
(c) What is the effort, in terms of time cost, of applying MT?

4.2 Experimental Design
Our experiment identifies four independent variables and two dependent variables. The independent variables include the testing strategies, the subjects, the target programs, and the faulty versions of the target programs. The dependent variables are the effort, in terms of the number of metamorphic relations/assertions and the time cost, and the fault detection capability, in terms of the mutation detection ratio. For testing strategies, we incorporate MT and assertion checking. In the rest of this section, we describe the other independent variables; Section 5 will analyze the results in terms of the dependent variables.

Subjects: All the 38 subjects were postgraduate students in computer science who attended the course Software Engineering: Software Testing at The University of Hong Kong. These students had backgrounds in computer science, computer engineering, or related disciplines. The majority of them were taught postgraduate students with industrial experience. The rest were MPhil and PhD students.

We controlled that the training sessions for either approach were comparable in duration and in content. Since differences in software engineering background might affect the students' capability to apply metamorphic testing or assertion checking, we conducted a brief survey prior to the experimentation. It showed that most of them had real-life or academic experience in object-oriented design, Java programming, software testing, and assertion checking. Figure 1 lists the survey result. As most of the subjects were knowledgeable about object-oriented design and Java programming, they were deemed to be competent in the experimental tasks. On the other hand, we found a few students having rather limited experience in software testing and assertion checking. Since they did not have prior concepts of metamorphic testing either, the experiment did not specifically favor the metamorphic approach.

Target Programs: We used three open-source programs as target programs. All of them were Java programs selected from real-world software systems. The first target program, Boyer, is a program using the Boyer-Moore algorithm to support the applications in Canadian Mind Products, an online commercial software company². The program returns the index of the first occurrence of a specified pattern within a given text. The second target program, BooleanExpression, evaluates Boolean expressions and returns the resulting Boolean value. For example, the evaluation result of "!(true && false) || true" is "true".

² URL /products1.html.
The program is part of a popular open-source project, jboolexpr³, in SourceForge⁴, which is the largest open-source project website. The target program is a core part of the project. The third target program is TxnTableSorter. It is taken from a popular open-source project, EuroBudget⁵, on the SourceForge website. EuroBudget is an office application written in Java, similar to Microsoft Money or Quicken.

Table 1 specifies the statistics of the three target programs. The sizes of these programs are in line with the sizes of the target programs used in typical software testing research, such as [1] or the famous Siemens test suites. The first program is a piece of commercial software. The second program is a core part of a standard library. The third one is selected from real office software with hundreds of classes and more than 100,000 lines of code in total. All of them are open source.

Faulty Versions of Target Programs: To investigate the relative effectiveness of metamorphic testing and assertion checking, we used mutation operators to seed faults into the programs. A previous study [1] showed that well-defined mutation operators are valid for testing experiments⁶. In our experiment, mutants were seeded using the tool muJava [26]. The tool uses two types of mutation operator: class level and method level. Class level mutation operators are operators specific to generating faults in object-oriented programs at the class level. Method level mutation operators, defined in [29], are operators specific to statement faults. We only seeded method level mutation operators into the programs under study, because our experiment concentrated on unit testing and because this set of operators had been studied extensively [29, 1]. Table 2 lists all the mutation operators used in the controlled experiment. A total of 151 mutants were generated by muJava for the class Boyer, 145 for the class BooleanExpression, and 378 for TxnTableSorter. Note that faults were only seeded into the methods supposedly covered by the test cases for unit testing. Table 3 lists the number of mutants under each category of operators. We used all of them in the controlled experiment.

³ Available at /projects/jboolexpr.
⁴ URL .
⁵ Available at .
⁶ We also attempted to use publicly accessible real faults of these programs to conduct the experiments. However, descriptions of these faults in the source repositories were either too vague or not available.

Table 2: Categories of Mutation Operators
    AOD  Delete Arithmetic Operators
    AOI  Insert Arithmetic Operators
    AOR  Replace Arithmetic Operators
    ROR  Replace Relational Operators
    COR  Replace Conditional Operators
    COI  Insert Conditional Operators
    COD  Delete Conditional Operators
    SOR  Replace Shift Operators
    LOR  Replace Logical Operators
    LOI  Insert Logical Operators
    LOD  Delete Logical Operators
    ASR  Replace Assignment Operators

4.3 Experimental Procedure
Before the experiment, the subjects were given six hours of training in using MT and assertion checking. The target programs and the tasks to be performed were also presented to the subjects. The subjects were briefed about the main functionality of each target program and the algorithm used, thus simulating the real-life process in which a tester acquires the background knowledge of the program under test. They were blind to the use of mutants in the controlled experiment. For each program, the subjects were required to apply MT strictly following the procedure in Section 3.3, as well as to add assertions to the source code for checking. We did not restrict the number of metamorphic relations and assertions. The subjects were told to develop metamorphic relations and assertions as they saw fit, with a view to thoroughly testing each target program. We did
not mandate the use of a particular test case generation strategy, such as the all-def-use criterion, for MT or assertion checking. The subjects were simply asked to provide adequate test cases for testing the target programs. This avoided the possibility that some particular test case selection strategy, when applied on a large scale, might favor either MT or assertion checking.

We asked the students to submit metamorphic relations, functions to generate follow-up test cases, functions to verify metamorphic relations, test cases for metamorphic testing, source code with inserted assertions, and test cases for assertion checking. They were also asked to report the time costs of applying metamorphic testing and assertion checking. Before testing the faulty versions with these functions, assertions, and test cases, we checked the student submissions carefully to ensure that there was no implementation error.

4.4 Addressing the Threats to Validity
We briefly describe the threats to validity in this section before we present our main results in the next section.

Internal Validity: Internal validity refers to whether the observed effects depend only on the intended experimental variables. For this experiment, we provided the subjects with all the background materials and confirmed with them that they had sufficient time to perform all the tasks. On the other hand, we appreciate that students might be interrupted by minor Internet activities when they performed their tasks. Hence, the time costs reported by the subjects should be conservative. Furthermore, the subjects did not know the nature and details of the faults seeded. This measure ensured that their "designed" metamorphic relations and assertions were unbiased with respect to the seeded faults.

External Validity: External validity is the degree to which the results are generalizable to the testing of real-world systems. The programs used in our experiment were from real-life applications.
For example, EuroBudget is widely used and has been downloaded more than 10,000 times from SourceForge. On the other hand, some real-world programs can be much larger and less well documented than the open-source programs studied. More future studies may be in order for the testing of large complex systems using the MT method.

5. EXPERIMENTAL RESULTS
This section presents the experimental results of applying metamorphic testing and assertion checking. They are structured according to the dependent variables presented in the last section.

5.1 Metamorphic Relations and Assertions
A critical and difficult step in applying MT and assertion checking is to develop metamorphic relations and assertions for the target programs. Table 4 reports on the number of metamorphic relations and assertions identified by the subjects for the three target programs. The mean numbers of metamorphic relations developed by the subjects for the respective programs were 2.79, 2.68, and 5.00. The total numbers of different metamorphic relations identified by all subjects for the respective programs were 18, 39, and 25. The mean numbers of assertions for the respective programs were 6.96, 11.35, and 10.97. For the sake of brevity, we list in Table 5 only the metamorphic relations identified by the subjects for the Boyer program.

The results show that all the subjects could properly apply metamorphic testing and assertion checking after training. In general, they could identify a larger number of assertions than metamorphic relations. Furthermore, their abilities to identify metamorphic relations varied. In particular, we observe that all 38 subjects managed to propose metamorphic relations after some training for each of the three open-source programs. It confirms the belief of the originators of MT that testers can formulate metamorphic relations effectively.

5.2 Comparison of Fault Detection Capabilities
We use the subjects' metamorphic relations, assertions, and source and follow-up test cases to test the faulty versions of the target programs. The mutation detection ratio [1] is used to compare the fault detection capabilities of the MT and assertion checking strategies. The mutation detection ratio of a test set is defined as the number of mutants detected by the test set over the total number of mutants. For metamorphic testing, a mutant is detected if a source test case and follow-up test cases executed …

TEM-8 English Reading Comprehension Mock Questions (with Translation and Analysis)


Mock Test 1

The majority of successful senior managers do not closely follow the classical rational model of first clarifying goals, assessing the problem, formulating options, estimating likelihoods of success, making a decision, and only then taking action to implement the decision. Rather, in their day-by-day tactical maneuvers, these senior executives rely on what is vaguely termed intuition to manage a network of interrelated problems that require them to deal with ambiguity, inconsistency, novelty, and surprise, and to integrate action into the process of thinking.

Generations of writers on management have recognized that some practicing managers rely heavily on intuition. In general, however, such writers display a poor grasp of what intuition is. Some see it as the opposite of rationality; others view it as an excuse for capriciousness.

Isenberg's recent research on the cognitive processes of senior managers reveals that managers' intuition is neither of these. Rather, senior managers use intuition in at least five distinct ways. First, they intuitively sense when a problem exists. Second, managers rely on intuition to perform well-learned behavior patterns rapidly. This intuition is not arbitrary or irrational, but is based on years of painstaking practice and hands-on experience that build skills. A third function of intuition is to synthesize isolated bits of data and practice into an integrated picture, often in an "Aha!" experience. Fourth, some managers use intuition as a check on the results of more rational analysis. Most senior executives are familiar with the formal decision analysis models and tools, and those who use such systematic methods for reaching decisions are occasionally leery of solutions suggested by these methods which run counter to their sense of the correct course of action. Finally, managers can use intuition to bypass in-depth analysis and move rapidly to engender a plausible solution. Used in this way, intuition is an almost instantaneous cognitive process in which a manager recognizes familiar patterns.

One of the implications of the intuitive style of executive management is that thinking is inseparable from acting. Since managers often know what is right before they can analyze and explain it, they frequently act first and explain later. Analysis is inextricably tied to action in thinking/acting cycles, in which managers develop thoughts about their companies and organizations not by analyzing a problematic situation and then acting, but by acting and analyzing in close concert. Given the great uncertainty of many of the management issues that they face, senior managers often instigate a course of action simply to learn more about an issue. They then use the results of the action to develop a more complete understanding of the issue. One implication of thinking/acting cycles is that action is often part of defining the problem, not just of implementing the solution.

1. According to the text, senior managers use intuition in all of the following ways EXCEPT to
[A] speed up the creation of a solution to a problem.
[B] identify a problem.
[C] bring together disparate facts.
[D] stipulate clear goals.

2. The text suggests which of the following about the writers on management mentioned in line 1, paragraph 2?
[A] They have criticized managers for not following the classical rational model of decision analysis.
[B] They have not based their analyses on a sufficiently large sample of actual managers.
[C] They have relied in drawing their conclusions on what managers say rather than on what managers do.
[D] They have misunderstood how managers use intuition in making business decisions.

3. It can be inferred from the text that which of the following would most probably be one major difference in behavior between Manager X, who uses intuition to reach decisions, and Manager Y, who uses only formal decision analysis?
[A] Manager X analyzes first and then acts; Manager Y does not.
[B] Manager X checks possible solutions to a problem by systematic analysis; Manager Y does not.
[C] Manager X takes action in order to arrive at the solution to a problem; Manager Y does not.
[D] Manager Y draws on years of hands-on experience in creating a solution to a problem; Manager X does not.

4. The text provides support for which of the following statements?
[A] Managers who rely on intuition are more successful than those who rely on formal decision analysis.
[B] Managers cannot justify their intuitive decisions.
[C] Managers' intuition works contrary to their rational and analytical skills.
[D] Intuition enables managers to employ their practical experience more efficiently.

5. Which of the following best describes the organization of the first paragraph of the text?
[A] An assertion is made and a specific supporting example is given.
[B] A conventional model is dismissed and an alternative introduced.
[C] The results of recent research are introduced and summarized.
[D] Two opposing points of view are presented and evaluated.

Answers and analysis:
1. Answer: D. Analysis: This is an inductive inference question.

Empirical Likelihood for Right Censored and Left Truncated Data
Jingyu (Julia) Luan
University of Kentucky, Johns Hopkins University
March 30, 2004

Outline of the Presentation:
• Part I: Introduction and Background
• Part II: Empirical Likelihood Theorem for Right-Censored and Left-Truncated Data
• Part III: Future Research

Part I: Introduction and Background
1.1 Empirical Likelihood Ratio Test
1.2 Censoring and Truncation
1.3 Literature Review
1.4 Counting Process and Survival Analysis

1.1 Empirical Likelihood Ratio Test

Likelihood ratio test:

    R = (maximum of likelihood under H0) / (maximum of likelihood under H0 ∪ Ha)

Two cases: A. Parametric; B. Nonparametric.

Parametric situation: Wilks (1938): −2 log R has an asymptotic χ² distribution with p degrees of freedom under the null hypothesis.

Nonparametric situation: the empirical likelihood is defined as (Owen 1988)

    L(F) = ∏_{i=1}^n [ F(X_i) − F(X_i−) ],

where X1, X2, …, Xn are independent random variables from an unknown distribution F. It is well known that L(F) is maximized over all possible CDFs by the empirical distribution function

    Fn(x) = (1/n) ∑_{i=1}^n I[X_i ≤ x].

Owen focused on studying the properties of the likelihood ratio function when F0 satisfies a certain constraint (the null hypothesis),

    ∫ g(x) dF(x) = θ, for a given g(·),

and defined the empirical likelihood ratio function as

    R(F | θ) = L(F | θ) / L(Fn),

where L(F|θ) is the maximized empirical likelihood function among all CDFs satisfying the above constraint (the null hypothesis). Owen (1988) demonstrated that when θ = θ0, −2 log R(F|θ) has an asymptotic χ² distribution with df = 1.
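For the mean constraint (g(x) = x, so that θ is the mean of F), −2 log R(F|θ) can be computed directly. The sketch below is our addition in Python (not from the talk): following Owen's formulation, the maximizing weights are w_i = 1/[n(1 + λ(X_i − θ))], where the Lagrange multiplier λ solves a monotone one-dimensional equation, and the statistic is compared with its χ²(1) limit.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import chi2

    def neg2_log_el_ratio_mean(x, mu):
        # -2 log R(mu) for the mean, complete (uncensored) data.
        x = np.asarray(x, dtype=float)
        n = len(x)
        if not (x.min() < mu < x.max()):
            return np.inf                     # mu outside the convex hull: R = 0
        z = x - mu
        g = lambda lam: np.sum(z / (1.0 + lam * z))
        eps = 1e-10
        lo = (1.0 / n - 1.0) / z.max() + eps  # keeps every 1 + lam*z_i > 0
        hi = (1.0 / n - 1.0) / z.min() - eps
        lam = brentq(g, lo, hi)               # g is strictly decreasing in lam
        return 2.0 * np.sum(np.log1p(lam * z))

    rng = np.random.default_rng(0)
    x = rng.exponential(scale=2.0, size=50)
    stat = neg2_log_el_ratio_mean(x, mu=2.0)
    print(stat, chi2.sf(stat, df=1))          # statistic and its chi-square(1) p-value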
1.2 Censoring and Truncation

Survival analysis is the analysis of time-to-event data. Two important features of time-to-event data are:
A. Censoring
B. Truncation

Censoring occurs when an individual's life length is known only to fall within a certain period of time. It comes in three forms:
A. Right censoring
B. Left censoring
C. Interval censoring

Truncation:
A. Left truncation occurs when subjects enter a study at a particular time and are followed from this delayed entry time until the event happens or until the subject is censored.
B. Right truncation occurs when only individuals who have experienced the event of interest are included in the sample.

Example of right-censored and left-truncated data [Klein and Moeschberger, p. 65]: in a survival study of the Channing House retirement center located in California, the ages at which individuals entered the retirement community (truncation event) and the ages at which members of the community died or were still alive at the end of the study (censoring event) were recorded.
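Data with this right-censored, left-truncated structure can be handled by standard survival software. Below is a minimal sketch (our addition; it assumes the third-party Python package lifelines, and the ages are invented for illustration, not the real Channing House records): the entry argument encodes the delayed-entry (left truncation) ages, and the event indicator encodes right censoring.

    import pandas as pd
    from lifelines import KaplanMeierFitter

    df = pd.DataFrame({
        "entry_age": [72.0, 78.1, 80.3, 69.5, 75.8],  # age at entry (truncation event)
        "exit_age":  [81.2, 79.4, 90.0, 71.1, 88.6],  # age at death or end of study
        "died":      [1,    0,    1,    1,    0],     # 1 = death observed, 0 = censored
    })

    kmf = KaplanMeierFitter()
    # `entry` tells the fitter that each subject is observed only after
    # entry_age, which accounts for the left truncation.
    kmf.fit(df["exit_age"], event_observed=df["died"], entry=df["entry_age"],
            label="illustrative Channing-House-style data")
    print(kmf.survival_function_)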
1.3 Literature Review

For purely truncated data, Li (1995) studied the empirical likelihood theorem. For solely censored data, Pan and Zhou (1999) showed that the empirical likelihood ratio with a continuous mean or hazard constraint also has a chi-square limit, and Fang (2000) proved that the empirical likelihood ratio with a discrete hazard constraint follows a chi-square distribution in both the one-sample and the two-sample case.

Part II: Empirical Likelihood Theorem for Right-Censored and Left-Truncated Data
2.1 One Sample Case
2.2 Two Sample Case

One sample case: first, some notation is introduced; the likelihood function is then constructed from it (the formulas on these slides did not survive extraction).

Two sample case.

Part III: Future Research

• Extension to k constraints: the same setting as the one-sample case, under k constraints.
• Regression models: Owen (1991) has demonstrated that the empirical likelihood ratio can be applied to regression models. However, for censored/truncated data the empirical likelihood results are rare (an exception is Li (2002)). Notice we are talking about the ordinary regression model, not the Cox proportional hazards regression model. We propose a "redistribution" algorithm for estimating the parameters in regression models with censored/truncated data, and we think the empirical likelihood method can be used there to do inference.