Contrast Analysis

Hervé Abdi ⋅ Lynne J. Williams

In Neil Salkind (Ed.), Encyclopedia of Research Design. Thousand Oaks, CA: Sage, 2010.

A standard analysis of variance (a.k.a. ANOVA) provides an F-test, which is called an omnibus test because it reflects all possible differences between the means of the groups analyzed by the ANOVA. However, most experimenters want to draw conclusions more precise than "the experimental manipulation has an effect on participants' behavior." Precise conclusions can be obtained from contrast analysis because a contrast expresses a specific question about the pattern of results of an ANOVA. Specifically, a contrast corresponds to a prediction precise enough to be translated into a set of numbers called contrast coefficients which reflect the prediction. The correlation between the contrast coefficients and the observed group means directly evaluates the similarity between the prediction and the results. When performing a contrast analysis we need to distinguish whether the contrasts are planned or post hoc. Planned or a priori contrasts are selected before running the experiment.
In general, they reflect the hypotheses the experimenter wanted to test, and there are usually few of them. Post hoc or a posteriori (after the fact) contrasts are decided after the experiment has been run. The goal of a posteriori contrasts is to ensure that unexpected results are reliable. When performing a planned analysis involving several contrasts, we need to evaluate whether these contrasts are mutually orthogonal. Two contrasts are orthogonal when their contrast coefficients are uncorrelated (i.e., their coefficient of correlation is zero). The number of possible orthogonal contrasts is one less than the number of levels of the independent variable.

Hervé Abdi, The University of Texas at Dallas
Lynne J. Williams, The University of Toronto Scarborough
Address correspondence to: Hervé Abdi, Program in Cognition and Neurosciences, MS: Gr.4.1, The University of Texas at Dallas, Richardson, TX 75083-0688, USA.

All contrasts are evaluated using the same general procedure. First, the contrast is formalized as a set of contrast coefficients (also called contrast weights). Second, a specific F ratio (denoted Fψ) is computed. Finally, the probability associated with Fψ is evaluated.
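As a minimal sketch of the first step, here is a small helper (hypothetical, not from the original article) that turns a set of predicted values into contrast coefficients by centering them so that they sum to zero:

```python
# Hypothetical helper: center a set of predicted values so that they
# sum to zero, turning them into contrast coefficients.
def to_contrast(ranks):
    mean = sum(ranks) / len(ranks)
    return [r - mean for r in ranks]

# A hypothetical 4-group prediction: group 1 lowest, groups 2 and 3
# intermediate and equivalent, group 4 highest.
coeffs = to_contrast([1, 2, 2, 3])
print(coeffs)  # [-1.0, 0.0, 0.0, 1.0]
```

Any monotone relabeling of the predicted pattern produces coefficients with the same zero-sum property.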
This last step changes with the type of analysis performed.

1 How to express a research hypothesis as a contrast

When a research hypothesis is precise, it is possible to express it as a contrast. A research hypothesis, in general, can be expressed as a shape, a configuration, or a rank ordering of the experimental means. In all of these cases, we can assign numbers which will reflect the predicted values of the experimental means. These numbers are called contrast coefficients when their mean is zero. To convert a set of numbers into a contrast, it suffices to subtract their mean from each of them. Often, for convenience, we will express contrast coefficients with integers.

For example, assume that for a 4-group design, a theory predicts that the first and second groups should be equivalent, the third group should perform better than these two groups, and the fourth group should do better than the third with an advantage of twice the gain of the third over the first and the second. When translated into a set of ranks this prediction gives:

  C1  C2  C3  C4  Mean
   1   1   2   4    2

After subtracting the mean, we get the following contrast:

  C1  C2  C3  C4  Mean
  -1  -1   0   2    0

In case of doubt, a good heuristic is to draw the predicted configuration of results, and then to represent the position of the means by ranks.

2 A priori (planned) orthogonal contrasts

2.1 How to correct for multiple tests

When several contrasts are evaluated, several statistical tests are performed on the same data set and this increases the probability of a Type I error (i.e., rejecting the null hypothesis when it is true). In order to control the Type I error at the level of the set (a.k.a. the family) of contrasts, one needs to correct the α level used to evaluate each contrast. This correction for multiple contrasts can be done using the Šidák equation, the Bonferroni (a.k.a. Boole, or Dunn) inequality, or the Monte-Carlo technique.

2.1.1 Šidák and Bonferroni

The probability of making at least one Type I error for a family of C orthogonal (i.e., statistically independent) contrasts is

  α[PF] = 1 − (1 − α[PC])^C ,   (1)

with α[PF] being the Type I error for the family of contrasts, and α[PC] being the Type I error per contrast. This equation can be rewritten as

  α[PC] = 1 − (1 − α[PF])^(1/C) .   (2)

This formula, called the Šidák equation, shows how to correct the α[PC] values used for each contrast. Because the Šidák equation involves a fractional power, one can use an approximation known as the Bonferroni inequality, which relates α[PC] to α[PF] by

  α[PC] ≈ α[PF] / C .   (3)

Šidák and Bonferroni are related by the inequality

  α[PC] = 1 − (1 − α[PF])^(1/C) ≥ α[PF] / C .   (4)

They are, in general, very close to each other. As can be seen, the Bonferroni inequality is a pessimistic estimation. Consequently Šidák should be preferred. However, the Bonferroni inequality is better known, and hence is used and cited more often.

2.1.2 Monte-Carlo

The Monte-Carlo technique can also be used to correct for multiple contrasts. It consists of running a simulated experiment many times using random data, with the aim of obtaining a pattern of results showing what would happen just on the basis of chance. This approach can be used to quantify α[PF], the inflation of Type I error due to multiple testing. Equation 2 can then be used to set α[PC] in order to control the overall value of the Type I error.

As an illustration, suppose that 6 groups with 100 observations per group are created with data randomly sampled from a normal population. By construction, the H0 is true (i.e., all population means are equal). Now, construct 5 independent contrasts from these 6 groups. For each contrast, compute an F-test. If the probability associated with the statistical index is smaller than α = .05, the contrast is said to reach significance (i.e., α[PC] is used). Then have a computer redo the experiment 10,000 times. In sum, there are 10,000 experiments, 10,000 families of contrasts, and 5 × 10,000 = 50,000 contrasts. The results of this simulation are given in Table 1.

Table 1: Results of a Monte-Carlo simulation. Number of Type I errors when performing C = 5 contrasts for 10,000 analyses of variance performed on a 6-group design when the H0 is true. How to read the table? For example, 192 families out of 10,000 have 2 Type I errors; this gives 2 × 192 = 384 Type I errors.

  Number of families     X: Number of Type I   Number of
  with X Type I errors   errors per family     Type I errors
          7,868                  0                    0
          1,907                  1                1,907
            192                  2                  384
             20                  3                   60
             13                  4                   52
              0                  5                    0
         10,000                                   2,403

Table 1 shows that the H0 is rejected for 2,403 of the 50,000 contrasts actually performed (5 contrasts times 10,000 experiments). From these data, an estimation of α[PC] is computed as:

  α[PC] = (number of contrasts having reached significance) / (total number of contrasts)
        = 2,403 / 50,000 = .0479 .   (5)

This value falls close to the theoretical value of α = .05.

It can also be seen that for 7,868 experiments no contrast reached significance. Equivalently, for 2,132 experiments (10,000 − 7,868) at least one Type I error was made. From these data, α[PF] can be estimated as:

  α[PF] = (number of families with at least 1 Type I error) / (total number of families)
        = 2,132 / 10,000 = .2132 .   (6)

This value falls close to the theoretical value given by Equation 1:

  α[PF] = 1 − (1 − α[PC])^C = 1 − (1 − .05)^5 = .226 .

2.2 Checking the orthogonality of two contrasts

Two contrasts are orthogonal (or independent) if their contrast coefficients are uncorrelated.
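The Monte-Carlo procedure just described can be sketched in a few lines. This is a simplified illustration, not the original simulation: the five Helmert-style contrasts and the number of replications (2,000 rather than 10,000, for speed) are choices of ours.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
A, S, C, alpha, n_rep = 6, 100, 5, .05, 2000

# Five mutually orthogonal (Helmert-style) contrasts on 6 groups.
contrasts = np.array([
    [1, -1,  0,  0,  0,  0],
    [1,  1, -2,  0,  0,  0],
    [1,  1,  1, -3,  0,  0],
    [1,  1,  1,  1, -4,  0],
    [1,  1,  1,  1,  1, -5],
], dtype=float)

# Critical F for one contrast: df = 1 and A(S - 1).
crit = f_dist.ppf(1 - alpha, 1, A * (S - 1))

errors_per_family = []
for _ in range(n_rep):
    data = rng.normal(size=(A, S))               # H0 true: all means equal
    means = data.mean(axis=1)
    ms_error = data.var(axis=1, ddof=1).mean()   # pooled within-group MS
    # SS_psi = S (sum c_a M_a)^2 / sum c_a^2 ; F = SS_psi / MS_error
    ss_psi = S * (contrasts @ means) ** 2 / (contrasts ** 2).sum(axis=1)
    errors_per_family.append(int((ss_psi / ms_error > crit).sum()))

errors = np.array(errors_per_family)
alpha_pc = errors.sum() / (C * n_rep)   # per-contrast Type I rate
alpha_pf = (errors > 0).mean()          # family-wise Type I rate
print(alpha_pc, alpha_pf)               # close to .05 and 1 - .95**5 ≈ .226
```

With orthogonal contrasts the estimated family-wise rate tracks Equation 1, which is the basis for setting α[PC] via Equation 2.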
Recall that contrast coefficients have zero sum (and therefore a zero mean). Therefore, two contrasts whose A contrast coefficients are denoted C_a,i and C_a,j will be orthogonal if and only if:

  Σ_{a=1}^{A} C_a,i C_a,j = 0 .   (7)

2.3 Computing the sum of squares, mean square, and F

The sum of squares for a contrast can be computed using the C_a coefficients. Specifically, the sum of squares for a contrast is denoted SSψ, and is computed as

  SSψ = S (Σ C_a M_a.)² / Σ C_a²   (8)

where S is the number of subjects in a group. Also, because the sum of squares for a contrast has one degree of freedom, it is equal to the mean square of effect for this contrast:

  MSψ = SSψ / dfψ = SSψ / 1 = SSψ .   (9)

The Fψ ratio for a contrast is then computed as

  Fψ = MSψ / MS_error .   (10)

2.4 Evaluating F for orthogonal contrasts

Planned orthogonal contrasts are equivalent to independent questions asked of the data. Because of that independence, the current procedure is to act as if each contrast were the only contrast tested. This amounts to not using a correction for multiple tests, and it gives maximum power to the test. Practically, the null hypothesis for a contrast is tested by computing an F ratio as indicated in Equation 10 and evaluating its p value using a Fisher sampling distribution with ν1 = 1 and ν2 being the number of degrees of freedom of MS_error [e.g., in independent measurement designs with A groups and S observations per group, ν2 = A(S − 1)].

2.5 An example

This example is inspired by an experiment by Smith (1979). The main purpose of this experiment was to show that being in the same mental context for learning and for test gives better performance than being in different contexts. During the learning phase, subjects learned a list of 80 words in a room painted with an orange color, decorated with posters, paintings, and a decent amount of paraphernalia. A first memory test was performed to give subjects the impression that the experiment was over. One day later, subjects were unexpectedly re-tested for their memory. An experimenter asked them to write down all the words of the list they could remember. The test took place in 5 different experimental conditions. Fifty subjects (ten per group) were randomly assigned to one of the five experimental groups. The five experimental conditions were:

1. Same context. Subjects are tested in the same room in which they learned the list.
2. Different context. Subjects are tested in a room very different from the one in which they learned the list. The new room is located in a different part of the campus, is painted grey, and looks very austere.
3. Imaginary context. Subjects are tested in the same room as subjects from Group 2. In addition, they are told to try to remember the room in which they learned the list. In order to help them, the experimenter asks them several questions about the room and the objects in it.
4. Photographed context. Subjects are placed in the same condition as Group 3, and, in addition, they are shown photos of the orange room in which they learned the list.
5. Placebo context. Subjects are in the same condition as subjects in Group 2. In addition, before starting to try to recall the words, they are asked first to perform a warm-up task, namely, to try to remember their living room.

The data and ANOVA results of the replication of Smith's experiment are given in Tables 2 and 3.

Table 2: Data from a replication of an experiment by Smith (1979). The dependent variable is the number of words recalled.

                        Experimental Context
                Group 1   Group 2   Group 3   Group 4   Group 5
                Same      Different Imagery   Photo     Placebo
                  25        11        14        25         8
                  26        21        15        15        20
                  17         9        29        23        10
                  15         6        10        21         7
                  14         7        12        18        15
                  17        14        22        24         7
                  14        12        14        14         1
                  20         4        20        27        17
                  11         7        22        12        11
                  21        19        12        11         4
  Y_a.           180       110       170       190       100
  M_a.            18        11        17        19        10
  M_a. − M..       3        −4         2         4        −5
  Σ(Y_as−M_a.)²  218       284       324       300       314

Table 3: ANOVA table for a replication of Smith's experiment (1979).

  Source         df      SS        MS       F        Pr(F)
  Experimental    4     700.00   175.00    5.469**   .00119
  Error          45   1,440.00    32.00
  Total          49   2,140.00

2.5.1 Research hypotheses for contrast analysis

Several research hypotheses can be tested with Smith's experiment. Suppose that the experiment was designed to test these hypotheses:

– Research Hypothesis 1. Groups for which the context at test matches the context during learning (i.e., is the same or is simulated by imaging or photography) will perform better than groups with a different or placebo context.
– Research Hypothesis 2. The group with the same context will differ from the groups with imaginary or photographed contexts.
– Research Hypothesis 3. The imaginary context group differs from the photographed context group.
– Research Hypothesis 4. The different context group differs from the placebo group.

2.5.2 Contrasts

The four research hypotheses are easily transformed into statistical hypotheses. For example, the first research hypothesis is equivalent to stating the following null hypothesis: the means of the populations for Groups 1, 3, and 4 have the same value as the means of the populations for Groups 2 and 5. This is equivalent to contrasting Groups 1, 3, 4 on the one hand and Groups 2, 5 on the other. This first contrast is denoted ψ1:

  ψ1 = 2μ1 − 3μ2 + 2μ3 + 2μ4 − 3μ5 .

The null hypothesis to be tested is

  H0,1: ψ1 = 0 .

The first contrast is equivalent to defining the following set of coefficients C_a:

        Gr.1  Gr.2  Gr.3  Gr.4  Gr.5  ΣC_a
  C_a    +2    −3    +2    +2    −3     0

Note that the sum of the coefficients C_a is zero, as it should be for a contrast. Table 4 shows all 4 contrasts.

Table 4: Orthogonal contrasts for the replication of Smith (1979).

  Contrast  Gr.1  Gr.2  Gr.3  Gr.4  Gr.5  ΣC_a
  ψ1         +2    −3    +2    +2    −3     0
  ψ2         +2     0    −1    −1     0     0
  ψ3          0     0    +1    −1     0     0
  ψ4          0    +1     0     0    −1     0

2.5.3 Are the contrasts orthogonal?

Now the problem is to decide if the contrasts constitute an orthogonal family. We check that every pair of contrasts is orthogonal by using Equation 7. For example, Contrasts 1 and 2 are orthogonal because

  Σ_{a=1}^{A=5} C_a,1 C_a,2 = (2 × 2) + (−3 × 0) + (2 × −1) + (2 × −1) + (−3 × 0) = 0 .

2.5.4 F test

The sums of squares and Fψ for a contrast are computed from Equations 8 and 10. For example, the steps for the computation of SSψ1 are given in Table 5:

Table 5: Steps for the computation of SSψ1 of Smith (1979).

  Group   M_a.    C_a    C_a M_a.   C_a²
  1       18.00   +2     +36.00      4
  2       11.00   −3     −33.00      9
  3       17.00   +2     +34.00      4
  4       19.00   +2     +38.00      4
  5       10.00   −3     −30.00      9
  Σ                       45.00     30

  SSψ1 = S (Σ C_a M_a.)² / Σ C_a² = 10 × (45.00)² / 30 = 675.00
  MSψ1 = 675.00
  Fψ1 = MSψ1 / MS_error = 675.00 / 32.00 = 21.094 .   (11)

The significance of a contrast is evaluated with a Fisher distribution with 1 and A(S − 1) = 45 degrees of freedom, which gives a critical value of 4.06 for α = .05 (7.23 for α = .01). The sums of squares for the remaining contrasts are SSψ2 = 0, SSψ3 = 20, and SSψ4 = 5, each with 1 and A(S − 1) = 45 degrees of freedom. Therefore, ψ2, ψ3, and ψ4 are non-significant. Note that the sums of squares of the contrasts add up to SS_experimental. That is:

  SS_experimental = SSψ1 + SSψ2 + SSψ3 + SSψ4 = 675.00 + 0.00 + 20.00 + 5.00 = 700.00 .

When the contrasts are orthogonal, the degrees of freedom add up in the same way as the sums of squares do. This explains why the maximum number of orthogonal contrasts is equal to the number of degrees of freedom of the experimental sum of squares.

3 A priori (planned) non-orthogonal contrasts

Orthogonal contrasts are relatively straightforward because each contrast can be evaluated on its own. Non-orthogonal contrasts, however, are more complex. The main problem is to assess the importance of a given contrast conjointly with the other contrasts. There are currently two main approaches to this problem. The classical approach corrects for multiple statistical tests (e.g., using a Šidák or Bonferroni correction), but essentially evaluates each contrast as if it were coming from a set of orthogonal contrasts. The multiple regression (or modern) approach evaluates each contrast as a predictor from a set of non-orthogonal predictors and estimates its specific contribution to the explanation of the dependent variable. In other words, the classical approach evaluates each contrast for itself, whereas the multiple regression approach evaluates each contrast as a member of a set of contrasts and estimates the specific contribution of each contrast in this set. For an orthogonal set of contrasts, the two approaches are equivalent.

3.1 The classical approach

Some problems are created by the use of multiple non-orthogonal contrasts. Recall that the most important one is that the greater the number of contrasts, the greater the risk of a Type I error. The general strategy adopted by the classical approach to this problem is to correct for multiple testing.

3.1.1 Šidák and Bonferroni corrections for non-orthogonal contrasts

When a family of contrasts is non-orthogonal, Equation 1 gives a lower bound for α[PC] (cf. Šidák, 1967; Games, 1977). So, instead of the equality, the following inequality, called the Šidák inequality, holds:

  α[PF] ≤ 1 − (1 − α[PC])^C .   (12)

This inequality gives an upper bound for α[PF]; therefore, the real value of α[PF] is smaller than its estimated value. As previously, we can approximate the Šidák inequality by the Bonferroni inequality as

  α[PF] < C α[PC] .   (13)

And, as previously, Šidák and Bonferroni are linked to each other by the inequality

  α[PF] ≤ 1 − (1 − α[PC])^C < C α[PC] .   (14)

3.1.2 An example: Classical approach

Let us go back to Smith's (1979) study (see Table 2). Suppose that Smith wanted to test these three hypotheses:

– Research Hypothesis 1. Groups for which the context at test matches the context during learning will perform better than groups with different contexts;
– Research Hypothesis 2. Groups with real contexts will perform better than those with imagined contexts;
– Research Hypothesis 3. Groups with any context will perform better than those with no context.

These hypotheses can easily be transformed into the set of contrasts given in Table 6.

Table 6: Non-orthogonal contrasts for the replication of Smith (1979).

  Contrast  Gr.1  Gr.2  Gr.3  Gr.4  Gr.5  ΣC_a
  ψ1          2    −3     2     2    −3     0
  ψ2          3     3    −2    −2    −2     0
  ψ3          1    −4     1     1     1     0

Table 7: Fψ values for the non-orthogonal contrasts from the replication of Smith (1979).

        r_Y.ψ    r²_Y.ψ   Fψ        p(Fψ)
  ψ1     .9820    .9643   21.0937   <.0001
  ψ2    −.1091    .0119    0.2604    .6123
  ψ3     .5345    .2857    6.2500    .0161
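The Fψ and r_Y.ψ values reported in Table 7 can be recomputed from the group means and MS_error of Tables 2 and 3. A minimal sketch, using only summary statistics (the Šidák-corrected per-contrast α from Equation 2 is included at the end):

```python
import numpy as np

means = np.array([18., 11., 17., 19., 10.])   # group means M_a. (Table 2)
S, ms_error = 10, 32.0                        # subjects per group; MS_error (Table 3)

contrasts = {                                 # non-orthogonal contrasts of Table 6
    "psi1": np.array([2., -3., 2., 2., -3.]),
    "psi2": np.array([3., 3., -2., -2., -2.]),
    "psi3": np.array([1., -4., 1., 1., 1.]),
}

dev = means - means.mean()
results = {}
for name, c in contrasts.items():
    ss_psi = S * (c @ means) ** 2 / (c @ c)          # Equation 8
    f_psi = ss_psi / ms_error                        # Equation 10 (MS_psi = SS_psi)
    r = (c @ dev) / np.sqrt((c @ c) * (dev @ dev))   # r between coefficients and means
    results[name] = (f_psi, r)
    print(name, round(f_psi, 4), round(r, 4))

# Sidak-corrected per-contrast alpha for a family of C = 3 contrasts (Equation 2):
alpha_pc = 1 - (1 - .05) ** (1 / 3)
print(round(alpha_pc, 4))  # ≈ .0170
```

Comparing each Fψ with the critical value at the corrected α reproduces the classical-approach conclusion that Contrasts 1 and 3 are significant.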
The values of Fψ were computed with Equation 10 (see also Table 3) and are shown in Table 7 along with their p values. If we adopt a value of α[PF] = .05, a Šidák correction (from Equation 2) will entail evaluating each contrast at the α level of α[PC] = .0170 (Bonferroni will give the approximate value of α[PC] = .0167). So, with a correction for multiple comparisons, we will conclude that Contrasts 1 and 3 are significant.

3.2 Multiple regression approach

ANOVA and multiple regression are equivalent if we use as many predictors for the multiple regression analysis as the number of degrees of freedom of the independent variable. An obvious choice for the predictors is to use a set of contrast coefficients. Doing so makes contrast analysis a particular case of multiple regression analysis. When used with a set of orthogonal contrasts, the multiple regression approach gives the same results as the ANOVA-based approach previously described. When used with a set of non-orthogonal contrasts, multiple regression quantifies the specific contribution of each contrast as the semi-partial coefficient of correlation between the contrast coefficients and the dependent variable. We can use the multiple regression approach for non-orthogonal contrasts as long as the following constraints are satisfied:

1. There are no more contrasts than the number of degrees of freedom of the independent variable;
2. The set of contrasts is linearly independent (i.e., not multicollinear). That is, no contrast can be obtained by combining the other contrasts.

3.2.1 An example: Multiple regression approach

Let us go back once again to Smith's (1979) study of learning and recall contexts. Suppose we take our three contrasts (see Table 6) and use them as predictors with a standard multiple regression program. We will find the following values for the semi-partial correlations between the contrasts and the dependent variable:

  ψ1: r²_Y.Ca,1|Ca,2 Ca,3 = .1994
  ψ2: r²_Y.Ca,2|Ca,1 Ca,3 = .0000
  ψ3: r²_Y.Ca,3|Ca,1 Ca,2 = .0013,

with r²_Y.Ca,1|Ca,2 Ca,3 being the squared correlation of ψ1 and the dependent variable with the effects of ψ2 and ψ3 partialled out. To evaluate the significance of each contrast, we compute an F ratio for the corresponding semi-partial coefficient of correlation. This is done using the following formula:

  F_Y.Ca,i|Ca,k Ca,l = [ r²_Y.Ca,i|Ca,k Ca,l / (1 − r²_Y.A) ] × df_residual .   (15)

This results in the following F ratios for the Smith example:

  ψ1: F_Y.Ca,1|Ca,2 Ca,3 = 13.3333, p = .0007;
  ψ2: F_Y.Ca,2|Ca,1 Ca,3 = 0.0000, p = 1.0000;
  ψ3: F_Y.Ca,3|Ca,1 Ca,2 = 0.0893, p = .7665.

These F ratios follow a Fisher distribution with ν1 = 1 and ν2 = 45 degrees of freedom; F_critical = 4.06 for α = .05. In this case, ψ1 is the only contrast reaching significance (i.e., with Fψ > F_critical). The comparison with the classical approach shows the drastic differences between the two approaches.

4 A posteriori (post hoc) contrasts

For a posteriori contrasts, the family of contrasts is composed of all the possible contrasts, even if they are not explicitly made. Indeed, because we choose the contrasts to be made a posteriori, this implies that we have implicitly made, and judged uninteresting, all the possible contrasts that have not been made. Hence, whatever the number of contrasts actually performed, the family is composed of all the possible contrasts. This number grows very fast: a conservative estimate indicates that the number of contrasts which can be made on A groups is equal to

  1 + {[(3^A − 1) / 2] − 2^A} .   (16)

So, a Šidák or Bonferroni approach will not have enough power to be useful.

4.1 Scheffé's test

Scheffé's test was devised to test all possible contrasts a posteriori while maintaining the overall Type I error level for the family at a reasonable level, as well as trying to have a conservative but relatively powerful test. The general principle is to ensure that no discrepant statistical decision can occur. A discrepant decision would occur if the omnibus test failed to reject the null hypothesis, but one a posteriori contrast could be declared significant. In order to avoid such a discrepant decision, the Scheffé approach first tests any contrast as if it were the largest possible contrast whose sum of squares is equal to the experimental sum of squares (this contrast is obtained when the contrast coefficients are equal to the deviations of the group means from their grand mean), and second makes the test of the largest contrast equivalent to the ANOVA omnibus test. So, if we denote by F_critical,omnibus the critical value for the ANOVA omnibus test (performed on A groups), the largest contrast is equivalent to the omnibus test if its Fψ is tested against a critical value equal to

  F_critical,Scheffé = (A − 1) × F_critical,omnibus .   (17)

Equivalently, Fψ can be divided by (A − 1) and its probability can be evaluated with a Fisher distribution with ν1 = (A − 1) and ν2 equal to the number of degrees of freedom of the mean square error. Doing so makes it impossible to reach a discrepant decision.

4.1.1 An example: Scheffé

Suppose that the Fψ ratios for the contrasts computed in Table 7 were obtained a posteriori. The critical value for the ANOVA is obtained from a Fisher distribution with ν1 = A − 1 = 4 and ν2 = A(S − 1) = 45. For α = .05 this value is equal to F_critical,omnibus = 2.58. In order to evaluate whether any of these contrasts reaches significance, we need to compare them to the critical value of

  F_critical,Scheffé = (A − 1) × F_critical,omnibus = 4 × 2.58 = 10.32 .

With this approach, only the first contrast is considered significant.

Related entries

Analysis of Variance, Bonferroni correction, Post-Hoc comparisons.

Further Readings

1. Abdi, H., Edelman, B., Valentin, D., & Dowling, W. J. (2009). Experimental Design and Analysis for Psychology. Oxford: Oxford University Press.
2. Rosenthal, R., & Rosnow, R. L. (2003). Contrasts and effect sizes in behavioral research: A correlational approach. Boston: Cambridge University Press.
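The semi-partial correlations and F ratios of Section 3.2.1 can be reproduced numerically from the raw data of Table 2. The sketch below computes each semi-partial r² as the drop in R² when that contrast is removed from the three-predictor model, an implementation assumption consistent with the values reported above:

```python
import numpy as np

# Raw recall scores from Table 2 (10 subjects per group).
data = np.array([
    [25, 26, 17, 15, 14, 17, 14, 20, 11, 21],   # Group 1: same
    [11, 21,  9,  6,  7, 14, 12,  4,  7, 19],   # Group 2: different
    [14, 15, 29, 10, 12, 22, 14, 20, 22, 12],   # Group 3: imagery
    [25, 15, 23, 21, 18, 24, 14, 27, 12, 11],   # Group 4: photo
    [ 8, 20, 10,  7, 15,  7,  1, 17, 11,  4],   # Group 5: placebo
], dtype=float)

y = data.ravel()
group = np.repeat(np.arange(5), 10)

contrasts = np.array([          # non-orthogonal contrasts of Table 6
    [2, -3, 2, 2, -3],
    [3, 3, -2, -2, -2],
    [1, -4, 1, 1, 1],
], dtype=float)

def r2(predictor_rows):
    """R^2 of y regressed on the given contrast predictors (plus intercept)."""
    X = np.column_stack([np.ones_like(y)] +
                        [contrasts[i][group] for i in predictor_rows])
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

r2_full = r2([0, 1, 2])
r2_anova = 700.0 / 2140.0       # omnibus R^2 from Table 3
sr2s, fs = [], []
for i in range(3):
    sr2 = r2_full - r2([j for j in range(3) if j != i])  # semi-partial r^2
    f = sr2 / (1 - r2_anova) * 45                        # Equation 15
    sr2s.append(sr2)
    fs.append(f)
    print(f"psi{i + 1}: semi-partial r2 = {sr2:.4f}, F = {f:.4f}")
```

The printed values match those of Section 3.2.1 (.1994, .0000, .0013 and F = 13.3333, 0.0000, 0.0893).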

Electrical resistivity tomography technique for landslide investigation:A reviewA.Perrone ⁎,penna,S.PiscitelliInstitute of Methodologies for Environmental Analysis,CNR,Italya b s t r a c ta r t i c l e i n f o Article history:Received 18September 2013Accepted 8April 2014Available online 18April 2014Keywords:ReviewElectrical resistivity tomography 2D 3DTime-lapse LandslidesIn the context of in-situ geophysical methods the Electrical Resistivity Tomography (ERT)is widely used for the near-surface exploration of landslide areas characterized by a complex geological setting.Over the last decade the technological improvements in field-data acquisition systems and the development of novel algorithms for tomographic inversion have made this technique more suitable for studying landslide areas,with a particular at-tention to the rotational,translational and earth-flow slides.This paper aims to present a review of the main re-sults obtained by applying ERT for the investigation of a wide spectrum of landslide phenomena which affected various geological formations and occurred in different geographic areas.In particular,signi ficant and represen-tative results obtained by applying 2D and 3D ERT are analyzed highlighting the advantages and drawbacks of this geophysical technique.Finally,recent applications of the time-lapse ERT (tl-ERT)for landslide investigation and the future scienti fic challenges to be faced are presented and discussed.©2014Elsevier B.V.All rights reserved.Contents 1.Introduction ...............................................................652.The ERT method for landslide investigation .................................................662.1.The 2D ERT imaging ........................................................672.2.The 3D ERT imaging ........................................................722.3.The time-lapse ERT monitoring ...................................................723.Discussion on the ERT advantages and drawbacks for landslide 
investigation ..................................774.Conclusions ...............................................................79Acknowledgments ..............................................................79References ................................ (79)1.IntroductionLandslides are complex geological phenomena with a high socio-economical impact also in terms of loss of lives and damage.Their inves-tigation usually requires a multidisciplinary approach,based on the in-tegration of satellite,airborne and ground-based sensing technologies.Each technique allows the study of speci fic triggering factors and/or particular physical features,characterizing the landslide body compared with the material not affected by the movement.Airborne and satellite methods (i.e.digital aerophotogrammetry,GPS,differential interferometric SAR,etc.)can provide information on the surface characteristics of the investigated slope,such as geomorpho-logical features,the areal extension of the landslide body,super ficial displacement and velocity (Catani et al.,2005;Squarzoni et al.,2005;Glenn et al.,2006;Lanari et al.,2007;Baldi et al.,2008;Roering et al.,2009;Cascini et al.,2010;Strozzi et al.,2010;Ventura et al.,2011;Guzzetti et al.,2012),without giving any information on subsoil characteristics.Direct ground-based techniques (i.e.piezometer,inclinometer,labo-ratory tests,etc.)give true information on the mechanical and hydraulic characteristics of the terrains affected by the landslide but in a speci fic point of the subsoil (Petley et al.,2005;Marcato et al.,2012).Earth-Science Reviews 135(2014)65–82⁎Corresponding author at:CNR-IMAA,c.da S.Loja,85050Tito Scalo PZ.Tel.:+390971427282.E-mail address:angela.perrone@r.it (A.Perrone)./10.1016/j.earscirev.2014.04.0020012-8252/©2014Elsevier B.V.All rightsreserved.Contents lists available at ScienceDirectEarth-Science Reviewsj o u r n a l h om e p a g e :w w w.e l s e v i e r.c o m /l o c a t e /e a r s c i r e vIn-situ geophysical techniques 
are able to measure physical param-eters directly or indirectly linked with the lithological,hydrological and geotechnical characteristics of the terrains related to the movement (McCann and Foster,1990;Hack,2000;Jongmans and Garambois, 2007).These techniques,less invasive than the previous ones,provide information integrated on a greater volume of the soil thus overcoming the point-scale feature of classic geotechnical measurements.Among the in-situ geophysical techniques,the Electrical Resistivity Tomogra-phy(ERT)has been increasingly applied for landslide investigation (McCann and Foster,1990;Hack,2000;Jongmans and Garambois, 2007;references in Table1,3and5).This technique is based on the measure of the electrical resistivity and can provide2D and3D images of its distribution in the subsoil.The frequent use of this method in the study of landslide areas is mainly due to the factors that can affect resistivity and its extreme var-iability in space and time domains.Indeed,this parameter is mostly in-fluenced by the mineralogy of the particles,the ground water content, the nature of electrolyte,the porosity and the intrinsic matrix resistivity with weathering and alteration(Archie,1942;Reynolds,1997;Park and Kim,2005;Bievre et al.,2012).Some of these factors,especially the change of water content and the consequent increase in pore water pressures,can play an important role in the triggering mechanisms of a landslide(Bishop,1960;Morgenstern and Price,1965).This paper aims at presenting the current state of-the-art on the ap-plication of ERT for landslide investigation,mainly considering the tech-nological and methodological improvements of this technique.The work is focused on the scientific papers published in international journals since2000and available online.In particular,this study pre-sents the results offield geophysical surveys based on2D,3D and time-lapse ERT carried out for the investigation of different typologies of landslide,also considering the 
acquisition systems and the inversion algorithms.The main advantages and drawbacks related to the applica-tion of the ERT method are identified and discussed.Finally,the future challenges for a better use of the ERT in the landslide investigation and monitoring are presented.2.The ERT method for landslide investigationThe Electrical Resistivity Tomography is an active geophysical meth-od that can provide2D or3D images of the distribution of the electrical resistivity in the subsoil.The analysis and interpretation of these electri-cal images allow the identification of resistivity contrasts that can be mainly due to the lithological nature of the terrains and the water con-tent variation.The in-field procedure includes the use of a multi-electrode cable, laid out on the ground,to which a number of steel electrodes are con-nected at afixed distance according to a specific electrode configuration. The electrodes are used both for the injection of the current(I)in the subsoil and the measurement of the voltage(V).Knowing the I and V values and the geometrical coefficient depending on the electrode con-figuration used,the apparent resistivity values characterizing the sub-soil investigated can be calculated.These values are positioned at pseudo-depths according to a geometrical reconstruction(Edwards, 1977),which results in a pseudo-section representing an approximate picture of the true subsurface resistivity distribution(Hack,2000).To obtain an electrical resistivity tomography,the apparent resistiv-ity values must be inverted by using inversion routines.The best known and most applied algorithm is Res2Dinv(Loke and Barker,1996;Loke et al.,2003)based on a smoothness-constrained least-squares method which allows to obtain two-dimensional sections throughfinite differ-ences orfinite elements computations,taking into account the topo-graphic corrections.To evaluate thefit of the resistivity model obtained,the root mean square error(RMS)can be considered.This error 
provides the percentage difference between the measured values and those calculated;so,the correspondence between thefield data and the ones of the model is higher when the error is lower.Although Res2Dinv is the most widely applied software,many other methods are currently available for the electrical resistivity data inversion(see Section2.1and Table1).Thefirst applications of ERT in the study of landslides(Gallipoli et al., 2000;Lapenna et al.,2003)involved the use of manual systems charac-terized by separated energization and measurement devices and single cables.Due to the absence of multi-core cables,the operators used four separate insulated cables connected to four metal electrodes,two of steel for the injection of current and the other two non-polarizable for the measurement of the voltage.The use of manual equipment resulted in rather slow data acquisition;moreover,the possibility or the necessi-ty to keep the energization and the measurement systems separate mainly favored the use of dipole–dipole configuration which is more suitable for the investigation of vertical boundaries(landslide lateral boundaries,source area,fault)than for the identification of the horizon-tal ones(sliding surface,lithological contact).Technological improvements,which produced more compact and portable equipments and faster acquisition systems,as well as the de-velopment of novel software for data processing and the creation of 2D and3D tomographic images of the resistivity distribution in the sub-soil have greatly increased the applicability of this technique for the study of landslide areas.Over the last15years the number of systems for the resistivity im-aging survey has considerably grown.Two categories of systems are now available,the static and the dynamic.In the static one many elec-trodes are connected to a multi-electrode cable and planted into the ground during the survey.The dynamic systems use a small number of nodes but move the entire equipment to obtain a wide 
coverage (Loke, 2013). Static systems are usually used for the investigation of landslides. In particular, the introduction of static multi-electrode systems (Barker, 1981; Griffiths and Turnbull, 1985; Griffiths et al., 1990; Li and Oldenburg, 1992; Dahlin, 1993, 1996; Dahlin and Bernstone, 1997; Stummer, 2002), mainly using single-channel data acquisition, has greatly reduced the acquisition time and also improved some logistical aspects. These systems allow the use of a large number of electrodes, with an increase in profile length and automatic changes of spatial resolution and investigation depth. They have made it easier to carry out 2D ERT on landslides and to obtain a 3D geoelectrical model of the subsoil, particularly where the logistical conditions are advantageous (small landslides and gentle slopes).

The development of algorithms for the inversion of apparent resistivity data (Dey and Morrison, 1979; Barker, 1992; Oldenburg et al., 1993; Oldenburg and Li, 1994; Tsourlos, 1995; LaBrecque et al., 1996; Loke and Barker, 1996; Dahlin, 2001 and references therein; Loke et al., 2003) made it easier to analyze the data and generate 2D and 3D images useful for the characterization of the investigated slope, so as to obtain information on the geometry of a landslide body (i.e. the thickness of the slide material, the location of areas characterized by a higher water content, the presence of potentially unstable areas, etc.). From a temporal point of view, the information obtained can be considered static, being related only to the day of acquisition. Resistivity data are usually acquired after the occurrence of an event and give an image of that moment, without providing any indication of the dynamic evolution affecting the investigated slope.
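The apparent-resistivity calculation described above (rho_a = K * V / I, with K the geometrical coefficient of the chosen array) can be sketched as follows. The Wenner-array factor K = 2*pi*a is standard; the spacing, current and voltage values are purely illustrative, not data from any of the surveys reviewed.

```python
import math

def geometric_factor_wenner(a: float) -> float:
    """Geometrical coefficient K for a Wenner array with electrode spacing a (m)."""
    return 2.0 * math.pi * a

def apparent_resistivity(voltage: float, current: float, k: float) -> float:
    """Apparent resistivity (ohm*m): rho_a = K * V / I."""
    return k * voltage / current

# Illustrative reading: a = 5 m spacing, I = 0.1 A injected, V = 0.02 V measured
k = geometric_factor_wenner(5.0)
rho_a = apparent_resistivity(0.02, 0.1, k)
print(round(rho_a, 2))  # -> 6.28 (ohm*m)
```

In a field survey this value would be assigned to a pseudo-depth determined by the array geometry (Edwards, 1977) and later inverted to a true-resistivity model.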
Very recently, the development of static multi-channel measuring systems, able to simultaneously acquire a number of potential measurements for a single pair of current electrodes, has significantly reduced the acquisition time. These systems can be set up to provide ERT at specific times during the day, and they can also repeat the measurements in order to give ERT images at very close time intervals, a procedure called time-lapse ERT (tl-ERT). This is extremely important, as it allows ERT to be exploited not only to define the geometrical characteristics of the landslide body or the investigated slope but also to monitor a potentially unstable area. The literature reports some examples of tl-ERT applications in landslide areas, with the main aim of obtaining information on changes in water content (see Section 2.3). Obviously, although some software for processing continuously acquired data has already been developed, there is still a need to improve this aspect and, especially, to quantify the relationship between the variations of electrical resistivity and changes in hydrological parameters.

A. Perrone et al. / Earth-Science Reviews 135 (2014) 65–82

2.1. The 2D ERT imaging

Since 2000 a large number of papers dealing with the application of 2D ERT to landslide investigation have been published. For each paper, Table 1 specifies the year of publication, the names of the authors and the journal, the typology of the landslide investigated, the lithological nature of the material involved in the movement and the country affected by the event. The majority of the case histories considered (73%) are located in Europe, a lower percentage (24%) in Asia and a very low percentage (3%) in America (Fig. 1). No example has been found for Oceania, while only a few examples of the application of Vertical Electrical Sounding (VES) to the investigation of unstable areas (Ayenew and Barbieri, 2005; Epada et al., 2012) are available for Africa.

The 65 papers analyzed deal with different landslide typologies. Two of the papers are
reviews (Hack, 2000; Jongmans and Garambois, 2007) and another three do not include information on the type of landslide (Otto and Sass, 2006; Yilmaz, 2007; Mondal et al., 2008); therefore, only 60 papers have been considered for the classification of the phenomenon typology.

In particular, as also shown in Table 2, twelve (20%) papers concern complex landslides (slides evolving into earthflows, retrogressive landslides, etc.) (Gallipoli et al., 2000; Lapenna et al., 2003; Bichler et al., 2004; Perrone et al., 2004; Lapenna et al., 2005; Park and Kim, 2005; Colangelo et al., 2008; Naudet et al., 2008; Panek et al., 2008; Sass et al., 2008; Jongmans et al., 2009; Ogusnsuyi, 2010), nineteen (32%) study translational or rotational slides (Godio and Bottino, 2001; Meric et al., 2005; Drahor et al., 2006; Friedel et al., 2006; Perrone et al., 2006; Göktürkler et al., 2008; Lee et al., 2008; Marescot et al., 2008; Schrott and Sass, 2008; Erginal et al., 2009; Bekler et al., 2011; de Bari et al., 2011; Grandjean et al., 2011; Le Roux et al., 2011; Bièvre et al., 2012; Hibert et al., 2012; Ravindran and Ramanujam, 2012; Sastry and Mondal, 2013; Shan et al., 2013), six (10%) analyze rockfalls and rockslides (Batayneh et al., 2002; Godio et al., 2006; Ganerød et al., 2008; Heincke et al., 2010; Socco et al., 2010; Oppikofer et al., 2011), eight (13%) investigate deep-seated landslides (Lebourg et al., 2005; Jomard et al., 2007a,b; Van Den Eeckhaut et al., 2007; Jomard et al., 2010; Migoń et al., 2010; Tric et al., 2010; Zerathe and Lebourg, 2012), twelve (20%) consider debris flows, earthflows or shallow landslides (Havenith et al., 2000; Jongmans et al., 2000; Demoulin et al., 2003; Grandjean et al., 2006; Perrone et al., 2008; Piegari et al., 2009; Schmutz et al., 2009; Chambers et al., 2011; Carpentier et al., 2012; Chang et al., 2012; Mainsant et al., 2012; Ravindran and Prabhu, 2012), and three (5%) focus on quick clay slides (Lundstrom et al., 2009; Donohue et al., 2012; Solberg et al., 2012). No examples of topples or lateral spreads have been found.

To define the resistive characteristics of the material
involved in the landslides, 63 papers, excluding the reviews, have been analyzed. In particular, in 41 case studies (65%) the slide material is conductive, in 14 case studies (22%) it is resistive and in the remaining 8 (13%) it is not well defined (Table 2). This percentage distribution is mainly due both to the clayey and flyschoid nature of the material involved in the landslides and to the high water content that usually characterizes landslide areas.

Table 1 also reports the information related to the acquisition systems, the electrode configurations and the inversion software used by each team of authors. As for the acquisition systems, the different models of IRIS Instruments are found to be the most widely used among the available commercial tools, in addition to: i) the ABEM Lund Imaging System (http://www.abem.se), ii) GeoTomo of Geolog (http://www.geolog2000.de), iii) AGI SuperSting, iv) the OYO McOHM Profiler-4 System (http://www.oyo.co.jp/english.html), v) Campus Tigre (/files/index.html), and vi) the Multi Function Digital DC Resistivity/IP Meter (/fp745352/Multi-Funciton-Digital-DC-Resistivity-IP-Meter.html). They are static acquisition systems, usually working with a multi-electrode cable and measuring the voltage on only a single pair of electrodes.

As regards the arrays, dipole–dipole (DD) is the most used electrode configuration, followed by Wenner (W) and Wenner–Schlumberger (WS); only a few examples using the pole–pole (PP), pole–dipole (PD), multi-gradient (MG) and Wenner-alpha (W-α) electrode configurations can be found. In most cases, the authors use the PP and PD arrays to study complex deep-seated landslides (Lebourg et al., 2005; Jomard et al., 2007a,b, 2010; Tric et al., 2010; Zerathe and Lebourg, 2012) so as to reach greater investigation depths.

In order to highlight vertical structures, some authors prefer to use the DD configuration (Godio and Bottino, 2001; Lebourg et al., 2005; Godio et al., 2006; Perrone et al., 2006; Colangelo et al., 2008; Naudet et al., 2008; Perrone et al., 2008), also in combination with other configurations to
study deep and complex landslides (Lebourg et al., 2005; Jomard et al., 2007a, 2010; Tric et al., 2010). The W and WS arrays are used to characterize horizontal discontinuities (Colangelo et al., 2008; Perrone et al., 2008; de Bari et al., 2011) and, in the latest examples (since 2012), especially to investigate shallow and non-complex landslides.

Sometimes different array configurations are used to measure resistivity data along the same profile, in order to compare the resistivity images obtained and overcome the intrinsic limitations of each configuration (Godio and Bottino, 2001; Lebourg et al., 2005; Friedel et al., 2006; Godio et al., 2006; Perrone et al., 2006; Jomard et al., 2007a; Van Den Eeckhaut et al., 2007; Colangelo et al., 2008; Ganerod et al., 2008; Naudet et al., 2008; Perrone et al., 2008; Heincke et al., 2010; Jomard et al., 2010; Tric et al., 2010; de Bari et al., 2011; Grandjean et al., 2011). The resistivity distributions obtained with the different configurations often prove to be comparable, and the one showing the lowest RMS error is generally reported (Van Den Eeckhaut et al., 2007). Friedel et al. (2006) show a quantitative comparison between the results obtained using the W, WS and DD configurations along the same profile. The authors point out the differences in resolution and sensitivity of each single array. All the models obtained have the same basic features, which indicates high data quality and a stable inversion procedure. The authors conclude that, in their specific case study, the best compromise between resolution and measurement time is represented by the joint inversion of the WS + DD data set.

Generally, to invert the data the authors mainly apply the RES2Dinv algorithm proposed by Loke and Barker (1996). For the same purpose, Park and Kim (2005) use the DIPRO algorithm (Hee Song Geotek, 2002); Meric et al. (2005) DCIP2D (UBC-GIF, 2001), based on subspace methods (Oldenburg et al., 1993); Yilmaz (2007) IP2DI (Wannamaker, 1992); Gokturkler et al. (2008) DC2DInvRes (Günther, 2007); Heincke et al. (2010) the BERT
algorithm (Günther et al., 2006); and Chang et al. (2012) the EarthImager 2D (AGI, 2009).

The main information obtained by applying the 2D ERT technique helped the authors to define the geological setting of the investigated subsoil, to reconstruct the geometry of the landslide body, to estimate the thickness of the sliding material, to locate the possible sliding surface and the lateral boundaries of the landslide, to characterize fractures or tectonic elements that could bring about an event, etc. (Fig. 2). In some cases, ERT was also applied with the aim of evaluating the groundwater conditions, locating areas with a high water content, verifying the water drainage network, and studying the groundwater circulation and storage within an unstable area (Perrone et al., 2004; Lapenna et al., 2005; Grandjean et al., 2006; Jomard et al., 2007a,b; Yilmaz, 2007; Colangelo et al., 2008; Gokturkler et al., 2008; Marescot et al., 2008; Heickne et al., 2010; McClymont et al., 2010; Langston et al., 2011).

In many of the case studies reported, ERT results are compared with those of other geophysical methods, such as seismics and Ground Penetrating Radar (GPR), or with stratigraphical and hydrological data (Table 1, last 2 columns), in order to validate and calibrate the resistivity results. Among the geophysical methods, the combination of ERT and seismic tomography proves to be the most successful (Fig. 3). The joint application of GPR, ERT and seismic tomography seems to solve and overcome the resolution problems of each single method. Indeed, GPR provides more useful information on the shallowest layers (Sass et al., 2008), ERT on the intermediate layers and seismics on the deepest ones (Bichler et al., 2004). If the investigated material is very wet, the seismic method can work better than ERT, providing information on the displaced material (Jongmans et al., 2009; Le Roux et al., 2011). The literature reports very few examples of ERT combined with Induced Polarization (IP), used for the discrimination of
clayey material from the matrix or for a better interpretation of ERT (Marescot et al., 2008; Sastry et al., 2013).

2.2. The 3D ERT imaging

Landslides are volumetric targets and their reconstruction and characterization should be carried out by means of 3D imaging and visualization procedures. Although the introduction of multi-electrode and multi-channel systems has strongly increased the speed of data acquisition, the literature reports only very few cases of 3D ERT application in landslide areas (Table 3).

Very often, the logistical conditions in these areas are unfavorable, giving rise to problems in transporting and installing the instruments and equipment. The planning and execution of a 3D geoelectrical campaign on landslides can be very tiring, exhausting and costly. Indeed, the slide material is usually strongly reworked and, although the measurement equipment is now very compact and easy to carry, it still remains extremely difficult to move on the slope. Depending on the type of landslide and the material involved, the slope can be very steep, making it very difficult to install the cable network necessary to perform a 3D survey. Generally, landslides can present a large superficial extension and, therefore, a very long multi-core cable could be necessary to cover the entire investigated area. A possible solution could be to use several instruments connected to each other and many multi-core cables. This would probably reduce the efficiency of the method and increase the electrical power required by the system.

Despite all these problems, some authors have tried to perform a 3D investigation of a landslide (Bichler et al., 2004; Lebourg et al., 2005; Drahor et al., 2006; Friedel et al., 2006; Yilmaz, 2007; Chambers et al., 2009; Heincke et al., 2010; Chambers et al., 2011; Grandjean et al., 2011; Di Maio and Piegari, 2011; Udphuay et al., 2011; Di Maio and Piegari, 2012). In all the cases reported, the acquisition has been carried out in a 2D way along parallel profiles whose direction is
generally transversal to the dip of the slope and, sometimes, additional perpendicular profiles are also used. The acquisition systems of IRIS Instruments and the DD electrode configuration prove to be the most used for the 3D applications as well. In one case, the authors apply a system (the ALERT system of the British Geological Survey) that they themselves designed. Only a few authors carry out a 3D inversion of the acquired data by applying dedicated software (Fig. 4); the others have used the 2D profiles in a graphical way to build a 3D fence diagram (Bichler et al., 2004; Drahor et al., 2006; Grandjean et al., 2011) (Fig. 5).

As reported in Table 4, slides (63%) are the most studied type of landslide and, as in the case of the 2D ERT applications, the material involved in the movement is essentially conductive (67%).

The information obtained through 3D ERT, very similar to that obtained from the 2D ERT applications, allowed the definition of a 3D geoelectrical model useful for the reconstruction of the geological setting of the subsoil and the identification of areas characterized by a high water content.

2.3. The time-lapse ERT monitoring

Despite the technological and methodological development of ERT over the past 15 years, 2D and 3D ERT surveys have provided only static information. Generally, these investigations have been carried out after the occurrence of an event or in old landslide areas potentially subject to new activations. The information gathered is always related to the acquisition time, without providing any indication of the possible evolution of physical parameters in the investigated slope. Considering the influence that changes in water content can have on electrical resistivity, and taking into account the role played by water content in the triggering of some landslides, continuous monitoring of the resistivity could give information on the dynamic behavior of the investigated slope. This has led to the use of a new acquisition procedure known as
time-lapse ERT (tl-ERT). These data are usually acquired through multi-channel systems, which allow the simultaneous measurement of the potential on many channels by means of a single pair of current electrodes. The Syscal PRO of IRIS Instruments proves to be the most popular. Systems like the GEOMON 4D (Supper et al., 2008), ALERT (Kuras et al., 2009; Wilkinson et al., 2010) and A-ERT (Hilbich et al., 2011) have also been developed in order to obtain tl-ERT. These systems can use local power generated by wind, solar and fuel cell technology, and can incorporate telemetric control and data transfer (Loke et al., 2013).

To accommodate time-lapse resistivity in inverse models, different approaches, such as the ratio method, the cascaded time-lapse inversion, the difference inversion and the differencing of independent inversions, have been proposed (Hayley et al., 2011; Loke et al., 2014). In the most common approach, the measured data acquired at each monitoring phase are independently inverted (Loke et al., 1999; Tsourlos, 2003). This kind of approach mainly assumes that the time-lapse images are calculated under time-invariant, static conditions and that the changes of the ground properties during the acquisition can be ignored. However, the images obtained from this approach may be strongly contaminated by inversion artefacts, due both to the presence of noise in the measurements and to independent inversion errors. This generates false anomalies of ground condition changes. Furthermore, the time-

Fig. 1. Geographical distribution of the case histories considered for the review. The graph shows that most of the examples considered are related to landslides located in Europe.

Table 2
Percentage distribution of landslide typologies studied by applying 2D ERT and of resistivity values related to the material involved in these landslides.

Landslide typology        %     Resistivity values   %
Slides                    32    Conductive           65
Debris and earthflows     20    Resistive            22
Complex landslides        20    Mixed                13
Deep seated landslides    13
Rock slides               10
Quick clay slides          5

Fig. 2. Varco d'Izzo landslide (Basilicata region, southern Italy): identification of the sliding surface and definition of the landslide shape by comparison between the HH′ 2D ERT and the stratigraphic data inferred from boreholes B22, B23 and B24 (redrawn from Lapenna et al. (2005)).
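As a minimal illustration of the time-lapse idea discussed in Section 2.3, the percentage resistivity change between a baseline inverted model and a later one can be mapped cell by cell; a resistivity drop is commonly read as a possible increase in water content. The grids below are made-up numbers, not field data, and the function is a generic sketch rather than any of the published inversion schemes.

```python
def percent_change(rho_t0, rho_t1):
    """Cell-by-cell percentage resistivity change between two inverted models."""
    return [[100.0 * (b - a) / a for a, b in zip(row0, row1)]
            for row0, row1 in zip(rho_t0, rho_t1)]

# Illustrative 2x3 inverted-resistivity grids (ohm*m) at two acquisition times
rho_t0 = [[50.0, 40.0, 30.0],
          [20.0, 10.0,  5.0]]
rho_t1 = [[50.0, 30.0, 30.0],
          [15.0, 10.0,  5.0]]  # two cells have become more conductive

change = percent_change(rho_t0, rho_t1)
# Negative values flag resistivity drops, e.g. a possible water-content increase
print(change[0][1], change[1][0])  # -> -25.0 -25.0
```

Note that differencing two independently inverted models in this way is exactly the procedure the text warns about: noise and inversion artefacts in either model propagate into the change map, which is why dedicated time-lapse inversion schemes have been proposed.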

Gas meters
Part 1: Requirements

OIML R 137-1
Edition 2006 (E)

INTERNATIONAL RECOMMENDATION

Contents

Foreword (3)
1 Scope (4)
2 Terminology (4)
2.1 Gas meter and its constituents (4)
2.2 Metrological characteristics (6)
2.3 Operating conditions (for definition, see 2.2.14) (7)
2.4 Test conditions (8)
2.5 Electronic equipment (9)
3 Constructional requirements (9)
3.1 Construction (9)
3.2 Flow direction (10)
3.3 Pressure tappings (11)
3.4 Installation conditions (11)
4 Seals and markings (12)
4.1 Measurement units (12)
4.2 Markings and inscriptions (12)
4.3 Verification marks and protection devices (13)
5 Metrological requirements (15)
5.1 Rated operating conditions (15)
5.2 Values of Qmax, Qt and Qmin (15)
5.3 Accuracy classes and maximum permissible errors (15)
5.4 Weighted mean error (WME) (16)
5.5 Repair and damage of seals (16)
6 Technical requirements (17)
6.1 Indicating device (17)
6.2 Test element (18)
6.3 Ancillary devices (19)
6.4 Power sources (19)
6.5 Checks, limits and alarms for electronic gas meters (20)
7 Requirements for metrological controls (21)
7.1 Test results (21)
7.2 Reference conditions (21)
7.3 Type approval (21)
7.4 Type examination tests (24)
7.5 Initial verification and subsequent verification (31)
7.6 Additional requirements for statistical verifications (32)
7.7 Additional requirements for in-service inspections (33)
Annex A: Environmental tests for electronic instruments or devices (34)
Annex B: Flow disturbance tests (42)
Annex C: Overview of tests applicable for different metering principles (45)
Annex D: Bibliography (47)

Foreword

The International Organization of Legal Metrology (OIML) is a worldwide, intergovernmental organization whose primary aim is to harmonize the regulations and metrological controls applied by the national metrological services, or related organizations, of its Member States. The main categories of OIML publications are:
• International Recommendations (OIML R), which are model regulations that establish the metrological characteristics required of certain measuring instruments and which specify methods and equipment for checking their conformity. OIML Member States shall implement these Recommendations to the greatest possible extent;
• International Documents (OIML D), which are informative in nature and which are intended to harmonize and improve work in the field of legal metrology;
• International Guides (OIML G), which are also informative in nature and which are intended to give guidelines for the application of certain requirements to legal metrology; and
• International Basic Publications (OIML B), which define the operating rules of the various OIML structures and systems.

OIML Draft Recommendations, Documents and Guides are developed by Technical Committees or Subcommittees which comprise representatives from the Member States. Certain international and regional institutions also participate on a consultation basis. Cooperative agreements have been established between the OIML and certain institutions, such as ISO and the IEC, with the objective of avoiding contradictory requirements. Consequently, manufacturers and users of measuring instruments, test laboratories, etc. may simultaneously apply OIML publications and those of other institutions.

International Recommendations, Documents, Guides and Basic Publications are published in English (E) and translated into French (F) and are subject to periodic revision.

Additionally, the OIML publishes or participates in the publication of Vocabularies (OIML V) and periodically commissions legal metrology experts to write Expert Reports (OIML E). Expert Reports are intended to provide information and advice, and are written solely from the viewpoint of their author, without the involvement of a Technical Committee or Subcommittee, nor that of the CIML.
Thus, they do not necessarily represent the views of the OIML.

This publication - reference OIML R 137-1, Edition 2006 - was developed by Technical Subcommittee TC 8/SC 8 Gas meters. It was approved for final publication by the International Committee of Legal Metrology in 2006 and will be submitted to the International Conference of Legal Metrology in 2008 for formal sanction. It supersedes the previous editions of R 31 (1995) and R 32 (1989) and partially supersedes OIML R 6 (1989).

OIML Publications may be downloaded from the OIML web site in the form of PDF files. Additional information on OIML Publications may be obtained from the Organization's headquarters:

Bureau International de Métrologie Légale
11, rue Turgot - 75009 Paris - France
Telephone: 33 (0)1 48 78 12 82
Fax: 33 (0)1 42 82 17 27
E-mail: biml@
Internet:

Gas meters
Part 1: Requirements

1 Scope

This Recommendation applies to gas meters based on any principle, used to meter the quantity of gas in volume, mass or energy units that has passed through the meter at operating conditions. It also applies to gas meters intended to measure quantities of gaseous fuels or other gases, except gases in the liquefied state and steam. Dispensers for compressed natural gas (CNG dispensers) are also excluded from the scope of this Recommendation.

This Recommendation also applies to correction devices and other electronic devices that can be attached to the gas meter.
However, provisions for conversion devices, either as part of the gas meter or as a separate instrument, or provisions for devices for the determination of the superior calorific value and for gas metering systems consisting of several components, are defined in the draft OIML Recommendation on Measuring systems for gaseous fuel [8].

2 Terminology

The terminology used in this Recommendation conforms to the International Vocabulary of Basic and General Terms in Metrology (VIM - Edition 1993) [1] and the International Vocabulary of Terms in Legal Metrology (VIML - Edition 2000) [2]. In addition, for the purposes of this Recommendation, the following definitions apply.

2.1 Gas meter and its constituents

2.1.1 Gas meter
Instrument intended to measure, memorize and display the quantity of gas passing the flow sensor at operating conditions.

2.1.2 Measurand (VIM 2.6)
Particular quantity subject to measurement.

2.1.3 Sensor (VIM 4.14)
Element of a measuring instrument or measuring chain that is directly affected by the measurand.

2.1.4 Measuring transducer (VIM 4.3)
Device that provides an output quantity having a determined relationship to the input quantity.

2.1.5 Mechanical output constant (mechanical gas meters only)
Value of the quantity corresponding to one complete revolution of the shaft of the mechanical output. This value is determined by multiplying the value of the quantity corresponding to one complete revolution of the test element by the transmission ratio of the indicating device to this shaft. The mechanical output is an element to drive an ancillary device.

2.1.6 Calculator
Part of the gas meter which receives the output signals from the measuring transducer(s) and, possibly, from associated measuring instruments, transforms them and, if appropriate, stores the results in memory until they are used.
In addition, the calculator may be capable of communicating both ways with ancillary devices.

2.1.7 Indicating device (VIM 4.12 adapted)
Part of the gas meter which displays the measurement results, either continuously or on demand.
Note: A printing device, which provides an indication at the end of the measurement, is not an indicating device.

2.1.8 Adjustment device
Device incorporated in the gas meter that only allows the error curve to be shifted generally parallel to itself, with a view to bringing the errors (of indication) within the limits of the maximum permissible error (MPE).

2.1.9 Correction device
Device intended for the correction of known errors as a function of, e.g., flowrate, Reynolds number (curve linearization), or pressure and/or temperature.

2.1.10 Ancillary device
Device intended to perform a particular function, directly involved in elaborating, transmitting or displaying measurement results. The main ancillary devices are:
a) repeating indicating device;
b) printing device;
c) memory device; and
d) communication device.
Note 1: An ancillary device is not necessarily subject to metrological control.
Note 2: An ancillary device may be integrated in the gas meter.

2.1.11 Associated measuring instrument
Instrument connected to the calculator or the correction device for measuring certain gas properties, for the purpose of making a correction.

2.1.12 Equipment under test (EUT)
(Part of the) gas meter and/or associated devices which is exposed to one of the tests.

2.1.13 Family of meters
Group of meters of different sizes and/or different flowrates, in which all the meters shall have the following characteristics:
• the same manufacturer;
• geometric similarity of the measuring part;
• the same metering principle;
• roughly the same ratios Qmax/Qmin and Qmax/Qt;
• the same accuracy class;
• the same electronic device for each meter size;
• a similar standard of design and component assembly; and
• the same materials for those components that are critical to the performance of the
meter.

2.2 Metrological characteristics

2.2.1 Quantity of gas
Total quantity of gas obtained by integrating the flow over time, expressed as volume V, mass m or energy E passed through the gas meter, disregarding the time taken. This is the measurand (see 2.1.2).

2.2.2 Indicated value (of a quantity)
Value Yi of a quantity, as indicated by the meter.

2.2.3 Cyclic volume of a gas meter (positive displacement gas meters only)
Volume of gas corresponding to one full revolution of the moving part(s) inside the meter (working cycle).

2.2.4 True value (of a quantity) (VIM 1.19 + notes)
Value consistent with the definition of a given particular quantity.

2.2.5 Conventional true value (of a quantity) (VIM 1.20)
Value Yref attributed to a particular quantity and accepted, sometimes by convention, as having an uncertainty appropriate for a given purpose.

2.2.6 Absolute error (of indication) (VIM 3.10 + notes)
Indicated value of a quantity Yi minus a true value of the quantity.

2.2.7 Relative error or error (of indication) e (VIM 3.12 + note)
Error of measurement divided by a true value of the measurand. The error is expressed as a percentage, and is calculated by:

e = (Yi − Yref) / Yref × 100 %

2.2.8 Weighted mean error (WME)
The weighted mean error (WME) is calculated as follows:

WME = [ Σ(i=1..n) (Qi/Qmax) · ei ] / [ Σ(i=1..n) (Qi/Qmax) ]

where:
• Qi/Qmax is a weighting factor;
• ei is the error at the flowrate Qi;
• at Qi > 0.9·Qmax a weighting factor of 0.4 shall be used instead of 1.
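The relative-error and WME formulas in 2.2.7 and 2.2.8 can be checked numerically as follows. The flowrates and errors below are made-up test points for illustration, not values taken from the Recommendation.

```python
def relative_error(y_i: float, y_ref: float) -> float:
    """Relative error of indication, in percent: e = (Yi - Yref) / Yref * 100."""
    return (y_i - y_ref) / y_ref * 100.0

def weighted_mean_error(points, q_max: float) -> float:
    """Weighted mean error per 2.2.8.

    `points` is a list of (Qi, ei) pairs, where ei is the error in percent at
    flowrate Qi. The weight is Qi/Qmax, replaced by 0.4 when Qi > 0.9*Qmax.
    """
    weights = [0.4 if q > 0.9 * q_max else q / q_max for q, _ in points]
    num = sum(w * e for w, (_, e) in zip(weights, points))
    den = sum(weights)
    return num / den

# Made-up example: a meter with Qmax = 100 m3/h tested at three flowrates
e = relative_error(100.5, 100.0)                      # -> 0.5 (percent)
pts = [(10.0, 1.0), (50.0, 0.5), (100.0, 0.2)]        # (Qi, ei) pairs
print(round(weighted_mean_error(pts, 100.0), 3))      # -> 0.43
```

The last test point sits above 0.9·Qmax, so its weight is capped at 0.4 rather than Qi/Qmax = 1, which pulls the WME down slightly.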
2.2.9 Intrinsic error
Error determined under reference conditions.

2.2.10 Fault ∆e (OIML D 11, 3.9)
Difference between the error of indication and the intrinsic error of a measuring system or of its constituent elements.
Note: In practice this is the difference between the error of the meter observed during or after a test and the error of the meter prior to this test, performed under reference conditions.

2.2.11 Maximum permissible error (MPE) (VIM 5.21)
Extreme values permitted by the present Recommendation for an error.

2.2.12 Accuracy class (VIM 5.19)
Class of measuring instrument that meets certain metrological requirements that are intended to maintain errors within specified limits.

2.2.13 Durability (OIML D 11, 3.17)
Ability of a measuring instrument to maintain its performance characteristics over a period of use.

2.2.14 Operating conditions
Conditions of the gas (temperature, pressure and gas composition) at which the quantity of gas is measured.

2.2.15 Rated operating conditions (adapted from VIM 5.5)
Conditions of use giving the range of values of the measurand and the influence quantities, for which the errors of the gas meter are required to be within the limits of the maximum permissible error.

2.2.16 Reference conditions (adapted from VIM 5.7)
Set of reference values, or reference ranges of influence quantities, prescribed for testing the performance of a gas meter, or for the intercomparison of the results of measurements.

2.2.17 Base conditions
Conditions to which the measured volume of gas is converted (examples: base temperature and base pressure).
Note: Operating and base conditions relate to the volume of gas to be measured or indicated only and should not be confused with "rated operating conditions" and "reference conditions" (VIM 5.05 and 5.07), which refer to influence quantities.

2.2.18 Test element of an indicating device
Device to enable precise reading of the measured gas quantity.

2.2.19 Resolution (of an indicating device) (VIM 5.12)
Smallest difference
between indications of an indicating device that can be meaningfully distinguished.
Note: For a digital device, this is the change in the indication when the least significant digit changes by one step. For an analogue device, this is half the difference between subsequent scale marks.

2.2.20 Drift (VIM 5.16)
Slow change of a metrological characteristic of a measuring instrument.

2.3 Operating conditions (for definition, see 2.2.14)

2.3.1 Flowrate, Q
Quotient of the actual quantity of gas passing through the gas meter and the time taken for this quantity to pass through the gas meter.

2.3.2 Maximum flowrate, Qmax
Highest flowrate at which a gas meter is required to operate within the limits of its maximum permissible error, whilst operated within its rated operating conditions.

2.3.3 Minimum flowrate, Qmin
Lowest flowrate at which a gas meter is required to operate within the limits of its maximum permissible error, whilst operated within its rated operating conditions.

2.3.4 Transitional flowrate, Qt
Flowrate which occurs between the maximum flowrate Qmax and the minimum flowrate Qmin and divides the flowrate range into two zones, the "upper zone" and the "lower zone", each characterized by its own maximum permissible error.

2.3.5 Working temperature, tw
Temperature of the gas to be measured at the gas meter.

2.3.6 Minimum and maximum working temperatures, tmin and tmax
Minimum and maximum gas temperature that a gas meter can withstand, within its rated operating conditions, without deterioration of its metrological performance.

2.3.7 Working pressure, pw
Gauge pressure of the gas to be measured at the gas meter.
The gauge pressure is the difference between the absolute pressure of the gas and the atmospheric pressure.

2.3.8 Minimum and maximum working pressure, pmin and pmax
Minimum and maximum internal gauge pressure that a gas meter can withstand, within its rated operating conditions, without deterioration of its metrological performance.

2.3.9 Static pressure loss or pressure differential, ∆p
Mean difference between the pressures at the inlet and outlet of the gas meter while the gas is flowing.

2.3.10 Working density, ρw
Density of the gas flowing through the gas meter, corresponding to pw and tw.

2.4 TEST CONDITIONS

2.4.1 Influence quantity (VIM 2.7)
Quantity that is not the measurand but which affects the result of the measurement.

2.4.2 Influence factor (OIML D 11, 3.13.1)
Influence quantity having a value within the rated operating conditions of the gas meter, as specified in this Recommendation.

2.4.3 Disturbance (OIML D 11, 3.13.2)
Influence quantity having a value within the limits specified in this Recommendation, but outside the specified rated operating conditions of the gas meter.
Note: An influence quantity is a disturbance if the rated operating conditions are not specified for that influence quantity.

2.4.4 Overload conditions
Extreme conditions, including flowrate, temperature, pressure, humidity and electromagnetic interference, that a gas meter is required to withstand without damage.
When it is subsequently operated within its rated operating conditions, it must do so within the limits of its maximum permissible error.

2.4.5 Test
Series of operations intended to verify the compliance of the equipment under test (EUT) with certain requirements.

2.4.6 Test procedure
Detailed description of the test operations.

2.4.7 Test program
Description of a series of tests for a certain type of equipment.

2.4.8 Performance test
Test intended to verify whether the equipment under test (EUT) is capable of accomplishing its intended functions.

2.5 ELECTRONIC EQUIPMENT

2.5.1 Electronic gas meter
Gas meter equipped with electronic devices.
Note: For the purposes of this Recommendation, auxiliary equipment, as far as it is subject to metrological control, is considered part of the gas meter, unless the auxiliary equipment is approved and verified separately.

2.5.2 Electronic device
Device employing electronic sub-assemblies and performing a specific function. Electronic devices are usually manufactured as separate units and are capable of being tested independently.

2.5.3 Electronic sub-assembly
Part of an electronic device employing electronic components and having a recognizable function of its own.

2.5.4 Electronic component
Smallest physical entity which uses electron or gap conduction in semiconductors, or conduction by means of electrons or ions in gases or in a vacuum.

3 Constructional requirements

3.1 CONSTRUCTION

3.1.1 Materials
A gas meter shall be made of such materials and be so constructed as to withstand the physical, chemical and thermal conditions to which it is likely to be subjected, and to fulfil correctly its intended purposes throughout its life.

3.1.2 Soundness of cases
The case of a gas meter shall be gas-tight up to the maximum working pressure of the gas meter.
If a meter is to be installed in the open air it shall be impermeable to run-off water.

3.1.3 Condensation/climate provisions
The manufacturer may incorporate devices for the reduction of condensation, where condensation may adversely affect the performance of the device.

3.1.4 Protection against external interference
A gas meter shall be constructed and installed in such a way that mechanical interference capable of affecting its accuracy is either prevented, or results in permanently visible damage to the gas meter or to the verification marks or protection marks.

3.1.5 Indicating device
The indicating device can be connected to the meter body physically or remotely. In the latter case the data to be displayed shall be stored in the gas meter.
Note: National or regional requirements may contain provisions to guarantee access to the data stored in the meter for customers and consumers.

3.1.6 Safety device
The gas meter may be equipped with a safety device that shuts off the gas flow in the event of calamities, such as an earthquake or a fire. A safety device may be connected to the gas meter, provided that it does not influence the metrological integrity of the meter.
Note: A mechanical gas meter equipped with an earthquake sensor plus an electrically powered valve is not considered to be an electronic gas meter.

3.1.7 Connections between electronic parts
Connections between electronic parts shall be reliable and durable.

3.1.8 Components
Components of the meter may only be exchanged without subsequent verification if the type examination establishes that the metrological properties, and especially the accuracy of the meter, are not influenced by the exchange of the components concerned.
Such components shall be identified at least by their own type indication.
Note: National bodies may require components to be marked with the model(s) of the meter(s) to which they may be attached, and may require such exchange to be carried out by authorized persons.

3.1.9 Zero flow
The gas meter totalization shall not change when the flowrate is zero, while the installation conditions are free from pulsations and vibrations.
Note: This requirement refers to stationary operating conditions; it does not refer to the response of the gas meter to changed flowrates.

3.2 FLOW DIRECTION

3.2.1 Direction of the gas flow
On a gas meter where the indicating device registers positively for only one direction of the gas flow, this direction shall be indicated by a method which is clearly understood, e.g. an arrow. This indication is not required if the direction of the gas flow is determined by the construction.

3.2.2 Plus and minus sign
The manufacturer shall specify whether or not the gas meter is designed to measure bi-directional flow. In the case of bi-directional flow, a double-headed arrow with a plus and minus sign shall be used to indicate which flow direction is regarded as positive and negative respectively.

3.2.3 Recording of bi-directional flow
If a meter is designed for bi-directional use, the quantity of gas passed during reverse flow shall either be subtracted from the indicated quantity or be recorded separately.
The maximum permissible error shall be met for both forward and reverse flow.

3.2.4 Reverse flow
If a meter is not designed to measure reverse flow, the meter shall either prevent reverse flow, or it shall withstand incidental or accidental reverse flow without deterioration or change in its metrological properties.

3.2.5 Indicating device
A gas meter may be provided with a device to prevent the indicating device from functioning whenever gas is flowing in an unauthorized direction.

3.3 PRESSURE TAPPINGS

3.3.1 General
If a gas meter is designed to operate above an absolute pressure of 0.15 MPa, the manufacturer shall either equip the meter with pressure tappings, or specify the position of pressure tappings in the installation pipework.

3.3.2 Bore
The bore of the pressure tappings shall be large enough to allow correct pressure measurements.

3.3.3 Closure
Pressure tappings shall be provided with a means of closure to make them gas-tight.

3.3.4 Markings
The pressure tapping on the gas meter for measuring the working pressure (2.3.7) shall be clearly and indelibly marked "pm" (i.e. the pressure measurement point) or "pr" (i.e. the pressure reference point), and other pressure tappings "p".

3.4 INSTALLATION CONDITIONS
The manufacturer shall specify the installation conditions (as applicable) with respect to:
- the position to measure the working temperature of the gas (2.3.5);
- filtering;
- levelling and orientation;
- flow disturbances;
- pulsations or acoustic interference;
- rapid pressure changes;
- absence of mechanical stress (due to torque and bending);
- mutual influences between gas meters;
- mounting instructions;
- maximum allowable diameter differences between the gas meter and connecting pipework; and
- other relevant installation conditions.

4 Seals and markings

4.1 MEASUREMENT UNITS
All quantities shall be expressed in SI units [3] or as other legal units of measurement [4], unless a country's legal units are different.
In the next section the unit corresponding to the quantity indicated is expressed by <unit>.

4.2 MARKINGS AND INSCRIPTIONS
All markings prescribed in 4.2 shall be visible, easily legible and indelible under rated conditions of use. Any marking other than those prescribed in the type approval document shall not lead to confusion.

4.2.1 Generally applicable markings for gas meters
As relevant, the following information shall be marked on the casing or on an identification plate, or be clearly and unambiguously visible via the indicating device:
a) Type approval mark (according to national or regional regulation);
b) Name or trade mark of the manufacturer;
c) Type designation;
d) Serial number of the gas meter and its year of manufacture;
e) Accuracy class;
f) Maximum flowrate Qmax = … <unit>;
g) Minimum flowrate Qmin = … <unit>;
h) Transitional flowrate Qt = … <unit>;
i) Gas temperature range and pressure range for which the errors of the gas meter shall be within the limits of the maximum permissible error, expressed as:
tmin – tmax = … - … <unit>;
pmin – pmax = … - … <unit> gauge pressure;
j) The density range within which the errors shall comply with the limits of the maximum permissible error may be indicated, and shall be expressed as:
ρ = … - … <unit>
This marking may replace the range of working pressures (i) unless the working pressure marking refers to a built-in conversion device;
k) Pulse values of HF and LF frequency outputs (imp/<unit>, pul/<unit>, <unit>/imp);
Note: The pulse value is given to at least six significant figures, unless it is equal to an integer multiple or decimal fraction of the unit used.
l) Letter V or H, as applicable, if the meter can be operated only in the vertical or horizontal position;
m) Indication of the flow direction, e.g.
an arrow (if applicable, see 3.2.1 and 3.2.2);
n) Measurement point for the working pressure according to 3.3.4; and
o) Environmental temperatures, if they differ from the gas temperature as mentioned in i).

4.2.2 Additional markings for mechanical gas meters with a built-in mechanical conversion device having only one indicating device
p) Base temperature tb = … <unit>;
q) Temperature tsp = … <unit> specified by the manufacturer according to 5.3.4.

4.2.3 Additional markings for gas meters with output drive shafts
r) Gas meters fitted with output drive shafts or other facilities for operating detachable additional devices shall have each drive shaft or other facility characterized by an indication of its constant (C) in the form "1 rev = … <unit>" and the direction of rotation ("rev" is the abbreviation of the word "revolution");
s) If there is only one drive shaft, the maximum permissible torque shall be marked in the form "Mmax = … N·mm";
t) If there are several drive shafts, each shaft shall be characterized by the letter M with a subscript, in the form "M1, M2, … Mn";
u) The following formula shall appear on the gas meter:
k1·M1 + k2·M2 + … + kn·Mn ≤ A N·mm,
where:
A is the numerical value of the maximum permissible torque applied to the drive shaft with the highest constant, when the torque is applied only to this shaft; this shaft shall be characterized by the symbol M1;
ki (i = 1, 2, … n) is a numerical value determined as follows: ki = C1 / Ci;
Mi (i = 1, 2, … n) represents the torque applied to the drive shaft characterized by the symbol Mi;
Ci (i = 1, 2, … n) represents the constant for the drive shaft characterized by the symbol Mi.

4.2.4 Additional markings for gas meters with electronic devices
v) For an external power supply: the nominal voltage and nominal frequency;
w) For a non-replaceable or replaceable battery: the latest date by which the battery is to be replaced, or the remaining battery capacity;
x) Software identification of the firmware.

4.3 VERIFICATION MARKS
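The torque condition in u) can be checked mechanically. A minimal sketch: `torque_ok` is a hypothetical helper, and it assumes the shaft constants are passed with C1 (the highest constant) first, as the definition ki = C1/Ci requires:

```python
def torque_ok(constants, torques, a_max):
    """Check the drive-shaft condition k1*M1 + ... + kn*Mn <= A.

    constants: [C1, ..., Cn], the shaft constants, with C1 (the highest
               constant, belonging to shaft M1) listed first.
    torques:   [M1, ..., Mn], the torques applied to each shaft in N*mm.
    a_max:     A, the maximum permissible torque when applied to M1 alone.
    Each weight k_i = C1 / C_i, as defined in the text.
    """
    c1 = constants[0]
    total = sum((c1 / ci) * mi for ci, mi in zip(constants, torques))
    return total <= a_max
```

For example, with constants C1 = 10 and C2 = 5 (so k1 = 1, k2 = 2), torques of 30 and 10 N·mm give a weighted sum of 50 N·mm, which satisfies A = 60 N·mm.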
AND PROTECTION DEVICES

4.3.1 General provision
Protection of the metrological properties of the meter is accomplished via hardware (mechanical) sealing or via electronic sealing devices. In any case, memorized quantities of gas shall be protected by means of a hardware seal. The design of verification marks and hardware seals is subject to national or regional legislation. Seals shall be able to withstand outdoor conditions.

4.3.2 Verification marks
Verification marks indicate that the gas meter has successfully passed the initial verification (7.5). Verification marks shall be realized as hardware seals.
Generalizations of the Bias-Variance Decomposition for Prediction Error

2 Generalizing the definitions
Squared error is often a very convenient loss function. It possesses well-known mathematical properties, such as the bias/variance decomposition (1), that make it attractive to use. However, there are situations where squared error is clearly not the most appropriate loss function. This is especially true in classification problems, where a loss function like 0-1 loss seems much more realistic. So how might we extend the concepts of variance and bias to general loss functions? There is one obvious requirement that it seems natural for any generalization to fulfill: when using squared error loss, the general definitions must reduce to the standard ones.
The bias and variance of a real-valued random variable, using squared error loss, are well understood. However, because of recent developments in classification techniques, it has become desirable to extend these concepts to general random variables and loss functions. The 0-1 (misclassification) loss function with categorical random variables has been of particular interest. We explore the concepts of variance and bias and develop a decomposition of the prediction error into functions of the systematic and variable parts of our predictor. After providing some examples we conclude with a discussion of the various definitions that have been proposed.
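The standard squared-error decomposition that any generalization must reduce to can be verified numerically. This is a minimal sketch under simplifying assumptions (a fixed, non-random target; the function name is illustrative): the empirical MSE of a random predictor equals its squared bias plus its variance, up to floating-point error.

```python
def squared_error_decomposition(predictions, target):
    """Empirically check MSE = bias^2 + variance under squared-error loss.

    predictions: samples of a random predictor of a fixed target value.
    Returns (mse, bias_sq, variance); mse equals bias_sq + variance up
    to floating-point rounding.
    """
    n = len(predictions)
    mean = sum(predictions) / n              # systematic part of the predictor
    mse = sum((p - target) ** 2 for p in predictions) / n
    bias_sq = (mean - target) ** 2           # squared bias of the mean
    variance = sum((p - mean) ** 2 for p in predictions) / n
    return mse, bias_sq, variance
```

With a random target, an irreducible-error term would appear as well; the sketch keeps the target fixed so the two-term identity holds exactly.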
chapter 3 notes from book

MSE squares errors, thus giving more weight to larger errors, which typically cause more problems.
MAPE should be used when there is a need to put errors in perspective.
Time-series forecasts:
Simply attempt to project past experience into the future. These techniques use historical data with the assumption that the future will be like the past. Some models merely attempt to smooth out random variations in historical data; others attempt to identify specific patterns in the data and project or extrapolate those patterns into the future, without trying to identify causes of the patterns.
MAD weights all errors evenly,
MSE weights errors according to their squared values, and
MAPE weights according to relative error.
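The three weighting schemes above can be sketched side by side. `forecast_errors` is an illustrative helper (not from the text), with MAPE computed against the actual values as is conventional:

```python
def forecast_errors(actual, forecast):
    """Compute MAD, MSE and MAPE for paired actual/forecast series.

    MAD averages |e_t|, weighting all errors evenly; MSE averages e_t^2,
    weighting larger errors more heavily; MAPE averages |e_t| / actual_t
    (as a percentage), putting each error in perspective relative to the
    magnitude being forecast.
    """
    errors = [a - f for a, f in zip(actual, forecast)]
    n = len(errors)
    mad = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    mape = sum(abs(e) / a for e, a in zip(errors, actual)) / n * 100
    return mad, mse, mape
```

Note how two errors of equal absolute size (e.g. +10 on an actual of 100 and -10 on an actual of 200) contribute equally to MAD and MSE but differently to MAPE.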

Explanations and error diagnosis
Gérard Ferrand, Willy Lesaint, Alexandre Tessier
LIFO — public research report D3.2.2

Contents
1 Introduction
2 Preliminary notations and definitions
2.1 Notations
2.2 Constraint Satisfaction Problem
2.3 Constraint Satisfaction Program
2.4 Links between CSP and program
3 Expected Semantics
3.1 Correctness of a CSP
3.2 Symptom and Error
4 Explanations
4.1 Explanations
4.2 Computed explanations
5 Error Diagnosis
5.1 From Symptom to Error
5.2 Diagnosis Algorithms
6 Conclusion

Abstract
The report proposes a theoretical approach to the debugging of constraint programs based on the notion of explanation tree (D1.1.1 and D1.1.2 part 2). The proposed approach is an attempt to adapt algorithmic debugging to constraint programming. In this theoretical framework for domain reduction, explanations are proof trees explaining value removals. These proof trees are defined by inductive definitions which express the removals of values as consequences of other value removals. Explanations may be considered as the essence of constraint programming. They are a declarative view of the computation trace. The diagnosis consists in locating an error in an explanation rooted at a symptom.

Keywords: declarative diagnosis, algorithmic debugging, CSP, local consistency operator, fix-point, closure, inductive definition

1 Introduction
Declarative diagnosis [15] (also known as algorithmic debugging) has been successfully used in different programming paradigms (e.g. logic programming [15], functional programming [10]). Declarative means that the user has no need to consider the computational behavior of the programming system; he only needs a declarative knowledge of the expected properties of the program. This paper is an attempt to adapt declarative diagnosis to constraint programming thanks to a notion of explanation tree. Constraint programs are not easy to debug because they are not algorithmic programs [14], and tracing techniques are of limited help when faced with
them. Moreover, it would be incoherent to use only low-level debugging tools when, for these languages, the emphasis is on declarative semantics. Here we are interested in a wide field of applications of constraint programming: finite domains and propagation. The aim of constraint programming is to solve Constraint Satisfaction Problems (CSP) [17], that is, to provide an instantiation of the variables which is a solution of the constraints. The solver goes towards the solutions by combining two different methods. The first one (labeling) consists in partitioning the domains. The second one (domain reduction) reduces the domains by eliminating some values which cannot be correct according to the constraints. In general, labeling alone is very expensive and domain reduction only provides a superset of the solutions. Solvers use a combination of these two methods until singletons are obtained, and then test them. The formalism of domain reduction given in the paper is well suited to define explanations for the basic events, which are "the withdrawal of a value from a domain".
It has already permitted a proof of the correctness of a large family of constraint retraction algorithms [6]. A closely related notion of explanation has proved useful in many applications: dynamic constraint satisfaction problems, over-constrained problems, dynamic backtracking, ... Moreover, it has also been used for failure analysis in [12]. The introduction of labeling into the formalism has already been proposed in [13], but this introduction complicates the formalism and is not really necessary here (labeling can be considered as additional constraints). The explanations defined in the paper provide us with a declarative view of the computation, and their tree structure is used to adapt algorithmic debugging to constraint programming. From an intuitive viewpoint, we call symptom the appearance of an anomaly during the execution of a program. An anomaly is relative to some expected properties of the program, here to an expected semantics. A symptom can be a wrong answer or a missing answer. A wrong answer reveals a lack in the constraints (a missing constraint, for example). This paper focuses on missing answers. Symptoms are caused by erroneous constraints. Strictly speaking, the localization of an erroneous constraint, when a symptom is given, is error diagnosis. It amounts to searching for a kind of minimal symptom in the explanation tree. For a declarative diagnostic system, the input must include at least (1) the actual program, (2) the symptom and (3) a knowledge of the expected semantics. This knowledge can be given by the programmer during the diagnosis session or it can be specified by other means but, from a conceptual viewpoint, this knowledge is given by an oracle. We are inspired by GNU-Prolog [7], a constraint programming language over finite domains, because its glass-box approach allows a good understanding of the links between the constraints and the rules used to build explanations. But this work can be applied to all solvers over finite domains using propagation, whatever the local
consistency notion used. Section 2 defines the basic notions of CSP and program. In section 3, symptoms and errors are described in this framework. Section 4 defines explanations. An algorithm for error diagnosis of missing answers is proposed in section 5.

2 Preliminary notations and definitions
This section briefly gives some definitions and results detailed in [9].

2.1 Notations
Let us assume fixed:
• a finite set of variable symbols V;
• a family (D_x)_{x∈V} where each D_x is a finite non-empty set; D_x is the domain of the variable x.
We are going to consider various families f = (f_i)_{i∈I}. Such a family can be identified with the function i → f_i, itself identified with the set {(i, f_i) | i ∈ I}. In order to have simple and uniform definitions of monotonic operators on a power-set, we use a set which is similar to a Herbrand base in logic programming: we define the domain by D = ⋃_{x∈V} ({x} × D_x). A subset d of D is called an environment. We denote by d|_W the restriction of d to a set of variables W ⊆ V, that is, d|_W = {(x, e) ∈ d | x ∈ W}. Note that, with d, d′ ⊆ D, d = ⋃_{x∈V} d|_{{x}}, and (d ⊆ d′ ⇔ ∀x ∈ V, d|_{{x}} ⊆ d′|_{{x}}).
A tuple (or valuation) t is a particular environment such that each variable appears only once: t ⊆ D and ∀x ∈ V, ∃e ∈ D_x, t|_{{x}} = {(x, e)}. A tuple t on a set of variables W ⊆ V is defined by t ⊆ D|_W and ∀x ∈ W, ∃e ∈ D_x, t|_{{x}} = {(x, e)}.

2.2 Constraint Satisfaction Problem
A Constraint Satisfaction Problem (CSP) on (V, D) is made of:
• a finite set of constraint symbols C;
• a function var : C → P(V), which associates with each constraint symbol the set of variables of the constraint;
• a family (T_c)_{c∈C} such that: for each c ∈ C, T_c is a set of tuples on var(c); T_c is the set of solutions of c.

Definition 1. A tuple t is a solution of the CSP if ∀c ∈ C, t|_{var(c)} ∈ T_c.

From now on, we assume fixed a CSP (C, var, (T_c)_{c∈C}) on (V, D) and we denote by Sol its set of solutions.

Example 1 (The conference problem [12]). Michael, Peter and Alan are organizing a two-day seminar for writing a report on their work. In order to be
efficient, Peter and Alan need to present their work to Michael, and Michael needs to present his work to Alan and Peter. So there are four variables, one for each presentation: Michael to Peter (MP), Peter to Michael (PM), Michael to Alan (MA) and Alan to Michael (AM). Each presentation is scheduled for a whole half-day. Michael wants to know what Peter and Alan have done before presenting his own work (MA > AM, MA > PM, MP > AM, MP > PM). Moreover, Michael would prefer not to come on the afternoon of the second day because he has a very long ride home (MA ≠ 4, MP ≠ 4, AM ≠ 4, PM ≠ 4). Finally, note that Peter and Alan cannot present their work to Michael at the same time (AM ≠ PM). The solutions of this problem are: {(AM,2),(MA,3),(MP,3),(PM,1)} and {(AM,1),(MA,3),(MP,3),(PM,2)}.
The set of constraints can be written in GNU-Prolog [7] as:
conf(AM,MP,PM,MA):-
    fd_domain([MP,PM,MA,AM],1,4),
    MA #> AM, MA #> PM, MP #> AM, MP #> PM,
    MA #\= 4, MP #\= 4, AM #\= 4, PM #\= 4,
    AM #\= PM.

2.3 Constraint Satisfaction Program
A program is used to solve a CSP (i.e. to find the solutions) thanks to domain reduction and labeling. Labeling can be considered as additional constraints, so we concentrate on domain reduction. The main idea is quite simple: to remove from the current environment some values which cannot participate in any solution of some constraints, and thus of the CSP. These removals are closely related to a notion of local consistency. This can be formalized by local consistency operators.

Definition 2. A local consistency operator r is a monotonic function r : P(D) → P(D).

Note that in [9] a local consistency operator r has a type (in(r), out(r)) with in(r), out(r) ⊆ V. Intuitively, out(r) is the set of variables whose environments are reduced (values are removed), and these removals only depend on the environments of the variables of in(r). But this detail is not necessary here.

Example 2. The GNU-Prolog solver uses local consistency operators following the X in r scheme [4]: for example, AM in 0..max(MA)-1. It means that the values of AM must be between 0 and the maximal value of the
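The two solutions stated in Example 1 can be checked by brute force. A minimal sketch (not the paper's method, which relies on domain reduction): enumerate all 4^4 assignments of half-days and keep those satisfying every constraint.

```python
from itertools import product

def conference_solutions():
    """Brute-force the conference problem: variables AM, MA, MP, PM over
    half-days 1..4, subject to the constraints listed in Example 1."""
    sols = []
    for am, ma, mp, pm in product(range(1, 5), repeat=4):
        if (ma > am and ma > pm and mp > am and mp > pm      # precedence
                and ma != 4 and mp != 4 and am != 4 and pm != 4  # not last slot
                and am != pm):                                # no clash
            sols.append({"AM": am, "MA": ma, "MP": mp, "PM": pm})
    return sols
```

Enumeration yields exactly the two solutions given in the text.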
environment of MA minus 1.
As we want contracting operators to reduce the environment, in what follows we will consider d → d ∩ r(d). But in general the local consistency operators are not contracting functions, as shown later when defining their dual operators. A program on (V, D) is a set R of local consistency operators.

Example 3. Following the X in r scheme, the GNU-Prolog conference problem is implemented by the following program:
AM in 1..4, MA in 1..4, PM in 1..4, MP in 1..4,
MA in min(AM)+1..infinity, AM in 0..max(MA)-1,
MA in min(PM)+1..infinity, PM in 0..max(MA)-1,
MP in min(AM)+1..infinity, AM in 0..max(MP)-1,
MP in min(PM)+1..infinity, PM in 0..max(MP)-1,
MA in -{val(4)}, AM in -{val(4)}, PM in -{val(4)}, MP in -{val(4)},
AM in -{val(PM)}, PM in -{val(AM)}.

From now on, we assume fixed a program R on (V, D). We are interested in particular environments: the common fix-points of the reduction operators d → d ∩ r(d), r ∈ R. Such an environment d′ verifies ∀r ∈ R, d′ = d′ ∩ r(d′), that is, no value can be removed according to the operators.

Definition 3. Let r ∈ R. We say an environment d is r-consistent if d ⊆ r(d). We say an environment d is R-consistent if ∀r ∈ R, d is r-consistent.

Domain reduction from a domain d by R amounts to computing the greatest fix-point of d by R.

Definition 4. The downward closure of d by R, denoted by CL↓(d, R), is the greatest d′ ⊆ D such that d′ ⊆ d and d′ is R-consistent.

In general we are interested in the closure of D by R (the computation starts from D), but sometimes we would like to express closures of subsets of D (environments, tuples). It is also useful in order to take into account dynamic aspects or labeling [9, 6].

Example 4. The execution of the GNU-Prolog program provides the following closure: {(AM,1),(AM,2),(MA,2),(MA,3),(MP,2),(MP,3),(PM,1),(PM,2)}.

By definition 4, since d ⊆ D:

Lemma 1. If d is R-consistent then d ⊆ CL↓(D, R).

2.4 Links between CSP and program
Of course, the program is linked to the CSP. The operators are chosen to "implement" the CSP. In practice, this correspondence is expressed by the fact that the program is able to test any
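The closure of Example 4 can be reproduced with a naive propagation loop. This is a simplified arc-consistency sketch, not GNU-Prolog's actual X in r operators: a value is deleted when it has no support in some constraint, and the loop runs until a common fix-point is reached.

```python
from itertools import product

def ac_closure(domains, constraints):
    """Compute a downward closure by propagation: repeatedly delete any
    value with no supporting tuple in some constraint, until a common
    fix-point is reached (a simplified stand-in for the text's operators).

    constraints: list of (variable_names, predicate) pairs.
    """
    doms = {x: set(d) for x, d in domains.items()}
    changed = True
    while changed:
        changed = False
        for vars_, ok in constraints:
            for i, x in enumerate(vars_):
                pools = [doms[y] for y in vars_]
                for e in list(doms[x]):
                    # e survives only if some tuple through e satisfies ok
                    if not any(ok(*t) for t in product(*pools) if t[i] == e):
                        doms[x].discard(e)
                        changed = True
    return doms

# The conference problem of Examples 1 and 3:
domains = {v: range(1, 5) for v in ("AM", "MA", "MP", "PM")}
constraints = [
    (("MA", "AM"), lambda a, b: a > b),
    (("MA", "PM"), lambda a, b: a > b),
    (("MP", "AM"), lambda a, b: a > b),
    (("MP", "PM"), lambda a, b: a > b),
    (("MA",), lambda a: a != 4), (("MP",), lambda a: a != 4),
    (("AM",), lambda a: a != 4), (("PM",), lambda a: a != 4),
    (("AM", "PM"), lambda a, b: a != b),
]
closure = ac_closure(domains, constraints)
```

The resulting domains are AM ∈ {1,2}, MA ∈ {2,3}, MP ∈ {2,3}, PM ∈ {1,2}, matching the closure stated in Example 4.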
valuation. That is, if all the variables are bound, the program should be able to answer the question: "is this valuation a solution of the CSP?".

Definition 5. A local consistency operator r preserves the solutions of a set of constraints C′ if, for each tuple t, (∀c ∈ C′, t|_{var(c)} ∈ T_c) ⇒ t is r-consistent. In particular, if C′ is the set of constraints C of the CSP, then we say r preserves the solutions of the CSP.

In the well-known case of arc-consistency, a set of local consistency operators R_c is chosen to implement each constraint c of the CSP. Of course, each r ∈ R_c preserves the solutions of {c}. It is easy to prove that if r preserves the solutions of C′ and C′ ⊆ C′′, then r preserves the solutions of C′′. Therefore, ∀r ∈ R_c, r preserves the solutions of the CSP. To preserve solutions is a correctness property of operators. A notion of completeness is used to choose the set of operators "implementing" a CSP: it ensures that valuations which are not solutions of the constraints are rejected. But this notion is not necessary for our purpose. Indeed, we are only interested in the debugging of missing answers, that is, in locating wrong local consistency operators (i.e. constraints removing too many values).
In the following lemmas, we consider S ⊆ Sol, that is, S a set of solutions of the CSP, and ⋃S (= ⋃_{t∈S} t) its projection on D.

Lemma 2. Let S ⊆ Sol. If r preserves the solutions of the CSP then ⋃S is r-consistent.
Proof. ∀t ∈ S, t ⊆ r(t), so ⋃S ⊆ ⋃_{t∈S} r(t). Now, ∀t ∈ S, t ⊆ ⋃S, so ∀t ∈ S, r(t) ⊆ r(⋃S) by monotonicity. Hence ⋃S ⊆ r(⋃S).

Extending definition 5, we say R preserves the solutions of C′ if, for each r ∈ R, r preserves the solutions of C′. From now on, we consider that the fixed program R preserves the solutions of the fixed CSP.

Lemma 3. If S ⊆ Sol then ⋃S ⊆ CL↓(D, R).
Proof. By lemmas 1 and 2.

Finally, the following corollary emphasizes the link between the CSP and the program.

Corollary 1. ⋃Sol ⊆ CL↓(D, R).

The downward closure is a superset (an "approximation") of ⋃Sol, which is itself the projection (an "approximation") of Sol. But the downward closure is
the most accurate set which can be computed using a set of local consistency operators in the framework of domain reduction without splitting the domain (without a search tree).

3 Expected Semantics
To debug a constraint program, the programmer must have some knowledge of the problem. If he does not have such knowledge, he cannot say that something is wrong in his program! In constraint programming, this knowledge is declarative.

3.1 Correctness of a CSP
At first, the expected semantics of the CSP is considered as a set of tuples: the expected solutions. The next definition is motivated by the debugging of missing answers.

Definition 6. Let S be a set of tuples. The CSP is correct wrt S if S ⊆ Sol.

Note that if the user exactly knows S then it could be sufficient to test each tuple of S on each local consistency operator or constraint. But in practice the user only needs to know some members of S and some members of D \ ⋃S. We consider the expected environment ⋃S, that is, the approximation of S. By lemma 2:

Lemma 4. If the CSP is correct wrt a set of tuples S then ⋃S is R-consistent.

3.2 Symptom and Error
From the notion of expected environment, we can define a notion of symptom. A symptom emphasizes a difference between what is expected and what is actually computed.

Definition 7. h ∈ D is a symptom wrt an expected environment d if h ∈ d \ CL↓(D, R).

It is important to note that here a symptom is a symptom of a missing solution (an expected member of D is not in the closure).

Example 5. From now on, let us consider the following new CSP in GNU-Prolog:
conf(AM,MP,PM,MA):-
    fd_domain([MP,PM,MA,AM],1,4),
    MA #> AM, MA #> PM, MP #> AM, PM #> MP,
    MA #\= 4, MP #\= 4, AM #\= 4, PM #\= 4,
    AM #\= PM.
As we know, a solution of the conference problem contains (AM,1). But the execution provides an empty closure. So, in particular, (AM,1) has been removed. Thus, (AM,1) is a symptom.

Definition 8. R is approximately correct wrt d if d ⊆ CL↓(D, R).

Note that "R is approximately correct wrt d" is equivalent to "there is no symptom wrt d". By this definition and lemma
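The empty closure of Example 5 is consistent with the modified CSP having no solutions at all: the chain MA > PM > MP > AM would force four distinct values with MA = 4, which MA ≠ 4 forbids. A brute-force check (an illustrative sketch, not the paper's method) confirms this, so every expected value, (AM,1) in particular, is a symptom.

```python
from itertools import product

def buggy_conference_solutions():
    """Enumerate solutions of the modified CSP of Example 5, where the
    intended constraint MP > PM has been mistyped as PM > MP."""
    sols = []
    for am, ma, mp, pm in product(range(1, 5), repeat=4):
        if (ma > am and ma > pm and mp > am and pm > mp   # note: pm > mp (the bug)
                and ma != 4 and mp != 4 and am != 4 and pm != 4
                and am != pm):
            sols.append((am, ma, mp, pm))
    return sols
```

The enumeration returns an empty list, matching the empty closure computed by the solver.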
1 we have:

Lemma 5. If d is R-consistent then R is approximately correct wrt d.

In other words, if d is R-consistent then there is no symptom wrt d. But our purpose is debugging (and not program validation), so:

Corollary 2. Let S be a set of expected tuples. If R is not approximately correct wrt ⋃S then ⋃S is not R-consistent, thus the CSP is not correct wrt S.

The lack of an expected value is caused by an error in the program, more precisely in a local consistency operator. If an environment d is not R-consistent, then there exists an operator r ∈ R such that d is not r-consistent.

Definition 9. A local consistency operator r ∈ R is an erroneous operator wrt d if d ⊈ r(d).

Note that "d is R-consistent" is equivalent to "there is no erroneous operator wrt d in R".

Theorem 1. If there exists a symptom wrt d then there exists an erroneous operator wrt d (the converse does not hold).

When the program is R = ⋃_{c∈C} R_c, with each R_c a set of local consistency operators preserving the solutions of c, if r ∈ R_c is an erroneous operator wrt ⋃S then it is possible to say that c is an erroneous constraint. Indeed, there exists a value (x, e) ∈ ⋃S \ r(⋃S), that is, there exists t ∈ S such that (x, e) ∈ t \ r(t). So t is not r-consistent, hence t|_{var(c)} ∉ T_c, i.e. c rejects an expected solution.

4 Explanations
The previous theorem shows that when there exists a symptom there exists an erroneous operator. The goal of error diagnosis is to locate such an operator from a symptom. To this aim we now define explanations of value removals as in [9], that is, proof trees of value removals. If a value has been wrongly removed then there is something wrong in the proof of its removal, that is, in its explanation.

4.1 Explanations
First we need some notation. Sets of removed values are subsets of D \ d, the complement of the current environment d.

Definition 10. Let r be an operator. We denote by r̃ the dual of r, defined by: ∀d ⊆ D, r̃(D \ d) = D \ r(d). We consider the set of dual operators of R: let
R̃ = {r̃ | r ∈ R}.

Definition 11. The upward closure of D \ d by R̃, denoted by CL↑(D \ d, R̃), exists and is the least set e such that D \ d ⊆ e and ∀r̃ ∈ R̃, r̃(e) ⊆ e (see [9]).

The next lemma establishes the correspondence between the downward closure of local consistency operators and the upward closure of their duals.

Lemma 6. CL↑(D \ d, R̃) = D \ CL↓(d, R).
Proof. CL↑(D \ d, R̃) = min{e | D \ d ⊆ e, ∀r̃ ∈ R̃, r̃(e) ⊆ e}. Writing e = D \ d′, the condition D \ d ⊆ e is equivalent to d′ ⊆ d and, by the definition of the dual operators, r̃(e) ⊆ e is equivalent to d′ ⊆ r(d′). Minimizing e amounts to maximizing d′, hence CL↑(D \ d, R̃) = D \ max{d′ | d′ ⊆ d, ∀r ∈ R, d′ ⊆ r(d′)} = D \ CL↓(d, R).

Now we associate rules in the sense of [1] with these dual operators. These rules are natural to build the complement of an environment and are well suited to provide proofs (trees) of value removals.

Definition 12. A deduction rule is a rule h ← B such that h ∈ D and B ⊆ D.

Intuitively, a deduction rule h ← B can be understood as follows: if all the elements of B are removed from the environment, then h does not participate in any solution of the CSP and it can be removed. A very simple case is arc-consistency, where B corresponds to the well-known notion of support of h. But in general (even for hyper arc-consistency) the rules are more intricate. Note that these rules are only a theoretical tool to define explanations and to justify the error diagnosis method; in practice, this set does not need to be given. The rules are hidden in the algorithms which implement the solver. For each operator r ∈ R, we denote by R_r a set of deduction rules which defines r̃, that is, R_r is such that: r̃(D \ d) = {h ∈ D | ∃B ⊆ D \ d, h ← B ∈ R_r}. For each operator this set of deduction rules exists. There possibly exist many such sets, but for classical notions of local consistency one is always natural [9]. The deduction rules clearly appear inside the algorithms of the solver. In [3] the proposed solver is directly something similar to the set of rules (it is not exactly a set of deduction rules because the heads of the rules do not have the same shape as the elements of the bodies).

[Figure 1: An explanation for (AM,1). The figure, omitted here, is a proof tree rooted at (AM,1) with nodes among (MA,2), (MA,3), (MA,4), (PM,1), (PM,2), (MP,1); each node is linked to its children by a deduction rule labelled with the constraint it comes from, e.g. MA > AM, MA > PM, MA ≠ 4, PM > MP, MP > AM.]

Example 6. With the GNU-Prolog
operator AM in 0..max(MA)-1are associated the deduction rules:•(AM,1)←(MA,2),(MA,3),(MA,4)•(AM,2)←(MA,3),(MA,4)•(AM,3)←(MA,4)•(AM,4)←∅Indeed,for the first one,the value 1is removed from the environment of AM only when the values 2,3and 4are not in the environment of MA.From the deduction rules,we have a notion of proof tree [1].We consider the set of all the deduction rules for all the local consistency operators of R :let R = r ∈R R r .We denote by cons(h,T )the tree defined by:h is the label of its root and T the set of its sub-trees.The label of the root of a tree t is denoted by root(t ).Definition 13An explanation is a proof tree cons(h,T )with respect to R ;it is in-ductively defined by:T is a set of explanations with respect to R and (h ←{root(t )|t ∈T })∈R .Example 7The explanation of figure 1is an explanation for (AM,1).Note that the root (AM,1)of the explanation is linked to its children by the deduction rule (AM,1)←(MA,2),(MA,3),(MA,4).Here,since each rule is associated with an operator which is itself associated with a constraint (arc-consistency case),the constraint is written at the right of the rule.Finally we prove that the elements removed from the domain are the roots of the explanations.Theorem 2CL ↓(D ,R )is the set of the roots of explanations with respect to R .Proof.Let E the set of the roots of explanations wrt to R .By induction on explanations E ⊆min {d |∀ r ∈ R, r (d )⊆d }.It is easy to check thatr (E )⊆E .Hence,min {d |∀ r ∈ R, r (d )⊆d }⊆E .So E =CL ↑(∅, R).11In[9]there is a more general result which establishes the link between the closure of an environment d and the roots of explanations of R∪{h←∅|h∈d}.But here, to be lighter,the previous theorem is sufficient because we do not consider dynamic aspects.All the results are easily adaptable when the starting environment is d⊂D.4.2Computed explanationsNote that for error diagnosis,we only need a program,an expected semantics,a symptom and an explanation for this symptom.Iterations are 
briefly mentioned here only to understand how explanations are computed in concrete terms,as in the PaLM system[11].For more details see[9].CL↓(d,R)can be computed by chaotic iterations introduced for this aim in[8].The principle of a chaotic iteration[2]is to apply the operators one after the other in a“fairly”way,that is such that no operator is forgotten.In practice this can be implemented thanks to a propagation queue.Since⊆is a well-founded ordering (i.e.D is afinite set),every chaotic iteration is stationary.The well-known result of confluence[5,8]ensures that the limit of every chaotic iteration of the set of local consistency operators R is the downward closure of D by R.So in practice the computation ends when a commonfix-point is reached.Moreover,implementations of solvers use various strategies in order to determine the order of invocation of the operators.These strategies are used to optimize the computation,but this is out of the scope of this paper.We are interested in the explanations which are“computed”by chaotic iterations, that is the explanations which can be deduced from the computation of the closure.A chaotic iteration amounts to apply operators one after the other,that is to apply sets of deduction rules one after another.So,the idea of the incremental algorithm [9]is the following:each time an element h is removed from the environment by a deduction rule h←B,an explanation is built.Its root is h and its sub-trees are the explanations rooted by the elements of B.Note that the chaotic iteration can be seen as the trace of the computation, whereas the computed explanations are a declarative vision of it.The important result is that CL↓(d,R)is the set of roots of computed explana-tions.Thus,since a symptom belongs to there always exists a computed explanation for each symptom.5Error DiagnosisIf there exists a symptom then there exists an erroneous operator.Moreover,for each symptom an explanation can be obtained from the computation.This section 
describes how to locate an erroneous operator from a symptom and its explanation.125.1From Symptom to ErrorDefinition14A rule h←B∈R r is an erroneous rule wrt d if B∩d=∅and h∈d.It is easy to prove that r is an erroneous operator wrt d if and only if there exists an erroneous rule h←B∈R r wrt d.Consequently,theorem1can be extended into the next lemma.Lemma7If there exists a symptom wrt d then there exists an erroneous rule wrt d.We say a node of an explanation is a symptom wrt d if its label is a symptom wrt d.Since,for each symptom h,there exists an explanation whose root is labeled by h,it is possible to deal with minimality according to the relation parent/child in an explanation.Definition15A symptom is minimal wrt d if none of its children is a symptom wrt d.Note that if h is a minimal symptom wrt d then h∈d and the set of its children B is such that B⊆d.In other words h←B is an erroneous rule wrt d.Theorem3In an explanation rooted by a symptom wrt d,there exists at least one minimal symptom wrt d and the rule which links the minimal symptom to its children is an erroneous rule.Proof.Since explanations arefinite trees,the relation parent/child iswell-founded.To sum up,with a minimal symptom is associated an erroneous rule,itself as-sociated with an erroneous operator.Moreover,an operator is associated with,a constraint(e.g.the usual case of hyper arc-consistency),or a set of constraints. 
Consequently,the search for some erroneous constraints in the CSP can be done by the search for a minimal symptom in an explanation rooted by a symptom.5.2Diagnosis AlgorithmsThe error diagnosis algorithm for a symptom(x,e)is quite simple.Let E the computed explanation of(x,e).The aim is tofind a minimal symptom in E by asking the user with questions as:“is(y,f)expected?”.Note that different strategies can be used.For example,the“divide and conquer”strategy:if n is the number of nodes of E then the number of questions is O(log(n)), that is not much according to the size of the explanation and so not very much compared to the size of the iteration.13Example8Let us consider the GNU-Prolog CSP of example5.Remind us that its closure is empty whereas the user expects(AM,1)to belong to a solution.Let the explanation offigure1be the computed explanation of(AM,1).A diagnosis session can then be done using this explanation tofind the erroneous operator or constraint of the CSP.Following the“divide and conquer”strategy,first question is:“Is(MA,3)a symptom ?”.According to the conference problem,the knowledge on MA is that Michael wants to know other works before presenting is own work(that is MA>2)and Michael cannot stay the last half-day(that is MA is not4).Then,the user’s answer is:yes.Second question is:“Is(PM,2)a symptom?”.According to the conference prob-lem,Michael wants to know what Peter have done before presenting his own work to Alan,so the user considers that(PM,2)belongs to the expected environment:its answer is yes.Third question is:“Is(MP,1)a symptom?”.This means that Michael presents his work to Peter before Peter presents his work to him.This is contradicting the conference problem:the user answers no.So,(PM,2)is a minimal symptom and the rule(PM,2)←(MP,1)is an erroneous one.This rule is associated to the operator PM in min(MP)+1..infinite,associated to the constraint PM>MP.Indeed,Michael wants to know what Peter have done before presenting his own work would 
be written PM<MP.Note that the user has to answer to only three questions whereas the explanation contains height nodes,there are sixteen removed values and eighteen operators for this problem.So,it seems an efficient way tofind an error.Note that it is not necessary for the user to exactly know the set of solutions,nor a precise approximation of them.The expected semantics is theoretically considered as a partition of D:the elements which are expected and the elements which are not. For the error diagnosis,the oracle only have to answer to some questions(he has to reveal step by step a part of the expected semantics).The expected semantics can then be considered as three sets:a set of elements which are expected,a set of elements which are not expected and some other elements for which the user does not know.It is only necessary for the user to answer to the questions.It is also possible to consider that the user does not answer to some questions, but in this case there is no guarantee tofind an error[16].Without such a tool,the user is in front of a chaotic iteration,that is a wide list of events.In these conditions, it seems easier tofind an error in the code of the program than tofind an error in this wide trace.Even if the user is not able to answer to the questions,he has an explanation for the symptom which contains a subset of the CSP constraints.6ConclusionOur theoretical foundations of domain reduction have permitted to define notions of expected semantics,symptom and error.14。
试题英文数理统计

一、填空(一)各章节的introduction1、Continuous variables or interval data can assume any value in some interval of real numbers.连续变量或间隔数据可以假设在某个实数间隔中的任意值。
(measurement)Discrete variables assume only isolated values.离散变量只假定孤立的值。
(counting)11、The lower or first quartile is the 25th percentile and the upper or third quartile is the 75th percentile.12、The fist qurtile Q1 is the median of the observations falling below the median of the entire sample and the third quartile Q3 is the median of the observations falling above the median of the entire sample.The interquartile range is defined as IQR=Q3-Q1.第一个四分位数Q1是低于整个样本中位数的观测值的中位数,第三个四分位数Q3是高于整个样本中位数的观测值的中位数。
四分位数范围定义为IQR=Q3-Q1。
2、Statistics applied to the life sciences in often called biostatistics or biometry.统计学应用于生命科学,通常称为生物统计学或生物计量学。
3、A descriptive measure associated with a random variable when it is considered over the entire population is called a parameter.当在整个总体中考虑一个随机变量时,与它相关的描述性度量称为参数4、One is forced to examine a subset or sample of the population and make inferences about the entire variable of a sample is called a statistic.人们被迫检查总体中的一个子集或样本,并对样本中的整个变量做出推断,这被称为统计量。
莫里莫里泡沫胶带产品说明书

Table.Spearman Correlations between CRISS and individual components at12months and Comparison of ABA and PBO using CRISS index and individual components at12 months;Outcome ABAN=44PBON=44TreatmentDifference(ABA-PBO)P-value^ACR CRISS(0.0-1.0) median(IQR)0.68(1.00)0.01(0.86)0.03 SpearmanCorrelationLSmean(SE)LSmean(SE)LS mean(SE)P-value^^D mRSS(0-51)-0.75*-6.7(1.30)-3.8(1.23)-2.9(1.75)0.10D FVC%predicted0.36*-1.4(1.30)-3.1(1.20)1.7(1.72)0.32D PTGA(0-10)-0.17-0.50(0.392)-0.30(0.385)-0.20(0.557)0.73D MDGA(0-10)-0.47*-1.34(0.282)-0.18(0.284)-1.16(0.403)0.004D HAQ-DI(0-3)-0.21-0.11(0.079)0.11(0.076)-0.22(0.108)0.05^p-value for treatment comparisons based on Van Elteren test^^p-value for treatment comparisons based on ANCOVA model with treatment,duration of SSc and baseline value as covariates*p<0.01using Spearman correlation coefficientNegative score denotes improvement,except for FVC%where negative score denotes worsening;LS mean=least squares mean;SE=standard errorFRI0328BRANCHED CHAIN AMINO ACIDSIN THE TREATMENTOF POLYMYOSITIS AND DERMATOMYOSITIS:RESULTSFROM THE BTOUGH STUDYNaoki Kimura,Hitoshi Kohsaka.Tokyo Medical and Dental University(TMDU), Department of Rheumatology,Tokyo,JapanBackground:Muscle functions of patients with polymyositis and dermato-myositis(PM/DM)remain often impaired even after successful control of the immune-mediated muscle injury by immunosuppressive therapy.The only effort at the present to regain muscle functions except for the immu-nosuppression is rehabilitation,which is carried out systematically in lim-ited institutes.No medicines for rebuilding muscles have been approved. 
Branched chain amino acids(BCAA)promote skeletal muscle protein syn-thesis and inhibit muscle atrophy.They thus have positive effects on muscle power,but have never been examined for the effects on PM/DM patients.Objectives:To assess the efficacy and safety of BCAA in the treatment of PM/DM for official approval of their use in Japan.Methods:Untreated adults with PM/DM were enrolled in a randomized, double-blind trial to receive either TK-98(drug name of BCAA)or pla-cebo in addition to the conventional immunosuppressive agents.One package of TK-98(4.15g)contained L-isoleucine952mg,L-leucine 1904mg,and L-valine1144mg(molar ratio is1:2:1.35),and6packages were administered daily in3divided doses.After12weeks,patients with average manual muscle test(MMT)score less than9.5were enrolled in an open label extension study for12weeks.The primary end point was the change of the MMT score at12weeks.The secondary end points were the disease activity evaluated with myositis disease activity core set (MDACS)and the change of functional index(FI),which evaluates dynamic repetitive muscle functions.Results:Forty-seven patients were randomized to the TK-98(24patients [12with PM and12with DM])and placebo(23patients[11with PM and12with DM])groups.The baseline MMT scores were equivalent (7.97±0.92[mean±SD]in the TK-98group and7.84±0.86in the placebo group).The change of MMT scores at12weeks were0.70±0.19(mean ±SEM)and0.69±0.18,respectively(P=0.98).Thirteen patients from the TK-98group and12from the placebo group were enrolled in the exten-sion study.The MMT scores in both groups improved comparably throughout the extension study.The increase of the FI scores of the shoulder flexion at12weeks was significantly larger in the TK-98group (27.9±5.67and12.8±5.67in the right shoulder flexion[P<0.05],27.0±5.44and13.4±5.95in the left shoulder flexion[P<0.05]).The improvement rate of the average FI scores of all tested motions(head lift,shoulder flexion,and hip flexion)through the first12weeks 
was larger in the TK-98group.No difference was found in the disease activ-ity throughout the study period.Frequencies of the adverse events until 12weeks were comparable.Conclusion:Although BCAA exerted no effects in the improvement of the muscle strength evaluated with MMT,they were effective in the improve-ment of dynamic repetitive muscle functions in patients with PM/DM with-out significant increase of adverse events.Disclosure of Interests:None declaredDOI:10.1136/annrheumdis-2019-eular.5235FRI0329ANALYSIS OF11CASES OF ANTI-PL-7ANTIBODYPOSITIVE PATIENTS WITH IDIOPATHIC INFLAMMATORYMYOPATHIES.MALIGNANCY MAY NOT BE UNCOMMONCOMPLICATION IN ANTI-PL-7ANTIBODY POSITIVEMYOSITIS PATIENTSTaiga Kuga,Yoshiyuki Abe,Masakazu Matsushita,Kurisu Tada,Ken Yamaji, Naoto Tamura.Juntendo University School of Medicine,Department of Internal Medicine and Rheumatology,Tokyo,JapanBackground:Various autoantibodies are known to be related to idiopathic inflammatory myopathies(IIM).Anti-PL-7antibody is anti-threonyl-tRNA synthetase antibody associated with antisynthetase syndrome(ASS).Since anti-PL-7antibody is rare(mostly1-4%of myositis,while a Japanese study reported17%),little is known as to clinical characteristics of it(1). 
Objectives:To analyze clinical characteristics of anti-PL-7positive IIM patients.Methods:Anti-PL-7antibody was detected by EUROLINE Myositis Profile 3.IIM diagnosis was made by the2017EULAR/ACR classification criteria (2)and/or Bohan And Peter classification(3).Eleven anti-PL-7antibody positive adult patients(all female),age at onset(61.5±12.6years)were enrolled in this study between2009and2018.Clinical manifestations, laboratory and instrumental data were reviewed in this single centre retro-spective cohort.Results:Characteristic symptoms were identified at diagnosis:skin mani-festations(7/11cases,63.6%),muscle weakness(8/11cases,72.7%), arthralgia(5/11cases,45.5%)and Raynaud’s phenomenon(4/11cases, 36.4%).Myogenic enzymes were elevated in most cases(10/11cases, 90.9%).ILD was detected in all patients(11/11cases,100%)and2 patients(18.2%)developed rapidly progressive rgest IIM subtype was polymyositis(PM,5/11cases),followed by dermatomyositis(DM,3/ 11cases)and amyopathic dermatomyositis(ADM,3/11cases).Five patients(45.5%)complicated with malignancy within3years from the diagnosis of IIM.Though clinical manifestations and laboratory data showed any difference between malignancy group and non-malignancy group,all3ADM cases but no DM cases complicated with malignancy in this study.Conclusion:Anti-PL-7antibody positive IIM patients frequently complicated with ILD.Frequency of cancer in ASS patients within three years from diagnosis was 1.7%and not much different from the general population in previous report from France(4).Though this study only included IIM patients and may have selection bias,careful malignancy survey may be essential in Anti-PL-7antibody positive IIM patients.REFERENCES:[1]Y Yamazaki,et al.Unusually High Frequency of Autoantibodies to PL-7Associated With Milder Muscle Disease in Japanese Patients With Poly-myositis/DermatomyositisARTHRITIS&RHEUMATISM Vol.54,No.6, June2006,pp2004–2009[2]Lundberg IE,Tjärnlund A,Bottai M,et al.EULAR/ACR classification 
crite-ria for adult and juvenile idiopathic inflammatory myopathies and their Major Subgroups.Ann Rheum Dis.2017;76:1955–64.[3]Bohan A,Peter J.Polymyositis and dermatomyositis.N Engl J Med1975,292:344-347;403-407.[4]Hervier B,et al.Hierarchical cluster and survival analyses of antisynthe-tase syndrome:phenotype and outcome are correlated with anti-tRNA syn-thetase antibody specificity.Autoimmunity reviews.2012;12:210–217. Disclosure of Interests:Taiga Kuga:None declared,Yoshiyuki Abe:None declared,Masakazu Matsushita:None declared,Kurisu Tada Grant/ research support from:Eli Lilly,Ken Yamaji:None declared,Naoto Tamura Grant/research support from:Astellas Pharma Inc.,Asahi Kasei Pharma,AYUMI Pharmaceutical Co.,Chugai Pharmaceutical Co.LTD, Eisai Inc.,:Takeda Pharmaceutical Company Ltd.,Speakers bureau:Jans-sen Pharmaceutical K.K.,Bristol-Myers Squibb K.K.,:Mitsubishi Tanabe Pharma Co.DOI:10.1136/annrheumdis-2019-eular.4150846Friday,14June2019Scientific Abstractson December 25, 2023 by guest. Protected by copyright./Ann Rheum Dis: first published as 10.1136/annrheumdis-2019-eular.5235 on 27 May 2019. Downloaded from。
- 1、下载文档前请自行甄别文档内容的完整性,平台不提供额外的编辑、内容补充、找答案等附加服务。
- 2、"仅部分预览"的文档,不可在线预览部分如存在完整性等问题,可反馈申请退款(可完整预览的文档不适用该条件!)。
- 3、如文档侵犯您的权益,请联系客服反馈,我们会尽快为您处理(人工客服工作时间:9:00-18:30)。
一个竞争性均方误差波束形成方法摘要:我们对待信号估计的波束的问题,其目的是从数组中观察中估计设置一个信号幅度。
常规波束形成方法通常着眼于将信干噪比(SINR)最大化。
然而,这并不能保证小均方误差(MSE),因此,平均产生的信号的估计会和真实信号相差甚远。
在这里,我们考虑的策略是,以尽量减少估计值和未知之间的信号波形的MSE。
我们建议的所有的方法都是去最大限度地提高SINR,但在同一时间里,它们都被设计为具有良好的MSE性能。
由于MSE依赖于未知的的信号功率,我们开发出了具有竞争力的波束形成方法,最小化鲁棒的MSE估计。
两种设计策略被提出:极小化最大MSE,极小化最大遗憾。
通过数值例子表明,在一个很大的SNR范围内,我们所建议的极小化最大波束形成方法可以超越现有的一些标准鲁棒的的方法。
.最后,我们应用我们的子带波束形成技术,并说明了宽带信号估计他们的优势。
关键词:极小化最大均方误差,极小化最大遗憾,稳健波束形成,子带波束形成。
Ⅰ简介波束形成是一个为了时间估计,干扰消除,源的定位,经典谱估计的处理时间传感器阵列测量的经典方法。
它已被应用于广泛的领域,无所不在,如雷达,声纳,无线通讯,语音处理和医疗成像等领域(详见,参考文献[1-4])。
依赖波束形成器而设计数据的传统方法通常试图极小化最大信号与干扰加噪声比(SINR)。
最大化SINR需要干扰加噪声的协方差矩阵和阵列导向矢量的知识。
由于协方差通常是未知的,它往往是由测量样本的协方差所代替,当信号在训练数据中时,这就导致了在高信噪比(SNR)的情况下,性能下降。
有些波束形成技术是设计来减轻这种影响[5-8],而另一些也需要去克服导向向量的不确定性[9-13],[14]。
在这里,我们假定导向矢量是确切的知道的,我们的目标是为信号估计设计一个波束形成器。
尽管事实上SINR已被用来作为衡量性能的标准和在许多波束形成设计方法的准则,最大化SINR或许也无法保证的一个很好的信号估计。
在估计的环境下,我们的目标是设计一个波束形成,以获得一个信号振幅接近其真实价值的估计,使它会更有意义,选择权,并尽量减少相关的是客观的估计错误,即真实之间的信号和它的估计之间的区别,而不是SINR 的差值。
此外,它可能会更翔实考虑把估计错误作为比较不同的波束形成方法的性能尺度。
如果信号功率是已知的,那么最小均方误差(MMSE)的波束可以被设计。
由此产生的波束可以表示为依赖功率的常数,这个常数乘以一个固定的权重向量是在SINR内的最佳值。
由于信干噪比在缩放时不敏感,最小均方误差的方法也能最大化SINR。
如果比例是固定的,那么缩放选择不影响信号的波形,而只是影响它的大小。
在一些应用中,实际幅度值可能是非常重要的。
在子带波束形成的背景下[15-20],这些就特别重要,由于它能够减少传统的宽带战略的复杂性,近些年获得很大的关注。
在这种情况下,独立执行波束形成在锐减频段和信道的输出相结合。
由于不同的尺度系数在每个通道使用,MMSE的战略一般会造成信号的波形和基于SINR为的方法所产生的不同。
因此,一个不错的选择缩放因子可以显著影响的估计波形。
通常情况下,信号功率为不明,MMSE波束就无法实现。
在这种情况下,其他的设计标准是需要选择缩放因子。
一个常用的方法是选择不会使信号失真的缩放因子,这相当于减少约束下的波束形成不偏不倚的均方误差(MSE)。
这就导致了著名的最小方差无失真响应(MVDR)波束的形成。
然而,就像我们解析和模拟的事实同时显示的那样,尽管MVDR方法是在一间不带偏见的MSE意义上是最优的技术,它往往会带来一个大的估计错误。
另一种策略就是从数据中来估计信号的功率,并使用结合的MMSE 的波束形成器。
这方法是密切相关的盲目极小化最大MSE 技术,最近在开发[21]。
以[21]中的结果为基础,它可以证明,如果估计信号功率和噪声的协方差是已知的,那么这种方法可以比MVDR 方法更加能改进MSE 。
然而,正如我们在第五节中的仿真展示那样,在典型的情况下,协方差是未知的,使用这种方法的性能就会大大恶化。
当信号功率和噪声方差是未知时,为了开发具有良好的MSE 性能的波束形成器,在本文中我们提出了两种设计战略,开发了上,下限形式的信号功率波束形成的先验知识。
这两种方法在可行信号地区内,优化了最坏的情况下的MSE 标准。
这些技术的优势在于两方面:首先,它们允许在信号的功率纳入先验知识,这在很多情况下是可用的,而且这种方式比其他方法更能改进的MSE 的性能。
第二,在没有先验知识可用的情况下,功率可以很容易地从数据中估算出,从而导致实际的波束形成技术。
的确,我们通过模拟演示得到,利用估计边界并联合所推荐的策略可以比先前的波束形成提升MSE 性能。
我们选择的标准,是在对线性模型[22-24]的估计的背景下发展起来的最近的观念为基础的。
在第一个方法中,我们尽量减少幅度(或方差的零均值随(12)机信号的情况下)范围为常数的所有信号的最坏情况的MSE 。
在第二种方法中,我们减少了在所有范围内的信号的最坏情况的遗憾[23],[25],[26],其中的遗憾被定义为在不确定因素存在的情况下波束形成器输出的MSE 和当功率是确定时的最小到达MSE 之间的差别。
这个策略考虑到了信号幅度的上界约束和下界约束两个方面。
为了说明我们的方法的优点,我们提出了一些数据实例,以此来比较传统的基于SINR 的策略,依据MSE 来缩放的策略,以及目前提出的健壮的方法[10]-[12]之间的波束形成器。
我们也在子带波束形成技术中运用了我们的技术,并说明了它们估计宽带信号的优势。
这里提到的理论观点在[13]中也会提及。
然而,在这里我们特别介绍了方法的实际方面,即,和缩放SINR 技术比较时,它们的性能特点,以及,它们在波束形成子带的影响。
本文的结构如下。
在第二节中,我们制定我们的问题并复习现有的方法。
极小化最大MSE 和极小化最大遗憾波束形成器的开发在第三节。
在第IV 及V ,我们讨论的实际的考虑并给出了数据实例,包括一个子带波束形成的应用。
ⅡSINR 的和MMSE 波束形成我们分别用黑体小写字母M c 表示向量,黑体大写字母⨯N M C表示矩阵。
I 是相应维度的单位矩阵,*(.)是对应的矩阵埃尔米特共轭,∧(.)表示一个估计向量。
A,SINR 波束形成对波束形成的主要任务之一是估计从源阵列组()s t 观察信号的幅度,t N ≤≤a i e y(t)=s(t)+(t)+(t),1 (1) 这里,()M y t C ∈是在t 时刻观测数组的复合矢量,M 是传感器的数量。
()s t 是待估测的信号幅度,a 是已知的并取决于和()s t 相关的平面波到到达方向的导向向量,i (t)是干扰信号,e (t)是高斯噪声矢量,N 是快照的数量(is the number of snapshots )我们的目的是通过利用一组波束形成权()w t ,从观测结果()y t 中估计信号的的幅度()s t ,输出的窄带波束如下:*()()(),1s t w t y t t N ∧=≤≤(2)一般来说,波束形成权()w t w =是用来使SINR 最大化*22*()s w a SINR w w Rw σ= (3) 其中,*{()()}R E i e i e =++是干扰和噪声协方差矩阵,2s σ是在确定情况下,由22|()|s s t σ=说给定的信号功率,并且当()s t 为零均值平稳随机过程时,22{()}s E s t σ=。
在实际情况下,协方差矩阵R 通常是不可用的并会被一个估计值所代替,最简单的方法就是使用一个样本协方差 *11()()N sm t R y t y t N ∧==∑ (4)由此导致了Capon 波束形成[27]。
一个可以转换的方式就是使用对角加载估计,由下面公式给出: *11()()Ndt sm t R R I y t y t I N ξξ∧∧==+=+∑ (5)当载入因素ξ发生变化是,将导致Capon 波束形成。
通常是选择210ξσ≈,2σ是但传感器中的噪声功率。
另外一个通用的方法就是特征空间波束形成,它的相反的协方差矩阵估计值是:11()()eig sm s R R P ∧∧--= (6)s P 是正交投影到信号子空间。
B ,MSE 波束形成α为任意值,都有()()SINR w SINR w α=,权重向量的SINR 最大化指定到一个常数,虽然缩放波束形成的权重将不会影响的SINR ,它可以对其他性能措施的影响,其中的波束形成器()s t 是用来估计信号波形应用,扩展成为尤为重要的子带的背景下的波束形成。
一种确定α流行的设计策略是要求*w a =1,导致MVDR 波束形成器 11MVDR R R --=w *1a a a (7)这或者可以得到波束形成的以解决方案**w min w Rw subjec to w a =1 (8)其中,下面我们显示,具有最小的MSE 受约束的波束形成是无偏的,虽然这种方法有几个最优性能的解释,它并不必然导致一个好的信号估计。
相反,我们可以尽量选择α最小化,而不需要直接输出的MSE 持平。
假设()s t s =是确定的,为简便起见,我们在那里省略了指数,MSE 和s 之间以及其估计为:222*{||}()|()||||1|E s s V s B s Rw s ∧∧∧-=+=+-*w a w (9)这里,2(){|{}|}V s E s E s ∧∧∧=-,是s ∧的方差,(){}B s E s s ∧∧=-是s ∧的偏方差。
当s 是一个零均值的随机变量,2s σ为方差,2||s 被2s σ替换。
对于具体,在讨论的其余部分,我们承担的确定性模型;但是,所有的结果在随机设置有效的地方,2||s 被2s σ替换。
21(||)R s β-+2*MVDR w(s)=|s |aa a=(s)w (10)其中最后一个等式的应用矩阵反演定理,我们定义了信号的常数: 22||()1||s s s β=+-1*-1aR a a R a (11)()s β满足0()1s β≤<,在2||s 范围内是单调增加的,所以对于所有的s ,都满足||()||||||MVDR w s w <。
把()w s 代入(9),得到最小均方误差MSE ,我们通过MSEB OPT 表示,由下式给出: 22||1||OPT s MSE s =+*-1a R a(12) MVDR 波形发生器的MSE ,如下式给出: 1MVDR MSE =*-1a R a (13)比较(12)和(13),我们发现,在||0s >的情况下,都有MSEB OPT <MSEB MVDR 。
所以,在具有相同的SINR 的情况下,MMSE 比 MVD 造成更小的的MSE 。