Meta-analysis and Network Meta-analysis: A Practical Tutorial
Meta-analysis Tutorial

Step 1: Formulate your research question
Before conducting a meta-analysis, it is crucial to clearly define your research question or objective. This will guide your search for relevant studies and determine the criteria for including or excluding studies from your analysis.

Step 2: Search the literature
The next step is to perform an extensive literature search to identify all relevant studies on your research question. This can be done by searching electronic databases, such as PubMed or Google Scholar, using specific keywords and inclusion/exclusion criteria. Additionally, it may be helpful to review the reference lists of selected articles to find additional studies that were missed during the initial search.

Step 3: Screen and select studies
Screen the retrieved records against the predefined inclusion and exclusion criteria, typically with an initial pass over titles and abstracts followed by full-text review.

Step 4: Extract data from selected studies
Extract the required data (such as sample sizes, effect estimates, and related statistics) from each included study using a standardized extraction form, ideally in duplicate.

Step 5: Assess study quality and risk of bias
To evaluate the quality and risk of bias of the included studies, a critical appraisal should be conducted. This involves assessing factors such as study design, sample size, blinding, randomization, and potential sources of bias. Various tools and checklists, such as the Cochrane Collaboration's Risk of Bias Tool, can be used to systematically assess the quality of individual studies.

Step 6: Analyze the data
Pool the extracted effect sizes using an appropriate model, such as a fixed-effect or random-effects model.

Step 7: Assess heterogeneity
Heterogeneity refers to the variability in effect sizes across studies. It is important to assess and quantify heterogeneity using statistical tests, such as the Q test or the I² statistic. If significant heterogeneity is present, subgroup analyses or sensitivity analyses may be conducted to explore its potential sources.

Step 8: Assess publication bias
Publication bias occurs when studies with statistically significant results are more likely to be published, while studies with nonsignificant or negative findings remain unpublished. To assess publication bias, funnel plots can be used to visually examine the symmetry of the distribution of effect sizes. Statistical tests, such as Egger's regression or Begg's rank correlation, can also be applied to quantify the degree of asymmetry.

Step 9: Interpret and report the findings
Finally, the results of the meta-analysis should be interpreted and reported in a clear and concise manner. The findings should be discussed in light of the research question, the characteristics of the included studies, and the limitations of the analysis. Conclusions should be drawn based on the strength of evidence provided by the meta-analysis.
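The asymmetry test named above can be sketched numerically. Although this tutorial's later sections work in R, the following is a minimal, language-agnostic illustration of Egger's regression test in Python: each study's standardized effect (effect divided by its standard error) is regressed on its precision (1 divided by the standard error), and an intercept far from zero suggests funnel-plot asymmetry. The study data in the usage line are invented for illustration.

```python
import numpy as np

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regresses standardized effects (effect / SE) on precision (1 / SE).
    Returns the intercept and its t-statistic; an intercept far from
    zero indicates asymmetry, i.e. possible publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses            # standardized effects
    prec = 1.0 / ses             # precisions
    X = np.column_stack([np.ones_like(prec), prec])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    k = len(z)
    s2 = resid @ resid / (k - 2)          # residual variance
    cov = s2 * np.linalg.inv(X.T @ X)     # coefficient covariance matrix
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    return beta[0], t_intercept

# Hypothetical effect sizes and standard errors from five studies:
b0, t0 = egger_test([0.20, 0.25, 0.18, 0.22, 0.21],
                    [0.10, 0.20, 0.15, 0.12, 0.30])
```

In practice the t-statistic would be compared against a t distribution with k − 2 degrees of freedom, as dedicated tools (for example, the metafor package in R) do.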
Steps for Conducting a Meta-analysis

Introduction
Meta-analysis is a statistical method that synthesizes the results of multiple independent studies to reach more accurate and reliable conclusions. It can resolve questions that individual studies cannot answer consistently, and it provides a larger effective sample size and a broader scope of evaluation. This section outlines the steps for conducting a meta-analysis, with details given as bullet points.

Steps
1. Define the research objective:
   - Specify the research question and goals to be analyzed.
   - Identify the research field and topic so that an appropriate literature-search strategy can be determined.
2. Literature search and screening:
   - Develop a search strategy, including the choice of databases and keywords.
   - Retrieve and screen studies against the research objective and predefined inclusion and exclusion criteria.
3. Data extraction:
   - Build a data-extraction form or tool specifying the variables and data to be collected.
   - Extract data independently and double-check entries to ensure accuracy.
4. Quality assessment:
   - Assess the quality and risk of bias of the included studies.
   - Use appropriate tools, scales, or assessment criteria.
5. Effect-size calculation:
   - Choose an effect-size measure suited to the study designs and data types.
   - Compute the effect size and standard error for each study.
6. Data pooling:
   - Pool the data with statistical software.
   - Choose an appropriate model (for example, a fixed-effect or random-effects model) for pooling.
7. Statistical analysis and interpretation:
   - Analyze the pooled effect size and compute the corresponding confidence interval and p-value.
   - Run sensitivity and subgroup analyses to explore possible heterogeneity and systematic error.
8. Reporting and interpretation:
   - Write the meta-analysis report, covering background, methods, results, and discussion.
   - Explain the practical meaning and potential impact of the results, and discuss limitations and uncertainty.
9. Conclusions:
   - Summarize the main conclusions and findings of the meta-analysis.
   - Suggest directions for future research.

Summary
By following the steps above, researchers can synthesize the results of multiple independent studies into more accurate and reliable conclusions. The approach is valuable for integrating existing evidence and drawing statistically stronger conclusions. It does, however, require careful attention to the key steps of literature searching, data extraction, and quality assessment, and the analysis and reporting must be transparent and systematic.
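Steps 5–6 above (effect sizes, then pooling) reduce, in the simplest fixed-effect case, to inverse-variance weighting. Here is a minimal Python sketch; the effect sizes and variances are hypothetical:

```python
import math

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooling.

    Each study is weighted by 1/variance; returns the pooled effect
    and a 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))   # SE of the pooled effect
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Two hypothetical studies: the more precise one pulls the pool toward it.
pooled, ci = fixed_effect_pool([0.0, 1.0], [1.0, 0.25])
# weights are 1 and 4, so pooled = 0.8
```

The same weights are what a random-effects model adjusts by adding a between-study variance term to each study's variance.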
Workflow for Network Meta-analysis in R

A Detailed Workflow for Network Meta-analysis in R
Network meta-analysis is a statistical method for integrating the results of multiple studies, particularly when several interventions or comparison groups are involved.
Systematic Review: Steps in Conducting a Meta-analysis

1. Introduction
Meta-analysis is a systematic-review method that statistically combines the results of multiple independent studies to evaluate their consistency and differences. Its goal is to pool data into a more accurate and reliable effect estimate, providing a scientific basis for decision-makers.

2. Steps
2.1. Define the research question
Before starting a meta-analysis, clarify the objective and the question. A well-defined question tells the researcher which studies' data need to be pooled and which effects need to be evaluated.
2.2. Define inclusion and exclusion criteria
Defining inclusion and exclusion criteria means determining which studies address the research question and selecting suitable ones. Typically these criteria cover study type, sample size, and study design. The purpose of this step is to ensure the quality and comparability of the selected studies.
2.3. Search and screen the literature
In this step, search the relevant databases and screen the retrieved studies against the defined inclusion and exclusion criteria. Screening can involve an initial pass, full-text reading, and a final selection. Only studies that match the research question and criteria are retained.
2.4. Extract data
Once eligible studies are identified, extract the required data from each one. Typically this includes sample size, effect size, and related statistical measures. Extraction should follow a standardized data-extraction form or template.
2.5. Analyze the data
With data extraction complete, statistical analysis can begin. Common methods include computing a weighted average effect size, quantifying heterogeneity, and running subgroup analyses. These analyses help the researcher judge the differences and consistency among studies.
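The subgroup analysis mentioned in step 2.5 simply repeats the pooling within each subgroup. A minimal Python sketch with invented data and inverse-variance (fixed-effect) weights:

```python
def subgroup_pool(effects, variances, groups):
    """Pool effect sizes separately within each subgroup
    using inverse-variance (fixed-effect) weights."""
    pooled = {}
    for g in sorted(set(groups)):
        es = [e for e, gi in zip(effects, groups) if gi == g]
        vs = [v for v, gi in zip(variances, groups) if gi == g]
        w = [1.0 / v for v in vs]
        pooled[g] = sum(wi * ei for wi, ei in zip(w, es)) / sum(w)
    return pooled

# Four hypothetical studies in two subgroups (e.g., split by study design):
by_group = subgroup_pool([0.1, 0.3, 0.5, 0.7],
                         [1.0, 1.0, 1.0, 1.0],
                         ["RCT", "RCT", "cohort", "cohort"])
# → {"RCT": 0.2, "cohort": 0.6}
```

A large gap between subgroup estimates is one concrete signal that study design is a source of heterogeneity.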
2.6. Assess risk of bias
Assessing risk of bias is a critical step in a meta-analysis; it helps the researcher judge the quality and credibility of the selected studies. Commonly used tools include the Cochrane Collaboration's risk of bias tool and the Newcastle-Ottawa Quality Assessment Scale.
2.7. Interpret and present the results
After completing the data analysis, interpret and present the results, which can be done with tables, figures, and descriptive text. In addition, sensitivity analyses and subgroup analyses can be run to test the stability and reliability of the findings.
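One common form of the sensitivity analysis mentioned in step 2.7 is leave-one-out re-pooling: recompute the pooled estimate with each study removed in turn and check whether any single study drives the result. A minimal Python sketch with hypothetical data:

```python
def leave_one_out(effects, variances):
    """Recompute the inverse-variance pooled effect with each study
    dropped in turn; large swings flag influential studies."""
    pooled = []
    for i in range(len(effects)):
        es = effects[:i] + effects[i + 1:]
        vs = variances[:i] + variances[i + 1:]
        w = [1.0 / v for v in vs]
        pooled.append(sum(wi * ei for wi, ei in zip(w, es)) / sum(w))
    return pooled

# Three equally weighted hypothetical studies:
loo = leave_one_out([0.1, 0.2, 0.3], [1.0, 1.0, 1.0])
# → [0.25, 0.2, 0.15]
```

If all leave-one-out estimates stay close to the full-sample estimate, the result is robust to any single study.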
Steps of Meta-analysis with a Worked Example

Report review: the report should be peer-reviewed to ensure its scientific rigor and accuracy.

Conclusions and significance of the worked example
Conclusion: the worked example shows that meta-analysis can effectively integrate the results of multiple studies and increase the credibility of the conclusions.
Significance: the worked example gives a concrete sense of the scope and limitations of meta-analysis, providing a reference point for future research.

Literature screening
- Define the search scope according to the research objective and inclusion criteria
- Filter the literature using keywords and search limits
- Read abstracts and full texts carefully to decide whether each study meets the inclusion criteria
- Record the relevant information during screening for later analysis

Data extraction and statistical analysis
- Data extraction: collect the raw data from each relevant study, ensuring accuracy and completeness.
- Statistical analysis: apply appropriate statistical methods to integrate and analyze the data and reach an overall conclusion.
Part Three: A Worked Example of a Meta-analysis

Choosing the research question
- Identify the research field and keywords
- Define the research question and hypotheses
- Screen the relevant literature and data sources
- Assess the value and feasibility of the research question

Searching the literature
- Define the search scope and keywords
- Develop the search strategy and conditions
- Choose suitable databases and search tools
- Screen eligible studies and extract the relevant information

Screening eligible studies
- Clarify the research objective and scope
- Define the inclusion and exclusion criteria
- Identify the keywords and topics of the research question, and collect the relevant literature
- Search multiple databases and platforms, then retain only the studies that meet the criteria
The Meta-analysis Process (Lecture Slides)

• A method that statistically analyzes the results of individual studies, tests the sources of between-study differences, and quantitatively synthesizes results that are sufficiently similar. In essence, it is a statistical method.
Basic Steps of a Systematic Review
Principles and Rationale of Meta-analysis
• When sample information is used to infer population parameters, sampling error is present, and its size depends on the sample size: the larger the sample, the smaller the sampling error. Each study's effect size d_i accordingly comes with a sampling variance Var(d_i).
Testing for Heterogeneity (I²)
Handling heterogeneous data
Choose the type of model according to the result of the heterogeneity test:
- No heterogeneity: use a fixed-effect model to estimate the pooled effect size.
- Heterogeneity present: identify and adjust for confounding factors, or use a random-effects model to estimate the pooled effect size. If heterogeneity is too large, especially when the effects are highly inconsistent in direction, abandon the meta-analysis and give only a general descriptive summary of the results.
Data type        Study design                        Pooled effect measure
Count data       Randomized controlled trial         RR, OR, RD
                 Non-randomized experimental study   OR, RR, RD
                 Cohort study                        RR, OR, RD
                 Case-control study                  OR
                 Cross-sectional study               OR
                 Diagnostic accuracy study           OR
Continuous data  Randomized controlled trial         WMD, SMD
                 Non-randomized experimental study   WMD, SMD
                 Cohort study                        WMD, SMD
                 Case-control study                  WMD, SMD
                 Cross-sectional study               WMD, SMD
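Each count-data measure in the table above can be computed from a single study's 2×2 table. A minimal Python sketch (cell counts are hypothetical), with a events and b non-events in the treatment arm, and c and d in the control arm:

```python
import math

def two_by_two_effects(a, b, c, d):
    """OR, RR, and RD from a 2x2 table:
    a/b = events/non-events (treatment), c/d = events/non-events (control).
    Also returns the SE of log(OR), used for inverse-variance weighting."""
    p1 = a / (a + b)                     # event risk, treatment arm
    p2 = c / (c + d)                     # event risk, control arm
    return {
        "OR": (a * d) / (b * c),         # odds ratio
        "RR": p1 / p2,                   # risk ratio
        "RD": p1 - p2,                   # risk difference
        "SE_logOR": math.sqrt(1/a + 1/b + 1/c + 1/d),
    }

# Hypothetical trial: 10/100 events versus 20/100 events.
eff = two_by_two_effects(10, 90, 20, 80)
# → RR = 0.5, RD = -0.1
```

Ratio measures (OR, RR) are normally pooled on the log scale, which is why the standard error of log(OR) is the quantity fed into the weighting formulas.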
Correction: the weight W_i in each formula is recalculated as

    W_i' = ( 1/W_i + D )^(-1)

where

    D = [ Q − (K − 1) ] / [ ΣW_i − (ΣW_i²) / (ΣW_i) ]

with Q the heterogeneity statistic and K the number of studies.
The fixed-effect and random-effects models differ in how studies are weighted: the fixed-effect model weights each study by the inverse of its within-study variance, while the random-effects model weights each study by the inverse of the sum of the within-study and between-study variances. The adjustment gives relatively smaller weights to larger studies and relatively larger weights to smaller studies, which can partially offset the influence of heterogeneity.
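The between-study variance that the random-effects weights fold in is usually estimated with the DerSimonian-Laird formula shown earlier in this section. A minimal Python sketch (data hypothetical), truncating the estimate at zero as is conventional:

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird estimate of between-study variance (tau^2)
    and the resulting random-effects pooled estimate."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fe = sum(wi * ei for wi, ei in zip(w, effects)) / sw   # fixed-effect pool
    Q = sum(wi * (ei - fe) ** 2 for wi, ei in zip(w, effects))
    k = len(effects)
    denom = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - (k - 1)) / denom)                 # truncated at 0
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    re = sum(wi * ei for wi, ei in zip(w_re, effects)) / sum(w_re)
    return tau2, re

# Two precise but conflicting hypothetical studies:
tau2, pooled = dersimonian_laird([0.0, 1.0], [0.01, 0.01])
# Q = 50, so tau2 = (50 - 1) / 100 = 0.49
```

When the studies agree (Q ≤ K − 1), tau² truncates to zero and the random-effects result coincides with the fixed-effect result.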
Meta-analysis Basics in Practice (Lecture Slides)

Heterogeneity analysis
• RevMan 5 reports heterogeneity statistics (the chi-square value, Chi², and I²) alongside the forest plot.
• The Cochran Q test is essentially a chi-square test; if the p-value for Chi² is below 0.10, heterogeneity can be considered present.
• I² reflects the proportion of variation attributable to heterogeneity: 0–40% is mild, 40–60% moderate, and 75–100% substantial. As a rule of thumb, I² above 50% suggests considerable between-study heterogeneity.
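The Chi² (Q) and I² values that RevMan reports can be reproduced directly from the per-study effects and variances. A minimal Python sketch (hypothetical data), using the standard definition I² = max(0, (Q − df)/Q) × 100%:

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I-squared statistic (in percent)."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    Q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (Q - df) / Q * 100.0) if Q > 0 else 0.0
    return Q, i2

# Two highly precise, strongly conflicting hypothetical studies:
Q, i2 = heterogeneity([0.0, 1.0], [0.01, 0.01])
# → Q = 50, I² = 98%
```

Note that I² truncates to 0 when Q falls below its degrees of freedom, which is why small, noisy meta-analyses often report I² = 0%.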
Understanding Meta-analysis Correctly
• The best meta-analysis combines the clinical insight of a traditional review with quantitative data.
• The literature collection should be comprehensive, so that no important study is missed.
• Control the quality of the included literature: garbage in, garbage out.
• A meta-analysis mostly represents an overall effect, but effects differ across studies, so pay attention to between-study heterogeneity.
• Published articles tend toward positive or high-efficacy conclusions, so watch out for publication bias.
• After computation, the results display only the data matching the subgroup conditions.
• Meta-DiSc cannot display two subgroups' data in the same figure or table, but this can be supplemented with RevMan 5.
Plotting
• Select the Plots item under the Analyze menu.
• Use the drop-down at the top left to choose the plot type.
Summary
• A meta-analysis combines multiple independent clinical studies for quantitative analysis.
• Related review types: methodology review (methodological assessment); overview of reviews (a systematic review of systematic reviews); flexible review (other forms of assessment).
Creating a Systematic Review
• Determine the title of the systematic review.
The "Meta in Pictures" Series: The Most Detailed Meta-analysis Walkthrough

I once naively wrote that "this series will standardize the writing of meta-analyses." Now that the series is complete and I look back, I see many imperfections, and "standardize" was too strong a claim. We could not introduce every aspect of meta-analysis in detail; we only gave a brief introduction to the general steps, which for so deep a subject serves at most as a primer. How to handle different types of data, and how to produce other types of meta-analysis, could not be covered; we can only hope for more opportunities later. Of course, we believe that as the methodology of meta-analysis continues to mature, the content of this series will eventually be superseded; only by continuing to learn can one stay ahead. Thank you all for your support and encouragement!
Series index
- A Complete Guide to Statistical Methods for Meta-analysis
- Meta in Pictures 1: Types of clinical studies
- Meta in Pictures 2: The evidence hierarchy
- Meta in Pictures 3: Types of meta-analysis
- Meta in Pictures 4: Steps of a meta-analysis
- Meta in Pictures 5: Topic-selection principles and steps
- Meta in Pictures 6: Overview of literature searching
- Meta in Pictures 7: Literature screening and an introduction to EndNote
- Meta in Pictures 8: Quality assessment and risk-of-bias tools
- Meta in Pictures 9: Quality-assessment figures and an introduction to RevMan
- Meta in Pictures 10: Introduction to forest plots
- Meta in Pictures 11: Forest plots with RevMan
- Meta in Pictures 12: Forest plots with Stata
- Meta in Pictures 13: Introduction to funnel plots
- Meta in Pictures 14: Funnel plots with Stata
- Meta in Pictures 15: The PRISMA reporting standard for systematic reviews
Lecture 15: Mixed-effects logistic regression
28 November 2007
Linguistics 251 lecture 15 notes, Roger Levy, Fall 2007

In this lecture we'll learn about mixed-effects modeling for logistic regression.

1 Technical recap

We moved from generalized linear models (GLMs) to multi-level GLMs by adding a stochastic component to the linear predictor:

    η = α + β₁X₁ + ··· + βₙXₙ + b₀ + b₁Z₁ + ··· + bₘZₘ    (1)

and usually we assume the random-effects vector b is normally distributed with mean 0 and variance-covariance matrix Σ.

In a mixed-effects logistic regression model, we simply embed the stochastic linear predictor in the binomial error function (recall that in this case, the predicted mean µ corresponds to the binomial parameter p):

    P(y; µ) = C(n, yn) µ^(yn) (1 − µ)^((1−y)n)    (Binomial error distribution)    (2)

    log[ µ / (1 − µ) ] = η    (Logit link)    (3)

    µ = e^η / (1 + e^η)    (Inverse logit function)    (4)

1.1 Fitting multi-level logit models

As with linear mixed models, the likelihood function for a multi-level logit model must marginalize over the random effects b:

    Lik(β, Σ | x) = ∫ P(x | β, b) P(b | Σ) db    (5)

(the integral taken over all values of b). Unfortunately, this likelihood cannot be evaluated exactly and thus the maximum-likelihood solution must be approximated. You can read about some of the approximation methods in Bates (2007, Section 9). Laplacian approximation to ML estimation is available in the lme4 package and is recommended. Penalized quasi-likelihood is also available but not recommended, and adaptive Gaussian quadrature is recommended but not yet available.

1.2 An example

We return to the dative dataset and (roughly) follow the example in Baayen Section 7.4. We will construct a model with all the available predictors (except for speaker), and with verb as a random effect. First, however, we need to determine the appropriate scale at which to enter the length (in number of words) of the recipient and theme arguments. Intuitively, both raw scales and log scales are plausible. If our response were continuous, a natural thing to do would be to look at scatterplots of each of these variables against the response. With a binary response, however, such a scatterplot is not very informative. Instead, we take two approaches:

1. Look at the empirical relationship between argument length and mean response, using a shingle;
2. Compare single-variable logistic regressions of response against raw/log argument length and see which version has a better log-likelihood.

First we will define convenience functions to use for the first approach:

```r
tapply.shingle <- function(x, s, fn, ...) {
  result <- c()
  for (l in levels(s)) {
    x1 <- x[s > l[1] & s < l[2]]
    result <- c(result, fn(x1, ...))
  }
  result
}
logit <- function(x) { log(x / (1 - x)) }
```

We then plot the mean response based on shingles (Figure 1):

```r
my.intervals <- cbind(1:29 - 0.5, 1:29 + 1.5)
response <- ifelse(dative$RealizationOfRecipient == "PP", 1, 0)
recipient.x <- with(dative, tapply.shingle(LengthOfRecipient,
    shingle(LengthOfRecipient, my.intervals), mean))
recipient.y <- with(dative, tapply.shingle(response,
    shingle(LengthOfRecipient, my.intervals), mean))
plot(recipient.x, logit(recipient.y))
theme.y <- with(dative, tapply.shingle(response,
    shingle(LengthOfTheme, my.intervals), mean))
theme.x <- with(dative, tapply.shingle(LengthOfTheme,
    shingle(LengthOfTheme, my.intervals), mean))
plot(theme.x, logit(theme.y))
```

[Figure 1: Responses of recipient and theme based on shingles]

These plots are somewhat ambiguous and could support either a linear or logarithmic relationship in logit space. (Keep in mind that (a) we're not seeing points where 100% of responses are "successes" or "failures"; and (b) there are very few data points at the larger lengths.) So we resort to the logistic regression approach (recall that the deviance is simply −2 times the log-likelihood):

```r
summary(glm(response ~ LengthOfTheme, dative, family = "binomial"))$deviance
# [1] 3583.41
summary(glm(response ~ log(LengthOfTheme), dative, family = "binomial"))$deviance
# [1] 3537.279
summary(glm(response ~ LengthOfRecipient, dative, family = "binomial"))$deviance
# [1] 3104.92
summary(glm(response ~ log(LengthOfRecipient), dative, family = "binomial"))$deviance
# [1] 2979.884
```

In both cases the log-length regression has a lower deviance and hence a higher log-likelihood. So we'll enter these terms into the overall mixed-effects regression as log-lengths.

```r
dative.glmm <- lmer(RealizationOfRecipient ~ log(LengthOfRecipient)
    + log(LengthOfTheme) + AnimacyOfRec + AnimacyOfTheme + AccessOfRec
    + AccessOfTheme + PronomOfRec + PronomOfTheme + DefinOfRec
    + DefinOfTheme + SemanticClass + Modality + (1 | Verb),
    dative, family = "binomial", method = "Laplace")
dative.glmm
```

Output (abridged):

```
Random effects:
 Groups Name        Variance Std.Dev.
 Verb   (Intercept) 4.6872   2.165
number of obs: 3263, groups: Verb, 75
Estimated scale (compare to 1) 0.7931773

Fixed effects:
                         Estimate Std. Error z value Pr(>|z|)
(Intercept)                1.9463     0.6899   2.821 0.004787 **
AccessOfThemegiven         1.6266     0.2764   5.886 3.97e-09 ***
AccessOfThemenew          -0.3957     0.1950  -2.029 0.042451 *
AccessOfRecgiven          -1.2402     0.2264  -5.479 4.28e-08 ***
AccessOfRecnew             0.2753     0.2472   1.113 0.265528
log(LengthOfRecipient)     1.2891     0.1552   8.306  < 2e-16 ***
log(LengthOfTheme)        -1.1425     0.1100 -10.390  < 2e-16 ***
AnimacyOfRecinanimate      2.1889     0.2695   8.123 4.53e-16 ***
AnimacyOfThemeinanimate   -0.8875     0.4991  -1.778 0.075334 .
PronomOfRecpronominal     -1.5576     0.2491  -6.253 4.02e-10 ***
PronomOfThemepronominal    2.1450     0.2654   8.081 6.40e-16 ***
DefinOfRecindefinite       0.7890     0.2087   3.780 0.000157 ***
DefinOfThemeindefinite    -1.0703     0.1990  -5.379 7.49e-08 ***
SemanticClassc             0.4001     0.3744   1.069 0.285294
SemanticClassf             0.1435     0.6152   0.233 0.815584
SemanticClassp            -4.1015     1.5371  -2.668 0.007624 **
SemanticClasst             0.2526     0.2137   1.182 0.237151
Modalitywritten            0.1307     0.2096   0.623 0.533008
```

(Incidentally, this model has higher log-likelihood than the same model with raw instead of log argument length, supporting our choice of log-length as the preferred predictor.) The fixed-effect coefficients can be interpreted as normal in a logistic regression. It is important to note that there is considerable variance in the random effect of verb. The scale of the random effect is that of the linear predictor, and if we consult the logistic curve we can see that a standard deviation of 2.165 means that it would be quite typical for the magnitude of this random effect to be the difference between a PO response probability of 0.1 and 0.5.

Because of this considerable variance of the effect of verb, it is worth looking at the BLUPs for the random verb intercept:

```r
nms <- rownames(ranef(dative.glmm)$Verb)
intercepts <- ranef(dative.glmm)$Verb[, 1]
support <- tapply(dative$Verb, dative$Verb, length)
labels <- paste(nms, support)
# mgp fix to give room for verb names
barplot(intercepts[order(intercepts)],
        names.arg = labels[order(intercepts)],
        las = 3, mgp = c(3, -0.5, 0), ylim = c(-6, 4))
```

The results are shown in Figure 2 [Random intercept for each verb in the analysis of the dative dataset]. On the labels axis, each verb is followed by its support: the number of instances in which it appears in the dative dataset. Verbs with larger support will have more reliable random-intercept BLUPs. From the barplot we can see that verbs including tell, teach, and show are strongly biased toward the double-object construction, whereas send, bring, sell, and take are strongly biased toward the prepositional-object construction.

This result is theoretically interesting because the dative alternation has been at the crux of a multifaceted debate that includes:

• whether the alternation is meaning-invariant;
• if it is not meaning-invariant, whether the alternants are best handled via constructional or lexicalist models;
• whether verb-specific preferences observable in terms of raw frequency truly have their locus at the verb, or can be explained away by other properties of the individual clauses at issue.

Because verb-specific preferences in this model play such a strong role despite the fact that many other factors are controlled for, we are on better footing to reject the alternative raised by the third bullet above that verb-specific preferences can be entirely explained away by other properties of the individual clauses. Of course, it is always possible that there are other explanatory factors correlated with verb identity that will completely explain away verb-specific preferences; but this is the nature of science. (This is also a situation where controlled, designed experiments can play an important role by eliminating the correlations between predictors.)

1.3 Model comparison & hypothesis testing

For nested mixed-effects logit models differing only in fixed-effects structure, likelihood-ratio tests can be used for model comparison. Likelihood-ratio tests are especially useful for assessing the significance of predictors consisting of factors with more than two levels, because such a predictor simultaneously introduces more than one parameter in the model:

```r
dative.glmm.noacc <- lmer(RealizationOfRecipient ~ log(LengthOfRecipient)
    + log(LengthOfTheme) + AnimacyOfRec + AnimacyOfTheme + PronomOfRec
    + PronomOfTheme + DefinOfRec + DefinOfTheme + SemanticClass
    + Modality + (1 | Verb),
    dative, family = "binomial", method = "Laplace")
anova(dative.glmm, dative.glmm.noacc)
```

```
                  Df     AIC     BIC  logLik   Chisq Chi Df Pr(>Chisq)
dative.glmm.noacc 15 1543.96 1635.31 -756.98
dative.glmm       19 1470.93 1586.65 -716.46 81.0274      4  < 2.2e-16 ***
```

```r
dative.glmm.nosem <- lmer(RealizationOfRecipient ~ log(LengthOfRecipient)
    + log(LengthOfTheme) + AnimacyOfRec + AnimacyOfTheme + AccessOfRec
    + AccessOfTheme + PronomOfRec + PronomOfTheme + DefinOfRec
    + DefinOfTheme + Modality + (1 | Verb),
    dative, family = "binomial", method = "Laplace")
anova(dative.glmm, dative.glmm.nosem)
```

```
                  Df     AIC     BIC  logLik   Chisq Chi Df Pr(>Chisq)
dative.glmm.nosem 15 1474.55 1565.90 -722.27
dative.glmm       19 1470.93 1586.65 -716.46 11.6184      4    0.02043 *
```

1.4 Assessing a logit model

When assessing the fit of a model whose response is continuous, a plot of the residuals is always useful. This is not a sensible strategy for assessing the fit of a model whose response is categorical. Something that is often done instead is to plot predicted probability against observed proportion for some binning of the data. This process is described in Baayen page 305, through the languageR function plot.logistic.fit.fnc():

```r
plot.logistic.fit.fnc(dative.glmm, dative)
```

[Figure 3: The fit between predicted and observed probabilities for each decile of predicted probability for dative.glmm]

This is really a very good fit.

Finally, a slight word of warning: our model assumed that the random verb-specific intercepts are normally distributed. As a sanity check, we can use the Shapiro-Wilk test to check the distribution of the BLUPs for the intercepts:

```r
shapiro.test(ranef(dative.glmm)$Verb[, 1])
```

```
        Shapiro-Wilk normality test
data:  intercepts
W = 0.9584, p-value = 0.0148
```

There is some evidence here that the intercepts are not normally distributed. This is more alarming given that the model has assumed that the intercepts are normally distributed, so that it is biased toward assigning BLUPs that adhere to a normal distribution.

2 Further Reading

There is good theoretical coverage (and some examples) of GLMMs in Agresti (2002, Chapter 12). There is a bit of R-specific coverage in Venables and Ripley (2002, Section 10.4) which is useful to read as a set of applied examples, but the code they present uses penalized quasi-likelihood estimation and this is outdated by lme4.

References

Agresti, A. (2002). Categorical Data Analysis. Wiley, second edition.
Bates, D. (2007). Linear mixed model implementation in lme4. Manuscript, University of Wisconsin, 15 May 2007.
Venables, W. N. and Ripley, B. D. (2002). Modern Applied Statistics with S. Springer, fourth edition.