Language Testing Materials 3


Key Points of Language Testing Theory in Applied Linguistics


In applied linguistics, language testing theory is an important branch. It is of key significance for assessing learners' language ability, guiding teaching practice, and advancing language education.

The main points of language testing theory in applied linguistics are organized below.

I. Definition and Purposes of Language Testing
Language testing is a means of measuring and evaluating learners' language ability.

Its main purposes include: 1. providing a basis for educational decisions, such as a student's promotion, retention, or graduation;

2. evaluating teaching effectiveness, helping teachers understand how well their methods work and how students are progressing;

3. giving students feedback on their proficiency and weaknesses, so that they can refine their learning strategies.

II. Types of Language Tests
1. Proficiency tests measure a candidate's overall command of a language, without regard to previous learning experience or any particular course content.

Well-known proficiency tests include IELTS and TOEFL.

2. Achievement tests focus on the language knowledge and skills mastered in a particular course or stage of study and are closely tied to what has been taught.

Examples include school final examinations and unit quizzes.

3. Diagnostic tests are mainly used to uncover the specific problems and weak points in a learner's language learning, so that subsequent teaching and study can be targeted accordingly.

4. Aptitude tests predict a candidate's potential for learning a language, rather than assessing current proficiency.

III. Quality Criteria for Language Tests
1. Validity is the degree to which a test accurately measures the language ability or knowledge it is meant to measure.

Validity is commonly divided into content validity, construct validity, predictive validity, and so on.

Content validity: whether the test content covers the language skills and knowledge points to be assessed.

Construct validity: whether the test results are consistent with the theoretical construct of language ability.

Predictive validity: whether test scores effectively predict a candidate's performance in future language learning or real-world language use.

2. Reliability reflects the stability and consistency of test results.

It includes test-retest reliability, parallel-forms reliability, split-half reliability, and so on.

Test-retest reliability: the correlation between two sets of results when the same test is administered to the same group of candidates at different times.

Parallel-forms reliability: the correlation between two sets of results when two similar but not identical forms are administered to the same group of candidates.
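Both the test-retest and parallel-forms coefficients above are computed as a correlation between two sets of scores. A minimal sketch in Python (the score lists are invented for illustration):

```python
from math import sqrt

def reliability_coefficient(scores_a, scores_b):
    """Pearson correlation between two administrations of a test."""
    n = len(scores_a)
    mean_a = sum(scores_a) / n
    mean_b = sum(scores_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(scores_a, scores_b))
    var_a = sum((a - mean_a) ** 2 for a in scores_a)
    var_b = sum((b - mean_b) ** 2 for b in scores_b)
    return cov / sqrt(var_a * var_b)

# The same group of candidates, two administrations (invented scores)
first  = [62, 75, 58, 90, 71, 66, 83, 79]
second = [60, 78, 55, 92, 70, 68, 80, 81]
print(round(reliability_coefficient(first, second), 3))
```

A coefficient near 1 indicates that the two administrations rank the candidates almost identically; a coefficient near 0 indicates results unconnected with each other.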

Test of Language Development, Third Edition (TOLD-3)


The Test of Language Development, Third Edition (TOLD-3) is an individually administered instrument for measuring children's language development; its Primary and Intermediate forms together cover children from roughly age 4 to age 12.

TOLD-3 covers a broad range of language skills, including spoken-language development, auditory processing, and oral expression.

The assessment generally uses the item formats most common in psychological testing, such as completion items, so that examinees find the process easy to understand and are willing to take part.

I. Overview
The Test of Language Development was developed by Phyllis L. Newcomer and Donald D. Hammill to measure the developmental level of children's language skills; TOLD-3 is the third edition, published in 1997.

It can be used to diagnose developmental language disorders and to detect abnormalities in language development.

TOLD-3 can also extend the analysis to the home language environment, providing a comprehensive survey of language development.

II. Assessment Method
TOLD-3 is typically made up of two parts. The first is a survey of teacher and parent reports, grounded in the home and social environment and the child's cultural background; the second is direct testing, consisting of examiner-child activities and measures of vocabulary and grammar.

Beyond the report forms, TOLD-3 uses assessment and developmental-survey techniques of proven validity, such as vocabulary (mental-lexicon) tests, structured games, and simulated speech tasks.

III. Results
TOLD-3 yields results that are detailed, trustworthy, and comparable, giving families a useful frame of reference for gauging a child's level in each language-skill domain and the progress of language development.

The results can help family members, teachers, and counselors work together on a specific plan to improve the examinee's language development.

IV. Wide Use
TOLD-3 has been widely adopted in schools around the world; it has appeared in prominent psychology, linguistics, and education journals and is regarded as an effective quantitative measure.

Its main strengths are an easy-to-use design and modest cost; results from the multiple test domains can be read at a glance, showing clearly where an examinee stands in each area of development.

V. Age Range
Unlike some other common language-development measures, TOLD-3 is divided into forms and subtests matched to different age bands, so that each domain is assessed with material appropriate to the child's age.

Outline of Applied Linguistics, 3rd Edition, Chapter 3: Language Testing

(2) Experiments in the social sciences are conducted not in a laboratory but in the real world, so laboratory conditions cannot easily be kept uniform for all subjects, and it is difficult to separate the experiment from real activity.
(3) A human being is a highly complex whole: even the same subject, under different external conditions, will show intellectual, physiological, and psychological differences on every test occasion, and this affects the test results.
Section 1: The Application of Experimental Methods in Linguistics
III. The Reliability and Validity of Measurement
(1) Measuring reliability
2. Methods of estimating reliability: the test-retest method, the parallel-forms method, and the split-half method.
3. Rater reliability
When raters mark subjective items (such as the compositions and oral tests in a language test), error is common. This raises the question of rater reliability, which has two aspects: the internal consistency of a single rater (intra-rater reliability) and consistency across raters (inter-rater reliability).
II. The Nature of Language Testing
(2) The information contained in a language test. A language test involves two aspects: language and testing. Accordingly,
language testing must weigh many theoretical and practical factors, and it attends closely to three kinds of information: 1. information about language skills; 2. information about language development; 3. information about language knowledge.
Section 2: The Theoretical Framework of Modern Language Testing
Section 3: Procedures for Producing Test Items
II. Scoring the Items
(1) Scoring objective items: (a) manual scoring; (b) manual input plus machine scoring; (c) machine reading plus machine scoring.
Large-scale tests should use methods (b) and (c) wherever possible.
(2) Scoring subjective items. Subjective scoring mainly concerns items that test productive skills and productive use. 1. Establish uniform scoring criteria. 2. Train the scorers. 3. Improve the scoring procedure.
…provides monitoring, and supplies experimental and survey methods for the teaching and learning of language. The contribution of language testing to applied linguistics can be summed up in three points: (1) it turns the theoretical framework of applied linguistics into practical application; (2) it gives the drafting of syllabuses and teaching plans clear goals and standards; (3) it provides methodological lessons for research in applied linguistics. (From Section 2: The Theoretical Framework of Modern Language Testing.)

Language Testing Materials 3


Chapter 3 (第三章): The Reliability of Testing (测试的信度)

• The definition of reliability
• The reliability coefficient
• How to make tests more reliable

What is reliability?
Reliability refers to the trustworthiness and stability of candidates' test results. In other words, if a group of students were given the same test twice at different times, the more similar the scores, the more reliable the test is said to be.

How can the reliability of a test be established?
It is possible to quantify the reliability of a test in the form of a reliability coefficient, which allows us to compare the reliability of different tests. The ideal reliability coefficient is 1.
- A test with a reliability coefficient of 1 would give precisely the same results for a particular set of candidates regardless of when it happened to be administered.
- A test with a reliability coefficient of zero would give sets of results quite unconnected with each other.
Genuine test reliability coefficients fall between these two extremes.

How high a coefficient should we expect for different types of language test? Lado says that good vocabulary, structure, and reading tests are usually in the 0.90 to 0.99 range, while auditory comprehension tests are more often in the 0.80 to 0.89 range. A reliability coefficient of 0.85 might be considered high for an oral production test but low for a reading test.

Ways to establish the reliability of a test:
1. Test-retest method
This requires two sets of scores for comparison. The most obvious way of obtaining these is to have a group of subjects take the same test twice.
2. Split-half method
The subjects take the test in the usual way, but each subject is given two scores: one for one half of the test, the other for the other half.
The two sets of scores are then used to obtain the reliability coefficient as if the whole test had been taken twice. For this method to work, the test must be split into two halves which are genuinely equivalent, through careful matching of items (where the items have been ordered by difficulty, a split into odd-numbered and even-numbered items may be adequate).
3. Parallel-forms method (the alternate-forms method)
Two different forms of the same test are used to measure a group of students consecutively or within a very short time. However, alternate forms are often simply not available.

How to make tests more reliable
As we have seen, test reliability has two components: the consistency of candidates' performance from occasion to occasion, and the reliability of the scoring. We begin with ways of achieving consistent performances from candidates, and then turn to scorer reliability.

1. Take enough samples of behavior
Other things being equal, the more items a test has, the more reliable it will be. If we wanted to know how good an archer someone was, we would not rely on the evidence of a single shot at the target; that one shot could be quite unrepresentative of their ability.
To be satisfied that we had a really reliable measure of the ability, we should want to see a large number of shots at the target. The same is true for language testing: it has been demonstrated empirically that adding further items makes a test more reliable.

The additional items should be independent of each other and of the existing items. Suppose a reading test asks, "Where did the thief hide the jewels?" If an additional item took the form "What was unusual about the hiding place?", would it make a full contribution to increasing the reliability of the test? No, because someone who got the original question wrong can hardly get the supplementary question right. We do not get an additional sample of their behavior, so the reliability of our estimate of their ability is not increased. Each additional item should, as far as possible, represent a fresh start for the candidate.

Is a longer test always more reliable? A test should be long enough to achieve satisfactory reliability, but not so long that candidates become bored or tired and the behavior they exhibit becomes unrepresentative of their ability.

2. Do not allow candidates too much freedom
In general, candidates should not be given a choice, and the range over which possible answers might vary should be restricted. Compare the following writing tasks:
a) Write a composition on tourism.
b) Write a composition on tourism in this country.
c) Write a composition on how we might develop the tourist industry in this country.
d) Discuss the following measures intended to increase the number of foreign tourists coming to this country:
   i) more/better advertising and/or information (where? what form should it take?);
   ii) improved facilities (hotels, transportation, communication, etc.);
   iii) training of personnel (guides, hotel managers, etc.).
The successive tasks impose more and more control over what is written; the fourth task is likely to be a much more reliable indicator of writing ability than the first. But in restricting the students we must be careful not to distort too much the task that we really want to see them perform.

3. Write unambiguous items
Candidates should not be presented with items whose meaning is unclear, or which have an acceptable answer the test writer has not anticipated. The best way to arrive at unambiguous items is to draft them and then subject them to the critical scrutiny of colleagues, who should try as hard as they can to find alternative interpretations to the ones intended.

4. Provide clear and explicit instructions
This applies to both written and oral instructions. If it is possible for candidates to misinterpret what they are asked to do, then on some occasions some of them certainly will. A common fault of tests written for the students of a particular teaching institution is the supposition that the students all know what is intended by carelessly worded instructions. The frequency of the complaint that students have been stupid or have willfully misunderstood what they were asked to do reveals that the supposition is often unwarranted. Test writers should not rely on students' powers of telepathy to elicit the desired behavior. The best means of avoiding problems is to have colleagues criticize drafts of the instructions (including those which will be spoken). Spoken instructions should always be read from a prepared text to avoid introducing confusion.

5. Ensure that tests are well laid out and perfectly legible
Too often, institutional tests are badly typed (or handwritten), have too much text in too small a space, and are poorly reproduced.
As a result, students face additional tasks which are not meant to measure their language ability, and their variable performance on these unwanted tasks will lower the reliability of the test.

6. Make candidates familiar with the format and testing techniques
If any aspect of a test is unfamiliar to candidates, they are likely to perform less well than they otherwise would. Every effort must therefore be made to ensure that all candidates have the opportunity to learn just what will be required of them. This may mean distributing sample tests (or past test papers), or at least providing practice materials, in the case of tests set within teaching institutions.

7. Provide uniform and non-distracting conditions of administration
The greater the differences between one administration of a test and another, the greater the differences one can expect between a candidate's performances on the two occasions. Great care should be taken to ensure uniformity. For example, timing should be specified and strictly adhered to, and the acoustic conditions should be similar for all administrations of a listening test. Every precaution should be taken to maintain a quiet setting with no distracting sounds or movements.

How to obtain scorer reliability
1. Use items that permit scoring which is as objective as possible
This may appear to be a recommendation to use multiple-choice items, which permit completely objective scoring. That is not intended. While it would be mistaken to say that multiple-choice items are never appropriate, there are certainly many circumstances in which they are quite inappropriate. What is more, good multiple-choice items are notoriously difficult to write and always require extensive pretesting. An alternative to multiple choice is the open-ended item with a unique, possibly one-word, correct response which the candidates produce themselves.
This too should ensure objective scoring, but in practice problems with such matters as spelling, which can make a candidate's meaning unclear, often place demands on the scorer's judgment. The longer the required response, the greater the difficulties of this kind. One way of dealing with this is to structure the candidate's response by providing part of it. For example, the open-ended question "What was different about the results?" may be designed to elicit the response "Success was closely associated with high motivation." This is likely to cause problems for scoring. Greater scorer reliability will probably be achieved if the question is followed by:
_____ was more closely associated with _____.

2. Make comparisons between candidates as direct as possible
This reinforces the suggestion already made that candidates should not be given a choice of items and should be limited in how they are allowed to respond. Scoring compositions all on one topic will be more reliable than if candidates are allowed to choose from six topics, as has been the case in some well-known tests.

3. Provide a detailed scoring key
The key should specify acceptable answers and assign points for partially correct responses. For high scorer reliability it should be as detailed as possible in its assignment of points, should result from an effort to anticipate all possible responses, and should have been subjected to group criticism. (This advice applies only where responses can be classed as partially or totally 'correct', not in the case of compositions, for instance.)

4. Train scorers
This is especially important where scoring is more subjective. The scoring of compositions, for example, should not be assigned to anyone who has not learned to score compositions accurately from past administrations. After each administration, patterns of scoring should be analyzed; individuals whose scoring deviates markedly and inconsistently from the norm should not be used again.

5. Agree on acceptable responses and appropriate scores at the outset of scoring
A sample of scripts should be taken immediately after the administration of the test. Where there are compositions, archetypal representatives of different levels of ability should be selected. Only when all scorers agree on the scores to be given to these should real scoring begin. For short-answer questions, the scorers should note any difficulties they have in assigning points (the key is unlikely to have anticipated every relevant response) and bring these to the attention of whoever is supervising that part of the scoring. Once a decision has been taken on the points to be assigned, the supervisor should convey it to all the scorers concerned.

6. Identify candidates by number, not name
Scorers inevitably have expectations of candidates they know. Except in purely objective testing, this will affect the way they score. Studies have shown that even where the candidates are unknown to the scorers, the name on a script (or a photograph) makes a significant difference to the scores given: a scorer may be influenced by the gender or nationality of a name into making predictions which affect the score. Identifying candidates only by number will reduce such effects.

7. Employ multiple, independent scoring
As a general rule, and certainly where scoring is subjective, all scripts should be scored by at least two independent scorers. Neither scorer should know how the other has scored a paper. Scores should be recorded on separate score sheets and passed to a third, senior colleague, who compares the two sets of scores and investigates discrepancies.

Reliability and validity
To be valid, a test must provide consistently accurate measurements; it must therefore be reliable. A reliable test, however, may not be valid at all. For example, as a writing test we might require candidates to write down the translation equivalents of 500 words in their own language.
This could well be a reliable test, but it is unlikely to be a valid test of writing. In our efforts to make tests reliable, we must be wary of reducing their validity. This depends in part on what exactly we are trying to measure by setting the task. If we are interested in candidates' ability to structure a composition, it would be hard to justify providing them with a structure in order to increase reliability. At the same time we would still try to restrict candidates in ways that would not render their performance on the task invalid. There will always be some tension between reliability and validity; the tester has to balance gains in one against losses in the other.
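The split-half procedure described in this chapter can be sketched as follows: each candidate receives an odd-item score and an even-item score, the two sets are correlated, and the half-test correlation is stepped up to a full-test estimate with the standard Spearman-Brown correction. The item data below are invented for illustration:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) *
                      sum((y - my) ** 2 for y in ys))

def split_half_reliability(item_scores):
    """item_scores: one row of 0/1 item scores per candidate.
    Splits the test into odd- and even-numbered items, correlates
    the two half-test scores, then applies the Spearman-Brown
    correction r_full = 2r / (1 + r) to estimate full-test reliability."""
    odd = [sum(row[0::2]) for row in item_scores]
    even = [sum(row[1::2]) for row in item_scores]
    r_half = pearson(odd, even)
    return (2 * r_half) / (1 + r_half)

# Invented responses: 6 candidates x 8 items
responses = [
    [1, 1, 1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0],
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1, 1, 1, 0],
]
print(round(split_half_reliability(responses), 3))
```

The odd/even split implements the chapter's suggestion for items ordered by difficulty; with a genuinely matched split, careful item pairing would replace the simple slicing.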

Language Testing 3

proficiency test
Designed to measure people's ability in a language regardless of any training they may have had; not syllabus-based, but based on a specification of what candidates have to be able to do for a particular purpose (e.g. TOEFL, PETS, IELTS).
Lado
3. It is more effective to test grammar, which is limited, than situations, which are infinite; hence the grammar items are designed without context. 4. Forms should be the testing points for language tests, because the two languages differ where transfer takes place.
If the NR test is properly designed, the scores attained will typically be distributed in the shape of a "normal" bell-shaped curve. The items or parts will be selected according to how adequately they represent these ability levels or content domains.
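A norm-referenced score is interpreted by locating it in that bell-shaped distribution. A small sketch (the norm mean and standard deviation are invented, loosely echoing a 500/100 scaled-score convention) converting a raw score to a z-score and an approximate percentile under a normal curve:

```python
from math import erf, sqrt

def z_score(raw, norm_mean, norm_sd):
    """Standardize a raw score against the norm group."""
    return (raw - norm_mean) / norm_sd

def percentile(z):
    """Cumulative normal probability: the share of the norm group scoring lower."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Invented norm: mean 500, standard deviation 100
z = z_score(600, 500, 100)
print(z, round(percentile(z) * 100, 1))  # one SD above the mean, about the 84th percentile
```

This is exactly what distinguishes norm-referenced interpretation from criterion-referenced interpretation: the score means something only relative to the distribution of the norm group.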

Summary of Key Points on Types of Language Tests


There are many kinds of language tests, such as written tests, oral tests, listening tests, and reading tests.

The test method and scoring criteria should be chosen according to the purpose of the test.

Different test tasks target different language skills, such as vocabulary, grammar, listening, speaking, reading, and writing.

These knowledge points are introduced one by one below.

I. Vocabulary
Vocabulary is a basic building block of language and the foundation of language use.

In language tests, vocabulary items typically examine word meaning, part of speech, phrases, set expressions, and meaning in context.

Test takers need to know the spelling, pronunciation, usage, and collocations of words.

1. Word meaning: the basic sense of a word, and one of the focal points of vocabulary testing. Test takers need to master the basic meanings of words and understand the various senses and uses of common words.

2. Part of speech: an important property of a word, determining its usage and collocations. Test takers need to know words of every part of speech and understand their roles and uses in the language.

3. Phrases and set expressions: fixed collocations in common use, and another focal point of testing. Test takers need to master common phrases and expressions and understand their meanings and uses.

4. Context: an important basis for word use, helping to establish a word's meaning and usage. Test takers need to be able to use words in different contexts and grasp the specific meanings they take on there.

II. Grammar
Grammar comprises the basic rules of a language, determining its structure and usage.

In language tests, grammar items typically cover sentence structure, tense, voice, mood, word order, subject-verb agreement, comparative and superlative forms of adjectives and adverbs, conjunctions, pronouns, and so on.

1. Sentence structure: a basic element of grammar and the basic unit of expression. Test takers need to master the different sentence types and understand how they are formed and used.

2. Tense: the grammatical marking of when an action takes place, and another focal point of testing. Test takers need to master the various tenses and understand their differences and contexts of use.

3. Voice: the grammatical marking of the relation between a sentence's subject and predicate, and another focal point of testing. Test takers need to master the active and passive voice and understand their roles and differences within the sentence.

4. Mood: the grammatical marking of the speaker's attitude and emotion, and another focal point of testing.

Basic Concepts of Language Tests (complete materials)


Comparison of norm-referenced and criterion-referenced tests:

                     Norm-referenced (常模参照)          Criterion-referenced (标准参照)
Nature               comparison among test takers        comparison against pre-specified content
Purpose              to distinguish the abilities of     to see how much of the taught content
                     all test takers                     a test taker has mastered
Content (cf.         test takers do not know, or know    test takers know the content fully
content validity)    little of, the content in advance

…and the scoring procedures are identical and may not be altered at will; third, both have been through trial administrations, after a large body of empirical research.
Lecture 2: Basic Concepts of Language Tests

❖ The functions and purposes of language tests
❖ The types of language tests
❖ The quality standards of language tests

The functions and purposes of language tests

• Function: to measure learners' language ability scientifically
• Purposes: selection, diagnosis, evaluation, prediction, research

Types of tests

• Classified by use (purpose):
• ability tests (or proficiency tests): proficiency test,
What are the functions and purposes of language tests?
Function: to measure learners' language ability scientifically
Facility value (难易度)

• Classified by frame of reference:
• Norm-referenced test: a norm (常模) is the distribution of test scores in a standardization sample.

Types of validity:
• content validity
• criterion-related validity
  • concurrent validity
  • predictive validity
• construct validity
• face validity

Language Testing Materials


Chapter 5 (第五章): Test Techniques and Measuring Overall Ability (测试的技巧和测试综合能力)

What are test techniques?
They are means of eliciting behavior from candidates which will tell us about their language abilities.

What techniques do we need?
We need techniques which:
1. will elicit behavior which is a reliable and valid indicator of the ability in which we are interested;
2. will elicit behavior which can be reliably scored;
3. are as economical of time and effort as possible;
4. will have a beneficial backwash effect.

In this chapter we first discuss one technique, multiple choice, and then examine techniques which may be used to test 'overall ability'.

Multiple choice
A multiple-choice item consists of:
(1) a stem: Enid has been here ______ half an hour.
(2) a number of options: A. during  B. for  C. while  D. since
(3) the key (the correct option);
(4) the distractors (the incorrect options).

What is the most obvious advantage of multiple choice?
• Scoring can be perfectly reliable.
• Scoring should also be rapid and economical.
• A test can include more items than would otherwise be possible in a given period of time.

The difficulties with multiple choice are as follows:

1) The technique tests only recognition knowledge
If there is a lack of fit between at least some candidates' productive and receptive skills, then performance on a multiple-choice test may give a quite inaccurate picture of those candidates' ability. A multiple-choice grammar test score, for example, may be a poor indicator of someone's ability to use grammatical structures: the person who can identify the correct response in the item above may not be able to produce the correct form when speaking or writing.
This is in part a question of construct validity: whether or not grammatical knowledge of the kind that can be demonstrated in a multiple-choice test underlies the productive use of grammar. Even if it does, there is still a gap to be bridged between knowledge and use; if use is what we are interested in, that gap means that test scores are at best giving incomplete information.

2) Guessing may have a considerable but unknowable effect on test scores
The chance of guessing the correct answer in a three-option multiple-choice item is one in three, or roughly thirty-three percent. On average we would expect someone to score 33 on a 100-item test purely by guesswork; some people would score fewer than that by guessing, others more. The trouble is that we can never know what part of any particular individual's score has come about through guessing. Attempts are sometimes made to estimate the contribution of guessing by assuming that all incorrect responses are the result of guessing, and by further assuming that the individual has had average luck in guessing; scores are then reduced by the number of points the individual is estimated to have obtained by guessing. However, neither assumption is necessarily correct, and we cannot know that the revised score is the same as (or very close to) the one an individual would have obtained without guessing. While other testing methods may also involve guessing, we would normally expect the effect to be much less, since candidates will usually not have a restricted number of responses presented to them (with the information that one of them is correct).

3) The technique severely restricts what can be tested
The basic problem here is that multiple-choice items require distractors, and distractors are not always available. In a grammar test, it may not be possible to find three or four plausible alternatives to the correct structure.
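The correction for guessing described under (2), subtracting the estimated guessing contribution from the raw score, is conventionally computed as R minus W/(k-1) for k options. A sketch of that standard formula (not given explicitly in the text), with the caveats above still applying:

```python
def correct_for_guessing(right, wrong, options):
    """Standard correction-for-guessing formula: R - W / (k - 1).
    Assumes every wrong answer was a guess with average luck,
    which, as noted above, is not necessarily true."""
    return right - wrong / (options - 1)

# 100 three-option items answered purely at random:
# about 33 right and 67 wrong, so the corrected score is near zero.
print(correct_for_guessing(33, 67, 3))   # -> -0.5
```

The near-zero result shows why the formula is used: pure guessing should earn roughly nothing. But because the two assumptions rarely hold exactly, the corrected score remains an estimate, not the score the candidate would have obtained without guessing.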
The result is that command of what may be an important structure is simply not tested. An example would be the distinction in English between the past tense and the present perfect: for learners at a certain level of ability, in a given linguistic context, there are no other alternatives that are likely to distract. The argument that this must be a difficulty for any item that attempts to test the distinction is hard to sustain, since items that do not overtly present a choice may elicit the candidate's usual behavior, without the candidate resorting to guessing.

4) It is very difficult to write successful items
A further problem with multiple choice is that, even where items are possible, good ones are extremely difficult to write. Professional test writers reckon to have to write many more items than they actually need for a test, and only after pretesting and statistical analysis of performance on the items can they recognise the ones that are usable. It is some teachers' experience that multiple-choice tests produced for use within institutions are often shot through with faults. Common among these are:
• more than one correct answer;
• no correct answer;
• clues in the options as to which is correct (for example, the correct option may differ in length from the others);
• ineffective distractors.
The amount of work and expertise needed to prepare good multiple-choice tests is so great that, even ignoring the other problems associated with the technique, one would not wish to recommend it for regular achievement testing (where the same test is not used repeatedly) within institutions.
Savings in time for administration and scoring will be outweighed by the time spent on successful test preparation. It is true that the development and use of item banks, from which a selection can be made for particular versions of a test, makes the effort more worthwhile, but great demands are still made on time and expertise.

5) Backwash may be harmful
It should hardly be necessary to point out that where a test which is important to students is multiple choice in nature, there is a danger that practice for the test will have a harmful effect on learning and teaching. Practice at multiple-choice items (especially when, as happens, as much attention is paid to improving one's educated guessing as to the content of the items) will not usually be the best way for students to improve their command of a language.

6) Cheating may be facilitated
The fact that the responses on a multiple-choice test (a, b, c, d) are so simple makes them easy to communicate to other candidates nonverbally. Some defence against this is to have at least two versions of the test, the only difference between them being the order in which the options are presented.

All in all, the multiple-choice technique is best suited to relatively infrequent testing of large numbers of candidates. This is not to say that there should be no multiple-choice items in tests produced regularly within institutions. In setting a reading comprehension test, for example, certain tasks may lend themselves very readily to the multiple-choice format, with obvious distractors presenting themselves in the text. There are also real-life tasks (say, a shop assistant identifying which one of four dresses a customer is describing) which are essentially multiple choice; the simulation of such a situation in a test would seem to be perfectly appropriate.
What the reader is being urged to avoid is the excessive, indiscriminate, and potentially harmful use of the technique.

Cloze, C-Test, and dictation: measuring overall ability
These three techniques have in common the fact that they seem to offer economical ways of measuring overall ability in a language. The cloze technique has in addition been recommended as a means of measuring reading ability.

Varieties of cloze procedure
In its original form, the cloze procedure involves deleting a number of words in a passage, leaving blanks, and requiring the person taking the test to attempt to replace the original words. After a short unmutilated 'lead-in', it is usually about every seventh word which is deleted. For example:

What is a college?
Confusion exists concerning the real purposes, aims, and goals of a college. What are these? What should a college be? Some believe that the chief function 1.____ even a liberal arts college is 2.____ vocational one. I feel that the 3.____ function of a college, while important, 4.____ nonetheless secondary. Others profess that the 5.____ purpose of a college is to 6.____ paragons…

The cloze procedure seemed very attractive. Cloze tests were easy to construct, administer, and score, and reports of early research seemed to suggest that it mattered little which passage was chosen or which words were deleted; the result would be a reliable and valid test of candidates' underlying language abilities. Unfortunately, cloze could not deliver all that was promised on its behalf. For one thing, even if some underlying ability is being measured through the procedure, it is not possible to predict accurately from it people's ability with respect to the separate skills (speaking, writing, etc.) in which we are usually interested. Further, it turned out that different passages gave different results, as did the deletion of different sets of words in the same passage.
Another matter for concern was the fact that intelligent and educated native speakers varied quite considerably in their ability to predict the missing words; what is more, some of them did less well than many non-native speakers. The validity of the procedure, even as a very general measure of overall ability, was thus brought into question.

There now seems to be fairly general agreement that the cloze procedure cannot be depended upon automatically to produce reliable, useful tests. Texts must be selected carefully and pretested. The fact that deletion of every nth word almost always produces problematical items (for example, items where the missing word is impossible to predict) points to the advisability of carefully selecting the words to delete from the outset.

The following cloze passage was constructed according to this advice:

Choose the best word to fill each of the numbered blanks in the passage below. Write your answers in the space provided in the right-hand margin. Write only ONE word for each blank.

The earth's vegetation is _(1)_ of a web of life in which there are intimate and essential relations between plants and the earth, between plants and _(2)_ plants, between plants and animals. Sometimes we have no _(3)_ but to disturb _(4)_ relationships, but we should _(5)_ so thoughtfully, with full awareness that _(6)_ we do may _(7)_ consequences remote in time and place.

The deletions in this passage were chosen to provide 'interesting' items. Most of them we might be inclined to regard as testing 'grammar', but responding to them successfully takes more than grammatical ability; processing various features of context is usually necessary. Another feature is that native speakers of the same general academic ability as the students for whom the test was intended could be expected to provide acceptable responses to all of the items. The acceptable responses are themselves limited in number.
Scores on cloze passages of this kind in the Cambridge Proficiency Examination have correlated very highly with performance on the test as a whole. It is this kind of cloze that experts would recommend for measuring overall ability.

It may reasonably be thought that cloze procedures, since they produce purely pencil-and-paper tests, cannot tell us anything about the oral component of overall proficiency. However, some research has explored the possibility of using cloze passages based on tape recordings of oral interaction to predict oral ability:

Family reunion
Mother: I love that dress, Mum.
Grandmother: Oh, it's M and S.
Mother: Is it?
Grandmother: Yes, five pounds.
Mother: My goodness, it's not, Mum.
Grandmother: But it's made of that T-shirt stuff, so I don't think it'll wash very _______(1), you know, they go all…
Mother: sort _______(2)… I know the kind, yes…
Grandmother: Yes.

Advice on creating cloze-type passages
1. The chosen passages should be at a level of difficulty appropriate to the people who are to take the test. If there is doubt about the level, a range of passages should be selected for pretesting. Indeed, it is always advisable to pretest a number of passages, as their behavior is not always predictable.
2. The text should be of a style appropriate to the kind of language ability being tested.
3. After a couple of sentences of uninterrupted text, deletions should be made at about every eighth or tenth word (the so-called pseudo-random method of deletion). Individual deletions can then be moved a word or two to the left or right, to avoid problems or to create interesting 'items'.
4. The passage should then be tried out on a good number of comparable native speakers and the range of acceptable responses determined.
5. Clear instructions should be devised. In particular, it should be made clear what is to be regarded as a word (with examples of isn't etc., where possible). Students should be assured that no one can possibly replace all the original words exactly.
They should be encouraged to begin by reading the passage right through to get an idea of what is being conveyed (the correct responses early in the passage may be determined by later content).
6. The layout of the test should facilitate scoring. Scorers are given a card with the acceptable responses written in such a way as to lie opposite the candidates' responses.
7. Anyone who is to take a cloze test should have had several opportunities to become familiar with the technique. The more practice they have had, the more likely it is that their scores will represent their true ability in the language.
8. Cloze test scores are not directly interpretable. In order to interpret them we need some other measure of ability. If a series of cloze passages is to be used as a placement test, the obvious step is to have all students currently in the institution complete the passages. Their cloze scores can then be compared with the level at which they are studying in the institution. Information from teachers as to which students could be in a higher (or lower) class is also useful. Once a pattern is established between cloze scores and class level, the cloze passages can be used as at least part of the placement procedure.

The C-Test
What is the C-Test?
The C-Test is really a variety of cloze which its originators claim is superior to the kind of cloze described above. Instead of whole words, it is the second half of every second word which is deleted. For example:

There are usually five men in the crew of a fire engine. One o___ them dri___ the eng___. The lea___ sits bes___ the dri___. The ot___ firemen s___ inside t___ cab o___ the f___ engine. T___ leader h___ usually be___ in t___ Fire Ser___ for ma___ years. H___ will kn___ how t___ fight diff___ sorts o___ fires. S___, when t___ firemen arr___ at a fire, it is always the leader who decides how to fight a fire.
He tells each fireman what to do.

What are the advantages of the C-Test?
The advantages of the C-Test over the more traditional cloze procedure are that only exact scoring is necessary (native speakers effectively scoring 100 per cent) and that shorter (and so more) passages are possible. This last point means that a wider range of topics, styles, and levels of ability is possible. The deletion of elements smaller than the word is also said to result in a representative sample of parts of speech being affected. By comparison with cloze, a C-Test of 100 items takes little space and not nearly so much time to complete (candidates do not have to read so much text).

What are the disadvantages of the C-Test?
It is harder to read than a cloze passage, and correct responses can often be found in the surrounding text. Thus the candidate who adopts the right puzzle-solving strategy may be at an advantage over a candidate of similar foreign language ability. However, research would seem to indicate that the C-Test functions well as a rough measure of overall ability in a foreign language. The advice given above on the development of cloze tests applies equally to the C-Test.

Dictation
Research revealed high correlations between scores on dictation and scores on much longer and more complex tests. Examination of performance on dictation tests made it clear that words and word order were not really given; the candidate heard only a stream of sound which had to be decoded into a succession of words, stored, and recreated on paper. The ability to identify words from context was now seen as a very desirable ability, one that distinguished between learners at different levels.

Dictation tests give results similar to those obtained from cloze tests. In predicting overall ability they have the advantage of involving listening ability. That is probably the only advantage. Certainly they are as easy to create. They are relatively easy to administer, though not as easy as the paper-and-pencil cloze.
But they are certainly not easy to score. It is recommended that the score should be the number of words appearing in their original sequence (misspelled words being regarded as correct as long as no phonological rule is broken). This works quite well when performance is reasonably accurate, but is still time-consuming. With poorer students, scoring becomes tedious.

Because of this scoring problem, partial dictation may be considered as an alternative. In this, part of what is dictated is already printed on the candidate's answer sheet. The candidate has simply to fill in the gaps. It is then clear just where the candidate is up to, and scoring is likely to be more reliable.

Like cloze, dictation may prove a useful technique where estimates of overall ability are needed. The same considerations should guide the choice of passages as with the cloze procedure. The passage has to be broken down into stretches that will be spoken without a break. These should be fairly long, beyond rote memory, so that the candidates will have to decode, store, and then re-encode what they hear. It is usual, when administering the dictation, to begin by reading the entire passage straight through. Then the stretches are read out, not too slowly, one after the other, with enough time for the candidates to write down what they have heard (it is recommended that the reader silently spell the stretch twice as a guide to writing time).

In summary, dictation and the varieties of cloze procedure discussed above provide neither direct information on the separate skills in which we are usually interested nor any easily interpreted diagnostic information. With careful application, however, they can prove useful to non-professional testers for purposes where great accuracy is not called for.
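The recommended scoring rule for full dictation — counting the words that appear in their original sequence — is in effect a longest-common-subsequence count between the original passage and the candidate's transcript. The following is a minimal Python sketch under that reading; the lower-casing step and the sample sentences are illustrative assumptions, not part of the original recommendation (nor does this handle the misspelling tolerance, which needs human judgment):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two word lists."""
    # prev[j] holds the LCS length of the words seen so far and b[:j].
    prev = [0] * (len(b) + 1)
    for word_a in a:
        curr = [0]
        for j, word_b in enumerate(b, start=1):
            if word_a == word_b:
                curr.append(prev[j - 1] + 1)
            else:
                curr.append(max(prev[j], curr[j - 1]))
        prev = curr
    return prev[len(b)]

def score_dictation(original, transcript):
    """Score = number of words appearing in their original sequence."""
    orig_words = original.lower().split()
    cand_words = transcript.lower().split()
    return lcs_length(orig_words, cand_words)

original = "the leader has usually been in the fire service for many years"
transcript = "the leader has been in the fire service many years"
print(score_dictation(original, transcript))  # prints 10
```

Omitted words simply earn no points, while words the candidate reorders fall out of the common subsequence, which matches the "original sequence" requirement.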

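The two deletion procedures described above — the cloze's pseudo-random deletion at roughly fixed word intervals, and the C-Test's deletion of the second half of every second word — can be sketched mechanically. The functions below are an illustrative Python sketch (the function names, default parameters, and treatment of odd-length words are my own assumptions; real test construction would still need the manual left/right adjustment of deletions and the native-speaker pretesting recommended above, and punctuation handling is omitted):

```python
def make_cloze(text, every=8, lead_in_words=20):
    """Delete every `every`-th word after an intact lead-in, per the
    pseudo-random deletion method. Returns the gapped text and the answers."""
    words = text.split()
    out, answers = [], []
    for i, word in enumerate(words):
        if i >= lead_in_words and (i - lead_in_words) % every == every - 1:
            answers.append(word)
            out.append("_" * 8)
        else:
            out.append(word)
    return " ".join(out), answers

def make_c_test(text, lead_in_words=10):
    """Delete the second half of every second word after an intact lead-in."""
    words = text.split()
    out = []
    counted = 0
    for i, word in enumerate(words):
        if i >= lead_in_words:
            counted += 1
            if counted % 2 == 0 and len(word) > 1:
                keep = len(word) // 2  # keep the first half (rounding down)
                word = word[:keep] + "_" * (len(word) - keep)
        out.append(word)
    return " ".join(out)
```

With `len(word) // 2` kept, a two-letter word such as "of" becomes "o_" and a three-letter word such as "the" becomes "t__", matching the fire-engine example above.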

Chapter 3 (第三章) The Reliability of Testing (测试的信度)
• The definition of reliability
• The reliability coefficient
• How to make tests more reliable

What is reliability?
Reliability refers to the trustworthiness and stability of candidates' test results. In other words, if a group of students were given the same test twice at different times, the more similar the two sets of scores, the more reliable the test is said to be.

How to establish the reliability of a test?
It is possible to quantify the reliability of a test in the form of a reliability coefficient. Reliability coefficients allow us to compare the reliability of different tests. The ideal reliability coefficient is 1.
--- A test with a reliability coefficient of 1 is one which would give precisely the same results for a particular set of candidates regardless of when it happened to be administered.
--- A test which had a reliability coefficient of zero would give sets of results quite unconnected with each other.
It is between the two extremes of 1 and zero that genuine test reliability coefficients are to be found.

How high should we expect the coefficient to be for different types of language tests? Lado says:
Good vocabulary, structure and reading tests are usually in the 0.9 to 0.99 range, while auditory comprehension tests are more often in the 0.8 to 0.89 range. A reliability coefficient of 0.85 might be considered high for an oral production test but low for a reading test.

Ways to establish the reliability of a test:
1. Test-retest method
This means obtaining two sets of scores for comparison. The most obvious way of obtaining these is to get a group of subjects to take the same test twice.
2. Split-half method
In this method, the subjects take the test in the usual way, but each subject is given two scores. One score is for one half of the test, the second score is for the other half.
The two sets of scores are then used to obtain the reliability coefficient as if the whole test had been taken twice. In order for this method to work, it is necessary for the test to be split into two halves which are really equivalent, through the careful matching of items (in fact, where items in the test have been ordered in terms of difficulty, a split into odd-numbered items and even-numbered items may be adequate).
3. Parallel forms method (the alternate forms method)
This means using two different forms of the same test to measure a group of students in immediate succession or within a very short time. However, alternate forms are often simply not available.

How to make tests more reliable
As we have seen, there are two components of test reliability: the performance of candidates from occasion to occasion, and the reliability of the scoring. Here we will begin by suggesting ways of achieving consistent performances from candidates and then turn our attention to scorer reliability.

1. Take enough samples of behavior
Other things being equal, the more items that you have on a test, the more reliable that test will be.
e.g.
If we wanted to know how good an archer someone was, we wouldn't rely on the evidence of a single shot at the target. That one shot could be quite unrepresentative of their ability.
To be satisfied that we had a really reliable measure of the ability, we should want to see a large number of shots at the target. The same is true for language testing. It has been demonstrated empirically that the addition of further items will make a test more reliable. The additional items should be independent of each other and of existing items.
e.g.
A reading test asks the question: "Where did the thief hide the jewels?" If an additional item following that took the form "What was unusual about the hiding place?", would it make a full contribution to an increase in the reliability of the test? No. Why not? Because it is hardly possible for someone who got the original question wrong to get the supplementary question right. We do not get an additional sample of their behavior, so the reliability of our estimate of their ability is not increased. Each additional item should as far as possible represent a fresh start for the candidate.

Do you think the longer a test is, the more reliable it will be?
It is important to make a test long enough to achieve satisfactory reliability, but it should not be made so long that the candidates become so bored or tired that the behavior they exhibit becomes unrepresentative of their ability.

2. Do not allow candidates too much freedom
In general, candidates should not be given a choice, and the range over which possible answers might vary should be restricted. Compare the following writing tasks:
a) Write a composition on tourism.
b) Write a composition on tourism in this country.
c) Write a composition on how we might develop the tourist industry in this country.
d) Discuss the following measures intended to increase the number of foreign tourists coming to this country:
i) More/better advertising and/or information (where? what form should it take?)
ii) Improved facilities (hotels, transportation, communication, etc.)
iii) Training of personnel (guides, hotel managers, etc.)
The successive tasks impose more and more control over what is written. The fourth task is likely to be a much more reliable indicator of writing ability than the first. But in restricting the students we must be careful not to distort too much the task that we really want to see them perform.

3. Write unambiguous items
It is essential that candidates should not be presented with items whose meaning is not clear, or to which there is an acceptable answer which the test writer has not anticipated. The best way to arrive at unambiguous items is, having drafted them, to subject them to the critical scrutiny of colleagues, who should try as hard as they can to find alternative interpretations to the ones intended.

4. Provide clear and explicit instructions
This applies both to written and oral instructions. If it is possible for candidates to misinterpret what they are asked to do, then on some occasions some of them certainly will. A common fault of tests written for the students of a particular teaching institution is the supposition that the students all know what is intended by carelessly worded instructions. The frequency of the complaint that students are unintelligent, have been stupid, or have willfully misunderstood what they were asked to do reveals that the supposition is often unwarranted. Test writers should not rely on the students' powers of telepathy to elicit the desired behavior. The best means of avoiding problems is the use of colleagues to criticize drafts of instructions (including those which will be spoken). Spoken instructions should always be read from a prepared text in order to avoid introducing confusion.

5. Ensure that tests are well laid out and perfectly legible
Too often, institutional tests are badly typed (or handwritten), have too much text in too small a space, and are poorly reproduced.
As a result, students are faced with additional tasks which are not ones meant to measure their language ability. Their variable performance on these unwanted tasks will lower the reliability of the test.

6. Candidates should be familiar with format and testing techniques
If any aspect of a test is unfamiliar to candidates, they are likely to perform less well than they would do otherwise. For this reason, every effort must be made to ensure that all candidates have the opportunity to learn just what will be required of them. This may mean the distribution of sample tests (or of past test papers), or at least the provision of practice materials in the case of tests set within teaching institutions.

7. Provide uniform and non-distracting conditions of administration
The greater the differences between one administration of a test and another, the greater the differences one can expect between a candidate's performance on the two occasions. Great care should be taken to ensure uniformity.
e.g.
Timing should be specified and strictly adhered to; the acoustic conditions should be similar for all administrations of a listening test. Every precaution should be taken to maintain a quiet setting with no distracting sounds or movements.

How to obtain scorer reliability
1. Use items that permit scoring which is as objective as possible
This may appear to be a recommendation to use multiple choice items, which permit completely objective scoring. This is not intended. While it would be mistaken to say that multiple choice items are never appropriate, it is certainly true that there are many circumstances in which they are quite inappropriate. What is more, good multiple choice items are notoriously difficult to write and always require extensive pretesting. An alternative to multiple choice is the open-ended item which has a unique, possibly one-word, correct response which the candidates produce themselves.
This too should ensure objective scoring, but in practice problems with such matters as spelling, which may make a candidate's meaning unclear, often make demands on the scorer's judgment. The longer the required response, the greater the difficulties of this kind. One way of dealing with this is to structure the candidate's response by providing part of it.
e.g.
The open-ended question "What was different about the results?" may be designed to elicit the response "Success was closely associated with high motivation." This is likely to cause problems for scoring. Greater scorer reliability will probably be achieved if the question is followed by:
_____ was more closely associated with _____.

2. Make comparisons between candidates as direct as possible
This reinforces the suggestion already made that candidates should not be given a choice of items and that they should be limited in the way that they are allowed to respond. Scoring compositions all on one topic will be more reliable than if the candidates are allowed to choose from six topics, as has been the case in some well-known tests.

3. Provide a detailed scoring key
This should specify acceptable answers and assign points for partially correct responses. For high scorer reliability the key should be as detailed as possible in its assignment of points. It should be the outcome of efforts to anticipate all possible responses and should have been subjected to group criticism. (This advice applies only where responses can be classed as partially or totally 'correct', not in the case of compositions, for instance.)

4. Train scorers
This is especially important where scoring is more subjective. The scoring of compositions, for example, should not be assigned to anyone who has not learned to score accurately compositions from past administrations. After each administration, patterns of scoring should be analyzed. Individuals whose scoring deviates markedly and inconsistently from the norm should not be used again.

5. Agree acceptable responses and appropriate scores at the outset of scoring
A sample of scripts should be taken immediately after the administration of the test. Where there are compositions, archetypical representatives of different levels of ability should be selected. Only when all scorers are agreed on the scores to be given to these should real scoring begin. For short answer questions, the scorers should note any difficulties they have in assigning points (the key is unlikely to have anticipated every relevant response), and bring these to the attention of whoever is supervising that part of the scoring. Once a decision has been taken as to the points to be assigned, the supervisor should convey it to all the scorers concerned.

6. Identify candidates by number, not name
Scorers inevitably have expectations of candidates that they know. Except in purely objective testing, this will affect the way that they score. Studies have shown that even where the candidates are unknown to the scorers, the name on a script (or a photograph) will make a significant difference to the scores given.
e.g.
A scorer may be influenced by the gender or nationality of a name into making predictions which can affect the score given. The identification of candidates only by number will reduce such effects.

7. Employ multiple, independent scoring
As a general rule, and certainly where testing is subjective, all scripts should be scored by at least two independent scorers. Neither scorer should know how the other has scored a test paper. Scores should be recorded on separate score sheets and passed to a third, senior, colleague, who compares the two sets of scores and investigates discrepancies.

Reliability and validity
To be valid a test must provide consistently accurate measurements. It must therefore be reliable. A reliable test, however, may not be valid at all. For example, as a writing test we might require candidates to write down the translation equivalents of 500 words in their own language.
This could well be a reliable test; but it is unlikely to be a valid test of writing. In our efforts to make tests reliable, we must be wary of reducing their validity. This depends in part on what exactly we are trying to measure by setting the task. If we are interested in candidates' ability to structure a composition, then it would be hard to justify providing them with a structure in order to increase reliability. At the same time we would still try to restrict candidates in ways which would not render their performance on the task invalid.

There will always be some tension between reliability and validity. The tester has to balance gains in one against losses in the other.
