Online Examination Systems: Translation Collection


English for Electrical Engineering and Automation, Exam Translation Text: Electric Power Systems 电力系统 3.1

Section 1 Introduction 第一节 介绍

The modern society depends on the electricity supply more heavily than ever before. 现代社会比以往任何时候都更依赖电力供应。

It cannot be imagined what the world would be if the electricity supply were interrupted all over the world. 如果世界各地的电力供应都中断了,无法想象世界会变成什么样子。 Electric power systems (or electric energy systems), providing electricity to the modern society, have become indispensable components of the industrial world. 电力系统(或电能系统)为现代社会提供电力,已成为产业界不可缺少的组成部分。

The first complete electric power system (comprising a generator, cable, fuse, meter, and loads) was built by Thomas Edison – the historic Pearl Street Station in New York City, which began operation in September 1882. 世界上第一个完整的电力系统(包括发电机、电缆、熔断器、电表和负载)由托马斯·爱迪生建成,即1882年9月投入运行的、位于纽约市的具有历史意义的珍珠街发电站。

This was a DC system consisting of a steam-engine-driven DC generator supplying power to 59 customers within an area roughly 1.5 km in radius. The load, which consisted entirely of incandescent lamps, was supplied at 110 V through an underground cable system. 这是一个直流系统,由一台蒸汽机驱动的直流发电机向半径约1.5公里范围内的59家用户供电。负载全部为白炽灯,通过地下电缆系统以110伏电压供电。

Examination English Translation (Chinese-English Parallel Text)

English Translation (2013-6-5), Unit 1

In 1812, Napoleon Bonaparte, Emperor of the French, led his Grand Army into Russia. 1812年,法国皇帝拿破仑·波拿巴率领大军入侵俄罗斯。

He was prepared for the fierce resistance of the Russian people defending their homeland. 他准备好俄罗斯人民会为保卫祖国而奋勇抵抗。

He was prepared for the long march across Russian soil to Moscow, the capital city. 他准备好在俄罗斯广袤的国土上要经过长途跋涉才能进军首都莫斯科。

But he was not prepared for the devastating enemy that met him in Moscow: the raw, bitter, bleak Russian winter. 但他没有料到在莫斯科他会遭遇劲敌—俄罗斯阴冷凄苦的寒冬。

In 1941, Adolf Hitler, leader of Nazi Germany, launched an attack against the Soviet Union, as Russia was then called. 1941年,纳粹德国元首阿道夫·希特勒进攻当时被称作苏联的俄罗斯。

Hitler's military might was unequaled. 希特勒的军事实力堪称无敌。

His war machine had mowed down resistance in most of Europe. 他的战争机器扫除了欧洲绝大部分地区的抵抗。

Hitler expected a short campaign but, like Napoleon before him, was taught a painful lesson. 希特勒希望速战速决,但是,就像在他之前的拿破仑一样,他得到的是痛苦的教训。

Research on an Online Homework and Examination System for College English

Abstract: The system is developed with an up-to-date web development toolkit. An improved version of the classical genetic algorithm is used to optimize test-paper assembly, and simple subjective questions are graded automatically by combining dictionary-based backward maximum matching with semantic-similarity scoring. The system effectively improves the environment in which students learn English and reduces the teachers' workload as far as possible.
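The abstract mentions optimizing test-paper assembly with an improved genetic algorithm but gives no details in this excerpt. Purely as an illustration of the general idea, and not the authors' algorithm, the sketch below encodes a paper as a 0/1 vector over a toy question bank and scores it by how close the total mark and average difficulty come to the targets; every number in it is made up.

```python
# Plain genetic-algorithm sketch of test-paper assembly (illustrative only).
# A chromosome is a 0/1 vector marking which questions go onto the paper.
import random

QUESTIONS = [  # (score, difficulty) -- toy question bank, invented for the example
    (5, 0.3), (5, 0.5), (10, 0.6), (10, 0.4), (15, 0.7),
    (15, 0.5), (20, 0.8), (20, 0.6), (5, 0.2), (10, 0.9),
]
TARGET_SCORE, TARGET_DIFF = 60, 0.55

def fitness(chromo):
    picked = [q for q, bit in zip(QUESTIONS, chromo) if bit]
    if not picked:
        return -1e9
    total = sum(s for s, _ in picked)
    diff = sum(d for _, d in picked) / len(picked)
    # smaller deviation from the target total score and difficulty -> higher fitness
    return -abs(total - TARGET_SCORE) - 100 * abs(diff - TARGET_DIFF)

def evolve(pop_size=30, generations=200, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in QUESTIONS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(QUESTIONS))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    chosen = [i for i, bit in enumerate(best) if bit]
    print("chosen question indices:", chosen,
          "total score:", sum(QUESTIONS[i][0] for i in chosen))
```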

Keywords: online homework; online examination; Ajax; genetic algorithm; automatic grading. CLC number: TP315; document code: B; article ID: 1671-489X(2013)06-0052-03

English is one of the "bridge" tools through which people learn about and understand one another, and one of the basic skills that today's educated young generation should master.

As a compulsory public course for undergraduates, College English carries the responsibility of enabling students to master and use English proficiently.

Current classroom teaching, however, involves long teaching hours, heavy workloads and a strong practical component, and progress is hard to achieve in a short time, so effective measures must be taken to improve the environment in which students learn English and to further strengthen their ability to master and apply it.

An online homework and examination system for College English is one such effective measure.

1 Technical platform

The system is built with Ajax, a current set of web development techniques.

Ajax lets the web application send only the data that needs to be updated to the server through a client-side Ajax engine, receive the response, and process the returned data with client-side JavaScript, which avoids the "screen flash" caused by reloading the whole page.

At the same time, unlike the traditional exclusive (synchronous) request model of the web, Ajax sends requests asynchronously, and some operations are handled entirely on the client, so the time the server needs to process and respond is greatly reduced [1].

The two request models are shown in Figure 1 and Figure 2.
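Ajax itself lives in client-side JavaScript, so the following is only a loose illustration of the asynchronous-request idea, kept in Python like the other examples in this collection: a request for just the data that needs updating is fired off in the background, the main flow carries on, and only the small returned fragment is processed. The URL is hypothetical.

```python
# Illustration of the asynchronous-request pattern behind Ajax (not real Ajax code).
import json
from concurrent.futures import ThreadPoolExecutor
from urllib import request

def fetch_update(url):
    """Fetch only the piece of data that changed (e.g. the remaining exam time)."""
    with request.urlopen(url, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

def main():
    with ThreadPoolExecutor(max_workers=1) as pool:
        # hypothetical endpoint; the request runs in the background
        future = pool.submit(fetch_update, "http://example.invalid/exam/remaining_time")
        # ... the "page" keeps doing other work here instead of reloading ...
        try:
            update = future.result(timeout=10)   # process only the returned fragment
            print("partial update received:", update)
        except Exception as exc:                 # network errors in this toy example
            print("request failed:", exc)

if __name__ == "__main__":
    main()
```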

2 Main functions

2.1 Learning

The system provides listening, cloze, reading comprehension, translation, composition and other question types, so that students can use time outside class to review and consolidate what they have learned. It also supplies the correct answers and solution approaches for objective questions and reference answers for subjective ones, so that students can spot and correct their mistakes promptly.

2.2 Online homework

Through this module, teachers can assign students large numbers of objective questions as well as simple subjective items such as translation, and set the correct answers for the objective questions and the key words and phrases for the subjective ones. When a student finishes the homework and clicks Submit, the system grades the work automatically with reasonable accuracy and records the score.
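The paper grades simple subjective items by combining dictionary-based backward maximum matching with semantic similarity. The sketch below only illustrates the scoring idea: plain substring keyword matching and a character-overlap ratio stand in for segmentation and semantic similarity, so it should not be read as the authors' method.

```python
# Toy automatic grader for a simple subjective item (e.g. a translation):
# score = weighted mix of keyword coverage and a crude similarity to a reference answer.
def auto_grade(answer, keywords, reference, full_mark=10.0, w_keyword=0.6):
    # keyword coverage: fraction of required keywords found in the answer
    hit = sum(1 for kw in keywords if kw in answer)
    keyword_score = hit / len(keywords) if keywords else 0.0

    # crude "similarity": character overlap with the reference answer
    overlap = len(set(answer) & set(reference))
    similarity = overlap / len(set(reference)) if reference else 0.0

    return round(full_mark * (w_keyword * keyword_score + (1 - w_keyword) * similarity), 1)

if __name__ == "__main__":
    ref = "现代社会比以往任何时候都更依赖电力供应"
    kws = ["现代社会", "依赖", "电力供应"]
    student = "现代社会对电力供应的依赖超过了以往任何时候"
    print(auto_grade(student, kws, ref))  # prints a mark out of 10 for this toy input
```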

etest: Definition of the Term

etest is a widely used electronic examination system that plays an important role in education. As a web-based online examination platform, it manages and delivers all kinds of examinations electronically and has, to some extent, replaced traditional paper-based exams. etest offers powerful, flexible functions that bring many conveniences to students and teachers alike.

etest supports a variety of question formats, including multiple-choice, fill-in-the-blank and free-response questions. Students can choose the form of examination they need and complete it at the scheduled time and place. The platform provides a friendly, intuitive interface that lets students take exams with ease. In addition, etest can score answers automatically, grading and tallying results quickly and relieving teachers of much of the marking burden.

One of etest's advantages is convenience. Traditional paper exams consume large amounts of material and human resources, for example printing papers, arranging venues and assigning invigilators. etest needs no paper and no traditional examination setting; with just a computer and a network connection, students can sit an exam at any time and in any place. This flexibility greatly improves the convenience of taking exams and better suits the learning needs of modern society.

etest is also efficient. In a traditional exam, collecting and sorting students' answer sheets is a tedious process, whereas etest collects and stores answers instantly, reduces manual work and can return results immediately. This matters to both students and teachers: students learn their scores promptly, and teachers can spot problems in time and give targeted tutoring.

etest also supports personalized learning and assessment. Traditional paper exams usually measure only the range of knowledge a student has mastered, but etest, with its variety of question types and technical means, can assess learning more comprehensively. By setting questions of different difficulty and complexity, it can gauge a student's ability level and capacity to apply knowledge, and it can provide personalized learning resources and suggestions based on each student's strengths and weaknesses, helping students improve their results.

Nevertheless, etest also faces problems and challenges. First, some students may be unfamiliar with computers and the internet and need extra training and guidance. Second, security is a key concern, for example preventing cheating and protecting student privacy.

iTEST College English Testing and Training System: Product Overview

The FLTRP iTEST College English Testing and Training System (hereafter iTEST) is a comprehensive test-management platform that provides universities with English item-bank resources and online assessment services.

The platform offers high-quality mock and basic-training item banks for CET-4/6, TEM-4/8, the postgraduate English entrance examination and other tests. It supports students in basic vocabulary and grammar training, dedicated practice of listening, speaking, reading, writing and translation, and self-scheduled mock exams; it supports teachers in personalized teaching management such as automatic paper assembly, marking and score-analysis statistics; and it supports schools in organizing and running large-scale English tests. By modernizing these processes it raises examination efficiency, standardizes examination administration and thereby improves the overall quality of College English teaching and assessment.

iTEST combines testing, self-directed training, teaching and research in one platform and can meet the four main needs of university English teaching:

1. Standardized online English testing: school-wide and class-level standardized online tests, including computer- and network-based CET-4/6 mock exams and various teaching-evaluation tests.

2. Personalized English skills training: students can train vocabulary, grammar and the five skills according to their own needs, and take self-scheduled mock exams for CET-4/6 and other examinations.

3. Professional test analysis and teaching diagnosis: schools and teachers use professional test statistics and score-analysis reports to understand and evaluate students' English ability and improve teaching.

4. Research support: teachers collect test data as needed and carry out research on language testing.

System features. Among comparable College English testing platforms in China, iTEST is the only platform that provides professional statistical analysis of test data, the only one that combines testing, teaching and research, the first whose item bank has been adjusted through professional pretesting, and the first to fully simulate the internet-based CET-4. iTEST has the following characteristics. Authoritative: as the largest foreign-language publisher in China, the Foreign Language Teaching and Research Press (FLTRP) invests professional expertise and substantial resources to provide trustworthy, high-quality testing and training materials for College English; the selection criteria for texts and the item design are closely aligned with CET-4/6; and the National Research Centre for Foreign Language Education (中国外语教育研究中心) guides the planning, ensuring a reasonable and scientific system design.

Scientific: items undergo item analysis and are trialled and calibrated on large samples before actual use, ensuring appropriate difficulty and discrimination; the item bank is built on item response theory (IRT), with the aim of tailoring individualized testing solutions for each school.
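For reference, item response theory models the probability that an examinee of ability θ answers item i correctly; a common choice is the three-parameter logistic model. This is the standard textbook formula, not one taken from the iTEST documentation.

```latex
% Three-parameter logistic (3PL) IRT model (standard formula, shown for reference)
P_i(\theta) = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta - b_i)}}
% a_i: item discrimination, b_i: item difficulty,
% c_i: guessing parameter, \theta: examinee ability
```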

[Computer Science Literature Translation] Evaluation of an Authentic Examination System for Programming Courses

Evaluation of an Authentic Examination System (AES)for Programming CoursesTorbj?rn Jonsson, Pouria Loghmani and Simin Nadjm-TehraniDepartment of Computer and Information ScienceLink?ping University, Sweden{torjo,poulo,simin}@ida.liu.seAbstractThis paper describes our experience with an authentic examination system for programming courses. We briefly describe the architecture of the system, and present results of evaluating the system in real examination situations. Some of the factors studied in detail are the on-line interactions between the students and examiners, the response times and their effects on the pressure experienced by student, the acceptance of the method among the students, and whether the examination form is gender-neutral. IntroductionAs experienced teachers in programming courses we have noticed the drawbacks in the traditional examination form used in programming courses. The students learn to program via laboratory exercises, but the final evaluation of their abilities and the grading of the examination are in a form that uses paper and pen instead of computers. Considering that the student will never use this mode for producing a program through the professional life, we consider this to be not a suitable method.At the Department of Computer Science at Link?ping University 12 fundamental programming courses for approximately 1000 students in different educational programs are taught annually. This paper deals with a new pedagogical view in these programming courses, which can be applied to any programming language, type of student and educational program. The idea is based on extensive studies around different examination forms, where individual grading, efficient and useful feedback and the authenticity of the examination form are used as basic criteria for the choice of examination method. We believe that the choice of method together with the added efficiency in the assessment process improves the quality of our study programmes. In particular, we believe that it will change the examination process from a summative to a normative assessment occasion [1].For a number of years we have experimented with testing the students via computer-aided examinations in some pilot courses - an authentic examination form for this type of course. However, this examination form has not become more widespread due to insufficient support for thecomputer environment necessary for this kind of examination. During the past year a new authentic examination system (AES) has been developed, where all the students and the examining teachers are connected to the same system. The process, including communication and grading, is supported by this environment. In this paper we describe the examination system and our initial evaluations of this system in a number of relatively large examination sessions. The courses in question covered programming in Ada and were taken by first and second year students.During the past year we have evaluated the AES. The instruments used for the evaluation consisted of questionnaires filled by 231 students over a period of 3 months and 4examinations.The paper is organised as follows. In section 1 we describe why the type of examination we propose is the most appropriate for programming courses and compare to some related systems. Section 2 includes a brief technical description of the examination systems, including its architectural design. 
In section 3 we describe how the computer system, that manages the examination process on-line, has to be augmented by rules set up in each particular course. Section 4 covers our evaluation methods and is followed by evaluation results in section 5. Section 6 concludes the paper.1 Examination formsEvery examination method has specific characteristics that make it more or less appropriate to a particular course setting. H?kan Oswaldsson studied the range of possible examination forms for a typical programming course prior to the development of the current examination system in our department [5]. While several modes of examination can be considered as effective means for enhanced learning (e.g. home assignments, oral examinations following a design assignment, etc), there are not many examination types that combine the need for a summative assessment, with adequate feedback to induce learning. Combined with the large number of students that we are currently teaching, design of an ideal examination setting is a truly challenging task.The work by Dawson-Howe is an early attempt to bring computer support into the process of programming assignment evaluation and administration [2]. The need for automated examination systems has become more pertinent during the late 90's with the advent of distance and life long learning. For example, at the Open University in UK there have been attempts to exchange student assignments , and their (subsequent) correction and asses sment by examiners via MS Word documents [8]. However, the available reports (e.g. the work by Price and Petre) concentrate on the ease of administration for course assignment and grading, rather than the pedagogical feedback in an on-line authentic examination. In recent years several authors report on automatic assessment systems, mostly concentrating on presentation of the technical aspects of the system and the results of the students in terms of grading [4, 5, 7, 8]. While we share the aspiration of these research teams and conduct similar studies, our focus has been on the formal evaluation of how the students perceived the examination environment. In addition we have studied how they were affected by factors specific to authentic examinations, how the system performance and the examiners' on-line behaviour affects the perceived load on the student, and other such aspects.Student accountsTeacher accountsExamsExamination ProcessingMessagingStatisticsThe AES design is divided into multiple tiers: the Client tier, the Middle tier (consisting of one or more sub-tiers), and the Backend tier (see figure 2.1). Partitioning the design into tiers allows us to choose the appropriate technology for a given situation. Multiple technologies can even be used to provide the same service in different situations. Forexample, HTML pages, JSP pages, and stand-alone applications can all be used in the client tier.Enterprise Beans MessageDrivenBeansClient Tier Middle Tier Backend Tier2 Technical description of the AESAES has been developed using the J2EE platform. This represents a single standard for implementing and deploying complex enterprise applications. 
Having been designed through an open process, J2EE meets a wide range of enterprise application requirements, including distribution-specific mechanisms such as messaging system, scalability and modularity.The clients are based on the Model-View-Controller (MVC) application architecture, which separates three distinct forms of functionality within the application: The Model represents the structure of the data in the application, as well as application-specific operation on data.The View accesses data from the model and specifies how that data should be presented. Views in the AES consist of stand-alone applications that provide view functionality.The Controller translates user actions on the model and selects the appropriate view based on user preferences.The AES is designed as a set of loosely coupled modules, which are tightly coupled internally. Grouping functionality into modules provides integration between classes that cooperate, yet decouples classes that refer to each other occasionally. Modular design supports the design goal that software will be reusable. Each module has an interface that defines the module's functional requirements and provides a place where later components may be integrated. The AES includes modules for:Figure 2.1: The AES design.Each of the three tiers plays a specific role in the design.The Client tier is responsible for presenting data to the user, interacting with the user, and communicating with the other tiers of the system. In this case the Client tier is the only part of the system visible to the user. The AES Client tier consists mainly of a stand-alone application that communicates with the other tiers through well-defined interfaces. A message-oriented approach based on JMS (Java Messaging System) has been chosen to take care of the communication between the Client tier and the Middle tier.The Middle tier is responsible for any processing involving Enterprise JavaBeans. Enterprise JavaBeans are software components that extend servers to perform application specific functionality. The interface between these components and their containers is defined in the Enterprise JavaBeans specification. The containers provide services to the Enterprise JavaBeans instances they contain, such as controlling transactions, managing security, thread or other resource pooling, and handling persistence, among other high-level system tasks.The Backend tier is the system information infrastructure. This tier includes one or more relational database management systems and potentially other information assets thatcould be useful, e.g. the central university course results administration system (LADOK). The EIS tier also enforces security and offers scalability. The Backend tier provides a layer of software that maps existing data andapplication resources into the design of AES in an implementation-neutral way.The system is separated into five different functional layers, each with its own responsibilities and its own API. These layers are physically split across the three different tiers. The persistence layer, for example, provides the mechanisms necessary to permanently save object state.It provides basic CRUD (create, read, update, delete) services and also deals with the object-to-relational mapping issues. This leads to a more flexible and maintainable system, e.g. layers can be changed with no effect on other layers, as long as the API remains constant.3 Examination set-upThe examination system is only one part of the examination process. 
The second part is the set-up (the rules) we have for the students. We have tried a few set-ups over a number of years (using a prototype for the system for 5-6 years).3.1 The first set-upThe first version allowed the students to write the programs using a computer instead of writing on paper. We found this method to be an improvement because we did not have to read "illegible" texts and the submitted solutions could be tested afterwards. Grades were based on the number of correctly solved exercises.A problem with this set -up was that all the grading still had to be done after the exam was finished. Most of the students waited to send in the solutions until the last minute of the exam.3.2 The second set-upOur intention was to have an examination where the students should have a response from the examiner(s) within a few minutes and where grades were given to the students when they left the exam. We also intended to provide the student with the possibility of getting a response for each exercise within a few minutes, so they could correct a nearly correct solution.The second set-up (which we use today) is based on both number of correctly solved exercises and the amount of time taken to solve them. A number of deadlines are given. If the student wants a high grade he/she has to solve a number of exercises within a pre-specified time limit.The current examination process follows a few steps:1. The student sends an examination request for an exercise to the examiner(s).2. The examiners can return one of the following results.Passed - the solution is correct.Incomplete - the solution has errors, and must be corrected. It's possible to make a new attempt later.·Fail - the solution is incorrect and the student is not allowed to continue to work on this exercise.3. Every examination attempt and the result will contribute to the final exam grade, and the student is informed of his/her current grade. If the student submits a new examination request on an additional exercise he/she can reach a higher grade.This examination process is built into our current AES, but the rules (time limits etc.) can be changed for separate courses. This makes the system flexible.Time limits and gradingIn the courses this system was tested there were three exercises in each exam and the requirements for different grades were:For the grade 5 (excellent) the student must complete:o 3 exercises correct in 3 hours oro 2 exercises correct in 2 hoursFor the grade 4 (very good) the student must complete:o 2 exercises correct in 3 hours oro 1 exercise correct in 1.5 hoursFor the grade 3 (passed) the student must complete:o 1 exercise correct in 4 hoursThe above set-up together with the AES support gives us the opportunity to grade the students during the exam. Students who have solved an exercise are informed of the grade they have reached. If they are satisfied with that grade they can leave the exam (many students leave after one to two hours when they have grade 4 or 5).Student q uestionsIn an ordinary computer-aided exam, a number of questions are submitted by the students , where the answer can either be classified as personal or as interesting for all students. The examiner can decide if he/she will send the answer to the whole group of students or just to a specific student. The number of questions seems to be relatively constant during the exam (approximately 2-5 questions per 5 minutes). 
Most questions are sent in during the beginning of the exam, which can be explained by the fact that the students ask about specific things pertaining to the exercises and that there are more students in the beginning of the exam.Submission/approval attemptsIn an ordinary computer-aided exam we have a large number of examination requests from the students. As we can see in figure 3.1 we have a relatively high frequency in the period from 30 minutes to 3 hours. After that, most of the students leave (they can't get a higher grade than 3 after that time).Around the deadlines we can see that the examination attempts appear more often, but not significantly more often. Still, the increase of examination requests leads to more work for the examiners. This can result in an increase in the response time (waiting time for the student).4 Evaluation methodsThe development of the current system started in summer 2001 and continued through winter 2001/2002. When we began testing this system we wanted as a test example a course with a large number of students. One of our introductory courses in programming has around 270 students each year, so that was our first choice. Approximately 180 of these students are Industrial Management Engineering students and the rest are Technical Biology students. Our statistics are based on their first examination in this course, which took place in March 2002.We also used a retake exam in this course to do a new study with a new set of questions. This evaluation was done in May 2002.In these two studies, students filled in questionnaires directly after the exam. The finalquestionnaire had two parts. The first part was mainly questions where answers are in free text format. The second part included questions with scaled answers (grade on to five, disagree - agree, worse - better). The first part was used in three evaluations. The more extensive questionnaire with two parts was used only for the last evaluation (i.e. for the two last exams). The appendix shows the final questionnaire.Both types of questionnaires were anonymous and the questionnaires were filled in after the grading was done for the exams. The students had already received their grades when they filled in the questionnaires. We believe that this provides a measure of objectivity on the student side.We also used the log files from the AES for the exams to get statistical trends about grades, gender, response times for questions respectively approval attempts among others (see section 5).5 Evaluation resultsUnfortunately almost all students had no previous experience with paper based programming examinations, so the replies could not be used for comparisons with that examination form. However, we used the response to study other questions in detail (specially the part related to the time/stress factor).First, how often the students sent in a request (questions or approval attempts), and how long the time for a response was? Secondly, how well was the examination system accepted by the students? A third question was a comparison by grades between the genders.The response rate of the questionnaires was quite good. 
We had four exams during the evaluation period with the following response rates:Exam 1: 76 answers of 112 students (67.8 %) Exam 2: 87 answers of 105 students (82.8 %) Exam 3: 50 answers of 66 students (75.7 %) Exam 4: 18 answers of 22 students (81.8 %)The first three questionnaires were done at the first examination occasion for the students and the fourth one was done in a retake examination where all the students were students with no grade from an earlier exam.5.1 Events during an examinationThe number of events, questions and examination requests , spread over an examination session of 4 hours can be an interesting metric to look at. The major negative factor that was indicated in the questionnaires was the feeling of time pressure or stress. 17% of the free text answers had some connection to this factor. From a technical point of view we were also interested in finding that the capacity of the system was adequate. Therefore we have summarised the number of interactions taking place in every exam.Figure 5.1: Student events (questions and examination requests) during an exam.In figure 5.1 we can see that the number of questions is higher in the beginning of an examination, but we have question events over the whole examination time.The number of examination requests is relative to time. There were a few requests in the first half an hour and that the first two hours are busy for the examiners. The request rate is quite high when we reach the time limits for the grades (especially the 4 hour limit). From a technical point of view the system performance under the above loads has been adequate. To study the student experience of stress due to waiting time we havecalculated the average waiting for the answer to a question and an approval of an examination request respectively. W e have also looked at the extreme values.It turns out that for a question the shortest answering time was 30 seconds and the longest 6 minutes. The corresponding figures for approval attempts were 1 minute and 10 minutes respectively. The first type of interaction took 2 minutes and 42 seconds, and the second type 3 minutes and 31 seconds on average for one particular exam.The student responses , from the questionnaires, on this amount of time is that it is acceptable to wait a minute ortwo for an answer on a question and that a few minutes waiting for a result on an examination request is all right.Based on this view we conclude that waiting time is not a contributing factor to the stress experienced by the students.5.2 Acceptance by studentsThe student responses indicated an overwhelming support for this examination form.94.5% of the students who returned the questionnaire preferred this examination form toa traditional paper and pencil exam.Many free text answers referred to the examination form being close to a realistic scenario and were positive about the possibility to compile and test (a total number of 94 such comments).In the exam where quantitative questions about the examination form were added to the questionnaire, 16 of 17 students answered that this form was closer to a realistic situation compared to other examination forms. The majority of students considered themselves to be anonymous with respect to examiners during the exam.5.3 Grade comparisons (male-female)We have made a comparison of grades in the first examination between the male end female groups of the students in a course. 
The numbers we use are normalised so we can compare the figures directly.As shown in figure 5.2, the grades for the female students are on average lower than the grades for the male students . We were interested in this metric to find out whether the Figure 5.2: Grades related to gender of students.examination form is gender-neutral. As it turns out we cannot draw this conclusion. However, one possible explanation is that most of the students who have programmed prior to taking the course are male.Another aspect of the differences in grades could be that we have two different groups of students in this course where the group with a large proportion of female students (Technical Biology) reads the course during their first year and the other group is reading the course during theirsecond year. The students in the second year are likely to have better study habits and are more experienced and have more theoretical knowledge.A third aspect is that the group with a higher ratio of female students only has this programming course as obligatory in the whole educational programme. The other group of students has more courses in programming afterwards and are possibly more motivated to study and reach higher grades in this course.This question is an obvious point for further study.6 Conclusions and ongoing workThis paper has summarized an early experience with an authentic examination system for programming courses. The current formal evaluations of the examination system and the examination setting has provided us with a number of insights on the effectiveness of the system as a tool for learning and for assessment. While the initial evaluations are positive and point towards the success of this examination method for majority of the students, the input from the students opens up new directions for research, and new ideas on how to improve the environment.Future directions of work are the integration of a new automatic correction system into our on-line and off-line student evaluations, and the exposing of the environment to larger number of students, specially those that already have paper and pencil exam exp eriences.References[1]J. Biggs, Teaching for Quality Learning at University, Open University Press, 1999.[2]K.M. Dawson-Howe. Automatic Submission and Administration of Programming Assignments. SIGCSE Bulletin, 27(4), December 1995.[3]J. English, Experience with a computer -Assisted Formal Programming Examination, Proceedings of ITiCSE 2002, p51-54.[4]C. Higgins, P. Symeonidis, and A. Tsintsifas, The Marking System for CourseMaster, Proceedings of ITiCSE 2002, p46-50.[5]L. Malmi, A. Korhonen, and R. Saikkonen, Experiences in Automatic Assessment on Mass Courses and Issues for Designing Virtual Courses, Proceedings of ITiCSE 2002, p 55-59.[6]H. Oswaldsson, Development of an examination system. Masters Thesis LiTH-IDA-Ex-00/73, Dept. of Computer and Information Science, Link?ping University, September 2000.[7]A. Pardo, A Multi-Agent Platform for Automatic Assignment Management, Proceedings of ITiCSE 2002, p60 -p64.[8]B. Price and M. Petre, Teaching Programming through Paperless Assignments: an empirical evaluation of instructor feedback. 
Technical report, Open University, January 2001.Appendix: Example questionnairePrevious exam typesHave you ever taken a written exam in a programming course before?Is this the first time you have taken a computer-based exam?Would you prefer a regular written exam instead?Classify comparison to traditional paper exams: Worse Equal BetterPossibility to ask questions during the exam Possibility to redo a question during the exam Possibility to learn something during the exam Anonymity of exam correction Testing critical thinking, not just memorisation Possibility to test and evaluate your own programs Disturbances during the examI can show my best side in theoretical questions I can show my best side in practial questions The examination form is not gender-biasedThe exam time in relation to the number of problems Stress level before the examStress level during the examStress level after the examUnsure as to if you have correctly answered a problem or notUnsure about what grade you have receivedThe exam environment is similar to a real situation The exam generally reflects the course contentAbout computer-based exam: Disagree - Agree (grade 1-5)The exam form made it easy to ask questions during the examI received answers to my questions quicklyThe result from my solution submission was returned quicklyI could see immediately whether or not I had passed the examI learned something about the subject during the courseI felt my anonymity was ensuredTesting my program helped me in solving the exam questionsThe responses I received after asking a question and/or submitting a solution helped me understand the problem betterThe exam rules: Disagree - Agree (grade 1-5)I felt relaxed before the exam I felt relaxed during the exam I felt relaxed after the exam It was helpful to be allowed to correct rejected solutions during the examIt was helpful to get my test result back immediately The cutoff for a 3 (1 correct solution, 4 h) is acceptableThe cutoff for a 4 (1 correct solution, 1.5h / 2 correct solutions, 3 h) is acceptableThe cutoff for a 5 (2 correct solutions, 2h / 3 correct solutions, 3 h) is acceptableIt was helpful to have access to the course literature during the examThe interface: Hard - Easy (grade 1-5)What was it like to communicate using the interface? How did you like the presentation of grades etc.? What was it like to ask a question?What was it like to submit a solution?Stability (classify within the following intervals):How many times did you need help in understanding how the system works? >9 4-9 0-3How many times did a system-interaction window accidently get lost? >9 4-9 0-3How many times did the system crash? >2 1-2 0Miscellaneous (Free text answers)Is there any information you think is missing from the exam system? Please explain. Other comments对程序课程的一个可靠的考试系统的评估Torbjörn Jonsson, Pouria Loghmani and Simin Nadjm-TehraniDepartment of Computer and Information ScienceLinköping University, Sweden{torjo,poulo,simin}@ida.liu.se摘要:本文是我们对程序课程的一个可靠的考试系统的经验描述。
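Returning to section 3.2 of the paper above: the time limits and grade thresholds it lists (grade 5 for 3 exercises in 3 h or 2 in 2 h, grade 4 for 2 in 3 h or 1 in 1.5 h, grade 3 for 1 in 4 h) translate directly into a small function. The sketch below simply encodes those published rules; it is not code from the AES.

```python
# The time-limit/grade rules described in section 3.2, encoded directly.
# Hours are counted from the start of the exam.
def exam_grade(correct_exercises, hours_elapsed):
    """Return the grade (5/4/3) reached so far, or 0 if no passing grade yet."""
    if (correct_exercises >= 3 and hours_elapsed <= 3.0) or \
       (correct_exercises >= 2 and hours_elapsed <= 2.0):
        return 5
    if (correct_exercises >= 2 and hours_elapsed <= 3.0) or \
       (correct_exercises >= 1 and hours_elapsed <= 1.5):
        return 4
    if correct_exercises >= 1 and hours_elapsed <= 4.0:
        return 3
    return 0

if __name__ == "__main__":
    print(exam_grade(2, 1.8))  # 5: two exercises solved within two hours
    print(exam_grade(1, 3.5))  # 3: one exercise solved within four hours
```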

Besides "exam": Other English Words for Tests and Examinations

We easily get confused by the two words exam and examination and keep asking ourselves: what exactly is the difference between them? What is the difference between exam and examination? In fact, exam is simply the shortened form of examination and is common in spoken English.

Exam (n., short for examination) usually refers to a formal test held to check students' knowledge or level; occasionally it can also refer to an impromptu test.

E.g. Our final exam is approaching. 我们就要期末考试了。

The exam is a breeze. 考试很容易过。

(As a noun, breeze can mean not only a light wind but also something very easy to do, or a minor fuss.) At the final exam time, there is a subdued atmosphere in the school. 在期末考试期间,学校里有一种压抑的气氛。

Don't worry, you will pass the exam. 别担心,你会通过考试的。

Test can be used as both a noun and a verb and generally refers to a test with a specific purpose.

E.g. You don't need to worry too much about it; it's just a test of your eyesight. 你不必太担心,这只是对你视力的一个小检查。

He didn't test well. 他考得不好。

Quiz is a noun meaning a short test or a question-and-answer competition; it usually refers to a brief, informal oral or written test that can be given at any time.

E.g. We are having a quiz game. 我们正在进行问答比赛。

I like watching the quiz shows. 我喜欢看智力竞赛节目。

Can you now "get" the usage of these words and the differences between them? As a popular saying goes, final exams generally star the top students, with the strugglers making guest appearances.

Chinese-to-English Online Translation

Introduction

Online Chinese-to-English translation tools play an important role in today's information age. As globalization advances, English grows ever more important, and more and more people need to translate Chinese into English for study, work, travel and other purposes. Online translation tools can convert Chinese into English quickly and accurately, offering users a convenient translation service.

This article introduces the principles, techniques and applications of Chinese-to-English online translation, evaluates and compares the current mainstream online translation tools, discusses their strengths and weaknesses, and offers suggestions and caveats to help users choose a suitable tool and improve translation quality and efficiency.

Principles

Online Chinese-to-English translation tools are based on machine translation, the technology of automatically translating one natural language into another by computer. Machine translation uses large amounts of linguistic knowledge and rules, together with statistical learning methods, to convert source-language text into target-language text.

Specifically, Chinese-to-English online translation involves the following steps (a minimal sketch of the whole pipeline follows this list):

1. Word segmentation: split the Chinese text into individual words for later processing.
2. Part-of-speech tagging: label each word's part of speech to capture syntactic and semantic information.
3. Language modelling: build a language model to predict the plausibility and fluency of sentences.
4. Translation rules: establish Chinese-English translation rules, including word, phrase and sentence alignment.
5. Translation generation: generate the target-language text from the rules and the language model.
6. Merging: combine the generated fragments to improve accuracy and fluency.
7. Post-processing: correct and polish the generated translation to raise its quality.
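As promised above, here is a minimal skeleton of that pipeline. The stage functions are hypothetical stand-ins with toy behaviour (character-level segmentation, word-for-word dictionary lookup); no real machine translation engine or library is involved.

```python
# Toy skeleton of the translation pipeline described above (illustrative only).
def segment(text):
    """Word segmentation: naively treat each character as a 'word' (placeholder)."""
    return list(text)

def pos_tag(words):
    """Part-of-speech tagging: placeholder that tags everything as 'x'."""
    return [(w, "x") for w in words]

def translate_words(tagged, lexicon):
    """Translation rules: dictionary lookup word by word (real systems align phrases)."""
    return [lexicon.get(w, w) for w, _tag in tagged]

def generate(tokens):
    """Generation + merging: join tokens; real systems consult a language model."""
    return " ".join(t for t in tokens if t.strip())

def post_process(sentence):
    """Post-processing: simple capitalization and whitespace fix."""
    sentence = sentence.strip()
    return sentence[:1].upper() + sentence[1:] if sentence else sentence

def translate(text, lexicon):
    return post_process(generate(translate_words(pos_tag(segment(text)), lexicon)))

if __name__ == "__main__":
    toy_lexicon = {"我": "I", "爱": "love", "你": "you"}  # toy data only
    print(translate("我爱你", toy_lexicon))  # -> "I love you"
```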

Techniques

Online Chinese-to-English translation tools use a range of techniques to achieve fast, accurate translation.

Word segmentation. Chinese has no explicit word boundaries, so segmentation is the foundation of Chinese text processing. Traditional segmentation methods use dictionary matching, splitting the text against a predefined dictionary. In recent years, segmentation methods based on statistical learning and deep learning have gained ground; by analysing word frequencies and contextual relationships in large corpora, they achieve accurate segmentation.
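As an illustration of the dictionary-matching family of methods just described, here is a minimal backward (right-to-left) maximum-matching segmenter. The tiny dictionary and the maximum word length are illustrative assumptions only.

```python
# Backward maximum-matching segmentation: repeatedly take the longest dictionary
# word that ends at the current position, scanning from the end of the sentence.
def backward_max_match(text, dictionary, max_len=4):
    words = []
    end = len(text)
    while end > 0:
        match = None
        for size in range(min(max_len, end), 0, -1):   # longest candidate first
            candidate = text[end - size:end]
            if size == 1 or candidate in dictionary:   # single chars always accepted
                match = candidate
                break
        words.append(match)
        end -= len(match)
    words.reverse()  # words were collected from the end of the sentence
    return words

if __name__ == "__main__":
    demo_dict = {"在线", "考试", "系统", "翻译"}  # toy dictionary
    print(backward_max_match("在线考试系统翻译", demo_dict))
    # -> ['在线', '考试', '系统', '翻译']
```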

Part-of-speech tagging. POS tagging assigns a part of speech to each word according to context and syntactic rules. Chinese POS tagging is more complicated than English POS tagging, because Chinese parts of speech are varied and highly changeable.

Online Examination System Website Design (excerpt)

Abstract

The system is an online examination system developed in C# on a B/S (browser/server) three-tier architecture with a SQL Server 2008 database. The first layer is the client layer, located in the user's web browser: users access the web server from the browser, and the required pages are displayed there. The second layer is the application layer, a web server extended with application functionality; it accepts user requests, runs the application code, connects to the database, submits SQL requests for data processing, waits for the database server's results and hands them to the web server, which returns them to the client. The third layer is the database layer, the database server; it accepts the database operations requested by the web server, performs queries, modifications and updates, and returns the results to the web server.

The system is divided into three modules: system management, user registration and examination. The system management module maintains questions and papers (adding, modifying and deleting them) and keeps the system running; the user registration module registers student users, who must register before they are eligible to take an exam; in the examination module, a client logs in with an account and password, and the system randomly selects questions according to the paper structure defined in advance by the administrator to assemble the paper.

The system improves teaching quality and the means of assessing students by shifting from traditional paper examinations to paperless, network-based examinations.

5. System information management: entry of the information that the management functions above call on frequently, which simplifies the other management operations and ensures the accuracy of the entered data.

1.2 Database E-R diagram

Figure 1.1: database E-R diagram

1.3 Database table structures

Based on the system's basic functions and the users involved, a number of database tables were created; their structures are listed below.

Table 1-1  UserInfo (system user information)
  Field           Type      Length  Description
  UserID          int       4       primary key
  LoginID         varchar   -       login ID
  CertNum         varchar   20      certificate (ID) number
  Telephone       varchar   20      telephone
  LoginIP         varchar   20      login IP address
  UserType        int       4       user type
  UserState       int       4       user state
  JudgeUser       int       4       judge user
  JudgeTestType   int       4       judge test type
  RoleMenu        int       4       role menu
  CreateUserID    int       4       creating user's account
  CreateDate      datetime  8       creation time

Table 1-2  UserScore (student scores)
  Field           Type      Length  Description
  UserScoreID     int       4       primary key
  UserID          int       4       account
  PaperID         -         -       -
  -               real      4       item (sub-question) score
  PassState       int       4       pass state
  TotalMark       real      4       total score

Table 1-3  PaperInfo (examination paper information)
  Field           Type      Length  Description
  PaperID         int       4       primary key
  PaperName       varchar   50      paper name
  PaperType       int       4       paper type
  ProduceWay      int       4       question-selection method
  ShowModal       int       4       display mode
  ExamTime        int       4       time allowed for answering
  StartTime       datetime  8       start time
  EndTime         datetime  8       end time
  PaperMark       int       4       total score of the paper
  PassMark        int       4       pass score
  MarkDefine      int       4       score definition
  RepeatExam      int       4       retake allowed
  FillAutoGrade   int       4       allow self-grading of fill-in-the-blank items
  SeeResult       int       4       allow viewing of results
  AutoSave        int       4       allow automatic saving
  ExamAccount     int       4       examination account
  ManagerAccount  int       4       manager account
  TestCount       int       4       test count
  AutoJudge       int       4       automatic grading
  CreateWay       int       4       creation method
  CreateUserID    int       4       creating user's ID
  CreateDate      datetime  8       creation time

Table 1-4  RubricInfo (question information)
  Field           Type      Length  Description
  RubricID        -         -       -

Table 1-5  SubjectInfo (subject information)
  Field           Type      Length  Description
  SubjectID       int       4       primary key
  SubjectName     varchar   50      subject name
  BrowAccount     int       4       browse account
  SubjectMemo     -         -       -
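As a concrete rendering of the table structures listed above, the sketch below declares the two best-preserved tables, UserInfo and UserScore, as SQL executed from Python. Column names and sizes follow the excerpt; the key and foreign-key constraints, the assumed VARCHAR(20) for LoginID and the use of SQLite instead of the SQL Server 2008 named in the abstract are all assumptions made to keep the example self-contained.

```python
# Minimal, runnable sketch of two of the tables above as SQLite DDL driven from Python.
import sqlite3

DDL = """
CREATE TABLE IF NOT EXISTS UserInfo (
    UserID        INTEGER PRIMARY KEY,  -- primary key (constraint assumed)
    LoginID       VARCHAR(20),          -- login ID (length not in the excerpt; 20 assumed)
    CertNum       VARCHAR(20),          -- certificate (ID) number
    Telephone     VARCHAR(20),
    LoginIP       VARCHAR(20),
    UserType      INTEGER,
    UserState     INTEGER,
    RoleMenu      INTEGER,
    CreateUserID  INTEGER,
    CreateDate    DATETIME
);
CREATE TABLE IF NOT EXISTS UserScore (
    UserScoreID   INTEGER PRIMARY KEY,  -- primary key (constraint assumed)
    UserID        INTEGER,              -- student account
    PaperID       INTEGER,              -- paper taken (type assumed)
    PassState     INTEGER,
    TotalMark     REAL,                 -- total score
    FOREIGN KEY (UserID) REFERENCES UserInfo(UserID)
);
"""

if __name__ == "__main__":
    with sqlite3.connect("exam_demo.db") as conn:
        conn.executescript(DDL)
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        print("tables created:", tables)
```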
Conclusion

Having completed the website design for the online examination system, the key part of this project, I feel as though I have gone back over everything I learned before, and I am now much more familiar with each step of designing an online examination system website. During the graduation project I worked through every step myself; although I met many problems and difficulties, the cycle of encountering a problem, thinking it through and solving it is where I gained the most. Issues I had never noticed before all surfaced in this project, which trained my attention to detail, my patience and my perseverance. My supervisor's guidance also benefited me greatly: whether in writing up the theory or in the practical problems met during the project, the teacher analysed them for us in detail, which helped me combine theory with practice and think more soundly when designing the website. Learning has no limits; as long as I am willing to learn, this knowledge will continue to serve me well.
河北理工大学轻工学院