

Sparse group lasso and high dimensional multinomial classification


A C++ implementation of multinomial and logistic sparse group lasso regression is available as an R package. For our implementation, the time to compute the sparse group lasso solution is of the same order of magnitude as the time required by the multinomial lasso algorithm as implemented in the R package glmnet. The computation time of our implementation scales well with the problem size.

1.1. Sparse group lasso

Consider a convex, bounded below and twice continuously differentiable function f : Rⁿ → R. We say that β̂ ∈ Rⁿ is a sparse group lasso minimizer if it is a solution to the unconstrained convex optimization problem

minimize f(β) + λΦ(β). (1)
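The penalty Φ combines the lasso (ℓ1) and group lasso (ℓ2-per-group) norms. A minimal sketch of evaluating such a penalty in Python; the weighting of the two terms by a mixing parameter alpha follows the usual convention, and the per-group and per-parameter weights that a full implementation such as msgl carries are omitted here for brevity:

```python
import numpy as np

def sgl_penalty(beta, groups, lam, alpha):
    """Sparse group lasso penalty
    lam * (alpha * ||beta||_1 + (1 - alpha) * sum_J ||beta_J||_2),
    with alpha in [0, 1]: alpha = 1 gives the lasso, alpha = 0 the group lasso."""
    l1 = np.abs(beta).sum()                            # lasso term
    l2 = sum(np.linalg.norm(beta[g]) for g in groups)  # group lasso term
    return lam * (alpha * l1 + (1 - alpha) * l2)

# Two groups of two coefficients each; the second group is entirely zero,
# so it contributes nothing to either term.
beta = np.array([3.0, 4.0, 0.0, 0.0])
value = sgl_penalty(beta, groups=[[0, 1], [2, 3]], lam=1.0, alpha=0.5)
```

Because the group term is not differentiable at zero exactly when a whole group vanishes, the penalty induces sparsity both within and between groups, which is the point of the method.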
arXiv:1205.1245v2 [stat.ML] 6 Feb 2013
Abstract The sparse group lasso optimization problem is solved using a coordinate gradient descent algorithm. The algorithm is applicable to a broad class of convex loss functions. Convergence of the algorithm is established, and the algorithm is used to investigate the performance of the multinomial sparse group lasso classifier. On three different real data examples the multinomial sparse group lasso clearly outperforms the multinomial lasso in terms of achieved classification error rate and in terms of including fewer features for the classification. The run-time of our sparse group lasso implementation is of the same order of magnitude as the multinomial lasso algorithm implemented in the R package glmnet. Our implementation scales well with the problem size. One of the high dimensional examples considered is a 50 class classification problem with 10k features, which amounts to estimating 500k parameters. The implementation is available as the R package msgl. Keywords: Sparse group lasso, classification, high dimensional data analysis, coordinate gradient descent, penalized loss. 1. Introduction The sparse group lasso is a regularization method that combines the lasso [1] and the group lasso [2]. Friedman et al. [3] proposed a coordinate descent approach for the sparse group lasso optimization problem. Simon et al. [4] used a generalized gradient descent algorithm for the sparse group lasso and

Chapter 20: Processes with Deterministic Trends (Econometrics, Xi'an Jiaotong University, Li Qingnan)

X'X/T = T⁻¹ Σ_{t=1}^{T} x_t x_t',   X'ε/T = T⁻¹ Σ_{t=1}^{T} x_t ε_t
(i) {(x_t, ε_t)'} is an independent sequence;
(ii) (a) E(x_t ε_t) = 0; (b) E|X_ti ε_t|^{1+δ} < Δ < ∞ for some δ > 0, i = 1, 2, ..., k;
(iii) (a) M_T ≡ E(X'X/T) is positive definite; (b) E|X_ti|^{2(1+δ)} < Δ < ∞ for some δ > 0, i = 1, 2, ..., k.
X'X/T = T⁻¹ Σ_{t=1}^{T} x_t x_t',   X'ε/T = T⁻¹ Σ_{t=1}^{T} x_t ε_t
Theorem: In addition to (1), suppose that
(1) {(x_t', ε_t)'}, a (k+1)×1 vector, is an i.i.d. sequence;
(2) (a) E(x_t ε_t) = 0; (b) E|X_ti ε_t| < ∞, i = 1, 2, ..., k;
(3) (a) E|X_ti|² < ∞, i = 1, 2, ..., k; (b) M ≡ E(x_t x_t') is positive definite.
Then β̂ → β almost surely (a.s.).
Remark:
1. Assumption (2a) concerns the mean of the i.i.d. sequence (X_ti ε_t, i = 1, 2, ..., k); see Proposition 3.3 of White (2001, p. 32). Assumption (2b) ensures that its first moment exists.
2. Assumption (3a) guarantees that the first moment of (X_ti X_tj) exists, by the Cauchy-Schwarz inequality, and (3b) concerns the mean of the i.i.d. sequence (X_ti X_tj, i = 1, 2, ..., k; j = 1, 2, ..., k). Existence of the first moment is what is needed for the LLN for i.i.d. sequences. See p. 15 of Ch. 4.

Proof: From these assumptions, the LLN for i.i.d. sequences gives X'ε/T → 0 a.s. and X'X/T → M a.s. Therefore β̂ = β + (X'X/T)⁻¹(X'ε/T) → β + M⁻¹ · 0 = β a.s.
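The strong consistency asserted by the theorem is easy to check numerically. A minimal Monte Carlo sketch with standard normal regressors and errors (so that E(x_t ε_t) = 0 holds and M = E(x_t x_t') = I is positive definite):

```python
import numpy as np

# With i.i.d. (x_t, eps_t), E(x_t eps_t) = 0, and E(x_t x_t') positive
# definite, the OLS estimator should be close to beta for large T.
rng = np.random.default_rng(0)
T = 200_000                      # large sample, standing in for T -> infinity
beta = np.array([1.0, -2.0])     # true coefficients
X = rng.normal(size=(T, 2))      # i.i.d. regressors, M = I (positive definite)
eps = rng.normal(size=T)         # i.i.d. errors, independent of X
y = X @ beta + eps

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y
```

With T = 200,000 the sampling standard deviation of each coefficient is about 1/√T ≈ 0.0022, so beta_hat lands within a few thousandths of (1, -2).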

Auditing: An Integrated Approach (Arens, 12th edition, English), Chapter 15 Solutions Manual


Chapter 15: Audit Sampling for Tests of Controls and Substantive Tests of Transactions

Review Questions

15-1 A representative sample is one in which the characteristics of interest for the sample are approximately the same as for the population (that is, the sample accurately represents the total population). If the population contains significant misstatements, but the sample is practically free of misstatements, the sample is nonrepresentative, which is likely to result in an improper audit decision. The auditor can never know for sure whether he or she has a representative sample because the entire population is ordinarily not tested, but certain things, such as the use of random selection, can increase the likelihood of a representative sample.

15-2 Statistical sampling is the use of mathematical measurement techniques to calculate formal statistical results. The auditor therefore quantifies sampling risk when statistical sampling is used. In nonstatistical sampling, the auditor does not quantify sampling risk. Instead, conclusions are reached about populations on a more judgmental basis.

For both statistical and nonstatistical methods, the three main parts are:
1. Plan the sample
2. Select the sample and perform the tests
3. Evaluate the results

15-3 In replacement sampling, an element in the population can be included in the sample more than once if the random number corresponding to that element is selected more than once. In nonreplacement sampling, an element can be included only once. If the random number corresponding to an element is selected more than once, it is simply treated as a discard the second time. Although both selection approaches are consistent with sound statistical theory, auditors rarely use replacement sampling; it seems more intuitively satisfying to auditors to include an item only once.

15-4 A simple random sample is one in which every possible combination of elements in the population has an equal chance of selection. Two methods of simple random selection are use of a random number table, and use of the computer to generate random numbers. Auditors most often use the computer to generate random numbers because it saves time, reduces the likelihood of error, and provides automatic documentation of the sample selected.

15-5 In systematic sampling, the auditor calculates an interval and then methodically selects the items for the sample based on the size of the interval. The interval is set by dividing the population size by the number of sample items desired.

To select 35 numbers from a population of 1,750, the auditor divides 35 into 1,750 and gets an interval of 50. He or she then selects a random number between 0 and 49. Assume the auditor chooses 17. The first item is the number 17. The next is 67, then 117, 167, and so on.

The advantage of systematic sampling is its ease of use. In most populations a systematic sample can be drawn quickly, the approach automatically puts the numbers in sequential order, and documentation is easy.

A major problem with the use of systematic sampling is the possibility of bias. Because of the way systematic samples are selected, once the first item in the sample is selected, other items are chosen automatically. This causes no problems if the characteristics of interest, such as control deviations, are distributed randomly throughout the population; however, in many cases they are not. If all items of a certain type are processed at certain times of the month or with the use of certain document numbers, a systematically drawn sample has a higher likelihood of failing to obtain a representative sample.
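The interval arithmetic and selection described in 15-5 can be sketched in a few lines of Python:

```python
def systematic_sample(population_size, sample_size, random_start):
    """Select every k-th item, where k = population_size // sample_size,
    beginning at a random start chosen within the first interval."""
    interval = population_size // sample_size
    return [random_start + i * interval for i in range(sample_size)]

# Population of 1,750, sample of 35 -> interval 50; random start 17
# reproduces the sequence 17, 67, 117, 167, ... from the answer above.
items = systematic_sample(1750, 35, 17)
```

Note that the last selected item, 17 + 34 × 50 = 1,717, stays inside the population of 1,750, which is why the interval is computed with integer (floor) division.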
This shortcoming is sufficiently serious that some CPA firms prohibit the use of systematic sampling.

15-6 The purpose of using nonstatistical sampling for tests of controls and substantive tests of transactions is to estimate the proportion of items in a population containing a characteristic or attribute of interest. The auditor is ordinarily interested in determining internal control deviations or monetary misstatements for tests of controls and substantive tests of transactions.

15-7 A block sample is the selection of several items in sequence. Once the first item in the block is selected, the remainder of the block is chosen automatically. Thus, to select 5 blocks of 20 sales invoices, one would select one invoice and the block would be that invoice plus the next 19 entries. This procedure would be repeated 4 other times.

15-8 The terms below are defined as follows:

15-9 The sampling unit is the population item from which the auditor selects sample items. The major consideration in defining the sampling unit is making it consistent with the objectives of the audit tests. Thus, the definition of the population and the planned audit procedures usually dictate the appropriate sampling unit.

The sampling unit for verifying the occurrence of recorded sales would be the entries in the sales journal since this is the document the auditor wishes to validate. The sampling unit for testing the possibility of omitted sales is the shipping document from which sales are recorded because the failure to bill a shipment is the exception condition of interest to the auditor.

15-10 The tolerable exception rate (TER) represents the exception rate that the auditor will permit in the population and still be willing to use the assessed control risk and/or the amount of monetary misstatements in the transactions established during planning. TER is determined by choice of the auditor on the basis of his or her professional judgment.

The computed upper exception rate (CUER) is the highest estimated exception rate in the population, at a given ARACR. For nonstatistical sampling, CUER is determined by adding an estimate of sampling error to the SER (sample exception rate). For statistical sampling, CUER is determined by using a statistical sampling table after the auditor has completed the audit testing and therefore knows the number of exceptions in the sample.

15-11 Sampling error is an inherent part of sampling that results from testing less than the entire population. Sampling error simply means that the sample is not perfectly representative of the entire population.

Nonsampling error occurs when audit tests do not uncover errors that exist in the sample. Nonsampling error can result from:
1. The auditor's failure to recognize exceptions, or
2. Inappropriate or ineffective audit procedures.

There are two ways to reduce sampling risk:
1. Increase sample size.
2. Use an appropriate method of selecting sample items from the population.

Careful design of audit procedures and proper supervision and review are ways to reduce nonsampling risk.

15-12 An attribute is the definition of the characteristic being tested and the exception conditions whenever audit sampling is used. The attributes of interest are determined directly from the audit program.

15-13 An attribute is the characteristic being tested for in a population. An exception occurs when the attribute being tested for is absent. The exception for the audit procedure, the duplicate sales invoice has been initialed indicating the performance of internal verification, is the lack of initials on duplicate sales invoices.

15-14 Tolerable exception rate is the result of an auditor's judgment.
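The statistical-table lookup for CUER can be approximated directly from the binomial distribution: the CUER at a given ARACR is the population exception rate at which finding this few exceptions or fewer in the sample has probability equal to the ARACR. A sketch (an assumption here is that the published attributes-sampling tables are binomial-based; table values also involve rounding, so results may differ slightly):

```python
from math import comb

def cuer(n, exceptions, aracr):
    """Computed upper exception rate: the rate p at which
    P(X <= exceptions) = aracr for X ~ Binomial(n, p), found by bisection."""
    def lower_tail(p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(exceptions + 1))
    lo, hi = 0.0, 1.0
    for _ in range(60):              # lower_tail(p) decreases in p
        mid = (lo + hi) / 2
        if lower_tail(mid) > aracr:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Sample of 100 with 3 exceptions at a 5% ARACR gives roughly 7.6%,
# matching the Table 15-9 value cited in answer 15-17 below.
```

The same function reproduces other table entries, e.g. a sample of 76 with zero exceptions at a 10% ARACR gives an upper limit of about 3%, consistent with the sample-size choice in answer 15-26b.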
The suitable TER is a question of materiality and is therefore affected by both the definition and the importance of the attribute in the audit plan.

The sample size for a TER of 6% would be smaller than that for a TER of 3%, all other factors being equal.

15-15 The appropriate ARACR is a decision the auditor must make using professional judgment. The degree to which the auditor wishes to reduce assessed control risk below the maximum is the major factor determining the auditor's ARACR.

The auditor will choose a smaller sample size for an ARACR of 10% than would be used if the risk were 5%, all other factors being equal.

15-16 The relationships between sample size and the four factors determining sample size are as follows:
a. As the ARACR increases, the required sample size decreases.
b. As the population size increases, the required sample size is normally unchanged, or may increase slightly.
c. As the TER increases, the sample size decreases.
d. As the EPER increases, the required sample size increases.

15-17 In this situation, the SER is 3%, the sample size is 100 and the ARACR is 5%. From the 5% ARACR table (Table 15-9) then, the CUER is 7.6%. This means that the auditor can state with a 5% risk of being wrong that the true population exception rate does not exceed 7.6%.

15-18 Analysis of exceptions is the investigation of individual exceptions to determine the cause of the breakdown in internal control. Such analysis is important because by discovering the nature and causes of individual exceptions, the auditor can more effectively evaluate the effectiveness of internal control. The analysis attempts to tell the "why" and "how" of the exceptions after the auditor already knows how many and what types of exceptions have occurred.

15-19 When the CUER exceeds the TER, the auditor may do one or more of the following:
1. Revise the TER or the ARACR. This alternative should be followed only when the auditor has concluded that the original specifications were too conservative, and when he or she is willing to accept the risk associated with the higher specifications.
2. Expand the sample size. This alternative should be followed when the auditor expects the additional benefits to exceed the additional costs, that is, the auditor believes that the sample tested was not representative of the population.
3. Revise assessed control risk upward. This is likely to increase substantive procedures. Revising assessed control risk may be done if 1 or 2 is not practical and additional substantive procedures are possible.
4. Write a letter to management. This action should be done in conjunction with each of the three alternatives above. Management should always be informed when its internal controls are not operating effectively. If a deficiency in internal control is considered to be a significant deficiency in the design or operation of internal control, professional standards require the auditor to communicate the significant deficiency to the audit committee or its equivalent in writing. If the client is a publicly traded company, the auditor must evaluate the deficiency to determine the impact on the auditor's report on internal control over financial reporting. If the deficiency is deemed to be a material weakness, the auditor's report on internal control would contain an adverse opinion.

15-20 Random (probabilistic) selection is a part of statistical sampling, but it is not, by itself, statistical measurement. To have statistical measurement, it is necessary to mathematically generalize from the sample to the population.

Probabilistic selection must be used if the sample is to be evaluated statistically, although it is also acceptable to use probabilistic selection with a nonstatistical evaluation.
If nonprobabilistic selection is used, nonstatistical evaluation must be used.

15-21 The decisions the auditor must make in using attributes sampling are:
What are the objectives of the audit test?
Does audit sampling apply?
What attributes are to be tested and what exception conditions are identified?
What is the population?
What is the sampling unit?
What should the TER be?
What should the ARACR be?
What is the EPER?
What generalizations can be made from the sample to the population?
What are the causes of the individual exceptions?
Is the population acceptable?

In making the above decisions, the following should be considered:
The individual situation.
Time and budget constraints.
The availability of additional substantive procedures.
The professional judgment of the auditor.

Multiple Choice Questions From CPA Examinations

15-22 a. (1) b. (3) c. (2) d. (4)
15-23 a. (1) b. (3) c. (4) d. (4)
15-24 a. (4) b. (3) c. (1) d. (2)

Discussion Questions and Problems

15-25 a. An example random sampling plan prepared in Excel (P1525.xls) is available on the Companion Website and on the Instructor's Resource CD-ROM, which is available upon request. The command for selecting the random number can be entered directly onto the spreadsheet, or can be selected from the function menu (math & trig) functions. It may be necessary to add the analysis tool pack to access the RANDBETWEEN function. Once the formula is entered, it can be copied down to select additional random numbers. When a pair of random numbers is required, the formula for the first random number can be entered in the first column, and the formula for the second random number can be entered in the second column.

a. First five numbers using systematic selection: Using systematic selection, the definition of the sampling unit for determining the selection interval for population 3 is the total number of lines in the population. The length of the interval is rounded down to ensure that all line numbers selected are within the defined population.

15-26 a. To test whether shipments have been billed, a sample of warehouse removal slips should be selected and examined to see if they have the proper sales invoice attached. The sampling unit will therefore be the warehouse removal slip.

b. Attributes sampling method: Assuming the auditor is willing to accept a TER of 3% at a 10% ARACR, expecting no exceptions in the sample, the appropriate sample size would be 76, determined from Table 15-8.

Nonstatistical sampling method: There is no one right answer to this question because the sample size is determined using professional judgment. Due to the relatively small TER (3%), the sample size should not be small. It will most likely be similar in size to the sample chosen by the statistical method.

c. Systematic sample selection:
22839 = Population size of warehouse removal slips (37521 − 14682).
76 = Sample size using statistical sampling (students' answers will vary if nonstatistical sampling was used in part b).
300 = Interval (22839/76) if statistical sampling is used (students' answers will vary if nonstatistical sampling was used in part b).
14825 = Random starting point.

Select warehouse removal slip 14825 and every 300th warehouse removal slip after (15125, 15425, etc.)

Computer generation of random numbers using Excel (P1526.xls): =RANDBETWEEN(14682,37521)

The command for selecting the random number can be entered directly onto the spreadsheet, or can be selected from the function menu (math & trig) functions. It may be necessary to add the analysis tool pack to access the RANDBETWEEN function. Once the formula is entered, it can be copied down to select additional random numbers.

d. Other audit procedures that could be performed are:
1. Test extensions on attached sales invoices for clerical accuracy. (Accuracy)
2. Test time delay between warehouse removal slip date and billing date for timeliness of billing.
(Timing)
3. Trace entries into perpetual inventory records to determine that inventory is properly relieved for shipments. (Posting and summarization)

e. The test performed in part c cannot be used to test for occurrence of sales because the auditor already knows that inventory was shipped for these sales. To test for occurrence of sales, the sales invoice entry in the sales journal is the sampling unit. Since the sales invoice numbers are not identical to the warehouse removal slips it would be improper to use the same sample.

15-27 a. It would be appropriate to use attributes sampling for all audit procedures except audit procedure 1. Procedure 1 is an analytical procedure for which the auditor is doing a 100% review of the entire cash receipts journal.

b. The appropriate sampling unit for audit procedures 2-5 is a line item, or the date the prelisting of cash receipts is prepared. The primary emphasis in the test is the completeness objective and audit procedure 2 indicates there is a prelisting of cash receipts. All other procedures can be performed efficiently and effectively by using the prelisting.

c. The attributes for testing are as follows:

d. The sample sizes for each attribute are as follows:

15-28 a. Because the sample sizes under nonstatistical sampling are determined using auditor judgment, students' answers to this question will vary. They will most likely be similar to the sample sizes chosen using attributes sampling in part b. The important point to remember is that the sample sizes chosen should reflect the changes in the four factors (ARACR, TER, EPER, and population size). The sample sizes should have fairly predictable relationships, given the changes in the four factors. The following reflects some of the relationships that should exist in students' sample size decisions:

SAMPLE SIZE: EXPLANATION
1. 90: Given
2. > Column 1: Decrease in ARACR
3. > Column 2: Decrease in TER
4. > Column 1: Decrease in ARACR (column 4 is the same as column 2, with a smaller population size)
5. < Column 1: Increase in TER-EPER
6. < Column 5: Decrease in EPER
7. > Columns 3 & 4: Decrease in TER-EPER

b. Using the attributes sampling table in Table 15-8, the sample sizes for columns 1-7 are:
1. 88
2. 127
3. 181
4. 127
5. 25
6. 18
7. 149

c.

d. The difference in the sample size for columns 3 and 6 results from the larger ARACR and larger TER in column 6. The extremely large TER is the major factor causing the difference.

e. The greatest effect on the sample size is the difference between TER and EPER. For columns 3 and 7, the differences between the TER and EPER were 3% and 2% respectively. Those two also had the highest sample sizes. Where the difference between TER and EPER was great, such as columns 5 and 6, the required sample size was extremely small.

Population size had a relatively small effect on sample size. The difference in population size in columns 2 and 4 was 99,000 items, but the increase in sample size for the larger population was marginal (actually the sample sizes were the same using the attributes sampling table).

f. The sample size is referred to as the initial sample size because it is based on an estimate of the SER. The actual sample must be evaluated before it is possible to know whether the sample is sufficiently large to achieve the objectives of the test.

15-29 a. *Students' answers as to whether the allowance for sampling error risk is sufficient will vary, depending on their judgment. However, they should recognize the effect that lower sample sizes have on the allowance for sampling risk in situations 3, 5 and 8.

b. Using the attributes sampling table in Table 15-9, the CUERs for columns 1-8 are:
1. 4.0%
2. 4.6%
3. 9.2%
4. 4.6%
5. 6.2%
6. 16.4%
7. 3.0%
8. 11.3%

c.

d. The factor that appears to have the greatest effect is the number of exceptions found in the sample compared to sample size.
For example, in columns 5 and 6, the increase from 2% to 10% SER dramatically increased the CUER. Population size appears to have the least effect. For example, in columns 2 and 4, the CUER was the same using the attributes sampling table even though the population in column 4 was 10 times larger.

e. The CUER represents the results of the actual sample whereas the TER represents what the auditor will allow. They must be compared to determine whether or not the population is acceptable.

15-30 a. and b. The sample sizes and CUERs are shown in the following table:

a. The auditor selected a sample size smaller than that determined from the tables in populations 1 and 3. The effect of selecting a smaller sample size than the initial sample size required from the table is the increased likelihood of having the CUER exceed the TER. If a larger sample size is selected, the result may be a sample size larger than needed to satisfy TER. That results in excess audit cost. Ultimately, however, the comparison of CUER to TER determines whether the sample size was too large or too small.

b. The SER and CUER are shown in columns 4 and 5 in the preceding table.

c. The population results are unacceptable for populations 1, 4, and 6. In each of those cases, the CUER exceeds TER.

The auditor's options are to change TER or ARACR, increase the sample size, or perform other substantive tests to determine whether there are actually material misstatements in the population. An increase in sample size may be worthwhile in population 1 because the CUER exceeds TER by only a small amount. Increasing sample size would not likely result in improved results for either population 4 or 6 because the CUER exceeds TER by a large amount.

d. Analysis of exceptions is necessary even when the population is acceptable because the auditor wants to determine the nature and cause of all exceptions. If, for example, the auditor determines that a misstatement was intentional, additional action would be required even if the CUER were less than TER.

e.

15-31 a. The actual allowance for sampling risk is shown in the following table:

b. The CUER is higher for attribute 1 than attribute 2 because the sample size is smaller for attribute 1, resulting in a larger allowance for sampling risk.

c. The CUER is higher for attribute 3 than attribute 1 because the auditor selected a lower ARACR. This resulted in a larger allowance for sampling risk to achieve the lower ARACR.

d. If the auditor increases the sample size for attribute 4 by 50 items and finds no additional exceptions, the CUER is 5.1% (sample size of 150 and three exceptions). If the auditor finds one exception in the additional items, the CUER is 6.0% (sample size of 150, four exceptions). With a TER of 6%, the sample results will be acceptable if one or no exceptions are found in the additional 50 items. This would require a lower SER in the additional sample than the SER in the original sample of 3.0 percent. Whether a lower rate of exception is likely in the additional sample depends on the rate of exception the auditor expected in designing the sample, and whether the auditor believes the original sample to be representative.

15-32 a. The following shows which are exceptions and why:

b. It is inappropriate to set a single acceptable tolerable exception rate and estimated population exception rate for the combined exceptions because each attribute has a different significance to the auditor and should be considered separately in analyzing the results of the test.

c. The CUER assuming a 5% ARACR for each attribute and a sample size of 150 is as follows:

d. *Students' answers will most likely vary for this attribute.

e. For each exception, the auditor should check with the controller to determine an explanation for the cause.
In addition, the appropriate analysis for each type of exception is as follows:

15-33 a. Attributes sampling approach: The test of control attribute had a 6% SER and a CUER of 12.9%. The substantive test of transactions attribute has an SER of 0% and a CUER of 4.6%.

Nonstatistical sampling approach: As in the attributes sampling approach, the SERs for the test of control and the substantive test of transactions are 6% and 0%, respectively. Students' estimates of the CUERs for the two tests will vary, but will probably be similar to the CUERs calculated under the attributes sampling approach.

b. Attributes sampling approach: TER is 5%. CUERs are 12.9% and 4.6%. Therefore, only the substantive test of transactions results are satisfactory.

Nonstatistical sampling approach: Because the SER for the test of control is greater than the TER of 5%, the results are clearly not acceptable. Students' estimates for CUER for the test of control should be greater than the SER of 6%. For the substantive test of transactions, the SER is 0%. It is unlikely that students will estimate CUER for this test greater than 5%, so the results are acceptable for the substantive test of transactions.

c. If the CUER exceeds the TER, the auditor may:
1. Revise the TER if he or she thinks the original specifications were too conservative.
2. Expand the sample size if cost permits.
3. Alter the substantive procedures if possible.
4. Write a letter to management in conjunction with each of the above to inform management of a deficiency in their internal controls. If the client is a publicly traded company, the auditor must evaluate the deficiency to determine the impact on the auditor's report on internal control over financial reporting. If the deficiency is deemed to be a material weakness, the auditor's report on internal control would contain an adverse opinion.

In this case, the auditor has evidence that the test of control procedures are not effective, but no exceptions in the sample resulted because of the breakdown. An expansion of the attributes test does not seem advisable and therefore, the auditor should probably expand confirmation of accounts receivable tests. In addition, he or she should write a letter to management to inform them of the control breakdown.

d. Although misstatements are more likely when controls are not effective, control deviations do not necessarily result in actual misstatements. These control deviations involved a lack of indication of internal verification of pricing, extensions and footings of invoices. The deviations will not result in actual errors if pricing, extensions and footings were initially correctly calculated, or if the individual responsible for internal verification performed the procedure but did not document that it was performed.

e. In this case, we want to find out why some invoices are not internally verified. Possible reasons are incompetence, carelessness, regular clerk on vacation, etc. It is desirable to isolate the exceptions to certain clerks, time periods or types of invoices.

Case

15-34 a. Audit sampling could be conveniently used for procedures 3 and 4 since each is to be performed on a sample of the population.

b. The most appropriate sampling unit for conducting most of the audit sampling tests is the shipping document because most of the tests are related to procedure 4. Following the instructions of the audit program, however, the auditor would use sales journal entries as the sampling unit for step 3 and shipping document numbers for step 4. Using shipping document numbers, rather than the documents themselves, allows the auditor to test the numerical control over shipping documents, as well as to test for unrecorded sales. The selection of numbers will lead to a sample of actual shipping documents upon which tests will be performed.

Principles of Artificial Intelligence, Peking University (China University MOOC): end-of-chapter and final exam answer bank, 2023


1. The Turing Test is designed to provide a satisfactory operational definition of what?
Answer: machine intelligence

2. Considering the differences between agent functions and agent programs, select the correct statement from the following.
Answer: An agent program implements an agent function.

3. There are two main kinds of formulation for the 8-queens problem. Which of the following is the formulation that starts with all 8 queens on the board and moves them around?
Answer: Complete-state formulation

4. What kind of knowledge is used to describe how a problem is solved?
Answer: Procedural knowledge

5. Which of the following is used to discover general facts from training examples?
Answer: Inductive learning

6. Which statement best describes the task of "classification" in machine learning?
Answer: To assign a category to each item.

University of Nottingham lecture slides: How to Estimate Random Effects Models in Stata

Rodriguez and Goldman (1995) use the structure of this dataset to consider how well quasi-likelihood methods compare with considering the dataset without the multilevel structure and fitting a standard logistic regression.
Estimation Methods for Multilevel Models
Due to additional random effects no simple matrix formulae exist for finding estimates in multilevel models.
• Can easily be extended to more complex problems.
• Potential downside 1: Prior distributions required for all unknown parameters.
• Potential downside 2: MCMC estimation is much slower than the IGLS algorithm.
• Here there are 4 sets of unknown parameters: β, u, σ_u², σ_e²
• We will add prior distributions p(β), p(σ_u²), p(σ_e²)

An Iconic Position Estimator for a 2D Laser RangeFinder

Javier Gonzalez*, Anthony Stentz, Anibal Ollero**
Field Robotics Center, The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
*, ** On leave from the University of Malaga, Spain.

Abstract

Position determination for a mobile robot is an important part of autonomous navigation. In many cases, dead reckoning is insufficient because it leads to large inaccuracies over time. Beacon- and landmark-based estimators require the emplacement of beacons and the presence of natural or man-made structure, respectively, in the environment. In this paper we present a new algorithm for efficiently computing accurate position estimates based on a radially scanning laser rangefinder that requires minimal structure in the environment. The algorithm employs a connected set of short line segments to approximate the shape of any environment; such a map can easily be constructed by the rangefinder itself. We describe techniques for efficiently managing the environment map, matching the sensor data to the map, and computing the robot's position. We present accuracy and runtime results for our implementation.

1. Introduction

Determining the location of a robot relative to an absolute coordinate frame is one of the most important issues in the autonomous navigation problem. In a two-dimensional space, the location of a mobile robot can be represented by a triplet (t_x, t_y, θ) known as the robot pose. A mobile coordinate system (Robot Frame) attached to the robot can be considered such that (t_x, t_y) represents the translation (position) of the Robot Frame with respect to an absolute coordinate system (World Frame) and θ represents its orientation (heading) (Fig. 1).

To estimate the pose (t_x, t_y, θ) of a mobile robot equipped with a range sensor, a matching between the range data and model data is required. This can be accomplished by two different approaches: feature-based and iconic.
In the feature-based method, a set of features is extracted from the sensed data (such as line segments, corners, etc.) and then matched against the corresponding features in the model. Shaffer et al. [9], using a laser scanner rangefinder, and Crowley [10] and Drumheller [11], using range from a rotating sonar, proposed feature-based approaches for a 2D environment. In contrast, the iconic method works directly on the raw sensed data, minimizing the discrepancy between it and the model. Hebert et al. [12] formulated an iconic method to compare two elevation maps acquired from a 3D laser rangefinder. Moravec and Elfes proposed a technique to match two maps represented by occupancy grids [4]. Finally, Cox [13] used an infrared laser rangefinder to get a 2D radial representation of the environment, which is matched against a line segment map.

In this paper, we present a new iconic approach for estimating the pose of a mobile robot equipped with a radial laser rangefinder. Unlike prior approaches, our method can be used in environments with only a minimal amount of structure, provided enough is present to disambiguate the robot's pose. Our map consists of a possibly large number of short line segments, perhaps constructed by the rangefinder itself, to approximately represent any environment shape. This representation introduces problems in map indexing and error minimization which are addressed to ensure that accurate estimates can be computed quickly.

FIGURE 1. World Frame and Robot Frame.

2. Iconic Position Estimation

The position estimation problem consists of two parts: sensor-to-map data correspondence and error minimization. Correspondence is the task of determining which map data point gave rise to each sensor data point.
Once the correspondence is computed, error minimization is the task of computing a robot pose that minimizes the error (e.g., distance) between the actual location of each map data point and the sensor's estimate of its location.

In this work we are concerned with scanned data taken from two-dimensional world maps. A convenient way to describe these maps is by means of a set L = {L_1, L_2, ..., L_m}, where L_j represents the line segment between the "left" point (a_j^l, b_j^l) and the "right" point (a_j^r, b_j^r) in the World Frame (see Fig. 2). This line segment lies on the line given in an implicit normalized form by:

    A_j X + B_j Y + C_j = 0.    (1)

The sensed data consists of range points taken from a radial laser scanner.

FIGURE 2. Different distances to consider for each line segment.

The correspondence problem is formulated as determining which line segment L_j from the model L gave rise to the image point p_i = (x_i, y_i)^T. A reasonable heuristic for determining correspondence is the minimum Euclidean distance between model and sensor data. Thus, the distance between the sensed point P_i = (X_i, Y_i)^T and the line segment L_j is defined as follows:

    d_ij = d_ij^o              if the perpendicular foot (a_j^0, b_j^0) is an element of L_j,
    d_ij = min(d_ij^r, d_ij^l) otherwise,

where (see Fig. 2):

    d_ij^o = |A_j X_i + B_j Y_i + C_j|,  with (A_j^2 + B_j^2)^{1/2} = 1,    (2)
    d_ij^r = ((X_i - a_j^r)^2 + (Y_i - b_j^r)^2)^{1/2},    (3)
    d_ij^l = ((X_i - a_j^l)^2 + (Y_i - b_j^l)^2)^{1/2},    (4)

and

    [X_i]   [cos θ   -sin θ   t_x] [x_i]
    [Y_i] = [sin θ    cos θ   t_y] [y_i]    (5)
    [ 1 ]   [  0        0      1 ] [ 1 ]

Equation (5) defines the transformation between a point P_i in the World Frame and a point p_i in the Robot Frame given by (t_x, t_y, θ) (see Fig. 1).
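As a concrete illustration, the transformation of Eq. (5) and the point-to-segment distances of Eqs. (2)-(4) can be sketched in Python. This is a minimal sketch; the function and variable names are ours, not the paper's, and the perpendicular-foot test is done with a clamped projection parameter rather than explicit (a_j^0, b_j^0) coordinates:

```python
import math

def transform_point(pose, p):
    """Map a point from the Robot Frame into the World Frame (Eq. 5)."""
    tx, ty, theta = pose
    x, y = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y + tx, s * x + c * y + ty)

def point_segment_distance(P, seg):
    """Distance d_ij between a World-Frame point P and segment seg = ((a_l, b_l), (a_r, b_r)).
    Uses the perpendicular distance (Eq. 2) when the foot of the perpendicular
    falls on the segment, otherwise the nearer endpoint distance (Eqs. 3-4)."""
    (alx, aly), (arx, ary) = seg
    X, Y = P
    dx, dy = arx - alx, ary - aly
    length2 = dx * dx + dy * dy
    # Projection parameter of the perpendicular foot along the segment.
    t = ((X - alx) * dx + (Y - aly) * dy) / length2
    if 0.0 <= t <= 1.0:
        # Foot lies on the segment: |A X + B Y + C| with normalized (A, B).
        L = math.sqrt(length2)
        A, B = -dy / L, dx / L
        C = -(A * alx + B * aly)
        return abs(A * X + B * Y + C)
    dl = math.hypot(X - alx, Y - aly)  # distance to "left" endpoint (Eq. 4)
    dr = math.hypot(X - arx, Y - ary)  # distance to "right" endpoint (Eq. 3)
    return min(dl, dr)
```

For example, a point one unit above the middle of a horizontal segment gets the perpendicular distance, while a point beyond an endpoint gets the endpoint distance.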
Once the line segments from the map and the scanned points are represented in the same coordinate system, it is possible to search for the segment/point correspondence pairs.

The iconic position estimation problem consists of the computation of (t_x, t_y, θ) that minimizes the sum of squared distances between the segment and range point of every correspondence pair. To establish such correspondences, all of the segments from the map could be checked against every range point. In a sensor such as the Cyclone Range Finder, which will be described later, one thousand scanned points would have to be matched against a model of hundreds of line segments (which could be built by the robot itself). To avoid this extremely expensive procedure, we propose a two-tier map representation:

1. Cell map: an array of grid cells in which every cell is labeled either occupied, if it contains at least one line segment, or empty, if it contains no segments. Elfes and Moravec used a similar approach for sonar navigation in their occupancy grid [4].
2. Line map: the collection of segments inside each of the occupied cells considered for correspondence.

The correspondence of sensed points to the model segments is accomplished in two steps. First, a set of cells is selected for each of the scanned points. Second, only those segments inside these cells are checked for correspondence. By using this representation, the number of segments to be checked decreases considerably, drastically reducing the matching time [5].

The grid size must be selected according to the characteristics of the particular application. One cell for the whole map is inefficient because it requires all of the line segments to be examined for each sensed point (no improvement at all). A very large number of cells is also inefficient because it requires a large number of empty cells to be checked.
We have determined that the appropriate size of the grid is a function of a variety of parameters (the number of line segments in the model, the type of environment, the initial error, etc.), and therefore an empirical method is proposed for choosing it.

3. Cell selection

After the scanned points have been transformed to the World Frame, a set of occupied cells must be selected for each of them (Fig. 3). Due to errors in both dead reckoning and the sensor, in a significant number of cases the points P_i are located in empty cells. We analyze these errors in more detail below.

3.1. Dead Reckoning errors

Dead reckoning is intrinsically vulnerable to bad calibration, imperfect wheel contact, upsetting events, etc. Thus, a confidence region bounding the actual location of the robot is used. This region is assumed to be a circle of radius δr proportional to the traversed distance. This uncertainty in the robot position propagates in such a way that an identical uncertainty region centered at the sensed point can be considered (Fig. 3a).

In a similar way, the heading error is assumed to be bounded by ±εr degrees. This error is also considered to be proportional to the traversed distance. Notice that the effect of this error on the uncertainty region for the sensed point depends on the range (Fig. 3b).

FIGURE 3. Uncertainties in the sensed data due to dead reckoning errors: (a) uncertainty region caused by position error; (b) uncertainty region caused by position and orientation error.

3.2. Sensor errors

Sensor errors arise for the following reasons: the range provided by the laser rangefinder is noisy as well as truncated by the resolution of the sensor, and the angular position given by the decoder has some inaccuracy. Thus, the two errors considered are range error and orientation error. Although they can be modeled as Gaussian distributions [1], here both of them are modeled as bounded errors, as were the dead reckoning errors.
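The paper states these bounds qualitatively. One plausible way to combine them into a single conservative search radius is to sum the dead-reckoning terms, which grow with traversed distance, with the fixed sensor terms, letting each angular bound act through the measured range. This formula and its coefficient names are our illustration, not the paper's:

```python
import math

def uncertainty_radius(traversed, rng, k_pos, k_head, delta_s, eps_s):
    """Conservative radius bounding the true World-Frame location of a sensed point.
    delta_r = k_pos * traversed and eps_r = k_head * traversed are the bounded
    dead-reckoning position/heading errors; delta_s and eps_s are the fixed
    sensor range/angular error bounds (angles in radians). Illustrative only."""
    delta_r = k_pos * traversed      # position error circle (Fig. 3a)
    eps_r = k_head * traversed       # heading error bound, grows with distance
    # An angular error of eps displaces a point at range rng by about rng*sin(eps).
    return delta_r + rng * math.sin(eps_r) + delta_s + rng * math.sin(eps_s)
```

Such a radius could parameterize the search region used in the cell selection step, shrinking as the dead-reckoning terms are down-weighted across iterations.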
Their maximum and minimum values define a new region of uncertainty to be added to the one arising from the dead reckoning errors. Figure 4a shows a region defined by the two error parameters δs and εs, whose values are obtained from the sensor calibration experiments [7]. This region does not increase with the distance traversed by the robot. On the other hand, although it depends on the range value, it is not as significant as the dead reckoning error (εs << εr). Figure 4b shows the final region after considering both dead reckoning and sensor errors. Notice that the sensed point location is not necessarily along the scanning ray but is inside the uncertainty region.

FIGURE 4. (a) Uncertainty in the sensed data due to the sensor errors. (b) Uncertainty region caused by Dead Reckoning and sensor errors.

3.3. Cell selection algorithm

The algorithm to select the cells takes into account the above-mentioned uncertainty regions. Each time the cell which includes the scanned point is labeled empty, a search for a nearby occupied cell is performed (Fig. 5). The searching area is selected to be coincident with the uncertainty region given by the sensor and dead reckoning errors (Fig. 4b).

If no a priori information is available, the matcher assumes the closest occupied cells are the most likely to contain the corresponding model segment. A distance function based on 8-connectivity is used. The search radiates out from the cell containing the sensed point until the cell containing the nearest line segment is found.
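The radiating search described above can be sketched as follows. This is a minimal version that visits cells in rings of increasing 8-connectivity (Chebyshev) distance; the test restricting candidates to the uncertainty region is omitted, and all names are illustrative:

```python
def nearest_occupied_cells(grid, start, max_radius):
    """Ring search from the cell containing the sensed point: visit
    cells in order of increasing 8-connectivity distance and return the
    occupied cells found at the first distance where any occur.
    `grid[i][j]` is truthy when the cell holds at least one segment.
    """
    rows, cols = len(grid), len(grid[0])
    i0, j0 = start
    for d in range(max_radius + 1):
        ring = []
        for i in range(max(0, i0 - d), min(rows, i0 + d + 1)):
            for j in range(max(0, j0 - d), min(cols, j0 + d + 1)):
                # Keep only cells exactly on the ring of radius d.
                if max(abs(i - i0), abs(j - j0)) == d and grid[i][j]:
                    ring.append((i, j))
        if ring:
            return ring
    return []
```

Returning all occupied cells at the first non-empty ring matches the description: every cell at the same distance is a candidate, and the segments inside them are then compared for the closest one.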
For all the cells located at the same distance, only those both occupied and inside the uncertainty region are examined for the closest line segment within them (Fig. 5).

FIGURE 5. Cells to be considered when the original cell is empty (legend: empty cells; occupied but not selected cells; selected cells inside the uncertainty region around the scanned point).

To make the algorithm robust against outliers, incompleteness of the model, the presence of extraneous objects, etc., a progression of increasingly better position estimates is computed (see Fig. 6). The uncertainty region is reduced along the progression. This approach is based on the fact that the uncertainty due to the error in the sensor location decreases as the position estimate improves. However, the uncertainty region due to the sensor errors does not vary. In practice, this is accomplished by weighting the parameters δr and εr between 0 and 1.

4. Segment correspondence

To determine which line segment inside the assigned cells matches the scanned point, a minimum distance criterion is used. This assumption is valid as long as the displacement between the sensed data and the model is small enough, which limits the allowable distance traversed by the robot between consecutive position estimates. However, since after each iteration the point/line-segment pairs are updated, the limitation can be relaxed somewhat (Fig. 6).

Given a scanned point P_i = (X_i, Y_i), three different distances to each line segment are computed (Fig. 2). They are given by Equations 2, 3 and 4. The smallest distance to the line segments inside the selected cells determines which line segment l_j is matched to P_i.

FIGURE 6. Block diagram of the iconic position estimator.
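Since Equations 2-4 are not reproduced in this excerpt, a standard point-to-segment distance can serve as a stand-in for the minimum-distance test of Section 4; treat this formulation, and the function name, as assumptions:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from scanned point p to segment ab: the perpendicular
    distance when the projection of p falls inside the segment,
    otherwise the distance to the nearer endpoint.
    """
    px, py = p
    ax, ay = a
    bx, by = b
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                        # degenerate segment
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / seg_len2
    t = max(0.0, min(1.0, t))                  # clamp projection to [0, 1]
    cx, cy = ax + t * dx, ay + t * dy          # closest point on segment
    return math.hypot(px - cx, py - cy)
```

The matcher would evaluate this for every segment in the selected cells and keep the segment with the smallest value as the pair for P_i.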
5. Minimization

After the matched pairs have been determined, the estimate is computed by minimizing the following:

\min \, e^T e = \min \left( \sum_{i=1}^{n} e_i^2 \right) \qquad (6)

where e_i = e_i(t_x, t_y, θ) is the distance equation computed for P_i.

Although the rotation θ makes this optimization problem non-linear, a closed-form solution exists. The Schonemann approach treats the rotation elements as unknowns and applies Lagrange multipliers to force the rotation matrix to be orthogonal [2]. However, we have opted for an iterative algorithm (Gauss-Newton) to support the future modelling of gaussian uncertainty in the sensor and robot data. Such modelling requires nonscalar weights on the error, and no closed-form solution exists for that minimization. In this method the equation to be solved is:

e + J d = 0 \qquad (7)

where e is the error vector, d is the difference vector between the transformation parameters on successive iterations, and J is the Jacobian:

J = \begin{bmatrix}
\partial e_1/\partial t_x & \partial e_1/\partial t_y & \partial e_1/\partial \theta \\
\vdots & \vdots & \vdots \\
\partial e_n/\partial t_x & \partial e_n/\partial t_y & \partial e_n/\partial \theta
\end{bmatrix} \qquad (8)

Notice that Equation (7) is overdetermined for n > 3. In this case we use the pseudoinverse of the Jacobian to find a least-squares fit of d:

d = -(J^T J)^{-1} J^T e \qquad (9)

Equation (9) is solved iteratively for the displacement vector d until the absolute value of each of its elements is less than some tolerance. On each iteration, the correspondence between sensor and model data is recomputed to reduce the effects of outliers and mismatches. We have empirically determined that iterating more than once between correspondence updates yields no additional accuracy in the final estimate; thus our approach is functionally equivalent to the closed-form solution with updating.

6. Application

In this section, we describe the mobile robot and the sensor used in this application, as well as the implementation and results.
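Returning to the minimization of Section 5, one Gauss-Newton step (Equations 7-9) can be sketched as below. The point-to-line error model e_i = n_i · (R(θ)P_i + t) - c_i, with unit normal n_i and offset c_i per matched line, is our illustrative choice (the paper uses the distances of Equations 2-4), and all names are assumptions:

```python
import numpy as np

def gauss_newton_step(points, lines, pose):
    """One Gauss-Newton update d = -(J^T J)^(-1) J^T e (Eqs. 7-9).
    Each point p_i is matched to a line given as (n, c), with unit
    normal n and offset c, so e_i = n . (R(theta) p_i + t) - c.
    Returns the updated pose (t_x + d_x, t_y + d_y, theta + d_theta).
    """
    tx, ty, th = pose
    c, s = np.cos(th), np.sin(th)
    R = np.array([[c, -s], [s, c]])
    dR = np.array([[-s, -c], [c, -s]])     # dR/dtheta
    e = np.empty(len(points))
    J = np.empty((len(points), 3))
    for i, (p, (n, off)) in enumerate(zip(points, lines)):
        p = np.asarray(p, float)
        n = np.asarray(n, float)
        e[i] = n @ (R @ p + np.array([tx, ty])) - off
        # Rows of the Jacobian of Eq. (8): de_i/dt_x, de_i/dt_y, de_i/dtheta.
        J[i] = [n[0], n[1], n @ (dR @ p)]
    # Pseudoinverse solution of Eq. (9) via the normal equations.
    d = -np.linalg.solve(J.T @ J, J.T @ e)
    return (tx + d[0], ty + d[1], th + d[2])
```

Iterating this step until every component of d falls below a tolerance, with the correspondences recomputed between iterations, mirrors the loop of Figure 6.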
6.1. The Locomotion Emulator

The Locomotion Emulator (LE) is a mobile robot that was developed at the CMU Field Robotics Center (FRC) as a testbed for the development of mobile robotic systems (Fig. 7). It is a powerful all-wheel-steer, all-wheel-drive base with a rotating payload platform. A more complete description can be found in [3].

FIGURE 7. The Locomotion Emulator.

6.2. Cyclone

The Cyclone laser range scanner (Fig. 8a) was also developed at the FRC to acquire fast, precise scans of range data over long distances (up to 50m) [6]. The sensor consists of a pulsed Gallium Arsenide infrared laser transmitter/receiver pair, aimed vertically upward. A mirror in the upper part of the scanner rotates about the vertical axis and deflects the laser beam so that it emerges parallel to the ground, creating a two-dimensional map over a 360-degree field of view. The resolution of the range measurements is set to 10cm and the accuracy is 20cm [7]. The angular resolution depends upon the resolution of the encoder on the tower motor, which is currently programmed to acquire 1000 range readings per revolution.

6.3. Experimental results

The iconic position estimation algorithm presented in this paper was tested in the highbay area of the FRC. The corridor is about 6m wide and 20m long (Fig. 8). The solid line segments denote walls which were constructed from wood partitions. We picked this configuration because of its simplicity and the reliability with which it could be surveyed. The dotted line represents the path that the LE was instructed to follow. It consists of a symmetrical trajectory 19m long. The LE, initially positioned at the beginning of the path, was moved in steps of 1m. At each of these positions, the position estimator was executed and the robot pose was surveyed using a theodolite. Figure 8b shows the sensed data taken by the Cyclone at the 7th step.
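As a small aside on the Cyclone data of Section 6.2, one revolution of n evenly spaced range readings can be converted to Cartesian points in the sensor frame as below; the zero-angle convention and counterclockwise ordering are assumptions:

```python
import math

def scan_to_points(ranges, readings_per_rev=1000):
    """Convert one Cyclone revolution (one range reading per encoder
    step over a 360-degree field of view) into Cartesian points in the
    sensor frame.  The Cyclone is programmed for 1000 readings/rev.
    """
    points = []
    for k, r in enumerate(ranges):
        a = 2.0 * math.pi * k / readings_per_rev   # beam angle
        points.append((r * math.cos(a), r * math.sin(a)))
    return points
```

These sensor-frame points would then be transformed to the World Frame using the dead-reckoning pose before cell selection (Section 3).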
Notice that a considerable number of points from the scanner correspond to objects that are not included in the model of Figure 8c.

The estimator was programmed to use two different representations of the model. In the first, the model was represented by the 8 long line segments shown in Figure 8c. In the second, each of these line segments was split into a number of small segments 10cm long, providing a model with almost 400 line segments. The parameter values used were δr = 5cm and εr = 5deg for the LE (5% of the step size), and δs = 10cm and εs = 0.7deg for the Cyclone. The grid size was 0.6 x 0.6 m².

FIGURE 8. (a) The Cyclone laser rangefinder. (b) Range scan provided by the Cyclone; the circular icon represents the LE at the position where the scan was taken. (c) World model representation. (d) Map representation.

As expected, the computed error (surveyed minus estimated) for the two representations was exactly the same at the 20 positions along the path (Fig. 9). The maximum position error was 3.6cm, and the average position error was 1.99cm. The maximum heading error was 1.8deg and the average was 0.73deg. These results are significant given the resolution (10cm) and accuracy (20cm) of the scanner.

Another important result is the run times. The estimator was run on a Sun SPARCstation 1 with a math coprocessor. For the 8-line-segment representation, the approximate run times were 0.37sec for the preprocessing (computation of the cell map), 0.27sec for the minimization and 1.76sec for the segment correspondence, giving a total cycle time of about 2sec. For the 400-line-segment representation, the run times were 12.9sec for the preprocessing, 0.29sec for the minimization and 3.22sec for the segment correspondence, giving a total cycle time of 3.5sec (the preprocessing is performed only once and is not part of the cycle).
Note that although multiplying the number of line segments by a factor of 50 increases the preprocessing time considerably, the matching time increases only by a factor of 1.75.

In the event that the uncertainty regions for the sensed points can be approximated by circles centered on the points, the segment correspondence can be computed rapidly using a numerical Voronoi diagram. This approximation worked well for our highbay experiments [8].

FIGURE 9. Computed errors (position error in m x 10^-3, heading error in deg, versus scan position) for the 20 positions along the path.

7. Conclusions

In this paper a two-dimensional iconic approach to position estimation was presented. By considering two resolution levels in the map, a two-stage method is proposed to solve the point/line-segment correspondence. Furthermore, the uncertainty due to errors in both the dead reckoning pose and the sensed data is considered in order to bound the searching area. This approach drastically reduces the computation time when the map is given by a high number of line segments (e.g. a map built by the robot itself). The algorithm was implemented and tested using a 2D radial laser scanner mounted on an omnidirectional robot, showing for the first time an explicit quantification of the accuracy of an iconic position estimator.
The estimator has been shown to be robust to incompleteness of the model and to spurious data, and it provides a highly accurate estimate of the robot position and orientation in many-line environments.

Acknowledgements

We wish to thank Gary Shaffer for his collaboration in the experiments and his contribution to the development of the programs, Kerien Fitzpatrick for facilitating the testing, In So Kweon for suggesting the use of small line segments, and Sergio Sedas and Martial Hebert for their valuable comments and discussions.

References

[1] C. M. Wang, "Location estimation and uncertainty analysis for mobile robots", in Proc. IEEE Int. Conf. on Robotics and Automation, pp. 1230-1235, 1988.
[2] P. H. Schonemann and R. M. Carroll, "Fitting one matrix to another under choice of a central dilation and a rigid motion", Psychometrika, vol. 35, pp. 245-255, June 1970.
[3] K. W. Fitzpatrick and J. L. Ladd, "Locomotion Emulator: A testbed for navigation research", in 1989 World Conference on Robotics Research: The Next Five Years and Beyond, May 1989.
[4] H. P. Moravec and A. Elfes, "High resolution maps from wide-angle sonar", in Proc. IEEE International Conference on Robotics and Automation, March 1985.
[5] J. Gonzalez, A. Stentz and A. Ollero, "An Iconic Position Estimator for a 2D Laser RangeFinder", CMU Robotics Institute Technical Report CMU-RI-TR-92-04, 1992.
[6] S. Singh and J. West, "Cyclone: A Laser Rangefinder for Mobile Robot Navigation", CMU Robotics Institute Technical Report CMU-RI-TR-91-18, August 1991.
[7] S. Sedas and J. Gonzalez, "Analytical and Experimental Characterization of a Radial Laser RangeFinder", CMU Robotics Institute Technical Report CMU-RI-TR-92, 1992.
[8] G. Shaffer and A. Stentz, "Automated Surveying of Mines Using a Scanning Laser Rangefinder", to be submitted to the SME Annual Meeting Symposium on Robotics and Automation, February 1993.
[9] G. Shaffer, A. Stentz, W. Whittaker and K. Fitzpatrick, "Position Estimator for Underground Mine Equipment", in Proc.
10th WVU International Mining Electrotechnology Conference, Morgantown, WV, July 1990.
[10] J. L. Crowley, "Navigation for an intelligent mobile robot", IEEE Journal of Robotics and Automation, vol. RA-1, no. 1, March 1985.
[11] M. Drumheller, "Mobile robot localization using sonar", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-9, no. 2, March 1987.
[12] M. Hebert, T. Kanade and I. Kweon, "3-D Vision Techniques for Autonomous Vehicles", Technical Report CMU-RI-TR-88-12, 1988.
[13] I. J. Cox, "Blanche: An Experiment in Guidance and Navigation of an Autonomous Robot Vehicle", IEEE Transactions on Robotics and Automation, vol. 7, no. 2, April 1991.
