Deep Learning: Training, Validation and Test Set Concepts


➢Training, Validation and Test Data

Example:

(A) We have data on 16 items, along with their attributes and class labels.

Randomly divide them into 8 items for training, 4 for validation, and 4 for testing.
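The random split above can be sketched as follows; the item indices and seed are illustrative stand-ins for the 16 data items:

```python
import random

# Hypothetical sketch: randomly split 16 labeled items into
# training (8), validation (4) and test (4) sets.
items = list(range(16))   # indices standing in for the 16 data items
random.seed(0)            # fixed seed so the split is reproducible
random.shuffle(items)

train = items[:8]
validation = items[8:12]
test = items[12:]

print(len(train), len(validation), len(test))  # 8 4 4
```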

(B) Next, suppose we develop three classification models A, B, and C from the training data. Let the training errors of these models be as shown below (recall that the models do not necessarily give perfect results on the training data, nor are they required to).

(C) Next, use the three models A, B, and C to classify each item in the validation set based on its attribute values. Recall that we do know their true labels as well. Suppose we get the following results:

Based on the validation errors, we select model C.

(D) Now use model C to determine the class value for each data point in the test set. We do so by substituting the (known) attribute values into classification model C. Again, recall that we know the true label of each of these items, so we can compare the labels produced by the model with the true labels to determine the classification error on the test set. Suppose we get the following results.

(E) Based on the above, an estimate of the generalization error is 25%.

What this means is that if we use Model C to classify future items for which only the attributes will be known, not the class labels, we are likely to make incorrect classifications about 25% of the time.
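The error estimate is simply the fraction of test items the model gets wrong. A minimal sketch, with hypothetical labels chosen so that one of four test items is misclassified (giving the 25% figure above):

```python
# Classification error: the fraction of items whose predicted
# label differs from the true label.
def classification_error(true_labels, predicted_labels):
    wrong = sum(t != p for t, p in zip(true_labels, predicted_labels))
    return wrong / len(true_labels)

# Hypothetical test set of 4 items: one misclassification
# gives a 25% error estimate.
true_labels      = ["A", "B", "A", "B"]
predicted_labels = ["A", "B", "A", "A"]
print(classification_error(true_labels, predicted_labels))  # 0.25
```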

(F) A summary of the above is as follows:

➢Cross Validation

If the available data are limited, we employ Cross-Validation (CV). In this approach, the data are randomly divided into k approximately equal sets. Training is done on (k-1) of the sets, and the remaining set is used for testing. This process is repeated k times, with each set serving once as the test set (k-fold CV). The average error over the k repetitions is used as an estimate of the test error.

For the special case when k equals the number of data items (k = n), so that each test set contains exactly one item, the above is called Leave-One-Out Cross-Validation (LOO-CV).

EXAMPLE:

Consider the above data consisting of 16 items.

(A) Let k = 4, i.e., 4-fold Cross-Validation.

Divide the data into four sets of 4 items each.

Suppose the following set up occurs and the errors obtained are as shown.

Estimated Classification Error (CE) = (25 + 35 + 28 + 32) / 4 = 30%
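The averaging step above, using the four fold errors given in the text:

```python
# Average the per-fold classification errors (in percent)
# to estimate the test error.
fold_errors = [25, 35, 28, 32]
ce = sum(fold_errors) / len(fold_errors)
print(ce)  # 30.0
```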

(B) LOO-CV

For this, the data are divided into 16 splits, each consisting of 15 training items and 1 test item.

Suppose the Average Classification Error over the 16 runs, based on the values in the last row, is CE = 32%. Then the estimate of the test error is 32%.
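The LOO-CV splits for the 16 items can be sketched directly; each item serves once as the single test point:

```python
# Leave-one-out cross-validation on 16 items: for each item i,
# item i is the test set and the remaining 15 items train.
n = 16
splits = []
for i in range(n):
    test_set = [i]
    train_set = [j for j in range(n) if j != i]
    splits.append((train_set, test_set))

print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 16 15 1
```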
