ETL User Manual


ETLPLUS User Guide

Contents
Chapter 1: Introduction to ETL*PLUS
Chapter 2: Features and Operations
    Initial Configuration
    Job Management Configuration
    Job Scheduling
    Monitoring
    Notes and Common Problems
Chapter 3: Standards for Application Go-Live Review
    Application Design Phase
    Integration Testing Phase
    Go-Live Phase

Chapter 1: Introduction to ETL*PLUS

In building a data warehousing environment, the quality of the ETL*PLUS mechanism not only determines in the early stages whether the data warehouse build can proceed smoothly, it also affects how easily the system can be maintained later. ETL stands for Extraction, Transformation, and Loading: Extraction is how data is pulled out of the source system; Transformation converts the extracted data into the format the data warehouse needs; Loading loads the data into the data warehouse. A data warehouse usually involves a great many data sources and very large data volumes, so a well-designed, easily maintained ETL*PLUS mechanism is vital to a data warehouse project.

The ETL*PLUS mechanism lets the many jobs in a data warehouse project run automatically as soon as their execution conditions are met. These include jobs that receive files and load their data as well as jobs that integrate data, and job execution can be subject to conditions and other controls. ETL*PLUS is fully browser-based, which makes it very convenient to use and highly cross-platform.

It also provides strong job-status monitoring and puts an enterprise's scheduling under unified management, which simplifies administration.

For back-end jobs, ETL*PLUS uses Perl, a highly cross-platform language, which makes the jobs platform-independent. This is one of ETL*PLUS's strengths.


ETL Design Specification

Author: Zhang Jian
Customer: ***

Contents
1. Overview
2. ETL Development Strategy
3. ETL System Architecture Design
    3.1 Overall ETL Framework
    3.2 ETL System Logical Architecture
        3.2.1 ETL System Backup and Recovery
4. ETL Application Framework Design
    4.1 ETL Application Architecture Diagram
    4.2 ETL Patterns
    4.3 Data Extraction (Extract) and Data Conversion (Convert)
        4.3.1 Data Extraction (Extract)
        4.3.2 Data Conversion (Convert)
        4.3.3 Data Splitting (Split)
    4.4 Data Transformation (Transform)
        4.4.1 Field Merging and Splitting
        4.4.2 Assigning Default Values
        4.4.3 Data Sorting (Sort)
        4.4.4 Data Translation (Lookup)
        4.4.5 Data Merging (Merge)
        4.4.6 Data Aggregation (Aggregate)
        4.4.7 File Compare
        4.4.8 Other Complex Calculations
    4.5 Data Loading (Load)
        4.5.1 Pre-Load
        4.5.2 Load
        4.5.3 Post-Load
    4.6 ETL Processes and Process Scheduling
    4.7 Management Interface
    4.8 Initial, Historical, and Daily Data ETL
5. Development Standards
    5.1 Intermediate Files
    5.2 Temporary Files
    5.3 BAPI Parameter Files
    5.4 ETL Programs
        5.4.1 DataStage Project Naming
        5.4.2 DataStage Job Naming
        5.4.3 DataStage Stage Naming
        5.4.4 DataStage Link Naming
        5.4.5 DataStage Routine Naming
        5.4.6 Naming of ABAP Programs Generated by DataStage
        5.4.7 DataStage Table Definition Naming
        5.4.8 Stored Procedure Naming
    5.5 Reject Files
    5.6 System Logs
    5.7 ODBC
    5.8 Version Control
        5.8.1 ABAP and BAPI Programs
        5.8.2 DataStage Jobs and Routines
        5.8.3 Stored Procedures
        5.8.4 Documents
    5.9 ETL Job Development Rules
        5.9.1 Table Definition Usage Principles
        5.9.2 Extract Job Development Principles
        5.9.3 CS Job Development Principles
        5.9.4 Load Job Development Principles
        5.9.5 Gc and Ge Job Development Principles
        5.9.6 Stored Procedures and BAPIs
6. System Environment
    6.1 Development, Test, and Production Environment Planning
    6.2 File Directories
    6.3 DataStage Manager Folder Hierarchy
7. ETL Application Design
    7.1 Application Module Architecture
        7.1.1 DataStage Server
        7.1.2 Database Server
    7.2 ETL Job Design
        7.2.1 Schedule Jobs
        7.2.2 Dependence Jobs
        7.2.3 Maintenance Jobs
        7.2.4 Group Jobs
        7.2.5 Component Jobs
    7.3 ETL Environment Parameters
        7.3.1 JobParams.cfg File Format
        7.3.2 Parameter Descriptions
    7.4 Common Routine Design
        7.4.1 Transform Routines
        7.4.2 Before/After SubRoutines
    7.5 Initial ETL Programs
8. ETL Development Process and Management
    8.1 Development Environment Preparation
    8.2 Development Steps
        8.2.1 Daily Data Loading
        8.2.2 Initial Data Loading
        8.2.3 Historical Data Loading
    8.3 Roles and Responsibilities
9. ETL Quality Control and Error Handling
    9.1 Main Quality-Control Mechanisms
    9.2 Reject Files and Rejection-Handling Strategy
    9.3 Handling Errors Found in Already-Loaded Source Data
Appendix I: ETL Mapping Document Template
Appendix II: ETL Data Flow Document Template
Appendix III: ETL Job Dependency Document Template

1. Overview

The core function of the ETL system is to load data from the source systems into the data warehouse according to the architecture laid out in this design specification.

ETL Operating Manual: TV Mode

Quick Start Guide: TV Analyzer / Receiver Mode

Contents
Notes
1. Channel Tables and Modulation Standards
    1.1 Channel Tables
        1.1.1 Editing a Channel Table
        1.1.2 Creating a Channel Table
        1.1.3 Copying a Channel Table
    1.2 Modulation Standards
        1.2.1 Creating a New Modulation Standard
        1.2.2 Editing a Modulation Standard
        1.2.3 Copying a Modulation Standard
        1.2.4 Analog TV Modulation Standards
        1.2.5 Digital TV Modulation Standards
2. Analog TV Basics and Test Cases
    2.1 Spectrum Measurement
    2.2 Carrier Measurement
    2.3 Video Scope
    2.4 Video Modulation Measurement
    2.5 Hum Measurement
    2.6 C/N Measurement
        2.6.1 Off-Service C/N Test
        2.6.2 In-Service C/N Test
        2.6.3 Quiet Line C/N Test
    2.7 CSO Measurement
        2.7.1 Off-Service CSO Test
        2.7.2 Quiet Line CSO Test
    2.8 CTB Measurement
3. DVB-C and J.83A/C Test Cases
    3.1 Spectrum Measurement
    3.2 Test Results at a Glance
    3.3 Constellation Diagram Test (Modulation Analysis)
    3.4 Amplitude, Phase, and Group Delay Analysis (Channel Analysis)
    3.5 APD/CCDF Measurement
4. DVB-T/H Measurement Cases
5. TV Analyzer Measurements
6. Testing with and without Channel Tables
7. Common Functions and Information
    7.1 Signal Level
    7.2 Attenuation Adjustment
    7.3 Markings on the Measurement Display
    7.4 Status Bar Markings
    7.5 Terms and Abbreviations

Notes

The R&S ETL operating documentation comes in three forms: the quick start guide, the operating manual, and the online help.

The ETL provides two main functions, spectrum analyzer and TV analyzer/receiver, so for convenience a separate quick start guide has been written for each.

Kettle Open-Source ETL Platform: Installation, Configuration, and Usage Guide v1.1

[ ] Draft    [ ] Released    [√] Revised
Author: Xiao Miao
Date: ****-**-**
Classification: Public
Document version: v1.0
Kettle Open-Source ETL Software: Installation, Configuration, and Usage Guide
September 2015

Contents
Revision Record
1. Installation and Configuration
    1.1 ETL and Kettle Overview
    1.2 Downloading and Installing Kettle
        1.2.1 Installing and Configuring Kettle on Windows
        1.2.2 Installing and Configuring Kettle on Linux
        1.2.3 Installing JDBC Database Drivers for Kettle
        1.2.4 Configuring a Repository Connection in Kettle
        1.2.5 Configuring the Kettle Hadoop Plugin
2. Kettle Components and Their Use
    2.1 Using Kettle Spoon
        2.1.1 The Component Tree
        2.1.2 Worked Example 1
        2.1.3 Worked Example 2
        2.1.4 Loading Data into HDFS with Kettle
        2.1.5 Loading Data into Hive with Kettle
        2.1.6 Graphical Hadoop MapReduce Development with Kettle
    2.2 Using Kettle Pan
    2.3 Using Kettle Kitchen
    2.4 Carte: Adding a New ETL Execution Engine
    2.5 The Encr Encryption Tool

1. Installation and Configuration

In the second half of 2015 the company took on the Jiangsu Telecom electronic-channel center data analysis project, which is planned to be built on open-source big-data application and analysis components. Extracting and cleansing the data requires an ETL tool, and to integrate data from the project's varied data sources while keeping investment and development costs down, the project team initially decided to adopt an open-source ETL tool. ETL (Extract, Transformation, Load) tools are essential for building a data warehouse and carrying out data integration work.

Several commercial ETL tools are currently on the market, such as Informatica PowerCenter and IBM DataStage.

Data Mart ETL Tool User Guide

I. Overview

The data mart ETL tool handles data integration, transformation, and loading during data mart construction.

It helps users quickly and efficiently consolidate, cleanse, and transform data from different sources and load the result into the data mart, supporting data analysis and decision-making.

II. Installation and Configuration

1. Installation: download the installer, run it, and follow the prompts to complete the installation.

2. Configuration: after installation, open the tool's configuration screen and set up database connections, data sources, and other options as required.

III. Data Source Configuration

1. Creating a data source: open "Data Source Management", click "New Data Source", fill in the name, type, address, port, and other details, then test the connection.

2. Editing a data source: in the data source management screen, select a data source and click "Edit" to change its settings.

3. Deleting a data source: select a data source, click "Delete", and confirm.

IV. Data Integration

1. Creating an integration task: open "Integration Task Management", click "New Integration Task", and enter the task name, description, and other details.

2. Configuring data sources: select the task, click "Configure Data Sources", choose the source and target data sources, and set up the field mappings and data transformations.

3. Running a task: select the task, click "Run", and wait for it to complete.

V. Data Transformation

1. Field mapping: while configuring an integration task, map source fields to target fields as needed so the data is transformed and loaded correctly.

2. Data cleansing: deduplication, filtering, reformatting, and similar operations can be applied during task configuration to ensure data quality and accuracy.

3. Data transformation: merging, splitting, computed fields, and similar operations can be configured to meet different business needs.

ETL Tool Kettle User Manual 5.0

Contents (excerpt)
1.6.1 Transformations
1.6.2 Jobs
5. Database Explorer
    5.1 Screenshot
    5.2 Description
6. Hops
    6.1 Description
        6.1.1 Transformation Hops

ETL Testing Fundamentals: A Guide to Data Extraction, Transformation, and Loading

About the Tutorial

An ETL tool extracts the data from heterogeneous data sources, transforms the data (applying calculations, joining fields and keys, removing incorrect data fields, and so on), and loads it into a data warehouse. This is an introductory tutorial that explains all the fundamentals of ETL testing.

Audience

This tutorial is designed for readers who want to learn the basics of ETL testing. It is especially useful for software testing professionals who need to perform data analysis to extract relevant information from a database.

Prerequisites

We assume readers have hands-on experience of handling a database using SQL queries. Elementary knowledge of data warehousing concepts will also help.

Disclaimer & Copyright

Copyright 2015 by Tutorials Point (I) Pvt. Ltd. All the content and graphics published in this e-book are the property of Tutorials Point (I) Pvt. Ltd. Users of this e-book are prohibited from reusing, retaining, copying, distributing, or republishing any contents or part of the contents of this e-book in any manner without written consent of the publisher. We strive to update the contents of our website and tutorials as timely and as precisely as possible; however, the contents may contain inaccuracies or errors. Tutorials Point (I) Pvt. Ltd. provides no guarantee regarding the accuracy, timeliness, or completeness of our website or its contents, including this tutorial. If you discover any errors on our website or in this tutorial, please contact us at ******************************************

Table of Contents

1. ETL – Introduction (Difference between ETL and BI Tools; ETL Process; ETL Tool Function)
2. ETL Testing – Tasks
3. ETL vs Database Testing
4. ETL Testing – Categories
5. ETL Testing – Challenges
6. ETL – Tester's Roles
7. ETL Testing – Techniques
8. ETL Testing – Process
9. ETL Testing – Scenarios (Test Cases)
10. ETL Testing – Performance
11. ETL Testing – Scalability
12. ETL Testing – Data Accuracy
13. ETL Testing – Metadata
14. ETL Testing – Data Transformations
15. ETL Testing – Data Quality
16. ETL Testing – Data Completeness
17. ETL Testing – Backup Recovery
18. ETL Testing – Automation
19. ETL Testing – Best Practices
20. ETL Testing – Interview Questions

1. ETL – Introduction

The data in a data warehouse system is loaded with an ETL (Extract, Transform, Load) tool. As the name suggests, it performs the following three operations:

- It extracts the data from your transactional system, which can be an Oracle, Microsoft, or any other relational database;
- It transforms the data by performing data-cleansing operations; and
- It loads the data into the OLAP data warehouse.

You can also extract data from flat files like spreadsheets and CSV files using an ETL tool and load it into an OLAP data warehouse for data analysis and reporting. Let us take an example to understand it better.

Example

Let us assume there is a manufacturing company with multiple departments such as sales, HR, material management, EWM, etc. All these departments maintain separate databases for their own work, and each database differs in technology, landscape, table names, columns, and so on.
Now, if the company wants to analyze historical data and generate reports, all the data from these data sources should be extracted and loaded into a data warehouse to save it for analytical work. An ETL tool extracts the data from all these heterogeneous data sources, transforms the data (applying calculations, joining fields and keys, removing incorrect data fields, and so on), and loads it into a data warehouse. Later, you can use various Business Intelligence (BI) tools to generate meaningful reports, dashboards, and visualizations from this data.

Difference between ETL and BI Tools

An ETL tool is used to extract data from different data sources, transform the data, and load it into a DW system; a BI tool, by contrast, is used to generate interactive and ad-hoc reports for end users, dashboards for senior management, and data visualizations for monthly, quarterly, and annual board meetings. The most common ETL tools include SAP BO Data Services (BODS), Informatica PowerCenter, Microsoft SSIS, Oracle Data Integrator (ODI), Talend Open Studio, CloverETL (open source), etc. Some popular BI tools include SAP BusinessObjects, SAP Lumira, IBM Cognos, JasperSoft, Microsoft BI Platform, Tableau, Oracle Business Intelligence Enterprise Edition, etc.

ETL Process

Let us now discuss in a little more detail the key steps involved in an ETL procedure.

Extracting the Data

This involves extracting the data from different heterogeneous data sources. Data extraction from a transactional system varies as per the requirement and the ETL tool in use. It is normally done by running scheduled jobs in off-business hours, such as at night or over the weekend.

Transforming the Data

This involves transforming the data into a suitable format that can be easily loaded into a DW system. Data transformation involves applying calculations, joins, and defining primary and foreign keys on the data. For example, if you want the percentage of total revenue, which is not stored in the database, you apply the percentage formula during transformation and then load the data. Similarly, if you have the first name and the last name of users in different columns, you can apply a concatenate operation before loading the data. Some data doesn't require any transformation; such data is known as direct move or pass-through data. Data transformation also involves data correction and cleansing, removing incorrect or incomplete data, and fixing data errors. It also includes checking data integrity and formatting incompatible data before loading it into a DW system.

Loading the Data into a DW System

This involves loading the data into a DW system for analytical reporting and information. The target system can be a simple delimited flat file or a data warehouse.

ETL Tool Function

A typical ETL-tool-based data warehouse uses a staging area, data integration, and access layers to perform its functions. It is normally a 3-layer architecture.

- Staging Layer: The staging layer, or staging database, stores the data extracted from the different source data systems.
- Data Integration Layer: The integration layer transforms the data from the staging layer and moves it to a database, where the data is arranged into hierarchical groups, often called dimensions, and into facts and aggregate facts.
The combination of fact and dimension tables in a DW system is called a schema.

- Access Layer: The access layer is used by end users to retrieve the data for analytical reporting and information.

[Figure: interaction of the staging, integration, and access layers]

2. ETL Testing – Tasks

ETL testing is done before data is moved into a production data warehouse system. It is sometimes also called table balancing or production reconciliation. It differs from database testing in its scope and in the steps taken to complete it. The main objective of ETL testing is to identify and mitigate data defects and general errors that occur prior to the processing of data for analytical reporting.

Here is a list of the common tasks involved in ETL testing:

1. Understand the data to be used for reporting
2. Review the data model
3. Source-to-target mapping
4. Data checks on source data
5. Packages and schema validation
6. Data verification in the target system
7. Verification of data transformation calculations and aggregation rules
8. Sample data comparison between the source and the target system
9. Data integrity and quality checks in the target system
10. Performance testing on data

3. ETL vs Database Testing

Both ETL testing and database testing involve data validation, but they are not the same. ETL testing is normally performed on data in a data warehouse system, whereas database testing is commonly performed on transactional systems, where the data comes from different applications into the transactional database. Here, we highlight the major differences between ETL testing and database testing.

ETL Testing

ETL testing involves the following operations:

1. Validation of data movement from the source to the target system.
2. Verification of the data count in the source and the target system.
3. Verifying data extraction and transformation as per requirement and expectation.
4. Verifying that table relations (joins and keys) are preserved during the transformation.

Common ETL testing tools include QuerySurge, Informatica, etc.

Database Testing

Database testing stresses data accuracy, correctness of data, and valid values. It involves the following operations:

1. Verifying that primary and foreign keys are maintained.
2. Verifying that the columns in a table have valid data values.
3. Verifying data accuracy in columns. Example: the number-of-months column shouldn't have a value greater than 12.
4. Verifying missing data in columns. Check whether there are null columns that should actually have a valid value.

Common database testing tools include Selenium, QTP, etc.


ETL User Manual (November 8, 2007)

Chapter 1: Configuration File Structure

<loaderJob>                              <!-- root tag -->
    <restartCounter/>                    <!-- creates a table in the target database recording how many times an <importDefinition> has been restarted; throws an error if the table already exists -->
    <variables>                          <!-- definitions of incoming parameters -->
        <variable/>
    </variables>
    <jdbcDefaultParameters>              <!-- default JDBC connections -->
        <jdbcSourceParameters>
            <jdbcSourceParameter/>
        </jdbcSourceParameters>
        <jdbcTargetParameters>
            <jdbcTargetParameter/>
        </jdbcTargetParameters>
    </jdbcDefaultParameters>
    <sql>                                <!-- executes SQL statements -->
        <jdbcTargetParameters>
            <jdbcTargetParameter/>
        </jdbcTargetParameters>
        <sqlStmt>
            <include/>
        </sqlStmt>
    </sql>
    <definitionInclude>                  <!-- definition include -->
        <include/>                       <!-- pulls in files containing further <definitionInclude> content -->
        <echo/>                          <!-- message printed at the top of the log -->
        <copyTable/>                     <!-- simple table copy -->
        <importDefinition>               <!-- import definition -->
            <sortColumns>                <!-- ensures the column data is unique -->
                <sortColumn/>
            </sortColumns>
            <jdbcParameters>             <!-- JDBC connections specific to this import definition -->
                <jdbcSourceParameters>
                    <jdbcSourceParameter/>
                </jdbcSourceParameters>
                <jdbcTargetParameters>
                    <jdbcTargetParameter/>
                </jdbcTargetParameters>
            </jdbcParameters>
            <valueColumns>               <!-- direct column-to-column mappings -->
                <valueColumn/>
            </valueColumns>
            <transformations>            <!-- custom transformation rules -->
                <transformation>
                    <sourceColumns>
                        <sourceColumn/>
                    </sourceColumns>
                    <targetColumns>
                        <targetColumn/>
                    </targetColumns>
                    <javaScript>
                        <include/>
                    </javaScript>
                </transformation>
            </transformations>
            <variableColumns>
                <variableColumn/>        <!-- assigns a variable value to a target column; requires override="true" -->
                <userIDColumn/>          <!-- assigns the current user to a target column -->
                <timeStampColumn/>       <!-- assigns the current time to a target column -->
            </variableColumns>
            <relationColumns>
                <relationColumn/>        <!-- imports a foreign-key relation (the referenced rows must exist) -->
            </relationColumns>
            <constantColumns>
                <constantColumn/>        <!-- assigns a constant value to a target column -->
            </constantColumns>
            <counterColumns>
                <counterColumn/>         <!-- generates auto-incremented values for a target column via a counter table (e.g. +1 per row) -->
                <subCounterColumn>
                    <subCounterKeyColumn/>
                </subCounterColumn>
            </counterColumns>
            <tables>                     <!-- target table definitions -->
                <table/>
            </tables>
        </importDefinition>
    </definitionInclude>
</loaderJob>
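To make the structure concrete, here is a minimal sketch of a loader job that prepares a target table, then copies two columns from a source table into it and stamps each row with a constant source-system code. The connection URLs, table and column names, and the file referenced by <include> are made up for illustration, and the attribute names on <jdbcSourceParameter>, <valueColumn>, <constantColumn>, and <table> follow common Octopus-style loader usage rather than anything defined in this manual, so check them against your loader's reference before use.

<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative only: URLs, table/column names, and some attribute
     spellings are assumptions; see the note above. -->
<loaderJob logMode="normal" commitCount="100">

    <!-- Connections used by any import definition that does not
         declare its own <jdbcParameters>. -->
    <jdbcDefaultParameters>
        <jdbcSourceParameters>
            <jdbcSourceParameter name="JdbcDriver" value="oracle.jdbc.driver.OracleDriver"/>
            <jdbcSourceParameter name="Connection.Url" value="jdbc:oracle:thin:@srchost:1521:SRC"/>
            <jdbcSourceParameter name="User" value="etl_user"/>
            <jdbcSourceParameter name="Password" value="etl_pwd"/>
        </jdbcSourceParameters>
        <jdbcTargetParameters>
            <jdbcTargetParameter name="JdbcDriver" value="oracle.jdbc.driver.OracleDriver"/>
            <jdbcTargetParameter name="Connection.Url" value="jdbc:oracle:thin:@dwhost:1521:DW"/>
            <jdbcTargetParameter name="User" value="dw_user"/>
            <jdbcTargetParameter name="Password" value="dw_pwd"/>
        </jdbcTargetParameters>
    </jdbcDefaultParameters>

    <!-- Run a preparation script against the target before loading;
         the SQL lives in an external file pulled in by <include>. -->
    <sql>
        <sqlStmt>
            <include href="sql/truncate_dw_customer.sql" parse="text"/>
        </sqlStmt>
    </sql>

    <definitionInclude>
        <echo message="Loading SRC_CUSTOMER into DW_CUSTOMER"/>

        <importDefinition name="LoadCustomers" tableName="SRC_CUSTOMER">
            <!-- Straight column-to-column copies. -->
            <valueColumns>
                <valueColumn sourceColumnName="CUST_ID" targetTableName="DW_CUSTOMER"
                             targetColumnName="CUST_ID" targetTableID="0"/>
                <valueColumn sourceColumnName="CUST_NAME" targetTableName="DW_CUSTOMER"
                             targetColumnName="CUST_NAME" targetTableID="0"/>
            </valueColumns>
            <!-- Stamp every row with a fixed source-system code. -->
            <constantColumns>
                <constantColumn targetTableName="DW_CUSTOMER" targetColumnName="SRC_SYSTEM"
                                targetTableID="0" constantValue="CRM"/>
            </constantColumns>
            <!-- The target table this definition writes to. -->
            <tables>
                <table tableName="DW_CUSTOMER" tableID="0" insert="true"/>
            </tables>
        </importDefinition>
    </definitionInclude>
</loaderJob>

The command-line parameters listed in the attribute reference below (-m for logMode, -c for commitCount, and so on) supply the same settings when the job is launched.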

Chapter 2: Tag Reference

<loaderJob> is the root tag of an ETL configuration file. Its attributes, with their scope and, where one exists, the equivalent command-line parameter:

- logMode (scope: <importDefinition>, <sql>; parameter: -m): default logging mode, one of "normal", "none", "full". Default "normal".
- objectIDIncrement (scope: <importDefinition>): increment used when allocating object IDs for the target objects named in the configuration file. Default 20.
- objectIDTableName (scope: <importDefinition>): name of the object ID counter table. Default "objectid".
- objectIDColumnName (scope: <importDefinition>): column in the object ID table that holds the next value. Default "next".
- objectIDNameColumnName (scope: <importDefinition>): defines the column name for the OID name defined by TOS. If this value is set, Loader uses the OID logic used by TOS.
- objectIDNameColumnValue (scope: <importDefinition>): defines the column name for the OID values defined by TOS. If this value is set, Loader uses the OID logic used by TOS. The column type is VARCHAR.
- onErrorContinue (scope: <sql>, <importDefinition>; parameter: -e): continue executing even if an error occurs while running SQL commands or during transformation. Default "false".
- commit (scope: <sql>): whether each SQL statement block is committed separately. Default "false".
- userID (parameter: -u): value used by the <userIDColumn> tag.
- logDir (parameter: -l): path of the log directory. Defaults to the current working directory.
- logFile (parameter: -f): log file name. Default "LoaderLog-YYYY-MM-DD-HH-mm-ss.txt".
- vendorConfig (parameter: -d): name of the database vendor configuration file. Default "OctopusDBVendors.xml".
- returnCode (parameter: -rc): return code passed from java.exe back to the calling environment when a transformation job fails.
- objectIDAutoCreate (scope: <importDefinition>): whether the objectID table is created automatically. Default "false".
- objectIDStartValue (scope: <importDefinition>): starting value for object IDs, used only when the table is auto-created. Default "1".
- commitCount (scope: <loaderJob>, <importDefinition>, <copyTable>; parameter: -c): default number of rows per commit. Default "100".
- oidLogic (scope: <importDefinition>, <table>): whether a table uses OID logic. Default "true".
- tableMode (scope: <importDefinition>, <table>): default table access mode. Default "Query".
- dataCutOff (scope: <importDefinition>): whether data that exceeds the target column length is cut off. Default "false".
- logTableName (scope: <loaderJob>): whether a log table is used during data cleaning; to use one, the log table name must be given here.
- logColumnName (scope: <loaderJob>): log-table column that stores the name of the affected column. Default LOGCOLUMNNAME.
- logRowNumber (scope: <loaderJob>): log-table column that stores the row number of the affected data. Default LOGROWNUMBER.
- logOriginalValue (scope: <loaderJob>): log-table column that stores the original value of the affected data. Default LOGORIGINALVALUE.
- logNewValue (scope: <loaderJob>): log-table column that stores the new value of the affected data. Default LOGNEWVALUE.
- logImportDefinitionName (scope: <loaderJob>): log-table column that stores the name of the <importDefinition> tag. Default LOGIMPORTDEFINITIONNAME.
- logOperationName (scope: <loaderJob>): log-table column that stores the data-cleaning operation applied (cut off, replaced null, ...). Default LOGOPERATIONNAME.
- logTypeName (scope: <loaderJob>): log-table column that stores the type of data operation (insert, update, ...).
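Pulling a few of these attributes together, the fragment below (job, table, and column names are hypothetical) keeps the commit size and a data-cleaning log table at the job level, while one import definition opts into full logging, error tolerance, an auto-created object ID table, and data cut-off, matching the scopes listed above. The name and tableName attributes on <importDefinition> are assumptions carried over from the earlier sketch.

<loaderJob commitCount="500"
           logTableName="ETL_CLEAN_LOG"
           logColumnName="LOGCOLUMNNAME"
           logRowNumber="LOGROWNUMBER"
           logOriginalValue="LOGORIGINALVALUE"
           logNewValue="LOGNEWVALUE">
    <definitionInclude>
        <!-- logMode and onErrorContinue may sit on <importDefinition> or <sql>;
             objectIDAutoCreate, objectIDStartValue, and dataCutOff are scoped to
             <importDefinition>; commitCount here overrides the job-level 500. -->
        <importDefinition name="LoadOrders" tableName="SRC_ORDERS"
                          logMode="full"
                          onErrorContinue="true"
                          objectIDAutoCreate="true"
                          objectIDStartValue="1000"
                          dataCutOff="true"
                          commitCount="1000">
            <!-- valueColumns, tables, etc. as in the earlier example -->
        </importDefinition>
    </definitionInclude>
</loaderJob>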
