Full migration from Oracle to MySQL with DataX
For more about DataX, see: https://www.xmmup.com/alikaiyuanetlgongjuzhidataxhedatax-webjieshao.html
Environment preparation
Oracle and MySQL environment setup
-- Create a dedicated network
docker network create --subnet=172.72.7.0/24 ora-network

-- Oracle load-testing tool
docker pull lhrbest/lhrdbbench:1.0
docker rm -f lhrdbbench
docker run -d --name lhrdbbench -h lhrdbbench \
   --net=ora-network --ip 172.72.7.26 \
   -v /sys/fs/cgroup:/sys/fs/cgroup \
   --privileged=true lhrbest/lhrdbbench:1.0 \
   /usr/sbin/init

-- Oracle 12c
docker rm -f lhrora1221
docker run -itd --name lhrora1221 -h lhrora1221 \
   --net=ora-network --ip 172.72.7.34 \
   -p 1526:1521 -p 3396:3389 \
   --privileged=true \
   lhrbest/oracle_12cr2_ee_lhr_12.2.0.1:2.0 init

-- MySQL
docker rm -f mysql8027
docker run -d --name mysql8027 -h mysql8027 -p 3418:3306 \
   --net=ora-network --ip 172.72.7.35 \
   -v /etc/mysql/mysql8027/conf:/etc/mysql/conf.d \
   -e MYSQL_ROOT_PASSWORD=lhr -e TZ=Asia/Shanghai \
   mysql:8.0.27

cat > /etc/mysql/mysql8027/conf/my.cnf << "EOF"
[mysqld]
default-time-zone = '+8:00'
log_timestamps = SYSTEM
skip-name-resolve
log-bin
server_id=80273418
character_set_server=utf8mb4
default_authentication_plugin=mysql_native_password
EOF

mysql -uroot -plhr -h 172.72.7.35
create database lhrdb;

-- Application user (Oracle side)
CREATE USER lhr identified by lhr;
alter user lhr identified by lhr;
GRANT DBA to lhr;
grant SELECT ANY DICTIONARY to lhr;
GRANT EXECUTE ON SYS.DBMS_LOCK TO lhr;

-- Start the listener
vi /u01/app/oracle/product/11.2.0.4/dbhome_1/network/admin/listener.ora
lsnrctl start
lsnrctl status
DataX environment setup
docker rm -f lhrdataX
docker run -itd --name lhrdataX -h lhrdataX \
   --net=ora-network --ip 172.72.7.77 \
   -p 9527:9527 -p 39389:3389 -p 33306:3306 \
   -v /sys/fs/cgroup:/sys/fs/cgroup \
   --privileged=true lhrbest/datax:2.0 \
   /usr/sbin/init

http://172.17.0.4:9527/index.html
admin/123456
Initializing data on the Oracle side
-- Initialize data on the source
/usr/local/swingbench/bin/oewizard -s -c /usr/local/swingbench/wizardconfigs/oewizard.xml -create \
   -version 2.0 -cs //172.72.7.34/lhrsdb -dba "sys as sysdba" -dbap lhr -dt thin \
   -ts users -u lhr -p lhr -allindexes -scale 0.0001 -tc 16 -v -cl

col TABLE_NAME format a30
SELECT a.table_name, a.num_rows FROM dba_tables a WHERE a.OWNER='LHR';
select object_type, count(*) from dba_objects where owner='LHR' group by object_type;
select object_type, status, count(*) from dba_objects where owner='LHR' group by object_type, status;
select sum(bytes)/1024/1024 from dba_segments where owner='LHR';

-- Check that the keys are usable: https://www.xmmup.com/ogg-01296-biaoyouzhujianhuoweiyijiandanshirengranshiyongquanbulielaijiexixing.html
-- Otherwise OGG reports errors after startup: OGG-01296, OGG-06439, OGG-01169 Encountered an update where all key columns for target table LHR.ORDER_ITEMS are not present.
select owner, constraint_name, constraint_type, status, validated from dba_constraints where owner='LHR' and VALIDATED='NOT VALIDATED';
select 'alter table lhr.'||TABLE_NAME||' enable validate constraint '||CONSTRAINT_NAME||';' from dba_constraints where owner='LHR' and VALIDATED='NOT VALIDATED';

-- Drop foreign keys
SELECT 'ALTER TABLE LHR.'||D.TABLE_NAME||' DROP constraint '||D.CONSTRAINT_NAME||';' FROM DBA_constraints d WHERE owner='LHR' and d.CONSTRAINT_TYPE='R';

sqlplus lhr/lhr@172.72.7.34:1521/lhrsdb
@/oggoracle/demo_ora_create.sql
@/oggoracle/demo_ora_insert.sql

SQL> select * from tcustmer;

CUST NAME                           CITY                 ST
---- ------------------------------ -------------------- --
WILL BG SOFTWARE CO.                SEATTLE              WA
JANE ROCKY FLYER INC.               DENVER               CO

-- Create two tables with CLOB and BLOB columns
sqlplus lhr/lhr@172.72.7.34:1521/lhrsdb
@/oggoracle/demo_ora_lob_create.sql
exec testing_lobs;
select * from lhr.TSRSLOB;

drop table IMAGE_LOB;
CREATE TABLE IMAGE_LOB (
  T_ID    VARCHAR2(5) NOT NULL,
  T_IMAGE BLOB,
  T_CLOB  CLOB
);

-- Insert BLOB files
CREATE OR REPLACE DIRECTORY D1 AS '/home/oracle/';
grant all on DIRECTORY D1 TO PUBLIC;

CREATE OR REPLACE NONEDITIONABLE PROCEDURE IMG_INSERT(TID VARCHAR2, FILENAME VARCHAR2, name VARCHAR2) AS
  F_LOB BFILE;
  B_LOB BLOB;
BEGIN
  INSERT INTO IMAGE_LOB (T_ID, T_IMAGE, T_CLOB)
  VALUES (TID, EMPTY_BLOB(), name)
  RETURN T_IMAGE INTO B_LOB;
  F_LOB := BFILENAME('D1', FILENAME);
  DBMS_LOB.FILEOPEN(F_LOB, DBMS_LOB.FILE_READONLY);
  DBMS_LOB.LOADFROMFILE(B_LOB, F_LOB, DBMS_LOB.GETLENGTH(F_LOB));
  DBMS_LOB.FILECLOSE(F_LOB);
  COMMIT;
END;
/

BEGIN
  IMG_INSERT('1','1.jpg','xmmup.com');
  IMG_INSERT('2','2.jpg','www.xmmup.com');
END;
/

select * from IMAGE_LOB;

-- All Oracle-side tables
SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
ADDRESSES                      TABLE
CARD_DETAILS                   TABLE
CUSTOMERS                      TABLE
IMAGE_LOB                      TABLE
INVENTORIES                    TABLE
LOGON                          TABLE
ORDERENTRY_METADATA            TABLE
ORDERS                         TABLE
ORDER_ITEMS                    TABLE
PRODUCTS                       VIEW
PRODUCT_DESCRIPTIONS           TABLE
PRODUCT_INFORMATION            TABLE
PRODUCT_PRICES                 VIEW
TCUSTMER                       TABLE
TCUSTORD                       TABLE
TSRSLOB                        TABLE
TTRGVAR                        TABLE
WAREHOUSES                     TABLE

18 rows selected.

SELECT COUNT(*) FROM LHR.ADDRESSES UNION ALL
SELECT COUNT(*) FROM LHR.CARD_DETAILS UNION ALL
SELECT COUNT(*) FROM LHR.CUSTOMERS UNION ALL
SELECT COUNT(*) FROM LHR.IMAGE_LOB UNION ALL
SELECT COUNT(*) FROM LHR.INVENTORIES UNION ALL
SELECT COUNT(*) FROM LHR.LOGON UNION ALL
SELECT COUNT(*) FROM LHR.ORDERENTRY_METADATA UNION ALL
SELECT COUNT(*) FROM LHR.ORDERS UNION ALL
SELECT COUNT(*) FROM LHR.ORDER_ITEMS UNION ALL
SELECT COUNT(*) FROM LHR.PRODUCT_DESCRIPTIONS UNION ALL
SELECT COUNT(*) FROM LHR.PRODUCT_INFORMATION UNION ALL
SELECT COUNT(*) FROM LHR.TCUSTMER UNION ALL
SELECT COUNT(*) FROM LHR.TCUSTORD UNION ALL
SELECT COUNT(*) FROM LHR.TSRSLOB UNION ALL
SELECT COUNT(*) FROM LHR.TTRGVAR UNION ALL
SELECT COUNT(*) FROM LHR.WAREHOUSES;

  COUNT(*)
----------
       281
       281
       231
         4
    900724
       575
         4
       424
      1642
      1000
      1000
         2
         2
         1
         0
      1000

16 rows selected.
In the end, the Oracle side holds 16 tables and 2 views; two of the tables, TSRSLOB and IMAGE_LOB, contain BLOB and CLOB columns.
Generating the MySQL DDL
You can use Navicat's Data Transfer feature (or another tool) to generate MySQL-style CREATE TABLE statements directly from the Oracle schema, then load them:
mysql -uroot -plhr -h 172.72.7.35 -D lhrdb -f < ddl.sql

mysql> show tables;
+----------------------+
| Tables_in_lhrdb      |
+----------------------+
| ADDRESSES            |
| CARD_DETAILS         |
| CUSTOMERS            |
| IMAGE_LOB            |
| INVENTORIES          |
| LOGON                |
| ORDERENTRY_METADATA  |
| ORDERS               |
| ORDER_ITEMS          |
| PRODUCT_DESCRIPTIONS |
| PRODUCT_INFORMATION  |
| TCUSTMER             |
| TCUSTORD             |
| TSRSLOB              |
| TTRGVAR              |
| WAREHOUSES           |
+----------------------+
16 rows in set (0.01 sec)
The DDL statements are as follows:
/*
 Navicat Premium Data Transfer

 Source Server         : ora12c
 Source Server Type    : Oracle
 Source Server Version : 120200
 Source Host           : 192.168.1.35:1526
 Source Schema         : LHR

 Target Server Type    : MySQL
 Target Server Version : 80099
 File Encoding         : 65001

 Date: 28/06/2022 15:19:41
*/

SET NAMES utf8;
SET FOREIGN_KEY_CHECKS = 0;

-- ----------------------------
-- Table structure for ADDRESSES
-- ----------------------------
DROP TABLE IF EXISTS `ADDRESSES`;
CREATE TABLE `ADDRESSES` (
  `ADDRESS_ID` decimal(12, 0) NOT NULL,
  `CUSTOMER_ID` decimal(12, 0) NOT NULL,
  `DATE_CREATED` datetime NOT NULL,
  `HOUSE_NO_OR_NAME` varchar(60) NULL,
  `STREET_NAME` varchar(60) NULL,
  `TOWN` varchar(60) NULL,
  `COUNTY` varchar(60) NULL,
  `COUNTRY` varchar(60) NULL,
  `POST_CODE` varchar(12) NULL,
  `ZIP_CODE` varchar(12) NULL,
  PRIMARY KEY (`ADDRESS_ID`),
  INDEX `ADDRESS_CUST_IX`(`CUSTOMER_ID` ASC)
);

-- ----------------------------
-- Table structure for CARD_DETAILS
-- ----------------------------
DROP TABLE IF EXISTS `CARD_DETAILS`;
CREATE TABLE `CARD_DETAILS` (
  `CARD_ID` decimal(12, 0) NOT NULL,
  `CUSTOMER_ID` decimal(12, 0) NOT NULL,
  `CARD_TYPE` varchar(30) NOT NULL,
  `CARD_NUMBER` decimal(12, 0) NOT NULL,
  `EXPIRY_DATE` datetime NOT NULL,
  `IS_VALID` varchar(1) NOT NULL,
  `SECURITY_CODE` decimal(6, 0) NULL,
  PRIMARY KEY (`CARD_ID`),
  INDEX `CARDDETAILS_CUST_IX`(`CUSTOMER_ID` ASC)
);

-- ----------------------------
-- Table structure for CUSTOMERS
-- ----------------------------
DROP TABLE IF EXISTS `CUSTOMERS`;
CREATE TABLE `CUSTOMERS` (
  `CUSTOMER_ID` decimal(12, 0) NOT NULL,
  `CUST_FIRST_NAME` varchar(40) NOT NULL,
  `CUST_LAST_NAME` varchar(40) NOT NULL,
  `NLS_LANGUAGE` varchar(3) NULL,
  `NLS_TERRITORY` varchar(30) NULL,
  `CREDIT_LIMIT` decimal(9, 2) NULL,
  `CUST_EMAIL` varchar(100) NULL,
  `ACCOUNT_MGR_ID` decimal(12, 0) NULL,
  `CUSTOMER_SINCE` datetime NULL,
  `CUSTOMER_CLASS` varchar(40) NULL,
  `SUGGESTIONS` varchar(40) NULL,
  `DOB` datetime NULL,
  `MAILSHOT` varchar(1) NULL,
  `PARTNER_MAILSHOT` varchar(1) NULL,
  `PREFERRED_ADDRESS` decimal(12, 0) NULL,
  `PREFERRED_CARD` decimal(12, 0) NULL,
  PRIMARY KEY (`CUSTOMER_ID`),
  INDEX `CUST_ACCOUNT_MANAGER_IX`(`ACCOUNT_MGR_ID` ASC),
  INDEX `CUST_DOB_IX`(`DOB` ASC),
  INDEX `CUST_EMAIL_IX`(`CUST_EMAIL` ASC)
);

-- ----------------------------
-- Table structure for IMAGE_LOB
-- ----------------------------
DROP TABLE IF EXISTS `IMAGE_LOB`;
CREATE TABLE `IMAGE_LOB` (
  `T_ID` varchar(5) NOT NULL,
  `T_IMAGE` longblob NULL,
  `T_CLOB` longtext NULL
);

-- ----------------------------
-- Table structure for INVENTORIES
-- ----------------------------
DROP TABLE IF EXISTS `INVENTORIES`;
CREATE TABLE `INVENTORIES` (
  `PRODUCT_ID` decimal(6, 0) NOT NULL,
  `WAREHOUSE_ID` decimal(6, 0) NOT NULL,
  `QUANTITY_ON_HAND` decimal(8, 0) NOT NULL,
  PRIMARY KEY (`PRODUCT_ID`, `WAREHOUSE_ID`),
  INDEX `INV_PRODUCT_IX`(`PRODUCT_ID` ASC),
  INDEX `INV_WAREHOUSE_IX`(`WAREHOUSE_ID` ASC)
);

-- ----------------------------
-- Table structure for LOGON
-- ----------------------------
DROP TABLE IF EXISTS `LOGON`;
CREATE TABLE `LOGON` (
  `LOGON_ID` decimal(65, 30) NOT NULL,
  `CUSTOMER_ID` decimal(65, 30) NOT NULL,
  `LOGON_DATE` datetime NULL
);

-- ----------------------------
-- Table structure for ORDER_ITEMS
-- ----------------------------
DROP TABLE IF EXISTS `ORDER_ITEMS`;
CREATE TABLE `ORDER_ITEMS` (
  `ORDER_ID` decimal(12, 0) NOT NULL,
  `LINE_ITEM_ID` decimal(3, 0) NOT NULL,
  `PRODUCT_ID` decimal(6, 0) NOT NULL,
  `UNIT_PRICE` decimal(8, 2) NULL,
  `QUANTITY` decimal(8, 0) NULL,
  `DISPATCH_DATE` datetime NULL,
  `RETURN_DATE` datetime NULL,
  `GIFT_WRAP` varchar(20) NULL,
  `CONDITION` varchar(20) NULL,
  `SUPPLIER_ID` decimal(6, 0) NULL,
  `ESTIMATED_DELIVERY` datetime NULL,
  PRIMARY KEY (`ORDER_ID`, `LINE_ITEM_ID`),
  INDEX `ITEM_ORDER_IX`(`ORDER_ID` ASC),
  INDEX `ITEM_PRODUCT_IX`(`PRODUCT_ID` ASC)
);

-- ----------------------------
-- Table structure for ORDERENTRY_METADATA
-- ----------------------------
DROP TABLE IF EXISTS `ORDERENTRY_METADATA`;
CREATE TABLE `ORDERENTRY_METADATA` (
  `METADATA_KEY` varchar(30) NULL,
  `METADATA_VALUE` varchar(30) NULL
);

-- ----------------------------
-- Table structure for ORDERS
-- ----------------------------
DROP TABLE IF EXISTS `ORDERS`;
CREATE TABLE `ORDERS` (
  `ORDER_ID` decimal(12, 0) NOT NULL,
  `ORDER_DATE` datetime NOT NULL,
  `ORDER_MODE` varchar(8) NULL,
  `CUSTOMER_ID` decimal(12, 0) NOT NULL,
  `ORDER_STATUS` decimal(2, 0) NULL,
  `ORDER_TOTAL` decimal(8, 2) NULL,
  `SALES_REP_ID` decimal(6, 0) NULL,
  `PROMOTION_ID` decimal(6, 0) NULL,
  `WAREHOUSE_ID` decimal(6, 0) NULL,
  `DELIVERY_TYPE` varchar(15) NULL,
  `COST_OF_DELIVERY` decimal(6, 0) NULL,
  `WAIT_TILL_ALL_AVAILABLE` varchar(15) NULL,
  `DELIVERY_ADDRESS_ID` decimal(12, 0) NULL,
  `CUSTOMER_CLASS` varchar(30) NULL,
  `CARD_ID` decimal(12, 0) NULL,
  `INVOICE_ADDRESS_ID` decimal(12, 0) NULL,
  PRIMARY KEY (`ORDER_ID`),
  INDEX `ORD_CUSTOMER_IX`(`CUSTOMER_ID` ASC),
  INDEX `ORD_ORDER_DATE_IX`(`ORDER_DATE` ASC),
  INDEX `ORD_SALES_REP_IX`(`SALES_REP_ID` ASC),
  INDEX `ORD_WAREHOUSE_IX`(`WAREHOUSE_ID` ASC, `ORDER_STATUS` ASC)
);

-- ----------------------------
-- Table structure for PRODUCT_DESCRIPTIONS
-- ----------------------------
DROP TABLE IF EXISTS `PRODUCT_DESCRIPTIONS`;
CREATE TABLE `PRODUCT_DESCRIPTIONS` (
  `PRODUCT_ID` decimal(6, 0) NOT NULL,
  `LANGUAGE_ID` varchar(3) NOT NULL,
  `TRANSLATED_NAME` varchar(50) NOT NULL,
  `TRANSLATED_DESCRIPTION` text NOT NULL,
  PRIMARY KEY (`PRODUCT_ID`, `LANGUAGE_ID`),
  UNIQUE INDEX `PRD_DESC_PK`(`PRODUCT_ID` ASC, `LANGUAGE_ID` ASC),
  INDEX `PROD_NAME_IX`(`TRANSLATED_NAME` ASC)
);

-- ----------------------------
-- Table structure for PRODUCT_INFORMATION
-- ----------------------------
DROP TABLE IF EXISTS `PRODUCT_INFORMATION`;
CREATE TABLE `PRODUCT_INFORMATION` (
  `PRODUCT_ID` decimal(6, 0) NOT NULL,
  `PRODUCT_NAME` varchar(50) NOT NULL,
  `PRODUCT_DESCRIPTION` text NULL,
  `CATEGORY_ID` decimal(4, 0) NOT NULL,
  `WEIGHT_CLASS` decimal(1, 0) NULL,
  `WARRANTY_PERIOD` longtext NULL,
  `SUPPLIER_ID` decimal(6, 0) NULL,
  `PRODUCT_STATUS` varchar(20) NULL,
  `LIST_PRICE` decimal(8, 2) NULL,
  `MIN_PRICE` decimal(8, 2) NULL,
  `CATALOG_URL` varchar(50) NULL,
  PRIMARY KEY (`PRODUCT_ID`),
  INDEX `PROD_CATEGORY_IX`(`CATEGORY_ID` ASC),
  INDEX `PROD_SUPPLIER_IX`(`SUPPLIER_ID` ASC)
);

-- ----------------------------
-- Table structure for TCUSTMER
-- ----------------------------
DROP TABLE IF EXISTS `TCUSTMER`;
CREATE TABLE `TCUSTMER` (
  `CUST_CODE` varchar(4) NOT NULL,
  `NAME` varchar(30) NULL,
  `CITY` varchar(20) NULL,
  `STATE` char(2) NULL,
  PRIMARY KEY (`CUST_CODE`)
);

-- ----------------------------
-- Table structure for TCUSTORD
-- ----------------------------
DROP TABLE IF EXISTS `TCUSTORD`;
CREATE TABLE `TCUSTORD` (
  `CUST_CODE` varchar(4) NOT NULL,
  `ORDER_DATE` datetime NOT NULL,
  `PRODUCT_CODE` varchar(8) NOT NULL,
  `ORDER_ID` decimal(65, 30) NOT NULL,
  `PRODUCT_PRICE` decimal(8, 2) NULL,
  `PRODUCT_AMOUNT` decimal(6, 0) NULL,
  `TRANSACTION_ID` decimal(65, 30) NULL,
  PRIMARY KEY (`CUST_CODE`, `ORDER_DATE`, `PRODUCT_CODE`, `ORDER_ID`)
);

-- ----------------------------
-- Table structure for TSRSLOB
-- ----------------------------
DROP TABLE IF EXISTS `TSRSLOB`;
CREATE TABLE `TSRSLOB` (
  `LOB_RECORD_KEY` decimal(65, 30) NOT NULL,
  `LOB1_CLOB` longtext NULL,
  `LOB2_BLOB` longblob NULL,
  PRIMARY KEY (`LOB_RECORD_KEY`)
);

-- ----------------------------
-- Table structure for TTRGVAR
-- ----------------------------
DROP TABLE IF EXISTS `TTRGVAR`;
CREATE TABLE `TTRGVAR` (
  `LOB_RECORD_KEY` decimal(65, 30) NOT NULL,
  `LOB1_VCHAR0` text NULL,
  `LOB1_VCHAR1` text NULL,
  `LOB1_VCHAR2` text NULL,
  `LOB1_VCHAR3` varchar(200) NULL,
  `LOB1_VCHAR4` text NULL,
  `LOB1_VCHAR5` text NULL,
  `LOB1_VCHAR6` varchar(100) NULL,
  `LOB1_VCHAR7` varchar(250) NULL,
  `LOB1_VCHAR8` text NULL,
  `LOB1_VCHAR9` text NULL,
  `LOB2_VCHAR0` text NULL,
  `LOB2_VCHAR1` text NULL,
  `LOB2_VCHAR2` text NULL,
  `LOB2_VCHAR3` text NULL,
  `LOB2_VCHAR4` text NULL,
  `LOB2_VCHAR5` text NULL,
  `LOB2_VCHAR6` text NULL,
  `LOB2_VCHAR7` varchar(150) NULL,
  `LOB2_VCHAR8` text NULL,
  `LOB2_VCHAR9` varchar(50) NULL,
  PRIMARY KEY (`LOB_RECORD_KEY`)
);

-- ----------------------------
-- Table structure for WAREHOUSES
-- ----------------------------
DROP TABLE IF EXISTS `WAREHOUSES`;
CREATE TABLE `WAREHOUSES` (
  `WAREHOUSE_ID` decimal(6, 0) NOT NULL,
  `WAREHOUSE_NAME` varchar(35) NULL,
  `LOCATION_ID` decimal(4, 0) NULL,
  PRIMARY KEY (`WAREHOUSE_ID`),
  INDEX `WHS_LOCATION_IX`(`LOCATION_ID` ASC)
);

SET FOREIGN_KEY_CHECKS = 1;
DataX-Web configuration
1. Create a project
2. Add the Oracle and MySQL data sources and verify that the connection tests pass
jdbc:oracle:thin:@//172.72.7.34:1521/lhrsdb

jdbc:mysql://172.72.7.35:3306/lhrdb
3. Create a DataX job template
4. Build jobs in batch
Note: each job build covers the migration of only one table, so we need to use the batch-build option.
Then select the job template, and finally create the jobs in batch.
Once the jobs are created, open Job Management:
As you can see, there is essentially one job per table. After the jobs are built, their execution schedule is disabled by default.
We can enable them in batch by updating the backing metadata tables directly:
mysql -uroot -plhr -D datax

update job_info t set t.job_cron='0 18 15 4 7 ? 2022';
update job_info t set t.trigger_status=1;

-- Logs
tailf /usr/local/datax-web-2.1.2/modules/datax-executor/bin/console.out
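The Quartz cron expression `0 18 15 4 7 ? 2022` above fires exactly once, at 15:18:00 on 2022-07-04 (fields: second, minute, hour, day-of-month, month, day-of-week, year). If you script this step, a small sketch (assuming GNU `date` and the same `datax` metadata database as above) can build a one-shot expression a few minutes in the future instead of hard-coding it:

```shell
# Build a one-shot Quartz cron expression (sec min hour dom mon dow year)
# for five minutes from now. Assumes GNU date; %-S etc. suppress zero-padding.
RUN_AT=$(date -d '+5 minutes' '+%-S %-M %-H %-d %-m ? %Y')

# SQL that schedules and enables every datax-web job at that moment;
# pipe it into: mysql -uroot -plhr -D datax
SQL="update job_info set job_cron='${RUN_AT}', trigger_status=1;"
echo "$SQL"
```

This avoids editing the cron string by hand each time you want to re-fire the whole batch.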
After the update, the UI shows:
After 15:18, refresh the page; the results look like this:
We find that every table migrated successfully except ORDERS and PRODUCT_INFORMATION.
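Before digging into the failed jobs, it is worth spot-checking the tables that did succeed. The sketch below (not from the original article; it assumes the `sqlplus` and `mysql` clients plus the credentials used throughout this setup) compares the row count of a table in the Oracle source and the MySQL target:

```shell
# Compare one table's row count in the Oracle source vs. the MySQL target.
# Assumes sqlplus/mysql are on PATH and the lhr/lhr and root/lhr credentials
# from the environment above.
compare_counts() {
  local t=$1 ora my
  # Count on the Oracle side; strip sqlplus whitespace from the result.
  ora=$(printf 'set heading off feedback off\nselect count(*) from LHR.%s;\n' "$t" |
        sqlplus -s lhr/lhr@172.72.7.34:1521/lhrsdb | tr -d '[:space:]')
  # Count on the MySQL side; -N suppresses the column header.
  my=$(mysql -N -uroot -plhr -h 172.72.7.35 -D lhrdb -e "select count(*) from \`$t\`;")
  printf '%s: oracle=%s mysql=%s\n' "$t" "$ora" "$my"
}

# Example usage:
# for t in ADDRESSES CUSTOMERS WAREHOUSES; do compare_counts "$t"; done
```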
Fixing the PRODUCT_INFORMATION error
Log output:
2022-07-04 15:18:00 [JobThread.run-130] ----------- datax-web job execute start -----------
----------- Param:
2022-07-04 15:18:00 [BuildCommand.buildDataXParam-100] ------------------Command parameters:
2022-07-04 15:18:01 [ExecutorJobHandler.execute-57] ------------------DataX process id: 4782
2022-07-04 15:18:01 [AnalysisStatistics.analysisStatisticsLog-53]
DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.

2022-07-04 15:18:01 [ProcessCallbackThread.callbackLog-186] ----------- datax-web job callback finish.
2022-07-04 15:18:02.444 [main] INFO  MessageSource - JVM TimeZone: GMT+08:00, Locale: zh_CN
2022-07-04 15:18:02.449 [main] INFO  MessageSource - use Locale: zh_CN timeZone: sun.util.calendar.ZoneInfo[id="GMT+08:00",offset=28800000,dstSavings=0,useDaylight=false,transitions=0,lastRule=null]
2022-07-04 15:18:02.684 [main] INFO  VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2022-07-04 15:18:02.693 [main] INFO  Engine - the machine info =>

    osInfo:  Red Hat, Inc. 1.8 25.332-b09
    jvmInfo: Linux amd64 3.10.0-1127.10.1.el7.x86_64
    cpu num: 16

    totalPhysicalMemory:            -0.00G
    freePhysicalMemory:             -0.00G
    maxFileDescriptorCount:         -1
    currentOpenFileDescriptorCount: -1

    GC Names [PS MarkSweep, PS Scavenge]

    MEMORY_NAME            | allocation_size | init_size
    PS Eden Space          | 256.00MB        | 256.00MB
    Code Cache             | 240.00MB        | 2.44MB
    Compressed Class Space | 1,024.00MB      | 0.00MB
    PS Survivor Space      | 42.50MB         | 42.50MB
    PS Old Gen             | 683.00MB        | 683.00MB
    Metaspace              | -0.00MB         | 0.00MB

2022-07-04 15:18:02.724 [main] INFO  Engine -
{
    "content":[
        {
            "reader":{
                "parameter":{
                    "password":"***",
                    "column":[
                        "\"PRODUCT_ID\"",
                        "\"PRODUCT_NAME\"",
                        "\"PRODUCT_DESCRIPTION\"",
                        "\"CATEGORY_ID\"",
                        "\"WEIGHT_CLASS\"",
                        "\"WARRANTY_PERIOD\"",
                        "\"SUPPLIER_ID\"",
                        "\"PRODUCT_STATUS\"",
                        "\"LIST_PRICE\"",
                        "\"MIN_PRICE\"",
                        "\"CATALOG_URL\""
                    ],
                    "connection":[
                        {
                            "jdbcUrl":[
                                "jdbc:oracle:thin:@//172.72.7.34:1521/lhrsdb"
                            ],
                            "table":[
                                "PRODUCT_INFORMATION"
                            ]
                        }
                    ],
                    "splitPk":"",
                    "username":"lhr"
                },
                "name":"oraclereader"
            },
            "writer":{
                "parameter":{
                    "password":"***",
                    "column":[
                        "`PRODUCT_ID`",
                        "`PRODUCT_NAME`",
                        "`PRODUCT_DESCRIPTION`",
                        "`CATEGORY_ID`",
                        "`WEIGHT_CLASS`",
                        "`WARRANTY_PERIOD`",
                        "`SUPPLIER_ID`",
                        "`PRODUCT_STATUS`",
                        "`LIST_PRICE`",
                        "`MIN_PRICE`",
                        "`CATALOG_URL`"
                    ],
                    "connection":[
                        {
                            "jdbcUrl":"jdbc:mysql://172.72.7.35:3306/lhrdb",
                            "table":[
                                "PRODUCT_INFORMATION"
                            ]
                        }
                    ],
                    "username":"root"
                },
                "name":"mysqlwriter"
            }
        }
    ],
    "setting":{
        "errorLimit":{
            "record":0,
            "percentage":0.02
        },
        "speed":{
            "byte":1048576,
            "channel":3
        }
    }
}

2022-07-04 15:18:02.820 [main] WARN  Engine - prioriy set to 0, because NumberFormatException, the value is: null
2022-07-04 15:18:02.823 [main] INFO  PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2022-07-04 15:18:02.823 [main] INFO  JobContainer - DataX jobContainer starts job.
2022-07-04 15:18:02.825 [main] INFO  JobContainer - Set jobId = 0
2022-07-04 15:18:03.705 [job-0] INFO  OriginalConfPretreatmentUtil - Available jdbcUrl:jdbc:oracle:thin:@//172.72.7.34:1521/lhrsdb.
2022-07-04 15:18:03.882 [job-0] INFO  OriginalConfPretreatmentUtil - table:[PRODUCT_INFORMATION] has columns:[PRODUCT_ID,PRODUCT_NAME,PRODUCT_DESCRIPTION,CATEGORY_ID,WEIGHT_CLASS,WARRANTY_PERIOD,SUPPLIER_ID,PRODUCT_STATUS,LIST_PRICE,MIN_PRICE,CATALOG_URL].
2022-07-04 15:18:06.236 [job-0] INFO  OriginalConfPretreatmentUtil - table:[PRODUCT_INFORMATION] all columns:[
PRODUCT_ID,PRODUCT_NAME,PRODUCT_DESCRIPTION,CATEGORY_ID,WEIGHT_CLASS,WARRANTY_PERIOD,SUPPLIER_ID,PRODUCT_STATUS,LIST_PRICE,MIN_PRICE,CATALOG_URL
].
2022-07-04 15:18:06.372 [job-0] INFO  OriginalConfPretreatmentUtil - Write data [
INSERT INTO %s (`PRODUCT_ID`,`PRODUCT_NAME`,`PRODUCT_DESCRIPTION`,`CATEGORY_ID`,`WEIGHT_CLASS`,`WARRANTY_PERIOD`,`SUPPLIER_ID`,`PRODUCT_STATUS`,`LIST_PRICE`,`MIN_PRICE`,`CATALOG_URL`) VALUES(?,?,?,?,?,?,?,?,?,?,?)
], which jdbcUrl like:[jdbc:mysql://172.72.7.35:3306/lhrdb?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true&tinyInt1isBit=false]
2022-07-04 15:18:06.373 [job-0] INFO  JobContainer - jobContainer starts to do prepare ...
2022-07-04 15:18:06.373 [job-0] INFO  JobContainer - DataX Reader.Job [oraclereader] do prepare work .
2022-07-04 15:18:06.391 [job-0] INFO  JobContainer - DataX Writer.Job [mysqlwriter] do prepare work .
2022-07-04 15:18:06.392 [job-0] INFO  JobContainer - jobContainer starts to do split ...
2022-07-04 15:18:06.392 [job-0] INFO  JobContainer - Job set Max-Byte-Speed to 1048576 bytes.
2022-07-04 15:18:06.414 [job-0] INFO  JobContainer - DataX Reader.Job [oraclereader] splits to [1] tasks.
2022-07-04 15:18:06.415 [job-0] INFO  JobContainer - DataX Writer.Job [mysqlwriter] splits to [1] tasks.
2022-07-04 15:18:06.450 [job-0] INFO  JobContainer - jobContainer starts to do schedule ...
2022-07-04 15:18:06.457 [job-0] INFO  JobContainer - Scheduler starts [1] taskGroups.
2022-07-04 15:18:06.460 [job-0] INFO  JobContainer - Running by standalone Mode.
2022-07-04 15:18:06.483 [taskGroup-0] INFO  TaskGroupContainer - taskGroupId=[0] start [1] channels for [1] tasks.
2022-07-04 15:18:06.488 [taskGroup-0] INFO  Channel - Channel set byte_speed_limit to 1048576.
2022-07-04 15:18:06.489 [taskGroup-0] INFO  Channel - Channel set record_speed_limit to -1, No tps activated.
2022-07-04 15:18:06.542 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2022-07-04 15:18:06.657 [0-0-0-reader] INFO  CommonRdbmsReader$Task - Begin to read record by Sql: [select "PRODUCT_ID","PRODUCT_NAME","PRODUCT_DESCRIPTION","CATEGORY_ID","WEIGHT_CLASS","WARRANTY_PERIOD","SUPPLIER_ID","PRODUCT_STATUS","LIST_PRICE","MIN_PRICE","CATALOG_URL" from PRODUCT_INFORMATION
] jdbcUrl:[jdbc:oracle:thin:@//172.72.7.34:1521/lhrsdb].
2022-07-04 15:18:06.908 [0-0-0-reader] ERROR StdoutPluginCollector -
com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-12], Description:[Unsupported database data type. Please check the database types and versions that DataX supports.]. - The column configuration in your job file is invalid, because DataX cannot read this column type. Column: [WARRANTY_PERIOD], SQL type: [-103], Java type: [oracle.sql.INTERVALYM]. Please try converting it with a database function into a type DataX supports, or skip this column.
	at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:30) ~[datax-common-0.0.1-SNAPSHOT.jar:na]
	at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.buildRecord(CommonRdbmsReader.java:329) [plugin-rdbms-util-0.0.1-SNAPSHOT.jar:na]
	at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.transportOneRecord(CommonRdbmsReader.java:237) [plugin-rdbms-util-0.0.1-SNAPSHOT.jar:na]
	at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.startRead(CommonRdbmsReader.java:209) [plugin-rdbms-util-0.0.1-SNAPSHOT.jar:na]
	at com.alibaba.datax.plugin.reader.oraclereader.OracleReader$Task.startRead(OracleReader.java:110) [oraclereader-0.0.1-SNAPSHOT.jar:na]
	at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:57) [datax-core-0.0.1-SNAPSHOT.jar:na]
	at java.lang.Thread.run(Thread.java:750) [na:1.8.0_332]
2022-07-04 15:18:06.913 [0-0-0-reader] ERROR StdoutPluginCollector - dirty data:
{"exception":"Code:[DBUtilErrorCode-12], Description:[Unsupported database data type. Please check the database types and versions that DataX supports.]. - The column configuration in your job file is invalid, because DataX cannot read this column type. Column: [WARRANTY_PERIOD], SQL type: [-103], Java type: [oracle.sql.INTERVALYM]. Please try converting it with a database function into a type DataX supports, or skip this column.","record":[{"byteSize":3,"index":0,"rawData":"116","type":"DOUBLE"},{"byteSize":33,"index":1,"rawData":"GI1UIwLTmiYIq58xRuA2R1zol 6 mAeDR","type":"STRING"},{"byteSize":43,"index":2,"rawData":"Qs qeVNnksYSRmWmyKAOttUaVoDMeM zFnyczzTnyME","type":"STRING"},{"byteSize":2,"index":3,"rawData":"53","type":"DOUBLE"},{"byteSize":1,"index":4,"rawData":"7","type":"DOUBLE"}],"type":"reader"}
2022-07-04 15:18:06.918 [0-0-0-reader] ERROR ReaderRunner - Reader runner Received Exceptions:
com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-07], Description:[Failed to read data from the database. Please check your column/table/where/querySql configuration, or ask your DBA for help.]. - SQL executed: select "PRODUCT_ID","PRODUCT_NAME","PRODUCT_DESCRIPTION","CATEGORY_ID","WEIGHT_CLASS","WARRANTY_PERIOD","SUPPLIER_ID","PRODUCT_STATUS","LIST_PRICE","MIN_PRICE","CATALOG_URL" from PRODUCT_INFORMATION. Detailed error: com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-12], Description:[Unsupported database data type. Please check the database types and versions that DataX supports.]. - The column configuration in your job file is invalid, because DataX cannot read this column type. Column: [WARRANTY_PERIOD], SQL type: [-103], Java type: [oracle.sql.INTERVALYM]. Please try converting it with a database function into a type DataX supports, or skip this column.
2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:30) ~[datax-common-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.RdbmsException.asQueryException(RdbmsException.java:93) ~[plugin-rdbms-util-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.startRead(CommonRdbmsReader.java:220) ~[plugin-rdbms-util-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.reader.oraclereader.OracleReader$Task.startRead(OracleReader.java:110) ~[oraclereader-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:57) ~[datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at java.lang.Thread.run(Thread.java:750) [na:1.8.0_332] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16.497 [job-0] INFO StandAloneJobContainerCommunicator - Total 1 records, 82 bytes | Speed 8B/s, 0 records/s | Error 1 records, 82 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00% 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16.498 [job-0] ERROR JobContainer - 运行scheduler 模式[standalone]出错. 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16.498 [job-0] ERROR JobContainer - Exception when job run 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] com.alibaba.datax.common.exception.DataXException: Code:[Framework-14], Description:[DataX传输脏数据超过用户预期,该错误通常是由于源端数据存在较多业务脏数据导致,请仔细检查DataX汇报的脏数据日志信息, 或者您可以适当调大脏数据阈值 .]. - 脏数据条数检查不通过,限制是[0]条,但实际上捕获了[1]条. 
2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:30) ~[datax-common-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.util.ErrorRecordChecker.checkRecordLimit(ErrorRecordChecker.java:58) ~[datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.scheduler.AbstractScheduler.schedule(AbstractScheduler.java:89) ~[datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.schedule(JobContainer.java:535) ~[datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:119) ~[datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.start(Engine.java:93) [datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.entry(Engine.java:175) [datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.main(Engine.java:208) [datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16.499 [job-0] INFO StandAloneJobContainerCommunicator - Total 1 records, 82 bytes | Speed 82B/s, 1 records/s | Error 1 records, 82 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00% 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16.501 [job-0] ERROR Engine - 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 经DataX智能分析,该任务最可能的错误原因是: 2022-07-04 
15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] com.alibaba.datax.common.exception.DataXException: Code:[Framework-14], Description:[DataX传输脏数据超过用户预期,该错误通常是由于源端数据存在较多业务脏数据导致,请仔细检查DataX汇报的脏数据日志信息, 或者您可以适当调大脏数据阈值 .]. - 脏数据条数检查不通过,限制是[0]条,但实际上捕获了[1]条. 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:30) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.util.ErrorRecordChecker.checkRecordLimit(ErrorRecordChecker.java:58) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.scheduler.AbstractScheduler.schedule(AbstractScheduler.java:89) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.schedule(JobContainer.java:535) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:119) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.start(Engine.java:93) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.entry(Engine.java:175) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.main(Engine.java:208) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:04 GMT+08:00 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. 
You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:06 GMT+08:00 2022 WARN: Caught while disconnecting... 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] EXCEPTION STACK TRACE: 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] ** BEGIN NESTED EXCEPTION ** 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] javax.net.ssl.SSLException 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] MESSAGE: closing inbound before receiving peer's close_notify 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] STACKTRACE: 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] javax.net.ssl.SSLException: closing inbound before receiving peer's close_notify 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:740) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:719) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.mysql.jdbc.MysqlIO.quit(MysqlIO.java:2249) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.mysql.jdbc.ConnectionImpl.realClose(ConnectionImpl.java:4232) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at 
com.mysql.jdbc.ConnectionImpl.close(ConnectionImpl.java:1472) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.DBUtil.closeDBResources(DBUtil.java:492) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.DBUtil.getTableColumnsByConn(DBUtil.java:526) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.dealColumnConf(OriginalConfPretreatmentUtil.java:105) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.dealColumnConf(OriginalConfPretreatmentUtil.java:140) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.doPretreatment(OriginalConfPretreatmentUtil.java:35) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.writer.CommonRdbmsWriter$Job.init(CommonRdbmsWriter.java:41) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.writer.mysqlwriter.MysqlWriter$Job.init(MysqlWriter.java:31) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.initJobWriter(JobContainer.java:704) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.init(JobContainer.java:304) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:113) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.start(Engine.java:93) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.entry(Engine.java:175) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at 
com.alibaba.datax.core.Engine.main(Engine.java:208) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] ** END NESTED EXCEPTION ** 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:06 GMT+08:00 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:06 GMT+08:00 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:06 GMT+08:00 2022 WARN: Caught while disconnecting... 
2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] EXCEPTION STACK TRACE: 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] ** BEGIN NESTED EXCEPTION ** 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] javax.net.ssl.SSLException 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] MESSAGE: closing inbound before receiving peer's close_notify 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] STACKTRACE: 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] javax.net.ssl.SSLException: closing inbound before receiving peer's close_notify 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:740) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:719) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.mysql.jdbc.MysqlIO.quit(MysqlIO.java:2249) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.mysql.jdbc.ConnectionImpl.realClose(ConnectionImpl.java:4232) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.mysql.jdbc.ConnectionImpl.close(ConnectionImpl.java:1472) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.DBUtil.closeDBResources(DBUtil.java:492) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at 
com.alibaba.datax.plugin.rdbms.writer.CommonRdbmsWriter$Task.prepare(CommonRdbmsWriter.java:259) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.writer.mysqlwriter.MysqlWriter$Task.prepare(MysqlWriter.java:73) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.taskgroup.runner.WriterRunner.run(WriterRunner.java:50) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at java.lang.Thread.run(Thread.java:750) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] ** END NESTED EXCEPTION ** 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:06 GMT+08:00 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] Exception in thread "taskGroup-0" com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-07], Description:[读取数据库数据失败. 请检查您的配置的 column/table/where/querySql或者向 DBA 寻求帮助.]. 
- 执行的SQL为: select "PRODUCT_ID","PRODUCT_NAME","PRODUCT_DESCRIPTION","CATEGORY_ID","WEIGHT_CLASS","WARRANTY_PERIOD","SUPPLIER_ID","PRODUCT_STATUS","LIST_PRICE","MIN_PRICE","CATALOG_URL" from PRODUCT_INFORMATION 具体错误信息为:com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-12], Description:[不支持的数据库类型. 请注意查看 DataX 已经支持的数据库类型以及数据库版本.]. - 您的配置文件中的列配置信息有误. 因为DataX 不支持数据库读取这种字段类型. 字段名:[WARRANTY_PERIOD], 字段名称:[-103], 字段Java类型:[oracle.sql.INTERVALYM]. 请尝试使用数据库函数将其转换datax支持的类型 或者不同步该字段 . 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:30) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.RdbmsException.asQueryException(RdbmsException.java:93) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.startRead(CommonRdbmsReader.java:220) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.reader.oraclereader.OracleReader$Task.startRead(OracleReader.java:110) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:57) 2022-07-04 15:18:16 [AnalysisStatistics.analysisStatisticsLog-53] at java.lang.Thread.run(Thread.java:750) 2022-07-04 15:18:16 [JobThread.run-165] <br>----------- datax-web job execute end(finish) -----------<br>----------- ReturnT:ReturnT [code=500, msg=command exit value(1) is failed, content=null] 2022-07-04 15:18:16 [TriggerCallbackThread.callbackLog-186] <br>----------- datax-web job callback finish. |
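The Framework-14 abort at the end of the log is a knock-on effect: the job's errorLimit (shown as `"record": 0` in the job config dump) tolerates zero dirty records, so the single unreadable row fails the whole run. If you want a job to skip a few bad records instead of aborting, the threshold can be raised in the job JSON — a minimal sketch of just the relevant setting (surrounding job structure abbreviated, values illustrative):

```json
{
    "job": {
        "setting": {
            "errorLimit": {
                "record": 10,
                "percentage": 0.02
            }
        }
    }
}
```

Note that raising the limit only masks the symptom here; the unsupported INTERVALYM column still has to be converted or excluded for its data to arrive intact.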
Reading the log carefully, the core error is:
[AnalysisStatistics.analysisStatisticsLog-53] com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-12], Description:[不支持的数据库类型. 请注意查看 DataX 已经支持的数据库类型以及数据库版本.]. - 您的配置文件中的列配置信息有误. 因为DataX 不支持数据库读取这种字段类型. 字段名:[WARRANTY_PERIOD], 字段名称:[-103], 字段Java类型:[oracle.sql.INTERVALYM]. 请尝试使用数据库函数将其转换datax支持的类型 或者不同步该字段 .
Query the column's data type on the Oracle side and the MySQL side:
PRODUCT_INFORMATION.WARRANTY_PERIOD
Oracle: INTERVAL YEAR(2) TO MONTH
MySQL : longtext
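The comparison above can be reproduced from the data dictionaries on both sides (schema and table names here match this environment; adjust as needed):

```sql
-- Oracle side: column type as defined in the source schema
SELECT column_name, data_type
FROM   dba_tab_columns
WHERE  owner = 'LHR'
AND    table_name = 'PRODUCT_INFORMATION'
AND    column_name = 'WARRANTY_PERIOD';

-- MySQL side: column type created in the target database
SELECT column_name, data_type
FROM   information_schema.columns
WHERE  table_schema = 'lhrdb'
AND    table_name = 'PRODUCT_INFORMATION'
AND    column_name = 'WARRANTY_PERIOD';
```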
Solution: in the reader's column list, replace the bare column name with "to_char(WARRANTY_PERIOD)" so that Oracle returns the interval as a string — exactly what the error message suggests (convert the type with a database function, or skip the column).
Then re-run the synchronization for this table.
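Concretely, in the DataX job JSON the oraclereader column entry for this field becomes a function expression instead of a quoted column name — a sketch of the relevant fragment (the other columns of PRODUCT_INFORMATION are abbreviated):

```json
"reader": {
    "name": "oraclereader",
    "parameter": {
        "column": [
            "\"PRODUCT_ID\"",
            "\"PRODUCT_NAME\"",
            "to_char(WARRANTY_PERIOD)",
            "\"CATALOG_URL\""
        ]
    }
}
```

The value then lands in the MySQL longtext column as a string, typically in Oracle's canonical '+YY-MM' interval form; convert it further in the query if another representation is needed.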
Fixing the ORDERS table error
The error reported:
2022-07-04 15:18:00 [JobThread.run-130] <br>----------- datax-web job execute start -----------<br>----------- Param: 2022-07-04 15:18:00 [BuildCommand.buildDataXParam-100] ------------------Command parameters: 2022-07-04 15:18:00 [ExecutorJobHandler.execute-57] ------------------DataX process id: 4546 2022-07-04 15:18:00 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:00 [AnalysisStatistics.analysisStatisticsLog-53] DataX (DATAX-OPENSOURCE-3.0), From Alibaba ! 2022-07-04 15:18:00 [AnalysisStatistics.analysisStatisticsLog-53] Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved. 2022-07-04 15:18:00 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:00 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:01 [ProcessCallbackThread.callbackLog-186] <br>----------- datax-web job callback finish.
2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02.008 [main] INFO MessageSource - JVM TimeZone: GMT+08:00, Locale: zh_CN 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02.059 [main] INFO MessageSource - use Locale: zh_CN timeZone: sun.util.calendar.ZoneInfo[id="GMT+08:00",offset=28800000,dstSavings=0,useDaylight=false,transitions=0,lastRule=null] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02.284 [main] INFO VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02.294 [main] INFO Engine - the machine info => 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] osInfo: Red Hat, Inc. 1.8 25.332-b09 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] jvmInfo: Linux amd64 3.10.0-1127.10.1.el7.x86_64 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] cpu num: 16 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] totalPhysicalMemory: -0.00G 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] freePhysicalMemory: -0.00G 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] maxFileDescriptorCount: -1 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] currentOpenFileDescriptorCount: -1 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] GC Names [PS MarkSweep, PS Scavenge] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] MEMORY_NAME | allocation_size | init_size 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] PS Eden Space | 256.00MB | 256.00MB 2022-07-04 
15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] Code Cache | 240.00MB | 2.44MB 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] Compressed Class Space | 1,024.00MB | 0.00MB 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] PS Survivor Space | 42.50MB | 42.50MB 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] PS Old Gen | 683.00MB | 683.00MB 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] Metaspace | -0.00MB | 0.00MB 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02.332 [main] INFO Engine - 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] { 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "content":[ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] { 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "reader":{ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "parameter":{ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "password":"***", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "column":[ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"ORDER_ID\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"ORDER_DATE\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"ORDER_MODE\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"CUSTOMER_ID\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"ORDER_STATUS\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"ORDER_TOTAL\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"SALES_REP_ID\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"PROMOTION_ID\"", 2022-07-04 15:18:02 
[AnalysisStatistics.analysisStatisticsLog-53] "\"WAREHOUSE_ID\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"DELIVERY_TYPE\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"COST_OF_DELIVERY\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"WAIT_TILL_ALL_AVAILABLE\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"DELIVERY_ADDRESS_ID\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"CUSTOMER_CLASS\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"CARD_ID\"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "\"INVOICE_ADDRESS_ID\"" 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] ], 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "connection":[ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] { 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "jdbcUrl":[ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "jdbc:oracle:thin:@//172.72.7.34:1521/lhrsdb" 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] ], 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "table":[ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "ORDERS" 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] ] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] } 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] ], 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "splitPk":"", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "username":"lhr" 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] }, 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "name":"oraclereader" 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] }, 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "writer":{ 
2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "parameter":{ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "password":"***", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "column":[ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`ORDER_ID`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`ORDER_DATE`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`ORDER_MODE`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`CUSTOMER_ID`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`ORDER_STATUS`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`ORDER_TOTAL`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`SALES_REP_ID`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`PROMOTION_ID`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`WAREHOUSE_ID`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`DELIVERY_TYPE`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`COST_OF_DELIVERY`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`WAIT_TILL_ALL_AVAILABLE`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`DELIVERY_ADDRESS_ID`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`CUSTOMER_CLASS`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`CARD_ID`", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "`INVOICE_ADDRESS_ID`" 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] ], 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "connection":[ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] { 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "jdbcUrl":"jdbc:mysql://172.72.7.35:3306/lhrdb", 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 
"table":[ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "ORDERS" 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] ] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] } 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] ], 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "username":"root" 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] }, 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "name":"mysqlwriter" 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] } 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] } 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] ], 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "setting":{ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "errorLimit":{ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "record":0, 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "percentage":0.02 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] }, 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "speed":{ 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "byte":1048576, 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] "channel":3 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] } 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] } 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] } 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02.402 [main] WARN Engine - prioriy set to 0, because NumberFormatException, the value is: null 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02.405 [main] INFO PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0 2022-07-04 15:18:02 
[AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02.406 [main] INFO JobContainer - DataX jobContainer starts job. 2022-07-04 15:18:02 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:02.408 [main] INFO JobContainer - Set jobId = 0 2022-07-04 15:18:03 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:03.316 [job-0] INFO OriginalConfPretreatmentUtil - Available jdbcUrl:jdbc:oracle:thin:@//172.72.7.34:1521/lhrsdb. 2022-07-04 15:18:03 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:03.568 [job-0] INFO OriginalConfPretreatmentUtil - table:[ORDERS] has columns:[ORDER_ID,ORDER_DATE,ORDER_MODE,CUSTOMER_ID,ORDER_STATUS,ORDER_TOTAL,SALES_REP_ID,PROMOTION_ID,WAREHOUSE_ID,DELIVERY_TYPE,COST_OF_DELIVERY,WAIT_TILL_ALL_AVAILABLE,DELIVERY_ADDRESS_ID,CUSTOMER_CLASS,CARD_ID,INVOICE_ADDRESS_ID]. 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.558 [job-0] INFO OriginalConfPretreatmentUtil - table:[ORDERS] all columns:[ 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] ORDER_ID,ORDER_DATE,ORDER_MODE,CUSTOMER_ID,ORDER_STATUS,ORDER_TOTAL,SALES_REP_ID,PROMOTION_ID,WAREHOUSE_ID,DELIVERY_TYPE,COST_OF_DELIVERY,WAIT_TILL_ALL_AVAILABLE,DELIVERY_ADDRESS_ID,CUSTOMER_CLASS,CARD_ID,INVOICE_ADDRESS_ID 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] ]. 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.651 [job-0] INFO OriginalConfPretreatmentUtil - Write data [ 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] INSERT INTO %s (`ORDER_ID`,`ORDER_DATE`,`ORDER_MODE`,`CUSTOMER_ID`,`ORDER_STATUS`,`ORDER_TOTAL`,`SALES_REP_ID`,`PROMOTION_ID`,`WAREHOUSE_ID`,`DELIVERY_TYPE`,`COST_OF_DELIVERY`,`WAIT_TILL_ALL_AVAILABLE`,`DELIVERY_ADDRESS_ID`,`CUSTOMER_CLASS`,`CARD_ID`,`INVOICE_ADDRESS_ID`) VALUES(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?) 
2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] ], which jdbcUrl like:[jdbc:mysql://172.72.7.35:3306/lhrdb?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&rewriteBatchedStatements=true&tinyInt1isBit=false] 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.651 [job-0] INFO JobContainer - jobContainer starts to do prepare ... 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.652 [job-0] INFO JobContainer - DataX Reader.Job [oraclereader] do prepare work . 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.652 [job-0] INFO JobContainer - DataX Writer.Job [mysqlwriter] do prepare work . 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.653 [job-0] INFO JobContainer - jobContainer starts to do split ... 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.653 [job-0] INFO JobContainer - Job set Max-Byte-Speed to 1048576 bytes. 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.676 [job-0] INFO JobContainer - DataX Reader.Job [oraclereader] splits to [1] tasks. 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.677 [job-0] INFO JobContainer - DataX Writer.Job [mysqlwriter] splits to [1] tasks. 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.718 [job-0] INFO JobContainer - jobContainer starts to do schedule ... 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.746 [job-0] INFO JobContainer - Scheduler starts [1] taskGroups. 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.755 [job-0] INFO JobContainer - Running by standalone Mode. 
2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.778 [taskGroup-0] INFO TaskGroupContainer - taskGroupId=[0] start [1] channels for [1] tasks. 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.866 [taskGroup-0] INFO Channel - Channel set byte_speed_limit to 1048576. 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.866 [taskGroup-0] INFO Channel - Channel set record_speed_limit to -1, No tps activated. 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.906 [taskGroup-0] INFO TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:05.912 [0-0-0-reader] INFO CommonRdbmsReader$Task - Begin to read record by Sql: [select "ORDER_ID","ORDER_DATE","ORDER_MODE","CUSTOMER_ID","ORDER_STATUS","ORDER_TOTAL","SALES_REP_ID","PROMOTION_ID","WAREHOUSE_ID","DELIVERY_TYPE","COST_OF_DELIVERY","WAIT_TILL_ALL_AVAILABLE","DELIVERY_ADDRESS_ID","CUSTOMER_CLASS","CARD_ID","INVOICE_ADDRESS_ID" from ORDERS 2022-07-04 15:18:05 [AnalysisStatistics.analysisStatisticsLog-53] ] jdbcUrl:[jdbc:oracle:thin:@//172.72.7.34:1521/lhrsdb]. 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:06.166 [0-0-0-reader] ERROR StdoutPluginCollector - 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-12], Description:[不支持的数据库类型. 请注意查看 DataX 已经支持的数据库类型以及数据库版本.]. - 您的配置文件中的列配置信息有误. 因为DataX 不支持数据库读取这种字段类型. 字段名:[ORDER_DATE], 字段名称:[-102], 字段Java类型:[oracle.sql.TIMESTAMPLTZ]. 请尝试使用数据库函数将其转换datax支持的类型 或者不同步该字段 . 
2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:30) ~[datax-common-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.buildRecord(CommonRdbmsReader.java:329) [plugin-rdbms-util-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.transportOneRecord(CommonRdbmsReader.java:237) [plugin-rdbms-util-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.startRead(CommonRdbmsReader.java:209) [plugin-rdbms-util-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.reader.oraclereader.OracleReader$Task.startRead(OracleReader.java:110) [oraclereader-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:57) [datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at java.lang.Thread.run(Thread.java:750) [na:1.8.0_332] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:06.201 [0-0-0-reader] ERROR StdoutPluginCollector - 脏数据: 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] {"exception":"Code:[DBUtilErrorCode-12], Description:[不支持的数据库类型. 请注意查看 DataX 已经支持的数据库类型以及数据库版本.]. - 您的配置文件中的列配置信息有误. 因为DataX 不支持数据库读取这种字段类型. 字段名:[ORDER_DATE], 字段名称:[-102], 字段Java类型:[oracle.sql.TIMESTAMPLTZ]. 
请尝试使用数据库函数将其转换datax支持的类型 或者不同步该字段 .","record":[{"byteSize":3,"index":0,"rawData":"152","type":"DOUBLE"}],"type":"reader"} 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:06.228 [0-0-0-reader] ERROR ReaderRunner - Reader runner Received Exceptions: 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-07], Description:[读取数据库数据失败. 请检查您的配置的 column/table/where/querySql或者向 DBA 寻求帮助.]. - 执行的SQL为: select "ORDER_ID","ORDER_DATE","ORDER_MODE","CUSTOMER_ID","ORDER_STATUS","ORDER_TOTAL","SALES_REP_ID","PROMOTION_ID","WAREHOUSE_ID","DELIVERY_TYPE","COST_OF_DELIVERY","WAIT_TILL_ALL_AVAILABLE","DELIVERY_ADDRESS_ID","CUSTOMER_CLASS","CARD_ID","INVOICE_ADDRESS_ID" from ORDERS 具体错误信息为:com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-12], Description:[不支持的数据库类型. 请注意查看 DataX 已经支持的数据库类型以及数据库版本.]. - 您的配置文件中的列配置信息有误. 因为DataX 不支持数据库读取这种字段类型. 字段名:[ORDER_DATE], 字段名称:[-102], 字段Java类型:[oracle.sql.TIMESTAMPLTZ]. 请尝试使用数据库函数将其转换datax支持的类型 或者不同步该字段 . 
2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:30) ~[datax-common-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.RdbmsException.asQueryException(RdbmsException.java:93) ~[plugin-rdbms-util-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.startRead(CommonRdbmsReader.java:220) ~[plugin-rdbms-util-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.reader.oraclereader.OracleReader$Task.startRead(OracleReader.java:110) ~[oraclereader-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:57) ~[datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:06 [AnalysisStatistics.analysisStatisticsLog-53] at java.lang.Thread.run(Thread.java:750) [na:1.8.0_332] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15.850 [job-0] INFO StandAloneJobContainerCommunicator - Total 1 records, 3 bytes | Speed 0B/s, 0 records/s | Error 1 records, 3 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00% 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15.850 [job-0] ERROR JobContainer - 运行scheduler 模式[standalone]出错. 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15.851 [job-0] ERROR JobContainer - Exception when job run 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] com.alibaba.datax.common.exception.DataXException: Code:[Framework-14], Description:[DataX传输脏数据超过用户预期,该错误通常是由于源端数据存在较多业务脏数据导致,请仔细检查DataX汇报的脏数据日志信息, 或者您可以适当调大脏数据阈值 .]. - 脏数据条数检查不通过,限制是[0]条,但实际上捕获了[1]条. 
2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:30) ~[datax-common-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.util.ErrorRecordChecker.checkRecordLimit(ErrorRecordChecker.java:58) ~[datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.scheduler.AbstractScheduler.schedule(AbstractScheduler.java:89) ~[datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.schedule(JobContainer.java:535) ~[datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:119) ~[datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.start(Engine.java:93) [datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.entry(Engine.java:175) [datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.main(Engine.java:208) [datax-core-0.0.1-SNAPSHOT.jar:na] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15.852 [job-0] INFO StandAloneJobContainerCommunicator - Total 1 records, 3 bytes | Speed 3B/s, 1 records/s | Error 1 records, 3 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00% 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15.853 [job-0] ERROR Engine - 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 经DataX智能分析,该任务最可能的错误原因是: 2022-07-04 
15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] com.alibaba.datax.common.exception.DataXException: Code:[Framework-14], Description:[DataX传输脏数据超过用户预期,该错误通常是由于源端数据存在较多业务脏数据导致,请仔细检查DataX汇报的脏数据日志信息, 或者您可以适当调大脏数据阈值 .]. - 脏数据条数检查不通过,限制是[0]条,但实际上捕获了[1]条. 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:30) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.util.ErrorRecordChecker.checkRecordLimit(ErrorRecordChecker.java:58) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.scheduler.AbstractScheduler.schedule(AbstractScheduler.java:89) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.schedule(JobContainer.java:535) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:119) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.start(Engine.java:93) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.entry(Engine.java:175) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.main(Engine.java:208) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:03 GMT+08:00 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. 
You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:05 GMT+08:00 2022 WARN: Caught while disconnecting... 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] EXCEPTION STACK TRACE: 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] ** BEGIN NESTED EXCEPTION ** 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] javax.net.ssl.SSLException 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] MESSAGE: closing inbound before receiving peer's close_notify 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] STACKTRACE: 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] javax.net.ssl.SSLException: closing inbound before receiving peer's close_notify 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:740) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:719) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.mysql.jdbc.MysqlIO.quit(MysqlIO.java:2249) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.mysql.jdbc.ConnectionImpl.realClose(ConnectionImpl.java:4232) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at 
com.mysql.jdbc.ConnectionImpl.close(ConnectionImpl.java:1472) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.DBUtil.closeDBResources(DBUtil.java:492) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.DBUtil.getTableColumnsByConn(DBUtil.java:526) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.dealColumnConf(OriginalConfPretreatmentUtil.java:105) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.dealColumnConf(OriginalConfPretreatmentUtil.java:140) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.writer.util.OriginalConfPretreatmentUtil.doPretreatment(OriginalConfPretreatmentUtil.java:35) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.writer.CommonRdbmsWriter$Job.init(CommonRdbmsWriter.java:41) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.writer.mysqlwriter.MysqlWriter$Job.init(MysqlWriter.java:31) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.initJobWriter(JobContainer.java:704) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.init(JobContainer.java:304) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.job.JobContainer.start(JobContainer.java:113) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.start(Engine.java:93) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.Engine.entry(Engine.java:175) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at 
com.alibaba.datax.core.Engine.main(Engine.java:208) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] ** END NESTED EXCEPTION ** 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:05 GMT+08:00 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:05 GMT+08:00 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:06 GMT+08:00 2022 WARN: Caught while disconnecting... 
2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] EXCEPTION STACK TRACE: 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] ** BEGIN NESTED EXCEPTION ** 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] javax.net.ssl.SSLException 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] MESSAGE: closing inbound before receiving peer's close_notify 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] STACKTRACE: 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] javax.net.ssl.SSLException: closing inbound before receiving peer's close_notify 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:740) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at sun.security.ssl.SSLSocketImpl.shutdownInput(SSLSocketImpl.java:719) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.mysql.jdbc.MysqlIO.quit(MysqlIO.java:2249) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.mysql.jdbc.ConnectionImpl.realClose(ConnectionImpl.java:4232) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.mysql.jdbc.ConnectionImpl.close(ConnectionImpl.java:1472) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.DBUtil.closeDBResources(DBUtil.java:492) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at 
com.alibaba.datax.plugin.rdbms.writer.CommonRdbmsWriter$Task.prepare(CommonRdbmsWriter.java:259) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.writer.mysqlwriter.MysqlWriter$Task.prepare(MysqlWriter.java:73) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.taskgroup.runner.WriterRunner.run(WriterRunner.java:50) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at java.lang.Thread.run(Thread.java:750) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] ** END NESTED EXCEPTION ** 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] Mon Jul 04 15:18:06 GMT+08:00 2022 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] Exception in thread "taskGroup-0" com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-07], Description:[读取数据库数据失败. 请检查您的配置的 column/table/where/querySql或者向 DBA 寻求帮助.]. 
- 执行的SQL为: select "ORDER_ID","ORDER_DATE","ORDER_MODE","CUSTOMER_ID","ORDER_STATUS","ORDER_TOTAL","SALES_REP_ID","PROMOTION_ID","WAREHOUSE_ID","DELIVERY_TYPE","COST_OF_DELIVERY","WAIT_TILL_ALL_AVAILABLE","DELIVERY_ADDRESS_ID","CUSTOMER_CLASS","CARD_ID","INVOICE_ADDRESS_ID" from ORDERS 具体错误信息为:com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-12], Description:[不支持的数据库类型. 请注意查看 DataX 已经支持的数据库类型以及数据库版本.]. - 您的配置文件中的列配置信息有误. 因为DataX 不支持数据库读取这种字段类型. 字段名:[ORDER_DATE], 字段名称:[-102], 字段Java类型:[oracle.sql.TIMESTAMPLTZ]. 请尝试使用数据库函数将其转换datax支持的类型 或者不同步该字段 . 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.common.exception.DataXException.asDataXException(DataXException.java:30) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.util.RdbmsException.asQueryException(RdbmsException.java:93) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.rdbms.reader.CommonRdbmsReader$Task.startRead(CommonRdbmsReader.java:220) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.plugin.reader.oraclereader.OracleReader$Task.startRead(OracleReader.java:110) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at com.alibaba.datax.core.taskgroup.runner.ReaderRunner.run(ReaderRunner.java:57) 2022-07-04 15:18:15 [AnalysisStatistics.analysisStatisticsLog-53] at java.lang.Thread.run(Thread.java:750) 2022-07-04 15:18:15 [JobThread.run-165] <br>----------- datax-web job execute end(finish) -----------<br>----------- ReturnT:ReturnT [code=500, msg=command exit value(1) is failed, content=null] 2022-07-04 15:18:15 [TriggerCallbackThread.callbackLog-186] <br>----------- datax-web job callback finish. |
Core error:
Exception in thread "taskGroup-0" com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-07], Description:[读取数据库数据失败. 请检查您的配置的 column/table/where/querySql或者向 DBA 寻求帮助.]. - 执行的SQL为: select "ORDER_ID","ORDER_DATE","ORDER_MODE","CUSTOMER_ID","ORDER_STATUS","ORDER_TOTAL","SALES_REP_ID","PROMOTION_ID","WAREHOUSE_ID","DELIVERY_TYPE","COST_OF_DELIVERY","WAIT_TILL_ALL_AVAILABLE","DELIVERY_ADDRESS_ID","CUSTOMER_CLASS","CARD_ID","INVOICE_ADDRESS_ID" from ORDERS 具体错误信息为:com.alibaba.datax.common.exception.DataXException: Code:[DBUtilErrorCode-12], Description:[不支持的数据库类型. 请注意查看 DataX 已经支持的数据库类型以及数据库版本.]. - 您的配置文件中的列配置信息有误. 因为DataX 不支持数据库读取这种字段类型. 字段名:[ORDER_DATE], 字段名称:[-102], 字段Java类型:[oracle.sql.TIMESTAMPLTZ]. 请尝试使用数据库函数将其转换datax支持的类型 或者不同步该字段 .
Analysis: the failing column is ORDERS.ORDER_DATE.
Oracle-side type: TIMESTAMP(6) WITH LOCAL TIME ZONE
MySQL-side type: datetime
Fix: DataX's oraclereader cannot read oracle.sql.TIMESTAMPLTZ directly, so in the reader's column list replace the bare column name with the expression "cast(ORDER_DATE as date)" (note that the cast drops fractional seconds and the time-zone information).
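As a minimal sketch, the cast goes straight into the oraclereader's column list of the job JSON (only the reader side is shown here, and the remaining ORDERS columns are listed exactly as in the log above; credentials and addresses are the lab values from this article):

```json
{
  "reader": {
    "name": "oraclereader",
    "parameter": {
      "username": "lhr",
      "password": "lhr",
      "column": [
        "ORDER_ID",
        "cast(ORDER_DATE as date)",
        "ORDER_MODE",
        "CUSTOMER_ID",
        "ORDER_STATUS",
        "ORDER_TOTAL",
        "SALES_REP_ID",
        "PROMOTION_ID",
        "WAREHOUSE_ID",
        "DELIVERY_TYPE",
        "COST_OF_DELIVERY",
        "WAIT_TILL_ALL_AVAILABLE",
        "DELIVERY_ADDRESS_ID",
        "CUSTOMER_CLASS",
        "CARD_ID",
        "INVOICE_ADDRESS_ID"
      ],
      "connection": [
        {
          "table": ["ORDERS"],
          "jdbcUrl": ["jdbc:oracle:thin:@//172.72.7.34:1521/lhrsdb"]
        }
      ]
    }
  }
}
```

In datax-web the same change can be made by editing the job's reader column list in the task configuration; the writer side stays unchanged.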
Then rerun the job:
Data validation
All tables have been fully migrated; the row counts can be checked on the MySQL side as follows:
SELECT COUNT(*) FROM lhrdb.ADDRESSES UNION ALL
SELECT COUNT(*) FROM lhrdb.CARD_DETAILS UNION ALL
SELECT COUNT(*) FROM lhrdb.CUSTOMERS UNION ALL
SELECT COUNT(*) FROM lhrdb.IMAGE_LOB UNION ALL
SELECT COUNT(*) FROM lhrdb.INVENTORIES UNION ALL
SELECT COUNT(*) FROM lhrdb.LOGON UNION ALL
SELECT COUNT(*) FROM lhrdb.ORDERENTRY_METADATA UNION ALL
SELECT COUNT(*) FROM lhrdb.ORDERS UNION ALL
SELECT COUNT(*) FROM lhrdb.ORDER_ITEMS UNION ALL
SELECT COUNT(*) FROM lhrdb.PRODUCT_DESCRIPTIONS UNION ALL
SELECT COUNT(*) FROM lhrdb.PRODUCT_INFORMATION UNION ALL
SELECT COUNT(*) FROM lhrdb.TCUSTMER UNION ALL
SELECT COUNT(*) FROM lhrdb.TCUSTORD UNION ALL
SELECT COUNT(*) FROM lhrdb.TSRSLOB UNION ALL
SELECT COUNT(*) FROM lhrdb.TTRGVAR UNION ALL
SELECT COUNT(*) FROM lhrdb.WAREHOUSES;
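Because the query above returns sixteen anonymous counts, comparing source and target by eye is error-prone. A labelled variant (a sketch; run it on both sides, swapping the schema name `lhrdb` for `LHR` on Oracle, and diff the two result sets) makes mismatches obvious — only three tables shown, extend the pattern to the rest:

```sql
-- Label each count with its table name so source/target output can be diffed.
SELECT 'ORDERS'      AS table_name, COUNT(*) AS cnt FROM lhrdb.ORDERS
UNION ALL
SELECT 'ORDER_ITEMS',               COUNT(*)        FROM lhrdb.ORDER_ITEMS
UNION ALL
SELECT 'CUSTOMERS',                 COUNT(*)        FROM lhrdb.CUSTOMERS;
```

Row counts only prove completeness, not content; for stricter checks, compare per-column checksums or use a dedicated comparison tool.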
Full data migration is complete.
For BLOB and CLOB columns, inspect the data directly:
As shown, LOB data is synchronized correctly as well.
Summary
1. After a job is built successfully, its execution schedule is disabled by default and must be enabled manually.
2. Unsupported column types: INTERVAL YEAR(2) TO MONTH and TIMESTAMP(6) WITH LOCAL TIME ZONE. Each can be converted with a database function in the reader's column list, e.g. "to_char(WARRANTY_PERIOD)" for INTERVAL YEAR(2) TO MONTH and "cast(ORDER_DATE as date)" for TIMESTAMP(6) WITH LOCAL TIME ZONE.
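Before editing the job JSON, the converted expressions can be previewed on the Oracle side (a sketch; it assumes WARRANTY_PERIOD lives in the swingbench order-entry table PRODUCT_INFORMATION under the LHR schema used in this article):

```sql
-- Preview what DataX will read once the conversion functions are in place.
SELECT to_char(WARRANTY_PERIOD) AS warranty_period_text  -- INTERVAL YEAR(2) TO MONTH -> string
FROM   LHR.PRODUCT_INFORMATION
WHERE  ROWNUM <= 5;

SELECT cast(ORDER_DATE AS date) AS order_date_dt         -- TIMESTAMPLTZ -> DATE (loses TZ/fractional seconds)
FROM   LHR.ORDERS
WHERE  ROWNUM <= 5;
```

If the previewed values look right, paste the same expressions into the reader's column list in place of the bare column names.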
3. BLOB and CLOB columns can also be synchronized.
4. Advantage: in this test, DataX's full migration was many times faster than OGG's initial load.