CREATE TABLE vmtest.test (a integer, b timestamp(6) FORMAT 'yyyy-mm-ddbhh:mi:ss.s(6)') PRIMARY INDEX (a);
INSERT INTO vmtest.test VALUES (1, '2017-03-14 15:20:20.711001');

2. And the Sqoop import command:
sqoop import --connect jdbc:teradata://<teradata-host>/database=vmtest \
    --username dbc --password dbc --target-dir /tmp/test --delete-target-dir \
    --as-textfile --fields-terminated-by "," --table test

3. Data stored in HDFS as below:
$ hadoop fs -cat /tmp/test/part*
1,2017-03-14 15:20:20.711

Notice that the microseconds part is truncated from 711001 to 711. This is caused by a bug in TDCH (Teradata Connector for Hadoop) from Teradata, which is used by the Cloudera Connector Powered by Teradata.
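The truncation from 711001 to 711 is exactly what you get when a microsecond-precision value is pushed through Java's millisecond-precision date formatting. The connector's internal code path is not shown here, so the following is only an illustrative sketch of that effect, not TDCH's actual implementation:

import java.sql.Timestamp;
import java.text.SimpleDateFormat;

public class MicrosecondTruncation {
    public static void main(String[] args) {
        // java.sql.Timestamp keeps fractional seconds down to nanoseconds.
        Timestamp ts = Timestamp.valueOf("2017-03-14 15:20:20.711001");
        System.out.println(ts);                    // 2017-03-14 15:20:20.711001

        // SimpleDateFormat only understands milliseconds ("SSS"), so rendering
        // the same value through it silently drops the last three digits.
        SimpleDateFormat millisOnly = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
        System.out.println(millisOnly.format(ts)); // 2017-03-14 15:20:20.711
    }
}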
The workaround is to make sure that the timestamp value is already in String format before it is passed to Sqoop, so that no conversion happens on the connector side. The Sqoop command below is an example:

sqoop import --connect jdbc:teradata://<teradata-host>/database=vmtest \
    --username dbc --password dbc --target-dir /tmp/test \
    --delete-target-dir --as-textfile --fields-terminated-by "," \
    --query "SELECT a, cast(cast(b as format 'YYYY-MM-DD HH:MI:SS.s(6)') as char(40)) from test WHERE \$CONDITIONS" \
    --split-by a
After importing, the data is stored in HDFS correctly:

$ hadoop fs -cat /tmp/test/part*
1,2017-03-14 15:20:20.711001

As mentioned above, this is a bug in the Teradata connector, and we have to wait for it to be fixed in TDCH. At the time of writing, the issue still exists in CDH5.8.x.
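Since the workaround lands the column in HDFS as a plain string (cast to CHAR(40)), any downstream code that needs a real timestamp has to parse it back itself. Below is a minimal Java sketch of that step; the class name and the assumption that the CHAR(40) value arrives space-padded are illustrative, not part of the original setup:

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class ParseImportedTimestamp {
    public static void main(String[] args) {
        // Pattern matching the 'YYYY-MM-DD HH:MI:SS.s(6)' layout used in the Sqoop --query above.
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSSSSS");

        // CHAR(40) may arrive space-padded, so trim before parsing.
        String raw = "2017-03-14 15:20:20.711001              ".trim();

        LocalDateTime ts = LocalDateTime.parse(raw, fmt);
        System.out.println(ts);            // 2017-03-14T15:20:20.711001
        System.out.println(ts.getNano());  // 711001000 -> microseconds preserved
    }
}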