EXPORT TABLE <table_name> TO 'path/to/hdfs';

Please note that the path should not start with a "/"; otherwise Hive will complain about an invalid path or that the target is not an empty directory. The exported data will be written to the /user/<user_name>/path/to/hdfs directory in HDFS (which, of course, must be writable by the current user). The next step is to copy the data across to the other Hive instance, using Hadoop's "distcp" command:
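As a concrete sketch, assuming a hypothetical partitioned table called `sales` with a `dt` partition column (both are placeholder names, not from the original post), the export might look like this:

```sql
-- Export the whole table; the target directory must not already exist
EXPORT TABLE sales TO 'backup/sales';

-- Or export just one partition to keep the copy small
EXPORT TABLE sales PARTITION (dt='2015-01-01') TO 'backup/sales_20150101';
```

Both commands write a `_metadata` file (the table definition) plus the data files under `/user/<user_name>/backup/...`, which is exactly what IMPORT reads later.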
hadoop distcp hdfs://<source_namenode>:8020/user/<user_name>/path/to/hdfs hdfs://<target_namenode>:8020/user/<user_name>/path/to/hdfs

Once the copy has finished, you can use the IMPORT command on the new cluster to load the data into a new table:
IMPORT TABLE <table_name> FROM 'path/to/hdfs';

You will now have exactly the same data in the new cluster as in the old one. The transfer should be smooth; the only caveat is that if the data set is large, "distcp" might take a while to copy it across. For more information, have a look at Apache Hive's Confluence page.
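IMPORT also has a couple of useful variations worth knowing: you can import under a different table name, or import as an external table whose data stays at a location you choose. In this sketch, `sales_copy` and the LOCATION path are placeholder names:

```sql
-- Import under a different table name
IMPORT TABLE sales_copy FROM 'path/to/hdfs';

-- Or import as an external table, keeping the data at an explicit location
IMPORT EXTERNAL TABLE sales_copy FROM 'path/to/hdfs'
LOCATION '/warehouse/external/sales_copy';
```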
Can you help me with how to export a view in Hive?
Sorry about the late response; I missed your update in my email.
A view is just a Hive query stored under a name; it holds no data of its own, so you simply need to re-run the CREATE VIEW statement in the other cluster. Hope that makes sense.
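If you no longer have the original DDL handy, one way (a sketch; `my_view` is a placeholder name) is to ask Hive for the definition on the old cluster and replay it on the new one:

```sql
-- On the old cluster: print the view's full definition
SHOW CREATE TABLE my_view;

-- On the new cluster: re-run the statement that was printed, e.g.
CREATE VIEW my_view AS
SELECT ...;  -- the SELECT body returned by SHOW CREATE TABLE
```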
Thanks again for visiting my site.
For the IMPORT command, can the path point to any HDFS files, such as an Avro file, or does it have to be files exported from Hive?