I am running Sqoop on AWS EMR and trying to copy a ~10 GB table from MySQL into HDFS.
I get the following exception:
15/07/06 12:19:07 INFO mapreduce.Job: Task Id : attempt_1435664372091_0048_m_000000_2, Status : FAILED
Error: java.io.IOException: mysqldump terminated with status 3
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:485)
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:152)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:773)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
15/07/06 12:19:07 INFO mapreduce.Job: Task Id : attempt_1435664372091_0048_m_000005_2, Status : FAILED
Error: java.io.IOException: mysqldump terminated with status 2
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:485)
at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:152)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:773)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:170)
15/07/06 12:19:08 INFO mapreduce.Job: map 0% reduce 0%
15/07/06 12:19:20 INFO mapreduce.Job: map 25% reduce 0%
15/07/06 12:19:22 INFO mapreduce.Job: map 38% reduce 0%
15/07/06 12:19:23 INFO mapreduce.Job: map 50% reduce 0%
15/07/06 12:19:24 INFO mapreduce.Job: map 75% reduce 0%
15/07/06 12:19:25 INFO mapreduce.Job: map 100% reduce 0%
15/07/06 12:23:11 INFO mapreduce.Job: Job job_1435664372091_0048 failed with state FAILED due to: Task failed task_1435664372091_0048_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
15/07/06 12:23:11 INFO mapreduce.Job: Counters: 8
Job Counters
Failed map tasks=28
Launched map tasks=28
Other local map tasks=28
Total time spent by all maps in occupied slots (ms)=34760760
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=5793460
Total vcore-seconds taken by all map tasks=5793460
Total megabyte-seconds taken by all map tasks=8342582400
15/07/06 12:23:11 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
15/07/06 12:23:11 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 829.8697 seconds (0 bytes/sec)
15/07/06 12:23:11 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
15/07/06 12:23:11 INFO mapreduce.ImportJobBase: Retrieved 0 records.
15/07/06 12:23:11 ERROR tool.ImportTool: Error during import: Import job failed!
If I run without the '--direct' option, I get the communications exception described in https://issues.cloudera.org/browse/SQOOP-186
I have set the 'net_write_timeout' and 'net_read_timeout' values in MySQL to 6000.
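For reference, something like this (a rough sketch, run against the same host and credentials as the Sqoop command below; setting GLOBAL variables needs the SUPER privilege) is how I set and verify those values:

mysql -h <remote ip> -u tuser -p -e "SET GLOBAL net_write_timeout = 6000; SET GLOBAL net_read_timeout = 6000;"
mysql -h <remote ip> -u tuser -p -e "SHOW VARIABLES LIKE 'net_%_timeout';"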
My Sqoop command looks like this:
sqoop import -D mapred.task.timeout=0 --fields-terminated-by '\t' --escaped-by '\\' --optionally-enclosed-by '\"' --bindir ./ --connect jdbc:mysql://<remote ip>/<mysql db> --username tuser --password tuser --table table1 --target-dir=/base/table1 --split-by id -m 8 --direct
How do I fix this? Am I missing something?
I have also created a Sqoop JIRA: https://issues.apache.org/jira/browse/SQOOP-2411
I have seen this error occur when Sqoop cannot divide the key space evenly and one of the map tasks ends up processing zero rows of data. Possible workarounds are changing the number of mappers (-m / --num-mappers) or specifying a different key column (--split-by) that has evenly distributed values.
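As a rough sketch (reusing the table name, split column, and connection placeholders from your command), you could first check how evenly the id column is distributed and then retry with fewer mappers:

mysql -h <remote ip> -u tuser -p -e "SELECT MIN(id), MAX(id), COUNT(*) FROM table1;" <mysql db>

sqoop import --connect jdbc:mysql://<remote ip>/<mysql db> \
  --username tuser --password tuser \
  --table table1 --target-dir /base/table1 \
  --split-by id -m 4 --direct

If the MIN/MAX range is much larger than the row count (i.e. the ids are sparse or skewed), pick a more evenly distributed column for --split-by.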
Can you try running the command below and see whether it works? I'm not sure, but I suspect there is some problem with your Sqoop import command.
sqoop import --connect "jdbc:mysql://<remote ip>/<mysql db>" --password "core" --username "core" --table "TABLENAME" --target-dir "/sqoopfile2" -m 8 --direct
Related
I'm trying to export a table from HDFS to an Oracle database using the command:
sqoop export --connect jdbc:oracle:thin:@ip:port/db --username user -P --table OPORTUNIDADESHIVE --export-dir /user/hadoop/OPORTUNIDADES/000000_0 --input-fields-terminated-by "\t"
where OPORTUNIDADESHIVE is the Oracle table and the file "000000_0" is the table extracted from Hive to HDFS. Both tables have the same columns.
The Hive table is ROW FORMAT DELIMITED with FIELDS TERMINATED BY '\t'.
But in the end, the export gives me this error message:
2021-11-04 14:58:04,633 INFO mapreduce.Job: map 0% reduce 0%
2021-11-04 14:58:13,711 INFO mapreduce.Job: map 100% reduce 0%
2021-11-04 14:58:13,723 INFO mapreduce.Job: Job job_1635324128846_0049 failed with state FAILED due to: Task failed task_1635324128846_0049_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0 killedMaps:0 killedReduces:0
2021-11-04 14:58:13,792 INFO mapreduce.Job: Counters: 9
Job Counters
Failed map tasks=2
Killed map tasks=2
Launched map tasks=2
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=28818
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=14409
Total vcore-milliseconds taken by all map tasks=14409
Total megabyte-milliseconds taken by all map tasks=29509632
2021-11-04 14:58:13,799 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
2021-11-04 14:58:13,800 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 19.1507 seconds (0 bytes/sec)
2021-11-04 14:58:13,803 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
2021-11-04 14:58:13,804 INFO mapreduce.ExportJobBase: Exported 0 records.
2021-11-04 14:58:13,804 ERROR mapreduce.ExportJobBase: Export job failed!
2021-11-04 14:58:13,804 ERROR tool.ExportTool: Error during export: Export job failed!
at org.apache.sqoop.mapreduce.ExportJobBase.runExport(ExportJobBase.java:445)
at org.apache.sqoop.manager.OracleManager.exportTable(OracleManager.java:465)
at org.apache.sqoop.tool.ExportTool.exportTable(ExportTool.java:80)
at org.apache.sqoop.tool.ExportTool.run(ExportTool.java:99)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Sqoop export is very sensitive to data type and length mismatches, so please follow the steps below to fix this issue:
Put the Hive and Oracle table definitions side by side and check that the data types and lengths are compatible. Note that some Hive types, such as string, can hold more than 4000 characters, so you may need to handle such cases specially.
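A quick way to do that side-by-side check (just a sketch; I am assuming your Hive table is named OPORTUNIDADES, and you will need a SQL client such as sqlplus for the Oracle side):

hive -e "DESCRIBE FORMATTED OPORTUNIDADES;"

-- on the Oracle side:
SELECT column_name, data_type, data_length, data_precision, data_scale
FROM all_tab_columns
WHERE table_name = 'OPORTUNIDADESHIVE'
ORDER BY column_id;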
If that comparison looks fine, then adjust your export command: its --export-dir argument should be the path to the table directory, i.e. /user/hadoop/OPORTUNIDADES.
But I feel it is easier to use --hcatalog-table; here is the complete command:
sqoop export --connect jdbc:oracle:thin:@mydb:5217/user --username user --password pass --hcatalog-database hive_db --hcatalog-table hive_tab --table orac_tab --input-null-string '\\\\N' --input-null-non-string '\\\\N' -m 6
Note that we are specifying just the input and output tables plus optional null handling.
I have a table in Oracle and an identical table in Hive (with corresponding data types).
I'm trying to export the Hive table to Oracle using this script:
sqoop export -D oraoop.disabled=true -Dmapred.job.queue.name=disco --connect jdbc:oracle:thin:@oracle:1521/tns \
--username someuser \
--password somepasswd \
--hcatalog-database hive_database \
--hcatalog-table TABLE_ON_HIVE \
--table TABLE_ON_ORACLE \
--num-mappers 5
And I get an error:
18/08/29 08:23:10 INFO mapreduce.Job: Job job_1535519043541_1004 running in uber mode : false
18/08/29 08:23:10 INFO mapreduce.Job: map 0% reduce 0%
18/08/29 08:23:28 INFO mapreduce.Job: map 100% reduce 0%
18/08/29 08:28:40 INFO mapreduce.Job: Task Id : attempt_1535519043541_1004_m_000000_0, Status : FAILED
AttemptID:attempt_1535519043541_1004_m_000000_0 Timed out after 300 secs
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
18/08/29 08:28:41 INFO mapreduce.Job: map 0% reduce 0%
18/08/29 08:28:57 INFO mapreduce.Job: map 100% reduce 0%
18/08/29 08:34:09 INFO mapreduce.Job: Task Id : attempt_1535519043541_1004_m_000000_1, Status : FAILED
AttemptID:attempt_1535519043541_1004_m_000000_1 Timed out after 300 secs
18/08/29 08:34:10 INFO mapreduce.Job: map 0% reduce 0%
18/08/29 08:34:28 INFO mapreduce.Job: map 100% reduce 0%
18/08/29 08:39:39 INFO mapreduce.Job: Task Id : attempt_1535519043541_1004_m_000000_2, Status : FAILED
AttemptID:attempt_1535519043541_1004_m_000000_2 Timed out after 300 secs
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
18/08/29 08:39:40 INFO mapreduce.Job: map 0% reduce 0%
18/08/29 08:39:56 INFO mapreduce.Job: map 100% reduce 0%
18/08/29 08:45:11 INFO mapreduce.Job: Job job_1535519043541_1004 failed with state FAILED due to: Task failed task_1535519043541_1004_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
18/08/29 08:45:11 INFO mapreduce.Job: Counters: 9
Job Counters
Failed map tasks=4
Launched map tasks=4
Other local map tasks=3
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=1312647
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=1312647
Total vcore-seconds taken by all map tasks=1312647
Total megabyte-seconds taken by all map tasks=5376602112
18/08/29 08:45:11 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
18/08/29 08:45:11 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 1,359.6067 seconds (0 bytes/sec)
18/08/29 08:45:11 WARN mapreduce.Counters: Group org.apache.hadoop.mapred.Task$Counter is deprecated. Use org.apache.hadoop.mapreduce.TaskCounter instead
18/08/29 08:45:11 INFO mapreduce.ExportJobBase: Exported 0 records.
18/08/29 08:45:11 ERROR tool.ExportTool: Error during export: Export job failed!
So the attempts keep failing and the export fails with no reason explained.
Have you faced an issue like mine?
Where can I find more detailed logs?
Pawel
Please try the command below:
sqoop export -Dmapred.job.queue.name=disco \
--connect <jdbc-connection-string> \
--username sqoop \
--password sqoop \
--table emp \
--update-mode allowinsert \
--update-key id \
--export-dir table_location \
--input-fields-terminated-by 'delimiter'
Note: --update-mode accepts two arguments. "updateonly" only updates records: a row is updated whenever the update key matches.
If you want an upsert (UPDATE if it exists, else INSERT), use "allowinsert" mode.
Example:
--update-mode updateonly \ --> for updates
--update-mode allowinsert \ --> for upserts
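Putting it together, a full upsert against your tables might look like the sketch below (the --export-dir path and the ID update-key column are assumptions; replace them with your actual warehouse path and key column):

sqoop export -Dmapred.job.queue.name=disco \
  --connect jdbc:oracle:thin:@oracle:1521/tns \
  --username someuser \
  --password somepasswd \
  --table TABLE_ON_ORACLE \
  --export-dir /user/hive/warehouse/hive_database.db/table_on_hive \
  --input-fields-terminated-by '\t' \
  --update-key ID \
  --update-mode allowinsert \
  --num-mappers 5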
I am trying to import data from MySQL into Hive using Sqoop.
MySQL
use sample;
create table forhive( id int auto_increment,
firstname varchar(36),
lastname varchar(36),
primary key(id)
);
insert into forhive(firstname, lastname) values("sample","singh");
select * from forhive;
1 abhay agrawal
2 vijay sharma
3 sample singh
This is the Sqoop command I'm using (version 1.4.7)
sqoop import --connect jdbc:mysql://********:3306/sample
--table forhive --split-by id --columns id,firstname,lastname
--target-dir /home/programmeur_v/forhive
--hive-import --create-hive-table --hive-table sqp.forhive --username vaibhav -P
This is the error I'm getting
Error Log
18/08/02 19:19:49 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Enter password:
18/08/02 19:19:55 INFO tool.BaseSqoopTool: Using Hive-specific
delimiters for output. You can override
18/08/02 19:19:55 INFO tool.BaseSqoopTool: delimiters with
--fields-terminated-by, etc.
18/08/02 19:19:55 INFO manager.MySQLManager: Preparing to use a MySQL
streaming resultset.
18/08/02 19:19:55 INFO tool.CodeGenTool: Beginning code generation
18/08/02 19:19:56 INFO manager.SqlManager: Executing SQL statement:
SELECT t.* FROM forhive AS t LIMIT 1
18/08/02 19:19:56 INFO manager.SqlManager: Executing SQL statement:
SELECT t.* FROM forhive AS t LIMIT 1
18/08/02 19:19:56 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is
/home/programmeur_v/softwares/hadoop-2.9.1
Note:
/tmp/sqoop-programmeur_v/compile/e8ffa12496a2e421f80e1fa16e025d28/forhive.java
uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/08/02 19:19:58 INFO orm.CompilationManager: Writing jar file:
/tmp/sqoop-programmeur_v/compile/e8ffa12496a2e421f80e1fa16e025d28/forhive.jar
18/08/02 19:19:58 WARN manager.MySQLManager: It looks like you are
importing from mysql.
18/08/02 19:19:58 WARN manager.MySQLManager: This transfer can be
faster! Use the --direct
18/08/02 19:19:58 WARN manager.MySQLManager: option to exercise a
MySQL-specific fast path.
18/08/02 19:19:58 INFO manager.MySQLManager: Setting zero DATETIME
behavior to convertToNull (mysql)
18/08/02 19:19:58 INFO mapreduce.ImportJobBase: Beginning import of
forhive
18/08/02 19:19:58 INFO Configuration.deprecation: mapred.jar is
deprecated. Instead, use mapreduce.job.jar
18/08/02 19:19:59 INFO Configuration.deprecation: mapred.map.tasks is
deprecated. Instead, use mapreduce.job.maps
18/08/02 19:19:59 INFO client.RMProxy: Connecting to ResourceManager
at /0.0.0.0:8032
18/08/02 19:20:02 INFO db.DBInputFormat: Using read commited
transaction isolation
18/08/02 19:20:02 INFO db.DataDrivenDBInputFormat: BoundingValsQuery:
SELECT MIN(id), MAX(id) FROM forhive
18/08/02 19:20:02 INFO db.IntegerSplitter: Split size: 0; Num splits:
4 from: 1 to: 3
18/08/02 19:20:02 INFO mapreduce.JobSubmitter: number of splits:3
18/08/02 19:20:02 INFO Configuration.deprecation:
yarn.resourcemanager.system-metrics-publisher.enabled is deprecated.
Instead, use yarn.system-metrics-publisher.enabled
18/08/02 19:20:02 INFO mapreduce.JobSubmitter: Submitting tokens for
job: job_1533231535061_0006
18/08/02 19:20:03 INFO impl.YarnClientImpl: Submitted application
application_1533231535061_0006
18/08/02 19:20:03 INFO mapreduce.Job: The url to track the job:
http://instance-1:8088/proxy/application_1533231535061_0006/
18/08/02 19:20:03 INFO mapreduce.Job: Running job:
job_1533231535061_0006
18/08/02 19:20:11 INFO mapreduce.Job: Job job_1533231535061_0006
running in uber mode : false
18/08/02 19:20:11 INFO mapreduce.Job: map 0% reduce 0%
18/08/02 19:20:21 INFO mapreduce.Job: map 33% reduce 0%
18/08/02 19:20:24 INFO mapreduce.Job: map 100% reduce 0%
18/08/02 19:20:25 INFO mapreduce.Job: Job job_1533231535061_0006
completed successfully
18/08/02 19:20:25 INFO mapreduce.Job: Counters: 31
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=622830
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=295
HDFS: Number of bytes written=48
HDFS: Number of read operations=12
HDFS: Number of large read operations=0
HDFS: Number of write operations=6
Job Counters
Killed map tasks=1
Launched map tasks=3
Other local map tasks=3
Total time spent by all maps in occupied slots (ms)=27404
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=27404
Total vcore-milliseconds taken by all map tasks=27404
Total megabyte-milliseconds taken by all map tasks=28061696
Map-Reduce Framework
Map input records=3
Map output records=3
Input split bytes=295
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=671
CPU time spent (ms)=4210
Physical memory (bytes) snapshot=616452096
Virtual memory (bytes) snapshot=5963145216
Total committed heap usage (bytes)=350224384
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=48
18/08/02 19:20:25 INFO mapreduce.ImportJobBase: Transferred 48 bytes
in 25.828 seconds (1.8584 bytes/sec)
18/08/02 19:20:25 INFO mapreduce.ImportJobBase: Retrieved 3 records.
18/08/02 19:20:25 INFO mapreduce.ImportJobBase: Publishing Hive/Hcat
import job data to Listeners for table forhive
18/08/02 19:20:25 INFO manager.SqlManager: Executing SQL statement:
SELECT t.* FROM forhive AS t LIMIT 1
18/08/02 19:20:25 INFO hive.HiveImport: Loading uploaded data into
Hive
18/08/02 19:20:25 ERROR hive.HiveConfig: Could not load
org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set
correctly.
18/08/02 19:20:25 ERROR tool.ImportTool: Import failed:
java.io.IOException: java.lang.ClassNotFoundException:
org.apache.hadoop.hive.conf.HiveConf
at org.apache.sqoop.hive.HiveConfig.getHiveConf(HiveConfig.java:50)
at org.apache.sqoop.hive.HiveImport.getHiveArgs(HiveImport.java:392)
at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:379)
at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:337)
at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:241)
at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:537)
at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:628)
at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:234)
at org.apache.sqoop.Sqoop.runTool(Sqoop.java:243)
at org.apache.sqoop.Sqoop.main(Sqoop.java:252)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at org.apache.sqoop.hive.HiveConfig.getHiveConf(HiveConfig.java:44)
... 12 more
After googling the same error, I also added HIVE_CONF_DIR to my .bashrc:
export HIVE_HOME=/home/programmeur_v/softwares/apache-hive-1.2.2-bin
export HIVE_CONF_DIR=/home/programmeur_v/softwares/apache-hive-1.2.2-bin/conf
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HIVE_HOME/bin:$SQOOP_HOME/bin:$HIVE_CONF_DIR
All my Hadoop services are also up and running.
6976 NameNode
7286 SecondaryNameNode
7559 NodeManager
7448 ResourceManager
8522 DataNode
14587 Jps
I'm just unable to figure out what mistake I'm making here. Please guide!
Download the file "hive-common-0.10.0.jar" (you can find it online) and place it in the "sqoop/lib" folder. This solution worked for me.
Go to the $HIVE_HOME/lib directory using
cd $HIVE_HOME/lib
then copy hive-common-x.x.x.jar into $SQOOP_HOME/lib using
cp hive-common-x.x.x.jar $SQOOP_HOME/lib
You need to download the file hive-common-0.10.0.jar and copy it to the $SQOOP_HOME/lib folder.
Edit your .bash_profile, then add HADOOP_CLASSPATH:
vim ~/.bash_profile
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*
source ~/.bash_profile
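As an optional sanity check, you can confirm that the Hive jars are now visible on the Hadoop classpath with something like:

hadoop classpath | tr ':' '\n' | grep -i hive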
I got the same issue when I tried to import data from MySQL to Hive with the following command:
sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username root --password z*****3 --table users -m 1 --hive-home /opt/hive --hive-import --hive-overwrite
Finally, these environment variables made it work perfectly.
export HIVE_HOME=/opt/hive
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*
export HIVE_CONF_DIR=$HIVE_HOME/conf
When I try to export data to a HANA DB with Sqoop, I receive an error message.
Here is the Sqoop call:
sqoop export \
--connect "jdbc:sap://saphana:30115" \
--username username \
--password password \
--table "INTERFACE_BO.WR_CUSTOMER_DATA" \
--columns "WEEK_DAY_START,SALES_ORG,REG_ORDERS_CNT,REG_EMAILS_CNT,UNIQUE_EMAILS" \
--driver "com.sap.db.jdbc.Driver" \
--export-dir "/user/hive/warehouse/export.db/weekly_holding_report" \
--input-fields-terminated-by '\t'
This is how I create the table:
CREATE TABLE weekly_holding_report
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
stored as textfile
AS
SELECT
...
I also tried many variants of STORED AS (stored as parquet, etc.), but nothing changed.
And here is the response:
Warning: /opt/cloudera/parcels/CDH-5.8.0-1.cdh5.8.0.p0.42/bin/../lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/10/25 06:43:04 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.8.0
16/10/25 06:43:04 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
16/10/25 06:43:05 WARN sqoop.ConnFactory: Parameter --driver is set to an explicit driver however appropriate connection manager is not being set (via --connection-manager). Sqoop is going to fall back to org.apache.sqoop.manager.GenericJdbcManager. Please specify explicitly which connection manager should be used next time.
16/10/25 06:43:05 INFO manager.SqlManager: Using default fetchSize of 1000
16/10/25 06:43:05 INFO tool.CodeGenTool: Beginning code generation
16/10/25 06:43:05 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM INTERFACE_BO.WR_CUSTOMER_DATA AS t WHERE 1=0
16/10/25 06:43:05 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/515ace5e3ad6ad117ba5f5fa61bf279c/INTERFACE_BO_WR_CUSTOMER_DATA.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
16/10/25 06:43:07 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/515ace5e3ad6ad117ba5f5fa61bf279c/INTERFACE_BO.WR_CUSTOMER_DATA.jar
16/10/25 06:43:07 INFO mapreduce.ExportJobBase: Beginning export of INTERFACE_BO.WR_CUSTOMER_DATA
16/10/25 06:43:07 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
16/10/25 06:43:07 INFO Configuration.deprecation: mapred.map.max.attempts is deprecated. Instead, use mapreduce.map.maxattempts
16/10/25 06:43:08 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
16/10/25 06:43:08 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
16/10/25 06:43:08 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
16/10/25 06:43:08 INFO client.RMProxy: Connecting to ResourceManager at cz-dc-v-564.mall.local/10.200.58.21:8032
16/10/25 06:43:10 INFO input.FileInputFormat: Total input paths to process : 10
16/10/25 06:43:10 INFO input.FileInputFormat: Total input paths to process : 10
16/10/25 06:43:10 INFO mapreduce.JobSubmitter: number of splits:4
16/10/25 06:43:10 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
16/10/25 06:43:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1476740951886_1458
16/10/25 06:43:10 INFO impl.YarnClientImpl: Submitted application application_1476740951886_1458
16/10/25 06:43:10 INFO mapreduce.Job: The url to track the job: http://cz-dc-v-564.mall.local:8088/proxy/application_1476740951886_1458/
16/10/25 06:43:10 INFO mapreduce.Job: Running job: job_1476740951886_1458
16/10/25 06:43:18 INFO mapreduce.Job: Job job_1476740951886_1458 running in uber mode : false
16/10/25 06:43:18 INFO mapreduce.Job: map 0% reduce 0%
16/10/25 06:43:25 INFO mapreduce.Job: map 100% reduce 0%
16/10/25 06:43:25 INFO mapreduce.Job: Job job_1476740951886_1458 failed with state FAILED due to: Task failed task_1476740951886_1458_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/10/25 06:43:26 INFO mapreduce.Job: Counters: 13
Job Counters
Failed map tasks=1
Killed map tasks=3
Launched map tasks=4
Data-local map tasks=1
Rack-local map tasks=3
Total time spent by all maps in occupied slots (ms)=39746
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=19873
Total vcore-seconds taken by all map tasks=19873
Total megabyte-seconds taken by all map tasks=30524928
Map-Reduce Framework
CPU time spent (ms)=0
Physical memory (bytes) snapshot=0
Virtual memory (bytes) snapshot=0
16/10/25 06:43:26 WARN mapreduce.Counters: Group FileSystemCounters is deprecated. Use org.apache.hadoop.mapreduce.FileSystemCounter instead
16/10/25 06:43:26 INFO mapreduce.ExportJobBase: Transferred 0 bytes in 17,9084 seconds (0 bytes/sec)
16/10/25 06:43:26 INFO mapreduce.ExportJobBase: Exported 0 records.
16/10/25 06:43:26 ERROR tool.ExportTool: Error during export: Export job failed!
Does anybody know what I am doing wrong?
Thanks for the help!
Have you tried adding the Sqoop argument -m 1 (--num-mappers 1)?
This sets the number of mappers, so in this case try with a single mapper.
If your data is large (more than 1 GB), then increase the number of mappers.
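For example (this is just the command from your question with a single mapper added; nothing else changed):

sqoop export \
--connect "jdbc:sap://saphana:30115" \
--username username \
--password password \
--table "INTERFACE_BO.WR_CUSTOMER_DATA" \
--columns "WEEK_DAY_START,SALES_ORG,REG_ORDERS_CNT,REG_EMAILS_CNT,UNIQUE_EMAILS" \
--driver "com.sap.db.jdbc.Driver" \
--export-dir "/user/hive/warehouse/export.db/weekly_holding_report" \
--input-fields-terminated-by '\t' \
--num-mappers 1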
Check the logs for the corresponding job id, in your case job_1476740951886_1458. There must be some permission or connectivity issue (with the given SAP HANA URL); that is why it transferred zero bytes.
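If log aggregation is enabled on your cluster, you can pull the container logs (including the actual exception thrown by the failed map task) with something like:

yarn logs -applicationId application_1476740951886_1458 | less

Otherwise, follow the tracking URL printed in the output and open the logs of the failed attempts from there.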
While transferring data from HDFS to MySQL, a MapReduce job gets spawned, but it gets stuck and never completes.
sqoop export --connect jdbc:mysql://crxy2:3306/test --username root --password 19911130 --table info --export-dir sqoop_export
I see the following in the logs:
Warning: /software/sqoop-1.4.6.alpha/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /software/sqoop-1.4.6.alpha/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /software/sqoop-1.4.6.alpha/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /software/sqoop-1.4.6.alpha/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
15/12/02 01:17:37 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
15/12/02 01:17:37 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/12/02 01:17:37 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
15/12/02 01:17:37 INFO tool.CodeGenTool: Beginning code generation
15/12/02 01:17:38 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `info` AS t LIMIT 1
15/12/02 01:17:38 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `info` AS t LIMIT 1
15/12/02 01:17:38 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /software/hadoop-2.6.0
Note: /tmp/sqoop-root/compile/344126e97612def1e3976c1978c2e75e/info.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/12/02 01:17:42 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/344126e97612def1e3976c1978c2e75e/info.jar
15/12/02 01:17:42 INFO mapreduce.ExportJobBase: Beginning export of info
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/software/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/software/hbase-0.98.8-hadoop2/lib/slf4j-log4j12-1.6.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/12/02 01:17:43 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/12/02 01:17:45 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/12/02 01:17:45 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
15/12/02 01:17:45 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/12/02 01:17:46 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/12/02 01:17:50 INFO input.FileInputFormat: Total input paths to process : 1
15/12/02 01:17:50 INFO input.FileInputFormat: Total input paths to process : 1
15/12/02 01:17:50 INFO mapreduce.JobSubmitter: number of splits:4
15/12/02 01:17:50 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
15/12/02 01:17:50 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1449047829255_0001
15/12/02 01:17:51 INFO impl.YarnClientImpl: Submitted application application_1449047829255_0001
15/12/02 01:17:52 INFO mapreduce.Job: The url to track the job: http://crxy2:8088/proxy/application_1449047829255_0001/
15/12/02 01:17:52 INFO mapreduce.Job: Running job: job_1449047829255_0001
15/12/02 01:18:12 INFO mapreduce.Job: Job job_1449047829255_0001 running in uber mode : false
15/12/02 01:18:12 INFO mapreduce.Job: map 0% reduce 0%
15/12/02 01:19:10 INFO mapreduce.Job: map 75% reduce 0%
15/12/02 01:19:12 INFO mapreduce.Job: map 100% reduce 0%
15/12/02 01:29:41 INFO mapreduce.Job: Task Id : attempt_1449047829255_0001_m_000001_0, Status : FAILED
AttemptID:attempt_1449047829255_0001_m_000001_0 Timed out after 600 secs
15/12/02 01:29:42 INFO mapreduce.Job: map 75% reduce 0%
15/12/02 01:29:58 INFO mapreduce.Job: map 100% reduce 0%
15/12/02 01:40:11 INFO mapreduce.Job: Task Id : attempt_1449047829255_0001_m_000001_1, Status : FAILED
AttemptID:attempt_1449047829255_0001_m_000001_1 Timed out after 600 secs
15/12/02 01:40:12 INFO mapreduce.Job: map 75% reduce 0%
15/12/02 01:40:28 INFO mapreduce.Job: map 100% reduce 0%
15/12/02 01:50:41 INFO mapreduce.Job: Task Id : attempt_1449047829255_0001_m_000001_2, Status : FAILED
AttemptID:attempt_1449047829255_0001_m_000001_2 Timed out after 600 secs
15/12/02 01:50:42 INFO mapreduce.Job: map 75% reduce 0%
15/12/02 01:51:00 INFO mapreduce.Job: map 100% reduce 0%
15/12/02 02:01:13 INFO mapreduce.Job: Job job_1449047829255_0001 failed with state FAILED due to: Task failed task_1449047829255_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
15/12/02 02:01:13 INFO mapreduce.Job: Counters: 32
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=370395
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=556
HDFS: Number of bytes written=0
HDFS: Number of read operations=15
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Failed map tasks=4
Launched map tasks=7
Other local map tasks=3
Data-local map tasks=4
Total time spent by all maps in occupied slots (ms)=2732612
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=2732612
Total vcore-seconds taken by all map tasks=2732612
Total megabyte-seconds taken by all map tasks=2798194688
Map-Reduce Framework
Map input records=0
Map output records=0
Input split bytes=504
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=759
CPU time spent (ms)=5170
Physical memory (bytes) snapshot=245080064
Virtual memory (bytes) snapshot=2529026048
Total committed heap usage (bytes)=46792704
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=0
15/12/02 02:01:13 INFO mapreduce.ExportJobBase: Transferred 556 bytes in 2,607.4894 seconds (0.2132 bytes/sec)
15/12/02 02:01:13 INFO mapreduce.ExportJobBase: Exported 0 records.
15/12/02 02:01:13 ERROR tool.ExportTool: Error during export: Export job failed!
The ResourceManager log shows:
2015-12-02 08:01:15,791 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=root OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1449047829255_0002
2015-12-02 08:01:15,793 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Application Attempt appattempt_1449047829255_0002_000001 is done. finalState=FINISHED
2015-12-02 08:01:15,793 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.AppSchedulingInfo: Application application_1449047829255_0002 requests cleared
2015-12-02 08:01:15,794 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.LeafQueue: Application removed - appId: application_1449047829255_0002 user: root queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2015-12-02 08:01:15,794 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.ParentQueue: Application removed - appId: application_1449047829255_0002 user: root leaf-queue of parent: root #applications: 0
2015-12-02 08:01:15,794 INFO org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary: appId=application_1449047829255_0002,name=info.jar,user=root,queue=default,state=FINISHED,trackingUrl=http://crxy2:8088/proxy/application_1449047829255_0002/jobhistory/job/job_1449047829255_0002,appMasterHost=crxy2,startTime=1449069503787,finishTime=1449072069229,finalStatus=FAILED
2015-12-02 08:01:15,796 INFO org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncher: Cleaning master appattempt_1449047829255_0002_000001
2015-12-02 08:01:15,873 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2015-12-02 08:01:15,873 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
2015-12-02 08:01:16,879 INFO org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler: Null container completed...
The questioner was looking at the wrong logs. They were able to troubleshoot the issue by going through the failed task attempt logs, as suggested in the comments section.
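For reference, a sketch of how the failed task attempt logs can be pulled (assuming YARN log aggregation is enabled; the application and attempt IDs are taken from the output above):

yarn logs -applicationId application_1449047829255_0001 > app.log
grep -B 2 -A 20 "attempt_1449047829255_0001_m_000001" app.log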