When using sqoop import it is possible to pass Java properties.
In my case, I need to pass
-Doraoop.oracle.rac.service.name=myservice
together with --direct to use Sqoop's direct connection to an Oracle RAC.
Now I need to create a sqoop job with the same parameter, but when I try issuing
sqoop job --create myjob -- import -Doraoop.oracle.rac.service.name=myservice --direct --connect...
It complains with:
ERROR tool.BaseSqoopTool: Error parsing arguments for import:
ERROR tool.BaseSqoopTool: Unrecognized argument: -Doraoop.oracle.rac.service.name=myservice
....
Wherever I put the -D it doesn't work, whereas with a plain sqoop import it does.
It only works in the following way:
sqoop job -Doraoop.oracle.rac.service.name=myservice --create myjob -- import ...
but this way the property applies to the current invocation, not to subsequent executions of the saved job.
Is there a way to pass java properties through -D to a sqoop job --create myjob -- import command?
I'm trying this with Sqoop 1.4.6 on CDH 5.5.
As per the Sqoop docs, the syntax for the sqoop job command is:
sqoop job (generic-args) (job-args) [-- [subtool-name] (subtool-args)]
Here -D is a generic argument and --create myjob is a job argument, so you have to use a command like:
sqoop job -Doraoop.oracle.rac.service.name=myservice --create myjob ....
It should behave the same way for the current execution and all subsequent job executions.
Inspect the configuration of the job using:
sqoop job --show myjob
Check whether you find any difference between the first execution and subsequent executions.
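For illustration, here's a minimal sketch of the full pattern (the connection details and table name are placeholders, not from the original post):
# Create the saved job; the generic -D option goes before the job arguments.
sqoop job -Doraoop.oracle.rac.service.name=myservice \
--create myjob \
-- import \
--direct \
--connect jdbc:oracle:thin:@//rac-scan.example.com:1521/myservice \
--username scott -P \
--table MYTABLE
# Execute the saved job later.
sqoop job --exec myjob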
Sqoop import job for Oracle 11g fails with the error:
ERROR sqoop.Sqoop: Got exception running Sqoop: org.kitesdk.data.ValidationException: Dataset name 81fdfb8245ab4898a719d4dda39e23f9_C46010.HISTCONTACT is not alphanumeric (plus '_')
Here's the complete command:
$ sqoop job --create ingest_amsp_histcontact -- import --connect "jdbc:oracle:thin:@<IP>:<PORT>/<SID>" --username "c46010" -P --table C46010.HISTCONTACT --check-column ITEM_SEQ --target-dir /tmp/junk/amsp.histcontact -as-parquetfile -m 1 --incremental append
$ sqoop job --exec ingest_amsp_histcontact
It's an incremental import in Parquet format. Surprisingly, it works fine if I use another format like --as-textfile.
This is a similar issue to Sqoop job fails with KiteSDK validation error for Oracle import, but I've used ojdbc6 and switching to ojdbc7 doesn't work either.
Sqoop version: 1.4.7
Oracle version: 11g
Thanks,
Yusata
I know it's kind of late, but I faced the same problem and solved it by omitting the Parquet file option.
Try running the job without
-as-parquetfile
There's a workaround: omitting the "." character in the --table parameter works for me, so instead of --table <schema>.<table_name> I use --table <table_name>. But this doesn't work if you import a table from another schema in Oracle.
The problem is the "." in the --target-dir option. Workaround: change the target dir to "/tmp/junk/amsp_histcontact", and when the sqoop job finishes, rename the HDFS target dir to "/tmp/junk/amsp.histcontact".
Does sqoop import/export create Java classes? If so, where can I see these generated classes? What is the location of these class files?
Does sqoop import/export create Java classes?
Yes.
If so, where can I see these generated classes? What is the location of these class files?
It automatically generates a Java file with the same name as the table, in the current path on the local system.
You can use --outdir to provide your own path.
Updated as per comment
You can use codegen command for this:
sqoop codegen \
--connect jdbc:mysql://localhost/databasename \
--username username \
--password password \
--table tablename
After the command executes successfully, a path is printed at the end where you can find the generated Java files.
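If you want to control where the generated sources and compiled classes land, codegen also accepts --outdir and --bindir; this is just a sketch reusing the same placeholder connection details:
sqoop codegen \
--connect jdbc:mysql://localhost/databasename \
--username username \
--password password \
--table tablename \
--outdir /home/user/sqoop/src \
--bindir /home/user/sqoop/classes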
This is the complete flow of a Sqoop command:
User ---> Sqoop CLI command ---> Sqoop code gen ---> Sqoop JAR writer ---> JAR submission ---> ResourceManager ---> MR operation (5 phases) ---> HDFS ---> ack to Sqoop by the MR program
Sqoop internally uses MapReduce v1 or v2 for its execution (getting data from the DB and storing it in HDFS as comma-delimited values). It first creates a .java source file for the MapReduce program, packages it into a JAR, and then submits it.
The .java file is created in the current local directory, named after the table.
sqoop import --connect jdbc:mysql://localhost/hadoop --table employee -m 1
In this case, an "employee.java" file is created.
I'm creating a sqoop job which will be scheduled in Oozie to load daily data into Hive.
I want to do an incremental load into Hive based on a date parameter passed to the sqoop job.
After a lot of research, I'm unable to find a way to pass a parameter to a Sqoop job.
You do this by passing the date down through two stages:
Coordinator to workflow
In your coordinator you can pass the date to the workflow that it executes as a <property>, like this:
<coordinator-app name="schedule" frequency="${coord:days(1)}"
start="2015-01-01T00:00Z" end="2025-01-01T00:00Z"
timezone="Etc/UTC" xmlns="uri:oozie:coordinator:0.2">
...
<action>
<workflow>
<app-path>${nameNode}/your/workflow.xml</app-path>
<configuration>
<property>
<name>workflow_date</name>
<value>${coord:formatTime(coord:nominalTime(), 'yyyyMMdd')}</value>
</property>
</configuration>
</workflow>
</action>
...
</coordinator-app>
Workflow to Sqoop
In your workflow you can reference that property in your Sqoop call using the ${workflow_date} variable, like this:
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
...
<command>import --connect jdbc:connect:string:here --table tablename --target-dir /your/import/dir/${workflow_date}/ -m 1</command>
...
</sqoop>
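For example, for the nominal time 2015-01-02 the coordinator would set workflow_date to 20150102, so the Sqoop action effectively runs (placeholders kept from the snippet above):
import --connect jdbc:connect:string:here --table tablename --target-dir /your/import/dir/20150102/ -m 1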
The solution below is from the Apache Sqoop Cookbook.
Preserving the Last Imported Value
Problem
Incremental import is a great feature that you're using a lot. Shouldering the responsibility for remembering the last imported value is getting to be a hassle.
Solution
You can take advantage of the built-in Sqoop metastore that allows you to save all parameters for later reuse. You can create a simple incremental import job with the following command:
sqoop job \
--create visits \
-- import \
--connect jdbc:mysql://mysql.example.com/sqoop \
--username sqoop \
--password sqoop \
--table visits \
--incremental append \
--check-column id \
--last-value 0
And start it with the --exec parameter:
sqoop job --exec visits
Discussion
The Sqoop metastore is a powerful part of Sqoop that allows you to retain your job definitions and to easily run them anytime. Each saved job has a logical name that is used for referencing. You can list all retained jobs using the --list parameter:
sqoop job --list
You can remove the old job definitions that are no longer needed with the --delete parameter, for example:
sqoop job --delete visits
And finally, you can also view content of the saved job definitions using the --show parameter, for example:
sqoop job --show visits
Output of the --show command will be in the form of properties. Unfortunately, Sqoop currently can't rebuild the command line that you used to create the saved job.
The most important benefit of the built-in Sqoop metastore is in conjunction with incremental import. Sqoop will automatically serialize the last imported value back into the metastore after each successful incremental job. This way, users do not need to remember the last imported value after each execution; everything is handled automatically.
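As a sketch of typical usage (the schedule and log path below are illustrative assumptions, not from the book), the same saved job can simply be re-executed on a schedule, for example from cron, and Sqoop will read and update the stored last value on each successful run:
# crontab entry: re-run the saved incremental job nightly at 02:00
0 2 * * * sqoop job --exec visits >> /var/log/sqoop-visits.log 2>&1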
I have created a sqoop job called TeamMemsImportJob which basically pulls data from SQL Server into Hive.
I can execute the sqoop job from the Unix command line by running:
sqoop job --exec TeamMemsImportJob
If I create an Oozie job with the actual sqoop import command in it, it runs through fine.
However, if I create the Oozie job and run the sqoop job through it, I get the following error:
oozie job -config TeamMemsImportJob.properties -run
>>> Invoking Sqoop command line now >>>
4273 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
4329 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version: 1.4.4.2.1.1.0-385
5172 [main] ERROR org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage - Cannot restore job: TeamMemsImportJob
5172 [main] ERROR org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage - (No such job)
5172 [main] ERROR org.apache.sqoop.tool.JobTool - I/O error performing job operation: java.io.IOException: Cannot restore missing job TeamMemsImportJob
at org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage.read(HsqldbJobStorage.java:256)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:198)
It looks as if it cannot find the job. However, I can see the job listed, as below:
[root@sandbox ~]# sqoop job --list
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/06/25 08:12:08 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4.2.1.1.0-385
Available jobs:
TeamMemsImportJob
How do I resolve this?
You have to use the --meta-connect flag when creating the job, so that it is stored in a custom Sqoop metastore database that Oozie can access.
sqoop job \
--meta-connect "jdbc:hsqldb:file:/on/server/not/hdfs/sqoop-metastore/sqoop-meta.db;shutdown=true" \
--create jobName \
-- import \
--connect jdbc:oracle:thin:@server:port:sid \
--username username \
--password-file /path/on/hdfs/server.password \
--table TABLE \
--incremental append \
--check-column ID \
--last-value "0" \
--target-dir /path/on/hdfs/TABLE
When you need to execute jobs, you can do it from Oozie the regular way, but make sure to include --meta-connect to indicate where the job is stored.
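For example, the exec call (from the shell or inside an Oozie Sqoop action) would look something like this, reusing the metastore URL from the create command above:
sqoop job \
--meta-connect "jdbc:hsqldb:file:/on/server/not/hdfs/sqoop-metastore/sqoop-meta.db;shutdown=true" \
--exec jobName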
From the log we can see that it cannot find the stored job, since you are using the native HSQLDB metastore.
To make Sqoop jobs available to other systems, you should configure another database, for example MySQL, that can be accessed by all systems.
From the documentation:
Running sqoop-metastore launches a shared HSQLDB database instance on
the current machine. Clients can connect to this metastore and create
jobs which can be shared between users for execution
The location of the metastore’s files on disk is controlled by the
sqoop.metastore.server.location property in conf/sqoop-site.xml. This
should point to a directory on the local filesystem.
The metastore is available over TCP/IP. The port is controlled by the
sqoop.metastore.server.port configuration parameter, and defaults to
16000.
Clients should connect to the metastore by specifying
sqoop.metastore.client.autoconnect.url or --meta-connect with the
value jdbc:hsqldb:hsql://<server-name>:<port>/sqoop. For example,
jdbc:hsqldb:hsql://metaserver.example.com:16000/sqoop.
This metastore may be hosted on a machine within the Hadoop cluster,
or elsewhere on the network.
Can you check whether that DB is accessible from the other systems?
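A quick way to check is to point a client explicitly at the shared metastore, using the documentation's example host and port:
sqoop job \
--meta-connect jdbc:hsqldb:hsql://metaserver.example.com:16000/sqoop \
--list
If the saved job shows up in that listing from the other machine, Oozie should be able to restore it the same way.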
I need to import data from an RDBMS table into a remote Hive machine. How can I achieve this using Sqoop?
In a nutshell: how do I specify the Hive database name and the Hive machine IP in the import command?
Please help me with the appropriate sqoop command.
You should run the sqoop command on the machine where you have Hive installed, because sqoop will look for $HIVE_HOME/bin/hive to execute the CREATE TABLE ... and other statements.
Alternatively, you could use sqoop with the --hive-home command-line option to specify where your Hive is installed (it just overrides $HIVE_HOME).
To connect to your remote RDBMS:
sqoop import --connect jdbc:mysql://remote-server/mytable --username xxx --password yyy
To import into Hive:
sqoop import --hive-import
You can get a more comprehensive list of commands by looking at the Sqoop User Guide: http://archive.cloudera.com/cdh/3/sqoop/SqoopUserGuide.html#_literal_sqoop_import_literal
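Putting it together, a minimal sketch (the host, credentials, and table/database names are placeholders; passing a db.table value to --hive-table is one way to target a specific Hive database):
sqoop import \
--connect jdbc:mysql://remote-db-server/sourcedb \
--username xxx --password yyy \
--table mytable \
--hive-import \
--hive-table mydb.mytable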