Save sqoop incremental import id - bash

I have a lot of Sqoop jobs running in AWS EMR, but sometimes I need to shut the instance down.
Is there a way to save the last id from an incremental import, maybe locally, and upload it to S3 via a cron job?
My first idea is, when I create the job, to send a request to Redshift (where my data is stored) and get the last id or last_modified value via a bash script.
Another idea is to take the output of sqoop job --show $jobid, filter out the last_id parameter, and use it to create the job again.
But I don't know if Sqoop offers a way to do this more easily.

As per the Sqoop docs,
If an incremental import is run from the command line, the value which should be specified as --last-value in a subsequent incremental import will be printed to the screen for your reference. If an incremental import is run from a saved job, this value will be retained in the saved job. Subsequent runs of sqoop job --exec someIncrementalJob will continue to import only newer rows than those previously imported.
So, you don't need to store anything. Sqoop's metastore takes care of saving the last value and makes it available to the next incremental import job.
Example:
sqoop job \
--create new_job \
-- \
import \
--connect jdbc:mysql://localhost/testdb \
--username xxxx \
--password xxxx \
--table employee \
--incremental append \
--check-column id \
--last-value 0
And start this job with the --exec parameter:
sqoop job --exec new_job

Solution
I changed the sqoop-site.xml file and added the endpoint of my MySQL instance.
Steps
Create the MySQL instance and run these queries:
CREATE TABLE SQOOP_ROOT (version INT, propname VARCHAR(128) NOT NULL, propval VARCHAR(256), CONSTRAINT SQOOP_ROOT_unq UNIQUE (version, propname));
INSERT INTO SQOOP_ROOT VALUES(NULL, 'sqoop.hsqldb.job.storage.version', '0');
Change the original sqoop-site.xml, adding your MySQL endpoint, user, and password:
<property>
<name>sqoop.metastore.client.enable.autoconnect</name>
<value>true</value>
<description>If true, Sqoop will connect to a local metastore
for job management when no other metastore arguments are
provided.
</description>
</property>
<!--
The auto-connect metastore is stored in ~/.sqoop/. Uncomment
these next arguments to control the auto-connect process with
greater precision.
-->
<property>
<name>sqoop.metastore.client.autoconnect.url</name>
<value>jdbc:mysql://your-mysql-instance-endpoint:3306/database</value>
<description>The connect string to use when connecting to a
job-management metastore. If unspecified, uses ~/.sqoop/.
You can specify a different path here.
</description>
</property>
<property>
<name>sqoop.metastore.client.autoconnect.username</name>
<value>${sqoop-user}</value>
<description>The username to bind to the metastore.
</description>
</property>
<property>
<name>sqoop.metastore.client.autoconnect.password</name>
<value>${sqoop-pass}</value>
<description>The password to bind to the metastore.
</description>
</property>
The first time you run sqoop job --list it will return no jobs. But after creating the jobs, even if you shut down the EMR cluster, you won't lose the Sqoop metadata of the jobs you executed.
In EMR, you can use a Bootstrap Action to automate this configuration at cluster creation.
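Below is a minimal sketch of such a Bootstrap Action, assuming you keep the customized sqoop-site.xml in an S3 bucket; the bucket name and the /etc/sqoop/conf path are placeholders to adapt to your cluster:
#!/bin/bash
# Hypothetical bootstrap script: install the customized sqoop-site.xml that
# points at the external MySQL metastore (bucket and paths are placeholders).
set -euo pipefail
aws s3 cp s3://my-bucket/config/sqoop-site.xml /tmp/sqoop-site.xml
sudo mkdir -p /etc/sqoop/conf
sudo cp /tmp/sqoop-site.xml /etc/sqoop/conf/sqoop-site.xml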

Related

Pyspark: remote Hive warehouse location

I need to read / write tables stored in remote Hive Server from Pyspark. All I know about this remote Hive is that it runs under Docker. From Hadoop Hue I have found two urls for an iris table that I try to select some data from:
I have a table metastore url:
http://xxx.yyy.net:8888/metastore/table/mytest/iris
and table location url:
hdfs://quickstart.cloudera:8020/user/hive/warehouse/mytest.db/iris
I have no idea why the last url contains quickstart.cloudera:8020. Maybe this is because Hive runs under Docker?
Discussing access to Hive tables, the Spark SQL programming guide says:
https://spark.apache.org/docs/latest/sql-programming-guide.html#hive-tables
When working with Hive, one must instantiate SparkSession with Hive support, including connectivity to a persistent Hive metastore, support for Hive serdes, and Hive user-defined functions. Users who do not have an existing Hive deployment can still enable Hive support. When not configured by the hive-site.xml, the context automatically creates metastore_db in the current directory and creates a directory configured by spark.sql.warehouse.dir, which defaults to the directory spark-warehouse in the current directory that the Spark application is started. Note that the hive.metastore.warehouse.dir property in hive-site.xml is deprecated since Spark 2.0.0. Instead, use spark.sql.warehouse.dir to specify the default location of database in warehouse. You may need to grant write privilege to the user who starts the Spark application.
In my case, the hive-site.xml that I managed to get has neither the hive.metastore.warehouse.dir nor the spark.sql.warehouse.dir property.
The Spark tutorial suggests using the following code to access remote Hive tables:
from os.path import expanduser, join, abspath
from pyspark.sql import SparkSession
from pyspark.sql import Row
# warehouse_location points to the default location for managed databases and tables
warehouse_location = abspath('spark-warehouse')
spark = SparkSession \
.builder \
.appName("Python Spark SQL Hive integration example") \
.config("spark.sql.warehouse.dir", warehouse_location) \
.enableHiveSupport() \
.getOrCreate()
And in my case, after running code similar to the above, but with the correct value for warehouse_location, I think I can then do:
spark.sql("use mytest")
spark.sql("SELECT * FROM iris").show()
So where can I find the remote Hive warehouse location? How do I make Pyspark work with remote Hive tables?
Update
hive-site.xml has the following properties:
...
...
...
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://127.0.0.1/metastore?createDatabaseIfNotExist=true</value>
<description>JDBC connect string for a JDBC metastore</description>
</property>
...
...
...
<property>
<name>hive.metastore.uris</name>
<value>thrift://127.0.0.1:9083</value>
<description>IP address (or fully-qualified domain name) and port of the metastore host</description>
</property>
So it looks like 127.0.0.1 is the Docker localhost that runs the Cloudera docker app, which does not help me get to the Hive warehouse at all.
How do I access the Hive warehouse when Cloudera Hive runs as a Docker app?
Here https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cdh_ig_hive_metastore_configure.html under "Remote Mode" you'll find that the Hive metastore runs in its own JVM process; other processes such as HiveServer2, HCatalog, and Cloudera Impala communicate with it through the Thrift API using the hive.metastore.uris property in hive-site.xml:
<property>
<name>hive.metastore.uris</name>
<value>thrift://xxx.yyy.net:8888</value>
</property>
(Not sure about the way you have to specify the address)
And maybe this property too:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://xxx.yyy.net/hive</value>
</property>
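As a rough sketch of the Spark side: the Spark documentation says Hive is configured by placing hive-site.xml in Spark's conf/ directory, so one option is to drop a minimal hive-site.xml there that only sets hive.metastore.uris to the remote Thrift endpoint. The host below is the one from the question, and 9083 is the default metastore port from its hive-site.xml; both are assumptions to verify:
# Write a minimal hive-site.xml into Spark's conf directory so that a
# SparkSession built with enableHiveSupport() talks to the remote metastore.
cat > "$SPARK_HOME/conf/hive-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://xxx.yyy.net:9083</value>
  </property>
</configuration>
EOF
# Then start pyspark and run: spark.sql("use mytest"); spark.sql("SELECT * FROM iris").show()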

Passing parameter to sqoop job

I'm creating a Sqoop job which will be scheduled in Oozie to load daily data into Hive.
I want to do an incremental load into Hive based on a date that is passed as a parameter to the Sqoop job.
After researching a lot, I'm unable to find a way to pass a parameter to a Sqoop job.
You do this by passing the date down through two stages:
Coordinator to workflow
In your coordinator you can pass the date to the workflow that it executes as a <property>, like this:
<coordinator-app name="schedule" frequency="${coord:days(1)}"
start="2015-01-01T00:00Z" end="2025-01-01T00:00Z"
timezone="Etc/UTC" xmlns="uri:oozie:coordinator:0.2">
...
<action>
<workflow>
<app-path>${nameNode}/your/workflow.xml</app-path>
<configuration>
<property>
<name>workflow_date</name>
<value>${coord:formatTime(coord:nominalTime(), 'yyyyMMdd')}</value>
</property>
</configuration>
</workflow>
</action>
...
</coordinator-app>
Workflow to Sqoop
In your workflow you can reference that property in your Sqoop call using the ${workflow_date} variable, like this:
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
...
<command>import --connect jdbc:connect:string:here --table tablename --target-dir /your/import/dir/${workflow_date}/ -m 1</command>
...
</sqoop>
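If the date should drive the incremental filter itself and not only the target directory, one hedged option is to pass a --where condition to Sqoop. Because the value contains spaces, the arguments are given as <arg> elements instead of a single <command>; the column name last_modified and the date format are hypothetical and must match what the coordinator's formatTime pattern produces:
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
...
<arg>import</arg>
<arg>--connect</arg>
<arg>jdbc:connect:string:here</arg>
<arg>--table</arg>
<arg>tablename</arg>
<arg>--where</arg>
<arg>last_modified >= '${workflow_date}'</arg>
<arg>--target-dir</arg>
<arg>/your/import/dir/${workflow_date}/</arg>
<arg>-m</arg>
<arg>1</arg>
...
</sqoop>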
The solution below is from the Apache Sqoop Cookbook.
Preserving the Last Imported Value
Problem
Incremental import is a great feature that you're using a lot. Shouldering the responsibility for remembering the last imported value is getting to be a hassle.
Solution
You can take advantage of the built-in Sqoop metastore that allows you to save all parameters for later reuse. You can create a simple incremental import job with the following command:
sqoop job \
--create visits \
-- import \
--connect jdbc:mysql://mysql.example.com/sqoop \
--username sqoop \
--password sqoop \
--table visits \
--incremental append \
--check-column id \
--last-value 0
And start it with the --exec parameter:
sqoop job --exec visits
Discussion
The Sqoop metastore is a powerful part of Sqoop that allows you to retain your job definitions and to easily run them anytime. Each saved job has a logical name that is used for referencing. You can list all retained jobs using the --list parameter:
sqoop job --list
You can remove the old job definitions that are no longer needed with the --delete parameter, for example:
sqoop job --delete visits
And finally, you can also view the content of the saved job definitions using the --show parameter, for example:
sqoop job --show visits
Output of the --show command will be in the form of properties. Unfortunately, Sqoop currently can't rebuild the command line that you used to create the saved job.
The most important benefit of the built-in Sqoop metastore is in conjunction with incremental import. Sqoop will automatically serialize the last imported value back into the metastore after each successful incremental job. This way, users do not need to remember the last imported value after each execution; everything is handled automatically.
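If you do still want to extract the stored last value yourself (for example to back it up to S3, as in the first question above), one hedged approach is to parse it out of the sqoop job --show output. The property name incremental.last.value and the S3 path below are assumptions to check against your Sqoop version's output:
# Pull the stored last value out of the saved job and copy it to S3.
# The exact property name printed by --show may vary between Sqoop versions.
last_value=$(sqoop job --show visits 2>/dev/null | awk -F' = ' '/incremental.last.value/ {print $2}')
echo "$last_value" > /tmp/visits.last_value
aws s3 cp /tmp/visits.last_value s3://my-bucket/sqoop/visits.last_value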

Sqoop job through oozie

I have created a sqoop job called TeamMemsImportJob which basically pulls data from sql server into hive.
I can execute the sqoop job through the unix command line by running the following command:
sqoop job --exec TeamMemsImportJob
If I create an Oozie job with the actual sqoop import command in it, it runs through fine.
However if I create the oozie job and run the sqoop job through it, I get the following error:
oozie job -config TeamMemsImportJob.properties -run
>>> Invoking Sqoop command line now >>>
4273 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
4329 [main] INFO org.apache.sqoop.Sqoop - Running Sqoop version: 1.4.4.2.1.1.0-385
5172 [main] ERROR org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage - Cannot restore job: TeamMemsImportJob
5172 [main] ERROR org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage - (No such job)
5172 [main] ERROR org.apache.sqoop.tool.JobTool - I/O error performing job operation: java.io.IOException: Cannot restore missing job TeamMemsImportJob
at org.apache.sqoop.metastore.hsqldb.HsqldbJobStorage.read(HsqldbJobStorage.java:256)
at org.apache.sqoop.tool.JobTool.execJob(JobTool.java:198)
It looks as if it cannot find the job. However, I can see the job listed, as below:
[root@sandbox ~]# sqoop job --list
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/06/25 08:12:08 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4.2.1.1.0-385
Available jobs:
TeamMemsImportJob
How do I resolve this?
You have to use the --meta-connect flag while creating the job, so that the job is stored in a custom Sqoop metastore database that Oozie can access.
sqoop \
job \
--meta-connect \
"jdbc:hsqldb:file:/on/server/not/hdfs/sqoop-metastore/sqoop-meta.db;shutdown=true" \
--create \
jobName \
-- \
import \
--connect jdbc:oracle:thin:@server:port:sid \
--username username \
--password-file /path/on/hdfs/server.password \
--table TABLE \
--incremental append \
--check-column ID \
--last-value "0" \
--target-dir /path/on/hdfs/TABLE
When you need to execute jobs, you can do it from Oozie the regular way, but make sure to include --meta-connect to indicate where the job is stored.
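For example, the exec call (whether run by hand or from the Oozie Sqoop action) would look something like this, reusing the metastore path from the job creation above:
sqoop job \
--meta-connect "jdbc:hsqldb:file:/on/server/not/hdfs/sqoop-metastore/sqoop-meta.db;shutdown=true" \
--exec jobName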
If we look at the log, we can see that it cannot find the stored job, since you are using the native HSQLDB database.
To make Sqoop jobs available to other systems, you should configure another database, for example MySQL, which can be accessed by all systems.
From the documentation:
Running sqoop-metastore launches a shared HSQLDB database instance on
the current machine. Clients can connect to this metastore and create
jobs which can be shared between users for execution
The location of the metastore’s files on disk is controlled by the
sqoop.metastore.server.location property in conf/sqoop-site.xml. This
should point to a directory on the local filesystem.
The metastore is available over TCP/IP. The port is controlled by the
sqoop.metastore.server.port configuration parameter, and defaults to
16000.
Clients should connect to the metastore by specifying
sqoop.metastore.client.autoconnect.url or --meta-connect with the
value jdbc:hsqldb:hsql://<server-name>:<port>/sqoop. For example,
jdbc:hsqldb:hsql://metaserver.example.com:16000/sqoop.
This metastore may be hosted on a machine within the Hadoop cluster,
or elsewhere on the network.
Can you check if that database is accessible from the other systems?
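As a rough sketch of that shared setup (the host name is the one from the documentation example, and the config edit is an assumption about your conf/sqoop-site.xml):
# On the metastore host: set sqoop.metastore.server.location in
# conf/sqoop-site.xml to a local directory, then start the shared metastore.
sqoop metastore &
# From any client (including the node running the Oozie launcher):
# operate on jobs in the shared metastore instead of the local ~/.sqoop one.
sqoop job --list \
--meta-connect jdbc:hsqldb:hsql://metaserver.example.com:16000/sqoop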

How to specify Hive database name in command line while importing data from RDBMS into Hive using Sqoop ?

I need to import data from an RDBMS table into a remote Hive machine. How can I achieve this using Sqoop?
In a nutshell, how do I specify the Hive database name and the Hive machine's IP in the import command?
Please help me with the appropriate Sqoop command.
You should run the sqoop command on the machine where you have Hive installed, because sqoop will look for $HIVE_HOME/bin/hive to execute the CREATE TABLE ... and other statements.
Alternatively, you could use sqoop with the --hive-home command line option to specify where your Hive is installed (just overrides $HIVE_HOME)
To connect to your remote RDBMS:
sqoop import --connect jdbc:mysql://remote-server/mydatabase --table mytable --username xxx --password yyy
To import into Hive:
sqoop import --hive-import
You can get a more comprehensive list of commands by looking at the Sqoop User Guide: http://archive.cloudera.com/cdh/3/sqoop/SqoopUserGuide.html#_literal_sqoop_import_literal
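To answer the database-name part directly: one hedged option is to qualify the Hive target in --hive-table with the database name; all values below are placeholders:
sqoop import \
--connect jdbc:mysql://remote-server/mydatabase \
--username xxx \
--password yyy \
--table mytable \
--hive-import \
--hive-table mydb.mytable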

I couldn't import the tables from my sql server to hive through sqoop

When I pass the command:
$sqoop create-hive-table --connect 'jdbc:sqlserver://10.100.0.18:1433;username=cloud;password=cloud123;database=hadoop' --table cluster
Some errors and warnings appear, and at the end it says:
Failed to start database '/var/lib/hive/metastore/metastore_db', see the next exception for details [again a list of import errors displayed]
Finally it says hive exited with status 9.
What is the problem here? I am new to Sqoop and Hive. Can anyone please help me?
The correct syntax would be
sqoop import --connect 'jdbc:sqlserver://10.100.0.18:1433/hadoop' --username cloud --password cloud123 --table cluster --hive-import
I think you might want to check if you have write permissions to the specified directory and if a directory named metastore_db is being created
This message is usually shown when you're running Sqoop with the default Hive configuration. Hive will by default use the Derby datastore, which is usable only in very basic test use cases. I would recommend reconfiguring your Hive instance to use some other relational database as a datastore back end (MySQL, PostgreSQL, Oracle).
Your syntax is all wrong. The syntax is: $sqoop tool-name [tool-arguments]
$sqoop import --create-hive-table --connect 'jdbc:sqlserver://10.100.0.18:1433/hadoop' --username cloud --password cloud123 --table cluster
Pasting a sample call of hive import using sqoop. This might help you to correct your syntax further. Remember that essentially you need to give minimum the below command to make it work.
sqoop import --connect jdbc:mysql://localhost/RAWDATA --table geolocation --username root --password hadoop --hive-import --create-hive-table --driver com.mysql.jdbc.Driver --m 1 --delete-target-dir
--connect: the part which reads /RAWDATA is the database on your MySQL instance which contains the geolocation table. You can execute the 'show databases' and 'show tables' commands in MySQL to check for your databases and tables.
--delete-target-dir is used for safety. It ensures Sqoop deletes the temporary directory it creates to write the file before moving it into Hive. This avoids unnecessary 'directory already exists' errors in case you retry the command.
--create-hive-table is required only if you did not already create the target table in Hive. If a previous run of the Sqoop command created the table, then you can ignore this option completely. Check your Hive database for the existence of the target Hive table.
--driver is a mandatory part of the command to perform any database connection. Make sure you either find the right path to the driver library or try googling for options. You can try the one pasted above first to see if it does the trick, and come back to this forum for help.
Remember that we did not mention which Hive database the table will be created in, so it will go into Hive's default database. I am not giving that option since you are just starting out with Sqoop.
