I am unable to execute the sqoop command with Teradata.
I am getting this error:
[Error 8017] [SQLState 28000] The UserId, Password or Account is invalid.
Sqoop Command:
sqoop import --connect jdbc:teradata://TDPRODC/LOGMECH=LDAP \
--driver com.teradata.jdbc.TeraDriver \
--username svddssas \
--password ' ' \
--table ADW.GST_STST_V \
--hive-import \
--hive-table wins.gst_stst_v1 -m 1
Please make sure the user ID and password have the required permissions in Teradata.
You can check recent logon attempts with the query below:
select * from dbc.logonoff where logdate >= date '2015-10-07'
Note: change the date to the execution date.
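If the logon history shows failed attempts for that user, it may also be worth ruling out a quoting problem with the literal-space password by letting Sqoop prompt for it instead. A minimal sketch, reusing the host, user, and table names from the question (-P simply makes Sqoop ask for the password interactively):
sqoop import --connect jdbc:teradata://TDPRODC/LOGMECH=LDAP \
--driver com.teradata.jdbc.TeraDriver \
--username svddssas \
-P \
--table ADW.GST_STST_V \
--hive-import \
--hive-table wins.gst_stst_v1 -m 1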
Related
I am getting an error while running the Sqoop import-all-tables command.
chgrp: changing ownership of 'hdfs://quickstart.cloudera:8020/user/hive/warehouse/retail_db.db/categories/part-m-00000': User does not belong to supergroup.
The command is sqoop import-all-tables --connect "jdbc:mysql://localhost:3306/retail_db" --username retail_dba -P --hive-import --hive-database retail_db --create-hive-table --hive-overwrite -m 2
I have checked the following files, but dfs.permissions is set to false.
[cloudera@quickstart ~]$ locate hdfs-site.xml
/etc/hadoop/conf.empty/hdfs-site.xml
/etc/hadoop/conf.impala/hdfs-site.xml
/etc/hadoop/conf.pseudo/hdfs-site.xml
/etc/impala/conf.dist/hdfs-site.xml
supergroup is not present in /etc/group either.
Please suggest how to resolve this.
After a reboot of the machine I was somehow able to run the command below; it produced the same INFO message, but this time all the data was loaded into all the tables.
sqoop import-all-tables --connect "jdbc:mysql://localhost:3306/retail_db" --username retail_dba -P --hive-import --hive-database retail_db --create-hive-table --hive-overwrite
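The chgrp message is only a warning, which is why the data still loaded: the import tries to set the group of the warehouse files to supergroup (the group of the Hive warehouse directory on the quickstart VM), and HDFS only lets a non-superuser chgrp a file to a group it belongs to. A hedged sketch of one way to make the warning go away on the quickstart VM, assuming the default group mapping that resolves groups from the Linux accounts on the NameNode host:
sudo groupadd supergroup                 # create the missing Linux group
sudo usermod -a -G supergroup cloudera   # add the importing user to it
hdfs dfs -ls /user/hive/warehouse/retail_db.db/categories   # re-check file group after the next import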
A Sqoop job always prompts for a password in the CLI. To avoid this, it has been said that the property sqoop.metastore.client.record.password should be set to true, but everywhere it is said that I need to change this value in sqoop-site.xml. Is there any way I can set this value for a single job only? I tried to create a job like the one below, and Sqoop fails to create it:
sqoop job --create TEST -D sqoop.metastore.client.record.password=true -- import \
--connect jdbc:netezza://xx.xxx.xx.xxx/database \
--username username \
--password password \
--table tablename \
--split-by key \
--hcatalog-database hivedatabase \
--hcatalog-table hivetable \
--hcatalog-storage-stanza 'STORED as ORC TBLPROPERTIES('orc.compress'='NONE')' \
-m 100
Error:
Warning: /usr/iop/4.1.0.0/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
16/06/17 07:10:08 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6_IBM_20
16/06/17 07:10:08 ERROR tool.BaseSqoopTool: Error parsing arguments for job:
16/06/17 07:10:08 ERROR tool.BaseSqoopTool: Unrecognized argument: -D
16/06/17 07:10:08 ERROR tool.BaseSqoopTool: Unrecognized argument: sqoop.metastore.client.record.password=true
Can anyone please help me with this? I need to run a job without being prompted for a password in the CLI.
You can save your password in a file and specify the path to this file with the parameter --password-file.
--password-file 'Set path for a file containing the authentication password'
Sqoop will then read the password from the file and pass it to the MapReduce cluster using secure means without exposing the password in the job configuration.
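A minimal sketch of how that could look for the job above (the HDFS path and file name are made up for illustration; echo -n avoids a trailing newline, which Sqoop would otherwise treat as part of the password):
echo -n "yourpassword" > netezza.password
hdfs dfs -put netezza.password /user/<your-user>/netezza.password
hdfs dfs -chmod 400 /user/<your-user>/netezza.password
sqoop job --create TEST -- import \
--connect jdbc:netezza://xx.xxx.xx.xxx/database \
--username username \
--password-file /user/<your-user>/netezza.password \
--table tablename \
--split-by key \
--hcatalog-database hivedatabase \
--hcatalog-table hivetable \
-m 100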
In Sqoop for Hadoop you can use a parameters file for connection string information.
--connection-param-file filename Optional properties file that provides connection parameters
What is the format of that file?
Say for example I have:
jdbc:oracle:thin:@//myhost:1521/mydb
How should that be in a parameters file?
If you want to give your database connection string and credentials, create a file with those details and use --options-file in your sqoop command.
Create a file database.props with the following details:
import
--connect
jdbc:mysql://localhost:3306/test_db
--username
root
--password
password
Then your sqoop import command will look like:
sqoop --options-file database.props \
--table test_table \
--target-dir /user/test_data
As for --connection-param-file, the Sqoop documentation should help you understand its usage.
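In short, --connection-param-file points to a Java properties file whose key=value entries are handed to the JDBC driver when the connection is opened. A hedged sketch of what such a file could contain for the Oracle URL above (the property names are standard Oracle JDBC driver options, used here purely as examples):
# oracle-connection.properties (file name is arbitrary)
oracle.jdbc.ReadTimeout=60000
defaultRowPrefetch=50
You would then add --connection-param-file oracle-connection.properties to the sqoop command.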
It should be the same as what you would type on the command line, one option or value per line.
Example
import
--connect
jdbc:oracle:thin:@//myhost:1521/mydb
--username
foo
Below is a sample command for connecting to a MySQL server:
sqoop list-databases --connect jdbc:mysql://192.168.256.156/test --username root --password root
It will give you the list of databases available on your MySQL server.
I am new to Hadoop and have recently started working with Sqoop. While trying to export a table from Hadoop to SQL Server, I am getting the following error:
input path does not exist hdfs://sandbox:8020/user/root/
The command I am using is:
sqoop export --connect "jdbc:sqlserver://;username=;password=xxxxx;database=" --table --export-dir /user/root/ -input-fields-terminated-by " "
Could you please advise what I am missing here?
Also, could you please let me know the command to browse the HDFS directory where the tables are stored.
For a proper export, Sqoop requires the complete data file location. You can't just specify the root folder.
Try specifying the complete source path:
sqoop export --connect jdbc:oracle:thin:<>:1521/<> --username <> --password <> --table <> --export-dir hdfs://<>/user/<>/<> -m 1 --input-fields-terminated-by '|' --input-null-string '\\N' --input-null-non-string '\\N'
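To answer the second question, you can browse HDFS from the shell to find the exact path to pass to --export-dir; a short sketch (the table directory name is just an example):
hdfs dfs -ls /user/root                    # list what is under your home directory
hdfs dfs -ls /user/root/<table_dir>        # the part-m-* files in here are what Sqoop exports
hdfs dfs -cat /user/root/<table_dir>/part-m-00000 | head   # peek at the data and its field delimiter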
Hope this helps
When I used the command below to import data from the database "MyData", an error occurred.
sqoop import --connect jdbc:sqlserver://localhost/MyData --table My_Practice --username sa -P --target-dir /userPavan/table -m 1
But when the database name is not given, as below, no error occurred.
sqoop import --connect jdbc:sqlserver://localhost --table My_Practice --username sa -P --target-dir /userPavan/table -m 1
But I need to run the command with the database name. Can anyone suggest a solution?
Try using:
sqoop import --connect "jdbc:sqlserver://localhost:1433;databaseName=MyData" --table My_Practice --username sa -P --target-dir /userPavan/table -m 1
SQL Server's default port is 1433, and with the Microsoft JDBC driver the database is specified with databaseName=... in the connection string rather than as part of the URL path.
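If the import still fails, a quick way to test just the connection string is sqoop eval, which runs a single query against the database; a short sketch reusing the same hypothetical server and credentials:
sqoop eval --connect "jdbc:sqlserver://localhost:1433;databaseName=MyData" --username sa -P --query "SELECT 1"
If that returns a result, the connection string is correct and the problem lies elsewhere in the import.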