I am unable to do an UPDATE on my Hive table via JDBC. I am able to SELECT, but not UPDATE.
Connecting to the Hive database:
Connection connection =
DriverManager.getConnection("jdbc:hive2://localhost:10000/db", "", "");
My query:
ResultSet resultSet = statement.executeQuery("update db.test set name='yo yo' where id=1");
Stacktrace:
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:275)
at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
at com.spnotes.hive.App.main(App.java:63)
Again, I am able to SELECT but not UPDATE via JDBC. I am, however, able to UPDATE my table via the Hive shell. I believe this is a user permissions issue. I have seen other problems where an HDFS directory needed to be granted permissions before it could be written to.
I had to invoke my hive shell with my HDFS user as so:
sudo -u hdfs hive
Can I somehow pass an "hdfs" user via JDBC? It does not look like this is possible, but my thinking is that this would make the exception go away.
Here is the "secure way" of passing in a username and password:
Connection con = DriverManager.getConnection("jdbc:hive2://hiveserver.domain.com:10000/default;user=username;password=password");
BUT this is NOT the same thing as passing the user hdfs. Perhaps it is possible to link the "username" with permissions to update the Hive table?
Any help is welcome. Thanks!
You are trying to pass an UPDATE statement to executeQuery().
executeQuery() is only for statements that return a ResultSet; any update statement will fail when run through this method. Change it to executeUpdate().
Also, instead of building queries by hand like this, I suggest using prepared statements, since binding parameters makes the code less vulnerable to SQL injection.
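A minimal sketch of the corrected call, reusing the connection string and table from the question (the column names id and name are taken from the UPDATE statement above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class HiveUpdate {
    public static void main(String[] args) throws Exception {
        try (Connection connection = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/db", "", "");
             PreparedStatement ps = connection.prepareStatement(
                 "UPDATE db.test SET name = ? WHERE id = ?")) {
            ps.setString(1, "yo yo");      // bind the new value
            ps.setInt(2, 1);               // bind the row key
            int rows = ps.executeUpdate(); // DML goes through executeUpdate(), not executeQuery()
            System.out.println("Updated " + rows + " row(s)");
        }
    }
}

Note this only fixes the JDBC-side misuse; the MoveTask failure itself may still be the HDFS permissions issue described in the question.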
I created a 3-node Hadoop cluster with 1 namenode and 2 datanodes.
I can perform read/write queries from the Hive shell, but not from beeline.
I found many suggestions and answers related to this issue.
Every suggestion mentioned granting permissions to a given user for each individual table.
But I don't know how to set the permissions for an anonymous user once and for all.
Why am I getting the user anonymous while accessing the data from beeline or from a Java program?
I am able to read the data from both the beeline shell and a Java JDBC connection.
But I can't insert data into the table.
This is my JDBC connection string: jdbc:hive2://hadoop01:10000.
Below is the error I get on an insert request:
Permission denied: user=anonymous, access=WRITE, inode="/user/hive/warehouse/test_log/.hive-staging_hive_2017-10-07_06-54-36_347_6034469031019245441-1":hadoop:supergroup:drwxr-xr-x
The beeline syntax is:
beeline -n username -u "url"
I assume you are missing the username. Also, judging by the drwxr-xr-x mode in the error, no one but the hadoop user has WRITE access to that directory anyway.
If you don't have full control over the table permissions, you can try relocating the staging directory with the setting hive.exec.stagingdir
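For example, using the cluster from the question (the username hadoop and the staging path /tmp/hive-staging below are placeholders, not values from the original post):

beeline -n hadoop -u "jdbc:hive2://hadoop01:10000/default"

# or relocate the staging directory to a location the connecting user can write to
beeline -n anonymous -u "jdbc:hive2://hadoop01:10000/default" --hiveconf hive.exec.stagingdir=/tmp/hive-staging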
If no database is specified in the connection URL, like
jdbc:hive2://hadoop01:10000/default
then beeline connects to the default database, and while inserting data into a table, the data is first loaded into a temporary table in the default database and then moved into the actual table.
So you need to give the user access to the default database as well, or connect to a database where you do have access:
jdbc:hive2://hadoop01:10000/your_db
I am using JRuby to connect to Hive, which is happening successfully. Now I want to create tables, but instead of writing the create statement as a parameter to the execute() method, I want to call a DDL file that has the table definition.
I cannot simply take the file contents and use them, because there is usually more than one statement before the actual table creation (i.e. CREATE DATABASE IF NOT EXISTS, CREATE TABLE IF NOT EXISTS ...).
Is there a command that I can use through my JDBC connection that takes the DDL file and executes it?
As far as I know, there is no direct way with the JDBC API to do an operation similar to hive -f.
option 1)
You can parse your SQL file and write a method to execute the commands sequentially (or use third-party code); a sketch of this approach follows below.
Here is one reference: http://www.codeproject.com/Articles/802383/Run-SQL-Script-sql-containing-DDL-DML-SELECT-state
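A minimal sketch of option 1 in Java (the file name tables.ddl and the naive split on ';' are assumptions; the split will break on semicolons inside string literals or comments):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RunDdlFile {
    public static void main(String[] args) throws Exception {
        String script = new String(Files.readAllBytes(Paths.get("tables.ddl")));
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = conn.createStatement()) {
            for (String sql : script.split(";")) {   // naive statement splitting
                if (!sql.trim().isEmpty()) {
                    stmt.execute(sql.trim());        // execute() accepts DDL and DML alike
                }
            }
        }
    }
}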
option 2) If the client environment where your JRuby code runs also has Hive installed, write a shell script that connects to the remote HiveServer2 and runs the SQL with beeline, which will make the remote Thrift calls for you.
Reference: https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
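For example, beeline's -f flag is the HiveServer2 counterpart of hive -f (the host, user, and file name below are placeholders):

beeline -u "jdbc:hive2://hiveserver.domain.com:10000/default" -n username -f tables.ddl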
I have a Spring Batch Admin application. We recently tried to increase security by creating new Oracle users with minimal privileges. After replacing the user for the Spring Batch application, I get this error:
exception: org.springframework.dao.DataAccessResourceFailureException: Could not obtain sequence value; nested exception is java.sql.SQLException: ORA-00942: table or view does not exist
After looking through the application and Tomcat logs, I found that the application tries 3 times to execute this query before throwing the error:
SELECT JOB_INSTANCE_ID, JOB_NAME from BATCH_JOB_INSTANCE where JOB_NAME = ? and JOB_KEY = ?
I tried this same query from SQL Developer, with the values stated in the log, and it came back with no results but completed successfully (no table-not-found error).
I tried searching the log for instances of the same JOB_KEY, thinking there would be an insert statement, but I see none in this log.
Is there anyone familiar with Spring Batch who can help me verify what privileges the Oracle user needs? Our new user does not have create or drop privileges. Can you help me verify whether those are required, and why? That is, is it creating and dropping temporary tables? I tried to find this in the different log files, but I've been unsuccessful so far.
Thanks!
The error indicates that it can't obtain a sequence value. That leads me to believe that the new DB user you have has access to the tables but not to the sequences. The Oracle schema for Spring Batch uses three sequences beyond the tables: BATCH_STEP_EXECUTION_SEQ, BATCH_JOB_EXECUTION_SEQ, and BATCH_JOB_SEQ.
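A minimal sketch of the grants that would cover this, assuming the Spring Batch objects live in a schema called BATCH_OWNER and the new restricted user is BATCH_APP (both names are placeholders; in Oracle, SELECT privilege on a sequence is what permits NEXTVAL):

GRANT SELECT ON BATCH_OWNER.BATCH_STEP_EXECUTION_SEQ TO BATCH_APP;
GRANT SELECT ON BATCH_OWNER.BATCH_JOB_EXECUTION_SEQ TO BATCH_APP;
GRANT SELECT ON BATCH_OWNER.BATCH_JOB_SEQ TO BATCH_APP;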
Please check your context.xml file for the proper database type. It could be that you are using MySQL in the code whereas the actual DB is Oracle:
<property name="databaseType" value="oracle" />
I want to create a new database on an Oracle server via JDBC. I cannot seem to connect to the server without providing an SID: using a URL like jdbc:oracle:thin:@//[IP]:1521 results in the error "ORA-12504, TNS:listener was not given the SID in CONNECT_DATA".
Alternatively, if I log into a specific SID, I can run most DDL commands, except CREATE DATABASE foo, which fails with the error "ORA-01100: database already mounted".
How am I supposed to create a database if I cannot connect to the server without specifying a specific database, and cannot create a database when I am already logged into a specific database?
AFAIK, creating a database requires an internal, direct connection, which can only be made by logging in directly on the server (normally as a user account called 'oracle').
One reason for that: users are stored in the database itself. No database = no user that an external client could connect as.
Please also note Justin's comment about Oracle database schemas; that is probably what you are actually looking for.
What you need are the following commands:
CREATE TABLESPACE, CREATE USER, and a few GRANT ... TO ... statements, to have at least the rights to connect and create objects.
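A minimal sketch of that sequence (the tablespace name, datafile path, username, and password are all placeholders):

-- a tablespace to hold the new schema's objects
CREATE TABLESPACE app_data
  DATAFILE '/u01/app/oracle/oradata/ORCL/app_data01.dbf' SIZE 100M;

-- the user (in Oracle, a user is effectively a schema)
CREATE USER app_user IDENTIFIED BY app_password
  DEFAULT TABLESPACE app_data
  QUOTA UNLIMITED ON app_data;

-- at least the rights to connect and create objects
GRANT CREATE SESSION, CREATE TABLE TO app_user;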
I am facing a weird problem with Hive tables. I have HIVE_HOME set in my environment, and it is also in my search path, so I can invoke hive directly.
Now I invoke hive from a directory, let's say /a/b/c, and create some tables. I can see the tables.
Then I change to a directory, e.g. /a/b, and invoke hive from there. Here is the problem: either I am unable to see the tables, or I get this error:
hive> show tables;
FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Failed to start
database 'metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to start database 'metastore_db', see the next exception
for details.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Why are tables tied to the directory from which the hive CLI was invoked? Any pointers?
I think you are using the embedded Derby database, which Hive uses by default for storing its metadata. Embedded Derby creates the metastore_db folder in whatever directory you start hive from, which is why each working directory appears to have its own set of tables. You can delete everything inside the metastore_db folder, restart Hadoop, and check again, but the best advice would be to use MySQL as the metastore.
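A minimal sketch of the hive-site.xml entries for a MySQL-backed metastore (the host, database name, and credentials are placeholders):

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hiveuser</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hivepassword</value>
  </property>
</configuration>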