Creating a table on Kudu through Impala JDBC

I am trying to create a table on Kudu through Impala JDBC using the URL
jdbc:impala://host:21050/default;AuthMech=0;UID=impala;
but the error "Table owner must not be null or empty" appears.
Any help?

Creating a table on Kudu through Impala JDBC requires an authentication mechanism such as LDAP or Kerberos. With AuthMech=0 the session is unauthenticated, so Impala has no user to record as the table owner, which is what the error is complaining about. You need to set up one of these mechanisms and configure the connection to use it.
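For example, with LDAP authentication enabled on the Impala side, the connection would use AuthMech=3 and pass a user name and password. A minimal sketch, assuming the Cloudera Impala JDBC driver (the host, credentials, and table definition are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateKuduTable {
        public static void main(String[] args) throws Exception {
            // AuthMech=3 = LDAP user name and password.
            String url = "jdbc:impala://host:21050/default;AuthMech=3;UID=impala_user;PWD=secret";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement()) {
                // With an authenticated session, Impala has a user to record as the table owner.
                stmt.execute("CREATE TABLE my_kudu_table (id BIGINT, name STRING, PRIMARY KEY (id)) "
                           + "PARTITION BY HASH (id) PARTITIONS 4 STORED AS KUDU");
            }
        }
    }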

Related

Kafka Connect JDBC dblink

I'm starting to study Apache Kafka and Kafka Connect.
I'm trying to get data from a remote Oracle database where my user only has read privileges and can't list tables (I don't have permission to change that). Every query has to go through a dblink, but I didn't find an option in the JDBC connector to pass a dblink.
I can run the query if I set a specific query in the connector configuration, but I want to fetch a lot of tables, and specifying the query on the connector would make me create a lot of connectors.
Is there a way to pass the dblink in the connector configuration or in the JDBC URL?
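For reference, the per-query workaround mentioned in the question looks roughly like this with the Confluent JDBC source connector; the connection details, dblink name, and topic prefix are placeholders, and each dblinked table needs its own query (hence its own connector):

    name=oracle-dblink-source
    connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
    connection.url=jdbc:oracle:thin:@//remote-host:1521/ORCL
    connection.user=readonly_user
    connection.password=secret
    # The dblink is applied inside the query itself, not in the URL.
    query=SELECT * FROM some_table@my_dblink
    mode=bulk
    topic.prefix=oracle-some_table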

Creating a user with password authentication in Presto

I have a Presto database.
I want to create a new user that will be used when creating a connection via JDBC.
I was searching this link
for the CREATE syntax;
these are the only CREATE options it has:
8.5. CREATE SCHEMA
8.6. CREATE TABLE
8.7. CREATE TABLE AS
8.8. CREATE VIEW
CREATE SCHEMA doesn't appear to have the ability for it, and it doesn't have an IDENTIFIED BY syntax either.
How can I create a user with a password for Presto?
Presto does not have the notion of users or storage of its own. It relies on the underlying data source. Which connectors are you using? If you're using the Hive connector, then depending on how it's configured you could connect as a particular Hive user.
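A minimal sketch of what that looks like from the JDBC side, assuming the Presto JDBC driver and a Hive catalog named hive (host, schema, and user name are placeholders; Presto simply forwards this identity to the connector, it is not a password-checked login by itself):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class PrestoAsHiveUser {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Presto passes this user through to the underlying data source.
            props.setProperty("user", "etl_user");
            String url = "jdbc:presto://presto-host:8080/hive/default";
            try (Connection conn = DriverManager.getConnection(url, props)) {
                conn.createStatement().executeQuery("SELECT 1");
            }
        }
    }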

Update JDBC Database table using storage handler and Hive

I have read that using the Hive JDBC storage handler
(https://github.com/qubole/Hive-JDBC-Storage-Handler),
an external table in Hive can be created over different databases (MySQL, Oracle, DB2), and users can read from and write to those JDBC databases from Hive using this handler.
My question is about updates.
If we use Hive 0.14, where Hive update/delete is supported, and use the storage handler to point an external table at a JDBC database table, will it allow us to update the underlying database table as well when we fire an update query from the Hive end?
You cannot update an external table in Hive.
In Hive, only transactional tables support ACID properties, and transactions are off by default. To create a transactional table you need to add TBLPROPERTIES ('transactional'='true') to your CREATE statement.
There are many limitations. One of them is that an external table cannot be an ACID table, because external tables are beyond the control of the Hive compactor.
To read more on this, click here.
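For illustration, a transactional table in Hive 0.14 must also be bucketed and stored as ORC, and the server has to run with hive.txn.manager set to DbTxnManager. A sketch of such a CREATE statement issued over Hive JDBC (host, table, and column names are made up):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateAcidTable {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://hive-host:10000/default", "hive", "");
                 Statement stmt = conn.createStatement()) {
                // ACID tables must be managed (not EXTERNAL), bucketed, and ORC.
                stmt.execute("CREATE TABLE acid_demo (id INT, name STRING) "
                           + "CLUSTERED BY (id) INTO 4 BUCKETS "
                           + "STORED AS ORC "
                           + "TBLPROPERTIES ('transactional'='true')");
            }
        }
    }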

How to get DBLink name in ODI KM

I'm using Oracle Data Integrator and I need to integrate a source DBLink in an IKM.
I have a simple mapping between one source table and one target table. The source table is on another database, so I need to use a DBLink.
I have created the DBLink in the topology and associated it with the source Data Server.
I tried to "catch" the DBLink name using <%=odiRef.getInfo("SRC_DSERV_NAME")%>, but I get the target instance instead of the source DBLink (instance).
Any suggestions?
In the meantime I've found the solution: <#=odiRef.getObjectName("R","<%=odiRef.getSrcTablesList("", "[TABLE_NAME]", "", "")%>","D")#>.
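For context, if I read the substitution API correctly, the inner getSrcTablesList call emits the source table's name, and odiRef.getObjectName with mode "R" asks for the remote object name, which ODI qualifies with the DBLink defined on that table's data server. So at code-generation time the whole expression should expand to something like (names are placeholders):

    SOURCE_TABLE@SOURCE_DBLINK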

Hive JDBC fails to connect to the configured schema

I am able to connect to Hive using the hive-jdbc client and also using Beeline. A typical URL is
jdbc:hive2://hive_thrift_ip:10000/custom_schema;principal=hive/hive_thrift_ip@COMPANY.COM
Unfortunately, the connection is always established to Hive's 'default' schema; the schema name configured in the URL is not taken into account. I use the org.apache.hive.jdbc.HiveDriver class.
It always takes me to the tables of the default schema. I can still access tables from the other schema by prefixing them with the schema name, like custom_schema.test_table.
Kindly let me know if I missed any property or configuration in the connection creation that would give me a session set to the schema configured in the URL.
Many thanks.
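One common workaround, sketched below, is to issue a USE statement right after connecting; it pins the session to the intended schema regardless of how the URL was interpreted (the URL and schema name are taken from the question, the rest is illustrative):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HiveSchemaWorkaround {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            String url = "jdbc:hive2://hive_thrift_ip:10000/custom_schema;"
                       + "principal=hive/hive_thrift_ip@COMPANY.COM";
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement()) {
                // Switch to the intended schema even if the driver
                // ignored the schema segment of the URL.
                stmt.execute("USE custom_schema");
            }
        }
    }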
