Apache Drill JDBC Plugin Doesn't Recognize Columns - jdbc

I'm attempting to query a proprietary RDBMS using Apache Drill. I've created the plugin as a JDBC data source and put my JDBC jar in the jars/3rdparty directory, and I'm able to successfully run a query such as SELECT * FROM mytable.
However, if I use a column name in the query such as SELECT mycol FROM mytable, Drill returns the following error: Error: VALIDATION ERROR: From line 1, column 8 to line 1, column 9: Column 'mycol' not found in any table. Moreover, I've noticed that my schema is entirely missing if I run SELECT * FROM INFORMATION_SCHEMA.SCHEMATA, so I have a hunch that Drill is unable to retrieve my database schema from the JDBC driver.
I'm wondering which method of the JDBC driver might be implemented incorrectly and be causing this problem. The driver has been used with other third-party software such as Spark without issue.

In order to query your table, you need to prefix the table name with the name you gave your storage plugin. For example, if you named your storage plugin rdbms, your query should look like this:
SELECT * FROM rdbms.mytable
Your additional query SELECT * FROM INFORMATION_SCHEMA.SCHEMATA likely failed for the same reason. Try SELECT * FROM rdbms.INFORMATION_SCHEMA.SCHEMATA, remembering to replace rdbms with the name you gave your storage plugin.
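As a quick sanity check, here is a minimal sketch (assuming the plugin is named rdbms and the table sits directly under it; some JDBC sources expose a nested schema such as rdbms.mydb):
-- list the schemas Drill can see; your plugin's name should appear here
SHOW SCHEMAS;
-- qualify the table with the plugin (and nested schema, if any) name
SELECT mycol FROM rdbms.mytable;
-- or set a default schema once and use unqualified names afterwards
USE rdbms;
SELECT mycol FROM mytable;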

I think we should query Drill like select * from dfs.<storagePlugin>.tableName
Can you check once?

Related

Jaspersoft Studio - Oracle JDBC Data Adapter - Set Specific Schema as Current Schema

I have the latest version of Jaspersoft Studio and I am using Oracle's JDBC data adapter (ojdbc11.jar), but the connection is made with the default schema, while I want the queries of each report to be executed in another schema (let's call it "MySchema").
For example, a report with this SELECT clause will not work:
select *
from myTable
while a report with this SELECT clause will work:
select *
from MySchema.myTable
I tried things like this:
jdbc:oracle:thin:@//10.1.1.55:1521/DOMAIN.COM;connectionProperties={currentSchema=MySchema}
or this:
jdbc:oracle:thin:@//10.1.1.55:1521/DOMAIN.COM;connectionProperties={CURRENT_SCHEMA=MySchema}
or this:
jdbc:oracle:thin:@//10.1.1.55:1521/DOMAIN.COM?searchpath=MySchema
or this:
jdbc:oracle:thin:@//10.1.1.55:1521/DOMAIN.COM??currentSchema=MySchema
but without success.
Do you have a solution in this direction or do you know of any other way to solve the problem?
This is a significant problem, considering that I have reports that select from dozens of tables and functions.
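For reference, one possible direction (a sketch only, not something from the original post, and assuming the connecting user has SELECT privileges on MySchema's objects): Oracle lets a session switch its name-resolution schema, so running one statement after connecting avoids prefixing every table:
ALTER SESSION SET CURRENT_SCHEMA = MySchema;
-- from here on, unqualified names such as myTable resolve against MySchema for this session
select * from myTable
Whether the Jaspersoft data adapter allows such an initialization statement per connection is something to verify.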

Sybase default owner in JDBC connection

I have some queries against a Sybase database that, after some changes in our Java (JDBC) code, are failing to execute: the database returns an error demanding that we provide the owner in front of the table name, which is something I would prefer to set in a single place in our configuration. We are using ASE 16.
For example, we had a query like "SELECT * FROM table_name" that no longer works unless we specify "SELECT * FROM database_name..table_name".
I think there should be a simple answer for this, but I am struggling to find one. Thank you in advance.

db2 9.5: substr function fails but left function works ok

I have this select statement, but it never ends:
select * from table where substr(field,1,3)='001'
but when I change it to:
select * from table where left(field,3)='001'
it works! Thus, I think it's a resource issue. Now I'll have to modify the statement, but I want to know if it's possible to solve this problem by changing database parameters, maybe via:
db2 get db cfg ...
Additional info:
Database version is 9.5 (Windows).
Field is one of 3 key fields of the table.
Table content: 863820 rows
In a comment you ask, "I was wondering if it's possible to change a db parameter to allow more resources available to run the first statement".
You could try AUTOCONFIGURE: https://www.ibm.com/support/knowledgecenter/en/SSEPGG_9.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0008960.html
e.g. db2 autoconfigure using mem_percent 80 apply none
to see what it would suggest if you asked Db2 to use 80% of your system memory (or to actually apply the changes, use APPLY DB AND DBM instead of APPLY NONE).
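If you do end up rewriting the statement instead, one option worth testing (a sketch; whether it helps depends on the index on field, so check the access plan) is a leading-prefix LIKE, which Db2 can evaluate as a range predicate:
select * from table where field like '001%'
-- unlike SUBSTR(field,1,3)='001', a predicate on a leading prefix can use an index on field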

Read Oracle Cluster name from Oracle RAC using SQL query

I'd like to find my RAC cluster name using a SQL query. I've found that it can be retrieved using the Oracle tool cemutlo -n or just ocrdump (see http://www.br8dba.com/tag/how-to-display-oracle-cluster-name/). However, that's not possible in this case, because on the target environment I can only execute SQL queries and I don't have access to the DBMS installation directory.
I've found out (here https://community.oracle.com/thread/2510788?tstart=0) that it can be done using some unusual queries:
SELECT a.ID, a.CLUSTER_ID FROM TABLE(DBMS_DATA_MINING.GET_MODEL_DETAILS_OC('CLUS_OC_1_15',NULL,NULL,1,0,0)) a
select * from table(dbms_data_mining.get_model_details_km('CLUS_KM_1_25'))
However, they don't work in my environment and I'm unable to create a new model.
Most preferably, I'd just read this from some kind of v$/gv$ view, but I can't find it there. I guess that's because the cluster sits a level below the DBMS.
Finally, I found out that there is no way to do that :(.

Spark JDBC cannot find temporary tables

We created a temporary table (in memory) through Spark.
When we sftp to the server and use beeline, we can query this temporary table with "select * from Table1" without issue.
However, when we use a GUI tool with the corresponding driver on a local machine (the connection string is "jdbc:spark://servername:port/default"), we have trouble. We can see the temporary table Table1 by using "show tables;" in the GUI tool. However, when we try "select * from Table1" in the tool, it shows an error: "[Simba]JSQLEngine The table "Table1" could not be found., SQL state: HY000, Query: select * from Table1. [SQL State=HY000, DB Errorcode=500051]". Note that we are using the trial version of the Simba JDBC driver for testing.
Also, I tried the hive-jdbc driver from Cloudera using the connection string "jdbc:hive2://servername:port/default". It is the same issue. Please help. Thanks a lot.
It turns out that some of the drivers require a "limit" clause after the select. Once I added that, it retrieved the data.
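For example (a sketch, assuming the temporary table is named Table1 as above), adding a limit clause made the same query work through the driver:
select * from Table1 limit 10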
