I have a custom datasource driver in JetBrains Rider 2019.2 which uses the official apache-drill-1.17.jar JDBC driver.
Using the driver results in this error:
SELECT * FROM dfs.my_parquets."Test" limit 10;
--
PARSE ERROR: Lexical error at line 1, column 19. Encountered: "`" (96), after : ""
SQL Query: ALTER SESSION SET `exec.query.max_rows`=501
From the error it is obvious that Rider tries to execute this hidden query with backticked identifiers:
ALTER SESSION SET `exec.query.max_rows`=501
The problem is that quoting_identifiers on the target Drill instance is not set to ` (backtick) but to " (double quote).
As a connection string I'm using this: jdbc:drill:drillbit=my-drill-instance;quoting_identifiers='"'
Is there a way to tell the driver to use double quotes in the hidden queries?
The manual shows that the option should be passed without quotes:
jdbc:drill:zk=local;quoting_identifiers=[
jdbc:drill:drillbit=my-drill-instance;quoting_identifiers="
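If adjusting the connection string does not take effect, the quoting style can also be changed on the Drill server itself; this is a hedged sketch, assuming your Drill release exposes the planner.parser.quoting_identifiers option (it applies system- or session-wide, so your own double-quoted identifiers would be affected too):
-- Sketch: inspect the current server-side quoting character
SELECT * FROM sys.options WHERE name = 'planner.parser.quoting_identifiers';
-- Sketch: switch it to backticks so the tool's hidden ALTER SESSION parses;
-- the option name is double-quoted because this instance currently uses "
ALTER SYSTEM SET "planner.parser.quoting_identifiers" = '`';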
I have an old source database in which, apparently, a custom collation UTF8_CI_AI_NUMERIC_SORT was created. I'm running it on Docker via the image jacobalberty/firebird:2.5-ss. The database was originally created on a Windows machine.
When I try to do a query on the table where this collation was used, I get the error:
SQL> select * from "InvoiceService";
Statement failed, SQLSTATE = 22021
COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed
Show collations returns the following:
SQL> show collations;
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'NUMERIC-SORT=1'
I tried the following fixes:
Adding an entry to fbintl.conf:
<charset UTF8>
intl_module fbintl
collation UTF8_CI_AI_NUMERIC_SORT
</charset>
Then running the sp_register_character_set("UTF8", 4) procedure, which fails with an error about duplicate collations (because UTF8_CI_AI_NUMERIC_SORT is already defined in the DB).
Dropping the collation:
SQL> drop collation UTF8_CI_AI_NUMERIC_SORT;
Statement failed, SQLSTATE = 42000
unsuccessful metadata update
-Collation UTF8_CI_AI_NUMERIC_SORT is used in table InvoiceService (field name NAME) and cannot be dropped
Adding a new column that would use a different collation, but I can't even add it:
SQL> ALTER TABLE "InvoiceService" ADD NAME2 VARCHAR(600) CHARACTER SET UTF8;
Statement failed, SQLSTATE = 22021
unsuccessful metadata update
-InvoiceService
-COLLATION UTF8_CI_AI_NUMERIC_SORT for CHARACTER SET UTF8 is not installed
Using gbak to restore only the metadata, fixing the schema, and then inserting the data separately, but gbak does not support restoring data only.
...
I'm out of ideas now. What else could I try?
So, I finally managed to solve the problem. What I did was to create a DB backup with
gbak -v -t -user SYSDBA /path/to/source.fdb /path/to/backup.fbk
Then use the 3.0 version of the Firebird Docker image (jacobalberty/firebird:3.0) and restore from the backup with
gbak -create /path/to/backup.fbk /path/to/restored3.fdb
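If the database files live inside the containers (as they do with the jacobalberty images), the same two steps can be run through docker exec; a sketch only, where the container names fb25 and fb30 and the shared /backup mount are assumptions:
# Back up inside the 2.5 container, restore inside the 3.0 container.
# Container names and the /backup mount point are assumptions.
docker exec fb25 gbak -v -t -user SYSDBA /path/to/source.fdb /backup/backup.fbk
docker exec fb30 gbak -create /backup/backup.fbk /path/to/restored3.fdb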
Note that the same backup-restore procedure without switching the Docker image did not work.
I didn't have to do anything else. There's only a slight difference in SHOW COLLATIONS; output:
// originally:
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'NUMERIC-SORT=1'
// restored DB
UTF8_CI_AI_NUMERIC_SORT, CHARACTER SET UTF8, FROM EXTERNAL ('UNICODE'), CASE INSENSITIVE, ACCENT INSENSITIVE, 'COLL-VERSION=58.0.6.50;NUMERIC-SORT=1'
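The extra COLL-VERSION attribute can also be inspected directly in the system tables; a small sketch using Firebird's RDB$COLLATIONS view:
-- Show the collation's registered attributes in the restored database
SELECT RDB$COLLATION_NAME, RDB$CHARACTER_SET_ID, RDB$SPECIFIC_ATTRIBUTES
FROM RDB$COLLATIONS
WHERE RDB$COLLATION_NAME = 'UTF8_CI_AI_NUMERIC_SORT';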
I am trying to connect to an Oracle database, query it, and send the results to a txt file. When I run my statement, this shows in the .txt file:
In reality, it should contain values from my SQL script.
Here is the command I am running:
sql_file1=Cb.sql
sqlplus -s "username/pwd#(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=my_host)(Port=1521))(CONNECT_DATA=(SERVICE_NAME=my_ser_name))))" #sql/$sql_file1 > /home/path/to/my/files/'cb.txt'
Any reason why my cb.txt file shows the output from the screenshot above instead of any data from the query inside my SQL file?
You have an extra ) in your connection string:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=my_host)(Port=1521))(CONNECT_DATA=(SERVICE_NAME=my_ser_name))))
should be
sqlplus -s "username/pwd#(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(Host=my_host)(Port=1521))(CONNECT_DATA=(SERVICE_NAME=my_ser_name)))" #sql/$sql_file1 > /home/path/to/my/files/cb.txt
But it is even easier to use an EZConnect string:
sqlplus -s "username/pwd#//my_host:1521/my_ser_name" #sql/$sql_file1
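To keep the redirected file limited to query results, it also helps to put a few SQL*Plus formatting commands at the top of the script; a minimal sketch of what Cb.sql might contain (the query itself is a placeholder, not taken from the original file):
-- Cb.sql (sketch): suppress headers and feedback so only the data
-- reaches the redirected file, then exit so sqlplus -s returns.
SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SELECT some_column FROM some_table;  -- placeholder query
EXIT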
Using Oracle DB 10 and SQuirreL 3.7.1.
I need to access the inserted fields.
If I write :new.fieldName in an Oracle trigger script, then when running the script I get an input window that says:
"Please input the parameter values
Value for ' :new' ___________ "
The trigger is compiled with a warning ("EDT violation detected"), and when the trigger is executed (using an insert) there's an error:
" Error: ORA-04098: trigger 'schemeName.triggerName' is invalid and failed re-validation
SQLState: 42000
ErrorCode: 4098
Position: 2172 "
What am I missing?
trigger script:
CREATE OR REPLACE TRIGGER schemeName.triggerName
AFTER INSERT ON schemeName.tableName1
FOR EACH ROW
BEGIN
Insert into schemeName.tableName2 (fieldName1, fieldName2) values (:new.fieldName, 'someString');
END;
/
Your tool (SQuirreL 3.7.1) interprets : as if you wanted to enter a substitution variable.
There should be an option to turn that off (at least temporarily) so that you can create the trigger.
Go to Plugins -> Summary, disable the sqlparam plugin, and restart SQuirreL.
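Once the colon handling is sorted out, the ORA-04098 error just means the trigger exists but is stored in an invalid state; a short sketch of how to inspect and recompile it, using standard Oracle dictionary views and the placeholder names from the question:
-- Show the compilation errors that left the trigger invalid
SELECT line, position, text
FROM all_errors
WHERE owner = 'SCHEMENAME'   -- dictionary names are stored in upper case
  AND name = 'TRIGGERNAME'
  AND type = 'TRIGGER';

-- Recompile after fixing the source
ALTER TRIGGER schemeName.triggerName COMPILE;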
I am trying to remove the hard-coded values from a Hive script. For that I have created an hql file (src_sys_cd_param.hql).
I am setting the source system value through a parameter; the param file is invoked like this:
hive -f /data/data01/dev/edl/md/sptfr/landing/src_sys_cd_param.hql;
The param file contains the command set src_sys_cd = 'M09';
After running the below script:
INSERT INTO TABLE SPTFR_CORE.M09_PRTY SELECT C.EDW_SK,A.PRTY_TYPE_HIER_ID,
A.PRTY_NUM,A.PRTY_DESC,A.PRTY_DESC,'N',${hiveconf:src_sys_cd},
A.DAI_UPDT_DTTM,A.DAI_CRT_DTTM
FROM SPTFR_STG.M09_PRTY_VIEW_STG A JOIN SPTFR_STG.BKEY_PRTY_STG C
ON ( CONCAT(A.PRTY_TYPE_LVL_1_CD,'|^',A.PRTY_NUM ,'|^',A.SRC_SYS_CD)= C.SRC_CMBN);
I receive the error:
Error while compiling statement: FAILED: ParseException line 1:113 cannot recognize input near '$' '{' 'hiveconf' in selection target
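For context, ${hiveconf:...} substitution only happens inside the session where the variable is defined; running src_sys_cd_param.hql as a separate hive -f invocation does not carry the value into the session that runs the INSERT, so the literal ${hiveconf:src_sys_cd} reaches the parser. A hedged sketch of two common ways to keep everything in one session (the insert_script.hql name is an assumption):
# Option 1: pass the value on the command line of the session that runs the INSERT
hive --hiveconf src_sys_cd="'M09'" -f insert_script.hql

# Option 2: run the param file as an init script in the same session as the query
hive -i /data/data01/dev/edl/md/sptfr/landing/src_sys_cd_param.hql -f insert_script.hql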
How can I set a variable in an Impala query?
In SQL:
select * from users where id=(@id:=123)
In Impala:
impala-shell> ?
Impala version is v2.0.0. Any suggestions will be appreciated. Thanks!
impala-shell> set var:id=123;select * from users where id=${VAR:id};
This variable can also be passed from the command line using --var:
impala-shell --var id=123
impala-shell> select * from users where id=${VAR:id};
There's an open feature request for adding variable substitution support to impala-shell: IMPALA-1067, to mimic Hive's similar feature (hive --hivevar param=60 substitutes ${hivevar:param} inside a query with 60).
Variables that you can use in other SQL contexts (e.g. from a JDBC client) are not supported either, and I couldn't even find an open request for it... You might want to open a request for it: https://issues.cloudera.org/browse/IMPALA
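For reference, the Hive feature mentioned above looks roughly like this (a sketch; the query and parameter name are only illustrative):
# --hivevar defines the variable, and ${hivevar:param} is replaced with 60 before the query is parsed
hive --hivevar param=60 -e 'SELECT * FROM users WHERE age > ${hivevar:param}'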
impala-shell -i node.domain:port -B --var="table=metadata" --var="db=transaction" -f "file.sql"
file.sql:
SELECT * FROM ${var:db}.${var:table}