Kafka Connect with CockroachDB

I am trying to use CockroachDB (v2.0.6) as a sink for one of my Kafka topics.
I wasn't able to find a Kafka connector specifically for CockroachDB, so I decided to use the JDBC sink connector from Confluent, since CockroachDB supports the PostgreSQL syntax.
The connection string that I use on Kafka Connect is the following:
"connection.url": "jdbc:postgresql://roach1:26257/mydb?sslmode=disable"
which is basically the only thing I changed on an existing, working Postgres sink connector.
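For context, the rest of the connector configuration looks roughly like this (everything except connection.url is an illustrative placeholder, not my exact setup):

{
  "name": "cockroach-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "connection.url": "jdbc:postgresql://roach1:26257/mydb?sslmode=disable",
    "topics": "my_topic",
    "auto.create": "true",
    "pk.mode": "none"
  }
}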
Unfortunately, I was unable to make it work: the connector fails with an error:
Caused by: org.apache.kafka.connect.errors.ConnectException: java.sql.SQLException: org.postgresql.util.PSQLException: ERROR: syntax error at or near "."
Detail: source SQL:
SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, ct.relname AS TABLE_NAME, a.attname AS COLUMN_NAME, (i.keys).n AS KEY_SEQ, ci.relname AS PK_NAME FROM pg_catalog.pg_class ct JOIN pg_catalog.pg_attribute a ON (ct.oid = a.attrelid) JOIN pg_catalog.pg_namespace n ON (ct.relnamespace = n.oid) JOIN (SELECT i.indexrelid, i.indrelid, i.indisprimary, information_schema._pg_expandarray(i.indkey) AS keys FROM pg_catalog.pg_index i) i ON (a.attnum = (i.keys).x AND a.attrelid = i.indrelid) JOIN pg_catalog.pg_class ci ON (ci.oid = i.indexrelid) WHERE true AND ct.relname = 'my_topic' AND i.indisprimary ORDER BY table_name, pk_name, key_seq
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:88)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:564)
... 10 more
Caused by: java.sql.SQLException: org.postgresql.util.PSQLException: ERROR: syntax error at or near "."
Detail: source SQL:
SELECT NULL AS TABLE_CAT, n.nspname AS TABLE_SCHEM, ct.relname AS TABLE_NAME, a.attname AS COLUMN_NAME, (i.keys).n AS KEY_SEQ, ci.relname AS PK_NAME FROM pg_catalog.pg_class ct JOIN pg_catalog.pg_attribute a ON (ct.oid = a.attrelid) JOIN pg_catalog.pg_namespace n ON (ct.relnamespace = n.oid) JOIN (SELECT i.indexrelid, i.indrelid, i.indisprimary, information_schema._pg_expandarray(i.indkey) AS keys FROM pg_catalog.pg_index i) i ON (a.attnum = (i.keys).x AND a.attrelid = i.indrelid) JOIN pg_catalog.pg_class ci ON (ci.oid = i.indexrelid) WHERE true AND ct.relname = 'collect_flow_tracking' AND i.indisprimary ORDER BY table_name, pk_name, key_seq
So my question is: has anyone used Kafka Connect with CockroachDB successfully?
Also, does anyone have any pointers on this error (what causes it) and how to work around it and make this work?

CockroachDB PM here. It looks like the problem is an unsupported database introspection query performed by the Kafka Connect Postgres connector. The good news is that this particular query does appear to be supported by CockroachDB 2.1. Can you try again using the latest CockroachDB beta?
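One quick way to check is to run the construct that 2.0.6 chokes on directly in a SQL shell against the new version. A minimal probe (assuming the 2.1 beta implements information_schema._pg_expandarray, which the JDBC driver's metadata query relies on):

-- composite-field access on _pg_expandarray's result is the part of the
-- driver's getPrimaryKeys() metadata query that 2.0.6 rejects
SELECT (keys).x AS elem, (keys).n AS pos
FROM (SELECT information_schema._pg_expandarray(ARRAY[10, 20, 30]) AS keys) AS t;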

Related

Oracle EF Database first is not completing

When I attempt to reverse engineer models off of an Oracle database, I get errors when the table in question has more than one trigger on a column. I am putting a web UI on an upgraded Oracle 7 database (it has been migrated to Oracle 18c). The original system is an old Unix terminal UI. My solution is read-only. The solution is .NET Core 3.1 and Oracle.EntityFrameworkCore is 3.19.110.
When I run the following command against a table that has more than one trigger on a column, it errors:
Scaffold-DbContext "User Id=<user>; Password=<pwd>; Data Source=<datasource>;" Oracle.EntityFrameworkCore -OutputDir Models -Context GlobalContext
Using verbose mode I get the following:
Sequence contains more than one matching element
at System.Linq.ThrowHelper.ThrowMoreThanOneMatchException()
The generated SQL is
select u.*, v.trigger_name, v.table_name, v.column_name, v.table_owner
from (SELECT sys_context('userenv', 'current_schema') as schema,
             c.table_name, c.column_name, c.column_id, c.data_type,
             c.char_length, c.data_length, c.data_precision, c.data_scale,
             c.nullable, c.identity_column, c.data_default, c.virtual_column,
             c.hidden_column
      FROM user_tab_cols c
      INNER JOIN (select distinct object_name as table_name
                  from user_objects
                  where object_type in ('TABLE', 'VIEW', 'MATERIALIZED VIEW')) t
              ON t.table_name = c.table_name
      WHERE t.table_name <> '__EFMigrationsHistory'
        AND (t.table_name IN (:t0)
             AND CONCAT(sys_context('userenv', 'current_schema'),
                        CONCAT('.', t.table_name)) IN (:sdott0))) u
left join USER_TRIGGER_COLS v
       on u.table_name = v.table_name
      and u.column_name = v.column_name
      and u.schema = v.table_owner
ORDER BY u.column_id
The results are as follows (I've made the table/triggers generic, but you see the issue)
Short of dropping the triggers, is there a way to reverse engineer the table?
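One way to locate the tables that trip the scaffolder is to query the same dictionary view the generated SQL joins (USER_TRIGGER_COLS) for columns referenced by more than one trigger. A sketch:

-- columns with more than one trigger make the scaffolder's LEFT JOIN
-- fan out, producing "Sequence contains more than one matching element"
SELECT table_name, column_name, COUNT(*) AS trigger_count
FROM user_trigger_cols
GROUP BY table_name, column_name
HAVING COUNT(*) > 1
ORDER BY table_name, column_name;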

ClickHouse - Alias in view

I would like to create a view that uses aliases in its SELECT query.
When I try it with the syntax below, it doesn't work.
Does ClickHouse not support aliases in a view query, or is my syntax wrong?
Error message:
Received exception from server (version 20.3.5): Code: 352.
DB::Exception: Received from localhost:9000. DB::Exception: Cannot
detect left and right JOIN keys. JOIN ON section is ambiguous..
Error message if I drop the aliases in the JOIN (ON A.column1 = B.column1 ---> ON table_a.column1 = table_b.column1):
Received exception from server (version 20.3.5): Code: 47.
DB::Exception: Received from localhost:9000. DB::Exception: Missing
columns:
Create tables:
CREATE TABLE IF NOT EXISTS table_a
(
    `column1` Nullable(Int32),
    `column2` Nullable(Int32),
    `column3` Nullable(Int32),
    `column4` Nullable(Int32)
)
ENGINE = MergeTree()
PARTITION BY tuple()
ORDER BY tuple();

CREATE TABLE IF NOT EXISTS table_b
(
    `column1` Nullable(Int32),
    `column2` Nullable(Int32),
    `column3` Nullable(Int32),
    `column4` Nullable(Int32)
)
ENGINE = MergeTree()
PARTITION BY tuple()
ORDER BY tuple();
View query:
CREATE VIEW IF NOT EXISTS view_table_AB AS
SELECT
    A.column1,
    A.column2,
    A.column3,
    A.column4,
    B.column1,
    B.column2,
    B.column3,
    B.column4
FROM table_a AS A
INNER JOIN table_b AS B ON A.column1 = B.column1;
ClickHouse docs: https://clickhouse.tech/docs/fr/sql-reference/syntax/#syntax-expression_aliases
Thank you for your help.
It looks like it is a bug. I filed CH Issue 11000; let's wait for the answer.
As a workaround, specify the table name as a prefix instead of an alias:
CREATE VIEW IF NOT EXISTS view_table_AB AS
SELECT
    table_a.column1,
    table_a.column2,
    table_a.column3,
    table_a.column4,
    table_b.column1,
    table_b.column2,
    table_b.column3,
    table_b.column4
FROM table_a
INNER JOIN table_b ON table_a.column1 = table_b.column1;
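If the duplicated output names cause trouble downstream, column aliases in the SELECT list may also be worth trying: the reported bug concerns table aliases in the JOIN ON clause, so plain column aliases should be unaffected. A sketch (view name is hypothetical, untested on 20.3):

CREATE VIEW IF NOT EXISTS view_table_AB_aliased AS
SELECT
    table_a.column1 AS a_column1,
    table_a.column2 AS a_column2,
    table_a.column3 AS a_column3,
    table_a.column4 AS a_column4,
    table_b.column1 AS b_column1,
    table_b.column2 AS b_column2,
    table_b.column3 AS b_column3,
    table_b.column4 AS b_column4
FROM table_a
INNER JOIN table_b ON table_a.column1 = table_b.column1;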

Oracle - lost RPC connection to heterogeneous remote agent using SID

I'm trying to run a SELECT in PL/SQL using a database link connected to MySQL.
Here is my query:
select
    t1."header_id" as header_id
from
    "table1"@times t1,
    "table2"@times t2
where
    t1."processed" = 'no'
    and t1."returned" = 'yes'
    and t2."header_id" is null
    and t1."header_id" = t2."header_id"(+);
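For what it's worth, the same anti-join can be expressed in ANSI syntax; some heterogeneous gateways cope better with an explicit LEFT JOIN than with the (+) operator (a sketch, not verified against the MySQL gateway):

select
    t1."header_id" as header_id
from "table1"@times t1
left join "table2"@times t2
    on t1."header_id" = t2."header_id"
where
    t1."processed" = 'no'
    and t1."returned" = 'yes'
    -- keep only rows with no match in table2 (anti-join)
    and t2."header_id" is null;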
This is the error (quoted in the title):
lost RPC connection to heterogeneous remote agent using SID
Please help. Thank you.

Issue with Spring Data JPA DB2 pagination

I am using Spring Data JPA with DB2. When I use a paging repository and query for the second page, it throws an error.
This is the generated query:
SELECT *
FROM (SELECT inner2_.*,
             ROWNUMBER() OVER(ORDER BY ORDER OF inner2_) AS rownumber_
      FROM (SELECT db2DATAa0_.c_type AS col_0_0_,
                   db2DATAa0_.h_proc AS col_1_0_,
                   db2DATAa0_.n_vin AS col_2_0_,
                   db2DATAa0_.i_cust AS col_3_0_
            FROM dcu.v_rpt_data_hist db2DATAa0_
            WHERE db2DATAa0_.reportid = '0H000488089'
              AND (db2DATAa0_.c_type = 'S'
                   OR db2DATAa0_.c_type = 'N'
                   OR db2DATAa0_.c_type = 'A'
                   OR db2DATAa0_.c_type = 'T')
            ORDER BY db2DATAa0_.h_proc DESC
            FETCH FIRST 30 ROWS ONLY) AS inner2_) AS inner1_
WHERE rownumber_ > 15
ORDER BY rownumber_
Error:
2719372 [2016-10-21 16:29:02,040] [RxCachedThreadScheduler-13] WARN org.hibernate.engine.jdbc.spi.SqlExceptionHelper - SQL Error: -199, SQLState: 42601
2719379 [2016-10-21 16:29:02,047] [RxCachedThreadScheduler-13] ERROR org.hibernate.engine.jdbc.spi.SqlExceptionHelper - DB2 SQL Error: SQLCODE=-199, SQLSTATE=42601, SQLERRMC=OF;??( [ DESC ASC NULLS RANGE CONCAT || / MICROSECONDS MICROSECOND, DRIVER=3.57.82
Any idea?
Your error says: ILLEGAL USE OF KEYWORD OF. TOKEN [DESC ASC NULLS RANGE CONCAT ...] WAS EXPECTED (SQLCODE -199).
I identified this as the critical part of the query:
ORDER BY ORDER OF inner2_
DB2 expects one of DESC, ASC, NULLS, RANGE, CONCAT after the second ORDER keyword.
This issue can be resolved by changing the dialect.
Change the dialect in your configuration or property file to DB2ZOSDialect.
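In a Spring Boot setup that would look roughly like this (a sketch; the exact class name depends on your Hibernate version — Hibernate 5 ships org.hibernate.dialect.DB2390Dialect for DB2 on z/OS, which some stacks expose under the name DB2ZOSDialect):

# application.properties — property key assumes Spring Boot
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.DB2390Dialect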

Strange error when using Hive UDF through JDBC client

Hi all. I ran into a strange error when using a Hive UDF through a JDBC client.
I have a UDF called reformat_date that converts a string into timestamp format. I first execute ADD JAR and CREATE TEMPORARY FUNCTION; both work fine.
The SQL can also be explained and executed in Hive CLI mode. But when I use the JDBC client, I get errors:
Query returned non-zero code: 10, cause:
FAILED: Error in semantic analysis: Line 1:283 Wrong arguments ''20121201000000'':
org.apache.hadoop.hive.ql.metadata.HiveException:
Unable to execute method public org.apache.hadoop.io.Text com.aa.datawarehouse.hive.udf.ReformatDate.evaluate(org.apache.hadoop.io.Text) on object com.aa.datawarehouse.hive.udf.ReformatDate@4557e3e8 of class com.aa.datawarehouse.hive.udf.ReformatDate with arguments {20121201000000:org.apache.hadoop.io.Text} of size 1:
at com.aa.statistic.dal.impl.TjLoginDalImpl.selectAwakenedUserCount(TjLoginDalImpl.java:258)
at com.aa.statistic.backtask.service.impl.UserBehaviorAnalysisServiceImpl.recordAwakenedUser(UserBehaviorAnalysisServiceImpl.java:326)
at com.aa.statistic.backtask.controller.BackstatisticController$21.execute(BackstatisticController.java:773)
at com.aa.statistic.backtask.controller.BackstatisticController$DailyExecutor.execute(BackstatisticController.java:823)
My SQL is:
select count(distinct a.user_id) as cnt
from (
    select user_id, user_kind, login_date, login_time
    from tj_login_hive
    where p_month = '2012_12'
      and login_date = '20121201'
      and user_kind = '0'
) a
join (
    select user_id
    from tj_login_hive
    where p_month <= '2012_12'
      and datediff(to_date(reformat_date(concat('20121201', '000000'))),
                   to_date(reformat_date(concat(login_date, '000000')))) >= 90
) b on a.user_id = b.user_id
Thanks.
I think your UDF threw an exception.
If reformat_date is a function you wrote yourself, you should check your logic.
If not, you should check the UDF's documentation.
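To narrow it down, it may help to re-run just the UDF call over the same JDBC session. A sketch (the jar path is a placeholder; the class name comes from the stack trace):

ADD JAR /path/to/your-udf.jar;
CREATE TEMPORARY FUNCTION reformat_date AS 'com.aa.datawarehouse.hive.udf.ReformatDate';
-- if this minimal call also fails over JDBC but succeeds in the CLI,
-- the jar is probably not visible to the server the JDBC client talks to
SELECT reformat_date(concat('20121201', '000000'))
FROM tj_login_hive
LIMIT 1;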
