Kafka/QuestDB JDBC sink connector: crashes with multiple topics

I have a Kafka broker with 2 topics and a JDBC sink connector to a QuestDB database.
Everything is built with Docker containers.
If I configure the JDBC connector with 1 topic (whichever one), it works just fine and all events are transferred over to QuestDB.
curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" --data @jdbc_sink.json http://connect:8083/connectors
{
  "name": "jdbc_sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "topic_1",
    "table.name.format": "${topic}",
    "connection.url": "jdbc:postgresql://questdb:8812/qdb?useSSL=false",
    "connection.user": "admin",
    "connection.password": "quest",
    "auto.create": "true",
    "insert.mode": "insert",
    "dialect.name": "PostgreSqlDatabaseDialect"
  }
}
As soon as I include 2 topics in the JDBC connector config, the QuestDB and Connect containers crash:
"topics": "topic_1,topic_2",
Connect Dockerfile
FROM confluentinc/cp-kafka-connect-base:7.1.0
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.3.3
connect:
  build:
    context: ./connect
    dockerfile: Dockerfile
  hostname: connect
  container_name: connect
  depends_on:
    - broker1
    - zookeeper
  ports:
    - "8083:8083"
  environment:
    CONNECT_BOOTSTRAP_SERVERS: "broker1:29092"
    CONNECT_REST_ADVERTISED_HOST_NAME: connect
    CONNECT_REST_PORT: 8083
    CONNECT_GROUP_ID: compose-connect-group
    CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
    CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
    CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
    CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
  networks:
    - broker-kafka
questdb:
  image: questdb/questdb:6.2.2b
  hostname: questdb
  container_name: questdb
  ports:
    - "9000:9000" # REST API and Web Console
    - "9009:9009" # InfluxDB line protocol
    - "8812:8812" # Postgres wire protocol
    - "9003:9003" # Min health server
  volumes:
    - ./questdb:/root/.questdb
  networks:
    - broker-kafka
What is the problem here? Does anyone have an idea?
Below you can see extracts from the Docker logs of the connect and questdb containers. I cannot post the complete logs (too many characters), so I tried to extract the relevant messages, but frankly speaking I couldn't figure out the determining error message. Please let me know if you need more logs.
questDB log (extract)
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f7dc99486a8, pid=1, tid=38
#
# JRE version: OpenJDK Runtime Environment (17.0.1+12) (build 17.0.1+12-39)
# Java VM: OpenJDK 64-Bit Server VM (17.0.1+12-39, mixed mode, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# J 2153 c1 io.questdb.cairo.vm.api.MemoryCR.getStr(JLio/questdb/cairo/vm/api/MemoryCR$CharSequenceView;)Ljava/lang/CharSequence; io.questdb@6.2.2-SNAPSHOT (101 bytes) @ 0x00007f7dc99486a8 [0x00007f7dc99484a0+0x0000000000000208]
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /root/.questdb/hs_err_pid1.log
Compiled method (c1) 386972 2153 3 io.questdb.cairo.vm.api.MemoryCR::getStr (101 bytes)
total in heap [0x00007f7dc9948210,0x00007f7dc9949438] = 4648
relocation [0x00007f7dc9948370,0x00007f7dc9948498] = 296
main code [0x00007f7dc99484a0,0x00007f7dc9948f80] = 2784
stub code [0x00007f7dc9948f80,0x00007f7dc9949060] = 224
oops [0x00007f7dc9949060,0x00007f7dc9949070] = 16
metadata [0x00007f7dc9949070,0x00007f7dc99490b8] = 72
scopes data [0x00007f7dc99490b8,0x00007f7dc9949220] = 360
scopes pcs [0x00007f7dc9949220,0x00007f7dc99493e0] = 448
dependencies [0x00007f7dc99493e0,0x00007f7dc99493e8] = 8
nul chk table [0x00007f7dc99493e8,0x00007f7dc9949438] = 80
#
# If you would like to submit a bug report, please visit:
# https://bugreport.java.com/bugreport/crash.jsp
#
[error occurred during error reporting (), id 0xb, SIGSEGV (0xb) at pc=0x00007f7ddf75b529]
[error occurred during error reporting (), id 0xb, SIGSEGV (0xb) at pc=0x00007f7ddf75b529]
connect log (extract)
[2022-04-06 07:01:50,706] WARN Unable to query database for maximum table name length; the connector may fail to write to tables with long names (io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect)
org.postgresql.util.PSQLException: ERROR: unknown function name: repeat(STRING,INT)
Position: 15
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:329)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:315)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:291)
at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:243)
at io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect.computeMaxIdentifierLength(PostgreSqlDatabaseDialect.java:119)
at io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect.getConnection(PostgreSqlDatabaseDialect.java:106)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:80)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:52)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:64)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:84)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
[2022-04-06 07:01:50,709] INFO JdbcDbWriter Connected (io.confluent.connect.jdbc.sink.JdbcDbWriter)
[2022-04-06 07:01:50,745] INFO Checking PostgreSql dialect for existence of TABLE "topic_1" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:50,998] INFO Using PostgreSql dialect TABLE "topic_1" absent (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:51,005] INFO Creating table with sql: CREATE TABLE "topic_1" (
"id" REAL NOT NULL,
"price" REAL NOT NULL,
"size" REAL NOT NULL,
"side" TEXT NOT NULL,
"liquidation" BOOLEAN NOT NULL,
"time" TEXT NOT NULL) (io.confluent.connect.jdbc.sink.DbStructure)
[2022-04-06 07:01:51,184] INFO Checking PostgreSql dialect for existence of TABLE "topic_1" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:51,193] INFO Using PostgreSql dialect TABLE "topic_1" present (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:52,136] INFO Checking PostgreSql dialect for type of TABLE "topic_1" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:52,145] INFO Setting metadata for table "topic_1" to Table{name='"topic_1"', type=TABLE columns=[Column{'liquidation', isPrimaryKey=false, allowsNull=true, sqlType=bool}, Column{'size', isPrimaryKey=false, allowsNull=true, sqlType=float4}, Column{'time', isPrimaryKey=false, allowsNull=true, sqlType=varchar}, Column{'price', isPrimaryKey=false, allowsNull=true, sqlType=float4}, Column{'id', isPrimaryKey=false, allowsNull=true, sqlType=float4}, Column{'side', isPrimaryKey=false, allowsNull=true, sqlType=varchar}]} (io.confluent.connect.jdbc.util.TableDefinitions)
[2022-04-06 07:01:52,154] INFO Checking PostgreSql dialect for existence of TABLE "topic_2" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:52,339] INFO Using PostgreSql dialect TABLE "topic_2" absent (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:52,339] INFO Creating table with sql: CREATE TABLE "topic_2" (
"id" REAL NOT NULL,
"price" REAL NOT NULL,
"size" REAL NOT NULL,
"side" TEXT NOT NULL,
"liquidation" BOOLEAN NOT NULL,
"time" TEXT NOT NULL) (io.confluent.connect.jdbc.sink.DbStructure)
[2022-04-06 07:01:52,661] INFO Checking PostgreSql dialect for existence of TABLE "topic_2" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:52,669] INFO Using PostgreSql dialect TABLE "topic_2" present (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
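Based on the CREATE TABLE statements in the connect log above, one thing I can try is pre-creating both tables in QuestDB (e.g. via the web console on port 9000) so that auto.create is not exercised when the second topic starts writing. This is only a sketch; the QuestDB-side types are my best-guess mapping of the connector's REAL/TEXT columns:

-- Pre-create the table the sink would otherwise auto-create
-- (types are an assumed mapping of the connector's REAL/TEXT/BOOLEAN columns)
CREATE TABLE topic_2 (
  id FLOAT,
  price FLOAT,
  size FLOAT,
  side STRING,
  liquidation BOOLEAN,
  time STRING
);

(and the same for topic_1).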

Related

SQLExceptionHelper: Input parameter not set, Index: 0

I am using spring-data-rest & Hibernate to expose a table "File(Id, Name)" from a SAP IQ (Sybase IQ) database. The below error occurs when I do a "GET" on the "File" table using "curl http://localhost:8080/files/1".
2022-07-20 20:01:03 WARN [http-nio-8081-exec-1] o.h.e.jdbc.spi.SqlExceptionHelper - SQL Error: 0, SQLState: JZ0SA
2022-07-20 20:01:03 ERROR [http-nio-8081-exec-1] o.h.e.jdbc.spi.SqlExceptionHelper - JZ0SA: Prepared Statement: Input parameter not set, index: 0.
2022-07-20 20:01:03 INFO [http-nio-8081-exec-1] o.h.e.i.DefaultLoadEventListener - HHH000327: Error performing load command
org.hibernate.exception.GenericJDBCException: could not extract ResultSet
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:99)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.extract(ResultSetReturnImpl.java:67)
I have the same table in "Sybase 16" and it works perfectly with the same code base.
Has anyone faced a similar issue?
Thanks in advance

MiNiFi - NiFi Connection Failure: Unknown Host Exception : Able to telnet host from the machine where MiNiFi is running

I am running MiNiFi on a Linux box (gateway server) which is behind my company's firewall. My NiFi is running on an AWS EC2 node (in standalone mode).
I am trying to send data from the gateway to the NiFi running on AWS EC2.
From the gateway, I am able to telnet to the EC2 node with the public DNS and the remote port which I have configured in the nifi.properties file.
nifi.properties
# Site to Site properties
nifi.remote.input.host=ec2-xxx.us-east-2.compute.amazonaws.com
nifi.remote.input.secure=false
nifi.remote.input.socket.port=1026
nifi.remote.input.http.enabled=true
nifi.remote.input.http.transaction.ttl=30 sec
nifi.remote.contents.cache.expiration=30 secs
Telnet connection from Gateway to NiFi
iot1@iothdp02:~/minifi/minifi-0.5.0/conf$ telnet ec2-xxx.us-east-2.compute.amazonaws.com 1026
Trying xx.xx.xx.xxx...
Connected to ec2-xxx.us-east-2.compute.amazonaws.com.
Escape character is '^]'.
The Public DNS is resolving to the correct Public IP of the EC2 node.
From the EC2 node, when I do nslookup on the Public DNS, it gives back the private IP.
From AWS Documentation: "The public IP address is mapped to the primary private IP address through network address translation (NAT). "
Hence, I am not adding the public DNS and the public IP to the /etc/hosts file on the EC2 node.
From MiNiFi side, I am getting the below error:
minifi-app.log
iot1@iothdp02:~/minifi/minifi-0.5.0/logs$ cat minifi-app.log
2018-11-14 16:00:47,910 INFO [pool-31-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2018-11-14 16:00:47,911 INFO [pool-31-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 0 records in 0 milliseconds
2018-11-14 16:01:02,334 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@67207d8a checkpointed with 0 Records and 0 Swap Files in 20 milliseconds (Stop-the-world time = 6 milliseconds, Clear Edit Logs time = 4 millis), max Transaction ID -1
2018-11-14 16:02:47,911 INFO [pool-31-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2018-11-14 16:02:47,912 INFO [pool-31-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 0 records in 0 milliseconds
2018-11-14 16:03:02,354 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@67207d8a checkpointed with 0 Records and 0 Swap Files in 18 milliseconds (Stop-the-world time = 3 milliseconds, Clear Edit Logs time = 5 millis), max Transaction ID -1
2018-11-14 16:03:10,636 WARN [Timer-Driven Process Thread-8] o.a.n.r.util.SiteToSiteRestApiClient Failed to get controller from http://ec2-xxx.us-east-2.compute.amazonaws.com:9090/nifi-api due to java.net.UnknownHostException: ec2-xxx.us-east-2.compute.amazonaws.com: unknown error
2018-11-14 16:03:10,636 WARN [Timer-Driven Process Thread-8] o.apache.nifi.controller.FlowController Unable to communicate with remote instance RemoteProcessGroup[http://ec2-xxx.us-east-2.compute.amazonaws.com:9090/nifi] due to org.apache.nifi.controller.exception.CommunicationsException: org.apache.nifi.controller.exception.CommunicationsException: Unable to communicate with Remote NiFi at URI http://ec2-xxx.us-east-2.compute.amazonaws.com:9090/nifi due to: ec2-xxx.us-east-2.compute.amazonaws.com: unknown error
2018-11-14 16:04:47,912 INFO [pool-31-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2018-11-14 16:04:47,912 INFO [pool-31-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 0 records in 0 milliseconds
2018-11-14 16:05:02,380 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@67207d8a checkpointed with 0 Records and 0 Swap Files in 25 milliseconds (Stop-the-world time = 8 milliseconds, Clear Edit Logs time = 6 millis), max Transaction ID -1
2018-11-14 16:06:47,912 INFO [pool-31-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Initiating checkpoint of FlowFile Repository
2018-11-14 16:06:47,912 INFO [pool-31-thread-1] o.a.n.c.r.WriteAheadFlowFileRepository Successfully checkpointed FlowFile Repository with 0 records in 0 milliseconds
2018-11-14 16:07:02,399 INFO [Write-Ahead Local State Provider Maintenance] org.wali.MinimalLockingWriteAheadLog org.wali.MinimalLockingWriteAheadLog@67207d8a checkpointed with
MiNiFi config.yml
MiNiFi Config Version: 3
Flow Controller:
  name: Gateway-IDS_v0.1
  comment: "1. ConsumeMQTT - MiNiFi will consume mqtt messages in gateway\n2. Remote\
    \ Process Group will send messages to NiFi "
Core Properties:
  flow controller graceful shutdown period: 10 sec
  flow service write delay interval: 500 ms
  administrative yield duration: 30 sec
  bored yield duration: 10 millis
  max concurrent threads: 1
  variable registry properties: ''
FlowFile Repository:
  partitions: 256
  checkpoint interval: 2 mins
  always sync: false
  Swap:
    threshold: 20000
    in period: 5 sec
    in threads: 1
    out period: 5 sec
    out threads: 4
Content Repository:
  content claim max appendable size: 10 MB
  content claim max flow files: 100
  always sync: false
Provenance Repository:
  provenance rollover time: 1 min
  implementation: org.apache.nifi.provenance.MiNiFiPersistentProvenanceRepository
Component Status Repository:
  buffer size: 1440
  snapshot frequency: 1 min
Security Properties:
  keystore: ''
  keystore type: ''
  keystore password: ''
  key password: ''
  truststore: ''
  truststore type: ''
  truststore password: ''
  ssl protocol: ''
  Sensitive Props:
    key:
    algorithm: PBEWITHMD5AND256BITAES-CBC-OPENSSL
    provider: BC
Processors:
- id: 6396f40f-118f-33f4-0000-000000000000
  name: ConsumeMQTT
  class: org.apache.nifi.processors.mqtt.ConsumeMQTT
  max concurrent tasks: 1
  scheduling strategy: TIMER_DRIVEN
  scheduling period: 0 sec
  penalization period: 30 sec
  yield period: 1 sec
  run duration nanos: 0
  auto-terminated relationships list: []
  Properties:
    Broker URI: tcp://localhost:1883
    Client ID: nifi
    Connection Timeout (seconds): '30'
    Keep Alive Interval (seconds): '60'
    Last Will Message:
    Last Will QoS Level:
    Last Will Retain:
    Last Will Topic:
    MQTT Specification Version: '0'
    Max Queue Size: '10'
    Password:
    Quality of Service(QoS): '0'
    SSL Context Service:
    Session state: 'true'
    Topic Filter: MQTT
    Username:
Controller Services: []
Process Groups: []
Input Ports: []
Output Ports: []
Funnels: []
Connections:
- id: f0007aa3-cf32-3593-0000-000000000000
  name: ConsumeMQTT/Message/85ebf198-0166-1000-5592-476a7ba47d2e
  source id: 6396f40f-118f-33f4-0000-000000000000
  source relationship names:
  - Message
  destination id: 85ebf198-0166-1000-5592-476a7ba47d2e
  max work queue size: 10000
  max work queue data size: 1 GB
  flowfile expiration: 0 sec
  queue prioritizer class: ''
Remote Process Groups:
- id: c00d3132-375b-323f-0000-000000000000
  name: ''
  url: http://ec2-xxx.us-east-2.compute.amazonaws.com:9090
  comment: ''
  timeout: 30 sec
  yield period: 10 sec
  transport protocol: RAW
  proxy host: ''
  proxy port: ''
  proxy user: ''
  proxy password: ''
  local network interface: ''
  Input Ports:
  - id: 85ebf198-0166-1000-5592-476a7ba47d2e
    name: From MiNiFi
    comment: ''
    max concurrent tasks: 1
    use compression: false
    Properties:
      Port: 1026
      Host Name: ec2-xxx.us-east-2.compute.amazonaws.com
  Output Ports: []
NiFi Properties Overrides: {}
Any pointers on how to troubleshoot this issue?
In MiNiFi config.yml, I changed the URL under Remote Process Groups from http://ec2-xxx.us-east-2.compute.amazonaws.com:9090 to http://ec2-xxx.us-east-2.compute.amazonaws.com:9090/nifi.
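For reference, the corrected Remote Process Groups entry in config.yml would then look like this (same IDs as above; only the url changed):

Remote Process Groups:
- id: c00d3132-375b-323f-0000-000000000000
  name: ''
  url: http://ec2-xxx.us-east-2.compute.amazonaws.com:9090/nifi
  timeout: 30 sec
  yield period: 10 sec
  transport protocol: RAW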

JanusGraph returns "Unknown external index backend: search" when building a mixed index with Elasticsearch in Gremlin

When I attempt to create a mixed index in JanusGraph 0.3.0 via Gremlin and Gremlin Server, I get the "Unknown external index backend: search" error. Elasticsearch is running and I believe the config file is correct (I've included all of that below).
gremlin> mgmt.get('graph.janusgraph-version')
==>0.3.0
gremlin>
When I run the following...
gremlin> IsCurrentVersion = mgmt.makePropertyKey('is current version').dataType(Boolean.class).cardinality(SINGLE).make();
==>is current version
gremlin> mgmt.buildIndex('iscurrent', Vertex.class).addKey(IsCurrentVersion).buildMixedIndex("search");
I get...
Unknown external index backend: search
My config file contains:
# Settings with mutability GLOBAL_OFFLINE are centrally managed in
# JanusGraph's storage backend. After starting the database for the first
# time, this file's copy of this setting is ignored. Use JanusGraph's
# Management System to read or modify this value after bootstrapping.
index.search.backend=elasticsearch
# The hostname or comma-separated list of hostnames of index backend
# servers. This is only applicable to some index backends, such as
# elasticsearch and solr.
#
# Default: 127.0.0.1
# Data Type: class java.lang.String[]
# Mutability: MASKABLE
index.search.hostname=127.0.0.1
The Gremlin Server startup log has...
4060 [main] INFO com.datastax.driver.core.Cluster - New Cassandra host /127.0.0.1:9042 added
4085 [main] INFO org.janusgraph.diskstorage.Backend - Configuring index [search]
4813 [main] INFO org.janusgraph.diskstorage.Backend - Configuring index [backend]
4837 [main] INFO org.janusgraph.diskstorage.Backend - Initiated backend operations thread pool of size 8
4931 [main] INFO org.janusgraph.diskstorage.Backend - Configuring total store cache size: 219574576
Elasticsearch is running as shown by...
curl 'http://localhost:9200/?pretty'
{
"name" : "nfZKXFP",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "6QDueUnoQfmKRwuEG-a3nw",
"version" : {
"number" : "6.0.1",
"build_hash" : "601be4a",
"build_date" : "2017-12-04T09:29:09.525Z",
"build_snapshot" : false,
"lucene_version" : "7.0.1",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You Know, for Search"
}
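Given the GLOBAL_OFFLINE note in the config comments above, a hedged guess is that an earlier startup persisted a different index.search.backend value in the storage backend, so the file's copy of the setting is now ignored. A minimal sketch of inspecting and updating the stored value through the management system (changing a GLOBAL_OFFLINE option requires that no other JanusGraph instances are open):

gremlin> mgmt = graph.openManagement()
gremlin> mgmt.get('index.search.backend')
gremlin> mgmt.set('index.search.backend', 'elasticsearch')
gremlin> mgmt.commit()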

SQL error 17002: Physical DB connection is invalid

I am trying to connect to an Oracle DB (installed on a remote machine) from my Java application, but I am getting SQL error code 17002. Please let me know how to solve this issue.
Full stack trace
MSG: Connecting to jdbc:oracle:thin:@(description=(address_list=(address=(protocol=tcp)(host=dlocdb04)(port=50000))(address=(protocol=tcp)(host=dlocdb04)(port=50000)))(connect_data=(service_name=os02rtdb01svc.world)(server=dedicated))) with Oracle JDBC driver version: 9.2.0.1.0 [cumulative retries=0]
USER_ID: UNKNOWN
level: WARN
date: 2015-09-21 10:51:45,563
category: com.retek.merch.utils.ConnectionPool
MSG: [SQL Error code: 17002, State: null] Physical DB connection is invalid. Restarting pool due to the following error.
USER_ID: UNKNOWN
java.sql.SQLException: Io exception: Connection refused(DESCRIPTION=(ERR=1153)(VSNNUM=0)(ERROR_STACK=(ERROR=(CODE=1153)(EMFI=4)(ARGS='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dlocdb0401)(PORT=27320))(CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))null))'))(ERROR=(CODE=303)(EMFI=1))))
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:333)
at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java:404)
at oracle.jdbc.driver.OracleDriver.getConnectionInstance(OracleDriver.java:468)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:314)
at java.sql.DriverManager.getConnection(DriverManager.java:512)
at java.sql.DriverManager.getConnection(DriverManager.java:140)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:169)
at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPhysicalConnection(OracleConnectionPoolDataSource.java:149)
at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPooledConnection(OracleConnectionPoolDataSource.java:95)
at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPooledConnection(OracleConnectionPoolDataSource.java:63)
at oracle.jdbc.pool.OracleConnectionCacheImpl.getNewPoolOrXAConnection(OracleConnectionCacheImpl.java:547)
at oracle.jdbc.pool.OracleConnectionCacheImpl.getPooledConnection(OracleConnectionCacheImpl.java:404)
at oracle.jdbc.pool.OracleConnectionCacheImpl.getConnection(OracleConnectionCacheImpl.java:298)
at oracle.jdbc.pool.OracleConnectionCacheImpl.getConnection(OracleConnectionCacheImpl.java:268)
at com.retek.merch.utils.ConnectionPool.get(ConnectionPool.java:346)
at com.retek.merch.utils.TransactionManager.start(TransactionManager.java:59)
at com.retek.reim.merch.utils.ReIMTransactionManager.start(ReIMTransactionManager.java:49)
at com.retek.reim.manager.LoginManager.login(LoginManager.java:72)
at com.retek.reim.ui.login.LoginAction.perform(LoginAction.java:47)
at org.apache.struts.action.ActionServlet.processActionPerform(ActionServlet.java:1786)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1585)
at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:509)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:638)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:720)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:199)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:145)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:210)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:596)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:433)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:955)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:139)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:596)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:433)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:955)
at org.apache.catalina.core.StandardContext.invoke(StandardContext.java:2460)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:133)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:596)
at org.apache.catalina.valves.ErrorDispatcherValve.invoke(ErrorDispatcherValve.java:119)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:594)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:116)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:594)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:433)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:955)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:127)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:596)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:433)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:955)
at org.apache.coyote.tomcat4.CoyoteAdapter.service(CoyoteAdapter.java:157)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:873)
at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
at java.lang.Thread.run(Thread.java:534)
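A minimal standalone connectivity check can help separate the connection pool from the listener itself. This is only a sketch, reusing the thin URL from the log above with placeholder credentials; the embedded error stack mentions host dlocdb0401 port 27320 being refused, so running this from the application host may reproduce that failure if a firewall is blocking the redirected port:

import java.sql.Connection;
import java.sql.DriverManager;

public class OracleConnCheck {
    public static void main(String[] args) throws Exception {
        // Thin URL copied from the log above; user/password are placeholders.
        String url = "jdbc:oracle:thin:@(description=(address=(protocol=tcp)"
                + "(host=dlocdb04)(port=50000))"
                + "(connect_data=(service_name=os02rtdb01svc.world)(server=dedicated)))";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}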

Why is Impala not working on an HBase table?

I created an external table B over HBase table A using Hive. I can successfully access the data of B. Then I followed the official guide and typed in the Impala shell:
invalidate metadata B;
And then I queried this external table B in the Impala shell:
select * from B limit 4;
but it outputs:
ERROR: RuntimeException: couldn't retrieve HBase table (mv_p2pusers) info:
Enable/Disable failed
Here are some logs related:
11:13:58.937 AM INFO jni-util.cc:177
java.lang.RuntimeException: couldn't retrieve HBase table (mv_p2pusers) info:
Enable/Disable failed
at com.cloudera.impala.planner.HBaseScanNode.computeScanRangeLocations(HBaseScanNode.java:300)
at com.cloudera.impala.planner.HBaseScanNode.init(HBaseScanNode.java:125)
at com.cloudera.impala.planner.SingleNodePlanner.createScanNode(SingleNodePlanner.java:891)
at com.cloudera.impala.planner.SingleNodePlanner.createTableRefNode(SingleNodePlanner.java:1082)
at com.cloudera.impala.planner.SingleNodePlanner.createSelectPlan(SingleNodePlanner.java:526)
at com.cloudera.impala.planner.SingleNodePlanner.createQueryPlan(SingleNodePlanner.java:151)
at com.cloudera.impala.planner.SingleNodePlanner.createSingleNodePlan(SingleNodePlanner.java:117)
at com.cloudera.impala.planner.Planner.createPlan(Planner.java:47)
at com.cloudera.impala.service.Frontend.createExecRequest(Frontend.java:842)
at com.cloudera.impala.service.JniFrontend.createExecRequest(JniFrontend.java:146)
11:13:58.939 AM INFO status.cc:114
RuntimeException: couldn't retrieve HBase table (mv_p2pusers) info:
Enable/Disable failed
@ 0x78b793 (unknown)
@ 0xa68275 (unknown)
@ 0x9802c6 (unknown)
@ 0x99db78 (unknown)
@ 0x99e6e4 (unknown)
@ 0x9d50cb (unknown)
@ 0xb33687 (unknown)
@ 0xb29054 (unknown)
@ 0x9ac52b (unknown)
@ 0x1571c39 (unknown)
@ 0x155d9cf (unknown)
@ 0x155f914 (unknown)
@ 0x92d363 (unknown)
@ 0x92daca (unknown)
@ 0xaa4faa (unknown)
@ 0xaa7130 (unknown)
@ 0xca79b3 (unknown)
@ 0x386be079d1 (unknown)
@ 0x386bae8b6d (unknown)
11:13:58.940 AM INFO impala-server.cc:824
UnregisterQuery(): query_id=d4269ff898eb4e7:1866144af0d14a7
11:13:58.940 AM INFO impala-server.cc:893
Cancel(): query_id=d4269ff898eb4e7:1866144af0d14a7
11:13:59.935 AM INFO ClientCnxn.java:975
Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
11:13:59.935 AM WARN ClientCnxn.java:1102
Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
Java exception follows:
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
11:14:01.036 AM INFO ClientCnxn.java:975
Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
11:14:01.037 AM WARN ClientCnxn.java:1102
Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
Java exception follows:
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
11:14:02.138 AM INFO ClientCnxn.java:975
Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
11:14:02.138 AM WARN ClientCnxn.java:1102
Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
Java exception follows:
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
11:14:02.199 AM INFO impala-hs2-server.cc:795
GetSchemas(): request=TGetSchemasReq {
01: sessionHandle (struct) = TSessionHandle {
01: sessionId (struct) = THandleIdentifier {
01: guid (string) = "\xf8\xb9n\xe4\xb4\xf6N\xef\xad)9W.\x92#Y",
02: secret (string) = "\xc0?\xc7\xd9\x930C\x9b\xb5\xf6K\x8em\xcb\xf8\xe4",
},
},
}
11:14:02.203 AM INFO MetadataOp.java:414
Returning 19 schemas
It seems the HBase table B is neither enabled nor disabled, which is very strange. I googled around. Is this related to HBase security issues or an Impala version problem?
Did anybody encounter the same problem? How can I solve this? Thanks in advance.
Enable the HBase service from the Impala configuration.
You can do it from Cloudera Manager: Impala -> Configuration,
search for "HBase" and enable the service.
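Separately, the log extract shows repeated "Connection refused" errors when opening a socket to localhost:2181, so it may also be worth verifying that ZooKeeper is reachable from the Impala host. A quick probe, assuming ZooKeeper on its default port:

echo ruok | nc localhost 2181   # a healthy ZooKeeper replies "imok"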
