I have a Kafka broker with 2 topics and a JDBC sink connector to a questDB database.
Everything is built with docker containers.
If I configure the JDBC connector with just one topic (whichever one), it works just fine and all events are transferred over to questDB.
curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" --data @jdbc_sink.json http://connect:8083/connectors
{
  "name": "jdbc_sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "topic_1",
    "table.name.format": "${topic}",
    "connection.url": "jdbc:postgresql://questdb:8812/qdb?useSSL=false",
    "connection.user": "admin",
    "connection.password": "quest",
    "auto.create": "true",
    "insert.mode": "insert",
    "dialect.name": "PostgreSqlDatabaseDialect"
  }
}
As soon as I include both topics in the JDBC connector config, the questDB and connect containers crash:
"topics": "topic_1,topic_2",
Connect Dockerfile
FROM confluentinc/cp-kafka-connect-base:7.1.0
RUN confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:10.3.3
connect:
  build:
    context: ./connect
    dockerfile: Dockerfile
  hostname: connect
  container_name: connect
  depends_on:
    - broker1
    - zookeeper
  ports:
    - "8083:8083"
  environment:
    CONNECT_BOOTSTRAP_SERVERS: "broker1:29092"
    CONNECT_REST_ADVERTISED_HOST_NAME: connect
    CONNECT_REST_PORT: 8083
    CONNECT_GROUP_ID: compose-connect-group
    CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
    CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
    CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
    CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
    CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: "http://schema-registry:8081"
  networks:
    - broker-kafka
questdb:
  image: questdb/questdb:6.2.2b
  hostname: questdb
  container_name: questdb
  ports:
    - "9000:9000" # REST API and Web Console
    - "9009:9009" # InfluxDB line protocol
    - "8812:8812" # Postgres wire protocol
    - "9003:9003" # Min health server
  volumes:
    - ./questdb:/root/.questdb
  networks:
    - broker-kafka
What is the problem here? Does anyone have an idea?
Below are extracts from the docker logs of the connect and questDB containers. I cannot post the complete logs (too many characters), so I tried to extract the relevant messages, but frankly speaking I could not pin down the decisive error message. Please let me know if you need more logs.
questDB log (extract)
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f7dc99486a8, pid=1, tid=38
#
# JRE version: OpenJDK Runtime Environment (17.0.1+12) (build 17.0.1+12-39)
# Java VM: OpenJDK 64-Bit Server VM (17.0.1+12-39, mixed mode, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# J 2153 c1 io.questdb.cairo.vm.api.MemoryCR.getStr(JLio/questdb/cairo/vm/api/MemoryCR$CharSequenceView;)Ljava/lang/CharSequence; io.questdb#6.2.2-SNAPSHOT (101 bytes) # 0x00007f7dc99486a8 [0x00007f7dc99484a0+0x0000000000000208]
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /root/.questdb/hs_err_pid1.log
Compiled method (c1) 386972 2153 3 io.questdb.cairo.vm.api.MemoryCR::getStr (101 bytes)
total in heap [0x00007f7dc9948210,0x00007f7dc9949438] = 4648
relocation [0x00007f7dc9948370,0x00007f7dc9948498] = 296
main code [0x00007f7dc99484a0,0x00007f7dc9948f80] = 2784
stub code [0x00007f7dc9948f80,0x00007f7dc9949060] = 224
oops [0x00007f7dc9949060,0x00007f7dc9949070] = 16
metadata [0x00007f7dc9949070,0x00007f7dc99490b8] = 72
scopes data [0x00007f7dc99490b8,0x00007f7dc9949220] = 360
scopes pcs [0x00007f7dc9949220,0x00007f7dc99493e0] = 448
dependencies [0x00007f7dc99493e0,0x00007f7dc99493e8] = 8
nul chk table [0x00007f7dc99493e8,0x00007f7dc9949438] = 80
#
# If you would like to submit a bug report, please visit:
# https://bugreport.java.com/bugreport/crash.jsp
#
[error occurred during error reporting (), id 0xb, SIGSEGV (0xb) at pc=0x00007f7ddf75b529]
[error occurred during error reporting (), id 0xb, SIGSEGV (0xb) at pc=0x00007f7ddf75b529]
connect log (extract)
[2022-04-06 07:01:50,706] WARN Unable to query database for maximum table name length; the connector may fail to write to tables with long names (io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect)
org.postgresql.util.PSQLException: ERROR: unknown function name: repeat(STRING,INT)
Position: 15
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:329)
at org.postgresql.jdbc.PgStatement.executeCachedSql(PgStatement.java:315)
at org.postgresql.jdbc.PgStatement.executeWithFlags(PgStatement.java:291)
at org.postgresql.jdbc.PgStatement.executeQuery(PgStatement.java:243)
at io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect.computeMaxIdentifierLength(PostgreSqlDatabaseDialect.java:119)
at io.confluent.connect.jdbc.dialect.PostgreSqlDatabaseDialect.getConnection(PostgreSqlDatabaseDialect.java:106)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.newConnection(CachedConnectionProvider.java:80)
at io.confluent.connect.jdbc.util.CachedConnectionProvider.getConnection(CachedConnectionProvider.java:52)
at io.confluent.connect.jdbc.sink.JdbcDbWriter.write(JdbcDbWriter.java:64)
at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:84)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:581)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:333)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:234)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:203)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:188)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:243)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
[2022-04-06 07:01:50,709] INFO JdbcDbWriter Connected (io.confluent.connect.jdbc.sink.JdbcDbWriter)
[2022-04-06 07:01:50,745] INFO Checking PostgreSql dialect for existence of TABLE "topic_1" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:50,998] INFO Using PostgreSql dialect TABLE "topic_1" absent (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:51,005] INFO Creating table with sql: CREATE TABLE "topic_1" (
"id" REAL NOT NULL,
"price" REAL NOT NULL,
"size" REAL NOT NULL,
"side" TEXT NOT NULL,
"liquidation" BOOLEAN NOT NULL,
"time" TEXT NOT NULL) (io.confluent.connect.jdbc.sink.DbStructure)
[2022-04-06 07:01:51,184] INFO Checking PostgreSql dialect for existence of TABLE "topic_1" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:51,193] INFO Using PostgreSql dialect TABLE "topic_1" present (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:52,136] INFO Checking PostgreSql dialect for type of TABLE "topic_1" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:52,145] INFO Setting metadata for table "topic_1" to Table{name='"topic_1"', type=TABLE columns=[Column{'liquidation', isPrimaryKey=false, allowsNull=true, sqlType=bool}, Column{'size', isPrimaryKey=false, allowsNull=true, sqlType=float4}, Column{'time', isPrimaryKey=false, allowsNull=true, sqlType=varchar}, Column{'price', isPrimaryKey=false, allowsNull=true, sqlType=float4}, Column{'id', isPrimaryKey=false, allowsNull=true, sqlType=float4}, Column{'side', isPrimaryKey=false, allowsNull=true, sqlType=varchar}]} (io.confluent.connect.jdbc.util.TableDefinitions)
[2022-04-06 07:01:52,154] INFO Checking PostgreSql dialect for existence of TABLE "topic_2" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:52,339] INFO Using PostgreSql dialect TABLE "topic_2" absent (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:52,339] INFO Creating table with sql: CREATE TABLE "topic_2" (
"id" REAL NOT NULL,
"price" REAL NOT NULL,
"size" REAL NOT NULL,
"side" TEXT NOT NULL,
"liquidation" BOOLEAN NOT NULL,
"time" TEXT NOT NULL) (io.confluent.connect.jdbc.sink.DbStructure)
[2022-04-06 07:01:52,661] INFO Checking PostgreSql dialect for existence of TABLE "topic_2" (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
[2022-04-06 07:01:52,669] INFO Using PostgreSql dialect TABLE "topic_2" present (io.confluent.connect.jdbc.dialect.GenericDatabaseDialect)
After upgrading IBM Liberty in CICS (v5.1) from 8.5.5.0 to 8.5.5.5, the JCICS API is not recognized by Liberty. The same server.xml was copied over and it has the following features. I noticed a different message in the log (messages listed below) with the tag "LIBERTY NOTUSAGE". Has anybody seen this? Do we need to change any config files?
<featureManager>
<feature>cicsts:core-1.0</feature>
<feature>jsp-2.2</feature>
<feature>wab-1.0</feature>
<feature>blueprint-1.0</feature>
<feature>cicsts:security-1.0</feature>
</featureManager>
Message from Liberty message log (8.5.5.0)
product = WebSphere Application Server 8.5.5.0 (wlp-1.0.3.20130524-0951)
Message from Liberty message log (8.5.5.5)
product = CICS Transaction Server for z/OS 5.1.0, CICS LIBERTY NOTUSAGE, WebSphere Application Server 8.5.5.5, WAS FOR Z/OS 8.5.5.5 (wlp-1.0.8.cl50520150305-2202)
The "product = ... " message you refer to is from the first line of messages.log and is correct, as this refers to how the Liberty instance is registered (see message CWWKB0108I further down), as in this instance you are using the Liberty provided by CICS TS.
As to your problem "JCICS is not recognized", JCICS is provided by the cicsts:core-1.0 product extension, so no obvious reason for it not to be available if you have that configured.
Can you clarify a couple of issues which will help narrow this down
What errors do you see and in which logs
What features does Liberty report as actually configured - see msg CWWKF0012I in messages.log
< product = CICS Transaction Server for z/OS 5.1.0, CICS LIBERTY NOTUSAGE, WebSphere Application Server 8.5.5.5, WAS FOR Z/OS 8.5.5.5 (wlp-1.0.8.cl50520150305-2202) >
< wlp.install.dir = /opt/local/lpp/cicsts/cicsts51/PC/wlp/ >
< server.config.dir = /lpp/wss/common/cics/cicsfn11/work/CIC#FN11/WLPWINS2/wlp/usr/servers/wlpwins2/ >
< java.home = /usr/lpp/java/J7.0_64 >
< java.version = 1.7.0 >
< java.runtime = Java(TM) SE Runtime Environment (pmz6470sr9fp10-20150708_01 (SR9 FP10)) >
< os = z/OS (02.01.00; s390x) (en_US) >
< process = 33752595#L2 >
< ******************************************************************************** >
< [11/13/15 11:22:37:794 EST] 00000012 com.ibm.ws.kernel.launch.internal.FrameworkManager A CWWKE0001I: The server wlpwins2 has been launched. >
< [11/13/15 11:22:38:490 EST] 0000001b com.ibm.ws.config.xml.internal.XMLConfigParser A CWWKG0028A: Processing included configuration resource: /lpp/wss/common/cics/cicsfn11/work/CIC#FN11/WLPWINS2/wlp/usr/servers/wlpwins2/installedApps.xml >
< [11/13/15 11:22:38:496 EST] 0000001b com.ibm.ws.config.xml.internal.XMLConfigParser A CWWKG0028A: Processing included configuration resource: /lpp/wss/common/cics/cicsfn11/work/CIC#FN11/WLPWINS2/wlp/usr/servers/wlpwins2/cicsSsl.xml >
< [11/13/15 11:22:38:826 EST] 00000012 com.ibm.ws.kernel.launch.internal.FrameworkManager I CWWKE0002I: The kernel started after 1.031 seconds >
< [11/13/15 11:22:38:892 EST] 00000024 com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0007I: Feature update started. >
< [11/13/15 11:22:39:715 EST] 00000027 org.apache.aries.blueprint.container.BlueprintContainerImpl I Bundle com.ibm.ws.eba.tx.7.0 is waiting for dependencies [(objectClass=javax.transaction.TransactionManager), (objectClass=com.ibm.ws.LocalTransaction.LocalTransactionCurrent)] >
< [11/13/15 11:22:39:725 EST] 0000001b com.ibm.ws.security.internal.SecurityReadyServiceImpl I CWWKS0007I: The security service is starting... >
< [11/13/15 11:22:39:905 EST] 0000001d com.ibm.ws.ssl.config.WSKeyStore E CWPKI0033E: The keystore located at /lpp/wss/common/cics/cicsfn11/work/CIC#FN11/WLPWINS2/wlp/usr/servers/wlpwins2/resources/security/key.jks did not load because of the following error: Invalid keystore format >
< [11/13/15 11:22:39:908 EST] 0000001d com.ibm.ws.ssl.config.WSKeyStore W CWPKI0809W: There is a failure loading the defaultKeyStore keystore. If an SSL configuration is references the defaultKeyStore keystore, then the configuration fails to initialize. >
< [11/13/15 11:22:40:020 EST] 00000027 org.apache.aries.blueprint.container.BlueprintContainerImpl I Bundle com.ibm.ws.eba.tx.7.0 is waiting for dependencies [(objectClass=javax.transaction.TransactionManager)] >
< [11/13/15 11:22:40:178 EST] 00000028 org.apache.aries.blueprint.container.BlueprintContainerImpl I Bundle com.ibm.ws.org.apache.aries.application.resolver.obr.1.0.1 is waiting for dependencies [(objectClass=org.apache.aries.application.management.spi.runtime.LocalPlatform)] >
< [11/13/15 11:22:40:179 EST] 00000028 org.apache.aries.blueprint.container.BlueprintContainerImpl I Bundle com.ibm.ws.org.apache.aries.application.resolver.obr.1.0.1 is waiting for dependencies [(objectClass=org.apache.aries.application.management.spi.runtime.LocalPlatform)] >
< [11/13/15 11:22:40:482 EST] 0000002b com.ibm.ws.tcpchannel.internal.TCPChannel I CWWKO0219I: TCP Channel defaultHttpEndpoint has been started and is now listening for requests on host L2.mf.jpmchase.net (IPv4: 155.180.183.41) port 52561. >
< [11/13/15 11:22:40:633 EST] 00000017 LogService-127-com.ibm.cics.wlp.impl E CWWKE0702E: Could not resolve module: com.ibm.cics.wlp.impl [127]
Unresolved requirement: Import-Package: com.ibm.wsspi.bytebuffer >
< [11/13/15 11:22:40:731 EST] 0000001b com.ibm.ws.cache.ServerCache I DYNA1001I: WebSphere Dynamic Cache instance named baseCache initialized successfully. >
< [11/13/15 11:22:40:733 EST] 0000001b com.ibm.ws.cache.ServerCache I DYNA1071I: The cache provider default is being used. >
< [11/13/15 11:22:40:738 EST] 0000001b com.ibm.ws.cache.CacheServiceImpl I DYNA1056I: Dynamic Cache (object cache) initialized successfully. >
< [11/13/15 11:22:40:745 EST] 0000002a com.ibm.ws.security.internal.SecurityReadyServiceImpl I CWWKS0008I: The security service is ready. >
< [11/13/15 11:22:40:746 EST] 0000002a com.ibm.ws.security.token.ltpa.internal.LTPAKeyCreator I CWWKS4105I: LTPA configuration is ready after 0.385 seconds. >
< [11/13/15 11:22:40:885 EST] 00000024 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0015I: The server has the following interim fixes installed: PI29785,PI36563. >
< [11/13/15 11:22:40:886 EST] 00000024 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0012I: The server installed the following features: [ssl-1.0, json-1.0, appSecurity-2.0, blueprint-1.0, cicsts:security-1.0, jsp-2.2, servlet-3.0, jaxrs-1.1, jndi-1.0, cicsts:core-1.0, distributedMap-1.0, wab-1.0]. >
< [11/13/15 11:22:40:886 EST] 00000024 com.ibm.ws.kernel.feature.internal.FeatureManager I CWWKF0008I: Feature update completed in 2.061 seconds. >
< [11/13/15 11:22:40:886 EST] 00000024 com.ibm.ws.kernel.feature.internal.FeatureManager A CWWKF0011I: The server wlpwins2 is ready to run a smarter planet. >
I am trying to connect to an Oracle database (installed on a remote machine) from my Java application, but I am getting SQL error code 17002. Please let me know how to solve this issue.
Full stack trace
MSG: Connecting to jdbc:oracle:thin:@(description=(address_list=(address=(protocol=tcp)(host=dlocdb04)(port=50000))(address=(protocol=tcp)(host=dlocdb04)(port=50000)))(connect_data=(service_name=os02rtdb01svc.world)(server=dedicated))) with Oracle JDBC driver version: 9.2.0.1.0 [cumulative retries=0]
USER_ID: UNKNOWN
level: WARN
date: 2015-09-21 10:51:45,563
category: com.retek.merch.utils.ConnectionPool
MSG: [SQL Error code: 17002, State: null] Physical DB connection is invalid. Restarting pool due to the following error.
USER_ID: UNKNOWN
java.sql.SQLException: Io exception: Connection refused(DESCRIPTION=(ERR=1153)(VSNNUM=0)(ERROR_STACK=(ERROR=(CODE=1153)(EMFI=4)(ARGS='(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dlocdb0401)(PORT=27320))(CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))null))'))(ERROR=(CODE=303)(EMFI=1))))
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:134)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:179)
at oracle.jdbc.dbaccess.DBError.throwSqlException(DBError.java:333)
at oracle.jdbc.driver.OracleConnection.<init>(OracleConnection.java:404)
at oracle.jdbc.driver.OracleDriver.getConnectionInstance(OracleDriver.java:468)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:314)
at java.sql.DriverManager.getConnection(DriverManager.java:512)
at java.sql.DriverManager.getConnection(DriverManager.java:140)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:169)
at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPhysicalConnection(OracleConnectionPoolDataSource.java:149)
at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPooledConnection(OracleConnectionPoolDataSource.java:95)
at oracle.jdbc.pool.OracleConnectionPoolDataSource.getPooledConnection(OracleConnectionPoolDataSource.java:63)
at oracle.jdbc.pool.OracleConnectionCacheImpl.getNewPoolOrXAConnection(OracleConnectionCacheImpl.java:547)
at oracle.jdbc.pool.OracleConnectionCacheImpl.getPooledConnection(OracleConnectionCacheImpl.java:404)
at oracle.jdbc.pool.OracleConnectionCacheImpl.getConnection(OracleConnectionCacheImpl.java:298)
at oracle.jdbc.pool.OracleConnectionCacheImpl.getConnection(OracleConnectionCacheImpl.java:268)
at com.retek.merch.utils.ConnectionPool.get(ConnectionPool.java:346)
at com.retek.merch.utils.TransactionManager.start(TransactionManager.java:59)
at com.retek.reim.merch.utils.ReIMTransactionManager.start(ReIMTransactionManager.java:49)
at com.retek.reim.manager.LoginManager.login(LoginManager.java:72)
at com.retek.reim.ui.login.LoginAction.perform(LoginAction.java:47)
at org.apache.struts.action.ActionServlet.processActionPerform(ActionServlet.java:1786)
at org.apache.struts.action.ActionServlet.process(ActionServlet.java:1585)
at org.apache.struts.action.ActionServlet.doPost(ActionServlet.java:509)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:638)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:720)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:199)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:145)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:210)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:596)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:433)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:955)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:139)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:596)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:433)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:955)
at org.apache.catalina.core.StandardContext.invoke(StandardContext.java:2460)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:133)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:596)
at org.apache.catalina.valves.ErrorDispatcherValve.invoke(ErrorDispatcherValve.java:119)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:594)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:116)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:594)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:433)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:955)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:127)
at org.apache.catalina.core.StandardPipeline$StandardPipelineValveContext.invokeNext(StandardPipeline.java:596)
at org.apache.catalina.core.StandardPipeline.invoke(StandardPipeline.java:433)
at org.apache.catalina.core.ContainerBase.invoke(ContainerBase.java:955)
at org.apache.coyote.tomcat4.CoyoteAdapter.service(CoyoteAdapter.java:157)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:873)
at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(Http11BaseProtocol.java:665)
at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:528)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:81)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:689)
at java.lang.Thread.run(Thread.java:534)
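Given the "Connection refused" in the stack trace above, a minimal standalone connectivity test (a sketch, not part of the original post; the user and password are placeholders, and the duplicated address in the descriptor is collapsed to a single entry) can help confirm whether the listener on dlocdb04:50000 is reachable from the application host at all:
import java.sql.Connection;
import java.sql.DriverManager;

public class OracleConnTest {
    public static void main(String[] args) throws Exception {
        // Pre-JDBC-4 Oracle drivers such as 9.2.0.1.0 must be loaded explicitly.
        Class.forName("oracle.jdbc.driver.OracleDriver");
        // Connect descriptor taken from the log above; credentials are placeholders.
        String url = "jdbc:oracle:thin:@(description=(address_list="
                + "(address=(protocol=tcp)(host=dlocdb04)(port=50000)))"
                + "(connect_data=(service_name=os02rtdb01svc.world)(server=dedicated)))";
        Connection conn = DriverManager.getConnection(url, "user", "password");
        System.out.println("Connected: " + conn.getMetaData().getDatabaseProductVersion());
        conn.close();
    }
}
If this small test also fails with "Connection refused", the problem lies on the network/listener side (host, port, firewall, or the listener redirecting to a port such as 27320 that is not reachable) rather than in the application code.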
I created an external table B over HBase table A using Hive, and I can successfully access the data in B. Then I followed the official guide and typed the following in the Impala shell:
invalidate metadata B;
And then I query this external table B in Impala Shell:
select * from B limit 4;
but it outputs:
ERROR: RuntimeException: couldn't retrieve HBase table (mv_p2pusers) info:
Enable/Disable failed
Here are some related log extracts:
11:13:58.937 AM INFO jni-util.cc:177
java.lang.RuntimeException: couldn't retrieve HBase table (mv_p2pusers) info:
Enable/Disable failed
at com.cloudera.impala.planner.HBaseScanNode.computeScanRangeLocations(HBaseScanNode.java:300)
at com.cloudera.impala.planner.HBaseScanNode.init(HBaseScanNode.java:125)
at com.cloudera.impala.planner.SingleNodePlanner.createScanNode(SingleNodePlanner.java:891)
at com.cloudera.impala.planner.SingleNodePlanner.createTableRefNode(SingleNodePlanner.java:1082)
at com.cloudera.impala.planner.SingleNodePlanner.createSelectPlan(SingleNodePlanner.java:526)
at com.cloudera.impala.planner.SingleNodePlanner.createQueryPlan(SingleNodePlanner.java:151)
at com.cloudera.impala.planner.SingleNodePlanner.createSingleNodePlan(SingleNodePlanner.java:117)
at com.cloudera.impala.planner.Planner.createPlan(Planner.java:47)
at com.cloudera.impala.service.Frontend.createExecRequest(Frontend.java:842)
at com.cloudera.impala.service.JniFrontend.createExecRequest(JniFrontend.java:146)
11:13:58.939 AM INFO status.cc:114
RuntimeException: couldn't retrieve HBase table (mv_p2pusers) info:
Enable/Disable failed
# 0x78b793 (unknown)
# 0xa68275 (unknown)
# 0x9802c6 (unknown)
# 0x99db78 (unknown)
# 0x99e6e4 (unknown)
# 0x9d50cb (unknown)
# 0xb33687 (unknown)
# 0xb29054 (unknown)
# 0x9ac52b (unknown)
# 0x1571c39 (unknown)
# 0x155d9cf (unknown)
# 0x155f914 (unknown)
# 0x92d363 (unknown)
# 0x92daca (unknown)
# 0xaa4faa (unknown)
# 0xaa7130 (unknown)
# 0xca79b3 (unknown)
# 0x386be079d1 (unknown)
# 0x386bae8b6d (unknown)
11:13:58.940 AM INFO impala-server.cc:824
UnregisterQuery(): query_id=d4269ff898eb4e7:1866144af0d14a7
11:13:58.940 AM INFO impala-server.cc:893
Cancel(): query_id=d4269ff898eb4e7:1866144af0d14a7
11:13:59.935 AM INFO ClientCnxn.java:975
Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
11:13:59.935 AM WARN ClientCnxn.java:1102
Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
Java exception follows:
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
11:14:01.036 AM INFO ClientCnxn.java:975
Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
11:14:01.037 AM WARN ClientCnxn.java:1102
Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
Java exception follows:
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
11:14:02.138 AM INFO ClientCnxn.java:975
Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error)
11:14:02.138 AM WARN ClientCnxn.java:1102
Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
Java exception follows:
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
11:14:02.199 AM INFO impala-hs2-server.cc:795
GetSchemas(): request=TGetSchemasReq {
01: sessionHandle (struct) = TSessionHandle {
01: sessionId (struct) = THandleIdentifier {
01: guid (string) = "\xf8\xb9n\xe4\xb4\xf6N\xef\xad)9W.\x92#Y",
02: secret (string) = "\xc0?\xc7\xd9\x930C\x9b\xb5\xf6K\x8em\xcb\xf8\xe4",
},
},
}
11:14:02.203 AM INFO MetadataOp.java:414
Returning 19 schemas
It seems the HBase table B is neither enabled nor disabled, which is very strange. I googled around: is this related to HBase security issues or to an Impala version problem?
Has anybody encountered the same problem? How can it be solved? Thanks in advance.
Enable the HBase service from the Impala configuration.
You can do it from Cloudera Manager: Impala -> Configuration, search for "HBase" and enable the service.
PFA (please find the attached screenshot).
I am currently developing a REST service that needs to be deployed on WebLogic 10.3.4, using Spring 3.0.6 and examples found online, but the basic loading of the DispatcherServlet seems to be causing problems with WebLogic.
<servlet>
  <servlet-name>mvc-dispatcher</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
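For completeness, this servlet definition is normally paired with a servlet-mapping; the URL pattern below is an assumption, not taken from the original web.xml:
<servlet-mapping>
  <servlet-name>mvc-dispatcher</servlet-name>
  <url-pattern>/</url-pattern>
</servlet-mapping>
With no contextConfigLocation init-param, the DispatcherServlet will by default look for /WEB-INF/mvc-dispatcher-servlet.xml.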
The exception shown in the weblogic console window was:
<07-Nov-2011 20:29:33 o'clock GMT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>
log4j:WARN No appenders could be found for logger (org.springframework.web.servlet.DispatcherServlet).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
=============== DEBUG MESSAGE: unimplemented bytecode ================
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_PRIV_INSTRUCTION (0xc0000096) at pc=0x026b26d0, pid=7200, tid=7924
#
# JRE version: 6.0_21-b51
# Java VM: Java HotSpot(TM) Client VM (17.0-b17 mixed mode windows-x86 )
# Problematic frame:
# j javax.validation.Validation.byDefaultProvider()Ljavax/validation/bootstrap/GenericBootstrap;+0
#
# An error report file with more information is saved as:
# C:\bea\user_projects\domains\saw_ca_wl10\hs_err_pid7200.log
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#
The exception shown in the target AdminServer log was:
####<07-Nov-2011 20:29:54 o'clock GMT> <Info> <EJB> <JGOGGINS212> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320697794074> <BEA-010008> <EJB Deploying file: KCS-ejb-0.0.1-SNAPSHOT.jar>
####<07-Nov-2011 20:29:54 o'clock GMT> <Info> <Deployer> <JGOGGINS212> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320697794499> <BEA-149060> <Module KCS-ejb-0.0.1-SNAPSHOT.jar of application KCS-ear-0 successfully transitioned from STATE_NEW to STATE_PREPARED on server AdminServer.>
####<07-Nov-2011 20:29:54 o'clock GMT> <Info> <Deployer> <JGOGGINS212> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320697794499> <BEA-149059> <Module /KCS-webApp of application KCS-ear-0 is transitioning from STATE_NEW to STATE_PREPARED on server AdminServer.>
####<07-Nov-2011 20:29:55 o'clock GMT> <Info> <Deployer> <JGOGGINS212> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320697795146> <BEA-149060> <Module /KCS-webApp of application KCS-ear-0 successfully transitioned from STATE_NEW to STATE_PREPARED on server AdminServer.>
####<07-Nov-2011 20:29:55 o'clock GMT> <Info> <Deployer> <JGOGGINS212> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320697795270> <BEA-149059> <Module KCS-ejb-0.0.1-SNAPSHOT.jar of application KCS-ear-0 is transitioning from STATE_PREPARED to STATE_ADMIN on server AdminServer.>
####<07-Nov-2011 20:29:55 o'clock GMT> <Info> <Deployer> <JGOGGINS212> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320697795301> <BEA-149060> <Module KCS-ejb-0.0.1-SNAPSHOT.jar of application KCS-ear-0 successfully transitioned from STATE_PREPARED to STATE_ADMIN on server AdminServer.>
####<07-Nov-2011 20:29:55 o'clock GMT> <Info> <Deployer> <JGOGGINS212> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320697795301> <BEA-149059> <Module /KCS-webApp of application KCS-ear-0 is transitioning from STATE_PREPARED to STATE_ADMIN on server AdminServer.>
####<07-Nov-2011 20:29:55 o'clock GMT> <Info> <Deployer> <JGOGGINS212> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1320697795302> <BEA-149060> <Module /KCS-webApp of application KCS-ear-0 successfully transitioned from STATE_PREPARED to STATE_ADMIN on server AdminServer.>
####<07-Nov-2011 20:29:55 o'clock GMT> <Info> <ServletContext-/KCS-webApp> <JGOGGINS212> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1320697795498> <BEA-000000> <Initializing Spring FrameworkServlet 'mvc-dispatcher'>
I am building with Maven and have spring-core/web/webmvc/context all defined as runtime dependencies.
I also tried copying the "org.springframework.web.servlet-3.0.5.RELEASE.jar" file to the bea\modules folder in an attempt to resolve runtime issues within WebLogic.
I then tried adding the following to weblogic-application.xml:
<prefer-application-packages>
  <package-name>org.springframework.*</package-name>
  <package-name>org.springframework.web.*</package-name>
  <package-name>org.springframework.web.servlet.*</package-name>
</prefer-application-packages>
All that, same problem.
I noted that the release notes state "This version of WebLogic Server supports Spring 3.0.": http://download.oracle.com/docs/cd/E17904_01/web.1111/e13852/toc.htm#BGGEAIJJ
Function code is limited to 64 KB in size in both Java 7 and Java 8. When the code of a single function exceeds 64 KB, the compiler fails to compile it and the error is reported as "unimplemented bytecode":
COMPILE ERROR:
ERROR MESSAGES/STACK TRACES THAT OCCUR :
=============== DEBUG MESSAGE: unimplemented bytecode ================
A fatal error has been detected by the Java Runtime Environment:
Note: here "function" is taken to mean:
1) An ordinary function (method) written in a JSP or Java file.
2) A JSP (Java Server Page) page as a whole, since a JSP is compiled into Java code and is treated as a single function, so the same limit applies to the entire page.
Example:
// The compiled size of a single function must stay below 64 KB
public static void myFunction() {
    // Your code
}
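As a hedged illustration of the usual workaround (not part of the original answer), the oversized method is split into several smaller methods so that each one stays under the 64 KB limit:
// Instead of one huge method whose compiled bytecode exceeds 64 KB...
public static void processAll() {
    processPartOne();
    processPartTwo();
    processPartThree();
}

// ...move the code into smaller helper methods, each well under the limit.
private static void processPartOne() { /* first part of the original code */ }
private static void processPartTwo() { /* second part of the original code */ }
private static void processPartThree() { /* third part of the original code */ }
The same idea applies to JSPs: moving large scriptlet blocks into dynamically included pages (jsp:include) or into helper classes keeps the generated service method of each page small.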