Corda 4.3 client RPC throws an error when using LinearStateQueryCriteria to query by linear ID - spring-boot

I am trying to look up a state by linear ID from a Spring Boot application using the Corda RPC client. The CorDapps are built with Corda 4.3.
Below is the query I use:
QueryCriteria queryCriteria =
        new QueryCriteria.LinearStateQueryCriteria(null, idList, status, contractStateTypes);
List<StateAndRef<DemoState>> stateAndRefs =
        cordaRPCOps.vaultQueryByCriteria(queryCriteria, stateType).getStates();
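For context, here is a minimal sketch (not from the original post) of how the inputs referenced above might be assembled; the UUID literal is a placeholder:
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.UUID;
import net.corda.core.contracts.ContractState;
import net.corda.core.contracts.UniqueIdentifier;
import net.corda.core.node.services.Vault;

// Linear IDs to look up (placeholder UUID, not a real value).
List<UniqueIdentifier> idList = Collections.singletonList(
        new UniqueIdentifier(null, UUID.fromString("00000000-0000-0000-0000-000000000000")));
// Only return unconsumed states.
Vault.StateStatus status = Vault.StateStatus.UNCONSUMED;
// Restrict the query to the DemoState contract state type.
Set<Class<? extends ContractState>> contractStateTypes =
        Collections.<Class<? extends ContractState>>singleton(DemoState.class);
Class<DemoState> stateType = DemoState.class;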
The query fails to retrieve data and throws the exception shown below, but with the Corda 4.1 RPC client the same query returns the correct result.
Below is the exception from the Corda node logs when using the 4.3 RPC client:
[WARN ] 2020-04-28T15:35:12,118Z [Thread-964 (ActiveMQ-client-global-threads)] rpc.RPCServer.clientArtemisMessageHandler - Inbound RPC failed [errorCode=1ctda0y, moreInformationAt=https://errors.corda.net/ENT/4.1/1ctda0y] {actor_id=user1, actor_owning_identity=O=SBI, L=Panjim, C=IN, actor_store_id=NODE_CONFIG, invocation_id=81afab50-486a-406d-9e17-bbbcfdf5aaed, invocation_timestamp=2020-04-28T15:35:12.117Z, origin=user1, session_id=472a8267-7035-4e58-be6f-7b762c82bf2b, session_timestamp=2020-04-28T15:32:59.497Z}
java.io.NotSerializableException: Internal deserialization failure: java.lang.ArrayIndexOutOfBoundsException: java.util.List<*> -> net.corda.core.node.services.vault.QueryCriteria$LinearStateQueryCriteria -> null
at net.corda.serialization.internal.amqp.DeserializationInput.des(DeserializationInput.kt:106) ~[corda-serialization-4.1.jar:?]
at net.corda.serialization.internal.amqp.DeserializationInput.deserialize(DeserializationInput.kt:119) ~[corda-serialization-4.1.jar:?]
at net.corda.serialization.internal.amqp.AbstractAMQPSerializationScheme.deserialize(AMQPSerializationScheme.kt:225) ~[corda-serialization-4.1.jar:?]
at net.corda.serialization.internal.SerializationFactoryImpl$deserialize$1$1.invoke(SerializationScheme.kt:105) ~[corda-serialization-4.1.jar:?]
at net.corda.core.serialization.SerializationFactory.withCurrentContext(SerializationAPI.kt:71) ~[corda-core-4.1.jar:?]
at net.corda.serialization.internal.SerializationFactoryImpl$deserialize$1.invoke(SerializationScheme.kt:105) ~[corda-serialization-4.1.jar:?]
at net.corda.serialization.internal.SerializationFactoryImpl$deserialize$1.invoke(SerializationScheme.kt:73) ~[corda-serialization-4.1.jar:?]
at net.corda.core.serialization.SerializationFactory.asCurrent(SerializationAPI.kt:85) ~[corda-core-4.1.jar:?]
at net.corda.serialization.internal.SerializationFactoryImpl.deserialize(SerializationScheme.kt:105) ~[corda-serialization-4.1.jar:?]
at net.corda.node.services.rpc.RPCServer.clientArtemisMessageHandler(RPCServer.kt:584) ~[corda-node-4.1.jar:?]
at net.corda.node.services.rpc.RPCServer.access$clientArtemisMessageHandler(RPCServer.kt:77) ~[corda-node-4.1.jar:?]
at net.corda.node.services.rpc.RPCServer$createRpcConsumer$1.invoke(RPCServer.kt:295) ~[corda-node-4.1.jar:?]
at net.corda.node.services.rpc.RPCServer$createRpcConsumer$1.invoke(RPCServer.kt:77) ~[corda-node-4.1.jar:?]
at net.corda.node.services.rpc.RPCServerKt$sam$org_apache_activemq_artemis_api_core_client_MessageHandler$0.onMessage(RPCServer.kt) ~[corda-node-4.1.jar:?]
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.callOnMessage(ClientConsumerImpl.java:1002) ~[artemis-core-client-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.access$400(ClientConsumerImpl.java:50) ~[artemis-core-client-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl$Runner.run(ClientConsumerImpl.java:1125) ~[artemis-core-client-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42) ~[artemis-commons-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31) ~[artemis-commons-2.6.2.jar:2.6.2]
at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:66) ~[artemis-commons-2.6.2.jar:2.6.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_172]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_172]
at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118) ~[artemis-commons-2.6.2.jar:2.6.2]
Caused by: java.lang.ArrayIndexOutOfBoundsException: java.util.List<*> -> net.corda.core.node.services.vault.QueryCriteria$LinearStateQueryCriteria -> null
Any pointers on how we could get this working with the Corda 4.3 RPC client?

It seems you are using the Corda RPC client 4.3 while your Corda node is running v4.1 (note the corda-node-4.1.jar frames in the trace). In that case, you need to provide the minimumServerProtocolVersion to CordaRPCClientConfiguration.
Here's a sample:
CordaRPCClientConfiguration config =
        new CordaRPCClientConfiguration(Duration.ofMinutes(3), 4);
CordaRPCOps rpcProxy = new CordaRPCClient(NetworkHostAndPort.parse("<host>:<port>"), config)
        .start(<username>, <password>).getProxy();
Note the CordaRPCClientConfiguration(Duration.ofMinutes(3), 4) part: the "4" here is the minimumServerProtocolVersion, which corresponds to Corda 4.0.
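As a quick sanity check once connected, the RPC interface also exposes the protocol version the node actually speaks; a small sketch reusing the rpcProxy from above:
// Ask the node which RPC protocol version it supports.
int serverProtocolVersion = rpcProxy.getProtocolVersion();
System.out.println("Server RPC protocol version: " + serverProtocolVersion);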

Related

Bulk request failed in ElasticsearchSinkConnector

I got the following error while creating the Elasticsearch sink connector:
CREATE SINK CONNECTOR testdemosinkconnector WITH (
"type.name"= '_doc',
"input.data.format"= 'AVRO',
"connector.class"= 'io.confluent.connect.elasticsearch.ElasticsearchSinkConnector',
"tasks.max"= '1',
"transforms"= 'Dealership',
"topics"= 'es.contact.model',
"transforms.Dealership.type"= 'io.confluent.connect.transforms.ExtractTopic$Value',
"transforms.Dealership.field"= 'indexTopicName',
"transforms.Dealership.skip.missing.or.null"= 'true',
"connection.url"= 'https://elasticsearchdemo.es.us-central1.gcp.cloud.es.io:9243',
"connection.username"= 'elastic',
"connection.password"= 'BUgBxOBg3dv4jp4Z3W7p4tHC',
"key.ignore"= 'true',
"value.converter"= 'io.confluent.connect.avro.AvroConverter',
"value.converter.schemas.enable"= 'true',
"value.converter.schema.registry.url"= 'http://localhost:8081',
"bulk.size.bytes"= '-1',
"behavior.on.null.values"= 'IGNORE',enter code here
"behavior.on.malformed.documents"= 'IGNORE',
"max.retries"= '5',
"retry.backoff.ms"= '5000'
);
The error is,
FAILED | org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:618)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:334)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:235)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:204)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:200)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:255)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Bulk request failed
at io.confluent.connect.elasticsearch.ElasticsearchClient$1.afterBulk(ElasticsearchClient.java:397)
at org.elasticsearch.action.bulk.BulkRequestHandler$1.onFailure(BulkRequestHandler.java:70)
at org.elasticsearch.action.ActionListener$5.onFailure(ActionListener.java:258)
at org.elasticsearch.action.bulk.Retry$RetryHandler.onFailure(Retry.java:126)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$1(ElasticsearchClient.java:174)
... 5 more
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to execute bulk request due to 'java.io.IOException: Unable to parse response body for Response{requestLine=POST /_bulk?timeout=1m HTTP/1.1, host=https://elasticsearchdemo.es.us-central1.gcp.cloud.es.io:9243, response=HTTP/1.1 200 OK}' after 6 attempt(s)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:165)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:119)
at io.confluent.connect.elasticsearch.ElasticsearchClient.callWithRetries(ElasticsearchClient.java:425)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$1(ElasticsearchClient.java:168)
... 5 more
Caused by: java.io.IOException: Unable to parse response body for Response{requestLine=POST /_bulk?timeout=1m HTTP/1.1, host=https://elasticsearchdemo.es.us-central1.gcp.cloud.es.io:9243, response=HTTP/1.1 200 OK}
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1632)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1583)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1553)
at org.elasticsearch.client.RestHighLevelClient.bulk(RestHighLevelClient.java:533)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$0(ElasticsearchClient.java:170)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:158)
... 8 more
Caused by: java.lang.NullPointerException
at java.base/java.util.Objects.requireNonNull(Objects.java:221)
at org.elasticsearch.action.DocWriteResponse.<init>(DocWriteResponse.java:127)
at org.elasticsearch.action.index.IndexResponse.<init>(IndexResponse.java:54)
at org.elasticsearch.action.index.IndexResponse.<init>(IndexResponse.java:39)
at org.elasticsearch.action.index.IndexResponse$Builder.build(IndexResponse.java:107)
at org.elasticsearch.action.index.IndexResponse$Builder.build(IndexResponse.java:104)
at org.elasticsearch.action.bulk.BulkItemResponse.fromXContent(BulkItemResponse.java:159)
at org.elasticsearch.action.bulk.BulkResponse.fromXContent(BulkResponse.java:196)
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1892)
at org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAndParseEntity$8(RestHighLevelClient.java:1554)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1630)
... 13 more
Please help me resolve this error.
Elasticsearch Sink Connector version: 11.1.10
Elasticsearch version: 8.2.2
Elasticsearch 8 is not supported by version 11.1.10 of the Confluent Elasticsearch Sink Connector, which is most likely why it can't parse the Elasticsearch response properly. The root NullPointerException in DocWriteResponse is consistent with this: the 7.x client expects response fields that Elasticsearch 8 no longer returns.
As of version 11.0.0, the connector uses the Elasticsearch High Level REST Client (version 7.0.1), which means only Elasticsearch 7.x is supported.
https://docs.confluent.io/kafka-connect-elasticsearch/current/overview.html
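If you want to confirm what the server is actually running, the cluster root endpoint reports the version. A minimal sketch using the Elasticsearch low-level REST client (the host is taken from your config; the password is a placeholder):
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class EsVersionCheck {
    public static void main(String[] args) throws Exception {
        BasicCredentialsProvider credentials = new BasicCredentialsProvider();
        credentials.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("elastic", "<password>"));
        try (RestClient client = RestClient
                .builder(new HttpHost("elasticsearchdemo.es.us-central1.gcp.cloud.es.io", 9243, "https"))
                .setHttpClientConfigCallback(b -> b.setDefaultCredentialsProvider(credentials))
                .build()) {
            // GET / returns the cluster info, including version.number.
            Response response = client.performRequest(new Request("GET", "/"));
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}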

Debezium content-based router setup unable to find DebeziumException

I'm trying to set up a Debezium source connector in Docker that uses content-based routing, following the official docs.
I have all the following dependencies in /kafka/connect/debezium-connector-mysql:
antlr4-runtime-4.7.2.jar
debezium-connector-mysql-1.0.3.Final.jar
debezium-core-1.0.3.Final.jar
debezium-ddl-parser-1.0.3.Final.jar
debezium-scripting-1.7.2.Final.jar
groovy-3.0.9.jar
groovy-json-3.0.9.jar
groovy-jsr223-3.0.9.jar
mysql-binlog-connector-java-0.19.1.jar
mysql-connector-java-8.0.16.jar
At the moment I'm trying to add the connector with the following transforms:
"transforms": "route",
"transforms.route.type": "io.debezium.transforms.ContentBasedRouter",
"transforms.route.language": "jsr223.groovy",
"transforms.route.topic.expression": "value.after.name == 'x' ? 'topic1' : null",
"transforms.route.null.handling.mode": "drop"
I get HTTP status 500 and the following in the log:
javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:408)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:852)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:544)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1581)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1307)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:482)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1549)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1204)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:494)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at org.glassfish.jersey.servlet.internal.ResponseWriter.rethrow(ResponseWriter.java:254)
at org.glassfish.jersey.servlet.internal.ResponseWriter.failure(ResponseWriter.java:236)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:436)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:261)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
... 28 more
Caused by: java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:719)
at org.apache.kafka.connect.runtime.ConnectorConfig.enrich(ConnectorConfig.java:308)
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:302)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:745)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:742)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:342)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:282)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
... 1 more
Caused by: java.lang.ClassNotFoundException: io.debezium.DebeziumException
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 14 more
I tried adding the connector without the transform (everything else unchanged) and it worked.
What am I doing wrong?
OK, so I solved it by starting from a clean environment; I had probably been mixing too many versions (note that the listing above mixes debezium-scripting 1.7.2.Final with debezium-core 1.0.3.Final). I made sure I was using Debezium 1.8 for everything. Maybe the weirdest part: I had to put all the Debezium jars on the Connect container in /kafka/libs, because they didn't work in /kafka/connect/debezium-connector-mysql for some reason...

Kafka Connect task support for multi-format messages

We are exploring a Fluentd, Kafka, and Elasticsearch integration for our central logging platform.
At present we have an individual topic for each deployment; for example, if we have deployments customerA, customerB, and customerC, then we have a separate topic on the Kafka cluster for each of them.
As long as we push JSON logs to these topics, our Elasticsearch sink connector works fine, but any log format other than JSON tends to break the connector task with this error:
[2021-08-01 17:03:27,338] ERROR WorkerSinkTask{id=central-logs-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:475)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:239)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:366)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
... 13 more
Caused by: org.apache.kafka.common.errors.SerializationException: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'status': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (byte[])"status-task-central-logs-3"; line: 1, column: 8]
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'status': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (byte[])"status-task-central-logs-3"; line: 1, column: 8]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:722)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3560)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2655)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:857)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:754)
at com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4247)
at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2734)
at org.apache.kafka.connect.json.JsonDeserializer.deserialize(JsonDeserializer.java:64)
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:364)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:475)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:239)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
And our Elasticsearch sink connector config is:
{
  "name": "central-logs",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "type.name": "_doc",
    "transforms.AddSuffix.topic.format": "${topic}-${timestamp}",
    "tasks.max": "16",
    "max.retries": "30",
    "retry.backoff.ms": "10000",
    "topics.regex": "kafka-(.*)",
    "transforms.ConvertTimestamp.format": "yyyy-MM-dd HH:mm:ss",
    "transforms": "AddSuffix,InsertTimestamp,ConvertTimestamp",
    "transforms.ConvertTimestamp.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
    "key.ignore": "true",
    "schema.ignore": "true",
    "transforms.AddSuffix.timestamp.format": "yyyy.MM.dd",
    "key.converter.schemas.enable": "false",
    "transforms.AddSuffix.type": "org.apache.kafka.connect.transforms.TimestampRouter",
    "value.converter.schemas.enable": "false",
    "transforms.InsertTimestamp.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "name": "central-logs",
    "connection.url": "https://es",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "transforms.InsertTimestamp.timestamp.field": "#timestamp",
    "transforms.ConvertTimestamp.target.type": "string",
    "read.timeout.ms": "30000",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "transforms.ConvertTimestamp.field": "#timestamp"
  }
}
Is there a way to avoid such connector/task failures?
You have set the connector's value converter to expect JSON:
"value.converter":"org.apache.kafka.connect.json.JsonConverter"
So any message in these topics that is not valid JSON will fail when the JSON parser processes it. Is this parsing actually required? The Elasticsearch sink connector examples don't appear to set it.
Try removing this configuration.
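To see why the task dies, here is a minimal, self-contained sketch (not connector code; Jackson only, which is what JsonConverter delegates to per the JsonDeserializer.deserialize frame in your trace) reproducing the failure on the payload from the trace:
import com.fasterxml.jackson.databind.ObjectMapper;
import java.nio.charset.StandardCharsets;

public class JsonConverterFailureDemo {
    public static void main(String[] args) throws Exception {
        // The payload from the stack trace: a plain string, not a JSON document.
        byte[] payload = "status-task-central-logs-3".getBytes(StandardCharsets.UTF_8);
        // ObjectMapper.readTree throws JsonParseException: Unrecognized token 'status'
        new ObjectMapper().readTree(payload);
    }
}
If some topics legitimately carry non-JSON records, Kafka Connect's errors.tolerance and dead-letter-queue settings are the usual way to keep a single bad record from killing the whole task.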

RSocket channel error: "reactor.core.publisher.Operators.error - Operator called default onErrorDropped" with merged flux

I want to create an RSocket channel where the data sent from the server can be either a reaction to a client request or a push, so I use a flux merge.
It's referential data: a refresh can be requested by the client, and the server can also push updates.
So I have this on the server side:
@MessageMapping("update-stream")
Flux<DomainObject> addUpdatesListener(Flux<RefreshRequest> requests) {
Flux<DomainObject> pushFlux = Flux.from(this.flux)
.doOnError((e) -> log.error("Error on push flux : {}", e, e));
return requests
.map(this::getUpdates)
.flatMap(Flux::fromIterable)
.doOnError((e) -> log.error("Error on channel flux : {}", e, e))
.mergeWith(pushFlux)
.doOnError((e) -> log.error("Error on merged flux : {}", e, e));
}
It works, except that when I stop the client I get the following error:
06-07-2020 15:58:53.168 [reactor-http-nio-3] ERROR reactor.core.publisher.Operators.error - Operator called default onErrorDropped
java.util.concurrent.CancellationException: Disposed
at reactor.core.publisher.FluxProcessor.dispose(FluxProcessor.java:80)
at io.rsocket.core.RSocketResponder$3.hookOnCancel(RSocketResponder.java:513)
at reactor.core.publisher.BaseSubscriber.cancel(BaseSubscriber.java:230)
at java.base/java.lang.Iterable.forEach(Iterable.java:75)
at io.rsocket.core.RSocketResponder.cleanUpSendingSubscriptions(RSocketResponder.java:275)
at io.rsocket.core.RSocketResponder.cleanup(RSocketResponder.java:265)
at io.rsocket.core.RSocketResponder.tryTerminate(RSocketResponder.java:167)
at io.rsocket.core.RSocketResponder.tryTerminateOnConnectionClose(RSocketResponder.java:160)
at reactor.core.publisher.LambdaMonoSubscriber.onComplete(LambdaMonoSubscriber.java:132)
at reactor.core.publisher.MonoProcessor$NextInner.onComplete(MonoProcessor.java:518)
at reactor.core.publisher.MonoProcessor.onNext(MonoProcessor.java:308)
at reactor.core.publisher.MonoProcessor.onComplete(MonoProcessor.java:265)
at io.rsocket.internal.BaseDuplexConnection.dispose(BaseDuplexConnection.java:23)
at io.rsocket.transport.netty.TcpDuplexConnection.lambda$new$0(TcpDuplexConnection.java:60)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:570)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:549)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setSuccess0(DefaultPromise.java:604)
at io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
at io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:84)
at io.netty.channel.AbstractChannel$CloseFuture.setClosed(AbstractChannel.java:1158)
at io.netty.channel.AbstractChannel$AbstractUnsafe.doClose0(AbstractChannel.java:760)
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:736)
at io.netty.channel.AbstractChannel$AbstractUnsafe.close(AbstractChannel.java:607)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.closeOnRead(AbstractNioByteChannel.java:105)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:171)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:834)
If I don't do the merge, there is no error.
I have tried many different versions, but I can't find a way to have both the push and no error logged when the client quits.
What am I missing?
Thanks a lot.
The problem disappears when upgrading from Spring Boot 2.3.0.RELEASE to 2.3.1.RELEASE.

NoClassDefFoundError when connecting the TopLink Workbench to the database

I am using TopLink Mapping Workbench version 9.0.3.5. When I try to connect to the database, I get the following error:
Exception in thread "AWT-EventQueue-0" java.lang.NoClassDefFoundError: oracle/dms/instrument/ExecutionContextForJDBC
at oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:322)
at oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:151)
at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:32)
at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:608)
at java.sql.DriverManager.getConnection(DriverManager.java:525)
at java.sql.DriverManager.getConnection(DriverManager.java:171)
at oracle.toplink.workbench.model.db.BldrDatabase.login(Unknown Source)
at oracle.toplink.workbench.ui.BldrMainView.login(Unknown Source)
at oracle.toplink.workbench.ui.BldrActionManager$32.actionPerformed(Unknown Source)
at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1849)
at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2169)
at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:420)
at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:258)
at javax.swing.AbstractButton.doClick(AbstractButton.java:302)
at javax.swing.plaf.basic.BasicMenuItemUI.doClick(BasicMenuItemUI.java:1050)
at javax.swing.plaf.basic.BasicMenuItemUI$Handler.mouseReleased(BasicMenuItemUI.java:1091)
at java.awt.AWTEventMulticaster.mouseReleased(AWTEventMulticaster.java:231)
at java.awt.Component.processMouseEvent(Component.java:5517)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3129)
at java.awt.Component.processEvent(Component.java:5282)
at java.awt.Container.processEvent(Container.java:1966)
at java.awt.Component.dispatchEventImpl(Component.java:3984)
at java.awt.Container.dispatchEventImpl(Container.java:2024)
at java.awt.Component.dispatchEvent(Component.java:3819)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4212)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:3892)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:3822)
at java.awt.Container.dispatchEventImpl(Container.java:2010)
at java.awt.Window.dispatchEventImpl(Window.java:1791)
at java.awt.Component.dispatchEvent(Component.java:3819)
at java.awt.EventQueue.dispatchEvent(EventQueue.java:463)
at java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:242)
at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:163)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:157)
at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:149)
at java.awt.EventDispatchThread.run(EventDispatchThread.java:110)
I solved this by adding .\lib\dms.jar; to the classpath in workbench.cmd.
