Kafka Connect task support for multi-format messages - Elasticsearch

We are exploring Fluentd, Kafka, and Elasticsearch integration for our central logging platform.
At present we have an individual topic for each deployment. For example, if we have deployments customerA, customerB, and customerC, then we have a separate topic on the Kafka cluster for each of these deployments.
As long as we push JSON logs to these topics, our Elasticsearch sink connectors work fine, but any log format other than JSON tends to break the connector task with this error:
[2021-08-01 17:03:27,338] ERROR WorkerSinkTask{id=central-logs-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:206)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:475)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:239)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.DataException: Converting byte[] to Kafka Connect data failed due to serialization error:
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:366)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
... 13 more
Caused by: org.apache.kafka.common.errors.SerializationException: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'status': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (byte[])"status-task-central-logs-3"; line: 1, column: 8]
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'status': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false')
at [Source: (byte[])"status-task-central-logs-3"; line: 1, column: 8]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1840)
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:722)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._reportInvalidToken(UTF8StreamJsonParser.java:3560)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._handleUnexpectedValue(UTF8StreamJsonParser.java:2655)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser._nextTokenNotInObject(UTF8StreamJsonParser.java:857)
at com.fasterxml.jackson.core.json.UTF8StreamJsonParser.nextToken(UTF8StreamJsonParser.java:754)
at com.fasterxml.jackson.databind.ObjectMapper._readTreeAndClose(ObjectMapper.java:4247)
at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2734)
at org.apache.kafka.connect.json.JsonDeserializer.deserialize(JsonDeserializer.java:64)
at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:364)
at org.apache.kafka.connect.storage.Converter.toConnectData(Converter.java:87)
at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:156)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:190)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:132)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:495)
at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:475)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:325)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:229)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:189)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:239)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Our Elasticsearch sink connector config is:
{
  "name": "central-logs",
  "config": {
    "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
    "type.name": "_doc",
    "transforms.AddSuffix.topic.format": "${topic}-${timestamp}",
    "tasks.max": "16",
    "max.retries": "30",
    "retry.backoff.ms": "10000",
    "topics.regex": "kafka-(.*)",
    "transforms.ConvertTimestamp.format": "yyyy-MM-dd HH:mm:ss",
    "transforms": "AddSuffix,InsertTimestamp,ConvertTimestamp",
    "transforms.ConvertTimestamp.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
    "key.ignore": "true",
    "schema.ignore": "true",
    "transforms.AddSuffix.timestamp.format": "yyyy.MM.dd",
    "key.converter.schemas.enable": "false",
    "transforms.AddSuffix.type": "org.apache.kafka.connect.transforms.TimestampRouter",
    "value.converter.schemas.enable": "false",
    "transforms.InsertTimestamp.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "name": "central-logs",
    "connection.url": "https://es",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "transforms.InsertTimestamp.timestamp.field": "#timestamp",
    "transforms.ConvertTimestamp.target.type": "string",
    "read.timeout.ms": "30000",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "transforms.ConvertTimestamp.field": "#timestamp"
  }
}
Is there a way to avoid such connector/task failures?

You have set the connector's value converter to expect JSON:
"value.converter": "org.apache.kafka.connect.json.JsonConverter"
So any message in these topics that is not valid JSON will fail when the JSON parser processes it. Is this parsing actually required? The Elasticsearch sink connector examples do not appear to set it.
Try removing this configuration; the connector will then fall back to the worker's default converter.
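If these topics will genuinely carry mixed formats, another option is Kafka Connect's error-handling settings, which skip records the converter cannot parse instead of failing the whole task. A minimal sketch to add to the connector config, assuming a hypothetical dead letter queue topic named central-logs-dlq:
"errors.tolerance": "all",
"errors.log.enable": "true",
"errors.log.include.messages": "true",
"errors.deadletterqueue.topic.name": "central-logs-dlq",
"errors.deadletterqueue.topic.replication.factor": "1"
Records that fail conversion are then logged and routed to the dead letter queue topic for inspection, rather than killing the task.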

Related

Bulk Request failed in ElasticSinkConnector

I got the following error while creating the Elasticsearch sink connector.
CREATE SOURCE CONNECTOR testdemosinkconnector WITH(
"type.name"= '_doc',
"input.data.format"= 'AVRO',
"connector.class"= 'io.confluent.connect.elasticsearch.ElasticsearchSinkConnector',
"tasks.max"= '1',
"transforms"= 'Dealership',
"topics"= 'es.contact.model',
"transforms.Dealership.type"= 'io.confluent.connect.transforms.ExtractTopic$Value',
"transforms.Dealership.field"= 'indexTopicName',
"transforms.Dealership.skip.missing.or.null"= 'true',
"connection.url"= 'https://elasticsearchdemo.es.us-central1.gcp.cloud.es.io:9243',
"connection.username"= 'elastic',
"connection.password"= 'BUgBxOBg3dv4jp4Z3W7p4tHC',
"key.ignore"= 'true',
"value.converter"= 'io.confluent.connect.avro.AvroConverter',
"value.converter.schemas.enable"= 'true',
"value.converter.schema.registry.url"= 'http://localhost:8081',
"bulk.size.bytes"= '-1',
"behavior.on.null.values"= 'IGNORE',enter code here
"behavior.on.malformed.documents"= 'IGNORE',
"max.retries"= '5',
"retry.backoff.ms"= '5000'
);
The error is:
FAILED | org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:618)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:334)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:235)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:204)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:200)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:255)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: org.apache.kafka.connect.errors.ConnectException: Bulk request failed
at io.confluent.connect.elasticsearch.ElasticsearchClient$1.afterBulk(ElasticsearchClient.java:397)
at org.elasticsearch.action.bulk.BulkRequestHandler$1.onFailure(BulkRequestHandler.java:70)
at org.elasticsearch.action.ActionListener$5.onFailure(ActionListener.java:258)
at org.elasticsearch.action.bulk.Retry$RetryHandler.onFailure(Retry.java:126)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$1(ElasticsearchClient.java:174)
... 5 more
Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to execute bulk request due to 'java.io.IOException: Unable to parse response body for Response{requestLine=POST /_bulk?timeout=1m HTTP/1.1, host=https://elasticsearchdemo.es.us-central1.gcp.cloud.es.io:9243, response=HTTP/1.1 200 OK}' after 6 attempt(s)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:165)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:119)
at io.confluent.connect.elasticsearch.ElasticsearchClient.callWithRetries(ElasticsearchClient.java:425)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$1(ElasticsearchClient.java:168)
... 5 more
Caused by: java.io.IOException: Unable to parse response body for Response{requestLine=POST /_bulk?timeout=1m HTTP/1.1, host=https://elasticsearchdemo.es.us-central1.gcp.cloud.es.io:9243, response=HTTP/1.1 200 OK}
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1632)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1583)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1553)
at org.elasticsearch.client.RestHighLevelClient.bulk(RestHighLevelClient.java:533)
at io.confluent.connect.elasticsearch.ElasticsearchClient.lambda$null$0(ElasticsearchClient.java:170)
at io.confluent.connect.elasticsearch.RetryUtil.callWithRetries(RetryUtil.java:158)
... 8 more
Caused by: java.lang.NullPointerException
at java.base/java.util.Objects.requireNonNull(Objects.java:221)
at org.elasticsearch.action.DocWriteResponse.<init>(DocWriteResponse.java:127)
at org.elasticsearch.action.index.IndexResponse.<init>(IndexResponse.java:54)
at org.elasticsearch.action.index.IndexResponse.<init>(IndexResponse.java:39)
at org.elasticsearch.action.index.IndexResponse$Builder.build(IndexResponse.java:107)
at org.elasticsearch.action.index.IndexResponse$Builder.build(IndexResponse.java:104)
at org.elasticsearch.action.bulk.BulkItemResponse.fromXContent(BulkItemResponse.java:159)
at org.elasticsearch.action.bulk.BulkResponse.fromXContent(BulkResponse.java:196)
at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:1892)
at org.elasticsearch.client.RestHighLevelClient.lambda$performRequestAndParseEntity$8(RestHighLevelClient.java:1554)
at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1630)
... 13 more
Please help me to resolve this error.
Elasticsearch Sink Connector version: 11.1.10
Elasticsearch version: 8.2.2
Elasticsearch version 8 is not supported by Confluent Elasticsearch Sink Connector version 11.1.10, which is most likely why it can't parse the Elasticsearch response properly.
As of version 11.0.0, the connector uses the Elasticsearch High Level REST Client (version 7.0.1), which means only Elasticsearch 7.x is supported.
https://docs.confluent.io/kafka-connect-elasticsearch/current/overview.html
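If upgrading is an option, installing a connector release that documents Elasticsearch 8 support is the usual fix (check the compatibility matrix in the linked overview for the exact minimum version). A sketch using the Confluent Hub client on the Connect worker:
confluent-hub install confluentinc/kafka-connect-elasticsearch:latest
Otherwise, pointing connector version 11.1.10 at an Elasticsearch 7.x cluster avoids the response-parsing failure.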

Debezium content-based router setup unable to find DebeziumException

I'm trying to set up a Debezium source connector in Docker that uses content-based routing, and I'm following the official doc.
I have all the following dependencies in /kafka/connect/debezium-connector-mysql:
antlr4-runtime-4.7.2.jar
debezium-connector-mysql-1.0.3.Final.jar
debezium-core-1.0.3.Final.jar
debezium-ddl-parser-1.0.3.Final.jar
debezium-scripting-1.7.2.Final.jar
groovy-3.0.9.jar
groovy-json-3.0.9.jar
groovy-jsr223-3.0.9.jar
mysql-binlog-connector-java-0.19.1.jar
mysql-connector-java-8.0.16.jar
At the moment I'm trying to add the connector with the following transforms:
"transforms": "route",
"transforms.route.type": "io.debezium.transforms.ContentBasedRouter",
"transforms.route.language": "jsr223.groovy",
"transforms.route.topic.expression": "value.after.name == 'x' ? 'topic1' : null",
"transforms.route.null.handling.mode": "drop"
I get HTTP status 500 and this log:
javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:408)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:852)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:544)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1581)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1307)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:482)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1549)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1204)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:494)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at org.glassfish.jersey.servlet.internal.ResponseWriter.rethrow(ResponseWriter.java:254)
at org.glassfish.jersey.servlet.internal.ResponseWriter.failure(ResponseWriter.java:236)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:436)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:261)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
... 28 more
Caused by: java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:719)
at org.apache.kafka.connect.runtime.ConnectorConfig.enrich(ConnectorConfig.java:308)
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:302)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:745)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:742)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:342)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:282)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
... 1 more
Caused by: java.lang.ClassNotFoundException: io.debezium.DebeziumException
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 14 more
I tried adding the connector without the transform (everything else unchanged) and it worked.
What am I doing wrong?
OK, so I solved it by starting from a clean environment, because I had probably played with a mix of too many versions; note that the listing above mixes debezium-core-1.0.3.Final with debezium-scripting-1.7.2.Final, and the missing io.debezium.DebeziumException class ships in debezium-core, which the newer scripting jar expects. I made sure I'm using Debezium 1.8 for everything, and maybe the weirdest thing is that I had to put all the Debezium jars on the Connect container in /kafka/libs, because they didn't work in /kafka/connect/debezium-connector-mysql for some reason...
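For reference, a minimal sketch of a version-aligned plugin directory, assuming Debezium 1.8.0.Final throughout (exact transitive jars such as the ANTLR runtime vary by release, so unpacking the official connector and scripting archives is safer than hand-picking jars):
debezium-connector-mysql-1.8.0.Final.jar
debezium-core-1.8.0.Final.jar
debezium-ddl-parser-1.8.0.Final.jar
debezium-scripting-1.8.0.Final.jar
groovy-3.0.9.jar
groovy-json-3.0.9.jar
groovy-jsr223-3.0.9.jar
plus the MySQL driver and binlog-client jars shipped in the official connector archive.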

Apache Kafka JDBC Connector - SerializationException: Unknown magic byte

We are trying to write back the values from a topic to a postgres database using the Confluent JDBC Sink Connector.
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
connection.password=xxx
tasks.max=1
topics=topic_name
auto.evolve=true
connection.user=confluent_rw
auto.create=true
connection.url=jdbc:postgresql://x.x.x.x:5432/Datawarehouse
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
We can read the value in the console using:
kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic topic_name
The schema exists and the values are correctly deserialized by kafka-avro-console-consumer, since it gives no error, but the connector gives these errors:
{
  "name": "datawarehouse_sink",
  "connector": {
    "state": "RUNNING",
    "worker_id": "x.x.x.x:8083"
  },
  "tasks": [
    {
      "id": 0,
      "state": "FAILED",
      "worker_id": "x.x.x.x:8083",
      "trace": "org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:511)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:491)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:226)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:194)\n\tat org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)\n\tat org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)\nCaused by: org.apache.kafka.connect.errors.DataException: f_machinestate_sink\n\tat io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:103)\n\tat org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$0(WorkerSinkTask.java:511)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)\n\tat org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)\n\t... 13 more\nCaused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1\nCaused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!\n"
    }
  ],
  "type": "sink"
}
The final error is:
org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
The schema is registered in the schema registry.
Does the problem lie with the connector's configuration file?
The error org.apache.kafka.common.errors.SerializationException: Unknown magic byte! means that a message on the topic was not valid Avro and could not be deserialized. There are several possible reasons for this:
Some messages are Avro, but others are not. If this is the case, you can use the error handling capabilities in Kafka Connect to ignore the invalid messages, using config like this:
"errors.tolerance": "all",
"errors.log.enable":true,
"errors.log.include.messages":true
The value is Avro but the key isn't. If this is the case then use the appropriate key.converter.
More reading: https://www.confluent.io/blog/kafka-connect-deep-dive-converters-serialization-explained/
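If the keys turn out to be plain strings rather than Avro (the second case above), a sketch of the corresponding converter override, assuming string keys:
key.converter=org.apache.kafka.connect.storage.StringConverter
With StringConverter the key bytes are passed through as text, so no Schema Registry lookup (and no magic-byte check) happens on the key.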
This means that the deserializer checked the first bytes of the message and found something unexpected. More about message packing with the serializer here; check the 'wire format' section. Just a guess that the leading (magic) byte in the message is != 0.
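For reference, the Schema Registry wire format that the deserializer expects looks like this, which is why a message that does not start with a zero byte fails immediately:
byte 0     : magic byte, always 0
bytes 1-4  : schema ID, 4-byte big-endian integer
bytes 5... : Avro-encoded payload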

Artifactory JFrog Jenkins failed uploading artifacts by spec

Hi, I am trying to upload multiple files into an Artifactory repo using my Jenkinsfile. Please find the snippet and error pasted below.
Artifactory plugin version 2.16.2 and Jenkins 2.138.2.
Snippet from the Jenkinsfile and message from the logs:
def uploadSpec = """{
  "files": [
    {
      "pattern": "folder/test.py",
      "target": "reponame-local/folder/"
    }
  ]
}"""
def buildInfo1 = server.upload spec: uploadSpec
I get the error below (all logs pasted here): Invalid object ID
SEVERE: Failed to execute command Pipe.Flush(-1) (channel JNLP4-connect connection from 10.37.88.189/10.37.88.189:41732)
java.util.concurrent.ExecutionException: Invalid object ID -1 iota=25377
at hudson.remoting.ExportTable.diagnoseInvalidObjectId(ExportTable.java:478)
at hudson.remoting.ExportTable.get(ExportTable.java:397)
at hudson.remoting.Channel.getExportedObject(Channel.java:780)
at hudson.remoting.ProxyOutputStream$Flush.execute(ProxyOutputStream.java:307)
at hudson.remoting.Channel$1.handle(Channel.java:565)
at hudson.remoting.AbstractByteBufferCommandTransport.processCommand(AbstractByteBufferCommandTransport.java:203)
at hudson.remoting.AbstractByteBufferCommandTransport.receive(AbstractByteBufferCommandTransport.java:189)
at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onRead(ChannelApplicationLayer.java:187)
at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecv(ApplicationLayer.java:207)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:669)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processRead(SSLEngineFilterLayer.java:369)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecv(SSLEngineFilterLayer.java:117)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:669)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRead(NetworkLayer.java:136)
at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:160)
at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Object appears to be deallocated at lease before Sun Nov 18 20:34:02 MST 2018
at hudson.remoting.ExportTable.diagnoseInvalidObjectId(ExportTable.java:474)
java.lang.Exception: Error occurred during operation, please refer to logs for more information.
at org.jfrog.build.extractor.producerConsumer.ProducerConsumerExecutor.start(ProducerConsumerExecutor.java:84)
at org.jfrog.build.extractor.clientConfiguration.util.spec.SpecsHelper.uploadArtifactsBySpec(SpecsHelper.java:71)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:190)
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from xx.xx.xxx.xx/xx.xx.xxx.xx:41732
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at hudson.FilePath.act(FilePath.java:1071)
at hudson.FilePath.act(FilePath.java:1060)
at org.jfrog.hudson.pipeline.executors.GenericUploadExecutor.execution(GenericUploadExecutor.java:52)
at org.jfrog.hudson.pipeline.steps.UploadStep$Execution.run(UploadStep.java:65)
at org.jfrog.hudson.pipeline.steps.UploadStep$Execution.run(UploadStep.java:46)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
at hudson.security.ACL.impersonate(ACL.java:290)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Caused: java.lang.RuntimeException: Failed uploading artifacts by spec
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:194)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:131)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3085)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
at java.lang.Thread.run(Thread.java:748)
This seems to be an issue created on the infra side. Please try with a user having write access to the target repository; issues like this can only be resolved by the infra team.
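A minimal sketch of pointing the pipeline at credentials that do have deploy permission, assuming a hypothetical Jenkins credentials entry artifactory-deployer (the Artifactory plugin's Artifactory.newServer step accepts a credentialsId):
def server = Artifactory.newServer url: 'https://artifactory.example.com/artifactory', credentialsId: 'artifactory-deployer'
def buildInfo1 = server.upload spec: uploadSpec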

How do I troubleshoot "Error injecting constructor" when adding an amazon S3 repository to ElasticSearch?

I've been following the instructions here
and I've completed step 1, the installation of the plugin. I'm now trying to add a snapshot repository by executing the following command in Sense:
PUT /_snapshot/ElasticSearch
{
  "type": "s3",
  "settings": {
    "bucket": "SomeBucket",
    "base_path": "ElasticSearch",
    "secret_key": "xxx",
    "access_key": "xxx"
  }
}
From this I get the following error:
{
  "error": "RepositoryException[[ElasticSearch] failed to create repository]; nested: CreationException[Guice creation errors:\n\n1) Error injecting constructor, java.lang.NoClassDefFoundError: org/elasticsearch/common/blobstore/ImmutableBlobContainer\n at org.elasticsearch.repositories.s3.S3Repository.<init>(Unknown Source)\n while locating org.elasticsearch.repositories.s3.S3Repository\n while locating org.elasticsearch.repositories.Repository\n\n1 error]; nested: NoClassDefFoundError[org/elasticsearch/common/blobstore/ImmutableBlobContainer]; nested: ClassNotFoundException[org.elasticsearch.common.blobstore.ImmutableBlobContainer]; ",
  "status": 500
}
Any ideas on how to start troubleshooting this? All the information I get in the logs is:
[2015-07-13 16:20:40,519][WARN ][repositories ] [app1050-ela-dacq] failed to create repository [s3][ElasticSearch -d]
org.elasticsearch.common.inject.CreationException: Guice creation errors:
1) Error injecting constructor, java.lang.NoClassDefFoundError: org/elasticsearch/common/blobstore/ImmutableBlobContainer
at org.elasticsearch.repositories.s3.S3Repository.<init>(Unknown Source)
while locating org.elasticsearch.repositories.s3.S3Repository
while locating org.elasticsearch.repositories.Repository
1 error
at org.elasticsearch.common.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:344)
at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:178)
at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:131)
at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:69)
at org.elasticsearch.repositories.RepositoriesService.createRepositoryHolder(RepositoriesService.java:409)
at org.elasticsearch.repositories.RepositoriesService.registerRepository(RepositoriesService.java:373)
at org.elasticsearch.repositories.RepositoriesService.access$100(RepositoriesService.java:57)
at org.elasticsearch.repositories.RepositoriesService$1.execute(RepositoriesService.java:112)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: org/elasticsearch/common/blobstore/ImmutableBlobContainer
at org.elasticsearch.repositories.s3.S3Repository.<init>(S3Repository.java:124)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:54)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:200)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:830)
at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
Note that the version of Elasticsearch I'm using is 1.6.0, so the plugin version should be 2.6.0. The output of the command 'java -version' is the following:
openjdk version "1.8.0_45"
OpenJDK Runtime Environment (build 1.8.0_45-b13)
OpenJDK 64-Bit Server VM (build 25.45-b02, mixed mode)
It turns out that the version of the plugin that had been installed was actually 2.2.0 instead of 2.6.0.
The answer tlrx gives here is the clue.
It's also good to remember to restart Elasticsearch after you've updated the plugin.
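On Elasticsearch 1.x, a sketch of checking and reinstalling the cloud-aws plugin at the matching version (run from the Elasticsearch home directory on every node, then restart each node):
bin/plugin --list
bin/plugin --remove cloud-aws
bin/plugin --install elasticsearch/elasticsearch-cloud-aws/2.6.0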
