How to use a CA cert in the Elasticsearch sink connector configuration for Kafka Connect (Confluent Platform)?

I am currently trying to configure an Elasticsearch sink connector on a Kafka Connect cluster in distributed mode. The cluster is deployed in Kubernetes using the Helm charts provided by Confluent.
Here is the properties JSON file (I use a username/password to connect to Elastic; I removed them from the JSON for security purposes):
"name": "elasticsearch-sink-connector",
"config": {
"connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"tasks.max": "1",
"key.converter": "org.apache.kafka.connect.json.JsonConverter",
"value.converter": "org.apache.kafka.connect.json.JsonConverter",
"transforms": "dropPrefix",
"config.action.reload": "restart",
"errors.tolerance": "all",
"errors.log.enable": "true",
"errors.log.include.messages": "true",
"topics": "_audit_log",
"errors.deadletterqueue.topic.name": "_audit_log_dead_letter_queue",
"errors.deadletterqueue.topic.replication.factor": "1",
"transforms.dropPrefix.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.dropPrefix.regex": ".*",
"transforms.dropPrefix.replacement": "audit_log",
"connection.url": "https://path_to_elastic_cloud:9200",
"auto.create.indices.at.start": "false",
"type.name": "",
"key.ignore": "true",
"schema.ignore": "true",
"drop.invalid.message": "true",
"elastic.ca.cert.path": "/opt/xyz/elastic/certs/tls.crt",
"key.converter.schemas.enable": "false",
"value.converter.schemas.enable": "false"
}
}
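For reference, I register this connector against the Connect REST API roughly like this (the worker host, the default port 8083, and the file name below are placeholders, not the real values):
# Sketch: submit the name/config JSON above to the Connect REST API.
curl -X POST -H "Content-Type: application/json" \
  --data @elasticsearch-sink-connector.json \
  http://kafka-connect:8083/connectors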
But here are the error logs from the Connect cluster:
[INFO] 2020-11-04 19:13:53,384 [task-thread-Brians-0] io.searchbox.client.AbstractJestClient setServers - Setting server pool to a list of 1 servers: [https://path_to_elastic_clound:9200]
[INFO] 2020-11-04 19:13:53,385 [task-thread-Brians-0] io.searchbox.client.JestClientFactory getConnectionManager - Using multi thread/connection supporting pooling connection manager
[INFO] 2020-11-04 19:13:53,386 [task-thread-Brians-0] io.searchbox.client.JestClientFactory getObject - Using default GSON instance
[INFO] 2020-11-04 19:13:53,386 [task-thread-Brians-0] io.searchbox.client.JestClientFactory getObject - Node Discovery disabled...
[INFO] 2020-11-04 19:13:53,386 [task-thread-Brians-0] io.searchbox.client.JestClientFactory getObject - Idle connection reaping enabled...
[INFO] 2020-11-04 19:13:53,387 [task-thread-Brians-0] io.searchbox.client.JestClientFactory getObject - Authentication cache set for preemptive authentication
[ERROR] 2020-11-04 19:13:53,410 [task-thread-Brians-0] org.apache.kafka.connect.runtime.WorkerTask doRun - WorkerSinkTask{id=Brians-0} Task threw an uncaught and unrecoverable exception
org.apache.kafka.connect.errors.ConnectException: Couldn't start ElasticsearchSinkTask due to connection error:
at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:168)
at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:152)
at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:74)
at io.confluent.connect.elasticsearch.ElasticsearchSinkTask.start(ElasticsearchSinkTask.java:48)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:302)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:235)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:131)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:326)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:269)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1339)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.onConsumeCertificate(CertificateMessage.java:1214)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.consume(CertificateMessage.java:1157)
at java.base/sun.security.ssl.SSLHandshake.consume(SSLHandshake.java:392)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:444)
at java.base/sun.security.ssl.HandshakeContext.dispatch(HandshakeContext.java:422)
at java.base/sun.security.ssl.TransportContext.dispatch(TransportContext.java:183)
at java.base/sun.security.ssl.SSLTransport.decode(SSLTransport.java:171)
at java.base/sun.security.ssl.SSLSocketImpl.decode(SSLSocketImpl.java:1403)
at java.base/sun.security.ssl.SSLSocketImpl.readHandshakeRecord(SSLSocketImpl.java:1309)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:440)
at java.base/sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:411)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:396)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
at org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:359)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at io.searchbox.client.http.JestHttpClient.executeRequest(JestHttpClient.java:133)
at io.searchbox.client.http.JestHttpClient.execute(JestHttpClient.java:70)
at io.searchbox.client.http.JestHttpClient.execute(JestHttpClient.java:63)
at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.getServerVersion(JestElasticsearchClient.java:316)
at io.confluent.connect.elasticsearch.jest.JestElasticsearchClient.<init>(JestElasticsearchClient.java:161)
... 12 more
Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at java.base/sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:439)
at java.base/sun.security.validator.PKIXValidator.engineValidate(PKIXValidator.java:306)
at java.base/sun.security.validator.Validator.validate(Validator.java:264)
at java.base/sun.security.ssl.X509TrustManagerImpl.validate(X509TrustManagerImpl.java:313)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkTrusted(X509TrustManagerImpl.java:222)
at java.base/sun.security.ssl.X509TrustManagerImpl.checkServerTrusted(X509TrustManagerImpl.java:129)
at java.base/sun.security.ssl.CertificateMessage$T13CertificateConsumer.checkServerCerts(CertificateMessage.java:1323)
... 39 more
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at java.base/sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at java.base/sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at java.base/java.security.cert.CertPathBuilder.build(CertPathBuilder.java:297)
at java.base/sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:434)
Any pointers are deeply appreciated. Thank you.

This issue has been resolved. Earlier we weren't providing any keystore/truststore settings. I was finally able to add the following to the config (not real values) after following the instructions in this ticket on GitHub. Very helpful.
https://github.com/confluentinc/kafka-connect-elasticsearch/issues/432
"elastic.security.protocol": "SSL",
"elastic.https.ssl.keystore.location": "/mnt/secrets/elastic/keystore.jks",
"elastic.https.ssl.keystore.password": "changeit",
"elastic.https.ssl.truststore.location": "/mnt/secrets//elastic/truststore.jks",
"elastic.https.ssl.truststore.password": "changeit"

Related

Debezium content-based router setup: unable to find DebeziumException

I'm trying to set up a Debezium source connector in Docker which uses content-based routing, and I'm following the official doc.
I have all the following dependencies in /kafka/connect/debezium-connector-mysql:
antlr4-runtime-4.7.2.jar groovy-3.0.9.jar
debezium-connector-mysql-1.0.3.Final.jar groovy-json-3.0.9.jar
debezium-core-1.0.3.Final.jar groovy-jsr223-3.0.9.jar
debezium-ddl-parser-1.0.3.Final.jar mysql-binlog-connector-java-0.19.1.jar
debezium-scripting-1.7.2.Final.jar mysql-connector-java-8.0.16.jar
At the moment I'm trying to add the connector with the following transforms:
"transforms": "route",
"transforms.route.type": "io.debezium.transforms.ContentBasedRouter",
"transforms.route.language": "jsr223.groovy",
"transforms.route.topic.expression": "value.after.name == 'x' ? 'topic1' : null",
"transforms.route.null.handling.mode": "drop"
I get HTTP status 500 and the following log:
javax.servlet.ServletException: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:408)
at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:346)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:365)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:318)
at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:205)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:852)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:544)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1581)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1307)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:482)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1549)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1204)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
at org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:173)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
at org.eclipse.jetty.server.Server.handle(Server.java:494)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:374)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:268)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:782)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:918)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.glassfish.jersey.server.ContainerException: java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at org.glassfish.jersey.servlet.internal.ResponseWriter.rethrow(ResponseWriter.java:254)
at org.glassfish.jersey.servlet.internal.ResponseWriter.failure(ResponseWriter.java:236)
at org.glassfish.jersey.server.ServerRuntime$Responder.process(ServerRuntime.java:436)
at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:261)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248)
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244)
at org.glassfish.jersey.internal.Errors.process(Errors.java:292)
at org.glassfish.jersey.internal.Errors.process(Errors.java:274)
at org.glassfish.jersey.internal.Errors.process(Errors.java:244)
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265)
at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232)
at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679)
at org.glassfish.jersey.servlet.WebComponent.serviceImpl(WebComponent.java:392)
... 28 more
Caused by: java.lang.NoClassDefFoundError: io/debezium/DebeziumException
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at org.apache.kafka.common.config.ConfigDef.parseType(ConfigDef.java:719)
at org.apache.kafka.connect.runtime.ConnectorConfig.enrich(ConnectorConfig.java:308)
at org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:302)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:745)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder$6.call(DistributedHerder.java:742)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:342)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:282)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
... 1 more
Caused by: java.lang.ClassNotFoundException: io.debezium.DebeziumException
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
at org.apache.kafka.connect.runtime.isolation.PluginClassLoader.loadClass(PluginClassLoader.java:104)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 14 more
I tried adding a connector without the transform (everything else unchanged) and it worked.
What am I doing wrong?
OK, so I solved it by starting from a clean environment, because I had probably been mixing too many versions. I made sure I'm using Debezium 1.8 for everything, and maybe the weirdest thing is that I had to put all the Debezium jars on the Connect container in /kafka/libs, because for some reason it didn't work in /kafka/connect/debezium-connector-mysql.
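As a rough illustration of that last step (the container name and jar directory are assumptions, not taken from the original setup):
# Sketch: copy the Debezium and Groovy jars into /kafka/libs of the running
# Connect container, then restart it; "connect" is a placeholder container name.
docker cp debezium-connector-mysql/. connect:/kafka/libs/
docker restart connect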

Camel aws-s3 Source Connector Error - How should the config be changed?

I am working on defining a Camel S3 source connector with our Confluent (5.5.1) installation. After creating the connector and checking that its status is "RUNNING", I upload a file to my S3 bucket. If I then do an ls on the bucket, it is empty, which indicates the file was processed and deleted. But I do not see messages in the topic. I am basically following this example with a simple four-line file, but on a Confluent cluster instead of standalone Kafka.
This is my configuration:
{
"name": "CamelAWSS3SourceConnector",
"connector.class": "org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector",
"bootstrap.servers": "broker1-dev:9092,broker2-dev:9092,broker3-dev:9092",
"sasl.jaas.config": "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"user\" password=\"password\";",
"security.protocol": "SASL_SSL",
"ssl.truststore.location": "/config/client.truststore.jks",
"ssl.truststore.password": "password",
"ssl.keystore.location": "/config/client.keystore.jks",
"ssl.keystore.password": "password",
"ssl.key.password": "password",
"errors.log.enable": "true",
"errors.log.include.messages": "true",
"errors.tolerance": "all",
"offset.flush.timeout.ms": "60000",
"offset.flush.interval.ms": "10000",
"max.request.size": "10485760",
"flush.size": "1",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter",
"camel.source.maxPollDuration": "10000",
"topics": "TEST-CAMEL-S3-SOURCE-POC",
"camel.source.path.bucketNameOrArn": "arn:aws:s3:::my-bucket",
"camel.component.aws-s3.region": "US_EAST_1",
"tasks.max": "1",
"camel.source.endpoint.useIAMCredentials": "true",
"camel.source.endpoint.autocloseBody": "true"
}
And I see these errors in the logs
[2020-12-23 09:05:01,876] ERROR WorkerSourceTask{id=CamelAWSS3SourceConnector-0} Failed to flush, timed out while waiting for producer to flush outstanding 1 messages (org.apache.kafka.connect.runtime.WorkerSourceTask:448)
[2020-12-23 09:05:01,876] ERROR WorkerSourceTask{id=CamelAWSS3SourceConnector-0} Failed to commit offsets (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:116)
[2020-12-23 09:20:58,685] DEBUG [Worker clientId=connect-1, groupId=connect-cluster] Received successful Heartbeat response (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:1045)
[2020-12-23 09:20:58,688] DEBUG WorkerSourceTask{id=CamelAWSS3SourceConnector-0} Committing offsets (org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter:111)
[2020-12-23 09:20:58,688] INFO WorkerSourceTask{id=CamelAWSS3SourceConnector-0} Committing offsets (org.apache.kafka.connect.runtime.WorkerSourceTask:426)
[2020-12-23 09:20:58,688] INFO WorkerSourceTask{id=CamelAWSS3SourceConnector-0} flushing 1 outstanding messages for offset commit (org.apache.kafka.connect.runtime.WorkerSourceTask:443)
And if I make a curl request for the connector's status, I get this error in the trace:
trace: org.apache.kafka.connect.errors.ConnectException: OffsetStorageWriter is already flushing
at org.apache.kafka.connect.storage.OffsetStorageWriter.beginFlush(OffsetStorageWriter.java:111)
at org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:438)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:257)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
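For reference, the status check mentioned above is just the standard Connect REST call; a minimal sketch, with the worker host and port as placeholders:
# Sketch: query the connector status from the Connect REST API.
curl -s http://localhost:8083/connectors/CamelAWSS3SourceConnector/status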
I saw the solution below in a couple of links, but that also didn't help. It suggested adding these keys to the config:
"offset.flush.timeout.ms": "60000",
"offset.flush.interval.ms": "10000",
"max.request.size": "10485760",
Thank you
UPDATE
I cut the config down to a minimal version, but I still get the same error:
{
"connector.class": "org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter",
"camel.source.maxPollDuration": "10000",
"topics": "TEST-S3-SOURCE-MINIMAL-POC",
"camel.source.path.bucketNameOrArn": "pruvpcaws003-np-use1-push-json-poc",
"camel.component.aws-s3.region": "US_EAST_1",
"tasks.max": "1",
"camel.source.endpoint.useIAMCredentials": "true",
"camel.source.endpoint.autocloseBody": "true"
}
I still get the same error:
trace: org.apache.kafka.connect.errors.ConnectException: OffsetStorageWriter is already flushing
at org.apache.kafka.connect.storage.OffsetStorageWriter.beginFlush(OffsetStorageWriter.java:111)
at org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:438)
at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:257)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Not sure where else I should look to find the root cause.

How to configure an HdfsSinkConnector with Kafka Connect?

I'm trying to set up an HdfsSinkConnector. This is my worker.properties config:
bootstrap.servers=kafkacluster01.corp:9092
group.id=nycd-og-kafkacluster
config.storage.topic=hive_conn_conf
offset.storage.topic=hive_conn_offs
status.storage.topic=hive_conn_stat
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://my-schemaregistry.co:8081
schema.registry.url=http://my-schemaregistry.co:8081
hive.integration=true
hive.metastore.uris=dev-hive-metastore
schema.compatibility=BACKWARD
value.converter.schemas.enable=true
logs.dir = /logs
topics.dir = /topics
plugin.path=/usr/share/java
And this is the POST request that I'm calling to set up the connector:
curl -X POST localhost:9092/connectors -H "Content-Type: application/json" -d '{
"name":"hdfs-hive_sink_con_dom16",
"config":{
"connector.class": "io.confluent.connect.hdfs.HdfsSinkConnector",
"topics": "dom_topic",
"hdfs.url": "hdfs://hadoop-sql-dev:10000",
"flush.size": "3",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "io.confluent.connect.avro.AvroConverter",
"value.converter.schema.registry.url":"http://my-schemaregistry.co:8081"
}
}'
The topic dom_topic already exists (it's Avro), but I get the following error from my worker:
INFO Couldn't start HdfsSinkConnector: (io.confluent.connect.hdfs.HdfsSinkTask:72)
org.apache.kafka.connect.errors.ConnectException: java.io.IOException:
Failed on local exception: com.google.protobuf.InvalidProtocolBufferException:
Protocol message end-group tag did not match expected tag.;
Host Details : local host is: "319dc5d70884/172.17.0.2"; destination host is: "hadoop-sql-dev":10000;
at io.confluent.connect.hdfs.DataWriter.<init>(DataWriter.java:202)
at io.confluent.connect.hdfs.HdfsSinkTask.start(HdfsSinkTask.java:64)
at org.apache.kafka.connect.runtime.WorkerSinkTask.initializeAndStart(WorkerSinkTask.java:207)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:139)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
The hdfs.url I've taken from Hive is jdbc:hive2://hadoop-sql-dev:10000.
If I change the port to, say, 9092, I get:
INFO Retrying connect to server: hadoop-sql-dev/xxx.xx.x.xx:9092. Already tried 0 time(s); maxRetries=45 (org.apache.hadoop.ipc.Client:837)
I'm running this all in Docker, and my Dockerfile is very simple:
#FROM coinsmith/cp-kafka-connect-hdfs
FROM confluentinc/cp-kafka-connect:5.3.1
COPY confluentinc-kafka-connect-hdfs-5.3.1 /usr/share/java/kafka-connect-hdfs
COPY worker.properties worker.properties
# start
ENTRYPOINT ["connect-distributed", "worker.properties"]
Any help would be appreciated.

Artifactory JFrog Jenkins failed uploading artifacts by spec

Hi, I am trying to upload multiple files into an Artifactory repo using my Jenkinsfile. Please find the snippet and error pasted below.
Artifactory plugin version 2.16.2 and Jenkins 2.138.2.
Snippet from the Jenkinsfile and message from the logs:
def uploadSpec = """{
"files": [
{
"pattern": "folder/test.py",
"target": "reponame-local/folder/"
}
]
}"""
def buildInfo1 = server.upload spec: uploadSpec
I get the error below (all logs pasted here): Invalid object ID
SEVERE: Failed to execute command Pipe.Flush(-1) (channel JNLP4-connect connection from 10.37.88.189/10.37.88.189:41732)
java.util.concurrent.ExecutionException: Invalid object ID -1 iota=25377
at hudson.remoting.ExportTable.diagnoseInvalidObjectId(ExportTable.java:478)
at hudson.remoting.ExportTable.get(ExportTable.java:397)
at hudson.remoting.Channel.getExportedObject(Channel.java:780)
at hudson.remoting.ProxyOutputStream$Flush.execute(ProxyOutputStream.java:307)
at hudson.remoting.Channel$1.handle(Channel.java:565)
at hudson.remoting.AbstractByteBufferCommandTransport.processCommand(AbstractByteBufferCommandTransport.java:203)
at hudson.remoting.AbstractByteBufferCommandTransport.receive(AbstractByteBufferCommandTransport.java:189)
at org.jenkinsci.remoting.protocol.impl.ChannelApplicationLayer.onRead(ChannelApplicationLayer.java:187)
at org.jenkinsci.remoting.protocol.ApplicationLayer.onRecv(ApplicationLayer.java:207)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:669)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.processRead(SSLEngineFilterLayer.java:369)
at org.jenkinsci.remoting.protocol.impl.SSLEngineFilterLayer.onRecv(SSLEngineFilterLayer.java:117)
at org.jenkinsci.remoting.protocol.ProtocolStack$Ptr.onRecv(ProtocolStack.java:669)
at org.jenkinsci.remoting.protocol.NetworkLayer.onRead(NetworkLayer.java:136)
at org.jenkinsci.remoting.protocol.impl.NIONetworkLayer.ready(NIONetworkLayer.java:160)
at org.jenkinsci.remoting.protocol.IOHub$OnReady.run(IOHub.java:795)
at jenkins.util.ContextResettingExecutorService$1.run(ContextResettingExecutorService.java:28)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:59)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Object appears to be deallocated at lease before Sun Nov 18 20:34:02 MST 2018
at hudson.remoting.ExportTable.diagnoseInvalidObjectId(ExportTable.java:474)
java.lang.Exception: Error occurred during operation, please refer to logs for more information.
at org.jfrog.build.extractor.producerConsumer.ProducerConsumerExecutor.start(ProducerConsumerExecutor.java:84)
at org.jfrog.build.extractor.clientConfiguration.util.spec.SpecsHelper.uploadArtifactsBySpec(SpecsHelper.java:71)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:190)
Also: hudson.remoting.Channel$CallSiteStackTrace: Remote call to JNLP4-connect connection from xx.xx.xxx.xx/xx.xx.xxx.xx:41732
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at hudson.FilePath.act(FilePath.java:1071)
at hudson.FilePath.act(FilePath.java:1060)
at org.jfrog.hudson.pipeline.executors.GenericUploadExecutor.execution(GenericUploadExecutor.java:52)
at org.jfrog.hudson.pipeline.steps.UploadStep$Execution.run(UploadStep.java:65)
at org.jfrog.hudson.pipeline.steps.UploadStep$Execution.run(UploadStep.java:46)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
at hudson.security.ACL.impersonate(ACL.java:290)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
Caused: java.lang.RuntimeException: Failed uploading artifacts by spec
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:194)
at org.jfrog.hudson.generic.GenericArtifactsDeployer$FilesDeployerCallable.invoke(GenericArtifactsDeployer.java:131)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:3085)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at hudson.remoting.Engine$1.lambda$newThread$0(Engine.java:93)
at java.lang.Thread.run(Thread.java:748)
This seems to be an issue created on the infrastructure side. Please try with a user that has write access to the repository; this can only be resolved by the infra team.
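One way to sanity-check the deploy permission outside Jenkins is a direct upload with curl; the URL, credentials, and target path below are placeholders:
# Sketch: deploy a single file straight to Artifactory to verify the user
# actually has write (deploy) permission on the repository.
curl -u myuser:mypassword -T folder/test.py \
  "https://artifactory.example.com/artifactory/reponame-local/folder/test.py"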

How do I troubleshoot "Error injecting constructor" when adding an amazon S3 repository to ElasticSearch?

I've been following the instructions here, and I've completed step 1, the installation of the plugin. I'm now trying to add a snapshot repository by executing the following command in Sense:
PUT /_snapshot/ElasticSearch
{
"type": "s3",
"settings": {
"bucket": "SomeBucket",
"base_path" : "ElasticSearch",
"secret_key": "xxx",
"acess_key" : "xxx"
}
}
From this I get the following error:
{
"error": "RepositoryException[[ElasticSearch] failed to create repository]; nested: CreationException[Guice creation errors:\n\n1) Error injecting constructor, java.lang.NoClassDefFoundError: org/elasticsearch/common/blobstore/ImmutableBlobContainer\n at org.elasticsearch.repositories.s3.S3Repository.<init>(Unknown Source)\n while locating org.elasticsearch.repositories.s3.S3Repository\n while locating org.elasticsearch.repositories.Repository\n\n1 error]; nested: NoClassDefFoundError[org/elasticsearch/common/blobstore/ImmutableBlobContainer]; nested: ClassNotFoundException[org.elasticsearch.common.blobstore.ImmutableBlobContainer]; ",
"status": 500
}
Any ideas on how to start troubleshooting this? All the information I get in the logs is:
[2015-07-13 16:20:40,519][WARN ][repositories ] [app1050-ela-dacq] failed to create repository [s3][ElasticSearch -d]
org.elasticsearch.common.inject.CreationException: Guice creation errors:
1) Error injecting constructor, java.lang.NoClassDefFoundError: org/elasticsearch/common/blobstore/ImmutableBlobContainer
at org.elasticsearch.repositories.s3.S3Repository.<init>(Unknown Source)
while locating org.elasticsearch.repositories.s3.S3Repository
while locating org.elasticsearch.repositories.Repository
1 error
at org.elasticsearch.common.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:344)
at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:178)
at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
at org.elasticsearch.common.inject.InjectorImpl.createChildInjector(InjectorImpl.java:131)
at org.elasticsearch.common.inject.ModulesBuilder.createChildInjector(ModulesBuilder.java:69)
at org.elasticsearch.repositories.RepositoriesService.createRepositoryHolder(RepositoriesService.java:409)
at org.elasticsearch.repositories.RepositoriesService.registerRepository(RepositoriesService.java:373)
at org.elasticsearch.repositories.RepositoriesService.access$100(RepositoriesService.java:57)
at org.elasticsearch.repositories.RepositoriesService$1.execute(RepositoriesService.java:112)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:188)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:158)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: org/elasticsearch/common/blobstore/ImmutableBlobContainer
at org.elasticsearch.repositories.s3.S3Repository.<init>(S3Repository.java:124)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:54)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:200)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:830)
at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
Note that the version of Elasticsearch I'm using is 1.6.0, so the plugin version is 2.6.0. The output of the command 'java -version' is the following:
openjdk version "1.8.0_45"
OpenJDK Runtime Environment (build 1.8.0_45-b13)
OpenJDK 64-Bit Server VM (build 25.45-b02, mixed mode)
It turns out that the version of the plugin that had been installed was actually 2.2.0 instead of 2.6.0.
The answer tlrx gives here is the clue.
It's also good to remember to restart Elasticsearch after you've updated the plugin.
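If it helps, on Elasticsearch 1.x the plugin can be swapped out roughly like this (a sketch using the 1.x plugin manager; the exact coordinates should be double-checked against the cloud-aws docs):
# Sketch: remove the stale cloud-aws plugin and install the version matching ES 1.6,
# then restart Elasticsearch.
bin/plugin --remove cloud-aws
bin/plugin --install elasticsearch/elasticsearch-cloud-aws/2.6.0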
