RocksDB exception in Kafka Streams with kafka_2.13-3.2.0 running on Windows

I have kafka_2.13-3.2.0 running on a Windows machine. I am trying a stream join operation and getting the following error. I can see the same issue was fixed in version 1.0.1 as per https://issues.apache.org/jira/browse/KAFKA-6162, but I am still getting this error with kafka_2.13-3.2.0.
Error Logs:
Caused by: org.rocksdb.RocksDBException: Failed to create dir: D:\tmp\kafka-streams\join_driver_application\1_0\KSTREAM-JOINTHIS-0000000014-store\KSTREAM-JOINTHIS-0000000014-store:1661385600000: Invalid argument
at org.rocksdb.RocksDB.open(Native Method)
at org.rocksdb.RocksDB.open(RocksDB.java:231)
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:197)
... 23 more
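For reference, the failing directory name ends in :1661385600000, and Windows does not allow the ":" character in file or directory names. Below is a minimal sketch of the kind of windowed join that creates these KSTREAM-JOINTHIS-…-store RocksDB segments; the topic names, serdes, bootstrap server, and the explicit state directory are placeholders, not taken from my actual application:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.StreamJoined;

import java.time.Duration;
import java.util.Properties;

public class JoinDriverApplication {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "join_driver_application");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // The error shows the state directory resolving to D:\tmp\kafka-streams;
        // it is set explicitly here only to make that location visible.
        props.put(StreamsConfig.STATE_DIR_CONFIG, "D:\\tmp\\kafka-streams");

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> left = builder.stream("left-topic");
        KStream<String, String> right = builder.stream("right-topic");

        // The windowed join is what creates the KSTREAM-JOINTHIS-…-store /
        // KSTREAM-JOINOTHER-…-store RocksDB segment directories on disk.
        left.join(right,
                  (l, r) -> l + "," + r,
                  JoinWindows.ofTimeDifferenceWithNoGrace(Duration.ofMinutes(5)),
                  StreamJoined.with(Serdes.String(), Serdes.String(), Serdes.String()))
            .to("joined-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}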

Related

Apache Nifi Web Server keeps failing to start with Decryption exception

I have a setup in which the NiFi Web Server suddenly started failing to start when upgrading from version 1.15.3 to 1.16.1. The following exception keeps occurring on the Apache NiFi cluster:
2022-05-11 22:53:40,570 WARN [main] org.apache.nifi.web.server.JettyServer Failed to start web server... shutting down.
org.apache.nifi.encrypt.EncryptionException: Decryption Failed with Algorithm [PBEWITHMD5AND256BITAES-CBC-OPENSSL]
at org.apache.nifi.encrypt.CipherPropertyEncryptor.decrypt(CipherPropertyEncryptor.java:78)
at org.apache.nifi.fingerprint.FingerprintFactory.decrypt(FingerprintFactory.java:931)
at org.apache.nifi.fingerprint.FingerprintFactory.getLoggableRepresentationOfSensitiveValue(FingerprintFactory.java:561)
at org.apache.nifi.fingerprint.FingerprintFactory.addParameter(FingerprintFactory.java:330)
at org.apache.nifi.fingerprint.FingerprintFactory.addParameterContext(FingerprintFactory.java:302)
at org.apache.nifi.fingerprint.FingerprintFactory.addFlowControllerFingerprint(FingerprintFactory.java:210)
at org.apache.nifi.fingerprint.FingerprintFactory.createFingerprint(FingerprintFactory.java:153)
at org.apache.nifi.fingerprint.FingerprintFactory.createFingerprint(FingerprintFactory.java:127)
at org.apache.nifi.controller.inheritance.FlowFingerprintCheck.checkInheritability(FlowFingerprintCheck.java:45)
at org.apache.nifi.controller.XmlFlowSynchronizer.sync(XmlFlowSynchronizer.java:200)
at org.apache.nifi.controller.serialization.StandardFlowSynchronizer.sync(StandardFlowSynchronizer.java:43)
at org.apache.nifi.controller.FlowController.synchronize(FlowController.java:1524)
at org.apache.nifi.persistence.StandardFlowConfigurationDAO.load(StandardFlowConfigurationDAO.java:104)
at org.apache.nifi.controller.StandardFlowService.loadFromBytes(StandardFlowService.java:815)
at org.apache.nifi.controller.StandardFlowService.load(StandardFlowService.java:457)
at org.apache.nifi.web.server.JettyServer.start(JettyServer.java:1086)
at org.apache.nifi.NiFi.<init>(NiFi.java:170)
at org.apache.nifi.NiFi.<init>(NiFi.java:82)
at org.apache.nifi.NiFi.main(NiFi.java:330)
Caused by: javax.crypto.BadPaddingException: pad block corrupted
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher$BufferedGenericBlockCipher.doFinal(Unknown Source)
at org.bouncycastle.jcajce.provider.symmetric.util.BaseBlockCipher.engineDoFinal(Unknown Source)
at javax.crypto.Cipher.doFinal(Cipher.java:2168)
at org.apache.nifi.encrypt.CipherPropertyEncryptor.decrypt(CipherPropertyEncryptor.java:74)
... 18 common frames omitted
relevant nifi.properties:
nifi.sensitive.props.key=<hidden>
nifi.sensitive.props.key.protected=
nifi.sensitive.props.algorithm=PBEWITHMD5AND256BITAES-CBC-OPENSSL
nifi.sensitive.props.additional.keys=
I have already tried to tear it all down and re-install 1.15.3 without any other changes, but the same issue still persists. Can someone please share any ideas on how to fix this?
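For what it's worth, a BadPaddingException like this is what plain JCE produces when a value is decrypted with a different key than the one it was encrypted with. The snippet below is only a generic illustration of that failure mode; it uses AES/CBC with a hashed passphrase rather than NiFi's actual PBEWITHMD5AND256BITAES-CBC-OPENSSL code path, and the key names are made up:

import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class BadPaddingDemo {
    public static void main(String[] args) throws Exception {
        byte[] iv = new byte[16]; // fixed all-zero IV, acceptable only for this throwaway demo
        byte[] plaintext = "some-sensitive-parameter".getBytes(StandardCharsets.UTF_8);

        // Encrypt with a key derived from one passphrase ...
        Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, keyFrom("originalPropsKey"), new IvParameterSpec(iv));
        byte[] cipherText = enc.doFinal(plaintext);

        // ... then decrypt with a different passphrase: the CBC padding check fails,
        // surfacing as javax.crypto.BadPaddingException, the same class of error as
        // "pad block corrupted" in the stack trace above.
        Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, keyFrom("differentPropsKey"), new IvParameterSpec(iv));
        dec.doFinal(cipherText);
    }

    // Hypothetical helper: hash a passphrase down to a 256-bit AES key.
    private static SecretKeySpec keyFrom(String passphrase) throws Exception {
        byte[] key = MessageDigest.getInstance("SHA-256")
                .digest(passphrase.getBytes(StandardCharsets.UTF_8));
        return new SecretKeySpec(key, "AES");
    }
}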

Failed to initialize the application "DataSource" due to error weblogic.application.ModuleException

When I try to restart the application server, I get the error below. If I manually deploy the datasource.ear file, it works fine, but it fails during the server restart.
<Error> <Deployer> <BEA-149205> <Failed to initialize the application "Datasource" due to error weblogic.application.ModuleException:
weblogic.application.ModuleException:
at weblogic.jdbc.module.JDBCModule.prepare(JDBCModule.java:337)
at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:100)
at weblogic.application.internal.flow.ModuleStateDriver$1.next(ModuleStateDriver.java:172)
at weblogic.application.internal.flow.ModuleStateDriver$1.next(ModuleStateDriver.java:167)
at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:35)
Truncated. see log file for complete stacktrace
Caused By: weblogic.common.ResourceException: weblogic.common.ResourceException: Could not create pool connection. The DBMS driver exception was: IO Error: Connection reset
at weblogic.jdbc.common.internal.ConnectionEnvFactory.createResource(ConnectionEnvFactory.java:288)
at weblogic.common.resourcepool.ResourcePoolImpl.makeResources(ResourcePoolImpl.java:1310)
at weblogic.common.resourcepool.ResourcePoolImpl.makeResources(ResourcePoolImpl.java:1227)
at weblogic.common.resourcepool.ResourcePoolImpl.start(ResourcePoolImpl.java:250)
at weblogic.jdbc.common.internal.ConnectionPool.doStart(ConnectionPool.java:1396)
Truncated. see log file for complete stacktrace
>
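One thing that may help narrow this down is testing the same JDBC URL and credentials outside WebLogic from the same host, since "IO Error: Connection reset" is raised by the JDBC driver while the connection pool is being filled at startup. A minimal sketch, assuming an Oracle thin URL; the host, service name, and credentials are placeholders for whatever the data source module actually uses:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PoolConnectionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; use the same values as the data source in datasource.ear.
        String url = "jdbc:oracle:thin:@dbhost:1521/SERVICE";
        try (Connection conn = DriverManager.getConnection(url, "app_user", "app_password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL")) {
            rs.next();
            System.out.println("JDBC connectivity OK: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}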

DSE is not starting, stating it is unable to write to the commit log directory

I am getting the below error while starting DSE:
ERROR [main] 2020-02-26 13:08:33,269 DseModule.java:97 - {}. Exiting...
com.google.inject.CreationException: Unable to create injector, see the following errors:
1) An exception was caught and reported. Message: Unable to check disk space available to /u01/dse_ops/logs. Perhaps the Cassandra user does not have the necessary permissions
at com.datastax.bdp.DseModule.configure(Unknown Source)
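The message itself points at permissions or the disk-space check for /u01/dse_ops/logs. A quick way to see what the DSE/Cassandra OS user can actually do with that directory is to run a small check as that user; a minimal sketch using plain java.io.File, where only the path is taken from the error:

import java.io.File;

public class CommitLogDirCheck {
    public static void main(String[] args) {
        // Run this as the same OS user that starts DSE (typically "cassandra").
        File dir = new File("/u01/dse_ops/logs");
        System.out.println("exists      = " + dir.exists());
        System.out.println("canRead     = " + dir.canRead());
        System.out.println("canWrite    = " + dir.canWrite());
        System.out.println("canExecute  = " + dir.canExecute());
        // The same kind of query DSE needs when it checks available disk space.
        System.out.println("usableSpace = " + dir.getUsableSpace() + " bytes");
    }
}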

Unable to convert to timestamp using Kafka TimestampConverter

I am using the Kafka JDBC source connector to pull DB events, and I am running Kafka Connect in standalone mode. When I run this file, I get the error shown below. Please help me.
Code:
name=sailpointdb01107
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.password=xxxxx
connection.url=jdbc:oracle:thin:@xxxxx:1521/xxxxx
connection.user=xxxxx
query=SELECT * FROM (SELECT NAME, TO_TIMESTAMP('19700101', 'YYYYMMDD')+ NUMTODSINTERVAL(COMPLETED/1000,'SECOND') AS TASKFAILEDON FROM task WHERE COMPLETION_STATUS='Error')
mode=timestamp
timestamp.column.name=TASKFAILEDON
topic.prefix=testing
validate.non.null=false
transforms=TimestampConverter
transforms.TimestampConverter.type=org.apache.kafka.connect.transforms.TimestampConverter$Value
transforms.TimestampConverter.format=yyyy-MM-dd
transforms.TimestampConverter.target.type=Timestamp
transforms.TimestampConverter.target.field=TASKFAILEDON
Error:
[2019-10-01 15:17:45,058] ERROR WorkerSourceTask{id=sailpointdb01107-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
Caused by: org.apache.kafka.connect.errors.ConnectException: Schema Schema{STRUCT} does not correspond to a known timestamp type format
at org.apache.kafka.connect.transforms.TimestampConverter.timestampTypeFromSchema(TimestampConverter.java:406)
at org.apache.kafka.connect.transforms.TimestampConverter.applyWithSchema(TimestampConverter.java:334)
at org.apache.kafka.connect.transforms.TimestampConverter.apply(TimestampConverter.java:275)
at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
... 11 more
[2019-10-01 15:17:45,059] ERROR WorkerSourceTask{id=sailpointdb01107-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)
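For what it's worth, the exception can be reproduced outside Connect by applying the SMT to a STRUCT-valued record when no single field is selected via TimestampConverter's "field" option; the transform then tries to treat the whole STRUCT as a timestamp. A minimal sketch with a made-up schema and record, assuming connect-api and connect-transforms are on the classpath:

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.data.Timestamp;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.transforms.TimestampConverter;

import java.util.Collections;

public class TimestampConverterRepro {
    public static void main(String[] args) {
        // A value schema roughly like what the JDBC source connector emits for the query.
        Schema valueSchema = SchemaBuilder.struct()
                .field("NAME", Schema.STRING_SCHEMA)
                .field("TASKFAILEDON", Timestamp.SCHEMA)
                .build();
        Struct value = new Struct(valueSchema)
                .put("NAME", "someTask")
                .put("TASKFAILEDON", new java.util.Date());

        SourceRecord record = new SourceRecord(null, null, "testing", valueSchema, value);

        // Only target.type is set and no single field is selected, so the transform
        // inspects the record's whole value schema (a STRUCT) and throws the
        // "does not correspond to a known timestamp type format" ConnectException.
        TimestampConverter.Value<SourceRecord> smt = new TimestampConverter.Value<>();
        smt.configure(Collections.singletonMap("target.type", "Timestamp"));
        try {
            smt.apply(record);
        } finally {
            smt.close();
        }
    }
}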
Configuring the following line on the connector could avoid that issue:
time.precision.mode: "connect"

SEVERE error writing to S3 backup

I'm running OpsCenter 5.1.1 with DataStax Enterprise 4.5.1. It's a 3-node cluster on AWS, and I'm backing up to S3 (still...). I've started seeing a new error, and I think it is different from any I've posted before.
$ cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.8.39 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
I am seeing this error in the agent.log file:
node1_agent.log: SEVERE: error after writing 15736832/16777216 bytes to https://cassandra-dev-bkup.s3.amazonaws.com/snapshots/407bb4b1-5c91-43fe-9d4f-767115668037/sstables/1430904167-reporting_test-transaction_lookup-jb-288-Index.db?partNumber=2&uploadId=.MA3X4RYssg7xL_Hr7Msgze.J4exDq9zZ_0Y7qEj9gZhJ570j73kZNr5_nbxactmPMJeKf0XyZfEC0KAplWOz9lpyRCtNeeDCvCmtEXDchH8F1J2c57aq4MrxfBcyiZr
java.io.IOException: Error writing request body to server
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.checkError(HttpURLConnection.java:3192)
at sun.net.www.protocol.http.HttpURLConnection$StreamingOutputStream.write(HttpURLConnection.java:3175)
at com.google.common.io.CountingOutputStream.write(CountingOutputStream.java:53)
at com.google.common.io.ByteStreams.copy(ByteStreams.java:179)
at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.writePayloadToConnection(JavaUrlHttpCommandExecutorService.java:308)
at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.convert(JavaUrlHttpCommandExecutorService.java:192)
at org.jclouds.http.internal.JavaUrlHttpCommandExecutorService.convert(JavaUrlHttpCommandExecutorService.java:72)
at org.jclouds.http.internal.BaseHttpCommandExecutorService.invoke(BaseHttpCommandExecutorService.java:95)
at org.jclouds.rest.internal.InvokeSyncToAsyncHttpMethod.invoke(InvokeSyncToAsyncHttpMethod.java:128)
at org.jclouds.rest.internal.InvokeSyncToAsyncHttpMethod.apply(InvokeSyncToAsyncHttpMethod.java:94)
at org.jclouds.rest.internal.InvokeSyncToAsyncHttpMethod.apply(InvokeSyncToAsyncHttpMethod.java:55)
at org.jclouds.rest.internal.DelegatesToInvocationFunction.handle(DelegatesToInvocationFunction.java:156)
at org.jclouds.rest.internal.DelegatesToInvocationFunction.invoke(DelegatesToInvocationFunction.java:123)
at com.sun.proxy.$Proxy48.uploadPart(Unknown Source)
at org.jclouds.aws.s3.blobstore.strategy.internal.SequentialMultipartUploadStrategy.prepareUploadPart(SequentialMultipartUploadStrategy.java:111)
at org.jclouds.aws.s3.blobstore.strategy.internal.SequentialMultipartUploadStrategy.execute(SequentialMultipartUploadStrategy.java:93)
at org.jclouds.aws.s3.blobstore.AWSS3BlobStore.putBlob(AWSS3BlobStore.java:89)
at org.jclouds.blobstore2$put_blob.doInvoke(blobstore2.clj:246)
at clojure.lang.RestFn.invoke(RestFn.java:494)
at opsagent.backups.destinations$create_blob$fn__12007.invoke(destinations.clj:69)
at opsagent.backups.destinations$create_blob.invoke(destinations.clj:64)
at opsagent.backups.destinations$fn__12170.invoke(destinations.clj:192)
at opsagent.backups.destinations$fn__11799$G__11792__11810.invoke(destinations.clj:24)
at opsagent.backups.staging$start_staging_BANG_$fn__12338$state_machine__7576__auto____12339$fn__12344$fn__12375.invoke(staging.clj:61)
at opsagent.backups.staging$start_staging_BANG_$fn__12338$state_machine__7576__auto____12339$fn__12344.invoke(staging.clj:59)
at opsagent.backups.staging$start_staging_BANG_$fn__12338$state_machine__7576__auto____12339.invoke(staging.clj:56)
at clojure.core.async.impl.ioc_macros$run_state_machine.invoke(ioc_macros.clj:940)
at clojure.core.async.impl.ioc_macros$run_state_machine_wrapped.invoke(ioc_macros.clj:944)
at clojure.core.async.impl.ioc_macros$take_BANG_$fn__7592.invoke(ioc_macros.clj:953)
at clojure.core.async.impl.channels.ManyToManyChannel$fn__4097.invoke(channels.clj:102)
at clojure.lang.AFn.run(AFn.java:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
TL;DR -
Your SSTable, which is 38866048 bytes, is both on your filesystem and on S3. This means the file has transferred over and you are in good shape. There is no need to worry about this error (though I opened an internal ticket to handle this kind of exception rather than throwing a dump).
Details - a summary of what I suspect happened:
1) There was a file transfer error when 15736832 bytes of the 16777216-byte slice of the SSTable had been transferred.
2) At that point OpsCenter did not finish transferring the SSTable, nor did it leave a partial version in S3.
3) A later backup attempt moved the SSTable with no error, and a valid backup exists.
