I have a requirement to delay events by 45 minutes and then invoke a method.
The events keep coming in and piling up. How should I approach this using the Spring Integration delayer or the Spring scheduler?
I went ahead and used a JdbcMessageStore with an Oracle DB. The messages are stored in the INT_MESSAGE table, but during retrieval we observe this error:
--- [ 26] o.s.j.c.JdbcTemplate : Executing prepared SQL statement [SELECT MESSAGE_ID, CREATED_DATE, MESSAGE_BYTES from INT_MESSAGE where MESSAGE_ID=? and REGION=?]
DEBUG whdq7355 --- [ 26] o.s.j.d.DataSourceUtils : Fetching JDBC Connection from DataSource
DEBUG whdq7355 --- [ 26] o.s.j.s.l.DefaultLobHandler : Returning BLOB as bytes
DEBUG whdq7355 --- [ 26] o.s.j.d.DataSourceUtils : Returning JDBC Connection to DataSource
DEBUG whdq7355 --- [ 26] o.s.i.c.PublishSubscribeChannel : preSend on channel 'errorChannel', message: ErrorMessage [payload=org.springframework.core.serializer.support.SerializationFailedException: Failed to deserialize payload. Is the byte array a result of corresponding serialization for DefaultDeserializer?; nested exception is java.io.StreamCorruptedException: invalid stream header: 00540001, headers={id=a39571b4-747b-87e7-f10f-0fa360904a15, timestamp=1566325212736}]
There is a delayer component in Spring Integration. It can be configured for that amount of time once you convert it to milliseconds. See the docs for more info: https://docs.spring.io/spring-integration/docs/5.1.7.RELEASE/reference/html/#delayer. Also, for such a long delay you should definitely consider using an external message store on the delayer, to avoid message loss and memory leaks.
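A minimal Java DSL sketch of such a flow, assuming a 45-minute delay and a JDBC-backed message store; the channel name, bean names and the handler method below are placeholders, not part of the original question:

import javax.sql.DataSource;
import java.util.concurrent.TimeUnit;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.jdbc.store.JdbcMessageStore;

@Configuration
@EnableIntegration
public class DelayedEventFlowConfig {

    @Bean
    public JdbcMessageStore jdbcMessageStore(DataSource dataSource) {
        // Persists delayed messages in the INT_MESSAGE (and related) tables,
        // so pending events survive a restart instead of living only in memory.
        return new JdbcMessageStore(dataSource);
    }

    @Bean
    public IntegrationFlow delayedEventFlow(JdbcMessageStore jdbcMessageStore) {
        return IntegrationFlows.from("eventInputChannel")
                .delay("delayer.events", d -> d
                        .defaultDelay(TimeUnit.MINUTES.toMillis(45))
                        .messageStore(jdbcMessageStore))
                .handle("eventHandler", "process") // invoke the target method once the delay elapses
                .get();
    }
}

Note that whatever payload reaches the delayer must be serializable in a form the configured serializer/deserializer understands, otherwise retrieval from INT_MESSAGE fails with an error like the one in the log above.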
I am trying to check the JDBC connection status (open or closed) after each thread execution.
In my thread group there are three elements:
JDBC connection configuration
JDBC request (select * from employee)
JSR223 PostProcessor
Script:
def connection = org.apache.jmeter.protocol.jdbc.config.DataSourceElement.getConnection('ConnectionString')
log.info('*************Connection closed: '+ connection.isClosed())
The above script logs the connection status after each thread execution when the loop count is 1. The problem is that as soon as I change the loop count to >= 2, it starts throwing the error below:
Problem in JSR223 script, JSR223 PostProcessor
javax.script.ScriptException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object
And when I remove the PostProcessor and increase the loop count, it works fine.
Logs:
2023-02-19 16:15:33,599 INFO o.a.j.t.JMeterThread: Thread started: DB Thread Group 2-1
2023-02-19 16:15:38,054 DEBUG o.a.j.p.j.AbstractJDBCTestElement: executing jdbc:SELECT * FROM EMPLOYEE
2023-02-19 16:15:38,623 INFO o.a.j.e.J.JSR223 PostProcessor: *************Connection closed: false
2023-02-19 16:15:58,637 ERROR o.a.j.e.JSR223PostProcessor: Problem in JSR223 script, JSR223 PostProcessor
javax.script.ScriptException: java.sql.SQLException: Cannot get a connection, pool error Timeout waiting for idle object, borrowMaxWaitDuration=PT10S
at org.codehaus.groovy.jsr223.GroovyScriptEngineImpl.eval(GroovyScriptEngineImpl.java:320) ~[groovy-jsr223-3.0.11.jar:3.0.11]
at org.codehaus.groovy.jsr223.GroovyCompiledScript.eval(GroovyCompiledScript.java:71) ~
In the JDBC configuration are you using a connection pool to request connections?
What your test shows is that the JSR223 script is closing the connection, which is probably a good thing from a coding perspective, but the next iteration of the loop then tries to execute a request with a closed Connection and fails. If you switch from raw connections to a connection pool, then when the JSR223 script closes the connection it will be returned to the pool and remain available for the next iteration of the loop. You'll typically have to switch to the DataSource API for this, but it's a minor tweak to the script.
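To illustrate the pooling semantics this relies on, here is a small stand-alone Java sketch (not a JMeter script; the JDBC URL, credentials and table are placeholders): with a pooled DataSource, closing a connection hands it back to the pool rather than closing the physical connection, so later iterations can reuse it.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.commons.dbcp2.BasicDataSource;

public class PooledConnectionSketch {
    public static void main(String[] args) throws Exception {
        // DBCP2 connection pool (similar in spirit to what JMeter's JDBC Connection Configuration manages).
        BasicDataSource ds = new BasicDataSource();
        ds.setUrl("jdbc:postgresql://localhost:5432/testdb"); // placeholder URL and credentials
        ds.setUsername("user");
        ds.setPassword("secret");
        ds.setMaxTotal(2); // deliberately tiny pool so exhaustion would surface quickly

        for (int i = 0; i < 10; i++) {
            // try-with-resources: close() on a pooled connection returns it to the pool,
            // so the next iteration can borrow it again instead of timing out.
            try (Connection c = ds.getConnection();
                 Statement st = c.createStatement();
                 ResultSet rs = st.executeQuery("SELECT * FROM EMPLOYEE")) {
                while (rs.next()) {
                    // consume rows
                }
            }
        }
        ds.close();
    }
}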
I can think of 2 possible reasons:
Either your database is down/overloaded/not reachable via JDBC
Or your connection pool settings need to be tweaked, i.e. the max number of connections and/or the wait timeout need to be increased.
In general I don't think your approach is correct; as per the Connection.isClosed() JavaDoc:
This method generally cannot be called to determine whether a connection to a database is valid or invalid. A typical client can determine that a connection is invalid by catching any exceptions that might be thrown when an operation is attempted.
So you might want to increase the debug logging verbosity for JMeter, your JDBC driver and the java.sql namespace instead.
I have a Spring Boot application that reads from a database table with potentially millions of rows and therefore uses the queryForStream method of Spring's JdbcTemplate. This is the code:
Stream<MyResultDto> result = jdbcTemplate.queryForStream("select * from table", myRowMapper);
This runs well for smaller tables, but from about 500 MB of table size the application dies with a stacktrace like this:
Exception in thread "http-nio-8080-Acceptor" java.lang.OutOfMemoryError: Java heap space
at java.base/java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:64)
at java.base/java.nio.ByteBuffer.allocate(ByteBuffer.java:363)
at org.apache.tomcat.util.net.SocketBufferHandler.<init>(SocketBufferHandler.java:58)
at org.apache.tomcat.util.net.NioEndpoint.setSocketOptions(NioEndpoint.java:486)
at org.apache.tomcat.util.net.NioEndpoint.setSocketOptions(NioEndpoint.java:79)
at org.apache.tomcat.util.net.Acceptor.run(Acceptor.java:149)
at java.base/java.lang.Thread.run(Thread.java:833)
2023-01-28 00:37:23.862 ERROR 1 --- [nio-8080-exec-3] o.a.c.h.Http11NioProtocol : Failed to complete processing of a request
java.lang.OutOfMemoryError: Java heap space
2023-01-28 00:37:30.548 ERROR 1 --- [nio-8080-exec-6] o.a.c.c.C.[.[.[.[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: Java heap space] with root cause
java.lang.OutOfMemoryError: Java heap space
Exception in thread "http-nio-8080-Poller" java.lang.OutOfMemoryError: Java heap space
As you can probably guess from the stack trace, I am streaming the database results out via an HTTP REST interface. The stack is PostgreSQL 15, the standard PostgreSQL JDBC driver 42.3.8, and spring-boot-starter-data-jpa 2.6.14, which pulls in spring-jdbc 5.3.24.
It's worth noting that the table has no primary key, which I suppose should be no problem for the above query. I have not posted the RowMapper because it never gets to run: the memory runs out right after the query is sent to the database. It never comes back with a result set for the RowMapper to work on.
I have tried jdbcTemplate.setFetchSize(1000) and also not specifying any fetch size, which I believe results in the default being used (100, I think). In both cases the same thing happens: large result sets are not streamed but somehow exhaust the Java heap space before streaming starts. What could be the reason for this? Isn't the queryForStream method meant to avoid exactly such situations?
I was on the right track setting the fetch size; that is exactly what prevents the JDBC driver from loading the entire result set into memory. In my case the setting was silently ignored, and that is a behavior of the PostgreSQL JDBC driver: it ignores the fetch size if autocommit is set to true, which is the default in Spring JDBC.
Therefore the solution was to define a DataSource in Spring JDBC that sets autocommit to false and use that DataSource for the streaming query. The fetch size was then applied, and I ended up setting it to 10000, which in my case yielded the best performance/memory usage ratio.
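A minimal sketch of that setup, assuming HikariCP (Spring Boot's default pool); apart from the autocommit flag and the fetch size of 10000 mentioned above, the JDBC URL, credentials and bean names are placeholders:

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;

@Configuration
public class StreamingJdbcConfig {

    @Bean
    public DataSource streamingDataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        config.setUsername("user");
        config.setPassword("secret");
        config.setAutoCommit(false); // required so the PostgreSQL driver honors the fetch size and streams with a cursor
        return new HikariDataSource(config);
    }

    @Bean
    public JdbcTemplate streamingJdbcTemplate(DataSource streamingDataSource) {
        JdbcTemplate template = new JdbcTemplate(streamingDataSource);
        template.setFetchSize(10_000); // rows fetched per round trip instead of the whole result set at once
        return template;
    }
}

The queryForStream call from the question would then be executed against this streamingJdbcTemplate rather than the default one.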
Current setup - our Spring Boot application consumes messages from a Kafka topic. We are processing one message at a time (we are not using streams). Below are the config properties and versions being used (a configuration sketch follows the list).
ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG - 30000
ConsumerConfig.AUTO_OFFSET_RESET_CONFIG - earliest
ContainerProperties.AckMode - RECORD
Spring Boot version - 2.5.7
Spring-kafka version - 2.7.8
Kafka-clients version - 2.8.1
number of partitions - 6
consumer group - 1
consumers - 2
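For reference, a minimal Spring Kafka sketch of the consumer setup listed above; the bootstrap servers, group id, deserializers and SSL settings are assumptions and would come from the real configuration:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;

@Configuration
public class KafkaConsumerConfigSketch {

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker:9093"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");          // placeholder
        props.put(ConsumerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // SSL properties (truststore, keystore, endpoint identification, etc.) would also go here.
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setConcurrency(2); // two consumers, as in the setup above
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.RECORD);
        return factory;
    }
}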
Issue - when the Spring Boot application stays idle for a longer time (idle time varying from 4 hours to 3 days), we see org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Exception error message - org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: java.security.cert.CertificateException: No subject alternative DNS name matching kafka-2.broker.emh-dev.service.dev found.
2022-04-07 06:58:42.437 ERROR 24180 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : Authentication/Authorization Exception, retrying in 10000 ms
After the service recovers, we see message duplication with the same partition and offsets, which is inconsistent.
Below are the exceptions:
Consumer clientId=XXXXXX, groupId=XXXXXX] Offset commit failed on partition XXXXXX at offset 354: The coordinator is not aware of this member
Seek to current after exception; nested exception is org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing max.poll.interval.ms or by reducing the maximum size of batches returned in poll() with max.poll.records
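The second exception above points at max.poll.interval.ms and max.poll.records; if those were to be tuned, they would be added to the consumer properties in the sketch earlier (the values here are purely illustrative):

props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, 600_000); // allow up to 10 minutes between poll() calls
props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 100);         // smaller batches so each poll finishes sooner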
I keep getting this error EXCEPTION: SocketException: Operation not permitted (select/poll failed) when I push frames to Kinesis. This is followed by:
End-of-stream is reported. Terminating...
DEBUG / KinesisVideo: Exception while sending data.
ERROR / KinesisVideo: 2018-06-09T16:26Z T31: EXCEPTION: RuntimeException: Exception thrown on sending thread: Exception while sending encoded chunk in MKV stream !
DEBUG / KinesisVideo: PutFrame index: 10, pts: 15285616115400000, dts: 15285616115400000, duration: 200000, keyFrame: false, flags: 0
com.amazonaws.kinesisvideo.producer.ProducerException: Failed to put a frame into the stream.
at com.amazonaws.kinesisvideo.producer.jni.NativeKinesisVideoProducerJni.putKinesisVideoFrame(Native Method)
at com.amazonaws.kinesisvideo.producer.jni.NativeKinesisVideoProducerJni.putFrame(NativeKinesisVideoProducerJni.java:440)
at com.amazonaws.kinesisvideo.producer.jni.NativeKinesisVideoProducerStream.putFrame(NativeKinesisVideoProducerStream.java:259)
at com.amazonaws.kinesisvideo.mediasource.ProducerStreamSink.onFrame(ProducerStreamSink.java:35)
at com.amazonaws.kinesis.custom.S3FileMediaSource.putFrame(S3FileMediaSource.java:114)
at com.amazonaws.kinesis.custom.S3FileMediaSource.access$3(S3FileMediaSource.java:112)
at com.amazonaws.kinesis.custom.S3FileMediaSource$1.onFrameDataAvailable(S3FileMediaSource.java:103)
at com.amazonaws.kinesis.custom.S3FrameSource.generateFrameAndNotifyListener(S3FrameSource.java:84)
at com.amazonaws.kinesis.custom.S3FrameSource.access$0(S3FrameSource.java:71)
at com.amazonaws.kinesis.custom.S3FrameSource$1.run(S3FrameSource.java:66)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
DEBUG / KinesisVideo: Received all data, close
DEBUG / KinesisVideo: Closing data stream
DEBUG / KinesisVideo: Stream unblocked notification.
DEBUG / KinesisVideo: Data availability notification. Upload handle: 0, Size: 0, Duration 0
DEBUG / KinesisVideo: Being notified to close stream streamName with uploadHandle 0
INFO / KinesisVideo: End-of-stream is reported. Terminating...
I can't figure out why this is happening. Any ideas?
Finally, we used the PutMedia API instead to insert the MKV, but we found that the above error was due to the connection with Kinesis being ended, for the two reasons below:
sending wrong fragments (order/time)
ending the thread before the Kinesis connection establishment had finished
I would recommend trying your logic on an EC2 instance, or generally as a standalone application (JAR), and checking the logs.
I am running 3 instances of a service that I wrote using:
Scala 2.11.12
kafkaStreams 1.1.0
kafkaStreamsScala 0.2.1 (by lightbend)
The service uses Kafka Streams with the following topology (high level; a rough Java DSL sketch follows the list):
InputTopic
Parse to known Type
Clear messages that the parsing failed on
split every single message into 6 new messages
on each message run: map.groupByKey.reduce(with local store).toStream.to
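A rough Java DSL equivalent of that topology (the original service uses the Scala wrapper; topic names, value types and the parse/split/re-key helpers below are placeholders):

import java.util.Arrays;
import java.util.List;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class TopologySketch {

    public static StreamsBuilder buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> input =
                builder.stream("InputTopic", Consumed.with(Serdes.String(), Serdes.String()));

        input
                .mapValues(TopologySketch::parse)                 // parse to the known type
                .filter((key, parsed) -> parsed != null)          // clear messages that failed to parse
                .flatMapValues(TopologySketch::splitIntoSix)      // split every message into 6 new messages
                .map((key, value) -> KeyValue.pair(value, value)) // re-key; a key-changing map before groupByKey causes the *-repartition topic
                .groupByKey()
                .reduce((previous, next) -> next)                 // backed by a local store and a *-changelog topic
                .toStream()
                .to("OutputTopic", Produced.with(Serdes.String(), Serdes.String()));

        return builder;
    }

    private static String parse(String raw) {
        return raw; // placeholder: return null when parsing fails
    }

    private static List<String> splitIntoSix(String parsed) {
        return Arrays.asList(parsed, parsed, parsed, parsed, parsed, parsed); // placeholder split
    }
}

The warning quoted below comes from the producers writing to those internal repartition and changelog topics, not from the application topics themselves.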
Everything works as expected, but I can't get rid of a WARN message that keeps appearing:
15:46:00.065 [kafka-producer-network-thread | my_service_name-1ca232ff-5a9c-407c-a3a0-9f198c6d1fa4-StreamThread-1-0_0-producer] [WARN ] [o.a.k.c.p.i.Sender] - [Producer clientId=my_service_name-1ca232ff-5a9c-407c-a3a0-9f198c6d1fa4-StreamThread-1-0_0-producer, transactionalId=my_service_name-0_0] Got error produce response with correlation id 28 on topic-partition my_service_name-state_store_1-repartition-1, retrying (2 attempts left). Error: UNKNOWN_PRODUCER_ID
As you can see, I get those errors from the INTERNAL topics that Kafka Streams manages. It seems like some kind of retention period on the producer metadata in the internal topics, or some kind of producer id reset.
I couldn't find anything regarding this issue, only a description of the error itself:
ERROR: UNKNOWN_PRODUCER_ID
CODE: 59
RETRIABLE: False
DESCRIPTION: This exception is raised by the broker if it could not locate the producer metadata associated with the producerId in question. This could happen if, for instance, the producer's records were deleted because their retention time had elapsed. Once the last records of the producer id are removed, the producer's metadata is removed from the broker, and future appends by the producer will return this exception.
Hope you can help,
Thanks
Edit:
It seems that the WARN message does not pop up on version 1.0.1 of kafka streams.