I have Zookeeper and Apache Kafka servers running on my Windows computer. The problem is with a Spring Boot application: it re-reads the same messages from Kafka every time it starts, which means the offsets are not being saved. How do I fix this?
Versions are: kafka_2.12-2.4.0, Spring Boot 2.5.0.
In my Kafka listener container factory bean, I have:
factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
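For context, that line sits in a listener container factory bean, roughly like this sketch (the consumer factory wiring here is representative, not my exact code):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties.AckMode;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        // offsets are committed one record at a time, as soon as the listener acknowledges
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        return factory;
    }
}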
The consumer config values printed to the console when Spring Boot starts are:
2021-06-10 18:21:11.008 INFO 7036 --- [ main] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
allow.auto.create.topics = true
auto.commit.interval.ms = 5000
auto.offset.reset = latest
bootstrap.servers = [http://localhost:9092]
check.crcs = true
client.dns.lookup = use_all_dns_ips
client.id = consumer-group_id1-1
client.rack =
connections.max.idle.ms = 540000
default.api.timeout.ms = 60000
enable.auto.commit = false
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = group_id1
group.instance.id = null
heartbeat.interval.ms = 3000
interceptor.classes = []
internal.leave.group.on.close = true
internal.throw.on.fetch.stable.offset.unsupported = false
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
session.timeout.ms = 10000
socket.connection.setup.timeout.max.ms = 127000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
UPDATE 11-Jun-2021
Using @nipuna's suggestion, in my Kafka consumer config I set
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
However, this was not compatible with the manual_immediate ack mode, so I deleted the following line to fall back to the default ack mode (batch, I think):
factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
Your issue is here: enable.auto.commit = false. If you are not manually committing offsets after consuming messages, you should set this to true.
If this is set to false and you never commit, Kafka gets no feedback about which messages you have read, so after you restart your consumer it receives messages from the start again. If you enable auto-commit, the consumer automatically sends the offset of the last record it read back to Kafka, and Kafka saves that offset in the __consumer_offsets topic, keyed by your consumer group.id, the topic you consumed and the partition.
Then, after you restart the consumer, Kafka reads your last committed position from the __consumer_offsets topic and resumes from there.
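Alternatively, if you want to keep enable.auto.commit = false together with AckMode.MANUAL_IMMEDIATE, you have to acknowledge every record yourself. A minimal sketch (the topic name is a placeholder):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class MyListener {

    // With MANUAL_IMMEDIATE the container injects an Acknowledgment and
    // commits the offset as soon as acknowledge() is called.
    @KafkaListener(topics = "my-topic", groupId = "group_id1")
    public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
        // ... process record.value() ...
        ack.acknowledge(); // without this call, offsets are never committed
    }
}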
Related
I'm using StreamBridge in my app, since the topic to send to is decided at runtime from URL path params; I build a message from the request body and path elements, then call StreamBridge.send() to a function that publishes to Kafka.
@Bean
public RouterFunction<ServerResponse> webhooks() {
    return route().POST("/webhooks/v1/{cat}/{mat}/{key}", accept(MediaType.APPLICATION_JSON), (serverRequest) -> {
        String cat = serverRequest.pathVariable("cat");
        String mat = serverRequest.pathVariable("mat");
        String key = serverRequest.pathVariable("key");
        String logPrefix = serverRequest.exchange().getLogPrefix();
        log.debug("{}Received HTTP Payload for {}:{} with key {}", logPrefix, cat, mat, key);
        return serverRequest.bodyToMono(new ParameterizedTypeReference<Map<String, Object>>() {
                })
                .map(payload -> MessageBuilder.withPayload(payload)
                        .setHeader(catHeader, cat)
                        .setHeader(matHeader, mat)
                        .setHeader(keyHeader, key)
                        .setHeader(KafkaHeaders.MESSAGE_KEY, "someKey")
                        .setHeader(KafkaHeaders.TOPIC, String.join("-", cat, mat, namespace))
                        .setHeader(webhookRequestId, logPrefix)
                        .build())
                .map(message -> streamBridge.send("producer", message))
                .flatMap(message -> ServerResponse.accepted().build());
    }).build();
}
@Bean
public Function<Flux<Message<?>>, Flux<Message<?>>> producer() {
    return mapFlux -> mapFlux.map(m -> MessageBuilder.withPayload(m.getPayload()).copyHeaders(m.getHeaders()).build());
}
I then add the following properties to my application.yml:
spring:
  main:
    banner-mode: off
  mongodb:
    embedded:
      version: 3.4.6
  data:
    mongodb:
      port: 29129
      host: localhost
      database: howler_db
  kafka:
    binder:
      brokers: localhost:9952
  cloud:
    function:
      definition: producer
    stream:
      bindings:
        producer-out-0:
          useTopicHeader: true
          producer:
            configuration:
              retry:
                topic:
                  delay: 200
              key-serializer: org.apache.kafka.common.serialization.StringSerializer
              value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
              retries: 3
              max:
                block:
                  ms: 500
              enable:
                idempotence: true
              acks: all
My test, however, fails with this exception:
Caused by: org.apache.kafka.common.errors.SerializationException: Can't convert key of class java.lang.String to class org.apache.kafka.common.serialization.ByteArraySerializer specified in key.serializer
at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:949) ~[kafka-clients-3.1.1.jar:na]
at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:914) ~[kafka-clients-3.1.1.jar:na]
at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.send(DefaultKafkaProducerFactory.java:1087) ~[spring-kafka-2.8.7.jar:2.8.7]
at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:655) ~[spring-kafka-2.8.7.jar:2.8.7]
at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:429) ~[spring-kafka-2.8.7.jar:2.8.7]
at org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler.handleRequestMessage(KafkaProducerMessageHandler.java:513) ~[spring-integration-kafka-5.5.13.jar:5.5.13]
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:136) ~[spring-integration-core-5.5.13.jar:5.5.13]
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:56) ~[spring-integration-core-5.5.13.jar:5.5.13]
at org.springframework.cloud.stream.binder.AbstractMessageChannelBinder$SendingHandler.handleMessageInternal(AbstractMessageChannelBinder.java:1074) ~[spring-cloud-stream-3.2.4.jar:3.2.4]
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:56) ~[spring-integration-core-5.5.13.jar:5.5.13]
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115) ~[spring-integration-core-5.5.13.jar:5.5.13]
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:133) ~[spring-integration-core-5.5.13.jar:5.5.13]
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106) ~[spring-integration-core-5.5.13.jar:5.5.13]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72) ~[spring-integration-core-5.5.13.jar:5.5.13]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:317) ~[spring-integration-core-5.5.13.jar:5.5.13]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:272) ~[spring-integration-core-5.5.13.jar:5.5.13]
at org.springframework.cloud.stream.function.StreamBridge.send(StreamBridge.java:235) ~[spring-cloud-stream-3.2.4.jar:3.2.4]
at org.springframework.cloud.stream.function.StreamBridge.send(StreamBridge.java:170) ~[spring-cloud-stream-3.2.4.jar:3.2.4]
at org.springframework.cloud.stream.function.StreamBridge.send(StreamBridge.java:150) ~[spring-cloud-stream-3.2.4.jar:3.2.4]
at com.gabbar.cloud.sambha.SholayWebFunctionConfiguration.lambda$webhooks$1(ShokayWebFunctionConfiguration.java:67) ~[classes/:na]
The logs reveal that my producer key.serializer isn't being applied:
2022-07-11 18:24:15.500 INFO 15424 --- [ctor-http-nio-2] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
acks = 1
batch.size = 16384
bootstrap.servers = [127.0.0.1:9952]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = producer-1
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = false
interceptor.classes = []
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.connect.timeout.ms = null
sasl.login.read.timeout.ms = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.login.retry.backoff.max.ms = 10000
sasl.login.retry.backoff.ms = 100
sasl.mechanism = GSSAPI
sasl.oauthbearer.clock.skew.seconds = 30
sasl.oauthbearer.expected.audience = null
sasl.oauthbearer.expected.issuer = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
sasl.oauthbearer.jwks.endpoint.url = null
sasl.oauthbearer.scope.claim.name = scope
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.token.endpoint.url = null
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = null
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
Here's what works for me now; the serializer settings apparently have to live under the Kafka binder's producerProperties rather than under the binding's producer configuration:
spring:
  embedded:
    kafka:
      brokers: localhost:9092
  cloud:
    stream:
      kafka:
        default:
          producer:
            useTopicHeader: true
        binder:
          autoCreateTopics: false
          producerProperties:
            key.serializer: org.apache.kafka.common.serialization.StringSerializer
            value.serializer: org.springframework.kafka.support.serializer.JsonSerializer
            max.block.ms: 100
I am trying to build a REST API for a movie-review website. The movie model contains a cast field, which is a list field. When using ModelViewSets, one can't POST ListFields through the browsable API's HTML form, so I set blank=True for all list fields, thinking I would make a raw PATCH request later to update the blank fields, but I am unable to do so.
models.py
class Movie(models.Model):
    movie_name = models.CharField(max_length = 100, unique = True)
    release_date = models.DateField(blank = True)
    description = models.TextField(max_length = 500)
    movie_poster = models.ImageField(blank = True)
    directors = ListCharField(
        base_field = models.CharField(max_length = 500),
        max_length = 6 * 500,
        blank = True
    )
    trailer_url = models.URLField()
    cast = ListCharField(
        base_field = models.CharField(max_length = 225),
        max_length = 11 * 225,
        blank = True
    )
    genre = ListCharField(
        base_field = models.CharField(max_length = 225),
        max_length = 11 * 255,
        blank = True
    )
    avg_rating = models.FloatField(validators = [MinValueValidator(0), MaxValueValidator(5)])
    country = models.CharField(max_length = 100)
    language = models.CharField(max_length = 100)
    budget = models.BigIntegerField(blank = True)
    revenue = models.BigIntegerField(blank = True)
    runtime = models.DurationField(blank = True)
Serializer
class MovieSerializer(ModelSerializer):
    cast = ListField(
        child = CharField(required = False), required = False,
        min_length = 0
    )
    genre = ListField(
        child = CharField(required = False), required = False,
        min_length = 0
    )
    directors = ListField(
        child = CharField(required = False), required = False,
        min_length = 0
    )

    class Meta:
        model = Movie
        fields = '__all__'
I used django-mysql to add the ListCharField field type.
https://i.stack.imgur.com/sC6Vw.png [The data without list field values]
https://i.stack.imgur.com/W3xea.png [request I tried to make]
https://i.stack.imgur.com/OPeJn.png [response that I received]
The original PUT request, which resulted in an error response:
https://i.stack.imgur.com/W3xea.png
The request had some trailing commas, due to which the API expected more values.
Here's the correct request content:
{
    "cast": [
        "aamir",
        "sakshi"
    ],
    "genre": [
        "biopic"
    ],
    "directors": [
        "nitesh tiwari"
    ]
}
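For completeness, this is roughly how such a PATCH can be sent from Python (the URL and movie ID are placeholders for whatever your router exposes):

import requests

payload = {
    "cast": ["aamir", "sakshi"],
    "genre": ["biopic"],
    "directors": ["nitesh tiwari"],
}

# PATCH only updates the fields present in the payload; note there are
# no trailing commas inside the JSON arrays.
response = requests.patch("http://localhost:8000/movies/1/", json=payload)
print(response.status_code, response.json())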
I'm a complete beginner with Veins. I'm building a network in which a standard host and a vehicle communicate with each other via an AccessPoint. I've successfully run this simulation with a wirelessHost and a standardHost, but when I used veins_inet to simulate an adhocHost as a vehicle, I get this error:
""
My source code follows.
############################# .ned ############################
network ScenarioFOG
{
    submodules:
        radioMedium: Ieee80211ScalarRadioMedium {
            @display("p=29,130");
        }
        manager: VeinsInetManager;
        configurator: IPv4NetworkConfigurator {
            parameters:
                assignDisjunctSubnetAddresses = false;
                @display("p=36,83");
        }
        vNode[0]: Car;
        sNode: StandardHost {
            @display("p=260,61");
        }
        AP: AccessPoint {
            @display("p=197,79");
        }
    connections allowunconnected:
        sNode.ethg++ <--> Eth100M <--> AP.ethg++;
}
############################# .ini ############################
[General]
network = ScenarioFOG
sim-time-limit = 60s
debug-on-errors = true
cmdenv-express-mode = true
image-path = ../../../../images
# UDPBasicApp
ScenarioFOG.*Node[*].numUdpApps = 1
ScenarioFOG.*Node[*].udpApp[0].typename = "UDPBasicApp"
ScenarioFOG.vNode[*].udpApp[0].destAddresses = "224.0.0.1"
ScenarioFOG.vNode[*].udpApp[0].multicastInterface = "wlan0"
ScenarioFOG.vNode[*].udpApp[0].joinLocalMulticastGroups = true
ScenarioFOG.sNode[*].udpApp[0].localPort = 9001
ScenarioFOG.vNode[*].udpApp[0].destPort = 9001
ScenarioFOG.vNode[*].udpApp[0].destAddresses = "sNode"
ScenarioFOG.sNode[*].udpApp[0].destAddresses = "vNode"
ScenarioFOG.vNode[*].udpApp[0].messageLength = 100B
ScenarioFOG.vNode[*].udpApp[0].startTime = uniform(0s, 5s)
ScenarioFOG.vNode[*].udpApp[0].sendInterval = 5s
# Ieee80211MgmtAdhoc
ScenarioFOG.vNode[*].wlan[0].mgmtType = "Ieee80211MgmtAdhoc"
ScenarioFOG.vNode[*].wlan[0].bitrate = 6Mbps
ScenarioFOG.vNode[*].wlan[0].radio.transmitter.power = 2mW
# HostAutoConfigurator
ScenarioFOG.vNode[*].ac_wlan.interfaces = "wlan0"
ScenarioFOG.vNode[*].ac_wlan.mcastGroups = "224.0.0.1"
# VeinsInetMobility
**.vNode[*].mobilityType = "VeinsInetMobility"
**.vNode[*].mobility.constraintAreaMinX = 0m
**.vNode[*].mobility.constraintAreaMinY = 0m
**.vNode[*].mobility.constraintAreaMinZ = 0m
**.vNode[*].mobility.constraintAreaMaxX = 1000m
**.vNode[*].mobility.constraintAreaMaxY = 1000m
**.vNode[*].mobility.constraintAreaMaxZ = 0m
**.vNode[*].mobility.initFromDisplayString = false
**.vNode[*].mobility.initialX = 200m
**.vNode[*].mobility.initialY = 100m
**.vNode[*].mobility.initialZ = 0m
# VeinsInetManager
ScenarioFOG.manager.updateInterval = 0.1s
ScenarioFOG.manager.host = "localhost"
ScenarioFOg.manager.port = 9999
ScenarioFOG.manager.autoShutdown = true
**.manager.launchConfig = xmldoc("square.launchd.xml")
ScenarioFOG.manager.moduleType = "org.car2x.veins.subprojects.veins_inet.example.Car"
**.vector-recording = true
I want to get the icinga2 notification history from the DB,
e.g. timestamp1: object1 state -> DOWN, call notifycommand1, send to user1.
The main table icinga_notifications is fine,
but there is nothing in icinga_contactnotificationmethods and icinga_contact_notificationcommands.
Also, there is no data in icinga_logentries.
Did I make a mistake in the config, or what kind of configuration can cause this?
icinga2 version: r2.8.4
My ido_mysql config:
library "db_ido_mysql"

object IdoMysqlConnection "ido-mysql" {
  host = "xxx"
  port = xxx
  user = "xxx"
  password = "xxx"
  database = "xxx"
  table_prefix = "icinga_"
  instance_name = "default"
  enable_ha = true
  cleanup = {
    acknowledgements_age = 1209600
    commenthistory_age = 1209600
    contactnotificationmethods_age = 1209600
    contactnotifications_age = 1209600
    downtimehistory_age = 1209600
    eventhandlers_age = 1209600
    externalcommands_age = 1209600
    flappinghistory_age = 1209600
    hostchecks_age = 1209600
    logentries_age = 1209600
    notifications_age = 1209600
    processevents_age = 1209600
    servicechecks_age = 1209600
    statehistory_age = 1209600
    systemcommands_age = 1209600
  }
}
https://icinga.com/docs/icinga1/latest/en/db_model.html
The DB model documented there doesn't match mine.
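For what it's worth, the notification rows that do exist can be inspected with a query along these lines (a sketch against the standard IDO schema; adjust column names if your schema differs):

-- Last 20 notifications, with the host/service name resolved
-- via icinga_objects (name1 = host, name2 = service).
SELECT o.name1, o.name2, n.state, n.output, n.start_time
FROM icinga_notifications n
JOIN icinga_objects o ON o.object_id = n.object_id
ORDER BY n.start_time DESC
LIMIT 20;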
I'm using Spring Integration JMS 5.1.3 with ActiveMQ, and I ran into an error when mapping the priority header:
java.lang.IllegalArgumentException: The 'priority' header value must be a Number.
at org.springframework.util.Assert.isTrue(Assert.java:118) ~[spring-core-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.integration.IntegrationMessageHeaderAccessor.verifyType(IntegrationMessageHeaderAccessor.java:177) ~[spring-integration-core-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.messaging.support.MessageHeaderAccessor.setHeader(MessageHeaderAccessor.java:305) ~[spring-messaging-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.messaging.support.MessageHeaderAccessor.lambda$copyHeaders$0(MessageHeaderAccessor.java:396) ~[spring-messaging-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at java.util.HashMap.forEach(HashMap.java:1289) ~[na:1.8.0_181]
at org.springframework.messaging.support.MessageHeaderAccessor.copyHeaders(MessageHeaderAccessor.java:394) ~[spring-messaging-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.integration.support.MessageBuilder.copyHeaders(MessageBuilder.java:179) ~[spring-integration-core-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.integration.support.MessageBuilder.copyHeaders(MessageBuilder.java:48) ~[spring-integration-core-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.integration.jms.ChannelPublishingJmsMessageListener.onMessage(ChannelPublishingJmsMessageListener.java:327) ~[spring-integration-jms-5.1.3.RELEASE.jar:5.1.3.RELEASE]
at org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:736) ~[spring-jms-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:696) ~[spring-jms-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:674) ~[spring-jms-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:318) [spring-jms-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:257) [spring-jms-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1189) [spring-jms-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1179) [spring-jms-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1076) [spring-jms-5.1.5.RELEASE.jar:5.1.5.RELEASE]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]
My message headers are shown in the DEBUG log at the bottom of this question.
I have disabled inbound mapping of the priority header:
@Bean
public DefaultJmsHeaderMapper jmsHeaderMapper() {
    final DefaultJmsHeaderMapper mapper = new DefaultJmsHeaderMapper();
    mapper.setMapInboundDeliveryMode(true);
    mapper.setMapInboundExpiration(true);
    mapper.setMapInboundPriority(false);
    return mapper;
}
Is there any resolution for this issue?
The inbound message in DEBUG log:
2019-03-01 09:51:51.278 DEBUG 4224 --- [sage-listener-1] .i.j.ChannelPublishingJmsMessageListener : converted JMS Message [ActiveMQTextMessage {commandId = 19, responseRequired = true, messageId = ID:hot-srv-wso2-01-44620-1551368625113-1:4:3:1:1, originalDestination = null, originalTransactionId = null, producerId = ID:hot-srv-wso2-01-44620-1551368625113-1:4:3:1, destination = queue://extraction-request, transactionId = null, expiration = 0, timestamp = 1551408495720, arrival = 0, brokerInTime = 1551408705168, brokerOutTime = 1551408705172, correlationId = ID:hot-srv-wso2-01-44620-1551368625113-1:3:3:1:1, replyTo = queue://extraction-response, persistent = true, type = null, priority = 4, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = null, marshalledProperties = org.apache.activemq.util.ByteSequence#67153d1f, dataStructure = null, redeliveryCounter = 6, size = 0, properties = {Connection=Keep-Alive, User-Agent=Apache-HttpClient/4.1.1 (java 1.5), Host=10.10.15.235:8280, Accept-Encoding=gzip,deflate, jms_type=vn.sps.ias.domain.Response, priority=4, JMS_DESTINATION=ReqOutput, JMS_REPLY_TO=ReqROutput, Content-Length=38, JMS_REDELIVERED=false, Content-Type=application/json, timestamp=1551408705049}, readOnlyProperties = true, readOnlyBody = true, droppable = false, jmsXGroupFirstForConsumer = false, text = {"text":"1 was processed"}}] to integration Message payload []
For me, this problem occurred when I used the header "priority" in a custom message.
MessageBuilder.withPayload(val).setHeader("priority", true).build();
It seems "priority" is a reserved header name; Spring Integration requires its value to be a Number, so you must not use it for arbitrary values. Changing it to
MessageBuilder.withPayload(val).setHeader("prio", true).build();
solved the problem for me.
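If you actually want to set a message priority rather than rename the header, a sketch of the supported way is to give it a numeric value, e.g. via the Spring Integration constant (spring-integration-core assumed on the classpath):

import org.springframework.integration.IntegrationMessageHeaderAccessor;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;

public class PriorityExample {

    public static Message<String> build(String val) {
        // IntegrationMessageHeaderAccessor.PRIORITY is the reserved
        // "priority" header; it must be set to a Number, not a boolean.
        return MessageBuilder.withPayload(val)
                .setHeader(IntegrationMessageHeaderAccessor.PRIORITY, 4)
                .build();
    }
}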