Akka.NET Context.System.EventStream in AWS Lambda - aws-lambda

I'm working on a solution hosting an ASP.NET Core application in AWS Lambda. This allows each team member to own an environment during development only; for integration and in production it will run in Docker. Corporate requirements mean it has to run this way. It's a small actor system that must run one job at a time, sequentially.
The issue: locally everything works fine, but when hosted in AWS Lambda, the process, which begins with a number of specific actorRef.Tell() calls, works until the first call to Context.System.EventStream.Publish(). There's no warning or failure; it just stops until the Lambda times out. It seems like Context.System.EventStream.Subscribe(Self, typeof(SomeEvent)); just doesn't work.
I have included copious logging, so I can see exactly where it stops. I've also enabled all the Akka.NET debug logging:
"debug": {
"autoreceive": "on",
"eventStream": "on",
"lifecycle": "on",
"receive": "on",
"unhandled": "on"
}
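(For reference, I believe these JSON settings map onto the standard Akka HOCON debug switches, i.e. roughly the following; note that HOCON spells the event stream key event-stream:)

akka {
  loglevel = DEBUG
  actor {
    debug {
      autoreceive = on
      event-stream = on
      lifecycle = on
      receive = on
      unhandled = on
    }
  }
}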
Is there something inherent in how Context.System.EventStream.Publish() works, as opposed to actorRef.Tell(), that is affected by the hosting environment?
What about the AWS Lambda environment would cause this behaviour?
Is there a way to configure Akka.NET to address any constraints in AWS Lambda?
Has anyone encountered problems of this sort?
EDIT
Any aspect of Akka.NET that relies on the EventStream doesn't work, even logging via Context.GetLogger(). The actors only started logging when I changed from the context-based logger to the injected ILogger<T>.
_logger.Info("Message {#msg} validated successfully, processing", e.MsgId); // this is shown in the logs also
Context.System.EventStream.Publish(new ProcessHl7MessageCommand(e.Body));
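For context, the publish/subscribe wiring looks roughly like this (a minimal sketch: the actor and message shapes are illustrative, but the EventStream calls are the actual Akka.NET API):

using Akka.Actor;

// Illustrative command published on the EventStream.
public class ProcessHl7MessageCommand
{
    public string Body { get; }
    public ProcessHl7MessageCommand(string body) => Body = body;
}

public class Hl7MessageProcessorActor : ReceiveActor
{
    public Hl7MessageProcessorActor()
    {
        // Subscribe this actor to every ProcessHl7MessageCommand
        // published anywhere in the actor system.
        Context.System.EventStream.Subscribe(Self, typeof(ProcessHl7MessageCommand));

        Receive<ProcessHl7MessageCommand>(cmd =>
        {
            // ... process the HL7 message body ...
        });
    }

    protected override void PostStop()
    {
        // Clean up the subscription when the actor stops.
        Context.System.EventStream.Unsubscribe(Self);
        base.PostStop();
    }
}

Locally the subscriber receives the command as expected; in Lambda the Publish call above is the last thing that happens.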
Below is the lifecycle of the Lambda calls. From 13:09:07 to 13:09:12 everything is gravy, and then the published event goes nowhere:
2021-03-08T13:09:07.017+00:00 START RequestId: a57732ba-4d37-46c1-b184-6f4c4b3dc438 Version: $LATEST
2021-03-08T13:09:07.019+00:00 13:09:06.996 [WARN] Microsoft.AspNetCore.DataProtection.Repositories.EphemeralXmlRepository - Using an in-memory repository. Keys will not be persisted to storage.
2021-03-08T13:09:07.155+00:00 13:09:07.000 [WARN] Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager - Neither user profile nor HKLM registry available. Using an ephemeral key repository. Protected data will be unavailable when application exits.
2021-03-08T13:09:07.182+00:00 13:09:07.152 [WARN] Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager - No XML encryptor configured. Key {d261dd3f-bbc7-4ad4-a0a5-20cac3e43970} may be persisted to storage in unencrypted form.
2021-03-08T13:09:07.182+00:00 Akka config journal: akka.persistence.journal.sqlServer
2021-03-08T13:09:07.457+00:00 Akka config SnapshotStore: akka.persistence.snapshot-store.sqlServer
2021-03-08T13:09:07.457+00:00 [DEBUG][3/8/2021 1:09:07 PM][Thread 0001][EventStream] subscribing [akka://all-systems/] to channel Akka.Event.Debug
2021-03-08T13:09:07.457+00:00 [DEBUG][3/8/2021 1:09:07 PM][Thread 0001][EventStream] subscribing [akka://all-systems/] to channel Akka.Event.Info
2021-03-08T13:09:07.458+00:00 [DEBUG][3/8/2021 1:09:07 PM][Thread 0001][EventStream] subscribing [akka://all-systems/] to channel Akka.Event.Warning
2021-03-08T13:09:07.458+00:00 [DEBUG][3/8/2021 1:09:07 PM][Thread 0001][EventStream] subscribing [akka://all-systems/] to channel Akka.Event.Error
2021-03-08T13:09:07.538+00:00 [DEBUG][3/8/2021 1:09:07 PM][Thread 0001][EventStream] StandardOutLogger started
2021-03-08T13:09:07.555+00:00 [DEBUG][3/8/2021 1:09:07 PM][Thread 0012][akka://P80System/system] Started (Akka.Actor.SystemGuardianActor)
2021-03-08T13:09:07.555+00:00 [DEBUG][3/8/2021 1:09:07 PM][Thread 0001][EventStream] subscribing [akka://P80System/system/UnhandledMessageForwarder#274328165] to channel Akka.Event.UnhandledMessage
2021-03-08T13:09:07.555+00:00 [DEBUG][3/8/2021 1:09:07 PM][Thread 0001][EventStream(P80System)] StandardOutLogger being removed
2021-03-08T13:09:07.555+00:00 [DEBUG][3/8/2021 1:09:07 PM][Thread 0001][EventStream] unsubscribing [akka://all-systems/] from all channels
2021-03-08T13:09:07.555+00:00 [DEBUG][3/8/2021 1:09:07 PM][Thread 0014][akka://P80System/] Started (Akka.Actor.GuardianActor)
2021-03-08T13:09:12.616+00:00 END RequestId: a57732ba-4d37-46c1-b184-6f4c4b3dc438
2021-03-08T13:09:12.616+00:00 REPORT RequestId: a57732ba-4d37-46c1-b184-6f4c4b3dc438 Duration: 4995.25 ms Billed Duration: 4996 ms Memory Size: 1024 MB Max Memory Used: 249 MB Init Duration: 2362.63 ms
********
2021-03-08T13:10:03.196+00:00 START RequestId: 76e19c6d-3265-4ef0-abfe-82bf68f01bf4 Version: $LATEST
2021-03-08T13:10:03.216+00:00 13:10:03.216 [INFO] P80.Api.Controllers.MessagesController - Received 8142
2021-03-08T13:10:03.277+00:00 13:10:03.277 [INFO] P80.Ordering.Hl7MessageManagerActor - Initializing P80.Models.Messages.Commands.StartProcessingCommand
2021-03-08T13:10:03.278+00:00 13:10:03.278 [INFO] P80.Ordering.MessageLoaderActor - Loading Message 8142
2021-03-08T13:10:03.278+00:00 13:10:03.278 [INFO] P80.Ordering.MessageLoaderActor - Loading 8142
2021-03-08T13:10:03.741+00:00 13:10:03.740 [INFO] P80.Data.Context.MessageResourceAccess - Message loaded: { MsgId = 8142, CustomerName = A PHARMACY IN JURIEN BAY }
2021-03-08T13:10:03.798+00:00 13:10:03.798 [INFO] P80.Ordering.MessageLoaderActor - Message loaded: 8142
2021-03-08T13:10:03.860+00:00 13:10:03.859 [INFO] P80.Ordering.Hl7MessageManagerActor - Message 8142 validated successfully, processing
******* Here's where the first call to EventStream.Publish() occurs and nothing happens again until the lambda closes.
2021-03-08T13:10:30.258+00:00 13:10:30.257 [INFO] P80.Api.Controllers.MessagesController - Message 8142 still processing after 00:00:27...
2021-03-08T13:10:30.262+00:00 END RequestId: 76e19c6d-3265-4ef0-abfe-82bf68f01bf4
2021-03-08T13:10:30.262+00:00 REPORT RequestId: 76e19c6d-3265-4ef0-abfe-82bf68f01bf4 Duration: 27063.17 ms Billed Duration: 27064 ms Memory Size: 1024 MB Max Memory Used: 285 MB

Related

Filebeat can't send logs after Elasticsearch cluster failure

We recently had a problem where our ES cluster failed. The problem was resolved, but Filebeat failed to send new data after the failure.
Here's a portion of the logs - it seems to retry forever but can't send the data:
2019-04-08T11:52:04.182+0300 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.4.0
2019-04-08T11:52:04.185+0300 INFO template/load.go:73 Template already exists and will not be overwritten.
2019-04-08T11:52:04.185+0300 INFO [publish] pipeline/retry.go:172 retryer: send unwait-signal to consumer
2019-04-08T11:52:04.185+0300 INFO [publish] pipeline/retry.go:174 done
2019-04-08T11:52:59.058+0300 INFO [publish] pipeline/retry.go:149 retryer: send wait signal to consumer
2019-04-08T11:52:59.058+0300 INFO [publish] pipeline/retry.go:151 done
2019-04-08T11:53:00.065+0300 ERROR pipeline/output.go:92 Failed to publish events: temporary bulk send failure
2019-04-08T11:53:00.065+0300 INFO [publish] pipeline/retry.go:172 retryer: send unwait-signal to consumer
2019-04-08T11:53:00.065+0300 INFO [publish] pipeline/retry.go:174 done
2019-04-08T11:53:00.065+0300 INFO [publish] pipeline/retry.go:149 retryer: send wait signal to consumer
2019-04-08T11:53:00.065+0300 INFO [publish] pipeline/retry.go:151 done
I restarted the Filebeat service and all data was sent to ES without any problem.
Is this a known issue? The Filebeat version is quite old; should I update?
I'm running Filebeat 6.3.0 as a service on Windows. The Elasticsearch version is 6.4.0.
Please show your config.
I have encountered this error before because I did not include the protocol (http://) in the hosts setting.
Below is a correct configuration file:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/dmesg
    - /var/log/syslog
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://192.168.13.173:30014"]
Documentation: https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html

SpringXD -> twitterstream --follow: ending with Http error

I'm trying to create a stream that should follow #BBCBreaking (which should have Twitter ID 5402612), but I keep getting the following HTTP error:
2016-03-28T02:13:12+0200 1.3.1.RELEASE INFO DeploymentSupervisor-0 zk.ZKStreamDeploymentHandler - Deployment status for stream 'mystream': DeploymentStatus{state=deployed}
2016-03-28T02:13:13+0200 1.3.1.RELEASE WARN twitterSource-1-1 twitter.TwitterStreamChannelAdapter - Http error, waiting for 5 seconds before restarting
2016-03-28T02:13:19+0200 1.3.1.RELEASE WARN twitterSource-1-1 twitter.TwitterStreamChannelAdapter - Http error, waiting for 10 seconds before restarting
2016-03-28T02:13:30+0200 1.3.1.RELEASE WARN twitterSource-1-1 twitter.TwitterStreamChannelAdapter - Http error, waiting for 20 seconds before restarting
my stream command is:
stream create --name mystream --definition "twitterstream --follow='5402612' | log" --deploy
This is running on Spring XD 1.3.1.RELEASE.
Any idea why I'm getting this error?
You can debug such situations by enabling DEBUG logging - the log config is in the xd/config folder in .groovy files, e.g. xd-singlenode-logback.groovy.
Set the loggers for org.springframework.integration, org.springframework.xd, and org.springframework.xd.dirt.server to DEBUG, and add a logger for org.springframework.social.twitter, also at DEBUG; see the sketch below.
Or you can set all of org.springframework to DEBUG and comment out the more specific ones.
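A minimal sketch of those logger lines in logback's Groovy DSL (the rest of xd-singlenode-logback.groovy stays as shipped):

logger("org.springframework.integration", DEBUG)
logger("org.springframework.xd", DEBUG)
logger("org.springframework.xd.dirt.server", DEBUG)
logger("org.springframework.social.twitter", DEBUG)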

Kafka | Unable to publish data to broker - ClosedChannelException

I am trying to run a simple Kafka producer/consumer example on HDP but am facing the exception below.
[2016-03-03 18:26:38,683] WARN Fetching topic metadata with correlation id 0 for topics [Set(page_visits)] from broker [BrokerEndPoint(0,sandbox.hortonworks.com,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:120)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
at kafka.producer.SyncProducer.send(SyncProducer.scala:115)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:68)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:89)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.CoreUtils$.swallowError(CoreUtils.scala:51)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:68)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
[2016-03-03 18:26:38,688] ERROR fetching topic metadata for topics [Set(page_visits)] from broker [ArrayBuffer(BrokerEndPoint(0,sandbox.hortonworks.com,9092))] failed (kafka.utils.CoreUtils$)
kafka.common.KafkaException: fetching topic metadata for topics [Set(page_visits)] from broker [ArrayBuffer(BrokerEndPoint(0,sandbox.hortonworks.com,9092))] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:73)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:68)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:89)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.CoreUtils$.swallowError(CoreUtils.scala:51)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:68)
at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:105)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:88)
at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:68)
at scala.collection.immutable.Stream.foreach(Stream.scala:547)
at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:67)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:45)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:120)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:75)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:74)
at kafka.producer.SyncProducer.send(SyncProducer.scala:115)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:59)
... 12 more
[2016-03-03 18:26:38,693] WARN Fetching topic metadata with correlation id 1 for topics [Set(page_visits)] from broker [BrokerEndPoint(0,sandbox.hortonworks.com,9092)] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
Here is the command that I am using for the producer:
./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:9092 --topic page_visits
After doing a bit of googling, I found that I need to add the advertised.host.name property to the server.properties file.
Here is my server.properties file:
# Generated by Apache Ambari. Thu Mar 3 18:12:50 2016
advertised.host.name=sandbox.hortonworks.com
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
broker.id=0
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=false
fetch.purgatory.purge.interval.requests=10000
host.name=sandbox.hortonworks.com
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter
kafka.timeline.metrics.host=sandbox.hortonworks.com
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=6188
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXT://sandbox.hortonworks.com:6667
log.cleanup.interval.mins=10
log.dirs=/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
offsets.topic.segment.bytes=104857600
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
zookeeper.connect=sandbox.hortonworks.com:2181
zookeeper.connection.timeout.ms=15000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
After adding the property I am still getting the same exception.
Any suggestions?
I had a similar problem. First I checked the listeners property for the Kafka broker in Ambari.
It's also possible to check it with:
[root@sandbox bin]# cat /usr/hdp/current/kafka-broker/conf/server.properties | grep listeners
listeners=PLAINTEXT://sandbox.hortonworks.com:6667
As you can see, Ambari replaces localhost with the hostname, and the port is 6667 rather than the 9092 used above.
Then I checked that the broker really listens on that port:
[root@sandbox bin]# netstat -tulpn | grep 6667
tcp 0 0 10.0.2.15:6667 0.0.0.0:* LISTEN 11137/java
The next step was to launch the producer:
./kafka-console-producer.sh --broker-list 10.0.2.15:6667 --topic test
Finally, I launched the consumer:
./kafka-console-consumer.sh --zookeeper 10.0.2.15:2181 --topic test --from-beginning
After typing a few words and hitting Enter on the producer side, the consumer received the messages.
As per the log, it seems the Kafka server (broker) is not running; the broker should be running first.
Producers and consumers are client programs that interact with the broker servers and also with ZooKeeper.
Before running the producer or consumer, please check whether the broker and ZooKeeper are running successfully.
Run the server:
./kafka-server-start.sh ../config/server.properties
Check the logs for any errors; if there are no errors, start producing messages to the server.
Check the ZooKeeper service as well; a quick way is sketched below.
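For example (a sketch using the stock Kafka scripts; on the HDP sandbox ZooKeeper normally already runs as a service):

# start ZooKeeper if it is not already running
./zookeeper-server-start.sh ../config/zookeeper.properties
# liveness check: a healthy ZooKeeper answers "imok"
echo ruok | nc localhost 2181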
I modified the file /usr/hdp/current/kafka-broker/config/server.properties with the following two lines:
advertised.host.name=sandbox.hortonworks.com
listeners=PLAINTEXT://sandbox.hortonworks.com:6667,PLAINTEXT://0.0.0.0:6667
and ran the following commands:
./kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic tst2
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic tst2 --from-beginning
With this it's working fine.

kafka consumer group expired?

Immediately after I commit an offset using the golang client (https://github.com/Shopify/sarama), the offset checker shows it:
./kafka-consumer-offset-checker.sh --zookeeper=localhost:2181 --topic=my-replicated-topic --group=ib --broker-info
Group Topic Pid Offset logSize Lag Owner
ib my-replicated-topic 0 12 12 0 none
BROKER INFO
1 -> localhost:9093
However, after several minutes, I ran the same checker command:
./kafka-consumer-offset-checker.sh --zookeeper=localhost:2181 --topic=my-replicated-topic --group=ib --broker-info
Exiting due to: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /consumers/ib/offsets/my-replicated-topic/0.
And when I check ZooKeeper, the node never exists at any time, even when the checker lists the offset correctly.
sarama commit: 23d523386ce0c886e56c9faf1b9c78b07e5b8c90
kafka 0.8.2.1
golang 1.3
kafka server config:
broker.id=1
port=9093
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs-1
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
It seems to me that the consumer group gets expired. How can I make the consumer group persist?
Sarama does not talk to ZooKeeper; I should use a high-level consumer group library instead.
https://github.com/Shopify/sarama/issues/452

spring XD rabbit source module fails to process messages, first message stays unacknowledged

I am trying a simple Spring XD application to load log events into HDFS. I have configured the target application with the Spring AMQP/Rabbit log4j appender (the org.springframework.amqp.rabbit.log4j.AmqpAppender class) to pump log messages to a pre-configured exchange. I set up the following stream to pull those messages and push them to HDFS, where both the source and sink modules are off-the-shelf XD modules.
Stream definition:
xd:>stream create --name demoQ1 --definition "rabbit | hdfs --rollover=15 --directory=/user/root" --deploy
Created and deployed new stream 'demoQ1'
xd:>stream list
Stream Name Stream Definition Status
----------- -------------------------------------------------- --------
demoQ1 rabbit | hdfs --rollover=15 --directory=/user/root deployed
The AMQP appender is publishing the messages to the exchange, which routes them to the demoQ1 queue. The rabbit source picks up the first message and then gets stuck, as it does not acknowledge the message. What could be the reason?
In your container log, do you see this: "failed to write Message payload to HDFS"?
If so, then you need to use type conversion between modules; from the rabbit source to the hdfs sink the messages will simply be byte arrays.
Your stream definition could be:
stream create --name demoQ1 --definition "rabbit --outputType=text/plain | hdfs --rollover=15 --directory=/user/root" --deploy
or,
stream create --name demoQ1 --definition "rabbit | hdfs --inputType=text/plain --rollover=15 --directory=/user/root" --deploy
Note the outputType or the inputType option in source/sink respectively.
In this case, the hdfs sink's HdfsStoreMessageHandler expects the payload to be of type String.
For more details on the type conversion, please check this out:
https://github.com/spring-projects/spring-xd/wiki/Type-Conversion
I enabled debug logs on the Spring XD container running the rabbit module. They showed the following exception repeatedly happening for the first message; the message is requeued, so it stays unacknowledged and the rabbit source cannot process further messages.
To resolve the problem, I removed this property from the log4j appender properties: log4j.appender.amqp.contentEncoding=null. This property explicitly sets the name of the encoding to the literal string "null", which seems to be a bug; I was expecting null to mean no encoding specified :)
Here is the exception in the log, continuously repeating as the message is rejected and re-queued:
19:29:17,713 DEBUG SimpleAsyncTaskExecutor-1 listener.BlockingQueueConsumer:268 - Received message: (Body:'Hello' MessageProperties [headers={categoryName=org.apache.hadoop.yarn.server.nodemanager.NodeManager, level=INFO}, timestamp=Sat Apr 19 19:21:52 PDT 2014, messageId=null, userId=null, appId=NodeManager, clusterId=null, type=null, correlationId=null, replyTo=null, contentType=text/plain, contentEncoding=null, contentLength=0, deliveryMode=PERSISTENT, expiration=null, priority=0, redelivered=true, receivedExchange=test-exch, receivedRoutingKey=rk1, deliveryTag=184015, messageCount=0])
19:29:17,715 WARN SimpleAsyncTaskExecutor-1 listener.SimpleMessageListenerContainer:530 - Execution of Rabbit message listener failed, and no ErrorHandler has been set.
org.springframework.amqp.rabbit.listener.ListenerExecutionFailedException: Listener threw exception
    at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.wrapToListenerExecutionFailedExceptionIfNeeded(AbstractMessageListenerContainer.java:751)
    at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:690)
    at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:583)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$001(SimpleMessageListenerContainer.java:75)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$1.invokeListener(SimpleMessageListenerContainer.java:154)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.invokeListener(SimpleMessageListenerContainer.java:1111)
    at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:556)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:904)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:888)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.access$500(SimpleMessageListenerContainer.java:75)
    at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:989)
    at java.lang.Thread.run(Thread.java:722)
Caused by: org.springframework.amqp.support.converter.MessageConversionException: failed to convert text-based Message content
    at org.springframework.amqp.support.converter.SimpleMessageConverter.fromMessage(SimpleMessageConverter.java:100)
    at org.springframework.integration.amqp.inbound.AmqpInboundChannelAdapter$1.onMessage(AmqpInboundChannelAdapter.java:73)
    at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:688)
    ... 10 more
Caused by: java.io.UnsupportedEncodingException: null
    at java.lang.StringCoding.decode(StringCoding.java:190)
    at java.lang.String.<init>(String.java:416)
    at java.lang.String.<init>(String.java:481)
    at org.springframework.amqp.support.converter.SimpleMessageConverter.fromMessage(SimpleMessageConverter.java:97)
    ... 12 more
19:29:17,715 DEBUG SimpleAsyncTaskExecutor-1 listener.BlockingQueueConsumer:657 - Rejecting messages (requeue=true)
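For reference, a working appender configuration would look roughly like this (a sketch: property names follow Spring AMQP's AmqpAppender setters; the host, exchange, and routing key values are taken from the log above):

log4j.appender.amqp=org.springframework.amqp.rabbit.log4j.AmqpAppender
log4j.appender.amqp.host=localhost
log4j.appender.amqp.port=5672
log4j.appender.amqp.exchangeName=test-exch
log4j.appender.amqp.routingKeyPattern=rk1
log4j.appender.amqp.layout=org.apache.log4j.PatternLayout
log4j.appender.amqp.layout.ConversionPattern=%d %p %t [%c] - %m%n
# contentEncoding is deliberately omitted: setting it to the literal
# string "null" is what produced java.io.UnsupportedEncodingException: null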
