Decreased performance in ActiveMQ Artemis after paging - jms

After the broker switched to paging mode I am seeing a strange drop in performance. Some messages now take a very long time to process:
1800 ms
10 ms
15 ms
700 ms
I am also seeing a lot of disk usage.
My broker.xml:
<configuration>
   <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="urn:activemq:core ">
      <thread-pool-max-size>50</thread-pool-max-size>
      <name>0.0.0.0</name>
      <persistence-enabled>true</persistence-enabled>
      <journal-type>ASYNCIO</journal-type>
      <paging-directory>data/paging</paging-directory>
      <bindings-directory>data/bindings</bindings-directory>
      <journal-directory>data/journal</journal-directory>
      <large-messages-directory>data/large-messages</large-messages-directory>
      <journal-datasync>true</journal-datasync>
      <journal-min-files>2</journal-min-files>
      <journal-pool-files>10</journal-pool-files>
      <journal-file-size>10M</journal-file-size>
      <journal-buffer-timeout>16000</journal-buffer-timeout>
      <journal-max-io>4096</journal-max-io>
      <disk-scan-period>5000</disk-scan-period>
      <max-disk-usage>90</max-disk-usage>
      <critical-analyzer>true</critical-analyzer>
      <critical-analyzer-timeout>120000</critical-analyzer-timeout>
      <critical-analyzer-check-period>60000</critical-analyzer-check-period>
      <critical-analyzer-policy>HALT</critical-analyzer-policy>
      <acceptors>
         <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP;useEpoll=true;</acceptor>
         <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
      </acceptors>
   </core>
</configuration>
Linux Astra, 4 CPUs, 24 GB RAM, 50 GB SSD, ActiveMQ Artemis 2.7.0.
Only a broker restart helps.

A decrease in performance is expected when paging. This is because messages are being paged to and from disk instead of being accessed directly from RAM. Even the fastest disks are much slower than RAM, so paging reduces performance.
There are a few ways to mitigate this performance decrease:
- Provide the broker's JVM with enough heap space so that paging never occurs.
- Use flow control to prevent the excessive build-up of messages on the broker that leads to paging (see the address-settings sketch below).
- Ensure that message consumption keeps up with message production so that messages don't build up on the broker in the first place (e.g. add more consumers, increase the performance of existing consumers, etc.).
- Use high-speed SSDs instead of slower traditional HDDs.
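For the size-limit and flow-control points above, the relevant knobs live in the address-settings section of broker.xml (your posted config doesn't show one, so you are running on defaults). A minimal sketch, with illustrative values rather than recommendations:

<address-settings>
   <address-setting match="#">
      <!-- how much memory an address may hold before the policy below kicks in (illustrative 100 MB) -->
      <max-size-bytes>104857600</max-size-bytes>
      <!-- BLOCK holds producers back instead of paging to disk (broker-side flow control);
           PAGE is the behaviour you are seeing now -->
      <address-full-policy>BLOCK</address-full-policy>
   </address-setting>
</address-settings>

With PAGE you keep the current behaviour but control when it starts; with BLOCK producers simply wait once the limit is reached, trading latency spikes on the consumer side for back-pressure on the producers. There is also a broker-wide global-max-size, which defaults to half the JVM's max heap, which is why giving the broker's JVM more heap pushes paging further away.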
My guess is that you're using mostly non-durable messages, so restarting the broker clears them out and eliminates the need to page, thus restoring normal performance.
Also, since you're using ActiveMQ Artemis 2.7.0, I strongly recommend you upgrade to the latest release. It's been over 2 years since 2.7.0 was released, and there have been many bug fixes and new features implemented in later versions.

Related

Quarkus Rest service Heap memory is growing without any request

I have a Quarkus REST API which uses Hibernate Panache to connect to the database. The installed features while running the application are:
Installed features: [agroal, cdi, hibernate-orm, hibernate-orm-panache, mutiny, narayana-jta, resteasy, resteasy-jackson, security, smallrye-context-propagation, smallrye-openapi, swagger-ui]
I have not placed any requests so far, but heap memory grows to 500 MB within 30 minutes. Is this normal? After some time the GC runs and memory is freed, then the heap starts growing again.

Spring Boot SockJS over stomp and Apache Artemis

I have a chat application I created using Spring Boot with SockJS over STOMP, backed by an external ActiveMQ broker.
My issue is that after approximately 4000 client connections and 10000 ActiveMQ destinations, ActiveMQ crashes with an out-of-memory error related to KahaDB.
I would like to switch to Apache Artemis, as a blog mentioned it performs better than ActiveMQ, handles a lot more client connections, and implements non-blocking I/O.
My hope was to just swap out ActiveMQ for Artemis; however, I see the clients connect and subscribe to topics and queues, but they are not receiving messages via Artemis.
Any ideas what could be the issue?
Here are my settings in the Artemis broker.xml config file:
<address-settings>
   <!-- default for catch all -->
   <address-setting match="#">
      <dead-letter-address>jms.queue.DLQ</dead-letter-address>
      <expiry-address>jms.queue.ExpiryQueue</expiry-address>
      <redelivery-delay>0</redelivery-delay>
      <!-- with -1 only the global-max-size is in use for limiting -->
      <max-size-bytes>-1</max-size-bytes>
      <message-counter-history-day-limit>10</message-counter-history-day-limit>
      <address-full-policy>PAGE</address-full-policy>
      <auto-create-jms-queues>true</auto-create-jms-queues>
      <auto-delete-jms-queues>true</auto-delete-jms-queues>
   </address-setting>
</address-settings>
Any help would be much appreciated.
Thanks in advance.
With Artemis, you should try the latest version available (1.5.2 at the time I'm writing this).
You probably need to change the address and queue names to use the Artemis 1.x prefixes (jms.queue and jms.topic).
With the upcoming 2.0 release the address model no longer requires prefixes, but on the current version you may be hitting the issue of having to define the prefixes in your application.
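As a hypothetical illustration of those prefixes (the queue and topic names here are made up, not taken from the question): JMS destinations defined in the jms section of an Artemis 1.x broker.xml, e.g.

<jms xmlns="urn:activemq:jms">
   <queue name="chat"/>
   <topic name="news"/>
</jms>

are exposed to non-JMS clients (such as a STOMP frontend) on the core addresses jms.queue.chat and jms.topic.news, so the subscription destination has to carry the full prefix rather than just chat or news.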
Feel free to start a discussion on the user's list, which is the place where contributors are mostly active.

CachingConnectionFactory or DelegatingConnectionFactory

I've been doing some research (here and here, etc.) and, with some experience, I have the following feeling about the choice between the two. It seems like:
CachingConnectionFactory is for a simple container that doesn't do much messaging transaction management (like Tomcat with ActiveMQ). Since Spring by nature has to start a new connection/session/producer for each transmission, the caching part, which extends SingleConnectionFactory, lets the same connection/session be reused, avoiding that overhead and guaranteeing some level of performance.
DelegatingConnectionFactory is for a mature application server where connection-factory and transaction management is in the hands of the server (WebSphere MQ, JBoss HornetQ, etc.), so this CF acts as a delegate and leaves the workload to the server. The actual performance then depends on how the app server's connection factory, queues, and transaction management are tuned.
I may be too drunk to make this up, so please correct me if the above does not make sense. I have one more question based on this comparison: if Spring's JmsTemplate by nature has to open/close a session and so on for each transmission, how can we improve performance by using JmsTemplate with the app server's JMS management?
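For reference, a minimal Spring XML sketch of the two setups being contrasted; the bean names, broker URL, and JNDI name are placeholders for illustration, not taken from the question:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

   <!-- caching: one shared connection with a small cache of sessions/producers on top -->
   <bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
      <property name="brokerURL" value="tcp://localhost:61616"/>
   </bean>
   <bean id="cachingConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
      <property name="targetConnectionFactory" ref="amqConnectionFactory"/>
      <property name="sessionCacheSize" value="10"/>
   </bean>

   <!-- delegating: pass a container-managed (JNDI) connection factory through untouched -->
   <bean id="containerConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
      <property name="jndiName" value="jms/ConnectionFactory"/>
   </bean>
   <bean id="delegatingConnectionFactory" class="org.springframework.jms.connection.DelegatingConnectionFactory">
      <property name="targetConnectionFactory" ref="containerConnectionFactory"/>
   </bean>

   <!-- JmsTemplate benefits from caching because each send otherwise opens and closes
        a connection/session/producer -->
   <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
      <property name="connectionFactory" ref="cachingConnectionFactory"/>
   </bean>

</beans>

In an app server the JmsTemplate would instead point at the delegating factory, and any pooling or caching is whatever the container's JNDI-provided connection factory supplies.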

How to configure OpenMQ to not store all in-flight messages in memory?

I've been load testing different JMS implementations for our notification service.
None of ActiveMQ, HornetQ, and OpenMQ behave as expected (issues with reliability and message prioritization), but so far I've had the best results with OpenMQ, except for two issues that are probably just misconfiguration (I hope). One is with JDBC storage.
Test scenario:
2 producers send messages with different priorities to one queue. 1 consumer consumes from the queue at a constant speed that is slightly lower than the rate at which the producers produce. OpenMQ runs standalone and uses PostgreSQL as persistent storage. All messages are sent and consumed from an Apache Camel route, and everything is persistent.
Issues:
After about 50000 messages I see warnings and errors in the OpenMQ log about low memory (default configuration with 256 MB heap size). Producing is throttled by the broker, and after some time the broker stops dispatching at all. The broker's JVM memory usage is at its maximum.
How must I configure the broker to achieve this goal:
The broker should not depend on queue size (up to 1,000,000 msgs) or on a memory limit. Performance is not an issue, only reliability.
Is that possible?
I cannot help with OpenMQ, but perhaps with Camel and ActiveMQ. What issues do you face with ActiveMQ? Can you post your Camel route and possibly the Spring context and the ActiveMQ config?

JSF2 Tomcat Session Backup for High Availability

I'm very interested in how other users of clustered Tomcat servers running JSF2 applications are setting up session replication or other successful failover strategies for their clustered application.
In JSF 1.2 we have been using memcached-session-manager, which I like because it scales well by avoiding all-to-all replication, but we've been experiencing de-serialization errors after updating to JSF2. We may stick with msm and fix the serialization issues, but I thought this would be a good time to poll the wider community and see how other JSF2 users are managing replication/backup, in case there is an easier and better-supported path to HA that is widely used.
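For context, a minimal sketch of the msm setup being described, as a Tomcat context.xml fragment; the memcached host names are placeholders, and the kryo transcoder line assumes the msm kryo serializer module is on the classpath (switching the serializer is one common thing people try for de-serialization problems, not a confirmed fix for this case):

<Context>
   <!-- memcached-session-manager: sessions are backed up to the listed memcached nodes -->
   <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
            memcachedNodes="n1:memcached1:11211,n2:memcached2:11211"
            sticky="true"
            requestUriIgnorePattern=".*\.(png|gif|jpg|css|js)$"
            transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"/>
</Context>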
Any feedback is appreciated. Thanks!
