IBM MQ Maximum Connections - Spring

I am using IBM MQ in my application, and the connection factory is defined at the JBoss level. The maximum pool size property at the JBoss level is configured as 50. The max instances per client setting for the channel is 999999999. Sharing conversations stays at the default of 10.
I would appreciate it if someone could elaborate on how these connections work together.
My understanding is that the maximum number of connections the JVM can establish to the queue manager is 50 (the max connection pool). If I have 50 message listener threads running in parallel, all 50 connections will be consumed. But at the channel level, sharing conversations is 10, which means up to 10 conversations can be shared over a single TCP connection. In that case, we are not utilizing this capability, as we have already exhausted the 50 connections. If 10 conversations can be shared over a single connection, shouldn't we see only 5 connections established for 50 listeners?
Also, if we have 100 messages to be consumed or loaded, given that max channels is set to a high value, does that mean 100 channel instances will be opened, or 10 channel instances with 10 shared conversations each?
Please excuse me if the above assumptions are completely wrong, as I am very much a beginner in async architecture.

Sharing conversations is used to tune load balancing on the MQ server. When more than one shared conversation is used, the client code acts as if it has a lock on, or round-robin access to, the channel instance. So if 50 listeners with sharing conversations of 10 are waiting for messages, only 5 of them are active at any moment: one for each channel instance, regardless of the threading model in the JVM.
By setting sharing conversations to 1 you eliminate this contention. The price is higher resource usage. Keep this number at 1 unless you have a large number of lightly used queues.
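To make the counting concrete, here is a small sketch using the question's numbers (an illustration of the arithmetic only; real connection counts also depend on how the pool hands out connections versus sessions). On the queue manager side, SHARECNV is an attribute of the SVRCONN channel, set for example with ALTER CHANNEL(MY.SVRCONN) CHLTYPE(SVRCONN) SHARECNV(1) in MQSC.

    // Sharing-conversation arithmetic, using the numbers from the question.
    // With SHARECNV(10), each channel instance (TCP connection) can carry up to
    // 10 conversations; with SHARECNV(1), every conversation needs its own socket.
    public class ShareCnvMath {

        static int channelInstances(int conversations, int shareCnv) {
            // one channel instance per group of shareCnv conversations, rounded up
            return (conversations + shareCnv - 1) / shareCnv;
        }

        public static void main(String[] args) {
            System.out.println(channelInstances(50, 10));  // 5  instances for 50 listeners
            System.out.println(channelInstances(100, 10)); // 10 instances for 100 listeners
            System.out.println(channelInstances(50, 1));   // 50 -- no sharing, no contention
        }
    }

So, all else being equal, 50 listeners over SHARECNV(10) should need only about 5 channel instances, not 50.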

Related

Is it possible to redistribute existing messages between all started consumers

I have ActiveMQ Artemis. A producer generates 1000 messages and a consumer processes them one by one. Now I want to process this queue with the help of two consumers. I start a new consumer, and the new messages are distributed between the two running consumers. My question: is it possible to redistribute the old messages between all started consumers?
Once messages are dispatched by the broker to a consumer, the broker can't simply recall them, as the consumer may be processing them. It's up to the consumer to cancel the messages back to the queue (e.g. by closing its connection/session).
My recommendation would be to tune your consumerWindowSize (set on the client's URL) so that a suitable number of messages are dispatched to your consumers. The default consumerWindowSize is 1M (1024 * 1024 bytes). A smaller consumerWindowSize would mean that more clients would be able to receive messages concurrently, but it would also mean that clients would need to conduct more network round-trips to tell the broker to dispatch more messages when they run low. You'll need to run benchmarks to find the right consumerWindowSize value for your use-case and performance needs.
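For illustration, a minimal sketch of setting consumerWindowSize on the client URL (the broker address and queue name are assumptions; consumerWindowSize=0 disables client-side buffering entirely, which is the extreme end of this trade-off):

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Message;
    import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

    public class SmallWindowConsumer {
        public static void main(String[] args) {
            // consumerWindowSize=0: the broker dispatches a message only when this
            // consumer is actually ready for it, so undispatched messages remain
            // available to other consumers on the queue.
            ConnectionFactory cf = new ActiveMQConnectionFactory(
                    "tcp://localhost:61616?consumerWindowSize=0");
            try (JMSContext ctx = cf.createContext()) {
                JMSConsumer consumer = ctx.createConsumer(ctx.createQueue("exampleQueue"));
                Message m = consumer.receive(5000); // wait up to 5s for a message
                System.out.println("received: " + m);
            }
        }
    }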

Data structure used for a buffer

I attended a developer interview recently and I was asked the following question:
I have a server that can handle 20 requests. Which data structure is used to model this? What will happen if there are more than 20 requests? i.e., what will you do in case of buffer overflow?
I am not from a CS background; I am transitioning from a different field and am self-taught in programming and DSA. So I would like to know the answers to these questions. Thanks in advance!
Regarding a server that can handle 20 simultaneous requests:
Your question indicates that you are not yet thinking about this in a reasonable way and are probably quite far from understanding how it works. No problem -- it just means that maybe you have more to learn than you expect.
To help you along, I will write you the correct answer, full of terms you can google for:
When a client attempts to connect to your server, the kernel puts its request into a 'listen queue' attached to your server's listening 'socket'.
When your server is ready to service a request, it 'accepts' a connection from the listening socket, which creates a new socket for the communication between the client and server, and the server then processes the request.
If your server can handle 20 simultaneous requests, it typically means that it can have up to 20 threads processing connections at the same time. That is usually accomplished by using a 'thread pool' of limited size. When a thread in the pool is available, it gets a new connection from the listening socket (it might have to wait for one) and processes it, and it is only the fact that there are at most 20 of these threads that limits the number of requests you will handle simultaneously. (Nothing to do with a buffer of any kind, really.)
If the server is already processing 20 simultaneous requests when a new one comes in, then the client's request will wait in the socket listen queue until the server eventually picks it up, or it will time out and fail if it has been waiting too long.
There is also a limit (the TCP backlog) on the number of connection requests that can be waiting in the listen queue. If a connection request comes in when the listen queue is full, it is immediately rejected. If you want your server to handle 20 simultaneous requests, then the listen queue should have length at least 20 in case 20 requests arrive at the same time -- they will all get queued until your server picks them up.
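To make that concrete, here is a minimal sketch in Java (the port and the request handling are placeholders): the ServerSocket backlog argument is the listen queue, and the fixed pool of 20 accepting threads is what caps the server at 20 simultaneous requests.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class TwentyRequestServer {
        public static void main(String[] args) throws IOException {
            // Listen queue (TCP backlog) of 20: connection requests wait here,
            // and are rejected once the queue is full.
            ServerSocket listener = new ServerSocket(8080, 20);
            // Thread pool of 20: at most 20 requests are processed simultaneously.
            ExecutorService pool = Executors.newFixedThreadPool(20);
            for (int i = 0; i < 20; i++) {
                pool.submit(() -> {
                    while (true) {
                        try {
                            // Each free worker pulls the next waiting connection off
                            // the listen queue; excess connections wait in the kernel.
                            Socket client = listener.accept();
                            handle(client);
                        } catch (IOException e) {
                            return; // listening socket closed; stop this worker
                        }
                    }
                });
            }
        }

        static void handle(Socket client) {
            try (client) {
                // read the request and write the response here
            } catch (IOException ignored) {
            }
        }
    }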

Spring custom poller using dynamic max message per poll value

I am developing an SMPP platform that has to be capable of delivering a specific number of SMS messages per second.
This has been easily implemented using AMQP with Spring Integration.
But:
I need to run the project as an active-active service on 2 nodes, and each node has a connection to 2 SMSCs.
For this configuration, I have an allowed traffic of 100 msg/s, and I would ideally like to spread my traffic over all the available connections.
A simple poller can easily be configured to 25 msg/s for each connection (4 * 25 = 100), but if one of my connections is down, I want to spread the lost capacity over the other nodes/connections live.
For this I would like to create a dynamic poller that gets information about connection status from Redis and adapts the number of messages allowed per poll at runtime (0 for the broken connection and 33% for the 3 others, for example, or 50% each if only 2 of the 4 connections are up).
Is it possible to implement this behavior with a custom PollerMetadata or should I look for some other solution?
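For reference, a rough sketch of such a dynamic poller (the RedisStatusClient helper and the adapter bean are hypothetical, @EnableScheduling is assumed, and whether a running endpoint applies a new maxMessagesPerPoll on the very next poll may depend on the Spring Integration version):

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.integration.endpoint.SourcePollingChannelAdapter;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class AdaptivePollerTuner {

        private static final int TOTAL_MSG_PER_SECOND = 100; // allowed traffic from the question

        @Autowired
        private SourcePollingChannelAdapter smsInboundAdapter; // the AMQP polling endpoint (assumed bean)

        @Autowired
        private RedisStatusClient redis; // hypothetical helper reporting healthy SMSC connections

        // Re-check the connection status every second and redistribute the allowed rate.
        @Scheduled(fixedDelay = 1000)
        public void retune() {
            int healthy = redis.healthyConnectionCount(); // e.g. 3 if one of the 4 links is down
            long perConnection = healthy == 0 ? 0 : TOTAL_MSG_PER_SECOND / healthy;
            // With a 1-second trigger, maxMessagesPerPoll effectively caps msg/s
            // for the connection this adapter feeds.
            smsInboundAdapter.setMaxMessagesPerPoll(perConnection);
        }
    }

    // Hypothetical contract; a real implementation would read status keys from Redis.
    interface RedisStatusClient {
        int healthyConnectionCount();
    }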
Polling is quite heavy and may be considered "old-fashioned" these days.
I highly recommend trying SSE (server-sent events) or WebSocket instead.
Many technologies support both of the above solutions (Spring among them).
You can find more detail in this article:
https://codeburst.io/polling-vs-sse-vs-websocket-how-to-choose-the-right-one-1859e4e13bd9

Connection pooling and scaling of instances

Note: This is a design-related question to which I couldn't find a satisfying answer, hence asking here.
I have a Spring Boot app which is deployed in the cloud (Cloud Foundry). The app connects to an Oracle database to retrieve data. The application uses a connection pool (HikariCP) to maintain the connections to the database. Let's say the number of connections is set to 5. Now, the application has the capacity to scale automatically based on load. All the instances share the same database. At any moment there could be 50 instances of the same application running, which means the total number of database connections would be 250 (i.e. 5 * 50).
Now suppose the database can handle only 100 concurrent connections. In the current scenario, 20 instances will use up the 100 available connections. What will happen when the next 30 instances try to connect to the db? If this is a design issue, how can it be avoided?
Please note that the numbers provided in the question are hypothetical for simplicity. The actual numbers are much higher.
Let's say:
Number of available DB connections = X
Number of concurrent instances of your application = Y
Maximum size of the DB connection pool within each instance of your application = X / Y
That's slightly simplistic since you might want to be able to connect to your database from other clients (support tools, for example) so perhaps a safer formula is (X * 0.95) / Y.
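For illustration, that formula wired into HikariCP (the numbers and the JDBC URL are placeholders):

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PoolSizing {
        public static void main(String[] args) {
            int dbMaxConnections = 100; // X: what the database will accept
            int maxAppInstances = 20;   // Y: upper bound on concurrently scaled instances
            // Reserve ~5% of connections for support tools and other clients.
            int perInstancePool = (int) ((dbMaxConnections * 0.95) / maxAppInstances); // = 4 here

            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // placeholder URL
            config.setMaximumPoolSize(perInstancePool);
            config.setConnectionTimeout(30_000); // fail a request rather than wait forever
            HikariDataSource ds = new HikariDataSource(config);
        }
    }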
Now, you have ensured that your application layer will not encounter 'no database connection exists' issues. However, if (X * 0.95) / Y is, say, 25, and you have more than 25 concurrent requests passing through your application which need a database connection at the same time, then some of those requests will encounter delays when trying to acquire a database connection, and if those delays exceed a configured timeout they will result in failed requests.
If you can limit throughput in your application such that you will never have more than (X * 0.95) / Y concurrent 'get database connection' requests, then hey presto, the issue disappears. But, of course, that's not typically realistic (indeed, since less is rarely more ... telling your clients to stop talking to you is generally an odd signal to send). This brings us to the crux of the issue:
Now the application has the capacity to scale automatically based on the load.
Upward scaling is not free. If you want the same responsiveness when handling 100000N concurrent requests as you have when handling N concurrent requests, then something has to give: you have to scale up the resources which those requests need. So, if they make use of database connections, then the number of concurrent connections supported by your database will have to grow. If server-side resources cannot grow in proportion to client usage, then you need some form of back pressure, or you need to carefully manage your server-side resources. One common way of managing your server-side resources is to ...
Make your service non-blocking, i.e. delegate each client request to a threadpool and respond to the client via callbacks within your service (Spring facilitates this via DeferredResult, its Async framework, or its RX integration; see the sketch below)
Configure your server-side resources (such as the maximum number of connections allowed by your DB) to match the maximum throughput of your services, based on the total size of your service instances' client-request threadpools
The client-request threadpool limits the number of currently active requests in each service instance; it does not limit the number of requests your clients can submit. This approach allows the service to scale upwards (to a limit represented by the size of the client-request threadpools across all service instances) and, in so doing, allows the service owner to safeguard resources (such as their database) from being overloaded. And since all client requests are accepted (and delegated to the client-request threadpool), client requests are never rejected, so from their perspective scaling feels seamless.
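A minimal sketch of the non-blocking point above, using Spring MVC's DeferredResult (the pool size, endpoint, and timeout are illustrative):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.context.request.async.DeferredResult;

    @RestController
    public class ReportController {

        // This pool, not the servlet container's thread pool, bounds concurrent DB work.
        private final ExecutorService dbWorkers = Executors.newFixedThreadPool(25);

        @GetMapping("/report")
        public DeferredResult<String> report() {
            DeferredResult<String> result = new DeferredResult<>(30_000L); // 30s timeout
            dbWorkers.submit(() -> {
                // acquire a pooled DB connection, run the query, release the connection...
                result.setResult("report-data");
            });
            return result; // the servlet thread is released immediately
        }
    }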
This sort of design is further augmented by a load balancer over the cluster of service instances which distributes traffic across them (round robin, or via some mechanism whereby each node reports its 'busy-ness', with that feedback being used to direct the load balancer's behaviour, e.g. direct more traffic to NodeA because it is underutilised, direct less traffic to NodeB because it is overutilised).
The above description of a non-blocking service only scratches the surface; there's plenty more to it (and loads of docs, blog posts, and helpful bits-n-pieces on the Internet), but given your problem statement (concern about server-side resources in the face of increasing load from clients), it sounds like a good fit.

WebLogic questions

I have a couple of questions:
1) How can we define in the WebLogic configuration how many concurrent users are allowed, or can be allowed, at a time for a particular application?
2) How can we tell how many threads are being used in WebLogic at a time?
3) What maximum number of JDBC connections should I set so that users are not blocked because all connections are used up? How do I keep a balance between the number of concurrent users/threads allowed and the maximum number of JDBC connections?
Thanks
It is different in each use-case scenario, but usually one WLS instance can cover 50~100 active users, assuming the instance has 2 CPUs and a 1~1.5 GB heap.
This document will be useful for your question:
"Planning Number Of Instance And Thread In Web Application Server"
1) You can use Work Managers to manage requests. However, how you restrict the number of concurrent users will vary from application to application. If it is a web app, use work managers with a max constraint equal to the number of users you want to restrict it to. However, be sure you figure out how to handle overflow - what will you do when you get 100 requests but have a 5-user restriction? Is this synchronous or asynchronous processing?
2) Ideally you would want a 1:1 ratio of threads to connections in the pool. This guarantees that no thread (user request) is waiting for a connection. I would suggest trying this. You can monitor the JDBC connection pools using the WebLogic console by adding fields to the columns under the 'Monitoring' tab for the connection. If you have a high number of waiters and/or a high wait time, then you will want to increase the number of connections in the pool. You could start with a 1:0.75 ratio of threads:connections, do performance/load testing, and adjust based on your findings. It really depends on how well you manage the connections. Do you release the connection immediately after you get the data from the database, or do you proceed with application logic and release the connection at the end of the method? If you hold the connection for a long time you will likely need closer to a 1:1 ratio, as illustrated below.
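On that last point, a sketch of releasing the connection immediately (the DAO, query, and schema are placeholders): with try-with-resources the connection goes back to the pool as soon as the data has been read, before any further application logic runs.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    public class UserDao {

        private final DataSource dataSource; // the pool, e.g. a WebLogic JDBC data source

        public UserDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public List<String> findNames(long userId) throws SQLException {
            List<String> names = new ArrayList<>();
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM users WHERE id = ?")) {
                ps.setLong(1, userId);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        names.add(rs.getString("name"));
                    }
                }
            } // connection is returned to the pool here
            return names; // further application logic runs without holding a connection
        }
    }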
1) If you assign a session to each user, then you can control the max number of sessions in your webapp's WebLogic descriptor, for example by adding the following constraint:
<session-descriptor>
    <max-in-memory-sessions>12</max-in-memory-sessions>
</session-descriptor>
It's more effective (if you mean 1 user = 1 session) than limiting the number of requests via work managers.
Another way, when you can't predict the size of sessions or the number of users, is to adjust the memory overload protection parameters and set:
weblogic.management.configuration.WebAppContainerMBean.OverloadProtectionEnabled.
More info here:
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/webapp/sessions.html#wp150466
2) The capacity of threads is managed by WebLogic through work managers. By default, just one exists: 'default', with an unlimited number of threads (!!!).
3) Usually, adapting the number of JDBC connections to the number of threads is the most effective approach.
The following page could surely be of great interest :
http://download.oracle.com/docs/cd/E11035_01/wls100/config_wls/overload.html
As far as I know, you have to control these kinds of things in weblogic-xml-jar.xml or weblogic.xml. If you look up the weblogic-xml-jar.xml elements, you can find what you need.
