I am using SockJS and STOMP to send files to a backend API for processing.
My problem is that the upload gets stuck if more than two uploads are running at the same time.
Ex:
User 1 -> uploads a file -> the backend receives the file correctly
User 2 -> uploads a file -> the backend receives the file correctly
User 3 -> uploads a file -> the backend is not called until one of the previous uploads has completed
(after a minute, User 1 completes its upload and the third upload starts)
The relevant entry I can see in the log is the following:
2021-06-28 09:43:34,884 INFO [MessageBroker-1] org.springframework.web.socket.config.WebSocketMessageBrokerStats.lambda$initLoggingTask$0: WebSocketSession[11 current WS(5)-HttpStream(6)-HttpPoll(0), 372 total, 26 closed abnormally (26 connect failure, 0 send limit, 16 transport error)], stompSubProtocol[processed CONNECT(302)-CONNECTED(221)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 2, active threads = 2, queued tasks = 263, completed tasks = 4481], outboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 607], sockJsScheduler[pool size = 1, active threads = 1, queued tasks = 14, completed tasks = 581444]
It seems clear that the thread pool is saturated:
inboundChannel[pool size = 2, active threads = 2
but I really cannot find a way to increase the size.
This is the code:
Client side
ws = new SockJS(host + "/createTender");
stompClient = Stomp.over(ws);
Server side configuration
@EnableWebSocketMessageBroker
public class WebSocketBrokerConfig extends AbstractWebSocketMessageBrokerConfigurer {
...
...
@Override
public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
    registration.setMessageSizeLimit(100240 * 10240);
    registration.setSendBufferSizeLimit(100240 * 10240);
    registration.setSendTimeLimit(20000);
}
I've already tried changing the configureWebSocketTransport parameters, but it did not work.
How can I increase the pool size of the WebSocket inbound channel?
The inbound channel of the WebSocket can be reconfigured by overriding this method:
@Override
public void configureClientInboundChannel(ChannelRegistration registration) {
    registration.taskExecutor().corePoolSize(4);
    registration.taskExecutor().maxPoolSize(4);
}
The official documentation suggests a pool size equal to the number of cores. Once maxPoolSize is reached, additional requests are held in an internal queue. So, with this configuration, I can process 4 requests concurrently.
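For reference, a minimal sketch of the complete configuration class combining both overrides; the pool sizes and limits are the illustrative values used in this question:
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.ChannelRegistration;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.WebSocketTransportRegistration;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketBrokerConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureWebSocketTransport(WebSocketTransportRegistration registration) {
        registration.setMessageSizeLimit(100240 * 10240);
        registration.setSendBufferSizeLimit(100240 * 10240);
        registration.setSendTimeLimit(20000);
    }

    @Override
    public void configureClientInboundChannel(ChannelRegistration registration) {
        // Four threads process inbound STOMP frames concurrently;
        // additional frames wait in the executor's internal queue.
        registration.taskExecutor().corePoolSize(4).maxPoolSize(4);
    }
}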
Related
I am using quarkus.rest-client to call an external API and want to limit the frequency of those calls to, say, 50 per second, so that I don't drown the external service. What is the recommended way of achieving this through code, without a side-car approach?
You could use the @Bulkhead MicroProfile annotation to set the maximum number of threads allowed to execute your method concurrently. But this will only work within a single instance of your application.
Eclipse Microprofile Documentation
Example copied from the above documentation:
@Bulkhead(5) // maximum 5 concurrent requests allowed
public Connection serviceA() {
    Connection conn = null;
    counterForInvokingServiceA++;
    conn = connectionService();
    return conn;
}

// maximum 5 concurrent requests allowed, maximum 8 requests allowed in the waiting queue
@Asynchronous
@Bulkhead(value = 5, waitingTaskQueue = 8)
public Future<Connection> serviceA() {
    Connection conn = null;
    counterForInvokingServiceA++;
    conn = connectionService();
    return CompletableFuture.completedFuture(conn);
}
You can even set the value at deploy time, so you can change this parameter without a new build.
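For example, MicroProfile Fault Tolerance lets you override annotation parameters externally through MicroProfile Config, using keys of the form classname/methodname/annotation/parameter; a sketch in application.properties, where com.example.MyService is a hypothetical class declaring serviceA:
# com.example.MyService is a placeholder; use your fully qualified class name
com.example.MyService/serviceA/Bulkhead/value=10
com.example.MyService/serviceA/Bulkhead/waitingTaskQueue=20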
To use @Bulkhead, you must add the Fault Tolerance extension to your project:
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-smallrye-fault-tolerance</artifactId>
</dependency>
I have an OutOfMemoryError while reading messages from a queue holding 2M messages, and I am trying to find a way to read the messages in batches of 1000, for example.
Here is my code:
List<TextMessage> messages = jmsTemplate.browse(JndiQueues.BACKOUT, (session, browser) -> {
    Enumeration<?> browserEnumeration = browser.getEnumeration();
    List<TextMessage> messageList = new ArrayList<TextMessage>();
    while (browserEnumeration.hasMoreElements()) {
        messageList.add((TextMessage) browserEnumeration.nextElement());
    }
    return messageList;
});
Thanks.
Perform the browse on a different thread and pass subsets of the results to the main thread via a BlockingQueue<List<TextMessage>>, e.g. a LinkedBlockingQueue with a small capacity.
The browsing thread will block when the queue is full. When the main thread removes an entry from the queue, the browser can add a new one.
It probably makes sense to use a capacity of at least 2 so that the next list is available immediately, as in the sketch below.
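A minimal sketch of that idea, reusing jmsTemplate and JndiQueues.BACKOUT from the question; CHUNK_SIZE, process(...) and the empty-list end-of-browse marker are illustrative choices, and the enclosing method is assumed to declare throws InterruptedException:
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.jms.TextMessage;

final int CHUNK_SIZE = 1000;
final BlockingQueue<List<TextMessage>> chunks = new LinkedBlockingQueue<>(2);

Thread browserThread = new Thread(() -> {
    jmsTemplate.browse(JndiQueues.BACKOUT, (session, browser) -> {
        try {
            Enumeration<?> browserEnumeration = browser.getEnumeration();
            List<TextMessage> chunk = new ArrayList<>(CHUNK_SIZE);
            while (browserEnumeration.hasMoreElements()) {
                chunk.add((TextMessage) browserEnumeration.nextElement());
                if (chunk.size() == CHUNK_SIZE) {
                    chunks.put(chunk); // blocks while the main thread is still busy
                    chunk = new ArrayList<>(CHUNK_SIZE);
                }
            }
            if (!chunk.isEmpty()) {
                chunks.put(chunk); // flush the last partial chunk
            }
            chunks.put(Collections.emptyList()); // empty list marks end of browse
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return null;
    });
});
browserThread.start();

// Main thread: handle 1000 messages at a time instead of 2M at once.
List<TextMessage> chunk;
while (!(chunk = chunks.take()).isEmpty()) {
    process(chunk); // illustrative per-batch handling
}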
I have a project to transfer files using IBM MQ. There are 10000 clients and one data center. The largest file size is almost 8MB. The MQ cluster contains three MQ managers running on different Windows servers. Each MQ manager has 5 channels for clients and 5 channels for the data center. There are two test cases; clients are evenly distributed across the MQ managers in each case. Not losing any file is the most important requirement.
Case 1:
Every client sends 50 files to the data center at the same time. The file sizes are between 150KB and 5MB.
In this case, the total size one client sends is almost 80MB.
Case 2:
The data center sends 10 identical files to every client at the same time. In this case, I create a topic named `myTopic`, and 10000 clients subscribe to this topic. The data center sends the 10 identical files to the topic.
The MQ managers have a heavy load. I have already set some attributes in IBM MQ:
Queue Manager:
Max handles: 100000
Maximum message length: 100MB
Max channels: 10000
Is there any attribute that could increase the performance?
5/11 update:
First, I have modified the description of case 2 above. I have a data center server with a 4-core CPU and 32GB of RAM. I use 4 client servers to simulate 10000 clients; each client server has a 4-core CPU and 16GB of RAM.
In case 1, it takes about 37 minutes for 1000 clients to send their files to the data center. There is not enough memory on the data center server when it receives files from 2000 clients; I find 20GB of memory used for buffer/cache. Here is the Java code I use to receive files:
try {
    String filePath = ConfigReader.getInstance().getConfig("filePath");
    MQMessage mqMsg = new MQMessage();
    mqMsg.messageId = CMQC.MQMI_NONE;
    mqMsg.correlationId = CMQC.MQCI_NONE;
    mqMsg.groupId = CMQC.MQGI_NONE;
    int flag = 1;
    while (true) {
        try {
            MQQueueManager queueManager = new MQQueueManager("QMGR1");
            int option = CMQC.MQTOPIC_OPEN_AS_SUBSCRIPTION | CMQC.MQSO_DURABLE;
            MQTopic subscriber = queueManager.accessTopic("", "myTopic", option, null, "datacenter");
            subscriber.get(mqMsg);
            if (mqMsg.getDataLength() != 0) {
                String fileName = filePath + "_file" + flag + ".txt";
                byte[] b = new byte[mqMsg.getDataLength()];
                mqMsg.readFully(b);
                System.out.println("Receive " + fileName + ", complete time: " + System.currentTimeMillis());
                Path path = Paths.get(fileName);
                System.out.println("Write " + fileName + ", start time: " + System.currentTimeMillis());
                Files.write(path, b);
                System.out.println("Write " + fileName + ", complete time: " + System.currentTimeMillis());
                flag++;
            }
        } catch (MQException e) {
            // e.printStackTrace();
            if (e.reasonCode != 2033) {
                e.printStackTrace();
            }
        } finally {
            mqMsg.clearMessage();
            mqMsg.messageId = CMQC.MQMI_NONE;
            mqMsg.correlationId = CMQC.MQCI_NONE;
            mqMsg.groupId = CMQC.MQGI_NONE;
        }
    }
} catch (Exception e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
I use a byte array to read each message and write it to disk. Is it possible that the byte arrays are not released and take up 20GB of memory?
In case 2, I find that if I send a 5MB file to myTopic, which has 1000 subscribers, via MQ manager01, MQ manager01 takes a lot of time to sync with the other cluster members. The disks on the MQ servers are very busy. There is another problem: sometimes it takes only 7 seconds to send a 5MB file, sometimes 90 seconds. Here is my Java code to send files:
try {
    MQQueueManager queueManager = new MQQueueManager("QMGR1");
    MQTopic publisher = queueManager.accessTopic("myTopic", "", CMQC.MQTOPIC_OPEN_AS_PUBLICATION,
            CMQC.MQOO_OUTPUT);
    System.out.println("---- start publish , time: " + System.currentTimeMillis() + " ----");
    publisher.put(InMemoryDataProvider.getInstance().getMessage("my5MBFile"));
    System.out.println("---- end publish , time: " + System.currentTimeMillis() + " ----");
    publish.getPublisher().close();
} catch (MQException e) {
    System.out.println("threadNum: " + publish.getThreadNo() + " publish error");
    if (e.reasonCode != 2033) {
        e.printStackTrace();
    }
}
A couple of things.
MQ has MFT (Managed File Transfer, formerly FTE), which transfers files for you. I think it uses non-persistent messages, so you avoid the disk overhead.
You might try checking your .ini files for parameters like ClntRcvBuffSize=0, see here. A value of 0 says use the operating system values.
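For instance, a hedged sketch of the TCP stanza in qm.ini; the exact parameter set varies by MQ version, and 0 means let the operating system choose:
TCP:
   SndBuffSize=0
   RcvBuffSize=0
   ClntSndBuffSize=0
   ClntRcvBuffSize=0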
TCP used to send data in short packets (64KB chunks), then wait until the packets had been acknowledged before sending more. If the connection is reliable, you get higher throughput by sending bigger logical packets, a technique known as Dynamic Right Sizing. See here.
It works best when the connection is long-lived and sends a lot of data. For example, the first few chunks may be 64KB, then the size increases a bit to 128KB chunks, eventually up to 100MB (or more) if needed.
You need to set both ends.
Depending on the platform, you can use the ss command (a netstat replacement) to display the various window sizes.
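For example, on Linux, something like the following displays per-connection TCP internals (window sizes included); the port, MQ's default listener port, is an assumption about your setup:
# -t = TCP sockets, -i = show internal TCP info (window sizes, cwnd, etc.)
ss -ti sport = :1414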
For your QM-to-QM channels, specify a large BATCHSZ and BATCHLIM, though this may make your disk I/O worse as the data gets to the remote end faster.
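A hedged MQSC sketch; the channel name is illustrative. BATCHSZ is the maximum number of messages per batch, and BATCHLIM is the batch data limit in KB:
* in runmqsc on each sending queue manager
ALTER CHANNEL(TO.QMGR2) CHLTYPE(SDR) BATCHSZ(200) BATCHLIM(20000)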
I'm trying to implement a "file dispatcher" on zmq (actually jeromq, I'd rather avoid jni).
What I need is to load balance incoming files to processors:
- each file is handled by only one processor
- files are potentially large, so I need to manage the file transfer
Ideally I would like something like https://github.com/zeromq/filemq, but:
- with a push/pull behaviour rather than publish/subscribe
- being able to handle the received file rather than writing it to disk
My idea is to use a mix of the taskvent/tasksink and asyncsrv samples.
Client side:
- one PULL socket to be notified of a file to be processed
- one DEALER socket to handle the (async) file transfer chunk by chunk
Server side:
- one PUSH socket to dispatch incoming file (names)
- one ROUTER socket to handle file requests
- a few DEALER workers managing the file transfers for clients, connected to the router via an inproc proxy
My first question is: does this seem like the right way to go? Anything simpler, maybe?
My second question is: my current implementation gets stuck on sending out the actual file data.
Clients are notified by the server and issue a request. The server worker gets the request and writes the response back to the inproc queue, but the response never seems to leave the server (I can't see it in Wireshark) and the client is stuck on poller.poll awaiting the response.
It's not a matter of sockets being full and dropping data; I'm starting with very small files sent in one go.
Any insight?
Thanks!
==================
Following raffian's advice, I simplified my code, removing the extra push/pull socket (it does make sense now that you say it).
I'm left with the "non-working" socket!
Here's my current code. It has many flaws that are out of scope for now (client ID, next chunk, etc.).
For now, I'm just trying to have both sides talking to each other roughly in that sequence.
Server
object FileDispatcher extends App
{
  val context = ZMQ.context(1)
  // server is the frontend that pushes filenames to clients and receives requests
  val server = context.socket(ZMQ.ROUTER)
  server.bind("tcp://*:5565")
  // backend handles clients requests
  val backend = context.socket(ZMQ.DEALER)
  backend.bind("inproc://backend")
  // files to dispatch given in arguments
  args.toList.foreach { filepath =>
    println(s"publish $filepath")
    server.send("newfile".getBytes(), ZMQ.SNDMORE)
    server.send(filepath.getBytes(), 0)
  }
  // multithreaded server: router hands out requests to DEALER workers via an inproc queue
  val NB_WORKERS = 1
  val workers = List.fill(NB_WORKERS)(new Thread(new ServerWorker(context)))
  workers foreach (_.start)
  ZMQ.proxy(server, backend, null)
}

class ServerWorker(ctx: ZMQ.Context) extends Runnable
{
  override def run()
  {
    val worker = ctx.socket(ZMQ.DEALER)
    worker.connect("inproc://backend")
    while (true)
    {
      val zmsg = ZMsg.recvMsg(worker)
      zmsg.pop // drop inner queue envelope (?)
      val cmd = zmsg.pop // cmd is used to continue/stop
      cmd.toString match {
        case "get" =>
          val file = zmsg.pop.toString
          println(s"clientReq: cmd: $cmd , file:$file")
          // 1- brute force: ignore cmd and send full file in one go!
          worker.send("eof".getBytes, ZMQ.SNDMORE) // header indicates this is the last chunk
          val bytes = io.Source.fromFile(file).mkString("").getBytes // dirty read, for testing only!
          worker.send(bytes, 0)
          println(s"${bytes.size} bytes sent for $file: " + new String(bytes))
        case x => println("cmd " + x + " not implemented!")
      }
    }
  }
}
Client
object FileHandler extends App
{
  val context = ZMQ.context(1)
  // client is notified of new files then fetches file from server
  val client = context.socket(ZMQ.DEALER)
  client.connect("tcp://*:5565")
  val poller = new ZMQ.Poller(1) // "poll" responses
  poller.register(client, ZMQ.Poller.POLLIN)
  while (true)
  {
    poller.poll
    val zmsg = ZMsg.recvMsg(client)
    val cmd = zmsg.pop
    val data = zmsg.pop
    // header is the command/action
    cmd.toString match {
      case "newfile" => startDownload(data.toString) // message content is the filename to fetch
      case "chunk" => gotChunk(data.toString, zmsg.pop.getData) // filename, chunk
      case "eof" => endDownload(data.toString, zmsg.pop.getData) // filename, last chunk
    }
  }
  def startDownload(filename: String)
  {
    println("got notification: start download for " + filename)
    client.send("get".getBytes, ZMQ.SNDMORE) // command header
    client.send(filename.getBytes, 0)
  }
  def gotChunk(filename: String, bytes: Array[Byte])
  {
    println("got chunk for " + filename + ": " + new String(bytes)) // callback the user here
    client.send("next".getBytes, ZMQ.SNDMORE)
    client.send(filename.getBytes, 0)
  }
  def endDownload(filename: String, bytes: Array[Byte])
  {
    println("got eof for " + filename + ": " + new String(bytes)) // callback the user here
  }
}
On the client, you don't need PULL with DEALER.
DEALER is PUSH and PULL combined, so use DEALER only; your code will be simpler.
The same goes for the server: unless you're doing something special, you don't need PUSH with ROUTER, because ROUTER is bidirectional.
The server worker gets the request and writes the response back to the inproc queue, but the response never seems to leave the server (I can't see it in Wireshark) and the client is stuck on poller.poll awaiting the response.
Code Problems
In the server, you're dispatching files with args.toList.foreach before starting the proxy; this is probably why nothing is leaving the server. Start the proxy first, then use it. Also, once you call ZMQ.proxy(...), the code blocks indefinitely, so you'll need a separate thread to send the filepaths.
The client may have an issue with the poller. The typical pattern for polling is:
ZMQ.Poller items = new ZMQ.Poller(1);
items.register(receiver, ZMQ.Poller.POLLIN);
while (true) {
    items.poll(TIMEOUT);
    if (items.pollin(0)) {
        message = receiver.recv(0);
    }
}
In the above code: 1) poll until the timeout, 2) then check for messages, and 3) if available, get them with receiver.recv(0). But in your code, you poll and then drop straight into recv() without checking. You need to check whether the poller has messages for the polled socket before calling recv(); otherwise, the receiver will hang if there are no messages.
We have a situation where we set up a component to run batch jobs using Spring Batch remotely. We send a JMS message with the job XML path, name, parameters, etc., and we wait on the calling batch client for a response from the server.
The server reads the queue and calls the appropriate method to run the job and return the result, which our messaging framework does by:
this.jmsTemplate.send(queueName, messageCreator);
this.LOGGER.debug("Message sent to '" + queueName + "'");
try {
    final Destination replyTo = messageCreator.getReplyTo();
    final String correlationId = messageCreator.getMessageId();
    this.LOGGER.debug("Waiting for the response '" + correlationId + "' back on '" + replyTo + "' ...");
    final BytesMessage message = (BytesMessage) this.jmsTemplate.receiveSelected(replyTo, "JMSCorrelationID='"
            + correlationId + "'");
    this.LOGGER.debug("Response received");
Ideally, we want to be able to call our runJobSync method twice and have two jobs operate simultaneously. We have a unit test that does something similar, without jobs. I realize this code isn't very great, but here it is:
final List result = Collections.synchronizedList(new ArrayList());
Thread thread1 = new Thread(new Runnable() {
    @Override
    public void run() {
        client.pingWithDelaySync(1000);
        result.add(Thread.currentThread().getName());
    }
}, "thread1");
Thread thread2 = new Thread(new Runnable() {
    @Override
    public void run() {
        client.pingWithDelaySync(500);
        result.add(Thread.currentThread().getName());
    }
}, "thread2");
thread1.start();
Thread.sleep(250);
thread2.start();
thread1.join();
thread2.join();
Assert.assertEquals("both threads finished", 2, result.size());
Assert.assertEquals("thread2 finished first", "thread2", result.get(0));
Assert.assertEquals("thread1 finished second", "thread1", result.get(1));
When we run that test, thread 2 completes first since it just has a 500 millisecond wait, while thread 1 does a 1 second wait:
Thread.sleep(delayInMs);
return result;
That works great.
When we run two remote jobs in the wild, one which takes about 50 seconds to complete and one which is designed to fail immediately and return, this does not happen.
We start the 50 second job, then immediately start the instant-fail job. The client prints that we sent a message requesting that the job run, and the server prints that it received the 50 second request, but it waits until that 50 second job is completed before handling the second message at all, even though we use a ThreadPoolExecutor.
We are running transactional with Auto acknowledge.
Doing some remote debugging, the Consumer from AbstractPollingMessageListenerContainer shows no unhandled messages (so consumer.receive() obviously just returns null over and over). The web GUI for the AMQ broker shows 2 enqueued, 1 dequeued, 1 dispatched, and 1 in the dispatched queue. This suggests to me that something is preventing AMQ from letting the consumer "have" the second message. (The prefetch is 1000, by the way.)
This shows as the only consumer for the particular queue.
A few other developers and I have poked around for the last few days and are pretty much getting nowhere. Any suggestions on what we have misconfigured, whether this is expected behavior, or what could be broken here?
Does the method that is being remotely called matter at all? Currently the job handler method uses an executor to run the job in a different thread and does a future.get() (the extra thread is for reasons related to logging).
Any help is greatly appreciated
Not sure I follow completely, but off the top, you should try the following (see the sketch below):
set concurrentConsumers/maxConcurrentConsumers greater than the default (1) on the MessageListenerContainer
set the prefetch to 0 to better promote balancing of messages between consumers, etc.
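A minimal sketch of both suggestions, assuming Spring's DefaultMessageListenerContainer and the ActiveMQ client library; the broker URL, queue name, and listener bean are illustrative:
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

ActiveMQConnectionFactory connectionFactory =
        new ActiveMQConnectionFactory("tcp://localhost:61616");
// Prefetch 0 makes the broker dispatch messages one pull at a time,
// so a consumer busy with a 50 second job cannot hoard the next message.
connectionFactory.getPrefetchPolicy().setQueuePrefetch(0);

DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
container.setConnectionFactory(connectionFactory);
container.setDestinationName("job.requests");     // illustrative queue name
container.setConcurrentConsumers(2);              // more than the default of 1
container.setMaxConcurrentConsumers(5);           // allow scaling under load
container.setMessageListener(jobRequestListener); // your existing listener
container.afterPropertiesSet();
container.start();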