Heroku Redis connection limit: what does it mean?

I'm unsure what this connection limit means. Does it refer to simultaneous connections from processes, or to the total number of connections you make over time? (For example, my app connects one process to Redis, that process finishes its work, and I have spent one connection.) Can anyone clarify this for me? Thank you very much.
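For reference, here is a sketch of the two connection patterns the question contrasts, assuming Python's redis-py client and Heroku's REDIS_URL config var (the key names are illustrative):

```python
import os
import redis

# Pattern 1: one long-lived client per process. The pool below keeps up
# to max_connections sockets open simultaneously for the life of the process.
pool = redis.ConnectionPool.from_url(os.environ["REDIS_URL"], max_connections=5)
client = redis.Redis(connection_pool=pool)
client.set("counter", 1)

# Pattern 2: connect, do the work, disconnect. Only one connection is
# open at a time, but a fresh one is opened for each unit of work.
short_lived = redis.Redis.from_url(os.environ["REDIS_URL"])
short_lived.incr("counter")
short_lived.close()
```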

Related

How to gauge the scalability of websockets in an application

I am struggling to find information on how to gauge the scalability of websockets. A scenario -
Let's say a client wants to establish a socket connection from a browser, and the client application and the service layer (Micronaut) both have two instances behind an ELB. The service layer sits in the us-east region, anyone from around the world can access the frontend app from a browser, and each open connection lasts an average of 2-5 minutes, never longer than 30 minutes.
Is there a ballpark number for how many concurrent websocket connections a couple of servers can handle? Or are there factors I didn't mention that are vital to handling websocket connections in general?
Thank you in advance.
I'm assuming you want to know the scalability of the WS implementation in Micronaut and not WS in general. Of course, scalability depends on the specific implementation as well as on WS itself. You probably already know this, but I wanted to state it for the record. You may also want to be sure you increase the file descriptor limit for your server process to the maximum (you may have to adjust kernel settings to raise the FD limit).
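On Unix-like systems you can raise the per-process soft limit at startup; a minimal sketch using Python's resource module (Unix-only; raising the hard limit itself needs root and/or kernel configuration):

```python
import resource

# Query the current soft and hard limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Raise the soft limit up to the hard limit. Going beyond the hard limit
# requires privileged configuration (e.g. limits.conf or fs.file-max).
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print(f"file descriptor limit raised from {soft} to {hard}")
```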
Btw, don't forget to handle retries and reconnects as you would for a low-level TCP connection.
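A generic reconnect loop with exponential backoff and jitter, sketched in Python around a hypothetical connect_and_serve() coroutine (the function name and limits are placeholders):

```python
import asyncio
import random

async def connect_with_retry(connect_and_serve, max_backoff=30.0):
    """Keep one logical connection alive, reconnecting with backoff."""
    backoff = 1.0
    while True:
        try:
            # Hypothetical coroutine: opens the websocket and runs until it drops.
            await connect_and_serve()
            backoff = 1.0  # the session ended cleanly, so reset the backoff
        except OSError:
            # Sleep with jitter so many clients don't reconnect in lockstep.
            await asyncio.sleep(backoff * random.uniform(0.5, 1.5))
            backoff = min(backoff * 2, max_backoff)
```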

Is JDBC pooling related to NTP sync?

We're having a connection timeout issue with an API that pools connections to an Informix Connection Manager, which forwards queries to the appropriate Informix database server.
Recently I set up the mail service and realized that we're having delays in receiving the mail it sends. After troubleshooting, I saw that the database server's clock is not synchronized with the API's at all (a difference of more than 2 minutes).
I've read somewhere that time sync is important when using JDBC pooling, but I can't find much information about this on the internet. The timeout kind of makes sense because of TCP keepalive.
Has anyone experienced this or know anything about it?
Thank you,
Mihai.
It is common to intermix database timestamps and local timestamps. This causes issues when the server times differ. If the mail server is looking for records dated before the current time, there could be a two-minute delay before mail is sent.
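One way to sidestep the skew is to compare against the database's own clock rather than the application's. A sketch of the difference, using sqlite3 as a stand-in for Informix (the table and column names are hypothetical):

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative schema; sqlite3 stands in for the real database here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE outbound_mail (id INTEGER, send_at TIMESTAMP)")

# Fragile: binds the application clock. If the database host's clock is
# 2+ minutes ahead, freshly stamped rows look like they're in the future
# and are skipped until the skew elapses.
conn.execute(
    "SELECT id FROM outbound_mail WHERE send_at <= ?",
    (datetime.now(timezone.utc).isoformat(),),
)

# Safer: let the database compare against its own clock, so rows it just
# stamped are always visible regardless of the API server's time.
# (Informix spells this CURRENT rather than CURRENT_TIMESTAMP.)
conn.execute("SELECT id FROM outbound_mail WHERE send_at <= CURRENT_TIMESTAMP")
```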
Email may be delayed in transit between servers. Check the Received headers to see if there are any unexpected delays. (You will need to compensate for time variances between the servers.)
Normally, you would use NTP to ensure the time is the same on all servers. Within a data center, it should be able to synchronize clocks to within a millisecond or so.
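To verify the skew from a script, you can query an NTP server and read the reported offset; a small sketch using the third-party ntplib package:

```python
import ntplib

# Ask a public NTP pool server how far off the local clock is.
response = ntplib.NTPClient().request("pool.ntp.org", version=3)
print(f"local clock offset: {response.offset:+.3f} seconds")
```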

TCP connection time in Windows

I am doing some performance testing with a large number of threads. Each thread is sending HTTP requests to another IP. It looks like at some stages the connections are closed (because there are too many threads) and then of course have to be reopened.
I am looking to get some ballpark figures for how long it takes Windows to open TCP connections.
Is there any way I can get this?
Thanks.
This is highly dependent on the endpoints you're trying to connect to, is it not?
As an extreme best case, you can test it yourself by targeting an IIS on localhost.
I wouldn't be surprised if routers and servers that you are connecting through drop connections as a measure against what could be perceived as connection storms or even denial-of-service attacks.
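To gather your own ballpark numbers, you can time the TCP handshake directly; a minimal Python sketch that works the same on Windows (the host and port are placeholders, e.g. a local IIS):

```python
import socket
import time

HOST, PORT = "127.0.0.1", 80  # placeholders: e.g. an IIS on localhost

samples = []
for _ in range(100):
    start = time.perf_counter()
    # create_connection blocks until the TCP handshake completes (or fails).
    with socket.create_connection((HOST, PORT), timeout=5):
        samples.append(time.perf_counter() - start)

samples.sort()
print(f"median connect: {samples[len(samples) // 2] * 1000:.2f} ms, "
      f"p99: {samples[int(len(samples) * 0.99)] * 1000:.2f} ms")
```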

WinSock best accept() practices

Imagine you have a server which can handle only one client at a time. The server uses WSAAsyncSelect to be notified of new connections. In this case, what is the best way of handling FD_ACCEPT messages:
A) Accept the connection attempt right away, but queue the client until its turn?
B) Do not accept the next connection attempt until we are done serving the currently connected client?
What do you guys think is the most efficient?
Here I describe the cons that I'm aware of for both options. Hopefully this will help you decide.
A)
Upon a new client connection, it could send tons of data, making your receive buffer fill up, which causes unnecessary packets to be transmitted (see this). If you don't plan to receive any data from the client, shut down receiving on that socket; then, if the client sends any data after that, the connection is reset. Moreover, if your protocol has strict rules, disconnect the client.
If the connection stays idle for too long, the system might disconnect it. To solve this, use setsockopt to set SO_KEEPALIVE on each client socket (see the sketch at the end of this answer).
B)
If you don't accept the connection within a certain period (I guess the default is 60 seconds), it will time out. In a normal (or most common) situation this indicates the server is overloaded and thus unable to answer in time. However, if the client is also designed by you, make the socket non-blocking, try to connect, and then manage the timeout as you wish (also covered in the sketch below).
Ask yourself: what do you want the user experience to be at the other end? Do you want them to be stuck? Do you want them to time out? Do you want them to get a polite message?
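For what it's worth, Python's socket module exposes the same calls discussed above (shutting down the receive half, SO_KEEPALIVE, and a connect timeout), so here is a compact sketch of both sides; the host, port, and function names are placeholders:

```python
import socket

# --- Server side (option A): applied to each accepted client socket ---
def configure_client_socket(conn: socket.socket) -> None:
    # Keep idle connections from being silently dropped; the WinSock
    # equivalent is setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, ...).
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # If we never expect data from the client, shut down the receive half;
    # per the answer above, data sent after this resets the connection.
    conn.shutdown(socket.SHUT_RD)

# --- Client side (option B): manage the connect timeout ourselves ---
def connect_with_timeout(host: str, port: int, timeout: float = 5.0) -> socket.socket:
    # create_connection handles the non-blocking connect internally and
    # raises a timeout error if the server doesn't accept in time.
    return socket.create_connection((host, port), timeout=timeout)
```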

Resuming persistent sessions while switching to a different mosquitto broker

Can anyone tell me how I can resume a persistent session on a different broker while switching brokers for load balancing in mosquitto? I am really confused and can't find a way out.
Short answer: you don't.
All the persistent session information is held in the broker, and mosquitto has no way to share that information between instances.
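For context, the session that can't be migrated is the one a client requests by connecting with the clean-session flag off; a sketch assuming the paho-mqtt client (v1 API), with the client id, host, and topic as placeholders:

```python
import paho.mqtt.client as mqtt

# clean_session=False asks the broker to keep this client_id's
# subscriptions and queued QoS 1/2 messages across disconnects. That
# state lives inside the one mosquitto instance that accepted the
# connection, which is why it can't follow the client to another broker.
client = mqtt.Client(client_id="sensor-42", clean_session=False)
client.connect("broker.example.com", 1883)
client.subscribe("telemetry/#", qos=1)
client.loop_forever()
```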
