How long can a MailKit SmtpClient connection stay open - asp.net-core-mvc

I use MailKit in my ASP.NET Core application and create an SmtpClient object, which I then connect to Office 365 and authenticate with a username and password. How long can this connection stay open before it expires or needs to be reauthenticated? Also, is there a way to keep the connection alive without having to send an email before it expires?

To keep a client connection to a server alive (whether you are using the ImapClient, Pop3Client, or the SmtpClient), you can use the NoOp() or NoOpAsync() methods to send a command that does "nothing" except let the server know the client is still there and wants the connection to remain open. At least in theory, this will work.
That said, it's likely not considered good netiquette to keep an SMTP connection alive for any lengthy period of time.
Most SMTP servers expect clients to connect, flush their outbound message queues (by sending each queued message), and then disconnect immediately. Some of the bigger free mail servers may ignore NOOP commands and disconnect the client anyway after some set period of time.
For example, the IMAP specification states that servers should keep a connection alive for at least 30 minutes after the client's last command, but in practice, GMail will only keep that connection alive for less than 10 minutes.
The SMTP specification, as far as I can recall off the top of my head, makes no such suggestion about how long to keep the connection alive after the client's last command, so some servers may require more frequent NOOP commands than others, and some may even enforce a maximum connection lifetime regardless of whether you send those NOOP commands.
TL;DR:
The NOOP command (via the NoOp() or NoOpAsync() methods) will theoretically keep the connection alive if sent frequently enough, but I would not depend on this working reliably.
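For illustration, a minimal sketch of that keep-alive loop with MailKit (the Office 365 host and port are the usual ones, the credentials are placeholders, and the 4-minute interval is an arbitrary guess, not a documented timeout):

    using System;
    using System.Threading.Tasks;
    using MailKit.Net.Smtp;
    using MailKit.Security;

    class SmtpKeepAlive
    {
        static async Task Main()
        {
            using var client = new SmtpClient();
            await client.ConnectAsync("smtp.office365.com", 587, SecureSocketOptions.StartTls);
            await client.AuthenticateAsync("user@example.com", "password"); // placeholder credentials

            // Nudge the server every few minutes; the interval is a guess,
            // since (as noted above) SMTP specifies no idle timeout.
            while (true)
            {
                await Task.Delay(TimeSpan.FromMinutes(4));

                if (!client.IsConnected)
                    break; // the server hung up anyway; reconnect before sending

                await client.NoOpAsync();
            }
        }
    }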

Related

nodeJS being bombarded with reconnections after restart

We have a Node instance with about 2500 client socket connections. Everything runs fine, except that occasionally something happens to the service (a restart or a failover event in Azure), and when the Node instance comes back up and all the socket connections try to reconnect, the service comes to a halt and the log just shows repeated socket connects/disconnects. Even if we stop the service and start it again, the same thing happens. Currently we send out a package to our on-premise servers to kill the users' Chrome sessions, and then everything works fine as users begin logging in again. We have the clients connecting with 'forceNew' and forcing WebSockets only, rather than the default of long polling followed by an upgrade. Has anyone seen this, or have any ideas?
In your socket.io client code, you can force the reconnects to be spread out in time more. The two configuration variables that appear to be most relevant here are:
reconnectionDelay
Determines how long socket.io will initially wait before attempting a reconnect (it should back off from there if the server is down for a while). You can increase this to make it less likely that all clients try to reconnect at the same time.
randomizationFactor
This is a number between 0 and 1.0 and defaults to 0.5. It determines how much the above delay is randomly modified, so that client reconnects are spread out rather than all landing at the same time. You can increase this value to increase the randomness of the reconnect timing.
See client doc here for more details.
You may also want to review your server configuration to see whether it is as scalable as possible under a moderate number of incoming socket requests. While nobody expects a server to handle 2500 simultaneous connections all at once, it should be able to queue up these connection requests and serve them as it gets time, without immediately failing any incoming connection it can't handle right away. There is a desirable middle ground: some number of connections held in a queue (usually controllable by server-side TCP configuration parameters), and once the queue gets too large, connections are failed immediately, at which point socket.io should back off and try again a little later. Adjusting the above variables tells it to wait longer before retrying.
Also, I'm curious why you are using forceNew. That does not seem like it would help you. Forcing WebSockets only (no initial polling) is a good thing.
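The two option names above belong to the socket.io JavaScript client, but the idea they implement, exponential backoff with randomized jitter, is language-agnostic. A rough sketch of the delay calculation (an approximation, not socket.io's exact algorithm), in C#:

    using System;

    static class ReconnectBackoff
    {
        static readonly Random Rng = new Random();

        // Mirrors the idea behind reconnectionDelay / randomizationFactor:
        // each attempt waits roughly baseDelay * 2^attempt, nudged up or down
        // by up to randomizationFactor so clients don't reconnect in lockstep.
        public static TimeSpan NextDelay(int attempt,
                                         double baseDelayMs = 1000,
                                         double randomizationFactor = 0.5,
                                         double maxDelayMs = 30000)
        {
            double delay = Math.Min(baseDelayMs * Math.Pow(2, attempt), maxDelayMs);
            double jitter = delay * randomizationFactor * (Rng.NextDouble() * 2 - 1);
            return TimeSpan.FromMilliseconds(Math.Max(0, delay + jitter));
        }
    }

With a 0.5 randomization factor, 2500 clients on their third attempt end up spread across roughly a 4-second window instead of hammering the server in the same instant.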

How to make HTTP/2 requests with a persistent connection? (Any language)

How do I connect to https://api.push.apple.com using HTTP/2 with a persistent connection?
A persistent connection is needed to avoid rapid connection and disconnection:
APNs treats rapid connection and disconnection as a denial-of-service attack
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/APNsProviderAPI.html
Is writing a client in C using https://nghttp2.org the only solution?
(If this question should be asked on another Stack Exchange site, please do tell me.)
Non-persistent connections are a relic of the past. They were used in HTTP/1.0, but HTTP/1.1 already moved to a model where connections were persistent by default, and HTTP/2 (which is also multiplexed) continues with that model.
Independently of the language you are using to develop your application, any HTTP/2-compliant client will, by default, use persistent connections.
You only need to use the HTTP/2 client library in a way that you don't explicitly close the connection after every request you make.
Typically these libraries employ a connection pool that keeps connections open until an idle timeout fires.
When your application makes HTTP requests, the library will pick an open connection and send the request. When the response arrives the library will not close the connection but instead put it back into the pool for the next usage.
Just study how the library you want to use allows you to make multiple requests without closing the connection.
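As a concrete example, in .NET 5+ a single long-lived HttpClient behaves exactly this way: the handler pools the HTTP/2 connection and reuses it for every request until an idle timeout fires. A sketch (the APNs device-token path is a placeholder, and the required JWT/certificate authentication is omitted, so APNs would actually reject these requests; the point is only the connection reuse):

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class Http2Example
    {
        // One shared client: the handler keeps the HTTP/2 connection pooled.
        static readonly HttpClient Client = new HttpClient(new SocketsHttpHandler
        {
            PooledConnectionIdleTimeout = TimeSpan.FromMinutes(5) // keep idle connections around
        });

        static async Task Main()
        {
            for (int i = 0; i < 3; i++)
            {
                var request = new HttpRequestMessage(HttpMethod.Post,
                    "https://api.push.apple.com/3/device/placeholder-device-token") // placeholder token
                {
                    Version = HttpVersion.Version20,
                    VersionPolicy = HttpVersionPolicy.RequestVersionExact,
                    Content = new StringContent("{\"aps\":{\"alert\":\"hi\"}}")
                };

                // All three requests are multiplexed over the same pooled connection;
                // nothing here closes it between iterations.
                using var response = await Client.SendAsync(request);
                Console.WriteLine($"{response.Version} {(int)response.StatusCode}");
            }
        }
    }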
I ran into this problem as well!
If the connection has been idle for a long time (about 1 hour), then poll() catches no socket status change. It always returns 0, even though on_frame_send_callback was invoked.
Can anyone figure out the problem?

Websockets and uwsgi - detect broken connections client side?

I'm using uwsgi's websockets support and so far it's looking great: the server detects when the client disconnects, and the client detects when the server goes down. But I'm concerned this will not work in every case/browser.
In other frameworks, namely SockJS, the connection is monitored by sending regular messages that work as heartbeats/pings. But uwsgi sends PING/PONG frames (i.e. control frames, not regular messages) according to the WebSocket spec, so from the client side I have no way to know when the last ping was received from the server. So my question is this:
If the connection is dropped or blocked by some proxy, will browsers (i.e. Chrome, IE, Firefox, Opera) reliably detect that no PING was received from the server and signal the connection as down, or should I implement some additional ping/pong system so that the connection is detected as closed from the client side?
Thanks
You are totally right. There is no way from the client side to track or send ping/pongs. So if the connection drops, the server is able to detect this condition through the ping/pong, but the client is left hanging... until it tries to send something and the underlying TCP mechanism detects that the other side is not ACKnowledging its packets.
Therefore, if the client application expects to be "listening" most of the time, it may be convenient to implement a keep-alive system that works "both ways", as Stephen Cleary explains in the link you posted. This keep-alive system would be part of your application layer, though, rather than part of the transport layer like ping/pongs.
For example, you can have a message "{token:'whatever'}" that the server and client simply echo back with a 5-second delay. The client should have a timer with a 10-second timeout that is stopped every time the message is received and restarted every time the message is echoed; if the timer fires, the connection can be considered dropped.
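Here is a sketch of that client-side logic using .NET's ClientWebSocket (the endpoint is made up, and a real client would run the receive loop separately instead of interleaving it with the sends as this simplification does):

    using System;
    using System.Net.WebSockets;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    class HeartbeatClient
    {
        static async Task Main()
        {
            using var ws = new ClientWebSocket();
            await ws.ConnectAsync(new Uri("wss://example.com/socket"), CancellationToken.None); // made-up endpoint

            var heartbeat = Encoding.UTF8.GetBytes("{\"token\":\"whatever\"}");
            var buffer = new byte[4096];

            while (ws.State == WebSocketState.Open)
            {
                await ws.SendAsync(heartbeat, WebSocketMessageType.Text, true, CancellationToken.None);

                // Wait up to 10 seconds for the echo; treat silence as a dead connection.
                using var timeout = new CancellationTokenSource(TimeSpan.FromSeconds(10));
                try
                {
                    await ws.ReceiveAsync(buffer, timeout.Token);
                }
                catch (OperationCanceledException)
                {
                    Console.WriteLine("No echo within 10s; considering the connection dropped.");
                    break;
                }

                await Task.Delay(TimeSpan.FromSeconds(5)); // the echo cadence from the example above
            }
        }
    }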
Although browsers that implement the same RFC as uWSGI should reliably detect when the server closes the connection cleanly, they won't detect when the connection is interrupted midway (half-open connections). So from what I understand, we should employ an extra mechanism like application-level pings.

How long can a XMPP Session last?

Without timeout? Or is there even a timeout?
There is no limit on the lifetime of a connected JID. For command-line bots, it is good practice to send periodic ping packets to the server, just to make sure the open socket doesn't drop after some period of inactivity.
If your client is connected from a browser and the user refreshes the browser without disconnecting from the Jabber server, the user can still use a saved (via cookie/session) jid, sid, rid combination to reconnect to the previously opened session. However, the BOSH connection manager will drop the connection after "X" seconds of inactivity.
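A minimal sketch of such a periodic ping, assuming stream is an already-negotiated, authenticated XMPP stream. The classic low-tech approach is "whitespace keepalive" (a single space between stanzas is legal XML); XEP-0199 defines a proper <iq/> ping if you need an end-to-end check. The 2-minute interval here is a guess:

    using System;
    using System.IO;
    using System.Text;
    using System.Threading;
    using System.Threading.Tasks;

    static class XmppKeepAlive
    {
        // Periodically writes a single space onto the open XMPP stream so
        // NATs, firewalls, and the server see traffic on the idle connection.
        public static async Task RunAsync(Stream stream, CancellationToken ct)
        {
            byte[] space = Encoding.UTF8.GetBytes(" ");
            while (!ct.IsCancellationRequested)
            {
                await Task.Delay(TimeSpan.FromMinutes(2), ct); // interval is a guess
                await stream.WriteAsync(space, 0, space.Length, ct);
                await stream.FlushAsync(ct);
            }
        }
    }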
XMPP does not say anything about having a timeout or not. So, in theory, your XMPP session could last as long as the TCP connection stays established.
You are free to implement a timeout in your client or server though...

WinSock best accept() practices

Imagine you have a server which can handle only one client at a time. The server uses WSAAsyncSelect to be notified of new connections. In this case, what is the best way of handling FD_ACCEPT messages:
A > Accept the connection attempt right away but queue the client until its turn?
B > Do not accept the next connection attempt until we are done serving the currently connected client?
What do you guys think is the most efficient?
Here I describe the cons I'm aware of for both options. Hopefully this helps you decide.
A)
Upon a new client connection, the client could send tons of data, filling up your receive buffer and causing unnecessary packets to be transmitted (see this). If you don't plan to receive any data from the client, shut down receiving on that socket; then, if the client sends any data after that, the connection is reset. Moreover, if your protocol has strict rules, disconnect the client.
If the connection stays idle for too long, the system might disconnect it. To solve this, use setsockopt to set SO_KEEPALIVE on each client socket.
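In raw WinSock those two measures are shutdown(s, SD_RECEIVE) and setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, ...); here is the same pair through .NET's Socket wrapper (a thin layer over WinSock), as a sketch:

    using System.Net.Sockets;

    static class AcceptedClientSetup
    {
        // Applied to each socket returned by Accept(). The managed calls
        // map directly onto the WinSock ones mentioned above.
        public static void Configure(Socket client, bool expectNoClientData)
        {
            // setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, ...): let the stack
            // probe idle connections so the system doesn't silently drop them.
            client.SetSocketOption(SocketOptionLevel.Socket,
                                   SocketOptionName.KeepAlive, true);

            // shutdown(s, SD_RECEIVE): if we never expect data from the
            // client, stop receiving; anything it sends afterwards resets
            // the connection.
            if (expectNoClientData)
                client.Shutdown(SocketShutdown.Receive);
        }
    }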
B)
If you don't accept the connection within a certain period (I guess the default is 60 seconds), it will time out. In a normal (or most common) situation this indicates the server is overloaded and thus unable to answer in time. However, if the client is also designed by you, you can make the socket non-blocking, try to connect, and then manage the timeout as you wish, as sketched below.
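A sketch of that client-side timeout using .NET's Socket (an async connect rather than a literal non-blocking WinSock socket; the deadline is whatever you choose):

    using System;
    using System.Net.Sockets;
    using System.Threading.Tasks;

    static class TimedConnect
    {
        // Try to connect, but give up after our own deadline instead of
        // waiting for the OS-level connect timeout.
        public static async Task<Socket?> ConnectAsync(string host, int port, TimeSpan timeout)
        {
            var socket = new Socket(SocketType.Stream, ProtocolType.Tcp);
            Task connect = socket.ConnectAsync(host, port);

            if (await Task.WhenAny(connect, Task.Delay(timeout)) != connect)
            {
                socket.Close(); // our deadline hit first; treat the server as overloaded
                return null;
            }

            await connect; // propagate any connect error
            return socket;
        }
    }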
Ask yourself: what do you want the user experience to be at the other end? Do you want them to be stuck? Do you want them to time out? Do you want them to get a polite message?
