I am creating a MS Windows service, which listens for TCP connections. When connected, it gets data from a SQL db and returns it via the TCP socket. What are the drawbacks, if any, of opening a SqlConnection to the SQL Server at service start time, and just re-using that, until it might fail, as opposed to opening a new connection each time a "request" is made? I expect a small number of instantiations of the service functionality - less than 10 a day, but it could be more than that.
Database connections are considered an "expensive" resource, and as such should be opened only when needed, and closed immediately thereafter. As a result, opening a connection early and persisting it would go against that philosophy. Additionally, doing so prevents your underlying framework from making best use of whatever variety of connection pooling it may implement. It just isn't a very scalable practice.
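To illustrate the open-late/close-early pattern, here is a minimal sketch in Java/JDBC (the same idea applies to ADO.NET, where you would wrap a new SqlConnection in a using block per request and let the built-in pool make opening cheap); the DataSource wiring, query, and names below are assumptions for illustration only:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import javax.sql.DataSource;

    public class RequestHandler {
        private final DataSource dataSource;   // pooled DataSource, configured at service start

        public RequestHandler(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Called once per incoming TCP request: open late, close early.
        public String handle(int requestId) throws Exception {
            try (Connection conn = dataSource.getConnection();   // cheap: comes from the pool
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT payload FROM requests WHERE id = ?")) {   // made-up query
                ps.setInt(1, requestId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }   // close() returns the connection to the pool, not to the server
        }
    }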
I'm trying to understand whether ZeroMQ can connect a PUB or SUB socket to an IP address that doesn't exist yet. Will it automatically connect once that IP address appears?
Or should I check that the endpoint exists before connecting?
Is the behavior the same for PUB and SUB sockets?
The answer is buried somewhat in the manual, here:
for most transports and socket types the connection is not performed immediately but as needed by ØMQ. Thus a successful call to zmq_connect() does not mean that the connection was or could actually be established. Because of this, for most transports and socket types the order in which a server socket is bound and a client socket is connected to it does not matter. The ZMQ_PAIR sockets are an exception, as they do not automatically reconnect to endpoints.
As that quote says, the order of binding and connecting does not matter. This is extremely useful, as you don't then have to worry about start-up order; the client will be quite happy waiting for a server to come online, able to run other things without blocking on the connect.
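For example, here is a minimal sketch using the JeroMQ Java bindings (0.5.x API assumed; the endpoint and message are made up) in which the SUB socket connects before any PUB socket exists:

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class ConnectBeforeBind {
        public static void main(String[] args) throws InterruptedException {
            try (ZContext ctx = new ZContext()) {
                // The subscriber connects first; nothing is listening on this
                // endpoint yet, and the connect call still succeeds.
                ZMQ.Socket sub = ctx.createSocket(SocketType.SUB);
                sub.connect("tcp://127.0.0.1:5556");
                sub.subscribe("".getBytes(ZMQ.CHARSET));   // subscribe to everything

                // The publisher binds afterwards; ZeroMQ completes the TCP
                // connection in the background once the endpoint appears.
                ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
                pub.bind("tcp://127.0.0.1:5556");

                Thread.sleep(200);   // crude: papers over the "slow joiner" window
                pub.send("hello");
                System.out.println(sub.recvStr());
            }
        }
    }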
Other Things That Are Useful
The direction of bind/connect is independent of the pattern used on top; thus a PUB socket can be connected to a SUB socket that has been bound to an interface (whereas the other way round might feel more natural).
The other thing that I think a lot of people don't realise is that you can bind (or connect) sockets more than once, to different transports. So a PUB socket can quite happily send to SUB clients that are local in-process threads, to other processes on the same machine via ipc, and to clients on remote machines via tcp.
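A rough sketch of such a multi-transport PUB socket, again with JeroMQ (the endpoints are made up; note that JeroMQ emulates the ipc transport, whereas native libzmq supports it directly):

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class MultiTransportPub {
        public static void main(String[] args) {
            try (ZContext ctx = new ZContext()) {
                ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);

                // One PUB socket, three transports: in-process threads,
                // other local processes, and remote machines.
                pub.bind("inproc://updates");
                pub.bind("ipc:///tmp/updates.ipc");
                pub.bind("tcp://*:5557");

                // Every connected SUB, whichever transport it used, receives this.
                pub.send("status|ok");
            }
        }
    }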
There are other things that you can do. If you use the ZMQ_FD option from here, you can get ZMQ_EVENT notifications in some way or other (I can't remember the detail) which will tell you when the underlying connection has been successfully made. Using the file descriptor allows you to include that in a zmq_poll() (or some other reactor like epoll() or select()). You can also exploit the heartbeat functionality that a socket can have, which will tell you if the connection dies for some reason or other (e.g. crashed process at the other end, or network cable fallen out). Use of a reactor like zmq_poll(), epoll() or select() means that you can have a pure actor model event-driven system, with no need to routinely check up on status flags, etc.
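As a sketch of that reactor style, using JeroMQ's Poller as a stand-in for zmq_poll() (endpoint made up):

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class SubReactor {
        public static void main(String[] args) {
            try (ZContext ctx = new ZContext()) {
                ZMQ.Socket sub = ctx.createSocket(SocketType.SUB);
                sub.connect("tcp://127.0.0.1:5557");
                sub.subscribe("".getBytes(ZMQ.CHARSET));

                ZMQ.Poller poller = ctx.createPoller(1);
                poller.register(sub, ZMQ.Poller.POLLIN);

                while (!Thread.currentThread().isInterrupted()) {
                    // Block until a registered socket is readable or 1s passes;
                    // no busy-waiting on status flags.
                    if (poller.poll(1000) > 0 && poller.pollin(0)) {
                        System.out.println(sub.recvStr());
                    }
                }
            }
        }
    }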
Using these facilities in ZMQ allows you to build very robust distributed applications/systems that know when various bits of themselves have died, come back to life, taken a network-out holiday, etc. For example, just knowing that a link is dead might mean that a node in your distributed app changes its behaviour to adapt.
How do I connect to https://api.push.apple.com using HTTP/2 with a persistent connection?
A persistent connection is needed to avoid rapid connection and disconnection:
APNs treats rapid connection and disconnection as a denial-of-service attack
https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/APNsProviderAPI.html
Is writing a client in C using https://nghttp2.org the only solution?
(If this question should be asked on another Stack Exchange site, please tell me.)
Non-persistent connections are a relic of the past. They were used in HTTP/1.0, but HTTP/1.1 already moved to a model where the connections were persistent by default, and HTTP/2 (also being multiplexed) continues on that model of connections being persistent by default.
Independently of the language you are using to develop your application, any HTTP/2 compliant client will, by default, use persistent connections.
You only need to use the HTTP/2 client library in a way that you don't explicitly close the connection after every request you make.
These libraries typically employ a connection pool that keeps connections open, usually until an idle timeout fires.
When your application makes HTTP requests, the library will pick an open connection and send the request. When the response arrives the library will not close the connection but instead put it back into the pool for the next usage.
Just study how the library you want to use allows you to make multiple requests without closing the connection.
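For example, with Java 11's built-in HTTP/2 client, reusing one HttpClient instance is all it takes to keep the connection persistent; the sketch below uses a generic test URL rather than APNs, since APNs additionally requires token- or certificate-based authentication:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class Http2Reuse {
        public static void main(String[] args) throws Exception {
            // One client instance owns the connection pool; reuse it for the
            // lifetime of the application instead of creating one per request.
            HttpClient client = HttpClient.newBuilder()
                    .version(HttpClient.Version.HTTP_2)
                    .build();

            HttpRequest request = HttpRequest.newBuilder(
                            URI.create("https://nghttp2.org/httpbin/get"))   // example endpoint
                    .GET()
                    .build();

            // Both requests are multiplexed over the same persistent connection;
            // nothing is explicitly closed in between.
            for (int i = 0; i < 2; i++) {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.version() + " -> " + response.statusCode());
            }
        }
    }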
I ran into this problem too!
If the connection is idle for a long time (about 1 hour), poll() detects no socket status change. It keeps returning 0 even though on_frame_send_callback was invoked.
Can anyone figure out the problem?
While designing a client/server architecture, is there any advantage to multiplexing multiple sessions from the same process over one WebSocket connection to the server (i.e. sharing one connection), versus opening one WebSocket connection per thread/session in the client (as is typically done when connecting to memcached or database servers)?
I'm aware of the overhead associated with each connection (e.g. RAM), but I expect fewer than 1K-10K connections at most on each client side.
Specific use case:
Let's assume I have a remote server with multiple sessions on one side, and multiple clients on the other side; each client connects to a different session through the WebSocket server.
On the remote server, there are two ways to implement this: (1) each session creates its own WebSocket connection, or (2) all sessions share the same WebSocket connection.
From a connection point of view, I like the sharing solution (one WebSocket connection for all sessions), because the WebSocket server is limited by the number of connections available (saving servers / easier scaling).
But from a traffic/data-speed/performance point of view, if the sessions send lots of small packets through the connection, a single shared connection will not let us use the bandwidth well (payload overhead; ideally we would collect a few small packets into one, or split a big packet into smaller ones). Because we may have to send different packets to different clients from different sessions, we cannot batch the small packets together, since they have different destinations and different sources - unless we create a "virtual connection" layer that manages each session's data to maximize speed, but that adds a lot of implementation complexity.
Any other opinions?
Thanks,
JB.
I think you should consider using a limited connection pool, like they do with Database connection architecture.
Another solution I would consider is a Pub/Sub middleman such as Redis. This lets you reuse existing solutions and makes scaling easier.
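For instance, with Redis in the middle, each WebSocket server instance only subscribes to the channels of the sessions it hosts, while any process can publish to any session. A rough sketch using the Jedis client (host, port, and channel name are assumptions):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    public class RedisSessionBridge {
        public static void main(String[] args) throws Exception {
            // Subscriber side: the WebSocket server instance that hosts
            // session 42 listens for messages addressed to it.
            new Thread(() -> {
                try (Jedis jedis = new Jedis("localhost", 6379)) {
                    jedis.subscribe(new JedisPubSub() {
                        @Override
                        public void onMessage(String channel, String message) {
                            // Forward to the locally connected WebSocket client.
                            System.out.println(channel + " -> " + message);
                        }
                    }, "session:42");   // hypothetical per-session channel
                }
            }).start();

            Thread.sleep(500);   // crude: wait for the subscription to register

            // Publisher side: any other node can push data to that session
            // without holding a WebSocket connection to it.
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.publish("session:42", "hello from another node");
            }
        }
    }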
To the best of my understanding, both having a single connection and using a multitude of connections have their issues.
For example, one connection can send only one message at a time. A big enough message could block the connection... are you moving big data?
Many connections can cause an overhead that could be very expensive as well as introduce more chances for errors. Consider the following:
Creating new connections is very expensive: it uses bandwidth, suffers from longer network delays, and requires local resources - and this is exactly what WebSockets let us avoid.
You will run into scalability issues. For instance, Heroku limits websocket connections to 600 per server, or at least they did so a short while back (and I think it's reasonable)... How will you connect all the servers together to one data-store?
Remember that every OS has an open-file limit and that WebSockets use the IO architecture (each one is an 'open file'), so WebSockets are a limited resource.
Regarding traffic/data speed/performance, it is a question of server architecture... but I believe you will actually see a slight speed increase by using one connection (or a small pool of connections). It's important to remember that there isn't any effective multi-tasking when you need to send TCP/IP packets.
Also, with a limited number of connections (even with one connection), you will be able to benefit from the OS's packet joining feature that will allow you to send a number of websocket frames over one TCP/IP packet (unless you constantly flush the TCP/IP socket). You will actually waste more bandwidth with more connections - even disregarding the bandwidth used to open each new connection.
Just my 5 cents, we will all think differently, I'm sure.
Good Luck!
I was trying to understand the difference between the WebSocket and Comet models. As per my understanding:
In the Comet model, the connection remains open until the server has something to push to the client. Once the server pushes the data, the connection is closed and a new connection is established for the next request. It is not considered a good approach, as the connection may remain open for a long time (making intensive use of server resources) or may time out.
WebSockets, on the other hand, start with a handshake, and once both the client and server agree to exchange data, the connection remains open.
So in both cases the connection remains open for a long time (especially with WebSockets). Isn't keeping the connection open a drawback of WebSockets? I would like to use SignalR in ASP.NET as a reference to discuss this concept.
First, let's clarify that Comet comes in two flavors: HTTP Streaming and HTTP Long Polling. You were referring to Long Polling. (See this other answer for terminology).
In all three cases (WebSocket, HTTP Streaming, and HTTP Long Polling) the underlying TCP socket is kept open. That's actually the main feature of these techniques and not a side effect. You want the socket to stay permanently open (I'm oversimplifying now), so that data can be pushed asynchronously and with low latency.
As you correctly said, this implies that the server must be able to handle a large number of open sockets without wasting resources. And that's one of the key elements in the choice of a good Comet/WebSocket server.
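To make the "one long-lived socket, asynchronous pushes" idea concrete, here is a minimal WebSocket client sketch using Java 11's java.net.http.WebSocket (the endpoint is hypothetical):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.WebSocket;
    import java.util.concurrent.CompletionStage;

    public class PushClient {
        public static void main(String[] args) throws Exception {
            WebSocket ws = HttpClient.newHttpClient()
                    .newWebSocketBuilder()
                    .buildAsync(URI.create("wss://push.example.com/socket"),   // hypothetical
                            new WebSocket.Listener() {
                                @Override
                                public CompletionStage<?> onText(WebSocket webSocket,
                                                                 CharSequence data,
                                                                 boolean last) {
                                    // Server pushes arrive here at any time; the single
                                    // TCP connection stays open between messages.
                                    System.out.println("push: " + data);
                                    webSocket.request(1);   // ask for the next frame
                                    return null;
                                }
                            })
                    .join();

            ws.sendText("subscribe", true);
            Thread.sleep(10_000);   // keep the demo alive while pushes arrive
            ws.sendClose(WebSocket.NORMAL_CLOSURE, "done").join();
        }
    }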
To communicate with a database in Java, we often follow these steps:
load a driver
get a connection
create a Statement or PreparedStatement
get the ResultSet
close the connection
I am confused about why we should close the connection. Everyone says that creating a connection is expensive, so why can't we do something like this:
    static Connection connection;   // shared singleton connection

    static
    {
        try
        {
            connection = DriverManager.getConnection(connectorURL,
                    user, password);
        } catch (SQLException e)
        {
            e.printStackTrace();
        }
    }
Couldn't we just create the connection as a singleton and use it everywhere? If I use it like this, what will happen?
And if I don't close the connection, what will happen?
Also, when we use a connection pool, it creates some connections in the pool and we get a connection from the pool; the connections in the pool aren't closed either. So why, when we don't use a pool, do we have to follow the steps above and close the connection?
I'm confused and don't understand the underlying principle. Please help me. Thanks.
If we don't close connections, we leak them: until the application server/web server is shut down, a leaked connection remains active, even after the user logs out.
There are additional reasons. Suppose the database server has 10 connections available and 10 clients each request a connection. If the database server grants all of them and they are never closed after use, it cannot provide a connection for any further request. That is why closing them is mandatory.
Furthermore, leaked connections can leave transactions and locks dangling, which can threaten the integrity of the database.
Couldn't we just create the connection as a singleton and use it everywhere? If I use it like this, what will happen?
In this case, you will have only a single database connection. If one query has a long execution time, every other request for that connection object has to wait (and a single Connection is generally not safe to share between threads anyway). So this is not a recommended approach.
And if I don't close the connection, what will happen?
Closing the connection automatically closes its Statement and ResultSet objects as well. The close() method is used to close the connection. If you forget to call it, your app leaks connections. For example: imagine your app can hold 10 database connections and 10 users are active at the same time. Later, 3 users log out, but because you didn't implement a connection-closing mechanism, those 3 connections remain active, and as a result your app cannot provide a connection to any other user. A growing number of open connections on the database server also slows the app down. So release the Connection object's database and JDBC resources immediately, instead of waiting for them to be released automatically.
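For example, with try-with-resources the ResultSet, Statement, and Connection are all closed automatically, even if an exception is thrown (the URL, credentials, and query below are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class CloseProperly {
        public static void main(String[] args) {
            String url = "jdbc:mysql://localhost:3306/test";   // placeholder URL
            try (Connection conn = DriverManager.getConnection(url, "app", "secret");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT name FROM users WHERE id = ?")) {   // placeholder query
                ps.setInt(1, 42);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            } catch (SQLException e) {
                e.printStackTrace();   // everything is already closed at this point
            }
        }
    }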
Also, when we use a connection pool, it creates some connections in the pool and we get a connection from the pool; the connections in the pool aren't closed either. So why, when we don't use a pool, do we have to follow the steps above and close the connection?
Connection pooling means that connections are reused rather than created each time a connection is requested.
This source says: "If the system provides connection pooling, the lookup returns a connection from the pool if one is available. If the system does not provide connection pooling or if there are no available connections in the pool, the lookup creates a new connection. The application benefits from connection reuse without requiring any code changes. Reused connections from the pool behave the same way as newly created physical connections. The application makes a connection to the database and data access works in the usual way. When the application has finished its work with the connection, the application explicitly closes the connection.
The closing event on a pooled connection signals the pooling module to place the connection back in the connection pool for future reuse."
Your application borrows a connection from the pool, uses it, and then returns it to the pool by closing it. A connection sitting idle in the free pool for a long period of time is not considered a problem.
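A minimal sketch of that lifecycle, using HikariCP as one example pooling library (URL and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class PooledAccess {
        public static void main(String[] args) throws Exception {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:mysql://localhost:3306/test");   // placeholder
            config.setUsername("app");
            config.setPassword("secret");
            config.setMaximumPoolSize(10);

            try (HikariDataSource dataSource = new HikariDataSource(config)) {
                // "Closing" the connection below does not tear down the TCP
                // socket; it just returns the physical connection to the pool.
                try (Connection conn = dataSource.getConnection();
                     PreparedStatement ps = conn.prepareStatement("SELECT 1")) {
                    ps.executeQuery();
                }
            }
        }
    }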