I'd like to build a chat app on WebSockets and have chosen the Poco C++ library (1.4.6p1) as the web server. There are multiple users connected at the same time; a Poco WebSocket blocks while reading a frame but is automatically released after 60 seconds if nothing is received from the browser.
I want to keep the sockets connected so I can manage that many active (or idle) users, but how do I get there?
Thanks.
I "fixed" the problem with this simple and somewhat dirty line of code:
ws.setReceiveTimeout(Poco::Timespan(10, 0, 0, 0, 0)); // Timespan(days, hours, minutes, seconds, microseconds)
Basically, I set the receive timeout to 10 days.
Since my WebSocket connections have a lifespan of a few hours, 10 days is as good as infinity for me.
Hope it helps.
Check out this:
Poco::Net Server & Client TCP Connection Event Handler
It has some examples of how to wait for incoming connections, handle timeouts, etc.
Good luck
I am building a TCP proxy: client <-> proxy <-> Vertica
I have a net.TCPListener, which accepts incoming requests with AcceptTCP() to create connections, then dials the destination socket with net.DialTCP("tcp", nil, raddr). It works like a bridge, the default proxy model.
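For reference, here is a minimal sketch of the bridge I mean (addresses are placeholders and error handling is trimmed; my real proxy also has the queue and pool logic described below):

package main

import (
    "io"
    "log"
    "net"
)

func main() {
    // Placeholder addresses; in my setup the upstream is the Vertica container.
    laddr, _ := net.ResolveTCPAddr("tcp", ":9000")
    raddr, _ := net.ResolveTCPAddr("tcp", "127.0.0.1:5433")

    ln, err := net.ListenTCP("tcp", laddr)
    if err != nil {
        log.Fatal(err)
    }
    for {
        client, err := ln.AcceptTCP()
        if err != nil {
            continue
        }
        go func(c *net.TCPConn) {
            defer c.Close()
            // One upstream dial per accepted client, i.e. two sockets per request.
            upstream, err := net.DialTCP("tcp", nil, raddr)
            if err != nil {
                return
            }
            defer upstream.Close()
            go io.Copy(upstream, c) // client -> Vertica
            io.Copy(c, upstream)    // Vertica -> client
        }(client)
    }
}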
Firstly, in the first version I had a problem: with 59 parallel incoming requests everything is fine, but with one more (60), connections 1-59 are OK while 60 and later fail. I can't catch the error properly; it looks like some socket closes unexpectedly.
Secondly, I tried to put a queue in front of the listener. That helped a lot, but with more than 258 requests I get the error again.
My question: is there any limit on connections in the net package, or maybe it is a system limitation?
For context: Vertica is running in a Docker container; hardware/system: a MacBook; Vertica connection pool limit: 5, with the pool logic implemented in the proxy.
I also tried a "raw" proxy without the pool logic (that's why I put a queue in front of the listener: I must not exceed the threshold of the Vertica user's pool); the result is the same 258-request ceiling.
UPDATED: (05.04.2020)
It looks like it was a system limitation at fault (most likely the per-process limit on open file descriptors). Did I mention that I was trying to run the whole system on one PC? A quick way to check that limit is sketched at the end of this update.
So, what I had:
- 300 parallel processes making requests (via Python's multiprocessing.Pool), i.e. 300 sockets
- A listener that creates 300 connections (another 300 sockets)
- A series of rapidly created and closed sockets deep inside the proxy (according to the queue and the Vertica pool)
What I have now:
- 300 Python requests made from another PC in my local network (running Windows)
- The proxy works fine
- But I get several errors on the Windows PC that creates the requests to my proxy, such as low memory in the "swap file"
I still need to run some stress tests on the proxy. Adjusting the swap file size didn't solve the problem on the Windows PC. I'd be grateful for any suggestions and ideas. Thanks!
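For anyone hitting the same ceiling: a quick, hedged way to inspect the limit I suspected, from Go itself (Unix-only; on macOS the default soft limit is quite low), is to read RLIMIT_NOFILE:

package main

import (
    "fmt"
    "syscall"
)

func main() {
    // Each proxied request holds at least two descriptors open
    // (the client socket plus the upstream socket).
    var rl syscall.Rlimit
    if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
        panic(err)
    }
    fmt.Printf("open file limit: soft=%d hard=%d\n", rl.Cur, rl.Max)
}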
How does the proxy connect to Vertica?
By default, a maximum of 50 ordinary mortal users can be connected to one Vertica node at any one time. The superuser "dbadmin" always has 5 additional connections on top of that.
So if I try to connect 60 times as dbadmin, I get this on a default Vertica configuration:
Connection attempt failed: FATAL 4060: New session rejected due to limit, already 55 sessions active
You can increase the Vertica config item MaxClientSessions from its default of 50 per node.
The command is: ALTER NODE <node_name> SET MaxClientSessions = 100, for example.
I suppose you are always connecting to the same Vertica node, and that you have set ConnectionLoadBalancing to FALSE. So you always connect to the same node, and soon reach the default maximum of 50.
Hope that turns out to be the reason.
I'm writing a DLL for a piece of purchased software.
The software performs multi-threaded calculations on certain tasks.
My job is to output the relevant results into a database.
However, due to the software's limited support, it is rather difficult to output the data in a multi-threaded way.
The key problem is that there is no info on the last execution of the DLL function.
Therefore, the database connection will not be closed.
So may I ask: if I leave the connection open and just terminate the process, what are the potential problems?
My platform is Windows Server 2008 with PostgreSQL 10.
I don't understand the background information you are giving, but I can answer the question:
If a PostgreSQL client process dies without closing the database (and TCP) connection, the PostgreSQL server process (“backend process”) that serves this connection will not realize this immediately.
Of course, as soon as the server tries to communicate with the client, e.g. to return some results, TCP will notice that the partner has gone away and will return an error.
However, often the backend process is idle, waiting for the client to send the next request. In this case, it would never notice that its partner has died. This could eventually cause max_connections to be exhausted with dead connections.
Because this is a common problem in networking, TCP provides the “keepalive” functionality: when a connection has been idle for a while (2 hours by default), the operating system sends a so-called “keepalive packet” and waits for a response from the other side. Sending keepalive packets is repeated several times (5 times by default) at short intervals (1 second by default); if no response is received, the connection is closed by the operating system, and the backend process receives an error message and terminates.
PostgreSQL provides parameters with which you can configure these settings on the server side: tcp_keepalives_idle, tcp_keepalives_count and tcp_keepalives_interval. If you set tcp_keepalives_idle to a shorter value, dead connections will be detected and removed faster, at the cost of a little added network traffic.
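For illustration only, here is what such a configuration could look like in postgresql.conf (the values are placeholders, not recommendations; 0 means "use the operating system default"):

tcp_keepalives_idle = 300        # seconds of idleness before the first keepalive probe
tcp_keepalives_interval = 10     # seconds between unanswered probes
tcp_keepalives_count = 5         # unanswered probes before the connection is considered dead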
I have used the http://github.com/streadway/amqp package in my application to handle connections to a remote RabbitMQ server. Everything is OK and works fine, but when a connection is idle for a long period of time, e.g. 6 hours, it gets closed. I check NotifyClose(make(chan *amqp.Error)) the whole time in my goroutine, and it returns:
Exception (501) Reason: "write tcp
192.168.133.53:55424->192.168.134.34:5672: write: broken pipe"
Why does this error happen? (Is there a problem in my code?)
How long can a connection stay idle?
How can I prevent this problem?
As Cosmic Ossifrage says, the error is saying your RabbitMQ client has disconnected.
So many things could sit between your client and server that can or will drop dormant connections that it's not worth focusing on how long your connection can stay dormant. You want to set the requested heartbeat interval in your connection manager.
https://www.rabbitmq.com/heartbeats.html
I'm not familiar with the framework you're using, but I see it has a defaultHeartbeat field in connection.go. You might need to experiment with the value to find the best balance: keep the connection from being killed without hitting the server too often with keepalive traffic.
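For example, with streadway/amqp you can request an explicit heartbeat via amqp.DialConfig; this is a minimal sketch in which the URL and the 30-second interval are placeholders to tune for your environment:

package main

import (
    "log"
    "time"

    "github.com/streadway/amqp"
)

func main() {
    // DialConfig lets you set the requested heartbeat instead of relying on the library default.
    conn, err := amqp.DialConfig("amqp://guest:guest@192.168.134.34:5672/", amqp.Config{
        Heartbeat: 30 * time.Second, // placeholder interval; tune for your network
    })
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
}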
I have a gRPC server and client that work as expected most of the time, but I occasionally get a "transport is closing" error:
rpc error: code = Unavailable desc = transport is closing
I'm wondering if it's a problem with my setup. The client is pretty basic
connection, err := grpc.Dial(address, grpc.WithInsecure(), grpc.WithBlock())
if err != nil {
    // handle the dial error
}
defer connection.Close()
client := pb.NewAppClient(connection)
and calls are made with a timeout like
ctx, cancel := context.WithTimeout(ctx, 300*time.Millisecond)
defer cancel()
client.MyGRPCMethod(ctx, params)
One other thing I'm doing is checking the connection to see if it's either open, idle or connecting, and reusing the connection if so. Otherwise, redialing.
No special configuration is happening on the server:
grpc.NewServer()
Are there any common mistakes setting up a grpc client/server that I might be making?
After much search, I have finally come to an acceptable and logical solution to this problem.
The root cause is this: the underlying TCP connection is closed abruptly, but neither the gRPC client nor the server is 'notified' of this event.
The challenge exists at multiple levels:
- The kernel's management of TCP sockets
- Any intermediary load balancers/reverse proxies (run by cloud providers or otherwise) and how they manage TCP sockets
- Your application layer itself and its networking requirements: whether it can reuse the same connection for future requests or not
My solution turned out to be fairly simple:
server = grpc.NewServer(
    grpc.KeepaliveParams(keepalive.ServerParameters{
        MaxConnectionIdle: 5 * time.Minute, // <--- This fixes it!
    }),
)
This ensures that the gRPC server closes the underlying TCP socket gracefully itself before any abrupt kills from the kernel or intermediary servers (AWS and Google Cloud Load Balancers both have larger timeouts than 5 minutes).
An added bonus: wherever you use multiple connections, any leaks introduced by clients that forget to Close the connection will also not affect your server.
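If you also want the client side to detect and recycle dead connections proactively, gRPC-Go exposes a client-side counterpart in google.golang.org/grpc/keepalive. A hedged sketch, with illustrative values only:

conn, err := grpc.Dial(address,
    grpc.WithInsecure(),
    grpc.WithKeepaliveParams(keepalive.ClientParameters{
        Time:                30 * time.Second, // ping the server after this much idle time
        Timeout:             10 * time.Second, // wait this long for the ping ack before closing
        PermitWithoutStream: true,             // also ping when there are no active RPCs
    }),
)

Note that a client that pings more often than the server's keepalive enforcement policy allows (5 minutes by default, configurable via keepalive.EnforcementPolicy) may get its connection closed with a GOAWAY, so the two sides need to agree.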
My $0.02: don't blindly trust any organisation's (even Google's) ability to design and maintain an API. This is a classic case of defaults gone wrong.
One other thing I'm doing is checking the connection to see if it's either open, idle or connecting, and reusing the connection if so. Otherwise, redialing.
grpc will manage your connections for you, reconnecting when needed, so you should never need to monitor it after creating it unless you have very specific needs.
"transport is closing" has many different reasons for happening; please see the relevant question in our FAQ and let us know if you still have questions: https://github.com/grpc/grpc-go#the-rpc-failed-with-error-code--unavailable-desc--transport-is-closing
I had roughly the same issue earlier this year. After about 15 minutes the servers closed the connection.
My working solution was to create the connection with grpc.Dial once in my main function and then create pb.NewAppClient(connection) for each request. Since the connection was already established, latency wasn't an issue. After each request was done I closed the client.
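A rough sketch of that pattern, reusing the hypothetical pb.AppClient stub, address, and params from the question above (not a complete program):

// In main: dial once and reuse the *grpc.ClientConn for the lifetime of the process.
conn, err := grpc.Dial(address, grpc.WithInsecure())
if err != nil {
    log.Fatal(err)
}
defer conn.Close()

// Per request: stubs are cheap to create and share the underlying connection.
client := pb.NewAppClient(conn)
ctx, cancel := context.WithTimeout(context.Background(), 300*time.Millisecond)
defer cancel()
_, err = client.MyGRPCMethod(ctx, params)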
I implemented an asynchronous download to retrieve a remote file and store it in IsolatedStorage so it can be used when out of network coverage.
Everything works great when the network is up. However, when out of network, I noticed that the async download may take up to 2 minutes before firing my MessageBox (which says that the connection to the server has failed).
Question:
Is there any way to define a timeout? Say, if my application does not receive any answer for X seconds, then stop the async download and call a method.
Maybe a timeout is not best practice; in that case, could you give me a suggestion?
I do not want my users to wait more than 15 seconds.
PS: my application is supposed to run on Wi-Fi only, so I consider the 'network speed' to be optimal.
Thanks for your help.
What I would recommend doing is checking the network type first via NetworkInterface. If the NetworkInterfaceType is Wireless80211, you have a wireless connection (Wi-Fi). The returned connection type can be None if there is no available way to connect, so you won't even have to start the download if there is no accessible network.
Answering your question: if you are using WebClient, you can't define a timeout, but you can call instance.CancelAsync(). For an HttpWebRequest you can call instance.Abort().