Everything is working normally with my SFTP InboundChannelAdapter, but every now and then it stops polling due to multiple login failures to the SFTP server. The SFTP server does go down a lot in our infrastructure. Is there a solution to prevent the SFTP InboundChannelAdapter from dying?
Also, I have been seeing this message a lot in the logs: "Caught an exception, leaving main loop due to SSH_MSG_DISCONNECT". What might be the issue?
I hope whoever is reading this is doing well.
Here's a scenario that I'm wondering about: there's a global ClientConn that is being used for all gRPC requests to a server. Then that server goes down. I was wondering if there's a way to wait, with some timeout, for this server to come back up, so that the use of gRPC in this scenario is more resilient to failures (either a transient failure or the server going down). I was thinking of looping while the ClientConn state is CONNECTING or TRANSIENT_FAILURE, and if a timeout occurs while the state is TRANSIENT_FAILURE, returning an error since the server might be down.
I was also wondering if this would work when multiple requests come in on the client side that need this ClientConn, so that multiple goroutines would be running this loop. I would appreciate any other alternatives, suggestions, or advice.
When you call grpc.Dial to connect to a server and receive a grpc.ClientConn, it will automatically handle reconnections for you. When you call a method or request a stream, the call will fail if it can't connect to the server or if there is an error processing the request.
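For illustration, a minimal sketch of creating one shared ClientConn; the address and the insecure credentials are placeholders, not taken from your setup:

```go
package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial is lazy by default: it does not fail just because the server is
	// currently unreachable, and the returned ClientConn reconnects on its own.
	conn, err := grpc.Dial("server.example.com:50051", // placeholder address
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial failed: %v", err) // bad target/options, not a down server
	}
	defer conn.Close()

	// Share conn across all goroutines; individual RPCs return errors if the
	// server cannot be reached at call time.
}
```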
You could retry a few times if the error indicates that it is due to the network. You can check the gRPC status codes here https://github.com/grpc/grpc-go/blob/master/codes/codes.go#L31 and extract them from the returned error using status.FromError: https://pkg.go.dev/google.golang.org/grpc/status#FromError
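A rough sketch of that retry, assuming the RPC is safe to retry; the method name "/your.Service/YourMethod" and the Empty request/response messages are placeholders for your generated stubs:

```go
package grpcutil

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"google.golang.org/protobuf/types/known/emptypb"
)

// callWithRetry retries the RPC a few times when the status code suggests the
// server is temporarily unreachable; any other error is returned immediately.
func callWithRetry(ctx context.Context, conn *grpc.ClientConn) error {
	var lastErr error
	for attempt := 0; attempt < 3; attempt++ {
		err := conn.Invoke(ctx, "/your.Service/YourMethod", // placeholder method
			&emptypb.Empty{}, &emptypb.Empty{})
		if err == nil {
			return nil
		}
		lastErr = err

		// Extract the status code from the error and only retry Unavailable.
		if st, ok := status.FromError(err); !ok || st.Code() != codes.Unavailable {
			return err
		}
		time.Sleep(time.Duration(attempt+1) * 500 * time.Millisecond) // crude backoff
	}
	return lastErr
}
```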
You also have the grpc.WaitForReady option (https://pkg.go.dev/google.golang.org/grpc#WaitForReady), which can be used to block the gRPC call until the server is ready when the connection is in a transient failure. In that case you don't need to retry, but you should probably add a timeout that cancels the context so you have control over how long you stay blocked.
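Roughly, with the same placeholder method name as above and an arbitrary 15-second cap:

```go
package grpcutil

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/protobuf/types/known/emptypb"
)

// callWhenReady queues the RPC while the connection is in TRANSIENT_FAILURE
// instead of failing fast; the context timeout bounds how long we block.
func callWhenReady(conn *grpc.ClientConn) error {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()

	return conn.Invoke(ctx, "/your.Service/YourMethod", // placeholder method
		&emptypb.Empty{}, &emptypb.Empty{}, grpc.WaitForReady(true))
}
```

If the deadline expires while the connection is still not ready, the call fails with codes.DeadlineExceeded rather than codes.Unavailable.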
If you want to avoid even trying to call the server, you could use ClientConn.WaitForStateChange (which is experimental) to detect any state change, and call ClientConn.GetState to determine what state the connection is in, so you know when it is safe to start calling the server again.
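That is essentially the loop you describe in the question; a minimal sketch (the timeout value is up to you):

```go
package grpcutil

import (
	"context"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/connectivity"
)

// waitUntilReady returns true once the ClientConn reports READY, or false if
// the timeout elapses first (e.g. the server is still down).
func waitUntilReady(conn *grpc.ClientConn, timeout time.Duration) bool {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	for {
		state := conn.GetState()
		if state == connectivity.Ready {
			return true
		}
		conn.Connect() // nudge an IDLE connection into CONNECTING; no-op otherwise
		// Block until the state moves away from `state` or the context expires.
		if !conn.WaitForStateChange(ctx, state) {
			return false
		}
	}
}
```

A ClientConn is safe to share, so multiple goroutines can run this check concurrently, although simply letting the RPCs block with WaitForReady (as above) is usually simpler.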
I am working on a service where, if there is any issue in the application, the request gets quarantined and can be reprocessed manually after the issue is fixed. For a network issue, such quarantined requests can simply be reprocessed without any fix.
Now I am working on a fix for a case where the application failed to get DB connectivity to MariaDB and the request got quarantined. As part of the quarantine, some metadata should be populated. One of the metadata fields was wrong, and I have fixed it to populate the correct data.
I need to come up with JUnit test cases to test this scenario automatically. When I tested locally, I brought down my local MariaDB instance and verified that my fix works fine. Now I need to push this fix to the stage environment, where the automated test cases should run successfully. For this fix, I need to replicate the DB connectivity issue scenario in the tests.
The application/service I work on is not invoked through a direct HTTP/HTTPS call. I post a message to a Kafka stream and the service picks it up from the stream. So I need help writing a test case where the DB connectivity issue happens in the asynchronous service and the request gets quarantined. Is this possible?
Error:
"exception": "org.springframework.jdbc.CannotGetJdbcConnectionException",
"message": "Failed to obtain JDBC Connection; nested exception is java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=localhost)(port=3306)(type=master) : Socket fail to connect to host
What can be the reasons that cause a socket.io session to crash, with the server returning "invalid session" or "session is disconnected"?
There is a specific situation that causes these problems with the session. When a client fails to send the pings at the expected interval, the server declares the client gone and deletes the session. If a client that falls into this situation later tries to send a ping or another request using the now-invalidated session id, it will receive one of these errors.
Another possible problem with the same outcome is when the client does send the pings at the correct intervals, but the server is blocked or too busy to process these pings in time.
So to summarize, if you think your clients are well behaved, I would look at potential blocking tasks in your server.
Ok, I'll illustrate my problem with this figure of the project's architecture.
In fact, I have a websocket between the React app and Rasa (a tool for creating chatbots), which is based on Flask. The bot response needs to access an external API to retrieve some data. Here is where things go wrong: sometimes these requests take too long to return a response, and that's when the websocket misbehaves.
I have a high-performance client-server system programmed from scratch, and I am still improving it. The server uses overlapped I/O to handle connections, and it correctly handles disconnections and resource deallocation. On the client side I used the shutdown command with SD_RECEIVE to notify the server that the client has no more data to receive after its final send. This works well, and the server detects it as a graceful disconnection. Rarely, when the connection is very slow, I have observed that the server doesn't detect this; I feel that the shutdown partial closure doesn't reach the server. How can I handle this? It is important that the server not keep such connections, because if it does the server cannot be stopped, and I do not want to close all such connections by force.
at the client side i used shutdown command with sd_receive to notify the server that the client has no data to receive after final send from the client.
It doesn't do that.
this works well
It doesn't work at all. The shutdown command with SD_RECEIVE that you're using is completely pointless. A close, or a shutdown with SD_SEND or SD_BOTH, sends a FIN: shutdown with SD_RECEIVE does exactly nothing on the wire, and specifically it does not 'notify the server' of anything.
I feel that the shutdown partial closure doesn't reach the server.
It never reaches the server. Your code doesn't work the way you think it does. What reaches the server is the FIN, which in turn is the result of the close, not the shutdown SD_RECEIVE.
What you need here is a read timeout at the server end. Since you're using select(), or whatever mechanism is delivering you the events, you will have to implement the timeout manually yourself.
I'm starting with Websockets and I have a problem.
I have a sails.js application that uses sockets to update the client side.
On the client side it makes an API call using socket.get("/api/v1/actor...") to fetch all the items from the database. This is what I see in the WebSocket traffic in the Chrome console:
As you can see, the connection has been established and the API call has been correctly done through the socket.
The problem is, there is no answer from the server, not even an error.
If I make the same API call using Ajax I get a response, but it doesn't work using WebSockets.
Any idea what might be producing this behavior?
EDIT: I have added here the code that processes the request and here the code that sends the request, but the problem is that this code never executes. I think we are closer to finding the cause, since we suspect it has to do with a network problem: we figured out there is an F5 reverse proxy which is not properly set up to handle websockets.
The answer didn't make sense once I'd seen the code, which is why I've edited it. I only answered because I couldn't comment on your question to ask you for the code.
Your calling code seems correct, and on the server side the response should be handled automatically by the framework; you only need to return some JSON from the controller method.
I instantiated a copy of the server (just changed the adapters to run it locally) and the server replied to the web socket requests (although I only tested the route '/index').
Normally, when the problems are caused by a reverse proxy, the socket simply refuses to connect and you can't even send data to the server. Does the property "socket.socket.connected" return true?
The best way to test is to write a small Node application with the socket.io client and run it on the same machine the application server is running on; that way you can rule out network problems.