I've been using Sinatra with Rack to simulate external services when running integration tests, and I'd like to write a test for the case where the server is down. Is it possible to have Sinatra simulate a 'Connection Refused' error without shutting down the server process entirely?
So far I've tried:
Raising an exception
Immediately closing the stream, as illustrated here, before the method returns anything:
post '/external_app' do
  stream(:keep_open) do |out|
    out.close
  end
end
Thanks!
You are trying to test a server-down condition, but the approaches you've tried still rely on a response from the Sinatra server.
You can set a very short timeout in your HTTP client (any HTTP client should let you do that; note it will be the read timeout that fires, since the connection itself is still accepted).
Then put something like a sleep in your Sinatra action block for a number of seconds greater than the timeout you set.
But you may not actually need to make things that complicated: you can simply raise the exception your HTTP client throws on connection timeout (or any other connection error) and test whether your application catches and handles the exception(s) accordingly.
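And if you want a genuine "Connection Refused" rather than a stubbed exception, one trick (a sketch, assuming the test can use localhost) is to grab a free port and close the listener before the client connects, so the OS itself refuses the connection:

```ruby
require "socket"
require "net/http"

# Grab a free port, then close the listener: connecting to it afterwards
# is refused by the OS, producing a real Errno::ECONNREFUSED.
server = TCPServer.new("127.0.0.1", 0)
port = server.addr[1]
server.close

begin
  Net::HTTP.post(URI("http://127.0.0.1:#{port}/external_app"), "payload")
rescue Errno::ECONNREFUSED => e
  puts "client saw: #{e.class}"
end
# prints: client saw: Errno::ECONNREFUSED
```

No Sinatra process needs to be stopped; the simulated service simply isn't listening on that port.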
I hope whoever is reading this is doing well.
Here's a scenario I'm wondering about: a global ClientConn is used for all gRPC requests to a server, and then that server goes down. Is there a way to wait, with a timeout, for the server to come back up, so that gRPC usage in this scenario is more resilient to failures (either a transient failure or the server going down)? I was thinking of looping while the ClientConn state is CONNECTING or TRANSIENT_FAILURE, and if the timeout expires while the state is TRANSIENT_FAILURE, returning an error, since the server might be down.
I also wonder whether this would work when multiple requests arrive on the client side that need this ClientConn, so multiple goroutines would be running this loop. I would appreciate any alternatives, suggestions, or advice.
When you call grpc.Dial to connect to a server, the grpc.ClientConn you receive automatically handles reconnection for you. When you call a method or request a stream, it will fail if it can't connect to the server or if there is an error processing the request.
You could retry a few times if the error indicates it is due to the network. You can check the gRPC status codes here: https://github.com/grpc/grpc-go/blob/master/codes/codes.go#L31 and extract them from the returned error using status.FromError: https://pkg.go.dev/google.golang.org/grpc/status#FromError
You also have the grpc.WaitForReady call option (https://pkg.go.dev/google.golang.org/grpc#WaitForReady), which blocks the gRPC call while the connection is in a transient failure until the server is ready. In that case you don't need to retry, but you should probably add a timeout that cancels the context, so you control how long you stay blocked.
If you want to avoid even trying to call the server, you could use ClientConn.WaitForStateChange (which is experimental) to detect any state change, and ClientConn.GetState to determine what state the connection is in, so you know when it is safe to start calling the server again.
I have an unknown App consuming my Spring webservices.
The app sets a timeout on every webservice call.
The server keeps processing regardless of the app's timeout.
Is there a risk that some other webservice call receives the wrong response (the response to the timed-out call)? How does Spring manage this? Doesn't the HTTP protocol take care of it, given that each connection is open for one particular call, so if the connection is broken the response cannot be retrieved?
As a developer, you should try to make all HTTP requests to your web server idempotent: the client side must be able to retry a failed request without causing new errors, since it cannot know the outcome of the timed-out request.
The client side should handle HTTP client timeouts itself and (by default) treat a timeout as a failure. Your client side may repeat the request later, and the server side should be able to handle the repeated request.
The solutions vary with the task's complexity (from an INSERT statement in the database to scheduling a new CRON job while avoiding duplication).
I am writing a REST service where a POST can take longer than the environment's timeout settings for HTTP connections. Given that I can't change the timeout for my REST target URL,
what can I do to make the REST call complete properly? I thought about using an async controller, but that does not seem to change the timeout behavior.
The calling client should not have to handle a server error or re-execute the query, as that just adds more stress to the server.
Cheers,
Kai
Assuming this is a connection read timeout and not an HTTP keep-alive timeout, since there is only one query: one suggestion is for the REST service to return an intermittent status response at a specified interval. If it is a TCP keep-alive issue, it can be circumvented through configuration. If a socket read timeout is being set, that can be increased as well.
Can someone please explain why one uses the client.setReadTimeOut and client.setConnectTimeOut timeouts? I am using them with my Jersey client. I have set a timeout of 5 seconds for both connect and read, and for testing purposes I have put a 6-second thread sleep in my service. I get a timeout exception, but after that my service resumes and responds as normal. My requirement is to set a timeout for the service to respond; if it is exceeded, the client should give up and try again. I also need to set the number of attempts the client should make. Please suggest.
Your client times out after 5 seconds of not hearing back from the server and throws a timeout exception, as designed. It has no idea whether the server started processing the call or will do so later.
When your server wakes up from its sleep, it likewise has no idea the client timed out. You could check the status of the connection, but that would not be very reliable.
Your client may catch the timeout exception and retry the call as many times as you wish. If your concern is the same server call being executed more than once, then you have to implement your resource method to be idempotent.
So I have a pretty heavy back-end service written in Java that is connected to a Rails app through Apache Thrift. I use a TCP connection to access the back-end service, which runs on a remote machine.
For each incoming request, my Rails Controller does the following:
transport = Thrift::BufferedTransport.new(Thrift::Socket.new(SERVER_ADDRESS, SERVER_PORT))
protocol = Thrift::BinaryProtocol.new(transport)
client = MyService::Client.new(protocol)
transport.open()
result = client.processUserRequest(query) # blocks until the back end responds
transport.close()
Now the above service call clearly blocks for the entire time the back-end server processes the request. Is there a way to make this asynchronous, so that the web front end can continue to accept incoming HTTP requests while a request is being serviced by the back end? What are my options?
I am using Phusion Passenger with Apache and expect a few dozen to a few hundred concurrent connections at most. My web server is a small EC2 instance with 1.7 GB of RAM.
I am quite new to Ruby/Rails (coming from a Java/C++ background), so still trying to grasp how things work in the Ruby land.
Yes. Mark the method as oneway in the Thrift definition (older Thrift IDL spelled this async). The method return type has to be void. Note that you get no guarantee that the message was processed successfully, only that it was handed over to the server.
service MyService
{
  oneway void processUserRequest(1: Query query);
}