gSoap caches error response in FCGI mode

We are using gSoap compiled with the WITH_FASTCGI flag. The resulting SOAP FCGI server processes successive SOAP packets fine, but there is a problem with the error responses.
The first time an error response is created, everything works. But every subsequent SOAP packet that results in an error shows that first response again, even if the error is different.
I had a quick look through the source code. soap_set_fault(struct soap *soap) in stdsoap2.cpp seems to simply return if a faultstring has already been set in the soap structure being used, rather than update it. Since this is an FCGI server, gSoap re-uses the soap structure you start it with for each SOAP request, so it looks like the fault is not being cleared properly between requests.
Is there anyone else having similar problems?

Related

express graphql always sends 500 Internal server error for any thrown error from resolver

I am using express-graphql in my Node app, and it always sends a 500 Internal Server Error for any error thrown from a resolver. Please suggest a solution so I can get a proper response and status code.
express-graphql sends the HTTP error 500 whenever it detects that no data was returned, and when an exception is thrown there is of course no data to return. Although it looks like an obvious bug, the developers have their own opinion on it. They seem to have considered providing an option to disable this behaviour so that the server would not set the 500 error; at least the issue on that is still open.
Checking the sources, I found these options to get around the problem:
Set a response code other than 200 (e.g. a custom 299) in your resolver.
Set the response code in your error-formatting handler. This seems to be the best solution (see the sketch below).
Just pick some other library =)
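
A minimal sketch of the second option, assuming a reasonably recent express-graphql where the option is called customFormatErrorFn (older releases call it formatError) and where the middleware accepts a function, so the response object is in scope. The statusCode property attached to the thrown error is my own convention, not part of the library:

const express = require('express');
const { graphqlHTTP } = require('express-graphql');
const { buildSchema } = require('graphql');

const schema = buildSchema('type Query { hello: String }');

const app = express();
app.use('/graphql', graphqlHTTP((request, response) => ({
  schema,
  rootValue: {
    // Resolver that fails, carrying a hypothetical statusCode on the error.
    hello: () => { throw Object.assign(new Error('boom'), { statusCode: 422 }); },
  },
  customFormatErrorFn: (err) => {
    // Override the 500 that express-graphql sets when data is null.
    response.statusCode = (err.originalError && err.originalError.statusCode) || 400;
    return { message: err.message };
  },
})));
app.listen(4000);

Since the error formatter runs just before the result is sent, the status set here should take precedence over the library's default 500.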

SSLException when trying to read the content of HttpResponse

I'm implementing an application that communicates with the GitHub API in order to infer some statistics about currently developed projects. I chose to make the requests asynchronously, using HttpAsyncClient.
My problem is that after I execute all the requests and get all the responses from the API (around 150 of them), when I try to read the content with
String content = EntityUtils.toString(response.getEntity());
I get the following SSLException after ~120 responses have been read:
Exception in thread "main" javax.net.ssl.SSLException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:557)
at sun.security.ssl.InputRecord.read(InputRecord.java:509)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:883)
at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:840)
at sun.security.ssl.AppInputStream.read(AppInputStream.java:94)
at org.apache.http.impl.io.AbstractSessionInputBuffer.read(AbstractSessionInputBuffer.java:204)
at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:182)
at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:138)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:282)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:324)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:176)
at java.io.InputStreamReader.read(InputStreamReader.java:184)
at java.io.Reader.read(Reader.java:140)
at org.apache.http.util.EntityUtils.toString(EntityUtils.java:224)
at org.apache.http.util.EntityUtils.toString(EntityUtils.java:264)
at pl.xsolve.githubmetrics.github.OwnGitHubClient.extractBodyFromResponse(OwnGitHubClient.java:117)
The problem disappears when I decrease the number of requests significantly (for instance, by half). Also, all the responses contain HTTP/1.1 200 OK - I've checked it with response.getStatusLine() and it works until the very end of the response list.
The problem persists even if I remove httpClient.shutdown() in the finally block (which is executed before reading the content).
From the stack trace, I've concluded that the exception is thrown on the line
while((l = reader.read(tmp)) != -1)
Is the entity in the HttpResponse somehow getting stale? Do you see an error in my reasoning? What could be the reason that the first 120 responses are parsed properly and then an SSLException is thrown?
Any help will be greatly appreciated :)
This could also depend on your version of OpenSSL. On a Python project I maintain, we've seen errors with OpenSSL 0.9.8 when people make large numbers of SSL requests in a short period of time. Admittedly it was nothing like your error message, but upgrading to OpenSSL 1.0 might help.

Content-Length for Sinatra file streaming

I am trying to set the Content-Length header before I stream a file out to the client. I am setting it with:
response.headers['Content-Length'] = "12341234"
and then I do something like:
stream do |out|
  file_chunks.each do |chunk|
    out << chunk
  end
  out.close
end
However, when I attempt to download the file in a browser, the Content-Length header is blank. Does anyone know whether this is a Sinatra issue or a Passenger/Apache issue?
I assume what's happening is that some layer between this block of code and the point where the response is actually sent sees that the headers go out first while the data block is still empty, so it assumes a Content-Length of 0 even though I set it explicitly.
Is there another way to tell the browser how big the file is that I'm sending it?
EDIT
This looks to be a Passenger problem, not a Sinatra problem. If I run the server with Thin, the Content-Length is passed correctly. I guess the question changes to: how do I keep Passenger from overriding the Content-Length if it is already set?
The issue here is that when Transfer-Encoding: chunked is used, the Content-Length header is omitted.
See : http://greenbytes.de/tech/webdav/rfc2616.html#rfc.section.4.4
This is a Sinatra issue. Its stream API only supports EventMachine-based servers; in other words, it only supports Thin. When using Passenger you should bypass the Sinatra stream API and stream the HTTP response directly using the Rack socket hijacking API, which Phusion Passenger supports. Here is an example which demonstrates how to use the Rack socket hijacking API to stream Server-Sent Events on Phusion Passenger.

How to debug a failed ajax request in google chrome?

I have a web application that crashes on ajax requests in Google Chrome (it works in every other web browser it was tested in). After debugging, I found that the error is caused by response.responseText being undefined. The xhr object looks like this:
argument: undefined
isAbort: false
isTimeout: undefined
status: 0
statusText: "communication failure"
tId: 3
In the debugger's Network tab I get "(failed)", yet all the headers are there and I can even copy the response body (which is valid JSON) to the clipboard.
My question is - how can I debug this problem? Where to find additional information, what causes this request to fail?
I finally found the solution to my problem: AdBlock. When it blocks an ajax request, it just reports "communication failure".
The first thing I would double-check is that the data coming back in the response is valid JSON. Just pass it through a JSON validator, such as the online JSONLint: http://jsonlint.com/
I assume that you are using something like jQuery to make your AJAX requests. If so, make sure you are using the development (uncompressed) version of that library. Then find the particular function you are calling (e.g. $.ajax) and, within the Chrome inspector, insert a breakpoint in the code where the AJAX response is first handled (e.g. https://github.com/jquery/jquery/blob/master/src/ajax.js#L579). Then step through the code, inspecting the various return values to see exactly what is going wrong.
If you are not using something like jQuery to make AJAX calls, then I'd recommend using a framework to avoid possible cross-browser compatibility issues like you might be experiencing right now.
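
Before stepping through library internals, it can be quicker to log what the failing request actually reports. A minimal sketch, assuming jQuery as the answer above does; the endpoint is hypothetical:

$.ajax({
  url: '/api/data',   // hypothetical endpoint
  dataType: 'json'
})
  .done(function (data) {
    console.log('parsed response:', data);
  })
  .fail(function (jqXHR, textStatus, errorThrown) {
    // status 0 usually means the request never completed at all:
    // blocked by an extension (e.g. an ad blocker), a CORS failure,
    // or a dropped connection.
    console.log('status:', jqXHR.status);
    console.log('textStatus:', textStatus);   // "error", "parsererror", "timeout", ...
    console.log('responseText:', jqXHR.responseText);
    console.log('errorThrown:', errorThrown);
  });

A textStatus of "parsererror" alongside a 200 status would point at invalid JSON, while a status of 0 points at something outside the page, as the accepted answer found with AdBlock.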

4xx Response from server doesn't include JSON data

I've got a .NET MVC site that uses JSON to perform AJAX form POSTs. If a validation error occurs (i.e. the user misses a required field, etc.), the server returns the validation errors in a JSON object and sets the HTTP status code of the response to something in the 400 range. On our local machines, this works absolutely fine.
However, on our CI environment it has suddenly stopped working, without any code changes. The response comes back from the server with the correct HTTP code, but the content is not the JSON our controller returns; it is the standard .NET error page HTML, i.e. just the 11-byte 'Bad Request' body when the status code is 400.
The error code is correct for each validation error, so we are hitting the right controller/action, and the validation is working correctly, but for some reason our JSON is getting snipped out. Any ideas why this might be happening?
You are getting the 400 code because your request syntax is not correct. Check whether you have actually encoded the JSON data correctly or not.
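
For what this answer suggests checking, here is a minimal sketch of a correctly encoded JSON POST, assuming jQuery on the client; the URL and payload are made up:

$.ajax({
  url: '/account/register',   // hypothetical MVC action
  type: 'POST',
  // Declare the body as JSON so the server-side binder parses it as such.
  contentType: 'application/json; charset=utf-8',
  // Serialize explicitly; passing a raw object would send form-encoded data.
  data: JSON.stringify({ email: 'user@example.com', name: 'Jane' }),
  dataType: 'json'
})
  .fail(function (jqXHR) {
    // If the request JSON is valid, a 400 here should carry the
    // validation errors the controller returns rather than a bare 'Bad Request'.
    console.log(jqXHR.status, jqXHR.responseText);
  });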