gsoap C SOAP server retains memory even after soap_end() call

For a project, I have implemented a gsoap (v2.8.83) FastCGI server in a C application. It is hosted as FastCGI on the lighttpd webserver. It serves SOAP requests and populates the SOAP responses with items available on the device; the number of items varies over time. The functionality works fine, but the FastCGI process seems to hold on to some memory.
I expected the gsoap-generated code to release all memory used to serve a request once I call soap_destroy() and soap_end(), but that does not happen.
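For reference, my per-request cycle follows the standard gsoap FastCGI pattern. A minimal sketch (assuming a build with WITH_FASTCGI; soapH.h, the namespace table, and soap_serve come from soapcpp2, and the file names are illustrative):

    #include "soapH.h"      /* generated by soapcpp2 */
    #include "ns.nsmap"     /* generated namespace table (name depends on the service) */
    #include <fcgi_stdio.h> /* FastCGI stdio replacement */

    int main(void)
    {
        struct soap *soap = soap_new();
        if (!soap)
            return 1;
        while (FCGI_Accept() >= 0) {
            soap_serve(soap);   /* parse and dispatch one SOAP request */
            soap_destroy(soap); /* release deserialized instances */
            soap_end(soap);     /* release everything soap_malloc'd for this request */
        }
        soap_free(soap);        /* detach and free the context itself */
        return 0;
    }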
For the same number of items in the SOAP response, the memory held by the process stays the same across multiple requests (though higher than before the request was served). However, as the number of items in the SOAP response increases, so does the memory held by the process, and this is an issue for the project because it runs on an embedded device.
I have enabled the DEBUG option provided by gsoap and verified from the logs that all allocated memory is freed.
I have also tried compiling my FastCGI binary with a few of the optimization options provided by gsoap and running it as a standalone server (isolated from lighttpd), but there is no improvement. I cannot use the WITH_LEAN compilation flag, as I need date-time serialization.
The question "gsoap memory leak C applications" describes a similar issue, but I have tried using the auto-generated functions suggested in its answers for allocating the elements, with no luck.
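For completeness, this is the allocation style I mean: everything in the response comes from the context-managed allocator, so that soap_end() can reclaim it (ns__item, resp, and n are illustrative names, not my actual schema):

    /* allocate response items from the soap context, not with plain malloc(),
       so that soap_end() reclaims them after the response is sent */
    resp->items = soap_malloc(soap, n * sizeof(struct ns__item));
    resp->count = n;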
From the source code, I suspect that the server is retaining some attribute data even after the calls to soap_end(), but I have not been able to nail it down, as the debug logs give no clue about this.
Does anyone understand what could be happening?
Any help is much appreciated.
Regards,

Related

Load testing WCF services gives huge (>200 sec) response times

I have a service being load tested by a third party. A few minutes after starting, we start to see requests hanging for a very long period of time and the caller ultimately times out (after 60 seconds).
They are testing with 15 users with each user using two devices at once, so a total of 30 connections.
The service is a simple façade to a more complex operation, calling an external system. Benchmarking our communications to the external system looks as though everything is responding in the time we would expect (sub 200ms).
The IIS logs reveal a number of very slow requests (>200 sec) which ultimately do return a 200 but carry the Win32 error code ERROR_NETNAME_DELETED (error 64). I have checked the service log and can match each response to its request (based on the SOAP message id), and I can see that we do eventually respond with the correct information (although the client has long since given up).
Any ideas as to what could be causing this behavior? We're hosting in IIS using wsHttpBinding and we're using WS-Security with x509 certificates (message & transport encryption).
We don't have benchmark logging inside of our service but the code is a very simple mapping of the WCF request to the server request, making the request, and mapping the response to the WCF response. We do this manually and there is no parsing involved (straight assignments).
After a detailed investigation, which included getting Microsoft support involved, we found we were hitting the serviceThrottling defaults, specifically maxConcurrentSessions. We determined this from perfmon (there is a counter for it). We were initially unsure why we saw this, as the service behaved correctly when called from a .NET client.
It turned out that the Java consumer of this application, built with CXF, was not respecting the WSDL (specifically the WS-SecureConversation part): it was not closing its security sessions when it closed its connections, so sessions accumulated on the server.
Our solution was to jack up maxConcurrentSessions to a high number and set the inactivityTimeout down low (to a minute) to force session abandonment. In addition, we set establishSecurityContext to false so the WS-SecureConversation negotiation would not consume an additional session.
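For anyone hitting the same wall, the relevant knobs live in web.config. A rough sketch with illustrative values (the inactivityTimeout we lowered is omitted, since where it goes depends on how the session is established):

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior name="Throttled">
            <!-- The pre-.NET 4 default for maxConcurrentSessions is only 10 -->
            <serviceThrottling maxConcurrentCalls="64"
                               maxConcurrentSessions="1000"
                               maxConcurrentInstances="1000" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <bindings>
        <wsHttpBinding>
          <binding name="Secure">
            <security mode="Message">
              <!-- No WS-SecureConversation handshake, so no extra session per client -->
              <message clientCredentialType="Certificate"
                       establishSecurityContext="false" />
            </security>
          </binding>
        </wsHttpBinding>
      </bindings>
    </system.serviceModel>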
The solution is inelegant, as the service logs are littered with errors about forced session closures, but it fixed the issue we were seeing. Unfortunately we had a hard requirement for WS-Security, so our solution needed to stick with it.
I hope this helps someone as this was an interesting and time consuming problem to pin down.

EXE size bloats while using Websocketpp

I've built a very basic EXE that uses the WebSocket++ client: it just connects to a WebSocket server and sends and receives a message.
I've used VS 2013.
I'm noticing that the size of the EXE is mammoth: about 2.3 MB for Release and 6 MB for Debug.
Any ideas as to how I can reduce the size of the EXE?
WebSocket++ author here. The sizes you quote seem in about the right ballpark. Keep in mind that a "very basic sample" like the echo_server (which produces a ~1 MB executable on Linux) does a lot more than you might think based on the ~50 lines of program source.
Out of the box, any WebSocket++/Asio based program is a high-performance, event-based client/server system and includes code for DNS resolution, IPv4 and IPv6, timers, SHA1/MD5 hashing, base64 encoding, UTF8 validation, logging, thread safety, and parsers for URIs, HTTP, and multiple WebSocket protocol versions. Just because you only use these capabilities to echo back messages doesn't make this a trivial program.
Some observations/notes on the topic:
Due to the way templates work, the code for WebSocket++, Asio, and the STL is compiled into your program rather than sitting in an externally linked library. This may make a WebSocket++ or Asio program look artificially larger than a program that links to an external library.
The situation described in #1 can sometimes end up more efficient than an external library, because your program will only include the parts of the library that your code actually uses rather than all of them. For example, if you don't instantiate a client endpoint, no client code will be included; if your config disables TLS encryption, logging, or the thread-safety features, they will not be included either. Again, due to the way templates work, this can go both ways: a program that includes both a client and a server will have some potentially unnecessary duplication.
The size of WebSocket++'s code is largely correlated to the number of different endpoint configs that you use and the options enabled in each of those configs. These represent a fixed size no matter what else your program does. If your program does little, they will make up a large proportion of the code. If your program does a lot that proportion will shrink.
WebSocket++ is fairly modular (though this is less well documented right now). If you are really concerned about code size (small embedded systems, perhaps?) and don't actually need all the features that Asio and WebSocket++ bring out of the box, you can set up a custom config that either removes many features or replaces them with your own space-optimized implementations.
Say you only ever need to service one non-TLS connection, with no DNS lookup and no security timeouts, in a guaranteed single-threaded program with no logging. You can implement your own network transport policy based on your native OS socket library that doesn't include everything Asio does, and you can stub out the locking/concurrency and logger policies you don't need.
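As a rough sketch of that last point (the config name is made up, and for maximum savings the nested transport_config's logger typedefs would need the same treatment):

    #include <websocketpp/config/asio_no_tls.hpp>
    #include <websocketpp/logger/stub.hpp>
    #include <websocketpp/concurrency/none.hpp>
    #include <websocketpp/server.hpp>

    // Start from the stock Asio config and swap in no-op policies.
    struct size_optimized : public websocketpp::config::asio {
        // Stub loggers: the logging code paths are never instantiated.
        typedef websocketpp::log::stub alog_type;
        typedef websocketpp::log::stub elog_type;
        // Guaranteed single-threaded: compile out the locking.
        typedef websocketpp::concurrency::none concurrency_type;
    };

    typedef websocketpp::server<size_optimized> server;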

Unable to service 20+ connections, MVC3 on IIS6 with long polling

It seems this is a common question/problem but despite checking out a number of proposed solutions nothing worked for us so far.
The app
It's a simple chat app that puts a new interface on an existing app's JSON library. We proxy all the calls to their app to avoid cross-domain restrictions (IE8).
It's an ASP.NET MVC3 app hosted in IIS6 on Windows Server 2003 SP2. The DEV server has 1 GB of RAM; the TEST server has 4 GB.
The problem
When we approach 20 concurrent users, requests start lagging, with no issues to be found in the Event Viewer. It looks like calls are simply being queued. There are no 503s returned.
What we've tried
We're using AsyncController to long-poll a 3rd party webservice for results
Hosted in IIS6
We're using the TPL to call their service in our AsyncController method
We've modified processModel and set maxWorkerThreads=100.
We've looked at this how-to, but the HTTP.SYS config appears to service an effectively unlimited number of connections, so we haven't bothered adding the registry keys.
The 3rd party service can handle lots of concurrent requests (and is in a web farm, so we're fairly confident we're the weakest link)
What are we missing? Any help greatly appreciated.
Well... almost four weeks later, and I thought I'd update this in case anyone wants to know what helped us overcome these limitations (we're now cramming around 100 simultaneous connections onto our DEV server, a 1 GB Xeon box).
AsyncControllers
If you've got a potentially long-waiting request (i.e. long polling), then use them.
Feel free to use TaskFactory, but be sure to mark the task as long-running (TaskCreationOptions.LongRunning). If there is a risk of an exception in your thread, be sure to use ContinueWith so you can decrement the outstanding operations count and return the error to your caller.
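Something along these lines, inside the AsyncController action; a sketch only, where CallThirdParty and the parameter names are placeholders for your own code:

    AsyncManager.OutstandingOperations.Increment();
    Task.Factory.StartNew(() => CallThirdParty(request),
            TaskCreationOptions.LongRunning)
        .ContinueWith(t =>
        {
            if (t.IsFaulted)
                AsyncManager.Parameters["error"] = t.Exception; // surface it, don't swallow it
            else
                AsyncManager.Parameters["result"] = t.Result;
            AsyncManager.OutstandingOperations.Decrement(); // always decrement
        });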
ServicePointManager
If you're making downstream calls (i.e. to a web service or 3rd-party API), then make sure you have increased DefaultConnectionLimit from the default of 2 simultaneous connections.
A rough guide is 8 * the number of cores, so you don't starve outgoing connection resources.
See this MSDN article on DefaultConnectionLimit for more info.
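If you'd rather do it in config than code, the equivalent web.config setting looks like this (the value is illustrative; DefaultConnectionLimit can also be set programmatically at startup):

    <system.net>
      <connectionManagement>
        <!-- 8 * cores; e.g. 32 on a quad-core box. "*" matches all remote hosts. -->
        <add address="*" maxconnection="32" />
      </connectionManagement>
    </system.net>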
IOCP vs RestSharp
I love RestSharp's API, it's fantastic, but it's probably meant more for client-side programming, not for proxying requests. My mistake! Use HttpWebRequest and its Begin/End methods to make use of IOCP (I/O completion ports).
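For example (a bare-bones sketch; ProcessProxiedResponse is a placeholder, and error handling is omitted):

    var request = (HttpWebRequest)WebRequest.Create(downstreamUrl);
    request.BeginGetResponse(ar =>
    {
        // Runs on an IOCP completion thread; no worker thread was blocked waiting.
        using (var response = (HttpWebResponse)request.EndGetResponse(ar))
        {
            ProcessProxiedResponse(response);
        }
    }, null);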
If you're looking to reverse proxy or URL rewrite, check out URL Rewriter, a great library available freely on CodePlex.
In the end, our issue wasn't with incoming requests; it was with the requests being proxied to the third party: we weren't allowing enough outgoing connections, so they all queued up, lagging the whole system. Happy to say that after lots of reading, investigation, and coding, we've resolved it.

What is ajax-push? Are there caveats to using it on some servers?

Can somebody explain what ajax-push is? From what I understand it involves leaving HTTP connections open for a long time and reconnecting as needed. It seems to be used in chat systems a lot.
I have also heard that when using ajax-push in Java it is important to use something with NIO connectors, or the Grizzly servlet API? Again, I'm just researching what exactly it is.
In normal AJAX (call it pull) you ask the server for something and you get it immediately. This is fine when you want to get some data from the server now. But what if something happens on the server and the server wants to push that event to the client(s)?
Technically, this is implemented using so-called long polling: the browser opens an HTTP connection and waits for the response. As long as there is nothing interesting on the server side, the server waits. But when something happens, the server sends the response and the client receives it immediately. This is a huge advantage over normal polling, where you ask the server every few seconds: that generates a lot of traffic and still introduces noticeable latency.
The only problem with this approach is the number of pending HTTP connections. Old-school Java servlet containers aren't capable of handling such a number of connections due to the one-thread-per-connection model; they quickly run out of memory. Even though the HTTP threads aren't doing anything (they are waiting for some other part of the system to wake them up and hand them the response), they occupy memory.
However there are plenty of solutions nowadays:
Tomcat NIO connectors
Atmosphere Ajax Push/Comet library
Servlet 3.0 asynchronous processing (most portable; see the sketch below)
Container-specific features also exist, but Servlet 3.0, if available, should be considered superior.
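A minimal sketch of the Servlet 3.0 approach (the EventBroker that later writes the response and calls complete() is hypothetical):

    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/events", asyncSupported = true)
    public class LongPollServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            // Detach from the container thread: it goes back to the pool
            // instead of blocking for the lifetime of the poll.
            AsyncContext ctx = req.startAsync();
            ctx.setTimeout(30000); // give up after 30 s; the client just reconnects
            // Hand the context to whatever produces events; when one arrives,
            // that component writes the response and calls ctx.complete().
            EventBroker.register(ctx);
        }
    }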

performance of accessing a mono server application via remoting

This is my setting: I have written a .NET application for local client machines, which implements a feature that could also be used on a webpage. To keep this example simple, assume that the client installs a software into which he can enter some data and gets some data back.
The idea is to create a webpage that holds a form into which the user enters the same data and gets the same results back as above. Due to the company's available web servers, the first idea was to create a mono webservice, but this was dismissed for reasons unknown. The "service" is not to be run as a webservice, but should be called by a PHP script. This is currently realized by calling the mono application via shell_exec from PHP.
So now I am stuck with a Mono port of my application, which works fine but takes way too long to execute. I have already stripped out all unnecessary DLLs, methods, etc., but calling the application via the command line (submitting the desired data via command-line parameters) takes approximately 700 ms. We expect about 10 hits per second, so this would only work if we set up a lot of servers for this task.
I assume the 700 ms are related to the cost of starting the application each time, because it does not make much difference in terms of time whether I handle the request once or five hundred times (I take the original input, vary it slightly, and do 500 iterations with "new" data each time; starting from the second iteration, the processing time drops to approximately 1 ms per iteration).
My next idea was to set up the Mono application as a remoting server, so that it only has to be started once and can then handle incoming requests. I therefore wrote another Mono application to serve as the client. Calling the client, letting it pass the data to the server, and retrieving the result now takes 344 ms. This is better, but still much slower than I would expect and want it to be.
I then implemented a new project from scratch based on this blog post and got stuck with the same performance issues.
The question is: am I missing something in the Mono projects that could improve the speed of the client/server? Although the idea of creating a web service for this task was dismissed, would a web service perform better under these circumstances (since I would not need a client application to call the service), even though remoting is said to be faster than web services?
I could have made that clearer, but implementing a webservice is currently not an option (and please don't ask why, I didn't write the requirements ;))
Meanwhile, I have confirmed that it is indeed the startup of the client that takes most of the time in the remoting scenario.
I could imagine accessing the server via pipes from the command line, which would be perfectly suitable in my scenario. I guess this would be done using sockets?
You can try using AOT to reduce the startup time. On .NET you would use ngen for that purpose; on Mono, just run mono --aot on all assemblies used by your application.
AOT'ed code is slower than JIT'ed code, but has the advantage of reducing startup time.
You can even try to AOT framework assemblies such as mscorlib and System.
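In practice that looks something like the following (paths are illustrative and depend on the installed Mono version):

    # AOT-compile the application itself...
    mono --aot myserver.exe
    # ...and the framework assemblies it loads
    mono --aot /usr/lib/mono/4.0/mscorlib.dll
    mono --aot /usr/lib/mono/4.0/System.dll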
I believe that remoting is not the ideal thing to use in this scenario. However, your idea of keeping a Mono process running on the server instead of starting one for every request is indeed solid.
Did you consider using SOAP webservices over HTTP? This would also help you with your 'web page' scenario.
Even if it is a little too slow for you, in my experience a custom RESTful service implementation would be easier to work with than remoting.
