I want to profile a simple webserver that I wrote in Go. It accepts requests, maps each request to an Avro object and sends it to Kafka in a goroutine. The requirement is that it answers immediately and sends the object to Kafka afterwards. Locally it answers in under 1 ms on average. I have been trying to profile it by starting the program with the davecheney/profile package and sending test requests with JMeter. I can see in the output that the profile file is generated, but it remains empty, no matter how long JMeter keeps sending requests. I'm running it on OS X El Capitan. I read that there were some issues with profiling on OS X, but that they were supposed to be fixed in El Capitan. Do you have any tips?
Firstly, I'm not sure whether you're trying to do latency profiling. If so, be aware that Go's CPU profiler only reports time spent by a function executing on the CPU and doesn't include time spent sleeping, etc. If CPU Profiling really is what you're looking for, read on.
If you're running a webserver, just include the following in your imports (in the file where main() is) and rebuild:
import _ "net/http/pprof"
Then, while applying load through JMeter, run:
go tool pprof /path/to/webserver/binary http://<server>/debug/pprof/profile
The net/http/pprof package provides profiling hooks that let you profile your webserver on demand at any time, even while it's running in production. You may want to serve it on a different, firewalled port, though, if your webserver is meant to be exposed publicly.
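If you want the profiling endpoint separated from your public traffic, something along these lines should work (a minimal, untested sketch; the ports 6060 and 8080 and the trivial handler are placeholders):

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
    // Profiling endpoint: bind it to localhost (or a firewalled interface) only.
    go func() {
        log.Println(http.ListenAndServe("localhost:6060", nil))
    }()

    // Public webserver on its own mux, so it does not expose /debug/pprof.
    mux := http.NewServeMux()
    mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello"))
    })
    log.Fatal(http.ListenAndServe(":8080", mux))
}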
I want to profile a server written in Go. I am using "net/http/pprof", but the default behaviour is utterly useless, as it seems to only profile the goroutine running the server that serves the profiling data.
I put my server under load with siege. With 1000 concurrent users I got the profiling data that I wanted.
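For reference, the combination looks roughly like this (the URL, concurrency and durations are assumptions, so adjust them to your setup; older Go versions may also want the server binary as the first pprof argument):

siege -c 1000 -t 1M http://localhost:8080/your/endpoint
go tool pprof http://localhost:8080/debug/pprof/profile?seconds=30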
Recently, I received a PC from my client with LoadRunner 11.03 (perhaps patch 3) installed, and I have been using it to watch web performance in a long-running test.
In the multi-user test it does not seem to apply the load properly: the web server's performance monitors never hit any limit in CPU usage, network bandwidth, disk usage per minute, or memory usage. Only the number of waiting threads looked a little off, but not obviously so.
The behavior looks sequential rather than like parallel access.
(No errors occurred.)
So I think the problem is not with the servers; rather, something on the client is preventing parallel access for some reason.
Since I don't have a proper HP Passport ID, I can't access the LoadRunner patches website.
Please let me know whether the LoadRunner patches, especially patch 4 or higher, would fix this behavior or not.
OK, based on what you wrote (correct me if I'm wrong), I'm guessing you are running a script in VuGen, the Virtual User Generator, and not in the Controller. LoadRunner is actually a suite of multiple applications. The Virtual User Generator is the script development application, a development environment like Eclipse. It is single-threaded, and running a script there is meant only to test that script on its own.
To run a multi-threaded test you need to use the Controller app: develop a test scenario, assign multiple virtual users (the LoadRunner term for concurrent threads) to each script you want to run, and execute the test from the Controller. You can configure machines as Load Generators (another app, set up to run as a process or service) and push the test out from the Controller to the Generators.
I have a Drupal 6 site that is frequently (about once a day) going down. The hosting provider is reporting that something in our site code is occupying all Apache threads but keeping them idle, making the server run out of threads to respond to new requests. A simple restart of Apache frees the threads and fixes the issue, though it reoccurs within a few hours or a day.
I have no idea how to troubleshoot this issue and have never come across PHP code doing this. Is there some kind of Apache settings change I can make to capture more information about what might be keeping a thread occupied but idle? What typical PHP routines can cause this behavior? I looked for code that connects to external resources, but didn't see any issues there.
Any hints on what to look at, how to capture more information, or what kind of PHP code can cause this would be most useful.
With Drupal 6 you could have the poormanscron module running from time to time, or even classical cron (a wget from crontab, or whatever).
One heavy cron operation could then put your database under heavy load. If your database response time becomes very slow, every HTTP request becomes very slow as well (sessions are stored in the database, for example, and several hundred queries are needed for a Drupal page). With all requests slowing down, all the available PHP processes may end up in an 'occupied' state.
Restarting Apache stops all current processes. If you run cron via wget and not via drush, cron tasks are a good thing to check (running cron via drush makes it run via php-cli, so restarting Apache would not kill the cron). You can try a module like Elysia Cron to get more details on cron tasks and maybe isolate the long ones (it gives you a report on task durations).
This effect (one request hammering the database, all requests slowing down, no more processes available) could also be caused by one bad piece of code in any of your installed modules. That would be harder to detect.
So I would ensure slow queries are logged in MySQL (see the my.cnf options below), then analyse those queries with tools like mysqlsla. The problem is that sometimes one query is so heavy that all the queries after it become slow, so use the time of the crash to identify the first ones. Also use the MySQL option to log queries that don't use indexes.
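A minimal sketch of the relevant my.cnf options (option names vary a bit between MySQL versions, and the log path and threshold are just assumptions):

[mysqld]
# log statements slower than 2 seconds
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
# also log statements that do not use an index
log_queries_not_using_indexes = 1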
Another way to get all Apache processes stalled on a PHP operation with Drupal is a locking problem. Drupal uses its own lock implementation backed by MySQL. You could add some watchdog calls (Drupal's internal debug messages) in those files to try to detect lock problems.
You could also have some external HTTP requests made by Drupal: calls to external websites like Facebook, Google, some tiny-URL tools, or the drupal.org module update checks (which always try to find all modules, even the ones you wrote yourself). If the remote website is down or filtering your traffic you'll have problems (though an Apache restart would not help in that case, so it may not be that).
I've created a simple Go application on a Mac for writing and reading data to and from a TCP connection. I used the GAE Go version. Later, I ported that program to Windows, and I got this error:
Connection.SetReadTimeout undefined (type *net.TCPConn has no field or method SetReadTimeout)
I guess the net package information on the Golang website describes the package only for the GAE version. How would I properly set the timeout in a non-GAE Go version?
With the latest weekly (aka Go 1 RC2), one has to use the various Set*Deadline methods of the net.Conn type. Note that the old timeouts were relative to some event, whereas deadlines are absolute times. The background for this change is roughly: setting a [relative] timeout of 1 s seems like a good idea in some scenarios, but it applied to every event, like receiving a single byte, thus allowing crafted transfers to avoid the timeout forever (with the corresponding DoS close at hand).
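A minimal sketch of what that looks like on the client side (the address is a placeholder and the 5-second timeout is just an example):

package main

import (
    "log"
    "net"
    "time"
)

func main() {
    conn, err := net.Dial("tcp", "example.com:7777")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // Deadlines are absolute points in time, so reset the deadline before
    // each read if you want a rolling per-read timeout.
    if err := conn.SetReadDeadline(time.Now().Add(5 * time.Second)); err != nil {
        log.Fatal(err)
    }

    buf := make([]byte, 4096)
    n, err := conn.Read(buf)
    if err != nil {
        // a timeout shows up here as a net.Error whose Timeout() returns true
        log.Fatal(err)
    }
    log.Printf("read %d bytes", n)
}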
This is my setting: I have written a .NET application for local client machines, which implements a feature that could also be used on a webpage. To keep this example simple, assume that the client installs a software into which he can enter some data and gets some data back.
The idea is to create a webpage that holds a form into which the user enters the same data and gets the same results back as above. Due to the company's available web servers, the first idea was to create a mono webservice, but this was dismissed for reasons unknown. The "service" is not to be run as a webservice, but should be called by a PHP script. This is currently realized by calling the mono application via shell_exec from PHP.
So now I am stuck with a mono port of my application, which works fine, but takes way too long to execute. I have already stripped out all unnecessary dlls, methods etc, but calling the application via the command line - submitting the desired data via commandline parameters - takes approximately 700ms. We expect about 10 hits per second, so this could only work when setting up a lot of servers for this task.
I assume the 700 ms are related to the cost of starting the application every time, because it does not make much difference in terms of time whether I handle the request only once or five hundred times (I take the original input, vary it slightly, and do 500 iterations with "new" data every time; starting from the second iteration, the processing time drops to approximately 1 ms per iteration).
My next idea was to setup the mono application as a remoting server, so that it only has to be started once and can then handle incoming requests. I therefore wrote another mono application that serves as the client. Calling the client, letting the client pass the data to the server and retrieving the result now takes 344ms. This is better, but still way slower than I would expect and want it to be.
I have also implemented a new project from scratch based on this blog post and ran into the same performance issues.
The question is: am I missing something related to the mono-projects that could improve the speed of the client/server? Although the idea of creating a webservice for this task was dismissed, would a webservice perform better under these circumstances (as I would not need the client application to call the service), although it is said that remoting is faster than webservices?
I could have made that clearer, but implementing a webservice is currently not an option (and please don't ask why, I didn't write the requirements ;))
Meanwhile I have verified that it is indeed the startup of the client that takes most of the time in the remoting scenario.
I could imagine accessing the server via pipes from the command line, which would be perfectly suitable in my scenario. I guess this would be done using sockets?
You can try to use AOT to reduce the startup time. On .NET you would use ngen for that purpose; on Mono, just run mono --aot on all assemblies used by your application.
AOT'ed code is slower than JIT'ed code, but has the advantage of reducing startup time.
You can even try to AOT framework assemblies such as mscorlib and System.
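On the command line that would look something like this (the assembly names and paths are assumptions, so adjust them to your application and your Mono install):

# AOT-compile your application assemblies
mono --aot MyApp.exe
mono --aot MyLibrary.dll
# optionally AOT the framework assemblies as well
mono --aot /usr/lib/mono/4.0/mscorlib.dll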
I believe that remoting is not the ideal thing to use in this scenario. However, your idea of having a long-running Mono process on the server instead of starting it every time is indeed solid.
Did you consider using SOAP webservices over HTTP? This would also help you with your 'web page' scenario.
Even if it is a little too slow for you, in my experience a custom RESTful service implementation would be easier to work with than remoting.