Is it OK to use golang pprof in production without affecting performance?

I'm kind of new to the pprof tool and am wondering if it's OK to keep it running in production. From the articles I have seen it appears to be standard practice, but I'm confused about how it avoids hurting performance: it samples N times every second, so why doesn't that lead to a measurable degradation?

Jaana Dogan does say in her article "Continuous Profiling of Go programs"
Profiling in production
pprof is safe to use in production.
We target an additional 5% overhead for CPU and heap allocation profiling.
The collection is happening for 10 seconds for every minute from a single instance. If you have multiple replicas of a Kubernetes pod, we make sure we do amortized collection.
For example, if you have 10 replicas of a pod, the overhead will be 0.5%. This makes it possible for users to keep the profiling always on.
We currently support CPU, heap, mutex and thread profiles for Go programs.
Why?
Before explaining how you can use the profiler in production, it would be helpful to explain why you would ever want to profile in production. Some very common cases are:
Debug performance problems only visible in production.
Understand the CPU usage to reduce billing.
Understand where contention accumulates and optimize.
Understand the impact of new releases, e.g. seeing the difference between canary and production.
Enrich your distributed traces by correlating them with profiling samples to understand the root cause of latency.
So if you are using pprof for the right reasons, yes, you can leave it on in production.
But for basic monitoring, as commented, standard system metrics are enough.
As noted in "Continuous Profiling and Go" by Vladimir Varankin
Depending on the state of the infrastructure in the company, an “unexpected” HTTP server inside the application’s process can raise questions from your systems operations department ;)
At the same time, depending on the peculiar nature of a company, the very ability to access something inside a production application that doesn't directly relate to the application's business logic can raise questions from the security department ;))
So the overhead is not the only criterion to consider before leaving such a feature active.
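For reference, the "unexpected HTTP server" mentioned above is typically the net/http/pprof endpoint. Here is a minimal sketch of wiring it up; the choice of localhost:6060 follows the package documentation's example and is an assumption, not something prescribed by the articles quoted above:

// main.go - expose the pprof endpoints on a separate, non-public port.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Run the profiling server in its own goroutine and on its own port,
	// so it stays out of the application's public listener.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the application's real work goes here ...
	select {}
}

The CPU profile endpoint only samples while a client is actually collecting, e.g. go tool pprof http://localhost:6060/debug/pprof/profile?seconds=10, which is why the article quoted above collects for 10 seconds out of every minute rather than continuously.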

Related

Performance Testing in Mirth Connect Using JMeter

Mirth Connect is software designed to handle message flows, with built-in support for HL7 messages in particular, which is why it is widely used for interfacing in healthcare applications. Over the years I have seen Mirth run into performance issues, primarily due to messages building up over time and in scenarios where it receives a heavy message load in quick succession.
Mirth has a channel-based architecture, so ideally there would be some way to performance test a Mirth channel and get JMeter statistics for it, letting us gather the information needed to optimize the channel transformers and to set the purge routines accordingly.
However, there was little to no information on the Internet about how one can use JMeter to test a Mirth channel. A team in Sri Lanka did some research in this area back in 2013, and I found their findings and achievements here:
http://pragmatictestlabs.com/2016/10/09/performance-testing-healthcare-application-hl7-jmeter/
However, this is very specific: the output there was a JSON object which they extracted, whereas in Mirth outputs can take various forms, so there needs to be a more general way to do this. An important takeaway is that the input side is general: we can use JMeter to generate HL7 messages and pass them to Mirth. The open question is how to capture the response in a general way. It would be ideal if there were a way to read the Mirth Dashboard through JMeter, since all the output statistics are there and it's just a matter of reading them.
I have an application where Mirth reads HL7 messages, both ADT and RDE, creates a text file with the appropriate content, and drops it in a shared location. The application then reads the files and shows the information to the user.
I wish to do two performance tests here:
Measure how much time the complete system takes, from the arrival of a message to its information being available to the user, and how that varies with load
Measure how much time the channel takes and how that varies as the load increases
I can do the first one because I can generate HL7 messages using JMeter and I can get JMeter to read the output in the application or the database. The problem is with the second: can I do this in a general way?
You asked for suggestions, so I'm going to share my general strategy for performance testing Mirth channels. I suspect that this won't be a complete answer to your question, and I might not be telling you anything you don't already know, but I'm hoping this will help you find an answer that you are comfortable with.
For several reasons, try not to spend too much time "testing the complete system":
Firstly, testing the entire system necessarily includes testing low-level configuration like the number of CPU cores, the NICs being used in the box, and kernel level software like the TCP/IP stack. You don't usually have any control over these things, so you can't optimize them in any way.
Secondly, the performance of the entire system is going to be heavily dependent on whatever ancillary code is running on the box. If a sysadmin decides to 'nice' my Mirth process down, or to use that box to also host a SQL server, that will have an impact on the system that I (again) have no control over.
Thirdly and most frankly, I find that the "performance of an entire system" is something that management asks about during system setup so they can get a cost estimate; but they know that they're only getting an estimate. You do your best to use test metrics to give a good guess for the initial hardware provisioning, but everyone knows that it's really the production performance metrics that will drive later provisioning costs.
Make sure that you build your channels for testability. I find that it's much easier to test a channel when the source and destination can be changed to "Channel Reader" and "Channel Writer" without changing message handling. One way to look at this is that you're not going to overhaul Mirth's MLLP stack or Java's TCP stack, so just eliminate these things from your testing.
I keep a source of useful test messages. I have a couple of files on a network drive that have around a hundred messages that test for nasty edge cases that I've run into over the years on my HL7 interfaces. I wrote a small Mirth channel that reads these in from a file and spews out copies as fast as it can. By turning on "Queueing" on the destination side of that channel, I can queue up a bajillion test messages that are ready to send to the channel I want to test. In the past I took the time to build a test interface that acted like a fake EMR to spew out randomly constructed messages, but there didn't seem to be any advantage over just spewing copies of the same messages from my test files.
Finally, and most importantly, it's critical that you measure the performance of your test instance using the same metrics that you'll use to measure the performance of your production instance. If the sole production metric you care about is 'messages per second', then that's what you need to measure on your test box. If memory footprint is a concern in production, then you need to measure memory usage in your test environment as well. When you make a change to your test instance that decreases an important metric by 10%, you'll need to make sure your management is aware before you push that change to production.
Note that getting some of these metrics can be tricky, since Mirth doesn't include good tools to monitor its own performance. The Mirth dashboard is a good place to keep an eye on errors or crashes, but it's not a great place to find performance data. During my testing I make sure that I use whatever resource monitoring tool the sysadmins will be using to monitor the performance of the production instance. Beyond that, I use a manual process to test performance: if I want to count messages per second, I send through a batch of messages and look at the timestamps of the first and last messages. If I want to get an idea of the CPU load of a Mirth channel, I use the Windows Performance Monitor or the posix 'top' command.
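A tiny sketch of that first/last-timestamp arithmetic, in Go for illustration; the timestamp layout and the numbers are made up and would come from whatever the destination channel or resource monitor actually reports:

// throughput.go - messages per second from the timestamps of the first and last
// messages in a test batch.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05" // assumed timestamp format; adjust to what your channel logs
	first, err := time.Parse(layout, "2024-03-01 10:00:00")
	if err != nil {
		panic(err)
	}
	last, err := time.Parse(layout, "2024-03-01 10:03:20")
	if err != nil {
		panic(err)
	}
	const sent = 10000 // number of messages in the test batch

	elapsed := last.Sub(first).Seconds()
	fmt.Printf("%.1f messages/second over %.0f seconds\n", sent/elapsed, elapsed)
}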

Liferay 6.2 Session autoextend Disadvantages

I found that it's possible to automatically extend Liferay's session so that the session doesn't expire until you close your browser. Are there any limitations or disadvantages to such an approach? Any performance degradation or load issues?
As with any abstract question about hypothetical performance impact (or premature optimization), this question is basically unanswerable - but here are some criteria:
Naturally, pinging the server in order to extend a session will incur some extra load - if that results in a performance decrease, you'll most likely have a highly congested installation in the first place. If your server is bored all day, the extra ping won't bring it down.
You may or may not have custom applications running in your installation that store data in the user's session. If those store a few bytes (like Liferay does, e.g. the currently logged-in user's information): there's probably no degradation. If you store 1MB of information per session (in your own custom apps - Liferay doesn't do this), things might differ: just multiply your session storage size by the number of concurrent users that you expect (for example, 1MB times 2,000 concurrent sessions is already around 2GB of heap). In case this use of memory indicates a problem: make your custom apps use the session less - it's bad style anyway.
Will your particular installation suffer from any degradation? Measure. There's no way around this.
From a system maintenance point of view: If you're running a cluster and want to take individual machines out of the load balancer: Artificially extending sessions might indicate that a machine still has sessions open, even though they're mostly on unattended browsers - you'll get inflated numbers and it takes longer to bring machines down when you need to wait for the session count to come close to zero.

How might Chaos Engineering look as part of a pipeline?

Chaos engineering practices are becoming very widely used. One common example is Netflix' own Chaos Monkey. However, Chaos Monkey is often run ad-hoc against random targets. I'm curious how chaos experiments might work in a typical CI/CD pipeline to enhance a specific service's resiliency.
Since chaos experiments (usually) require a fully functional environment, when would they run? Would they run in parallel with testing, or downstream?
Would you run a chaos experiment with every commit, or just some?
How long would you allow the chaos experiments to run? A 60-minute CPU spike might interfere with a "fail fast" approach, for example.
Would a chaos experiment ever fail the pipeline? What would constitute a 'failure'?
We are just getting started with our chaos engineering efforts, but I'll offer some thoughts regarding your questions.
There are at least three distinct classes of experiment:
Instance/container kills that we expect the underlying infrastructure to handle automatically.
Higher-level but fairly localized failures like slow or unavailable dependencies.
Large-scale failures like data center or region down.
For a build pipeline the sweet spot would be in the middle there (i.e. higher-level but localized failures), because usually the software itself plays a role in responding to the failure. For example the software might include a circuit breaker that trips, throttling, automated failover, etc. If those are software functions, then they can either work or not work, and the build should uncover that.
To the extent that resiliency to failure is a system requirement, then yeah, a failed experiment would fail the pipeline. Suppose for instance that build 392 has a correctly working circuit breaker and that build 393 doesn't. That would be a failure, since the system goes from meeting the requirement to not meeting it.
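As a concrete illustration of that kind of check, here is a self-contained sketch in Go; the tiny breaker below is a stand-in for whatever circuit-breaker implementation the service actually uses, not a real library:

// chaos_breaker_test.go
package chaos

import (
	"errors"
	"testing"
	"time"
)

var errOpen = errors.New("circuit open")

// breaker is a deliberately minimal circuit breaker: it opens after `limit`
// consecutive failures and then rejects calls without invoking the dependency.
type breaker struct {
	failures int
	limit    int
}

func (b *breaker) call(dep func() error) error {
	if b.failures >= b.limit {
		return errOpen
	}
	if err := dep(); err != nil {
		b.failures++
		return err
	}
	b.failures = 0
	return nil
}

// failingDependency simulates the "slow or unavailable dependency" class of experiment.
func failingDependency() error {
	time.Sleep(10 * time.Millisecond)
	return errors.New("dependency unavailable")
}

// If a later build removes or breaks this behaviour, the test fails and so does the pipeline.
func TestBreakerTripsUnderDependencyFailure(t *testing.T) {
	b := &breaker{limit: 3}
	for i := 0; i < 3; i++ {
		_ = b.call(failingDependency) // drive consecutive failures to trip the breaker
	}
	if err := b.call(failingDependency); !errors.Is(err, errOpen) {
		t.Fatalf("expected open circuit after repeated failures, got %v", err)
	}
}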
We usually run some chaos experiments, like large-scale failures, outside the pipeline.
During the build pipeline, we usually combine chaos experiments with a short performance test to simulate activity, then kill some instances/containers to check the resilience of the system, and fail the build if the system is not able to recover.

MVC3 memory management

What is the best way to check memory usage in an ASP.NET MVC3 application?
I have been told by my hosting provider to recycle the IIS application pool every so often to improve the speed of the site. Is this 'recommended practice'? Surely I shouldn't need to restart my application every so often? I'd much rather find out whether it is an issue with memory usage in my application and correct it. So any tips and best practices you use would be quite helpful too.
The application is based on ASP.NET MVC3, C# and EF Code First. Any guidance, links appreciated.
EDIT:
I found this page after I posted, which is quite useful. But I'd still like to hear any other views.
ASP.NET MVC and EF Code First Memory Usage
Thank you
I have a site that never recycles (until the machine is rebooted weekly)
Your application generally should keep performing fine. If it doesn't, there is some leak.
This can occur because
Cache never expires
Session storage keeps growing and never times out
ObjectContexts are never disposed and kept in the session, etc
Objects that should be disposed aren't
Objects that are created via a dependency injection container aren't setup to release after each request, and thus potentially have internal collections that keep growing.
There are more causes - but these are a few main ones.
So the answer really is: there is no best practice - it depends on your app.
If you are worried about current sessions during a restart, keep in mind that a restart can be quick, current requests are (sometimes) allowed to finish, and forms authentication tokens will survive the restart; sessions, however, will not unless you configure an out-of-process state server.
If your memory usage keeps growing, then set up a restart schedule; otherwise do it once a week or never, or configure the pool to reset once memory reaches a given threshold. ASP.NET will also restart automatically once a certain threshold is reached, based on what the host has set up for memoryLimit:
http://msdn.microsoft.com/en-us/library/7w2sway1.aspx
By default IIS recycles the application pool automatically at an interval (I think it is 29 hours or so), but that is surely set by the host, no matter how little or how much memory the process is using. The recycling trigger can be a time interval or the process hitting a certain memory usage limit. I'm sure any shared host has both of them set.
Regarding memory usage, you can use the GC.GetTotalMemory method, which will give you an approximate figure. Even when using Perfmon the readings aren't very accurate, but it gives you an idea.
//global.asax.cs
void Application_EndRequest(object o, EventArgs a)
{
    // Only append the diagnostic output to HTML responses.
    var ctype = Context.Response.Headers["Content-Type"];
    if (ctype == null || !ctype.Contains("text/html")) return;

    // GC.GetTotalMemory(false) returns the approximate managed heap size without forcing a collection.
    Context.Response.Write(string.Format("<p>Memory usage: {0}</p>", GC.GetTotalMemory(false)));
}
Be aware that you'll see the usage increasing until the GC kicks in, after which it will drop to a more 'realistic' value.
If you have the money, I recommend a specialized tool such as a memory profiler.
Other things you can do to at least be ready if the application has memory or performance problems:
Proper layering of the application means you can refactor the more inefficient parts without affecting the others.
The Repository pattern will be very helpful, because you can start using EF, find out that EF uses too much memory (like in the link you've found), and then switch the repository implementation to use PetaPoco or Dapper.net.
In general an ORM is a fairly heavy library; if the application doesn't need ORM features but just a quick way to work with a db, use a micro-ORM like those mentioned above from the beginning.
Always dispose objects implementing IDisposable.
When dealing with large sets of db records, use pagination. It's good for both server resource usage and user experience.
Apply the YAGNI (You Ain't Gonna Need It) principle as much as possible; this somehow implies a bit of TDD :)

Diagnosing pathological behavior of a piece of cluster software

I'm using a kind of load balancer over a small cluster that is able to achieve >2000rps on zero-duration requests (i.e. ones that are immediately satisfied by the worker nodes).
But as soon as the requests stop being zero-duration and start taking even 1ms, performance immediately drops >10x. The data being transferred in both directions is identical and is about 2kb in size.
This is for sure not related to saturation of the cluster or network throughput, because 200rps of 1ms requests is a very tiny load and the network is 10Gbit. Besides, the CPU load is just some 2-5% both on the load balancer and on the worker nodes.
I wonder whether that might be related to some pathological behavior of the OS scheduler or the OS network stack (i.e. some special-case behavior for very short interactions).
How might I diagnose the reason? Which perfcounters to watch? What tools or methodologies to use?
(Just in case someone simply knows the answer to my particular problem, I'm talking about the MS HPC Server 2008 R2's "WCF Broker", running on Windows Server 2008 R2 over Hyper-V)
One thing you can do is use ETW tracing to try and understand what the nodes are doing while your WCF job is running. On HPC Server, I sometimes use clusrun to run xperf and collect traces on all or specific nodes. There are a number of tools that you can use for analyzing ETW traces, including xperf itself. I haven't done any serious work using HPC SOA (WCF), but I did write a simple WCF raytracer app and then used xperf to profile it on several of the nodes.
Turned out it was a completely network-unrelated issue having to do with peculiarities of the scheduling mechanism of HPC Server. I resolved the issue by tweaking a configuration option "serviceRequestPrefetchCount" to 0 in the loadBalancing section of the WCF service config file.
I'm assuming that there are some shared resources with some kind of locking system in place? Is locking a bottleneck? It's hard to guess without seeing the system.
Do you have a way to profile the workers? What are they spending most of their time on, especially in the fast vs slow scenarios?
