MiniProfiler - What's the impact in terms of Memory Consumption? - mvc-mini-profiler

In terms of memory consumption, what is the impact of running Mini-Profiler?
Also, if we use the included SqlServerStorage provider, are the profiling results also stored in memory?

Related

Memory performance vs Utilization

Question: How is memory (RAM) performance (read/write speeds, etc.) affected by total utilization?
Background:
I am curious if there is a performance impact for reading/writing to system memory based on the overall utilization of that memory.
If performance degrades at higher utilization, what is the relationship between utilization and performance? Is this linear? Or at some point is there a significant drop in performance?
If there is a drop in performance with higher utilization, is there a point at which it becomes faster to use swap on an SSD on a SATA bus? Where does this point occur?
Outcome:
All else being equal, I'm curious whether there is a specific target for memory utilization that gets the best performance from a machine. On the one hand, having more things in system memory should make them faster to reach than having to read from disk, but at some point, surely, overall memory performance is materially affected by some overhead from high memory utilization, right?
This sounds a bit like a superuser.com question, not stackoverflow.
Time to allocate new memory might increase slightly as the system approaches 100% full.
If you don't have any swap space, Linux will pick a process that is using a lot of RAM and kill it pre-emptively when the system approaches OOM (google "oom-killer").
Access time to already allocated memory is not at all influenced by the fraction of total memory in use. A program that uses 1GB of memory with some specific access pattern will show the same performance on a machine with 2GB as on a machine with 16GB of RAM.
Virtual-to-physical mappings are defined by page tables, and lookups through them can get slower when more memory is allocated to a process (each process has its own page table). Again, this depends on the amount allocated, not on how full the machine is; in any case, these lookups are cached by the CPU's hardware TLB.
See Ulrich Drepper's What Every Programmer Should Know About Memory for more background on this stuff.

Virtuoso System Requirements

We would be using Virtuoso for storing RDF; the triple count will be 100 million to start with. I need to know what the typical RAM, CPU, disk, etc. should be for this. Querying will be done with SPARQL, and some of the queries will be fairly complex.
Kindly provide your inputs.
The average size of a Virtuoso version 6.x triple (quad) is about 30 bytes, so for 100 million triples you would need about 3GB of RAM. This is the most critical component, since for best performance the database working set should fit in memory so that data does not need to be loaded from disk once the database is "warmed up"; this is especially true when running complex queries.
In terms of disk, the faster it is, the quicker the database can be loaded into memory, checkpoints performed, etc., so SSDs or similar devices are recommended where possible, especially if memory is limited and reading data from disk is at times unavoidable.
In terms of processor, any standard commodity 64-bit processor available today would suffice, typically running on a Linux x86_64 system of your choice. As said, though, memory is always the most critical component.
See the following Virtuoso FAQ and performance tuning documents for more details:
http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/VirtRDFPerformanceTuning
http://virtuoso.openlinksw.com/dataspace/dav/wiki/Main/#FAQ

Profiling CPU Cache/Memory from the OS/Application?

I wish to write software which could essentially profile the CPU cache (L2,L3, possibly L1) and the memory, to analyze performance.
Am I right in thinking this is not doable because software has no access to the cache contents?
Another way of wording my Q: is there any way to know, from the OS/Application level, what data has been loaded into cache/memory?
EDIT: Operating System Windows or Linux and CPU Intel Desktop/Xeon
You might want to look at Intel's PMU i.e. Performance Monitoring Unit. Some processors have one. It is a bunch of special purpose registers (Intel calls them Model Specific Registers, or MSRs) which you can program to count events, like cache misses, using the RDMSR and WRMSR instructions.
Here is a document about Performance Analysis on i7 and Xeon 5500.
You might want to check out Intel's Performance Counter Monitor, which is basically a set of routines that abstract the PMU and which you can use in a C++ application to measure several performance metrics live, including cache misses. It also has some GUI/command-line tools for standalone use.
Apparently, the Linux kernel has a facility for manipulating MSRs.
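For illustration, here is a minimal sketch (not from the original answer) of using that facility from user space: the msr driver exposes each CPU's registers as /dev/cpu/N/msr, readable with pread() at an offset equal to the MSR address. It assumes the msr module is loaded and root privileges, and it reads the TSC MSR (0x10) purely as a stand-in for whatever counter you actually need.

    /* Read an MSR through the Linux msr driver (man 4 msr).
       Requires the msr kernel module and root privileges. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/cpu/0/msr", O_RDONLY);  /* MSRs of CPU 0 */
        if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

        uint64_t value;
        /* The MSR address doubles as the file offset; 0x10 is the TSC. */
        if (pread(fd, &value, sizeof(value), 0x10) != sizeof(value)) {
            perror("pread");
            return 1;
        }

        printf("MSR 0x10 (TSC) = %llu\n", (unsigned long long)value);
        close(fd);
        return 0;
    }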
There are other utilities/APIs that also use the PMU: perf, PAPI.
Cache performance is generally measured in terms of hit rate and miss rate.
There are many tools to do this for you. Check how Valgrind does cache profiling.
Also, cache performance is generally measured on a per-program basis. Well-written programs will result in fewer cache misses and better cache performance, and vice versa for poorly written code.
Measuring the actual cache speed is the headache of the hardware manufacturers, and you can refer to their manuals to find this value.
Callgrind/Cachegrind combination can help you track cache hits/misses
This has some examples.
TAU, an open-source profiler which works using PAPI can also be used.
If, however, you want to write code to measure the cache statistics yourself, you can write a program using PAPI. PAPI allows the user to access the hardware counters without needing to know the system architecture.
The PMU uses model-specific registers, so to use it directly you must have knowledge of the registers involved.
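As a rough sketch of what a PAPI-based measurement looks like (not from the original answer; the preset events used here are not available on every CPU, check with papi_avail first, and link with -lpapi):

    /* Count cache misses around a small workload using PAPI preset events. */
    #include <stdio.h>
    #include <papi.h>

    #define N (1 << 20)
    static double data[N];

    int main(void)
    {
        int eventset = PAPI_NULL;
        long long counts[2];

        if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
            fprintf(stderr, "PAPI init failed\n");
            return 1;
        }
        PAPI_create_eventset(&eventset);
        PAPI_add_event(eventset, PAPI_L1_DCM);  /* L1 data cache misses  */
        PAPI_add_event(eventset, PAPI_L2_TCM);  /* L2 total cache misses */

        PAPI_start(eventset);
        for (int i = 0; i < N; i += 16)         /* workload being measured */
            data[i] += 1.0;
        PAPI_stop(eventset, counts);

        printf("L1 data cache misses: %lld\n", counts[0]);
        printf("L2 total cache misses: %lld\n", counts[1]);
        return 0;
    }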
Perf allows measurement of the L1 and LLC (last-level cache); Cachegrind, on the other hand, lets the user measure the L1 and LLC (which can be L2 or L3, whichever the highest-level cache is). Use Cachegrind only if you have no need for faster results, because Cachegrind runs the program about 10x slower.

Memory management for intentionally memory intensive applications

Note: I am aware of the question Memory management in memory intensive application, however that question appears to be about applications that make frequent memory allocations, whereas my question is about applications intentionally designed to consume as much physical memory as is safe.
I have a server application that uses large amounts of memory in order to perform caching and other optimisations (think SQL Server). The application runs on a dedicated machine, and so can (and should) consume as much memory as it wants / is able to in order to speed up and increase throughput and response times without worry of impacting other applications on the system.
The trouble is that if memory usage is underestimated, or if load increases, it's possible to end up with nasty failures as memory allocations fail - in this situation the best thing to do is obviously to free up memory in order to prevent the failure, at the expense of performance.
Some assumptions:
The application is running on a dedicated machine
The memory requirements of the application exceed the physical memory on the machine (that is, if additional memory were available to the application, it would always be able to use that memory to improve response times or throughput in some way)
The memory is effectively managed in a way such that memory fragmentation is not an issue.
The application knows what memory can be safely freed, and what memory should be freed first for the least performance impact.
The app runs on a Windows machine
My question is - how should I handle memory allocations in such an application? In particular:
How can I predict whether or not a memory allocation will fail?
Should I leave a certain amount of memory free in order to ensure that core OS operations remain responsive (and don't adversely impact the application's performance that way), and how can I find out how much memory that is?
The core objective is to prevent failures as a result of using too much memory, while at the same time using up as much memory as possible.
I'm a C# developer, however my hope is that the basic concepts for any such app are the same regardless of the language.
In Linux, memory usage percentage is divided into the following levels.
0 - 30% - no swapping
30 - 60% - swap dirty pages only
60 - 90% - swap clean pages as well, based on an LRU policy
90%+ - invoke the OOM (out of memory) killer and kill the process consuming the most memory
check this - http://linux-mm.org/OOM_Killer
I think Windows might have a similar policy, so you can check the memory stats and make sure you never get near the max threshold (see the sketch after this answer).
One way to stop consuming more memory is to go to sleep and give more time for memory cleanup threads to run.
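To make the "check the memory stats" suggestion concrete, here is a minimal Win32 sketch in C (the question is about C#, where the same information can be reached via P/Invoke or performance counters); the 90% cutoff is purely illustrative, not a recommendation.

    /* Decide whether it is safe to grow the cache, based on the
       system-wide physical memory load reported by Windows. */
    #include <windows.h>
    #include <stdio.h>

    int should_grow_cache(void)
    {
        MEMORYSTATUSEX status;
        status.dwLength = sizeof(status);

        if (!GlobalMemoryStatusEx(&status))
            return 0;                        /* be conservative on failure */

        /* dwMemoryLoad is the OS's view of physical memory in use (0-100). */
        printf("memory load: %lu%%, available physical: %llu MB\n",
               status.dwMemoryLoad,
               (unsigned long long)(status.ullAvailPhys / (1024 * 1024)));

        return status.dwMemoryLoad < 90;     /* stop growing near the threshold */
    }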
That is a very good question, and bound to be subjective as well, because a fundamental aspect of C# is that all memory management is done by the runtime, i.e. the garbage collector. The garbage collector is a non-deterministic entity that manages and sweeps memory for reclamation; depending on how often memory gets fragmented, the GC will kick in, so knowing in advance when that will happen is not an easy thing to do.
Properly managing memory sounds tedious, but common sense applies, such as using the using clause to ensure an object gets disposed. You could put in a single handler to trap the OutOfMemory exception, but that is an awkward approach: if the program has run out of memory, does it just seize up and bomb out, or should it wait patiently for the GC to kick in? Again, determining that is tricky.
The load on the system can adversely affect the GC's job, almost to the point of a denial of service where everything just grinds to a halt. Again, since the specifications of the machine and the nature of its workload are unknown, I cannot answer fully, but I'll assume it has loads of RAM.
In essence, while this is an excellent question, I think you should not worry about it and should leave it to the .NET CLR to handle memory allocation/fragmentation, as it seems to do a pretty good job.
Hope this helps,
Best regards,
Tom.
Your question reminds me of an old discussion, "So what's wrong with 1975 programming?". The architect of varnish-cache argues that instead of telling the OS to get out of the way and managing all memory yourself, you should rather cooperate with the OS and let it understand what you intend to do with memory.
For example, instead of simply reading data from disk, you should use memory-mapped files. This allows the OS to apply its LRU algorithm to write data back to disk when memory becomes scarce. At the same time, as long as there is enough memory, your data will stay in memory. Thus, your application may potentially use all of the memory without risking getting killed.
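As a concrete illustration of that approach, here is a minimal C sketch using mmap on Linux (the question's context is C#/Windows, where System.IO.MemoryMappedFiles plays the same role); the file name is just a placeholder.

    /* Map a data file read-only and let the kernel manage which pages
       stay resident; under memory pressure clean pages are simply evicted. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);          /* placeholder file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        const uint8_t *data = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (data == MAP_FAILED) { perror("mmap"); return 1; }

        madvise((void *)data, st.st_size, MADV_SEQUENTIAL); /* hint the access pattern */

        uint64_t sum = 0;
        for (off_t i = 0; i < st.st_size; i++)        /* touch every page */
            sum += data[i];
        printf("checksum: %llu\n", (unsigned long long)sum);

        munmap((void *)data, st.st_size);
        close(fd);
        return 0;
    }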

Open source profiler for analyzing low-level architectural inefficiencies?

Modern processors use all sorts of tricks to bridge the gap between the high speed of their processing elements and the tardiness of external memory. In performance-critical applications, the way you structure your code can often have a considerable influence on its efficiency. For instance, researchers using the SLO analyzer were able to fix cache locality problems and double the execution speed of several SPEC2000 benchmark programs. I'm looking for recommendations for an open source tool that utilizes a processor's performance monitoring support to locate and analyze architectural inefficiencies, such as cache misses, branch mispredicts, front-end stalls, cache pollution through address aliasing, long-latency instructions, and TLB misses. I'm aware of Intel's VTune (commercial), AMD's CodeAnalyst (free, but not open source), and Cachegrind (relies on simulation).
For Linux, OProfile works well. Actually, AMD's CodeAnalyst uses OProfile as its backend.
OProfile uses the processor's internal performance monitoring mechanism to analyze architectural inefficiencies.
