I am solving some algorithm problems in C# and running them as a console application.
To check the efficiency of these programs, I would like to see what their run times are.
Currently I am printing the time at the start of the program and at the end and calculating the time difference, but is there a way to reduce the observer effect?
Is there some built-in tool or plugin that I am not aware of?
You should use the Stopwatch class, which is specifically designed to do that.
To avoid measuring the JIT time, you should also run each algorithm at least once before measuring anything so that the JIT has time to run.
When measuring the algorithms, you should run each one hundreds of times and take the average runtime.
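A minimal sketch of that pattern, assuming the algorithm can be wrapped in an Action delegate (the helper name, iteration count and SolveProblem placeholder are illustrative):

using System;
using System.Diagnostics;

static class Bench
{
    // Runs the algorithm once so the JIT compiles it, then measures the
    // average time over many iterations with a single Stopwatch.
    public static double AverageMilliseconds(Action algorithm, int iterations = 500)
    {
        algorithm();                        // warm-up run, not measured

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            algorithm();
        sw.Stop();

        return sw.Elapsed.TotalMilliseconds / iterations;
    }
}

// usage (SolveProblem is a placeholder for your own method):
// Console.WriteLine(Bench.AverageMilliseconds(() => SolveProblem()));

Averaging over many iterations also smooths out scheduler noise and garbage-collection pauses.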
The most important source of delay due to the observer effect is the printing itself. Another potential source of delay is formatting the debug message text. So I suggest the following:
If you can anticipate the number of loops and the number of stages per loop, create an array to store the timing information. If not, use a dynamic list.
During execution, store the time and any additional information in that array or list.
If possible, don't store messages along with the time but codes, for example 1 = Stage 1, 2 = Stage 2, etc.
At the end of the execution, dump all the information to the screen or a file, and format the messages as needed (a sketch of this approach follows below).
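As a rough illustration of that approach (the class name, stage codes and list capacity are made up), something like the following defers all formatting and output until after the run:

using System;
using System.Collections.Generic;
using System.Diagnostics;

class StageLog
{
    // Pre-sized list of (stage code, elapsed ticks) pairs; no strings are
    // built and nothing is printed while the algorithm is running.
    private readonly List<KeyValuePair<int, long>> entries = new List<KeyValuePair<int, long>>(10000);
    private readonly Stopwatch sw = Stopwatch.StartNew();

    public void Mark(int stageCode)
    {
        entries.Add(new KeyValuePair<int, long>(stageCode, sw.ElapsedTicks));
    }

    public void Dump()
    {
        // Formatting happens only here, after the measured code has finished.
        foreach (var entry in entries)
        {
            double ms = entry.Value * 1000.0 / Stopwatch.Frequency;
            Console.WriteLine("Stage {0}: {1:F3} ms", entry.Key, ms);
        }
    }
}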
Use the Stopwatch class. Look at the method below for an example.
using System;
using System.Diagnostics;

public void YourAlgorithm()
{
    // Start timing immediately before the work you want to measure.
    Stopwatch timePerParse = Stopwatch.StartNew();

    /* -- TODO: the code to measure -- */

    timePerParse.Stop();
    Console.WriteLine(timePerParse.ElapsedMilliseconds);
}
I have profiled my Rust code and see one processor-intensive function that takes a large portion of the time. Since I cannot break the function into smaller parts, I hope I can see which line in the function takes what portion of time. Currently I have tried CLion's Rust profiler, but it does not have that feature.
It would be best if the tool ran on macOS, since I do not have a Windows/Linux machine (except through virtualization).
P.S. Visual Studio seems to have this feature, but I am using Rust: https://learn.microsoft.com/en-us/visualstudio/profiling/how-to-collect-line-level-sampling-data?view=vs-2017 It says:
Line-level sampling is the ability of the profiler to determine where in the code of a processor-intensive function, such as a function that has high exclusive samples, the processor has to spend most of its time.
Thanks for any suggestions!
EDIT: With C++, I do see source-code line-level information. For example, the following toy example shows that the for loop takes most of the time within the big function. But I am using Rust...
To get source code annotation in perf annotate or perf report you need to compile with debug = 2 in your Cargo.toml.
If you also want source annotations for standard library functions you additionally need to pass -Zbuild-std to cargo (requires nightly).
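For reference, that is just a profile setting in Cargo.toml; a minimal sketch, assuming you build and profile the release profile:

# Cargo.toml
[profile.release]
debug = 2     # keep optimizations, but emit full debug info so perf can map samples back to source lines

After rebuilding with cargo build --release, perf record followed by perf report or perf annotate on the resulting binary should then show per-line sample information.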
Once compiled, "lines" of Rust do not exist. The optimiser does its job by completely reorganising the code you wrote and finding the minimal machine code that behaves the same as what you intended.
Functions are often inlined, so even measuring the time spent in a function can give incorrect results - or else change the performance characteristics of your program if you prevent it from being inlined to do so.
The built-in Sitecore rendering stats http://<sitename>/sitecore/admin/stats.aspx is really helpful for identifying inefficient and slow-loading XSLT renders. Recently I've started switching to .ascx sub layouts to take advantage of the Sitecore C# API which can help improve performance when used correctly.
However, I've noticed that sub layouts (as opposed to XSLT renders) are not reported correctly on the stats page. See the screenshot below.
I know for a fact that this sub layout takes about 1.8 seconds to generate (I calculated this in the code-behind). Caching is turned off. I've refreshed the page 20 times to ensure I get an average. You will see that the "Avg. items" is always 0 - I can live with this - but the "Avg. time (ms)" is less than 1 ms, which is just clearly wrong.
Does anyone have any insights into this? Has anyone found a way to get it to work correctly?
Judging whether a statistic is right/wrong is going to rely on understanding exactly what it is measuring.
Digging around in Sitecore.Diagnostics.Statistics using Reflector I note the following:
Sitecore.Web.UI.WebControl contains a field m_timer
This is 'started' in the BeforeRender() method and 'stopped' in the AfterRender() method
Data from that timer is sent to Statistics.AddRenderingData() and is logged against the control
This means it is measuring the time taken to render the control. For an XSLT rendering that includes the processing time for preparing all the data in it, but since much of the work of a normal ASCX is done prior to the Render stage, the statistic is much less useful for sub layouts. Incorporating the Load stage in the time would inadvertently include the processing time for all child components, since the Load sequence is chained and called recursively, so that probably doesn't help much either.
I suspect there is no good way of measuring the processing time for a specific ASCX control (excluding children) without first acquiring cumulative data then post-processing the call chain and splitting the time apart. This is the sort of thing RedGate ANTS does really well, but might not be so good if it was being executed on a live production system, given the overheads.
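If you just need a number per sub layout in the meantime, one low-tech option is to time the control yourself from its code-behind so the measurement spans the whole lifecycle rather than only the Render stage. A rough sketch (the class name is illustrative, and time spent in child controls is still included):

using System.Diagnostics;
using System.Web.UI;

public partial class NewsListing : UserControl   // hypothetical sublayout code-behind
{
    // Starts when the control is constructed, i.e. before Init/Load.
    private readonly Stopwatch lifetime = Stopwatch.StartNew();

    protected override void Render(HtmlTextWriter writer)
    {
        base.Render(writer);
        lifetime.Stop();
        // Unlike stats.aspx, this covers Init/Load/PreRender as well as Render.
        Debug.WriteLine(string.Format("{0}: {1} ms", GetType().Name, lifetime.ElapsedMilliseconds));
    }
}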
I put a stopwatch on it. The first time the app loads (no settings file exists) it takes about 190 ms to fail to load four settings. The app runs, three bools and a short string are written as settings, and the next time the app loads, it takes 400 ms to read the first setting from the IsolatedStorageSettings.ApplicationSettings collection and about 1 ms to get the remainder.
Is there anything I can do to ameliorate this load time?
Use a better serialization method ;)
XML serialization is okay for more complex graphs, but for simple settings, binary serialization would be much better. Also, when you say "fail to load", I assume you're doing a check to see if the files exist? If not, I think there may be exceptions being thrown internally, which would slow down execution as well.
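A minimal sketch of guarding the first-run case, assuming the Windows Phone / Silverlight IsolatedStorageSettings API (the key name and default value are made up):

using System.IO.IsolatedStorage;

// ...
var settings = IsolatedStorageSettings.ApplicationSettings;

bool soundEnabled;   // "soundEnabled" is an illustrative key
if (!settings.TryGetValue<bool>("soundEnabled", out soundEnabled))
{
    // Key missing on first run: fall back to a default instead of letting
    // an exception be thrown and swallowed internally.
    soundEnabled = true;
    settings["soundEnabled"] = soundEnabled;
    settings.Save();    // persist so the next launch finds the key
}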
I read somewhere that somebody could access a config value at run time but not at design time. What's the difference between run time and design time in this context?
Design time is when somebody signs off our word documents and our UML diagrams with a cheery "That looks fine!" Run time is when we execute our code and it fails with a horrible crash and burn.
The advantage of a technique like TDD is that it compresses the gap between design time and run time to the point where they are the same thing. This means we get instant feedback on how our design actually works when translated into code, which should result in a better design and fewer embarrassments when our code goes live. YMMV.
Design time is when you are creating a design based on the requirements, or creating some UML diagrams.
Run time is when you are implementing your design and running the code.
Are you talking about .NET applications? In that case design time probably means something more specific - when your GUI is presented within the Visual Studio designer. This gives you a working view of your application, but it is running in a design-time environment. Many .NET controls have a DesignMode property that allows you to tell whether the control is running in the design-time view or not.
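For example, a rough sketch of the usual pattern (the control and the run-time-only call are made up):

using System.Windows.Forms;

public class ClockControl : Control   // hypothetical custom control
{
    protected override void OnCreateControl()
    {
        base.OnCreateControl();

        // DesignMode is true while the control is hosted in the Visual Studio
        // designer, so skip work that only makes sense at run time.
        if (!DesignMode)
        {
            // ConnectToDataSource();   // illustrative run-time-only initialization
        }
    }
}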
Design time is when you design some code.
Run time is when you execute the code you designed.
Run time is when your program runs. Design time is when your program is designed.
Design time refers to processes that take place during development; runtime refers to processes that take place while the application is running.
For instance, constants that are hardcoded in your application are set at design time, such as...
// you need to recompile your solution to change this,
// hence it is said that its value is set at design time.
const string value = "this is set at design time";
Whereas configuration values that are pulled from a config file would be said to be set at runtime. Such as...
// You do not need to recompile your solution to change this,
// hence the value is said to be set at runtime.
string value = ConfigurationManager.AppSettings["key"];
As a developer, you must aim for the ideal equilibrium between design time (let's take it to mean 'the time you spend designing and developing the app', though it's a bit incorrect) and run time, which I take to mean 'the time the user stands looking at the hourglass waiting for his important report to be rendered'.
Too much focus on 'design time' and you might run out of the scheduled programming time, and your client will pull out of the contract, he'll badmouth you, and kittens will die. Too little, and your program will, as they say, suck. Remember that 'shipping is a feature, one your program should have'.
Unless what they meant by "run time" is "runtime" and that means something else entirely.
I'm not exactly sure how to tag this question or how to write the title, so if anyone has a better idea, please edit it
Here's the deal:
Some time ago I had written a little but crucial part of a computing olympiad management system. The system's job is to get submissions from participants (code files), compile them, run them against predefined test cases, and return the results. Plus all the rest of the stuff you can imagine it should do.
The part I had written was called Limiter. It was a little program whose job was to take another program and run it in a controlled environment. Controlled in this case means limitations on available memory, computing time and access to system resources. Plus, if the program crashes, I should be able to determine the type of the exception and report that to the user. Also, when the process terminates, it should be noted how long it executed (with a resolution of at least 0.01 seconds, preferably better).
Of course, the ideal solution to this would be virtualization, but I'm not experienced enough to write that.
My solution to this was split into three parts.
The simplest part was the access to system resources. The program would simply be executed with limited access tokens. I combined some of the basic access tokens (Everyone, Anonymous, etc.) that are available to all processes in order to provide practically read-only access to the system, with the exception of the folder the program was executing in.
The limitation of memory was done through job objects - they allow you to specify a maximum memory limit.
And lastly, to limit execution time and catch all the exceptions, my Limiter attaches to the process as a debugger. Thus I can monitor the time it has spent and terminate it if it takes too long. Note that I cannot use job objects for this, because they only report kernel time and user time for the job. A process might do something like Sleep(99999999), which would count in neither of them but would still tie up the testing machine. Thus, although I don't count a process's idle time in its final execution time, it still has to have a limit.
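For the wall-clock limit specifically, a much simpler baseline than attaching as a debugger is to enforce the timeout from the parent process. A rough C# sketch (the executable name and the two-second limit are illustrative, and this of course gives no exception information):

using System;
using System.Diagnostics;

class WallClockLimit
{
    static void Main()
    {
        var child = Process.Start(new ProcessStartInfo("submission.exe")  // illustrative path
        {
            UseShellExecute = false
        });

        // Wall-clock limit: time spent in Sleep() counts here, unlike user/kernel time.
        if (!child.WaitForExit(2000))
        {
            child.Kill();
            Console.WriteLine("Time limit exceeded");
        }

        child.WaitForExit();   // let the exit information settle after Kill()
        Console.WriteLine("CPU time: {0} ms", child.TotalProcessorTime.TotalMilliseconds);
    }
}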
Now, I'm no expert in low-level stuff like this. I spent a few days reading MSDN and playing around, and came up with a solution as best I could. Unfortunately it seems it's not running as well as it could be expected. For most part it seems to work fine, but weird cases keep creeping up. Just now I have a little C++ program which runs in a split second on its own, but my Limiter reports 8 seconds of User mode time (taken from job counters). Here's the code. It prints the output in about half a second and then spends more than 7 seconds just waiting:
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    // Build 50,000 inner vectors of four ints each, then print the outer size.
    vector< vector<int> > dp(50000, vector<int>(4, -1));
    cout << dp.size();
}
The code of the limiter is pretty lengthy, so I'm not including it here. I also feel that there might be something wrong with my approach - perhaps I shouldn't do the debugger stuff. Perhaps there are some common pitfalls that I don't know of.
I would like some advice on how other people would tackle this problem. Perhaps there is already something that does this, and my Limiter is obsolete?
Added: The problem seems to be in the little program that I posted above. I've opened a new question for it, since it is somewhat unrelated. I'd still like comments on this approach for limiting a program.
Running with a debugger attached can change the characteristics of the application. Performance can be impacted, and code paths can even change (if the target process does things based on the presence of a debugger, i.e. IsDebuggerPresent).
A different approach that we've used is to configure our own application to run as the JIT debugger. By setting the AeDebug registry key, you can control what debugger is invoked when an application crashes. This way you only jump in when the target process crashes, and it doesn't impact the process during normal run-time.
This site has some details about setting the postmortem debugger: Configuring Automatic Debugging.
Your approaches for limiting the memory, getting timing etc. all sound perfectly fine.
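For reference, the postmortem debugger is configured under the AeDebug registry key; a minimal sketch of registering your own tool there (the tool path is made up, administrative rights are required, and on 64-bit Windows 32-bit processes use the mirrored key under Wow6432Node):

using Microsoft.Win32;

class RegisterPostmortemDebugger
{
    static void Main()
    {
        const string aeDebug = @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug";

        // Windows substitutes the crashing process id and an event handle for the %ld placeholders.
        Registry.SetValue(aeDebug, "Debugger", "\"C:\\Tools\\Limiter.exe\" -p %ld -e %ld");  // illustrative path
        Registry.SetValue(aeDebug, "Auto", "1");   // "1" = launch without prompting the user
    }
}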