Term for how long a program takes to run - performance

I just want the word for how long a program takes to run (when you actually run it, not its time complexity). I suspect it's "running time", but when I google "running time", Google gives me a bunch of stuff about runtime, so I am stuck.
For example: "The _____ of my program is 2100ms."

Runtime, run time, or execution time would work.
In computer science, runtime, run time, or execution time is the final phase of a computer program's life cycle, in which the code is being executed on the computer's central processing unit (CPU) as machine code. In other words, "runtime" is the running phase of a program. (according to Wikipedia)
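
For a concrete example of how you'd measure it, here is a minimal Go sketch (the doWork function is just a placeholder workload) that times a piece of code and reports the result in milliseconds, in the style of the example sentence above:

```go
package main

import (
	"fmt"
	"time"
)

// doWork is a stand-in for whatever the program actually does.
func doWork() {
	sum := 0
	for i := 0; i < 50_000_000; i++ {
		sum += i
	}
	_ = sum
}

func main() {
	start := time.Now()
	doWork()
	elapsed := time.Since(start)

	// Report the execution time, e.g. "The execution time of my program is 2100ms."
	fmt.Printf("The execution time of my program is %dms.\n", elapsed.Milliseconds())
}
```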

Related

Execution time of a Java program

First of all, this is just something I'm curious about.
I've made a little program that fills some templates with values, and I noticed that the execution time changes a little every time I run it, ranging from 0.550 s to 0.600 s. My CPU runs at 2.9 GHz, if that's useful.
The instructions are always the same; does this have to do with the physical hardware, or is it something more software oriented?
It has to do with Java running on a virtual machine. Even a C program may run slightly longer or shorter from one execution to the next, and the operating system also decides when a program gets the resources (CPU time, memory, …) it needs to execute.
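
The original question is about Java, but run-to-run variation is easy to observe in any language. Here is a rough Go sketch (the workload and the number of repetitions are arbitrary) that times the same work several times; the numbers drift for the reasons given above (the OS scheduler, caches, other processes):

```go
package main

import (
	"fmt"
	"time"
)

// work is an arbitrary fixed workload; the instructions are the same every run.
func work() int {
	sum := 0
	for i := 0; i < 20_000_000; i++ {
		sum += i % 7
	}
	return sum
}

func main() {
	// Time the identical workload several times; the results will not be identical,
	// because OS scheduling, caches, and other processes affect each run.
	for run := 1; run <= 5; run++ {
		start := time.Now()
		work()
		fmt.Printf("run %d: %v\n", run, time.Since(start))
	}
}
```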

Goroutines and the scheduler

I didn't understand this sentence; please explain it to me in detail, and use easy English to do it.
Go routines are cooperatively scheduled, rather than relying on the kernel to manage their time sharing.
Disclaimer: this is a rough and inaccurate description of the scheduling in the kernel and in the go runtime aimed at explaining the concepts, not at being an exact or detailed explanation of the real system.
As you may (or may not) know, a CPU can't actually run two programs at the same time: a CPU has only one execution thread, which can execute one instruction at a time. The direct consequence on early systems was that you couldn't run two programs at the same time, since each program needed (system-wise) a dedicated thread.
The solution currently adopted is called pseudo-parallelism: given a number of logical threads (e.g. multiple programs), the system executes one of the logical threads for a certain amount of time and then switches to the next one. By using very small time slices (on the order of milliseconds), you give the human user the illusion of parallelism. This operation is called scheduling.
The Go language doesn't use this system directly: it implements its own scheduler, which runs on top of the system scheduler and schedules the execution of the goroutines itself, avoiding the performance cost of using a real thread for each goroutine. This type of system is called light/green threads.
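
As a small illustration, here is a minimal Go sketch of goroutines in action; the Go runtime multiplexes all of them onto a handful of OS threads instead of asking the kernel for a thread per goroutine:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup

	// Launch several goroutines; the Go scheduler multiplexes them onto a few
	// OS threads instead of creating a kernel thread for each one.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Printf("goroutine %d running\n", id)
		}(i)
	}

	// Wait for all goroutines to finish before the program exits.
	wg.Wait()
}
```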

Can we time commands deterministically?

We know that in bash, time foo will tell us how long a command foo takes to execute. But there is so much variability, depending on unrelated factors including what else is running on the machine at the time. It seems like there should be some deterministic way of measuring how long a program takes to run. Number of processor cycles, perhaps? Number of pipeline stages?
Is there a way to do this, or if not, to at least get a more meaningful time measurement?
You've stumbled into a problem that's (much) harder than it appears. The performance of a program is absolutely connected to the current state of the machine in which it is running. This includes, but is not limited to:
The contents of all CPU caches.
The current contents of system memory, including any disk caching.
Any other processes running on the machine and the resources they're currently using.
The scheduling decisions the OS makes about where and when to run your program.
...the list goes on and on.
If you want a truly repeatable benchmark, you'll have to take explicit steps to control for all of the above. This means flushing caches, removing interference from other programs, and controlling how your job gets run. This isn't an easy task, by any means.
The good news is that, depending on what you're looking for, you might be able to get away with something less rigorous. If you run the job under your regular workload and it produces results in an acceptable amount of time, then that might be all that you need.
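
One common, less rigorous approach is to run the command several times and keep the minimum (or the median), which filters out much of the noise from other activity on the machine. A rough Go sketch, using /bin/true purely as a placeholder for whatever command you want to time:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const runs = 10
	best := time.Duration(1<<63 - 1) // start at the maximum possible duration

	// Run the command repeatedly and keep the fastest wall-clock time;
	// the minimum is usually the least contaminated by other system activity.
	for i := 0; i < runs; i++ {
		start := time.Now()
		cmd := exec.Command("/bin/true") // placeholder command to benchmark
		if err := cmd.Run(); err != nil {
			fmt.Println("run failed:", err)
			return
		}
		if d := time.Since(start); d < best {
			best = d
		}
	}
	fmt.Printf("best of %d runs: %v\n", runs, best)
}
```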

VB.net application first execution is too slow compared to the next ones

I'm working on a VB.NET application that has heavy computing and I/O tasks. The program's first execution after rebooting is much slower than the following executions (it takes about 10 seconds more to finish). I noticed that on the first run the CPU usage reached about 60%, while on later runs it reached 90% to 100%. Does anyone know why this happens?
When you reboot your computer, it dumps everything stored in memory, along with the cache. The first time you run your program, VS has to pull your program and all the required assemblies and libraries from your hard drive for compilation, etc. After the first execution (and your question is quite vague, so it's hard for me to assess your current situation) VS keeps all that stuff in main memory until it's needed by other processes or you close VS.
Since main memory is much faster than secondary storage (your hard drive), and since most of your external assemblies have been compiled into your program's build, subsequent executions will be faster.
The reason for CPU usage being much lower on the first run is because the data cannot be read from your hard drive fast enough to keep the CPU busy!
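
The disk-cache effect described above is easy to observe directly: read the same file twice and time both reads. A rough Go sketch (the file path is a placeholder, and the first read may already be warm if the file was touched recently):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// timedRead reads the whole file and returns how long the read took.
func timedRead(path string) time.Duration {
	start := time.Now()
	if _, err := os.ReadFile(path); err != nil {
		fmt.Println("read failed:", err)
		os.Exit(1)
	}
	return time.Since(start)
}

func main() {
	path := "/path/to/some/large/file" // placeholder; pick any big file on disk

	// The first read may hit the disk; the second is usually served from the
	// OS page cache and is noticeably faster, the same effect that makes a
	// program's first run after reboot slower than later runs.
	fmt.Println("cold(ish) read:", timedRead(path))
	fmt.Println("warm read:     ", timedRead(path))
}
```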

How do online judge sites isolate program performance?

There are many online judge sites that can verify your program by comparing its output to the correct answers. What's more, they also check the running time to make sure that your program's running time doesn't exceed the maximum limit.
So here is my question: since some online judge sites run several test programs at the same time, how do they achieve performance isolation, i.e., how can they make sure that a user program running in a heavily loaded environment will finish within the same time as when it is running in an idle environment?
Operating systems keep track of CPU time separately from real-world "wall clock" time. It's very common when benchmarking to only look at one or the other kind of time. CPU or file I/O intensive tasks can be measured with just CPU time. Tasks that require external resources, like querying a remote database, are best measured in wall clock time because you don't have access to the CPU time on the remote resource.
If a judging site is just comparing CPU times of different tests, the site can run many tests simultaneously. On the other hand, if wall clock times matter, then the site must either use independent hardware or a job queue that ensures one test finishes before the next starts.
Since The Computer Language Benchmarks Game measures both CPU time and elapsed time, those measurements are made sequentially in an idle environment.
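
To make the wall-clock vs. CPU-time distinction concrete, here is a rough Go sketch (the sleep command is a placeholder) that runs a child process and reports both times; a sleeping process accumulates wall-clock time but almost no CPU time:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "1") // placeholder: sleeping burns wall time, not CPU time

	start := time.Now()
	if err := cmd.Run(); err != nil {
		fmt.Println("run failed:", err)
		return
	}
	wall := time.Since(start)

	// ProcessState exposes the CPU time the kernel charged to the child process.
	cpu := cmd.ProcessState.UserTime() + cmd.ProcessState.SystemTime()

	fmt.Printf("wall clock: %v\n", wall)
	fmt.Printf("CPU time:   %v\n", cpu)
}
```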
