Under most Unix-like systems, you can use the "time" command to execute a program and report how much time and memory it used. Does anybody know of anything comparable for Windows?
(No, I don't particularly want to spend 6 months learning the Win32 API just for this...)
From the command line (low resolution, possibly inaccurate): echo %date% %time%
Programmatically: QueryPerformanceCounter. http://msdn.microsoft.com/en-us/library/ms644904(v=vs.85).aspx
If you want something on the order of millisecond accuracy (which is comparable to what the Linux/Unix time command would give you), then timeGetTime() is what you need. It returns the number of milliseconds since the system was booted. Include mmsystem.h and link against winmm.lib. However, all this gives you is a time value; you'd either need to put in a system() call in between, or do something like dump the start time out to a file when called for the first time and then read it the second time.
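A minimal C sketch of that approach (do_my_stuff.exe is a placeholder; swap in QueryPerformanceCounter/QueryPerformanceFrequency from the link above if you need finer resolution):

/* timeit.c - compile with:  cl timeit.c winmm.lib */
#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DWORD start = timeGetTime();    /* milliseconds since boot */
    system("do_my_stuff.exe");      /* the program being timed (placeholder) */
    DWORD stop = timeGetTime();
    printf("Elapsed: %lu ms\n", (unsigned long)(stop - start));
    return 0;
}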
More pragmatic solutions, which may be more useful depending on your circumstances:
Write a batch script to call the program you wish to benchmark, and wrap it so that it writes to a file:
echo "start" >> log.txt
do_my_stuff.exe
echo "stop" >> log.txt
and then use a tool such as the excellent LogExpert to look at the timestamps.
Install the Cygwin tools and use the time that comes with them. If you only need to do this on your own machine, and the benchmark program doesn't require complex setup (command-line parameters, environment variables, etc.), then this may be the easiest approach.
I use the 'time' utility on Windows too. It comes with MinGW + MSYS.
I've wondered about this many times and in many contexts, and I like to learn, so general or close-but-related answers are acceptable to me.
I'll get specific, to help explain the question. Please remember that this question is more about accelerating repeated interpreted-language calls (yes, with exactly the same arguments) than it is about the specific programs I'm calling in this case.
Here we go:
Using i3WM I use i3lock-fancy to lock my workspace with a key-combo mapped to the command:
i3lock-fancy -p -f /usr/share/fonts/fantasque_mono.ttf
So here is why I think this is possible, though my google-fu has failed me:
i3lock-fancy is a bash script, and bash is an interpreted language
each time I run the command I call it with the same arguments
Theoretically the interpreter is spitting out the same bitstream to be executed, right?
Please don't complain about portability; I understand that the captured bitstream would not be portable.
For visual people:
When I call the above command > bash interpreter converts bash-code to byte-code > CPU executes byte-code
I want to:
execute command > bash interpreter converts to byte-code > save to file
so that I can effectively skip interpretation (since it's EXACTLY the same every time):
call file > CPU executes byte-code
What I tried:
Looking around on SO before asking the question led me to shc, which is similar in some ways to what I'm asking for.
But this is not what shc is for (thanks @stefan).
Is there a way to do this which is more like what I've described?
Simply put, is there a way to interpret bash, and save the result without actually running it?
After developing an elaborate TCL code to do smoothing based on Gabriel Taubin's smoothing without shape shrinkage, the code runs extremely slowly. This is likely due to the size of the unstructured grid I am smoothing. I have to use TCL because the grid generator I am using is Pointwise, and Pointwise's "macro language" is TCL based. I'm still a bit new to this, but is there a way to run an external program from TCL where TCL sends the data to the software, the software runs the smoothing operation, and the output is sent back to TCL to update the internal data inside the Pointwise grid generation tool? I will be writing the smoothing tool in another language which is significantly faster.
There are a number of options to deal with code that "runs extremely slowly". I would start with determining how fast it must run: are we talking milliseconds, seconds, minutes, hours or days? Next it is necessary to determine which part is slow. The time command is useful here.
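For example, Tcl's built-in time command reports microseconds per iteration (smoothPass and the grid variable are hypothetical names here):

# Run the suspect procedure 100 times and report the average cost
puts [time { smoothPass $grid } 100]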
But assuming you have decided that more performance is necessary and you have some metrics for your current program so you will know if you are improving, here are some things to try:
Try to improve the existing code. If you are using the expr command, make sure your expressions are given to the command as a single argument enclosed in braces. Beginners sometimes forget this and the improvement can be substantial.
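For example:

set y [expr $x * 2 + 1]     ;# unbraced: reparsed (and substituted) on every call
set y [expr {$x * 2 + 1}]   ;# braced: byte-compiled once, substantially faster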
Use the critcl package to code parts of the program in "C". Critcl allows you to put "C" code directly into your Tcl program and have that code pulled out, compiled and loaded into your program.
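A minimal sketch of the idea, assuming the critcl package is installed (fastadd is a hypothetical name):

package require critcl

# The body is plain C; critcl compiles it on the fly and exposes
# the result as an ordinary Tcl command.
critcl::cproc fastadd {double a double b} double {
    return a + b;
}

puts [fastadd 1.5 2.5]   ;# -> 4.0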
Write a traditional "C" based Tcl extension. Tcl is very extensible and has a clean API for building extensions. There is sample code for extensions and source to many extensions is readily available.
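The skeleton of such an extension is quite small; a sketch (the hello command name and version are placeholders):

/* hello.c - minimal Tcl extension sketch.
   Compile with -DUSE_TCL_STUBS, link against the Tcl stubs library,
   then from Tcl:  load ./libhello.so                                */
#include <tcl.h>

static int
HelloCmd(ClientData cd, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])
{
    Tcl_SetObjResult(interp, Tcl_NewStringObj("hello from C", -1));
    return TCL_OK;
}

int
Hello_Init(Tcl_Interp *interp)
{
    if (Tcl_InitStubs(interp, "8.6", 0) == NULL) {
        return TCL_ERROR;
    }
    Tcl_CreateObjCommand(interp, "hello", HelloCmd, NULL, NULL);
    return Tcl_PkgProvide(interp, "hello", "1.0");
}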
Write a program to do the time-consuming part of the job, execute it as a separate process, and get the output back into your Tcl script. This is where the exec command comes in useful. Presumably you will have to write data out somewhere the program can get at it, and read the output of the program back into your Tcl script. If you want to get fancy you can do two-way communication across a localhost TCP port. The setup in Tcl is quite simple. The "C" code in a program to do it is a bit more tedious, but many examples exist out on the Internet.
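A sketch of the file-based round trip described there (the smoother program and file names are placeholders, and the grid data is assumed to already be in gridData):

# Write the data out where the external program can read it
set fh [open input.dat w]
puts $fh $gridData
close $fh

# Run the external smoothing program; exec waits for it to finish
exec ./smoother input.dat output.dat

# Read the program's output back into the Tcl script
set fh [open output.dat r]
set smoothed [read $fh]
close $fh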
Which option to choose depends very much on how much improvement is required and the amount of code that must be improved. You haven't given us much idea what those things are in your case, so all I can offer is rather vague general solutions.
For a loadable module, you can write a Tcl extension. An example is here:
File Last Modified Time with Milliseconds Precision
Alternatively, just write your program to take input from a file. Have Tcl write the input data to the file, run the program, then collect the output from the external program.
I've been searching for a considerably long time for this. Does anyone know how to clear the screen in a console app in Fortran language?
Any help will be very much appreciated!
Fortran, qua Fortran, knows nothing of such concepts as screens or keyboards or, for that matter, computers. There is, therefore, no language-standard way of clearing a screen from Fortran. You will have to find some platform-dependent approach.
Most Fortran compilers have some way of doing this, for example Intel Fortran provides the SYSTEM function.
Contrary to others, I would not call SYSTEM() (the standard Fortran 2008 alternative is execute_command_line()); instead I would print the right ANSI escape code (http://en.wikipedia.org/wiki/ANSI_escape_code):
print *, achar(27)//"[2J"
This will be much faster than calling SYSTEM().
This works in typical Linux terminals, but will not work in the MS Windows terminal.
Another, more practical, reference on how to use escape codes is http://www.lihaoyi.com/post/BuildyourownCommandLinewithANSIescapecodes.html
In Fortran 90/95 your best option is the system command, which is a vendor-supplied extension (i.e., not part of the F90/95 standard, so some obscure Fortran compilers may not have it, but all major ones do).
$ cat clear.f90
program clear_screen
  call system('clear')
end program clear_screen
$ gfortran clear.f90 -o clear
$ ./clear
It depends on your specific system and compiler; there is no general way. Fortran doesn't know about specific hardware devices like terminal screens and printers. (Neither do most other languages.) The details depend entirely on your specific system.
My advice would be to clear the terminal by invoking the relevant command via the command line - but this is not nice. It is generally more portable to write the output to an ordinary text file and then execute appropriate system commands to print that file to the screen. This way you can manipulate the file as you wish...
See here for a similar question, from which some of the above text was salvaged.
In Fortran, ACHAR(N) returns the character with ASCII code N, so my preferred method would be:
WRITE(*,'(2f15.9,A5)',advance='no') float1,float2,ACHAR(13)
ACHAR(13) is the carriage return character (\r). So after printing, the cursor returns to the beginning of the line, where it can be overwritten.
After you are out of the loop you can use CALL SYSTEM('clear') to clean the screen.
This is helpful since CALL SYSTEM('clear') is slower and uses a lot of CPU; you can check this by replacing the above method with
WRITE(*,'(2f15.9)',advance='no') float1,float2;CALL SYSTEM('clear')
and check the difference in time taken in loops.
This works for me in FTN95:
program clear_cls
  call system('CLS')
end program clear_cls
I found yet another way to clear the screen on UNIX-like systems: printing the output of the clear command (the manual for clear states you can write its output to a file and then cat that file to clear the screen).
So a better way is to run clear > temp.txt and use the characters from that file in a print statement.
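For example, on a typical Linux terminal (the exact bytes depend on your $TERM):

$ clear > temp.txt
$ od -c temp.txt
0000000 033   [   H 033   [   2   J 033   [   3   J
0000013

Here 033 is the escape character, so the captured sequence is ESC[H ESC[2J ESC[3J - the same one printed directly in the program below.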
Although both do the same thing, calling SYSTEM("foo") is far slower (100+ times) than printing those characters directly.
For example:
program clear
  implicit none
  INTEGER :: i
  do i = 1, 1000
    print *, "Test"
    call system("clear")
  enddo
end program clear
This program takes anywhere between 3.1 and 3.4 seconds.
But instead of calling system, if I directly print the characters like this :
program clear
  implicit none
  INTEGER :: i
  do i = 1, 1000
    print *, "Test"
    print *, achar(27)//"[H"//achar(27)//"[2J"//achar(27)//"[3J"
  enddo
end program clear
This produces the exact same result but takes anywhere between 0.008 and 0.012 seconds (8 to 12 ms).
In fact, running the second loop 100,000 times is faster than calling CALL SYSTEM("clear") 1000 times.
I have a Perl script that runs via a system() command from C. On a specific site (SunOS 5.10), when that script is run, it nearly always takes 6 seconds or more. On other sites, it runs pretty much instantly (0.1 s). If I run the script manually, i.e. not from the C code, it also runs instantly.

I eventually tracked the slowness down (by spitting out the time in a lot of different places) to a single require line. The file being required is another Perl script we wrote. That script consists of a single require (this file here), 3 scalars that are assigned integer values, and a handful of time/date conversion routines. The file ends with a 1;. That single require appears to take as much as 6 seconds on occasion, but as I said, not always, even on the same machine.

I'm absolutely stumped here. My only remaining thought is to turn on profiling, but the site doesn't have Devel::Profiler, and my only other option (that I know of) would be to add it to the Perl command, which would require altering and recompiling the C code (doable but non-trivial).
Does anybody have ANY idea what could be going on here? I don't think I can (or want to) post the entire date.pl being required, but it's pretty much exactly as I described; I can answer any questions about it that you have.
Thanks in advance.
You might be interested in A Timely Start by Jean-Louis Leroy. He had a similar problem and tracked it down to a long and deep module search path, where perl usually found the modules in the last entries in @INC.
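If you want to check that on the affected machine, a quick sketch (run it the same way the C code runs perl, so the environment matches):

perl -e 'print scalar(@INC), " entries:\n", join("\n", @INC), "\n"'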
Six seconds is a long time. Have you checked what your network is doing during this?
My first thought was that spawning the new process when using the system() command could be the problem, but six seconds is too long.
I don't know much about Perl, but I could imagine that, for some reason, loading the time module invokes a call to a network time server, just to get synchronized. Maybe this takes a long time, or maybe it times out.
It could be that this only happens for a newly spawned process -- hence only when you use the system() command.
just wild guessing...
So, this does nothing to answer your question directly, but please tell me that you're not actually running on Perl 4? Assuming you're on Perl 5, you could remove the entire file and replace the require with use POSIX qw(ctime) to get the version that comes with Perl.
If you do have to support perl4, I'll merely grumble something about version 5 being fifteen years old now and go away. :)
I came across a few ways to cause a time delay, such as pings and dirs, though none of them are really precise. Is there any proper way to cause a time delay?
I heard about a few things, though they don't work on all computers - not on my Windows XP, nor on the Windows NT at college.
It takes ages going through all the files on Google to find a good answer, and since I didn't yet find the question on Stack Overflow, I thought it might be good to just create the question myself ;)
Sleep
It will allow you to do this.
<warning>This is a hack</warning>
Use your favorite programming language (other than MS-DOS batch) and create an application which takes one argument, the number of milliseconds to wait; then simply call this program from a batch file to sleep the required amount.
As far as I know, this is the only reliable way to do it in DOS.
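A sketch of such a helper in C (sleep.c is a hypothetical name; on Win32 the Sleep() API does the actual waiting):

/* sleep.c - wait for the number of milliseconds given as the first argument */
#include <stdlib.h>
#include <windows.h>

int main(int argc, char *argv[])
{
    if (argc > 1)
        Sleep((DWORD)atoi(argv[1]));   /* Sleep() takes milliseconds */
    return 0;
}

A batch file would then call it as, for example, sleep 5000 to pause for five seconds.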
If you don't have the ability to send another program along with the batch file, use DEBUG to write a sleep command on the fly, and execute it.
EDIT:
The above answer was kind of tongue-in-cheek. I wouldn't run a batch file that had some DEBUG trickery in it. I believe the traditional way to use a delay in a batch file is the CHOICE command:
type nul|choice /c:y /t:y,nn > nul
Which of course, doesn't work in XP, since that would be WAAYY too convenient.
"...proper way...."
You can't do that in DOS.
It is possible to achieve a precision of a few milliseconds, depending on your machine's speed.
I have just finished creating such a batch file, and though I won't share the actual code with you, I'll give you some pointers:
Use the %time% variable and divide it into substrings (without ":" and ".") - one substring will get the seconds, the other the minutes (you may add hours for delays of over an hour, and even incorporate the date).
Use set /A to transform the time variables into one integer representing the number of seconds which have passed since the last rounded hour (X:00:00.00). Add the inputted delay (in seconds) to that.
Create a short loop comparing the value of the combined variable (explained in point 2) to the current time (you need to recalculate the current combined min+sec value inside this loop), and exit the loop when the match is true.
One major bugfix here - you'll need to truncate any preceding zeros from variables which are about to be evaluated in a set /A command. The console's evaluator treats integers with a leading zero as octal, so 08 and 09 produce an error. A rough sketch of the whole approach follows.
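This is not the original code, just a minimal illustration of points 1-3 (seconds granularity, assumes the HH:MM:SS.cc %time% format, and does not handle the hour rolling over):

@echo off
setlocal enableextensions
rem Usage: delay.cmd seconds-to-wait
rem The "1%%x %% 100" trick strips a leading zero so that set /A
rem does not try to parse 08 or 09 as (invalid) octal numbers.
for /f "tokens=1-3 delims=:." %%a in ("%time%") do (
    set /a TARGET=(1%%b %% 100)*60 + (1%%c %% 100) + %1
)
:wait
rem Busy-wait: recompute the current minute+second total each pass.
for /f "tokens=1-3 delims=:." %%a in ("%time%") do (
    set /a NOW=(1%%b %% 100)*60 + (1%%c %% 100)
)
if %NOW% lss %TARGET% goto wait
endlocal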
Remark:
This will only work with command extensions enabled. To test it, use the following at the beginning of the routine:
verify other 2>nul
setlocal enableextensions
if errorlevel 1 goto err
And then add an error handler subroutine named "err".
If you want to continue your batch in the same file, use "endlocal" after the looping subroutine.
BTW, this applies ONLY to Windows XP Service Pack 2 or 3.