Usage frequency of a Windows application

I have a homepage that needs to count the usage frequency of the Notepad or Calc application. Is there any Windows API that will tell me the usage frequency of a Windows application?

No, but you can write a service that checks which applications are in memory and does the appropriate calculations. A few links:
Windows Services Programming
Simple Windows Service Sample
Also, the shell variant from bua is close to a ready-made solution. You just need some parsing and a counter update (not necessarily in a database).

I don't know of one, but you could quite easily achieve that yourself, e.g. by running tasklist (via cmd.exe) every 10 seconds or so (that would not load the CPU). One good solution is to use Python's sched scheduler:
import sched, time
from subprocess import Popen

s = sched.scheduler(time.time, time.sleep)

def get_tasks():
    # dump the current process list to a log file
    with open("log", "wt") as f:
        p = Popen("tasklist", stdout=f)
        p.wait()

def analyse_logs():
    # parse "log" here and update your per-application counters
    pass

def poll_forever():
    while True:
        s.enter(10, 1, get_tasks, ())  # schedule get_tasks 10 seconds from now
        s.run()                        # blocks until the scheduled call has run
        analyse_logs()

poll_forever()
Then do some awk processing on that file (if you have Cygwin), or awk-like processing in PowerShell. Count the processes which interest you and write the result, with updated info, to another file.

Take a look at my answer to How Do I Stop An Application From Opening and use the same technique. In essence, when the application is launched, your program is run instead; you can log any information that you require and then open the real app.
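As a rough sketch of the logging side only (the linked answer covers how to get Windows to run your program in place of the app; the paths here are assumptions for illustration):
# Hypothetical logging shim: record the launch, then start the real application.
import datetime
import subprocess
import sys

REAL_APP = r"C:\Windows\system32\notepad.exe"  # assumption: path to the real app
LOG_FILE = r"C:\usage.log"                     # assumption: where to log launches

with open(LOG_FILE, "a") as log:
    log.write("notepad launched at %s\n" % datetime.datetime.now().isoformat())

# Hand over to the real application, forwarding any command-line arguments.
subprocess.Popen([REAL_APP] + sys.argv[1:])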

Choosing a better parallel architecture in Python

I am working on a data wrangling problem in Python: a script that processes a dirty Excel file into a clean Excel file. I would like to process multiple input files by introducing concurrency/parallelism. I have the following options: 1) the threading module, 2) the multiprocessing module, 3) the ParallelPython module. I have a basic idea of the three methods and would like to know which method is best, and why.
In brief: processing a single dirty Excel file currently takes 3 minutes.
Objective: introduce parallelism/concurrency to process multiple files at once.
I am looking for the best method of parallelism to achieve that objective.
Since your process is mostly CPU-bound, multithreading won't be fast because of the GIL...
I would recommend multiprocessing or concurrent.futures, since they are a bit simpler than ParallelPython (only a bit :) )
example:
import concurrent.futures

with concurrent.futures.ProcessPoolExecutor() as executor:
    for file_path, clean_file in zip(files, executor.map(data_wrangler, files)):
        print('%s is now clean!' % file_path)
        # do something with clean_file if you want
Only if you need to distribute the load between servers would I recommend ParallelPython.
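For completeness, a self-contained sketch of the same pattern; data_wrangler here is a stand-in for your real cleaning function, and the __main__ guard matters on Windows, where worker processes re-import the main module:
import concurrent.futures

def data_wrangler(path):
    # Stand-in for the real ~3-minute cleaning step; returns the output path.
    clean_path = path.replace(".xlsx", "_clean.xlsx")
    # ... read the dirty workbook, clean it, and write clean_path here ...
    return clean_path

if __name__ == "__main__":  # required on Windows for multiprocessing
    files = ["a.xlsx", "b.xlsx", "c.xlsx"]  # hypothetical input files
    with concurrent.futures.ProcessPoolExecutor() as executor:
        for src, clean in zip(files, executor.map(data_wrangler, files)):
            print("%s is now clean: %s" % (src, clean))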

How to check Matplotlib's speed in Xcode and increase performance?

I'm running into some considerable speed bottlenecks with a Python-Matplotlib-Xcode combination. I know some immediate responses will probably ask "Why are you doing python stuff in Xcode, just man up and use vim" --> I like the organizing ability and the built-in version control; it makes elements of my work easier to deal with.
Getting Python to run in Xcode in the first place was a bit trickier than I had hoped, but it's possible. Now I have the following scenario:
A master file, 'main.py', does all the imports for me and sets up some universal formatting to make all the figures (for eventual inclusion in my PhD thesis) nice and uniform. Afterwards it runs a series of execfile commands to generate whichever graphics I need. Two things I can think of right off the bat:
1) At the very beginning of main.py, after I import all the normal Python stuff you tend to need, I call a system script which checks whether a certain filesystem is mounted. I keep all my climate model data on there, since my local hard drive is too small to hold all of it at once. Python pauses itself and waits for the system to do its thing, but once the filesystem has been found, it keeps going. Usually this only needs to happen once in the morning when I get to work, or if the VPN server kicked me off for whatever reason. (Side question: it'd be cool to know if there's a trick to automate a VPN login to reconnect as soon as it notices it's not connected.)
2) I'm not sure how much overhead Xcode adds on its own. Running the same program from the terminal is (somewhat) faster. I've tried to be memory conscious and turn off stuff I don't need while running the Python/Xcode combination.
Also, Python launches a little window whenever I call plt.show(), and this in itself takes time. I've considered just saving the figures as quick PNG files and opening them with some other viewer, although I guess that would also take time to open up. Given how often these graphics change as I add model runs or think of nicer ways of displaying the data, it'd be nice not to waste something on the order of 15 to 30 minutes (possibly more) out of the entire day twiddling my thumbs and waiting for a window to pop up.
Benchmark it!
import datetime

start = datetime.datetime.now()
# ... your plotting code ...
td = datetime.datetime.now() - start
print(td.total_seconds())  # total_seconds() requires Python >= 2.7
Run it in Xcode and from the command line, and see what the difference is.
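If the plt.show() window turns out to be the cost, the PNG route the question already mentions avoids opening a window at all; a minimal sketch using matplotlib's non-interactive Agg backend (the plot data and file name are placeholders):
import matplotlib
matplotlib.use("Agg")  # must come before importing pyplot; no window is opened
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])  # placeholder for the real climate plots
fig.savefig("figure1.png", dpi=150)  # write straight to disk instead of plt.show()
plt.close(fig)  # free the figure's memory before the next plot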

Count number of executions of batch-script

This is my problem: I've got a batch script that I can't modify (let's call it foo) and I would like to count how many times per day this script is executed, to keep track of that data.
Preferably, I would like to write the number of executions, with date and exit code, to some kind of log file.
So my question is whether this is possible and, if so, how: how do I create a batch script (or something else) that works in the background and writes every execution of foo to a log?
(I know this would be easy if I could modify foo, but I can't. Also, everything is running on WinXP machines.)
You could write a wrapper script that does the logging and calls the existing script, then use the wrapper in place of the original script.
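For example, a minimal sketch of such a wrapper in Python (assuming the real script is foo.bat; the log file name is a placeholder):
# Hypothetical wrapper for foo.bat: log each run's date and exit code,
# then pass foo's exit code through to the caller.
import datetime
import subprocess
import sys

rc = subprocess.call(["cmd", "/c", "foo.bat"] + sys.argv[1:])

with open("foo_executions.log", "a") as log:
    log.write("%s exit=%d\n" % (datetime.datetime.now().isoformat(), rc))

sys.exit(rc)  # callers still see foo.bat's real exit code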
Consider writing a program that interrogates the Task Manager.
See http://www.netomatix.com/ProcDiagnostics.aspx
You could, for example, write a simple console app which runs on a timer; every 5 seconds it checks whether your foo application process exists. If it finds that it does, it takes that moment as the start time of the application; if it later doesn't find it, it assumes the application has now closed and logs that information. It wouldn't be accurate to the second by any means, but it would give you a rough approximation of when the thing is running and closing.
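A rough sketch of that polling idea in Python (the process name, log path, and use of tasklist are assumptions for illustration; as noted above, the timestamps are only approximate):
# Poll the process list and log approximate start/stop times of foo.
import datetime
import subprocess
import time

was_running = False
while True:
    tasks = subprocess.check_output("tasklist").decode("ascii", "ignore")
    running = "foo.exe" in tasks  # assumed process name
    if running != was_running:
        state = "started" if running else "stopped"
        with open("foo_monitor.log", "a") as log:
            log.write("%s %s\n" % (datetime.datetime.now().isoformat(), state))
        was_running = running
    time.sleep(5)  # the answer's suggested 5-second polling interval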
You might be able to configure Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) to capture the information you require.

How to get Windows system idle time using Ruby?

I have a Ruby script. I want to know how long the system has been idle (i.e. no user interaction; the time that screen saver activation is based upon).
I believe I can do this in Ruby via Win32API using user32.dll and GetLastInputInfo, but I can't figure out how... can anyone help me?
Here is a sample that calls GetLastInputInfo. I did not study that API, though, to see if it is really giving you the information you are wanting.
require "Win32API"

# LASTINPUTINFO struct: a 4-byte size field (must be 8) followed by dwTime,
# the tick count recorded at the moment of the last user input
get_last_input_info = Win32API.new('user32', 'GetLastInputInfo', ['P'], 'I')
get_tick_count = Win32API.new('kernel32', 'GetTickCount', [], 'L')

s = [8, 0].pack('l*')
get_last_input_info.call(s)
_size, last_input = s.unpack('l*')

# idle time = ticks now minus ticks at last input, in milliseconds
puts get_tick_count.call - last_input
It would appear what you want to do has been done for Linux:
http://coderrr.wordpress.com/2008/04/20/getting-idle-time-in-unix/
But as for Windows, the nearest thing I could find is for C#... I don't have a Windows machine to hack on, but it could well give you an indication of how GetLastInputInfo can be interacted with:
http://dataerror.blogspot.com/2005/02/detect-windows-idle-time.html
Building on the answer from Mark Wilkins, I created a script to log the idle time of a user:
https://gist.github.com/Largo/11216868

Determining maximum memory use of a process

I want to run a command and, when it's complete, have a record of the maximum memory use of the resulting process. For instance, I want something analogous to the 'time' command on Linux, where 'time foo' will run 'foo' and, when 'foo' exits, will print out the amount of CPU time that 'foo' took.
For my present application I need this to run on Windows, but if you know of a Linux-only program let me know too. (At the very least it'd be interesting, but it may also give me a lead to find a Windows equivalent.)
You can, if you have Vista (maybe 7 too, not sure). Go to Start -> Control Panel -> System and Maintenance -> Administrative Tools -> Reliability and Performance Monitor -> Performance Monitor -> Create new watch (green + symbol) -> Process -> Working Set -> [select a process below] and press OK. You can log this, etc.
Screenshot: http://www.freeimagehosting.net/image.php?912df44d75.jpg
I don't know of a program that does this, but there are APIs.
If you're using .NET, use the Process.TotalProcessorTime property.
If you're using native code, use the GetProcessTimes() function.
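For the memory side of the question, a rough polling approach also works from script; this sketch assumes the third-party psutil package, uses a placeholder command, and can miss short-lived spikes:
# Rough sketch: launch a command and poll its memory use until it exits.
import subprocess
import time
import psutil  # third-party; pip install psutil

child = subprocess.Popen(["notepad.exe"])  # placeholder command to measure
proc = psutil.Process(child.pid)
peak = 0
while child.poll() is None:
    try:
        peak = max(peak, proc.memory_info().rss)  # resident set size in bytes
    except psutil.NoSuchProcess:
        break
    time.sleep(0.1)
print("peak memory observed: %d bytes" % peak)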
I've created a command-line exe for analysing the memory usage of a long-running program.
See here: MemoryUsageMonitor
I have created a simple Windows program called timemem.exe that behaves similarly to /usr/bin/time on Linux/Mac OS X, and will show similar statistics, such as elapsed time, user and kernel CPU time, and maximum working set size in memory used by another Win32 process. See:
http://homepage.mac.com/jafingerhut/files/code/code.html
