This is my problem: I've got a batch script that I can't modify (let's call it foo), and I would like to count how many times per day this script is executed, to keep track of that data.
Preferably, I would like to write the number of executions, with date and exit code, to some kind of log file.
So my question is whether this is possible and, if so, how: can I create a batch script (or something else) that works in the background and writes every execution of foo to a log?
(I know this would be easy if I could modify foo but I can't. Also, everything is running on WinXP machines.)
You could write a wrapper script that does the logging and calls the existing script, then use the wrapper in place of the original script.
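For illustration, here is a minimal sketch of such a wrapper in Python (a plain .cmd wrapper that appends to a file would work just as well); the path to foo and the log location are assumptions, not anything from the original setup:

# foo_wrapper.py -- hypothetical wrapper: log each run of foo with date and exit code
import datetime
import subprocess
import sys

FOO = r"C:\scripts\foo.bat"                # assumed path to the unmodifiable script
LOG_FILE = r"C:\logs\foo_executions.log"   # assumed log location

# Run the original script, passing through any arguments the caller supplied
exit_code = subprocess.call(["cmd", "/c", FOO] + sys.argv[1:])

# Append one line per execution: timestamp and exit code
with open(LOG_FILE, "a") as log:
    log.write("%s exit=%d\n" % (datetime.datetime.now().isoformat(), exit_code))

# Preserve foo's exit code for whoever called the wrapper
sys.exit(exit_code)

Counting executions per day is then just a matter of grouping the log lines by date.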
Consider writing a program that interrogates the Task Manager.
See http://www.netomatix.com/ProcDiagnostics.aspx
You could, for example, write a simple console app which runs on a timer; every 5 seconds it checks whether your foo application process exists. If it finds that it does, it treats that first detection as the start time of the application; if it no longer finds it, it assumes the application has now closed and logs that information. It wouldn't be accurate to the second by any means, but it would give you a rough approximation of when the thing is starting and closing.
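A rough sketch of that polling idea (written in Python rather than as a .NET console app; the process name and the file names are assumptions):

# poll_foo.py -- hypothetical poller: log approximate start/stop times of a process
import datetime
import subprocess
import time

PROCESS_NAME = "foo.exe"        # assumed name of the process to watch
LOG_FILE = "foo_activity.log"   # assumed log location

def is_running(name):
    # tasklist prints matching processes; the name shows up in the output if one is found
    output = subprocess.check_output(["tasklist", "/FI", "IMAGENAME eq %s" % name])
    return name.lower() in output.decode(errors="ignore").lower()

was_running = False
while True:
    running = is_running(PROCESS_NAME)
    if running != was_running:
        state = "started" if running else "stopped"
        with open(LOG_FILE, "a") as log:
            log.write("%s %s\n" % (datetime.datetime.now().isoformat(), state))
        was_running = running
    time.sleep(5)   # poll every 5 seconds, as described above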
You might be able to configure Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) to capture the information you require.
I am working on a project where I have to pass arguments on the command line to a Python file (using System Exec) and then visualize the results saved in a folder after the Python file finishes executing. I need all of this to happen with a single button click, so my question is whether there is any way to realize this scenario, or perhaps a way to order the events.
I have now added a flat sequence structure to the block diagram so I can order the events, but I have an issue making the program (the Python file) run every time I press the Test button (it only runs the first time I click it). I tried using a while loop, but I couldn't execute it again unless I restarted the program.
The way you phrased your question makes me think that you want to wait until the command you call via System Exec has finished and then run some code. You could simply use a sequence structure for this.
However, if you need to do this asynchronously, i.e. launch the command and get an event when the command finishes so you can draw the results, you will need to resort to asynchronous techniques like "Start Asynchronous Call" and "Wait On Asynchronous Call", or, for example, queues and a separate code area for the background work.
Use "wait until completion?" input of System Exec function to make sure the script finished execution, then proceed with the results visualization part.
I'm writing a bash script that essentially fires off a Python script that takes roughly 10 hours to complete, followed by an R script that checks the outputs of the Python script for anything I need to be concerned about. Here is what I have:
ProdRun="python scripts/run_prod.py"
echo "Commencing Production Run"
$ProdRun #Runs python script
wait
DupCompare="R CMD BATCH --no-save ../dupCompareTD.R" #Runs R script
$DupCompare
Now my issue is that the Python script can often generate a whole heap of different processes on our Linux server depending on its input, with lots of different PIDs, AND we have heaps of workers using the same server firing off scripts. As far as I can tell from reading, the 'wait' command must wait for all processes to finish or for a specific PID to finish, but when I cannot tell what or how many PIDs will be assigned or processes run, how exactly do I use it?
EDIT: Thank you to everyone who helped; here is what caused my dilemma, for anyone finding this through Google. I broke the ProdRun Python script up into the individual scripts it was itself calling, but still had the issue. I then found that one of those scripts was calling yet another, smaller script with an "&" at the end of the command, which meant any attempt to wait on it inside the Python script itself was ignored. Simply removing the "&" and invoking it with a line of os.system() allowed all the code to run sequentially.
It sounds like you are trying to implement a job scheduler, possibly with some complex dependencies between different tasks. I recommend using a dedicated job scheduler instead. It allows you to specify how those jobs run while also benefiting from features like monitoring and handling of exceptional cases and errors.
Examples are the open-source Rundeck (https://github.com/rundeck/rundeck) and the commercial Control-M (http://www.bmcsoftware.uk/it-solutions/control-m.html).
Make your Python program wait on the children it spawns; that's the proper way to fix this scenario. Then there is nothing left to wait for once the Python process itself has finished.
(Also, don't put your commands in variables.)
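For example, if the Python script currently launches its helpers in the background (with Popen, or os.system with a trailing "&") and never waits on them, a hedged sketch of the fix looks like this (the helper script names are hypothetical):

# Inside the Python production script: launch the helpers, then wait for them before exiting.
import subprocess

children = [
    subprocess.Popen(["python", "step_one.py"]),   # hypothetical sub-scripts
    subprocess.Popen(["python", "step_two.py"]),
]

# Block until every child has exited, so that when this script returns,
# the calling bash script can safely move on to the R check.
for child in children:
    child.wait()

With that in place the bash script needs no wait at all; the python command only returns once everything it started has finished.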
I have a problem which I think might be solvable with a batch file, but I've only used batch files once or twice and don't know enough to try to solve this on my own. For context, I'm running Windows 10 Home Edition and have some programming experience, though it is primarily mathematical, i.e. R and MATLAB.
The problem is this: I have two programs, in this case Spotify and Toastify, which run together, with Toastify running in the background. I'll refer to them as S and T, respectively. If I run T, S runs as well, but if I close S, T remains running in the background. For reasons of convenience, I would rather that closing S also close T, so that when I want to use them again later I need only reopen T rather than checking if it's still running in the background, because T doesn't let you run multiple instances.
I'm wondering if there is an easy way to write a batch file (or something else if this isn't a good approach) that will open T (and so also S), and then 'listen' for S to close, at which point it closes T as well.
You need to use a Job Object. See "Working example of CreateJobObject/SetInformationJobObject pinvoke in .net?" and "Kill child process when parent process is killed".
Do not try 'monitoring for one process'; that leaves zombies when the monitoring process crashes.
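A hedged sketch of the Job Object pattern those questions describe, using Python with the pywin32 bindings (the executable path is an assumption, and this shows the general mechanism rather than a finished solution for the Spotify/Toastify pairing):

# job_launcher.py -- put a launched process (and its children) into a Job Object
# configured so that everything in the job is killed when the job goes away.
import subprocess
import win32api
import win32con
import win32job

# Create a job whose processes are terminated when the last job handle is closed
job = win32job.CreateJobObject(None, "")
info = win32job.QueryInformationJobObject(job, win32job.JobObjectExtendedLimitInformation)
info["BasicLimitInformation"]["LimitFlags"] |= win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
win32job.SetInformationJobObject(job, win32job.JobObjectExtendedLimitInformation, info)

# Start Toastify and place it in the job; processes it spawns inherit the job
proc = subprocess.Popen([r"C:\Program Files\Toastify\Toastify.exe"])   # assumed path
handle = win32api.OpenProcess(win32con.PROCESS_ALL_ACCESS, False, proc.pid)
win32job.AssignProcessToJobObject(job, handle)

proc.wait()
# When this launcher exits, the job handle is closed and anything still
# inside the job is terminated, so nothing is left running in the background.

win32job.TerminateJobObject could equally be called explicitly once the process you actually care about has exited.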
I have a script named program.rb and would like to write a script named main.rb that would do the following:
system("ruby", "program.rb")
constantly check if program.rb is running until it is done
if program.rb has reached completion
exit main.rb
end
otherwise keep doing this until program.rb reaches completion{
if program.rb is not running and stopped before completing
restart program.rb from where it left off
end}
I've looked into Pidify but could not find a way to apply it to this in exactly the right way...
Any help in how to approach this script would be greatly appreciated!
Update:
I could figure out how to resume running the script from where it left off inside program.rb itself, if there's no way to do it from main.rb.
It's impossible to "restart a script from where it left off" without full cooperation from program.rb. That is, it should be able to advertise its progress (by writing its current state to a file, maybe?) and be able to start correctly from a step specified in ARGV. There's no external Ruby magic that can replace this functionality.
Also, if a program terminated abnormally, it means one of two things:
the error is (semi-)permanent (disk is full, no appropriate access rights to a file, etc). In this case, simply restarting the program would cause it to fail again. And again. Infinite fail loop.
the error is temporary (shaky internet connection). In this case, program should do better job with exception handling and retry on its own (instead of terminating).
In either case, there's no need for restarting, IMHO.
Well, here is one way.
Modify program.rb to take an optional flag argument --restart or something.
When program.rb starts up without this argument it will initialize a file to record its current state. Periodically, it will write whatever it needs into this file to record some kind of checkpoint.
When program.rb starts up with the restart flag, it will read its checkpoint file and start processing at that point. For this to work, it must either checkpoint all state changes or arrange for all processing between checkpoints to be idempotent so it can be repeated without ill effect.
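The checkpoint idea is language-agnostic; here is a minimal sketch of it (shown in Python for brevity, with an entirely hypothetical unit of work and checkpoint file):

# Sketch of the checkpoint/restart pattern described above.
import json
import os
import sys

CHECKPOINT = "checkpoint.json"   # assumed checkpoint file
TOTAL_STEPS = 100                # hypothetical amount of work

def do_work(step):
    pass   # placeholder for one real, idempotent unit of work

def load_start_step():
    # With --restart, resume from the step recorded in the checkpoint file
    if "--restart" in sys.argv and os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["next_step"]
    return 0

def save_checkpoint(next_step):
    with open(CHECKPOINT, "w") as f:
        json.dump({"next_step": next_step}, f)

for step in range(load_start_step(), TOTAL_STEPS):
    do_work(step)
    save_checkpoint(step + 1)   # record progress after every completed step

os.remove(CHECKPOINT)   # finished normally, so clear the checkpoint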
There are lots of ways to monitor the health of program.rb. The best way is with some sort of ping, perhaps something like GET /health_check or a dummy message via a socket or pipe. You could have it hold a lock on a file and check whether the lock is still held, or you could record the PID on startup and check that it still exists.
I need to run a Ruby script for one week and check every hour whether it is still running.
Could you please suggest a way to do this? I need to check this on a Windows machine.
For example: I have a script called one_week_script.rb which will run for one week; in between, I want to check whether the script is running or not, and if it is not running, restart it from another script.
A typical solution is to use a "heartbeat" strategy. The process to be monitored notifies a "watchdog" process at a regular interval. A simple way of doing this might be to update the contents of some file every so often, and the watchdog simply checks that same file to see if it has recent data.
The alternative, simply checking whether the process is still loaded, has some weaknesses: the program could be locked up even though it's still apparently 'running'. Using the heartbeat/watchdog style means you know that the watched process is operating normally, because you're getting feedback from it.
In a typical scenario, you might just write the current time, and some arbitrary diagnostic data, say the number of bytes processed (whatever that might mean for you).
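A minimal sketch of that heartbeat/watchdog pair (in Python; the file name, interval, and staleness threshold are all assumptions):

# Part 1: inside the long-running script, touch a heartbeat file regularly.
import os
import subprocess
import time

HEARTBEAT_FILE = "one_week_script.heartbeat"   # assumed file name

def beat(bytes_processed):
    # Write the current time plus some arbitrary diagnostic data
    with open(HEARTBEAT_FILE, "w") as f:
        f.write("%s %d\n" % (time.ctime(), bytes_processed))

# Part 2: a watchdog run every hour (e.g. from the Windows Task Scheduler).
MAX_SILENCE = 2 * 60 * 60   # assume anything older than two hours means trouble

def check_and_restart():
    stale = (not os.path.exists(HEARTBEAT_FILE) or
             time.time() - os.path.getmtime(HEARTBEAT_FILE) > MAX_SILENCE)
    if stale:
        # Relaunch the watched script (path and interpreter are assumptions)
        subprocess.Popen(["ruby", "one_week_script.rb"])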