So, I'm starting a Python (2.7) script via a launch agent on macOS 10.13. The script runs and, during execution, triggers a restart of the computer. When the computer turns back on and logs in, the launch agent runs the script again. The script reads the logs and does a switch/case to pick up where it left off.
The problem is that after the restart, the Python script is unable to execute some shell commands. ls -l works fine. However, when I try to run something that isn't in /bin, it seems to just... skip over it. No errors; it simply doesn't do it.
Here's the related code. I've removed most of the specifics, as well as the switch control, because I've verified they work independently.
#!/usr/bin/python
import logging
import os
import subprocess
import time

# globals
path_to_plist = '/Users/Me/Library/Preferences/PathTo.plist'

def spot_check():
    # determine where we are in the test; verified it works
    return loop_number

def func_that_checks_stuff():
    results = subprocess.check_output(['/System/Library/Path/To/tool', '-toolarg'])
    ### process results
    logging.info(results)

def restarts(power_on, power_off):
    # sets plist key values for the restart app
    subprocess.Popen('defaults write {} plist_key1 {}'.format(path_to_plist, power_on), shell=True)
    subprocess.Popen('defaults write {} plist_key2 {}'.format(path_to_plist, power_off), shell=True)
    # run the restart app, which is a GUI application in /Applications
    logging.info('Starting restart app')
    subprocess.Popen('open -a RestartApp.app', shell=True)
    time.sleep(power_on + 5)

def main():
    ### setup and config stuff; verified it's working
    # switch control stuff; verified it's working
    loop = spot_check()
    if loop == 0:
        # tool that shows text on the screen
        subprocess.Popen('/Users/Me/Library/Scripts/Path/To/Text/tool -a -args', shell=True)
        logging.info('I just ran that tool')
        subprocess.check_output('ls -l', shell=True)
        restarts(10, 0)
    elif loop == 1:
        func_that_checks_stuff()
        subprocess.Popen('/Users/Me/Library/Scripts/Path/To/Text/tool -a args', shell=True)
        logging.info('Hey I ran that tool again, see?')
        restarts(10, 0)
    else:
        func_that_checks_stuff()
        subprocess.Popen('/Users/Me/Library/Scripts/Path/To/Text/tool -a args', shell=True)
        print 'You finished!'

if __name__ == '__main__':
    main()
So, if I start this using my launch agent, it will run through every sequence just fine.
On the first loop (prior to the restart), EVERYTHING works. All logging, all tools, everything.
After the restart, all the logging works, so I know it's following the switch control. The func_that_checks_stuff() call works and logs its output correctly. The ls -l call shows me exactly what I should see. But Path/To/Text/tool doesn't run, and when I call restarts(), it never opens the app.
No errors are produced, at least none that I can find.
What am I doing wrong? Is it something to do with the tool paths?
Update:
As it turns out, the solution was to add a ~20 second delay at the start of the script. It seems it was trying to run the commands in question before WindowServer had finished loading, and that freaked everything out. Not a particularly elegant solution, but it works for what I need in this project.
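For reference, a minimal sketch of that workaround. The WindowServer poll is an optional refinement on top of the plain sleep; the 20-second figure is simply what happened to work here, and pgrep ships with macOS:
#!/usr/bin/python
import subprocess
import time

def wait_for_window_server(timeout=60):
    # Poll until WindowServer shows up in the process list, or give up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            subprocess.check_output(['/usr/bin/pgrep', 'WindowServer'])
            return True
        except subprocess.CalledProcessError:  # pgrep found no match yet
            time.sleep(2)
    return False

wait_for_window_server()
time.sleep(20)  # extra padding: WindowServer being alive isn't the same as fully loaded
# ...the rest of the script runs as before...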
Related
I am developing a Ruby framework to run different jobs, and one of the things I need to do is know when these jobs have ended in order to use their outputs and organize everything. I have been using it with no problem, but some colleagues are starting to use it on a different system, and something really odd is happening. What I do is run the commands using
i,o,e,t = Open3.popen3(job.get_cmd)
p = t.pid
and later I check if the job has ended like this:
begin
Process.getpgid(p)
rescue Errno::ESRCH
# The process ended
end
It works perfectly on the system I am running (Scientific Linux 6), but when a friend of mine started running it on Ubuntu 14.04 (using ruby 1.9.3p484), and the command is a concatenation of commands such as cmd1 && cmd2 && cmd3, each command is run at the same time by the system, not one after the other, and the pid returned by t.pid is none of the pids of the processes being run.
I modified the code so that instead of running the concatenation of commands, it creates a script with all the commands inside it, and the command called from popen3 is just Open3.popen3("./script.sh"), but the behaviour is the same... All the commands are run at the same time, and the pid that ruby knows is not any of the processes' pids...
I am not sure if this is something Ruby-related, but since running that script.sh by hand behaves as expected, running one command after the other, it seems that either Ruby is not launching the process properly or the system is not handling it as it should. Do you know what might be happening?
Thanks a lot!
EDIT:
The command being run looks like this:
./myFit.exe h vlq.config &> output_h.txt && ./myFit.exe d vlq.config &> output_d.txt && ./myFit.exe p vlq.config &> output_p.txt
This command, if run by hand and not inside the Ruby script, runs perfectly; exactly this command. When run from the Ruby script, all the myFit.exe executions run at the same time (but I want them chained with && because each one should only run if the previous one worked). myFit.exe is a tool which performs a fit; it is not a system command. Again, this command runs perfectly if run by hand.
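One hunch worth checking (an educated guess, since this question is left open here): &> is a bash extension. When popen3 gets a single command string containing shell metacharacters, it runs it through /bin/sh, and on Ubuntu /bin/sh is dash, which parses cmd &> file as cmd & followed by > file. Every step gets backgrounded, which would produce exactly this all-at-once behavior, and the pid you get back is the shell's, not any child's. Forcing bash, e.g. Open3.popen3("bash", "-c", job.get_cmd), sidesteps it. The difference, sketched in Python for brevity:
import subprocess

cmd = './step1 &> out1.txt && ./step2 &> out2.txt'  # hypothetical commands

# A command string goes through /bin/sh. If /bin/sh is dash, '&>' parses
# as 'step1 &' plus a bare redirection, so both steps start immediately.
subprocess.call(cmd, shell=True)

# Under bash, '&>' means 'redirect stdout and stderr to the file', so the
# steps run one after the other and && short-circuits on failure.
subprocess.call(['bash', '-c', cmd])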
I am using Elastic MapReduce from Amazon. I am sshing into the Hadoop master node and executing a script like:
$EMR_BIN/elastic-mapreduce --jobflow $JOBFLOW --ssh < hivescript.sh
This sshes me into the master node and runs the hive script. The hive script contains the following lines
hive
add jar joda-time-1.6.jar;
add jar EmrHiveUtils-1.2.jar;
and some commands to create hive tables. The script runs fine and creates the hive tables and everything else, but then comes back to the prompt I ran the script from. How do I stay sshed into the Hadoop master node, at the hive prompt?
Consider using Expect; then you could do something along these lines and interact at the end:
/usr/bin/expect <<EOF
spawn ssh ... YourHost
expect "password"
send "password\n"
send javastuff
interact
EOF
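If expect itself isn't available, the same spawn/expect/interact pattern works from Python with the third-party pexpect module; the host, password, and prompt below are placeholders:
import pexpect  # third-party: pip install pexpect

child = pexpect.spawn('ssh hadoop@your-master-node')  # hypothetical host
child.expect('assword:')       # matches both 'Password:' and 'password:'
child.sendline('yourpassword')
child.sendline('hive')         # start hive...
child.interact()               # ...then hand the session over to you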
These are the most common answers I've seen (with the drawbacks I ran into with them):
1. Use expect
This is probably the most well-rounded solution for most people.
I cannot control whether expect is installed in my target environments.
Just to try this out anyway, I put together a simple expect script to ssh to a remote machine, send a simple command, and turn control over to the user. There was a long delay before the prompt showed up, and after fiddling with it with little success I decided to move on for the time being.
Eventually I came back to this as the final solution after realizing I had violated one of the 3 virtues of a good programmer -- false impatience.
2. Use screen / tmux to start the shell, then inject commands from an external process.
This works OK, but if the terminal window dies it leaves a screen/tmux instance hanging around. I could certainly try to come up with a way to re-attach to prior instances or kill them; screen (and probably tmux) can be made to die instead of auto-detaching, but I didn't fiddle with it.
3. If using gnome-terminal, use its -x or --command flag (I'm guessing xterm and others have similar options).
I'll go into more detail on the problems I had with this under #4.
4. Make a bash script with #!/bin/bash --init-file as the shebang; this will cause your script to execute, then leave an interactive shell running afterward.
This and #3 had issues with programs that required user interaction before the shell is presented. Some programs (like ssh) worked fine; others (telnet, vxsim) presented a prompt, but no text was passed along to the program, only ctrl characters like ^C.
5. Do something like this: xterm -e 'commands; here; exec bash'. This will cause it to create an interactive shell after your commands execute (a Python sketch of the same trick follows at the end of this answer).
This is fine as long as the user doesn't attempt to interrupt with ^C before the last command executes.
Currently, the only thing I've found that gives me the behavior I need is to use cmdtool from the OpenWin project.
/usr/openwin/bin/cmdtool -I 'commands; here'
# or
/usr/openwin/bin/cmdtool -I 'commands; here' /bin/bash --norc
The resulting terminal injects the list of commands passed with -I to the program executed (no parms means default shell), so those commands show up in that shell's history.
What I don't like is that the terminal cmdtool provides feels so clunky ... but alas.
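For what it's worth, the "do your work, then exec an interactive shell last" trick behind #5 is also easy to reproduce from a small wrapper; here is a minimal Python sketch (the ls call is a stand-in for your real setup commands):
#!/usr/bin/env python
import os
import subprocess

subprocess.call(['ls', '-l'])      # stand-in for your 'commands; here'
os.execvp('bash', ['bash', '-i'])  # replace this process with an interactive shell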
This should hopefully be an easy question to answer. I am attempting to have mumble-ruby run automatically. I have everything up and running, except that after running this simple script, it ends. In short:
Running this from a terminal, I get "Press enter to terminate script" and it works.
Running this via a cronjob runs the script but ends it and runs cli.disconnect (I assume).
I want the below script to run automatically via a cronjob at a specified time and not end until the server shuts down.
#!/usr/bin/env ruby
require 'mumble-ruby'
cli = Mumble::Client.new('IP Address', Port, 'MusicBot', 'Password')
cli.connect
sleep(1)
cli.join_channel(5)
stream = cli.stream_raw_audio('/tmp/mumble.fifo')
stream.volume = 2.7
print 'Press enter to terminate script';
gets
cli.disconnect
Assuming you are on a Unix/Linux system, you can run it in a screen session. (This is a Unix command, not a scripting function.)
If you don't know what screen is, it's basically a "detachable" terminal session. You can open a screen session, run this script, and then detach from that screen session. That detached session will stay alive even after you log off, leaving your script running. (You can re-attach to that screen session later if you want to shut it down manually.)
screen is pretty neat, and every developer on Unix/Linux should be aware of it.
How to do this without reading any docs:
open a terminal session on the server that will run the script
run screen - you will now be in a new shell prompt in a new screen session
run your script
type ctrl-a then d (without ctrl; the "d" is for "detach") to detach from the screen (but still leave it running)
Now you're back in your first shell. Your script is still alive in your screen session. You can disconnect and the screen session will keep on trucking.
Do you want to get back into that screen and shut the app down manually? Easy! Run screen -r (for "reattach"). To kill the screen session, just reattach and exit the shell.
You can have multiple screen sessions running concurrently, too. (If there is more than one screen running, you'll need to provide an argument to screen -r.)
Check out some screen docs!
Here's a screen howto. Search "gnu screen howto" for many more.
Lots of ways to skin this cat... :)
My thought was to take your script (call it foo) and remove the last 3 lines. In your /etc/rc.d/rc.local file (NOTE: this applies to Ubuntu and Fedora; not sure what you're running, but it likely has something similar) you'd add nohup /path_to_foo/foo > /dev/null 2>&1 & to the end of the file so that it runs in the background. You can also run that command right at a terminal if you just want to run it and have it running. You have to make sure that foo is made executable with chmod +x /path_to_foo/foo.
Use an infinite loop. Try:
loop do
  sleep(3600)
end
You can use exit to terminate when you need to. The loop only wakes once an hour, so it doesn't eat up processing time. An infinite loop placed before your disconnect method will prevent it from being called until the server shuts down.
I have a shell script that calls a java jar file and runs an application. There's no way around this, so I have to work with what I have.
When you execute this shell script, it outputs the application status and just sits there (pretty much a console); when something happens to the program, it updates the screen. This is like any normal non-daemonized/backgrounded process. The only way to get out of it is ctrl-c, which then ends the process altogether. I do know that I could get around this by doing path_to_shell_script/script.sh &, which would background it for my session (I could use nohup if I wanted to log out).
My issue is, I just don't know how to put this script into an init script. I have most of the init script written, but when I try to daemonize it, it doesn't work. I've almost got it working; however, when I run the init script, it actually spawns the same "console" as the script and just sits there until I hit ctrl-c. Here's the line in question:
daemon ${basedir}/$prog && success || failure
The problem is that I can't background just the daemon ${basedir}/$prog part, and I think that's where I'm running into the issue. Has anyone been successful at creating an init script FOR a shell script? Also, this shell script is not daemonizable (you can background it, but the underlying program does not support a daemonize option, or else I would have just let the application do all the work).
You need to open a subshell to execute it. It also helps to redirect its output to a file, or at least to /dev/null.
Something like:
#!/bin/bash
(
{ daemon ${basedir}/$prog && success || failure ; } &>/dev/null
) &
exit 0
It works as follows: ( list ) & runs list in a background subshell. { list; } is a group command; it's used here to capture all the output of your commands and send it to /dev/null.
I have had success with initially detached screen sessions for running things like the Half-Life server and my custom "tail logfile" bash scripts.
To start something in the background:
screen -dmS arbitrarySessionName /path/to/script/launchService.sh
To look at the process:
screen -r arbitrarySessionName
Hope you find this useful, gl!
I have a master-workers architecture where the number of workers is growing on a weekly basis. I can no longer be expected to ssh or remote console into each machine to kill the worker, do a source control sync, and restart. I would like to be able to have the master place a message out on the network that tells each machine to sync and restart.
That's where I hit a roadblock. If I were using any sane platform, I could just do:
exec('ruby', __FILE__)
...and be done. However, I did the following test:
p Process.pid
sleep 1
exec('ruby', __FILE__)
...and on Windows, I get one ruby instance for each call to exec. None of them die until I hit ^C in the window in question. On every platform I tried, it executes the new version of the file each time, which I verified by making simple edits to the test script while the test marched along.
The reason I'm printing the pid is to double-check the behavior I'm seeing. On Windows, I get a different pid with each execution, which I would expect, considering that I see a new process in the task manager for each run. The Mac is behaving correctly: the pid stays the same across every exec, and I have verified with dtrace that each run triggers a call to the execve syscall.
So, in short, is there a way to get a windows ruby script to restart its execution so it will be running any code - including itself - that has changed during its execution? Please note that this is not a rails application, though it does use activerecord.
After trying a number of solutions (including the one submitted by Byron Whitlock, which ultimately put me onto the path to a satisfactory end), I settled upon:
IO.popen("start cmd /C ruby.exe #{$0} #{ARGV.join(' ')}")
sleep 5
I found that if I didn't sleep at all after the popen, and just exited, the spawn would frequently (>50% of the time) fail. This is not cross-platform obviously, so in order to have the same behavior on the mac:
IO.popen("xterm -e \"ruby blah blah blah\"&")
The classic way to restart a program is to write another one that does it for you. So you spawn a process, restart.exe <args>, then die or exit; restart.exe waits until the calling script is no longer running, then starts the script again.
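A minimal sketch of that helper pattern in Python (the names are hypothetical, and it leans on the third-party psutil package for a portable "is this pid alive?" check, since probing non-child pids is awkward with the standard library alone):
# restart_helper.py <parent_pid> <command...>
import subprocess
import sys
import time

import psutil  # third-party: pip install psutil

parent_pid = int(sys.argv[1])   # pid of the script that spawned us
command = sys.argv[2:]          # e.g. ['ruby', 'myscript.rb']

while psutil.pid_exists(parent_pid):
    time.sleep(0.5)             # wait for the caller to exit

subprocess.Popen(command)       # relaunch the (possibly updated) script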