using 'at' command in a bash file - macos

I would like to create a command, alarm, that plays a song at a given time. Here is the bash file, alarm, that I've (unfortunately) created:
#!/bin/bash
at $1
open /.../mysongs/sweetsong.mp3
but when I run alarm 0830 in my terminal, a job is created and the song immediately starts playing. Then when the time actually comes for the job to run, nothing happens. I've come to the conclusion that it comes down to my use of at. Any tips? Thanks!

According to man at:
The at utility shall read commands from standard input and group them together as an at-job, to be executed at a later time.
So you need to pass the command in on standard input (and Use More Quotes™):
at "$1" <<< 'open /.../mysongs/sweetsong.mp3'

Related

When data is piped from one program via | is there a way to detect what that program was from the second program?

Say you have a shell command like
cat file1 | ./my_script
Is there any way, from inside my_script, to detect which command was run on the other side of the pipe (in the above example, cat file1)?
I've been digging into it and so far I've not found any possibilities.
I've been unable to find any environment variable set in the process space of the second command that records the full command line; the command data my_script sees (via /proc etc.) is just ./my_script, with no information about it being run as part of a pipe. Even checking the process list from inside the second command doesn't seem to help, since the first process seems to exit before the second starts.
The best information I've found suggests that in bash you can sometimes get the exit codes of the processes in a pipe via PIPESTATUS; unfortunately nothing similar seems to exist for the names of the commands/files in the pipe. My research suggests this is impossible to do in a generic manner (I can't control how people decide to run my_script, so I can't force third-party pipe-replacement tools to be used over built-in shell pipes), but at the same time it doesn't seem like it should be impossible, since the shell has the full command line available as the command is run.
(Update: adding later information following on from the comments below.)
I am on Linux.
I've investigated the /proc/$$/fd data and it almost does the job. If the first command doesn't exit for several seconds while piping data to the second command, you can read /proc/$$/fd/0 to see the value pipe:[PIPEID] that it symlinks to. That can then be used to search through the /proc/<pid>/fd/ entries of the other running processes for another process holding a pipe open with the same PIPEID, which gives you the first process's pid.
However, in most real-world piping tests I've done, you can't trust that the first command will stay running long enough for the second one to locate its pipe fd in /proc before it exits (which removes the proc data, preventing it from being read). So I can't rely on this method returning any information.
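For what it's worth, a minimal sketch of that /proc approach (Linux only, subject to exactly the race described above, and only able to inspect processes you own unless run as root) might look like:
#!/bin/bash
# Resolve the pipe inode on our own stdin, e.g. "pipe:[123456]".
my_pipe=$(readlink /proc/$$/fd/0)
case "$my_pipe" in
  pipe:*)
    # Scan every process's fds for the same pipe inode.
    for fd in /proc/[0-9]*/fd/*; do
      if [ "$(readlink "$fd" 2>/dev/null)" = "$my_pipe" ]; then
        pid=${fd#/proc/}; pid=${pid%%/*}
        if [ "$pid" != "$$" ]; then
          echo "possible writer pid $pid: $(tr '\0' ' ' 2>/dev/null < /proc/$pid/cmdline)"
        fi
      fi
    done
    ;;
  *)
    echo "stdin is not a pipe" ;;
esac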

Last run time of shell script?

I need to create some sort of fail-safe in one of my scripts to prevent it from being re-executed immediately after a failure. Typically, when a script fails, our support team reruns it using a 3rd-party tool, which is usually OK, but it should not happen for this particular script.
I was going to echo out a time-stamp into the log, and then make a condition to see if the current time-stamp is at least 2 hrs greater than the one in the log. If so, the script will exit itself. I'm sure this idea will work. However, this got me curious to see if there is a way to pull in the last run time of the script from the system itself? Or if there is an alternate method of preventing the script from being immediately rerun.
It's a SunOS Unix system, using the Ksh Shell.
Just do it as you proposed: save the date to some file (date > somefile) and check it at the script's start. You can:
check the last line (as a date string itself),
or check the last modification time of the file (i.e. when the date command last modified the somefile).
Another common method is to create a dedicated lock file or PID file, such as /var/run/script.pid. Its content is usually the PID (and hostname, if needed) of the process that created it. The file's modification time tells you when it was created, and from its content you can check the running PID. If the PID no longer exists (i.e. the previous process died) and the file's modification time is more than X minutes old, you can start the script again.
This method is useful mainly because you can simply use cron plus some script_starter.sh that periodically checks the script's running status and restarts it when needed.
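A minimal ksh sketch of that PID-file check (the path, the two-hour window, and storing a timestamp next to the PID are all illustrative choices; date +%s assumes a date(1) that supports it, which older SunOS releases may not):
#!/bin/ksh
pidfile=/var/run/script.pid
now=$(date +%s)
if [ -f "$pidfile" ]; then
    read oldpid oldtime < "$pidfile"
    # kill -0 tests for process existence without sending a signal.
    if kill -0 "$oldpid" 2>/dev/null; then
        echo "already running as PID $oldpid" >&2
        exit 1
    fi
    # Refuse to restart within 2 hours (7200 s) of the last attempt.
    if [ $(( now - oldtime )) -lt 7200 ]; then
        echo "last run was under 2 hours ago; exiting" >&2
        exit 1
    fi
fi
echo "$$ $now" > "$pidfile"
# ... the real work goes here ...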
If you want to use system facilities (and have root access) you can use accton + lastcomm.
I don't know SunOS, but it probably has those programs. accton starts system-wide accounting of all programs (it needs root), and lastcomm command_name | tail -n 1 shows when command_name was last executed.
Check man lastcomm for the command-line switches.
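Assumed usage, as root (the accounting file path varies by system; Solaris traditionally uses /var/adm/pacct):
accton /var/adm/pacct
lastcomm myscript.sh | tail -n 1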

make nohup ignore ">" and "<"

So, I'm running the Moses machine translation system on my server, which I access over ssh, and I came across an interesting problem.
The command I'm running uses > to specify an output file, and it looks like this:
~/mosesdecoder/bin/moses -f /home/tin/working/filtered/moses.ini -i /home/tin/working/filtered/input.29242 > final
Now, since the translation will take some time to finish (around 10 hours), I want to run it with nohup; but when I do that, even if I put & at the end, I end up with a file named "final" filled with stdout stuff.
Any idea how to avoid this?
If you're running the command inside an actual script file, you could get rid of the > inside the script and run nohup ./scriptname.sh.
This will print the script's output to terminal, but nohup will redirect it to "nohup.out" in the current directory.
Source:
According to the nohup manpage I am reading: "If the standard output is a terminal, the standard output is appended to the file nohup.out in the current directory."
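Concretely, that could look like this (run.sh is a hypothetical wrapper name; the moses command line is the one from the question):
#!/bin/bash
# run.sh: the same command, with the > redirection removed
~/mosesdecoder/bin/moses -f /home/tin/working/filtered/moses.ini -i /home/tin/working/filtered/input.29242
Then launch it so that nohup collects stdout in nohup.out:
nohup ./run.sh &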
Give it a shot :)

Disown, nohup or & on Mac OS zsh… not working as hoped

Hi. I'm new to the shell and am working on my first kludged together script. I've read all over the intertube and SO and there are many, MANY places where disown, nohup, & and return are explained but something isn't working for me.
I want a simpler timer. The script asks for user input for the hours, mins., etc., then:
echo "No problem, see you then…"
sleep $[a*3600+b*60+c]
At this point (at either the first or the second line, I'm not sure) I want the script OR the specific command in the script to become a background process. Maybe a daemon? So that the timer will still go off on schedule even if:
that terminal window is shut
the terminal app is quit completely
the computer is put to sleep (I realize I probably need some different code still to wake the mac itself)
Also after the "No problem" line I want a return command so that the existing shell window is still useful in the meantime.
With some usages of the above (I can't remember which right now), the terminal-notifier command (the timer's wakeup) gets called immediately, and then a second notification fires at the right time. And using the return command anywhere basically seems to quit the script.
One thing I'm not clear on is whether/how disown, nohup, etc. are applicable to a command process vs. a script process, i.e., will any of them work properly on only a command inside a script (and if not, how to initialize a script as a background process that still asks for input).
Maybe I should use some alternative to sleep?
It isn't necessary to use a separate script or have the script run itself in order to get part of it to run in the background.
A much simpler way is to place the portions that you want to be backgrounded (the sleep and following command) inside of parentheses, and put an ampersand after them.
So the end of the script would look like:
(
sleep $time
# Do whatever
)&
This will cause that portion of the code to be run inside a subshell which is placed into the background. Since there's no code after it, the outer shell will immediately exit, returning control to your interactive shell.
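As a sketch, the tail end of the timer script could then look like this (a, b, and c are the hours/minutes/seconds read earlier; the terminal-notifier message text is made up; redirecting the subshell's output away from the terminal keeps it from tripping over a window that has since closed):
echo "No problem, see you then…"
(
    # This subshell runs in the background; once the main script
    # exits, the timer is reparented and keeps counting down.
    sleep $(( a*3600 + b*60 + c ))
    terminal-notifier -message 'Time is up'
) >/dev/null 2>&1 &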
When your script is run, it is actually run by starting a new shell to execute it. In order for you to get your script into the background, you would need to send that shell into the background, which you can't do because you would need to communicate with its parent shell.
What you can do is have your script call itself with a special argument to indicate that it should do the work:
#! /bin/zsh
if [ "$1" != '--run' ] ; then
echo sending to background
$0 --run "$@" &
exit
fi
sleep 1
echo backgrounded "$@"
This script first checks to see if its first argument is --run. If it is not, then it calls itself ($0) with that argument plus all the arguments it received ("$@") in the background, and exits. You can use a similar method, performing the test when you want to enter the background, and possibly sending only the data you will need instead of every argument. For example, to send just the number of seconds:
$0 --run $[a*3600+b*60+c] &

Making a command loop in shell with a script

How can one loop a command/program in a Unix shell without writing the loop into a script or other application?
For example, I wrote a script that outputs a light-sensor value, but I'm still testing it right now, so I want to run it in a loop by running the executable repeatedly.
Maybe I'd also like to just run "ls" or "df" in a loop. I know I can do this easily in a few lines of bash code, but being able to type a command in the terminal for any given set of commands would be just as useful to me.
You can write the exact same loop you would write in a shell script by writing it on one line, putting semicolons instead of newlines, as in:
for NAME [in LIST ]; do COMMANDS; done
At that point you could write a shell script called, for example, repeat that, given a command, runs it N times, simply by replacing COMMANDS with $1.
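A minimal sketch of such a repeat script (the name and the N-goes-first calling convention are just illustrative; it runs the remaining arguments as the command):
#!/bin/bash
# usage: repeat N command [args...]
# e.g.:  repeat 10 ./my_script
n=$1
shift
i=0
while [ "$i" -lt "$n" ]; do
    "$@"
    i=$((i + 1))
done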
I recommend using "watch"; it does exactly what you want, and it clears the terminal before each execution of the command, so it's easy to monitor changes.
You probably have it already; just try watch ls or watch ./my_script.sh. You can control how long to wait between executions, in seconds, with the -n option, and you can use -d to highlight the differences between consecutive runs.
Try:
Run ls every second:
watch -n 1 ls
Run my_script.sh every 3 seconds, highlighting differences:
watch -n 3 -d ./my_script.sh
watch program man page:
http://linux.die.net/man/1/watch
This doesn't exactly answer your question, but I felt it was relevant. One of the great things about shell looping is that some commands return lists of items. Of course that is obvious, but something you can do using the for loop is execute a command on each item in that list.
for file in $(find . -name '*.wma'); do cp "$file" ./new/location/; done
You can get creative and do some very powerful stuff.
Aside from accepting arguments, anything you can do in a script can be done on the command line. Earlier I typed this directly into bash to watch a directory fill up as I transferred files:
while sleep 5s
do
ls photos
done
