I need to run mpg123 with a single file so that it autostarts and autocloses as it normally does, but I also need to be able to override this default behavior with commands sent to a FIFO.
I had been running mpg123 filename.mp3 from a script and simply waiting for it to finish before moving on. However, I'd like another script to be able to pause playback, control the volume, or kill the process early, depending on the user's input.
mpg123 -R --fifo /srv/http/newsctl filename.mp3 seems to start mpg123 and create the pipe, but does not start playback.
How do I make this work?
Unfortunately, mpg123 does not start playing a file given on the command line when the -R argument is used.
To start playback you have to load the file through the FIFO you created:
FIFO_MPG='/srv/http/newsctl'
mpg123 -R --fifo "$FIFO_MPG" &   # background it, or the script will block here
echo 'load filename.mp3' >> "$FIFO_MPG"
I also suggest silencing the verbose output with:
echo 'silence' >> "$FIFO_MPG"
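The pause, volume, and early-kill controls from your question go through the same FIFO. A sketch based on mpg123's generic remote-control commands (check the HELP output of your version to confirm the exact names):
echo 'pause' >> "$FIFO_MPG"       # toggles pause/resume
echo 'volume 50' >> "$FIFO_MPG"   # sets the volume to 50%
echo 'quit' >> "$FIFO_MPG"        # stops playback and exits mpg123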
I hope it is not too late. Good luck! ;)
Related
I have a game server running in an xterm window.
Once daily, from a script running on a schedule, I'd like to send a warning message to any players, followed after a delay by the stop command, to the program inside the xterm window. The stop command causes the cleanup and save functions to run automatically.
Once the program shuts down I can bring it back up easily, but I don't know how to send the warning and stop commands first.
Typed directly into xterm the commands are:
broadcast Reboot in 2 minutes
(followed by a 2 minute wait and then simply):
stop
no / or other characters required.
Any help?
Do you also need to type something from the xterm itself (from time to time) or do you want your server to be fully driven from external commands?
Is your program line-oriented? You may first try something like:
mkfifo /tmp/f
tail -f /tmp/f | myprogram
and then try to send commands to your program (from another xterm) with
echo "mycommand" > /tmp/f
You may also consider using netcat to turn your program into a server:
Turn simple C program into server using netcat
http://lifehacker.com/202271/roll-your-own-servers-with-netcat
http://nc110.sourceforge.net/
Then you could write a shell script for sending the required commands to your "server".
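A minimal sketch of the classic netcat pattern, assuming a traditional nc that accepts -l -p (the port number 12345 is arbitrary):
mkfifo /tmp/io
# nc feeds network input to the program; program output flows back via the FIFO
nc -l -p 12345 < /tmp/io | myprogram > /tmp/io

# then, from any script:
echo 'broadcast Reboot in 2 minutes' | nc localhost 12345
Note that traditional nc exits after the first connection, so wrap the server line in a while loop if you need more than one session.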
If you know a little C: I remember once hacking the script program (which was easy to do; the code is short) so that it launched another program but read commands from a FIFO (here again, a shell script would then be easy to write to drive your program).
Something you might try is running your program inside a screen session.
You can then send commands to the session from cron, just as if you had
typed them.
In your xterm, before launching the program do:
screen -S myscreen bash
(or you can even replace bash with your program). Then from your cron
screen -S myscreen -X stuff 'broadcast Reboot in 2 minutes\n'
sleep 120
screen -S myscreen -X stuff 'stop\n'
will enter that text. You can exit the session using screen -S myscreen -X quit
or by typing ctrl-a \.
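For the once-daily run, those three lines can live in a script invoked by a single crontab entry, or as two entries that avoid the sleep (a sketch; the times are placeholders):
58 3 * * * screen -S myscreen -X stuff 'broadcast Reboot in 2 minutes\n'
0 4 * * * screen -S myscreen -X stuff 'stop\n'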
screen is intended to be transparent. To easily see that you are inside screen, you can
configure a permanent status bar at the bottom of your xterm:
echo 'hardstatus alwayslastline' >> ~/.screenrc
Now when you run screen you should see a reverse-video bottom line. Depending
on your OS it may be empty.
I have a VM that I want running indefinitely. The server is always running, but I want the script to keep running after I log out. How would I go about doing that? By creating a cron job?
In general, the following steps are sufficient to convince most Unix shells that the process you're launching should not depend on the continued existence of the shell:
run the command under nohup
run the command in the background
redirect all file descriptors that normally point to the terminal to other locations
So, if you want to run command-name, you should do it like so:
nohup command-name >/dev/null 2>/dev/null </dev/null &
This tells the process that will execute command-name to send all stdout and stderr to nowhere (instead of to your terminal) and also to read stdin from nowhere (instead of from your terminal). Of course if you actually have locations to write to/read from, you can certainly use those instead -- anything except the terminal is fine:
nohup command-name >outputFile 2>errorFile <inputFile &
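For example, to keep a hypothetical long-running job alive across logout (long-job.sh and the log name are placeholders):
nohup ./long-job.sh >job.log 2>&1 </dev/null &
echo $!            # note the PID in case you want to check on or kill it later
# after logging back in:
tail -f job.log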
See also the answer in Petur's comment, which discusses this issue a fair bit.
I have a script that creates a FIFO and launches a program that writes output to the FIFO. I then read and parse the output until the program exits.
MYFIFO=/tmp/myfifo.$$
mkfifo "$MYFIFO"
MYFD=3
eval "exec $MYFD<> $MYFIFO"
external_program >&"$MYFD" 2>&"$MYFD" &
EXT_PID=$!
while kill -0 "$EXT_PID"; do
    read -t 1 LINE <&"$MYFD"
    # Do stuff with $LINE
done
This works fine for reading input while the program is still running, but it looks like the read timeout is ignored, and the read call hangs after the external program exits.
I've used read with a timeout successfully in other scripts, and a simple test script that leaves out the external program times out correctly. What am I doing wrong here?
EDIT: It looks like read -t functions as expected when I run my script from the command line, but when I run it as part of an xcodebuild build process, the timeout does not function. What is different about these two environments?
I don't think -t will work with redirection.
From the man page here:
-t timeout
Cause read to time out and return failure if a complete line
of input is not read within timeout seconds. This option has no
effect if read is not reading input from the terminal or a pipe.
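If you want to verify this in a given environment, a quick test is to keep the writer side of a pipe open but silent and time the read (a sketch):
# sleep writes nothing but holds the pipe open; if -t works here,
# read gives up after about one second with an exit status above 128
sleep 5 | { time read -t 1 LINE; echo "read status: $?"; }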
I'm running a Bukkit (Minecraft) server on a Linux machine and I want to have the server gracefully shut down using the server's stop command and the computer suspend at a certain time using pm-suspend from the command line. Here's what I've got:
me@comp:~/dir$ perl -e 'sleep [time]; print "stop\n";' | ./server && sudo pm-suspend
(I've edited my /etc/sudoers so I don't have to enter my password when I suspend.)
The thing is, while perl -e is sleeping, the server expects a constant stream of bytes (that's my guess; I could be misunderstanding something), so it prints out all of the nothings it receives, taking up precious resources:
me@comp:~/dir$ ...
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>...
Is there any such thing as a buffered pipe? If not, are there any ways to send delayed input to a script?
You may want to have a look at Bukkit's wiki, which recommends an init script for permanently running servers.
This init script uses a rather unconventional approach to communicate with the running server: the server is started in a screen session, and all commands are then sent to the server console via screen, e.g.
screen -p 0 -S $SCREEN -X eval 'stuff \"stop\"\015'
See https://github.com/Ahtenus/minecraft-init/blob/master/minecraft
This approach suggests that Bukkit may expect standard input to be attached to a terminal, thus requiring the screen wrapper (which is itself a terminal emulator) for unattended runs.
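Adapted to your timed shutdown, a sketch along the same lines (assumes the server runs in a screen session named bukkit, and that say is your server's broadcast command):
screen -S bukkit -X stuff 'say Rebooting in 2 minutes\n'
sleep 120
screen -S bukkit -X stuff 'stop\n'
sudo pm-suspend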
I logged in to a remote server via ssh and started a PHP script. Apparently it will take 17 hours to complete. Is there a way to break the connection but keep the script executing? I didn't set up any output redirection, so I am seeing all the output.
Can you stop the process right now? If so, launch screen, start the process again, and detach from screen using ctrl-a then ctrl-d. Use screen -r to retrieve the session later.
screen should be available in most distros; failing that, a package will definitely be available for you.
ctrl + z
will pause it. Then type
bg
to send it to the background. Write down the PID of the process for later usage ;)
EDIT: I forgot, after that you have to execute
disown $PID
where $PID is the PID of your process;
then the process will not be killed when you close the terminal.
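Putting the whole sequence together (a sketch; %1 assumes the job you just backgrounded is job 1, which jobs -l will confirm):
# press ctrl+z to suspend the running script, then:
bg            # resume it in the background
jobs -l       # list jobs with their PIDs; note the job number
disown %1     # drop it from the shell's job table so it survives logout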
You wrote that it's important for the script to keep running. Unfortunately I don't know whether you need to interact with the script, or whether the script was written by you.
The screen command protects against losing it: your connection will break, but screen keeps the pseudo-terminal alive, and you can reconnect to it later (see its man page).
If you don't need operator interaction with the script, you can simply put it in the background at the start and log the complete output to a file:
nohup /where/is/your.script.php >output.log 2>&1 &
>output.log redirects output into the log file, 2>&1 merges the error stream into standard output (effectively into the log file), and the final & puts the command into the background. Note that nohup detaches the process from the terminal group.
Now you can safely exit the ssh shell. Because your script is out of the terminal group, it won't be killed; it will be reparented from your shell process to the system init process. This is normal Unix behavior. You can monitor the complete output with:
tail -f output.log   # always interruptible with ^C; this only watches the file
Using this method you do not need the ^Z, bg, etc. shell tricks to put the command in the background.
Note that explicit redirection with nohup is preferred; otherwise nohup automatically redirects all output to a nohup.out file in the current directory.
You can use screen.
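For example (a sketch, assuming you can restart the script; the session name and path are placeholders):
screen -S phpjob                # start a named session
php /where/is/your.script.php   # run the script inside it
# detach with ctrl-a d, log out, and later reattach with:
screen -r phpjob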