This question is related to my previous one: Running erlang shell as a daemon/service
I have a script that looks like this:
#!/bin/bash
# Load the VERBOSE setting and other rcS variables
. /lib/init/vars.sh

# Define LSB log_* functions.
# Depend on lsb-base (>= 3.0-6) to ensure that this file is present.
. /lib/lsb/init-functions

export HEART_COMMAND="/etc/init.d/script restart"

start() {
    erl -heart -pa DIR -sname NAME -setcookie COOKIE -env port 21 -s M -s M2 --
    ### Create the lock file ###
    touch /var/lock/lock
}

stop() {
    erl -noshell -sname temp_control -setcookie COOKIE -eval "rpc:call('NAME@ubuntu', init, stop, [])" -s init stop
    ### Now, delete the lock file ###
    rm -f /var/lock/lock
}

### main logic ###
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        # start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac
exit 0
I don't know how to simulate a crash, so I just tried Ctrl+C and aborted the shell. The output looks like this:
root@ubuntu:/etc/init.d# ./script start
heart_beat_kill_pid = 17512
Erlang R13B03 (erts-5.7.4) [source] [64-bit] [smp:4:4] [rq:4] [async-threads:0] [hipe] [kernel-poll:false]
Eshell V5.7.4 (abort with ^G)
(NAME@ubuntu)1> Starting M2
Listening on port 21
(NAME@ubuntu)1>
(NAME@ubuntu)1>
(NAME@ubuntu)1>
(NAME@ubuntu)1>
(NAME@ubuntu)1>
BREAK: (a)bort (c)ontinue (p)roc info (i)nfo (l)oaded
(v)ersion (k)ill (D)b-tables (d)istribution
a
heart: Fri Jul 29 09:25:10 2011: Erlang has closed.
root@ubuntu:/etc/init.d# heart_beat_kill_pid = 17557
heart: Fri Jul 29 09:25:13 2011: Erlang has closed.
/etc/init.d/NAME: line 20: 17557 Killed erl -heart -pa DIR -sname NAME -setcookie COOKIE -env port 21 -s M -s M2 --
heart: Fri Jul 29 09:25:13 2011: Executed "/etc/init.d/script restart". Terminating.
heart_beat_kill_pid = 17602
heart: Fri Jul 29 09:25:15 2011: Erlang has closed.
/etc/init.d/NAME: line 20: 17602 Killed erl -heart -pa DIR -sname NAME -setcookie COOKIE -env port 21 -s M -s M2 --
heart: Fri Jul 29 09:25:15 2011: Executed "/etc/init.d/script restart". Terminating.
heart: Fri Jul 29 09:25:17 2011: Executed "/etc/init.d/script restart". Terminating.
root@ubuntu:/etc/init.d#
This goes on forever if I don't comment out the line of code in the script that starts it. It's like an endless loop of terminating Erlang shells... or something.
If I try, for example, export HEART_COMMAND="/bin/echo hello", it says "write error: broken pipe".
Why doesn't it work? How do I simulate a crash properly to check whether the heart command works?
Thankful for any advice you might have.
Answering the question you didn't ask
(but mentioned a couple of times that you don't know how to do)
To simulate a crash, send SIGSEGV: kill -SEGV <PID>
Example:
$ sleep 30 &
[1] 13274
$ kill -SEGV 13274
[1]+ Segmentation fault sleep 30
Also, while I don't know Erlang, I presume that it spawns multiple threads and that one thread can monitor another by sending heartbeat messages. If the other thread does not respond, it is assumed to be hung and is restarted.
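Applied to the Erlang case above, you could send SIGSEGV to the beam process to make the VM die abruptly and let heart react. A minimal, self-contained sketch of what kill -SEGV does to a process, using sleep as a stand-in for the VM:

```shell
#!/bin/bash
# Stand-in for the Erlang VM: any long-running process will do
sleep 30 &
victim=$!

# Simulate a crash: SIGSEGV is the signal the kernel sends on a real segfault
kill -SEGV "$victim"

# A process killed by signal N exits with status 128+N (SIGSEGV is 11)
status=0
wait "$victim" || status=$?
echo "exit status: $status"   # 128 + 11 = 139
```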
Related
I have a script that does stuff with an ID as a parameter, I want to make it accept multiple parameters and then duplicate itself so that each copy runs with its own parameter from the list. For instance, I run "script 1 2 3&", and I want to see the result as if I were to run "script 1&", "script 2&", and "script 3&".
I can launch a script inside a script using
MyName="${0##*/}"
bash ./$MyName $id
$id is basically the parameter I want to pass to this script. I launch multiple of them; however, the scripts I launch from within the script get processed in a row, one after another, not in parallel. I tried adding '&' at the end after $id, but it did not work.
Is it possible to launch a script from a script so that they run as separate processes in the background as if I were to run multiple scripts with & myself from the terminal?
This does what I think you want... with some dummy example code for the case of handling a single id.
If no parameters are given, it will give you Usage: output, if more than one parameter is given it will invoke itself as a background job, once for each of those parameters, and then wait for those background jobs to be finished.
If only one parameter is given, it will execute the workload for you.
A cleaner approach would probably be to do this in two scripts, one that is the launcher that handles the first two situations, and a second script that handles the work load.
#!/bin/bash

if [ $# -eq 0 ]; then
    echo "Usage: $0 <id1> ..." 1>&2
    exit 1
fi

if [ $# -gt 1 ]; then
    # Fire up the background jobs, one per id
    for myid in "$@"; do
        "$0" "$myid" &
    done
    # Now wait for the background jobs to finish
    wait
    exit 0
fi

# Exactly one id given: this invocation does the actual work
id=$1
echo "PID $$ ($id) - Hello world, on $(date)."
sleep 5
Example run:
$ ./myscript.sh 10 20 30 40 50; date
PID 8558 (10) - Hello world, on Tue Jun 21 22:22:10 PDT 2022.
PID 8559 (20) - Hello world, on Tue Jun 21 22:22:10 PDT 2022.
PID 8560 (30) - Hello world, on Tue Jun 21 22:22:10 PDT 2022.
PID 8561 (40) - Hello world, on Tue Jun 21 22:22:10 PDT 2022.
PID 8563 (50) - Hello world, on Tue Jun 21 22:22:10 PDT 2022.
Tue Jun 21 22:22:15 PDT 2022
The ; date at the end shows that the initial script doesn't return until all child processes are finished.
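The cleaner two-script split mentioned in the answer might look like this sketch. The worker script is generated inline here only to keep the example self-contained; worker.sh and its /tmp paths are made-up names:

```shell
#!/bin/bash
# Launcher half of the two-script approach: it only fans out and waits.
# The worker half would normally be a separate checked-in script; here
# it is written out so the demo runs on its own.
cat > /tmp/worker.sh <<'EOF'
#!/bin/bash
echo "PID $$ ($1) - working"
EOF
chmod +x /tmp/worker.sh

: > /tmp/worker_demo.out
for id in 10 20 30; do
    /tmp/worker.sh "$id" >> /tmp/worker_demo.out &   # one background job per id
done
wait   # return only after every worker has exited
echo "all workers done"
```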
I am trying to create a script that runs just before sleeping. Can someone tell me what I am doing wrong here? This script runs perfectly when I run the command in terminal.
king@death-star /etc/pm/sleep.d $ ls
total 1MB
drwxr-xr-x 2 root root 1MB May 30 15:21 .
drwxr-xr-x 5 root root 1MB Nov 28 2015 ..
-rwxr-xr-x 1 root root 1MB Jun 26 2015 10_grub-common
-rwxr-xr-x 1 root root 1MB Dec 6 2013 10_unattended-upgrades-hibernate
-rwxr-xr-x 1 root root 1MB May 22 2012 novatel_3g_suspend
-rwxr-xr-x 1 root root 1MB May 30 15:20 revert_kb_on_sleep
king@death-star /etc/pm/sleep.d $ cat revert_kb_on_sleep
sh -c "/home/king/Desktop/Scripts/rotate_desktop normal; /home/king/Desktop/Scripts/misc/my_keyboard on"
Output from log:
$ cat /var/log/pm-suspend.log
Running hook /etc/pm/sleep.d/revert_kb_on_sleep suspend suspend:
Can't open display
Can't open display
xrandr: --rotate requires an argument
Try 'xrandr --help' for more information.
No protocol specified
Unable to connect to X server
/etc/pm/sleep.d/revert_kb_on_sleep suspend suspend: success.
Mon May 30 15:23:39 EDT 2016: performing suspend
Mon May 30 15:27:59 EDT 2016: Awake.
Mon May 30 15:27:59 EDT 2016: Running hooks for resume
Running hook /etc/pm/sleep.d/revert_kb_on_sleep resume suspend:
Can't open display
Can't open display
xrandr: --rotate requires an argument
Try 'xrandr --help' for more information.
No protocol specified
Unable to connect to X server
/etc/pm/sleep.d/revert_kb_on_sleep resume suspend: Returned exit code 1.
Any luck with this? I wrote a script to run after waking, and I'm getting similar errors. This script is supposed to turn off the laptop display upon waking from sleep.
case "${1}" in
    resume|thaw)
        screen_status=$(xset -q -display :0.0 | tail -1 | sed 's/^[ \t]*//g')
        if [[ "$screen_status" = "Monitor is On" ]]; then
            sleep 1 && xset -display :0.0 dpms force off
        fi
        ;;
esac
But I get the following error:
No protocol specified
xset: unable to open display ":0.0"
I've tried to get it to set screen_status to "Monitor is off" when it can't get a display, so that it triggers the condition to execute xset anyway, but that's not working either, because it can't access the display. In the meantime, I set xfce4-power-manager to turn off the screen after 1 minute. Having to wait a minute is better than nothing!
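Both the question's hook and this one fail the same way: pm hooks run as root outside the user's X session, so DISPLAY and XAUTHORITY are unset and every X client is refused ("Can't open display", "No protocol specified"). A sketch of a hook that points X clients at the session first; the display :0 and the .Xauthority path are assumptions that need adjusting to your setup (the script paths come from the question above):

```shell
#!/bin/sh
# /etc/pm/sleep.d/revert_kb_on_sleep (sketch, not a drop-in fix)
# pm hooks run as root with no X environment, so export the session's
# display and authority file before calling any X client.
export DISPLAY=:0                          # assumed display
export XAUTHORITY=/home/king/.Xauthority   # assumed user/session

case "$1" in
    suspend|hibernate)
        /home/king/Desktop/Scripts/rotate_desktop normal
        /home/king/Desktop/Scripts/misc/my_keyboard on
        ;;
esac
```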
How can I read the content of an xterm or terminal, only by knowing its device number?
Similar to moving the mouse over the text.
Redirecting or cloning the terminal output to a file would be an option too, as long it could be done without interacting with commands executed in this terminal.
So nothing like 'command > myfile'.
Or is the only way to solve this a print screen with ocr or simulating mouse moves and clicks?
Edit: I'm looking for a solution that reads the content regardless of its origin, e.g. 'echo "to tty" > /dev/pts/1'
The script command may work for you.
"Script makes a typescript of everything printed on your terminal. It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later" - man script
You can even pass script as command when invoking xterm with -e:
ubuntu@ubuntu:~$ xterm -e script
ubuntu@ubuntu:~$ # A new xterm is started. uname is run, then exit
ubuntu@ubuntu:~$ # The output is captured to a file called typescript, by default:
ubuntu@ubuntu:~$ cat typescript
Script started on Tue 19 Nov 2013 06:00:07 PM PST
ubuntu@ubuntu:~$ uname
Linux
ubuntu@ubuntu:~$ exit
exit
Script done on Tue 19 Nov 2013 06:00:13 PM PST
ubuntu@ubuntu:~$
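If you only need to capture a single command rather than a whole session, the util-linux version of script can run it directly via -c (-q suppresses script's own start/done messages on the console); the file name here is arbitrary:

```shell
# Run one command under script(1); everything it prints to the
# terminal is copied into the typescript file
script -q -c "echo captured-by-script" /tmp/typescript_demo
cat /tmp/typescript_demo
```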
Environment: Recent Ubuntu, non-standard packages are OK as long as they are not too exotic.
I have a data processor bash script that processes data from stdin:
$ cat data | process_stdin.sh
I can change the script.
I have a legacy data producer system (that I can not change) that logs in to a machine via SSH and calls the script, piping it data. Pseudocode:
foo@producer $ cat data | ssh foo@processor ./process_stdin.sh
The legacy system launches ./process_stdin.sh a zillion times per day.
I would like to keep ./process_stdin.sh running indefinitely at processor machine, to get rid of process launch overhead. Legacy producer will call some kind of wrapper that will somehow pipe the data to the actual processor process.
Is there a robust, Unix-style way to do what I want with minimal code? I do not want to change ./process_stdin.sh (much); the full rewrite is already scheduled, but, alas, not soon enough, and I cannot change the data producer.
A (not so) dirty hack could be the following:
As foo on processor, create a fifo and run a tail -f redirected to stdin of process_stdin.sh, possibly in an infinite loop:
foo@processor:~$ mkfifo process_fifo
foo@processor:~$ while true; do tail -f process_fifo | process_stdin.sh; done
Don't worry, at this point process_stdin.sh is just waiting for some stuff to arrive on the fifo process_fifo. The infinite loop is just here in case something wrong happens, so that it is relaunched.
Then you can send your data thus:
foo@producer:~$ cat data | ssh foo@processor "cat > process_fifo"
Hope this will give you some ideas!
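Here is a local, self-contained demo of that fifo mechanism (no ssh; a plain output file stands in for process_stdin.sh, and the 3-second timeout exists only to end the demo):

```shell
#!/bin/bash
fifo=/tmp/process_fifo_demo
rm -f "$fifo"
mkfifo "$fifo"

# Long-lived consumer: tail -f keeps the read end of the fifo open,
# so it survives each writer closing its end
timeout 3 tail -f "$fifo" > /tmp/fifo_demo.out &
consumer=$!
sleep 0.2   # give tail a moment to open the fifo

# Two short-lived writers, like two separate ssh invocations
echo "first batch"  > "$fifo"
echo "second batch" > "$fifo"

wait "$consumer" || true   # timeout makes tail exit non-zero; expected here
cat /tmp/fifo_demo.out
```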
flock does the job.
Here the same command is started 3 times in quick succession, and each invocation waits until the lock is free.
# flock /var/run/mylock -c 'sleep 5 && date' &
[1] 21623
# flock /var/run/mylock -c 'sleep 5 && date' &
[2] 21626
# flock /var/run/mylock -c 'sleep 5 && date' &
[3] 21627
# Fri Jan 6 12:09:14 UTC 2017
Fri Jan 6 12:09:19 UTC 2017
Fri Jan 6 12:09:24 UTC 2017
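The same serialization can be made visible with start/end markers: because each invocation holds the lock for its whole command, the markers from different jobs never interleave. The file names below are arbitrary:

```shell
#!/bin/bash
: > /tmp/flock_demo.out
for i in 1 2 3; do
    # Each job takes the lock, prints, sleeps, prints, then releases it,
    # so its start/end pair always lands on adjacent lines
    flock /tmp/flock_demo.lock -c "echo start $i; sleep 0.2; echo end $i" \
        >> /tmp/flock_demo.out &
done
wait
cat /tmp/flock_demo.out
```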
We're using groovy to execute a bash script that has debug mode set -x in it. We're running it like so:
def proc = "bash hello.sh".execute()
proc.in.eachLine { line -> println line }
proc.waitForOrKill(100*1000)
When we run it directly from command prompt with bash hello.sh, we see echo lines and + lines:
Tue Jun 11 10:52:42 IDT 2013:: Running
+ mkdir -p folder
+ tar -xzf file
...
But when we run it from groovy, only the echo lines are visible!
Tue Jun 11 10:52:42 IDT 2013:: Running
What's the deal? Is this a groovy/Java bug?
Try adding
proc.consumeProcessOutput(System.out, System.err)
before you wait for it to finish (in place of your proc.in.eachLine line). The missing + lines are there because set -x writes its trace to stderr, while proc.in only reads the process's stdout.
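You can confirm where the set -x trace goes with a quick shell check, no Groovy needed (file paths are arbitrary):

```shell
# set -x trace goes to stderr; normal output goes to stdout
bash -c 'set -x; echo hello' > /tmp/trace_out.txt 2> /tmp/trace_err.txt

echo "stdout was:"
cat /tmp/trace_out.txt    # hello
echo "stderr was:"
cat /tmp/trace_err.txt    # + echo hello
```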