how to send minicom to background? - bash

I need to log some data from a serial device, so I would like to first start minicom in the background with &:
minicom -D /dev/ttyXYZ -b 115200 -C logFile &
But a look at ps aux shows that minicom has been stopped (state T):
edeviser 8835 0.0 0.0 19696 2628 pts/0 T 15:29 0:00 minicom -D /dev/ttyXYZ -b 115200 -C logFile
How can I send minicom to the background?
Further information:
I would like to send it to the background because I'll need to trigger some actions after minicom has started logging the serial data to the logFile. Using cat /dev/ttyXYZ > logFile is not an option, because I must specify the baud rate. Using a second terminal is also not an option, because this work will be done by a bash script.

Have you tried:
nohup minicom -D /dev/ttyXYZ -b 115200 -C logFile &
I ran into the same issue when I was trying to convert movies with handbrake-cli.
nohup detaches the task from the terminal (the trailing & puts it in the background) and redirects its output to a file called nohup.out.
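As a rough sketch (untested; minicom normally expects a usable terminal, so whether it tolerates these redirections is an assumption here), a script could capture the PID so follow-up actions can run while logging continues:
#!/bin/bash
# Start minicom detached from the terminal; device, baud rate and log file
# are taken from the question
nohup minicom -D /dev/ttyXYZ -b 115200 -C logFile >/dev/null 2>&1 &
MINICOM_PID=$!

# ... trigger the follow-up actions here while minicom logs ...

kill "$MINICOM_PID"   # stop logging when done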

Related

How can I make this bash xautolock script work?

Hi guys. I'm running Arch with i3 as my WM. i3lock works fine when manually executed via a keybinding, xautolock is of course installed, and the script is launched at startup (when I try to launch it manually I get the message "xautolock is already running (PID 1302)"), but my screen never locks automatically.
Here is the script :
#!/bin/sh
exec xautolock -detectsleep \
-time 3 -locker "i3lock -d -c 000000" \
-notify 30 \
-notifier "notify-send -u critical -t 10000 -- 'LOCKING screen in 30 seconds'"
Thanks in advance.
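One thing worth trying (a sketch, not a confirmed fix): since the error says an instance is already running, that instance may have been launched at startup without your options. Kill it and relaunch with the flags you want:
# Ask the running xautolock instance to exit, then relaunch with the desired flags
xautolock -exit
sleep 1
xautolock -detectsleep -time 3 \
    -locker "i3lock -d -c 000000" \
    -notify 30 \
    -notifier "notify-send -u critical -t 10000 -- 'LOCKING screen in 30 seconds'" &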

Can't redirect rfcomm output to a file

I would like to redirect the output of rfcomm to a file in bash like
$ rfcomm watch hci0 > rfcomm.log &
or
$ rfcomm watch hci0 > rfcomm.log 2>&1 &
However, rfcomm.log remains desperately empty.
Why?
If this program is writing to the tty directly, then you can capture the output by invoking it via ssh with a pseudo-tty (-t). Try:
ssh -t localhost 'rfcomm watch hci0' > rfcomm.log
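If the ssh detour feels heavy, script(1) from util-linux is a related trick (a sketch, under the same assumption that rfcomm writes to its controlling tty): it runs the command under a pseudo-tty and records everything it prints to the given file:
# Run the command under a pty and log its output to rfcomm.log
script -c 'rfcomm watch hci0' rfcomm.log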

Execute simultaneous scripts on remote machines and wait until the process completes

The original idea was to copy a script out to each IP address that would yum-install some RPMs and perform some configuration steps on each machine. Since the yum install takes about 20 minutes, the hope was to run the installs simultaneously on all machines, then wait for all the spawned processes to finish before continuing.
#!/bin/bash
PEM=$1
IPS=$2
for IP in $IPS; do
  scp -i "$PEM" /tmp/A.sh ec2-user@"$IP":/tmp
  ssh -i "$PEM" ec2-user@"$IP" chmod 777 /tmp/A.sh
done
for IP in $IPS; do
  ssh -t -i "$PEM" ec2-user@"$IP" sudo /tmp/A.sh &
done
wait
echo "IPS have been configured."
exit 0
Executing a remote sudo command in the background on three IP addresses yields three error messages. Obviously, there's a flaw in my logic.
Pseudo-terminal will not be allocated because stdin is not a terminal.
Pseudo-terminal will not be allocated because stdin is not a terminal.
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
All machines are CentOS 6.5
You need to tell ssh not to read from standard input:
ssh -n -t root@host "sleep 100" &
Here's an example
drao#darkstar:/tmp$ cat a
date
ssh -n -t me@host1 "sleep 100" &
ssh -n -t me@host2 "sleep 100" &
wait
date
darkstar:/tmp$ . ./a
Mon May 16 15:32:16 CEST 2016
Pseudo-terminal will not be allocated because stdin is not a terminal.
Pseudo-terminal will not be allocated because stdin is not a terminal.
[1]- Done ssh -n -t me@host1 "sleep 100"
[2]+ Done ssh -n -t me@host2 "sleep 100"
Mon May 16 15:33:57 CEST 2016
darkstar:/tmp
That waited 101 seconds in all. Obviously I have the ssh keys set up, so I did not get prompted for the password.
But looking at your output, it seems that sudo on the remote machine is failing; you might not even need -n.
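For the "sudo: sorry, you must have a tty to run sudo" errors specifically, a variant worth trying (a sketch, not verified on CentOS 6.5) is -tt, which forces pseudo-tty allocation even when local stdin is not a terminal and so satisfies sudo's requiretty setting:
# Doubling -t forces remote pty allocation even without a local tty
for IP in $IPS; do
  ssh -tt -i "$PEM" ec2-user@"$IP" sudo /tmp/A.sh &
done
wait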
Just to push some devopsy doctrine on you:
Ansible does this amazingly well.
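For instance (a sketch; the inventory file hosts and the package name are made up for illustration), a single ad-hoc command installs a package on every host in parallel and waits for all of them:
# -b escalates with sudo on the remote side; --forks controls parallelism
ansible all -i hosts -u ec2-user -b -m yum -a "name=somepackage state=present" --forks 10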

Why docker exec is killing nohup process on exit?

I have a running Ubuntu Docker container with just a bash script inside. I want to start my application inside that container with docker exec, like this:
docker exec -it 0b3fc9dd35f2 ./main.sh
Inside the main script I want to run another application with nohup, as it is a long-running application:
#!/bin/bash
nohup ./java.sh &
#with this strange sleep the script is working
#sleep 1
echo `date` finish main >> /status.log
The java.sh script is as follows (for simplicity it is a dummy script):
#!/bin/bash
sleep 10
echo `date` finish java >> /status.log
The problem is that java.sh is killed immediately after docker exec returns. The question is: why?
The only solution I found is to add a dummy sleep 1 to the first script after nohup has started. Then the second process runs fine. Do you have any idea why that is?
[EDIT]
A second solution is to add an echo or trap command to the java.sh script just before the sleep. Then it works fine. Unfortunately I cannot use this workaround, as instead of this script I have a java process.
This is not an answer, but I still don't have the required reputation to comment.
I don't know why the nohup doesn't work, but I found a workaround that did, using your ideas:
docker exec -ti running_container bash -c 'nohup ./main.sh &> output & sleep 1'
Okay, let's join the two answers above :D
First, rcmgleite is exactly right: use the -d option to run the process detached in the background.
And second (the most important part!): if you run a detached process, you don't need nohup!
deploy_app.sh
#!/bin/bash
cd /opt/git/app
git pull
python3 setup.py install
python3 -u webui.py >> nohup.out
Execute this inside a container
docker exec -itd container_name bash -c "/opt/scripts/deploy_app.sh"
Check it
$ docker attach container_name
$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 11768 1940 pts/0 Ss Aug31 0:00 /bin/bash
root 887 0.4 0.0 11632 1396 pts/1 Ss+ 02:47 0:00 /bin/bash /opt/scripts/deploy_app
root 932 31.6 0.4 235288 32332 pts/1 Sl+ 02:47 0:00 python3 -u webui.py
I know this is a late response but I will add it here for documentation reasons.
When using nohup in bash and running the script via docker exec, you should use
$ docker exec -d 0b3fc9dd35f2 /bin/bash -c "./main.sh"
The -d option means:
-d, --detach    Detached mode: run command in the background
For more information about docker exec, see:
https://docs.docker.com/engine/reference/commandline/exec/
This should do the trick.
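To verify, a quick sketch reusing the container ID and the dummy scripts from the question (the 11-second sleep just outlasts the 10-second java.sh):
docker exec -d 0b3fc9dd35f2 ./main.sh
sleep 11
# Both the 'finish main' and 'finish java' lines should now be present
docker exec 0b3fc9dd35f2 cat /status.log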

Is it possible to run two programs simultaneously or one after another using a bash or expect script?

I have basically two lines of code which are:
tcpdump -i eth0 -s 65535 -w - >/tmp/Captures
tshark -i /tmp/Captures -T pdml >results.xml
If I run them both in separate terminals it works fine.
However, I've been trying to create a simple bash script that will execute them at the same time, but have had no luck. The bash script is as follows:
#! /bin/bash
tcpdump -i eth0 -s 65535 -w - >/tmp/Captures &
tshark -i /tmp/Captures -T pdml >results.xml &
If anyone could help me get this to work, or get it to "run tcpdump until a key is pressed, then run tshark, then when a key is pressed again close", I'd appreciate it. I have only a little bash scripting experience.
Do you need to run tcpdump and tshark separately? A pipe will feed the output of tcpdump straight into tshark:
tcpdump -i eth0 -s 65535 -w - | tshark -i - -T pdml > results.xml
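If the two stages really must be driven by key presses, here is a rough sketch (untested; note it reads the saved capture with tshark -r rather than -i):
#!/bin/bash
# Capture until a key is pressed, then dissect the saved capture
tcpdump -i eth0 -s 65535 -w /tmp/Captures &
TCPDUMP_PID=$!
read -n 1 -s -r -p "Capturing... press any key to stop"
kill "$TCPDUMP_PID"
wait "$TCPDUMP_PID" 2>/dev/null
tshark -r /tmp/Captures -T pdml > results.xml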
