Using minicom to retrieve modem information stops after x seconds - bash

I am using minicom to connect to my modem (a Quectel EC25). The goal is to send different AT commands to retrieve certain information about the modem and save it in an output file. I wrote the following bash script:
#!/bin/bash
while true;
do
sudo minicom -D /dev/ttyUSB2 -S script.txt -C AT_modems_responses_1.txt
sleep 1
done
with script.txt being:
send AT
expect OK
send ATI
expect OK
send AT+COPS?
expect OK
start:
send AT+CCLK?
expect OK
send AT+CREG?
expect OK
send AT+CSQ
expect OK
sleep 1
goto start
The problem is that the AT commands stop working after 2 minutes (AT+CCLK? & AT+CSQ).
Why does it stop? What is the problem? Should I work with the AT commands in a different way?
Thank you in advance

The runscript interpreter exits by default after 120 seconds (2 minutes). This is why minicom stopped working after 2 minutes; to run for longer, a timeout has to be included in the script. For 5 minutes it would be:
timeout 300
I don't know how it can be configured to run indefinitely.
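A sketch of how the fix slots in: the timeout goes at the top of script.txt, before the first send (only the first few commands are shown; the rest are unchanged from the question):

```
timeout 300
send AT
expect OK
send ATI
expect OK
```

Since the outer bash loop restarts minicom after each run, each pass then lasts up to five minutes instead of two.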

Related

linux expect in background

I use the following bash script to connect to pbx using telnet:
expect.sh:
#!/usr/bin/expect
spawn telnet [ip] 2300
expect -exact "-"
send "SMDR\r";
expect "Enter Password:"
send "PASSWORD\r";
interact
and created another script to redirect the result to a file:
#!/bin/bash
./expect.sh | tee pbx.log
I'm trying to run expect.sh at boot time, so I added it to systemd. When I add it as a service in /etc/systemd/system it runs, but I can't get the results into the log file the way I do when I run both scripts manually.
Any idea how I can run it at boot time?
TIA
If you just want to permanently output everything received after providing your password, simply replace your interact with expect eof, i.e. wait for the end-of-file that occurs when the connection is closed by the other end. You will probably also want to change the default timeout of 10 seconds with no data, after which expect stops the command:
set timeout -1
expect eof
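For the boot-time half of the question, a minimal systemd unit sketch (the paths, unit name, and log location are assumptions, not from the original setup):

```
[Unit]
Description=PBX SMDR logger
After=network-online.target

[Service]
ExecStart=/bin/bash -c '/opt/pbx/expect.sh | tee -a /var/log/pbx.log'
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With expect eof in place of interact, the script needs no terminal, so it can run unattended as a service.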

Shell script: How to loop run two programs?

I'm running an Ubuntu server to mine crypto. It's not a very stable coin yet and their main node gets disconnected sometimes. When this happens it crashes the program with a fatal error.
At first I wrote a loop script so it would keep running after a crash and just try again after 15 seconds:
while true;
do ./miner <somecodetoconfiguretheminer> && break;
sleep 15
done;
This works, but is inefficient. Sometimes the loop will keep running for 30 minutes until the main node is back up - which costs me 30 minutes of hashing power unused. So I want it to run a second miner for 15 minutes to mine another coin, then check the first miner again if its working yet.
So basically: Start -> Mine coin 1 -> if crash -> Mine coin 2 for 15 minutes -> go to Start
I tried the script below but the server just becomes unresponsive once the first miner disconnects:
while true;
do ./miner1 <somecodetoconfiguretheminer> && break;
timeout 900 ./miner2
sleep 15
done;
I've read through several topics/questions on how && break works, how timeout works and how while true works, but I can't figure out what I'm missing here.
Thanks in advance for the help!
A much simpler solution would be to run both of the programs all the time, and lower the priority of the less-preferred one. On Linux and similar systems, that is:
nice -10 ./miner2loop.sh &
./miner1loop.sh
Then the scripts can be similar to your first one.
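The first loop from the question already has the right shape for each wrapper. A runnable sketch of one wrapper, where run_miner is a hypothetical stand-in for ./miner1 and its flags, failing twice before succeeding so the retry behaviour is visible:

```shell
#!/bin/bash
# Stand-in for ./miner1 <somecodetoconfiguretheminer>: fails on the
# first two attempts, succeeds on the third, to simulate the main
# node coming back up.
attempts=0
run_miner() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

while true; do
    run_miner && break   # leave the loop on a clean exit
    sleep 1              # back off before retrying (15s in the real script)
done
echo "miner exited cleanly after $attempts attempts"
```

The miner2 wrapper is identical apart from the command it runs; nice keeps it from stealing CPU time while miner1 is healthy.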
Okay, so after trial and error - and some help - I found out that there is nothing wrong with my initial code. timeout appears to behave differently on my Linux instance when used in a terminal than in a bash script. Used in a terminal it behaves as it should: it counts down and then kills the process it started. Used in bash, however, it acts as if I had typed sleep, and simply stops after counting down.
Apparently this has to do with my Ubuntu instance (running on a VPS), even though I have the latest coreutils and everything else installed through apt-get update etc. This is the case for me on DigitalOcean as well as Google Compute.
The solution is to use the timeout code as a function within the bash script, as found in another thread on Stack Overflow. I named the function timeout2 so as not to trigger the improperly working timeout command:
#!/bin/bash
# Executes a command with a timeout
# Params:
# $1 timeout in seconds
# $2... command and its arguments
# Returns 1 if timed out, 0 otherwise
timeout2() {
    time=$1
    shift
    # start the command in a subshell to avoid problems with pipes
    # (spawn accepts one command)
    command="/bin/sh -c \"$*\""
    expect -c "set timeout $time; spawn -noecho $command; expect timeout { exit 1 } eof { exit 0 }"
    if [ $? -eq 1 ] ; then
        echo "Timeout after ${time} seconds"
    fi
}
while true;
do
./miner1 <parameters for miner> && break;
sleep 5
timeout2 300 ./miner2 <parameters for miner>
done;
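As an alternative when expect is unavailable, the same effect can be approximated in pure bash with a background watchdog. This is a different technique from the answer's expect-based function, sketched under the assumption that a plain kill is acceptable for the miner:

```shell
#!/bin/bash
# Rough pure-bash timeout: run the command in the background and
# kill it from a watchdog subshell if it outlives the limit.
# Returns the command's exit status (143 = killed by SIGTERM).
timeout2() {
    local limit=$1; shift
    "$@" &                                          # the command under test
    local pid=$!
    ( sleep "$limit"; kill "$pid" 2>/dev/null ) &   # watchdog
    local watchdog=$!
    wait "$pid"
    local status=$?
    kill "$watchdog" 2>/dev/null                    # cancel the watchdog
    return $status
}

timeout2 2 sleep 30 || echo "timed out"
timeout2 5 true && echo "finished in time"
```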

How to provide ctrl+C in the shell script?

Below is the script that I tried to execute to automate testing:
while [ 1 ]; do
val=`expr $val + 1`
ksh ./run.ksh # This line needs keyboard interaction, so I can't run it in the background. It takes too long to complete, so I need to kill the above command using ctrl+C
echo "pid=$!"
echo "pid=$$"
sleep 40
val1=val;
done;
./run.ksh is a script that has some business logic to send data to another machine and wait for the response. Even after the response is received, it waits a reasonable amount of time to complete the processing, because it waits for the connection to be closed and does other cleanup activity.
My problem is that I want to kill that script after a few seconds by sending ctrl+C. When I googled, I found that $! can be used to get the process id of a background process, but that cannot be used in this case.
Is it possible to send ctrl+C in a shell script?
Thanks in advance.
Use the timeout command. For example, the script will be killed after 30 seconds:
timeout 30 ksh ./run.ksh
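GNU timeout exits with status 124 when it had to kill the command, so a script can distinguish a timed-out run from a normal exit. A small runnable sketch, with sleep standing in for ksh ./run.ksh:

```shell
#!/bin/bash
# GNU timeout sends SIGTERM after the limit and exits with 124,
# which distinguishes a killed run from a normal one.
timeout 1 sleep 30
if [ $? -eq 124 ]; then
    echo "run.ksh was killed after the time limit"
fi
```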

Why can't I transfer a file to the remote VPS with expect?

expect has been installed, and it_is_a_test is the VPS password.
scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
The command can transfer the file /home/wpdatabase_backup.sql to my vps_ip:/tmp.
Now I rewrite the process into the following code:
#!/usr/bin/expect -f
spawn scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
expect "password:"
send "it_is_a_test\r"
Why can't I transfer my file to the remote vps_ip with expect?
Basically, expect works with two main commands, send and expect. If send is used, it is usually necessary to have an expect afterwards (while the reverse is not required).
This is because without it we miss what is happening in the spawned process: expect assumes that you simply need to send one string value and are not expecting anything else from the session, so the script exits, causing the failure.
So, you just have to add one expect to wait for the closure of the scp command, which can be done by waiting for eof (End Of File).
#!/usr/bin/expect -f
spawn scp /home/wpdatabase_backup.sql root@vps_ip:/tmp
expect "password:"
send "it_is_a_test\r"
expect eof; # Will wait till the 'scp' completes.
Note:
The default timeout in expect is 10 seconds. So, if the scp completes within 10 seconds, there is no problem. If the operation takes longer than that, expect will time out and quit, which makes the scp transfer fail. So you can increase the timeout if you want:
set timeout 60; # Timeout is 1 min

"autossh" output change under "expect"

I am using autossh with a monitoring flag. autossh prints to standard output every time it sends test packets.
When using autossh under expect, the test-packet messages are not printed.
I don't know if they are sent at all, which is important for keeping the ssh connection alive.
Can you tell if expect affects the autossh behaviour?
How can I figure out if autossh works correctly?
the expect code:
#!/usr/bin/expect
set timeout 50
spawn autossh -M 11111 -N -R 4848:localhost:80 user@192.168.1.100
set keepRunning 1
while {$keepRunning} {
expect \
{
"(yes/no)" { send "yes\r" }
"Password:*" { send "1234\r" ; set keepRunning 0 }
"ssh exited prematurely with" { exit 7 }
"remote port forwarding failed*" { exit 8 }
}
}
expect \
{
"remote port forwarding failed*" { exit 9 }
"Password:*" { exit 5 }
}
wait
The periodic output that I see without expect is this:
autossh[2882]: checking for grace period, tries = 0
autossh[2882]: starting ssh (count 1)
autossh[2883]: execing /pfrm2.0/bin/ssh
autossh[2882]: ssh child pid is 2883
autossh[2882]: check on child 2883
autossh[2882]: set alarm for 50 secs
Password: autossh[2882]: connection ok
autossh[2882]: check on child 2883
autossh[2882]: set alarm for 60 secs
autossh[2882]: connection ok
autossh[2882]: check on child 2883
autossh[2882]: set alarm for 60 secs
The last five lines are test packets sent by autossh.
Those lines are printed only when running autossh from bash directly.
When running via expect those lines are not printed, and I don't know if autossh sends them.
Thanks.
Frankly, I have not worked with autossh before. But, as per your code, expect will check for the patterns you gave, such as logging in by providing the password, and if it happens to see any sort of error message it will exit.
Then at the end you have added wait, which delays until a spawned process (or the current process if none is named) terminates. I am not sure whether those prints simply appear in the terminal. In that case, after logging in, you can make expect wait for them just as you did for logging in. If those prints appear only after you issue some command, make sure you pass it with send and then wait for those patterns.
Unless and until you instruct expect to expect a pattern, it won't bother waiting for anything, which is why it isn't seen in the output either.
I've found out that wait somehow prevents autossh from monitoring the connection.
The wait at the end of the expect script was replaced with interact, and it is working fine now.
* Thanks Dinesh
(This is a more detailed description and answer.)
What I wanted:
An ssh tunnel using autossh, expect and a wrapper bash script.
The bash script should run in the background.
When the ssh quits, the script should start the tunnel again.
The problems with expect:
After establishing the ssh tunnel I tried the following:
interact - when the bash script was running in the background, interact hung, so expect wasn't waiting for autossh and I couldn't monitor it (without a sleep loop)
wait - for some reason it prevented the autossh monitoring mechanism from working properly.
-nothing- - expect just quit and I couldn't monitor autossh
The solution was using this loop and "expecting" a "connection ok" string:
set moreThanAutoSshTimeout [expr {$env(AUTOSSH_POLL) + 10}]
set timeout $moreThanAutoSshTimeout
set connStatus 1
while { $connStatus } {
set connStatus 0
expect \
{
"connection ok" { set connStatus 1 }
}
}
