Bash programming, interrogating ttyUSB port

I'm new to bash programming in Linux. Basically, what I want to do is write a bash script that opens the port ttyUSB0, interrogates it with AT commands (like "0100"), and assigns the response to a variable. I've been trying this in a few different ways:
1) Using cat
#!/bin/bash
PORT=$(ls /dev/ttyU*)
cat $PORT
????
2) Using Minicom
#!/bin/bash
minicom
????
3) Using Screen
#!/bin/bash
PORT=$(ls /dev/ttyU*)
screen $PORT
????
How can I interrogate the port before cat, minicom, or screen starts? What should I put in place of the ???? in each of the three snippets?
Thank you so much!!!

Don't try writing to a tty device using bash; you'll end up chasing your own tail forever. Use minicom or C-Kermit for that.
If you want to check that the device is active before starting minicom, you can read from it with bash and there is a good explanation of how to achieve this here: Bash read from ttyUSB0 and send to URL
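For illustration, here's a minimal sketch of that idea: probe the first ttyUSB device for output before handing the session over to minicom. The 5-second timeout and the minicom -D invocation are assumptions to adapt:
#!/bin/bash
# Pick the first ttyUSB device, if any.
PORT=$(ls /dev/ttyUSB* 2>/dev/null | head -n 1)
if [ -z "$PORT" ]; then
    echo "No ttyUSB device found" >&2
    exit 1
fi
# Try to read one line with a 5-second timeout (the open itself may
# block if the line settings aren't right; configure with stty first).
if read -r -t 5 line < "$PORT"; then
    echo "Device is alive, first line: $line"
    minicom -D "$PORT"
else
    echo "No data from $PORT" >&2
fi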

You should be able to use my atinout program for this. It is a command line tool to talk with a modem:
$ echo AT | atinout - /dev/ttyUSB0 -
AT
OK
$
So with a little bit of scripting you should be able to extract the response you want (remember to always check for a successful OK response).
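For example, a small wrapper along these lines could capture the reply in a variable and verify the final OK (the AT+CSQ query and the sed filtering are illustrative assumptions, not part of atinout itself):
#!/bin/bash
# Send one AT command and keep the full reply.
response=$(echo "AT+CSQ" | atinout - /dev/ttyUSB0 -)
# Only trust the output if the modem confirmed the command.
if printf '%s\n' "$response" | grep -q '^OK$'; then
    # Drop the echoed command, the final OK and blank lines, keeping the payload.
    payload=$(printf '%s\n' "$response" | sed -e '/^AT/d' -e '/^OK$/d' -e '/^$/d')
    echo "Modem replied: $payload"
else
    echo "Command failed" >&2
fi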

Related

Is there a way to redirect all stdout and stderr to systemd journal from within script?

I like the idea of using systemd's journal to view and manage the logs of my own scripts. I've become aware that you can log to the journal from user scripts on a per-message basis:
echo 'hello' | systemd-cat -t myscript -p emerg
Is there a way to redirect all messages to journald, even those generated by other commands? Like:
exec &> systemd-cat
Update:
Some partial success.
I tried Inian's suggestion from the terminal:
~/scripts/myscript.sh 2>&1 | systemd-cat -t myscript.sh
and it worked: stdout and stderr were directed to systemd's journal.
Curiously,
~/scripts/myscript.sh &> | systemd-cat -t myscript.sh
didn't work in my Bash terminal.
I still need to find a way to do this inside my script for when other programs call my script.
I tried:
exec 2>&1 | systemd-cat -t myscript.sh
but it doesn't work.
Update 2:
From the terminal,
systemd-cat ~/scripts/myscript.sh
works. But I'm still looking for a way to do this from within the script.
A pipe to systemd-cat is a process which needs to run concurrently with your script. Bash offers a facility for this, though it's not portable to POSIX sh.
exec > >(systemd-cat -t myscript -p emerg) 2>&1
The >(command) process substitution starts another process and returns a pseudo-filename (something like /dev/fd/63) which you can redirect into. This is basically a wrapper for the mkfifo hacks you could do if you wanted to port this to POSIX sh.
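In context, a script using this could look like the following sketch (the myscript tag is a placeholder):
#!/bin/bash
# From here on, everything the script and its children write to
# stdout or stderr lands in the journal.
exec > >(systemd-cat -t myscript) 2>&1
echo "this goes to the journal"
ls /nonexistent   # stderr ends up there too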
If your script happens not to be a shell script but some other language that can load extension modules linked against -lsystemd, there is another way: the library function sd_journal_stream_fd quite precisely matches the task at hand. Calling it from bash itself (as opposed to from some child process) seems difficult at best. In Python, for instance, it is available as systemd.journal.stream. What this function does, in essence, is connect a unix domain stream socket and communicate what kind of data is being transmitted (e.g. the priority). The difficult part with a shell here is making it connect a unix domain socket (as opposed to connecting in a child).
The key idea to this answer was given by Freenode/libera.chat user grawity.
Apparently, and for reasons that are beyond me, you can't really redirect all stdout and stderr to journald from within a script because it has to be piped in. To work around that I found a trick people were using with syslog's logger which works similarly.
You can wrap all your code into a function and then pipe the function into systemd-cat.
#!/bin/bash
mycode() {
    echo "hello world"
    echor "echo typo producing error"   # deliberate typo: generates stderr output
}
mycode 2>&1 | systemd-cat -t myscript.sh
exit 0
And then to search the journal logs:
journalctl -t myscript.sh --since yesterday
I'm disappointed there isn't a more direct way of doing this.

Send command to open process in Shell file

In bash how can I issue a command to a running process I just started?
For example;
# Start Bluez gatttool then Connect to bluetooth device
gatttool -b $MAC -I
connect # send 'connect' to the gatttool process?
Currently my shell script doesn't get to the connect line because the gatttool process is running.
If you simply want to send the string "connect\n" to the process, you can use a standard pipe:
echo "connect" | gatttool -b $MAC -I
If you want to engage in a more complex "conversation" with the gatttool process, take a look at the expect(1) and chat(8) tools, which allow you to send a sequence of strings and wait for certain responses.
If you'd prefer a slightly "lighter" way of piping, you could use a heredoc, as in:
gatttool -b $MAC -I <<EOF
connect
(...)
EOF
Everything contained between the two EOF tags will be piped to the command's input. I believe this will not allow you to interact with the command while between the EOF tags, so, as mentioned in the previous answer, you might want to consider using expect if you need to act on the command's output before sending something back to it.
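If you'd rather stay in plain bash, a coprocess is a possible middle ground: it gives you file descriptors for both the tool's stdin and its stdout. The sketch below is illustrative only; in particular, the "Connection successful" string it watches for is an assumption about gatttool's output:
#!/bin/bash
# Two-way conversation with gatttool via a bash coprocess.
coproc GATT { gatttool -b "$MAC" -I; }
# Send a command to gatttool's stdin.
echo "connect" >&"${GATT[1]}"
# Read replies from its stdout, giving up after 5 seconds of silence.
while IFS= read -r -t 5 line <&"${GATT[0]}"; do
    echo "gatttool: $line"
    [[ $line == *"Connection successful"* ]] && break
done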

How to run the "cat" command on the background inside the script

I have a USB LTE modem connected to my Raspberry Pi, and I need to read the replies sent over the serial line in response to requests sent with the echo command.
Example:
cat /dev/ttyUSB0 &>> /ttyUSB0_logs &
echo "AT+csq" > /dev/ttyUSB0
echo "AT+cgreg=2" > /dev/ttyUSB0
echo "AT+cgreg?" > /dev/ttyUSB0
The problem is that, although the cat command should run in the background with all output directed to the file, the script still freezes at this point. If I run the first command outside of the script, it works as I expect: it stores all output in ttyUSB0_logs in the background, and I can use the received data for other operations. The question is: how can I integrate the first command into the script so it works this way? Thanks a lot.
You want:
cat /dev/ttyUSB0 >> /ttyUSB0_logs &
If that doesn't work, you should double-check what is actually freezing; you can put set -x at the top of the script to get tracing output. (Note that &>> is a bash extension; if the script is run with plain sh, the line is parsed differently, which can explain behavior that differs from your interactive shell.)
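Putting it together, the whole script might look like this sketch (the baud rate, line settings and two-second wait are assumptions to adjust for your modem):
#!/bin/bash
# Configure the serial line first; 115200 baud is an assumption.
stty -F /dev/ttyUSB0 115200 raw -echo
# Log everything the modem sends, in the background.
cat /dev/ttyUSB0 >> /ttyUSB0_logs &
CAT_PID=$!
# AT commands are conventionally terminated with a carriage return.
printf 'AT+csq\r' > /dev/ttyUSB0
printf 'AT+cgreg=2\r' > /dev/ttyUSB0
printf 'AT+cgreg?\r' > /dev/ttyUSB0
sleep 2           # give the modem time to reply
kill "$CAT_PID"   # stop the background reader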

piping Linux cat command to web in openWRT

I want to run a shell script on openWRT. Basically, it needs to constantly read the Arduino serial port, and when it reads something, that needs to be sent to a web-based service.
Currently this is my script which only save to text file:
cat /dev/ttyACM0 >> /www/home/log.txt &
I want to avoid saving to a file and instead send the output string straight to a web-based service that stores the readings in a MySQL DB.
The data-saving web service is all set up and working, something like this:
http://my-service.com/?data=what-ever-the-arduino-spits
Is there a way to do it with wget?
Maybe something like this:
cat /dev/ttyACM0 | xargs -I % wget "http://ivardi.info?todb=%"
Keep in mind that the openWRT box has 32MB of RAM and 4MB of flash storage, so this is only possible with a shell script and not Python/PHP.
Regards
Note that it could be dangerous in some cases to read the serial device (/dev/ttyACM0) directly and pass its output straight to wget, in case the read blocks for some reason (what happens if the serial port is disconnected and reconnected?).
It could be safer to route the output to a file, then in a loop read the most recent data and 'push' it using wget. Perhaps something like:
#!/bin/bash
while true; do
tail -1 /www/home/log.txt | wget <...options...>
sleep 60
done
In reality you would probably need to do something a little more advanced so that you don't keep sending duplicate data.
Of course, in your own situation what you proposed may be sufficient...
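If you do want to skip the file entirely, a line-by-line read loop is a lighter-weight alternative to xargs and works in BusyBox sh. This is only a sketch: the reading is passed unencoded, so anything beyond plain alphanumerics would need URL-encoding first.
#!/bin/sh
# Forward each line from the Arduino straight to the web service.
while IFS= read -r line; do
    wget -q -O /dev/null "http://my-service.com/?data=$line"
done < /dev/ttyACM0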

How can I capture the stdout from a process that is ALREADY running

I have a running cron job that will be going for a while and I'd like to view its stdout. I don't know how important the fact that the process was started by cron is, but I figured I'd mention it. This is on OSX, so I don't have access to things like /proc/[pid]/..., or truss, or strace. Suggestions of executing with IO redirection (e.g. script > output & tail -f output) are NOT acceptable, because this process is 1) already running, and 2) can't be stopped/restarted with redirection. If there are general solutions that will work across various Unices, that'd be ideal, but specifically I'm trying to accomplish this on a Mac right now.
True solution for OSX
Write the following function to your ~/.bashrc or ~/.zshrc.
capture() {
  sudo dtrace -p "$1" -qn '
    syscall::write*:entry
    /pid == $target && arg0 == 1/ {
      printf("%s", copyinstr(arg1, arg2));
    }
  '
}
Usage:
example@localhost:~$ perl -e 'STDOUT->autoflush; while (1) { print "Hello\n"; sleep 1; }' >/dev/null &
[1] 97755
example@localhost:~$ capture 97755
Hello
Hello
Hello
Hello
...
https://github.com/mivok/squirrelpouch/wiki/dtrace
NOTE:
You must disable the dtrace restriction on El Capitan or later:
csrutil enable --without dtrace
DISCLAIMER: No clue if Mac has this. This technique exists on Linux. YMMV.
You can grab stdout/err from /proc (assuming proper privileges):
PID=$(pidof my_process)
tail -f /proc/$PID/fd/1
Or grab everything remaining in the buffer to a file:
cat /proc/$PID/fd/1
PS: fd/1 is stdout, fd/2 is stderr.
EDIT (per Alex Brown): Mac does not have this, but it's a useful tip for Linux.
Use dtruss -p <PID>, or even rwsnoop -p <PID>.
neercs has the ability to "grab" programs that were started outside it. Perhaps it will work for you.
BTW, you don't have truss or strace, but you do have dtrace.
I think the fact you started with cron could save you. Under Linux, any standard output of a cron job is mailed to the unix mail account of the user who owns the job. Not sure about OSX, though. Unfortunately, you will have to wait for the job to finish before the mail is sent and you can view the output.
