Can't read from serial port with socat and rawer option - socat

I'm trying to use socat to read data from a serial port; however, it doesn't appear to read from the port correctly when passed the rawer option.
If I read from the port with either socat /dev/ttyTHS1,b9600 - or socat /dev/ttyTHS1,b9600,raw - I see the expected data, but when I run socat /dev/ttyTHS1,b9600,rawer - I get no output.
I'm running this on Ubuntu 18.04 on an aarch64 processor with kernel version 4.9.140. I've tried with the "stock" socat from apt (1.7.3.2-2ubuntu2) and also socat version 1.7.4.1 that I built from source.
One thing I noticed is that when I run socat in a working configuration (e.g. with the raw option) and then examine the serial port with stty -F /dev/ttyTHS1, the settings look like this:
speed 9600 baud; line = 0;
min = 1; time = 0;
-brkint -icrnl -imaxbel
-opost
-isig -icanon
whereas when socat is run with the rawer option they look like this:
speed 0 baud; line = 0;
min = 1; time = 0;
-cread
-brkint -icrnl -imaxbel
-opost -onlcr
-isig -icanon -iexten -echo -echoe -echok -echoctl -echoke
and when the same command is run with the rawer option on a different platform (socat 1.7.3.2-2ubuntu2, on Ubuntu 18.04 x86_64, 5.4.0-84-generic, opening a USB-serial device) the speed appears to populate correctly.
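As a possible workaround I'm considering forcing the speeds explicitly via socat's ispeed/ospeed termios options (an untested sketch, and I'd still like to understand the underlying behaviour):
# sketch only: set the input/output speed explicitly in addition to b9600,
# in case rawer is clearing the speed on this platform
socat /dev/ttyTHS1,b9600,rawer,ispeed=9600,ospeed=9600 -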
Is there a reason why rawer would not work as I'm using it?

Related

Optimistic way to test port before executing sftp

I have a bash script which does a very plain sftp transfer of data to the production and UAT servers. See my code below.
if [ `ls -1 ${inputPath} | wc -l` -gt 0 ]; then
    sh -x wipprod.sh >> ${sftpProdLog}
    sh -x wipdev.sh >> ${sftpDevLog}
    sh -x wipdevone.sh >> ${sftpDevoneLog}
fi
Sometimes the UAT server goes down. In those cases the number of hung scripts keeps growing, and if it reaches the user's maximum number of processes the other scripts are affected as well. So I am thinking that before executing each of the above scripts I should test whether port 22 is reachable on the destination server, and only then execute the script.
Is this the right way? If yes, what is the best way to do that? If not, what is a better approach to avoid unnecessary sftp connections when the destination is unavailable? Thanks in advance.
Use sftp in batch mode with the ConnectTimeout option explicitly set. sftp will then take care of the up/down detection by itself.
Note that ConnectTimeout should be set slightly higher if your network is slow.
Then put the sftp commands into your wip*.sh backup scripts.
If the UAT host is up:
[localuser@localhost tmp]$ sftp -b - -o ConnectTimeout=1 remoteuser@this_host_is_up <<<"put testfile.xml /tmp/"; echo $?
sftp> put testfile.xml /tmp/
Uploading testfile.xml to /tmp/testfile.xml
0
The file is uploaded and sftp exits with exit code 0.
If the UAT host is down, sftp exits within 1 second with exit code 255.
[localuser@localhost tmp]$ sftp -b - -o ConnectTimeout=1 remoteuser@this_host_is_down <<<"put testfile.xml /tmp/"; echo $?
ssh: connect to host this_host_is_down port 22: Connection timed out
Couldn't read packet: Connection reset by peer
255
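A rough sketch of how this could be wired into one of the wip*.sh scripts (the host, user, and file paths here are placeholders, not taken from the question):
#!/bin/sh
# hypothetical wipprod.sh: upload in batch mode with a short connect timeout
sftp -b - -o ConnectTimeout=5 produser@prodhost <<'EOF'
put /data/outgoing/report.xml /incoming/
EOF
# exit code 255 means the connection itself failed, so bail out instead of hanging
if [ $? -eq 255 ]; then
    echo "prod host unreachable, skipping upload" >&2
    exit 1
fi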
That sounds reasonable - if the server is inaccessible you want to report an error immediately rather than block.
The question is: why does the SFTP command block if the server is unavailable? If the server is down, I'd expect the connection attempt to fail almost immediately, in which case you need only detect that the SFTP copy has failed and abort early.
If you want to detect a closed port in bash, you can simply ask bash to connect to it directly - for example:
(echo "" > /dev/tcp/remote-host/22) 2>/dev/null || echo "failed"
This will open the port and immediately close it, and report a failure if the port is closed.
On the other hand, if the server is inaccessible because the port is blocked (by a firewall or something similar that drops all packets), then it makes sense for your process to hang, and the basic TCP test above will hang as well.
Again, this is something that should probably be handled by your SFTP remote copy using a timeout parameter, as suggested in the comments, but a bash script to detect a blocked port is also doable and would look something like this:
(
    # try the TCP connection in the background so we can time it out ourselves
    (echo "" > /dev/tcp/remote-host/22) &
    pid=$!
    timeout=3
    # as long as the background attempt is still running, count down
    while kill -0 $pid 2>/dev/null; do
        sleep 1
        timeout=$(( timeout - 1 ))
        # out of time: kill the hung attempt and fail the outer subshell
        [ "$timeout" -le 0 ] && kill $pid && exit 1
    done
) || echo "failed"
(I'm going to ignore the ls ...|wc business, other than to say something like find and xargs --no-run-if-empty are generally more robust if you have GNU find, or possibly AIX has an equivalent.)
You can perform a runtime connectivity check. OpenSSH comes with ssh-keyscan to quickly probe an SSH server port and dump the public key(s), but sadly it doesn't provide a usable exit code, which leaves parsing its output as a messy option.
Instead you can do a basic check with a bash one-liner:
read -t 2 banner < /dev/tcp/127.0.0.1/22
where /dev/tcp/127.0.0.1/22 (or /dev/tcp/hostname/ssh) indicates the host and port to connect to.
This relies on the fact that the SSH server returns an identifying banner terminated with CRLF. Feel free to inspect $banner. If it fails after the indicated timeout, read receives SIGALRM (exit code 142), and a refused connection results in exit code 1.
(Support for /dev/tcp and network redirection is enabled by default since before bash-2.05, though it can be disabled explicitly with --disable-net-redirections or with --enable-minimal-config at build time.)
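A small wrapper along these lines could then gate each transfer on the banner check (the function name, the host name uat-host, and the 2-second timeout are only an illustration):
ssh_port_open() {
    # succeed only if something answers on port 22 of "$1" within 2 seconds
    local banner
    read -t 2 banner < "/dev/tcp/$1/22" 2>/dev/null
}

ssh_port_open uat-host && sh -x wipdev.sh >> "${sftpDevLog}"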
To prevent such problems, an alternative is to set a timeout: with any of the ssh, scp or sftp commands you can set a connection timeout with the option -o ConnectTimeout=15, or, implicitly via ~/.ssh/config:
Host 1.2.3.4 myserver1
    ConnectTimeout 15
The commands will return non-zero on timeout (though the three commands may not all return the same exit code on timeout). See this related question: how to make SSH command execution to timeout
Finally, if you have GNU parallel, you can use its sem command to limit concurrency and prevent this kind of pile-up; see https://unix.stackexchange.com/questions/168978/limit-maximum-number-of-concurrent-scp-processes-running-on-a-host .
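For instance, a rough illustration, assuming GNU parallel's sem is installed (the job limit and semaphore name are arbitrary):
# allow at most 3 upload scripts to run at once; further invocations queue up
sem --id sftp-uploads -j 3 sh -x wipprod.sh >> "${sftpProdLog}"
# block until every queued job in this semaphore group has finished
sem --id sftp-uploads --wait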

BASH: keep connection alive

I have the following scenario:
I use netcat to connect to a host running a telnet server on port 23, log in with the provided username and password, and issue some commands, after which I need to do some fairly complex analysis of the output. Naturally, expect comes to mind, with a script like this:
spawn nc host 23
send "user\r"
send "pass\r"
send "command\r"
expect EOF
Then it is executed with expect example.scr >output.log, so the output file can be parsed. The parser is 150+ lines of bash code that runs in under 2 seconds and decides which command should be executed next. It then replaces "command" with "command2" and runs the expect script again, like this:
sed -i 's/send "command\r"/send "command2\r"/' example.scr
expect example.scr >output.log
Obviously, there should be no need to re-establish the telnet connection and go through the login process all over again just to issue a single telnet command after 2 seconds of processing. The conclusion is that the telnet session should be kept alive as a background process, so one can talk to it freely at any time. Naturally, using named pipes comes to mind:
mkfifo in
mkfifo output.log
cat in | nc host 23 >output.log &
echo -e "user\npass\ncommand\n" >in
cat output.log
After the pipe is written to, EOF causes it to close, which terminates the telnet session. I was wondering what kind of long-lived process could be piped into netcat so it could act as a telnet relay to the host. I came up with a very silly idea, but it works:
nc -k -l 666 | nc host 23 >output.log &
echo -e "user\npass\ncommand\n" | nc localhost 666
cat output.log
The netcat server is started with -k (keep listening), bound to port 666; any data it receives is piped into the netcat telnet client connected to the host, while the entire conversation is dumped to output.log. One can now echo telnet commands to nc localhost 666 and read the result from output.log.
Keep in mind that the expect script can easily be modified to accommodate SSH and even a serial console connection, simply by spawning ssh or socat instead of netcat. I never liked expect because it forces the use of another scripting language inside bash, requires the Tcl libraries, and needs to be compiled for embedded platforms, while netcat is part of busybox and readily available everywhere.
So the question is: could this be done in a simpler way? My bet would be on some sort of link between the console and a TCP socket. Any suggestions are appreciated.
How about using a file descriptor, like this?
exec 3<>/dev/tcp/host/port    # open a bidirectional TCP connection as file descriptor 3
while true; do
    echo -e "user\npass\ncommand" >&3
    read_response_generate_next_command <&3 >&3
    # if no more commands, break;
done
exec 3>&-    # close the connection
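A more concrete sketch of the same idea (the address, port, and the next_command helper are placeholders standing in for the asker's own parser logic):
exec 3<>/dev/tcp/192.168.1.10/23     # one persistent connection for the whole session
printf 'user\r\npass\r\n' >&3        # log in once

cmd="command"
while [ -n "$cmd" ]; do
    printf '%s\r\n' "$cmd" >&3
    # collect whatever the device sends back within 2 seconds
    out=""
    while IFS= read -t 2 -r line <&3; do
        out="$out$line"$'\n'
    done
    # decide the next command (placeholder for the 150-line bash parser)
    cmd=$(next_command "$out")
done

exec 3>&-                            # close the connection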

Linux shell: save input line from Serial Port each minute and send to remote server

I have an Arduino connected to a computer over RS-232 (only TxD, RxD and GND).
The Arduino only sends data to the computer, and the computer receives it. The computer does not transmit anything to the Arduino.
The computer is a WiFi router running OpenWrt Linux, with 16 MB of RAM and 4 MB of flash for the system. I do not have enough free space for a "good tool" like Python (I have the same program working on an x86 PC, written in Python).
The Arduino sends data to the PC roughly every 60 seconds. The data has the following format:
SENSOR1;12.34;95.47
ABC245;34.5;75.1
There are 2 sensors, each with 2 values. Each line is terminated with <CR><LF>. I can modify this "protocol", for example into a single line like this (or any other format):
SENSOR1;12.34;95.47|ABC245;34.5;75.1
So on the WiFi router I need a small program which reads this string every minute and saves it to a variable, then inserts the variable into curl and sends it to a remote server. Can I send the data to the server without curl (with less RAM/flash usage)?
I would like to use pure busybox sh (bash is too big).
I found Bash script: save stream from Serial Port (/dev/ttyUSB0) to file until a specific input (e.g. eof) appears :
#!/bin/bash
while read line; do
    if [ "$line" != "EOF" ]; then
        echo "$line" >> file.txt
    else
        break
    fi
done < /dev/ttyUSB0
awk '
/EOF/ {exit;}
{print;}' < /dev/ttyUSB0 > file.txt
Is it a good choice to use/modify these scripts? Is there any better solution?
Why not give the ser2net package a try?
It lets you forward the serial port to the server.
It works fine on OpenWrt.
Lua is built in.
A Lua script can also read from the serial port, but you need to set the port parameters first with stty:
stty 9600 raw < /dev/ttyUSB0
lua myscript < /dev/ttyUSB0
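If you want to stay with plain busybox sh and skip curl entirely, a rough and untested sketch (the device, host, and port are placeholders) could look like this:
#!/bin/sh
# configure the serial line once
stty -F /dev/ttyUSB0 9600 raw
# keep the port open and handle one report line at a time
while read -r line; do
    # ship the line to the collector; busybox nc exits once stdin hits EOF
    echo "$line" | nc collector.example.com 9000
done < /dev/ttyUSB0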

How to open a .txt file after using netstat command with Batch

I am trying to open a simple log.txt file (in this example comandos.txt) after running a netstat command like so:
@echo off
echo. >> C:\comandos.txt
netstat -b -o 1 >> C:\comandos.txt
start C:\comandos.txt
After netstat runs, the prompt window won't close and comandos.txt won't open.
Any clues on how to solve this?
@echo off
echo. >> C:\comandos.txt
netstat -b -o >> C:\comandos.txt
start C:\comandos.txt
The above snippet works fine. Note that you were specifying an interval in the netstat command, which makes it redisplay the statistics over and over, so it never finishes. Also, because you have echo turned off and the output redirected to the file, an empty prompt window sitting there for a long time gives the wrong impression; show some message like Collecting information... or similar.
Also, given that the command needs to resolve addresses, and the time taken depends on the number of processes with network connections, netstat may take a while to complete; how long is system dependent.

using socat to multiplex incoming tcp connection

An external data provider makes a tcp connection to one of our servers.
I would like to use socat to 'multiplex' the incoming data so that multiple programs can receive data sent from the external data provider.
socat -u TCP4-LISTEN:42000,reuseaddr,fork OPEN:/home/me/my.log,creat,append
happily accepts incoming data and puts it into a file.
What I'd like to do is something that will allow local programs to connect to a TCP port and begin to receive data that arrives from connections to the external port. I tried
socat -u TCP4-LISTEN:42000,reuseaddr,fork TCP4-LISTEN:43000,reuseaddr
but that doesn't work. I haven't been able to find any examples in the socat documentation that seem relevant to back-to-back TCP servers.
Can someone point me in the right direction?
With Bash process-substitution
Multiplexing from the shell can in general be achieved with coreutils tee and Bash process substitution. For example, to have the socat stream multiplexed to multiple pipelines, do something like this:
socat -u tcp-l:42000,fork,reuseaddr system:'bash -c \"tee >(sed s/foo/bar/ > a) >(cat > b) > /dev/null\"'
Now if you send foobar to the server:
socat - tcp:localhost:42000 <<<foobar
Files a and b will then contain:
a: barbar
b: foobar
With named pipes
If the pipelines are complicated and/or you want to avoid using Bash, you can use named pipes to improve readability and portability:
mkfifo x y
Create the reader processes:
sed s/foo/bar/ x > a &
cat y > b &
Start the server:
socat -u tcp-l:42000,fork,reuseaddr system:'tee x y > /dev/null'
Again, send foobar to the server:
echo foobar | socat - tcp:localhost:42000
And the result is the same as above.
I found ncat (http://nmap.org/ncat/) to be flexible and easier to use. I suggest you give it a try. I cannot test it right now to find the exact command, but you can let it listen on two ports; for one of them you use the -k option to accept multiple clients.
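As another rough, untested possibility, ncat's connection brokering mode relays data between all clients connected to a single port, so the external provider and the local consumer programs could all attach to one listener (the port number is just an example):
# every client connected to port 43000 receives what the other clients send,
# so the data provider and the local readers can share this one broker port
ncat -l 43000 --broker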
