How can I send a string to serial /dev/tty.* port, delay a second, disconnect from the port and continue my bash script in OSX? - macos

This is in relation to resetting an Arduino and then starting to push data to it from my USB XBee.
I've tried using screen, with no luck.
screen -S Xbee -d -m /dev/tty.usbserial-A900fra9 115200 *reset
I don't know how to close this session, and I'm not sure whether the args are correct, either.

To send anything to devices under /dev, you can use the shell redirection operators: >, >>, 2>, 2>&1, etc.
Try this example from tty1 (Ctrl+Alt+F1):
echo "my string" > /dev/tty2
Now go to tty2 (Alt+F2) and you will see your string. It should work with any device.
And to sleep, use:
sleep 1
Your problem could also be with permissions. Try it as root! ;)
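Putting those pieces together for the original question, a minimal sketch could look like this (the device path and the *reset string are taken from the question above; setting the baud rate with stty is my assumption, and on OS X stty takes the device with -f):
#!/bin/bash
PORT=/dev/tty.usbserial-A900fra9    # your serial device

stty -f "$PORT" 115200              # set the baud rate (115200, as in the question)
echo "*reset" > "$PORT"             # send the string; the shell opens and closes the port around this command
sleep 1                             # wait a second

# nothing is left holding the port open, so the script simply continues here
echo "continuing with the rest of the script..."
Since the redirection only holds the port open for the duration of the echo, there is no separate "disconnect" step needed.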

Related

Toggling the Wi-Fi with XF86WLAN and script

I have the following script (in my PATH):
#!/usr/bin/env bash
main()
{
    local state=$(sudo rfkill list wifi -n -o SOFT)
    if [[ $state == 'blocked' ]]
    then
        sudo rfkill unblock wifi
        state='Unblocked'
    else
        sudo rfkill block wifi
        state='Blocked'
    fi
    notify-send 'Wi-Fi' "$state"
    exit 0
}
main $#
Running the script from the command line works as expected. Then I add the following shortcut to my .xbindkeysrc:
"kill-wifi"
XF86WLAN
But the notifications and the Wi-Fi interface get stuck in one of the two states, blocked or unblocked; it doesn't toggle. Sometimes, if I press the XF86WLAN key several times, I get a toggle.
The weird thing is that when I use another key to trigger the script, such as F8, the whole thing works fine, but I want to leave F8 for purposes other than toggling the Wi-Fi.
So one of my guesses was that there's "something" bound to the XF86WLAN KeySym that interferes when my script runs. But commenting out the command that actually kills the Wi-Fi interface produces the right notifications (although then I'm not actually doing anything useful).
Any pointers would be appreciated.
Anyways, for anyone in the same situation: install urfkill, which automatically listens for the XF86WLAN event.
Then in your script, simply produce the notification:
#!/usr/bin/env bash
main()
{
    local state=$(sudo rfkill list wifi -n -o SOFT)
    notify-send "Wi-Fi" "$state"
}
main
Note that the script uses another utility, rfkill, only to get the state of the Wi-Fi interface and show it in the notification.
Finally in your .xbindkeysrc:
"kill-wifi"
XF86WLAN
You could be using another hotkey daemon such as sxhkd, or even the configuration file of i3 or sway; in any case, your shell script is only used to report the state of the Wi-Fi interface, while the real work of toggling the Wi-Fi on and off is done by urfkill.
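For example, assuming the script is on your PATH as kill-wifi (as in the binding above), the equivalent entries would look roughly like this (untested sketches):
# sxhkd (~/.config/sxhkd/sxhkdrc)
XF86WLAN
    kill-wifi

# i3 (~/.config/i3/config)
bindsym XF86WLAN exec kill-wifi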
The good news is that now that button with the little antenna symbol on your laptop actually toggles your Wi-Fi card on and off, and you are still free to use the underlying function key (F8 in my case) ;-)
P.S. If anyone reads this and knows why my first approach was failing (race condition or whatever), please feel free to let me know how to solve it.

Opening serial connection to Arduino through Bash

I have set up my Arduino so when I send a "0" via the serial monitor, a stepper motor turns a given amount.
I want to include this in a bash script, but I can only get it to work when the Arduino serial monitor is open and I enter echo 0 > /dev/tty.usbserial641 in bash. I assume this is because the serial monitor is opening the connection for me.
In my struggle to open the connection in bash (without serial monitor open) I have tried all manner of options with stty -f /dev/tty.usbserial641 and have also tried connecting reset to ground with a 10uF capacitor.
Can anyone help me open the connection in bash without the use of the Arduino serial monitor?
System:
Arduino Uno rev3
OS X 10.8.4
Many thanks,
hcaw
Do the commands below work for you?
# stty -F /dev/ttyUSB0 9600 cs8 -cstopb
# sleep 0.1
# echo "0" > /dev/ttyUSB0
There is a difference between the value 0 and the ASCII char '0' (48). Which one are you trying to send, and which one are you trying to receive?
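As an aside, here is a quick way to see that difference from the shell (a sketch, reusing the /dev/ttyUSB0 path from above):
echo -n "0" > /dev/ttyUSB0      # one byte: the ASCII character '0' (value 48); plain echo would also append a newline
printf '\x00' > /dev/ttyUSB0    # one byte: the raw value 0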
If you want to read the port from the terminal you can do it like this
head -n 1 /dev/ttyUSB0    # the number after -n is how many lines you want to read
As a last note, I am a fan of pySerial. I would much rather write an interface in python than shell scripts.
I found a great binary written in C that solves my problem called Arduino-serial. Here's the link.
I hope this helps people with similar problems!

Bash echoing in a screen

I have this bash file:
#!/bin/bash
stty -F /dev/ttyACM0 cs8 9600 ignbrk -brkint -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke noflsh -ixon -crtscts
screen /dev/ttyACM0 9600
echo "1"
This is basically an Arduino connected to my Ubuntu PC, and I can run the script fine up until the echo "1" line.
I can ...
establish the connection
see the screen of the serial connection
type in "1" and see my light bulb light up, and when I type "0" the light bulb turns off.
The problem I am facing now is that I would like to control the on/off in code (without manually typing it out), and that seems almost impossible. The logic is correct, but when I start screen the script just stops there and runs screen, waiting for my input. Only when I unplug the Arduino does the echo finally run. Is there a way to solve this?
I had a problem like this before, this was my workaround:
I had more luck with cu than with stty.
Start a screen session:
screen -dmS arduino cu -l /dev/ttyACM0 -s 9600
Now there is a screen session created called arduino
You can send commands to it from a script:
screen -S arduino -X stuff 1
This will send the 1 to the serial connection just like your example
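Putting the pieces together, a rough sketch of a toggle script could look like this (untested; the sleep durations are arbitrary, and if your Arduino sketch reads whole lines you may need stuff $'1\r' instead of stuff '1'):
#!/bin/bash
# open the serial connection inside a detached screen session named "arduino"
screen -dmS arduino cu -l /dev/ttyACM0 -s 9600
sleep 2                           # give the Arduino time to reset after the port opens

screen -S arduino -X stuff '1'    # light bulb on
sleep 5
screen -S arduino -X stuff '0'    # light bulb off

screen -S arduino -X quit         # tear the session down when finished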
If you want to control this as a different user, make sure permissions allow it, and create the screen session as the same user that will be sending commands to it.
If you have more questions just ask me.

Spawn subshell for SSH and continue with program flow

I'm trying to write a shell script that automates certain startup tasks based on my location (home/campusA/campusB). I go to University and take classes in two different campuses (hence campusA/campusB). My location is determined by which wireless network I'm connected to. For the purposes of this script, we can assume that I will be connected to one of these networks when the script is called and my script knows which one I'm connected to based on a call to iwconfig.
This is what I want it to do:
cat file1 > file2 # always do this, regardless of where I am
if Im at home:
start tweetdeck, thunderbird, skype
else if Im at campusA:
activate the login script # I need to login on a webform before I get internet access.
# I have written a script to automate this.
# Wait for this script to finish before doing anything else
myProg2 & # I want myProg2 running in the background until I shutdown my computer.
else if Im at campusB:
ssh username@domain # this is the problematic line
myProg2 & # I want myProg2 running in the background until I shutdown my computer.
start tweetdeck, thunderbird
close the terminal with the "exit" command
The problem is that campusB's wireless network is behind a firewall, which grants me internet access ONLY after I successfully ssh to username@domain. After a successful ssh, I need to keep the terminal window active in order to keep the internet access. If I close the terminal window, I lose internet access (this is bad).
When I try doing just ssh username@domain, the script stops because I don't exit the ssh command. I can't ^C out of it, which means that the rest of the script is never executed. I have the same problem if I just close the terminal window in an attempt to kill the ssh session.
Some googling brought me to subshells, which I'm either using wrong or can't use to solve my problem. So how should I go about solving this? I'd appreciate any help - I've been at this for a while now and am unable to find anything helpful. If it makes a difference, I'd rather not store my ssh password in the script.
Further, backgrounding the ssh call with an ampersand (ssh username@domain &) doesn't seem to do any good (can anyone explain why?).
Thank you in advance
EDIT
I must clarify that the ssh connection has to be active in order for me to have internet access. Thus, when I close the terminal window, I need the ssh connection to still be active.
I had a script that looped over 6 servers, calling each via ssh in the background. In one part of the script there was a misbehaving vendor application; the application didn't 'let go' of the connection properly. (Other parts of the script using ssh in the background worked fine.)
I found that using ssh -t -t cured the problem. Maybe this can help you too.
(a teammate found this on the web, and we had spent so much time, I never went back to read the article that suggested this. The man page on our system gave no hint that such a thing was possible)
Hope this helps.
You may want to try to double background myProg2 to detach it from the tty:
# cf. "Wizard Boot Camp, Part Six: Daemons & Subshells",
# http://www.linux-mag.com/id/5981
(myProg2 &) &
Another option may be to use the daemon tool from the libslack package:
http://ingvar.blog.linpro.no/2009/05/18/todays-sysadmin-tip-using-libslack-daemon-to-daemonize-a-script/
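With that tool, starting myProg2 would be roughly (a sketch; check the daemon(1) man page for the options you actually want):
# run myProg2 detached from the terminal, as a daemon
daemon myProg2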
Having an ssh with a pseudo tty in a background shell
In addition to @shellter's answer, I would like to add some clarification:
where @shellter said:
The man page on our system gave no hint that such a thing was possible
On my system (Debian 7 GNU/Linux), if I hit:
man -Pcol\ -b ssh| grep -A3 '^ *-t '
I could read:
-t Force pseudo-tty allocation. This can be used to execute arbi‐
trary screen-based programs on a remote machine, which can be
very useful, e.g. when implementing menu services. Multiple -t
options force tty allocation, even if ssh has no local tty.
Yes: Multiple -t options force tty allocation, even if ssh has no local tty.
This means: if you remotely run a tool that requires access to a pseudo terminal (a pty, like /dev/pts/0), you can run it by using the -t switch.
But this works only if ssh is itself run from a shell console (i.e. it has its own pty). If you plan to run it in a shell session without a console, like a background script, you may use multiple -t options to force pseudo-tty allocation from ssh.
Multiple ssh shells on one ssh connection
In addition to the answers from @tommy and @geekosaur, I would like to add some clarification:
@tommy points to a very interesting feature of ssh. I'm not sure it has a lot to do with the answer, but when talking about long-lived connections, this feature has to be clearly understood.
Once a connection is established, ssh can (and knows how to) use it to drive a lot of things over this one connection:
-L lets you bring remote TCP services to your local machine/network. (The full syntax is: -L localip:localport:distip:distport), where localip can be specified to permit other hosts on the same local network to access the same TCP bind, and distip can be any host on the distant network (not only localhost). Sample: -L 192.168.1.31:8443:google.com:443 permits any host on the local network to reach Google through your host: http://192.168.1.31:8443
-R does the same, in the reverse direction!
-M tells ssh to open a local Unix socket for binding further ssh sessions. Simply open two terminal windows. First, in both windows, hit ssh somewhere, then hit netstat -tan | grep :22 or netstat -tan | grep 192.168.1.31:22 (assuming 192.168.1.31 is your own host's IP).
Then, for comparison, close all your ssh sessions; in the first terminal hit ssh -M somewhere, and in the second simply ssh somewhere. You may see in the second terminal:
$ ssh somewhere
+ ssh somewhere
Last login: Mon Feb 3 08:58:01 2014 from elsewhere
If you now hit netstat -tan | grep 192.168.1.31:22 (in either of the two opened ssh sessions) you should see that there is only one TCP connection.
This kind of feature can be used in combination with -L, and maybe some sleep 86399...
To work around a TCP-killer router that closes every TCP connection that has been inactive for more than 120 seconds, I run:
ssh -M somewhere 'while :;do uptime;sleep 60;done'
This ensures the connection stays up even if I don't hit a key for more than two minutes.
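Combining this with -L, something like the following keeps one connection alive and carries a port forward over it at the same time (a sketch: the host, bind address and ports are just examples, and connection sharing assumes a ControlPath is configured in your ~/.ssh/config):
# one long-lived connection: keepalive loop on the remote side,
# plus a local forward so hosts on the LAN can reach google.com:443 via port 8443
ssh -M -L 192.168.1.31:8443:google.com:443 somewhere 'while :; do uptime; sleep 60; done'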
Here are a few thoughts that might help.
Sub-shells
Sub-shells fork new processes, but the calling shell still waits for them to finish. If you want a sub-shell to do the work for you in parallel, then you'll need to append a & to the line.
(ssh username@domain) &
But this doesn't look like a compelling reason to use a sub-shell. If you had a number of commands you wanted to execute in sequence with each other, yet in parallel with the calling shell, then maybe it would be worth it. For example...
(dothis.sh; thenthis.sh; andthislastthingtoo.sh) &
Forking
I'm not sure why & isn't working for you, but it may be worth looking into nohup as well. This makes the command "immune" to hangup signals.
nohup ssh username@domain (try with and without the & at the end)
Passwords
Not storing passwords in the script is essential for any ssh automation. You can accomplish that using public-key cryptography, which is an inherent feature of ssh. I won't go into the details here because there are a number of great resources all across the interwebs on setting this up. I strongly suggest investigating this further.
HOWTO: set up ssh keys - Paul Keck, 2001
SSH Keys - archlinux.org
SSH with authentication key instead of password - Debian Administration
Secure Shell - Wikipedia, the free encyclopedia
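The short version is just two commands (a sketch; the key type and host are up to you):
ssh-keygen -t ed25519          # generate a key pair (or -t rsa on older systems)
ssh-copy-id username@domain    # install the public key on the remote account
# after this, "ssh username@domain" should log in without asking for a password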
If you do go this route, I also suggest running ssh in "batch mode" which will disable password querying and will automatically disconnect from the server if it becomes unresponsive after 5 minutes.
ssh -o 'BatchMode=yes' username@domain
Persistence
Then if you want to persist the connection, run some silly loop in bash! :)
ssh -o 'BatchMode=yes' username@domain "while (( 1 == 1 )); do sleep 60; done"
The problem with & is that ssh loses access to its standard input (the terminal), so when it goes to read something to send to the other side it either gets an error and exits, or is stopped by the system with SIGTTIN, which implicitly suspends it. The -n and -f options are used to deal with this: -n tells it not to use standard input, and -f tells it to set up any necessary tunnels etc., then close the terminal stream.
So the best way to do this is probably to do
ssh -L 9999:localhost:9999 -f host & # for some random unused port
and then manually kill the ssh before logout. Alternately,
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' </dev/null &
(The redirection is to make sure the SIGTTIN doesn't happen anyway.)
While you're at it, you may want to save the process ID and shut it down from your .logout/.bash_logout:
ssh -L 9999:localhost:9999 -n host 'while :; do sleep 86400; done' < /dev/null & echo $! > ~/.ssh_pid; chmod 0600 ~/.ssh_pid
and in .bash_logout:
if test -f ~/.ssh_pid; then
    set -- $(sed -n 's/^\([0-9][0-9]*\)$/\1/p' ~/.ssh_pid)
    if [ $# = 1 ]; then
        kill $1 >/dev/null 2>&1
    fi
    rm ~/.ssh_pid
fi
The extra code there attempts to avoid someone sabotaging your ~/.ssh_pid, because I'm a professional paranoid.
(Code untested and may have typos.)
It's been a while since I've used ssh, and I can't test it right now, but have you tried the -f switch?
ssh -f username@domain
The man page says it backgrounds ssh. Not sure why & wouldn't work, but I guess it's interpreting it as a command to be run on the remote machine.
Maybe screen + ssh would fit the bill as well?
Something like:
screen -d -m -S sessionName cmd
screen -d -m -S sessionName cmd &
# reconnect with
screen -r sessionName

grep 5 seconds of input from the serial port inside a shell-script

I've got a device that I'm operating next to my PC, and as it runs it's spitting log lines out of its serial port. I have this wired to my PC and I can see the log lines fine if I'm using either minicom or something like:
ttylog -b 115200 -d /dev/ttyS0
I want to write 5 seconds of the device serial output to a temp file (or assign it to a variable) and then later grep that file for keywords that will let me know how the device is operating. I've already tried redirecting the output to a file while running the command in the background, and then sleeping 5 seconds and killing the process, but the log lines never get written to my temp file. Example:
touch tempFile
ttylog -b 115200 -d /dev/ttyS0 >> tempFile &
serialPID=$!
sleep 5
#kill ${serialPID} #does not work, gets wrong PID
killall ttylog
cat tempFile
The file gets created but never filled with any data. I can also replace the ttylog line with:
ttylog -b 115200 -d /dev/ttyS0 |tee -a tempFile &
In neither case do I ever see any log lines logged to stdout or the log file unless I have multiple versions of ttylog running by mistake (see commented out line, D'oh).
I have no idea what's going on here. It seems to be a failure of redirection within my script.
Am I on the right track? Is there a better way to sample 5 seconds of the serial port?
It sounds like maybe ttylog is buffering its output. Have you tried running it with -f or --flush?
You might try the unbuffer script that comes with expect.
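If buffering is the problem, unbuffer drops straight into your existing attempt (a sketch):
unbuffer ttylog -b 115200 -d /dev/ttyS0 >> tempFile &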
ttylog has a --timeout option, where you can simply specify for how many seconds it should run.
So, in your case, you could do
ttylog --baud 115200 --device /dev/ttyS0 --timeout 5
and it would just run for 5 seconds and then stop.
Indeed, it also has the -f option mentioned above, which flushes, but if you use --timeout you won't need it.
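So the capture-then-grep part of the script collapses to something like this (a sketch; ERROR stands in for whatever keyword you are actually looking for):
# capture exactly 5 seconds of serial output, then search it
ttylog --flush --baud 115200 --device /dev/ttyS0 --timeout 5 > tempFile
grep -i "ERROR" tempFile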
