How can I run a test on 20+ sites from a single console? - bash

I'd like to run a test script on dozens of embedded Linux units; in manufacturing, the authentication credentials are all the same.
The tests take about an hour, but I'd like each unit to loop continuously (over the weekend, say) and report the current iteration of the test on a per-unit basis.
I'm thinking expect might be the way to go (it would certainly help with the ssh login), but the online documentation is ... uh ... a bit too scattered for what seems like a simple exercise.
I'm stuck trying to determine how to spawn my embedded tests in parallel. In bash I'd use the & operator to put each process in the background, but then entering the authentication credentials becomes a challenge.
Should I use expect or stick with bash scripting?
What I did:
Using an expect script, I placed an SSH authentication file on each DUT. The DUTs have only a RAM file system to play with, so this has to happen before the rest of the bash script runs. Then a simple bash for loop issues an ssh command per unit to run the tests, putting each session in the background. Like so:
for i in <IP devices to test>
do
    ssh user@$i "echo \"IP Address: $i :\" ; test-script" &
done
Voila!
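
The loop above runs each test once; to get the continuous weekend looping with per-unit iteration reporting, the remote command can itself loop. A minimal sketch, assuming test-script is on each unit's PATH (the test_$i.log naming scheme is my own):

for i in <IP devices to test>
do
    # one background ssh per unit; the remote loop numbers its iterations
    ssh user@"$i" 'n=0; while true; do n=$((n+1)); echo "== Iteration $n =="; test-script; done' > "test_$i.log" 2>&1 &
done
wait   # keep the console session alive while the units run

Tailing the per-unit log files then shows the current iteration of each unit at a glance.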

Set up SSH public-key authentication (example: http://www.petefreitag.com/item/532.cfm) with a blank key passphrase; then ssh can run these scripts without prompting for any credentials, so your bash scripts can execute them without user intervention.
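
A minimal sketch of that setup, using OpenSSH's stock ssh-keygen and ssh-copy-id (the key type and paths are just reasonable defaults; on RAM-file-system DUTs the installed key will vanish on reboot):

# generate a key pair with an empty passphrase (-N "")
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519

# install the public key on each unit; prompts for the shared
# manufacturing password once per unit
for i in <IP devices to test>
do
    ssh-copy-id -i ~/.ssh/id_ed25519.pub user@"$i"
done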

Related

Bash script to send commands to remote ssh session

Is it possible to write a bash script that opens a remote node (e.g. through ssh and/or slurm) and starts an interactive session there after running some commands? I'm trying to automate the process of starting a jupyter session on a remote computing cluster, which currently looks like this:
1. ssh into a login node of the remote cluster, using a specific port
2. use slurm to request an interactive session on one of the compute nodes, including X11 forwarding through that port
3. change directory to the working directory
4. activate the conda environment for my project
5. open jupyter from the command line, specifying the port I used previously
It's a lengthy process, and if I get something wrong at any step I usually have to go back and start from the beginning because the port I'm using is still tied up. So I'm looking for a way I can run a single script (possibly with arguments) from my local machine that jumps through all the hoops to get me a working jupyter session with a link I can paste to my browser.
Like @Diego Torres Milano said, you would need to write a script locally that can do the interactive part, then invoke it via a remote script.
But since your process is interactive, this gets tricky. Luckily, Linux has a tool called expect, easily installed via a package manager, which is designed for scripting multi-step interactive sessions.
So you would write an expect script which would "expect" certain prompts; it reads those prompts and uses conditional logic to respond to them appropriately.
Once you have this written and it works locally, it's just a matter of executing it via ssh on the remote server:
ssh user@12.34.56.78 /path/to/script.ex
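
A minimal expect sketch of the idea, shown with a plain port tunnel (-L) rather than X11 forwarding; the host name, prompt patterns, paths, and environment name are all placeholders to adapt:

#!/usr/bin/expect -f
# hypothetical login node and port; adjust to your cluster
spawn ssh -L 8888:localhost:8888 user@login.cluster.example
expect "$ "                        ;# login-node shell prompt
send "srun --pty bash\r"           ;# request an interactive compute-node shell
expect "$ "
send "cd /path/to/workdir && conda activate myenv\r"
expect "$ "
send "jupyter notebook --no-browser --port=8888\r"
interact                           ;# hand the live session back to you

The interact at the end is what leaves you attached to the session, so you can copy the jupyter link into your browser.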

bash/python consecutive commands in a nested environment

I have a task for my thesis which involves a camera and several LEDs that can be controlled by bash commands. To access the system, I need to run ssh root@IP, which puts me in the system's default path. Under this path there is a script which opens the camera application when run as ./foo; once it is executed, I am inside the camera application. Then I can check the temperature of the LEDs etc. by typing, e.g., status -t.
Now my aim is to automate this temperature check with a bash script or Python code. In bash, if I run ssh root@192.168.0.1, ./foo, and status -t consecutively, I can get the temperature value. However, executing ssh root@192.168.0.1 './foo' 'status -t' ends in an infinite loop. If I do ssh root@192.168.0.1 './foo', I expect to land in the camera application, but this opens the application in such a strange way that I can't execute status -t afterwards.
I tried also something like this
ssh root@192.168.0.1 << EOF
ls
./foo
status -t
EOF
I also tried running ssh from Python with subprocess and with paramiko, but nothing really works. What actually sets my situation apart from those examples is that my commands depend on each other: one opens another application, and the next command has to run inside that application.
So the questions are:
1- Does what I am doing make sense, and is it even possible?
2- How do I implement it in a bash script or Python code?
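
In the expect direction, a minimal sketch that logs in, starts the application, and issues the status command; the password, the "camera> " prompt pattern, and the exit command are placeholders to adapt to what ./foo actually prints:

#!/usr/bin/expect -f
spawn ssh root@192.168.0.1
expect "password:"            ;# skip this pair if key-based login is set up
send "yourpassword\r"
expect "# "                   ;# the remote shell prompt
send "./foo\r"
expect "camera> "             ;# whatever prompt the camera application shows
send "status -t\r"
expect "camera> "             ;# the temperature output arrives before this prompt
send "exit\r"
expect eof

Because expect waits for each application's own prompt before sending the next line, the "one command opens the application for the next" dependency is handled naturally.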

How to send input to a console/CLI program running on remote host using bash?

I have a script that I normally launch using the following syntax:
ssh -Yq user@host "xterm -e '. /home/user/bin/prog1 $arg1;prog2'"
(note: I've removed some of the complexities of the command, so please excuse if there are any syntax errors in the ssh command; it should not be relevant to the question)
This launches an xterm window that runs prog1, and after completion runs prog2. prog2 is a console-style program that performs some setup, then several seconds later waits for user input.
Is there a way via bash script (preferably without downloading external packages) that I can send data to prog2 that's running on $host?
I've looked into << and expect, but it's way over my head. My intuition is that there's probably a straightforward way of doing this, but I can't figure out what terms to search for. I also understand that I can remotely send keystrokes to a host using xdotool or something similar, but I'm hesitant to request a new package installation unless I know that's the only reasonable solution.
Thanks!
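
One package-free idea, sketched under the assumption that prog2 reads its input from stdin rather than directly from the terminal device (the /tmp/prog2_in path is a placeholder): attach prog2's stdin to a file that tail -f holds open, then append input lines to that file whenever you want to feed it.

# launch: prog2's stdin follows /tmp/prog2_in, which tail -f never closes
ssh -Yq user@host "touch /tmp/prog2_in; xterm -e '. /home/user/bin/prog1 $arg1; tail -f /tmp/prog2_in | prog2'" &

# later: send a line of input to prog2 from the local script
ssh user@host 'echo "desired input" >> /tmp/prog2_in'

If prog2 insists on a real terminal for its input, this won't work, and expect (or something else that allocates a pseudo-terminal) becomes the reasonable fallback.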

How can I run a shell script when booting up?

I am configuring an app at work which is on an Amazon Web Services server.
To get the app running you have to run a shell script called Start.sh.
I want this to happen automatically after booting up the server.
I have already tried the following bash in the User Data section (which runs on boot):
#!/bin/bash
cd "/home/ec2-user/app_name/"
sh Start.sh
echo "worked" > worked.txt
Thanks for the help
Scripts provided through User Data are only executed the first time the instance is started. (Officially, it is executed once per instance id.) This is done because the normal use-case is to install software, which should only be done once.
If you wish something to run on every boot, you could probably use the cloud-init once-per-boot feature:
Any scripts in the scripts/per-boot directory on the datasource will be run every time the system boots. Scripts will be run in alphabetical order.
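
A minimal sketch, assuming a standard cloud-init layout where that directory is /var/lib/cloud/scripts/per-boot on the instance (start-app.sh is a placeholder name):

# create the per-boot script; cloud-init runs these on every boot,
# in alphabetical order
sudo tee /var/lib/cloud/scripts/per-boot/start-app.sh <<'EOF'
#!/bin/bash
cd /home/ec2-user/app_name/ || exit 1
sh Start.sh
echo "worked" > worked.txt
EOF

# cloud-init only runs scripts that are executable
sudo chmod +x /var/lib/cloud/scripts/per-boot/start-app.sh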

How to ssh into a shell and run a script and leave myself at the prompt

I am using Elastic MapReduce from Amazon. I am SSHing into the hadoop master node and executing a script like:
$EMR_BIN/elastic-mapreduce --jobflow $JOBFLOW --ssh < hivescript.sh
It SSHes me into the master node and runs the hive script. The hive script contains the following lines:
hive
add jar joda-time-1.6.jar;
add jar EmrHiveUtils-1.2.jar;
and some commands to create hive tables. The script runs fine and creates the hive tables and everything else, but then returns to the prompt from which I ran the script. How do I leave it SSHed into the hadoop master node, at the hive prompt?
Consider using Expect, then you could do something along these lines and interact at the end:
/usr/bin/expect <<EOF
spawn ssh ... YourHost
expect "password"
send "password\n"
send "javastuff\n"
interact
EOF
These are the most common answers I've seen (with the drawbacks I ran into with them):
1. Use expect.
   - This is probably the most well-rounded solution for most people.
   - I cannot control whether expect is installed in my target environments.
   - Just to try this out anyway, I put together a simple expect script to ssh to a remote machine, send a simple command, and turn control over to the user. There was a long delay before the prompt showed up, and after fiddling with it with little success I decided to move on for the time being.
   - Eventually I came back to this as the final solution, after realizing I had violated one of the 3 virtues of a good programmer -- false impatience.
2. Use screen / tmux to start the shell, then inject commands from an external process.
   - This works OK, but if the terminal window dies it leaves a screen/tmux instance hanging around. I could certainly try to come up with a way to re-attach to prior instances or kill them; screen (and probably tmux) can be made to die instead of auto-detaching, but I didn't fiddle with it.
3. If using gnome-terminal, use its -x or --command flag (I'm guessing xterm and others have similar options).
   - I'll go into more detail on the problems I had with this under #4.
4. Make a bash script with #!/bin/bash --init-file as the shebang; this will cause your script to execute, then leave an interactive shell running afterward (see the sketch just after this list).
   - This and #3 had issues with some programs that require user interaction before the shell is presented. Some programs (like ssh) worked fine; others (telnet, vxsim) presented a prompt, but no text was passed along to the program, only ctrl characters like ^C.
5. Do something like this: xterm -e 'commands; here; exec bash'. This will cause it to create an interactive shell after your commands execute.
   - This is fine as long as the user doesn't attempt to interrupt with ^C before the last command executes.
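
As promised, a minimal sketch of approach #4 (the working directory and variable are placeholders). On Linux the shebang hands --init-file to bash as a single argument, so bash sources this file as its init file, runs the commands, and then stays at an interactive prompt:

#!/bin/bash --init-file
# the commands below run first; an interactive shell remains afterward
cd /tmp                        # placeholder setup step
export DEMO_VAR=hello          # placeholder setup step
echo "setup done; you now have an interactive shell"

Run it directly (./myscript) or via xterm -e ./myscript; as noted in #4, programs that need the terminal before the prompt appears may still misbehave.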
Currently, the only thing I've found that gives me the behavior I need is to use cmdtool from the OpenWin project.
/usr/openwin/bin/cmdtool -I 'commands; here'
# or
/usr/openwin/bin/cmdtool -I 'commands; here' /bin/bash --norc
The resulting terminal injects the list of commands passed with -I into the program executed (no parameters means the default shell), so those commands show up in that shell's history.
What I don't like is that the terminal cmdtool provides feels so clunky ... but alas.
