How to run command in background and also capture the output - bash

In the middle of the script, I have a command that exposes the local port with ssh -R 80:localhost:8080 localhost.run. I need to execute this command in the background, parse the output, and save it into a variable.
The output returns:
Welcome to localhost.run!
...
abc.lhrtunnel.link tunneled with tls termination, https://abc.lhrtunnel.link
Need to capture this part:
https://abc.lhrtunnel.link
As a result something like this:
...
hostname=$(command)
echo $hostname
...

Try this Shellcheck-clean code:
#! /bin/bash -p
hostname=$(ssh ... \
    | sed -n 's/^.*tunneled with tls termination, //p')
declare -p hostname
I'm assuming that you don't really want to background the command that generates the output. You just want to run it in a way that allows its output to be captured and filtered. See How do you run multiple programs in parallel from a bash script? for information about how "background" processes are used for parallel processing.
The -n option to sed means that it doesn't print lines from the input unless explicitly instructed to print.
s/^.*tunneled with tls termination, //p works on input lines that contain anything followed by the string "tunneled with tls termination, ". It deletes everything on the line up to and including that string and prints the result, which should be the URL that you want.
declare -p varname is a much more reliable and useful way to show the value of a variable than using echo.
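If you genuinely do need the tunnel to keep running in the background while the rest of the script continues, a common pattern is to redirect its output to a file and poll that file until the URL appears. A minimal sketch, assuming the log file name and polling interval are yours to choose:
# Start the tunnel in the background, capturing all of its output.
ssh -R 80:localhost:8080 localhost.run > tunnel.log 2>&1 &
tunnel_pid=$!
hostname=
# Poll the log until the URL line shows up.
until [ -n "$hostname" ]; do
    sleep 1
    hostname=$(sed -n 's/^.*tunneled with tls termination, //p' tunnel.log)
done
declare -p hostname
# ...later: kill "$tunnel_pid" when the tunnel is no longer needed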

Related

Remote script not recognizing environment variable set on host machine in screen

I have a bash script that runs another script in a screen on a remote computer. The environment variable GITLAB_CI_TOKEN is set on the host machine and is defined properly. However, the script configure.sh on the remote machine reports that this environment variable is empty when it is executed, even though it is defined on the same line as the script invocation.
Here is the command I am using:
ssh -o "StrictHostKeyChecking=accept-new" "${COMPUTERS_IPS[i]}" \
screen -S "deploy_${COMPUTERS_IPS[i]}" -dm " \
GITLAB_CI_TOKEN=${GITLAB_CI_TOKEN} \
bash \"${REMOTE_FOLDER}/configure.sh\" \"${REMOTE_FOLDER}\" > ${LOG_FILE} 2>&1;
"
Additionally, the logs are not being written to LOG_FILE, but are being displayed on the console of the screen. I have been pulling my hair out over this for the past two days... Any help or guidance would be greatly appreciated :)
Why GITLAB_CI_TOKEN is "empty":
Passing a command to a remote host over ssh is very similar to running it through eval. For example, in your case, escaped newlines on the first evaluation become unescaped newlines on a subsequent evaluation. Consider this very simple program named args (place it in a directory on your PATH to try the demo):
#!/bin/bash
for arg ; do
echo "|$arg|"
done
And these two use cases:
args "\
Hello \
World"
# prints:
# |Hello World|
ssh host args "\
Hello \
World"
# prints:
# |Hello|
# |World|
As you can see, when we run this via ssh the newline we attempted to escape splits our data into two separate lines even though we tried to keep it all on one line. This means your assignment of GITLAB_CI_TOKEN is just a regular shell variable instead of a scoped environment variable for your bash command. A scoped environment variable requires the declaration and the command to happen on the same line.
The easiest thing to do is likely to just export the variable explicitly with export GITLAB_CI_TOKEN=${GITLAB_CI_TOKEN}.
For similar reasons, the output of your command is going to the screen and not the logfile because the outer quotes of screen -dm "commands >output" are getting stripped on the first evaluation, and then the remote host is parsing screen -dm commands >output and assigning the output redirection to screen instead of commands. That means your configure.sh is writing to the screen, and it's the screen program that's writing its own output to a logfile.
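Putting both fixes together, a hedged, untested sketch (the session name is simplified, and the token is assumed to contain no whitespace or quote characters):
ssh -o "StrictHostKeyChecking=accept-new" "${COMPUTERS_IPS[i]}" "
  export GITLAB_CI_TOKEN=${GITLAB_CI_TOKEN}
  screen -S deploy -dm bash -c 'bash ${REMOTE_FOLDER}/configure.sh ${REMOTE_FOLDER} > ${LOG_FILE} 2>&1'
"
Here the bash -c wrapper makes the redirection happen inside the command that screen runs, not in the remote login shell.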
To send complex commands to a remote host, you may want to look into tools like printf %q which can produce escaped output suitable for being safely evaluated in an eval-like context. Take a look at BashFAQ/096 for an example.
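For instance, a hedged sketch along those lines (untested; local expansion of the variables is intended, as in the original command):
# Build the remote command locally, then escape it once so it survives
# the remote shell's extra evaluation.
remote_cmd="GITLAB_CI_TOKEN=${GITLAB_CI_TOKEN} bash ${REMOTE_FOLDER}/configure.sh ${REMOTE_FOLDER} > ${LOG_FILE} 2>&1"
printf -v quoted_cmd '%q' "$remote_cmd"
ssh -o "StrictHostKeyChecking=accept-new" "${COMPUTERS_IPS[i]}" \
    "screen -S deploy -dm bash -c $quoted_cmd"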

Why does "(echo <Payload> && cat) | nc <link> <port>" create a persistent connection?

I began playing CTF challenges, and I encountered a problem where I needed to send an exploit to a binary and then interact with the spawned shell.
I found a solution to this problem which looks something like this:
(echo -ne "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx\xbe\xba\xfe\xca" && cat) | nc pwnable.kr 9000
Meaning:
without the cat sub-command I couldn't interact with the shell, but with it I am now able to send commands into the spawned shell and get the returned output on my console's stdout.
What exactly happens there? This command line confuses me.
If you just type in cat at the command line, you'll be able to see that this command simply copies stdin to stdout one line at a time. It will carry on doing this until you either quit with Ctrl-C or send an EOF with Ctrl-D.
In this example you're running cat immediately after successfully printing the payload (the && list operator tells the shell to run the second command only if the first command exits with status zero, i.e., no error). As a result, nc won't see an EOF on its stdin until you terminate cat as described above. While this is piped to nc, everything you type is sent via cat to the remote server, and everything it sends back appears on your stdout.
So yes, in effect you end up with an interactive shell. You can get pretty much the same effect on your own machine by running cat | sh.
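You can see the same mechanism locally. This small demo feeds one command to sh and then hands your keyboard over to it:
(echo 'echo hello' && cat) | sh
# prints "hello", then keeps accepting shell commands you type
# until you press Ctrl-D (EOF) or Ctrl-C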

Loop through docker output until I find a String in bash

I am quite new to bash (barely any experience at all) and I need some help with a bash script.
I am using docker-compose to create multiple containers - for this example let's say 2 containers. The 2nd container will execute a bash command, but before that, I need to check that the 1st container is operational and fully configured. Instead of using a sleep command I want to create a bash script that will be located in the 2nd container and once executed do the following:
Execute a command and log the console output in a file
Read that file and check if a String is present. The command that I execute in the previous step will take a few (5 - 10) seconds to complete, and I need to read the file after it has finished executing. I suppose I can add a sleep to make sure the command has finished, or is there a better way to do this?
If the string is not present I want to execute the same command again until I find the String I am looking for
Once I find the string I am looking for I want to exit the loop and execute a different command
I found out how to do this in Java, but now I need to do this in a bash script.
The docker containers use Alpine as the operating system, but I updated the Dockerfile to install bash.
I tried this solution, but it does not work.
#!/bin/bash
[command to be executed] > allout.txt 2>&1
until
    tail -n 0 -F /path/to/file | \
    while read LINE
    do
        if echo "$LINE" | grep -q $string
        then
            echo -e "$string found in the console output"
        fi
    done
do
    echo "String is not present. Executing command again"
    sleep 5
    [command to be executed] > allout.txt 2>&1
done
echo -e "String is found"
In your docker-compose file, make use of the depends_on option.
depends_on takes care of the startup and shutdown sequence of your multiple containers.
But it does not check whether a container is ready before moving on to the next container's startup. To handle that scenario, check this out.
As described in this link,
You can use tools such as wait-for-it, dockerize, or sh-compatible wait-for. These are small wrapper scripts which you can include in your application’s image to poll a given host and port until it’s accepting TCP connections.
OR
Alternatively, write your own wrapper script to perform a more application-specific health check.
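As an illustration, wait-for-it is typically invoked from the dependent container's entrypoint like this (host, port, timeout, and the follow-up command are placeholders):
# Block until the first container accepts TCP connections, then continue.
./wait-for-it.sh first-container:8080 --timeout=60 -- \
    bash /scripts/second-container-command.sh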
In case you don't want to make use of the above tools, check this out: they use a combination of HEALTHCHECK and the service_healthy condition, as shown here. For a complete example, check this.
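A minimal docker-compose sketch of that approach (image names and the health test are placeholders; the actual check should reflect what "fully configured" means for your first container, and long-form depends_on conditions require a Compose version that supports them):
services:
  first:
    image: your-first-image
    healthcheck:
      test: ["CMD", "grep", "-q", "ready", "/tmp/status"]
      interval: 5s
      retries: 12
  second:
    image: your-second-image
    depends_on:
      first:
        condition: service_healthy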
Just:
while :; do
    # 1. Execute a command and log the console output in a file
    command > output.log
    # TODO: handle errors, etc.

    # 2. Read that file and check if a String is present.
    if grep -q "searched_string" output.log; then
        # Once the string is found, exit the loop...
        break
    fi

    # 3. If the string is not present, execute the same command again.
    # Add e.g. "sleep 0.1" here so the loop delays a little
    # instead of using 100% CPU.
done
# ...and execute a different command
different_command
You can timeout a command with timeout.
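For example, a one-liner capping the command at an arbitrary 60 seconds (timeout exits with status 124 if the limit is hit):
timeout 60 command > output.log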
Notes:
: (colon) is a shell builtin that returns a zero exit status, much like true. I prefer while : over while true; they mean the same thing.
The code presented should work in any POSIX shell.

Storing the output of a ssh/shell command as a regular string

I want to store output of a ssh command as a string, so that I could do some scripting around it, like so:
ssh_command = %x(ssh -t -t user@ip.ip.ip.ip)
The problem is that ssh_command does not actually seem to store any string.
It's supposed to store this (for an invalid IP):
ssh: connect to host ip.ip.ip.ip port 22: No route to host
It does output it in irb, but not as a string referenced by the ssh_command variable.
Interestingly enough, the following works as expected:
uname_cmd = %x(uname -a)
In this case, when I print 'uname_cmd', I do get the string output back, as expected.
So my question is, is there a way to store the output of a ssh command as a regular string, like it works with uname?
Thanks,
%x(..) captures stdout, while ssh error messages are written to stderr.
You can redirect stderr to stdout so that both are captured, using the shell redirection syntax 2>&1:
ssh_command = %x(ssh -t -t user@ip.ip.ip.ip 2>&1)
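The same principle is easy to demonstrate in plain bash, where command substitution likewise captures only stdout unless stderr is folded in (the address below is a reserved documentation IP, so the connection is expected to fail):
out=$(ssh -o ConnectTimeout=2 user@203.0.113.1 true 2>&1)
echo "$out"    # the connection error text is now in the variable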

Want to read variable value from remote file

In one of my bash scripts I want to read and use a variable's value from another script which is on a remote machine.
How should I go about resolving this? Any related info would be helpful.
Thanks in advance!
How about this (which is code I cannot currently test myself):
text=$(ssh yourname@yourmachine 'grep uploadRate= /root/yourscript')
It assumes that the value of the variable is contained in one line. The variable text now contains your variable assignment, presumably something like
uploadRate=1MB/s
There are several ways to convert the text/code into a real variable assignment in your current script, like evaluating the string or using grep. I would recommend
uploadRate=${text#*=}
to just remove the part up and including the =.
Edit: One more caveat to mention is that this only works if the original assignment does not contain variable references itself like in
uploadRate=1000*${kB}/s
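Putting the pieces together, an untested sketch using the names above:
text=$(ssh yourname@yourmachine 'grep uploadRate= /root/yourscript')
uploadRate=${text#*=}    # strip everything up to and including the first "="
echo "$uploadRate"       # e.g. 1MB/s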
ssh user@machine 'command'
will print the standard output of the remote command.
I can suggest at least two ways:
1) You can simply copy the output to a file from the remote server to your system with the scp command, and have your script on your machine read that file as an argument.
Script on your machine:
read -t 50 -p "Waiting for argument: " arg
It waits for the output from the remote machine. Then you can copy the file with:
sshpass -p<password> scp user@host:/Path/to/file /path/to/script/
What you need to do: tell the script on your machine that the file fetched by the scp command is its argument ($1).
2) Run a script from your machine:
#!/bin/bash
script='
#Your commands
'
sshpass -p<password> ssh user@host "$script"
There are also other ways to run scripts that do things on a remote machine.
