Bash read command with cat and pipe

I have two scripts:
install.sh
#!/usr/bin/env bash
./internal_install.sh
internal_install.sh
#!/usr/bin/env bash
set -x
while true; do
read -p "Hello, what's your name? " name
echo $name
done
When I run ./install.sh, all works as expected:
> ./install.sh
+ true
+ read -p 'Hello, what'\''s your name? ' name
Hello, what's your name? Martin
+ echo Martin
Martin
...
However, when I run with cat ./install.sh | bash, the read function does not block:
cat ./install.sh | bash
+ true
+ read -p 'Hello, what'\''s your name? ' name
+ echo
+ true
+ read -p 'Hello, what'\''s your name? ' name
+ echo
...
This is just a simplified version of using curl which results in the same issue:
curl -sl https://www.conteso.com/install.sh | bash
How can I use curl/cat to have blocking read in the internal script?

read reads from standard input by default. When you use the pipe, standard input is the pipe, not the terminal.
If you want read to always read from the terminal, redirect its input from /dev/tty.
#!/usr/bin/env bash
set -x
while true; do
    read -p "Hello, what's your name? " name </dev/tty
    echo "$name"
done
But you could instead solve the problem by giving the script as an argument to bash instead of piping.
bash ./install.sh
When using curl to get the script, you can use process substitution:
bash <(curl -sl https://www.conteso.com/install.sh)
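To see that process substitution leaves stdin free for read, here is a small self-contained sketch; the inner.tmp file and its one-line script are made up for the demo:

```shell
#!/usr/bin/env bash
# The inner script reads from stdin, just like the install script above.
echo 'read -r line; echo "got: $line"' > inner.tmp

# With process substitution the script text arrives on a /dev/fd/NN path,
# so stdin is still available for read to consume.
out=$(echo "hello" | bash <(cat inner.tmp))
echo "$out"

rm -f inner.tmp
```

By contrast, cat inner.tmp | bash occupies stdin with the script text itself, which is why read returned immediately in the question.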

Related

How to get a bash variable from inside Postgres?

I'm kind of new to bash scripting and PostgreSQL.
I saw in another question a way to run a bash script as psql user here.
I tried making a bash function as follow,
postgres_create_db(){
sudo su postgres <<- EOF
if psql -lqt | cut -d \| -f 1 | grep -qw nokia_aaa_poc_db; then
psql -c '\dt'
else
psql -c 'CREATE DATABASE nokia_AAA_poc_db;'
fi
EOF
exit
}
where this function will be called further in the code, but I wonder if I can add a RETURN to the function that actually returns a variable first declared inside the postgres shell (between the EOFs). Like below:
postgres_create_db(){
sudo su postgres <<- EOF
if psql -lqt | cut -d \| -f 1 | grep -qw nokia_aaa_poc_db; then
psql -c '\dt'
exists=1 #where thats a variable that I want to access outside the postgres bash.
else
psql -c 'CREATE DATABASE nokia_AAA_poc_db;'
fi
EOF
exit
return exists
}
but it gives an error on shellcheck
return exists
^-- SC2152: Can only return 0-255. Other data should be written to stdout.
Functions in bash can only return values from 0 to 255, where 0 is success. Reference: Return value in a Bash function
So you can echo the variable like this instead:
#!/usr/bin/env bash
postgres_test() {
psql -c '\dt' &> /dev/null
declare exists=1
echo "$exists"
}
printf "%s\n" "$(postgres_test)"
This prints "1".
You'll also notice that I redirected the output of the Postgres command to /dev/null; otherwise it would be mixed into the function's output.
You might wish to redirect that output to a file instead.
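To make the data-via-stdout pattern concrete without needing a live Postgres, here is a minimal sketch; check_db and its if true branch are hypothetical stand-ins for the real psql | grep test:

```shell
#!/usr/bin/env bash
# Return data on stdout; keep the 0-255 exit status for success/failure.
check_db() {
  local exists=0
  if true; then          # stand-in for: psql -lqt | cut -d \| -f 1 | grep -qw ...
    exists=1
  fi
  echo "$exists"         # the data travels via stdout
  return 0               # the exit status stays in the 0-255 range
}

exists=$(check_db)       # capture the function's stdout
echo "exists=$exists"
```

The caller captures the function's stdout with command substitution, which is exactly what the printf "%s\n" "$(postgres_test)" line above does.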

Why can't I pass the variable's value into a file in the /etc directory?

I want to pass the value of the $ip variable into the file /etc/test.json with bash.
ip="xxxx"
sudo bash -c 'cat > /etc/test.json <<EOF
{
"server":"$ip",
}
EOF'
I expect the content of /etc/test.json to be
{
"server":"xxxx",
}
However the real content in /etc/test.json is:
{
"server":"",
}
But if I replace the target directory /etc/ with /tmp
ip="xxxx"
cat > /tmp/test.json <<EOF
{
"server":"$ip",
}
EOF
the value of the $ip variable gets passed into /tmp/test.json:
$ cat /tmp/test.json
{
"server":"xxxx",
}
In Kamil Cuk's example, the subprocess is cat > /etc/test.json which contains no variable.
sudo sh -c 'cat > /etc/test.json' << EOF
{
"server":"$ip",
}
EOF
It does not export the $ip variable at all.
Now let's analyze the following:
ip="xxxx"
sudo bash -c "cat > /etc/test.json <<EOF
{
"server":\""$ip"\",
}
EOF"
The different parts in
"cat > /etc/test.json <<EOF
{
"server":\""$ip"\",
}
EOF"
will concatenate into one long string used as the command. Why can the $ip variable inherit the value from its parent process here?
There are two reasons for this behavior:
By default, variables are not passed to the environment of subsequently executed commands.
The variable is not expanded in the current context, because your command is wrapped in single quotes.
Exporting the variable
Place an export statement before the variable, see man 1 bash
The supplied names are marked for automatic export to the environment of subsequently executed commands.
And as noted by Léa Gris you also need to tell sudo to preserve the environment with the -E or --preserve-environment flag.
export ip="xxxx"
sudo -E bash -c 'cat > /etc/test.json <<EOF
{
"server":"$ip",
}
EOF'
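The effect of export can be demonstrated without sudo at all; in this sketch the child bash only sees $ip after it has been exported:

```shell
#!/usr/bin/env bash
ip="xxxx"

# Not exported yet: the child shell starts without $ip in its environment.
unexported=$(bash -c 'echo "$ip"')

export ip

# Exported: the child shell now inherits ip=xxxx.
exported=$(bash -c 'echo "$ip"')

echo "before export: [$unexported]  after export: [$exported]"
```

The single quotes keep the parent shell from expanding $ip, so whatever the child prints reflects only what reached its environment.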
Expand the variable in the current context:
This is the reason your second command works, you do not have any quotes around the here document in this example.
But if I replace the target directory /etc/ with /tmp [...] the value of the $ip variable gets passed into /tmp/test.json
You can change your original snippet by replacing the single quotes with double quotes and escaping the quotes around your ip:
ip="xxxx"
sudo bash -c "cat > /etc/test.json <<EOF
{
"server":\""$ip"\",
}
EOF"
Edit: For your additional questions:
In Kamil Cuk's example, the subprocess is cat > /etc/test.json which contains no variable.
sudo sh -c 'cat > /etc/test.json' << EOF
{
"server":"$ip",
}
EOF
It does not export the $ip variable at all.
Correct, and you did not wrap the here document in single quotes. Therefore $ip is substituted in the current context and the string passed to the subprocess's standard input is
{
"server":"xxxx",
}
So in this example the subprocess does not need to know the $ip variable.
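The quoting rule for here documents can be checked in isolation: quoting the delimiter (<<'EOF') is what suppresses expansion in the current shell.

```shell
#!/usr/bin/env bash
ip="xxxx"

# Unquoted delimiter: the current shell expands $ip before cat sees the text.
unquoted=$(bash -c 'cat' <<EOF
server: $ip
EOF
)

# Quoted delimiter: $ip is passed through literally.
quoted=$(bash -c 'cat' <<'EOF'
server: $ip
EOF
)

echo "$unquoted"
echo "$quoted"
```

The first prints server: xxxx, the second prints the literal text server: $ip.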
Simple example
$ x=1
$ sudo -E sh -c 'echo $x'
[sudo] Password for kalehmann:
This echoes nothing because:
'echo $x' is wrapped in single quotes, so $x is not substituted in the current context.
$x is not exported, so the subprocess does not know its value.
$ export y=2
$ sudo -E sh -c 'echo $y'
[sudo] Password for kalehmann:
2
This echoes 2 because:
'echo $y' is wrapped in single quotes, so $y is not substituted in the current context.
$y is exported, so the subprocess does know its value.
$ z=3
$ sudo -E sh -c "echo $z"
[sudo] Password for kalehmann:
3
This echoes 3 because:
"echo $z" is wrapped in double quotes, so $z is substituted in the current context.
There is little need to build the here document inside the subshell. Just do it outside:
sudo tee /etc/test.json <<EOF
{
"server":"$ip",
}
EOF
or
sudo sh -c 'cat > /etc/test.json' << EOF
{
"server":"$ip",
}
EOF
Generally, it is not safe to build a fragment of JSON using string interpolation, because it requires you to ensure the variables are properly encoded. Let a tool like jq do that for you.
Pass the output of jq to tee, and use sudo to run tee so that the only thing you do as root is open the file with the correct permissions.
ip="xxxx"
jq -n --arg x "$ip" '{server: $x}' | sudo tee /etc/test.json > /dev/null

sed command find and replace in file and overwrite file, how to initialize the file as the current script

I wanted to increment the current decimal variable,
so I made the following code
#! /bin/bash
k=1.3
file=/home/script.sh
next_k=$(echo "$k + 0.1" | bc -l)
sed -i "s/$k/$next_k/g" "$file"
echo $k
As you can see, I have to specify the file on line 3. Is there a workaround to just tell it to edit and replace in the current file, instead of me pointing it to the file? Thank you.
I think you're asking how to reference the script's own name, which $0 holds, e.g.
#! /bin/bash
k=1.3
next_k=$(echo "$k + 0.1" | bc -l)
sed -i "s/$k/$next_k/g" "$0"
echo $k
You can read more on Positional Parameters here, specifically this bit:
($0) Expands to the name of the shell or shell script. This is set at shell initialization. If Bash is invoked with a file of commands (see Shell Scripts), $0 is set to the name of that file. If Bash is started with the -c option (see Invoking Bash), then $0 is set to the first argument after the string to be executed, if one is present. Otherwise, it is set to the filename used to invoke Bash, as given by argument zero.
e.g.
$ cat test.sh
#! /bin/bash
k=1.3
next_k=$(echo "$k + 0.1" | bc -l)
sed -i "s/$k/$next_k/g" "$0"
echo $k
$ ./test.sh; ./test.sh ; ./test.sh
1.3
1.4
1.5
$ cat test.sh
#! /bin/bash
k=1.6
next_k=$(echo "$k + 0.1" | bc -l)
sed -i "s/$k/$next_k/g" "$0"
echo $k
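The same self-modifying trick can be tried safely against a throwaway copy. This sketch writes the script to a temp file first; it uses awk for the arithmetic in case bc is not installed, and assumes GNU sed for -i:

```shell
#!/usr/bin/env bash
# Write the self-incrementing script to a temp file, then run it twice.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/usr/bin/env bash
k=1.3
next_k=$(awk -v k="$k" 'BEGIN { printf "%.1f\n", k + 0.1 }')
sed -i "s/$k/$next_k/g" "$0"   # rewrite this very file via $0
echo "$k"
EOF
chmod +x "$tmp"

first=$("$tmp")    # the file still says k=1.3 at this point
second=$("$tmp")   # the first run bumped the file to k=1.4

echo "$first $second"
rm -f "$tmp"
```

Each run prints the value it started with and leaves the incremented value behind in the file, matching the transcript above.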

for loop with 2 files consecutively and pass parameters to other script

I have 2 files, one with hostnames.txt and one with commands.txt :
hostnames.txt:
switch1.txt
switch2.txt
switch3.txt
commands.txt:
show inter gi0/1
show inter gi0/0/1
show inter Eth1/1
I would like to run switch1 (the first switch) with show inter gi0/1 (the first command), then switch2 with show inter gi0/0/1, and so on until the files end.
I'm using a TCL script to which I'm passing the parameters with hostname and command from a text files.
for in `/bin/cat hostname.list`;
do
echo $n > oneswitch.txt
for in `/bin/cat interfinal.list`;
do
echo $m > onecommand.txt
for switch in `/bin/cat oneswitch.txt
tclscript -u username -p password -t $switch -r onecommand.txt .
I couldn't achieve this. What loops can I use, and what logic can I put in place, to make it work?
With the code above I only ever execute the last switch with the last command.
Please help.
Thanks
Use paste. Try this at a bash prompt: paste -d ":" hostname.list interfinal.list
So:
paste -d ":" hostname.list interfinal.list | while IFS=: read -r switch cmd; do
tclscript -u username -p password -t "$switch" -r "$cmd" .
done
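Here is a self-contained run of the paste approach, with made-up stand-in files (hosts.tmp, cmds.tmp) and a plain echo in place of tclscript:

```shell
#!/usr/bin/env bash
# Stand-ins for hostname.list and interfinal.list
printf '%s\n' switch1 switch2 > hosts.tmp
printf '%s\n' 'show inter gi0/1' 'show inter gi0/0/1' > cmds.tmp

# paste joins line N of each file with ":", and read splits them back apart.
pairs=$(paste -d ":" hosts.tmp cmds.tmp | while IFS=: read -r switch cmd; do
  echo "run on $switch: $cmd"
done)
echo "$pairs"

rm -f hosts.tmp cmds.tmp
```

Each iteration sees one hostname paired with the command on the same line number, which is exactly the lockstep the question asks for.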
You could do:
#!/usr/bin/env bash
for i in {1..3}; do
eval $(paste -d' ' <(sed -n ${i}p commands.txt) <(sed -n ${i}p hostnames.txt))
done
This script will execute the commands:
show inter gi0/1 switch1.txt
show inter gi0/0/1 switch2.txt
show inter Eth1/1 switch3.txt

How can bash read from piped input or else from the command line argument

I would like to read some data either from pipe or from the command line arguments (say $1), whichever is provided (priority has pipe).
This fragment tells me if the pipe was open or not but I don't know what to put inside in order not to block the script (test.sh) (using read or cat)
if [ -t 0 ]
then
echo nopipe
DATA=$1
else
echo pipe
# what here?
# read from pipe into $DATA
fi
echo $DATA
Executing the test.sh script above I should get the following output:
$ echo 1234 | test.sh
1234
$ test.sh 123
123
$ echo 1234 | test.sh 123
1234
You can read all of stdin into a variable with:
data=$(cat)
Note that what you're describing is non-canonical behavior. Good Unix citizens will:
Read from a filename if supplied as an argument (regardless of whether stdin is a tty)
Read from stdin if no file is supplied
This is what you see in sed, grep, cat, awk, wc and nl to name only a few.
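That canonical behavior fits in one line thanks to cat's "-" convention (a lone dash means "read stdin"). In this sketch the script is written to a temp file so both invocation styles can be shown; the file names are made up:

```shell
#!/usr/bin/env bash
script=$(mktemp)
cat > "$script" <<'EOF'
#!/usr/bin/env bash
# Read from a named file if given, else from stdin; "-" tells cat to use stdin.
data=$(cat "${1:--}")
echo "$data"
EOF

from_pipe=$(echo "piped" | bash "$script")   # no argument: reads stdin
echo "filearg" > input.tmp
from_file=$(bash "$script" input.tmp)        # argument: reads the file

echo "$from_pipe / $from_file"
rm -f "$script" input.tmp
```

The "${1:--}" expansion substitutes "-" when no argument is supplied, so the same cat call covers both cases.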
Anyways, here's your example with the requested feature demonstrated:
$ cat script
#!/bin/bash
if [ -t 0 ]
then
echo nopipe
data=$1
else
echo pipe
data=$(cat)
fi
echo "$data"
$ ./script 1234
nopipe
1234
$ echo 1234 | ./script
pipe
1234
