I want to produce a udev rule file from a bash script. For this I'm using the cat command. Unfortunately, the produced file is missing the "$" characters. Here is an example test.sh script:
#!/bin/sh
rc=`cat <<stmt1 > ./test.txt
-p $tempnode
archive/$env{ID_FS_LABEL_ENC}
stmt1`
The result is the following:
cat test.txt
-p ''
archive/{ID_FS_LABEL_ENC}
Where is the issue?
If you don't want any variable interpolation, use:
#!/bin/sh
group="test_1"
cat <<'stmt1' > ./test.txt
-p $tempnode
archive/$env{ID_FS_LABEL_ENC}
stmt1
rc=$?
(Notice the single quotes around stmt1.)
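For comparison, here is a minimal side-by-side sketch of both delimiter styles (expanded.txt and literal.txt are just placeholder names):
#!/bin/sh
# Unquoted delimiter: the shell expands $tempnode and $env before writing the file
cat <<stmt1 > ./expanded.txt
-p $tempnode
archive/$env{ID_FS_LABEL_ENC}
stmt1
# Quoted delimiter: the body is written out literally, $ characters intact
cat <<'stmt1' > ./literal.txt
-p $tempnode
archive/$env{ID_FS_LABEL_ENC}
stmt1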
Related
I'm trying to redirect a command's output to a file only if the command succeeds, because I don't want the redirection to erase the file's contents when the command fails.
(the command reads the file as input)
I'm currently using
cat <<< $( <command> ) > file;
This erases the file if the command fails.
It's possible to do what I want by storing the output in a temp file, like this:
<command> > temp_file && cat temp_file > file
But that looks kind of messy to me; I want to avoid manually creating temp files (I know the <<< redirection creates a temp file itself).
I finally came up with this trick
cat <<< $( <command> || cat file) > file;
This will not change the contents of the file... but it's even messier, I guess.
Perhaps capture the output into a variable, and echo the variable into the file if the exit status is zero:
output=$(command) && echo "$output" > file
Testing
$ out=$(bash -c 'echo good output') && echo "$out" > file
$ cat file
good output
$ out=$(bash -c 'echo bad output; exit 1') && echo "$out" > file
$ cat file
good output
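If the output is large or awkward to hold in a shell variable, roughly the same effect can be had with a temporary file; this is only a sketch using mktemp, with command and file standing in for the real names:
tmp=$(mktemp) || exit 1
if command > "$tmp"; then
    mv "$tmp" file          # replace the file only on success
else
    rm -f "$tmp"            # leave the original file untouched on failure
fi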
Remember, the > operator replaces the existing contents of the file with the output of the command. If you want to save the output of multiple commands to a single file, you’d use the >> operator instead. This will append the output to the end of the file.
For example, the following command will append output information to the file you specify:
ls -l >> /path/to/file
So, to log the command output only if it succeeds, you can try something like this:
until command
do
    command >> /path/to/file
done
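If the goal is strictly to append only when the command succeeds, a plain conditional does the same job; a minimal sketch (command and the path are placeholders):
if output=$(command); then
    printf '%s\n' "$output" >> /path/to/file
fi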
Let's take a little example:
$ cat source.sh
#!/bin/bash
echo "I'm file source-1"
. source-2.sh
And:
$ cat source-2.sh
#!/bin/bash
echo "I'm file source-2"
Now run:
$ ./source.sh
I'm file source-1
I'm file source-2
If I change the call of the second file in the first:
$ cat source.sh
#!/bin/bash
echo "I'm file source-1"
source source-2.sh
It will have the same effect as using dot.
What is the difference between these methods?
The only difference is in portability.
. is the POSIX-standard command for executing commands from a file; source is a more-readable synonym provided by Bash and some other shells. Bash itself, however, makes no distinction between the two.
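A quick way to see the portability difference is to run the same include under a strictly POSIX shell such as dash (assuming dash is installed; the script names here are made up and the exact error wording varies by shell):
$ cat use-dot.sh
#!/bin/sh
. ./source-2.sh
$ cat use-source.sh
#!/bin/sh
source ./source-2.sh
$ dash use-dot.sh
I'm file source-2
$ dash use-source.sh
use-source.sh: 2: source: not found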
There is no difference.
From the manual:
source
source filename
A synonym for . (see Bourne Shell Builtins).
I am deeply confused by the behaviour of Bash's heredoc construct.
Here is what I am doing:
#!/bin/bash
user="some_user"
server="some_server"
address="$user"#"$server"
printf -v user_q '%q' "$user"
function run {
ssh "$address" /bin/bash "$#"
}
run << SSHCONNECTION1
sudo dpkg-query -W -f='${Status}' nano 2>/dev/null | grep -c "ok installed" > /home/$user_q/check.txt
softwareInstalled=$(cat /home/$user_q/check.txt)
SSHCONNECTION1
What I get is
cat: /home/some_user/check.txt: No such file or directory
This is very bizarre, because the file does exist if I connect over SSH and check that path myself.
What am I doing wrong? The file is not executable, just a text file.
Thank you.
If you want the cat to run remotely, rather than locally during the heredoc's evaluation, escape the $ in the $(...):
softwareInstalled=\$(cat /home/$user_q/check.txt)
Of course, this only has meaning if some other part of your remote script then refers to "$softwareInstalled" (or, since it's in an unquoted heredoc, "\$softwareInstalled").
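Putting that together, a sketch of the heredoc with the command substitution deferred to the remote shell; note that ${Status} in the dpkg-query format string most likely needs the same escaping, since the local shell would otherwise expand it to an empty string as well:
run << SSHCONNECTION1
sudo dpkg-query -W -f='\${Status}' nano 2>/dev/null | grep -c "ok installed" > /home/$user_q/check.txt
softwareInstalled=\$(cat /home/$user_q/check.txt)
echo "\$softwareInstalled"
SSHCONNECTION1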
I have the following simple bash script:
#!/bin/bash -fx
ls *sh
The problem is that bash adds quotes to the pattern and I get the wrong output.
+ ls '*sh'
ls: cannot access *sh: No such file or directory
How can I change this behavior?
The output of ls *sh from the terminal is:
$ ls *sh
a.bash a.sh b.sh
I tried to add quotes according to this post - "Bash variable containing file wildcard"
without success
That's because you're disabling pathname expansion with the -f option.
#!/bin/bash -fx
From the man page:
-f
Disable filename expansion (globbing).
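If you still want -f for the rest of the script, you can re-enable globbing just where it's needed; a small sketch:
#!/bin/bash -fx
set +f      # turn pathname expansion back on
ls *sh
set -f      # disable it again for the rest of the script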
The following is an iptables save file, which I modified by setting some variables, as you can see below.
-A OUTPUT -o $EXTIF -s $UNIVERSE -d $INTNET -j REJECT
I also have a bash script which defines these variables and should call iptables-restore with the save file above.
#!/bin/sh
EXTIF="eth0"
INTIF="eth1"
INTIP="192.168.0.1/32"
EXTIP=$(/sbin/ip addr show dev "$EXTIF" | perl -lne 'if(/inet (\S+)/){print$1;last}');
UNIVERSE="0.0.0.0/0"
INTNET="192.168.0.1/24"
Now I need to use
/sbin/iptables-restore <the content of iptables save file>
in the bash script and somehow pull the text file into it so that the variables get substituted with their values. Is there any way to do that?
UPDATE: I even tried this:
/sbin/iptables-restore -v <<-EOF;
$(</etc/test.txt)
EOF
Something like this:
while read line; do eval "echo ${line}"; done < iptables.save.file | /sbin/iptables-restore -v
or more nicely formatted:
while read line
do eval "echo ${line}"
done < iptables.save.file | /sbin/iptables-restore -v
The eval of each line forces the variable expansion.
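To see what the eval does, here is a minimal sketch with one template line, using the values from the script above (note that eval will also happily interpret any quotes or backticks in the line):
EXTIF="eth0"
UNIVERSE="0.0.0.0/0"
INTNET="192.168.0.1/24"
line='-A OUTPUT -o $EXTIF -s $UNIVERSE -d $INTNET -j REJECT'
eval "echo ${line}"
# prints: -A OUTPUT -o eth0 -s 0.0.0.0/0 -d 192.168.0.1/24 -j REJECT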
Use the . (dot) command to include one shell script in another:
#!/bin/sh
. /path/to/another/script
In your shell script:
. /path/to/variable-definitions
/sbin/iptables-restore <<< "$(eval echo "$(</path/to/template-file)")"
or possibly
/sbin/iptables-restore < <(eval echo "$(</path/to/template-file)")
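A tiny demonstration of the dot include with a variable-definitions file (vars.sh and main.sh are made-up names):
$ cat vars.sh
EXTIF="eth0"
$ cat main.sh
#!/bin/sh
. ./vars.sh
echo "external interface: $EXTIF"
$ ./main.sh
external interface: eth0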