Adjust netplan yaml with sed or awk [duplicate] - bash

Here is our yaml:
network:
  ethernets:
    ens160:
      addresses:
      - 10.200.2.11/22
      gateway4: 10.200.0.1
      nameservers:
        addresses:
        - 8.8.8.8
        - 4.4.4.4
        search:
        - cybertax.live
  version: 2
I want to change the DNS servers only.
From:
- 8.8.8.8
- 4.4.4.4
to:
- 10.10.10.10
- 10.10.10.11
How can I do this? Note: we cannot use or install yq, so this needs to be done with sed or awk. Also, yes, I know this is not recommended, but it's what needs to be done right now.
What I have tried so far:
sed -i '/ addresses:/,/ search:/ s/^/# /' $netplan_yaml
sed -i '/ nameservers:/a\ \ \ \ \ \ \ \ addresses:' $netplan_yaml
for i in "${!asar_dns[@]}"; do
sed -i "/ addresses:/a\ \ \ \ \ \ \ \ - ${asar_dns[$i]}" $netplan_yaml
done
But this does three things wrong (that I can see).
It matches between addresses and search, including the lines with addresses and search. I only want what is AFTER addresses and BEFORE search.
It puts the DNS addresses from the associative array after every line containing "addresses", including the old, commented-out ones. I don't want to do that on the commented-out lines.
I don't like how I have to use \ \ \ \ \ \ ; I would much rather use a .* if possible, but I also need to use the addresses from the associative array.

I found a very hacky solution but it works. Open for feedback.
netplan_yaml=/etc/netplan/00-installer-config.yaml
baddns="$(sed -n '/.*nameservers:/,/.*search:/p' /etc/netplan/00-installer-config.yaml | grep -v 'nameservers\|addresses' | grep -v 'search' | grep -v 10.200 | awk '{print$2}')"
mapfile -t arr_baddns <<<"$baddns"
for i in "${arr_baddns[@]}"; do
sed -i "/ - $i/s/^/#/g" $netplan_yaml
done
for i in "${!asar_dns[@]}"; do
sed -i "/ addresses:/a\ \ \ \ \ \ \ \ - ${asar_dns[$i]}" $netplan_yaml
done
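Since the old server addresses may vary, an alternative is a single awk pass that drops the old entries under nameservers: and inserts the new ones. This is only a sketch and assumes the file matches the layout shown above (one nameservers: block, eight-space indentation for its addresses); it writes to a temporary file rather than editing in place:
awk -v new="10.10.10.10 10.10.10.11" '
  /nameservers:/ { print; in_ns = 1; next }
  in_ns && /addresses:/ {
    print                                   # keep the addresses: key
    n = split(new, dns, " ")
    for (i = 1; i <= n; i++) printf "        - %s\n", dns[i]
    skip = 1                                # start dropping the old entries
    next
  }
  skip && /^[[:space:]]*- / { next }        # drop an old DNS entry
  { skip = 0; in_ns = 0; print }
' "$netplan_yaml" > "$netplan_yaml.tmp" && mv "$netplan_yaml.tmp" "$netplan_yaml"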

If ed is available/acceptable, with your given input, something like:
ed -s file.yaml <<-'EOF'
g/nameservers:/;/addresses:/;/search:/-s/[[:digit:].]\{1,\}/10.10.10.11/\
-s/[[:digit:].]\{1,\}/10.10.10.10/
,p
Q
EOF
Using variables for the values, something like:
#!/bin/sh
dns1=10.10.10.10
dns2=10.10.10.11
ed -s file.yaml <<-EOF
g/nameservers:/;/addresses:/;/search:/-s/[[:digit:].]\{1,\}/$dns2/\\
-s/[[:digit:].]\{1,\}/$dns1/
,p
Q
EOF
Change Q to w if in-place editing is required, as with sed -i.
Remove the ,p to silence the output.

Related

Is it possible to comment inline in a multi-line newline escaped script?

When I have long multi-line piped commands in my scripts I would like to comment on what each line does, but I haven't found a way of doing so.
Given this snippet:
git branch -r --merged \
| grep " $remote" \
| egrep -v "HEAD ->" \
| util.esed -n 's/ \w*\/(.*)/\1/p' \
| egrep -v \
"$(skipped $skip | util.esed -e 's/,/|/g' -e 's/(\w+)/^\1$/g' )" \
| paste -s
Is it possible to insert comments in between the lines? It seems that using the backslash to escape the newline prevents me from adding comments at the end of the line, and I can't add the comment before the backslash, as that would hide the escaping.
Pseudo-code of what I would like the above script to look like
It seems I was unclear about what I wanted in the section above, so to give a clue as to what I am looking for, it should be something in the vein of this:
git branch -r --merged \ # list merged remote branches
| grep " $remote" \ # filter out the ones for $remote
| egrep -v "HEAD ->" \ # remove some garbage
#strip some whitespace:
| util.esed -n 's/ \w*\/(.*)/\1/p' \
# remove the skipped branches:
| egrep -v \
"$(skipped $skip | util.esed -e 's/,/|/g' -e 's/(\w+)/^\1$/g' )" \
| paste -s # something else
It doesn't have to be exactly like this (obviously, it's not valid syntax), but something similar. If it's not possible directly, due to syntactical restrictions, perhaps it's possible to write self-modifying code that will have comments that are removed before executing it?
You can try something like this:
git branch --remote | # some comment
grep origin | # another comment
tr a-z A-Z
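This works because the shell treats a newline after a | as a continuation of the pipeline, so a comment can sit at the end of each line without any backslash. Applied to the snippet from the question (same commands, only the layout changes):
git branch -r --merged |                 # list merged remote branches
  grep " $remote" |                      # filter out the ones for $remote
  egrep -v "HEAD ->" |                   # remove some garbage
  util.esed -n 's/ \w*\/(.*)/\1/p' |     # strip some whitespace
  egrep -v \
    "$(skipped $skip | util.esed -e 's/,/|/g' -e 's/(\w+)/^\1$/g' )" |
  paste -s                               # something else
Note that the egrep -v line keeps its backslash and cannot take a trailing comment, because that break falls in the middle of a single command rather than after a pipe.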

Installed Debian package-list with version-numbers

I want to compare two Debian systems with respect to packages version numbers. For that I need a file listing of all installed packages like this:
a2ps 1:4.14-1.3
abiword 3.0.0-8+b1
acl 0.6.37-3+b1
...
I wrote a bash script (rather clumsy) that collects the required info, but I cannot make it write to a file. Can someone help me to fix this?
dpkg --get-selections \
| grep "\binstall\b" \
| sed 's/\(^[A-Za-z0-9\.\-\_]*\).*/\1/' \
| while read i ; \
do `echo $i` `apt-cache policy $i \
| grep Install \
| sed 's/ *Installed: *\([A-Za-z0-9\.\-\_]*\)/\1/' `\
; done
Thank you.
dpkg-query --show -f '${Package}\t${Version}\n' > out.txt
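To compare the two systems, one sketch (the packages-host*.txt names are just placeholders) is to capture a sorted list on each machine and diff them:
dpkg-query --show -f '${Package}\t${Version}\n' | sort > packages-host1.txt
# run the same command on the second system, then:
diff packages-host1.txt packages-host2.txt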

Bash script - Some commands don't work in sh file

I am having some trouble with my bash script. The end of my file doesn't work, but every command works outside the file. I have two strings as arguments, $1 and $2. $acl_line and $usebackend_line are numbers, and they are good.
Here is the end of my file:
sed -i "$((acl_line+1))i \ \tacl\t\t is_$2_$1\t\thdr_com(host)\t-i $2.$1" /my_doc/haproxy/haproxy.cfg
sed -i "$((usebackend_line+1))i \ \tuse_backend\t$2_$1\tif is_$2_$1" /my_doc/haproxy/haproxy.cfg
echo -en "\nbackend $2_$1\n\tserver $2_$1 163.172.167.52:$3 maxconn 1024" >> /my_doc/haproxy/haproxy.cfg
cp -r "./model/*" "./script/lp_domains/$1/$2/"
sed -i 's/lp_ports/$ports/g' "./script/lp_domains/$1/$2/my_doc.yml"
sed -i 's/lp_name/$2-$1/g' "./script/lp_domains/$1/$2/my_doc.yml"
Thanks for your answers :)
If $1 and $2 should be interpolated, you cannot use single quotes.
Moreover, copying a file and then running sed -i on it is wasteful and error-prone. Just run sed and perform your substitutions at the same time.
sed -i -e "$((acl_line+1))i \ \tacl\t\t is_$2_$1\t\thdr_com(host)\t-i $2.$1" \
    -e "$((usebackend_line+1))i \ \tuse_backend\t$2_$1\tif is_$2_$1" \
    -e "\$a\
backend $2_$1\n\tserver $2_$1 163.172.167.52:$3 maxconn 1024" /my_doc/haproxy/haproxy.cfg
# remove ./model/my_doc.yml; instead have a template ./my_doc.yml.in
cp -r ./model/* "./script/lp_domains/$1/$2/"
sed -e "s/lp_ports/$ports/g" -e "s/lp_name/$2-$1/g" \
my_doc.yml.in >"./script/lp_domains/$1/$2/my_doc.yml"
(You should probably do something similar with haproxy.cfg.in actually.)
I have fixed my errors. They were just permission errors: sed creates some temporary files, so I added permissions for my user. Thanks for your help!

How to get the file size on Unix in a Makefile?

I would like to implement this as a Makefile task:
# step 1:
curl -u username:password -X POST \
-d '{"name": "new_file.jpg","size": 114034,"description": "Latest release","content_type": "text/plain"}' \
https://api.github.com/repos/:user/:repo/downloads
# step 2:
curl -u username:password \
-F "key=downloads/octocat/Hello-World/new_file.jpg" \
-F "acl=public-read" \
-F "success_action_status=201" \
-F "Filename=new_file.jpg" \
-F "AWSAccessKeyId=1ABCDEF..." \
-F "Policy=ewogIC..." \
-F "Signature=mwnF..." \
-F "Content-Type=image/jpeg" \
-F "file=#new_file.jpg" \
https://github.s3.amazonaws.com/
In the first part however, I need to get the file size (and content type if it's easy, not required though), so some variable:
{"name": "new_file.jpg","size": $(FILE_SIZE),"description": "Latest release","content_type": "text/plain"}
I tried this but it doesn't work (Mac 10.6.7):
$(shell du path/to/file.js | awk '{print $1}')
Any ideas how to accomplish this?
If you have GNU coreutils:
FILE_SIZE=$(stat -L -c %s $filename)
The -L tells it to follow symlinks; without it, if $filename is a symlink it will give you the size of the symlink rather than the size of the target file.
The MacOS stat equivalent appears to be:
FILE_SIZE=$(stat -L -f %z $filename)
but I haven't been able to try it. (I've written this as a shell command, not a make command.) You may also find the -s option useful:
Display information in "shell output", suitable for initializing variables.
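To use this inside a Makefile, a minimal sketch (assuming GNU make and GNU stat; new_file.jpg is just a placeholder) could look like:
FILE      := new_file.jpg
FILE_SIZE := $(shell stat -L -c %s $(FILE))

upload:
	curl -u username:password -X POST \
	  -d '{"name": "$(FILE)","size": $(FILE_SIZE),"description": "Latest release","content_type": "text/plain"}' \
	  https://api.github.com/repos/:user/:repo/downloads
Also note that any $ meant for the shell or awk inside a Makefile must be doubled; one likely reason the $(shell du ... | awk '{print $1}') attempt fails is that make expands $1 (to nothing) before awk ever sees it, so it would have to be written $$1.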
For reference, an alternative method is using du with -b for output in bytes and -s for summary only, then cut to keep only the first field of the output:
FILE_SIZE=$(du -sb $filename | cut -f1)
This should return the same result in bytes as Keith Thompson's answer, but will also work for full directory sizes.
Extra: I usually use a macro for this.
define sizeof
$$(du -sb \
$(1) \
| cut -f1 )
endef
Which can then be called like,
$(call sizeof,$(filename_or_dirname))
I think this is a case where parsing the output of ls is legitimate:
% FILE_SIZE=`ls -l $filename | awk '{print $5}'`
(no it's not: use stat, as noted by Keith Thompson)
For the type, you can use
% FILE_TYPE=`file --mime-type --brief $filename`

Wget page title

Is it possible to Wget a page's title from the command line?
input:
$ wget http://bit.ly/rQyhG5
output:
If it’s broke, fix it right - Keeping it Real Estate. Home
This script would give you what you need:
wget --quiet -O - http://bit.ly/rQyhG5 \
| sed -n -e 's!.*<title>\(.*\)</title>.*!\1!p'
But there are lots of situations where it breaks, including if there is a <title>...</title> in the body of the page, or if the title is on more than one line.
This might be a little better:
wget --quiet -O - http://bit.ly/rQyhG5 \
| paste -s -d " " \
| sed -e 's!.*<head>\(.*\)</head>.*!\1!' \
| sed -e 's!.*<title>\(.*\)</title>.*!\1!'
but it does not fit your case as your page contains the following head opening:
<head profile="http://gmpg.org/xfn/11">
Again, this might be better:
wget --quiet -O - http://bit.ly/rQyhG5 \
| paste -s -d " " \
| sed -e 's!.*<head[^>]*>\(.*\)</head>.*!\1!' \
| sed -e 's!.*<title>\(.*\)</title>.*!\1!'
but there are still ways to break it, including having no head/title in the page.
Again, a better solution might be:
wget --quiet -O - http://bit.ly/rQyhG5 \
| paste -s -d " " \
| sed -n -e 's!.*<head[^>]*>\(.*\)</head>.*!\1!p' \
| sed -n -e 's!.*<title>\(.*\)</title>.*!\1!p'
but I am sure we can find a way to break it. This is why a true XML parser is the right solution, but as your question is tagged shell, the above is the best I can come up with.
The paste and the two seds can be merged into a single sed, but it is less readable. However, this version has the advantage of working on multi-line titles:
wget --quiet -O - http://bit.ly/rQyhG5 \
| sed -n -e 'H;${x;s!.*<head[^>]*>\(.*\)</head>.*!\1!;T;s!.*<title>\(.*\)</title>.*!\1!p}'
Update:
As explained in the comments, the last sed above uses the T command, which is a GNU extension. If you do not have a compatible version, you can use:
wget --quiet -O - http://bit.ly/rQyhG5 \
| sed -n -e 'H;${x;s!.*<head[^>]*>\(.*\)</head>.*!\1!;tnext;b;:next;s!.*<title>\(.*\)</title>.*!\1!p}'
Update 2:
As the above still does not work on Mac, try:
wget --quiet -O - http://bit.ly/rQyhG5 \
| sed -n -e 'H;${x;s!.*<head[^>]*>\(.*\)</head>.*!\1!;tnext};b;:next;s!.*<title>\(.*\)</title>.*!\1!p'
and/or
cat << EOF > script
H
\$x
\$s!.*<head[^>]*>\(.*\)</head>.*!\1!
\$tnext
b
:next
s!.*<title>\(.*\)</title>.*!\1!p
EOF
wget --quiet -O - http://bit.ly/rQyhG5 \
| sed -n -f script
(Note the \ before the $ to avoid variable expansion.)
It seems that the :next label does not like to be prefixed by a $, which could be a problem in some sed versions.
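For reuse, the paste plus two-sed version above can be wrapped in a small shell function (a sketch; it carries the same caveats about pages with unusual HTML):
page_title() {
  # Print the <title> of the page at the given URL.
  wget --quiet -O - "$1" \
    | paste -s -d " " \
    | sed -n -e 's!.*<head[^>]*>\(.*\)</head>.*!\1!p' \
    | sed -n -e 's!.*<title>\(.*\)</title>.*!\1!p'
}

page_title http://bit.ly/rQyhG5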
The following will pull whatever lynx thinks the title of the page is, saving you from all of the regex nonsense. Assuming the page you are retrieving is standards compliant enough for lynx, this should not break.
lynx -dump example.com | sed '2q;d'
