Dynamically encrypting configuration variables and placing them in specific folders - ansible

I have a configuration file that contains a list of string variables that the user is required to change to suit their environment:
Configuration file example:
# first_file.yml
value_one: <UPDATE>
value_two: <UPDATE>
# second_file.yml
value_one: <UPDATE>
value_two: <UPDATE>
Once the user has changed the UPDATE values, I want to be able to use vault to encrypt each variable before copying the encrypted variable to the file specified in the comment, with the desired output below:
# first_file.yml
value_one: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  30663734346135353432323531336536636566643739656332613031636630383237666636366630
  6164633835363766666535656438306534343565636434330a626239396536373032373866353861
  37376665313438363561323262393337313266613237313065396338376438313737393234303434
  3035326633616339340a346164646366623932313261613662633938356662373438643831643830
  3432
value_two: !vault |
  $ANSIBLE_VAULT;1.1;AES256...
I am unsure how to best approach this problem, with the main challenges being how to:
- Encrypt each variable successfully, without encrypting the entire file
- Copy the encrypted variable over to a specified file

I just threw this together, but it works for your case, preserving structure and indents:
#!/bin/bash
while IFS= read -r line; do
    # Split the line into key and value
    key=$( echo "${line}" | cut -d: -f1 )
    value=$( echo "${line}" | cut -d: -f2 | tr -d '\n' )
    # Capture the key's leading spaces so the output keeps the same indent
    indent=$( echo "${key}" | grep -o '^ *' )
    # If the value is not empty, encrypt it
    if [ -n "$value" ]; then
        # Strip the leading space and any surrounding quotes, encrypt, then re-indent the vault block
        cval=$( echo -n "${value## }" | sed -e "s/^'//" -e "s/'$//" | ansible-vault encrypt_string --vault-password-file ~/.ssh/vault_key.txt | sed "s/^ / ${indent}/" )
    fi
    # If the key is not empty, print "key: <encrypted value>"
    if [ -n "$key" ]; then
        echo -n "${key}: ${cval}"
    fi
    # End the line
    echo
    # Reset cval so key-only lines don't reuse the previous value
    unset cval
done < /dev/stdin
Name it encrypt_values.sh, run chmod +x encrypt_values.sh, then you can run it with
cat {input-file} | ./encrypt_values.sh > {output_file}
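The core of that loop is ansible-vault encrypt_string, which encrypts a single value rather than a whole file. For reference, it can also be run standalone; a minimal sketch, assuming the same vault password file and a hypothetical value:
# Encrypt one value; --stdin-name attaches the variable name to the output
echo -n 'supersecret' | ansible-vault encrypt_string \
    --vault-password-file ~/.ssh/vault_key.txt \
    --stdin-name value_one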
If you have some bizarre structure, run the file through yq first to clean it up:
yq r {input-file} | ./encrypt_values.sh > {output_file}
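That covers the first challenge, encrypting value by value. For the second, routing each encrypted block to the file named in the comment, here is a hedged sketch that splits the script's output on the "# <name>.yml" comment lines; config.yml is a hypothetical combined input, and this assumes the comment lines pass through the encryption loop intact (the loop above may need a guard to skip lines that contain no colon):
./encrypt_values.sh < config.yml | awk '
    /^# .*\.yml$/ { out = substr($0, 3); next }  # a comment names the target file
    out != ""     { print > out }                # append to the current target
'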

Related

echo strings with environment variables from lines pulled from a file in bash

I have a file like so:
- ${VAR1}/blah/blah:/blah1
- ${VAR2}/blah/blah:/blah2
- $VAR3:/blah3
I ultimately need to create those three folders.
I am using sed to extract the folder part:
$ cat test.txt | grep -E '^ +- \$.*?:.*?$' | sed 's/.*- \(\$.*\):.*/\1/g'
${VAR1}/blah/blah
${VAR2}/blah/blah
$VAR3
I need to create those folders but I need those shell variables to expand. Right now they don't:
$ cat test.txt | grep -E '^ +- \$.*?:.*?$' | sed 's/.*- \(\$.*\):.*/\1/g' | while read line; do echo "$line"; done
${VAR1}/blah/blah
${VAR2}/blah/blah
$VAR3
Is there a way to get the expanded strings so I can run mkdir instead of echo to make the folders?
You may use this bash script with envsubst:
#!/usr/bin/env bash
export VAR1 VAR2 VAR3
while IFS=' -:' read -r _ d _; do
    mkdir -p "$d"
done < <(envsubst < test.txt)
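To see what the process substitution feeds the loop, here is a hypothetical run with example values for the three variables:
$ export VAR1=/srv/a VAR2=/srv/b VAR3=/srv/c
$ envsubst < test.txt
- /srv/a/blah/blah:/blah1
- /srv/b/blah/blah:/blah2
- /srv/c:/blah3
read then splits each line on spaces, dashes and colons, leaving the directory path in d.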
Alternatively use this envsubst + awk + xargs solution:
envsubst < test.txt |
    awk -F '[-:[:blank:]]+' -v ORS='\0' '{print $2}' |
    xargs -0 mkdir -p
First of all, those variables should be exported to be accessible from your script. Then you could just use a combination of the cut and tr commands to extract the dir name in a loop like the following:
#!/bin/bash -eu
while read -r LINE; do
    echo "$LINE" | cut -d ':' -f 1 | tr -d ' ' | tr -d '-'
done < test.txt
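That loop only prints the unexpanded names. A hedged sketch that combines it with envsubst from the previous answer to actually expand the variables and create the directories (note that tr -d '-' would also strip dashes occurring inside a path):
#!/bin/bash -eu
export VAR1 VAR2 VAR3
while read -r LINE; do
    # Take the part before the colon, drop spaces and the list dash, then expand
    dir=$(echo "$LINE" | cut -d ':' -f 1 | tr -d ' ' | tr -d '-' | envsubst)
    mkdir -p "$dir"
done < test.txt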

Write to file from within a for loop in Bash

Let's say I have the following csv file:
A,1
A,2
B,3
C,4
C,5
And for each unique value i in the first column of the file I want to write a script that does some processing using this value. I go about doing it this way:
CSVFILE=path/to/csv
VALUES=$(cut -d, -f1 $CSVFILE | sort | uniq)
for i in $VALUES;
do
    cat >> file_${i}.sh <<-!
#!/bin/bash
#
# script that takes value I
#
echo "Processing" $i
!
done
However, this creates empty files for all values of i it is looping over, and prints the actual content of files to the console.
Is there a way to redirect the output to the files instead?
Simply
#!/bin/bash
FILE=/path/to/file
values=`cat $FILE | awk -F, '{print $1}' | sort | uniq | tr '\n' ' '`
for i in $values; do
    echo "value of i is $i" >> file_$i.sh
done
Try using this:
#!/usr/bin/env bash
csv=/path/to/file
while IFS= read -r i; do
    cat >> "file_$i.sh" <<-eof
#!/bin/bash
#
# Script that takes value $i ...
#
eof
done < <(cut -d, -f1 "$csv" | sort -u)
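Running either version against the sample csv should leave one generated script per unique key; a hypothetical check:
$ ls file_*.sh
file_A.sh  file_B.sh  file_C.sh
$ cat file_A.sh
#!/bin/bash
#
# Script that takes value A ...
#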

Inline array substitution

I have a file with a few lines:
x 1
y 2
z 3 t
I need to pass each line as a parameter to some program:
$ program "x 1" "y 2" "z 3 t"
I know how to do it with two commands:
$ readarray -t a < file
$ program "${a[#]}"
How can i do it with one command? Something like that:
$ program ??? file ???
The (default) options of your readarray command indicate that your file items are separated by newlines.
So in order to achieve what you want in one command, you can take advantage of the special IFS variable to use word splitting w.r.t. newlines (see e.g. this doc) and call your program with a non-quoted command substitution:
IFS=$'\n'; program $(cat file)
As suggested by @CharlesDuffy:
you may want to disable globbing by running beforehand set -f, and if you want to keep these modifications local, you can enclose the whole in a subshell:
( set -f; IFS=$'\n'; program $(cat file) )
to avoid the performance penalty of the parens and of the /bin/cat process, you can write instead:
( set -f; IFS=$'\n'; exec program $(<file) )
where $(<file) is a Bash equivalent to $(cat file) (faster, as it doesn't require forking /bin/cat), and exec consumes the subshell created by the parens.
However, note that the exec trick won't work and should be removed if program is not a real program in the PATH (that is, you'll get exec: program: not found if program is just a function defined in your script).
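To check the word splitting without a real program on the PATH, you can substitute a function that prints one argument per line (which, per the note above, also means dropping exec); a small sketch:
# Hypothetical stand-in for "program": prints each argument on its own line
program() { printf '<%s>\n' "$@"; }
( set -f; IFS=$'\n'; program $(<file) )
# Output:
# <x 1>
# <y 2>
# <z 3 t>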
Passing a set of params should be more organized:
In this example I'm looking for a file containing chk_disk_issue=something etc., so I set the values by reading a config file which I pass in as a param.
# -- read specific variables from the config file (if found) --
if [ -f "${file}" ] ;then
    while IFS= read -r line ;do
        if ! [[ $line = *"#"* ]]; then
            var="$(echo $line | cut -d'=' -f1)"
            case "$var" in
                chk_disk_issue)
                    chk_disk_issue="$(echo $line | tr -d '[:space:]' | cut -d'=' -f2 | sed 's/[^0-9]*//g')"
                    ;;
                chk_mem_issue)
                    chk_mem_issue="$(echo $line | tr -d '[:space:]' | cut -d'=' -f2 | sed 's/[^0-9]*//g')"
                    ;;
                chk_cpu_issue)
                    chk_cpu_issue="$(echo $line | tr -d '[:space:]' | cut -d'=' -f2 | sed 's/[^0-9]*//g')"
                    ;;
            esac
        fi
    done < "${file}"
fi
If these are not params, then find a way for your script to read them as data inside the script, and pass in the file name.
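A hypothetical config file this loop would parse (lines containing # are skipped, and the sed keeps only the digits of each value):
chk_disk_issue=90
chk_mem_issue=80
chk_cpu_issue=75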

bash scripting to add users

I created a bash script that reads information such as username, group, etc. from a text file and creates users based on it in Linux. The code seems to function properly and creates the users as desired, but the user information in the last line of the text file always gets misinterpreted. Even if I delete it, the next last line gets misinterpreted, i.e. the text is read wrongly.
#!/bin/bash
userfile="users.txt"
IFS=$'\n'
if [ ! -f "$userfile" ]
then
    echo "File does not exist. Specify a valid file and try again. "
    exit
fi
groups=(`cut -f 4 "$userfile" | sed 's/ //'`)
fullnames=(`cut -f 1 "$userfile" | sed 's/,//' | sed 's/"//g'`)
username1=(`cut -f 1 "$userfile" |sed 's/,//' | sed 's/"//' | tr [A-Z] [a-z] | awk '{print substr($2,1,1) substr($3,1,1) substr($1,1,1)}'`)
username2=(`cut -f 4 "$userfile" | tr [A-Z] [a-z] | awk '{print substr($1,1,1)}'`)
i=0
n=${#username1[@]}
for (( q=0; q<n; q++ ))
do
    usernames[$q]=${username1[$q]}"${username2[$q]}"
done
declare -a usernames
x=0
created=0
for user in ${usernames[*]}
do
    adduser -c ${fullnames[$x]} -p 123456789 -f 15 -m -d /home/${groups[$x]}/$user -K LOGIN_RETRIES=3 -K PASS_MAX_DAYS=30 -K PASS_WARN_AGE=3 -N -s /bin/bash $user 2> /dev/null
    usermod -g ${groups[$x]} $user
    chage -d 0 $user
    let created=$created+1
    x=$x+1
    echo -e "User $user created "
done
echo "$created Users created"
#!/bin/bash
userfile="./users.txt"; # <-- Config
while read line; do
    # FULL NAME
    # Capture all between quotes as full name
    fullname=$(printf '%s' "${line}" | sed 's/^"\(.*\)".*/\1/')
    # Remove spaces and punctuation
    fullname=$(printf '%s' "${fullname}" | tr -d '[:punct:][:blank:]')
    # Right-side fields (everything after the quoted name)
    partb=$(printf '%s' "${line}" | sed "s/^\".*\"//g")
    # CODE 1, capture second column
    code1=$(printf '%s' "${partb}" | cut -f 2 )
    # CODE 2, capture third column
    code2=$(printf '%s' "${partb}" | cut -f 3 )
    # GROUP, capture fourth column
    group=$(printf '%s' "${partb}" | cut -f 4 )
    # Print only for report (-e so \n expands)
    echo -e "fullname: ${fullname}\n code 1: ${code1}\n code 2: ${code2}\n group: ${group}\n"
done < "${userfile}"
Maybe these are the fields that you want; now you have them in variables to manipulate: $fullname, $code1, $code2 and $group.
Although maybe the failure you observed was due to a misplaced quotation mark in the text file or the line breaks; in the attached screenshot I can see one missing quote.
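For a quick test, here is one way to create a hypothetical users.txt record in the tab-separated layout both scripts assume (cut -f defaults to the tab delimiter):
# Write one hypothetical test record; fields are tab-separated
printf '"Doe, John Michael"\tA1\tB2\tengineering\n' > users.txt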

hash each line in text file

I'm trying to write a little script which will open a text file and give me an md5 hash for each line of text. For example I have a file with:
123
213
312
I want output to be:
ba1f2511fc30423bdbb183fe33f3dd0f
6f36dfd82a1b64f668d9957ad81199ff
390d29f732f024a4ebd58645781dfa5a
I'm trying to do this part in bash which will read each line:
#!/bin/bash
#read.file.line.by.line.sh
while read line
do
    echo $line
done
later on I do:
$ more 123.txt | ./read.line.by.line.sh | md5sum | cut -d ' ' -f 1
but I'm missing something here; it does not work :(
Maybe there is an easier way...
Almost there, try this:
while read -r line; do printf %s "$line" | md5sum | cut -f1 -d' '; done < 123.txt
Unless you also want to hash the newline character in every line, you should use printf or echo -n instead of plain echo.
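The trailing newline really does change the digest; compare (the second hash matches the first line of the desired output above):
$ printf '123' | md5sum
202cb962ac59075b964b07152d234b70  -
$ printf '123\n' | md5sum
ba1f2511fc30423bdbb183fe33f3dd0f  -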
In a script:
#! /bin/bash
cat "$@" | while read -r line; do
    printf %s "$line" | md5sum | cut -f1 -d' '
done
The script can be called with multiple files as parameters.
You can just call md5sum directly in the script:
#!/bin/bash
#read.file.line.by.line.sh
while read line
do
    echo $line | md5sum | awk '{print $1}'
done
That way the script spits out directly what you want: the md5 hash of each line.
This worked for me:
cat $file | while read line; do printf %s "$line" | tr -d '\r\n' | md5 >> hashes.csv; done
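Note that md5 is the macOS command and prints just the digest when reading stdin; a hedged Linux equivalent swaps in md5sum plus cut:
# Linux equivalent of the macOS md5 call above
cat "$file" | while read -r line; do
    printf %s "$line" | tr -d '\r\n' | md5sum | cut -d ' ' -f1 >> hashes.csv
done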
