Bash script: explode string and save to file

I need some help. I have this file:
info.txt
[test.local]
user=test
group=test
;
[game.local]
user=game
group=game
;
This is my objective: I want the content split on ; and each block saved to a file whose name is based on the value inside [ ], like this:
test.local.txt
[test.local]
user=test
group=test
game.local.txt
[game.local]
user=game
group=game
and here is my current code, files.sh:
#!/bin/bash
value=$(<info.txt)
SAVEIFS=$IFS
IFS=$';'
val=($value)
IFS=$SAVEIFS
for (( i=0; i<${#val[@]}; i++ ))
do
echo "${val[$i]}"
done
I'm stuck at just printing the array; how can I achieve the rest?

You may use this gnu-awk command:
awk -v RS=';\n' 'NF{f=$1; gsub(/[][]/, "", f); printf "%s", $0 > (f ".txt")}' info.txt
Details:
-v RS=';\n': sets input record separator to ; followed by newline
NF{...}: Execute only for non-empty records
f=$1: Save $1, which is the [...] line, in variable f
gsub(/[][]/, "", f): Removes [ and ] from variable f
printf: Writes the whole record to a file named from the value of f plus ".txt"
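If you want to stay in pure bash instead, here is a minimal sketch under the same input format (a line-by-line reader rather than an array split; the [section] header names the output file):
#!/bin/bash
# Sketch: split info.txt on ';' lines into <section>.txt files, pure bash
out=""
while IFS= read -r line; do
    if [[ $line == ";" ]]; then          # record separator: close the current block
        out=""
    elif [[ $line == \[*\] ]]; then      # [section] header: derive the file name
        out="${line//[\[\]]/}.txt"
        printf '%s\n' "$line" > "$out"
    elif [[ -n $out ]]; then             # body line: append to the current file
        printf '%s\n' "$line" >> "$out"
    fi
done < info.txt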

You can use
fn=$(echo "${val[$i]}"|head -n 1| tr -d '[]').txt
to find the name of the file to create and
echo "${val[$i]}"|tail -n +2
to produce the content of the file.
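Putting those pieces into your existing loop, a sketch might look like this (note the expected output above keeps the [...] header line, so the whole block is written rather than only tail -n +2; block and fn are illustrative names):
for (( i=0; i<${#val[@]}; i++ ))
do
    block=${val[$i]#$'\n'}          # strip the leading newline left by the split
    [[ -z $block ]] && continue     # skip empty chunks
    fn=$(echo "$block" | head -n 1 | tr -d '[]').txt
    printf '%s' "$block" > "$fn"
done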

Related

How to use variable with awk when being read from a file

I have a file with the following entries:
foop07_bar2_20190423152612.zip
foop07_bar1_20190423153115.zip
foop08_bar2_20190423152612.zip
foop08_bar1_20190423153115.zip
where
foop0* = host
bar* = fp
I would like to read the file and create 3 variables, the whole file name, host and fp (which stands for file_path_differentiator).
I am using read to take the first line and get my whole-file-name variable. I thought I could then feed this into awk to grab the next two variables; however, the first method of variable insertion creates an error, and the second gives me all the variables at once.
I would like to loop over each line, as I wish to use these variables to ssh to the host and grab the file.
#!/bin/bash
while read -r FILE
do
echo ${FILE}
host=`awk 'BEGIN { FS = "_" } ; { print $1 }'<<<<"$FILE"`
echo ${host}
path=`awk -v var="${FILE}" 'BEGIN { FS = "_" } ; { print $2 }'`
echo ${path}
done <zips_not_received.csv
Expected Result
foop07_bar2_20190423152612.zip
foop07
bar2
foop07_bar1_20190423153115.zip
foop07
bar1
Actual Result
foop07_bar2_20190423152612.zip
/ : No such file or directoryfoop07_bar2_20190423152612.zip
bar2 bar1 bar2 bar1
You can do this with bash alone, without using any external tool.
while read -r file; do
[[ $file =~ (.*)_(.*)_.*\.zip ]] || { echo "invalid file name"; exit 1; }
host="${BASH_REMATCH[1]}"
path="${BASH_REMATCH[2]}"
echo "$file"
echo "$host"
echo "$path"
done < zips_not_received.csv
Typical... managed to work out a solution after posting:
#!/bin/bash
while read -r FILE
do
echo ${FILE}
host=`echo "$FILE" | awk -F"_" '{print $1}'`
echo $host
path=`echo "$FILE" | awk -F"_" '{print $2}'`
echo ${path}
done <zips_not_received.csv
Not sure about the elegance or correctness, as I am using echo to create the variables... but I have it working.
Assuming there is no space or _ in your file names that is part of the host or path, just split the line beforehand with sed, awk, etc., using the default whitespace separator (or use _ as the separator directly). I added removal of empty lines as a basic safeguard, given your sample.
sed '/^[[:blank:]]*$/d;s/_/ /g' zips_not_received.csv \
| while read host path Ignored
do
echo "${host}"
echo "${path}"
done

Take multiple (any number of) input strings and concatenate in shell

I want to input multiple strings.
For example:
abc
xyz
pqr
and I want output like this (including quotes) in a file:
"abc","xyz","pqr"
I tried the following code, but it doesn't give the expected output.
NextEmail=","
until [ "a$NextEmail" = "a" ];do
echo "Enter next E-mail: "
read NextEmail
Emails="\"$Emails\",\"$NextEmail\""
done
echo -e $Emails
This seems to work:
#!/bin/bash
# via https://stackoverflow.com/questions/1527049/join-elements-of-an-array
function join_by { local IFS="$1"; shift; echo "$*"; }
emails=()
while read -r line
do
if [[ -z $line ]]; then break; fi
emails+=("$line")
done
join_by ',' "${emails[@]}"
$ bash vvuv.sh
my-email
another-email
third-email
my-email,another-email,third-email
$
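Note that the question asked for quotes around each item. One way to get "my-email","another-email",... is to quote each element before joining (a sketch using pattern substitution; quoted is an illustrative name):
quoted=("${emails[@]/#/\"}")   # prepend a quote to each element
quoted=("${quoted[@]/%/\"}")   # append a quote to each element
join_by ',' "${quoted[@]}"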
With sed and paste:
sed 's/.*/"&"/' infile | paste -sd,
The sed command puts "" around each line; paste does serial pasting (-s) and uses , as the delimiter (-d,).
If input is from standard input (and not a file), you can just remove the input filename (infile) from the command; to store in a file, add a redirection at the end (> outfile).
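For example, with the three lines from the question on standard input:
$ printf 'abc\nxyz\npqr\n' | sed 's/.*/"&"/' | paste -sd, -
"abc","xyz","pqr"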
If you can withstand a trailing comma, then printf can convert an array, with no loop required...
$ readarray -t a < <(printf 'abc\nxyz\npqr\n')
$ declare -p a
declare -a a=([0]="abc" [1]="xyz" [2]="pqr")
$ printf '"%s",' "${a[@]}"; echo
"abc","xyz","pqr",
(To be fair, there's a loop running inside bash, to step through the array, but it's written in C, not bash. :) )
If you wanted, you could replace the final line with:
$ printf -v s '"%s",' "${a[@]}"
$ s="${s%,}"
$ echo "$s"
"abc","xyz","pqr"
This uses printf -v to store the imploded text into a variable, $s, which you can then strip the trailing comma off using Parameter Expansion.
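Putting that together as a small script (a sketch; it reads items from stdin until EOF, and the file name join.sh is hypothetical):
#!/bin/bash
# Read items (one per line) and emit them quoted and comma-separated
readarray -t a                    # one array element per input line
printf -v s '"%s",' "${a[@]}"     # implode into $s with a trailing comma
printf '%s\n' "${s%,}"            # strip the trailing comma and print
$ printf 'abc\nxyz\npqr\n' | bash join.sh
"abc","xyz","pqr"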

Trying to take an input file and text line from a given file and save it to another, using bash

What I have is a file (let's call it 'xfile'), containing lines such as
file1 <- this line goes to file1
file2 <- this goes to file2
and what I want to do is run a script that does the work of actually taking the lines and writing them into the file.
The way I would do that manually could be like the following (for the first line)
(echo "this line goes to file1"; echo) >> file1
So, to automate it, this is what I tried to do
IFS=$'\n'
for l in $(grep '[a-z]* <- .*' xfile); do
$(echo $l | sed -e 's/\([a-z]*\) <- \(.*\)/(echo "\2"; echo)\>\>\1/g')
done
unset IFS
But what I get is
-bash: file1(echo "this content goes to file1"; echo)>>: command not found
-bash: file2(echo "this goes to file2"; echo)>>: command not found
(on OS X)
What's wrong?
This solves your problem on Linux
awk -F ' <- ' '{print $2 >> $1}' xfile
Take care in choosing the field separator so that the new files' contents do not have leading or trailing spaces.
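With the xfile above, this creates file1 and file2; since awk keeps the output files open, repeated lines for the same target simply append:
$ awk -F ' <- ' '{print $2 >> $1}' xfile
$ cat file1
this line goes to file1
$ cat file2
this goes to file2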
Give this a try on OS X:
You can use the regex capabilities of bash directly. When you use the =~ operator to compare a variable to a regular expression, bash populates the BASH_REMATCH array with matches from the groups in the regex.
re='(.*) <- (.*)'
while read -r; do
if [[ $REPLY =~ $re ]]; then
file=${BASH_REMATCH[1]}
line=${BASH_REMATCH[2]}
printf '%s\n' "$line" >> "$file"
fi
done < xfile
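You can test the capture groups at the prompt before wiring them into the loop, for example:
$ re='(.*) <- (.*)'
$ [[ 'file1 <- this line goes to file1' =~ $re ]] && echo "${BASH_REMATCH[1]} | ${BASH_REMATCH[2]}"
file1 | this line goes to file1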

Column separation inside shell script

If I have file.txt with the data:
abcd!1023!92
efgh!9873!xk
and a basic tutorial.sh file which goes through each line
while read line
do
name=$line
done < $1
How do I separate the data between the "!" into columns, select the second column, and add the values? (I am aware of "sed -k 2 | bc" but I can't / don't understand how to get it to work within a shell script.)
You can use awk:
awk -F '!' '{sum += $2} END{print sum}' file
10896
To adjust your while loop:
while IFS='!' read -r a b c
do
((sum += b))
done < "$1" # always quote "$vars"
echo "$sum"
IFS is the shell's "internal field separator" used for splitting strings into words. It's normally "whitespace" but you can use it for your specific needs.
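For example, splitting one of the sample lines by hand:
$ IFS='!' read -r a b c <<< 'abcd!1023!92'
$ echo "$b"
1023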

Best way to run a command using either a file as input or stdin

I am writing a script to format a file such that each column has a width of the length of its longest record plus one. The script works fine whether run as ./auto_format file or as cat file | ./auto_format:
#!/bin/bash
# auto_format file
case $# in
1)
file="$1"
;;
0)
file=$(mktemp) || { echo "failed, exiting..." 1>&2; exit 1; }
cat > $file <&0
;;
*)
echo "usage: auto_format [file]" 1>&2
exit 1
;;
esac
awk '
NR==FNR {
for (i=1;i<=NF;i++) {
if (length($i) > max[i]) max[i]=length($i);
}
}
NR!=FNR {
for (i=1;i<=NF;i++){
printf "%-*s", max[i]+1, $i
}
printf "\n"
}
' "$file" "$file"
However, I do not like the use of a temporary file when receiving input from STDIN, and was wondering if I could pass on a copy of input to awk so I don't have to use a temp file. Something like: awk [script] STDIN COPY_STDIN
It seems like you're making this harder than it has to be. Awk is perfectly capable of handling piped stdin or a file, and you don't need a tmp file unless your input is huge, which, from your comments, it sounds like it's not:
$ cat tst.sh
awk '
{
for (i=1;i<=NF;i++) {
if (length($i) > max[i]) max[i]=length($i);
}
line[NR] = $0
}
END {
for (nr=1; nr<=NR; nr++) {
nf = split(line[nr],flds)
for (i=1; i<=nf; i++) {
printf "%-*s", max[i]+1, flds[i]
}
print ""
}
}
' "$#"
$ cat file
abc de fghi
abcde f ghiklm
$
$ ./tst.sh file
abc de fghi
abcde f ghiklm
$
$ cat file | ./tst.sh
abc de fghi
abcde f ghiklm
One great way to handle this is to redirect your stdin from a file if that file is provided:
if [ -n "$1" ]; then exec <"$1"; fi
This will open the file in your first argument, replacing stdin, if and only if a filename is provided.
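As a minimal self-contained sketch of that pattern (upper.sh is a hypothetical filter that uppercases its input):
#!/bin/bash
# If a filename was given, make it our stdin; otherwise keep the existing pipe
if [ -n "$1" ]; then exec <"$1"; fi
tr '[:lower:]' '[:upper:]'
Both ./upper.sh file and cat file | ./upper.sh then behave identically.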
That said, your specific case here is trickier, and you do need to capture content, since you want to return the user's input twice. However, you don't necessarily need to capture out to a file -- capturing to a variable, and playing that variable back twice, will do. If your content doesn't contain NULs, that's as simple as the following:
#!/bin/bash
# ^- this will not work with /bin/sh
if [ -n "$1" ]; then exec <"$1"; fi
IFS= read -r -d '' content
awk ... <(printf '%s' "$content") <(printf '%s' "$content")
If your content does contain NULs, a solution is still possible by storing content in an array rather than a scalar variable (since POSIX shells use C-style NUL-terminated strings, a scalar can't contain a NUL -- but the divisions between array entries can represent the places where NULs would be), but the corner cases get a bit hairy; frankly, it's easier to use a temporary file (or a language like Python that uses Pascal strings, which aren't NUL-delimited) at that point.
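If you do fall back to a temporary file, a sketch with automatic cleanup (the '...' stands for the two-pass awk program from the question):
#!/bin/bash
if [ -n "$1" ]; then exec <"$1"; fi
tmp=$(mktemp) || { echo "mktemp failed" >&2; exit 1; }
trap 'rm -f "$tmp"' EXIT     # remove the temp file on any exit
cat > "$tmp"                 # capture stdin; NULs survive fine in a file
awk '...' "$tmp" "$tmp"      # read the captured input twice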
