How to find values in quotes using bash? - bash

I have a file with the following content:
"X-Apple-I-MD-M" = "MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s";
I want to extract the quoted value on the right-hand side, so the output is:
MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s
Thanks, everybody!

One awk idea, assuming this is the only line in the file:
$ awk -F'"' '{print $4}' file
MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s
If there are other lines and you wish to focus only on the line with the string "X-Apple-I-MD-M":
Input file:
$ cat file
some line to ignore
"X-Apple-I-MD-M" = "MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s";
other line to ignore and "with" some "quotes"
New awk idea:
$ pattern='X-Apple-I-MD-M'
$ awk -v ptn="${pattern}" -F'"' '$2==ptn {print $4}' file
MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s
And saving the awk result in a variable:
$ mystring=$(awk ... )
$ echo "${mystring}"
MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s
NOTE: keep in mind that if there are multiple matching lines in the file, then ${mystring} will contain a multi-line value (eg, line1match\nline2match\nline3match).
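If you only want the first matching line (so that ${mystring} stays a single-line value), one tweak (just a sketch of the same awk idea) is to exit after the first print:
$ awk -v ptn="${pattern}" -F'"' '$2==ptn {print $4; exit}' file
MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s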

I always like sed.
$: echo '"X-Apple-I-MD-M" = "MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s";'| sed -E 's/^.*= *"([^"]+)" *; *$/\1/'
MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s
If it's a file:
$: sed -E 's/^.*= *"([^"]+)" *; *$/\1/' file
MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s

With GNU grep(1), something like:
grep -Po '(?<="X-Apple-I-MD-M" = ").*(?=";)' <<< '"X-Apple-I-MD-M" = "MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s";'
If it is in a file:
grep -Po '(?<="X-Apple-I-MD-M" = ").*(?=";)' file.txt

If your content is consistent, an ugly solution is:
VAL='"X-Apple-I-MD-M" = "MR7v7ctwW0yr3mAUY3rAluXgOReA4CIn1JWJS2ba1s";'
echo $VAL
echo $VAL | awk '{split($0, a, " = "); print(substr(a[2], 2, length(a[2]) - 3))}'

Judging by the bash tag, this is probably supposed to be pure Bash, without external processes…? Two (somewhat) random options:
while IFS='"' read _ _ _ code _; do
echo "$code"
done
while IFS= read -r line; do
    line="${line#\"*\" = \"}"
    line="${line%\";}"
    echo "$line"
done < file

Related

Removing newlines in a txt file

I have a txt file in a format like this:
test1
test2
test3
How can I bring it into a format like this using bash?
test1,test2,test3
Assuming that “using Bash” means “without any external processes”:
if IFS= read -r line; then
    printf '%s' "$line"
    while IFS= read -r line; do
        printf ',%s' "$line"
    done
    echo
fi
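To run it against the file itself (a usage sketch; test.txt is just a placeholder name), redirect the file into the whole compound command:

{
    if IFS= read -r line; then
        printf '%s' "$line"
        while IFS= read -r line; do
            printf ',%s' "$line"
        done
        echo
    fi
} < test.txt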
Old answer here
TL;DR:
cat "export.txt" | paste -sd ","
Another pure bash implementation that avoids explicit loops:
#!/usr/bin/env bash
file2csv() {
    local -a lines
    readarray -t lines <"$1"
    local IFS=,
    printf "%s\n" "${lines[*]}"
}

file2csv input.txt
You can use awk. If the file name is test.txt, then:
awk '{print $1}' ORS=',' test.txt | awk '{print substr($1, 1, length($1)-1)}'
The first awk command joins the three lines with commas (test1,test2,test3,).
The second awk command just deletes the trailing comma from the string.
Use the 'tr' (translate) tool to join the lines and sed to remove the last comma:
tr '\n' , < "$source_file" | sed 's/,$//'
If you want to save the output into a variable:
var="$( tr '\n' , < "$source_file" | sed 's/,$//' )"
Using sed:
$ sed ':a;N;$!ba;s/\n/,/g' file
Output:
test1,test2,test3
If you don't want a terminating newline:
$ awk '{printf "%s%s", sep, $0; sep=","}' file
test1,test2,test3
or if you do:
awk '{printf "%s%s", sep, $0; sep=","} END{print ""}' file
test1,test2,test3
Another loopless pure Bash solution:
contents=$(< input.txt)
printf '%s\n' "${contents//$'\n'/,}"
contents=$(< input.txt) is equivalent to contents=$(cat input.txt). It puts the contents of the input.txt file (with trailing newlines automatically removed) into the variable contents.
"${contents//$'\n'/,}" replaces all occurrences of the newline character ($'\n') in contents with the comma character. See Parameter expansion [Bash Hackers Wiki].
See the accepted, and excellent, answer to Why is printf better than echo? for an explanation of why printf '%s\n' is used instead of echo.

Read multiple variables from file

I need to read a file that has lines like
user=username1
pass=password1
How can I read multiple lines like this into separate variables like username and password?
Would I use awk or grep? I have found ways to read lines into variables with grep but would I need to read the file for each individual item?
The end result is to use these variables to access a database via the command line. So I need to be able to read, store and use these values in other commands.
If the process which generates the file is trusted and the file uses shell syntax, just source the file:
. ./file
Otherwise the file can be processed first to add quotes:
perl -ne 'if (/^([A-Za-z_]\w*)=(.*)/) {$k=$1;$v=$2;$v=~s/\x27/\x27\\\x27\x27/g;print "$k=\x27$v\x27\n";}' <file >file2
. ./file2
If you want to use awk, then:
Input
$ cat file
user=username1
pass=password1
Reading
$ user=$(awk -F= '$1=="user"{print $2;exit}' file)
$ pass=$(awk -F= '$1=="pass"{print $2;exit}' file)
Output
$ echo $user
username1
$ echo $pass
password1
You could use a loop for your file perhaps, but this is probably the functionality you're looking for.
$ echo 'user=username1' | awk -F= '{print $2}'
username1
Using the -F flag sets the delimiter to = and we select the 2nd item from the row.
file.txt:
user=username1
pass=password1
user=username2
pass=password2
user=username3
pass=password3
To avoid reading the file file.txt several times:
#!/usr/bin/env bash
func () {
    echo "user:$1 pass:$2"
}

i=0
while IFS='' read -r line; do
    if [ $i -eq 0 ]; then
        i=1
        user=$(echo "${line}" | cut -f2 -d'=')
    else
        i=0
        pass=$(echo "${line}" | cut -f2 -d'=')
        func "$user" "$pass"
    fi
done < file.txt
Output:
user:username1 pass:password1
user:username2 pass:password2
user:username3 pass:password3
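A variant of the same loop (only a sketch) lets read do the splitting on '=', which avoids spawning two cut processes per record:

#!/usr/bin/env bash
func () {
    echo "user:$1 pass:$2"
}

while IFS='=' read -r key value; do
    case $key in
        user) user=$value ;;
        pass) pass=$value
              func "$user" "$pass" ;;
    esac
done < file.txt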

extracting a variable's value from text file using bash

I am using Linux and bash.
I have a simple text file like below:
VAR1=100
VAR2=5
VAR3=0
VAR4=99
I want to extract by means of bash the value of VAR2, that is 5.
How could I do that?
Assuming the file is called vars.txt
sed -n 's/^VAR2=\(.*\)/\1/p' < vars.txt
You can use the value elsewhere like this, using backticks (command substitution):
echo VAR2=`sed -n 's/^VAR2=\(.*\)/\1/p' < vars.txt`
The simplest way might be to use source or simply . to read and execute the file. This would work with your example, because there are no spaces in the variable values. Otherwise you need to use grep + cut or awk, as stated in other answers.
. /path/to/your/file
echo $VAR2
[edit]
As stated by dawg, this would make the other variables available in your script too, and possibly overwrite existing variables.
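One way around that (a sketch, not part of the original answer) is to source the file inside a command substitution, so the assignments stay in the subshell and only the value you ask for comes back:

value=$( . /path/to/your/file && echo "$VAR2" )
echo "$value"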
Given:
$ echo "$txt"
VAR1=100
VAR2=5
VAR3=0
VAR4=99
You can use awk:
$ echo "$txt" | awk -F= '/^VAR2/ { print $2 }'
5
Or grep and cut:
$ echo "$txt" | egrep '^VAR2=\d+' | cut -d = -f 2
5
On Bash, you can insert the value of those assignments into the current shell using source and filter the lines you wish to use. In this case, only the line VAR2=5 will be used. You need to write that to a file and then source that file:
$ echo "$txt" | grep '^VAR2' > tmp && source tmp && rm tmp
$ echo $VAR2
5
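If you would rather avoid the temporary file, the same idea works with process substitution (a sketch):
$ source <(echo "$txt" | grep '^VAR2')
$ echo $VAR2
5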
For files like the one described, you can just source the file as a bash script, which will run its content and update your workspace environment with it. For example:
source file.txt
echo $VAR2
Assume this is your txt file, named test.txt:
VAR2 = 5
VAR3 = 0
VAR4 = 99
You can run: cat test.txt | grep 'VAR2' | awk '{printf $3}'
and then your output will be: 5
Here, cat test.txt prints the content of test.txt, grep 'VAR2' keeps the lines containing 'VAR2', and awk '{printf $3}' prints the third field, which is the value of the variable.
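Because the fields in this test.txt are separated by ' = ', a single awk call can do the whole job (a sketch of the same idea, with no cat or grep needed):
awk -F' = ' '$1=="VAR2" {print $2}' test.txt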

how to concatenate lines into one string

I have a function in bash that outputs a bunch of lines to stdout. I want to combine them into a single line with some delimiter between them.
Before:
one
two
three
After:
one:two:three
What is an easy way to do this?
Use paste
$ echo -e 'one\ntwo\nthree' | paste -s -d':'
one:two:three
And another way (note that the file's final newline becomes a trailing ':'):
cat file | tr -s "\n" ":"
This might work for you:
paste -sd':' file
For fun, here's a bash-only way:
echo $'one\n2 and 3\nfour' | { mapfile -t lines; IFS=:; echo "${lines[*]}"; }
outputs
one:2 and 3:four
The {} grouping is to ensure all the commands that refer to the array variable are executed in the same subshell. The variable will not exist once the pipeline ends.
http://www.gnu.org/software/bash/manual/bashref.html#index-mapfile-140
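Since the question mentions a function writing to stdout, the same trick applies to it directly; a usage sketch, where my_function stands in for whatever produces the lines:
my_function | { mapfile -t lines; IFS=:; echo "${lines[*]}"; }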
Taking @glennJackman's corrections verbatim:
awk '{printf("%s%s", sep, $0); sep=":"} END {print ""}' file
Or, since you specified bash:
while read -r line ; do printf "%s:" "$line" ; done < file | sed 's/:$//'
I hope this helps
Input.txt
one
two
three
Perl solution: dummy.pl
@a = `cat /home/Input.txt`;
foreach my $x (@a)
{
    chomp($x);
    push(@array, "$x");
}
chomp(@array);
print "@array";
Run the script as:
$> perl dummy.pl | sed 's/ /:/g' > Output.txt
Output.txt
one:two:three

using awk within loop to replace field

I have written a script that finds the hash value of each word in a dictionary and outputs it in the form "word:md5sum". I also have a file of names, and for each name I would like to output the name followed by every hash value, i.e.
tom:word1hash
tom:word2hash
.
.
bob:word1hash
and so on. Everything works fine but I can not figure out the substitution. Here is my script.
#!/bin/bash
#/etc/dictionaries-common/words
cat words.txt | while read line; do echo -n "$line:" >> dbHashFile.txt
echo "$line" | md5sum | sed 's/[ ]-//g' >> dbHashFile.txt; done
cat users.txt | while read name
do
cat dbHashFile.txt >> nameHash.txt;
awk '{$1="$name"}' nameHash.txt;
cat nameHash.txt >> dbHash.txt;
done
the line
awk '{$1="$name"}' nameHash.txt;
is where I attempt to do the substitution.
Thank you for your help.
Try replacing the entire contents of the last loop (both cats and the awk) with:
awk -v name="$name" -F ':' '{ print name ":" $2 }' dbHashFile.txt >>dbHash.txt
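For context, a sketch of how that fits back into the loop over users.txt from the question (same logic, just wrapped):

while IFS= read -r name; do
    awk -v name="$name" -F ':' '{ print name ":" $2 }' dbHashFile.txt
done < users.txt >> dbHash.txt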
