I have an env.txt file in the following format:
DRIVER={ODBC Driver 13 for SQL Server};
PORT=1433;
SERVER=servername;
DATABASE=db;
UID=username;
PWD=password!
I have a Git Bash script (.sh) that needs the UID and PWD from that file. I was thinking about grabbing them by line number (last / second-to-last line). How do I do this, or is there a better way (say, looking for UID and PWD and assigning the Git Bash variables that way)?
There are lots of ways to do this. You could use awk, which is what I would personally use since it's sort of like an X-Acto knife for this type of thing:
uid=$(awk -F"[=;]" '/UID/{print $2}' env.txt)
pwd=$(awk -F"[=;]" '/PWD/{print $2}' env.txt)
Or grep and sed. sed is nice because it allows you to get very specific about the piece of info you want to cut from the line, but it's regex, which has its learning curve:
uid=$(grep "UID" env.txt | sed -r 's/^.*=([^;]*)(;|$)/\1/')
pwd=$(grep "PWD" env.txt | sed -r 's/^.*=([^;]*)(;|$)/\1/')
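For reference, here's how that capture breaks down (annotation only; the backreference \1 keeps just the value):
# s/^.*=([^;]*)(;|$)/\1/
#   ^.*=     greedily match everything up to the last '='
#   ([^;]*)  capture the value (anything that isn't a ';')
#   (;|$)    followed by a trailing ';' or the end of the line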
As @JamesK noted in the comments, you can have sed do the search instead of grep. This is super nice and I would definitely choose it over the grep | sed:
uid=$(sed -nr '/UID/s/^.*=([^;]*)(;|$)/\1/p' env.txt)
pwd=$(sed -nr '/PWD/s/^.*=([^;]*)(;|$)/\1/p' env.txt)
Or grep and cut. Bleh... we can all do better, but sometimes we just want to grep and cut and not have to think about it:
uid=$(grep "UID" env.txt | cut -d"=" -f2 | cut -d";" -f1)
pwd=$(grep "PWD" env.txt | cut -d"=" -f2 | cut -d";" -f1)
I definitely wouldn't go by line number, though. That looks like an odbc.ini file, and the order in which the parameters are listed in each ODBC entry is irrelevant.
First, rename PWD to something like PASSWORD: PWD is a special variable used by the shell (and bash likewise treats UID as read-only, so that one is worth renaming too). Even better, use lowercase variable names for all your own variables.
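To see why the rename matters, note that the shell maintains PWD itself (a quick illustration, not part of the solution):
echo "$PWD"    # prints the current working directory
cd /tmp
echo "$PWD"    # now prints /tmp -- the shell rewrites PWD on every cd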
When the password contains no special characters (spaces, $, and the like), you can simply
source env.txt
When the password does contain something special, consider editing env.txt:
DRIVER="{ODBC Driver 13 for SQL Server}"
PORT="1433"
SERVER="servername"
DATABASE="db"
UID="username"
PASSWORD="password!"
When you are only interested in lowercase uid and pwd, consider selecting only the interesting fields and changing the keywords to lowercase:
source <(sed -rn '/^(UID|PWD)=/ s/([^=]*)/\L\1/p' env.txt)
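Run against the sample env.txt, the process substitution feeds source something like:
uid=username;
pwd=password!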
I have a list like below:
list="-list mail-username:123 --list mail-password:xyz --list url:https://www.google.com --list mail_username:123 --list mail_password:xyz --list mail_username:123 --list user_password:xyz"
I want to extract the values of all the passwords and store them in a variable.
I tried using sed but couldn't get it to work.
You could try:
pwds="$(echo "$list" | sed 's/[[:space:]]\{1,\}/\n/g' | sed '/password/!d;s/^[^:]*password:\(.*\)$/\1/g')"
This will not work for passphrases with spaces, though.
This will include the trailing newline in the final password, but since you seem to be playing fast and loose with whitespace, that shouldn't be a problem. (Or, at least, it's not more of a problem than the other issues that will crop up from being cavalier about whitespace!)
echo "$list" | awk '/password/{print $2}' RS=' ' FS=:
Hey, so I've got another predicament that I'm stuck on. I want to see approximately how many Indian people are using the Stampede computer, so I set up a text file in vim with about 50 of the most common surnames in India, and I want to compare those names to the username list.
So far, this is the code I have:
getent passwd | cut -f 5 -d: | cut -f 2 -d' '
getent passwd gets the user list, where each entry looks like this:
tg827313:x:827313:8144474:Brandon Williams
the cut commands will extract just the last name, so the output for the example entry will be
Williams
Now I can use grep to compare files, but how do I use it to compare the getent passwd list with the file?
To count how many of the last names of computer users appear in the file namefile, use:
getent passwd | cut -f 5 -d: | cut -f 2 -d' ' | grep -wFf namefile | wc -l
How it works
getent passwd | cut -f 5 -d: | cut -f 2 -d' '
This is your code which I will assume works as intended for you.
grep -wFf namefile
This selects names that match a line in namefile. The -F option tells grep not to treat the names as regular expressions but as fixed strings. The -f option tells grep to read those strings from a file. -w tells grep to match whole words only.
wc -l
This returns a count of the lines in the output.
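As a minimal sketch of the matching step (the surnames here are invented, not from your file):
printf '%s\n' Patel Singh > namefile
printf '%s\n' Williams Singh Chen | grep -wFf namefile | wc -l
# prints 1, because only Singh appears in namefile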
I am currently learning a little more about using the Bash shell in the OS X Terminal. I am trying to pipe the output of a cut command into a grep command, but the grep command gives no output even though I know there are matches. I am using the following command:
cut -d'|' -f2 <filename.txt> > <temp.txt> | grep -Ff <temp.txt> <searchfile.txt> > <filematches.txt>
I was thinking that this should work, but most of the examples I have seen pipe grep output into cut, not the other way around. My goal was to cut field 2 from the file and use that as the pattern to search for in <searchfile.txt>. However, the command produced no output.
When I generated temp.txt first with the cut command and then ran grep on it manually, with no pipe, grep ran fine. I am not sure why this is.
You can use process substitution here:
grep -Ff <(cut -d'|' -f2 filename.txt) searchfile.txt > filematches.txt
<(cut -d'|' -f2 filename.txt) is feeding cut command's output to grep as a file.
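If you also want temp.txt kept on disk for later inspection, one hedged variant is to tee into it and hand grep the patterns on stdin (GNU grep accepts - as the -f file):
cut -d'|' -f2 filename.txt | tee temp.txt | grep -Ff - searchfile.txt > filematches.txt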
Okay, the reason this line doesn't behave as you expect
cut -d'|' -f2 <filename.txt> > <temp.txt> | grep -Ff <temp.txt> <searchfile.txt> > <filematches.txt>
is that the output of your cut is going to temp.txt; you're not sending anything down the pipe. Now, conveniently, a pipe also starts a new command, so grep still runs and reads searchfile.txt. But both sides of a pipe start at the same time, so grep may well open temp.txt before cut has finished writing it, which is why you saw no matches.
But what are you trying to do? Here's what your command line is trying to do:
take the second pipe-delimited field from filename.txt
write it to a file
run grep ...
... using the contents of the file from step 2 as its patterns (with -f, each line of that file becomes a separate pattern, so grep looks for lines matching any of match1, match2, ...)
You'd be closer with
cut ... && grep ...
as that runs grep only after cut completes successfully. Or you could use
grep "`cut ...`" searchfile.txt
which would put the results on the command line. You need to mess with quoting, and even then the newline-separated fields are treated as separate patterns rather than one combined match, so this gets fragile quickly.
I suspect you actually mean something like this:
for match in `cut ...`
do
  grep "$match" searchfile.txt >> filematches.txt
done
How can I replace a column with its hash value (like MD5) in awk or sed?
The original file is super huge, so I need this to be really efficient.
So, you don't really want to be doing this with awk. Any of the popular high-level scripting languages -- Perl, Python, Ruby, etc. -- would do this in a way that was simpler and more robust. Having said that, something like this will work.
Given input like this:
this is a test
(E.g., a row with four columns), we can replace a given column with its md5 checksum like this:
awk '{
    tmp="echo " $2 " | openssl md5 | cut -f2 -d\" \""
    tmp | getline cksum
    $2=cksum
    print
}' < sample
This relies on GNU awk (you'll probably have this by default on a Linux system), and it uses openssl to generate the md5 checksum. We first build a shell command line in tmp to pass the selected column to the md5 command. Then we pipe the output into the cksum variable, and replace column 2 with the checksum. Given the sample input above, the output of this awk script would be:
this 7e1b6dbfa824d5d114e96981cededd00 a test
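To make the mechanism concrete: for the sample row, the command line that awk builds in tmp and hands to the shell is roughly
echo is | openssl md5 | cut -f2 -d" "
and it is that command's output that getline captures into cksum.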
I copy-pasted larsks's response, but added the close() call to avoid the problem described in this post: gawk / awk: piping date to getline *sometimes* won't work
awk '{
    tmp="echo " $2 " | openssl md5 | cut -f2 -d\" \""
    tmp | getline cksum
    close(tmp)
    $2=cksum
    print
}' < sample
This might work using Bash/GNU sed:
<<<"this is a test" sed -r 's/(\S+\s)(\S+)(.*)/echo "\1 $(md5sum <<<"\2") \3"/e;s/ - //'
this 7e1b6dbfa824d5d114e96981cededd00 a test
or a mostly sed solution:
<<<"this is a test" sed -r 'h;s/^\S+\s(\S+).*/md5sum <<<"\1"/e;G;s/^(\S+).*\n(\S+)\s\S+\s(.*)/\2 \1 \3/'
this 7e1b6dbfa824d5d114e96981cededd00 a test
This replaces is (the second field of this is a test) with its md5sum.
Explanation:
In the first: identify the columns and use backreferences as parameters in the Bash command, which is substituted and evaluated; then make a cosmetic change to lose the file description (here -, for standard input) that md5sum appends to its output.
In the second: similar to the first, but hive the input string off into the hold space; then, after evaluating the md5sum command, append the hold space to the pattern space with the G command and rearrange to suit with a substitution.
You can also do it with perl:
echo "aze qsd wxc" | perl -MDigest::MD5 -ne 'print "$1 ".Digest::MD5::md5_hex($2)." $3" if /([^ ]+) ([^ ]+) ([^ ]+)/'
aze 511e33b4b0fe4bf75aa3bbac63311e5a wxc
If you want to obfuscate a large amount of data, it might be faster than sed and awk, which need to fork an md5sum process for each line.
You might have a better time with read than awk, though I haven't done any benchmarking.
the input (scratch001.txt):
foo|bar|foobar|baz|bang|bazbang
baz|bang|bazbang|foo|bar|foobar
transformed using read:
while IFS="|" read -r one fish twofish red fishy bluefishy; do
twofish=`echo -n $twofish | md5sum | tr -d " -"`
echo "$one|$fish|$twofish|$red|$fishy|$bluefishy"
done < scratch001.txt
produces the output:
foo|bar|3858f62230ac3c915f300c664312c63f|baz|bang|bazbang
baz|bang|19e737ea1f14d36fc0a85fbe0c3e76f9|foo|bar|foobar
COMPANY_NAME=`cat file.txt | grep "company_name" | cut -d '=' -f 2`
outputs something like this
"Abc Inc";
What I want to do is remove the trailing ";" as well. How can I do that? I am a beginner with bash, so any thoughts or suggestions would be helpful.
This will remove the last character contained in your COMPANY_NAME var regardless of whether or not it is a semicolon:
echo "$COMPANY_NAME" | rev | cut -c 2- | rev
I'd use sed 's/;$//'. eg:
COMPANY_NAME=`cat file.txt | grep "company_name" | cut -d '=' -f 2 | sed 's/;$//'`
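Bash parameter expansion can also drop the last character without any external commands; ${var%?} strips exactly one trailing character, whatever it is: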
foo="hello world"
echo ${foo%?}
hello worl
I'd use head --bytes -2, or head -c-2 for short -- two bytes, because the pipeline's output still ends with a newline after the semicolon.
COMPANY_NAME=`cat file.txt | grep "company_name" | cut -d '=' -f 2 | head --bytes -2`
head outputs only the beginning of a stream or file. Typically it counts lines, but it can be made to count characters/bytes instead. head --bytes 10 will output the first ten characters, but head --bytes -10 will output everything except the last ten.
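A quick illustration with printf (which, unlike echo, adds no trailing newline):
printf 'abcdefghij' | head --bytes 3    # abc
printf 'abcdefghij' | head --bytes -3   # abcdefg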
NB: you may have issues if the final character is multi-byte, but a semi-colon isn't
I'd recommend this solution over sed or cut because
It's exactly what head was designed to do, thus less command-line options and an easier-to-read command
It saves you having to think about regular expressions, which are cool/powerful but often overkill
It saves your machine having to think about regular expressions, so will be imperceptibly faster
I believe the cleanest way to strip a single character from a string with bash is:
echo ${COMPANY_NAME:: -1}
but I haven't been able to embed the grep piece within the curly braces, so your particular task becomes a two-liner:
COMPANY_NAME=$(grep "company_name" file.txt | cut -d '=' -f 2); COMPANY_NAME=${COMPANY_NAME:: -1}
This will strip any character, semicolon or not, but can get rid of the semicolon specifically, too.
To remove ALL semicolons, wherever they may fall:
echo ${COMPANY_NAME//;/}
To remove only a semicolon at the end:
echo ${COMPANY_NAME%;}
Or, to remove a whole run of semicolons from the end (this one needs the extglob shell option):
shopt -s extglob
echo ${COMPANY_NAME%%+(;)}
For great detail and more on this approach, The Linux Documentation Project covers a lot of ground at http://tldp.org/LDP/abs/html/string-manipulation.html
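A quick sketch of the differences, using an invented value with two trailing semicolons:
s='"Abc Inc";;'
echo "${s//;/}"       # "Abc Inc"   (every semicolon, anywhere)
echo "${s%;}"         # "Abc Inc";  (one from the end)
shopt -s extglob
echo "${s%%+(;)}"     # "Abc Inc"   (the whole trailing run)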
Using sed, if you don't know what the last character actually is:
$ grep company_name file.txt | cut -d '=' -f2 | sed 's/.$//'
"Abc Inc"
Don't abuse cats. Did you know that grep can read files, too?
The canonical approach would be this:
grep "company_name" file.txt | cut -d '=' -f 2 | sed -e 's/;$//'
The smarter approach would use a single perl or awk invocation, which can do the filtering and the transformation at once. For example, something like this:
COMPANY_NAME=$( perl -ne '/company_name=(.*);/ && print $1' file.txt )
You don't have to chain so many tools; just one awk command does the job:
COMPANY_NAME=$(awk -F"=" '/company_name/{gsub(/;$/,"",$2) ;print $2}' file.txt)
In Bash using only one external utility:
IFS='= ' read -r discard COMPANY_NAME <<< $(grep "company_name" file.txt)
COMPANY_NAME=${COMPANY_NAME/%?}
Assuming the quotation marks are actually part of the output, couldn't you just use the -o switch to return everything between the quote marks?
COMPANY_NAME="\"ABC Inc\";" | echo $COMPANY_NAME | grep -o "\"*.*\""
You can strip the beginning and end of a string by N characters using this bash construct, as someone said already:
$ fred=abcdefg.rpm
$ echo ${fred:1:-4}
bcdefg
HOWEVER, this is not supported in older versions of bash, as I discovered just now while writing a script for a Red Hat EL6 install process. This is the sole reason for posting here.
A hacky way to achieve this is to use sed with extended regex like this:
$ fred=abcdefg.rpm
$ echo $fred | sed -re 's/^.(.*)....$/\1/g'
bcdefg
Some refinements to the answer above: to remove more than one character, you add multiple question marks. For example, to remove the last two characters from the variable $SRC_IP_MSG, you can use:
SRC_IP_MSG=${SRC_IP_MSG%??}
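For example (the value here is invented):
SRC_IP_MSG="10.1.2.3, "
echo "${SRC_IP_MSG%??}"    # 10.1.2.3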
cat file.txt | grep "company_name" | cut -d '=' -f 2 | cut -d ';' -f 1
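Here the second cut keeps only what comes before the first ';' in the field, so the trailing semicolon is dropped without any regex.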
I am not finding that sed 's/;$//' works. It doesn't trim anything, though I'm wondering whether it's because the character I'm trying to trim off happens to be a "$". What does work for me is sed 's/.\{1\}$//'.