find particular column where string matches - bash

I have a file test.sh whose content looks like this:
Nas /mnt/enjayvol1/backup/test.sh lokesh
thinclient rsync /mnt/enjayvol1/esync/lokesh.sh lokesh
crm rsync -arz --update /mnt/enjayvol1/share/mehul mehul mehul123
I want to retrieve the string wherever it matches /mnt. The output I want is:
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul
I have tried
grep -i "/mnt" test.sh | awk -F"mnt" '{print $2}'
but this does not give me accurate output. Please help.

Could you please try the following awk approach too and let me know if it helps.
awk -v RS=" " '$0 ~ /\/mnt/' Input_file
Output will be as follows.
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul
Explanation: Setting the record separator to a space, then checking whether each record contains the string /mnt. Since no action is given, the default action (print) runs, so every record containing /mnt is printed.
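Not part of the original answers, but if the separators could be tabs as well as single spaces, a field-by-field loop is a safe sketch (the filename test.sh is taken from the question):

```shell
# print every whitespace-separated field that starts with /mnt
awk '{for(i=1;i<=NF;i++) if($i ~ /^\/mnt\//) print $i}' test.sh
```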

Short grep approach (assuming the /mnt... path doesn't contain whitespace):
grep -o '/mnt/[^[:space:]]*' test.sh
The output:
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul


Combine multiple sed commands into one

I have a file example.txt and I want to delete and replace fields in it.
The following commands do the job, but in a very messy way; unfortunately I'm a rookie with the sed command.
The commands I used:
sed 's/\-I\.\.\/\.\.\/\.\./\n/g' example.txt > example.txt1
sed 's/\-I/\n/g' example.txt1 > example.txt2
sed '/^[[:space:]]*$/d' example.txt2 > example.txt3
sed 's/\.\.\/\.\.\/\.\.//g' example.txt3 > example.txt4
and then I'm deleting all the unnecessary files.
I'm trying to get the following result:
Common/Components/Component
Common/Components/Component1
Common/Components/Component2
Common/Components/Component3
Common/Components/Component4
Common/Components/Component5
Common/Components/Component6
Comp
App
The file looks like this:
-I../../../Common/Component -I../../../Common/Component1 -I../../../Common/Component2 -I../../../Common/Component3 -I../../../Common/Component4 -I../../../Common/Component5 -I../../../Common/Component6 -IComp -IApp ../../../
I want to know the best way to transform the input format into the output format with a standard text-processing tool, in one call to sed or awk.
With your shown samples, please try the following awk code, written and tested in GNU awk.
awk -v RS='-I\\S+' 'RT{sub(/^-I.*Common\//,"Common/Components/",RT);sub(/^-I/,"",RT);print RT}' Input_file
output with samples will be as follows:
Common/Components/Component
Common/Components/Component1
Common/Components/Component2
Common/Components/Component3
Common/Components/Component4
Common/Components/Component5
Common/Components/Component6
Comp
App
Explanation: Simple explanation would be, in GNU awk: set RS (the record separator) to -I\\S+, i.e. -I up to the next space. In the main awk program, check that RT is not null, substitute the leading -I...Common/ with Common/Components/ in RT, then substitute a leading -I with nothing in RT, and print RT.
If you don't REALLY want the string /Components to be added in the middle of some output lines then this may be what you want, using any awk in any shell on every Unix box:
$ awk -v RS=' ' 'sub("^-I[./]*","")' file
Common/Component
Common/Component1
Common/Component2
Common/Component3
Common/Component4
Common/Component5
Common/Component6
Comp
App
That would fail if any of the paths in your input contained blanks but you don't show that as a possibility in your question so I assume it can't happen.
What about
sed -i 's/\-I\.\.\/\.\.\/\.\./\n/g
s/\-I/\n/g
/^[[:space:]]*$/d
s/\.\.\/\.\.\/\.\.//g' example.txt
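One caveat: \n in the replacement text of s/// is a GNU sed extension; BSD sed emits a literal n there. A portable sketch (assuming the tokens are space-separated and contain no embedded blanks) splits the line first and then strips the prefix:

```shell
# split tokens onto separate lines, then print only the -I tokens with the
# -I and any leading ./ sequences stripped
tr ' ' '\n' < example.txt | sed -n 's/^-I[./]*//p'
```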

How to awk only in a certain row and specific column

For example i have the following file:
4:Oscar:Ferk:Florida
14:Steven:Pain:Texas
7:Maya:Ross:California
and so on...
It has an unknown number of lines because you can keep adding more to it.
I'm writing a script where you can edit a person's name by passing the id and the new name as parameters.
What I am trying to do is use awk to find the line, change the name on that specific line, and update the file. I'm having trouble because my code updates every single column value to the given one.
My current code is:
getID=$1
getName=$2
awk -v varID="$getID" -F':' '$0~varID' file.dat | awk -v varName="$getName" -F':' '$2=varName' file.dat > tmp && mv tmp file.dat
Help is really appreciated, thank you kindly in advance.
You may use this awk:
getID=14 # change to getID="$1"
getName='John K' # change to getName="$2"
awk -v id="$getID" -v name="$getName" 'BEGIN{FS=OFS=":"} $1==id{$2=name} 1' file
4:Oscar:Ferk:Florida
14:John K:Pain:Texas
7:Maya:Ross:California
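Putting that answer back into the asker's script shape, with a temporary file for the in-place update (the script name and file.dat are assumptions, not from the answer):

```shell
#!/bin/bash
# update_name.sh (hypothetical name) -- usage: ./update_name.sh <id> <new-name>
getID=$1
getName=$2
awk -v id="$getID" -v name="$getName" '
  BEGIN{FS=OFS=":"}   # read and write colon-separated fields
  $1==id{$2=name}     # only on the row whose id matches, replace the name
  1                   # print every row, changed or not
' file.dat > file.dat.tmp && mv file.dat.tmp file.dat
```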

How can I extract a word between special characters and other words

I am trying to find a way to extract a word that sits between special characters and other words.
Example of the text:
description "CST 500M TEST/VPNGW/11040 X {} // test"
description "test2-VPNGW-110642 -VPNGW"
The result I am trying to achieve is only the word containing VPNGW:
TEST/VPNGW/11040
test2-VPNGW-110642
I tried with grep and awk, but it looks like my knowledge doesn't stretch that far.
Printing with awk '{$1=""; $2=""; ... doesn't work, because the wanted word is not always in the same position.
Thanks for the help!
With grep you can output only the part of the string that matches the regex:
grep -o '[^ "]\+VPNGW[^ "]\+' file.name
You could try something like:
grep -Eoi 'test.*[0-9]'
Of course this would be greedy and if there is another number after the ones in the required string it will grab up to there. Normally I would suggest an inverted test to stop at the thing you don't want:
grep -Eoi 'test[^ ]+'
The problem with this is that, as in your first example, there can be more than one occurrence of the string 'test', so the output for the first example is:
TEST/VPNGW/11040
test"
Of course, knowing what your real data looks like, you can decide what suits it best.
You could go with the Perl regex engine in grep and use a look-ahead (look-aheads require -P rather than -E):
grep -Poi 'test[^ ]+(?= )'
Again though, if you have the string 'test' somewhere else on the line followed by a single space, this will still not work as desired.
Lastly, awk can do the job but you would need to cycle through each item or set RS to white space:
Option 1:
awk '{for(i=1;i<=NF;i++)if(tolower($i) ~ /test.*[0-9]/)print $i}'
Option 2:
awk 'tolower($0) ~ /test.*[0-9]/' RS="[[:space:]]+"
awk '/test2/{sub(/"/,""); print $2; next} {print $4}' file
TEST/VPNGW/11040
test2-VPNGW-110642
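Another hedged variant: if the wanted token is always the one where VPNGW is eventually followed by digits (an assumption drawn only from the two samples), a field loop works too:

```shell
# print each field containing VPNGW followed by digits, with quotes stripped
awk '{for(i=1;i<=NF;i++){f=$i; gsub(/"/,"",f); if(f ~ /VPNGW.*[0-9]/) print f}}' file
```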

Unix scripting: writing to another file with ":" is failing

I have below record (and many other such records) in one file
9460 xyz abc (lmn):1027739543798. Taxpayer's identification number (INN): 123. For all IIB. 2016/02/03
I need to search for the keyword IIB. If it matches, then I need to take that entire record and write to another file.
Below is the code which already exists. It is not working: when it takes the full matched record, it ignores the text after the first ":" and writes only the part before it to the other file.
cat keyword.cfg | while read KwdName
do
echo "KEYWORD: ${KwdName}"   # prints IIB
grep "^${KwdName}\|${KwdName}\|~${KwdName}~\|:${KwdName}$\|:${KwdName}~" ${mainFileWithListOfRecords} | awk -F ":" '{print $1}' >> ${destinationFile}
done
So, instead of writing below record to destination file
9460 xyz abc (lmn):1027739543798. Taxpayer's identification number (INN): 123. For all IIB. 2016/02/03
It is only writing,
9460 xyz abc (lmn)
cat -vte ${mainFileWithListOfRecords} gives the output below:
9460^IMEZHPROMBANK^I^ICJSC ;IIB;~ Moscow, (lmn): 1027739543798. Taxpayer's identification number (INN): 123. For all IIB. 2016/02/031#msid=s1448434872350^IC1^I2000/12/28^I2015/11/26^I^I$
The short fix is removing the
awk -F ":" '{print $1}'
entirely - that is what throws away everything after the first colon.
But what are you cutting? Maybe ${mainFileWithListOfRecords} is a variable with a list of files. In that case grep will show the matching filename in front of its matches; you can suppress that with the -h option.
The result is that you do not need to cut or awk:
grep -h "${KwdName}" ${mainFileWithListOfRecords} >> ${destinationFile}
(I changed the search string as well: with \|${KwdName}\| in your search string you will match KwdName in all combinations.)
Of course, it cuts on the colon - you programmed it that way. In your code, you have | awk -F ":" '{print $1}', which basically means "throw away everything starting from the first colon".
If you don't want to do this, why do you explicitly request it? What was your original intention, when writing the awk command?
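The whole fixed loop could then be sketched like this (fixed-string matching with -F assumed, since the keywords are literal strings such as IIB):

```shell
# read each keyword and append every matching record verbatim, no cutting
while IFS= read -r KwdName; do
  echo "KEYWORD: ${KwdName}"
  grep -hF -- "${KwdName}" "${mainFileWithListOfRecords}" >> "${destinationFile}"
done < keyword.cfg
```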

Save changes to a file AWK/SED

I have a huge text file delimited with commas.
19429,(Starbucks),390 Provan Walk,Glasgow,G34 9DL,-4.136909,55.872982
The first field is a unique id. I want the user to enter the id and a new value for one of the following six fields so it can be replaced. I also ask for a number from 2-7 to identify which field should be replaced.
Now I've done something like this. I am checking every line to find the id the user entered and then I'm replacing the value.
awk -F ',' -v elem=$element -v id=$code -v value=$value '{if($1==id) {if(elem==2) { $2=value } etc }}' $path
Where $path = /root/clients.txt
Let's say the user enters "2" in order to replace the second field, and also enters "Whatever". Now I want "(Starbucks)" to be replaced with "Whatever". What I've done works fine but does not save the change to the file. I know that awk is not supposed to do so, but I don't know how to do it. I've searched a lot on Google but still no luck.
Can you tell me how I'm supposed to do this? I know I could do it with sed but I don't know how.
Newer versions of GNU awk support inplace editing:
awk -i inplace -v elem="$element" -v id="$code" -v value="$value" '
BEGIN{ FS=OFS="," } $1==id{ $elem=value } 1
' "$path"
With other awks:
awk -v elem="$element" -v id="$code" -v value="$value" '
BEGIN{ FS=OFS="," } $1==id{ $elem=value } 1
' "$path" > /usr/tmp/tmp$$ &&
mv /usr/tmp/tmp$$ "$path"
NOTES:
Always quote your shell variables unless you have an explicit reason not to and fully understand all of the implications and caveats.
If you're creating a tmp file, use "&&" before replacing your original with it so you don't zap your original file if the tmp file creation fails for any reason.
I fully support replacing Starbucks with Whatever in Glasgow - I'd like to think they wouldn't have let it open in the first place back in my day (1986 Glasgow Uni Comp Sci alum) :-).
awk is much easier than sed for processing specific variable fields, but traditional awk has no in-place processing. Thus you might do the following:
#!/bin/bash
code=$1
element=$2
value=$3
echo "code is $code"
awk -F ',' -v elem="$element" -v id="$code" -v value="$value" 'BEGIN{OFS=",";} $1==id{$elem=value}1' mydb > /tmp/mydb.txt
mv /tmp/mydb.txt ./mydb
This finds the line whose first field equals the given id (you could also match a regex like /^id,/), then sets the elemth field to value; finally it prints the output, using the comma as the output field separator. Lines that don't match are printed unchanged.
Everything is written to a temporary file, then overwrites the original.
Not very nice but it gets the job done.
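A self-contained demo of the temp-file approach, with a hypothetical clients.txt holding the sample record:

```shell
# hypothetical end-to-end demo: replace field 2 of record 19429
printf '19429,(Starbucks),390 Provan Walk,Glasgow,G34 9DL,-4.136909,55.872982\n' > clients.txt
element=2 code=19429 value=Whatever
awk -v elem="$element" -v id="$code" -v value="$value" '
  BEGIN{FS=OFS=","}     # comma-separated in and out
  $1==id{$elem=value}   # rewrite the chosen field of the matching record
  1                     # print all records
' clients.txt > clients.tmp && mv clients.tmp clients.txt
cat clients.txt   # 19429,Whatever,390 Provan Walk,Glasgow,G34 9DL,-4.136909,55.872982
```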
