How to awk only in a certain row and specific column - bash

For example, I have the following file:
4:Oscar:Ferk:Florida
14:Steven:Pain:Texas
7:Maya:Ross:California
and so on...
It has an unknown number of lines, because more can be added at any time.
I'm writing a script that lets you edit a person's name by passing the person's ID and the new name as parameters.
What I am trying to do is use awk to find the line, change the name on that specific line, and update the file. The trouble is that my code sets the name column of every single line to the given value.
My current code is:
getID=$1
getName=$2
awk -v varID="$getID" -F':' '$0~varID' file.dat | awk -v varName="$getName" -F':' '$2=varName' file.dat > tmp && mv tmp file.dat
Help is really appreciated, thank you kindly in advance.

You may use this awk:
getID=14 # change to getID="$1"
getName='John K' # change to getName="$2"
awk -v id="$getID" -v name="$getName" 'BEGIN{FS=OFS=":"} $1==id{$2=name} 1' file
4:Oscar:Ferk:Florida
14:John K:Pain:Texas
7:Maya:Ross:California
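Here FS and OFS are both set to : so the rebuilt line keeps its separators, $1==id matches only the requested row, and the trailing 1 is an always-true pattern whose default action prints every line. To actually update the file, as the question attempted, write to a temporary file and move it over the original (a sketch):
awk -v id="$getID" -v name="$getName" 'BEGIN{FS=OFS=":"} $1==id{$2=name} 1' file.dat > tmp && mv tmp file.dat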

Related

How to use awk to change one column of a line (date)

I am trying to create a script that writes a log file with two columns: the first is the date of editing and the second is the path to the text file that was edited. How can I change the first column of a line while keeping the second column, which contains the path? I have tried
awk '$2=="$path{$1=$date}"' logfile.txt where $path contains the path of the file, but that doesn't change the date.
Thanks in advance.
logfile:
20.03.18.19.08.56 /home/ubuntu/Desktop
Now let's say I edited something; the logfile should then look like:
21.03.18.19.08.56 /home/ubuntu/Desktop
You can pass the values as awk variables:
$ awk -v date="$date" -v path="$path" '$2==path{$1=date}1' file > newfile
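For example, the variables can be filled in the shell first (a sketch; the values are hypothetical and the date format matches the log shown above):
date=$(date +"%d.%m.%y.%H.%M.%S")   # e.g. 21.03.18.19.08.56
path="/home/ubuntu/Desktop"
awk -v date="$date" -v path="$path" '$2==path{$1=date}1' logfile.txt > newfile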

AWK - Delete whole line when a piece inside that line matches a string

I have a db.sql full of lines, some of which contain the string _wc_session_:
(26680, '_wc_session_expires_120f486fe21c9ae4ce247c04f3b009f9', '1445934089', 'no'),
(26682, '_wc_session_expires_73516b532380c28690a4437d20967e03', '1445934114', 'no'),
(26683, '_wc_session_1a71c566970b07ac2b48c5da4e0d43bf', 'a:21:{s:4:"cart";s:305:"a:1:{s:32:"7fe1f8abaad094e0b5cb1b01d712f708";a:9:{s:10:"product_id";i:459;s:12:"variation_id";s:0:"";s:9:"variation";a:0:{}s:8:"quantity";i:1;s:10:"line_total";d:6;s:8:"line_tax";i:0;s:13:"line_subtotal";i:6;s:17:"line_subtotal_tax";i:0;s:13:"line_tax_data";a:2:{s:5:"total";a:0:{}s:8:"subtotal";a:0:{}}}}";s:15:"applied_coupons";s:6:"a:0:{}";s:23:"coupon_discount_amounts";s:6:"a:0:{}";s:27:"coupon_discount_tax_amounts";s:6:"a:0:{}";s:21:"removed_cart_contents";s:6:"a:0:{}";s:19:"cart_contents_total";d:6;s:20:"cart_contents_weight";i:0;s:19:"cart_contents_count";i:1;s:5:"total";i:0;s:8:"subtotal";i:6;s:15:"subtotal_ex_tax";i:6;s:9:"tax_total";i:0;s:5:"taxes";s:6:"a:0:{}";s:14:"shipping_taxes";s:6:"a:0:{}";s:13:"discount_cart";i:0;s:17:"discount_cart_tax";i:0;s:14:"shipping_total";i:0;s:18:"shipping_tax_total";i:0;s:9:"fee_total";i:0;s:4:"fees";s:6:"a:0:{}";s:10:"wc_notices";s:205:"a:1:{s:7:"success";a:1:{i:0;s:166:"Ver carrito Se ha añadido "Incienso Gaudí Lavanda" con éxito a tu carrito.";}}";}', 'no'),
I'd like AWK to remove each whole line that contains _wc_session_, i.e. a whole line like:
(26682, '_wc_session_expires_73516b532380c28690a4437d20967e03', '1445934114', 'no'),
So far I've found a regex that selects the whole line when "_wc_session_" is found:
(^\(.*_wc_session_.*\)\,)
but when I try to run
awk '!(^\(.*_wc_session_.*\)\,)' db.sql > temp.sql
I get
awk: line 1: syntax error at or near ^
Am I missing something?
If you're set on awk:
awk '!/_wc_session/' db.sql
You may also use sed -i to write the output "in place" (into the input file):
sed -i '/_wc_session/d' db.sql
Edit:
A more precise awk approach is to use the comma already present in your file as the field delimiter and check only column 2 for the pattern. This is useful in case the pattern appears in a different column and that line should not be removed.
awk -F',' '$2 !~ "_wc_session" {print $0}' db.sql
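A quick demonstration with two hypothetical records: the first matches in column 2 and is dropped, while the second contains the string only in column 3 and survives:
printf "%s\n" "(1, '_wc_session_x', 'a', 'no')," "(2, 'other', '_wc_session_y', 'no')," |
awk -F',' '$2 !~ "_wc_session" {print $0}'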
With simple grep, the following should do the trick:
grep -v "(26682, '_wc_session_expires_73516b532380c28690a4437d20967e03', '1445934114', 'no')" Input_file
EDIT: If you want to remove every line containing the string _wc_session_expires_, the following may help:
grep -v "_wc_session_expires_" Input_file
The mistake is in the regex syntax: in awk, a regex pattern must be enclosed in slashes. The right one is
'!/(^(.*wc_session.*)\,)/'
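Applied to the original command, the corrected pattern runs without the syntax error (a sketch):
awk '!/(^(.*wc_session.*)\,)/' db.sql > temp.sql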

Find particular column where string matches

I have a file test.sh. Its content looks like:
Nas /mnt/enjayvol1/backup/test.sh lokesh
thinclient rsync /mnt/enjayvol1/esync/lokesh.sh lokesh
crm rsync -arz --update /mnt/enjayvol1/share/mehul mehul mehul123
I want to retrieve the strings that match /mnt. I want this output:
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul
I have tried
grep -i "/mnt" test.sh | awk -F"mnt" '{print $2}'
but this does not give me accurate output. Please help.
Could you please try the following awk approach too and let me know if it helps.
awk -v RS=" " '$0 ~ /\/mnt/' Input_file
Output will be as follows.
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul
Explanation: we set the record separator to a space and then check whether each record contains the string /mnt. Since no action is specified, the default action (print) runs, so every record containing /mnt is printed.
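If you would rather keep the default record separator, an equivalent approach is to loop over the fields of each line and print the ones that start with /mnt (a sketch):
awk '{for(i=1;i<=NF;i++) if($i ~ /^\/mnt\//) print $i}' Input_file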
Short grep approach (assuming the /mnt... path doesn't contain whitespace):
grep -o '\/mnt\/[^[:space:]]*' lokesh.sh
The output:
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul

Bash Script and Edit CSV

I'm trying to create a bash script to do the following tasks:
1- I have a file called "ORDER.CSV" and need to make a copy of it with the date/time appended to the file name. This I was able to get done.
2- I need to edit a particular field in the new CSV file created above: column DY, row 2. This I have not been able to do. I need to insert the date the script is run, in the format DDMMYY.
3- Then have the system upload the file to SFTP. This I believe I have figured out, as shown below.
#!/usr/bin/env bash

# I'm able to get this step done with the command below.
# Copies order.csv and appends date/time to the file name
#cp /mypath/SFTP/order.csv /mypath/SFTP/orders.`date +"%Y%m%d%H%M%S"`.csv

# Need help echoing the new file name
echo "new file name "

# Need help editing the field at column DY, row 2: insert the current date in DDMMYY format
awk -v r=2 -v DY=3 -v val=1001 -F, 'BEGIN{OFS=","}; NR != r; NR == r {$c = val; print}'

# This should connect to SFTP, which it does without issues
sshpass -p MyPassword sftp -o "Port 232323" myusername@mysftpserver.com

# Need to pass the new file that was created and put it onto the SFTP server
put /incoming/neworder/NEWFILEName.csv
Thanks
I guess this is what you want to do:
echo -e "h1,h2,h3,h4\n1,2,3,4" |
awk -v r=2 -v c=3 -v v=$(date +"%d%m%y") 'BEGIN{FS=OFS=","} NR==r{$c=v}1'
h1,h2,h3,h4
1,2,120617,4
To find the column index from the column name (not tested):
... | awk -v r=2 -v h="DY" -v v=$(date +"%d%m%y") '
BEGIN {FS=OFS=","}
NR==1 {c=NF+1; for(i=1;i<=NF;i++) if($i==h) {c=i; break}}
NR==r {$c=v}1'
The risk is that the column name may not match; in that case the value will be added as a new column.
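Putting the pieces together, here is a sketch of the full flow; the paths, port, and credentials are the placeholders from the question, and it assumes the header name DY appears in row 1:
#!/usr/bin/env bash
src=/mypath/SFTP/order.csv
new=/mypath/SFTP/orders.$(date +"%Y%m%d%H%M%S").csv

# copy with a timestamped name and report it
cp "$src" "$new"
echo "new file name: $new"

# set column DY in row 2 to today's date (DDMMYY)
awk -v r=2 -v h="DY" -v v="$(date +%d%m%y)" '
  BEGIN {FS=OFS=","}
  NR==1 {c=NF+1; for(i=1;i<=NF;i++) if($i==h) {c=i; break}}
  NR==r {$c=v} 1' "$new" > "$new.tmp" && mv "$new.tmp" "$new"

# upload the edited file with an sftp batch
sshpass -p MyPassword sftp -o "Port 232323" myusername@mysftpserver.com <<EOF
put $new /incoming/neworder/
EOF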

bash delete line condition

I couldn't find a solution for conditionally deleting a line from a file using bash. The file contains year dates within strings, and a line should be deleted only if its year is lower than a reference value.
The file looks like the following:
'zg_Amon_MPI-ESM-LR_historical_r1i1p1_196001-196912.nc' 'MD5'
'zg_Amon_MPI-ESM-LR_historical_r1i1p1_197001-197912.nc' 'MD5'
'zg_Amon_MPI-ESM-LR_historical_r1i1p1_198001-198912.nc' 'MD5'
'zg_Amon_MPI-ESM-LR_historical_r1i1p1_199001-199912.nc' 'MD5'
'zg_Amon_MPI-ESM-LR_historical_r1i1p1_200001-200512.nc' 'MD5'
I want to get the year 1969 from line 1 and compare it to a reference (let's say 1980) and delete the whole line if the year is lower than the reference. This means in this case the code should remove the first two lines of the file.
I tried with sed and grep, but couldn't get it working.
Thanks in advance for any ideas.
You can use awk:
awk -F- '$4 > 198000 {print}' filename
This will output all the lines whose second date is later than 31/12/1979. Note that $4 is a string like 196912.nc' 'MD5', so the comparison with 198000 is lexical; it works here because the dates are zero-padded to the same width. This will not edit the file in place; you would have to save the output to another file, then move that in place of the original:
awk -F- '$4 > 198000 {print}' filename > tmp && mv tmp filename
Using sed (will edit in-place):
sed -i '/.*19[0-7][0-9]..\.nc/d' filename
This requires a little more thought, since you need to construct a regex that matches all the values you don't want displayed.
Perhaps something like this:
awk -F- '{ if (substr($4,1,4) >= 1980) print }' input.txt
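If the reference year should be a parameter, you can extract the year the same way and force a numeric comparison against a shell variable (a sketch; ref is a hypothetical name):
ref=1980
awk -v ref="$ref" -F- 'substr($4,1,4)+0 >= ref' filename > tmp && mv tmp filename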
