Shell script reading rows from a text file - bash

I need some help with renaming files.
To start, I prepare a text file: names.txt
This file contains:
T22.tsv
T33.tsv
T101.tsv
T48.tsv
Names of the files in /home/filip/Desktop/ before renaming:
Xpress33.tsv
Xpress5.tsv
Xpress12.tsv
Xpress006.tsv
Names of the files in /home/filip/Desktop/ after mv:
T22.tsv
T33.tsv
T101.tsv
T48.tsv
Could you help me read from the text file in a bash script? It could be with awk.
I tried :
A= awk 'NR==1 {print $0}' names.txt
mv Xpress33.tsv "$A"
But it doesn't work.

You want to store the output of a command in a variable. For this, you need the syntax var=$(command).
Hence, this should work:
A=$(awk 'NR==1 {print $0}' names.txt)
mv Xpress33.tsv "$A"
Note also that these are equivalent, because {print $0} is awk's default action:
awk 'NR==1 {print $0}' names.txt
awk 'NR==1' names.txt
If you want to make it even more direct, you can do:
mv Xpress33.tsv "$(awk 'NR==1' names.txt)"
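To rename every file rather than just the first, you can pair the target names with the current names and loop. A minimal sketch, assuming the current names are also listed in a file, here called oldnames.txt (a hypothetical name, not from the question), in the same order as names.txt:

```shell
cd "$(mktemp -d)"    # demo in a scratch directory

# Hypothetical input files: oldnames.txt holds the current names,
# names.txt the desired names, one per line, in matching order.
printf 'Xpress33.tsv\nXpress5.tsv\n' > oldnames.txt
printf 'T22.tsv\nT33.tsv\n'          > names.txt
touch Xpress33.tsv Xpress5.tsv

# Pair the two lists line by line and rename.
paste oldnames.txt names.txt | while read -r old new; do
    mv -- "$old" "$new"
done
```

paste joins the files line by line (tab-separated), and read splits each pair back into the old and new name.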


redirect output of loop to current reading file

I have a simple script that looks like this:
for file in `ls -rlt *.rules | awk '{print $9}'`
do
cat $file | awk -F"|" -v DATE=$(date +%Y"_"%m"_"%d) '!$3{$3=DATE} !$4{$4=DATE} 1' OFS="|" $file
done
How can I redirect the output of awk to the same file it is reading, so the change is applied in place?
The files have data like this before running the script:
123|test||
After running the script, the files should contain:
123|test|2017_04_05|2017_04_05
You cannot redirect output back to a file that is still being read: the redirection truncates the file before the command even opens it.
The way is to write to a temporary file, then replace the original:
for file in `ls -1 *.rules `
do
TMP_FILE=/tmp/${file}_$$
awk -F"|" -v DATE=$(date +%Y"_"%m"_"%d) '!$3{$3=DATE} !$4{$4=DATE} 1' OFS="|" $file > ${TMP_FILE}
mv ${TMP_FILE} $file
done
I would modify Michael Vehrs's otherwise good answer as follows:
ls -rt *.rules | while read file
do
TMP_FILE="/tmp/${file}_$$"
awk -F"|" -v DATE=$(date +%Y"_"%m"_"%d) \
'!$3{$3=DATE} !$4{$4=DATE} 1' OFS="|" "$file" > "$TMP_FILE"
mv "$TMP_FILE" "$file"
done
Your question uses ls(1) to sort the files by time, oldest first. The above preserves that property. I removed the {} braces because they add nothing in a shell script when the variable isn't being interpolated into surrounding text, and I added quotes to cope with filenames that include whitespace.
If time-order doesn't matter, I'd consider an inside-out solution: in awk, write to a temporary file instead of standard output, and then rename it with system in an END block. Then if something goes wrong your input is preserved.
First of all, it is silly to use a combination of ls -rlt and awk when the only thing you need is the file name. You don't even need ls, because the glob is expanded by the shell, not by ls: simply use for file in *.rules. Since the date will be the same for every file (unless you run the command at midnight), it is sufficient to compute it once in advance:
date=$(date +%Y"_"%m"_"%d)
for file in *.rules
do
TMP_FILE=$(mktemp ${file}_XXXXXX)
awk -F"|" -v DATE=${date} '!$3{$3=DATE} !$4{$4=DATE} 1' OFS="|" $file > ${TMP_FILE}
mv ${TMP_FILE} $file
done
However, since awk also knows which file it is reading, you could do something like this:
awk -F"|" -v DATE=$(date +%Y"_"%m"_"%d) \
'!$3{$3=DATE} !$4{$4=DATE} { print > (FILENAME ".tmp") }' OFS="|" *.rules
rename .tmp "" *.rules.tmp
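If GNU awk 4.1 or newer is available, its inplace extension handles the temporary file for you. A sketch, using a made-up demo file rather than real *.rules input:

```shell
cd "$(mktemp -d)"                     # demo in a scratch directory
printf '123|test||\n' > sample.rules  # sample line from the question

# -i inplace (GNU awk 4.1+) rewrites each input file in place.
gawk -i inplace -F'|' -v DATE="$(date +%Y_%m_%d)" \
    '!$3{$3=DATE} !$4{$4=DATE} 1' OFS='|' sample.rules

cat sample.rules    # fields 3 and 4 now carry today's date
```

This keeps the one-pass awk logic from the answers above while avoiding the explicit mv step, at the cost of requiring gawk specifically.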

Get only part of file using sed or awk

I have a file which contains text as follows:
Directory /home/user/ "test_user"
bunch of code
another bunch of code
How can I get from this file only the /home/user/ part?
I've managed to use awk -F '"' 'NR==1{print $1}' file.txt to get rid of the rest of the file, and I'm getting output like this:
Directory /home/user/
How can I change this command to get only the /home/user/ part? I'd like to keep it as simple as possible. Unfortunately, I can't modify the file to add or change its content.
This should be the fastest, noticeably so if your file is large:
awk '{print $2; exit}' file
It prints the second field of the first line and stops processing the rest of the file.
With awk it should be:
awk 'NR==1{print $2}' file.txt
Setting the field delimiter to " was wrong, since it splits the line into these fields:
$1 = 'Directory /home/user/'
$2 = 'test_user'
$3 = '' (empty)
The default field separator, which is [[:space:]]+, splits like this:
$1 = 'Directory'
$2 = '/home/user/'
$3 = '"test_user"'
As an alternative, you can use head and cut:
$ head -n 1 file | cut -d' ' -f2
Not sure why you are using -F '"', as that changes the delimiter. If you remove it, then $2 will give you what you want.
awk 'NR==1{print $2}' file.txt
You can also have awk print when the line contains /home/user/ instead of testing the record number:
awk '/\/home\/user\//{print $2}' file.txt
In this case, if the line were buried in the file, or if there were multiple instances, you would get the name for every occurrence, wherever it appeared.
Adding some grep:
grep Directory file.txt|awk '{print $2}'

Save command output at filename

I've got a problem where I want to save the output of one command as a filename and stream the output of a different command (within the same script) to that file. I wasn't able to find a solution online, so here goes. Below is the code I have:
zgrep --no-filename 'some-patter\|other-pattern' /var/something/something/$1/* | awk -F '\t' '{printf $8; printf "scriptLINEbreakerPARSE"; print $27}' | while read -r line ; do
awk -F 'scriptLINEbreakerPARSE' '{print $1}' -> save output of this as a filename
awk -F 'scriptLINEbreakerPARSE' '{print $2}' >> the_filename_from_above
done
So basically I want the first awk in the loop to produce the filename, and the second awk's output to be appended to the file with that name.
Any help would be appreciated guys.
You're doing too much work. Just write to the desired file from the first awk command:
zgrep --no-filename 'some-patter\|other-pattern' /var/something/something/$1/* |
awk -F '\t' '{print $27 > $8}'
Note print rather than printf here, so $27 isn't treated as a printf format string and each value ends with a newline.
See https://www.gnu.org/software/gawk/manual/html_node/Redirection.html
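If you would rather keep the shell loop from the question, the marker can be split with POSIX parameter expansion instead of running awk twice per line. A sketch using the question's marker string and made-up demo data:

```shell
cd "$(mktemp -d)"    # demo in a scratch directory
marker='scriptLINEbreakerPARSE'

# Each input line is "<filename><marker><value>";
# append the value to the file named before the marker.
while IFS= read -r line; do
    fname=${line%%"$marker"*}     # text before the marker
    value=${line#*"$marker"}      # text after the marker
    printf '%s\n' "$value" >> "$fname"
done <<EOF
a.log${marker}hello
a.log${marker}world
EOF
```

This avoids spawning two awk processes per line, though the single-awk answer above is still simpler when awk alone can do the job.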

Shell command to retrieve specific value using pattern

I have a file which contains data like below.
appid=TestApp
version=1.0.1
We want to parse the file and capture the value assigned to the appid field.
I have tried the following awk command:
awk '/appid=/{print $1}' filename.txt
However, it outputs the whole line:
appid=TestApp
but we require only:
TestApp
Please let me know how I can achieve this using awk, grep, or sed.
You need to change the field separator:
awk -F'=' '$1 ~ /appid/ {print $2}' filename.txt
or with an exact match
awk -F'=' '$1 == "appid" {print $2}' filename.txt
outputs
TestApp
There are about 20 different ways to do this, but when you have name=value lines in a file it's usually a good idea to simply build an array of those assignments and then print whatever you care about by its name, e.g.:
$ cat file
appid=TestApp
version=1.0.1
$
$ awk -F= '{a[$1]=$2} END{print a["appid"]}' file
TestApp
$ awk -F= '{a[$1]=$2} END{print a["version"]}' file
1.0.1
$ awk -F= '{a[$1]=$2} END{for (i in a) print i,"=",a[i]}' file
appid = TestApp
version = 1.0.1
If you are already in the shell, then simply sourcing the file will give you what you want (note that sourcing executes the file, so only do this with trusted input):
. filename.txt
echo "$appid"
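A sed alternative for the same data: print only lines that start with appid=, stripping the prefix (-n suppresses all other output):

```shell
# Demo input piped in; in practice you would pass filename.txt to sed.
printf 'appid=TestApp\nversion=1.0.1\n' |
    sed -n 's/^appid=//p'
# prints TestApp
```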

Awk adding constant values

I have data in a text file like val1,val2, with multiple lines,
and I want to change each line to 1,val1,val2,0,0,1.
I tried a print statement in awk (on Solaris) to add the constants, but it didn't work.
What is the correct way to do it ?
(From the comments) This is what I tried
awk -F, '{print "%s","1,"$1","$2"0,0,1"}' test.txt
Based on the command you posted, a small change fixes it:
$ awk -F, 'BEGIN{OFS=FS} {print 1,$1,$2,0,0,1}' file
1,val1,val2,0,0,1
OR using printf (I prefer print); note the \n, which the print version gets for free:
$ awk -F, '{printf "1,%s,%s,0,0,1\n", $1, $2}' file
1,val1,val2,0,0,1
To prepend every line with the constant 1 and append with 0,0,1 simply do:
$ awk '{print 1,$0,0,0,1}' OFS=, file
1,val1,val2,0,0,1
An idiomatic way would be:
$ awk '$0="1,"$0",0,0,1"' file
1,val1,val2,0,0,1
Using sed:
sed 's/.*/1,&,0,0,1/' inputfile
Example:
$ echo val1,val2 | sed 's/.*/1,&,0,0,1/'
1,val1,val2,0,0,1
