I've got this problem where I want to save the output of a command as a filename and stream the output of a different command (within the same script) to that file. I wasn't able to find a solution online, so here goes. Below is the code I have:
zgrep --no-filename 'some-pattern\|other-pattern' /var/something/something/$1/* | awk -F '\t' '{printf $8; printf "scriptLINEbreakerPARSE"; print $27}' | while read -r line ; do
awk -F 'scriptLINEbreakerPARSE' '{print $1}' -> save output of this as a filename
awk -F 'scriptLINEbreakerPARSE' '{print $2}' >> the_filename_from_above
done
So basically I want to use the first awk in the loop to save the output as a filename, and then save the second awk's output to the file with that filename.
Any help would be appreciated guys.
You're doing too much work. Just output to the desired file in the first awk command:
zgrep --no-filename 'some-patter\|other-pattern' /var/something/something/$1/* |
awk -F '\t' '{print $27 > $8}'
See https://www.gnu.org/software/gawk/manual/html_node/Redirection.html
I have a simple script that looks like:
for file in `ls -rlt *.rules | awk '{print $9}'`
do
cat $file | awk -F"|" -v DATE=$(date +%Y"_"%m"_"%d) '!$3{$3=DATE} !$4{$4=DATE} 1' OFS="|" $file
done
How can I redirect the output of awk to the same file it is reading?
Before running the script, the files have data like:
123|test||
After running the script, the files should have data like:
123|test|2017_04_05|2017_04_05
You cannot replace your files on the fly like this, mostly because a shell redirection like > file truncates the file before awk ever reads it (and the rewritten lines are longer than the originals anyway).
The way to do it is to write to a temporary file, then replace the original:
for file in `ls -1 *.rules `
do
TMP_FILE=/tmp/${file}_$$
awk -F"|" -v DATE=$(date +%Y"_"%m"_"%d) '!$3{$3=DATE} !$4{$4=DATE} 1' OFS="|" $file > ${TMP_FILE}
mv ${TMP_FILE} $file
done
I would modify Michael Vehrs' otherwise good answer as follows:
ls -rt *.rules | while read file
do
TMP_FILE="/tmp/${file}_$$"
awk -F"|" -v DATE=$(date +%Y"_"%m"_"%d) \
'!$3{$3=DATE} !$4{$4=DATE} 1' OFS="|" "$file" > "$TMP_FILE"
mv "$TMP_FILE" "$file"
done
Your question uses ls(1) to sort the files by time, oldest first. The above preserves that property. I removed the {} braces because they add nothing in a shell script when the variable name isn't being interpolated into a longer string, and added quotes to cope with filenames that include whitespace.
If time-order doesn't matter, I'd consider an inside-out solution: in awk, write to a temporary file instead of standard output, and then rename it with system in an END block. Then if something goes wrong your input is preserved.
First of all, it is silly to use a combination of ls -rlt and awk when the only thing you need is the file name. You don't even need ls because the shell glob is expanded by the shell, not ls. Simply use for file in *.rules. Since the date would seem to be the same for every file (unless you run the command at midnight), it is sufficient to calculate it in advance:
date=$(date +%Y"_"%m"_"%d)
for file in *.rules
do
TMP_FILE=$(mktemp "${file}_XXXXXX")
awk -F"|" -v DATE="${date}" '!$3{$3=DATE} !$4{$4=DATE} 1' OFS="|" "$file" > "${TMP_FILE}"
mv "${TMP_FILE}" "$file"
done
However, since awk also knows which file it is reading, you could do something like this:
awk -F"|" -v DATE=$(date +%Y"_"%m"_"%d) \
'!$3{$3=DATE} !$4{$4=DATE} { print > FILENAME ".tmp" }' OFS="|" *.rules
rename .tmp "" *.rules.tmp
(That is the util-linux rename syntax; with the Perl rename found on Debian-based systems it would be rename 's/\.tmp$//' *.rules.tmp instead.)
I have a file which contains text as follows:
Directory /home/user/ "test_user"
bunch of code
another bunch of code
How can I get from this file only the /home/user/ part?
I've managed to use awk -F '"' 'NR==1{print $1}' file.txt to get rid of the rest of the file, and I'm getting output like this:
Directory /home/user/
How can I change this command to get only the /home/user/ part? I'd like to make it as simple as possible. Unfortunately, I can't modify this file to add/change the content.
This should be the fastest, which will be noticeable if your file is large:
awk '{print $2; exit}' file
It will print the second field of the first line and stop processing the rest of the file.
With awk it should be:
awk 'NR==1{print $2}' file.txt
Setting the field delimiter to " was wrong, since it splits the line into these fields:
$1 = 'Directory /home/user/'
$2 = 'test_user'
$3 = '' (empty)
The default field separator, which is [[:space:]]+, splits like this:
$1 = 'Directory'
$2 = '/home/user/'
$3 = '"test_user"'
As an alternative, you can use head and cut:
$ head -n 1 file | cut -d' ' -f2
Not sure why you are using -F '"', as that changes the delimiter. If you remove that, then $2 will get you what you want.
awk 'NR==1{print $2}' file.txt
You can also use awk to execute the print when the line contains /home/user instead of counting records:
awk '/\/home\/user\//{print $2}' file.txt
In this case, if the line were buried in the file, or if there were multiple instances, you would get the second field for every occurrence, wherever it was.
Adding some grep:
grep Directory file.txt | awk '{print $2}'
I want to svn blame lines of code which include "todo" or "fixme".
I have the general flow of the script but struggle to combine it into one.
Finding the lines with "todo":
grep --color -Ern --include=*.{php,html,phtml} --exclude-dir=vendor "todo|TODO|FIXME" .
Blaming the line of code:
svn blame ${file} | cat -n | grep ${linenumber}
I could get $file and $linenumber from the first command with awk, but I don't know how to pipe the values I extract with awk into the second command.
I am missing the glue to combine these commands into one "script". :-)
You can build the command with awk and then pipe it to bash:
grep --color -Ern --include=*.{php,html,phtml} --exclude-dir=vendor "todo|TODO|FIXME" . |\
awk -F: '{printf "svn blame \"%s\" | cat -n | grep \"%s\"\n", $1, $2}'
That prints one command per input line with the following format:
svn blame "${file}" | cat -n | grep "${linenumber}"
The variables are replaced. When you execute the command as above, the generated commands are only printed to the shell, so you can confirm that everything is right. If so, add a last pipe to the end of the command so that the output is redirected to bash. The complete command would look like this:
grep --color -Ern --include=*.{php,html,phtml} --exclude-dir=vendor "todo|TODO|FIXME" . |\
awk -F: '{printf "svn blame \"%s\" | cat -n | grep \"%s\"\n", $1, $2}' | bash
A small note: I think you want to print the line whose number was extracted by the first command, don't you? But grep ${linenumber} just gives any line containing the string ${linenumber}. To print only that line, use sed: sed -n "2p" prints line number 2, for example. The complete command would then look like this:
grep --color -Ern --include=*.{php,html,phtml} --exclude-dir=vendor "todo|TODO|FIXME" . |\
awk -F: '{printf "svn blame \"%s\" | cat -n | sed -n \"%sp\"\n", $1, $2}' | bash
I am attempting to filter a log file and am running into issues. What I have so far is the following, which does not work:
tail -f /var/log/squid/accesscustom.log | awk '/username/;/user-name/ {print $1; fflush("")}' | awk '!x[$0]++' > /var/log/squid/accesscustom-filtered.log
The goal is to take a file that contains
ipaddress1 username
ipaddress7
ipaddress2 user-name
ipaddress1 username
ipaddress5
ipaddress3 username
ipaddress4 user-name
and save to accesscustom-filtered.log
ipaddress1
ipaddress2
ipaddress3
ipaddress4
It works without the redirection to accesscustom-filtered.log, but something about the > isn't working right and the file ends up empty.
Edit: Changed the original example to be correct
Use tee:
tail -f /var/log/squid/accesscustom.log | awk '/username|user-name/ {print $1}' | awk '!x[$0]++' | tee /var/log/squid/accesscustom-filtered.log
See also: Writing “tail -f” output to another file and Turn off buffering in pipe
Note: awk doesn't buffer like grep in the superuser example, so you shouldn't need to do anything special with your awk command. (more info)
input.txt
1,Ram,Fail
2,John,Fail
3,Ron,Success
param.txt (New Input)
1,Sam,Success
2,John,Sucess
Now I want to replace the whole lines in input.txt with those present in param.txt.
The 1st column will act as a primary key.
Output.txt
1,Sam,Success
2,John,Sucess
3,Ron,Success
I tried:
awk 'FNR==NR{a[$1]=$2 FS $3;next}{ print $0, a[$1]}' input.txt param.txt > Output.txt
But it is merging the file contents.
This might work for you (GNU sed):
sed 's|^\([^,]*,\).*|/^\1/c\\&|' param.txt | sed -f - input.txt
Explanation:
Convert param.txt into a sed script using the first field as an address to change the line in the input.txt. s|^\([^,]*,\).*|/^\1/c\\&|
Run the script against the input.txt. sed -f - input.txt
This can be done with one call to sort:
sort -t, -k1,1n -us param.txt input.txt
Use a stable numerical sort on the first comma-delimited field, and list param.txt before input.txt so that the correct, newer, lines are preferred when eliminating duplicates.
You could use join(1) to make this work:
$ join -t, -a1 -j1 Input.txt param.txt | sed -E 's/,[^,]*,[^,]*(,[^,]*,[^,]*)$/\1/'
1,Sam,Success
2,John,Sucess
3,Ron,Success
The sed at the tail of the pipe strips the old fields from Input.txt out of the replaced lines.
This will work only if both input files are sorted by first field.
Pure awk isn't really the right tool for the job. If you must use only awk, https://stackoverflow.com/a/5467806/1301972 is a good starting point for your efforts.
However, Unix provides some tools that will help with feeding awk the right input for what you're trying to do.
$ join -a1 -t, <(sort -n input.txt) <(sort -n param.txt) |
awk -F, 'NF > 3 {print $1 "," $4 "," $5; next}; {print}'
Basically, you're feeding awk a single file with the lines joined on the keys from input.txt. Then awk can parse out the fields you want for proper display or for redirection to your output file.
This should work in awk:
awk -F"," 'NR==FNR{a[$1]=$0;next} ($1 in a){ print a[$1]; next}1' param.txt input.txt
Test:
$ cat input.txt
1,Ram,Fail
2,John,Fail
3,Ron,Success
$ cat param.txt
1,Sam,Success
2,John,Sucess
$ awk -F"," 'NR==FNR{a[$1]=$0;next} ($1 in a){ print a[$1]; next}1' param.txt input.txt
1,Sam,Success
2,John,Sucess
3,Ron,Success