I want to write a bash script that extracts a string from a file name and inserts that string into a specific location in the same file.
For example:
Under /root dir there are different date directories 20160201, 20160202, 20160203 and under each directory there is a file abc20160201.dat, abc20160202.dat, abc20160203.dat.
My requirement is that I need to extract the date from each file name first, and then insert that date into the second column of each record in the file.
For extracting the date I am using
f=abc20160201.dat
s=`echo $f | cut -c 4-11`
echo "$f -> $s"
and for inserting the date I am using
awk 'BEGIN { OFS = "~"; ORS = "\n" ; date="20160201" ; IFS = "~"} { $1=date"~"$1 ; print } ' file > tempdate
But in my awk command the date is coming in the first column. Please let me know what I am doing wrong here.
The file on which this operation is being done is a delimited file with fields separated by ~ characters.
Or if anybody has a better solution for this, please let me know.
The variable for the input field separator is FS, not IFS. Consequently, the input line is not being split at all: the whole record is a single field, so the date you concatenate onto $1 simply lands next to the entire line rather than between the first and second fields.
You should be able to use:
f=abc20160201.dat
s=$(echo $f | cut -c 4-11)
awk -v date="$s" 'BEGIN { FS = OFS = "~" } { $1 = $1 OFS date; print }' $f
That generates the modified output to standard output. AFAIK, awk doesn't have an overwrite option, so if you want to modify the files 'in place', you'll write the output of the script to a temporary file, and then copy or move the temporary file over the original (removing the temporary if you copied). Copying preserves both hard links and symbolic links (and owner, group, permissions); moving doesn't. If the file names are neither symlinks nor linked files, moving is simpler. (Copying always 'works', but the copy takes longer than a move, requires the remove, and there's a longer window while the over-writing copy could leave you with an incomplete file if interrupted.)
Generalizing a bit:
for file in /root/2016????/*.dat
do
tmp=$(mktemp "$(dirname "$file")/tmp.XXXXXX")
awk -v date="$(basename "$file" | cut -c 4-11)" \
'BEGIN { FS = OFS = "~" } { $1 = $1 OFS date; print }' "$file" >"$tmp"
mv "$tmp" "$file"
done
One of the reasons for preferring $(…) over back-quotes is that it is much easier to manage nested operations and quoting with $(…). The mktemp command creates the temporary file in the same directory as the source file; you could legitimately decide to use mktemp "${TMPDIR:-/tmp}/tmp.XXXXXX" instead. A still more general script would iterate over "$@" (the arguments it is passed), but it might need to validate that the base name of each file matches the format you require/expect.
Adding code to deal with cleaning up on interrupts, or selecting between copy and move, is left as an exercise for the reader. Note that the script makes no attempt to detect whether it has been run on a file before. If you run it three times on the same file, you'll end up with columns 2-4 all containing the date.
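As an illustration of that last point, one way to make the awk step idempotent is to skip any record whose second field already holds the date. A minimal sketch, reusing f and s from the example above, and assuming a record counts as 'already processed' exactly when field 2 equals the date (which could misfire if genuine data happens to match):
awk -v date="$s" 'BEGIN { FS = OFS = "~" }
    $2 == date { print; next }           # second field already carries the date; pass the record through
    { $1 = $1 OFS date; print }' "$f"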
I'm trying to find duplicates of a string ID across files. Each of these IDs is unique and should be used in only one file. I am trying to verify that each ID is only used once, and the script should tell me which ID is duplicated and in which files.
This is an example of the set.csv file
"Read-only",,"T","ID6776","3.1.1","Text","?"
"Read-only",,"T","ID4294","3.1.1.1","Text","?"
"Read-only","ID","T","ID7294","a )","Text","?"
"Read-only","ID","F","ID8641","b )","Text","?"
"Read-only","ID","F","ID8642","c )","Text","?"
"Read-only","ID","T","ID9209","d )","Text","?"
"Read-only","ID","F","ID3759","3.1.1.2","Text","?"
"Read-only",,"F","ID2156","3.1.1.3","
This is the very inefficient code I wrote
for ID in $(grep 'ID\"\,\"[TF]' set.csv | cut -c 23-31);
do for FILE1 in *.txt; do for FILE2 in *.txt;
do if [[ $FILE1 -nt $FILE2 && $(grep -E "$ID" "$FILE1" "$FILE2") ]];
then echo $ID + $FILE1 + $FILE2;
fi;
done;
done;
done
Essentially I'm only interested in ID#s that are identified as "ID" in the CSV which would be 7294, 8641, 8642, 9209, 3759 but not the others. If File1 and File2 both contain the same ID from this set then it would print out the duplicated ID and each file that it is found in.
There might be thousands of IDs and files, so my exponential approach isn't at all preferred. If Bash isn't up to it I'll move to sets, hashmaps and a logarithmic searching algorithm in another language... but if the shell can do it I'd like to know how.
Thanks!
Edit: A bonus would be to find which IDs from the set.csv file aren't used at all. Pseudo-code in another language might be: create a set of all the IDs in the csv, then make another set and add to it the IDs found in the files, then compare the sets. Can bash accomplish something like this?
A linear option would be to use awk to store discovered identifiers with their corresponding filename, then report when an identifier is found again. Assuming the *.txt files follow the same CSV layout as set.csv:
awk -F, '$2 == "\"ID\"" && ($3 == "\"T\"" || $3 == "\"F\"") {
id=substr($4,4,4)
if(ids[id]) {
print id " is in " ids[id] " and " FILENAME;
} else {
ids[id]=FILENAME;
}
}' *.txt
The awk script looks through every *.txt file; it splits the fields based on commas (-F,). If field 2 is "ID" and field 3 is "T" or "F", then it extracts the numeric ID from field 4. If that ID has been seen before, it reports the previous file and the current filename; otherwise, it saves the id with an association to the current filename.
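For the bonus (finding which IDs from set.csv are never used), a similar two-pass approach could work. This is just a sketch, assuming once more that the *.txt files use the same CSV layout as set.csv:
awk -F, '
FNR == NR {                                   # first file: set.csv
    if ($2 == "\"ID\"" && ($3 == "\"T\"" || $3 == "\"F\""))
        wanted[substr($4, 4, 4)]              # collect the declared IDs
    next
}
$2 == "\"ID\"" && ($3 == "\"T\"" || $3 == "\"F\"") {
    seen[substr($4, 4, 4)]                    # mark IDs that appear in the *.txt files
}
END {
    for (id in wanted)
        if (!(id in seen))
            print id " is not used in any file"
}' set.csv *.txt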
I have a directory of files, myFiles/, and a text file values.txt in which one column is a set of values to find, and the second column is the corresponding replace value.
The goal is to replace all instances of find values (first column of values.txt) with the corresponding replace values (second column of values.txt) in all of the files located in myFiles/.
For example...
values.txt:
Hello Goodbye
Happy Sad
Running the command would replace all instances of "Hello" with "Goodbye" in every file in myFiles/, as well as replace every instance of "Happy" with "Sad" in every file in myFiles/.
I've attempted as many awk/sed approaches as I could think of, but have failed to produce a command that performs the desired action.
Any guidance is appreciated. Thank you!
Read each line from values.txt
Split that line into 2 words
Use sed for each line to replace the 1st word with the 2nd word in all files in the myFiles/ directory
Note: I've used bash parameter expansion to split the line (${line% *}, etc.), assuming values.txt is a space-separated two-column file. If that's not the case, you may use awk or cut to split the line.
while read -r line;do
sed -i "s/${line#* }/${line% *}/g" myFiles/* # '-i' edits files in place and 'g' replaces all occurrences of patterns
done < values.txt
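Alternatively, you can let read split the two columns for you instead of using parameter expansion. A minimal sketch, again assuming a space-separated two-column values.txt:
while read -r find replace; do
    sed -i "s/$find/$replace/g" myFiles/*   # replace every occurrence of column 1 with column 2
done < values.txt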
You can do what you want with awk.
#! /usr/bin/awk -f
# snarf in first file, values.txt
FNR == NR {
subs[$1] = $2
next
}
# apply replacements to subsequent files
{
for( old in subs ) {
# replace every occurrence of old (assumes the replacement does not itself contain old)
while ( start = index($0, old) ) {
len = length(old)
$0 = substr($0, 1, start - 1) subs[old] substr($0, start + len)
}
}
print
}
When you invoke it, put values.txt as the first file to be processed.
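For example, if you save the script as replace.awk (a name chosen here just for illustration) and make it executable, the invocation could look like this; note that the result goes to standard output rather than back into the files:
chmod +x replace.awk
./replace.awk values.txt myFiles/*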
Option One:
create a python script
with open('filename', 'r') as infile, etc., read the values.txt file into a python dict with the 'from' string as the key and the 'to' string as the value; close the infile.
use os to list the directory you want, iterate over the files, and for each one either popen "sed 's/from/to/g'" or read the file in and do the find/replace on each line.
Option Two:
bash script
# for each from/to pair in values.txt, invoke perl to edit the files in place
while read -r from to
do
    perl -p -i -e "s/$from/$to/g" dirname/*.txt
done < values.txt
The second is probably easier to write, but does less exception handling.
It's called 'Perl PIE' and it's a relatively famous hack for doing find/replace in lots of files at once.
I have multiple files, let's say
fname1 contains:
red=5
green=10
yellow=2
fname2 contains:
red=10
green=2
yellow=2
fname3 contains:
red=1
green=7
yellow=4
I want to write a script that reads from these files, sums the numbers for each colour,
and redirects the sums into a new file.
New file contains:
red=16
green=19
yellow=8
awk is your friend:
awk 'BEGIN{FS="=";}
{color[$1]+=$2}
END{
for(var in color)
printf "%s=%s\n",var,color[var]
}' fname1 fname2 fname3 >result
should do it.
Demystifying the above stuff:
Anything that is included inside '' is the awk program.
Stuff inside BEGIN will be executed only once, i.e. at the beginning.
FS is an awk built-in variable which stands for field separator.
Setting FS to = means awk will use = to delimit the fields/columns.
By default awk considers each line as a record.
In that case you have two fields denoted by $1 and $2 in each record having = as the delimiter.
{color[$1]+=$2} creates (if it does not already exist) an associative array element with the color name as the key, and += adds the value of field 2 to that element. Remember, associative array elements are initialized to zero when first created.
This is repeated for the three files fname1, fname2, fname3 fed into awk
Anything inside END{} will be executed only at the end, i.e. just before exit.
for(var in color) is the style of for loop used to iterate over an associative array.
Here var will be a key and color[key] points to value.
printf "%s=%s\n",var,color[var] is self explained.
Note
If all the filenames start with fname you can even put fname* instead of fname1 fname2 fname3
This assumes that there are no blank lines in any file
Because your source files are valid shell code, you can just source them (if they are from a trusted source) and accumulate the sums using Shell Arithmetic.
#!/bin/bash
sum_red=0
sum_green=0
sum_yellow=0
for file in "$@"; do
. "${file}"
let sum_red+=red
let sum_green+=green
let sum_yellow+=yellow
done
echo "red=$sum_red
green=$sum_green
yellow=$sum_yellow"
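A minimal usage sketch, assuming the script above is saved as sum_colours.sh (a name chosen here just for illustration) and made executable, with the colour files passed as arguments:
chmod +x sum_colours.sh
./sum_colours.sh fname1 fname2 fname3 > result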
I'm using awk to perform a file comparison against a file listing in found.txt
while read line; do
awk 'FNR==NR{a[$1]++;next}$1 in a' $line compare.txt >> $CHECKFILE
done < found.txt
found.txt contains full path information to a number of files that may contain the data. While I am able to determine that data exists in both files and output that data to $CHECKFILE, I wanted to be able to put the line from found.txt (the filename) where the line was found.
In other words I end up with something like:
File " /xxxx/yyy/zzz/data.txt "contains the following lines in found.txt $line
just not sure how to get the /xxxx/yyy/zzz/data.txt information into the stream.
Appended for clarification:
The file found.txt contains the full path information to several files on the system
/path/to/data/directory1/file.txt
/path/to/data/directory2/file2.txt
/path/to/data/directory3/file3.txt
each of the files has a list of parameters that need to be checked for existence before appending additional information to them later in the script.
so for example, file.txt contains the following fields
parameter1 = true
parameter2 = false
...
parameter35 = true
the compare.txt file contains a number of parameters as well.
So if parameter35 (or any other parameter) shows up in one of the three files, its output gets dropped to the Checkfile.
Both of the scripts (yours and the one I posted) will give me that output but I would also like to echo in the line that is being read at that point in the loop. Sounds like I would just be able to somehow pipe it in, but my awk expertise is limited.
It's not really clear what you want but try this (no shell loop required):
awk '
ARGIND==1 { ARGV[ARGC] = $0; ARGC++; next }
ARGIND==2 { keys[$1]; next }
$1 in keys { print FILENAME, $1 }
' found.txt compare.txt > "$CHECKFILE"
ARGIND is gawk-specific; if you don't have it, add FNR==1{ARGIND++} at the start of the script.
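With an awk that does not already define ARGIND, that workaround might look like this (the same script, with ARGIND maintained by hand):
awk '
FNR==1 { ARGIND++ }
ARGIND==1 { ARGV[ARGC] = $0; ARGC++; next }
ARGIND==2 { keys[$1]; next }
$1 in keys { print FILENAME, $1 }
' found.txt compare.txt > "$CHECKFILE"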
Pass the name into awk inside a variable like this:
awk -v file="$line" '{... print "File: " file }'
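Applied to the loop from the question, that might look something like this (a sketch that keeps the original structure and simply threads the filename through):
while read -r line; do
    awk -v file="$line" 'FNR==NR{a[$1]++;next} $1 in a {print "File: " file ": " $1}' "$line" compare.txt >> "$CHECKFILE"
done < found.txt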
Problem: Comparison of files from Pre-check status and Post-check status of a node for specific parameters.
With some help from the community, I have written the following solution, which extracts the information from files in the pre and post directories based on the "Node-ID" (which happens to be unique and is extracted from the files as well). After extracting the data from the pre/post folders, I have created folders based on the node-id and dumped the files into them.
My Code to extract data (The data is extracted from Pre and Post folders)
FILES=$(find postcheck_logs -type f -name *.log)
for f in $FILES
do
NODE=`cat $f | grep -m 1 ">" | awk '{print $1}' | sed 's/[>]//g'` ##Generate the node-id
echo "Extracting Post check information for " $NODE
mkdir temp/$NODE-post ## create a temp directory
cat $f | awk 'BEGIN { RS=$NODE"> "; } /^param1/ { foo=RS $0; } END { print foo ; }' > temp/$NODE-post/param1.txt ## extract data
cat $f | awk 'BEGIN { RS=$NODE"> "; } /^param2/ { foo=RS $0; } END { print foo ; }' > temp/$NODE-post/param2.txt
cat $f | awk 'BEGIN { RS=$NODE"> "; } /^param3/ { foo=RS $0; } END { print foo ; }' > temp/$NODE-post/param3.txt
done
After this I have a structure as:
/Node1-pre/param1.txt
/Node1-post/param1.txt
and so on.
Now I am stuck on comparing the $NODE-pre and $NODE-post files.
I have tried to do it using recursive grep, but I am not finding a suitable way to do so. What is the best possible way to compare these files using diff?
Moreover, I find the above data extraction program very slow. I believe it's not the best possible way (using least resources) to do so. Any suggestions?
Look askance at any instance of cat one-file — you could use I/O redirection on the next command in the pipeline instead.
You can do the whole thing more simply with:
for f in $(find postcheck_logs -type f -name '*.log')
do
NODE=$(sed -n '/>/{ s/ .*//; s/>//g; p; q; }' $f) ##Generate the node-id
echo "Extracting Post check information for $NODE"
mkdir temp/$NODE-post
awk -v NODE="$NODE" -v DIR="temp/$NODE-post" \
'BEGIN { RS=NODE"> " }
/^param1/ { param1 = $0 }
/^param2/ { param2 = $0 }
/^param3/ { param3 = $0 }
END {
print RS param1 > (DIR "/param1.txt")
print RS param2 > (DIR "/param2.txt")
print RS param3 > (DIR "/param3.txt")
}' $f
done
The NODE finding process is much better done by a single sed command than cat | grep | awk | sed, and you should plan to use $(...) rather than back-quotes everywhere.
The main processing of the log file should be done once; a single awk command is sufficient. The script is passed two variables: NODE and the directory name. The BEGIN is cleaned up; the $ before NODE was probably not what you intended. The main actions are very similar; each looks for the relevant parameter name and saves it in an appropriate variable. At the end, it writes the saved values to the relevant files, decorated with the value of RS. Semicolons are only needed when there's more than one statement on a line; there's just one statement per line in this expanded script. It looks bigger than the original, but that's only because I'm using vertical space.
As to comparing the before and after files, you can do it in many ways, depending on what you want to know. If you've got a POSIX-compliant diff (you probably do), you can use:
diff -r temp/$NODE-pre temp/$NODE-post
to report on the differences, if any, between the contents of the two directories. Alternatively, you can do it manually:
for file in param1.txt param2.txt param3.txt
do
if cmp -s temp/$NODE-pre/$file temp/$NODE-post/$file
then : No difference
else diff temp/$NODE-pre/$file temp/$NODE-post/$file
fi
done
Clearly, you can wrap that in a 'for each node' loop. And, if you are going to need to do that, then you probably do want to capture the output of the find command in a variable (as in the original code) so that you do not have to repeat that operation.
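For example, a per-node wrapper might look like this; just a sketch, assuming every node ends up with matching -pre and -post directories under temp:
for pre in temp/*-pre
do
    node=$(basename "$pre" -pre)                # strip the -pre suffix to recover the node id
    diff -r "temp/$node-pre" "temp/$node-post"
done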