Comparing two text files and counting number of occurrences - shell

I'm trying to write a blog post about the dangers of having a common access point name.
So I did some wardriving to get a list of access point names, and I downloaded a list of the 1000 most common access point names (for which rainbow tables exist) from Renderlab.
But how can I compare those two text files to see how many of my collected access point names are open to attacks from rainbow tables?
The text files are built like this:
collected.txt:
linksys
internet
hotspot
The most common access point names are listed in
SSID.txt:
default
NETGEAR
Wireless
WLAN
Belkin54g
So the script should sort the lines, compare them, and show how many times the lines from collected.txt are found in SSID.txt.
Does that make any sense? I'd be grateful for any help :)

If you don't mind using a Python script:
# read the list of common SSIDs into a set for exact, line-by-line matching
with open('SSID.txt', 'r') as content_file:
    ssids = set(line.strip() for line in content_file)
found = {}  # summary of found names and how often they occur
with open('collected.txt', 'r') as file1:
    for line in file1:
        name = line.strip()
        if name in ssids:
            found[name] = found.get(name, 0) + 1
for name in found:
    print(found[name], name)  # print each name and its number of occurrences
...it can be run in the directory containing these files - collected.txt and SSID.txt - and it will print a list looking like this:
5 NETGEAR
3 default
(...)
The script reads file 1 line by line and checks each name against file 2. It can easily be modified to take the file names from the command line.

First, take a look at a simple tutorial about the sdiff command, such as How do I Compare two files under Linux or UNIX. Notepad++ also supports this.
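For a quick check from the shell, a sketch along those lines might be the following (comm is an extra tool, not mentioned above, used here to count the overlap; both commands assume the two files are in the current directory):
# view the two sorted lists side by side
sdiff <(sort collected.txt) <(sort SSID.txt)
# count the distinct names that appear in both lists
comm -12 <(sort -u collected.txt) <(sort -u SSID.txt) | wc -l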

To find the number of times each line in file A appears in file B, you can do:
awk 'FNR==NR{a[$0]=1; next} $0 in a { count[$0]++ }
END { for( i in a ) print i, count[i] }' A B
If you want the output sorted, pipe the output to sort, but there's no need to sort just to find the counts. Note that the $0 in a clause can be omitted at the cost of consuming more memory, which may be a problem if file B is very large.
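Applied to the question, with A = SSID.txt and B = collected.txt (and the count printed first, defaulting to 0, so the result can be ranked numerically), the invocation might look like this untested sketch:
awk 'FNR==NR{a[$0]=1; next} $0 in a { count[$0]++ }
     END { for (i in a) print count[i]+0, i }' SSID.txt collected.txt | sort -rn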

Related

Is there a faster way to combine files in an ordered fashion than a for loop?

For some context, I am trying to combine multiple files (in an ordered fashion) named FILENAME.xxx.xyz (xxx starts from 001 and increases by 1) into a single file (denoted as $COMBINED_FILE), then replace a number of lines of text in the $COMBINED_FILE taking values from another file (named $ACTFILE). I have two for loops to do this which work perfectly fine. However, when I have a larger number of files, this process tends to take a fairly long time. As such, I am wondering if anyone has any ideas on how to speed this process up?
Step 1:
for i in {001..999}; do
    [[ ! -f ${FILENAME}.${i}.xyz ]] && break
    cat ${FILENAME}.${i}.xyz >> ${COMBINED_FILE}
    mv -f ${FILENAME}.${i}.xyz ${XYZDIR}/${JOB_BASENAME}_${i}.xyz
done
Step 2:
for ((j=0; j<=${NUM_CONF}; j++)); do
    let "n = 2 + (${j} * ${LINES_PER_CONF})"
    let "m = ${j} + 1"
    ENERGY=$(awk -v NUM=$m 'NR==NUM { print $2 }' $ACTFILE)
    sed -i "${n}s/.*/${ENERGY}/" ${COMBINED_FILE}
done
I forgot to mention: there are other files named FILENAME.*.xyz which I do not want to append to the $COMBINED_FILE
Some details about the files:
FILENAME.xxx.xyz are molecular xyz files of the form:
Line 1: Number of atoms
Line 2: Title
Line 3-Number of atoms: Molecular coordinates
Line (number of atoms +1): same as line 1
Line (number of atoms +2): Title 2
... continues on (where line 1 through Number of atoms is associated with conformer 1, and so on)
The ACT file is a file containing the energies which has the form:
Line 1: conformer1 Energy
Line 2: conformer2 Energy2
Where conformer1 is in column 1 and the energy is in column 2.
The goal is to make the energy for each conformer its title line in the combined file (the energy must become the title for that specific conformer).
If you know that at least one matching file exists, you should be able to do this:
cat -- ${FILENAME}.[0-9][0-9][0-9].xyz > ${COMBINED_FILE}
Note that this will match the 000 file, whereas your script counts from 001. If you know that 000 either doesn't exist or isn't a problem if it were to exist, then you should just be able to do the above.
However, moving these files under new names into another directory does require a loop, or one of the less-than-highly-portable pattern-based renaming utilities.
If you could change your workflow so that the filenames are preserved, it could just be:
mv -- ${FILENAME}.[0-9][0-9][0-9].xyz ${XYZDIR}/${JOB_BASENAME}
where we now have a directory named after the job basename, rather than a path component fragment.
The Step 2 processing should be doable entirely in Awk, rather than a shell loop; you can read the file into an associative array indexed by line number, and have random access over it.
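For example, a rough sketch of that approach, assuming $ACTFILE, $COMBINED_FILE and $LINES_PER_CONF are set as in the question (one awk pass instead of a sed call per conformer):
awk -v L="$LINES_PER_CONF" '
    FNR == NR { energy[FNR] = $2; next }   # first file: load the ACT energies
    FNR > 1 && (FNR - 2) % L == 0 {        # title lines sit at 2, 2+L, 2+2L, ...
        $0 = energy[(FNR - 2) / L + 1]     # replace with the matching energy
    }
    { print }
' "$ACTFILE" "$COMBINED_FILE" > "$COMBINED_FILE.tmp" &&
mv "$COMBINED_FILE.tmp" "$COMBINED_FILE"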
Awk can also accept multiple files, so the following pattern may be workable for processing the individual files:
awk 'your program' ${FILENAME}.[0-9][0-9][0-9].xyz
for instance just before concatenating and moving them away. Then you don't have to rely on a fixed LINES_PER_CONF and such. Awk has the FNR variable, which is the record number within the current file; condition/action pairs can tell when processing has moved on to the next file.
GNU Awk also has extensions BEGINFILE and ENDFILE, which are similar to the standard BEGIN and END, but are executed around each processed file; you can do some calculations over the record and in ENDFILE print the results for that file, and clear your accumulation variables for the next file. This is nicer than checking for FNR == 1, and having an END action for the last file.
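A minimal sketch of that BEGINFILE/ENDFILE pattern (the per-file statistic here is just a line count, purely to show the structure):
gawk '
    BEGINFILE { n = 0 }               # reset the accumulator for each new file
    { n++ }                           # accumulate over the current file
    ENDFILE   { print FILENAME, n }   # report the per-file result
' ${FILENAME}.[0-9][0-9][0-9].xyz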
If you really want to materialize all the file names without globbing, you can always use jot (it's like seq, but with more integer digits in its default mode before switching to scientific notation):
# extracting fixed-interval samples without modulo (%) math
jot -w 'myFILENAME.%03d' - 0 999 |
mawk '_<(_+=(NR == +_)*__)' \_=17 __=91
myFILENAME.016
myFILENAME.107
myFILENAME.198
myFILENAME.289
myFILENAME.380
myFILENAME.471
myFILENAME.562
myFILENAME.653
myFILENAME.744
myFILENAME.835
myFILENAME.926

How to check whether the following files exist or not with a condition in a shell script?

I have a scenario.
In the path $path1 I have this list of files:
LINUX-7.1.0.00.00-010.RHEL6.DEBUG.i386.rpm
LINUX-7.1.0.00.00-010.RHEL6.DEBUG.x86_64.rpm
LINUX-7.1.0.00.00-010.RHEL6.i386.rpm
LINUX-7.1.0.00.00-010.RHEL6.x86_64.rpm
LINUX-7.1.0.00.00-010.RHEL7.DEBUG.x86_64.rpm
LINUX-7.1.0.00.00-010.RHEL7.x86_64.rpm
LINUX-7.1.0.00.00-010.SLES12SP4.DEBUG.x86_64.rpm
LINUX-7.1.0.00.00-010.SLES12SP4.x86_64.rpm
In $path2 I have these files:
7.1.0.00.00-010 - (build.major).(build.minor).(build.servicepack).(build.patch).(build.hotfix)-(build.number)
build.major - 7
build.minor - 1
build.servicepack - 0
build.patch - 0
build.hotfix - 0
build.number - 010
I need to check whether this particular list of files exists or not; if it exists, the script can follow some steps, else it should exit.
As Barmar said, this website is more aimed at solving technical issues.
Assuming you don't know where to look, I would approach the problem with the following steps:
"cat" the input file and use "awk" to extract the 3rd column
use the output in a for loop to iterate through the lines (even if you could do it with awk directly), concatenating in a variable (called tmp for example)
look for the files using $tmp in their name.
So, in shell, you can use awk to select a column from a text input, you can iterate directly through the lines of a text stream with a for loop, and you can insert the value of a variable into a string using $myVariable.
You're now on the right track!
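As a very rough sketch of those steps (the build-info file name, the column holding the version string, and the rpm naming pattern are all assumptions here, since the question does not show them):
# hypothetical input file and naming pattern, for illustration only
for ver in $(awk '{print $3}' "$path2/build_info.txt"); do
    tmp="LINUX-${ver}"
    if ls "$path1/${tmp}".*.rpm >/dev/null 2>&1; then
        echo "files for ${tmp} exist, continuing"
    else
        echo "files for ${tmp} not found, exiting" >&2
        exit 1
    fi
done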

Script to append files with same part of the name

I have a bunch of files (> 1000) whose content is columns of numbers separated by spaces. I would like to reduce the number of files by appending the content of groups of them into one file.
All the files start with "time_" followed by a NUMBER, and the rest of the filename is of the form pow_....txt. For example: time_0.6pow_0.1-173.txt
I would like to append the files with the same NUMBER into a single file, and do it with a script, since I have ~70 different NUMBERs.
I have found that
cat time_0.6pow_*.txt > time_0.6.txt
works, but I would like to make a script for all the possible NUMBERs.
Regards
You can do it like this:
for fName in time_*pow_*.txt; do
    s="${fName#time_}"                      # strip the leading "time_"
    cat "$fName" >> time_"${s%%pow*}".txt   # keep only the NUMBER for the output name
done
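For the example name from the question, the two parameter expansions work out like this:
fName=time_0.6pow_0.1-173.txt
s="${fName#time_}"            # -> 0.6pow_0.1-173.txt
echo time_"${s%%pow*}".txt    # -> time_0.6.txt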

Split text file into multiple files

I have a large text file containing 1000 abstracts, with an empty line between each abstract. I want to split this file into 1000 text files.
My file looks like
16503654 Three-dimensional structure of neuropeptide k bound to dodecylphosphocholine micelles. Neuropeptide K (NPK), an N-terminally extended form of neurokinin A (NKA), represents the most potent and longest lasting vasodepressor and cardiomodulatory tachykinin reported thus far.
16504520 Computer-aided analysis of the interactions of glutamine synthetase with its inhibitors. Mechanism of inhibition of glutamine synthetase (EC 6.3.1.2; GS) by phosphinothricin and its analogues was studied in some detail using molecular modeling methods.
You can use split and set "NUMBER lines per output file" to 2. Each file would have one text line and one empty line.
split -l 2 file
Something like this:
awk 'NF{print > $1;close($1);}' file
This will create 1000 files, with each filename being the abstract number. The awk code writes each record to a file whose name is taken from the 1st field ($1). This is done only if the number of fields is greater than 0 (NF).
You could always use the csplit command. This is a file splitter but based on a regex.
Something along the lines of:
csplit -ks -f /tmp/files INPUTFILENAMEGOESHERE '/^$/' '{*}'
It is untested and may need a little tweaking though.
CSPLIT

method for merging two files, opinion needed

Problem: I have two folders (one is the Delta folder, where the files get updated, and the other is the Original folder, where the original files exist). Every time a file is updated in the Delta folder, I need to merge the file from the Original folder with the updated file from the Delta folder.
Note: though the file names in the Delta folder and the Original folder are unique, the content of the files may be different. For example:
$ cat Delta_Folder/1.properties
account.org.com.email=New-Email
account.value.range=True
$ cat Original_Folder/1.properties
account.org.com.email=Old-Email
account.value.range=False
range.list.type=String
currency.country=Sweden
Now, I need to merge Delta_Folder/1.properties with Original_Folder/1.properties so, my updated Original_Folder/1.properties will be:
account.org.com.email=New-Email
account.value.range=True
range.list.type=String
currency.country=Sweden
The solution I opted for is:
Find all *.properties files in the Delta folder and save the list to a temp file (delta-files.txt).
Find all *.properties files in the Original folder and save the list to a temp file (original-files.txt).
Then I need to get the list of files that are present in both folders and put those in a loop.
Then I need to loop over each file and read each line from its property file (1.properties).
Then I need to read each line (delta-line="account.org.com.email=New-Email") from the Delta folder's property file and split the line on the delimiter "=" into two string variables
(delta-line-string1=account.org.com.email; delta-line-string2=New-Email;).
Then I need to read each line (orig-line="account.org.com.email=Old-Email") from the Original folder's property file and split the line on the delimiter "=" into two string variables
(orig-line-string1=account.org.com.email; orig-line-string2=Old-Email;).
If delta-line-string1 == orig-line-string1, then update $orig-line with $delta-line,
i.e.:
if account.org.com.email == account.org.com.email then replace
account.org.com.email=Old-Email in Original_Folder/1.properties with
account.org.com.email=New-Email
Once the loop finishes all lines in a file, it moves on to the next file. The loop continues until it has finished all the common files.
For looping I used for loops, for splitting lines I used awk, and for replacing content I used sed.
Overall it's working fine, but it takes quite a long time (4 minutes) to finish each file, because it goes through three loops for every line, splitting the line, finding the variable in the other file, and replacing the line.
I'm wondering if there is any way I can reduce the loops so that the script executes faster.
With paste and awk:
File 2:
$ cat /tmp/l2
account.org.com.email=Old-Email
account.value.range=False
currency.country=Sweden
range.list.type=String
File 1:
$ cat /tmp/l1
account.org.com.email=New-Email
account.value.range=True
The command + output:
paste /tmp/l2 /tmp/l1 | awk '{print $NF}'
account.org.com.email=New-Email
account.value.range=True
currency.country=Sweden
range.list.type=String
Or with a single awk command if sorting is not important:
awk -F'=' '{arr[$1]=$2}END{for (x in arr) {print x"="arr[x]}}' /tmp/l2 /tmp/l1
I think your two main options are:
Completely reimplement this in a more featureful language, like perl.
While reading the delta file, build up a sed script. For each line of the delta file, you want a sed instruction similar to:
s/account.org.com.email=.*$/account.org.com.email=value_from_delta_file/g
That way you don't loop through the original files a bunch of extra times. Don't forget to escape & / and \ as mentioned in this answer.
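As a rough sketch of that idea for a single file pair (assuming keys and values contain none of the characters that need escaping):
# turn every delta line "key=value" into a sed command "s/^key=.*$/key=value/"
sed 's|\([^=]*\)=\(.*\)|s/^\1=.*$/\1=\2/|' Delta_Folder/1.properties > edits.sed
# apply all the generated edits to the original file in one pass
sed -i -f edits.sed Original_Folder/1.properties
Here edits.sed is just a scratch file name for the generated sed script.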
Is using a database at all an option here?
Then you would only have to write code for extracting data from the Delta files (assuming that can't be replaced by a database connection).
It just seems like this is going to keep getting more complicated and slower as time goes on.
