Bash: sort a list starting at the end of each line

I have a file containing full file paths that I want to sort starting from the end of each string.
The file contains a list like the one below:
/Volumes/Location/Workers/Andrew/2015-08-12_Andrew_PC/DOCS/3177109.doc
/Volumes/Location/Workers/Andrew/2015-09-17_Andrew_PC/DOCS/2130419.doc
/Volumes/Location/Workers/Bill/2016-03-17_Bill_PC/DOCS/1998816.doc
/Volumes/Location/Workers/Charlie/2016-07-06_Charlie_PC/DOCS/4744123.doc
I want to sort this list so that the filenames are in sequence; this will help me find duplicates by filename regardless of path.
The list should appear like this:
/Volumes/Location/Workers/Bill/2016-03-17_Bill_PC/DOCS/1998816.doc
/Volumes/Location/Workers/Andrew/2015-09-17_Andrew_PC/DOCS/2130419.doc
/Volumes/Location/Workers/Andrew/2015-08-12_Andrew_PC/DOCS/3177109.doc
/Volumes/Location/Workers/Charlie/2016-07-06_Charlie_PC/DOCS/4744123.doc

Here's a way to do this:
sed -e 's|^.*/\(.*\)$|\1\t&|' list.txt | sort | cut -f 2-
This uses sed to prepend a copy of the filename (everything after the last /) to each line, separated by a tab, so that sort orders the list by filename. cut then removes the key we added in the first step. (GNU sed understands \t in the replacement; with other seds, insert a literal tab.)
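Since the end goal is spotting duplicate filenames, here is a short follow-up sketch (assuming the same list.txt): strip the directory part and let uniq -d report basenames that occur more than once:
sed 's|^.*/||' list.txt | sort | uniq -d
This prints each duplicated filename once; pair it with grep to recover the full paths.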

This should work on the sample data:
sort -t/ -k7 input_file
Note that -k7 starts the sort key at the seventh /-separated field and runs to the end of the line. Because the leading / makes field 1 empty, the filename is actually field 8; -k7 works here only because field 7 is always DOCS and every path has the same depth. For paths of varying depth, use one of the prepend-and-sort approaches instead.

This sorts on the last /-separated field, whatever its position. The first awk prepends the last field to each line and pipes the result through sort; the second awk then strips the prepended key off again.
awk -F'/' '{ $0 = $NF " " $0; print $0 | "sort -k1" }' file | awk '{print $2}'
/Volumes/Location/Workers/Bill/2016-03-17_Bill_PC/DOCS/1998816.doc
/Volumes/Location/Workers/Andrew/2015-09-17_Andrew_PC/DOCS/2130419.doc
/Volumes/Location/Workers/Andrew/2015-08-12_Andrew_PC/DOCS/3177109.doc
/Volumes/Location/Workers/Charlie/2016-07-06_Charlie_PC/DOCS/4744123.doc

Related

Bash: sort rows within a file by timestamp

I am new to bash scripting and have written a script that matches a regex and prints the matching lines to a file.
However, each line contains multiple columns, one of which is the timestamp column, in the form YYYYMMDDHHMMSSTTT (down to the millisecond), as shown below.
20180301050630663,ABC,,,,,,,,,,
20180301050630664,ABC,,,,,,,,,,
20180301050630665,ABC,,,,,,,,,,
20180301050630666,ABC,,,,,,,,,,
20180301050630667,ABC,,,,,,,,,,
20180301050630668,ABC,,,,,,,,,,
20180301050630663,ABC,,,,,,,,,,
20180301050630665,ABC,,,,,,,,,,
20180301050630661,ABC,,,,,,,,,,
20180301050630662,ABC,,,,,,,,,,
My code is written as follows:
awk -F "," -v OFS="," '{if($2=="ABC"){print}}' < $i >> "$filename"
How can I modify my code such that it can sort the rows by timestamp (YYYYMMDDHHMMSSTTT) in ascending order before printing to file?
You can use a very simple sort command, e.g.
sort yourfile
If you want to ensure sort looks only at the datestamp, tell it to use just the first comma-separated field as the sort key, e.g.
sort -t, -k1,1 yourfile
(Note that -k1 alone would make the key run from field 1 to the end of the line; -k1,1 restricts it to the first field.)
Example Use/Output
With your data saved in a file named log, you could do:
$ sort -t, -k1,1 log
20180301050630661,ABC,,,,,,,,,,
20180301050630662,ABC,,,,,,,,,,
20180301050630663,ABC,,,,,,,,,,
20180301050630663,ABC,,,,,,,,,,
20180301050630664,ABC,,,,,,,,,,
20180301050630665,ABC,,,,,,,,,,
20180301050630665,ABC,,,,,,,,,,
20180301050630666,ABC,,,,,,,,,,
20180301050630667,ABC,,,,,,,,,,
20180301050630668,ABC,,,,,,,,,,
Let me know if you have any problems.
Just add a pipeline.
awk -F "," '$2=="ABC"' < "$i" |
sort -n >> "$filename"
In the general case, to sort on column 234, try sort -t, -k234,234n.
Notice also the quoting around "$i", like you already have around "$filename", and the simplification of the Awk script.
If you are using gawk you can do:
$ awk -F "," -v OFS="," '$2=="ABC"{a[$1]=$0}  # keep lines whose 2nd field is "ABC", keyed by timestamp
END{                                          # set the sort method
    PROCINFO["sorted_in"] = "@ind_num_asc"
    for (e in a) print a[e]                   # traverse the array in ascending numeric index order
}' file
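One caveat with the array approach: it is keyed by timestamp, so two lines sharing a timestamp (the sample data has such duplicates) overwrite each other. A variant that keeps duplicates, still gawk-only:
awk -F "," '$2=="ABC"{a[$1] = ($1 in a) ? (a[$1] ORS $0) : $0}  # append lines that share a timestamp
END{
    PROCINFO["sorted_in"] = "@ind_num_asc"
    for (e in a) print a[e]
}' file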
An alternative is to use sed and sort:
sed -n '/^[0-9]*,ABC,/p' file | sort -t, -k1 -n
Keep in mind that both of these methods are independent of the shell in use; Bash is just executing the tools (sed, awk, sort, etc.) that are otherwise part of the OS. A sort could be written in pure Bash, but it would be long and slow.

Delete first characters off of a line in a file with awk or grep

I'm attempting to remove a certain pattern from a line, but not the entire line itself. An example would be:
Original:
user=dannyBoy
Desired:
dannyBoy
I have a file full of lines like that, so I was wondering how to cut a specific part of the text off, whether that means removing the first five characters of each line or searching for the pattern "user=" and removing it.
There are many ways to do this:
cut -d'=' -f2- file
sed 's/^[^=]*=//' file
awk -F= '{print $2}' file  # if just one = is present
cut sets a delimiter (-d'=') and then prints all the fields starting from the 2nd one (-f2-).
sed matches all the content from the beginning of the line up to and including the first = and removes it.
awk sets = as the field separator and prints the second field.
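If you'd rather avoid external tools entirely, bash parameter expansion can do the same job (a sketch; file is the input name used above):
while IFS= read -r line; do
    printf '%s\n' "${line#*=}"   # strip the shortest prefix ending in =
done < file
${line#*=} removes the shortest match of *= from the front of the value, so lines without an = pass through unchanged.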
Using ex:
echo user=dannyBoy | ex -s +"norm df=" +%p -cq! /dev/stdin
where ex is equivalent to vi -e/vim -e; it basically executes the vi command df= (delete up to and including the first =), then prints the buffer (%p).
If you've multiple lines like that, then it would be simpler by using substitution:
ex -s +"%s/^.*=//g" +%p -cq! foo.txt
To edit file in place, change -cq! to -cwq.
The command below deletes the first 5 characters:
$ echo "user=dannyboy" | cut -c 6-
You can use it on a file with cut -c 6- inputfilename as well.

To find a word and copy the following word with shell (Ubuntu)?

Is there a possibility to find a word in a file and then to copy the following word?
Example:
abc="def"
bla="no_need"
line_i_need="information_i_need"
still_no_use="blablabla"
So the third line is exactly the line I need!
Is it possible to find this word with shell commands?
Thanks for your support.
Using awk with a custom field separator makes this much simpler:
awk -F '[="]+' '$1=="line_i_need"{print $2}' file
information_i_need
-F '[="]+' sets field separator as 1 or more of = or "
Use grep:
grep line_i_need file_name
It will print:
line_i_need="information_i_need"
This finds the line with grep and cuts the second column using " as the separator:
grep line_i_need file_name | cut -d '"' -f2
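A sed alternative in the same spirit, which also strips the quotes (a sketch; it assumes the value is always double-quoted):
sed -n 's/^line_i_need="\(.*\)"$/\1/p' file
The -n/p combination prints only lines where the substitution matched, so it filters and extracts in one step.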

Bash to find count of multiple strings in a large file

I'm trying to get the count of various strings in a large txt file using bash commands.
I.e., find the counts of the strings 'pig', 'horse', and 'cat' using bash, and get output like 'pig: 7, horse: 3, cat: 5'. I would like to search through the file only once, because it is very large (I do not want to scan the whole file for 'pig', then go back and scan it for 'horse', etc.).
Any help with commands would be appreciated. Thanks!
grep -Eo 'pig|horse|cat' txt.file | sort | uniq -c | awk '{print $2": "$1}'
Breaking that into pieces:
grep -Eo 'pig|horse|cat'   print every occurrence (-o) of the extended (-E) regex
sort                       sort the resulting words
uniq -c                    output the unique values (of sorted input) with the count (-c) of each
awk '{print $2": "$1}'     for each line, print the second field (the word), then a colon and a space, then the first field (the count)
Note that this pipeline reads the file only once: grep matches all three words in a single pass.
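An awk-only variant that also reads the file once, and counts whole whitespace-separated words rather than substrings (a sketch; unlike grep -o it will not match cat inside catalog):
awk '{ for (i = 1; i <= NF; i++)
         if ($i == "pig" || $i == "horse" || $i == "cat") c[$i]++ }
     END { for (w in c) print w ": " c[w] }' txt.file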

Unix cut: Print same Field twice

Say I have file - a.csv
ram,33,professional,doc
shaym,23,salaried,eng
Now I need this output (please don't ask me why):
ram,doc,doc,
shaym,eng,eng,
I am using cut command
cut -d',' -f1,4,4 a.csv
But the output remains
ram,doc
shaym,eng
That means cut can only print a field once. I need to print the same field twice or n times.
Why do I need this ? (Optional to read)
Ah. It's a long story. I have a file like this
#,#,-,-
#,#,#,#,#,#,#,-
#,#,#,-
I have to convert this to
#,#,-,-,-,-,-
#,#,#,#,#,#,#,-
#,#,#,-,-,-,-
Here each '#' and '-' refers to different numerical data. Thanks.
You can't print the same field twice. cut prints a selection of fields (or characters or bytes) in order. See Combining 2 different cut outputs in a single command? and Reorder fields/characters with cut command for some very similar requests.
The right tool to use here is awk, if your CSV doesn't have quotes around fields.
awk -F , -v OFS=, '{print $1, $4, $4}'
If you don't want to use awk (why? what strange system has cut and sed but no awk?), you can use sed (still assuming that your CSV doesn't have quotes around fields). Match the first four comma-separated fields and select the ones you want in the order you want.
sed -e 's/^\([^,]*\),\([^,]*\),\([^,]*\),\([^,]*\)/\1,\4,\4/'
$ sed 's/,.*,/,/; s/\(,.*\)/\1\1,/' a.csv
ram,doc,doc,
shaym,eng,eng,
What this does:
Replace everything between the first and last comma with just a comma
Repeat the last ",something" part and tack on a comma. Voilà!
Assumptions made:
You want the first field, then twice the last field
No escaped commas within the first and last fields
Why do you need exactly this output? :-)
using perl:
perl -F, -ane 'chomp($F[3]);$a=$F[0].",".$F[3].",".$F[3];print $a."\n"' your_file
using sed:
sed 's/\([^,]*\),.*,\(.*\)/\1,\2,\2/g' your_file
As others have noted, cut doesn't support field repetition.
You can combine cut and sed, for example if the repeated element is at the end:
< a.csv cut -d, -f1,4 | sed 's/,[^,]*$/&&,/'
Output:
ram,doc,doc,
shaym,eng,eng,
Edit
To make the repetition variable, you could do something like this (assuming you have coreutils available):
n=10
rep=$(seq $n | sed 's:.*:\&:' | tr -d '\n')
< a.csv cut -d, -f1,4 | sed 's/,[^,]*$/'"$rep"',/'
Output:
ram,doc,doc,doc,doc,doc,doc,doc,doc,doc,doc,
shaym,eng,eng,eng,eng,eng,eng,eng,eng,eng,eng,
I had the same problem, but instead of spelling out all the columns in awk, I just used the following (to duplicate the 2nd column):
awk -v OFS='\t' '$2=$2"\t"$2' # for tab-delimited files
For CSVs you can just use
awk -F , -v OFS=, '$2=$2","$2'
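Combining the ideas above into one parameterized sketch (n is the repeat count, and field 4 is assumed to be the one to repeat, matching the a.csv example):
awk -F, -v OFS=, -v n=2 '{ s = $1
    for (i = 0; i < n; i++) s = s OFS $4   # append field 4 n times
    print s "," }' a.csv
With n=2 this reproduces ram,doc,doc, and shaym,eng,eng,; raising n gives longer rows like the n=10 example earlier.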
