Bash - Searching in a log file

I have a bash script that saves the date of last change, the filename, and possibly the number of changes, etc., to a file (something similar to ls output).
Is there any way to search this file in bash, e.g., to get the most used file or the most recent file, but print just the filename?
So the file looks something like this:
2018-03-28 19:47:41 filename1
2018-03-28 19:49:24 filename2
2018-03-28 19:50:14 filename1
2018-03-28 19:50:17 filename3
Now I would like to get the file that was used last. I could sort it (it's actually sorted already), but I only want the filename of the last edited file (the one with the latest date). Is there a way to do this apart from regex?

If I understand you correctly:
head -1 foo.txt | tr -s ' ' | cut -d ' ' -f 3 # First line of foo.txt, only the filename
tail -1 foo.txt | tr -s ' ' | cut -d ' ' -f 3 # Last line
will give you the fields you want. If foo.txt is already sorted, those will be the earliest and the latest.
To store those in variables:
firstfn="$(head -1 foo.txt | tr -s ' ' | cut -d ' ' -f 3)"
echo "First filename is $firstfn" # just a test
lastfn="$(tail -1 foo.txt | tr -s ' ' | cut -d ' ' -f 3)"
If you have awk, you can do this more simply:
awk -- 'END { print $3; }' foo.txt
for the last line, or
awk -- '{ print $3; exit }' foo.txt
for the first line. Same deal with the variables, e.g.,
firstfn="$(awk -- '{ print $3; exit }' foo.txt)"

Related

Get second part of output separated by two spaces

I have this script
#!/bin/bash
path=$1
find "$path" -type f -exec sha1sum {} \; | sort | uniq -D -w 32
It outputs this:
3c8b9f4b983afa9f644d26e2b34fa3e03a2bef16 ./dups/dup1-1.txt
3c8b9f4b983afa9f644d26e2b34fa3e03a2bef16 ./dups/dup1.txt
ffc752244b634abb4ed68d280dc74ec3152c4826 ./dups/subdups/dup2-2.txt
ffc752244b634abb4ed68d280dc74ec3152c4826 ./dups/subdups/dup2.txt
Now I only want to save the last part (the path) in an array.
When I add this after the sort
| awk -F " " '{ print $1 }'
I get this as output:
3c8b9f4b983afa9f644d26e2b34fa3e03a2bef16
3c8b9f4b983afa9f644d26e2b34fa3e03a2bef16
ffc752244b634abb4ed68d280dc74ec3152c4826
ffc752244b634abb4ed68d280dc74ec3152c4826
When I change $1 to $2, I get nothing, but I want to get the path of the file.
How should I do this?
EDIT:
This script
#!/bin/bash
path=$1
find "$path" -type f -exec sha1sum {} \; | awk '{ print $1 }' | sort | uniq -D -w 32
Outputs this
parallels@mbp:~/bin$ duper ./dups
3c8b9f4b983afa9f644d26e2b34fa3e03a2bef16
3c8b9f4b983afa9f644d26e2b34fa3e03a2bef16
ffc752244b634abb4ed68d280dc74ec3152c4826
ffc752244b634abb4ed68d280dc74ec3152c4826
When I change it to $2 it outputs this
parallels@mbp:~/bin$ duper ./dups
parallels@mbp:~/bin$
Expected Output
./dups/dup1-1.txt
./dups/dup1.txt
./dups/subdups/dup2-2.txt
./dups/subdups/dup2.txt
There are some files in the directory that are not duplicates of each other, such as nodup1.txt and nodup2.txt. That's why they don't show up.
Change your find command to this:
find "$path" -type f -exec sha1sum {} \; | uniq -D -w 41 | awk '{print $2}' | sort
I moved the uniq as the first filter and it is taking into consideration just the first 41 characters, aiming to match just the sha1sum hash.
You can achieve the same result piping to tr and then cut:
echo '3c8b9f4b983afa9f644d26e2b34fa3e03a2bef16 ./dups/dup1-1.txt' |\
tr -s ' ' | cut -d ' ' -f 2
Outputs:
./dups/dup1-1.txt
-s ' ' on tr is to squeeze spaces
-d ' ' -f 2 on cut is to output the second field delimited by spaces
I like to use cut for stuff like this. With this input:
3c8b9f4b983afa9f644d26e2b34fa3e03a2bef16 ./dups/dup1-1.txt
I'd do cut -d ' ' -f 2 which should return:
./dups/dup1-1.txt
I haven't tested it though for your case.
EDIT: Gonzalo Matheu's answer is better, as it removes any extra spaces between the fields before doing the cut.
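To actually store the paths in an array, as the question asks, a minimal sketch (assuming bash 4+ for mapfile, and filenames without embedded spaces, since awk '{print $2}' would truncate at the first space):
# read one duplicate path per line into the array "dupes"
mapfile -t dupes < <(find "$path" -type f -exec sha1sum {} \; | sort | uniq -D -w 40 | awk '{ print $2 }')
printf '%s\n' "${dupes[@]}"   # verify: one array element per line
Here -w 40 compares exactly the 40-character sha1 hash.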

Creating directories from list preserving whitespaces

I have a list of names in a file from which I need to create directories. The list looks like
Ada Lovelace
Jean Bartik
Leah Culver
I need the folder names to be exactly the same, preserving the whitespace. But with
awk '{print $0}' myfile | xargs mkdir
I create separate folders for each word
Ada
Lovelace
Jean
Bartik
Leah
Culver
Same happens with
awk '{print $1 " " $2}' myfile | xargs mkdir
Where is the error?
Using GNU xargs, you can use the -d option to set the delimiter to \n only. This way you can avoid awk as well.
xargs -d '\n' mkdir -p < file
If you don't have GNU xargs, you can use tr to convert each \n to \0 first:
tr '\n' '\0' < file | xargs -0 mkdir
@birgit: try the following, based entirely on your sample Input_file:
awk -vs1="\"" 'BEGIN{printf "mkdir ";}{printf("%s%s%s ",s1,$0,s1);} END{print ""}' Input_file | sh
awk '{ system ( sprintf( "mkdir \"%s\"", $0)) }' YourFile
# OR
awk '{ print"mkdir "\"" $0 "\"" | "/bin/sh" }' YourFile
# OR for 1 subshell
awk '{ Cmd = sprintf( "%s%smkdir \"%s\"", Cmd, (NR==1?"":"\n"), $0) } END { system ( Cmd ) }' YourFile
The last version is better because it creates only one subshell.
If there is a huge number of folders (hitting the shell's argument-length limit), you could loop and build several smaller commands instead.
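If you want to stay in plain bash with neither awk nor xargs, a simple read loop also preserves the whitespace (a minimal sketch; the -- guards against names that start with a dash):
while IFS= read -r name; do
    mkdir -p -- "$name"
done < myfile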

bash calculations with numbers from files

I am trying to do a simple thing:
Get the second number in the line with the second occurrence of the word TER, lower it by one, and process it further. The tr -s ' ' is there because the file is delimited not by tabs but by varying amounts of whitespace.
My script:
first_res_atombumb= grep 'TER' tata_sbox_cuda.pdb | head -n 2 | tail -1 |tr -s ' '| cut -f 2 -d ' '
echo $((first_res_atombumb-1))
but this only returns:
255
-1
Of course I want to have 254.
Adding | tr -d '\n' does not help either. What on earth is going on? I have already asked several people at work and no one seems to know.
The lines in question look like this:
TER 128 DA3 4
TER 255 DA3 8
and if I run grep 'TER' tata_sbox_cuda.pdb | head -n 2 | tail -1 | tr -s ' ' | cut -f 2 -d ' ' on the command line, I get what I expect: just 255.
With bash, I'd write
n_ter=0
while read -a words; do
    if [[ ${words[0]} == TER ]] && (( ++n_ter == 2 )); then
        echo $(( ${words[1]} - 1 ))
    fi
done < file
but I'd use awk
awk '$1 == "TER" && ++n == 2 {print $2 - 1}' file
The problem with your code: you forgot to use the $() command substitution syntax
first_res_atombumb= grep 'TER' tata_sbox_cuda.pdb | head -n 2 | tail -1 |tr -s ' '| cut -f 2 -d ' '
# .................^...............................................................................^
echo $((first_res_atombumb-1))
You're setting the variable to an empty string in the environment of the grep command. Then, since you're not capturing the output of that pipeline, "255" is printed to the terminal. Because the variable is unset in your current shell, you get echo $((-1))
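A quick toy demonstration of that behavior (using a hypothetical variable x, not from the original script):
$ x= echo hello      # x is set to "" only in echo's environment
hello
$ echo "x is '$x'"   # x was never set in the current shell
x is ''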
All you need is:
first_res_atombumb=$(grep 'TER' tata_sbox_cuda.pdb | head -n 2 | tail -1 |tr -s ' '| cut -f 2 -d ' ')
# .................^^...............................................................................^
But I'd still use awk.
If I understand your problem correctly, you can solve it using awk:
awk 'BEGIN{v=0} $1 == "TER" {v++;if (v==2) {print $2-1 ;exit}}' tata_sbox_cuda.pdb
Explanation:
BEGIN{v=0} declares and zeroes the counter variable.
$1 == "TER" runs the block in {} only on lines whose first field is TER.
{v++;if (v==2) {print $2-1 ;exit}} increments v and checks whether it is 2; if so, it subtracts 1 from the second field, prints the result, and exits (which makes processing faster by skipping the remaining lines).
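As a quick sanity check, feeding just the two TER lines from the question to that logic prints the expected value:
$ printf 'TER  128  DA3  4\nTER  255  DA3  8\n' | awk '$1 == "TER" {v++; if (v==2) {print $2-1; exit}}'
254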

I want to re-arrange a file in an order in shell

I have a file test.txt like the one below, with spaces between the records:
service[1.1],parttion, service[1.2],parttion, service[1.3],parttion, service[2.1],parttion, service2[2.2],parttion,
Now I want to rearrange it as below into an output.txt:
COMPOSITES=parttion/service/1.1,parttion/service/1.2,parttion/service/1.3,parttion/service/2.1,parttion/service/2.2
I've tried:
final_str=''
COMPOSITES=''
# Re-arranging the composites and preparing the composite property file
while read line; do
    partition_val="$(echo $line | cut -d ',' -f 2)"
    composite_temp1_val="$(echo $line | cut -d ',' -f 1)"
    composite_val="$(echo $composite_temp1_val | cut -d '[' -f 1)"
    version_temp1_val="$(echo $composite_temp1_val | cut -d '[' -f 2)"
    version_val="$(echo $version_temp1_val | cut -d ']' -f 1)"
    final_str="$partition_val/$composite_val/$version_val,"
    COMPOSITES=$COMPOSITES$final_str
done <./temp/test.txt
We start with the file:
$ cat test.txt
service[1.1],parttion, service[1.2],parttion, service[1.3],parttion, service[2.1],parttion, service2[2.2],parttion,
We can rearrange that file as follows:
$ awk -F, -v RS=" " 'BEGIN{printf "COMPOSITES=";} {gsub(/[[]/, "/"); gsub(/[]]/, ""); if (NF>1) printf "%s%s/%s",NR==1?"":",",$2,$1;}' test.txt
COMPOSITES=parttion/service/1.1,parttion/service/1.2,parttion/service/1.3,parttion/service/2.1,parttion/service2/2.2
The same command split over multiple lines is:
awk -F, -v RS=" " '
BEGIN{
printf "COMPOSITES=";
}
{
gsub(/[[]/, "/")
gsub(/[]]/, "")
if (NF>1) printf "%s%s/%s",NR==1?"":",",$2,$1
}
' test.txt
Here's what I came up with.
awk -F '[],[]' -v RS=" " 'BEGIN{printf("COMPOSITES=")}/../{printf("%s/%s/%s,",$4,$1,$2);}' test.txt
Broken out for easier reading:
awk -F '[],[]' -v RS=" " '
BEGIN {
    printf("COMPOSITES=");
}
/../ {
    printf("%s/%s/%s,",$4,$1,$2);
}' test.txt
More detailed explanation of the script:
-F '[],[]' - use commas or square brackets as field separators
-v RS=" " - use just the space as a record separator
'BEGIN{printf("COMPOSITES=")} - starts your line
/../ - run the following code on any line that has at least two characters. This avoids the empty field at the end of a line terminating with a space.
printf("%s/%s/%s,",$4,$1,$2); - print the elements using a printf() format string that matches the output you specified.
As concise as this is, the format string does leave a trailing comma at the end of the line. If this is a problem, it can be avoided with a bit of extra code.
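For instance, borrowing the NR==1 trick from the answer above, this sketch (untested) emits the comma before each record instead of after it, so none is left trailing:
awk -F '[],[]' -v RS=" " '
BEGIN { printf("COMPOSITES=") }
/../  { printf("%s%s/%s/%s", NR==1 ? "" : ",", $4, $1, $2) }
END   { print "" }' test.txt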
You could also do this in sed, if you like writing code in line noise.
sed -e 's:\([^[]*\).\([^]]*\).,\([^,]*\), :\3/\1/\2,:g;s/^/COMPOSITES=/;s/,$//' test.txt
Finally, if you want to avoid external tools like sed and awk, you can do this in bash alone:
a=($(<test.txt))
echo -n "COMPOSITES="
for i in "${a[#]}"; do
i="${i%,}"
t="${i%]*}"
printf "%s/%s/%s," "${i#*,}" "${i%[*}" "${t#*[}"
done
echo ""
This slurps the contents of test.txt into an array, which means your input data must be separated by whitespace, per your example. It then adds the prefix, then steps through the array, using Parameter Expansion to massage the data into the fields you need. The last line (echo "") is helpful for testing; you may want to eliminate it in practice.

Assigning deciles using bash

I'm learning bash, and here's a short script to assign deciles to the second column of file $1.
The complicating bit is the use of awk within the script, leading to ambiguous redirects when I run the script.
I would have gotten this done in SAS by now, but I like the idea of two lines of code doing the job.
How can I communicate the total number of rows (${N}) to awk within the script? Thanks.
N=$(wc -l < $1)
cat $1 | sort -t' ' -k2gr,2 | awk '{$3=int((((NR-1)*10.0)/"${N}")+1);print $0}'
You can set an awk variable from the command line using -v.
N=$(wc -l < "$1" | tr -d ' ')
sort -t' ' -k2gr,2 "$1" | awk -v n=$N '{$3=int((((NR-1)*10.0)/n)+1);print $0}'
I added tr -d to get rid of the leading spaces that wc -l puts in its result.
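If you'd rather avoid the wc/variable handoff entirely, a two-pass awk sketch can count the rows itself (assuming your awk accepts - for standard input, as POSIX awks do):
# pass 1 counts the lines of the file; pass 2 reads the sorted stream from stdin
sort -t' ' -k2gr,2 "$1" | awk 'NR==FNR {n++; next} {$3=int(((FNR-1)*10)/n)+1; print}' "$1" -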
