find command in for loop does not list all the files - bash

I made a script that lists every single file in the current directory and its subdirectories, takes the md5 checksum of the head and tail (with an offset) of each file, and saves the result into a .txt file.
I built this with pipes, so I wasn't able to redirect into a file named after a variable that had been set earlier from user input. So I changed my script to a for loop.
Problem now: it doesn't list all the files, but only one, and seemingly at random. Why doesn't it list all the files like before?
I even tried **.* and ./* and so on. I use a MacBook Pro with macOS 10.13.6. I once installed something so I could use Linux commands as well, for example tree etc.
Any help is appreciated! I have no clue what else I can do.
Old code, in which the variable wasn't redirected:
#!/bin/bash
echo Wie heißt die Festplatte?
read varname
echo Los gehts!
before=$(date +%s)
find . \( ! -regex '.*/\..*' \) -type f -exec bash -c 'h=`tail -n +50000 "{}" | head -c 1000 | md5`;\
t=`tail -c 51000 "{}" | head -c 1000 | md5`;\
echo "$varname {} ; $h ; $t"' \;> /Users/Tobias/Desktop/$varname.txt
after=$(date +%s)
echo Das hat: $(((after - $before)/60)) Minuten bzw $(((after - $before))) Sekunden gedauert
New code, which doesn't list all the files but only one:
#!/bin/bash
echo Wie heißt die Festplatte?
read varname
echo Los gehts!
before=$(date +%s)
for i in $( find . \( ! -regex '.*/\..*' \) -type f ); do
h=$(tail -n +50000 $i | head -c 1000 | md5)
t=$(tail -c 51000 $i | head -c 1000 | md5)
echo "$varname; $i ; $h ; $t" > /Users/Tobias/Desktop/$varname.txt
done
after=$(date +%s)
echo Das hat: $(((after - $before)/60)) Minuten bzw $(((after - $before))) Sekunden gedauert

You are overwriting the file in each iteration of the loop. Use append mode instead:
echo "$varname; $i ; $h ; $t" >> /Users/Tobias/Desktop/"$varname".txt
or redirect the output of the whole loop:
echo "$varname; $i ; $h ; $t"
done > /Users/Tobias/Desktop/"$varname".txt

Redirect the output of the entire loop, not each echo statement, which overwrites the file each time.
for i in $( find . \( ! -regex '.*/\..*' \) -type f ); do
h=$(tail -n +50000 "$i" | head -c 1000 | md5)
t=$(tail -c 51000 "$i" | head -c 1000 | md5)
echo "$varname; $i ; $h ; $t"
done > /Users/Tobias/Desktop/"$varname".txt
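Note that the for i in $(find ...) form also word-splits file names containing spaces. A null-delimited while read loop avoids that; here is an untested sketch (the checksums function name is mine, and it uses md5sum, so on macOS swap in md5 -q):

```shell
#!/bin/bash
# Sketch, not from the original answer: a null-delimited loop that also
# survives spaces and newlines in file names. Uses md5sum; on macOS
# replace "md5sum | cut ..." with "md5 -q".
checksums() {
    local dir=$1 i h t
    while IFS= read -r -d '' i; do
        h=$(tail -n +50000 "$i" | head -c 1000 | md5sum | cut -d' ' -f1)
        t=$(tail -c 51000 "$i" | head -c 1000 | md5sum | cut -d' ' -f1)
        printf '%s ; %s ; %s\n' "$i" "$h" "$t"
    done < <(find "$dir" ! -path '*/.*' -type f -print0)
}
```

Called as checksums . > /Users/Tobias/Desktop/"$varname".txt it should replace the whole for loop.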

Related

find and grep / zgrep / lzgrep progress bar

I would like to add a progress bar to this command line:
find . \( -iname "*.bz" -o -iname "*.zip" -o -iname "*.gz" -o -iname "*.rar" \) -print0 | while read -d '' file; do echo "$file"; lzgrep -a stringtosearch\.anything "$file"; done
The progress bar should be calculated on the total size of the compressed files (not on each single file).
Of course, it can be a script too.
I would also like to add other progress bars, if possible:
The total number of files processed (example 3 out of 21)
The percentage of progress of the single file
Can anybody help me please?
Here is an example of what it should look like (example from here):
tar cf - /folder-with-big-files -P | pv -s $(du -sb /folder-with-big-files | awk '{print $1}') | gzip > big-files.tar.gz
Multiple progress bars (example from here):
pv -cN orig < foo.tar.bz2 | bzcat | pv -cN bzcat | gzip -9 | pv -cN gzip > foo.tar.gz
Thanks,
This is the first time I've ever heard of pv, and it's not on any machine I have access to, but assuming it needs to know a total at startup and then a number on each iteration of a command, you could do something like this to get a progress bar per file processed:
IFS= readarray -d '' files < <(find . -whatever -print0)
printf '%s\n' "${files[@]}" | pv -s "${#files[@]}" | command
The first line gives you an array of files so you can then use "${#files[@]}" to provide pv its initial total value (looks like you use -s value for that?) and then do whatever you normally do to get progress as each file is processed.
I don't see any way to tell pv that the pipe it's reading from is NUL-terminated rather than newline-terminated so if your files can have newlines in their names then you'd have to figure out how to solve that problem.
To additionally get progress on a single file you might need something like:
IFS= readarray -d '' files < <(find . -whatever -print0)
printf '%s\n' "${files[@]}" |
pv -s "${#files[@]}" |
xargs -n 1 -I {} sh -c 'pv {} | command'
I don't have pv so all of the above is untested so check the syntax, especially since I've never heard of pv :-).
Thanks to Max C., I found a solution for the main question:
find ./ -type f -iname *\.gz -o -iname *\.bz | (tot=0;while read fname; do s=$(stat -c%s "$fname"); if [ ! -z "$s" ] ; then echo "$fname"; tot=$(($tot+$s)); fi; done; echo $tot) | tac | (read size; xargs -i{} cat "{}" | pv -s $size | lzgrep -a something -)
But this works only for gz and bz files; now I have to develop it to use a different tool according to the extension.
I'm going to try Ed's solution too.
Thanks to Ed and Max C., here is version 0.2.
This version works with zgrep, but not with lzgrep. :-\
#!/bin/bash
echo -n "collecting dump... "
IFS= readarray -d '' files < <(find . \( -iname "*.bz" -o -iname "*.gz" \) -print0)
echo done
echo "Calculating archives size..."
tot=0
for line in "${files[@]}"; do
s=$(stat -c%s "$line")
if [ ! -z "$s" ]
then
tot=$(($tot+$s))
fi
done
(for line in "${files[@]}"; do
s=$(stat -c%s "$line")
if [ ! -z "$s" ]
then
echo "$line"
fi
done
) | xargs -i{} sh -c 'echo Processing file: "{}" 1>&2 ; cat "{}"' | pv -s $tot | zgrep -a anything -

Bash Shell Calculating Sum of All Video durations inside a folder in MAC OS

On Windows I used to get this result by just searching *.mp4 and selecting all files; the sum of the durations would show in the details side panel. I want to find the same thing on a Mac, recursively.
This is the script I wrote in bash. Tell me what I am doing wrong?
#!/bin/bash
sum=0
find . -type f -name "*.mp4" | while read line; do
duration=`mdls -name kMDItemDurationSeconds "$line" | cut -d "=" -f 2`
sum=$(echo "$duration + $sum"|bc)
all=$sum
done
echo $all
The pipe runs your while loop in a subshell, so sum (and all) are lost when the loop ends. Feed the loop from the find output instead of piping into it:
#!/bin/bash
sum=0
while read line; do
duration=$(mdls -name kMDItemDurationSeconds "$line" | cut -d "=" -f 2)
sum=$(echo "$duration+$sum"|bc)
done <<< "$(find . -type f -name "*.mp4")"
h=$(bc <<< "$sum/3600")
m=$(bc <<< "($sum%3600)/60")
s=$(bc <<< "$sum%60")
printf "%02d:%02d:%05.2f\n" $h $m $s
My solution, not perfect yet.
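For what it's worth, the whole sum can also be done in a single awk pass, which avoids both the subshell problem and the repeated bc calls. An untested sketch (sum_durations is a made-up name; it expects one duration in seconds per line, i.e. the output of the mdls | cut pipeline):

```shell
#!/bin/bash
# Sketch: read one duration (seconds, possibly fractional) per line,
# sum them, and print the total as HH:MM:SS.ss.
sum_durations() {
    awk '{ s += $1 }
         END { printf "%02d:%02d:%05.2f\n", s/3600, (s%3600)/60, s%60 }'
}
```

Something like find . -name '*.mp4' -exec mdls -name kMDItemDurationSeconds {} \; | cut -d= -f2 | sum_durations should then print the grand total.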

Adding file sizes using the bash command "wc"

I ran this command to find each file modified yesterday:
find /eqtynas/ -type f -mtime -1 > /home/writtenToStorage.20171026 &
and then developed this script to add up the sizes of all the files collected:
#!/bin/bash
ydate=$(date +%Y%m%d --date="yesterday")
file="/home/writtenToStorage.$ydate"
fileSize=0
for line in $(cat $file)
do
if [ -f $line ] && [ -s $line ] ; then
fileSize1=$fileSize
fileSize=$(wc -c < $line)
Total=$(( $fileSize + $fileSize1 ))
fi
done
echo $Total
However, when I stat just one of the files in the list it comes out to 18942, whereas the total for all the files combined comes out at 34499.
wc -c /eqty/fixed
18942 /eqty/fixed
Is the script OK? Because I ran another check and the total size was 314 gigs:
find /eqtynas/ -type f -mtime -1 -print0 | du -ch --files0-from=- --total -s > 24hourUsage.20171026 &
Continuing from my comment, you may prefer something similar to:
sum=
while read -r sz; do
sum=$((sum + sz))
done < <(find /eqtynas/ -type f -mtime -1 -exec stat -c %s '{}' \; )
echo "sum: $sum"
There are a number of ways to do this. You can also pipe the result of -exec ls -al '{}' to awk and just sum the 5th field.
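That awk variant might look like this (a sketch; it assumes the size is the 5th field of the ls -al output, which is the usual long-listing layout):

```shell
#!/bin/bash
# Sketch: sum the 5th (size) field of "ls -l"-style output lines
# and print the total in bytes.
sum_sizes() {
    awk '{ total += $5 } END { print total + 0 }'
}
```

For example: find /eqtynas/ -type f -mtime -1 -exec ls -al '{}' \; | sum_sizes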
If you have already written the filenames to /home/writtenToStorage.20171026, then you can simply redirect the file to your while loop, e.g.
while read -r sz; do
sum=$((sum + sz))
done <"/home/writtenToStorage.20171026"
Look things over and let me know if you have any questions.
You're not adding to Total, you're just setting it to the sum of the sizes of the last two files.
for line in $(cat $file)
do
if [ -f $line ] && [ -s $line ] ; then
fileSize=$(wc -c < $line)
((Total += fileSize))
fi
done

rename files in a folder using find shell

I have n files in different folders, like abc.mp3, acc.mp3, bbb.mp3, and I want to rename them 01-abc.mp3, 02-acc.mp3, 03-bbb.mp3... I tried this:
#!/bin/bash
IFS='
'
COUNT=1
for file in ./uff/*;
do mv "$file" "${COUNT}-$file" let COUNT++ done
but I keep getting errors like syntax error near 'do' and sometimes not found... Can someone provide a single-line solution to this using "find" from the terminal? I'm looking for a solution using find only, due to certain constraints... Thanks in advance.
I'd probably use:
#!/bin/bash
cd ./uff || exit 1
COUNT=1
for file in *.mp3;
do
mv "$file" "$(printf "%.2d-%s" "${COUNT}" "$file")"
((COUNT++))
done
This avoids a number of issues and also includes a 2-digit number for the first 9 files (the next 90 get 2-digit numbers anyway, and after that you get 3-digit numbers, etc).
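To illustrate, %.2d pads with a leading zero only while the number still fits in two digits:

```shell
#!/bin/bash
# Demo of the zero-padded numbering used above.
printf '%.2d-%s\n' 3 "abc.mp3"     # -> 03-abc.mp3
printf '%.2d-%s\n' 125 "zzz.mp3"   # -> 125-zzz.mp3
```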
you can try this;
#!/bin/bash
COUNT=1
for file in ./uff/*;
do
path=$(dirname "$file")
filename=$(basename "$file")
if [ $COUNT -lt 10 ]; then
mv "$file" "$path"/0"${COUNT}-$filename";
else
mv "$file" "$path"/"${COUNT}-$filename";
fi
COUNT=$(($COUNT+1));
done
Eg:
user@host:/tmp/test$ ls uff/
abc.mp3 acc.mp3 bbb.mp3
user@host:/tmp/test$ ./test.sh
user@host:/tmp/test$ ls uff/
01-abc.mp3 02-acc.mp3 03-bbb.mp3
Ok, here's the version without loops:
paste -d'\n' <(printf "%s\n" *) <(printf "%s\n" * | nl -w1 -s-) | xargs -d'\n' -n2 mv -v
You can also use find if you want:
paste -d'\n' <(find -mindepth 1 -maxdepth 1 -printf "%f\n") <(find -mindepth 1 -maxdepth 1 -printf "%f\n" | nl -w1 -s-) | xargs -d'\n' -n2 mv -v
Replace mv with echo mv for the "dry run":
paste -d'\n' <(printf "%s\n" *) <(printf "%s\n" * | nl -w1 -s-) | xargs -d'\n' -n2 echo mv -v
Here's a solution.
i=1
for f in $(find ./uff -mindepth 1 -maxdepth 1 -type f | sort)
do
n=$i
[ $i -lt 10 ] && n="0$i"
echo "$f" "$n-$(basename "$f")"
((i++))
done
And here it is as a one-liner (but in real life if you ever tried anything remotely like what's below in a coding or ops interview you'd not only fail to get the job, you'd probably give the interviewer PTSD. They'd wake up in cold sweats thinking about how terrible your solution was).
i=1; for f in $(find ./uff -mindepth 1 -maxdepth 1 -type f | sort); do n=$i; [ $i -lt 10 ] && n="0$i"; echo "$f" "$n-$(basename "$f")" ; ((i++)); done
Alternatively, you could just cd ./uff if you wanted the rename them in the same directory, and then use find . (along with the other find arguments) to clear everything up. I'm assuming you only want files moved, not directories. And I'm assuming you don't want to recursively rename files / directories.

Need a bash script to move files to subfolders automatically

I have a folder with 320 GB of images, and I want to move the images into 5 subfolders randomly (I just need them moved into 5 subfolders). But I know nothing about bash scripts. Could someone please help? Thanks!
You could move the files do different directories based on their first letter:
mv [A-Fa-f]* dir1
mv [F-Kf-k]* dir2
mv [^A-Ka-k]* dir3
Here is my take on this. In order to use it, place the script somewhere else (not in your folder) but run it from your folder. If you call your script file rmove.sh, you can place it in, say, ~/scripts/, then cd to your folder and run:
source ~/scripts/rmove.sh
#!/bin/bash
ndirs=$((`find -type d | wc -l` - 1))
for file in *; do
if [ -f "${file}" ]; then
rand=`dd if=/dev/random bs=1 count=1 2>/dev/null | hexdump -b | head -n1 | cut -d" " -f2`
rand=$((rand % ndirs))
i=0
for directory in `find -type d`; do
if [ "${directory}" = . ]; then
continue
fi
if [ $i -eq $rand ]; then
mv "${file}" "${directory}"
fi
i=$((i + 1))
done
fi
done
Here's my stab at the problem:
#!/usr/bin/env bash
sdprefix=subdir
dirs=5
# pre-create all possible sub dirs
for n in {1..5} ; do
mkdir -p "${sdprefix}$n"
done
fcount=$(find . -maxdepth 1 -type f | wc -l)
while IFS= read -r -d $'\0' file ; do
subdir="${sdprefix}"$(expr \( $RANDOM % $dirs \) + 1)
mv -f "$file" "$subdir"
done < <(find . -maxdepth 1 -type f -print0)
Works with huge numbers of files
Does not break if a file is not moveable
Creates subdirectories if necessary
Does not break on unusual file names
Relatively cheap
Any scripting language will do so I'll write in Python here:
#!/usr/bin/python
import os
import random
new_paths = ['/path1', '/path2', '/path3', '/path4', '/path5']
image_directory = '/path/to/images'
for file_path in os.listdir(image_directory):
    full_path = os.path.abspath(os.path.join(image_directory, file_path))
    random_subdir = random.choice(new_paths)
    new_path = os.path.abspath(os.path.join(random_subdir, file_path))
    os.rename(full_path, new_path)
mv `ls | while read x; do echo "`expr $RANDOM % 1000`:$x"; done \
| sort -n| sed 's/[0-9]*://' | head -1` ./DIRNAME
run it in your current image directory, this command will select one file at a time and move it to ./DIRNAME, iterate this command until there are no more files to move.
Note that ` is a backquote, not a plain quote character.
