bash script: how to display output on different lines - bash

I'm trying to use one line of code to solve a problem:
echo $(find . -maxdepth 1 -type f -newer $1 | sed 's,\.\/,,g')
This prints all the files in the current folder that are newer than the input file, but it prints them on one single line:
file1 file2 file3 file4....
How can I display each file name on its own line, like:
file1
file2
file3
...
This seems very simple, but I've been searching and have found no solution.
Thank you in advance.

Get rid of the echo and the $(...).
find . -maxdepth 1 -type f -newer "$1" | sed 's,\.\/,,g'
If you have GNU find you can replace the sed with a -printf action:
find . -maxdepth 1 -type f -newer "$1" -printf '%P\n'

Pipe it to tr:
... | tr " " "\n"
(Note that this will also split any file names that themselves contain spaces.)
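As a quick check, both behaviours can be reproduced in a scratch directory (the file names here are made up):

```shell
#!/bin/bash
# Reproduce the one-line vs. per-line behaviour in a scratch directory.
cd "$(mktemp -d)" || exit 1
touch -t 202001010000 ref   # the "input file", backdated
touch file1 file2           # two files newer than ref

# Unquoted $(...) lets the shell collapse the newlines into spaces:
echo $(find . -maxdepth 1 -type f -newer ref | sed 's,\./,,g')

# Dropping the echo and $(...) keeps one name per line:
find . -maxdepth 1 -type f -newer ref | sed 's,\./,,g'
```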

Related

Get files from directories alphabetically sorted with bash

I have this code, which works in the directory where I execute it:
pathtrabajo=.
filedirs="files.txt"
dirs=$(find . -maxdepth 1 -mindepth 1 -type d | sort -n)
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for entry in $dirs
do
echo "${entry}" >> "${filedirs}"
find "$entry" -maxdepth 1 -mindepth 1 -name '*.md' -printf '%f\n' | sort | sed 's/\.md$//1' | awk '{print "- [["$0"]]"}' >> "${filedirs}"
done
IFS=$SAVEIFS
But when I try to make it global to work with variables, find gives error:
pathtrabajo="/path/to/a/files"
filedirs="files.txt"
dirs=$(find "${pathtrabajo}" -maxdepth 1 -mindepth 1 -type d | sort -n)
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for entry in "${dirs[@]}"
do
echo "${entry}" >> "${pathtrabajo}"/"${filedirs}"
find "${entry}" -maxdepth 1 -mindepth 1 -name '*.md' -printf '%f\n' | sort | sed 's/\.md$//1' | awk '{print "- [["$0"]]"}' >> "${pathtrabajo}"/"${filedirs}"
done
IFS=$SAVEIFS
What did I do wrong?
It's really not clear why you are using find here at all. The following will probably do what you are attempting, if I'm guessing correctly from your code.
dirs=([0-9][!0-9]*/ [0-9][0-9][!0-9]*/ [0-9][0-9][0-9][!0-9]*/ [!0-9]*/)
printf "%s\n" "${dirs[@]}" >"$filedirs"
for dir in "${dirs[@]}"; do
printf "%s\n" "$dir"/*.md |
awk '{ sub(/\.md$/, ""); print "- [["$0"]]" }'
done >>"$filedirs"
The shell already expands wildcards alphabetically. The dirs assignment will expand all directories which start with a single digit, then the ones with two digits, then the ones with three digits -- extend if you need more digits -- then the ones which do not start with a digit.
It would not be hard, but cumbersome, to extend the code to run in an arbitrary directory. My proposed solution would be (for once!) to cd to the directory where you want the listing, then run this script.
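Following that suggestion to cd first, a sketch of the same listing run against an arbitrary directory (the path is the hypothetical one from the question) could look like:

```shell
#!/bin/bash
# Sketch: cd inside a subshell so the caller's working directory is
# untouched; the wildcard expansion then happens inside $pathtrabajo.
pathtrabajo="/path/to/a/files"   # hypothetical path from the question
filedirs="files.txt"
(
  cd "$pathtrabajo" || exit 1
  shopt -s nullglob              # no match yields nothing, not a literal "*"
  dirs=(*/)                      # all subdirectories, expanded alphabetically
  printf "%s\n" "${dirs[@]}" > "$filedirs"
  for dir in "${dirs[@]}"; do
    printf "%s\n" "$dir"*.md |
      awk '{ sub(/\.md$/, ""); print "- [[" $0 "]]" }'
  done >> "$filedirs"
)
```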

Exec and sed commands

The question is how to combine the following commands in one line and use exec.
find . -name '*.txt' -exec sh -c 'echo "$(sed -n "\$p" "$1"),$1"' _ {} \;
The result: path and name of all .txt files.
find . -name '*.txt' -exec sed -n '/stringA/,/stringB/p' {} \;
The result: lines between start and end parameters over all .txt files.
The requested result: the lines between the start and end parameters, where the first line contains the path and name of the .txt file.
find . -name '*.txt' -exec ???? {} \;
./alpha/file01.txt
stringA
line1
line2
stringB
./beta/file02.txt
stringA
line1
line2
stringB
Thanks.
T.
If the files are non-empty then all you need is:
find . -name '*.txt' -exec awk 'FNR==1{print FILENAME} /stringA/,/stringB/' {} +
If they can be empty then the simplest way to handle it is to use GNU awk for BEGINFILE:
find . -name '*.txt' -exec awk 'BEGINFILE{print FILENAME} /stringA/,/stringB/' {} +
It may be easier to pipe the output of find to other commands than using -exec. Please try the following:
find . -type f -name '*.txt' -print0 | while read -r -d "" f; do echo "$f"; sed -n "/stringA/,/stringB/p" "$f"; done
which yields:
./alpha/file01.txt
stringA
line1
line2
stringB
./beta/file02.txt
stringA
line1
line2
stringB
The -print0 option of find outputs the filenames delimited by a null character.
The -d "" option of read splits the input at null characters, properly reproducing the list of filenames.
Then we can refer to $f as a filename in the while loop.
Hope this helps.
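A minimal reproduction of that loop, with a made-up file name that contains a space:

```shell
#!/bin/bash
# One .txt file whose name contains a space; the null-delimited
# loop still prints its path followed by the matched range.
cd "$(mktemp -d)" || exit 1
mkdir alpha
printf 'stringA\nline1\nstringB\n' > 'alpha/file 01.txt'

find . -type f -name '*.txt' -print0 |
while IFS= read -r -d '' f; do
  echo "$f"
  sed -n '/stringA/,/stringB/p' "$f"
done
```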
Using GNU sed and realpath:
sed -sn '1F;/stringA/,/stringB/p' $(realpath *.txt)
Or, if relative paths are acceptable:
sed -sn '1F;/stringA/,/stringB/p' ./*.txt
If the code needs to recurse into subdirs:
sed -sn '1F;/stringA/,/stringB/p' $(find . -name '*.txt')
(Note that the unquoted command substitutions above break on file names containing whitespace.)

Count the number of files in a directory with a blank in its name

If you want a breakdown of how many files are in each dir under your current dir:
for i in $(find . -maxdepth 1 -type d) ; do
echo -n $i": " ;
(find $i -type f | wc -l) ;
done
It does not work when the directory name has a blank in it. Can anyone tell me how to edit this shell script so that such directory names are also accepted when counting their contents?
Thanks
Your code suffers from a common issue described in http://mywiki.wooledge.org/BashPitfalls#for_i_in_.24.28ls_.2A.mp3.29.
In your case you could do this instead:
for i in */; do
echo -n "${i%/}: "
find "$i" -type f | wc -l
done
This will work with all types of file names:
find . -maxdepth 1 -type d -exec sh -c 'printf "%s: %i\n" "$1" "$(find "$1" -type f | wc -l)"' Counter {} \;
How it works
find . -maxdepth 1 -type d
This finds the directories just as you were doing
-exec sh -c 'printf "%s: %i\n" "$1" "$(find "$1" -type f | wc -l)"' Counter {} \;
This feeds each directory name to a shell script which counts the files, similarly to what you were doing.
There are some tricks here: Counter {} are passed as arguments to the shell script. Counter becomes $0 (which is only used if the shell script generates an error). find replaces {} with the name of a directory it found, and this is available to the shell script as $1. This is done in a way that is safe for all types of file names.
Note that, wherever $1 is used in the script, it is inside double quotes. This protects it from word splitting and other unwanted shell expansions.
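That argument-passing convention is easy to check in isolation (the directory name here is made up):

```shell
#!/bin/sh
# Counter lands in $0; the space-containing name arrives intact in $1.
sh -c 'echo "zeroth: $0, first: $1"' Counter "my dir"
# prints: zeroth: Counter, first: my dir
```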
I found the solution. This is what I had to consider:
#!/bin/bash
SAVEIFS=$IFS
IFS=$(echo -en "\n\b")
for i in $(find . -maxdepth 1 -type d); do
echo -n " $i: ";
(find $i -type f | wc -l) ;
done
IFS=$SAVEIFS

Using awk to print ALL spaces within filenames which have a varied number of spaces

I'm executing the following using bash and awk to get the (potentially space-containing) filename, a colon, and the file size. (Column 5 of the ls -l output contains the space-delimited size, and columns 9 to end of line the file name):
src="Desktop"
echo "Constructing $src files list. `date`"
cat /dev/null > "$src"Files.txt
find -s ~/"$src" -type f -exec ls -l {} \; |
awk '{for(i=9;i<=NF;i++) {printf("%s", $i " ")} print ":" $5}' |
grep -v ".DS_Store" | grep -v "Icon\r" |
while read line ; do filespacesize=`basename "$line"`; filesize=`echo "$filespacesize" |
sed -e 's/ :/:/1'`
path=`dirname "$line"`; echo "$filesize:$path" >> "$src"Files.txt ;
done
And it works fine, BUT…
If a filename has > 1 space between parts, I only get 1 space between filename parts, and the colon, followed by the filesize.
How can I get the full filename, :, and then the file size?
It seems you want the following (provided your find supports the -printf action with the %f, %s and %h directives):
src=Desktop
echo "Constructing $src files list. $(date)"
find ~/"$src" -type f -printf '%f:%s:%h\n' > "$src"Files.txt
Much shorter and much more efficient than your method!
This will not discard the .DS_STORE and Icon\r things… but I'm not really sure what you really want to discard. If you want to discard the .DS_STORE directory altogether:
find ~/"$src" -name '.DS_STORE' -type d -prune -o -type f -printf '%f:%s:%h\n' > "$src"Files.txt
@guido seems to have guessed what you mean by grep -v "Icon\r": ignore files ending with Icon. If his guess is right, then this will do:
find ~/"$src" -name '.DS_STORE' -type d -prune -o ! -name '*Icon' -type f -printf '%f:%s:%h\n' > "$src"Files.txt
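The -printf directives are easy to verify on a throwaway tree (GNU find assumed; the names are made up):

```shell
#!/bin/sh
# %f = basename, %s = size in bytes, %h = directory part.
cd "$(mktemp -d)" || exit 1
mkdir -p Desktop/sub
printf 'abc' > 'Desktop/sub/two  spaces.txt'   # 3 bytes, two spaces in the name
find Desktop -type f -printf '%f:%s:%h\n'
# prints: two  spaces.txt:3:Desktop/sub
```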

Bash: how to pipe each result of one command to another

I want to get the total count of the number of lines from all the files returned by the following command:
shell> find . -name *.info
All the .info files are nested in sub-directories so I can't simply do:
shell> wc -l *.info
I'm sure this should be in any bash user's repertoire, but I'm stuck!
Thanks
wc -l `find . -name '*.info'`
If you just want the total, use
wc -l `find . -name '*.info'` | tail -1
Edit: piping to xargs also works, and hopefully avoids the "command line too long" error:
find . -name '*.info' | xargs wc -l
You can use xargs like so:
find . -name '*.info' -print0 | xargs -0 cat | wc -l
Some googling turns up
find /topleveldirectory/ -type f -exec wc -l {} \; | awk '{total += $1} END{print total}'
which seems to do the trick
#!/bin/bash
# bash 4.0
shopt -s globstar
sum=0
for file in **/*.info
do
if [ -f "$file" ]; then
s=$(wc -l < "$file")
sum=$((sum+s))
fi
done
echo "Total: $sum"
find . -name "*.info" -exec wc -l {} \;
Note to self - read the question
find . -name "*.info" -exec cat {} \; | wc -l
# for a speed-up use: find ... -exec ... '{}' + | ...
find . -type f -name "*.info" -exec sed -n '$=' '{}' + | awk '{total += $0} END{print total}'
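Whichever variant you pick, a throwaway tree with a known total makes it easy to confirm the count (a sketch):

```shell
#!/bin/sh
# Build a small tree containing exactly 3 lines across two .info files.
cd "$(mktemp -d)" || exit 1
mkdir sub
printf 'a\nb\n' > one.info
printf 'c\n'    > sub/two.info
find . -name '*.info' -print0 | xargs -0 cat | wc -l   # total: 3 lines
```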
