How to print out subfolders in ksh? - ksh

The main folder contains subfolders a, b, and c. I want each subfolder to be printed one by one:
for d in $(ls -R main)
do
    printf '%s\n' "$d"
done
I would like it to print
a
b
c
so that I can use these subfolders later to check permissions.

If you don't have to use only ksh, you can do this with find like:
find /where/to/search/directories -type d -print
(-type d ensures that only directories are found.)
Or, if you don't have find, you can do it like this:
ls -Rdl /path/to/search/directories/* | awk '/^d/ {print $9}'
awk parses the ls -l output: it keeps only the lines starting with d (directories) and prints the ninth field, the name.
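Putting it together for the original goal (checking permissions), here is a minimal ksh sketch; it assumes the subfolders sit directly under main:
#!/bin/ksh
# Loop over the immediate subdirectories of "main".
for d in main/*/; do
    d=${d%/}                     # strip the trailing slash left by the glob
    printf '%s\n' "${d##*/}"     # print just the subfolder name: a, b, c
    ls -ld "$d"                  # show the permission bits for the later check
done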

Related

filename group by a pattern and select only one from each group

I have the following files (as an example; 60000+ actually), and all the log files follow this pattern:
analyse-ABC008795-84865-201911261249.log
analyse-ABC008795-84866-201911261249.log
analyse-ABC008795-84867-201911261249.log
analyse-ABC008795-84868-201911261249.log
analyse-ABC008795-84869-201911261249.log
analyse-ABC008796-84870-201911261249.log
analyse-ABC008796-84871-201911261249.log
analyse-ABC008796-84872-201911261249.log
analyse-ABC008796-84873-201911261249.log
Only the numbers change between log files. I want to take one file from each category, where files are categorized by the ABC... number. So, as you can see, there are only two categories here:
analyse-ABC008795
analyse-ABC008796
So, what I want is one file (let's say the first file) from each category. The output should look like this:
analyse-ABC008795-84865-201911261249.log
analyse-ABC008796-84870-201911261249.log
This should be done in a Bash/Linux environment, so that after I get this list I can use grep to check whether my "searching string" is contained in those files:
ls -l | <what should I do to group and get one file from each category> | grep "searching string"
With bash and awk.
files=(*.log)
printf '%s\n' "${files[@]}" | awk -F- '!seen[$2]++'
Or use find instead of a bash array for a more portable approach.
find . -type f -name '*.log' | awk -F- '!seen[$2]++'
If your find has the -printf action and you don't want the leading ./ in the file names, add this before the pipe |:
-printf '%f\n'
The !seen[$2]++ removes the second and subsequent instances of each key, without having to sort the input first. $2 is the second field of each line, split on - by the -F option (i.e. the ABC... number).
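Putting the pieces together, a minimal sketch of the whole pipeline (assumes GNU find for -printf; "searching string" is a stand-in for your real pattern):
# One representative .log file per ABC category, then grep only those files.
find . -maxdepth 1 -type f -name 'analyse-*.log' -printf '%f\n' |
awk -F- '!seen[$2]++' |
while IFS= read -r f; do
    grep -H "searching string" "$f"   # -H prefixes each match with the file name
done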

SHELL: printing just the right part after . (DOT)

I need to find just the extensions of all the files in a directory (if two files share the same extension, it should appear just once). I already have a script, but its output looks like this:
test.txt
test2.txt
hello.iso
bay.fds
hellllu.pdf
I'm using grep -e -e '.' and it just highlights the dots.
I need just these extensions, given in one variable, like txt,iso,fds,pdf.
Is there anyone who could help? I already had this working once, but it used an array. Today I found out it has to work in dash too.
You can use find with awk to get all unique extensions:
find . -type f -name '?*.?*' -print0 |
awk -F. -v RS='\0' '!seen[$NF]++{print $NF}'
This can be done with find as well, but I think this is easier:
for f in *.*; do echo "${f##*.}"; done | sort -u
If you want to assign a comma-separated list of the unique extensions to a variable, you can do this:
ext=$(for f in *.*; do echo "${f##*.}"; done | sort -u | paste -sd,)
echo $ext
csv,pdf,txt
Alternatively, with ls:
ls -1 *.* | rev | cut -d. -f1 | rev | sort -u | paste -sd,
rev/rev is required if you have more than one dot in a file name, assuming the extension comes after the last dot. For any other directory, simply change the *.* part to dirpath/*.* in all of the scripts.
I'm not sure I understand your comment. If you don't assign the result to a variable, it will print to standard output by default. If you want to pass a directory name to a script, put the code into a script file and replace dirpath with $1, assuming the directory will be the first argument to the script:
#!/bin/bash
# print the unique extensions in the directory passed as an argument, i.e.
ls -1 "$1"/*.* ...
If you have subdirectories with extensions, the scripts above include them as well; to limit the output to regular files only, replace the ls command with:
find . -maxdepth 1 -type f -name "*.*" | ...
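Since it has to work in dash, here is a minimal POSIX-sh sketch tying the pieces together (assumes file names without newlines; -maxdepth is not strictly POSIX but widely supported):
#!/bin/sh
# Collect the unique extensions in the directory given as $1 (default: .)
# into one comma-separated variable.
dir=${1:-.}
ext=$(find "$dir" -maxdepth 1 -type f -name '*.*' |
      sed 's/.*\.//' | sort -u | paste -sd,)
echo "$ext"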

Shell code to print file names and their sizes

I need to make a shell command that lists all file names and their sizes in a directory. I wrote this:
ls -l | awk ' {print $9, $5} '
the problem is that $9 only prints the first word of the file name, so names containing spaces get cut off.
Any tips to make it print the whole name?
Instead of parsing ls, use find:
find . -type f -printf "%s\t%f\n"
The %f directive prints the filename with leading directories removed. %s produces the file size in bytes.
For restricting the listing to the current directory, use -maxdepth:
find . -maxdepth 1 -type f -printf "%s\t%f\n"
You could also use stat:
stat --printf "%s\t%n\n" *
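Note that --printf is specific to GNU stat; on BSD/macOS the option is -f with different format directives. A rough equivalent, assuming BSD stat:
stat -f '%z %N' *   # %z = size in bytes, %N = file name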

Extract the complete list of file extensions from a directory

I have a directory with a lot of subdirectories nested at multiple levels, and I want to extract the unique list of file extensions present across all of these subdirectories.
Is there a simple way to do it from command line?
Thanks.
If your definition of "extension" is "anything after the last dot, or else nothing" then maybe something like
find dir -name '*.*' -print | rev | cut -d . -f1 | rev
Pipe into | sort -u to get just the unique extensions.
This is not robust against file names with newlines in them, etc.
The concept of a "file extension" is not well formalized on Unix, but if you are traversing e.g. a web server's files (which often do depend on an extension scheme) this should work.
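Combined, the whole thing reads:
find dir -name '*.*' -print | rev | cut -d . -f1 | rev | sort -u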
You can for example do:
find /your/dir -type f | awk -F. '{a[$NF]} END{for (i in a) print i}'
find /your/dir -type f finds files
awk -F. '{a[$NF]} END{for (i in a) print i}' collects the last dot-separated field of each path as an array key and prints each unique extension once.
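A small variation on the same awk idea, in case you also want to know how many files carry each extension (same caveat applies: a file with no dot contributes its whole name):
find /your/dir -type f | awk -F. '{a[$NF]++} END{for (i in a) print a[i], i}' | sort -rn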

How to print all file names in a folder with awk or bash?

I would like to print all the file names in a folder. How can I do this with awk or bash?
ls -l /usr/bin | awk '{ print $NF }'
:)
find . -maxdepth 1 -type f
Exclude the -maxdepth part if you also want to recurse into subdirectories.
Following is a pure-AWK option:
gawk '
BEGINFILE {
    print "Processing " FILENAME
}
' *
It may be helpful if you want to use it as part of a bigger AWK script processing multiple files and you want to log which file is currently being processed.
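For example, a sketch of how it slots into a larger script, here with a hypothetical per-file line count (BEGINFILE/ENDFILE are gawk extensions):
gawk '
BEGINFILE { print "Processing " FILENAME; n = 0 }
          { n++ }
ENDFILE   { print FILENAME ": " n " lines" }
' *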
This command will print all the file names:
for f in *; do echo "$f"; done
or (even shorter)
printf '%s\n' *
Alternatively, if you like to print specific file types (e.g., .txt), you can try this:
for f in *.txt; do echo "$f"; done
or (even shorter)
printf '%s\n' *.txt
/bin/ls does this job for you and you may call it from bash.
$> /bin/ls
[.. List of files..]
Interpreting your question, you might be interested in iterating over every single file in this directory. This can be done using bash as well:
for f in `ls`; do
    echo "$f"
done
for f in *; do var=`find "$f" | wc -l`; echo "$f: $var"; done
This will print the name of each entry and the number of files under it. wc -l here returns the count of files + 1 (it includes the directory itself).
Sample output:
aa: 4
bb: 4
cc: 1
test2.sh: 1
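If you only want to count the regular files directly inside each directory (avoiding the +1 and the recursion into subdirectories), a sketch assuming your find supports -maxdepth:
for f in */; do
    n=$(find "$f" -maxdepth 1 -type f | wc -l)
    printf '%s: %d\n' "${f%/}" "$n"
done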
