Pipe ls output to get path of all directories - bash

I want to list all directories (ls -d *) in the current directory and print their full paths. I know I need to pipe the output to something, but I'm just not sure what. I don't know if I can pipe the output to pwd or something.
The desired result would be the following.
$ cd /home/
$ ls -d *|<unknown>
/home/Directory 1
/home/Directory 2
/home/Directory 3
<unknown> being the part which needs to pipe to pwd or something.
My overall goal is to create a script which will allow me to construct a command for each full path supplied to it. I'll type build and internally it will run the following command for each:
cd <full directory path>; JAVA_HOME=jdk/Contents/Home "/maven/bin/mvn" clean install

Try simply:
$ ls -d $PWD/*/
Or
$ ls -d /your/path/*/

find `pwd` -maxdepth 1 -type d -name '[^.]*'
Note: The above command works in bash or sh, not in csh. (Bash is the default shell on most Linux distributions; macOS now defaults to zsh.)

ls -d $PWD/* | xargs -I{} echo 'cd {}; JAVA_HOME=jdk/Contents/Home "/maven/bin/mvn" clean install' >> /foo/bar/buildscript.sh
will generate the script for you.
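If you'd rather not generate an intermediate script, a plain loop does the same job and survives spaces in directory names (a sketch; the JAVA_HOME and Maven paths are copied verbatim from the question, so adjust them to your setup):
for d in "$PWD"/*/; do
    # Run each build in a subshell so the cd never leaks into the next iteration.
    ( cd "$d" && JAVA_HOME=jdk/Contents/Home "/maven/bin/mvn" clean install )
done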

Might I also suggest using -- within your ls construct, so that ls -d $PWD/*/ becomes ls -d -- $PWD/*/ (with an extra -- inserted)? This will help with those instances where a directory or filename starts with the - character:
/home/-dir_with_leading_hyphen/
/home/normal_dir/
In this instance, ls -d */ results in:
ls: illegal option -- -
usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]
However, ls -d -- */ will result in:
-dir_with_leading_hyphen/ normal_dir/
And then, of course, you can use the script indicated above (so long as you include the -- any time you call ls).

No piping necessary:
find $(pwd) -maxdepth 1 -type d -printf "%H/%f\n"
To my surprise, command substitution inside the -printf format string works too:
find -maxdepth 1 -type d -printf "$(pwd)/%f\n"

For loop, wildcard and conditional statement

I don't really know what I am supposed to do with it.
For each file in the /etc directory whose name starts with o or l and whose second letter is t or r, display its name, size and type ('file'/'directory'/'link'). Use: a wildcard, a for loop and a conditional statement for the type.
#!/bin/bash
etc_dir=$(ls -a /etc/ | grep '^o|^l|^.t|^.r')
for file in $etc_dir
do
stat -c '%s-%n' "$file"
done
I was thinking about something like that, but I have to use an if statement.
You may reach the goal by using the find command.
This will search through all subdirectories:
#!/bin/bash
_dir='/etc'
find "${_dir}" -name "[ol][tr]*" -exec stat -c '%s-%n' {} \; 2>/dev/null
To control whether subdirectories are searched, you can use the -maxdepth flag; in the example below it will only match the files and directories directly in the /etc dir and won't go through the subdirectories.
#!/bin/bash
_dir='/etc'
find "${_dir}" -maxdepth 1 -name "[ol][tr]*" -exec stat -c '%s-%n' {} \; 2>/dev/null
You may also add the -type f or -type d predicate to restrict the matches to files or directories only (if needed).
#!/bin/bash
_dir='/etc'
find "${_dir}" -name "[ol][tr]*" -type f -exec stat -c '%s-%n' {} \; 2>/dev/null
Update #1
As requested in the comments, this is the long way round, but it uses a for loop and an if statement.
Note: I'd strongly recommend reviewing and practicing the commands used in this script instead of just copy-pasting them to get the score ;)
#!/bin/bash
# Set the main directory path.
_mainDir='/etc'
# This will find all files in the $_mainDir (ignoring errors if any) and assign the file's path to the $_files variable.
_files=$(find "${_mainDir}" 2>/dev/null)
# In this for loop we will
# loop over all files,
# extract the bare file name from the full path,
# and IF the bare name matches the pattern, run `stat` on that file.
for _file in ${_files} ;do
  _fileName=$(basename "${_file}")
  if [[ "${_fileName}" =~ ^[ol][tr] ]] ;then
    stat -c 'Size: %s , Type: %F , Name: %n' "${_file}"
  fi
done
exit 0
You should break down your problem into multiple pieces and tackle them one by one.
First, try and build an expression that finds the right files. If you were to execute your regex expression in a shell:
ls -a /etc/ | grep '^o|^l|^.t|^.r'
You would immediately see that you don't get the right output. So the first step would be to understand how grep works and fix the expression to:
ls -a /etc/ | grep '^[ol][tr]'
Then, you have the file name, and you need the size and a textual file type. The size is easy to obtain using a stat call.
For the type, the exercise explicitly asks for a conditional, so rather than relying on stat for the text, test the file with -f, -d and -L in an if clause and print 'file', 'directory' or 'link' accordingly.
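Putting those pieces together, a minimal sketch that uses a wildcard, a for loop and an if statement, as the assignment asks:
#!/bin/bash
for f in /etc/[ol][tr]*; do
    size=$(stat -c '%s' "$f" 2>/dev/null)
    # Test -L first: -f and -d follow symlinks, so a link would
    # otherwise be misreported as a file or directory.
    if [ -L "$f" ]; then
        ftype='link'
    elif [ -d "$f" ]; then
        ftype='directory'
    elif [ -f "$f" ]; then
        ftype='file'
    fi
    echo "$f $size $ftype"
done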
How about this:
shopt -s extglob
ls -dp /etc/@(o|l)@(t|r)* | grep -v '/$'
Explanation:
shopt -s extglob - enable extended globbing (https://www.google.com/search?q=bash+extglob)
ls -d - list directory names, not their contents
ls -dp - and add / at the end of each directory name
@(o|l)@(t|r) - o or l exactly once (@), and then t or r exactly once
grep -v '/$' - remove all lines ending in /
Of course, Vab's find solution is better than this ls approach:
find /etc -maxdepth 1 -name "[ol][tr]*" -type f -exec stat {} \;
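If the if statement were not mandatory, GNU stat could print the type textually by itself via the %F format specifier (a sketch, assuming GNU coreutils):
find /etc -maxdepth 1 -name '[ol][tr]*' -exec stat -c '%n %s %F' {} \; 2>/dev/null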

Create archive from difference of two folders

I have the following problem.
There are two nested folders A and B. They are mostly identical, but B has a few files that A does not. (These are two mounted rootfs images).
I want to create a shell script that does the following:
Find out which files are contained in B but not in A.
Copy the files found in step 1 from B and create a tar.gz that contains these files, keeping the folder structure.
The goal is to import the additional data from image B afterwards on an embedded system that contains the contents of image A.
For the first step I put together the following code snippet. (Note on the grep "Nur": "Nur in" is German for "Only in".)
diff -rq <A> <B>/ 2>/dev/null | grep Nur | awk '{print substr($3, 1, length($3)-1) "/" substr($4, 1, length($4)-1)}'
The result is the output of the paths relative to folder B.
I have no idea how to implement the second step. Can someone give me some help?
Using diff to find files which don't exist is severe overkill; you are doing a lot of work to compare the contents of the files, when all you care about is whether a file name exists or not.
Maybe try this instead.
tar zcf newfiles.tar.gz $(comm -13 <(cd A && find . -type f | sort) <(cd B && find . -type f | sort) | sed 's/^\./B/')
The find commands produce a listing of the file name hierarchies; comm -13 extracts the elements which are unique to the second input file (which here isn't really a file at all; we are using the shell's process substitution facility to provide the input) and the sed command adds the path into B back to the beginning.
Passing a command substitution $(...) as the argument to tar is problematic; if there are a lot of file names, you will run into "command line too long", and if your file names contain whitespace or other irregularities in them, the shell will mess them up. The standard solution is to use xargs but using xargs tar cf will overwrite the output file if xargs ends up calling tar more than once; though perhaps your tar has an option to read the file names from standard input.
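With GNU tar you can stream the name list on standard input instead, which sidesteps the argument-length limit (a sketch; A, B and the sed rewrite are taken from the command above):
comm -13 <(cd A && find . -type f | sort) <(cd B && find . -type f | sort) |
    sed 's/^\./B/' | tar czf newfiles.tar.gz -T -
If file names may contain newlines, the NUL-separated variants of the same tools (find -print0, sort -z, comm -z, tar --null) combine the same way.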
With find:
$ mkdir -p A B
$ touch A/a A/b
$ touch B/a B/b B/c B/d
$ cd B
$ find . -type f -exec sh -c '[ ! -f ../A/"$1" ]' _ {} \; -print
./c
./d
The idea is to use the exec action with a shell script that tests the existence of the current file in the other directory. There are a few subtleties:
The first argument of sh -c is the script to execute, the second (here _ but could be anything else) corresponds to the $0 positional parameter of the script and the third ({}) is the current file name as set by find and passed to the script as positional parameter $1.
The -print action at the end is needed, even if it is normally the default with find, because the use of -exec cancels this default.
Example of use to generate your tarball with GNU tar:
$ cd B
$ find . -type f -exec sh -c '[ ! -f ../A/"$1" ]' _ {} \; -print > ../list.txt
$ tar -c -v -f ../diff.tar --files-from=../list.txt
./c
./d
Note: if you have unusual file names the --verbatim-files-from GNU tar option can help. Or a combination of the -print0 action of find and the --null option of GNU tar.
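For example, the NUL-separated variant could look like this (a sketch, assuming GNU find and GNU tar):
$ cd B
$ find . -type f -exec sh -c '[ ! -f ../A/"$1" ]' _ {} \; -print0 > ../list0.txt
$ tar -c -f ../diff.tar --null --files-from=../list0.txt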
Note: if the shell is POSIX-compliant (e.g., bash), you can also run find from the parent directory and get the paths of the files relative to there, if you prefer:
$ mkdir -p A B
$ touch A/a A/b
$ touch B/a B/b B/c B/d
$ find B -type f -exec sh -c '[ ! -f A"${1#B}" ]' _ {} \; -print
B/c
B/d

Unix shell script not executing from another script

I have written the below command using a shell script:
/usr/bin/find ${FilePath[$i]} -name ${FileName[$i]}* -type f -mtime +${DaysNo[$i]} | grep ${FilePath[$i]}$tempfile > tempFilesList
It looks good when I execute the script directly, but it gives me the below error when I try to execute it from another shell script.
ERROR : /usr/bin/find: bad option resultmgr.log_2019-11-07
/usr/bin/find: [-H | -L] path-list predicate-list
It's likely that ${FileName[$i]}* is being expanded to multiple file names, which would give you something like -name file1 file2 in your command.
That could happen if, for example, files matching that mask existed in your current working directory for the case where you run it from another script, but not when you're running it from the command line. Some shells will expand if possible but leave alone if not, as per the following transcript:
~> echo testprog*
testprog testprog.c
~> echo nosuchfile*
nosuchfile*
~> _
That file2 would then be considered a control argument to find and therefore invalid.
You can check this by simply echoing out the command before running it:
echo Will run: /usr/bin/find ${FilePath[$i]} -name ${FileName[$i]}* -type f -mtime +${DaysNo[$i]} ...
and seeing what it outputs.
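If that is indeed the cause, quoting the expansions so the shell passes the * through to find unexpanded should fix it (a sketch based on the line from the question):
/usr/bin/find "${FilePath[$i]}" -name "${FileName[$i]}*" -type f -mtime +"${DaysNo[$i]}" | grep "${FilePath[$i]}$tempfile" > tempFilesList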

Shell Script for Bulk renaming of files

I want to recursively rename all files in a directory tree by changing their prefix.
For Example
XYZMyFile.h
XYZMyFile.m
XYZMyFile1.h
XYZMyFile1.m
XYZMyFile2.h
XYZMyFile2.m
TO
ABCMyFile.h
ABCMyFile.m
ABCMyFile1.h
ABCMyFile1.m
ABCMyFile2.h
ABCMyFile2.m
These files are under a directory structure with many layers. Can someone help me with a shell script for this bulk task?
Maybe a different approach:
ls *.{h,m} | while read a; do n=ABC$(echo "$a" | sed -e 's/^XYZ//'); mv "$a" "$n"; done
Description:
ls *.{h,m} --> Finds all files with a .h or .m extension
n=ABC... --> Adds the ABC prefix to the file name
sed -e 's/^XYZ//' --> Removes the XYZ prefix from the file name
mv "$a" "$n" --> Performs the rename
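A variant of the same idea that avoids parsing ls and copes with spaces in file names (a sketch using bash parameter expansion in place of sed):
# Assumes at least one match per extension; enable nullglob to be safe.
for a in XYZ*.{h,m}; do
    mv "$a" "ABC${a#XYZ}"
done
${a#XYZ} strips the leading XYZ from $a, so no external sed process is needed.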
Set globstar first and then use rename like below:
# shopt -s globstar # This makes '**' expand recursively to everything
# ls -R
.:
nXYZ1.c nXYZ2.c nXYZ3.c subdir XYZ1.m XYZ2.m XYZ3.m
nXYZ1.h nXYZ2.h nXYZ3.h XYZ1.c XYZ2.c XYZ3.c
nXYZ1.m nXYZ2.m nXYZ3.m XYZ1.h XYZ2.h XYZ3.h
./subdir:
nXYZ1.c nXYZ1.m nXYZ2.h nXYZ3.c nXYZ3.m XYZ1.h XYZ2.c XYZ2.m XYZ3.h
nXYZ1.h nXYZ2.c nXYZ2.m nXYZ3.h XYZ1.c XYZ1.m XYZ2.h XYZ3.c XYZ3.m
# rename 's/^XYZ(.*\.[mh])$/ABC$1/;s/^(.*\/)XYZ(.*\.[mh])$/$1ABC$2/' **
# ls -R
.:
ABC1.h ABC2.m nXYZ1.c nXYZ2.c nXYZ3.c subdir XYZ3.c
ABC1.m ABC3.h nXYZ1.h nXYZ2.h nXYZ3.h XYZ1.c
ABC2.h ABC3.m nXYZ1.m nXYZ2.m nXYZ3.m XYZ2.c
./subdir:
ABC1.h ABC2.h ABC3.h nXYZ1.c nXYZ1.m nXYZ2.h nXYZ3.c nXYZ3.m XYZ2.c
ABC1.m ABC2.m ABC3.m nXYZ1.h nXYZ2.c nXYZ2.m nXYZ3.h XYZ1.c XYZ3.c
# shopt -u globstar # Unset globstar
This may be the simplest way to achieve your objective.
Note 1: As you may have noticed, I am not changing nXYZ to nABC here. If those are meant to be changed too, the simplified rename command would be
rename 's/XYZ(.*\.[mh])$/ABC$1/' **
Note 2: The question mentions nothing about multiple occurrences of XYZ, so nothing is done in that regard.
Easy find and rename (the util-linux binary in /usr/bin, not the Perl script mentioned elsewhere)
Yes, there is already a command to do this non-recursively:
rename XYZ ABC XYZ*
rename --help
Usage:
rename [options] expression replacement file...
Options:
-v, --verbose explain what is being done
-s, --symlink act on symlink target
-h, --help display this help and exit
-V, --version output version information and exit
For more details see rename(1).
Edit: I missed the "many layers of directories" part of the question because it's a little messy. Adding find:
Easiest to remember:
find . -type f -name "XYZ*" -exec rename XYZ ABC {} \;
Probably faster to finish:
find . -type d -not -path "*/\.*" -not -name ".*" -exec sh -c 'rename XYZ ABC "$1"/XYZ*' _ {} \;
I'm not sure how to get easier than one command line of code.
For the non-recursive case, you can use rename, which is a Perl script:
rename -v -n 's/^.+(?=MyFile)/what-you-want/' *.{h,m}
test:
dir > ls | cat -n
1 XYZMyFile1.h
2 XYZMyFile1.m
3 XYZMyFile.h
4 XYZMyFile.m
dir >
dir > rename -v -n 's/^.+(?=MyFile)/what-you-want/' *.{h,m}
rename(XYZMyFile1.h, what-you-wantMyFile1.h)
rename(XYZMyFile1.m, what-you-wantMyFile1.m)
rename(XYZMyFile.h, what-you-wantMyFile.h)
rename(XYZMyFile.m, what-you-wantMyFile.m)
dir >
And for the recursive case, use find together with this command.
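For instance (a sketch, assuming the Perl rename; the lookahead confines the substitution to the last path component, so directory names stay untouched):
find . -type f -name 'XYZ*' -exec rename -v 's/XYZ(?=[^\/]*$)/ABC/' {} +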
If you do not have access to rename, you can use perl directly like so:
perl -le '($old=$_) && s/^XYZ/ABC/ && rename($old,$_) for <*.[mh]>'
Another option is renrem, a CLI tool I developed in C++ specifically for renaming.

How to cd into grep output?

I have a shell script which basically searches all folders inside a location and I use grep to find the exact folder I want to target.
for dir in /root/*; do
grep "Apples" "${dir}"/*.* || continue
While grep successfully finds my target directory, I'm stuck on how I can move the folders I want to move in my target directory. An idea I had was to cd into grep output but that's where I got stuck. Tried some Google results, none helped with my case.
Example grep output: Binary file /root/ant/containers/secret/Documents/2FD412E0/file.extension matches
I want to cd into 2FD412E0 and move two folders inside that directory.
dirname is the key to that:
cd "$(dirname "$(grep -l "...." ...)")"
will let you enter the directory.
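Note that plain grep prints matching lines (or "Binary file ... matches"), not bare paths; the -l (--files-with-matches) flag makes it emit just the file name, which is what dirname needs. If several files can match, pick one, e.g.:
cd "$(dirname "$(grep -Rl 'Apples' /root | head -n 1)")"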
As people mentioned, dirname is the right tool to strip off the file name from the path.
I would use find for such kind of task:
while read -r file
do
target_dir=$(dirname "$file")
# do something with "$target_dir"
done < <(find /root/ -type f \
-exec grep "Apples" --files-with-matches {} \;)
Consider using find's -maxdepth option. See the man page for find.
Well, there is actually a simpler solution :) I just like to write bash scripts. You might simply use a single find command like this:
find /root/ -type f -exec grep Apples {} ';' -exec ls -l {} ';'
Note the second -exec. It will be executed only if the previous -exec command exited with status 0 (success). From the man page:
-exec command ;
Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of ; is encountered. The string {} is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find.
Replace the ls -l command with your stuff.
And if you want to execute dirname within the -exec command, you may do the following trick:
find /root/ -type f -exec grep -q Apples {} ';' \
-exec sh -c 'cd `dirname $0`; pwd' {} ';'
Replace pwd with your stuff.
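For example, to move two folders into each matching file's directory (the /staging/... source paths are hypothetical placeholders, since the question does not name the folders to move):
find /root/ -type f -exec grep -q Apples {} ';' \
    -exec sh -c 'mv /staging/folder1 /staging/folder2 "$(dirname "$0")"' {} ';'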
When find is not available
In the comments you write that find is not available on your system. The following solution works without find:
grep -R --files-with-matches Apples "${dir}" | while read -r file
do
target_dir=$(dirname "$file")
# do something with "$target_dir"
echo "$target_dir"
done
