copy directory structure without files - bash

Hello, I am trying to make a script which takes two arguments (names of directories) and copies the directory structure of the first into the second.
This is my code:
cd $1 && find . -type d -exec mkdir -p /$2/{} \;
When I run the script I don't get any errors, but nothing seems to happen. What am I doing wrong?
Edit: the script is saved in my home directory (~), and both directories are also there. I run the script in a terminal:
sudo bash DN1c.sh dir1 dir2
The first directory has multiple subdirectories and the second directory is empty.

export src=$1/. dest=$2
find "$src" -type d -exec bash -c 'printf "%s\0" "${#//"$src"/"$dest"}"' sh {} + | xargs -0 mkdir -p

You could use rsync to copy files and/or directories.
To copy directories only, set --max-size=0 and no files will be copied.
Example:
rsync -r -n -v --max-size=0 src_path/ dest_path
-r            recursive
-n            dry run - nothing is copied (drop this flag to perform the real copy)
-v            verbose
--max-size=0  no files, directories only
src_path/     source; use a trailing / if you don't want src_path itself created at the destination
dest_path     destination
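An aside, not from the answer above: another widely used rsync idiom copies only the directory tree with a pair of filter rules, including every directory and excluding everything else:
# "+ */" includes all directories (the trailing / matches directories only);
# "- *" then excludes everything else, i.e. all files.
rsync -a -f"+ */" -f"- *" src_path/ dest_path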

Related

How to solve no such file or directory while using xargs

I am trying to copy a file (or files) to the same directory, but with a prefix. I am using xargs to do that. NOTE: I can't use cd, as it will break the build in my pipeline.
This is what I have
root@gotoadmins-OU:~# ls
snap  test
root@gotoadmins-OU:~# ls test/
me.war
root@gotoadmins-OU:~# ls test | xargs -I {} cp {} latest-{} test/
cp: cannot stat 'me.war': No such file or directory
cp: cannot stat 'latest-me.war': No such file or directory
If I understand the question correctly, you simply want to copy all of the files in a subdirectory to the same subdirectory, but with the prefix "latest-".
find test -type f -execdir bash -c 'cp "$1" latest-"$(basename "$1")"' "$(which bash)" "{}" \;
$(which bash) can be replaced with the word bash or anything, really. It's just a placeholder.
As @KamilCuk commented, a subshell might also be an option. If you put commands inside parentheses (e.g. (cd test; for FILE in *; do cp "$FILE" latest-"$FILE"; done)), those commands are run in a subshell, so the environment, including your working directory, is unaffected.
Can you just use the cp command to do this?
cp test/me.war test/latest-me.war
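A small aside, not part of the answer above: bash brace expansion spells the same copy without repeating the directory name:
cp test/{,latest-}me.war   # expands to: cp test/me.war test/latest-me.war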

rsync --exclude 'folder' copies that folder anyway

There are a lot of posts about this, I know, but I tried them all and still can't get it working.
If this is my folder to backup: /home/user/thingstobackup
the script will create a "backup" folder there, and inside it another folder named with today's date. The daily backup is copied into that.
No matter how I use rsync, the "backup" folder is always copied into itself starting from the 2nd run of the script.
1st run:
/home/user/thingstobackup
/home/user/thingstobackup/backup/2016-01-13 #and correct file inside
2nd run:
/home/user/thingstobackup/backup/2016-01-13 #with correct file inside
/home/user/thingstobackup/backup/2016-01-14 #with correct file inside
I will shorten the paths here:
../backup/2016-01-14/2016-01-13/and backed up file inside..
../backup/2016-01-14/backup/
../backup/2016-01-14/backup/2016-01-13/and backed up file inside..
../backup/2016-01-14/backup/2016-01-14/empty
After the 2nd run, the backup folder is copied inside every daily backup folder.
The script:
#!/bin/bash
export PATH=$PATH:/bin:/usr/bin:/usr/local/bin
# directory to backup
TOSAVE=/home/user/thingstobackup
TODAY=`date +%F`
BDIR=backup
BACKUPDIR=$TOSAVE/$BDIR/$TODAY/
# options for rsync
OPTS="-aq --exclude='backup/*'"
# find daily new file
FIND="`find $TOSAVE -mindepth 1 -mtime -1 -print`"
# MAIN #
# copy daily found inside new created daily folder
[ -d $TOSAVE/$BDIR/$TODAY ] || mkdir -p $TOSAVE/$BDIR/$TODAY
rsync $OPTS $FIND $BACKUPDIR
# delete file older than 2 weeks = 14 days
find $TOSAVE -mtime +14 -exec rm -rf {} \;
No matter which form I use: --exclude='backup/*', --exclude='backup', --exclude 'backup/*', or --exclude 'backup',
it does not exclude that folder. Yes, I read the rsync manual: --exclude=PATTERN excludes files matching PATTERN.
I'm sure I'm missing something but I just can't find it! Thanks in advance mates
I do not know why --exclude does not work here, but I modified the find command and managed to make it work: the -path $TOSAVE/backup ... -prune part makes find skip the backup directory entirely, so its contents are never handed to rsync.
FIND="`find $TOSAVE -mindepth 1 -type d \( -path $TOSAVE/backup \) -prune -o -mtime -1 -print`"
@Nihvel was right:
cd -- "$(mktemp --directory)"
mkdir foo bar
touch foo/foo bar/bar
other="$(mktemp --directory)"
rsync --recursive --exclude 'foo' * "$other"
ls "$other" # prints only "bar"
This works whether the local file specifier is * (as above), ./* or /tmp/tmp.MGUbytm0h0/*, and whether $PWD is /tmp/tmp.MGUbytm0h0 or something else.
You should be able to fix the --exclude issue by not putting quotes around the pattern, or by using a space instead of =. Because $OPTS is expanded unquoted, the single quotes in --exclude='backup/*' reach rsync as literal characters in the pattern, so it never matches; rsync is very particular about how options are written:
--exclude=backup
alternative:
--exclude '*dir'
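A hedged aside on the root cause: options stored in a plain string variable undergo word splitting but not quote removal when expanded, so the quotes inside $OPTS survive into the pattern. A bash array avoids the problem; a minimal sketch with an invented destination path:
# Quotes are removed here at assignment time, as on any command line,
# so each array element becomes one clean argument for rsync.
opts=(-aq --exclude='backup/*')
rsync "${opts[@]}" /home/user/thingstobackup/ /some/dest/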
Other Suggestions:
Remove trailing slashes on variables (rsync is very specific about /)
For command substitution use $( ... ) instead of backticks (if available/possible)
Eliminate redundant code (e.g. you define $BACKUPDIR but effectively use it only once)
When having issues, always use the --verbose option; it really helps with rsync
Revised Script:
#!/usr/bin/env bash
# export PATH=$PATH:/bin:/usr/bin:/usr/local/bin
# directory to backup
TOSAVE=$HOME/thingstobackup
TODAY=$(date +%F)
BDIR=backup
BACKUPDIR=$TOSAVE/$BDIR/$TODAY
# options for rsync
OPTS="-raq --exclude=$BDIR --exclude=$TODAY"
# find daily new file
FIND=$(find $TOSAVE -mindepth 1 -mtime -1 -print)
# MAIN #
# copy daily found inside new created daily folder
[ -d $BACKUPDIR ] || mkdir -p $BACKUPDIR
rsync $OPTS $FIND $BACKUPDIR
# delete file older than 2 weeks = 14 days
find $TOSAVE -mtime +14 -exec rm -rf {} \;
Most of the changes are in the way the paths are constructed. For a thingstobackup directory containing:
file1
file2
file3
running the script would make it appear like:
file1
file2
file3 // files that will be included at the rsync destination
|
|_______backup // --exclude=$BDIR keeps this folder out of the copy
        |
        |________2016-01-14 // --exclude=$TODAY is also advised
                 file1
                 file2
                 file3
EDIT: For rsync not to break when it encounters a filename containing spaces, you'll want to handle that in your find step; one way is sketched below.
* You could likely refine your script further by using rsync commands instead of find altogether, but I'll leave that up to you.
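A hedged sketch of one whitespace-safe handoff, reusing the variable names from the script above; --from0 and --files-from are standard rsync options:
cd "$TOSAVE" || exit 1
# Emit NUL-terminated relative paths, pruning the backup folder, and let
# rsync read the NUL-delimited list from stdin ("." is the transfer source).
find . -mindepth 1 -path "./$BDIR" -prune -o -mtime -1 -print0 |
    rsync -aq --from0 --files-from=- . "$BACKUPDIR"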

Unable to use dirname within subshell in find command

I am trying to make a small script that can move all files from one directory to another, and I figured using find would be the best solution. However, I have run into a problem using subshells for the 'dirname' value when creating the target directory paths. This does not work because $(dirname {}) is expanded before find ever runs, so dirname sees the literal string {} and returns '.' (a single dot). As seen in my script below, the -exec mkdir -p $toDir/$(dirname {}) \; portion of the find command is what does not work. I want to create all of the target directories needed to move the files, but I cannot use dirname in a subshell to get only the directory path.
Here is the script:
#!/bin/bash
# directory containing files to deploy relative to this script
fromDir=../deploy
# directory where the files should be moved to relative to this script
toDir=../main
if [ -d "$fromDir" ]; then
if [ -d "$toDir" ]; then
toDir=$(cd $toDir; pwd)
cd $fromDir
find * -type f -exec echo "Moving file [$(pwd)/{}] to [$toDir/{}]" \; -exec mkdir -p $toDir/$(dirname {}) \; -exec mv {} $toDir/{} \;
else
echo "A directory named '$toDir' does not exist relative to this script"
fi
else
echo "A directory named '$fromDir' does not exist relative to this script"
fi
I know that you can use -exec sh -c 'echo $(dirname {})' \;, but with this, I would then not be able to use the $toDir variable.
Can anyone help me figure out a solution to this problem?
Since you appear to be re-creating all the files and directories, try the tar trick:
mkdir $toDir
cd $fromDir
tar -cf - . | ( cd $toDir ; tar -xvf - )
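For completeness, the quoting problem from the question can also be tackled directly: pass $toDir and each filename into the inner shell as positional arguments, so dirname runs at execution time instead of when the command line is built. A minimal sketch, not part of the answer above, run from inside $fromDir like the rest of the script:
# $0 is a throwaway name; $1 receives $toDir; $2 receives each found file.
find . -type f -exec sh -c '
    mkdir -p "$1/$(dirname "$2")"   # create the mirrored directory path
    mv "$2" "$1/$2"                 # move the file into it
' sh "$toDir" {} \;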

Can I limit the recursion when copying using find (bash)

I have been given a list of folders which need to be found and copied to a new location.
I have basic knowledge of bash and have created a script to find and copy.
The basic command I am using is working, to a certain degree:
find ./ -iname "*searchString*" -type d -maxdepth 1 -exec cp -r {} /newPath/ \;
The problem I want to resolve is that each found folder contains the files that I want, but also contains subfolders which I do not want.
Is there any way to limit the recursion so that only the files at the root level of the found folder are copied: all subdirectories and files therein should be ignored.
Thanks in advance.
If you remove -R, cp doesn't copy directories:
cp *searchstring*/* /newpath
The command above copies dir1/file1 to /newpath/file1, but these commands copy it to /newpath/dir1/file1:
cp --parents *searchstring*/*(.) /newpath
for GNU cp and zsh
. is a qualifier for regular files in zsh
cp --parents dir1/file1 dir2 copies file1 to dir2/dir1 in GNU cp (a short demo appears after these alternatives)
t=/newpath;for d in *searchstring*/;do mkdir -p "$t/$d";cp "$d"* "$t/$d";done
find *searchstring*/ -maxdepth 1 -type f -exec rsync -R {} /newpath \;
-R (--relative) is like --parents in GNU cp
find . -maxdepth 2 -ipath '*searchstring*/*' -type f -exec ditto {} /newpath/{} \;
ditto is only available on OS X
ditto file dir/file creates dir if it doesn't exist
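A quick hedged demo of the GNU cp --parents behavior, using a throwaway directory in the style of the earlier mktemp example:
cd "$(mktemp --directory)"
mkdir -p dir1 newpath
touch dir1/file1
cp --parents dir1/file1 newpath
ls newpath/dir1   # file1: the source path was recreated under newpath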
So ... you've been given a list of folders. Perhaps in a text file? You haven't provided an example, but you've said in comments that there will be no name collisions.
One option would be to use rsync, which is available as an add-on package for most versions of Unix and Linux. Rsync is basically an advanced copying tool -- you provide it with one or more sources, and a destination, and it makes sure things are synchronized. It knows how to copy things recursively, but it can't be told to limit its recursion to a particular depth, so the following will copy each item specified to your target, but it will do so recursively.
xargs -L 1 -J % rsync -vi -a % /path/to/target/ < sourcelist.txt
If sourcelist.txt contains a line with /foo/bar/slurm, then the slurm directory will be copied in its entirety to /path/to/target/slurm/. But this would include directories contained within slurm.
This will work in pretty much any shell, not just bash. But it will fail if one of the lines in sourcelist.txt contains whitespace, or various special characters. So it's important to make sure that your sources (on the command line or in sourcelist.txt) are formatted correctly. Also, rsync has different behaviour if a source directory includes a trailing slash, and you should read the man page and decide which behaviour you want.
You can sanitize your input file fairly easily in sh, or bash. For example:
#!/bin/sh
# Avoid commented lines...
grep -v '^[[:space:]]*#' sourcelist.txt | while IFS= read -r line; do
    # Remove any trailing slash, just in case
    source=${line%%/}
    # make sure the source exists before we try to copy it
    if [ -d "$source" ]; then
        rsync -vi -a "$source" /path/to/target/
    fi
done
But this still uses rsync's -a option, which copies things recursively.
I don't see a way to do this using rsync alone. Rsync has no -depth option, as find has. But I can see doing this in two passes -- once to copy all the directories, and once to copy the files from each directory.
So I'll make up an example, and assume further that folder names do not contain special characters like spaces or newlines. (This is important.)
First, let's do a single-pass copy of all the directories themselves, not recursing into them:
xargs -L 1 -J % rsync -vi -d % /path/to/target/ < sourcelist.txt
The -d option creates the directories that were specified in sourcelist.txt, if they exist.
Second, let's walk through the list of sources, copying each one:
# Basic sanity checking on input...
grep -v '^[[:space:]]*#' sourcelist.txt | while IFS= read -r line; do
    if [ -d "$line" ]; then
        # Strip trailing slashes, as before
        source=${line%%/}
        # Grab the directory name from the source path
        target=${source##*/}
        rsync -vi -a "$source/" "/path/to/target/$target/"
    fi
done
Note the trailing slash after $source on the rsync line. This causes rsync to copy the contents of the directory, rather than the directory.
Does all this make sense? Does it match your requirements?
You can use find's -ipath option:
find . -maxdepth 2 -ipath './*searchString*/*' -type f -exec cp '{}' '/newPath/' ';'
Notice the path starts with ./ to match find's search directory, ends with /* in order to exclude files in the top level directory, and maxdepth is set to 2 to only recurse one level deep.
Edit:
Re-reading your comments, it seems like you want to preserve the directory you're copying from? E.g. when searching for foo*:
./foo1/* ---> copied to /newPath/foo1/* (not to /newPath/*)
./foo2/* ---> copied to /newPath/foo2/* (not to /newPath/*)
Also, the other requirement is to keep maxdepth at 1 for speed reasons.
(As pointed out in the comments, the following solution has security issues for specially crafted names)
Combining both, you could use this:
find . -maxdepth 1 -type d -iname '*searchString*' -exec sh -c "mkdir -p '/newPath/{}'; cp "{}/*" '/newPath/{}/' 2>/dev/null" ';'
Edit 2:
Why not ditch find altogether and use a pure bash solution:
for d in *searchString*/; do mkdir -p "/newPath/$d"; cp "$d"* "/newPath/$d"; done
Note the / at the end of the search string, causing only directories to be considered for matching.

Unix script to find all folders in the directory, then tar and move them

Basically I need to run a Unix script to find all folders in the directory /fss/fin, if it exists; then I have to tar them and move them to another directory, /fs/fi.
This is my command so far:
find /fss/fin -type d -name "essbase" -print
Here I have directly mentioned the folder name essbase. But instead, I would like to find all the folders in /fss/fin and use them all.
How do I find all folders in the /fss/fin directory & tar them to move them to /fs/fi?
Clarification 1:
Yes, I need to find all the folders in the /fss/fin directory using a Unix shell script and tar them to another directory, /fs/fi.
Clarification 2:
To make the requirement clear, the shell script should:
Find all the folders in the directory /fss/fin
Tar the folders
Move the folders to another directory, /fs/fi, which is located on the server s11003232sz.net
On user request, untar the folders and move them back to the original directory /fss/fin
Here is an example I am working with that may lead you in the correct direction:
BackUpDIR="/srv/backup/"
SrvDir="/srv/www/"
DateStamp=$(date +"%Y%m%d");
for Dir in $(find $SrvDir* -maxdepth 0 -type d); do
    FolderName=$(basename $Dir);
    tar zcf "$BackUpDIR$DateStamp.$FolderName.tar.gz" -P $Dir
done
Since tar does directories automatically, you really don't need to do very much. Assuming GNU tar:
tar -C /fss/fin -cf - essbase |
tar -C /fs/fi -xf -
The '-C' option changes directory before operating. The first tar writes to standard output (the lone '-') everything found in the essbase directory. The output of that tar is piped to the second tar, which reads its standard input (the lone '-'; fun isn't it!).
Assuming GNU find, you can also do:
(cd /fss/fin; tar -cf - $(find . -maxdepth 1 -type d | sed '/^\.$/d')) |
tar -xf - -C /fs/fi
This changes directory to the source directory; it runs 'find' with a maximum depth of 1 to find the directories and removes the current directory from the list with 'sed'; the first 'tar' then writes the output to the second one, which is the same as before (except I switched the order of the arguments to emphasize the parallelism between the two invocations).
If your top-level directories (those actually in /fss/fin) have spaces in the names, then there is more work to do again - I'm assuming none of the directories to be backed up start with a '.':
(cd /fss/fin; find * -maxdepth 0 -type d -print0 | xargs -0 tar -cf -) |
tar -xf - -C /fs/fi
This weeds out the non-directories from the list generated by '*', and writes them with NUL '\0' (zero bytes) marking the end of each name (instead of a newline). The output is written to 'xargs', which is configured to expect the NUL-terminated names, and it runs 'tar' with the correct directory names. The output of this ensemble is sent to the second tar, as before.
If you have directory names starting with a '.' to collect, then add '.[a-z]*' or another suitable pattern after the '*'; it is crucial that what you use does not list '.' or '..'. If you have names starting with dashes in the directory, then you need to use './*' and './.[a-z]*'.
If you've got still more perverse requirements, enunciate them clearly in an amendment to the question.
find /fss/fin -mindepth 1 -maxdepth 1 -type d -print
The above command gives you the list of 1st-level subdirectories of /fss/fin.
Then you can do anything with this. E.g. tar them to your output directory as in the command below
tar -czf /fs/fi/outfile.tar.gz `find /fss/fin -mindepth 1 -maxdepth 1 -type d -print`
Original directory structure will be recreated after untar-ing.
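For requirement 4 (untar and move back), a hedged sketch based on the single-archive command above: GNU tar strips the leading / when archiving, so members are stored as fss/fin/..., and extracting from / puts them back in place.
# Restore the archived folders to their original location (GNU tar assumed).
tar -xzf /fs/fi/outfile.tar.gz -C /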
Here is a bash example (change /fss/fin, /fs/fi with your paths):
dirs=($(find /fss/fin -type d))
for dir in "${dirs[@]}"; do
    tar zcf "$dir.tgz" "$dir" -P -C /fs/fi && mv -v "$dir" /fs/fi/
done
which finds all the folders, tars them separately, and, if successful, moves them into a different folder.
This should do it:
#!/bin/sh
list=`find . -type d`
for i in $list
do
    if [ "$i" != "." ]; then
        tar -czf ${i}.tar.gz ${i}
    fi
done
mv *.tar.gz ~/tardir
