Change directories - shell

I am having some trouble running some commands in the shell.
My problem is that I want to change to a directory whose path I don't know, but which contains a file named xxx.
How can I change directly to the directory that contains that file?
If I knew the names of the directories containing the file it would be easier, because I would only have to use cd ~/name-of-directory.
Can anyone help me?
thanks

If you have GNU find:
cd "$(find /startdir -name 'filename' -printf %h -quit)"
You can replace "/startdir" with any valid directory, for example /, ., or ~/.
If you want to cd to the directory in your $PATH that contains a given executable file:
cd "$(dirname "$(type -P "filename")")" # Bash
or
cd "$(f=$(type -P "ksh"); echo "${f%/*}")" # Bash
or
cd "$(dirname "$(which "filename")")"

If you don't know where a file is, go to the root of the system and find it:
cd /
find . -iname filename

On most Linux systems you can do:
$ cd `find . -name "filename" | xargs dirname`
Change "filename" to the file you want to find.

BASH
cd "$(dirname "$(find . -name "*filename*" | head -1)")"
This is a variation on Qiau's answer: it finds the first file whose name contains the string filename and then changes the current directory to the directory containing it.
* is a wildcard; there may be something before and/or after filename.
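If you do this often, you could wrap the idea above in a small function. A minimal, untested sketch, assuming GNU find; cdf is a made-up name:
cdf() {
    local dir
    dir=$(find . -name "$1" -printf '%h\n' -quit)
    [ -n "$dir" ] && cd "$dir"
}
Then cdf xxx jumps to the first directory under the current one that contains a file named xxx, and does nothing if there is no match.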

Related

Go to parent directory inline

I was wondering if it's possible to go to the parent directory of a file inline. I know I could type cd .. and it would. However, what if I wanted to do something like echo $(find . -name "xyz.png") and have it return the parent directory of the file instead of the path to the file? Or, instead of a file, I search for a folder and want to return the path to its parent directory.
You could use dirname to strip off the last part of a path. Combined with find in your examples it would give you just the parent directory of whatever was found. You could use that in cd as in cd $(find -name "xyz.png" | xargs dirname) if that's the sort of thing you're trying to do.
You can also use the -type d option to find to have it only find directories if you want to match directory names instead of filenames.
UNTESTED:
function dirs_ofFile
{
    find . -name "$1" | xargs dirname
}
then
$ cd $(dirs_ofFile xyz.png | sed 1q) # in case there is more than one.
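If the paths may contain spaces, the xargs word-splitting above becomes a problem. An untested variant that sidesteps it by letting find call dirname once per match:
function dirs_ofFile
{
    find . -name "$1" -exec dirname {} \;
}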

Unable to use dirname within subshell in find command

I am trying to make a small script that can move all files from one directory to another, and I figured using find would be the best solution. However, I have run into a problem with using subshells for the 'dirname' value when creating the target directory paths: {} evaluates to '.' (a single dot) when inside a subshell. As seen in my script below, the -exec mkdir -p $toDir/$(dirname {}) \; portion of the find command is what does not work. I want to create all of the target directories needed to move the files, but I cannot use dirname in a subshell to get only the directory path.
Here is the script:
#!/bin/bash
# directory containing files to deploy relative to this script
fromDir=../deploy
# directory where the files should be moved to relative to this script
toDir=../main
if [ -d "$fromDir" ]; then
    if [ -d "$toDir" ]; then
        toDir=$(cd $toDir; pwd)
        cd $fromDir
        find * -type f -exec echo "Moving file [$(pwd)/{}] to [$toDir/{}]" \; -exec mkdir -p $toDir/$(dirname {}) \; -exec mv {} $toDir/{} \;
    else
        echo "A directory named '$toDir' does not exist relative to this script"
    fi
else
    echo "A directory named '$fromDir' does not exist relative to this script"
fi
I know that you can use -exec sh -c 'echo $(dirname {})' \;, but with this I would then not be able to use the $toDir variable.
Can anyone help me figure out a solution to this problem?
Since you appear to be re-creating all the files and directories, try the tar trick:
mkdir $toDir
cd $fromDir
tar -cf - . | ( cd $toDir ; tar -xvf - )
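For completeness, a sketch of the other route the question asks about: passing $toDir into the -exec subshell as a positional parameter instead of interpolating it (untested; spawns one sh per file):
find . -type f -exec sh -c 'mkdir -p "$2/$(dirname "$1")" && mv "$1" "$2/$1"' _ {} "$toDir" \;
Here {} arrives as $1 and "$toDir" as $2, so dirname runs inside the inner shell where the filename has already been substituted, avoiding the '.' problem.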

Remove all files except some from a directory

When using sudo rm -r, how can I delete all files, with the exception of the following:
textfile.txt
backup.tar.gz
script.php
database.sql
info.txt
find [path] -type f -not -name 'textfile.txt' -not -name 'backup.tar.gz' -delete
If you don't specify -type f find will also list directories, which you may not want.
Or a more general solution using the very useful combination find | xargs:
find [path] -type f -not -name 'EXPR' -print0 | xargs -0 rm --
for example, delete all non txt-files in the current directory:
find . -type f -not -name '*txt' -print0 | xargs -0 rm --
The -print0 and -0 combination is needed if there are spaces in any of the filenames that should be deleted.
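With GNU xargs you can also add -r (--no-run-if-empty) so rm is not invoked at all when nothing matches; a minor variation on the above:
find . -type f -not -name '*txt' -print0 | xargs -0 -r rm --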
rm !(textfile.txt|backup.tar.gz|script.php|database.sql|info.txt)
The extglob (Extended Pattern Matching) needs to be enabled in BASH (if it's not enabled):
shopt -s extglob
find . | grep -v "excluded files criteria" | xargs rm
This will list all files in current directory, then list all those that don't match your criteria (beware of it matching directory names) and then remove them.
Update: based on your edit, if you really want to delete everything from current directory except files you listed, this can be used:
mkdir /tmp_backup && mv textfile.txt backup.tar.gz script.php database.sql info.txt /tmp_backup/ && rm -r * && mv /tmp_backup/* . && rmdir /tmp_backup
It will create a backup directory /tmp_backup (you've got root privileges, right?), move files you listed to that directory, delete recursively everything in current directory (you know that you're in the right directory, do you?), move back to current directory everything from /tmp_backup and finally, delete /tmp_backup.
I chose the backup directory to be in the root, because if you're trying to delete everything recursively from root, your system will have big problems.
Surely there are more elegant ways to do this, but this one is pretty straightforward.
I prefer to use a subquery list:
rm -r `ls | grep -v "textfile.txt\|backup.tar.gz\|script.php\|database.sql\|info.txt"`
-v, --invert-match: select non-matching lines
\|: separator between the patterns
Assuming that files with those names exist in multiple places in the directory tree and you want to preserve all of them:
find . -type f ! -regex ".*/\(textfile.txt\|backup.tar.gz\|script.php\|database.sql\|info.txt\)" -delete
You can use GLOBIGNORE environment variable in Bash.
Suppose you want to delete all files except php and sql, then you can do the following -
export GLOBIGNORE=*.php:*.sql
rm *
export GLOBIGNORE=
Setting GLOBIGNORE like this makes wildcards such as "ls *" or "rm *" skip php and sql files. So, using "rm *" after setting the variable will delete only the txt and tar.gz files.
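A related trick, assuming Bash: scope the variable to a subshell so you don't have to reset it afterwards, e.g.:
( GLOBIGNORE='*.php:*.sql'; rm * )
The assignment dies with the subshell, leaving your interactive globbing untouched.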
Since nobody mentioned it:
copy the files you don't want to delete to a safe place
delete all the files
move the copied files back into place
You can write a for loop for this... %)
for x in *
do
    if [ "$x" != "exclude_criteria" ]
    then
        rm -f "$x"
    fi
done
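If you have several exclusions, a case statement reads more naturally than chained if tests; a sketch along the same lines (plain POSIX sh):
for x in *
do
    case "$x" in
        textfile.txt|backup.tar.gz|script.php|database.sql|info.txt) ;;
        *) rm -f "$x" ;;
    esac
done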
A little late for the OP, but hopefully useful for anyone who gets here much later via Google...
I found the answer by @awi and the comment on -delete by @Jamie Bullock really useful. A simple utility so you can do this in different directories, ignoring different file names/types each time, with minimal typing:
rm_except (or whatever you want to name it)
#!/bin/bash
ignore=""
for fignore in "$@"; do
    ignore=${ignore}"-not -name ${fignore} "
done
find . -type f $ignore -delete
e.g. to delete everything except for text files and foo.bar:
rm_except *.txt foo.bar
Similar to @mishunika, but without the if clause.
If you're using zsh (which I highly recommend):
rm -rf ^file/folder pattern to avoid
With extended_glob
setopt extended_glob
rm -- ^*.txt
rm -- ^*.(sql|txt)
Trying it out, it worked with:
rm -r !(Applications|"Virtualbox VMs"|Downloads|Documents|Desktop|Public)
but names with spaces are (as always) tough. I also tried Virtualbox\ VMs instead of the quotes, but it still always deletes that directory (Virtualbox VMs).
Just:
rm $(ls -I "*.txt") # Deletes all files except *.txt
Or:
rm $(ls -I "*.txt" -I "*.pdf") # Deletes all files except *.txt and *.pdf
Make the files immutable. Not even root will be allowed to delete them.
chattr +i textfile.txt backup.tar.gz script.php database.sql info.txt
rm *
All other files have been deleted.
Afterwards you can make them mutable again.
chattr -i *
I believe you can use
rm -v !(filename)
Except for filename, all the other files in the directory will be deleted; make sure you are using it in Bash with extglob enabled (shopt -s extglob).
This is similar to the comment from @siwei-shen, but you need the -o flag to do multiple patterns. The -o flag stands for 'or'. Note the patterns have to be grouped so the -not applies to both:
find . -type f -not \( -name '*ignore1' -o -name '*ignore2' \) | xargs rm
You can do this with two command sequences.
First define an array with the names of the files you do not want to delete:
files=( backup.tar.gz script.php database.sql info.txt )
After that, loop through all files in the directory, checking if each filename is in the array of files to keep; if it's not, delete the file.
for file in *; do
    if [[ ! " ${files[*]} " =~ " ${file} " ]]; then
        rm "$file"
    fi
done
The answer I was looking for was to run a script, but I wanted to avoid deleting the script itself. So in case someone is looking for a similar answer, do the following.
Create a .sh file and write the following code:
cp my_run_build.sh ../../
rm -rf *
cp ../../my_run_build.sh .
# amend rest of the script
Since no one yet mentioned this, in one particular case:
OLD_FILES=`echo *`
... create new files ...
rm -r $OLD_FILES
(or just rm $OLD_FILES)
or
OLD_FILES=`ls *`
... create new files ...
rm -r $OLD_FILES
You may need to use shopt -s nullglob if some files may be either there or not there:
SET_OLD_NULLGLOB=`shopt -p nullglob`
shopt -s nullglob
FILES=`echo *.sh *.bash`
$SET_OLD_NULLGLOB
without nullglob, echo *.sh *.bash may give you "a.sh b.sh *.bash".
(Having said all that, I myself prefer this answer, even though it does not work in OSX)
Rather than going for a direct command, move the required files to a temp dir outside the current dir. Then delete all files using rm * or rm -r *.
Then move the required files back to the current dir.
Remove everything except file.name:
ls -d /path/to/your/files/* | grep -v file.name | xargs rm -rf

Unix script to find all folders in the directory, then tar and move them

Basically I need to run a Unix script to find all folders in the directory /fss/fin, if it exists; then I have to tar them and move them to another directory /fs/fi.
This is my command so far:
find /fss/fin -type d -name "essbase" -print
Here I have directly mentioned the folder name essbase. But instead, I would like to find all the folders in the /fss/fin and use them all.
How do I find all folders in the /fss/fin directory & tar them to move them to /fs/fi?
Clarification 1:
Yes, I need to find all the folders in the /fss/fin directory using a Unix shell script, tar them, and move them to another directory /fs/fi.
Clarification 2:
I want to make the requirement clear. The shell script should:
Find all the folders in the directory /fss/fin
Tar the folders
Move the folders to another directory /fs/fi which is located on the server s11003232sz.net
On user request it should untar the folders and move them back to the original directory /fss/fin
Here is an example I am working with that may lead you in the correct direction:
BackUpDIR="/srv/backup/"
SrvDir="/srv/www/"
DateStamp=$(date +"%Y%m%d");
for Dir in $(find $SrvDir* -maxdepth 0 -type d );
do
    FolderName=$(basename $Dir);
    tar zcf "$BackUpDIR$DateStamp.$FolderName.tar.gz" -P $Dir
done
Since tar does directories automatically, you really don't need to do very much. Assuming GNU tar:
tar -C /fss/fin -cf - essbase |
tar -C /fs/fi -xf -
The '-C' option changes directory before operating. The first tar writes to standard output (the lone '-') everything found in the essbase directory. The output of that tar is piped to the second tar, which reads its standard input (the lone '-'; fun isn't it!).
Assuming GNU find, you can also do:
(cd /fss/fin; tar -cf - $(find . -maxdepth 1 -type d | sed '/^\.$/d')) |
tar -xf - -C /fs/fi
This changes directory to the source directory; it runs 'find' with a maximum depth of 1 to find the directories and removes the current directory from the list with 'sed'; the first 'tar' then writes the output to the second one, which is the same as before (except I switched the order of the arguments to emphasize the parallelism between the two invocations).
If your top-level directories (those actually in /fss/fin) have spaces in the names, then there is more work to do again - I'm assuming none of the directories to be backed up start with a '.':
(cd /fss/fin; find * -maxdepth 0 -type d -print0 | xargs -0 tar -cf -) |
tar -xf - -C /fs/fi
This weeds out the non-directories from the list generated by '*', and writes them with NUL '\0' (zero bytes) marking the end of each name (instead of a newline). The output is written to 'xargs', which is configured to expect the NUL-terminated names, and it runs 'tar' with the correct directory names. The output of this ensemble is sent to the second tar, as before.
If you have directory names starting with a '.' to collect, then add '.[a-z]*' or another suitable pattern after the '*'; it is crucial that what you use does not list '.' or '..'. If you have names starting with dashes in the directory, then you need to use './*' and './.[a-z]*'.
If you've got still more perverse requirements, enunciate them clearly in an amendment to the question.
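If your tar is GNU tar, you can also skip xargs and feed it the NUL-separated list directly; a sketch, assuming GNU tar's --null/-T options and GNU find's -mindepth/-maxdepth:
(cd /fss/fin && find . -mindepth 1 -maxdepth 1 -type d -print0 | tar --null -cf - -T -) | tar -xf - -C /fs/fi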
find /fss/fin -mindepth 1 -maxdepth 1 -type d -print
The above command gives you the list of 1st level subdirectories of the /fss/fin.
Then you can do anything with this. E.g. tar them to your output directory as in the command below
tar -czf /fs/fi/outfile.tar.gz `find /fss/fin -mindepth 1 -maxdepth 1 -type d -print`
The original directory structure will be recreated after untarring.
Here is a bash example (change /fss/fin, /fs/fi with your paths):
dirs=($(find /fss/fin -type d))
for dir in "${dirs[@]}"; do
    tar zcf "$dir.tgz" "$dir" -P -C /fs/fi && mv -v "$dir" /fs/fi/
done
which finds all the folders, tars them separately, and, if successful, moves them into the other folder.
This should do it:
#!/bin/sh
list=`find . -type d`
for i in $list
do
    if [ "$i" != "." ]; then
        tar -czf ${i}.tar.gz ${i}
    fi
done
mv *.tar.gz ~/tardir

How do I rename the extension for a bunch of files?

In a directory, I have a bunch of *.html files. I'd like to rename them all to *.txt
How can I do that? I use the bash shell.
If using bash, there's no need for external commands like sed, basename, rename, expr, etc.
for file in *.html
do
    mv "$file" "${file%.html}.txt"
done
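If you'd like to preview the renames before committing to them, a harmless dry run is to prefix the mv with echo:
for file in *.html; do echo mv "$file" "${file%.html}.txt"; done
Once the printed commands look right, drop the echo.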
The following also works and does not require the system to have the rename program (although you would most often have it on a system):
for file in *.html; do
    mv "$file" "$(basename "$file" .html).txt"
done
EDIT: As pointed out in the comments, this does not work for filenames with spaces in them without proper quoting (now added above). When working purely on your own files that you know do not have spaces in the filenames this will work, but whenever you write something that may be reused at a later time, do not skip proper quoting.
For a better solution (with only Bash functionality, as opposed to external calls), see the answer above.
rename 's/\.html$/\.txt/' *.html
does exactly what you want.
This worked for me on OSX from .txt to .txt_bak
find . -name '*.txt' -exec sh -c 'mv "$0" "${0%.txt}.txt_bak"' {} \;
You want to use rename:
rename -S <old_extension> <new_extension> <files>
rename -S .html .txt *.html
This does exactly what you want - it will change the extension from .html to .txt for all files matching *.html.
Note: Greg Hewgill correctly points out this is not a bash builtin; and is a separate Linux command. If you just need something on Linux this should work fine; if you need something more cross-platform then take a look at one of the other answers.
On a Mac...
Install rename if you haven't: brew install rename
rename -S .html .txt *.html
For Ubuntu users:
rename 's/\.html$/\.txt/' *.html
This is the slickest solution I've found that works on OSX and Linux, and it works nicely with git too!
find . -name "*.js" -exec bash -c 'mv "$1" "${1%.js}".tsx' - '{}' \;
and with git:
find . -name "*.js" -exec bash -c 'git mv "$1" "${1%.js}".tsx' - '{}' \;
This question explicitly mentions Bash, but if you happen to have ZSH available it is pretty simple:
zmv '(*).*' '$1.txt'
If you get zsh: command not found: zmv then simply run:
autoload -U zmv
And then try again.
Thanks to this original article for the tip about zmv.
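zmv also has a wildcard mode, which (assuming your zsh's zmv supports the -W flag) lets you skip the capture-group syntax entirely:
zmv -W '*.html' '*.txt'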
Here is an example of the rename command:
rename -n 's/\.htm$/\.html/' *.htm
The -n means that it's a test run and will not actually change any files. It will show you a list of files that would be renamed if you removed the -n. In the case above, it will convert all files in the current directory from a file extension of .htm to .html.
If the output of the above test run looked ok then you could run the final version:
rename -v 's/\.htm$/\.html/' *.htm
The -v is optional, but it's a good idea to include it because it is the only record you will have of changes that were made by the rename command as shown in the sample output below:
$ rename -v 's/\.htm$/\.html/' *.htm
3.htm renamed as 3.html
4.htm renamed as 4.html
5.htm renamed as 5.html
The tricky part in the middle is a Perl substitution with regular expressions:
rename -v 's/\.htm$/\.html/' *.htm
One line, no loops:
ls -1 | xargs -L 1 -I {} bash -c 'mv "$1" "${1%.*}.txt"' _ {}
Example:
$ ls
60acbc4d-3a75-4090-85ad-b7d027df8145.json ac8453e2-0d82-4d43-b80e-205edb754700.json
$ ls -1 | xargs -L 1 -I {} bash -c 'mv "$1" "${1%.*}.txt"' _ {}
$ ls
60acbc4d-3a75-4090-85ad-b7d027df8145.txt ac8453e2-0d82-4d43-b80e-205edb754700.txt
The command mmv seems to do this task very efficiently on a huge number of files (tens of thousands in a second). For example, to rename all .xml files to .html files, use this:
mmv ";*.xml" "#1#2.html"
the ; will match the path, the * will match the filename, and these are referred to as #1 and #2 in the replacement name.
Answers based on exec or pipes were either too slow or failed on a very large number of files.
On Linux, or on Windows in Git Bash or WSL, try the command below to change the extension of every matching file in the current directory and all of its sub-directories with just one line of code:
find . -depth -name "*.html" -exec sh -c 'mv "$1" "${1%.html}.txt"' _ {} \;
Try this
rename .html .txt *.html
usage:
rename [find] [replace_with] [criteria]
After someone else's website crawl, I ended up with thousands of files missing the .html extension, across a wide tree of subdirectories.
To rename them all in one shot, except the files already having a .html extension (most of them had none at all), this worked for me:
cd wwwroot
find . -xtype f \! -iname '*.html' -exec mv -iv "{}" "{}.html" \; # batch rename files to append .html suffix IF MISSING
In the OP's case I might modify that slightly, to only rename *.txt files, like so:
find . -xtype f -iname '*.txt' -exec sh -c 'mv -iv "$1" "${1%.*}.html"' _ {} \;
Broken down (hammertime!):
-iname '*.txt'
- Means consider ONLY files already ending in .txt
sh -c 'mv -iv "$1" "${1%.*}.html"' _ {}
- find passes each matching filename to the subshell as $1; ${1%.*} strips the extension, so the final command runs as: mv -iv "filename.txt" "filename.html"
The quoting around "$1" also takes care of spaces in filenames.
This is a good way to modify multiple extensions at once:
for fname in *.{mp4,avi}
do
    mv -v "$fname" "${fname%.???}.mkv"
done
Note: be careful that the extensions are the same length (the ??? matches exactly three characters).
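If the extensions differ in length, "${fname%.*}" strips the last dot-suffix regardless of size; a sketch of the same loop without the fixed-width assumption (use shopt -s nullglob first in case one pattern matches nothing):
for fname in *.mp4 *.avi
do
    mv -v "$fname" "${fname%.*}.mkv"
done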
Rename file extensions for all files under current directory and sub directories without any other packages (only use shell script):
Create a shell script rename.sh under current directory with the following code:
#!/bin/bash
for file in $(find . -name "*$1"); do
    mv "$file" "${file%$1}$2"
done
Run it by ./rename.sh .old .new.
E.g. ./rename.sh .html .txt
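If any of the paths may contain spaces, the for-over-find pattern above will split them. An untested, spaces-safe sketch of the same script using find's NUL-separated output (Bash):
#!/bin/bash
find . -name "*$1" -print0 | while IFS= read -r -d '' file; do
    mv "$file" "${file%$1}$2"
done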
A bit late to the party. You could do it with xargs:
ls *.html | xargs -I {} sh -c 'mv "$1" "$(basename "$1" .html).txt"' - {}
Or if all your files are in some folder:
ls folder/*.html | xargs -I {} sh -c 'mv "$1" "folder/$(basename "$1" .html).txt"' - {}
Similarly to what was suggested before, this is how I did it:
find . -name '*OldText*' -exec sh -c 'mv "$0" "${0/OldText/NewText}"' {} \;
I first validated with
find . -name '*OldText*' -exec sh -c 'echo mv "$0" "${0/OldText/NewText}"' {} \;
Nice & simple!
find . -iname *.html -exec mv {} "$(basename {} .html).text" \;
If you prefer Perl, there is a short Perl script (originally written by Larry Wall, the creator of Perl) that will do exactly what you want here:
tips.webdesign10.com/files/rename.pl.txt.
For your example the following should do the trick:
rename.pl 's/html/txt/' *.html
The easiest way is to use rename.ul; it is present in most Linux distros:
rename.ul -o -v [oldFileExtension] [newFileExtension] [expression to search for file to be applied with]
rename.ul -o -v .oldext .newext *.oldext
Options:
-o: don't overwrite preexisting .newext
-v: verbose
-n: dry run
Unfortunately it's not trivial to do portably. You probably need a bit of expr magic.
for file in *.html; do echo mv -- "$file" "$(expr "$file" : '\(.*\)\.html').txt"; done
Remove the echo once you're happy it does what you want.
Edit: basename is probably a little more readable for this particular case, although expr is more flexible in general.
Here is what I used to rename .edge files to .blade.php:
for file in *.edge; do mv "$file" "$(basename "$file" .edge).blade.php"; done
Works like a charm.
You can also make a function in Bash, add it to .bashrc or something and then use it wherever you want.
change-ext() {
    for file in *.$1; do mv "$file" "$(basename "$file" .$1).$2"; done
}
Usage:
change-ext css scss
Source of code in function: https://stackoverflow.com/a/1224786/6732111
Here is a solution using AWK. Make sure the files are present in the working directory; otherwise, cd to the directory where the html files are located and then execute the command below:
for i in $(ls | grep .html); do j=$(echo $i | grep -oh "^\w*." | awk '{print $1"txt"}'); mv $i $j; done
I wrote this code in my .bashrc
alias find-ext='read -p "Path (dot for current): " p_path; read -p "Ext (unpunctured): " p_ext1; find $p_path -type f -name "*."$p_ext1'
alias rename-ext='read -p "Path (dot for current): " p_path; read -p "Ext (unpunctured): " p_ext1; read -p "Change by ext. (unpunctured): " p_ext2; echo -en "\nFound files:\n"; find $p_path -type f -name "*.$p_ext1"; find $p_path -type f -name "*.$p_ext1" -exec sh -c '\''mv "$1" "${1%.'\''$p_ext1'\''}.'\''$p_ext2'\''" '\'' _ {} \;; echo -en "\nChanged Files:\n"; find $p_path -type f -name "*.$p_ext2";'
In a folder like "/home/<user>/example-files" having this structure:
/home/<user>/example-files:
file1.txt
file2.txt
file3.pdf
file4.csv
The commands would behave like this:
~$ find-ext
Path (dot for current): example-files/
Ext (unpunctured): txt
example-files/file1.txt
example-files/file2.txt
~$ rename-ext
Path (dot for current): ./example-files
Ext (unpunctured): txt
Change by ext. (unpunctured): mp3
Found files:
./example-files/file1.txt
./example-files/file2.txt
Changed Files:
./example-files/file1.mp3
./example-files/file2.mp3
~$
You could use a tool designed for renaming files in bulk, e.g. renamer.
To rename all file extensions in the current folder:
$ renamer --find ".html" --replace ".txt" --dry-run *
Many more usage examples here.
