I have a zip file named agent-20.1.80.8366.zip. When I extract it, I get an extra directory nested inside the parent directory, like this:
agent-20.1.80.8366/agent-20.1.80.8366/files
I would like to move the files from the child directory into its parent directory, remove the then-empty child directory, and then pass that path along as a variable.
Could someone help with a bash snippet that strictly validates the layout before proceeding, and then stores the parent path in a variable?
The expected result is:
agent-20.1.80.8366/files
$ mv agent-20.1.80.8366/agent-20.1.80.8366/files agent-20.1.80.8366/files
$ rmdir agent-20.1.80.8366/agent-20.1.80.8366
This will fail if agent-20.1.80.8366/agent-20.1.80.8366 is not empty, which in my opinion is a good thing, since you are assuming it will be empty once files has been moved.
Alternatively, use find to first move the files up one level:
find agent-20.1.80.8366/agent-20.1.80.8366 -type f -execdir mv -f '{}' .. \;
Then use find to remove the remaining directory:
find agent-20.1.80.8366 -type d -name "agent-20.1.80.8366" -delete
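Putting the strict validation the question asks for into one snippet, a sketch (the function name flatten_agent_dir and the exact layout checks are illustrative, not part of the answers above):

```shell
#!/usr/bin/env bash
# Flatten the nested "agent-x/agent-x" layout from the question: move the
# files directory up one level, remove the duplicate directory, and print
# the resulting path so it can be captured in a variable.
flatten_agent_dir() {
    local parent=$1
    local child="$parent/$(basename "$parent")"

    # Strict validation: refuse to touch anything unless the layout matches.
    [ -d "$parent" ]      || { echo "no such directory: $parent" >&2; return 1; }
    [ -d "$child" ]       || { echo "no nested copy inside $parent" >&2; return 1; }
    [ -e "$child/files" ] || { echo "no files entry inside $child" >&2; return 1; }

    mv "$child/files" "$parent/" &&
    rmdir "$child" &&                   # fails if the child is not empty: good
    printf '%s\n' "$parent/files"
}

# Usage: files_path=$(flatten_agent_dir agent-20.1.80.8366)
```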
I am trying to use the find command to gather files with a variable as part of the name and move them to another directory. The variable is a loop variable iterating over an array, like so:
for i in "${array[@]}"; do
find -name "${i}r.TXT" -a -name "${i}f.TXT" -execdir mv '{}' logs/ \;
done
The directory I am trying to move them to is a subdirectory of my current working directory, named logs. What is the correct way to integrate the variable into the filename so that find will grab the correct files and move them to the logs directory?
The elements of the array are integers, like 50, 55, 60, 65, so on and I want files that are 50f, 50r, 55f, 55r, etc.
As pointed out in the comments, -a was the problem here. find -name 1 -a -name 2 is the same as find -name 1 -name 2 and will never print anything, because the name cannot be 1 and 2 at the same time. You could use -o instead, but there is a simpler solution:
Use find -name "${i}[fr].TXT" to find the files 50f.TXT and 50r.TXT when i=50.
Please note Gordon Davisson's comment:
find searches subdirectories (and subsubdirectories and ...), so it may find the files in the current directory, move them down into logs/, then find them again in logs/, move them down into logs/logs/ (or at least try to), etc
To prevent this, you can exclude directories named logs by adding -name logs -prune -o as the first arguments to find.
However, if all your files are in the current working directory you don't need find at all.
Globs can do the same:
shopt -s failglob
for i in "${array[@]}"; do
mv "${i}"[fr].TXT logs/
done
failglob makes the shell print an error message if no matching file is found. You can leave it out, but then the unmatched pattern is passed to mv literally and mv prints an error message about the nonexistent file instead.
I want to copy all files with specific extensions recursively in bash.
Edit:
I've written the full script. I have a list of names in a CSV file. I iterate through each name in that list, create a directory with the same name somewhere else, and then search my source directory for the directory with that name. Inside it there are a few files ending in xlsx, tsv, html, and gz, and I'm trying to copy all of them into the newly created directory.
sample_list_filepath=/home/lists/papers
destination_path=/home/ds/samples
source_directories_path=/home/papers_final/new
cat $sample_list_filepath/sample_list.csv | while read line
do
echo $line
cd $source_directories_path/$line
cp -r *.{tsv,xlsx,html,gz} $source_directories_path/$line $destination_path
done
This works, but it copies all the files there, with no discrimination for specific extension.
What is the problem?
An easy way to solve your problem is to use find with a regex:
find src/ -regex '.*\.\(tsv\|xlsx\|gz\|html\)$' -exec cp {} dest/ \;
find looks recursively in the directory you specify (in my example it's src/), lets you filter with -regex, and applies a command to matching results with -exec.
For the regex part :
.*\.
matches the file name up to and including the dot before the extension,
\(tsv\|xlsx\|gz\|html\)$
checks the extension against those you want.
The -exec part is what you do with the files matched by the regex:
-exec cp {} dest/ \;
In this case, you copy each match ({} stands for the matched file) to the destination directory.
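Folded back into the loop from the question, one possible shape (a sketch: the three paths are the question's, and the ${VAR:-default} form is only there so you can override them when trying this out):

```shell
#!/usr/bin/env bash
# For each sample name in the CSV, create the destination directory and
# copy only the files with the wanted extensions from the source tree.
sample_list_filepath=${sample_list_filepath:-/home/lists/papers}
destination_path=${destination_path:-/home/ds/samples}
source_directories_path=${source_directories_path:-/home/papers_final/new}

while IFS= read -r line; do
    mkdir -p "$destination_path/$line"
    find "$source_directories_path/$line" -type f \
         -regex '.*\.\(tsv\|xlsx\|html\|gz\)$' \
         -exec cp {} "$destination_path/$line/" \;
done < "$sample_list_filepath/sample_list.csv"
```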
Part of a script I currently use is using "ls -FCRlhLoprt" to list every file inside of a root directory recursively to a text document. The problem is, every time I run the script, ls includes that document in its output, so the text document grows each time I run it. I believe I can use -I or --ignore, but how can I use that when ls is using a few variables? I keep getting errors:
ls "$lsopt" "$masroot"/ >> "$masroot"/"$client"_"$jobnum"_"$mas"_drive_contents.txt  # this works
If I try:
ls -FCRlhLoprt --ignore=""$masroot"/"$client"_"$jobnum"_"$mas"_drive_contents.txt"" "$masroot"/ >> "$masroot"/"$client"_"$jobnum"_"$mas"_drive_contents.txt #this does not work
I get errors. I basically want to not include the output back into the 2nd time I run this command.
Additional, all I am trying to do is create an easy to read document of every file inside of a directory recursively. If there is a better way, please let me know.
To list every file in a directory recursively, the find command does exactly what you want, and admits further programmatic manipulation of the files found if you wish.
Examples:
To list every file under the current directory, recursively:
find ./ -type f
To list files under /etc/ and /usr/share, showing their owners and permissions:
find /etc /usr/share -type f -printf "%-100p %#m %10u %10g\n"
To show line counts of all files recursively, but ignoring subdirectories of .git:
find ./ -type f ! -regex ".*\.git.*" -exec wc -l {} +
To search under $masroot but ignore files generated by past searches, and dump the results into a file:
find "$masroot" -type f ! -regex ".*/[a-zA-Z]+_[0-9]+_.+_drive_contents.txt" | tee "$masroot/${client}_${jobnum}_${mas}_drive_contents.txt"
(Some of that might be slightly different on a Mac. For more information see man find.)
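For completeness, the quoting in the original ls attempt can also be fixed: --ignore (-I) matches each entry's base name, not its full path, and the doubled quotes split the pattern argument into several words. A sketch, using the question's variable names:

```shell
# --ignore (-I) matches the entry's base name, so pass the filename only,
# and keep the whole expansion inside a single pair of double quotes.
ls -FCRlhLoprt --ignore="${client}_${jobnum}_${mas}_drive_contents.txt" \
   "$masroot"/ >> "$masroot/${client}_${jobnum}_${mas}_drive_contents.txt"
```

Note that --ignore is a GNU ls option; BSD/macOS ls does not have it, which is one more reason to prefer the find approach above.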
I have a directory structure like this: ABC/123, ABC/456, ABC/789, and in each of the numbered directories I have many files. What I want is to search, from the ABC directory, for all files named XYZ.txt in the numbered directories, and get their full paths into a variable or an array using a script.
You can try:
cd "ABC"
array=($(find "$PWD" -type f -name "XYZ.txt"))
Leave out the cd "ABC", otherwise "ABC/" will not be part of the output. find searches the current directory, which makes specifying $PWD unnecessary. Also, the restriction to files with -type f is not necessary if you match names with the .txt extension, provided there aren't any directories named like that.
array=($(find -name "*.txt"))
I am horrible at writing bash scripts, but I'm wondering if it's possible to recursively loop through a directory and rename all the files in it to "1.png", "2.png", etc., restarting at one for every new folder it enters. Here's a script that works, but only for one directory.
cd ./directory
cnt=1
for fname in *
do
mv "$fname" "${cnt}.png"
cnt=$(( $cnt + 1 ))
done
Thanks in advance
EDIT
Can anyone actually write this code out? I have no idea how to write bash, and it's very confusing to me
Using find is a great idea. You can use find with the following syntax to find all directories inside your directory and run your script on each one found:
find /directory -type d -exec youscript.sh {} \;
The -type d parameter means you want to find only directories.
-exec youscript.sh {} \; runs your script for every directory found and passes the directory name to it as a parameter.
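Since the edit asks for the code to be written out: here is a sketch of what the body of youscript.sh could look like (renumber_dir is an illustrative name; the counter and .png naming mirror the single-directory loop from the question):

```shell
#!/usr/bin/env bash
# Rename every regular file in the directory given as the first argument
# to 1.png, 2.png, ... The counter starts fresh on every call, which is
# exactly what happens when find invokes the script once per directory.
renumber_dir() {
    local dir=$1 cnt=1 fname
    for fname in "$dir"/*; do
        [ -f "$fname" ] || continue    # skip subdirectories and non-files
        mv "$fname" "$dir/$cnt.png"
        cnt=$((cnt + 1))
    done
}

# Inside youscript.sh you would simply call: renumber_dir "$1"
```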
Use find(1) to get a list of files, and then do whatever you like with that list.