Makefile - Deleting subdirectories of a directory

I am a complete Makefile newb, so I'm wondering how I would loop through a specific directory and remove specific subdirectories.
Say I have:
[app]
|
| - [src]
| |
| |- [foo]
| |
| |- [foo-bar]
| |
| |- [bar]
| |
| |- [baz]
|
How would I loop through src and:
remove foo
remove foo-bar
leave bar
remove baz
I actually have about 8 specific folders I wish to delete from src.
Any help is appreciated

Use rm -rf (rm -f only removes files, not directories), make the paths relative to the Makefile (src/, not /src/), and remember that each recipe line must start with a tab:

clean:
	rm -rf src/foo
	rm -rf src/foo-bar
	rm -rf src/baz
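Since you have about eight folders to remove, listing them in a variable keeps the rule short. A minimal sketch, assuming GNU make, with foo, foo-bar and baz standing in for your real folder names:

# Subdirectories of src/ to delete; replace with your real folder names.
CLEAN_DIRS := foo foo-bar baz

.PHONY: clean
clean:
	rm -rf $(addprefix src/,$(CLEAN_DIRS))

$(addprefix src/,$(CLEAN_DIRS)) expands to src/foo src/foo-bar src/baz, so a single rm -rf removes them all, and src/bar is left alone because it is not in the list.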

Related

Moving all files from subfolders to main folders with duplicate file names

I've been trying to write a little script to sort image files on my Linux server.
I have tried multiple solutions found all over Stack Exchange, but none of them meets my requirements.
Explanation:
Each photo_folder is filled with images (various extensions).
Mostly, images are already in this folder.
But sometimes, as in the example below, images are hidden in one or more photo_subfolder directories, and the file names are often the same (1.jpg, 2.jpg, ...) in each of them.
Basically, I would like to move all image files from each photo_subfolder up into its photo_folder, renaming any duplicate filenames before merging them together.
Example:
|parent_folder
| |photo_folder
| | |photo_subfolder1
| | | 1.jpg
| | | 2.jpg
| | | 3.jpg
| | |photo_subfolder2
| | | 1.jpg
| | | 2.jpg
| | | 3.jpg
| | |photo_subfolder3
| | | 1.jpg
| | | 2.jpg
| | | 3.jpg
Expectation:
|parent_folder
| |photo_folder
| | 1_a.jpg
| | 2_a.jpg
| | 3_a.jpg
| | 1_b.jpg
| | 2_b.jpg
| | 3_b.jpg
| | 1_c.jpg
| | 2_c.jpg
| | 3_c.jpg
Note that the file names are just an example; they could be anything.
Thank you!
You can replace the / of the subdirectory path with another character, e.g. _, and then cp/mv the original file to the parent directory.
I have tried to recreate an example of your directory tree here. It is very simple, but I hope it can be adapted to your case. Note that I am using bash.
#!/bin/bash
bd=parent
mkdir "${bd}"
for i in $(seq 3); do
    mkdir -p "${bd}/photoset_${i}/subset_${i}"
    for j in $(seq 5); do
        touch "${bd}/photoset_${i}/${j}.jpg"
        touch "${bd}/photoset_${i}/${j}.png"
        touch "${bd}/photoset_${i}/subset_${i}/${j}.jpg"
        touch "${bd}/photoset_${i}/subset_${i}/${j}.gif"
    done
done
Here is the script that will cp the files from the subdirectories to the parent directory. Basically:
find all the files recursively in the subdirectories and loop over them
use sed to replace / with _ and store this in a variable new_filepath (removing the initial parent_ as well is optional; see below)
copy (or move) the old filepath into parent with the filename new_filepath
# For each extension, find the matching files under ${bd} and copy them
# to the top of ${bd}, with / in the path replaced by _.
for xtension in jpg png gif; do
    while IFS= read -r -d '' filepath; do
        new_filepath=$(echo "${filepath}" | sed 's#/#_#g')   # e.g. parent/photoset_1/1.jpg -> parent_photoset_1_1.jpg
        cp "${filepath}" "${bd}/${new_filepath}"
    done < <(find "${bd}" -type f -name "*.${xtension}" -print0)
done
ls "${bd}"
If you also want to remove the extra parent_ prefix from new_filepath, you can replace the new_filepath assignment above with:
new_filepath=$(echo "${filepath}" | sed 's#/#_#g' | sed "s/^${bd}_//")
I assumed that you list all the possible extensions in the script. Otherwise, to find all the extensions in the directory tree, you can use the following snippet from a previous answer:
find . -type f -name '*.*' | sed 's|.*\.||' | sort -u
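Adapting the same idea to the layout in the question (photo_folder containing photo_subfolder1, photo_subfolder2, ...), here is a minimal sketch; it appends the subfolder name instead of the _a/_b suffixes from the expected output, which is an assumption on my part, and mv -n is used so nothing is silently overwritten:

#!/bin/bash
# Move every file from photo_folder's subfolders up into photo_folder,
# appending the subfolder name so duplicates such as 1.jpg do not collide.
cd parent_folder/photo_folder || exit 1
for sub in */; do
    sub=${sub%/}                         # e.g. photo_subfolder1
    for f in "$sub"/*; do
        [ -f "$f" ] || continue          # skip if the glob matched nothing
        name=${f##*/}                    # e.g. 1.jpg
        mv -n "$f" "${name%.*}_${sub}.${name##*.}"   # -> 1_photo_subfolder1.jpg
    done
    rmdir "$sub" 2>/dev/null             # remove the subfolder once it is empty
done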

Find all directories that contain only hidden files and/or hidden directories

Issue
I have been struggling with writing a Bash command that is able to recursively search a directory and then return the paths of every sub-directory (up to a certain max-depth) that contains exclusively hidden files and/or hidden directories.
Visual Explanation
Consider the following File System excerpt:
+--- Root_Dir
| +--- Dir_A
| | +--- abc.txt
| | +--- 123.txt
| | +--- .hiddenfile
| | +--- .hidden_dir
| | | +--- normal_sub_file_1.txt
| | | +--- .hidden_sub_file_1.txt
| |
| +--- Dir_B
| | +--- abc.txt
| | +--- .hidden_dir
| | | +--- normal_sub_file_2.txt
| | | +--- .hidden_sub_file_2.txt
| |
| +--- Dir_C
| | +--- 123.txt
| | +--- program.c
| | +--- a.out
| | +--- .hiddenfile
| |
| +--- Dir_D
| | +--- .hiddenfile
| | +--- .another_hiddenfile
| |
| +--- Dir_E
| | +--- .hiddenfile
| | +--- .hidden_dir
| | | +--- normal_sub_file_3.txt # This is OK because it's within a hidden directory, i.e. it won't be checked
| | | +--- .hidden_sub_file_3.txt
| |
| +--- Dir_F
| | +--- .hidden_dir
| | | +--- normal_sub_file_4.txt
| | | +--- .hidden_sub_file_4.txt
Desired Output
The command I am looking for would output
./Root_Dir/Dir_D
./Root_Dir/Dir_E
./Root_Dir/Dir_F
Dir_D because it only contains hidden files.
Dir_E because it only contains a hidden file and a hidden directory at the level I am searching.
Dir_F because it only contains a hidden directory at the level I am searching.
Attempts
I have attempted to use the find command to get the results I am looking for but I can't seem to figure out what other command I need to pipe the output to or what other options I should be using.
I think the command that will work would look something like this:
$ find ./Root_Dir -mindepth 2 -maxdepth 2 -type d -name "*." -type -f -name "*." | command to see if these are the only files in that directory
Parsing find's output is not a good idea; -exec exists, and sh can do the filtering without breaking anything.
find . -type d -exec sh -c '
    for d; do
        # Skip directories that contain any visible entry.
        for f in "$d"/*; do
            test -e "$f" &&
                continue 2
        done
        # Print directories that contain at least one hidden entry.
        for f in "$d"/.[!.]* "$d"/..?*; do
            if test -e "$f"; then
                printf %s\\n "$d"
                break
            fi
        done
    done' sh {} +
You can adjust the depth using whatever extension your find provides for it.
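For example, with the -mindepth/-maxdepth extensions (available in GNU and BSD find), limiting the search to the directories directly under Root_Dir might look like this sketch:

find ./Root_Dir -mindepth 1 -maxdepth 1 -type d -exec sh -c '
    for d; do
        # Skip directories that contain any visible entry.
        for f in "$d"/*; do
            test -e "$f" && continue 2
        done
        # Print directories that contain at least one hidden entry.
        for f in "$d"/.[!.]* "$d"/..?*; do
            test -e "$f" && { printf %s\\n "$d"; break; }
        done
    done' sh {} +

On the tree above this prints ./Root_Dir/Dir_D, ./Root_Dir/Dir_E and ./Root_Dir/Dir_F.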
Assuming this structure, you don't need find.
Adjust your pattern as needed.
for d in $ROOT_DIR/Dir_?/; do
    lst=( $d* );  [[ -e "${lst[0]}" ]] && continue # normal files, skip
    lst=( $d.* ); [[ -e "${lst[2]}" ]] || continue # NO hidden, so skip
    echo "$d"
done
I rebuilt your file structure in my /tmp dir and saved this as tst, so
$: ROOT_DIR=/tmp ./tst
/tmp/Dir_D/
/tmp/Dir_E/
/tmp/Dir_F/
Note that the confirmation of hidden files uses "${lst[2]}" because the first 2 will always be . and .., which don't count.
You could probably use for d in $ROOT_DIR/*/ instead.
I suspect this'll do for you. (mindepth=2, maxdepth=2)
If you needed deeper subdirectories (mindepth=3, maxdepth=3) you could add a level -
for d in $ROOT_DIR/*/*/
and/or both (mindepth=2, maxdepth=3)
for d in $ROOT_DIR/*/ $ROOT_DIR/*/*/
or if you didn't want a mindepth/maxdepth,
shopt -s globstar
for d in $ROOT_DIR/**/
This will work if your filenames contain no newlines.
find -name '.*' | awk -F/ -v OFS=/ '{ --NF } !a[$0]++'
Learn awk: https://www.gnu.org/software/gawk/manual/gawk.html

bash get dirname from urls.txt

$ cat urls.txt
/var/www/example.com.com/upload/email/email-inliner.html
/var/www/example.com.com/upload/email/email.html
/var/www/example.com.com/upload/email/email2-inliner.html
/var/www/example.com.com/upload/email/email2.html
/var/www/example.com.com/upload/email/AquaTrainingBag.png
/var/www/example.com.com/upload/email/fitex/fitex-ecr7.jpg
/var/www/example.com.com/upload/email/fitex/fitex-ect7.jpg
/var/www/example.com.com/upload/email/fitex/fitex-ecu7.jpg
/var/www/example.com.com/upload/email/fitex/fitex.html
/var/www/example.com.com/upload/email/fitex/logo.png
/var/www/example.com.com/upload/email/fitex/form.html
/var/www/example.com.com/upload/email/fitex/fitex.txt
/var/www/example.com.com/upload/email/bigsale.html
/var/www/example.com.com/upload/email/logo.png
/var/www/example.com.com/upload/email/bigsale.png
/var/www/example.com.com/upload/email/bigsale-shop.html
/var/www/example.com.com/upload/email/bigsale.txt
Can anyone help me get the dirname for each of these?
dirname /var/www/example.com.com/upload/email/sss.png works fine, but what about a list of URLs?
Is it possible to achieve this without any form of loop (for or while)? The number of URLs can be more than several tens of millions, so ideally the output would be written straight to a file via redirection (tee).
As always when it boils down to things like this, Awk comes to the rescue:
awk 'BEGIN{FS=OFS="/"}{NF--}1' <file>
Be aware that this is an extremely simplified version of dirname, not a complete, identical implementation, but it will work for most cases. A correct version, which covers all the cases, is:
awk 'BEGIN{FS=OFS="/"}{gsub("/+","/")}
{s=$0~/^\//;NF-=$NF?1:2;$0=$0?$0:(s?"/":".")};1' <file>
The following table shows the difference:
| path       | dirname | awk full | awk short |
|------------+---------+----------+-----------|
| .          | .       | .        |           |
| /          | /       | /        |           |
| foo        | .       | .        |           |
| foo/       | .       | .        | foo       |
| foo/bar    | foo     | foo      | foo       |
| foo/bar/   | foo     | foo      | foo/bar   |
| /foo       | /       | /        |           |
| /foo/      | /       | /        | /foo      |
| /foo/bar   | /foo    | /foo     | /foo      |
| /foo/bar/  | /foo    | /foo     | /foo/bar  |
| /foo///bar | /foo    | /foo     | /foo//    |
Note: various alternative solutions can be found in Extracting directory name from an absolute path using sed or awk. The solutions of Kent will all work; the solution of Solid Kim just needs a tiny tweak to fix the multiple slashes (and misses upvotes!).
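To process the whole list in one pass and also keep the result in a file, as the question asks, something like this sketch should do; dirs.txt is just an assumed name for the output file:

# One awk pass over urls.txt, no shell loop; tee writes dirs.txt
# and still shows the result on stdout.
awk 'BEGIN{FS=OFS="/"}{NF--}1' urls.txt | tee dirs.txt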

How to check if a folder has any tab delimited file in it?

I am trying to search for all the tab-delimited files in one folder, and if any are found, I need to transfer all of them to another folder using bash.
In my code, I am currently trying to find all files, but somehow it is not working.
Here is my code:
>nul 2>nul dir /a-d "folderName\*" && (echo Files exist) || (echo No file found)
Thanks in advance :)
For a simple move (or copy -- replace mv with cp) of files, @tripleee's answer is sufficient. To recursively search for files and run a command on each, find comes in handy.
Example:
find <src> -type f -name '*.tsv' -exec cp {} <dst> \;
Where <src> is the directory to copy from, and <dst> is the directory to copy to. Note that this searches recursively, so any files with duplicate names will cause overwrites. You can pass -i to cp to have it prompt before overwriting:
find <src> -type f -name '*.tsv' -exec cp -i {} <dst> \;
Explained:
find <src> -type f -name '*.tsv' -exec cp -i {} <dst> \;

find      -- the find command, useful for searching for files
<src>     -- location to search
-type     -- flag for specifying the file type
f         -- 'f' for a regular file (as opposed to e.g. 'd' for directory)
-name     -- flag for specifying the file name pattern
'*.tsv'   -- pattern of files to match
-exec     -- flag for executing a command (cp in this case)
cp        -- the copy command
-i        -- prompt before overwriting
{}        -- the path of each file found
<dst>     -- destination directory
\         -- escape for the terminator
;         -- terminator
To get a feel for what happens without actually having find run the real command, you can prefix it with echo to just print each command instead of running it:
find <src> -type f -name '*.tsv' -exec echo cp -i {} <dst> \;
Your attempt has very little valid Bash script in it.
mv foldername/*.tsv otherfolder/
There will be an error message if there are no matching files.
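If there may be no matching files and you want to avoid that error message, a small guard is possible; this is only a sketch, using the bash builtin compgen and the same hypothetical folder names:

# compgen -G succeeds only if the glob matches at least one existing file.
if compgen -G 'foldername/*.tsv' > /dev/null; then
    mv foldername/*.tsv otherfolder/
else
    echo "No file found"
fi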
"it is not working". That means very little on stackoverflow.
Let's first examine what you've done:
>nul 2>nul dir /a-d "folderName\*"
So, you're doing a dir (most Linux users would use ls, but so be it) on
/a-d
everything under folderName
and the output is in the file nul. For debugging purposes, it would be good to see what is in nul (do cat nul). I would bet it is something like:
dir: cannot access '/a-d': No such file or directory
dir: cannot access 'folderName\*': No such file or directory
That means that dir exits with an error. So, echo No file found will be executed.
This means that your output is probably
No file found
Which is exactly as expected.
In your code, you said you want to find all files. That means you want the output of
ls folderName
or
find folderName
if you want to do things recursively. Because find has been explained above by jsageryd, I won't elaborate on that.
If you just want to look in that specific directory, you might do:
if dir folderName/*.tsv > /dev/null 2>&1 ; then
    echo "Files exist"
else
    echo "No file found"
fi
and go from there.
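If "tab delimited" refers to the file contents rather than a .tsv extension, a content check is an alternative; this sketch is my own assumption (not part of the answers above) and moves every regular file in folderName that contains at least one tab character:

# $'\t' is bash ANSI-C quoting for a literal tab character.
for f in folderName/*; do
    [ -f "$f" ] || continue
    grep -q $'\t' "$f" && mv "$f" otherfolder/
done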

How to create patch only for modified files with xDelta?

For example, I have files in three folders:
+----------------+-------------------------------+---------+
| Folder1        | Folder2                       | Patches |
+----------------+-------------------------------+---------+
| - OldFile1.bin | - OldFile1.bin (Not Modified) |         |
| - OldFile2.bin | - NewFile2.bin (Modified)     |         |
| - OldFile3.bin | - OldFile3.bin (Not Modified) |         |
| - ........     | - .........                   |         |
| - OldFileN.bin | - OldFileN.bin (Modified)     |         |
+----------------+-------------------------------+---------+
Then I want to create patches for all modified files using xDelta:
FOR %%P in (Folder1\*.bin) do (
call xdelta.exe -9 -e -s "Folder1\%%~nP" "Folder2\%%~nP" "Patches\%%~nP.xdelta"
)
In the Patches folder I get a 0 KB patch file for every file from Folder1 that wasn't modified. How do I skip them?
The docs say nothing about hashing or any other way of comparing files.
You can test whether the file was modified before generating the patch. fc /b compares the two files byte by byte and sets a nonzero errorlevel when they differ, so the || runs xdelta.exe only for the modified files:
FOR %%P in ("Folder1\*.bin") do (
fc /b "Folder1\%%~nxP" "Folder2\%%~nxP" >nul 2>&1 || xdelta.exe -9 -e -s "Folder1\%%~nxP" "Folder2\%%~nxP" "Patches\%%~nP.xdelta"
)
