Currently my program takes a command line argument that is a reference to a directory we will be looking in. From there I must create a list of all files within this directory and any subsequent subdirectories, remove their path names (i.e. home/admin/3000/assignment.txt will become assignment.txt) and sort the files by size. Ok, that part is done:
find $location -type f -ls | sort -r -n -k7 | sed 's#.*/##'
gives me my sorted list,
Now I have to prompt the user to ask whether they would like to delete each file of size 0. Any tips on how to do this would be much appreciated.
You could use the -size option of find to locate zero-byte files. Use the rm -i option if a prompt before deleting is required.
find $location -type f -size 0 -exec rm -i {} \;
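If the assignment wants you to write the prompt yourself rather than delegating it to rm -i, here is a minimal sketch (assuming bash; it reads the answer from the terminal, since stdin is occupied by the pipe):

find "$location" -type f -size 0 -print0 |
while IFS= read -r -d '' f; do
    # Ask per file; anything other than y/Y keeps the file.
    read -r -p "Delete empty file '$f'? [y/N] " answer </dev/tty
    [[ $answer == [Yy]* ]] && rm -- "$f"
done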
I don't really know what I am supposed to do with this:
For each file in the /etc directory whose name starts with o or l and whose second letter is t or r, display its name, size and type ('file'/'directory'/'link'). Use: a wildcard, a for loop and a conditional statement for the type.
#!/bin/bash
etc_dir=$(ls -a /etc/ | grep '^o|^l|^.t|^.r')
for file in $etc_dir
do
    stat -c '%s-%n' "$file"
done
I was thinking about something like that, but I have to use an if statement.
You may reach the goal by using the find command.
This will search through all subdirectories.
#!/bin/bash
_dir='/etc'
find "${_dir}" -name "[ol][tr]*" -exec stat -c '%s-%n' {} \; 2>/dev/null
To control the search in subdirectories, you may use the -maxdepth flag; for example, the command below will only look at the file and directory names directly in the /etc dir and will not descend into subdirectories.
#!/bin/bash
_dir='/etc'
find "${_dir}" -maxdepth 1 -name "[ol][tr]*" -exec stat -c '%s-%n' {} \; 2>/dev/null
You may also use the -type f or -type d parameter to restrict the results to only files or only directories, respectively (if needed).
#!/bin/bash
_dir='/etc'
find "${_dir}" -name "[ol][tr]*" -type f -exec stat -c '%s-%n' {} \; 2>/dev/null
Update #1
As requested in the comments, this is the long way round, but it uses a for loop and an if statement.
Note: I'd strongly recommend reviewing and practicing the commands used in this script instead of just copy-pasting them to get the score ;)
#!/bin/bash
# Set the main directory path.
_mainDir='/etc'
# This will find all files in ${_mainDir} (ignoring any errors) and assign their paths to the $_files variable.
_files=$(find "${_mainDir}" 2>/dev/null)
# In this for loop we will:
# - loop over all files,
# - extract the bare filename from the full path,
# - and IF the bare filename matches the pattern, run & output the `stat` command on that file.
# (The unquoted ${_files} relies on word splitting, so this assumes paths without spaces.)
for _file in ${_files} ;do
    _fileName=$(basename "${_file}")
    if [[ "${_fileName}" =~ ^[ol][tr].* ]] ;then
        stat -c 'Size: %s , Name: %n' "${_file}"
    fi
done
exit 0
You should break down your problem into multiple pieces and tackle them one by one.
First, try to build an expression that finds the right files. If you were to execute your regex in a shell:
ls -a /etc/ | grep '^o|^l|^.t|^.r'
You would immediately see that you don't get the right output. So the first step would be to understand how grep works and fix the expression (in a regex, * means 'zero or more of the previous', so the second letter needs a plain character class) to:
ls -a /etc/ | grep '^[ol][tr]'
Then, you have the file name, and you need the size and a textual file type. The size is easy to obtain using a stat call.
But you soon realize that stat cannot print the type as the exact words 'file'/'directory'/'link', so you probably have to use an if clause with tests such as -f, -d and -h to present that.
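Putting those pieces together, here is a minimal sketch under those constraints (the exact output format is my assumption, not the assignment's):

#!/bin/bash
# Glob for names starting with o or l whose second letter is t or r,
# then use stat for the size and an if/elif chain for the textual type.
for f in /etc/[ol][tr]*; do
    [[ -e "$f" || -L "$f" ]] || continue   # skip the literal pattern if nothing matched
    size=$(stat -c '%s' "$f")
    if [[ -L "$f" ]]; then                 # check links first: -d follows symlinks
        type='link'
    elif [[ -d "$f" ]]; then
        type='directory'
    else
        type='file'
    fi
    echo "$f $size $type"
done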
How about this:
shopt -s extglob
ls -dp /etc/@(o|l)@(t|r)* | grep -v '/$'
Explanation:
shopt -s extglob - enable extended globbing (https://www.google.com/search?q=bash+extglob)
ls -d - list directories names, not their content
ls -dp - and add / at the end of each directory name
@(o|l)@(t|r) - o or l exactly once, then t or r exactly once
grep -v '/$' - remove all lines containing / at the end
Of course, Vab's find solution is better than this ls:
find /etc -maxdepth 1 -name "[ol][tr]*" -type f -exec stat {} \;
I have a list of 577 image files that I need to search for on a large server. I am no expert when it comes to bash so the best I could do myself was 577 lines of this:
find /source/directory -type f -iname "alternate1_1052956.tif" -exec cp {} /dest/directory \;
...repeating this line for each file name. It works... but it's unbelievably slow because it searches the entire server for one file and then moves on to the next line, but each search could take 20 minutes. I left this overnight and it only found 29 of them by the morning which is just way too slow. It could take two weeks at that rate to find all of these.
I've tried separating each line with -o as an OR separator in the hopes that it would search once for 577 files but I can't get it to work.
Does anyone have any suggestions? I also tried using the .txt file I have of the file names as a basis for the search but couldn't get that to work either. Unfortunately I don't have the paths for these files, only the basenames.
If you want to copy all .tif files
find /source/directory -type f -name "*.tif" -exec cp {} /dest/directory \;
On MacOS, use the mdfind command, which looks the filename up in the Spotlight index. This is very fast, as it is only an index lookup, just like the locate command on Linux:
cp $(mdfind alternate1_1052956.tif) /dest/directory
If you have all the filenames in a file (one line per file), use xargs. Note that the mdfind lookup has to run in a per-file shell; a bare cp $(mdfind {}) would expand the command substitution before xargs even runs:
xargs -L 1 -I {} sh -c 'cp $(mdfind {}) /dest/directory' < file_with_list
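Alternatively, a plain loop sketch; mdfind -name restricts the Spotlight query to the file name (a bare query also matches file contents):

while IFS= read -r name; do
    src=$(mdfind -name "$name" | head -n 1)   # take the first Spotlight hit
    [ -n "$src" ] && cp "$src" /dest/directory
done < file_with_list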
Create a file with all filenames, then write a loop that runs through that file and executes the command in the background.
Note that this will launch one find per filename, all running simultaneously, which can take a lot of memory and I/O. So make sure you have enough resources for this.
while read -r line; do
    find /source/directory -type f -iname "$line" -exec cp {} /dest/directory \; &
done < input.file
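If you like the parallel approach but want to cap the number of concurrent searches, here is a sketch assuming GNU xargs and GNU cp (the placeholder NAME is used so it doesn't collide with find's own {}):

# Run at most 4 finds at a time, one per line of input.file.
xargs -a input.file -d '\n' -I NAME -P 4 \
    find /source/directory -type f -iname NAME -exec cp -t /dest/directory {} \;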
There are a few assumptions made in this answer: you have a list of all 577 file names, let's call it inputfile.list, and there are no whitespace characters in the file names. The following may work:
$ cat findcopy.sh
#!/bin/bash
cmd=$(
echo -n 'find /path/to/directory -type f \( '
readarray -t filearr < inputfile.list   # Read the list into an array
n=0
for f in "${filearr[@]}"                # Loop over the array and print -iname
do
    (( n > 0 )) && echo "-o -iname ${f}" || echo "-iname ${f}"
    ((n++))
done
echo -n ' \) | xargs -I {} cp {} /path/to/destination/'
)
eval $cmd
execute: ./findcopy.sh
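For illustration, with a three-name list (hypothetical names), the generated command comes out roughly as:
find /path/to/directory -type f \( -iname a.tif -o -iname b.tif -o -iname c.tif \) | xargs -I {} cp {} /path/to/destination/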
A note for MacOS: its bash doesn't have readarray. Instead, use any other simple method to read the list into an array, for example,
filearr=($(cat inputfile.list))
Until now when I want to gather files from a list I have been using a list that contains full paths and using:
cat pathlist.txt | xargs -I % cp % folder
However, I would like to be able to recursively search through a folder and its subfolders and copy all files that are in a plain-text list of just filenames (not full paths).
How would I go about doing this?
Thanks!
Assuming your list of file names contains bare file names, as would be suitable for passing as an argument to find -name, you can do just that.
sed 's/^/-name /;1!s/^/-o /' pathlist.txt |
xargs sh -c 'find folders to search -type f \( "$@" \) -exec cp -t folder {} +' find-sh
If your cp doesn't support the -t option for specifying the destination folder before the sources, or your find doesn't support -exec ... {} +, you will need to adapt this.
Just to explain what's going on here, the input
test.txt
radish.avi
:
is being interpolated into
find folders to search -type f \( -name test.txt -o -name radish.avi \
    -o -name : \) -exec cp -t folder {} +
Try something like
find folder_to_search -type f | grep -f pattern_file | xargs -I % cp % folder
Use the find command.
while read -r line
do
    find /path/to/search/for -type f -name "$line" -exec cp -R {} /path/to/copy/to \;
done <plain_text_file_containing_file_names
Assumption:
The files in the list have standard names, without, say, newlines or special characters in them.
Note:
If the files in the list have non-standard filenames, it will be a different ballgame. For more information, see the find manpage and look for -print0. In short, you should then be operating with null-terminated strings.
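For example, here is a sketch of the same loop made robust for odd names found on disk (bash is required for read -d ''); the list file itself is still assumed to contain one sane name per line:

while IFS= read -r line; do
    # -print0 / read -d '' pass each found path as a null-terminated string.
    find /path/to/search/for -type f -name "$line" -print0 |
        while IFS= read -r -d '' f; do
            cp -R "$f" /path/to/copy/to
        done
done < plain_text_file_containing_file_names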
I want to move all the files with a size of 0 bytes in a specific folder. I know that the following prints all the files with size zero bytes:
find /home/Desktop/ -size 0
But I want to move them to another folder, so I tried:
find /home/Desktop/ -size 0 | xargs -0 mv /home/Desktop/a
But that doesn't work. Is there any other way to do it? What am I doing wrong?
You can do that in find itself using the -exec option:
find /home/Desktop/ -size 0 -exec mv '{}' /home/Desktop/a \;
By default, find prints each file name on standard output followed by a newline. The option -print0 prints the file name followed by a null character instead. The option -0 of xargs means that the input items are terminated by a null character.
find /home/Desktop/ -size 0 -print0 | xargs -0 -I {} mv {} /home/Desktop/a
You could instead use find's option -exec
In both cases, consider also using find's option -type f if you only want to find regular files, and the option -maxdepth 1 if you do not want find to descend into directories. This is especially useful in your example, since you move the found files to a subdirectory!
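Putting those suggestions together, a sketch restricted to regular files directly in the folder (so that files already moved into the destination subdirectory are not matched again):

find /home/Desktop/ -maxdepth 1 -type f -size 0 -exec mv {} /home/Desktop/a \;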
I have a command that copies files from one dir to another:
FILE_COLLECTOR_PATH="/var/www/";
FILE_BACKUP_PATH='/home/'
ls $FILE_COLLECTOR_PATH | head -${1} | xargs -i basename {} | xargs -t -i cp $FILE_COLLECTOR_PATH{} "${FILE_BACKUP_PATH}{}-`date +%F%H%M%S%N`"
I loop it in a shell script like this:
#!/bin/sh
SLEEP=120
FILE_COLLECTOR_PATH="/var/www/";
FILE_BACKUP_PATH='/home/'
while true
do
    ls $FILE_COLLECTOR_PATH | head -${1} | xargs -i basename {} | xargs -t -i cp $FILE_COLLECTOR_PATH{} "${FILE_BACKUP_PATH}{}-`date +%F%H%M%S%N`"
    sleep ${SLEEP}
done
But it seems to move only 10 files, not all the files in the dir. Why? It is supposed to move all files.
In general, don't try to parse the output of ls in a script. You can end up with many different types of subtle problems. There is almost always a better tool for the job. Many times, this tool is find. For example, to generate a list of all of the files in a directory and do something to each of them, you would do something like this:
find <search directory> -maxdepth 1 -type f -print0 | xargs -0i basename {} ...
The -print0 and -0 arguments allow find and xargs to communicate filenames in a way that handles special characters (like spaces) correctly.
The find command has other options that you may find useful in a backup script (which is what it appears you are building). Options like -mmin and -newer will enable you to only back up files that have changed since the last iteration.
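For example, here is a sketch of the loop from the question rewritten on top of find (bash, since read -d '' is a bashism). The paths and interval are the ones from the question; head -${1} is dropped, since with $1 empty it degenerates to head - (first 10 lines of stdin), which is likely what limited each pass to 10 files:

#!/bin/bash
SLEEP=120
FILE_COLLECTOR_PATH="/var/www/"
FILE_BACKUP_PATH='/home/'

while true
do
    # One file per null-terminated record, so odd names survive intact.
    find "$FILE_COLLECTOR_PATH" -maxdepth 1 -type f -print0 |
        while IFS= read -r -d '' f; do
            cp "$f" "${FILE_BACKUP_PATH}$(basename "$f")-$(date +%F%H%M%S%N)"
        done
    sleep "$SLEEP"
done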
Try doing
ls -1
instead of just ls, because on a terminal ls by default doesn't display one file per line (head expects one name per line), while ls -1 does.