Q: I am working with a script on a Unix platform and I need to find all the files in a directory that arrived within roughly the last 8 hours.
I am using the command below to retrieve the files matching that condition:
find . -name "*.dat" -mmin -480
But a few of the files have a special character sequence (double question mark, ??) in the file name itself, and with the above command a file with ?? in its name gets split into two parts on two lines.
for eg:
file name : aabb??cc.dat
after running the above command, the output looks like this:
$./aabb
$cc.dat
($ here is unix command prompt)
Can someone suggest a correction to the above command, or the right approach to handle this case?
This command will show you that find is treating these files just like the others:
find . -name "*.dat" -mmin -480 -exec ksh -c '
  c=1
  for file do
    printf "file #%d is \"%s\"\n" "$c" "$file"
    c=$((c+1))
  done' sh {} +
If find is showing some file names split across two lines, that's just because their names contain an embedded newline. That's odd, but they are still valid file names.
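If you need to process such names without tripping over the embedded newlines, one common approach is to delimit the records with NUL bytes instead of newlines. A minimal sketch, assuming GNU find (or any find with -print0) and bash for read -d '':

find . -name "*.dat" -mmin -480 -print0 |
while IFS= read -r -d '' file; do
    # %q quotes embedded newlines and other unusual characters visibly
    printf 'found: %q\n' "$file"
done

Note that the while loop runs in a subshell here, so variables set inside it do not survive past the pipeline.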
A bit lowly a query but here goes:
bash shell script. POSIX, Mint 21
I just want one/any (mp3) file from a directory. As a sample.
In normal execution, a full run, the code would be something like this:
for f in *.mp3; do
    #statements
done
This works fine, but if I wanted to sample just one file from such an array/glob (?) without looping, how might I do that? I don't care which file, just that it is an mp3 from the directory I am working in.
Should I just start this for loop and then exit (break) after one statement, or is there a neater, more tailored-for-the-job way?
for f in *.mp3; do
    #statement
    break
done
Ta (can not believe how dopey I feel asking this one, my forehead will hurt when I see the answers )
Since you are using Linux (Mint) you've got GNU find, so one way to get one .mp3 file from the current directory is:
mp3file=$(find . -maxdepth 1 -mindepth 1 -name '*.mp3' -printf '%f' -quit)
-maxdepth 1 -mindepth 1 causes the search to be restricted to one level under the current directory.
-printf '%f' prints just the filename (e.g. foo.mp3). The -print option would print the path to the filename (e.g. ./foo.mp3). That may not matter to you.
-quit causes find to exit as soon as one match is found and printed.
Another option is to use the Bash : (colon) command and $_ (dollar underscore) special variable:
: *.mp3
mp3file=$_
: *.mp3 runs the : command with the list of .mp3 files in the current directory as arguments. The : command ignores its arguments and does nothing.
mp3file=$_ sets the value of the mp3file variable to the last argument supplied to the previous command (:).
The second option should not be used if the number of .mp3 files is large (hundreds or more) because it will find all of the files and sort them by name internally.
In both cases $mp3file should be checked to ensure that it really exists (e.g. [[ -e $mp3file ]]) before using it for anything else, in case there are no .mp3 files in the directory.
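Putting the two pieces together, a minimal sketch of the find-based variant with that existence check (the variable name and messages are just examples):

mp3file=$(find . -maxdepth 1 -mindepth 1 -name '*.mp3' -printf '%f' -quit)
if [[ -e $mp3file ]]; then
    echo "sampling: $mp3file"
else
    echo "no .mp3 files in this directory" >&2
fi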
I would do it like this in POSIX shell:
mp3file=
for f in *.mp3; do
    if [ -f "$f" ]; then
        mp3file=$f
        break
    fi
done
# At this point, the variable mp3file contains a filename which
# represents a regular file (or a symbolic link) with the .mp3
# extension, or an empty string if there is no such file.
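A short sketch of how the result might be used after the loop (the messages are only examples):

if [ -n "$mp3file" ]; then
    printf 'sampling %s\n' "$mp3file"
else
    printf 'no .mp3 files found\n' >&2
fi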
The fact that you use
for f in *.mp3; do
suggests to me that the MP3s are named without too many strange characters in the filename.
In that case, if you really don't care which MP3, you could:
f=$(ls *.mp3 | head -n 1)
statement
Or, if you want a different one every time:
f=$(ls *.mp3|sort -R | tail -1)
Note: if your filenames get more complicated (including spaces or other special characters), this will not work anymore.
Assuming you don't have spaces in your filenames (and personally I don't understand why the collective taboo is against using ls in scripts at all, rather than against having spaces in filenames), then:
ls *.mp3 | tr ' ' '\n' | sed -n '1p'
I have several directories containing files whose names contain the name of the folder plus other words.
Example:
one/berg - one.txt
two/tree - two.txt
three/water - three.txt
and I would like them to end up like this:
one/berg.txt
two/tree.txt
three/water.txt
I tried with the sed command, the find command, a for loop, etc., but I have not managed to find a way to do it.
Could you help me? Thank you.
Short and simple, if you have GNU find:
find . -name '* - *.*' -execdir bash -c '
for file; do
    ext=${file##*.}
    mv -- "$file" "${file%% - *}.${ext}"
done
' _ {} +
-execdir executes the given command within the directory where each set of files are found, so one doesn't need to worry about directory names.
for file; do is a shorter way to write for file in "$@"; do.
${file##*.} expands to the contents of $file, with everything up to and including the last . removed (thus, it expands to the file's extension).
"${varname%% - *}" expands to the contents of the variable varname, with everything after <space><dash><space> removed from the end.
In the idiom -exec bash -c '...' _ {} + (as with -execdir), the script passed to bash -c is run with _ as $0, and all files found by find in the subsequent positions.
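As a quick illustration of those expansions on one of the example names (run by hand, not part of the rename itself):

file='berg - one.txt'
echo "${file##*.}"                  # prints: txt   (the extension)
echo "${file%% - *}"                # prints: berg  (everything before " - ")
echo "${file%% - *}.${file##*.}"    # prints: berg.txt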
Here's a way to do it with the help of sed:
#!/bin/bash
find -type f -print0 |
while IFS= read -r -d '' old_path; do
    new_path="$(echo "$old_path" | sed -e 's|/\([^/]\+\)/\([^/]\+\) - \1.\([^/.]\+\)$|/\1/\2.\3|')"
    if [[ "$new_path" != "$old_path" ]]; then
        echo mv -- "$old_path" "$new_path"
        #    ^^^^ remove this "echo" to actually rename the files
    fi
done
You must cd to the top-level directory that contains all those files before running it. Also, it contains an echo, so it does not actually rename anything. Run it once to see if you like its output and, if you do, remove the echo and run it again.
The basic idea is that we iterate over all the files and, for each one, check whether its path matches the given pattern. If it does, we rename it. The pattern detects (and captures) the second-to-last component of the path and also breaks the last component into three pieces: the prefix, the suffix (which must match the previous path component), and the extension.
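You can try the substitution on a single hand-typed path first to see what it does (just a demonstration, assuming the same GNU sed as above):

echo './one/berg - one.txt' |
    sed -e 's|/\([^/]\+\)/\([^/]\+\) - \1.\([^/.]\+\)$|/\1/\2.\3|'
# prints: ./one/berg.txt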
I have images files that when they are created have these kind of file names:
Name of file-1.jpg
Name of file-2.jpg
Name of file-3.jpg
Name of file-4.jpg
..etc
This causes problems because Windows and Cygwin Bash sort the names differently. When I process these files in Cygwin Bash, they get handled out of order because of the difference between how the Windows file system sorts them and how Cygwin Bash sees them. However, if the files are manually renamed and numbered with leading zeroes, the issue goes away. How can I use Bash to rename these files automatically so I don't have to do it by hand? I'd like to add a few lines of code to my Bash script to rename them and add the leading zeroes before they are processed by the rest of the script.
Since I use this Bash script interchangeably between Windows Cygwin and Mac, I would like something that works in both environments, if possible. Also all files will have names with spaces.
You could use something like this:
files="*.jpg"
regex="(.*-)(.*)(\.jpg)"
for f in $files
do
    if [[ "$f" =~ $regex ]]
    then
        number=$(printf %03d "${BASH_REMATCH[2]}")
        name="${BASH_REMATCH[1]}${number}${BASH_REMATCH[3]}"
        mv "$f" "${name}"
    fi
done
Put that in a script, like rename.sh, and run it in the folder where you want to convert the files. Modify as necessary...
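For reference, %03d is what pads the number to three digits, and if you want to see what the loop would do before it renames anything, you can temporarily prefix the mv with echo (a dry run; this assumes your numbers never exceed three digits):

printf '%03d\n' 1    # prints 001
printf '%03d\n' 42   # prints 042
# in the loop above, change:  mv "$f" "${name}"
# to:                         echo mv "$f" "${name}"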
Shamelessly ripped from here:
Capturing Groups From a Grep RegEx
and here:
How to Add Leading Zeros to Sequential File Names
#!/bin/bash
#cygcheck (cygwin) 2.3.1
#GNU bash, version 4.3.42(4)-release (i686-pc-cygwin)
namemodify()
{
    bname="${1##*/}"
    dname="${1%/*}"
    mv "$1" "${dname}/00${bname}" # Add any number of leading zeroes.
}
export -f namemodify
find . -type f -iname "*jpg" -exec bash -c 'namemodify "$1"' _ {} \;
I hope this won't break on Mac too :) good luck
find . -type f | egrep -i "~||&|#|#|<|>|;|:|!|'^'|,|-|_" | tee temp.txt
I am not sure about special characters like * or $. Can you help me out with this?
First of all, I'd suggest writing a script which takes a single file name and fixes it. Then you can do:
find . -type f -exec /path/to/fixNames.sh "{}" \;
fixNames.sh could then contain:
rename 's/[ \t]/-/' "$1" # blanks
rename "s/'\",//" "$1" # characters to remove
rename 's/&/-n-/' "$1"
Note: Test this with a folder with some bad file names! Only run this against the real files when you know that this doesn't cause problems!
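A minimal sandbox test along those lines (the directory and the sample names are made up; this assumes the perl-style rename used above is installed and fixNames.sh is executable):

mkdir -p /tmp/rename-test && cd /tmp/rename-test
touch "bad name.txt" "a&b.txt"
find . -type f -exec /path/to/fixNames.sh "{}" \;
ls    # should now show: a-n-b.txt  bad-name.txt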
Related:
Introduction to shell scripting
How about setting up two arrays - one for the special characters, one for the replacements (they should contain the same number of indexes)?
#!/bin/bash
SPECIALCHARS=("," " " "&" "\\" "\"")
REPLACEMENTS=("" "-" "-n-" "" "")
for i in $(seq 0 $((${#SPECIALCHARS[@]}-1))); do
    find . -exec rename "${SPECIALCHARS[$i]}" "${REPLACEMENTS[$i]}" {} \;
done
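Note that the rename used here is the util-linux variant (rename FROM TO FILE...), not the perl rename from the previous answer. To preview the passes before letting them loose on real files, a small dry-run sketch:

for i in $(seq 0 $((${#SPECIALCHARS[@]}-1))); do
    # show each replacement pair, with %q so blanks and empty strings stay visible
    printf 'pass %d: replace %q with %q\n' "$i" "${SPECIALCHARS[$i]}" "${REPLACEMENTS[$i]}"
    # print the rename commands instead of running them
    find . -exec echo rename "${SPECIALCHARS[$i]}" "${REPLACEMENTS[$i]}" {} \;
done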
I am running the following commands:
FILES=`/usr/bin/find /u01/app/dw/admin/dgwspool -type f -daystart -mmin -1621`;
/usr/bin/smbclient //techshare.something.com/Depts/ -I 129.0.0.1 -D ITIS/deptshare/degreeworks/Test -U domain\\user%password -c "prompt off; mput $FILES"
I've tested this: the $FILES variable is filled with a space-delimited list of filenames. The smbclient command connects to the Windows share as I would hope, and if I put in a hard-coded filename it will copy the file (or files) to the share.
What seems to be happening is that the $FILES variable is not expanded, or is being evaluated in some internal smbclient scope.
How can I get this to work?
My psychic powers tell me that you tested this with echo $FILES, which printed all the files on one line, leading you to believe that $FILES was space separated. This is not the case.
With echo $FILES, the shell word splits the variable on spaces and line feeds into multiple arguments, which echo then joins with spaces. If you use echo "$FILES", you'll see that it is in fact line feed separated.
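You can see the difference with a quick hand-made example (the three names are hypothetical):

FILES=$'a.dat\nb.dat\nc.dat'   # newline separated, like find's output
echo $FILES     # prints: a.dat b.dat c.dat   (word split, then re-joined by echo)
echo "$FILES"   # prints the three names on three separate lines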
The quick fix is to print the file names space separated (requires GNU find or other find with -printf). As per comment, it also omits the search path:
FILES=`/usr/bin/find /u01/app/dw/admin/dgwspool -type f -daystart -mmin -1621 -printf '%P '`;