So, I need to list a bunch of files in reverse order from a certain directory. The only problem is that there are a lot of files in the directory (I'm decoding a video into individual frames there so I can reverse the entire video), and when I run ls I get an error that says /bin/ls: argument list too long. How can I get around this error?
Operating System: Ubuntu 14.04
If ls doesn't do the job, find -type f is usually your friend (and it can also use things like -print0 to avoid problems with exotic filenames).
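For instance, a hedged sketch of that approach (GNU find and sort assumed): it never builds one huge argument list, and the NUL separators keep odd filenames intact.
# list *.jpg in the current directory, reverse-sorted by name, NUL-safe
find . -maxdepth 1 -type f -name '*.jpg' -print0 |
    sort -zr |
    tr '\0' '\n'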
I assume that you are using something like
ls -1 -r *.jpg
to produce the reverse-sorted list of images. Since Bash sorts filename expansions (aka globs) itself, you can get the same effect by just reversing the expansion of *.jpg. Unlike an external command, printf is a Bash builtin, so it never execs a new process and cannot hit the kernel's "argument list too long" limit. This is one way to do it:
printf '%s\n' *.jpg | tac
If you haven't got tac, you can do it all in pure Bash:
images=( *.jpg )
for (( i=${#images[*]}-1 ; i>=0 ; i-- )) ; do
    printf '%s\n' "${images[i]}"
done
Related
A bit of a lowly query, but here goes:
bash shell script. POSIX, Mint 21
I just want one/any (mp3) file from a directory. As a sample.
In normal execution, a full run, the code would be such
for f in *.mp3; do
    #statements
done
This works fine but if I wanted to sample just one file of such an array/glob (?) without looping, how might I do that? I don't care which file, just that it is an mp3 from the directory I am working in.
Should I just start this for-loop and then exit(break) after one statement, or is there a neater way more tailored-for-the-job way?
for f in *.mp3; do
    #statement
    break
done
Ta (can't believe how dopey I feel asking this one; my forehead will hurt when I see the answers)
Since you are using Linux (Mint) you've got GNU find so one way to get one .mp3 file from the current directory is:
mp3file=$(find . -maxdepth 1 -mindepth 1 -name '*.mp3' -printf '%f' -quit)
-maxdepth 1 -mindepth 1 causes the search to be restricted to one level under the current directory.
-printf '%f' prints just the filename (e.g. foo.mp3). The -print option would print the path to the filename (e.g. ./foo.mp3). That may not matter to you.
-quit causes find to exit as soon as one match is found and printed.
Another option is to use the Bash : (colon) command and $_ (dollar underscore) special variable:
: *.mp3
mp3file=$_
: *.mp3 runs the : command with the list of .mp3 files in the current directory as arguments. The : command ignores its arguments and does nothing.
mp3file=$_ sets the value of the mp3file variable to the last argument supplied to the previous command (:).
The second option should not be used if the number of .mp3 files is large (hundreds or more) because it will find all of the files and sort them by name internally.
In both cases $mp3file should be checked to ensure that it really exists (e.g. [[ -e $mp3file ]]) before using it for anything else, in case there are no .mp3 files in the directory.
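For instance, a minimal sketch of that guard around the second option (if no .mp3 exists, the unexpanded glob stays literal and the -e test fails):
: *.mp3
mp3file=$_
if [[ -e $mp3file ]]; then
    printf 'Sampling: %s\n' "$mp3file"
else
    echo 'no .mp3 files here' >&2
fi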
I would do it like this in POSIX shell:
mp3file=
for f in *.mp3; do
    if [ -f "$f" ]; then
        mp3file=$f
        break
    fi
done
# At this point, the variable mp3file contains a filename which
# represents a regular file (or a symbolic link) with the .mp3
# extension, or the empty string if there is no such file.
The fact that you use
for f in *.mp3; do
suggests to me that the MP3s are named without too many strange characters in the filename.
In that case, if you really don't care which MP3, you could:
f=$(ls *.mp3 | head -n 1)
statement
Or, if you want a different one every time:
f=$(ls *.mp3 | sort -R | tail -n 1)
Note: if your filenames get more complicated (including spaces or other special characters), this will not work anymore.
Assuming you don't have spaces in your filenames (and personally, I don't understand why the collective taboo is against using ls in scripts at all, rather than against having spaces in filenames), then:
ls *.mp3 | tr ' ' '\n' | sed -n '1p'
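For completeness, a whitespace-safe sketch of "just give me the first one" that skips ls entirely: expand the glob into an array and take its first element (it still wants an existence check, in case the directory holds no MP3s).
files=( *.mp3 )    # glob expansion handles spaces in names correctly
f=${files[0]}      # first entry of Bash's sorted expansion
[ -e "$f" ] && echo "picked: $f"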
I'm trying to count all the .txt files in the folders. The problem is that the main folder has more than one folder inside it, and every one of them contains .txt files, so in total I want to count the number of .txt files. So far I've tried to build such a solution, but of course it's wrong:
#!/bin/bash
counter=0
for i in $(ls /Da) ; do
for j in $(ls i) ; do
$counter=$counter+1
done
done
echo $counter
The error I'm getting is: ls: cannot access i ...
The problem is that I don't know how I'm supposed to build the inner for loop, since it depends on the outer loop. What is the right schema?
This should work for you:
find . -name "*.txt" | wc -l
In the first part, find looks for *.txt files starting from this folder (.) and its subfolders. In the second part, wc counts the lines returned (-l) by find.
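Applied to your layout, that would be (a sketch, assuming the .txt files sit somewhere under /Da):
find /Da -type f -name '*.txt' | wc -l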
You want to avoid parsing ls and you want to quote your variables.
There is no need for repeated loops, either.
printf 'x\n' /Da/* /Da/*/* | wc -l
depending also on whether you expect the entries in /Da to be all files (in which case /Da/* will suffice), all directories (in which case /Da/*/* alone is enough), or both. Additionally, if you don't want to count directories at all, maybe switch to find /Da -type f -printf 'x\n' or similar.
There is no need to print the file names at all; this avoids getting the wrong result if a file name should ever contain a line feed (touch $'/Da/ick\npoo' to see this in action.)
More generally, a correct nested loop looks like
for i in list of things; do
for j in different items, perhaps involving "$i"; do
things with "$j" and perhaps also "$i"
done
done
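Concretely, a sketch of your counter rewritten along those lines (assuming the .txt files live one level down, in subdirectories of /Da):
#!/bin/bash
counter=0
for dir in /Da/*/; do                # each subdirectory of /Da
    for file in "$dir"*.txt; do      # each .txt file inside it
        [ -e "$file" ] && counter=$((counter + 1))
    done
done
echo "$counter"
The [ -e "$file" ] guard keeps the count correct when a subdirectory contains no .txt files and the glob stays unexpanded.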
i is a variable, so you need to reference it via $, i.e. the second loop should be
for j in $(ls "$i") ; do
I have a collection of stored procedures (SPs) being called in some C# code. I simply want to find which lines in which C# files are using these SPs.
I have installed git-bash, and am working in a Win10 environment.
No matter what I try, grep either spits out nothing or spits out the entire contents of every file that has a match. I simply want the filename and the line number where the SP regex matches.
In a terminal, here is what I have done:
procs=( $(cat procs.txt) ) #load the procs into an array
echo ${#procs[@]} #echo the size to make sure each proc got read in separately
output: 235
files=( $(find . -type f -iregex '.*\.cs') ) #load the file paths into an array,
#this similarly returns a filled out array
output: over 1000
I have also tried this variant which removes the initial './' in the path, thinking that the relative pathing was causing an issue
files=( $(find . -type f -iregex '.*\.cs' | sed 's/..//') )
The rest is a simple nested for loop:
for i in ${procs[@]}
do
    for j in ${files[@]}
    do
        grep -nie "$i" "$j"
    done
done
I have tried many other variants of this basic idea, like redirecting the grep output to a text file, adding and subtracting flags,
quoting and unquoting the variables, and the like.
I also tried this approach, but was similarly unsuccessful
for i in ${procs[@]}
do
    grep -r --include='*.cs' -F $i
    #and I also tried
    grep -F $i *
done
At this point I am thinking there is something I don't understand about how git-bash works in a Windows environment, because it seems like this should have worked by now.
Thanks for your help.
EDIT:
So after hours of heart-ache I finally got it to work with this:
for i in "${!procs[#]}"
do
for j in "${!files[#]}"
do
egrep -nH $(echo "${procs[$i]}") $(echo "${files[$j]}")
done
done
I looked it up, and my git-bash version is gnu-bash 4.4.12(1) x86_64-pc-msys
I'm still not sure why git-bash needs such weird quoting and echoing just to get everything to run properly. On Debian Linux it worked with just a simple
for i in ${procs[@]}
do
    for j in ${files[@]}
    do
        grep $i $j
    done
done
Running this version of bash: GNU bash, version 4.4.12(1)-release (x86_64-pc-linux-gnu)
If anyone can tell me why git-bash behaves so oddly, I would still love to know the answer.
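For what it's worth, one common culprit in git-bash is Windows CRLF line endings in procs.txt: each pattern then carries an invisible trailing carriage return and matches nothing. A hedged sketch that strips them and lets grep do all the looping via its -f (patterns-from-file) option; procs.unix.txt is just an illustrative temp-file name:
# strip possible CR characters, then search every .cs file in one pass
tr -d '\r' < procs.txt > procs.unix.txt
grep -rnF -f procs.unix.txt --include='*.cs' .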
I'm not sure I understand exactly why:
for f in `find . -name "strain_flame_00*.dat"`; do
    echo $f
    mybase=`basename $f .dat`
    echo $mybase
done
works and:
for f in `ls strain_flame_00*.dat`; do
    echo $f
    mybase=`basename $f .dat`
    echo $mybase
done
does not, i.e. the filename does not get stripped of the suffix. I think it's because what comes out of ls is formatted differently but I'm not sure. I even tried to put eval in front of ls...
The correct way to iterate over filenames here would be
for f in strain_flame_00*.dat; do
    echo "$f"
    mybase=$(basename "$f" .dat)
    echo "$mybase"
done
Using for with a glob pattern, and then quoting all references to the filename, is the safest way to handle filenames that may contain whitespace.
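One caveat worth hedging: if the pattern matches nothing, Bash leaves it unexpanded and the loop body runs once with the literal string. Setting the nullglob option avoids that:
shopt -s nullglob    # unmatched globs expand to nothing instead of themselves
for f in strain_flame_00*.dat; do
    echo "$f"
done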
First of all, never parse the output of the ls command.
If you MUST use ls and you DON'T know what ls alias might be out there, then do this:
(
COLUMNS=
LANG=
NLSPATH=
GLOBIGNORE=
LS_COLORS=
TZ=
unalias ls 2>/dev/null
unset -f ls
for f in `ls -1 strain_flame_00*.dat`; do
    echo $f
    mybase=`basename $f .dat`
    echo $mybase
done
)
It is surrounded by parentheses to run everything in a subshell, which protects the existing environment, aliases, and shell variables.
Various environment variables were NUKED (as ls does look those up).
One unalias command (self-explanatory).
One unset command (again, protection against an unscrupulous, over-lording ls function).
Now you can see why NOT to use ls.
Another difference that hasn't been mentioned yet is that find searches recursively by default, whereas ls does not. (Both can be told to be recursive or non-recursive through options, and find can be told to recurse only up to a specified depth.)
And, as others have mentioned, if it can be achieved by globbing, you should avoid using either.
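If the reason for reaching for find was recursion, note that Bash 4+ can glob recursively too; a sketch using the globstar option:
shopt -s globstar    # ** matches any depth of subdirectories
for f in **/strain_flame_00*.dat; do
    mybase=$(basename "$f" .dat)
    echo "$mybase"
done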
I want to iterate over a list of files in Bash and perform some action. The problem: the file names may contain whitespace, which creates an obvious problem with wildcards or ls:
touch a\ b
FILES=* # or $(ls)
for FILE in $FILES; do echo $FILE; done
yields
a
b
Now, the conventional way to handle this is to use find … -print0 instead. However, this only works (well) in conjunction with xargs -0, not with Bash variables / loops.
My idea was to set $IFS to the null character to make this work. However, the comp.unix.shell newsgroup seems to think that this is impossible in bash.
Bummer. Well, it’s theoretically possible to use another character, such as : (after all, $PATH uses this format, too):
IFS=$':'
FILES=$(find . -print0 | xargs -0 printf "%s:")
for FILE in $FILES; do echo $FILE; done
(The output is slightly different but fair enough.)
However, I can’t help but feel that this is clumsy and that there should be a more direct way of accomplishing this, preferably using wildcards or ls.
The best way to handle this is to store the file list as an array, rather than a string (and be sure to double-quote all variable substitutions):
files=(*)
for file in "${files[@]}"; do
    echo "$file"
done
If you want to generate an array from find's output (e.g. if you need to search recursively), see this previous answer.
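For reference, a sketch of that find-to-array technique on a reasonably modern Bash (4.4+ for mapfile -d):
# read find's NUL-delimited output into an array, safe for any filename
mapfile -d '' -t files < <(find . -type f -print0)
for file in "${files[@]}"; do
    echo "$file"
done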
Exactly what you have in the first example works fine for me in Msys Bash, Cygwin and on my Fedora box:
FILES=*
for FILE in $FILES
do
    echo $FILE
done
It's very important to precede the loop with
IFS=""
otherwise files whose names contain two consecutive spaces will not be handled correctly: the unquoted $FILE gets word-split and the double space collapses to a single one.
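A sketch of what that looks like in context (the effect only matters because $FILE is unquoted; double-quoting it, as in the array answer above, makes the IFS= line unnecessary):
IFS=""
FILES=*
for FILE in $FILES
do
    echo $FILE    # with default IFS, a name like "a  b" would print as "a b"
done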