Individually Execute All Files in a Directory - bash

I've been trying to write a shell script to traverse a directory tree and play every mp3 file it finds. afplay is my utility of choice, since I am on a Mac. However, afplay only takes one argument at a time, so you have to call it over and over again if you want it to keep playing. It seems like the simplest solution would be as follows:
$(`find . -name *.mp3 | awk '{ print "afplay \047" $0 "\047"; }' | tr '\n' ';' | sed 's/;/; /g'`)
...but something keeps getting caught up in the escaping of quotes. For quick reference, \047 is octal for ' (the single-quote character), which should wrap each filename into a single argument, but for some reason it does not. I have no clue what is going wrong here.

Why not just find . -name '*.mp3' -exec afplay '{}' \;?

If all of your songs look like this:
1. song_name.mp3
2. song_name.mp3
3. song_name.mp3
...
20. song_name.mp3
to play all 20 of them you could just loop:
for ((i=1; i<=20; i++)); do afplay "$i."* ; done
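One pitfall with this glob: when i=1, an unanchored pattern like `$i*` also matches `10. …` through `19. …`. Anchoring on the dot avoids that; a quick check, again with `printf` standing in for `afplay`:

```shell
#!/bin/sh
dir=$(mktemp -d)
cd "$dir"
touch "1. a.mp3" "2. b.mp3" "10. c.mp3"

# "$i."* requires the dot right after the number,
# so i=1 matches "1. a.mp3" but not "10. c.mp3".
for i in 1 2; do
    printf '%s\n' "$i."*
done

cd / && rm -rf "$dir"
```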

Syntax error: not sure what it's saying or how to fix it

I am trying to write a shell script for school that searches your entire home directory for all files with the .java extension. For each such file, list the number of lines in the file along with its location (that is, its full path).
my script looks like
#!/bin/bash
total=0
for currfile in $(find ~ -name "*.java" -print)
do
total=$[total+($(wc -l $currfile| awk '{print $1}'))]
echo -n 'total=' $total
echo -e -n '\r'
done
echo 'total=' $total
When I run it from the Konsole I get this error:
./fileQuest.sh: line 5: total+(): syntax error: operand expected (error token is ")")
I am a novice and cannot figure out what the error is telling me. Any help would be appreciated.
total+()
This is the expression that's being evaluated inside of $[...]. Notice that the parentheses are empty. There should be a number there. It indicates that the $(wc | awk) bit is yielding an empty string.
total=$[total+($(wc -l $currfile| awk '{print $1}'))]
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If that part is blank then you get:
total=$[total+()]
Note that wc can handle multiple file names natively. You don't need to write your own loop. You could use find -exec to call it directly instead.
find ~ -name "*.java" -exec wc -l {} +

Command Line Argument for Script Changing Way Code Functions

I'm writing a script to loop through a directory, look through each file, and report the number of occurrences of a certain word in each file. When I write it for the specific directory it works fine, but when I try to make the directory a command line argument, it only gives me the count for the first file. I was thinking maybe this has something to do with the argument being singular ($1), but I really have no idea.
Works
for f in /home/student/Downloads/reviews_folder/*
do
tr -s ' ' '\n' <$f | grep -c '<Author>'
done
Output
125
163
33
...
Doesn't Work
for f in "$1"
do
tr -s ' ' '\n' <$f | grep -c '<Author>'
done
Command Line Input
student-vm:~$ ./countreviews.sh /home/student/Downloads/reviews_folder/*
Output
125
The shell expands wildcards before passing the list of arguments to your script.
To loop over all the files passed in as command-line arguments,
for f in "$@"
do
tr -s ' ' '\n' <"$f" | grep -c '<Author>'
done
Run it like
./countreviews /home/student/Downloads/reviews_folder/*
or more generally
./countreviews ... list of file names ...
As you discovered, "$1" corresponds to the first file name in the expanded list of wildcards.
If you use double quotes around the parameter, the unexpanded pattern is passed to the script instead:
student-vm:~$ ./countreviews.sh "/home/student/Downloads/reviews_folder/*"
At least that is how it works for me. I hope this helps you.

Counting number of occurrences in several files

I want to check the number of occurrences of, let's say, the character '[', recursively in all the files of a directory that have the same extension, e.g. *.c. I am working on Solaris (Unix).
I tried some solutions given in other posts, and the only one that works is this one, since on this OS I cannot use grep -o:
sed 's/[^x]//g' filename | tr -d '\012' | wc -c
Where x is the occurrence I want to count. This one works but it's not recursive, is there any way to make it recursive?
You can get a recursive listing from find and execute commands with its -exec argument.
I'd suggest like:
find . -name '*.c' -exec cat {} \; | tr -c -d ']' | wc -c
The -c argument to tr means to use the opposite of the string supplied -- i.e. in this case, match everything but ].
The . in the find command means to search in the current directory, but you can supply any other directory name there as well.
I hope you have nawk installed. Then you can just:
nawk '{a+=gsub(/\]/,"x")}END{print a}' /path/*
You can write a short snippet yourself. I suggest running the following:
awk '{n+=gsub(/\[/,"&")} END{print n}' *.c
This counts every occurrence of "[" in all .c files in the present directory and prints the total (gsub returns the number of substitutions made on each line).

Get index of argument with xargs?

In bash, I have list of files all named the same (in different sub directories) and I want to order them by creation/modified time, something like this:
ls -1t /tmp/tmp-*/my-file.txt | xargs ...
I would like to rename those files with some sort of index or something so I can move them all into the same folder. My result would ideally be something like:
my-file0.txt
my-file1.txt
my-file2.txt
Something like that. How would I go about doing this?
You can just loop through these files and keep appending an incrementing counter to desired file name:
for f in /tmp/tmp-*/my-file.txt; do
fname="${f##*/}"
fname="${fname%.*}"$((i++)).txt
mv "$f" "/dest/dir/$fname"
done
EDIT: In order to sort the listed files by modification time, as ls -1t does, you can use this script:
while IFS= read -d '' -r f; do
f="${f#* }"
fname="${f##*/}"
fname="${fname%.*}"$((i++)).txt
mv "$f" "/dest/dir/$fname"
done < <(find /tmp/tmp-* -name 'my-file.txt' -printf "%T# %p\0" | sort -zk1nr)
This handles filenames with all special characters like white spaces, newlines, glob characters etc since we are ending each filename with NUL or \0 character in -printf option. Note that we are also using sort -z to handle NUL terminated data.
So I found an answer to my own question, thoughts on this one?
ls -1t /tmp/tmp-*/my-file.txt | awk 'BEGIN{ a=0 }{ printf "cp %s /tmp/all-the-files/my-file_%03d.txt\n", $0, a++ }' | bash;
I found this from another stack overflow question looking for something similar that my search didn't find at first. I was impressed with the awk line, thought that was pretty neat.

pass shell parameters to awk does not work

Why does this work
for myfile in `find . -name "R*VER" -mtime +1`
do
SHELLVAR=`grep ^err $myfile || echo "No error"`
ECHO $SHELLVAR
done
and outputs
No error
err ->BIST Login Fail 3922 err
No error
err ->IR Remote Key 1 3310 err
But this does not
for myfile in `find . -name "R*VER" -mtime +1`
do
SHELLVAR=`grep ^err $myfile || echo "No error"`
awk -v awkvar=${SHELLVAR} '{print awkvar}'
done
and outputs
awk: cmd. line:1: fatal: cannot open file `{print awkvar}' for reading (No such file or directory)
What am I missing?
Does $SHELLVAR contain a space? If so, your awk script is getting misparsed, and the {print awkvar} is being assumed to be a file name and not the actual AWK program.
You also have a problem where your for loop and the awk program are both slurping STDIN. In fact, your loop body would effectively execute only once, since awk reads all of STDIN until it finishes. In the end, you'll get the same line over and over, and then your program will stop making progress as awk waits for more STDIN.
I think you want to do something like this...
find . -name "R*VER" -mtime +1 | while read myfile
do
SHELLVAR=`grep ^err $myfile || echo "No error"`
echo "$SHELLVAR" | awk '{print $0}'
done
This way, your echo command feeds into your awk which will prevent awk from reading from the find statement.
As an aside, you're better off doing this:
find . -name "R*VER" -mtime +1 | while read myfile
do
...
done
Rather than this:
for myfile in `find . -name "R*VER" -mtime +1`
do
...
done
This is for several reasons:
The command line buffer could overflow, and you'll lose file names, but you'll never see an error.
The find command in the second example must first complete before the for loop can start to execute. In the first example, the find feeds into the while loop.
ADDENDUM
Now that I saw just-my-correct-opinion's answer, I realize what you've really done wrong: You forgot the file name in the awk command:
find . -name "R*VER" -mtime +1 | while read myfile
do
SHELLVAR=`grep ^err $myfile || echo "No error"`
awk -v awkvar=${SHELLVAR} '{print awkvar}' $myfile
done
Now, the question is what exactly you are doing with the awk. You're not printing anything except the value of $SHELLVAR for each and every line in the file. That's probably not what you want to do. In fact, why not simply do this:
find . -name "R*VER" -mtime +1 | while read myfile
do
SHELLVAR=$(grep "^err" "$myfile")
if [ "x$SHELLVAR" != "x" ]
then
echo "$SHELLVAR"
fi
done
That way, you print out $SHELLVAR, but only if $SHELLVAR is not empty.
Or, you can use awk to print out only those lines that match your regex:
find . -name "R*VER" -mtime +1 | while read myfile
do
awk '/^err/ {print $0}' $myfile
done
What you're trying to do is possible but ... a bit quirky. Here's an alternative for you:
for f in $(find . -name "R*VER" -mtime +1)
do
echo "$f"
awk 'BEGIN{ec=0} /^err/{ec+=1;print} END{if(ec==0){print "No error"}}' "$f"
done
This way you don't have to worry about grep, don't have to worry about shell variables and can keep your logic all in one place in one language.
Note that this code is only partially tested since I don't have your data files, but the testing I did worked fine.
If you'd like you can even go a step farther and write the whole thing in awk. It is, after all, a full, general-purpose programming language (with, IMO, a far cleaner syntax than bash). This way you avoid the need for find, grep and bash entirely. I won't be writing that script, however. You'll have to pick up the awk man page and read up on file I/O yourself.
You need to quote $SHELLVAR, to prevent the shell from splitting it.
awk -v awkvar="${SHELLVAR}" '{print awkvar}'
You have already got alternative solutions. Regardless of that, I just want to answer your question, ie the thing that you are missing:
awk -v awkvar=${SHELLVAR} '{print awkvar}'
Here awk is trying to read its input from STDIN, and that's the problem: awk reads from STDIN unless you specify input file(s). The given commands are executed for each RECORD in the input (by default, a record is a line). See man awk to read more on awk.
But here is a hack, if you want it to proceed without any input:
awk -v awkvar=${SHELLVAR} 'BEGIN{print awkvar}'
BEGIN block is executed as soon as the awk is called. And if awk doesn't find any other block except BEGIN, it executes those BEGIN blocks and exits.
I hope you got the problem behind the error, and a quick solution for that as well.
