OS X bash for loop only processes one file in a directory

I'm trying to get this code to process all files in a directory : https://github.com/kieranjol/ifi-ffv1/blob/master/ifi-ffv1.sh
I run it in the terminal and pass it the path to a single file: ./ifi-ffv1.sh /path/to/file.mov. How can I get it to move on to the next file? I'll also need to make sure that it only processes AV files, such as .avi/.mkv/.mov etc.
I've tried using while loops with shift, but I can't get that to work either.
I've tried adding a specific path, as shown here, but I'm failing: http://www.cyberciti.biz/faq/unix-loop-through-files-in-a-directory/
I've tried this answer https://askubuntu.com/a/315338 and it keeps looping over the same file rather than moving on to the next one. This guide http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html didn't help me either.
I know this is going to be a horribly simple solution but I'm very new to this.

You don't actually have any kind of loop in your code. You need to do something like
for file in path/to/*.avi path/to/*.mkv
do
./ifi-ffv1.sh "$file"
done
which will loop through all the specified files, passing each one to the script as $1.
You can put whatever file patterns you want instead of path/to/*.avi path/to/*.mkv. If you cd to the directory first, you can leave out the paths and just use *.avi *.mkv.
To do it all in one script, do something like this:
cd <your directory>
for file in *.avi *.mkv
do
<your existing script here>
done
replacing all the $1's in your script with "$file" (not duplicating any quotes you already have, of course)
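If you also want to restrict the loop to AV files generally, here is a minimal sketch; the extension list and directory are assumptions, and nullglob keeps unmatched patterns from being passed through as literal strings:
shopt -s nullglob nocaseglob      # unmatched patterns expand to nothing; match .MOV as well as .mov
for file in /path/to/*.avi /path/to/*.mkv /path/to/*.mov; do
    ./ifi-ffv1.sh "$file"         # each matching file becomes $1 inside the script
done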

Related

Loop Over Files as Input for Program, Rename and Write Output to Different Directory

I have a problem with writing the output of a program to a different directory when I loop over different input files. I run this on the command line. The problem is that I do not know how to "tell" the program to put the output, with a changed filename, into a directory other than the input directory.
Here is the command. It is a bioinformatics tool which requires specific input file formats, so I am sorry that I could not give a better example. The program is called computeMatrix, from a software toolbox called deeptools2.
command:
for f in ~/my/path/*spc_files*; do computeMatrix reference-point --referencePoint center --regionsFileName /target/region.bed --binSize 500 --scoreFileName "$f" --outFileName "$f.matrix"; done
So far, I tried to use the basename command to get just the filename and then put a different directory in front of it. However, I could not figure out:
whether this is combinable
what the correct order of the commands is (e.g.:
outputFile=`basename "$f"`, "~/new/targetDir/`basename "$f"`")
Probably there are other options to solve the problem which I could not think of / find.
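A minimal sketch of the basename approach, assuming the output directory ~/new/targetDir already exists and reusing the computeMatrix options from the command above:
for f in ~/my/path/*spc_files*; do
    out=~/new/targetDir/"$(basename "$f").matrix"    # keep the input's filename, change only the directory
    computeMatrix reference-point --referencePoint center \
        --regionsFileName /target/region.bed --binSize 500 \
        --scoreFileName "$f" --outFileName "$out"
done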

Shell script over some files

I have to do some things to some files in a directory on Solaris. In that directory, I have thousands of files. Some of them begin with FAC_. I need to build an array variable holding the names of the files that start with FAC_, and then go over the array and do some task on each file.
How can I accomplish that?
Thanks
I think the simplest approach would be something like this:
files="FAC_*"
for file in $files; do
echo "$file"
done
If the files aren't in the current directory, you can put the path in the pattern instead:
files="$path/FAC_*"

Move newly created text files to a var created directory

I have several text docs that are created each day from templates. This process I've achieved successfully albeit probably in a Cro-Magnon way. I want these newly created text files to be filed within a newly created dated folder.
The script creates the file docs from the templates successfully and also creates the newly dated directory. I don't really want to create these text files somewhere else and then move them to the newly created directory. Rather that they be created directly within it. All my research tends to involve directories that already exist rather than one created from a var.
I've included just one file creation example below.
Hope you can help. TIA
today=`date '+%y%m%d'`;
today_Folder=~/Desktop/test/"${today}"
if [[ ! -d $today_Folder ]]
then
mkdir "${today_Folder} `(date '+%A')`"
fi
cat ~/Desktop/test/template.txt >> ~/Desktop/test/dest.txt
P.S. I've tried to make the cat command regarding the text files clearer - it simply creates files. I'm NOT trying to create a tree of directories. Simply ONE newly created directory that could be in test along with the text files.
Your question is how to create a file when the directory that should contain it may not exist yet? You can't create a file and its containing directory in one step; programs generally have to create the directory before the file. What you can do is pass the -p flag to mkdir, which means "create every directory needed for this path". Creating zero directories is fine, so you don't need to check whether the directory already exists. So change the whole if block to just this:
mkdir -p "${today_Folder} `(date '+%A')`"
Also, it's a bit smelly that you want a single string (the path) yet use three separate operations to build it. Could it be simpler? Extra statements are worth it when they add clarity, but here the steps are so simple that splitting them up only makes your colleagues go back and read what you wrote more than once. It might suit to change it to:
dir_path=...
mkdir -p "${dir_path}"
To accomplish this, keep in mind that instead of backticks you can use command substitution with $(). It helps because $() can be nested, unlike backticks, and it makes the line more readable, since you can clearly see where the command starts and ends.
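A minimal sketch putting those pieces together, with the folder-name format and the template/destination names taken from the question:
today_Folder=~/Desktop/test/"$(date '+%y%m%d') $(date '+%A')"     # e.g. .../test/160504 Wednesday
mkdir -p "${today_Folder}"                                        # creates it only if it is missing
cat ~/Desktop/test/template.txt >> "${today_Folder}/dest.txt"     # the file is created inside the new folder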

Copying multiple files with same name in the same folder terminal script

I have a lot of files named the same, with a directory structure (simplified) like this:
../foo1/bar1/dir/file_1.ps
../foo1/bar2/dir/file_1.ps
../foo2/bar1/dir/file_1.ps
.... and many more
As it is extremely inefficient to view all of those ps files by going to the respective directory, I'd like to copy all of them into another directory, but include the names of the first two directories (which are those relevant to my purpose) in the file name.
I have previously tried it like this, but I cannot tell which file is from where, as they are all named consecutively:
#!/bin/bash -xv
cp -v --backup=numbered {} */*/dir/file* ../plots/;
Where ../plots is the folder where I copy them. However, they are now of the form file.ps.~x~ (x is a number) so I get rid of the ".ps.~*~" and leave only the ps extension with:
rename 's/\.ps.~*~//g' *;
rename 's/\~/.ps/g' *;
Then, as the ps files have hundreds of points sometimes and take a long time to open, I just transform them into jpg.
for file in * ; do convert -density 150 -quality 70 "$file" "${file/.ps/}".jpg; done;
This is not really a working bash script, as I have to change the directory manually. I guess the best way to do it is to copy the files from the beginning with the names of the first two directories incorporated in the copied filename.
How can I do this last thing?
If your files all sit at a fixed depth like the one shown, you can use
for file in */*/dir/*.ps
do
ln "$file" "${file//\//_}"
done
This goes over each ps file and hard-links it into the current directory with the /s replaced by _. Use cp instead of ln if you intend to edit the files but don't want to update the originals.
For arbitrary directory levels, you can use the bash-specific
shopt -s globstar
for file in **/*.ps
do
ln "$file" "${file//\//_}"
done
But are you sure you need to copy them all to one directory? You might be able to open them all with yourreader */*/dir/*.ps, which, depending on your reader, may let you browse through them one by one while still seeing the full path.
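If the jpg versions are the end goal anyway, the renaming and the convert step from the question can be combined in one pass; a sketch assuming the layout shown above, that ../plots already exists, and that ImageMagick's convert is installed:
shopt -s nullglob                    # do nothing at all if no ps files match
for file in */*/dir/*.ps; do
    flat=${file//\//_}               # foo1/bar1/dir/file_1.ps -> foo1_bar1_dir_file_1.ps
    convert -density 150 -quality 70 "$file" "../plots/${flat%.ps}.jpg"
done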
You should run a find command and print the names first like
find . -name "file_1.ps" -print
Then iterate over each of them and do a string replacement of / to '-' or any other character, like
${filename//\//-}
The doubled slash after the variable name makes it replace every occurrence, not just the first; the general syntax is ${string/substring/replacement} for the first match and ${string//substring/replacement} for all matches. Then you can copy it to the required directory. The complete script can be written as follows. Haven't tested it (not on linux at the moment), so you might need to tweak the code if you get any syntax error ;)
for filename in `find . -name "file_1.ps" -print`
do
filename=${filename#./}          # drop the leading ./ so the new name isn't a hidden file
newFileName=${filename//\//-}    # // replaces every / in the path
cp "$filename" YourNewDirectory/"$newFileName"
done
You will need to run the script from the root of that directory tree, or change the path given to find, if you place the script somewhere else.
References
string manipulation in bash
find man page

Bash script for taking a screenshot, renaming and moving

First bash script and I'm running into some issues. I want to take a screenshot, then change the name of the .png to a random number (so that pictures don't overwrite). After it's renamed I want to move the picture to my dropbox folder.
This is what I've got:
#!/bin/bash
#Take screenshot
import -window root $HOME/screenshot.png
#Move to dropbox folder
mv $HOME/screenshot.png $HOME/Dropbox/Max-Max/$RANDOM.png
When I run it, Dropbox does seem to receive something, because my taskbar icon indicates a file transfer. When I open up the folder, however, nothing's there.
Thanks for the help.
Instead of $RANDOM, use $(date | tr " :" _). A timestamp is much more useful than a random number.
You can do that with scrot like this:
scrot -e 'mv $f ~/Dropbox/Max-Max'
But your script looks fine... Try to create an empty file first to make sure your dropbox functions fine.
echo > ~/Dropbox/Max-Max/testfile
The commands you're using are correct. The only way it could fail is if Max-Max doesn't exist. mv moves and renames files among existing directories -- mv cannot create directories.
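Putting the answers together, a sketch of the script with a timestamped name instead of $RANDOM and a guard that the target folder exists (paths taken from the question):
#!/bin/bash
dest="$HOME/Dropbox/Max-Max"
mkdir -p "$dest"                     # mv cannot create the directory itself
# Take the screenshot, then move it under a timestamped name so nothing gets overwritten
import -window root "$HOME/screenshot.png"
mv "$HOME/screenshot.png" "$dest/$(date | tr ' :' _).png"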
