Can the shell direct where a program places its output files? - bash

Can the shell override where output files are placed? (Not the console/screen output, but files created by a program.) I have a script that currently runs a sequence of input files through a program and for each one produces a lot of different output files.
for i in `seq 1 24`
do
    ../Bin/myprog inputfile.$i.in
done
Is there a way to create a new directory for each run of the program and place the corresponding output files in it? So I would get dir1: <output files from run 1>; dir2: <output files from run 2>; and so on. I suppose one way would be to just write another script to create the directories and sort all the files after the program(s) had run, but is there a more elegant way to do it?

As suggested in the comments, this might be what you need, assuming that your program just dumps output into the current working directory.
for i in `seq 1 24`
do
    mkdir $i
    pushd $i
    ../../Bin/myprog ../inputfile.$i.in
    popd
done

If you are trying to change where an existing program (e.g., myprog) writes its files, this is only possible if the program writes its files relative to the current directory. In this case, the outer script that invokes myprog can create a "destination" directory and chdir into it before invoking myprog, which is exactly what the pushd loop above does.
If the myprog program writes to an absolute path, e.g., /var/tmp/myprog.tmp, the only way to override where this write actually goes is to place a symbolic link at the absolute path linking to the desired destination. This will only work if the program (myprog) doesn't first delete an existing file before writing to it.
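For example, a minimal sketch of the symlink trick, assuming myprog hard-codes /var/tmp/myprog.tmp as in the example above:
mkdir -p dir1
# point the hard-coded absolute path at the file we actually want written
ln -sf "$PWD/dir1/myprog.tmp" /var/tmp/myprog.tmp
../Bin/myprog inputfile.1.in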
The third and most extreme possibility for directing absolute file path writes is to create a chroot'ed file system, in which the myprog output files will be contained, after which the outer script can copy or move them to where they are desired.
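A heavily simplified sketch of the chroot option; a real jail also needs myprog's shared libraries and input files copied in, and chroot itself requires root:
# run myprog inside a throwaway root so its absolute-path writes land in the jail
mkdir -p jail/var/tmp jail/Bin
cp ../Bin/myprog jail/Bin/
sudo chroot jail /Bin/myprog inputfile.1.in
# collect the redirected output afterwards
mv jail/var/tmp/* dir1/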
To summarize: other than changing the source, setting the working directory for relative-path output files, or using a symlink or chroot'ed filesystem for absolute-path files, there really is no "elegant" way to redirect the output files a program writes.

Related

How to relocate output of an executable?

I have a Python file named generator.py, and this file generates some files (approximately 100) in the same location as generator.py. But I want to move the output files (the generated files) into the Outputs/ folder. How can I do that without changing generator.py and without knowing the names of the generated files?
I run my program with this command:
python generator.py
Since generator.py locates its own directory and creates its files there, I see two possibilities without fixing this insane design in the Python program (which would probably be the better approach anyway). The following code assumes that generator.py is located in a directory named gen and that we want its output files in a directory named Output:
(1) Using a reference timestamp
touch gen/generator.py
python gen/generator.py
find gen -type f -cnewer gen/generator.py -exec mv {} Output \;
(2) Using a hack
cp gen/generator.py Output
python Output/generator.py
rm Output/generator.py
If the generator needs auxiliary files which are also in the gen directory, a variation of this hack is:
cp gen/* Output
genfiles=(Output/*)
python Output/generator.py
rm "${genfiles[#]}"
This assumes that the generator does not need auxiliary files with names starting with a period ("hidden files").

Calling inputs to myProgram from different directories

myProgram takes three files as inputs, like so:
$ myProgram inputA inputB inputC
And say these inputs themselves reside in their own respective directories w/ some additional files:
directoryA
    inputA
    inputA_helperfile1
    inputA_helperfile2
directoryB
    inputB
    inputB_helperfile1
    inputB_helperfile2
directoryC
    inputC
    inputC_helperfile1
    inputC_helperfile2
myProgram will not run properly unless all three inputs as well as these additional files (dependencies? Is that the right term?) are in the same directory. But I do not want to put all these files into the same directory in order to execute myProgram. Is there a workaround for this scenario?
I am very new to bash (and programming/scripting in general), so please forgive me if this is a trivial question! (It is non-trivial to me, and I was unable to find an adequate answer by Googling for it.)
It might be easier to propose a good solution if you would explain what exactly myProgram is. Did you implement it?
Is it documented that it requires all files to be in one directory?
What happens if you call your program like this?
myProgram directoryA/inputA directoryB/inputB directoryC/inputC
If myProgram requires that all files are in the same directory, you could write a script that creates a temporary directory, changes the working directory into this temporary directory, copies all files there, executes myProgram inputA inputB inputC, leaves the temporary directory and removes it including all contents.
Instead of copying the files you can also create symbolic links in the temporary directory if your file system allows this.
You would probably implement your script to be called like this:
myScript directoryA/inputA directoryB/inputB directoryC/inputC
You could use dirname and find to list all files from directory[ABC] if your program needs all files that reside in these directories. Otherwise you have to specify how to find out which of all the files are inputA_helperfile1 etc.
You may have to handle duplicate file names. If e.g. inputA_helperfile1 and inputB_helperfile1 would actually be the same file names with different content you cannot copy both files into the same directory.
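A minimal sketch of such a wrapper script (hypothetically named myScript here), using symbolic links and assuming exactly three input arguments:
#!/bin/bash
# Stage each input's whole directory in a temporary directory, run myProgram
# there, then clean up. Sketch only: it does not collect myProgram's output
# files, and it will abort if two directories contain files with the same name.
set -e
tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

for input in "$@"; do
    # link every file that sits next to this input into the temporary directory
    ln -s "$(cd "$(dirname "$input")" && pwd)"/* "$tmpdir"/
done

cd "$tmpdir"
myProgram "$(basename "$1")" "$(basename "$2")" "$(basename "$3")"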

Loop Over Files as Input for Program, Rename and Write Output to Different Directory

I have a problem writing the output of a program to a different directory when I loop over different input files. I run this in the command line. The problem is that I do not know how to "tell" the program to put the output, with a changed filename, into a different directory than the input directory.
Here is the command, although it is a bioinformatics tool which requires specific input file formats, so I am sorry that I could not give a better example. The program is called computeMatrix and is part of a toolbox called deeptools2.
command:
for f in ~/my/path/*spc_files*; do computeMatrix reference-point --referencePoint center --regionsFileName /target/region.bed --binSize 500 --scoreFileName "$f" --outFileName "$f.matrix"; done
So far, I tried to use the basename command to get just the filename and then put a different directory in front of it. However, I could not figure out:
whether this is combinable
what the correct order of the commands is (e.g.:
outputFile='basename"$f"', "~/new/targetDir/'basename$f'")
Probably there are other options to solve the problem which I could not think of or find.
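A minimal sketch of how basename can be combined with an output directory inside such a loop (the directory ~/new/targetDir is just a placeholder):
outdir=~/new/targetDir
for f in ~/my/path/*spc_files*; do
    # keep the input's filename but drop its directory, then prepend the output directory
    computeMatrix reference-point --referencePoint center \
        --regionsFileName /target/region.bed --binSize 500 \
        --scoreFileName "$f" \
        --outFileName "$outdir/$(basename "$f").matrix"
done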

OS X bash For loop only processes one file in a directory

I'm trying to get this code to process all files in a directory: https://github.com/kieranjol/ifi-ffv1/blob/master/ifi-ffv1.sh
I run it in the terminal and pass the path to a file: ./ifi-ffv1.sh /path/to/file.mov. How can I get it to move on to the next file? I'll also need to make sure that it only processes AV files, such as .avi/.mkv/.mov etc.
I've tried using while loops with shift but I can't get that to work either.
I've tried adding a specific path like in http://www.cyberciti.biz/faq/unix-loop-through-files-in-a-directory/ but I'm failing.
I've tried this https://askubuntu.com/a/315338 and it keeps looping over the same file rather than moving on to the next one. http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html didn't help me either.
I know this is going to be a horribly simple solution but I'm very new to this.
You don't actually have any kind of loop in your code. You need to do something like
for file in path/to/*.avi path/to/*.mkv path/to/*.mov
do
    ./ifi-ffv1.sh "$file"
done
which will loop through all the specified files and substitute each one for $1
You can put whatever file names you want instead of path/to/*.avi path/to/*.mkv path/to/*.mov. If you cd to the directory first, you can leave out the paths and just use *.avi *.mkv *.mov.
To do it all in one script, do something like this:
cd <your directory>
for file in *.avi *.mkv *.mov
do
    <your existing script here>
done
replacing all the $1's in your script with "$file" (not duplicating any quotes you already have, of course)

extract specific folder in shell command using unzip

My backup.zip has the following structure.
OverallFolder
    lots of files and subfolders inside
I used this: unzip backup.zip -d ~/public_html/demo
So I end up with ~/public_html/demo/OverallFolder/<my other files>.
How do I extract so that I end up with all my files INSIDE OverallFolder GOING DIRECTLY into ~/public_html/demo?
~/public_html/demo/<my other files>
Like this?
If you can't find any option to do that, this is the last resort:
mv ~/public_html/demo/OverallFolder/* ~/public_html/demo/
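A slightly fuller sketch of that last-resort approach, which also moves hidden files and removes the then-empty folder:
unzip backup.zip -d ~/public_html/demo
shopt -s dotglob          # make * match hidden files too
mv ~/public_html/demo/OverallFolder/* ~/public_html/demo/
rmdir ~/public_html/demo/OverallFolder
shopt -u dotglob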
(cd ~/public_html/demo; unzip "$OLDPWD/backup.zip")
This, in a subshell, changes to your destination directory, unzips the file from your source directory, and when the subshell exits, leaves you back in your source directory.
That, or something similar, should work in most shells.
