myProgram takes three files as inputs, like so:
$ myProgram inputA inputB inputC
And say these inputs themselves reside in their own respective directories with some additional files:
directoryA
    inputA
    inputA_helperfile1
    inputA_helperfile2
directoryB
    inputB
    inputB_helperfile1
    inputB_helperfile2
directoryC
    inputC
    inputC_helperfile1
    inputC_helperfile2
myProgram will not run properly unless all three inputs as well as these additional files (dependencies? Is that the right term?) are in the same directory. But I do not want to put all these files into the same directory in order to execute myProgram. Is there a workaround for this scenario?
I am very new to bash (and programming/scripting in general), so please forgive me if this is a trivial question! (It is non-trivial to me, and I was unable to find an adequate answer by Googling for it.)
It might be easier to propose a good solution if you would explain what exactly myProgram is. Did you implement it?
Is it documented that it requires all files to be in one directory?
What happens if you call your program like this?
myProgram directoryA/inputA directoryB/inputB directoryC/inputC
If myProgram requires all files to be in the same directory, you could write a script that creates a temporary directory, changes the working directory to it, copies all the files there, executes myProgram inputA inputB inputC, then leaves the temporary directory and removes it along with all its contents.
Instead of copying the files you can also create symbolic links in the temporary directory if your file system allows this.
You probably would implement your script to be called like this
myScript directoryA/inputA directoryB/inputB directoryC/inputC
You could use dirname and find to list all files from directory[ABC] if your program needs all files that reside in these directories. Otherwise you will have to specify how the script can figure out which files are inputA_helperfile1 etc.
You may have to handle duplicate file names. If, for example, inputA_helperfile1 and inputB_helperfile1 were actually the same file name with different contents, you could not copy both files into the same directory.
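A minimal sketch of such a wrapper script, assuming myProgram is on your PATH and only needs the files that sit next to each input (both are assumptions, since the program's real requirements aren't known):

#!/bin/bash
# myScript -- run as: myScript directoryA/inputA directoryB/inputB directoryC/inputC
# Sketch only; it will fail on duplicate file names, as noted above.
set -e

tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT   # remove the temporary directory on exit

# Symlink every file from each input's directory into the temporary directory.
for input in "$@"; do
    dir=$(cd "$(dirname "$input")" && pwd)   # absolute path of the input's directory
    ln -s "$dir"/* "$tmpdir"/
done

cd "$tmpdir"
myProgram "$(basename "$1")" "$(basename "$2")" "$(basename "$3")"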
I noticed something rather interesting yesterday. ls and ls -a don't show the inner workings of a .app "directory?", yet you can cd into the bin directory of the app if you know the path. Why is this?
I'm not from the Mac world, but in general, when asked for the contents of a given directory, a file system delivers a list of names; and when asked for a particular entry in that directory it is free to return values even when the entry wasn't listed before. The two functions are independent in general, but of course most file systems keep them congruent most of the time. It seems OS X breaks this congruence deliberately.
Example:
The file system is asked for the contents of directory /foo/bar and it returns the list of the entries xyz, abc, and blah. This means that the entries /foo/bar/xyz, /foo/bar/abc, and /foo/bar/blah exist.
When asked for the stat() of the path /foo/bar/ghi, one could assume that the file system now has to answer with No Such File or a similar error. But in fact, it does not have to. It can return stats which mean that the path is a directory or similar.
While this is unusual in general, the operating system itself and most tools can handle this situation quite well. Tools like ls or find rely on the list of directory entries, so they will process only the entries returned by the file system. But if given an explicit path (which wasn't part of a list of directory entries), they will process that one.
You can make a test: If ls -la /Applications/ doesn't list MATLAB_R2013a_Student.app, you will probably still be able to get information by typing ls -la /Applications/MATLAB_R2013a_Student.app directly.
Some more remarks: Whether you can enter that directory (cd) is a separate matter. cd works on any directory which exists and for which (and for whose parents) you have execute permission, that's all. The -a flag of ls also lists files starting with a ., but these are not considered "hidden", only "system", i.e. not relevant for normal user interaction. Most of the dot-files are configuration files etc.
I need to find a solution at work to back up specific folders daily, hopefully to a RAR or ZIP file.
If it were on a PC, I would have done it already. But I don't have any idea how to approach it on a Mac.
What I basically want to achieve is an automated task, that can be run with an executable, that does:
- Compress a specific directory (/Volumes/Audio/Shoko) to a rar or zip file (in the zip file, exclude all *.wav files in all subdirectories and a directory named "Videos").
- Move it to a network share (/Volumes/Post Shared/Backup From Sound), or compress directly to that folder.
- Automate the file name of the zip file with a dynamic date and time (so no duplicate file names).
- Shut down the Mac when finished.
I want to say again that I don't usually use a Mac, so things like what kind of file to use for the script are not yet trivial for me.
I tried to put Mark's bash lines (from the first answer, below) in a text file and execute it, but it had errors and didn't work.
I also tried to use Automator, but it's too basic, with no advanced options.
How can I accomplish this?
I would love a working example :)
Thank You,
Dave
You can just make a bash script that does the backup and then you can either double-click it or run it on a schedule. I don't know your paths and/or tools of choice, but something along these lines:
#!/bin/bash
# Build a dated archive path on the network share, then tar up the backup directory.
FILENAME=$(date +"/Volumes/path/to/network/share/Backup/%Y-%m-%d.tgz")
cd /directory/to/backup || exit 1
tar -cvzf "$FILENAME" .
You can save that on your Desktop as backup and then go in Terminal and type:
chmod +x ~/Desktop/backup
to make it executable. Then you can just double click on it - obviously after changing the paths to reflect what you want to backup and where to.
Also, you may prefer to use some other tools - such as rsync but the method is the same.
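For what it's worth, here is a sketch that covers the other requirements from the question. The paths come from the question itself; the zip exclusion patterns and the shutdown step are assumptions you should test before relying on them:

#!/bin/bash
# Archive /Volumes/Audio/Shoko to the network share with a timestamped name,
# skipping *.wav files and the "Videos" directory, then shut down.
FILENAME="/Volumes/Post Shared/Backup From Sound/Shoko-$(date +%Y-%m-%d_%H-%M-%S).zip"

cd /Volumes/Audio || exit 1
# -r recurses; -x excludes the given patterns (quoted so the shell doesn't expand them)
zip -r "$FILENAME" Shoko -x '*.wav' 'Shoko/Videos/*' || exit 1

# Shutting down requires root; without sudo this line will fail.
sudo shutdown -h now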
Can the shell override where output files are placed? (Not the console/screen output, but files created by a program.) I have a script that currently runs a sequence of input files through a program and for each one produces a lot of different output files.
for i in `seq 1 24`
do
    ../Bin/myprog inputfile.$i.in
done
Is there a way to create new directories for each run of the program and place the corresponding output files in each directory? So I would get dir1: <output files from run 1>; dir2 <output files from run 2> etc. I suppose one way would be to just write another script to create directories and sort all the files after the program(s) had run, but is there a more elegant way to do it?
As suggested in the comments, this might be what you need, assuming that your program just dumps output into the current working directory.
for i in `seq 1 24`
do
    mkdir $i
    pushd $i
    ../../Bin/myprog ../inputfile.$i.in
    popd
done
If you are trying to change where an existing program (e.g., myprog) writes its files, this is only possible if the program writes its files relative to the current directory. In this case, the outer script that invokes myprog can create a "destination" directory and chdir to it before invoking the program.
If the myprog program writes to an absolute path, e.g., /var/tmp/myprog.tmp, the only way to override where this write actually goes is to place a symbolic link at the absolute path linking to the desired destination. This will only work if the program (myprog) doesn't first delete an existing file before writing to it.
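As an illustration of the symbolic-link trick (the /var/tmp/myprog.tmp path is just the hypothetical example from above, and this assumes no file already exists at that path):

# Redirect writes aimed at /var/tmp/myprog.tmp into a directory we control.
mkdir -p "$HOME/captured-output"
ln -s "$HOME/captured-output/myprog.tmp" /var/tmp/myprog.tmp
../Bin/myprog inputfile.1.in   # its write to /var/tmp/myprog.tmp follows the link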
The third and most extreme possibility for directing absolute file path writes is to create a chroot'ed file system, in which the myprog output files will be contained, after which the outer script can copy or move them to where they are desired.
To summarize: other than changing the source, setting the working directory for relative-path output files, or chrooting a filesystem for absolute-path files, there really is no "elegant" way to replace the actual output files used in a program.
Should be an easy question for the gurus here, though it's hard to explain it in text so hopefully this is clear. I've got two directories on a box with some flavor of unix on it. I've got a script that I want to use to move all the files and directories from one location to another.
First, an example of how the directories look:
Directory A: final/results/2012/2012-02/2012-02-25/name/files
Directory B: test/results/2012/2012-02/2012-02-24/name/files
So you see they're very similar. What I want to do is move everything from the Directory B 2012 directory, recursively, to the same level of Directory A. So you'd end up with:
someproject/results/2012/2012-02/2012-02-25/name/files
someproject/results/2012/2012-02/2012-02-24/name/files
etc.
I want this script to be future proof though, meaning I don't want the 2012 hardcoded. Also, towards the end of a month you will potentially have data from two different months and both need to be copied into the 2012 directory. So here is the command I used in the shell script file:
CONS="/someproject";
ROOT="/test";
/bin/cp -r ${ROOT}/results/* ${CONS}/results/*
but this resulted in:
/final/results/2012/2012-02/2012-02-25/name/files
and
/final/results/2012/2012/2012-02/2012-02-24/name/files
So as I hope is clear, it started a level below where I wanted it to. Can anyone fill me in on what I'm doing wrong, if they can understand what I'm even trying to explain. My apologies if it's not clear. I'm sure this is a fairly simple fix but I'm not sure what to do. Shell scripting is not a strong point of mine.
One poster suggests rsync, which is overkill.
cp -rp will work fine. If you want to move the files, just mv the directory; it and everything under it will move too.
The only real problem here is the use of terminating *'s in the command line in the original script. You don't need the *, you're just trying to pass directories to the cp command, you aren't trying to pass it the names of all the files already in the source (and more importantly, the destination).
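For example, keeping the question's variable names, one way to write the corrected copy (the /. form tells cp to copy the directory's contents, which is presumably what was intended):

CONS="/someproject"
ROOT="/test"
# No globs: let cp copy the contents of results/ into the destination.
/bin/cp -rp "${ROOT}/results/." "${CONS}/results/"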
You could also use a tool like rsync to make sure your source and target are synchronized.
rsync -av ${ROOT}/results/ ${CONS}/results/
You specified that you want to "move" the files, though, which means deleting the originals after they're copied:
rsync -av --remove-source-files ${ROOT}/results/ ${CONS}/results/
If you start playing around with rsync, be sure to read the man page about how it treats trailing slashes.
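Briefly: a trailing slash on the source means "the contents of"; without it, rsync creates the source directory itself inside the destination. Either of these produces the layout wanted here:

rsync -av "${ROOT}/results"  "${CONS}"           # creates ${CONS}/results/...
rsync -av "${ROOT}/results/" "${CONS}/results/"  # copies the contents of results/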
My backup.zip has the following structure.
OverallFolder
    lots of files and subfolders inside
I used this:
unzip backup.zip -d ~/public_html/demo
So I end up with ~/public_html/demo/OverallFolder/my other files.
How do I extract so that I end up with all my files INSIDE OverallFolder GOING DIRECTLY into ~/public_html/demo?
~/public_html/demo/my other files
like this?
If you can't find any option to do that, this is the last resort:
mv ~/public_html/demo/OverallFolder/* ~/public_html/demo/
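Putting the two steps together (note that * does not match dot-files, so any hidden files at the top level of OverallFolder would be left behind, and the final rmdir would then fail):

unzip backup.zip -d ~/public_html/demo
mv ~/public_html/demo/OverallFolder/* ~/public_html/demo/
rmdir ~/public_html/demo/OverallFolder   # removes the now-empty wrapper folder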
(cd ~/public_html/demo; unzip "$OLDPWD/backup.zip")
This, in a subshell, changes to your destination directory, unzips the file from your source directory, and when the subshell exits, leaves you back in your source directory.
That, or something similar, should work in most shells.