Move newly created text files to a directory created from a variable - bash

I have several text docs that are created each day from templates. This process I've achieved successfully, albeit probably in a Cro-Magnon way. I want these newly created text files to be filed within a newly created dated folder.
The script creates the text docs from the templates successfully and also creates the newly dated directory. I don't really want to create these text files somewhere else and then move them into the newly created directory; rather, I'd like them created directly within it. All my research tends to involve directories that already exist, rather than one created from a variable.
I've included just one file creation example below.
Hope you can help. TIA
today=`date '+%y%m%d'`;
today_Folder=~/Desktop/test/"${today}"
if [[ ! -d $today_Folder ]]
then
mkdir "${today_Folder} `(date '+%A')`"
fi
cat ~/Desktop/test/template.txt >> ~/Desktop/test/dest.txt
P.S. I've tried to make the cat command regarding the text files clearer - it simply creates files. I'm NOT trying to create a tree of directories. Simply ONE newly created directory that could be in test along with the text files.

Your question is how to dynamically create a file, also creating all the directories on its path? That's not possible in any intuitive/portable way; it's just not how it's typically done: programs always have to create the directory before the file. What you can do is pass the -p flag to mkdir (it's specified by POSIX, so it's portable). This flag means "create all the directories necessary for this path". Zero directories is okay, so you don't need to check whether the directory already exists. So change the whole if block to just this:
mkdir -p "${today_Folder} $(date '+%A')"
Also, it's kind of smelly the way you want a single string (the path) and you're using three operations to create it. Could it be simpler? You want more statements when they add clarity, but in this case the steps are so simple that the only thing they accomplish is making your colleagues go back and read what you wrote more than once. It might suit to change it to:
dir_path=...
mkdir -p "${dir_path}"
To accomplish this, keep in mind that instead of backticks you can write command substitutions with $(). It helps since backticks can't be nested, and it makes the line more readable, since you can clearly see where the command starts and ends.
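Putting it together, a minimal sketch of the whole script (assuming the same ~/Desktop/test layout and template/dest file names from the question):
today_folder=~/Desktop/test/"$(date '+%y%m%d') $(date '+%A')"
# -p creates the directory if needed and is a no-op when it already exists
mkdir -p "$today_folder"
# the doc is created from its template directly inside the dated directory
cat ~/Desktop/test/template.txt >> "$today_folder/dest.txt"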

Related

How do I not have a duplicated custom bash script directory on my machine and add it as a link that is picked up with PATH?

I have a /.cust_dev_cmds/ directory on my MBP machine that is part of a parent sysadmin-work directory tracked in a git repo. I would like to be able to:
1. Not have to use a for loop in my .bash_profile to source all the *.sh files.
2. Add the directory to PATH with an export line in the .bash_profile instead.
# from my .bash_profile
export PATH="/Users/<my-usr-name>/Public/sharable-dev-scripts:$PATH"
This does show up with an echo $PATH, but when I try to invoke one of my custom functions (which worked when the scripts were sourced directly in a loop from the .bash_profile, as in point #1 above), like this:
# create a directory with a builtin command
mkdir test-dir
# use one of my custom ones to create a simple readme.md within that directory:
mkr !!:1
# I am getting >>> mkr: command not found
3. Use whatever type of link is appropriate so as not to have a duplicated directory structure on the machine. Any good explanations to read up on here, without using $10 words, would be great.
4. Define a means to test that the link works and is picked up through PATH. It would also be nice if something like declare -F were available to confirm that the scripts within the directory are in fact becoming callable functions in the shell. Is there a command anyone knows to do this?
5. Step this up a notch for a shared network directory. I have created a shared directory through Apple > System Preferences > Sharing, and turned on the ability to share this directory in the Public folder.
Is there a tutorial that can outline this with something like VirtualBox and an Ubuntu guest that accesses the commands from the MBP shared directory?
I have already realized point #1, so really the question begins with #2 (I mention the first point only so no one suggests it). I have read a bit on links, but most of the articles I come across describe them in ways that are difficult to wrap my head around, especially when it comes to adding the linked directory to PATH. I believe the answer may revolve around how links are followed, but it may be better to back up and punt: dig back into linking first, then export my directory appropriately without a link, and eventually work out the proper resolution to this situation.
The last thought on links before I try a few hacks on my own: do I need to add a link only to the Public directory and somehow place a flag to look at all the directories within /Public, or is it better to drill all the way down to /Public/shared-directory/.cust_dev_cmds? Any direction would be greatly appreciated. My goal is to be able to have a few custom command directories for various tasks, and eventually use them across networks/instances.
When you want all the functions that you wrote in files in /.cust_dev_cmds/, the normal way would be to source all the files.
When you want to avoid a loop, you can use
utildir="$HOME/.cust_dev_cmds" # I think the path is relative to your home
source <(cat "${utildir}"/*)
When you want the functions found via PATH, you should make a separate script file for each function.
Before:
# cat talk
ask() { echo "How are you?"; }
answer() { echo "Fine, thank you"; }
After:
# cat ask
echo "How are you?"
# cat answer
echo "Fine, thank you"
When you want all users to use the same set of functions, consider a master script that sources all scripts (the master file can use user-dependent settings like HOME or VERSION):
# cat /Public/shared-directory/setup_functions
utildir="$HOME/.cust_dev_cmds" # I think the path is relative to your home
source <(cat "${utildir}"/*)
source some_other_file
Now each user only needs to source one file.
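So the per-user setup reduces to a single line, for example:
# in each user's ~/.bash_profile
source /Public/shared-directory/setup_functions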

Creating a directory that is read only

I need to know if it is possible to create a read-only directory from the Windows command line.
I know it is possible to use chmod and make files read-only. But what I need is to create a folder and then immediately set it as read-only upon creation. Trying to create new files inside this directory should then throw an error.
This can be done manually by modifying folder permissions in the GUI, but I need to do it from cmd for some tests.
I tried
attrib +r dirPath
But this only works for files and not for the whole directory.
Any help is appreciated.
EDIT:
Some background to my problem: I need to test the behavior of a piece of software that writes some text files. I want to test the use case where I ask the software to write to a read-only directory, to see that I handle the exceptions correctly and inform users appropriately.
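For what it's worth, a sketch of one way to do this with icacls, which ships with modern Windows (not tested here, and note that the "Everyone" group name differs on non-English installations):
mkdir lockedDir
rem deny "create file" (WD) and "create subdirectory" (AD) on the directory
icacls lockedDir /deny Everyone:(WD,AD)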

OS X bash For loop only processes one file in a directory

I'm trying to get this code to process all files in a directory: https://github.com/kieranjol/ifi-ffv1/blob/master/ifi-ffv1.sh
I run it in the terminal and pass it the path to a file: ./ifi-ffv1.sh /path/to/file.mov. How can I get it to move on to the next file? I'll also need to make sure that it only processes AV files, such as .avi, .mkv, .mov, etc.
I've tried using while loops with shift but I can't get that to work either.
I've tried adding a specific path like here but I'm failing http://www.cyberciti.biz/faq/unix-loop-through-files-in-a-directory/
I've tried this https://askubuntu.com/a/315338 and it keeps looping over the same file rather than moving on to the next one. http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html didn't help me either.
I know this is going to be a horribly simple solution but I'm very new to this.
You don't actually have any kind of loop in your code. You need to do something like
for file in path/to/*.avi path/to/*.mkv path/to/*.mov
do
./ifi-ffv1.sh "$file"
done
which will loop through all the specified files and substitute each one for $1
You can put whatever file name patterns you want instead of path/to/*.avi path/to/*.mkv path/to/*.mov. If you cd to the directory first, you can leave out the paths and just use *.avi *.mkv *.mov.
To do it all in one script, do something like this:
cd <your directory>
for file in *.avi *.mkv *.mov
do
<your existing script here>
done
replacing all the $1's in your script with "$file" (not duplicating any quotes you already have, of course)
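One caveat: if a glob pattern matches nothing, bash passes the pattern through unexpanded, as if it were a literal file name. A small bash-specific guard for that, sketched here:
# make unmatched globs expand to nothing instead of themselves
shopt -s nullglob
for file in *.avi *.mkv *.mov
do
./ifi-ffv1.sh "$file"
done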

Comparing a variable with file names in bash

I've recently started to learn bash scripting and have started to create a file repository system. I have gotten pretty far and am able to add and remove files. When I remove a file from the repository it actually leaves the file in place but changes permissions so only the user that removed it can use it; it then sends a copy to their home area, and it also changes the name of the file left behind to "$fileNameOUT".
I now plan to add a feature to my add function which checks, after a file has been added, whether there is a file with the same name but with "OUT" at the end. If it finds one, the old file will be sent to a backup folder so files can be restored. I know I have to loop through the directory using a for loop; however, the problem I'm having is that I don't know how I can compare the file I have just added to all of the files in the directory.
I hope someone can make sense of what I just wrote.
If you know the name of the file you are interested in, you can use the -e test to check if it exists.
if [ -e fooOUT ]
then
echo File exists
fi
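Applied to the scenario in the question, a sketch (where $repo and $newfile are hypothetical names for the repository directory and the file that was just added):
if [ -e "$repo/${newfile}OUT" ]
then
mkdir -p "$repo/backup"
mv "$repo/${newfile}OUT" "$repo/backup/"
fi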

Copying directories recursively using shell script

Should be an easy question for the gurus here, though it's hard to explain in text, so hopefully this is clear. I've got two directories on a box with some flavor of Unix on it, and a script that I want to use to move all the files and directories from one location to another.
First, an example of how the directories look:
Directory A: final/results/2012/2012-02/2012-02-25/name/files
Directory B: test/results/2012/2012-02/2012-02-24/name/files
So you see they're very similar. What I want to do is move everything from Directory B's 2012 directory, recursively, into the corresponding level of Directory A. So you'd end up with:
someproject/results/2012/2012-02/2012-02-25/name/files
someproject/results/2012/2012-02/2012-02-24/name/files
etc.
I want this script to be future-proof though, meaning I don't want the 2012 hardcoded. Also, towards the end of a month you will potentially have data from two different months, and both need to be copied into the 2012 directory. So here is the command I used in the shell script file:
CONS="/someproject";
ROOT="/test";
/bin/cp -r ${ROOT}/results/* ${CONS}/results/*
but this resulted in:
/final/results/2012/2012-02/2012-02-25/name/files
and
/final/results/2012/2012/2012-02/2012-02-24/name/files
So, as I hope is clear, it started a level below where I wanted it to. Can anyone fill me in on what I'm doing wrong, if you can understand what I'm even trying to explain? My apologies if it's not clear. I'm sure this is a fairly simple fix, but I'm not sure what to do. Shell scripting is not a strong point of mine.
One poster suggests rsync, which is overkill.
cp -rp will work fine. If you want to move the files, just mv the directory: it and everything under it will move too.
The only real problem here is the terminating *'s in the original command line. You don't need the * on the destination: you're trying to pass cp a directory to copy into, not the names of all the files and directories already there. The shell expands that destination * to an existing directory (2012), which is why everything landed one level deeper than you wanted.
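Concretely, a fixed version of the command from the question might look like this (the source glob merges the contents of results; the destination is passed as a single directory):
CONS="/someproject"
ROOT="/test"
/bin/cp -rp "${ROOT}"/results/* "${CONS}/results/"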
You could also use a tool like rsync to make sure your source and target are synchronized.
rsync -av ${ROOT}/results/ ${CONS}/results/
You specified that you want to "move" the files, though. Which means deleting the originals after they're copied:
rsync -av --remove-source-files ${ROOT}/results/ ${CONS}/results/
If you start playing around with rsync, be sure to read the man page about how it treats trailing slashes.
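In short, a trailing slash on the source decides whether the directory itself or only its contents get copied:
rsync -av results dest/    # copies the directory "results" itself into dest/
rsync -av results/ dest/   # copies only the contents of results into dest/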
