I am writing code inside of an svn repository but I really don't want to test run my code from within the repo. (I have a ../computations directory outside of the repo for this). Ideally, the computations directory would be a one-way symbolic link from the repo so that each edit to the source (inside the repo) will be immediately available to the ../computations directory.
The problem is that there is no such thing as a one-way symbolic link. An rsync shell script is about as close as I can get to a one-way mirror of the repo, but I am trying to minimize the chances of me forgetting to (or becoming tired of) 'updating' the ../computations directory.
What are my options here?
More details: I am working with C++ and Python code that spans more than one - but less than ten - directories, and editing in vim.
Alright, here goes:
We have an rsync script named snk (kept in the repo and symlinked from /usr/local/bin) that looks like:
#! /bin/bash
# this script rsyncs the repo directory
# into the ../computations directory on various architectures.
# leave this script here (in the repo!) and create a symlink to it from someplace in your $PATH
# get this host
HOST=${HOSTNAME}
# define various hosts in order to dictate specific rsync commands
hostA="someHost"
hostB="someOtherHost"
if [ "$HOST" = "$hostA" ]
then
rsync -zvai --exclude=.svn /full/path/to/repo/on/hostA/ /full/path/to/computations
elif [ "$HOST" = "$hostB" ]
then
rsync -zvai --exclude=.svn /full/path/to/repo/on/hostB/ /full/path/to/computations
fi
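For completeness, hooking the script up the way the comment describes would look roughly like this, assuming the script itself is saved at /full/path/to/repo/snk (the same placeholder path used in the script):
chmod +x /full/path/to/repo/snk                        # make the in-repo script executable
sudo ln -s /full/path/to/repo/snk /usr/local/bin/snk   # expose it on $PATH as snk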
Then we went to Google and found this question about 'vim: rsync on save' and gave it a shot. Take a look at this portion of my new .vimrc file:
:if system('pwd') =~ "/full/path/to/base/of/repo"
: au BufWritePost * !snk
:endif
This is a first-order approximation to a solution to my problem; I hope it helps!
Thank you vipraptor!
I have a /.cust_dev_cmds/ directory on my MBP machine that is part of a parent sysadmin-work directory tracked in a git repo. I would like to be able to:
Not have to use a for loop in my .bash_profile to source all the *.sh files.
Add the directory to PATH with an export line in the .bash_profile instead:
# from my .bash_profile
export PATH="/Users/<my-usr-name>/Public/sharable-dev-scripts:$PATH"
The directory does show up in echo $PATH, but when I try to invoke one of the functions from those scripts (functions that worked when sourced directly from the .bash_profile in a loop, as in point #1 above), like this:
# create a directory with a standard command
mkdir test-dir
# use one of my custom ones to create a simple readme.md within that directory:
mkr !!:1
# I am getting >>> mkr: command not found
Use whatever type of link is needed so that I don't have a duplicated directory structure on the machine.
Any good explanations to read up on here without using $10 words would be great.
Define a means to test that the link works and is picked up through PATH. It would also be nice if something like declare -F could show that the scripts within the directory are in fact becoming callable functions in the shell.
Is there a command anyone knows of to do this? (A concrete sketch of what I mean is at the end of this question.)
Step this up a notch for a shared network directory. I have created a shared directory through Apple > System Preferences > Sharing and turned on sharing for this directory inside the Public folder.
Is there a tutorial that can outline this with something like VirtualBox and an Ubuntu guest that is accessing the commands from the MBP shared directory?
I have already realized point #1, so really the question begins with #2 (I mention it only so no one suggests the first one). I have read a bit on links, but most of the articles I come across describe them in a way that is difficult to wrap my head around, especially when it comes to adding the linked directory to PATH. I believe the answer may revolve around how links are followed, but it may be better to back up and punt: dig back into linking first, then export my directory appropriately without a link, and eventually get to the proper resolution of this situation.
The last thought on links before I try a few hacks on my own: do I need to add a link only to the Public directory and somehow flag it so that all the directories within /Public are searched, or is it better to drill all the way down to /Public/shared-directory/.cust_dev_cmds? Any direction would be greatly appreciated. My goal is to be able to have a few custom command directories for various tasks, and eventually share them across networks/instances.
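To make the "define a means to test" point concrete, the kind of check I have in mind is something like the following (mkr is one of my own commands, as above):
echo "$PATH" | tr ':' '\n'   # is the scripts directory (or the link to it) actually on PATH?
type mkr                     # does the shell find mkr, and as a function, alias, or file?
declare -F                   # list the names of all functions currently defined in this shell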
When you want all the functions you wrote in files in /.cust_dev_cmds/, the normal way is to source all the files.
When you want to avoid a loop, you can use
utildir="$HOME/.cust_dev_cmds/" # I think the path is relative to your home).
source <(cat ${utildir}/*)
When you want the commands found via PATH, you should make a separate executable file for each one (PATH lookup finds executables, not shell functions).
Before:
# cat talk
ask() { echo "How are you?"; }
answer() { echo "Fine, thank you"; }
After:
# cat ask
echo "How are you?"
# cat answer
echo "Fine, thank you"
When you want all users to use the same set of functions, consider a master script that sources all the scripts (the master file can use user-dependent settings like HOME or VERSION):
# cat /Public/shared-directory/setup_functions
utildir="$HOME/.cust_dev_cmds/" # I think the path is relative to your home).
source <(cat ${utildir}/*)
source some_other_file
Now each user only needs to source one file.
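Concretely, that one line in each user's .bash_profile would be:
source /Public/shared-directory/setup_functions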
Usually I source all the macros I have for the jobs run on a remote machine using this command:
macros=/my_directory
But I see someone uses a different way to get all the macros for submitting the jobs in a remote machine. He uses this command:
macros=$(dirname $(readlink -f $BASH_SOURCE))
Now I want to know what advantages the dirname approach has over giving the specific macro location. It would be great if you could explain sourcing the macros using dirname.
By using dirname you get the directory where the script itself is located, so it's easy to source other files that sit next to your script without worrying about specifying the correct path each time the script bundle is relocated.
For instance, if your script contains source $macros/some_script.sh, it will not break when the bundle is moved to /usr/local/bin/ or /bin/ or ...
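A small illustration of the pattern (the file names here are made up): a script that locates itself this way can be moved anywhere, as long as its sibling files move with it.
#!/bin/bash
# hypothetical wrapper that lives in the same directory as some_script.sh
macros=$(dirname "$(readlink -f "$BASH_SOURCE")")   # absolute directory containing this script
source "$macros/some_script.sh"                     # sibling file, found relative to the script itself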
Regarding $BASH_SOURCE see: https://stackoverflow.com/a/35006505/2146346
I have a number of scripts that I use almost everyday in my work. I develop and maintain these on my personal laptop. I have a local git repository where I track the changes, and I have a repository on github to which I push my changes.
I do a lot of my work on a remote supercomputer, and I use my scripts there a lot. I would like to keep my remote /home/bin updated with my maintained scripts, but without cluttering the system with my repository.
My current solution does not feel ideal. I have added the code below to my .bashrc. Whenever I log in, my repository is deleted and I clone my project from github again. Then I copy the script files I want to my bin and make them executable.
This sort of works, but it does not feel like an elegant solution. I would like to simply download the script files directly, without bothering with the git repository. I never edit my script files from the remote computer anyway, so I just want to get the files from github.
I was thinking that perhaps wget could work, but it did not feel very robust to include the urls to the raw file page at github; if I rename the file I suppose I have to update the code as well. At least my current solution is robust (as long as the github link does not change).
Code in my .bashrc:
REPDIR=mydir
if [ -d "$REPDIR" ]; then
    rm -rf "$REPDIR"
    echo "Old repository removed."
fi
cd "$HOME"
git clone https://github.com/user/myproject
cp "$REPDIR"/*.py "$REPDIR"/*.sh /home/user/bin/
chmod +x /home/user/bin/*
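For comparison, the "just download the files" idea would look something like the sketch below; it assumes the repository is public, its default branch is master, and GNU tar is available on the remote machine (user/myproject is the same placeholder as above):
tmp=$(mktemp -d)
# grab a tarball of the default branch instead of cloning the whole history
curl -sL https://github.com/user/myproject/archive/refs/heads/master.tar.gz \
    | tar -xzf - -C "$tmp" --strip-components=1
cp "$tmp"/*.py "$tmp"/*.sh /home/user/bin/
chmod +x /home/user/bin/*
rm -rf "$tmp"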
Based on Kent's solution, I have defined a function that updates my scripts. To avoid any issues with symlinks, I just unlink everything and relink; that might just be my paranoia, though.
function updatescripts() {
    DIR=/home/user/scripts
    CURR_DIR=$PWD
    cd "$DIR"
    git pull origin master      # update the local clone
    cd "$CURR_DIR"
    # (re)link every script into ~/bin
    for file in "$DIR"/*.py "$DIR"/*.sh; do
        if [ -L "$HOME/bin/$(basename "$file")" ]; then
            unlink "$HOME/bin/$(basename "$file")"
        fi
        ln -s "$file" "$HOME/bin/$(basename "$file")"
    done
}
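Now getting the latest scripts on the remote machine is just a matter of calling the function:
updatescripts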
On that remote machine, don't do rm and then clone; keep the repository somewhere and just do a pull. Since you said you will not change the files on that machine, there won't be conflicts.
For the script files, don't cp them; instead, create symbolic links (ln -s) to them in your target directory.
We have a git repository for a scientific software where we need to maintain a certain folder structure for our data files.
These folders should remain empty, everything that will be put there should not be tracked by git. However, it is necessary that those folders exist.
The solution to accomplish this was to add a .gitignore file into every directory which looks like this:
*
!.gitignore
which means everything inside this folder is ignored except for the .gitignore file.
This works very well.
We maintain all our data on one particular server.
Our scientists use this server often for their calculations.
It would be very convenient to replace the data folders in the git repository, which currently contain only the .gitignore file, with symbolic links to the full data folders on this server. The data folders on the server also contain a .gitignore file that looks exactly the same as the one in every repository.
I wrote a bash script to do this which looks like this:
rm -r path/to/empty/data/in/repository/name
ln -sfn /absolute/path/to/data/on/server/ path/to/empty/data/in/repository
Now the software runs perfectly and you have access to all the data without copying it into your git repository.
However, git now gets confused.
If I run git status only my changes are listed as expected. It does not complain about the new symbolic links which replaced the existing directories.
As soon as I run git add . to stage my changes the symbolic links appear as new file: and the .gitignore files in the replaced folder are listed as deleted:.
This seems like a problem to me, because as soon as somebody pushes code changes made on the server, the symbolic links would get uploaded (I guess) and the .gitignore files would get removed, and thus the folder structure would not be preserved.
Is it possible to tell git that it should compare the content of the symbolic linked folders rather than the symbolic link itself?
PS: I know this looks like a software design issue with the static folder structure being inside git, but I do not want to discuss that here. We are all scientists and not programmers, and the software has been developed for over 10 years by many different people. It is not possible to change the code to make it more flexible.
EDIT: This bash code reproduces the problem:
cd ~ #setup
mkdir tmp
cd tmp
mkdir server #server data folder (this one is full of data)
mkdir server/data
printf '*\n!.gitignore' > server/data/.gitignore
printf 'data file 1' > server/data/data1.txt
printf 'data file 2' > server/data/data2.txt
mkdir repo #repo data folder (this one only contains .gitignore file)
mkdir repo/data
printf '*\n!.gitignore' > repo/data/.gitignore
cd repo # create a dummy repo
git init
git add .
git commit -am"commit 1"
git status
cd .. # replace data folder with server/data folder which has exactly the same content
rm -r repo/data/
ln -sfn ~/tmp/server/data/ ./repo/
cd repo
git status
At the end git status should ideally not list any changes in the repository.
EDIT:
I found a workaround: instead of linking the whole directory I'm now linking the content of the directory:
ln -sfn /absolute/path/to/data/on/server/* path/to/empty/data/in/repository/
This works because the symbolic links themselves are ignored due to the .gitignore file.
The drawback is that it only works for existing data; as soon as there is a new file in the server directory, I have to run the bash script again.
Git tracks symbolic links themselves, not the directories they point to. What you're trying to achieve can be done with bind mounts instead: keep repo/data in place (i.e. skip the rm -r repo/data/) and, instead of the final ln -sfn ~/tmp/server/data/ ./repo/, mount the server's data directory on top of it. Note that mount --bind takes the source directory first and the mount point second:
sudo mount --bind "$HOME/tmp/server/data" "$PWD/repo/data"
Git then sees an ordinary directory whose contents are covered by the same .gitignore, so git status stays clean and the folder structure in the repository is preserved.
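If the mount ever needs to be undone (for example before removing the working copy), unmounting the mount point is enough:
sudo umount "$PWD/repo/data"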
I have a folder path stored in a variable ${PROJECT_DIR}.
I want to navigate up into its parent folder and back down into a folder called "Texture Packer", i.e. ${PROJECT_DIR} and "Texture Packer" are siblings.
How do I specify this in a shell script?
So far I have:
TP=/usr/local/bin/TexturePacker
# create all assets from tps files
${TP} "${PROJECT_DIR}/../Texture Packer/*.tps"
But this is incorrect, since TexturePacker can't find the files at that path. The error message is:
TexturePacker:: error: Can't open file
/Users/john/Documents/MyProj/proj.ios_mac/../Texture Packer/*.tps for
reading: No such file or directory
EDIT: The following seems to work but isn't clean:
#!/bin/sh
TP=/usr/local/bin/TexturePacker
if [ "${ACTION}" = "clean" ]
then
    # remove sheets - please add a matching expression here
    # Some unrelated stuff
else
    cd "${PROJECT_DIR}"
    cd ..
    cd "Texture Packer"
    # create all assets from tps files
    ${TP} *.tps
fi
exit 0
You're on the right track; the problem is that wildcards (like *.tps) don't get expanded when they're in quotes. The solution is to leave that part of the path outside of the quotes:
${TP} "${PROJECT_DIR}/../Texture Packer"/*.tps
BTW, I almost always recommend against using cd in scripts. It's too easy to lose track of where the current directory will be at various points in the script, or to have an error occur and the rest of the script run in the wrong place, or... Also, any relative paths you're using (e.g. those supplied by the user as arguments) change meaning every time you cd. Basically, it's an opportunity for things to go weirdly wrong.
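Putting both points together, the EDIT script above could drop the cd calls entirely; a sketch along those lines, keeping the same variables:
#!/bin/sh
TP=/usr/local/bin/TexturePacker
if [ "${ACTION}" = "clean" ]
then
    :   # clean-up steps go here (omitted, as in the original)
else
    # create all assets from tps files without changing directory;
    # the wildcard stays outside the quotes so the shell expands it
    "${TP}" "${PROJECT_DIR}/../Texture Packer"/*.tps
fi
exit 0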