How to find the local directories for Heroku apps?

I have a few old Heroku apps on my local machine and am not sure where they're stored any more. Is there any way to somehow locate them with the Heroku CLI?
Neither heroku apps:table nor heroku apps provides this info as far as I can tell.

Neither heroku apps:table nor heroku apps provides this info as far as I can tell
Why should they? Heroku doesn't care about where your local copies live. There can be zero or many of them for each app, and they can live on any number of machines.
Your best bet is probably to search for Git repositories using whatever tools are available for your operating system. For example, on a Unixy machine you might run
find ~ -type d -name .git
to look for directories named .git/ in your home directory. For each repository you find you could look for remotes containing git.heroku.com. Something like this should get you started:
find ~ -type d -name .git | while IFS= read -r GIT_DIR; do
    # Query each repository directly; no need to cd into it,
    # and the read loop copes with paths containing spaces.
    if git --git-dir="$GIT_DIR" remote -v | grep -q git.heroku.com; then
        echo "$GIT_DIR contains a Heroku remote"
    fi
done
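If you also want to know which app each match points at, the remote URL encodes the app name (git.heroku.com/<app>.git). A variant sketch, assuming the remote is named heroku (the name the Heroku CLI creates by default) and a Git version with remote get-url (2.7+):
# Print "<app-name> <work-tree-path>" for every repository with a heroku remote.
find ~ -type d -name .git | while IFS= read -r gitdir; do
    url=$(git --git-dir="$gitdir" remote get-url heroku 2>/dev/null) || continue
    app=$(basename "$url" .git)
    echo "$app ${gitdir%/.git}"
done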

Related

How to automatically download files from github without copying the repository

I have a number of scripts that I use almost everyday in my work. I develop and maintain these on my personal laptop. I have a local git repository where I track the changes, and I have a repository on github to which I push my changes.
I do a lot of my work on a remote supercomputer, and I use my scripts there a lot. I would like to keep my remote /home/bin updated with my maintained scripts, but without cluttering the system with my repository.
My current solution does not feel ideal. I have added the code below to my .bashrc. Whenever I log in, my repository is deleted, and I then clone my project from GitHub. Then I copy the script files I want to my bin and make them executable.
This sort of works, but it does not feel like an elegant solution. I would like to simply download the script files directly, without bothering with the git repository. I never edit my script files from the remote computer anyway, so I just want to get the files from github.
I was thinking that perhaps wget could work, but hard-coding the URLs to the raw file pages on GitHub does not feel very robust; if I rename a file, I suppose I have to update the code as well. At least my current solution is robust in that sense (as long as the GitHub link does not change).
Code in my .bashrc:
# Remove any stale clone, fetch a fresh one, then install the scripts.
REPDIR="$HOME/myproject"
if [ -d "$REPDIR" ]; then
    rm -rf "$REPDIR"
    echo "Old repository removed."
fi
cd "$HOME"
git clone https://github.com/user/myproject
cp "$REPDIR"/*.py "$REPDIR"/*.sh /home/user/bin/
chmod +x /home/user/bin/*
Based on Kent's solution, I have defined a function that updates my scripts. To avoid any issues with symlinks, I just unlink everything and relink. That might just be my paranoia, though...
function updatescripts() {
    DIR=/home/user/scripts
    CURR_DIR=$PWD
    cd "$DIR" || return
    git pull origin master
    cd "$CURR_DIR" || return
    for file in "$DIR"/*.py "$DIR"/*.sh; do
        # Drop any existing link before relinking.
        if [ -L "$HOME/bin/$(basename "$file")" ]; then
            unlink "$HOME/bin/$(basename "$file")"
        fi
        ln -s "$file" "$HOME/bin/$(basename "$file")"
    done
}
On that remote machine, don't do rm and then clone; keep the repository somewhere and just do a pull. Since you said you will not change the files on that machine, there won't be conflicts.
For the script files, don't cp them; instead, create symbolic links (ln -s) in your target directory.
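If you really want nothing Git-related on the remote machine, GitHub also serves tarball snapshots of a branch, so you can fetch the current files without any repository metadata. A minimal sketch, where user/myproject is the placeholder from the question and the paths are illustrative:
# Fetch a snapshot of the master branch (no .git directory involved),
# copy the scripts into ~/bin, then clean up.
TMP=$(mktemp -d)
curl -L https://github.com/user/myproject/archive/master.tar.gz | tar -xz -C "$TMP" --strip-components=1
cp "$TMP"/*.py "$TMP"/*.sh "$HOME/bin/"
chmod +x "$HOME"/bin/*
rm -rf "$TMP"
Unlike hard-coded raw-file URLs, this survives file renames: whatever is in the branch at that moment is what gets installed.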

Find git repository on macOS

I would like to find the location of a Git repository I made on my Mac. Is there a way to find, for example, albatrocity/git-version-control.markdown on macOS using the Terminal? I installed everything with default parameters. I guess it must be in the User directory, but I don't find anything related to GitHub there.
When I find it, I would like to completely remove it to make a "proper" install.
EDIT: since sudo find / -wholename "*albatrocity/git-version-control.markdown" showed me nothing, I searched for a file I knew the folder contained: sudo find / -name "parsing.py" -print
You can use find's -wholename option to find a file based on its name and folder:
find <directory> -wholename "*albatrocity/git-version-control.markdown"
Example, if you want to search in the /Users/ directory:
find /Users/ -wholename "*albatrocity/git-version-control.markdown"
If you have locate on your Mac, with a regularly running updatedb, locate might be much faster:
locate albatrocity | grep git-version-control.markdown
It uses a prebuilt database for fast filename lookups, but that database can be out of date if updatedb doesn't run regularly or if the file is too young (typically less than a day old).
If this is without success, I would fall back to a full search with find, but restrict it to a plausible, narrowed path if you can.
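On macOS specifically, Spotlight's command-line client is another option the answers above don't mention; it queries the existing Spotlight index, so there is no database of its own to maintain:
# Ask the Spotlight index for the file by name (macOS only).
mdfind -name git-version-control.markdown
Note that Spotlight skips locations excluded in its privacy settings, so a miss here does not prove the file is absent.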

Git "repository separate from work tree" solution not working under Windows

See the answer that I linked to this question:
https://stackoverflow.com/a/8603156/1445967
I could not get this to work at all under the latest Git for Windows (Windows 7 x64).
I used git bash:
<my username> /d/<worktree>
$ git --git-dir=/c/dev/gitrepo/repo.git --work-tree=. init && echo "gitdir: /c/dev/gitrepo/repo.git" > .git
Initialized empty Git repository in c:/dev/gitrepo/repo.git/
Then I tried:
<my username> /d/<worktree>
$ git status
fatal: Not a git repository: /c/dev/gitrepo/repo.git
So I tried something slightly different, thanks to the way Windows paths get stored...
<my username> /d/<worktree>
$ git --git-dir=/c/dev/gitrepo/repo.git --work-tree=/d/<worktree> init && echo "gitdir: /c/dev/gitrepo/repo.git" > .git
Initialized empty Git repository in c:/dev/gitrepo/repo.git/
This is copy-paste verbatim except I changed my username and a single directory name to <worktree> for SO.
Then I tried the following:
<my username> /d/<worktree>
$ git status
fatal: Not a git repository: /c/dev/gitrepo/repo.git
Then I looked inside /c/dev/gitrepo/repo.git/config and I saw this:
worktree = d:/<worktree>
Maybe this won't work with the windows path notation. So I changed it:
worktree = /d/<worktree>
Still no luck. Is what I am trying to do possible under git for Windows?
The only progress I have made on this question is the discovery that on the workstation I was using, the SUBST command was used to create drive D. That is to say, C and D are really the same physical drive.
SUBST does not seem to completely break Git though. I have the project on drive D and the git repo on a completely different directory on drive D and everything works.
username#host /d/ProjectName (branch)
$ cat .git
gitdir: d:\gitrepo\ProjectName.git
So the best answer I have is a workaround. Under Windows there may be an issue:
1. if the repo and working dir are on different drives
2. if (1) is true and you use SUBST
Whichever of those is the case, the workaround is one or both of the following:
1. Put everything on the same drive letter. It works even if that drive is a SUBST drive.
2. Use the Windows "d:\" notation in the .git file.
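For reference, modern Git can set up this layout for you and write the gitdir: pointer file itself, which sidesteps hand-editing .git entirely. A sketch under the same-drive constraint; the paths are illustrative, not from the question:
# Create a work tree on D: whose repository lives elsewhere on the same drive,
# letting Git write the .git pointer file in the notation it expects.
$ git init --separate-git-dir /d/gitrepo/ProjectName.git /d/ProjectName
$ cat /d/ProjectName/.git
gitdir: d:/gitrepo/ProjectName.git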

Subversion update all working copies under common parent directory

I am working on a number of projects simultaneously. Each project has a Subversion repository. The repositories are not all hosted on the same server. When I start my day, I find myself having to do an svn update for each of the individual projects.
My local working copies are all stored under one parent directory Projects.
My question: Is there a command that can be issued from the Projects directory that will search for working copies among the descendants in the file system and issue an svn update command for each of them?
I'm on Ubuntu with Subversion version 1.7.5.
cd to Projects and then:
svn up `ls -d ./*`
(note those are backticks, not single quotes.)
svn will happily skip non-svn dirs.
You could add an alias in your .bashrc
alias up-svn='svn up `ls -d ./*`'
You could just write
svn update *
That's it... Subversion will automatically recognize the working copies and do the update.
One more suggestion, similar to thekbb's answer:
svn up `find ~/svn -maxdepth 3 -type d`
Explanation:
'~/svn' is the directory all my checked-out repositories are in
'-maxdepth 3' some repositories are nested (3 levels deep), e.g. companyname/projectname/branch
'-type d' only directories
No, but you can easily write a script/batch file that calls "svn update" on each subdirectory, as sketched below.
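A minimal sketch of such a script for a Unix shell, assuming Subversion 1.7+ (as in the question), where each working copy has a single top-level .svn directory:
# Run `svn update` in every working copy found directly under the current directory.
find . -maxdepth 2 -type d -name .svn | while IFS= read -r wc; do
    ( cd "$(dirname "$wc")" && svn update )
done
The subshell parentheses keep each cd local, so the loop always continues from the starting directory.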

clear phpThumb cache regularly with cron job

I am using phpThumb on a client website, and as it is a very image-heavy application the cache gets huge quickly. Today the thumbs stopped working and I had to rename the cache folder, as the folder was too big to delete via FTP. I renamed it cache_old and am trying to delete it now via SSH. I recreated the cache folder and everything worked fine again.
Since it seems to stop working when the cache folder is too full, plus just to keep the server tidy, I would like to set up a daily cron job to clear files from the cache folder. I have no idea how to do this, though, and haven't been able to find an answer yet.
The cache folder has a file in it called index.php which I assume needs to stay, plus a subfolder called source, which again has a file called index.php, which I assume also needs to be there. So I need a command that will delete everything BUT those files.
Any guidance on how to set this up would be appreciated!
Thanks,
Christine
P.S. The site is hosted on DreamHost, and I have set other jobs up via their cron job panel, and I do have SSH access if setting it up that way is easier. Cheers!!
It's possible to do this in one command, but two separate commands are more obvious. Using find's -delete action avoids the trouble that rm with backticks has with filenames containing spaces and with an empty result:
find /path_to_cache_folder/ -type f ! -name index.php -delete
find /path_to_cache_folder/source -type f ! -name index.php -delete
or in one cron job:
find /path_to_cache_folder/ -type f ! -name index.php -delete && find /path_to_cache_folder/source -type f ! -name index.php -delete
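As a crontab entry (DreamHost's panel asks for the same schedule fields), a daily run might look like this; /home/user/site/cache is a placeholder for your actual cache path:
# m h dom mon dow  command  --  clear the phpThumb cache at 04:00 every day
0 4 * * * find /home/user/site/cache -type f ! -name index.php -delete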
