How do I properly set up hooks on a remote that is specified via the file:// protocol? - windows

Say I've got an upstream repo (origin) that was added with
git remote add origin file:////upstream.host/repo.git
The repo.git is actually a Windows shared folder to which my dev colleagues and I have read/write access.
Now, I want to set up a post-receive hook on upstream.host that notifies Trac about freshly pushed revisions for automatic ticket updating. Basically, this is done by calling an executable on upstream.host that does some work in the database there.
However, I noticed that the hook doesn't work for some reason.
So I set up the hook to log everything it does to D:/temp/post-receive.log and issued a git push to trigger it.
When I looked into D:/temp on upstream.host, there was no logfile created.
Then another question of mine came to mind: https://superuser.com/questions/974337/when-i-run-a-git-hook-in-a-repo-on-a-network-share-which-binaries-are-used.
If the binaries of my machine are used to execute the hook, then maybe the paths of my machine are used as well. I looked into D:/temp on my own machine and voilà, there was the post-receive.log.
I traced the hook's working directory via the logfile: it is not D:/repos/repo.git (as I expected) but //upstream.host/repo.git. Obviously the whole hook is executed in the context of the pusher's machine, not in the context of the repo machine (upstream.host).
This is no problem for me, since I have admin access to the remote machine and could use administrative shares to get my hook going (i.e. \\upstream.host\D$\repos\repo.git etc.). But it is an issue for my colleagues, since they are plain users, not administrators.
How do I set up my post-receive hook properly so that it works as expected?
How do I force my hook to be entirely run on the remote machine without using anything from my machine?
Do I really have to implement a real server hosting my repo? Or are there other ways that don't need a server?

A post-receive hook is run, after data has been received, on the machine that is hosting the repository.
Now, the machine that is "hosting the repository" is not the file server where the actual packed-refs and other Git database files are stored. (That file server could be anything from a redundant cloud-based storage appliance to any old NAS-enabled "network disk".)
Instead, it is the machine that runs the "git frontend", that is, the git commands that actually interact with the database.
You, however, are using a network share to host your (remote) Git repository. To your computer (the client), the share is just another disk device (like a floppy), and the git on your client will happily store database files there and run any hooks. But all of that happens on your computer, because your computer is being told to run the "remote" locally: the file:// protocol simply means "local".
By the way, the fact that your remote is named upstream.host is meaningless: that name only exists so you can keep track of multiple remotes; it could just as well be called thursday.next.
So there is no way to make the file server, which merely happens to store some files named packed-refs and the like, run any script.
If you want to have a git server to run hooks for you, you must have a git server first. Even worse: if you want a git server on machineX to run scripts on machineX, you must install a git server on machineX first.
The good news: there is no need to "implement a real server". Just install a pre-existing one. You will find documentation in the Git Book, but for starters it is basically enough to have git (for interacting with the database) and sshd (for secure communication over the network, and for invoking git when appropriate) installed on the host.
Finally: I'm actually quite glad that you need server software running on the remote end in order to execute code there. Just imagine what it would mean if copying some HTML files to your USB disk could suddenly spawn a web server out of thin air. Not to mention Win32 viruses breeding happily on my Linux NAS...
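To make that concrete, here is a minimal sketch of the server-side setup, run on upstream.host itself once sshd and git are installed there. All paths ($HOME/srv/git, the log location) are invented examples, not taken from the question:

```shell
# Run on upstream.host (all paths here are made-up examples).
GIT_ROOT="$HOME/srv/git"        # hypothetical location for hosted repos
mkdir -p "$GIT_ROOT"
git init --bare "$GIT_ROOT/repo.git"

# The hook now lives on the server, so it also *runs* on the server:
cat > "$GIT_ROOT/repo.git/hooks/post-receive" <<'EOF'
#!/bin/sh
# git feeds one "<old-sha> <new-sha> <refname>" line per updated ref on stdin.
while read oldrev newrev refname; do
    printf '%s %s %s\n' "$refname" "$oldrev" "$newrev" >> "$HOME/post-receive.log"
done
EOF
chmod +x "$GIT_ROOT/repo.git/hooks/post-receive"
```

Clients would then switch their remote to the SSH transport, e.g. git remote set-url origin ssh://user@upstream.host/srv/git/repo.git (path depending on where the repo actually lives), at which point the hook, and any Trac notification placed in it, executes on upstream.host instead of on the pusher's machine.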

Related

Git error: could not commit config file

I'm trying to add a new remote repository (GitHub) to an existing project, and I'm getting an error that I've never seen before, and don't understand:
$ git remote add github git@github.com:me/myrepo.git
error: could not commit config file .git/config
What? Why would I commit the git config file? And how do I make this stop happening?
I'm on a Mac, with a relatively fresh install of most of my tools. I think this is the first time I've tried to add a remote to a repo on this machine.
Some git commands modify the git config file. One of them is git remote add, because the remote is stored in the config file.
To avoid problems with several git processes modifying the config file simultaneously, git will lock the config file before changing it (by writing a lock file), and release the lock afterwards (by renaming the lock file to the config file).
The error message
error: could not commit config file .git/config
means that git could not properly release this lock. This probably means that either another process was working on the same file, or there was some kind of filesystem error (or there's a bug in git or your OS/libraries).
Unfortunately, git does not tell you exactly what the problem was, so you'll have to debug this manually. You could try running git under dtruss to see what exactly is going wrong.
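The lock-file mechanism itself is easy to observe in a throwaway repository. Note this sketch reproduces the sibling failure, "could not lock config file" (the lock-creation step), rather than the rename-time "could not commit" error from the question; the remote URL is just an example:

```shell
# A scratch repo is enough to see the locking in action:
cd "$(mktemp -d)"
git init -q demo && cd demo

# Simulate a crashed or concurrent process that left the lock behind:
touch .git/config.lock

# Any command that rewrites the config now fails instead of corrupting it:
git remote add github https://github.com/me/myrepo.git \
    || echo "config is locked"

# Removing the stale lock lets the same command succeed:
rm .git/config.lock
git remote add github https://github.com/me/myrepo.git
```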
This could be a permissions issue, especially for automated jobs running on Windows that can have fewer permissions than an interactive login of the same user. From this answer on ServerFault:
Each logon session to a Windows (NT-based versions, that is) machine has a "security token" -- a data structure that describes, amongst other things, the groups that the user represented by the token is a member of.

The "Interactive" identity isn't a group that you can manually place members into, but rather is added by the operating system, automatically, when a security token is constructed for a user who has logged on via the Windows graphical user interface. This is similar to the "Network" identity, which is added automatically to tokens created for users who are accessing the machine via the network.

These automatically-generated group memberships allow you to construct permissions that might allow or deny access to resources based on how the user is accessing the machine. This supplements the permission system's default behavior of arbitrating access based on who is accessing the resource.
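If you want to check which of these automatic identities a given session actually carries, Windows can print the token's group list. This is a diagnostic fragment for a Windows console; compare its output in an interactive session with the output from the scheduled task or service that fails:

```bat
:: Lists the groups in the current security token; in a console session you
:: should see NT AUTHORITY\INTERACTIVE, for access over a share NT AUTHORITY\NETWORK.
whoami /groups | findstr /i "INTERACTIVE NETWORK"
```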

How to pull from a fellow developers repository using Mercurial

I'm trying to setup Mercurial on developer workstations so that they can pull from each other.
I don't want to push.
I know each workstation needs to run
hg serve
The format of the pull command is
hg pull ssh:[SOURCE]
What I'm having problem with is defining SOURCE, and any other permission issues.
I would believe that SOURCE ends with the name of the repository being pulled from. What I don't know is how to form the host name. Can I use IPs instead?
What permission issues do I need to look out for?
SOURCE == //<hostname>/<repository>
All developers or test stations are running Windows 7 or Windows XP.
I have searched for this answer and have come up empty. I did look at all the questions suggested by SO as I typed this question.
This is probably a simple Windows concept, but I'm not an expert in simple Windows concepts. :)
The hg help urls output has these examples:
Valid URLs are of the form:
local/filesystem/path[#revision]
file://local/filesystem/path[#revision]
http://[user[:pass]@]host[:port]/[path][#revision]
https://[user[:pass]@]host[:port]/[path][#revision]
ssh://[user@]host[:port]/[path][#revision]
and a lot of info about what can be used for each component (host can be anything that your DNS resolver resolves, or an IPv4 or IPv6 address). I believe that on Windows systems UNC paths count as well.
Also, you appear to be somewhat confused about when you can use ssh. You can use ssh:// URLs to access repositories on the file systems of machines that are running SSH servers. If they're running hg serve, you can access them using the http:// URL that hg serve prints when you start it. hg serve is usually used for quick "here, grab this from me and see if you can tell me what I'm doing wrong" situations rather than for all-the-time sharing.

Is it safe to use git with multiple users when the central repository is on a windows file share?

We are a team of less than ten persons that need to quickly set up a git server that supports active directory based authentication.
The simplest solution seems to be to use a file share with a bare git repository and reaching it using a unc path, e.g.
git clone //server/share/repo.git
However, we are a bit worried about robustness. Are there no issues with concurrency when several people use the same git repository and there is no actual server component running?
Clients are running Windows 7; the server is Windows Server 2008 R2. We're using msysgit 1.8.1.2.
(I am well aware that there are many other git server solutions, but, especially given the requirement of AD authentication, they are not as simple to set up)
It seems the only time the AD auth will come into play is when pushing/pulling.
When you clone the git repo, the entire history will be cloned as well, so every user will have a complete repo.
If the file share fails, any user can replace the code on a new share by pushing their code up.
Concurrency is not an issue - since git is distributed it handles concurrency differently from other VCSs: no file locks, etc.
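For illustration, here is the shape of that setup, with a local temp directory standing in for the //server/share UNC path (all names are invented). Worth adding: git does take short-lived internal locks inside the bare repository during a push, which is what keeps simultaneous pushes from corrupting it:

```shell
# A temp directory stands in for the Windows share //server/share:
SHARE="$(mktemp -d)"
git init -q --bare "$SHARE/repo.git"

# Each developer just clones the path directly; no server process involved:
WORK="$(mktemp -d)"
git clone -q "$SHARE/repo.git" "$WORK/copy" 2>/dev/null
cd "$WORK/copy"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first"
git push -q origin HEAD 2>/dev/null
```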

best practices for uploading many files to live server while updating database

I have roughly 200 files that I need to push to our live server after business hours. In addition to this push I have a few database updates that I need to run in conjunction with this roll out.
What has been done in the past on this system is to create a directory on the server of the updated files and create a cron script to copy those files to overwrite their previous versions on the server. And then executing the calls to the database.
Here are the problems I am trying to work around:
1) There is no staging server.
2) There is no easy way to push from our version control (svn) to our live server
3) There are a lot of files and the directory structure is deep so setting up a copy of the directories to be copied over on the server seems precarious and time consuming.
What's the best way to do this?
The way I've done similar things in the past is to have a cron job run a script on an administrative machine that:
1) checks out the files I need on my production server on some sort of staging machine
2) rsync's the files onto the server
3) runs a post-rsync script on the server (say via ssh'ing to the server)
However, you specify that you have no ability to use a staging machine, by which I assume you mean that you have no administrative machine at all, and that you cannot check out your repository on the server either. That makes doing this cleanly far harder. Are you sure you can't at least use your workstation or some similar box as an administrative or staging machine here?
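Sketched as a script, those three steps might look like this. Every host name, repository URL, and path is a hypothetical placeholder, and the script only defines a deploy function; a cron entry on the admin machine would source it and call deploy:

```shell
#!/bin/sh
# Hypothetical deploy helper for the steps above; svn.example.com,
# live.example.com and all paths are invented placeholders.
set -e

STAGE="${STAGE:-/tmp/deploy-stage}"       # checkout area on the admin machine
SERVER="${SERVER:-live.example.com}"      # production host
DOCROOT="${DOCROOT:-/var/www/app}"        # where the live files sit

deploy() {
    # 1) export a clean copy of the files (svn export leaves no .svn dirs):
    rm -rf "$STAGE"
    svn export "https://svn.example.com/project/trunk" "$STAGE"

    # 2) mirror onto the server; --delete removes files deleted from svn:
    rsync -az --delete "$STAGE"/ "deploy@$SERVER:$DOCROOT/"

    # 3) run the post-rsync step (e.g. the database updates) on the server:
    ssh "deploy@$SERVER" "/usr/local/bin/run-db-updates.sh"
}
```

A crontab line would then be something like `30 22 * * * . /path/to/deploy.sh && deploy` to run it after business hours.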

Mercurial for small scale research team

Our team consists of 3 people and we want to use Mercurial for version control of our code.
The problem is that we don't have a common server that we all have access to.
Is there a way to make one of the users the host of the repository, so that the others can connect to it to work on the code?
We're all using Windows 7, if it matters.
Because mercurial is a distributed version control system, you don't have to have a central server, as you can clone, push and pull between one another.
But, you could look at creating a central repository on bitbucket at no cost for up to 5 users.
Yes, just run hg serve on that host, in that directory. If you have IP access, you'll be able to work with it. You'll need to set the web.allow_push option to * to enable remote pushes.
Another option is to run hg serve on all the workstations and only pull from one another, never push.
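Concretely, the serving workstation's .hg/hgrc might contain something like this for the push-enabled variant (the [web] section and both option names are real Mercurial settings; the values here are the sketch):

```ini
[web]
# hg serve refuses pushes by default; allow everyone:
allow_push = *
# hg serve speaks plain HTTP, so SSL enforcement has to be turned off:
push_ssl = false
```

For the pull-only setup described in the last paragraph, no configuration is needed at all: hg serve is read-only out of the box.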
