Mercurial reported error number 255: abort: Resource busy - macos

Using MacHG I get this message:
"Mercurial reported error number 255:abort: Resource busy"
I'm trying to push changes across a local network from my Mac to an SMB-mounted shared directory. It was working earlier today for 2 pushes and a clone.
I have read all the forum posts about lock files and symlinks, and that SMB supports symlinks so the file locking should work.
Also, there are no .hg/store/lock or .hg/wlock files for me to delete to clear a stale lock.
EDIT: After trying CIFS as the protocol for mounting the share, it would appear CIFS now reports the same issue/error message...

After repeating tests of:
Switching from SMB to CIFS
Performing a verify on each repository
Closing MacHG on all computers involved
Closing Xcode on all computers involved
Restarting all computers involved
it would seem the only consistent solution is to NOT map to a networked share folder...
http://hginit.com/02.html
The link above is a really good guide to getting a simple intranet setup working.
You'll need to edit the .hg/hgrc file so that it includes the following lines:
[web]
push_ssl=False
allow_push=*
Then, in our situation, we created a startup script (a Windows batch file in our case) that runs when the server boots and performs the following:
taskkill /f /im hg.exe /t
cd pathtorepository\MyProject
hg serve -d -p <portnumber1>
cd pathtosecondproject\MySecondProject
hg serve -d -p <portnumber2>
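With both repositories being served, the other machines point at HTTP URLs instead of the mounted share. A rough sketch of the client side (servername and the port are whatever you picked above):
hg clone http://servername:<portnumber1>/ MyProject
cd MyProject
hg push
The clone records the HTTP URL as the repository's default path, so plain hg push and hg pull work afterwards without spelling out the URL.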
Visit the Mercurial wiki or search SO for more details on setting up hg serve if you require secure connections and authentication:
https://www.mercurial-scm.org/wiki/hgserve

Related

Elixir Phoenix and Symlinks on Windows SMB Drive

So I have an interesting issue that I just can't figure out: why I'm getting this error and what to do about it.
So basically I store all my development projects on my Synology NAS for local access between my various devices. There has never been a problem with this until I started playing around with Elixir and, more importantly, Phoenix. The issue appears when running mix phx.server. I get the following:
[warn] Phoenix is unable to create symlinks. Phoenix' code reloader will run considerably faster if symlinks are allowed. On Windows, the lack of symlinks may even cause empty assets to be served. Luckily, you can address this issue by starting your Windows terminal at least once with "Run as Administrator" and then running your Phoenix application.
[info] Running DiscussWeb.Endpoint with cowboy 2.7.0 at 0.0.0.0:4000 (http)
[error] Could not start node watcher because script "z:/elHP/assets/node_modules/webpack/bin/webpack.js" does not exist. Your Phoenix application is still running, however assets won't be compiled. You may fix this by running "cd assets && npm install".
[info] Access DiscussWeb.Endpoint at http://localhost:4000
So I tried as it stated and ran it in CMD as admin, but to no avail. After some further inspection I tried to create the symlinks manually, but every time I tried I would get an "Access is denied." error (yes, this is an elevated CMD).
c:\> mklink "z:\elHP\deps\phoenix" "z:\elHP\assets\node_modules\phoenix"
Access is denied.
So I believe it is something to do with the fact that the symlinks are being created on the NAS, because if I move the project and host it locally it works. Now I know what you're thinking: yes, I could just store the projects locally on my PC, but I like to have them available between PCs without having to transfer files or rely on Git etc. (i.e. offline access), not to mention that the NAS has a full backup routine.
What I have tried:
Setting guest read write access on the SMB share
Adding to /etc/samba/smb.conf on my Synology NAS:
[global]
unix extensions = no
[share]
follow symlinks = yes
wide links = yes
Extra logging on SMB to see what is happening when I try it (nothing extra logged)
Creating a symbolic link from my Mac (this works)
Setting all of fsutil behavior query SymlinkEvaluation to enabled
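For reference, these are the commands I mean (run from an elevated prompt); enabling every combination, including remote-to-remote, is just a sketch of the knob, not a confirmed fix for the SMB case:
fsutil behavior query SymlinkEvaluation
fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2L:1 R2R:1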
At the moment I am stuck and unsure of what to try next, or even whether it is possible. I'm considering just using NFS instead, but will I face the same issues as with SMB?
P.S. I faced a similar issue with Python venvs a while ago, just a straight-up "Access is denied." error, and I gave up and moved just the venv locally while keeping the bulk of the code on the NAS. (This actually ended up being the best solution for that, because the environments of each device on my network clashed, etc.)
Any ideas are greatly appreciated.

Need help setting up a Git server on Windows

I'm trying to set up a Git server on Windows, but I'm having some issues getting it all to work.
I have locally created a normal repository, and remotely I created a bare repository. In the local repository I added a single text file and committed it, but when I try to push it to the remote repository I always get the following message:
fatal: protocol error: bad line length character: fata
I searched SO and other sources, and most of them suggest it's an issue regarding command echoes. I'm using freeSSHd as an SSH solution (the remote repository is hosted on a Windows server), and I tried to use both Git Bash and the Windows CMD as the command shell.
I start CMD with /Q to disable echoing and /K to change directory to a directory where repositories are located, so I don't think that would be a problem.
Using Remote Desktop, I can clone the repository to a folder next to it, and using Git Bash locally I can access the SSH shell and also clone the repository the same way. But using git clone ssh://<address>:/myRepo.git I always get the above message (the SSH working folder is the same one where the repository is located). Does anyone have any idea what's going on? How can I see what command is triggering the error, and how can I see the full error message?
I also met the same error using freeSSHd as an SSH solution for a Git server on Windows. I couldn't find a solution for a whole day and gave up. :(
Later I found that another powerful SSH server from Bitvise, called WinSSHD, worked well. It has a free version for personal use. I suggest you switch to it. Though I'd also like to know if there's a fix for the error we both met.
Setting up an SSH server with WinSSHD is quite simple, and you can add virtual accounts with private/public key access.
The key part is setting up SSH access for the Git server. Please follow the steps in the blog post here.
It should work well for a Windows Git client. For Mac, you may meet an error like the following.
grp.sh: No such file or directory
fatal: Could not read from remote repository.
To fix it, you need to create the two files gup.sh and grp.sh in the Git bin directory (GIT_PATH/bin or GIT_PATH/libexec/git-core, configured in the system PATH environment variable) on your Git server.
The content of gup.sh:
git-upload-pack.exe $*
The content of grp.sh:
git-receive-pack.exe $*
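Written out as complete shell scripts (the shebang line and the quoted "$@" are optional touches I'm assuming here, not something the original setup requires), the two wrappers would look like this:
gup.sh:
#!/bin/sh
# hand every argument through to the real upload-pack binary
git-upload-pack.exe "$@"
grp.sh:
#!/bin/sh
# hand every argument through to the real receive-pack binary
git-receive-pack.exe "$@"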

Tortoise Bazaar commit password

I've recently set up a Bazaar repo on my FTP server (it's the only access I have to the back end; please don't go into reasons why I shouldn't use FTP).
I have managed to get everything working with shortcuts, such that I can include the user and pass in the FTP URL:
ftp://"user":pass@host/path
Though I am trying to set up a script so that I can commit from a local directory with a batch file. The issue is that I still have to put in the password every time.
bzr commit "E:\Ryan - Backup\Other\test" -m batched
I had a crack at using the authentication.conf file, but either it didn't work for me in this situation or I was doing it wrong. I placed the file in the .bzr folder, so it was located at:
E:\Ryan - Backup\Other\Test\.bzr\authentication.conf
With the contents being:
# Identity on foo.net
[site]
scheme=ftp
host=site
user=username
password=pass
Am I doing something wrong, or would I have to create a plugin to do what I am after?
P.S. The end result was to run a batch file at startup and shutdown so I could sync file updates between my computers.
UPDATE: I also tried the guide that describes the location:
C:\Users\rfleming\AppData\Roaming\bazaar\2.0
for the authentication.conf file; this didn't work either.
UPDATE 2: Placing authentication.conf into:
C:\Users\rfleming\AppData\Roaming\bazaar\2.0
worked fine, and I just ended up using checkout and push for syncing; no manual password typing was required!
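For anyone wanting the same batch-file sync, a minimal sketch along those lines (the local path and FTP URL are placeholders; adjust for your own layout):
rem startup: pull the latest changes into the local checkout
cd /d "E:\Ryan - Backup\Other\Test"
bzr update
rem shutdown: record and publish local changes
bzr commit -m batched
bzr push ftp://user@host/path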

Does anyone know how to download a project from nitrous.io?

I made a Ruby web application on nitrous.io. The tool is very nice and it helped a lot, but now I want to download the project to my computer and I didn't find any option to do that...
You can download and upload projects using any of the following options:
Utilize Nitrous Desktop to sync your files locally.
Upload your project to GitHub, and pull the project from there. Here is a guide on adding an SSH key to GitHub if needed.
Upload the content via SCP. To do this, you will need to add an SSH key to your account.
Next, run this command on your local machine, replacing {PORT} with the port number assigned to your Nitrous.IO box, and changing usw1 to the proper region found in the SSH URI on your box's page.
To Upload:
scp -P{PORT} -r path/to/yourFolder action@usw1-2.nitrousbox.com:~/workspace
To Download:
scp -P{PORT} -r action@usw1-2.nitrousbox.com:~/workspace path/to/yourLocalFolder
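For example, assuming your box was assigned port 12345 and sits in the usw1 region, the download command would look something like this (those values are just illustrative):
scp -P12345 -r action@usw1-2.nitrousbox.com:~/workspace path/to/yourLocalFolder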
I do not know the service, but apparently they offer SSH access. Then you can use scp to copy the files to your machine. Anyway, you should probably ask their support...
...post a summary of their answer here and close the question :)
The easiest way is to store your project in a Git repository and then push this repository to an external host. You will then be able to clone your project from the external repository to any machine you want.
Personally, I use Bitbucket, as it is free and very easy to set up. Have a look at the tutorials there.
OK, replying really late, but I hope this will help anyone still looking for this. Here is how I download stuff from Nitrous: no desktop utility download needed, and no SSH/SCP or adding keys.
What you do is simply make an archive of the folder you want to download:
tar -zcvf myarchive.tar.gz mydir/
Now you have a *.gz file, right? Go to whichever folder your .gz file is in and type:
python3.3 -m http.server 8080
You've just started a cute little HTTP server ready to serve you your download. Now, from the Preview menu, click "Port 8080"; this opens a new browser tab showing your .gz file in the file listing (sample URL: http://yourboxes.apse1.nitrousbox.com:8080/). Now you can click your .gz file and it will start downloading. Once done with the download, press Ctrl+C in the terminal to terminate the HTTP server.
This is not limited to Nitrous; you can make this work on many online VMs, like Cloud9, etc.

Move files to remote file share after build

I want to create a post build script that moves files from the build directory to a remote (UNC) file share.
This line:
xcopy "C:\TeamCityBuild\project\WebSite\*" "\\192.168.1.1\WebSite\" /C /R /Y /E
This works fine when it is run in a DOS window, but when TeamCity's sln2008 build runner tries to run it, it fails with the message "Invalid drive specification".
I have shared the folder with full rights for 'Everyone' on the remote server.
Any ideas?
Just a guess, and I'm not quite sure if it solves your problem, but we had a similar problem using CruiseControl and deploying our application to a remote JBoss server.
We've added
net use \\192.168.1.1\Website ...
before each copy, so that it 'mounts' the remote share before trying to access it. Note: you probably need to specify the username and password for the command (consult the command-line help for details).
The 'net use' seems to be needed even if you run the automated job as the same user you log on with manually. These two kinds of sessions don't seem to share information about remote shares.
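A minimal sketch of what that looks like with explicit credentials (the account name and password are placeholders, not values from the original setup):
rem map the share with explicit credentials before copying
net use \\192.168.1.1\WebSite secretPassword /user:builduser
xcopy "C:\TeamCityBuild\project\WebSite\*" "\\192.168.1.1\WebSite\" /C /R /Y /E
rem optionally drop the connection again when the copy is done
net use \\192.168.1.1\WebSite /delete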
I've never used TeamCity Buildrunner sln2008, but if it runs as a service, then it is probably running under the "Local System" account, which doesn't have network access. Change the service properties (under the "Log On" tab) so that the service logs on as a user with permissions to that network share.
I don't believe it works because the agent is running as a system service, so it has limited network access (I believe).
Instead of trying to use a post-build step to copy the output, I think you should look into using TeamCity's Build Artifacts. That's what we use at my work, although we are new to TeamCity as well. What I don't know is whether the Build Artifact system will do exactly what you want.
You could try NAnt:
http://nant.sourceforge.net/release/latest/help/tasks/copy.html
