I have an interesting issue that I just can't figure out: why I'm getting this error and what to do about it.
Basically, I store all my development projects on my Synology NAS for local access from my various devices. There has never been a problem with this until I started playing around with Elixir and, more importantly, Phoenix. The issue appears when running mix phx.server; I get the following:
[warn] Phoenix is unable to create symlinks. Phoenix' code reloader will run considerably faster if symlinks are allowed. On Windows, the lack of symlinks may even cause empty assets to be served. Luckily, you can address this issue by starting your Windows terminal at least once with "Run as Administrator" and then running your Phoenix application.
[info] Running DiscussWeb.Endpoint with cowboy 2.7.0 at 0.0.0.0:4000 (http)
[error] Could not start node watcher because script "z:/elHP/assets/node_modules/webpack/bin/webpack.js" does not exist. Your Phoenix application is still running, however assets won't be compiled. You may fix this by running "cd assets && npm install".
[info] Access DiscussWeb.Endpoint at http://localhost:4000
So I tried what it suggested and ran it in CMD as admin, but to no avail. After some further inspection I tried to create the symlinks manually, but every time I tried I would get an Access is denied. error (yes, this is an elevated CMD).
c:\> mklink "z:\elHP\deps\phoenix" "z:\elHP\assets\node_modules\phoenix"
Access is denied.
So I believe it has something to do with the fact that the symlinks are being created on the NAS, because if I move the project and host it locally it works. Now I know what you're thinking: yes, I could just store the projects locally on my PC, but I like to have them available between PCs without having to transfer files or rely on git etc. (i.e. offline access), not to mention that the NAS has a full backup routine.
What I have tried:
Setting guest read write access on the SMB share
Adding to /etc/samba/smb.conf on my Synology NAS:
[global]
unix extensions = no
[share]
follow symlinks = yes
wide links = yes
Extra logging on SMB to see what is happening when I try it (nothing extra logged)
Creating a symbolic link from my MAC (works)
Setting all of the fsutil behavior query SymlinkEvaluation options to enabled (commands shown below)
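For reference, this is roughly what checking and enabling the symlink evaluation settings looks like from an elevated CMD (the set values are my assumption of what "all enabled" means here):
fsutil behavior query SymlinkEvaluation
fsutil behavior set SymlinkEvaluation L2L:1 L2R:1 R2L:1 R2R:1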
At the moment I am stuck and unsure of what to try next, or even whether it is possible. I'm considering just using NFS instead, but will I face the same issues as I do with SMB?
P.S. I faced a similar issue with Python venvs a while ago, just a straight-up Access is denied. error, and I gave up and moved just the venv locally while keeping the bulk of the code on the NAS. (This actually ended up being the best solution for that, because the environments of each device on my network clashed etc.)
Any ideas are greatly appreciated.
I am running R scripts on a self-hosted DevOps agent. My Windows agent is able to access the system directory where it is hosted. Below is the directory structure for my code:
Agent loc. : F:/agent
Source Code : F:/agent/deployment/projects/project1/sourcecode
DWH _dump : F:/agent/deployment/DWH_dump/2021/
Output loca. : F:/agent/deployment/projects/project1/output_data/2021
The agent uses CMD in the DevOps pipeline to trigger R from the system and use the libraries from the system directory.
Problem statement: I am unable to save the output from my R script into the Output loca. directory. It gives an error, Probable reason: permission denied, pointing to that directory.
Output File Format: file_name.rds, but the same issue happens even for a CSV file.
Command leading to failure: saveRDS(result, file = paste0(output_loc, "/", "file_name.rds")), where result is the object being saved and output_loc holds the Output loca. path.
Workaround: I found a workaround: save the files to the Source Code directory first and then save the same files to the Output loca. directory. This works perfectly fine but costs me 2 extra hours of run time, because I have to save all the intermediate files and delete them at the end. Keeping the intermediate files in memory eats up my RAM.
I have not opened that directory anywhere on the machine; the only open application is my browser, where the pipeline is running. I spent hours trying to figure out the reason with no success. I even checked the system PATH to see whether I had mentioned that directory there, and it is not present.
When I run the same script directly on the machine using RStudio, I have no issues saving the file to any directory.
I have spent 2 full days on this already. Any pointers to figure out the root cause could save me a few hours of runtime.
The solution was to set the Azure Pipelines agent service in Windows to run with admin credentials. The agent was not configured as an admin during creation, so after setting it to run under my user ID, which has admin access on the VM, the pipelines were able to save files without any trouble.
Feels great, saved a few hours of run time!
I was able to achieve this by following this post.
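For anyone looking for the concrete step, one way to do this is to change the logon account of the agent's Windows service; the service name and account below are placeholders, not the values from my setup:
sc config "vstsagent.MyOrg.Default.MyAgent" obj= "MYDOMAIN\myadminuser" password= "********"
net stop "vstsagent.MyOrg.Default.MyAgent"
net start "vstsagent.MyOrg.Default.MyAgent"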
I am attempting to set up a dev environment using VirtualBox, with an OS X host running Ubuntu Server 16.10 as the guest.
I am stuck on getting Samba to share the dev directory on the guest so that, ultimately, NetBeans can be used to edit the server files from OS X via the shared directory.
This works fine from OS X to a separate physical Ubuntu machine.
Starting from the standard Samba config, at the end I have:
[testsharename]
path = /home/myusername/shared    # note: trailing slash required
# hosts deny = *
# hosts allow = 192.168.0.210     # IP of an allowed LAN address
guest ok = yes
writeable = yes
The share itself is visible in Finder on OS X, however clicking on it gives an error that it cannot be found. Changing the share name is reflected in Finder. The commented-out lines are there because I only really want a single LAN IP to have access.
The Finder error is that the operation can't be completed because the original item for "testshare" can't be found.
The logs showed Can't mount Samba share (canonicalize_connect_path failed), and some research narrowed this down to a permissions issue, as hinted at by https://ubuntuforums.org/showthread.php?t=1439582
Moving the share out of the home directory into /var/www/, as I originally required (the home directory part was simply for testing), with 777 permissions on the share directory only, made it work perfectly.
However, I certainly don't agree with the forum post's suggestion that all path nodes leading to the share require permission changes.
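For completeness, the fix boiled down to something like this (the share path is an example; adjust the path line in smb.conf to match):
sudo mkdir -p /var/www/testsharename
sudo chmod 777 /var/www/testsharename
sudo systemctl restart smbd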
On domain-connected Windows 10 and Windows 8.1 machines (the issue may not be version-specific), running mklink symlinkToCreate.txt originalFile.txt produces the error The system cannot find the file specified. The symlink is still created correctly.
I have made sure that originalFile.txt does exist and that symlinkToCreate.txt does not already exist. I have also tried using absolute paths for both parts instead of relative paths. I am using an elevated command prompt as I know that only elevated Administrators can create symlinks by default. I have also checked the Create symbolic links local policy and confirmed that this is just set to Administrators.
Directory link creation also produces the error (mklink /D). Hard link creation, however, works fine (mklink /H).
Weirdly, I get the same behaviour even when logged in using the local Administrator account. I also get the same behaviour on a different machine in the same domain. The exact same commands work perfectly on a non domain-connected machine.
Given that mklink is built into cmd and that the file I'm linking definitely exists, I'm stumped as to what file the system cannot find, though I strongly suspect that the actual content of the error is a red herring. Shame there doesn't seem to be a mklink debug mode!
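For what it's worth, the created link can be inspected to confirm it is valid despite the error, e.g.:
dir /AL
fsutil reparsepoint query symlinkToCreate.txt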
Any pointers greatly appreciated as I'm banging my head against a wall with this one.
Using MacHG I get this message:
"Mercurial reported error number 255:abort: Resource busy"
I'm trying to push changes across a local network from my Mac to an SMB-mounted shared directory. It was working earlier today for 2 pushes and a clone.
I have read all the forum posts about lock files and symlinks, and about SMB needing to support symlinks for the file locking to work.
Also there are no .hg/store/lock or .hg/wlock files for me to delete to resolve the locking scenario.
EDIT: After trying CIFS as the protocol for mounting the share, it would appear CIFS is now reporting the same issue/error message...
After repeated tests of:
Switching from SMB to CIFS
Performing a verify on each repository
Closing MacHG on all computers involved
Closing Xcode on all computers involved
Restarting all computers involved
it would seem the only consistent solution is to NOT map to a networked share folder...
http://hginit.com/02.html
The above link is a really great guide on getting a simple intranet share happening.
You'll need to edit the .hg/hgrc file so that it includes the following lines:
[web]
push_ssl=False
allow_push=*
Then, in our situation, we created a startup script (a batch file for Windows in our case) that runs when the server turns on and performs the following:
taskkill /f /im hg.exe /t
cd pathtorepository\MyProject
hg serve -d -p <portnumber1>
cd pathtosecondproject\MySecondProject
hg serve -d -p <portnumber2>
Visit the Mercurial wiki or search SO for more details on setting up hg serve if you require secure connections and authentication:
https://www.mercurial-scm.org/wiki/hgserve
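Clients on the LAN can then clone from and push to the served repositories over HTTP; the hostname below is a placeholder and the port matches the portnumber1 example above:
hg clone http://servername:portnumber1/ MyProject
cd MyProject
hg push http://servername:portnumber1/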
When I use Macfusion to mount my Ubuntu VM, hg clone something from Bitbucket, and then try to do a commit / export / etc. on it, I get a folder added with a name prefixed with 'hg-checklinks-'.
On inspection it appears to house a never-ending chain of symlinks back to its parent folder. This is driving me completely nuts, and so far I've lost my faith in Mercurial.
Mind you, it seems to work fine when I just use it on a local folder. Does anyone have any idea how I can get around this, or even more info as to why it's happening?
Cheers!
The decentralized part of DVCS is about running it locally; the only Mercurial operations that should be done on anything other than the local system are push, pull, and clone. If you're cloning from Bitbucket onto your Ubuntu VM, then you should clone from your Ubuntu VM onto your Mac and push back to the Ubuntu VM.
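As a sketch of that workflow (hostname and paths are just examples), the repository lives on the VM and the Mac talks to it over SSH rather than a mounted share:
hg clone ssh://user@ubuntu-vm//home/user/repos/project project
cd project
hg commit -m "local work on the Mac"
hg push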
That said, it looks like your network FS isn't correctly deleting the file when it's told to. Here's the relevant code (found here: https://www.mercurial-scm.org/repo/hg/file/a2dc8819bb0d/mercurial/util.py#l710):
# From checklink() in mercurial/util.py: probe whether the filesystem
# at `path` supports symlinks by creating and removing a temporary one.
import os
import tempfile

def checklink(path):
    name = tempfile.mktemp(dir=path, prefix='hg-checklink-')
    try:
        os.symlink(".", name)
        os.unlink(name)
        return True
    except (OSError, AttributeError):
        return False
So either your network FS is creating the symlink but throwing an exception anyway, or it is throwing an exception when asked to delete (unlink) the symlink.
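You can reproduce what that code does by hand on the mounted share to see which step fails (the mount path is an example):
cd /Volumes/mounted-share/repo
ln -s . hg-checklink-test
ls -l hg-checklink-test
rm hg-checklink-test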
The problem here is with sshfs's very special "-o follow_symlinks", which will happily create symlinks, then claim it couldn't create them, then show them as awesome recursive unremovable directories. This broken option may automatically be turned on by a bug in Macfusion (https://code.google.com/p/macfusion/issues/detail?id=284). So if anything, you should "lose faith" in sshfs and Macfusion, not Mercurial.
This will be worked around in Mercurial 2.7. In the meantime, you should be able to run sshfs manually without the option.
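Mounting with sshfs manually, without that option, would look roughly like this (host, remote path, and mountpoint are placeholders):
mkdir -p ~/ubuntu-repos
sshfs user@ubuntu-vm:/home/user/repos ~/ubuntu-repos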
(For faster bug fixes, please report bugs to the Mercurial/sshfs/macfusion projects, not random internet question forums.)