NFS mount keeps changing inodes - macOS

I'm using a MacBook Pro with Catalina for all my development. I also run an Ubuntu 16.04 VM through VirtualBox, from which I export an NFS share.
The export looks like this:
/export/dev 192.168.0.0/16(rw,insecure,no_subtree_check,async,all_squash,anonuid=1000,anongid=1000)
and I mount this on my Mac with
mount -o rw,nolocks,locallocks -t nfs 192.168.56.102:/export/dev /Users/myhome/Documents/dev
nfsstat -m is saying
NFS parameters: vers=3,tcp,port=2049,nomntudp,hard,nointr,noresvport,negnamecache,callumnt,locallocks,quota,rsize=32768,wsize=32768,readahead=16,dsize=4096,rdirplus,nodumbtimr,timeo=10,maxgroups=16,acregmin=5,acregmax=60,acdirmin=5,acdirmax=60,nomutejukebox,nonfc,sec=sys
Most of the time everything works, but more and more often I get strange errors, and folders in Sublime Text start to look like folders with a "link" badge on them. Investigating that in the console, I get errors saying that the inodes have already been seen, so the folders are treated as symbolic links in Sublime.
I investigated this further and do not think this is a Sublime error, but more likely a macOS problem.
When everything is working and I run ls -i on the Mac in one of my mounted folders, I get exactly the same inode numbers as on the VM. But five minutes later, doing the exact same thing, I get totally different inodes - and the same inode number on every file in that folder.
Has anyone experienced this before? Is this an NFS parameter issue?
I have googled this and haven't found anyone describing similar problems.

You are right - I'm seeing the exact same problem on Catalina 10.15.7. The workaround I have for now is to specify actimeo=0. With the default actimeo=60, file/directory attributes are refreshed every 60 seconds. Inode numbers are correct immediately after the mount, but each refresh changes the files within a directory to share an identical, seemingly random inode number. This is bad as hell, because it breaks a fundamental assumption of a lot of programs, including dyld, which uses the inode number to identify loaded images internally (see bool ImageLoader::statMatch(const struct stat& stat_buf) const in https://opensource.apple.com/source/dyld/dyld-750.6/src/ImageLoader.cpp.auto.html). I'm filing a bug report with Apple to see how to move forward.
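A quick, hedged way to watch the drift from the Mac side (using the mount point from the question; only ls, sleep and diff are assumed):
ls -i /Users/myhome/Documents/dev | sort > /tmp/inodes_before
sleep 70    # wait past the default attribute cache timeout (acregmax/acdirmax = 60s)
ls -i /Users/myhome/Documents/dev | sort > /tmp/inodes_after
diff /tmp/inodes_before /tmp/inodes_after || echo "inode numbers changed after the refresh"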
Update 1: I quote the answer from Apple:
After reviewing your feedback, we have some additional information for you, or some additional information or action is necessary for this issue:
To work around the issue, mounting with nordirplus should fix this.
Also, the issue is resolved in macOS Big Sur (11+), so you could try that as well.
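For reference, a hedged sketch of both workarounds applied to the mount command from the question (actimeo=0 is my stopgap, nordirplus is Apple's suggestion):
# disable attribute caching entirely:
mount -o rw,nolocks,locallocks,actimeo=0 -t nfs 192.168.56.102:/export/dev /Users/myhome/Documents/dev
# or disable READDIRPLUS, as Apple suggests:
mount -o rw,nolocks,locallocks,nordirplus -t nfs 192.168.56.102:/export/dev /Users/myhome/Documents/dev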


Why does running a clang-compiled executable on a network drive hang all subsequent executions of compiled executables?

I'm perplexed by this one and not sure what's relevant so will include all context:
MacBook Pro with an M1 Pro running macOS 12.6.
Apple clang version 14.0.0, freshly installed by deleting DeveloperTools folder and running xcode-select --install.
Using zsh in Terminal.
Network share mounted using no-configuration Finder method (seems to use standard SMB, but authenticates with my Apple ID)
Network share is my home directory on an iMac with a Core i5 running macOS 11.6.8.
Update: also tried root directory and using the tmp directory, to eliminate one category of doubt. Same result.
The minimum repeatable example of the issue I've managed to find is:
Use gcc from Apple's Developer Tools to compile a “Hello World” C application (originally discovered using ghc to compile Haskell - effect is the same).
Run the compiled executable. No surprises.
cd to the mounted network drive.
Do the same thing there - compiled executable hangs! First surprise, but relatively minor.
Return to the local machine. Original compiled executable still runs fine.
Use the DeveloperTools to compile anything, including the original source - compiled executable on local machine now hangs!
I've created an asciinema recording of the MRE that shows the key part of the transcript.
I’ve tried killing processes, checking lsof, unmounting the drive, logging in and out, checking the PATH, etc. Nothing gets me back to a working state short of a reboot.
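For concreteness, here is a hedged sketch of the repro; /Volumes/<SHARE> stands in for the mounted network drive and the filenames are placeholders:
cd ~ && cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("Hello, world"); return 0; }
EOF
gcc -o hello hello.c && ./hello              # steps 1-2: builds and runs as expected locally
cd /Volumes/<SHARE> && cp ~/hello.c .
gcc -o hello hello.c && ./hello              # steps 3-4: this one hangs on the Apple ID-mounted share
cd ~ && gcc -o hello2 hello.c && ./hello2    # step 6: now even freshly built local executables hang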
Some more troubleshooting data:
gcc -v is identical for both executables, except for -fdebug-compilation-dir (set to cwd) and the name of the object file (randomly generated).
Just performing the compilation doesn't trigger the issue - running the networked executable does.
Trawling through the voluminous Console log reveals nothing relevant.
system.log shows no entries around the time of the issue.
lsof and ps -axww show reams and reams of output that is hard to spot patterns in, but I'm pretty sure there are no significant before/after differences.
I left the hung process running on the local machine overnight, and there's no change the next day.
Have I triggered some sandboxing or security fault and am being protected from disastrous consequences? Or is this some clang/LLVM-related quirk I'm not familiar with? Or, given that ghc using its native code generator seems to have the same result, is this a bug in the way stdout is provided to executables? I'm at a loss!
Oh boy, avoiding Apple ID authentication of the network share fixed this for me.
I forced Finder to not use its magic no-configuration Apple ID login method, by opening the Location in Finder, clicking the "Disconnect" button and then clicking the "Connect As..." button that appears in its place. If I choose "Registered User" and use my username and password, I can then execute exactly the same commands (since the mount name ends up being the same) and execution works without an issue. I can continue to compile and execute to my heart's content.
That the Apple ID method is being used in the first place is not obvious (in true minimal design fashion), but subtly indicated at the top of the Finder window as "Connected as ". The only obvious difference this makes is the username shown in mount:
Apple ID:
//com.apple.idms.appleid.prd.<UUID>#<HOSTNAME>._smb._tcp.local/<SHARE> on /Volumes/<SHARE> (smbfs, nodev, nosuid, mounted by <USERNAME>)
"Registered User":
//<USERNAME>#<HOSTNAME>._smb._tcp.local/<SHARE> on /Volumes/<SHARE> (smbfs, nodev, nosuid, mounted by <USERNAME>)
Obviously something far more significant is different, given the fundamental impact, but it's not at all clear to me what that is. So at this stage, this answer is just a workaround to a nasty bug.
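A hedged command-line equivalent of the Finder steps above, using the same placeholders as the mount output (mount_smbfs is the standard macOS SMB mount tool; exact hostnames will differ):
mount | grep smbfs                                                   # check which identity the share is mounted with
diskutil unmount /Volumes/<SHARE>                                    # disconnect the Apple ID-authenticated mount
mkdir -p /Volumes/<SHARE>
mount_smbfs //<USERNAME>@<HOSTNAME>.local/<SHARE> /Volumes/<SHARE>   # prompts for the account password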

Docker on OS X: slow volumes

I'm trying to use Docker beta on OS X, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get 6 s page load times. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.
Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take, since I spent 12 hours figuring it out - once you know how, it's super easy and fixes all the slowdown issues!
The key here is to understand that this solution uses NFS (Network File System) mounts as the means of communication between the Docker containers and your Mac, instead of the standard OS X file sharing, which is currently very slow, either due to bugs or due to the way it works.
Follow these steps exactly.
1.) Clone this repo here (https://github.com/IFSight/d4m-nfs) in your home directory. To do this open up terminal and type cd ~
Then type git clone https://github.com/IFSight/d4m-nfs
Alternatively you can also do this in a one liner git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder and create a new file in its etc folder, titled d4m-nfs-mounts.txt
3.) Add the following line to it:
/Users/yourusername:/Users/yourusername:0:0
What the above does is allow you to still use relative folders with docker-compose; the trailing 0:0 is the uid:gid mapping used for the NFS export.
EDIT
Do not put /Volumes here!!
4.) Go to your Docker preferences (the file sharing list) and do the following:
Make sure only /tmp is showing and NOTHING ELSE. I mean nothing else - it won't work if there is anything else, since it will conflict with the NFS exports the script creates for you later. Restart Docker, and docker-compose down any running containers as well.
5.) Finally navigate to the d4m-nfs directory we created in step 1 and type the following command, /bin/bash d4m-nfs.sh
EDIT: The correct way to run the command above, as another user on GitHub (if-kenn) pointed out, is ./d4m-nfs.sh, which uses the shebang to determine which shell should run it.
If done correctly there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh - this will create errors and you will have to delete your exports file to start over. In fact, anytime you make any changes you will have to clear your exports file.
This is what my /etc/exports file ends up looking like.
EDIT: IMPORTANT - remove the /private and /Volumes entries! It should only contain the /Users/username export now!
If you see anything other than this, you were not running it with bash. If you make any errors, you can quickly get to the exports file on the Mac and just clear it out to start over.
Just select "Go to Folder" in Finder and type /etc/exports.
This is a nice shortcut to quickly get to it and clear it out in your favorite text editor.
Also make sure no containers are running or you will get the ........ loop of death. If this loop of death continues, make sure you upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to this: https://github.com/IFSight/d4m-nfs/issues/3
Note on the .... loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be in /Users/username.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there or this won't work either: chmod -R 777 /tmp
6.) If you did it right, the script will run to completion without errors.
Then simply run docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except with NO MORE slowdowns!
You will need to run this anytime you restart your computer or docker.
Also note: if you get mounting errors, you probably don't have your project stored in your /Users/username directory. Remember, that is where we mounted it. If your project is somewhere else, you will need to modify the d4m-nfs-mounts.txt file accordingly.
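For reference, a condensed, hedged recap of the steps above (replace yourusername and the project path with your own):
git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
echo "/Users/yourusername:/Users/yourusername:0:0" > ~/d4m-nfs/etc/d4m-nfs-mounts.txt
# in Docker preferences, keep only /tmp in the file sharing list, restart Docker, then:
cd ~/d4m-nfs && ./d4m-nfs.sh
cd ~/yourproject && docker-compose up -d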
Other Info:
For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request has already been accepted to improve performance (https://github.com/docker/docker/pull/31047).
This will be released sometime in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source for alternatives to OSXFS can be found at: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credit to Eugen Mayer for setting this up.
EDIT:
First improvement is implemented in the edge release. https://github.com/docker/for-mac/issues/77 has more info on this.
There's a long thread with explanation from Docker Team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.
I spent a lot of time searching for a viable solution, and I found one:
d4m-nfs
It lets you use Docker volumes via NFS.
In my case it improved performance 16 times! (1.8 s vs ~30 s)
Also, d4m-nfs has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I just leave this here for other googlers.
Normally, volumes should be fast.
You cannot do much to make them faster unless you are willing to change the format of your disk.
But maybe the bottleneck is the CPU or RAM.
You can check that with the command docker stats. By default Docker is limited to 2 cores and 2 GB RAM; you can change this in the Docker for Mac GUI.
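For example, a quick, hedged check of current usage against those limits:
docker stats --no-stream    # one-shot snapshot of CPU and memory usage per container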
I had exactly the same thing. For me using docker-bg-sync (see on GitHub) made a dramatic improvement in speed and CPU usage.
Not as nice as just mounting the volume as you have to start a new container for every sync but it does the job.
In the latest Docker, 17.06.0-ce-mac18, volumes mounted with :cached seem to run quite decently.
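For illustration, a hedged example of the :cached flag on a bind mount (the image name and paths are placeholders):
docker run -v /Users/me/myproject:/var/www/project:cached my-image
# or, in docker-compose.yml:
#   volumes:
#     - ./:/var/www/project:cached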
I've found that creating a CoreOS VM under Parallels, then using the Docker that is inside CoreOS is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMake/Ninja/GCC and it's almost twice as fast as the exact same build from Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development. Docker/Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...
We overcame this issue by synchronizing the local and the Docker for Mac filesystems using Syncthing. We built an open source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd

OSXFUSE - what exactly does the "local" mount option mean?

I've implemented an OSXFUSE-based file system. It works fine on 10.8, but on Mavericks MS Word opens existing documents as blank (although I am, apparently, returning the correct data - I see the contents in the preview icon. Also, if I copy a file to a real hard drive and open it, it opens fine).
This issue is fixed on Mavericks if I mount my filesystem with the "local" flag. However, using this flag introduces other problems - e.g., it looks like it causes Finder to do some more aggressive caching, hence some files are not visible in Finder (although I can ls them in a terminal).
Ideally I want to be able to mount the filesystem without this local flag (my implementation stores file on the network, so passing this flag looks wrong), but the problem with blank Word documents really puzzles me.
We have been able to track down the problem to - wait for it - Google Chrome. When Google Chrome is running while the volume is mounted, the problem appears. If Google Chrome is not running, Word/Excel/etc. files open just fine.
We've been in contact with Benjamin (OSXFUSE developer). Please also see his answer regarding this issue on the OSXFUSE mailing list:
https://groups.google.com/d/msg/osxfuse-group/URlw-n-Qakg/bLw2fHHDe7sJ
So far I have not found any bugs in osxfuse that might explain this behavior. The odd thing is that the files are not corrupted or empty. After copying the files to another volume they open just fine. Using LibreOffice to open the file on the FUSE volume works, too.
Chrome and Office seem to be based on the Carbon framework (which is deprecated since Mountain Lion). I believe the issue is somehow related to Carbon since non-Carbon apps do not seem to be affected. Every time a volume is mounted Chrome queries the volume’s capabilities and attributes (and maybe more). As far as I can tell all these file system operations return successful without any errors. But from this point on Office will fail to open documents.
In my opinion the two most likely reasons for this are:
osxfuse might break the VFS file system contract on Mavericks. I’ve been looking into this for some time now but I have not found any clues supporting this.
There might be a bug in the Carbon/CarbonCore framework. The odd thing is that there are no issues when using the stock network file systems afp or smb.
The two possible "fixes" (or rather "workarounds") for this issue seem to be (for now):
Use the "local" mount option (which might introduce other problems and is generally not recommend to use)
Do not use the "volname" mount option. The problem seems only to occur when the "volname" mount option is used. If no custom volume name is set, the problem does not seem to occur and Excel/Word/etc. files open just fine - regardless of whether Google Chrome was running at mount time.
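To make the two workarounds concrete, a hedged sketch of the mount variants (my_fuse_fs and the mount point are hypothetical; the exact invocation depends on your FUSE file system):
./my_fuse_fs /Volumes/myfs -o volname=MyVolume   # custom volname: Word/Excel fail while Chrome is running
./my_fuse_fs /Volumes/myfs                       # no volname: documents open fine
./my_fuse_fs /Volumes/myfs -o local              # works, but may trigger aggressive Finder caching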
I've seen the same and likewise local is not an option. Similar problems with Photoshop.
Some findings from my implementation
The problem doesn't occur on first run after reboots.
The problem begins occurring after program exit.
I solved this problem by manually dismounting (and waiting a few seconds) before exiting my program. If unmount is successful, on next run the mount again performs fine.
If the program ever terminates without unmounting, or dismounting fails (file in use, etc.), then the volume's read access is borked in Word/Photoshop on the next mount.
Rebooting resolves the issue.
Does this match what you're seeing?

How to delete an unfinished Darwinbuild build

I got darwinbuild off MacPorts to get a single Unix executable (long story, see Where/how to get the Mac OSX "login" command). I was having trouble figuring out how it worked, so I tried their website's example build, "darwinbuild xnu".
It worked, and when I opened the new volume it mounted in Finder, it appeared to be building a whole new Mac OS X (I know this is probably not the case, but that is what it looked like to me at least). So I grabbed the binary I wanted, hit Control-C in Terminal, and unmounted the volume. Everything seemed to work out, but even after restarting the computer, I could not get back the 2 GB or so that the build/mount/kernel thing took up.
I even tried restoring a timemachine backup, but even that would not bring the free space back.
So how do I get rid of this thing once and for all?
If you know the location the files were written to, navigate there in Finder and delete them. If you don't, read the documentation that comes with the Darwin stuff you downloaded (it'll be there, believe me) to find out, or download a drive space analyzer app to locate it.
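If you'd rather hunt it down from Terminal, a hedged sketch (the paths are guesses - verify what you find before deleting anything):
hdiutil info                                                       # list disk images that are still attached
sudo du -sk /Volumes/* /var/tmp/* 2>/dev/null | sort -n | tail     # see where the ~2 GB went (sizes in KB)
hdiutil detach "/Volumes/<BuildVolume>"                            # detach the build volume if it is still mounted
sudo rm -rf <path-found-above>                                     # then delete the backing files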
Really, I don't see how these questions are about programming? They're more "how do I fix something I screwed up ancillary to programming-related efforts," which are of course superuser.com material.

What is the Unix command to create a hardlink to a directory in OS X?

How do you create a hardlink (as opposed to a symlink or a Mac OS alias) in OS X that points to a directory? I already know the command "ln target destination" but that only works when the target is a file. I know that Mac OS, unlike other Unix environments, does allow hardlinking to folders (this is used for Time Machine, for example) but I don't know how to do it myself.
I agree that hard-linking folders/directories can cause problems if not careful, but they have a very definite advantage - Time Machine is a perfect example. Without them it simply would not be practical as the duplication of redundant versions of files would very quickly consume even the largest of disks.
Snow Leopard can create hard links to directories as long as you follow Amit Singh's six rules:
The file system must be journaled HFS+.
The parent directories of the source and destination must be different.
The source’s parent must not be the root directory.
The destination must not be in the root directory.
The destination must not be a descendent of the source.
The destination must not have any ancestor that’s a directory hard link.
So it's not correct at all that Snow Leopard has lost the ability to create hard links to folders.
I just verified that link/unlink do work on Snow Leopard - as long as you follow the six rules. I just tried it and it works fine on my Snow Leopard 10.6.6 system - tried it on the boot volume and on a separate USB external volume and it worked fine in both cases.
Here is the "hunlink.c" program:
#include <stdio.h>
#include <unistd.h>

/* hunlink: remove a directory hard link via unlink(2) */
int
main(int argc, char *argv[])
{
    if (argc != 2)
        return 1;

    int ret = unlink(argv[1]);   /* unlink the path given as the only argument */
    if (ret != 0)
        perror("unlink");
    return ret;
}
gcc -o hunlink hunlink.c
So, be careful if you try it - remember to follow the rules, use hlink to create these hard links, and use hunlink to remove the hard link afterwards. And don't forget to document what you've done for later on, or for someone else who might need to know this.
One other "gotcha" that I just learned about these "hard links" to folders. When you create them there is really a lot that happens "behind the curtain" of Mac OS X. One really important issue is that the folder you create the link to is really moved to a super-magical super-hidden folder called /.HFS+ Private Directory Data%000d/dir_xxx where xxx is the inode number of the "source_folder" - remember the format of the command is
hlink source_folder target_folder
So because of this, you have to be careful of not having any files open in the "source_folder" because if you do, they just got moved to the super-magical folder and you will likely have a problem if you try and save any changes to those files that were open in the "source_folder". This happened to me a couple of times until it dawned on me what was happening and the solution is pretty simple. I noticed that you couldn't do a "ls -la" command any longer without getting funny errors for all the folders/directories that were in the original "source_folder" but you could do a "ls" command and all looked well.
If you run "Verify disk" in the "Disk Utility" program, you will notice that it probably complains and gives a "Volume bitmap needs minor repair for orphaned blocks" which is what just happened with the creation of the super-magical folder and the movement of the "source_folder" to it.
If you do find yourself in this situation with "orphaned blocks", first save the changed files to some other temporary location not in the volume containing the "source_folder" tree, then use "Disk Utility" to unmount and remount the volume that contains the "source_folder" or just restart the computer. Then copy the files you saved to the temporary locations back to their original locations and you should be back in business. This is what worked for me, so can't guarantee this will work for you too. So it might be a good idea to try this out on a volume you have a good backup of just in case.
It seems so very weird that all this overhead occurs just for the simple task of creating a hard link to a folder. Does anyone have any idea why Mac OS X goes to all this effort for this hard link creation to folders? Does it have something to do with the fact that this is a "journaled" file system?
I discovered the info about the super-magical, super-hidden location by reading Amit Singh's explanation of his "hfsdebug" utility. If you want more details see his web site at Amit Singh's hfsdebug utility. It's a very interesting piece of software and will tell you lots of details about HFS+ file systems. It's free and I encourage you to download it and try it out. It's no longer supported but it still works on both Snow Leopard and Leopard - basically any HFS+ supported system. You can't really do any harm with it as it's a "read-only" tool - so it's great to use to look at some details of the filesystem.
One more issue about these "hard links to folders" - once you create one and the super-magical super-secret-hidden folder gets created, it's there for good. Even if you unlink the folder that caused it to be created in the first place, this magic folder stays around. Not sure why, but it definitely does. You can use "hfsdebug" to find this out if you wish to try it out. You can also use "hfsdebug" to find out how many of these "hard links to folders" exist on a drive. For these details refer to Amit's article on the "hfsdebug" utility.
He also has another newer utility that's supported but costs. It's called fileXray and costs $79 for one person on any number of computers in the same household for a personal non-business type license. It has an extensive 173-page User Guide that you can download to see what it can do before you purchase. Unfortunately there is no trial version, so read the manual and check out the web site for more details to see if it can help you out of a jam. Learn all the details about it at their web site - see fileXray web site for more info.
There are a couple of issues you should be aware of when using these hard links to folders. If the volume that they are created on is mounted to a remote client, there can be significant problems, depending on how they are mounted. If you use AFP to mount the volume to a remote client, there are big problems as any folder that currently has a hard link to it or has ever had one but later removed, will be unable to be used as all the lower level folders (but not files) will be inaccessible from either the Finder or a Terminal window. If you try to do a simple "ls -lR" command, it will fail and give you "ls: xxx: No such file or directory" error messages for all lower level folders. If you use a Finder window to traverse the directory tree of the remote volume, the folders that are in the folder that had or has a hard link to it will simply disappear without any error when you first click on the folder name.
These problems don't appear to occur (except for the error message) if you use NFS to mount the remote client (and assuming you had a NFS server on the system that has the volume as a local HFS+ filesystem). Details on how to use NFS to mount volumes are not provided here. I used a nice program from Dr. Marcel Bresink called "NFS Manager" to help with the NFS mounts on the server and client. You can get it from his web site - just search for "Bresink NFS Manager" in your favorite search engine, but he has a free trial version so you can try before you buy. It's not that big a deal if you want to learn how to do the NFS mounts, but the "NFS Manager" makes it pretty easy to set things up and to tweak all the different settings to help optimize it. He has several other neat Mac OS X utilities too that are very reasonably priced - one called "Hardware Monitor" that lets you monitor and graph all kinds of things like power usage, temperature of CPU, speed of fans and many many other variables for both the local and remote Mac systems over extended periods of time (from minutes to days). Definitely worth checking out if you are into handy utilities.
One thing I did notice is that NFS file transfers were about 20% slower than doing them via AFP, but your "mileage may vary", so no guarantees one way or the other, but I would rather have something that works even if I have to pay a 20% performance hit as compared to having nothing work at all.
Apple is aware of the problems with hard links and remote AFP filesystems, and they refer to it as an "implementation limitation" of the AFP client - I prefer to call it what it really appears to me to be - A BUG!!! I can only hope the next release of Mac OS X fixes the problem, as I really like having the ability to use hard links to folders when it makes sense.
These notes are my own personal opinion and I don't make any warranty about their correctness so use them at your own risk. Have a good backup before you play around with these "hard links to folders" just in case something unforeseen happens. But I hope you have fun if you do decide to look a bit more into this interesting aspect of Mac OS X.
You can't do it directly from Bash, then. However, I found an article that discusses how to do it indirectly, by compiling a simple little C program: http://www.mactech.com/articles/mactech/Vol.23/23.11/ExploringLeopardwithDTrace/index.html
#include <unistd.h>
#include <stdio.h>

/* hlink: create a directory hard link via link(2) */
int main(int argc, char *argv[])
{
    if (argc != 3)
        return 1;

    int ret = link(argv[1], argv[2]);   /* link(existing_dir, new_hard_link) */
    if (ret != 0)
        perror("link");
    return ret;
}
...and build in Terminal.app with:
$ gcc -o hlink hlink.c -Wall
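A hedged usage sketch for the two helpers, once both are compiled (the paths are hypothetical, and the six rules listed earlier in the thread still apply):
./hlink /Users/me/projects/source_folder /Users/me/links/target_folder   # create the directory hard link
./hunlink /Users/me/links/target_folder                                  # remove it again (do NOT rm -rf it)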
Piffle. On 10.5, it tells you in the man page for ln:
-d, -F, --directory
allow the superuser to attempt to hard link directories (note:
will probably fail due to system restrictions, even for the
superuser)
So yes:
sudo ln -d existing_dir new_hard_link
Give it your password, and you're not done yet. You didn't document it, did you? You must document hard-linked directories, even if it's a single-user machine.
Deleting is a different story: if you go about it the usual way to delete directories, you'll delete the contents. So you must "unlink" the directory:
unlink new_hard_link
There. Hope you don't wreck your filesystem!
Cross-posting this great tool which neatly solves the problem, originally posted by Sam:
To install Hardlink, ensure you've installed homebrew, then run:
brew install hardlink-osx
Once installed, create a hard link with:
hln [source] [destination]
I also noticed that the unlink command does not work on Snow Leopard, so I added an option to unlink:
hln -u destination
Code is available on Github for those who are interested: https://github.com/selkhateeb/hardlink
Yes it's supported by the kernel and the filesystem, but since it's not intended for general usage it's not exposed to the shell.
You could probably work out which APIs Time Machine uses and wrap them in a commandline tool, but it'd be better to take the hint and steer well-clear.
The OSX version of ln cannot do it, but, as mentioned in the other answer by rich, it is possible with the GNU version of ln which is available in homebrew as gln as part of the coreutils formula. man gln lists the -d option with the OSX-specific warning provided in rich's answer. In other words, it does not work in all cases. What exactly determines whether it works or not does not seem to be documented anywhere.
As a prerequisite, install coreutils:
brew install coreutils
Now you can do:
sudo gln -d /original_folder /mirror_folder
IMPORTANT: To remove the hard link you must use gunlink:
sudo gunlink /mirror_folder
❗️❗️❗️ Using rm or Finder will also delete the original folder.
FYI: The coreutils homebrew formula provides the GNU-compatible versions of generic unix tools. Use brew list coreutils to see the full list.
As of 2018 this is no longer possible. APFS (introduced in macOS High Sierra 10.13) is not compatible with directory hard links. See https://github.com/selkhateeb/hardlink/issues/31
My case was that I found out I cannot follow symlinks from a Windows virtual machine (I wanted to test some HTML pages in Internet Explorer), and my directory structure had symlinks for the CSS and images folders.
My workaround was a different approach than the other answers implied: I used rsync to create a copy of the folder. rsync can resolve the symlinks and copy the linked files instead.
This solved my problem without using hard links to directories. And it's actually an easy solution if you're just working on a small set of files.
rsync -av --copy-dirlinks --delete ../htmlguide ~/src/
From the article linked to, you'll get that error if you try to create the hard link in the same directory as the original. You have to create it somewhere else.
Another solution is to use bindfs https://code.google.com/p/bindfs/ which is installable via port:
sudo port install bindfs
sudo bindfs ~/source_dir ~/target_dir
On Linux you can use a bind mount to simulate hard-linking directories. Not sure about OS X:
sudo mount --bind /some/existing_real_contents /else/dummy_but_existing_directory
sudo umount /else/dummy_but_existing_directory
This can also be done with built-in Perl (from Terminal) without compiling anything. My specific use case is for Google Drive (which doesn't support symbolic links), so the examples below reflect the use case.
To link your "Documents" folder to Google Drive so it's synced:
perl -e 'link "/Users/me/Documents", "/Users/me/Google Drive/Documents"'
To remove the link to your "Documents" folder from Google Drive:
sudo perl -U -e 'unlink "/Users/me/Google Drive/Documents"'
You need "root" to unlink (see "unlink" perldoc).
The short answer is you can't. :) (except possibly as root, when it would be more accurate to say you shouldn't.)
Unixes only allow a set number of links to directories - ".." from within all its children and "." from within itself. Anything else is potentially a recipe for a very confused directory tree. This is/was apparently a design decision by Ken Thompson.
(Having said that, apparently Apple's Time Machine does do this :) )
In case there is no subfolder, you can try:
ln folder_path/*.* target_folder
It worked for me on OS X 10.9.
