NixOS has a configuration option in the manual for specifying extra entries in the GRUB menu, "boot.loader.grub.extraEntries", but I can't figure out how to make it work for a second Linux installation on the same hard disk with its own GRUB.
More specifics: I had Ubuntu installed and booting from /dev/sda2, with /dev/sda1 formatted as FAT. I reformatted /dev/sda1 as ext4 and successfully installed NixOS, specifying /dev/sda for its GRUB. It boots fine, but doesn't show the Ubuntu install. I would like to add Ubuntu as a menu item in the NixOS GRUB, which I believe I should be able to do with the configuration option boot.loader.grub.extraEntries, but I can't figure out exactly what I need to put in that entry to make it work. Could anyone provide me some pointers, please?
"What's the format?" would be a long answer :) Basically the format is the GRUB 2 configuration format: http://www.gnu.org/software/grub/manual/grub.html. Sorry for the "read the manual", but that's really the answer to that part of the question.
As for the Ubuntu-specific part, go into the Ubuntu partition and copy & paste (plus probably some tweaking) the menuentry from its /boot/grub/grub.cfg into the NixOS extraEntries option; that should do it.
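For example, a rough sketch of what that might look like in configuration.nix, assuming the Ubuntu root filesystem (with its /vmlinuz and /initrd.img symlinks) is still on /dev/sda2; the actual device, partition number, and paths should be taken from Ubuntu's own grub.cfg:

boot.loader.grub.extraEntries = ''
  menuentry "Ubuntu" {
    insmod ext2
    set root=(hd0,2)
    linux /vmlinuz root=/dev/sda2
    initrd /initrd.img
  }
'';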
I agree that this question probably belongs on Unix & Linux or Super User, but I also think it still deserves an answer.
I was looking for the same thing. There are some examples for both GRUB Legacy and GRUB 2 a fair way down the page, although it is probably worth reading the whole thing:
https://github.com/NixOS/nixos/blob/master/modules/system/boot/loader/grub/grub.nix
I am messing around with some coding and have some used/broken computers. I would like to see what kind of stuff I can do remotely to these computers.
I have a PXE boot setup to install operating systems remotely. What I am wondering is if there is a way to pull the system/hardware information off of the machines from another computer (Ethernet preferably) at the BIOS level.
From there, I would like to save it as a file externally. (I would also be curious whether I could wipe the data on these remotely too.)
Hopefully, this is clear enough, let me know if you have questions.
I will eventually figure this out but I was hoping for a kickstart in the right direction.
Thanks in advance for the help!
Well, maybe you can start 'hacking' the GRUB Legacy source code; GRUB 0.97 already has network boot support, some file systems, and a primitive hard disk driver.
As a bonus, GRUB Legacy runs in protected mode but has some pretty simple functions to switch to real mode, and from there you can call whatever BIOS services you want, so I think you can divert GRUB from its primary function and adapt it to your needs.
I'm doing a project that needs to generate a VM image file which will then be used as a bootable QEMU disk image. Previously, our product was a modified Linux system made into a USB installation drive, which was then booted and installed onto a bare-metal machine. Now we want to get rid of the hardware and run it in virtual machines, which is why we need an image file.
Instead of using the existing USB drive to install the system in QEMU, then shutting down the VM and grabbing the image, we were asked to make an out-of-the-box image file directly, skipping all the booting and installing on a real or virtual machine but still ending up with an installed image, so that we can just deliver this image and people can load it as a ready-to-use virtual machine image.
But during the whole procedure, I can NOT use any command that requires root privileges! Don't ask why; there is a whole bunch of restrictions on our project. No sudo, no su, only what a regular user can do.
The part I have already achieved is using the latest version of mke2fs with -d to populate a tree of folders and files into the different partitions of this image file, like this:
Suppose that after the image is booted we have this folder structure:
$ ls /
bin dev boot home lib32 mnt proc run srv tmp var data etc lib lib64 opt root sbin sys usr
Some of the folders are mount points for different partitions.
# extract a single partition from the image
dd if=image of=partitionN skip=offset_of_partition_N count=size_of_partition_N bs=512 conv=sparse
# populate a folder tree into the partition
mke2fs -d root_dir/etc partitionN
# put the partition back into the image
dd if=partitionN of=image seek=offset_of_partition_N count=size_of_partition_N bs=512 conv=sparse,notrunc
We have the first partition in the image as the boot partition, which contains the 'boot' folder and will be mounted under /boot once it is booted.
This boot partition is an EFI-compatible partition (which is actually FAT32-formatted), since our project needs it to be this way.
BUT
After getting all the partitions successfully populated into the image, I cannot find a way to install GRUB onto it, and that is the most important step needed to make this image bootable.
All the solutions I found on the web suggest loop-mounting the image's boot partition, which I cannot do without root privileges.
So does anyone have any idea how to do this?
I tried to understand how GRUB writes raw values into the MBR, how to find stage1 and stage2 from the values inside the MBR, and how to figure out the sector list at the end of stage2's first sector, but that's rather crazy and I eventually failed to get this trick to work.
Disclaimer: if I were facing this problem myself I would attack it directly by writing a grub2 MBR image installer. That would be a more direct attack on the problem, make a better answer, and be more on topic for this site; however, it would take more hours of research than I'm willing to put into a Stack Overflow answer.
There's an extra trick here: we can get the success/error code back by using a virtual floppy disk. There's no need to ship the floppy driver with your product; you can build it as a module and include it only in the cpio image.
If we are willing to forgo the kqemu component and pay the 10x slowdown price, we can start qemu with -hda image.img, -kernel bzImage, and -initrd initrd.cpio.gz and it will boot the image. You need an X server (which you can provide with Xvnc) but no privileges a normal user doesn't have. Assuming / and /boot are the same partition, /linuxrc looks like this:
#!/bin/sh
# load the floppy driver shipped as a module in the initrd
insmod /lib/modules/kernel/floppy.ko
# mount the image's root/boot partition and run grub2-install inside it
mount /dev/hda1 -t ext2 /mnt
PATH=/bin:/usr/bin:/usr/sbin:/sbin chroot /mnt grub2-install /dev/hda
RESULT=$?
umount /dev/hda1
# report the exit status back to the host via the virtual floppy
mount /dev/fd0 -t vfat /mnt
echo -n $RESULT > /mnt/errorcode
umount /mnt
poweroff
And you can get your error code back with mcopy to read the floppy disk image.
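A rough sketch of the host side, assuming mtools is available and using placeholder file names (image.img is the disk image being built; bzImage and initrd.cpio.gz are the kernel and initrd containing the /linuxrc above):

# create a blank 1.44 MB FAT floppy image (no root needed)
dd if=/dev/zero of=floppy.img bs=512 count=2880
mformat -i floppy.img -f 1440 ::
# boot the guest; /linuxrc runs grub2-install and powers the VM off
qemu-system-x86_64 -hda image.img -fda floppy.img -kernel bzImage -initrd initrd.cpio.gz
# read back the exit status the guest wrote to the floppy
mcopy -i floppy.img ::errorcode -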
If qemu is not available, you can build an i586-compatible kernel and use dosbox-x instead, starting the kernel with loadlin.exe. This actually works. If you try it with modern binaries it just dies, because everything demands i686 now; but you can build the grub install tool itself targeting i586 and just use an old kernel to boot and do the install. https://www.vogons.org/viewtopic.php?t=53531
Caveat: This is not a complete solution, however I'll post since nobody else answered.
I have never tried to do this and don't have a couple of spare hours to test it, however the OpenWrt project has standard x86-64 disk image files, including GRUB and a kernel, which you can find here:
https://downloads.openwrt.org/chaos_calmer/15.05.1/x86/64/openwrt-15.05.1-x86-64-combined-ext4.img.gz
The instructions tell you how to convert the images for VMware; for QEMU it must be similar:
https://wiki.openwrt.org/doc/howto/vmware
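I haven't tried the QEMU route myself, but a rough equivalent would be something like this (file name taken from the download link above):

gunzip openwrt-15.05.1-x86-64-combined-ext4.img.gz
# the raw image can be booted directly...
qemu-system-x86_64 -hda openwrt-15.05.1-x86-64-combined-ext4.img
# ...or converted to qcow2 first if you prefer
qemu-img convert -f raw -O qcow2 openwrt-15.05.1-x86-64-combined-ext4.img openwrt.qcow2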
The thing is, the OpenWrt philosophy has always been that builds shouldn't be done as root (it generally refuses to build as root), so I think you'll find they have ways of creating ext4 filesystem images complete with MBR and GRUB. I have only tested embedded platforms and never actually built from source for x86, but this is where you should start if you're stuck.
Of course, the OpenWrt disk has only a single partition; I'm unsure how you'd create a virtual disk with a more complex partition table, but perhaps there are options in the tools OpenWrt uses.
I'm trying to use the Docker beta on OS X, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get 6 s page load times. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.
Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take since I spent 12 hours trying to figure it out; once you know how, it's super easy and fixes all the slowdown issues!
The key here is to understand that this solution creates NFS (Network File System) mounts as the means of communication between the Docker containers and your Mac, instead of the standard OS X file sharing, which is currently very slow, either due to bugs or the way it works.
Follow these steps exactly.
1.) Clone this repo (https://github.com/IFSight/d4m-nfs) into your home directory. To do this, open up Terminal and type cd ~
Then type git clone https://github.com/IFSight/d4m-nfs
Alternatively, you can do this as a one-liner: git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder and create a new file in its etc folder, titled d4m-nfs-mounts.txt
3.) Add the following line to it:
/Users/yourusername:/Users/yourusername:0:0
What the above does is let you keep using relative folders with docker-compose; the trailing 0:0 sets the uid and gid used for the NFS export.
EDIT: Do not put /Volumes here!!
4.) Go to your Docker preferences (File Sharing) and do the following:
Make sure only /tmp is listed and NOTHING ELSE. I mean nothing else; it won't work if anything else is there, since it will create conflicts with the NFS mounts that the script will create for you later. Restart Docker, and docker-compose down any running containers as well.
5.) Finally, navigate to the d4m-nfs directory we cloned in step 1 and type the following command: /bin/bash d4m-nfs.sh
EDIT: As another user on GitHub (if-kenn) pointed out, the correct way to run the command above is ./d4m-nfs.sh, which uses the shebang line to decide which shell should run it.
If done correctly, there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh; this will create errors and you will have to clear your exports file to start over. In fact, any time you make changes you will have to clear your exports file.
This is what mine looks like.
EDIT: IMPORTANT: Remove the /private and Volumes entries! This should only be /Users/username now!
If you see anything other than this, you were not running it with bash. If you make any errors, you can quickly get to the exports file on the Mac and just clear it out to start over.
Just select Go to Folder in Finder and then type /etc/exports. This is a nice shortcut to quickly get to it and clear it out in your favorite text editor.
Also make sure no containers are running, or you will get the "........" loop of death. If this loop of death continues, make sure you upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to this: https://github.com/IFSight/d4m-nfs/issues/3
Note on the "...." loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be in /Users/username.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there or this won't work either: chmod -R 777 /tmp
6.) If you did it right, running the script will look like this.
Then simply run docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except NO MORE slowdowns!
You will need to run this anytime you restart your computer or docker.
Also note that if mounting errors show up, you probably don't have your project stored in your /Users/username directory. Remember, that is what we mounted. If your project lives somewhere else, you will need to modify the d4m-nfs-mounts.txt file accordingly.
Other Info:
For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request has already been accepted to improve performance (https://github.com/docker/docker/pull/31047).
This will be released sometime in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source for alternatives to osxfs can be found at: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credits to Eugen Mayer for setting this up.
EDIT:
First improvement is implemented in the edge release. https://github.com/docker/for-mac/issues/77 has more info on this.
There's a long thread with explanation from Docker Team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.
I spent a lot of time searching for a viable solution, and I found one:
d4m-nfs
It allows you to use Docker volumes via NFS.
In my case it gave a 16x performance increase! (~1.8 s vs ~30 s)
Also, d4m-nfs has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I'll just leave this here for other Googlers.
Normally volumes should be fast.
But you cannot change anything to make them faster if you don't want to change the format of your disk.
Maybe the bottleneck is the CPU or RAM instead.
You can check that with the command docker stats. These are by default set to 2 cores and 2 GB RAM, and you can change this in the Docker for Mac GUI.
I had exactly the same thing. For me using docker-bg-sync (see on GitHub) made a dramatic improvement in speed and CPU usage.
Not as nice as just mounting the volume as you have to start a new container for every sync but it does the job.
In the latest Docker, 17.06.0-ce-mac18, volumes mounted with :cached seem to run quite decently.
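For example (paths here are just placeholders), the flag goes at the end of the mount spec, both on the command line and in docker-compose.yml:

docker run -v /Users/me/project:/var/www/project:cached my-image
# docker-compose.yml equivalent:
#   volumes:
#     - .:/var/www/project:cached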
I've found that creating a CoreOS VM under Parallels, then using the Docker that is inside CoreOS is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMake/Ninja/GCC and it's almost twice as fast as the exact same build in Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development. Docker for Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...
We overcame this issue by synchronizing the local and the Docker for Mac filesystems using Syncthing. We built an open-source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd
Here is my situation: I downloaded Gentoo, started to run it, downloaded the stage 3 tarball from links, and then tried to extract it. A stream of white sentences flowed down my screen really fast for about a minute, just like in the YouTube tutorial I was viewing. However, after that, instead of going to the correct stage it says "cannot write: not enough space on device". I tried repartitioning, but I'm not sure what device it is talking about. Please help.
Sorry you're having this issue, though in general I truly believe the Gentoo Handbook is quite well written and even a newbie can follow it... Here is some advice I hope will help (most important: digest the handbook and follow it carefully; not that I'm saying "RTFM", it's just that for Gentoo the handbook is essential, and without it you can get lost if you're just starting).
From my experience, the "stream of white sentences" would presumably be from verbosely un-tar'ing your stage3. Usually I only want to see the errors, so my suggestion is to remove the "v" (i.e. change "tar xjvpf" to "tar xjpf") so that only errors appear while un-tar'ing. The caveat is that you'll be wondering whether it hung or is busy un-tar'ing. Use Alt-F1 and Alt-F2 (to switch back and forth if in console/tty mode) to log in on another TTY and run 'ps auxf' to see if it's still tar'ing. If you're using a GUI terminal, just open another tab and run 'ps auxf'; you get the picture...
Also, learn the command "df"; it'll come in handy. If you're running out of disk space, perhaps you're trying to install/untar stage3 to your ramdisk (grin) rather than your mounted target (i.e. "/mnt/gentoo"). Mount your root device to '/mnt/gentoo', cd to that mounted path, and then try it (don't forget to mount your '/boot' as well as proc, dev, sys, etc. before you chroot; again, follow the handbook as carefully as you can. Oh, also: distros such as Debian hybrids, including Ubuntu, use a symlink for shm, so read the part about 'rm /dev/shm' and follow it carefully; if you're using the Gentoo LiveCD, you can ignore that part).
Other useful commands if you're confused by (or new to) mounting devices are 'lsblk' and 'mount' (by itself), to inspect the sizes of your partitions (again, 'df' comes in handy as well) and to see which device is which (i.e. /dev/sda1 versus /dev/sdb1). Hint: when you run 'mkfs', use "-L" (or for some file systems it's "-N") to label/name your devices, so that when you use commands such as 'mount' or 'lsblk' you can spot them more easily. If you're using a GUI/desktop version of some distro, hopefully there are tools such as "gparted" which can show your devices visually, which can be helpful. One thing I'd advise you to stay away from if you're just starting is RAID (i.e. mdadm), until you're comfortable with how grub/lilo works. Get your kernel (gentoo-sources) compiled and the MBR written (i.e. grub-install), try booting, and have fun first (oh, also: if you can avoid installing a GUI like Gnome/KDE from the get-go, avoid it as well; you'll run into issues such as "should I use systemd or OpenRC", and then get hit by the obstacle that some Gnome parts need systemd while you've chosen OpenRC, and so on).
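As a rough example of that inspect/label/mount flow before chroot'ing (device names are placeholders; check yours with lsblk first):

lsblk                                  # list disks/partitions and their sizes
mkfs.ext4 -L gentoo-root /dev/sda3     # label the root partition while formatting it
mount /dev/sda3 /mnt/gentoo            # mount it where the handbook expects
df -h /mnt/gentoo                      # confirm there is enough free space for stage3
cd /mnt/gentoo
tar xjpf /path/to/stage3-*.tar.bz2     # untar without "v", so only errors are shown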
If I may add my opinion: Gentoo (also Arch and FreeBSD) is an excellent place to start if you want to learn the inner workings of Linux applications (library dependencies, why packages matter rather than downloading each library manually and compiling them one by one, etc.). I hope this won't discourage you, but if the installation frustrates you and all you want to do is test-drive Linux, there are much easier distros where you don't have to understand USE flags and other compilation mechanisms (if you have an old i586, it makes sense to build with hand-picked libraries so that a leaner system can be faster, but if you have a fast machine, why compile binaries when somebody who is an expert at it has already done it for you?). SUSE and Fedora/RedHat/CentOS used to be the least frustrating, since they were able to detect hardware (legacy and new), but these days I usually tell people "if you know how to install Windows, you can install Ubuntu", so that too may be a good way to get your feet wet. Good luck!
0_o wow, well... how about some 411, like the size of your HDD and exactly how you partitioned it? Linux will look for specific directories and, if they're missing, will instead start to install into the root directory. How you partition is an important first step. Once you have a generally good partition setup, most Linux installs will go fine. Most basic tables include /, /home, /var, and a swap.
How do you create a hardlink (as opposed to a symlink or a Mac OS alias) in OS X that points to a directory? I already know the command "ln target destination" but that only works when the target is a file. I know that Mac OS, unlike other Unix environments, does allow hardlinking to folders (this is used for Time Machine, for example) but I don't know how to do it myself.
I agree that hard-linking folders/directories can cause problems if not careful, but they have a very definite advantage - Time Machine is a perfect example. Without them it simply would not be practical as the duplication of redundant versions of files would very quickly consume even the largest of disks.
Snow Leopard can create hard links to directories as long as you follow Amit Singh's six rules:
The file system must be journaled HFS+.
The parent directories of the source and destination must be different.
The source’s parent must not be the root directory.
The destination must not be in the root directory.
The destination must not be a descendent of the source.
The destination must not have any ancestor that’s a directory hard link.
So it's not correct at all that Snow Leopard has lost the ability to create hard links to folders.
I just verified that link/unlink do work on Snow Leopard, as long as you follow the six rules. It works fine on my Snow Leopard 10.6.6 system; I tried it on the boot volume and on a separate USB external volume and it worked fine in both cases.
Here is the "hunlink.c" program:
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
    if (argc != 2)
        return 1;

    int ret = unlink(argv[1]);
    if (ret != 0)
        perror("unlink");
    return ret;
}
Compile it with: gcc -o hunlink hunlink.c
So be careful if you try it: remember to follow the rules, use hlink to create these hard links, and use hunlink to remove them afterwards. And don't forget to document what you've done, for later or for someone else who might need to know this.
One other "gotcha" that I just learned about these "hard links" to folders. When you create them there is really a lot that happens "behind the curtain" of Mac OS X. One really important issue is that the folder you create the link to is really moved to a super-magical super-hidden folder called /.HFS+ Private Directory Data%000d/dir_xxx where xxx is the inode number of the "source_folder" - remember the format of the command is
hlink source_folder target_folder
So because of this, you have to be careful of not having any files open in the "source_folder" because if you do, they just got moved to the super-magical folder and you will likely have a problem if you try and save any changes to those files that were open in the "source_folder". This happened to me a couple of times until it dawned on me what was happening and the solution is pretty simple. I noticed that you couldn't do a "ls -la" command any longer without getting funny errors for all the folders/directories that were in the original "source_folder" but you could do a "ls" command and all looked well.
If you run "Verify disk" in the "Disk Utility" program, you will notice that it probably complains and gives a "Volume bitmap needs minor repair for orphaned blocks" which is what just happened with the creation of the super-magical folder and the movement of the "source_folder" to it.
If you do find yourself in this situation with "orphaned blocks", first save the changed files to some other temporary location not in the volume containing the "source_folder" tree, then use "Disk Utility" to unmount and remount the volume that contains the "source_folder" or just restart the computer. Then copy the files you saved to the temporary locations back to their original locations and you should be back in business. This is what worked for me, so can't guarantee this will work for you too. So it might be a good idea to try this out on a volume you have a good backup of just in case.
It seems so very weird that all this overhead occurs just for the simple task of creating a hard link to a folder. Does anyone have any idea why Mac OS X goes to all this effort for this hard link creation to folders? Does it have something to do with the fact that this is a "journaled" file system?
I discovered the info about the super-magical, super-hidden location by reading Amit Singh's explanation of his "hfsdebug" utility. If you want more details see his web site at Amit Singh's hfsdebug utility. It's a very interesting piece of software and will tell you lots of details about HFS+ file systems. It's free and I encourage you to download it and try it out. It's no longer supported but it still works on both Snow Leopard and Leopard - basically any HFS+ supported system. You can't really do any harm with it as it's a "read-only" tool - so it's great to use to look at some details of the filesystem.
One more issue about these "hard links to folders" - once you create one and the super-magical super-secret-hidden folder gets created, it's there for good. Even if you unlink the folder that caused it to be created in the first place, this magic folder stays around. Not sure why, but it definitely does. You can use "hfsdebug" to find this out if you wish to try it out. You can also use "hfsdebug" to find out how many of these "hard links to folders" exist on a drive. For these details refer to Amit's article on the "hfsdebug" utility.
He also has another newer utility that's supported but costs. It's called fileXray and costs $79 for one person on any number of computers in the same household for a personal non-business type license. It has an extensive 173-page User Guide that you can download to see what it can do before you purchase. Unfortunately there is no trial version, so read the manual and check out the web site for more details to see if it can help you out of a jam. Learn all the details about it at their web site - see fileXray web site for more info.
There are a couple of issues you should be aware of when using these hard links to folders. If the volume that they are created on is mounted to a remote client, there can be significant problems, depending on how they are mounted. If you use AFP to mount the volume to a remote client, there are big problems as any folder that currently has a hard link to it or has ever had one but later removed, will be unable to be used as all the lower level folders (but not files) will be inaccessible from either the Finder or a Terminal window. If you try to do a simple "ls -lR" command, it will fail and give you "ls: xxx: No such file or directory" error messages for all lower level folders. If you use a Finder window to traverse the directory tree of the remote volume, the folders that are in the folder that had or has a hard link to it will simply disappear without any error when you first click on the folder name.
These problems don't appear to occur (except for the error message) if you use NFS to mount the remote client (and assuming you had a NFS server on the system that has the volume as a local HFS+ filesystem). Details on how to use NFS to mount volumes are not provided here. I used a nice program from Dr. Marcel Bresink called "NFS Manager" to help with the NFS mounts on the server and client. You can get it from his web site - just search for "Bresink NFS Manager" in your favorite search engine, but he has a free trial version so you can try before you buy. It's not that big a deal if you want to learn how to do the NFS mounts, but the "NFS Manager" makes it pretty easy to set things up and to tweak all the different settings to help optimize it. He has several other neat Mac OS X utilities too that are very reasonably priced - one called "Hardware Monitor" that lets you monitor and graph all kinds of things like power usage, temperature of CPU, speed of fans and many many other variables for both the local and remote Mac systems over extended periods of time (from minutes to days). Definitely worth checking out if you are into handy utilities.
One thing I did notice is that NFS file transfers were about 20% slower than doing them via AFP, but your "mileage may vary", so no guarantees one way or the other, but I would rather have something that works even if I have to pay a 20% performance hit as compared to having nothing work at all.
Apple is aware of the problems with hard links and remote AFP filesystems, and they refer to it as an "implementation limitation" of the AFP client. I prefer to call it what it really appears to me to be: A BUG!!! I can only hope the next release of Mac OS X fixes the problem, as I really like having the ability to use hard links to folders when it makes sense.
These notes are my own personal opinion and I don't make any warranty about their correctness so use them at your own risk. Have a good backup before you play around with these "hard links to folders" just in case something unforeseen happens. But I hope you have fun if you do decide to look a bit more into this interesting aspect of Mac OS X.
You can't do it directly in Bash, then. However... I found an article here that discusses how to do it indirectly: http://www.mactech.com/articles/mactech/Vol.23/23.11/ExploringLeopardwithDTrace/index.html by compiling a simple little C program:
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 3)
        return 1;

    int ret = link(argv[1], argv[2]);
    if (ret != 0)
        perror("link");
    return ret;
}
...and build in Terminal.app with:
$ gcc -o hlink hlink.c -Wall
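Then, keeping the six rules from the earlier answer in mind, a directory hard link is created with the resulting tool (the paths here are just an example):

./hlink /Users/me/original_folder /Users/me/backups/linked_folder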
Piffle. On 10.5, it tells you in the man page for ln:
-d, -F, --directory
allow the superuser to attempt to hard link directories (note:
will probably fail due to system restrictions, even for the
superuser)
So yes:
sudo ln -d existing_dir new_hard_link
Give it your password, but you're not done yet. You didn't document it, did you? You must document hard-linked directories, even if it's a single-user machine.
Deleting is a different story: if you go about it the usual way to delete directories, you'll delete the contents. So you must "unlink" the directory:
unlink new_hard_link
There. Hope you don't wreck your filesystem!
Cross-posting this great tool which neatly solves the problem, originally posted by Sam:
To install Hardlink, ensure you've installed homebrew, then run:
brew install hardlink-osx
Once installed, create a hard link with:
hln [source] [destination]
I also noticed that the unlink command does not work on Snow Leopard, so I added an option to unlink:
hln -u destination
Code is available on Github for those who are interested: https://github.com/selkhateeb/hardlink
Yes it's supported by the kernel and the filesystem, but since it's not intended for general usage it's not exposed to the shell.
You could probably work out which APIs Time Machine uses and wrap them in a command-line tool, but it'd be better to take the hint and steer well clear.
The OSX version of ln cannot do it, but, as mentioned in the other answer by rich, it is possible with the GNU version of ln which is available in homebrew as gln as part of the coreutils formula. man gln lists the -d option with the OSX-specific warning provided in rich's answer. In other words, it does not work in all cases. What exactly determines whether it works or not does not seem to be documented anywhere.
As a prerequisite, install coreutils:
brew install coreutils
Now you can do:
sudo gln -d /original_folder /mirror_folder
IMPORTANT: To remove the hard link you must use gunlink:
sudo gunlink /mirror_folder
❗️❗️❗️ Using rm or Finder will also delete the original folder.
FYI: The coreutils homebrew formula provides the GNU-compatible versions of generic unix tools. Use brew list coreutils to see the full list.
As of 2018, this is no longer possible. APFS (introduced in macOS High Sierra 10.13) does not support directory hard links. See https://github.com/selkhateeb/hardlink/issues/31
In my case, I found that from a Windows virtual machine I could not follow symlinks (I wanted to test some HTML pages in Internet Explorer), and my directory structure had symlinks for the CSS and images folders.
My workaround was a different approach than the other answers imply: I used rsync to create a copy of the folder. rsync can resolve the symlinks and copy the linked files instead.
This solved my problem without using hard links to directories, and it's actually an easy solution if you're just working on a small set of files.
rsync -av --copy-dirlinks --delete ../htmlguide ~/src/
According to the article linked above, you'll get that error if you try to create the hard link in the same directory as the original; you have to create it somewhere else.
Another solution is to use bindfs (https://code.google.com/p/bindfs/), which is installable via MacPorts:
sudo port install bindfs
sudo bindfs ~/source_dir ~/target_dir
In Linux you can use a bind mount to simulate hard-linking directories. Not sure about OS X:
sudo mount --bind /some/existing_real_contents /else/dummy_but_existing_directory
sudo umount /else/dummy_but_existing_directory
This can also be done with built-in Perl (from Terminal) without compiling anything. My specific use case is for Google Drive (which doesn't support symbolic links), so the examples below reflect the use case.
To link your "Documents" folder to Google Drive so it's synced:
perl -e 'link "/Users/me/Documents", "/Users/me/Google Drive/Documents"'
To remove the link to your "Documents" folder from Google Drive:
sudo perl -U -e 'unlink "/Users/me/Google Drive/Documents"'
You need "root" to unlink (see "unlink" perldoc).
The short answer is you can't. :) (except possibly as root, when it would be more accurate to say you shouldn't.)
Unixes only allow a set number of links to directories - ".." from within all its children and "." from within itself. Anything else is potentially a recipe for a very confused directory tree. This is/was apparently a design decision by Ken Thompson.
(Having said that, apparently Apple's Time Machine does do this :) )
In case there are no subfolders, you can try
ln folder_path/*.* target_folder
It worked for me on OS X 10.9.