I've got a Mac that I can run either the Leopard (10.5) or Snow Leopard (10.6) version of OS X on. I'm using it to do web development/testing before publishing files to my production host.
On the production host my site's doc root is under the home directory (e.g. /home/stimulatingpixels/public_html) and I'd like to duplicate that location on the Mac. Unfortunately, there is a hidden and locked placeholder on the Mac that looks like a mounted drive with nothing in it sitting in the /home location.
I know from experience that it's unwise to move this and drop in your own /home directory, because upgrades can cause it to be erased (and it doesn't get included in Time Machine backups, by the way).
So, the question: is there any way to safely use /home on a Mac, on either Leopard or Snow Leopard?
(Note: I realize this is very Mac specific and will be asking it in an Apple forum as well. Just wanted to ask here in addition to cover all the bases.)
Update: To help explain why I want to do this: in addition to the front-end web site, I've got a series of scripts that I'd like to run as well. One of the main goals of being able to use the /home directory (and more specifically the same path from the server's root) is so the scripts can use the same output paths on the development Mac as on the production server. I know there are ways to work around this, but I'd rather not have to deal with it. The real goal is to have all the files on the development Mac have the same file path from the / root of the directory tree as on the production server.
Another Update: The other reason, which I forgot to mention earlier, is setting up .htaccess paths when using basic authentication. Since those paths are relative to the file system root instead of the website docroot, they end up going through "/home" when that's part of the tree, as in the example below.
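For example, a basic authentication block like this (the password file name here is just illustrative) has to reference the file by its absolute filesystem path, which includes /home on the production host:

AuthType Basic
AuthName "Members Only"
AuthUserFile /home/stimulatingpixels/.htpasswd
Require valid-user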
NOTE: As of 2015, I no longer use or recommend this method. Instead I use Vagrant to set up virtual machines for dev and testing. It's free, relatively easy, and allows better matching of the production environment. It completely separates the development environment and you can make as many as you need. Highly recommended. I'm leaving the original answer below for posterity's sake.
I found an answer here on the Apple forums.
In order to reclaim the /home directory, edit the /etc/auto_master file and comment out (or remove) the line with /home in it. You'll need to reboot after this for the change to take effect (or, per nilbus' comment, try running sudo automount -vc). This works with Mac OS X 10.5 (Leopard). Your mileage may vary for different versions, but it should be similar.
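For reference, the edit amounts to commenting out a single line; on a stock system the entry looks roughly like this (the exact options vary by OS version):

# before
/home    auto_home    -nobrowse,hidefromfinder
# after
#/home    auto_home    -nobrowse,hidefromfinder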
As noted on that forum post, you should also be aware that Time Machine automatically excludes the /home directory and does not back it up.
One note of warning: make sure to back up your /home directory manually before doing a system update. I believe one of the updates I did (from 10.6 to 10.7, for example) wiped out what I had stored in /home without warning. I'm not 100% sure that's what happened, but it's something to be on the lookout for.
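A minimal way to take that manual backup (a sketch; rsync ships with macOS, and the destination path is just an example):

# copy /home somewhere that Time Machine does back up, e.g. your real home directory
sudo rsync -a /home/ "$HOME/home-backup-$(date +%Y%m%d)/"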
Putting it all together from the tips and hints above:
edit /etc/auto_master # comment out the line with /home in it.
remount:
sudo automount -vc
make a softlink to the mac-ified dir:
sudo ln -s $HOME /home/$USER
At that point, your paths should match up to your production paths. Env vars will still point to /Users/xxxx, but anything you hard-code in a path in your .bashrc (or, say, in ~/.pip/pip.conf) should be essentially equivalent. Worked for me.
re: "The real goal is to have all the files on the development Mac have the same filepath from the / root of the directory tree as the production server."
On production, my deploy work might happen in /opt/projects/projname, so I'll just make sure my account can write into /opt/projects and go from there. I'd start by doing something like this:
sudo mkdir /opt/projects
sudo chown $USER /opt/projects
mkdir /opt/projects/projname
cd /opt/projects/projname
With LVM, I'll set up a separate partition for /opt/, and write app data there instead of $HOME. Then I can grow the /opt file system in cases where I need more disk space for a project (LVM is your friend).
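On the production (Linux) side, growing that file system is a one-liner; the volume group and logical volume names below (vg0, opt) are placeholders for whatever your setup uses:

# extend the logical volume by 20G and resize its file system in one step
sudo lvextend -r -L +20G /dev/vg0/opt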
I tried it on Yosemite (OS X 10.10.1); sudo automount -vc didn't work, so I had to use sudo umount /home instead.
Therefore my workflow would be:
# comment out line starting with /home
sudo vi "+g/^\/home/s/\//#\//" "+x" /etc/auto_master
sudo umount /home
# link actual home directory (/Users/<user>) to new 'home' (/home/<user>)
ln -s $HOME /home/$USER
I adapted the previous solutions to Big Sur (macOS 11.2), which is a bit more complicated due to the APFS file system changes. I managed to change /home by following these steps:
As recommended by Alan W. Smith, comment out the /home entry in /etc/auto_master.
As suggested by Marco Torchiano, run
sudo umount /home
Since /home is currently a read-only link to /System/Volumes/Data/home, you have to change the latter. I did it with the following commands:
cd /System/Volumes/Data/
sudo rmdir home
sudo ln -s <some other directory> home
Why don't you just run MAMP and use the Sites directory? You can develop off localhost and just have a bunch of aliases for your sites. I'm not sure why you specifically need to use the home directory.
EDIT:
Ok, I think you are going about solving your problem the wrong way.
If it's HTML paths you are worried about, then begin everything with a slash "/", which makes them relative to the site's document root.
If it's the references in your PHP, then you need to create a global (or similar) and set it as the root of your site. Then you can reference everything from the global and when you move the site from dev to production all you need to change is the global.
Trying in a round-about way to develop from /home because it looks more like the production server is a bad idea.
Install MAMP, create the global somewhere high in the hierarchy and start re-referencing. It'll be less pain in the long run.
When I tried to remove a file on my local machine to check that files stay in sync with the Vagrant development server, it popped up an error:
The following file couldn't be moved to the trash.
Is gvfs-trash installed?
To solve it, I created a trash directory that can be accessed from outside the user's home directory:
# Create a Trash directory (with some subdirectories) in root
sudo mkdir -p /.Trash-1000/{expunged,files,info}
# Give ownership of this to your user:
sudo chown -R $USER /.Trash-1000
Still, I can't remove the file from the local machine. If I delete a file on the Vagrant development server it is automatically deleted on the local machine, but the opposite is not happening; it ends up with the error "Is gvfs-trash installed?"
Like YuriAFGomes said, everything seemed to work fine on my system: the trash folder had the right permissions and gvfs-trash worked flawlessly from the command line, yet Atom 1.45 said it couldn't delete any file. Starting Atom with sudo didn't fix anything, and creating the .Trash-1000 directories in several places didn't help either; I kept getting the same gvfs-trash error. I'm pretty sure this used to work fine in my Atom setup and it suddenly stopped, and I have no idea why. I went through their releases list and tried downgrading to several versions until I settled on 1.30, which doesn't seem to have this issue and is compatible with my local packages. If you have this problem and have tried everything said around the web, I suggest downgrading to different versions until the problem goes away.
There is an issue on GitHub reporting this problem. According to the report, a missing .Trash-1000 can cause this problem, so you can create it as follows.
mnt=/; id=$(id -u); sudo mkdir -p "$mnt/.Trash-$id"/{expunged,files,info} \
&& sudo chown -R $USER:$USER "$mnt/.Trash-$id"/ \
&& sudo chmod -R o-rwx "$mnt/.Trash-$id"/
Set mnt to the mount point, where gvfs-trash is expecting it.
Simply cd to the directory which will be opened in Atom and run df . there.
This will give something like this:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 960380628 463122460 448403708 51% /mnt/vol
In this example, the mount point and the value of mnt would be /mnt/vol.
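Putting the two together, you can fill in mnt automatically; this assumes GNU df (Linux), where the --output option is available:

mnt=$(df --output=target . | tail -n 1)

Then run the mkdir/chown/chmod snippet from above with that value.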
What solved this issue for me was uninstalling Atom via dpkg and installing it via apt from the following PPA: https://launchpad.net/~webupd8team/+archive/ubuntu/atom . I have no clue why this works, though. I have noticed that the PPA installs Atom 1.26, while the version where the issue arose, installed via dpkg, is 1.45.
Before doing that, I have tried creating the .Trash-1000 directories in root, in home and in project folder, with the proper permissions. gvfs-trash was installed, updated and working as expected all the time, but the problem persisted. Really odd.
The real problem is that Atom/Electron are (or were) using gvfs-trash, which has been deprecated for almost 5 years. Electron, the platform on which Atom is built, has fixed this in the development branch but hasn't backported it to the 2.0 branch on which Atom is based.
Solution/Workaround as of now?
Use the environment variable $ELECTRON_TRASH and set it to gio or one of the alternatives (see the sketch after this list)
See if you are missing the .Trash-1000 folder (assuming your uid is 1000)
Install an alternate gvfs-trash script to take over the missing functionality
Delete the file/folder outside of atom
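A quick sketch of the first option; you can set the variable per launch or export it permanently from your shell profile:

# one-off launch
ELECTRON_TRASH=gio atom .
# or make it stick for future shells
echo 'export ELECTRON_TRASH=gio' >> ~/.bashrc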
I had a similar problem on Windows using Atom, where I couldn't delete the files. So I resorted to deleting them manually from the directory (outside of Atom).
Turns out Atom cannot "move to trash" if you have checked this option in the Recycle Bin settings:
"Don't move files to the Recycle Bin. Remove files immediately when deleted."
Just set the other option (to move files to the actual Recycle Bin) and it should work.
I have a weird problem: all of a sudden the terminal stopped reading any commands. Last weekend I installed WordPress with PHP and MySQL, and since that moment I didn't have time to do anything more on the laptop. Now I wanted to launch some react-native code but the command wasn't found; then I tried to use various other commands and each time I get a message like
MBP-Mateusz-2:business-cards-native mateusz$ code .
-bash: code: command not found
and it doesn't matter which command it is, except standard ones like ls, cd, etc. Whenever I try to run npm --version, or node --version, or launch Visual Studio Code like before with code ., I get command not found. Does anyone have an issue like this? How do I fix it? I'm super confused and have no idea even where to start.
You probably messed up your PATH environment variable, and now your computer cannot find the commands if you don't tell it directly where they are. The PATH variable contains the directories where the system should look for binaries if they're not in the current directory. If it gets corrupted for some reason, you won't be able to run any program from the terminal unless you point directly to its location.
I would first run this command:
echo $PATH
so you can see what the contents of PATH are.
If it seems empty, or some critical folders are missing, try to add them temporarily:
export PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
Then try to run the commands again from the same terminal and see if that worked.
If that works, check whether you have a ~/Library/LaunchAgents/environment.plist file and what its content is. It is possible that there is a key for the PATH and that its values point to something in your WordPress stack but not the system directories.
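A quick way to inspect it (the file may simply not exist, which is fine):

plutil -p ~/Library/LaunchAgents/environment.plist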
If that looks fine, look at the ~/.bash_profile file. Find any export PATH instruction that may explain your issues. If you can't find any, but exporting the PATH still worked, add that instruction at the end of the file as a workaround to fix the mess:
export PATH=$PATH:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
(notice that I'm adding $PATH in this last case so that if there is any other path actually configured it is added as well)
Good luck.
EDIT: That's the usual issue people have, but now that I've read your comments, the issue seems a bit more serious. It looks like the MySQL setup destroyed your /usr/local/ folder, which means you lost all the binaries located there (npm, code, etc.).
If you have a backup of the whole filesystem (which by experience is unlikely), restore /usr/local folder.
If you don't have any backups, you can reconstruct /usr/local by reinstalling the software that cannot be found. Reinstall npm, VSCode, etc.; that will place their executables again in the /usr/local folders and from there you'll be good to go. Install brew (since it likely also got deleted), then try brew install node and see if you can now run npm. If that works out, I'm afraid you'll have to reinstall all the software you lost.
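Roughly, the recovery would look like this (the URL is Homebrew's official install script; double-check it on brew.sh before piping anything to a shell):

# reinstall Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# reinstall node, which brings npm back with it
brew install node
# sanity check
which node npm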
I have recently set up Vagrant on my machine, and the first thing I noticed was that my terminal config was not synced when I sshed into my server.
For instance I have changed my shell from bash to zsh, which does a lot of beautiful things for me (like removing case-sensitive auto completion). But on my vagrant virtual machine, or on my server, all this cool stuff is now gone. Also stuff like my important aliases is not synced.
Now, what is a proper way to sync stuff like this?
EDIT:
So currently, when I create/remove/edit an alias on my local machine, I have to copy the exact same changes into my VM and all other servers I frequently use. I see this as a very time consuming and unnecessary task.
What I do is version control my dotfiles and keep them on GitHub. Dotfiles are just the files in your home directory that start with a dot in the name, such as .bashrc or .zshrc. They are "invisible" files, so you have to use ls -a instead of just ls to see them.
Here are my dotfiles: https://github.com/aharris88/dotfiles
When I get on a new machine, I just clone the repository to ~/dotfiles
Then, I have a bash script in there called setup.sh that backs up any old dotfiles that might already be in your home directory into ~/dotfiles_old. Then it creates symlinks to the files that are in ~/dotfiles.
It also installs zsh and oh-my-zsh if they aren't already installed. It should work for Linux or Mac OS X.
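A minimal sketch of what such a setup.sh can look like (this is not the linked script; the file list is just an example):

#!/usr/bin/env bash
# back up existing dotfiles, then symlink the versioned copies into $HOME
dotfiles_dir=~/dotfiles
backup_dir=~/dotfiles_old
files="bashrc zshrc vimrc"   # whichever dotfiles you track

mkdir -p "$backup_dir"
for file in $files; do
    # move any existing dotfile out of the way
    [ -e ~/."$file" ] && mv ~/."$file" "$backup_dir/"
    # link the repo copy into place
    ln -s "$dotfiles_dir/$file" ~/."$file"
done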
Here is an article describing how to version control your dotfiles: http://blog.smalleycreative.com/tutorials/using-git-and-github-to-manage-your-dotfiles/
Another thing that I do to get a new mac ready is use kitchenplan: https://github.com/kitchenplan/kitchenplan, which can sync a lot more settings, but this probably isn't what you're asking about. Here is my kitchenplan config: https://github.com/aharris88/kitchenplan-config
I'm a Mac newbie and just upgraded to Node.js 0.67. After running the installer, it says "Make sure that /usr/local/bin is in your $PATH."
And I try to run node but as expected, it doesn't run without the path change.
So not really knowing what I'm doing (yes!), after some research I do this:
export "PATH=/usr/local/bin"
And node runs. But sudo doesn't. Which I think means I screwed up the environment variables.
sudo: command not found
Then in another Terminal window (that was open when I messed this up), sudo does respond; both windows have the same path. But in that window, npm is no longer available.
Can anyone help get me back to sudo stability?
sudo on a Macintosh lives in /usr/bin.
Make sure /usr/bin is in your $PATH environment and you should be okay.
And to do that, in the context of your question above, do something like:
export "PATH=$PATH:/usr/local/bin"
The idea here being that you are appending a new search path to the already existing list in your PATH environment variable.
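After that, you can confirm that both commands resolve again:

echo $PATH
which sudo node npm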
Here is a potentially useful tutorial you can refer to.
When I run Ruby commands like gem -v I get this error:
/Users/kristoffer/.rvm/rubies/ruby-1.9.2-p180/bin/gem:4:
warning: Insecure world writable dir
/Users/kristoffer in PATH, mode 040777
1.6.2
First of all I don't understand what this means. /Users/kristoffer is not in my path according to echo $PATH. The result of echo $PATH is:
/Users/kristoffer/.rvm/gems/ruby-1.9.2-p180/bin:/Users/kristoffer/.rvm/gems/ruby-1.9.2-p180@global/bin:/Users/kristoffer/.rvm/rubies/ruby-1.9.2-p180/bin:/Users/kristoffer/.rvm/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/usr/X11/bin
As you can see, the PATH is pretty clean. Just the default path + what RVM added.
I've seen the other posts similar to this where the recommended way to solve the issue is to run chmod go-w path/to/folder
However, I'm pretty sure that it's a bad idea to make my Home folder non-writeable, right? I've repaired permissions using Disk Utility and it didn't find anything wrong with the permissions on my Home folder.
Any idea of what the problem is and how I can fix it?
Your home folder should only be writable by you, not by anyone else. The reason gem is complaining about this is that you have folders in your PATH that are inside your (insecure) home folder, and that means that anyone who wants to could hack you by renaming/moving your .rvm folder and replacing it with an impostor.
To fix your home folder, run chmod go-w /Users/kristoffer. If there are any other insecure folders on the way to anything in your PATH, you should fix them similarly.
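A rough way to spot offenders (a sketch, assuming bash): walk up from each directory in PATH and print anything world-writable along the way:

IFS=: read -ra dirs <<< "$PATH"
for d in "${dirs[@]}"; do
    while [ -n "$d" ] && [ "$d" != "/" ]; do
        # -perm -002 matches world-writable entries
        [ -d "$d" ] && find "$d" -maxdepth 0 -perm -002 -print
        d=$(dirname "$d")
    done
done | sort -u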
BTW, the reason that Disk Utility didn't repair this is that it only repairs files installed as part of the OS (see Apple's KB article on the subject). There is an option to repair home folder permissions if you boot from the install DVD and run Password Reset from the Utilities menu, but I'm not sure if it resets the permissions themselves or just ownership.
I kept getting this in my prompt.
I couldn't get it quite right with my command prompt but this ended up working.
Recently this just happened to me, and it has to do with a bug in upgrading to Mac OS X 10.9.3. It looks like the upgrade changes the permissions on the Users folder. Here's an explanation and a fix:
http://derflounder.wordpress.com/2014/05/16/users-folder-being-hidden-with-itunes-11-2-installed-and-find-my-mac-enabled/
chmod 755 /Users/<username>
Should fix the problem...
It says that the directory /Users/username is insecure; you can fix that by running
sudo chmod go-w /Users/username
I found a solution. Like user2952657, I got this warning with vagrant up after upgrading to OSX 10.9.3. Updating iTunes to 11.2.1 was all I needed to do to get the warning to stop.