I have a disk image file containing multiple file systems, such as HFS (Journaled) in addition to Joliet or UDF. I want to mount whatever non-HFS file system is there. First, I attach the image without mounting:
$ hdiutil attach -nomount path/to/image.iso
/dev/disk3 Apple_partition_scheme
/dev/disk3s1 Apple_partition_map
/dev/disk3s2 Apple_HFS
Then, the man page for mount seems to say that I can mount non-HFS file systems like this:
$ mount -a -t nohfs /dev/disk3s2 /tmp
But the response is
mount: exec /System/Library/Filesystems/nohfs.fs/Contents/Resources/mount_nohfs for /private/tmp: No such file or directory
which sounds like it just doesn't understand the documented "no" prefix for filesystem types that you don't want to mount. Is there any way to make this work, or must I know what specific file system I want to mount?
First, you don't want the -a option: that tells mount to mount everything listed in /etc/fstab, and your disk image isn't listed there. Second, the "no" prefix is only honored when mount iterates over /etc/fstab with -a; when you name a single device, the type string is passed through literally, which is why it tried to exec mount_nohfs. So you need to specify the actual filesystem type (cd9660 is the one to use for a Joliet image). Third, if the hybrid format is done the way I've seen, you'll want to mount /dev/disk3, not /dev/disk3s2:
mkdir /tmp/mountpoint
mount -t cd9660 /dev/disk3 /tmp/mountpoint
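Putting it together, a minimal end-to-end sketch (assuming the image attaches as /dev/disk3, as in the output above; the device node can differ between runs):
hdiutil attach -nomount path/to/image.iso   # note the device node hdiutil prints
mkdir /tmp/mountpoint
mount -t cd9660 /dev/disk3 /tmp/mountpoint  # mount the ISO 9660/Joliet view of the hybrid
# ... work with the files ...
umount /tmp/mountpoint                      # unmount when done
hdiutil detach /dev/disk3                   # release the device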
How do I stop macOS updates from overwriting my auto_master file?
Every non-trivial macOS update removes my custom auto_master and restores the default one. I only add a single line at the end of the file and don't modify the existing lines at all, since they are irrelevant for me.
I have spent quite a bit of time figuring out automounts of NFS shares in OS X...
Somewhere along the line, Apple decided that mounting directly into /Volumes should not be possible:
/etc/auto_master (see last line):
#
# Automounter master map
#
+auto_master # Use directory service
/net -hosts -nobrowse,hidefromfinder,nosuid
/home auto_home -nobrowse,hidefromfinder
/Network/Servers -fstab
/- -static
/- auto_nfs -nobrowse,nosuid
/etc/auto_nfs (this is all one line):
/Volumes/my_mount -fstype=nfs,noowners,nolockd,noresvport,hard,bg,intr,rw,tcp,nfc nfs://192.168.1.1:/exports/my_share
Make sure you:
sudo chmod 644 /etc/auto_nfs
Otherwise the automounter will not be able to read the config and will fail with a ... parse_entry: getmapent for map failed... error in /var/log/messages.
This will not work (anymore!) though it "should".
$ sudo automount -cv
...
automount: /Volumes/my_mount: mountpoint unavailable
Note that, if you manually create the mount point using mkdir, it will mount.
But, upon restart, OS X removes the mount point, and automounting will fail.
What's the solution?
It's so easy my jaw dropped when I figured it out.
Basically, we are tricking OS X into thinking we're mounting somewhere else.
When you're talking about paths in just about any environment, the root folder is the highest path you can reach, whether it's C:\ (Windows) or / (*nix).
When you're at this path, attempting to reach the parent path via .. keeps you at the root.
For example: /../../../../ is still just /
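You can check this in any shell:
cd /../../../..
pwd    # still prints /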
By now, a few of you have already figured it out.
TL;DR / Solution:
Change your /etc/auto_nfs config from (this is all one line):
/Volumes/my_mount -fstype=nfs,noowners,nolockd,noresvport,hard,bg,intr,rw,tcp,nfc nfs://192.168.1.1:/exports/my_share
For pre-Catalina, change it to (this is all one line):
/../Volumes/my_mount -fstype=nfs,noowners,nolockd,noresvport,hard,bg,intr,rw,tcp,nfc nfs://192.168.1.1:/exports/my_share
For Catalina and later, change it to (this is all one line):
/System/Volumes/Data/../Data/Volumes/my_mount -fstype=nfs,noowners,nolockd,noresvport,hard,bg,intr,rw,tcp,nfc nfs://192.168.1.1:/exports/my_share
And re-run the automounter:
$ sudo automount -cv
...
automount: /Volumes/my_mount: mounted
There you go! Technically /../Volumes is still /Volumes, but the automounter does not see things that way ;)
This configuration persists the mount across restarts, and creates the mountpoint automatically.
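For reference, here is the whole setup from this answer as one sketch (pre-Catalina map path shown; 192.168.1.1, /exports/my_share, and my_mount are the example values from above, so substitute your own):
echo "/- auto_nfs -nobrowse,nosuid" | sudo tee -a /etc/auto_master
echo "/../Volumes/my_mount -fstype=nfs,noowners,nolockd,noresvport,hard,bg,intr,rw,tcp,nfc nfs://192.168.1.1:/exports/my_share" | sudo tee /etc/auto_nfs
sudo chmod 644 /etc/auto_nfs    # the automounter must be able to read the map
sudo automount -cv              # reload the maps and mount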
Source (GitHub): https://gist.github.com/L422Y/8697518
I use FileZilla and vsftpd on another server and understand that I have to change vsftpd.conf and uncomment the line(s) that say:
# Uncomment this to allow local users to log in.
local_enable=YES
#
# Uncomment this to enable any form of FTP write command.
write_enable=YES
So, I have that done and have restarted vsftpd but still I am unable to move files to the server. Should I chmod the directory that I am putting things in? That directory is /var/www/html and current permissions are:
drwxr-xr-x 2 root root 4096 Jan 9 20:13 html
I don't know where else to look. It must be something simple.
If you want to be able to modify files in your web directories, try changing the ownership (instead of the mode) by doing this:
sudo chown -R $USER:$USER /var/www/html
The $USER variable will take the value of the user you are currently logged in as.
By doing this, your regular (non-root) user now owns the html directory you are trying to move files into.
It's probably a good idea to also adjust permissions a little to ensure that read access is permitted to the general web directory, and all of the files and folders it contains, so that pages can be served correctly:
sudo chmod -R 755 /var/www
Your web server should now have the permissions it needs to serve content, and your user should be able to create content within the necessary folders.
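To confirm the change took effect, list the directory again ("alice" below is just an example; $USER expands to whoever you're logged in as):
ls -ld /var/www/html
# should now show your user as owner, along the lines of:
# drwxr-xr-x 2 alice alice 4096 Jan 9 20:13 html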
I'm trying to create a directory in a shell script:
mkdir -p DirName
but I always get the same error:
cannot create directory `/DirName': Permission denied
If I run the same command directly from the shell instead of from the script, it works perfectly.
Any ideas?
Thank you! :)
If you're going to use the -p option, give the full path you actually want; -p will create any missing parent directories along it:
mkdir -p /some/path/here/DirName
I suggest spelling out the full path if your script might be run from a different working directory.
If it will always run from the same place, I'd use:
mkdir ./DirName
These should all behave similarly to you creating the directory in the shell.
You are trying to create a directory in the root of the filesystem (/DirName) instead of in the current directory (DirName or ./DirName). You don't have access to write to the root.
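A minimal illustration of the difference, run from any directory you can write to:
mkdir -p ./DirName    # relative: created under the current working directory
mkdir -p /DirName     # absolute: needs write access to /, so it fails with "Permission denied" for normal users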
I have a small shell script that starts a program when I double-click it. (I have set the permissions to allow executing the script).
I want to be able to copy that script to another computer so that the new user can double-click it without needing to know anything about chmod or permissions. But I can't find out how to preserve the execute permission when I copy the file.
I can usually find answers with Google, but this has me defeated; I guess I am not expressing my question properly.
Thanks
Use rsync or tar.
rsync -p file user@host:destdir
plus other options you might need.
Or
tar cvzf file.tar file
then copy (or email, etc.) file.tar to the other machine and extract the file:
tar xpvzf file.tar
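Alternatively, scp can carry the permission bits across directly, avoiding the tar round-trip (user@host is a placeholder for your account on the other machine):
scp -p file user@host:destdir/    # -p preserves modification times and modes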
I'm trying to copy my .profile, .rvm and .ssh folders/files to a new computer and keep getting a "not a regular file" response. I know how to use the cp and ssh commands but I'm not sure how to use them in order to transfer files from one computer to another.
Any help would be great, thanks!
You can do this with the scp command, which uses the ssh protocol to copy files across machines. It extends the syntax of cp to allow references to other systems:
scp username1@hostname1:/path/to/file username2@hostname2:/path/to/other/file
Copy something from this machine to some other machine:
scp /path/to/local/file username@hostname:/path/to/remote/file
Copy something from another machine to this machine:
scp username@hostname:/path/to/remote/file /path/to/local/file
Copy with a port number specified:
scp -P 1234 username@hostname:/path/to/remote/file /path/to/local/file
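Note that .rvm and .ssh are directories, which is exactly what produces the "not a regular file" error with plain cp or scp. Add -r to copy them recursively (username@hostname stands for your account on the new computer):
scp -rp ~/.ssh username@hostname:~/
scp -rp ~/.rvm username@hostname:~/
scp -p ~/.profile username@hostname:~/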
First zip or gzip the folders:
Use the following command:
zip -r NameYouWantForZipFile.zip foldertozip/
or
tar -pvczf BackUpDirectory.tar.gz /path/to/directory
for gzip compression.
Then use scp to copy the result to the other machine. Note the -r, needed here because public_html is a directory rather than a single archive file:
scp -r username@yourserver.com:~/serverpath/public_html ~/Desktop
You may also want to look at rsync if you're doing a lot of files.
If you're going to be making a lot of changes and want to keep your directories and files in sync, you may want to use a version control system like Subversion or Git. See http://xoa.petdance.com/How_to:_Keep_your_home_directory_in_Subversion
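If you go the version-control route, a minimal sketch with Git (the remote URL is a placeholder for a repository you'd create yourself):
cd ~
git init
git add .profile    # add whichever dotfiles you want tracked
git commit -m "track dotfiles"
git remote add origin git@example.com:me/dotfiles.git
git push -u origin master
# on the new computer: git clone the repository and copy the files into place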