Broken mount point, as seen in mc
I'm trying to mount my work server to a local folder.
"sudo sshfs webuser#rgslb.com:/var/www/webuser/data/www /Volumes/rgslb.com"
It asks for my password and silently proceeds. But as a result I get a broken file: in mc it looks like ?rgslb.com, yet "ls -l" says there is no such file in /Volumes. And when I unmount it, the mount point directory disappears.
Has anyone had the same situation? Thank you.
I found a simple solution: run sshfs... without sudo, mounting to ~/rgslb.com. And it works.
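For reference, a minimal sketch of that working approach, using the same server and path as in the question (the unmount step is an assumption about how you'd clean up on OS X):
mkdir -p ~/rgslb.com
sshfs webuser@rgslb.com:/var/www/webuser/data/www ~/rgslb.com
# when finished, unmount from the same user account
umount ~/rgslb.com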
Related
I have two issues I need help with involving bash, Linux, and s3cmd.
First, I'm running into a Linux permission issue. I am trying to download zip files from an S3 bucket using s3cmd, with the following command in a bash script, script.sh:
/usr/bin/s3cmd get s3://<bucketname>/<folder>/zipfilename.tar.gz
I am seeing the following error: permission denied.
If I run this command manually from the command line on a Linux machine, it works and downloads the file:
sudo /usr/bin/s3cmd get s3://<bucketname>/<folder>/zipfilename.tar.gz
I really don't want to use sudo in front of the command in the script. How do I get this command to work? Do I need to chown the script, which sits at a path like /foldername/script.sh, or what else would make this get command work?
Second: once I get this command to work, how do I get it to download from S3 to the Linux home directory, ~/? Do I have to issue a cd ~/ in the bash script before the download command?
I really appreciate any help and guidance.
First, determine what's failing and why; otherwise you won't find the answer.
You can specify the destination in order to avoid permission problems when the script is invoked from a directory that's not writeable by that process:
/usr/bin/s3cmd get s3://<bucketname>/<folder>/zipfilename.tar.gz /path/to/writeable/destination/zipfilename.tar.gz
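For the second part of the question, the same idea applies: pass the home directory as the destination instead of cd'ing first. A sketch, keeping the placeholders from the question:
# no cd needed; ~ expands to the invoking user's home directory
/usr/bin/s3cmd get s3://<bucketname>/<folder>/zipfilename.tar.gz ~/zipfilename.tar.gz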
First of all, ask one question at a time.
For the first one, you can simply change the ownership with chown, like:
chown "usertorunscript" filename
For the second:
If it is the user's home directory, you can just specify it with
~user
as you said, but I think writing out the whole path is safer, so it will work for more users (if you need it to).
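Putting both suggestions together, a hedged sketch (the user name "webuser" is an assumption; substitute whoever actually runs the script):
sudo chown webuser /foldername/script.sh   # give the running user ownership of the script
sudo chmod u+x /foldername/script.sh       # and make sure it is executable
/foldername/script.sh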
I have a pretty basic problem here that has happened so haphazardly to me that, up until now, I've just ignored it. I downloaded the Tomcat web server, and "Murach's Java Servlets and JSP" tells me to navigate to the tomcat/bin directory and start the server by typing in Terminal
$ startup
However, I get the error
-bash: startup: command not found
The relevant files in this directory are startup.sh and startup.bat. Typing either of these returns the same error message.
So my questions are: what are .bat and .sh files, and how do I run them? I've read several tutorials for different languages and software programs, and sometimes when the tutorial says to execute a bunch of files in the command line, I get a "command not found" error. Sometimes it works, sometimes it doesn't. This is perplexing to me, so what are some common solutions to this sort of "command not found" Terminal problem?
The .sh is for *nix systems and .bat should be for Windows. Since your example shows a bash error and you mention Terminal, I'm assuming it's OS X you're using.
In this case you should go to the folder and type:
./startup.sh
./ just means that you should call the script located in the current directory. (Alternatively, just type the full path of startup.sh.) If it doesn't work, check whether startup.sh has execute permissions.
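If they're missing, a quick check-and-fix, assuming you're already in the tomcat/bin directory:
ls -l startup.sh        # look for an "x" in the permission bits
chmod u+x startup.sh    # add execute permission for your user if it's missing
./startup.sh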
This is because the script is not in your $PATH. Use
./scriptname
You can also copy the script to one of the folders in your $PATH, or alter the $PATH variable so you can always use just the script name. Take care, however: there is a reason why your current folder is not in $PATH by default; it can be a security risk.
If you still have problems executing the script, you might want to check its permissions - you must have execute permissions to execute it, obviously. Use
chmod u+x scriptname
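And to illustrate the $PATH route mentioned above, a sketch for the current shell session (the Tomcat install path is an assumption; adjust to yours):
export PATH="$PATH:$HOME/tomcat/bin"
startup.sh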
A .sh file is a Unix shell script. A .bat file is a Windows batch file.
Type bash script_name.sh or ./script_name in a Linux terminal. Before using ./script_name, make your script executable with chmod 700 script_name. On Windows, type script_name.bat.
Drag-And-Drop
Easiest way for a lazy Mac user like me: Drag-and-drop the startup.sh file from the Finder to the Terminal window and press Return.
To shutdown Tomcat, do the same with shutdown.sh.
You can delete all the .bat files, as they are only for Windows PCs and of no use on a Mac or other Unix computer. I delete them because it makes the folder's listing easier to read.
File Permissions
I find that a fresh Tomcat download will not run on my Mac because of file permission restrictions throwing errors during startup. I use the BatChmod app, which wraps a GUI around the equivalent Unix commands, to reset the file permissions.
Port-Forwarding
Unix systems protect access to ports numbered under 1024. So if you want to use port 80 with Tomcat you will need to learn how to do "port-forwarding" to forward incoming requests to port 8080 where Tomcat listens by default. To do port-forwarding, you issue commands to the packet-filtering (firewall) app built into Mac OS X (and BSD). In the old days we used ipfw. In Mac OS X 10.7 (Lion) and later Apple is moving to a newer tool, pf.
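As an illustration only (treat this as a sketch; pf syntax varies by release, and en0 is an assumed interface name, so check the pf.conf documentation): a redirect rule forwarding incoming TCP port 80 to Tomcat's default 8080 might look like
rdr pass on en0 inet proto tcp from any to any port 80 -> 127.0.0.1 port 8080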
Based on IsmailS' comment the command which worked for me on OSX was:
sudo sh ./startup.sh
On Windows, type either startup or startup.bat
On Unix, type ./startup.sh
(assuming you are located in tomcat/bin directory)
Batch files can be run on Linux. This article explains how (http://www.linux.org/threads/running-windows-batch-files-on-linux.7610/).
Type in
chmod 755 scriptname.sh
In other words, give yourself permission to run the file. I'm guessing you only have r/w permission on it.
Add #!/bin/bash at the top of your .sh file
sudo chmod +x your_script.sh
./your_script.sh
These steps work!
My suggestion does not come from Terminal; however, this is a much easier way.
For .bat files, you can run them through Wine. Use this video to help you install it: https://www.youtube.com/watch?v=DkS8i_blVCA. It explains how to install, set up, and use Wine. It is as simple as opening the .bat file in Wine itself, and it will run just as it would on Windows.
Through this, you can also run .exe files, as well as .sh files.
This is much simpler than trying to work out all kinds of terminal code.
I had this problem with *.sh files in Yosemite and couldn't figure out the correct path for a folder on my Desktop. After some gnashing of teeth, I dragged the file itself into the Terminal window; hey presto!
I have a shell script that mounts an SMB share. It works perfectly on all Macs with every OS revision except 10.7.5.
The offending command is simply:
mount -t smbfs -o nobrowse //test:test@servername/sharename /my/mnt/point
When I attempt this command on a 10.7.5 Mac, it fails with either a "broken pipe" or an "authentication failed" error. However, it works fine on Macs running 10.7.4, 10.6, 10.8, etc.
Can anyone successfully use this command on 10.7.5?
Is there any alternative way of achieving this, or troubleshooting exactly why this error is happening? I'm running out of ideas!
Since feature requests to mark a comment as an answer remain declined, I copy the above solution here.
Thanks for the replies. The problem was twofold: firstly, for some reason you cannot run this command as root in 10.7.5, and secondly, you cannot mount outside of /Volumes. Strangely, this seems to work in all other OS revisions. I have worked around this problem by mounting my share in /Volumes and then creating a symlink to the desired mount point:
mkdir -p /Volumes/share
sudo -u localadminuser mount -t smbfs -o nobrowse //user:pass@server/share /Volumes/share
ln -s /Volumes/share /location/that/I/prefer/to/mnt
I hope this helps someone out. No idea why 10.7.5 changes this. – BSUK
There are many reasons why the mount will not work. Some of the reasons include:
Time between server and client being too different
Workgroup name not specified on the mac
Local hostname uses non-latin characters
Encryption is too strict between the mac and the server
To solve the time issue: set the time.
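For example (assuming ntpdate is available, as it was on OS X of that era; time.apple.com is just an illustrative server):
sudo ntpdate -u time.apple.com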
I've seen broken pipe/authentication errors most often when you don't use a workgroup name for the connection. A connection string that looks like the following generally works better than one without any workgroup:
//WORKGROUP;user:50000@192.168.2.1/Share
... assuming that 50000 is the password for the user named user, this should allow the connection. Generally, you just need to have a string before the semicolon; it can read anything, it just needs to be there.
To solve the local hostname issue, click on an interface, choose Advanced, go to the WINS tab, and make sure that the name doesn't have any foreign characters there.
If the encryption is too strict, you will need to edit nsmb.conf. I have a set of lines looking like:
[server1]
minauth=none
for an ancient BSD server which cannot deal with encrypted passwords. You can put this in either an /etc/nsmb.conf or a ~/Library/Preferences/nsmb.conf file.
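To create the system-wide file in one go, a sketch reusing the hypothetical server name from above:
sudo sh -c 'printf "[server1]\nminauth=none\n" >> /etc/nsmb.conf'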
This may not address your issue, but it may help you in trying to proceed.
Unfortunately, saying it works on box x and not on box y doesn't really help, as there could be any arbitrary configuration difference between them.
Wasn't quite sure how to word this, but let's say I've used ssh to remote into my friend's MacBook (macbook_b) from my MacBook (macbook_a).
What command would I use to copy a file/directory to my MacBook (macbook_a) from my friend's MacBook (macbook_b)?
Thank you.
You can use scp (Secure Copy).
To copy FROM your machine to friends:
scp file_to_copy user@remote.server.fi:/path/to/location
In the other direction:
scp user@remote.server.fi:/path/location/file_name file_name
If you need to copy an entire directory, you'll need to use the recursive flag, like this:
scp -r directory_to_copy user@remote.server.fi:/path/to/location
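The recursive flag works in the other direction too. For example, to pull a whole directory down from the remote machine:
scp -r user@remote.server.fi:/path/to/directory local_destination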
Assuming you're logged in on macbook_b:
scp file_to_copy username@macbook_a:/path/to/destination
or if you're logged in on macbook_a:
scp username@macbook_b:/path/to/file_to_copy local_destination
I think this link will help you find the answer you are looking for. It includes an scp ssh source/destination example for the scenario you've described.
Also refer to this question, which has already been answered. It might help.
First do pwd to get the path to the file on your friend's MacBook, then
go into your machine's ssh window and do
scp user_name@friends_machine_name:(path you copied from pwd)/file_name .
(the dot means your current directory)
Enter his password, and voila!
I am creating a bash script to backup my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to back up the dir, and to check the files against the last backup.
The way I am doing this is
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup will check against the "Current" folder and upload only as necessary. It will then delete the Current symlink and re-create it, pointing at the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf"
=> /Backup/Current/dgs1200series_manual_310.pdf
failed: Operation not supported (45)
The host OS is running the HFS filesystem, which supports hard linking. I am trying to figure out whether something else is not supporting this, or whether I have a problem in my code.
Thanks for any help.
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when mounted via afp. Even if both files exist on the server.
I am guessing this is a limitation of afp.
Just in case your command line is only an example: Be sure to always specify the link-dest directory with an absolute pathname! That’s something which took me quite some time to figure out …
Two things from the man page stand out that are worth checking:
If files aren't linking, double-check their attributes. Also
check if some attributes are getting forced outside of rsync's
control, such as a mount option that squishes root to a single
user, or mounts a removable drive with generic ownership (such
as OS X's “Ignore ownership on this volume” option).
and
Note that rsync versions prior to 2.6.1 had a bug that could
prevent --link-dest from working properly for a non-super-user
when -o was specified (or implied by -a). You can work-around
this bug by avoiding the -o option when sending to an old rsync.
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
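For example (the paths are assumptions; use your actual backup volume):
touch /Backup/testfile
ln /Backup/testfile /Backup/testlink   # fails with "Operation not supported" if the mount doesn't support hard links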
I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
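Putting it together, a sketch of the full script with the colon-free timestamp (same logic as the original; only the date format changes):
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current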