Detecting a remote mount - bash

Is there a single, universal bash shell variable or common Linux command that will reliably indicate if a given directory or file is on a remote filesystem -- be it NFS, SSHFS, SMB, or any other remotely mounted filesystem?
CONTEXT...
This is a root-only-access, single-user, multi-host Linux development "lab" that uses SSH and SSHFS to loosely couple the systems semi-seamlessly. The relevant directory structure on each host is...
/0
/0/HOST1
/0/HOST2
/0/HOST3
/bin
/boot
:
Directories in /0 are SSHFS-mounted to '/' on the named host; 'HOST1', etc. are mountpoint directories named for each host.
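For reference, a mount of that shape would be created with something along these lines (a sketch; your sshfs options will vary):
sshfs root@HOST1:/ /0/HOST1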
I could, of course, establish an environment variable, something like...
REMOTE_FS=/0
...and test whether the dirname starts with '/0'. However, that's not very portable or reliable.
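A minimal sketch of that prefix test, just to show what the non-portable version looks like ($dir is a hypothetical variable holding the path to check):
REMOTE_FS=/0
case "$dir" in
    "$REMOTE_FS"/*) echo remote ;;
    *)              echo local ;;
esac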
Obvious question...
Having made the effort to make it seamless, why do I want to know when accessing something non-local?
Answer...
Going through a mounted filesystem puts all the processing load on the initiating host. I'd like to know when I have the option of using SSH instead of SSHFS to offload the background processing (ls, grep, awk, etc.) to the remote (and usually more powerful) host, leaving just the GUI and control logic on the local machine.

df -l <file>
This will return a non-zero exit code if the file or directory is not local.
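A minimal sketch of how that might be wrapped in a script, using the lab layout above (HOST1 and the paths are illustrative; the exit-code behavior is per GNU df, as described above):
is_local() {
    # df -l only processes local filesystems; it fails for remote ones
    df -l "$1" >/dev/null 2>&1
}

if is_local /0/HOST1/var/log; then
    grep -r pattern /0/HOST1/var/log      # cheap enough to do locally
else
    ssh HOST1 grep -r pattern /var/log    # offload the work to HOST1
fi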

Related

Run script on remote server from local machine

I have a remote script on a machine (B) which works perfectly when I run it from machine (B). I wanted to run the script via ssh from machine (A) using:
ssh usersm@${RHOST} './product/2018/requests/inbound/delDup.sh'
However, machine (A) complains about the contents of the remote script (2018req*.txt is a variable defined at the beginning of the script):
ls: cannot access 2018req*.txt: No such file or directory
From the information provided, it's hard to do more than guess. So here's a guess: when you run the script directly on machine B, do you run it from your home directory with ./product/2018/requests/inbound/delDup.sh, or do you cd into the product/2018/requests/inbound directory and run it with ./delDup.sh? If the latter, the wildcard 2018req*.txt will look in different places; basically, it expands relative to the directory you were in when you ran the script. If you cd'ed into the inbound directory locally, it'll look there, but running the script remotely doesn't change to that directory, so 2018req*.txt will look for files in your home directory.
If that's the problem, I'd rewrite the script to cd to the appropriate directory, either by hard-coding the absolute path directly in the script, or by detecting what directory the script's in (see "https://stackoverflow.com/questions/59895/getting-the-source-directory-of-a-bash-script-from-within" and BashFAQ #28: "How do I determine the location of my script? I want to read some config files from the same place").
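For example, the second approach can be a single line near the top of the script (bash-specific, since it relies on BASH_SOURCE):
cd "$(dirname "${BASH_SOURCE[0]}")" || exit 1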
BTW, anytime you use cd in a script, you should test the exit status of the cd command to make sure it succeeded, because if it didn't the rest of the script will execute in the wrong place and may do unexpected and unpleasant things. You can use || to run an error handler if it fails, like this:
cd somedir || {
    echo "Cannot cd to somedir" >&2
    exit 1
}
If that's not the problem, please supply more info about the script and the situation it's running in (i.e. location of files). The best thing to do would be to create a Minimal, Complete, and Verifiable example that shows the problem. Basically, make a copy of the script, remove everything that isn't relevant to the problem, make sure it still exhibits the problem (otherwise you removed something that was relevant), and add that (and file locations) to the question.
First of all, when you use SSH, instead of sending the output (stdout and stderr) directly to the monitor, the remote machine/SSH server sends the data back to the machine from which you started the SSH connection. The SSH client running on your local machine will just display it (unless you redirect it, of course).
Now, from the information you have provided, it looks like the files are not present on server (B), or are not accessible. Last but not least, are you sure your ls targets the proper directory? You could display the current directory in your script before running the ls command, for debugging purposes.
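For example, adding something like this near the top of delDup.sh would show where it is actually running (both lines are for debugging only):
echo "running in: $(pwd)" >&2
ls -d 2018req*.txt >&2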

How to copy files from Mac OS to a Windows machine on the same network using the command line?

I know that we can copy files from one host to another from a Mac using the Finder/SMB protocol.
But I would like to copy files from the Mac to a Windows machine using the command line, so that I can do the same programmatically.
Could anyone please advise?
If you can copy the files using the Finder then you have connected to the SMB share. Usually, you can see this from the command line by looking in the /Volumes folder; if it doesn't look like it's there, try running the mount command to see other places things might be connected. The following assumes the SMB is mounted in /Volumes, adjust as necessary for your particular case.
On the command line, issue the command:
ls /Volumes
You should see the SMB share listed along with some other names.
Then to copy files to it:
cp myfiles/* /Volumes/MySMBShare/mydirectory
If the name of the share has spaces in it you will need to escape them with backslashes like so:
cp myfiles/* /Volumes/My\ SMB\ Share/mydirectory
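Quoting the whole path works just as well as backslash-escaping:
cp myfiles/* "/Volumes/My SMB Share/mydirectory"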

Unix shell script to archive files on Remote system

I have a requirement to archive files to a remote location. That is, I need to write a shell script that will connect to a remote path, copy (move) files from that path, and then place them in another location on the same system (the target system could be either a Unix system or a Windows system).
This script will be scheduled to run once a day without manual intervention.
Unison should fit the bill. rsync and scp would work as well, but they can be a bit cryptic to set up.
There are implementations of the Secure Shell (SSH) for both target systems. The Secure Shell comes with a secure copy program, named scp, which allows you to run commands like
scp localfile user@remotehost:directory/remotefilename
As lynxlynxlynx pointed out, another option is the rsync suite. Both SSH and rsync will require some configuration (rsync less so). See the respective home pages.
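As a minimal sketch of the scp route, suitable for a daily cron job (every name below is a placeholder; key-based SSH authentication is assumed so the job can run unattended):
#!/bin/sh
REMOTE=user@remotehost
SRC=/data/outbound
DEST=/archive/$(date +%Y-%m-%d)

ssh "$REMOTE" "mkdir -p '$DEST'" || exit 1   # create the dated target directory
scp "$SRC"/* "$REMOTE:$DEST/" || exit 1      # copy today's files
rm -f "$SRC"/*                               # move semantics: delete only after a successful copy
A crontab entry such as 0 2 * * * /path/to/archive.sh would then run it once a day without manual intervention.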

How to register newly mounted drive in git bash?

In my day-to-day work (I'm using MS Windows), I keep my git bash (actually using console2 for this) open for the whole day. It is also very common that I mount new drives that I would like to work on with git.
However, I noticed that I need to exit the bash and open it again in order to make it recognize the new drive letter.
Is there any command that 'registers' an already-mounted drive in git bash?
thanks
edit2:
I do not have any option to leave a comment under my own question (weird...?), so I post it here:
$ mount -a
sh.exe": mount: command not found
A couple of things; I had some difficulty finding sources, so feel free to take this with a grain of salt.
Msysgit simply doesn't include a version of mount. It is my understanding that cygwin does, however. There is no simple way to either view all attached drives or mount a new drive in msys, and thus in Git Bash.
To answer your question, you don't: Git Bash does not dynamically assign drives, so if you mount new drives, you need to close all instances and restart Git Bash (source). The source referenced there is cached here. Sorry there's not a nicer solution.
I commonly mount a drive to the file system and then have to run a script that alters some files on it from within a Git Bash session in Console2.
If you mount something to a given drive letter, say F: on the Windows file system, and then start the Git Bash session, it will have it mapped. I can mount/unmount the F: drive and the session can still access /f/ without any issues. So, mount all the drives you will typically need to hit, then start the session, and hopefully you won't need to restart your Git Bash too often.
I find that if I exit all currently running git bash sessions and then launch a new one, then I can access the new drive, e.g. X:, in the new bash session under /x/.
Even launching a new git bash session is not enough if there was already one running; I must exit the previous git bash sessions and then launch one for it to make the new drive letters available.
I found that if I set
MSYS_WATCH_FSTAB=YesPlease
in my user environment variables, then everything worked.

Rsync bash script and hard linking files

I am creating a bash script to backup my files with rsync.
Backups all come from a single directory.
I only want new or modified files to be backed up.
Currently, I am telling rsync to backup the dir, and to check the files compared to the last backup.
The way I am doing this is
THE_TIME=`date "+%Y-%m-%dT%H:%M:%S"`
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
rm -f /Backup/Current
ln -s /Backup/Backup-$THE_TIME /Backup/Current
I am pretty sure I have the syntax correct for this. Each backup will check against the "Current" folder, and upload only as necessary. It will then delete the Current symlink and re-create it pointing to the newest backup it just made.
I am getting an error when I run the script:
rsync: link "/Backup/Backup-2010-08-04-12:21:15/dgs1200series_manual_310.pdf"
=> /Backup/Current/dgs1200series_manual_310.pdf
failed: Operation not supported (45)
The host OS is running the HFS filesystem, which supports hard linking. I am trying to figure out if something else is not supporting this, or if I have a problem in my code.
Thanks for any help
Edit:
I am able to create a hard link on my local machine.
I am also able to create a hard link on the remote server (when logged in locally)
I am NOT able to create a hard link on the remote server when it is mounted via AFP, even if both files exist on the server.
I am guessing this is a limitation of AFP.
Just in case your command line is only an example: Be sure to always specify the link-dest directory with an absolute pathname! That’s something which took me quite some time to figure out …
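For example, the absolute form used in the script above is safe:
rsync -aP --link-dest=/Backup/Current /usr/home/user/backup /Backup/Backup-$THE_TIME
whereas a relative DIR such as --link-dest=../Current is interpreted relative to the destination directory (per the rsync man page), which is a common source of surprises.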
Two things from the man page stand out that are worth checking:
If files aren't linking, double-check their attributes. Also
check if some attributes are getting forced outside of rsync's
control, such as a mount option that squishes root to a single
user, or mounts a removable drive with generic ownership (such
as OS X's "Ignore ownership on this volume" option).
and
Note that rsync versions prior to 2.6.1 had a bug that could
prevent --link-dest from working properly for a non-super-user
when -o was specified (or implied by -a). You can work-around
this bug by avoiding the -o option when sending to an old rsync.
Do you have the "ignore ownership" option turned on? What version of rsync do you have?
Also, have you tried manually creating a similar hardlink using ln at the command line?
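For example (paths hypothetical; the point is to test on the AFP-mounted volume itself):
ln /Volumes/Backup/Current/somefile /Volumes/Backup/linktest && echo "hard links OK"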
I don't know if this is the same issue, but I know that rsync can't sync a file when the destination is a FAT32 partition and the filename has a ":" (colon) in it. [The source filesystem is ext3, and the destination is FAT32]
Try reconfiguring the date command so that it doesn't use a colon and see if that makes a difference.
e.g.
THE_TIME=`date "+%Y-%m-%dT%H_%M_%S"`
