I have a requirement to archive files to a remote location, i.e. I need to write a shell script that will connect to a remote path, copy (move) files from that path, and then place them in another location on the same system. (The target system could be either a Unix system or a Windows system.)
This script will be scheduled to run once a day without manual intervention.
Unison should fit the bill. rsync and scp would work as well, but they can be a bit cryptic to set up.
There are implementations of the Secure Shell (SSH) for both target systems. The Secure Shell comes with a secure copy program, named scp, which allows you to run commands like
scp localfile user@remotehost:directory/remotefilename
As lynxlynxlynx pointed out, another option is the rsync suite. Both SSH and rsync will require some configuration (rsync less so). See the respective home pages.
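For a scheduled run, a pull with rsync over SSH could look something like this (just a sketch; the user, host, and paths are placeholders, and it assumes key-based SSH authentication is already set up so the cron job needs no password):
# Move yesterday's files off the remote host into the local archive.
# --remove-source-files deletes each file on the remote side once it
# has been transferred, turning the copy into a move.
rsync -av --remove-source-files backupuser@remotehost:/srv/app/outgoing/ /archive/app/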
I have a remote script on a machine (B) which works perfectly when I run it from machine (B). I wanted to run the script via ssh from machine (A) using:
ssh usersm@${RHOST} './product/2018/requests/inbound/delDup.sh'
However, machine (A) complains about the contents of the remote script (2018req*.txt is a variable defined at the beginning of the script):
ls: cannot access 2018req*.txt: No such file or directory
From the information provided, it's hard to do more than guess. So here's a guess: when you run the script directly on machine B, do you run it from your home directory with ./product/2018/requests/inbound/delDup.sh, or do you cd into the product/2018/requests/inbound directory and run it with ./delDup.sh? If it's the latter, that's likely the problem: a glob like 2018req*.txt is resolved relative to the directory you were in when you ran the script, not the directory the script lives in. If you cd to the inbound directory locally, the glob matches files there; but running the script over ssh doesn't change to that directory, so 2018req*.txt looks for files in the home directory and finds none.
If that's the problem, I'd rewrite the script to cd to the appropriate directory, either by hard-coding the absolute path directly in the script, or by detecting what directory the script's in (see "https://stackoverflow.com/questions/59895/getting-the-source-directory-of-a-bash-script-from-within" and BashFAQ #28: "How do I determine the location of my script? I want to read some config files from the same place").
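A minimal sketch of the second approach (the file pattern comes from the question; the rest is only an illustration and doesn't cover the corner cases discussed in the links above):
# Change to the directory the script itself lives in, so that globs like
# 2018req*.txt resolve the same way no matter where the script was started from.
cd "$(dirname "$0")" || exit 1
ls 2018req*.txt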
BTW, anytime you use cd in a script, you should test the exit status of the cd command to make sure it succeeded, because if it didn't the rest of the script will execute in the wrong place and may do unexpected and unpleasant things. You can use || to run an error handler if it fails, like this:
cd somedir || {
    echo "Cannot cd to somedir" >&2
    exit 1
}
If that's not the problem, please supply more info about the script and the situation it's running in (i.e. location of files). The best thing to do would be to create a Minimal, Complete, and Verifiable example that shows the problem. Basically, make a copy of the script, remove everything that isn't relevant to the problem, make sure it still exhibits the problem (otherwise you removed something that was relevant), and add that (and file locations) to the question.
First of all, when you use SSH, instead of sending the output (stdout and stderr) directly to the monitor, the remote machine / SSH server sends the data back to the machine from which you started the SSH connection. The SSH client running on your local machine just displays it (unless you redirect it, of course).
Now, from the information you have provided, it looks like the files are either not present on server (B) or not accessible. Last but not least: are you sure your ls targets the proper directory? You could display the current directory in your script before running the ls command, for debugging purposes.
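For example, a couple of lines like these just before the failing command would show where the script is actually running (purely illustrative; only the file pattern comes from the question):
# Print the working directory before the ls, so the ssh output
# shows where the glob is being expanded.
echo "Running in: $(pwd)" >&2
ls 2018req*.txt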
Usually I use rsync-based backups.
But now I have to write a backup script that backs up a Windows server to Linux.
So there is no rsync - only FTP.
I like the idea of using hard links to save disk space and incremental backups to minimize traffic.
Is there a similar backup script that works over FTP instead of rsync?
UPDATE:
I need to back up the Windows server through FTP. The backup script runs on the Linux backup server.
SOLUTION:
I found this useful script for backing up through FTP with hard links and an incremental feature.
Note for Ubuntu users: there is no md5 command in Ubuntu. Use md5sum instead.
# filehash1="$(md5 -q "$curfile"".gz")"
# filehash2="$(md5 -q "$mysqltmpfile")"
filehash1="$(md5sum "$curfile"".gz" | awk '{ print $1 }')"
filehash2="$(md5sum "$mysqltmpfile" | awk '{ print $1 }')"
Edit, since the setup was not clear to me from the original question.
Based on the update to the question, the situation is that you need to pull the data onto the backup server from the Windows system via FTP. In this case you could adapt the script you found yourself (see comment) or use a similar idea:
Use cp -lr to clone the previous backup with hard links.
Use lftp's mirror command to overwrite this copy with anything that got updated on the remote system, as sketched below.
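A rough sketch of that pull-style idea (host, credentials, and directory names below are placeholders, not taken from the linked script):
#!/bin/bash
# Incremental FTP backup: hard-link clone of the last snapshot,
# then let lftp's mirror fetch whatever changed on the Windows server.
BACKUP_ROOT=/backup/winserver
NEW="$BACKUP_ROOT/$(date +%Y-%m-%d)"
PREV=$(ls -1d "$BACKUP_ROOT"/*/ 2>/dev/null | tail -n 1)   # most recent snapshot, if any

if [ -n "$PREV" ]; then
    cp -lr "$PREV" "$NEW"     # unchanged files become hard links, costing no extra space
else
    mkdir -p "$NEW"
fi

# Pull only files that are newer on the FTP server into the new snapshot.
lftp -u backupuser,secret -e "mirror --only-newer / $NEW; quit" ftp://winserver
Note that whether mirror replaces a hard-linked file or overwrites it in place determines whether older snapshots stay intact, so that behaviour is worth verifying before relying on this scheme.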
But I initially assumed that you need to push the data from the Windows system to the backup server, i.e. that the FTP server is on the backup system. That case cannot be handled this way (original answer follows):
Since FTP has no notion of links at all, any transfer will only result in new or overwritten files. The only way would be to use the SITE command to issue site-specific commands and deal with hard links that way. But site-specific commands are usually heavily restricted, so you can typically do something like changing permissions but not anything involving hard links.
And even if you could support hard links with SITE, you would have to implement the logic that decides when to use such links. With rsync this logic is built into the rsync server and executed on the server side. With FTP you have to build all the logic on the client side, which means you would have to download a file to compare it with a local file and then decide whether you need to upload the new file or whether a hard link to an existing file could be used.
Is there a single, universal bash shell variable or common Linux command that will reliably indicate if a given directory or file is on a remote filesystem -- be it NFS, SSHFS, SMB, or any other remotely mounted filesystem?
CONTEXT...
This is a root-only access, single-user, multi-host Linux development "lab" using SSH and SSHFS for semi-seamless loose coupling of the systems. The relevant directory structure on each host is...
/0
/0/HOST1
/0/HOST2
/0/HOST3
/bin
/boot
:
Directories in /0 are SSHFS-mounted to '/' on the named host. 'HOST1', etc., are mountpoint directories named for each host.
I could, of course, establish an environment variable, something like...
REMOTE_FS=/0
...and test for the dirname starting with '/0'. However, that's not very portable or reliable.
Obvious question...
Having made the effort to make it seamless, why do I want to know when accessing something non-local?
Answer...
Going through a mounted filesystem puts all the processing load on the initiating host. I'd like to know when I have the option of using SSH instead of SSHFS to offload the background processing (ls, grep, awk, etc) to the remote (and usually more powerful) host, leaving just the GUI and control logic on the local machine.
df -l <file>
This will return a non-zero exit code if the file or directory is not local.
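A small wrapper around that idea (the function name and the host/path pairing below are only illustrative):
# Succeeds only when the path lives on a local filesystem;
# df -l refuses to report on remote mounts and exits non-zero.
is_local() {
    df -l -- "$1" >/dev/null 2>&1
}

if is_local "$dir"; then
    grep -r "pattern" "$dir"                 # cheap enough to do locally
else
    ssh HOST1 grep -r "pattern" /some/dir    # offload the heavy lifting to the remote host
fi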
I start the msysGit Bash using the provided batch file (the one that simulates a Linux environment). Bash starts up in msysGit's home directory (on my flashdrive). I would like to leave this directory to go to my project's directory (also on my flashdrive). So I enter "$ cd ..". This has no effect at all. I type "$ ls" and I'm definitely still in the Git folder. I try "cd ~", which brings me to my user folder, but I can't get to the root directory of my flashdrive. How can I get there with msysGit Bash?
I cannot use git-cmd.bat because the computers at my school deny access to cmd.exe.
Alternative question: How can I run git-cmd without needing administrator permissions?
If there is another distributed-model version control system that works better on portable devices (especially on systems where cmd is restricted and I'm not an administrator), I'll gladly switch to it (if you know of one, please tell).
You should be able to access the root directory of any drive by specifying its drive letter:
(for instance)
cd /e
I'm learning to use the command line for remote server operations, since the usual FTP/SFTP clients are terribly slow compared to Unix commands over SSH. But of course that's not very practical if you're not an expert.
My question is: is there an app (or a web app) that gives a UI to remote Unix commands over SSH? Something that, for example, when I copy a file between two folders will use the cp command (ideally exposing its options).
thanks
PS: I use Mac
Not exactly a GUI for the shell, but nice for filesystem operations: MacFuse. Here is a short introduction to sshfs and MacFuse:
http://zanshin.net/2009/11/06/using-sshfs-macfuse-and-macfusion-to-access-remote-filesystems/
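Once MacFuse and sshfs are installed, mounting a remote directory is a one-liner (user, host, and paths below are placeholders):
mkdir -p ~/mnt/remotehost
sshfs user@remotehost:/var/www ~/mnt/remotehost   # remote directory now behaves like a local folder
# ... browse and copy with Finder or plain cp/mv ...
umount ~/mnt/remotehost                           # detach when done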