How to upload a file to a server that's not in the inventory? - ansible

Sometimes we need to upload the logs of an application that is distributed across multiple local Unix machines to the vendor's server. The machines are all part of the same inventory and can archive the logs and upload the archives directly.
The server runs Unix and accepts only SCP and SFTP, so the synchronize module (which uses rsync) will not work.
There is a net_put module, but that seems intended for uploads to special network appliances; when I try to use it, I get cryptic errors about ansible_network_os...
I can, of course, use the command module, but isn't there something specifically targeted at SCP and/or SFTP servers?

No, there is no module for scp or sftp, and I don't really see that it would provide a lot of value. sftp and scp are straightforward to use with the command module, and the underlying commands don't really support the things you might want a module to do, like skipping an upload if the remote file wouldn't change.
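For example, an ad-hoc push with scp could look like the line below. The host group, paths, and vendor hostname are made up, and it assumes key-based authentication to the vendor host is already set up and its host key accepted:
ansible log_hosts -m ansible.builtin.command -a "scp /var/log/app/app-logs.tar.gz upload@vendor.example.com:/incoming/"
If the vendor only allows SFTP, sftp's batch mode works the same way, e.g. sftp -b upload.batch upload@vendor.example.com, where upload.batch contains a put line for each archive.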

Related

Shell Script for file monitoring

I have 2 AWS EC2 LAMP servers and I want to replicate the data in one of the folders to the other. I know I could try EFS, but for some reason it is not a viable option at the moment. So, here is what I want to ask for help with:
Server A and Server B have the same file structure, but the files inside don't match. I want a script on Server A to look in, for example, the /var/www/html/../file/ folder, compare it with /var/www/html/../file/ on Server B, and copy all new files from Server A to B.
Any help on how to write it?
Well, I used S3FS, which is a lot easier than breaking my head over a script. It readily copies the files from one server to another.
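That said, if a script is still wanted, a plain rsync over SSH can handle the "copy only the new files" part by itself. A minimal sketch, assuming SSH access from Server A to Server B and using made-up paths:
rsync -av --ignore-existing /var/www/html/example/file/ serverB:/var/www/html/example/file/
--ignore-existing skips anything already present on Server B, so only new files are transferred; run it from cron on Server A to keep the folders in step.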

How do I manage secure files using a public babushka dep?

I want to share my babushka deps in much the same way as The Conversation do: https://github.com/conversation/babushka-deps
However, I manage SSL certificates and SSH keys using chef. Right now those files are directly in my chef config, but as I'd like to share my babushka config I can't put them there.
Is there a good way in babushka to deal with secure, outside-of-repo files?
This is something I'm working on at the moment. There's no built-in way, but it can be accomplished with a little bit of scripting.
If you're running the deps on a remote system, say from a shell script, then I'd add a command to the script to first rsync the private data into place:
rsync -taP private/ user@host:~/private/
ssh user@host 'babushka "server configured"'
That's the simplest case, but it quickly gets messy. Instead, I'm doing this sort of thing with babushka itself, in order to describe the whole process in terms of deps.
I have a dep with a couple of small helper methods for installing babushka on a remote machine, and then running arbitrary deps on it. This allows you to write local deps that depend on the results of remote runs, effectively nesting babushka within itself.
It's not quite general enough to be merged into core yet (and it's in need of a refactor), but it works well. Here it is if you'd like to give it a go in the meantime:
https://github.com/conversation/babushka-deps/blob/master/provision.rb#L123-131
Using this method, you can pass arguments to each remote run. That makes it easy to supply private data, e.g. your private key, or an SSL cert for setting up your webserver, etc.
(Note though that argument values are logged to ~/.babushka/logs/dep-name on the local and remote boxes, so 'private' assumes that the relevant user accounts on both are trusted.)

Syncing a file from a client to a server

I'm trying to keep a file updated in real time with the server. It's more like real-time syncing with a very small delay. Is there any application that lets me do this? Or would you suggest using a local host as a server?
I don't know how you are connected to your server, but I assume it is something like SCP / SFTP / FTP, and I don't know your OS. WinSCP will do exactly what you need: you can set it to watch your filesystem (a specified folder), and it will update the files on the server as soon as a file on your drive changes.
It also supports command-line scripting, so you can use it from your own applications.
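As a rough sketch of the scripting side (the session URL and folders are placeholders, the host-key option is omitted, and the exact syntax should be checked against the WinSCP scripting documentation for your version):
winscp.com /ini=nul /command "open sftp://user@example.com/" "keepuptodate C:\local\folder /remote/folder" "exit"
keepuptodate is the scripting counterpart of the GUI's "keep remote directory up to date" feature: it watches the local folder and uploads changes as they happen.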

Encrypted FTP Storage

I guess this is kind of a programming question, because I'm going to write a program if this doesn't exist.
So I found a very cheap web host (I don't really care about the actual web hosting). They will give me a domain name and an FTP server with a ton of storage space. Anyway, I want to back up a few hundred gigs of data (mostly family photos and scans of important documents), plus any future family photos / documents. I don't care if everything on my local NAS dies in a fire; I just want to have the photos and important documents backed up off-site.
So I want some program that lets me select folders locally and schedule them to be backed up to the FTP server. I'm a bit of a security nut, so I'd like the files to be encrypted locally before being transferred up to the server.
I know I can do this with TrueCrypt volumes, but I don't want to transfer an entire encrypted volume blob up to the server every time I change a file in it. I could use multiple TrueCrypt volumes, but that would be a pain to manage.
Also, this must be Mac/Linux compatible, although I'll primarily be on Linux.
I basically need rsync + truecrypt + cron + sftp all rolled into a cryptographically secure program.
I've been searching for days with no luck. Any ideas?
mozyBackup does this - it doesn't use FTP, it has a custom uploader.
P.S. Remember that a typical home ADSL connection only does about 1Gb/day upstream.
Linux option:
The out-of-the-box option is probably duplicity (for example, see http://www.howtoforge.com/creating-encrypted-ftp-backups-with-duplicity-and-ftplicity-on-debian-lenny).
Otherwise, if these are basically rarely changed archive copies of files, I would roll my own: gnupg (or dpad) for individual file encryption, a file-change script, and ftp or rsync.
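As a rough sketch of the duplicity route (the hostname, paths, and credentials are made up; duplicity encrypts with GnuPG locally and only uploads changed data on incremental runs):
# passphrase used by GnuPG to encrypt the backup volumes before upload
export PASSPHRASE='a-long-secret-passphrase'
# password for the FTP account at the hosting provider
export FTP_PASSWORD='ftp-account-password'
duplicity /data/photos ftp://backupuser@ftp.example.com/photos
Run it from cron for the scheduling part; the first run is a full backup and later runs are incrementals, and duplicity restore ftp://backupuser@ftp.example.com/photos /data/photos pulls everything back.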

Efficiently creating tar files

Note: I'm using Windows file servers and .NET
If I were to create a TAR file from files on a remote file server (meaning, the TAR file would be created on the remote file server, where the original files are), would the bytes need to come to my machine and then go back to the file server (since my machine is running the code that's generating the TAR), or would they stay on the file server? I'm asking about the best possible (theoretical) implementation.
Thank you!
The bytes need to be where they are processed.
If you process them on your own machine, they must be transferred.
If you process them on the file server, they don't need to be transferred.
If your goal is to minimize bandwidth usage, your best bet is a script on the file server that generates the tar files for you when triggered from your machine.
The best possible implementation really depends on what your goals and constraints are.
The bytes would have to be read into your machine. The only way I know of to do the tarring on the remote server without that is to have the remote server generate the TAR itself. For example, you could connect via SSH and run a shell command on the remote server.
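For example (the hostname and paths are made up, and this assumes an SSH server and a tar binary are available on the file server), a single remote command keeps all the file bytes on the server; only the command and its output cross the network:
ssh admin@fileserver "tar -czf /backups/share.tar.gz -C /data/share ."
From .NET the idea is the same: trigger the archiving job on the file server (via SSH, a scheduled task, or a small service there) instead of streaming the files through your machine.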
Unfortunately, in the scenario described, the TAR operation will use network bandwidth. You need to run the tar program on the file server to avoid using bandwidth.
