best practices for uploading many files to live server while updating database - bash

I have roughly 200 files that I need to push to our live server after business hours. In addition to this push I have a few database updates that I need to run in conjunction with this roll out.
What has been done in the past on this system is to create a directory of the updated files on the server and a cron script that copies those files over their previous versions, and then to execute the database calls.
Here are the problems I am trying to work around:
1) There is no staging server.
2) There is no easy way to push from our version control (svn) to our live server
3) There are a lot of files and the directory structure is deep, so setting up a copy of the directories to be copied over on the server seems precarious and time-consuming.
What's the best way to do this?

The way I've done similar things in the past is to have a cron job run a script on an administrative machine that:
1) checks out the files I need for my production server onto some sort of staging machine
2) rsync's the files onto the server
3) runs a post-rsync script on the server (say via ssh'ing to the server)
However, you specify that you have no ability to use a staging machine, by which I assume you mean that you have no administrative machine at all, and that you cannot check out your repository on the server either. That makes doing this cleanly far harder. Are you sure you can't at least use your workstation or some similar box as an administrative or staging machine here?
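If you can repurpose your workstation that way, a minimal sketch of the checkout-then-rsync approach might look like this (the repository URL, hostnames, paths and the SQL file are placeholders):

#!/bin/sh
# Sketch only: adjust the repo URL, hosts, paths and the database update step.
set -e

WORKDIR=/tmp/deploy-$(date +%Y%m%d)

# 1) export a clean copy of the release from svn (no .svn metadata)
svn export --force https://svn.example.com/project/trunk "$WORKDIR"

# 2) mirror the tree onto the live server; --delete removes stale files
rsync -az --delete "$WORKDIR"/ deploy@live.example.com:/var/www/project/

# 3) run the database updates on the server once the files are in place
#    (assumes the updates are collected in a single SQL file)
ssh deploy@live.example.com 'mysql myapp < /var/www/project/db/updates.sql'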

Related

Backup strategy ubuntu laravel

I am searching for a backup strategy for my web application files.
I am hosting my (Laravel) application on an Ubuntu (18.04) server in the cloud and currently have around 80 GB of storage that needs to be backed up (and this grows fast). The biggest files are around 30 MB; the rest are small jpg/txt/pdf files.
I want to make a full backup of the storage directory at least twice a day and store it as a zip file on a local server. I have two reasons for this: independence from cloud providers, and archiving.
My first backup strategy was to zip all the contents of the storage folder and rsync the zip. This goes well up to a couple of gigabytes, but then the server gets completely stuck on CPU usage.
My second approach is plain rsync, but with that I can't track when a file is deleted or added.
I am looking for a good backup strategy that preferably generates zips before or after the backup and stores them, so we can browse and examine the files back in time.
Strangely enough I could not find anything that suits me; I hope someone can help me out.
I agree with @RobertFridzema that the whole server becomes unresponsive when using the ZIP functionality from the spatie package.
I had the same situation with a customer project. My suggestion is to keep the source code files in version control, back up only the dynamic/changing files with rsync (incremental works best and is fast), and create a separate database backup strategy. For example, with MySQL/MariaDB: run mysqldump, encrypt the resulting file, and move it to external storage as well.
If ZIP creation is still a problem, I would maybe use storage that is already set up with RAID functionality, or if that is not possible, I would definitely not use the ZIP functionality on the live server: rsync incrementally to another server and run the backup strategy there.
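A minimal sketch of that approach, run from the backup machine (hostnames, paths, the database name and the GPG recipient are placeholders, and MySQL credentials are assumed to be configured in a .my.cnf):

#!/bin/sh
# Sketch only: incremental file sync plus a separate, encrypted database dump.
set -e

STAMP=$(date +%Y%m%d-%H%M)

# incremental file sync: only new or changed files are transferred
rsync -az --delete deploy@app.example.com:/var/www/app/storage/ /backup/app/storage/

# dump the database on the app server, compress and encrypt it locally
ssh deploy@app.example.com 'mysqldump --single-transaction myapp' \
  | gzip \
  | gpg --encrypt --recipient backup@example.com \
  > /backup/app/db/myapp-$STAMP.sql.gz.gpg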
Spatie has a package for Laravel backups that can be scheduled in the Laravel job scheduler. It will create zips of the entire project, including the storage dirs:
https://github.com/spatie/laravel-backup
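As a rough sketch, once the package's backup:run command is scheduled in the application, all the server needs is the standard Laravel scheduler cron entry (the project path is a placeholder):

# run the Laravel scheduler every minute; it fires backup:run at the scheduled times
* * * * * cd /var/www/app && php artisan schedule:run >> /dev/null 2>&1

# or trigger a one-off backup by hand
cd /var/www/app && php artisan backup:run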

Laravel Restore a Backup

I'm fairly new to server administration. I have my Laravel app up and running and I want to make sure it has proper backups. I have researched some backup packages and I have settled on https://github.com/spatie/laravel-backup.
However, if the server fails, I need to know how to use the most recent backup (which will be on AWS S3) to restore the database on the rebuilt server. Are there any suggestions for guides on how to do this? I can't seem to find any, unless it really just comes down to a couple of MySQL commands.
Thanks!
I would use replication, and within Laravel I would try to switch the connection to the replica database server so things can run smoothly until the problem is resolved.
Take a look at Cross-Region Replication.
A typical production environment automatically runs backups of the most important things your deployment needs in order to recover from a failure. Those would commonly be your database, your storage folder, and your configuration files.
Also, when you deploy a Laravel application there aren't many things that are "worth" backing up. You can have the entire disk mirrored somewhere, or you can schedule a backup script that runs periodically and backs up the things that are most important to your application.
Personally I wouldn't rely on a Laravel package to handle my backups; you can always use other backup utilities, replication, and so on.
Update
Take a look at the Amazon RDS documentation: User Guide » Amazon RDS DB Instance Lifecycle » Backing Up and Restoring.
You can call the API function RestoreDBInstanceFromDBSnapshot as shown in the example there.
But I don't think anything automated exists that would auto-restore or magically make everything work; you would need a lot of safety checks before even attempting something like that. Final word: I believe manually entering or sending the restore request is the most solid solution.
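For reference, a minimal sketch of that restore call using the AWS CLI (the instance and snapshot identifiers are placeholders):

# restore a new RDS instance from an existing automated or manual snapshot
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier myapp-restored \
    --db-snapshot-identifier myapp-snapshot-2023-01-01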

How do I properly set up hooks on a remote that is specified via the file:// protocol?

Say I've got an upstream repo (origin) that was added with
git remote add origin file:////upstream.host/repo.git
The repo.git is actually a Windows shared folder where my dev colleagues and I have r/w access.
Now, I want to set up a post-receive hook on upstream.host that notifies Trac about freshly pushed revisions for automatic ticket updating. Basically, this is done by calling an executable on upstream.host that does some work in the database there.
However, I noticed that the hook for some reason doesn't work.
So I've set up the hook to log everything it does to D:/temp/post-receive.log and issued a git push in order to trigger the hook.
When I looked into D:/temp on upstream.host, there was no logfile created.
Then another question of mine came to mind: https://superuser.com/questions/974337/when-i-run-a-git-hook-in-a-repo-on-a-network-share-which-binaries-are-used.
If the binaries of my machine are actually used for executing the hook, then maybe the paths of my machine are used as well. I looked into D:/temp on my own machine and voilà, there was the post-receive.log.
I traced the pwd into the logfile, and it is not D:/repos/repo.git (which I expected) but //upstream.host/repo.git. Obviously the whole hook is executed in the context of the pusher's machine and not in the context of the repo machine (upstream.host).
This is no problem for me since I have admin access to the remote machine and could use administrative shares to get my hook going (i.e. \\upstream.host\D$\repos\repo.git etc.). But this is an issue for my colleagues since they are plain users and not administrators.
How do I set up my post-receive hook properly so that it works as expected?
How do I force my hook to be entirely run on the remote machine without using anything from my machine?
Do I really have to implement a real server hosting my repo? Or are there other ways that don't need a server?
A post-receive hook is run, after data has been received, on the machine that is hosting the repository.
Now, the machine that is "hosting the repository" is not the file server where the actual packed-refs and other git database files are stored (this file server could be anything from a redundant cloud-based storage appliance to any old NAS-enabled "network disk").
Instead it is the machine that runs the "git frontend" (that is, the git commands that actually interact with the database).
Now you are using a "network share" to host your (remote) git repository. For your computer (the client), this is just another disk device (like a floppy), and the git on your client will happily store database files there and run any hooks. But they run on your computer, because the file:// protocol really does mean "local".
Btw, the fact that your remote is named upstream.host is meaningless: this name is only there for you to keep track of multiple remotes, but it could be called thursday.next instead.
So there is no way to run any script on the file server that merely happens to store some files named packed-refs and the like.
If you want to have a git server to run hooks for you, you must have a git server first. Even worse: if you want a git server on machineX to run scripts on machineX, you must install a git server on machineX first.
The good news: there is no need to "implement a real server". Just install a pre-existing one. You will find documentation about that in the Git Book, but for starters it's basically enough to have git (for interaction with the database) and sshd (for secure communication via the network, and for calling git when appropriate) installed.
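As a rough sketch, assuming a Linux box reachable as upstream.host with git and sshd already installed (the repository path, log path and git user are placeholders):

# On upstream.host: create a bare repository and give it a post-receive hook
mkdir -p /srv/git/repo.git && cd /srv/git/repo.git && git init --bare
cat > hooks/post-receive <<'EOF'
#!/bin/sh
# This now runs on upstream.host after every push; call your Trac notifier here.
echo "$(date): push received" >> /var/log/git/post-receive.log
EOF
chmod +x hooks/post-receive

# On each client: point the remote at the ssh URL instead of the file:// share
git remote set-url origin ssh://git@upstream.host/srv/git/repo.git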
Finally: I'm actually quite glad that you need to have software (e.g. a server) running on the remote end to execute code there. Just imagine what it would mean if copying some HTML files to your USB disk suddenly spawned a web server out of thin air. Not to mention w32 viruses breeding happily on my Linux NAS...

Showcase website that will reinstall itself every day?

I have built a showcase Magento installation that I am about to deploy publicly. I'd like to give people backend access, but I don't want their changes to stick, and I'm not sure how to go about this. What's the best way?
I have seen a Magento showcase somewhere that gave backend access and stated that the website would be reset every 12 hours. So I suppose there is a cron job starting a script that copies the contents of one directory into the other (the public one) every 12 hours?
There are two good solutions:
1. Virtual Machine
Run the entire site in a virtual machine or VPS. Make a snapshot of the machine when it is in the state you want to reset it to. Have a cronjob that triggers the "return to snapshot" routine. The exact details vary between hosts but look for a host with an API.
2. File Copy and DB Reset
Keep a copy of all the files in another folder, together with a dump of the database. You can use mysqldump to create a database dump. You can then go back to that state by having a cronjob that removes the current folder, copies back the old folder and imports the database dump.
There are a few ways to import the database dump file, including the SOURCE command:
SOURCE dumpfile.sql;
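Putting option 2 together, a minimal sketch of the reset script a cron job could run (paths, the database name and the schedule are placeholders, and credentials are assumed to be configured elsewhere):

#!/bin/sh
# Sketch only: wipe the live copy, restore the pristine files, re-import the DB.
set -e

rm -rf /var/www/public/*
cp -a /var/www/pristine/. /var/www/public/
mysql magento_demo < /var/backups/pristine-dump.sql

# crontab entry to run the reset every 12 hours:
# 0 */12 * * * /usr/local/bin/reset-demo.sh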

Download large files using SFTP or HTTPS

We have an application that generates many files of 2 GB to 10 GB in size. We want to save these files on a server and allow specific customers to download them. The system will delete these files after 30 days (we have around 30 customers).
From your experience which download method should we use SFTP or HTTPS and why?
And do you have any suggestions on how to guarantee download security?
Depends on who downloads what.
If it is customers downloading the files, then make things as easy as possible: offer HTTPS and recommend one or two download managers you have tested that can resume a broken download.
For internal use (backup and the like) I strongly suggest using rsync via ssh. It is much easier, since you can do incremental downloads: only files that do not exist locally or have changed remotely are transferred. That means you can simply trigger synchronization on a daily basis and the files will accumulate locally over time just as they are created remotely.
When using SFTP or rsync via ssh, the SSH server should be configured to accept only keys, not passwords, for authentication, as this is more secure.
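A minimal sketch of both pieces of that advice (the hostname and paths are placeholders):

# /etc/ssh/sshd_config: allow key-based authentication only
PasswordAuthentication no
PubkeyAuthentication yes

# incremental pull over ssh; only new or changed files are transferred,
# and --partial lets interrupted transfers of large files resume
rsync -az --partial -e ssh customer@files.example.com:/srv/exports/ /local/mirror/exports/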

Resources