We have a CMS on Heroku, and some files were generated by the CMS. How can I pull those changes down? Can I commit the changes remotely and pull them down? Is there an FTP option of some kind?
See: https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
It's not designed for persistent file generation and usage.
In practice, it works like this: a user puts some code into a repository. That code is dynamically pulled onto temporary Amazon EC2 instances and executed. The code can be moved from virtual machine to virtual machine, node to node, without disruption, even across data centers. There is no real "place" to get the products of your code from the environment, because anything generated by the checked-out code can (and will) be destroyed as your deploy skips around between those temporary machines.
That being said, there are some workarounds:
If your app includes something like a file browser within your deployed code, you can grab the (entirely temporary) files using that file browser and commit them back to your persistent code trunk.
Another option is using something like S3 for your persistent storage: your application reads from, and writes to, a data storage service, knowing that while Heroku will rewrite and destroy your local data on a frequent basis, the external service will keep the files (a rough sketch of this follows below).
Similarly, you can change your application to use Heroku's Postgres for persistent data storage, or use Amazon RDS, etc.
Alternatively, you can structure your application so that any files it generates are regenerated every time the code is refreshed, redeployed, and moved around.
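For illustration, here is a rough sketch of the S3 workaround in Python with boto3; the bucket name, config var names, and keys are all assumptions, and any language's AWS SDK would do the same job:

import os
import boto3

# boto3 reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from config vars.
s3 = boto3.client("s3")
BUCKET = os.environ["S3_BUCKET"]  # hypothetical config var holding the bucket name

def save_generated_file(key, data):
    # Write generated content to S3 instead of the dyno's local disk.
    s3.put_object(Bucket=BUCKET, Key=key, Body=data)

def load_generated_file(key):
    # Read it back later, from any dyno, even after a restart or redeploy.
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()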
Anytime my app goes to sleep and comes back on, I lose data in my database
And I'm not storing any media, it's just form data (text)... I built the app on Strapi and I've followed all their guidelines, but it keeps happening. I'd be happy if anyone can help.
Local data (files, the database) is cleared after a dyno restart because the Heroku filesystem is ephemeral. A dyno is restarted at least every 24 hours.
In your case Strapi uses SQLite, which saves its data in a local file.
Strapi suggests configuring Postgres on Heroku; alternatively, you can use an external DB storage service.
First of all:
As you create content types with Strapi, it generates the code (new files) for the corresponding controllers/routes/services.
Heroku does not persist data after a restart
After a restart, Strapi checks which content types exist in the code and deletes the tables of no-longer-existing types from the database.
Therefore, on Heroku you have to set up all your content types locally and connect to an external DB (e.g. Heroku Postgres), never Strapi's default file-based SQLite DB.
Then push the generated files and finally deploy.
Thus, on Heroku you should always run in production mode. This way the option to alter content types is completely blocked and you will not run into the issue of data loss after a restart.
I'm fairly new to server administration. I have my Laravel app up and running and I want to make sure it has proper backups. I have researched some backup packages and I have settled on https://github.com/spatie/laravel-backup.
However, if the server fails, I need to know how to use the most recent backup (which will be on AWS S3) to restore the database on the rebuilt server. Are there any suggestions for guides on how to do this? I can't seem to find any, unless it doesn't really require much learning and is instead just a couple of MySQL commands.
Thanks!
I would use replication, and within Laravel I would try to switch the connection to the replica database server so things can run smoothly until the problem is resolved.
Take a look at this Cross-Region Replication
A typical production environment automatically runs backups of the most important things your deployment needs in order to recover from a failure: commonly the database, the storage folder, and configuration files.
Also, when you deploy a Laravel application there aren't many things that are "worth" backing up: you can have the entire disk mirrored somewhere, or you can schedule a backup script that runs every N hours/days and backs up the things that matter most to your application (a rough sketch follows below).
Personally, I wouldn't rely on a Laravel package to handle my backups; you can always use other backup utilities, replication, and so on.
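To make the "backup script" idea concrete, here is a rough sketch in Python (not the spatie/laravel-backup package itself); the config var names, bucket, and mysqldump usage are assumptions you would adapt to your setup:

import datetime
import os
import subprocess
import boto3

DB_NAME = os.environ["DB_DATABASE"]    # assumed to mirror Laravel's .env names
DB_USER = os.environ["DB_USERNAME"]
DB_PASS = os.environ["DB_PASSWORD"]
BUCKET = "my-backup-bucket"            # hypothetical S3 bucket

dump_file = "/tmp/backup-%s.sql" % datetime.date.today().isoformat()

# Dump the database to a local file with mysqldump.
with open(dump_file, "wb") as out:
    subprocess.check_call(["mysqldump", "-u", DB_USER, "-p" + DB_PASS, DB_NAME], stdout=out)

# Ship the dump off the server so it survives a total server failure.
boto3.client("s3").upload_file(dump_file, BUCKET, os.path.basename(dump_file))

Restoring on a rebuilt server is then the reverse: download the latest dump from S3 and feed it to mysql.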
Update
Take a look at the link below:
User Guide » Amazon RDS DB Instance Lifecycle » Backing Up and Restoring
You can call the API function RestoreDBInstanceFromDBSnapshot, as shown in the example below.
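For example, a rough sketch with Python and boto3 (the region, instance identifier, snapshot identifier, and instance class are all placeholders):

import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

# Create a fresh DB instance from an existing snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="myapp-db-restored",               # hypothetical new instance name
    DBSnapshotIdentifier="rds:myapp-db-2018-01-01-00-00",   # hypothetical snapshot identifier
    DBInstanceClass="db.t2.micro",
)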
But I don't think anything automated exists that would auto-restore or magically make everything work; you would need a lot of safety checks if something like that were even attempted. Final word: I believe manually entering or sending the restore request is the most solid solution.
I have built a showcase Magento installation that I am about to make public. I'd like to give people backend access, but I don't want their changes to stick, and I'm not sure how to go about this. What's the best way?
I have seen a Magento showcase somewhere that gave backend access, stating that the website would be reset every 12 hours. So I suppose there is a cron job starting a script that copies the contents of one directory into the other (the public one) every 12 hours?
There are two good solutions:
1. Virtual Machine
Run the entire site in a virtual machine or VPS. Make a snapshot of the machine when it is in the state you want to reset it to. Have a cronjob that triggers the "return to snapshot" routine. The exact details vary between hosts but look for a host with an API.
2. File Copy and DB Reset
Keep a copy of all the files in another folder, together with a dump of the database. You can use mysqldump to create a database dump. You can then go back to that state by having a cronjob that removes the current folder, copies back the old folder and imports the database dump.
There are a few ways to import the database dump file, including the SOURCE command:
SOURCE dumpfile.sql;
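If it helps, here is a rough sketch of such a reset job in Python; the paths, database name, and credentials are all placeholders, and the mysql command line is used instead of the SOURCE command:

import shutil
import subprocess

SNAPSHOT_DIR = "/var/www/magento-snapshot"  # pristine copy of the files (hypothetical path)
LIVE_DIR = "/var/www/magento"               # public docroot (hypothetical path)
DUMP_FILE = "/var/backups/magento.sql"      # dump created earlier with mysqldump

# Throw away the current state and copy the pristine files back.
shutil.rmtree(LIVE_DIR)
shutil.copytree(SNAPSHOT_DIR, LIVE_DIR)

# Re-import the database dump (same effect as SOURCE magento.sql; in the mysql shell).
with open(DUMP_FILE, "rb") as dump:
    subprocess.check_call(["mysql", "-u", "magento_user", "-pSECRET", "magento_db"], stdin=dump)

A crontab entry such as 0 */12 * * * /usr/bin/python3 /opt/reset_showcase.py would then run the reset every 12 hours, matching the behaviour described in the question.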
We have a clustered Domino server environment in production. I want to migrate code changes from staging to production. I have not changed the signature of any of the old functions in the script library, but I have added a new function to the script library which is called by a specific agent. All works well in staging. Now I want to transfer these changes to the cluster (which consists of two servers) in production.
If I copy and paste the new function (in the script library) and the changed agent that calls this new function to one of the servers in production, will these code changes automatically be replicated to the other server? I mean, what's the best way to migrate these changes?
Thanks in advance.
Data and design elements get replicated immediately between clustered servers. So, if you change an agent or script library on the first server, the second server gets the changes only seconds later.
Sometimes you get an "Error loading USE or USELSX module" message after changing a script library. The error occurs if you call an agent or open a form that uses the script library. In this case, you have to recompile the agent or form so that the design elements work properly with the new internal structure of the script library.
This error probably won't appear in your case, as your changes work well in the staging environment. You should still test all parts of your application that use the changed script library to make sure they work fine.
If you really want to make it seamless:
1) make your staging database a master template, and
2) make your production database inherit the design from that master template.
Then, on one of your production databases, Application > Refresh Design, and it'll ask what server to refresh the design from. Make this your staging server.
It's particularly important to recompile all LotusScript if you don't use this approach; otherwise, you may end up with "Type mismatch on external name: ". If you recompile on your staging server, both the uncompiled and the compiled LotusScript design documents will be part of the design refresh, and that makes things a lot easier.
Note that all clients must completely close and reopen the database to recognize any code changes. (This means 'the database tab itself, as well as any documents that are open from that database'.)
Suppose that when I open my Heroku web page, it updates a file, like a database.
Now I want to retrieve that file.
I tried git pull; when it was done, I checked, and it is the old file I pushed last time.
I tried heroku run bash and cat-ed the file; it gives the old contents. :/
But I can assure you the file is getting updated, because if I output the file contents through the server (i.e. if I request a particular path on my address, it shows the contents of that file in the browser), it shows updated data.
I have no idea why this is happening. Any clue?
I am using Python 3 with the wsgiref module.
You shouldn't use the dyno filesystem for persistent file storage (like databases). Dyno filesystems are ephemeral, and changes are not reflected in the git repository associated with your app. Also, each dyno gets its own copy of the filesystem, and heroku run bash starts a new one-off dyno, which is why you see the old file there even while your web dyno serves updated data. Use one of the data storage add-ons instead: https://addons.heroku.com
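Since you are on Python, a minimal sketch with the Heroku Postgres add-on and psycopg2 (both are assumptions about your setup) would look like this:

import os
import psycopg2

# Heroku Postgres exposes its connection string in the DATABASE_URL config var.
conn = psycopg2.connect(os.environ["DATABASE_URL"])

with conn, conn.cursor() as cur:
    # Store the data in Postgres instead of writing it to a local file on the dyno.
    cur.execute("CREATE TABLE IF NOT EXISTS entries (id serial PRIMARY KEY, body text)")
    cur.execute("INSERT INTO entries (body) VALUES (%s)", ("hello from the web dyno",))

Rows written this way are visible from every dyno, from heroku run bash, and across restarts.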