I have a project I'm working on in which I've run into an interesting problem, and I'm not sure of the most efficient way to solve it. I have a Spring application that pulls some data at random from a PostgreSQL database and sends it to clients. On the other hand, I have a GitLab repo to which multiple individuals submit data, which every once in a while is manually entered into the Postgres DB for the aforementioned purpose. What would be the best way to automate this?
Cheers
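One possible approach (a sketch under my own assumptions, not something from the thread): have a scheduled GitLab CI job run a loader script that reads the submitted files from the repo and upserts them into Postgres, so nothing has to be entered by hand. The submissions/*.csv layout, the submissions table, and the DATABASE_URL variable below are all hypothetical placeholders.

    import csv
    import glob
    import os
    import psycopg2

    # Connect using a URL injected by the CI environment (placeholder name).
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    with conn, conn.cursor() as cur:
        # Walk every submitted file checked into the repo.
        for path in glob.glob("submissions/*.csv"):
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    # ON CONFLICT DO NOTHING makes re-runs idempotent.
                    cur.execute(
                        "INSERT INTO submissions (author, payload)"
                        " VALUES (%s, %s) ON CONFLICT DO NOTHING",
                        (row["author"], row["payload"]),
                    )
    conn.close()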
Hi, I am trying to deploy my application with zero downtime. My app ships database DDL changes quite frequently. What are all the possible ways to achieve this with zero transaction failures in the app? Though we can use Kubernetes to achieve zero downtime of the application itself, I don't want any service requests failing at deployment time due to database changes like dropping columns, dropping tables, or changing datatypes.
Tech stack:
Kubernetes - deployment
Spring Boot - Java app
Oracle - database
This has nothing to do with Kubernetes. You will have the same problems or challenges when you install your application on bare-metal servers, on VMs, or on plain Docker. Have a look at https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database which describes the problem pretty well.
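To make the idea concrete, here is a minimal sketch of the expand/contract pattern that kind of post centres on, using a hypothetical rename of a surname column to last_name (table and column names are my own, and the SQL is kept generic; Oracle syntax may differ slightly). Each phase ships with a separate release, so old and new application versions can run side by side during the rolling deployment.

    # Phase 1 ("expand"), released with app version N:
    # old code still reads/writes surname; new code writes both columns.
    EXPAND = [
        "ALTER TABLE person ADD last_name VARCHAR(100)",
        "UPDATE person SET last_name = surname WHERE last_name IS NULL",
    ]

    # Phase 2 ("contract"), released with app version N+2,
    # once no running instance touches the old column any more.
    CONTRACT = [
        "ALTER TABLE person DROP COLUMN surname",
    ]

    def apply(statements, connection):
        # 'connection' is any DB-API connection (e.g. python-oracledb).
        cur = connection.cursor()
        for sql in statements:
            cur.execute(sql)
        connection.commit()

The point is that no single release ever drops or retypes a column the previous release still depends on, so in-flight requests never hit a missing column.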
I have provisioned Postgres on my Heroku app and also installed Postgres locally to maintain parity with the online database (as the documentation recommends), but I'm not understanding how this will work. Am I supposed to be accessing a local copy of a database when running on my own computer (while building and before deploying), and then using Heroku's separate Postgres database once it is deployed?
In other words, will my local app (during development) and Heroku app (deployed and live) be using the same online Postgres database?
Thanks.
Am I supposed to be accessing a local copy of a database when running on my own computer (while building and before deploying), and then using Heroku's separate Postgres database once it is deployed?
Yes, that's exactly it. Without seeing which bit of documentation you're referencing it's hard to say exactly what they mean, but perhaps there's another way to explain it.
In your local development environment, you may find that you need to test database schema changes (this is just one example; there are many). If you only had the one Heroku Postgres database, you'd be forced to test these changes in production, which might result in poor usability for your users, and that doesn't even account for the possibility of making a mistake and accidentally destroying your production data. There are a number of other shortcomings and challenges with this single-database configuration.
For these reasons and more, it's best to keep your production data completely separated from your development/staging/test environments by creating a local/staging database. You might reasonably ask, "What about the data? I need data to test!" There are many ways to put together your test database, and which one you choose will likely depend on your needs. A shortlist of possibilities (a sketch of the first option follows the list):
Use a seed file to generate mock data in your db
Use a model factory (usually runs in conjunction with your testing framework)
Take a dump of your production database, anonymize and redact sensitive information, and use that for local testing.
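For the first option, a minimal seed-script sketch, assuming a hypothetical users table, the psycopg2 driver, and a local database called myapp_dev; adapt the table and connection details to your own stack.

    import psycopg2

    conn = psycopg2.connect("dbname=myapp_dev user=postgres")
    with conn, conn.cursor() as cur:
        cur.execute("""
            CREATE TABLE IF NOT EXISTS users (
                id    SERIAL PRIMARY KEY,
                email TEXT UNIQUE NOT NULL,
                name  TEXT NOT NULL
            )
        """)
        # Predictable mock rows keep local tests repeatable.
        for i in range(50):
            cur.execute(
                "INSERT INTO users (email, name) VALUES (%s, %s)"
                " ON CONFLICT DO NOTHING",
                (f"user{i}@example.com", f"Test User {i}"),
            )
    conn.close()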
I am building an ETL application that needs to fetch data from a Heroku Postgres DB a few times a day, but the application is not running on Heroku. I am already able to do this using the current credentials, but Heroku states that the credentials are not permanent and will be rotated from time to time.
What is the best way to do this? Building a REST API on top of my app is not a viable option. I have seen that Heroku provides a config vars API which I could potentially use to fetch the DB credentials, but is there a simpler/cleaner way of implementing this? Is enforcing permanent credentials an option?
There is no way to enforce it. And it's not only a question of credentials, but also of the database hostname; it's EC2 underneath.
Your safest bet is to always fetch the current DATABASE_URL from your Heroku app. If you only need to do it 'a few times a day', this is not a problem.
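A minimal sketch of doing that from the ETL job via the Heroku Platform API's config-vars endpoint; my-app and the HEROKU_API_KEY environment variable are placeholders for your own app name and API token.

    import os
    import requests

    # Ask the Platform API for the app's current config vars.
    resp = requests.get(
        "https://api.heroku.com/apps/my-app/config-vars",
        headers={
            "Accept": "application/vnd.heroku+json; version=3",
            "Authorization": f"Bearer {os.environ['HEROKU_API_KEY']}",
        },
        timeout=10,
    )
    resp.raise_for_status()
    # Always fresh, even right after Heroku rotates the credentials.
    database_url = resp.json()["DATABASE_URL"]

Run this at the start of every ETL run and connect with whatever it returns, rather than caching credentials anywhere.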
I'm fairly new to server administration. I have my Laravel app up and running and I want to make sure it has proper backups. I have researched some backup packages and I have settled on https://github.com/spatie/laravel-backup.
However, if the server ever fails, I need to know how to use the most recent backup (which will be on AWS S3) to restore the database on the rebuilt server. Are there any suggestions for guides on how to do this? I can't seem to find any, unless it doesn't really require much learning and is instead just a couple of MySQL commands.
Thanks!
I would use replication, and within Laravel I would try to switch the connection to the replica database server so things can run smoothly until the problem is resolved.
Take a look at this: Cross-Region Replication
A typical production environment automatically runs backups of the most important things that your deployment needs in order to recover from a failure. Those parts would commonly be your database, your storage folder, and configuration files.
Also, when you deploy a Laravel application there aren't many things that are "worth" backing up; you can have the entire disk mirrored somewhere, or you can schedule a backup script which runs every N hours and backs up the things that are most important to your application.
Personally I wouldn't rely on a Laravel package to handle my backups; you can always use other backup utilities, replication, and so on.
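As for actually restoring on the rebuilt server, it really does come down to fetching the dump and replaying it. A hedged sketch, assuming spatie/laravel-backup has uploaded a zip containing a MySQL dump to S3; the bucket, key, dump path inside the zip, and database name are placeholders for whatever your configuration produces.

    import subprocess
    import zipfile
    import boto3

    # Pull the most recent backup archive down from S3 (placeholder names).
    s3 = boto3.client("s3")
    s3.download_file("my-backup-bucket", "myapp/latest-backup.zip", "backup.zip")

    with zipfile.ZipFile("backup.zip") as zf:
        zf.extractall("restore")  # assumed to contain db-dumps/mysql-mydb.sql

    # Replay the dump into the freshly created database on the new server.
    with open("restore/db-dumps/mysql-mydb.sql", "rb") as dump:
        subprocess.run(["mysql", "-u", "root", "-p", "mydb"],
                       stdin=dump, check=True)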
Update
Take a look at the link below:
Amazon RDS User Guide » DB Instance Lifecycle » Backing Up and Restoring
You can call the API function RestoreDBInstanceFromDBSnapshot, as shown in the sketch below.
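A minimal boto3 sketch of that call; the region, instance, and snapshot identifiers are placeholders.

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="myapp-db-restored",  # new instance to create
        DBSnapshotIdentifier="myapp-db-snapshot",  # snapshot to restore from
    )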
But I don't think anything automated exists that would auto-restore or magically make everything work; you would need to do a lot of safety checks before even attempting something like that. Final word: I believe manually entering or sending the restore request is the most solid solution.
I want to understand how people are handling an update to a production app on the Parse.com platform. Here is the scenario that I am not sure about.
Create an app called myApp_DEV. The app contains a database as well as associated cloud code.
Once testing is complete and it's ready for go-live, I will clone this app into myApp_PRD (the production version). Cloning it will copy the entire database as well as the cloud code.
So far so good.
Now, 3 months down the line, I have added some functionality, which includes adding some cloud code functions as well as some new columns to the tables in the DB.
How do I update myApp_PRD with this new database structure? If I try to clone it from my DEV app, it tells me the app already exists.
If I clone it into a new app (say myApp_PRD2) from DEV, then all the data will be lost, since the customer is already live.
Any ideas on how to handle this scenario?
Cloud Code supports deploying to production and development environments.
You'll first need to link your production app to your existing Cloud Code. This can be done on the command line:
parse add production
When you're ready to release, it's a simple matter of:
parse deploy production
See the Parse Documentation for all the details.
As for the schema changes, I guess we just have to manually add all the new columns.
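If adding them by hand through the data browser gets tedious, one possibility (an assumption on my part, based on Parse's Schema REST API, with a hypothetical GameScore class and a new highScore column; it requires the master key, so keep it out of client code):

    import requests

    # Add a column to the production app's class schema (placeholder keys).
    resp = requests.put(
        "https://api.parse.com/1/schemas/GameScore",
        headers={
            "X-Parse-Application-Id": "PROD_APP_ID",
            "X-Parse-Master-Key": "PROD_MASTER_KEY",
            "Content-Type": "application/json",
        },
        json={"className": "GameScore",
              "fields": {"highScore": {"type": "Number"}}},
    )
    resp.raise_for_status()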