Is it safe to run `sentry upgrade --noinput` every time Sentry is restarted?

In the context of running Sentry in OpenShift/Kubernetes and automating the upgrade process, I'm considering running `sentry upgrade --noinput` every time the pod is recreated (before running `sentry run web`). Is this safe? In other words, are repeated runs of `sentry upgrade --noinput` completely passive and harmless when no version change has taken place?

Sentry core developer Matt Robenolt answered:
Yes, it is perfectly safe and idempotent.
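In the Kubernetes/OpenShift setup described above, that means the migration step can simply be chained in front of the web process. A minimal sketch of such a container startup command (the entrypoint wiring is my assumption, not part of the answer):

# Run migrations first, then exec the web process; per the answer above,
# the upgrade step is a no-op when the schema is already current.
sentry upgrade --noinput && exec sentry run web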

Related

How to deploy a web app without stop and restart

I build the project to a binary file, deploy it to the server, and start it with nohup. But when I update my code and rebuild the program, I have to kill the running process first, replace the file, and start it again.
My problem is:
The app is down for at least a few seconds.
I must update the file manually (log in to the server, kill the process, replace the file, then start it again).
Is there any way to hot-update the program, something like PHP? I just want to push my code to the server with git (or svn, or some other way) and have the server rebuild the app and gracefully restart it.
Usually you run more than one instance of your web application behind a reverse proxy, e.g. nginx, or any other load balancer. If a few seconds of downtime is an issue for you, then you need an HA setup anyway. And in such a setup you can do a rolling update, where you replace instances one by one.
A quick search will turn up step-by-step deployment instructions, e.g.: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-go-web-application-using-nginx-on-ubuntu-18-04
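For a single-host Go deployment like the one in the question, the rolling pattern can be sketched as a small script. Everything here is a placeholder sketch: the ports, paths, -port flag, and /healthz endpoint are assumptions about your app; nginx -s reload and the curl health check are standard.

# Build the new binary and start it on a spare port.
go build -o /srv/app/app.new .
/srv/app/app.new -port 8081 &

# Wait until the new instance answers its (assumed) health endpoint.
until curl -sf http://127.0.0.1:8081/healthz; do sleep 1; done

# Repoint the nginx upstream at the new port and reload; nginx drains
# in-flight requests on the old workers, so no connections are dropped.
sed -i 's/127.0.0.1:8080/127.0.0.1:8081/' /etc/nginx/conf.d/app-upstream.conf
nginx -s reload

# Stop the old instance once traffic has moved over.
kill "$(pgrep -f 'app -port 8080')"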

Any way to distinguish between Heroku's scheduled dyno restarts and a new build?

I have some code in my app that purges our cache (using Cloudflare's API) every time it starts up so that whenever a change to the website is deployed it shows up instantly for everyone instead of the old version remaining in Cloudflare's cache indefinitely.
Heroku restarts my dyno every 24 hours. This purges Cloudflare for no reason, causes a large spike in traffic, and messes with analytics.
Is there a way to detect on startup if this app restart is occurring due to an actual Heroku deploy, or just due to their daily restart?
One way I've considered is using GitHub's public API to check on startup if a commit has been pushed to master in the last hour, but that seems like a hack and there is probably a better way.
This is a classic use case for a release phase task:
Release phase enables you to run certain tasks before a new release of your app is deployed. Release phase can be useful for tasks such as:
Sending CSS, JS, and other assets from your app’s slug to a CDN or S3 bucket
Priming or invalidating cache stores
Running database schema migrations
If a release phase task fails, the new release is not deployed, leaving your current release unaffected.
Move your "clear cache" logic to a separate script and add it to your Procfile, e.g.:
web: python some_main_command.py
release: python clear_cache.py
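The clear_cache.py script itself only needs to call Cloudflare's purge endpoint. Shown here as the equivalent curl call (the purge_cache endpoint and payload are Cloudflare's documented v4 API; CF_ZONE_ID and CF_API_TOKEN are placeholder environment variable names):

# Purge the whole zone so the new release is served immediately.
curl -X POST "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/purge_cache" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'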
I gave my app access (via environment variables) to the Heroku API, which allows it to query when its own most recent release was created. If the most recent release was more than 24 hours ago, we don't purge Cloudflare. Code is here: https://github.com/ImpactDevelopment/ImpactServer/commit/db1cced1ed298b933cee87457eaa844f60974f60#diff-12a774f9437b88d4b4ebbd4e2ab726abR25
This detects anything that causes the app to be re-released, including code changes, environment variable changes, add-on changes, etc.
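The underlying check is a single Platform API call: fetch the most recent release and compare its timestamp to now. A sketch with curl and jq (the endpoint, version header, and Range header follow Heroku's Platform API docs; the env var names, the one-hour threshold, and the purge script are placeholders):

# Fetch the newest release for the app.
latest=$(curl -s "https://api.heroku.com/apps/$HEROKU_APP_NAME/releases" \
  -H "Accept: application/vnd.heroku+json; version=3" \
  -H "Authorization: Bearer $HEROKU_API_KEY" \
  -H "Range: version ..; order=desc, max=1" | jq -r '.[0].created_at')

# Only purge when the release is fresh, i.e. this restart is a real deploy.
age=$(( $(date +%s) - $(date -d "$latest" +%s) ))
if [ "$age" -lt 3600 ]; then
  ./purge_cloudflare.sh   # placeholder for your purge logic
fi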

Downgrade Heroku Postgres from standard to hobby

I'm trying to downgrade a Heroku Postgres database from Standard to Hobby Basic. I'm not fully using the web app at the moment, but there is still some data in there that needs to be kept. How can I downgrade? (Some downtime is fine.)
Update: I managed to set up and promote a new database based on the instructions below, but I can't deprovision the old one.
Heroku's instructions for upgrading with pg:copy will also work for downgrading. Here's the summary (a command-level sketch follows the steps):
Provision a new database
Enter maintenance mode to prevent database writes
Transfer data to the new database
Promote the new database
Exit maintenance mode
If your app isn't live (not being actively written to), you can skip the maintenance mode steps.
Once you've done that, you can deprovision your old database.
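At the command level, those steps map onto the standard Heroku CLI roughly like this (PINK and SILVER are example attachment names; Heroku assigns the real colors when you provision, and heroku pg:info shows which is which):

heroku addons:create heroku-postgresql:hobby-basic    # provision the new db (e.g. ...PINK)
heroku maintenance:on                                 # skip if the app isn't live
heroku pg:copy DATABASE_URL HEROKU_POSTGRESQL_PINK    # transfer the data
heroku pg:promote HEROKU_POSTGRESQL_PINK              # new db becomes DATABASE_URL
heroku maintenance:off
heroku addons:destroy HEROKU_POSTGRESQL_SILVER        # deprovision the old db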

Undeploying Business Network

Using Hyperledger Composer 0.19.1, I can't find a way to undeploy my business network. I don't necessarily want to upgrade to a newer version each time, but rather to replace the deployed network with a fix in the JS code, for instance. Is there any replacement for the undeploy command that existed before?
There is no replacement for the old undeploy command, and in fact it did not really undeploy anything: it merely hid the old network.
Be aware that every time you upgrade a network it creates a new Docker image and container, so you may want to tidy these up periodically. (You could also delete the BNA files from the peer servers, but these are very small in comparison to the Docker images.)
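For the tidy-up, the chaincode containers and images Fabric builds are named with a dev- prefix, so something along these lines works; check what docker ps -a actually shows on your peers first, since the dev- naming is Fabric's default rather than anything Composer-specific:

# Remove chaincode containers left over from old network versions...
docker rm -f $(docker ps -aq --filter "name=dev-peer")
# ...and the images that were built for them.
docker rmi $(docker images -q --filter "reference=dev-*")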
It might not help your situation, but if you are rapidly developing and iterating you could try this in the online Playground or local Playground with the Web profile - this is fast and does not create any new images/containers.

Chef for Large scale web Deployment in windows

I am trying to do MSI web deployments with Chef. I have about 400 web servers with the same configuration. We will deploy in two slots of 200 servers each.
I will follow the steps below for a new release:
1) Increase the cookbook version.
2) Upload the cookbook to the Chef server.
3) Update the cookbook version in the role and run list.
The cookbook does a lot of steps: installing 7 MSIs, updating IIS settings, updating the web.config file, and adding registry entries. Once deployment is done we need to notify the testing team so that they can start testing. My question is: how can I ensure the deployment completed successfully on all machines? How can I find out if one MSI failed to install on one machine, or one web.config file was not updated properly?
My understanding is that chef-client runs every 30 minutes by default, so I have to wait up to 30 minutes for the deployment to complete. Is there another way to push the run (I can't use Push Jobs, since Chef removed Push Jobs support from Chef High Availability servers), like triggering chef-client with knife from the workstation?
It would be great if anyone using Chef for large-scale Windows deployments could share their experience. Thanks in advance.
I personally use Rundeck to trigger on-demand Chef runs.
Given your description, I would use two production environments, one for each slot, and bump the cookbook version constraint for each group separately.
For reporting at this scale, consider buying a license for Chef Manage and Chef Reporting so you have a complete overview; the next option is to use a report handler that records the run status and sends a mail if there was an error during the run.
Nothing here is specific to Windows, so really you are asking how to use Chef in a high-churn environment. I would highly recommend checking out the new Policyfile workflow; we've had a lot of success with it, though it has some sharp limitations. I've got a guide up at https://yolover.poise.io/. Another option on the cookbook/data release side is to move a lot of your tunables (e.g. versions of things to deploy) out of the cookbook and into a small web service somewhere, then have your recipe code read from that to get its tuning data.
As for push vs. pull, most people end up with a hybrid. As Tensibai mentioned, Rundeck is a popular push-based option. Usually you still leave background interval runs on a longer cycle time (maybe 1 or 2 hours) to catch config drift, and use the push system for more specific deploy tasks. Beyond Rundeck you can also check out Fabric, Capistrano, MCollective, and SaltStack (you can use its remote execution layer without the CM parts). Chef also has its own Push Jobs project, but I think I can safely say you should avoid it at this point; it never got enough community momentum to really go anywhere.
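On the push side for Windows specifically, the usual ad-hoc tool is knife winrm from a workstation, run against one slot at a time. A sketch (the node search 'role:web_slot1' and the credentials are placeholders; adjust the auth flags to your WinRM setup):

# Trigger a chef-client run on the first 200-server slot on demand.
knife winrm 'role:web_slot1' 'chef-client' \
  --winrm-user Administrator --winrm-password 'REDACTED'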
