How to delete an etcd key when a pod crashes? - etcd

My application locks an etcd key when the pod starts.
The requirement is to unlock that key when the pod crashes. How can this be achieved?
Thanks

I'm not sure if this helps:
1) You can set up a preStop hook that invokes an HTTP request or an exec action (e.g. a shell script) to remove the lock from etcd.
2) Set the terminationGracePeriodSeconds option appropriately in the Pod YAML.
More details on preStop hook: container hook
Read: https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-terminating-with-grace
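A preStop hook along the lines of option 1 might look like the sketch below. The image name, etcd endpoint, and lock key are placeholders for illustration, not values from the question:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  terminationGracePeriodSeconds: 30     # give the hook time to finish
  containers:
  - name: app
    image: my-app:latest                # hypothetical image
    lifecycle:
      preStop:
        exec:
          # release the lock before the container is stopped;
          # endpoint and key are placeholders for your setup
          command: ["etcdctl", "--endpoints=http://etcd:2379", "del", "/locks/my-app"]
```

One caveat worth noting: preStop hooks run on graceful termination, not on a hard crash, so for crash safety the usual complement is to attach the key to an etcd lease with a TTL, letting it expire automatically when the client dies.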

Related

How to exec into a pod in Kubernetes from a shell script when the pod name changes with every deployment

The same pods get a different name with every deployment, so how can we reference them in a shell script to exec into the pods without changing the script on every deployment?
deployed 1st time
NAMESPACE NAME READY
default call-f6f8cfd84-5l6zv 3/3
deployed 2nd time
default call-7gcfrd45d-264df 3/3
I have tried multiple ways but it is not working.
You can exec into a StatefulSet, Deployment, or other Kubernetes workload as well; this execs into the first pod created by that workload. The following example execs into the first pod of the StatefulSet db-cluster in the demo namespace, in interactive mode, using a bash shell. I hope this helps!
kubectl exec -it -n demo sts/db-cluster -- bash
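If you need the pod name itself (for example with a plain Deployment), another option is to resolve it at runtime with a label selector. The label app=call below is an assumption about how the deployment is labelled:

```shell
# look up the first pod matching the label, then exec into it;
# the namespace and label are placeholders for your setup
POD=$(kubectl get pods -n default -l app=call -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it -n default "$POD" -- bash
```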

Google Kubernetes Engine (GKE) pod unable to release the Liquibase lock even after updating DATABASECHANGELOGLOCK

I have a pod which worked fine in the dev environment in GKE. Unfortunately, I deployed changes while the pod was running, and now the newly created pod is unable to acquire the lock. It raises the following error:
Could not acquire change log lock. Currently locked by main-78ddd4b475-t8c68
But the pod named in the error has already been deleted. So I followed some Stack Overflow answers and updated the DATABASECHANGELOGLOCK table with:
update DATABASECHANGELOGLOCK set locked=0, lockgranted=null, lockedby=null where id=1;
That did not work, so I then deleted the entries from DATABASECHANGELOG as well and restarted the pods, but I am still facing the same error.
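As an alternative to editing the table by hand, Liquibase ships a command that clears the lock records for you. The connection details below are placeholders for illustration:

```shell
# releaseLocks removes all lock records from DATABASECHANGELOGLOCK
liquibase --url=jdbc:postgresql://db-host:5432/mydb \
          --username=myuser --password=mypass \
          releaseLocks
```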

heroku: flush redis in release phase

I have a Node app running on Heroku, and I'm trying to use the release phase to flush my Redis cache on deploy.
I've added release: ./release-tasks.sh to my Procfile, but I'm having a hard time finding information about what tools are available to me in the release phase.
Currently my release-tasks.sh file looks like this:
redis-cli -u $REDIS_URL flushall
But it errors out with redis-cli not found, and it cannot find the heroku command either.
The release-phase docs say it is a good place to invalidate a cache; does anyone have any thoughts on how to do this?
Neither redis-cli nor the Heroku CLI is available on a dyno, so you can't use them here. Depending on the language your app is built in, you could write a task in that language that flushes the cache and then invoke that task from your shell script.
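For a Node app, one sketch of that approach (assuming the redis npm package is already a dependency of the app and REDIS_URL is set, as it would be with the Heroku Redis add-on):

```shell
#!/bin/sh
# release-tasks.sh — flush the cache through the app's own Redis client,
# since redis-cli is not installed on Heroku dynos
node -e "
const { createClient } = require('redis');            // redis v4 client
(async () => {
  const client = createClient({ url: process.env.REDIS_URL });
  await client.connect();
  await client.flushAll();                             // clear the whole cache
  await client.quit();
})();
"
```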

Log out Pod Trunk Session

Like the title reads, say I authenticate myself by running this:
pod trunk register myemail@email.com 'FirstName LastName' --description='my computer'
How could I log myself out? Also, is there any way I could log out a specific session that came up from running pod trunk me?
You can log out with the clean-sessions subcommand:
$ pod trunk me clean-sessions
By default this will clean up your sessions by removing expired and unverified sessions.
To remove all your sessions except the one you are currently using, specify the `--all` flag.

What is a good way to run a task whenever a Heroku app restarts?

Use case is to bust the cache.
What is a good way to run given code (or rake task) whenever a Ruby Heroku app is restarted (or deployed)?
As far as I know, there's no way to do this via the Heroku Platform API; it doesn't support running a task on restart.
What you can do (if you're fast, however!) is listen for a SIGTERM signal in your code (that's what Heroku sends to your application process when it attempts to restart it) and then fire off your script quickly.
Here's more information on SIGTERM on Heroku: https://devcenter.heroku.com/articles/dynos#graceful-shutdown-with-sigterm
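As a minimal sketch of the SIGTERM approach in a shell entrypoint (the cache-busting command is a placeholder, and the final kill line only simulates Heroku restarting the dyno):

```shell
#!/bin/sh
# run the cleanup task when Heroku sends SIGTERM on restart
bust_cache() {
  echo "busting cache before shutdown"   # placeholder for the real task
  exit 0
}
trap bust_cache TERM

# ... normal application work would happen here ...
kill -TERM $$   # simulate Heroku restarting the dyno
echo "not reached"
```

Keep in mind the answer's warning: Heroku only allows a short grace period after SIGTERM, so the task must finish quickly.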
If you're using some sort of CI, you can probably configure it there. Here's how to do it with CircleCI:
deployment:
  production:
    branch: production
    commands:
      - git push git@heroku.com:foo-bar-123.git $CIRCLE_SHA1:master
      - heroku run rake <your task> --app <your app name>
If you're not using a CI you can still whip together a script that first does the git push to Heroku and then executes your cache busting task through heroku run (the app's bin/ folder would be an obvious place to put it).
Note: you can also use heroku run:detached, which will send output to your logs instead of stdout.
You can use the "release" feature, which allows you to run any command before a new release is deployed: https://devcenter.heroku.com/articles/release-phase
Define the command that should be run in your Procfile.
release: rake db:migrate
From documentation:
The release command is run immediately after a release is created, but before the release is deployed to the app’s dyno formation. That means it will be run after an event that creates a new release.
