Consul backend - VM snapshots vs Consul snapshots

I have installed Vault and Consul as a cluster across 5 VMs. The installation went smoothly, but I still have a question I can't find an answer to.
I can make a Consul snapshot using consul snapshot save backup.snap and export it.
Where is Consul's K/V data stored (not the snapshots)? Is it in a specific path on the system?
My question is:
If I take snapshots of my VMs, do I still need to snapshot Consul, or is Consul's data already captured in the VM snapshots?
Thanks

That is the same as relying on a VM snapshot of a database VM to back up the database - don't.
Use the consul snapshot feature, because that at least guarantees a consistent snapshot that can actually be restored.
If you insist on performing a VM snapshot, at least create a hook that runs consul snapshot save -stale first; this saves a backup of the Consul database on that server.
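For reference, a minimal pre-snapshot hook might look like the sketch below (the backup directory, retention count and exact wiring into your VM snapshot tooling are assumptions, not part of the original answer):
#!/usr/bin/env bash
# Hypothetical hook run just before the VM snapshot is taken.
set -euo pipefail
BACKUP_DIR=/var/backups/consul   # assumed location, not mandated by Consul
mkdir -p "$BACKUP_DIR"
# -stale allows the snapshot to be taken from a non-leader server as well
consul snapshot save -stale "$BACKUP_DIR/consul-$(date +%Y%m%d-%H%M%S).snap"
# Simple retention: keep only the 7 most recent snapshots.
ls -1t "$BACKUP_DIR"/consul-*.snap | tail -n +8 | xargs -r rm --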
Further reading: Consul Learn - Backup and restore

Related

Run several microservices Docker images together on local dev with Minikube

I have several microservices, around 20 or so, that I want to run and check in my local development environment. The microservices are Spring Boot services built with Maven. What I want to know is: when I have to run them on my AWS server, can I run all these containers individually? They might have a shared database, so would that be an issue I might face? Or is it possible to run all these services together in one single Docker image?
Also, I have to configure this with Kubernetes, so I have set up Minikube in my local dev environment. It would be helpful to know if there are any considerations to keep in mind when running around 20 services on Minikube, or even on a full Kubernetes environment.
PS: I know this is a basic question, but I don't have much experience with DevOps.
Ideally you should have a different Docker image for each of the microservices and create a Kubernetes Deployment for each of them. This keeps the scaling of individual microservices decoupled from each other. Also, communication between microservices should go via a Kubernetes Service. This makes communication stable, because Service IPs and FQDNs don't change even as pods are created, deleted, and scaled up and down.
Just be cautious about how much memory and CPU the microservices will need, and whether the machine running Minikube has that many resources. If the available memory and CPU of a Kubernetes node are not enough to schedule a pod, the pod will be stuck in the Pending state.
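As a rough illustration of the one-Deployment-plus-one-Service-per-microservice layout, created imperatively (the image name, port and resource numbers below are assumptions; in practice you would template this per service with Helm or Kustomize):
# Hypothetical example for one service, "app1"; repeat per microservice.
kubectl create deployment app1 --image=registry.example.com/app1:1.0.0
# Give the scheduler accurate numbers so pods don't sit in Pending on Minikube.
kubectl set resources deployment/app1 --requests=cpu=250m,memory=512Mi --limits=cpu=500m,memory=1Gi
# Stable in-cluster address: other services can now call http://app1:8080
kubectl expose deployment app1 --port=8080 --target-port=8080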
As you have so many microservices, I suggest you create a Kubernetes cluster on AWS of 3-4 VMs (more info here). Then try to deploy all your microservices on that. For that you need to build the container images individually for each service and create a Kubernetes Deployment for each service.
"can I run all these containers individually? They might have a shared database, so would that be an issue I might face?"
As you have a shared database, I suggest you run your database server on a separate host and connect to it remotely from your services. That way you can share the database between your microservices.
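One way to give every microservice the same stable in-cluster name for that shared database is an ExternalName Service; a sketch, assuming the database host is db.example.internal (the name, port and JDBC details are assumptions):
# Hypothetical: map an in-cluster DNS name onto the external database host.
kubectl create service externalname shared-db --external-name=db.example.internal
# Each Spring Boot service can then point at the same host, e.g.
#   spring.datasource.url=jdbc:postgresql://shared-db:5432/appdb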

How to write a daily backup job for Elasticsearch in a production environment, using Docker images and Kubernetes as the containerized platform

I am new to Elasticsearch and we want to write a daily backup job for our production environment. In production we are using OpenShift and Kubernetes, and Elasticsearch is deployed as a Docker container in the Kubernetes environment. I have some idea about Elasticsearch snapshot strategies, but how do I implement a daily Elasticsearch backup job in a containerized environment?
As per the documentation and Tim Wong's post, you can use "Scheduled Backup using Curator" for this task.
More information on how to combine snapshots with a CronJob is available here and here.
Please share your results and findings.
You need to write a CronJob, and in that CronJob use the Elasticsearch Service name as the host; it will take the backup (from the Kubernetes point of view).
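A sketch of such a CronJob created imperatively (the Service name elasticsearch, port 9200, the snapshot repository daily_backups and the curl image are all assumptions; the repository must already be registered):
# Hypothetical daily snapshot at 02:00, addressed via the Elasticsearch Service name.
kubectl create cronjob es-daily-snapshot --image=curlimages/curl:8.8.0 --schedule="0 2 * * *" -- /bin/sh -c 'curl -s -XPUT "http://elasticsearch:9200/_snapshot/daily_backups/snap-$(date +%Y%m%d)?wait_for_completion=true"'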

How to configure Elasticsearch snapshots using persistent volumes as the "shared file system repository" in Kubernetes (on GCP)?

I have registered the snapshot repository and have been able to create snapshots of the cluster for a pod. I used a mounted persistent volume as the "shared file system repository" backup storage.
However, in a production cluster with multiple nodes, the shared file system has to be mounted on all the data and master nodes.
Hence I would have to mount the persistent volume on both the data nodes and the master nodes.
But the persistent volumes available to me don't support the "ReadWriteMany" access mode, so I can't mount one volume on all the nodes and am therefore unable to register the snapshot repository. Is there a way to use persistent volumes as the backup snapshot storage for a production Elasticsearch cluster on Google Kubernetes Engine?
Reading this, I guess that you are using a cluster you created on your own and not GKE, since on GKE you cannot install agents on the master nodes and the workers get recreated whenever there is a node-pool update. Please make this clear, since it can be misleading.
There are multiple volume types that allow multiple readers and writers (ReadWriteMany), such as CephFS, GlusterFS and NFS. You can take a look at the different volume types on this page.
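Once such a ReadWriteMany-capable volume (for example NFS-backed) is mounted at the same path on every data and master node, and that path is listed in path.repo in elasticsearch.yml, registering the shared file system repository is a single call; a sketch, with the Service name and mount path assumed:
# Hypothetical repository registration against the in-cluster Service.
curl -XPUT "http://elasticsearch:9200/_snapshot/fs_backup" -H 'Content-Type: application/json' -d '{"type": "fs", "settings": {"location": "/usr/share/elasticsearch/backup"}}'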

Running an Oracle database as a Docker container in Kubernetes

I am new to Kubernetes and am in the process of learning it by installing Minikube on my local desktop. Maybe I am simply unaware of something, but I wanted to ask the experts this question.
Here is what I am trying to achieve. I have 3 Docker containers created in my local environment: 2 Java web application containers (app1, app2) and 1 Oracle database container (oracleDB).
The app1 application depends on oracleDB. I installed Minikube in my local environment to try out Kubernetes.
I was able to deploy app1, app2 and oracleDB to Minikube, bring up the applications, and access them using URLs like http://local_minikube_ip:31213/app1
After a few hours my app stopped responding, so I had to restart Minikube. When I restarted Minikube, I found that I had lost the database that had been imported into the Oracle container. I had to re-import the database, and also ssh in and start the app1 and app2 containers again.
So I want to know how everyone handles this scenario. Is there any way I can keep the data in the Oracle database container across Kubernetes restarts?
Can someone help me with this, please?
For data persistence you need to define volumes in your pods; this is usually done in conjunction with PersistentVolumeClaims and PersistentVolumes.
Take a look at https://kubernetes.io/docs/concepts/storage/volumes/ and https://kubernetes.io/docs/concepts/storage/persistent-volumes/ to get a better picture of how to achieve that.
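A minimal sketch of what that looks like for the oracleDB container (the claim name, size and mount path are assumptions; the exact data directory depends on the Oracle image in use):
# Hypothetical PersistentVolumeClaim; on Minikube the default StorageClass provisions a hostPath volume for it.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oracledb-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF
# In the oracleDB Deployment, reference the claim under .spec.template.spec:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: oracledb-data
# and mount it in the container at the database's data path, e.g.:
#   volumeMounts:
#   - name: data
#     mountPath: /opt/oracle/oradata   # path depends on the Oracle image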

Is there a way to shut down and start an AWS Redshift cluster with the CLI?

I'm just bringing up a Redshift cluster to start a development effort, and I usually use a cron service to bring down all of my development resources outside of business hours to save money.
As I browse the AWS CLI help:
aws redshift help
I don't see any option to stop or shut down my test cluster like I have in the console.
If there is no way to do this, does anybody know why they don't offer this functionality? These instances are pretty expensive to keep online, and I don't want to have to go in and shut them down by hand every night.
It sounds like you are looking for:
delete-cluster, explicitly specifying a final snapshot
restore-from-cluster-snapshot, restoring the snapshot taken above
From the aws-cli aws redshift delete-cluster documentation:
If you want to shut down the cluster and retain it for future use, set SkipFinalClusterSnapshot to "false" and specify a name for FinalClusterSnapshotIdentifier. You can later restore this snapshot to resume using the cluster. If a final cluster snapshot is requested, the status of the cluster will be "final-snapshot" while the snapshot is being taken, then it's "deleting" once Amazon Redshift begins deleting the cluster.
Example usage, again from the documentation:
# When shutting down at night...
aws redshift delete-cluster --cluster-identifier mycluster --final-cluster-snapshot-identifier my-snapshot-id
# When starting up in the morning...
aws redshift restore-from-cluster-snapshot --cluster-identifier mycluster --snapshot-identifier my-snapshot-id
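To tie this back to the cron-based schedule mentioned in the question, a rough sketch of a pair of scripts (cluster name, file path and naming scheme are assumptions; note that manual snapshots keep accruing storage charges until you remove them with delete-cluster-snapshot):
# stop-redshift.sh - run in the evening: delete with a dated final snapshot and record its name
SNAP="mycluster-$(date +%Y%m%d-%H%M)"
aws redshift delete-cluster --cluster-identifier mycluster --final-cluster-snapshot-identifier "$SNAP"
echo "$SNAP" > /var/local/redshift-last-snapshot
# start-redshift.sh - run in the morning: restore from whatever snapshot was taken last
aws redshift restore-from-cluster-snapshot --cluster-identifier mycluster --snapshot-identifier "$(cat /var/local/redshift-last-snapshot)"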
