Two Kubernetes pods accessing the same database - spring

My Spring Boot application is scheduled to run at 01:00 UTC each day to collect some data and store it in the database. We are using Kubernetes, and we have two pods accessing the same database. The database is hosted elsewhere, and both pods use the same connection string.
The problem is that both pods wake up at 01:00 UTC and insert duplicate entries into the database. How can I ensure that only one pod runs the job against the database? Is this kind of application simply not a good fit for a Kubernetes deployment?

I know this is old, but for anybody else, look into ShedLock. It handles locking across distributed nodes and is pretty easy to implement.
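For reference, a minimal sketch of what that can look like with Spring Boot's scheduler and ShedLock's JDBC lock provider, assuming the shared database holds ShedLock's standard shedlock table; the lock name, timings, and class names below are illustrative, not from the original question:

    import javax.sql.DataSource;
    import net.javacrumbs.shedlock.core.LockProvider;
    import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider;
    import net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock;
    import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jdbc.core.JdbcTemplate;
    import org.springframework.scheduling.annotation.EnableScheduling;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Configuration
    @EnableScheduling
    @EnableSchedulerLock(defaultLockAtMostFor = "30m")
    class SchedulingConfig {

        // ShedLock keeps its lock rows in a "shedlock" table in the shared database,
        // so whichever pod grabs the row first is the only one that runs the job.
        @Bean
        LockProvider lockProvider(DataSource dataSource) {
            return new JdbcTemplateLockProvider(new JdbcTemplate(dataSource));
        }
    }

    @Component
    class DataCollectionJob {

        // Both pods fire at 01:00 UTC, but only the lock holder executes the body.
        @Scheduled(cron = "0 0 1 * * *", zone = "UTC")
        @SchedulerLock(name = "dailyDataCollection", lockAtMostFor = "30m", lockAtLeastFor = "5m")
        public void collectData() {
            // ... existing data collection logic goes here ...
        }
    }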

Related

Kubernetes custom deployment

Our horizontal scaling is currently suffering because of Liquibase.
We would want our deployments to always deploy one pod which runs Liquibase (-Dspring.liquibase.enabled=true), and then all subsequent pods to not run it (-Dspring.liquibase.enabled=false).
Is there anything that Kubernetes offers which could do this out of the box?
I'm unfamiliar with Liquibase and unclear how the non-first Pods use it, but you may be able to use a lock to control access. A Pod that acquires the lock sets the property to true; a Pod that is unable to acquire the lock sets it to false.
One challenge will be ensuring that the lock is released if the first Pod terminates, and understanding the consequences for the other Pods. Is an existing Pod promoted?
Even though Kubernetes leverages etcd for its own distributed locking purposes, users are encouraged to run separate etcd instances if they need locks. Since you have to choose, you may as well choose what you prefer, e.g. Redis or ZooKeeper.
You could use an init Container or sidecar for the locking mechanism and a shared volume to record its state.
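As a rough illustration of the lock idea (not something Kubernetes or Liquibase offers out of the box), and assuming a MySQL-compatible database, the application's entry point could take a short advisory lock before Spring Boot starts and enable Liquibase only in the instance that wins it. The lock name, environment variables, and MyApplication class are placeholders; Liquibase's own DATABASECHANGELOGLOCK table still guards against truly concurrent runs:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    class MyApplication { }

    public class Launcher {

        public static void main(String[] args) throws Exception {
            boolean runLiquibase;
            // GET_LOCK returns 1 for the first caller and 0 for anyone else asking while
            // the lock is held, so pods starting at the same time elect a single migrator.
            try (Connection conn = DriverManager.getConnection(
                         System.getenv("DB_URL"), System.getenv("DB_USER"), System.getenv("DB_PASS"));
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT GET_LOCK('liquibase-migrator', 0)")) {
                runLiquibase = rs.next() && rs.getInt(1) == 1;
            }
            // The advisory lock is released when the connection closes; Liquibase's own
            // DATABASECHANGELOGLOCK still prevents truly concurrent runs, so this mainly
            // avoids redundant migration attempts from the other pods.
            System.setProperty("spring.liquibase.enabled", Boolean.toString(runLiquibase));
            SpringApplication.run(MyApplication.class, args);
        }
    }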
It feels as though Liquibase should be a distinct Deployment exposed as a Service that all Pods access.
Have you contacted Liquibase to see what it recommends for Kubernetes deployments?

Upgrade from RDS Aurora 1 (MySQL 5.6) to Aurora 2 (MySQL 5.7) without downtime on a very active database

Is there a way to upgrade from Aurora 1 (MySQL 5.6) to Aurora 2 (MySQL 5.7) without downtime on an active database? This seems like a simple task given we should be able to simply do major version upgrades from either the CLI or the Console, but that is not the case.
We tried:
Creating a snapshot of the database
Creating a new cluster using Aurora 2 (MySQL 5.7) from the snapshot
Configuring replication to the new cluster from the primary cluster
However, because you can't run commands that require SUPER privileges in Aurora, you're not able to stop transactions long enough to get a reliable binlog position from the master, which results in a ton of SQL errors that are impossible to skip on an active database.
Also, because Aurora does not use binlog replication to its read replicas, I can't simply stop replication on a read replica and take the position from there.
I have seen this semi-related question, but it certainly requires downtime: How to upgrade AWS RDS Aurora MySQL 5.6 to 5.7
UPDATE: AWS just announced an in-place upgrade option for 5.6 > 5.7:
https://aws.amazon.com/about-aws/whats-new/2021/01/amazon-aurora-supports-in-place-upgrades-mysql-5-6-to-5-7/
It's as simple as choosing Modify and selecting a 2.x version. :)
I tested this Aurora MySQL 5.6 > 5.7 upgrade on a 25 GB database that was many minor versions behind; it took 10 minutes, with about 8 minutes of downtime. Not zero downtime, but a very easy option, and it can be scheduled in AWS to happen automatically during off-peak times (the maintenance window).
Additionally, consider RDS Proxy to reduce perceived downtime. During short windows when the database is unavailable (e.g. a reboot for minor updates), the proxy holds connections open, so instead of the database being completely unavailable, clients just see a brief delay/latency.
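If you prefer to script the upgrade rather than click through the console, the same Modify operation can be issued programmatically. A sketch using the AWS SDK for Java v2, where the cluster identifier and target engine version are examples only (check which 2.x versions are offered in your region):

    import software.amazon.awssdk.services.rds.RdsClient;
    import software.amazon.awssdk.services.rds.model.ModifyDbClusterRequest;
    import software.amazon.awssdk.services.rds.model.ModifyDbClusterResponse;

    public class AuroraInPlaceUpgrade {

        public static void main(String[] args) {
            try (RdsClient rds = RdsClient.create()) {
                // Ask RDS to move the whole cluster to an Aurora 2.x (MySQL 5.7) engine version.
                ModifyDbClusterRequest request = ModifyDbClusterRequest.builder()
                        .dbClusterIdentifier("my-aurora-cluster")      // example identifier
                        .engineVersion("5.7.mysql_aurora.2.07.2")      // example 2.x target version
                        .allowMajorVersionUpgrade(true)                // required for 5.6 -> 5.7
                        .applyImmediately(true)                        // or wait for the maintenance window
                        .build();
                ModifyDbClusterResponse response = rds.modifyDBCluster(request);
                System.out.println("Cluster status: " + response.dbCluster().status());
            }
        }
    }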
Our need was to upgrade AWS RDS Aurora MySQL from 5.6 to 5.7 without causing any downtime in production. Being a SaaS solution, we could not afford any downtime.
Background
We have a distributed architecture based on microservices running in AWS Fargate and AWS Lambda. AWS RDS Aurora MySQL is used for data persistence. Other services are in use as well, but they are not relevant to this use case.
Approach
After deliberating over an in-place upgrade with a declared downtime and maintenance window, we realized that a zero-downtime upgrade was what we needed; otherwise we would have created a processing backlog for ourselves.
The high-level approach was:
Create a new AWS RDS cluster with the required version and copy the data from the existing RDS cluster to the new cluster
Set up AWS DMS (Database Migration Service) replication between the two clusters
Once the initial load is complete and ongoing replication is in place, switch the application to point to the new DB. In our case, the microservices running in AWS Fargate had to be updated with the new endpoint, which took care of draining the old connections and using the new ones.
For the complete post, please check out:
https://bharatnainani1997.medium.com/aws-rds-major-version-upgrade-with-zero-downtime-5-6-to-5-7-b0aff1ea1f4
When you create a new Aurora cluster from a snapshot, you get a binlog pointer in the error log from the point at which the snapshot was taken. You can use that to set up replication from the old cluster to the new cluster.
I've followed a similar process to what you've described in your question (multiple times in fact) and was able to keep the actual downtime in the low seconds range.
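For anyone scripting that step: because Aurora doesn't grant the privileges needed for a plain CHANGE MASTER TO, replication is configured on the new 5.7 cluster through the RDS stored procedures, using the binlog file and position reported in the error log after the snapshot restore. A hedged JDBC sketch with placeholder endpoints, credentials, and binlog coordinates:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ConfigureAuroraReplication {

        public static void main(String[] args) throws Exception {
            // Connect to the NEW (Aurora 2 / MySQL 5.7) cluster endpoint (placeholders below).
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://new-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com:3306",
                         "admin", "secret");
                 Statement stmt = conn.createStatement()) {

                // Point the new cluster at the old one, using the binlog file/position that
                // the error log reported when the cluster was restored from the snapshot.
                stmt.execute(
                    "CALL mysql.rds_set_external_master("
                    + "'old-cluster.cluster-yyyy.us-east-1.rds.amazonaws.com', 3306, "
                    + "'repl_user', 'repl_password', "
                    + "'mysql-bin-changelog.000123', 456789, 0)");

                // Start replicating; once lag is near zero, switch the application over and
                // then CALL mysql.rds_stop_replication / mysql.rds_reset_external_master.
                stmt.execute("CALL mysql.rds_start_replication");
            }
        }
    }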

Can I use a single elasticsearch/kibana for multiple k8 clusters?

Do you know of any gotchas or requirements that would prevent using a single ES/Kibana as a target for Fluentd in multiple k8s clusters?
We are rolling out a new Kubernetes model. I have a requirement to run multiple Kubernetes clusters, let's say 4-6. Even though the workload is split across multiple k8s clusters, I do not have a requirement to split the logging, and I believe it would be easier to find the logs for pods in all clusters in one centralized location. It would also mean less maintenance for Kibana/Elasticsearch.
Using EFK for Kubernetes, can I point Fluentd from multiple k8s clusters at a single Elasticsearch/Kibana? I don't think I'm the first one with this thought; however, I haven't been able to find any discussion of doing this. I found lots of discussions of setting up EFK, but all of them only cover a single cluster shipping to its own Elasticsearch/Kibana.
Has anyone else gone down the path of using a single ES/Kibana to serve logs from multiple Kubernetes clusters? We'll plunge ahead with testing it out, but I'm curious whether anyone else has already gone down this road.
I don't think you need to create an Elasticsearch instance for each Kubernetes cluster; you can run one main Elasticsearch instance and index all the logs into it.
But even if you don't run an Elasticsearch instance per Kubernetes cluster, I think you should have a DR plan. For example, instead of shipping the logs of all pods to Elasticsearch directly, you could move them to Kafka first and then fan them out to two Elasticsearch clusters (see the sketch below).
It also depends a lot on the use case: if each Kubernetes cluster is in a different region and you need the pod logs with low latency (<1 s), then a single Elasticsearch instance may not be the right answer.
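A minimal sketch of that Kafka buffering idea: a shipper keys each log record by its source cluster and publishes to a single topic, which the two Elasticsearch clusters can then each consume independently. The brokers, topic name, and cluster tag below are made up for illustration:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class LogShipper {

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-1:9092,kafka-2:9092");   // placeholder brokers
            props.put("key.serializer", StringSerializer.class.getName());
            props.put("value.serializer", StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Key each record by its source cluster so the logs stay distinguishable;
                // both Elasticsearch clusters consume the same "pod-logs" topic downstream.
                String cluster = "k8s-prod-eu-1";                          // placeholder cluster tag
                String logJson = "{\"cluster\":\"" + cluster + "\",\"pod\":\"my-app-abc\",\"log\":\"sample line\"}";
                producer.send(new ProducerRecord<>("pod-logs", cluster, logJson));
            }
        }
    }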
Based on [1] we can read:
Fluentd collects logs from pods running on cluster nodes, then routes
them to a centralized Elasticsearch.
Then Elasticsearch ingests these logs from Fluentd and stores them in a central location. It is also used to efficiently search text files.
Kibana is the UI; the user can visualize the collected logs and metrics and create custom dashboards based on queries.
There are several ways to solve your dilemma:
a) Create a centralized dashboard and use each cluster's Elasticsearch as the backend, so you can see all your clusters' logs in one place.
b) Create one Elasticsearch cluster and join each Elasticsearch node into it. This is NOT the best option, since you will duplicate your data several times, you will need to manage each index's shards, and you will have to deal with the split-brain problem, but it's great for data resiliency.
c) Use another solution like an APM (New Relic, Instana, etc.) to fully centralize your logs in one place.
[1] https://techbeacon.com/enterprise-it/9-top-open-source-tools-monitoring-kubernetes
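Whichever option you choose, the practical gotcha with a single Elasticsearch is making sure every log record carries a cluster identifier (most Fluentd setups let you add one as a field or index prefix) so logs from the different clusters stay distinguishable in Kibana and in queries. A hedged sketch using the high-level REST client, where the host, index pattern, and cluster_name field are assumptions:

    import org.apache.http.HttpHost;
    import org.elasticsearch.action.search.SearchRequest;
    import org.elasticsearch.action.search.SearchResponse;
    import org.elasticsearch.client.RequestOptions;
    import org.elasticsearch.client.RestClient;
    import org.elasticsearch.client.RestHighLevelClient;
    import org.elasticsearch.index.query.QueryBuilders;
    import org.elasticsearch.search.SearchHit;
    import org.elasticsearch.search.builder.SearchSourceBuilder;

    public class MultiClusterLogSearch {

        public static void main(String[] args) throws Exception {
            // Single shared Elasticsearch; every cluster's Fluentd writes into fluentd-* indices
            // and stamps each record with a cluster_name field (both names are assumptions).
            try (RestHighLevelClient client = new RestHighLevelClient(
                    RestClient.builder(new HttpHost("elasticsearch.example.com", 9200, "http")))) {

                SearchSourceBuilder source = new SearchSourceBuilder()
                        .query(QueryBuilders.boolQuery()
                                .filter(QueryBuilders.termQuery("cluster_name", "prod-us-east-1"))
                                .must(QueryBuilders.matchQuery("log", "error")))
                        .size(20);

                SearchResponse response = client.search(
                        new SearchRequest("fluentd-*").source(source), RequestOptions.DEFAULT);

                for (SearchHit hit : response.getHits()) {
                    System.out.println(hit.getSourceAsString());
                }
            }
        }
    }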

Running multiple elasticsearch instances

I need to set up 2 Elasticsearch instances:
one for Kibana logs (my separate application will throw logs at it)
one for search in my production application
My plan is to create separate folders with Elasticsearch in each of them. They don't talk to each other, which means they are separate databases, and if one goes down, the other still runs. Is this a good solution, or should I use only one Elasticsearch folder with multiple elasticsearch.yml configuration files? What is the best practice for multiple Elasticsearch instances?
The best practice is to NOT run two Elasticsearch instances on the SAME server.
Your production search will probably need a lot of RAM to work fast and stay responsive. You don't want your logging system to interfere with that.

BigCouch IDs and Backup data on EC2

I have a few questions about BigCouch that I'm interested in getting answered before I start using it.
Do I need to choose my shard key carefully, or can I just use an auto-generated GUID? I'm starting with a single server and 1 replica, but I want to be ready for when I need to add another shard.
Is there any GUI for managing the cluster, like Couchbase has, something similar for administering the DB?
How can I back up the data when hosting BigCouch on EC2 (i.e. snapshots)?
Thanks
Since you have not started to use BigCouch yet, and it looks like you need some features that are available out of the box in Couchbase (auto-sharding, administration console ...),
why not go with Couchbase?
