How to move a Kubernetes cluster to another AWS zone - amazon-ec2

We have a working Kubernetes cluster in one zone on AWS, and we want to move it to another zone.
The cluster was installed with kops.
We don't need zero downtime.

At first glance it could be done with the following steps (a sketch of the first step follows the list):
Create a new cluster in the new zone
Deploy the apps to the new cluster
Check that everything started successfully
Redirect traffic to the new cluster by switching the NAT/load balancer/DNS
Shut down/destroy the old cluster
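A minimal sketch of the first step with kops, assuming a hypothetical cluster name, state-store bucket, and target zone; the real instance types and counts would come from your existing cluster spec:
# Hypothetical names; substitute your own state store, domain, and zone.
export KOPS_STATE_STORE=s3://my-kops-state-store
kops create cluster \
  --name=new.k8s.example.com \
  --zones=eu-west-1a \
  --node-count=3 \
  --yes
# Point kubectl at the new cluster and redeploy the apps:
kubectl config use-context new.k8s.example.com
kubectl apply -f ./manifests/
Once the apps are verified on the new cluster, the DNS or load-balancer switch can happen at whatever pace the allowed downtime permits.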

It should be simple:
Stop all the Kubernetes services.
Move the EC2 instances to the target zones as mentioned here.
Start the EC2 instances in the target zones.
If an Elastic IP is used, there shouldn't be any difference to the end user except for the downtime.
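To illustrate the Elastic IP step: re-pointing the address at a replacement instance is a single association call; the IDs below are hypothetical:
# Hypothetical IDs; look yours up with aws ec2 describe-addresses.
aws ec2 associate-address \
  --allocation-id eipalloc-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0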

Related

How to change the cluster IP of a replication controller at runtime

I am using Kubernetes 1.0.3 with a master and 5 minion nodes deployed.
I have an Elasticsearch application that is deployed on 3 nodes using a replication controller, and a service is defined.
Now I have added a new minion node to the cluster and want to run the Elasticsearch container on the new node.
I am scaling my replication controller to 4 so that, based on the node label, the Elasticsearch container is deployed on the new node. Below is my issue; please let me know if there is any solution.
The cluster IP defined in the RC is wrong, as it is not the same as in the service.yaml file. Now when I scale the RC, the new node gets an ES container pointing to the wrong cluster IP, due to which the new node is not joining the ES cluster. Is there any way I can modify the cluster IP of the deployed RC so that when I scale it, the image is deployed on the new node with the correct cluster IP?
Since I am using an old version, I don't have the kubectl edit command, and I tried changing it using the kubectl patch command, but the IP didn't change.
The problem is that I need to do this on a production cluster, so I can't delete the existing pods; my only option is to change the cluster IP of the deployed RC and then scale, so that the new pod takes the new IP and the image starts accordingly.
Please let me know if there is any way I can do this.
Kubernetes creates that (virtual) ClusterIP for every service.
Whatever you defined in your service definition (which you should have posted along with your question) is, if I recall correctly, ignored by Kubernetes.
I don't quite understand the issue with scaling, but basically you want to point at the service name (resolved by Kubernetes's internal DNS) rather than at the ClusterIP.
E.g., http://myelasticsearchservice instead of http://1.2.3.4
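A minimal sketch of the difference, assuming a hypothetical service named myelasticsearchservice exposing port 9200 and a cluster with the DNS add-on running:
# From inside any pod in the same namespace, resolve the service by name:
curl http://myelasticsearchservice:9200
# rather than hard-coding a ClusterIP, which can differ across redeploys:
# curl http://1.2.3.4:9200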

Lifecycle of an EC2 Container Service Instance

In my project I have a constraint where all of the traffic received will go to a certain IP. The Elastic IP feature works well for this.
My question is: considering we are using Amazon's Docker service (ECS) without autoscaling (so instances/tasks will be scaled manually), can we treat the instances created by the ECS service as we would treat normal on-demand instances? As in, they won't be terminated/stopped unless explicitly done by a user (or an API call or whatever).
As mentioned in the Scaling a Cluster documentation, if you created your cluster after November 24th, 2015 using either the first-run wizard or the Create Cluster wizard, then an Auto Scaling group will have been created to manage the instances backing your cluster.
For the most part, the answer to your question is yes: the instances won't normally get replaced. It is important to note, however, that because the cluster is backed by an Auto Scaling group, Auto Scaling may replace unhealthy instances for you. If an instance fails its EC2 health checks for some reason, it will be marked as unhealthy and scheduled for replacement.
By default, my understanding is that there are no CloudWatch alarms or scaling policies affecting this Auto Scaling group, so an instance should only get replaced when it becomes unhealthy.
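One way to check this for your own cluster, sketched with the AWS CLI and a hypothetical group name, is to inspect the Auto Scaling group and its attached policies directly:
# Hypothetical group name; the wizard-created group usually starts with "EC2ContainerService-".
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names EC2ContainerService-mycluster-EcsInstanceAsg
# List any scaling policies attached to the group (expect none by default):
aws autoscaling describe-policies \
  --auto-scaling-group-name EC2ContainerService-mycluster-EcsInstanceAsg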

Is there a way to shut down and start an AWS Redshift cluster with the CLI?

I'm just bringing up a Redshift cluster to start a development effort, and I usually use a cron service to bring down all of my development resources outside of business hours to save money.
As I browse the AWS CLI help:
aws redshift help
I don't see any options to stop or shut down my test cluster like I have in the console.
If there is no way to do this, does anybody know why they don't offer this functionality? These instances are pretty spendy to keep online, and I don't want to have to go in and shut them down by hand every night.
It sounds like you are looking for:
delete-cluster, explicitly requesting a final snapshot
restore-from-cluster-snapshot, restoring the snapshot taken above
From the aws-cli aws redshift delete-cluster documentation:
If you want to shut down the cluster and retain it for future use, set SkipFinalClusterSnapshot to "false" and specify a name for FinalClusterSnapshotIdentifier. You can later restore this snapshot to resume using the cluster. If a final cluster snapshot is requested, the status of the cluster will be "final-snapshot" while the snapshot is being taken, then it's "deleting" once Amazon Redshift begins deleting the cluster.
Example usage, again from the documentation:
# When shutting down at night...
aws redshift delete-cluster --cluster-identifier mycluster --final-cluster-snapshot-identifier my-snapshot-id
# When starting up in the morning...
aws redshift restore-from-cluster-snapshot --cluster-identifier mycluster --snapshot-identifier my-snapshot-id
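Since the question mentions cron, a minimal sketch of crontab entries pairing the two commands (hypothetical schedule and identifiers; note that snapshot identifiers must be unique, so in practice you would suffix them with the date):
# Shut down at 7 PM on weekdays, taking a final snapshot first:
0 19 * * 1-5 aws redshift delete-cluster --cluster-identifier mycluster --final-cluster-snapshot-identifier my-snapshot-id
# Restore from that snapshot at 7 AM on weekdays:
0 7 * * 1-5 aws redshift restore-from-cluster-snapshot --cluster-identifier mycluster --snapshot-identifier my-snapshot-id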

Clustering in Amazon ec2 using Starcluster

Is it possible to deploy instances from an AMI to multiple zones in Amazon EC2 using StarCluster?
Can anyone give me feedback on this, please?
I need to deploy instances to various zones in Amazon EC2.
Yes, you can. The addnode command has a flag just for that:
-z ZONE, --availability-zone=ZONE
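For example (hypothetical cluster tag and zone):
# Add a node to an existing cluster in a specific availability zone:
starcluster addnode -z us-east-1b mycluster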
In the load balancer, however, you can't. I have a fork of StarCluster that enables it, though: when you launch it, you specify --ignore-grp. It's only useful when you work with spot instances, as it will bid in the cheapest zone.
This fork also supports multiple zones within a VPC, which is also handy.

Amazon RDS Master-Slave Relationship between EC2 instances with load balancing activated

We're planning to move our Tomcat/MySQL app onto the Amazon cloud, employing 2 EC2 instances (inst_1 & inst_2) running in different availability zones, whereby inst_1 will contain the master RDS db and inst_2 the slave RDS db.
If we employ elastic load balancing to balance traffic between the two instances, will traffic directed to inst_2 that includes insert/update/delete db transactions first update the master RDS db in inst_1, followed by a synchronous update of the slave in inst_2, thereby ensuring that the two RDS instances are always synchronized?
Amazon's published info (whitepapers) suggests as much, but doesn't explicitly state it. If not, how does one ensure that the two RDS instances remain synchronized?
Additional note: we're planning to employ Amazon's Elastic Beanstalk. Thanks!
You have to take a few things into consideration:
AWS RDS instances are simply managed EC2 instances that run a MySQL server.
If you add a slave (I think Amazon calls them read replicas), it is a read-only slave.
Amazon doesn't automatically manage the distribution of writing queries to the master server.
Replication will ensure that your read slave is always up to date automatically (with a minimal delay that increases with write load on the master).
This behavior is MySQL-specific.
This means that you have to delegate manipulating queries to the master exclusively.
This can be done either by your application or by a MySQL proxy running on an extra machine.
The proxy is then the only interface your application servers talk to. It can balance reads between your RDS instances and direct any manipulating query to the master instance.
When RDS is used in multi-AZ mode, you have no access to the secondary instance. There is only ever one instance visible to you, so most of your question doesn't apply. In case of failover, the DNS address you are given will start resolving to a different IP. Amazon doesn't disclose how the two instances are kept in sync.
If instead of a multi-AZ instance you use a single-AZ instance plus a replica, then it is up to you to direct queries appropriately; any attempt to alter data on the replica will fail. Since this is just standard MySQL replication, the replica can lag behind the master (in particular, with current versions of MySQL the replica only runs a single thread).
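A minimal sketch of the single-AZ-plus-replica case, with hypothetical endpoint names; the application sends writes to the master endpoint and reads to the replica endpoint, which will reject writes:
# Writes go to the master endpoint (hypothetical hostnames):
mysql -h mydb-master.abc123.us-east-1.rds.amazonaws.com -u app -p \
  -e "INSERT INTO orders (id) VALUES (42);"
# Reads can go to the read replica, which may lag slightly behind the master:
mysql -h mydb-replica.abc123.us-east-1.rds.amazonaws.com -u app -p \
  -e "SELECT COUNT(*) FROM orders;"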
