Adding another Host to a Cluster in Deis

Is there a procedure for adding another host into an existing cluster? I'm using EC2.
I'm thinking it could be done by using CloudFormation again:
aws cloudformation create-stack \
--template-body "$(<deis.template)" \
--stack-name deis-2 \
--parameters "$(<cloudformation.json)"
That would need a new stack name, and it would add the new host.
Or should I just launch a new instance with the CLI?
aws ec2 run-instances --image-id ami-cfe125b8 --count 1 --instance-type m3.medium --key-name deis --security-group-ids sg-b7edc3c0 sg-c9edc3be
I'm guessing the host should be in both the coreos and deis security groups? And how does fleet learn about the new host?
Then, do we need to update the hosts field?
deis clusters:info <cluster>
deis clusters:update <cluster> hosts=x,y,z
Anything else necessary? Is there another, easier way of doing it?

Since all we're dealing with here is CoreOS, it's completely possible to add new nodes to the cluster. The only requirement is that you apply the same cloud-config to the new instance that you applied to every other node in the cluster. See https://coreos.com/docs/running-coreos/cloud-providers/ec2/ for more info.
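For example, a minimal sketch of launching the extra node with the AMI and security groups from the question (the cloud-config file name here is an assumption; it must carry the same etcd discovery URL as the existing nodes, which is how fleet learns about the new host):
aws ec2 run-instances \
  --image-id ami-cfe125b8 \
  --count 1 \
  --instance-type m3.medium \
  --key-name deis \
  --security-group-ids sg-b7edc3c0 sg-c9edc3be \
  --user-data file://cloud-config.yaml  # hypothetical file holding the shared cloud-config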

Related

Wait and Loop condition in Bash Script

I have an AWS CLI script that takes an AMI of an instance, creates a launch configuration, updates the Auto Scaling group with the latest launch configuration, and performs an instance refresh.
I don't want to perform the instance refresh until the AMI is in the "available" state, so I am thinking of adding a condition that checks every 10 seconds.
Here is my existing script file:
...
#Create AMI
IMAGE=$(aws ec2 create-image --instance-id ${INST_ID} --name NEW-IMAGE-${TIME} --no-reboot --output text)
echo "Created image ${IMAGE} of instance ${INST_ID}"
#Create new Launch Configuration
aws autoscaling create-launch-configuration --launch-configuration-name ${NEW_LC} --image-id ${IMAGE} --instance-type t2.micro --key-name forclient --associate-public-ip-address --security-groups sg-01be135cb14a00960
echo "Created new Launch Configuration ${NEW_LC}"
#Update Auto Scaling Group to use new Launch Configuration
aws autoscaling update-auto-scaling-group --auto-scaling-group-name ${ASG_NAME} --launch-configuration-name ${NEW_LC}
echo "ASG ${ASG_NAME} now uses Launch Configuration ${NEW_LC}"
#Trigger an instance refresh so running instances pick up the new AMI
aws autoscaling start-instance-refresh --auto-scaling-group-name ${ASG_NAME}
I don't want to run the start-instance-refresh command until the image from create-image is in the available state.
What changes do I need to make to this script for that to happen?
You can use the image-available waiter after you create the image:
aws ec2 wait image-available --image-ids ${IMAGE}
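Note that the waiter polls every 15 seconds and gives up after 40 attempts by default. If you specifically want the 10-second interval from the question, a manual polling loop is a possible alternative (a sketch; place it between create-image and start-instance-refresh):
# Poll every 10 seconds until the AMI reaches the "available" state
while [ "$(aws ec2 describe-images --image-ids ${IMAGE} --query 'Images[0].State' --output text)" != "available" ]; do
  echo "Waiting for AMI ${IMAGE} to become available..."
  sleep 10
done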

discovery.seed_hosts in elasticsearch AWS EC2 with ELB

I have EC2 instances behind an ELB. Whenever a new instance starts, it is assigned an IP address dynamically.
I have added the ELB DNS name, but it resolves to the IP addresses of the network interfaces attached to the ELB, whereas I need the EC2 instances' own IP addresses.
So how do I add the new IP address to discovery.seed_hosts in Elasticsearch without manual intervention?
Note: I am looking for a way other than the EC2 discovery plugin.
I used an AWS CLI command to fetch the IPs of the instances behind the ELB, adding the following to my .sh file:
export ELASTIC_INSTANCE_IPS=$(aws ec2 describe-instances --filters file://filters.json --query "Reservations[*].Instances[*].PrivateIpAddress" --region ${aws_region} --output text | paste -sd,)
tee -a elasticsearch.yml << END
discovery.seed_hosts: [$ELASTIC_INSTANCE_IPS]
END
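The question doesn't show filters.json; a hypothetical example that selects the running Elasticsearch instances by tag might look like:
[
  {"Name": "tag:Name", "Values": ["elasticsearch"]},
  {"Name": "instance-state-name", "Values": ["running"]}
]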

Kubernetes remote cluster setup

How can I set up and securely access a Kubernetes cluster on an EC2 instance from my laptop? I want it to be a single-node cluster, i.e. running only one instance. I have tried running minikube on an EC2 instance, but I couldn't configure my laptop to connect to it.
As a result, I want to run around 10 services/pods on the EC2 instance and just debug against them from my dev laptop.
Thanks!
You can use kops (Kubernetes Operations) to accomplish this. It's a really handy tool, and there's a whole section on configuring a cluster on AWS. I use it on a couple of projects and would really recommend it; the setup is straightforward and easy to understand.
After the cluster is up, you can use kubectl proxy to proxy locally and interact with the cluster, or use kubectl with config files to set up services and pods.
It does not create a new instance per service or pod; it creates pods on the node(s) already in the cluster.
In your case you could have a single master and a single node in whatever size suits your needs, t2.micro or otherwise.
A command to accomplish that would look like:
kops create cluster \
--cloud aws \
--state $KOPS_STATE_STORE \
--node-count $NODE_COUNT \
--zones $ZONES \
--master-zones $MASTER_ZONES \
--node-size $NODE_SIZE \
--master-size $MASTER_SIZE \
-v $V_LOG_LEVEL \
--ssh-public-key $SSH_KEY_PATH \
--name=$CLUSTER_NAME
where $NODE_COUNT would be 1, giving you a single node (EC2 instance) plus another instance as the master.
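The variables are placeholders; hypothetical values for a minimal cluster might be (the state-store bucket and cluster name are assumptions):
export KOPS_STATE_STORE=s3://my-kops-state-bucket  # hypothetical S3 bucket for kops state
export NODE_COUNT=1
export ZONES=us-east-1a
export MASTER_ZONES=us-east-1a
export NODE_SIZE=t2.micro
export MASTER_SIZE=t2.micro
export V_LOG_LEVEL=2
export SSH_KEY_PATH=~/.ssh/id_rsa.pub
export CLUSTER_NAME=cluster.example.k8s.local  # the .k8s.local suffix lets kops use gossip DNS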
To connect to it locally you can also deploy the Kubernetes Dashboard on your cluster:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
To access the Dashboard from your local workstation, you must create a secure channel to your Kubernetes cluster. Run the following command:
kubectl proxy
Now you can access the Dashboard at:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
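To run kubectl directly from the laptop rather than through the Dashboard, kops can also write the cluster's kubeconfig locally, for example:
kops export kubecfg --name $CLUSTER_NAME --state $KOPS_STATE_STORE
After that, kubectl commands on the laptop talk to the remote cluster's API server.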

Creating an EC2 Windows instance from an AMI using the AWS CLI, adding IAM roles, tags and EBS

I am running the following command
aws ec2 run-instances --image-id ${Ami_id} --count 1 --instance-type t2.micro --iam-instance-profile Name="bot_syndication_cloudwatch" --key-name my-key\
--security-group-ids sg-27b53b5c,sg-7ddd5306 --subnet-id subnet-96e0d6e0 \
--tag-specifications ResourceType=instance,Tags=[{Key=Name,Value=Stage-Content-Syndication},{Key=Environment,Value=Stage},{Key=Platform,Value=Windows}]\
--block-device-mappings "[{\"DeviceName\":\"/dev/sdj\",\"NoDevice\":\"\"}]" \
and I am getting this error
sg-7ddd5306, --tag-specifications, ResourceType=instance,Tags=[{Key=Name,Value=Stage-Content-Syndication},{Key=Environment,Value=Stage},{Key=Platform,Value=Windows}]--block-device-mappings, [{"DeviceName":"/dev/sdj","NoDevice":""}], sg-27b53b5c
Build step 'Execute shell' marked build as failure
The issue is with this parameter: --security-group-ids sg-27b53b5c,sg-7ddd5306
If you have multiple security groups to assign to your EC2 instance, you need to separate them with spaces, such as:
--security-group-ids sg-27b53b5c sg-7ddd5306
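Putting it together, a corrected command might look like this (a sketch built from the values in the question; note the space-separated security group IDs, the spaces before the line-continuation backslashes, and the quoting that stops the shell from brace-expanding the tag specification):
aws ec2 run-instances --image-id ${Ami_id} --count 1 --instance-type t2.micro \
  --iam-instance-profile Name="bot_syndication_cloudwatch" --key-name my-key \
  --security-group-ids sg-27b53b5c sg-7ddd5306 --subnet-id subnet-96e0d6e0 \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Stage-Content-Syndication},{Key=Environment,Value=Stage},{Key=Platform,Value=Windows}]' \
  --block-device-mappings '[{"DeviceName":"/dev/sdj","NoDevice":""}]'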

Script to attach and detach a server from the load balancer in Amazon AWS

I am using the script below to attach and detach a server from the load balancer:
#!/bin/bash
aws elb register-instances-with-load-balancer --load-balancer-name Load-BalancerLoadBalancer --instances i-a3f1446e
aws elb deregister-instances-from-load-balancer --load-balancer-name Load-BalancerLoadBalancer --instances i-a3f1446e
When I run the script I get the following errors:
Service elasticloadbalancing not available in region ap-southeast-1b
Service elasticloadbalancing not available in region ap-southeast-1b
Are there any changes I need to make to get the script working, or is there an alternate way to do this?
The error says region ap-southeast-1b, but ap-southeast-1b is an Availability Zone, not a Region.
The Region should be ap-southeast-1.
Run aws configure and confirm that your Region is set correctly.
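For example, a quick way to check and correct it from the CLI:
# Show the Region configured for the default profile
aws configure get region
# If it prints an Availability Zone like ap-southeast-1b, set the Region instead
aws configure set region ap-southeast-1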
Alternatively, your ELB may have been created in a different Region; add --region to your commands. For example, if the ELB was created in us-east-1:
aws elb register-instances-with-load-balancer --load-balancer-name Load-BalancerLoadBalancer --instances i-a3f1446e --region us-east-1
aws elb deregister-instances-from-load-balancer --load-balancer-name Load-BalancerLoadBalancer --instances i-a3f1446e --region us-east-1
