Is it possible to deploy a scale set that can receive traffic from the internet (via an Application Gateway) and also from internal servers (via an internal Azure Load Balancer)?
Please see image for clarification
Thanks
You most certainly can! Here's some sample code using the Azure CLI:
# create an Azure Load Balancer that is associated with a virtual network
# instead of a public IP:
$ az network lb create -g multirg -n privatealb --vnet-name vnet --subnet scalesetsubnet
# create an application gateway:
$ az network application-gateway create -g multirg -n appgw --vnet-name vnet --subnet appgwsubnet
# create a scale set associated with the app gateway (note: 'az vmss create'
# does not allow specifying multiple load balancers; we'll just create with
# the app gateway for now and add the Azure Load Balancer afterwards)
$ az vmss create -g multirg -n scaleset --app-gateway appgw --image UbuntuLTS --generate-ssh-keys --vnet-name vnet --subnet scalesetsubnet --upgrade-policy Automatic
# to associate the scale set with the load balancer post-creation,
# we will need to know the resource ID of the load balancer's backend
# pool; we can get this using the 'az network lb show' command:
$ az network lb show -g multirg -n privatealb
{
  "backendAddressPools": [
    {
      ...
      "id": "{private-alb-pool-id}",
      ...
    }
  ],
  ...
}
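If you prefer, the CLI's built-in JMESPath support can pull the pool ID out directly (same hypothetical resource names as above):
$ az network lb show -g multirg -n privatealb --query "backendAddressPools[0].id" -o tsv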
# we can then use the 'az vmss update' command to associate the Azure
# Load Balancer with the scale set:
$ az vmss update --resource-group multirg --name scaleset --add virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0].LoadBalancerBackendAddressPools '{"id": "{private-alb-pool-id}"}'
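To sanity-check the result, a query like this should show both the app gateway and load balancer backend pools on the scale set NIC's IP configuration (a sketch reusing the same resource names):
# inspect the scale set's first IP configuration, which should now reference both pools
$ az vmss show -g multirg -n scaleset --query "virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].ipConfigurations[0]"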
I also wrote up a quick blog post describing scale set + Azure Load Balancer + App Gateway scenarios. For more info, find it here: https://negatblog.wordpress.com/2018/06/21/scale-sets-and-load-balancers/
Hope this helps! :)
-Neil
I'm trying to access and monitor my Kubernetes cluster, so I started the Kubernetes proxy to allow access from external browsers.
This is the command I ran to find the API server addresses:
APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
These are the results, shown below:
server: https://<external_ip_0>
server: https://<external_ip_1>
server: https://<external_ip_2>
server: https://<external_ip_3>
When I try to access the proxy on any of the IPs above, I get a timeout and no response. How can I handle this problem?
Which one is the true API server IP?
Note: this is my command for running the Kubernetes proxy. I want to access the API server via kubectl proxy.
kubectl proxy --address 0.0.0.0 --accept-hosts '.*' --port=8080 &
The kubectl config view command shows your kubectl config, which can have multiple clusters configured; that's why you see multiple "server" entries when grepping - those are Kubernetes clusters you have used in the past. See https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
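To see which cluster your current context actually points at, standard kubectl config commands will tell you:
# the context kubectl is currently using
kubectl config current-context
# just the API server address for that context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'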
If you want to access the Kubernetes API exposed by the proxy, you can run the proxy command you provided and open http://localhost:8080/api/ in your web browser to see the Kubernetes API - more information here: https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/
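For example, with the proxy from your question running, the API should answer locally:
# start the proxy (as in the question), then query the API through it
kubectl proxy --address 0.0.0.0 --accept-hosts '.*' --port=8080 &
curl http://localhost:8080/api/
curl http://localhost:8080/version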
How can I set up and securely access a Kubernetes cluster on an EC2 instance from my laptop? I want it to be a single-node cluster, running only one instance. I have tried running minikube on an EC2 instance, but I couldn't configure my laptop to connect to it.
So, in the end, I want to run around 10 services/pods on the EC2 instance and just debug from my dev laptop.
Thanks!
You can use kops (Kubernetes Operations) to accomplish this. It's a really handy tool; there's a whole section of its documentation on configuring a cluster on AWS. I use it on a couple of projects and I'd really recommend it. The setup is easy to understand and straightforward.
After the cluster is up, you can use kubectl proxy to proxy locally and interact with the cluster, or use kubectl with config files to set up services and pods.
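For instance, once kops has built the cluster you can pull its credentials down to your laptop and talk to it directly (a sketch; $CLUSTER_NAME and $KOPS_STATE_STORE are the same placeholders used in the create command below):
# write the cluster's credentials into your local kubeconfig
kops export kubecfg --name $CLUSTER_NAME --state $KOPS_STATE_STORE
# proxy the API locally and interact with the cluster
kubectl proxy &
kubectl get pods --all-namespaces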
It does not create a new instance per service or pod; it schedules pods onto the node(s) that already exist in the cluster.
In your case you could have a single master and a single node in whatever size suits your needs, t2.micro or otherwise.
A command to accomplish that would look like:
kops create cluster \
--cloud aws \
--state $KOPS_STATE_STORE \
--node-count $NODE_COUNT \
--zones $ZONES \
--master-zones $MASTER_ZONES \
--node-size $NODE_SIZE \
--master-size $MASTER_SIZE \
-v $V_LOG_LEVEL \
--ssh-public-key $SSH_KEY_PATH \
--name=$CLUSTER_NAME
Here $NODE_COUNT would be 1, giving you a single node (EC2 instance) plus another instance as the master.
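For illustration, the variables might be filled in like this for a minimal single-node setup, with a quick health check afterwards (all values are hypothetical placeholders; use your own S3 bucket, zones, key and cluster name):
# hypothetical values for a one-master, one-node cluster
export KOPS_STATE_STORE=s3://example-kops-state-bucket
export CLUSTER_NAME=example.k8s.local
export NODE_COUNT=1
export ZONES=us-east-1a
export MASTER_ZONES=us-east-1a
export NODE_SIZE=t2.micro
export MASTER_SIZE=t2.micro
export V_LOG_LEVEL=2
export SSH_KEY_PATH=~/.ssh/id_rsa.pub
# once kops has finished building the cluster (add --yes to the create command,
# or run 'kops update cluster --name $CLUSTER_NAME --yes'), check that it is healthy
kops validate cluster --state $KOPS_STATE_STORE
kubectl get nodes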
To interact with it locally you can also deploy the Kubernetes dashboard on your cluster:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
To access Dashboard from your local workstation you must create a secure channel to your Kubernetes cluster. Run the following command:
kubectl proxy
Now you can access the Dashboard at:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
I'm trying to create an AWS server in the region "Canada (Central)", which, according to the Amazon documentation (http://docs.aws.amazon.com/general/latest/gr/rande.html), is called ca-central-1.
The command I'm running is:
knife ec2 server create -I ami-70299b14 -f t2.nano -S my-key -i ~/.ssh/my-key.pem -ssh-user ubuntu --region ca-central-1 -Z ca-central-1a
And the error I get is:
ERROR: ArgumentError: Unknown region: "ca-central-1"
The AMI I'm using is one that I've used to launch a server in this region via the online EC2 Management Console.
I created an IAM user and key pairs in this region and gave the user full permissions on EC2 resources, and I've also created an inbound rule for SSH in the region. Is there something else I'm missing?
Unfortunately fog-aws only added this region very recently, so there isn't yet support for it in knife ec2. It will hopefully be in the next ChefDK release in a few weeks. For now you can just create VMs either from the aws command line tool or the web UI, and then use knife bootstrap on them.
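A workaround along these lines should do it (the AMI and key names come from the question, the IP is a placeholder, and bootstrap flag names may differ slightly between knife versions):
# launch the instance with the aws CLI, which already knows about ca-central-1
aws ec2 run-instances --image-id ami-70299b14 --instance-type t2.nano --key-name my-key --region ca-central-1
# then bootstrap it with Chef once it has a public IP
knife bootstrap <public-ip> --ssh-user ubuntu -i ~/.ssh/my-key.pem -N ca-node-1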
1: Configure your knife.rb as follows:
knife[:aws_access_key_id] = "AWS_ACCESS_KEY"
knife[:aws_secret_access_key] = "AWS_SECRET"
knife[:ssh_key_name] = 'my-key'
knife[:image] = 'ami-21414f36'
knife[:flavor] = 't2.micro'
knife[:region] = 'ca-central-1'
knife[:availability_zone] = 'ca-central-1a'
knife[:ebs_size] = 30
knife[:editor] = 'nano'
2: Generate a key pair for Canada (Central).
3: Run the knife ec2 server create command:
knife ec2 server create -I ami-70299b14 -f t2.nano --ssh-user ubuntu --region ca-central-1 -Z ca-central-1a
That worked for me.
Try this command:
knife ec2 server create -N node_name -I ami-21414f36 -f t2.micro -x '.\key_ca' -P 'ec2#123' --ssh-key key --region ca-central-1 --availability-zone 'ca-central-1a' --ebs-size 30 --security-group-ids sg-75cbd50d --bootstrap-protocol winrm --winrm-transport ssl --winrm-ssl-verify-mode verify_none
I am using the script below to attach and detach a server from the load balancer:
#!/bin/bash
aws elb register-instances-with-load-balancer --load-balancer-name Load-BalancerLoadBalancer --instances i-a3f1446e
aws elb deregister-instances-from-load-balancer --load-balancer-name Load-BalancerLoadBalancer --instances i-a3f1446e
When I run the script I get the following error:
Service elasticloadbalancing not available in region ap-southeast-1b
Service elasticloadbalancing not available in region ap-southeast-1b
Are there any changes I need to make to get the script working, or is there an alternative script that does the job?
The error says region ap-southeast-1b, but ap-southeast-1b is an Availability Zone, not a Region.
The Region should be ap-southeast-1.
Run aws configure and confirm that your Region is set correctly.
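You can also check and fix the setting non-interactively:
# show the region the CLI is currently configured to use
aws configure get region
# set it to the Region (not the Availability Zone)
aws configure set region ap-southeast-1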
It seems your ELB was created in a different region; add --region to your command. For example, if the ELB was created in us-east-1:
aws elb register-instances-with-load-balancer --load-balancer-name Load-BalancerLoadBalancer --instances i-a3f1446e --region us-east-1
aws elb deregister-instances-from-load-balancer --load-balancer-name Load-BalancerLoadBalancer --instances i-a3f1446e --region us-east-1
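If you're not sure which region the ELB lives in, listing the classic load balancers per region can help (the region here is just an example):
# list classic load balancer names in a given region
aws elb describe-load-balancers --region ap-southeast-1 --query "LoadBalancerDescriptions[].LoadBalancerName"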
I'm using the following command to set up the AWS launch config:
as-create-launch-config test1autoscale --image-id ami-xxxx --instance-type m1.small
where ami-xxxx is the image ID that I got from my instance via the web console. I get the following error:
Malformed input-AMI ami-xxxx is invalid: The AMI ID 'ami-xxxx' does not exist
I have triple-checked that the image ID matches the instance's image ID. My availability zone is ap-southeast-1a. I am not clear on what image is being asked for, if it will not accept the image of the instance I wish to add to the autoscaling group.
Try adding the region (by default the tool talks to the us-east-1 endpoint) to your launch config command, then it should work:
as-create-launch-config test1autoscale --region ap-southeast-1 --image-id ami-xxxx --instance-type m1.small
Also take a look at this: Regions and Endpoints - Amazon Web Services Glossary
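If you'd rather avoid the legacy as-* tools, the equivalent call in the unified AWS CLI takes an explicit region too (a sketch with the same placeholder AMI):
# same launch configuration via the unified AWS CLI
aws autoscaling create-launch-configuration --launch-configuration-name test1autoscale --image-id ami-xxxx --instance-type m1.small --region ap-southeast-1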