Azure Scale Set without Load Balancer

When creating a Scale Set (VMSS) in Azure, I can choose to include a load balancer with it.
What I don't understand is - how does it work if there's no Load Balancer?
So say my Scale Set has 1 VM, and now, because of the scaling rules, another VM is added. If there's no LB or App GW, how should I access this VM? Is there some kind of internal load balancer in the Scale Set itself?
Thanks!

If there's no LB or App GW, how should I access this VM? Is there some
kind of internal load balancer in the Scale Set itself?
For a standalone VM, you can associate a public IP address with its NIC, and the NIC is a separate resource. For a VMSS, however, there is no separate NIC resource that you can associate a public IP address with. Without a public IP address you cannot reach either the VM or the VMSS instances from outside, so you can only access the scale set through a load balancer or an Application Gateway.
Alternatively, you can use a VM with a public IP address as a jump box and access the VMSS instances from that VM over the virtual network, but that is a little more complex and more expensive.
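To reach the instances from a jump box you first need their private IPs. Below is a minimal sketch using the Azure SDK for Python (azure-identity and azure-mgmt-network); the subscription, resource group, and scale set names are placeholders, and it assumes your credentials can read the scale set's network interfaces.

```python
# Minimal sketch: list the private IPs of VMSS instances so they can be
# reached from a jump box inside the same VNet.
# The subscription ID, resource group, and scale set name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "my-resource-group"    # placeholder
vmss_name = "my-scale-set"              # placeholder

network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Enumerate the NICs that belong to the scale set instances.
nics = network_client.network_interfaces.list_virtual_machine_scale_set_network_interfaces(
    resource_group, vmss_name
)

for nic in nics:
    for ip_config in nic.ip_configurations:
        # These private IPs are only reachable from inside the VNet,
        # e.g. via SSH/RDP from the jump box VM.
        print(nic.name, ip_config.private_ip_address)
```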

Related

AWS: How to create a DNS record that points to a private IP?

I am new to EC2. I have an EC2 instance that reboots daily for maintenance, and after each reboot I get a new public IP (I can't use Elastic IPs; they are all allocated).
So my problem is that the instance runs an application that I need to be accessible via a domain (exemple.com), but I can't figure out how to set up custom DNS on Namecheap so that, on the AWS side, the DNS automatically resolves to the instance's new public IP after a reboot.
If you feel your architecture warrants additional Elastic IP addresses, you can request a limit increase. To request an increase, complete the Amazon VPC limit request form (choose VPC Elastic IP Address Limit). Describe your use case so that AWS can understand your needs.
You can put your instance behind an Elastic Load Balancer. Each Classic Load Balancer receives a default Domain Name System (DNS) name. This DNS name includes the name of the AWS region in which the load balancer is created. For example, if you create a load balancer named my-loadbalancer in the US West (Oregon) region, your load balancer receives a DNS name such as my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com. You can then use your DNS service to create a CNAME record to route queries to your load balancer. Then your EC2 instance does not need a stable public IP address.
You can delegate example.com resolution to the AWS DNS service, Route 53, and then run a script on server boot that updates the Route 53 record with the instance's latest public IP address.
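A minimal sketch of such a boot script, using boto3, is shown below; the hosted zone ID and record name are placeholders, it assumes the instance's IAM role allows route53:ChangeResourceRecordSets, and it uses IMDSv1 to read the public IP (with IMDSv2 you would need a session token first).

```python
# Minimal sketch: on boot, upsert a Route 53 A record with the instance's
# current public IP. Hosted zone ID and record name are placeholders.
import urllib.request
import boto3

HOSTED_ZONE_ID = "Z1234567890ABC"   # placeholder: your Route 53 hosted zone
RECORD_NAME = "exemple.com."        # placeholder: the record to update

# The instance metadata service returns the current public IPv4 address.
with urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/public-ipv4", timeout=5
) as resp:
    public_ip = resp.read().decode()

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Update A record after reboot",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,  # short TTL so clients pick up new IPs quickly
                    "ResourceRecords": [{"Value": public_ip}],
                },
            }
        ],
    },
)
print(f"Updated {RECORD_NAME} -> {public_ip}")
```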

Automatic Failover between Azure Internal Load Balancers

We are moving a workflow of our business to Azure. I currently have two VMs as an HA pair behind an internal load balancer in the North Central US region as my production environment. I have mirrored this architecture in the South Central US region for disaster recovery purposes. A vendor recommended I place Azure Traffic Manager in front of the ILBs for automatic failover, but it appears that I cannot specify ILBs as endpoints for Traffic Manager. (For clarity, all connections to these ILBs are through VPNs.)
Our current plan is to put the IPs for both ILBs in a custom-built appliance placed on-prem, and the failover would happen on that appliance. However, it would greatly simplify things if we could present a single IP to that appliance, and let the failover happen in Azure instead.
Is there an Azure product or service, or perhaps more appropriate architecture that would allow for a single IP to be presented to the customer, but allow for automatic failover across regions?
It seems that you could configure an Application Gateway with an internal load balancer (ILB) endpoint. In this case the Application Gateway has a private frontend IP configuration. The gateway is deployed into a dedicated subnet and lives in the same VNet as your internal backend VMs. Note that you can add the private VMs directly as backends instead of pointing at the internal load balancer's frontend IP address, because a private Application Gateway is itself an internal load balancer.
Moreover, an Application Gateway can also be given a public frontend IP configuration; if so, you can configure that public frontend IP as an endpoint for Azure Traffic Manager.
Hope this helps.
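For reference, here is a minimal sketch (Azure SDK for Python, with placeholder names) that reads back an existing Application Gateway's frontend IP configurations, so you can confirm whether it exposes a private frontend IP on the VNet:

```python
# Minimal sketch: inspect an existing Application Gateway's frontend IP
# configurations to confirm whether it has a private (ILB-style) frontend.
# Subscription ID, resource group, and gateway name are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "my-resource-group"    # placeholder
appgw_name = "my-app-gateway"           # placeholder

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)
appgw = client.application_gateways.get(resource_group, appgw_name)

for fe in appgw.frontend_ip_configurations:
    if fe.private_ip_address:
        print(f"{fe.name}: private frontend {fe.private_ip_address}")
    elif fe.public_ip_address:
        print(f"{fe.name}: public frontend {fe.public_ip_address.id}")
```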

VM scale set does not work with internal standard SKU Azure load balancer backend pool

I want to load-balance my VM scale set in a VNet. My requirements are that:
I do not want public IP accessibility and
I do need HTTPS health probes.
While Azure load balancers in both the Basic and Standard SKUs seem capable of balancing internal traffic as well, only the Standard SKU offers HTTPS health probes.
When trying to add the VM scale set to the backend pool, I cannot select it; it is not found by the wizard. Both the scale set and the internal Standard SKU load balancer are within the same region, VNet, and resource group.
It appears I'm having the same issue as someone here, only with a scale set instead of an availability set.
There is a tooltip stating
Only VMs in region with standard SKU public or no public IP can be
attached to this loadbalancer. A backend pool can only contain
resources from one virtual network. Create a new backend pool to add
resources from a different virtual network.
So I am confused: my internal load balancer uses only private addresses, so the criterion of "with standard SKU public or no public IP" should be met. I also note that the tooltip only explicitly mentions VMs, not VM scale sets. However, I refuse to believe that the Standard SKU of the LB should lack features compared to the Basic SKU (I do have this working with a scale set and an internal Basic LB, albeit without HTTPS health probes).
Am I missing something here? I do realise that there's still the "Azure Application Gateway", however I think it's overly complex to set up and overkill for my scenario. I only want internal load balancing of a scale set with HTTPS health probes. And I am starting to think that this is not possible.
Kind regards, baouss
It seems to be a restriction that you cannot select the scale set as the backend for a Standard SKU load balancer in the Azure portal. The documentation states that
One key aspect is the scope of the virtual network for the resource.
While Basic Load Balancer exists within the scope of an availability
set, a Standard Load Balancer is fully integrated with the scope of a
virtual network and all virtual network concepts apply.
So you can only select eligible VMs in the virtual network for the backend pool of a Standard Load Balancer.
You may want to wait for confirmation from the Azure team: VM scale set does not work with internal standard SKU Azure load balancer backend pool
As you mentioned, you could currently use an Application Gateway with an HTTPS health probe. Otherwise, you may create a VM scale set and choose the load balancing option "load balancer"; this will automatically associate a public Standard SKU load balancer with your scale set.
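For what it's worth, the association between a scale set and a load balancer lives in the scale set's own network profile rather than on a separate NIC resource. Below is a minimal sketch (azure-mgmt-compute, placeholder names) that prints which backend pools a scale set's IP configurations currently reference:

```python
# Minimal sketch: show which load balancer backend pools a VM scale set's
# IP configurations currently reference. Names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<subscription-id>"   # placeholder
resource_group = "my-resource-group"    # placeholder
vmss_name = "my-scale-set"              # placeholder

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)
vmss = compute.virtual_machine_scale_sets.get(resource_group, vmss_name)

profile = vmss.virtual_machine_profile.network_profile
for nic_config in profile.network_interface_configurations:
    for ip_config in nic_config.ip_configurations:
        pools = ip_config.load_balancer_backend_address_pools or []
        for pool in pools:
            # Each entry is a reference (resource ID) to a backend pool.
            print(f"{nic_config.name}/{ip_config.name} -> {pool.id}")
```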

EC2 Load Balancer - Security Access

On an AWS EC2 ELB security profile, I need a couple of IP addresses to be able to access only certain pages of my website. Is it possible? The other IP addresses will have access to the full website. Is this achievable?
This is not possible as a configuration in the Load Balancer because the Load Balancer simply distributes requests to your application servers.
Your application will need to enforce such functionality.
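For example, if the application behind the ELB were a Python/Flask app, a minimal sketch of enforcing this at the application level could look like the following; the paths and allowed IPs are hypothetical, and it assumes the ELB passes the original client IP in the X-Forwarded-For header:

```python
# Minimal sketch: restrict certain paths to an allowlist of client IPs at the
# application level. Paths and IPs below are hypothetical examples.
from flask import Flask, request, abort

app = Flask(__name__)

RESTRICTED_PREFIXES = ("/admin", "/reports")      # hypothetical restricted pages
ALLOWED_IPS = {"203.0.113.10", "203.0.113.11"}    # hypothetical allowed clients

def client_ip() -> str:
    # Behind an ELB, the original client IP is the first entry in
    # X-Forwarded-For; fall back to the direct peer address otherwise.
    forwarded = request.headers.get("X-Forwarded-For", "")
    if forwarded:
        return forwarded.split(",")[0].strip()
    return request.remote_addr or ""

@app.before_request
def restrict_sensitive_paths():
    # Everyone can reach the rest of the site; only the allowlisted IPs
    # may reach the restricted prefixes.
    if request.path.startswith(RESTRICTED_PREFIXES) and client_ip() not in ALLOWED_IPS:
        abort(403)

@app.route("/")
def index():
    return "public page"

@app.route("/admin")
def admin():
    return "restricted page"
```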

How to add a load balancer at a later stage and re-configure DNS without downtime?

Say I deploy an API, the database, etc. to a t2.micro EC2 instance to serve traffic during prototyping and beta testing. Let's say the domain pointing to the API is api.exampleapp.com.
Now traffic begins to grow beyond the instance's limits and we deploy the API to a bunch of instances that we want to stand behind a load balancer. After setting the fleet up, how do we make api.exampleapp.com now point to the load balancer so that traffic is served by the newly launched instances without any downtime? Is this possible at all? Or with minimal downtime? Or is this approach to launching a new API itself faulty?
I assume you either don't need auto-scaling or have it already configured.
Start the LB and attach your first EC2 instance to it. The instance still works and remains directly accessible via its IP (thus, accessible from the world).
Check the LB hostname and try to access the instance through the LB to make sure it works.
Switch DNS to the LB using either a CNAME or an ALIAS record type (if ALIAS is supported by your DNS provider); a Route 53 sketch follows these steps.
Add the other instances to the LB.
Done!
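If the zone happens to be hosted in Route 53, the DNS switch could look roughly like the sketch below; the zone ID, record name, load balancer DNS name, and the load balancer's canonical hosted zone ID are placeholders you would take from your own account.

```python
# Minimal sketch: point api.exampleapp.com at a load balancer using a
# Route 53 ALIAS record. All IDs and names below are placeholders.
import boto3

HOSTED_ZONE_ID = "Z1234567890ABC"      # placeholder: your Route 53 hosted zone
RECORD_NAME = "api.exampleapp.com."    # the record to switch over
LB_DNS_NAME = "my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com."
LB_HOSTED_ZONE_ID = "Z0987654321XYZ"   # placeholder: the LB's canonical hosted zone ID

route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Switch API record to the load balancer",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    # An ALIAS record has no TTL of its own; Route 53 resolves
                    # it to the load balancer's current addresses.
                    "AliasTarget": {
                        "HostedZoneId": LB_HOSTED_ZONE_ID,
                        "DNSName": LB_DNS_NAME,
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ],
    },
)
```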
