Default termination policy of AWS auto scaling group - amazon-ec2

In an auto scaling group, if there are an equal number of instances in multiple Availability Zones, which Availability Zone will be selected for terminating instances under the AWS default termination policy? Is it selected randomly?

According to the documentation, if you did not assign a specific termination policy to the group, it uses the default termination policy.
In the scenario where an equal number of instances exist in multiple Availability Zones, the Auto Scaling group selects the Availability Zone whose instances use the oldest launch configuration.
If the instances were launched from the same launch configuration, the Auto Scaling group then selects the instance that is closest to the next billing hour and terminates it.
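The tie-breaking steps above can be sketched as plain selection logic. This is a simplified illustration of the documented behavior, not the actual AWS implementation; the instance fields are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Instance:
    instance_id: str
    az: str
    launch_config_age: int       # higher = older launch configuration
    secs_to_billing_hour: int    # seconds until the next billing hour

def pick_instance_to_terminate(instances):
    """Simplified model of the AWS default termination policy."""
    # 1. Find the AZ(s) with the most instances.
    by_az = {}
    for i in instances:
        by_az.setdefault(i.az, []).append(i)
    max_count = max(len(v) for v in by_az.values())
    candidates = [i for v in by_az.values() if len(v) == max_count for i in v]
    # 2. Among those, prefer instances from the oldest launch configuration.
    oldest = max(i.launch_config_age for i in candidates)
    candidates = [i for i in candidates if i.launch_config_age == oldest]
    # 3. Tie-break on the instance closest to the next billing hour.
    return min(candidates, key=lambda i: i.secs_to_billing_hour)
```

With equal instance counts per AZ and the same launch configuration, step 3 is what decides, which is why the choice is not random.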

Related

AWS EC2 auto scaling group AZ rebalancing with AZRebalance suspended

Two days ago, I noticed that an EC2 auto scaling group was rebalancing nodes across the configured Availability Zones.
On that same day, I suspended the AZRebalance process.
Today, for no apparent reason, the ASG started rebalancing again...
I checked that the AZRebalance suspension wasn't removed...
I should add that this ASG is an EKS node group... not sure if that makes a difference.

Multiple locations in Google Cloud SQL

I need to use SQL in multiple different locations. The best option would be to have some databases (or even some records, like tagging in Mongo) in different locations. Is this possible with Google Cloud SQL?
There may be two scenarios:
One single Cloud SQL instance in multiple locations
Different Cloud SQL instances in multiple locations
When you create a Cloud SQL instance, you choose a region where the instance and its data are stored. To reduce latency and increase availability, choose the same region for your data and your Compute Engine instances, standard environment apps, and other services.
There are mainly two location types: a regional location, i.e. a specific geographic place, and a multi-regional location, which contains at least two geographic places. Multi-regional locations are only used for backup operations in Cloud SQL.
You choose a location when you first create the instance. The location can't be changed after the instance is created.
A single region consists of several data centers called zones. When creating a Cloud SQL instance, you can specify whether it should be available in a single zone or in two different zones within the selected region. Placing the Cloud SQL instance in two different zones is called the High Availability (HA) configuration.
The purpose of an HA configuration is to reduce downtime when a zone or instance becomes unavailable which might happen during a zonal outage, or when an instance becomes corrupted. With HA, your data continues to be available to client applications.
The HA configuration provides data redundancy. A Cloud SQL instance configured for HA is also called a regional instance and is located in a primary and secondary zone within the configured region. Within a regional instance, the configuration is made up of a primary instance and a standby instance.
So for the first scenario: yes, a single Cloud SQL instance can be located in multiple locations if you consider different zones as different locations (this is fair, as two zones are physically separated data centers within a single GCP region). But it can only span two zones, and for that you have to configure High Availability (HA) for the instance.
For the second scenario, you can always create different Cloud SQL instances in different regions.
You can go through the Cloud SQL documentation on instance locations and the overview of the HA configuration for a more detailed understanding of the above.
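As a sketch, enabling HA through the Cloud SQL Admin API comes down to setting the instance's availability type to REGIONAL at creation time. The project and instance names below are made up; the request-body shape follows the Admin API's instances.insert:

```python
# Hypothetical names; the body shape follows the Cloud SQL Admin API,
# where settings.availabilityType = "REGIONAL" enables the HA
# (two-zone) configuration and "ZONAL" means a single zone.
def ha_instance_body(name, region, tier="db-n1-standard-1"):
    return {
        "name": name,
        "region": region,                   # chosen at creation, immutable afterwards
        "databaseVersion": "POSTGRES_13",
        "settings": {
            "tier": tier,
            "availabilityType": "REGIONAL",
        },
    }

body = ha_instance_body("my-ha-instance", "us-central1")
```

Note that the region is fixed once the instance exists, which is exactly why HA only gives you two zones within one region rather than multiple regions.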
There is another option in Cloud SQL called read replicas.
You use a read replica to offload work from a Cloud SQL instance. The read replica is an exact copy of the primary instance. Data and other changes on the primary instance are updated in almost real time on the read replica.
Read replicas are read-only; you cannot write to them. The read replica processes queries, read requests, and analytics traffic, thus reducing the load on the primary instance.
If you want the data to be available in multiple locations you may consider using cross-region read replicas.
Cross-region replication lets you create a read replica in a different region from the primary instance.
Cross-region read replicas have several advantages:
Improve read performance by making replicas available closer to your application's region.
Provide additional disaster recovery capability to guard against a regional failure.
Let you migrate data from one region to another.
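A cross-region read replica is created as a new instance that names the primary and sits in a different region. Again a hedged sketch with invented names, following the Admin API's body shape:

```python
# Hypothetical names; in the Cloud SQL Admin API a read replica is a new
# instance whose masterInstanceName points at the primary, and a
# cross-region replica simply uses a region different from the primary's.
def cross_region_replica_body(replica_name, primary_name, replica_region):
    return {
        "name": replica_name,
        "region": replica_region,            # differs from the primary's region
        "masterInstanceName": primary_name,  # ties the replica to its primary
        "settings": {"tier": "db-n1-standard-1"},
    }

body = cross_region_replica_body("replica-asia", "my-primary", "asia-east1")
```

The replica is read-only, so this gives you data in multiple locations for reads while all writes still go to the primary.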

"Recent values to use" in Toloka quality control

Can the "Recent values to use" for control tasks be set to a larger value than the current pool size to include older pools?
Yes, you can set a larger value than the current pool size. The rule then extends across the other pools in which you also set "Recent values to use" for control tasks. To base the calculation on control-task responses from all of the project's pools, fill in the field in the rule for each pool.
In other words, imagine you have 3 pools, each with a control tasks rule. In the 1st and 3rd pools you set "Recent values to use" = 10. In the 2nd pool you did not set any value for "Recent values to use". The performer's control-task history will then include only the 1st and 3rd pools.
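A toy model of that behavior (the data and function are invented purely to illustrate which pools contribute to the performer's control-task history):

```python
def control_task_history(responses, recent_values_by_pool, n):
    """responses: list of (pool_id, is_correct) in chronological order.
    Only pools where "Recent values to use" is set contribute; the
    history keeps the last n contributing responses."""
    contributing = [r for r in responses
                    if recent_values_by_pool.get(r[0]) is not None]
    return contributing[-n:]

responses = [(1, True), (2, False), (3, True), (1, False), (2, True)]
recent = {1: 10, 3: 10}   # pool 2's field left blank
history = control_task_history(responses, recent, 10)
# pool 2's responses are excluded from the history
```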

How do I connect and get info for specific Availability zone?

I am trying to get data for certain objects that are in a specific Availability Zone (ec2.ap-southeast-2b.amazonaws.com), but it fails when I set:
ec2Config.ServiceURL = "http://ec2.ap-southeast-2b.amazonaws.com";
and I get a NameResolutionException!
How can I get the data (I'm trying ProductionClient.DescribeVolumes()) for this specific AZ?
AWS service endpoints are specific to a whole Region rather than just an Availability Zone.
So, the URL should be: http://ec2.ap-southeast-2.amazonaws.com (without the 'b')
For a list of Endpoints, see: AWS Regions and Endpoints
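Since an AZ name is just the region name plus a one-letter suffix, deriving the regional endpoint is mechanical. A small helper to illustrate (in practice you would configure the SDK client with the region and then filter DescribeVolumes results by the availability-zone attribute, rather than build URLs by hand):

```python
import re

def az_to_region(az):
    """Strip the trailing zone letter from an AZ name:
    'ap-southeast-2b' -> 'ap-southeast-2'."""
    return re.sub(r"[a-z]+$", "", az)

def ec2_endpoint(az):
    # Endpoints are per-region, never per-AZ.
    return "http://ec2.%s.amazonaws.com" % az_to_region(az)
```

That is why the original URL failed name resolution: there is no DNS entry for a per-AZ endpoint.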

DocumentDB unique concurrent insert?

I have a horizontally scaled, event-source-driven application that runs on an Azure Service Bus Topic and a Service Bus Queue. Some events for building up my domain model's state are received through the topic by all my servers, while the ones on the queue (received far more often and not mutating domain model state) are distributed among the servers to spread the load.
Now, every time one of my servers receives an event through the queue or topic, it stores it in a DocumentDB which it uses as event store.
Now here's the problem: how can I be sure that the same document is not inserted twice? Let's say 3 servers receive the same event and all try to store it. How can I make it fail for 2 of the servers if they all do it at the same time? Is there any form of unique constraint I can set in DocumentDB, or some kind of transaction scope, to prevent the document from being inserted twice?
The id property of each document has a uniqueness constraint within its collection. You can use this constraint to ensure that duplicate documents are not written to a collection: set the event's identifier as the document id, and an insert with an already-used id fails with a conflict error.
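A minimal in-memory sketch of the pattern (the class names are invented; with the real service the id uniqueness is enforced server-side and the losing writers see an HTTP 409 Conflict on insert):

```python
class ConflictError(Exception):
    """Stands in for the conflict error the document store returns."""

class EventStore:
    def __init__(self):
        self._docs = {}

    def insert(self, doc):
        # Enforce id uniqueness the way the collection does server-side.
        if doc["id"] in self._docs:
            raise ConflictError(doc["id"])
        self._docs[doc["id"]] = doc

def store_event(store, event):
    """Each server calls this; exactly one insert per event id succeeds."""
    try:
        store.insert(event)
        return True        # this server won the race
    except ConflictError:
        return False       # another server already stored the event
```

The key design point is that the event's own identifier becomes the document id, so deduplication costs nothing extra: the write itself is the atomic check.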
