Amazon Web Services performance

I'm from Canada. I'm building a web app with Node.js and MongoDB. I'm very interested in AWS for two reasons: its scalability and the S3 service. The users of my app will upload a lot of photos, and S3 looks perfect for my project.
At this time, none of the cloud server regions available on AWS are located in Canada. Do you think my physical location will cause performance issues for my users?
AWS talks about 'availability zones'... if I live outside an availability zone (there is no availability zone in Canada), can I choose the zone that hosts my app?

Do you think my physical location will cause performance issues for my users?
Not at all. The company I work for has a website dedicated to Canadian users that runs out of the us-east-1 region. We have never had any reports of issues from Canadian users of the site.
AWS talks about 'availability zones'... if I live outside an availability zone (there is no availability zone in Canada), can I choose the zone that hosts my app?
Availability zones have nothing (directly) to do with your geographic location. Each of Amazon's regions has multiple availability zones. In a nutshell, each availability zone is a physically and electrically isolated datacenter. For example, the us-east-1 region currently has 5 availability zones. What this essentially means is that the us-east-1 region, which is physically located in Northern Virginia, consists of 5 independent datacenters. A power failure, network issue, etc. that impacts one of those 5 datacenters should have no impact on the other 4.
If you were to design a highly fault-tolerant website, Amazon would recommend that you distribute each component of your site across multiple availability zones within the same region, and ensure that the site can keep functioning even if all the services in one availability zone fail. This is why they provide multiple availability zones in each region.
To answer your specific question, however, you can choose both the region and the availability zone within a region when you launch a server instance. When you launch an instance through the AWS web interface it will default to choosing a random availability zone for you, but you can also pick a specific availability zone if you so desire.
The region you choose will dictate the geographic area where your instance resides (Northern Virginia for us-east-1, Oregon for us-west-2, etc). Depending on the region you choose there will be between 2 and 5 availability zones to choose from.
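As a rough sketch of what picking a specific zone looks like in practice (this uses boto3-style parameters; the AMI ID and zone name below are placeholders, not values from the question):

```python
# Build the keyword arguments you would pass to boto3's
# ec2.run_instances(). If no availability zone is given, AWS picks
# one for you -- the default behaviour described above.

def build_launch_params(image_id, instance_type, availability_zone=None):
    """Return run_instances kwargs for launching one instance."""
    params = {
        "ImageId": image_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }
    if availability_zone is not None:
        # Placement is how you pin the instance to a specific zone.
        params["Placement"] = {"AvailabilityZone": availability_zone}
    return params

params = build_launch_params("ami-12345678", "t3.micro", "us-east-1a")
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.run_instances(**params)
```

Leaving `availability_zone` out reproduces the web console's default of letting AWS choose a zone for you.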

Related

How are the clusters connected in different regions in a distributed system?

A newbie here trying to understand distributed architecture. I understand that the nodes in a cluster are interconnected via LAN. How are clusters connected across different regions, let's say on different continents? Are there any frameworks or patterns I can read about for this?
In general, it is achieved using fully redundant undersea fiber-optic cables. These cables carry very thin glass-fiber threads that transfer data between continents at close to the speed of light. Once the data reaches the other continent, it is handed off to the closest edge network, which connects it into an existing network and carries it on to the relevant endpoints and gateways.
The routing in such a scenario depends on the underlying routing protocol and the endpoints. In general, Border Gateway Protocol (BGP) enabled gateways automatically learn routes to other sites and forward the data accordingly.
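As a deliberately simplified illustration of how a gateway applies learned routes (this is plain longest-prefix matching on a forwarding table, not an actual BGP implementation; the prefixes and next-hop names are made up):

```python
# A router's forwarding table picks the most specific route that
# matches the destination address. Uses only the standard-library
# ipaddress module.
import ipaddress

def best_route(routes, destination):
    """Return the next hop for the longest-prefix match, or None."""
    addr = ipaddress.ip_address(destination)
    match, best_len = None, -1
    for prefix, next_hop in routes.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            match, best_len = next_hop, net.prefixlen
    return match

routes = {
    "0.0.0.0/0": "transit-gateway",   # default route to the backbone
    "10.0.0.0/8": "corporate-wan",
    "10.1.0.0/16": "eu-datacenter",   # more specific route wins
}
```

Here `best_route(routes, "10.1.2.3")` returns `"eu-datacenter"` because the /16 is more specific than the /8, while any address outside 10.0.0.0/8 falls through to the default route.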
Cloud providers such as AWS have components such as AWS Regions extended with AWS Local Zones and AWS Wavelength, which work together with internet service providers to meet the performance requirements of an application. This is achieved by placing AWS infrastructure (AWS compute and storage services within ISP datacenters) closer to the user, or at the edge of the 5G network, so that application traffic from a given set of 5G devices can reach servers in Wavelength Zones with minimal latency, rather than going over the normal internet, which would add latency on the way to the server.
The exact pattern/architecture depends on the software requirements and design, and on the software and hardware components in use.
A typical example is the geode pattern. This deploys a set of geographical nodes with backend services such that they can service any request for any client in any region. By distributing request processing around the globe, this pattern reduces latency and improves availability.
Typically, if data is geo-distributed across a far-flung user base, the geo-distributed datastores should be co-located with the compute resources that process the data. The geode pattern brings the compute to the data: the service is deployed as satellite deployments spread around the globe, each of which is termed a geode.
This pattern relies on Azure routing features that route traffic to a nearby geode via the shortest path, which in turn improves latency and performance. The pattern is deployed with a global load balancer in front of the geodes. It uses a geo-replicated read-write service such as Azure Cosmos DB for the data plane, whose data replication provides cross-geode consistency so that all geodes can serve all requests.
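The routing idea behind the geode pattern can be sketched in a few lines: send each request to the geode with the lowest measured latency. The geode names and latency numbers below are invented for illustration; in a real deployment this selection is done by the platform's global load balancer (e.g. Azure Front Door), not by client-side code.

```python
# Pick the geode with the lowest round-trip latency. Any geode can
# serve any request, so routing is purely a latency optimization.

def nearest_geode(latencies_ms):
    """Return the geode name with the smallest latency value."""
    return min(latencies_ms, key=latencies_ms.get)

# Example measurements from one client, in milliseconds (made up):
measured = {
    "geode-us-east": 180,
    "geode-eu-west": 35,
    "geode-ap-south": 220,
}
```

For this client, `nearest_geode(measured)` picks `"geode-eu-west"`; a client in Asia with different measurements would be routed to `"geode-ap-south"` instead.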
There is also the deployment-stamp pattern, which can be used for multi-region applications where each tenant's data and traffic should be directed to a specific region.
This relies on Azure Front Door to direct traffic to the closest instance, or it can use API Management deployed across multiple regions to enable geo-distribution of requests and geo-redundancy of the traffic-routing service. Azure Front Door can be configured with a backend pool, enabling requests to be directed to the closest available API Management instance. The global distribution features of Cosmos DB can be used to keep the mapping information updated across each region.
Azure Front Door is often described as a "scalable and secure entry point for fast delivery of your global applications". Front Door operates at Layer 7 (the HTTP/HTTPS layer), using anycast with split TCP and Microsoft's global network for improved latency and global connectivity. Based on the routing method, Front Door routes client requests to the fastest and most available application backend (an internet-facing service hosted inside or outside of Azure).
The equivalent of Azure Front Door in Google Cloud Platform is Google Cloud CDN, described as "low-latency, low-cost content delivery using Google's global network"; it leverages Google's globally distributed edge caches to accelerate content delivery for websites and applications served out of Google Compute Engine.
Similarly, Amazon has Amazon CloudFront, a CDN service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, in a developer-friendly environment. The AWS backbone is also a private network built on a global, fully redundant fiber network linked via trans-oceanic cables. Amazon CloudFront automatically maps network conditions and intelligently routes traffic to the most performant AWS edge location to serve up cached or dynamic content.
Here is a reference use case for intercontinental traffic over the AWS global backbone: users need access to applications running in one data center as well as core systems running in another data center, with the different sites interconnected by a global WAN. Traffic using inter-region Transit Gateway peering is always encrypted, stays on the AWS global network, and never traverses the public internet. Transit Gateway peering thus enables international, in this case intercontinental, communication. Once the traffic arrives at a given continent/region's Transit Gateway, the customer routes it over AWS Direct Connect (or VPN) to the central data center where the core systems are hosted.

ServiceNow MID Server failover cluster using Azure VMs

How do I configure failover for a ServiceNow MID Server running on Azure VMs? Should I choose Azure VMSS for failover?
What options do we have for failover of ServiceNow in Azure VMs? Is it Azure Availability Zones?
Please help.
All the options you mention are good choices, but there are differences between them. I'll explain the differences as I understand them, and you can pick one of them, or combine them, to match your requirements.
A virtual machine scale set is a group of load-balanced VMs. When one instance fails, the load balancer stops sending requests to the failed instance and balances them across the other instances, provided the scale set has more than one instance. So it isn't failover for the ServiceNow MID Server as such, but it can achieve the same goal.
Availability Zones are unique physical locations within an Azure region. The physical separation of Availability Zones within a region protects applications and data from datacenter failures. Zone-redundant services replicate your applications and data across Availability Zones to protect against single points of failure. I think this is what you mean by failover for the ServiceNow MID Server.
You can choose one of them, or you will get higher availability if you combine both.
Whenever we want to configure the failover cluster option for ServiceNow's MID Server, we have to configure it in the ServiceNow SaaS portal, where we specify the name of the failover VM. Hence the only option is to specify the failover server name and keep that VM in a different availability zone (for zone-level redundancy) or availability set.

What if I choose the us-central1-a zone for my Google Compute Engine VM instance and the traffic calling the VM is from Asia? (with respect to pricing & efficiency)

I am trying to create a Google Compute Engine VM instance to host my website. The traffic to this website will come mostly from Asia, so which region should I select for my VM instance?
How will the choice of region affect pricing and performance?
Have a look at the "Factors to consider when selecting regions" section of Best practices for Compute Engine region selection:
Latency
The main factor to consider is the latency your user experiences.
However, this is a complex problem because user latency is affected by
multiple aspects, such as caching and load-balancing mechanisms.
In enterprise use cases, latency to on-premises systems or latency for
a certain subset of users or partners is more critical. For example,
choosing the closest region to your developers or on-premises database
services interconnected with Google Cloud might be the deciding
factor.
For example, you can surf some sites located in Asia and then compare your experience to sites located in the US; you'll notice a significant difference in response time caused by latency. The same applies to your site: it will be less responsive. You should set up your VM instance as close to your customers as possible.
To estimate pricing check resources below:
Pricing
Google Cloud resource costs differ by region. The following resources
are available to estimate the price:
Compute Engine pricing
Pricing calculator
Google Cloud SKUs
Billing API
If you decide to deploy in multiple regions, be aware that there are
network egress charges for data synced between regions.
In addition, you can find a monthly cost estimate in the "Create a new instance" wizard as well; try setting different regions and you'll get the numbers.
If your customers are located in different regions, you can try Google Cloud CDN:
Cloud CDN (Content Delivery Network) uses Google's globally
distributed edge points of presence to cache HTTP(S) load balanced
content close to your users. Caching content at the edges of Google's
network provides faster delivery of content to your users while
reducing serving costs.

Which availability zone is my MongoLab instance in?

From their UI, I can only see that it is in AWS us-east-1. Is there anywhere I can find out whether it is in us-east-1b/1c/1d?
As discussed in this blog, availability zones are logical, not physical. So even if we did show which of our account's logical AZs your server was in (we put each server in a cluster in a different AZ), it wouldn't be meaningful to you. Rather than confuse, we leave the information out.
Availability Zones are not the same across AWS accounts. There is a common misconception that an AZ name like "us-east-1a" identifies a specific physical availability zone for everyone. In fact, AWS can map the same AZ name to different physical availability zones across accounts: us-east-1a for account A is not necessarily the same as us-east-1a for account B, because zone assignments are mapped independently for each account. This is important when your infrastructure or use cases span multiple accounts. For example, if infrastructure is provisioned through Account A and load-testing agents are launched through Account B, both pointing at "us-east-1a" may not map to the same physical AZ.
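AWS does expose account-independent zone IDs (e.g. use1-az1) alongside the per-account zone names, which is how you can tell whether two accounts' zone names refer to the same physical zone. As a sketch, the dict below imitates the shape of the data returned by boto3's `ec2.describe_availability_zones()`; the specific ZoneId values are illustrative and will differ for your account:

```python
# Map each per-account AZ *name* to its stable, account-independent
# zone *ID*, from a describe_availability_zones-style response.

def name_to_zone_id(describe_response):
    """Return {zone name: zone ID} for every zone in the response."""
    return {
        az["ZoneName"]: az["ZoneId"]
        for az in describe_response["AvailabilityZones"]
    }

# Example response fragment (values are made up for illustration):
sample = {
    "AvailabilityZones": [
        {"ZoneName": "us-east-1a", "ZoneId": "use1-az4"},
        {"ZoneName": "us-east-1b", "ZoneId": "use1-az6"},
    ]
}
```

If two accounts both resolve a name to the same ZoneId, they are pointing at the same physical zone; if the IDs differ, the shared name is coincidental.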

Heroku instances in an Amazon VPC - Possible?

My company uses AWS heavily and has several Amazon Direct Connect network links from our points of presence into Amazon. These reduce our latency and costs.
http://aws.amazon.com/directconnect/
We would like to be able to use Heroku more extensively with our internal applications, but the dynos would need to exist inside our Amazon VPCs in order for us to get the latency and cost benefits. I can't see a way to do this.
Is there any way for Heroku customers to run their dynos inside specific Amazon VPCs?
