ServiceNow MID Server failover cluster using Azure VMs

How do I configure failover for a ServiceNow MID Server running on Azure VMs? Should I choose Azure VMSS for failover?
What other failover options do we have for the MID Server in Azure VMs? Is it Azure Availability Zones?
Please help.

Both options you mention are good choices, but there are differences between them. I'll outline the differences as I understand them, and you can choose one of them, or combine them, to match your requirements.
A virtual machine scale set is a group of load-balanced VMs. When one instance fails, the load balancer stops sending requests to the failed instance and distributes them among the remaining instances, provided the scale set has more than one instance. So it is not failover for the ServiceNow MID Server as such, but it can achieve the same goal.
Availability Zones are unique physical locations within an Azure region. The physical separation of Availability Zones within a region protects applications and data from datacenter failures. Zone-redundant services replicate your applications and data across Availability Zones to protect from single points of failure. I think this is what you mean by failover for the ServiceNow MID Server.
You can choose either one, or combine both of them for even higher availability.

Whenever we want to configure the failover cluster option for ServiceNow's MID Server, we have to configure it in the ServiceNow SaaS portal, where we specify the name of the failover VM. Hence the only option is to specify the failover server name and keep that VM in a different Availability Zone (for zone-level redundancy) or in an availability set.

Related

How are the clusters connected in different regions in a distributed system?

A newbie here trying to understand distributed architecture. I understand that the nodes in clusters are interconnected via LAN. How are clusters connected across different regions, let's say different continents? Are there any frameworks or patterns I can read about on this?
In general, it is achieved using fully redundant undersea cables (fiber network cables). The cables contain very thin threads (glass fibers) that transfer data using fiber-optic technology at nearly the speed of light across the oceans between continents. Once the data is received on the other continent, it is processed by connecting to an existing network via the edge network closest to it, which in turn carries it to other endpoints/gateways as applicable.
The routing in such a scenario depends on the underlying routing protocol and the endpoints. In general, there are Border Gateway Protocol (BGP) enabled gateways that automatically learn routes to other sites and carry the data accordingly.
Cloud providers such as AWS extend regions with AWS Local Zones and AWS Wavelength, which work together with internet service providers to meet the performance requirements of an application. This is achieved by placing AWS infrastructure (AWS compute and storage services within ISP datacenters) closer to the user, or at the edge of the 5G network, so that application traffic from a particular set of 5G devices can reach servers in Wavelength Zones with minimal latency, without going over the normal internet, which would have introduced latency in reaching the server.
The exact pattern/architecture depends on the software requirement/design and the software components and hardware components in use.
A typical pattern to take as an example is the Geode pattern: a set of geographical nodes with backend services deployed such that they can service any request for any client in any region. By distributing request processing around the globe, this pattern reduces latency and improves availability.
Typically, geo-distributed datastores should also be co-located with the compute resources that process the data when the data is geo-distributed across a far-flung user base. The Geode pattern brings the compute to the data: the service is deployed as a set of satellite deployments spread around the globe, each of which is termed a geode.
This pattern relies on Azure's routing features to direct traffic to the nearby geode via the shortest path, which in turn improves latency and performance. The deployment places a global load balancer in front of the geodes and uses a geo-replicated read-write service such as Azure Cosmos DB for the data plane, providing cross-geode data consistency via data replication so that all geodes can serve all requests.
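To make the "route to the nearest geode" idea concrete, here is a minimal sketch in plain Python. The geode names and latency figures are made up; in a real Geode deployment this decision is made by the global load balancer (e.g. via anycast routing), not by application code.

```python
# Pick the geode with the lowest measured round-trip time to the client.
# Real deployments delegate this to the global load balancer; this sketch
# only illustrates the selection logic.

def nearest_geode(latencies_ms):
    """latencies_ms: mapping of geode name -> measured RTT in milliseconds."""
    if not latencies_ms:
        raise ValueError("no geodes available")
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical measurements from one client:
measured = {
    "geode-westeurope": 38.0,
    "geode-eastus": 95.5,
    "geode-southeastasia": 210.1,
}
print(nearest_geode(measured))  # prints the lowest-latency geode
```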
There is also the deployment-stamp pattern, which can be used for multi-region applications where each tenant's data and traffic should be directed to a specific region.
This relies on Azure Front Door for directing traffic to the closest instance, or it can utilize API Management deployed across multiple regions to enable geo-distribution of requests and geo-redundancy of the traffic-routing service. Azure Front Door can be configured with a backend pool, enabling requests to be directed to the closest available API Management instance, and the global distribution features of Cosmos DB can be used to keep the tenant-to-region mapping information updated across each region.
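A minimal sketch of the tenant-to-stamp lookup at the heart of that pattern (the tenant and stamp names below are hypothetical; in the pattern described above the mapping would live in a globally replicated store such as Cosmos DB, with Front Door or API Management performing the actual redirection):

```python
# Route each tenant's request to the region (stamp) that owns its data.
TENANT_TO_STAMP = {
    "contoso": "eu-stamp",
    "fabrikam": "us-stamp",
}

STAMP_ENDPOINTS = {
    "eu-stamp": "https://eu.example.com",
    "us-stamp": "https://us.example.com",
}

def endpoint_for_tenant(tenant_id, default_stamp="us-stamp"):
    """Return the base URL of the stamp that serves this tenant."""
    stamp = TENANT_TO_STAMP.get(tenant_id, default_stamp)
    return STAMP_ENDPOINTS[stamp]

print(endpoint_for_tenant("contoso"))   # https://eu.example.com
print(endpoint_for_tenant("unknown"))   # falls back to the default stamp
```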
Azure Front Door is often described as a "scalable and secure entry point for fast delivery of your global applications". Front Door operates at Layer 7 (the HTTP/HTTPS layer), using anycast with split TCP and Microsoft's global network for improved latency and global connectivity. Based on the configured routing method, Front Door routes client requests to the fastest and most available application backend (an internet-facing service hosted inside or outside of Azure).
The equivalent of Azure Front Door in Google Cloud Platform is Google Cloud CDN, described as "low-latency, low-cost content delivery using Google's global network"; it leverages Google's globally distributed edge caches to accelerate content delivery for websites and applications served out of Google Compute Engine.
Similarly, Amazon has Amazon CloudFront. This is a CDN service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment. The AWS backbone is also a private network built on a global, fully redundant fiber network linked via trans-oceanic cables. Amazon CloudFront automatically maps network conditions and intelligently routes traffic to the most performant AWS edge location to serve cached or dynamic content.
Here is a reference for AWS, with a use case spanning different continents over the AWS global backbone: users need access to applications running in one data center as well as core systems running in another data center, with the different sites interconnected by a global WAN. Traffic using inter-region Transit Gateway peering is always encrypted, stays on the AWS global network, and never traverses the public internet. Transit Gateway peering enables international, in this case intercontinental, communication. Once the traffic arrives at a particular continent/region's Transit Gateway, the customer routes it over AWS Direct Connect (or VPN) to the central data center where the core systems are hosted.

Akka, AMI - discover remote actors for database access

I am working on a prototype for a client where AWS auto-scaling is used to create new VMs from Amazon Machine Images (AMIs), using Akka.
I want just one actor to control access to the database, so it will create new children as needed and queue up requests that go beyond a set limit.
But I don't know the IP address of the VM, as it may change when Amazon adds/removes VMs based on activity.
How can I discover the actor that will be used to limit access to the database?
I am not certain if clustering will work (http://doc.akka.io/docs/akka/2.4/scala/cluster-usage.html), and this question and answers are from 2011 (Akka remote actor server discovery), and possibly routing may solve this problem: http://doc.akka.io/docs/akka/2.4.16/scala/routing.html
I have a separate REST service that just goes to the database, so it may be that this service will need to enforce the limit before requests reach the actors.
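To make the intent concrete, here is a rough sketch of the gatekeeping behaviour I'm after, in plain Python with a semaphore standing in for the single Akka actor (all names are made up; this is the pattern, not Akka code):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# A single gatekeeper that allows at most MAX_CONCURRENT database
# operations; extra requests block (queue up) until a slot frees.
MAX_CONCURRENT = 3
_db_slots = threading.Semaphore(MAX_CONCURRENT)

def run_db_query(query):
    with _db_slots:                   # blocks once the limit is reached
        return f"result of {query}"   # placeholder for the real DB call

# Ten concurrent callers, but never more than three queries in flight:
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(run_db_query, [f"q{i}" for i in range(10)]))
print(results[0])  # result of q0
```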

MongoLab instance in which availability zone?

From their UI, I can only see that it is in AWS us-east-1. Is there anywhere I can find out whether it is in us-east-1b/1c/1d?
As discussed in this blog, availability zones are logical, not physical. So even if we did show which of our account's logical AZs your server was in (we put each server in a cluster in a different AZ), it wouldn't be meaningful to you. Rather than confuse, we leave that information out.
Availability Zones are not the same across AWS accounts. There is a common misconception that an AZ name like "us-east-1a" identifies a specific physical availability zone for everyone. In fact, AWS can map/remap the same AZ name to different physical availability zones across accounts: us-east-1a for account A is not necessarily the same as us-east-1a for account B, because zone assignments are mapped independently for each account. This matters when your infrastructure or use case spans multiple accounts. Example: infrastructure provisioned through Account-A and load-testing agents launched through Account-B, both pointing at "us-east-1a", may not map to the same physical AZ.
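One thing worth knowing: the EC2 API exposes a stable zone ID (e.g. use1-az1) alongside the account-local zone name, and the zone ID does identify the same physical AZ across accounts. A small sketch of extracting that name-to-ID mapping from a describe_availability_zones-style response (the sample data below is illustrative, not real output):

```python
# Zone *names* (us-east-1a) are account-specific; zone *IDs* (use1-az4)
# are stable across accounts and identify the physical AZ.

def zone_name_to_id(describe_response):
    """Map each account-local zone name to its physical zone ID."""
    return {
        az["ZoneName"]: az["ZoneId"]
        for az in describe_response["AvailabilityZones"]
    }

# With live credentials you would fetch the real response like this:
#   import boto3
#   resp = boto3.client("ec2", region_name="us-east-1").describe_availability_zones()

sample_resp = {
    "AvailabilityZones": [
        {"ZoneName": "us-east-1a", "ZoneId": "use1-az4"},
        {"ZoneName": "us-east-1b", "ZoneId": "use1-az6"},
    ]
}
print(zone_name_to_id(sample_resp))
```

Comparing the zone IDs from both accounts tells you whether two differently-named (or identically-named) AZs are actually the same physical zone.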

Azure cache configs - multiple on one storage account?

While developing an Azure application I got the famous error "Cache referred to does not exist", and after a while I found this solution: datacacheexception Cache referred to does not exist (in short: don't point multiple cache clusters at one storage account via ConfigStoreConnectionString).
Well, I have 3 roles using co-located cache, plus testing and production environments. So I would have to create 6 "dummy" storage accounts just for cache configuration, which doesn't seem very nice to me.
So the question is: is there any way to point multiple cache clusters at one storage account? For example, by specifying different containers for them (they create one named "cacheclusterconfigs" by default)?
Thanks!
Given your setup, I would point each cloud service at its own storage account, which gives two per environment (one for each cloud service). There are other alternatives: you could set up Server AppFabric Cache in an IaaS VM and expose it to both of your cloud services by placing them all within a single Azure Virtual Network. However, this will introduce latency to the connections as well as increase costs (from running the virtual network).
You can also make the cache's storage account the same one used for diagnostics or for your cloud services' data storage; just be aware of the scalability limits, as the cache will generate some traffic (mainly from the addition of new items to the cache).
But unfortunately, to my knowledge there's no option currently to allow for two caches to share the same storage account.
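For reference, this is the kind of per-cluster configuration that implies: one ConfigStoreConnectionString per cache cluster, each pointing at its own storage account. The fragments below are hypothetical (role names, account names, and keys are made up; the setting name is assumed to be the co-located Caching plugin's ConfigStoreConnectionString):

```xml
<!-- Hypothetical .cscfg fragments: each cache cluster gets its own
     storage account in ConfigStoreConnectionString. -->
<Role name="WebRoleTest">
  <ConfigurationSettings>
    <Setting name="Microsoft.WindowsAzure.Plugins.Caching.ConfigStoreConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=cachecfgtest1;AccountKey=..." />
  </ConfigurationSettings>
</Role>
<Role name="WebRoleProd">
  <ConfigurationSettings>
    <Setting name="Microsoft.WindowsAzure.Plugins.Caching.ConfigStoreConnectionString"
             value="DefaultEndpointsProtocol=https;AccountName=cachecfgprod1;AccountKey=..." />
  </ConfigurationSettings>
</Role>
```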

Cheapest, future-scalable way to host an HTTPS PHP website on AWS?

I've already got an RDS instance configured and running, but currently we're still running on our old web host. We'd like to decrease latency by hosting the code within AWS.
Two concerns:
1) Future scalability
2) Redundancy ... not a huge concern but AWS does occasionally go down.
Has anyone had this problem where they just need to cheaply run what is essentially a database interface, via a language such as PHP/Ruby, in two regions (with one as a failover)?
Does Amazon offer something that automatically manages resources, that's also cost effective?
Amazon's Elastic Beanstalk service supports both PHP and Ruby apps natively, and allows you to scale your app servers automatically.
In a second region, run a read-replica RDS instance off your master (easy to set up in RDS) and have another Beanstalk environment there, ready as a failover.
