Azure Traffic Manager - Disabled Endpoint still accessible? - azure-traffic-manager

I have configured Azure Traffic Manager with two endpoints and could access the Traffic Manager URL. I wanted to validate the scenario where the endpoints are disabled, so I disabled both of them.
To my surprise, the Traffic Manager URL was still accessible for about ~2 minutes. Is this expected?

It's expected.
The endpoint status (enabled/disabled) controls whether the endpoint participates in the Traffic Manager profile; the underlying service, which may still be perfectly healthy, is unaffected. When an endpoint is disabled, Traffic Manager stops checking its health and no longer includes it in DNS responses. Because Traffic Manager operates purely at the DNS level, clients that have already resolved the name keep connecting directly to the endpoint until their cached DNS answer expires (the profile's TTL is 60 seconds by default), which explains the ~2 minute window you observed. Read https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-monitoring#endpoint-and-profile-status
Also note this from the docs:
Disabling an endpoint has nothing to do with its deployment state in
Azure. A healthy endpoint remains up and able to receive traffic even
when disabled in Traffic Manager. Additionally, disabling an endpoint
in one profile does not affect its status in another profile.
Read https://learn.microsoft.com/en-us/azure/traffic-manager/traffic-manager-manage-endpoints
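For illustration, here is a minimal Go sketch (assuming a hypothetical profile FQDN, myprofile.trafficmanager.net) that watches what the Traffic Manager name resolves to over time; after disabling both endpoints you should see the answer change once the cached response expires:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Hypothetical Traffic Manager profile name; substitute your own.
	const profile = "myprofile.trafficmanager.net"

	// Traffic Manager answers at the DNS level only, so disabling an endpoint
	// changes future DNS responses; clients keep using any cached answer
	// until its TTL expires.
	for i := 0; i < 10; i++ {
		cname, err := net.LookupCNAME(profile)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", profile, err)
		} else {
			fmt.Printf("%s currently resolves to %s\n", profile, cname)
		}
		time.Sleep(30 * time.Second)
	}
}
```

Go's resolver does not expose the remaining TTL, so the sketch simply polls; running dig against the profile name shows the TTL directly.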

Related

AWS Route traffic to two load balancers simultaneously

I have a requirement to record all incoming and outgoing traffic to my application load balancer. I have a tool from F5 (installed on EC2) to receive the traffic, process it, and perform actions (so I can set up ELB+ASG for this). However, I also want the traffic to reach the web server (Apache+PHP) so that my application keeps working.
I know GuardDuty and VPC Flow Logs are alternatives, but they have limitations (they don't capture all events that reach the EC2 instance), hence I need to rely on third-party tools such as F5 or Check Point.
Regards
Senthil

GKE + WebSocket + NodePort 30s dropped connections

I have a golang service that implements a WebSocket client using gorilla that is exposed to a Google Container Engine (GKE)/k8s cluster via a NodePort (30002 in this case).
I've got a manually created load balancer (i.e. NOT a k8s ingress/load balancer) with HTTP/HTTPS frontends (i.e. 80/443) that forwards traffic to nodes in my GKE/k8s cluster on port 30002.
I can get my JavaScript WebSocket implementation in the browser (Chrome 58.0.3029.110 on OSX) to connect, upgrade and send / receive messages.
I log ping/pongs in the golang WebSocket client and all looks good until 30s in. 30s after connecting, my golang WebSocket client gets an EOF / close 1006 (abnormal closure) and my JavaScript code gets a close event. As far as I can tell, neither my Golang nor my JavaScript code is initiating the WebSocket closure.
I don't particularly care about session affinity in this case AFAIK, but I have tried both IP and cookie based affinity in the load balancer with long lived cookies.
Additionally, this exact same set of k8s deployment/pod/service specs and golang service code works great on my KOPS based k8s cluster on AWS through AWS' ELBs.
Any ideas where the 30s forced closures might be coming from? Could that be a k8s default cluster setting specific to GKE or something on the GCE load balancer?
Thanks for reading!
-- UPDATE --
There is a backend configuration timeout setting on the load balancer which is for "How long to wait for the backend service to respond before considering it a failed request".
The WebSocket is not unresponsive. It is sending ping/pong and other messages right up until getting killed which I can verify by console.log's in the browser and logs in the golang service.
That said, if I bump the load balancer backend timeout setting to 30000 seconds, things "work".
It doesn't feel like a real fix though, because the load balancer will keep feeding traffic to genuinely unresponsive services for far too long, never mind what happens if the WebSocket itself does become unresponsive.
I've isolated the high timeout setting to a specific backend setting using a path map, but hoping to come up with a real fix to the problem.
I think this may be Working as Intended. Google just updated the documentation today (about an hour ago).
LB Proxy Support docs
Backend Service Components docs
Cheers,
Matt
Check out the following example: https://github.com/kubernetes/ingress-gce/tree/master/examples/websocket
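For completeness, here is a minimal reconnecting client sketch with gorilla/websocket, assuming a hypothetical wss://example.com/ws endpoint behind the load balancer. Raising the backend service timeout remains the actual mitigation; this just lets the session recover whenever the timeout does fire:

```go
package main

import (
	"log"
	"time"

	"github.com/gorilla/websocket"
)

// Hypothetical endpoint behind the GCE HTTP(S) load balancer.
const wsURL = "wss://example.com/ws"

// dialAndServe keeps one WebSocket session alive until the connection drops.
func dialAndServe() error {
	conn, _, err := websocket.DefaultDialer.Dial(wsURL, nil)
	if err != nil {
		return err
	}
	defer conn.Close()
	for {
		// The GCE backend service timeout caps the lifetime of a WebSocket,
		// so a close 1006 here is expected once that timeout elapses.
		_, msg, err := conn.ReadMessage()
		if err != nil {
			return err
		}
		log.Printf("received: %s", msg)
	}
}

func main() {
	// Reconnect whenever the load balancer (or anything else) closes the
	// connection, so the timeout-driven closures are only a brief blip.
	for {
		if err := dialAndServe(); err != nil {
			log.Printf("connection ended: %v; reconnecting", err)
		}
		time.Sleep(2 * time.Second)
	}
}
```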

Load balancing WebSockets - AWS Elastic Load Balancer

I have a question about how to load balance web sockets with AWS elastic load balancer.
I have 2 EC2 instances behind AWS elastic load balancer.
When any user logs in, the user session will be established with one of the servers, say EC2 instance1. Now all the requests from the same user will be routed to EC2 instance1.
Now I have a different, stateless request coming from a different system. This request will have a userId in it and might end up going to EC2 instance2. We are supposed to send a notification to the user based on the userId in the request.
Now,
1) Assume, the user session is with the EC2 instance1, but the notification is originating from the EC2 instance2.
I am not sure how to notify the user browser in this case.
2) Is there any limitation on the number of WebSocket connections (like 64K), and how do we overcome it with multiple servers, given that users come through the load balancer?
Thanks
You will need something else to notify the browser's WebSocket server end about the event coming from the other system. There are a couple of publish-subscribe based solutions which might help, but without knowing more details it is a bit hard to figure out which fits best. Redis is generally a good answer, and ElastiCache supports it.
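As a sketch of the pub/sub idea (not a drop-in implementation), assuming an ElastiCache Redis endpoint at my-cache.example.com:6379 and the go-redis client: each instance subscribes to a per-user channel for the sockets it holds, and any instance can publish a notification to that channel.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	// Hypothetical ElastiCache Redis endpoint.
	rdb := redis.NewClient(&redis.Options{Addr: "my-cache.example.com:6379"})

	// The instance holding user 42's WebSocket subscribes to that user's channel.
	sub := rdb.Subscribe(ctx, "notify:user:42")
	defer sub.Close()

	// In reality this publish happens on a different instance - the one that
	// received the stateless request containing the userId.
	go func() {
		time.Sleep(500 * time.Millisecond)
		if err := rdb.Publish(ctx, "notify:user:42", "you have a new message").Err(); err != nil {
			log.Println("publish failed:", err)
		}
	}()

	// Whichever instance holds the socket gets the message and can forward
	// msg.Payload down the local WebSocket connection to the browser.
	msg, err := sub.ReceiveMessage(ctx)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("notification for user 42: %s", msg.Payload)
}
```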
I found this regarding AWS ELB's limits:
http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_elastic_load_balancer
But none of them seems to be related to your question.
Websocket requests start with HTTP communication before handing over to websockets. In theory if you could include a cookie in that initial HTTP request then the sticky session features of ELB would allow you to direct websockets to specific EC2 instances. However, your websocket client may not support this.
A preferred solution would be to make your EC2 instances stateless. Store the WebSocket session data in AWS ElastiCache (either Redis or Memcached) so that incoming connections can access the session regardless of which EC2 instance is used.
The advantage of this solution is that you remove the dependency on individual EC2 instances and your application will scale and handle failures better.
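One way to picture the stateless-instances part, again as a hedged sketch with go-redis and hypothetical key names: keep the per-user session data (or at least which instance currently holds the user's socket) in Redis so any instance can look it up.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "my-cache.example.com:6379"})

	// When user 42 connects, record which instance holds the socket, with a
	// TTL so stale entries disappear if the instance dies without cleanup.
	if err := rdb.Set(ctx, "ws:holder:42", "instance-1", 2*time.Minute).Err(); err != nil {
		log.Fatal(err)
	}

	// Any other instance can now find out where to route a notification.
	holder, err := rdb.Get(ctx, "ws:holder:42").Result()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("user 42's WebSocket lives on %s", holder)
}
```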
If the ELB has too many incoming connections, it should scale automatically, although I can't find a reference for that. ELBs are relatively slow to scale (minutes rather than seconds); if you are expecting surges in traffic, AWS can "pre-warm" additional ELB capacity for you. This is done via support requests.
Also, factor in the ELB connection timeout. By default this is 60 seconds; it can be increased via the AWS console or API. Your application needs to send at least 1 byte of traffic before the timeout, or the ELB will drop the connection.
Recently I had to hook up crossbar.io WebSockets with an ALB. Basically there are two things to consider: 1) you need to set stickiness to 1 day on the target group attributes; 2) you either need something on the same port that returns a static web page if the connection is not upgraded, or a separate port serving a static page with a custom health check pointing at that port on the target group. Go for an ALB over an ELB: ALBs support ws:// and wss://, they only lack health checks over WebSockets.
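A rough sketch of point 2 in Go with gorilla/websocket, assuming hypothetical ports 8080 (traffic) and 8081 (health) and a /healthz path: the ALB target group forwards to 8080, while its health check is pointed at 8081, which always answers with a plain 200.

```go
package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{}

func wsHandler(w http.ResponseWriter, r *http.Request) {
	conn, err := upgrader.Upgrade(w, r, nil)
	if err != nil {
		return // not a WebSocket upgrade; Upgrade already wrote the error response
	}
	defer conn.Close()
	// ... application protocol over conn ...
}

func main() {
	// ALB cannot health-check over ws://, so serve a plain 200 page on a
	// second port and point the target group's health check at it.
	go func() {
		health := http.NewServeMux()
		health.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusOK)
			w.Write([]byte("ok"))
		})
		log.Fatal(http.ListenAndServe(":8081", health))
	}()

	// WebSocket traffic on the port the ALB forwards to.
	http.HandleFunc("/ws", wsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```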

What modification is required when PayPal updates their policy

One of my clients has received a mail saying that PayPal is upgrading their policy. When I did some R&D, I found a similar thread on the Magento forum, but no one has replied so far. Following is the link to that thread:
PAYPAL SERVICE UPGRADES
So my question is: what modifications do I have to make to the current configuration, or are there major changes required in the current Magento payment gateway code?
Any help or suggestion is appreciated.
The major thrust of the quoted document is "don't use hard-coded IP addresses".
Magento uses the server name, so DNS will resolve automatically to the newly assigned netblocks.
Your hosting provider may have to modify their firewalls if they're filtering traffic, but it's probably unlikely. They also need to be running a web server that uses HTTP 1.1.
Viewing your web server's response headers will tell you that; look for HTTP/1.1 200 OK.
Quoted PayPal notice of service upgrade
If your site is:
Calling our APIs with a hardcoded PayPal API endpoint IP address, rather than using DNS resolution: Impact of upgrade: API calls will time out or you will encounter an internal error from your system.
You need to: Use DNS resolution to access our API endpoints and/or open your firewall to the new IP addresses which will be communicated later.
Using HTTP methods other than GET, POST, DELETE and PUT: Impact of upgrade: API calls will return HTTP/1.0 400 Bad Request or HTTP Error 405 Method not allowed.
You need to: Send the API requests using one of the allowed methods. Heartbeat calls using the HEAD method won’t be allowed.
Using the HTTP 1.0 protocol: Impact of upgrade: API calls will return HTTP/1.0 400 Bad Request.
You need to: Update your code to HTTP 1.1 and include the Host header in the API request.
Needing firewall changes to allow new IP addresses: Impact of upgrade: API calls will error out if your system responsible for making API calls to PayPal is behind a firewall that uses Access Control List (ACL) rules and limits outbound traffic to a limited number of IP addresses.
You need to: Update your firewall ACL to allow outbound access to a new set of IP addresses we will be publishing. Test your integration on Sandbox (see the IP addresses for Sandbox API endpoints). The list of new IP addresses for our Live API endpoints will be posted here when available in January.
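To see which HTTP protocol version a given endpoint conversation actually uses (the "look for HTTP/1.1 200 OK" check mentioned above), here is a small sketch; the URL is a placeholder, so point it at your own store or at the PayPal endpoint your integration calls:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Placeholder URL - substitute the endpoint you want to check.
	resp, err := http.Get("https://www.example.com/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Go's client resolves the hostname via DNS (no hardcoded IPs) and
	// always sends the Host header; resp.Proto shows what was negotiated.
	fmt.Println("protocol:", resp.Proto) // e.g. HTTP/1.1 or HTTP/2.0
	fmt.Println("status:  ", resp.Status)
}
```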

What does the Amazon ELB automatic health check do and what does it expect?

Here is the thing:
We've implemented a C++ RESTful API server, with a built-in HTTP parser and no standard HTTP server like Apache or anything of the kind.
It has been in use for several months on Amazon infrastructure, using both plain and SSL communications, and no problems related to the Amazon infrastructure have been identified.
We are deploying our first backend using Amazon ELB
Amazon ELB has a customizable health check system but also an automatic one, as stated here.
We've found no documentation of what data is sent by the health check system
The backend simply hangs on the socket read instruction and, eventually, the connection is closed.
I'm not looking for a solution to the problem, since the backend is not based on a standard web server; I just want to know what kind of message is sent by the ELB health check system, since we've found no documentation about this anywhere.
Help is much appreciated. Thank you.
Amazon ELB has a customizable health check system but also an
automatic one, as stated here
With customizable you are presumably referring to the health check configurable via the AWS Management Console (see Configure Health Check Settings) or via the API (see ConfigureHealthCheck).
The requirements to pass health checks configured this way are outlined in field Target of the HealthCheck data type documentation:
Specifies the instance being checked. The protocol is either TCP,
HTTP, HTTPS, or SSL. The range of valid ports is one (1) through
65535.
Note
TCP is the default, specified as a TCP: port pair, for example
"TCP:5000". In this case a healthcheck simply attempts to open a TCP
connection to the instance on the specified port. Failure to connect
within the configured timeout is considered unhealthy.
SSL is also specified as SSL: port pair, for example, SSL:5000.
For HTTP or HTTPS protocol, the situation is different. You have to
include a ping path in the string. HTTP is specified as a
HTTP:port;/;PathToPing; grouping, for example
"HTTP:80/weather/us/wa/seattle". In this case, a HTTP GET request is
issued to the instance on the given port and path. Any answer other
than "200 OK" within the timeout period is considered unhealthy.
The total length of the HTTP ping target needs to be 1024 16-bit
Unicode characters or less.
[emphasis mine]
With automatic you are presumably referring to the health check described in paragraph Cause within Why is the health check URL different from the URL displayed in API and Console?:
In addition to the health check you configure for your load balancer,
a second health check is performed by the service to protect against
potential side-effects caused by instances being terminated without
being deregistered. To perform this check, the load balancer opens a
TCP connection on the same port that the health check is configured to
use, and then closes the connection after the health check is
completed. [emphasis mine]
The paragraph Solution clarifies that no payload is sent here, i.e. it is similar to the non-HTTP/HTTPS method described for the configurable health check above:
This extra health check does not affect the performance of your
application because it is not sending any data to your back-end
instances. You cannot disable or turn off this health check.
Summary / Solution
Assuming your RESTful API server with built-in HTTP parser is indeed supposed to serve HTTP only, you will need to handle two health checks:
The first one you configured yourself as an HTTP:port;/;PathToPing; target - you'll receive an HTTP GET request and must answer with 200 OK within the specified timeout period to be considered healthy.
The second one is configured automatically by the service - it will open a TCP connection on the HTTP port configured above, send no data, and then close the connection after the health check is completed.
In conclusion it seems that your server might be behaving perfectly fine already and you are just irritated by the 2nd health check's behavior - does ELB actually consider your server to be unhealthy?
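Translated into a small Go sketch purely for illustration (the original server is C++), assuming a hypothetical HTTP:80/healthz ping target: answer the configured check with 200 OK, and put a read timeout on connections so the automatic check's empty TCP connections get closed instead of leaving a reader blocked.

```go
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()
	// Hypothetical ping path matching a health check target of HTTP:80/healthz.
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK) // anything other than 200 counts as unhealthy
		w.Write([]byte("ok"))
	})

	srv := &http.Server{
		Addr:    ":80",
		Handler: mux,
		// The automatic ELB check opens a TCP connection and sends nothing,
		// so bound how long a connection may sit silent instead of blocking
		// indefinitely on the first read.
		ReadTimeout: 10 * time.Second,
	}
	log.Fatal(srv.ListenAndServe())
}
```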
As far as I know it's just an HTTP GET request looking for a 200 OK http response.
