EC2 inbound traffic between different accounts

I am deploying my web app on EC2. My app consumes a third-party REST API. The third party is hosted in US-EAST-1. If I host my app there too, does the communication still count as inbound traffic, which there is no need to pay for?

If you're connecting to the third-party REST API through a public endpoint (i.e. either a public IP or a public hostname), the requests will be considered outbound traffic, and whatever the external service sends back will count towards your inbound traffic. This is purely because the traffic is routed via public addresses rather than staying on a private network.
If you want to keep costs down between your own servers, move your EC2 instances into a VPC and have them communicate with each other through private IP addresses. Private traffic within the same Availability Zone is, as far as I'm aware, free of charge. Note that while a VPC can span every Availability Zone in its region, each subnet lives in a single Availability Zone, and traffic that crosses Availability Zones is billed at the regional rate. See Amazon's pricing model for details:
Availability Zone Data Transfer: $0.00 per GB – all data transferred between instances in the same Availability Zone using private IP addresses.
Regional Data Transfer: $0.01 per GB – all data transferred between instances in different Availability Zones in the same region.
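As a rough sketch of the private-IP approach (assuming boto3 and a Python client; the instance ID and port below are placeholders), you could look up a peer instance's private address and target it directly instead of its public hostname:

```python
import boto3

# Hypothetical peer instance ID; substitute the real one.
PEER_INSTANCE_ID = "i-0123456789abcdef0"

ec2 = boto3.client("ec2", region_name="us-east-1")
reservations = ec2.describe_instances(InstanceIds=[PEER_INSTANCE_ID])["Reservations"]
instance = reservations[0]["Instances"][0]

# Calling the peer on its private address keeps the traffic inside the VPC,
# which is free within the same Availability Zone.
private_ip = instance["PrivateIpAddress"]
base_url = f"http://{private_ip}:8080"  # 8080 is an assumed service port
print(f"Use {base_url} for intra-VPC calls")
```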

Related

How to whitelist IP/DNS of servers under loadbalancer?

I have two servers (EC2 instances provided by AWS) behind a load balancer (to be more specific, an ELB provided by AWS).
These servers run identical RESTful API code (state is saved in a shared database).
Let's say these servers provide a service called X.
These two servers are in an Auto Scaling group (ASG provided by AWS), meaning that if one dies or more computing power is needed, an additional server will be created.
Every server has its own IP address.
Let's say I am a consumer of this infrastructure. I send an HTTP POST request to a load balancer endpoint, and I expect at some point to receive an HTTP POST request in return. That request would be sent from one of the two servers behind the load balancer. How can I be sure that the HTTP POST request was sent by service X and not by someone else? (Initially I want to whitelist IPs or DNS names to be safe.)
Note that I cannot simply whitelist the IP address of every machine behind the load balancer, since machines are killed and spawned regularly (and are assigned new IP addresses each time).

For a given Heroku Dyno *instance*, is the external IP address stable for the lifetime of that instance?

Can I count on the external IP address being stable for any given Dyno instance?
That is, my Dyno boots up and makes a request to some external service. That service notes the incoming IP address. Could that service assume that any subsequent traffic from that same Dyno instance will come from the same IP address? Does the same apply if the same Dyno makes a request to a very different endpoint?
I understand that Heroku makes no guarantees about Dyno addressing unless you upgrade to the Private level offering (or otherwise spend more on add-ons or Enterprise features). I'm not looking to know in advance which IPs to expect, just whether they're stable.
I assume the architecture is fairly obvious: containers running on VMs that have outbound network access through the VM's interface, so external IPs for outbound connections will be the VM's IP address. However, Heroku emphasizes its routing layer and makes it sound complicated, so you never know whether they have some kind of outbound routing complexity as well, which is what I'm worried about.
On a broad level, you should never expect IP addresses to be stable on Heroku by default. This applies both to DNS targets (hence the requirement for CNAMEs everywhere) and to outbound IPs.
Regarding the specific question: yes, a single specific Dyno instance will keep the same outbound IP address, but that means it is only stable for ~24 hours at most (possibly plus ~3.5 hours; see /Dynos#restarting). After web.1's daily cycle, the newly launched web.1 will have a new public IP address. web.1, web.2, web.3, and so on, along with the Dynos of any other process groups, will likely never share the same public IP address at the same time.
There are ways to stabilize outbound IPs longer term, as accomplished by various proxy partner add-ons or any other proxy service you choose to use.
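If you want to observe this churn yourself, here is a minimal sketch (assuming the requests library; api.ipify.org is just an arbitrary echo service) that logs a dyno's outbound IP at boot:

```python
import os

import requests

# Ask an external echo service which address our outbound traffic comes from.
outbound_ip = requests.get("https://api.ipify.org", timeout=5).text

# Heroku sets DYNO (e.g. "web.1"); fall back for local runs.
dyno = os.environ.get("DYNO", "local")
print(f"{dyno} currently makes outbound requests from {outbound_ip}")
```

Comparing the logged address across daily restarts will show the rotation described above.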

What's the auto scaling strategy of AWS ELB?

We are trying to build a WebSocket server (with 500k concurrent connections, but very low traffic) on AWS behind an ELB.
We tested a single server with a public IP, and it can handle more than 500k connections.
But when we put the servers behind the ELB, every server can only get 65,536 connections, and we can see a large Spillover Count in the ELB monitoring. (65,536 is suspiciously the size of the port range, as if a single ELB node were running out of source ports against each backend.)
The ELB documentation says they auto scale the ELB by changing the IP list in DNS.
But when I dig my domain, I always get the same IP list.
The ELB auto scaling does not seem to work.
We submitted a ticket to AWS, and they said their ELB scaling is based on request rate; a new connection only counts as a single request. So with long-lived, low-traffic connections the ELB never scales out, and ELB can't be used for a long-connection server.
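You can watch the (non-)scaling yourself by resolving the ELB's DNS name and counting the distinct node addresses, which is essentially what dig shows. A minimal sketch (the hostname is a placeholder for your load balancer's DNS name):

```python
import socket

# Placeholder; substitute your ELB's DNS name.
ELB_HOSTNAME = "my-elb-123456789.us-east-1.elb.amazonaws.com"

# Each distinct A record is an ELB node; more appear if the ELB scales out.
infos = socket.getaddrinfo(ELB_HOSTNAME, 80, proto=socket.IPPROTO_TCP)
addresses = sorted({info[4][0] for info in infos})
print(f"{len(addresses)} ELB node(s): {addresses}")
```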

Spring - Make microservice only accessible internally

How can I set up a microservice that can only be called by other internal services? The microservice should not be accessible publicly, because it has access to databases with secret information. I already tried to secure the microservice with Spring Security, but in that case I ran into problems with the FeignClient concerning authorization.
Assuming that you are unable to solve this problem with infrastructure (which is the only correct way to solve this problem), there are several (bad) techniques you can use:
IP Address White List – Create a list of good IP addresses; reject any request from an address not on the list.
IP Address Zone – A variation of the white list. Create a list of partial IP addresses and reject any request that does not match one of the partial addresses.
Non-Routing IP Only – A variation of the IP Address Zone. Only accept requests from non-routable IP addresses, which can only exist on your local network (see the sketch after this list). Here is a Wiki page of non-routable IP addresses.
Magic Token – Only accept requests that include a specific token. This is especially terrible, because somebody can watch your traffic and discover the token.
There are probably other options, but go with Infrastructure.
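For what the non-routable-IP check looks like in practice, here is a minimal sketch (Flask is used purely for illustration; any framework that exposes the peer address works the same way):

```python
import ipaddress

from flask import Flask, abort, request

app = Flask(__name__)

@app.before_request
def reject_external_callers():
    # is_private covers the RFC 1918 ranges (10/8, 172.16/12, 192.168/16).
    # Caveat: behind a proxy, remote_addr is the proxy, not the real caller.
    if not ipaddress.ip_address(request.remote_addr).is_private:
        abort(403)

@app.route("/internal/data")
def internal_data():
    return {"status": "only internal callers ever see this"}
```

This inherits all the weaknesses listed above, which is why infrastructure remains the right answer.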
This is really an infrastructure question. Typically you want a private network containing all your internal resources, which is not reachable from the internet, plus a second network segment or endpoint bridge (the so-called demilitarized zone, or DMZ) that provides external access. The endpoint could be a single server or an array of servers, implemented as a bastion host, that authenticates and authorizes callers and forwards legitimate calls to the private network.
The API gateway (or edge-server) pattern is often used to realize this. A good configuration of the gateway is important.
Here is an article of how to do it with the Amazon cloud.
And here a link to Kong, a general API gateway that you can deploy yourself.
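As one concrete sketch of the infrastructure approach on AWS (assuming boto3; the security group ID, port, and CIDR are placeholders), a security group that only admits traffic from inside the VPC keeps the service unreachable from the internet:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholders: the microservice's security group and the VPC's CIDR block.
INTERNAL_SG_ID = "sg-0123456789abcdef0"
VPC_CIDR = "10.0.0.0/16"

# Allow the service port only from addresses inside the VPC. With no other
# ingress rules, nothing on the public internet can reach the service.
ec2.authorize_security_group_ingress(
    GroupId=INTERNAL_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "IpRanges": [{"CidrIp": VPC_CIDR, "Description": "intra-VPC only"}],
    }],
)
```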

tcp_tw_recycle behind application level load balancer?

Given that our Linux servers never open direct connections to our clients, is it safe to use tcp_tw_recycle on them?
Those servers are behind an application-level load balancer, and all the connections I see on them are between internal 10.x.x.x addresses.
Thanks
We have such a load balancer provided by AWS (ELB), so I'll provide my advice based on that:
Why gamble? If your overhead/port consumption is coming from quick client connections, Amazon recommends enabling persistent connections on your ELB instead. (I asked them about this question specifically and got that recommendation; our Amazon contact does not recommend enabling tcp_tw_recycle.)
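(One related knob, sketched here under the assumption of a Classic ELB managed with boto3 and a placeholder load balancer name: raising the idle timeout gives persistent keep-alive connections room to stay open between requests.)

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# "my-elb" is a placeholder for your Classic ELB's name. A longer idle
# timeout lets keep-alive connections persist between requests instead of
# being torn down and re-established.
elb.modify_load_balancer_attributes(
    LoadBalancerName="my-elb",
    LoadBalancerAttributes={"ConnectionSettings": {"IdleTimeout": 300}},
)
```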
That said, if it's another internal box they're struggling to establish rapid connections with (say, Apache/PHP chatting with MySQL on behalf of the client without persistent connections), you might be able to get away with it:
If ALL client connections will be via the ELB (please set your security group accordingly), then technically speaking you shouldn't encounter problems from the tcp_tw_recycle timestamp-jumping cases I'm aware of:
ELB is a termination point on behalf of the client (their NAT firewall won't factor in, and ELB is not NAT based)
The ELB box(es) will not reset themselves, acquire the same IP address, and still be assigned as your ELB (will be someone else's if it happens at all)
The ELB box(es) will not be replaced by another ELB machine using the same IP and still be serving your traffic as your ELB (will be someone else's if it happens at all)
*2 and 3 are not a guarantee from Amazon, but it does appear to be their behavior (just as stop/start will get you a new private IP for EC2 boxes). If that did happen, I'd imagine it is a thing of extremely low probability.
You could theoretically run into issues restarting your own boxes if they communicate with other service machines (like MySQL or memcached) and you restart (not stop/start) one of them, or move their Elastic IP to another box, while not using private IPs for internal chatter. But you have some control over this. And if it's all on the AWS cloud (or your fast internal network), issues are extremely unlikely (unless your AWS zone is having a bad day and you're restarting/replacing your systems for that reason).
A buddy and I had a long-standing argument about this, and he won by proving his point with a long-running 4k-browser (fast script) load test via Neustar: there were no connection issues from the client side via the ELB, and eliminating the overhead helped quite a bit :-)
If you haven't already, consider tcp_tw_reuse (we were using this to keep the ephemeral port range active before the above-mentioned test showed the additional merit of eliminating the overhead with tcp_tw_recycle for us). Be sure to watch your counters on ifconfig if you do decide to disable that chunk of the protocol ;-P
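Before touching either knob, it's worth checking what the kernel currently has. A minimal sketch reading the standard Linux sysctl paths (note that tcp_tw_recycle was removed entirely in Linux 4.12, so the file may not exist on newer kernels):

```python
from pathlib import Path

# Standard sysctl locations for the two TIME_WAIT-related settings.
for knob in ("tcp_tw_reuse", "tcp_tw_recycle"):
    path = Path("/proc/sys/net/ipv4") / knob
    if path.exists():
        print(f"{knob} = {path.read_text().strip()}")
    else:
        # tcp_tw_recycle is gone as of Linux 4.12.
        print(f"{knob} is not available on this kernel")
```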
The following is also a good summary resource on the topic of timestamps jumping: Dropping of connections with tcp_tw_recycle
