Caching proxy solutions for internet-bound HTTPS traffic

Sorry if this is inappropriate for SO, but I wasn't sure where best to ask this question!
Background:
Running applications on EC2 Container Service (ECS) inside an AWS VPC.
There's the potential to move the functions making the requests to Lambda functions in the near future (3-6 months).
What I'm functionally looking to achieve:
Cache responses to HTTPS requests for specific URL patterns (e.g. subdomain.example.com) for specified periods (e.g. 7 days).
We're hitting API rate limits for free/paid services and want to inject a layer that handles duplicate requests transparently; unfortunately this isn't easy to handle at the application layer.
Have this applied at the VPC level (e.g. the Internet Gateway?) or the ECS service level - not too fussed which one.
If this could be transparent to the application itself, that'd be fantastic, but I'm guessing the fact that it's HTTPS traffic may throw a spanner in the works. I was initially thinking this might be possible at the Internet Gateway level, but I assume that doesn't have easy access to request headers.
Potential solutions:
Squid? (https://aws.amazon.com/articles/6463473546098546)
Linkerd? - Seems like something should be possible with this (we also don't have a unified service discovery approach at the moment so this might kill 2 birds with 1 stone).
Any suggestions would be greatly appreciated!
Alex
PS. As you can probably tell I'm a little out of my depth in this one, sorry if I'm mixing patterns/solutions!

If I understand your question correctly, you want to cache certain responses from the free/paid third-party APIs you call. I'm wondering whether you're looking for a solution that works inside your VPC, or if it's fine for the solution to sit outside of it.
If you're OK with a solution running outside of your VPC, CloudFront might be worth looking into. CloudFront can act as a caching layer for any content from any origin, even if the origin connection uses HTTPS. It is even possible to use signed URLs or signed cookies with CloudFront to restrict unwanted access, if that's what you're going for.
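For illustration, here's a minimal sketch of setting that up with boto3. The origin domain, IDs, and the 7-day TTL are placeholders, and it uses the legacy ForwardedValues fields for brevity (newer deployments would typically attach a cache policy instead):

```python
import time
import boto3  # assumes AWS credentials are already configured

cloudfront = boto3.client("cloudfront")

ORIGIN_DOMAIN = "subdomain.example.com"  # the third-party API to cache
WEEK = 7 * 24 * 3600                     # the 7-day TTL from the question

resp = cloudfront.create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),  # must be unique per request
    "Comment": "Caching layer for third-party API responses",
    "Enabled": True,
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "third-party-api",
            "DomainName": ORIGIN_DOMAIN,
            "CustomOriginConfig": {
                "HTTPPort": 80,
                "HTTPSPort": 443,
                "OriginProtocolPolicy": "https-only",  # HTTPS to the origin
            },
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "third-party-api",
        "ViewerProtocolPolicy": "https-only",
        # Pin cached responses to 7 days regardless of origin Cache-Control.
        "MinTTL": WEEK,
        "DefaultTTL": WEEK,
        "MaxTTL": WEEK,
        "ForwardedValues": {
            "QueryString": True,           # vary the cache on query params
            "Cookies": {"Forward": "none"},
        },
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
    },
})
print(resp["Distribution"]["DomainName"])  # point the app at this hostname
```

The application would then call the distribution's hostname instead of the API directly, which keeps the change small even if the caching isn't fully transparent.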

Related

How to remotely connect to a local elasticsearch server - in a secure way, of course

I have been playing around with creating a webapp that uses Elasticsearch to perform queries. Currently everything runs locally; let's say Elasticsearch runs at 123.123.123.123:9200. All fun and games, but once the web application (React) is finished, it should be able to send its queries to this Elasticsearch instance.
I have been reading up on how to get this done in a proper and, most of all, secure way. A summary of what I've found so far:
"First off, exposing an Elasticsearch node directly to the internet without protections in front of it is usually bad, bad news." (see here: Accessing elasticsearch from a public domain name or IP).
Another interesting blog I found: https://code972.com/blog/2017/01/dont-be-ransacked-securing-your-elasticsearch-cluster-properly-107.
The problem with the above-mentioned sources is that they are a bit older, so I am not sure whether they are still up to date.
Therefore the following questions:
Is nginx sufficient to act as a secure middleman, passing the queries from the end-users to elastic?
What is the difference, at that point, compared with writing a backend for the React application (e.g. using Node and Express)?
What is the added value taking into account the built-in security from elasticsearch (usernames, password, apikey, certificates, https,...)?
I am reading a lot about using a VPN or tunneling. My impression is that these solutions are geared more towards a corporate/collaborative setup. Say I am running my front-end on a live server: I could use tunneling to show my work to colleagues or an employer, while a VPN seems more realistic for allowing employees (wish I had them - just a CS student here) to access something like Kibana within my private network, e.g. to update an API key.
Thank you so much for helping me clarify the above-mentioned points!
TLS, authorisation, and access control are free for the Elastic Stack, and have been for a while. I'd start by looking at the docs, as that's an easy way to natively secure your cluster.
As for nginx: it can be useful for rate limiting or for blocking specific queries, for example, but it's another thing to configure and maintain.
From a client's point of view, it would only really matter if you are using the official Elasticsearch clients and nginx changes the way the API responds to the client (e.g. path rewrites, rate limiting).
The added value of the built-in security is that it's free, it's native, and it's easy to manage via Kibana.
As for VPNs and tunneling: I'd follow the docs to secure Elasticsearch first and see if you need that at some point in the future. It would be handled outside Elasticsearch anyway, and you'd still want to secure Elasticsearch itself.
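To make the native route concrete, here's a minimal sketch of what the client side looks like once security is enabled, using the official Python client. The address comes from the question; the API key and CA path are placeholders you'd generate via Kibana or the security APIs:

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Connect over TLS and authenticate with an API key instead of exposing
# an unauthenticated node to the internet.
es = Elasticsearch(
    "https://123.123.123.123:9200",
    api_key="BASE64_ENCODED_API_KEY",   # placeholder credential
    ca_certs="/path/to/http_ca.crt",    # CA that signed the node certificate
)

print(es.info())  # fails with an auth error unless the key is valid
```

The React app would talk to a small backend holding this credential rather than shipping the key to the browser.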
The problem with exposing Elasticsearch nodes directly to the internet is, in principle, a larger attack surface. You should follow the rule of exposing the least possible "surface" of your system to the internet.
Good practice is to hide from the internet whatever doesn't need to be there, even if it is well protected. Any exposed service typically starts receiving cyber attacks within ~20 minutes (see a showcase).
So I suggest you set up a private network, such as a traditional VPN or an SDP product such as Shieldoo Mesh.

Can a person add CORS headers using the ELB Application Load Balancer (sitting in front of Solr)?

We have a number of EC2 instances running Solr, which we've used in the past through another application. We would like to move towards allowing users (via web browser) to access Solr directly.
Without something "in front" of Solr this creates a security risk, so we have opted to try ELB (specifically the Application Load Balancer) as a simple and maintenance-free way of preventing certain requests from hitting Solr (i.e. preventing the public from deleting or otherwise modifying the documents in Solr).
This worked great, but we realize that we need to deal with the CORS issue. In other words, we need to add the appropriate headers to responses to requests that come in from a browser. I have not yet seen a way of doing this with the Application Load Balancer, but am wondering if it is possible somehow. If it is not possible, I would love a recommendation for the easiest and least complicated way of adding these headers. We really, really hate to add something like nginx in front of Solr, because then we've got additional redundancy to deal with, more servers, etc.
Thank you!
There is not much I can find on CORS for ALB either, and I remember that when I used Beanstalk with ELB, I had to add CORS support directly in my Java application.
Having said that, I can find a lot of articles on how to set up CORS for Solr itself.
Could that be an option for you?
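For what the application-layer approach looks like (not the Java code mentioned above, just a Python sketch with hypothetical hosts and names): a thin search endpoint passes queries through to Solr and attaches the CORS headers the browser needs, which also keeps Solr's modifying endpoints unreachable:

```python
from flask import Flask, Response, request
import requests  # pip install flask requests

app = Flask(__name__)
SOLR_SELECT = "http://internal-solr:8983/solr/mycore/select"  # hypothetical

@app.route("/search")  # read-only: only /select is ever reachable
def search():
    # Preserve repeated query params (fq=..., fq=...) when forwarding.
    upstream = requests.get(SOLR_SELECT, params=request.args.to_dict(flat=False))
    resp = Response(upstream.content,
                    content_type=upstream.headers.get("Content-Type"))
    # The headers the browser checks on cross-origin responses.
    resp.headers["Access-Control-Allow-Origin"] = "https://app.example.com"
    resp.headers["Access-Control-Allow-Methods"] = "GET"
    return resp
```

Whether this is lighter than configuring CORS in Solr itself depends on how much extra infrastructure you're willing to run, which is exactly the trade-off in the question.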

How to properly determine Amazon AWS Heroku subnets?

I need to be able to enable access through a firewall to a server for an app that is built atop Heroku. Unfortunately the IPs coming from Heroku's AWS instances seem to vary quite a bit. Is there a "correct" way of determining what subnet to expect from Heroku's AWS platform for an app?
As unfortunate as this is -- there isn't a good way to continuously get this information. On the AWS forums, however, the EC2 engineers tend to occasionally post their IP ranges (here is a recent example: https://forums.aws.amazon.com/ann.jspa?annID=1701).
The downside to this, however, is that it requires a lot of manual work.
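These days the manual part can at least be scripted: AWS publishes its address ranges as JSON at a stable URL, and filtering for the EC2 service yields the (large, changing) superset of subnets Heroku dynos can come from. A minimal sketch:

```python
import json
import urllib.request

# AWS's published address ranges; the EC2 entries are the superset of
# subnets a Heroku dyno's traffic can originate from.
URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

ec2_cidrs = [p["ip_prefix"] for p in data["prefixes"] if p["service"] == "EC2"]
print(len(ec2_cidrs), "EC2 CIDR blocks, e.g.", ec2_cidrs[:3])
```

Note that this doesn't change the conclusion below: allowing all of EC2 through a firewall is barely better than allowing everything.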
There is no reliable way to accept Heroku public IPs in firewalls. Even if there was, you would be compromising your application and opening up an attack vector via other apps on Heroku.
The solution is to have an adequate authentication layer in your exposed services.
This question was asked a few years ago back when services like Proximo didn’t exist -- or weren’t known within the Heroku community.
Today, if you want your outbound traffic to come through a static IP which you can whitelist in your firewall, you can use a proxy service like Proximo (Fixie is another example).
There are a few downsides to using these services:
1) Intrusive Setup
Although the setup of these addons is relatively simple, it’s important to understand how they affect the application.
In case of Proximo, for example, you’ll be required to wrap your processes in a special utility.
This utility will "automatically forward outbound TCP connections made by the wrapped process over your proxy." (A sketch of the more explicit, per-request alternative follows this list.)
2) Latency
To make your outbound traffic come from a static IP, these services route the traffic through a proxy. This means you’ll add another hop to your outbound communication.
I know that applications running on Heroku usually aren't very sensitive to network latency, but it's important to take this issue into consideration.
3) Uptime
Although these services are relatively stable, it should be noted that routing the traffic through a specialized third-party proxy adds another point of failure and may affect the overall stability of your applications.
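As promised above, here's what the less intrusive, per-request variant can look like in Python. It assumes the add-on exposes its proxy endpoint in an environment variable (Proximo and Fixie both do, though the exact variable name depends on the add-on):

```python
import os
import requests  # pip install requests

# The static-IP proxy endpoint provisioned by the add-on (name varies).
proxy_url = os.environ.get("PROXIMO_URL") or os.environ.get("FIXIE_URL")

# Route just this outbound call through the proxy; traffic then reaches
# the third-party server from the proxy's whitelisted static IP.
resp = requests.get(
    "https://api.example.com/resource",  # hypothetical endpoint
    proxies={"http": proxy_url, "https": proxy_url},
    timeout=10,
)
print(resp.status_code)
```

This keeps the proxy's blast radius limited to the calls that actually need the static IP, rather than wrapping the whole process.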
To summarize, these services will help you solve the problem. However, I would consider using them as a temporary workaround, not a complete solution.
Rest assured that these kinds of fixes can hold for a very long time, but if security becomes increasingly important for the applications you're running on Heroku, it can be a good idea to start planning a migration to AWS.
If you’re wondering when can be the best time for your team to make the transition to AWS, I’ve shared a few notes here: “Will Heroku always be perfect?”
Hope that helps.

How to serve images from Riak

I have a bunch of markdown documents in Riak, which I'm exposing via a small Sinatra API with basic search functionality etc.
Each document has an associated image, also stored in Riak (in a different bucket). I'd like to have a client app display the documents alongside their associated images - so I need some way to make the images available, but as I'm only ever going to be requesting them by key it seems wasteful to serve them via a Sinatra app as I'm doing with the documents.
However I'm uneasy with serving them directly from Riak, because a) even using nginx to limit the acceptable requests, I worry about exposing more functionality than we want to and b) Riak throws a 403 for any request where the referrer is set, so by default using a direct-to-Riak url as the src of an img tag doesn't work.
So my question is - what's a good approach to take for serving the images? Add another endpoint to the Sinatra app? Direct from Riak using some Nginx wizardry that is currently beyond me? Or some other approach I haven't considered yet? This would ideally use Ruby as that's what the team I'm working with are more comfortable with.
Not sure if this question might be better suited to Server Fault - if so I'll move it over.
You're right to be concerned about exposing Riak to any direct connectivity. Until 2.0 arrives early next year, there is no security in the system (although the 403 for requests with a referrer is a security mechanism to protect against XSS), and even with security, exposing any database directly to the Internet invites disaster.
I've not done anything with nginx, but all you'd really need to use it properly, I'd think, would be three features:
Ability to restrict requests to GET
Ability to restrict (or rewrite) requests to the proper bucket
Ability to strip out all HTTP headers that Riak includes in its result (which, since nginx is a proxy server and not a straight load balancer, seems like it should be straightforward)
Assuming that your images are the only content in that bucket, nginx feels like a reasonable choice here.
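Since the team prefers Ruby, the same logic could live in the existing Sinatra app; here's the shape of it as a Python sketch (host, port, and bucket name are placeholders) enforcing all three restrictions above:

```python
from flask import Flask, Response, abort
import requests  # pip install flask requests

app = Flask(__name__)

RIAK_URL = "http://127.0.0.1:8098"  # hypothetical internal Riak HTTP endpoint
IMAGE_BUCKET = "images"             # hypothetical bucket name
# Whitelist what a browser needs; every other Riak header gets stripped.
ALLOWED_HEADERS = {"Content-Type", "ETag", "Last-Modified"}

@app.route("/images/<key>", methods=["GET"])  # GETs only, one bucket only
def image(key):
    upstream = requests.get(f"{RIAK_URL}/buckets/{IMAGE_BUCKET}/keys/{key}")
    if upstream.status_code != 200:
        abort(404)
    headers = {k: v for k, v in upstream.headers.items()
               if k in ALLOWED_HEADERS}
    return Response(upstream.content, headers=headers)

if __name__ == "__main__":
    app.run(port=8080)
```

Because the proxy never sets a referrer on its own upstream request, the 403-on-referrer behaviour mentioned in the question also stops being a problem for browsers.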

1 A-record for every subdomain (10000+); any potential issues? Any other solution?

Most solutions I've read here for supporting subdomain-per-user at the DNS level are to point everything to one IP using *.domain.com.
It is an easy and simple solution, but what if I want to point the first 1000 registered users to serverA and the next 1000 registered users to serverB? This is our preferred way of keeping software and hardware costs down, rather than clustering.
[Diagram of the subdomain-per-user setup, quoted from the MS IIS site: http://learn.iis.net/file.axd?i=1101]
The most logical solution seems to be one A-record per subdomain in the zone data files. BIND doesn't seem to have any size limit on zone files; they're only restricted by available memory.
However, my team is worried about the latency of getting a new subdomain up and ready, since creating a new subdomain consists of inserting a new A-record and restarting the DNS server.
Is the performance of restarting the DNS server something we should worry about?
Thank you in advance.
UPDATE:
Seems like most of you suggest I use a reverse-proxy setup instead:
[Diagram of the reverse-proxy setup using ARR, IIS7's reverse-proxy solution: http://learn.iis.net/file.axd?i=1102]
However, here are the CONS I can see:
single point of failure
cannot strategically set up servers in different locations based on IP geolocation.
Use the wildcard DNS entry, then use load balancing to distribute the load between servers, regardless of what client they are.
While you're at it, skip the URL rewriting step and have your application determine which account it is based on the URL as entered (you can just as easily determine what X is in X.domain.com as in domain.com?user=X).
EDIT:
Based on your additional info, you may want to develop a "broker" that stores which clients are to access which servers. Make that public facing then draw from the resources associated with the client stored with the broker. Your front-end can be load balanced, then you can grab from the file/db servers based on who they are.
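Putting the two ideas together (with hypothetical names throughout): extract the user from the Host header that the wildcard DNS entry delivers, then ask the broker which backend holds that client's resources:

```python
# Wildcard DNS (*.domain.com) sends every subdomain to the front-end;
# the app recovers the user from the Host header, no URL rewriting needed.
BASE_DOMAIN = "domain.com"
# Stand-in for the broker: in practice a small replicated store, not a dict.
BROKER = {"alice": "serverA.internal", "bob": "serverB.internal"}

def account_from_host(host: str):
    host = host.split(":")[0].lower()            # strip any :port suffix
    suffix = "." + BASE_DOMAIN
    return host[:-len(suffix)] if host.endswith(suffix) else None

def backend_for(host: str) -> str:
    user = account_from_host(host)
    return BROKER.get(user, "serverA.internal")  # default pool for unknowns

assert account_from_host("alice.domain.com") == "alice"
assert backend_for("bob.domain.com:80") == "serverB.internal"
```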
The front-end proxy with a wild-card DNS entry really is the way to go with this. It's how big sites like LiveJournal work.
Note that this is not just a TCP-layer load balancer - there are plenty of solutions that'll examine the host part of the URL to figure out which back-end server to forward the query to. You can easily do it with Apache running on a low-spec server with suitable configuration.
The proxy ensures that each user's session always goes to the right back-end server and most any session handling methods will just keep on working.
Also the proxy needn't be a single point of failure. It's perfectly possible and pretty easy to run two or more front-end proxies in a redundant configuration (to avoid failure) or even to have them share the load (to avoid stress).
I'd also second John Sheehan's suggestion that the application just look at the left-hand part of the URL to determine which user's content to display.
If using Apache for the back-end, see this post too for info about how to configure it.
If you use tinydns, you don't need to restart the nameserver when you modify its database, and it should not be a bottleneck because it is generally very fast. I don't know whether it performs well with 10,000+ entries, though it would surprise me if it didn't.
http://cr.yp.to/djbdns.html
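To illustrate why there's no restart: tinydns serves from a compiled data.cdb, which you regenerate with the tinydns-data tool whenever the plain-text data file changes. A minimal sketch of generating the records (the +fqdn:ip:ttl line format is tinydns's A-record syntax; domains and IPs are placeholders) that puts the first 1000 users on serverA and the next 1000 on serverB:

```python
# Emit one tinydns A-record line per user subdomain.
def tinydns_lines(users, domain="domain.com", ttl=86400):
    for i, user in enumerate(users):
        ip = "10.0.0.1" if i < 1000 else "10.0.0.2"  # serverA / serverB
        yield f"+{user}.{domain}:{ip}:{ttl}"

users = [f"user{n}" for n in range(1, 2001)]
with open("data", "w") as f:
    f.write("\n".join(tinydns_lines(users)) + "\n")
# Then run `tinydns-data` in this directory: it atomically replaces
# data.cdb, and the running server picks it up without a restart.
```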
