How to enable AllowTcpForwarding in Jelastic? - ssh-tunnel

It seems that from Jelastic version 5.8 onward, AllowTcpForwarding is set to no by default, which means that SSH port forwarding is not possible: https://docs.jelastic.com/release-notes-58/#ssh-security.
What is the recommended way to set AllowTcpForwarding to yes for an environment?

As mentioned in the comments, the best way is to talk to your Jelastic hosting provider to see if they can provide you with a decent solution.
We've just published an add-on (JPS) for this case, which you're welcome to use with whichever provider you're on.
The basic idea is that having AllowTcpForwarding enabled by default is a potential security risk: you may construct security rules (e.g. firewall) for other parts of your topology on the assumption that only local traffic can reach them. Although something of an edge case, this assumption could potentially be exploited to gain access to an application or port that should not be exposed.
However, if you're aware of the risks and only enable this functionality where you have a specific need for it (rather than the old default of indiscriminately enabled everywhere), it should be safe to enable; either manually on request to your Jelastic hosting provider, or via the add-on that I've linked to.
The linked add-on also has an option to disable forwarding again, so you can easily toggle it on/off on demand if you wish.
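For reference, enabling it manually on a node you control amounts to flipping the directive in the SSH daemon's configuration; a rough sketch, assuming a standard sshd layout (exact paths and service names may differ per node type):
# /etc/ssh/sshd_config
AllowTcpForwarding yes
# Reload sshd so the change takes effect:
sudo service sshd reload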

A workaround to get port forwarding working even when AllowTcpForwarding is set to no is to use the Mutagen network forwarding tool instead of SSH port forwarding.
Example
mutagen forward create --name=my-web-app-repl tcp:localhost:7001 XXXX-XXXX@gate.mircloud.host:3022:tcp::7001
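For comparison, the plain SSH forwarding this replaces would look like the command below (same placeholder credentials as above); it fails with an "administratively prohibited" error when the server sets AllowTcpForwarding no:
ssh -L 7001:localhost:7001 -p 3022 XXXX-XXXX@gate.mircloud.host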

Related

Hosting a VPN on Heroku

I was wondering: is it possible to host a private VPN on Heroku?
My (hypothetical) use case: let's say there's some service that's only available in Europe, but I want to access it from the USA. I'd like to turn a European Heroku server into a personal VPN that just allows me to access that service.
I did some research and can't find anyone else who's tried/documented this.
What you basically want is a proxy. Note that Heroku forbids running an open proxy, so you should restrict who can use it.
XIX. Operate an “open proxy” or any other form of Internet proxy service that is capable of forwarding requests to any end user or third-party-supplied Internet host;
--https://www.heroku.com/policy/aup
But technically it is possible - you might want to try https://github.com/Rob--W/cors-anywhere. If you want to use it from the browser, you will need to adjust the headers in the server.js file.
Note that this project is not intended to be used as an open proxy, so, for example, relative paths are not loaded properly.
You might also want to try the following - it might be more appropriate, I just haven't tried it myself ... :)
https://github.com/http-party/node-http-proxy#setup-a-basic-stand-alone-proxy-server
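If you want to experiment, a rough sketch of deploying one of these proxies to Heroku with the standard git workflow (the app name is hypothetical, and cors-anywhere is just the example from above):
# Clone the proxy and create a Heroku app for it:
git clone https://github.com/Rob--W/cors-anywhere.git
cd cors-anywhere
heroku create my-private-proxy
# Deploy; Heroku should start server.js via the package's start script:
git push heroku master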

Custom domains for Multi-tenant web app

I am developing an app (RoR + Heroku) which allows users to create their own websites, either using my subdomain (pagename.myapp.com) or using their own domain (pagename.com).
An important point is that this option is the key to my business: subdomains are the free plans and custom domains are the paid ones. So I have a table where I store each user's custom domain and check whether the page is active (exists and has a paid quota).
For that I need to give users the ability to point their domain to my servers. As we all know, Heroku doesn't recommend the use of DNS A records.
I would also like to abstract this feature as much as possible, so that I can switch my infrastructure (Heroku to AWS) in the future without having to ask all my users to change their DNS zones. Taking this into account, I think the best option would be to run something like an EC2 proxy (using an AWS Elastic IP, which gives me ownership of the IP). This proxy, I think, should forward to proxy.myapp.com, and I would resolve the request at the app level.
Since I didn't find clear information about this, I am not sure whether this hypothesis is the best solution, nor how to set up the proxy (which type of proxy should I use? Nginx maybe?).
That said, I would like to ask for recommendations/best practices for solving this "common" feature.
Thanks
What you want to do is fairly straightforward to implement. Your assumptions about setting up the proxy are correct. Nginx or haproxy will both work great for this (I personally would use haproxy). Here are some of the gotchas you will run into, though:
Changing the host header at a proxy server can cause the end web application to generate incorrect links. You can use relative paths to fix this, but it requires the web application developer to be aware of the environment they are running in. For example:
user connects to www.example.com (proxy server)
proxy server connects to www.realdomain.com (web app)
the web app has a link for a shopping cart. www.realdomain.com/shoppingcart
the end user clicks on the link but the link is www.realdomain.com/shoppingcart instead of www.example.com/shoppingcart
The cost of the host acting as the proxy server. This can spiral out of control really quickly. For example: do you want redundancy, and if so, how are you planning on implementing it? Do you plan on having SSL termination? If so, you will have to increase the CPU count to accommodate the additional load. Do you want a secure connection from your proxy to Heroku? If you do, you will need to increase the CPU count for that as well. You may also have to add additional RAM, depending on the number of concurrent connections.
Heroku also changes their load balancers regularly. This is important because your proxy service will need to reload its config / update the IP addresses of the Heroku instances when that happens. In my experience they may change once or twice a day, but the DNS entry they use has a 60-second TTL, so you should make sure you are capable of updating your config as often as every 60 seconds.
My company has been doing something very similar to this for almost a year now. We use haproxy and simply have it reload the config regularly. We have never had an outage or an interruption for our end users. Nginx is also a very good product. It has built-in DNS caching, so if you go that route you will need to make sure you configure it correctly so that the DNS cache TTL is 60 seconds.
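For newer haproxy versions there is an alternative to regular config reloads: the built-in resolvers section (haproxy 1.6+) re-resolves the Heroku hostname on its 60-second TTL instead of pinning the IP at startup. A minimal sketch (app name and nameserver are placeholders):
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s
resolvers heroku_dns
    nameserver dns1 8.8.8.8:53
    hold valid 60s
frontend http_in
    bind *:80
    default_backend heroku_app
backend heroku_app
    # Re-resolve the hostname via the resolvers section rather than pinning its IP:
    server app1 myapp.herokuapp.com:443 resolvers heroku_dns ssl verify none check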
Will many of your clients want to use your app on their domain apex? E.g. example.com rather than theapp.example.com? If not, I would recommend having them CNAME to proxy.myapp.com, which CNAMEs to myapp.herokuapp.com. Then you can update proxy.myapp.com without customer interruption.
If you do need apex or A record support, you would want to set up Nginx as a reverse proxy for your Heroku app. Keep in mind that if you need HTTPS support for client domains, you will need to do some sort of certificate management on your proxy.
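A minimal nginx sketch of that reverse proxy (myapp.herokuapp.com is a placeholder; Heroku routes by Host header, hence the explicit proxy_set_header):
server {
    listen 80;
    server_name _;  # accept any customer domain
    location / {
        proxy_pass https://myapp.herokuapp.com;
        # Heroku routes on the Host header, so send the Heroku hostname upstream:
        proxy_set_header Host myapp.herokuapp.com;
        # Keep the original customer domain visible to the app (illustrative header name):
        proxy_set_header X-Original-Host $host;
    }
}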
I like the answer dtorgo gave and that he mentioned the TLS termination, which many online tutorials on custom domains don't touch at all.
I'll go into more detail on how to implement the custom domains feature for your SaaS while also handling the TLS/HTTPS.
If your customers just CNAME to your domain or create the A record to your IP and you don't handle TLS termination for these custom domains, your app will not support HTTPS, and without it, your app won't work in modern browsers on these custom domains.
You need to set up a TLS-terminating reverse proxy in front of your webserver. This proxy can run on a separate machine, or on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com, they can create a CNAME record for app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com, then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
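In zone-file terms, the two options look roughly like this (203.0.113.10 is a documentation placeholder for your proxy's static IP):
; in the customer's DNS zone
app.customer.com.  IN  CNAME  proxy.myapp.com.   ; subdomain case
customer.com.      IN  A      203.0.113.10       ; root-domain case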
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the hostname of the incoming request, e.g. app.customer1.com or customer2.com, and decide which TLS certificate to use by checking the SNI value of the TLS handshake.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddyserver, greenlock.js, or OpenResty (Nginx).
tl;dr on what happens here:
Caddyserver listens on ports 443 and 80; it receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
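A minimal Caddyfile sketch of that setup (Caddy v2 syntax; the ask endpoint and backend address are hypothetical). The ask endpoint lets your app approve issuance only for domains in your customers table, so strangers pointing random domains at you can't burn through Let's Encrypt rate limits:
{
    on_demand_tls {
        # Caddy asks your app whether this domain belongs to a customer before issuing:
        ask http://localhost:5555/check-domain
    }
}
https:// {
    tls {
        on_demand
    }
    reverse_proxy localhost:8080
}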
How to handle it on my backend
Your proxy terminates TLS and proxies requests to your backend. However, your backend doesn't know which customer is behind the original request. This is why you need to tell your proxy to include an additional header in proxied requests to identify the customer: just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com, or whatever the Host header of the original request is.
Now when you receive the proxied request on the backend, you can read this custom header and know which customer the request belongs to. You can implement your logic based on that: show the data belonging to this customer, etc.
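In the Caddyfile sketch above, that header injection would look something like this ({host} is Caddy's placeholder for the original Host header; the backend address is hypothetical):
reverse_proxy localhost:8080 {
    # Tell the backend which customer domain this request arrived on:
    header_up X-Serve-For {host}
}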
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise you may find yourself waking up in the middle of the night restarting machines, or your proxy, manually.
If you need more detail you can DM me on Twitter @dragocrnjac

What are some options for securing redis db?

I'm running Redis locally and have multiple machines communicating with redis on the same port -- any suggestions for good ways to lock down access to Redis? The database is run on Mac OS X. Thank you.
Edit: This is assuming I do not want to use the built-in (non backwards compatible) Redis requirepass directive in the config.
On EC2 we lock down the machines that can make requests to the redis port on our redis box to only be our app box (we also only use it to store non-sensitive data).
Another option could be to not open the Redis port externally at all, but require port forwarding through an SSH tunnel. Then you would only allow requests coming through the tunnel, and only allow SSH with a known key.
You'd pay the SSH overhead, but maybe that's OK for your scenario.
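A sketch of that setup, assuming Redis on its default port 6379 and a host reachable as redis-host (hypothetical name):
# redis.conf: bind to loopback only, so nothing external reaches Redis directly
bind 127.0.0.1
# On each client machine, forward a local port through SSH:
ssh -N -L 6379:localhost:6379 user@redis-host
# Clients then talk to the local end of the tunnel:
redis-cli -h 127.0.0.1 -p 6379 ping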
There is a simple requirepass directive in the configuration file which allows access only to clients who authenticate via the AUTH command. I recommend reading the docs on this command, namely the "note" section.
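For completeness (the asker ruled this out above), the requirepass setup looks like this:
# redis.conf:
requirepass a-long-random-secret
# Clients must authenticate before other commands are accepted:
redis-cli -a a-long-random-secret ping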

Steps to setup proxy server

I want to set up a proxy server in our office. I have two proxy servers available, i.e. Squid for Linux and WinProxy for Windows. I have the following requirements:
All the rules which I define in the proxy server, like blocking specific sites etc., should work.
The "Evolution" mail client for Linux and "Outlook Express" for Windows should also work.
So, can you give me guidelines on how to achieve both tasks, especially no. 2?
Thanks in advance.
Squid is a very good option for a caching proxy. It has a configuration file in which you can block specific sites, IPs, domains... and tell it which files to cache. Making a smart proxy is not easy, but you can find good configurations and tutorials on Google or in its wiki.
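For requirement 1, blocking sites in squid.conf is done with ACLs; a minimal sketch (domains and subnet are examples, adjust to your office network):
# Define the office LAN and the blocked destinations:
acl localnet src 192.168.0.0/16
acl blocked_sites dstdomain .facebook.com .blocked.example
# Deny the blocked list, allow the rest of the LAN, deny everyone else:
http_access deny blocked_sites
http_access allow localnet
http_access deny all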
There are two ways of setting up a proxy:
Direct proxy: you have to manually configure every computer to use your proxy server.
This is the easiest option, and the one I recommend.
Please note that computers which don't use the proxy can access all pages (even blocked ones).
Transparent proxy: this is the most secure and ideal option for most cases (including yours). You have to configure your network and the proxy server so that all requests are forwarded through it. This is the harder option, and very difficult to achieve in your case; the usual Linux approach is sketched below.
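The usual trick, assuming Squid runs on the gateway itself and listens on 3128, is to mark the port as intercepting and redirect outbound web traffic into it:
# squid.conf: accept intercepted traffic
http_port 3128 intercept
# On the gateway, redirect outbound HTTP into Squid (eth0 = LAN side):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128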
About your Evolution and Outlook concern: there shouldn't be any problems related to the proxy, so don't worry about that.

Command line/PowerShell administration of networks behind NAT

Scenario:
3rd-party admins want to administer systems via PS remoting/direct login of clients and servers behind NAT gateways.
The systems are SBS 2003 or W2K3. All are behind NAT firewalls with varying RFC 1918 subnets and no site-to-site VPNs (although a solution would likely require this).
Each site has its own unrelated AD setup.
The 3rd-party admin network (also behind a NAT) has no trusts with the target sites (obviously SBS sites have this problem by default), and it seems VPNs have problems if the same RFC 1918 subnet address range is used on both sides.
Name resolution across the VPN would be a prerequisite. Any advice?
Is there some "reflection" approach (similar to UltraVNC) that would serialize PS objects and pass them through NATs without requiring router reconfiguration? Or is port forwarding to SSH or similar required, with direct remote logins? Can any of this be accomplished or automated without the use of a mouse?
What .NET remoting approaches might help solve this problem?
The nsoftware PowerShell Server solution seems to work for SSH, but only where machines are publicly addressable, and it was also discounted due to its per-CPU licensing scheme.
Are there other similar alternatives to it?
You're probably best off finding a way to tunnel to a single machine, and then hop from there to the machines you want to administer. You'd need to forward a port to that first machine.
Your network security people should be very concerned about this machine; if they're not, they don't know their jobs.
My first approach would be to use PowerShell V2's remoting for both hops.
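A sketch of those two hops with PowerShell remoting (hostnames and port are hypothetical; explicit credentials on the second hop sidestep the double-hop restriction):
# Hop 1: interactive session to the jump machine, through the forwarded port:
Enter-PSSession -ComputerName gateway.example.com -Port 5985 -Credential (Get-Credential)
# Hop 2: from inside that session, run commands on an internal machine:
Invoke-Command -ComputerName internal-sbs01 -Credential (Get-Credential) -ScriptBlock { Get-Service }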
I concur with @JayBazuzi. PowerShell V2 (currently in CTP3) uses WinRM, which can be configured to work over HTTPS (really, any port you choose), thus working through firewalls and NATs.
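A sketch of that WinRM-over-HTTPS setup (requires a server certificate on the target; the hostname and the standard HTTPS WinRM port 5986 are examples):
# On each target machine: create an HTTPS WinRM listener:
winrm quickconfig -transport:https
# From the admin network, over the port forwarded at the client site's router:
Enter-PSSession -ComputerName client-site.example.com -Port 5986 -UseSSL -Credential (Get-Credential)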
james
