nginx as mail proxy

I am trying to set up nginx as a mail proxy. All I want is to let nginx receive the mail and forward it to a script. Is this setup possible, or should I just use sendmail for that?

The only way I can remotely imagine that working is if you let nginx listen on the SMTP port and run an SMTP server web application behind it. At that point nginx would basically only connect the outside port to your locally running app.
So yeah, I think you would be much better off with a real SMTP server like sendmail. Actually, I recommend you use Postfix, because it does the same thing arguably better.

There is a lack of documentation, but from playing around with it, it looks like the nginx mail options are for creating proxies that make it easier to add custom authentication in front of an existing MTA.
A use case might be using a PHP script to authenticate existing website users against the website's existing user database, without needing to create system users for them or set up extra databases.
So no, it won't forward the actual email to a script
(look for things like 'pipe_transport' in MTAs like Exim to do that),
but it will let you use a script to put authentication in front of an existing SMTP server.
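For illustration, a minimal sketch of such an nginx mail-proxy configuration might look like this; the auth endpoint on 127.0.0.1:9000 is an assumption (that is where your PHP script would live), and the backend MTA is whatever the auth script points nginx at:

    mail {
        # HTTP endpoint (e.g. a small PHP script) that nginx calls to check each login;
        # it must reply with Auth-Status, Auth-Server and Auth-Port headers naming the real MTA
        auth_http 127.0.0.1:9000/auth;

        server {
            listen    25;
            protocol  smtp;
            proxy     on;
            smtp_auth login plain;   # nginx only authenticates, then proxies the SMTP session to the MTA
        }
    }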

Related

Hosting a VPN on Heroku

I was wondering if it's possible to host a private VPN on Heroku?
My (hypothetical) use case: let's say there's some service that's only available in Europe, but I want to access it from the USA. I'd like to turn a European Heroku server into a personal VPN that just lets me access that service.
I did some research and couldn't find anyone else who has tried or documented this.
What you basically want is a proxy. However, Heroku forbids running an open proxy, so you should restrict who can use it:
XIX. Operate an “open proxy” or any other form of Internet proxy service that is capable of forwarding requests to any end user or third-party-supplied Internet host;
--https://www.heroku.com/policy/aup
But technically it is possible, and you might want to try https://github.com/Rob--W/cors-anywhere; if you want to use it from the browser, you will need to take the headers from its server.js file.
Note that this project is not intended to be used as an open proxy, so, for example, relative paths are not loaded properly.
The following might be more appropriate, though I have not tried it myself:
https://github.com/http-party/node-http-proxy#setup-a-basic-stand-alone-proxy-server
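For reference, the stand-alone proxy from the node-http-proxy README boils down to something like this TypeScript sketch (the target URL is a placeholder, and this is a plain forwarding proxy rather than a VPN):

    // minimal stand-alone forwarding proxy, adapted from the node-http-proxy README
    // (requires the http-proxy package; the target URL is a placeholder)
    import httpProxy from "http-proxy";

    // every request arriving on port 8000 is forwarded to the target host
    httpProxy.createProxyServer({ target: "http://localhost:9000" }).listen(8000);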

How to use direct-connection applications behind a Kerberos proxy

I have a corporate proxy using Squid with Kerberos authentication. The proxy is configured for standard use, i.e. it allows HTTP, HTTPS and a few other protocols and blocks everything else. Now, there are many applications that support basic proxy authentication but do not support Kerberos-based authentication, and many others that connect directly to the internet. I used Proxifier before the upgrade to Kerberos to make my applications use the proxy, but I cannot do so now. I then installed an application called Px to create a proxy that handles the Kerberos authentication, but the proxy it creates is a simple HTTP proxy and Proxifier doesn't work correctly with it.
Does anyone have a setup for a situation like this? I use Windows 10 and obviously don't have access to the server where Squid is configured. The application I need to connect to the internet uses standard HTTPS ports; it's not a torrent application nor anything that uses the ports blocked by Squid. Thanks in advance.
OK, for this particular case I've found the following setup to solve 99% of my problems.
First, get Px here: https://github.com/genotrance/px
Next, get Fiddler: http://www.getfiddler.com/dl/Fiddler4BetaSetup.exe
Configure Px with your user and your domain and run it. By default it creates a running proxy on 127.0.0.1:3128.
Configure your system proxy to use the proxy supplied by Px.
Run Fiddler; it should create ANOTHER proxy at 127.0.0.1:8888.
Use this proxy in your apps. Proxifier should work as well.
Why use Fiddler and not 127.0.0.1:3128 directly? Px creates a pure HTTP proxy, while Fiddler allows HTTPS and CONNECT requests to be tunneled through it.
Any request will pass through Fiddler, which will redirect it to the Px proxy, which will redirect it to the Squid proxy (so expect very slow speeds).
In the end, since you're just redirecting your apps towards your proxy, some apps will NOT work if your proxy bans based on regular expressions or blocks direct IP connections, and in those cases using Tor or a VPN is the only real solution. I hope this helps someone avoid all the headaches I went through.
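As a rough sketch, the resulting chain on Windows looks like the commands below; the Px flags and the corporate proxy address are illustrative placeholders (check px --help), and the registry keys set the WinINET system proxy that Fiddler picks up as its upstream gateway:

    REM app / Proxifier -> Fiddler (127.0.0.1:8888) -> Px (127.0.0.1:3128) -> Squid (Kerberos)

    REM 1. start Px against the corporate proxy (flag names are illustrative, check px --help)
    px --proxy=corporate-proxy.example.com:8080 --username=DOMAIN\myuser --port=3128

    REM 2. point the WinINET system proxy at Px so Fiddler uses it as its upstream
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyEnable /t REG_DWORD /d 1 /f
    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v ProxyServer /t REG_SZ /d 127.0.0.1:3128 /f

    REM 3. start Fiddler, then point your apps (or Proxifier) at 127.0.0.1:8888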

Can we use Roundcube, Thunderbird or any webmail on our XAMPP/WAMP server?

I have a web application developed in PHP and all users can access it via the local network (XAMPP). Now the users want to access their Gmail, Yahoo or other email accounts via localhost (XAMPP); I want something like MS Outlook to work in my XAMPP.
Is there any solution?
The proper way to do this is to run a mail server on localhost, so it is best to have Apache already installed on your own machine; Postfix and Dovecot will also be very helpful.
Here is a link about webmail configuration on a Google Cloud instance:
https://medium.com/geekculture/how-to-host-a-personal-email-server-on-google-cloud-for-free-part-i-8124d65d1d25
Perhaps that concept will help you develop your own webmail for your local area network.
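On a Debian/Ubuntu machine, the rough starting point for that stack would be something like the following (package names from the standard repositories; Postfix, Dovecot and Roundcube still need to be configured afterwards, and Roundcube can also be pointed at an external IMAP host such as Gmail's):

    # MTA, IMAP/POP3 server and a PHP webmail that runs alongside Apache
    sudo apt install postfix dovecot-imapd dovecot-pop3d roundcube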

Custom domains for Multi-tenant web app

I am developing an app (RoR + Heroku) which allows users to create their own websites, either on my subdomain (pagename.myapp.com) or on their own domain (pagename.com).
An important point is that this option is the key to my business: subdomains are the free plans and custom domains are the paid ones. So I have a table where I store each user's custom domain and check whether the page is active (it exists and the quota has been paid).
For that I need to give users the ability to point their domains to my servers. As we all know, Heroku doesn't recommend the use of DNS A records.
I would also like to abstract this feature as much as possible, so that I can switch my infrastructure (Heroku to AWS) in the future without having to ask all my users to change their DNS zones. Taking this into account, I think the best option would be to run something like an EC2 proxy (using an AWS Elastic IP), which gives me ownership of the IP. This proxy, I think, should redirect to proxy.myapp.com, and I would resolve the request at the app level.
Since I didn't find clear information about this, I am not sure whether this hypothesis is the best solution, nor how to set up the proxy (which type of proxy should I use? Nginx maybe?).
That said, I would like to ask for recommendations/best practices to solve this "common" feature.
Thanks
What you are wanting to do is fairly straightforward to implement. Your assumptions about setting up the proxy are correct. Nginx or haproxy will both work great for this (I personally would use haproxy). Here are some of the gotchas that you will run into, though:
Changing the host header at a proxy server can cause the end web application to generate incorrect links. You can use relative paths to fix this, but it requires the web application developer to be aware of the environment they are running in.
user connects to www.example.com (proxy server)
proxy server connects to www.realdomain.com (web app)
the web app has a link for a shopping cart. www.realdomain.com/shoppingcart
the end user clicks on the link but the link is www.realdomain.com/shoppingcart instead of www.example.com/shoppingcart
The cost of the host acting as the proxy server. This can spiral out of control really quickly. For example, do you want redundancy, if so how are you planning on implementing that? Do you plan on having ssl termination? If so you will have to increase the CPU count to accommodate the additional load. Do you want to have a secure connection to heroku from your proxy? If you do then you will need to increase the CPU count for that as well. You may have to add additional ram as well depending on the number of concurrent connections.
Heroku also changes their load balancers regularly. This is important because your proxy service will need to reload its config / update the IP addresses of the Heroku instances as often as every 60 seconds. In my experience they may change once or twice a day, but the DNS entry they use has a 60-second TTL, so you should make sure you are capable of updating your config every 60 seconds.
My company has been doing something very similar to this for almost a year now. We use haproxy and simply have it reload the config regularly. We have never had an outage or an interruption for our end users. Nginx is also a very good product, but it has built-in DNS caching, so if you go that route you will need to make sure you configure it so that the DNS cache TTL is 60 seconds.
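If you do go the nginx route, a hedged sketch of that kind of config might look like this; myapp.herokuapp.com and the 8.8.8.8 resolver are placeholders, and putting the backend in a variable is what forces nginx to honour the resolver's 60-second TTL instead of resolving the name only at startup:

    server {
        listen 80;
        server_name _;                      # accept any customer domain

        resolver 8.8.8.8 valid=60s;         # honour Heroku's ~60-second DNS TTL

        location / {
            set $heroku_backend "https://myapp.herokuapp.com";
            proxy_set_header Host myapp.herokuapp.com;     # what Heroku routes on
            proxy_set_header X-Forwarded-Host $host;       # keep the original customer domain
            proxy_pass $heroku_backend;                    # variable => re-resolved via the resolver
        }
    }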
Will many of your clients want to use your app on their domain apex, e.g. example.com rather than theapp.example.com? If not, I would recommend having them CNAME to proxy.myapp.com, which CNAMEs to myapp.herokuapp.com. Then you can update proxy.myapp.com without customer interruption.
If you do need apex or A record support, you would want to set up Nginx as a reverse proxy for your Heroku app. Keep in mind that if you need HTTPS support for client domains, you will need to do some sort of certificate management on your proxy.
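In DNS terms, that CNAME chain looks roughly like this (example.com and myapp.com are placeholders):

    ; customer's zone: their subdomain CNAMEs to your stable proxy name
    theapp.example.com.   CNAME   proxy.myapp.com.

    ; your zone: proxy.myapp.com points at Heroku today and can be repointed later
    proxy.myapp.com.      CNAME   myapp.herokuapp.com.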
I like the answer dtorgo gave, and that he mentioned TLS termination, which many online tutorials on custom domains don't touch at all.
I'll go into more detail on how to implement the custom domains feature for your SaaS while also handling the TLS/HTTPS.
If your customers just CNAME to your domain or create the A record to your IP and you don't handle TLS termination for these custom domains, your app will not support HTTPS, and without it, your app won't work in modern browsers on these custom domains.
You need to set up a TLS termination reverse proxy in front of your webserver. This proxy can be run on a separate machine but you can run it on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the hostname of the incoming request (via the SNI extension of the TLS handshake, and the Host header once the connection is decrypted), e.g. app.customer1.com or customer2.com, and decide which TLS certificate to present for it.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddy, greenlock.js, or OpenResty (Nginx with Lua).
tl;dr of what happens here:
Caddy listens on ports 443 and 80; it receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
How to handle it on my backend
Your proxy is terminating TLS and proxying requests to your backend. However, your backend doesn't know which customer is behind the original request. This is why you need to tell your proxy to include an additional header in proxied requests to identify the customer: just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com, or whatever the Host header of the original request is.
Now when you receive the proxied request on the backend, you can read this custom header and you know who is the customer behind the request. You can implement your logic based on that, show data belonging to this customer, etc.
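As a hedged sketch (not a drop-in config), a Caddy v2 Caddyfile for this could look roughly like the following; the backend port 3000 and the /check-domain endpoint are assumptions, the ask endpoint should answer 200 only for domains that belong to an active customer, and X-Serve-For matches the header convention described above:

    {
        on_demand_tls {
            # Caddy asks this endpoint before issuing a certificate for an unknown domain
            ask http://localhost:5555/check-domain
        }
    }

    https:// {
        tls {
            on_demand
        }
        reverse_proxy localhost:3000 {
            # pass the original customer hostname to the backend
            header_up X-Serve-For {host}
        }
    }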
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise you may be waking up in the middle of the night restarting machines, or your proxy, manually.

play-framework [2.0] HTTPS

I'm working on a web server using Play Framework 2.0, where the login is performed by an Android app we're also making. Our main concern is that we can't find any support for HTTPS in Play 2.0. Since this is a school project, we can't afford a cloud service or another proxy to handle HTTPS for us.
Our main problem is the password and email going in plain sight in the request body; encrypting and decrypting on the mobile device and on the server looks costly in performance, and since HTTPS takes care of this we wanted to avoid doing it ourselves. Is there any way we can use HTTPS to protect the users' login data, or any other suggestion?
If not, we might have to migrate our whole application to another framework, because it won't look good to have important confidential data going over the internet without encryption.
Historically, I've seen most folks run the Java/Scala application server behind a reverse proxy of some kind. Setting up HTTPS in Apache isn't too hard, and then you just use mod_proxy to send requests internally to your Play application.
Any of the reverse proxy systems can likely do this; nginx is popular too and generally has easier configuration than Apache, but I've never used it with HTTPS.
The number one reason to do this is normally security. You can't start a Java program as a non-privileged user on port 80 (*). If you start your Java program as root on port 80, then any hole in your application has root privileges! As a result, you start the Java app on another port, then reverse proxy to it from a web server that can run as a non-privileged user on port 80.
(*) This is slightly over-simplified, but a discussion of this weirdness is beyond the scope of this answer, I think.
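A minimal Apache virtual host for that kind of setup might look like the sketch below; the certificate paths and Play's default port 9000 are assumptions, and mod_ssl, mod_proxy and mod_proxy_http need to be enabled:

    <VirtualHost *:443>
        ServerName example.com

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/example.com.key

        # forward everything to the Play app running on its own port
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:9000/
        ProxyPassReverse / http://127.0.0.1:9000/
    </VirtualHost>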
It's now possible to use Play and https directly. This was added in Play 2.1
Simply start the server with:
JAVA_OPTS=-Dhttps.port=9001 play start
