Use trusted SSL certificate with Spring Boot in Pivotal Cloud Foundry - spring-boot

I'm new to the topic of SSL certificates and I want to install my purchased SSL certificate so that when users visit my site they won't see the untrusted certificate warning. Here are the steps I did so far:
Created a .p12 keystore using keytool.
Created a CSR file from the keystore in step 1.
Uploaded the CSR to my SSL vendor and, after passing their verification of my domain, downloaded the following files: .crt, .ca-bundle, .p7b.
I placed all the files (including the keystore I generated) in the resources directory and added the following properties:
server.ssl.key-store=classpath:myFile.p12
server.ssl.key-store-password=some_pass
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=someAlias
I later ran keytool -importcert to import the certificate I got from the SSL vendor into the .p12 keystore I created.
Then I built my jar and pushed it to Pivotal Cloud Foundry, but I still see the invalid certificate message.
I don't know whether I need to do something on the Pivotal platform or in the Spring Boot config.
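The keystore, CSR, and import steps above would look roughly like this with keytool (the alias and file names are placeholders, and the exact flags can vary):
keytool -genkeypair -alias someAlias -keyalg RSA -keysize 2048 -storetype PKCS12 -keystore myFile.p12
keytool -certreq -alias someAlias -keystore myFile.p12 -file myDomain.csr
# import the vendor's CA bundle first, then the issued certificate, into the same keystore
keytool -importcert -trustcacerts -alias ca-bundle -keystore myFile.p12 -file myFile.ca-bundle
keytool -importcert -trustcacerts -alias someAlias -keystore myFile.p12 -file myFile.crt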

The only way this would work is if you use a TCP route. With standard HTTP routes on Cloud Foundry, the traffic first hits a load balancer & then Gorouter. TLS termination is going to happen there, not at your application. If you use a TCP route, this will load balance at the TCP level and allow your application to perform the TLS termination directly.
That said, you really don't want to do that. The TCP route isn't likely to allow you to pick port 443, because a port can only be assigned to one application. That means only one application using TCP routes can have port 443. Also, in most cases platform operators only allow high-numbered ports for TCP routes, which means no one would be able to pick 443. Long story short, you don't want your users to have to access your site as https://www.example.com:47385, so you don't want a TCP route.
To set this up properly with standard HTTP routes, you are going to need to work with your platform operations team. Together you will need to do the following:
1. Obtain the domain you'd like to use.
2. Obtain a load balancer. This needs to be configured to route traffic to the Gorouters in the foundation. You can skip this and use the existing load balancer, but that has implications[1] for step #6 below.
3. Configure DNS for your domain so that it routes to the load balancer from step #2.
4. Add the domain as a private or shared domain in CF.
5. Map a route to your application using the domain you added in step #4 (see the cf CLI sketch after this list).
6. Add your TLS certificate & key to the load balancer [1].
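A rough sketch of steps #4 and #5 with the cf CLI (org, app, and domain names are placeholders; cf CLI v7 renames create-domain to create-private-domain):
# add the domain as a private domain in CF
cf create-domain my-org example.com
# map a route on that domain to your application
cf map-route my-app example.com --hostname www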
When you've done all this, traffic to your domain will resolve to the IPs of your load balancers. Your user's browser will make an HTTPS request to the LB, which will terminate TLS (if it's an HTTP/layer-7 LB) and forward along to Gorouter (if it's a TCP/layer-4 LB, TLS is terminated at Gorouter instead), which in turn forwards along to your application (based on the route you mapped).
Your application will need to look at the x-forwarded-proto header (and x-forwarded-for for the original client address) to determine whether the request came in over HTTPS, since it is not terminating TLS itself.
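In Spring Boot, honoring those headers is usually a one-line setting rather than manual parsing; a minimal sketch (the exact property depends on your Boot version):
# Spring Boot 2.2+: trust X-Forwarded-Proto / X-Forwarded-For from the proxy
server.forward-headers-strategy=framework
# on older Boot versions the equivalent is: server.use-forward-headers=true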
[1] - The implication is with how the certificates get installed. With a separate LB, you add the cert to it and are done. If you are trying to reuse the platform LB, you will need to add the cert to the existing list of certs. In addition, if your platform operations team is using a TCP/layer-4 load balancer then TLS termination does not happen at the LB, it happens at Gorouter. This means you then have to load your TLS cert into the Gorouter, which requires a Bosh deploy and is more work. Modifying the platform LB also runs the risk of an error taking down the foundation. For those reasons and more, adding a separate LB for your app is usually the way to go.

Related

SSL certificate for Spring Boot application with Nginx running on the same server

I have a server that runs Docker with an Nginx container inside, which serves the React build files. This Nginx server has an installed and working SSL certificate on ports 80 and 443.
On the same machine I have a JRE that runs a Spring Boot application on port 8801.
I have searched online for information on how to create an SSL certificate for Spring Boot when ports 80 and 443 are in use, or what the best practice is to do this alongside the existing SSL certificate, and could not find any.
My friend suggested that we use a reverse proxy in order to hide http://example.com:8801 under https://example.com:80/api
What could be the best way to do it?
Thanks!
You would want to terminate the SSL on Nginx and offload that work from the application server (Spring Boot running Tomcat, for example).
One reason to take SSL all the way to the app server is when the communication medium between those two needs to be kept secure. But if the app server and the web server are within the DMZ, you can just use the first approach and terminate on the web server. There is a lot of optimization that goes into web servers to handle TLS termination.
Refer to this for already detailed responses and insights.
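For illustration, a minimal Nginx sketch of that first approach (TLS terminated by Nginx, plain HTTP to the Spring Boot app on 8801; the server name and certificate paths are placeholders, and from inside the Nginx container "localhost" may need to be the Docker host address instead):
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # React build files served as before
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # Spring Boot app exposed under /api, reached over plain HTTP inside the host
    location /api/ {
        proxy_pass http://localhost:8801/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}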

REST API over HTTPS in Google Compute Engine

Does anyone know how to easily set up HTTPS for a REST API in Google Compute Engine? I currently have a static IP and the API works over HTTP, but when I call it from the browser I get a mixed content error because the client is served over HTTPS (Firebase Hosting).
Is it possible to set up HTTPS with only a static IP (and not a domain name)?
-Jani
Is it possible to set up HTTPS with only a static IP (and not a domain name)?
Yes, it is possible, but since 2016 you cannot purchase an SSL certificate with a public IP address. You can use a self-signed certificate but you will have even more browser issues. Not recommended.
Possible Options:
Use your domain name (or purchase one) and use Let's Encrypt for SSL, which is free and is one of your better options (see the sketch after this list).
Use a different service such as Cloud Run, Cloud Functions, Firebase, or App Engine, which offer SSL and do not require a domain name that you own, since you can use Google's endpoint.
Attach a Google Load Balancer in front of your Compute Engine instance and configure a front end with a Google Managed SSL certificate. However, this will require a domain name.
If you do not want to use your own domain name, then option #2 is your only choice.
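If you go with option #1, issuance is typically a couple of commands on the Compute Engine VM once DNS points at your static IP (a rough sketch assuming a Debian/Ubuntu image with Nginx in front of the API; the domain name is a placeholder):
sudo apt install certbot python3-certbot-nginx
# obtains a certificate from Let's Encrypt and updates the Nginx config to use it
sudo certbot --nginx -d api.example.com
# certificates are valid for 90 days; certbot installs a timer to renew them automatically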
To set up HTTPS for a REST API in Google Compute Engine:
1. You have to buy a domain.
2. You have to buy an SSL certificate.
3. Create a load balancer resource in Google Cloud and assign the domain and the certificate to it.
4. You can also install the certificate on the server directly.
If you want to use HTTPS over an IP address instead of a domain, please follow this link.

How to use Azure Traffic Manager with a custom domain, if the DNS settings don't allow for forwarding

I have an Azure web app up and running, using a custom domain purchased outside of Azure... and that all runs fine. So I have https://myappname.azurewebsites.net/ loading fine with my domain name URL https://www.myappname.com
I'm now trying to upgrade the web app using Azure Traffic Manager. I've cloned the app a few times, each on its own app service plan, and I have the traffic manager all up and running fine. I can successfully hit different versions of my cloned website based on the traffic manager configuration profile... so no issues there.
The only issue is that I can only access the "traffic managed" version of my website via the standard azure URL -> myappname.trafficmanager.net.
All examples I've seen say all I really need to do now, is go into my DNS Management screen, and add domain forwarding, however, my online DNS management tool does not offer this option.
I can't really change my A record in the DNS management screen, because I don't know the IP address of myappname.trafficmanager.net
Every place I've tried to change the name of the current/working Azure URL (like in awverify text files, www CNAMEs, etc.) does nothing. The DNS still points to the single instance, which remains as the IP address in the DNS manager's A record.
Also, since my live/single instance is linked to the domain name (along with the SSL binding), I can't add those properties to the clones, which makes sense... only one version can be live. However, I could unbind that when I make the switch from the single-instance web app to the traffic-managed set of clones, but I fear I can only bind it to one of the clones. I can't seem to bind it to the myappname.trafficmanager.net version, which might cascade down to all of its endpoints. Is there a way to bind my domain name and SSL cert to more than one version of my web app?
Thanks!
Is there a way to bind my domain name and SSL cert to more than one version of my web app?
I don't think you can do that unless you have two different domains or subdomains, each with its own SSL cert. Each web app hostname is globally unique, and each SSL binding is attached to the web app's domain name.
If you have a purchased domain, you can just keep the default xxx.azurewebsites.net as each hostname and configure the two Azure App Services as the endpoints of Traffic Manager.
By default, Azure provides a wildcard cert for the domain *.azurewebsites.net, so you can access these hostnames over HTTPS without any extra cert. Then use a CNAME record www on your domain domain.com at your DNS provider to point to the Traffic Manager hostname myappname.trafficmanager.net. Since Traffic Manager works at the DNS level, it does not validate server or client SSL, so you can safely ignore the SSL warning when accessing via the Traffic Manager hostname.
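In zone-file terms, the record at your DNS provider would look something like this (names are the placeholders from above):
; zone for domain.com
www    CNAME    myappname.trafficmanager.net.
; Traffic Manager then answers with one of its configured endpoints (the xxx.azurewebsites.net hostnames)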
Feel free to let me know if you have any questions.

How to allow external custom domains to run a Laravel app on my server?

My app is a Laravel app, running on Nginx, provisioned by Forge, and SSL certificates are provided by CloudFlare.
It is hosted at a URL like https://www.myapp.com
My app’s customers are businesses, and already own their domains:
https://www.customer1.com
https://www.customer2.com
https://www.customer3.com
etc.
I want my customers to run MyApp from the sub-domains of their choice:
https://some-name.customer1.com
https://some-other-name.customer2.com
https://any-name-they-want.customer3.com
etc.
My customers should not install anything; MyApp still runs on myapp.com, not on their servers.
My customers should only (if possible) modify their DNS, probably adding a CNAME like "some-name" that points to "myapp.com".
I followed this amazing article: Dynamic custom domain routing in Laravel.
but I can't get it to work in an https (with SSL) environment -- the browser returns:
This site can’t provide a secure connection
some-name.customer1.com uses an unsupported protocol.
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
The client and server don't support a common SSL protocol version or cipher suite.
How should Nginx and/or SSL certificates be configured?
This is still not a very simple problem to solve.
However, Caddy does generate SSL certificates automatically (if replacing Nginx with Caddy is an option for you).
You can check the documentation for more.
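For illustration, a minimal Caddyfile sketch (Caddy v2) that issues certificates on demand for customer domains and proxies to the app could look like this; the ask endpoint and upstream port are placeholders, and the app would need to expose an endpoint confirming that a domain belongs to an active customer:
{
    on_demand_tls {
        # Caddy asks this URL before issuing a cert for an unknown domain
        ask http://localhost:8000/api/check-domain
    }
}

https:// {
    tls {
        on_demand
    }
    # forward traffic to the Laravel app listening locally
    reverse_proxy 127.0.0.1:8080
}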

Custom domains for Multi-tenant web app

I am developing an app (RoR + Heroku) which allows users to create their own websites, either using my subdomain (pagename.myapp.com) or using their own domain (pagename.com).
An important point of this is that this option is the key of my business: subdomains are the free plans and custom domains are the paid ones. So I have a table where I store the custom domains of each user and check if this page is active (exists and has paid the quota).
For that I need to give users the ability to point their domain to my servers. As we all know, Heroku doesn't recommend the use of DNS A-records.
I would also like to abstract this feature as much as possible, so that I can switch my infrastructure (Heroku to AWS) in the future without having to ask all my users to change their DNS zone. Taking this into account, I think the best option would be to run something like an EC2 proxy (using an AWS Elastic IP), which gives me ownership of that IP. This proxy, I think, should redirect to proxy.myapp.com, and I would resolve the request at the app level.
Since I didn't find clear information about this, I am not sure whether this hypothesis is the best solution, or how to set up the proxy (which type of proxy to use? Nginx maybe?).
That said, I would like to ask for recommendations/best practices to solve this "common" feature.
Thanks
What you are wanting to do is fairly straightforward to implement. Your assumptions are correct about setting up the proxy. Nginx or HAProxy will both work great for this (I personally would use HAProxy). Here are some of the gotchas that you will run into though:
Changing the host header at a proxy server can cause the end web application to generate incorrect links. You can use relative paths to fix this, but it requires the web application developer to be aware of the environment that they are running in.
user connects to www.example.com (proxy server)
proxy server connects to www.realdomain.com (web app)
the web app has a link for a shopping cart. www.realdomain.com/shoppingcart
the end user clicks on the link but the link is www.realdomain.com/shoppingcart instead of www.example.com/shoppingcart
The cost of the host acting as the proxy server. This can spiral out of control really quickly. For example, do you want redundancy, and if so, how are you planning on implementing that? Do you plan on having SSL termination? If so, you will have to increase the CPU count to accommodate the additional load. Do you want a secure connection to Heroku from your proxy? If so, you will need to increase the CPU count for that as well. You may also have to add additional RAM, depending on the number of concurrent connections.
Heroku also changes their load balancers regularly. This is important because your proxy service will need to reload the config / update the IP addresses of the Heroku instances every 60 seconds. In my experience they may change once or twice a day, but the DNS entry that they use has a 60-second TTL. That means you should make sure that you are capable of updating your config as often as every 60 seconds.
My company has been doing something very similar to this for almost a year now. We use HAProxy and simply have it reload the config regularly. We have never had an outage or an interruption to our end users. Nginx is also a very good product. It has built-in DNS caching, so if you go that route you will need to make sure that you configure it correctly so that the DNS cache TTL is 60 seconds (see the sketch below).
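To illustrate that last point, the usual Nginx workaround is to declare a resolver with a short validity and put the upstream hostname in a variable, which forces Nginx to re-resolve it instead of caching the Heroku IPs at startup (a rough sketch; hostnames are placeholders):
resolver 8.8.8.8 valid=60s;

server {
    listen 443 ssl;
    server_name pagename.com;
    # ssl_certificate / ssl_certificate_key for the customer's domain go here

    location / {
        # using a variable makes Nginx re-resolve the hostname per the resolver's TTL
        set $heroku_upstream myapp.herokuapp.com;
        proxy_set_header Host myapp.herokuapp.com;
        proxy_pass https://$heroku_upstream;
    }
}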
Will many of your clients want to use your app on their domain apex? E.g. example.com rather than theapp.example.com? If not, I would recommend having them CNAME to proxy.myapp.com, which CNAMEs to myapp.herokuapp.com. Then, you can update proxy.myapp.com without customer interruption.
If you do need apex or A record support, you would want to set up Nginx as a reverse proxy for your Heroku app. Keep in mind that if you need HTTPS support for client domains, you will need to do some sort of certificate management on your proxy.
I like the answer dtorgo gave and that he mentioned the TLS termination, which many online tutorials on custom domains don't touch at all.
I'll go into more detail on how to implement the custom domains feature for your SaaS while also handling the TLS/HTTPS.
If your customers just CNAME to your domain or create the A record to your IP and you don't handle TLS termination for these custom domains, your app will not support HTTPS, and without it, your app won't work in modern browsers on these custom domains.
You need to set up a TLS termination reverse proxy in front of your webserver. This proxy can be run on a separate machine but you can run it on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the Host header of the incoming request, e.g. app.customer1.com or customer2.com etc., and then it will decide which TLS certificate to use by checking the SNI.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddyserver, greenlock.js, OpenResty (Nginx).
tl;dr on what happens here:
Caddyserver listens on 443 and 80; it receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
How to handle it on my backend
Your proxy is terminating TLS and proxying requests to your backend. However, your backend doesn't know who is the original customer behind the request. This is why you need to tell your proxy to include additional headers in proxied requests to identify the customer. Just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com or whatever the Host header is of the original request.
Now when you receive the proxied request on the backend, you can read this custom header and you know who is the customer behind the request. You can implement your logic based on that, show data belonging to this customer, etc.
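The idea is the same in any framework; as a rough illustration in Spring (matching the rest of this thread, with X-Serve-For as the hypothetical header name from above), the backend lookup could look like this:
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TenantController {

    // The proxy forwards the original Host value in a custom header (X-Serve-For here).
    @GetMapping("/dashboard")
    public String dashboard(@RequestHeader(value = "X-Serve-For", required = false) String customDomain) {
        if (customDomain == null) {
            // Request arrived on the default domain, not via a customer's custom domain.
            return "default tenant";
        }
        // Look up the tenant that registered this custom domain and render their data.
        return "serving tenant for domain: " + customDomain;
    }
}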
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise, you may be waking up in the middle of the night to restart machines or your proxy manually.
If you need more detail you can DM me on Twitter #dragocrnjac
