I deployed a web application to Elastic Beanstalk and used Route 53 to point two domains at it. In the EB environment it seems I am only allowed to add one certificate to port 443 on my load balancer. Let's say my users only use my domain names to access my web application. How should I go about creating and adding SSL certificate(s) to secure the connections from those two domains to my application?
Yes, you can. In the EB console you can add only one SSL certificate; to add further ones, you have to do it directly in the EC2 console on your load balancer.
The Application Load Balancer used by your EB environment supports multiple certificates, so you can add extra SSL certificates to your HTTPS listener.
Helpful information is below:
How do I add multiple SSL certificates to the Application Load Balancer in my Elastic Beanstalk environment?
How can I add certificates for multiple domains to an ELB using AWS Certificate Manager?
Application Load Balancers Now Support Multiple TLS Certificates With Smart Selection Using SNI
Elastic Beanstalk Add more than one ssl certificate
Alternatively, you can register multiple domains under one certificate.
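A sketch of that single-certificate approach with the AWS CLI (the domain names are placeholders): request one ACM certificate that lists the second domain as a subject alternative name, validate it, and attach just that one certificate to the listener.
aws acm request-certificate \
    --domain-name first-domain.example \
    --subject-alternative-names second-domain.example \
    --validation-method DNS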
In the EC2 console, there is an option on the load balancer's HTTPS listener to modify its SSL certificates:
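If you prefer the CLI over the console, the extra certificate can be attached to the existing HTTPS listener with aws elbv2 add-listener-certificates (the ARNs below are placeholders); the load balancer then serves the right certificate per domain via SNI.
aws elbv2 add-listener-certificates \
    --listener-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-alb/... \
    --certificates CertificateArn=arn:aws:acm:eu-west-1:123456789012:certificate/...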
I'm new to the topic of SSL certificates and I want to install my purchased SSL certificate so that users entering my site won't see the untrusted-certificate warning. Here are the steps I have done so far:
created a .p12 file using keytool
created a CSR file from the file in step 1
uploaded the CSR to my SSL vendor and, after passing their verification of my domain, downloaded the following files: .crt, .ca-bundle, .p7b
I placed all the files (including the one I generated) in the resources directory and added the following properties:
server.ssl.key-store:classpath:myFile.p12
server.ssl.key-store-password:some_pass
server.ssl.keyStoreType:PKCS12
server.ssl.keyAlias:someAlias
I later ran keytool -importcert to import the certificate file I got from the SSL vendor into the .p12 file I had created.
Then I created my jar and uploaded it to Pivotal Cloud Foundry, but I still see the invalid certificate message.
I don't know whether I need to do something on the Pivotal platform or in the Spring Boot config.
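For reference, importing the vendor's reply into the keystore usually means importing it into the same alias that already holds the key pair; a hedged example with placeholder file names (the .p7b file normally carries the full certificate chain):
keytool -importcert -trustcacerts -alias someAlias -file vendor-reply.p7b -keystore myFile.p12 -storetype PKCS12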
The only way this would work is if you use a TCP route. With standard HTTP routes on Cloud Foundry, the traffic first hits a load balancer & then Gorouter. TLS termination is going to happen there, not at your application. If you use a TCP route, this will load balance at the TCP level and allow your application to perform the TLS termination directly.
That said, you really don't want to do that. The TCP route isn't likely to allow you to pick port 443, because a port can only be assigned to one application. That means only one application using TCP routes can have port 443. Also, in most cases platform operators only allow high-numbered ports for TCP routes, which means no one would be able to pick 443. Long story short, you don't want your users to have to access your site as https://www.example.com:47385, so you don't want a TCP route.
To set this up properly with standard HTTP routes, you are going to need to work with your platform operations team. Together you will need to do the following (a cf CLI sketch of steps 4 and 5 follows the list):
1. Obtain the domain you'd like to use.
2. Obtain a load balancer. This needs to be configured to route traffic to the Gorouters in the foundation. You can skip this and use the existing load balancer, but that has implications [1] for step 6 below.
3. Configure DNS for your domain so that it routes to the load balancer from step 2.
4. Add the domain as a private or shared domain in CF.
5. Map a route to your application using the domain you added in step 4.
6. Add your TLS certificate & key to the load balancer [1].
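A sketch of steps 4 and 5 with the cf CLI (v6 syntax; the org, app and domain names are placeholders):
cf create-domain my-org example.com
cf map-route my-app example.com --hostname www
(Use cf create-shared-domain instead if the domain should be available to every org.)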
When you've done all this, traffic to your domain will resolve to the IPs of your load balancers. Your users' browsers will make an HTTPS request to the LB, which will terminate TLS (if it's an HTTP/layer-7 LB) and forward along to the Gorouter (with a TCP/layer-4 LB, TLS is terminated at the Gorouter instead), which in turn forwards along to your application (based on the route you mapped).
Your application will need to look at the x-forwarded-for and x-forwarded-proto headers to confirm if the request came in over HTTPS, since it is not terminating TLS directly.
[1] - The implication is with how the certificates get installed. With a separate LB, you add the cert to it and are done. If you are trying to reuse the platform LB, you will need to add the cert to the existing list of certs. In addition, if your platform operations team is using a TCP/layer-4 load balancer, then TLS termination does not happen at the LB; it happens at the Gorouter. This means you then have to load your TLS cert into the Gorouter, which requires a BOSH deploy and is more work. Modifying the platform LB also runs the risk of an error taking down the foundation. For those reasons and more, adding a separate LB for your app is usually the way to go.
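A minimal sketch of that x-forwarded-proto check in a Spring Boot app (the class name and redirect behavior are illustrative, assuming Spring Boot 2.x with the javax servlet API):
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

// TLS is terminated at the load balancer / Gorouter, so request.isSecure()
// is always false here; the forwarded headers are the only signal.
@Component
public class ForwardedProtoFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String proto = request.getHeader("X-Forwarded-Proto");
        if (proto != null && !"https".equalsIgnoreCase(proto)) {
            // Send plain-HTTP callers back over HTTPS.
            response.sendRedirect("https://" + request.getServerName() + request.getRequestURI());
            return;
        }
        chain.doFilter(request, response);
    }
}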
I have the following setup:
React.js App on Cloudfront (example.eu) -> Certificate for *.example.eu and example.eu
Fargate Python FastAPI instance on port 5000
Load Balancer internet facing http://***.eu-central-1.elb.amazonaws.com/
I can visit my website https://example.eu just fine
So in my front end I used the load balancer URL for the requests to the Fargate instance --> GET http://***.eu-central-1.elb.amazonaws.com/users.
When I click the button on the website to fire the request to the backend, I get a mixed-content error in the browser.
Well, I thought, let's do the calls over HTTPS instead. I added an HTTPS listener on 443 and attached the certificate created earlier. If I deactivate SSL verification (e.g. in Postman) it works fine, but otherwise I get the following error in my browser:
VM11:1 GET https://***.eu-central-1.elb.amazonaws.com/users net::ERR_CERT_COMMON_NAME_INVALID
Do I need another certificate for the load balancer URL? I checked out a lot of tutorials and they only create one for the domain.
Do I need to add the certificate to my back-end?
I'm really confused about how to establish proper HTTPS communication from example.eu over the load balancer (https://***.eu-central-1.elb.amazonaws.com) to my Fargate backend on port 5000.
Thanks
Found the solution:
Go to Route 53 and add an A record with an alias target pointing to the ALB.
Important: add a subdomain in the name field, e.g. api.example.eu.
That's it :) The front end then calls https://api.example.eu instead of the raw elb.amazonaws.com hostname, so the request matches the *.example.eu certificate on the HTTPS listener and the ERR_CERT_COMMON_NAME_INVALID error goes away.
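A sketch of that record with the AWS CLI (the hosted-zone IDs are placeholders; the ALB's own hosted-zone ID is shown in the EC2 console next to its DNS name):
aws route53 change-resource-record-sets \
    --hosted-zone-id Z_EXAMPLE_EU_ZONE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "api.example.eu",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z_ALB_ZONE_ID",
            "DNSName": "***.eu-central-1.elb.amazonaws.com",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'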
Using Fabric 2 stateless services with Kestrel 3.1
Have a problem exposing an HTTPS endpoint. A primary certificate is defined on the cluster (Security section). This certificate (primary) is made accessible to the nodes (i.e. via X509Store find operations on the thumbprint or subject) automatically by Service Fabric. When configuring Kestrel for a particular endpoint, the certificate is used by the UseHttps method on any IPv6 address (i.e. IPv6Any). In the Application Manifest, access to the certificate's private key is granted (see article) with an endpoint policy. Here is example code on gist. The cluster's load balancer exposes the 443 HTTPS endpoint via the 8443 port (similar to the setup in this tutorial).
Despite the above configuration, when navigating to the application the response is that the web page is either down or has been moved, plus an ERR_HTTP2_INADEQUATE_TRANSPORT_SECURITY error.
According to the logging sent to Insights, the service starts fine using the primary certificate:
Hosting environment: Production
...
Now listening on: https://[::]:443
Anybody else got a similar setup working?
Turns out I had set the protocol to HTTP2 rather than HTTP1.
I have a Spring Boot application that I want to deploy on Google Compute Engine or Kubernetes, and I want to expose it through HTTPS instead of HTTP.
I want to do this because I have an Angular frontend that is deployed on Google App Engine and it needs to access the API through HTTPS instead of HTTP.
The API is accessible through port 8080 and it works if I use HTTP. How can I expose the API through HTTPS? Can I use a load balancer with HTTPS that forwards all incoming traffic to the HTTP backend?
Well, I think the SSL certificate is the key for both (GCE and GKE). You must set up a certificate in either case.
On Kubernetes Engine you can deploy the application behind a load balancer and install an SSL certificate on it. Then you have to modify your Ingress configuration to use the SSL certificate. This process is too long to explain in full here, but you can find the details in [1], and details about the load balancer Ingress configuration in [2].
For GCE you will need to set up an SSL certificate on the instance or use a load balancer. Take a look at the GCP documentation that explains it [3].
[1] https://estl.tech/configuring-https-to-a-web-service-on-google-kubernetes-engine-2d71849520d
[2] https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#remarks
[3] https://cloud.google.com/solutions/connecting-securely#https-and-ssl
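For the Kubernetes Engine path described in [1] and [2], a minimal sketch (secret, service and host names are placeholders) is to store the certificate and key as a TLS secret and reference it from the Ingress that fronts the Spring Boot service:
kubectl create secret tls api-tls-cert --cert=fullchain.pem --key=privkey.pem

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  tls:
  - hosts:
    - api.example.com
    secretName: api-tls-cert
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: spring-boot-api
            port:
              number: 8080
The load balancer created for the Ingress then terminates HTTPS and forwards plain HTTP to the service on port 8080, which is exactly the "HTTPS in front, HTTP behind" setup asked about.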
How can I secure my site, going from http://my_site to https://my_site?
I am running Apache Tomcat, and I have an AWS (ACM) certificate and an Elastic Load Balancer in front of my EC2 instance.
Essentially, you cannot add Amazon-issued certificates to Tomcat: you cannot retrieve the certificate's private key.
However, you can deploy the certificate on the ELB (Elastic Load Balancer).
You have to ensure that the ELB is listening on port 443.
You will find step by step instructions on AWS documentation (http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html#create-https-lb-clt).
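Along the lines of that guide, a hedged CLI sketch (the load balancer name, backend port and certificate ARN are placeholders) for adding an HTTPS listener that terminates TLS with the ACM certificate and forwards plain HTTP to Tomcat:
aws elb create-load-balancer-listeners \
    --load-balancer-name my-load-balancer \
    --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=8080,SSLCertificateId=arn:aws:acm:us-east-1:123456789012:certificate/..."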
Apparently you can now export the private key of a private certificate (one issued by ACM Private CA) - https://docs.aws.amazon.com/acm/latest/userguide/export-private.html