Using Fabric 2 stateless services with Kestrel 3.1
I have a problem exposing an HTTPS endpoint. A primary certificate is defined on the cluster (Security section). This (primary) certificate is made accessible to the nodes automatically by Service Fabric (i.e. via X509Store find operations on the thumbprint or subject). When configuring Kestrel for a particular endpoint, the certificate is passed to the UseHttps method while listening on the IPv6 any address (IPv6Any). In the application manifest, access to the certificate's private key is granted with an endpoint policy (see article). Here is example code on gist. The cluster's load balancer exposes the 443 HTTPS endpoint via the 8443 port (similar to the setup in this tutorial).
Despite the above configuration, navigating to the application returns a response saying the web page is either down or has been moved, along with an ERR_HTTP2_INADEQUATE_TRANSPORT_SECURITY error.
According to the logging sent to Application Insights, the service starts fine using the primary certificate:
Hosting environment: Production
...
Now listening on: https://[::]:443
Has anybody else gotten a similar setup working?
It turns out I had set the protocol to HTTP/2 rather than HTTP/1.
Related
My Spring app uses Let's Encrypt and is HTTPS only. I did not include an HTTP-to-HTTPS redirect, as it worked for me in Postman with the https:// format.
When I deployed to Cloud Run, specified the custom port (the port configured in Spring),
and tested using the URL from the dashboard,
https://..blah..run.app
I got the following error message:
Bad Request
This combination of host and port requires TLS.
What configuration is required on Cloud Run to resolve this?
The URL as shown on the service details page has https://...
EDIT:
If Cloud Run does not need me to take care of SSL, I can remove these application properties entries:
server.ssl.key-store-type=PKCS12
server.ssl.key-store=classpath:key/keystore.p12
server.ssl.key-store-password=${lets.secret}
server.ssl.key-alias=someCertAlias
server.ssl.enabled=true
So, can I get an answer on whether to remove SSL from Spring?
If Cloud Run always uses HTTP, all my calls go through the redirectConnector, which seems pointless.
The Cloud Run Service listens on HTTP and HTTPS. Your application running in the container must listen on a port configured with HTTP only.
FYI: For a public-facing web server, you should almost always enable HTTP. Otherwise, when a user enters www.example.com in the browser, the user will receive a connection error. This is not always the case (for example, .dev gTLDs), but it is good practice. When a user connects to Cloud Run with the HTTP protocol, Cloud Run will redirect the user to HTTPS and connect to your application using the HTTP protocol.
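As a minimal sketch of that setup (assuming a standard Spring Boot 2.x app; the Application class name is illustrative), the container binds plain HTTP to the port Cloud Run injects through the PORT environment variable and leaves TLS termination entirely to Cloud Run:

import java.util.Map;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(Application.class);

        // Cloud Run tells the container which port to listen on via PORT;
        // fall back to 8080 for local runs.
        String port = System.getenv().getOrDefault("PORT", "8080");

        // Plain HTTP only: the server.ssl.* keystore entries from the question are
        // removed, because Cloud Run terminates TLS at its edge and forwards the
        // request to the container over HTTP.
        app.setDefaultProperties(Map.of("server.port", port));

        app.run(args);
    }
}

With the server.ssl.* properties dropped, requests to the https://...run.app URL are terminated by Cloud Run and reach the container as plain HTTP, so no redirect connector is needed inside the app.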
I deployed a web application to EB. I used Route 53 to point two domains at my application. In the EB environment, it seems I am only allowed to add one certificate to port 443 on my load balancer. Let's say my users only use my domain names to access my web application. How should I go about creating and adding SSL certificate(s) to secure the connections from those two domains to my application?
Yes, you can. In the EB console you can add only one SSL certificate. To add the other ones, you have to do it directly in the EC2 console on your load balancer.
The load balancer used by your EB environment supports multiple certificates, so you can add extra SSL certificates to your HTTPS listener.
Helpful information is below:
How do I add multiple SSL certificates to the Application Load Balancer in my Elastic Beanstalk environment?
How can I add certificates for multiple domains to an ELB using AWS Certificate Manager?
Application Load Balancers Now Support Multiple TLS Certificates With Smart Selection Using SNI
Elastic Beanstalk Add more than one ssl certificate
Alternatively, you can register multiple domains under one certificate.
In the EC2 console, the HTTPS listener has an option to modify its SSL certificates.
I'm new to the topic of SSL certificates and I want to install my purchased SSL certificate so that when users enter my site they won't see the untrusted certificate warning. Here are the steps I did so far:
created a p12 file using the keytool
created a csr file from the file in step 1
uploaded the CSR to my SSL vendor and, after passing their verification of my domain, downloaded the following files: .crt, .ca-bundle, and .p7b
placed all the files (including the one I generated) in the resources directory and added the following properties:
server.ssl.key-store:classpath:myFile.p12
server.ssl.key-store-password:some_pass
server.ssl.keyStoreType:PKCS12
server.ssl.keyAlias:someAlias
I later ran the following command: keytool -importcert, trying to import the file I got from the SSL vendor into the .p12 file I created.
Then I created my JAR and uploaded it to Pivotal Cloud Foundry, but I still see the invalid certificate message.
I don't know if I need to do something on the Pivotal platform or something in the Spring Boot config.
The only way this would work is if you use a TCP route. With standard HTTP routes on Cloud Foundry, the traffic first hits a load balancer & then Gorouter. TLS termination is going to happen there, not at your application. If you use a TCP route, this will load balance at the TCP level and allow your application to perform the TLS termination directly.
That said, you really don't want to do that. The TCP route isn't likely to allow you to pick port 443, because a port can only be assigned to one application. That means only one application using TCP routes can have port 443. Also, in most cases platform operators only allow high-numbered ports for TCP routes, which means no one would be able to pick 443. Long story short, you don't want your users to have to access your site as https://www.example.com:47385, so you don't want a TCP route.
To set this up properly with standard HTTP routes, you are going to need to work with your platform operations team. Together you will need to do the following:
Obtain the domain you'd like to use.
Obtain a load balancer. This needs to be configured to route traffic to the Gorouters in the foundation. You can skip this and use the existing load balancer, but that has implications[1] for step #6 below.
Configure DNS for your domain so that it routes to the load balancer in step #2.
Add the domain as a private or shared domain in CF.
Map a route to your application using the domain you added in step #4.
Add your TLS certificate & key to the load balancer [1].
When you've done all this, traffic to your domain will resolve to the IPs of your load balancers. Your user's browser will make an HTTPS request to the LB, which will terminate TLS (if it's an HTTP/layer-7 LB) and forward the request along to Gorouter (if it's a TCP/layer-4 LB, TLS is terminated at Gorouter instead), which in turn forwards it along to your application (based on the route you mapped).
Your application will need to look at the x-forwarded-proto header to confirm whether the request came in over HTTPS (and x-forwarded-for for the original client address), since it is not terminating TLS directly; see the sketch after the footnote below.
[1] - The implication is with how the certificates get installed. With a separate LB, you add the cert to it and are done. If you are trying to reuse the platform LB, you will need to add the cert to the existing list of certs. In addition, if your platform operations team is using a TCP/layer-4 load balancer then TLS termination does not happen at the LB, it happens at Gorouter. This means you then have to load your TLS cert into the Gorouter, which requires a Bosh deploy and is more work. Modifying the platform LB also runs the risk of an error taking down the foundation. For those reasons and more, adding a separate LB for your app is usually the way to go.
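As a rough sketch of that header check in a Spring Boot app (assuming the javax.servlet API used by Spring Boot 2.x; the filter class name and the redirect policy are illustrative, not something the platform requires):

import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class ForwardedProtoFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request,
                                    HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        // Gorouter (or the LB) terminated TLS upstream, so request.isSecure() is false here;
        // the original scheme arrives in the X-Forwarded-Proto header.
        String proto = request.getHeader("X-Forwarded-Proto");

        if ("http".equalsIgnoreCase(proto)) {
            // Example policy: send plain-HTTP callers over to the HTTPS route.
            response.sendRedirect("https://" + request.getServerName() + request.getRequestURI());
            return;
        }

        chain.doFilter(request, response);
    }
}

Spring Framework also ships a ForwardedHeaderFilter that can apply these headers to the incoming request automatically, if you would rather not hand-roll the check.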
I have a Spring Boot application that I want to deploy on Google Compute Engine or Kubernetes, and I want to expose it through HTTPS instead of HTTP.
I want to do this because I have an Angular frontend deployed on Google App Engine, and it needs to access the API through HTTPS instead of HTTP.
The API is accessible through port 8080 and works if I use HTTP. How can I expose the API through HTTPS? Can I use a load balancer with HTTPS to redirect all incoming traffic to HTTP?
Well, I think the SSL certificate is the key for both (GCE and GKE). You must set a certificate for either option.
On Kubernetes Engine you can deploy the application behind a load balancer and install an SSL certificate on it. Then you have to modify your Ingress configuration to use the SSL certificate. This process is too long to explain in full here, but you can find the details in [1] and details about the load balancer Ingress configuration in [2].
For GCE you will need to set an SSL certificate on the instance or use a load balancer. Take a look at the GCP documentation that explains it [3].
[1] https://estl.tech/configuring-https-to-a-web-service-on-google-kubernetes-engine-2d71849520d
[2] https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#remarks
[3] https://cloud.google.com/solutions/connecting-securely#https-and-ssl
Why does one have to configure a port with a certificate when self-hosting a WCF service that uses transport security?
I understand SSL/HTTPS and that the certificate is needed for it. However, in non-WCF contexts, I can just create an SSL socket and assign the certificate programmatically. For the WCF self-host case, why is the extra netsh step required (as in the link below)?
http://msdn.microsoft.com/en-us/library/ms733791.aspx
This is only required in the case of HTTP.
That setup is required by HTTP.SYS, so whether you are using IIS or self-hosting, HTTP.SYS running in kernel mode will require the certificate-to-port binding.
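For illustration, the netsh step described in the linked article binds a certificate (identified by its thumbprint) to a port inside HTTP.SYS; the port, thumbprint, and application GUID below are placeholders:

netsh http add sslcert ipport=0.0.0.0:8443 certhash=<certificate-thumbprint> appid={00112233-4455-6677-8899-aabbccddeeff}

Because the binding lives in HTTP.SYS rather than in your process, it has to be registered out of band like this instead of being assigned programmatically on a socket you own.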