I am trying to add an authentication mechanism with Keycloak to my Node.js application, which runs over HTTPS on port 4443 (https://localhost:4443).
Keycloak itself runs on an EC2 instance on port 8443, also over HTTPS.
When I try to log in to my application on port 4443, I get an error.
Here is my keycloak.json file:
{
  "realm": "VideoKYC-Realm",
  "auth-server-url": "https://31.19.1.85:8443/auth",
  "ssl-required": "external",
  "resource": "VideoKYC",
  "public-client": true,
  "confidential-port": 0
}
I don't have an SSL certificate as of now, so I need a way to bypass certificate verification.
Additionally, when I tried to run Keycloak on port 8080, I was not able to access it via the browser, even though the AWS security group currently allows all traffic.
I am using the image below to run Keycloak on the EC2 instance:
https://hub.docker.com/r/jboss/keycloak
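For reference, a minimal sketch of running that image with both the HTTP and HTTPS ports published (the admin credentials and container name here are placeholders, not values from the question):

# Publish 8080 (HTTP) and 8443 (HTTPS); the image can generate a
# self-signed certificate for 8443 if none is mounted
docker run -d --name keycloak \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=change-me \
  -p 8080:8080 \
  -p 8443:8443 \
  jboss/keycloak

If 8080 is still unreachable with the security group wide open, it is worth checking that the container actually publishes that port (docker ps shows the mappings).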
I would appreciate any help with this issue.
You can use:
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
to instruct Node.js to disable TLS certificate verification (see https://nodejs.org/api/cli.html#cli_node_tls_reject_unauthorized_value). Of course, that is not a production setup, because it sacrifices TLS security.
Other options (probably more complicated):
- add the CA cert that signed the server certificate to your Node.js CA certs, as sketched below
- the Keycloak library you use may have a config option to disable TLS verification at the library level
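As an example, a minimal Node.js sketch of the first approach (the Keycloak URL is taken from the question; the rest is illustrative):

// DEV ONLY: disable TLS certificate verification process-wide.
// This must run before any TLS connection is made, and it removes
// protection against man-in-the-middle attacks.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";

const https = require("https");

// With verification off, the self-signed certificate on the Keycloak
// server is accepted and this request succeeds.
https.get("https://31.19.1.85:8443/auth", (res) => {
  console.log("Keycloak responded with status", res.statusCode);
});

For the CA-certificate option, Node.js supports the NODE_EXTRA_CA_CERTS environment variable, e.g. NODE_EXTRA_CA_CERTS=/path/to/ca.pem node app.js, which appends your CA to the default trust store without turning verification off.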
I have a server running Docker with an Nginx container inside, which serves React build files. This Nginx server has an installed and working SSL certificate on ports 80 and 443.
On the same machine, a JRE runs a Spring Boot application on port 8801.
I have searched online for how to create an SSL certificate for Spring Boot when ports 80 and 443 are already in use, or for the best practice for doing this alongside the existing SSL certificate, and could not find anything.
My friend suggested that we use a reverse proxy in order to hide http://example.com:8801 behind https://example.com/api.
What could be the best way to do it?
Thanks!
You would want to terminate the SSL on Nginx and take that load off the application server (Spring Boot running Tomcat, for example).
One reason to take SSL all the way to the app server is when the communication between the two needs to be kept secure. But if the app server and the web server are within the DMZ, you can just use the first approach and terminate on the web server; a lot of optimization goes into web servers to handle TLS termination. A sketch is below.
Refer to this for detailed responses and insights.
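As a sketch, an Nginx server block along these lines terminates TLS and hides the Spring Boot app under /api (the certificate paths and the upstream address are assumptions and depend on your Docker networking):

server {
    listen 443 ssl;
    server_name example.com;

    # the certificate Nginx already serves (paths assumed)
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    # React build files, as before
    location / {
        root /usr/share/nginx/html;
        try_files $uri /index.html;
    }

    # https://example.com/api/... -> http://<host>:8801/...
    location /api/ {
        # from inside the container, host.docker.internal or the host's
        # IP is needed to reach the Spring Boot process on the machine
        proxy_pass http://host.docker.internal:8801/;
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

Spring Boot then keeps serving plain HTTP on 8801 and never needs its own certificate.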
My Spring app uses Let's Encrypt and is HTTPS-only. I did not include an HTTP-to-HTTPS redirect, as it worked for me in Postman with https:// URLs.
When I deployed to Cloud Run, specified the custom port (the port configured in Spring), and tested using the URL from the dashboard
https://..blah..run.app
I get the following error message:
Bad Request
This combination of host and port requires TLS.
What configuration is required on Cloud Run to resolve this?
The URL as I see it on the service details page has https://...
EDIT:
If Cloud Run does not need me to take care of SSL, I can remove these application properties entries:
server.ssl.key-store-type=PKCS12
server.ssl.key-store=classpath:key/keystore.p12
server.ssl.key-store-password=${lets.secret}
server.ssl.key-alias=someCertAlias
server.ssl.enabled=true
So can I get an answer on whether to remove SSL from Spring?
If Cloud Run always uses HTTP towards the container, all my calls go through the redirect connector, which seems pointless.
The Cloud Run Service listens on HTTP and HTTPS. Your application running in the container must listen on a port configured with HTTP only.
FYI: for a public-facing web server, you should almost always enable HTTP. Otherwise, when a user enters www.example.com in the browser, the user will receive a connection error. This is not always the case (for example, .dev gTLDs, which browsers force to HTTPS), but it is good practice. When a user connects to Cloud Run over HTTP, Cloud Run redirects the user to HTTPS and connects to your application using the HTTP protocol.
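In practice that means the server.ssl.* entries above can go; a minimal application.properties sketch (assuming the standard PORT environment variable that Cloud Run injects):

# Cloud Run terminates TLS at its edge; the container only ever sees plain HTTP.
# Listen on the port Cloud Run provides via the PORT environment variable.
server.port=${PORT:8080}
# No server.ssl.* entries; SSL stays disabled when no keystore is configured.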
I am running an Airflow (version 1.10.10) webserver on EC2 behind an AWS ELB.
Here is the ELB listener configuration:
Load Balancer Protocol: SSL
Load Balancer Port: 443
Instance Protocol: TCP
Instance Port: 8080
Cipher: (omitted here)
SSL Certificate: (a cert here)
In front of the ELB, I configured Route 53 and set an FQDN for the web server, say abc.fqdn.
All page loads are working, like
https://abc.fqdn/admin/ or
https://abc.fqdn/admin/airflow/tree?dag_id=tutorial
All web form submissions are working too, like
Trigger DAG
However, after a form submission the page is redirected to an http:// URL, which does not load because of the ELB listener.
I have to manually change it back to https, e.g. https://abc.fqdn/admin/airflow/tree?dag_id=tutorial.
Here is what I did:
I read this issue: https://github.com/apache/incubator-superset/issues/978
Then, on the webserver EC2 instance, I found the file /usr/local/lib/python3.7/site-packages/airflow/www/gunicorn_config.py
and this example config: https://gist.github.com/kodekracker/6bc6a3a35dcfbc36e2b7
I added the following settings, and my config file now looks like this:
import setproctitle

from airflow import settings

secure_scheme_headers = {
    'X-FORWARDED-PROTOCOL': 'ssl',
    'X-FORWARDED-PROTO': 'https',
    'X-FORWARDED-SSL': 'on'
}
forwarded_allow_ips = "*"
proxy_protocol = True
proxy_allow_from = "*"

def post_worker_init(dummy_worker):
    setproctitle.setproctitle(
        settings.GUNICORN_WORKER_READY_PREFIX + setproctitle.getproctitle()
    )
However, the new config above does not seem to work.
Did I do anything wrong? How can I make the web node redirect to https after form submission?
For HTTPS I used an ALB instead… I had to set up the Airflow webserver with a certificate (self-signed, generated for the domain the ALB will use) and serve on port 8443 (choose anything you like). Then I set the ALB to route HTTPS to the target group the webserver ASG instances are in on 8443, and told the ALB to use the properly signed cert that you already have in your AWS account (not the self-signed one on the instance).
Oh, and change the base_url to the https scheme.
I had trouble with the ELB because I was similarly directing 443 (with the cert in the AWS account) to 8080, but 8080 was unencrypted…
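A sketch of the matching airflow.cfg entries for that setup (the domain and file paths are assumptions):

[webserver]
# Public URL behind the ALB; an https base_url keeps post-submit redirects on https
base_url = https://abc.fqdn
# Self-signed cert/key on the instance, so the ALB-to-instance hop is also encrypted
web_server_ssl_cert = /etc/ssl/certs/airflow-selfsigned.crt
web_server_ssl_key = /etc/ssl/private/airflow-selfsigned.key
web_server_port = 8443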
Using Service Fabric 2 stateless services with Kestrel 3.1.
I have a problem exposing an HTTPS endpoint. A primary certificate is defined on the cluster (Security section). This certificate (primary) is made accessible to the nodes (i.e. via X509Store find operations on the thumbprint or subject) automatically by Service Fabric. When configuring Kestrel for a particular endpoint, the certificate is used by the UseHttps method on any IPv6 address (i.e. IPv6Any). In the Application Manifest, access to the certificate's private key is granted (see article) with an endpoint policy. Here is example code on gist. The cluster's load balancer exposes the 443 HTTPS endpoint via the 8443 port (similar to the setup in this tutorial).
Despite the above configuration, when navigating to the application the response is that the web page is either down or has been moved, plus an ERR_HTTP2_INADEQUATE_TRANSPORT_SECURITY error.
According to the logging sent to Insights, the service starts fine using the primary certificate:
Hosting environment: Production
...
Now listening on: https://[::]:443
Has anybody else gotten a similar setup working?
It turns out I had set the protocol to HTTP2 rather than HTTP1.
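For reference, a sketch of the relevant Kestrel endpoint configuration; the certificate lookup is elided, and the fix is the Protocols line:

// inside ConfigureWebHostDefaults(webBuilder => ...)
// requires: using System.Net; using Microsoft.AspNetCore.Server.Kestrel.Core;
webBuilder.UseKestrel(options =>
{
    options.Listen(IPAddress.IPv6Any, 443, listenOptions =>
    {
        // Forcing HTTP/1.x resolved ERR_HTTP2_INADEQUATE_TRANSPORT_SECURITY;
        // the broken configuration had HttpProtocols.Http2 here.
        listenOptions.Protocols = HttpProtocols.Http1;
        // clusterCertificate = the primary cluster certificate found via X509Store
        listenOptions.UseHttps(clusterCertificate);
    });
});

Chrome raises ERR_HTTP2_INADEQUATE_TRANSPORT_SECURITY when HTTP/2 is negotiated over a TLS configuration it considers too weak (e.g. older Windows cipher-suite defaults), so pinning the endpoint to HTTP/1.x sidesteps that check.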
We are trying to enable TLS on Consul so that our REST service (which uses a self-signed certificate) can register with Consul in HTTPS mode. To enable TLS, I am following the Consul documentation as well as the links below:
https://www.digitalocean.com/community/tutorials/how-to-secure-consul-with-tls-encryption-on-ubuntu-14-04
http://russellsimpkins.blogspot.in/2015/10/consul-adding-tls-using-self-signed.html
Note: I am using CentOS 7.2.
Now my service tries to register with Consul, but in the Consul dashboard it shows as down, and on the console I get the error below:
x509: certificate signed by unknown authority
We found the solution: we had to add the CA cert to the OS TLS trust store instead of the JVM trust store. For CentOS that store is /etc/pki/tls/certs/ca-bundle.crt.
Simply appending the CA certificate to this file solved our issue.
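A sketch of that fix on CentOS (the CA file name is an assumption):

# Back up the system bundle, then append the CA that signed the Consul certificate
sudo cp /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt.bak
cat consul-ca.pem | sudo tee -a /etc/pki/tls/certs/ca-bundle.crt > /dev/null

On CentOS 7 the managed alternative is to drop the CA into /etc/pki/ca-trust/source/anchors/ and run update-ca-trust extract, which regenerates the same bundle.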