I've made a Spring Boot application that authenticates with Google Cloud Storage and performs actions on it. It works locally, but when I deploy it to GKE as a Pod, it fails with errors.
I have a VPC environment with a Google Cloud Storage bucket and a Kubernetes cluster that runs some Spring Boot applications, which act on the bucket through the com.google.cloud.storage library.
The cluster has Istio enabled, plus a Gateway resource with secure HTTPS that targets the ingress load balancer, as defined here:
https://istio.io/docs/tasks/traffic-management/secure-ingress/sds/
All my Pods are reached through a VirtualService bound to this Gateway, and that works fine: they have the istio-proxy sidecar container injected, and I can reach them from outside.
So, in the DEV environment I configured this application to get the credentials from an environment variable:
ENV GOOGLE_APPLICATION_CREDENTIALS="/app/service-account.json"
I know it's not safe, but I just want to make sure it's authenticating, and as far as I can tell from the logs, it is.
Since my code manipulates Cloud Storage, it needs a Storage object, which I get like this:
this.storage = StorageOptions.getDefaultInstance().getService();
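For context, a hypothetical example of the kind of interaction my code performs (a bucket listing, which matches the HttpStorageRpc.list frame in the trace below); this is illustrative, not my exact implementation:

import com.google.cloud.storage.Bucket;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

// Build the client from Application Default Credentials
// (picked up from GOOGLE_APPLICATION_CREDENTIALS) and list the buckets.
Storage storage = StorageOptions.getDefaultInstance().getService();
for (Bucket bucket : storage.list().iterateAll()) {
    System.out.println(bucket.getName());
}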
It works fine when running locally. But with the API running inside the Pod container on GKE, whenever I try any interaction with Storage it returns errors like:
[2019-04-25T03:17:40.040Z] [org.apache.juli.logging.DirectJDKLog] [http-nio-8080-exec-1] [175] [ERROR] transactionId=d781f21a-b741-42f0-84e2-60d59b4e1f0a Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is com.google.cloud.storage.StorageException: Remote host closed connection during handshake] with root cause
java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
...
Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:994)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1379)
at sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:559)
at sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:185)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:162)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:142)
at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:84)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1011)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:499)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:549)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.list(HttpStorageRpc.java:358)
... 65 common frames omitted
Caused by: java.io.EOFException: SSL peer shut down incorrectly
at sun.security.ssl.InputRecord.read(InputRecord.java:505)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
...
It looks like calls made from inside the Pod need some extra HTTPS configuration, but I'm not sure.
So what I'm wondering is:
Whether some firewall rule is blocking the call from my Pod to the "outside" (which would be weird, since they run on the same network, or at least I thought so).
Whether the Gateway I defined is somehow not letting this Pod make the call.
Or whether I need to create the Storage object with some custom HTTP configuration (a rough sketch follows the link), as can be seen in this reference:
https://googleapis.github.io/google-cloud-java/google-cloud-clients/apidocs/com/google/cloud/storage/StorageOptions.html#getDefaultHttpTransportOptions--
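For reference, a rough sketch of what that custom transport configuration might look like, in case it turns out to be needed (the timeout values here are arbitrary assumptions):

import com.google.cloud.http.HttpTransportOptions;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

// Sketch only: supply explicit HTTP transport options instead of the defaults.
HttpTransportOptions transportOptions = HttpTransportOptions.newBuilder()
    .setConnectTimeout(10000)  // milliseconds, arbitrary value
    .setReadTimeout(10000)     // milliseconds, arbitrary value
    .build();

Storage storage = StorageOptions.newBuilder()
    .setTransportOptions(transportOptions)
    .build()
    .getService();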
My knowledge of HTTPS and secure connections is not very good, so maybe a gap in concepts here is keeping me from seeing something obvious.
If anyone has an idea of what may be causing this, I would appreciate it very much.
Solved it. It was really Istio.
I didn't know that a ServiceEntry resource is needed to define which calls are allowed to go OUTSIDE the mesh.
So even though GCS is in the same project as GKE, they are treated as completely separate services.
I just had to create one and everything worked fine:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  namespace: {{ cloud_app_namespace }}
  name: external-google-api
spec:
  hosts:
  - "www.googleapis.com"
  - "metadata.google.internal"
  - "storage.googleapis.com"
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  - number: 80
    name: http
    protocol: HTTP
https://istio.io/docs/reference/config/networking/v1alpha3/service-entry/
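For anyone reusing this, it's applied like any other resource; assuming the manifest is saved as service-entry.yaml (with the templated namespace rendered):

kubectl apply -f service-entry.yaml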
EDIT
I disabled Istio injection on the namespace I was deploying the applications to, simply by running:
kubectl label namespace default istio-injection=disabled --overwrite
Then I redeployed the application, tried a curl from inside it, and it worked fine.
My doubt now is: I thought Istio only intercepted traffic at its gateway layer and left the message untouched after that, but that doesn't seem to be how it works. Apparently the sidecar adds some SSL layer to the request that my application doesn't do/require.
So do I need to change my application just to fit the service mesh's requirements?
Related
I have two JavaScript adapters: adapterA and adapterB.
I need to call adapterA, and then adapterA will call adapterB (using MFP.Server.invokeProcedure, on the same MFP server).
When I call it on the MFP server over localhost, it works:
http://localhost:9080/mfp/api/adapters/AdapterA/test
Then I call it over HTTPS, after importing the MFP certificate into the JRE cacerts, and it works fine too:
https://localhost:443/mfp/api/adapters/AdapterA/test
My question is about the IHS server I have in front to proxy the MFP services.
When I call the API through the IHS HTTP URL:
http://{domain}/mfp/api/adapters/AdapterA/test
it works.
When I call the API through the IHS HTTPS URL:
https://{domain}/mfp/api/adapters/AdapterA/test
the MFP server gets an error like this:
com.ibm.mfp.server.js.adapter.internal.JavascriptManagerImpl E FWLST0904E: Exception was thrown while invoking procedure: test in adapter: adapterB
java.lang.RuntimeException: javax.net.ssl.SSLHandshakeException: com.ibm.jsse2.util.j: PKIX path building failed: com.ibm.security.cert.IBMCertPathBuilderException: unable to find valid certification path to requested target
at com.ibm.mfp.server.js.adapter.internal.invocation.JavaScriptIntegrationLibraryImplementation.invokeProcedure(JavaScriptIntegrationLibraryImplementation.java:255)
But my IHS plugin is only configured for HTTP.
How can I resolve and avoid this issue?
Thanks.
When the MobileFirst server creates the request to reach adapter B, the default behaviour is to frame that request based on the URL of the currently executing request. That is, it uses the URL originally used to reach adapter A to frame the request to the target adapter B.
This works well in case 1, where the web server is accessed with an "http://..." URL. In case 2, where the MobileFirst server (MFP1) has to make an outbound call to the web server using an "https://..." URL, it first needs to complete an SSL handshake with the web server. If the MFP1 JVM lacks the web server's certificates, it fails to establish the SSL handshake, which leads to the error you saw.
In your case, there are two approaches you can take:
Choose to keep the adapter A to adapter B call internal to MFP1. This avoids the outbound "https://" call, so you will not see the problem. It also keeps the round trip shorter and avoids opening a new connection on the web server. To enable this, set the JNDI property mfp.adapter.invocation.url. For instance, if you set it to "http://localhost:9080/mfp", adapter B will be invoked as "http://localhost:9080/mfp/api/adapters/adapterB" and the call stays local. More details on this property here.
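A minimal sketch of setting that property, assuming an MFP 8.0 runtime on WebSphere Liberty where JNDI properties are declared in server.xml (the host and port are examples; use your own runtime's values):

<!-- Sketch: keep adapter-to-adapter calls on the local MFP endpoint -->
<jndiEntry jndiName="mfp.adapter.invocation.url" value="http://localhost:9080/mfp" />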
If you wish to keep the request to adapter B going through the web server's secure endpoint, then you should ensure the web server's root certificates are made available to the MFP1 JVM's trust store so that the SSL handshake can be established successfully.
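A hedged example of importing the web server's root certificate into the JVM's default trust store (the alias, certificate file name, and cacerts path are assumptions; adjust them to your environment):

keytool -importcert -trustcacerts \
    -alias ihs-root \
    -file ihs-root-ca.cer \
    -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
    -storepass changeit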
I'm using DefaultMarkLogicDatabaseClientService 1.9.1.3-incubator in NiFi 1.11.4. MarkLogic 10.0-4 is running in AWS and has an app server where SSL is configured at the AWS level.
How do I configure the DefaultMarkLogicDatabaseClientService to use HTTPS without needing an SSL Context Service?
Details:
Before SSL was set up, the DefaultMarkLogicDatabaseClientService was able to connect. Once SSL was set up, I'd get this error:
PutMarkLogic[id=bbb8f3c3-7d83-3fb7-454f-9da7d64fa3f6] Failed to properly initialize Processor. If still scheduled to run, NiFi will attempt to initialize and run the Processor again after the 'Administrative Yield Duration' has elapsed. Failure is due to com.marklogic.client.MarkLogicIOException: java.io.IOException: unexpected end of stream on Connection{my-host:8010, proxy=DIRECT hostAddress=my-host/my-IP:8010 cipherSuite=none protocol=http/1.1}: com.marklogic.client.MarkLogicIOException: java.io.IOException: unexpected end of stream on Connection{my-host:8010, proxy=DIRECT hostAddress=my-ost/my-IP:8010 cipherSuite=none protocol=http/1.1}
Okay, so it seems it's failing because it uses plain HTTP against a server that needs HTTPS. I see that the service can be configured to use an SSL Context Service, but I'm not looking to do client authentication. (Setting this up requires a truststore or keystore.)
If I replace the PutMarkLogic processor that uses the DefaultMarkLogicDatabaseClientService with an InvokeHTTP processor, I can specify the full URL, including "https://", without needing an SSL Context Service (but then I lose the batching that I get with PutMarkLogic). I'd like to simply tell the MarkLogic service to use HTTPS.
Creating an SSLContextService with the truststore populated (containing the public certificate of the MarkLogic server) and no keystore populated should work in this situation.
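For example, such a truststore could be built with keytool and then referenced by the SSL Context Service, leaving the keystore properties empty (the alias, file names, and password below are assumptions):

keytool -importcert -trustcacerts \
    -alias marklogic \
    -file marklogic-server.cer \
    -keystore marklogic-truststore.jks \
    -storepass changeit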
The problem
I'm getting 403 SSL required from Spring when trying to route through my ELB to the Kubernetes Nginx ingress controller.
Setup
My set up is as follows:
I've got an ELB (AWS) with ACM for my Kubernetes cluster (created by kops) which routes all requests to the
Nginx Ingress Controller which in turn routes all requests according to the rules dictated in the
Ingress, which passes the traffic on to the
Service, which exposes port 80 and routes it to port 8080 in the
Pods selected with labels "app=foobar" (which are described in a Deployment)
Pods are running a Spring Boot Web App v2.1.3
So basically:
https://foo.bar.com(:443) -> ingress -> http://foo.bar.svc.cluster.local:80
This works like a charm for everything. Except Spring Boot.
For some reason, I keep getting 403 - SSL required from Spring.
One note to keep in mind here: my Spring application does not have anything to do with SSL, and I don't want it to do anything of that nature. For this example's purposes, these should be regular REST API requests, with SSL termination happening outside the container.
What I tried so far
Port-forwarding to the service itself and requesting - it works fine.
Disabling CSRF in WebSecurityConfigurerAdapter
Putting ingress annotation nginx.ingress.kubernetes.io/force-ssl-redirect=true - it gives out TOO_MANY_REDIRECTS error when I try it (instead of the 403)
Putting ingress annotation nginx.ingress.kubernetes.io/ssl-redirect=true - doesn't do anything
Putting ingress annotation nginx.ingress.kubernetes.io/enable-cors: "true" - doesn't do anything
Also nginx.ingress.kubernetes.io/ssl-passthrough: "true"
Also nginx.ingress.kubernetes.io/secure-backends: "true"
Also kubernetes.io/tls-acme: "true"
I tried a whole bunch of other stuff that I can't really remember right now
How it all looks in my cluster
Nginx ingress controller annotations look like this (I'm using the official nginx ingress controller Helm chart, with very few modifications other than this):
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "aws_acm_certificate_arn"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
Ingress looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foobar
  namespace: api
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: foobar
          servicePort: http
        path: /
Service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: foobar
  namespace: api
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: foobar
What I think the problem is
My hunch is that it's something to do with X-Forwarded headers, with Spring doing its magic behind the scenes and deciding that I need SSL based on some headers, without me explicitly asking for it. But I haven't figured it out yet.
I searched far and wide for a solution, but I couldn't find anything to ease my pain... hope you'll be able to help!
Edit
I found out that my previous setup (without k8s and nginx) works fine: the ELB passes X-Forwarded-Port: 443 and X-Forwarded-Proto: https. But on my k8s cluster with nginx, I deployed a test client that dumps all the headers, and they come through as X-Forwarded-Port: 80 and X-Forwarded-Proto: http.
Thanks to all the people who helped out; I actually found the answer.
Within the code there were some validations that all requests must come from a secure source, and the Nginx Ingress Controller was changing these headers (X-Forwarded-Proto and X-Forwarded-Port) because SSL was terminated at the ELB and handed to the ingress controller as plain HTTP.
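For illustration, the kind of in-code validation I mean looks roughly like this (a hypothetical sketch, not my exact configuration). When the container honors forwarded headers, Spring treats a request as secure only if X-Forwarded-Proto says https, so plain HTTP from the ingress trips the 403:

import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Reject any request that did not (apparently) arrive over HTTPS.
        http.requiresChannel()
            .anyRequest()
            .requiresSecure();
    }
}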
To fix that I did the following:
Added use-proxy-protocol: true to the config map (a sketch of that config map is at the end of this answer) - this passed the correct headers, but produced errors about broken connections (I don't remember the exact error right now; I'll edit this answer later if anyone asks for it).
To fix those errors I added the following to the nginx ingress controller annotations configuration:
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
This made sure all traffic uses the proxy protocol, and it also required changing the backend protocol from HTTP to TCP.
Doing this makes sure all requests routed through the ELB preserve their original X-Forwarded headers and are passed on to the Nginx Ingress Controller, which in turn passes them on to my applications that require these headers.
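For completeness, a minimal sketch of the config map change from the first step (the ConfigMap name and namespace are assumptions; use whatever your Helm release created):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"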
I'm having trouble accessing my Web API that has been deployed to my Service Fabric cluster. I've followed the new Stateless Web API template and added the HTTP endpoint seen below. I also made modifications to the OwinCommunicationListener as depicted here.
<Resources>
  <Endpoints>
    <Endpoint Name="ServiceEndpoint" Type="Input" Protocol="http" Port="8080" />
  </Endpoints>
</Resources>
When creating my cluster I added a custom endpoint of 80 to my Node Type.
The client connection endpoint to my cluster is: mycluster.eastus.cloudapp.azure.com:19000
Also, I have a load balancing rule that maps port 80 to backend port 8080 over TCP. The associated probe is on port 80, and I have tried both protocols (HTTP and TCP), but neither seems to work.
Locally, I can access an endpoint on my Web API by calling http://localhost:8080/health/ping and get back "pong". When I attempt to access it in the Service Fabric cluster, a file is downloaded instead. The URL I use to access it in the cloud is http://mycluster.eastus.cloudapp.azure.com:19000/health/ping. I've tried other ports (19080, 80, 8080), but they either hang or give me a 400.
My questions regarding exposing a Web Api in a service fabric cluster are:
Should the probe be http or tcp?
Should the probe backend port be set to the web api port (e.g. 8080)?
Is my URL/port correct for accessing my api?
Why is a binary file being downloaded? This happens in all browsers, and the content is displayed in Postman and Fiddler.
Found the answer to my question after some trial and error. If my Web API endpoint is set to port 8080, then I need the following:
Probe for port 8080 on TCP
A load balancing rule with port 80 and backend port 8080 (a rough Azure CLI sketch of the probe and rule follows this list)
Access the Web Api over the following URL: http://mycluster.eastus.cloudapp.azure.com/health/ping
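If you are scripting the cluster's load balancer, the probe and rule above might be created roughly like this with the Azure CLI (the resource group, load balancer, and rule names are placeholders, not values from my setup):

az network lb probe create \
    --resource-group my-rg --lb-name my-sf-lb \
    --name AppPortProbe8080 --protocol Tcp --port 8080

az network lb rule create \
    --resource-group my-rg --lb-name my-sf-lb \
    --name AppPortLBRule --protocol Tcp \
    --frontend-port 80 --backend-port 8080 \
    --probe-name AppPortProbe8080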
As for #4, this is still a mystery.
http://mycluster.eastus.cloudapp.azure.com:19000/health/ping
This is wrong.
It should be http://mycluster.eastus.cloudapp.azure.com:8080/health/ping
At least that's what the documentation says, so it should work without touching the load balancer.
I am trying to install the IPython HTML notebook server on dotCloud. The IPython server uses Tornado with websockets (and other internal communication using ZeroMQ over TCP sockets).
Here's my dotcloud.yml:
www:
  type: custom
  buildscript: builder
  ports:
    nbserver: tcp
I am following the custom port recipes given here and here. As the logs show, I run the tornado server on 127.0.0.1:$DOTCLOUD_WWW_NBSERVER_PORT:
/var/log/supervisor/www.log:
[NotebookApp] The IPython Notebook is running at: 'http://127.0.0.1:35928/'
[NotebookApp] Use Control-C to stop this server and shut down all kernels.
But when I push, the dotCloud CLI tells me:
WARNING: The service crashed at startup or is listening to the wrong port. It failed to respond on port "nbserver" (42801) within 30 seconds. Please check the application logs.
...
Deployment finished. Your application is available at the following URLs
No URL found. That's ok, it means that your application does not include a webservice.
There's nothing on my-app.dotcloud.com or my-app.dotcloud.com:DOTCLOUD_WWW_NBSERVER_PORT
What am I missing here? Thanks for your help.
UPDATE
Issue solved. The usual HTTP port works fine with websockets so the custom port recipes are not required. This is my new dotcloud.yml:
www:
  type: custom
  buildscript: builder
  ports:
    web: http
It works with the following in ipython_notebook_config.py:
c.NotebookApp.ip = '*'
This makes the Tornado web server listen on all IP addresses.
WARNING: setup security and authentication first!
See Running a Public Notebook Server for more information.
Glad you got it working!
In the future, and for other readers, you actually want your app to listen on $PORT_NBSERVER and then connect to it on DOTCLOUD_WWW_NBSERVER_PORT. $PORT_NBSERVER is the local port while the latter is the port that's exposed to the outside world through our routing/NAT layer.
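For other readers, a minimal sketch of an ipython_notebook_config.py that binds to that local port (the environment variable name follows the recipe above; the 8888 fallback is just an assumption for running outside dotCloud):

import os

c = get_config()
c.NotebookApp.ip = '0.0.0.0'
# Listen on the local port dotCloud assigns to the "nbserver" port.
c.NotebookApp.port = int(os.environ.get('PORT_NBSERVER', 8888))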
If you have any other issue, don't hesitate to reach out to us at http://support.dotcloud.com
Source: I'm a dotCloud employee.