How to disable/configure the Spring Cloud Gateway rewriting of Location headers - spring-boot

We have a setup with Spring Cloud Gateway running with consul service discovery and proxying requests to services in a cluster.
When one of these services responds with a Location: / header, this gets rewritten on the way out through the gateway.
The problem is that the gateway seems to add the service's local hostname and port as found in Consul. This URL is of course not reachable (or desirable) for the client.
I have verified that the upstream server sends:
Location: /
(Generated by the "redirect:/" Spring MVC shorthand)
But by the time it reaches the end client it has been rewritten to:
Location: https://10.0.0.10:34567/
(https://10.0.0.10:34567/ being the upstream address of the service in Consul)
This is of course incorrect.
My problem is that I can't find any documentation on how to configure this and no indication of what classes are used (to debug) so I just don't know where to start looking for the fix.
The desired behaviour is of course to just leave the redirect unchanged.
In this particular case we use a host-based routing setup:
.route("app", r -> r.host("app.**").uri("lb://app"))
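One thing I could try is a custom post-filter on this route that trims an absolute Location back to its path, but a configuration switch would be preferable. Below is a rough, untested sketch of that workaround; the filter body is purely illustrative and not a confirmed fix for the underlying rewriting:
import java.net.URI;

import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpHeaders;
import reactor.core.publisher.Mono;

@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator appRoutes(RouteLocatorBuilder builder) {
        return builder.routes()
            .route("app", r -> r.host("app.**")
                .filters(f -> f.filter((exchange, chain) -> chain.filter(exchange)
                    .then(Mono.fromRunnable(() -> {
                        // After the proxied call, reduce an absolute redirect such as
                        // https://10.0.0.10:34567/ back to just its path (query strings ignored).
                        HttpHeaders headers = exchange.getResponse().getHeaders();
                        String location = headers.getFirst(HttpHeaders.LOCATION);
                        if (location != null && location.startsWith("http")) {
                            headers.set(HttpHeaders.LOCATION, URI.create(location).getRawPath());
                        }
                    }))))
                .uri("lb://app"))
            .build();
    }
}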
Any help or hint appreciated.

Related

Spring App on GCP - Cloud Run - HTTPS only - This combination of host and port requires TLS

My Spring app uses Let's Encrypt and is HTTPS-only. I did not include an HTTP-to-HTTPS redirect, as it worked for me in Postman with the https:// format.
When I deployed to Cloud Run, specified the custom port (the port configured in Spring),
and tested using the URL from the dashboard,
https://..blah..run.app
I get the following error message:
Bad Request
This combination of host and port requires TLS.
What configuration is required on Cloud Run to resolve this?
The URL as I see it on the service details page has https://...
EDIT:
If Cloud Run does not need me to take care of SSL, I can remove these application.properties entries:
server.ssl.key-store-type=PKCS12
server.ssl.key-store=classpath:key/keystore.p12
server.ssl.key-store-password=${lets.secret}
server.ssl.key-alias=someCertAlias
server.ssl.enabled=true
So can I get an answer on whether to remove SSL from Spring?
If Cloud Run always uses HTTP, all my calls go through redirectConnector, which seems pointless.
The Cloud Run Service listens on HTTP and HTTPS. Your application running in the container must listen on a port configured with HTTP only.
FYI: For a public-facing web server, you should almost always enable HTTP. Otherwise, when a user enters www.example.com in the browser, the user will receive a connection error. This is not always the case (for example, with gTLDs such as .dev), but it is good practice. When a user connects to Cloud Run with the HTTP protocol, Cloud Run will redirect the user to HTTPS and connect to your application using the HTTP protocol.

Services communication in consul

I am developing several services and use Consul as the service registry. I'm able to register all of my services with Consul.
Now, for the next step, I need to be able to communicate from service A to service B.
Without a service registry, what I usually did was simply dispatch an HTTP client request from service A to service B.
But since I now have service discovery in place, should I get service B's host address via Consul and then dispatch an HTTP client request to that address? Or does Consul also provide an API gateway, so that I only need to dispatch my HTTP client request from service A to Consul, and Consul will automatically forward it to the destination?
Also, if there is relevant documentation about my case, I would be very glad to take a look at it. (I can't find it; probably my Google search keywords are wrong.)
Consul supports two methods for service discovery: DNS and HTTP.
Applications can perform DNS lookups against their local Consul agent which exposes a DNS server on port 8600 (you can also configure DNS forwarding). For example, an application can issue an A record query for web.service.consul and Consul will return a list of healthy instance endpoints for the web service. SRV lookups are also supported in order to retrieve the IP and port for a given service. The DNS interface also supports querying endpoints by service tag and data center. Details can be found at Consul.io: DNS - Service Lookups.
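For illustration, an SRV lookup like that can also be done from Java using the JDK's JNDI DNS provider; the agent address (127.0.0.1), the 8600 port, and the web service name below are just the defaults mentioned above, so treat them as placeholders:
import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.InitialDirContext;

public class ConsulDnsLookup {
    public static void main(String[] args) throws Exception {
        // Point the JNDI DNS provider at the local Consul agent's DNS interface (port 8600).
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
        env.put(Context.PROVIDER_URL, "dns://127.0.0.1:8600");

        InitialDirContext ctx = new InitialDirContext(env);
        // SRV records carry priority, weight, port and target host for each healthy instance.
        Attribute srv = ctx.getAttributes("web.service.consul", new String[] {"SRV"}).get("SRV");
        for (NamingEnumeration<?> records = srv.getAll(); records.hasMore(); ) {
            System.out.println(records.next());
        }
    }
}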
HTTP-based service discovery can be performed by querying the /v1/health/service/:name endpoint against the local agent. The following will return a full list of healthy and unhealthy endpoints for the service nginx.
$ curl http://127.0.0.1:8500/v1/health/service/nginx
You can use the passing query parameter to restrict the output to only healthy services.
$ curl "http://127.0.0.1:8500/v1/health/service/nginx?passing"
I recommend reviewing the guide Register a Service with Consul Service Discovery for more info on registering and querying services from the catalog.
Lastly, API gateways like Traefik and Solo's Gloo support using Consul for service discovery (see Traefik's Consul Catalog Provider and Gloo's Consul Services). You could configure your services to route requests to these gateways, and allow the gateway to forward to the backend destination.
I ended up getting the list of services from Consul, performing name matching on it, and then getting the service address.
I use this endpoint to get the list of services and their data:
http://localhost:8500/v1/agent/services
So it's client-side discovery, I guess.
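Roughly, with the JDK's built-in HTTP client that lookup looks like the sketch below; the agent address is the default from above, and the JSON parsing is left out to keep it dependency-free:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AgentServiceLookup {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8500/v1/agent/services"))
                .build();

        // The body is a JSON object keyed by service ID; each entry carries
        // "Service", "Address" and "Port" fields to match on and then call.
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}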

Eureka not discovering service by name

I'm running the simple Eureka service discovery example given in the link below:
https://www.logicbig.com/tutorials/spring-framework/spring-cloud/rest-template-load-balancer.html
I observed that when I run this example on my home Wi-Fi/personal laptop it runs fine, whereas when it runs on my office workstation on the corporate network, the RestTemplate is unable to resolve the services by their names (I get 407 Proxy Authentication Required).
I can see that both the services are getting successfully registered on Eureka.
What might be stopping the Eureka server from working correctly on the corporate network?
P.S.: Following the answer and making the RestTemplate proxy aware results in 503 Service Unavailable.
You should configure your RestTemplate to be proxy aware. You can find how to do this here.
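A minimal sketch of what a proxy-aware RestTemplate can look like is below; the proxy host and port are placeholders for whatever the corporate network uses, and a proxy that requires credentials additionally needs an Authenticator or a Proxy-Authorization header:
import java.net.InetSocketAddress;
import java.net.Proxy;

import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class ProxyAwareRestTemplate {
    public static RestTemplate build() {
        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        // Route outgoing calls through the corporate HTTP proxy (placeholder values).
        factory.setProxy(new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy.mycorp.example", 8080)));
        return new RestTemplate(factory);
    }
}
Note that if internal, Eureka-resolved hosts also end up routed through the proxy, they may need to be excluded, which could explain the 503 mentioned in the question.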

WSO2 ESB proxy service on Windows

I'm using the WSO2 ESB to integrate several services on a Windows virtual machine.
I used a simple proxy to map the services deployed on it. The problem is that I can't access them from outside the machine, even though port 8280, where the services are deployed, is open to the internet; I only see a blank page instead. What could be wrong?
Another question: I was trying to map the WSO2 ESB management console itself to be available from outside the machine using a simple proxy, and I failed; this is what I see when trying the service.
Could you please give me a hint on how to resolve this issue? Is it possible to expose the ESB management console using the ESB itself?
Thanks a lot in advance,
Do you have a proxy in the middle? It looks like the webpage in the screenshot is missing all of its pictures, while the CSS loaded successfully.
Another question: which kind of virtual machine do you use? For example, in VirtualBox a virtual machine is behind NAT by default.
I wasn't able to connect to a server on the virtual machine from the host, only the other way around: a server on the host was available from inside the virtual machine.
To make a server in the virtual machine available to the host, you need to configure the network as bridged.
Not sure if it helps, but I think I had a similar problem in our corporate network after I applied all the security patches (POODLE, Diffie-Hellman, etc.). I had to configure the addresses in catalina.xml (if I remember right) that are allowed to access the admin console. I can't tell you more details because I'm on holiday :-)
Maybe it's worth giving it a try.
Another example from real life: the HTTP response from an external resource was application/json, with response status 200 OK. The ESB was configured to use
<messageFormatter contentType="application/json"
class="org.apache.synapse.commons.json.JsonStreamFormatter"/>
but the content was actually plain text/plain.
While parsing the body of the HTTP response, an exception was thrown and silently written to the log, without any fault message processing: just an empty response to the client.
To verify that the services are reachable, there is an echo service on the server by default, which responds with content equal to the request. Try using it.
was trying to map the WSO2 ESB management console itself to be available
from outside the machine using a simple proxy
By default the management console tries to enforce port 9443 for dynamic link (JSP) pages. That's why you see only part of the pages and you shouldn't be able to log on.
What you can do is edit repository/conf/tomcat/catalina-server.xml and add the attribute proxyPort="443" to the Connector running on port 9443; the Carbon console will then be happy to run on 443.
For the services, my educated guess would be the firewall/network rules; however, without more information I cannot answer (or they are working, and you just aren't able to access them with a simple browser request).

Elasticsearch-logging rc and svc are getting automatically deleted

https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
The elasticsearch-logging rc and svc are getting automatically deleted when I use these configs to create them in the cluster.
From https://github.com/kubernetes/kubernetes/issues/11435 the solution is to remove
kubernetes.io/cluster-service: "true"
Though without this label, Elasticsearch is not available through the Kubernetes master.
Should I create a pull request to remove the line from the files in the repo so people don't get confused?
Firstly, I'd recommend reformatting future questions so they adhere to the stack overflow guidelines: https://stackoverflow.com/help/how-to-ask.
I'd recommend making Elasticsearch a normal Kubernetes Service. You can expose it in one of the following ways:
1. Set service.Type = NodePort and access it via any node's public IP on the nodePort
2. Set service.Type = LoadBalancer; this will only work on cloud providers that offer load balancers
3. Expose the RC directly through a host port (not recommended)
Those are just the common options for accessing a Service, please see the following thread for a more detailed discussion: https://groups.google.com/forum/#!topic/kubernetes-sig-network/B-A_RuqpFWk
It's generally not a good idea to send all external traffic meant for a Kubernetes service through the apiserver. However, if you must do so, you can via an endpoint such as:
/api/v1/proxy/namespaces/default/services/nginx:80/
Where default is the namespace, nginx is the name of your service and 80 is the service port (needed to disambiguate multiport services).
