Is there a way to enable an SSL connection between the Spring Cloud Data Flow server and Redis (used for analytics)? I cannot find any configuration option for this in the SCDF documentation. Because of this I always get an error while starting the SCDF server: the health check goes down because the Redis health check fails.
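A minimal sketch of what I would expect this to look like, assuming the SCDF server honors the standard Spring Boot Redis properties (the host, port, and credential below are placeholders, and the property names are the stock Boot ones rather than anything SCDF-specific):

spring:
  redis:
    host: my-redis.example.com     # placeholder host
    port: 6380                     # placeholder TLS port
    ssl: true                      # standard Spring Boot Redis property; assumes SCDF passes it through
    password: ${REDIS_PASSWORD}    # placeholder credential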
I am trying to use the Spring Cloud Kubernetes Discovery Server with the discovery client as described in https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#spring-cloud-kubernetes-discoveryserver
and the client cannot fetch service information from other namespaces. There is already an issue raised for this in the Spring Cloud Kubernetes GitHub repository: https://github.com/spring-cloud/spring-cloud-kubernetes/issues/824
I also tried the Fabric8 client and encountered the error below:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://xx.xx.xx.xx/api/v1/services. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. services is forbidden: User "system:serviceaccount:abcd01:xyzssvc" cannot list resource "services" in API group "" at the cluster scope.
Did anyone manage to integrate the Spring Cloud Kubernetes Discovery Server with the discovery client? Integrating the Discovery Server with the discovery client would avoid having to assign ClusterRole permissions to every service.
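A rough sketch of the RBAC that would then only be needed for the Discovery Server's own service account, rather than for every client (all names below are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: discovery-server-reader            # illustrative name
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: discovery-server-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: discovery-server-reader
subjects:
  - kind: ServiceAccount
    name: discovery-server                  # illustrative; the account the Discovery Server runs as
    namespace: abcd01

The client apps would then only need to know the Discovery Server's URL instead of holding their own ClusterRole bindings.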
I'm trying to run a Spring Boot Admin application on Kubernetes using Spring Cloud Discovery (without Eureka or Zuul; it reads directly from the Kubernetes API).
I've set up the necessary RBAC and ingress/egress rules for the application to access the Kubernetes API and the relevant services in the cluster.
The application initially fetches all the information about the services, but it fails with the error below when trying to communicate with individual pods in the namespace:
reactor.netty.http.client.PrematureCloseException: Connection prematurely closed
When I try to curl the particular pods from the pod running the Spring Boot Admin app:
When I use the FQDN of the service, it returns fine with a response:
curl {service}.{namespace}.svc.cluster.local/actuator/info
When I do the same with the pod IP (the one Spring Boot Admin is struggling to connect to):
curl 10.x.x.x:8080/actuator/info
I get this error:
curl: (56) Recv failure: Connection reset by peer
Is there any particular NetworkPolicy needed for pods to be accessed directly rather than through the service's cluster IP? I ask because Spring Boot Admin tries to monitor each individual pod behind the services.
Or is there a workaround/approach where the Spring Boot Admin app is not required to send requests to every individual pod?
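In case a NetworkPolicy turns out to be the culprit, a minimal sketch of one that allows the Admin pod to reach the application pods on the actuator port directly (namespace names and labels are illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-admin-to-actuator              # illustrative name
  namespace: my-apps                         # namespace of the monitored pods (illustrative)
spec:
  podSelector: {}                            # all pods in this namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: admin     # namespace running Spring Boot Admin (illustrative)
          podSelector:
            matchLabels:
              app: spring-boot-admin                 # illustrative label
      ports:
        - protocol: TCP
          port: 8080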
I am confused about when to use Spring Cloud Config Server and when to use Consul.
Both read configuration files in their own ways.
Can you please let me know when to use Spring Cloud Config Server and when to use Consul?
Both serve configuration from remote servers to Spring Boot applications. Config Server aggregates configuration from multiple sources: Git, SVN, SQL databases, Vault, and CredHub. Spring Cloud Consul serves configuration to Boot apps directly from the Consul key-value store. If you already have Consul in your infrastructure, it would simplify things by not having to run a separate Config Server.
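A rough illustration of the difference, assuming a Git-backed Config Server and the Boot 2.4+ spring.config.import style on the Consul side (the repository URL and the Consul address are placeholders):

# Config Server (a separate app) aggregating from Git - application.yml
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/config-repo   # placeholder repository

# App reading directly from the Consul key-value store - application.yml
spring:
  config:
    import: "optional:consul:"       # Boot 2.4+ config import
  cloud:
    consul:
      host: localhost                # placeholder Consul agent address
      port: 8500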
I am trying to set up a Spring Boot Admin server on Cloud Foundry. On the client side I am using Spring Cloud Discovery with the SimpleDiscoveryClient configuration; we do not have any third-party service discovery client such as Eureka. I can see the service getting registered with the Spring Boot Admin server, but when I scale a service up, I see only one instance of it and the actual number of instances is not reflected. I would like to know whether this is possible without Eureka or any other service discovery, and if so, how to achieve it.
Thanks
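For context, the SimpleDiscoveryClient only knows about instances listed statically in its properties, so something like the sketch below (service id and URIs are placeholders) is the only way it will ever report more than one instance:

spring:
  cloud:
    discovery:
      client:
        simple:
          instances:
            my-service:                                     # placeholder service id
              - uri: https://my-service-a.apps.example.com  # placeholder instance URIs;
              - uri: https://my-service-b.apps.example.com  # each one has to be listed by hand

On Cloud Foundry all scaled instances sit behind the same route, so without a real registry (Eureka, the Spring Cloud Services service registry, etc.) Spring Boot Admin only ever sees one URI per service.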
I'm using Spring Boot 1.5.4 and Spring Cloud Dalston SR4 to stand up a Turbine server on Cloud Foundry and aggregate my application Hystrix streams. In addition I want to add Spring Boot Actuator monitoring and management to the Turbine server. I realize there is ample documentation on how to do this in a local environment, and I do have it working locally. However, it is a different matter when deploying to Cloud Foundry, where I cannot use port numbers in a URL binding.
The issue is that the Turbine stream is served by an RxNetty server on one port while the Actuator endpoints are served by Tomcat on another port. In Cloud Foundry I can only bind my URL to the RxNetty endpoint or the Tomcat endpoint, not both.
No combination of management.port and turbine.stream.port allows me to access the Turbine stream and the Actuator endpoints from one host binding. The following is an example of what I would expect to be able to do:
https://myapp.mydomain.com/info (to report actuator info details)
https://myapp.mydomain.com/turbine.stream (to stream turbine metrics)
Note: there are no port numbers in these URLs.
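For reference, a minimal sketch of the kind of combination being tried, which works locally but still leaves two separate ports for Cloud Foundry to route to (the port values are arbitrary examples):

# application.yml of the Turbine app (Spring Boot 1.5.x property names)
server:
  port: 8080          # Tomcat, serving the Actuator endpoints
management:
  port: 8081          # Actuator on its own port; leaving it unset keeps Actuator on server.port
turbine:
  stream:
    port: 8989        # RxNetty server providing /turbine.stream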
Requests to your app on Cloud Foundry go through the Cloud Foundry Gorouter, which uses the HTTP Host header to direct traffic to the container instances running your app. The HTTP-based Gorouter expects the app to open only one port, to which it forwards HTTP traffic. However, the Gorouter also supports TCP routing, which should allow you to have multiple ports open. See the docs for an explanation of TCP vs. HTTP routes on Cloud Foundry.
If you are running on Pivotal Cloud Foundry, you can use the Circuit Breaker Dashboard provided by Spring Cloud Services for PCF, in which case you won't need to set up the Turbine stream yourself. The Spring Cloud Services dashboard uses RabbitMQ instead of SSE events; see the SCS docs for details.
Just getting back to this now. As noted by spencergibb, moving to Spring Boot 2.0 and Spring Cloud Finchley works.