When I issue the following:
curl localhost:8080/actuator/env -d '{"name":"test","value":"hello world!"}' -H "Content-Type: application/json"
I get:
{"timestamp":"2020-03-09T16:21:18.245+0000","status":405,"error":"Method Not Allowed","message":"Request method 'POST' not supported","path":"/actuator/env"}
My application.yml file exposes the endpoint and I can issue a GET request without any issue:
management:
  endpoints:
    web:
      exposure:
        include: "*"
I do not have Spring Security enabled. How can I enable submission of POST requests to Spring Actuator endpoints? I'm using Spring Boot 2.2.5.
In current versions (2.2.5+) of Spring Boot, exposing the env endpoint is not enough if you want to update or set an environment property while the application is running.
You have to set
management.endpoint.env.post.enabled=true
After this, you can update a property with the following command:
curl -H "Content-Type: application/json" -X POST -d '{"name":"foo", "value":"bar"}' http://localhost:8080/my-application/actuator/env
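The same setting can also be expressed in application.yml form (a sketch, assuming the property name above):

```yaml
management:
  endpoint:
    env:
      post:
        enabled: true
```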
I am using this code to queue data into RabbitMQ: https://www.javainuse.com/spring/spring-boot-rabbitmq-hello-world
I configured the following properties correctly to match the RabbitMQ configuration
Host
Username
Password
Exchange
Routing key
Queue
But RabbitMQSender#send (which calls rabbitTemplate.convertAndSend(exchange, routingkey, company);) is not queuing any data into RabbitMQ, and at the same time it returns no error.
I tried changing the username or password to an incorrect one and got not_authorized, so the connection with the correct username/password/queue/exchange/routing key seems fine, but nothing happens.
I tried sending the event via curl and it works correctly; the event is queued in RabbitMQ:
curl -v -u username:pwd -H "Accept: application/json" -H "Content-Type: application/json" -X POST -d '{
"properties": {
},
"routing_key": "my-routingkey",
"payload":"hi",
"payload_encoding": "string"
}' localhost:15672/api/exchanges/%2F/my-exchange/publish
Does Spring's RabbitTemplate#convertAndSend call the localhost:15672/api/exchanges/%2F/my-exchange/publish API behind the scenes?
If not, what do I need to change in my code?
I was trying to queue events into a remote RabbitMQ server that was not configured properly in Kubernetes: it was missing the storage field (storage: 10Gi), and RabbitMQ was failing silently ...
spec:
  replicas: 1
  image: rabbitmq:3.10.7-management
  persistence:
    storageClassName: managed-csi
    storage: 10Gi
Also, please check whether an exchange with the correct name has been created.
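Separately, to stop publishes from failing silently on the application side, one option is to enable publisher confirms and returns (a sketch, assuming Spring Boot 2.2+ spring.rabbitmq.* property names):

```properties
# The broker confirms (acks/nacks) every publish asynchronously.
spring.rabbitmq.publisher-confirm-type=correlated
# Unroutable messages are returned to the sender instead of silently dropped.
spring.rabbitmq.publisher-returns=true
spring.rabbitmq.template.mandatory=true
```

With these set, you can register a ConfirmCallback and a ReturnCallback on the RabbitTemplate to log publishes that never reach a queue, rather than getting no error at all.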
Following Spring Cloud Sleuth's documentation I've set up a Spring Boot application with a Zipkin client:
Gradle config:
"org.springframework.cloud:spring-cloud-starter-sleuth",
"org.springframework.cloud:spring-cloud-starter-zipkin"
With this I start a Zipkin Server instance:
docker run -d -p 9411:9411 openzipkin/zipkin
And I get traces in Zipkin. So far so good.
Then I switch to a Jaeger server:
docker run -d -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 --name jaeger -p 5775:5775/udp -p 6831:6831/udp -p 6832:6832/udp -p 5778:5778 -p 16686:16686 -p 14268:14268 -p 9411:9411 jaegertracing/all-in-one:latest
And without any change to my application I can see those traces in Jaeger. Great.
The Sleuth docs state:
15.1. OpenTracing
Spring Cloud Sleuth is compatible with OpenTracing. If you have
OpenTracing on the classpath, we automatically register the
OpenTracing Tracer bean. If you wish to disable this, set
spring.sleuth.opentracing.enabled to false
Following this, I added the OpenTracing dependency:
"io.opentracing.brave:brave-opentracing",
During startup the OpentracingAutoConfiguration creates a BraveTracer.
The point is, I've placed breakpoints in methods such as scopeManager(), activeSpan(), activateSpan(), and buildSpan() of that BraveTracer, and none of them is invoked during the execution of the application; the traces keep showing up in Jaeger, though.
What am I missing here?
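For context, Sleuth's own auto-instrumentation reports through Brave directly, so the registered BraveTracer is only a bridge for application code that calls the OpenTracing API itself. A hypothetical sketch of the kind of code that would actually hit those breakpoints (class and span names are illustrative, assuming the io.opentracing 0.33 API):

```java
import io.opentracing.Scope;
import io.opentracing.Span;
import io.opentracing.Tracer;

// Illustrative component: only code like this, which uses the OpenTracing
// API directly, goes through BraveTracer.buildSpan()/scopeManager().
public class ManualTracingService {

    private final Tracer tracer; // the auto-registered BraveTracer bridge

    public ManualTracingService(Tracer tracer) {
        this.tracer = tracer;
    }

    public void doWork() {
        Span span = tracer.buildSpan("manual-work").start();
        try (Scope scope = tracer.scopeManager().activate(span)) {
            // business logic traced under "manual-work"
        } finally {
            span.finish();
        }
    }
}
```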
I’m trying to migrate JHipster from using Zuul to Spring Cloud Gateway. JHipster uses Eureka to look up routes and I believe I’ve configured Spring Cloud Gateway correctly to look up routes and propagate the access token to them. Here’s my config:
spring:
  cloud:
    gateway:
      default-filters:
        - TokenRelay
      discovery:
        locator:
          enabled: true
          lower-case-service-id: true
          route-id-prefix: /services/
      httpclient:
        pool:
          max-connections: 1000
The problem I’m experiencing is that the access token is not being sent in an Authorization header to the downstream services.
Here's how things were configured with Zuul in my application.yml:
zuul: # those values must be configured depending on the application specific needs
  sensitive-headers: Cookie,Set-Cookie # see https://github.com/spring-cloud/spring-cloud-netflix/issues/3126
  host:
    max-total-connections: 1000
    max-per-route-connections: 100
  prefix: /services
  semaphore:
    max-semaphores: 500
I created a pull request to show what's changed after integrating Spring Cloud Gateway.
https://github.com/mraible/jhipster-reactive-microservices-oauth2/pull/4
Steps to reproduce the issue:
git clone -b reactive git@github.com:mraible/jhipster-reactive-microservices-oauth2.git
Start JHipster Registry, Keycloak, and the gateway app:
cd jhipster-reactive-microservices-oauth2/gateway
docker-compose -f src/main/docker/jhipster-registry.yml up -d
docker-compose -f src/main/docker/keycloak.yml up -d
./mvnw
Start MongoDB and the blog app:
cd ../blog
docker-compose -f src/main/docker/mongodb.yml up -d
./mvnw
Navigate to http://localhost:8080 in your browser, log in with admin/admin, and try to go to Entities > Blog. You will get a 403 access denied error. If you look in Chrome Developer Tools at the network traffic, you'll see the access token isn't included in any headers.
I was able to solve this using this answer:
spring:
  cloud:
    gateway:
      discovery:
        locator:
          enabled: true
          predicates:
            - name: Path
              args:
                pattern: "'/services/'+serviceId.toLowerCase()+'/**'"
          filters:
            - name: RewritePath
              args:
                regexp: "'/services/' + serviceId.toLowerCase() + '/(?<remaining>.*)'"
                replacement: "'/${remaining}'"
I also had to add .pathMatchers("/services/**").authenticated() to my security config, which wasn't needed for Zuul. You can see my commit here.
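That security tweak would look roughly like this in a reactive security configuration (a sketch rather than the exact commit; class and bean names are illustrative, assuming Spring Security 5 on WebFlux):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.reactive.EnableWebFluxSecurity;
import org.springframework.security.config.web.server.ServerHttpSecurity;
import org.springframework.security.web.server.SecurityWebFilterChain;

@Configuration
@EnableWebFluxSecurity
public class SecurityConfiguration {

    // "/services/**" must be authenticated so the TokenRelay filter has an
    // access token to forward; with Zuul this matcher was not needed.
    @Bean
    public SecurityWebFilterChain securityFilterChain(ServerHttpSecurity http) {
        http.authorizeExchange()
                .pathMatchers("/services/**").authenticated()
                .anyExchange().permitAll()
            .and()
            .oauth2Login();
        return http.build();
    }
}
```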
I'm using Spring Boot 1.5.1 with:
compile 'io.micrometer:micrometer-spring-legacy:1.0.6'
compile 'io.micrometer:micrometer-registry-prometheus:1.0.6'
Spring properties file:
management.security.enabled=true
endpoints.prometheus.enabled=true
endpoints.prometheus.sensitive=false
In my Prometheus config file I added basic_auth:
basic_auth:
  username: username
  password: passw0rd
In my [prometheusURL]/targets I keep getting:
server returned HTTP status 403
What is the proper way for Prometheus to authenticate each scrape of Spring metrics?
EDIT: I can pass through Spring Security with this command:
curl --cookie "SESSION=7f18cfe7-9a54-4fcf-9662-21d81247c705" http://localhost:8082/management/metrics
AFAIK, in the Prometheus config file we can pass only basic_auth or bearer_token; there are no other options for custom headers.
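For reference, basic_auth sits inside a scrape job in prometheus.yml; a minimal sketch (the job name, target, and metrics path are assumptions based on the URLs in the question):

```yaml
scrape_configs:
  - job_name: 'spring-actuator'
    metrics_path: '/management/prometheus'  # assumed; matches the /management context above
    basic_auth:
      username: username
      password: passw0rd
    static_configs:
      - targets: ['localhost:8082']
```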
I have a Spring Boot application for which I externalized the configuration with Spring Cloud Config so I can modify and refresh its properties on the fly.
Here is my bootstrap.yml:
spring:
  application:
    name: ${appName:tasky}
  profiles:
    include:
      - native
  cloud:
    config:
      failFast: true
      server:
        bootstrap: true
        prefix: /config
        native:
          search-locations: classpath:config/{profile}
Updating and then refreshing a property (e.g. app.api-key) on my local project works perfectly:
curl -X POST http://localhost:9190/management/refresh
["app.api-key"]
I dockerized my app and tried to achieve the same result from my container, so I ran the following:
sed -i -e 's/oldvalue/newvalue/g' tasky-prod.properties
This correctly changes my file, but when I try to refresh the values in my app (from within the container), my Spring Boot context is restarted, yet nothing is picked up by the actuator and the new value is not used by my app:
curl -X POST http://localhost:9999/management/refresh
[]
What did I do wrong?