SpringCloud's /refresh not effective on properties used in camel routes - spring-boot

Camel version : 2.22.0
SpringBoot Version : 2.0.2.RELEASE
Observation: config properties should be picked up when they are changed in the configuration service and a refresh is triggered on the client service, but the refresh does not take effect for properties used inside a Camel route.
Note: there is a Jira ticket which says this is fixed, but which release includes the fix? https://issues.apache.org/jira/browse/CAMEL-8482
Adding the code segment:
The class containing the code segment below is annotated with @Component and @RefreshScope.
@Value("${prop}")
String prop;

@Override
public void configure() throws Exception {
    from("direct:route1").routeId("Child-1")
        .setHeader(Exchange.CONTENT_ENCODING, simple("gzip"))
        .setBody(simple("RESPONSE - [ { \"id\" : \"bf383eotal length is 16250]]"))
        .log(prop + "${body}");
}
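For context on why the refresh has no effect: the expression prop + "${body}" is evaluated once, when configure() builds the route, so the value of prop is baked into the route definition and a later /refresh never reaches it. A minimal workaround sketch (my assumption, not from the original post; the class name RefreshAwareRoute is made up) is to resolve the property on every exchange through Spring's Environment, whose property sources Spring Cloud does update on /refresh:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.env.Environment;
import org.springframework.stereotype.Component;

@Component
public class RefreshAwareRoute extends RouteBuilder {

    @Autowired
    Environment environment; // its property sources are updated by /refresh

    @Override
    public void configure() throws Exception {
        from("direct:route1").routeId("Child-1")
            .setHeader(Exchange.CONTENT_ENCODING, simple("gzip"))
            // resolved per exchange, so a refreshed value is picked up
            .process(exchange -> exchange.getIn()
                .setHeader("prop", environment.getProperty("prop")))
            .log("${header.prop} ${body}");
    }
}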

Related

Spring 6: Spring Cloud Stream Kafka - Replacement for @EnableBinding

I was reading "Spring Microservices In Action (2021)" because I wanted to brush up on microservices.
With Spring Boot 3 a few things have changed. The book presents an easy example of how to publish messages to a topic and how to consume messages from a topic.
The problem is: the examples presented simply do not work with Spring Boot 3. Sending messages from a Spring Boot 2 project works. The underlying project can be found here:
https://github.com/ihuaylupo/manning-smia/tree/master/chapter10
Example 1 (organization-service):
Consider this Config:
spring.cloud.stream.bindings.output.destination=orgChangeTopic
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.kafka.binder.zkNodes=kafka #kafka is used as a network alias in docker-compose
spring.cloud.stream.kafka.binder.brokers=kafka
And this component (class), which is injected into a service in the project:
@Component
public class SimpleSourceBean {
    private Source source;
    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    public SimpleSourceBean(Source source){
        this.source = source;
    }

    public void publishOrganizationChange(String action, String organizationId){
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        source.output().send(MessageBuilder.withPayload(change).build());
    }
}
This code fires a message to the topic (destination) orgChangeTopic. The way I understand it, the first time a message is fired, the topic is created.
Question 1: How do I do this in Spring Boot 3, config-wise and code-wise?
Example 2:
Consider this config:
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.content-type=application/json
spring.cloud.stream.bindings.input.group=licensingGroup
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.cloud.stream.kafka.binder.brokers=kafka
And this code:
@SpringBootApplication
@RefreshScope
@EnableDiscoveryClient
@EnableFeignClients
@EnableEurekaClient
@EnableBinding(Sink.class)
public class LicenseServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(LicenseServiceApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void loggerSink(OrganizationChangeModel orgChange) {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    }
}
The intent is that whenever a message arrives on orgChangeTopic, the method loggerSink fires.
How do I do this in Spring Boot 3?
In Spring Cloud Stream 4.0.0 (the version used with Boot 3), a few things were removed - such as EnableBinding, StreamListener, etc. We deprecated them in 3.x and finally removed them in the 4.0.0 version. The annotation-based programming model was removed in favor of the functional programming style enabled through the Spring Cloud Function project. You essentially express your business logic as java.util.function.Function|Consumer|Supplier for a processor, sink, and source, respectively. For ad-hoc source situations, as in your first example, Spring Cloud Stream provides the StreamBridge API for custom sends.
Your example #1 can be re-written like this:
@Component
public class SimpleSourceBean {
    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    StreamBridge streamBridge;

    public void publishOrganizationChange(String action, String organizationId){
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        streamBridge.send("output-out-0", MessageBuilder.withPayload(change).build());
    }
}
Config
spring.cloud.stream.bindings.output-out-0.destination=orgChangeTopic
spring.cloud.stream.kafka.binder.brokers=kafka
Just so you know, you no longer need the zkNodes property, nor the content-type property, since the framework converts the payload for you.
StreamBridge's send method takes a binding name and the payload. The binding name can be anything, but for consistency we used output-out-0 here. Please read the reference docs for more context on the reasoning behind this binding name.
If you have a simple source that runs on a timer, you can express it as a Supplier, as below (instead of using StreamBridge).
@Bean
public Supplier<OrganizationChangeModel> output() {
    return () -> {
        // return the payload
    };
}
spring.cloud.function.definition=output
spring.cloud.stream.bindings.output-out-0.destination=...
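If I recall the Spring Cloud Stream defaults correctly, the framework polls such a Supplier once per second; the interval can be tuned with spring.cloud.stream.poller.fixed-delay.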
Example #2
@Bean
public Consumer<OrganizationChangeModel> loggerSink() {
    return orgChange -> {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    };
}
Config:
spring.cloud.function.definition=loggerSink
spring.cloud.stream.bindings.loggerSink-in-0.destination=orgChangeTopic
spring.cloud.stream.bindings.loggerSink-in-0.group=licensingGroup
spring.cloud.stream.kafka.binder.brokers=kafka
If you want the input/output binding names to be specifically input or output rather than ending with in-0, out-0, etc., there are ways to make that happen; see the sketch below. Details are in the reference docs.
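A sketch of that customization, applied to this example (based on the binding-name mapping described in the reference docs):

spring.cloud.stream.function.bindings.loggerSink-in-0=input
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.group=licensingGroup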

Spring sleuth Baggage key not getting propagated

I have a filter (OncePerRequestFilter) which intercepts incoming requests and logs the traceId, spanId, etc., and that works well.
This filter lives in a common module that is included in other projects, to avoid adding the Spring Sleuth dependency to each of my microservices; I created it as a library because any change to the library is then common to all modules.
Now I have to add a new propagation key that needs to be propagated to all services via HTTP headers, like the traceId and spanId. For that I extracted the current span from HttpTracing and added a baggage key to it (as shown below):
Span span = httpTracing.tracing().tracer().currentSpan();
String corelationId =
StringUtils.isEmpty(request.getHeader(CORELATION_ID))
? "n/a"
: request.getHeader(CORELATION_ID);
ExtraFieldPropagation.set(CUSTOM_TRACE_ID_MDC_KEY_NAME, corelationId);
span.annotate("baggage_set");
span.tag(CUSTOM_TRACE_ID_MDC_KEY_NAME, corelationId);
I added propagation-keys and whitelisted-mdc-keys to the application.yml file (inside my library), like below:
spring:
  sleuth:
    propagation-keys:
      - x-corelationId
    log:
      slf4j:
        whitelisted-mdc-keys:
          - x-corelationId
After making this change in the filter, the corelationId is not available when I make an HTTP call to another service within the same app; basically, the keys are not getting propagated.
In your library you can implement an ApplicationEnvironmentPreparedEvent listener and add the configuration you need there, e.g.:
@Component
public class CustomApplicationListener implements ApplicationListener<ApplicationEvent> {
    private static final Logger log = LoggerFactory.getLogger(CustomApplicationListener.class);

    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ApplicationEnvironmentPreparedEvent) {
            log.debug("Custom ApplicationEnvironmentPreparedEvent Listener");
            ApplicationEnvironmentPreparedEvent envEvent = (ApplicationEnvironmentPreparedEvent) event;
            ConfigurableEnvironment env = envEvent.getEnvironment();
            Properties props = new Properties();
            props.put("spring.sleuth.propagation-keys", "x-corelationId");
            props.put("spring.sleuth.log.slf4j.whitelisted-mdc-keys", "x-corelationId");
            env.getPropertySources().addFirst(new PropertiesPropertySource("custom", props));
        }
    }
}
Then in your microservice you register this custom listener:
public static void main(String[] args) {
    ConfigurableApplicationContext context = new SpringApplicationBuilder(MyApplication.class)
            .listeners(new CustomApplicationListener()).run(args);
}
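If the goal is to avoid touching every microservice's main() method, a possible alternative (my assumption, not part of the original answer) is to let the common library register the listener itself through its own META-INF/spring.factories, which Spring Boot reads before the environment is prepared:

# src/main/resources/META-INF/spring.factories in the common library
# (com.example.CustomApplicationListener is a placeholder for your actual class)
org.springframework.context.ApplicationListener=com.example.CustomApplicationListener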
I've gone through the documentation and it seems like I need to add spring.sleuth.propagation-keys and whitelist them using spring.sleuth.log.slf4j.whitelisted-mdc-keys.
Yes, you need to do this.
Is there another way to add these properties in the common module, so that I do not need to include them in each and every microservice?
Yes, you can use a Spring Cloud Config server with a properties file called application.yml / application.properties that sets those properties for all microservices.
The answer from Mahmoud works great when you want to register the whitelisted-mdc-keys programmatically.
An extra tip for when you need these properties in a test as well: you can find the answer in this post: How to register a ApplicationEnvironmentPreparedEvent in Spring Test

Camel: use datasource configured by spring-boot

I have a project in which I'm using spring-boot-jdbc-starter, and it automatically configures a DataSource for me.
Now I added camel-spring-boot to the project and was able to successfully create routes from beans of type RouteBuilder.
But when I use Camel's SQL component, it cannot find the datasource. Is there any simple way to add the Spring-configured datasource to the CamelContext? In the Camel project samples they use Spring XML for the datasource configuration, but I'm looking for a way with Java config. This is what I tried:
@Configuration
public class SqlRouteBuilder extends RouteBuilder {

    @Bean
    public SqlComponent sqlComponent(DataSource dataSource) {
        SqlComponent sqlComponent = new SqlComponent();
        sqlComponent.setDataSource(dataSource);
        return sqlComponent;
    }

    @Override
    public void configure() throws Exception {
        from("sql:SELECT * FROM tasks WHERE STATUS NOT LIKE 'completed'")
            .to("mock:sql");
    }
}
I have to post this because, although the answer is in the comments, you might not notice it, and in my case this configuration was necessary to get the process running.
The use of the SQL component should look like this:
from("timer://dbQueryTimer?period=10s")
.routeId("DATABASE_QUERY_TIMER_ROUTE")
.to("sql:SELECT * FROM event_queue?dataSource=#dataSource")
.process(xchg -> {
List<Map<String, Object>> row = xchg.getIn().getBody(List.class);
row.stream()
.map((x) -> {
EventQueue eventQueue = new EventQueue();
eventQueue.setId((Long)x.get("id"));
eventQueue.setData((String)x.get("data"));
return eventQueue;
}).collect(Collectors.toList());
})
.log(LoggingLevel.INFO,"******Database query executed - body:${body}******");
Note the use of ?dataSource=#dataSource. The dataSource name points to the DataSource bean configured by Spring; it can be changed to reference another bean, letting different routes use different DataSources.
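For completeness, a sketch of the Spring Boot side (values are placeholders): the auto-configured DataSource bean is, as far as I know, named dataSource, which is exactly what #dataSource looks up in the registry.

# application.properties - placeholder values for an embedded H2 database
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=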
Here is sample/example code (Java DSL). For this I used:
Spring Boot
H2 embedded database
Camel
On startup, Spring Boot creates the table and loads the data. Then the Camel route runs a select to pull the data.
Here is the code:
public void configure() throws Exception {
    from("timer://timer1?period=1000")
        .setBody(constant("select * from Employee"))
        .to("jdbc:dataSource")
        .split().simple("${body}")
        .log("process row ${body}");
}
Full example on GitHub.

Compatibility of ContainerResponseFilter in jersey 1.17

Can I run my custom filter extending ContainerResponseFilter on Jersey 1.17?
I am using GrizzlyWebServer. Please suggest. Below is my sample server code that adds the filter:
GrizzlyWebServer webServer = new GrizzlyWebServer(.............);
....
....
ServletAdapter adapter3 = new ServletAdapter();
adapter3.addInitParameter("com.sun.jersey.config.property.packages", "com.motilink.server.services");
adapter3.setContextPath("/");
adapter3.setServletInstance(new ServletContainer());
adapter3.addContextParameter(ResourceConfig.PROPERTY_CONTAINER_RESPONSE_FILTERS, PoweredbyResponseFilter.class.getName());
webServer.addGrizzlyAdapter(adapter3, new String[]{"/"});
...
.....
My filter:
@FrontierResponse
@Provider
public class PoweredbyResponseFilter implements ContainerResponseFilter {
    @Override
    public void filter(ContainerRequestContext requestContext, ContainerResponseContext responseContext)
            throws IOException {
        System.out.println("hell");
        responseContext.getHeaders().add("X-Powered-By", "Jersey :-)");
    }
}
Name-binding annotation and resource method:
@NameBinding
@Retention(value = RetentionPolicy.RUNTIME)
public @interface FrontierResponse {
}

@GET
@Produces("text/plain")
@Path("plain")
//@FrontierResponse
public String getMessage() {
    System.out.println("hello world called");
    return "Hello World";
}
And finally I call it from a browser:
http://localhost:4464/plain
Add the ResourceConfig.PROPERTY_CONTAINER_RESPONSE_FILTERS property as an init-param and not as a context-param:
...
adapter3.addInitParameter(ResourceConfig.PROPERTY_CONTAINER_RESPONSE_FILTERS, PoweredbyResponseFilter.class.getName());
...
EDIT 1
From your answer it seems that you're actually trying to use the Jersey 1.x (1.17) runtime with JAX-RS 2.0 providers (ContainerRequestContext and ContainerResponseContext were introduced in JAX-RS 2.0, and Jersey 1.x doesn't know how to use them).
So my advice would be - drop all your Jersey 1.17 dependencies and replace them with Jersey 2.x dependencies. Take a look at our helloworld-webapp example (particularly at App class) to see how to create a Grizzly server instance with JAX-RS application.
Note that it is sufficient to add just ServerProperties.PROVIDER_PACKAGES property to init-params and your Resources and Providers (incl. response filters) will be scanned and registered in the application.
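To make that concrete, here is a minimal Jersey 2.x Grizzly bootstrap sketch (my sketch, not the exact example code; the port and package are taken from the question):

import java.net.URI;
import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
import org.glassfish.jersey.server.ResourceConfig;

public class App {
    public static void main(String[] args) {
        // package scanning registers resources and providers,
        // including @Provider-annotated response filters
        ResourceConfig config = new ResourceConfig()
                .packages("com.motilink.server.services");
        // creates and starts the server
        HttpServer server = GrizzlyHttpServerFactory
                .createHttpServer(URI.create("http://localhost:4464/"), config);
    }
}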

spring security logout handler not working

I have the Grails plugin spring-security-core-1.2.1.
I registered a security event listener as a Spring bean in grails-app/conf/spring/resources.groovy:
securityEventListener(LoggingSecurityEventListener)
and made two additions to grails-app/conf/Config.groovy:
grails.plugins.springsecurity.useSecurityEventListener = true
grails.plugins.springsecurity.logout.handlerNames =
['rememberMeServices',
'securityContextLogoutHandler',
'securityEventListener']
My logging/logout listener:
class LoggingSecurityEventListener implements ApplicationListener<AbstractAuthenticationEvent>, LogoutHandler {

    void onApplicationEvent(AbstractAuthenticationEvent event) {
        System.out.println('appEvent')
    }

    void logout(HttpServletRequest request, HttpServletResponse response,
                Authentication authentication) {
        System.out.println('logout')
    }
}
onApplicationEvent works fine, but logout is not working. What could be the problem?
Alternatively, can you tell me how to get all logged-in users?
When you set
grails.plugins.springsecurity.useSecurityEventListener = true
the Spring Security plugin registers its own event listener called securityEventListener. The handler looked up from handlerNames is probably getting the plugin-registered one instead of yours. Try renaming your bean to something like:
loggingSecurityEventListener(LoggingSecurityEventListener)
and replacing the handlerNames with
grails.plugins.springsecurity.logout.handlerNames =
['rememberMeServices',
'securityContextLogoutHandler',
'loggingSecurityEventListener']
NOTE: the configuration property names have changed (plugins -> plugin). If you're using the Grails spring-security plugin version 2.0 or later, use this:
grails.plugin.springsecurity.logout.handlerNames
