I have a filter (OncePerRequestFilter) that intercepts incoming requests and logs the traceId, spanId, etc., which works well.
This filter lives in a common module that is included in my other projects, so I don't have to add the Spring Sleuth dependency to each microservice; I created it as a library because any change to the library is then shared by all modules.
Now I have to add a new propagation key that needs to be propagated to all services via HTTP headers, just like the traceId and spanId. For that I've extracted the current span from HttpTracing and added a baggage key to it (as shown below):
Span span = httpTracing.tracing().tracer().currentSpan();
String corelationId =
        StringUtils.isEmpty(request.getHeader(CORELATION_ID))
                ? "n/a"
                : request.getHeader(CORELATION_ID);
ExtraFieldPropagation.set(CUSTOM_TRACE_ID_MDC_KEY_NAME, corelationId);
span.annotate("baggage_set");
span.tag(CUSTOM_TRACE_ID_MDC_KEY_NAME, corelationId);
I've added propagation-keys and whitelisted-mdc-keys to the application.yml file (within my library) like below:
spring:
  sleuth:
    propagation-keys:
      - x-corelationId
    log:
      slf4j:
        whitelisted-mdc-keys:
          - x-corelationId
After making this change in the filter, the corelationId is not available when I make an HTTP call to another service from the same app; basically, the keys are not getting propagated.
In your library you can implement an ApplicationEnvironmentPreparedEvent listener and add the configuration you need there.
Example:
@Component
public class CustomApplicationListener implements ApplicationListener<ApplicationEvent> {

    private static final Logger log = LoggerFactory.getLogger(CustomApplicationListener.class);

    @Override
    public void onApplicationEvent(ApplicationEvent event) {
        if (event instanceof ApplicationEnvironmentPreparedEvent) {
            log.debug("Custom ApplicationEnvironmentPreparedEvent Listener");
            ApplicationEnvironmentPreparedEvent envEvent = (ApplicationEnvironmentPreparedEvent) event;
            ConfigurableEnvironment env = envEvent.getEnvironment();
            Properties props = new Properties();
            props.put("spring.sleuth.propagation-keys", "x-corelationId");
            props.put("spring.sleuth.log.slf4j.whitelisted-mdc-keys", "x-corelationId");
            env.getPropertySources().addFirst(new PropertiesPropertySource("custom", props));
        }
    }
}
Then in your microservice you register this custom listener:
public static void main(String[] args) {
    ConfigurableApplicationContext context = new SpringApplicationBuilder(MyApplication.class)
            .listeners(new CustomApplicationListener())
            .run(args);
}
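If you'd rather not touch every service's main method, one option worth knowing (a sketch, assuming the listener class above lives in the common library under a package such as com.example.common) is Spring Boot's standard META-INF/spring.factories registration, which makes every service that depends on the library pick up the listener automatically:

# src/main/resources/META-INF/spring.factories (inside the common library)
org.springframework.context.ApplicationListener=com.example.common.CustomApplicationListener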
I've gone through the documentation and it seems like I need to add spring.sleuth.propagation-keys and whitelist them using spring.sleuth.log.slf4j.whitelisted-mdc-keys
Yes, you need to do this.
Is there another way to add these properties in the common module so that I do not need to include them in each and every microservice?
Yes, you can use a Spring Cloud Config Server with a properties file called application.yml / application.properties that sets those properties for all microservices.
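For instance, a shared file in the Config Server's repository could look like the sketch below; the file name application.yml (rather than a service-specific name) is what makes it apply to every client application:

# config-repo/application.yml, served by the Config Server to all clients
spring:
  sleuth:
    propagation-keys:
      - x-corelationId
    log:
      slf4j:
        whitelisted-mdc-keys:
          - x-corelationId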
The answer from Mahmoud works great when you want to register the whitelisted-mdc-keys programmatically.
An extra tip: when you need these properties in a test as well, you can find the answer in this post: How to register a ApplicationEnvironmentPreparedEvent in Spring Test
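In short, one way that works there (a sketch, not necessarily the exact approach from that post) is a test-side ApplicationContextInitializer that injects the same properties; TestPropertySourceUtils comes from spring-test, and the property values mirror the ones used earlier:

public class SleuthPropertiesInitializer
        implements ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext context) {
        // add the same keys the library listener would contribute at runtime
        TestPropertySourceUtils.addInlinedPropertiesToEnvironment(context,
                "spring.sleuth.propagation-keys=x-corelationId",
                "spring.sleuth.log.slf4j.whitelisted-mdc-keys=x-corelationId");
    }
}

// used on the test class as:
// @SpringBootTest
// @ContextConfiguration(initializers = SleuthPropertiesInitializer.class)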
I was reading "Spring Microservices In Action (2021)" because I wanted to brush up on Microservices.
Now with Spring Boot 3 a few things changed. In the book, an easy example of how to push messages to a topic and how to consume messages to a topic were presented.
The problem is: the examples presented just do not work with Spring Boot 3. Sending messages from a Spring Boot 2 project works. The underlying project can be found here:
https://github.com/ihuaylupo/manning-smia/tree/master/chapter10
Example 1 (organization-service):
Consider this Config:
spring.cloud.stream.bindings.output.destination=orgChangeTopic
spring.cloud.stream.bindings.output.content-type=application/json
spring.cloud.stream.kafka.binder.zkNodes=kafka #kafka is used as a network alias in docker-compose
spring.cloud.stream.kafka.binder.brokers=kafka
And this component (class), which is injected into a service in this project:
@Component
public class SimpleSourceBean {

    private Source source;
    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    public SimpleSourceBean(Source source) {
        this.source = source;
    }

    public void publishOrganizationChange(String action, String organizationId) {
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        source.output().send(MessageBuilder.withPayload(change).build());
    }
}
This code fires a message to the topic (destination) orgChangeTopic. The way I understand it, the first time a message is fired, the topic is created.
Question 1: How do I do this in Spring Boot 3? Config-wise and "code-wise"?
Example 2:
Consider this config:
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.content-type=application/json
spring.cloud.stream.bindings.input.group=licensingGroup
spring.cloud.stream.kafka.binder.zkNodes=kafka
spring.cloud.stream.kafka.binder.brokers=kafka
And this code:
@SpringBootApplication
@RefreshScope
@EnableDiscoveryClient
@EnableFeignClients
@EnableEurekaClient
@EnableBinding(Sink.class)
public class LicenseServiceApplication {

    private static final Logger log = LoggerFactory.getLogger(LicenseServiceApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(LicenseServiceApplication.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void loggerSink(OrganizationChangeModel orgChange) {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    }
}
What this is supposed to do: whenever a message arrives on orgChangeTopic, the method loggerSink should fire.
How do I do this in Spring Boot 3?
In Spring Cloud Stream 4.0.0 (the version used with Boot 3), a few things were removed, such as EnableBinding, StreamListener, etc. We deprecated them in 3.x and finally removed them in the 4.0.0 version. The annotation-based programming model was removed in favor of the functional programming style enabled through the Spring Cloud Function project. You essentially express your business logic as a java.util.function.Function, Consumer, or Supplier for a processor, sink, and source, respectively. For ad-hoc source situations, as in your first example, Spring Cloud Stream provides the StreamBridge API for custom sends.
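For completeness, the processor case (which is not part of your examples) would look like the sketch below; the function name process is arbitrary, and the framework derives the binding names process-in-0 and process-out-0 from it:

@Bean
public Function<OrganizationChangeModel, OrganizationChangeModel> process() {
    return change -> {
        // transform the incoming event and forward the result downstream
        return change;
    };
}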
Your example #1 can be re-written like this:
@Component
public class SimpleSourceBean {

    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);

    @Autowired
    StreamBridge streamBridge;

    public void publishOrganizationChange(String action, String organizationId) {
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, organizationId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                organizationId,
                UserContext.getCorrelationId());
        streamBridge.send("output-out-0", MessageBuilder.withPayload(change).build());
    }
}
Config
spring.cloud.stream.bindings.output-out-0.destination=orgChangeTopic
spring.cloud.stream.kafka.binder.brokers=kafka
Just so you know, you no longer need the zkNodes property, nor the content type, since the framework handles the content conversion for you.
StreamBridge#send takes a binding name and the payload. The binding name can be anything, but for consistency we used output-out-0 here. Please read the reference docs for more context on the reasoning behind this binding name.
If you have a simple source that runs on a timer, you can express it as a Supplier, as below (instead of using a StreamBridge).
@Bean
public Supplier<OrganizationChangeModel> output() {
    return () -> {
        // return the payload
    };
}
spring.cloud.function.definition=output
spring.cloud.stream.bindings.output-out-0.destination=...
Example #2
@Bean
public Consumer<OrganizationChangeModel> loggerSink() {
    return orgChange -> {
        log.info("Received an {} event for organization id {}",
                orgChange.getAction(), orgChange.getOrganizationId());
    };
}
Config:
spring.cloud.function.definition=loggerSink
spring.cloud.stream.bindings.loggerSink-in-0.destination=orgChangeTopic
spring.cloud.stream.bindings.loggerSink-in-0.group=licensingGroup
spring.cloud.stream.kafka.binder.brokers=kafka
If you want the input/output binding names to be specifically input or output rather than the in-0 / out-0 style, there are ways to make that happen. Details for this are in the reference docs.
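For example, a sketch of that renaming for the consumer above, using the spring.cloud.stream.function.bindings mechanism (verify the exact property names against the reference docs):

spring.cloud.stream.function.bindings.loggerSink-in-0=input
spring.cloud.stream.bindings.input.destination=orgChangeTopic
spring.cloud.stream.bindings.input.group=licensingGroup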
@ConfigurationProperties locations was deprecated in Spring Boot 1.4.x and the option has now been removed in 1.5.x.
I was using it like this: BucketTestConfig.java
For now, with the deprecation, I'm trying to set the system property spring.config.location for both production code and test code as an alternative.
However, ./gradlew clean test is still failing although I set the system property.
What is the best alternative for the deprecated @ConfigurationProperties locations in this case?
UPDATE:
Using SpringApplicationBuilder.properties() doesn't work in the test (BucketTestRepositoryTests).
Using SpringApplicationBuilder.listeners() doesn't work in the test (BucketTestRepositoryTests), either.
UPDATE (2nd):
There was no reason to depend on @ConfigurationProperties in my case, so I went with YAML instead, as follows: https://github.com/izeye/spring-boot-throwaway-branches/commit/a1290672dceea98706b1a258f8a17e2628ea01ee
So this question's title is invalid and this question can be deleted.
Follow this thread for more information.
Basically, that thread suggests two options.
The first option is to set spring.config.name to a list of the files you want to load:
new SpringApplicationBuilder(Application.class)
        .properties("spring.config.name=application,mine")
        .run(args);
The second option is to add listeners:
new SpringApplicationBuilder(SanityCheckApplication.class)
        .listeners(new LoadAdditionalProperties())
        .run(args);
Content of LoadAdditionalProperties
@Component
public class LoadAdditionalProperties implements ApplicationListener<ApplicationEnvironmentPreparedEvent> {

    private ResourceLoader loader = new DefaultResourceLoader();

    @Override
    public void onApplicationEvent(ApplicationEnvironmentPreparedEvent event) {
        try {
            Resource resource = loader.getResource("classpath:mine.properties");
            PropertySource<?> propertySource = new PropertySourcesLoader().load(resource);
            event.getEnvironment().getPropertySources().addLast(propertySource);
        } catch (IOException ex) {
            throw new IllegalStateException(ex);
        }
    }
}
Maybe someone has an idea about the following problem:
I am currently on a project where I want to use AWS SQS with the Spring Cloud integration. For the receiver part I want to provide an API where a user can register a "message handler" on a queue; the handler is an interface that will contain the user's business logic, e.g.
MyAwsSqsReceiver receiver = new MyAwsSqsReceiver();
receiver.register("a-queue-name", new MessageHandler() {
    @Override
    public void handle(String message) {
        // ... business logic for the received message
    }
});
I found examples, e.g.
https://codemason.me/2016/03/12/amazon-aws-sqs-with-spring-cloud/
and read the documentation:
http://cloud.spring.io/spring-cloud-aws/spring-cloud-aws.html#_sqs_support
But the only thing I found there to "connect" a piece of processing logic to an incoming message is an annotation on a method, e.g. @SqsListener or @MessageMapping.
These annotations are fixed to a certain queue name, though. So now I am at a loss how to dynamically "connect" my provided MessageHandler (from my API) to the incoming messages for the specified queue name.
In the example's config there is a SimpleMessageListenerContainer, which gets a QueueMessageHandler set, but this QueueMessageHandler does not seem to be the right place to set my handler, nor to override its methods and provide my own subclass of QueueMessageHandler.
I already did something like this with the Spring AMQP integration and RabbitMQ, and thought it would be similar here with AWS SQS.
Does anyone have an idea how to accomplish this?
thx + bye,
Ximon
EDIT:
I found that Spring JMS could actually do that, e.g. www.javacodegeeks.com/2016/02/aws-sqs-spring-jms-integration.html. Does anybody know what consequences using the JMS protocol has here, good or bad?
I am facing the same issue.
I am trying to go an unusual way: I set up an AWS client bean at build time, and then, instead of using the @SqsListener annotation to consume from a specific queue, I use the @Scheduled annotation so that I can programmatically poll (every 10 seconds in my case) whichever queue I want to consume from.
I made an example that iterates over the queues defined in properties and then consumes from each one.
Client Bean:
@Bean
@Primary
public AmazonSQSAsync awsSqsClient() {
    return AmazonSQSAsyncClientBuilder
            .standard()
            .withRegion(Regions.EU_WEST_1.getName())
            .build();
}
Consumer:
// injected in the constructor
private final AmazonSQSAsync awsSqsClient;

@Scheduled(fixedDelay = 10000)
public void pool() {
    properties.getSqsQueues()
            .forEach(queue -> {
                val receiveMessageRequest = new ReceiveMessageRequest(queue)
                        .withWaitTimeSeconds(10)
                        .withMaxNumberOfMessages(10);
                // reading the messages
                val result = awsSqsClient.receiveMessage(receiveMessageRequest);
                val sqsMessages = result.getMessages();
                log.info("Received Message on queue {}: message = {}", queue, sqsMessages.toString());
                // deleting the messages
                sqsMessages.forEach(message -> {
                    val deleteMessageRequest = new DeleteMessageRequest(queue, message.getReceiptHandle());
                    awsSqsClient.deleteMessage(deleteMessageRequest);
                });
            });
}
Just to clarify: in my case I need multiple queues, one per tenant, with the queue URL for each one passed in a property file. Of course, in your case you could get the queue names from another source, maybe a ThreadLocal holding the queues you have created at runtime.
If you wish, you can also try the JMS approach, where you create message consumers and add a listener to each one you wish (see the AWS SQS JMS documentation).
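To make that concrete, here is a rough sketch of the JMS route using the amazon-sqs-java-messaging-lib; the queue name and the handler variable stand in for the question's runtime-registered MessageHandler, and the exact calls should be verified against that library's documentation:

SQSConnectionFactory connectionFactory = new SQSConnectionFactory(
        new ProviderConfiguration(),
        AmazonSQSClientBuilder.defaultClient());

SQSConnection connection = connectionFactory.createConnection();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

// queue name and handler both arrive at runtime, as in the question's API
MessageConsumer consumer = session.createConsumer(session.createQueue("a-queue-name"));
consumer.setMessageListener(message -> {
    try {
        // delegate the raw text payload to the registered MessageHandler
        handler.handle(((TextMessage) message).getText());
    } catch (JMSException e) {
        throw new RuntimeException(e);
    }
});
connection.start();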
When we do Spring and SQS, we use spring-cloud-starter-aws-messaging.
Then just create a listener class:
@Component
public class MyListener {

    @SqsListener(value = "myqueue")
    public void listen(MyMessageType message) {
        // process the message
    }
}
I have a project in which I'm using spring-boot-starter-jdbc, and it automatically configures a DataSource for me.
Now I have added camel-spring-boot to the project, and I was able to successfully create routes from beans of type RouteBuilder.
But when I use the sql component of Camel, it cannot find the datasource. Is there any simple way to add the Spring-configured datasource to the CamelContext? In the Camel project's samples they use Spring XML for the datasource configuration, but I'm looking for a way with Java config. This is what I tried:
@Configuration
public class SqlRouteBuilder extends RouteBuilder {

    @Bean
    public SqlComponent sqlComponent(DataSource dataSource) {
        SqlComponent sqlComponent = new SqlComponent();
        sqlComponent.setDataSource(dataSource);
        return sqlComponent;
    }

    @Override
    public void configure() throws Exception {
        from("sql:SELECT * FROM tasks WHERE STATUS NOT LIKE 'completed'")
                .to("mock:sql");
    }
}
I have to post this because, although the answer is in the comments, you might not notice it, and in my case such a configuration was necessary to get the process running.
The use of the SQL component should look like this:
from("timer://dbQueryTimer?period=10s")
.routeId("DATABASE_QUERY_TIMER_ROUTE")
.to("sql:SELECT * FROM event_queue?dataSource=#dataSource")
.process(xchg -> {
List<Map<String, Object>> row = xchg.getIn().getBody(List.class);
row.stream()
.map((x) -> {
EventQueue eventQueue = new EventQueue();
eventQueue.setId((Long)x.get("id"));
eventQueue.setData((String)x.get("data"));
return eventQueue;
}).collect(Collectors.toList());
})
.log(LoggingLevel.INFO,"******Database query executed - body:${body}******");
Note the use of ?dataSource=#dataSource. The dataSource name refers to the DataSource bean configured by Spring; it can be changed to another one, which lets you use different DataSources in different routes.
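As an illustration of that last point, a sketch assuming a second, hypothetical bean named reportingDataSource (the bean name is exactly what the endpoint URI refers to after the # sign):

// hypothetical second datasource; any DataSource bean would do
@Bean(name = "reportingDataSource")
public DataSource reportingDataSource() {
    return DataSourceBuilder.create()
            .url("jdbc:h2:mem:reporting")
            .build();
}

// a route that uses it instead of the default dataSource bean
from("sql:SELECT * FROM report_events?dataSource=#reportingDataSource")
        .to("mock:reports");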
Here is sample/example code (Java DSL). For this I used:
Spring boot
H2 embedded Database
Camel
On startup, Spring Boot creates the table and loads the data. Then the Camel route runs a select to pull the data.
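(With an embedded H2 database, Spring Boot does that part by picking up schema.sql and data.sql from the classpath automatically; a minimal sketch, with the column layout invented for illustration:)

-- src/main/resources/schema.sql
CREATE TABLE Employee (id INT PRIMARY KEY, name VARCHAR(100));

-- src/main/resources/data.sql
INSERT INTO Employee VALUES (1, 'Alice');
INSERT INTO Employee VALUES (2, 'Bob');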
Here is the code:
public void configure() throws Exception {
    from("timer://timer1?period=1000")
            .setBody(constant("select * from Employee"))
            .to("jdbc:dataSource")
            .split().simple("${body}")
            .log("process row ${body}");
}
Full example on GitHub.
I have the Grails plugin spring-security-core 1.2.1.
I registered a security event listener as a Spring bean in grails-app/conf/spring/resources.groovy:
securityEventListener(LoggingSecurityEventListener)
and made two additions to grails-app/conf/Config.groovy:
grails.plugins.springsecurity.useSecurityEventListener = true
grails.plugins.springsecurity.logout.handlerNames =
        ['rememberMeServices',
         'securityContextLogoutHandler',
         'securityEventListener']
My logging/logout listener:
class LoggingSecurityEventListener implements ApplicationListener<AbstractAuthenticationEvent>, LogoutHandler {

    void onApplicationEvent(AbstractAuthenticationEvent event) {
        System.out.println('appEvent')
    }

    void logout(HttpServletRequest request, HttpServletResponse response,
                Authentication authentication) {
        System.out.println('logout')
    }
}
onApplicationEvent works fine, but logout is not working.
What could be the problem?
Or can you tell me how to track users logging in and out?
When you set
grails.plugins.springsecurity.useSecurityEventListener = true
the Spring Security plugin registers its own event listener called securityEventListener. The handler looked up from handlerNames is probably the plugin-registered one instead of yours. Try renaming your bean to something like:
loggingSecurityEventListener(LoggingSecurityEventListener)
and replacing the handlerNames with
grails.plugins.springsecurity.logout.handlerNames =
        ['rememberMeServices',
         'securityContextLogoutHandler',
         'loggingSecurityEventListener']
NOTE: the configuration property names have changed (plugins -> plugin). If you're using version 2.0 or later of the Grails spring-security plugin, use this:
grails.plugin.springsecurity.logout.handlerNames