We have a Spring application that publishes to and listens on queues hosted on a remote application server. Our Spring-based publisher and listener run within our own application server.
One of the problems in our test environments is that the other application server is often not up, so when our test application starts and Spring tries to inject a JmsTemplate with its ConnectionFactory, it blows up because the connection is not valid, and our entire application fails to load. This is causing grief for the other developers in our group who have nothing to do with JMS. All they want to do is run and test their code, but the JmsTemplate's ConnectionFactory is down.
Does anyone have any suggestions for making Spring ignore some bad injections so that our application can start properly?
Thanks
I believe this can be achieved by defining separate Spring profiles and then passing the active profile as a parameter when starting your application in the test environments. You can then mock or omit any beans per profile.
Example
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

@Configuration
@Profile("test")
public class AppConfigTest {
    // test-specific bean definitions go here
}
JVM param / System property
-Dspring.profiles.active=test
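For illustration, a minimal sketch of what mocking the JMS wiring under the "test" profile could look like (assuming Mockito is available; the TestJmsConfig class name and the stubbing approach are illustrative, not part of the original answer):

import javax.jms.ConnectionFactory;

import org.mockito.Mockito;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.jms.core.JmsTemplate;

// Hypothetical sketch: under the "test" profile, replace the real
// connection factory with a stub so the context can start even when
// the remote broker is down.
@Configuration
@Profile("test")
public class TestJmsConfig {

    @Bean
    public JmsTemplate jmsTemplate() {
        // The stub satisfies injection; it only fails if a send or
        // receive is actually attempted.
        ConnectionFactory stub = Mockito.mock(ConnectionFactory.class);
        return new JmsTemplate(stub);
    }
}

The real JMS configuration can then carry @Profile("!test") so that only one of the two JmsTemplate beans is ever registered.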
I am storing an application.properties file in my config server, and my client applications refer to the config server to download the property files.
Scenario 1:
When I change the value of the property server.port in my config server, can the change be reflected in my client application without restarting the application?
You can use @RefreshScope beans for this purpose. This is not ideal, but it is as close as you can get with the config server; this is a pretty advanced thing, after all.
Beans marked with this annotation will cause Spring to clear its internal cache of the beans / configuration classes upon an EnvironmentChangeEvent; a new instance of the bean will then be created the next time you call that bean.
To trigger such an event when the config server changes, you can either explicitly call the actuator's refresh endpoint or develop your own solution, possibly based on some messaging system, so that the config server is the producer of a "change" message and the consumer is your application.
Now, I can't say for sure whether it will work with server.port in particular, since I've personally never seen a need to change this property, but for your custom beans this method will do the job.
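For illustration, a minimal sketch of a refreshable bean (the my.message property and the MessageHolder class are hypothetical):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.stereotype.Component;

// Hypothetical example: "my.message" is assumed to be served by the
// config server. After a POST to /actuator/refresh (or any other
// EnvironmentChangeEvent), the cached instance is dropped and the bean
// is rebuilt with the new value the next time it is accessed.
@RefreshScope
@Component
public class MessageHolder {

    private final String message;

    public MessageHolder(@Value("${my.message:default}") String message) {
        this.message = message;
    }

    public String getMessage() {
        return message;
    }
}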
Here is a good tutorial about this topic.
We have an application stack, deployed in Tomcat, that consists of several Spring Boot applications. As part of our operations, we want to send some messages to a vm endpoint, where a camel route will consume those messages and then publish them to a JMS topic for any of the other Spring Boot applications that are interested in messages on that topic.
When I start the application stack, there are three Spring Boot apps that utilize Camel, and I see Camel start properly in the logs. But when one of the apps sends a message to the vm endpoint, the route that consumes from that endpoint and routes the messages to the jms topic does not seem to get that message. I have placed the camel-core jar in my Tomcat lib directory. In the Spring Boot Maven plugin configuration, I have specified an exclusion for the camel-core jar. Oddly enough, that jar is in the WEB-INF/lib of the war anyway! So I have stopped Tomcat, removed that jar from the exploded war, and restarted Tomcat, but that does not change the behavior of the messaging.
Here are the versions that we are using:
Spring Boot 2.3.1
Camel 3.4.2
Tomcat 8.5.5
The first Spring Boot app that links everything together, with the Camel route that consumes from the vm endpoint and produces that message on the jms topic, is our "routing engine". It uses camel-spring-boot-starter, spring-boot-starter-artemis, camel-vm-starter, artemis-jms-server and camel-jms-starter. Its RouteBuilder's configure method looks like this:
from("vm:task")
.log(LoggingLevel.WARN, "********** Received task message");
.to("jms:topic:local.private.task")
.routeId("taskToJms");
The app that produces messages to the vm endpoint uses camel-spring-boot-starter and camel-vm-starter. In that app, a @Service class receives a ProducerTemplate that is autowired in the constructor. When the application invokes this component to send the message, I see a line in the logs that says
o.a.c.impl.engine.DefaultProducerCache (169) - >>>> vm://task Exchange[]
so it appears that the message is being produced and sent properly to the vm endpoint. However, I see no indication that it has been received/consumed in the routing engine's Camel route, since the route's log line is not logging anything, and I see no other indication of receiving the message in the log. The strange thing is that I am no longer getting the error about there being no consumers on the vm:task endpoint, which I was getting before I put the camel-core jar in Tomcat's lib directory.
Am I doing anything obviously wrong? How can I get the Spring Boot Maven plugin to really exclude camel-core? And why are the messages (sent to the vm endpoint) not being consumed by the route in the routing engine? Thanks in advance for any help.
Edit: I was able to keep camel-core out of the war files by adding an exclusion to the configuration of the war plugin, but I was still not able to consume the message on the vm endpoint.
I will post the answer, or at least "an" answer, for anyone who might have found themselves in the puzzling situation that I found myself in.
In short, the answer is that it is best to avoid trying to send VM messages across separate contexts within one big JVM like Tomcat. Instead, use something like JMS. I used Artemis, and I stood up an embedded broker in one of the Spring Boot apps in Tomcat. For the other apps (which act as clients) to connect to the embedded Artemis server, you need to add a @Configuration class (in the module that stands up the embedded broker) that implements ArtemisConfigurationCustomizer:
import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptorFactory;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;
import org.springframework.boot.autoconfigure.jms.artemis.ArtemisConfigurationCustomizer;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ArtemisConfig implements ArtemisConfigurationCustomizer {
    @Override
    public void customize(final org.apache.activemq.artemis.core.config.Configuration configuration) {
        configuration.addConnectorConfiguration("nettyConnector", new TransportConfiguration(NettyConnectorFactory.class.getName()));
        configuration.addAcceptorConfiguration(new TransportConfiguration(NettyAcceptorFactory.class.getName()));
    }
}
That lets your other apps connect to the embedded Artemis broker. Also, you no longer have to worry about upgrading the camel-core jar in your Tomcat shared lib folder when you upgrade Camel to a different version. It's good to keep things simple for maintenance purposes!
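For the client side, a minimal sketch of the properties that point Spring Boot's Artemis auto-configuration at that broker (assuming the default Netty port 61616 and that the broker runs on the same host):

spring.artemis.mode=native
spring.artemis.host=localhost
spring.artemis.port=61616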
Anyway, I hope this helps somebody else who might find themselves here someday.
I am new to Spring Integration.
My use case is:
Listen to a RabbitMQ queue/topic, get the message, process it, send it to other message broker (mostly it will be another RabbitMQ instance).
Expected load: 5000 messages/sec
In application.properties we can set configurations for one host.
How to use Spring Integration between two message brokers?
All the examples that I see are for one message broker. Any pointers for getting started with two message brokers and Spring Integration?
Regards,
Mahesh
Since you mention an application.properties, it sounds like you are using Spring Boot with its auto-configuration feature. This is a very important detail in your question, because Spring Boot is opinionated about auto-configuration, and you really can have only one broker connection configuration auto-configured. If you would like to have another, similar one in the same application, then you have to forget that auto-configuration feature. You can still use the mentioned application.properties, but you have to manage the beans manually.
Since you are talking about a RabbitMQ connection, you need to exclude RabbitAutoConfiguration and manage all the required beans manually:
@SpringBootApplication(exclude = RabbitAutoConfiguration.class)
You can still use @EnableConfigurationProperties(RabbitProperties.class) on one of your @Configuration classes to be able to inject that RabbitProperties instance and populate a respective CachingConnectionFactory. For the second broker, you can introduce your own @ConfigurationProperties or just configure everything manually, reading properties via @Value. See more info about manual connection factory configuration in the Spring AMQP reference manual: https://docs.spring.io/spring-amqp/docs/2.2.1.RELEASE/reference/html/#connections
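A rough sketch of how the two connection factories could be declared (the second.rabbitmq.* property prefix and the bean names are made up for illustration):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.autoconfigure.amqp.RabbitProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableConfigurationProperties(RabbitProperties.class)
public class TwoBrokersConfig {

    // Broker 1: still driven by the standard spring.rabbitmq.* properties
    @Bean
    public CachingConnectionFactory consumerConnectionFactory(RabbitProperties props) {
        CachingConnectionFactory cf = new CachingConnectionFactory(props.getHost(), props.getPort());
        cf.setUsername(props.getUsername());
        cf.setPassword(props.getPassword());
        return cf;
    }

    // Broker 2: hypothetical second.rabbitmq.* properties read via @Value
    @Bean
    public CachingConnectionFactory producerConnectionFactory(
            @Value("${second.rabbitmq.host}") String host,
            @Value("${second.rabbitmq.port}") int port) {
        return new CachingConnectionFactory(host, port);
    }
}

Your listener containers and RabbitTemplate beans then have to be wired explicitly against the appropriate connection factory, since there is no auto-configuration left to pick one for you.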
I am working with a Spring Boot application that was written using Apache Camel spring-xml routes. There is very little Java-based application logic; it is nearly entirely written in XML and based on the various Camel routes.
The routes are configured to connect to the different environments and systems through property files, using properties such as KAFKA_URL and KAFKA_PORT. Inside one of the implemented routes, the application connects with the following and consumes/produces messages to it:
<to id="route_id_replaced_for_question" uri="kafka:{{env:KAFKA_URL:{{KAFKA_URL}}}}:{{env:KAFKA_PORT:{{KAFKA_PORT}}}}?topic={{env:KAFKA_TOPIC:{{topic_to_connect_to}}}}&kerberosRenewJitter=1&kerberosRenewWindowFactor=1&{{kafka.ssl.props}}&{{kafka.encryption.props}}"/>
Additionally, we connect to an SFTP server, which I am also trying to mock using Citrus. That follows a similar pattern where:
<from id="_requestFile" uri="{{env:FTP_URL:{{FTP_URL}}}}:{{env:FTP_PORT:{{FTP_PORT}}}}/{{env:FTP_FILE_DIR:{{FTP_FILE_DIR}}}}/?delete=true&fileExist=Append&password={{env:FTP_PASSWORD:{{FTP_PASSWORD}}}}&delay={{env:FTP_POLL_DELAY:{{FTP_POLL_DELAY}}}}&username={{env:FTP_USER:{{FTP_USER}}}}"/>
Inside of my integration test, I have configured Citrus' EmbeddedKafkaServer with the following:
@Bean
public EmbeddedKafkaServer embeddedKafkaServer() {
    return new EmbeddedKafkaServerBuilder()
            .kafkaServerPort(9092)
            .topics("topic_to_connect_to")
            .build();
}
and a Citrus SFTP server with:
@Bean
public SftpServer sftpServer() {
    return CitrusEndpoints.sftp()
            .server()
            .port(2222)
            .autoStart(true)
            .user("username")
            .password("passwordtoconnectwith")
            .userHomePath("filedirectory/filestoreadfrom")
            .build();
}
Ideally, my test will connect to the mock SFTP server and push a file to the appropriate directory using Citrus; my application will then read in the file, process it, and publish to a topic on the embedded Kafka cluster, where the test can verify it.
I was under the impression that I would set KAFKA_PORT to 9092 and KAFKA_URL to localhost, as well as FTP_URL to localhost and FTP_PORT to 2222 (amongst the other properties needed) inside of my properties file, but that does not seem to connect me to the embedded cluster or the SFTP server.
What piece of the puzzle am I missing to have my Spring Boot application connect to both of these mocked instances and run its business logic processing from there?
I resolved this issue: it was due to using a very old version of Kafka (1.0.0 or older), which was missing some of the methods that are called when Citrus attempts to build new topics. If someone encounters a similar problem using Citrus, I recommend starting by evaluating the version of Kafka your service is on and determining whether it needs to be updated.
For the SFTP connection, the server or client was not being autowired, and therefore never started.
In order to control the number of threads on the main embedded Jetty server, I load an EmbeddedServletContainerCustomizer using the @Component annotation. I'm using a different port for the management context, so it seems that a separate Jetty instance is started for that port. How can I apply the same customization to that port / Jetty instance?
Regards
Bruno
Just found out how to solve my issue.
Using the application properties
server.jetty.acceptors
server.jetty.selectors
I can control the number of threads on both ports. It is not very customisable, but it gets the job done. For the main service port, configuring an EmbeddedServletContainerCustomizer will override these settings.
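For example, in application.properties (the values are illustrative, not recommendations):

server.jetty.acceptors=2
server.jetty.selectors=4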
Regards