Embedded Kafka Broker IP Not Resolving in Property File - spring

I'm facing an issue where my Kafka ProducerConfig is getting an invalid bootstrap.servers value because the @PropertySource in my unit test isn't resolving the spring.embedded.kafka.brokers property. When I dump my producer config to the logs, I get the following:
acks = 0
batch.size = 10000
bootstrap.servers = [${spring.embedded.kafka.brokers}]
...
Clearly, the property isn't getting resolved. Suppose I have the following embedded Kafka test.
@EmbeddedKafka
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@RunWith(SpringRunner.class)
public class EmbeddedKafkaTest {

    @Value("${spring.embedded.kafka.brokers}")
    private String embeddedKafkaBrokers;

    @Test
    public void test() {}

    @SpringBootConfiguration
    @PropertySource("classpath:kafkaTestProperties.properties")
    @EnableAutoConfiguration
    static class EmbeddedKafkaTestConfiguration {
    }
}
and my kafkaTestProperties.properties file is as follows:
embedded-kafka-brokers=${spring.embedded.kafka.brokers}
...
embedded-kafka-brokers will eventually be provided to Kafka's ProducerConfig through some underlying autoconfiguration.
Interestingly, the embeddedKafkaBrokers instance field in the test class does contain the broker IPs set by the embedded Kafka broker.
I've concluded there's an issue with the property-source loading order: @EmbeddedKafka isn't setting the broker IP system property in time for kafkaTestProperties.properties to resolve it. This problem arose after porting our code base to Spring Boot 2 and Spring Cloud Finchley SR2, where the Spring Kafka APIs have been upgraded.
I've tried removing @SpringBootTest and wrapping the test() code in a SpringApplicationBuilder, to no avail.
Any advice on how I can fix this? Perhaps I can leverage @AutoConfigureAfter or @Order? Is there a way to order the loading of property sources?

It looks like your placeholder is not valid.
Try to replace:
embedded-kafka-brokers=${"spring.embedded.kafka.brokers"}
with:
embedded-kafka-brokers=${spring.embedded.kafka.brokers}
and provide it to the producer using a class annotated with @ConfigurationProperties, or with @Value("${embedded-kafka-brokers}"), as in the sketch below.
documentation link
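A rough sketch of the @Value route, bypassing the intermediate embedded-kafka-brokers key entirely; the EmbeddedKafkaTestConfig class name and the serializer choices are illustrative, not taken from the question:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringBootConfiguration;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@SpringBootConfiguration
@EnableAutoConfiguration
class EmbeddedKafkaTestConfig {

    // Resolved the same way as the embeddedKafkaBrokers field in the test class,
    // i.e. after @EmbeddedKafka has made the property available to the Environment.
    @Value("${spring.embedded.kafka.brokers}")
    private String brokers;

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}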

Related

How can I write a @DataJpaTest in a Spring Boot application that uses RSA keys loaded through configuration?

I followed the Spring Boot guides to set up JWTs using spring-boot-starter-oauth2-resource-server, so I have references to the RSA keys used for signing the JWTs in my application.yml:
rsa:
  privateKey: classpath:certs/private.pem
  publicKey: classpath:certs/public.pem
This worked great until I tried to write a @DataJpaTest for testing the service layer of the application.
@DataJpaTest
public class FooTest {

    @Test
    public void test() {
        System.out.println();
    }
}
That test fails with the error:
org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [java.lang.String] to type [java.security.interfaces.RSAPublicKey]
at app//org.springframework.boot.context.properties.bind.BindConverter.convert(BindConverter.java:118)
at app//org.springframework.boot.context.properties.bind.BindConverter.convert(BindConverter.java:100)
at app//org.springframework.boot.context.properties.bind.BindConverter.convert(BindConverter.java:92)
at app//org.springframework.boot.context.properties.bind.Binder.bindProperty(Binder.java:459)
at app//org.springframework.boot.context.properties.bind.Binder.bindObject(Binder.java:403)
at app//org.springframework.boot.context.properties.bind.Binder.bind(Binder.java:343)
I know the converters are available somewhere, because the same test runs fine with @SpringBootTest. I think I found them in org.springframework.security.converter.RsaKeyConverters, but I don't know how to register them so they're picked up during the @DataJpaTest.
I don't think those converters should be necessary for the test - FooTest has no dependencies right now.
How can I either set up the @DataJpaTest to work with this recommended Spring Boot project setup, or change the project setup so that I can easily write and run @DataJpaTest tests?
RsaKeyConverters is configured (indirectly) by SecurityAutoConfiguration. @SpringBootTest considers all of the auto-configurations and can therefore set up RsaKeyConverters properly, but @DataJpaTest only considers the auto-configurations related to testing the JPA parts, so it ignores SecurityAutoConfiguration.
You can use @ImportAutoConfiguration to tell it to also consider SecurityAutoConfiguration. After that, however, you will find that although it is considered, it is only enabled when Spring Boot starts in servlet mode, whereas @DataJpaTest starts it in 'none' mode. So you also need to set spring.main.web-application-type=servlet to force servlet mode.
Making these two configuration changes should solve your problem:
@ImportAutoConfiguration(classes = SecurityAutoConfiguration.class)
@DataJpaTest(properties = "spring.main.web-application-type=servlet")
public class FooTest {
}
Have you tried to @MockBean Converter<String, RSAPublicKey> rsaPublicKeyConver; in your test class?
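If anyone wants to try that suggestion, it would look roughly like this; this is only a sketch of the comment above, and since the error occurs during property binding rather than bean injection it may not be sufficient on its own:

import java.security.interfaces.RSAPublicKey;

import org.junit.jupiter.api.Test;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.core.convert.converter.Converter;

@DataJpaTest
public class FooTest {

    // Replaces any String -> RSAPublicKey converter bean with a mock for the test.
    @MockBean
    private Converter<String, RSAPublicKey> rsaPublicKeyConverter;

    @Test
    void test() {
    }
}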

Spring @JmsListener non-compile-time replacement

Looking for an alternative JMS destination configuration. The most common way of configuring the destination and listener is with the annotation:
@JmsListener(destination = destination)
public void fetchMessage(final Message message) {
However, the destination attribute has to be a compile-time constant. How can I replace it with a property that is resolved only at runtime?
You can use a property placeholder for the destination:
@JmsListener(destination = "${queue.name}")
Then set the property in some property source available to the application (e.g. application.properties or application.yml for a boot app, or a system property -Dqueue.name=foo for any app).
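A minimal sketch of the runtime-resolved version; the queue.name key and QueueMessageListener class are illustrative, and javax.jms is assumed (use jakarta.jms on newer stacks):

import javax.jms.Message;

import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class QueueMessageListener {

    // "${queue.name}" is resolved from the Environment at startup, so the
    // destination can come from application.properties, a system property,
    // or any other registered property source.
    @JmsListener(destination = "${queue.name}")
    public void fetchMessage(final Message message) {
        System.out.println("Received: " + message);
    }
}

For example, with queue.name=foo in application.properties or -Dqueue.name=foo on the command line.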

ZooKeeper latest value is not reloaded without restarting server

I am using ZooKeeper with Spring Boot, and in my application.properties file I am using the properties shown below:
minio.url=${minio.connection-string}
minio.access.key=${minio.accesskey}
where the minio.connection-string and minio.accesskey values come from ZooKeeper znode data. I am using minio.url and minio.access.key in another Spring Boot bean, as shown below:
@Configuration
@RefreshScope
public class MinioClientConf {

    @Value("${minio.url}")
    private String minioUrl;

    @Value("${minio.access.key}")
    private String minioKey;

    // ...
When I start my Spring Boot application everything works, but when I change a ZooKeeper node value the change is not reflected in the bean values without restarting the server.
My problem is that I want to reload the latest ZooKeeper value without restarting the server. I have also tried the @RefreshScope annotation, but it didn't work.
Instead of that, use @ConfigurationProperties:
@ConfigurationProperties("minio")
public class MinioClientConf {

    private String minioUrl;
    private String minioKey;

    // ...
For more details click here
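A rough sketch of what that could look like for the keys in the question; the MinioProperties class and nested Access type are illustrative assumptions. The idea is that Spring Cloud rebinds @ConfigurationProperties beans when the environment changes (e.g. on a refresh triggered by the ZooKeeper config watcher or the refresh endpoint), so the getters should return the updated values without a restart:

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConfigurationProperties(prefix = "minio")
public class MinioProperties {

    // Bound from minio.url; minio.access.key binds to the nested Access type.
    private String url;
    private Access access = new Access();

    public String getUrl() { return url; }
    public void setUrl(String url) { this.url = url; }

    public Access getAccess() { return access; }
    public void setAccess(Access access) { this.access = access; }

    public static class Access {
        private String key;
        public String getKey() { return key; }
        public void setKey(String key) { this.key = key; }
    }
}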

SpringBoot and setting system properties

I am developing a Spring Boot app that connects to an Oracle Coherence cluster. The app, acting as a Coherence node, needs some JVM properties to connect to the cluster. I wanted to set these properties (taken from a properties file) in a Spring Boot custom starter. I set the system properties in a @Configuration class and I can read them without a problem, but Coherence doesn't see one property, tangosol.pof.enabled, and it fails. When I call System.getProperty(..) the property is there, but it does not work (the property is not seen by Coherence).
It works when I @Autowire the configuration class into some other bean in my application, or when I put this configuration class in my application rather than in the Spring Boot starter.
This is my code:
Configuration class in starter (it works)
@Configuration
@PropertySource("coherence-app.properties")
public class EnvironmentConfig {

    public static final Logger LOGGER = LoggerFactory.getLogger(EnvironmentConfig.class);

    public EnvironmentConfig(Environment environment, ConfigurableApplicationContext ctx) {
        Properties props = new Properties();
        ConfigsHelper.TANGOSOL_COHERENCE_CONFIGS.stream()
                .forEach(prop -> props.setProperty(prop, environment.getProperty(prop)));
        if (!ConfigsHelper.setTangosolCoherenceProperties(props)) {
            LOGGER.error("Can't set coherence props");
            System.exit(1);
        }
    }
}
Then, when I try to connect to the cluster:
CacheFactory.ensureCluster();
I get this error:
2017-08-09 13:17:56.049/7.494 Oracle Coherence GE 12.2.1.0.2 (thread=Cluster, member=n/a): Failed to deserialize the config Message received from member 1. This member is configured with the following serializer: com.tangosol.io.DefaultSerializer {loader=sun.misc.Launcher$AppClassLoader@18b4aac2}, which may be incompatible with the serializer configured by the sender.
java.io.StreamCorruptedException: invalid type: 100
    at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2477)
    at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2464)
    at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:66)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService$ServiceJoining.read(ClusterService.CDB:14)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.onNotify(ClusterService.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:45)
    at java.lang.Thread.run(Thread.java:748)
This is connected with the tangosol.pof.enabled property (the node behaves as if it were false).
What is strange is that when I call
System.getProperty("tangosol.pof.enabled")
before ensureCluster() it returns "true".
This code works properly when it's not in the starter; the configuration bean is then initialized earlier and everything works.
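For reference, the autowiring workaround mentioned above looks roughly like this; the CoherenceClusterStarter class is made up for illustration, and it only forces the bean ordering rather than fixing the starter itself:

import com.tangosol.net.CacheFactory;
import org.springframework.stereotype.Component;

@Component
public class CoherenceClusterStarter {

    // Injecting EnvironmentConfig forces Spring to construct it first, so the
    // tangosol.* system properties are already set when the cluster is joined.
    public CoherenceClusterStarter(EnvironmentConfig environmentConfig) {
        CacheFactory.ensureCluster();
    }
}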
Do you have any idea how to solve this problem?

Spring boot Artemis embedded broker behaviour

Morning all,
I've been struggling lately with spring-boot-starter-artemis.
My understanding of its spring-boot support was the following:
set spring.artemis.mode=embedded and, like Tomcat, Spring Boot will instantiate a broker reachable through TCP (server mode). The following command should then succeed: nc -zv localhost 61616
set spring.artemis.mode=native and Spring Boot will only configure the JmsTemplate according to the spring.artemis.* properties (client mode).
The client mode works just fine with a standalone Artemis server on my machine.
Unfortunately, I could never manage to reach the TCP port in server mode.
I would be grateful if somebody could confirm my understanding of the embedded mode.
Thank you for your help.
After some digging I noted that the implementation provided out of the box by spring-boot-starter-artemis uses the org.apache.activemq.artemis.core.remoting.impl.invm.InVMAcceptorFactory acceptor. I'm wondering whether that's the root cause (again, I'm by no means an expert).
But it appears that there is a way to customize the Artemis configuration.
Therefore I tried the following configuration, without any luck:
@SpringBootApplication
public class MyBroker {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(MyBroker.class, args);
    }

    @Autowired
    private ArtemisProperties artemisProperties;

    @Bean
    public ArtemisConfigurationCustomizer artemisConfigurationCustomizer() {
        return configuration -> {
            try {
                configuration.addAcceptorConfiguration("netty", "tcp://localhost:" + artemisProperties.getPort());
            } catch (Exception e) {
                throw new RuntimeException("Failed to add netty transport acceptor to artemis instance");
            }
        };
    }
}
You just have to add a Connector and an Acceptor to your Artemis Configuration. With the Spring Boot Artemis starter, Spring creates a Configuration bean which is used for the EmbeddedJMS configuration. You can see this in the ArtemisEmbeddedConfigurationFactory class, where an InVMAcceptorFactory is set on the configuration. You can edit this bean and change Artemis behaviour through a custom ArtemisConfigurationCustomizer bean, which will be picked up by the Spring auto-configuration and applied to the Configuration.
An example config class for your Spring Boot application:
import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptorFactory;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;
import org.springframework.boot.autoconfigure.jms.artemis.ArtemisConfigurationCustomizer;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ArtemisConfig implements ArtemisConfigurationCustomizer {

    @Override
    public void customize(org.apache.activemq.artemis.core.config.Configuration configuration) {
        configuration.addConnectorConfiguration("nettyConnector", new TransportConfiguration(NettyConnectorFactory.class.getName()));
        configuration.addAcceptorConfiguration(new TransportConfiguration(NettyAcceptorFactory.class.getName()));
    }
}
My coworker and I had the exact same problem, as the documentation on this link (chapter Artemis Support) says nothing about adding an individual ArtemisConfigurationCustomizer. That is unfortunate, because we realized that without this customizer our Spring Boot app would start and act as if everything was okay, but it actually wouldn't do anything.
We also realized that without the customizer the settings in the application.properties file were not being applied, so no matter what host or port you specified there, it had no effect.
After adding the customizer as shown in the two examples above, it worked without a problem.
Here are some of the things we figured out:
It only loaded the application.properties settings after configuring an ArtemisConfigurationCustomizer.
You don't need broker.xml anymore with an embedded Spring Boot Artemis client.
Many examples showing the use of Artemis use an "in-vm" protocol, while we just wanted to use the Netty TCP protocol, so we needed to add it to the configuration.
For me the most important parameter was pub-sub-domain, as I was using topics and not queues. If you are using topics, this parameter needs to be set to true or the @JmsListener won't read the messages.
See this page: stackoverflow jmslistener-usage-for-publish-subscribe-topic
When using a @JmsListener it uses a DefaultMessageListenerContainer, which extends JmsDestinationAccessor, which by default has pubSubDomain set to false. When this property is false it operates on a queue. If you want to use topics, you have to set this property's value to true.
In application.properties:
spring.jms.pub-sub-domain=true
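The same flag can also be set programmatically on a listener container factory; a sketch, assuming the broker's ConnectionFactory bean is available and javax.jms is in use:

import javax.jms.ConnectionFactory;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;

@Configuration
public class JmsTopicConfig {

    // Equivalent to spring.jms.pub-sub-domain=true: containers built by this
    // factory subscribe to topics instead of consuming from queues.
    @Bean
    public DefaultJmsListenerContainerFactory topicListenerContainerFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(connectionFactory);
        factory.setPubSubDomain(true);
        return factory;
    }
}

A listener would then reference it with @JmsListener(destination = "...", containerFactory = "topicListenerContainerFactory").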
If anyone is interested in the full example I have uploaded it to my github:
https://github.com/CorDharel/SpringBootArtemisServerExample
The embedded mode starts the broker as part of your application. There is no network protocol available with such a setup; only InVM calls are allowed. The auto-configuration exposes the necessary pieces you can tune, though I am not sure you can actually have a TCP/IP channel with the embedded mode.
