I'm developing a Spring Boot app that connects to an Oracle Coherence cluster. The app, as a Coherence node, needs some JVM properties to connect to the cluster. I wanted to set these properties (taken from a properties file) in a custom Spring Boot starter. I set the system properties in an @Configuration class and I can read them without a problem, but Coherence doesn't see one property, tangosol.pof.enabled, and fails. When I call System.getProperty(..) the property is there, but it has no effect (the property is not seen by Coherence).
It works when I @Autowire the configuration class into some other bean in my application, or when I put the configuration class in my application itself rather than in the Spring Boot starter.
This is my code:
The configuration class in the starter (it runs and sets the properties):
@Configuration
@PropertySource("coherence-app.properties")
public class EnvironmentConfig {

    public static final Logger LOGGER = LoggerFactory.getLogger(EnvironmentConfig.class);

    public EnvironmentConfig(Environment environment, ConfigurableApplicationContext ctx) {
        Properties props = new Properties();
        ConfigsHelper.TANGOSOL_COHERENCE_CONFIGS
                .forEach(prop -> props.setProperty(prop, environment.getProperty(prop)));
        if (!ConfigsHelper.setTangosolCoherenceProperties(props)) {
            LOGGER.error("Can't set coherence props");
            System.exit(1);
        }
    }
}
Then, when I try to connect to the cluster:
CacheFactory.ensureCluster();
I get this error:
2017-08-09 13:17:56.049/7.494 Oracle Coherence GE 12.2.1.0.2 (thread=Cluster, member=n/a): Failed to deserialize the config Message received from member 1. This member is configured with the following serializer: com.tangosol.io.DefaultSerializer {loader=sun.misc.Launcher$AppClassLoader@18b4aac2}, which may be incompatible with the serializer configured by the sender.
java.io.StreamCorruptedException: invalid type: 100
    at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2477)
    at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2464)
    at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:66)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService$ServiceJoining.read(ClusterService.CDB:14)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.deserializeMessage(Grid.CDB:20)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.onNotify(ClusterService.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:45)
    at java.lang.Thread.run(Thread.java:748)
This is connected with the tangosol.pof.enabled=false property.
What is strange is that when I call
System.getProperty("tangosol.pof.enabled")
before ensureCluster(), it returns true.
This code works properly when it's not in the starter; in that case the configuration bean is initialized earlier and everything works.
Do you have any idea how to solve this problem?
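Not a definitive fix, but one way to rule out the timing issue: a starter can set system properties before any bean (or Coherence class) is touched by registering an EnvironmentPostProcessor instead of doing it in an @Configuration constructor. A minimal sketch, assuming the same property names as above (the class name is made up):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;

// Hypothetical post-processor: runs before the application context is refreshed,
// so the system property exists before any Coherence class is initialized.
public class CoherencePropertiesPostProcessor implements EnvironmentPostProcessor {

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        String pofEnabled = environment.getProperty("tangosol.pof.enabled");
        if (pofEnabled != null) {
            System.setProperty("tangosol.pof.enabled", pofEnabled);
        }
    }
}
It would be registered in the starter's META-INF/spring.factories:
org.springframework.boot.env.EnvironmentPostProcessor=com.example.CoherencePropertiesPostProcessor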
I am upgrading a Spring Boot project from an old version (2.2.9.RELEASE + Spring Cloud Hoxton.SR12) to v2.6.1 + Spring Cloud 2021.0.0.
The issue I am currently hitting is with trust-store-enabled Eureka clients. In the old version, all Eureka-registering applications would use
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
and be annotated with
@EnableDiscoveryClient
To use a custom trust store, I'd then include the following beans in a configuration class:
@Bean
public DiscoveryClient.DiscoveryClientOptionalArgs getTrustStoredEurekaClient(SSLContext sslContext) {
    DiscoveryClient.DiscoveryClientOptionalArgs args = new DiscoveryClient.DiscoveryClientOptionalArgs();
    args.setSSLContext(sslContext);
    return args;
}

@Bean
public SSLContext sslContext() throws Exception {
    return new SSLContextBuilder()
            .loadTrustMaterial(new File(trustStore).toURI().toURL(), trustStorePassword.toCharArray())
            .build();
}
using import com.netflix.discovery.DiscoveryClient;
Following the upgrade, any microservice that attempts to use this custom trust store fails to start, with the error below:
***************************
APPLICATION FAILED TO START
***************************

Description:

Field optionalArgs in org.springframework.cloud.netflix.eureka.EurekaClientAutoConfiguration$RefreshableEurekaClientConfiguration required a bean of type 'com.netflix.discovery.AbstractDiscoveryClientOptionalArgs' that could not be found.

The injection point has the following annotations:
    - @org.springframework.beans.factory.annotation.Autowired(required=true)

Action:

Consider defining a bean of type 'com.netflix.discovery.AbstractDiscoveryClientOptionalArgs' in your configuration.
It doesn't seem to matter if I try to autowire a separate bean of type DiscoveryClientOptionalArgs and set the SSL context on that; I am currently unable to resolve this.
I could solve this by setting the following in the gateway's application.properties:
eureka.client.tls.enabled=true
eureka.client.tls.key-store=file:<path-to-key-store>
eureka.client.tls.key-store-password=<password>
eureka.client.tls.key-store-type=PKCS12
eureka.client.tls.key-password=<password>
eureka.client.tls.trust-store=file:<path-to-trust-store>
eureka.client.tls.trust-store-password=<password>
What's not clear to me is why a key store needs to be set in addition to the trust store (as noted above, it was only necessary to configure a trust store for the SSL context of the DiscoveryClient override in the previous versions using Zuul), which suggests I haven't fully understood what's actually happening here.
How to view autoconfigure log output during spring boot server start
I have created a Spring Boot application. It uses a shared library (a Spring Boot jar pulled in via a Maven dependency). The shared library's classes are loaded via
META-INF/spring.factories
where I have listed the classes from the library. The job of the shared library is to read the Vault role id and Vault secret id values from application.properties, call a REST API to fetch secrets from Vault, and then set the fetched values back as system properties:
for (Map.Entry<String, String> entry : allSecrets.entrySet()) {
    System.setProperty(entry.getKey(), entry.getValue());
}
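For context (and as the fix below mentions), the shared library's spring.factories entry was of this shape; the class name here is a made-up placeholder:
org.springframework.boot.env.EnvironmentPostProcessor=com.myorg.abc.VaultSecretsPostProcessor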
Everything is working as expected, but I am not able to see logs from the shared library in my application's logs.
The shared library's package structure is com.myorg.abc; my Spring Boot app's package structure is com.myorg.xyz.
I tried the following in application.properties:
logging.level.root=DEBUG
logging.level.com.myorg.xyz=DEBUG
logging.level.com.myorg.abc=DEBUG
logging.level.org.springframework.boot.autoconfigure.logging=DEBUG
I am able to get logs only from my application, not from the shared library. But when I change the shared library's Logger.error to System.out, the message does show up in my application. How can I view the shared library's logs in my application?
Spring Boot initializes logging at least three times. The first happens when SpringApplication is loaded: it creates an SLF4J Logger before anything in Spring is accessed, which causes whatever logging implementation you have chosen to initialize. By default, it will use the logging configuration in the Spring jar. With Log4j 2 you can override this by setting log4j.configurationFile to the location of your desired configuration, either as a system property or in a log4j.component.properties file.
Everything Spring does will be logged using this configuration until it initializes the logging configuration again, which is controlled by bootstrap.yml. Finally, your application's logging configuration is initialized, configured either from application.yml or again from bootstrap.yml.
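As a concrete illustration of that Log4j 2 override (the configuration file name here is an assumption):
# log4j.component.properties, placed at the root of the classpath
log4j.configurationFile=log4j2-shared.xml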
I replaced org.springframework.boot.env.EnvironmentPostProcessor with org.springframework.context.ApplicationListener in spring.factories and it fixed the issue: I was able to get logs from the shared library in the invoking application.
spring.factories
org.springframework.context.ApplicationListener=com.mypackage.MyClassName
MyClassName.java
public class MyClassName implements ApplicationListener<ApplicationPreparedEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(MyClassName.class);

    @Override
    public void onApplicationEvent(ApplicationPreparedEvent applicationPreparedEvent) {
        ConfigurableEnvironment configurableEnvironment = applicationPreparedEvent.getApplicationContext()
                .getEnvironment();
        String roleId = configurableEnvironment.getProperty(Constants.VAULT_ROLE_ID_LITERAL);
        String secretId = configurableEnvironment.getProperty(Constants.VAULT_SECRET_ID_LITERAL);
        ...
        Optional<String> errorMessage = ServiceUtil.validateSystemProperty(roleId, secretId);
        if (!errorMessage.isPresent()) {
            Map<String, String> secret = ServiceUtil.getSecret(roleId, secretId);
            for (Map.Entry<String, String> entry : secret.entrySet()) {
                System.setProperty(entry.getKey(), entry.getValue());
            }
            LOGGER.info("Successfully populated secrets from Vault in system property");
        } else {
            LOGGER.error("Failed to populate secrets from Vault in system property. Error:{}", errorMessage.get());
        }
    }
}
application.properties
logging.level.com.myorg.abc=DEBUG
I'm facing an issue where my Kafka ProducerConfig is getting an invalid bootstrap.servers value because the @PropertySource in my unit test isn't resolving the spring.embedded.kafka.brokers property. When I dump my producer config to the logs, I get the following:
acks = 0
batch.size = 10000
bootstrap.servers = [${spring.embedded.kafka.brokers}]
...
Clearly, the property isn't getting resolved. Suppose I have the following embedded Kafka test.
@EmbeddedKafka
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE)
@RunWith(SpringRunner.class)
public class EmbeddedKafkaTest {

    @Value("${spring.embedded.kafka.brokers}")
    private String embeddedKafkaBrokers;

    @Test
    public void test() {}

    // Nested configuration classes must be static for Spring to pick them up.
    @SpringBootConfiguration
    @PropertySource("classpath:kafkaTestProps.properties")
    @EnableAutoConfiguration
    static class EmbeddedKafkaTestConfiguration {
    }
}
and my kafkaTestProps.properties file is as follows:
embedded-kafka-brokers=${spring.embedded.kafka.brokers}
...
embedded-kafka-brokers will eventually be provided to Kafka's ProducerConfig through some underlying auto-configuration.
Interestingly, the embeddedKafkaBrokers instance field in the test class does contain the broker IPs set by embedded Kafka.
I've concluded there's an issue with the property-source loading order: @EmbeddedKafka isn't setting the broker IP system property in time for kafkaTestProps.properties to resolve it. This problem arose after porting our code base to Spring Boot 2 and Spring Cloud Finchley.SR2, where the Spring Kafka APIs have been upgraded.
I've tried removing @SpringBootTest and wrapping the test() code in a SpringApplicationBuilder, to no avail.
Any advice as to how I can fix this? Perhaps I can leverage @AutoConfigureAfter or @Order? Is there a way to order the loading of property sources?
It looks like your placeholder is not valid.
Try replacing:
embedded-kafka-brokers=${"spring.embedded.kafka.brokers"}
with:
embedded-kafka-brokers=${spring.embedded.kafka.brokers}
and provide it to the producer using a class annotated with @ConfigurationProperties, or using @Value("${embedded-kafka-brokers}").
documentation link
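A minimal sketch of the second option (the class and field names here are illustrative):
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class EmbeddedKafkaBrokerProps {

    // Resolves to the broker list once spring.embedded.kafka.brokers has been set.
    @Value("${embedded-kafka-brokers}")
    private String brokers;

    public String getBrokers() {
        return brokers;
    }
}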
I tried putting <jmxConfigurator/> in my logback configuration file. I am able to connect jconsole to the local JVM running unit tests and interact with the logback MBean. However, when I deploy my web application to a remote WebSphere application server and connect jconsole to that remote JVM, I can't see the logback MBean in the MBeans panel.
As a comparison, the web application is built with Spring Boot, which also registers some MBeans by default, and I can see the Spring Boot MBeans in both scenarios.
I investigated a bit further and found out that logback always gets its MBeanServer instance from ManagementFactory.getPlatformMBeanServer(), while Spring uses different approaches in WebSphere/WebLogic environments.
It appears that in a WebSphere environment, the MBeanServer instance exposed for remote connections is NOT the default platform MBeanServer.
So the question is: how can I register the logback MBean with the WebSphere custom MBeanServer rather than the default platform MBeanServer?
The WebSphere custom MBeanServer is favoured because it is better integrated with WebSphere's security and clustering capabilities.
This is my workaround, done by extending JMXConfigurator.
Just for the record: there is no documentation endorsing such an extension, and I didn't test it with multiple web applications.
This class inherits most behaviour from JMXConfigurator, but it registers to / unregisters from the MBeanServer that is injected by Spring.
@ManagedResource(objectName = AnnotatedJMXConfigurator.NAME, description = "Logback Configuration Management Bean")
@Component
public class AnnotatedJMXConfigurator extends JMXConfigurator {

    public static final String NAME = "xxx.ch.qos.logback.classic:Name=default,Type=ch.qos.logback.classic.jmx.JMXConfigurator";

    private static final LoggerContext CONTEXT = (LoggerContext) LoggerFactory.getILoggerFactory();
    private static final ObjectName OBJECT_NAME;

    static {
        try {
            OBJECT_NAME = new ObjectName(NAME);
        } catch (MalformedObjectNameException e) {
            throw new RuntimeException(e.getMessage(), e);
        }
    }

    @Autowired
    public AnnotatedJMXConfigurator(MBeanServer mbs) {
        super(CONTEXT, mbs, OBJECT_NAME);
    }
}
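How Spring obtains the injected MBeanServer is the other half of the picture. One option in a WebSphere environment is Spring's WebSphereMBeanServerFactoryBean, which looks up WebSphere's AdminService MBeanServer; a sketch (untested here, and whether it fits your setup is an assumption):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jmx.support.WebSphereMBeanServerFactoryBean;

@Configuration
public class MBeanServerConfig {

    // Exposes WebSphere's MBeanServer as a Spring bean so it can be
    // injected into AnnotatedJMXConfigurator above.
    @Bean
    public WebSphereMBeanServerFactoryBean mbeanServer() {
        return new WebSphereMBeanServerFactoryBean();
    }
}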
Morning all,
I've been struggling lately with the spring-boot-artemis-starter.
My understanding of its Spring Boot support was the following:
set spring.artemis.mode=embedded and, like Tomcat, Spring Boot will instantiate a broker reachable through TCP (server mode); the following command should then succeed: nc -zv localhost 61616
set spring.artemis.mode=native and Spring Boot will only configure the JmsTemplate according to the spring.artemis.* properties (client mode; see the properties sketch right below).
The client mode works just fine with a standalone Artemis server on my machine.
Unfortunately, I could never manage to reach the TCP port in server mode.
I would be grateful if somebody could confirm my understanding of the embedded mode.
Thank you for your help.
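For reference, that client (native) setup amounts to something like this in application.properties (the host and port values are assumptions):
spring.artemis.mode=native
spring.artemis.host=localhost
spring.artemis.port=61616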
After some digging I noted that the implementation provided out of the box by spring-boot-starter-artemis uses the org.apache.activemq.artemis.core.remoting.impl.invm.InVMAcceptorFactory acceptor. I'm wondering if that's the root cause (again, I'm by no means an expert).
But it appears that there is a way to customize the Artemis configuration.
I therefore tried the following configuration, without any luck:
@SpringBootApplication
public class MyBroker {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(MyBroker.class, args);
    }

    @Autowired
    private ArtemisProperties artemisProperties;

    @Bean
    public ArtemisConfigurationCustomizer artemisConfigurationCustomizer() {
        return configuration -> {
            try {
                configuration.addAcceptorConfiguration("netty", "tcp://localhost:" + artemisProperties.getPort());
            } catch (Exception e) {
                throw new RuntimeException("Failed to add netty transport acceptor to artemis instance");
            }
        };
    }
}
You just have to add a Connector and an Acceptor to your Artemis Configuration. With the Spring Boot Artemis starter, Spring creates a Configuration bean which is used for the EmbeddedJMS configuration. You can see this in the ArtemisEmbeddedConfigurationFactory class, where an InVMAcceptorFactory is set on the configuration. You can edit this bean and change Artemis behaviour through a custom ArtemisConfigurationCustomizer bean, which will be picked up by Spring auto-configuration and applied to the Configuration.
An example config class for your Spring Boot application:
import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptorFactory;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;
import org.springframework.boot.autoconfigure.jms.artemis.ArtemisConfigurationCustomizer;
import org.springframework.context.annotation.Configuration;
@Configuration
public class ArtemisConfig implements ArtemisConfigurationCustomizer {

    @Override
    public void customize(org.apache.activemq.artemis.core.config.Configuration configuration) {
        configuration.addConnectorConfiguration("nettyConnector",
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        configuration.addAcceptorConfiguration(
                new TransportConfiguration(NettyAcceptorFactory.class.getName()));
    }
}
My coworker and I had the exact same problem. The documentation on this link (chapter Artemis Support) says nothing about adding an individual ArtemisConfigurationCustomizer, which is a pity, because we realized that without this customizer our Spring Boot app would start and act as if everything was okay but actually wouldn't do anything.
We also realized that without the customizer the application.properties file was not being taken into account, so no matter what host or port you mentioned there, it would not count.
After adding the customizer as shown in the two examples, it worked without a problem.
Here are some results we figured out:
It only applied the application.properties settings after configuring an ArtemisConfigurationCustomizer.
You don't need the broker.xml anymore with an embedded Spring Boot Artemis client.
Many examples showing the use of Artemis use the in-vm protocol, while we just wanted to use the Netty TCP protocol, so we needed to add it to the configuration.
For me, the most important parameter was pub-sub-domain, as I was using topics and not queues. If you are using topics, this parameter needs to be set to true or the @JmsListener won't read the messages.
See this page: stackoverflow jmslistener-usage-for-publish-subscribe-topic
When using a @JmsListener it uses a DefaultMessageListenerContainer, which extends JmsDestinationAccessor, which by default has pubSubDomain set to false. When this property is false it operates on a queue. If you want to use topics, you have to set this property's value to true.
In application.properties:
spring.jms.pub-sub-domain=true
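To illustrate (the destination name and class here are made-up examples), with spring.jms.pub-sub-domain=true a listener like the following subscribes to a topic rather than a queue:
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class ExampleTopicListener {

    // With pub-sub-domain=true the listener container treats this destination as a topic.
    @JmsListener(destination = "example.topic")
    public void onMessage(String payload) {
        System.out.println("Received: " + payload);
    }
}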
If anyone is interested in the full example, I have uploaded it to my GitHub:
https://github.com/CorDharel/SpringBootArtemisServerExample
The embedded mode starts the broker as part of your application. No network protocol is available with such a setup; only InVM calls are allowed. The auto-configuration exposes the necessary pieces you can tune, though I am not sure you can actually have a TCP/IP channel with the embedded mode.