Spring boot Artemis embedded broker behaviour - spring-boot

Morning all,
I've been struggling lately with the spring-boot-artemis-starter.
My understanding of its spring-boot support was the following:
set spring.artemis.mode=embedded and, like Tomcat, Spring Boot will instantiate a broker reachable through TCP (server mode). The following command should be successful: nc -zv localhost 61616
set spring.artemis.mode=native and Spring Boot will only configure the JmsTemplate according to the spring.artemis.* properties (client mode).
The client mode works just fine with a standalone artemis server on my machine.
Unfortunately, I could never manage to reach the TCP port in server mode.
I would be grateful if somebody confirms my understanding of the embedded mode.
Thank you for your help.
After some digging I noted that the implementation provided out of the box by spring-boot-starter-artemis uses an org.apache.activemq.artemis.core.remoting.impl.invm.InVMAcceptorFactory acceptor. I'm wondering whether that is the root cause (again, I'm by no means an expert).
But it appears that there is a way to customize the Artemis configuration.
Therefore I tried the following configuration, without any luck:
@SpringBootApplication
public class MyBroker {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(MyBroker.class, args);
    }

    @Autowired
    private ArtemisProperties artemisProperties;

    @Bean
    public ArtemisConfigurationCustomizer artemisConfigurationCustomizer() {
        return configuration -> {
            try {
                configuration.addAcceptorConfiguration("netty", "tcp://localhost:" + artemisProperties.getPort());
            } catch (Exception e) {
                throw new RuntimeException("Failed to add netty transport acceptor to artemis instance");
            }
        };
    }
}

You just have to add a Connector and an Acceptor to your Artemis Configuration. With the Spring Boot Artemis starter, Spring creates a Configuration bean which will be used for the EmbeddedJMS configuration. You can see this in the ArtemisEmbeddedConfigurationFactory class, where an InVMAcceptorFactory is set for the configuration. You can edit this bean and change Artemis behaviour through a custom ArtemisConfigurationCustomizer bean, which is picked up by Spring's auto-configuration and applied to the Configuration.
An example config class for your Spring Boot application:
import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptorFactory;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;
import org.springframework.boot.autoconfigure.jms.artemis.ArtemisConfigurationCustomizer;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ArtemisConfig implements ArtemisConfigurationCustomizer {

    @Override
    public void customize(org.apache.activemq.artemis.core.config.Configuration configuration) {
        configuration.addConnectorConfiguration("nettyConnector", new TransportConfiguration(NettyConnectorFactory.class.getName()));
        configuration.addAcceptorConfiguration(new TransportConfiguration(NettyAcceptorFactory.class.getName()));
    }
}

My coworker and I had the exact same problem, as the documentation on this link (chapter Artemis Support) says nothing about adding an individual ArtemisConfigurationCustomizer - which is sad, because we realized that without this customizer our Spring Boot app would start and act as if everything was okay, but actually it wouldn't do anything.
We also realized that without the customizer the application.properties file is not being loaded, so no matter what host or port you specified there, it had no effect.
After adding the customizer as shown in the two examples above, it worked without a problem.
Here are some results we figured out:
The application.properties file was only loaded after configuring an ArtemisConfigurationCustomizer
You don't need broker.xml anymore with an embedded Spring Boot Artemis client
Many examples showing the use of Artemis use an "in-vm" protocol, while we just wanted to use the Netty TCP protocol, so we needed to add it to the configuration
For me the most important parameter was pub-sub-domain, as I was using topics and not queues. If you are using topics, this parameter needs to be set to true or the JmsListener won't read the messages.
See this Stack Overflow question: jmslistener-usage-for-publish-subscribe-topic
When using a @JmsListener it uses a DefaultMessageListenerContainer, which extends JmsDestinationAccessor, which by default has pubSubDomain set to false. When this property is false it is operating on a queue. If you want to use topics you have to set this property's value to true.
In application.properties:
spring.jms.pub-sub-domain=true
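As an illustration (not from the original answer): with that property set, a listener like the following sketch reads from a topic instead of a queue. The topic name and payload type are placeholders, not something used in the project above.
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class TopicListener {

    // With spring.jms.pub-sub-domain=true this destination is treated as a topic,
    // not a queue; "my.example.topic" is just a placeholder name.
    @JmsListener(destination = "my.example.topic")
    public void onMessage(String payload) {
        System.out.println("Received from topic: " + payload);
    }
}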
If anyone is interested in the full example I have uploaded it to my github:
https://github.com/CorDharel/SpringBootArtemisServerExample

The embedded mode starts the broker as part of your application. There is no network protocol available with such a setup; only InVM calls are allowed. The auto-configuration exposes the necessary pieces you can tune, though I am not sure you can actually have a TCP/IP channel with the embedded mode.
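To illustrate what "only InVM" means (my addition, not part of the original answer): a client running inside the same JVM can still connect over the in-VM transport, while nothing listens on TCP unless a netty acceptor is added as shown in the other answers. The bean below is only a sketch.
import javax.jms.Connection;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class InVmSmokeCheck {

    // Runs inside the same JVM as the embedded broker, which is the only place
    // the default InVM acceptor is reachable from.
    @Bean
    CommandLineRunner inVmCheck() {
        return args -> {
            // "vm://0" is the default in-VM server id; "tcp://localhost:61616"
            // would fail here without an extra netty acceptor.
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("vm://0");
            try (Connection connection = factory.createConnection()) {
                connection.start();
            }
        };
    }
}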

Related

Configuration Bean in Quarkus

This is regarding the CDI spec of Quarkus. I would like to understand: is there a configuration bean for Quarkus? How does one do any sort of configuration in Quarkus?
If I get it right, the original question is about @Configuration classes that can contain @Bean definitions. If so, then CDI producer methods and fields annotated with @javax.enterprise.inject.Produces are the corresponding alternative.
Application configuration is a completely different question though, and Jay is right that the Quarkus configuration reference is the ultimate source of information ;-).
First of all, it is important to read how the CDI spec of Quarkus differs from Spring.
Please refer to this guide:
https://quarkus.io/guides/cdi-reference
The takeaway from this guide is that @Produces is the Quarkus alternative to a @Configuration bean.
Let us take an example of a library that requires configuration through code: the Microsoft Azure IoT Service Client.
public class IotHubConfiguration {

    @ConfigProperty(name = "iothub.device.connection.string")
    String connectionString;

    private static final Logger LOG = Logger.getLogger(IotHubConfiguration.class);

    @Produces
    public ServiceClient getIot() throws URISyntaxException, IOException {
        LOG.info("Inside Service Client bean");
        if (connectionString == null) {
            LOG.info("Connection String is null");
            throw new RuntimeException("IOT CONNECTION STRING IS NULL");
        }
        ServiceClient serviceClient = new ServiceClient(connectionString, IotHubServiceClientProtocol.AMQPS);
        serviceClient.open();
        LOG.info("opened Service Client Successfully");
        return serviceClient;
    }
}
For all libraries vertically integrated with Quarkus, application.properties can be used, and you then get a driver object for that broker/database directly through @Inject in your @ApplicationScoped/@Singleton bean, as sketched after the list below. So why is that?
To simplify and unify configuration
To make sure no code is required for configuring anything, i.e. database config, broker config, Quarkus config, etc.
This drastically reduces the amount of code written for configuration and also the unit tests needed to cover that code.
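For completeness, here is a sketch of the consuming side, building on the IotHubConfiguration producer above (class name is made up; on newer Quarkus versions the javax.* imports become jakarta.*, and the ServiceClient import depends on your Azure SDK version):
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import com.microsoft.azure.sdk.iot.service.ServiceClient;

@ApplicationScoped
public class IotHubMessageSender {

    // Provided by the @Produces method in IotHubConfiguration above
    @Inject
    ServiceClient serviceClient;

    // ... use serviceClient here
}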
Let us take an example where a Kafka producer configuration needs to be added in application.properties:
kafka.bootstrap.servers=${KAFKA_BROKER_URL:localhost:9092}
mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:input_topic1}
mp.messaging.outgoing.incoming_kafka_topic_test.connector=smallrye-kafka
mp.messaging.outgoing.incoming_kafka_topic_test.value.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.incoming_kafka_topic_test.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.incoming_kafka_topic_test.health-readiness-enabled=true
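A rough sketch of the producing side for that channel (not from the original answer; class name is made up, and depending on the SmallRye version the Channel/Emitter types may live in io.smallrye.reactive.messaging.annotations instead):
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;

@ApplicationScoped
public class KafkaForwarder {

    // Channel name matches the mp.messaging.outgoing.* keys above
    @Inject
    @Channel("incoming_kafka_topic_test")
    Emitter<String> emitter;

    public void forward(String payload) {
        // Sends the payload to the Kafka topic configured for this channel
        emitter.send(payload);
    }
}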
For full blown project reference: https://github.com/JayGhiya/QuarkusExperiments/tree/initial_version_v1/KafkaProducerQuarkus
Quarkus References for Config:
https://quarkus.io/guides/config-reference
Example for reactive sql config: https://quarkus.io/guides/reactive-sql-clients
Now let us talk about a bonus feature that Quarkus provides which improves the developer experience by at least an order of magnitude: profile-driven development and testing.
Quarkus provides three profiles:
dev - Activated when in development mode (i.e. quarkus:dev)
test - Activated when running tests
prod - The default profile when not running in development or test mode
Let us just say that in the given example you wanted to have different topics for development and different topics for production. Let us achieve that!
%dev.mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:input_topic1}
%prod.mp.messaging.outgoing.incoming_kafka_topic_test.topic=${KAFKA_INPUT_TOPIC_FOR_IOT_HUB:prod_topic}
That is how simple it is. This is extremely useful in cases where your deployments run with SSL-enabled brokers/databases and for development purposes you have insecure local ones. This is a game changer.

Spring Integration: Automated integration tests with embedded Broker?

Is it possible to start a broker, say in memory, that can be used to execute automated test cases using Spring Integration MQTT?
I've tried achieving this with ActiveMQ (following https://docs.spring.io/spring-boot/docs/current/reference/html/boot-features-messaging.html) but somehow didn't succeed; does anyone have a short working example?
It's not Spring Integration's (Spring Boot's) responsibility to provide an embedded broker for such a protocol. If there were one, we could consider implementing an auto-configuration for it, similar to what we do for embedded RDBMS, JMS and MongoDB. You really need to consult the ActiveMQ documentation.
Looks like we can do it like this in the test class:
private static BrokerService activeMQBroker;

...

@BeforeClass
public static void setup() throws Exception {
    activeMQBroker = new BrokerService();
    activeMQBroker.addConnector("mqtt://localhost:1883");
    activeMQBroker.setPersistent(false);
    activeMQBroker.setUseJmx(false);
    activeMQBroker.start();
}
I didn't try it, but this is exactly what I do to test against STOMP.
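Not from the original answer, but a possible smoke test in the same test class, using the Eclipse Paho client (which the Spring Integration MQTT adapters use under the hood); the topic and client id are arbitrary:
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

@Test
public void brokerAcceptsMqttConnections() throws Exception {
    // Talks to the mqtt://localhost:1883 connector opened in setup()
    MqttClient client = new MqttClient("tcp://localhost:1883", "test-client", new MemoryPersistence());
    client.connect();
    client.publish("test/topic", "hello".getBytes(), 0, false);
    client.disconnect();
}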

Spring Boot 2 integrate Brave MySQL-Integration into Zipkin

I am trying to integrate the Brave MySQL instrumentation into my Spring Boot 2.x service to automatically let its interceptor enrich my traces with spans for MySQL queries.
The current Gradle dependencies are the following:
compile 'io.zipkin.zipkin2:zipkin:2.4.5'
compile('io.zipkin.reporter2:zipkin-sender-okhttp3:2.3.1')
compile('io.zipkin.brave:brave-instrumentation-mysql:4.14.3')
compile('org.springframework.cloud:spring-cloud-starter-zipkin:2.0.0.M5')
I already configured Sleuth successfully to send traces concerning HTTP requests to my Zipkin server, and now I wanted to add some spans for each MySQL query the service does.
The TracingConfiguration is this:
@Configuration
public class TracingConfiguration {

    /** Configuration for how to send spans to Zipkin */
    @Bean
    Sender sender() {
        return OkHttpSender.create("https://myzipkinserver.com/api/v2/spans");
    }

    /** Configuration for how to buffer spans into messages for Zipkin */
    @Bean
    AsyncReporter<Span> spanReporter() {
        return AsyncReporter.create(sender());
    }

    @Bean
    Tracing tracing(Reporter<Span> spanListener) {
        return Tracing.newBuilder()
                .spanReporter(spanReporter())
                .build();
    }
}
The query interceptor works properly, but my problem now is that the spans are not added to the existing trace; each is added to a new one.
I guess it's because of the creation of a new sender/reporter in the configuration, but I have not been able to reuse the existing one created by the Spring Boot auto-configuration.
That would moreover remove the need to redundantly define the Zipkin URL (because it is already defined for Zipkin in my application.yml).
I already tried autowiring the Zipkin reporter into my bean, but all I got was a SpanReporter - and the Brave Tracer builder requires a Reporter<Span>.
Do you have any advice for me how to properly wire things up?
Please use the latest snapshots. Sleuth in the latest snapshots uses Brave internally, so the integration will be extremely simple.
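Not part of the original answer, but roughly what that simplification can look like with a Brave-based Sleuth release: drop the hand-rolled Sender/Reporter/Tracing beans and let Sleuth wire everything from configuration. The property names below assume spring-cloud-starter-zipkin 2.x, the JDBC parameter is the one documented by brave-instrumentation-mysql for the MySQL 5.x driver, and the datasource URL is only an example:
spring.zipkin.base-url=https://myzipkinserver.com
spring.sleuth.sampler.probability=1.0
spring.datasource.url=jdbc:mysql://localhost:3306/mydb?statementInterceptors=brave.mysql.TracingStatementInterceptor
With the custom TracingConfiguration removed, the interceptor's spans should join the trace Sleuth already manages instead of starting new ones.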

Implement multi-tenanted application with Keycloak and springboot

We use 'KeycloakSpringBootConfigResolver' to read the Keycloak configuration from the Spring Boot properties file instead of keycloak.json.
Now there are guidelines for implementing a multi-tenant application using Keycloak by overriding 'KeycloakConfigResolver', as specified in http://www.keycloak.org/docs/2.3/securing_apps_guide/topics/oidc/java/multi-tenancy.html.
The steps defined there can only be used with keycloak.json.
How can we adapt this to a Spring Boot application such that Keycloak properties are read from the Spring Boot properties file and multi-tenancy is achieved?
You can access the Keycloak config you specified in your application.yaml (or application.properties) if you inject org.keycloak.representations.adapters.config.AdapterConfig into your component.
@Component
public class MyKeycloakConfigResolver implements KeycloakConfigResolver {

    private final AdapterConfig keycloakConfig;

    public MyKeycloakConfigResolver(org.keycloak.representations.adapters.config.AdapterConfig keycloakConfig) {
        this.keycloakConfig = keycloakConfig;
    }

    @Override
    public KeycloakDeployment resolve(OIDCHttpFacade.Request request) {
        // make a defensive copy before changing the config
        AdapterConfig currentConfig = new AdapterConfig();
        BeanUtils.copyProperties(keycloakConfig, currentConfig);
        // changes stuff here for example compute the realm
        return KeycloakDeploymentBuilder.build(currentConfig);
    }
}
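To make the "compute the realm" step concrete, here is one purely illustrative option; the hostname-per-tenant scheme is my own assumption, not something from the original answer:
@Override
public KeycloakDeployment resolve(OIDCHttpFacade.Request request) {
    AdapterConfig currentConfig = new AdapterConfig();
    BeanUtils.copyProperties(keycloakConfig, currentConfig);
    // e.g. tenant1.example.com -> realm "tenant1"
    String host = request.getHeader("Host");
    currentConfig.setRealm(host.split("\\.")[0]);
    return KeycloakDeploymentBuilder.build(currentConfig);
}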
After several trials, the only feasible option for Spring Boot is to have multiple instances of the Spring Boot application running with different Spring 'profiles'.
Each application instance can have its own Keycloak properties (as it is under a different profile), including the realm.
The challenge is to have an upgrade path for all instances for version upgrades/bug fixes, but I guess there are multiple strategies already implemented (not part of this discussion).
There is a ticket regarding this problem: https://issues.jboss.org/browse/KEYCLOAK-4139?_sscc=t
Comments for that ticket also talk about possible workarounds intervening in servlet setup of the service used (Tomcat/Undertow/Jetty), which you could try.
Note that the documentation you linked in your first comment is super outdated!

Port binding for Spring RESTful Web Service

I've used the Building a RESTful Web Service tutorial to build a web service for my purpose. It was quite easy, but now I'm struggling to configure the port the web service should be bound to. There is no config.xml or anything to configure in this project. Can anyone give me a hint about how to configure the web service's port?
As these details might be helpful: I'm starting the server with the code below, containing the @EnableAutoConfiguration annotation. The configuration is done by Spring Boot.
@ComponentScan
@EnableAutoConfiguration
public class ServerStarter {

    public static void main(String[] args) {
        SpringApplication.run(ServerStarter.class, args);
    }
}
To configure the ports you can set the server.port and management.port properties by writing the following in the "application.properties" file located under "src/main/resources":
server.port = 9000
management.port = 9001
Similarly if you need to specify the management address:
management.address = 127.0.0.1
There are more properties you can set for the server and management server. See spring-boot's documentation: http://docs.spring.io/spring-boot/docs/current-SNAPSHOT/reference/htmlsingle/
For a quick-and-dirty solution you can use command-line arguments (source):
--server.port=9000
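For example (the jar name is just a placeholder):
java -jar target/my-service.jar --server.port=9000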
