I want to create a performance test in Gatling that checks my Spring server. I use the RSocket protocol (over WebSocket). I don't know how to establish a connection and send data over this protocol. Does Gatling support it? If it isn't possible with Gatling, how can I simulate a specific number of connections to my server (maybe with a different library)?
Server code
@Bean
public Mono<RSocketRequester> rSocketRequester(
        RSocketStrategies rSocketStrategies,
        RSocketProperties rSocketProps) {
    return RSocketRequester.builder()
            .rsocketStrategies(rSocketStrategies)
            .connectWebSocket(getURI(rSocketProps));
}

private URI getURI(RSocketProperties rSocketProps) {
    return URI.create(String.format("ws://localhost:%d%s",
            rSocketProps.getServer().getPort(), rSocketProps.getServer().getMappingPath()));
}
properties file:
spring.rsocket.server.port=8080
spring.rsocket.server.transport=websocket
spring.rsocket.server.mapping-path=/rsocket
Example endpoints where I want to send messages:
@ConnectMapping
void joinToGame(RSocketRequester rSocketRequester) {
}

@MessageMapping("exampleEndpoint")
public void disconnect() {
}
maven dependency:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-rsocket</artifactId>
</dependency>
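As far as I know, Gatling has no built-in RSocket support, so if you cannot script this in Gatling, one alternative is to drive the load from plain Java using Spring's own RSocketRequester. A minimal sketch against the server above; the connection count, the blocking connect, and the client class are illustrative only:

import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import org.springframework.messaging.rsocket.RSocketRequester;
import org.springframework.messaging.rsocket.RSocketStrategies;

public class ConnectionLoadSketch {
    public static void main(String[] args) {
        List<RSocketRequester> clients = new ArrayList<>();
        RSocketStrategies strategies = RSocketStrategies.create();
        for (int i = 0; i < 100; i++) { // 100 = number of connections to simulate
            RSocketRequester requester = RSocketRequester.builder()
                    .rsocketStrategies(strategies)
                    .connectWebSocket(URI.create("ws://localhost:8080/rsocket"))
                    .block(); // each connect triggers the @ConnectMapping handler
            requester.route("exampleEndpoint")
                    .send() // fire-and-forget to @MessageMapping("exampleEndpoint")
                    .subscribe();
            clients.add(requester); // keep the reference so the connection stays open
        }
    }
}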
Currently I have a Spring Cloud Function which consumes from a topic and publishes to another topic. Now I have multiple topics and need to publish each message to one of them based on certain checks inside the function. How can I achieve this? Here is the current implementation.
#Bean("producerBean")
public Function<Message<SourceMessage>, Message<SinkMessage>> producerBean(SinkService<SourceMessage> sinkService) {
return sinkService::processMessage;
}
#Service("SinkService")
public class SinkService<T> {
public Message<SinkMessage> processMessage(Message<SourceMessage> message) {
log.info("Message consumed at {} \n{}", message.getHeaders().getTimestamp(), message.getPayload());
try {
if (message.getPayload().isManaged()) {
/*
Need to add one more check here.
if (type==2)
send to topic1
else if(type==4)
send to topic2
else
Just log the type, do not send to any topic.
*/
Message<SinkMessage> output = new GenericMessage<>(new SinkMessage());
output.getPayload().setPayload(message.getPayload());
return output;
}
} catch (Exception exception) {
exception.printStackTrace();
}
return null;
}
}
application.properties
spring.cloud.stream.kafka.binder.brokers=${bootstrap.servers}
spring.cloud.stream.kafka.binder.configuration.enable.idempotence=false
spring.cloud.stream.binders.test_binder.type=kafka
spring.cloud.stream.bindings.producerBean.binder=test_binder
spring.cloud.stream.bindings.producerBean-in-0.destination=${input-destination}
spring.cloud.stream.bindings.producerBean-in-0.group=${input-group}
spring.cloud.stream.bindings.producerBean-out-0.destination=topic1
spring.cloud.stream.bindings.producerBean-out-1.destination=topic2
pom.xml
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
<version>3.2.5</version>
</dependency>
You can use StreamBridge with the Kafka topic name, and Spring Cloud will bind it automatically at runtime. That approach also auto-creates the topic if it does not exist; you can turn that off.
@Autowired
private StreamBridge streamBridge;

public void sendDynamically(Message<?> message, String topicName) {
    streamBridge.send(topicName, message);
}
https://docs.spring.io/spring-cloud-stream/docs/current/reference/html/spring-cloud-stream.html#_streambridge_and_dynamic_destinations
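Putting this together with the check from the question, the function could become a Consumer that routes through StreamBridge instead of returning a single output. A minimal sketch under the question's model; the getType() accessor on SourceMessage is a hypothetical stand-in for however the type is carried:

@Bean("producerBean")
public Consumer<Message<SourceMessage>> producerBean(SinkService sinkService) {
    return sinkService::processMessage;
}

@Service
public class SinkService {

    private static final org.slf4j.Logger log = org.slf4j.LoggerFactory.getLogger(SinkService.class);

    private final StreamBridge streamBridge;

    public SinkService(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    public void processMessage(Message<SourceMessage> message) {
        if (!message.getPayload().isManaged()) {
            return;
        }
        SinkMessage output = new SinkMessage();
        output.setPayload(message.getPayload());
        int type = message.getPayload().getType(); // hypothetical accessor
        if (type == 2) {
            streamBridge.send("topic1", new GenericMessage<>(output));
        } else if (type == 4) {
            streamBridge.send("topic2", new GenericMessage<>(output));
        } else {
            // just log the type, do not send to any topic
            log.info("Unhandled type {}, not forwarding", type);
        }
    }
}

With a Consumer there is no producerBean-out-0 binding anymore; the destinations come from the StreamBridge calls.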
I have a Spring Boot application with Prometheus Pushgateway using Micrometer, mainly based on this tutorial:
https://luramarchanjo.tech/2020/01/05/spring-boot-2.2-and-prometheus-pushgateway-with-micrometer.html
pom.xml has following related dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-core</artifactId>
</dependency>
<dependency>
<groupId>io.micrometer</groupId>
<artifactId>micrometer-registry-prometheus</artifactId>
</dependency>
<dependency>
<groupId>io.prometheus</groupId>
<artifactId>simpleclient_pushgateway</artifactId>
<version>0.16.0</version>
</dependency>
And application.properties file has:
management.metrics.export.prometheus.pushgateway.enabled=true
management.metrics.export.prometheus.pushgateway.shutdown-operation=PUSH
management.metrics.export.prometheus.pushgateway.baseUrl=localhost:9091
It is working fine locally in the dev environment, connecting to Pushgateway without any TLS. In our CI environment, the Prometheus Pushgateway has TLS enabled. How do I configure TLS support and certs in this Spring Boot application?
Due to the usage of TLS, you will need to customize a few Spring classes:
HttpConnectionFactory -> PushGateway -> PrometheusPushGatewayManager
An HttpConnectionFactory is used by Prometheus' PushGateway to create a secure connection; that PushGateway is then used to create the PrometheusPushGatewayManager.
You will need to implement Prometheus' HttpConnectionFactory interface. I'm assuming you are able to create a valid javax.net.ssl.SSLContext object (if not, more details at the end¹).
HttpConnectionFactory example:
public class MyTlsConnectionFactory implements io.prometheus.client.exporter.HttpConnectionFactory {

    private final SSLContext sslContext;

    public MyTlsConnectionFactory(SSLContext sslContext) {
        this.sslContext = sslContext;
    }

    @Override
    public HttpURLConnection create(String hostUrl) throws IOException {
        URL url = new URL(hostUrl);
        HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
        connection.setSSLSocketFactory(sslContext.getSocketFactory());
        return connection;
    }
}
PushGateway and PrometheusPushGatewayManager:
@Bean
public HttpConnectionFactory tlsConnectionFactory(SSLContext sslContext) {
    // sslContext provided as a bean, e.g. built via sslcontext-kickstart (see the end of this answer)
    return new MyTlsConnectionFactory(sslContext);
}

@Bean
public PushGateway pushGateway(HttpConnectionFactory connectionFactory) throws MalformedURLException {
    String url = "https://localhost:9091"; // replace with your props
    PushGateway pushGateway = new PushGateway(new URL(url));
    pushGateway.setConnectionFactory(connectionFactory);
    return pushGateway;
}

@Bean
public PrometheusPushGatewayManager tlsPrometheusPushGatewayManager(PushGateway pushGateway,
                                                                    CollectorRegistry registry) {
    // fill in the other params accordingly (the important one is pushGateway!)
    return new PrometheusPushGatewayManager(
            pushGateway,
            registry,
            Duration.of(15, ChronoUnit.SECONDS),
            "some-job-id",
            null,
            PrometheusPushGatewayManager.ShutdownOperation.PUSH
    );
}
¹ If you face difficulty building the SSLContext in Java code, I recommend studying the library https://github.com/Hakky54/sslcontext-kickstart and https://github.com/Hakky54/mutual-tls-ssl (which shows how to apply it with different client libraries).
It then becomes possible to generate the SSLContext in Java code in a clean way, e.g.:
String keyStorePath = "client.jks";
char[] keyStorePassword = "password".toCharArray();
SSLFactory sslFactory = SSLFactory.builder()
.withIdentityMaterial(keyStorePath, keyStorePassword)
.build();
javax.net.ssl.SSLContext sslContext = sslFactory.getSslContext();
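To wire that into the beans above, you can expose the result as a bean that the tlsConnectionFactory bean consumes (a sketch; the keystore path and password are placeholders):

@Bean
public SSLContext sslContext() {
    SSLFactory sslFactory = SSLFactory.builder()
            .withIdentityMaterial("client.jks", "password".toCharArray())
            .build();
    return sslFactory.getSslContext();
}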
Finally, if you need to set up a local Prometheus + TLS environment for testing purposes, I recommend following this post:
https://smallstep.com/hello-mtls/doc/client/prometheus
@ReactiveFeignClient(name = "service.b", configuration = CustomConfiguration.class)
public interface FeingConfiguration {
    @PostMapping("/api/students/special")
    public Flux<Student> getAllStudents(@RequestBody Flux<SubjectStudent> lista);
}
Help: how can I add basic authentication to the header for the service service.b?
I have the CustomConfiguration class, but it doesn't work; I get 401 authorization failed.
@Configuration
public class CustomConfiguration {
    @Bean
    public BasicAuthRequestInterceptor basic() {
        return new BasicAuthRequestInterceptor("user", "user");
    }
}
Looks like you are trying to use feign-reactive (https://github.com/Playtika/feign-reactive) to implement your REST clients. I am also using it for one of my projects, and it looks like this library does not have an out-of-the-box way to specify basic auth credentials; at least there is no way to do this declaratively. So I didn't find a better way than to abandon the auto-configuration via @ReactiveFeignClient and start configuring the reactive feign clients manually. This way you can manually add an "Authorization" header to all outgoing requests. So, given this client definition:
public interface FeignClient {
    @PostMapping("/api/students/special")
    public Flux<Student> getAllStudents(@RequestBody Flux<SubjectStudent> lista);
}
Add the following configuration class to your Spring context, replacing the username, password and service URL with your own data:
@Configuration
public class FeignClientConfiguration {
    @Bean
    FeignClient feignClient() {
        return WebReactiveFeign
                .<FeignClient>builder()
                .addRequestInterceptor(request -> {
                    request.headers().put(
                            "Authorization",
                            Collections.singletonList(
                                    "Basic " + Base64.getEncoder().encodeToString(
                                            "username:password".getBytes(StandardCharsets.ISO_8859_1))));
                    return request;
                })
                .target(FeignClient.class, "service-url");
    }
}
Note that this API for manual configuration of reactive feign clients can differ between versions of the feign-reactive library. Also note that this approach has a major drawback: if you start creating beans for your feign clients manually, you lose the main advantage of Feign, the ability to write REST clients declaratively with just a few lines of code. E.g. if you want to use the above client with some sort of client-side load-balancing mechanism, like Ribbon/Eureka or Ribbon/Kubernetes, you will also need to configure that manually.
You can use a direct interceptor:
@Configuration
class FeignClientConfiguration {
    @Bean
    fun reactiveHttpRequestInterceptor(): ReactiveHttpRequestInterceptor {
        return ReactiveHttpRequestInterceptor { request: ReactiveHttpRequest ->
            request.headers()["Authorization"] = listOf("...") // insert data from SecurityContextHolder
            Mono.just(request)
        }
    }
}
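Combining the two answers, a Java version of the same interceptor bean carrying static basic auth credentials could look like this (a sketch; it assumes ReactiveHttpRequestInterceptor is the functional interface used above, and the exact API may differ between feign-reactive versions):

@Configuration
public class BasicAuthFeignConfiguration {

    @Bean
    public ReactiveHttpRequestInterceptor basicAuthInterceptor() {
        String token = Base64.getEncoder().encodeToString(
                "username:password".getBytes(StandardCharsets.ISO_8859_1));
        return request -> {
            // same mutation as in the first answer's interceptor
            request.headers().put("Authorization", Collections.singletonList("Basic " + token));
            return Mono.just(request);
        };
    }
}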
I'm trying to connect a Spring Boot STOMP server with multiple SockJS clients while offline, but I get the warning
Websocket is closed before the connection is established
followed by
GET http://192.168.1.45:8080/socket/327/si5osugt/jsonp?c=jp.a3xdefl net::ERR_ABORTED 404 (Not Found)
I'm using Spring Boot version 2.1.2 with the spring-boot-starter-websocket package on the backend side, and on the frontend side I'm using Angular 6 with sockjs-client version 1.3.0. Frontend and backend are both running on port 8080.
I get these errors when I turn the internet off: with no internet connection, the iframe tries to reach https://cdn.jsdelivr.net/npm/sockjs-client@1/dist/sockjs.js.
I worked around that by configuring the STOMP server on the backend to set the client library via .setClientLibraryUrl to an absolute path that is reachable offline:
registry.addEndpoint("/socket").setAllowedOrigins("*").withSockJS().setClientLibraryUrl("http://192.168.1.45/dist/sockjs.min.js");
and I get a 200 OK on http://192.168.1.45/dist/sockjs.min.js.
Spring Boot:
WebSocketConfiguration (extends AbstractWebSocketMessageBrokerConfigurer)
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
    registry.addEndpoint("/socket")
            .setAllowedOrigins("*")
            .withSockJS().setClientLibraryUrl("http://192.168.1.45/dist/sockjs.min.js");
}

@Override
public void configureMessageBroker(MessageBrokerRegistry registry) {
    MessageBrokerRegistry messageBrokerRegistry = registry.setApplicationDestinationPrefixes("/app");
    messageBrokerRegistry.enableSimpleBroker("/test", "/test2");
}
WebSocketController
private final SimpMessagingTemplate template;

@Autowired
WebSocketController(SimpMessagingTemplate template) {
    this.template = template;
}

@MessageMapping("/send/message")
public void onReceivedMessage(String destination, String message) {
    this.template.convertAndSend(destination, message);
}

public void convertAndSend(String url, Object o) {
    this.template.convertAndSend(url, o);
}
Angular 6:
TestComponet
ngAfterViewInit() {
let ws = new SockJS('http://192.168.1.45:8080/socket');
this.stompClient = Stomp.over(ws);
let that = this;
that.stompClient.subscribe("/test", (message) => {
if (message.body) {
console.log(message.body);
}
});
that.stompClient.subscribe("/test2", (message) => {
if (message.body) {
console.log(message.body);
}
});
}
I thought it would work by just pointing the SockJS client lib to an offline-reachable path, but I get the warning
Websocket is closed before the connection is established
followed by
"GET http://192.168.1.45:8080/socket/327/si5osugt/jsonp?c=jp.a3xdefl net::ERR_ABORTED 404 (Not Found)"
The library works perfectly fine with an internet connection, but I need it to work in both situations, online and offline.
I had the same issue, and I fixed it by removing SockJS.
So now I'm using only STOMP over WebSocket.
Changes in the Spring Boot service (WebSocketConfiguration):
registry.addEndpoint("/justStomp").setAllowedOrigins("*");
I removed the .withSockJS() and .setClientLibraryUrl(../sockjs.min.js) calls.
Changes in my JavaScript code to connect to the WebSocket:
const stompClient = Stomp.client(`ws://localhost:8080/justStomp`);
stompClient.heartbeat.outgoing = 0;
stompClient.heartbeat.incoming = 0;
stompClient.connect({ name: 'test' }, frame => this.stompSuccessCallBack(frame, stompClient), err => this.stompFailureCallBack(err));
Instead of using Stomp.over(sockjs), I use the Stomp.client method to connect directly to the WebSocket URL.
I have RabbitMQ with the STOMP plugin in the background, and this only works properly with the two heartbeat settings; see RabbitMQ Web STOMP without SockJS.
I tried to implement an application with CQRS and event sourcing using the Axon Framework. I implemented the command side and the query side as separate microservices and replicated (scaled up) the query microservice. I use RabbitMQ as the message broker. When the command side publishes an event, it does not reach all query microservice instances; it is delivered in a round-robin way. How can I update all instances at the same time?
Here are my dependencies:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-amqp</artifactId>
</dependency>
<dependency>
<groupId>org.axonframework</groupId>
<artifactId>axon-amqp</artifactId>
<version>${axon.version}</version>
</dependency>
<dependency>
<groupId>org.axonframework</groupId>
<artifactId>axon-spring-boot-starter</artifactId>
<version>${axon.version}</version>
</dependency>
These are my configs on the command side:
@Bean
public Exchange exchange() {
    return ExchangeBuilder.fanoutExchange("SeatReserveEvents").build();
}

@Bean
public Queue queue() {
    return QueueBuilder.durable("SeatReserveEvents").build();
}

@Bean
public Binding binding() {
    return BindingBuilder.bind(queue()).to(exchange()).with("*").noargs();
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(exchange());
    admin.declareQueue(queue());
    admin.declareBinding(binding());
}
This is application.yml
axon:
amqp:
exchange: SeatReserveEvents
These are the query-side configurations:
@Bean
public SpringAMQPMessageSource statisticsQueue(Serializer serializer) {
    return new SpringAMQPMessageSource(new DefaultAMQPMessageConverter(serializer)) {
        @RabbitListener(queues = "SeatReserveEvents")
        @Override
        public void onMessage(Message arg0, Channel arg1) throws Exception {
            super.onMessage(arg0, arg1);
        }
    };
}
This is the handler:
@Component
@ProcessingGroup("statistics")
public class EventLoggingHandler {

    @EventHandler
    protected void on(SeatResurvationCreateEvent event) {
        System.err.println(event);
    }

    @EventHandler
    protected void on(SeatReservationUpdateEvent event) {
        System.err.println(event);
    }
}
This is the application.yml:
axon:
eventhandling:
processors:
statistics.source: statisticsQueue
I'd say this is more an AMQP/RabbitMQ configuration question than an Axon Framework specific one. That said, you'd want to set up RabbitMQ to do Pub/Sub rather than round-robin, as described in the tutorial here.
I do, however, have another, more Axon Framework-specific response in mind.
Why immediately publish your events on a queue, if you could also pull the events from the store directly? You'd have TrackingEventProcessors on the query side of your application, which pull events from the event store as they get appended by the command side of your application.
That's how a monolithic version of an Axon Framework application incorporating CQRS would initially look anyway. Hence the simplest next step to split that CQRS application into a command and a query side would be to leave the way of receiving events as is, without adding the queue in between.
If you've got specific requirements to publish over a queue, however, or you just prefer a queue instead of letting the query applications pull from the event store directly, please disregard this comment and refer back to the RabbitMQ tutorial.
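For completeness, a sketch of that query-side setup, assuming both services share the same event store and using the "statistics" processing group from the question (mode: tracking is the Axon Spring Boot property that switches a processor to pulling from the store instead of a subscribing source):

axon:
  eventhandling:
    processors:
      statistics:
        mode: tracking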
We need to change the RabbitMQ configuration so that events published from the command side reach every instance. For that, we have to change the publisher-side configuration as below:
@Bean
public FanoutExchange fanoutExchange() {
    return new FanoutExchange("SeatReserveEvents");
}

@Autowired
public void configure(AmqpAdmin admin) {
    admin.declareExchange(fanoutExchange());
}
Next, on the subscriber side, we have to change the bean as below:
@Bean
public SpringAMQPMessageSource statisticsQueue(Serializer serializer) {
    return new SpringAMQPMessageSource(new DefaultAMQPMessageConverter(serializer)) {
        @RabbitListener(bindings = @QueueBinding(
                value = @Queue,
                exchange = @Exchange(value = "SeatReserveEvents", type = ExchangeTypes.FANOUT),
                key = "orderRoutingKey"))
        @Override
        public void onMessage(Message arg0, Channel arg1) throws Exception {
            super.onMessage(arg0, arg1);
        }
    };
}
Now we can replicate the consumer across any number of instances. This is the publish/subscribe pattern, and the exchange type is fanout.