MassTransit saga resource not being created

I have two services: one acts as a consumer, the other as a producer. Here are my configurations for each of them.
Producer config
services
    .AddSingleton(KebabCaseEndpointNameFormatter.Instance);
services
    .AddMassTransit(config =>
    {
        config.AddBus(serviceProvider =>
            Bus.Factory.CreateUsingRabbitMq(cfg =>
            {
                cfg.Host(new Uri("amqp://admin:pass@localhost"));
            }));
    })
    .AddMassTransitHostedService();
Consumer config
services
    .AddSingleton(KebabCaseEndpointNameFormatter.Instance);
services
    .AddMassTransit(config =>
    {
        config.AddSagaStateMachine<OrderStateMachine, OrderState>()
            .RedisRepository();
        config.AddBus(serviceProvider =>
            Bus.Factory.CreateUsingRabbitMq(cfg =>
            {
                cfg.Host(new Uri("amqp://admin:pass@localhost"));
                cfg.ReceiveEndpoint("service-5-queue", endpointConfig =>
                {
                    endpointConfig.Consumer<SubmitOrderCommand>();
                });
            }));
    })
    .AddMassTransitHostedService();
Following an online tutorial, when the app launches I should see some queues and exchanges being created, one of which should be an 'order-state' exchange and queue. Unfortunately, that is not the case.
Does anyone using MassTransit have an idea why this is happening?
The logs
[15:56:15 DBG] Declare queue: name: service-5-queue, durable, consumer-count: 0 message-count: 0
[15:56:15 DBG] Declare exchange: name: service-5-queue, type: fanout, durable
[15:56:15 DBG] Declare exchange: name: Messages:ISubmitOrder, type: fanout, durable
[15:56:15 DBG] Bind queue: source: service-5-queue, destination: service-5-queue
[15:56:15 DBG] Bind exchange: source: Messages:ISubmitOrder, destination: service-5-queue
[15:56:15 DBG] Consumer Ok: rabbitmq://localhost/service-5-queue - amq.ctag-X4WuaeOFDCCMcdEXd4EtuA
[15:56:15 DBG] Endpoint Ready: rabbitmq://localhost/service-5-queue
[15:56:15 INF] Bus started: rabbitmq://localhost/
Sending some message triggers the consumer, but the saga doesn't get trigger at any moment, nor it is invoked during some initialization steps or something.
[16:00:08 DBG] Declare exchange: name: Messages:IOrderSubmitted, type: fanout, durable
[16:00:08 DBG] SEND rabbitmq://localhost/Messages:IOrderSubmitted 0cb00000-2327-309c-67c9-08d8ef964ca6 Messages.IOrderSubmitted
[16:00:09 DBG] Create send transport: rabbitmq://localhost/DESKTOPNH4IRSD_Service1_bus_b1ayyybdrhajaxmebdcq9fqbrz?temporary=true
[16:00:09 DBG] Declare exchange: name: DESKTOPNH4IRSD_Service1_bus_b1ayyybdrhajaxmebdcq9fqbrz, type: fanout, auto-delete
[16:00:09 DBG] SEND rabbitmq://localhost/DESKTOPNH4IRSD_Service1_bus_b1ayyybdrhajaxmebdcq9fqbrz?temporary=true 0cb00000-2327-309c-b0bc-08d8ef964d91 Messages.IOrderPreSubmissionOk
[16:00:10 DBG] RECEIVE rabbitmq://localhost/service-5-queue 0cb00000-2327-309c-4f1f-08d8ef9647eb Messages.ISubmitOrder Service5.Handlers.SubmitOrderCommand(00:00:09.6961272)

You should remove the explicit receive endpoint configuration and call ConfigureEndpoints instead. This will create the endpoint for the saga.
services
    .AddMassTransit(config =>
    {
        config.AddSagaStateMachine<OrderStateMachine, OrderState>()
            .RedisRepository();
        config.UsingRabbitMq((context, cfg) =>
        {
            cfg.Host(new Uri("amqp://admin:pass@localhost"));
            cfg.ConfigureEndpoints(context);
        });
    })
    .AddMassTransitHostedService();
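With ConfigureEndpoints in place, the registered endpoint name formatter determines the queue name, so the saga endpoint shows up as order-state (derived from OrderState). A minimal sketch of applying the formatter directly on the configurator, assuming MassTransit 7's SetKebabCaseEndpointNameFormatter extension is available:

services
    .AddMassTransit(config =>
    {
        // Equivalent to the AddSingleton(KebabCaseEndpointNameFormatter.Instance)
        // registration above; ConfigureEndpoints then derives the queue name
        // "order-state" from the OrderState saga instance.
        config.SetKebabCaseEndpointNameFormatter();

        config.AddSagaStateMachine<OrderStateMachine, OrderState>()
            .RedisRepository();

        config.UsingRabbitMq((context, cfg) =>
        {
            cfg.Host(new Uri("amqp://admin:pass@localhost"));
            cfg.ConfigureEndpoints(context);
        });
    })
    .AddMassTransitHostedService();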

Related

RabbitMQ dead letter queue not delaying message

I have set up RabbitMQ. I want to retry messages 10 seconds after they fail, but the way I have set it up, the message is not being delayed; it comes back to the queue immediately. I want it to wait 10 seconds before being sent back to main_queue.
Below is my code. I am using the Bunny Ruby gem.
connection = Bunny.new('url_for_rabbitmq', verify_peer: true)
connection.start
channel = connection.create_channel
# Creating 2 Exchanges (One Main exchange, one retry exchange)
exchange = channel.direct('main_exchange')
retry_exchange = channel.direct('retry_exchange')
# Creating 2 Queue (One Main queue, one retry queue)
queue = channel.queue('main_queue', durable: true, arguments: { 'x-dead-letter-exchange' => retry_exchange.name })
queue.bind(exchange, routing_key: 'foo')
queue.bind(retry_exchange, routing_key: 'foo') # This one pushes the message directly to the main queue without waiting 10 seconds.
retry_queue = channel.queue('retry_queue', durable: true, arguments: { 'x-message-ttl' => 10_000, 'x-dead-letter-exchange' => retry_exchange.name })
retry_queue.bind(retry_exchange, routing_key: 'foo')
If I change this line (retry_exchange to exchange):
retry_queue = channel.queue('retry_queue', durable: true, arguments: { 'x-message-ttl' => 10_000, 'x-dead-letter-exchange' => retry_exchange.name })
to this:
retry_queue = channel.queue('retry_queue', durable: true, arguments: { 'x-message-ttl' => 10_000, 'x-dead-letter-exchange' => exchange.name })
then it works, but the message comes from main_exchange, and I want it to come from retry_exchange. How can I achieve this?
This is how I solved the problem:
connection = Bunny.new('url_for_rabbitmq', verify_peer: true)
connection.start
channel = connection.create_channel
# Creating 2 Exchanges (One Main exchange, one retry exchange)
exchange = channel.direct('main_exchange')
retry_exchange = channel.direct('retry_exchange')
# Creating 2 Queue (One Main queue, one retry queue)
retry_queue = channel.queue('retry_queue', durable: true, arguments: { 'x-message-ttl' => 10_000, 'x-dead-letter-exchange' => retry_exchange.name })
retry_queue.bind(retry_exchange, routing_key: 'foo')
retry_queue.bind(retry_exchange, routing_key: retry_queue.name)
queue = channel.queue('main_queue', durable: true, arguments: { 'x-dead-letter-exchange' => retry_exchange.name, 'x-dead-letter-routing-key' => retry_queue.name })
queue.bind(exchange, routing_key: 'foo')
queue.bind(exchange, routing_key: retry_queue.name)
Basically, I needed to add 'x-dead-letter-routing-key' => retry_queue.name to the main queue, and remove a couple of unnecessary bindings from it, such as queue.bind(retry_exchange, routing_key: 'foo').
Now a message arrives at the main queue; if it fails, it goes to the retry queue. Before going to the retry queue, the old routing key foo is replaced with the new routing key retry_queue.name. The message stays in the retry queue for 10 seconds and then comes back to the main queue for a retry.
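For reference, a minimal consumer sketch under the topology above that exercises the dead-letter path by rejecting without requeueing (process is a hypothetical message handler):

# Rejecting without requeue dead-letters the message to retry_exchange;
# it then waits out the TTL in retry_queue before cycling back to main_queue.
queue.subscribe(manual_ack: true, block: false) do |delivery_info, _properties, payload|
  begin
    process(payload) # hypothetical handler
    channel.ack(delivery_info.delivery_tag)
  rescue StandardError
    channel.nack(delivery_info.delivery_tag, false, false) # multiple = false, requeue = false
  end
end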

How to hot reload a federation gateway in NestJS

Problem
In a federated Nest app, a gateway collects the schemas from the other services and forms a complete graph. The question is: how do you re-run the schema collection after a sub-schema has changed?
Current Workaround
Restarting the gateway solves the problem, but it does not seem like an elegant solution.
Other Resources
Apollo Server supports managed federation, which essentially inverts the dependency between the gateway and the services. Sadly, I couldn't find anything relating it to NestJS.
When configuring a gateway application with NestJS that is already integrated with Apollo Studio, you need not define any serviceList in GraphQLGatewayModule. This is how your module initialization should look:
GraphQLGatewayModule.forRootAsync({
  useFactory: async () => ({
    gateway: {},
    server: {
      path: '/graphql',
    },
  }),
})
The following environment variables should be declared on the machine hosting your gateway application:
APOLLO_KEY: "service:<graphid>:<hash>"
APOLLO_SCHEMA_CONFIG_DELIVERY_ENDPOINT: "https://uplink.api.apollographql.com/"
After deploying a federated GraphQL service, you may need to run the Apollo/Rover CLI service:push command, as below, to update the schema. This writes to the schema registry, which then gets pushed to the uplink URL that the gateway polls periodically:
npx apollo service:push --graph=<graph id> --key=service:<graph id>:<hash> --variant=<environment name> --serviceName=<service name> --serviceURL=<URL of your service with /graphql path> --endpoint=<URL of your service with /graphql path>
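With the newer Rover CLI, the equivalent publish looks roughly like this (hedged: exact flags depend on your Rover version, and the ./schema.graphql path is an assumed location for your SDL file):

npx @apollo/rover subgraph publish <graph id>@<environment name> --schema ./schema.graphql --name <service name> --routing-url <URL of your service with /graphql path>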
You can add a pollIntervalInMs option to the supergraphSdl configuration. The gateway will then automatically poll the services at that interval.
@Module({
imports: [
GraphQLModule.forRootAsync<ApolloGatewayDriverConfig>({
driver: ApolloGatewayDriver,
useFactory: async () => ({
server: {
path: '/graphql',
cors: true
},
gateway: {
supergraphSdl: new IntrospectAndCompose({
subgraphs: [
{ name: 'example-service', url: 'http://localhost:8081/graphql' },
],
pollIntervalInMs: 15000,
})
},
})
})
],
})
export class AppModule {}

NestJS Kafka implementation

I've read the NestJS microservice and Kafka documentation, but I couldn't figure some of it out. I'd be very thankful if you could help me out.
As the docs say, I have to create a microservice in the main.ts file as follows:
const app = await NestFactory.createMicroservice<MicroserviceOptions>(AppModule, {
  transport: Transport.KAFKA,
  options: {
    client: {
      brokers: ['localhost:9092'],
    }
  }
});
await app.listen(() => console.log('app started'));
Then there is a KafkaModule file like this:
@Module({
  imports: [
    ClientsModule.register([
      {
        name: 'HERO_SERVICE',
        transport: Transport.KAFKA,
        options: {
          client: {
            clientId: 'hero',
            brokers: ['localhost:9092'],
          },
          consumer: {
            groupId: 'hero-consumer'
          }
        }
      },
    ]),
  ]
})
export class KafkaModule implements OnModuleInit {
  constructor(@Inject('HERO_SERVICE') private readonly clientService: ClientKafka) {}

  async onModuleInit() {
    await this.clientService.connect();
  }
}
The first thing I can't figure out is the use of the first parameter of createMicroservice. (I passed AppModule and KafkaModule and both worked correctly, knowing that KafkaModule is imported in AppModule.)
The other thing is that, from what I understood, the microservice configuration in main.ts is used to subscribe to the topics used in MessagePattern or EventPattern decorators, and the Kafka client described in the KafkaModule is used to send messages to different topics.
The problem is, if what I said earlier is true, why does the client use a default groupId (when none is specified) to work as a consumer? The strange thing is that I couldn't find a way to receive any message from any topic using the client.
What I'm doing right now is using different group IDs in each file so they won't have any conflicts.
The first parameter of createMicroservice guides how the consumer connects to Kafka when you want to consume a message from a specific topic.
Example: we want to get messages from topic test01.
How do we declare it?
import { Controller } from '@nestjs/common'
import { MessagePattern, Payload } from '@nestjs/microservices'

@Controller('sync')
export class SyncController {
  @MessagePattern('test01')
  handleTopicTest01(@Payload() message: Sync): any {
    // Handle your message here
  }
}
The second block is used as a producer, not a consumer. When the application wants to send a message to a specific topic, the client supports this.
@Get()
sayHello() {
  return this.clientModule.send('say.hello', 'hello world')
}
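As for receiving messages through the client: ClientKafka uses its consumer group (hence the default groupId) for request-reply responses, and you must subscribe to the reply topic before calling send(). A minimal sketch, assuming the HERO_SERVICE client registered above (the controller and handler names are hypothetical):

import { Controller, Get, Inject, OnModuleInit } from '@nestjs/common';
import { ClientKafka } from '@nestjs/microservices';

@Controller()
export class HelloController implements OnModuleInit {
  constructor(@Inject('HERO_SERVICE') private readonly client: ClientKafka) {}

  async onModuleInit() {
    // The client's consumer (the groupId above) listens on the reply topic
    // ('say.hello.reply') so that send() can resolve with the response.
    this.client.subscribeToResponseOf('say.hello');
    await this.client.connect();
  }

  @Get()
  sayHello() {
    return this.client.send('say.hello', 'hello world');
  }
}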

Register multiple nodes with Consul

The question is: how do I register multiple nodes with Consul under the same ID? I'm running a Consul server in Docker, and on localhost I run two processes of the same HelloWorld Node.js app on my Mac.
Problem: the entry for the process running on port 3000 gets replaced by the process running on port 3001, so I end up with only one node.
Question 2: Where do I download the GUI client (not the web UI) for Mac, as shown in the screenshot?
Payload for node 1 (port 3000):
{
  HTTP: 'http://My-Mac-Pro.local:3000/health',
  Interval: '15s',
  Name: 'My-Mac-Pro.local',
  ID: 'user1'
}
Payload for node 2 (port 3001):
{
  HTTP: 'http://My-Mac-Pro.local:3001/health',
  Interval: '15s',
  Name: 'My-Mac-Pro.local',
  ID: 'user2'
}
Node.js code:
let http = require("http");

// `body` is the registration payload shown above; resolve/reject come from
// wrapping the request in a Promise so the caller can await completion.
new Promise(function (resolve, reject) {
  http.request({
    method: "PUT",
    hostname: env.CONSUL_HOST,
    port: 8500,
    path: "/v1/agent/check/register",
    headers: {
      "content-type": "application/json; charset=utf-8"
    }
  }, function (response) {
    if (response.statusCode == 200) {
      resolve();
    }
  }).on("error", reject).end(JSON.stringify(body));
});
Expectation: see the multiple nodes in the web UI.
When you register services, each service should be registered with a unique service ID.
It could be something like ${serviceName}-${hostname}-${ip}-${port}-${process.pid}-${uuid.v4()}, or any combination of those that ensures your service ID is unique. A different ID in each registration payload is what lets Consul distinguish running instances of the same app/serviceIdentity, so they won't "override" one another.
Example of a registration payload:
const id = `${ip}-${hostname}-${serviceIdentity}-${port}`;
const registrationDetails = {
  Name: serviceIdentity,
  ID: id,
  Address: ip,
  Port: parseInt(port),
  Check: {
    CheckID: `http-${id}`,
    Name: `http-${id}`,
    TLSSkipVerify: true,
    HTTP: `http://${host}:${port}/health`,
    Interval: '10s',
    Notes: `Service http health`,
    DeregisterCriticalServiceAfter: '60s',
  },
};
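For completeness, a hedged sketch of PUTting that payload to the agent's service registration endpoint (/v1/agent/service/register, rather than the check endpoint used in the question); the register helper name is hypothetical:

const http = require("http");

// Hypothetical helper: PUT the registrationDetails above to the local agent.
function register(registrationDetails) {
  return new Promise((resolve, reject) => {
    const request = http.request({
      method: "PUT",
      hostname: process.env.CONSUL_HOST,
      port: 8500,
      path: "/v1/agent/service/register",
      headers: { "content-type": "application/json; charset=utf-8" }
    }, response => {
      response.statusCode === 200 ? resolve() : reject(new Error(`HTTP ${response.statusCode}`));
    });
    request.on("error", reject);
    request.end(JSON.stringify(registrationDetails));
  });
}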

Subscribing to a removed queue with spring-websocket and RabbitMQ broker (Queue NOT_FOUND)

I have a spring-websocket (4.1.6) application on Tomcat8 that uses a STOMP RabbitMQ (3.4.4) message broker for messaging. When a client (Chrome 47) starts the application, it subscribes to an endpoint creating a durable queue. When this client unsubscribes from the endpoint, the queue will be cleaned up by RabbitMQ after 30 seconds as defined in a custom made RabbitMQ policy. When I try to reconnect to an endpoint that has a queue that was cleaned up, I receive the following exception in the RabbitMQ logs: "NOT_FOUND - no queue 'position-updates-user9zm_szz9' in vhost '/'\n". I don't want to use an auto-delete queue since I have some reconnect logic in case the websocket connection dies.
This problem can be reproduced by adding the following code to the spring-websocket-portfolio GitHub example.
In the container div in the index.html add:
<button class="btn" onclick="appModel.subscribe()">SUBSCRIBE</button>
<button class="btn" onclick="appModel.unsubscribe()">UNSUBSCRIBE</button>
In portfolio.js replace:
stompClient.subscribe("/user/queue/position-updates", function(message) {
with:
positionUpdates = stompClient.subscribe("/user/queue/position-updates", function(message) {
and also add the following:
self.unsubscribe = function() {
positionUpdates.unsubscribe();
}
self.subscribe = function() {
positionUpdates = stompClient.subscribe("/user/queue/position-updates", function(message) {
self.pushNotification("Position update " + message.body);
self.portfolio().updatePosition(JSON.parse(message.body));
});
}
Now you can reproduce the problem by:
launch the application
click unsubscribe
delete the position-updates queue in the RabbitMQ console
click subscribe
find the error message in the websocket frame via Chrome DevTools and in the RabbitMQ logs
reconnect logic in case the websocket connection dies.
and
no queue 'position-updates-user9zm_szz9' in vhost
are fully different stories.
I'd suggest you implement "re-subscribe" logic for the case of a deleted queue.
Actually, that is how STOMP works: it creates an auto-delete (generated) queue for the subscription, and yes, it is removed on unsubscribe.
See more info in the RabbitMQ STOMP Adapter Manual.
On the other hand, consider subscribing to an existing AMQP queue:
To address existing queues created outside the STOMP adapter, destinations of the form /amq/queue/<name> can be used.
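A minimal client-side sketch of that re-subscribe idea, assuming the stomp.js client from the portfolio example (hedged: with SockJS you typically recreate the socket and client before reconnecting, and the '/portfolio' URL is assumed from the example's endpoint):

function connectAndSubscribe() {
    // Recreate socket and client: a closed SockJS connection cannot be reused.
    var socket = new SockJS('/portfolio');
    stompClient = Stomp.over(socket);
    stompClient.connect({}, function () {
        positionUpdates = stompClient.subscribe("/user/queue/position-updates", function (message) {
            self.pushNotification("Position update " + message.body);
            self.portfolio().updatePosition(JSON.parse(message.body));
        });
    }, function () {
        // An ERROR frame (e.g. the NOT_FOUND above) closes the connection:
        // retry, which creates a fresh generated queue on subscribe.
        setTimeout(connectAndSubscribe, 5000);
    });
}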
The problem is that STOMP won't recreate the queue if it gets deleted by the RabbitMQ policy. I worked around it by creating the queue myself when the SessionSubscribeEvent is fired.
public void onApplicationEvent(AbstractSubProtocolEvent event) {
    if (event instanceof SessionSubscribeEvent) {
        MultiValueMap nativeHeaders = (MultiValueMap) event.getMessage().getHeaders().get("nativeHeaders");
        List destination = (List) nativeHeaders.get("destination");
        String queueName = ((String) destination.get(0)).substring("/queue/".length());
        try {
            // Re-declare the durable queue so the subscription can attach even
            // after the RabbitMQ policy removed it (queueDeclare is idempotent).
            // In production, close or cache this connection rather than leaking it.
            Connection connection = connectionFactory.newConnection();
            Channel channel = connection.createChannel();
            channel.queueDeclare(queueName, true, false, false, null);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
