I have set up RabbitMQ. I want to retry messages 10 seconds after they fail, but the way I have it set up the message is not getting delayed; it comes back to the queue immediately. I want it to wait 10 seconds before being sent back to main_queue.
Below is my code. I am using the Bunny Ruby gem.
connection = Bunny.new('url_for_rabbitmq', verify_peer: true)
connection.start
channel = connection.create_channel
# Creating 2 Exchanges (One Main exchange, one retry exchange)
exchange = channel.direct('main_exchange')
retry_exchange = channel.direct('retry_exchange')
# Creating 2 Queue (One Main queue, one retry queue)
queue = channel.queue('main_queue', durable: true, arguments: { 'x-dead-letter-exchange' => retry_exchange.name })
queue.bind(exchange, routing_key: 'foo')
queue.bind(retry_exchange, routing_key: 'foo') # This one pushes the message directly back to the main queue without waiting 10 seconds.
retry_queue = channel.queue('retry_queue', durable: true, arguments: { 'x-message-ttl' => 10_000, 'x-dead-letter-exchange' => retry_exchange.name })
retry_queue.bind(retry_exchange, routing_key: 'foo')
If I change this line (retry_exchange to exchange)
retry_queue = channel.queue('retry_queue', durable: true, arguments: { 'x-message-ttl' => 10_000, 'x-dead-letter-exchange' => retry_exchange.name })
to this
retry_queue = channel.queue('retry_queue', durable: true, arguments: { 'x-message-ttl' => 10_000, 'x-dead-letter-exchange' => exchange.name })
then it works, but the message comes from main_exchange, and I want it to come from retry_exchange. How can I achieve this?
This is how I solved the problem:
connection = Bunny.new('url_for_rabbitmq', verify_peer: true)
connection.start
channel = connection.create_channel
# Creating 2 Exchanges (One Main exchange, one retry exchange)
exchange = channel.direct('main_exchange')
retry_exchange = channel.direct('retry_exchange')
# Creating 2 Queue (One Main queue, one retry queue)
retry_queue = channel.queue('retry_queue', durable: true, arguments: { 'x-message-ttl' => 10_000, 'x-dead-letter-exchange' => retry_exchange.name })
retry_queue.bind(retry_exchange, routing_key: 'foo')
retry_queue.bind(retry_exchange, routing_key: retry_queue.name)
queue = channel.queue('main_queue', durable: true, arguments: { 'x-dead-letter-exchange' => retry_exchange.name, 'x-dead-letter-routing-key' => retry_queue.name })
queue.bind(exchange, routing_key: 'foo')
queue.bind(exchange, routing_key: retry_queue.name)
Basically I needed to add 'x-dead-letter-routing-key' => retry_queue.name to the main queue, and remove a couple of unnecessary bindings from the main queue (queue.bind(retry_exchange, routing_key: 'foo')).
Now a message comes to the main queue; if it fails, it goes to the retry queue. Before going to the retry queue, its old routing key foo is replaced with the new routing key retry_queue.name. It stays in the retry queue for 10 seconds and then comes back to the main queue for a retry.
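Note that the dead-lettering above only happens when the consumer rejects a message without requeueing it. A minimal consumer sketch along those lines (process_message is a hypothetical placeholder for the real work):

queue.subscribe(manual_ack: true, block: true) do |delivery_info, _properties, payload|
  begin
    process_message(payload) # hypothetical application logic
    channel.ack(delivery_info.delivery_tag)
  rescue StandardError
    # requeue: false, so the broker dead-letters the message into retry_exchange
    channel.reject(delivery_info.delivery_tag, false)
  end
end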
I want to be able to click a button on a website, have it represent a command, issue that command to my program via a websocket, have my program process that command (which will produce a side effect), and then return the results of that command to the website to be rendered.
The WebSocket would be responsible for pushing state changes applied by different actors that fall within the user's view.
Example: changing AI instructions via the website. This modifies some values, which would get reported back to the website. Other users might change other AI instructions, or the AI might react to changing conditions and change position, requiring the client to update the screen.
I was thinking I could have an actor responsible for pushing changed information to the client, and just have the receiving stream update the state with the changes?
Is this the right library to use? Is there a better method to achieve what I want?
You can use akka-streams and akka-http for this just fine. Here is an example that uses an actor as the handler:
package test
import akka.actor.{Actor, ActorRef, ActorSystem, Props, Stash, Status}
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.ws.{Message, TextMessage}
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.{Flow, Sink, Source, SourceQueueWithComplete}
import akka.stream.{ActorMaterializer, OverflowStrategy, QueueOfferResult}
import akka.pattern.pipe
import scala.concurrent.{ExecutionContext, Future}
import scala.io.StdIn
object Test extends App {

  implicit val actorSystem = ActorSystem()
  implicit val materializer = ActorMaterializer()
  implicit def executionContext: ExecutionContext = actorSystem.dispatcher

  val routes =
    path("talk") {
      get {
        val handler = actorSystem.actorOf(Props[Handler])
        val flow = Flow.fromSinkAndSource(
          Flow[Message]
            .filter(_.isText)
            .mapAsync(4) {
              case TextMessage.Strict(text) => Future.successful(text)
              case TextMessage.Streamed(textStream) => textStream.runReduce(_ + _)
            }
            .to(Sink.actorRefWithAck[String](handler, Handler.Started, Handler.Ack, Handler.Completed)),
          Source.queue[String](16, OverflowStrategy.backpressure)
            .map(TextMessage.Strict)
            .mapMaterializedValue { queue =>
              handler ! Handler.OutputQueue(queue)
              queue
            }
        )
        handleWebSocketMessages(flow)
      }
    }

  val bindingFuture = Http().bindAndHandle(routes, "localhost", 8080)

  println("Started the server, press enter to shutdown")
  StdIn.readLine()

  bindingFuture
    .flatMap(_.unbind())
    .onComplete(_ => actorSystem.terminate())
}

object Handler {
  case object Started
  case object Completed
  case object Ack
  case class OutputQueue(queue: SourceQueueWithComplete[String])
}

class Handler extends Actor with Stash {
  import context.dispatcher

  override def receive: Receive = initialReceive

  def initialReceive: Receive = {
    case Handler.Started =>
      println("Client has connected, waiting for queue")
      context.become(waitQueue)
      sender() ! Handler.Ack
    case Handler.OutputQueue(queue) =>
      println("Queue received, waiting for client")
      context.become(waitClient(queue))
  }

  def waitQueue: Receive = {
    case Handler.OutputQueue(queue) =>
      println("Queue received, starting")
      context.become(running(queue))
      unstashAll()
    case _ =>
      stash()
  }

  def waitClient(queue: SourceQueueWithComplete[String]): Receive = {
    case Handler.Started =>
      println("Client has connected, starting")
      context.become(running(queue))
      sender() ! Handler.Ack
      unstashAll()
    case _ =>
      stash()
  }

  case class ResultWithSender(originalSender: ActorRef, result: QueueOfferResult)

  def running(queue: SourceQueueWithComplete[String]): Receive = {
    case s: String =>
      // do whatever you want here with the received message
      println(s"Received text: $s")
      val originalSender = sender()
      queue
        .offer("some response to the client")
        .map(ResultWithSender(originalSender, _))
        .pipeTo(self)

    case ResultWithSender(originalSender, result) =>
      result match {
        case QueueOfferResult.Enqueued => // okay
          originalSender ! Handler.Ack
        case QueueOfferResult.Dropped => // due to the OverflowStrategy.backpressure this should not happen
          println("Could not send the response to the client")
          originalSender ! Handler.Ack
        case QueueOfferResult.Failure(e) =>
          println(s"Could not send the response to the client: $e")
          context.stop(self)
        case QueueOfferResult.QueueClosed =>
          println("Outgoing connection to the client has closed")
          context.stop(self)
      }

    case Handler.Completed =>
      println("Client has disconnected")
      queue.complete()
      context.stop(self)

    case Status.Failure(e) =>
      println(s"Client connection has failed: $e")
      e.printStackTrace()
      queue.fail(new RuntimeException("Upstream has failed", e))
      context.stop(self)
  }
}
There are lots of places here that could be tweaked, but the basic idea remains the same. Alternatively, you could implement the Flow[Message, Message, _] required by the handleWebSocketMessages() method using a GraphStage. Everything used above is also described in detail in the akka-streams documentation.
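If you don't need per-connection state, the handler can also be a single stateless Flow. A minimal echo sketch under that assumption (it drops streamed frames for brevity):

// Stateless alternative: echo each strict text frame back to the client.
val echoFlow: Flow[Message, Message, Any] =
  Flow[Message].collect {
    case TextMessage.Strict(text) => TextMessage.Strict(s"echo: $text")
  }

// Plugged in the same way: path("echo") { handleWebSocketMessages(echoFlow) }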
Configuring a RetryPolicy inside the ReceiveEndpoint of a queue used to store messages from commands and events (as shown below) does not appear to work when the queue is a saga endpoint queue.
This configuration works fine (note the endpoint RegisterOrderServiceQueue):
...
var bus = BusConfigurator.ConfigureBus((cfg, host) =>
{
cfg.ReceiveEndpoint(host, RabbitMqConstants.RegisterOrderServiceQueue, e =>
{
e.UseRetry(Retry.Except<ArgumentException>().Immediate(3));
...
The same bus RetryPolicy configuration, on the Windows service that runs the saga state machine, does not work (note the endpoint SagaQueue):
...
var bus = BusConfigurator.ConfigureBus((cfg, host) =>
{
cfg.ReceiveEndpoint(host, RabbitMqConstants.SagaQueue, e =>
{
e.UseRetry(Retry.Except<ArgumentException>().Immediate(3));
...
StateMachine class source code that throws an ArgumentException:
...
During(Registered,
When(ApproveOrder)
.Then(context =>
{
throw new ArgumentException("Test for monitoring sagas");
context.Instance.EstimatedTime = context.Data.EstimatedTime;
context.Instance.Status = context.Data.Status;
})
.TransitionTo(Approved),
...
But when ApproveOrder occurs, the RetryPolicy rules are ignored; connecting a ConsumeObserver to the bus the sagas are connected to, I see the ConsumeFault method executed 5 times (which is the default MassTransit behavior).
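The observer is roughly the sketch below (trimmed; the fault logging is illustrative, not my actual code):

// Trimmed sketch of a consume observer that surfaces faults.
public class FaultCountingObserver : IConsumeObserver
{
    public Task PreConsume<T>(ConsumeContext<T> context) where T : class
    {
        return Task.CompletedTask;
    }

    public Task PostConsume<T>(ConsumeContext<T> context) where T : class
    {
        return Task.CompletedTask;
    }

    public Task ConsumeFault<T>(ConsumeContext<T> context, Exception exception) where T : class
    {
        Console.WriteLine($"ConsumeFault for {typeof(T).Name}: {exception.Message}");
        return Task.CompletedTask;
    }
}

// connected after the bus is created:
// bus.ConnectConsumeObserver(new FaultCountingObserver());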
Should this work? Is there any misconception in my configuration?
I'm using Logstash and Elasticsearch to build a log system, with RabbitMQ queueing log messages between two Logstash instances.
The message path is like below:
source log -> logstash -> rabbitMQ -> logstash (parse) -> elasticsearch
But I found that no matter how many machines I add to RabbitMQ, it only uses one machine's resources to process messages.
Some articles I found say a cluster just adds reliability and redundancy to prevent message loss.
But what I want is to increase the entire RabbitMQ cluster's throughput (in and out) by adding more machines.
How do I configure my RabbitMQ cluster to increase its throughput?
Any comments are appreciated.
--
PS. I need to add more information here.
The limit I measured is that the system can receive 7000 msg/s and output 1700 msg/s on a 4-machine cluster, with HA disabled and just 1 exchange bound to 1 queue, the queue itself bound to 1 node. I guess 1 queue bound to 1 node is the throughput bottleneck. It is difficult to change the routing key now, so we have just one routing key and want to distribute messages to different nodes to increase whole-system throughput.
Below is my logstash indexer config:
rabbitmq {
  codec => "json"
  auto_delete => false
  durable => true
  exchange => "logstash-exchange"
  key => "logstash-routingKey"
  queue => "logstash-queue"
  host => "VIP-of-rabbitMQ"
  user => "guest"
  password => "guest"
  passive => false
  exclusive => false
  threads => 4
  prefetch_count => 512
}
You need to add more queues. I guess you are using only one queue, so in other words you are tied to a single Erlang process. What you want is to use multiple queues.
Here is a quick and dirty example of adding some logic to Logstash so it sends messages to different queues:
filter {
  # check if path contains source subfolder
  if "foo" in [path] {
    mutate { add_field => [ "source", "foo" ] }
  }
  else if "bar" in [path] {
    mutate { add_field => [ "source", "bar" ] }
  }
  else {
    mutate { add_field => [ "source", "unknown" ] }
  }
}
Then in your output:
rabbitmq {
  debug => true
  durable => true
  exchange_type => "direct"
  host => "your_rabbit_ip"
  key => "%{source}"
  exchange => "my_exchange"
  persistent => true
  port => 5672
  user => "logstash"
  password => "xxxxxxxxxx"
  workers => 12
}
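On the consuming side, each source then gets its own queue, so the queues can live on different nodes. A sketch of the indexer inputs under that assumption (queue names and keys are hypothetical):

input {
  rabbitmq {
    host => "your_rabbit_ip"
    exchange => "my_exchange"
    key => "foo"             # matches the %{source} routing key set above
    queue => "logstash-foo"  # hypothetical per-source queue
    durable => true
    threads => 4
  }
  rabbitmq {
    host => "your_rabbit_ip"
    exchange => "my_exchange"
    key => "bar"
    queue => "logstash-bar"
    durable => true
    threads => 4
  }
}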
Updated:
Take a look at the repositories that this guy has:
https://github.com/simonmacmullen
I guess you will be interested in this one:
https://github.com/simonmacmullen/random-exchange
This exchange type is for load-balancing among consumers.
I am working on a basic example and am not able to work it out.
I need to forward messages from one machine (Machine1) to another (Machine2) via a queue (TestQ).
The producer is running on Machine1 and the consumer on Machine2.
My settings in Machine1's RabbitMQ broker config:
{rabbitmq_shovel, [{shovels, [
  {shovel_test, [
    {sources, [{broker, "amqp://"}]},
    {destinations, [{broker, "amqp://Machine2"}]},
    {queue, <<"TestQ">>},
    {ack_mode, on_confirm},
    {reconnect_delay, 5}
  ]}
]}]}
Machine2 has a default config and no shovel plugin enabled.
Producer's code, running on Machine1:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare("TestQ", true, false, false, null);
channel.basicPublish("", "TestQ", null, "Hello World!".getBytes());
Consumer's code, running on Machine2:
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.queueDeclare("TestQ", true, false, false, null);
QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume("TestQ", true, consumer);
while (true) {
QueueingConsumer.Delivery delivery = consumer.nextDelivery();
String message = new String(delivery.getBody());
System.out.println(" [x] Received '" + message + "'");
}
Executing rabbitmqctl eval 'rabbit_shovel_status:status().' on Machine1:
[{shovel_test,starting,{{2014,1,7},{9,47,38}}}]
...done.
The producer sends fine, but the consumer on Machine2 never receives anything.
Where is the problem? Is something missing in the config of Machine1's broker, or Machine2's broker?
Thank you!
The status of your shovel should be running, not starting. If it stays in the starting phase, that means it can't start up correctly (e.g. it can't connect to the destination broker).
One problem I spotted is that you used broker instead of brokers when specifying the list of sources. Try this:
{rabbitmq_shovel,
[{shovels, [{shovel_test,
[{sources, [{brokers, ["amqp://"]}]},
{destinations, [{broker, "amqp://Machine2"}]},
{queue, <<"TestQ">>},
{ack_mode, on_confirm},
{reconnect_delay, 5}
]}
]}
]}.
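Once both brokers are reachable, re-running the status check from the question should report the shovel as running rather than starting:

rabbitmqctl eval 'rabbit_shovel_status:status().'
# expect the shovel_test entry to show a running state;
# if it still shows starting, the shovel cannot reach one of the brokers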
I have an interesting situation to handle. I need an EventMachine loop that sits and waits for messages in an AMQP queue, but that also interrupts that loop on a regular interval to send a message to a separate AMQP queue. I'm new to EventMachine, and with what I have so far the loop never sends the periodic message.
Right now I've made two procs:
listen_loop = Proc.new {
  AMQP.start(connection_config) do |connection|
    AMQP::Channel.new(connection) do |channel|
      channel.queue("queue1", :exclusive => false, :durable => true) do |requests_queue|
        requests_queue.once_declared do
          consumer = AMQP::Consumer.new(channel, requests_queue).consume
          consumer.on_delivery do |metadata, payload|
            puts "[requests] Got a request #{metadata.message_id}. Sending a reply to #{metadata.reply_to}..."
            response = "responding"
            channel.default_exchange.publish(response,
              :routing_key    => metadata.reply_to,
              :correlation_id => metadata.message_id,
              :mandatory      => true)
            metadata.ack
          end
        end
      end
    end
  end

  Signal.trap("INT")  { AMQP.stop { EM.stop } }
  Signal.trap("TERM") { AMQP.stop { EM.stop } }
}
send_message = Proc.new {
  AMQP.start(connection_config) do |connection|
    channel = AMQP::Channel.new(connection)
    queue = channel.queue('queue2')
    channel.default_exchange.publish("hello world", :routing_key => queue.name)
    EM.add_timer(0.5) do
      connection.close do
        EM.stop { exit }
      end
    end
  end
}
And then I have my EventMachine Loop:
EM.run do
  EM.add_periodic_timer(5) { send_message.call }
  listen_loop.call
end
I am able to receive messages in the listen loop, but I am unable to send any of the messages on the regular interval.
Figured out what I was doing wrong: the send proc couldn't open a new connection to the RabbitMQ server because it was already connected. I consolidated everything into a single EventMachine loop, reused the connection, and now it works.
For those curious, it looks like this:
EM.run do
  AMQP.start(connection_config) do |connection|
    channel = AMQP::Channel.new(connection)

    EM.add_periodic_timer(5) { channel.default_exchange.publish("foo", :routing_key => 'queue2') }

    queue = channel.queue("queue1", :exclusive => false, :durable => true)
    channel.prefetch(1)
    queue.subscribe(:ack => true) do |metadata, payload|
      puts "[requests] Got a request #{metadata.message_id}. Sending a reply to #{metadata.reply_to}..."
      response = "bar"
      channel.default_exchange.publish(response,
        :routing_key    => metadata.reply_to,
        :correlation_id => metadata.message_id,
        :mandatory      => true)
      metadata.ack
    end
  end

  Signal.trap("INT")  { AMQP.stop { EM.stop } }
  Signal.trap("TERM") { AMQP.stop { EM.stop } }
end