MT in the cloud with AppHarbor and CloudAMQP - appharbor

Anybody successfully got MassTransit working with AppHarbor and CloudAMQP?
I am having a bear of a time with it.
I have the publisher (the web site) sending messages, but the server (a background worker) does not appear to be picking them up.
One of the things that concerns me is that MT requires the rabbitmq scheme, whereas CloudAMQP sets the scheme to amqp.
I am swapping the scheme (from amqp to rabbitmq) when configuring the bus, and I noticed the scheme in the addresses of the message is rabbitmq, which makes sense, since I replaced it.
But I am wondering if they have to be amqp for the server to pick them up?
Here is a simple message that I sent; it reached RabbitMQ, but the server is not picking it up.
message_id: 08cf2cbc-5b4f-14dd-1231-381f8b520000
delivery_mode: 2
headers:
Content-Type: application/vnd.masstransit+json
Payload
614 bytes
Encoding: string
{
  "destinationAddress": "rabbitmq://98eabe2a-aae8-464c-8555-855518dd87d0_apphb.com:*********#lemur.cloudamqp.com/98eabe2a-aae8-464c-8555-855518dd87d0_apphb.com/Messages.Product:ProductCreatedEvent",
  "headers": {},
  "message": {
    "id": "dd6ecfaa-60d2-4cd4-8cd6-a08a00e872fb"
  },
  "messageType": [
    "urn:message:Messages.Product:ProductCreatedEvent"
  ],
  "retryCount": 0,
  "sourceAddress": "rabbitmq://98eabe2a-aae8-464c-8555-855518dd87d0_apphb.com:**********#lemur.cloudamqp.com/98eabe2a-aae8-464c-8555-855518dd87d0_apphb.com/enterprise_web"
}
Thanks,
Joe
Edit: Thanks Carl for pointing out the passwords in the URL.

Edit:
For anyone else thinking about using MassTransit with CloudAMQP: you may want to look into EasyNetQ instead. Not taking anything away from MassTransit, it's a great project. The problem when using it with a service like CloudAMQP, which charges for data usage per month, is that MassTransit uses polling to check for messages instead of subscribing to them (at least in the last version I was working with, 2.1.1). This polling will eat into your data usage even when you are not publishing messages.
Well it is in fact possible.
Come to find out, it was all part of the configuration of the background worker on appharbor.
Once that got worked out, the background worker started staying "alive" and consuming messages. The issue revolved around app.config vs myworker.exe.config and config transforms. Once I realized it was a config problem, this link helped out: App.config transformation for appharbor background workers
You have to swap out the amqp scheme for rabbitmq, but that is not too bad.
Here is my bus configuration for the background worker:
log.Info("Configuring MassTransit");
var rabbitUrl = ConfigurationManager.AppSettings["CLOUDAMQP_URL"];
var bus = ServiceBusFactory.New(sbc =>
{
// configure for log4net
sbc.UseLog4Net();
// configure the bus
sbc.UseRabbitMq();
sbc.UseRabbitMqRouting();
sbc.ReceiveFrom(String.Format("{0}/server", rabbitUrl.Replace("amqp://", "rabbitmq://"))); // need to swap the scheme for masstransit
// finds all the consumers in the container and register them with the bus
sbc.Subscribe(x => x.LoadFrom(container));
sbc.BeforeConsumingMessage(() =>
{
var session = container.GetInstance<ISessionFactory>().OpenSession();
CurrentSessionContext.Bind(session);
});
sbc.AfterConsumingMessage(() =>
{
var sessionFactory = container.GetInstance<ISessionFactory>();
if (CurrentSessionContext.HasBind(sessionFactory) == false) return;
var session = CurrentSessionContext.Unbind(sessionFactory);
if (session != null)
{
session.Dispose();
}
});
var results = sbc.Validate();
if (results.Any())
{
throw new Exception("MassTransit may not be setup correctly. Review validate results");
}
});
// finally inject the bus into the container
container.Inject(bus);
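For reference, the publisher (the web site) is configured the same way and just calls Publish on the bus. The snippet below is only a sketch against the same MassTransit 2.x API; the enterprise_web queue name comes from the sourceAddress in the payload above, and the Id property on ProductCreatedEvent is assumed from the JSON message body.
log.Info("Configuring MassTransit for the web publisher");
var rabbitUrl = ConfigurationManager.AppSettings["CLOUDAMQP_URL"];
var webBus = ServiceBusFactory.New(sbc =>
{
    sbc.UseLog4Net();
    sbc.UseRabbitMq();
    sbc.UseRabbitMqRouting();
    // the web site gets its own receive queue; swap the scheme here as well
    sbc.ReceiveFrom(String.Format("{0}/enterprise_web", rabbitUrl.Replace("amqp://", "rabbitmq://")));
});

// publish an event; the background worker's consumer picks it up from its "server" queue
webBus.Publish(new ProductCreatedEvent { Id = Guid.NewGuid() });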

I'm not aware of anyone on the mailing list who's done this. Good luck though; please report back to the mailing list if you get it working.
The rabbitmq scheme is used by MT to figure out which transport to use.
Per the .NET docs for the RabbitMQ connector, it should be using the amqp protocol, so this shouldn't be the issue; but can you connect to the instance from elsewhere?
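One quick way to rule out connectivity or credential problems is to open a raw connection with the RabbitMQ .NET client, outside MassTransit entirely. This is a minimal sketch, assuming a recent RabbitMQ.Client (where ConnectionFactory.Uri takes a System.Uri) and that the CLOUDAMQP_URL app setting holds the amqp:// URI from the add-on:
using System;
using System.Configuration;
using RabbitMQ.Client;

var factory = new ConnectionFactory
{
    // e.g. amqp://user:pass@lemur.cloudamqp.com/vhost
    Uri = new Uri(ConfigurationManager.AppSettings["CLOUDAMQP_URL"])
};
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    // if we get here, the credentials, vhost and network path are all fine
    Console.WriteLine("Connected: " + connection.IsOpen);
}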

Related

Is it possible to connect to a specific queue and read a specified number of messages?

I'm trying to create a maintenance tool that should be able to modify messages in the queues, for example in _error queues. The idea is to read a specified number of messages from a queue by its name, store them to files, modify the files, then read them back with the tool and publish them to a specified queue.
There are no problems with publishing.
var sendEndpoint = await _busControl.GetSendEndpoint(new Uri($"rabbitmq://localhost/{queueName}")).ConfigureAwait(false); // queueName is the target queue
await sendEndpoint.Send(message, messageType).ConfigureAwait(false);
But I can't figure out how to read a specified count of messages. I'm playing with this, but still have no idea how to limit the number of messages I want to read:
_busControl.ConnectReceiveEndpoint(endpointName, cfg =>
{
    cfg.Handler<T>(context =>
    {
        // some handler logic
        return Task.CompletedTask;
    });
});
Thanx for the ideas in advance!
There is no way (using MassTransit) to read individual messages from a queue.
You would need to use the transport client library to read the message (including the headers and body) and write it to the file. That same tool could be used to send the message back to the broker.
RabbitMQ has a shovel feature for moving messages between queues, which might help. For Azure Service Bus, there is a ServiceBusExplorer project that has some useful message management tools.
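If the broker is RabbitMQ, a maintenance tool along those lines can pull a fixed number of messages with the raw .NET client rather than MassTransit. A rough sketch, assuming RabbitMQ.Client 6.x (where Body is a ReadOnlyMemory<byte>) and placeholder names for the queue and output directory:
using System;
using System.IO;
using RabbitMQ.Client;

var factory = new ConnectionFactory { Uri = new Uri("amqp://localhost") };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    const int messageCount = 10;                 // how many messages to pull
    const string queueName = "some-queue_error"; // placeholder queue name
    Directory.CreateDirectory("dump");

    for (var i = 0; i < messageCount; i++)
    {
        // BasicGet pulls exactly one message; autoAck: false keeps it unacked until we decide
        var result = channel.BasicGet(queueName, autoAck: false);
        if (result == null) break; // queue is empty

        File.WriteAllBytes(Path.Combine("dump", i + ".json"), result.Body.ToArray());

        // ack only once the message is safely on disk (BasicNack with requeue: true would put it back)
        channel.BasicAck(result.DeliveryTag, multiple: false);
    }
}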

Akka HTTP - Websocket - true bi-directional scenario

I'm trying to create a true bi-directional websocket server using akka-http & akka-stream, where:
The server will answer a request when the response is ready
The server will answer a request with multiple responses when those are ready
The server will push a notification without being asked anything
The official documentation at https://doc.akka.io/docs/akka-http/current/server-side/websocket-support.html#handling-messages is not really clear.
Creating the Route for the server
public Route createRoute() {
    return path("subscription", () ->
        get(() ->
            concat(
                handleWebSocketMessages(subscriptionFlow()))));
}
public Flow<Message, Message, NotUsed> subscriptionFlow() {
I understand that you need to return the Flow that will handle the in/out messages.
Since there is no request/response pairing for the messages, just a weak link, I think I need a separate Sink for the requests and a separate Source for sending responses, although the Sink still needs to know about the Source so it can later tell to whom an answer should be sent.
I have only found examples that are request/response, or where the Sink is completely ignored, plus maybe one older example: https://markatta.com/codemonkey/posts/chat-with-akka-http-websockets-old/
I am thinking of using Flow.fromSinkAndSourceCoupled and maybe having an actor that is created for every websocket connection.
I cannot get the combination of Sink.actorRefWithBackpressure, Source.actorRefWithBackpressure, Flow.fromSinkAndSourceCoupled, and creating the actor for every websocket connection to work.
With typed actors (currently 2.6.3) I cannot find a way to create the actor for every websocket connection just like in the old example:
val userActor = system.actorOf(Props(new User(chatRoom)))
Is there an example in the akka / akka-stream / akka-http projects, or somewhere else, that shows this feature?

IBM MQ Error 2009 - How to detect when Queue Manager seems to spin up its own thread

This is bizarre and somewhat worrying for reliable messaging. I'm hoping I'm missing something.
This has only come to light today due to known network failures. It's giving me a good opportunity to have a look at some fault tolerance.
On thread 1, we send a message on a Queue managed by a Queue Manager. This code:
using (MQQueueManager qMgr = new MQQueueManager(_queueManager, _connectionProperties))
{
// send message
}
Just "hangs" on the Queue Manager. As in, it seems to run and then timesout. Maybe it runs on a thread and the thread crashes.
We wouldn't have known anything except we have a second thread listening to another queue.
We have the same construct on the receive thread:
using (MQQueueManager qMgr = new MQQueueManager(_queueManager, _connectionProperties))
{
// listen for message
}
But this throws an MQException with error code 2009. This suggests a network issue. (http://www-01.ibm.com/support/docview.wss?uid=swg21472342)
However, again, the QueueManager on the second thread seems to spin up its own thread on which the exception is thrown, resulting in no ability to catch it and react.
Is there something we're missing?
Update
Here are my connection properties:
Hashtable connectionProperties = new Hashtable();
connectionProperties.Add(MQC.TRANSPORT_PROPERTY, MQC.TRANSPORT_MQSERIES_MANAGED);
connectionProperties.Add(MQC.USE_MQCSP_AUTHENTICATION_PROPERTY, true);
connectionProperties.Add(MQC.HOST_NAME_PROPERTY, hostName);
connectionProperties.Add(MQC.CHANNEL_PROPERTY, channel);
connectionProperties.Add(MQC.PORT_PROPERTY, portNumber);
connectionProperties.Add(MQC.CONNECT_OPTIONS_PROPERTY, MQC.MQCNO_RECONNECT_Q_MGR);
return connectionProperties;
Bearing in mind I am a novice at IBM MQ, I notice that I have the connection property MQC.MQCNO_RECONNECT_Q_MGR. Will this play a role? I'm aiming to know about and manage failures when things go wrong, rather than relying on something we don't entirely understand.
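What I'm considering is doing the get synchronously with a wait interval and catching MQException by reason code on the calling thread. This is only a rough sketch against the managed IBM.WMQ client, with a placeholder queue name, so treat it as an assumption rather than our actual code:
using IBM.WMQ;

try
{
    using (MQQueueManager qMgr = new MQQueueManager(_queueManager, _connectionProperties))
    {
        var queue = qMgr.AccessQueue("MY.LISTEN.QUEUE", // placeholder queue name
            MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING);

        var message = new MQMessage();
        var gmo = new MQGetMessageOptions
        {
            Options = MQC.MQGMO_WAIT + MQC.MQGMO_FAIL_IF_QUIESCING,
            WaitInterval = 30000 // wait up to 30 seconds instead of blocking forever
        };

        queue.Get(message, gmo);
        // ... process the message ...
    }
}
catch (MQException ex) when (ex.ReasonCode == MQC.MQRC_CONNECTION_BROKEN) // 2009
{
    // the connection to the queue manager was lost on *this* thread,
    // so we can log it and reconnect/retry with a back-off
}
catch (MQException ex) when (ex.ReasonCode == MQC.MQRC_NO_MSG_AVAILABLE) // 2033
{
    // no message arrived within the wait interval; loop around and get again
}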

RabbitMQ: Connecting & publishing to an existing queue in Ruby

I have two process types on Heroku: a web dyno in Ruby and a worker in Node.js. I'm using the RabbitMQ addon (currently beta) to pass a message from Ruby to Node. Node connects and consumes correctly, and Ruby connects and publishes correctly as long as it is the first to connect / create the queue.
Apparently, Carrot throws some funny errors when you try to create a queue that already exists, which is how I discovered that the reason for not being able to get my message across (I could have sworn it worked when I tested last night) was that I started my Node process before my Ruby.
Since I'm on Heroku, I'm going to have more than one of each Ruby and Node threads working concurrently, and they each need to support being the first to start a queue and connect into an existing queue, without issue.
Which brings me to my question:
How do I connect to an existing RabbitMQ queue, using Ruby, for the purpose of publishing messages to consumers which are already connected and waiting to receive messages?
Carrot will silently fail if there is a collision with an existing queue.
In order to connect to an existing queue, without colliding, you must specify the same options used when you first created the queue.
It sucks that Carrot silently fails in this case, but that's what it is.
Ruby:
Carrot.server
q = Carrot.queue('onboarding', {:durable=>true, :autoDelete=>false})
q.publish('test')
Node.js:
var amqp = require("amqp");
var c = amqp.createConnection({ host: 'localhost' });
q = c.queue('onboarding', {durable: true, autoDelete:false});
// ... wait for queue to connect (1 sec), or use .addListener('ready', callback) ...
q.subscribe( {ack:true}, function(message){
console.log(message.data.toString())
q.shift()
})
Have you tried the other client(s)?
http://rubyamqp.info/

How to implement cache synchronization in tomcat 6.0 cluster environment?

I'm currently working on migrating a web application to run in a cluster. This application uses caches. Some of these caches are reloaded when the user saves something. I'd like to inform the other nodes of the cluster about this, so that all nodes refresh their caches.
It seems that the Tomcat server has group messaging built in (Tribes).
I'm wondering if I can use this messaging for my task, and how to keep the event listener running all the time.
with kind regards
Michael
It is possible to use it, and there is no need to start a thread or the like.
Sending class instances around requires a jar containing the message class in Tomcat's lib directory.
cheers
Michael
You can use a Hazelcast Topic. It is a very lightweight pub/sub messaging mechanism. Each node will listen to the topic. When a user saves something on any node, just publish a "REFRESH" message. On receiving it, each node can do whatever you want.
Here is the code to do this:
String REFRESH = "REFRESH";
ITopic<String> topic = Hazelcast.getTopic("myTopic");
topic.addMessageListener(new MessageListener<String>() {
    public void onMessage(String msg) {
        if (REFRESH.equals(msg)) {
            // do refresh
        }
    }
});

// when the user saves something
topic.publish(REFRESH);
If you are using a handwritten cache, then you can synchronize the cache between all nodes of the cluster using message broadcasting/receiving. You can use JGroups for that.
For example: when node A updates its cache, it broadcasts a message to the other nodes to refill (refresh) their caches.
