Is there a relatively simple, documented way in a Phoenix application to read how many active sockets and channels are currently open at any given time? And more specifically, is it possible to filter this data by topic and other channel connection metadata?
My use case is for analytics on active connections to my backend.
Thanks for any suggestions!
You are looking for Phoenix.Presence. From the documentation:
Provides Presence tracking to processes and channels.
This behaviour provides presence features such as fetching presences for a given topic, as well as handling diffs of join and leave events as they occur in real-time. Using this module defines a supervisor and allows the calling module to implement the Phoenix.Tracker behaviour which starts a tracker process to handle presence information.
Basically, you implement the Phoenix.Presence behaviour (an almost ready-to-go example is in the docs) and the Phoenix.Tracker behaviour according to your needs.
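For illustration, a minimal sketch, assuming an application named MyApp with a MyApp.PubSub server (module, topic, and metadata names are placeholders):

defmodule MyAppWeb.Presence do
  use Phoenix.Presence,
    otp_app: :my_app,
    pubsub_server: MyApp.PubSub
end

# In a channel, track each connection with whatever metadata you
# want to filter on later:
def handle_info(:after_join, socket) do
  {:ok, _} =
    MyAppWeb.Presence.track(socket, socket.assigns.user_id, %{
      online_at: System.system_time(:second)
    })

  {:noreply, socket}
end

# Anywhere in the app, list presences for a topic (keyed by the tracked
# id, each entry carrying its metadata), then filter for your analytics:
MyAppWeb.Presence.list("room:lobby")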
I'm using RabbitMQ in my pet project (Spring Boot based). In a @Configuration class I declare beans like Queue, Binding, and DirectExchange, so when I run the application, all these exchanges, bindings, and queues are created automatically. I'm wondering whether this is the correct way to configure these RabbitMQ "entities", or whether I should separate this into a step before application startup, for example by calling a series of curl commands against the management HTTP API to create all the needed queues (with exchanges and bindings) first. What are the best practices for creating/configuring routing-related stuff?
The thing is, there is no one way of using RabbitMQ. However, there are a few questions I always ask myself before working with any broker, and I'll apply them to your question here.
Let's question the approach:
In regards to creating the exchanges, bindings, queues etc:
Try to understand whether the elements you are using are durable. If so, you can create them within your code on startup and apply a simple health check, as in the sketch below. Also check whether your RabbitMQ server persists data; if not, you'll have to create your queues, exchanges, and bindings every time.
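A hedged sketch of that startup declaration with Spring AMQP (bean, exchange, and queue names are examples): Spring Boot auto-configures a RabbitAdmin that redeclares these beans whenever a connection is (re)established, which covers the "create on startup" case.

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class BrokerConfig {
    // Durable elements survive a broker restart (when the server persists
    // data); declaring them as beans makes startup idempotent.
    @Bean
    DirectExchange jobsExchange() {
        return new DirectExchange("jobs", true, false); // durable, not auto-delete
    }

    @Bean
    Queue jobsQueue() {
        return new Queue("jobs:incoming", true); // durable
    }

    @Bean
    Binding jobsBinding() {
        return BindingBuilder.bind(jobsQueue()).to(jobsExchange()).with("jobs.incoming");
    }
}

A health check can then be as simple as verifying the connection and declarations succeed; if Spring Boot Actuator is on the classpath, its RabbitMQ health indicator does this for you.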
In regards to routing & binding queues with exchanges:
There are two major questions you need to consider:
How much does latency matter?
If latency does matter, use direct exchanges as much as you can. The reason is simple: the message goes straight from exchange to queue on an exact routing-key match, instead of being pattern-routed. Routing adds latency, never forget! If those few extra ms won't make a difference for you, keep the following question in mind to figure out how to define your exchanges, bindings, and queues.
How will I use my broker?
Some people use their broker simply to pub/sub messages. This is a perfectly reasonable use case, and a fanout exchange is the most viable option for it. If you're trying to minimize the number of queues you create, a topic exchange may be interesting as well.
More importantly though: are you using your broker exclusively for between-service communication, or will your service be both producer and consumer? Or is it a mix of the two? This boundary needs to be defined clearly, or you'll end up in a mess where you suddenly notice you're consuming messages that could have been handled internally by another library, or by simply passing arguments to functions.
Example of how to apply it:
Case: we have a logging service & user service:
1. How will I use my broker?
Between-service communication.
2. Which messages do I need to get across?
CRUD operations
login/logout
3. What would my routing table look like (draft)?
Exchange Name | Exchange Type | Binding          | Queue
------------- | ------------- | ---------------- | ----------
user          | topic         | user.cmd.*       | user:crud
user          | topic         | user.event.login | user:login
Above you can clearly see that we can handle all CRUD operations with one simple queue. Is this the most efficient approach? That depends on your service. It may be better for user.cmd.create to go to its own queue, e.g. user:created. That is another boundary you'll need to define.
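As a sketch, the draft table above maps to Spring AMQP declarations like these (same Spring Boot setup as in the question; exchange, queue, and binding names come straight from the table):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class UserRoutingConfig {
    @Bean
    TopicExchange userExchange() {
        return new TopicExchange("user");
    }

    @Bean
    Queue userCrudQueue() {
        return new Queue("user:crud");
    }

    @Bean
    Queue userLoginQueue() {
        return new Queue("user:login");
    }

    // user.cmd.* matches user.cmd.create, user.cmd.delete, etc.,
    // so one queue handles all CRUD commands.
    @Bean
    Binding crudBinding() {
        return BindingBuilder.bind(userCrudQueue()).to(userExchange()).with("user.cmd.*");
    }

    @Bean
    Binding loginBinding() {
        return BindingBuilder.bind(userLoginQueue()).to(userExchange()).with("user.event.login");
    }
}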
Something else worth mentioning: treat your queue names and routing keys as pieces of information. Debugging a microservice can be hellish, so applying a consistent naming convention is most appropriate. There is no single naming convention; once again, it depends on your use case.
Conclusion:
In general, the best practice with any broker is to clearly define the scope of your broker and its underlying elements. If that isn't done properly, it doesn't matter whether you run a health check on startup. It's easy to get lost in the complexity and the interesting features RabbitMQ offers. Try to keep things as simple as possible at first and ask yourself: "Is it going to cost me a lot of time to refactor/debug/fix later?" If the answer is yes, go back to the first sentence of this paragraph.
Documentation:
AMQP concepts: https://www.rabbitmq.com/tutorials/amqp-concepts.html
Some general best practices: https://www.cloudamqp.com/blog/part2-rabbitmq-best-practice-for-high-performance.html
Routing: https://www.rabbitmq.com/tutorials/tutorial-four-python.html
Latency & throughput: https://blog.rabbitmq.com/posts/2012/05/some-queuing-theory-throughput-latency-and-bandwidth
I was writing a POC on long-polling in Go.
The generally used package seems to be https://github.com/jcuga/golongpoll .
But suppose I want to publish an event to the golongpoll.SubscriptionManager from an arbitrary context, in particular when the long-poll API request is being served by one machine while the Kafka event for that consumer group is consumed by another instance in the cluster.
The examples in the documentation don't cover such a scenario at all, even though it seems like a common one. One way I can think of is to put a distributed cache like Redis in between and have all the instances poll it for changes, but that sounds a bit dumb to me.
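One sketch of that Redis middle layer, using Redis pub/sub to push rather than poll (shown with the Jedis Java client; the host, port, and "events" channel name are made up):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

// Sketch: every instance subscribes to a Redis channel. The instance that
// consumes the Kafka event publishes to that channel, so whichever instance
// holds the long-poll request sees the event and can complete it.
public class RedisFanOut {
    public static void publish(String payload) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.publish("events", payload); // called by the Kafka consumer
        }
    }

    public static void subscribe() {
        Jedis jedis = new Jedis("localhost", 6379);
        jedis.subscribe(new JedisPubSub() {
            @Override
            public void onMessage(String channel, String message) {
                // Hand the message to the local long-poll manager here.
                System.out.println("event: " + message);
            }
        }, "events"); // blocks; run on its own thread in practice
    }
}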
I'm attempting to do a pub/sub architecture where multiple publishers and multiple subscribers exist on the same bus. According to what I've read on the internet, only one socket should ever call bind(), and all others (whether pub or sub) should call connect().
The problem is, with this approach I'm finding that only the publisher that actually calls bind() on the socket ever publishes messages. All of my publishers that call connect() seem to fail silently and don't actually publish any messages to the bus. I've confirmed this isn't a subscription-filter issue, as I've written a simple "sniffer" app that subscribes to all messages on the bus, and it only shows messages from the publisher that called bind().
If I attempt multiple binds with the publishers, the "expected" ZeroMQ behavior of silently stealing the bus occurs with ipc, and an address-already-in-use error is thrown with tcp.
I've verified this behavior with ipc and tcp endpoints, but ultimately the full system will be using epgm. I assume (though of course may be wrong) that in this situation I wouldn't need a broker since there's no dynamic discovery occurring (endpoints are known, whether ipc, tcp, or epgm multicast).
Is there something I'm missing, perhaps a socket setting, that would be causing the connecting publishers to not actually send their data? According to the literature I've seen on the internet, I'm doing things the "correct" way but it still doesn't work.
For reference, my publisher class has the following methods for setting up the endpoint:
#include <string>
#include <zmq.hpp>

ZmqPublisher::ZmqPublisher()
    : m_zmqContext(1), m_zmqSocket(m_zmqContext, ZMQ_PUB)
{}

// Bind the PUB socket to an endpoint; only one process may bind a given endpoint.
void ZmqPublisher::bindEndpoint(std::string ep)
{
    m_zmqSocket.bind(ep.c_str());
}

// Connect the PUB socket to an endpoint that something else has bound.
void ZmqPublisher::connect(std::string ep)
{
    m_zmqSocket.connect(ep.c_str());
}
So ultimately, my question is this: What is the proper way to handle multiple publishers on the same endpoint, and why am I not seeing messages from more than one publisher?
It may or may not be relevant, but The 0MQ Guide has the following slightly enigmatic remark:
In theory with ØMQ sockets, it does not matter which end connects and which end binds. However, in practice there are undocumented differences that I'll come to later. For now, bind the PUB and connect the SUB, unless your network design makes that impossible.
I've not yet discovered where the "come to later" actually happens, but I don't use pub/sub so much, and haven't read the "Advanced Pub-Sub Patterns" part of the guide in great detail.
However, the idea of multiple publishers on a single endpoint suggests, to me, the need for an XPUB/XSUB-style broker; it's not about dynamic discovery, it's about a single point of contact and routing. Ultimately, I think a broker-based topology would simplify your application and make it easier to identify problems.
Your mistake is that a single publisher calls bind while the others call connect. This is not supported by the plain PUB-SUB pattern.
Plain PUB-SUB in ZeroMQ supports only two scenarios:
single publisher, multiple subscribers
single subscriber, multiple publishers
In both cases, the party that is "single" must bind and the parties that are "multiple" must connect. If you want many-to-many, you can use XPUB-XSUB or some other pattern, as sketched below.
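For the many-to-many case, a minimal broker sketch, shown here with the JeroMQ Java binding for brevity (the C++ equivalent uses zmq::proxy; the endpoints are placeholders). Publishers connect their PUB sockets to the XSUB side, and subscribers connect their SUB sockets to the XPUB side:

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class PubSubBroker {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            // All publishers connect() their PUB sockets here.
            ZMQ.Socket xsub = ctx.createSocket(SocketType.XSUB);
            xsub.bind("tcp://*:5557");

            // All subscribers connect() their SUB sockets here.
            ZMQ.Socket xpub = ctx.createSocket(SocketType.XPUB);
            xpub.bind("tcp://*:5558");

            // Forwards messages downstream and subscriptions upstream; blocks.
            ZMQ.proxy(xsub, xpub, null);
        }
    }
}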
I have multiple frontends to my service written in Node.js and workers written in Ruby. Now the question is how to make them communicate. I need to maintain a dynamic pool of workers to handle load (spawn more workers when load rises), and the messages are quite big, ~2-3 MB, because I'm sending images that users upload through the Node.js frontends. Because I want nice scaling, I thought about a queuing solution, but I haven't found any existing solution (or I've misunderstood the guides) that provides:
Fallback mechanisms. The solutions I've found so far have a single point of failure, the message broker, with no way to provide fallbacks.
Serialization (persistence to disk), so that tasks are not lost when the broker fails.
The ability to pass big messages.
An easy API for Ruby and Node.js.
Some API to track queue size so I can resize the worker pool.
Preferably lightweight.
Maybe my approach is wrong? Maybe I shouldn't use queues but some other mechanism? Or is there a queueing solution that fits the requirements above?
No doubt you need a queue to scale, and you can monitor this queue to spawn workers.
Apache ActiveMQ is very robust and supports a REST protocol. A Ruby client is also available for accessing the queue.
Interesting article on RESTful queue using Apache ActiveMQ
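As a hedged sketch of that REST interface: ActiveMQ's demo REST API accepts a POST of the message body to /api/message/<destination>?type=queue. The host, port 8161, queue name, and default admin/admin credentials below are assumptions; verify them against your broker's configuration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class ActiveMqRestSend {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Enqueue one message on the "jobs" queue via the demo REST endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8161/api/message/jobs?type=queue"))
                .header("Authorization", "Basic "
                        + Base64.getEncoder().encodeToString("admin:admin".getBytes()))
                .POST(HttpRequest.BodyPublishers.ofString("{\"image\":\"...\"}"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}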
At the end of the day I took the ZeroMQ queue solution: a very fast, robust, and lightweight implementation. I had to write my own broker, but that's the only con of this solution.
Redis publish/subscribe should do the trick:
http://redis.io/topics/pubsub
Messaging middleware solutions (JMS, Tibco, etc.) allow publish/subscribe with "topic" filtering using wildcards, e.g. subscribing to the "ACCOUNT.*" topic delivers both "ACCOUNT.WITHDRAW" and "ACCOUNT.CHECKBALANCE" messages.
The problem is that such a subscription also receives my own published messages.
I'm looking for a mechanism, similar to, say, UDP multicast loopback which can be turned ON or OFF by the transport layer without messing with the data being sent.
Is there a common, declarative (configuration-only, no custom code) way to configure the middleware not to deliver messages that the very same service instance has published? Ideally, it should also be possible to filter out everything published by ALL servers (nodes) of the same "kind".
Thanks in advance.
The JMS API contains this option for TopicSubscribers; e.g. TIBCO EMS lets you create a consumer with the "noLocal" property. That means messages published over a connection are not consumed by clients on that same connection.
For example, see here how to create a topic subscriber with the "noLocal" option:
https://docs.tibco.com/pub/enterprise_message_service/7.0.1-march-2013/doc/html/tib_ems_api_reference/api/javadoc/javax/jms/TopicSession.html
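A minimal sketch of such a subscriber, assuming a JMS 1.1 TopicConnectionFactory is already configured (the topic name is an example, and wildcard syntax is vendor-specific):

import javax.jms.*;

public class NoLocalSubscriber {
    public static void subscribe(TopicConnectionFactory factory) throws JMSException {
        TopicConnection connection = factory.createTopicConnection();
        TopicSession session =
                connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("ACCOUNT.*"); // wildcard syntax varies by vendor
        // Third argument is noLocal: true suppresses messages that were
        // published over this same connection.
        TopicSubscriber subscriber = session.createSubscriber(topic, null, true);
        subscriber.setMessageListener(msg -> System.out.println("received: " + msg));
        connection.start();
    }
}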
No one is answering, so I'll chime in (in a hand-wavey way).
I believe there's nothing in the JMS spec around controlling whether you get your own sent messages back on a topic receiver. So any capability like this would be a non-portable vendor feature, especially for your second requirement (filtering based on the "kind" of JMS client, as opposed to control based on the same connection doing the sending and receiving).
If you've got no flexibility to modify code or message content (properties), I think you have no portable solution, and likely no solution at all for that second "kind" requirement.
If you want to investigate vendor-specific options, you'll need to tell us which vendor you're interested in. You may get nothing, but there's no way to know without asking.