How to implement a multi-node architecture in Crossbar.io?

I'm developing a decentralized (server and client) application architecture and use WebSockets as the communication method between all servers and all clients: a federation of multiple master servers. I am using Crossbar.io on the server end and Autobahn on the client.
On the documentation page in crossbar.io site it says:
A Crossbar.io node is a single instance of the Crossbar.io software
running on a single machine. This Crossbar.io node can form a cluster
or federated network by connecting to other Crossbar.io nodes on the
same, or, more often, on other machines. Externally, the cluster will
behave like a single instance.
While application components connect to specific nodes or are directly
hosted by specific nodes, this is transparent from an application
point of view: application components are agnostic to how and where
they are deployed.
Searching for directions on how to implement this architecture turned up no results, either in the documentation or elsewhere on the web.
How is this architecture implemented?

Unfortunately, this part of the architecture is something we're still working on. Currently, there is no way to cluster Crossbar.io nodes.
It is unfortunate that the architecture page currently suggests that this is already available. I've added a note at the beginning of the page which clarifies that parts of the architecture are not implemented yet (will deploy the changed version today).
Sorry for misleading you there!
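In the meantime, each application component simply connects to a single Crossbar.io node. For reference, a minimal Autobahn|Python (asyncio) client session looks roughly like this; the URL, realm, and topic are placeholders for your node's configuration:

```python
from autobahn.asyncio.wamp import ApplicationSession, ApplicationRunner

class ClientSession(ApplicationSession):
    async def onJoin(self, details):
        # Subscribe to a topic on this node; until clustering exists,
        # events only reach clients connected to the same node.
        def on_event(msg):
            print("received:", msg)
        await self.subscribe(on_event, "com.example.topic")

if __name__ == "__main__":
    # Placeholder URL/realm; point them at your Crossbar.io node.
    runner = ApplicationRunner(url="ws://localhost:8080/ws", realm="realm1")
    runner.run(ClientSession)
```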

Related

Can a micro-service interact with a downstream service through localhost origin?

Can a micro-service interact with a downstream service through a localhost origin? Since all my services are running on the same server, is that a correct approach? I found that calling a downstream service by domain name takes much longer compared to localhost, and I was curious to know whether we can do it like this.
You're right: you can communicate with other services running on the same host via localhost. It's completely fine, and when thinking about network round trips, it's beneficial.
But,
what if you want to scale the services?
What if you want to move any of the services to a different host?
Considering at least these scenarios, binding to a specific host is not worth it. And the same applies if you are using the IP of the host.
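One way to keep that flexibility is to make the downstream address configuration rather than code. A minimal sketch; the environment variable, port, and endpoint names here are illustrative:

```python
import os
import requests  # third-party HTTP client, assumed installed

# Default to localhost for co-located deployments, but allow the address
# to be overridden via the environment so the service can move hosts later.
DOWNSTREAM_URL = os.environ.get("DOWNSTREAM_URL", "http://localhost:8081")

def fetch_order(order_id):
    resp = requests.get("%s/orders/%s" % (DOWNSTREAM_URL, order_id), timeout=2)
    resp.raise_for_status()
    return resp.json()
```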
*I found that calling a downstream service by domain name takes much longer compared to localhost.*
I see what you're saying.
A microservices architecture is not a silver bullet for software design and always comes with tradeoffs.
And about your deployment strategy, the Multiple Service Instances per Host pattern:
How are you going to handle services with different resource requirements?
Say, what if one of your services utilizes all of the host's resources?
What if you need to scale out one independent service?
How are you going to ensure the availability of your services?
And so on. There are many questions you must consider before going with a pattern in microservices; it all depends on your requirements.
If your services are on the same server, you should be using a message broker or a mechanism like gRPC to talk between your services, so it doesn't matter what your origin is. If you are using plain HTTP to communicate between your micro-services, then you are not gaining any of the advantages of a micro-services architecture, and your architecture is flawed.
Microservices are a concept; they don't dictate where you deploy your application or how the services call each other. You may deploy your microservices on different virtual machines that are hosted on the same physical server. The whole point is that you need a reason for everything you decide to do with your architecture.
The first question is: why have you split your application into different microservices? Only to carry the word "microservice" in your architecture, or to get better control over the business logic, scalability, and maintainability of the project?
These are important things you need to take care of when you are designing an application. Draw the big picture of your product and how it's going to be used. Which service/component is used most by the customers? Does keeping it with the other microservices on the same server cause performance issues or not? What if an issue happens to the server and the whole application becomes unreachable?

How can a Phoenix application tailored only to use channels scale on multiple machines? Using HAProxy? How to broadcast messages to all nodes?

I use the node application purely for socket.io channels with Redis PubSub, and at the moment I have it spread across 3 machines, backed by nginx load balancing on one of the machines.
I want to replace this node application with a Phoenix application. I'm still new to the Erlang/Elixir world, so I haven't figured out how a single Phoenix application can span more than one machine. Googling all possible scaling and load-balancing terms yielded nothing.
The 1.0 release notes mention this regarding channels:
Even on a cluster of machines, your messages are broadcasted across the nodes automatically
1) So do I basically deploy my application to N servers, starting the Cowboy server on each one of them, similarly to how I do with node, and then tie them together with nginx/HAProxy?
2) If that is the case, how are channel messages broadcast across all nodes, as mentioned in the release notes?
EDIT 3: Taking Theston's answer, which clarifies that there is no such thing as a Phoenix application, but rather Elixir/Erlang applications, I updated my search terms and found some interesting results regarding scaling and load balancing.
A free extensive book: Stuff Goes Bad: Erlang in Anger
Erlang pooling libraries recommendations
EDIT 2: Found this from Elixir's creator:
Elixir provides conveniences for process grouping and global processes (shared between nodes) but you can still use external libraries like Consul or Zookeeper for service discovery or rely on HAProxy for load balancing for the HTTP based frontends.
EDIT: "Connecting Elixir nodes on the same LAN" is the first result that mentions inter-Elixir communication, but it isn't related to Phoenix itself, and it isn't clear how it relates to load balancing and to each Phoenix node communicating with the others.
Phoenix isn't the application; when you generate a Phoenix project you create an Elixir application, with Phoenix being just a dependency (effectively a bunch of things that make building the web part of your application easier).
Therefore any Node distribution you need to do can still happen within your Elixir application.
You could just use Phoenix for the web routing and then pass the data on to your underlying Elixir app to handle the distribution across nodes.
It's worth reading http://www.phoenixframework.org/v1.0.0/docs/channels (if you haven't already) where it explains how Phoenix channels are able to use PubSub to distribute (which can be configured to use different adapters).
Also, are you spinning up Cowboy on your deployment servers by running mix phoenix.server?
If so, then I'd recommend looking at EXRM https://github.com/bitwalker/exrm
This will bundle your Elixir application into a self contained file that you can simply deploy to your production servers (with Capistrano if you like) and then you start your application.
It also means you don't need any Erlang/Elixir dependencies installed on the production machines either.
In short, Phoenix is not like Rails, Phoenix is not the application, not the stack. It's just a dependency that provides useful functionality to your Elixir application.
Unless I am misunderstanding your use case, you can still use the exact scaling technique your node version of the application uses: simply deploy the Phoenix application to more than one machine and use an Nginx load balancer configured to forward requests to one of the many application machines.
The built in node communications etc of Erlang are used for applications that scale in a different way than a web app. For instance, distributed databases or queues.
Look at Phoenix.PubSub
It's where Phoenix internally has the Channel communication bits.
It currently has two adapters:
Phoenix.PubSub.PG2 - uses Distributed Elixir, directly exchanging notifications between servers. (This requires that you deploy your application as an Elixir/Erlang distributed cluster.)
Phoenix.PubSub.Redis - uses Redis to exchange data between servers. (This should be similar to solutions found in socket.io and others)
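For intuition, the Redis adapter follows the same pattern as the socket.io + Redis setup in the question: every node subscribes to a shared Redis channel, and each node fans incoming messages out to its own locally connected clients. A rough, language-neutral sketch of that pattern in Python with redis-py (the topic name and callback are illustrative):

```python
import json
import redis  # redis-py client, assumed installed

r = redis.Redis(host="localhost", port=6379)

def broadcast(topic, payload):
    # Publish once; every node subscribed to this topic receives it.
    r.publish(topic, json.dumps(payload))

def listen(topic, deliver_locally):
    # Each node runs one subscriber loop and fans messages out to its
    # own locally connected websocket clients via deliver_locally().
    pubsub = r.pubsub()
    pubsub.subscribe(topic)
    for message in pubsub.listen():
        if message["type"] == "message":
            deliver_locally(json.loads(message["data"]))
```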

Golang tour distributed pattern

According to this article, the App Engine front-end and the playground back-end communicate through RPC calls. Instances of both the App Engine front-end and the playground back-end can be created as needed to support scaling.
I am asking myself: what pattern(s) could be used to load-balance work between front-end requests and back-end instances while keeping RPC?
One solution may be to use one global work queue into which tasks are put with a 'Reply-To' header. This header would point to a per-front-end-instance queue where responses are put: something like the schema in the RabbitMQ RPC tutorial, with rpc_queue shared between the back-end instances.
I am not sure this would be a good way to do it, especially given that if the shared queue goes offline, the whole system fails (but how do you take care of that?).
Thank you.
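(For concreteness, here is roughly what that 'Reply-To' pattern looks like on the front-end side, following the style of the RabbitMQ Python tutorials with the pika client; the queue and payload names are illustrative.)

```python
import uuid
import pika  # RabbitMQ client library, assumed installed

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Shared work queue that all back-end instances consume from.
channel.queue_declare(queue="rpc_queue")

# Exclusive, server-named queue owned by this front-end instance;
# back-ends send their responses here via the reply_to property.
result = channel.queue_declare(queue="", exclusive=True)
callback_queue = result.method.queue

correlation_id = str(uuid.uuid4())
channel.basic_publish(
    exchange="",
    routing_key="rpc_queue",
    properties=pika.BasicProperties(
        reply_to=callback_queue,
        correlation_id=correlation_id,  # matches responses to requests
    ),
    body=b"compile-and-run request",
)
```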
As an answer and a follow-up to the comments I received on the first post, I developed Indenter, a small proof of concept based on the proposed idea of a service-discovery daemon (however, I use etcd instead of ZooKeeper for simplicity).
I wrote an article about it and released the code, in case someone is interested one day:
Indenter: a scalable, fault-tolerant, distributed web service copying the go playground architecture.

Should cluster support be at the application or framework level?

Let's say you're starting a new web project that requires the website to run on an MVC framework on Mono. A couple of major requirements are that it has to scale easily, be stable, and work with multiple servers that may or may not be in the same place or even on the same local network.
The first thing I thought of was a sort of cluster communication between servers. Each server would act as a node and be its own standalone application and would query other nodes in a known list for session information and things like that.
But one major design question I have is: should this functionality be built into the supporting framework, or should the application handle the synchronization of the data?
Or am I just way off and this would never work?
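To make the idea concrete, the application-level variant I have in mind would look something like this (the node list, port, and endpoint are all hypothetical):

```python
import requests  # HTTP client, assumed installed

# Hypothetical known-node list; in practice this would come from
# configuration or a discovery service.
PEERS = ["http://node-a:8000", "http://node-b:8000"]

def find_session(session_id):
    # Ask each peer in turn until one of them knows the session.
    for peer in PEERS:
        try:
            resp = requests.get("%s/sessions/%s" % (peer, session_id), timeout=1)
            if resp.status_code == 200:
                return resp.json()
        except requests.RequestException:
            continue  # peer unreachable; try the next one
    return None
```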
Normally, clustering belongs to some kind of middleware layer, and thus at the framework level. However, it can also be implemented at the application level.
It depends on your exact use case: whether you want load balancing, scalability, etc.

Should I host Website and REST API on the same server or split?

I have a web application that consists of a Website and a REST API. Should I host them on the same server, or should I host them on different servers? By "server" I mean a server cluster: several servers behind a load balancer.
The API is mostly inbound traffic; the website is mostly outbound.
If it matters - hosted on Rackspace and/or AWS.
Here is what I see so far:
Benefits of having Website and REST API on the same server
Simple deployment
Simple scaling - something is slow - just launch another instance
Single load balancer configuration
Simple monitoring
Simple, simple, simple ...
Effective use of full duplex network (API - inbound, website - outbound)
Benefits of splitting
API overload will not affect website load time
Detailed monitoring (I will know which component uses resources at this moment)
Any comments?
Thank you
Alexander
Just as you stated, in most situations there are more advantages to hosting the API on the same server as the website, so I would stick with that option.
But if you predict a lot of traffic to either the website or the API, then maybe a separate server would be better suited.
If this is behind a load balancer, why don't you leave the services and pages on the same site and let the load balancer/cluster do its job?
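Nothing stops one deployable from serving both, so identical instances can be scaled out behind the balancer. A minimal sketch of that single-app approach (Flask used purely for illustration; the routes are hypothetical):

```python
from flask import Flask, jsonify  # assumed installed

app = Flask(__name__)

# Website route: mostly outbound traffic (HTML to browsers).
@app.route("/")
def home():
    return "<h1>Site</h1>"

# API route in the same app: the load balancer spreads both kinds
# of traffic across identical instances.
@app.route("/api/items")
def list_items():
    return jsonify(["a", "b"])

if __name__ == "__main__":
    app.run(port=8000)
```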
Your list of advantages/disadvantages are operational considerations, but you should consider application needs as well.
Caching?
Security?
Other resources, e.g. the filesystem
These may or may not apply, but if your application architecture is different between the two, be sure to factor this into your decision.
