WCF and Akka.Net vs Pub/Sub - performance

We are using WCF for our company and we would like to make sure that we have complete separation between Client and Service.
We would like to study whether using either Akka.Net (Orleans) or one of the Pub/Sub frameworks can help us reduce the complexity and possibly improve our current system.
Which one would you choose, Actor Model or Pub/Sub, and why?
Thanks

It's all about tradeoffs, and the choice depends on your priorities. To be honest, I don't think that the actor model or pub/sub will automagically help you reduce complexity. If you're not familiar with the patterns and philosophy of either of them, it's probable that the result will be quite the opposite. While both are built on interesting concepts that may help you decouple components in your application, they move complexity to other places (such as transactions and read-write separation).
I've described a conceptual difference between the actor model and pub/sub frameworks in another SO thread. But the general idea is: pub/sub is centered on message-passing reliability, while the actor model is more about scaling out the application and its resiliency.
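To make the "resiliency" point a bit more concrete, here is a minimal, hedged sketch of actor supervision. The question mentions Akka.NET; this sketch uses JVM Akka (classic actors) in Scala, on which the Akka.NET API is closely modeled, and the actor and message names are made up for illustration.

    import akka.actor.{Actor, ActorSystem, OneForOneStrategy, Props, SupervisorStrategy}
    import akka.actor.SupervisorStrategy.Restart

    // A child that may fail while processing a message.
    class Worker extends Actor {
      def receive: Receive = {
        case "boom" => throw new RuntimeException("worker failed")
        case msg    => println(s"handled: $msg")
      }
    }

    // A parent that restarts the child instead of letting the failure escape.
    class Supervisor extends Actor {
      override val supervisorStrategy: SupervisorStrategy =
        OneForOneStrategy() { case _: RuntimeException => Restart }

      private val worker = context.actorOf(Props(new Worker), "worker")

      def receive: Receive = { case msg => worker forward msg }
    }

    object SupervisionSketch extends App {
      val system     = ActorSystem("sketch")
      val supervisor = system.actorOf(Props(new Supervisor), "supervisor")
      supervisor ! "job-1"
      supervisor ! "boom"   // the worker is restarted by its supervisor
      supervisor ! "job-2"  // ...and keeps processing messages afterwards
    }

A pub/sub broker, by contrast, is mostly concerned with whether the message itself gets delivered (ordering, persistence, redelivery), not with keeping the consuming component alive and restarting it when it fails.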

Related

Does it make sense to use GraphQL for microservices intercommunication?

I've read a lot about using GraphQL as an API gateway for the front-end in front of the microservices.
But I wonder whether all the GraphQL advantages over REST aren't relevant to communication between the microservices as well.
Any inputs, pros/cons and successful usage examples will be appreciated.
Key notes to consider:
GraphQL isn't a magic bullet, nor is it "better" than REST. It is just different.
You can definitely use both at the same time, so it is not either/or.
Per specific use, GraphQL (or REST) can be anywhere on the scale of great to horrible.
GraphQL and REST aren't exact substitutes:
GraphQL is a query language, specification, and collection of tools, designed to operate over a single endpoint via HTTP, optimizing for performance and flexibility.
REST is an architectural style / approach for general communication that utilizes the uniform interface of the protocols it exists in.
Some reasons for avoiding a common use of GraphQL between microservices:
GraphQL is mainly useful when the client needs a flexible response it can control without making changes to the server's code (see the sketch after this list).
When you grant the client service control over the data it receives, it can lead to exposing too much data, hence compromising the encapsulation of the serving service. This is a long-term risk to system maintainability and the ability to change.
Between microservices, latency is far less of an issue than between client and server, and the same goes for the aggregation capabilities.
A uniform interface is really useful when you have many services, but GraphQL may be counter-productive to that cause.
The flexible queries that GraphQL allows can be more challenging to optimize for performance.
Updating a hierarchy of objects at once (GraphQL's natural structure) may add complexity around atomicity, idempotency, error reporting, etc.
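As a hedged illustration of the first two points, here is roughly what a service-to-service GraphQL call looks like: the calling service decides the shape of the response. The endpoint URL and field names below are hypothetical; the HTTP client is the JDK's java.net.http, called from Scala.

    import java.net.URI
    import java.net.http.{HttpClient, HttpRequest, HttpResponse}

    object GraphQLCallSketch extends App {
      // The caller picks exactly the fields it wants from the exposed graph;
      // nothing in the serving service's code limits what a peer may select.
      val query = """{"query": "{ user(id: \"42\") { name orders { total } } }"}"""

      val client = HttpClient.newHttpClient()
      val request = HttpRequest.newBuilder(URI.create("http://user-service/graphql"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(query))
        .build()

      val response = client.send(request, HttpResponse.BodyHandlers.ofString())
      println(response.body())   // only the selected fields come back
    }

With a plain REST endpoint, the serving service decides which representation it returns, so the contract, and what the service is allowed to change later, stays under its control.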
To recap:
GraphQL can be really great for server-to-server communication, but most likely it will be a good fit only in a small percentage of the use-cases.
Do you have a use-case for an API Gateway between services? Maybe this is the question you should ask yourself. GraphQL is just a (popular) tool.
Like always, it is best to match a problem to a tool.
I don't have experience with using GraphQL in a microservices environment, but I'm inclined to think that it's not the greatest fit for microservices.
To add a little more color to @Lior Bar-On's answer: GraphQL is more of a query language and is more dynamic in nature. It is often used to aggregate data sets as the result of a single request, which in a microservice environment will potentially require many requests being made to many services. At the same time, it adds the complexity of translating a query into the gathering of information from the respective sources of that information (other microservices). Of course, how complex that is depends on how micro your services are and what queries you may look to support.
On the other hand, I think a monolith that uses an MVC architecture may actually have the upper hand, because it owns a larger body of data that it can query directly.

Does it make sense to use actor/agent oriented programming in Function as a Service environment?

I am wondering whether it is possible to apply an agent/actor library (Akka, Orbit, Quasar, JADE, Reactors.io) in a Function as a Service environment (OpenWhisk, AWS Lambda)?
Does it make sense?
If yes, what is a minimal example that presents the added value (something that is missing when we use only FaaS or only an actor/agent library)?
If no, are we able to construct a decision graph that can help us decide whether our problem calls for an actor/agent library, FaaS, or something else?
This is more of an opinion-based question, but I think that in their current shape there's no sense in putting actors into FaaS. The opposite actually works quite well: OpenWhisk is implemented on top of Akka.
There are several reasons:
FaaS in its current form is inherently stateless, which greatly simplifies things like request routing. Actors are stateful by nature (see the sketch after this list).
From my experience, FaaS functions are usually disjointed. Of course you need some external resources, but this is the mental model: generic resources and capabilities. In actor models we tend to think in terms of particular entities represented as actors, i.e. the user Max rather than a table of users. (I'm not covering the scope of using actors solely as a unit of concurrency here.)
FaaS applications have a very short lifespan; this is one of the founding stones behind them. Since creation, placement and state recovery for more complex actors may take a while, and you usually need a lot of them to perform a single task, you may end up at a point where restoring the state of the system takes more time than actually performing the task that state is needed for.
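To make the stateless/stateful contrast concrete, here is a small, hedged sketch (Scala, classic Akka, made-up names): the FaaS-style handler gets everything it needs from the event, while the actor has an identity and state that must exist, and be recovered, before it can answer.

    import akka.actor.{Actor, ActorSystem, Props}

    // FaaS-style handler: a pure function of the event, nothing survives the call.
    final case class ChargeEvent(userId: String, amount: Int)
    object Handler {
      def handle(event: ChargeEvent): String =
        s"charged ${event.userId} ${event.amount}"
    }

    // Actor: a long-lived identity whose state accumulates across messages.
    class Account(id: String) extends Actor {
      private var balance = 0   // would have to be recreated/recovered on every cold start
      def receive: Receive = {
        case amount: Int => balance += amount
        case "print"     => println(s"$id balance = $balance")
      }
    }

    object FaasVsActorSketch extends App {
      println(Handler.handle(ChargeEvent("max", 10)))

      val system = ActorSystem("sketch")
      val max    = system.actorOf(Props(new Account("max")), "account-max")
      max ! 10
      max ! 5
      max ! "print"
    }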
That being said, it's possible that in the future these two approaches will eventually converge, but that needs to be accompanied by changes in both the mental and the infrastructural model (i.e. actors live in a runtime, which FaaS must be aware of). IMO, setting up existing actor frameworks on top of existing FaaS providers is not feasible at this point.

ZeroMQ: Can I use a ROUTER and a DEALER as server / client, instead of using them as proxies?

I have a server/client application, which uses a REQ/REP formal pattern and I know this is synchronous.
Can I completely replace zmq.REQ / zmq.REP with zmq.ROUTER and zmq.DEALER?
Or do these have to be used only as intermediate proxies?
ZeroMQ is a box with a few smart and powerful building blocks
However, only the Architect and the Designer decide how well or how poorly these get harnessed in your distributed application's architecture.
So synchronicity or asynchronicity is not an inherent feature of some particular ZeroMQ Scalable Formal Communication Pattern's access-node, but depends on the real deployment, within some larger context of use.
Yes, ROUTER can talk to DEALER, but ...
As one may read in detail in the ZeroMQ API-specification tables, so-called compatible socket archetypes are listed for each named socket type. However, anyone can grasp much stronger powers from ZeroMQ by adopting the ZeroMQ way of thinking: spending more time on the ZeroMQ concept and its set of Zero-maxims -- Zero-copy + (almost) Zero-latency + Zero-warranty + (almost) Zero-scaling-degradation, etc.
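Here is a minimal, hedged sketch of a ROUTER used as the "server" and a DEALER as the "client", assuming the JeroMQ binding (org.zeromq:jeromq) from Scala; the port and payloads are made up, and both sockets live in one process only to keep the example self-contained. Unlike REQ/REP, the DEALER may send several requests without waiting, and the ROUTER must prepend the peer's identity frame when replying.

    import org.zeromq.{SocketType, ZContext, ZMQ}

    object RouterDealerSketch extends App {
      val ctx = new ZContext()

      val router = ctx.createSocket(SocketType.ROUTER)   // plays the "server"
      router.bind("tcp://*:5555")

      val dealer = ctx.createSocket(SocketType.DEALER)   // plays the "client"
      dealer.connect("tcp://localhost:5555")

      // Unlike REQ, a DEALER may fire several requests without waiting for replies.
      dealer.send("request-1")
      dealer.send("request-2")

      for (_ <- 1 to 2) {
        val identity = router.recv(0)          // frame 1: which peer sent the request
        val payload  = router.recvStr(0)       // frame 2: the request body
        router.send(identity, ZMQ.SNDMORE)     // address the reply to that peer...
        router.send(s"reply-to:$payload", 0)   // ...then send the reply body
      }

      println(dealer.recvStr(0))   // the ROUTER strips the identity, the DEALER sees only the body
      println(dealer.recvStr(0))
      ctx.close()
    }

The asynchronicity here comes from the deployment, not from the socket names: the DEALER keeps no lock-step with its peer, and the ROUTER may interleave replies to many peers.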
The best next step:
IMHO, if you are serious about professional messaging, get the great book and source from it both the elementary-setup knowledge, the somewhat more complex multi-socket messaging-layer designs with soft signalling, and also the further thoughts about the great powers of concurrent, heterogeneous, distributed processing, to advance your learning curve.
Pieter Hintjens' book "Code Connected, Volume 1" (available in PDF) is more than a recommended source for your issue.
There you will get grounds for your further use of ZeroMQ.
ZeroMQ is a great tool, not just for the messaging layer itself. Worth time and efforts.

Advantages of actors over futures

I currently program in Futures, and I'm rather curious about actors. I'd like to hear from an experienced voice:
What are the advantages of actors over futures?
When should I use one instead of other?
As far as I've read, actors hold state and futures don't; is this the only difference? So if I have true immutability, I shouldn't care about actors?
Please enlighten me :-)
One important difference is that actors typically have internal state, and therefore, theoretically, they are not composable; see this and this blog post for an elaboration of some of the issues. However, in practice they usually provide a sweet spot between the imperative and the purely functional approach. So if possible, it is recommended to stick to programming with only futures, but if the message-passing model fits your problem domain better, feel free to use actors.
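A minimal, hedged sketch of the contrast in Scala (classic Akka for the actor; names are made up): the future is a one-shot, stateless value you compose, while the actor is a long-lived identity with internal state and a mailbox.

    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._
    import akka.actor.{Actor, ActorSystem, Props}

    // An actor: a long-lived identity with a mailbox and internal state.
    class Till extends Actor {
      private var total = 0                   // mutable state, confined to this actor
      def receive: Receive = {
        case amount: Int => total += amount
        case "print"     => println(s"till total = $total")
      }
    }

    object FuturesVsActorsSketch extends App {
      // A future: a one-shot, stateless value composed with pure functions.
      val price: Future[Int] = Future(40)
      val taxed: Future[Int] = price.map(p => p + p / 10)
      println(Await.result(taxed, 1.second))  // 44, and nothing is left behind

      // The actor keeps a running total across messages.
      val system = ActorSystem("sketch")
      val till   = system.actorOf(Props(new Till), "till")
      till ! 40
      till ! 4
      till ! "print"

      Thread.sleep(200)
      system.terminate()
    }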

Actor model to replace the threading model?

I read a chapter in a book (Seven Languages in Seven Weeks by Bruce A. Tate) about Matz (the inventor of Ruby) saying 'I would remove the thread and add actors, or some other more advanced concurrency features'.
Why and how can an actor model be an advanced concurrency model that replaces threading?
What other models are 'advanced concurrency models'?
It's not so much that the actor model will replace threads; at the level of the CPU, processes will still have multiple threads which are scheduled and run on the processor cores. The idea of actors is to replace this underlying complexity with a model which, its proponents argue, makes it easier for programmers to write reliable code.
The idea of actors is to have separate threads of control (processes in Erlang parlance) which communicate exclusively by message passing. A more traditional programming model would be to share memory, and coordinate communication between threads using mutexes. This still happens under the surface in the actor model, but the details are abstracted away, and the programmer is given reliable primitives based on message passing.
One important point is that actors do not necessarily map 1-1 to threads -- in the case of Erlang, they definitely don't -- there would normally be many Erlang processes per kernel thread. So there has to be a scheduler which assigns actors to threads, and this detail is also abstracted away from the application programmer.
If you're interested in the actor model, you might want to take a look at the way it works in Erlang or Scala.
If you're interested in other types of new concurrency hotness, you might want to look at software transactional memory, a different approach that can be found in Clojure and Haskell.
It bears mentioning that many of the more aggressive attempts at creating advanced concurrency models appear to be happening in functional languages. Possibly due to the belief (I drink some of this kool-aid myself) that immutability makes concurrency much easier.
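Software transactional memory is mentioned above in the context of Clojure and Haskell; since the other sketches on this page use Scala, here is a hedged sketch of the same idea with the scala-stm library (the account names are made up, and the concept carries over directly to Clojure's refs and Haskell's STM).

    import scala.concurrent.stm._

    object StmSketch extends App {
      val checking = Ref(100)
      val savings  = Ref(0)

      // Either both refs change or neither does; conflicting transactions are retried.
      def transfer(amount: Int): Unit =
        atomic { implicit txn =>
          checking() = checking() - amount
          savings()  = savings()  + amount
        }

      transfer(40)
      println((checking.single(), savings.single()))   // (60, 40)
    }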
I made this question one of my favorites and am waiting for answers, but since there still aren't any, here is mine.
Why and how an actor model can be an advanced concurrency model that replaces the threading?
Actors can get rid of mutable shared state, which is very difficult to code right. (My understanding is that) actors can basically be thought of as objects with their own thread(s). You send messages between actors; these are queued and consumed by the thread within the actor. So whatever state is in the actor is encapsulated and will not be shared, which makes it easier to code correctly.
see also http://www.slideshare.net/jboner/state-youre-doing-it-wrong-javaone-2009
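A hedged, hand-rolled sketch of that mental model in Scala (a real actor library adds scheduling, supervision, location transparency and much more): the "actor" is just an object whose mailbox is drained by its own thread, so callers never touch its state directly and need no locks.

    import java.util.concurrent.LinkedBlockingQueue

    // "An object with its own thread": the state is touched only by the actor's
    // thread; callers just drop messages into the mailbox.
    final class CounterActor {
      private val mailbox = new LinkedBlockingQueue[String]()
      private var count = 0

      private val worker = new Thread(() => {
        while (true) mailbox.take() match {   // blocks until a message arrives
          case "inc"   => count += 1
          case "print" => println(s"count = $count")
          case _       => ()
        }
      })
      worker.setDaemon(true)
      worker.start()

      def !(msg: String): Unit = mailbox.put(msg)   // "send" = enqueue, no locks in caller code
    }

    object HandRolledActorSketch extends App {
      val counter = new CounterActor
      counter ! "inc"
      counter ! "inc"
      counter ! "print"
      Thread.sleep(100)   // give the actor's thread time to print before the JVM exits
    }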
What other models are the 'advanced concurrency model'?
see http://www.slideshare.net/jboner/state-youre-doing-it-wrong-javaone-2009
See Dataflow Programming. It's an approach that is a layer on top of the usual OOP design. In a few words:
there is a scene where Components reside;
Components have Ports: Producers (output, which generate messages) and Consumers (input, which process messages);
Messages are pre-defined between Components: one Component's Producer port is bound to another's Consumer.
The programming happens on three layers (a tiny sketch of all three follows below):
writing the dataflow system (language, framework/server, component API),
writing Components (system, basic, and domain-oriented ones),
creating the dataflow program: placing components onto the scene and defining the messages between them.
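Here is a hedged, hand-rolled sketch of the three layers in Scala (the component names are made up, and real dataflow/FBP frameworks are far richer): the "system" layer is a typed channel between ports, the components only read from consumer ports and write to producer ports, and the "program" wires them together on the scene.

    import java.util.concurrent.LinkedBlockingQueue

    object DataflowSketch extends App {
      // Layer 1: the "system": here just a typed channel linking a Producer port
      // to a Consumer port.
      final class Channel[A] {
        private val q = new LinkedBlockingQueue[A]()
        def put(a: A): Unit = q.put(a)   // written to by a Producer port
        def take(): A = q.take()         // read from by a Consumer port
      }

      // Layer 2: components, each only aware of its own ports.
      def numberSource(out: Channel[Int]): Unit = (1 to 3).foreach(out.put)
      def doubler(in: Channel[Int], out: Channel[Int]): Unit =
        (1 to 3).foreach(_ => out.put(in.take() * 2))
      def printerSink(in: Channel[Int]): Unit =
        (1 to 3).foreach(_ => println(in.take()))

      // Layer 3: the dataflow program: place components on the scene and wire the ports.
      val a = new Channel[Int]
      val b = new Channel[Int]
      val components = Seq(
        new Thread(() => numberSource(a)),
        new Thread(() => doubler(a, b)),
        new Thread(() => printerSink(b))
      )
      components.foreach(_.start())
      components.foreach(_.join())
    }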
The Wikipedia article is a good starting point to understand the approach: http://en.wikipedia.org/wiki/Flow-based_programming
See also "actor model", "dataflow programming" etc.
Please see the following paper:
Actor Model of Computation
Also please see
ActorScript(TM) extension of C#(TM), Java(TM), and Objective C(TM): iAdaptive(TM) concurrency for antiCloud(TM) privacy and security
