Does it make sense to use actor/agent-oriented programming in a Function-as-a-Service environment?

I am wondering whether it is possible to apply an agent/actor library (Akka, Orbit, Quasar, JADE, Reactors.io) in a Function-as-a-Service environment (OpenWhisk, AWS Lambda)?
Does it make sense?
If yes, what is a minimal example that presents added value (value that is missing when we use only FaaS or only an actor/agent library)?
If no, can we construct a decision graph that helps us decide whether our problem calls for an actor/agent library, FaaS, or something else?

This is a rather opinion-based question, but I think that in its current shape there's no sense in putting actors into FaaS - the opposite actually works quite well: OpenWhisk is implemented on top of Akka.
There are several reasons:
FaaS in its current form is inherently stateless, which greatly simplifies things like request routing. Actors are stateful by nature.
In my experience, FaaS functions are usually disjoint - of course you need some external resources, but the mental model is one of generic resources and capabilities. In actor models we tend to think in terms of particular entities represented as actors (i.e. the user Max rather than a table of users). I'm not covering here the use of actors solely as a unit of concurrency.
FaaS applications have a very short lifespan - this is one of their founding principles. Since creation, placement and state recovery for more complex actors may take a while, and you usually need a lot of them to perform a single task, you may reach a point where restoring the state of the system takes more time than actually performing the task that state is needed for.
That being said, it's possible that in the future those two approaches will converge, but that will need to be accompanied by changes in both the mental and the infrastructural model (i.e. actors live in a runtime, which FaaS must be aware of). IMO, setting up existing actor frameworks on top of existing FaaS providers is not feasible at this point.
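To make the first point concrete, here is a toy Python sketch of the contrast - a stateless FaaS-style handler that fetches its state from outside on every call, versus an actor that keeps state in memory between messages. This is an illustration only, not the API of any of the frameworks mentioned above.

```python
import queue
import threading

# Stateless FaaS-style handler: every invocation loads state from
# external storage (here, a dict standing in for a database).
def handle_deposit(event, db):
    balance = db.get(event["user"], 0)       # state round-trip on each call
    db[event["user"]] = balance + event["amount"]
    return db[event["user"]]

# Stateful actor: state lives inside the actor between messages,
# and messages are processed one at a time from a mailbox.
class AccountActor:
    def __init__(self):
        self.balance = 0                     # in-memory state, no round trip
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            amount, reply = self.mailbox.get()
            self.balance += amount           # sequential message processing
            reply.put(self.balance)

    def deposit(self, amount):
        reply = queue.Queue()
        self.mailbox.put((amount, reply))
        return reply.get()
```

The handler can be routed anywhere because it carries nothing between calls; the actor is cheap to talk to once created, but something must place it, keep it alive, and recover its state - exactly the mismatch described above.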

Related

How performant is Vue 3 provide/inject?

I work on a project that leans on an app-level provide for global state management. At first, the provided object was small, but it has grown with the project.
How much does app performance suffer from injecting this object wherever its data is needed? Short of implementing a global state management tool like Pinia, is it worthwhile to decompose the large object we're providing into individual objects, so we provide "chunks" of data where they are needed?
Since it's global, I'd guess it's quite big memory-wise. Take a few hours to set up Pinia; that will be faster/easier, since it pulls in only the parts you need.
We can't give you an exact number for your situation, since it mostly depends on a lot of things.
PS: the slowdown can also be totally irrelevant and coming from another part of your code, to be honest.

Should I share my library between Microservices?

I am gaining my first experience with microservices and need help with an important decision.
Example: We have
CustomerService (contains CustomerDTO)
InvoiceService (contains CustomerDTO, InvoiceDTO)
PrintService (contains CustomerDTO, InvoiceDTO)
As you can see, I have heavy code duplication here: each time I modify CustomerDTO, I need to do it in three different microservices.
My potential solution is to extract the duplicated classes into a library and share this library between the microservices. But in my opinion this would break the microservice approach.
So, what is the right way to handle my problem?
As with most things, there is no single right way to solve this problem. There are pros and cons to each approach, depending on your circumstances.
Here are probably your main options:
Create a new common service:
If your logic is sufficiently complex, many services need it, and you have time on your hands, then look at creating a common service that fulfills this need. This may not be a small undertaking, as you now have one more service to manage/deploy/scale.
Just accept the duplication:
This may seem counter-intuitive; however, if the logic is small enough, it may be better to just duplicate it rather than couple your micro-services with a library.
Create the library:
Real life is different from the textbooks. We are often constrained by time and budget, among other things. If your micro-services are small enough, you know this logic will not change much, or you just need to ship and removing the duplication would take more time, then take this approach. There is nothing to say you can't address it later when you have the time/budget. Countless blog articles will scream at you for doing this; however, they aren't you and don't know the circumstances of your project.
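For illustration, here is what the library option can look like in miniature - a Python sketch with hypothetical names, where the shared DTO lives in one module that each service imports. The trade-off described above is immediately visible: any change to the shared class now ripples into every dependent service.

```python
from dataclasses import dataclass, asdict

# shared_dtos.py -- the extracted library. Every service now depends on
# it, so a change here forces dependent services to upgrade in lockstep.
@dataclass
class CustomerDTO:
    customer_id: int
    name: str
    email: str

# customer_service.py -- uses the shared class instead of its own copy
def serialize_customer(customer: CustomerDTO) -> dict:
    return asdict(customer)

# invoice_service.py -- same class, same wire format, zero duplication
def invoice_header(customer: CustomerDTO) -> str:
    return f"Invoice for {customer.name} <{customer.email}>"
```

With duplication instead, each service would own its private copy of CustomerDTO and could evolve it independently - at the cost of keeping three copies in sync by hand.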

Common service for READ and WRITE - CQRS + DDD

I have an aggregate root with some properties that need to be calculated - totals.
These properties are not saved, but they are needed to fill a readDto while seeding, and to fill an Event for the EventStore when the aggregateRoot is created or updated.
Is it good practice to have a COMMON service between the read and the write part, which calculates these totals and provides them to DTOs, events, or wherever else they are needed?
to have a COMMON service
It's permissible to use any valid technique you'd normally use to effect the DRY principle (but bear in mind to temper this with the Rule of Three).
Sometimes this means copying a file of helpers between the write/decision process and a projection service. Sometimes you might even compile those into a single helper library. Sometimes it's a piece of code in a dynamic language that can be run against an instance of an event (or a series of events being folded), hydrated in the context of the decision process, the projection service, or a reader process.
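As a sketch of the "single helper library" idea, here is a pure Python helper (event and DTO shapes are hypothetical) that both the write side and a projection can import, so the totals are computed in one place but never stored:

```python
# totals.py -- a pure helper that both sides can import.
def order_total(line_items):
    """Derive the total from line items; always computed, never persisted."""
    return sum(item["qty"] * item["unit_price"] for item in line_items)

# write side: compute the total when emitting the event
def make_order_created_event(order_id, line_items):
    return {"type": "OrderCreated",
            "order_id": order_id,
            "line_items": line_items,
            "total": order_total(line_items)}

# read side: the same helper fills the read DTO during seeding
def make_read_dto(order_id, line_items):
    return {"order_id": order_id, "total": order_total(line_items)}
```

Because the helper is a pure function with no I/O, sharing it (or even copying it) couples the two sides only to the calculation rule, not to a deployed service.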
There is no fundamental law that says there must be exactly one thing (a compiled piece of code running as a single service) which owns the rule - for starters, what happens if you want to deploy an update to that service without downtime?
In short, there is no hard and fast rule; any such one-size-fits-all prescriptive recipe would have hundreds of exceptions. A bit like an enterprise-wide universal data model ;)
I am not that afraid of breaking the DRY principle by using a single service.
My real concern is whether using common read/write stuff is against the CQRS pattern.
Quite often this can be legitimate - readers and writers both observe the stream of events in order to infer ('fold') a rolling state. Readers may expose contextual information which can be used to drive 'users' to formulate commands representing desired actions/state transitions, and those necessarily have a degree of overlap. The key thing is that decision-making itself should live only on the write side (but this fits well with the general principle that a command should normally not result in feedback - typically it should be idempotently processable, with any validation etc. being a fait accompli, a security-driven double check).
And secondly, whether having logic in the read part is against the CQRS pattern.
One typically agreed no-no is having conditional logic in the fold which maintains the rolling state - this should be simple, mechanical accumulation.
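A minimal Python sketch of such a mechanical fold (event names hypothetical) - each handler is a dumb state transition that dispatches on event type only, with no business decisions:

```python
from functools import reduce

# Mechanical accumulation: no validation, no branching on business
# rules -- those decisions were already made on the write side.
def apply_event(state, event):
    if event["type"] == "Deposited":
        return {**state, "balance": state["balance"] + event["amount"]}
    if event["type"] == "Withdrawn":
        return {**state, "balance": state["balance"] - event["amount"]}
    return state  # unknown events are ignored, not rejected

def fold(events, initial=None):
    """Replay a stream of events into a rolling state."""
    return reduce(apply_event, events, initial or {"balance": 0})
```

If apply_event started asking questions like "is this withdrawal allowed?", that logic would belong in the command handler, not in the fold.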
A projection and/or denormalizer which maintains an eventually consistent read-only view based on observing the events will often use overlapping, simple projection logic. If that logic gets complex, involves transactions, etc., that's a design smell though - if the Events represent what's going on naturally and are not contrived, they should tend to be relatively straightforward to map/project - you should really only be 'indexing' them or laying them out in a form appropriate for the needs of the given reader.
If you're ending up with complex flows in the projection system, that's a sign you probably have multiple different readers and should perhaps consider separate projections to that end (kinda like the Interface Segregation Principle).

What is Unifying Logic within the Semantic Stack Model and who is supposed to take care of it?

Basically, after hours of research I still don't get what the Unifying Logic layer within the Semantic Web Stack Model is, or whose problem it is to take care of it.
I think this depends on what your conceptualisation of the semantic web is. Suppose the ultimate expression of the semantic web is to make heterogeneous information sources available via web-like publishing mechanisms to allow programs - agents - to consume them in order to satisfy some high-level user goal in an autonomous fashion. This is close to Berners-Lee et al's original conceptualisation of the purpose of the semantic web. In this case, the agents need to know that the information they get from RDF triple stores, SPARQL end-points, rule bases, etc, is reliable, accurate and trustworthy. The semantic web stack postulates that a necessary step to getting to that end-point is to have a logic, or collection of logics, that the agent can use when reasoning about the knowledge it has acquired. It's rather a strong AI view, or well towards that end of the spectrum.
However, there's an alternative conceptualisation (and, in fact, there are probably many) in which the top layers of the semantic web stack, including unifying logic, are not needed, because that's not what we're asking agents to do. In this view, the semantic web is a way of publishing disaggregated, meaningful information for consumption by programs but not autonomously. It's the developers and/or the users who choose, for example, what information to treat as trustworthy. This is the linked data perspective, and it follows that the current stack of standards and technologies is perfectly adequate for building useful applications. Indeed, some argue that even well-established standards like OWL are not necessary for building linked-data applications, though personally I find it essential.
As to whose responsibility it is, if you take the former view it's something the software agent community is already working on, and if you take the latter view it doesn't matter whether something ever gets standardised because we can proceed to build useful functionality without it.

How to write reusable business logic in MVC models?

My problem is that we are trying to use an MVC (PHP) framework. After a lot of discussion, we think MVC is very good, but I'm missing the ability to write reusable model (application) logic. So I'm not sure we have the right approach to implementing our software in an MVC framework.
First I'll describe the non-MVC, OO approach which we use at the moment.
For example - we are working on some browser games (yes, that's our profession). Imagine we have a player object, and we use this player object very often. We have different pages where you can buy things, so you need to make "money" transactions on the player's "bank account"; or imagine you can fight against other players. We have several fight scripts, and these scripts take two or more player objects (it depends on the type of battle, i.e. clan battle, player-vs-player battle...).
So we have several pages (and controllers) with different battle logic. But each of these controllers uses the player object to calculate all the attributes and items a player has, and the damage and defence a player will do.
So, how can we reuse the logic in the player object in an MVC model? It would be bad to duplicate all the necessary logic in the different fight controllers and models.
I think the "gold transaction" logic is a good example to give you some more detail. You need the transact function in a fight, if you win against another player and loot some of his gold; you need it when buying some stuff; and you need it when donating some gold to the player's guild...
So I would say it would be a bad approach to define all these functions in one player model! I can tell you this player model would be very big (actually we already have the problem that our player class is really huge - it's a god class).
Do you think there is an MVC-style solution for this problem?
I would say you put the code where it makes the most sense, and where you would not need to duplicate it someplace else.
If there is some operation that always needs a Player object, but might be used across different Controllers, the Player class would be the logical place to put it. On the other hand, if a bit of logic only needs to be done in the context of a certain Controller, and involves potentially other classes, it perhaps should be in the controller - or perhaps in some other class, too.
If you are having trouble figuring out where the logic should go, perhaps it's because your functions are not granular and reusable enough as they are. There are certainly aspects of MVC that force you to think a little more about separation of concerns and keeping things DRY than a 'plain' OOP approach does... so you might end up breaking operations that are currently coded in a single function into multiple functions on different classes, to get the right code in the right places.
For example - and these are not at all specific suggestions, but just a random possible thought process - maybe the process of transferring 'gold' between players needs to be broken down into more granular processes. The player class may do the basic task of changing the balance, but then the controller(s) may do specific parts of the process, such as verifying to/from whom gold is being transferred and why.
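Continuing that (deliberately non-specific) thought process, here is a minimal sketch - in Python rather than PHP, with hypothetical names - of the split: the Player model keeps only the granular balance change, while the context-specific verification lives at the controller level instead of in the god class.

```python
class Player:
    def __init__(self, name, gold=0):
        self.name = name
        self.gold = gold

    # Granular, reusable model operation: just move the balance.
    def change_gold(self, amount):
        self.gold += amount

# Controller-level logic: verifying to/from whom gold moves and why
# lives here, so the same Player method serves fights, shops and guilds.
def transfer_gold(sender, receiver, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if sender.gold < amount:
        raise ValueError(f"{sender.name} cannot afford {amount} gold")
    sender.change_gold(-amount)
    receiver.change_gold(amount)
```

A loot controller, a shop controller and a guild controller would each call transfer_gold with their own participants and preconditions, without the Player class knowing about any of those contexts.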
