I am wondering whether it is possible to use an agent/actor library (Akka, Orbit, Quasar, JADE, Reactors.io) in a Function-as-a-Service environment (OpenWhisk, AWS Lambda)?
Does it make sense?
If yes, what is a minimal example that demonstrates the added value (something that is missing when we use only FaaS or only an actor/agent library)?
If no, can we construct a decision graph that helps us decide whether a given problem calls for an actor/agent library, FaaS, or something else?
This is a rather opinion-based question, but I think that in its current shape there's no sense in putting actors into FaaS. The opposite actually works quite well: OpenWhisk itself is implemented on top of Akka.
There are several reasons:
FaaS in its current form is inherently stateless, which greatly simplifies things like request routing. Actors are stateful by nature.
In my experience FaaS functions are usually disjoint. Of course you need some external resources, but the mental model is one of generic resources and capabilities. In actor models we tend to think in terms of particular entities represented as actors, e.g. the user Max rather than a table of users. I'm not covering the case of using actors solely as a unit of concurrency here.
FaaS applications have a very short lifespan; this is one of the founding stones behind them. Since creation, placement and state recovery for more complex actors may take a while, and you usually need a lot of them to perform a single task, you may end up at a point where restoring the state of the system takes more time than actually performing the task that state is needed for.
That being said, it's possible that the two approaches will eventually converge, but that needs to be accompanied by changes in both the mental and the infrastructural model (i.e. actors live in a runtime, which FaaS must be aware of). IMO setting up existing actor frameworks on top of existing FaaS providers is not feasible at this point.
While trying to discern the difference between application logic and business logic, I found a set of articles, but unfortunately they contradict each other.
Here they say that the two are the same, but the answer here is totally different.
For me I understand it in the following way:
If we look up the definition of the word "logic" on Google, we get:
system or set of principles underlying the arrangements of elements in a computer or electronic device so as to perform a specified task.
So if logic is a set of principles underlying the arrangement of elements, then business logic should be the set of principles underlying the arrangement of the business rules; in other words, the rules that should be followed so that the system reflects your business needs.
And for me, application logic is the set of principles the application is based on; in other words, how to apply those rules to get a system that reflects your business needs. For example: should I use MVC or not? Should I use SQL or MSSQL? Should I handle errors with exception handling or with if statements?
So could anybody please help me clear up this confusion?
Well, there are going to be a few interpretations of this one, but here's mine.
Business logic is the rules that are in place whether your business is computerized or not.
Application logic is how a particular slice of that business is realised.
Take for example an insurance business offering multiple and complex policies. All the conditions, calculations, payment schemes, conditions of offer etc. are 'business rules'. A website that says "enter dob and income to get an instant estimate on our most popular products" would contain application logic as would a back office report for "top 500 earners that didn't buy".
Each is an example of a specific use. Business rules apply but they are constrained and supplemented by other rules (like just these policies).
So typically business rules are rules, application rules are a subset selected and packaged for a purpose.
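To make that split concrete, here is a minimal C++ sketch; all of the names and numbers are made up for illustration. The business rule exists whether or not the company has a website, while the application logic packages one slice of it for the "instant estimate" page.

#include <string>

// Business rule: how a premium is calculated. This rule exists whether or
// not the business is computerized (the numbers here are invented).
double estimatePremium(int ageYears, double annualIncome)
{
    double premium = 500.0;
    if (ageYears > 60)        premium *= 1.5;  // older applicants pay more
    if (annualIncome < 20000) premium *= 1.2;  // low income adds a risk loading
    return premium;
}

// Application logic: the "instant estimate" web page. It decides which inputs
// to ask for, which products to cover, and how to present the result.
std::string instantEstimatePage(int yearOfBirth, double income)
{
    int age = 2024 - yearOfBirth;                // crude age, good enough for an estimate
    double quote = estimatePremium(age, income); // reuse the business rule
    return "Your estimated premium is " + std::to_string(quote);
}

The back-office "top 500 earners that didn't buy" report would be another, different piece of application logic built on the same business rules.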
Application logic describes how an application is designed and developed, and how standards are maintained throughout it: usability, UI, functionality, and so on.
Business logic describes how a business is designed and implemented: the business rules and the business workflows.
Nowadays business logic is sometimes adjusted to fit the application logic, and sometimes application logic is injected into the business logic so that the two streamline each other.
Salesforce is an example.
"Application logic" (sometimes referred to as "work logic" in older literature), is the abstract of your source. It's closely tied to the implementation, and not necessarily to the real world problem it solves.
Example 1
You have a deck of cards. Your business logic may contain a step like "sort the cards", focusing on the desired outcome, as in, "whatever you do at this point, the cards need to end up sorted". This makes sense from a business point of view.
Your application logic, on the other hand, will contain something like "use a distribution sort here", which is completely off topic for the business side (it only cares about the output). Or your code may do no sorting at all, for example because you store your cards in a bitfield that is "already sorted" by construction. So the point is, a step on the business side doesn't necessarily correspond to a step in the app logic.
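As a rough C++ sketch of the "no sorting step at all" case (the representation here is just one hypothetical choice): one suit stored as a bitfield yields its cards in order by construction, so the business requirement "the cards end up sorted" is met without any sort call in the application logic.

#include <cstdint>
#include <vector>

// One suit stored as a bitfield: bit N set means "rank N is in the hand".
// Walking the bits from low to high always yields the ranks in order, so the
// business requirement "the cards are sorted" holds without any sorting step.
struct Suit {
    std::uint64_t ranks = 0;

    void add(int rank)       { ranks |= (1ull << rank); }
    bool has(int rank) const { return (ranks >> rank) & 1ull; }

    // "Give me the cards, sorted" -- just walk the bits in order.
    std::vector<int> sortedRanks() const {
        std::vector<int> out;
        for (int r = 0; r < 64; ++r)
            if (has(r)) out.push_back(r);
        return out;
    }
};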
Example 2
You have an elevator. Business rules are like "if we're going downwards and someone below us presses the down button, we stop there". This is an algorithmic step, but from a human perspective. Your application gets this need as a requirement, and... well, in the case of an elevator, you'll need to know the speed, maximum deceleration, distance from caller floor, priorities, other elevator cars' position and a bunch of other factors, and you get a pretty complex app logic just to deliver that simple requirement. And still, both are algorithms. One for the purpose of elevators, and another for the horrible mess beneath. (I'm absolutely amazed by elevators and their software, btw.)
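Just to make the "algorithm from a human perspective" part concrete, here is the business rule alone as a tiny C++ sketch (the function and parameter names are made up for illustration). Everything the application logic has to add on top of it (speed, maximum deceleration, other cars, priorities) is deliberately left out.

// The business rule, stated as a predicate: "if we're going downwards and
// someone below us presses the down button, we stop there".
bool shouldStopAt(int callerFloor, int currentFloor, bool goingDown, bool callerPressedDown)
{
    return goingDown && callerPressedDown && callerFloor < currentFloor;
}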
Basically, after hours of research I still don't get what the Unifying Logic layer within the Semantic Web Stack model is, and whose job it is to take care of it.
I think this depends on what your conceptualisation of the semantic web is. Suppose the ultimate expression of the semantic web is to make heterogeneous information sources available via web-like publishing mechanisms to allow programs - agents - to consume them in order to satisfy some high-level user goal in an autonomous fashion. This is close to Berners-Lee et al's original conceptualisation of the purpose of the semantic web. In this case, the agents need to know that the information they get from RDF triple stores, SPARQL end-points, rule bases, etc, is reliable, accurate and trustworthy. The semantic web stack postulates that a necessary step to getting to that end-point is to have a logic, or collection of logics, that the agent can use when reasoning about the knowledge it has acquired. It's rather a strong AI view, or well towards that end of the spectrum.
However, there's an alternative conceptualisation (and, in fact, there are probably many) in which the top layers of the semantic web stack, including unifying logic, are not needed, because that's not what we're asking agents to do. In this view, the semantic web is a way of publishing disaggregated, meaningful information for consumption by programs but not autonomously. It's the developers and/or the users who choose, for example, what information to treat as trustworthy. This is the linked data perspective, and it follows that the current stack of standards and technologies is perfectly adequate for building useful applications. Indeed, some argue that even well-established standards like OWL are not necessary for building linked-data applications, though personally I find it essential.
As to whose responsibility it is, if you take the former view it's something the software agent community is already working on, and if you take the latter view it doesn't matter whether something ever gets standardised because we can proceed to build useful functionality without it.
I am struggling to see the real-world benefits of loosely coupled code. Why spend so much effort making something flexible to work with a variety of other objects? If you know what you need to achieve, why not code specifically for that purpose?
To me, this is similar to creating untyped variables: it makes it very flexible, but opens itself to problems because perhaps an unexpected value is passed in. It also makes it harder to read, because you do not explicitly know what is being passed in.
Yet I feel like strong typing is encouraged, but loose coupling is bad.
EDIT: I feel either my interpretation of loose coupling is off or others are reading it the wrong way.
Strong coupling to me is when a class references a concrete instance of another class. Loose coupling is when a class references an interface that another class can implement.
My question then is why not specifically call a concrete instance/definition of a class? I analogize that to specifically defining the variable type you need.
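To illustrate what I mean, here is a rough C++ sketch (the class names are made up): the first version references a concrete class directly, the second only references an interface that another class can implement.

#include <memory>

class MySqlConnection { /* concrete class */ };

// Strong coupling: the report is welded to one concrete class.
class ReportTight {
    MySqlConnection db;   // must change if we ever switch connection types
};

// Loose coupling: the report only knows an interface.
class Connection {
public:
    virtual ~Connection() = default;
    virtual void query(const char* sql) = 0;
};

class ReportLoose {
public:
    explicit ReportLoose(std::unique_ptr<Connection> c) : db(std::move(c)) {}
private:
    std::unique_ptr<Connection> db;  // any class implementing Connection will do
};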
I've been doing some reading on Dependency Injection, and they seem to present it as fact that loose coupling makes for a better design.
First of all, you're comparing apples to oranges, so let me try to explain this from two perspectives. Typing refers to how operations on values/variables are performed and whether they are allowed. Coupling, as opposed to cohesion, refers to the architecture of a piece (or several pieces) of software. The two aren't directly related at all.
Strong vs Weak Typing
A strongly typed language is (usually) a good thing because behavior is well defined. Take these two examples, from Wikipedia:
Weak typing:
a = 2
b = '2'
concatenate(a, b) # Returns '22'
add(a, b) # Returns 4
The above can be slightly confusing and not so well defined, because some languages may use the ASCII (maybe hex, maybe octal, etc.) numerical values for the addition or the concatenation, so there's a lot of room for mistakes. Also, it's hard to see whether a was originally an integer or a string (this may be important, but the language doesn't really care).
Strongly typed:
a = 2
b = '2'
#concatenate(a, b) # Type Error
#add(a, b) # Type Error
concatenate(str(a), b) # Returns '22'
add(a, int(b)) # Returns 4
As you can see here, everything is more explicit: you know what the variables are, and you also know when you're changing the type of a variable.
Wikipedia says:
The advantage claimed of weak typing is that it requires less effort on the part of the programmer than strong typing, because the compiler or interpreter implicitly performs certain kinds of conversions. However, one claimed disadvantage is that weakly typed programming systems catch fewer errors at compile time, and some of these might still remain after testing has been completed. Two commonly used languages that support many kinds of implicit conversion are C and C++, and it is sometimes claimed that these are weakly typed languages. However, others argue that these languages place enough restrictions on how operands of different types can be mixed that the two should be regarded as strongly typed languages.
Strong vs weak typing both have their advantages and disadvantages and neither is good or bad. It's important to understand the differences and similarities.
Loose vs Tight Coupling
Straight from Wikipedia:
In computer science, coupling or dependency is the degree to which each program module relies on each one of the other modules.
Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. The software quality metrics of coupling and cohesion were invented by Larry Constantine, an original developer of Structured Design who was also an early proponent of these concepts (see also SSADM). Low coupling is often a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability.
In short, low coupling is a sign of well-organized, readable and maintainable code. High coupling is preferred when dealing with massive APIs or large projects where different parts interact to form a whole. Neither is good or bad. Some projects should be tightly coupled, e.g. an embedded operating system. Others should be loosely coupled, e.g. a website CMS.
Hopefully I've shed some light here :)
The question is right to point out that weak/dynamic typing is indeed a logical extension of the concept of loose coupling, and it is inconsistent for programmers to favor one but not the other.
Loose coupling has become something of a buzzword, with many programmers unnecessarily implementing interfaces and dependency injection patterns -- or, more often than not, their own garbled versions of these patterns -- based on the possibility of some amorphous future change in requirements. There is no hiding the fact that this introduces extra complexity and makes code less maintainable for future developers. The only benefit is if this anticipatory loose coupling happens to make a future change in requirements easier to implement, or promote code reuse. Often, however, requirements changes involve enough layers of the system, from UI down to storage, that the loose coupling doesn't improve the robustness of the design at all, and makes certain types of trivial changes more tedious.
You're right that loose coupling is almost universally considered "good" in programming. To understand why, let's look at one definition of tight coupling:
You say that A is tightly coupled to B if A must change just because B changed.
This is a scale that goes from "completely decoupled" (even if B disappeared, A would stay the same) to "loosely coupled" (certain changes to B might affect A, but most evolutionary changes wouldn't), to "very tightly coupled" (most changes to B would deeply affect A).
In OOP we use a lot of techniques to get less coupling - for example, encapsulation helps decouple client code from the internal details of a class. Also, if you depend on an interface then you don't generally have to worry as much about changes to concrete classes that implement the interface.
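As a small illustration of the encapsulation point (a hypothetical class, not anything from the question): client code that only uses the public surface is insulated from changes to the internal representation.

#include <cstddef>
#include <string>
#include <vector>

// Clients only see the public interface; the internal details are encapsulated.
class ShoppingCart {
public:
    void add(const std::string& item) { items.push_back(item); }
    std::size_t size() const          { return items.size(); }
private:
    // If this vector later becomes a map of item -> quantity, client code
    // that only calls add() and size() does not have to change at all.
    std::vector<std::string> items;
};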
On a side note, you're right that typing and coupling are related. In particular, stronger and more static typing tend to increase coupling. For example, in dynamic languages you can sometimes substitute a string for an array, based on the notion that a string can be seen as an array of characters. In Java you can't, because arrays and strings are unrelated. This means that if B used to return an array and now returns a string, it's guaranteed to break its clients (just one simple contrived example, but you can come up with many more that are both more complex and more compelling). So, stronger typing and more static typing are both trade-offs. While stronger typing is generally considered good, favouring static versus dynamic typing is largely a matter of context and personal tastes: try setting up a debate between Python programmers and Java programmers if you want a good fight.
So finally we can go back to your original question: why is loose coupling generally considered good? Because of unforeseen changes. When you write the system, you cannot possibly know which directions it will eventually evolve in, in two months or maybe two hours. This happens both because requirements change over time, and because you don't generally understand the system completely until after you've written it. If your entire system is very tightly coupled (a situation that's sometimes referred to as "the Big Ball of Mud"), then any change in any part of the system will eventually ripple through every other part (the definition of "very tight coupling"). This makes for very inflexible systems that eventually crystallize into a rigid, unmaintainable blob. If you had 100% foresight the moment you started working on a system, then you wouldn't need to decouple.
On the other hand, as you observe, decoupling has a cost because it adds complexity. Simpler systems are easier to change, so the challenge for a programmer is striking a balance between simple and flexible. Tight coupling often (not always) makes a system simpler at the cost of making it more rigid. Most developers underestimate future needs for changes, so the common heuristic is to make the system less coupled than you're tempted to, as long as this doesn't make it overly complex.
Strong typing is good because it prevents hard-to-find bugs by throwing compile-time errors rather than run-time errors.
Tightly coupled code is bad because when you think you "know what you need to achieve", you are often wrong, or you don't know everything you need to know yet.
E.g. you might later find out that something you've already written could be used in another part of your code. Then maybe you decide to keep two tightly coupled versions of the same code. Then later you have to make a slight change to a business rule and you have to alter two different sets of tightly coupled code, and maybe you will get them both correct, which at best will take you twice as long... or at worst you will introduce a bug in one but not the other, it will go undetected for a while, and then you'll find yourself in a real pickle.
Or maybe your business is growing much faster than you expected, and you need to offload some database components to a load-balancing system, so now you have to re-engineer everything that is tightly coupled to the existing database system to use the new system.
In a nutshell, loose coupling makes for software that is much easier to scale, maintain, and adapt to ever-changing conditions and requirements.
EDIT: I feel either my interpretation of loose coupling is off or others are reading it the wrong way. Strong coupling to me is when a class references a concrete instance of another class. Loose coupling is when a class references an interface that another class can implement.
My question then is why not specifically call a concrete instance/definition of a class? I analogize that to specifically defining the variable type you need.
I've been doing some reading on Dependency Injection, and they seem to present it as fact that loose coupling makes for a better design.
I'm not really sure what your confusion is here. Let's say for instance that you have an application that makes heavy use of a database. You have 100 different parts of your application that need to make database queries. Now, you could use MySQL++ in 100 different locations, or you can create a separate interface that calls MySQL++, and reference that interface in 100 different places.
Now your customer says that he wants to use SQL Server instead of MySQL.
Which scenario do you think is going to be easier to adapt? Rewriting the code in 100 different places, or rewriting the code in 1 place?
Okay... now you say that maybe rewriting it in 100 different places isn't THAT bad.
So... now your customer says that he needs to use MySQL in some locations, and SQL Server in other locations, and Oracle in yet other locations.
Now what do you do?
In a loosely coupled world, you can have three separate database components that all share the same interface but have different implementations. In a tightly coupled world, you'd have switch statements strewn across 100 places, each juggling three different back ends.
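A rough C++ sketch of that loosely coupled setup (the names are hypothetical and the driver calls are stubbed out): the 100 call sites depend only on the interface, so swapping or mixing back ends never touches them.

#include <string>

// The one interface that all 100 call sites depend on.
class Database {
public:
    virtual ~Database() = default;
    virtual void execute(const std::string& sql) = 0;
};

// Each back end hides behind its own implementation of the interface.
class MySqlDatabase : public Database {
public:
    void execute(const std::string& sql) override { /* call MySQL++ here */ }
};

class SqlServerDatabase : public Database {
public:
    void execute(const std::string& sql) override { /* call the SQL Server driver here */ }
};

class OracleDatabase : public Database {
public:
    void execute(const std::string& sql) override { /* call the Oracle driver here */ }
};

// One of the 100 places: it never mentions a concrete database.
void archiveOldOrders(Database& db)
{
    db.execute("DELETE FROM orders WHERE created_at < NOW() - INTERVAL 1 YEAR");
}

Pointing a given site at MySQL, SQL Server or Oracle is then a construction-time decision, not an edit to archiveOldOrders.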
If you know what you need to achieve, why not code specifically for that purpose?
Short answer: You almost never know exactly what you need to achieve. Requirements change, and if your code is loosely coupled in the first place, it will be less of a nightmare to adapt.
Yet I feel like strong typing is encouraged, but loose coupling is bad.
I don't think it is fair to say that strong typing is good or encouraged. Certainly lots of people prefer strongly typed languages because they come with compile-time checking, but plenty of people would say that weak typing is good. It sounds like, since you've heard "strong" is good, you're wondering how "loose" can be good too. But the merits of a language's typing system aren't even in the same realm as class design.
Side note: don't confuse strong and static typing
Strong typing will help reduce errors while typically aiding performance: the more information the code-generation tools can gather about acceptable value ranges for variables, the more they can do to generate fast code.
When combined with type inference and features like traits (Perl 6 and others) or type classes (Haskell), strongly typed code can remain compact and elegant.
I think that tight/loose coupling (to me: declaring an interface and assigning an object instance to it) is related to the Liskov Substitution Principle. Using loose coupling enables some of the advantages of the Liskov Substitution Principle.
However, as soon as instanceof, casts or copying operations come into play, the use of loose coupling starts to become questionable. Furthermore, for local variables within a method or block, it is nonsense.
If a modification to a function in a derived class forces a change to the code in the abstract base class, that shows full dependency, and it means the code is tightly coupled.
If we don't have to rewrite or recompile that code, it shows less dependency, hence the code is loosely coupled.
I have an idea for organising a game loop, but I have some doubts about performance. Maybe there are better ways of doing things.
Consider you have an array of game components. They all are called to do some stuff at every game loop iteration. For example:
GameData data; // shared
app.registerComponent("AI", ComponentAI(data) );
app.registerComponent("Logic", ComponentGameLogic(data) );
app.registerComponent("2d", Component2d(data) );
app.registerComponent("Menu", ComponentMenu(data) )->setActive(false);
//...
while (ok)
{
//...
app.runAllComponents();
//...
}
Benefits:
good component-based application, no dependencies, good modularity
we can activate/deactivate, register/unregister components dynamically
some components can be transparently removed or replaced and the system will keep working as if nothing had happened (e.g. change 2d to 3d); this also helps team work: every programmer creates his/her own components and does not need the other components to compile the code
Doubts:
inner loop in the game loop with virtual calls to Component::run()
I would like Component::run() to return a bool and to check that value; if it returns false, the component must be deactivated. So the inner loop becomes more expensive (a rough sketch is below).
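Here is a minimal sketch of the inner loop I have in mind (Component and the container are simplified): one virtual call plus one bool check per active component per frame.

#include <memory>
#include <vector>

struct Component {
    virtual ~Component() = default;
    virtual bool run() = 0;   // returning false asks to be deactivated
    bool active = true;
};

struct App {
    std::vector<std::unique_ptr<Component>> components;

    void runAllComponents() {
        for (auto& c : components) {
            if (!c->active) continue;  // skip deactivated components
            if (!c->run())             // the virtual call the doubts are about
                c->active = false;     // deactivate on request
        }
    }
};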
Well, how good is this solution? Have you used it in real projects?
Some C++ programmers have way too many fears about the overhead of virtual functions. The overhead of the virtual call is usually negligible compared to whatever the function does. A boolean check is not very expensive either.
Do whatever results in the easiest-to-maintain code. Optimize later only if you need to do so. If you do find you need to optimize, eliminating virtual calls will probably not be the optimization you need.
In most "real" games, there are pretty strict requirements for interdependencies between components, and ordering does matter.
This may or may not affect you, but it's often important to have physics take effect before (or after) user interaction processing, depending on your scenario, etc. In this situation you may need some extra processing to get the ordering right.
Also, since you're most likely going to have some form of scene graph or spatial partitioning, you'll want to make sure your "components" can take advantage of that, as well. This probably means that, given your current description, you'd be walking your tree too many times. Again, though, this could be worked around via design decisions. That being said, some components may only be interested in certain portions of the spatial partition, and again, you'd want to design appropriately.
I used a similar approach in a modular synthesized audio file generator.
I seem to recall noticing that after programming 100 different modules, there was an impact on performance when coding new modules in.
On the whole, though, I felt it was a good approach.
Maybe I'm oldschool, but I really don't see the value in generic components because I don't see them being swapped out at runtime.
struct GameObject
{
Ai* ai;
Transform* transform;
Renderable* renderable;
Collision* collision;
Health* health;
};
This works for everything from the player to enemies to skyboxes and triggers; just leave the "components" that you don't need in your given object NULL. You want to put all of the AIs into a list? Then just do that at construction time. With polymorphism you can bolt all sorts of different behaviors in there (e.g. the player's "AI" is translating the controller input), and beyond this there's no need for a generic base class for everything. What would it do, anyway?
Your "update everything" would have to explicitly call out each of the lists, but that doesn't change the amount of typing you have to do, it just moves it. Instead of obfuscatorily setting up the set of sets that need global operations, you're explicitly enumerating the sets that need the operations at the time the operations are done.
IMHO, it's not that virtual calls are slow. It's that a game entity's "components" are not homogeneous. They all do vastly different things, so it makes sense to treat them differently. Indeed, there is no overlap between them, so again I ask: what's the point of a base class if you can't use a pointer to that base class in any meaningful way without casting it to something else?