Advantages of actors over futures

I currently program with futures, and I'm rather curious about actors. I'd like to hear from an experienced voice:
What are the advantages of actors over futures?
When should I use one instead of the other?
As far as I've read, actors hold state and futures don't. Is this the only difference? So if I have true immutability, I shouldn't need to care about actors?
Please enlighten me :-)

One important difference is that actors typically hold internal state, and therefore, in theory, they are not composable; see this and this blog post, which elaborate on some of the issues. In practice, however, they usually provide a sweet spot between the imperative and the purely functional approach. So, where possible, it is recommended to stick to programming with futures only, but if the message-passing model fits your problem domain better, feel free to use actors.
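To make the contrast concrete, here is a minimal sketch in Python; the CounterActor class and its message names are hypothetical, not from any actor library. A future is a one-shot, stateless computation you can compose; an actor owns long-lived state that is reachable only through its mailbox.

import queue
import threading
from concurrent.futures import ThreadPoolExecutor

# Future: a stateless, one-shot computation.
with ThreadPoolExecutor() as pool:
    future = pool.submit(lambda x: x * 2, 21)
    print(future.result())  # 42

# Actor: long-lived internal state, reachable only via messages.
class CounterActor:
    def __init__(self):
        self._mailbox = queue.Queue()
        self._count = 0  # private state; only the actor's thread touches it
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            msg, reply = self._mailbox.get()
            if msg == "increment":
                self._count += 1
            elif msg == "get":
                reply.put(self._count)

    def send(self, msg):
        reply = queue.Queue(maxsize=1)
        self._mailbox.put((msg, reply))
        return reply

counter = CounterActor()
counter.send("increment")
counter.send("increment")
print(counter.send("get").get())  # 2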

Related

What is an approach for designing complex FSMs?

At work, we use FSMs. Recently, I had to design an FSM for a problem that I deem "a little too complex for a simple FSM". Why? Because the problem has about 6 different data dimensions, and many permutations of this data significantly impact the behaviour of the solution. My brain thinks "6 data attributes means 2^6 = 64 permutations of this data" if it were all boolean data. Furthermore, there are about 8 inputs that can happen at any given time.
This problem made me aware that my FSM creating skills stop at simple problems used in my hobby projects. At work, we are constrained to use FSMs. That means, I cannot just say "this problem is outside of the scope of FSMs. I'll use something else." Indeed, the FSM platform we have in place does provide a lot of power for our solutions.
Question: What is an approach for designing an FSM when the problem is sufficiently complex? I've researched a bit on this and found a few papers which, honestly, didn't help me much. I hope there are some best practices for this, and all I'm asking for is one. Please and thanks.
I suppose that you might be experiencing the usual "state-transition explosion", which is the known problem of traditional "flat" FSMs. Traditional FSMs "explode" because they force you to repeat the same reactions in many states; FSMs lack any mechanism for capturing commonalities of behavior among states. The long-known solution is to use Hierarchical State Machines (a.k.a. Harel statecharts or UML state machines). HSMs support the concept of state nesting, in which substates inherit behavior from the surrounding superstate(s). When used correctly, state nesting eliminates the repetition and counteracts the "explosion" problem. Most non-trivial problems are not really tractable with FSMs, but are quite manageable with HSMs.
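To illustrate the state-nesting idea, here is a minimal sketch in Python; the states and events are hypothetical, and this is nowhere near a full UML statechart engine. The point is that the FAULT reaction is defined once in the superstate instead of being repeated in every substate:

class State:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.transitions = {}  # event -> name of target state

    def on(self, event, target):
        self.transitions[event] = target
        return self

    def handle(self, event):
        # Walk up the hierarchy: substates inherit superstate reactions.
        state = self
        while state is not None:
            if event in state.transitions:
                return state.transitions[event]
            state = state.parent
        raise ValueError("unhandled event %r in state %r" % (event, self.name))

# "FAULT" is handled once in the superstate, not repeated per substate.
operational = State("operational").on("FAULT", "failed")
idle = State("idle", parent=operational).on("START", "running")
running = State("running", parent=operational).on("STOP", "idle")

print(running.handle("STOP"))   # idle   (own transition)
print(running.handle("FAULT"))  # failed (inherited from the superstate)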

Why is tightly coupled bad but strongly typed good?

I am struggling to see the real-world benefits of loosely coupled code. Why spend so much effort making something flexible to work with a variety of other objects? If you know what you need to achieve, why not code specifically for that purpose?
To me, this is similar to creating untyped variables: it makes it very flexible, but opens itself to problems because perhaps an unexpected value is passed in. It also makes it harder to read, because you do not explicitly know what is being passed in.
Yet I feel like strong typing is encouraged, but loose coupling is bad.
EDIT: I feel either my interpretation of loose coupling is off or others are reading it the wrong way.
Strong coupling to me is when a class references a concrete instance of another class. Loose coupling is when a class references an interface that another class can implement.
My question then is why not specifically call a concrete instance/definition of a class? I analogize that to specifically defining the variable type you need.
I've been doing some reading on Dependency Injection, and they seem to present it as fact that loose coupling makes for better design.
First of all, you're comparing apples to oranges, so let me try to explain this from two perspectives. Typing refers to how operations on values/variables are performed and whether they are allowed. Coupling, as opposed to cohesion, refers to the architecture of a piece (or several pieces) of software. The two aren't directly related at all.
Strong vs Weak Typing
A strongly typed language is (usually) a good thing because behavior is well defined. Take these two examples, from Wikipedia:
Weak typing:
a = 2
b = '2'
concatenate(a, b) # Returns '22'
add(a, b) # Returns 4
The above can be slightly confusing and not so well defined, because some languages may use the ASCII (maybe hex, maybe octal, etc.) numerical values for addition or concatenation, so there's a lot of room for mistakes. Also, it's hard to see whether a was originally an integer or a string (this may be important, but the language doesn't really care).
Strongly typed:
a = 2
b = '2'
#concatenate(a, b) # Type Error
#add(a, b) # Type Error
concatenate(str(a), b) # Returns '22'
add(a, int(b)) # Returns 4
As you can see here, everything is more explicit: you know what the variables are, and you can see exactly where their types are converted.
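For a concrete, runnable version of the second example: Python is strongly (though dynamically) typed, so mixing the types fails loudly instead of being guessed at:

a = 2
b = '2'

try:
    a + b                 # mixing int and str
except TypeError as error:
    print(error)          # unsupported operand type(s) for +: 'int' and 'str'

print(str(a) + b)         # '22' - convert explicitly, then concatenate
print(a + int(b))         # 4    - convert explicitly, then add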
Wikipedia says:
The advantage claimed of weak typing is that it requires less effort on the part of the programmer than strong typing, because the compiler or interpreter implicitly performs certain kinds of conversions. However, one claimed disadvantage is that weakly typed programming systems catch fewer errors at compile time, and some of these might still remain after testing has been completed. Two commonly used languages that support many kinds of implicit conversion are C and C++, and it is sometimes claimed that these are weakly typed languages. However, others argue that these languages place enough restrictions on how operands of different types can be mixed that the two should be regarded as strongly typed languages.
Strong and weak typing both have their advantages and disadvantages; neither is inherently good or bad. It's important to understand the differences and similarities.
Loose vs Tight Coupling
Straight from Wikipedia:
In computer science, coupling or dependency is the degree to which each program module relies on each one of the other modules.
Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. The software quality metrics of coupling and cohesion were invented by Larry Constantine, an original developer of Structured Design who was also an early proponent of these concepts (see also SSADM). Low coupling is often a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability.
In short, low coupling is a sign of well-structured, readable and maintainable code. High coupling is preferred when dealing with massive APIs or large projects where different parts interact to form a whole. Neither is good or bad in itself. Some projects should be tightly coupled, e.g. an embedded operating system; others should be loosely coupled, e.g. a website CMS.
Hopefully I've shed some light here :)
The question is right to point out that weak/dynamic typing is indeed a logical extension of the concept of loose coupling, and it is inconsistent for programmers to favor one but not the other.
Loose coupling has become something of a buzzword, with many programmers unnecessarily implementing interfaces and dependency injection patterns -- or, more often than not, their own garbled versions of these patterns -- based on the possibility of some amorphous future change in requirements. There is no hiding the fact that this introduces extra complexity and makes code less maintainable for future developers. The only benefit is if this anticipatory loose coupling happens to make a future change in requirements easier to implement, or promote code reuse. Often, however, requirements changes involve enough layers of the system, from UI down to storage, that the loose coupling doesn't improve the robustness of the design at all, and makes certain types of trivial changes more tedious.
You're right that loose coupling is almost universally considered "good" in programming. To understand why, let's look at one definition of tight coupling:
You say that A is tightly coupled to B if A must change just because B changed.
This is a scale that goes from "completely decoupled" (even if B disappeared, A would stay the same) to "loosely coupled" (certain changes to B might affect A, but most evolutionary changes wouldn't), to "very tightly coupled" (most changes to B would deeply affect A).
In OOP we use a lot of techniques to get less coupling - for example, encapsulation helps decouple client code from the internal details of a class. Also, if you depend on an interface then you don't generally have to worry as much about changes to concrete classes that implement the interface.
On a side note, you're right that typing and coupling are related. In particular, stronger and more static typing tend to increase coupling. For example, in dynamic languages you can sometimes substitute a string for an array, based on the notion that a string can be seen as an array of characters. In Java you can't, because arrays and strings are unrelated. This means that if B used to return an array and now returns a string, it's guaranteed to break its clients (just one simple contrived example, but you can come up with many more that are both more complex and more compelling). So, stronger typing and more static typing are both trade-offs. While stronger typing is generally considered good, favouring static versus dynamic typing is largely a matter of context and personal tastes: try setting up a debate between Python programmers and Java programmers if you want a good fight.
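A tiny Python illustration of that side note (count_vowels is a hypothetical client, not from any library): the client relies only on "something iterable over characters", so the provider can switch between returning a string and returning a list without breaking it.

def count_vowels(chars):
    # Coupled only to "iterable of characters", not to str vs list.
    return sum(1 for c in chars if c in "aeiou")

print(count_vowels("banana"))        # 3 - provider returned a string
print(count_vowels(list("banana")))  # 3 - provider now returns a list
# In a statically typed language like Java, changing the provider's
# return type from char[] to String would break every caller at compile time.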
So finally we can go back to your original question: why is loose coupling generally considered good? Because of unforeseen changes. When you write the system, you cannot possibly know which directions it will eventually evolve in over the next two months, or maybe the next two hours. This happens both because requirements change over time, and because you don't generally understand a system completely until after you've written it. If your entire system is very tightly coupled (a situation that's sometimes referred to as "the Big Ball of Mud"), then any change in any part of the system will eventually ripple through every other part (the definition of "very tight coupling"). This makes for very inflexible systems that eventually crystallize into a rigid, unmaintainable blob. If you had 100% foresight the moment you started working on a system, then you wouldn't need to decouple.
On the other hand, as you observe, decoupling has a cost because it adds complexity. Simpler systems are easier to change, so the challenge for a programmer is striking a balance between simple and flexible. Tight coupling often (not always) makes a system simpler at the cost of making it more rigid. Most developers underestimate future needs for changes, so the common heuristic is to make the system less coupled than you're tempted to, as long as this doesn't make it overly complex.
Strong typing is good because it prevents hard-to-find bugs by throwing compile-time errors rather than run-time errors.
Tightly coupled code is bad because when you think you "know what you need to achieve", you are often wrong, or you don't know everything you need to know yet.
For example, you might later find out that something you've already done could be used in another part of your code. Then maybe you decide to tightly couple 2 different versions of the same code. Then later you have to make a slight change in a business rule, and you have to alter 2 different sets of tightly coupled code; maybe you will get them both correct, which at best will take you twice as long... or at worst you will introduce a bug in one but not the other, and it goes undetected for a while, and then you find yourself in a real pickle.
Or maybe your business is growing much faster than you expected, and you need to offload some database components to a load-balancing system, so now you have to re-engineer everything that is tightly coupled to the existing database system to use the new system.
In a nutshell, loose coupling makes for software that is much easier to scale, maintain, and adapt to ever-changing conditions and requirements.
EDIT: I feel either my interpretation of loose coupling is off or others are reading it the wrong way. Strong coupling to me is when a class references a concrete instance of another class. Loose coupling is when a class references an interface that another class can implement.
My question then is why not specifically call a concrete instance/definition of a class? I analogize that to specifically defining the variable type you need.
I've been doing some reading on Dependency Injection, and they seem to present it as fact that loose coupling makes for better design.
I'm not really sure what your confusion is here. Let's say for instance that you have an application that makes heavy use of a database. You have 100 different parts of your application that need to make database queries. Now, you could use MySQL++ in 100 different locations, or you can create a separate interface that calls MySQL++, and reference that interface in 100 different places.
Now your customer says that he wants to use SQL Server instead of MySQL.
Which scenario do you think is going to be easier to adapt? Rewriting the code in 100 different places, or rewriting the code in 1 place?
Okay... now you say that maybe rewriting it in 100 different places isn't THAT bad.
So... now your customer says that he needs to use MySQL in some locations, and SQL Server in other locations, and Oracle in yet other locations.
Now what do you do?
In a loosely coupled world, you can have 3 separate database components that all share the same interface with different implementations. In a tightly coupled world, you'd have switch statements strewn through 100 different places, each with 3 different code paths.
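A minimal sketch in Python of the interface approach described above; the Database interface and the two drivers are hypothetical stand-ins for real client libraries such as MySQL++:

from abc import ABC, abstractmethod

class Database(ABC):
    @abstractmethod
    def query(self, sql):
        ...

class MySqlDatabase(Database):
    def query(self, sql):
        return ["mysql result for %r" % sql]

class SqlServerDatabase(Database):
    def query(self, sql):
        return ["sqlserver result for %r" % sql]

class ReportService:
    # The 100 call sites depend only on the interface; swapping the
    # backend is a one-line change where the object is constructed.
    def __init__(self, db):
        self.db = db

    def monthly_totals(self):
        return self.db.query("SELECT month, SUM(total) FROM orders GROUP BY month")

print(ReportService(MySqlDatabase()).monthly_totals())
print(ReportService(SqlServerDatabase()).monthly_totals())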
If you know what you need to achieve, why not code specifically for that purpose?
Short answer: You almost never know exactly what you need to achieve. Requirements change, and if your code is loosely coupled in the first place, it will be less of a nightmare to adapt.
Yet I feel like strong typing is encouraged, but loose coupling is bad.
I don't think it is fair to say that strong typing is universally good or encouraged. Certainly lots of people prefer strongly typed languages because they come with compile-time checking, but plenty of people would say that weak typing is good. It sounds like, having heard that "strong" is good, you're wondering how "loose" can be good too. But the merits of a language's typing system aren't even in the same conceptual realm as class design.
Side note: don't confuse strong and static typing.
Strong typing will help reduce errors while typically aiding performance: the more information the code-generation tools can gather about acceptable value ranges for variables, the more these tools can do to generate fast code.
When combined with type inference and features like traits (Perl 6 and others) or type classes (Haskell), strongly typed code can remain compact and elegant.
I think that tight/loose coupling (to me: declaring an interface and assigning an object instance to it) is related to the Liskov Substitution Principle. Using loose coupling enables some of the advantages of the Liskov Principle.
However, as soon as instanceof, casts, or copying operations are executed, the use of loose coupling starts to be questionable. Furthermore, for local variables within a method or block, it makes little sense.
If any modification to a function in a derived class forces a change in the base abstract class, that shows full dependency, which means the code is tightly coupled.
If we don't have to rewrite or recompile the other code, that shows less dependency, hence the code is loosely coupled.

Name of the concept of designing an interface to allow expert users to become more efficient?

I'm searching for sources and further information on a particular concept in user experience design. It's not a particularly complicated concept: when designing user interfaces, you should make them intuitive and simple for new users, but also provide a way for users to become more efficient as they become more familiar with the application.
An example could be including a prominent button for a common action for new users, but also providing a keyboard shortcut / mnemonic for expert users. That's just one example, though; another could be providing full functionality through a GUI, but allowing expert users to script the same actions. The point is that the expert path is more difficult to learn, but makes users more efficient.
I'm pretty sure there's a name for that which I can't recall, and I'm having trouble searching for sources and references on it.
Accelerators?
Flexibility and efficiency of use:
Accelerators -- unseen by the novice user -- may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
(source: Ten Usability Heuristics by Jakob Nielsen)
Well, reading only your question "Name of the concept of designing an interface to allow expert users to become more efficient?" I'm inclined to point you toward The Humane Interface: New Directions for Designing Interactive Systems by Jef Raskin, in which there is the concept of habituation:
2-3-1 Formation of Habits
When you perform a task repeatedly, it tends to become easier to do. Juggling, table tennis, and playing piano are everyday examples in my life; they all seemed impossible when I first attempted them. Walking is a more widely practiced example. With repetition, or practice, your competence becomes habitual, and you can do the task without having to think about it. ...
...
... The ideal humane interface would reduce the interface component of a user's work to benign habituation. Many of the problems that make products difficult and unpleasant to use are caused by human-machine design that fails to take into account the helpful and injurious properties of habit formation. One notable example is the tendency to provide many ways of accomplishing the same task. Having multiple options can shift your locus of attention from the task to the choice of method...
But this is contrary to what you describe in your question, as evidenced by the last two sentences. In fact, the book also has a sub-chapter dedicated to dispelling the myth of the beginner-expert dichotomy:
3-6 Myth of the Beginner-Expert Dichotomy
... This dichotomy is invalid. As a user of a complex system, you are neither a beginner nor an expert, and you cannot be placed on a single continuum between these two poles. You independently know or do not know each feature or each related set of features that work similarly to one another. You may know how to use many commands and features of a software package; you may even work with the package professionally, and people may seek your advice on using it. Yet you may not know how to use or even know about the existence of certain other commands or even whole categories of commands in that same package. ...
So perhaps this is not the term/concept you were looking for.
Update: were you looking for the term Adaptive User Interfaces, perhaps? Well, I think that, as usually understood and implemented, it is not such a great idea (for example, the disappearing menu items in Microsoft products). But my impression is that researchers use the term for something quite different.
Update: but Adaptive User Interfaces does not cover scripting.
The answer is in your question: Efficiency. It's a fundamental component of usability that Jakob Nielsen long ago defined as "Once users have learned the design, how quickly can they perform tasks." A UI with expert-supporting elements like accelerators, context menus, and double-click-for-defaults is an efficient UI.
It is also correct to simply say that making things fast for experienced users is part of usability, just as usability also includes making it easy for users to accomplish basic tasks on the first encounter, making the experience satisfying, and tolerating errors.

Actor model to replace the threading model?

I read a chapter in a book (Seven languages in Seven Weeks by Bruce A. Tate) about Matz (Inventor of Ruby) saying that 'I would remove the thread and add actors, or some other more advanced concurrency features'.
Why and how can the actor model be an advanced concurrency model that replaces threading?
What other models are "advanced concurrency models"?
It's not so much that the actor model will replace threads; at the level of the CPU, processes will still have multiple threads which are scheduled and run on the processor cores. The idea of actors is to replace this underlying complexity with a model which, its proponents argue, makes it easier for programmers to write reliable code.
The idea of actors is to have separate threads of control (processes in Erlang parlance) which communicate exclusively by message passing. A more traditional programming model would be to share memory, and coordinate communication between threads using mutexes. This still happens under the surface in the actor model, but the details are abstracted away, and the programmer is given reliable primitives based on message passing.
One important point is that actors do not necessarily map 1-1 to threads -- in the case of Erlang, they definitely don't -- there would normally be many Erlang processes per kernel thread. So there has to be a scheduler which assigns actors to threads, and this detail is also abstracted away from the application programmer.
If you're interested in the actor model, you might want to take look at the way it works in Erlang or Scala.
If you're interested in other types of new concurrency hotness, you might want to look at software transactional memory, a different approach that can be found in Clojure and Haskell.
It bears mentioning that many of the more aggressive attempts at creating advanced concurrency models appear to be happening in functional languages. Possibly due to the belief (I drink some of this kool-aid myself) that immutability makes concurrency much easier.
I marked this question as a favorite and waited for answers, but since there still isn't one, here is mine.
Why and how can an actor model be an advanced concurrency model that replaces threading?
Actors can get rid of mutable shared state, which is very difficult to code correctly. (My understanding is that) actors can basically be thought of as objects with their own thread(s). You send messages between actors; these are queued and consumed by the thread within the actor. So whatever state is in the actor is encapsulated and will not be shared. That makes it easy to code correctly; see the sketch after the link below.
see also http://www.slideshare.net/jboner/state-youre-doing-it-wrong-javaone-2009
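Here is a minimal sketch in Python of that "object with its own thread" idea (AccountActor and its message names are illustrative, not from an actor library). Because only the actor's own thread ever touches the balance, concurrent deposits need no lock: the mailbox serializes them.

import queue
import threading

class AccountActor:
    def __init__(self):
        self._mailbox = queue.Queue()
        self._balance = 0  # encapsulated: only the actor thread reads/writes it
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            msg, arg, reply = self._mailbox.get()
            if msg == "deposit":
                self._balance += arg  # no mutex: the mailbox serializes access
            elif msg == "balance":
                reply.put(self._balance)

    def tell(self, msg, arg=None):
        self._mailbox.put((msg, arg, None))

    def ask(self, msg):
        reply = queue.Queue(maxsize=1)
        self._mailbox.put((msg, None, reply))
        return reply.get()

account = AccountActor()
senders = [threading.Thread(target=lambda: [account.tell("deposit", 1) for _ in range(500)])
           for _ in range(2)]
for t in senders: t.start()
for t in senders: t.join()
print(account.ask("balance"))  # 1000 - no lock, no race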
What other models are "advanced concurrency models"?
see http://www.slideshare.net/jboner/state-youre-doing-it-wrong-javaone-2009
See Dataflow Programming. It's an approach that forms a layer on top of the usual OOP design. In a few words:
there is a scene where Components reside;
Components have Ports: Producers (outputs, which generate messages) and Consumers (inputs, which process messages);
Messages are pre-defined between Components: one Component's Producer port is bound to another's Consumer.
Programming then happens on three layers:
writing the dataflow system (language, framework/server, component API),
writing Components (system, basic, and domain-oriented ones),
creating the dataflow program: placing components into the scene and defining the messages between them (see the sketch after the links below).
The Wikipedia article is a good starting point for understanding the business: http://en.wikipedia.org/wiki/Flow-based_programming
See also "actor model", "dataflow programming" etc.
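A minimal, synchronous sketch in Python of the scene/ports/messages description above; all the names are illustrative, and real flow-based frameworks run components concurrently over bounded buffers:

class Component:
    """A component with a consumer (input) port and a producer (output) port."""
    def __init__(self, transform):
        self.transform = transform
        self.downstream = []              # producer-port wiring

    def connect(self, other):
        self.downstream.append(other)     # bind this producer to that consumer
        return other

    def consume(self, message):           # consumer port
        result = self.transform(message)
        for component in self.downstream: # producer port
            component.consume(result)

# The "scene": place components, then define the message routes.
source = Component(lambda x: x)
doubler = Component(lambda x: x * 2)
printer = Component(lambda x: print(x) or x)

source.connect(doubler).connect(printer)

for n in [1, 2, 3]:
    source.consume(n)  # prints 2, 4, 6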
Please see the following paper:
Actor Model of Computation
Also please see:
ActorScript(TM) extension of C#(TM), Java(TM), and Objective C(TM): iAdaptive(TM) concurrency for antiCloud(TM) privacy and security

Knowledge of windows internals?

I wondered if any of you have knowledge of the internal workings of Windows (kernel, interrupts, etc.) and whether you've found that you've become a better developer as a result?
Do you find that "the more knowledge the better" is a good motto to have as a developer?
I find myself studying a lot of things, thinking that with more understanding, I'll be a better developer. Of course practice and experience also come into play.
This is a no-brainer - absolutely (assuming you're a developer primarily on the Windows platform, of course). A working knowledge of how the engine works under the hood will make a lot of common programming tasks (debugging, performance work, etc.) a lot easier.
Windows Internals is the standard reference.
I believe it is valuable to understand how things work underneath: CLR/.NET down to C++, native code down to ASM, ASM down to CPU architecture, registers and ops built from logic gates, logic gates from MOSFETs, transistors from quantum physics, and the latter from the respective mathematical apparatus (group theory, etc.).
Understanding low level makes you not only think different but also feel different - like you are in control of things, standing on the shoulders of giants.
More knowledge is always better, and having knowledge at many levels is a lot more valuable than just knowing whatever layer of abstraction you are working at.
A good rule of thumb is that you should have a good knowledge of the layer below the layer where you are working. So, for example, if you write a lot of .NET code, you should know how the CLR works. If you write a lot of web apps, you should understand HTTP. If you writing code that uses HTTP directly, then you should understand TCP/IP. If you are implementing a TCP/IP stack, then you need to understand how Ethernet works.
Knowledge of Windows internals is really helpful if you are writing native Win32 code, or if OS performance issues are critical to what you are doing. At higher levels of abstraction, it may be less helpful, but it never hurts.
I don't think that one requires special or secret knowledge of internals, such as that which may be extended to members of the Windows team or those with source access, but I absolutely contend that understanding internals helps you become a better developer.
Take threading, for instance: if you are going to build an application that uses threading in even a moderate way, then understanding how Windows works, how threading works, and how memory and processes work are all key to doing a good job with that code.
I agree to a point with your motto, but I would not agree that experience/practice/knowledge are mutually exclusive. The net-net of experience is that you have knowledge gained from that experience. There is also a wisdom component to experience and practice, but those are usually intangible situational elements that you apply in the future to avoid mistakes. Bottom line: knowledge is a precipitate of experience.
Think of it this way: how many people do you know with 30+ years of experience in IT? Think of them and take the top two. Now go into that memory bank and think of the people you know in the industry who are super smart, who know so much about so many things, and pick the top two of those. You now have your final four: if you had to pick one to start a project with, who would it be? Invariably we pick the super smart guy.
Yes, understanding Windows internals helped me to become a better programmer. It also taught me a lot of bad practices, bad ideas, and poor design concepts.
I highly suggest studying OS X or Linux internals as an alternative. It'll take less time, make more sense, and be much more productive.
Read code. Read lots of code. Read lots of good code. jQuery, Django, AIR framework source, Linux kernel, compilers.
Try to learn programming languages that introduce you to new approaches, like Lisp, Ruby, Python, or JavaScript. OOP is good, but .NET and Java seem to take the brainwash approach to it and elevate it to some kind of religious level, instead of it just being a good tool in your toolbox.
If you don't understand the code you are reading, it likely means you are on the right track, and learning new techniques.
I'd suggest getting a Mac simply because you'll find yourself wanting to make your UIs simpler and easier. It's really important to have a good environment if you want to become a great programmer. Surround yourself with engineers better than yourself (if you can), work with frameworks and languages that take the "engineer" approach vs. the "experimenter" approach, and use an operating system that contains code better than yours.
I'd also recommend the book "Coders at Work".
It depends. Many programmers who understand the internals of a system begin writing optimised code to exploit that knowledge. This has three very serious side-effects:
1.) It's harder for others without that knowledge to extend or support the code.
2.) System internals may change without notice, whereas interfaces are usually versioned and changes discussed publicly.
3.) Interfaces are generally consistent across platform revisions and hardware, internals do not have this consistency.
In short, there's a lot of broken, unsupportable code out there that's borked because it relies on an internal process that the vendor changed without notice.
The father of the C language said that you don't need to learn all the features of a language to write great code; the better you understand the problem, the better the code you write. Having knowledge is always better.
