This is maybe more of a philosophy question, but it's important to me. Maybe one of you understands the source of this (at least to me) strange design. In RxJS we have Observable and we have Subject. Subjects are in fact Observables on steroids, but there are also extra features that you can add to a Subject. You can use ReplaySubject, you can use AsyncSubject, but there is no ReplayObservable. Is there any good reason for this? Of course you can go with operators, but the API still seems strange, at least to me.
Subjects are both observables and observers. They're often the easiest way to transition from any imperative/object-oriented approach into functional/reactive streams (observables).
They're also one of the fairly standard ways to multicast a stream (Many of the multicasting operators actually use a subject internally to do their work).
Imperative to Reactive:
For example:
Ideologically, a BehaviorSubject represents a property. You can use its .next and .value as direct replacements for setting and getting a property's value. More importantly, however, you can react to changes in that value as though it were part of an observable stream (because Subjects are observables).
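A minimal sketch of that property-like usage (assuming RxJS; the names here are illustrative):
import { BehaviorSubject } from 'rxjs';

// a "property" backed by a BehaviorSubject
const count$ = new BehaviorSubject<number>(0);

console.log(count$.value); // read it like a getter: 0
count$.next(1);            // write it like a setter

// ...and react to changes; a new subscriber immediately receives the current value
count$.subscribe(value => console.log('count is now', value));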
Why aren't there BehaviorObservables?
In some ways, this exists and in others, it doesn't. An observable can be given most of these behaviors with various operators.
const behaviourStream$ = stream$.pipe(
startWith(defaultValue),
shareReplay(1)
);
Every pipeable operator is a function over Observables. It transforms an observable from one type to another (often by implicitly manipulating the values within).
What properties an observable has depends on how it has been transformed. It's not often all that helpful to have specialized observables the same way it's helpful to have specialized Subjects.
Instead, observables have static operators that create new observables with certain built-in characteristics.
These are creation operators like interval, timer, from, etc
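For instance (a small sketch assuming RxJS 7+, where these are exported directly from 'rxjs'):
import { from, interval, timer, take } from 'rxjs';

// each creation operator builds an observable with its own built-in behaviour
const everySecond$ = interval(1000).pipe(take(3)); // emits 0, 1, 2, one second apart
const delayed$     = timer(500);                   // emits 0 after 500 ms, then completes
const letters$     = from(['a', 'b', 'c']);        // emits each array element, then completes

letters$.subscribe(v => console.log(v));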
That said, a Subject has state and an Observable doesn't. An Observable is like a pure function: take input, return output, end of story. A Subject can store input in one way or another, which makes it richer and more flexible.
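A rough sketch of that difference (illustrative code, assuming RxJS):
import { Observable, Subject } from 'rxjs';

// cold Observable: the producer function runs again for every subscriber,
// much like calling a pure function once per call
const cold$ = new Observable<number>(subscriber => {
  console.log('producer runs');
  subscriber.next(Math.random());
  subscriber.complete();
});
cold$.subscribe(v => console.log('first subscriber', v));
cold$.subscribe(v => console.log('second subscriber', v)); // producer runs again, different value

// Subject: values are pushed in from outside; it keeps state (its list of
// subscribers) and forwards each value to all of them
const hot$ = new Subject<number>();
hot$.subscribe(v => console.log('both subscribers', v));
hot$.subscribe(v => console.log('see the same value', v));
hot$.next(Math.random());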
What's the usual best practice to split up a really long match on an enum with dozens of variants to handle, each with dozens or hundreds of lines of code?
I've started to create helper functions for each case and just call those functions passing in the enum's fields (or whatever they're called). But it seems a bit redundant to have MyEnum::MyCase{a,b,c} => handle_mycase(a,b,c) many times.
And if that is the best practice, is it possible to destructure MyEnum::MyCase directly in that helper function's parameters, despite the fact that technically it's refutable, since realistically I already know I'm calling it with the right case?
Maybe the crate enum_dispatch helps you.
IIRC, on a high level: It assumes that all your enum variants implement a trait with a function handle_mycase. Then handle_mycase can be called on the enum directly and will be dispatched to the concrete struct.
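Not Rust, but a rough TypeScript sketch of that idea (names invented): each variant implements a shared interface, so the big match collapses into a single method call, which is roughly what enum_dispatch automates for an enum.
interface HandlesCase {
  handleMycase(): void;
}

class MyCase implements HandlesCase {
  constructor(private a: number, private b: string, private c: boolean) {}
  handleMycase(): void {
    // the dozens or hundreds of lines for this case live here, with direct
    // access to this.a, this.b, this.c instead of destructured match bindings
    console.log(this.a, this.b, this.c);
  }
}

class OtherCase implements HandlesCase {
  handleMycase(): void { console.log('other'); }
}

// instead of a long match/switch, dispatch is one call
function handle(value: HandlesCase): void {
  value.handleMycase();
}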
I'm (re)designing an API for this library, and there's this function which will take any number of arguments, templated. I can define it to be:
template<typename... Ts>
void foo(bar_t bar, Ts... params);
or, alternatively, to be:
template<typename... Ts>
void foo(bar_t bar, std::tuple<Ts...> params);
I know these two are interchangeable (in both directions):
C++11: I can go from multiple args to tuple, but can I go from tuple to multiple args?
so both options are just as general, but - should I prefer one variant over the other, or is it only a matter of taste / personal style?
The tuple method is harder on the caller side. Forwarding, for example, requires the user to use a different tuple definition or forward_as_tuple. By contrast, forwarding for the variadic case is a matter of your function's interface, nothing more. The caller doesn't even have to be aware of it.
The tuple method is harder on the implementer's side for many uses. Unpacking a parameter pack is much easier than unpacking a tuple. And while parameter packs are limited, they're much more flexible out of the box than a tuple.
The tuple method allows call chaining. There's no such thing as a variadic return value, and while structured binding comes close, you can't shove multiple return values into multiple parameters of a function. By contrast, with tuples and tuple return values, you can chain calls from one function to another.
So if call chaining is something that's likely to matter a lot to you, then a tuple interface might be appropriate. Otherwise, it's just a lot of baggage for little real gain.
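Not C++, but the trade-off can be sketched in TypeScript terms as well (purely illustrative; the names are invented):
type Bar = { id: number };

// variadic style: easiest on the caller, arguments are simply listed
function fooVariadic<Ts extends unknown[]>(bar: Bar, ...params: Ts): void {
  console.log(bar.id, ...params);
}

// tuple style: the caller must build a tuple explicitly, but a tuple is an
// ordinary value, so it can also be returned and chained into the next call
function fooTuple<Ts extends unknown[]>(bar: Bar, params: [...Ts]): [...Ts] {
  console.log(bar.id, ...params);
  return params;
}

fooVariadic({ id: 1 }, 'a', 2, true);
const result = fooTuple({ id: 1 }, ['a', 2, true]);
fooVariadic({ id: 2 }, ...result); // chaining the tuple result into another call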
There have been some conflicting opinions on SO and elsewhere on whether the jQuery object is a monad or not. My question, however, is whether the jQuery-like object in d3.js qualifies as a monad, i.e. that it has these properties:
type constructor.
unit function.
binding operation.
There's no evidence that the object in d3.js implements the necessary operations (bind, join, return, etc.) or that those operations satisfy the monad laws. Typically, such objects have lots of backdoors and holes in the API that break any such laws. So the answer is almost certainly no.
I'm not sure what you mean by "the D3 object", but things that use closures like the layouts are somewhat monadic (although I wouldn't call them monads). They do encapsulate state that you can modify, and you can get something out of the monad by running them on some data.
Monads are more general than that. In particular, they determine how data can be passed from one component to another, which (D3) closures don't do at all. See also this question.
The same things mentioned in the first answer to the question you've linked to also apply here -- you'd have to show that the API is monadic.
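For reference, here is roughly what those three properties amount to, sketched in TypeScript with arrays rather than d3 selections; to call a selection monadic you would have to exhibit the analogous operations and check the laws:
type M<T> = T[];

// unit: wrap a plain value in the type constructor
const unit = <T>(x: T): M<T> => [x];

// bind: apply a function that itself returns the wrapped type, and flatten
const bind = <T, U>(m: M<T>, f: (x: T) => M<U>): M<U> => m.flatMap(f);

// the monad laws such operations must satisfy:
//   left identity:  bind(unit(x), f) is equivalent to f(x)
//   right identity: bind(m, unit)    is equivalent to m
//   associativity:  bind(bind(m, f), g) is equivalent to bind(m, x => bind(f(x), g))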
I was wondering if this is Standard, or a bug in my code. I'm trying to compare a pair of my homegrown function objects. I rejected the comparison if the type of function object is not the same, so I know that the two lambdas are the same type. So why can't they be compared?
Every C++0x lambda object has a distinct type, even if the signature is the same.
auto l1=[](){}; // one do-nothing lambda
auto l2=[](){}; // and another
l1=l2; // ERROR: l1 and l2 have distinct types
If two C++0x lambdas have the same type, they must therefore have come from the same line of code. Of course, if they capture variables then they won't necessarily be identical, as they may have come from different invocations.
However, a C++0x lambda does not have any comparison operators, so you cannot compare instances to see if they are indeed the same, or just the same type. This makes sense when you think about it: if the captured variables do not have comparison operators then you cannot compare lambdas of that type, since each copy may have different values for the captured variables.
Is the equality operator overloaded for your lambda object? If not, I'm assuming you'll need to implement it.
I'm in the middle of reading Code Complete, and towards the end of the book, in the chapter about refactoring, the author lists a bunch of things you should do to improve the quality of your code while refactoring.
One of his points was to always return as specific types of data as possible, especially when returning collections, iterators etc. So, as I've understood it, instead of returning, say, Collection<String>, you should return HashSet<String>, if you use that data type inside the method.
This confuses me, because it sounds like he's encouraging people to break the rule of information hiding. Now, I understand this when talking about accessors, that's a clear cut case. But, when calculating and mangling data, and the level of abstraction of the method implies no direct data structure, I find it best to return as abstract a datatype as possible, as long as the data doesn't fall apart (I wouldn't return Object instead of Iterable<String>, for example).
So, my question is: is there a deeper philosophy behind Code Complete's advice of always returning as specific a data type as possible, and allow downcasting, instead of maintaining a need-to-know-basis, that I've just not understood?
I think it is simply wrong in most cases. It should rather be:
be as lenient as possible, be as specific as needed
In my opinion, you should always return List rather than LinkedList or ArrayList, because the difference is more an implementation detail than a semantic one. The guys from the Google collections API for Java take this one step further: they return (and expect) iterators where that's enough. But they also recommend returning ImmutableList, -Set, -Map etc. where possible, to show the caller that he doesn't have to make a defensive copy.
Besides that, I think the performance of the different list implementations isn't the bottleneck for most applications.
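In TypeScript terms (the discussion above is about Java collections; this is just an illustrative analogue), that guideline might look like:
// the return type names the contract, not the implementation
function loadNames(): ReadonlyArray<string> {
  const names: string[] = ['alice', 'bob']; // the concrete array is an implementation detail
  return names; // exposed as read-only, so the caller knows no defensive copy is needed
}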
Most of the time one should return an interface or perhaps an abstract type that represents the value being returned. If you are returning a list of X, then use List. This ultimately provides maximum flexibility if the need arises to change the returned list type.
Maybe later you realise that you want to return a linked list or a read-only list etc. If you use a concrete type, you're stuck, and it's a pain to change. Using the interface solves this problem.
@Gishu
If your API requires that clients cast straight away most of the time, your design is flawed. Why bother returning X if clients need to cast to Y?
Can't find any evidence to substantiate my claim but the idea/guideline seems to be:
Be as lenient as possible when accepting input. Choose a generalized type over a specialized type. This means clients can use your method with different specialized types. So an IEnumerable or an IList as an input parameter would mean that the method can run off an ArrayList or a ListItemCollection. It maximizes the chance that your method is useful.
Be as strict as possible when returning values. Prefer a specialized type if possible. This means clients do not have to second-guess or jump through hoops to process the return value. Also, specialized types have greater functionality. If you choose to return an IList or an IEnumerable, the number of things the caller can do with your return value drastically reduces - e.g. if you return an IEnumerable instead of an ArrayList, then to get the number of elements returned (the Count property) the client must downcast. But such downcasting defeats the purpose - it works today but won't tomorrow (if you change the type of the returned object). So for all practical purposes the client can't get a count of elements easily, leading him to write mundane boilerplate code (in multiple places or as a helper method).
The summary here is it depends on the context (exceptions to most rules). E.g. if the most probable use of your return value is that clients would use the returned list to search for some element, it makes sense to return a List Implementation (type) that supports some kind of search method. Make it as easy as possible for the client to consume the return value.
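A TypeScript sketch of that "lenient in, strict out" shape (the examples above are .NET types; these names are purely illustrative):
// accepts anything iterable (lenient input), returns a concrete Set (strict output)
function uniqueNames(source: Iterable<string>): Set<string> {
  return new Set(source); // caller gets .size, .has(), etc. without any downcasting
}

uniqueNames(['a', 'b', 'a']);     // works with an array
uniqueNames(new Set(['x', 'y'])); // or with a Set, or any other iterable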
I could see how, in some cases, having a more specific data type returned could be useful. For example knowing that the return value is a LinkedList rather than just List would allow you to do a delete from the list knowing that it will be efficient.
I think, while designing interfaces, you should design a method to return as abstract a data type as possible. At the same time, returning a specific type makes the purpose of the method clearer about what it returns.
Also, I would understand it in this way:
Return as abstract a data type as possible = return as specific a data type as possible
i.e. when your method is supposed to return any collection data type, return Collection rather than Object.
Tell me if I'm wrong.
A specific return type is much more valuable because it:
reduces possible performance issues from discovering functionality via casting or reflection
increases code readability
does NOT, in fact, expose more than is necessary.
The return type of a function is specifically chosen to cater to ALL of its callers. It is the calling function that should USE the return variable as abstractly as possible, since the calling function knows how the data will be used.
Is it only necessary to traverse the structure? is it necessary to sort the structure? transform it? clone it? These are questions only the caller can answer, and thus can use an abstracted type. The called function MUST provide for all of these cases.
If, in fact, the most specific use case you have right now is Iterable<string>, then that's fine. But more often than not, your callers will eventually need more details, so start with a specific return type - it doesn't cost anything.