jsPlumb - How do you get all of the actual connections from a scrollable list when some connected elements are scrolled out of view? - jsplumb

I need to allow mapping between two long scrollable lists, and I want to be able to get a list of all the connections created. Using the demo code from the jsPlumb community website I can create the lists and connect items. But when I try to get a list of all the connections, any item whose connector has been "collapsed" to an upper or lower corner of the list container doesn't show the actual connection. For example:
List 1, Item 5 is connected to List 2, Item 2. If I query all connections:
const connections = this.jsPlumbInstance.getAllConnections();
console.log(connections);
And drill into the object I can see the source and target are correct:
If I then scroll List 2, Item 2 out of view and get all connections again:
I now get:
Somewhere, the connection to L2I2 is being maintained - because it "reconnects" when scrolled back into view, but I want to programmatically extract all connected items without resorting to scrolling the lists. Where can I find the "real" mappings between items regardless of their visibility and the connector stacking? Thanks!

I think in fact this information is not readily available. Connections are maintained in the background via a "proxy" concept, handled by the proxyConnection method of JsPlumbInstance (it's the same method that is used when collapsing a group and moving child connections to the collapsed group, incidentally). This method manipulates a proxies array on each connection it is managing.
So, theoretically, you could examine the connections in the return value of getAllConnections() and find the real connection via the proxies array. It could be a bit annoying though. A better solution might be for the list manager to expose a method, maybe? We could follow this up on Github if you like.
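A sketch of that traversal, assuming each entry in a connection's internal proxies array exposes the original endpoint as originalEp with an element property (these are undocumented jsPlumb internals, so verify the shape against your jsPlumb version):

```javascript
// Resolve the "real" source/target of a connection, following the
// proxies array when the connection has been collapsed to a list corner.
// The proxies[0]/proxies[1] = source/target ordering and the `originalEp`
// field are assumptions about jsPlumb internals, not a public API.
function resolveEndpoints(connection) {
  const sourceProxy = connection.proxies && connection.proxies[0];
  const targetProxy = connection.proxies && connection.proxies[1];
  return {
    source: sourceProxy && sourceProxy.originalEp
      ? sourceProxy.originalEp.element
      : connection.source,
    target: targetProxy && targetProxy.originalEp
      ? targetProxy.originalEp.element
      : connection.target,
  };
}

// Usage: map every connection, proxied or not, to its real endpoints.
// const mappings = jsPlumbInstance.getAllConnections().map(resolveEndpoints);
```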


How to use Gremlin to get both node properties and edge names in one query?

I have been thrown into a pool of golang / gremlin / Neptune, and am able to get some things to work. Life is good enough, but I am hoping there is a simple answer (which I have not been able to find) to what seems like a simple question.
I have 'obs' nodes with some properties, two of which are ('type','domain') and ('value','whitehouse.com').
Another set of nodes is 'attack' ('type','group') and ('value','Emotet'), along with other properties.
An observation node can have an edge pointing to one or more attack nodes. (and actually, other types of nodes as well.) These edges have a time-based property - when the observation was seen manifesting a certain type of attack.
I'm working in Go, using gremson to communicate with a Neptune db. In this environment you construct your query as a string and send it down the wire to Neptune, and get something called graphson back.
Thus, I construct this, and send it...
fmt.Sprintf("g.V().hasLabel('obs').has('value','%s').limit(1)", domain)
And I get back properties for a vertex, in graphson. Were I using the console, all I would get back would be the id. Go figure.
Then I construct this, and send it...
fmt.Sprintf("g.V().hasLabel('obs').has('value','%s').limit(1).out()", domain)
and I get back the properties of the connected nodes, in graphson. Again, using the console I would only get back ids. No sweat.
What I would LIKE to do is to combine these two queries somehow so that I am not doing what seems to be like two almost identical lookups.
Console-wise, assume both queries also have valueMap() or elementMap() tacked on the end. Is there any way to do them as one query?
There are many ways you could write this query. Here are a couple of options:
g.V().hasLabel('obs').
has('value','%s').
limit(1).as('a').
out().as('b').
select('a','b')
or using project
g.V().hasLabel('obs').
has('value','%s').
limit(1).
project('a','b').
by().
by(out().fold())
My preference is for the project example as you will get the connected vertices back in a list.

socket.io, how to attach state to a socket room

I was wondering if there was any way of attaching state to individual rooms, can that be done?
Say I create a room room_name:
socket.join("room_name")
How can I assign an object, array, variable(s) to that specific room? I want to do something like:
io.adapter.rooms["room_name"].max = maxPeople
Where I give the room room_name a state variable max, and max is assigned the value of the variable maxPeople that is say type int. Max stores a variable for the maximum amount of people allowed to join that specific room. Other rooms can/could be assigned different max values.
Well, there is an object (internal to socket.io) that represents a room; it is stored in the adapter. You can see the basic definition of the object in the source. So, if you reached into the adapter and got the room object for a given name, you could add data to it, and it would stay there as long as the room didn't get removed.
But, that's a bit dangerous for a couple reasons:
If there's only one connection in the room and that user disconnects and reconnects (like say because they navigate to a new page), the room object may get destroyed and then recreated and your data would be lost.
Direct access to the Room object and the list of rooms is not in the public socket.io interface (as far as I know). The adapter is a replaceable thing and when you're doing things like using the redis adapter with clustering it may work differently (in fact this probably does because the list of rooms is centralized into a redis database). The non-public interface is also subject to change in future versions of socket.io (and socket.io has been known to rearrange some internals from time to time).
So, if this were my code, I'd just create my own data structure to keep room specific info.
When you add someone to a room, you make sure your own room object exists and is initialized properly with the desired data. It would probably work well to use a Map object with room name as key and your own Room object as value. When you remove someone from a room, you can clean up your own data structure if the room is empty.
You could even make your own room object be the central API you use for joining or leaving a room; it would then maintain its own data structure and also call socket.io to do the join or leave. This would centralize the maintenance of your own data structure whenever anyone joins or leaves a room. It would also allow you to pre-create your room objects with their own properties before there are any users in them (if you needed/wanted to do that), which is something socket.io will not do.
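A minimal sketch of that idea, kept entirely outside socket.io (the RoomRegistry name and the max property are illustrative, not part of socket.io's API):

```javascript
// Self-managed room state, keyed by room name. Call registry.join(...)
// before socket.join(...) and registry.leave(...) on disconnect/leave.
class RoomRegistry {
  constructor() {
    this.rooms = new Map(); // room name -> { members: Set, state: {} }
  }
  join(roomName, socketId, initialState = {}) {
    if (!this.rooms.has(roomName)) {
      // First member creates the room with its state, e.g. { max: maxPeople }
      this.rooms.set(roomName, { members: new Set(), state: { ...initialState } });
    }
    const room = this.rooms.get(roomName);
    if (room.state.max !== undefined && room.members.size >= room.state.max) {
      return false; // room is full, caller should not socket.join()
    }
    room.members.add(socketId);
    return true;
  }
  leave(roomName, socketId) {
    const room = this.rooms.get(roomName);
    if (!room) return;
    room.members.delete(socketId);
    // Clean up our own structure once the room is empty
    if (room.members.size === 0) this.rooms.delete(roomName);
  }
  getState(roomName) {
    const room = this.rooms.get(roomName);
    return room ? room.state : undefined;
  }
}
```

Because this registry never touches the adapter, it survives adapter swaps (e.g. the redis adapter) and socket.io internal changes.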

In Firebase Database, how to read a large list then get child updates, efficiently

I want to load a large list (say, 5000 items) from a Firebase Database as fast as possible, and then get any updates when a child is added or changed.
My approach so far is to use ObserveSingleEvent to load the entire list at startup
pathRef.ObserveSingleEvent(DataEventType.Value, (snapshot) =>
and when that has finished use ObserveEvent for child changes and additions:
nuint observerHandle1 = pathRef.ObserveEvent(DataEventType.ChildChanged, (snapshot) =>
nuint observerHandle2 = pathRef.ObserveEvent(DataEventType.ChildAdded, (snapshot) =>
The problem is the ChildAdded event is triggered 5000 times, all of which are usually unnecessary and it takes a long time to complete.
(From the docs: "child_added is triggered once for each existing child and then again every time a new child is added to the specified path.")
And I can't just ignore the first update for each child, because it might be a genuine change (eg database updated just after the ObserveSingleEvent completed).
So I have to check each item on the client and see if it has changed. This is all very slow and inefficient.
I've seen posts suggesting to use a query with LimitToLast etc, but this is not reliable as child changes may be missed.
My Plan B is to not do ObserveSingleEvent to load the list and just rely on the 5000 child added calls, but that seems crazy and is slower to get the list initially, and probably uses more of the user's data allowance.
I would think this is a common scenario, so what is the best solution?
Also, I have persistence enabled, so a couple of related questions:
1) If the entire list has been loaded before, does ObserveSingleEvent with Value just load from disk with no network use when online (assuming no changes have happened since the last download), or does it download the whole list again?
2) Similarly with the 5000 child-added calls: if it's all on disk, is there any network use when online if nothing has changed?
(I'm using C# for iOS, but I'm sure it's the same issue for other platforms)
The wire traffic for a value listener and for a child_* listener is exactly the same. The events are purely a client-side interpretation of the underlying data.
If you need all children, consider using only the child_* listeners and dropping the pathRef.ObserveSingleEvent(DataEventType.Value, ...) call entirely. It'll lead to the simplest code.
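To make the child_*-only approach concrete, here is a sketch (in JavaScript, though the asker uses C#; the ChildCache name is illustrative) of the client-side bookkeeping it implies: feed every child event into one local map and treat that map as the source of truth, so the initial burst of ChildAdded events and later genuine additions go through the same code path:

```javascript
// Local cache fed exclusively by child_added / child_changed / child_removed
// listeners. Wiring to the actual Firebase SDK calls is platform specific.
class ChildCache {
  constructor() {
    this.items = new Map(); // child key -> latest value
  }
  // child_added fires once per existing child, then once per new child;
  // both cases are just an upsert, so no "first update" special-casing.
  onChildAdded(key, value) {
    this.items.set(key, value);
  }
  onChildChanged(key, value) {
    this.items.set(key, value);
  }
  onChildRemoved(key) {
    this.items.delete(key);
  }
  size() {
    return this.items.size;
  }
}
```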

Redux Performance with Selectors

According to the Redux docs on performance, "connected list components should pass item IDs to their connected child list items (allowing the list items to look up their own data by ID)".
This makes sense to me. In this fashion the child items connect to the Redux store and find the item using the itemId that was passed in via props.
Since this involves a small computation, however, it is unclear to me whether a reselect selector should be used for memoization. I suppose the root of the confusion is connect and how state-tree changes cause recomputations.
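The pattern in question can be sketched as a per-item memoized selector; this hand-rolls what reselect's createSelector does, so the mechanics are visible (the label derivation is a hypothetical example, not from the docs):

```javascript
// Per-item selector memoized on its input: the derived object is only
// rebuilt when the raw item in the store actually changes identity.
function createItemSelector() {
  let lastItem;   // previous raw item from the store
  let lastResult; // previous derived value
  return function selectItem(state, itemId) {
    const item = state.items[itemId];
    if (item !== lastItem) {
      lastItem = item;
      // Hypothetical derived computation
      lastResult = { ...item, label: item.name.toUpperCase() };
    }
    return lastResult; // stable reference while the item is unchanged
  };
}

// With react-redux, each connected list item gets its own selector
// instance via the mapStateToProps factory form:
// const makeMapStateToProps = () => {
//   const selectItem = createItemSelector();
//   return (state, props) => ({ item: selectItem(state, props.itemId) });
// };
```

The stable reference is the point: connect's shallow-equality check on the mapped props then skips re-rendering items whose slice of state did not change.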

Can a React-Redux app really scale as well as, say, Backbone? Even with reselect. On mobile

In Redux, every change to the store triggers a notify on all connected components. This makes things very simple for the developer, but what if you have an application with N connected components, and N is very large?
Every change to the store, even if unrelated to the component, still runs a shouldComponentUpdate with a simple === test on the reselected paths of the store. That's fast, right? Sure, maybe once. But N times, for every change? This fundamental change in design makes me question the true scalability of Redux.
As a further optimization, one can batch all notify calls using _.debounce. Even so, having N === tests for every store change and handling other logic, for example view logic, seems like a means to an end.
I'm working on a health & fitness social mobile-web hybrid application with millions of users and am transitioning from Backbone to Redux. In this application, a user is presented with a swipeable interface that allows them to navigate between different stacks of views, similar to Snapchat, except each stack has infinite depth. In the most popular type of view, an endless scroller efficiently handles the loading, rendering, attaching, and detaching of feed items, like a post. For an engaged user, it is not uncommon to scroll through hundreds or thousands of posts, then enter a user's feed, then another user's feed, etc. Even with heavy optimization, the number of connected components can get very large.
Now on the other hand, Backbone's design allows every view to listen precisely to the models that affect it, reducing N to a constant.
Am I missing something, or is Redux fundamentally flawed for a large app?
This is not a problem inherent to Redux IMHO.
By the way, instead of trying to render 100k components at the same time, you should try to fake it with a lib like react-infinite or something similar, and only render the visible (or nearly visible) items of your list. Even if you succeed in rendering and updating a 100k-item list, it's still not performant and it takes a lot of memory. Here is some advice from LinkedIn.
This answer assumes that you still want to render 100k updatable items in your DOM, and that you don't want 100k listeners (store.subscribe()) to be called on every single change.
2 schools
When developing a UI app in a functional way, you basically have 2 choices:
Always render from the very top
It works well but involves more boilerplate. It's not exactly the suggested Redux way, but it is achievable, with some drawbacks. Notice that even if you manage to have a single redux connection, you still have to call a lot of shouldComponentUpdate in many places. If you have an infinite stack of views (like a recursion), you will have to render all the intermediate views as virtual DOM as well, and shouldComponentUpdate will be called on many of them. So this is not really more efficient, even if you have a single connect.
If you don't plan to use the React lifecycle methods and only use pure render functions, then you should probably consider other similar options that focus only on that job, like deku (which can be used with Redux).
In my own experience, doing so with React is not performant enough on older mobile devices (like my Nexus 4), particularly if you link text inputs to your atom state.
Connecting data to child components
This is what react-redux suggests, using connect. When the state changes and the change only concerns a deeper child, you only render that child; you do not have to render the top-level components every time, like the context providers (redux/intl/custom...) or the main app layout. You also avoid calling shouldComponentUpdate on other children because it's already baked into the listener. Calling a lot of very fast listeners is probably faster than rendering intermediate React components every time, and it also reduces a lot of props-passing boilerplate, so for me it makes sense when used with React.
Also notice that identity comparison is very fast, so you can do a lot of them cheaply on every change. Remember Angular's dirty checking: some people managed to build real apps with that, and identity comparison is much faster.
Understanding your problem
I'm not sure I understand your problem perfectly, but I gather that you have views with around 100k items in them, and you wonder whether you should use connect on all 100k items, because calling 100k listeners on every single change seems costly.
This problem seems inherent to the nature of doing functional programming with the UI: the list was updated, so you have to re-render the list, but unfortunately it is a very long list and that seems inefficient. With Backbone you could hack something to render only the child. Even if you render that child with React, you would trigger the rendering in an imperative way instead of just declaring "when the list changes, re-render it".
Solving your problem
Obviously connecting the 100k list items seems convenient, but it is not performant because it means calling 100k react-redux listeners, even if they are fast.
Now if you connect the big list of 100k items instead of each item individually, you only call a single react-redux listener, and then have to render that list in an efficient way.
Naive solution
Iterating over the 100k items to render them, leading to 99,999 items returning false in shouldComponentUpdate and a single one re-rendering:
list.map(item => this.renderItem(item))
Performant solution 1: custom connect + store enhancer
The connect method of React-Redux is just a Higher-Order Component (HOC) that injects the data into the wrapped component. To do so, it registers a store.subscribe(...) listener for every connected component.
If you want to connect 100k items of a single list, this is a critical path of your app that is worth optimizing. Instead of using the default connect, you could build your own.
Store enhancer
Expose an additional method store.subscribeItem(itemId,listener)
Wrap dispatch so that whenever an action related to an item is dispatched, you call the registered listener(s) of that item.
A good source of inspiration for this implementation can be redux-batched-subscribe.
Custom connect
Create a Higher-Order component with an API like:
Item = connectItem(Item)
The HOC can expect an itemId property. It can use the Redux enhanced store from the React context and then register its listener: store.subscribeItem(itemId,callback). The source code of the original connect can serve as base inspiration.
The HOC will only trigger a re-rendering if the item changes
Related answer: https://stackoverflow.com/a/34991164/82609
Related react-redux issue: https://github.com/rackt/react-redux/issues/269
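A minimal sketch of the enhancer half of this idea, written against a plain store object so the mechanics are visible (the subscribeItem name and the assumption that item-related actions carry payload.item.id are illustrative choices from the text above, not a redux-provided API):

```javascript
// Wraps an existing store, adding per-item subscriptions and a dispatch
// that notifies only the listeners registered for the affected item.
function itemSubscriptionEnhancer(store) {
  const itemListeners = new Map(); // itemId -> Set of listeners
  const originalDispatch = store.dispatch;

  return {
    ...store,
    subscribeItem(itemId, listener) {
      if (!itemListeners.has(itemId)) itemListeners.set(itemId, new Set());
      itemListeners.get(itemId).add(listener);
      // Return an unsubscribe function, mirroring store.subscribe()
      return () => itemListeners.get(itemId).delete(listener);
    },
    dispatch(action) {
      const result = originalDispatch(action);
      // Assumed action shape: { payload: { item: { id } } }
      const itemId = action.payload && action.payload.item && action.payload.item.id;
      const listeners = itemId !== undefined && itemListeners.get(itemId);
      if (listeners) listeners.forEach(fn => fn(action));
      return result;
    },
  };
}
```

The custom connectItem HOC would then call store.subscribeItem(this.props.itemId, callback) in place of store.subscribe, so a single-item change wakes one listener instead of 100k.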
Performant solution 2: listening for events inside child components
It is also possible to listen to Redux actions directly in components, using redux-dispatch-subscribe or something similar, so that after the first list render you listen for updates directly in the item component and override the original data passed down by the parent list.
class MyItemComponent extends Component {
  state = {
    itemUpdated: undefined, // Will store the locally updated item
  };
  componentDidMount() {
    this.unsubscribe = this.props.store.addDispatchListener(action => {
      const isItemUpdate = action.type === "MY_ITEM_UPDATED" && action.payload.item.id === this.props.itemId;
      if (isItemUpdate) {
        this.setState({itemUpdated: action.payload.item});
      }
    });
  }
  componentWillUnmount() {
    this.unsubscribe();
  }
  render() {
    // Initially use the data provided by the parent, but once it's updated by some event, use the updated data
    const item = this.state.itemUpdated || this.props.item;
    return (
      <div>
        {...}
      </div>
    );
  }
}
In this case redux-dispatch-subscribe may not be very performant, as you would still create 100k subscriptions. You'd rather build your own optimized middleware, similar to redux-dispatch-subscribe, with an API like store.listenForItemChanges(itemId), storing the item listeners in a map for fast lookup of the correct listeners to run.
Performant solution 3: vector tries
A more performant approach would be to use a persistent data structure such as a vector trie:
If you represent your 100k-item list as a trie, each intermediate node has the possibility to short-circuit the rendering sooner, which avoids a lot of shouldComponentUpdate calls in the children.
This technique can be used with ImmutableJS; you can find some experiments I did with it here: React performance: rendering big list with PureRenderMixin.
It has drawbacks, however, as libs like ImmutableJS do not yet expose public/stable APIs to do that (issue), and my solution pollutes the DOM with some useless intermediate <span> nodes (issue).
Here is a JsFiddle that demonstrates how an ImmutableJS list of 100k items can be rendered efficiently. The initial render is quite long (but I guess you don't initialize your app with 100k items!), but afterwards you can notice that each update only leads to a small number of shouldComponentUpdate calls. In my example I only update the first item every second, and even though the list has 100k items, each update only requires something like 110 calls to shouldComponentUpdate, which is much more acceptable! :)
Edit: it seems ImmutableJS does not preserve its immutable structure well on some operations, like inserting/deleting items at a random index. Here is a JsFiddle that demonstrates the performance you can expect depending on the operation on the list. Surprisingly, if you want to append many items at the end of a large list, calling list.push(value) many times seems to preserve the tree structure much better than calling list.concat(values).
By the way, it is documented that the List is efficient when modifying the edges. I don't think the bad performance of adding/removing at a given index is related to my technique, but rather to the underlying ImmutableJS List implementation:
Lists implement Deque, with efficient addition and removal from both the end (push, pop) and beginning (unshift, shift).
This may be a more general answer than you're looking for, but broadly speaking:
The recommendation from the Redux docs is to connect React components fairly high in the component hierarchy (see this section). This keeps the number of connections manageable, and you can then just pass updated props into the child components.
Part of the power and scalability of React comes from avoiding rendering invisible components. For example, instead of setting an invisible class on a DOM element, in React we just don't render the component at all. Re-rendering components that haven't changed isn't much of a problem either, since the virtual DOM diffing process optimizes the low-level DOM interactions.
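The "don't render it at all" point can be sketched without React, using plain functions as stand-ins for components (ItemView/ListView are illustrative names):

```javascript
// Stand-in for a list-item component.
function ItemView(item) {
  return `<li>${item.name}</li>`;
}

// Items outside the visible window are simply not rendered at all,
// rather than being rendered with a hidden/invisible class.
function ListView(items, visibleIds) {
  return items
    .filter(item => visibleIds.has(item.id))
    .map(ItemView)
    .join('');
}
```

In real React this is the same idea as `{isVisible && <Item />}`: the invisible branch produces no virtual DOM nodes, so there is nothing to diff or update.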
