Existing work on distributed animation queues?

I'm building an animation library on a distributed system. It currently works well for a single process, but I'd like to take advantage of the distributed nature of the system. To that end, the process the animations are spawned from holds the state of the values used to render the scene.
When I start to conceptualize how the animation queue could work, I keep running into race conditions around the scene's state. For example, my animation implementation takes values: you provide the value that a given property should be set to, and the library is responsible for building the timing frames, i.e. the in-between values, given a duration, frame rate, and easing. Each frame is popped off the animation queue and evaluated, the property is updated to the frame's value, and the scene is rendered.
However, when I think about applying the new property values, I keep coming back to the race conditions and the question of how to handle multiple processes trying to update that state concurrently.
So I'm interested to know whether there is prior work that achieves a similar goal, or other efforts at distributing animation handling across processes, that I can reference and learn from.
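One common pattern for the race described above is to make the state-holding process the single writer: every frame evaluation, from any process, enqueues its update with the owner instead of mutating state directly, so updates are applied strictly in arrival order. A minimal sketch of that idea (all names here, SceneState, enqueueUpdate, etc., are hypothetical, not from any particular library):

```typescript
type Update = { property: string; value: number };

// Single-writer owner of the scene's state: concurrent producers only
// enqueue; applying updates happens in one place, in arrival order.
class SceneState {
  private values = new Map<string, number>();
  private queue: Update[] = [];
  private draining = false;

  // Called by any frame evaluator; never touches `values` directly.
  enqueueUpdate(update: Update): void {
    this.queue.push(update);
    this.drain();
  }

  private drain(): void {
    if (this.draining) return; // one drain loop at a time
    this.draining = true;
    while (this.queue.length > 0) {
      const u = this.queue.shift()!;
      this.values.set(u.property, u.value); // apply in arrival order
    }
    this.draining = false;
  }

  get(property: string): number | undefined {
    return this.values.get(property);
  }
}
```

In a real distributed setting the queue would be fed by messages from remote processes, but the invariant is the same: one writer, so "last update wins" is well defined.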

Related

Replay projection in production

How do we replay a projection in a production environment?
For example, we have about 100k events; replaying them takes about 15 minutes. If we do this live, new events may come in while the replay runs, and the projection will not be up to date afterwards.
So, aside from scheduling system downtime, how do we replay the projection gracefully?
A projection is always (potentially) out of date. Projections are Data on the Outside -- unlocked, non-authoritative copies of the real data.
The fact that projection updates lag behind the changes to the authoritative copies of the data is an inevitable consequence of distributing copies of the data.
So, aside from scheduling system downtime, how do we replay the projection gracefully?
You accept into your design that the projections are data "as at" some time in the past; and you let the system run with the previously cached projection while the new projection is assembled.
We typically name our projections. If you projected all your order events into a projected-orders-v1 you can create a projected-orders-v2 in parallel and let it build up in the background.
When it's ready you do the code change required to access the new projections.
After that you can delete your old projection if you want.
This requires that your projection mechanism can read your event log from the beginning independently.
Update: Designing your system according to CQRS, separating READS from WRITES, solves this, because there will be separate, non-conflicting processes. One process is responsible for writing events to the end of the event stream, and (at least) one is responsible for reading from the beginning of the event stream. The process reading the events doesn't have to care whether an event is new; it only has to keep track of its position (the last known event) and keep reading forever.
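The "keep track of position and keep reading" loop above can be sketched in a few lines. This assumes an in-memory, append-only event log; the names (OrderEvent, Projector, catchUp) are illustrative, not a specific framework's API:

```typescript
interface OrderEvent { seq: number; orderId: string; amount: number }

// A projector owns exactly two things: its position in the log and the
// projection it is building. It can be created fresh (projected-orders-v2)
// and caught up in the background while v1 keeps serving reads.
class Projector {
  private position = 0;                          // last known event
  readonly totals = new Map<string, number>();   // the projection itself

  constructor(private log: OrderEvent[]) {}

  // Re-runnable: reads from `position` to the current end of the log.
  // It doesn't care whether an event is "new" or part of a replay.
  catchUp(): void {
    while (this.position < this.log.length) {
      const e = this.log[this.position];
      this.totals.set(e.orderId, (this.totals.get(e.orderId) ?? 0) + e.amount);
      this.position += 1;
    }
  }
}
```

To rebuild, you'd construct a second Projector over the same log, call catchUp() repeatedly until it is close enough to the head, then switch reads over to it and delete the old projection.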

Anylogic, animating a queue

This always annoys me so I usually just ignore it but this time it has prompted me to ask the question...
I am animating agents queuing for a resource using a path to represent the queue. I have a moveTo block to move my agents to a node which is placed at the front of the queue. When the queue is empty and an agent arrives to be serviced, it looks great as the agent moves to the end of the queue path and smoothly progresses along the path to the front of the queue where the node is located.
However, if there are multiple agents in the queue then new agents will move to the queue path and move all the way to the front of the queue (where the node is located) and then jump back to their correct position on the queue path.
If I put the node at the back end of the queue, then the animation looks great when agents arrive, as they join the queue behind others already there. But when the agent at the front of the queue seizes the resource it is waiting for, it jumps to the back of the queue and then proceeds along the queue to the resource node.
Any ideas on how to get this to animate correctly?
You can achieve this simply using a Conveyor block to represent the 'shuffling along' queue (with some specific configuration), but it's worth considering the wider picture (which also helps understand why trying to add a MoveTo block to a Service with its queue along a path cannot achieve what you want).
A process model can include model-relevant spatiality where movement times are important. (As well as the MoveTo block, blocks such as RackPick/Store and Service blocks with "Send seized resources" checked implicitly include movement.) However, typically you don't: a Service block with the queue being along a path is using the path just to provide some visual representation of the queue. In the underlying model agents arrive instantly into the queue from the upstream block and instantly enter the delay when a resource is free — that is the process abstraction of the model. Hence trying to 'fix the animation' with a previous MoveTo block or similar will not work because the Service block is not supposed to be representing such a conception of its queue (so agents will 'spring back' to the reality of the underlying behaviour as you observed). Additionally, a 'properly animated queue' would be obscuring the underlying basis of the model (making it seem as if that movement is being explicitly modelled when it isn't).
A Conveyor does conceptually capture agents which have to stay a certain distance apart and (for an accumulating conveyor) explicitly models agents moving along when there is free space. So, although it may seem counterintuitive, this is actually a 'correct' detailed conceptualisation of a moving human queue (which also of course matches an actual conveyor).
To make it work as you want it, you need to make the size of the agents (just from the conveyor's perspective) such that you only have the required number of people in your queue (now a conveyor), with the following Service block just having a capacity 1 queue (which thus represents the 'front of the queue person' only) — Service blocks can't have a capacity 0 queue. You can use a Point Node as the location for this single-entry queue which is just beyond the end of the conveyor path (so that this effectively represents the first position in the queue) — see below.
You then want the agent length on the conveyor to represent your 'queue slot length' which requires specifying the queue capacity (a variable in my example), so something like
path.length(METER) / (queueCapacity - 1)
where path is your conveyor path. (The conveyor represents all queue 'slots' except the first, hence why we subtract 1 above.)
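As a worked example of the formula with made-up numbers: a 10 m conveyor path and a conceptual queue capacity of 6 gives 5 slots on the conveyor (the sixth "slot" is the capacity-1 Service queue at the front), so each agent's conveyor length is 2 m:

```typescript
// Illustrative arithmetic only; in AnyLogic this expression lives in the
// conveyor's agent-length field, not in separate code.
function agentLength(pathLengthMeters: number, queueCapacity: number): number {
  // The conveyor holds every queue slot except the first, hence the -1.
  return pathLengthMeters / (queueCapacity - 1);
}
// agentLength(10, 6) -> 2 (metres per queue slot)
```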
You could also encapsulate all of this as a custom ServiceWithMovingQueue block or similar.
Note that the Queue before the Conveyor is needed in case the conveyor has no room for an arriving agent (i.e., the 'conceptual queue' is full). If you wanted to be particularly realistic you'd have to decide what happens in real-life and explicitly model that (e.g., overflow queue, agent leaves, etc.).
P.S. Another alternative is to use the Pedestrian library, where the Service with Lines space markup is designed to model this: partial example flow below. However, that means switching your agents to pedestrians (participating in the pedestrian library's underlying physics model for movement) and back again, which is performance-intensive (and can cause some bizarre movement in some cases due to the physics). Plus, because the pedestrian library has no explicit concept of resources for pedestrian services, you'd have to do something like have resource pool capacity changes influence which service points are open. (The service points within a Service with Lines markup have functions like setSuspended, so you can dynamically set them as 'open' or 'closed', in this case linked to whether resources are on-shift to man them.)
P.P.S. Note that, from a modelling accuracy perspective, capturing the 'real' movement in a human queue is typically irrelevant because
If the queue's empty, the time to move from the end to the front is typically negligible compared to the service time (and, even if it's not negligible, a service which generally has a queue present means this extra movement is only relevant for a tiny proportion of arrivals — see below).
If the queue's not empty, people move up whilst others are being served so there is no delay in terms of the service (which can always accept the next person 'immediately' after finishing with someone because they are already at the front of the queue).
This cannot be fixed with the existing blocks of the process modeling library.
Nevertheless, this problem doesn't occur if you use the Pedestrian library; you might consider it if the animation is that important, at the cost of your model's processing speed.
The only other way to actually do it is to create your own agent-based model to handle the behavior of agents in a queue, but that is not very straightforward.
Now, if you think about operation time, the process statistics are identical whether an agent moves as it does now or moves to the end of the line, so in terms of results you shouldn't worry about it.

Detect events/features/trend in Time series Data for a particular kind of data

Background for the data: it is data for a single variable from a machine such as a bulldozer (the pressure of the hydraulics responsible for moving its bucket). The machine performs actions like loading its bucket, moving to a place to dump the loaded material, and then dumping the material.
I have marked the Load Event (loading the bucket), Haul Event (machine moving to dump), and Dump Event (dumping the load).
So one Load Event, Haul Event and Dump Event constitutes a Complete Cycle. In the image provided I see 12 such cycles.
Problem Statement: Detect the count of such cycles in the data provided, and also eliminate the noise (marked in red in the image). Also calculate the time taken by each event: how long did the load, haul, and dump events each take? Combining these three gives the complete cycle time.
I tried to detect it using a moving average but it doesn't fit well.
Can anyone suggest a machine learning/ANN/better way which can accurately detect the events?
Looking at the image, one can see that the initial peaks are Load, then Haul, and the last spike is Dump. So we need to detect peaks based on a dynamic threshold.
Any viable approach to solve this problem is appreciated.
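Before reaching for ML, a simple baseline for the dynamic-threshold idea is worth trying: flag a sample as "active" when it exceeds the series mean plus k standard deviations, and count rising edges of that mask as candidate events. A sketch, where the parameter k and the notion of "event" are illustrative (real cycle counting would additionally group Load/Haul/Dump runs into cycles):

```typescript
// Count threshold crossings in a 1-D series. The threshold adapts to the
// data (mean + k * stddev), which is the "dynamic threshold" from the text.
function countEvents(series: number[], k = 1.0): number {
  const n = series.length;
  const mean = series.reduce((a, b) => a + b, 0) / n;
  const variance = series.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  const threshold = mean + k * Math.sqrt(variance);

  let events = 0;
  let active = false;
  for (const x of series) {
    const above = x > threshold;
    if (above && !active) events += 1; // rising edge = new candidate event
    active = above;
  }
  return events;
}
```

On noisy data you would typically smooth first (e.g. a median filter to kill the marked noise spikes) and compute the threshold over a sliding window rather than the whole series, but the structure stays the same.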

state change and packet loss

Let's say I want to speed up networking in a real-time game by sending only changes in position instead of absolute position. How might I deal with packet loss? If one packet is dropped the position of the object will be wrong until the next update.
Reflecting on #casperOne's comment, this is one reason why some games go "glitchy" over poor connections.
A possible solution to this is as follows:
Decide on the longest time you can tolerate an object/player being displayed in the wrong location - say xx ms. Put a watchdog timer in place that sends location of an object "at least" every xx ms, or whenever a new position is calculated.
Depending on the quality of the link, and the complexity of your scene, you can shorten the value of xx. Basically, if you are not using available bandwidth, start sending current position of the object that has not had an update sent the longest.
To do this you need to maintain a list of items in the order you have updated them, and rotate through it.
That means that fast changes are reflected immediately (if an object updates every ms, you will probably get a packet through quite often, so there is hardly any lag), but it never takes more than xx ms before you get another chance at an updated state.
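The bookkeeping behind the watchdog is just "who was sent longest ago". A minimal sketch of that part, with illustrative names (the actual send and timer wiring are game-specific):

```typescript
type ObjectId = string;

// Tracks when each object's position was last put on the wire, so the
// watchdog can resend the stalest objects first and no object goes more
// than maxAgeMs without a refresh, even if its position never changed.
class UpdateScheduler {
  private lastSent = new Map<ObjectId, number>();

  // Call whenever a position packet for `id` is sent (change or refresh).
  markSent(id: ObjectId, now: number): void {
    this.lastSent.set(id, now);
  }

  // Objects overdue for a refresh, oldest first.
  stale(now: number, maxAgeMs: number): ObjectId[] {
    return [...this.lastSent.entries()]
      .filter(([, t]) => now - t >= maxAgeMs)
      .sort((a, b) => a[1] - b[1])
      .map(([id]) => id);
  }
}
```

With spare bandwidth, you'd also drain the head of this ordering even before maxAgeMs expires, which is the "rotate through the list" behaviour described above.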

How to handle shared data in async process environment? (Node.js)

You have a Node.js server running, accepting connections from the outside (via Socket.io, but it's irrelevant). Every time a connection is made, you create an "Instance" structure, and then copy the instance inside a global Instances array. This part is asynchronous.
Every three instances created, you'll hand them all to another process and flag them as "promoted" (or delete them, or something else, you get the idea).
The problem is that all goes well if the connections arrive at discrete intervals, say 100ms apart. If they arrive almost at the same time, it is quite possible that you'll end up with the famous three instances in the buffer when, in fact, a lot more have arrived, because the creation and buffering of the instance objects is asynchronous.
Can you think of a way to mitigate this problem?
(I'd buffer the Instances in Redis instead of in memory, but in that case I'd have some serious serialization problems, since each instance object is quite rich in methods.)
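One mitigation is to make the push-and-check atomic with respect to the event loop: in Node, synchronous code inside a single callback is never interleaved, so if the buffer is only touched there, each call sees a consistent count and drains the buffer in exact batches of three no matter how many connections land "simultaneously". A sketch under that assumption (names are illustrative; the promotion is stubbed as an array):

```typescript
interface Instance { id: number }

const buffer: Instance[] = [];
const promotedBatches: Instance[][] = [];

// The only code allowed to touch `buffer`. Push and check happen in the
// same synchronous block, so no other callback can run in between.
function onConnection(instance: Instance): void {
  buffer.push(instance);
  while (buffer.length >= 3) {
    // splice removes exactly three, atomically from the event loop's
    // point of view; hand them to the other process here.
    promotedBatches.push(buffer.splice(0, 3));
  }
}
```

The `while` (rather than `if`) also covers the case where items are added in bulk, so the buffer can never silently grow past the batch size between checks.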
