Save agent location in a "seize" block and use location in a "move-to" block

My model is a basic warehouse model. Trucks enter the model, are seized by an empty loading dock resource, and are then offloaded by the forklifts. The truck's location (loading bay 1, loading bay 2, etc.) is determined by whichever available resource gets seized. My "move-to" block then has the "move to location of seized resource" option ticked. In this case, the truck half of the model works as expected.
The forklift half is a little harder. I have been advised to inject pallet agents into the model instead of splitting them off the incoming trucks. The problem is that I can't tie the injection location of these agents to a specific resource.
How can I save the location of the newly parked truck (loading bay 1, loading bay 2, etc.) in a variable, and then use that same variable to inject the agents into that location? The nodes acting as the loading bays are the home locations of specific resources (loading bays).
When I save the location (varAgentLocation = getNetworkNode();), I have to set the variable type to "custom" (INode) to avoid errors. Then, if I type "varAgentLocation" in the location box of the Source block, I get an error stating that the types do not match (INode vs. InitialLocationType).

Using home locations for your (non-moving) loading bay resources is unnecessary.
Just make them custom resource agent types (e.g., agent type LoadingBay rather than vanilla Agent) with a parameter of type Node that you set at model startup to the relevant space markup node. (You need to have the resource pool add the resource agents to a custom, initially empty population of LoadingBays that you have created beforehand; this allows you to loop through the resource agents at model startup to set up their parameters, etc.)
Then probably the most coherent way is to:
1. Copy this value into a variable inside your Truck agent via the on-seize action of your Seize block.
2. Have your Pallet agents created with a reference to the Truck agent they came from (in a variable or parameter). There are various design alternatives for whether the pallets exist beforehand (where you might use an Unbatch block to 'release' them) or whether you create them on the fly.
3. When you inject pallet agents into, say, a Source block, set its arrival node (which can be a dynamic expression) to retrieve the relevant node from the 'parent' Truck agent (e.g., something like agent.arrivalTruck.loadingBayNode).
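The wiring above can be sketched in plain Java. All class and field names here (Truck.loadingBayNode, Pallet.arrivalTruck, etc.) are hypothetical stand-ins for AnyLogic agent types and their variables, and Node stands in for AnyLogic's INode; this is a sketch of the data flow, not AnyLogic API code.

```java
class Node {                       // stands in for AnyLogic's INode
    final String name;
    Node(String name) { this.name = name; }
}

class LoadingBay {                 // custom resource agent type
    Node bayNode;                  // parameter set at model startup
}

class Truck {
    Node loadingBayNode;           // filled in the Seize block's on-seize action
}

class Pallet {
    Truck arrivalTruck;            // reference to the truck the pallet came from

    // What the Source block's dynamic arrival-node expression would evaluate:
    Node arrivalNode() { return arrivalTruck.loadingBayNode; }
}

class Sketch {
    public static void main(String[] args) {
        LoadingBay bay = new LoadingBay();
        bay.bayNode = new Node("loadingBay1");   // startup loop sets this

        Truck truck = new Truck();
        truck.loadingBayNode = bay.bayNode;      // on-seize: copy the seized bay's node

        Pallet pallet = new Pallet();
        pallet.arrivalTruck = truck;             // set when the pallet is created

        System.out.println(pallet.arrivalNode().name);
    }
}
```

In the actual model, the on-seize line corresponds to the Seize block's "On seize unit" action, and pallet.arrivalNode() corresponds to the expression you would put in the Source block's arrival-location field.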

Related

How to get the position of a destination node?

I have been working on a position-based protocol using veins-inet and I want to get the position of the destination node.
In my code, I got the IP Address of the destination from the datagram.
const L3Address& destAddr = datagram->getDestinationAddress();
and I want to get the current position of this node.
I already checked the following question
How to get RSU coordinate from TraCIDemo11p.cc?
But it seems that it refers to the node by using the node ID.
Is there a way to get the position of the node by referring to its IP Address?
I am using instant veins-4.7.1
A very simple solution would be to have each node publish its current L3Address and Coord to a lookup table whenever it moves. This lookup table could be located in a shared module or every node could have its own lookup table. Remember, you are writing C++ code, so even a simple singleton class with methods for getting/setting information is enough to coordinate this.
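A minimal sketch of such a lookup table follows (shown in Java for brevity; in Veins this would be a C++ singleton keyed by L3Address and storing a Coord — the String/double[] types here are stand-ins):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Shared singleton lookup table: each node publishes its address and
// position whenever it moves; anyone can then resolve address -> position.
final class PositionRegistry {
    private static final PositionRegistry INSTANCE = new PositionRegistry();
    private final Map<String, double[]> positions = new ConcurrentHashMap<>();

    private PositionRegistry() {}

    static PositionRegistry getInstance() { return INSTANCE; }

    // Called by a node on every mobility update.
    void update(String address, double x, double y) {
        positions.put(address, new double[] { x, y });
    }

    // Returns the last known position, or null if the address is unknown.
    double[] lookup(String address) {
        return positions.get(address);
    }
}
```

Note that this is the "omniscient" shortcut: every lookup succeeds instantly and generates no channel load, which is exactly the simplification the next paragraph warns about.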
If, however, the process of "a node figures out where another node is" is something you would like to model (e.g., this should be a process that takes some time, can fail, causes load on the wireless channel, ...) you would first need to decide how this information would be transferred in real life, then model this using messages exchanged between nodes.

AnyLogic restricted area limits cause an error and show -1 still inside the limit when the model crashes

This model runs by injecting products via a button; the products are sent to rooms USP1 and USP2. A fleet of transporters carries the product to the next room via moveByTransporters, alongside logic that moves the product through the equipment via delays and seize/releases. The USP1 and USP2 logic both move the product to the same Harvest room once complete.
The model randomly chooses which USP room to go to, and when a product goes through the USP2 logic alone there are no errors. When a second product is injected, both USP1's and USP2's logic have a product moving through, and they meet at this Harvest logic.
When I run the model, it works until there are multiple products, and then the error occurs. Looking at the restricted limits for USP1, I see a count of -1 still inside the limit.
Well, somewhere within the RestrictedArea you are letting additional agents "slip in" without registering them with the RestrictedArea.
One candidate is the odd "split2" that folds split agents back into the subsequent elements. This creates new agents within your RestrictedArea without registering them properly.
You cannot create new agents within a RestrictedArea, as it will not notice them; see the help.

Advantages of using timeSeries over container resource

The timeSeries resource represents a container for data instances, and the timeSeriesInstance resource represents a data instance in that container.
The main difference from container and contentInstance is that it keeps the time information with the data and makes it possible to detect missing data.
Is there any other advantage which can be achieved using timeSeries and timeSeriesInstance resource instead of container and contentInstance resources?
Does it also help in reducing data redundancy? For example, if one of my application instances sends data every 30 seconds, then 24*120 = 2880 contentInstances will be created per day.
If timeSeries and timeSeriesInstance resources are used instead, will the same number of timeSeriesInstances (i.e., 24*120 = 2880) be created per day in the above case?
Also, is there any specific purpose for keeping the contentInfo attribute in timeSeries instead of timeSeriesInstance (like the contentInfo in the contentInstance resource)?
There are a couple of differences between the <container> and <timeSeries> resource types.
A <container> resource may contain an arbitrary number of <contentInstance> resources as well as <flexContainer> and (sub) <container> resources as child resources. The advantage of this is that a <container> can be further structured to represent more complex data types.
This is also the reason why the contentInfo attribute cannot be part of the <container> resource: the type of the content may be mixed, or the <container> resource may not have direct <contentInstance> resources at all.
A <timeSeries> resource can only have <timeSeriesInstance> resources as child resources (apart from <subscription>, <oldest>, <latest>, etc.). It is assumed that all the child <timeSeriesInstance> resources are of the same type, which is why the contentInfo is located in the <timeSeries> resource.
<timeSeriesInstance> resources may also have a sequenceNr attribute which allows the CSE to check for missing or out-of-sequence data. See, for example, the missingDataDetect attribute in the <timeSeries> resource.
For your application (sending and storing data every 30 seconds): it depends on the requirements. Is it important that measurements are transmitted all the time, and that you know when data is missing? Then use <timeSeries> and <timeSeriesInstance>. If your application just sends data when the measurement changes and it is only important to retrieve the latest value, then use <container> and <contentInstance>.
Here are two use cases where <timeSeries> seems better to me than using a <container>.
The first use case involves the dataGenerationTime attribute. This allows a sensor to record the exact time a value was captured, whereas with a <contentInstance> you only have the creation time (you could put the capture time into the content attribute, but that requires additional processing to extract it from the content). If you use the creationTime attribute of the <contentInstance>, there will be variations in the time based on when the CSE receives the primitive. With a <timeSeriesInstance> those variations go away, because the CREATE request includes the dataGenerationTime attribute. That makes the data more accurate.
The second use case involves the missingDataDetect attribute. In short, using this, along with the expected periodicInterval you can implement a "heartbeat" type functionality for your sensor. If the sensor does not send a measurement indicating that the door is closed/open every 30 seconds, a notification can be sent indicating that the sensor is malfunctioning or tampered with.
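The heartbeat idea can be sketched as follows. This is illustrative plain Java, not oneM2M CSE code; the CSE implements this internally based on the periodicInterval and missingDataDetect attributes, and the class and method names here are invented:

```java
// Flags a sensor as missing data when no timeSeriesInstance arrived
// within the expected period plus a tolerance.
final class MissingDataDetector {
    private final long periodMillis;      // expected reporting interval
    private final long toleranceMillis;   // allowed jitter
    private long lastSeenMillis;

    MissingDataDetector(long periodMillis, long toleranceMillis, long startMillis) {
        this.periodMillis = periodMillis;
        this.toleranceMillis = toleranceMillis;
        this.lastSeenMillis = startMillis;
    }

    // Called whenever a new data instance arrives.
    void onData(long timestampMillis) { lastSeenMillis = timestampMillis; }

    // True when the sensor skipped at least one expected report.
    boolean isMissing(long nowMillis) {
        return nowMillis - lastSeenMillis > periodMillis + toleranceMillis;
    }
}
```

For the door-sensor example above: period 30 s, some tolerance, and a positive isMissing() would trigger the "malfunctioning or tampered with" notification.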

Controlling Group Of Lights in oneM2M

What if an IN-AE creates a group of lights with ADN-AE1 and ADN-AE2, controlling them by using just one request? The diagram shows that it uses one request to control both of them, but when I click the request example, it creates <contentInstance>s one by one. Is there any example where I can control a group of resources with just one request, or is this not in oneM2M's scope?
Call flows for multiple light control are depicted in the figure below and are ordered as follows:
1. When the user updates a group of light states on her/his smartphone, the IN-AE creates a new contentInstance targeting a group of Light ADN-AE container resources hosted on the MN-CSE.
2. For each contentInstance created successfully, the MN-CSE sends a notification to the corresponding Light ADN-AE.
Edit:
A <group> resource bundles and manages a number of resources (either of the same or of mixed resource types); in your example, the two <container>s under ADN-AE1 and ADN-AE2.
In addition to its other attributes, a <group> has a virtual child resource called the <fanOutPoint>. This virtual resource internally replicates every request it receives, be it CREATE, RETRIEVE, UPDATE or DELETE, to all the matching member resources of the <group>.
In the example, the <container>s exist before they are organised into a group, and they can be accessed and controlled independently. The <group> resource bundles them together and makes them available to an application as a single entity. When this <group> receives a CREATE request for a <contentInstance>, it automatically creates a new <contentInstance> for each of its member resources. For the ADN-AEs, though, it does not matter who created the <contentInstance>s, or how.
Interestingly, this decouples the IN-AE application from the actual deployment and orchestration of an infrastructure. Just imagine that a <group> bundles all the lights in a home and is managed by a home manager AE. Another AE, responsible for managing the home when the inhabitants leave, does not need to know much about the actual devices in the home; it only needs to send one request to the <group> resource to switch off all the lights.
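Conceptually, the fan-out behaviour looks like this (an illustrative Java sketch, not actual CSE code; Container, Group and LightContainer are invented stand-ins for the oneM2M resource types):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a <container> that can receive a <contentInstance>.
interface Container {
    void createContentInstance(String content);
}

// Stand-in for a <group>: one request to its <fanOutPoint> becomes
// one request per member resource.
final class Group {
    private final List<Container> members = new ArrayList<>();

    void addMember(Container member) { members.add(member); }

    // A CREATE sent to the group's fanOutPoint is replicated to every member.
    void fanOutCreate(String content) {
        for (Container member : members) {
            member.createContentInstance(content);
        }
    }
}

// Stand-in for a light's container: it just records the latest state.
final class LightContainer implements Container {
    String latest;
    public void createContentInstance(String content) { latest = content; }
}
```

With the two lights' containers added as members, a single fanOutCreate("off") would set the state of both, which is the single-request behaviour asked about.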
Update
Check oneM2M's "TS-0001 - Functional Architecture", sections "9.6.13 - Resource Type group" for the <group> and "9.6.14 - Resource Type fanOutPoint" for the <fanOutPoint> for the specification of this behaviour.

Reducing calls to distant WebService

I am working on a web application. It has a page which loads information related to people (name, surname, telephone, etc.). In addition to this default information, there is an icon which represents the status of the person in another, external system.
Each time the person page is loaded, our system invokes a WS to update the icon:
State = 1 implies icon_color=red
State = 2 implies icon_color=blue
State = 3 implies icon_color=grey
An important point is that the external system interacts with the person by means of his/her mobile phone, whereas our system does not. That means the person may change his/her status on the external system at any moment.
The problem is that the external server receives a huge number of calls for retrieving the status information. Our goal is reducing as much as we can the number of calls to the WS.
We are evaluating the following approach: add the status information to our database and update it once a day. The problem with this approach is that the status could change after the last update, so the icon colour may not be the actual one.
In a few words, we have one approach that is completely up-to-date all the time resulting in many calls to the external WS. On the other hand, we have an approach that will call the WS once a day but the information stored in our system may not be up-to-date.
My question is whether there exists a tradeoff approach.
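The two approaches described can be framed as endpoints of a single cached-status design with a maximum age: maxAge = 0 reproduces the call-on-every-page-load behaviour, a 24-hour maxAge reproduces the daily update, and values in between trade freshness against call volume. A hypothetical sketch (all names invented, the WS call abstracted as a function):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Cache each person's status; refresh from the web service only when
// the cached value is older than maxAgeMillis.
final class StatusCache {
    private static final class Entry {
        final int status;
        final long fetchedAtMillis;
        Entry(int status, long fetchedAtMillis) {
            this.status = status;
            this.fetchedAtMillis = fetchedAtMillis;
        }
    }

    private final Map<String, Entry> entries = new HashMap<>();
    private final long maxAgeMillis;
    private final Function<String, Integer> webService; // the remote WS call

    StatusCache(long maxAgeMillis, Function<String, Integer> webService) {
        this.maxAgeMillis = maxAgeMillis;
        this.webService = webService;
    }

    int getStatus(String personId, long nowMillis) {
        Entry e = entries.get(personId);
        if (e == null || nowMillis - e.fetchedAtMillis > maxAgeMillis) {
            e = new Entry(webService.apply(personId), nowMillis); // stale: refetch
            entries.put(personId, e);
        }
        return e.status;
    }
}
```

Since only viewed pages trigger a fetch, call volume is bounded by both page views and the chosen maxAge, so an acceptable staleness window can be tuned rather than fixed at "always" or "daily".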
