Specify which node should turn off - OMNeT++

I'm trying to simulate a sensor network in Castalia, where each radio works with a different duty cycle. I'm controlling the radio from the application, using the commands toRadioLayer(createRadioCommand(SET_STATE,SLEEP)) to turn it off and toNetworkLayer(createRadioCommand(SET_STATE,RX)) to turn it on. However, since each radio has its own schedule, I need to send these commands to a specific radio. Is it possible to specify which node these commands (or any others, if they exist) are executed on?

Every node has its own application module, so when the application sends the commands you describe, they go to the radio module of the same node. If you need different nodes to use different duty cycles, you have to build this into the application so that it behaves differently according to whatever conditions you have in mind. One very simple way is to choose the duty cycle randomly (so each application module ends up with a different duty cycle).
If you want application modules to communicate across nodes, there is no magic way to do it: you'll have to establish communication via data packets.
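As an illustration of the random-duty-cycle idea, here is a minimal sketch of a hypothetical Castalia application module, assuming the usual VirtualApplication helpers (startup(), setTimer(), timerFiredCallback()); the question mixes toRadioLayer and toNetworkLayer, so use whichever call your Castalia version expects. All names below are made up for the example.

```cpp
// Hypothetical Castalia application (a VirtualApplication subclass).
// `period` and `awakeFraction` are assumed to be double members of the class.
enum DutyCycleTimers { WAKE_UP = 1, GO_TO_SLEEP = 2 };

void DutyCycleApp::startup() {
    period = 1.0;                           // duty-cycle period in seconds (assumed)
    awakeFraction = uniform(0.05, 0.5);     // each node draws its own awake fraction
    setTimer(WAKE_UP, 0);                   // start the first cycle immediately
}

void DutyCycleApp::timerFiredCallback(int timerIndex) {
    switch (timerIndex) {
    case WAKE_UP:
        toNetworkLayer(createRadioCommand(SET_STATE, RX));     // radio on
        setTimer(GO_TO_SLEEP, awakeFraction * period);
        break;
    case GO_TO_SLEEP:
        toNetworkLayer(createRadioCommand(SET_STATE, SLEEP));  // radio off
        setTimer(WAKE_UP, (1.0 - awakeFraction) * period);
        break;
    }
}
```

Because every node runs its own copy of the application, each one draws a different awakeFraction and therefore follows its own schedule.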

Related

No state machine in elsa-workflows?

I love the elsa-workflows project, as I was heavily using WWF in the past. However, many of my workflows were state machines. I can't see any in Elsa; any plans to support this?
Elsa 2 does not support the state machine model (only the flowchart model), but I am planning on revising the engine for Elsa 3, which would allow any type of model, including state machines and simple sequential flows like we have in Windows WF.
UPDATE
After I answered with the above, I started to think ahead about the state machine architecture for V3, and while doing so I realized we can already implement the state machine model today with V2.
All it would take is a simple new activity called e.g. "State" that has an unbounded number of outcomes. This State activity would simply set a workflow variable called e.g. "StateMachineState" or "CurrentState". Each outbound connection would be connected to whatever trigger is responsible for transitioning into the next state. This could be a message from a service bus, a timer, an HTTP request, or anything else that's available in Elsa.
The only real change needed to make the user experience smooth is the ability to keep adding connections without having to specify them manually from the activity editor. With the current design, we could probably just automatically add an extra outcome to the activity: initially there would just be e.g. "Transition 1", and once that one becomes connected, a "Transition 2" would appear.
Anyway, I am revising my answer to: it's not here yet, but:
You can implement it yourself today, and
I will add an initial version of the State machine model to either Elsa 2.1 or 2.2, depending on any hidden gotchas I might have failed to see.
UPDATE 2
I just pushed a change that includes a State activity.
With this, you can now easily implement a state machine by adding State activities to your workflow. Here's an example of a traffic light state machine:
This workflow kicks off automatically after 5 seconds, after which it transitions into the "Green" state. It stays there for 10 seconds before transitioning into the "Yellow" state. After 5 seconds it transitions into the "Red" state, and finally transitions back to the "Green" state after another 5 seconds. Then it repeats.
To use the State activity, you specify two things:
State name.
Allowed transitions (the traffic light example includes only one transition per state, but you can specify more than one).
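This is not Elsa syntax, but a plain C++ analogue of what the State activity encodes for the traffic-light example above: a "CurrentState" variable plus timed transitions (in Elsa the delays would be Timer triggers rather than sleeps; all names below are illustrative).

```cpp
#include <chrono>
#include <iostream>
#include <thread>

enum class LightState { Green, Yellow, Red };   // one value per State activity

int main() {
    using namespace std::chrono_literals;
    std::this_thread::sleep_for(5s);            // initial 5 s before entering "Green"
    LightState currentState = LightState::Green;
    for (;;) {
        switch (currentState) {
        case LightState::Green:                 // stay green for 10 s, then transition
            std::cout << "Green\n";
            std::this_thread::sleep_for(10s);
            currentState = LightState::Yellow;
            break;
        case LightState::Yellow:                // yellow for 5 s
            std::cout << "Yellow\n";
            std::this_thread::sleep_for(5s);
            currentState = LightState::Red;
            break;
        case LightState::Red:                   // red for 5 s, then back to green
            std::cout << "Red\n";
            std::this_thread::sleep_for(5s);
            currentState = LightState::Green;
            break;
        }
    }
}
```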

Multiple flows with NiFi

We have multiple (50+) NiFi flows that all do basically the same thing: pull some data out of a DB, append some columns, convert to Parquet, and upload to HDFS. They differ only in details such as the SQL query to run or the location in HDFS where they land.
The question is how to factor out these common NiFi flows such that any change made to the common flow automatically applies to all derived flows. E.g. if I want to add an extra step to also publish the data to Kafka, I want to make that change once and have it automatically apply to all 50 flows.
We've tried to get this working with NiFi Registry, however it seems like an imperfect fit. Essentially the issue is that NiFi Registry seems to work well for updating a flow in one environment (say UAT) and then automatically updating it in another environment (say prod). It seems less suited for updating multiple flows in the same environment, one specific example being that it resets the name of each flow to the template name every time we redeploy, meaning that all flows end up with the same name!
Does anyone know how one is supposed to manage a situation like ours, as I guess it must be pretty common?
Apache NiFi has Process Groups. As the name suggests, a process group is there to group together a set of processors and their pipeline that perform a similar task.
So for your case, you can refactor the flow by moving the common, reusable part into a separate process group with an input port. Connect each outside flow that depends on this reusable flow to the input port of the reusable process group. Depending on your requirements, you can also create an output port in this process group and connect it to the outside flow.
Attaching a sample:
For the sake of explanation, I have made a mock flow, so ignore the processor types used and instead look at the names I have given to those processors.
The following screenshots show that I read from two different sources and individually connect them to two different processors that apply source-specific changes.
Then I connect these two flows to the input port of a process group that contains the reusable flow. So ultimately the two different flows shown in the above screenshot get to work with a common reusable flow.
Showing what's inside the reusable flow:
Finally, the output port "output to outside" connects the reusable flow to the outside component "Write to somewehere".
I hope this helps you with refactoring your complex flows. Feel free to get back if you have any queries.

Concurrent use of OSEK_TP.DLL

I have a CANoe simulation with a main node that uses OSEK_TP.DLL.
I'd like to create another node (so it can be reused easily) with various useful "on key 'x'" macros to send CANext messages, also with OSEK_TP.DLL.
Can I have different nodes using OSEK_TP.DLL? Does each one handle its own context: for example, can I declare a different OSEKTL_SetRxId value in each node's "on start" handler?
Thanks for your help,
Yes. Basically, a simulation node is the simulation of an ECU, so each node can reference and use OSEK_TP.DLL. Each node is simulated with a different CAPL file, so they can have different "on start" callbacks.

What is the Proper Non-blocking Algorithm for Modbus Master on Microcontroller

Modbus is a request/response type of serial communication. Basically, the master sends out a request and one of the slaves responds.
I am modifying the code on a microcontroller which is a master unit on a Modbus network. This unit also has a small dot-matrix LCD and some buttons for the user interface. The microcontroller is running at 16 MHz.
The problem is that after the master unit sends out a request, it does not know when the slave will respond, so it may need to wait a relatively long time. However, as this unit has buttons and an LCD, it cannot wait at one point for too long, because the user will feel lag when pressing a button. The original code uses an RTOS. It separates the user-interface task and the serial-communication tasks, so it has no problem. Now I need to change it to non-RTOS code. I have implemented a system tick timer which interrupts every 1 ms. What is the proper (or common) way to do that?
It is possible to do quite a lot with just a single task, especially if you have interrupts. The intermediate position between a single very simple task and an RTOS is a cyclic executive. See http://www3.nd.edu/~cpoellab/teaching/cse40463/slides10.pdf for a brief overview of the spectrum of functionality from a cyclic executive up to a fully preemptive multitasking operating system. You will find much more if you search on this phrase and related phrases, including very sophisticated schemes for making sure that the system never misses its deadlines. If you are an aircraft flight control system, forgetting to check the aircraft pitch angle every X ms can cause problems elsewhere :-)
One way to rewrite code which is naturally multi-threaded is to maintain a model of the state of the system, such as a collection of objects each representing a Modbus connection, indexed by a connection id. Then write a routine for every sort of event that can happen, including the arrival of a clock interrupt. When that event happens, these routines typically work out which connection is involved, retrieve it from the main collection (or create it from scratch and enter it there if necessary), do the work associated with that particular sort of event, and then return.
It is often convenient to keep a queue of future events, indexed by time, and to have a routine that creates an object representing something to be done at some future time (such as calling a method to check for the expiration of a timeout) and puts this object on the queue.
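As a rough illustration of such a future-event queue on a bare-metal target with a 1 ms tick, one possible shape (a fixed table rather than a dynamic queue; every name here is made up for the sketch) is:

```cpp
#include <stdint.h>

extern volatile uint32_t g_tickMs;      // incremented by the 1 ms timer interrupt

struct FutureEvent {
    uint32_t dueAt = 0;                 // tick value at which the event fires
    void (*handler)() = nullptr;        // what to do when it fires
    bool active = false;
};

static FutureEvent g_events[8];         // small fixed pool, no dynamic allocation

// Schedule `handler` to run `delayMs` milliseconds from now.
bool scheduleEvent(void (*handler)(), uint32_t delayMs) {
    for (FutureEvent &e : g_events) {
        if (!e.active) {
            e.dueAt = g_tickMs + delayMs;
            e.handler = handler;
            e.active = true;
            return true;
        }
    }
    return false;                       // table full
}

// Called from the main loop: run and retire any events whose time has come.
void pollEvents() {
    for (FutureEvent &e : g_events) {
        // Unsigned subtraction keeps the comparison correct across tick wraparound.
        if (e.active && (int32_t)(g_tickMs - e.dueAt) >= 0) {
            e.active = false;
            e.handler();                // e.g. a Modbus reply-timeout check
        }
    }
}
```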
You need to worry about interrupt processing getting called halfway through an event service routine. One way to deal with this is to lock out interrupts when that could cause a problem. Another way is to have the interrupt routine do nothing more than put an object on a queue that something else will check for later, or just set a flag. Then you need only lock out interrupts when you are checking for items on the queue and removing them.
A number of communications protocols are implemented in this way. Even in a true multitasking operating system you very often don't want to have to create a new thread every time you need to create a new connection. The two main problems with this are that the code is less clear than code which has a thread per object, because stuff that naturally goes together is chopped up into lots of event service routines, and that if any of the event service methods burn significant amounts of CPU, the system will stall, because nothing else happens while that is going on.
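Pulling the pieces together for the Modbus case, here is one possible non-blocking sketch: a state machine polled from the main loop, a reply flag set by the receive interrupt, and a timeout measured with the 1 ms tick. Everything below is illustrative (made-up names and stubbed hardware access), not the original code.

```cpp
#include <stdint.h>

volatile uint32_t g_tickMs = 0;            // incremented by the 1 ms timer ISR
volatile bool g_replyReceived = false;     // set by the UART ISR once a full frame is in

void pollButtons() { /* scan the keypad: hardware-specific, stubbed here */ }
void updateLcd()   { /* refresh the display: hardware-specific, stubbed here */ }

enum class ModbusState { Idle, WaitReply, ProcessReply, Timeout };

struct ModbusMaster {
    ModbusState state = ModbusState::Idle;
    uint32_t sentAt = 0;
    static constexpr uint32_t kTimeoutMs = 200;   // assumed reply timeout

    void sendNextRequest() { /* write the next query to the UART, non-blocking */ }
    void handleReply()     { /* parse the frame, update application data */ }
    void handleTimeout()   { /* log the error, retry or move to the next slave */ }

    // Called from the main loop; never blocks, so the UI stays responsive.
    void poll() {
        switch (state) {
        case ModbusState::Idle:
            sendNextRequest();
            sentAt = g_tickMs;
            state = ModbusState::WaitReply;
            break;
        case ModbusState::WaitReply:
            if (g_replyReceived) {
                g_replyReceived = false;
                state = ModbusState::ProcessReply;
            } else if (g_tickMs - sentAt > kTimeoutMs) {   // wrap-safe unsigned math
                state = ModbusState::Timeout;
            }
            break;
        case ModbusState::ProcessReply:
            handleReply();
            state = ModbusState::Idle;
            break;
        case ModbusState::Timeout:
            handleTimeout();
            state = ModbusState::Idle;
            break;
        }
    }
};

int main() {
    ModbusMaster modbus;
    for (;;) {          // superloop: each call returns quickly
        modbus.poll();
        pollButtons();
        updateLcd();
    }
}
```

Because poll() never blocks, the button scan and LCD refresh keep running at full speed while the Modbus state machine waits for the slave's reply or for its timeout.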

Interprocess synchronization barrier in Windows

I am trying to establish a barrier between two different processes in Windows. They are essentially two copies of the same process (running them as two separate threads instead of processes is not an option).
The idea is to place barriers at different stages of the program, to make sure that both processes start each stage at the same time.
What is the most efficient way of implementing this in Windows?
Use a named event (see the CreateEvent and WaitForSingleObject API functions). You would need two events per barrier, each event created by a different instance of the application. Then both instances wait for each other's event. Of course, these events can be reused later for another barrier.
There is one complication, though: as event names are globally unique (let's say so for simplicity), each event needs a different name, maybe prefixed with the instance's process ID. So each instance of the application would have to get the other instance's ID in order to find the name of the event created by the other instance.
If you have a windowed application, you can broadcast a message which will inform the second instance of the application about the existence of the first instance.
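A minimal sketch of that two-event barrier (C++/Win32), assuming each instance already knows the other's process ID (for example, passed on the command line or discovered via the broadcast message above); the "Local\StageBarrier_<pid>" name scheme is made up for this example and error handling is omitted.

```cpp
#include <windows.h>
#include <string>

class TwoProcessBarrier {
public:
    TwoProcessBarrier(DWORD myPid, DWORD otherPid) {
        // CreateEvent either creates the named event or opens the one the other
        // instance already created, so startup order does not matter.
        // Auto-reset events let the same pair be reused for later barriers.
        myEvent_    = CreateEventA(nullptr, FALSE, FALSE, eventName(myPid).c_str());
        otherEvent_ = CreateEventA(nullptr, FALSE, FALSE, eventName(otherPid).c_str());
    }

    ~TwoProcessBarrier() {
        CloseHandle(myEvent_);
        CloseHandle(otherEvent_);
    }

    // Call at the end of each stage in both processes.
    void Wait() {
        SetEvent(myEvent_);                          // "I reached the barrier"
        WaitForSingleObject(otherEvent_, INFINITE);  // wait for the peer to reach it
    }

private:
    static std::string eventName(DWORD pid) {
        return "Local\\StageBarrier_" + std::to_string(pid);
    }

    HANDLE myEvent_;
    HANDLE otherEvent_;
};
```

Usage would look like TwoProcessBarrier barrier(GetCurrentProcessId(), otherPid); followed by barrier.Wait() at the end of each stage in both processes.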
