Filtering child processes - wfp

I have a callout driver at FWPM_LAYER_ALE_FLOW_ESTABLISHED_V4 layer with the filter condition FWPM_CONDITION_ALE_APP_ID to filter traffic from a specific application.
However, some applications spawn child processes, and one of those children may be the process that actually communicates with the Internet. In that case filtering on the parent gives no output: with FWPM_CONDITION_ALE_APP_ID, WFP matches only the traffic of the specified executable itself.
How can I filter the parent and all its child processes?
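For reference, a filter like the one described above is typically added along these lines (a minimal sketch: the already-open engine handle, the callout GUID myCalloutKey, and the display name are assumptions, and error handling is reduced to the essentials):

    #include <windows.h>
    #include <fwpmu.h>
    #pragma comment(lib, "fwpuclnt.lib")

    // Minimal sketch: add a filter at ALE_FLOW_ESTABLISHED_V4 that matches a
    // single executable by app ID. "myCalloutKey" stands in for your callout's GUID.
    DWORD AddAppFilter(HANDLE engine, const GUID* myCalloutKey, PCWSTR exePath)
    {
        FWP_BYTE_BLOB* appId = NULL;
        DWORD err = FwpmGetAppIdFromFileName0(exePath, &appId);
        if (err != ERROR_SUCCESS) return err;

        FWPM_FILTER_CONDITION0 cond = {0};
        cond.fieldKey = FWPM_CONDITION_ALE_APP_ID;
        cond.matchType = FWP_MATCH_EQUAL;
        cond.conditionValue.type = FWP_BYTE_BLOB_TYPE;
        cond.conditionValue.byteBlob = appId;

        FWPM_FILTER0 filter = {0};
        filter.layerKey = FWPM_LAYER_ALE_FLOW_ESTABLISHED_V4;
        filter.displayData.name = (wchar_t*)L"Per-app flow-established filter";
        filter.numFilterConditions = 1;
        filter.filterCondition = &cond;
        filter.action.type = FWP_ACTION_CALLOUT_INSPECTION;
        filter.action.calloutKey = *myCalloutKey;

        err = FwpmFilterAdd0(engine, &filter, NULL, NULL);
        FwpmFreeMemory0((void**)&appId);
        return err;
    }

This only reproduces the single-executable setup the question already has; the app ID condition matches the launching executable, not anything it spawns.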

Related

Find Provenance Data For Flowfile Within a Processor

I am attempting to develop a NiFi processor that would extend the functionality of the built-in processor "Monitor Activity".
The problem I am attempting to solve is that in my application I have multiple flows entering the processor, with the processor alerting by email when no flowfiles arrive within a certain time period. However, if only one of the flows stops, no alert is triggered.
I would like to modify the processor such that it would be able to distinguish between the different flows and alert accordingly.
In order to do this, I would need a way to differentiate between flowfiles originating from one upstream processor and another.
I am aware that NiFi keeps detailed provenance records that can be easily accessed from within the GUI, but I'm unable to find an easy way of accessing this information programmatically from within processor code.

Specify which node should turn off

I'm trying to simulate a sensor network in Castalia where each radio works with a different duty cycle. I'm controlling the radio from the application, through the commands toRadioLayer(createRadioCommand(SET_STATE,SLEEP)) to turn it off and toNetworkLayer(createRadioCommand(SET_STATE,RX)) to turn it on. However, as each radio has its own schedule, I need to send these commands to a specific radio. Is it possible to specify which node these commands (or others, if they exist) are executed on?
Every node has its own application module, so when the application sends the commands you describe, they go to the radio module of the same node. If you need different nodes to use different duty cycles, you'd have to build this into the application, making it behave differently according to whatever conditions you have in mind. One very simple way is to choose the duty cycle randomly, so that each application module ends up with a different duty cycle.
If you want application modules to communicate across nodes, then there is no magic way for this. You'll have to establish communication via data packets.
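A minimal sketch of the random-duty-cycle idea, using the toRadioLayer(createRadioCommand(...)) calls quoted in the question together with Castalia's setTimer/timerFiredCallback helpers from VirtualApplication; the class name, timer indices, one-second period, and duty-cycle range are all illustrative:

    #include "VirtualApplication.h"

    // Hypothetical application module: each node draws its own duty cycle at
    // startup, so radios follow per-node schedules with no coordination.
    // (Define_Module and the other overrides Castalia requires are omitted.)
    class DutyCycleApp : public VirtualApplication {
        enum { SLEEP_TIMER, WAKE_TIMER };   // illustrative timer indices
        double period, dutyCycle;

      protected:
        void startup() {
            period = 1.0;                   // seconds; illustrative
            dutyCycle = uniform(0.1, 0.5);  // per-node random duty cycle
            setTimer(SLEEP_TIMER, period * dutyCycle);
        }

        void timerFiredCallback(int index) {
            switch (index) {
            case SLEEP_TIMER:               // awake phase over: go to sleep
                toRadioLayer(createRadioCommand(SET_STATE, SLEEP));
                setTimer(WAKE_TIMER, period * (1.0 - dutyCycle));
                break;
            case WAKE_TIMER:                // sleep phase over: listen again
                toRadioLayer(createRadioCommand(SET_STATE, RX));
                setTimer(SLEEP_TIMER, period * dutyCycle);
                break;
            }
        }
    };

Because the random draw happens independently in each node's application module, no cross-node communication is needed to get differing schedules.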

How to handle saving on a child context when the object is already deleted in the parent context?

I have a Core Data nested-contexts setup: a main queue context for the UI, which saves to the SQLite persistent store, and a private queue context for syncing data with the web service.
My problem is that the syncing process can take a long time, and there is a chance that an object being synced is deleted in the main queue context in the meantime. When the private queue context is saved, it crashes with a "Core Data could not fulfill a fault" exception.
Do you have any suggestions on how to detect this issue, or on how to configure the contexts to handle this case?
There is no magic behind nested contexts. They don't solve a lot of concurrency problems without additional work. Many people (you seem to be one of them) expect things to work out of the box that are not supposed to work. Here is a little background information:
If you create a child context using the private queue concurrency type, then Core Data will create a queue for this context. To interact with objects registered with this context, you have to use either performBlock: or performBlockAndWait:. The most important thing those two methods do is ensure that the passed block is invoked on the queue of the context. Nothing more, nothing less.
Think about this for a moment in the context of an application not based on Core Data. If you want to do something in the background, you could create a new queue and schedule blocks to do the work on that queue in the background. When the job is done, you want to communicate the result of the background operation to another layer inside your app logic. What happens when the user has deleted the related object/data in the meantime? Basically the same thing: a crash.
What you are experiencing is not a Core Data specific problem. It is a problem you have as soon as you introduce concurrency. What you need is to think about a policy, or some kind of contract, between your child and parent contexts. For example, before you delete an object from the root context, you should cancel all of the operations/blocks running on other queues and wait for the cancellation to finish before you actually delete the object.

Interprocess synchronization barrier in Windows

I am trying to establish a barrier between two different processes on Windows. They are essentially two copies of the same process (running them as two separate threads instead of processes is not an option).
The idea is to place barriers at different stages of the program, to make sure that both processes start each stage at the same time.
What is the most efficient way of implementing this in Windows?
Use a named event (see the CreateEvent and WaitForSingleObject API functions). You would need two events per barrier, one created by each instance of the application. Both instances then wait on each other's event. These events can of course be reused later for another barrier.
There is one complication, though: since event names are globally unique (let's say so for simplicity), each event needs a distinct name, perhaps prefixed with the creating instance's process ID. Each instance of the application therefore has to obtain the other instance's ID in order to derive the name of the event created by that instance.
If you have a windowed application, you can broadcast a message that informs the second instance of the application of the existence of the first instance.
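A minimal sketch of the two-event handshake, assuming each instance has already learned its peer's process ID (for example via the broadcast just described); the event name format and the Local\ prefix are illustrative:

    #include <windows.h>
    #include <stdio.h>
    #include <wchar.h>

    // Each instance creates its own event and waits on its peer's. The name
    // embeds the owning process ID plus a barrier index, so a fresh pair of
    // events backs every barrier and no reset race can occur.
    static HANDLE OpenBarrierEvent(DWORD ownerPid, int barrierIndex)
    {
        wchar_t name[64];
        swprintf(name, 64, L"Local\\barrier_%lu_%d", ownerPid, barrierIndex);
        // Creates the event, or opens the existing one if the peer got there first.
        return CreateEventW(NULL, TRUE, FALSE, name);  // manual-reset
    }

    void Barrier(DWORD myPid, DWORD peerPid, int barrierIndex)
    {
        HANDLE mine = OpenBarrierEvent(myPid, barrierIndex);
        HANDLE peer = OpenBarrierEvent(peerPid, barrierIndex);
        SetEvent(mine);                        // announce arrival
        WaitForSingleObject(peer, INFINITE);   // wait for the other instance
        CloseHandle(mine);
        CloseHandle(peer);
    }

Closing the handles only after the peer's event is signaled is safe: by the time the peer sets its event it has already opened ours, so the kernel object stays alive until both sides pass the barrier.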

Windows: Child Process with Redirected Input and Output

I'm trying to create a Child Process with Redirected Input and Output (as described here - http://msdn.microsoft.com/en-us/library/ms682499(VS.85).aspx).
For the people that don't want to bother reading the source code on that page, the author is using anonymous pipes to redirect the child's input and output. The parent process writes to the child process's input and reads from the child process's output.
In that code, however, the program closes the pipes after reading and writing (in WriteToPipe and ReadFromPipe), so in effect it just reads a file, dumps it into the child process's input stream, and then reads the child process's response.
Now, what I'm looking for is a code where we will not close the pipes, but we will continuously post requests and read the child process response (in contrast to making just 1 request).
I've tried several modifications to the source code given at the link posted above, but no matter what I try, the program always hangs when calling ReadFile() in the ReadFromPipe() function (it probably waits for the child to quit - but as I said, I'd like to get the child's response and then send further requests to it).
Any ideas on how I can get over this?
Update:
Can anyone at least tell me whether using the .NET Process class with RedirectStandardInput and RedirectStandardOutput is a good option?
Had exactly the same problem, and solved it by using PeekNamedPipe (which, according to MSDN, also works fine for anonymous read pipes) to check for available data before each call to ReadFile. That removed the blocking issues I was getting and allowed my GetExitCodeProcess() call to see that the process had exited and to clean up the pipes.
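A sketch of that approach, assuming hChildStdoutRd is the read end of the child's stdout pipe as created in the MSDN sample (the function name here is made up for illustration):

    #include <windows.h>

    // Poll the pipe without blocking: only call ReadFile once PeekNamedPipe
    // reports that data is actually available.
    BOOL TryReadFromPipe(HANDLE hChildStdoutRd, char* buf, DWORD bufSize,
                         DWORD* bytesRead)
    {
        DWORD avail = 0;
        *bytesRead = 0;
        // PeekNamedPipe also works on anonymous read pipes and never blocks.
        if (!PeekNamedPipe(hChildStdoutRd, NULL, 0, NULL, &avail, NULL))
            return FALSE;                  // pipe broken: child likely exited
        if (avail == 0)
            return TRUE;                   // nothing to read yet; try again later
        DWORD toRead = (avail < bufSize) ? avail : bufSize;
        return ReadFile(hChildStdoutRd, buf, toRead, bytesRead, NULL);
    }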
Yes - the .Net Process class redirects the standard input / output of the child process with anonymous pipes in a very similar way to the linked sample if you use RedirectStandardInput and RedirectStandardOutput, so this is probably a fairly good option.
As for why ReadFile is hanging - it sounds like this function is waiting for the child process to either send some data back, or to close the pipe. If you want to continuously post requests to the child process then you need to either:
Know exactly when it is appropriate to read so that you are not left waiting / blocked for the child process (so for example you only read immediately after a request has been sent). This strategy is very risky as there is always a chance that the child process doesn't behave as expected and you are left waiting on the child process indefinitely.
Have a dedicated thread for reading - if you have a dedicated thread for reading, then it doesn't matter that the thread may wait indefinitely on the child process, as your other threads can still send requests and operate as normal. This method is more complex than the first option, but when done properly it is far more robust (a minimal sketch follows this list). The only other drawback is that it requires an additional read thread for each child process, meaning it doesn't scale very well if you need to communicate with a large number of child processes.
Use asynchronous IO - it is possible to call the ReadFile function in a way that always returns immediately and notifies you when a read has completed (I'm a little fuzzy on the exact details of how this works, as I'm more used to C#). This is known as asynchronous IO and is the most versatile of the three methods, as it allows you to communicate with many child processes without needing a dedicated thread for each one. The tradeoff, however, is that it is also the most complex to do correctly (at least in my opinion).
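A minimal sketch of the dedicated-reader-thread option (the second one above), again assuming hChildStdoutRd is the read end of the child's stdout pipe from the linked sample:

    #include <windows.h>
    #include <stdio.h>

    // The reader thread is the only place that blocks on the child, so the
    // rest of the program stays free to keep sending requests.
    DWORD WINAPI ReaderThread(LPVOID param)
    {
        HANDLE hRead = (HANDLE)param;
        char buf[4096];
        DWORD n;
        // Blocking here is fine: only this thread waits on the child.
        while (ReadFile(hRead, buf, sizeof(buf), &n, NULL) && n > 0) {
            fwrite(buf, 1, n, stdout);   // hand the data off to the app here
        }
        return 0;  // ReadFile failed or read 0 bytes: pipe closed, child gone
    }

    // Started once per child process:
    //   HANDLE hThread = CreateThread(NULL, 0, ReaderThread,
    //                                 hChildStdoutRd, 0, NULL);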
