FilterAttach called multiple times in an NDIS filter driver

I created an NDIS network filter driver, but when I install it I see "FilterAttach" called 4 times.
Why is "FilterAttach" called 4 times in my filter driver?

There are three reasons you might see multiple FilterAttach calls in your driver:
multiple NICs,
monitoring filters, and
NDIS binding recalculations
Let's look at each in detail.
Multiple NICs
Filter drivers will bind a filter module to each NIC that is compatible with the filter driver. So if you have 3 compatible NICs, you'll get at least three calls to FilterAttach.
 [TCPIP]     [TCPIP]     [TCPIP]
    |           |           |
[filter1]   [filter2]   [filter3]
    |           |           |
  [NIC1]      [NIC2]      [NIC3]
You can tell you're in this situation because the NDIS_FILTER_ATTACH_PARAMETERS::BaseMiniportIfIndex value is different across the different FilterAttach instances. This means your filter is getting bound over different NICs.
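If you want to confirm which case you're hitting from inside the driver, one minimal sketch (the "MyLwf" debug tag and the bare-bones body are illustrative, not a complete attach handler) is to log the attach parameters and compare them across calls:
NDIS_STATUS
FilterAttach(
    NDIS_HANDLE NdisFilterHandle,
    NDIS_HANDLE FilterDriverContext,
    PNDIS_FILTER_ATTACH_PARAMETERS AttachParameters)
{
    // Different BaseMiniportIfIndex values across FilterAttach calls mean the
    // filter is being attached over different NICs; FilterModuleGuidName
    // identifies the specific filter module instance being created.
    DbgPrint("MyLwf: FilterAttach over miniport IfIndex %u, module %wZ\n",
             AttachParameters->BaseMiniportIfIndex,
             AttachParameters->FilterModuleGuidName);

    // ... normal attach processing (context allocation, NdisFSetAttributes, ...) ...
    return NDIS_STATUS_SUCCESS;
}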
Monitoring filters
An NDIS LWF is either monitoring or modifying. Look in the INF file to see which type of filter you have:
; For a Monitoring filter, use this:
; HKR, Ndi,FilterType,0x00010001, 1 ; Monitoring filter
; For a Modifying filter, use this:
; HKR, Ndi,FilterType,0x00010001, 2 ; Modifying filter
The difference between monitoring and modifying is in how these filters bind to a network card. A modifying filter is the simplest: it binds just once per network card. In contrast, a monitoring filter binds once over each modifying filter, and one more time over the NIC itself. Here's a diagram of what happens when you have a monitoring filter and 2 modifying filters:
[TCPIP]
|
[monitoring1] // 3
|
[modifying2]
|
[monitoring1] // 2
|
[modifying1]
|
[monitoring1] // 1
|
[NIC]
The key thing to notice in this diagram is that the same monitoring filter is attached 3 times to the stack: once over the NIC, and once over each of the 2 modifying filters (modifying1 and modifying2).
If you don't want your monitoring filter to bind at each altitude like that, you can return NDIS_STATUS_NOT_SUPPORTED from your FilterAttach handler any time NDIS_FILTER_ATTACH_PARAMETERS::LowerIfIndex is different from NDIS_FILTER_ATTACH_PARAMETERS::BaseMiniportIfIndex. If you have a mandatory filter, you should also set the NDIS_FILTER_ATTACH_FLAGS_IGNORE_MANDATORY flag in the NDIS_FILTER_ATTACH_PARAMETERS::Flags, but note that we do not recommend marking a monitoring filter as mandatory.
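As a hedged sketch of that check (assuming you only want the attachment that sits directly over the miniport), the top of the FilterAttach handler could look like this:
if (AttachParameters->LowerIfIndex != AttachParameters->BaseMiniportIfIndex)
{
    // We're being offered an attachment above another filter module rather
    // than directly above the NIC itself -- decline this extra binding.
    return NDIS_STATUS_NOT_SUPPORTED;
}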
You can tell that you're in this situation if the NDIS_FILTER_ATTACH_PARAMETERS::BaseMiniportIfIndex is the same in both calls to FilterAttach, but the NDIS_FILTER_ATTACH_PARAMETERS::FilterModuleGuidName is different. The BaseMiniportIfIndex tells you which miniport your filter is over, and the FilterModuleGuidName tells you exactly which filter instance is attaching.
NDIS binding recalculations
The final reason that a filter may see multiple calls to its FilterAttach routine is that NDIS sometimes recalculates bindings. Maybe a new filter is getting installed below your filter: NDIS will unbind your filter (FilterDetach), bind the new filter, then bind your filter again (FilterAttach).
You can tell that NDIS is re-trying your filter due to a binding recalculation because the NDIS_FILTER_ATTACH_PARAMETERS::FilterModuleGuidName is the same as a previous call to FilterAttach. This means that NDIS is attaching your filter in the same spot as before.
Debugging tips
If you have a kernel debugger attached, you can always use !ndiskd.filterdriver to see where your filter is attached. You can also use !ndiskd.netreport to see a graphical visualization of the network stack.

Related

How to do aggregation on fixed-size count-based sliding window?

How do I implement a sliding window aggregation (or transformation) with a fixed-size count-based window?
For example, if I have stream data like the following:
input stream = 1,2,3,4,5,6,7,8...
Assume that time is not relevant here, and say my aggregate function is AVERAGE and the window size is fixed at 3 records (not 3 millis, 3 secs, 3 hours, etc.). I would like my output stream to be
output stream = avg(1,2,3), avg(2,3,4), avg(3,4,5), avg(4,5,6), avg(5,6,7)... = 2,3,4,5,6...
The windows documented in Kafka Streams are time-based. Even the constructor of the base class Window has the following signature:
Window(long startMs, long endMs)
So I am not sure whether it's the right tool for non-time-based windowed aggregation.
Apache Flink supports count-based sliding and tumbling windows. That's exactly what I need, but I'm looking for a similar feature in Kafka Streams.
If time-ordering is no concern for you, you can implement a custom Transformer with attached state.
StreamsBuilder builder = new StreamsBuilder();
builder.addStateStore(...); // add a KeyValueStore here
KStream result = builder.stream("topic").transform(...); // pass in the name of your KeyValueStore, too
For your custom Transformer, you can maintain a List per key, with the list being your window: as long as the list is smaller than your window size, you append the new record to the list; once it reaches exactly that size, you trigger the computation; if it exceeds the size, you trim it and trigger the computation afterwards.
See the docs for more details: https://kafka.apache.org/10/documentation/streams/developer-guide/processor-api.html (Note that a Processor and a Transformer are basically the same thing.)
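For illustration, here is a minimal, hedged sketch of such a Transformer. The store name "window-store", the Double value type, the class name, and the window size of 3 are assumptions for this example, and the state store must be registered with serdes that can handle the list type:
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import java.util.ArrayList;

public class CountWindowAvgTransformer
        implements Transformer<String, Double, KeyValue<String, Double>> {

    private static final int WINDOW_SIZE = 3;          // records per window
    private KeyValueStore<String, ArrayList<Double>> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        // "window-store" must have been registered via builder.addStateStore(...)
        store = (KeyValueStore<String, ArrayList<Double>>) context.getStateStore("window-store");
    }

    @Override
    public KeyValue<String, Double> transform(String key, Double value) {
        ArrayList<Double> window = store.get(key);
        if (window == null) {
            window = new ArrayList<>();
        }
        window.add(value);
        if (window.size() > WINDOW_SIZE) {
            window.remove(0);                           // drop the oldest record
        }
        store.put(key, window);
        if (window.size() < WINDOW_SIZE) {
            return null;                                // window not full yet -> emit nothing
        }
        double sum = 0;
        for (double d : window) {
            sum += d;
        }
        return KeyValue.pair(key, sum / WINDOW_SIZE);   // emit the sliding-window average
    }

    @Override
    public void close() { }
}
You would then wire it up with builder.addStateStore(...) for "window-store" and stream.transform(CountWindowAvgTransformer::new, "window-store"). Depending on your Kafka Streams version, the Transformer interface may also require implementing the (deprecated) punctuate() method.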
If you are willing to use Apache Storm, which is also a streaming engine, Kafka can be connected to it as a data source. Newer Storm versions provide a concept called Tumbling Window, which delivers an exact number of tuples to your topology. This can easily be used to solve your problem.
For more have a look at https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_storm-component-guide/content/storm-windowing-concepts.html

DisMan Monitoring - traps not being generated

In order to set up self-monitoring of a Linux OS (CentOS) so that it sends traps when a condition occurs, I have configured the following lines:
com2sec notConfigUser default Public0
group notConfigGroup v1 notConfigUser
group notConfigGroup v2c notConfigUser
view systemview included .1
access notConfigGroup "" any noauth exact systemview systemview none
# for disk query
disk / 100000000
trap2sink 10.10.64.132
# authorization for self monitoring
rouser admin
iquerySecName admin
# define message to send, OID to monitor, threshold values
monitor -r 10 DiskAlmostFull dskPercent < 90
monitor -r 10 machineTooBusy hrProcessorLoad < 90
But the traps are generated only when I restart the snmpd daemon.
I have tried to troubleshoot this issue without success.
Any help would be appreciated.
Thanks in advance.
Having had the same problem, I discovered the following explanation in "man snmpd.conf".
Section "monitor [OPTIONS] NAME EXPRESSION" states:
"Note that the event will only be triggered once, when the expression first matches. This monitor entry will not fire again until the monitored condition first becomes false, and then matches again."
You may not like the answer, but the monitor command behaves as advertised.

XBee - XBee-API and multiple endpoints

Using Andrew Rapp's XBee-API, how can I sample I/O data via a coordinator from more than two endpoints?
I have 17 Series 1 XBees. I have programmed one to be a coordinator (API mode = 2) and the rest to be endpoints. Using XBee-API I am sending a Force I/O Sample ("IS") remote AT command, unicast to each endpoint. This works perfectly well when there are up to two endpoints, but as soon as a third is added, one of the three always becomes non-responsive (times out with XBeeTimeoutException). It's not always the same physical unit that stops responding, but it is always the third one (for example, if I send Force I/O Sample to Device1, Device2, and Device3, Device3 will time out, and if I change the order to Device3, Device1, Device2, Device2 will time out).
If I set up more than three XBees, about 1 out of 3 will time out - but not every third one.
I've verified that the XBees themselves are fine. I've searched the Internet and Stack Overflow in particular to no avail. I've tried using a simple ZNetRemoteAtRequest. I've tried opening and closing the XBee coordinator serial connection once for all three devices, once per device, and once per program run. I've tried varying the distance between the coordinator and endpoints (never more than five feet apart). I've tried different coordinator configuration parameters (from the Digi documentation). I've tried changing out the XBee for the coordinator.
This is the code I'm using to send the Force I/O Sample request to each endpoint and read the response:
xbee = new XBee(); // Coordinator
xbee.open("/dev/ttyUSB0", 115200); // Happens before any of the endpoints are contacted
... // Loop through known endpoint addresses
XBeeRequest request = new ZBForceSampleRequest(new XBeeAddress64(endpointAddress));
ZNetRemoteAtResponse response = null;
response = (ZNetRemoteAtResponse) xbee.sendSynchronous(request, remoteXBeeTimeout);
if (response.isOk()) {
// Process response payload
}
... // End loop and finally close coordinator connection
What might help with polling I/O samples from more than two endpoints?
EDIT: I found that Andrew Rapp's XBee-API library fakes multithreaded behavior, which causes the synchronization issues described in this question. I wrote a replacement library that is actually multithreaded and correctly maps responses from multiple XBee endpoints: https://github.com/steveperkins/xbee-api-for-java-1-4. When I wrote it Java 1.4 was necessary for use on the BeagleBone, Plug, and Zotac single-board PCs but it's an easy conversion to 1.7+.
Are you using hardware flow control on your serial port? Is it possible that you're sending requests out when the local XBee has deasserted CTS (i.e., asking you to stop sending)? I assume you're running at 115200 bps, so the XBee serial port can keep up with the network data rate.
Can you turn on debugging information, or connect some port monitoring hardware/software to display the data going over the serial port to the local XBee?

reading EMV card using PPSE and not PSE

I'm trying to read the data off a contactless Visa Paywave card.
For the Paywave, I have to submit a SELECT using PPSE (2PAY.SYS.DDF01) instead of PSE (1PAY.SYS.DDF01).
The EMV book 1, section 11.3.4, table 43 only describes how to interpret the response for a successful SELECT command using PSE. Does anyone know or can refer me to a source that shows how to process the data returned from a successful SELECT command using PPSE?
Here's my request APDU:
00A404000e325041592e5359532e444446303100
Here's the response:
6F2F840E325041592E5359532E4444463031A51DBF0C1A61184F07A0000000031010500A564953412044454249548701019000
I understand tag 84, tag 85, tag BF0C from the response. According to the examples for reading the PSE, I should be able to just send GET PROCESSING OPTIONS (to get the AIP and AFL) with PDOL = null after this successful response, as follows: 80A80000830000.
But request 80A80000830000 returns error code 6985 - Command not allowed; conditions of use not satisfied.
I also tried reading all the files after successfully selecting the PPSE by traversing through every single SFI (0-30) and every single record (0-16) of each SFI. Yes, I also shifted the SFI left by 3 bits and bitwise-OR'd it with 0x4. But I got no data.
I'm stuck, any help that would point me into getting some info from my Paywave card would be appreciated!
Have you tried this tool from EMVLAB? http://www.emvlab.org/emvtags/
Using that tool,
http://www.emvlab.org/tlvutils/?data=6F2F840E325041592E5359532E4444463031A51DBF0C1A61184F07A0000000031010500A564953412044454249548701019000
2PAY.SYS.DDF01 is for contactless (e.g. NFC ) cards, while 1PAY.SYS.DDF01 is for contact cards.
After successfully selecting the PSE (SW1 SW2 = 90 00), you only need to look for the SFI (tag 88), which is a mandatory field in the returned FCI template.
With the SFI as your start index, you would have to read the records starting from the start index until you get a 6A83 (RECORD_NOT_FOUND). E.g. if your SFI is 1, you would do a readRecord with record_number=1. That would probably be successful. Then you increment record_number to 2 and do readRecord again. Then increment to 3, and so on. Repeat until you get 6A83 as your status.
The records read would be ADFs (at least 1). Then you would have to compare the ADF Names read against what your terminal supports, also taking the ASI (Application Selection Indicator) into account. At the end you would have a list of possible ADFs (the candidate list).
All the above steps (1-3) are documented in chapter 12.3.2 Book1 v4.3 of the EMV spec.
You would have to make a final selection (Chapter 12.4 Book1)
Read the spec book 1 chapter 12.3 - 12.4 for all the detailed steps.
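As an illustration of the read-record loop described above (step 2), here is a hedged sketch using javax.smartcardio. The reader index, the SFI value of 1, and the upper bound of 16 records are illustrative assumptions; in practice you take the SFI from tag 88 of the PSE FCI and simply loop until 6A83:
import java.util.List;
import javax.smartcardio.Card;
import javax.smartcardio.CardChannel;
import javax.smartcardio.CardTerminal;
import javax.smartcardio.CommandAPDU;
import javax.smartcardio.ResponseAPDU;
import javax.smartcardio.TerminalFactory;

public class PseRecordReader {
    public static void main(String[] args) throws Exception {
        // Connect to the first available PC/SC reader.
        List<CardTerminal> terminals = TerminalFactory.getDefault().terminals().list();
        Card card = terminals.get(0).connect("*");
        CardChannel channel = card.getBasicChannel();

        int sfi = 1;                         // illustrative; read it from tag 88 of the FCI
        int p2 = (sfi << 3) | 0x04;          // READ RECORD: P2 = (SFI << 3) | 4

        for (int record = 1; record <= 16; record++) {
            // READ RECORD: CLA=00, INS=B2, P1=record number, P2 as above, Le=00
            CommandAPDU readRecord = new CommandAPDU(0x00, 0xB2, record, p2, 256);
            ResponseAPDU resp = channel.transmit(readRecord);
            if (resp.getSW() == 0x6A83) {    // RECORD_NOT_FOUND -> no more records in this file
                break;
            }
            if (resp.getSW() == 0x9000) {
                // Each record holds one or more ADF entries (tag 61) to parse.
                System.out.printf("record %d: %s%n", record, toHex(resp.getData()));
            }
        }
        card.disconnect(false);
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) {
            sb.append(String.format("%02X", b));
        }
        return sb.toString();
    }
}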
You seem to have the flow mixed up a bit, you want to:
Send 1PAY or 2PAY; it doesn't actually matter for any of the cards I've tested. This will return a list of the AIDs available on the card. Alternatively, you can just select an AID straight away if you know it's there, but good practice would be to check first.
Get the list of AIDs returned in response to 1PAY/2PAY, in PayWave's case this will probably be A0000000031010 if you sent 2PAY but you may get more if you send 1PAY.
Select one of the AIDs sent back (or one you already know is on there).
Then loop through the SFIs and records sending the Read Records command to get the data.
You don't have to send Get Processing Options before sending the Read Records command, even though that's how a normal transaction flow goes.
I think the information you're looking for is available from this VISA website. But only if you're a registered and/or licensed partner of VISA.
EDIT: Looking at the resulting TLV struct under BF0C:
tag=0xBF0C, length=0x1A
tag=0x61, length=0x18
tag=0x4F, length=0x07, value=0xA0000000031010 // looks like an AID to me
tag=0x50, length=0x0A, value="VISA DEBIT"
tag=0x87, length=0x01, value=0x01
I would guess that you need to first select A0000000031010 before getting the processing options.
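For reference, a standard ISO 7816-4 SELECT by AID for that application would look something like this (the trailing Le byte of 00 is optional on some cards):
00A4040007A000000003101000
If the FCI returned by that SELECT contains a PDOL (tag 9F38), the GET PROCESSING OPTIONS data field has to supply the requested values instead of the empty 8300 template.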
I was selecting application 2PAY.SYS.DDF01 when I should have been selecting AID 0xA0000000031010. It looks like there are no records under application 2PAY.SYS.DDF01.
But there was 1 record under application 0xA0000000031010. After I got this application, I performed a READ RECORD, and the first record gave me the PAN and all the credit card info I wanted.
Thanks everyone for chiming in.

Does validation in CQRS have to occur separately once in the UI, and once in the business domain?

I've recently read the article CQRS à la Greg Young and am still trying to get my head around CQRS.
I'm not sure about where input validation should happen, and if it possibly has to happen in two separate locations (thereby violating the Don't Repeat Yourself rule and possibly also Separation of Concerns).
Given the following application architecture:
#        +--------------------+      ||
#        |    event store     |      ||
#        +--------------------+      ||
#              ^        |            ||
#              | events |            ||
#              |        v
#        +--------------------+    events    +--------------------+
#        |      domain/       | -----------> |   (denormalized)   |
#        |  business objects  |              |  query repository  |
#        +--------------------+      ||      +--------------------+
#           ^   ^   ^   ^   ^        ||                 |
#           |   |   |   |   |        ||                 |
#        +--------------------+      ||                 |
#        |    command bus     |      ||                 |
#        +--------------------+      ||                 |
#                   ^                                   |
#                   |       +------------------+        |
#                   +------ |  user interface  | <------+
#                  commands +------------------+  UI form data
The domain is hidden from the UI behind a command bus. That is, the UI can only send commands to the domain, but never gets to the domain objects directly.
Validation must not happen when an aggregate root is reacting to an event, but earlier.
Commands are turned into events in the domain (by the aggregate roots). This is one place where validation could happen: If a command cannot be executed, it isn't turned into a corresponding event; instead, (for example) an exception is thrown that bubbles up through the command bus, back to the UI, where it gets caught.
Problem:
If a command won't be able to execute, I would like to disable the corresponding button or menu item in the UI. But how do I know whether a command can execute before sending it off on its way? The query side won't help me here, as it doesn't contain any business logic whatsoever; and all I can do on the command side is send commands.
Possible solutions:
For any command DoX, introduce a corresponding dummy command CanDoX that won't actually do anything, but lets the domain give feedback whether command X could execute without error.
Duplicate some validation logic (that really belongs in the domain) in the UI.
Obviously the second solution isn't favorable (due to lacking separation of concerns). But is the first one really better?
I think my question has just been solved by another article, Clarified CQRS by Udi Dahan. The section "Commands and Validation" starts as follows:
Commands and Validation
In thinking through what could make a command fail, one topic that comes up is validation. Validation is
different from business rules in that it states a context-independent fact about a command. Either a
command is valid, or it isn't. Business rules on the other hand are context dependent.
[…] Even though a command may be valid, there still may be reasons to reject it.
As such, validation can be performed on the client, checking that all fields required for that command
are there, number and date ranges are OK, that kind of thing. The server would still validate all
commands that arrive, not trusting clients to do the validation.
I take this to mean that, given that I have a task-based UI (as is often suggested for CQRS to work well, with commands as domain verbs), I would only ever gray out (disable) buttons or menu items if a command cannot yet be sent off because some data required by the command is still missing, or invalid; i.e. the UI reacts to the command's own validity, and not to the command's future effect on the domain objects.
Therefore, no CanDoX commands are required, and no domain validation logic needs to be leaked into the UI. What the UI will have, however, is some logic for command validation.
Client-side validation is basically limited to format validation, because the client side cannot know the state of the data model on the server. What is valid now, may be invalid 1/2 second from now.
So, the client side should check only whether all required fields are filled in and whether they are of the correct form (an email address field must contain a valid email address, for instance of the form (.+)@(.+)\.(.+) or the like).
All those validations, along with business rule validations, are then performed on the domain model in the Command service. Therefore, data that was validated on the client may still result in invalidated Commands on the server. In that case, some feedback should be able to make it back to the client application... but that's another story.
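As a small, hedged illustration of that split (all names here are invented for this example, not taken from the article), a command might carry its own context-independent format check, while the business rules stay in the domain:
import java.util.regex.Pattern;

// Hypothetical command: the class and field names are illustrative only.
public final class RegisterCustomerCommand {
    private static final Pattern EMAIL = Pattern.compile("(.+)@(.+)\\.(.+)");

    public final String name;
    public final String email;

    public RegisterCustomerCommand(String name, String email) {
        this.name = name;
        this.email = email;
    }

    // Context-independent ("format") validation: cheap enough to run on the
    // client to enable or disable the submit button, and run again on the
    // server for every arriving command.
    public boolean isWellFormed() {
        return name != null && !name.trim().isEmpty()
                && email != null && EMAIL.matcher(email).matches();
    }
}
Context-dependent business rules (for example, "this email is not already registered") stay in the domain model behind the command handler, so no domain validation logic leaks into the UI.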
