I'm trying to format data sent over a USB UART with printf and it's giving me garbage. I can send a simple string and that works, but anything I try to format comes out as junk. Looking through the code, I think it has to do with my string not being in program space, but I'm not sure.
Here is my main:
void main(void) {
CPU_PRESCALE(CPU_16MHz);
init_uart();
int degree = 0;
char buffer[50];
while(1) {
degree = (degree + 1) % 360;
send_str(PSTR("\n\nHello!!!\n\n"));
memset(buffer, 0, 50);
sprintf_P(buffer, PSTR("%d degrees\n"), degree);
send_str(buffer);
_delay_ms(20);
}
}
The output looks like this:
Hello!!!
����/�������(/����#Q��������
Hello!!!
����/�������(/����#Q��������
The USB UART code came from a tutorial; the relevant parts look like this:
void send_str(const char *s)
{
char c;
while (1) {
c = pgm_read_byte(s++);
if (!c) break;
usb_serial_putchar(c);
}
}
int8_t usb_serial_putchar(uint8_t c)
{
uint8_t timeout, intr_state;
// if we're not online (enumerated and configured), error
if (!usb_configuration) return -1;
// interrupts are disabled so these functions can be
// used from the main program or interrupt context,
// even both in the same program!
intr_state = SREG;
cli();
UENUM = CDC_TX_ENDPOINT;
// if we gave up due to timeout before, don't wait again
if (transmit_previous_timeout) {
if (!(UEINTX & (1<<RWAL))) {
SREG = intr_state;
return -1;
}
transmit_previous_timeout = 0;
}
// wait for the FIFO to be ready to accept data
timeout = UDFNUML + TRANSMIT_TIMEOUT;
while (1) {
// are we ready to transmit?
if (UEINTX & (1<<RWAL)) break;
SREG = intr_state;
// have we waited too long? This happens if the user
// is not running an application that is listening
if (UDFNUML == timeout) {
transmit_previous_timeout = 1;
return -1;
}
// has the USB gone offline?
if (!usb_configuration) return -1;
// get ready to try checking again
intr_state = SREG;
cli();
UENUM = CDC_TX_ENDPOINT;
}
// actually write the byte into the FIFO
UEDATX = c;
// if this completed a packet, transmit it now!
if (!(UEINTX & (1<<RWAL))) UEINTX = 0x3A;
transmit_flush_timer = TRANSMIT_FLUSH_TIMEOUT;
SREG = intr_state;
return 0;
}
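For reference, my understanding is that send_str() only works for strings stored in flash, because pgm_read_byte() fetches from program space; the sprintf_P() output lives in the RAM buffer, so I assume it would need a RAM-reading variant along these lines (untested sketch, function name made up):
// Untested sketch: same as send_str(), but reading from RAM instead of
// program space, for buffers filled at runtime (e.g. by sprintf_P).
void send_str_ram(const char *s)
{
    char c;
    while ((c = *s++)) {
        usb_serial_putchar(c);
    }
}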
I am trying to use some static analysis tools to check a program that makes extensive use of recursive calls. Conceptually, it is something like this:
int counter = 0;
int access = 0;
extern int nd (); // a nondeterministic value
void compute1();
void compute2();
int get()
{
static int fpa[2] = {2, 2}; // each function can be called at most twice
int k = nd() % 2;
if (fpa[k] > 0) {
fpa[k]--;
return k+1;
}
else
return 0;
}
void check_r(int* x) {
if (x == &counter) {
__VERIFIER_assert(!(access == 2));
access = 1;
}
}
void check_w(int* x) {
if (x == &counter) {
__VERIFIER_assert((access == 0));
access = 2;
}
}
void schedule() {
for (int i = 0; i < 5; i++) {
int fp = get();
if (fp == 0)
return;
else if (fp == 1)
compute1();
else if (fp == 2)
compute2();
}
}
void compute1()
{
// some computations
...
schedule(); // recursive call
check_w(&counter); // check write access
...
}
void compute2()
{
// some computations
...
schedule(); //recursive call
check_r(&counter);
...
}
int main()
{
schedule();
return 0;
}
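(For completeness: __VERIFIER_assert is not shown above; I assume the usual SV-COMP-style stub, something like the following.)
// Assumed SV-COMP-style stub for the verifier intrinsic used above:
// reachability of __VERIFIER_error() is the property the analyzer checks.
extern void __VERIFIER_error(void);
void __VERIFIER_assert(int cond)
{
    if (!cond) {
        __VERIFIER_error();
    }
}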
My tentative tests show that, due to the recursive call, the static analysis becomes too slow to terminate.
While in principle I could rewrite the recursive call into a switch statement or something similar, the problem is that before the recursive call to schedule, the compute1 and compute2 functions have already performed a nontrivial amount of computation, and it is difficult to save that program context for later use.
I have been stuck trying to optimize this case for a few days but just cannot come up with even an ad-hoc solution. Could anyone please provide some comments and suggestions on how to get rid of the recursive call here? Thank you so much.
To me it looks like all the schedule function is doing is deciding whether to call compute1 or compute2, and all get is doing is ensuring a single function is never called more than twice. I don't think the recursive call from the compute functions back to schedule is necessary then, since there are never more than two calls to each. The recursion here seems to imply that every time we successfully call one of the compute functions, we want another chance to call compute again:
void schedule() {
int chances = 1;
for (int i = 0; i < 5 || chances > 0; i++) {
int fp = get();
if (fp == 0){
chances--;
if(chances < 0)
chances = 0;
continue;
}
else if (fp == 1){
compute1(); chances++;
}
else if (fp == 2){
compute2(); chances++;
}
}
}
void compute1()
{
// some computations
...
//schedule(); remove
check_w(&counter); // check write access
...
}
void compute2()
{
// some computations
...
//schedule(); remove
check_r(&counter);
...
}
This code is a bit confusing, so please clarify if I made any incorrect assumptions.
I have implemented a custom storage interface in libtorrent as described in the help section here.
The storage_interface is working fine, although I can't figure out why readv is only called sporadically while downloading a torrent. From my point of view, the overridden virtual function readv should get called each time I call handle->read_piece in piece_finished_alert, since it has to read the piece for read_piece_alert.
The buffer is provided in read_piece_alert without readv ever being notified.
So the question is: why is it only called sporadically, and why is it not called on a read_piece() call? Is my storage_interface perhaps wrong?
The code looks like this:
struct temp_storage : storage_interface
{
virtual int readv(file::iovec_t const* bufs, int num_bufs
, int piece, int offset, int flags, storage_error& ec)
{
// Only called on random pieces while downloading a larger torrent
std::map<int, std::vector<char> >::const_iterator i = m_file_data.find(piece);
if (i == m_file_data.end()) return 0;
int available = int(i->second.size()) - offset;
if (available <= 0) return 0;
// copy the piece data into each iovec buffer in turn
// (num_bufs is the number of buffers, not a byte count)
int copied = 0;
for (int b = 0; b < num_bufs && available > 0; ++b)
{
int to_copy = available < int(bufs[b].iov_len) ? available : int(bufs[b].iov_len);
memcpy(bufs[b].iov_base, &i->second[offset + copied], to_copy);
copied += to_copy;
available -= to_copy;
}
return copied;
}
virtual int writev(file::iovec_t const* bufs, int num_bufs
, int piece, int offset, int flags, storage_error& ec)
{
std::vector<char>& data = m_file_data[piece];
// total size in bytes of the incoming iovec array
int total = 0;
for (int b = 0; b < num_bufs; ++b) total += int(bufs[b].iov_len);
if (int(data.size()) < offset + total) data.resize(offset + total);
int written = 0;
for (int b = 0; b < num_bufs; ++b)
{
std::memcpy(&data[offset + written], bufs[b].iov_base, bufs[b].iov_len);
written += int(bufs[b].iov_len);
}
return written;
}
virtual bool has_any_file(storage_error& ec) { return false; }
virtual ...
virtual ...
}
Initialized with:
storage_interface* temp_storage_constructor(storage_params const& params)
{
printf("NEW INTERFACE\n");
return new temp_storage(*params.files);
}
p.storage = &temp_storage_constructor;
The function below sets up alerts and invokes read_piece on each completed piece.
while(true) {
std::vector<alert*> alerts;
s.pop_alerts(&alerts);
for (alert* i : alerts)
{
switch (i->type()) {
case read_piece_alert::alert_type:
{
read_piece_alert* p = (read_piece_alert*)i;
if (p->ec) {
// read_piece failed
break;
}
// The piece buffer and size are provided here, without readv ever
// having been notified, after invoking read_piece in piece_finished_alert.
break;
}
case piece_finished_alert::alert_type: {
piece_finished_alert* p = (piece_finished_alert*)i;
p->handle.read_piece(p->piece_index);
// Once the piece is finished, we read it to obtain the buffer in read_piece_alert.
break;
}
default:
break;
}
}
Sleep(100);
}
I will answer my own question. As Arvid said in the comments, readv was not invoked because of caching. Setting settings_pack::use_read_cache to false will cause readv to be invoked every time.
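For reference, a minimal sketch of what that looks like, assuming the libtorrent 1.x settings_pack API and the session object s from the alert loop above:
// Disable the read cache so read_piece() always goes through
// storage_interface::readv() instead of being served from cached blocks.
settings_pack pack;
pack.set_bool(settings_pack::use_read_cache, false);
s.apply_settings(pack);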
Question
What can I do to get a locking mechanism that provides minimal and stable latency while guaranteeing that a thread cannot reacquire a resource before another thread has acquired and released it?
The desirability of answers to this question is ranked as follows:
Some combination of built-in C++11 features that work in MinGW on Windows 7 (note that the <thread> and <mutex> libraries do not work with this MinGW toolchain on Windows)
Some combination of Windows API features
A modification to the FairLock listed below, my own attempt at implementing such a mechanism
Some features provided by a free, open-source library that does not require a ./configure/make/make install process (getting that to work in MSYS is more of an adventure than I care for)
Background
I am writing an application which is effectively a multi-stage producer/consumer. One thread generates input consumed by another thread, which produces output consumed by yet another thread. The application uses pairs of buffers so that, after an initial delay, all threads can work nearly simultaneously.
Since I am writing a Windows 7 application, I had been using CriticalSections to guard the buffers. The problem with CriticalSections (or, so far as I can tell, any other built-in Windows or C++11 synchronization object) is that they offer no way to guarantee that a thread that has just released a lock cannot reacquire it before another thread has done so first. Because of this, many of my test drivers for the middle thread (the Encoder) never gave the Encoder a chance to acquire the test input buffers and completed without having tested them. The end result was a ridiculous process of trying to determine an artificial wait time that stochastically worked on my machine.
Since the structure of my application requires that each stage wait for the other stage to have acquired, finished using, and released the necessary buffers before it can use them again, I need, for lack of a better term, a fair locking mechanism. I took a crack at writing one (the source code is provided below). In testing, this FairLock allows my test driver to run my Encoder at the same speeds I was able to achieve using the CriticalSection in maybe 60% of the runs. The other 40% of the runs take anywhere from 10 to 100 ms longer, which is not acceptable for my application.
FairLock
// FairLock.hpp
#ifndef FAIRLOCK_HPP
#define FAIRLOCK_HPP
#include <atomic>
using namespace std;
class FairLock {
private:
atomic_bool owned {false};
atomic<DWORD> lastOwner {0};
public:
FairLock(bool owned);
bool inline hasLock() const;
bool tryLock();
void seizeLock();
void tryRelease();
void waitForLock();
};
#endif
// FairLock.cpp
#include <windows.h>
#include "FairLock.hpp"
#define ID GetCurrentThreadId()
FairLock::FairLock(bool owned) {
if (owned) {
this->owned = true;
this->lastOwner = ID;
} else {
this->owned = false;
this->lastOwner = 0;
}
}
bool inline FairLock::hasLock() const {
return owned && lastOwner == ID;
}
bool FairLock::tryLock() {
bool success = false;
DWORD id = ID;
if (owned) {
success = lastOwner == id;
} else if (
lastOwner != id &&
owned.compare_exchange_strong(success, true)
) {
lastOwner = id;
success = true;
} else {
success = false;
}
return success;
}
void FairLock::seizeLock() {
bool success = false;
DWORD id = ID;
if (!(owned && lastOwner == id)) {
while (!owned.compare_exchange_strong(success, true)) {
success = false;
}
lastOwner = id;
}
}
void FairLock::tryRelease() {
if (hasLock()) {
owned = false;
}
}
void FairLock::waitForLock() {
bool success = false;
DWORD id = ID;
if (!(owned && lastOwner == id)) {
while (lastOwner == id); // spin
while (!owned.compare_exchange_strong(success, true)) {
success = false;
}
lastOwner = id;
}
}
EDIT
DO NOT USE THIS FairLock CLASS; IT DOES NOT GUARANTEE MUTUAL EXCLUSION!
I reviewed the above code, comparing it against the text of The C++ Programming Language, 4th Edition (which I had not read carefully) and against CouchDeveloper's recommended synchronous queue. I realized that there are several sequences in which the thread that just released the FairLock can be tricked into thinking it still owns it. All it takes is interleaving the instructions as follows:
New owner: set owned to true
Old owner: is owned true? yes
Old owner: am I the last owner? yes
New owner: set me as the last owner
At this point, the old and new owners both enter their critical sections.
I am considering whether this problem has a solution and whether it is worth attempting to solve this at all. In the meantime, don't use this unless you see a fix.
I would implement this in C++11 using a condition_variable-per-thread setup so that I could choose exactly which thread to wake up when (Live demo at Coliru):
class FairMutex {
private:
class waitnode {
std::condition_variable cv_;
waitnode* next_ = nullptr;
FairMutex& fmtx_;
public:
waitnode(FairMutex& fmtx) : fmtx_(fmtx) {
*fmtx.tail_ = this;
fmtx.tail_ = &next_;
}
~waitnode() {
for (waitnode** p = &fmtx_.waiters_; *p; p = &(*p)->next_) {
if (*p == this) {
*p = next_;
if (!next_) {
fmtx_.tail_ = &fmtx_.waiters_;
}
break;
}
}
}
void wait(std::unique_lock<std::mutex>& lk) {
while (fmtx_.held_ || fmtx_.waiters_ != this) {
cv_.wait(lk);
}
}
void notify() {
cv_.notify_one();
}
};
waitnode* waiters_ = nullptr;
waitnode** tail_ = &waiters_;
std::mutex mtx_;
bool held_ = false;
public:
void lock() {
auto lk = std::unique_lock<std::mutex>{mtx_};
if (held_ || waiters_) {
waitnode{*this}.wait(lk);
}
held_ = true;
}
bool try_lock() {
if (mtx_.try_lock()) {
std::lock_guard<std::mutex> lk(mtx_, std::adopt_lock);
if (!held_ && !waiters_) {
held_ = true;
return true;
}
}
return false;
}
void unlock() {
std::lock_guard<std::mutex> lk(mtx_);
held_ = false;
if (waiters_ != nullptr) {
waiters_->notify();
}
}
};
FairMutex models the Lockable concept so it can be used like any other standard library mutex type. Put simply, it achieves fairness by inserting waiters into a list in arrival order, and passing the mutex to the first waiter in the list when unlocking.
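For example (a quick usage sketch, not part of the original demo), it drops in wherever a standard mutex type is expected:
// Sketch: FairMutex used like std::mutex; waiting threads are served in
// arrival order, so a thread that just unlocked cannot barge back in ahead
// of a thread that is already waiting. Assumes FairMutex is defined as above.
#include <mutex>
#include <thread>
FairMutex fmtx;
int shared_value = 0;
void worker()
{
    for (int i = 0; i < 1000; ++i) {
        std::lock_guard<FairMutex> lock(fmtx);
        ++shared_value;
    }
}
int main()
{
    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();
}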
If it's useful:
This demonstrates *) an implementation of a "synchronous queue" using semaphores as synchronization primitives.
Note: the actual implementation uses semaphores implemented with GCD (Grand Central Dispatch):
using gcd::mutex;
using gcd::semaphore;
// A blocking queue in which each put must wait for a get, and vice
// versa. A synchronous queue does not have any internal capacity,
// not even a capacity of one.
template <typename T>
class simple_synchronous_queue {
public:
typedef T value_type;
enum result_type {
OK = 0,
TIMEOUT_NOT_DELIVERED = -1,
TIMEOUT_NOT_PICKED = -2,
TIMEOUT_NOTHING_OFFERED = -3
};
simple_synchronous_queue()
: sync_(0), send_(1), recv_(0)
{
}
void put(const T& v) {
send_.wait();
new (address()) T(v);
recv_.signal();
sync_.wait();
}
result_type put(const T& v, double timeout) {
if (send_.wait(timeout)) {
new (address()) T(v);
recv_.signal();
if (sync_.wait(timeout)) {
return OK;
}
else {
return TIMEOUT_NOT_PICKED;
}
}
else {
return TIMEOUT_NOT_DELIVERED;
}
}
T get() {
recv_.wait();
T result = *address();
address()->~T();
sync_.signal();
send_.signal();
return result;
}
std::pair<result_type, T> get(double timeout) {
if (recv_.wait(timeout)) {
std::pair<result_type, T> result =
std::pair<result_type, T>(OK, *address());
address()->~T();
sync_.signal();
send_.signal();
return result;
}
else {
return std::pair<result_type, T>(TIMEOUT_NOTHING_OFFERED, T());
}
}
private:
using storage_t = typename std::aligned_storage<sizeof(T), std::alignment_of<T>::value>::type;
T* address() {
return static_cast<T*>(static_cast<void*>(&storage_));
}
storage_t storage_;
semaphore sync_;
semaphore send_;
semaphore recv_;
};
*) demonstrates: be careful about potential issues; it could be improved, etc. ;)
I accepted CouchDeveloper's answer since it pointed me down the right path. I wrote a Windows-specific C++11 implementation of a synchronous queue, and added this answer so that others could consider/use it if they so choose.
// SynchronousQueue.hpp
#ifndef SYNCHRONOUSQUEUE_HPP
#define SYNCHRONOUSQUEUE_HPP
#include <atomic>
#include <exception>
#include <windows.h>
using namespace std;
class CouldNotEnterException: public exception {};
class NoPairedCallException: public exception {};
template <typename T>
class SynchronousQueue {
private:
atomic_bool valueReady {false};
CRITICAL_SECTION getCriticalSection;
CRITICAL_SECTION putCriticalSection;
DWORD wait {0};
HANDLE getSemaphore;
HANDLE putSemaphore;
const T* address {nullptr};
public:
SynchronousQueue(DWORD waitMS): wait {waitMS}, address {nullptr} {
InitializeCriticalSection(&getCriticalSection);
InitializeCriticalSection(&putCriticalSection);
getSemaphore = CreateSemaphore(nullptr, 0, 1, nullptr);
putSemaphore = CreateSemaphore(nullptr, 0, 1, nullptr);
}
~SynchronousQueue() {
EnterCriticalSection(&getCriticalSection);
EnterCriticalSection(&putCriticalSection);
CloseHandle(getSemaphore);
CloseHandle(putSemaphore);
DeleteCriticalSection(&putCriticalSection);
DeleteCriticalSection(&getCriticalSection);
}
void put(const T& value) {
if (!TryEnterCriticalSection(&putCriticalSection)) {
throw CouldNotEnterException();
}
ReleaseSemaphore(putSemaphore, (LONG) 1, nullptr);
if (WaitForSingleObject(getSemaphore, wait) != WAIT_OBJECT_0) {
if (WaitForSingleObject(putSemaphore, 0) == WAIT_OBJECT_0) {
LeaveCriticalSection(&putCriticalSection);
throw NoPairedCallException();
} else {
WaitForSingleObject(getSemaphore, 0);
}
}
address = &value;
valueReady = true;
while (valueReady);
LeaveCriticalSection(&putCriticalSection);
}
T get() {
if (!TryEnterCriticalSection(&getCriticalSection)) {
throw CouldNotEnterException();
}
ReleaseSemaphore(getSemaphore, (LONG) 1, nullptr);
if (WaitForSingleObject(putSemaphore, wait) != WAIT_OBJECT_0) {
if (WaitForSingleObject(getSemaphore, 0) == WAIT_OBJECT_0) {
LeaveCriticalSection(&getCriticalSection);
throw NoPairedCallException();
} else {
WaitForSingleObject(putSemaphore, 0);
}
}
while (!valueReady);
T toReturn = *address;
valueReady = false;
LeaveCriticalSection(&getCriticalSection);
return toReturn;
}
};
#endif
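A minimal usage sketch (hypothetical, not part of my actual test driver): one producer and one consumer rendezvous through the queue, so each put() blocks until the matching get():
// main.cpp -- hypothetical usage of SynchronousQueue
#include "SynchronousQueue.hpp"
#include <cstdio>
static SynchronousQueue<int> queue(1000); // 1 second pairing timeout
static DWORD WINAPI producer(LPVOID)
{
    for (int i = 0; i < 3; ++i) {
        queue.put(i); // blocks until the consumer's get() takes the value
    }
    return 0;
}
int main()
{
    HANDLE thread = CreateThread(nullptr, 0, producer, nullptr, 0, nullptr);
    for (int i = 0; i < 3; ++i) {
        printf("got %d\n", queue.get());
    }
    WaitForSingleObject(thread, INFINITE);
    CloseHandle(thread);
    return 0;
}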
I am porting the famous packet capture software WinPcap from NDIS 5.0 to NDIS 6.x. I tried to translate every NDIS 5.0 function to its 6.0 equivalent. In the WinPcap source code, NdisOpenAdapter is called by NPF_OpenAdapter in Openclos.c. I translated it to NdisOpenAdapterEx for NDIS 6.0, but I cannot find a way to set the 4th parameter, BindContext.
The declaration of NdisOpenAdapterEx can be found here:
http://msdn.microsoft.com/en-us/library/windows/hardware/ff563715(v=vs.85).aspx
Also, MSDN says: "A protocol driver must call NdisOpenAdapterEx from its ProtocolBindAdapterEx function. NDIS fails any attempt to call NdisOpenAdapterEx outside the context of ProtocolBindAdapterEx." So it seems that NdisOpenAdapterEx cannot be called in NPF_OpenAdapter; it must be called in the NPF_BindAdapterEx function. I replaced the npf.sys driver with my own version, started Wireshark (a packet capture frontend), set breakpoints in NPF_BindAdapterEx, and found that NPF_BindAdapterEx was never called before NPF_OpenAdapter. So it is impossible for me to get the BindContext parameter before calling NdisOpenAdapterEx.
I just want to migrate WinPcap to NDIS 6.0 with as few modifications as possible. How can I solve this problem?
Here is the code of Openclos.c:
/*
* Copyright (c) 1999 - 2005 NetGroup, Politecnico di Torino (Italy)
* Copyright (c) 2005 - 2010 CACE Technologies, Davis (California)
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. Neither the name of the Politecnico di Torino, CACE Technologies
* nor the names of its contributors may be used to endorse or promote
* products derived from this software without specific prior written
* permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*/
#include "stdafx.h"
#include <ntddk.h>
#include <ndis.h>
#include "debug.h"
#include "packet.h"
#include "..\..\Common\WpcapNames.h"
static
VOID NPF_ReleaseOpenInstanceResources(POPEN_INSTANCE pOpen);
static NDIS_MEDIUM MediumArray[] =
{
NdisMedium802_3,
// NdisMediumWan,
NdisMediumFddi, NdisMediumArcnet878_2, NdisMediumAtm, NdisMedium802_5
};
#define NUM_NDIS_MEDIA (sizeof MediumArray / sizeof MediumArray[0])
//Itoa. Replaces the buggy RtlIntegerToUnicodeString
// void PacketItoa(UINT n, PUCHAR buf)
// {
// int i;
// for(i=0;i<20;i+=2){
// buf[18-i]=(n%10)+48;
// buf[19-i]=0;
// n/=10;
// }
// }
/// Global start time. Used as an absolute reference for timestamp conversion.
struct time_conv G_Start_Time =
{
0, {0, 0},
};
ULONG g_NumOpenedInstances = 0;
BOOLEAN NPF_StartUsingBinding(IN POPEN_INSTANCE pOpen)
{
ASSERT(pOpen != NULL);
ASSERT(KeGetCurrentIrql() == PASSIVE_LEVEL);
NdisAcquireSpinLock(&pOpen->AdapterHandleLock);
if (pOpen->AdapterBindingStatus != ADAPTER_BOUND)
{
NdisReleaseSpinLock(&pOpen->AdapterHandleLock);
return FALSE;
}
pOpen->AdapterHandleUsageCounter++;
NdisReleaseSpinLock(&pOpen->AdapterHandleLock);
return TRUE;
}
VOID NPF_StopUsingBinding(IN POPEN_INSTANCE pOpen)
{
ASSERT(pOpen != NULL);
//
// There is no risk in calling this function from above passive level
// (i.e. DISPATCH, in this driver) as we acquire a spinlock and decrement a
// counter.
//
// ASSERT(KeGetCurrentIrql() == PASSIVE_LEVEL);
NdisAcquireSpinLock(&pOpen->AdapterHandleLock);
ASSERT(pOpen->AdapterHandleUsageCounter > 0);
ASSERT(pOpen->AdapterBindingStatus == ADAPTER_BOUND);
pOpen->AdapterHandleUsageCounter--;
NdisReleaseSpinLock(&pOpen->AdapterHandleLock);
}
VOID NPF_CloseBinding(IN POPEN_INSTANCE pOpen)
{
NDIS_EVENT Event;
NDIS_STATUS Status;
ASSERT(pOpen != NULL);
ASSERT(KeGetCurrentIrql() == PASSIVE_LEVEL);
NdisInitializeEvent(&Event);
NdisResetEvent(&Event);
NdisAcquireSpinLock(&pOpen->AdapterHandleLock);
while (pOpen->AdapterHandleUsageCounter > 0)
{
NdisReleaseSpinLock(&pOpen->AdapterHandleLock);
NdisWaitEvent(&Event, 1);
NdisAcquireSpinLock(&pOpen->AdapterHandleLock);
}
//
// now the UsageCounter is 0
//
while (pOpen->AdapterBindingStatus == ADAPTER_UNBINDING)
{
NdisReleaseSpinLock(&pOpen->AdapterHandleLock);
NdisWaitEvent(&Event, 1);
NdisAcquireSpinLock(&pOpen->AdapterHandleLock);
}
//
// now the binding status is either bound or unbound
//
if (pOpen->AdapterBindingStatus == ADAPTER_UNBOUND)
{
NdisReleaseSpinLock(&pOpen->AdapterHandleLock);
return;
}
ASSERT(pOpen->AdapterBindingStatus == ADAPTER_BOUND);
pOpen->AdapterBindingStatus = ADAPTER_UNBINDING;
NdisReleaseSpinLock(&pOpen->AdapterHandleLock);
//
// do the release procedure
//
NdisResetEvent(&pOpen->NdisOpenCloseCompleteEvent);
// Close the adapter
Status = NdisCloseAdapterEx(pOpen->AdapterHandle);
if (Status == NDIS_STATUS_PENDING)
{
TRACE_MESSAGE(PACKET_DEBUG_LOUD, "Pending NdisCloseAdapter");
NdisWaitEvent(&pOpen->NdisOpenCloseCompleteEvent, 0);
}
else
{
TRACE_MESSAGE(PACKET_DEBUG_LOUD, "Not Pending NdisCloseAdapter");
}
NdisAcquireSpinLock(&pOpen->AdapterHandleLock);
pOpen->AdapterBindingStatus = ADAPTER_UNBOUND;
NdisReleaseSpinLock(&pOpen->AdapterHandleLock);
}
//-------------------------------------------------------------------
NTSTATUS NPF_OpenAdapter(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp)
{
PDEVICE_EXTENSION DeviceExtension;
POPEN_INSTANCE Open;
PIO_STACK_LOCATION IrpSp;
NDIS_STATUS Status;
NDIS_STATUS ErrorStatus;
UINT i;
PUCHAR tpointer;
PLIST_ENTRY PacketListEntry;
NTSTATUS returnStatus;
NET_BUFFER_LIST_POOL_PARAMETERS PoolParameters;
NDIS_OPEN_PARAMETERS OpenParameters;
NET_FRAME_TYPE FrameTypeArray[2] =
{
NDIS_ETH_TYPE_802_1X, NDIS_ETH_TYPE_802_1Q
};
//
// Old registry based WinPcap names
//
// WCHAR EventPrefix[MAX_WINPCAP_KEY_CHARS];
// UINT RegStrLen;
TRACE_ENTER();
DeviceExtension = DeviceObject->DeviceExtension;
IrpSp = IoGetCurrentIrpStackLocation(Irp);
// allocate some memory for the open structure
Open = ExAllocatePoolWithTag(NonPagedPool, sizeof(OPEN_INSTANCE), '0OWA');
if (Open == NULL)
{
// no memory
Irp->IoStatus.Status = STATUS_INSUFFICIENT_RESOURCES;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
return STATUS_INSUFFICIENT_RESOURCES;
}
RtlZeroMemory(Open, sizeof(OPEN_INSTANCE));
//
// Old registry based WinPcap names
//
// //
// // Get the Event names base from the registry
// //
// RegStrLen = sizeof(EventPrefix)/sizeof(EventPrefix[0]);
//
// NPF_QueryWinpcapRegistryString(NPF_EVENTS_NAMES_REG_KEY_WC,
// EventPrefix,
// RegStrLen,
// NPF_EVENTS_NAMES_WIDECHAR);
//
Open->DeviceExtension = DeviceExtension;
NdisZeroMemory(&PoolParameters, sizeof(NET_BUFFER_LIST_POOL_PARAMETERS));
PoolParameters.Header.Type = NDIS_OBJECT_TYPE_DEFAULT;
PoolParameters.Header.Revision = NET_BUFFER_LIST_POOL_PARAMETERS_REVISION_1;
PoolParameters.Header.Size = sizeof(PoolParameters);
PoolParameters.ProtocolId = NDIS_PROTOCOL_ID_TCP_IP;
PoolParameters.ContextSize = 0;
PoolParameters.fAllocateNetBuffer = TRUE;
PoolParameters.PoolTag = NPCAP_ALLOC_TAG;
Open->PacketPool = NdisAllocateNetBufferListPool(NULL, &PoolParameters);
if (Open->PacketPool == NULL)
{
TRACE_MESSAGE(PACKET_DEBUG_LOUD, "Failed to allocate packet pool");
ExFreePool(Open);
Irp->IoStatus.Status = STATUS_INSUFFICIENT_RESOURCES;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
return STATUS_INSUFFICIENT_RESOURCES;
}
// // Allocate a packet pool for our xmit and receive packets
// NdisAllocatePacketPool(
// &Status,
// &Open->PacketPool,
// TRANSMIT_PACKETS,
// sizeof(PACKET_RESERVED));
//
// if (Status != NDIS_STATUS_SUCCESS) {
//
// TRACE_MESSAGE(PACKET_DEBUG_LOUD, "Failed to allocate packet pool");
//
// ExFreePool(Open);
// Irp->IoStatus.Status = STATUS_INSUFFICIENT_RESOURCES;
// IoCompleteRequest(Irp, IO_NO_INCREMENT);
// return STATUS_INSUFFICIENT_RESOURCES;
// }
NdisInitializeEvent(&Open->WriteEvent);
NdisInitializeEvent(&Open->NdisRequestEvent);
NdisInitializeEvent(&Open->NdisWriteCompleteEvent);
NdisInitializeEvent(&Open->DumpEvent);
NdisAllocateSpinLock(&Open->MachineLock);
NdisAllocateSpinLock(&Open->WriteLock);
Open->WriteInProgress = FALSE;
for (i = 0; i < g_NCpu; i++)
{
NdisAllocateSpinLock(&Open->CpuData[i].BufferLock);
}
NdisInitializeEvent(&Open->NdisOpenCloseCompleteEvent);
// list to hold irp's want to reset the adapter
InitializeListHead(&Open->ResetIrpList);
// Initialize the request list
KeInitializeSpinLock(&Open->RequestSpinLock);
InitializeListHead(&Open->RequestList);
//
// Initialize the open instance
//
//Open->BindContext = NULL;
Open->bpfprogram = NULL; //reset the filter
Open->mode = MODE_CAPT;
Open->Nbytes.QuadPart = 0;
Open->Npackets.QuadPart = 0;
Open->Nwrites = 1;
Open->Multiple_Write_Counter = 0;
Open->MinToCopy = 0;
Open->TimeOut.QuadPart = (LONGLONG)1;
Open->DumpFileName.Buffer = NULL;
Open->DumpFileHandle = NULL;
#ifdef HAVE_BUGGY_TME_SUPPORT
Open->tme.active = TME_NONE_ACTIVE;
#endif // HAVE_BUGGY_TME_SUPPORT
Open->DumpLimitReached = FALSE;
Open->MaxFrameSize = 0;
Open->WriterSN = 0;
Open->ReaderSN = 0;
Open->Size = 0;
Open->SkipSentPackets = FALSE;
Open->ReadEvent = NULL;
//
// we need to keep a counter of the pending IRPs
// so that when the IRP_MJ_CLEANUP dispatcher gets called,
// we can wait for those IRPs to be completed
//
Open->NumPendingIrps = 0;
Open->ClosePending = FALSE;
NdisAllocateSpinLock(&Open->OpenInUseLock);
//
//allocate the spinlock for the statistic counters
//
NdisAllocateSpinLock(&Open->CountersLock);
//
// link up the request stored in our open block
//
for (i = 0 ; i < MAX_REQUESTS ; i++)
{
NdisInitializeEvent(&Open->Requests[i].InternalRequestCompletedEvent);
ExInterlockedInsertTailList(&Open->RequestList, &Open->Requests[i].ListElement, &Open->RequestSpinLock);
}
NdisResetEvent(&Open->NdisOpenCloseCompleteEvent);
//
// set the proper binding flags before trying to open the MAC
//
Open->AdapterBindingStatus = ADAPTER_BOUND;
Open->AdapterHandleUsageCounter = 0;
NdisAllocateSpinLock(&Open->AdapterHandleLock);
//
// Try to open the MAC
//
TRACE_MESSAGE2(PACKET_DEBUG_LOUD, "Opening the device %ws, BindingContext=%p", DeviceExtension->AdapterName.Buffer, Open);
returnStatus = STATUS_SUCCESS;
NdisZeroMemory(&OpenParameters, sizeof(NDIS_OPEN_PARAMETERS));
OpenParameters.Header.Type = NDIS_OBJECT_TYPE_OPEN_PARAMETERS;
OpenParameters.Header.Revision = NDIS_OPEN_PARAMETERS_REVISION_1;
OpenParameters.Header.Size = sizeof(NDIS_OPEN_PARAMETERS);
OpenParameters.AdapterName = &DeviceExtension->AdapterName;
OpenParameters.MediumArray = MediumArray;
OpenParameters.MediumArraySize = sizeof(MediumArray) / sizeof(NDIS_MEDIUM);
OpenParameters.SelectedMediumIndex = &Open->Medium;
OpenParameters.FrameTypeArray = NULL;
OpenParameters.FrameTypeArraySize = 0;
//OpenParameters.FrameTypeArray = &FrameTypeArray[0];
//OpenParameters.FrameTypeArraySize = sizeof(FrameTypeArray) / sizeof(NET_FRAME_TYPE);
NDIS_DECLARE_PROTOCOL_OPEN_CONTEXT(OPEN_INSTANCE);
Status = NdisOpenAdapterEx(g_NdisProtocolHandle, (NDIS_HANDLE)Open, &OpenParameters, NULL, &Open->AdapterHandle);
// NdisOpenAdapter(
// &Status,
// &ErrorStatus,
// &Open->AdapterHandle,
// &Open->Medium,
// MediumArray,
// NUM_NDIS_MEDIA,
// g_NdisProtocolHandle,
// Open,
// &DeviceExtension->AdapterName,
// 0,
// NULL);
TRACE_MESSAGE1(PACKET_DEBUG_LOUD, "Opened the device, Status=%x", Status);
if (Status == NDIS_STATUS_PENDING)
{
NdisWaitEvent(&Open->NdisOpenCloseCompleteEvent, 0);
if (!NT_SUCCESS(Open->OpenCloseStatus))
{
returnStatus = Open->OpenCloseStatus;
}
else
{
returnStatus = STATUS_SUCCESS;
}
}
else
{
//
// request not pending, we know the result, and OpenComplete has not been called.
//
if (Status == NDIS_STATUS_SUCCESS)
{
returnStatus = STATUS_SUCCESS;
}
else
{
//
// this is not completely correct, as we are converting an NDIS_STATUS to a NTSTATUS
//
returnStatus = Status;
}
}
if (returnStatus == STATUS_SUCCESS)
{
ULONG localNumOpenedInstances;
//
// complete the open
//
localNumOpenedInstances = InterlockedIncrement(&g_NumOpenedInstances);
TRACE_MESSAGE1(PACKET_DEBUG_LOUD, "Opened Instances: %u", localNumOpenedInstances);
// Get the absolute value of the system boot time.
// This is used for timestamp conversion.
TIME_SYNCHRONIZE(&G_Start_Time);
returnStatus = NPF_GetDeviceMTU(Open, Irp, &Open->MaxFrameSize);
if (!NT_SUCCESS(returnStatus))
{
//
// Close the binding
//
NPF_CloseBinding(Open);
}
}
if (!NT_SUCCESS(returnStatus))
{
NPF_ReleaseOpenInstanceResources(Open);
//
// Free the open instance itself
//
ExFreePool(Open);
}
else
{
// Save or open here
IrpSp->FileObject->FsContext = Open;
}
Irp->IoStatus.Status = returnStatus;
Irp->IoStatus.Information = 0;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
TRACE_EXIT();
return returnStatus;
}
BOOLEAN NPF_StartUsingOpenInstance(IN POPEN_INSTANCE pOpen)
{
BOOLEAN returnStatus;
NdisAcquireSpinLock(&pOpen->OpenInUseLock);
if (pOpen->ClosePending)
{
returnStatus = FALSE;
}
else
{
returnStatus = TRUE;
pOpen->NumPendingIrps ++;
}
NdisReleaseSpinLock(&pOpen->OpenInUseLock);
return returnStatus;
}
VOID NPF_StopUsingOpenInstance(IN POPEN_INSTANCE pOpen)
{
NdisAcquireSpinLock(&pOpen->OpenInUseLock);
ASSERT(pOpen->NumPendingIrps > 0);
pOpen->NumPendingIrps --;
NdisReleaseSpinLock(&pOpen->OpenInUseLock);
}
VOID NPF_CloseOpenInstance(IN POPEN_INSTANCE pOpen)
{
ULONG i = 0;
NDIS_EVENT Event;
ASSERT(KeGetCurrentIrql() == PASSIVE_LEVEL);
NdisInitializeEvent(&Event);
NdisResetEvent(&Event);
NdisAcquireSpinLock(&pOpen->OpenInUseLock);
pOpen->ClosePending = TRUE;
while (pOpen->NumPendingIrps > 0)
{
NdisReleaseSpinLock(&pOpen->OpenInUseLock);
NdisWaitEvent(&Event, 1);
NdisAcquireSpinLock(&pOpen->OpenInUseLock);
}
NdisReleaseSpinLock(&pOpen->OpenInUseLock);
}
VOID NPF_ReleaseOpenInstanceResources(POPEN_INSTANCE pOpen)
{
PKEVENT pEvent;
UINT i;
TRACE_ENTER();
ASSERT(pOpen != NULL);
ASSERT(KeGetCurrentIrql() == PASSIVE_LEVEL);
TRACE_MESSAGE1(PACKET_DEBUG_LOUD, "Open= %p", pOpen);
//NdisFreePacketPool(pOpen->PacketPool);
NdisFreeNetBufferListPool(pOpen->PacketPool);
//
// Free the filter if it's present
//
if (pOpen->bpfprogram != NULL)
ExFreePool(pOpen->bpfprogram);
//
// Jitted filters are supported on x86 (32bit) only
//
#ifdef _X86_
// Free the jitted filter if it's present
if (pOpen->Filter != NULL)
BPF_Destroy_JIT_Filter(pOpen->Filter);
#endif //_X86_
//
// Dereference the read event.
//
if (pOpen->ReadEvent != NULL)
ObDereferenceObject(pOpen->ReadEvent);
//
// free the buffer
// NOTE: the buffer is fragmented among the various CPUs, but the base pointer of the
// allocated chunk of memory is stored in the first slot (pOpen->CpuData[0])
//
if (pOpen->Size > 0)
ExFreePool(pOpen->CpuData[0].Buffer);
//
// free the per CPU spinlocks
//
for (i = 0; i < g_NCpu; i++)
{
NdisFreeSpinLock(&pOpen->CpuData[i].BufferLock);
}
//
// Free the string with the name of the dump file
//
if (pOpen->DumpFileName.Buffer != NULL)
ExFreePool(pOpen->DumpFileName.Buffer);
TRACE_EXIT();
}
//-------------------------------------------------------------------
VOID NPF_OpenAdapterCompleteEx(IN NDIS_HANDLE ProtocolBindingContext, IN NDIS_STATUS Status)
{
POPEN_INSTANCE Open;
PLIST_ENTRY RequestListEntry;
PINTERNAL_REQUEST MaxSizeReq;
NDIS_STATUS ReqStatus;
TRACE_ENTER();
Open = (POPEN_INSTANCE)ProtocolBindingContext;
ASSERT(Open != NULL);
if (Status != NDIS_STATUS_SUCCESS)
{
//
// this is not completely correct, as we are converting an NDIS_STATUS to a NTSTATUS
//
Open->OpenCloseStatus = Status;
}
else
{
Open->OpenCloseStatus = STATUS_SUCCESS;
}
//
// wake up the caller of NdisOpen, that is NPF_Open
//
NdisSetEvent(&Open->NdisOpenCloseCompleteEvent);
TRACE_EXIT();
}
NTSTATUS NPF_GetDeviceMTU(IN POPEN_INSTANCE pOpen, IN PIRP pIrp, OUT PUINT pMtu)
{
PLIST_ENTRY RequestListEntry;
PINTERNAL_REQUEST MaxSizeReq;
NDIS_STATUS ReqStatus;
TRACE_ENTER();
ASSERT(pOpen != NULL);
ASSERT(pIrp != NULL);
ASSERT(pMtu != NULL);
// Extract a request from the list of free ones
RequestListEntry = ExInterlockedRemoveHeadList(&pOpen->RequestList, &pOpen->RequestSpinLock);
if (RequestListEntry == NULL)
{
//
// THIS IS WRONG
//
//
// Assume Ethernet
//
*pMtu = 1514;
TRACE_EXIT();
return STATUS_SUCCESS;
}
MaxSizeReq = CONTAINING_RECORD(RequestListEntry, INTERNAL_REQUEST, ListElement);
MaxSizeReq->Request.RequestType = NdisRequestQueryInformation;
MaxSizeReq->Request.DATA.QUERY_INFORMATION.Oid = OID_GEN_MAXIMUM_TOTAL_SIZE;
MaxSizeReq->Request.DATA.QUERY_INFORMATION.InformationBuffer = pMtu;
MaxSizeReq->Request.DATA.QUERY_INFORMATION.InformationBufferLength = sizeof(*pMtu);
NdisResetEvent(&MaxSizeReq->InternalRequestCompletedEvent);
// submit the request
ReqStatus = NdisOidRequest(pOpen->AdapterHandle, &MaxSizeReq->Request);
if (ReqStatus == NDIS_STATUS_PENDING)
{
NdisWaitEvent(&MaxSizeReq->InternalRequestCompletedEvent, 0);
ReqStatus = MaxSizeReq->RequestStatus;
}
//
// Put the request in the list of the free ones
//
ExInterlockedInsertTailList(&pOpen->RequestList, &MaxSizeReq->ListElement, &pOpen->RequestSpinLock);
if (ReqStatus == NDIS_STATUS_SUCCESS)
{
TRACE_EXIT();
return STATUS_SUCCESS;
}
else
{
//
// THIS IS WRONG
//
//
// Assume Ethernet
//
*pMtu = 1514;
TRACE_EXIT();
return STATUS_SUCCESS;
// return ReqStatus;
}
}
//-------------------------------------------------------------------
NTSTATUS NPF_CloseAdapter(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp)
{
POPEN_INSTANCE pOpen;
PIO_STACK_LOCATION IrpSp;
TRACE_ENTER();
IrpSp = IoGetCurrentIrpStackLocation(Irp);
pOpen = IrpSp->FileObject->FsContext;
ASSERT(pOpen != NULL);
//
// Free the open instance itself
//
ExFreePool(pOpen);
Irp->IoStatus.Status = STATUS_SUCCESS;
Irp->IoStatus.Information = 0;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
TRACE_EXIT();
return STATUS_SUCCESS;
}
//-------------------------------------------------------------------
NTSTATUS NPF_Cleanup(IN PDEVICE_OBJECT DeviceObject, IN PIRP Irp)
{
POPEN_INSTANCE Open;
NDIS_STATUS Status;
PIO_STACK_LOCATION IrpSp;
LARGE_INTEGER ThreadDelay;
ULONG localNumOpenInstances;
TRACE_ENTER();
IrpSp = IoGetCurrentIrpStackLocation(Irp);
Open = IrpSp->FileObject->FsContext;
TRACE_MESSAGE1(PACKET_DEBUG_LOUD, "Open = %p\n", Open);
ASSERT(Open != NULL);
NPF_CloseOpenInstance(Open);
if (Open->ReadEvent != NULL)
KeSetEvent(Open->ReadEvent, 0, FALSE);
NPF_CloseBinding(Open);
// NOTE:
// code commented out because the kernel dump feature is disabled
//
//if (AdapterAlreadyClosing == FALSE)
//{
//
// Unfreeze the consumer
//
// if(Open->mode & MODE_DUMP)
// NdisSetEvent(&Open->DumpEvent);
// else
// KeSetEvent(Open->ReadEvent,0,FALSE);
// //
// // If this instance is in dump mode, complete the dump and close the file
// //
// if((Open->mode & MODE_DUMP) && Open->DumpFileHandle != NULL)
// {
// NTSTATUS wres;
// ThreadDelay.QuadPart = -50000000;
// //
// // Wait the completion of the thread
// //
// wres = KeWaitForSingleObject(Open->DumpThreadObject,
// UserRequest,
// KernelMode,
// TRUE,
// &ThreadDelay);
// ObDereferenceObject(Open->DumpThreadObject);
// //
// // Flush and close the dump file
// //
// NPF_CloseDumpFile(Open);
// }
//}
//
// release all the resources
//
NPF_ReleaseOpenInstanceResources(Open);
// IrpSp->FileObject->FsContext = NULL;
//
// Decrease the counter of open instances
//
localNumOpenInstances = InterlockedDecrement(&g_NumOpenedInstances);
TRACE_MESSAGE1(PACKET_DEBUG_LOUD, "Opened Instances: %u", localNumOpenInstances);
if (localNumOpenInstances == 0)
{
//
// Force a synchronization at the next NPF_Open().
// This hopefully avoids the synchronization issues caused by hibernation or standby.
//
TIME_DESYNCHRONIZE(&G_Start_Time);
}
//
// and complete the IRP with status success
//
Irp->IoStatus.Information = 0;
Irp->IoStatus.Status = STATUS_SUCCESS;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
TRACE_EXIT();
return(STATUS_SUCCESS);
}
//-------------------------------------------------------------------
VOID NPF_CloseAdapterCompleteEx(IN NDIS_HANDLE ProtocolBindingContext)
{
POPEN_INSTANCE Open;
PIRP Irp;
TRACE_ENTER();
Open = (POPEN_INSTANCE)ProtocolBindingContext;
ASSERT(Open != NULL);
TRACE_MESSAGE1(PACKET_DEBUG_LOUD, "Open= %p", Open);
NdisSetEvent(&Open->NdisOpenCloseCompleteEvent);
TRACE_EXIT();
return;
}
//-------------------------------------------------------------------
NDIS_STATUS NPF_NetPowerChange(IN NDIS_HANDLE ProtocolBindingContext, IN PNET_PNP_EVENT_NOTIFICATION pNetPnPEvent)
{
TRACE_ENTER();
TIME_DESYNCHRONIZE(&G_Start_Time);
TIME_SYNCHRONIZE(&G_Start_Time);
TRACE_EXIT();
return STATUS_SUCCESS;
}
//------------------------------------------------------------------
NDIS_STATUS NPF_BindAdapterEx(IN NDIS_HANDLE ProtocolDriverContext, IN NDIS_HANDLE BindContext, IN PNDIS_BIND_PARAMETERS BindParameters)
{
NTSTATUS ntStatus = NDIS_STATUS_SUCCESS;
int a = 1;
a ++;
TRACE_ENTER();
TRACE_EXIT();
return ntStatus;
}
//-------------------------------------------------------------------
NDIS_STATUS NPF_UnbindAdapterEx(IN NDIS_HANDLE UnbindContext, IN NDIS_HANDLE ProtocolBindingContext)
{
NTSTATUS Status;
POPEN_INSTANCE Open = (POPEN_INSTANCE)ProtocolBindingContext;
TRACE_ENTER();
ASSERT(Open != NULL);
//
// The following code has been disabled because the kernel dump feature has been disabled.
//
////
//// Awake a possible pending read on this instance
//// TODO should be ok.
////
// if(Open->mode & MODE_DUMP)
// NdisSetEvent(&Open->DumpEvent);
// else
if (Open->ReadEvent != NULL)
KeSetEvent(Open->ReadEvent, 0, FALSE);
//
// The following code has been disabled because the kernel dump feature has been disabled.
//
////
//// If this instance is in dump mode, complete the dump and close the file
//// TODO needs to be checked again.
////
// if((Open->mode & MODE_DUMP) && Open->DumpFileHandle != NULL)
// NPF_CloseDumpFile(Open);
Status = NDIS_STATUS_SUCCESS;
NPF_CloseBinding(Open);
TRACE_EXIT();
return Status;
}
//-------------------------------------------------------------------
VOID NPF_ResetComplete(IN NDIS_HANDLE ProtocolBindingContext, IN NDIS_STATUS Status)
{
POPEN_INSTANCE Open;
PIRP Irp;
PLIST_ENTRY ResetListEntry;
TRACE_ENTER();
Open = (POPEN_INSTANCE)ProtocolBindingContext;
//
// remove the reset IRP from the list
//
ResetListEntry = ExInterlockedRemoveHeadList(&Open->ResetIrpList, &Open->RequestSpinLock);
Irp = CONTAINING_RECORD(ResetListEntry, IRP, Tail.Overlay.ListEntry);
Irp->IoStatus.Status = STATUS_SUCCESS;
IoCompleteRequest(Irp, IO_NO_INCREMENT);
TRACE_EXIT();
return;
}
You've definitely hit upon an interesting problem with WinPcap. Its protocol driver (NPF) expects to be able to open an adapter whenever it wants. When paired with Wireshark, it will do this frequently; it's typical to see NPF open and close the same adapter dozens of times just while the Wireshark GUI is loading. It's even possible to see NPF hold multiple bindings to the same adapter simultaneously.
The rough equivalent of this in NDIS 6.x is the NdisReEnumerateProtocolBindings function. What this does is queue up a workitem to call into your protocol's ProtocolBindAdapterEx handler for each adapter that is marked as bound in the registry but isn't currently bound in NDIS. (I.e., for each adapter that INetCfg finds a bindpath to that does not already have an open handle.)
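To make that concrete, here is a rough, untested sketch of what a bind-time open can look like (reusing OPEN_INSTANCE, MediumArray, NUM_NDIS_MEDIA, and g_NdisProtocolHandle from the Openclos.c above, with most of the per-open initialization and error handling trimmed); the BindContext that NdisOpenAdapterEx wants is simply the one NDIS passes into the ProtocolBindAdapterEx handler:
NDIS_STATUS NPF_BindAdapterEx(NDIS_HANDLE ProtocolDriverContext, NDIS_HANDLE BindContext, PNDIS_BIND_PARAMETERS BindParameters)
{
    NDIS_OPEN_PARAMETERS OpenParameters;
    NDIS_STATUS Status;
    POPEN_INSTANCE Open;
    Open = ExAllocatePoolWithTag(NonPagedPool, sizeof(OPEN_INSTANCE), '0OWA');
    if (Open == NULL)
        return NDIS_STATUS_RESOURCES;
    RtlZeroMemory(Open, sizeof(OPEN_INSTANCE));
    NdisInitializeEvent(&Open->NdisOpenCloseCompleteEvent);
    NdisResetEvent(&Open->NdisOpenCloseCompleteEvent);
    // ... the rest of the per-open setup that NPF_OpenAdapter used to do ...
    NdisZeroMemory(&OpenParameters, sizeof(NDIS_OPEN_PARAMETERS));
    OpenParameters.Header.Type = NDIS_OBJECT_TYPE_OPEN_PARAMETERS;
    OpenParameters.Header.Revision = NDIS_OPEN_PARAMETERS_REVISION_1;
    OpenParameters.Header.Size = sizeof(NDIS_OPEN_PARAMETERS);
    OpenParameters.AdapterName = BindParameters->AdapterName;
    OpenParameters.MediumArray = MediumArray;
    OpenParameters.MediumArraySize = NUM_NDIS_MEDIA;
    OpenParameters.SelectedMediumIndex = &Open->Medium;
    OpenParameters.FrameTypeArray = NULL;
    OpenParameters.FrameTypeArraySize = 0;
    // The 4th parameter is just the BindContext NDIS handed to this callback.
    Status = NdisOpenAdapterEx(g_NdisProtocolHandle, (NDIS_HANDLE)Open,
                               &OpenParameters, BindContext, &Open->AdapterHandle);
    if (Status == NDIS_STATUS_PENDING)
    {
        // NPF_OpenAdapterCompleteEx sets this event and OpenCloseStatus
        NdisWaitEvent(&Open->NdisOpenCloseCompleteEvent, 0);
        Status = (NDIS_STATUS)Open->OpenCloseStatus;
    }
    if (Status != NDIS_STATUS_SUCCESS)
        ExFreePool(Open);
    return Status;
}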
However, due to the large impedance mismatch between NPF's API and how NDIS regards binding, you'll need to tackle a few issues:
Multiple simultaneous bindings to the same adapter. (This is a rarely-used feature; NPF is one of two protocols that I know use this, so it's not really discussed much in the MSDN documentation.) The only way to get multiple simultaneous bindings in NDIS 6.x is to call NdisOpenAdapterEx twice within the same ProtocolBindAdapterEx call. That's going to be challenging, because NPF's model is to open a new binding whenever an API call comes in from usermode; it doesn't know in advance how many handles will need to be opened.
If another bind request comes in, you can attempt to close all previous handles to that adapter (transparently to the NPF API[!]), call NdisReEnumerateProtocolBindings, then open N+1 handles in your upcoming ProtocolBindAdapterEx handler. But this is brittle.
You can also try and merge all API calls to the same adapter. If a second bind request comes in, just route it to the pre-existing binding to that adapter. This might be difficult, depending on how NPF's internals work. (I'm not allowed to read NPF source code; I can't say.)
Finally, the cheesy solution is to just allocate two (or three) binding handles always, and keep the extras cached in case Wireshark needs them. This is cheap to implement, but still a bit fragile, since you can't really know if Wireshark will want more handles than you pre-allocated.
Missing INetCfg bindings. NDIS 5.x protocols are allowed to bind to an adapter even if the protocol isn't actually supposed to be bound (according to INetCfg). Wireshark uses this to get itself bound to all sorts of random adapters, without worrying too much about whether INetCfg agrees that NPF should be bound. Once you convert to NDIS 6.x, the rules are enforced strictly, and you'll need to make sure that your protocol's INF has a LowerRange keyword for each type of adapter you want to bind over. (I.e., the NPF protocol should show up in the Adapter Properties dialog box.)
Asynchronous bindings. The NdisReEnumerateProtocolBindings model is that you call it, and NDIS will make an attempt to bind your protocol to all bindable adapters. If the adapter isn't bindable for some reason (perhaps it's in a low-power state, or it's being surprise-removed), then NDIS will simply not call your protocol back. It's going to be tough to know exactly when to give up and return failure to the usermode NPF API, since you don't get a callback saying "you won't bind to this adapter". You may be able to use NetEventBindsComplete, but frankly that's kind of a dodgy, ill-defined event and I'm not convinced it's bulletproof. I'd put in a timeout, then use the NetEvent as a hint to cut the timeout short.
Finally, I just wanted to note that, although you said that you wanted to minimize the amount of churn in WinPcap, you might want to consider repackaging its driver as an NDIS LWF. LWFs were designed for exactly this purpose, so they tend to fit better with NPF's needs. (In particular, LWFs can see native 802.11 traffic, can get more accurate data without going through the loopback hack, and are quite a bit simpler than protocols.)