I'm attempting to send a SCSI ReadCapacity16 (0x9E) to a volume on Windows using D. The CDBs are to spec; my ReadCapacity16 works on Linux, and SCSI Inquiries work on Windows. Only the non-Inquiry commands on Windows fail, with an "Incorrect function" error from the Windows kernel.
Since only Inquiries work, is there a trick to sending non-Inquiry commands through the Windows kernel? Any tips on getting this to work? I've been researching this for a couple of weeks and haven't solved it.
This is an example of the CDB:
\\.\physicaldrive0
CDB buffer contents:
9e 10 00 00 00 00 00 00 - 00 00 00 00 00 20 00 00
sgio.exceptions.IoctlFailException#sgio\exceptions.d(13): ioctl error code is 1. Incorrect function.
Here is where the CDB is copied to a buffer for the DeviceIoControl call; this is the same code path that successfully sends the Inquiry commands (but fails for ReadCapacity). The code from GitHub is pasted below:
void sgio_execute(ubyte[] cdb_buf, ubyte[] dataout_buf, ubyte[] datain_buf, ubyte[] sense_buf)
{
version (Windows)
{
const uint SENSE_LENGTH = 196;
ubyte[512] iobuffer = 0;
DWORD amountTransferred = -1;
SCSI_PASS_THROUGH_DIRECT scsiPassThrough = {0};
scsiPassThrough.Cdb[] = 0;
uint size = cast(uint)((cdb_buf.length <= scsiPassThrough.Cdb.length ?
cdb_buf.length : scsiPassThrough.Cdb.length));
scsiPassThrough.Cdb[0..size] = cdb_buf[0..size];
scsiPassThrough.Length = SCSI_PASS_THROUGH_DIRECT.sizeof;
scsiPassThrough.ScsiStatus = 0x00;
scsiPassThrough.TimeOutValue = 0x40;
scsiPassThrough.CdbLength = cast(ubyte)(size);
scsiPassThrough.SenseInfoOffset = SCSI_PASS_THROUGH_DIRECT.sizeof;
scsiPassThrough.SenseInfoLength = SENSE_LENGTH;
scsiPassThrough.DataIn = SCSI_IOCTL_DATA_IN;
scsiPassThrough.DataBuffer = datain_buf.ptr;
scsiPassThrough.DataTransferLength = bigEndianToNative!ushort(cast(ubyte[2]) cdb_buf[3..5]);
int status = DeviceIoControl( m_device,
IOCTL_SCSI_PASS_THROUGH_DIRECT,
&scsiPassThrough,
iobuffer.length, //scsiPassThrough.sizeof,
&iobuffer,
iobuffer.length,
&amountTransferred,
null);
if (status == 0)
{
int errorCode = GetLastError();
// build error message ...
throw new IoctlFailException(exceptionMessage);
}
}
}
Reading the Windows SCSI_PASS_THROUGH_DIRECT structure documentation very closely, I noticed this:
DataTransferLength: Indicates the size in bytes of the data buffer.
Many devices transfer chunks of data of predefined length. The value
in DataTransferLength must be an integral multiple of this predefined,
minimum length that is specified by the device. If an underrun occurs,
the miniport driver must update this member to the number of bytes
actually transferred.
I changed the code to use 512 bytes for DataTransferLength by increasing the size of datain_buf, and the code now works just fine.
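For reference, here is roughly what the working field setup looks like, sketched in C against the Win32 SCSI_PASS_THROUGH_DIRECT structure that the D code mirrors (cdb_buf and cdbLength stand in for the CDB copy shown above; the only real change is the 512-byte data-in buffer):
// Sketch of the fix: size the data-in buffer to one full block (512 bytes)
// and report that size in DataTransferLength, instead of deriving the length
// from the CDB. All other field values are carried over from the code above.
UCHAR dataInBuffer[512] = {0};
SCSI_PASS_THROUGH_DIRECT scsiPassThrough = {0};

memcpy(scsiPassThrough.Cdb, cdb_buf, cdbLength);           // cdbLength as computed above
scsiPassThrough.Length             = sizeof(SCSI_PASS_THROUGH_DIRECT);
scsiPassThrough.CdbLength          = (UCHAR)cdbLength;
scsiPassThrough.SenseInfoOffset    = sizeof(SCSI_PASS_THROUGH_DIRECT);
scsiPassThrough.SenseInfoLength    = 196;                  // SENSE_LENGTH from the code above
scsiPassThrough.DataIn             = SCSI_IOCTL_DATA_IN;
scsiPassThrough.DataBuffer         = dataInBuffer;
scsiPassThrough.DataTransferLength = sizeof(dataInBuffer); // 512: a multiple of the device's block size
scsiPassThrough.TimeOutValue       = 0x40;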
Related
I'm running into an initial LIDAR connection issue when simultaneously connecting 4 Slamtec RPLIDAR A3 units using MATLAB
with the provided interface library found here: https://github.com/ENSTABretagneRobotics/Hardware-MATLAB
The issue is that I have to retry the connection on at least one of the LIDARs before it connects.
Which LIDAR it is can also vary. That is, all but one LIDAR connects the first time.
One time it could be the LIDAR on one COM port, another time it could be the LIDAR on another COM port.
This is the way it is set up right now.
Basically, MATLAB loads the provided interface library, hardwarex.dll. That exposes some library methods to be used by MATLAB.
The method to connect a LIDAR does the following:
Opens the RS232 port
Sets port options
Gets some info and health statuses from the lidar
Sets the motor PWM to zero (stop lidar motor)
Uses express scan mode option
Somewhere in here the communication errors out.
Using a serial sniffer I was able to see that the LIDAR errors out after the following message to the LIDAR:
a5 f0 02 ff 03 ab a5 25 a5 82 05 00 00 00 00 00 22
Which I tracked to the following library methods, in that order
SetMotorPWMRequestRPLIDAR()
CheckMotorControlSupportRequestRPLIDAR()
StopRequestRPLIDAR()
StartExpressScanRequestRPLIDAR() <-- Error here
To which the LIDAR responds with:
a5 5a 54 00 00 40 82
Whereas a successful connection response from the LIDAR is much longer.
Things I've tried
Draining (forcing out all written data) the write buffer with the interface library's DrainComputerRS232Port() method before and/or after any write to the lidar.
Setting the TX/write OS FIFO buffer to FILE_FLAG_NO_BUFFERING (i.e. for WriteFile()).
Changing the hardware FIFO buffer from max (16) to min (1).
Using MATLAB's serial() command to flush any input or output buffers prior to loading the library or trying the connections.
This is the system and settings I am working with
Lidar (x4):
Slamtec RPLIDAR A3
Firmware 1.26
Connected via USB (no USB hub used)
No other COM port devices connected
Computer
OS: Windows 10 Pro - Build 1903
CPU: Intel Xeon 3.00 GHz
RAM: 64 GB
HD: SSD - 512GB NVMe
Serial Port Settings
Baud Rate: 256000
Timeout: 1000
Software
MATLAB R2018b (9.5.0)
I've been banging my head on the wall with this. Any help is much much appreciated!
I'm going to answer my own question. For anyone interested in a more detailed discussion, please refer to the issue posted on the MATLAB RPLIDAR repo:
https://github.com/ENSTABretagneRobotics/Hardware-MATLAB/issues/2
As I mentioned, when debugging, the error seemed to happen in ConnectRPLIDAR() --> StartExpressScanRequestRPLIDAR(), specifically here:
// Receive the first data response (2 data responses needed for angles computation...).
memset(pRPLIDAR->esdata_prev, 0, sizeof(pRPLIDAR->esdata_prev));
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, pRPLIDAR->esdata_prev, sizeof(pRPLIDAR->esdata_prev)) != EXIT_SUCCESS)
{
// Failure
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
What seemed to happen is that after the command was sent out in WriteAllRS232Port(), sometimes ReadAllRS232Port() would not read a response and esdata_prev would contain nothing.
We tried adding an mSleep(500) delay before that second ReadAllRS232Port(), and it seemed to help (my guess is that the lidar was slow to respond), but it did not fully resolve the issue.
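For reference, that intermediate attempt looked roughly like this (a sketch; the only addition is the mSleep() before the blocking read shown earlier):
/* Intermediate attempt (sketch): give the lidar time to answer before the
 * blocking read of the first express-scan data response. It helped, but did
 * not fully resolve the issue. */
mSleep(500); /* the 500 ms value we tried */
memset(pRPLIDAR->esdata_prev, 0, sizeof(pRPLIDAR->esdata_prev));
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, pRPLIDAR->esdata_prev, sizeof(pRPLIDAR->esdata_prev)) != EXIT_SUCCESS)
{
	printf("A RPLIDAR is not responding correctly. \n");
	return EXIT_FAILURE;
}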
The following is what made it work every time with 4 lidars:
inline int StartExpressScanRequestRPLIDAR(RPLIDAR* pRPLIDAR)
{
unsigned char reqbuf[] = { START_FLAG1_RPLIDAR,EXPRESS_SCAN_REQUEST_RPLIDAR,0x05,0,0,0,0,0,0x22 };
unsigned char descbuf[7];
unsigned char sync = 0;
unsigned char ChkSum = 0;
// Send request to output/tx OS FIFO buffer for port
if (WriteAllRS232Port(&pRPLIDAR->RS232Port, reqbuf, sizeof(reqbuf)) != EXIT_SUCCESS)
{
printf("Error writing data to a RPLIDAR. \n");
return EXIT_FAILURE;
}
// Receive the response descriptor.
memset(descbuf, 0, sizeof(descbuf)); // Clear the buffer
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, descbuf, sizeof(descbuf)) != EXIT_SUCCESS)
{
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
// Quick check of the response descriptor.
if ((descbuf[2] != 0x54) || (descbuf[5] != 0x40) || (descbuf[6] != MEASUREMENT_CAPSULED_RESPONSE_RPLIDAR))
{
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
// Keep checking the port read buffer for up to 1.5 seconds
int timeout = 1500;
// Check it every 5 ms
// Note on Checking Period Value:
// Waiting on 82 bytes in lidar payload
// 10 bits per byte for the serial communication
// 820 bits / 256000 baud = 0.0032s = 3.2ms
int checkingperiod = 5;
RS232PORT* pRS232Port = &pRPLIDAR->RS232Port;
int i;
int count = 0;
// Wait for something to show up on the input buffer on port
if (!WaitForRS232Port(&pRPLIDAR->RS232Port, timeout, checkingperiod))
{
//Success - Something is there
// If anything is on the input buffer, wait until there is enough
count = 0;
for (i = 0; i < 50; i++)
{
// Check the input FIFO buffer on the port
GetFIFOComputerRS232Port(pRS232Port->hDev, &count);
// Check if there is enough to get a full payload read
if (count >= sizeof(pRPLIDAR->esdata_prev))
{
// There is enough, stop waiting
break;
}
else
{
// Not enough, wait a little
mSleep(checkingperiod);
}
}
}
else
{
//Failure - After waiting for an input buffer, it wasn't there
printf("[StartExpressScanRequestRPLIDAR] : Failed to detect response on the input FIFO buffer. \n");
return EXIT_FAILURE;
}
// Receive the first data response (2 data responses needed for angles computation...).
memset(pRPLIDAR->esdata_prev, 0, sizeof(pRPLIDAR->esdata_prev));
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, pRPLIDAR->esdata_prev, sizeof(pRPLIDAR->esdata_prev)) != EXIT_SUCCESS)
{
// Failure
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
// Analyze the first data response.
sync = (pRPLIDAR->esdata_prev[0] & 0xF0)|(pRPLIDAR->esdata_prev[1]>>4);
if (sync != START_FLAG1_RPLIDAR)
{
printf("A RPLIDAR is not responding correctly : Bad sync1 or sync2. \n");
return EXIT_FAILURE;
}
ChkSum = (pRPLIDAR->esdata_prev[1]<<4)|(pRPLIDAR->esdata_prev[0] & 0x0F);
// Force ComputeChecksumRPLIDAR() to compute until the last byte...
if (ChkSum != ComputeChecksumRPLIDAR(pRPLIDAR->esdata_prev+2, sizeof(pRPLIDAR->esdata_prev)-1))
{
printf("A RPLIDAR is not responding correctly : Bad ChkSum. \n");
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
So in the above code, we wait for the OS read FIFO buffer to show something within 1.5 s, checking every 5 ms (WaitForRS232Port()). If anything shows up, we then wait until there is enough for a full payload (GetFIFOComputerRS232Port()).
I'm not sure if it made a difference, but we also removed the OS write FIFO buffering by changing the CreateFile flags from 0 to FILE_FLAG_NO_BUFFERING:
File: OSComputerRS232Port.h
...
hDev = CreateFile(
tstr,
GENERIC_READ|GENERIC_WRITE,
0, // Must be opened with exclusive-access.
NULL, // No security attributes.
OPEN_EXISTING, // Must use OPEN_EXISTING.
FILE_FLAG_NO_BUFFERING, // Not overlapped I/O. Should use FILE_FLAG_WRITE_THROUGH and maybe also FILE_FLAG_NO_BUFFERING?
NULL // hTemplate must be NULL for comm devices.
);
...
Does Web Bluetooth run into a 20-byte MTU on Windows Chrome?
I am seeing writes of the same 25 bytes to a characteristic value succeed in Chrome on macOS and fail in Chrome on Windows.
As I whittle down the bytes, the error continues occurring until I reach 20 bytes. If this is the MTU, is it documented somewhere? And is there a way to write values >20 bytes?
I've tried the following, in TypeScript:
const writeBytesBuffered = async (
  characteristic: BluetoothRemoteGATTCharacteristic,
  bytes: Uint8Array,
  index: number = 0
): Promise<void> => {
  while (index < bytes.length) {
    await characteristic.writeValue(bytes.slice(index, index + 20));
    index += 20;
  }
};
No errors are thrown but presumably my device's GATT server doesn't know what to do with them and fails silently.
I am trying to interface a Freescale i.MX6 SoC with an mpu92/65 sensor device.
I have taken the mpu92/65 device driver from Android (https://github.com/NoelMacwan/Kernel-10.4.1.B.0.101/tree/master/drivers/staging/iio/imu) and made the necessary modifications to the driver and device tree.
Device tree modifications:
&i2c3{
...
extaccelerometer: mpu9250#68{
compatible = "mpu9250";
reg = <0x68>;
interrupt-parent = <&gpio2>;
interrupts = <9>;
int_config = /bits/ 8 <0x00>;
level_shifter = /bits/ 8 <0>;
orientation = [ 01 00 00 00 01 00 00 00 01 ];
sec_slave_type = <2>;
sec_slave_id = <0x12>;
secondary_i2c_addr = /bits/ 16 <0x0C>;
secondary_orientation = [ 00 01 00 ff 00 00 00 00 01 ];
};
};
inv_mpu_iio driver modifications:
static void get_platdata(struct device *dev, struct inv_mpu_iio_s *st){
struct device_node *np = dev->of_node;
int i=0;
of_property_read_u8(np, "int_config", &st->plat_data.int_config);
of_property_read_u8(np, "level_shifter", &st->plat_data.level_shifter);
of_property_read_u8_array(np, "orientation", &st->plat_data.orientation,9);
of_property_read_u32(np, "sec_slave_type", &st->plat_data.sec_slave_type);
of_property_read_u32(np, "sec_slave_id", &st->plat_data.sec_slave_id);
of_property_read_u16(np, "secondary_i2c_addr", &st->plat_data.secondary_i2c_addr);
of_property_read_u8_array(np, "secondary_orientation", &st->plat_data.secondary_orientation,9);
}
static int inv_mpu_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
.....
if (client->dev.of_node) {
get_platdata(&client->dev, st);
} else {
st->plat_data = *(struct mpu_platform_data *)dev_get_platdata(&client->dev);
}
.....
}
I have retrieved the platform data from the device tree in the above manner. In the probe function I am getting client->irq = 0, but I have specified the IRQ in the device tree. Could someone please tell me what else I need to do to specify gpio2-9 (Linux pad) as the interrupt line for this I2C device?
0x68 is the slave address of the I2C device. The driver probe functionality initially tries to write to the device to verify the chip type. So the data and the slave address are sent to the adapter driver, where the adapter driver's start function successfully writes to and reads from the control and status registers.
static int i2c_imx_start(struct imx_i2c_struct *i2c_imx)
{
unsigned int temp = 0;
int result;
dev_dbg(&i2c_imx->adapter.dev, "<%s>\n", __func__);
i2c_imx_set_clk(i2c_imx);
result = clk_prepare_enable(i2c_imx->clk);
if (result)
return result;
imx_i2c_write_reg(i2c_imx->ifdr, i2c_imx, IMX_I2C_IFDR,__func__);
/* Enable I2C controller */
imx_i2c_write_reg(i2c_imx->hwdata->i2sr_clr_opcode, i2c_imx, IMX_I2C_I2SR,__func__);
imx_i2c_write_reg(i2c_imx->hwdata->i2cr_ien_opcode, i2c_imx, IMX_I2C_I2CR,__func__);
/* Wait controller to be stable */
udelay(50);
/* Start I2C transaction */
temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR);
temp |= I2CR_MSTA;
imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR,__func__);
result = i2c_imx_bus_busy(i2c_imx, 1);
if (result)
return result;
i2c_imx->stopped = 0;
temp |= I2CR_IIEN | I2CR_MTX | I2CR_TXAK;
temp &= ~I2CR_DMAEN;
imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2CR,__func__);
return result;
}
Then the adapter driver writes to the data register:
imx_i2c_write_reg(msgs->addr << 1, i2c_imx, IMX_I2C_I2DR,__func__);
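As a side note (my own illustration, not code from the driver), the value written to I2DR there is the 7-bit slave address shifted left by one, with the read/write flag in bit 0:
/* For the slave address 0x68 from the device tree:
 *   address byte for a write: (0x68 << 1) | 0 = 0xD0
 *   address byte for a read:  (0x68 << 1) | 1 = 0xD1
 * The addressed slave is expected to pull SDA low (ACK) on the ninth clock,
 * and that ACK/NACK is what later shows up in the status register. */
unsigned char addr_write = (0x68 << 1) | 0;  /* 0xD0 */
unsigned char addr_read  = (0x68 << 1) | 1;  /* 0xD1 */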
After this, the adapter interrupt is generated (the i2c3 bus interrupt, number 291).
static irqreturn_t i2c_imx_isr(int irq, void *dev_id)
{
struct imx_i2c_struct *i2c_imx = dev_id;
unsigned int temp;
printk("irq:%d\n",irq);
temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2SR);
if (temp & I2SR_IIF) {
/* save status register */
i2c_imx->i2csr = temp;
temp &= ~I2SR_IIF;
printk("temp=%d\n",temp);
temp |= (i2c_imx->hwdata->i2sr_clr_opcode & I2SR_IIF);
imx_i2c_write_reg(temp, i2c_imx, IMX_I2C_I2SR,__func__);
wake_up(&i2c_imx->queue);
return IRQ_HANDLED;
}
return IRQ_NONE;
}
In the ISR, after reading the status register, the value should be 162 (the last bit should be 0 to indicate an acknowledge), but for my device I am getting 163 (the last bit is 1, so it was not acknowledged). The acknowledge-check function then returns an -EIO error. For all the other devices connected to this bus, the status register reads 162 after writing to the data register.
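For reference, the acknowledge check in the i.MX I2C bus driver (i2c_imx_acked() in drivers/i2c/busses/i2c-imx.c, shown slightly trimmed; the exact return value can differ between kernel versions, and -EIO matches what you report) tests exactly the bit that differs between 162 (0xA2) and 163 (0xA3), namely I2SR bit 0 (RXAK):
/* Slightly trimmed from i2c_imx_acked() in drivers/i2c/busses/i2c-imx.c:
 * I2SR bit 0 (RXAK) is 0 when the addressed slave acknowledged and 1 when it
 * did not, so 0xA2 passes while 0xA3 fails the transfer with -EIO. */
static int i2c_imx_acked(struct imx_i2c_struct *i2c_imx)
{
	if (imx_i2c_read_reg(i2c_imx, IMX_I2C_I2SR) & I2SR_RXAK) {
		dev_dbg(&i2c_imx->adapter.dev, "<%s> No ACK\n", __func__);
		return -EIO; /* No ACK */
	}
	return 0;
}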
I don't know why I am getting this behavior. One more thing: even if I don't connect the device, the start function is still able to write to and read from the status and control registers. I am not sure which status register is being read and written. If I assume these are the adapter's registers, can I also assume that the adapter hardware automatically reads from and writes to the connected device? If so, how am I getting the same behavior when the device is not connected?
Please help me out.
In the probe function I am getting client->irq = 0, but I have specified the IRQ in the device tree. Could someone please tell me what else I need to do to specify gpio2-9 (Linux pad) as the interrupt line for this I2C device?
Wrong definition of interrupts property
Your interrupts definition seems incorrect:
interrupts = <9>;
It should be in "two cells" format (see Documentation/devicetree/bindings/interrupt-controller/interrupts.txt for details).
I ran the following command:
$ find arch/arm/boot/dts/ -name '*imx6*' -exec grep -Hn interrupt {} \; | grep cell
and I see that most i.MX6 device trees use the two-cell format for GPIO interrupts. So your interrupts definition should look like this:
interrupts = <9 IRQ_TYPE_EDGE_FALLING>;
or if your kernel version still doesn't have named constants for IRQ types:
interrupts = <9 2>;
Refer to the datasheet or driver code for MPU9250 to figure out the type of IRQ (falling/rising).
Missing of_match_table
I'm not 100% sure that what is explained next is the cause of your issue, but it's at least worth checking.
As I see it, the problem is that OF (device tree) matching is not happening. To fix this, in addition to .id_table you need to define and assign .of_match_table in your driver struct. Right now you have the following driver definition in your driver:
static const struct i2c_device_id inv_mpu_id[] = {
...
{"mpu9250", INV_MPU9250},
...
{}
};
static struct i2c_driver inv_mpu_driver = {
...
.id_table = inv_mpu_id,
...
};
And you need to add something like this:
#include <linux/of.h>
#ifdef CONFIG_OF
static const struct of_device_id inv_mpu_of_table[] = {
...
{ .compatible = "invensense,mpu9250" },
...
{ }
};
MODULE_DEVICE_TABLE(of, inv_mpu_of_table);
#endif
static struct i2c_driver inv_mpu_driver = {
.driver = {
.of_match_table = of_match_ptr(inv_mpu_of_table),
...
},
...
};
Be sure that your compatible strings have exactly the "vendor,product" format (which is "invensense,mpu9250" in your case).
Now in your device tree you can describe your device using "invensense,mpu9250" as the value for the compatible property:
&i2c3 {
...
extaccelerometer: mpu9250#68 {
compatible = "invensense,mpu9250";
...
	};
};
After these steps OF matching should happen correctly and you should see your client->irq assigned appropriately (so it's not 0).
Run the following command to list all I2C/IIO drivers that have device tree support, and you'll see that they all have both tables in the driver struct:
$ git grep --all-match -e of_match_table -e '\i2c_driver' -e '\.id_table\b' drivers/iio/* | sed 's/:.*//g' | sort -u
Under the hood
Look at the i2c_device_probe() function in drivers/i2c/i2c-core.c to see how the IRQ number is read from the device tree for an I2C device:
static int i2c_device_probe(struct device *dev)
{
...
if (dev->of_node) {
...
irq = of_irq_get(dev->of_node, 0);
}
...
client->irq = irq;
...
status = driver->probe(client, i2c_match_id(driver->id_table, client));
}
This function is executed when a device/driver match happens. Device information is read from the device tree when your I2C adapter is probed. So on the i2c_add_driver() call for your driver there can be a match (by compatible string) with a device from the device tree; i2c_device_probe() is then called, populating client->irq and calling your driver's probe function next.
The of_irq_get() function obtains the IRQ number from the device tree interrupts property.
Also, there was an attempt to get rid of .id_table and use .of_match_table exclusively for device matching: commit. But it was later reverted in this commit, due to some side effects. So for now we must define both .id_table AND .of_match_table for an I2C driver to work correctly.
I'm writing a program that should simultaneously send and receive special RAW UDP packets. Almost all fields (except the MAC addresses) are filled in by me. Besides that, I'm using some fake IP options like FE 04 01 00 (I've tried different ones), but the IP header as a whole is correct. Here is an example packet:
MACs: 30 85 a9 1f b8 d6 00 25 22 62 22 07 08 00
IP header: 48 00 00 3e 03 d0 00 00 40 11 bb b7 c0 a8 89 a1 c0 a8 89 01
IP options:
fd 04 01 00
fe 08 01 11 1d 15 0a 00
UDP header, payload: c3 50 c3 55 00 1e d7 ce 01 02 03 04 05 06 07 08 09 0a 0b 0c 0d 0e 0f 11 22 33 44 55 66 77
Issue
Precondition: Windows Firewall is disabled and no other firewall is running. On the Windows 7 host the program is launched with "Run as Administrator".
On Windows XP the user is an Administrator.
When the packet is sent to a remote host running Windows XP (LAN), or to localhost on Windows 7 or Windows XP, it is successfully received (on the remote host or localhost). But if the packet is sent to a remote host running Windows 7 (LAN), it is rejected by the remote host with an ICMP "Parameter problem" message. This can only be seen using (for example) Wireshark: sample Wireshark capture file
Here's some source code for sending:
ressrc = ResolveAddress(srcAddrStr, srcPort, addressFamily, socketType, protocol);
if (ressrc == NULL) {
// handling
}
resdest = ResolveAddress(dstAddrStr, dstPort, ressrc->ai_family, ressrc->ai_socktype, ressrc->ai_protocol);
if (resdest == NULL) {
// handling
}
sendSock = socket(ressrc->ai_family, socketType, ressrc->ai_protocol);
if (sendSock == INVALID_SOCKET) {
// handling
}
int optval = 1;
rc = setsockopt(sendSock, IPPROTO_IP, IP_HDRINCL, (char *)&optval, sizeof(optval));
if (rc == SOCKET_ERROR) {
// handling
}
// packetizing
WSABUF *wbuf = ...
rc = sendto(sendSock, wbuf[0].buf, wbuf[0].len, 0, resdest->ai_addr, resdest->ai_addrlen);
and for receiving:
ressrc = ResolveAddress(srcAddrStr, NULL, addressFamily, socketType, protocol);
if (ressrc == NULL) {
//handling
}
recvSock = socket(ressrc->ai_family, socketType, ressrc->ai_protocol);
if (recvSock == INVALID_SOCKET) {
//handling
}
rc = bind(recvSock, ressrc->ai_addr, ressrc->ai_addrlen);
if (rc == SOCKET_ERROR) {
//handling
}
while (1)
{
fromlen = sizeof(safrom);
rc = recvfrom(recvSock, buf, MAX_PACKET, 0, (SOCKADDR *)&safrom, &fromlen);
if (rc == SOCKET_ERROR) {
//handling
}
/*
* handling received packet
*/
}
I've tried:
comparing packets received on different hosts (no differences)
putting the receiving socket into promiscuous mode using the WSAIoctl and ioctlsocket functions
googling, but the only thing I've found about my issue is this topic
What can cause this problem? Should I turn something on or off? Or am I doing something else wrong?
I am writing an adapter TSP for a phone system. This system has a TAPI API, but it is incompatible with the application I am trying to TAPI-enable. In order to place a call from the correct line, I need to know some information (from HKCU) about who is making the request. Since the TSP runs in the context of the Telephony service, I cannot access it directly. My plan was to use the functionality of LINE_CREATEDIALOGINSTANCE to read this information.
The problem I'm having is that the Telephony service is crashing immediately after returning from TUISPI_providerGenericDialog with the following stack trace:
72004400()
tapisrv.dll!_FreeDialogInstance#20() + 0xa93 bytes
tapisrv.dll!_ClientRequest#16() + 0x8f bytes
rpcrt4.dll!_Invoke#12() + 0x30 bytes
rpcrt4.dll!_NdrStubCall2#16() + 0x217 bytes
rpcrt4.dll!_NdrServerCall2#4() + 0x19 bytes
rpcrt4.dll!_DispatchToStubInCNoAvrf#12() + 0x17 bytes
rpcrt4.dll!RPC_INTERFACE::DispatchToStubWorker() + 0xae bytes
rpcrt4.dll!RPC_INTERFACE::DispatchToStub() + 0x4b bytes
rpcrt4.dll!LRPC_SCALL::DealWithRequestMessage() + 0x1d5 bytes
rpcrt4.dll!LRPC_ADDRESS::DealWithLRPCRequest() + 0x90 bytes
rpcrt4.dll!LRPC_ADDRESS::ReceiveLotsaCalls() + 0x20c bytes
rpcrt4.dll!RecvLotsaCallsWrapper() + 0xd bytes
rpcrt4.dll!BaseCachedThreadRoutine() + 0x92 bytes
rpcrt4.dll!ThreadStartRoutine() + 0x1b bytes
kernel32.dll!_BaseThreadStart#8() + 0x34 bytes
As per this book, the Telephony service will crash if TSPI_providerFreeDialogInstance is not implemented. I have implemented this function and DepWalker shows it as being properly exported. ApiSpy32 shows that its address is correctly returned via GetProcAddress when my TSP is loaded. Why is it still crashing?
The relevant code:
LONG TSPIAPI TSPI_lineMakeCall(DRV_REQUESTID dwRequestID, HDRVLINE hdLine, HTAPICALL htCall,
LPHDRVCALL lphdCall, LPCWSTR lpszDestAddress, DWORD dwCountryCode, LPLINECALLPARAMS const lpCallParams)
{
OutputDebugString("TSPI_lineMakeCall\n");
PDRVLINE pLine = (PDRVLINE) hdLine;
*lphdCall = (HDRVCALL)hdLine;
typedef TUISPICREATEDIALOGINSTANCEPARAMS PARAMS;
pLine->htCall = htCall;
DWORD lLength = (lstrlenW(lpszDestAddress) + 1) * sizeof(WCHAR);
PARAMS* lParams = (PARAMS*)DrvAlloc(sizeof(PARAMS) + lLength);
RtlZeroMemory(lParams, sizeof(PARAMS) + lLength);
lParams->dwRequestID = dwRequestID;
lParams->hdDlgInst = (HDRVDIALOGINSTANCE)1000;
lParams->lpszUIDLLName = L"TapiAdapter.tsp";
lParams->lpParams = lParams + 1;
lParams->dwSize = lLength;
lstrcpyW((LPWSTR)(lParams + 1), lpszDestAddress);
(*pLine->pfnEventProc)(pLine->htLine, 0, LINE_CREATEDIALOGINSTANCE, (DWORD)lParams, 0, 0);
return dwRequestID;
}
LONG TSPIAPI TSPI_providerGenericDialogData(DWORD_PTR dwObjectID, DWORD dwObjectType, LPVOID lpParams, DWORD dwSize)
{
OutputDebugString("TSPI_providerGenericDialogData\n");
return 0;
}
LONG TSPIAPI TSPI_providerFreeDialogInstance(HDRVDIALOGINSTANCE hdDlgInst)
{
OutputDebugString("TSPI_providerFreeDialogInstance\n");
return 0;
}
LONG TSPIAPI TUISPI_providerGenericDialog(TUISPIDLLCALLBACK lpfnUIDLLCallback, HTAPIDIALOGINSTANCE htDlgInst, LPVOID lpParams, DWORD dwSize, HANDLE hEvent)
{
SetEvent(hEvent);
LPCWSTR lNumber = (LPCWSTR)lpParams;
MessageBoxW(0, lNumber, L"Dial Number", MB_OK);
return 0;
}
I don't know, but the help for the TUISPICREATEDIALOGINSTANCEPARAMS structure says that lpszUIDLLName should be a ...
pointer to a NULL-terminated string
specifying the fully qualified name of
the UI DLL to load in the application
context
... however, L"TapiAdapter.tsp" doesn't look like a fully qualified name for the UI DLL ("fully qualified" means that it includes the path). Do you have a UI DLL to be loaded? Is it loaded? Does it display the dialog? Is it unloaded? Does TUISPI_providerGenericDialog exist in your TSP, or does it exist in your UI DLL (they're supposed to be two different DLLs)?
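For illustration only (the directory and file name below are made up), a fully qualified name would look something like this:
/* Hypothetical example of a fully qualified UI DLL name; the actual path and
 * file name depend on where your (separate) UI DLL is installed. */
lParams->lpszUIDLLName = L"C:\\Program Files\\TapiAdapter\\TapiAdapterUI.dll";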
I have found the solution: as per MSDN, the first parameter of the LINEEVENT call for this event needs to be an HPROVIDER, not an HTAPILINE. Since the first parameter of LINEEVENT is typed as HTAPILINE, the HPROVIDER needs to be cast.
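A sketch of the corrected call in TSPI_lineMakeCall (g_hProvider here is assumed to be the HPROVIDER saved earlier, e.g. from TSPI_providerEnumDevices; the variable name is my own):
/* Sketch: LINE_CREATEDIALOGINSTANCE expects the provider handle in the first
 * parameter, which is typed HTAPILINE, so the saved HPROVIDER is cast. */
(*pLine->pfnEventProc)((HTAPILINE)g_hProvider, 0, LINE_CREATEDIALOGINSTANCE, (DWORD)lParams, 0, 0);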