XCB: How to "free" XID allocated by xcb_generate_id - x11

(1) In XCB, we must manually allocate XID before some requests. For example,
xcb_window_t win = xcb_generate_id(conn);
xcb_create_window(conn, XCB_COPY_FROM_PARENT, win, ...);
Then, what should we do to free (or unallocate) the XID? Is it enough to invoke xcb_destroy_window?
(2) What about the case where xcb_create_window fails? For example,
xcb_void_cookie_t cookie = xcb_create_window_checked(conn, XCB_COPY_FROM_PARENT, win, ...);
if(xcb_request_check(conn, cookie)) ... // xcb_create_window failed.
In the example above, xcb_create_window failed, so the XID win is allocated but not used. What should we do with win in this case?

Is it enough to invoke xcb_destroy_window?
Yup. Sadly, I don't have any authoritative docs to link to, so I can just say "trust me".
That's the simple answer for question 1. Answer 2... uhm, I guess it does not have a simple answer. So I guess you need "the full truth".
When an X11 client connects to the X11 server, it is assigned a range of XIDs to use. These are the fields resource-id-base and resource-id-mask in the server response here: https://www.x.org/releases/X11R7.6/doc/xproto/x11protocol.html#server_response
Conceptually, these two fields describe something like "all XIDs from 0x100 to 0x200 are for you".
xcb_generate_id() now simply returns the next entry from this range and marks it as used. So, after the first call and staying with the example above, "all XIDs from 0x101 to 0x200" are still available.
This is quite a simple data structure and there is no way to return XIDs as "no longer used".
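To make that concrete, here is roughly what xcb_generate_id() does internally (a simplified sketch, not libxcb's actual code; the real version also handles the XC-MISC refill described below):
#include <stdint.h>

static uint32_t xid_base;  /* resource-id-base from the setup reply             */
static uint32_t xid_mask;  /* resource-id-mask from the setup reply             */
static uint32_t xid_inc;   /* lowest set bit of the mask: xid_mask & -xid_mask  */
static uint32_t xid_last;  /* last offset handed out                            */

uint32_t generate_id_sketch(void)
{
    if (xid_last > xid_mask)
        return (uint32_t)-1;            /* range exhausted; see XC-MISC below    */
    uint32_t id = xid_base | xid_last;  /* compose the XID from base + offset    */
    xid_last += xid_inc;                /* just bump a counter; nothing is freed */
    return id;
}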
So, what happens after all XIDs have been used? For this, the XC-MISC extension exists: https://www.x.org/releases/X11R7.7/doc/xcmiscproto/xc-misc.html
When it runs out of XIDs, libxcb sends an XCMiscGetXIDRange request to get a "new" batch of XIDs and then starts handing them out.
I write "new" in quotes since it is actually a sub-range of the original XID range: The X11 server knows which range it originally assigned this client. It knows which XIDs are in use. Thus, it can pick a free range and return it to the client.
Thanks to this mechanism, libxcb does not have to track used XIDs in any way. Stuff just works automatically.
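Putting both questions together, a minimal sketch of the checked-create pattern could look like this (the screen setup and window parameters are illustrative; the point is that after a failed create you just free the error and stop using win, and the wasted XID costs nothing):
xcb_screen_t *screen = xcb_setup_roots_iterator(xcb_get_setup(conn)).data;
xcb_window_t win = xcb_generate_id(conn);

xcb_void_cookie_t cookie = xcb_create_window_checked(
    conn, XCB_COPY_FROM_PARENT, win, screen->root,
    0, 0, 100, 100, 0,                      /* x, y, width, height, border */
    XCB_WINDOW_CLASS_INPUT_OUTPUT, screen->root_visual,
    0, NULL);                               /* no value mask/list          */

xcb_generic_error_t *err = xcb_request_check(conn, cookie);
if (err) {
    free(err);      /* creation failed: drop the error and simply forget about win */
} else {
    /* ... use the window ... */
    xcb_destroy_window(conn, win);   /* enough to release the server-side resource  */
}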

Related

MFC: Conflicting information on how to clean up after using COleDataSource::DoDragDrop()

Can someone clear up all the conflicting information?
The documentation for COleDataSource::CacheData() says:
After the call to CacheData the ptd member of lpFormatEtc and the contents of lpStgMedium are owned by the data object, not by the caller.
The documentation for COleDataSource::CacheGlobalData() doesn't say that.
You find code examples of how to use COleDataSource::DoDragDrop() in places like Code Project which call COleDataSource::CacheGlobalData(), then check the DoDragDrop() result and free the memory if it didn't drop:
DROPEFFECT dweffect = datasrc->DoDragDrop(DROPEFFECT_COPY);
// They say if the operation wasn't accepted, or was canceled, we
// should call GlobalFree() to clean up.
if (dweffect == DROPEFFECT_NONE) {
    GlobalFree(hgdrop);
}
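For context, the complete Code Project-style pattern being described looks roughly like this (a sketch only; the clipboard format, buffer contents, and variable names are illustrative, and whether the final GlobalFree() is correct is exactly what this question asks):
const char text[] = "dragged text";
HGLOBAL hgdrop = ::GlobalAlloc(GMEM_MOVEABLE, sizeof(text));
memcpy(::GlobalLock(hgdrop), text, sizeof(text));
::GlobalUnlock(hgdrop);

COleDataSource *datasrc = new COleDataSource();   // heap-allocated, see the "Extra" note below
datasrc->CacheGlobalData(CF_TEXT, hgdrop);

DROPEFFECT dweffect = datasrc->DoDragDrop(DROPEFFECT_COPY);
if (dweffect == DROPEFFECT_NONE) {
    ::GlobalFree(hgdrop);          // the cleanup whose correctness is in question
}
datasrc->ExternalRelease();        // see the "Extra" note below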
You also have Q182219 (which, for some unknown reason, MS has made nearly impossible to find by breaking all the good old MSKB links; links all over the Internet are dead). Q182219 says something about a successful move operation on an NT-based OS (Win10) returning DROPEFFECT_NONE, so you have to check whether things really moved.
So the questions are:
1) Should you really free the memory if the drop operation returns DROPEFFECT_NONE, or does MFC handle all that?
2) Does it still return DROPEFFECT_NONE after a successful move operation, or does MFC handle that internally (or was it fixed in some Windows version)?
3) If it does return DROPEFFECT_NONE after a move, wouldn't logic like the above double-free the memory if it was a move operation?
Extra:
You also find examples where people declare COleDataSource mydatasource on the stack, which is WRONG. You have to allocate it like COleDataSource *mydatasource = new COleDataSource() and then, at the end, call mydatasource->ExternalRelease() to release it. (ExternalRelease() handles calling InternalRelease() if the object isn't using aggregation, i.e. a derived class that acts as a wrapper.)
TIA!!

writing partial data with libwebsockets

I'm using the libwebsockets v2.4.
The docs seem unclear to me about what I have to do with the return value of lws_write().
If it returns -1, it's an error and I'm invited to close the connection. That's fine for me.
But when it returns a value strictly less than the buffer length I passed, should I assume I have to write the remaining bytes later (in another WRITABLE callback occurrence)? Is this situation even possible?
Also, should I call lws_send_pipe_choked() before lws_write(), considering that I always call lws_write() in the context of a WRITABLE callback?
My understanding is that lws_write always returns the requested buffer length unless an error occurs.
If you look at lws_issue_raw() (whose result lws_write() returns) in output.c (https://github.com/warmcat/libwebsockets/blob/v2.4.0/lib/output.c#L157), you can see that if the length written by lws_ssl_capable_write() is less than the provided length, lws allocates a buffer on wsi->trunc_alloc to hold the remaining bytes so they can be sent later.
Concerning your second question, I think it is safe to call lws_write() in the context of a WRITABLE callback without checking whether the pipe is choked. However, if you happen to loop on lws_write() in the callback, lws_send_pipe_choked() must be called to protect the subsequent calls to lws_write(). If you don't, you might stumble upon this assertion https://github.com/warmcat/libwebsockets/blob/v2.4.0/lib/output.c#L83 and the user code will crash.
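To illustrate the looping case, a minimal sketch of a WRITABLE callback might look like this (assumes the v2.4-era API; have_pending_message() and next_message() are hypothetical application-side queue helpers, not libwebsockets functions):
#include <libwebsockets.h>

/* Hypothetical application-side queue helpers (not part of libwebsockets). */
extern int have_pending_message(void);
extern size_t next_message(unsigned char *dst, size_t max);

static int callback_minimal(struct lws *wsi, enum lws_callback_reasons reason,
                            void *user, void *in, size_t len)
{
    switch (reason) {
    case LWS_CALLBACK_SERVER_WRITEABLE:
        /* Send queued messages, but stop as soon as the pipe is choked so
           no lws_write() is issued on a choked connection. */
        while (have_pending_message()) {
            unsigned char buf[LWS_PRE + 512];
            size_t n = next_message(buf + LWS_PRE, 512);

            if (lws_write(wsi, buf + LWS_PRE, n, LWS_WRITE_TEXT) < 0)
                return -1;                 /* error: close the connection    */
            if (lws_send_pipe_choked(wsi))
                break;                     /* resume on the next WRITABLE    */
        }
        if (have_pending_message())
            lws_callback_on_writable(wsi); /* ask for another WRITABLE event */
        break;
    default:
        break;
    }
    return 0;
}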

Siemens MC35 + ATcommand

I would like to do 2 things.
Recognize when someone is calling - in the terminal RING will appear, and to answer I have to send the command ATA. But how can I recognize it while I am doing something else? Should I use a new thread and read the port until RING arrives? Is there any better solution?
What is the symbol for the end of a response? I'm reading characters in a loop, but I do not know the number of characters. The example below doesn't work properly:
while(readCharUART() != 10) {}
while(readCharUART() != 13)
{
    getchar() = ..
}
You are on the right track.
For RING, yes, the correct way to do it is to have one thread just read modem responses until you get the unsolicited result code RING. If you want to run AT commands from time to time (say ATA), then you should let this thread do that as well, i.e. have one thread that takes care of both issuing AT commands and monitoring for unsolicited result codes.
Regarding formatting of responses from the modem, this is well described in chapter 5.7.1 Responses in the ITU V.250 standard. Short summary (reading the spec is highly recommended!):
<header>RING<trailer>
where header and trailer are both "\r\n" (unless the modem is configured strangely).
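As a minimal sketch of that reader thread (readCharUART() and a hypothetical writeStringUART() stand in for your serial I/O; the line framing follows the V.250 "<CR><LF>...<CR><LF>" format described above):
#include <string.h>

/* Serial helpers assumed to exist elsewhere (hypothetical). */
extern int  readCharUART(void);
extern void writeStringUART(const char *s);

void modemReaderLoop(void)
{
    char line[128];
    size_t pos = 0;

    for (;;) {
        int c = readCharUART();
        if (c == '\r')
            continue;                    /* ignore CR; terminate lines on LF      */
        if (c != '\n') {
            if (pos < sizeof(line) - 1)
                line[pos++] = (char)c;
            continue;
        }
        line[pos] = '\0';                /* got a complete line                   */
        pos = 0;
        if (line[0] == '\0')
            continue;                    /* empty line between header and trailer */
        if (strcmp(line, "RING") == 0)
            writeStringUART("ATA\r");    /* answer the incoming call              */
        /* handle OK, ERROR and other responses/unsolicited codes here */
    }
}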

IoGetDeviceObjectPointer() fails with no return status

This is my code:
UNICODE_STRING symbol;
WCHAR ntNameBuffer[128];
swprintf(ntNameBuffer, L"\\Device\\Harddisk1\\Partition1");
RtlInitUnicodeString(&symbol, ntNameBuffer);
KdPrint(("OSNVss:symbol is %ws\n",symbol.Buffer));
status = IoGetDeviceObjectPointer(&symbol,
FILE_READ_DATA,
&pDiskFileObject,
&pDiskDeviceObject);
My driver sits at the next-lower level of \\Device\\Harddisk1\\Partition1.
When I call IoGetDeviceObjectPointer it fails: no status is returned and the remaining code never runs.
When I debug this with WinDbg, it breaks in intelpm.sys.
If I change the object name to "\\Device\\Harddisk1\\Partition2" (Partition2 really exists), the call succeeds.
If I change the object name to "\\Device\\Harddisk1\\Partition3" (Partition3 does not exist), it fails and returns status 0xC0000034, meaning the object name does not exist.
Does anybody know why it fails with no return status when I use the object "\\Device\\Harddisk1\\Partition1"? Thanks very much!
First and foremost: what are you trying to achieve and what driver model are you using? What bitness and which OS versions are targeted, and on which OS version does it fail? Furthermore: you are at the correct IRQL for the call and are running inside a system thread, right? From which of your driver's entry points (IRP_MJ_*, DriverEntry ...) are you calling this code?
Anyway, I was re-reading the docs on this function. Note in particular the part:
The IoGetDeviceObjectPointer routine returns a pointer to the top object in the named device object's stack and a pointer to the corresponding file object, if the requested access to the objects can be granted.
and:
IoGetDeviceObjectPointer establishes a "connection" between the caller and the next-lower-level driver. A successful caller can use the returned device object pointer to initialize its own device objects. It can also be used as an argument to IoAttachDeviceToDeviceStack, IoCallDriver, and any routine that creates IRPs for lower drivers. The returned pointer is a required argument to IoCallDriver.
You don't say, but if you are doing this on a 32bit system, it may be worthwhile tracking down what's going on with IrpTracker. However, my guess is that said "connection" or rather the request for it gets somehow swallowed by the next-lower-level driver or so.
It is also hard to say what kind of driver you are writing here (and yes, this can be important).
Try not just breaking at a particular point before or after the fact, but rather follow the path the IRP travels down the target device object's stack.
But thinking about it, you probably aren't attached to the stack at all (for whatever reason). Could it be that you actually should be using IoGetDiskDeviceObject instead, in order to get the actual underlying device object (at the bottom of the stack) and not a reference to the top-level object attached?
Last but not least: don't forget you can also ask this question over on the OSR mailing lists. There are plenty of seasoned professionals there who may have run into the exact same problem (assuming you are doing all of the things correct that I asked about).
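For reference, the usual call pattern for this routine looks roughly like the sketch below (illustrative only; the part most often missed is the ObDereferenceObject() on the returned file object, which releases the reference the call took):
UNICODE_STRING name;
PFILE_OBJECT fileObject = NULL;
PDEVICE_OBJECT deviceObject = NULL;
NTSTATUS status;

RtlInitUnicodeString(&name, L"\\Device\\Harddisk1\\Partition1");
status = IoGetDeviceObjectPointer(&name, FILE_READ_DATA,
                                  &fileObject, &deviceObject);
if (!NT_SUCCESS(status)) {
    KdPrint(("IoGetDeviceObjectPointer failed: 0x%08X\n", status));
    return status;
}

/* ... build IRPs for deviceObject, IoCallDriver(), etc. ... */

ObDereferenceObject(fileObject);   /* releases the reference taken by the call */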
Thanks everyone, I solved this problem. What caused it is that the call becomes synchronous: when I
call IoGetDeviceObjectPointer, it generates a new IRP (IRP_MJ_WRITE) which is passed down from the upper level. When this IRP reaches my driver, the thread handling the IRP is the same thread that called IoGetDeviceObjectPointer, so it deadlocks.

Is it possible to use midiOutLongMsg to play a chord? (Win32 API)

This guy says yes:
http://web.tiscalinet.it/giordy/midi-tech/lowmidi.htm
Same with a really old book from 1998 (Maximum MIDI).
MSDN doesn't mention it.
I'm not getting any sound.
I fill a char buffer with status|note|velocity|status|note|velocity...
Set lpData, dwBufferLength, and dwFlags of a MIDIHDR struct
call midiOutPrepareHeader (MMSYSERR_NOERROR)
call midiOutLongMsg (MMSYSERR_NOERROR)
Still no sound! Spamming midiOutShortMsg works, but will that work on slower machines? Did they change the functionality?
Thanks.
I'm an idiot! I figured it out: Microsoft GS Wavetable Synth does NOT support sending multiple short messages in midiOutLongMsg. The MIDI Mapper DOES!
midiOutShortMsg should be plenty fast, even on slow machines. MIDI interfaces themselves (hardware that is, but some software will limit themselves) run at 31,250 baud. This of course is ignoring any slow code you may have wrapped around where you call midiOutShortMsg.
Anyway, technically you should also be able to get away with one status byte, if the following notes use the same status byte. So, if you want to do note on/off (using velocity 0 for off) and those notes are on the same channel, you could do this:
status|note|velocity|note|velocity|note|velocity|note|velocity
This is called running status.
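A sketch of what that buffer and the midiOutLongMsg() call could look like (illustrative; assumes hmo was opened with midiOutOpen() on a device that accepts multiple channel messages per buffer, e.g. the MIDI Mapper as noted above):
unsigned char chord[] = {
    0x90,        /* status: note on, channel 1              */
    60, 100,     /* C4, velocity 100                        */
    64, 100,     /* E4 (running status, no new status byte) */
    67, 100      /* G4                                      */
};

MIDIHDR hdr = {0};
hdr.lpData = (LPSTR)chord;
hdr.dwBufferLength = sizeof(chord);
hdr.dwBytesRecorded = sizeof(chord);

midiOutPrepareHeader(hmo, &hdr, sizeof(hdr));
midiOutLongMsg(hmo, &hdr, sizeof(hdr));
/* wait until hdr.dwFlags has MHDR_DONE set, then: */
midiOutUnprepareHeader(hmo, &hdr, sizeof(hdr));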
