SetNamedSecurityInfo is defined as taking an LPTSTR, not an LPCTSTR. Normally, a Win32 API that takes an LPTSTR also has some way of indicating the necessary buffer length: sometimes it's explicit in the signature, sometimes it's documented as MAX_PATH or similar. Not so for SetNamedSecurityInfo.
To be honest, I have no idea why SetNamedSecurityInfo would want to write to that buffer, but perhaps it tries to canonicalize a path in-place. But then I might need to support 32768 characters?
As you can see in the documentation for SetNamedSecurityInfo:
pObjectName
A pointer to a null-terminated string that specifies the name of the
object for which to set security information.
That means the function only needs the null-terminated string itself: the relevant length is the string length, implied by the terminator, and no separate buffer size is passed to the function.
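In practice, if you hold the path in a const string, you can simply copy it into a writable buffer to satisfy the non-const signature. A minimal sketch (pNewDacl is a hypothetical DACL you would build elsewhere, e.g. with SetEntriesInAcl):
#include <windows.h>
#include <aclapi.h>
#include <string>
#include <vector>

std::wstring path = L"C:\\some\\file.txt";
std::vector<wchar_t> writablePath(path.begin(), path.end());
writablePath.push_back(L'\0');                 // null terminator: the only "length" the API needs

PACL pNewDacl = nullptr;                       // placeholder only; a real DACL would come from SetEntriesInAcl
DWORD err = SetNamedSecurityInfoW(
    writablePath.data(),                       // LPWSTR: non-const, but only the string itself is read
    SE_FILE_OBJECT,
    DACL_SECURITY_INFORMATION,
    nullptr, nullptr,                          // owner and group left unchanged
    pNewDacl,                                  // the DACL to apply
    nullptr);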
I do not understand why the following code, which sets the position of an open file handle relative to the beginning (i.e. sets the absolute position), succeeds when setting a positive position on an empty file that is open for reading only:
LARGE_INTEGER offset;
offset.QuadPart = 100;
LARGE_INTEGER pos = {0};
return ::SetFilePointerEx(_h, offset, &pos, FILE_BEGIN) != 0;
It returns a non-zero result, and the pos variable receives the value 100. That behavior is counter-intuitive for a GENERIC_READ file of size zero; what is the logic? I understand that this is normal behavior for files with write access.
P.S. The file is not overlapped and is otherwise as plain as it can be, with no fancy flags.
Does SetFilePointerEx ever fail at all for valid handles, positive absolute positions and plain files?
SetFilePointerEx internally calls ZwSetInformationFile with the FilePositionInformation class; a FILE_POSITION_INFORMATION structure is used as input.
The only documented restriction on this value is:
If the file was opened or created with the
FILE_NO_INTERMEDIATE_BUFFERING option, the value of CurrentByteOffset
must be an integral multiple of the sector size of the underlying
device.
Also, CurrentByteOffset.QuadPart must always be >= 0, so the position must not be negative.
There are no other restrictions on the position value. You can set it to anything, regardless of the file size. The call never even reaches the file system; it is handled entirely by the I/O manager.
All it does is set CurrentByteOffset in the FILE_OBJECT.
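So the scenario from the question behaves as sketched below (assuming h is a synchronous handle to an empty file opened with GENERIC_READ): the seek past EOF succeeds, and end of file is only reported by the subsequent read.
LARGE_INTEGER offset, pos;
offset.QuadPart = 100;                              // well past EOF of the empty file
BOOL ok = SetFilePointerEx(h, offset, &pos, FILE_BEGIN);
// ok != 0 and pos.QuadPart == 100: the I/O manager just stores the offset

BYTE buf[16];
DWORD read = 0;
ok = ReadFile(h, buf, sizeof(buf), &read, NULL);
// for a synchronous handle: ok != 0 and read == 0, i.e. EOF is reported here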
How is this offset used? When we call ZwReadFile or ZwWriteFile, there is an optional ByteOffset parameter:
Pointer to a variable that specifies the starting byte offset in the
file where the read operation will begin. If an attempt is made to
read beyond the end of the file, ZwReadFile returns an error.
If the call to ZwCreateFile set either of the CreateOptions flags
FILE_SYNCHRONOUS_IO_ALERT or FILE_SYNCHRONOUS_IO_NONALERT, the I/O
Manager maintains the current file position. If so, the caller of
ZwReadFile can specify that the current file position offset be used
instead of an explicit ByteOffset value. This specification can be
made by using one of the following methods:
Specify a pointer to a LARGE_INTEGER value with the HighPart member
set to -1 and the LowPart member set to the system-defined value
FILE_USE_FILE_POINTER_POSITION.
Pass a NULL pointer for ByteOffset.
ZwReadFile updates the current file position by adding the number of
bytes read when it completes the read operation, if it is using the
current file position maintained by the I/O Manager.
Even when the I/O Manager is maintaining the current file position,
the caller can reset this position by passing an explicit ByteOffset
value to ZwReadFile. Doing this automatically changes the current file
position to that ByteOffset value, performs the read operation, and
then updates the position according to the number of bytes actually
read. This technique gives the caller atomic seek-and-read service.
So we can either pass an explicit ByteOffset value, or use a separate API call to set it in the FILE_OBJECT first; the I/O manager then takes it from there when no explicit ByteOffset pointer is given.
Note that for asynchronous I/O we must always pass an explicit ByteOffset value, or the call simply fails (pipes and mailslot files are the exception).
In the case of ReadFile and WriteFile, ByteOffset is taken from the OVERLAPPED parameter. If that pointer is NULL, ByteOffset is passed as a null pointer and CurrentByteOffset from the FILE_OBJECT is used. If the OVERLAPPED pointer is not NULL, the offset stored in the OVERLAPPED structure is passed as an explicit ByteOffset value and CurrentByteOffset in the FILE_OBJECT is ignored.
It is also always fine to pass a pointer to an OVERLAPPED structure, not only for asynchronous file handles: for asynchronous handles it is mandatory, for synchronous handles it is optional.
It is really faster and better to pass the offset directly to the read/write call than to use a separate API call, which takes time, can (at least in theory) fail, and so on.
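For example, a sketch of passing the offset directly via OVERLAPPED on an ordinary synchronous handle h (the offset value here is hypothetical):
ULONGLONG offset = 0x12345;                        // hypothetical absolute position
BYTE buf[4096];
DWORD read = 0;
OVERLAPPED ov = {0};
ov.Offset     = (DWORD)(offset & 0xFFFFFFFFull);   // low 32 bits of the position
ov.OffsetHigh = (DWORD)(offset >> 32);             // high 32 bits
BOOL ok = ReadFile(h, buf, sizeof(buf), &read, &ov);
// one call: atomic seek-and-read; CurrentByteOffset in the FILE_OBJECT is ignored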
Using SetFilePointer may only make sense in legacy code, where it is called from a huge number of places and you do not want to modify too much code.
The Win32 API ReadFileScatter() takes an array of pointers to page-sized buffers to read a file.
Microsoft's documentation explicitly requires an extra entry in the array for a terminating NULL.
What is this for? There is an nNumberOfBytesToRead parameter to specify the size to be read.
I am debugging an application that reads a single page (nNumberOfBytesToRead == 0x1000). The array elements point to valid memory, but the array of pointers is not NULL-terminated. Can this cause memory corruption, or something like that?
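For reference, a minimal sketch of the documented calling convention with the extra terminating entry (assuming h was opened with FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED, as ReadFileScatter requires):
SYSTEM_INFO si;
GetSystemInfo(&si);
void* page = VirtualAlloc(NULL, si.dwPageSize, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

FILE_SEGMENT_ELEMENT seg[2] = {};     // one page plus the terminating NULL element
seg[0].Buffer = PtrToPtr64(page);
seg[1].Buffer = NULL;                 // the extra entry the documentation asks for

OVERLAPPED ov = {0};                  // read from offset 0; the call is always overlapped
BOOL ok = ReadFileScatter(h, seg, si.dwPageSize, NULL, &ov);
// ok may be FALSE with GetLastError() == ERROR_IO_PENDING; wait for completion in real code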
I was trying to implement a unified input interface using the Windows API function ReadFile for my application, which should be able to handle both console input and redirection. It didn't work as expected with console input containing multibyte (like CJK) characters.
According to Microsoft Documentation, for console input handles ReadFile just behaves like ReadConsoleA. (FYI, results are encoded in the console's current code page, so the A family of console functions is acceptable. And there is no ReadFileW, as ReadFile works on bytes.) The third and fourth arguments of ReadFile are nNumberOfBytesToRead and lpNumberOfBytesRead respectively, but they are nNumberOfCharsToRead and lpNumberOfCharsRead in ReadConsole. To find out the exact mechanism, I did the following test:
BYTE buf[8];
DWORD len;
BOOL f = ReadFile(in, buf, 4, &len, NULL);
if (f) {
// Print buf, len
ReadConsoleW(in, buf, 4, &len, NULL); // check count of remaining characters
// Print len
}
For input like 字, len is set to 4 first (character plus CRLF), indicating the arguments are counting bytes.
For 文字 or a字, len stays 4 and only the first 4 bytes of buf are filled at first, but the second read does not get the CRLF. Only when more than 3 characters are entered does the second read get the unread LF, then CR. This means that ReadFile is actually consuming up to 4 logical characters and discarding the part of the input after the first 4 bytes.
The behavior of ReadConsoleA is identical to ReadFile.
Obviously, this is more likely to be a bug than design. I did some searches and found a related feedback dating back to 2009. It seems that ReadConsoleA and ReadFile did read data fully from console input, but as it was inconsistent with ReadFile specifications and could cause severe buffer overflow that threatened system processes, Microsoft did a makeshift repair, by simply discarding excess bytes, ignoring support for multibyte charsets. (This is an issue about the behavior after that, limiting buffer to 1 byte.)
Currently the only practical solution I have come up with to make input correct is to check whether the input handle is a console and, if so, process it differently using ReadConsoleW, which adds complexity to the implementation. Are there other ways to get it correct?
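For concreteness, the branching I have in mind looks roughly like this, using GetConsoleMode succeeding as the console check (buffer sizes are arbitrary, and converting the UTF-16 result back to bytes is left out):
DWORD mode;
BOOL isConsole = GetConsoleMode(in, &mode);       // succeeds only for console handles

DWORD len = 0;
if (isConsole) {
    wchar_t wbuf[256];
    ReadConsoleW(in, wbuf, 256, &len, NULL);      // len counts wide characters
    // convert wbuf with WideCharToMultiByte if byte-oriented output is needed
} else {
    BYTE buf[512];
    ReadFile(in, buf, sizeof(buf), &len, NULL);   // len counts bytes
}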
Maybe I could still keep ReadFile by providing a buffer large enough to hold any input at one time. However, I don't have any idea how to check or set the input buffer size. (I can only enter 256 characters (254 plus CRLF) in my application on my computer, but cmd.exe allows entering 8,192 characters, so this is really a problem.) It would also be helpful if more information about this could be provided.
P.S.: Maybe _getws could also help, but this question is about the Windows API, and my application needs to use some low-level console functions.
The documentation of OSData says that "...You can add bytes to them and overwrite portions of the byte array." I can see a method to append bytes, but I don't understand how I am able to overwrite a portion of the buffer.
Another option would be to use IONewZero to allocate a number of elements of the type I need. In my case I just need this for ints.
Example:
T* dataBuffer = IONewZero(T, SIZE);
And then deallocate with:
IOSafeDeleteNULL(dataBuffer, T, SIZE);
What are the advantages of using an OSData object compared to the solution with IONewZero / IOSafeDeleteNULL?
I think the documentation might just be copy-pasted from the kernel variant of OSData. I've seen that in a bunch of places, especially USBDriverKit.
OSData is mostly useful for dealing with plist-like data structures (i.e. setting and getting properties on service objects) in conjunction with the other OSTypes: OSArray, OSDictionary, OSNumber, etc. It's also used for in-band (<= 4096 byte) "struct" arguments of user client external methods.
The only use I can see outside of those scenarios is when you absolutely have to reference-count a blob of data. But it's certainly not a particularly convenient or efficient container for data-in-progress. If you subsequently need to send the data to a device or map it to user space, IOBufferMemoryDescriptor is probably a better choice (and also reference counted) though it's even more heavyweight.
I'm using libwebsockets v2.4.
The doc seems unclear to me about what I have to do with the returned value of the lws_write() function.
If it returns -1, it's an error and I'm invited to close the connection. That's fine for me.
But when it returns a value that is strictly less than the buffer length I pass, should I assume that I have to write the remaining bytes later (in another WRITABLE callback occurrence)? Is it even possible for this situation to occur?
Also, should I call lws_send_pipe_choked() before using lws_write(), considering that I always use lws_write() in the context of a WRITABLE callback?
My understanding is that lws_write always returns the requested buffer length unless an error occurs.
If you look at lws_issue_raw() (whose result is returned by lws_write()) in output.c (https://github.com/warmcat/libwebsockets/blob/v2.4.0/lib/output.c#L157), you can see that if the length written by lws_ssl_capable_write() is less than the provided length, then lws allocates a buffer on wsi->trunc_alloc to hold the remaining bytes, so that they can be sent later.
Concerning your second question, I think it is safe to call lws_write() in the context of a WRITABLE callback without checking whether the pipe is choked. However, if you loop on lws_write() within the callback, lws_send_pipe_choked() must be called to protect the subsequent calls to lws_write(). If you don't, you might hit this assertion https://github.com/warmcat/libwebsockets/blob/v2.4.0/lib/output.c#L83 and the user code will crash.
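A sketch of that pattern inside the WRITABLE callback (the queue helpers have_pending_message()/next_message() and the session pointer are hypothetical user code; only the lws calls are real):
/* inside the LWS_CALLBACK_SERVER_WRITEABLE handler */
while (have_pending_message(session)) {
    unsigned char buf[LWS_PRE + 512];
    size_t len = next_message(session, buf + LWS_PRE, sizeof(buf) - LWS_PRE);

    if (lws_write(wsi, buf + LWS_PRE, len, LWS_WRITE_TEXT) < 0)
        return -1;                        /* error: close the connection */

    if (lws_send_pipe_choked(wsi)) {      /* output buffer is full: stop for now */
        lws_callback_on_writable(wsi);    /* ask for another WRITABLE callback */
        break;
    }
}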