What is the address family unknown1 in winsock2.h? - Windows

In the header file winsock2.h, I found an address family called unknown1.
What does this address family represent, and what is it used for?
Here is the source code of the header file winsock2.h, and here is the line that contains the constant for that address family:
#define AF_UNKNOWN1 20

Your copy of winsock2.h is strange; perhaps you left off the comment on purpose. I keep old versions of SDKs around; they are an interesting archeological record of Windows development. I can trace this one back to the WinNT version 4 SDK, released in 1996 and the first SDK version that supported Winsock v2. It extended the address families first supported in NT 3.1 and Winsock v1.1, copy-pasting all of the added ones:
#define AF_VOICEVIEW 18 /* VoiceView */
#define AF_FIREFOX 19 /* Protocols from Firefox */
#define AF_UNKNOWN1 20 /* Somebody is using this! */
#define AF_BAN 21 /* Banyan */
#define AF_ATM 22 /* Native ATM Services */
#define AF_INET6 23 /* Internetwork Version 6 */
It still looks the same today. Obviously the comment is relevant; Somebody is using this! should be read with the emphasis on Somebody. It is bracketed by products of companies that were pretty successful back in the mid 90s, big enough to have a working relationship with Microsoft and get their products verified and supported by Winsock 2 and WinNT4 (Firefox was a company, not the browser, btw).
So a somewhat plausible scenario is that a conflict was detected by a tester, without anybody having any idea how dirty his machine was, and a bug report was filed. If Microsoft didn't know back in 1996, then, well, nobody knows. Time has not been kind to these companies and their products; the dominance of TCP/IP and the dot-com bust killed just about all of them. Surely the same happened to Somebody Inc :)

That actually is pretty self-describing: it is everything else that is not defined otherwise. E.g., AF_UNKNOWN1 is an address family that is none of the other, defined address families; PF_UNKNOWN1 is the corresponding protocol family. For the 1 suffix I couldn't quickly find any pointers; my assumption is that it was introduced to avoid conflicts with possibly pre-existing _UNKNOWN definitions.

Multidrop Bus addressing

Currently I am reading the MDB interface specification (https://namanow.org/wp-content/uploads/Multi-Drop-Bus-and-Internal-Communication-Protocol.pdf), Version 4.3 (July 2019). In chapter 2.3, page 34, they talk about the peripheral address, and I don't understand how the address scheme is built. One prototype of the address scheme looks like this: 00101xxxB (this can be 28H, for example). The upper five bits are used for addressing and the lower 3 bits are the command. If I apply this statement to my example, then the address is 5 and the command is 0. I am a little bit confused; can someone please explain this to me?
OK. First, read this:
Then, we have value 0x00 as a command to the Energy Management System (huh, I never saw this kind of MDB device in the wild). The MDB datasheet doesn't contain any references to this device yet, but it seems it's just a POLL command; the device must respond to POLL with its last status changes (if any) or just ACK with 0x100 - that's not a mistake, it's 0x00 with the 9th bit set. Don't read this datasheet unless you want to lose your mind. I have already read this awesome shit and put it (mostly) into a hardware implementation; see the GitHub repo with the complete solution.
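Since the question itself is really about the bit layout: here is a minimal sketch in C of the address/command split as I read section 2.3 (the 0x28 value is the example from the spec; everything else here is my own illustration):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t byte    = 0x28;        /* 00101000B, the example from the spec */
    uint8_t address = byte >> 3;   /* upper 5 bits: 00101B = 5 */
    uint8_t command = byte & 0x07; /* lower 3 bits: 000B = 0 */

    /* The spec quotes a peripheral "address" as the full byte with the
       command bits zeroed (here 0x28); the 5-bit address field itself is 5. */
    printf("byte=0x%02X address=%u command=%u base=0x%02X\n",
           (unsigned)byte, (unsigned)address, (unsigned)command,
           (unsigned)(byte & 0xF8));
    return 0;
}
So both readings are consistent: 0x28 is the address byte on the wire, and 5 is the value of the 5-bit address field inside it.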
Cheers.

What impact does a discardable section have in a kernel driver if it is marked RWX?

I'm intrigued by the DISCARDABLE flag in the section flags in PE files, specifically in the context of Windows drivers (in this case NDIS). I noticed that the INIT section was marked as RWX in a driver I'm reviewing, which seems odd - good security practice says you should adopt a W^X policy.
The dump of the section is as follows:
Name Virtual Size Virtual Addr Raw Size Raw Addr Reloc Addr LineNums RelocCount LineNumCount Characteristics
INIT 00000B7E 0000E000 00000C00 0000B200 00000000 00000000 0000 0000 E2000020
The characteristics map to:
IMAGE_SCN_MEM_EXECUTE
IMAGE_SCN_MEM_READ
IMAGE_SCN_MEM_WRITE
IMAGE_SCN_MEM_DISCARDABLE
IMAGE_SCN_CNT_CODE
The INIT section seems to contain the driver entry, which implies that it might be used to ensure that the driver entry function resides in nonpaged memory, whereas the rest of the code is allowed to be paged. I'm not entirely sure, though. I can see no evidence in the driver code to say that the developers explicitly set the page flags, or forced the driver entry into a separate section, so it looks like the compiler did it automatically. I also manually flipped the writeable flag in the driver binary to test it out, and it works fine without writing enabled, so that implies that having it RWX is unnecessary.
So, my questions are:
What is the INIT section used for in the context of a Windows driver and why is it marked discardable?
How are discardable sections treated in the Windows kernel? I have some idea of how ReactOS handles them but that's still fuzzy and not massively helpful.
Why would the compiler move the driver entry to an INIT section?
Why would the compiler mark the section as RWX, when RX is sufficient and RWX may constitute a security issue?
References I've looked at so far:
What happens when you mark a section as DISCARDABLE? - The Old New Thing
Windows Executable Files - x86 Disassembly Book
Pageable and Discardable Code in a Protocol Driver - MSDN
EDIT, 2022: I forgot to update this, but a while after I posted this question I passed it on to Microsoft and it did turn out to be a bug in the MSVC linker. They were mistakenly marking the discard section that contained DriverEntry as RWX. The issue was fixed in VS2015.
What is the INIT section used for in the context of a Windows...
It is normally used for the DriverEntry() function.
How are discardable sections treated in the Windows kernel?
It allows the page(s) that contain the DriverEntry() function code to be discarded. They are no longer needed after the driver is initialized.
Why would the compiler move the driver entry to an INIT section?
An NDIS driver normally contains
#pragma NDIS_INIT_FUNCTION(DriverEntry)
which is a macro in the WDK's inc/ddk/ndis.h header file:
#define NDIS_INIT_FUNCTION(_F) alloc_text(INIT,_F)
#pragma alloc_text is one of the ways to move a function into a particular section. Another common way is to bracket the DriverEntry function with #pragma code_seg("INIT") and #pragma code_seg().
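As a minimal sketch of both mechanisms (generic WDK names, not code from the driver under review; the two options are alternatives, pick one):
#include <ntddk.h>

/* Option 1: #pragma alloc_text, which requires a prior declaration. */
DRIVER_INITIALIZE DriverEntry;
#pragma alloc_text(INIT, DriverEntry)

/* Option 2: bracket the definition with code_seg. */
#pragma code_seg("INIT")
NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);
    /* one-time initialization only; these pages may be discarded afterwards */
    return STATUS_SUCCESS;
}
#pragma code_seg()  /* back to the default text section */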
Why would the compiler mark the section as RWX
That requires an archeological dig. Many drivers were started a long time ago and are likely to still use ~VS6, back when life was still uncomplicated and programmers wore white hats. Or perhaps the programmer used #pragma section, yet another way to name sections, which permits setting the attributes directly. A modern toolchain certainly won't do this; you get RX from #pragma alloc_text. There is very little point in fretting about it, given that DriverEntry() lives for a very short time and any malware code that runs with ring0 privileges can do a lot more practical damage.
I passed this information on to Microsoft and it did turn out to be a bug in the MSVC linker. They were mistakenly marking the discard section that contained DriverEntry as RWX. This issue was fixed in Visual Studio 2015.
I wrote about the issue in more detail here.

Port B GPIO ep93xx/gpio.c interrupt issue

I am having trouble with a GPIO interrupt issue.
According to the documentation for the ep93xx, ports A, B, and F can be configured to generate interrupts.
quote:
Any of the 19 GPIO lines may be configured to generate interrupts
However, arch/arm/mach-ep93xx/gpio.c handles only interrupts from port A, and doesn't react to ports B and F.
static void ep93xx_gpio_ab_irq_handler(unsigned int irq, struct irq_desc *desc)
{
        unsigned char status;
        int i;

        printk(KERN_INFO "ep93xx_gpio_ab_irq_handler: irq=%u\n", irq);
        /* ... */
I know printk is terrible in IRQ handlers.
I am configuring the interrupts via sysfs.
GPIOs 0 and 8 are wired to port F, if that is relevant to the issue.
Also, when enabling interrupts on port B without having configured port A, I get the following warning:
------------[ cut here ]------------
WARNING: at drivers/gpio/gpiolib.c:103 gpio_ensure_requested+0x54/0x118()
autorequest GPIO-1
Modules linked in:
[<c002696c>] (unwind_backtrace+0x0/0xf0) from [<c00399d4>] (warn_slowpath_fmt+0x54/0x78)
[<c00399d4>] (warn_slowpath_fmt+0x54/0x78) from [<c019dd90>] (gpio_ensure_requested+0x54/0x118)
[<c019dd90>] (gpio_ensure_requested+0x54/0x118) from [<c019e05c>] (gpio_direction_input+0xb0/0x150)
[<c019e05c>] (gpio_direction_input+0xb0/0x150) from [<c002c9a8>] (ep93xx_gpio_irq_type+0x3c/0x1d8)
[<c002c9a8>] (ep93xx_gpio_irq_type+0x3c/0x1d8) from [<c0066ad8>] (__irq_set_trigger+0x38/0x9c)
[<c0066ad8>] (__irq_set_trigger+0x38/0x9c) from [<c0066e14>] (__setup_irq+0x2d8/0x354)
[<c0066e14>] (__setup_irq+0x2d8/0x354) from [<c0066f38>] (request_threaded_irq+0xa8/0x140)
[<c0066f38>] (request_threaded_irq+0xa8/0x140) from [<c019e784>] (gpio_setup_irq+0x14c/0x260)
[<c019e784>] (gpio_setup_irq+0x14c/0x260) from [<c019ec1c>] (gpio_edge_store+0x90/0xac)
[<c019ec1c>] (gpio_edge_store+0x90/0xac) from [<c01be8fc>] (dev_attr_store+0x1c/0x28)
[<c01be8fc>] (dev_attr_store+0x1c/0x28) from [<c00e8b2c>] (sysfs_write_file+0x168/0x19c)
[<c00e8b2c>] (sysfs_write_file+0x168/0x19c) from [<c009a3d4>] (vfs_write+0xa4/0x160)
[<c009a3d4>] (vfs_write+0xa4/0x160) from [<c009a6a4>] (sys_write+0x3c/0x7c)
[<c009a6a4>] (sys_write+0x3c/0x7c) from [<c0020e40>] (ret_fast_syscall+0x0/0x2c)
---[ end trace ff56c09a294dbe68 ]---
I am using kernel version 2.6.34.14 with the linux-2.6.34-ts7200_matt-6.tar.gz patch (however, it doesn't seem to contain patches for gpio.c or gpiolib.c).
Cross-toolchain versions:
binutils-2.23.1
gcc-4.7.3
glibc-2.17
Also, I crawled through the change history of gpio.c and gpiolib.c and didn't find anything that could be related to this issue.
Can someone give me advice regarding this issue? I want interrupts on all ports (A, B, F), not just A.
There are a lot of questions on this issue (search for ARM irq OR interrupt). Please look at them.
We can see many changes by looking at the more recent Linux 3.0 gpio.c change logs versus the 2.6.34 logs and the current version. You should be able to get the current Linux stable tree, extract these patches, and backport them to your kernel. For instance, there is a bug where ports C and F are swapped; I don't know if this is present in your ts7200_matt variant.
Some important change sets to look at:
1. arm: Fold irq_set_chip/irq_set_handler
2. arm: Cleanup the irq namespace
3. arm: ep93xx: Use proper irq accessor functions
4. arm: ep93xx: Add basic interrupt info
5. ARM: ep93xx: irq_data conversion.
6. ARM: 5954/1: ep93xx: move gpio interrupt support to gpio.c
7. [ARM] 5243/1: ep93xx: bugfix, GPIO ports C and F are swapped
You may already have #6, but it is worth looking at, as it is basically the interrupt implementation for your controller. After about linux-3.0, your SoC's GPIO controller was moved to drivers/gpio/gpio-ep93xx.c. You may wish to look at those changes too, but none seem to be related to your issue. You should also be aware of structural changes to Linux, i.e., overall changes to interrupt handling and/or the generic GPIO infrastructure. A good guess is that Thomas Gleixner or Russell King made these changes.
The patches can be extracted from a particular Linux stable tree with git format-patch b685004..b0ec5cf1 -- gpio.c. This will create several patch files. Move them to your tree and apply them with either git am or patch -p1. You may have to massage these files to get them to apply cleanly to your tree; if you take them all, even the ones not related to interrupt handling, you will have better luck doing this automatically. You can also look at the patch set and try to manually patch the file with a text editor.
None of this addresses your specific questions. However, it gives you a path to merge changes from the latest Linux versions. Also, the previous Stack Overflow questions give details on the structure of the GPIO interrupt handling. Coupled with your data sheet, the Linux GPIO documentation, and the given change sets, you should be able to fix your own problem. Otherwise, you need someone familiar with the EP93xx, and the question is fairly localized.
Note: The stack trace indicates that a GPIO is being used without a corresponding gpio_request(). This is either a bug in the machine file or in the EP93xx GPIO interrupt handling code.
I had the same warning:
------------[ cut here ]------------
WARNING: at drivers/gpio/gpiolib.c:103 gpio_ensure_requested
From my research, we have to call gpio_request_one / gpio_request before gpio_direction_input.
It fixed the problem for me.
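A minimal sketch of that ordering, using the legacy integer-GPIO API of 2.6-era kernels (the GPIO number and label are placeholders; GPIO 1 just matches the autorequest GPIO-1 line in the warning above):
#include <linux/gpio.h>

static int my_gpio_irq_setup(void)
{
        int ret;

        /* Request the line first, so gpiolib does not have to autorequest it */
        ret = gpio_request(1, "my-irq-pin");
        if (ret)
                return ret;

        /* ...and only then configure its direction */
        ret = gpio_direction_input(1);
        if (ret)
                gpio_free(1);
        return ret;
}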
http://www.avrfreaks.net/index.php?name=PNphpBB2&file=viewtopic&t=99789
http://e2e.ti.com/support/embedded/linux/f/354/p/119946/427889.aspx

Same C code producing different results on Mac OS X than Windows and Linux

I'm working with an older version of OpenSSL, and I'm running into some behavior that has stumped me for days when trying to work with cross-platform code.
I have code that calls OpenSSL to sign something. My code is modeled after the code in ASN1_sign, found in a_sign.c in OpenSSL, which exhibits the same issues when I use it. Here is the relevant line of code (which is found and used in exactly the same way in a_sign.c):
EVP_SignUpdate(&ctx,(unsigned char *)buf_in,inl);
ctx is a structure that OpenSSL uses, not relevant to this discussion
buf_in is a char* of the data that is to be signed
inl is the length of buf_in
EVP_SignUpdate can be called repeatedly in order to read in data to be signed before EVP_SignFinal is called to sign it.
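For context, here is a minimal sketch of that flow in the old 0.9.x/1.0.x-style EVP API (error checks omitted; pkey is assumed to be an already-loaded EVP_PKEY, and sig must be at least EVP_PKEY_size(pkey) bytes):
#include <openssl/evp.h>

unsigned int sign_buffer(EVP_PKEY *pkey, const char *buf_in, unsigned int inl,
                         unsigned char *sig)
{
    EVP_MD_CTX ctx;
    unsigned int siglen = 0;

    EVP_MD_CTX_init(&ctx);
    EVP_SignInit(&ctx, EVP_sha1());
    EVP_SignUpdate(&ctx, buf_in, inl);       /* may be called repeatedly */
    EVP_SignFinal(&ctx, sig, &siglen, pkey); /* computes the final signature */
    EVP_MD_CTX_cleanup(&ctx);
    return siglen;
}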
Everything works fine when this code is used on Ubuntu and Windows 7, both of them produce the exact same signatures given the same inputs.
On OS X, if the size of inl is 64 or less (that is, there are 64 or fewer bytes in buf_in), then it too produces the same signatures as Ubuntu and Windows. However, if the size of inl becomes greater than 64, it produces its own internally consistent signatures that differ from the other platforms. By internally consistent, I mean that the Mac will read the signatures and verify them as proper, while it will reject the signatures from Ubuntu and Windows, and vice versa.
I managed to fix this issue and cause the same signatures to be created by changing the line above to the following, which reads the buffer one byte at a time:
unsigned char *p;
for (p = (unsigned char *)buf_in; p < (unsigned char *)buf_in + inl; p++) {
    EVP_SignUpdate(&ctx, p, 1);
}
This causes OS X to reject its own signatures of data > 64 bytes as invalid, and I tracked down a similar line elsewhere for verifying signatures that needed to be broken up in an identical manner.
This fixes the signature creation and verification, but something is still going wrong, as I'm encountering other problems, and I really don't want to go traipsing (and modifying!) much deeper into OpenSSL.
Surely I'm doing something wrong, as I'm seeing the exact same issues when I use stock ASN1_sign. Is this an issue with the way that I compiled OpenSSL? For the life of me I can't figure it out. Can anyone educate me on what bone-headed mistake I must be making?
This is likely a bug in the MacOS implementation. I recommend you file a bug by sending the above text to the developers as described at http://www.openssl.org/support/faq.html#BUILD17
There are known issues with OpenSSL on the Mac (you have to jump through a few hoops to ensure it links with the correct library instead of the system library). Did you compile it yourself? The PROBLEMS file in the distribution explains the details of the issue and suggests a few workarounds. (Or if you are running with shared libraries, double check that your DYLD_LIBRARY_PATH is correctly set.) No guarantee, but this looks like a likely place to start...
The most common issue when porting code between Windows and Linux is the default contents of memory. I think Windows sets it to 0xDEADBEEF and Linux sets it to 0s.

CreateThread() fails on 64 bit Windows, works on 32 bit Windows. Why?

Operating System: Windows XP 64 bit, SP2.
I have an unusual problem. I am porting some code from 32 bit to 64 bit. The 32 bit code works just fine, but when I call CreateThread() in the 64 bit version, the call fails. This fails in three places: two call CreateThread() directly, and one calls _beginthreadex(), which calls CreateThread().
All three calls fail with error code 0x3E6, "Invalid access to memory location".
The problem is that all the input parameters are correct.
HANDLE h;
DWORD threadID;
h = CreateThread(0,             // default security
                 0,             // default stack size
                 myThreadFunc,  // valid function to call
                 myParam,       // my param
                 0,             // no flags, start thread immediately
                 &threadID);
All three calls to CreateThread() are made from a DLL I've injected into the target program at the start of the program execution (this is before the program has got to the start of main()/WinMain()). If I call CreateThread() from the target program (same params) via, say, a menu, it works. Same parameters, etc. Bizarre.
If I pass NULL instead of &threadID, it still fails.
If I pass NULL as myParam, it still fails.
I'm not calling CreateThread from inside DllMain(), so that isn't the problem. I'm confused and searching on Google etc hasn't shown any relevant answers.
If anyone has seen this before or has any ideas, please let me know.
Thanks for reading.
ANSWER
Short answer: Stack Frames on x64 need to be 16 byte aligned.
Longer answer:
After much banging my head against the debugger wall and posting responses to the various suggestions (all of which helped in some way, prodding me to try new directions), I started exploring what-ifs about what was on the stack prior to calling CreateThread(). This proved to be a red herring, but it did lead to the solution.
Adding extra data to the stack changes the stack frame alignment. Sooner or later one of the tests gets you to 16 byte stack frame alignment. At that point the code worked. So I retraced my steps and started putting NULL data onto the stack rather than what I thought was the correct values (I had been pushing return addresses to fake up a call frame). It still worked - so the data isn't important, it must be the actual stack addresses.
I quickly realised it was 16 byte alignment for the stack. Previously I was only aware of 8 byte alignment for data. This Microsoft document explains all the alignment requirements.
If the stackframe is not 16 byte aligned on x64 the compiler may put large (8 byte or more) data on the wrong alignment boundaries when it pushes data onto the stack.
Hence the problem I faced - the hooking code was called with a stack that was not aligned on a 16 byte boundary.
Quick summary of alignment requirements, expressed as size : alignment
1 : 1
2 : 2
4 : 4
8 : 8
10 : 16
16 : 16
Anything larger than 8 bytes is aligned on the next power of 2 boundary.
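As a quick illustration of what 16-byte alignment means in practice, here are two small helpers (my own sketch, not part of the hooking code described above):
#include <stdint.h>

/* Nonzero if a saved stack-pointer value sits on a 16-byte boundary. */
static int is_aligned16(uintptr_t sp)
{
    return (sp & 0xF) == 0;
}

/* Round a stack-pointer value down to the nearest 16-byte boundary. */
static uintptr_t align16_down(uintptr_t sp)
{
    return sp & ~(uintptr_t)0xF;
}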
I think Microsoft's error code is a bit misleading. The initial STATUS_DATATYPE_MISALIGNMENT could be expressed as a STATUS_STACK_MISALIGNMENT which would be more helpful. But then turning STATUS_DATATYPE_MISALIGNMENT into ERROR_NOACCESS - that actually disguises and misleads as to what the problem is. Very unhelpful.
Thank you to everyone that posted suggestions. Even if I disagreed with the suggestions, they prompted me to test in a wide variety of directions (including the ones I disagreed with).
I've written a more detailed description of the problem of datatype misalignment here: 64 bit porting gotcha #1! x64 Datatype misalignment.
The only reason that 64bit would make a difference is that threading on 64bit requires 64bit aligned values. If threadID isn't 64bit aligned, you could cause this problem.
Ok, that idea's not it. Are you sure it's valid to call CreateThread before main/WinMain? It would explain why it works in a menu- because that's after main/WinMain.
In addition, I'd triple-check the lifetime of myParam. CreateThread returns (this I know from experience) long before the function you pass in is called.
Post the thread routine's code (or just a few lines).
It suddenly occurs to me: Are you sure that you're injecting your 64bit code into a 64bit process? Because if you had a 64bit CreateThread call and tried to inject that into a 32bit process running under WOW64, bad things could happen.
Starting to seriously run out of ideas. Does the compiler report any warnings?
Could the problem be due to a bug in the host program, rather than the DLL? There's some other code that runs before main/WinMain, such as loading a DLL if you used __declspec(dllimport/dllexport). If that DLL's DllMain, for example, had a bug in it.
I ran into this issue today, and I checked every argument fed into _beginthread/CreateThread/NtCreateThread via rohitab's Windows API Monitor v2. Every argument is aligned properly (AFAIK).
So, where does STATUS_DATATYPE_MISALIGNMENT come from?
The first few lines of NtCreateThread validate parameters passed from user mode.
ProbeForReadSmallStructure (ThreadContext, sizeof (CONTEXT), CONTEXT_ALIGN);
for i386
#define CONTEXT_ALIGN (sizeof(ULONG))
for amd64
#define STACK_ALIGN (16UI64)
...
#define CONTEXT_ALIGN STACK_ALIGN
On amd64, if the ThreadContext pointer is not aligned to 16 bytes, NtCreateThread will return STATUS_DATATYPE_MISALIGNMENT.
CreateThread (actually CreateRemoteThread) allocates ThreadContext on the stack, and does nothing special to guarantee the alignment requirement is satisfied. Things will work smoothly if every piece of your code follows the Microsoft x64 calling convention, which is unfortunately not true for me.
PS: The same code may work on newer Windows (say Vista and newer). I didn't check though. I'm facing this issue on Windows Server 2003 R2 x64.
I'm in the business of using parallel threads under Windows for calculations. No funny business, no DLL calls, and certainly no callbacks. The following works in 32-bit Windows: I set up the stack for my calculation, well within the area reserved for my program. All relevant data about areas and start addresses is contained in a data structure that is passed to CreateThread as parameter 3. The address that is called contains a small assembler routine that uses this data structure. Indeed, this routine finds the address to return to on the stack, then the address of the data structure.
There is no reason to go far into this. It just works, and it calculates the number of primes below 2,000,000,000 just fine, in one thread, in two threads, or in 20 threads.
Now CreateThread in 64 bits doesn't push the address of the data structure. That seems implausible, so I'll show you the smoking gun: a dump of a debug session. In the subwindow at the bottom right you see the stack, and there is merely the return address, amidst a sea of zeroes.
The mechanism I use to fill in parameters is portable between 32 and 64 bits. No other call exhibits a difference between word sizes. Moreover, why would the code address work but not the data address?
The bottom line: one would expect that CreateThread passes the data parameter on the stack in the same way in 64 bits as in 32 bits, then does a subroutine call. At the assembler level it doesn't work that way. If there are any hidden requirements on, e.g., RSP that are automatically fulfilled in C++, that would be very nasty.
P.S. No, there are no 16-byte alignment problems. That lies ages behind me.
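A note on the dump above: as far as I can tell, this is the Microsoft x64 calling convention at work rather than a missing parameter. The first integer argument (the thread parameter) arrives in RCX, not on the stack, so at the assembler level only the return address gets pushed. A minimal sketch of what the thread routine sees (myThreadFunc is the hypothetical name used earlier in this thread):
#include <windows.h>

/* On 32-bit x86 (stdcall), 'param' is on the stack at [esp+4] on entry.
   On x64, 'param' arrives in RCX; the stack holds only the return address,
   which matches the sea of zeroes in the dump described above. */
DWORD WINAPI myThreadFunc(LPVOID param)
{
    (void)param;  /* an assembler routine must read RCX, not the stack, on x64 */
    return 0;
}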
Try using _beginthread() or _beginthreadex() instead; you shouldn't be using CreateThread directly.
See this previous question.
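For reference, a minimal sketch of the _beginthreadex() variant (myThreadFunc and myParam are the hypothetical names from the question; _beginthreadex keeps the CRT's per-thread state intact):
#include <windows.h>
#include <process.h>

static unsigned __stdcall myThreadFunc(void *param)
{
    (void)param;
    /* ... thread work ... */
    return 0;
}

static int start_worker(void *myParam)
{
    unsigned threadID;
    HANDLE h = (HANDLE)_beginthreadex(NULL,         /* default security */
                                      0,            /* default stack size */
                                      myThreadFunc, /* CRT-safe thread routine */
                                      myParam,      /* my param */
                                      0,            /* start immediately */
                                      &threadID);
    if (h == NULL)
        return -1;
    CloseHandle(h);  /* or keep the handle to wait on the thread */
    return 0;
}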
