Arguments passed to a function in IDA Pro

When I analyzed a binary with IDA, I saw the following function:
Function::Function(void *, unsigned int, void *, unsigned int)
So, as you can see, IDA displays 4 arguments. But below that, in the summary view, IDA shows 5 arguments. Below is IDA's summary view, where the arguments and local variables are usually shown (in this case there are no local variables):
arg_0 = dword ptr 8
arg_4 = dword ptr 0Ch
arg_8 = dword ptr 10h
arg_C = dword ptr 14h
arg_10 = dword ptr 18h
So I am asking: why does that happen? Is it a mistake in IDA? Or is arg_10 a global variable rather than an argument passed to that function?
My assumption is that IDA cannot resolve the type of the 5th argument, so it leaves it out of the function declaration.

When calling methods of an object, a pointer to the object is implicitly passed as a parameter to the function. (This is what the this keyword represents)
It is very likely that arg_0 is the this pointer.

Efficient type punning without undefined behavior

Say I'm working on a library called libModern. This library uses a legacy C library, called libLegacy, as an implementation strategy. libLegacy's interface looks like this:
typedef uint32_t LegacyFlags;
struct LegacyFoo {
uint32_t x;
uint32_t y;
LegacyFlags flags;
// more data
};
struct LegacyBar {
LegacyFoo foo;
float a;
// more data
};
void legacy_input(LegacyBar const* s); // Does something with s
void legacy_output(LegacyBar* s); // Stores data in s
libModern shouldn't expose libLegacy's types to its users for various reasons, among them:
libLegacy is an implementation detail that shouldn't be leaked. Future versions of libModern might choose to use another library instead of libLegacy.
libLegacy uses hard-to-use, easy-to-misuse types that shouldn't be part of any user-facing API.
The textbook way to deal with this situation is the pimpl idiom: libModern would provide a wrapper type that internally has a pointer to the legacy data. However, this is not possible here, since libModern cannot allocate dynamic memory. Generally, its goal is not to add a lot of overhead.
Therefore, libModern defines its own types that are layout-compatible with the legacy types, yet have a better interface. In this example it is using a strong enum instead of a plain uint32_t for flags:
enum class ModernFlags : std::uint32_t
{
first_flag = 0,
second_flag = 1,
};
struct ModernFoo {
std::uint32_t x;
std::uint32_t y;
ModernFlags flags;
// More data
};
struct ModernBar {
ModernFoo foo;
float a;
// more data
};
Now the question is: How can libModern convert between the legacy and the modern types without much overhead? I know of 3 options:
reinterpret_cast. This is undefined behavior, but in practice produces perfect assembly. I want to avoid it, since I cannot rely on it still working tomorrow or with another compiler.
std::memcpy. In simple cases this generates the same optimal assembly, but in any non-trivial case this adds significant overhead.
C++20's std::bit_cast. In my tests, at best it produces exactly the same code as memcpy. In some cases it's worse.
This is a comparison of the 3 ways to interface with libLegacy:
Interfacing with legacy_input()
Using reinterpret_cast:
void input_ub(ModernBar const& s) noexcept {
legacy_input(reinterpret_cast<LegacyBar const*>(&s));
}
Assembly:
input_ub(ModernBar const&):
jmp legacy_input
This is perfect codegen, but it invokes UB.
Using memcpy:
void input_memcpy(ModernBar const& s) noexcept {
LegacyBar ls;
std::memcpy(&ls, &s, sizeof(ls));
legacy_input(&ls);
}
Assembly:
input_memcpy(ModernBar const&):
sub rsp, 24
movdqu xmm0, XMMWORD PTR [rdi]
mov rdi, rsp
movaps XMMWORD PTR [rsp], xmm0
call legacy_input
add rsp, 24
ret
Significantly worse.
Using bit_cast:
void input_bit_cast(ModernBar const& s) noexcept {
LegacyBar ls = std::bit_cast<LegacyBar>(s);
legacy_input(&ls);
}
Assembly:
input_bit_cast(ModernBar const&):
sub rsp, 40
movdqu xmm0, XMMWORD PTR [rdi]
mov rdi, rsp
movaps XMMWORD PTR [rsp+16], xmm0
mov rax, QWORD PTR [rsp+16]
mov QWORD PTR [rsp], rax
mov rax, QWORD PTR [rsp+24]
mov QWORD PTR [rsp+8], rax
call legacy_input
add rsp, 40
ret
And I have no idea what's going on here.
Interfacing with legacy_output()
Using reinterpret_cast:
auto output_ub() noexcept -> ModernBar {
ModernBar s;
legacy_output(reinterpret_cast<LegacyBar*>(&s));
return s;
}
Assembly:
output_ub():
sub rsp, 56
lea rdi, [rsp+16]
call legacy_output
mov rax, QWORD PTR [rsp+16]
mov rdx, QWORD PTR [rsp+24]
add rsp, 56
ret
Using memcpy:
auto output_memcpy() noexcept -> ModernBar {
LegacyBar ls;
legacy_output(&ls);
ModernBar s;
std::memcpy(&s, &ls, sizeof(ls));
return s;
}
Assembly:
output_memcpy():
sub rsp, 56
lea rdi, [rsp+16]
call legacy_output
mov rax, QWORD PTR [rsp+16]
mov rdx, QWORD PTR [rsp+24]
add rsp, 56
ret
Using bit_cast:
auto output_bit_cast() noexcept -> ModernBar {
LegacyBar ls;
legacy_output(&ls);
return std::bit_cast<ModernBar>(ls);
}
Assembly:
output_bit_cast():
sub rsp, 72
lea rdi, [rsp+16]
call legacy_output
movdqa xmm0, XMMWORD PTR [rsp+16]
movaps XMMWORD PTR [rsp+48], xmm0
mov rax, QWORD PTR [rsp+48]
mov QWORD PTR [rsp+32], rax
mov rax, QWORD PTR [rsp+56]
mov QWORD PTR [rsp+40], rax
mov rax, QWORD PTR [rsp+32]
mov rdx, QWORD PTR [rsp+40]
add rsp, 72
ret
Here you can find the entire example on Compiler Explorer.
I also noted that the codegen varies significantly depending on the exact definition of the structs (i.e. the order, number & types of members). But the UB version of the code is consistently better than, or at least as good as, the other two versions.
Now my questions are:
How come the codegen varies so dramatically? It makes me wonder if I'm missing something important.
Is there something I can do to guide the compiler to generate better code without invoking UB?
Are there other standard-conformant ways that generate better code?
In your compiler explorer link, Clang produces the same code for all output cases. I don't know what problem GCC has with std::bit_cast in that situation.
For the input case, the three functions cannot produce the same code, because they have different semantics.
With input_ub, the call to legacy_input may be modifying the caller's object. This cannot be the case in the other two versions. Therefore the compiler cannot optimize away the copies, not knowing how legacy_input behaves.
If you pass by-value to the input functions, then all three versions produce the same code at least with Clang in your compiler explorer link.
To reproduce the code generated by the original input_ub you need to keep passing the address of the caller's object to legacy_input.
If legacy_input is an extern C function, then I don't think the standards specify how the object models of the two languages are supposed to interact in this call. So, for the purpose of the language-lawyer tag, I will assume that legacy_input is an ordinary C++ function.
The problem with passing &s directly is that there is generally no LegacyBar object at the same address that is pointer-interconvertible with the ModernBar object. So if legacy_input tries to access LegacyBar members through the pointer, that would be UB.
Theoretically you could create a LegacyBar object at the required address, reusing the object representation of the ModernBar object. However, since the caller presumably will expect there to still be a ModernBar object after the call, you then need to recreate a ModernBar object in the storage by the same procedure.
Unfortunately though, you are not always allowed to reuse storage in this way. For example if the passed reference refers to a const complete object, that would be UB, and there are other requirements. The problem is also whether the caller's references to the old object will refer to the new object, meaning whether the two ModernBar objects are transparently replaceable. This would also not always be the case.
So in general I don't think you can achieve the same code generation without undefined behavior if you don't put additional constraints on the references passed to the function.
Most non-MSVC compilers support an attribute called __may_alias__ that you can use:
struct ModernFoo {
std::uint32_t x;
std::uint32_t y;
ModernFlags flags;
// More data
} __attribute__((__may_alias__));
struct ModernBar {
ModernFoo foo;
float a;
// more data
} __attribute__((__may_alias__));
Of course, some optimizations can't be done when aliasing is allowed, so use it only if the resulting performance is acceptable.
Godbolt link
Programs which would ever have any reason to access storage as multiple types should be processed using -fno-strict-aliasing or equivalent on any compiler that doesn't limit type-based aliasing assumptions around places where a pointer or lvalue of one type is converted to another, even if the program uses only corner-case behaviors mandated by the Standard. Using such a compiler flag will guarantee that one won't have type-based-aliasing problems, while jumping through hoops to use only standard-mandated corner cases won't. Both clang and gcc are sometimes prone to:
1. have one phase of optimization change code whose behavior is mandated by the Standard into code whose behavior isn't mandated by the Standard but would be equivalent in the absence of further optimization, and then
2. have a later phase of optimization further transform the code in a manner that would have been allowable for the version of the code produced by #1 but not for the code as it was originally written.
If using -fno-strict-aliasing on straightforwardly-written source code yields machine code whose performance is acceptable, that's a safer approach than trying to jump through hoops to satisfy constraints that the Standard allows compilers to impose in cases where doing so would allow them to be more useful [or--for poor quality compilers--in cases where doing so would make them less useful].
You could create a union with a private member to restrict access to the legacy representation:
union UnionBar {
struct {
ModernFoo foo;
float a;
};
private:
LegacyBar legacy;
friend LegacyBar const* to_legacy_const(UnionBar const& s) noexcept;
friend LegacyBar* to_legacy(UnionBar& s) noexcept;
};
LegacyBar const* to_legacy_const(UnionBar const& s) noexcept {
return &s.legacy;
}
LegacyBar* to_legacy(UnionBar& s) noexcept {
return &s.legacy;
}
void input_union(UnionBar const& s) noexcept {
legacy_input(to_legacy_const(s));
}
auto output_union() noexcept -> UnionBar {
UnionBar s;
legacy_output(to_legacy(s));
return s;
}
The input/output functions are compiled to the same code as the reinterpret_cast-versions (using gcc/clang):
input_union(UnionBar const&):
jmp legacy_input
and
output_union():
sub rsp, 56
lea rdi, [rsp+16]
call legacy_output
mov rax, QWORD PTR [rsp+16]
mov rdx, QWORD PTR [rsp+24]
add rsp, 56
ret
Note that this uses anonymous structs and requires you to include the legacy implementation, which you mentioned you do not want. Also, I'm missing the experience to be fully confident that there's no hidden UB, so it would be great if someone else would comment on that :)

How to pass a struct by value in x86 assembly

I'm trying to call a function from the windows api in masm.
This is the signature:
BOOL WINAPI SetConsoleScreenBufferSize(
_In_ HANDLE hConsoleOutput,
_In_ COORD dwSize
);
The COORD structure dwSize is passed by value, but when I try to call it the function fails.
Looks like this:
.DATA
dwSize COORD <20, 20>
.CODE
INVOKE SetConsoleScreenBufferSize,
hConsoleOutput,
dwSize
This causes a type error and the program won't assemble. If I pass a reference to the struct, the program assembles but the function does not work. I've tried with other functions that accept structs by value, with no success.
Hans is correct: INVOKE doesn't understand how to pass a struct by value. COORD is two 16-bit values, which happens to be the size of a DWORD. In the case of COORD you can cast it to a DWORD as a parameter to INVOKE. This should work:
.DATA
dwSize COORD <20, 20>
.CODE
INVOKE SetConsoleScreenBufferSize,
hConsoleOutput,
DWORD PTR [dwSize]
Note: it is important to understand that we could only get away with this because COORD happens to be the size of a DWORD. For structures whose size can't be pushed on the stack directly, you'd have to build the structure on the stack and use the CALL instruction rather than INVOKE.
COORD is just two 16-bit numbers packed together and passed as a normal 32-bit number.
MSVC (x86) turns
COORD cord = { 0x666, 0x42 };
SetConsoleScreenBufferSize(0, cord);
into
33db xor ebx,ebx
66c745986606 mov word ptr [ebp-68h],666h ; store cord.x
66c7459a4200 mov word ptr [ebp-66h],42h ; store cord.y
ff7598 push dword ptr [ebp-68h] ; push the whole struct
53 push ebx ; push 0
ff1540104000 call dword ptr [image00400000+0x1040 (00401040)] ; SetConsoleScreenBufferSize
After push'ing but before the call the stack starts with:
00000000 00420666 ...
xor-zeroing a register and then pushing that is a missed optimization vs. push 0 of an immediate zero. Storing to the stack first is also only because the source was compiled with optimization disabled.

Why doesn't my syscall work?

I'm on Mac OS X and I'm trying to invoke the execve syscall from assembly.
Its syscall number is 59.
On Linux I would set the syscall number in eax and put the parameters in the other registers, but here I have to put the number in eax and push the parameters onto the stack from right to left.
So I need execve("/bin/sh", NULL, NULL). I read somewhere that in assembly NULL = 0, so I pass 0 as the 2nd and 3rd parameters.
global start
section .text
start:
jmp string
main:
; 59 opcode
; int execve(char *fname, char **argp, char **envp);
pop ebx ;stringa
push 0x0 ;3rd param
push 0x0 ;2nd param
push ebx ;1st param
add eax,0x3b ;execve opcode
int 0x80 ;interupt
sub eax,0x3a ; exit opcode
int 0x80
string:
call main
db '/bin/sh',0
When I try to execute it, I get:
Bad system call: 12
32-bit programs on BSD (on which OS X is based) require you to push an extra 4 bytes onto the stack if you intend to invoke int 0x80 directly. From the FreeBSD documentation:
By default, the FreeBSD kernel uses the C calling convention. Further, although the kernel is accessed using int 80h, it is assumed the program will call a function that issues int 80h, rather than issuing int 80h directly.
[snip]
But assembly language programmers like to shave off cycles. The above example requires a call/ret combination. We can eliminate it by pushing an extra dword:
open:
push dword mode
push dword flags
push dword path
mov eax, 5
push eax ; Or any other dword
int 80h
add esp, byte 16
When calling int 0x80 you need to adjust the stack pointer by 4; pushing any value will achieve this. In the example they just do a push eax. So push an extra 4 bytes onto the stack before each of your int 0x80 calls.
Your other problem is that add eax,0x3b, for example, requires EAX to already be zero, which is almost certainly not the case. To fix that, add an xor eax, eax to the code.
The fixes could look something like:
global start
section .text
start:
jmp string
main:
; 59 opcode
; int execve(char *fname, char **argp, char **envp);
xor eax, eax ;zero EAX
pop ebx ;stringa
push 0x0 ;3rd param
push 0x0 ;2nd param
push ebx ;1st param
add eax,0x3b ;execve opcode
push eax ;Push a 4 byte value after parameters per calling convention
int 0x80 ;interupt
sub eax,0x3a ; exit opcode
push eax ;Push a 4 byte value after parameters per calling convention
; in this case though it won't matter since the system call
; won't be returning
int 0x80
string:
call main
db '/bin/sh',0
Shellcode
Your code actually uses the JMP/CALL/POP method, which is used for writing exploits. Are you writing an exploit, or did you just find this code online? If it is intended to be used as shellcode, you need to avoid putting a 0x00 byte in the generated machine code: push 0x00 will encode 0x00 bytes in the instruction stream. To avoid this we can push EAX, which we are now zeroing out anyway. Likewise, you can't NUL-terminate the string in the data, so you have to write a NUL (0) character into the string at runtime. One way, after zeroing EAX and popping EBX, is to store the zero at the end of the string manually with something like mov [ebx+7], al. Seven is the index just past the end of the string /bin/sh. Your code would then look like this:
global start
section .text
start:
jmp string
main:
; 59 opcode
; int execve(char *fname, char **argp, char **envp);
xor eax, eax ;Zero EAX
pop ebx ;stringa
mov [ebx+7], al ;append a zero onto the end of the string '/bin/sh'
push eax ;3rd param
push eax ;2nd param
push ebx ;1st param
add eax,0x3b ;execve opcode
push eax
int 0x80 ;interupt
sub eax,0x3a ; exit opcode
push eax
int 0x80
string:
call main
db '/bin/sh',1
You are using a 64-bit syscall number with the 32-bit method of entering the kernel. On Linux that is not going to work.
For 32-bit users:
syscall number for Linux execve: 11 (Mac OS X uses BSD numbering, where execve is 59)
instruction to make a syscall: int 0x80
For 64-bit users:
syscall number for Linux execve: 59 (MacOS 64-bit system calls also have a high bit set).
instruction to make a syscall: syscall
The method for passing args to system calls is also different: 32-bit uses the stack, 64-bit uses registers similar to the function-calling convention.

Crash while manually manipulating a UNICODE_STRING

I get a very strange (for me) crash while manually manipulating a UNICODE_STRING:
UNICODE_STRING ustrName;
UNICODE_STRING ustrPortName;
UNICODE_STRING linkName;
WDFSTRING strPortName;
UCHAR m_COMPortName[6];
RtlInitUnicodeString(&ustrName, L"PortName");
status = WdfStringCreate(NULL, WDF_NO_OBJECT_ATTRIBUTES, &strPortName);
if(NT_SUCCESS(status)) // String created
{ status = WdfRegistryQueryString (hKey, &ustrName, strPortName); // strPortName is now "COM8"
if (NT_SUCCESS (status)) {
WdfStringGetUnicodeString(strPortName, &ustrPortName);
m_COMPortName[0] = (UCHAR)ustrPortName.Buffer[0];
m_COMPortName[1] = (UCHAR)ustrPortName.Buffer[1];
m_COMPortName[2] = (UCHAR)ustrPortName.Buffer[2];
m_COMPortName[3] = (UCHAR)ustrPortName.Buffer[3];
m_COMPortName[4] = (UCHAR)ustrPortName.Buffer[4];
m_COMPortName[5] = 0; // Force a null-termination
}
}
WdfRegistryClose(hKey);
RtlInitUnicodeString(&linkName, L"\\??\\COM123"); // Init with lets say COM123, Breakpoint here...
linkName.Buffer[7] = (USHORT)m_COMPortName[3]; // First digit in the COM-port number // ** THIS LINE CRASH **
linkName.Buffer[8] = (USHORT)m_COMPortName[4]; // Second digit in the COM-port number // (if any else NULL)
linkName.Buffer[9] = (USHORT)m_COMPortName[5]; // Third digit in the COM-port number // (if any else NULL)
Disassembly:
902de533 6840072e90 push offset mydriver! ?? ::FNODOBFM::'string' (902e0740) ** Breakpoint here (same as above...) **
902de538 8d45f8 lea eax,[ebp-8]
902de53b 50 push eax
902de53c ff1528202e90 call dword ptr [mydriver!_imp__RtlInitUnicodeString (902e2028)]
902de542 660fb60d23392e90 movzx cx,byte ptr [mydriver!m_COMPortName+0x3 (902e3923)] ** Start of the crashing line **
902de54a 8b55fc mov edx,dword ptr [ebp-4] ** Seems ok **
902de54d 66894a0e mov word ptr [edx+0Eh],cx ds:0023:902e074e=0031 ** CRASH!!! **
902de551 660fb60524392e90 movzx ax,byte ptr [mydriver!m_COMPortName+0x4 (902e3924)]
902de559 8b4dfc mov ecx,dword ptr [ebp-4]
902de55c 66894110 mov word ptr [ecx+10h],ax
902de560 660fb61525392e90 movzx dx,byte ptr [mydriver!m_COMPortName+0x5 (902e3925)]
902de568 8b45fc mov eax,dword ptr [ebp-4]
902de56b 66895012 mov word ptr [eax+12h],dx
Both linkName and m_COMPortName look correct in the Watch window. What's up?
Another solution would be to somehow concatenate the unicode string L"\\??\\" with the dynamically read unicode string L"COMx", but I don't know how to do that. I'm aware of MultiByteToWideChar but I'm not fond of using it, since it needs windows.h, and when I include that file in my tiny KMDF driver project the compiler gives me tons of errors...
All code made for Windows Vista in WinDDK 7600.16385.1 (KMDF)
From the MSDN documentation of RtlInitUnicodeString:
Sets the Buffer member of the UNICODE_STRING structure to the
address that the source parameter specifies.
linkName's Buffer therefore points to a constant (L"\\??\\COM123"), so it crashes when you try to modify it.

save inline asm register value to C pointer, can get it on GCC but not VC

For the sake of simplicity I'll just paste an example instead of my entire code, which is a bit huge. While porting my code from GCC to VC++, I need to rewrite a few inline assembly functions that receive pointers and store values through those pointers.
Imagine cpuid, for example:
void cpuid( int* peax, int* pebx, int* pecx, int* pedx, int what ){
__asm__ __volatile__( "cpuid" : "=a" (*peax), "=b" (*pebx), "=c" (*pecx), "=d" (*pedx) : "a" (what) );
}
That will just work: it stores the values "returned" by cpuid through the pointers that I passed to the function.
Can the same be done with the inline assembler for VC?
So far the exact same function signature, but with:
mov eax, what;
cpuid;
mov dword ptr [peax], eax;
etc
won't work; peax will have the same value it had before calling the function.
Thanks in advance.
Tough to tell because it is just a snippet; plus, it could be called from C++ code with thiscall.
It might have to be 'naked' (__declspec(naked)) in some cases.
It won't port anyway, as VC does not support inline asm for x64.
Use the __cpuid or __cpuidex intrinsic and enjoy.
mov eax, what;
cpuid;
mov ecx, dword ptr peax;
mov [ecx], eax;
will work.
Good luck!