How do I enable GDB/GEF to let me see how the stack changes as I insert discrete input?

I am trying to identify the offset at which a buffer overflow occurs, using pwntools and gdb. Here is the C code (compiled for x64):
int input[8];
int count, num;

count = 0;
while (1)
{
    printf("Enter:\n");
    scanf("%d", &num);
    if (num == -1) {
        break;
    } else {
        input[count++] = num;
    }
}
Given that an int is 4 bytes, I am attempting to feed the program a sequence of integers via pwntools (code below):
from pwn import *

context.log_level = "debug"
io = gdb.debug('_file_')
for i in range(0, 10, 1):
    io.clean()
    io.sendline("{:d}".format(i))
io.interactive()
However, I am having trouble finding the offset while debugging the program in gdb. I would like to be able to see the changes to the stack as each integer is read (via ni or si). Is there a better way to identify where the program crashes?
I am using the for loop as a proxy for pattern create (in the hope of seeing which integer causes the crash).
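For reference, a plain GDB/GEF session can show exactly this (a sketch using standard GDB commands, assuming the snippet lives in main(); GEF redraws its context pane on every stop):

(gdb) break main
(gdb) run
(gdb) display/16wx &input     # re-dump 16 words at input[] after every ni/si
(gdb) watch -l input[8]       # fires on the first write past the 8-element array
(gdb) continue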
Any insights would greatly be appreciated!

Cannot get OpenAL to play sound

I've searched the net, and I've searched here. I've found code that I could compile, and it works fine, but for some reason my code won't produce any sound. I'm porting an old game to the PC (Windows), and I'm trying to make it as authentic as possible, so I want to use generated waveforms. I've pretty much copied and pasted the working code (only adding in multiple voices), and it still won't work (even though the exact same code for a single voice works fine). I know I'm missing something obvious, but I just cannot figure out what. Any help would be appreciated, thank you.
First, some notes. I was looking for something that would let me use the original methodology. The original system used paired bytes for music (sound effects, of which there were only 2, were handled in code): a time byte that counted down every time the routine was called, and a note byte that was played until time reached zero. This was done by patching into the interrupt vector. Windows doesn't allow that, so I set up a timer routine that accomplishes the same thing. The timer kicks in, updates the display, and then runs the music sequence. I set this up with a defined time so that I only have one place to adjust the timing (to get it as close as possible to the original sequence).

The music is a generated waveform (I've double-checked the math, and even examined the generated data in debug mode), and it looks good. The sequence looks good, but doesn't actually produce sound.

I tried SDL2 first, and its method of only playing one sound doesn't work for me; also, unless I make the sample duration extremely short (and the sound produced this way is awful), I can't match the timing (it plays the entire sample through its own interrupt without letting me make adjustments). Blending the 3 voices together (when they all run with different timings) is also a mess. Most of the other engines I examined work in much the same way: they want to use their own callback interrupt and won't let me tweak it appropriately. This is why I started working with OpenAL. It allows multiple voices (sources) and lets me set the timings myself. On advice from several forums, I set it up so that the sample lengths are all multiples of full cycles.
Anyway, here's the code.
int main(int argc, char* argv[])
{
    FreeConsole();                 // Get rid of the DOS console, don't need it
    if (InitLog() < 0) return -1;  // Start logging
    UINT_PTR tim = NULL;
    SDL_Event event;
    InitVideo(false);              // Set to window for now, will put options in later
    curmusic = 5;
    InitAudio();
    SetTimer(NULL, tim, _FREQ_, TimerProc);
    SDL_PollEvent(&event);
    while (event.type != SDL_KEYDOWN) SDL_PollEvent(&event);
    SDL_Quit();
    return 0;
}
void CALLBACK TimerProc(HWND hWind, UINT Msg, UINT_PTR idEvent, DWORD dwTime)
{
    RenderOutput();
    PlayMusic();
    //UpdateTimer();
    //RotateGate();
    return;
}
void InitAudio(void)
{
    ALCdevice *dev;
    ALCcontext *cxt;

    Log("Initializing OpenAL Audio\r\n");
    dev = alcOpenDevice(NULL);
    if (!dev) {
        Log("Failed to open an audio device\r\n");
        exit(-1);
    }
    cxt = alcCreateContext(dev, NULL);
    if (!cxt) {                    // verify the context before using it
        Log("Failed to create audio context\r\n");
        exit(-1);
    }
    alcMakeContextCurrent(cxt);
    alGenBuffers(4, Buffer);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during buffer creation\r\n");
        exit(-1);
    }
    alGenSources(4, Source);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during source creation\r\n");
        exit(-1);
    }
    return;
}
void PlayMusic()
{
    static int oldsong, ofset, mtime[4];
    double freq;
    ALuint srate = 44100;
    ALuint voice, i, note, len, hold;
    short buf[4][_BUFFSIZE_];
    bool test[4] = {false, false, false, false};

    if (curmusic != oldsong) {
        oldsong = (int)curmusic;
        if (curmusic > 0)
            ofset = moffset[(curmusic - 1)];
        for (voice = 1; voice < 4; voice++) {
            alSourceStop(Source[voice]);
            mtime[voice] = 0;
        }
        return;
    }
    if (curmusic == 0) return;
    for (voice = 0; voice < 3; voice++) {       // Only 3 voices for music, the 4th is set aside for eventual sound effects
        if (mtime[voice] == 0) {                // Is the note finished?
            alSourceStop(Source[voice]);        // It is, so stop the channel (source)
            mtime[voice] = music[ofset++];      // Get the next duration
            if (mtime[voice] == 0) { oldsong = 0; return; }  // Zero marks the end, so restart
            note = music[ofset++];              // Get the next note
            if (note > 127) {                   // The old hardware data could only use values 128 - 255 (255 = 127)
                if (note == 255) note = 127;
                freq = (15980 / (voice + (int)(voice / 3))) / (256 - note);  // Frequency of the note
                len = (ALuint)(srate / freq);   // A single cycle of that frequency
                hold = len;
                while (len < (srate / (1000 / _FREQ_))) len += hold;  // Multiply until one interrupt cycle
                while (len > _BUFFSIZE_) len -= hold;                 // Don't overload the buffer
                if (len == 0) len = _BUFFSIZE_;                       // Just to be safe
                for (i = 0; i < len; i++)       // Calculate a sine wave and put it in the buffer
                    buf[voice][i] = (short)(32760 * sin((2 * M_PI * i * freq) / srate));
                alBufferData(Buffer[voice], AL_FORMAT_MONO16, buf[voice], len, srate);
                alSourcei(Source[voice], AL_LOOPING, AL_TRUE);
                alSourcei(Source[voice], AL_BUFFER, Buffer[voice]);
                alSourcePlay(Source[voice]);
            }
        } else --mtime[voice];
    }
}
Well, it turns out there were 3 problems with my code. First, you have to load the built wave data into the AL-generated buffer before you link that buffer to the source:
alBufferData(buffer, AL_FORMAT_MONO16, &wave_sample, sample_length * sizeof(short), frequency);
alSourcei(source, AL_BUFFER, buffer);
Second, as shown above, the sample length must be multiplied by the number of bytes per sample (in this case sizeof(short)), because alBufferData takes its size argument in bytes, not samples.
The final problem was that you need to unlink the buffer from the source before you change the buffer data:
alSourcei(source, AL_BUFFER, NULL);
The music would play, but not correctly, until I added that line to the note-change code.
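Putting the three fixes together, the per-note update would look roughly like this (a sketch against the standard OpenAL calls above, reusing the question's variable names; wave and len stand in for the freshly generated samples and their count):

alSourceStop(Source[voice]);                        /* stop the old note               */
alSourcei(Source[voice], AL_BUFFER, 0);             /* detach the buffer (0 = AL_NONE) */
alBufferData(Buffer[voice], AL_FORMAT_MONO16,
             wave, len * sizeof(short), srate);     /* refill it: size is in BYTES     */
alSourcei(Source[voice], AL_BUFFER, Buffer[voice]); /* now attach the filled buffer    */
alSourcei(Source[voice], AL_LOOPING, AL_TRUE);
alSourcePlay(Source[voice]);                        /* and restart playback            */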

kiss_fftr: different results on x86 vs. Atheros AR9331

This is my first question on Stack Overflow, and my English is unfortunately poor, but I want to try.
A customized version of the twotonetest routine from kissfft produces very different results on two different systems.
The program compiled with gcc under Ubuntu on x86 produces the correct values. The program built with the OpenWRT SDK for the Arduino YUN (Atheros AR9331) produces incorrect values. It seems as if the definition of FIXED_POINT is ignored.
It is defined as:
#define FIXED_POINT 32
the function:
double GetFreqBuf(tBuf * io_pBuf, int nfft)
{
    kiss_fftr_cfg cfg = NULL;
    kiss_fft_cpx *kout = NULL;
    kiss_fft_scalar *tbuf = NULL;
    uint32_t ptr;
    int i;
    double sigpow = 0;
    double noisepow = 0;
    long maxrange = SHRT_MAX;

    cfg = kiss_fftr_alloc(nfft, 0, NULL, NULL);
    tbuf = KISS_FFT_MALLOC(nfft * sizeof(kiss_fft_scalar));
    kout = KISS_FFT_MALLOC(nfft * sizeof(kiss_fft_cpx));

    /* generate the array from samples */
    for (i = 0; i < nfft; i++) {
        /* only one channel - a crutch; it would also work with 2 channels now, but this is faster */
        if (io_pBuf->IndexNextValue >= (i * 2))
            ptr = io_pBuf->IndexNextValue - (i * 2);
        else
            ptr = io_pBuf->bufSize - ((i * 2) - io_pBuf->IndexNextValue);
        tbuf[i] = io_pBuf->aData[ptr];
    }

    kiss_fftr(cfg, tbuf, kout);

    for (i = 0; i < (nfft / 2 + 1); ++i) {
        double tmpr = (double)kout[i].r / (double)maxrange;
        double tmpi = (double)kout[i].i / (double)maxrange;
        double mag2 = tmpr * tmpr + tmpi * tmpi;
        if (i != 0 && i != nfft / 2)
            mag2 *= 2; /* all bins except DC and Nyquist have symmetric counterparts implied */
        /* if there is power between the frequencies, it is signal, otherwise noise */
        if (i > nfft / 96 && i < nfft / 32)
            noisepow += mag2;
        else
            sigpow += mag2;
    }
    kiss_fft_cleanup();
    //printf("TEST %d values, noisepow: %f sigpow: %f noise # %fdB\n", nfft, noisepow, sigpow, 10 * log10(noisepow / sigpow + 1e-30));
    free(cfg);
    free(tbuf);
    free(kout);
    return 10 * log10(noisepow / sigpow + 1e-30);
}
16-bit sound samples from the same file are used as input. The results differ, for example, from -3 dB to -15 dB. Where would you start troubleshooting?
Possibility #1 (most likely)
You are compiling kissfft.c or kiss_fftr.c differently than the calling code. This happens to a lot of people.
An easy way to force the same FIXED_POINT everywhere is to edit kiss_fft.h directly. Another option: verify with some printf debugging, i.e. place the following in various places:
printf(__FILE__ " sees sizeof(kiss_fft_scalar)=%d\n", (int)sizeof(kiss_fft_scalar));
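If C11 is available, the same mismatch can be caught at compile time instead of at run time (a sketch, assuming FIXED_POINT=32 is the intended configuration, which makes kiss_fft_scalar a 32-bit type):

#include "kiss_fft.h"

/* fails to compile in any translation unit that sees a different FIXED_POINT */
_Static_assert(sizeof(kiss_fft_scalar) == 4,
               "kiss_fft_scalar is not 32 bits - FIXED_POINT mismatch");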
Possibility #2
Perhaps the FIXED_POINT=16 code works but the FIXED_POINT=32 code does not, because something is being handled incorrectly either inside kissfft or on the platform. The 32-bit fixed-point code relies on int64_t being implemented correctly.
Is that Atheros a 16-bit processor? I know kissfft has been used successfully on 16-bit platforms, but I'm not sure whether FIXED_POINT=32 real FFTs have ever been used on a 16-bit fixed-point platform.
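That assumption is easy to test in isolation on the target with a hand-checked 64-bit multiply (a standalone sketch; if int64_t is broken, the printed product will not match the expected value):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int64_t a = 1000000007LL;          /* about 1e9, so the square needs 64 bits */
    int64_t p = a * a;                 /* exercises the 64x64 multiply           */
    printf("sizeof(int64_t)=%u p=%" PRId64 "\n", (unsigned)sizeof(int64_t), p);
    /* expected output: sizeof(int64_t)=8 p=1000000014000000049 */
    return 0;
}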
Good luck,
Mark

vsnprintf on an ATMega2560

I am using a toolkit to do some elliptic-curve cryptography on an ATMega2560. When I try to use the print functions in the toolkit, I get an empty string. I know the print functions work, because the x86 version prints the variables without a problem. I am not experienced with the ATMega and would love any help on this matter. The print code is included below.
Code to print a big number (it in turn calls util_print):
void bn_print(bn_t a) {
    int i;

    if (a->sign == BN_NEG) {
        util_print("-");
    }
    if (a->used == 0) {
        util_print("0\n");
    } else {
#if WORD == 64
        util_print("%lX", (unsigned long int)a->dp[a->used - 1]);
        for (i = a->used - 2; i >= 0; i--) {
            util_print("%.*lX", (int)(2 * (BN_DIGIT / 8)),
                    (unsigned long int)a->dp[i]);
        }
#else
        util_print("%llX", (unsigned long long int)a->dp[a->used - 1]);
        for (i = a->used - 2; i >= 0; i--) {
            util_print("%.*llX", (int)(2 * (BN_DIGIT / 8)),
                    (unsigned long long int)a->dp[i]);
        }
#endif
        util_print("\n");
    }
}
The code to actually print a big number variable:
static char buffer[64 + 1];

void util_printf(char *format, ...) {
#ifndef QUIET
#if ARCH == AVR
    char *pointer = &buffer[1];
    va_list list;
    va_start(list, format);
    vsnprintf(pointer, 128, format, list);
    buffer[0] = (unsigned char)2;
    va_end(list);
#elif ARCH == MSP
    va_list list;
    va_start(list, format);
    vprintf(format, list);
    va_end(list);
#else
    va_list list;
    va_start(list, format);
    vprintf(format, list);
    fflush(stdout);
    va_end(list);
#endif
#endif
}
Edit: I do have the UART initialized and can output printf statements to a console.
I'm one of the authors of the RELIC toolkit. The current util_printf() function is used to print inside the Avrora simulator, for debugging purposes. I'm glad that you could adapt the code to your purposes. As a side note, the buffer size problem was already fixed in more recent releases of the toolkit.
Let me know if you have further problems with the library. You can either contact me personally or write directly to the discussion group.
Thank you!
vsnprintf stores its output in the given buffer (which in this case is the address pointed to by the pointer variable). For the output to show on the console (through the UART) you must send the buffer yourself, e.g. with printf (try adding printf("%s", pointer) after the vsnprintf call). If you're using avr-libc, don't forget to initialize the standard streams before making any call to a printf function.
Oh, by the way, your code is vulnerable to a buffer overflow: buffer[64 + 1] means your buffer is only 65 bytes, while vsnprintf(pointer, 128, format, list) tells vsnprintf that 128 bytes are available. Change that size to 64 bytes or fewer (pointer starts at &buffer[1], so only 64 of the 65 bytes are available) to avoid the overflow.
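For the avr-libc stream initialization mentioned above, the usual pattern looks like the following (a sketch; uart_putchar() stands in for your own routine that pushes one byte out of the UART):

#include <stdio.h>

static int uart_putchar(char c, FILE *stream)
{
    /* hardware-specific UART send goes here, e.g. wait for the data
       register to be empty, then write the byte */
    return 0;
}

static FILE uart_out = FDEV_SETUP_STREAM(uart_putchar, NULL, _FDEV_SETUP_WRITE);

void stdio_init(void)
{
    stdout = &uart_out;   /* from here on, printf() output goes to the UART */
}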
Alright, so I found a workaround to print the bn numbers to stdout on an ATMega2560. The toolkit comes with a function that writes a variable to a string (bn_write_str), so I implemented my own print function:
void print_bn(bn_t a)
{
    char print[BN_SIZE];             // max precision of a bn number
    int bi = bn_bits(a);             // get the number of bits in the number
    bn_write_str(print, bi, a, 16);  // 16 is the radix (hexadecimal)
    printf("%s\n", print);
}
This function will print a bn number in hexadecimal format.
Hope this helps anyone using the RELIC toolkit with an AVR.
This skips the util_print calls.

Matlab MEX File: Program Crashes in the second run: Access Violation in Read

I have C++ code that I am trying to interface with Matlab. My MEX file runs fine on the first run, but crashes on the second. However, if I clear all the variables in Matlab before execution (using clear all), the program never crashes. So I have a question:
1. Can a MEX function take variables from the Matlab workspace without using special functions? Am I doing that somehow in my code, unintentionally?
I have posted the MEX function that I wrote. It has a one-dimensional vector called "block" that is read inside the C++ function called sphere_detector. For the present problem the block size is 1x1920, and it is read in chunks of 16 elements inside sphere_detector. The program crashes when I read the SECOND chunk of 16 elements. The first element that I read in that chunk throws this error:
First-chance exception at 0x000007fefac7206f (sphere_decoder.mexw64) in MATLAB.exe: 0xC0000005: Access violation reading location 0xffffffffffffffff.
MATLAB.exe has triggered a breakpoint
I checked my block vector; it should have all its values initialized, and it does. So I am a little confused as to why I am facing this problem.
I am using Matlab 2010a and Visual Studio 2010 Professional.
Here is the mex function:
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    double *mod_scheme, *Mt, *Mr, *block_length, *SNR;

    mod_scheme   = mxGetPr(prhs[0]);
    Mt           = mxGetPr(prhs[1]);
    Mr           = mxGetPr(prhs[2]);
    block_length = mxGetPr(prhs[3]);
    SNR          = mxGetPr(prhs[4]);

    /* Now take the input block. This is an encoded block and the sphere detector
       will do the transmission too -- I can change it later */
    double *block = mxGetPr(prhs[5]);
    double *LIST_SIZE = mxGetPr(prhs[6]);
    double **cand_sym;

    int a = *mod_scheme;
    int b = *Mt;
    int c = *Mr;
    int d = *block_length;
    int e = *SNR;
    int f = *LIST_SIZE;
    int bitSize = (int)(log10(1.0 * a) / log10(2.0));

    for (int i = 0; i < (int)*block_length; ++i)
    {
        printf("%d\n", (int)block[i]);
    }
    printf("Hello world %d %d %d %d %d!\n", (int)*mod_scheme, (int)*Mt, (int)*Mr, (int)*block_length, (int)*SNR);

    /* Inputs are read correctly; now set the outputs */
    double *llr, *cand_dist;

    /* for llrs */
    plhs[0] = mxCreateDoubleMatrix(1, d, mxREAL);
    llr = mxGetPr(plhs[0]);

    /* for cand_dist */
    int no_mimo_sym = d / (b * bitSize);
    plhs[1] = mxCreateDoubleMatrix(1, f * no_mimo_sym, mxREAL);
    cand_dist = mxGetPr(plhs[1]);

    /* for cand_syms */
    plhs[2] = mxCreateDoubleMatrix(b * bitSize * no_mimo_sym, f, mxREAL); // transposed version
    double *candi;
    candi = mxGetPr(plhs[2]);

    cand_sym = (double**)mxMalloc(f * sizeof(double*));
    if (cand_sym != NULL)
    {
        for (int i = 0; i < f; ++i)
        {
            cand_sym[i] = candi + i * b * bitSize * no_mimo_sym;
        }
    }

    sphere_decoder(a, b, c, d, e, block, f, llr, cand_dist, cand_sym);
    // mxFree(cand_sym);
}
The portion inside the sphere decoder code where I get the read exception looks like this:
for (int _block_length = 0; _block_length < block_length; _block_length += Mt * bitSize)
{
    printf("Transmitting MIMO Symbol: %d\n", _block_length / (Mt * bitSize));
    for (int _antenna = 0; _antenna < Mt; ++_antenna)
        for (int _tx_part = 0; _tx_part < bitSize; _tx_part++)
        {
            // PROGRAM CRASHES EXECUTING THIS LINE
            bitstream[_antenna][_tx_part] = (int)block_data[_block_length + _antenna * bitSize + _tx_part];
        }
    ............................REST OF THE CODE..................
}
Any help would be appreciated.
With regards,
Newbie
Well, I finally managed to solve the problem. It was a very stupid mistake: I had a pointer to a double (double *a;) and by mistake I allocated it memory sized for int (I ran a find-and-replace command where I changed lots of int to double, but this one was left behind). Hence the heap was getting corrupted. I also changed my MEX function so that it creates dynamic variables using calloc and passes them to the C++ function. Once the C++ function returns, I copy their values into Matlab variables and free them using free().
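In outline, the pattern described above looks like this (a sketch of a fragment inside mexFunction, with mex.h, stdlib.h, and string.h included; rows, cols, and compute_into() are hypothetical stand-ins for the poster's sizes and sphere_decoder call):

/* build the result in plain C heap memory first */
double *tmp = (double *)calloc((size_t)rows * cols, sizeof(double)); /* sizeof(double), not sizeof(int)! */
if (tmp == NULL) mexErrMsgTxt("out of memory");
compute_into(tmp, rows, cols);                   /* hypothetical C++ worker */

/* then copy it into the Matlab output and release it */
plhs[0] = mxCreateDoubleMatrix(rows, cols, mxREAL);
memcpy(mxGetPr(plhs[0]), tmp, (size_t)rows * cols * sizeof(double));
free(tmp);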

Getting the current stack trace on Mac OS X

I'm trying to work out how to store and then print the current stack trace in my C++ apps on Mac OS X. The main problem seems to be getting dladdr to return the right symbol when given an address inside the main executable. I suspect that the issue is actually a compile option, but I'm not sure.
I have tried the backtrace code from Darwin/Leopard, but it calls dladdr and has the same issue as my own code calling dladdr.
Original post:
Currently I'm capturing the stack with this code:
int BackTrace(Addr *buffer, int max_frames)
{
    void **frame = (void **)__builtin_frame_address(0);
    void **bp = (void **)(*frame);
    void *ip = frame[1];
    int i;

    for (i = 0; bp && ip && i < max_frames; i++)
    {
        *(buffer++) = ip;
        ip = bp[1];
        bp = (void**)(bp[0]);
    }
    return i;
}
Which seems to work ok. Then to print the stack I'm looking at using dladdr like this:
Dl_info dli;
if (dladdr(Ip, &dli))
{
    ptrdiff_t offset;
    int c = 0;
    if (dli.dli_fname && dli.dli_fbase)
    {
        offset = (ptrdiff_t)Ip - (ptrdiff_t)dli.dli_fbase;
        c = snprintf(buf, buflen, "%s+0x%x", dli.dli_fname, offset);
    }
    if (dli.dli_sname && dli.dli_saddr)
    {
        offset = (ptrdiff_t)Ip - (ptrdiff_t)dli.dli_saddr;
        c += snprintf(buf + c, buflen - c, "(%s+0x%x)", dli.dli_sname, offset);
    }
    if (c > 0)
        snprintf(buf + c, buflen - c, " [%p]", Ip);
}
Which almost works; some example output:
/Users/matthew/Library/Frameworks/Lgi.framework/Versions/A/Lgi+0x2473d(LgiStackTrace+0x5d) [0x102c73d]
/Users/matthew/Code/Lgi/LgiRes/build/Debug/LgiRes.app/Contents/MacOS/LgiRes+0x2a006(tart+0x28e72) [0x2b006]
/Users/matthew/Code/Lgi/LgiRes/build/Debug/LgiRes.app/Contents/MacOS/LgiRes+0x2f438(tart+0x2e2a4) [0x30438]
/Users/matthew/Code/Lgi/LgiRes/build/Debug/LgiRes.app/Contents/MacOS/LgiRes+0x35e9c(tart+0x34d08) [0x36e9c]
/Users/matthew/Code/Lgi/LgiRes/build/Debug/LgiRes.app/Contents/MacOS/LgiRes+0x1296(tart+0x102) [0x2296]
/Users/matthew/Code/Lgi/LgiRes/build/Debug/LgiRes.app/Contents/MacOS/LgiRes+0x11bd(tart+0x29) [0x21bd]
It's getting the method name right for the shared object but not for the main app. Those just map to "tart" (or "start" minus the first character).
Ideally I'd like line numbers as well as the method name at that point, but I'll settle for the correct function/method name for starters. Maybe I'll shoot for line numbers after that; on Linux I hear you have to write your own parser for a private ELF block that has its own instruction set. Sounds scary.
Anyway, can anyone sort this code out so it gets the method names right?
What releases of OS X are you targeting? If you are running on Mac OS X 10.5 or higher, you can just use the backtrace() and backtrace_symbols() library calls. They are declared in execinfo.h, and there is a manpage with some sample code.
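For reference, a minimal use of that API looks like the following (a sketch; backtrace_symbols() returns a single malloc'd block that the caller frees):

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

static void print_trace(void)
{
    void *frames[64];
    int n = backtrace(frames, 64);                 /* capture up to 64 return addresses */
    char **symbols = backtrace_symbols(frames, n); /* turn them into readable strings   */
    if (symbols) {
        for (int i = 0; i < n; i++)
            printf("%s\n", symbols[i]);
        free(symbols);
    }
}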
Edit:
You mentioned in the comments that you need to run on Tiger. You can probably just include the implementation from Libc in your app. The source is available from Apple's open source site. Here is a link to the relevant file.
