Logging pointer values in Android native code - debugging

I'm developing an Android application with native code.
I don't know how to debug a shared library, so I decided to log the values behind a pointer to LogCat.
I have this C++ code:
#include <android/log.h>

extern GLfloat* vertPos;
// More code here:
...
for (int i = 0; i < numVertices; i++)
{
    __android_log_print(ANDROID_LOG_VERBOSE, "initRendering-Vertices", "i = %d, value = %f", i, vertPos[i]);
}
numVertices is 2472.
I'm getting something like this in LogCat:
12-11 08:17:35.354: VERBOSE/initRendering-Vertices(900): i = 614, value = 3.246999
12-11 08:17:35.354: VERBOSE/initRendering-Vertices(900): i = 924, value = -8.000200
I've lost 310 elements.
Is there another way to see all of the pointer's elements?
Thanks.

The Android log implementation uses a 64KB circular buffer in the kernel. If you manage to write data to the log faster than "logcat" can read it out, you will start losing lines. Since logcat has to wait on adb to send output over USB to your workstation, it's not that hard to outrun it.
Use fopen to create a file called /sdcard/debug.txt, change the __android_log_print() to fprintf(), and then "adb pull" the output off after the run completes. That will ensure nothing is dropped.
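For example, a minimal sketch of that approach; the dumpVertices wrapper is illustrative, not from the original code, and the app needs permission to write to /sdcard:
#include <stdio.h>
#include <GLES/gl.h>

extern GLfloat* vertPos;

void dumpVertices(int numVertices)
{
    FILE* fp = fopen("/sdcard/debug.txt", "w");
    if (fp == NULL)
        return; /* /sdcard may not be writable without the right permission */
    for (int i = 0; i < numVertices; i++)
        fprintf(fp, "i = %d, value = %f\n", i, vertPos[i]);
    fclose(fp); /* flush everything before running "adb pull" */
}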

Related

OpenCL: possible reason a clGetEventInfo call would cause a segfault?

I have a pretty complicated OpenCL app. It fires up 5 different contexts on 5 different GPUs, and executes the same kernel on all of them, splitting up the work into 1024 "chunks" to be processed.
Each time a kernel finishes, its result is checked and the GPU is given a new chunk. Sometimes, usually as the app is starting (very rarely mid-run), it will immediately segfault on the clGetEventInfo call.
This is done in a loop using callbacks and clGetEventInfo calls to ensure something is finished before moving on to the next step.
GDB output:
(gdb) back
#0 0x00007fdc686ab525 in clGetEventInfo () from /usr/lib/libOpenCL.so.1
#1 0x00000000004018c1 in ready (event=0x26a00000267) at gputest.c:165
#2 0x0000000000404b5a in main (argc=9, argv=0x7fffdfe3b268) at gputest.c:544
The ready function:
int ready(cl_event event) {
    cl_int rdy;
    if (!event)
        return 0;
    clGetEventInfo(event, CL_EVENT_COMMAND_EXECUTION_STATUS, sizeof(cl_int), &rdy, NULL);
    if (rdy == CL_COMPLETE)
        return 1;
    return 0;
}
Here is how the kernel is run and how the event is set and checked, with some pseudocode inserted for brevity:
while(test if loop is complete) {
    for (j = 0; j < GPUS; j++) {
        if (gpu[j].waiting && loops < 9999) {
            gpu[j].waiting = 0;
            offset[j] = loops * 1024 * 1024;
            loops++;
            EC("kernel init", clEnqueueNDRangeKernel(queues[j], kernel_init[j], 1, &(offset[j]), &global_work_size, &work128, 0, NULL, &events[j]));
            gpu[j].readsearch = events[j];
            gpu[j].reading = 1;
        }
    }
    for (j = 0; j < GPUS; j++) {
        if (gpu[j].reading && ready(gpu[j].readsearch)) {
            gpu[j].reading = 0;
            gpu[j].waiting = 1;
            // unrelated reporting other code here
        }
    }
}
It's pretty simple. There is more to the code, but it's unrelated. The ready/checking function is very simple. I even added debugging to the ready function to printf the event # and see what was happening when it crashed - nothing really, no pattern I could see.
What could be causing this?
Ugh. Found the problem. Since you cannot give fields initial values in a struct declaration, I was using some values uninitialized: I malloc'ed the gpu structs and then just started using them. The flag tested by if(gpu[x].reading && ...) was random, completely uninitialized data, so sometimes it was non-zero, which let the ready() function fire. And since the gpu[x].readsearch event had never been set in the first place, clGetEventInfo bombed trying to use whatever happened to be at that memory location.
This would be time number 482,847 that accidentally using uninitialized variables has burned me.
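For reference, a minimal sketch of the fix under that diagnosis; the struct layout and helper below are illustrative stand-ins, since the real definitions aren't shown:
#include <stdlib.h>
#include <CL/cl.h>

#define GPUS 5   /* from the question: 5 GPUs */

/* Illustrative stand-in for the real per-GPU bookkeeping struct. */
struct gpu_state { int waiting, reading; cl_event readsearch; };

static struct gpu_state *alloc_gpu_state(void)
{
    /* calloc zeroes the block, so .reading/.waiting start at 0 and
     * .readsearch starts as a null event, which ready() already rejects. */
    return calloc(GPUS, sizeof(struct gpu_state));
}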

NPAPI: Basic usage of NPN_RequestRead

I am trying to understand how NPN_RequestRead should be used when writing an NPAPI plugin. The documentation looked pretty clear at first, but so far I still cannot make the plugin work.
Here is my goal: implement a JPEG 2000 plugin using NPAPI. To have a proper implementation I need to access the JPEG 2000 stream with random access. In my case images are huge (100000x100000 RGB), but they can be displayed efficiently using only the first few bytes (thanks to multiresolution!).
As far as I can tell, I cannot make the plugin stop the GET. I cannot use local file access in Firefox since it appears to be broken. However, I can use a local apache2 installation and have the plugin called with NPP_NewStream( ... seekable=true ) mode:
$ HEAD http://localhost/test.jp2 | grep Accept-Ranges
Accept-Ranges: bytes
Since seekable is set to true, I create the plugin with *stype = NP_SEEK. It seems that from this point I should be able to stop the GET with:
NPError NPP_NewStream(NPP instance, NPMIMEType type, NPStream* stream, NPBool seekable, uint16_t* stype)
[...]
    NPByteRange range;
    range.offset = 0;
    range.length = 0;
    range.next = NULL;
    NPError e = s_pBrowserFunctions->requestread(stream, &range);
However, requestread returns an error. I've had a little more luck with:
int32_t NPP_Write(NPP instance, NPStream* stream, int32_t offset, int32_t len, void* buffer)
[...]
    NPByteRange range;
    range.offset = 0;
    range.length = 0;
    range.next = NULL;
    NPError e = s_pBrowserFunctions->requestread(stream, &range);
But still, from the network console I can see that the entire stream has been downloaded.
Does anyone have a minimal example of a working NPAPI plugin using the NPN_RequestRead API?
You're requesting 0 bytes (.length = 0).
Firefox will therefore skip the range. Since there are no other valid ranges, there are no actual requests and hence Firefox returns an error.
From nsPluginStreamListenerPeer.cpp, unrelated parts stripped:
int32_t requestCnt = 0;
for (NPByteRange *range = aRangeList; range != nullptr; range = range->next) {
    // XXX zero length?
    if (!range->length)
        continue;
    // ...
    requestCnt++;
}
// ...
*numRequests = requestCnt;
// ...
if (numRequests == 0)
    return NS_ERROR_FAILURE;
So, you'll need to actually request something!
(Admittedly, the implementation looks kinda broken/limited, e.g. you cannot request bytes=0- with it.)
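For example, the same call as in the question but with a non-zero range; the 1024-byte length is an arbitrary illustration, request whatever your JPEG 2000 header parsing actually needs:
NPByteRange range;
range.offset = 0;
range.length = 1024;  /* must be non-zero, or Firefox skips the range */
range.next = NULL;
NPError e = s_pBrowserFunctions->requestread(stream, &range);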

FreeBSD newbus driver loads successfully but can't create the /dev/** file - debugging

I am installing a new newbus driver on FreeBSD 10.0. After compiling with make, the driver.ko file is created, and kldload can then load it successfully: kldload returns 0 and I can see the device in the kldstat output. But when I attempt to use the driver by opening the /dev/** file, the file does not exist.
I think this /dev/** file should be created by the make_dev function, which is called in the device_attach method. To test whether kldload reaches this attach function, I added printf and uprintf calls to debug the driver, but I cannot see any output on the console or in the dmesg output.
The problem is that even after adding printf at the beginning (right after the local variable definitions) of the device_identify and device_probe functions, I still see no output on the console or in dmesg.
My question: even if the physical device has a problem (not present, etc.), shouldn't I see the output of printf in the device_identify method, which (I think) is called by kldload at load time?
Am I making a mistake when debugging a newbus driver with printf? (I also tried a hello_world device driver, and in that driver I do get printf output in dmesg.)
Mainly, how can I test/debug this driver's kldload process?
Below are some parts of my driver code (I think I should at least see MSG1, but I do not):
struct mydrv_softc
{
    device_t dev;
};

static devclass_t mydrv_devclass;

static struct cdevsw mydrv_cdevsw = {
    .d_version = D_VERSION,
    .d_name = "mydrv",
    .d_flags = D_NEEDGIANT,
    .d_open = mydrv_open,
    .d_close = mydrv_close,
    .d_ioctl = mydrv_ioctl,
    .d_write = mydrv_write,
    .d_read = mydrv_read
};

static void mydrv_identify(driver_t *driver, device_t parent) {
    devclass_t dc;
    device_t child;
    printf("MSG1: The process is inside the identify function.");
    dc = devclass_find("mydrv");
    if (devclass_get_device(dc, 0) == NULL) {
        child = BUS_ADD_CHILD(parent, 0, "mydrv", -1);
    }
}

static int mydrv_probe(device_t dev) {
    printf("MSG2: The process is inside the probe function.");
    mydrv_init();
    if (device_get_unit(dev) != 0)
        return (ENXIO);
    device_set_desc(dev, "FreeBSD Device Driver");
    return (0);
}

static int mydrv_attach(device_t dev) {
    struct mydrv_softc *sc;
    device_printf(dev, "MSG3: The process will make the attachment.");
    sc = (struct mydrv_softc *) device_get_softc(dev);
    sc->dev = (device_t)make_dev(&mydrv_cdevsw, 0, UID_ROOT, GID_WHEEL, 0644, "mydrv_drv");
    return 0;
}

static int mydrv_detach(device_t dev) {
    struct mydrv_softc *sc;
    sc = (struct mydrv_softc *) device_get_softc(dev);
    destroy_dev((struct cdev *)(sc->dev));
    bus_generic_detach(dev);
    return 0;
}

static device_method_t mydrv_methods[] = {
    DEVMETHOD(device_identify, mydrv_identify),
    DEVMETHOD(device_probe, mydrv_probe),
    DEVMETHOD(device_attach, mydrv_attach),
    DEVMETHOD(device_detach, mydrv_detach),
    { 0, 0 }
};

static driver_t mydrv_driver = {
    "mydrv",
    mydrv_methods,
    sizeof(struct mydrv_softc),
};

DRIVER_MODULE(mydrv, ppbus, mydrv_driver, mydrv_devclass, 0, 0);
If you don't see your printfs' output on the console, then your device functions are probably not being called. Can you show us your module's code?
Have you used DRIVER_MODULE() or DEV_MODULE()?
What parent bus are you using?
I guess printf works fine, but I prefer device_printf, since it also prints the device name, which makes it easier to search through logs or dmesg output. Also leave multiple debug prints in place and check the log files on your system. Most device driver messages are logged to /var/log/messages, but check the other log files too.
Are you running your code in a virtual machine? Some device drivers don't create their device files in /dev when the OS runs in a virtual machine. You should probably run the OS on actual hardware for the device file to show up.
As far as I know, you won't see output in dmesg if the corresponding device file never appears in /dev, but you may have luck with the log files I mentioned.
The easiest way to debug is of course printf statements. Beyond that, you can debug the kernel with gdb running on another system; I am not familiar with the exact process, but I know it can be done. Google it.
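As an illustration of the device_printf suggestion, here is a minimal sketch of an instrumented probe (the wording of the message is mine, not from the question). Note the trailing \n: messages without a terminating newline may not be flushed to the console or to /var/log/messages until the next newline arrives, which alone can make debug prints seem to vanish:
static int mydrv_probe(device_t dev)
{
    /* device_printf prefixes the message with the device name, e.g. "mydrv0: ..." */
    device_printf(dev, "probe called, unit %d\n", device_get_unit(dev));
    if (device_get_unit(dev) != 0)
        return (ENXIO);
    device_set_desc(dev, "FreeBSD Device Driver");
    return (0);
}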

How to write directly to the frame buffer in a Windows driver

I am writing a driver that writes data directly to the frame buffer, so that I can show a secret message on the screen that user-space applications cannot capture. Below is my code for writing values to the frame buffer, but after I write them, the values I read back from the frame buffer are all 0.
I am puzzled; does anyone know the reason? Or does anyone know how to display a message on the screen in such a way that user-space applications cannot get its content? Thanks a lot!
#define FRAME_BUFFER_PHYSICAL_ADDRESS 0xA0000
#define BUFFER_SIZE 0x20000

void showMessage()
{
    int i;
    int *vAddr;
    PHYSICAL_ADDRESS pAddr;

    pAddr.QuadPart = FRAME_BUFFER_PHYSICAL_ADDRESS;
    vAddr = (int *)MmMapIoSpace(pAddr, BUFFER_SIZE, MmNonCached);
    KdPrint(("Virtual address is %p", vAddr));
    if (vAddr == NULL)
        return; // mapping can fail; don't dereference a NULL pointer
    for (i = 0; i < BUFFER_SIZE / 4; i++)
    {
        vAddr[i] = 0x11223344;
    }
    for (i = 0; i < 0x80; i++)
    {
        KdPrint(("Value: %d", vAddr[i])); // output is all zero
    }
    MmUnmapIoSpace(vAddr, BUFFER_SIZE);
}
You must map the shared memory during device start-up; I assume showMessage isn't called during start-up. See more here.
Regarding displaying a message on the screen - it must involve user-space interaction, since the GUI is a user-space component. I suppose you could notify some GUI listener without involving other applications.
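For illustration, a minimal sketch of mapping once at start-up and caching the pointer for later use; DEVICE_EXTENSION and MapFrameBuffer are illustrative names of mine, not from any SDK, and the constants come from the question's code:
#include <ntddk.h>

typedef struct _DEVICE_EXTENSION {
    int *FrameBase;   /* cached mapping of the frame buffer */
} DEVICE_EXTENSION, *PDEVICE_EXTENSION;

/* Call this from the start-of-device path, then have showMessage
 * use ext->FrameBase instead of mapping on every call. */
NTSTATUS MapFrameBuffer(PDEVICE_EXTENSION ext)
{
    PHYSICAL_ADDRESS pAddr;
    pAddr.QuadPart = FRAME_BUFFER_PHYSICAL_ADDRESS;
    ext->FrameBase = (int *)MmMapIoSpace(pAddr, BUFFER_SIZE, MmNonCached);
    return (ext->FrameBase != NULL) ? STATUS_SUCCESS
                                    : STATUS_INSUFFICIENT_RESOURCES;
}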
Memory-mapped I/O isn't designed to act exactly like memory (where data you store can later be retrieved in the same form). Writes into the 0xA0000+ range are effectively writes into ports in the video device's I/O space (from its perspective). As long as the appropriate writes make the appropriate pixels light up, the video device has done its job from the perspective of people writing screen-rendering drivers (or old DOS code, where memory was a free-for-all without a user-space/kernel-space division). Such code never needed to store data and later read it back from the video segment, so typical memory read semantics would generally not have been implemented (a waste of hardware and effort). This question touches on the same issue:
Magic number with MmMapIoSpace

Getting the current stack trace on Mac OS X

I'm trying to work out how to store and then print the current stack in my C++ apps on Mac OS X. The main problem seems to be getting dladdr to return the right symbol when given an address inside the main executable. I suspect that the issue is actually a compile option, but I'm not sure.
I have tried the backtrace code from Darwin/Leopard but it calls dladdr and has the same issue as my own code calling dladdr.
Original post:
Currently I'm capturing the stack with this code:
int BackTrace(Addr *buffer, int max_frames)
{
    void **frame = (void **)__builtin_frame_address(0);
    void **bp = (void **)(*frame);
    void *ip = frame[1];
    int i;
    for (i = 0; bp && ip && i < max_frames; i++)
    {
        *(buffer++) = ip;
        ip = bp[1];
        bp = (void **)(bp[0]);
    }
    return i;
}
That seems to work OK. Then, to print the stack, I'm looking at using dladdr like this:
Dl_info dli;
if (dladdr(Ip, &dli))
{
    ptrdiff_t offset;
    int c = 0;
    if (dli.dli_fname && dli.dli_fbase)
    {
        offset = (ptrdiff_t)Ip - (ptrdiff_t)dli.dli_fbase;
        c = snprintf(buf, buflen, "%s+0x%tx", dli.dli_fname, offset);
    }
    if (dli.dli_sname && dli.dli_saddr)
    {
        offset = (ptrdiff_t)Ip - (ptrdiff_t)dli.dli_saddr;
        c += snprintf(buf + c, buflen - c, "(%s+0x%tx)", dli.dli_sname, offset);
    }
    if (c > 0)
        snprintf(buf + c, buflen - c, " [%p]", Ip);
}
This almost works; some example output:
/Users/matthew/Library/Frameworks/Lgi.framework/Versions/A/Lgi+0x2473d(LgiStackTrace+0x5d) [0x102c73d]
/Users/matthew/Code/Lgi/LgiRes/build/Debug/LgiRes.app/Contents/MacOS/LgiRes+0x2a006(tart+0x28e72) [0x2b006]
/Users/matthew/Code/Lgi/LgiRes/build/Debug/LgiRes.app/Contents/MacOS/LgiRes+0x2f438(tart+0x2e2a4) [0x30438]
/Users/matthew/Code/Lgi/LgiRes/build/Debug/LgiRes.app/Contents/MacOS/LgiRes+0x35e9c(tart+0x34d08) [0x36e9c]
/Users/matthew/Code/Lgi/LgiRes/build/Debug/LgiRes.app/Contents/MacOS/LgiRes+0x1296(tart+0x102) [0x2296]
/Users/matthew/Code/Lgi/LgiRes/build/Debug/LgiRes.app/Contents/MacOS/LgiRes+0x11bd(tart+0x29) [0x21bd]
It's getting the method name right for the shared object but not for the main app. Those just map to "tart" (or "start" minus the first character).
Ideally I'd like line numbers as well as the method name at that point, but I'll settle for the correct function/method name for starters. Maybe shoot for line numbers after that; on Linux I hear you have to write your own parser for a private ELF block that has its own instruction set. Sounds scary.
Anyway, can anyone sort this code out so it gets the method names right?
What releases of OS X are you targeting? If you are running on Mac OS X 10.5 or higher, you can just use the backtrace() and backtrace_symbols() library calls. They are defined in execinfo.h, and there is a manpage with some sample code.
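For instance, a minimal sketch of that approach (the PrintBacktrace name and the 64-frame cap are arbitrary choices of mine, not part of the API):
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

void PrintBacktrace(void)
{
    void *frames[64];                 /* arbitrary depth cap */
    int count = backtrace(frames, 64);
    char **symbols = backtrace_symbols(frames, count);
    if (symbols == NULL)
        return;
    for (int i = 0; i < count; i++)
        fprintf(stderr, "%s\n", symbols[i]);
    free(symbols);                    /* the array is malloc'ed by the library */
}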
Edit:
You mentioned in the comments that you need to run on Tiger. You can probably just include the implementation from Libc in your app. The source is available from Apple's opensource site. Here is a link to the relevant file.
