OllyDbg Comment Column (Win32 API)

From what I see in various videos and screenshots, whenever you load an executable, the Win32 API calls are visible in the comment column of the main CPU window.
How can I achieve this? In my version, I don't see them.
Thanks.

Try 'analyzing' at the current location, or at the location where you expect to see comments. When debugging self-modifying code such as packers, decryptors, etc., the contents of the CPU window might not have been analyzed by OllyDbg yet, because they may have only just been unpacked or modified.
To analyze, right-click at the desired location in the CPU window and select Analysis > Analyze code.

IIRC, it's always there, but the default column width just hides it. Try increasing the width of the comment column in the disassembly window and you should see it.

Try the 'AnalyzeThis!' plugin for OllyDbg.
AnalyzeThis! is an OllyDbg plugin that allows OllyDbg's analysis function to operate outside of the marked code segment, by telling OllyDbg that the current segment is the code segment.
Another option is to switch to OllyDbg 2, which has much better analysis and shows more comments in the comment column.
PS: Analysis depends on the executable type (C++, VB6, Delphi), so the comment column may vary.

Related

How can I determine what objects ARC is retaining using Instruments or viewing assembly?

This question is not about finding out who retained a particular object, but rather about looking at a section of code that the profiler shows to have excessive retain/release calls and figuring out which objects are responsible.
I have a Swift application that, after the initial port, was spending 90% of its time in retain/release code. After a great deal of restructuring to avoid referencing objects, I have gotten that down to about 25%, but this remaining bit is very hard to attribute. I can see from the profiler that a given chunk of it is coming from a given section of code, but sometimes I cannot see anything in that code that should (to my understanding) be causing a retain/release. I have spent time viewing the assembly code in both Instruments (with the side-by-side view, when it's working) and in the output of otool -tvV, and sometimes the proximity of the retain/release calls to a recognizable section gives me a hint as to what is going on. I have even inserted dummy method calls at places just to give me a better handle on where I am in the code, and turned off optimization to limit code reordering, etc. But in many cases it seems like I would have to trace the code back, follow branches, and figure out what is on the stack in order to understand the calls, and I am not familiar enough with x86 to know if that is practical. (I will add a couple of screenshots of the assembly view in Instruments and some otool output for reference below.)
My question is - what else can I be doing to debug / examine / attribute these seemingly excessive retain/release calls to particular code? Is there something else I can do in Instruments to count these calls? I have played around with the allocation view and turned on the reference counting option, but it didn't seem to give me any new information (I'm not actually sure what it did). Alternatively, if I just try harder to interpret the assembly, should I be able to figure out which objects are being retained by it? Are there any other tools or tricks I should know about on that front?
EDIT: Rob's info below about single-stepping into the assembly was what I was looking for. I also found it useful to set a symbolic breakpoint in Xcode on the library retain/release calls and log the item on the stack (using Rob's suggested "p (id)$rdi") to the console, in order to get a rough count of how many calls are being made rather than inspecting each one.
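For anyone wanting to reproduce that, here is a rough LLDB console sketch of such a symbolic breakpoint (the symbol name is the swift_bridgeObjectRetain call discussed below, and $rdi assumes x86_64):

(lldb) breakpoint set -n swift_bridgeObjectRetain
(lldb) breakpoint command add
> p (id)$rdi
> continue
> DONE

Each time the breakpoint is hit, LLDB prints whatever type information it can get from the object in $rdi and then resumes, so counting the logged lines gives a rough call count.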
You should definitely focus on the assembly output. There are two views I find most useful: the Instruments view, and the Assembly assistant editor. The problem is that Swift doesn't support the Assembly assistant editor currently (I typically do this kind of thing in ObjC), so we come around to your complaint.
It looks like you're already working with the debug assembly view, which gives somewhat decent symbols and is useful because you can step through the code and hopefully see how it maps to the assembly. I also find Hopper useful, because it can give more symbols. Once you have enough "unique-ish" function calls in an area, you can usually start narrowing down how the assembly maps back to the source.
The other tool I use is to step into the retain bridge and see what object is being passed. To do this, instruction-step (^F7) into the call to swift_bridgeObjectRetain. At that point, you can call:
p (id)$rdi
And it should print out at least some type information about what's being passed ($rdi is correct on x86_64, which is what you seem to be working with). I don't always have great luck extracting more information; it depends on exactly what is in there. For example, sometimes it's a ContiguousArrayStorage<Swift.CVarArgType>, and I happen to have learned that usually means it's an NSArray. I'm sure better experts in LLDB could dig deeper, but this usually gets me at least in the right ballpark.
(BTW, I don't know why I can't call p (id)$rdi before jumping inside bridgeObjectRetain, but it gives strange type errors for me. I have to go into the function call.)
Wish I had more. The Swift tool chain just hasn't caught up to where the ObjC tool chain is for tracing this kind of stuff IMO.

How to detect who's issuing a wrong kfree

I am suspecting a double kfree in my kernel code. Basically, I have a data structure that is kzalloced and kfreed in a module. I notice that the same address is allocated and then allocated again without being freed in the module.
I would like to know what technique I should employ to find where the wrong kfree is issued.
1.
Yes, kmemleak is an excellent tool, especially suitable for system-wide analysis.
Note that if you are going to use it to analyze a kernel module, you may need to save the addresses of the ELF sections containing the code of the module (.text, .init.text, ...) when the module is loaded. This may help you decipher the call stacks in kmemleak's report. It usually makes sense to ask kmemleak to produce a report after the module has been unloaded, but kmemleak cannot resolve the addresses at that time.
While a module is loaded, the addresses of its sections can be found in the files under /sys/module/<module_name>/sections/.
After you have found which section each code address in the report belongs to, and the corresponding offset into that section, you can use objdump, gdb, addr2line, or a similar tool to obtain more detailed information about where the event of interest occurred.
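As a rough console sketch of that workflow (the module name is a placeholder, and resolving to source lines requires the .ko to have been built with debug info):

# while the module is loaded, record the load addresses of its code sections
cat /sys/module/<module_name>/sections/.text
cat /sys/module/<module_name>/sections/.init.text

# for a code address from the kmemleak report:
#   offset = address - section_load_address
# then resolve it against the module object file, e.g. for .text:
addr2line -f -e <module_name>.ko -j .text 0x<offset>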
2.
Besides that, if you are working on an x86 system and would like to analyze a single kernel module, you can also use the KEDR LeakCheck tool.
Unlike with kmemleak, most of the time you do not need to rebuild the kernel to use KEDR.
The instructions on how to build and use KEDR are here. A simple example of how LeakCheck can be used is described in the "Detecting Memory Leaks" section.
Have you tried enabling the kmemleak detection code?
See Documentation/kmemleak.txt for details.
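If it is not already enabled, a minimal sketch of turning it on and triggering a scan (the config option and debugfs paths are the ones described in that document):

# kernel built with:
CONFIG_DEBUG_KMEMLEAK=y

# at runtime:
mount -t debugfs nodev /sys/kernel/debug/   # if debugfs is not already mounted
echo scan > /sys/kernel/debug/kmemleak      # trigger an immediate scan
cat /sys/kernel/debug/kmemleak              # read the current report of suspected leaks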

Edit library in hex editor while preserving its integrity

I'm attempting to edit a library in a hex editor, in insert mode. The main point is to rename a few entries in it. If I make the change in "Overwrite" mode, everything works fine, but every time I try to add a few characters to the end of a string in "Insert" mode, the library fails to load. Is there anything I'm missing here?
Yes, you're missing plenty. A library follows the PE/COFF format, which is quite heavy on pointers throughout the file. (For example, towards the beginning of the file there is a table that points to the location of each section in the file.)
If you are editing resources, there's the potential to do it without breaking things, provided you correct any pointers and sizes for anything located after your edits, but I doubt it'll be easy. If you are editing the .text section (i.e., the code), then I doubt you'll get it done, since the operands of function calls and jumps are locations relative to their own position in the code - you would need to update the entire code to account for the edits.
One technique to overcome this is a "code cave", where you replace a piece of the existing code with an explicit JMP instruction to some empty location (you can do this at runtime, where you have the ability to allocate new memory), define some new code there which can be of arbitrary length, and then explicitly JMP back to just after where you jumped from (+5 bytes, say, for the JMP opcode plus its operand).
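To make that concrete, here is a minimal C sketch of installing such a cave at runtime inside the target process. The function name is made up for the example, it assumes 32-bit x86, that the first 5 bytes at target form whole instructions, and that cave_code already contains any relocated original instructions plus the new code; error handling is omitted.

#include <windows.h>
#include <string.h>

/* Patch 5 bytes at `target` with a JMP rel32 into a freshly allocated cave.
 * The cave runs `cave_code` and then jumps back to target + 5. */
void install_code_cave(BYTE *target, const BYTE *cave_code, SIZE_T cave_len)
{
    /* allocate an executable cave: new code plus 5 bytes for the jump back */
    BYTE *cave = (BYTE *)VirtualAlloc(NULL, cave_len + 5,
                                      MEM_COMMIT | MEM_RESERVE,
                                      PAGE_EXECUTE_READWRITE);
    memcpy(cave, cave_code, cave_len);

    /* JMP back: rel32 = destination - (address of the next instruction) */
    cave[cave_len] = 0xE9;
    *(INT32 *)(cave + cave_len + 1) = (INT32)((target + 5) - (cave + cave_len + 5));

    /* overwrite the original bytes with a JMP rel32 into the cave */
    DWORD old;
    VirtualProtect(target, 5, PAGE_EXECUTE_READWRITE, &old);
    target[0] = 0xE9;
    *(INT32 *)(target + 1) = (INT32)(cave - (target + 5));
    VirtualProtect(target, 5, old, &old);
    FlushInstructionCache(GetCurrentProcess(), target, 5);
}

The same idea works when patching the file on disk, except the "cave" then has to be unused space in an existing executable section (or a section you add), and the rel32 operands are computed from RVAs rather than live pointers.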
Are the names you're changing them to the same length as the old names? If not, the offsets of everything after them are shifted. And do any of the functions call one another? That could be another problem point. It would be easier to obtain the source code (from the project's website if it's not in-house, or from the vendor if it's closed source), change the names there, and recompile. I'm curious as to why you are changing the names anyway.
DLLs are a complex binary format (i.e., compiled code). The compiling process turns named function calls into hard-wired references to specific positions in the file ("offsets"). Therefore, if you insert characters into the middle of the file, the offsets after that point will no longer match what is actually at the positions they reference, meaning that the function calls in your library will run the wrong code (if they manage to run anything at all).
Basically, the bottom line is that what you're doing is always going to break stuff. If you're unlucky, it might even break things really badly and cause serious damage.
Sure - a detailed knowledge of the format, and what has to change. If you're wondering why some of your edits cause loading to fail, you are missing that knowledge.
Libraries are intended to be written by the linker for the use of the linker. They follow a well-defined format that is intended to be easy for the linker to write and read. They don't need tolerance for human input like a compiler does.
Very simply, libraries aren't intended to be modified by hex editors. It may be possible to change entries by overwriting them with names of the same length, or that may screw up an index somewhere. If you change the length of anything, you're likely breaking pointers and metadata.
You don't give any reason for wanting to do this. If it's for fun, well, it's harder than you expected. If you have another reason, you're better off getting the source, or getting somebody who has the source to rename and rebuild.

What is the difference between intellitrace and moving to a previous line of code when stepping through?

I read a description of how IntelliTrace in VS2010 Ultimate is like going back in time in the execution of your application.
However, this sounds just like moving the line marker to a previous line (the yellow arrow in the breakpoint margin to the left of the code, when you are stepping through code).
Thanks
From Wikipedia:
Unlike the current debugger, that records only the currently-active stack, IntelliTrace records all events like prior function calls, method parameters, events, exceptions etc. This allows the code execution to be rewound in case a breakpoint wasn't set where the error occurred.
When you move the execution point back you run the same code again, but the variables might have different values. This is because running the code the first time may have changed some variables.
With Intellitrace you should be able to run the same code again with the same values as the first time. I have not tested it though.
The difference is that IntelliTrace keeps the history of each of those moments that it recorded. So unlike moving the line marker, it shows you the value of all the variables at that point in time - similar to a memory dump, but for debugging.
It relies heavily on tracing, hence the name. Here is a good explanation of how it works. It's actually pretty cool.

Find Windows DLLs not compiled with SafeSEH

I'd like to find out which of the DLLs installed by the various pieces of software on my machine have been compiled with SafeSEH and which ones haven't. Is there a tool that could give me that information? Otherwise, what would be the best way to code something that does that verification?
Thanks in advance.
You could start out by taking this tool, SafeSEH Dump and examine the output. It shouldn't be too hard to run it batch-like against a list of all your DLLs. You need to create a login to download it. Here's a blog post that references SafeSEH Dump too, but the download link at that page seems dead.
You can also use dumpbin.exe /loadconfig to look for the presence of the Safe Exception Handler table. More info here: http://www.jwsecure.com/dan/2007/07/06/the-safe-exception-handler-table/
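For example, something like this (the DLL name is just a placeholder) quickly shows whether the table is present:

dumpbin.exe /loadconfig some.dll | findstr /C:"Safe Exception Handler"

A non-zero Safe Exception Handler Table address and count in the Load Configuration Directory output indicate the DLL was linked with /SAFESEH.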
I know this is an ancient question that I'm dredging up from the dead, but there is a programmatic solution.
First, parse the PE format. There are all sorts of solutions for this, so I won't go into it; suffice it to say that it's a bigger topic than I can cover here. If you decide to roll your own, be careful of the differences between 32-bit and 64-bit executables.
Once you've got the PE file parsed, skip over the DOS header, the NT signature, the file header (a.k.a. COFF header), and the optional header fields until you get to the Data Directories. Each of these directories has an RVA and a size. Find the RVA and size of the Load Configuration directory (entry index 10, IMAGE_DIRECTORY_ENTRY_LOAD_CONFIG).
Here's where we can start doing detection. If the RVA or size is zero, SafeSEH isn't enabled. If the size is anything other than 0x40, the image was built with a compiler that was (probably) vulnerable to the MS12-001 SafeSEH bypass bug. Don't trust the size value, though - it does not necessarily match the size of the data within, because of some quirks on Windows XP - see the previous link for more details.
If the RVA and size seem sensible, follow the RVA to the Load Configuration structure. Parse that, then read the SEHandlerTable and SEHandlerCount values. If the handler table pointer is null (i.e. zero) then SafeSEH is not enabled. If the handler count is zero, there are no handlers registered, even though SafeSEH might be switched on.
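For illustration, a rough C sketch of that check against a 32-bit image, assuming the whole file has already been read into memory at `base` (the function names are made up for the example; validation and 64-bit handling are omitted):

#include <windows.h>
#include <stdio.h>

/* map an RVA to a file offset by walking the section headers */
static DWORD rva_to_offset(IMAGE_NT_HEADERS32 *nt, DWORD rva)
{
    IMAGE_SECTION_HEADER *sec = IMAGE_FIRST_SECTION(nt);
    for (WORD i = 0; i < nt->FileHeader.NumberOfSections; i++, sec++) {
        if (rva >= sec->VirtualAddress &&
            rva < sec->VirtualAddress + sec->Misc.VirtualSize)
            return rva - sec->VirtualAddress + sec->PointerToRawData;
    }
    return 0;
}

/* returns 1 if SafeSEH appears enabled, 0 otherwise */
int check_safeseh(BYTE *base)
{
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS32 *nt = (IMAGE_NT_HEADERS32 *)(base + dos->e_lfanew);

    if (nt->OptionalHeader.Magic != IMAGE_NT_OPTIONAL_HDR32_MAGIC)
        return 0;   /* SafeSEH only applies to 32-bit images */

    IMAGE_DATA_DIRECTORY dir =
        nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_LOAD_CONFIG];
    if (dir.VirtualAddress == 0 || dir.Size == 0)
        return 0;   /* no load config directory -> SafeSEH not enabled */

    DWORD off = rva_to_offset(nt, dir.VirtualAddress);
    if (off == 0)
        return 0;

    IMAGE_LOAD_CONFIG_DIRECTORY32 *cfg =
        (IMAGE_LOAD_CONFIG_DIRECTORY32 *)(base + off);
    if (cfg->SEHandlerTable == 0)
        return 0;   /* null handler table -> SafeSEH not enabled */

    /* a zero count means SafeSEH is on but no handlers are registered */
    printf("SafeSEH enabled, %lu handler(s) registered\n",
           (unsigned long)cfg->SEHandlerCount);
    return 1;
}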
