How can I create a byte signature for ClamAV?

I want to get a byte sequence out of the .text section of an object file and turn it into a signature. I want to execute ClamAV's clamscan with this signature to find other object files containing the same byte sequence.
With objdump the byte sequence looks like this:
A byte sequence for this example could look like this:
55 48 89 e5 48 83 ec 10 bf 0a 00 00 00 e8 ?? ?? ?? ?? 48 89 45 f8 c9 c3
the ?? being wildcard placeholders for the variable bytes.
I didn't find a way to do it with sigtool. Is there another tool for that, or do I have to do it manually and if so in which form do I have to save the signatures (format within the signature database and format of the database itself)?

In the end I wrote a script that does this task myself; I didn't find a way to make sigtool do it for me. The script ran through the objdump output and replaced the variable bytes with wildcards. I stored the result in a signature database, and with this database I could identify which library was statically linked, using clamscan in binary mode (even if someone strips out the library names).
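For reference, the core of such a script can be sketched in a few lines. This relies on ClamAV's .ndb extended-signature format (SigName:TargetType:Offset:HexSignature, where target type 0 means any file, offset * means anywhere in the file, and ?? is a single-byte wildcard); the signature name here is made up:

```python
# Sketch: turn a spaced byte sequence (with ?? wildcards) into a ClamAV
# extended signature line (.ndb format).
# .ndb line format: SigName:TargetType:Offset:HexSignature
#   TargetType 0 = any file, Offset * = anywhere in the file.

def make_ndb_signature(name, byte_seq, target=0, offset="*"):
    # Normalize "55 48 89 ... ?? ??" into one contiguous lowercase hex
    # string; ClamAV treats "??" as a single-byte wildcard.
    body = "".join(tok.lower() for tok in byte_seq.split())
    return f"{name}:{target}:{offset}:{body}"

seq = ("55 48 89 e5 48 83 ec 10 bf 0a 00 00 00 "
       "e8 ?? ?? ?? ?? 48 89 45 f8 c9 c3")
print(make_ndb_signature("Lib.Example", seq))
```

Save the printed line to a file such as mysigs.ndb and run clamscan -d mysigs.ndb against the files you want to check.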

Related

How do I read a record's payload from an NXP MIFARE Ultralight tag?

I've got a couple of NXP MIFARE Ultralight tags (type 2) that contain some data in the first record. I'm using an ACS 1252U to read the tags, and I've tried manually iterating over some of the data to get a sense of what's on the tag, but I can't seem to figure out how to determine where the record begins and where it ends.
Here's some detailed information on the NFC tag and the record I'm trying to read:
And here's some data from one of my tags starting at page 04:
03 ff 01 5a
c4 0f 00 00
01 45 62 63
61 72 64 2e
6e 65 74 3a
62 63 61 72
64 39 39 37
30 31 1e 34
Now if I convert all of that to ASCII, I get the following:
ÿZÄEbcard.net:bcard997014
All I know is that the actual data I'm after (or the payload) begins at 99701, but how in the world am I supposed to know that? Surely there's something in the data that can tell me where the record's payload starts and where it stops?
The data follows the Type 2 Tag specification just fine. A Type 2 tag has its data pages starting at page/block 4. Data is embedded into TLV structures.
In your case, the first byte of page 4 is the tag of an NDEF Message TLV (0x03). The next byte indicates that the length field is encoded in 3-byte format. Consequently, the length is 0x015A (= 346 bytes). Thus, you have to read the next 87 pages (= ceil(346/4), since the data starts at a page boundary) to retrieve the complete NDEF message.
The NDEF message itself consists of 1 NDEF record (the header byte 0xC4 indicates that the record is the first (MB=1) and last (ME=1) record of the message). The record is an NFC Forum external type (TNF=4 in the header byte). The type name has a length of 0x0F (= 15 bytes). The payload has a length of 0x0145 (= 325 bytes). Consequently, the type name is "bcard.net:bcard" and the payload is '39 39 37 30 31 1E 34 ...' (ITN doesn't seem to have published a specification on how their bcard type is structured).
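The decoding above can be sketched in a few lines; this is a simplified parser for exactly this case (3-byte TLV length, single record with SR=0 and IL=0), not a general NDEF implementation:

```python
# Simplified decoder for the tag data above: an NDEF Message TLV (0x03)
# followed by a single NDEF record with SR=0 and IL=0.
data = bytes.fromhex(
    "03ff015a" "c40f0000" "01456263" "6172642e"
    "6e65743a" "62636172" "64393937" "30311e34")

assert data[0] == 0x03                        # NDEF Message TLV tag
assert data[1] == 0xFF                        # 3-byte length format follows
tlv_len = int.from_bytes(data[2:4], "big")    # 0x015A = 346 bytes

hdr = data[4]                                 # record header byte 0xC4
mb, me = bool(hdr & 0x80), bool(hdr & 0x40)   # first and last record
sr, il = bool(hdr & 0x10), bool(hdr & 0x08)   # short record, ID length
tnf = hdr & 0x07                              # 4 = NFC Forum external type

type_len = data[5]                            # 0x0F = 15
payload_len = int.from_bytes(data[6:10], "big")  # 4-byte form since SR=0
type_name = data[10:10 + type_len].decode("ascii")
payload_start = data[10 + type_len:]          # payload begins here

print(tlv_len, tnf, payload_len, type_name, payload_start)
```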
See How to interpret NDEF content on Mifare Classic 1K on how to decode these TLV structures and the NDEF message.

Not getting AFL for Visa Contactless Application?

I am not getting an AFL in the GPO response for a Visa contactless application.
GPO Request as Below:
Request :80 A8 00 00 12 83 10 B6 60 40 00 00 00 00 01 00 00 00 00 38 39 30 31 00
Tag 9F 66: Terminal Transaction Qualifiers : B6 60 40 00
Tag 9F 02: Transaction Amount : 00 00 00 01 00 00
Tag 5F 2A: Transaction Currency Code : 03 56
Tag 9F 37: Unpredictable Number : 38 39 30 31
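For reference, a sketch of how such a GPO command is assembled: the concatenated PDOL values are wrapped in a tag-83 template, and Lc covers the template header plus the data. This follows the tag breakdown above (note that the pasted request shows 00 00 in the 5F2A position rather than 03 56; the sketch follows the tag values as listed):

```python
# Sketch: assembling the GPO command from the tag values listed above.
pdol_data = bytes.fromhex(
    "B6604000"        # 9F66 Terminal Transaction Qualifiers
    "000000010000"    # 9F02 Amount, Authorised
    "0356"            # 5F2A Transaction Currency Code
    "38393031")       # 9F37 Unpredictable Number

# Wrap the PDOL data in a tag-83 template, then build the command APDU:
# CLA=80 INS=A8 P1=00 P2=00 Lc=<template length> <template> Le=00
template = bytes([0x83, len(pdol_data)]) + pdol_data
apdu = bytes([0x80, 0xA8, 0x00, 0x00, len(template)]) + template + b"\x00"
print(apdu.hex().upper())
```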
Getting an AFL is not mandatory. If you do not get an AFL, you are not expected to do any READs. You cannot perform some functions, such as ODA, since you won't have the data associated with them, but you can proceed with the available data as such.
As per the Visa specification (VCPS), the AFL is not mandatory.
If it is not returned in the GPO response, the kernel shall skip READ RECORD and proceed to Card Read Complete.
Your Terminal Transaction Qualifier byte 1 bit 1 is set to zero, meaning "Offline Data Authentication for Online Authorizations not supported". Try setting it to 1: B6 60 40 00 --> B7 60 40 00.
I was having the same issue and this was enough to receive an AFL.
I am experimenting now with Visa contactless, Get Processing Options, PDOL, and Read Record commands.
Here is what I found:
Visa Contactless has data accessible via Read Record in either rec 1 or 2, in file 1. You do not need to issue GPO to get this data.
A more complicated case is Visa Contactless inside Google Pay.
Unlike the simple case of a PDOL with 4 elements, this "card" application requests a PDOL with over 20 elements. So far I have not been able to guess the proper values for all of them, construct a valid PDOL, and get an AFL in the GPO response with SW 9000.
The application returns 0 bytes for each Read Record I tried, and so far I cannot find which record file contains application data.

win32 singleton with std containers CRT false memory leak? [duplicate]

It seems whenever there are static objects, _CrtDumpMemoryLeaks returns a false positive claiming it is leaking memory. I know this is because they do not get destroyed until after the main() (or WinMain) function. But is there any way of avoiding this? I use VS2008.
I found that if you tell it to check memory automatically after the program terminates, it allows all the static objects to be accounted for. I was using log4cxx and boost which do a lot of allocations in static blocks, this fixed my "false positives"...
Add the following line, instead of invoking _CrtDumpMemoryLeaks, somewhere in the beginning of main():
_CrtSetDbgFlag ( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
For more details on usage and macros, refer to MSDN article:
http://msdn.microsoft.com/en-us/library/5at7yxcs(v=vs.71).aspx
Not a direct solution, but in general I've found it worthwhile to move as much allocation as possible out of static initialization time. It generally leads to headaches (initialization order, de-initialization order etc).
If that proves too difficult you can call _CrtMemCheckpoint (http://msdn.microsoft.com/en-us/library/h3z85t43%28VS.80%29.aspx) at the start of main(), and _CrtMemDumpAllObjectsSince
at the end.
1) You said:
It seems whenever there are static objects, _CrtDumpMemoryLeaks returns a false positive claiming it is leaking memory.
I don't think this is correct. EDIT: Static objects are not created on the heap. END EDIT: _CrtDumpMemoryLeaks only covers the CRT heap, so these objects are not supposed to produce false positives.
However, it is another thing if static variables are objects which themselves hold some heap memory (if for example they dynamically create member objects with operator new()).
2) Consider using _CRTDBG_LEAK_CHECK_DF in order to activate memory leak check at the end of program execution (this is described here: http://msdn.microsoft.com/en-us/library/d41t22sb(VS.80).aspx). I suppose then memory leak check is done even after termination of static variables.
Old question, but I have an answer. I am able to split the report in false positives and real memory leaks. In my main function, I initialize the memory debugging and generate a real memory leak at the really beginning of my application (never delete pcDynamicHeapStart):
#include <crtdbg.h>    // _CrtSetDbgFlag
#include <string.h>    // strcpy_s

int main()
{
    _CrtSetDbgFlag( _CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF );
    // intentionally never deleted: marks where real heap activity begins
    char* pcDynamicHeapStart = new char[ 17u ];
    strcpy_s( pcDynamicHeapStart, 17u, "DynamicHeapStart" );
    ...
After my application is finished, the report contains
Detected memory leaks!
Dumping objects ->
{15554} normal block at 0x00000000009CB7C0, 80 bytes long.
Data: < > DD DD DD DD DD DD DD DD DD DD DD DD DD DD DD DD
{14006} normal block at 0x00000000009CB360, 17 bytes long.
Data: <DynamicHeapStart> 44 79 6E 61 6D 69 63 48 65 61 70 53 74 61 72 74
{13998} normal block at 0x00000000009BF4B0, 32 bytes long.
Data: < ^ > E0 5E 9B 00 00 00 00 00 F0 7F 9C 00 00 00 00 00
{13997} normal block at 0x00000000009CA4B0, 8 bytes long.
Data: < > 14 00 00 00 00 00 00 00
{13982} normal block at 0x00000000009CB7C0, 16 bytes long.
Data: < # > D0 DD D6 40 01 00 00 00 90 08 9C 00 00 00 00 00
...
Object dump complete.
Now look at line "Data: <DynamicHeapStart> 44 79 6E 61 6D 69 63 48 65 61 70 53 74 61 72 74".
All reported leaks below that line are false positives; all above it are real leaks.
A false positive doesn't mean nothing was allocated (it could be a statically linked library which allocates heap memory at startup and never frees it), but you cannot eliminate such a leak, and in practice it is not a problem at all.
Since adopting this approach, I have not had leaking applications any more.
I am providing it here in the hope that it helps other developers build stable applications.
Can you take a snapshot of the currently allocated objects every time you want a list? If so, you could remove the initially allocated objects from the list when you are looking for leaks that occur in operation. In the past, I have used this to find incremental leaks.
Another solution might be to sort the leaks and only consider duplicates for the same line of code. This should rule out static variable leaks.
Jacob
Ach. If you are sure that _CrtDumpMemoryLeaks() is lying, then you are probably correct. Most alleged memory leaks that I see are down to incorrect calls to _CrtDumpMemoryLeaks(). I agree entirely with the following: _CrtDumpMemoryLeaks() dumps all open handles, but your program probably still has handles open, so be sure to call _CrtDumpMemoryLeaks() only when all handles have been released. See http://www.scottleckie.com/2010/08/_crtdumpmemoryleaks-and-related-fun/ for more info.
I can recommend Visual Leak Detector (it's free) rather than using the stuff built into VS. My problem was using _CrtDumpMemoryLeaks with an open source library that created 990 lines of output, all false positives so far as I can tell, as well as some things coming from boost. VLD ignored these and correctly reported some leaks I added for testing, including in a native DLL called from C#.

Hex file and disassembly discrepancy

I have parsed hex files for the purpose of bootloading before. This is my first time with a hex file generated using Microchip's XC32 tool chain. Right away I noticed what appears to be a discrepancy between the hex file and the disassembly.
The first 3 lines of the hex file:
:020000040000fa
:020000041d00dd
:10000000030000100000000040f3060000000000a4
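For reference, each of these lines is an Intel HEX record: a colon, a byte count, a 16-bit address, a record type, the data bytes, and a checksum (the two's complement of the sum of all preceding bytes). A small sketch that parses and verifies the records above:

```python
# Sketch: parse one Intel HEX record and verify its checksum.
# Layout after ':' is: count(1) address(2) type(1) data(count) checksum(1);
# the sum of all bytes including the checksum must be 0 mod 256.
def parse_record(line):
    raw = bytes.fromhex(line[1:])
    assert sum(raw) % 256 == 0, "checksum mismatch"
    count = raw[0]
    addr = int.from_bytes(raw[1:3], "big")
    rectype = raw[3]
    return count, addr, rectype, raw[4:4 + count]

# record type 0x04 = extended linear address (upper 16 bits: 0x1D00)
count, addr, rectype, data = parse_record(":020000041d00dd")

# record type 0x00 = data; bytes land at ascending addresses 0x1D000000..
count2, addr2, rectype2, data2 = parse_record(
    ":10000000030000100000000040f3060000000000a4")
print(rectype, data.hex(), rectype2, data2.hex())
```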
From the listing file:
9d000000 <_reset>:
9d000000: 10000003 b 9d000010 <__reset_switch_isa>
9d000004: 00000000 nop
9d000008 <__reset_micromips_isa>:
9d000008: f340 0006 jalx 9d000018 <_startup>
9d00000c: 0000 0000 nop
Notice that address 9d000008 looks like it should contain 0x06 in the listing file. However, the hex file seems to indicate 0x40 at this location. The following 3 bytes are also not in the expected order.
:10 0000 00 03 00 00 10 00 00 00 00 40 f3 06 00 00 00 00 00 a4
When I look though the file other records are as expected, but the bytes pertaining to this jalx instruction word seem to be out of order. Can someone set me straight?
Thanks!
UPDATE:
Another perplexing data point. If I flash the hex file into the part using the debugger (not using my bootloader). Then if I view the execution memory and disassembly listing, I see the following:
Address Instruction Disassembly
1D00_0000 10000003 BEQ ZERO, ZERO, 0x1D000010
1D00_0004 00000000 NOP
1D00_0008 0006F340 SLL S8, A2, 13
1D00_000C 00000000 NOP
When the IDE reinterprets the code it has just programmed, it shows an SLL instruction, not a JALX. This is compiler-generated startup code, so I cannot be sure what it should be. The byte order matches the hex file, not the listing file, so the Microchip tools interpret the hex file the same way I would, but this does not match the listing file.
I posted this question on the microchip.com forum and a couple users there provided the answer.
Basically, the JALX instruction in the listing file excerpt is formatted as microMIPS, not MIPS32, so there is not actually a discrepancy between the listing file and the hex file. The hex file is interpreted with each byte written into ascending address locations, as I was attempting to do. However, the memory view in the update does not interpret the instructions as microMIPS, so the disassembly for that instruction is incorrect when viewed through the IDE. When the JALX is executed, a flag in the CPU informs the processor to treat the target instructions as microMIPS.
For more information, see the excellent responses I received at:
http://www.microchip.com/forums/FindPost/986740
It looks like the listing is using Little Endian for those 16-bit values. If that were meant to be a 32-bit value, the 0x06 would come first.
As long as that's the case, there's not really a problem.
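This can be verified directly: the listing shows the microMIPS JALX as two 16-bit halfwords (f340, 0006), and storing each halfword little-endian reproduces the byte order seen in the hex file. A sketch:

```python
import struct

# The listing shows the microMIPS JALX as two 16-bit halfwords: f340 0006.
# Stored little-endian per halfword, they produce the hex-file byte order.
halfwords = struct.pack("<HH", 0xF340, 0x0006)
print(halfwords.hex())   # the bytes as they appear at address 0x1d000008

# Read back as a single little-endian 32-bit word (what the IDE's MIPS32
# view did), this becomes 0x0006F340, which disassembles as SLL, not JALX.
word32 = struct.unpack("<I", halfwords)[0]
print(hex(word32))
```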

PE 101 explanation of addresses to windows api calls

I am trying to build a program that will give more information about a file and possibly a disassembler. I looked at https://code.google.com/p/corkami/wiki/PE101 to get more information and after reading it a few times I am understanding most of it. the part I don't understand is the call addresses to windows api. for example how did he know that the instruction call [0x402070] was an api call to messagebox? I understand how to count the addresses to the strings and the 2 push commands to strings make sense, but not the dll part.
I guess what I am trying to say is I don't understand the part that says "imports structures"
(the part I drew a box around in yellow) If any one could please explain to me how 0x402068 points to exitProcess and 0x402070 points to MessageBoxA, this would really help me. thanks
The loader (a part of the Windows OS) "patches up" the Import Address Table (IAT) before starting the sample program; that is when the real addresses of the library procedures appear in the memory locations 0x402068 and 0x402070. Please note that the imports reside in section nobits in simple.asm:
section nobits vstart=IMAGEBASE + 2 * SECTIONALIGN align=FILEALIGN
The section with imports after load starts at virtual address (IMAGEBASE=400000h)+2*(SECTIONALIGN=1000h)=0x402000 .
The yasm source of the example is quite unusual and the diagram is also not the best place to learn PE format from. Please start by reading Wikipedia:Portable_Executable first (a short article). It has links to the full documents, so I will only make some short notes here.
You might also want to use Cheat Engine to inspect the sample. Launch simple.exe, then attach to the process with Cheat Engine, press Memory View, then menu Tools->Dissect PE headers, then button Info, and look at tab Imports. In the memory dump, go to address 00402000 (press Ctrl+G, type 00402000, press Enter):
00402068: E4 39 BE 75 00 00 00 00 69 5F 47 77 00 00 00 00 6B 65 72 6E 65 6C 33 32 2E
Note the values at these locations
00402068: 0x75BE39E4 (on my computer) = the address of KERNEL32.ExitProcess
00402070: 0x77475F69 (in my case only) = the address of user32.MessageBoxA
Notice the text "kernel32.dll user32.dll" right after them. Now look at the hexdump of simple.exe (I would use Far Manager) and spot the same location before strings "kernel32.dll user32.dll". The values there are
0000000450: 69 74 50 72 6F 63 65 73 │ 73 00 00 00 4D 65 73 73 itProcess Mess
0000000460: 61 67 65 42 6F 78 41 00 │ 4C_20_00_00 00 00 00 00 ageBoxA L
0000000470: 5A_20_00_00 00 00 00 00 │ 6B 65 72 6E 65 6C 33 32 Z kernel32
0000000480: 2E 64 6C 6C 00 75 73 65 │ 72 33 32 2E 64 6C 6C 00 .dll user32.dll
0000000468: 0x0000204C — the Relative Virtual Address of dw 0;db 'ExitProcess', 0
0000000470: 0x0000205A — the Relative Virtual Address of dw 0;db 'MessageBoxA', 0
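The two on-disk IAT slots can be decoded the same way: they hold little-endian 32-bit RVAs, which the loader later overwrites with the real procedure addresses. A sketch using the values from the hexdump:

```python
import struct

IMAGEBASE = 0x400000

# On disk, the IAT slots hold little-endian RVAs pointing at the
# hint/name entries ("ExitProcess", "MessageBoxA") seen in the hexdump.
slot_exitprocess = bytes.fromhex("4C200000")   # at file offset 0x468
slot_messageboxa = bytes.fromhex("5A200000")   # at file offset 0x470

rva1 = struct.unpack("<I", slot_exitprocess)[0]
rva2 = struct.unpack("<I", slot_messageboxa)[0]

# After the image is mapped, RVA + image base gives the virtual address
# of each name entry; the IAT slots themselves end up at 0x402068 and
# 0x402070, where the loader writes the real addresses.
print(hex(rva1), hex(IMAGEBASE + rva1))
print(hex(rva2), hex(IMAGEBASE + rva2))
```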
The loader has changed these values from what they were in the file after loading into memory. The Microsoft document pecoff.doc says about it:
6.4.4. Import Address Table
The structure and content of the Import Address Table are identical to that of the Import Lookup Table, until the file is bound. During binding, the entries in the Import Address Table are overwritten with the 32-bit (or 64-bit for PE32+) addresses of the symbols being imported: these addresses are the actual memory addresses of the symbols themselves (although technically, they are still called “virtual addresses”). The processing of binding is typically performed by the loader.

Resources