Windows provides a WritePrivateProfileStruct API for writing binary data to an INI file, and a matching GetPrivateProfileStruct API for reading that binary data back.
The binary data is serialised in hex format, followed by a single additional byte which is a checksum. For example:
[ultravnc]
passwd=2AE0C448372D3C1CD2
In that case, the binary data is 2AE0C448372D3C1C and the checksum byte is D2.
How is that checksum calculated?
The checksum is calculated by simply adding up the data bytes and using the low byte of the sum (i.e. the sum mod 256) as the checksum. Nothing more sophisticated than that is involved.
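A quick C++ sketch verifying this against the example above (the data bytes sum to 722, and 722 mod 256 = 210 = 0xD2):

#include <cstdint>
#include <cstdio>

int main() {
    // Data bytes from the passwd= example above.
    const uint8_t data[] = {0x2A, 0xE0, 0xC4, 0x48, 0x37, 0x2D, 0x3C, 0x1C};
    uint8_t sum = 0; // uint8_t arithmetic wraps, giving mod 256 for free
    for (uint8_t b : data)
        sum += b;
    std::printf("%02X\n", sum); // prints D2
}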
Look into this post, which describes a technique to put executable code in the first 128 bytes of a DICOM file, i.e. in the preamble section. This way the file can be viewed as both a DICOM file and a PE executable.
This Git repo demonstrates the same technique; however, it doesn't show the code, only the binaries.
Now my question: how can an executable be kept in only 128 bytes, when I understand from this, this and this SO posts that a minimal exe takes at least a few KB?
From looking at image 1 it appears pretty simple: the valid DOS header is placed in the free area, while the full PE image is embedded later in the file; the author put it between two legitimate DICOM meta entries, for example. The DOS header is really short and has a field named e_lfanew which holds the file offset of the IMAGE_NT_HEADERS. In other words, you don't actually need 128 bytes for the full image; you can embed it anywhere in the file as long as it doesn't interfere with the DICOM data. All that's needed at the start is the DOS header.
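As a small sketch of that mechanism (not code from the post): any PE parser, including the Windows loader, finds the PE headers by reading the 4-byte e_lfanew field at its fixed offset and seeking there, wherever in the file that lands:

#include <cstdint>
#include <cstring>

// e_lfanew lives at fixed offset 0x3C inside IMAGE_DOS_HEADER and holds
// the file offset of IMAGE_NT_HEADERS (the "PE\0\0" signature).
uint32_t ntHeadersOffset(const uint8_t* file) {
    uint32_t e_lfanew;
    std::memcpy(&e_lfanew, file + 0x3C, sizeof(e_lfanew));
    return e_lfanew; // can point anywhere, e.g. past the DICOM metadata
}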
Before answering how to put an executable in 128 bytes, we need to understand a few things.
A DICOM file must have the characters DICM (the magic prefix) at byte offset 128, immediately after the 128-byte preamble, to be recognized as a DICOM file.
A Windows executable file must have the DOS header in the first 64 bytes of the file to be executable, as per the PE (Portable Executable) file format.
Combining the above two points, a new file format called PEDICOM is created, which is both a DICOM file and an executable. The PEDICOM has the architecture shown in the image above.
The PEDICOM keeps the header and the content of the executable in different sections, because an entire executable can't fit inside 128 bytes.
Windows provides the structures for these PE files (declared in the winnt.h header) and Win32 APIs to read/write them section by section.
Creating a PEDICOM file:
The DOS header (IMAGE_DOS_HEADER) has a field named e_lfanew which contains the file offset of the actual PE content. This makes it possible to keep an entire executable split across at least two locations in the file.
The PE header (IMAGE_NT_HEADERS) has the number of sections and the pointers to the sections (code, data, stack, etc.).
Now to answer the original question: an entire executable can't be kept in 128 bytes. However, 128 bytes are sufficient to declare a file as executable; i.e. the DOS header and DOS stub can be kept in the 128-byte preamble while the rest of the executable is kept somewhere else, in this case in a private DICOM tag, with a field in the header pointing to it. This makes the containing file a valid and legitimate executable.
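A minimal C++ sketch of the idea (this is not the PEDICOM authors' code; kNtHeadersOffset is a made-up placeholder for wherever the PE headers are embedded in the DICOM file):

#include <windows.h>
#include <cstring>

const LONG kNtHeadersOffset = 0x2000; // hypothetical offset of the embedded IMAGE_NT_HEADERS

// Write a valid 64-byte DOS header at the start of the 128-byte preamble.
void buildPreambleDosHeader(BYTE* file) {
    IMAGE_DOS_HEADER dos = {};
    dos.e_magic = IMAGE_DOS_SIGNATURE;    // the "MZ" magic
    dos.e_lfanew = kNtHeadersOffset;      // loader follows this to the PE headers
    std::memcpy(file, &dos, sizeof(dos)); // 64 bytes, fits in the preamble
}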
I want to make boot loader code for AVR, which can update firmware over the air.
Now I am able to write to the application area using some fixed data. I have a hex file of the new firmware to be updated. How do I convert that hex file to raw data so that I can update the application using that raw data?
If you're using WinAVR for compilation you may do this using included avr-objcopy:
C:\WinAVR-20100110\bin> avr-objcopy.exe -I ihex -O binary input_file.hex output.bin
If you're developing on Linux, there's a package, avr-binutils, with the avr-objcopy program.
You may use some tool (http://hex2bin.sourceforge.net/ or another hex2bin converter) or write your own hex parser; note that parsing has some caveats for files > 64 KB, where extended address records come into play.
As you pointed out, the hex file is encoded in the Intel HEX format. You have to extract the flash data from the data records. Each record (line) typically holds up to 16 bytes of data, though this may vary.
Note that there are different record types, and some may introduce an address offset, depending on how the flash data is distributed. The Wikipedia description should be enough to get the concept.
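For illustration, here is a minimal C++ sketch of such a parser. It handles only type-00 (data) and type-01 (end-of-file) records and ignores checksums and extended addressing, so it is limited to images under 64 KB:

#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    // Usage: hex2bin input.hex (error handling omitted for brevity).
    std::ifstream in(argv[1]);
    std::vector<uint8_t> flash(0x10000, 0xFF); // erased flash reads as 0xFF

    // Parse one hex byte (two characters) at position i of the record.
    auto hexByte = [](const std::string& s, size_t i) {
        return static_cast<uint8_t>(std::stoi(s.substr(i, 2), nullptr, 16));
    };

    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] != ':') continue;
        uint8_t  count = hexByte(line, 1);                           // byte count
        uint16_t addr  = (hexByte(line, 3) << 8) | hexByte(line, 5); // load address
        uint8_t  type  = hexByte(line, 7);                           // record type
        if (type == 0x01) break;    // end-of-file record
        if (type != 0x00) continue; // skip non-data records
        for (uint8_t i = 0; i < count; ++i)
            flash[addr + i] = hexByte(line, 9 + 2 * i);
    }
    // `flash` now holds the raw image to write to the application area.
}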
I call ParseFromArray on a protocol buffer message. The message is not lacking any field, but ParseFromArray returns false. Why?
I'm assuming you are using C++. ParseFromArray() fails if:
The input data is not in valid protobuf format.
The input data is lacking a required field.
If you are sure that all required fields are set, then it must be the case that your input data is corrupted. You should verify that the bytes and size you are passing into ParseFromArray() are exactly the bytes and size that you got from SerializeToArray() and ByteSize() on the sending side. You will probably find that you are losing some bytes somewhere, or that some bytes got corrupted.
Common reasons for corruption include:
Passing the encoded bytes over a text-only channel. E.g. if you write the data to (or read it from) a file that is not opened in "binary" mode, or if you at some point store the bytes in a Java String, the data will become corrupted, as these channels expect text, and encoded protobufs are not text.
Passing the bytes as a char*, i.e. assuming NUL-termination. Encoded protobufs can contain '\0' bytes, meaning that you cannot represent one as a char* alone -- you must include the size separately.
Serializing to an array that is larger than needed, and then forgetting to pay attention to how much data was actually written. When you call SerializeToArray(), you must also call ByteSize() to see how large the message is, and you must make sure the receiving end receives that size and passes it to ParseFromArray(). Otherwise, the parser will think that the extra bytes at the end of the buffer are part of the message, and will fail to parse them.
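As a sketch of the correct pattern (MyMessage stands in for your own generated message type):

#include <vector>
#include "my_message.pb.h" // hypothetical generated header for MyMessage

bool roundTrip(const MyMessage& msg, MyMessage* out) {
    // Sender side: get the exact encoded size and serialize into it.
    const int size = msg.ByteSize();
    std::vector<char> buf(size);
    msg.SerializeToArray(buf.data(), size);

    // Transmit buf.data() AND size together, over a binary-safe channel.

    // Receiver side: parse exactly `size` bytes -- never rely on
    // NUL-termination or on the capacity of the receiving buffer.
    return out->ParseFromArray(buf.data(), size);
}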
I have a socket server listening for UDP packets from a GSM device. Some of the data comes in as multibyte values, such as time, which needs multiple bytes for accuracy. This is an example:
179,248,164,14
The bytes are represented in decimal notation. My goal is to convert that into seconds:
245692595
I am trying to do that and was told:
"You must take those 4 bytes and place them into a single long integer in little endian format. If you are using Python to read and encode the data, you will need to look at using the .read() and struct.unpack() methods to successfully convert it to an integer. The resulting value is the number of seconds since 01/01/2000."
So, I tried to do this:
%w(179 248 164 14).sort.map(&:to_i).inject(&:+)
=> 605
And I obviously am getting the wrong answer.
You should use the pack and unpack methods to do this:
[179,248,164,14].pack('C*').unpack('I')[0]
# => 245692595
It's not about adding them together, though. You're doing the math wrong. The correct way to do it with inject is this:
[179,248,164,14].reverse.inject { |s,v| s * 256 + v }
# => 245692595
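This works because the value is little-endian: the first byte is the least significant, so the result is 179 + 248*256 + 164*256^2 + 14*256^3 = 245692595.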
Note you will have to account for byte ordering when representing binary numbers that are more than one byte long: unpack('I') uses the host's native byte order, while unpack('V') always reads a 32-bit little-endian value.
If the original data is already a binary string you won't have to perform the pack operation and can proceed directly to the unpack phase.
I have a "stringstream" variable that stores some compressed binary data in gzip format.
I want to decompress this stringstream variable in memory.
First of all, for in-memory decompression of binary data in gzip format, what third party library do you suggest to use ?
I noticed zlib library for compression/decompression of gzip and deflate formats.
However, the two decompression functions that zlib provides do not seem to meet my needs exactly:
int uncompress (Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen);
int gzread (gzFile file, voidp buf, unsigned len);
The first one (uncompress) requires me to know the length of the decompressed data in advance to properly allocate enough memory for storage. In my case, it is unknown.
On the other hand, the second one (gzread) takes a file as input, not a memory buffer.
What do you suggest for an "efficient" in-memory decompression using zlib or some other library?
Thanks.
There appear to be decompression filters for gzip in the Boost Iostreams library; this might be worth looking into:
http://www.boost.org/doc/libs/1_48_0/libs/iostreams/doc/classes/gzip.html
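A minimal sketch using those filters (assuming `compressed` is your stringstream; Boost.Iostreams pulls the data through the decompressor entirely in memory):

#include <sstream>
#include <string>
#include <boost/iostreams/copy.hpp>
#include <boost/iostreams/filter/gzip.hpp>
#include <boost/iostreams/filtering_streambuf.hpp>

std::string gunzip(std::stringstream& compressed) {
    namespace io = boost::iostreams;
    io::filtering_streambuf<io::input> in;
    in.push(io::gzip_decompressor()); // handles the gzip header/trailer
    in.push(compressed);              // source: the in-memory stream
    std::stringstream decompressed;
    io::copy(in, decompressed);       // pull everything through the filter
    return decompressed.str();
}

Alternatively, plain zlib can decompress gzip data from memory: call inflateInit2() with windowBits set to 16+MAX_WBITS to enable gzip header processing, then loop over inflate(), growing the output buffer until it returns Z_STREAM_END. That avoids having to know the decompressed size in advance.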