Binary file format (ARM GCC) - gcc

What does a binary file produced by ARM GCC for ARM devices contain?
Does it contain any information about the destination address it should be written to?
Or is it just the raw, native program content, with no information about memory location?
If I flash via a bootloader, or in any way through a programmer, can I write a binary file anywhere in flash, or does the file itself dictate a specific memory address through internal information?
If I set up my linker script to place the program at a specific memory address, does that influence the .bin file?

There are several types of files which are called "binary" (at least among my colleagues):
.bin file extension. Contains only the data that would/could be written to a single continuous partition. It doesn't contain any addresses or offsets. When flashing this file to a microcontroller you must explicitly specify the destination address (often 0x0, the beginning of flash). If you need to write to different partitions you need a separate .bin for each of them (or they can be merged into one if the partitions are consecutive). So this file type is like a memory footprint. Note that your linker script still matters: any absolute addresses baked into the code come from it, even though the .bin itself carries no load-address metadata.
Pros: minimal overhead if you have a single continuous partition and the destination address is always the same (so it can be hardcoded)
.hex is the Intel HEX file format. It contains a destination address for each line, and it can be opened in any text editor (see the checksum sketch after this list).
.s19 or .srec is the Motorola S-record format. Very similar to .hex, just a different encoding. It can also include some metadata that won't be flashed.
Pros of the last two types: the best choice if you have several non-contiguous partitions. The files stay compact because gaps between partitions don't need to be stored
For VSCode there are several plugins that can highlight .s19 and .hex files
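To make the .hex format concrete, here is a minimal C sketch that validates a single Intel HEX record: the last byte of each line is the two's complement of the sum of all preceding bytes, so a valid record's bytes sum to zero modulo 256. The sample record is the common documentation example, not from any particular firmware.

```c
/* Verify the per-line checksum of an Intel HEX record.
 * A record looks like ":LLAAAATTDD...CC": byte count, 16-bit address,
 * record type, data bytes, and a final checksum byte chosen so that
 * all bytes on the line sum to 0 modulo 256. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

static int hex_nibble(char c)
{
    if (isdigit((unsigned char)c)) return c - '0';
    c = (char)toupper((unsigned char)c);
    return (c >= 'A' && c <= 'F') ? c - 'A' + 10 : -1;
}

/* Returns 1 if the record's checksum is valid, 0 otherwise. */
int ihex_record_valid(const char *line)
{
    if (line[0] != ':') return 0;

    size_t len = strlen(line);
    while (len > 0 && (line[len - 1] == '\n' || line[len - 1] == '\r'))
        len--;                                     /* strip line endings */
    if (len < 11 || (len - 1) % 2 != 0) return 0;  /* whole bytes only */

    unsigned sum = 0;
    for (size_t i = 1; i + 2 <= len; i += 2) {
        int hi = hex_nibble(line[i]);
        int lo = hex_nibble(line[i + 1]);
        if (hi < 0 || lo < 0) return 0;
        sum += (unsigned)((hi << 4) | lo);
    }
    return (sum & 0xFFu) == 0;   /* sum includes the checksum byte itself */
}

int main(void)
{
    /* Sample record from the Intel HEX documentation. */
    const char *rec = ":10010000214601360121470136007EFE09D2190140\n";
    printf("record is %s\n", ihex_record_valid(rec) ? "valid" : "corrupt");
    return 0;
}
```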

Related

read/write to a disk without a file system

I would like to know if anybody has experience writing data directly to disk without a file system, in a similar way to how data is written to magnetic tape. In particular I would like to know whether data is written in blocks, whether a certain block size needs to be specified (like it does when writing to tape), and whether there is a disk equivalent of a tape file mark, which separates the archives written to a tape.
We are creating a digital archive for over 1 PB of data, and we want redundancy built into the system at as many levels as possible (by storing multiple copies using different storage media and storage formats). Our current system works with tapes, and we have a mechanism for storing the block offset of each archive on each tape so we can restore it.
We'd like to extend the system to work with disk volumes without having to change much of the logic. Another advantage of not having a file system is that the solution would be portable across Operating Systems.
Note that the ability to browse the files on disk is not important in this application, since we are considering this for an archival copy of data which is not accessed independently. Also note that we would already have an index of the files stored in the application database, which we also write to the end of the tape/disk when it is almost full.
EDIT 27/05/2020: It seems that accessing the disk device as a raw/character device is what I'm looking for.
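For illustration, a minimal Linux sketch of that raw-device approach (the device path /dev/sdX and the 64 KiB block size are placeholders): the application addresses the disk purely by block offset, just as with tape, and there is no built-in equivalent of a tape file mark, so archive boundaries have to live in your own index.

```c
/* Minimal sketch (Linux): write one fixed-size block to a raw disk
 * device at a chosen block offset, the way a tape archiver addresses
 * blocks. O_DIRECT bypasses the page cache but requires the buffer,
 * offset, and length to be aligned (typically to the device's logical
 * sector size, often 512 or 4096 bytes). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE 65536   /* example block size, like a tape block */

int main(void)
{
    int fd = open("/dev/sdX", O_WRONLY | O_DIRECT);   /* placeholder device */
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT needs an aligned buffer. */
    void *buf;
    if (posix_memalign(&buf, 4096, BLOCK_SIZE) != 0) return 1;
    memset(buf, 0xAB, BLOCK_SIZE);                    /* payload goes here */

    off_t block_no = 42;   /* block offset tracked in the application index */
    ssize_t n = pwrite(fd, buf, BLOCK_SIZE, block_no * (off_t)BLOCK_SIZE);
    if (n != BLOCK_SIZE) perror("pwrite");

    free(buf);
    close(fd);
    return 0;
}
```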

Checksum inside Altera FPGA .jic file

I'm modifying a firmware file (.jic, a JTAG Indirect Configuration File) with a small algorithm, but changing data inside the file makes it unusable because there is a checksum somewhere in the file that has to be updated.
I need to find where the checksum is inside the .jic file and work out which algorithm is used (CRC32, etc.).
The bits in each byte are reversed, and I inspected both the normal and the bit-reversed file with no success.
Does anyone know, or is there a way to find out, where the checksum data is inside the .jic file?
You need to generate a .rpd file.
This data will be loaded into the FPGA at power-up.
This is what you will see if you read the flash memory byte-by-byte after loading the .jic.
If you have access to the software that creates .jic files (e.g. Quartus), you can create two .jic files with a one-bit difference and compare the two outputs (the two .jic files). That should give you a hint about where the checksum is located (if there is one); see the diff sketch below.
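A minimal sketch of that comparison in plain C (the file names are placeholders): diff the two .jic files byte by byte, and any differing offsets beyond the bit you changed are candidates for the checksum field.

```c
/* Compare two .jic files byte-by-byte and print the offsets that
 * differ. If the source projects differed by one bit, any additional
 * differing bytes are candidates for a checksum field. Stops at the
 * end of the shorter file if the lengths differ. */
#include <stdio.h>

int main(void)
{
    FILE *a = fopen("one.jic", "rb");   /* placeholder names */
    FILE *b = fopen("two.jic", "rb");
    if (!a || !b) { perror("fopen"); return 1; }

    long offset = 0;
    int ca, cb;
    while ((ca = fgetc(a)) != EOF && (cb = fgetc(b)) != EOF) {
        if (ca != cb)
            printf("offset 0x%08lX: 0x%02X vs 0x%02X\n", offset, ca, cb);
        offset++;
    }
    fclose(a);
    fclose(b);
    return 0;
}
```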
Not by starting from a .jic file. But if the data you're trying to update is initialized from a .hex or .mif file, you can use quartus_cdb --update_mif to perform a partial recompilation of your project. (This is also available in the IDE as "Update Memory Initialization File".)

Creating virtual disk with arbitrary size

I want to do some "experimentation" to learn about the file systems starting from FAT16.
The idea is to use a C++ program to manipulate a disk at byte level and then see how it is read by Windows.
In short: format the disk to FAT16, create files, create directories, rename files, delete files, delete directories, change file properties, and see what happens if I tamper with the sector numbers of files, etc. It will all use the C++ ReadFile and WriteFile functions.
Having a "virtual disk" will make things considerably easier, as no real hardware gets corrupted and the disk can be "reset" easily.
Yes, I am an electronics engineer, so I have to work at a low level with hardware.
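As a starting point, here is a minimal Windows C sketch for creating such a blank image file (the name fat16.img and the 32 MiB size are arbitrary choices; 32 MiB is comfortably within FAT16's range). The image can then be patched at any byte offset with SetFilePointerEx/WriteFile and "reset" by simply recreating it.

```c
/* Minimal sketch (Windows): create a blank 32 MiB disk-image file that
 * can stand in for a FAT16 volume during file-system experiments. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *path = "fat16.img";            /* placeholder name */
    LARGE_INTEGER size;
    size.QuadPart = 32LL * 1024 * 1024;        /* 32 MiB */

    HANDLE h = CreateFileA(path, GENERIC_READ | GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("CreateFile failed\n"); return 1; }

    /* Seek to the desired size and set the end-of-file marker there;
     * the file is logically zero-filled up to that point. */
    if (!SetFilePointerEx(h, size, NULL, FILE_BEGIN) || !SetEndOfFile(h)) {
        printf("resize failed\n");
        CloseHandle(h);
        return 1;
    }
    CloseHandle(h);
    printf("created %s (%lld bytes)\n", path, size.QuadPart);
    return 0;
}
```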

Why isn't a copy operation implemented in the kernel?

It's my understanding that most file I/O operations are implemented in the kernel, such as CRUD, move, or remove. However, file copy is not implemented as a kernel-level API.
In order to detect a file copy in the kernel one would need to use a heuristic approach (discussion on this approach), e.g. detecting file reads, file creates, and file writes from the same user with the same file name but different paths.
Why is copy a userland operation?
First, because caring about whether or not two different files have the same content, where one file's content is copied directly from the other, is a user-space concern that has no logical reason to exist inside a kernel.
At best.
Bytes are bytes.
Second, how would the kernel distinguish a file copy from any other transfer between two file descriptors? See the man page for sendfile(). Why should the kernel track whether the calling user invoked sendfile() to send the contents of a file to a TCP socket going who-knows-where, or to another file?
Third, even if the kernel tracked copying a file, what on God's good Earth would it do with such data?
If you care about such file copy events, set up auditing.
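To see why, consider what a user-space copy actually looks like to the kernel: just generic reads and writes on two descriptors. A minimal POSIX sketch follows (on Linux, sendfile(2) or copy_file_range(2) can avoid the user-space buffer, but they are equally generic byte-moving calls, not a "copy this file" notification).

```c
/* A user-space file copy is nothing but a read/write loop over two
 * file descriptors; the kernel sees only generic I/O. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int copy_fd(int in, int out)
{
    char buf[65536];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {                 /* handle short writes */
            ssize_t w = write(out, buf + off, (size_t)(n - off));
            if (w < 0) return -1;
            off += w;
        }
    }
    return n < 0 ? -1 : 0;
}

int main(int argc, char **argv)
{
    if (argc != 3) { fprintf(stderr, "usage: %s src dst\n", argv[0]); return 1; }
    int in = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) { perror("open"); return 1; }
    int rc = copy_fd(in, out);
    close(in);
    close(out);
    return rc ? 1 : 0;
}
```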

How can I find information about a file from logical cluster number in NTFS/FAT32?

I am trying to defragment a single file through Windows defragmentation API ( http://msdn.microsoft.com/en-us/library/aa363911(VS.85).aspx ) but if there is no free space block large enough for my file I would like to move other parts of files to make room for it.
The linked article mentions moving parts of other files but I can't find any information about how to find out which files to move. From the free space bitmap I can find an almost large enough space and I know the logical cluster numbers surrounding it, but from this I can't find out which files are surrounding it and a handle to the files is required to do FSCTL_MOVE_FILE which moves parts of files.
Is there any way, through the API or by parsing the MFT, to find out what file a logical cluster number is part of, and what virtual cluster number in the file corresponds to the logical cluster number found through the bitmap?
The slow but compatible method is to recursively scan all directories for files and call FSCTL_GET_RETRIEVAL_POINTERS on each one, then scan the resulting VCN-LCN mapping for the cluster in question.
Another option would be to query the USN Journal of the drive to get the File Reference IDs, then use FSCTL_GET_NTFS_FILE_RECORD to get the $MFT file record.
I'm currently working on a simple defrag program (written in Java) with the aim of packing the files of a directory (e.g. all files of a large game) close together to reduce loading times and loading lag.
I use a faster method to retrieve the file mappings on the NTFS or FAT32 drive.
I parse the $MFT file directly (the format has some pitfalls), or the FAT32 file allocation table along with the directories.
The trick is to open the raw volume (e.g. "\\.\C:") with CreateFile for fully shared GENERIC_READ access. The resulting handle can then be read with ReadFile and positioned with SetFilePointer at byte granularity. This works only in administrator (elevated) mode.
On NTFS, the $MFT itself might be fragmented, and it is a bit tricky to locate from the boot sector info. I use FSCTL_GET_RETRIEVAL_POINTERS on the C:\$MFT file to get its clusters.
On FAT32, one must parse the boot sector to locate the FAT and the cluster containing the root directory, then parse the directory entries and recursively locate the clusters of the sub-directories.
There is no O(1) way of mapping from block # to file. You need to walk the entire MFT looking for files that contain that block.
Of course, on a live system, once you've read that data it is already out of date, and you must be prepared for failures in the move-data FSCTL.
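For reference, a minimal Windows C sketch of the FSCTL_GET_RETRIEVAL_POINTERS call mentioned above (the file path is a placeholder; real code would loop while the call returns ERROR_MORE_DATA, and would repeat this per file when walking the volume to map an LCN back to a file):

```c
/* Dump a file's VCN -> LCN extents via FSCTL_GET_RETRIEVAL_POINTERS.
 * Checking whether a target LCN falls inside one of these extents is
 * the "slow but compatible" method described above. */
#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("C:\\some\\file.bin", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("open failed\n"); return 1; }

    STARTING_VCN_INPUT_BUFFER in = { 0 };   /* start mapping at VCN 0 */
    /* Room for a handful of extents; loop on ERROR_MORE_DATA in real code. */
    BYTE outBuf[4096];
    RETRIEVAL_POINTERS_BUFFER *rp = (RETRIEVAL_POINTERS_BUFFER *)outBuf;
    DWORD bytes;

    if (DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                        &in, sizeof in, outBuf, sizeof outBuf, &bytes, NULL)) {
        LONGLONG vcn = rp->StartingVcn.QuadPart;
        for (DWORD i = 0; i < rp->ExtentCount; i++) {
            /* An LCN of -1 marks an unallocated (sparse/compressed) run. */
            printf("VCN %lld -> LCN %lld (next VCN %lld)\n",
                   vcn, rp->Extents[i].Lcn.QuadPart,
                   rp->Extents[i].NextVcn.QuadPart);
            vcn = rp->Extents[i].NextVcn.QuadPart;
        }
    } else {
        printf("DeviceIoControl failed: %lu\n", GetLastError());
    }
    CloseHandle(h);
    return 0;
}
```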
