I'm modifying a firmware file (a .jic, JTAG Indirect Configuration File) with a small algorithm, but changing data inside the file makes it unusable because there is a checksum somewhere in the file that has to be updated.
I need to find where the checksum is inside the .jic file and work out which algorithm is used (CRC32, etc.).
The bits in each byte are reversed, and I have inspected both the normal and the bit-reversed file with no success.
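For clarity, the per-byte bit reversal I mean looks like this (a minimal sketch, my own naming):

    #include <stdint.h>

    /* Reverse the bit order within one byte, e.g. 0xB0 -> 0x0D. */
    static uint8_t reverse_bits(uint8_t b)
    {
        b = (uint8_t)(((b & 0xF0) >> 4) | ((b & 0x0F) << 4)); /* swap nibbles */
        b = (uint8_t)(((b & 0xCC) >> 2) | ((b & 0x33) << 2)); /* swap bit pairs */
        b = (uint8_t)(((b & 0xAA) >> 1) | ((b & 0x55) << 1)); /* swap adjacent bits */
        return b;
    }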
Does anyone know, or is there a way to find out, where the checksum data is located inside the .jic file?
You need to generate a .rpd file.
This data will be loaded into the FPGA at power-up.
This is what you will see if you read the flash memory byte-by-byte after loading the .jic.
If you have access to the software that creates .jic files (e.g. Quartus), you can create two .jic files with a one-bit difference and compare the two outputs (the two .jic files). That should give you a hint about where the checksum is located (if there is one).
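A minimal sketch of that comparison (file names are placeholders): it prints every byte offset where the two files differ, which should expose both your changed bit and any checksum bytes that moved with it.

    #include <stdio.h>

    /* Compare two files byte-by-byte and print offsets that differ.
     * Usage: jicdiff one.jic two.jic (stops at the end of the shorter file). */
    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s a.jic b.jic\n", argv[0]);
            return 1;
        }
        FILE *a = fopen(argv[1], "rb"), *b = fopen(argv[2], "rb");
        if (!a || !b) { perror("fopen"); return 1; }

        long offset = 0;
        int ca, cb;
        while ((ca = fgetc(a)) != EOF && (cb = fgetc(b)) != EOF) {
            if (ca != cb)
                printf("offset 0x%08lX: 0x%02X vs 0x%02X\n", offset, ca, cb);
            offset++;
        }
        fclose(a);
        fclose(b);
        return 0;
    }

Isolated differing bytes near your edit are your data; a short run of differing bytes somewhere else is a good checksum candidate.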
Not by starting from a .jic file. But if the data you're trying to update is initialized from a .hex or .mif file, you can use quartus_cdb --update_mif to perform a partial recompilation of your project. (This is also available in the IDE as "Update Memory Initialization File".)
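For example (project and revision names are placeholders), the usual command-line flow is to update the memory contents and then re-run the assembler so the programming files are regenerated with correct checksums:

    quartus_cdb my_project -c my_revision --update_mif
    quartus_asm my_project -c my_revision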
I'm creating my own custom binary file format.
I use the RIFF standard for encoding the data, and it seems to work pretty well.
But there are some additional requirements:
Binary files can be large, up to 500 MB.
Data is saved into the binary file in real time, at intervals, whenever data in the application changes.
The application may run in the browser.
The problem I face is that when I want to save data, the application has to read everything from memory and rewrite the whole binary file.
This isn't a problem while the data is small, but as it grows, the real-time saving feature does not scale.
So the main requirements for this binary format are:
Able to partially read the binary file (because the file is huge).
Able to partially write changed data into the file without rewriting the whole file.
A streaming protocol like .m3u8 is not an option; we can't split the file into chunks and reference them with separate URLs.
Any guidance on how to design a binary file format that scales in this scenario?
There is an answer from a random user that has been deleted here.
It seems great to me.
You can claim your answer back and I'll delete this one.
He said:
If we design the file to support appending, then we are able to add whatever data we want without needing to rewrite the whole file.
This idea gives me a great starting point.
So I can append more and more changes to the end of the file,
then mark the old chunks of data in the middle of the file as obsolete.
I can then reuse those obsolete slots later if I want to.
The downside is that I need to clean up the obsolete slots when I get a chance to rewrite the whole file.
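To make that concrete, here is a minimal sketch of a record header for such an append-only layout (all names and field sizes are my own assumptions): a reader keeps the newest live record for each chunk ID, and an update appends a new record and then flips the single obsolete byte of the superseded one in place.

    #include <stdint.h>

    /* Header written in front of every chunk appended to the file. */
    typedef struct {
        uint32_t magic;       /* marks a record boundary, e.g. "CHNK" */
        uint32_t chunk_id;    /* logical ID; newest live record wins */
        uint64_t payload_len; /* payload bytes following this header */
        uint8_t  obsolete;    /* 0 = live, 1 = superseded (set in place) */
    } record_header_t;

Compaction is then just a matter of copying every live record into a fresh file and swapping it in.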
What does a binary file produced by ARM GCC for ARM devices contain?
Does it contain any information about the destination address it should be written to?
Or is it just the raw program content, with no information about memory location?
If I have a bootloader, or flash it through a programmer, can I write a binary file anywhere in flash, or does the file itself carry internal information about a specific memory address?
If I set up my linker script to place the program at a specific memory address, does that affect the .bin file?
There are several types of files that are called "binary" (at least among my colleagues):
.bin file extension. Contains only the data that would/could be written to a single continuous partition. It doesn't contain any addresses or offsets. When flashing this file to a microcontroller, you must explicitly specify the destination address (often 0x0, the beginning of flash). If you need to write to different partitions, you need a separate .bin for each of them (or one merged file, if the partitions are consecutive). So this file type is like a memory footprint.
Pros: minimal overhead if you have a single continuous partition and the destination address is always the same (so it can be hardcoded).
.hex is the Intel HEX file format. It contains a destination address for each line, and it can be opened in any text editor (see the example record after this list).
.s19 or .srec is the Motorola S-record format. Very similar to .hex, just a different encoding. It can also include some metadata that won't be flashed.
Pros of the last two types: the best choice if you have several non-contiguous partitions. They can be compacted by removing the gaps.
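As an illustration of the .hex format above, here is a hedged sketch that validates one Intel HEX record. Each line holds a byte count, a 16-bit address, a record type, the data, and a final checksum byte chosen so that all bytes on the line sum to zero modulo 256.

    static int hexval(char c)
    {
        if (c >= '0' && c <= '9') return c - '0';
        if (c >= 'A' && c <= 'F') return c - 'A' + 10;
        if (c >= 'a' && c <= 'f') return c - 'a' + 10;
        return -1;
    }

    /* Verify one record, e.g. ":10010000214601360121470136007EFE09D2190140".
     * Sums every byte on the line, including the trailing checksum. */
    static int hex_record_ok(const char *line)
    {
        if (*line++ != ':') return 0;
        unsigned sum = 0;
        while (hexval(line[0]) >= 0 && hexval(line[1]) >= 0) {
            sum += (unsigned)(hexval(line[0]) * 16 + hexval(line[1]));
            line += 2;
        }
        return (sum & 0xFF) == 0;
    }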
For VS Code there are several extensions that can highlight .s19 and .hex files.
I need to get any information about where a file is physically located on an NTFS disk: an absolute offset, a cluster ID, anything.
I need to scan the disk twice: once to enumerate the allocated files, and a second time opening the partition directly in raw mode to find the remaining data (from deleted files). I need a way to recognize that data found in the raw scan is the same data I already handled as a file in the first pass. Since I'm scanning the disk in raw mode, the offset of the data I find should be convertible to an offset within a file (given information about the disk geometry). Is there any way to do this? Other solutions are accepted as well.
Right now I'm playing with FSCTL_GET_NTFS_FILE_RECORD, but I can't make it work at the moment, and I'm not really sure it will help.
UPDATE
I found the following function:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa364952(v=vs.85).aspx
It returns a structure that contains the nFileIndexHigh and nFileIndexLow members.
The documentation says:
The identifier that is stored in the nFileIndexHigh and nFileIndexLow members is called the file ID. Support for file IDs is file system-specific. File IDs are not guaranteed to be unique over time, because file systems are free to reuse them. In some cases, the file ID for a file can change over time.
I don't really understand what this is. I can't connect it to the physical location of the file. Is it possible to later extract this file ID from the MFT?
UPDATE
Found this:
This identifier and the volume serial number uniquely identify a file. This number can change when the system is restarted or when the file is opened.
This doesn't satisfy my requirements, because I'm going to open the file, and the fact that the ID might change doesn't make me happy.
Any ideas?
Use the Defragmentation IOCTLs. For example, FSCTL_GET_RETRIEVAL_POINTERS will tell you the extents which contain file data.
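A minimal sketch of that call (error handling trimmed, single fixed-size buffer, path is a placeholder): it maps the file's virtual cluster numbers (VCNs) to logical cluster numbers (LCNs) on the volume.

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileA("C:\\path\\to\\file.bin", FILE_READ_ATTRIBUTES,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                               OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        STARTING_VCN_INPUT_BUFFER in = { 0 };   /* start at VCN 0 */
        BYTE out[4096];
        RETRIEVAL_POINTERS_BUFFER *rp = (RETRIEVAL_POINTERS_BUFFER *)out;
        DWORD got;

        if (DeviceIoControl(h, FSCTL_GET_RETRIEVAL_POINTERS,
                            &in, sizeof(in), rp, sizeof(out), &got, NULL)) {
            LONGLONG vcn = rp->StartingVcn.QuadPart;
            for (DWORD i = 0; i < rp->ExtentCount; i++) {
                LONGLONG next = rp->Extents[i].NextVcn.QuadPart;
                LONGLONG lcn  = rp->Extents[i].Lcn.QuadPart; /* -1 = sparse */
                printf("extent %lu: VCN %lld..%lld -> LCN %lld\n",
                       (unsigned long)i, vcn, next - 1, lcn);
                vcn = next;
            }
        }
        CloseHandle(h);
        return 0;
    }

Multiply an LCN by the cluster size (from GetDiskFreeSpace: sectors-per-cluster times bytes-per-sector) to get a byte offset from the start of the volume; heavily fragmented files may need repeated calls with an advancing StartingVcn.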
I need to do an integrity check on a single big file. I have read the SHA example code for Android, but it needs a second file to hold the resulting digest. Is there a method that uses only a single file?
I need a simple and quick method. Can I merge the two files into a single file?
The file is binary and the file name is fixed. I can get the file size using fstat. My problem is that I can only have a single file. Maybe I should use a CRC, but it could be slow because the file is large.
My objective is to ensure that the file on the SD card is not corrupt. I write it on a PC and read it on an embedded platform. The file is around 200 MB.
You have to store the hash somehow, no way around it.
You can try writing it to the file itself (at the beginning or end) and skipping it when performing the integrity check. This can work for things like XML files, but not for images or binaries whose format you don't control.
You can also put the hash in the filename, or just keep a database of all your hashes.
It really all depends on what your program does and how it's set up.
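For the "write it at the end" option with a binary file you control, a hedged sketch: the PC appends a 4-byte CRC-32 after the payload, and the reader streams the file through the same CRC, covering everything except the last 4 bytes, then compares. Streaming in small chunks keeps memory use low even for a 200 MB file.

    #include <stdint.h>
    #include <stdio.h>

    /* Bitwise CRC-32 (polynomial 0xEDB88320); chainable across calls, seed 0. */
    static uint32_t crc32_update(uint32_t crc, const uint8_t *p, size_t n)
    {
        crc = ~crc;
        while (n--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1));
        }
        return ~crc;
    }

    /* File layout assumed: [payload][CRC-32, 4 bytes little-endian]. */
    static int verify_file(const char *path)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return 0;
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        if (size < 4) { fclose(f); return 0; }
        fseek(f, 0, SEEK_SET);

        uint8_t buf[4096];
        long left = size - 4;          /* hash the payload only */
        uint32_t crc = 0;
        while (left > 0) {
            size_t want = left > (long)sizeof(buf) ? sizeof(buf) : (size_t)left;
            if (fread(buf, 1, want, f) != want) { fclose(f); return 0; }
            crc = crc32_update(crc, buf, want);
            left -= (long)want;
        }
        uint8_t t[4];
        if (fread(t, 1, 4, f) != 4) { fclose(f); return 0; }
        fclose(f);
        uint32_t stored = (uint32_t)t[0] | (uint32_t)t[1] << 8
                        | (uint32_t)t[2] << 16 | (uint32_t)t[3] << 24;
        return crc == stored;
    }

A table-driven CRC-32 (or a hardware CRC unit, if your microcontroller has one) will be much faster than this bitwise version if the check turns out to be too slow.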
I need to get a list of all the files on a drive. I am using a recursive solution, but it is taking a lot of time. Is it possible to get the names and locations of all the files on an NTFS drive from its Master File Table? I think that would be very fast. Any suggestions?
There is a tool that searches the MFT directly; it's called ndff. I have used it before and it is very fast.
Presumably it is possible to do what you want: there is another tool called "Everything" which I believe does the same thing; it also uses the USN change journal to update its index.
When you get a list of all the files on an NTFS-formatted drive using a recursive solution, you are getting them from the MFT. There should be little disk IO outside of the MFT when simply retrieving a list of filenames and directories.
Before going down the path of determining the format of the MFT (which is available from a variety of places on the Internet) and writing code to read it directly, you should probably profile your code and determine that you aren't already CPU or IO bound.
I have the impression you're imagining some kind of list-like structure in the MFT which you can read in one go with no or minimal seeking.
This is not the case. The MFT uses a type of b-tree to store pathnames. When you scan the directory structure on your disk, you are in fact walking the MFT b-tree; you are doing what you would have to do if you accessed the MFT directly.
Yes there is, and the program I just open-sourced does exactly this.
You can read the source to find out how it works, but basically, it just looks for FILE_NAME attributes inside the $MFT and then uses the ParentDirectory field to get the parent of every file.
That way it can completely avoid reading the contents of any directory.
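For orientation, this is roughly the on-disk layout of the $FILE_NAME (0x30) attribute body that the answer refers to; field names follow common NTFS documentation, and this is a sketch, not a complete parser.

    #include <stdint.h>

    #pragma pack(push, 1)
    typedef struct {
        uint64_t ParentDirectory;   /* low 48 bits: parent's MFT record number */
        uint64_t CreationTime;
        uint64_t ModificationTime;
        uint64_t MftChangeTime;
        uint64_t AccessTime;
        uint64_t AllocatedSize;
        uint64_t RealSize;
        uint32_t Flags;
        uint32_t ReparseValue;
        uint8_t  NameLength;        /* in UTF-16 code units */
        uint8_t  NameType;          /* 0 = POSIX, 1 = Win32, 2 = DOS, 3 = both */
        uint16_t Name[1];           /* UTF-16LE, NameLength units */
    } FileNameAttr;
    #pragma pack(pop)

Walking every in-use MFT record, collecting these attributes, and joining each file to its parent via the low 48 bits of ParentDirectory reconstructs the full tree without touching directory contents.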