Can anyone explain to me how CD/DVD boot sectors work? I've extracted some boot sectors from ISO images and found that some of them are 6 sectors long and some are 8. I tried to look it up but found no results. What is the minimum (and maximum) length of a CD/DVD boot sector? Does it have to end with 0x55 0xAA?
Bootable ISO 9660 images are very different from other media such as floppies and hard drives. In the latter case, the BIOS loads a single, 512-byte sector, verifies the final 55 AA bytes, and then jumps to what it loaded.
El Torito, the extension that defines bootable ISO 9660 images for PCs, supports several methods of booting. Four of the methods emulate floppy (1.2M, 1.44M, 2.88M) and hard disk boot sectors; the BIOS will map the first floppy or hard disk to the CD-ROM so that you can take bootable floppies or small, bootable hard disks and convert them to ISO images. The last method is called native (no-emulation) booting. Native boot images can be anywhere from 1 to 65535 virtual (512-byte) sectors in length, i.e. up to about 32 MiB, and they do not have to end in 55 AA.
ISO 9660 data sectors are almost always 2048 bytes, while El Torito counts the boot image length in 512-byte virtual sectors, so a native boot image will usually be listed as 4 such sectors, i.e. one CD sector (512 * 4 == 2048).
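If it helps to see where those lengths come from: the length of a native boot image is stored in the boot catalog's initial/default entry as a count of 512-byte virtual sectors. A minimal C sketch of that 32-byte entry, based on the El Torito layout (the struct and field names are mine, not from the specification):

    #include <stdint.h>

    /* El Torito boot catalog, initial/default entry (32 bytes).
     * sector_count is in 512-byte virtual sectors, so an image reported as
     * 8 sectors is 8 * 512 = 4096 bytes, i.e. two 2048-byte CD sectors. */
    #pragma pack(push, 1)
    typedef struct {
        uint8_t  boot_indicator;  /* 0x88 = bootable, 0x00 = not bootable      */
        uint8_t  media_type;      /* 0 = no emulation, 1/2/3 = floppy, 4 = HDD */
        uint16_t load_segment;    /* 0 means the default segment 0x07C0        */
        uint8_t  system_type;     /* partition type byte from the boot image   */
        uint8_t  unused;
        uint16_t sector_count;    /* virtual (512-byte) sectors to load        */
        uint32_t load_rba;        /* 2048-byte LBA of the boot image on the CD */
        uint8_t  reserved[20];
    } EltoritoDefaultEntry;
    #pragma pack(pop)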
You can find more info including a link to the El Torito standard here:
http://wiki.osdev.org/ISO_9660
http://wiki.osdev.org/El-Torito
https://en.wikipedia.org/wiki/El_Torito_(CD-ROM_standard)
Also, this document shows the binary structure of El Torito:
http://fossies.org/linux/xorriso/doc/boot_sectors.txt
Related
Refer to here: IOCTL_STORAGE_QUERY_PROPERTY with StorageAdapterProperty can be used to get the maximum transfer size per SCSI Read(10) command.
In this code, 16 sectors are read starting from the LBA. I tried modifying that number, and in my Win7 environment the maximum is 256 sectors through SATA and 128 sectors through a SATA-USB bridge to an SSD, which matches the result obtained using IOCTL_STORAGE_QUERY_PROPERTY with StorageAdapterProperty.
As far as I know, when installing an OS (Win7, Win10, macOS), a device can receive a SCSI Read(10) command of up to 2048 sectors. I wonder which layer limits the transfer size (operating system, device driver, ...) and whether there is any way to bypass that layer to send a SCSI Read(10) command larger than that limit in one go.
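For reference, a minimal sketch of that query; the drive path is just an example and error handling is trimmed (opening a physical drive typically needs administrator rights):

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open the raw disk; \\.\PhysicalDrive0 is just an example path. */
        HANDLE h = CreateFileW(L"\\\\.\\PhysicalDrive0", 0,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) return 1;

        STORAGE_PROPERTY_QUERY q = { StorageAdapterProperty, PropertyStandardQuery };
        STORAGE_ADAPTER_DESCRIPTOR d = { 0 };
        DWORD got = 0;

        if (DeviceIoControl(h, IOCTL_STORAGE_QUERY_PROPERTY,
                            &q, sizeof q, &d, sizeof d, &got, NULL)) {
            /* MaximumTransferLength is in bytes; divide by 512 to compare
             * with the Read(10) sector counts discussed above. */
            printf("MaximumTransferLength: %lu bytes (%lu sectors)\n",
                   d.MaximumTransferLength, d.MaximumTransferLength / 512);
            printf("MaximumPhysicalPages:  %lu\n", d.MaximumPhysicalPages);
        }
        CloseHandle(h);
        return 0;
    }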
My application on a PC sends a file (2 MB) in chunks of 1 KB to an embedded device.
I use the FTDI Windows driver and the classic FT_Write() API function, as my code is cross-platform.
Note: the issues below appear when I use a 1 KB chunk size. Smaller chunks (I tried 64 bytes) work fine.
The problem is that the function returns "0 bytes sent" every couple of hundred packets and then gets stuck. I found a workaround: purging both TX and RX, followed by a ResetDevice() call, recovers the chip. It still happens every couple of hundred packets, but at least I can send the whole file (2 MB).
But when I use a USB isolator (http://www.bb-elec.com/Products/USB-Connectivity/USB-Isolators/Compact-USB-Port-Guardian.aspx), the workaround fails.
I believe my workaround is not a graceful solution.
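For concreteness, the send loop with that workaround looks roughly like this (a sketch only; the chunk size, settle delay and retry policy are my assumptions, not FTDI recommendations):

    #include <windows.h>
    #include "ftd2xx.h"

    /* Send a buffer in 1 KB chunks; on a "0 bytes sent" result, purge both
     * FIFOs, reset the device and retry the same chunk. */
    static FT_STATUS send_file(FT_HANDLE ft, const BYTE *data, DWORD len)
    {
        const DWORD CHUNK = 1024;

        for (DWORD off = 0; off < len; ) {
            DWORD toSend = (len - off < CHUNK) ? (len - off) : CHUNK;
            DWORD sent = 0;
            FT_STATUS st = FT_Write(ft, (LPVOID)(data + off), toSend, &sent);
            if (st != FT_OK)
                return st;

            if (sent == 0) {
                FT_Purge(ft, FT_PURGE_RX | FT_PURGE_TX);
                FT_ResetDevice(ft);
                Sleep(10);   /* small settle delay; the value is a guess */
                continue;
            }
            off += sent;
        }
        return FT_OK;
    }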
Note: I use a large chunk because of a suggestion I found in the FTDI application note below:
When writing data to an FTDI device, as much data as possible should be buffered in the application and written to the device in a single write function call (either WriteFile for a VCP application using the Win32 API, FT_Write if using the D2XX classic interface or FT_WriteFile if using the D2XX FT_W32 interface). The result of this is that the data will be written to the device with 64 bytes per USB packet.
Any idea what's the proper fix for these issues? Is it related to FTDI initialization? My driver version is 2.12.16.0 (3/9/2016).
I also saw the same problem of the FT_Write() API not working right when too much data was passed, while working on the library for my USB device Nusbio.
I mostly work in Synchronous Bit Bang mode rather than UART, but in the end it is the same hardware, driver and API.
There is the USB 2.0 specification and the FTDI FT232RL specification, and then there is the reality of the electrons and the bits. The expected transfer speeds never really match, at least at first. In other words, it is complicated (see more below in my referenced blog post).
In 2015 I was under the impression that with the FTDI FT232RL chip a size of 384 bytes worked well, and that the number comes from the chip datasheet (128 byte receive buffer and 256 byte transmit buffer). Using a size of 500 bytes would still work, but above 600 bytes things would not work.
I later used the FT231X chip, which has larger buffers (1 KB total: 512 byte receive buffer and 512 byte transmit buffer), and was able to transfer 1 KB and 2 KB buffers of data with FT_Write(), therefore more than doubling my transfer speed. But above 2 KB things would not work.
In 2016, having read everything you can read about FTDI USB 2.0 Full Speed chips, I came to the conclusion that FT_Write() should support up to 64 KB (see the datasheets for the following chips: FT232RL, FT231X, FT232H, FT260, FT4222).
I also did some research on serial port communication from .NET faster than 115200 baud. Somehow I was able to update my C# library to send data in buffers of 32 KB with FT_Write(), and it is working with the FT232RL and FT231X chips, but I can't tell you what changed. I was probably not completely understanding the ins and outs of the USB 2.0 Full Speed FTDI technology.
For example, let's say you are using the FT232RL and transferring 384 bytes at a time with FT_Write(). Knowing that there is at least a 1 millisecond latency in USB 2.0 Full Speed whatever you do, from a USB point of view you are transferring 384 * 1000 / 1024, that is 375 KB/s in theory (and that would be the maximum). That said, the question now is what baud rate your embedded device supports.
What baud rate are you using?
The FT232RL's maximum baud rate is 900 000 baud, which with one start bit, eight data bits and one stop bit gives you only 900000 / (1+8+1) = 90 000 bytes/s, about 88 KB/s.
Right away you can tell there is going to be some problem; maybe the FTDI driver takes care of it, maybe not. I can't tell.
Redo the math based on the baud rate supported by your embedded device and a 384-byte buffer sent 1000 times per second, then slow down your USB sends with a sleep() to match your baud rate.
That is where I would start.
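As a sketch of that pacing (8N1 framing assumed; the function name, baud rate and chunk size below are placeholders, not measurements):

    #include <windows.h>

    /* A UART with 1 start, 8 data and 1 stop bit moves about baud/10 bytes
     * per second; sleep after each FT_Write() so the USB side does not
     * outrun the embedded device's serial port. */
    static void pace_after_chunk(DWORD baud, DWORD chunkBytes)
    {
        DWORD bytesPerSec = baud / 10;
        DWORD ms = (chunkBytes * 1000 + bytesPerSec - 1) / bytesPerSec;
        Sleep(ms);   /* e.g. 115200 baud, 384-byte chunks -> ~34 ms */
    }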
I wanted to write code that runs before the MBR, so I moved the MBR out of sector zero and put my own code there instead. The code in sector 1 loads the sector that now contains the MBR to address 7C00 (hex) and then jumps there to run the MBR code.
So the hard disk looks like this:
sector 0: my program that does I/O and loads sector 1
sector 1: code that loads sector 2
sector 2: the MBR code
When I boot I receive this message:
"could not open drive multi 0 disk 0 rdisk 0 partition 1"
It's important to say that I want Windows XP to run after my code.
What you describe is exactly how the MBR code works:
The MBR of a hard disk is located at the first sector of the hard disk. BIOS will load that sector.
The MBR sector will move itself to some other address and load the first sector of the bootable hard disk partition to address 7C00 (hex). Then it will jump to 7C00 (hex).
However:
The MBR also contains the partition information in its last bytes: the last 72 bytes of the sector hold the 4-byte Windows disk signature (offset 440), the four 16-byte partition table entries (offset 446) and the 55 AA signature. If you want to replace the MBR with your own boot sector you'll have to copy the data located in those last bytes. Otherwise hard disk access won't work any longer, because the OS will look for the partition information in the last bytes of the first sector of the hard disk.
If you want to replace the boot sector of the bootable partition you have a similar problem. Depending on the file system used, there is file system information stored in some bytes of the boot sector. For FAT or NTFS the first three bytes must be a "JMP" instruction, and the following roughly 65 bytes contain the BIOS Parameter Block with file system information.
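As a minimal sketch of that copy step (offsets per the standard MBR layout; the function name is mine, and reading/writing the raw sectors is platform-specific and left out):

    #include <stdint.h>
    #include <string.h>

    #define SECTOR_SIZE  512
    #define PRESERVE_OFF 440  /* disk signature, partition table, 55 AA */

    /* Before writing a custom boot sector to LBA 0, carry over the last
     * 72 bytes (offsets 440-511) from the original MBR so the partition
     * information stays intact. */
    static void patch_boot_sector(uint8_t newSector[SECTOR_SIZE],
                                  const uint8_t originalMbr[SECTOR_SIZE])
    {
        memcpy(newSector + PRESERVE_OFF,
               originalMbr + PRESERVE_OFF,
               SECTOR_SIZE - PRESERVE_OFF);
    }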
I'm preparing an SD card with OpenElec XMBC to use in a Raspberry Pi. To start with, I have formatted the SD Card using this software. Then I followed the steps on this page to load the image on the SD card. Before writing the image to the SD card, the size is around 4 GB (as it should be). After writing the image, the size of the SD card goes back to around 128 MB. If I format the card again, it returns to 4 GB. Re-writing the image puts it back at 128 MB again.
I'm still awaiting the delivery of my Raspberry Pi to test it; however, I find it hard to imagine that this is normal behavior, or that the Raspberry Pi would recognize the full 4 GB. Has anybody had this issue?
EDIT:
I'm using Windows 8.1
UPDATE:
Tried it in my Raspberry Pi and it shows 1 GB. Still 3 GB missing.
Writing the image has probably rewritten the partition table and created partitions that your OS doesn't recognise. I'd be willing to bet that the only partition your OS recognises is the 128 MB one; OpenELEC is Linux based, so for example one of the partitions will be Linux swap.
If you dd/bitwise-copy (which is what I assume the utility is doing) an image that's only 1 GB, the disk will only show as being 1 GB. You can resize the partitions or create a new one.
I took a look into the ISO images (ISO 9660) of some small Linux distributions. I found 16 empty sectors, followed by a sector describing the Primary Volume Descriptor. The next sector is commonly a Boot Record containing only descriptive information, such as the system and version identifiers, plus a little-endian integer inside the unspecified byte range. Then comes the Supplementary Volume Descriptor and finally the Volume Descriptor Set Terminator.
I can only guess that it's a little-endian integer in the Boot Record; I found no more information about it. In all the images I examined, the little-endian integer was smaller than the Sector Count value from the Primary Volume Descriptor, so I further guess it points to a sector inside the image. Could someone provide more detailed information about this?
The "El Torito Bootable CD-ROM Format Specification" describes the format of bootable CDs.