How to fix a bad dd image write to an SD card

I am trying to write 2016-05-10-raspbian-jessie.img (an image) to my SanDisk Ultra 4 GB (Class 4?) SD card using my MacBook Pro (running OS X) for my Raspberry Pi.
When I run the dd command it gives some errors:
Matts-MacBook-Pro:dev Matt$ sudo dd bs=1m if=~/Downloads/2016-05-10-raspbian-jessie.img of=/dev/rdisk2
dd: /dev/rdisk2: short write on character device
dd: /dev/rdisk2: Input/output error
3782+0 records in
3781+1 records out
3965190144 bytes transferred in 635.507654 secs (6239406 bytes/sec)
The files actually end up on the SD card, but when I boot the Raspberry Pi I get a kernel panic:
Unable to mount root filesystem on unknown block
An old forum post recommends:
adding forcefsck to the end of cmdline.txt on the SD card.
running fsck: sudo fsck -fy /dev/disk2, but I just get back the usage text: usage: fsck -fdnypq -l number
What is the best way to do this in OS X? My MBP is my only working SD card reader, and I can't get the fsck commands that "should" work to actually work. It'd be even better if you know what the cause is.

How big is the image file, and what's the actual size of the SD card? That looks like it might have run out of space on the disk, since "3965190144 bytes transferred" is 3.965 GB, which is pretty close to the nominal 4GB capacity of the card.
Note: when checking sizes there's a potential for confusion between true gigabytes (GB = 1,000,000,000 bytes) and gibibytes (GiB = 1,073,741,824 bytes) which are sometimes called gigabytes. Disk sizes are generally given in true GB, but RAM is generally given in GiB, and software tools for showing sizes are often inconsistent. When in doubt, look at the actual number of bytes. In OS X, you can get the exact disk size with e.g. diskutil info /dev/disk2.
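As a quick check (using the image path and disk identifier from the question; adjust for your setup):
ls -l ~/Downloads/2016-05-10-raspbian-jessie.img
diskutil info /dev/disk2 | grep "Disk Size"
The first command prints the image size in bytes; the second prints the card's exact capacity. If the image is bigger than the card, dd will fail exactly like this once the card fills up.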

Related

Need to check disk space on /dev/disk2

So, I have a Raspberry Pi 3 B. I wanted to know how much space is used, so I plugged the microSD card into my Mac and opened the terminal.
When I ran the command:
df -h /dev/disk2
I got:
df: /dev/disk2: Raw devices not supported
What should I do now?
PS: I don't want to plug in the RPi.
You'd indeed need to mount the disk first, like Ferrybig said.
Note: it could be, since it's for the RPi, that your SD card is formatted with one of the ext file system variants. To access those under macOS, you'll need something like FUSE (free, but I've never used it) or Paragon's ExtFS (commercial, I've used it quite often). Some SD cards for the RPi are FAT32-formatted; those should work just fine under macOS.
The easiest way to mount a volume, if you don't want to mess too much with command-line parameters, is by opening Disk Utility. Find the disk/partition you'd like to mount, right-click it and select "Mount". This works for "known" file system types.
Now after the disk has been mounted, type "mount" in terminal to see where it's mounted. It will show several lines, one of them could be (as an example):
/dev/disk2s1 on /Volumes/Untitled (ntfs, local, nodev, nosuid, read-only, noowners)
(there may be more that start with "/dev/disk2sX")
df -h /Volumes/Untitled will now show the disk space info, for example:
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk2s1 15Gi 57Mi 15Gi 1% 19 15239289 0% /Volumes/Untitled
If your disk2 has multiple partitions, then you'd need to repeat the steps for each partition that starts with "/dev/disk2sX" (where X is a number).
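Putting it together on the command line (a sketch; this assumes the card is /dev/disk2 with a single mountable partition, and that the volume mounts at /Volumes/Untitled; substitute whatever mount shows on your system):
diskutil mount /dev/disk2s1
mount | grep disk2
df -h /Volumes/Untitled
diskutil mount attaches the partition without opening Disk Utility; mount shows where it ended up; df -h then reports the usage for that mount point.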

Scratched CD-ROM / Sector by sector reading / SCSI command

I'm trying to recover data from a slightly scratched CD-ROM.
I've tested various free and proprietary tools without success (the recovery process couldn't even start). I tried to figure out what happened, and so far I've discovered that the reported size of the CD-ROM is 0: the SCSI READ CD CAPACITY command returns 0 Logical Block Addresses (LBA). For info, the CD is not empty and was not reformatted.
Consequently, all read accesses via the READ CD SCSI command raise exceptions because the requested LBAs are out of range.
I also tried the READ CD MSF SCSI command in order to address the physical addresses directly, without going through the logical addresses. Unsuccessfully: it raises the same out-of-range error (based on LBA?).
Any idea or help is welcome! Thanks in advance.

Hard disk not detected after formatting to DOS_FAT_32

I have a WD Elements external hard disk. It was previously using the NTFS file system, which I have reformatted to DOS_FAT_32. Now I can't see the hard disk in Finder or Disk Utility.
But when I execute the command diskutil list, the external HD is listed:
/dev/disk2 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: FDisk_partition_scheme *1.0 TB disk2
1: DOS_FAT_32 JAY3 1.0 TB disk2s1
How do I make the hard disk show up in the Finder so I can store and retrieve files on it?
If you are absolutely sure it is /dev/disk2 and you have no data on it that you want, you can overwrite the initial few sectors of the disk with zeroes so that it looks completely uninitialised like this:
sudo dd if=/dev/zero of=/dev/rdisk2 bs=65536 count=1000
Yes, I did mean to add the extra r in front of disk2, and no, nobody should copy and paste this command, because it will obliterate disk2!!!
Then unplug and replug the disk and go into Disk Utility, select the blank disk and click Erase at the top of the Disk Utility window and you will be able to format it.
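One caveat: OS X keeps the device busy while any of its volumes are mounted, and dd will then fail with "Resource busy". If that happens, unmount the disk (without ejecting it) first:
sudo diskutil unmountDisk /dev/disk2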

ext4 commit= mount option and dirty_writeback_centisecs

I'm trying to understand the way bytes go from write() to the physical disk platter, to tune my picture server's performance.
The thing I don't understand is the difference between these two: the commit= mount option and dirty_writeback_centisecs. They look like they are about the same process of writing changes to the storage device, but are still different.
It's not clear to me which one fires first on my bytes' way to the disk.
Yeah, I just ran into this investigating mount options for an SDCard Ubuntu install on an ARM Chromebook. Here's what I can tell you...
Here's how to see the dirty and writeback amounts:
user@chrubuntu:~$ cat /proc/meminfo | grep "Dirty" -A1
Dirty: 14232 kB
Writeback: 4608 kB
(edit: these dirty and writeback figures are rather high; I had a compile running when I took this.)
So data to be written out is "dirty". Dirty data can still be eliminated (if, say, a temporary file is created, used, and deleted before it goes to writeback, it'll never have to be written out). As dirty data is moved into writeback, the kernel tries to combine smaller dirty requests into single larger I/O requests; this is one reason why vm.dirty_expire_centisecs is usually not set too low. Dirty data is usually put into writeback when a) enough data is cached to get up to vm.dirty_background_ratio, or b) the data gets to be vm.dirty_expire_centisecs centiseconds old (the default of 3000 is 30 seconds). Then, per vm.dirty_writeback_centisecs, a writeback daemon runs by default every 500 centiseconds (5 seconds) to actually flush out anything in writeback.
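You can inspect all of these knobs with sysctl (these are standard Linux vm tunables):
sysctl vm.dirty_background_ratio vm.dirty_expire_centisecs vm.dirty_writeback_centisecs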
fsync will flush out an individual file (force it from dirty into writeback and wait until it's flushed out of writeback), and sync does that with everything. As far as I know, it does this ASAP, bypassing any attempt to balance disk reads and writes; it stalls the device doing 100% writes until the sync completes.
The default ext4 mount option commit=5 actually forces a sync of that filesystem every 5 seconds. This is intended to ensure that writes are not unduly delayed even if there's heavy read activity (ideally losing a maximum of 5 seconds of data if power is cut or whatever). What I found with an Ubuntu install on an SD card (in a Chromebook) is that this actually just leads to massive filesystem stalls every 5 seconds or so if you're writing much to the card. ChromeOS uses commit=600, and I applied that Ubuntu-side to good effect.
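If you want to try the same thing, a quick sketch (this assumes an ext4 root filesystem; the device name in the fstab line is illustrative):
sudo mount -o remount,commit=600 /
To make it permanent, add commit=600 to the options field of that filesystem's line in /etc/fstab, e.g.:
/dev/mmcblk1p2 / ext4 defaults,commit=600 0 1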
The dirty_writeback_centisecs sysctl configures the Linux kernel daemons related to virtual memory (that's why the vm. prefix). They are in charge of writing back from RAM to all the storage devices, so if you configure dirty_writeback_centisecs and you have 25 different storage devices mounted on your system, all 25 share the same writeback interval.
commit, on the other hand, is set per storage device (actually per filesystem) and is related to the sync process rather than the virtual memory daemons.
So you can see it as:
dirty_writeback_centisecs: writing from RAM to all filesystems
commit: each filesystem flushes its own changes from RAM
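To experiment with the global side, you can change the writeback interval at runtime (not persistent across reboots; the value is just an example):
sudo sysctl -w vm.dirty_writeback_centisecs=1500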

U-Boot hangs while loading kernel?

I am working on the Freescale board imx50evk. I have built uboot.bin and uImage using LTIB (Linux Target Image Builder). At the U-Boot prompt I enter the bootm addr command, and then it hangs after showing the message "Loading Kernel..."
MX50_RDP U-Boot > boot
MMC read: dev # 0, block # 2048, count 6290 partition # 0 ...
6290 blocks read: OK
## Booting kernel from Legacy Image at 70800000 ...
Image Name: Linux-2.6.35.8
Image Type: ARM Linux Kernel Image (uncompressed)
Data Size: 1323688 Bytes = 1.3 MB
Load Address: a0008000
Entry Point: a0008000
Verifying Checksum ... OK
Loading Kernel Image ...
You need to verify that your board really has RAM at 0xa0008000, which is the kernel "load address". U-Boot is probably trying to copy the image to that region of memory when it appears to hang.
[By your comment, I'll assume that you have verified that main memory does not exist at physical address 0xAXXXXXXX.]
The uImage file that you are using was made from the zImage file using the mkimage utility.
You probably have to manually edit the line that looks like
zreladdr-y := 0xa0008000
in arch/arm/mach-XXX/Makefile.boot for your board. The convention is that this address should be the base of physical RAM plus an offset of 0x8000 (32K). Then adjust the other values in the file. Delete the zImage file and perform another make for the kernel.
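For example, for a board whose RAM starts at 0x70000000 (which the U-Boot log above suggests, since the image was loaded to 0x70800000), the edited file might look like this; the values are illustrative, not taken from an actual i.MX50 tree:
zreladdr-y := 0x70008000
params_phys-y := 0x70000100
initrd_phys-y := 0x70800000
Then rebuild so that mkimage picks up the new load address in the uImage header.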
While building 3.20 development kernels for Rockchip's RK3288, I found that using LZO image compression made the device hang at 'Starting the kernel.' I assume it's because of a version mismatch between the build host's LZO and the deployed decompression code, so it could probably happen with any of the compression algorithms. In my case switching to gzip fixed it.
This is only my assumption for why changing the compression algorithm gave a bootable kernel. Please correct me if I'm wrong.
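For reference, the compression algorithm is selected in the kernel .config (under "Kernel compression mode" in make menuconfig); switching from LZO to gzip looks like:
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_LZO is not set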