I am trying to create an image of a disk (a USB flash key) in Cygwin using the ddrescue command. I do the following:
First, with the df command I look at where the disks are in Cygwin.
The output is:
C: 30716276 30489824 226452 100% /cygdrive/c
D: 56323856 55794432 529424 100% /cygdrive/d
F: 1953480700 1927260140 26220560 99% /cygdrive/f
H: 7847904 140324 7707580 2% /cygdrive/h
Then, to create the image of the disk H:, I run the command like this:
ddrescue -v -n /cygdrive/h f:/___buffer/discoH.img discoH.log
The program runs for some time and appears to be reading the disk. As a result, the file
f:/___buffer/discoH.img is indeed created, but
its size is zero!
I tried some variations of the command options, but with the same result. The disk to be read is fully working and readable; at this point I just want to learn how to create its image.
When using ddrescue under real Linux (Ubuntu), a non-zero-size image of the same disk is created without any problem. What could be the cause of the failure in Cygwin?
I still work on Windows XP SP3 32-bit; the Cygwin version is
$ uname -r
2.0.4(0.287/5/3)
$ uname -m
i686 (32bit)
On another computer, with Windows 8, the result is the same. Am I perhaps missing something elementary?
PS: the disk I want to image is 8 GB, and there is 26 GB of free space on the disk F:, where I want to create the image.
/cygdrive/h is not a disk device; it is a mounted filesystem. Try with /dev/sdX instead.
You can identify the X letter from:
$ cat /proc/partitions
major minor #blocks name win-mounts
8 0 976762584 sda
8 1 960658432 sda1 D:\
8 2 16102400 sda2 E:\
8 16 250059096 sdb
8 17 266240 sdb1
8 18 16384 sdb2
8 19 248765440 sdb3 C:\
8 20 1003520 sdb4
Thank you, matzeri! Yours was really the elementary thing I needed but did not know.
So, I use the command cat /proc/partitions instead of df, get the disk reference
sdc1 instead of /cygdrive/h, run the command
ddrescue -v -n /dev/sdc1 f:/___buffer/discoH.img discoH.log
instead of the one I indicated above in my question, and it works! The image is being written.
I'm developing on an Ubuntu x86 machine, trying to run the U-Boot hello_world standalone application, which resides on an image sd.img that contains a partition.
I've compiled U-Boot (v2022.10) with qemu-x86_64_defconfig.
I run QEMU with qemu-system-x86_64 -m 1024 -nographic -bios u-boot.rom -drive format=raw,file=sd.img
U-Boot starts up, doesn't find a script, doesn't detect TFTP, and awaits a command. If I type ext4ls ide 0:1, I can clearly see hello_world.bin (3932704 hello_world.bin).
When I do an ext4load ide 0:1 0x40000 hello_world.bin (in preparation for go 40000 This is another test), QEMU/U-Boot restarts.
0x40000 is the CONFIG_STANDALONE_LOAD_ADDR for x86.
I have even tried making an image of hello_world with mkimage -n "Hello stand alone" -A x86_64 -O u-boot -T standalone -C none -a 0x40000 -d hello_world.bin -v hello_world.img and tried to load that image at 0x40000, with the intention of using bootm in case of cache issues; QEMU/U-Boot still resets.
Could anyone possibly point out the basic mistake I'm making?
Cheers
The memory area 0xa0000-0xffffff is reserved, and you are overwriting it when loading your 4 MiB file to 0x40000 because of the excessive size of the file.
If you build hello_world.bin correctly, it will be a few kilobytes.
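For reference, a minimal sketch of how the in-tree standalone example is normally produced and how to sanity-check its size (assuming a stock U-Boot v2022.10 tree where the standalone examples are built by the default config; the path below is the standard in-tree location, not something taken from the question):
# from the top of the U-Boot source tree
make qemu-x86_64_defconfig
make -j"$(nproc)"
# the standalone example should be a few KiB, not ~4 MiB
ls -l examples/standalone/hello_world.bin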
When booting a virtual server with Ubuntu 14.04/16.04 (I had the issue with both), it can't find the root partition and the system drops to the initramfs shell with the following error:
(initramfs) exit
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/CAC_VG-CAC_LV does not exist. Dropping to a shell!
If I type
ls /dev/mapper/
I can still see the volume mentioned in the error (and in GRUB):
root=/dev/mapper/CAC_VG-CAC_LV
The cat output, as suggested in the error message:
(initramfs) cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.4.0-66-generic root=/dev/mapper/CAC_VG-CAC_LV ro
Note: it seems to mount the device read-only (ro). Maybe I should change this after I manage to start the system...
If I type exit I get the same error as above.
Then I try to mount:
mount -t ext4 /dev/mapper/CAC_VG-CAC_LV
mount: can't find /dev/mapper/CAC_VG-CAC_LV in /etc/fstab
I had the same problem after a fresh install of Ubuntu 14.04,
and this actually worked:
mount -o remount,rw /
lvm vgscan
lvm vgchange -a y
mount -t ext4 /dev/mapper/CAC_VG-CAC_LV /root
exit
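As a follow-up (my assumption, not part of the original answer): once the system is up, regenerating the initramfs may make the LVM activation stick for future boots:
# rebuild the initramfs so the LVM volume is activated automatically at boot
sudo update-initramfs -u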
I recently found the post entitled "How to mount volumes in docker release of openFOAM" on this site, from October 2016. That post asks about automatically
mounting an already mounted (under bash or csh) volume through the Docker version of OpenFOAM. Hopefully, my situation is explained below.
Under csh, the output from lsblk is:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 1.8T 0 disk /mnt/hdd
sda 8:0 0 111.8G 0 disk
├─sda2 8:2 0 1K 0 part
├─sda5 8:5 0 7.9G 0 part [SWAP]
└─sda1 8:1 0 103.9G 0 part /
Then I run the script startOpenFOAM+, which is the following Bash shell script:
#!/bin/bash
# this script will
# i) Start OpenFOAM+ container with name 'of_v1612_plus'
# in the shell-terminal.
# User also needs to run xhost + from another terminal
# Note: Docker daemon should be running before launching script
# PostProcessing: User can launch paraview/paraFoam from terminal
# to postprocess the results
# Note: user can launch script in different shell to have OpenFOAM
# working environment in different terminal
xhost +local:of_v1612_plus
docker start of_v1612_plus
docker exec -it of_v1612_plus /bin/bash -rcfile /opt/OpenFOAM/setImage_v1612+
I am dumped into a Bash shell and the output from lsblk is now:
bash-4.1$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 1.8T 0 disk
sda 8:0 0 111.8G 0 disk
|-sda2 8:2 0 1K 0 part
|-sda5 8:5 0 7.9G 0 part [SWAP]
`-sda1 8:1 0 103.9G 0 part /etc/sudoers.d
I guess the answer to the problem is to add a docker run -v .... line to the startOpenFOAM+ shell script. However, I am not sure what to replace the dots with or where to place the command.
Any help would be much appreciated.
Thanks,
Peter.
If I understand correctly, you need this:
docker run -v /mnt/hdd:/mnt/hdd .....
But you didn't show where you run docker run; if you find it, then add that -v there.
Important: you will not see a mount point for sdb with lsblk inside the container, because Docker mounts a directory, not a device. You will just see its contents in /mnt/hdd.
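To make that concrete: the startOpenFOAM+ script only does docker start and docker exec on an existing container, so the -v option has to be given when the container is first created. A rough sketch, where IMAGE_NAME is a placeholder for whatever OpenFOAM+ image of_v1612_plus was originally created from (not shown in the original post):
# recreate the container with the host disk mounted inside it
docker rm of_v1612_plus
docker create -it --name of_v1612_plus -v /mnt/hdd:/mnt/hdd IMAGE_NAME
# the existing startOpenFOAM+ script (docker start / docker exec) then works unchanged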
I created an EC2 instance (Ubuntu 64 bit) and attached a volume from a publicly available snapshot to the instance. I successfully mounted the volume. I am supposed to be able to run a script from this attached volume using the following steps as explained in the tutorial:
Log in to your virtual machine.
mkdir /space
mount /dev/sdf1 /space
cd /space
./setup-script
The problem is that when I try ./setup-script, I get the following message:
-bash: ./setup-script: No such file or directory
What is the problem? How can I search for setup-script on the whole machine? I'm not very familiar with Linux. Please help.
For more details about the issue, see my previous post:
Error when mounting drive
# Is it a script or an executable ?
file /space/setup-script
# Show us it is readable and marked executable
ls -l /space/setup-script
# Mark it executable
chmod a+x /space/setup-script
# Then try running it again. If you know it is a shell script you can:
bash /space/setup-script
If it is still not working, then we get into why it won't execute.
grep space /proc/mounts
Do the mount options include noexec?
Try mount -o remount,exec /space, then try your instructions again.
NOTE: All commands presume you are 'root' user or you can 'sudo' each command.
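The question also asks how to search the whole machine for the script; a generic way to do that (standard find usage, not part of the original answer) is:
# search the whole filesystem, discarding permission errors
find / -name 'setup-script' -type f 2>/dev/null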
It is possible that you have mounted the wrong device. I've just recalled a trick you can use to find the device name of an EBS volume in Linux, since it is often different from the device name reported in the AWS console. First unmount the device in Linux, then detach it from the instance using the AWS console, so we go back to the original state. Now run this command in Linux:
cat /proc/partitions
The command will show the volumes currently attached. The next step is to attach the volume to the instance using the AWS console, and then to run that same command again in Linux. You should see an additional line appear. This line will tell you the name of the device to mount. For example, I get this output in my Ubuntu instance:
major minor #blocks name
202 1 8388608 xvda1
202 80 8388608 xvdf
The first line was already there before I attached the volume, so I know this is my root volume. The second line is the one that appeared, so in this case, the device to mount would be /dev/xvdf.
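From there, the mount would look roughly like this (assuming the filesystem sits directly on /dev/xvdf; if the snapshot contains a partition table, the device would be /dev/xvdf1 instead):
sudo mkdir -p /space
sudo mount /dev/xvdf /space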
On Mac OS X, if I send SIGQUIT to my C program, it terminates, but there is no core dump file.
Do you have to manually enable core dumps on Mac OS X (how?), or are they written to somewhere else instead of the working directory?
It seems they are suppressed by default. Running
$ ulimit -c unlimited
will enable core dumps for the current terminal session, and they will be placed in /cores as core.PID. When you open a new session, the limit is set back to the default value.
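A minimal session to check this, assuming your program is ./a.out (a placeholder) and /cores is writable:
$ ulimit -c            # 0 means core dumps are disabled
$ ulimit -c unlimited
$ ./a.out              # let it crash, or press Ctrl-\ to send SIGQUIT
$ ls /cores            # a core.<pid> file should appear here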
On macOS, your crash dumps are automatically handled by Crash Reporter.
You can find backtrace files by executing Console and going to User Diagnostic Reports section (under 'Diagnostic and Usage Information' group) or you can locate them in ~/Library/Logs/DiagnosticReports.
You can also check where dumps are generated by monitoring the system.log file, e.g.
tail -f /var/log/system.log | grep crash
The actual core dump files can be found in /cores.
See also:
How to generate core dumps in Mac OS X?
Technical Note TN2118: Kernel Core Dumps.
Additionally, the /cores directory must exist and the user running the program must have write permissions on it.
The answer above,
ulimit -c unlimited
works, but be sure to run it in the same terminal from which you will run the program that dumps core. You need to run the ulimit command first.
By default, certain directories in Mac OS X are hidden in Finder. You might want to enable showing all files from the terminal, and then the core dump should be visible within the /cores directory.
defaults write com.apple.finder AppleShowAllFiles TRUE
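Note that Finder usually needs to be relaunched for the setting to take effect (an extra step, not mentioned above):
killall Finder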
There is a great explanation by Quinn “The Eskimo!” on Apple's forums
https://developer.apple.com/forums/thread/694233
I roughly followed that guide. Here are the steps that I did.
Grant write access to the /cores dir for all users
PROMPT> ls -la / | grep cores
drwxr-xr-x 2 root wheel 64 Dec 8 2021 cores
PROMPT> sudo chmod 1777 /cores
PROMPT> ls -la / | grep cores
drwxrwxrwt 2 root wheel 64 Dec 21 23:29 cores
Set size of core file
PROMPT> ulimit -c unlimited
Compile and sign the program
PROMPT> cargo build --release -p my-crashing-program
PROMPT> /usr/libexec/PlistBuddy -c "Add :com.apple.security.get-task-allow bool true" tmp.entitlements
PROMPT> codesign -s - -f --entitlements tmp.entitlements my-crashing-program
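To double-check that the entitlement was actually applied, the signature can be inspected (a standard codesign invocation, added here as a sanity check):
PROMPT> codesign -d --entitlements - my-crashing-program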
Run the program
PROMPT> my-crashing-program
thread 'main' panicked at 'boom', my-crashing-program/src/main.rs:74:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
dumping core for pid 80995
zsh: quit my-crashing-program
Now there is a core file
PROMPT> ls /cores
core.80995
Apple's Console app also has a list of crash reports.