Trying to back up CentOS using the "dd" command - image

I'd like to back up an SSD which I'm using for CentOS, and I'm trying to learn dd. My drive is a fairly simple 120 GB GPT disk.
I run dd to copy the image of sda to a USB stick, sdd1:
[root@localhost ~]# dd if=/dev/sda conv=sync,noerror status=progress bs=64k of=/dev/sdd1
120029118464 bytes (120 GB, 112 GiB) copied, 30810 s, 3.9 MB/s
1831575+1 records in
1831576+0 records out
120034164736 bytes (120 GB, 112 GiB) copied, 30810.8 s, 3.9 MB/s
But when I then examine the USB stick, there is nothing to be seen on it and I see no way to mount it.
This is what appears under the Disks utility: [screenshot not included]
Question is:
How do I access the image?
(As a side note, I read a claim that the dd command is like the IBM JCL statement of the same name. I was a mainframe programmer; the IBM DD statement is often still called a "DD card". It doesn't copy files, it just joins the file declaration in your program to some external file. The old-school way to copy a file is to use IEBGENER.)

if=/dev/sda is reading the entire disk, while of=/dev/sdd1 is writing into a single partition, which doesn't make much sense.
You may want to clone the entire disk onto another disk:
dd if=/dev/sda conv=sync,noerror status=progress bs=64k of=/dev/sdd
Or, better yet, clone to a compressed image:
dd if=/dev/sda | gzip > /sda.img.gz
And restore like so (note the -c, so gzip streams the decompressed data to dd instead of unpacking the file in place):
gunzip -c /sda.img.gz | dd of=/dev/sda
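If you just want to get at the copy you already made, note that /dev/sdd1 now holds a raw disk image (GPT header, partitions and all), so the kernel won't expose its inner partitions automatically. A sketch of one way to map them with losetup (the loop device name and partition number here are illustrative):
losetup -fP --show /dev/sdd1   # attaches the image and prints e.g. /dev/loop0
mount /dev/loop0p2 /mnt        # mount whichever inner partition you need
umount /mnt
losetup -d /dev/loop0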


TrueNAS slow performance with NFS

hardware and system
CPU: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz * 2
MEMORY: 128G
POOL: 2T * 10 (raidz3) + 512G * 2 L2ARC
TRUENAS: TrueNAS-13.0-U3.1
dd test
# on truenas
root@freenas[/mnt/dev1/docker]# dd if=/dev/zero of=test.dat bs=1M count=400 conv=fsync oflag=direct
400+0 records in
400+0 records out
419430400 bytes transferred in 0.126497 secs (3315728885 bytes/sec)
root@freenas[/mnt/dev1/docker]# dd if=/dev/zero of=test.dat bs=512K count=800 conv=fsync oflag=direct
800+0 records in
800+0 records out
419430400 bytes transferred in 0.128923 secs (3253346675 bytes/sec)
root@freenas[/mnt/dev1/docker]# dd if=/dev/zero of=test.dat bs=4K count=20000 conv=fsync oflag=direct
20000+0 records in
20000+0 records out
81920000 bytes transferred in 0.135577 secs (604232041 bytes/sec)
# on nfs client (1GB network)
[root@node1 docker]# dd if=/dev/zero of=test.dat bs=1M count=400 conv=fsync oflag=direct
400+0 records in
400+0 records out
419430400 bytes (419 MB) copied, 4.01762 s, 104 MB/s
[root@node1 docker]# dd if=/dev/zero of=test.dat bs=512K count=800 conv=fsync oflag=direct
800+0 records in
800+0 records out
419430400 bytes (419 MB) copied, 4.39617 s, 95.4 MB/s
[root@node1 docker]# dd if=/dev/zero of=test.dat bs=4K count=20000 conv=fsync oflag=direct
20000+0 records in
20000+0 records out
81920000 bytes (82 MB) copied, 6.95326 s, 11.8 MB/s
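For reference (my arithmetic, not part of the original output), with oflag=direct and conv=fsync every bs=4k write is its own synchronous round trip, so the slow client figure is latency-bound rather than bandwidth-bound:
# 81920000 bytes / 4096 bytes per write = 20000 writes
# 20000 writes / 6.95 s ≈ 2877 writes/s, i.e. about 0.35 ms per round trip
# 2877 writes/s * 4096 bytes ≈ 11.8 MB/s, matching the dd output above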
nfs
# client mount
192.168.10.16:/mnt/dev1/docker on /docker type nfs4 (rw,noatime,nodiratime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.10.120,local_lock=none,addr=192.168.10.16,_netdev)
other
The TrueNAS pool already has sync=disabled set.
HELP
We see that the 4k test is very slow on the NFS client. Can someone help me troubleshoot?
The problem with TrueNAS is that it ships preconfigured with periodic security checks that hog the system.
You should do the following:
Edit (using vim) the file /usr/local/lib/python3.9/site-packages/middlewared/etc_files/local/collectd.conf
and disable the python plugin by commenting out the line "LoadPlugin python".
Go to the TrueNAS UI, under System, Reporting, and change points from 1200 to 100, so it purges and regenerates the RRD data.
Then execute:
service collectd onestatus
service collectd onerestart
Then edit, with vi in a shell session, the file /etc/defaults/periodic.conf and set
security_status_neggrpperm_enable="NO"
See other recommended settings in https://gist.github.com/adam12/b2a645adc55c4c076d9887838ab4bae7
Also edit the file /conf/base/etc/defaults/periodic.conf, because the change will not persist across a reboot otherwise.
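The file edits above can also be done from a shell, roughly like this (a sketch assuming the stock TrueNAS 13 file locations and that the security_status line already exists in periodic.conf; back up each file first):
# comment out collectd's Python plugin
sed -i.bak 's/^LoadPlugin python/# LoadPlugin python/' \
    /usr/local/lib/python3.9/site-packages/middlewared/etc_files/local/collectd.conf
# disable the heavy security scan in both the live file and the base copy
for f in /etc/defaults/periodic.conf /conf/base/etc/defaults/periodic.conf; do
    sed -i.bak 's/^security_status_neggrpperm_enable=.*/security_status_neggrpperm_enable="NO"/' "$f"
done
service collectd onerestart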
There are also some settings in the /data directory, in SQLite databases; these are written by the TrueNAS UI, so be careful.
Also, if like me you have ntopng on your network, and you see the message kex_exchange_identification pop up every 30 minutes on your FreeNAS terminal screen, then you have two options:
disable network device discovery in ntopng settings, or
Go to TrueNAS, Services menu, SSH, and change the port from 22 to another one, for example 8022.
Best regards

Discrepancy between the size of file created and size displayed by du -sh [duplicate]

This question already has answers here:
Size() vs ls -la vs du -h which one is correct size?
(3 answers)
Closed last month.
I had to create a random file of 10 GB size, which I can do using dd or fallocate, but the size shown by du -sh is twice the size I created:
$ dd bs=1MB count=10000 if=/dev/zero of=foo
10000+0 records in
10000+0 records out
10000000000 bytes (10 GB, 9.3 GiB) copied, 4.78419 s, 2.1 GB/s
$ du -sh foo
19G foo
$ ls -sh foo
19G foo
$ fallocate -l 10G bar
$ du -sh bar
20G bar
$ ls -sh bar
20G bar
Can someone please explain this apparent discrepancy to me?
Wikipedia mentions this about GPFS:
The system stores data on standard block storage volumes, but includes an internal RAID layer that can virtualize those volumes for redundancy and parallel access much like a RAID block storage system.
I conclude that there is at least one non-visible duplicate of every file, and therefore each file actually uses twice the amount of space of its actual content. So the underlying RAID imposes the double usage.
That would explain it: I have created a similarly massive file for other purposes, also using dd, on an ext4 filesystem, and there the OS reports a file size matching the dd creation size, as per design intent (no RAID in effect on that drive).
The fact that you indicate that stat does report the correct file size, as per dd's actions, confirms what I put forward above.
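A quick way to see the same split on any file is to compare the apparent (logical) size with the blocks actually allocated; a sketch, assuming GNU coreutils so these stat and du flags are available:
$ stat -c 'size=%s bytes, allocated=%b blocks of %B bytes' foo
$ du -sh --apparent-size foo   # logical size: the ~10 GB that dd wrote
$ du -sh foo                   # allocated size: ~19G here, including the RAID duplicate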

Remote backup of Raspberry Pi sd card from Windows

This reference details a procedure for backing up the SD card of a running Raspberry Pi, from and to a file on another computer: https://johnatilano.com/2016/11/25/use-ssh-and-dd-to-remotely-backup-a-raspberry-pi/
To do this from a Windows PC it is necessary to install a "dd tool" for Windows, also referenced above. I've done all of this and thought I had the process figured out; the following command, run from a Windows CMD window, does create a file on my Windows PC, but shows an error message which I cannot figure out.
c:\Windows\System32>ssh me@192.168.xxx.yyy "sudo dd if=/dev/mmcblk0 bs=1M | gzip -" | dd of=\\.\c:/Users/me/Desktop/rPi_backup.gz --progress
I have run this on my LAN against two different Pis (a 3B and a 4B). Here is the output from the latter after about 30 minutes:
c:\Windows\System32>ssh me@192.168.xxx.yyy "sudo dd if=/dev/mmcblk0
bs=1M | gzip -" | dd of=\\.\c:/Users/me/Desktop/rPi_backup.gz
--progress
rawwrite dd for windows version 0.5. Written by John Newbigin This
program is covered by the GPL. See copying.txt for details
0me@192.168.xxx.yyy's password:
11,754,655,23215193+1 records in 15193+1 records out 15931539456 bytes
(16 GB, 15 GiB) copied, 2528.86 s, 6.3 MB/s 11,755,209,216Error
reading file: 109 The pipe has been ended
22959393+0 records in 22959393+0 records out
I got the same Error 109 for both Pis. I have checked the SD cards; they are functional, not corrupted, and have no "dirty bits". I have tried to find the error code description, but nothing I see seems to apply to this situation.
Can anyone help me with this problem? I posted it to the reference above, but have not gotten any meaningful replies....RDK
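For what it's worth, a common way to sidestep the Windows-side dd entirely (not necessarily the suggestion referenced below) is to let CMD's redirection write ssh's binary output straight to a file:
c:\Windows\System32>ssh me@192.168.xxx.yyy "sudo dd if=/dev/mmcblk0 bs=1M | gzip -" > C:\Users\me\Desktop\rPi_backup.gz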
EDIT, after using Maxim Sagaydachny's suggestion (in comments): it worked and I got a .gz file on my desktop. I used balenaEtcher to expand it and copy it to an SD card of the same size as the source. The Pi booted up, but it appears that the restored version is a bit crippled: the Pi boots OK and I can log on (console and SSH), but I get this message just after I log on:
-bash: /etc/profile.d/magnum_path.sh: line 1: syntax error near unexpected token
$'sʧ\237\376\351\370'' -bash: /etc/profile.d/magnum_path.sh: line 1: MA▒$▒g▒p▒[▒kh▒▒▒▒▒D(sʧ▒▒▒▒|▒Kq/t
▒▒▒ ▒9▒$A▒6▒▒f▒▒V▒▒▒Y▒▒ UX▒O:▒▒35▒▒1z▒׎▒YC#▒ȓ!▒ӪW▒▒GM▒
<▒▒'▒▒▒`W=|k▒▒▒V ▒9▒▒R▒[j▒
*▒▒ֈ▒▒
I also noted a message on the bootup screen, but it went by too fast to read fully, so the following is a stab at what it really said: something about the expected number of ?? being, for example, 273 when it only sees 272. And I think it was referencing mmcblk0p2....RDK

Allocating a larger u-boot image

I am compiling my own kernel and bootloader (U-Boot). I added a bunch of new environment variables, but U-Boot doesn't load anymore (it just doesn't load anything from the memory). I am using a PocketBeagle and booting from an SD card, so I am editing the file "am335x_evm.h" found in include/configs/.
I am trying to lay U-Boot out so that it has more space for the environment variables and can still load successfully from memory, but I have been unable to do so. As far as I understand, by default 128 KiB of memory is allocated to the U-Boot environment. Since I added a bunch of variables, I am trying to increase its size from 128 KiB to 512 KiB.
I have changed the following line (from 128 KiB to 512 KiB):
#define CONFIG_ENV_SIZE (512 << 10)
(By the way, does anyone know why it is shifted to the left by 10 bits?)
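(For what it's worth: shifting left by 10 bits just multiplies by 2^10 = 1024, so these macros express sizes in KiB. A quick check in a shell:)
$ echo $((512 << 10))
524288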
I have also changed CONFIG_ENV_OFFSET and CONFIG_ENV_OFFSET_REDUND to:
#define CONFIG_ENV_OFFSET (1152 << 10) /* Originally 768 KiB in */
#define CONFIG_ENV_OFFSET_REDUND (1664 << 10) /* Originally 896 KiB in */
(As before, the redundant copy sits one environment-size above the primary: 1152 KiB + 512 KiB = 1664 KiB, just as 768 KiB + 128 KiB = 896 KiB originally.)
Then, after compiling the new U-Boot, I format the SD card and install the new kernel and U-Boot.
I start by erasing the existing partition table:
export DISK=/dev/mmcblk0
sudo dd if=/dev/zero of=${DISK} bs=1M count=10
Then I transfer U-boot by doing:
sudo dd if=./u-boot/MLO of=${DISK} count=1 seek=1 bs=512k
sudo dd if=./u-boot/u-boot.img of=${DISK} count=2 seek=1 bs=576k
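(As a sanity check at this point, the images can be read back and compared against the source files; a sketch using the same bs*seek arithmetic as above, i.e. MLO at 512 KiB and u-boot.img at 576 KiB, with cmp -n comparing only the first 4 KiB:)
sudo dd if=${DISK} bs=512k skip=1 count=1 2>/dev/null | cmp -n 4096 - ./u-boot/MLO && echo "MLO matches"
sudo dd if=${DISK} bs=576k skip=1 count=2 2>/dev/null | cmp -n 4096 - ./u-boot/u-boot.img && echo "u-boot.img matches"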
I then create the partition layout by doing:
sudo sfdisk ${DISK} <<-__EOF__
4M,,L,*
__EOF__
Then I add the kernel, device tree binaries, kernel modules, etc. When trying to boot while watching the serial port, I get nothing at all; U-Boot is not able to load anything from the SD card. What am I doing wrong? I'd appreciate it if you could point out what my problem is and exactly what I should be doing to increase the size and lay everything out correctly.

Write partial data from MBR.bin to a sector in USB

dd is a tool for Linux which can write partial data from MBR.bin to a sector on a USB stick (instead of writing a whole sector). Now I need to do the same thing in Windows. There is a dd for Windows, but it seems it writes a whole sector!
I need to write the first 440 bytes of an MBR file to a USB stick. The command in Linux is:
dd if=mbr.bin of=/dev/sd<X> bs=440 count=1
and in Windows it would be:
dd bs=440 count=1 if=mbr.bin of=\\.\<x>:
where x is the volume letter. But in Windows it corrupts the USB stick, and the stick then needs to be reformatted. It seems it writes the whole data. How can I solve this problem?
Copy a complete block! Raw device writes have to cover whole sectors, so read back the remainder of the device's first sector, splice it onto the end of your 440 bytes, and write the full block:
e.g. for a 512-byte block size (512 - 440 = 72)
copy /b mbr.bin mbr.full
dd bs=1 if=\\.\<x>: skip=440 seek=440 of=mbr.full count=72
dd bs=512 if=mbr.full of=\\.\<x>: count=1
Are you sure you are passing the parameters correctly? Maybe the Windows version expects /bs=440. Just a guess. Couldn't you anyway just truncate the file to 440 bytes?
