Creating an image of a drive cloned with ddrescue

We have an old server with disk failures that we've tried to clone into vSphere. This resulted in an error whose cause we couldn't pinpoint.
With ddrescue we cloned the machine to a 2TB external hard drive that we can use to experiment with, without causing any downtime.
Then we used plain dd to try to create an image that we could later convert or import into the virtual environment.
The problem is that we don't have any workstations that can handle a 2TB file. Is there any way to create an image of the drive with the partitions, data and MBR? Basically everything except the unallocated space.

You could try using dd. If you have some idea how big the OS and data partitions were, just chop that much plus a bit extra off the front of the image and save it in a new file. Say you guess it was 4GB of OS and 8GB of data; do something like this:
dd if=yourimage of=newsmallerimage bs=1024k count=14000
which will get you around 14GB. Any virtual machine will likely ignore blank space at the end anyway.
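If you'd rather cut at the exact end of the last partition instead of guessing, something along these lines should work (the sector numbers and file names here are made up; read the real values from the fdisk output, and qemu-img is only one option for the conversion step, not something required):

fdisk -l yourimage
# say the last partition ends at sector 41943039 (hypothetical value);
# with 512-byte sectors, copy everything up to and including that sector:
dd if=yourimage of=trimmedimage bs=512 count=41943040
# optionally convert the trimmed raw image for the virtual environment:
qemu-img convert -f raw -O vmdk trimmedimage server.vmdk

bs=512 keeps the arithmetic simple; a larger block size with a proportionally smaller count is faster if the numbers divide evenly.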

Related

Problem with ShadowCopy, error 0x80042306

I have a problem with Shadow Copy. Specifically, when I try to set up a Shadow Copy of a given volume, error 0x80042306 appears.
Additionally, there is no way to choose a Shadow Copy storage area on the same volume; I simply cannot select the partition itself as the location for its own copies.
The second issue is that the partition the error pertains to is part of a larger disk: we have a 30TB disk and expanded it by creating a new 70TB partition, and the error relates to this second one. The other disks work correctly. The entire disk sits on a disk array.
To preempt the question, all other backup applications have been removed and no other applications are using VSS.
There are only two Microsoft providers in the registry.
I would be grateful for any information.
Best regards,
We have uninstalled all backup applications.
We have tried to set up Shadow Copy on other disks/partitions.
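For anyone wanting to verify the same things, the registered providers and the existing shadow storage associations can be listed from an elevated command prompt with the standard vssadmin tool:

vssadmin list providers
vssadmin list shadowstorage
vssadmin list volumes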

Write to and read from free disk space using Windows API

Is it possible to write to free clusters on disk, or read data from them, using Windows APIs? I found the Defrag API: https://learn.microsoft.com/en-gb/windows/desktop/FileIO/defragmenting-files
FSCTL_GET_VOLUME_BITMAP can be used to obtain the allocation state of each cluster, and FSCTL_MOVE_FILE can be used to move clusters around. But I couldn't find a way of reading data from free clusters or writing data to them.
Update: one workaround that comes to mind is creating a small new file, writing some data to it, relocating it to the desired position and then deleting the file (the data will remain in the freed clusters). But that still doesn't solve the reading problem.
What I'm trying to do is some sort of transparent cache, so the user could still use his NTFS partition as usual and still see these clusters as free space, while I store some data in them. Data safety is not a concern: the data can be overwritten by user actions and will just be regenerated/redownloaded later, when the clusters become free again.
There is no easy solution here.
First of all, you should create your own partition on the drive. That prevents accidental access to your data by the OS or any other process. Then call CreateFileA() with the name of the partition, which gives you raw access to the data. Bear in mind that the function will fail for any partition in use by the OS.
You can perform the same trick with a physical drive, too.
One way could be to open the volume directly by calling CreateFile with the volume's UNC path as the filename argument (e.g. \\.\C:).
You can then read from and write to the volume directly.
So you may be able to achieve your goal with the following steps (a C sketch that puts them together follows the notes below):
get the cluster size in bytes with GetDiskFreeSpace
get the map of free clusters with DeviceIoControl and FSCTL_GET_VOLUME_BITMAP
open the volume with CreateFile, using its UNC path \\.\F:
(take a careful look at the documentation, especially the Remarks section's part about opening drives and volumes)
seek to the offset of a free cluster (clusterIndex * clusterByteSize) using SetFilePointer
write/read your data with WriteFile/ReadFile on the handle retrieved by the CreateFile call above
(also note that read/write access has to be sector-aligned, otherwise the ReadFile/WriteFile calls fail)
Please note:
This is only meant as a starting point for your own research; it is not a bulletproof recipe.
Back up your data before messing with the file system!
Also keep in mind that the free-cluster bitmap will be outdated as soon as you get it (especially on the system volume).
So I would strongly advise against using such techniques in production or customer environments.
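As promised, a minimal sketch in C of the steps above (read-only, since writing is where the danger lies). It assumes volume F: and an elevated process, and omits most error handling; treat it as an illustration, not a hardened implementation:

#include <windows.h>
#include <winioctl.h>
#include <malloc.h>
#include <stdio.h>

int main(void)
{
    /* Step 1: cluster size in bytes = sectors/cluster * bytes/sector. */
    DWORD secPerClus, bytesPerSec, freeClus, totalClus;
    if (!GetDiskFreeSpaceA("F:\\", &secPerClus, &bytesPerSec,
                           &freeClus, &totalClus))
        return 1;
    DWORD clusterBytes = secPerClus * bytesPerSec;

    /* Step 2: open the volume itself (requires admin rights). */
    HANDLE hVol = CreateFileA("\\\\.\\F:", GENERIC_READ,
                              FILE_SHARE_READ | FILE_SHARE_WRITE,
                              NULL, OPEN_EXISTING, 0, NULL);
    if (hVol == INVALID_HANDLE_VALUE)
        return 1;

    /* Step 3: fetch (a chunk of) the allocation bitmap: one bit per
       cluster, 0 = free, 1 = in use. A small buffer just means we only
       see the first clusters of the volume, which is fine for a demo. */
    STARTING_LCN_INPUT_BUFFER in;
    in.StartingLcn.QuadPart = 0;
    BYTE outBuf[sizeof(VOLUME_BITMAP_BUFFER) + 4096];
    DWORD ret = 0;
    DeviceIoControl(hVol, FSCTL_GET_VOLUME_BITMAP, &in, sizeof(in),
                    outBuf, sizeof(outBuf), &ret, NULL);
    VOLUME_BITMAP_BUFFER *bmp = (VOLUME_BITMAP_BUFFER *)outBuf;

    /* Only trust the bits that actually fit into our buffer. */
    LONGLONG bits = (LONGLONG)(ret - FIELD_OFFSET(VOLUME_BITMAP_BUFFER, Buffer)) * 8;
    if (bits > bmp->BitmapSize.QuadPart)
        bits = bmp->BitmapSize.QuadPart;

    /* Find the first free cluster in that chunk. */
    LONGLONG lcn = -1;
    for (LONGLONG i = 0; i < bits; i++)
        if (!(bmp->Buffer[i / 8] & (1 << (i % 8)))) { lcn = i; break; }
    if (lcn < 0) { CloseHandle(hVol); return 1; }

    /* Steps 4+5: seek to the cluster and read it. Volume I/O must be
       sector-aligned in offset and length, so read a whole cluster into
       a suitably aligned buffer. SetFilePointerEx is the 64-bit-safe
       variant of SetFilePointer. */
    LARGE_INTEGER ofs;
    ofs.QuadPart = lcn * (LONGLONG)clusterBytes;
    SetFilePointerEx(hVol, ofs, NULL, FILE_BEGIN);
    BYTE *buf = (BYTE *)_aligned_malloc(clusterBytes, bytesPerSec);
    DWORD got = 0;
    ReadFile(hVol, buf, clusterBytes, &got, NULL);
    printf("read %lu bytes from free cluster %lld\n",
           (unsigned long)got, (long long)lcn);

    _aligned_free(buf);
    CloseHandle(hVol);
    return 0;
}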

Writing to /dev/loop USB image?

I've got an image that I write onto a bootable USB stick, and I need to tweak it. I've managed to attach the image as /dev/loopX, including allowing for the partition start offset, and I can read files from it. However, writing back "seems to work" (no errors reported), but after writing the resulting tweaked image to a USB drive, I can no longer read the tweaked files correctly.
The file that fails is large, and also a compressed tarfile.
Is writing back in this manner simply a no-no, or is there some way to make this work?
If possible, I don't want to reformat the partition and rewrite it from scratch, because that will (I assume) change the UUID, and then I would need to worry about the boot partition etc.
I believe I have the answer. When using losetup to create a writable virtual device from the partition inside your image, you must specify the --sizelimit parameter as well as the --offset parameter. If you don't, writes can run past the last defined sector of the partition (presumably this requires the image to have extra space after it). Linux reports no errors until later, when you try to read the data back. The key hints/evidence: when reads of the (re)written data fail, dmesg shows errors from attempts to read past the end of the drive, and fsck tools such as dosfsck also report that the filesystem claims to be larger than the device.
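For example, with made-up numbers (take the real start sector and partition size from fdisk -l on your image; sectors here are 512 bytes):

fdisk -l image.img
# say the partition starts at sector 2048 and is 204800 sectors long:
losetup --offset $((2048 * 512)) --sizelimit $((204800 * 512)) /dev/loop0 image.img
mount /dev/loop0 /mnt
# ... tweak files under /mnt ...
umount /mnt
losetup -d /dev/loop0

With both --offset and --sizelimit set, the loop device ends exactly where the partition ends, so writes can no longer spill past it.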

Solr ate all memory and throws "-bash: cannot create temp file for here-document: No space left on device" on the server

Solr had been running for a long time, approximately 2 weeks, when I saw that it had eaten around 22 GB of my server's 28 GB of RAM.
While checking the status of Solr using bin/solr -i, it threw: -bash: cannot create temp file for here-document: No space left on device
I stopped Solr and restarted it, and it is working fine again.
What is the actual problem here, and what is the solution?
I never want Solr to stop or halt while running.
First you should check the space on your file systems, for example using df -h, and post the output here.
Is there any mount point without free space?
Second: find out the reason why there is no space left. Your question covers two different things: no space left on the file system, and high RAM usage.
Solr stores two different kinds of data: the search index and the data itself.
Storing the data is only needed if you want to output the documents after finding them in the index, for example if you want to use highlighting. So take a look at your schema.xml and decide, for every single field, whether it must be stored or whether indexing the field is enough for your needs. The stored attribute in the field definition controls this, as in the snippet below.
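A hypothetical field definition in schema.xml (the field name and type are placeholders; adjust to your schema). indexed="true" keeps the field searchable, while stored="false" keeps the raw contents out of the index files and saves disk space:

<field name="body" type="text_general" indexed="true" stored="false"/>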
Next: if you rebuild the index, keep in mind that you need double the space on disk during the rebuild.
You could also think about moving your index/data files to another disk.
Once you have solved your free-space problem on disk, you probably won't have a RAM issue any more.
If there is still a RAM problem, please post your Java start parameters. These define how much RAM is available to Solr; Solr needs a lot of virtual memory, but only a moderate amount of physical RAM.
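For example, assuming Solr 5 or later, the heap can be capped explicitly in solr.in.sh (the 4g value is only an illustration; size it for your installation):

SOLR_HEAP="4g"
# or, equivalently:
# SOLR_JAVA_MEM="-Xms4g -Xmx4g"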
You could also post the output of your logfile.

Does SSD Trim work on Windows 7 if there isn't a drive letter assigned?

I am using the Intel Solid-State Drive Toolbox to view an SSD. This utility has an option to manually run TRIM. What I found odd is that the utility reports: "The selected Intel SSD has no partition letter. This feature requires a partition letter to run."
I have the disk mounted as a junction point (a folder mount, with no drive letter). Is this just a limitation of Intel's utility, or does Windows 7 TRIM require a drive to be assigned a drive letter in order to work?
The way TRIM works is that it is a hint to the SSD indicating which address ranges no longer contain data. This allows the SSD to optimize internally (usually saving work done by garbage collection).
When there is no partition on the drive, that generally means everything is "trimmed", though this may not be the case if the SSD was never made aware of it. So in this case I think it's the tool: unable to figure out what it could and could not trim, it may simply be avoiding trimming anything unintentionally.
Aside from that, the TRIM feature is specific to the ATA protocol. That is, it's a command sent to the drive at a lower level, so it's not tied to Windows 7 or to any application; it's open to anything that is willing and able to send the command.
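As a side note, whether Windows 7 itself issues TRIM (delete notifications) can be checked from an elevated command prompt, independently of any drive letter:

fsutil behavior query DisableDeleteNotify

A result of DisableDeleteNotify = 0 means TRIM is enabled. This only tells you whether the OS sends TRIM when files are deleted; it doesn't trigger a manual TRIM pass the way the Intel tool does.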
