How to manage disk space while working with Yocto? - embedded-linux

I am using Yocto for one of my projects. I know that Yocto needs a good amount of disk space for build activities, and I am working as a non-root user on Ubuntu 20.04. But I often run into disk space issues during builds.
WARNING: The free space of [...]/tmp-glibc (overlay) is running low (0.555GB left)
ERROR: No new tasks can be executed since the disk space monitor action is "STOPTASKS"!
WARNING: The free space of [...]/downloads (overlay) is running low (0.555GB left)
ERROR: No new tasks can be executed since the disk space monitor action is "STOPTASKS"!
WARNING: The free space of [...]/sstate-cache (overlay) is running low (0.555GB left)
ERROR: No new tasks can be executed since the disk space monitor action is "STOPTASKS"!
I have tried deleting $TMPDIR (build/tmp), $SSTATE_DIR (build/sstate-cache) and $DL_DIR (build/downloads), but that didn't help.
Is there any way to allocate more space to a user in Ubuntu? And what is the best practice for disk space usage while working with Yocto?
Can anyone please let me know how to resolve this issue?
Your help will be much appreciated.
Thanks in advance.
P.S.: I am working with the Yocto "Honister" release. Please let me know if any info is missing here.

It is well known to Yocto users/developers that it needs a lot of disk space, and it has three parts that are huge:
TMP
DOWNLOADS
SSTATE CACHE
My advice is the following (based on my experience):
Before starting to work on any Yocto build, create shared directories for the three components mentioned above.
Example:
mkdir -p /home/user/yocto_shared/{tmp,downloads,sstate-cache}
After that, point every build you create at those shared directories:
In local.conf:
DL_DIR ?= "/home/user/yocto_shared/downloads"
SSTATE_DIR ?= "/home/user/yocto_shared/sstate-cache"
TMPDIR = "/home/user/yocto_shared/tmp"
This will save you a lot of space and save you time on new builds.
Also, you can inherit rm_work, which removes the temporary work files of each recipe after BitBake has built it:
INHERIT += "rm_work"
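If you ever need the temporary output of a particular recipe to survive (for example while debugging it), the rm_work class also honours RM_WORK_EXCLUDE; a minimal local.conf sketch, where the recipe names are only examples:
# keep the work directories of these recipes despite rm_work
RM_WORK_EXCLUDE += "busybox linux-yocto"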
NOTE
After you remove something on Ubuntu, especially the Yocto tmp, downloads and sstate-cache directories, do not forget to empty the Trash.
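If you prefer the terminal, the Trash can be emptied directly; this assumes the standard per-user trash location on Ubuntu/GNOME:
rm -rf ~/.local/share/Trash/files/* ~/.local/share/Trash/info/*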

In addition to the suggestions above, you could try adding the following to your conf/local.conf file to avoid creating -dbg packages, which could save some extra space.
INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
In any case, console-only images might take up to 20GB, whilst graphical images can go up to 50GB. You should not be deleting your $TMPDIR (build/tmp), $SSTATE_DIR (build/sstate-cache) and $DL_DIR folders, but getting a larger HDD/SSD instead.
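For reference, the STOPTASKS errors in the question come from BitBake's disk space monitor, which is configured through BB_DISKMON_DIRS in conf/local.conf. The values below are only a sketch of what the poky defaults roughly look like; raising the thresholds does not create space, it only changes when BitBake stops scheduling tasks:
# format: <action>,<directory>,<minimum free space>,<minimum free inodes>
BB_DISKMON_DIRS ??= "\
    STOPTASKS,${TMPDIR},1G,100K \
    STOPTASKS,${DL_DIR},1G,100K \
    STOPTASKS,${SSTATE_DIR},1G,100K"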

Related

Problem with ShadowCopy, error 0x80042306

I have a problem with the Shadow Copy. Specifically, when I try to set up a Shadow Copy of a given volume, error 0x80042306 appears.
Additionally, there is no option to choose a Shadow Copy for the same volume; I simply cannot select the volume's own partition to store the copy on that volume.
The second issue is that the partition the error pertains to is part of a larger disk: we have a 30 TB disk and expanded it by creating a new 70 TB partition, and the error relates to this second partition. Other disks work correctly. The entire disk sits on a disk array.
To preempt the question, all other backup applications have been removed and no other applications are using VSS.
There are only two Microsoft providers in the registry.
I would be grateful for any information.
Best regards,
We have uninstalled all backup applications.
We have tried to set up ShadowCopy on other disks/partitions.
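As a verification sketch only (standard vssadmin commands from an elevated command prompt), the registered providers, writers and shadow storage associations can be double-checked with:
vssadmin list providers
vssadmin list writers
vssadmin list shadowstorage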

Anaconda Installer (Fedora/Cent/RH/Qubes)- CLI Disk Prep Prior to Install

I'm looking to have root on a RAID BtrFS built on a number of LUKS disks. I typically do this on Debian or Ubuntu by preparing my disks beforehand, then running the install onto those disks. At the end, I need to pivot into the new system to modify crypttab and fstab.
I'm trying the same thing with Qubes, which uses the Anaconda installer. When I get to the GUI partitioner, the BtrFS appears under the "Unknown" dropdown, but if I try to set the mount point to "/" and then "Update Settings," it errors with "You must create a new filesystem on the root device." (But there is already one there.) If I use "+" instead, I am told "Not enough free space for thin provisioning." The installer is clearly confused about how much space is available: "Available space 992.5 KiB," "Total space 238.47 GiB." In fact, there is 932.35 GiB in the RAID'ed BtrFS.
If I just open the LUKS devices but put no FS in them, then all /dev/mapper/luks* devices appear in the partitioner under the "Unknown" dropdown, but after choosing "New mount points will use the following partitioning scheme: Btrfs," none of the devices lets me assign a mount point. It is greyed out, and if I try to use "+" and test it with a single disk, it comes back with the error "Not enough disks for single." (But I have multiple LUKS disks there!)
Trying without any prior formatting, neither LUKS nor Btrfs, I find that the partitioner can't handle bare disks; it wants a partition table (which I don't have).
Does anyone have a way through this?
Edit: It appears there are serious issues with this installer.
The answer to all of this appears to be: "Don't try to fight the Anaconda, as you will lose." Despite the access to a root terminal (Control-Alt-F1 reaches a tmux session, Control-b 2, reaches a terminal with root privileges), you must return to the graphical installer, which is too limited to allow any headway, particularly with BtrFS disks. Anaconda sees BtrFS not as a filesystem, but as a device, and this makes problems insurmountable.
The solution is to do a dummy install and then modify all disks, editing crypttab, fstab and /etc/default/grub as needed. Then pivot in and run dracut -f, along with grub2-mkconfig if needed and, if necessary, grub2-install.
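A rough sketch of that pivot step on a Fedora/Qubes-style system; every device, subvolume and path name here is a placeholder for whatever the dummy install actually created, and the GRUB config path shown is the BIOS one (EFI layouts differ):
# open the LUKS containers and mount the Btrfs root
cryptsetup open /dev/sdX1 luks-root
mount -o subvol=root /dev/mapper/luks-root /mnt
mount /dev/sdX2 /mnt/boot
# bind the virtual filesystems and enter the installed system
for d in dev proc sys run; do mount --rbind /$d /mnt/$d; done
chroot /mnt /bin/bash
# inside the chroot: rebuild the initramfs and the GRUB configuration
dracut -f
grub2-mkconfig -o /boot/grub2/grub.cfg
# only if the bootloader itself needs reinstalling:
# grub2-install /dev/sdX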
One advantage of BtrFS in this process is that it's possible to avoid using a live DVD or Anaconda's rescue shell to make changes to a system "at rest" and then pivoting in to run dracut et al. You'd just use btrfs device add to add a device to the root, and then btrfs device remove the original. Then make the relevant changes to the original partitions, afterwards reversing the add/remove. So it's possible to make changes by moving back and forth from one disk to the other.
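A sketch of that add/remove dance, with the /dev/mapper names as placeholders and the Btrfs root mounted at /:
# migrate the root filesystem onto a spare device
btrfs device add /dev/mapper/luks-new /
btrfs device remove /dev/mapper/luks-old /
# (with RAID profiles a btrfs balance may also be needed after changing devices)
# ...modify the original disk as required, then reverse the two steps above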

How to resize my partition on virtualbox with Debian?

I already created the space in VirtualBox, so you can see this free 55 GB.
But when I want to delete partition 2 and partition 5, I get an error message: "Error deleting partition /dev/sda5: warning, partition /dev/sda5 is being used, are you sure you want to continue", and I can't do anything with it.
I tried to delete these partitions with sudo fdisk /dev/sda, but nothing changed after deleting them; they stayed there. How can I increase my sda1 disk size then?
I know it isn't a programming question, but I have tried many links and I still have no idea how I can increase my partition 1.
The easiest way is to use a GParted live CD, which you'll find here: https://gparted.org/livecd.php
Then boot your VM from it (since it's a virtual machine, you won't have to burn a real CD) and follow the steps; there are plenty of tutorials with screenshots on the net, so I won't try to make one more here.
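If you would rather stay on the command line, the same resize can be done from the live environment's terminal. The sketch below assumes the common Debian layout where sda2/sda5 is an extended partition holding swap and sda1 is the ext4 root; adjust the partition numbers to your own layout and remember to remove the old swap entry from /etc/fstab afterwards:
# make sure nothing on the disk is in use (swap is the usual culprit)
sudo swapoff -a
# delete the extended partition, then grow partition 1 into the freed space
sudo parted /dev/sda rm 2
sudo parted /dev/sda resizepart 1 100%
# grow the ext4 filesystem to fill the enlarged partition
sudo resize2fs /dev/sda1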

Transferring (stopping, resuming) file using rsync

I have an external hard-drive that I suspect is on its way out. At the minute, I can transfer files from it, but only for a while. Unfortunately, I have one single file that's >50GB in size. My solution to this is to use rsync to transfer this one particular file a bit at a time, leave the drive to rest (switch it off), and resume a little while later.
I'm using rsync --partial --progress --inplace --append -a /Volumes/Backup\ Drive/chris/Desktop/Recording\ Sessions/S1/Session\ 1/untitled ~/Desktop/temp to transfer it. (The file is in the untitled folder, which I'm moving into the temp folder) However, after having stopped it and resumed it, it seems to be over-writing the previous attempt at the file, meaning I don't really get any further.
Is there something I'm missing? :X
Thankyou ^_^
EDIT: Still don't know :\
Well, since this is a programming site, here's a program to do it. I tested it on OS X, but you should definitely test it on some small files first to make sure it does what you want:
#!/usr/bin/env python
import os
import sys

# copy the byte range [begin, end) from source to target
source = sys.argv[1]
target = sys.argv[2]
begin = int(sys.argv[3])
end = int(sys.argv[4])

# reopen an existing target in place, otherwise create it
mode = 'r+b' if os.path.exists(target) else 'w+b'
with open(source, 'rb') as source_file, open(target, mode) as target_file:
    source_file.seek(begin)
    target_file.seek(begin)
    buffer = source_file.read(end - begin)
    target_file.write(buffer)
You run this with four arguments: the source file, the destination, and two numbers. The first number is the byte count to start copying from (so on the first run you'd use 0). The second number is the byte count to copy until (not including). So on subsequent runs you'd always use the previous fourth argument as the new third argument (new begin equals old end). And just go on like that until it's done, using whatever sizes you like along the way.
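For example, saving the script as copy_chunk.py (the file name, paths and the 1 GB chunk size below are arbitrary placeholders), the copy could proceed in pieces like this:
# first gigabyte
python copy_chunk.py /path/to/source_file /path/to/target_file 0 1073741824
# after resting the drive: the next gigabyte, starting where the last run ended
python copy_chunk.py /path/to/source_file /path/to/target_file 1073741824 2147483648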
I know this is related to macOS, but the best way to get all the files off a dying drive is with GNU ddrescue. I have no idea if this runs nicely on macOS, but you can always use a Linux live-usb to do this. You'll want to open a terminal and be either root (preferred) or use sudo.
Firstly, find the disk that you want to back up. This can be done by running the following; make a note of the partition or disk name that you want to back up. Hard drives/flash drives will typically use the format sdX, where X is the drive letter, and partitions will be listed as sdX1, sdX2, etc. NVMe drives/partitions follow a similar naming convention.
lsblk -o name,size,label,fstype,model
Mount and change directory (cd) to a writable location that is bigger than the drive/partition you want to back up.
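For example, assuming /dev/sdY1 is a healthy drive with enough free space (the device name and mount point are placeholders):
mkdir -p /mnt/rescue
mount /dev/sdY1 /mnt/rescue
cd /mnt/rescue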
Now we are going to do a first pass over the drive/partition, without stopping on problematic sections. This ensures that ddrescue does not cause any more damage by dwelling on a bad section. Think of it like a hole in a sweater: you wouldn't want to keep picking at the hole or it would get bigger. Run the following, with sdX replaced by the drive/partition name from earlier:
ddrescue -d /dev/sdX backup.img backup.logfile
The -d flag uses direct disk access and bypasses the kernel cache, and the logfile is important in case the drive gets disconnected or the process stops somehow.
Run ddrescue again with the -r flag. This will retry bad sections 3 times. Feel free to run this a few times, but note that ddrescue cannot restore everything. From my experience it usually restores in the high 90%s, and many of the files are system files (aka not your personal files).
ddrescue -d -r3 /dev/sdX backup.img backup.logfile
Finally, you can use the image however you want. You can either mount it to copy the files off or use it in a virtual machine/burn it to a working drive with dd. Do note that the latter options will not always work if system critical files were damaged.
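For example, if you imaged a single partition, the result can be loop-mounted read-only to copy the files off (a whole-disk image would additionally need a partition offset or a tool such as kpartx):
mkdir -p /mnt/recovered
mount -o loop,ro backup.img /mnt/recovered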
Good luck and remember to make backups!

IntelliJ 9; caching a lot of data under C:\Users\

Is there a reason why IntelliJ creates a lot of files under C:\Users\<username>\.IntelliJIdea90 ?
This directory has slowly grown to around 2GB. I can understand that IntelliJ needs to perform some caching for local history and indexing, but 2GB seems a little excessive.
Is there a way to safely clear down some of this data and free up some disk space?
I haven't yet heard of unexplained growth of those indices; maybe there is a reason after all.
You can safely delete that directory (with IDEA not running), but expect a full rebuild of the index on the next startup. If you want to preserve your configuration, though, consider removing only system/caches and system/index.
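On the Windows layout from the question, that amounts to something like the following, run while the IDE is closed (paths assume the default location mentioned in the question):
rd /s /q "C:\Users\<username>\.IntelliJIdea90\system\caches"
rd /s /q "C:\Users\<username>\.IntelliJIdea90\system\index"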
Edit: Back at work, I had a look on my machine:
$ du -sh ~/Library/Caches/IntelliJIdea90
3,8G /Users/jjungnickel/Library/Caches/IntelliJIdea90
