WSL2 - Resize/Extend Disk [closed]

I'm using Windows 10 with Docker on WSL2 (with an Ubuntu 20.04 Linux subsystem).
I originally installed all of this on my C: disk, which was a 256 GB drive.
I've since swapped that drive for a 1 TB one and extended my system partition. In Windows everything looks fine, but I still hit "no space left on device" errors inside WSL2.
How can I resize my Linux disk so I can use more space?
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdb ext4 251G 18G 221G 8% /
tmpfs tmpfs 4.9G 400M 4.5G 9% /mnt/wsl
tools 9p 931G 269G 663G 29% /init
none devtmpfs 4.9G 0 4.9G 0% /dev
none tmpfs 4.9G 8.0K 4.9G 1% /run
none tmpfs 4.9G 0 4.9G 0% /run/lock
none tmpfs 4.9G 0 4.9G 0% /run/shm
none tmpfs 4.9G 0 4.9G 0% /run/user
tmpfs tmpfs 4.9G 0 4.9G 0% /sys/fs/cgroup
C:\ 9p 931G 269G 663G 29% /mnt/c
D:\ 9p 466G 6.6G 460G 2% /mnt/d
E:\ 9p 1.9T 85G 1.8T 5% /mnt/e
/dev/sdd ext4 251G 164G 76G 69% /mnt/wsl/docker-desktop-data/isocache
none tmpfs 4.9G 12K 4.9G 1% /mnt/wsl/docker-desktop/shared-sockets/host-services
/dev/sdc ext4 251G 130M 239G 1% /mnt/wsl/docker-desktop/docker-desktop-proxy
/dev/loop0 iso9660 396M 396M 0 100% /mnt/wsl/docker-desktop/cli-tools
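For reference, the usual way to grow a WSL2 disk is to expand the distro's ext4.vhdx from Windows and then grow the ext4 filesystem from inside WSL. A minimal sketch, assuming the default VHDX location (the exact package path varies by distro, and the same steps apply to Docker Desktop's docker-desktop-data VHDX):
# From an elevated PowerShell/CMD prompt on Windows:
wsl --shutdown
diskpart
# DISKPART> select vdisk file="C:\Users\<you>\AppData\Local\Packages\CanonicalGroupLimited...\LocalState\ext4.vhdx"
# DISKPART> expand vdisk maximum=1024000     (new maximum size, in MB)
# DISKPART> exit

# Back inside the WSL distro, grow the filesystem to fill the enlarged disk:
mount | grep ext4        # confirm the root device; /dev/sdb in the df output above
sudo resize2fs /dev/sdb  # with no size argument it grows to the full device size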

Related

Splitting the output of a command and processing the split data further in Ansible

I have a situation as described below.
I have to run a command that fetches data related to two nodes (let us say NODE-1 and NODE-2; please note that there is no way to fetch the data separately for NODE-1 and NODE-2). The sample data is shown below.
#################### NODE-1 ####################
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 8.0K 12G 1% /dev
root 4.0G 2.4G 1.7G 59% /
tmpfs 17G 796K 17G 1% /dev/user1
tmpfs 12G 1.2G 11G 10% /run
tmpfs 12G 0 12G 0% /sys/user1
tmpfs 2.4G 0 2.4G 0% /run/user1/0
/dev/sda1 3.9G 158M 3.5G 5% /boot
/dev/sda2 3.9G 158M 3.5G 5% /var/log
/dev/sda3 9.8G 1.1G 8.2G 12% /var/opt
none 64M 0 64M 0% /var/opt/group1
none 64M 0 64M 0% /var/opt/group2
#################### NODE-2 ####################
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 8.0K 12G 1% /dev
root 4.0G 2.4G 1.7G 59% /
tmpfs 17G 796K 17G 1% /dev/user1
tmpfs 12G 1.2G 11G 10% /run
tmpfs 12G 0 12G 0% /sys/user1
tmpfs 2.4G 0 2.4G 0% /run/user1/0
/dev/sda1 3.9G 158M 3.5G 5% /boot
/dev/sda2 3.9G 158M 3.5G 5% /var/log
/dev/sda3 9.8G 1.1G 8.2G 12% /var/opt
none 64M 0 64M 0% /var/opt/group1
none 64M 0 64M 0% /var/opt/group2
It is required to process the data of these two nodes separately and to draw and print conclusions individually based on their respective values.
I am unable to find a way to separate the data of these two nodes and save it as separate variables or files for further processing.
PS: The number of rows in the data for each node is not constant.
Can I get a way (at least a hint) to do this using Ansible?
I suggest you use a custom filter (splitregex): put the following file
in a folder named filter_plugins (at the same level as your playbook):
#!/usr/bin/python
import re


class FilterModule(object):
    def filters(self):
        return {
            'splitregex': self.splitregex
        }

    def splitregex(self, l1, pattern):
        # list() makes the result usable in Jinja2 loops under Python 3,
        # where filter() returns an iterator
        return list(filter(None, re.split(pattern, l1, flags=re.MULTILINE)))
You use it in this playbook (the file datas.txt contains your initial output):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: read file
      shell: cat datas.txt
      register: cc

    - name: show item
      debug:
        msg: "{{ item }}"
      loop: "{{ parts }}"
      vars:
        parts: "{{ cc.stdout | splitregex('^#+ NODE-[0-9]+ #+$') }}"
parts is a list containing your initial file split on the pattern.
You could use '^#+ NODE-[0-9]+ #+$\n' if you want to remove the \n at the beginning of each part.
If you are using roles: for a role named test (for example), create a folder roles/test/filter_plugins and put the Python file in it.

How to implement A/B dual copy scheme partitioning?

I am trying to implement an A/B dual-copy partition scheme on my Avenger96 board. I am using the Yocto build system and a .wks file to create the partitions. My .wks file:
part fsbl1 --source rawcopy --sourceparams="file=u-boot-spl.stm32" --part-name "fsbl1" --ondisk mmcblk --align 1 --size 256k
part fsbl2 --source rawcopy --sourceparams="file=u-boot-spl.stm32" --part-name "fsbl2" --ondisk mmcblk --align 1 --size 256k
part ssbl --source rawcopy --sourceparams="file=u-boot.itb" --part-name "ssbl" --ondisk mmcblk --align 1 --size 2M
part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --label root_A --part-name "rootfs_A" --align 4096 --use-uuid --active --size 3G
part /rootfsB --source rootfs --ondisk mmcblk0 --fstype=ext4 --label root_B --part-name "rootfs_B" --align 4096 --use-uuid --size 3G
bootloader --ptable gpt
I am able to build the .wic.xz image and copy it to an SD card, and it creates the two rootfs partitions.
But when I boot from this SD card, both partitions are mounted. For example, /dev/root refers to the current active partition /dev/mmcblk0p4, and /dev/mmcblk0p5 (/rootfsB) also shows up when I do df -h:
Filesystem Size Used Available Use% Mounted on
/dev/root 5.0G 1.1G 3.6G 23% /
devtmpfs 469.7M 0 469.7M 0% /dev
tmpfs 502.2M 0 502.2M 0% /dev/shm
tmpfs 502.2M 9.6M 492.5M 2% /run
tmpfs 502.2M 0 502.2M 0% /sys/fs/cgroup
tmpfs 502.2M 0 502.2M 0% /tmp
tmpfs 502.2M 16.0K 502.1M 0% /var/volatile
/dev/mmcblk0p5 5.0G 1.1G 3.6G 23% /rootfsB
tmpfs 100.4M 0 100.4M 0% /run/user/0
The boot messages also show:
Mounting /rootfsB...
Starting Start psplash boot splash screen...
[ OK ] Mounted /rootfsB.
Running mount from Linux user space also shows:
/dev/mmcblk0p4 on / type ext4 (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=480932k,nr_inodes=120233,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /var/volatile type tmpfs (rw,relatime)
/dev/mmcblk0p5 on /rootfsB type ext4 (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=102840k,mode=700)
Is this the expected behavior for A/B partitioning?
Can anyone please let me know what could be the issue and how to resolve it?
Your help will be much appreciated.
Thanks in advance.
P.S: Please let me know if any info is missing here.
This is actually what I would expect. You want to boot from the "active" (A) partition, but you also want to be able to update the "passive" (B) partition. After you have updated the "passive" (B) partition, you typically tell the bootloader to try booting from it. If that works, (B) becomes the "active" partition and (A) the "passive" one.
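A minimal sketch of that switch-over, assuming the bootloader picks its root partition from a U-Boot environment variable (the variable name rootpart and the image file name are illustrative assumptions, not part of the question's build):
# Running from slot A (/dev/mmcblk0p4), write the new image to the passive slot B:
sudo dd if=rootfs_new.ext4 of=/dev/mmcblk0p5 bs=1M conv=fsync

# Ask the bootloader to try slot B on the next boot (fw_setenv comes from u-boot-tools):
sudo fw_setenv rootpart 5    # 4 = rootfs_A, 5 = rootfs_B
sudo reboot

# If B boots and passes your health checks, leave rootpart at 5;
# otherwise set it back to 4 and reboot into A.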

docker needs huge disk space on macOS

Following up on
docker no space left on device macOS
I found that Docker needs a huge amount of disk space on macOS.
This is after building the following Docker images (with Packer):
CREATED SIZE
6 hours ago 81.7GB
8 hours ago 80.5GB
14 hours ago 230MB
14 hours ago 153MB
I.e., it is taking nearly a third of my total disk space, and I need to allocate nearly half of my total disk space to Docker. Why is it taking so much space?
I've been building the same images under Linux and Windows, and I never ran out of allocated space or had to allocate more disk space again and again before.
% docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 4 0 162.5GB 162.5GB (100%)
Containers 0 0 0B 0B
Local Volumes 2 0 350.5MB 350.5MB (100%)
Build Cache 5 0 0B 0B
% df -h
Filesystem Size Used Avail Capacity iused ifree %iused Mounted on
/dev/disk1s1 932Gi 10Gi 555Gi 2% 488001 9767490159 0% /
devfs 194Ki 194Ki 0Bi 100% 673 0 100% /dev
/dev/disk1s2 932Gi 363Gi 555Gi 40% 712613 9767265547 0% /System/Volumes/Data
/dev/disk1s5 932Gi 2.0Gi 555Gi 1% 2 9767978158 0% /private/var/vm
map auto_home 0Bi 0Bi 0Bi 100% 0 0 100% /System/Volumes/Data/home
map -fstab 0Bi 0Bi 0Bi 100% 0 0 100% /System/Volumes/Data/Network/Servers
/dev/disk1s4 932Gi 507Mi 555Gi 1% 54 9767978106 0% /Volumes/Recovery
/dev/disk2s1 1.8Gi 1.5Gi 323Mi 83% 17770 4294949509 0% /Volumes/Docker
The above reports look reasonable to me. Where has the remaining ~150 GB of space gone?
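Two checks that usually explain the gap on macOS, as a hedged sketch: Docker Desktop keeps all image data inside a single backing disk image (the path below is the usual default; older versions used Docker.qcow2 instead of Docker.raw), and the space that docker system df reports as reclaimable can be freed with a prune:
# Size of the Linux VM's backing disk image (default location for Docker Desktop on macOS):
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw

# Free everything docker system df reports as reclaimable (removes unused images and volumes):
docker system prune -a --volumes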

EC2 r3.xlarge storage space does not correspond to the documentation

I'm using Hadoop YARN on EC2 with r3.xlarge instances; I launched the instances from an AMI using the spark-ec2 scripts.
On https://aws.amazon.com/ec2/instance-types/, the specifications of r3.xlarge are the following:
vCPU: 4
Mem: 30.5 GiB
Storage: 1 x 80 GB
The memory is right; the free command gives me this result:
[root@ip-xxx-xx-xx-xxx ~]$ free -g
total used free shared buffers cached
Mem: 29 2 27 0 0 1
But the storage does not correspond to what is documented.
[root@ip-xxx-xx-xx-xxx ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 6.9G 783M 91% /
devtmpfs 15G 64K 15G 1% /dev
tmpfs 15G 0 15G 0% /dev/shm
Is it normal to have only ~40 GB and not the 80 GB specified in the documentation? Or is this because I launched the instance from an AMI?
The two tmpfs mounts aren't where your missing 80 GB is. This looks like a Debian/Ubuntu distro. I can reproduce something similar to your df:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 15G 12K 15G 1% /dev
tmpfs 3.0G 328K 3.0G 1% /run
/dev/xvda1 7.8G 790M 6.6G 11% /
none 15G 0 15G 0% /run/shm
Note /dev/xvda1. That's your root partition, which lives on EBS. Your 80 GB SSD is actually at /dev/xvdb. You need to make use of it:
mkdir -p /mnt/ssd && mkfs.ext4 /dev/xvdb \
&& echo '/dev/xvdb /mnt/ssd auto defaults,nobootwait 0 0' >> /etc/fstab \
&& mount /mnt/ssd
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 15G 12K 15G 1% /dev
tmpfs 3.0G 328K 3.0G 1% /run
/dev/xvda1 7.8G 790M 6.6G 11% /
/dev/xvdb 74G 52M 70G 1% /mnt/ssd
Congrats! You are now the proud owner of an 80 GB mount. Okay, not quite 80 GB. Let's get to 80 GB with df -H, which reports sizes in powers of 10 the way AWS does:
$ df -H
Filesystem Size Used Avail Use% Mounted on
udev 17G 13k 17G 1% /dev
tmpfs 3.3G 336k 3.3G 1% /run
/dev/xvda1 8.4G 828M 7.1G 11% /
/dev/xvdb 80G 55M 76G 1% /mnt/ssd
Your filesystem is probably on EBS, not the instance storage that comes with r3.xlarge. This is the default for most AMIs. Note that the size of the EBS volume is not part of the image; you can choose it when you create the instance.
Instance store is available on the larger instance types as shown here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
AMI images have two options for the root storage device. The most common are EBS images, which use EBS for the root device. Since EBS isn't locked to specific hardware, these instances are much more flexible.
The other option is an AMI with an instance store root storage device. However, you then lose the ability to stop the instance without terminating it, to change the instance type, to resize the storage device, and to manage the storage separately from the instance itself.
Instance store AMIs are often tagged with S3. For example: amzn-ami-hvm-2016.03.0.x86_64-s3 (ami-152bc275).
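Since the EBS root size is chosen per launch rather than baked into the AMI, you can request the full 80 GB up front. A hedged AWS CLI sketch (the AMI ID is a placeholder and the root device name may differ per image):
# Launch an r3.xlarge with an 80 GB EBS root volume:
aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type r3.xlarge \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":80,"VolumeType":"gp2"}}]'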

Running out of inodes on a docker volume

I have the following docker-compose.yml file:
web:
  build: .
  ports:
    - "4200:4200"
    - "35729:35729"
  volumes:
    - ..:/code
    - ../home:/home/dev
which maps the two volumes above. When I log in to my VM and run df -i, I see:
Filesystem Inodes IUsed IFree IUse% Mounted on
none 1218224 509534 708690 42% /
tmpfs 256337 18 256319 1% /dev
shm 256337 1 256336 1% /dev/shm
tmpfs 256337 11 256326 1% /sys/fs/cgroup
none 1000 0 1000 0% /code
none 1000 0 1000 0% /home/dev
/dev/sda1 1218224 509534 708690 42% /etc/resolv.conf
/dev/sda1 1218224 509534 708690 42% /etc/hostname
/dev/sda1 1218224 509534 708690 42% /etc/hosts
tmpfs 256337 18 256319 1% /proc/kcore
tmpfs 256337 18 256319 1% /proc/timer_stats
As you can see, /code and /home/dev (my two volumes) only have 1000 inodes, so when my build process runs and ends up creating a ton of files, I get an error that there are not enough inodes.
Host = OS X
Guest = CentOS 6.5
Using VirtualBox
My question is: how do I assign more inodes to my data volumes /code and /home/dev above?
I'm looking for a similar solution, and so far I've found this post to have some useful information in it:
how to free inode usage
Also, according to this post, there doesn't seem to be a dynamic way of allocating inodes:
how can I increase the number of inodes ...
And finally, there is this line in the documentation:
it is not possible to expand the number of inodes on a filesystem after it is created
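Since the inode count is fixed when an ext filesystem is created, the general workaround is to recreate the backing filesystem with a higher inode count. A minimal sketch, assuming an ext4 volume and a placeholder device name (recreating the filesystem destroys existing data, so back it up first):
# Inspect the current inode budget:
sudo tune2fs -l /dev/sdX1 | grep -i inode

# Recreate the filesystem with an explicit inode count (-N),
# or use -i <bytes-per-inode> to set the ratio instead:
sudo mkfs.ext4 -N 2000000 /dev/sdX1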
