How to implement A/B dual copy scheme partitioning? - embedded-linux

I am trying to implement an A/B (dual copy) partition scheme on my Avenger96 board. I am using the Yocto build system with a .wks file to create the partitions. My .wks file:
part fsbl1 --source rawcopy --sourceparams="file=u-boot-spl.stm32" --part-name "fsbl1" --ondisk mmcblk --align 1 --size 256k
part fsbl2 --source rawcopy --sourceparams="file=u-boot-spl.stm32" --part-name "fsbl2" --ondisk mmcblk --align 1 --size 256k
part ssbl --source rawcopy --sourceparams="file=u-boot.itb" --part-name "ssbl" --ondisk mmcblk --align 1 --size 2M
part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --label root_A --part-name "rootfs_A" --align 4096 --use-uuid --active --size 3G
part /rootfsB --source rootfs --ondisk mmcblk0 --fstype=ext4 --label root_B --part-name "rootfs_B" --align 4096 --use-uuid --size 3G
bootloader --ptable gpt
I am able to build the .wic.xz image and copy it to an SD card, and it creates the two rootfs partitions.
But when I boot from this SD card, I can see that both partitions are mounted. For example, df -h shows /dev/root (which refers to the currently active partition, /dev/mmcblk0p4) and also /dev/mmcblk0p5 mounted at /rootfsB:
Filesystem Size Used Available Use% Mounted on
/dev/root 5.0G 1.1G 3.6G 23% /
devtmpfs 469.7M 0 469.7M 0% /dev
tmpfs 502.2M 0 502.2M 0% /dev/shm
tmpfs 502.2M 9.6M 492.5M 2% /run
tmpfs 502.2M 0 502.2M 0% /sys/fs/cgroup
tmpfs 502.2M 0 502.2M 0% /tmp
tmpfs 502.2M 16.0K 502.1M 0% /var/volatile
/dev/mmcblk0p5 5.0G 1.1G 3.6G 23% /rootfsB
tmpfs 100.4M 0 100.4M 0% /run/user/0
And boot messages also show:
Mounting /rootfsB...
Starting Start psplash boot splash screen...
[ OK ] Mounted /rootfsB.
Running mount from Linux user space also shows:
/dev/mmcblk0p4 on / type ext4 (rw,relatime)
devtmpfs on /dev type devtmpfs (rw,relatime,size=480932k,nr_inodes=120233,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=666)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /var/volatile type tmpfs (rw,relatime)
/dev/mmcblk0p5 on /rootfsB type ext4 (rw,relatime)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=102840k,mode=700)
Is this the expected behavior for A/B-style partitioning?
Can anyone please let me know what the issue could be and how to resolve it?
Your help will be much appreciated.
Thanks in advance.
P.S.: Please let me know if any info is missing here.

This is actually what I would expect. You want to boot from the "active" (A) partition, but you also want to be able to update the "passive" (B) partition. After you have updated the passive (B) partition, you typically tell the bootloader to try booting from it. If that works, B becomes the active partition and A becomes the passive one.
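As a minimal sketch of that handover: the `bootslot` variable name is an assumption for illustration (your U-Boot boot script would have to read it), and the U-Boot environment is simulated here with a plain file so the logic can be followed without hardware; on a real board you would use fw_printenv/fw_setenv from u-boot-tools instead.

```shell
#!/bin/sh
# Sketch of an A/B slot handover. "bootslot" and the file-based
# environment are assumptions for this demo; a real board would
# read/write the U-Boot environment with fw_printenv / fw_setenv.

ENV_FILE="${ENV_FILE:-/tmp/uboot-env.txt}"

get_slot() {
    grep '^bootslot=' "$ENV_FILE" | cut -d= -f2
}

set_slot() {
    # on a real board: fw_setenv bootslot "$1"
    sed -i "s/^bootslot=.*/bootslot=$1/" "$ENV_FILE"
}

# Start with slot A active.
echo "bootslot=A" > "$ENV_FILE"

active=$(get_slot)
if [ "$active" = "A" ]; then passive=B; else passive=A; fi

# ... write the new rootfs image to the passive partition here ...

# Point the bootloader at the freshly updated slot for the next boot.
set_slot "$passive"
echo "next boot from slot $(get_slot)"
```

A robust implementation would also use a boot counter or watchdog so the bootloader falls back to the old slot if the new one fails to boot.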

Related

Splitting the output data of a command and processing the split data further in Ansible

I have a situation as below.
I have to run a command that fetches data related to two nodes (say NODE-1 and NODE-2; please note that there is no way to fetch the data separately for each node). The sample data is shown below.
#################### NODE-1 ####################
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 8.0K 12G 1% /dev
root 4.0G 2.4G 1.7G 59% /
tmpfs 17G 796K 17G 1% /dev/user1
tmpfs 12G 1.2G 11G 10% /run
tmpfs 12G 0 12G 0% /sys/user1
tmpfs 2.4G 0 2.4G 0% /run/user1/0
/dev/sda1 3.9G 158M 3.5G 5% /boot
/dev/sda2 3.9G 158M 3.5G 5% /var/log
/dev/sda3 9.8G 1.1G 8.2G 12% /var/opt
none 64M 0 64M 0% /var/opt/group1
none 64M 0 64M 0% /var/opt/group2
#################### NODE-2 ####################
Filesystem Size Used Avail Use% Mounted on
devtmpfs 12G 8.0K 12G 1% /dev
root 4.0G 2.4G 1.7G 59% /
tmpfs 17G 796K 17G 1% /dev/user1
tmpfs 12G 1.2G 11G 10% /run
tmpfs 12G 0 12G 0% /sys/user1
tmpfs 2.4G 0 2.4G 0% /run/user1/0
/dev/sda1 3.9G 158M 3.5G 5% /boot
/dev/sda2 3.9G 158M 3.5G 5% /var/log
/dev/sda3 9.8G 1.1G 8.2G 12% /var/opt
none 64M 0 64M 0% /var/opt/group1
none 64M 0 64M 0% /var/opt/group2
It is required to process the data of these two nodes separately and draw and print conclusions individually, based on their respective values.
I am unable to find a way to separate the data of the two nodes and save it as separate variables or in files so I can process it further.
PS: The number of rows in each node's data is not constant.
Can I get a way (at least a hint) to do it using Ansible?
I suggest you use a custom filter (splitregex): put this file in the folder filter_plugins (at the same level as your playbook):
#!/usr/bin/python
import re

class FilterModule(object):
    def filters(self):
        return {
            'splitregex': self.splitregex
        }

    def splitregex(self, l1, pattern):
        # wrap in list() so this also works on Python 3, where filter() is lazy
        return list(filter(None, re.split(pattern, l1, flags=re.MULTILINE)))
Use it in this playbook (the file datas.txt contains your initial output):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: read file
      shell: cat datas.txt
      register: cc

    - name: show item
      debug:
        msg: "{{ item }}"
      loop: "{{ parts }}"
      vars:
        parts: "{{ cc.stdout | splitregex('^#+ NODE-[0-9]+ #+$') }}"
parts is a list containing your initial file, split at the pattern.
You could use '^#+ NODE-[0-9]+ #+$\n' if you want to remove the \n at the beginning of each part.
If you are using roles (for a role named test, for example), create the folder roles/test/filter_plugins and put the Python file in there.
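Outside Ansible, the same per-node split can be sketched in the shell with awk; the file name datas.txt and the node1.txt/node2.txt output names are assumptions for this demo.

```shell
# Write a small sample of the combined two-node output.
cat > datas.txt <<'EOF'
#################### NODE-1 ####################
Filesystem Size Used Avail Use% Mounted on
root 4.0G 2.4G 1.7G 59% /
#################### NODE-2 ####################
Filesystem Size Used Avail Use% Mounted on
root 4.0G 2.4G 1.7G 59% /
EOF

# Every "#### NODE-n ####" header line bumps a counter; all following
# lines are written to node<counter>.txt, so each node's data lands in
# its own file, regardless of how many rows each node produces.
awk '/^#+ NODE-[0-9]+ #+$/ {n++; next} {print > ("node" n ".txt")}' datas.txt
```

After running this, node1.txt and node2.txt each contain one node's df output and can be processed independently.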

WSL2 - Resize/Extend Disk [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 1 year ago.
I'm using Windows 10 with Docker on WSL2 (with an Ubuntu 20 Linux subsystem).
I originally installed all of this on my C: drive, which was a 256 GB drive.
I have since swapped it for a 1 TB drive and extended my system partition. In Windows everything is fine, but I still encounter "no space left on device" errors inside WSL2.
How can I resize my Linux disk so I can use more space?
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdb ext4 251G 18G 221G 8% /
tmpfs tmpfs 4.9G 400M 4.5G 9% /mnt/wsl
tools 9p 931G 269G 663G 29% /init
none devtmpfs 4.9G 0 4.9G 0% /dev
none tmpfs 4.9G 8.0K 4.9G 1% /run
none tmpfs 4.9G 0 4.9G 0% /run/lock
none tmpfs 4.9G 0 4.9G 0% /run/shm
none tmpfs 4.9G 0 4.9G 0% /run/user
tmpfs tmpfs 4.9G 0 4.9G 0% /sys/fs/cgroup
C:\ 9p 931G 269G 663G 29% /mnt/c
D:\ 9p 466G 6.6G 460G 2% /mnt/d
E:\ 9p 1.9T 85G 1.8T 5% /mnt/e
/dev/sdd ext4 251G 164G 76G 69% /mnt/wsl/docker-desktop-data/isocache
none tmpfs 4.9G 12K 4.9G 1% /mnt/wsl/docker-desktop/shared-sockets/host-services
/dev/sdc ext4 251G 130M 239G 1% /mnt/wsl/docker-desktop/docker-desktop-proxy
/dev/loop0 iso9660 396M 396M 0 100% /mnt/wsl/docker-desktop/cli-tools
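WSL2 keeps each distro's root filesystem in an ext4.vhdx virtual disk, and that disk retains its original maximum size (roughly 256 GB by default) even after the Windows partition grows. A sketch of the documented resize procedure follows; the exact vhdx path varies per distro package (the placeholders are illustrative), and /dev/sdb is assumed from the df output above.

```shell
# 1. From Windows (PowerShell/cmd), stop WSL and expand the virtual disk:
#      wsl --shutdown
#      diskpart
#        select vdisk file="C:\Users\<you>\AppData\Local\Packages\<distro>\LocalState\ext4.vhdx"
#        expand vdisk maximum=512000    # new maximum size in MB
#        exit
# 2. Back inside the WSL2 distro, grow the ext4 filesystem to fill the
#    enlarged disk (root device assumed to be /dev/sdb, per df above):
sudo resize2fs /dev/sdb
```

Afterwards df -h inside WSL2 should report the new size for /.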

Resize Amazon EC2 volume without AMI

I have a server on AWS EC2 with the default free tier. How can I increase the size of the volume without using an AMI?
Here are the steps that will help you resize an EC2 volume without an AMI (or snapshots).
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
Choose Volumes, select the volume to modify, and then choose Actions, Modify Volume.
The Modify Volume window displays the volume ID and the volume's current configuration, including type, size, and IOPS. You can change any or all of these settings in a single action. Set new configuration values as follows:
To modify the type, choose a value for Volume Type.
To modify the size, enter an allowed integer value for Size.
If you chose Provisioned IOPS (IO1) as your volume type, enter an allowed integer value for IOPS.
After you have specified all of the modifications to apply, choose Modify, Yes.
Modifying volume size has no practical effect until you also extend the volume's file system to make use of the new storage capacity.
Then run these commands in the EC2 terminal:
ubuntu@ip-192-168-1-26:~$ sudo su
root@ip-192-168-1-26:/home/ubuntu# df -h
Filesystem Size Used Avail Use% Mounted on
udev 487M 0 487M 0% /dev
tmpfs 100M 12M 88M 12% /run
/dev/xvda1 7.8G 5.5G 2.0G 74% /
tmpfs 496M 0 496M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 496M 0 496M 0% /sys/fs/cgroup
tmpfs 100M 0 100M 0% /run/user/999
tmpfs 100M 0 100M 0% /run/user/1000
root@ip-192-168-1-26:/home/ubuntu# sudo file -s /dev/xvd*
/dev/xvda: DOS/MBR boot sector
/dev/xvda1: Linux rev 1.0 ext4 filesystem data, UUID=e6d1a865-817b-456f-99e7-118135343487, volume name "cloudimg-rootfs" (needs journal recovery) (extents) (large files) (huge files)
root@ip-192-168-1-26:/home/ubuntu# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 8G 0 part /
root@ip-192-168-1-26:/home/ubuntu# sudo growpart /dev/xvda 1
CHANGED: partition=1 start=16065 old: size=16761118 end=16777183 new: size=33538334,end=33554399
root@ip-192-168-1-26:/home/ubuntu# sudo resize2fs /dev/xvda1
resize2fs 1.42.13 (17-May-2015)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/xvda1 is now 4192291 (4k) blocks long.
These commands resize the filesystem so it uses the enlarged EC2 volume.

EC2 r3.xlarge storage space does not correspond to the documentation

I'm using Hadoop YARN on EC2 over r3.xlarge instances, I launched the instances from an AMI using spark-ec2 scripts.
On https://aws.amazon.com/ec2/instance-types/, the specifications of r3.xlarge are the following:
vCPU: 4
Mem: 30.5 GiB
Storage: 1 x 80 GB
The Memory is good, free command gives me this result:
[root@ip-xxx-xx-xx-xxx ~]$ free -g
total used free shared buffers cached
Mem: 29 2 27 0 0 1
But the storage does not correspond to the indicated value.
[root@ip-xxx-xx-xx-xxx ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 6.9G 783M 91% /
devtmpfs 15G 64K 15G 1% /dev
tmpfs 15G 0 15G 0% /dev/shm
Is it normal to have only ~40 GB and not the 80 GB specified in the documentation? Or is this because I launched the instance from an AMI?
The two tmpfs mounts aren't where your missing 80 GB is. This looks like a Debian/Ubuntu distro. I can reproduce something similar to your df:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 15G 12K 15G 1% /dev
tmpfs 3.0G 328K 3.0G 1% /run
/dev/xvda1 7.8G 790M 6.6G 11% /
none 15G 0 15G 0% /run/shm
Note /dev/xvda1. That's your root partition, which is on EBS. Your 80 GB SSD is actually at /dev/xvdb. You need to make use of it:
mkdir -p /mnt/ssd && mkfs.ext4 /dev/xvdb \
&& echo '/dev/xvdb /mnt/ssd auto defaults,nobootwait 0 0' >> /etc/fstab \
&& mount /mnt/ssd
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 15G 12K 15G 1% /dev
tmpfs 3.0G 328K 3.0G 1% /run
/dev/xvda1 7.8G 790M 6.6G 11% /
/dev/xvdb 74G 52M 70G 1% /mnt/ssd
Congrats! You are now the proud owner of an 80 GB mount. Okay, not quite 80 GB (df -h reports in GiB). Let's see 80 GB with decimal units:
$ df -H
Filesystem Size Used Avail Use% Mounted on
udev 17G 13k 17G 1% /dev
tmpfs 3.3G 336k 3.3G 1% /run
/dev/xvda1 8.4G 828M 7.1G 11% /
/dev/xvdb 80G 55M 76G 1% /mnt/ssd
Your filesystem is probably on EBS, not the instance storage that comes with r3.xlarge. This is the default for most AMIs. Note the size of the EBS volume is not part of the image. You can choose it when you create the instance.
Instance store is available on the larger instance types as shown here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html
AMI images have two options for the root storage device. The most common are EBS images, which use EBS for the root device. Since EBS isn't locked to specific hardware, these instances are much more flexible.
The other option is an AMI with an instance store root storage device. However, you lose the ability to stop the instance without terminating, change the instance type, resize the storage device, and manage the storage separately from the instance itself.
Instance store AMIs are often tagged with S3. For example: amzn-ami-hvm-2016.03.0.x86_64-s3 (ami-152bc275).
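To check from inside the instance which devices the image actually mapped (EBS root vs. ephemeral instance-store disks), you can query the instance metadata service. This is a sketch using the IMDSv1-style endpoint, which only works from within an EC2 instance:

```shell
# Lists mapping names such as "ami" (the root device) and
# "ephemeral0" for the first instance-store disk, if any.
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/

# Resolve where a specific mapping points, e.g. the first ephemeral disk:
curl -s http://169.254.169.254/latest/meta-data/block-device-mapping/ephemeral0
```

If no ephemeralN entries are listed, the AMI did not map the instance store and you only have the EBS root volume.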

cannot expand partition

I am trying to expand /dev/xvda1 to 25 GB. Does anyone know how to do that?
[ec2-user@ip-XX-XXX-XX-XX ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 7.8G 47M 100% /
tmpfs 849M 0 849M 0% /dev/shm
[ec2-user@ip-XX-XXX-XX-XX ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda1 202:1 0 25G 0 disk /
xvda3 202:3 0 896M 0 disk
When I try to unmount with sudo umount /, it says:
umount: /: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))
Try:
sudo resize2fs /dev/xvda1
You don't need to unmount / for this: lsblk already shows the block device at 25G, so only the filesystem needs to grow, and resize2fs can enlarge a mounted ext3/ext4 filesystem online.
