Multiple partitions in buildroot? - embedded-linux

Let's discuss a very common case in building up a system image: we want our rootfs to be a read-only SquashFS, plus another ext4 partition (let's say home) for general read-write storage.
The system image layout (genimage.cfg) looks like this in a Buildroot environment:
image sdcard.img {
  hdimage {}

  partition boot {
    partition-type = 0xC
    bootable = "true"
    image = "boot.vfat"
  }

  partition rootfs {
    partition-type = 0x83
    image = "rootfs.squashfs"
  }

  partition home {
    partition-type = 0x83
    image = "home.ext4"
  }
}
image boot.vfat {
  vfat {
    files = {
      "bcm2711-rpi-4-b.dtb",
      "rpi-firmware/cmdline.txt",
      "rpi-firmware/config.txt",
      "rpi-firmware/fixup4.dat",
      "rpi-firmware/start4.elf",
      "rpi-firmware/overlays",
      "zImage"
    }
  }
  size = 16M
}
image home.ext4 {
  name = "home"
  mountpoint = "/home"
  ext4 {}
  size = 32M
}
But in the final stage of creating the image, we end up with an error:
>> Executing post-image script ~/rpi4/post-image.sh
INFO: cmd: "mkdir -p "/home/iman/rpi4/genimage.tmp"" (stderr+stdout):
INFO: cmd: "rm -rf "/home/iman/rpi4/genimage.tmp"/*" (stderr+stdout):
DEBUG: hdimage(sdcard.img): adding implicit file rule for 'rootfs.squashfs'
DEBUG: vfat(boot.vfat): adding implicit file rule for 'bcm2711-rpi-4-b.dtb'
DEBUG: vfat(boot.vfat): adding implicit file rule for 'rpi-firmware/cmdline.txt'
DEBUG: vfat(boot.vfat): adding implicit file rule for 'rpi-firmware/config.txt'
DEBUG: vfat(boot.vfat): adding implicit file rule for 'rpi-firmware/fixup4.dat'
DEBUG: vfat(boot.vfat): adding implicit file rule for 'rpi-firmware/start4.elf'
DEBUG: vfat(boot.vfat): adding implicit file rule for 'rpi-firmware/overlays'
DEBUG: vfat(boot.vfat): adding implicit file rule for 'zImage'
INFO: cmd: "mkdir -p "/home/iman/rpi4/genimage.tmp"" (stderr+stdout):
INFO: cmd: "cp -a "/tmp/tmp.dMfSigyUwW" "/home/iman/rpi4/genimage.tmp/root"" (stderr+stdout):
INFO: cmd: "mv "/home/iman/rpi4/genimage.tmp/root/home" "/home/iman/rpi4/genimage.tmp/home"" (stderr+stdout):
mv: cannot stat '/home/iman/rpi4/genimage.tmp/root/home': No such file or directory
Makefile:809: recipe for target 'target-post-image' failed
make[1]: *** [target-post-image] Error 1
Makefile:84: recipe for target '_all' failed
make: *** [_all] Error 2
What other steps should be taken care of?
Or, more generally, what is the way to create and mount a new partition in Buildroot?

If you want an empty /home directory then you should not use the mountpoint keyword:
image home.ext4 {
  name = "home"
  ext4 {}
  size = 32M
}
The mountpoint keyword does not tell genimage where the partition will be mounted; it tells genimage to take the contents of that directory out of the rootfs and use them to fill this image, which is exactly the mv of genimage.tmp/root/home that failed in your log, because your rootfs contains no /home. See the genimage docs for the full explanation.
If you want to mount your partition from your rootfs then you can add it to /etc/fstab, perhaps in a rootfs overlay (BR2_ROOTFS_OVERLAY). genimage has no control on what will be actually mounted.
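For example, a minimal sketch of an overlay fstab entry, assuming home ends up as the third partition of the SD card (the device name /dev/mmcblk0p3 and the overlay path are assumptions for a Raspberry Pi booting from SD):
# board/<your-board>/rootfs_overlay/etc/fstab (added line only)
/dev/mmcblk0p3  /home  ext4  defaults  0  2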

There are a lot of examples in the test/ folder of the genimage source tree. For example, here is how to add an empty partition to your image:
image test.hdimage {
  hdimage {
    align = 1M
    disk-signature = 0x12345678
  }
  partition part1 {
    image = "part2.img"
    size = 5k
    partition-type = 0x83
  }
  partition extraimage {
    image = "diskEmpty.ext2"
  }
}

image diskEmpty.ext2 {
  size = 10M  // space for user; real space = (space for user + space for filesystem) > 10M
  empty = true
  temporary = true
  ext2 {
    label = "my_empty"
    use-mke2fs = true
  }
}
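Applied to the original question, the empty home partition image could be declared the same way (keeping the 32M size from the question; the label and the use-mke2fs choice are assumptions):
image home.ext4 {
  empty = true
  temporary = true
  size = 32M
  ext4 {
    label = "home"
    use-mke2fs = true
  }
}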

Related

"ERROR : invalid main GPT header" after SWUpdate on a Raspberry Pi 3

I am getting started with embedded systems, and I am using Buildroot to update the Linux OS on my Raspberry Pi 3. I started by building the OS and testing it on my board. The system boots without any problems. Then I wanted to use SWUpdate to update the OS via a USB key. But when I mount the USB key and then launch the command:
$ swupdate -i /mnt/buildroot.swu -e rootfs,rootfs-2 -p /etc/swupdate/postupdate.sh
The terminal shows the message “Swupdate was successful!”.
Then I get the following errors:
ERROR : Caution: invalid main GPT header, but valid backup; regenerating main header from backup!
ERROR : Warning : invalid CRC on main header data; loaded backup partition table.
ERROR : Warning! Main and backup partition tables differ! Use the 'c' and 'e' options on the recovery & transformation menu to examine the two tables.
ERROR : Warning! Main partition table CRC mismatch! Loaded backup partition table instead of main partition table!
ERROR : Warning! One or more CRCs don't match. You should repair the disk!
ERROR : Main Header : ERROR
ERROR : Backup header : OK
ERROR : Main partition table: ERROR
ERROR : Backup partition table: OK
ERROR : Invalid partition data!
Then the system reboots on the same previous partition and not on the one that is supposed to be updated.
I suspected that the genimage.cfg file might be causing this, so I tried changing the content of the file by adding partition-table-type = "gpt" and replacing partition-type = 0x83 with partition-type-uuid = U. I still have the same problem.
Here is the content of genimage.cfg:
image boot.vfat {
  vfat {
    files = {
      "bcm2710-rpi-3-b.dtb",
      "bcm2710-rpi-3-b-plus.dtb",
      "bcm2710-rpi-cm3.dtb",
      "rpi-firmware/bootcode.bin",
      "rpi-firmware/cmdline.txt",
      "rpi-firmware/config.txt",
      "rpi-firmware/fixup.dat",
      "rpi-firmware/start.elf",
      "rpi-firmware/overlays",
      "zImage"
    }
  }
  size = 32M
}
image sdcard.img {
  hdimage {
    partition-table-type = "gpt"
  }
  partition boot {
    partition-type-uuid = F
    bootable = "true"
    image = "boot.vfat"
  }
  partition rootfs1 {
    partition-type-uuid = U
    image = "rootfs.ext4"
    size = 120M
  }
  partition rootfs2 {
    partition-type-uuid = U
    size = 120M
  }
}
Does anyone have an idea on how to solve this?

How to pass ip-address from terraform to ansible [duplicate]

I am trying to create an Ansible inventory file using the local_file resource in Terraform (I am open to suggestions on doing it a different way).
module "vm" config:
resource "azurerm_linux_virtual_machine" "vm" {
for_each = { for edit in local.vm : edit.name => edit }
name = each.value.name
resource_group_name = var.vm_rg
location = var.vm_location
size = each.value.size
admin_username = var.vm_username
admin_password = var.vm_password
disable_password_authentication = false
network_interface_ids = [azurerm_network_interface.edit_seat_nic[each.key].id]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
output "vm_ips" {
value = toset([
for vm_ips in azurerm_linux_virtual_machine.vm : vm_ips.private_ip_address
])
}
When I run terraform plan with the above configuration I get:
Changes to Outputs:
  + test = [
      + "10.1.0.4",
    ]
Now, in my main TF I have the configuration for local_file as follows:
resource "local_file" "ansible_inventory" {
filename = "./ansible_inventory/ansible_inventory.ini"
content = <<EOF
[vm]
${module.vm.vm_ips}
EOF
}
This returns the error below:
Error: Invalid template interpolation value
on main.tf line 92, in resource "local_file" "ansible_inventory":
90: content = <<EOF
91: [vm]
92: ${module.vm.vm_ips}
93: EOF
module.vm.vm_ips is set of string with 1 element
Cannot include the given value in a string template: string required.
Any suggestion on how to inject the list of IPs from the output into the local file, while also being able to format the rest of the text in the file?
If you want the Ansible inventory to be statically sourced from a file in INI format, then you basically need to render a template in Terraform to produce the desired output.
module/templates/inventory.tmpl:
[vm]
%{ for ip in ips ~}
${ip}
%{ endfor ~}
An alternative suggestion from @mdaniel:
[vm]
${join("\n", ips)}
module/config.tf:
resource "local_file" "ansible_inventory" {
content = templatefile("${path.module}/templates/inventory.tmpl",
{ ips = module.vm.vm_ips }
)
filename = "${path.module}/ansible_inventory/ansible_inventory.ini"
file_permission = "0644"
}
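Hypothetical usage once this is applied (the playbook name site.yml is a placeholder):
terraform apply
ansible-playbook -i ansible_inventory/ansible_inventory.ini site.yml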
A couple of additional notes though:
You can modify your output to be the entire map of objects of exported attributes like:
output "vms" {
value = azurerm_linux_virtual_machine.vm
}
and then you can access more information about the instances to populate in your inventory. Your templatefile argument would still be the module output, but the for expression(s) in the template would look considerably different depending upon what you want to add.
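For example, a sketch of an extended template, passing { vms = module.vm.vms } to templatefile (private_ip_address and admin_username are attributes exported by azurerm_linux_virtual_machine; the inventory variable names are just one option):
[vm]
%{ for name, vm in vms ~}
${name} ansible_host=${vm.private_ip_address} ansible_user=${vm.admin_username}
%{ endfor ~}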
You can also utilize the YAML or JSON inventory formats for Ansible static inventory. With those, you can then leverage the yamlencode or jsonencode Terraform functions to make the HCL2 data structure transformation much easier. The template file would become a good bit cleaner in that situation for more complex inventories.
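A minimal sketch of the YAML variant, assuming the vms output from above (the group name vm and the file name are assumptions):
resource "local_file" "ansible_inventory_yaml" {
  content = yamlencode({
    vm = {
      hosts = { for name, vm in module.vm.vms : name => { ansible_host = vm.private_ip_address } }
    }
  })
  filename        = "${path.module}/ansible_inventory/inventory.yml"
  file_permission = "0644"
}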

aws_imagebuilder_component issue with template_file

I need to deploy a bash script using the aws_imagebuilder_component resource; I am using a template_file data source to populate an array inside my script. I am trying to figure out the correct way to render the template inside the aws_imagebuilder_component resource.
I am pretty sure that the formatting of my script is the main issue :(
This is the error that I keep getting; it seems like an issue with the way I am formatting the script inside the YAML. Can you please assist? I have not worked with Image Builder or with yamlencode before.
Error: error creating Image Builder Component: InvalidParameterValueException: The value supplied for parameter 'data' is not valid. Failed to parse document. Error: line 1: cannot unmarshal string phases:... into Document.
# Image builder component
resource "aws_imagebuilder_component" "devmaps" {
  name     = "devmaps"
  platform = "Linux"
  data     = yamlencode(data.template_file.init.rendered)
  version  = "1.0.0"
}

# template_file
data "template_file" "init" {
  template = file("${path.module}/myscript.yml")
  vars = {
    devnames = join(" ", local.devnames)
  }
}
# myscript.yml
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: pre-requirements
        action: ExecuteBash
        inputs:
          commands: |
            #!/bin/bash
            for i in `seq 0 6`; do
              nvdev="/dev/nvm${i}n1"
              if [ -e $nvdev ]; then
                mapdev="${devnames[i]}"
                if [[ -z "$mapdev" ]]; then
                  mapdev="${devnames[i]}"
                fi
              else
                ln -s $nvdev $mapdev
                echo "symlink created: ${nvdev} to ${mapdev}"
              fi
            done
# tfvars
vols = {
  data01 = {
    devname = "/dev/xvde"
    size    = "200"
  }
  data02 = {
    devname = "/dev/xvdf"
    size    = "300"
  }
}
Variables.tf: populating the list "devnames" from the map object "vols" shown above:
locals {
  devnames = [for key, value in var.vols : value.devname]
}
main: template_file uses the list "devnames" to assign its values to the devnames variable; the devnames variable is used inside myscript.yml:
devnames = join(" ", local.devnames)
At this point, everything is working without issues.
But when the aws_imagebuilder_component resource shown above is executed, it fails and complains about the formatting of the template that was rendered from myscript.yml.
I am doing something wrong here that I cannot figure out.

Sematext Logagent Elasticsearch - Indexes not being created?

I'm trying to send data to Elasticsearch using Logagent, but while there doesn't seem to be any error sending the data, the index isn't being created in ELK. I'm trying to find the index by creating a new index pattern via the Kibana GUI, but the index does not seem to exist. This is my logagent.conf right now:
input:
  # bro-start:
  #   module: command
  #   # store BRO logs in /tmp/bro in JSON format
  #   command: mkdir /tmp/bro; cd /tmp/bro; /usr/local/bro/bin/bro -i eth0 -e 'redef LogAscii::use_json=T;'
  #   sourceName: bro
  #   restart: 1
  # read the BRO logs from the file system ...
  files:
    - '/usr/local/bro/logs/current/*.log'
parser:
  json:
    enabled: true
    transform: !!js/function >
      function (sourceName, parsed, config) {
        var src = sourceName.split('/')
        // generate Elasticsearch _type out of the log file sourceName
        // e.g. "dns" from /tmp/bro/dns.log
        if (src && src[src.length-1]) {
          parsed._type = src[src.length-1].replace(/\.log/g,'')
        }
        // store log file path in each doc
        parsed.logSource = sourceName
        // convert Bro timestamps to JavaScript timestamps
        if (parsed.ts) {
          parsed['@timestamp'] = new Date(parsed.ts * 1000)
        }
      }
output:
  stdout: false
  elasticsearch:
    module: elasticsearch
    url: http://10.10.10.10:9200
    index: bro_logs
Maybe I have to create the index mappings manually? I don't know.
Thank you for any advice or insight!
I found out that there actually was an error. I was trying to send authentication via a field called "auth", but that doesn't exist. I can put the credentials in the URL instead: url: https://USERNAME:PASSWORD@10.10.10.10:9200.
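So the working output section looks roughly like this (host and credentials are placeholders):
output:
  stdout: false
  elasticsearch:
    module: elasticsearch
    url: https://USERNAME:PASSWORD@10.10.10.10:9200
    index: bro_logs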

Yocto/Bitbake recipe for adding empty directory to rootfs Embedded Linux

Is there any recipe for adding a new, empty directory to the rootfs? I tried adding this to one of my bbappend files:
do_install() {
    install -d ${D}/tmp/myNewDir
}

FILES_${PN} += "/tmp/myNewDir"
but I am getting a non-descriptive error: Function failed: do_install.
There are several ways. The image command way is already described by StackedUser.
You can also try to extend one of your recipes (as you are doing in your question). I guess that you are seeing the error because you are overriding the do_install task. You probably want to extend it instead, so you should add _append to the task name, i.e.:
do_install_append () {
    install -d ${D}/tmp/myNewDir
}
BTW, the error "Function failed: do_install" you are hitting usually shows an error code or a problematic command in its log output. Maybe there is something useful in there.
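The full log of the failing task is worth checking, e.g. (the path pattern is illustrative; it varies by machine, recipe, and version):
less tmp/work/<machine-arch>/<recipe>/<version>/temp/log.do_install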
Another way is to create a simple recipe and add it to the image; here is a stub:
SUMMARY = "XXX project directory structure"
# FIXME - add proper license below
LICENSE = "CLOSED"
PV = "1.0"
S = "${WORKDIR}"
inherit allarch
do_install () {
install -d ${D}/foo/bar
}
FILES_${PN} = "/foo/bar"
In our image recipe we have something like this to create a new directory:
create_data_dir() {
    mkdir -p ${IMAGE_ROOTFS}/data
}

IMAGE_PREPROCESS_COMMAND += "create_data_dir;"
