I installed OpenEmbedded and tried building a couple of images for the Zaurus SL-6000 "Tosa", basically helloworld-image and console-image. I ended up with an angstrom-dev/deploy/glibc/images/tosa directory that contains files like this (slightly truncated from a forum post I made elsewhere):
Angstrom-helloworld-image-glibc-ipk-2009.X-test-20090529-tosa-installkit.tgz
Angstrom-helloworld-image-glibc-ipk-2009.X-test-20090529-tosa.rootfs.jffs2
Angstrom-helloworld-image-glibc-ipk-2009.X-test-20090529-tosa.rootfs.tar.bz2
Angstrom-helloworld-image-glibc-ipk-2009.X-test-20090529-tosa.rootfs.tar.gz
helloworld-image-tosa.tar.bz2
helloworld-image-tosa.tar.gz
initramfs-kexecboot-image-tosa.cpio.gz
initramfs-kexecboot-image-tosa.jffs2
initramfs-kexecboot-image-tosa.tar.bz2
initramfs-kexecboot-image-tosa.tar.gz
modules-2.6.29-r0-tosa.tgz
updater.sh.tosa
zImage-2.6.29-r0-tosa.bin
zImage-kexecboot-2.6.24-r0-tosa.bin
zImage-kexecboot-tosa.bin
zImage-tosa.bin
I have no idea what all these do or how to install them properly. What I did try is various combinations of flashing a zImage.bin and initrd.bin using option 4 of the maintenance menu (as specified in earlier instructions). The flashing usually works alright, but then when it boots, it loads a bootloader that cannot find any bootable devices. On a hunch, I tried unpacking one of the tar.gz images onto an ext2-formatted SD card and booting with that plugged in, and it was detected by the bootloader. Booting it sort of worked, but it quickly exited back to the bootloader (I assume that was just a problem with the image I unpacked).
My questions are:
What is the correct usage for all of these file types, i.e. should the .jffs2 files be renamed initrd.bin and included in the flashing process? What am I supposed to do with the bz2 and gz files? Are they only for unpacking to external media?
How do I install to the internal flash? It used to work with the stable Angstrom 2007-12 build and instructions.
Is there a newer version of updater.sh (that one was not built by OE; I added it myself, having picked it up from elsewhere)? The reason I ask is that when trying to flash zImage-2.6.29-r0-tosa.bin it fails during the update program with the error that the file is too big. That kernel is approximately 1.3 MB while the others are 1.2 MB. Is this a constraint of the SL-6000 itself? I thought it has 32 MB of internal memory.
Unfortunately, none of the available documentation that I could find online talks about installing these files. I did find a small entry in the "Angstrom Manual" which talks about what they are, but not how to use them, as they are all device-specific. The Tosa documentation itself only talks about copying the files from an installkit and flashing the device from the maintenance menu.
Okay, "ant" over at OE forums was able to answer my questions ^^ Just recording the answer here for posterity.
The installkit tarball (the *-tosa-installkit.tgz file) contains updater.sh and zImage (the kexecboot kernel). This kexecboot kernel can be, and likely is, different from the kernel you will have on the rootfs after the machine boots. Unpack the installkit onto a formatted card and follow the flashing procedure for the device.
As for the various image-rootfs .tar.gz, .tar.bz2, and .jffs2 files: these are the root file systems that will be booted by kexecboot. The .tar.gz or .tar.bz2 archive should be unpacked onto an ext2 (or possibly ext3) formatted SD or CF card. It will be detected by kexecboot at boot time and appear in the kexecboot menu.
If you want a rootfs in NAND (installed internally), rename your-image-rootfs.jffs2 to initrd.bin and copy it onto the card alongside updater.sh (then flash).
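For example, a rough sketch of those steps (the mount points are assumptions and depend on how your cards are mounted; check that updater.sh and the zImage end up in the card's root directory):

# unpack the installkit (updater.sh + kexecboot zImage) onto a FAT-formatted card
tar -xzf Angstrom-*-tosa-installkit.tgz -C /media/card

# to install a rootfs to internal NAND, also put the jffs2 image on that card as initrd.bin,
# then run the flashing procedure from the maintenance menu
cp Angstrom-*-tosa.rootfs.jffs2 /media/card/initrd.bin

# or: unpack a rootfs tarball onto an ext2-formatted SD/CF card for kexecboot to find at boot
tar -xzf Angstrom-*-tosa.rootfs.tar.gz -C /media/sdcard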
Related
I wrote a C++ program with multiple classes and divided it into multiple files. It is intended to run on an embedded device (a Raspberry Pi 2, to be specific) that has no internet access. Therefore, building the source and installing the dependencies on every device would be very laborious.
Is there a way to compile the program on one of the devices (as an exception to the others, this one has internet access), so that I can just transfer the build files, e.g. via USB, to the other devices? This should also include the various libraries I used, so that I don't have to install them on every device. These are mainly the standard library, but also a library I cloned and built myself and a library installed with apt (I linked the libraries I used as an example, but they shouldn't affect the process, I guess).
I'm using CMake. Is there an option to make CMake compile a program into a (set of) file(s) that run independently of the system-installed libraries? In other words: they run without the required libraries being installed on the system, because those are shipped with the build files.
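For illustration, this is the kind of thing I mean (a rough sketch only; "myprogram" is a made-up target name, and it only works if static (.a) versions of the libraries are available on the build machine):

# hypothetical sketch: ask the linker for a fully static binary so it does not
# depend on any system-installed .so files at runtime
cmake -S . -B build -DCMAKE_EXE_LINKER_FLAGS="-static"
cmake --build build
# a fully static binary reports "not a dynamic executable" here
ldd build/myprogram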
Edit:
My main problem is that I cannot get a certain dependency onto the target devices due to a lack of internet access. Can I build the package and also include the library in that build, without having to install it?
I'm not sure I fully understand why you need internet for your deployment, but I can give you several methods and you can choose the one that seems best to you.
Method A: Cloning the SD card image
During your development phase, you ended up with a working RPi device and you want to replicate it. You can use some tools to duplicate your image onto another SD card, N times, and eventually this could be sufficient to make it work.
Pros: Very quick method
Cons: Usually, your development phase involves adjustments, trying different tools, different versions, etc., so your original RPi image is not clean and you'll replicate that. Definitely not valid for an industrial project, but it could be sufficient for a personal one.
Method B: Create deployments scripts
You can create a deployment script on your computer to copy, configure, and install what you need. Assuming you start with a certain version of Raspberry Pi OS, you flash it, then you boot your Pi, which is connected via Ethernet for example. You can then start a script on your computer that will:
Copy needed sources / packages / binaries
(Optional) compile sources (if you have a compiler that suits your need on RPi OS)
Miscellaneous configuration
To do all of this, a script like the following can do the job:
PI_USERNAME="pi"
PI_PASSWORD="raspberry"
PI_IPADDRESS="192.168.0.3"
# example on how to execute a command remotely
sshpass -p ${PI_PASSWORD} ssh ${PI_USERNAME}#${PI_IPADDRESS} sudo apt update
# example on how to copy a local file on the RPI
sshpass -p ${PI_PASSWORD} scp local_dir/nlohmann/json.hpp ${PI_USERNAME}#${PI_IPADDRESS}:/home/pi/sources
Important notes:
Hard-coded credentials are not recommended.
This script assumes you are using Linux, but you'll find equivalent tools under Windows.
This assumes your RPi has a fixed IP, but you can still improve this script to automatically find the RPi on the network (lots of possibilities).
Pros: While you create this deployment script, you'll force yourself to start from a clean image and no dirty environment is duplicated.
Cons: Takes a bit longer than method A
Method C: Create your own Raspberry PI image using Yocto
Yocto is a tool to create your own images, suitable for the Raspberry Pi. You can customize absolutely everything and produce an SD card image that you can just flash onto your RPis' SD cards (a rough sketch of the workflow follows the pros/cons below).
Pros: Very complete tool, industrial process
Cons: Quite complicated to deal with, not suitable for beginners, time cost
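As a rough illustration of that workflow (assuming poky and the meta-raspberrypi layer are already checked out side by side; the layer, machine, and image names are assumptions):

# very rough sketch of building a flashable RPi image with Yocto/poky
source poky/oe-init-build-env build          # creates and enters the build directory
bitbake-layers add-layer ../meta-raspberrypi # register the Raspberry Pi BSP layer
echo 'MACHINE = "raspberrypi2"' >> conf/local.conf
bitbake core-image-minimal                   # the SD card image lands under tmp/deploy/images/raspberrypi2/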
Since you said in the comments that it's only for 10 devices and that you were a bit hesitant to cross-compile, I would not promote the Yocto method for you. I would not recommend method A either, mostly because of the dirty-environment duplication (but that's up to you in the end). Method B, with the deployment script, may be the best way to go.
I'm doing a project that needs to generate a VM image file which will then be used as a QEMU bootable disk image. Previously, our product was a modified Linux system that was made into a USB installation drive and then booted and installed onto a bare-metal machine. But now we want to get rid of the hardware and run it in virtual machines, which is why we need an image file.
Instead of using the existing USB drive to install the system in QEMU, then shutting down the VM and taking the image, we were asked to make an out-of-the-box image file directly, skipping all the booting and installing on a real or virtual machine but still ending up with an installed image, so that we can just deliver this image and people can load it as a ready-to-use virtual machine image.
But during the whole procedure, I can NOT use any command that requires root privilege! Don't ask why; there's a whole bunch of restrictions on our project. I just can't use root privileges: no sudo, no su, just anything only a regular user can do.
The part I have already achieved is using the -d option of a recent version of mke2fs to populate a tree of folders and files into the different partitions of this image file, like this:
Suppose that after the image is booted, we have this folder structure:
$ ls /
bin dev boot home lib32 mnt proc run srv tmp var boot data etc lib lib64 opt root sbin sys usr
Some of the folders are mounted from different partitions.
Extract a single partition from the image:
dd if=image of=partitionN skip=offset_of_partition_N count=size_of_partition_N bs=512 conv=sparse
Populate a folder into the partition:
mke2fs -d root_dir/etc partitionN
Put the partition back into the image:
dd if=partitionN of=image seek=offset_of_partition_N count=size_of_partition_N bs=512 conv=sparse,notrunc
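If it helps, the partition offsets and sizes needed for the skip=/seek=/count= values above can be read from the image as a regular user, for example (assuming util-linux or parted is installed):

# dump the start sector and size (in 512-byte sectors) of every partition in the image
sfdisk -d image
# or equivalently
parted -s image unit s print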
We have the first partition in the image as the boot partition, which contains the 'boot' folder and will be mounted under /boot once it is booted.
And this boot partition is an EFI-compatible partition (which actually seems to be FAT32-formatted), since our project needs it to be this way.
BUT
After getting all partitions successfully populated into the image, I cannot find a way to install GRUB for this bootable image. And that's the single most important step needed to make this image bootable.
All solutions I found on the web suggest loop-mounting the image's boot partition, which I cannot do, because without root privileges I can't loop-mount the image.
So does any one have any idea how to do this?
I tried to understand how GRUB writes raw values into the MBR, how to find stage1 and stage2 from the values inside the MBR, and how to figure out the sector list at the end of stage2's first sector, but that was so crazy that I eventually failed to get this trick to work.
Disclaimer: if facing this problem myself, I would attack it directly by writing a grub2 MBR image installer myself. That's a true attack on the problem, would make a better answer, and would be more on topic for this site; however, it's more hours of research than I'm willing to put in for a Stack Overflow answer.
There's an extra trick here: we can get a success/error code back by using a virtual floppy disk. There's no need to ship the floppy disk driver in the final image; you can build it as a module and include it only in the cpio image.
If we are willing to forgo the kqemu component and pay the 10x slowdown price, we can start qemu with -hda image.img, -kernel bzImage and -initrd initrd.cpio.gz and it will boot that. You need an X server (which you can provide with Xvnc) but no privileges a normal user doesn't have. Assuming / and /boot are the same, /linuxrc looks like this:
#!/bin/sh
# load the floppy driver that was built as a module and shipped only in the cpio image
insmod /lib/modules/kernel/floppy.ko
# mount the target root/boot partition and give the chroot access to the disk devices
mount /dev/hda1 -t ext2 /mnt
mount -o bind /dev /mnt/dev
# install grub into the MBR of the emulated disk (/proc and /sys may also need
# mounting inside the chroot, depending on the grub version)
PATH=/bin:/usr/bin:/usr/sbin:/sbin chroot /mnt grub2-install /dev/hda
RESULT=$?
umount /mnt/dev
umount /dev/hda1
# report the exit code back to the host via the virtual floppy
mount /dev/fd0 -t vfat /mnt
echo -n $RESULT > /mnt/errorcode
umount /mnt
poweroff
And you can get your error code back with mcopy to read the floppy disk image.
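A sketch of the host-side commands for this trick (all file names here are illustrative, and QEMU options may vary between versions):

# create and format a blank virtual floppy for the error code (no root needed)
dd if=/dev/zero of=floppy.img bs=1024 count=1440
mkfs.vfat floppy.img

# boot the disk image with our own kernel and initramfs, plus the floppy
qemu-system-x86_64 -hda image.img -fda floppy.img \
  -kernel bzImage -initrd initrd.cpio.gz -append "rdinit=/linuxrc"

# afterwards, read back the exit code that /linuxrc wrote to the floppy
mcopy -i floppy.img ::errorcode -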
If qemu is not available, you can build an i586-compatible kernel and use dosbox-x instead, starting the kernel with loadlin.exe. This actually works. If you try it with modern stuff it just dies, because everything demands i686 now; but you can build the grub install tool itself targeting i586 and just use an old kernel to boot and do the install. https://www.vogons.org/viewtopic.php?t=53531
Caveat: This is not a complete solution, however I'll post since nobody else answered.
I have never tried to do this and don't have a couple of spare hours to test it; however, the OpenWrt project has standard x86-64 disk image files, including GRUB and a kernel, which you can find here:
https://downloads.openwrt.org/chaos_calmer/15.05.1/x86/64/openwrt-15.05.1-x86-64-combined-ext4.img.gz
The instructions tell you how to convert the images for VMware; for QEMU it should be similar:
https://wiki.openwrt.org/doc/howto/vmware
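For QEMU, something along these lines should work (a sketch; the file names follow the download above):

# unpack the combined disk image and convert it to qcow2 for qemu
gunzip openwrt-15.05.1-x86-64-combined-ext4.img.gz
qemu-img convert -f raw -O qcow2 openwrt-15.05.1-x86-64-combined-ext4.img openwrt.qcow2
qemu-system-x86_64 -m 256 -drive file=openwrt.qcow2,format=qcow2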
The thing is, the OpenWrt philosophy has always been that builds shouldn't be done as root, and it generally refuses to build as root, so I think you'll find they have ways of creating ext4 filesystem images complete with MBR and GRUB. I have only tested embedded platforms and never actually built from source for x86, but this is where you should start if you're stuck.
Of course, the OpenWrt disk has only a single partition, I'm unsure how you'd create a virtual disk with a more complex partition table, but perhaps there are some options in the tools used by OpenWrt.
I want to study the source files of some of the device drivers that are installed and loaded on either a Raspberry Pi (Raspbian), a BeagleBone (Debian), or my laptop (Ubuntu).
My aim is to learn how to properly implement my own modules by studying the source files of some drivers that actually work.
I am particularly interested in drivers that communicate with actual hardware (USB, I2C, SPI, UART, etc.).
Can someone tell me how to find these sources? Are they available in some particular folder, i.e. something like /usr/src/****, or do I have to download all of the kernel source files from a particular kernel release?
All advice, opinions, and recommendations are most appreciated.
P.S. I have read "Linux Kernel Development", 3rd edition, but please tell me if you know of any other free/open-source books on the subject.
Best regards
Henrik
Linux source directories and descriptions:
arch/ -
The arch sub-directory contains all of the architecture specific kernel code.
Example:
1. 'arch/arm/' has the board-related configuration and source code;
   for example, 'arch/arm/mach-omap/' has OMAP-specific source code.
2. 'arch/arm/configs/' holds the default configuration (defconfig)
   files. Running 'make <board>_defconfig' generates a new kernel
   configuration with the default answer used for all options; the
   default values are taken from the corresponding file under
   arch/$ARCH/configs/, where $ARCH refers to the specific
   architecture for which the kernel is being built.
3. 'arch/arm/boot/' holds the kernel zImage and dtb images after
   compilation.
block/ -
This folder holds code for block-device drivers. Block devices are devices that accept and send data in blocks; data blocks are chunks of data rather than a continuous stream.
crypto/ -
This folder contains the source code for many cryptographic algorithms.
For example, "sha1_generic.c" is the file that contains the code for
the SHA-1 hash algorithm.
Documentation/ -
It has kernel related information in text format.
drivers/ - All of the system's device drivers live in this directory. They are further sub-divided into classes of device driver.
Example,
1. drivers/video/backlight/ has the backlight driver source, which
   controls display brightness.
2. drivers/video/display/ has display driver source.
3. drivers/input/ has input driver source code, such as touch,
   keyboard, and mouse drivers.
4. drivers/char/ has character driver source code.
5. drivers/i2c/ has the i2c subsystem and driver source code.
6. drivers/pci/ has the pci subsystem and driver-related source code.
7. drivers/bluetooth/ has the Bluetooth driver sources.
8. drivers/power/ has the power supply and battery drivers.
firmware/ -
The firmware folder contains firmware images that the kernel loads into devices that need them. For illustration, a webcam runs its own firmware to manage its hardware, and the kernel must upload that firmware to the device before it can understand the signals the webcam sends back.
fs/ -
All of the file system code. This is further sub-divided into directories, one per supported file system, for example vfat and ext2.
kernel/ -
The code in this folder controls the kernel itself. For instance, if a debugger needs to trace an issue, the kernel uses code originating from source files in this folder to inform the debugger of the actions the kernel performs. There is also code here for keeping track of time. Inside the kernel folder is a directory titled "power"; some of the code there provides the ability for the computer to restart, power off, and suspend.
net/ -
The kernel's networking code.
lib/ -
This directory contains the kernel's library code. The architecture specific library code can be found in arch/*/lib/.
scripts/ -
This directory contains the scripts (for example awk and tk scripts) that are used when the kernel is configured.
mm/ -
This directory contains all of the memory management code. The architecture specific memory management code lives down in arch/*/mm/, for example arch/i386/mm/fault.c.
ipc/ -
This directory contains the kernel's inter-process communication code.
init/ -
The init folder has code that deals with the startup of the kernel (INITiation). The main.c file is the core of the kernel; it is the main source file that connects all of the other files.
sound/ - This is where all of the sound card drivers are.
There are a few more directories: certs, security, include, virt, usr, etc.
There are a few different methods that I use for viewing kernel related source, and I'm sure there are a few other good methods out there as well. You will find that the methods are largely the same.
Head on over to kernel.org and download the kernel of your choice. You will find driver-related source under /<path to your downloaded kernel>/drivers. For example, I have downloaded and extracted kernel 4.5.5 to /usr/src/linux-4.5.5, and access the source for my drivers via /usr/src/linux-4.5.5/drivers.
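For instance (a sketch; the version and paths are just the ones from my example above):

# fetch and unpack a kernel tree, then browse the driver sources
wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.5.5.tar.xz
tar -xf linux-4.5.5.tar.xz -C /usr/src        # or anywhere you have write access
ls /usr/src/linux-4.5.5/drivers/spi/          # e.g. the SPI controller drivers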
Use a Linux cross-reference website. Personally, I use the one hosted by Free Electrons. These websites are nice for their free-text and identifier searches.
Browse Linus Torvalds' linux repo hosted on GitHub.
Never mind, I found the source files under
~/linux/drivers/
example:
nano ~/linux/drivers/spi/spi-bitbang.c
Sorry for any inconvenience!
I need to provide a single update file to a customer in order to update an embedded system via USB. The system is built using Yocto. I'm curious if the plan I have to implement USB updating is viable, or if I'm missing something that should be obvious.
opkg exists on the system, but in order to use opkg update it needs a repo to pull from. Since I have no network capabilities, I will need to put the entire repo on a USB drive. And since I need to provide a single file to the customer, the repo will then need to be a tar file.
Procedure
Plug USB drive in
A udev rule calls a script and pushes it to the background, since this will be a long process (see this)
Un-tar the repo update file
opkg update
Notify user they may remove USB drive
At least from a high-level point of view does this sound like it is a good way to update an embedded system via USB? What pitfalls might exist?
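Roughly, I imagine the script launched by the udev rule would look something like this (a sketch only; the paths, feed name, and notification mechanism are placeholders):

#!/bin/sh
# hypothetical update script, backgrounded by the udev rule
USB_MNT=/media/usb          # where the drive gets mounted (placeholder)
REPO_DIR=/tmp/usb-repo      # scratch space for the unpacked repo

mkdir -p "$REPO_DIR"
tar -xf "$USB_MNT"/repo-update.tar -C "$REPO_DIR"

# point opkg at the local feed; assumes this opkg build accepts file:// feed URIs
echo "src/gz usb-feed file://$REPO_DIR" > /etc/opkg/usb-feed.conf

opkg update && opkg upgrade

# notify the user (LED, display, log...) that the drive may be removed
echo "update finished, USB drive may be removed" > /dev/kmsg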
Your use case is covered by SWUpdate; it may be worth taking a look at my project (github.com/sbabic/swupdate). Upgrading from USB with a single image file is one of its use cases, and you can use meta-swupdate (listed on the OpenEmbedded layer index) to generate the single image with all artifacts.
Well, regarding what pitfalls might exist, one of the biggest would likely be a power outage in the middle of the process. How do you recover in that case? (The answer might depend on what type of embedded device you're making.) (Personally, I'm in favour of complete image-based upgrades rather than package-based upgrades.)
Regarding your scenario with the tarball, do you have enough space on your device to unpack it? It might be wiser, instead of distributing a tarball of your opkg repository, to distribute e.g. an ext2 or squashfs image of the repository. That would allow you to mount it from the USB drive using the loopback device.
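For example, roughly (a sketch; the names are placeholders):

# on the build machine: pack the opkg repo into a compressed, read-only image
mksquashfs repo/ repo-update.squashfs -comp xz

# on the device: loop-mount it straight from the USB drive instead of untarring
mkdir -p /tmp/usb-repo
mount -o loop,ro /media/usb/repo-update.squashfs /tmp/usb-repo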
Apart from that, as long as you have a good way to communicate with the user, your approach should work. The main issue is what you do in case of an interrupted upgrade process. That is something you need to think about in advance.
When deciding on the update format and design there are several things to consider, including what you want to update (e.g. kernel, applications, bootloader); there is a paper on the most popular designs here: https://mender.io/user/pages/04.resources/_white-papers/Software%20Updates.pdf
In your case (no network, lots of storage available on USB) the easiest approach is probably a full rootfs update. I am involved in an open-source project, Mender.io, which does dual A/B rootfs updates and integrates with the Yocto Project to make it easy and fast to enable updates on devices without custom low-level coding.
I need to create an ELF image file from shared objects (.so files) and write it to another partition in Windows. Then open this partition in Linux and load the shared objects.
Does anybody know how to create an ELF image (a bundle of many shared objects) in Windows?
You can use Cygwin and try a suitable GCC cross-toolchain. Perhaps you'll have to build it yourself first (which is troublesome), but there it goes...
EDIT:
Okay, here you are:
A simplified one:
Building GCC cross compiler (from "Linux" to "Windows") -- the basic steps are the same as described there. You'll just need to ./configure it with the relevant --host=... and --target=.... And oh! Don't forget to set up a separate build directory, since building "in the source tree" is not supported -- you'll just get stuck in errors if you try (I did...). A rough sketch of the configure step follows the links below.
A killer one:
http://cygwin.wikia.com/wiki/How_to_install_a_newer_version_of_GCC#Build_and_Install_GCC -- a complete guide.
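A very rough sketch of just the configure step described above (the triplets, version, and prefix are illustrative; a real cross toolchain also needs binutils and the target C library/sysroot, which this omits):

# never configure inside the GCC source tree; use a separate build directory
mkdir build-gcc && cd build-gcc
# --host: where the compiler runs (Cygwin); --target: what it generates code for (Linux)
../gcc-x.y.z/configure --host=x86_64-pc-cygwin --target=x86_64-linux-gnu \
    --prefix=/opt/cross --enable-languages=c,c++
make && make install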
Nowadays Linux understands NTFS. At least, it should be able to read off it.
You can also use a flash stick formatted as FAT32 or NTFS as the shared storage.
You can also run Linux in a VM and set up FTP server on it and exchange files through it.
There're many ways of sharing data between different OSes.