Building two different builds for two different MCUs into one binary blob in one Yocto project - embedded-linux

We have two different MCUs on our system. The primary one runs an OS that controls the whole system, and the secondary one runs a small task on bare metal.
The secondary has no flash, so when the system starts, it will ask the primary for a program, which the primary will feed it from the primary's flash.
This all works fine, but we have two separate Yocto builds, one for each MCU, and we have to feed the secondary's software into the primary's Yocto build as a prebuilt binary blob.
Is there a way to get one Yocto project to build both, without us having to manually run one build, then the other?

Yocto is definitely not the tool you are looking for from my point of view.
What you need is more of a CI/CD platform like GitLab hosting the MCU project. On each commit (or whatever your build policy dictates), the project is built and the binary is pushed to a web server or a Git repository; your Yocto build can then have a versioned recipe that fetches the resulting binary, as sketched below.
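For example, a minimal recipe for pulling in the prebuilt MCU firmware could look roughly like this (repository URL, file names and install location are placeholders, and the FILES:/INSANE_SKIP: override syntax assumes a recent Yocto release):

# mcu-firmware_1.0.bb -- sketch only, not a drop-in recipe
SUMMARY = "Prebuilt bare-metal firmware for the secondary MCU"
LICENSE = "CLOSED"

SRC_URI = "git://gitlab.example.com/firmware/mcu-firmware-bin.git;protocol=https;branch=main"
SRCREV = "<revision published by the MCU CI job>"
S = "${WORKDIR}/git"

do_install() {
    install -d ${D}${nonarch_base_libdir}/firmware
    install -m 0644 ${S}/mcu-firmware.bin ${D}${nonarch_base_libdir}/firmware/
}

FILES:${PN} = "${nonarch_base_libdir}/firmware/mcu-firmware.bin"
# the blob targets the MCU, not the Yocto MACHINE, so silence the arch QA check if it triggers
INSANE_SKIP:${PN} += "arch"

The image recipe for the primary MCU then just adds mcu-firmware to IMAGE_INSTALL, and a firmware update becomes a SRCREV bump.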
Alternatively, your CI can also manage the Yocto project and trigger the Yocto build whenever the MCU firmware project has been built.
Using binaries directly instead of sources is common in embedded systems; the Linux kernel, for example, can embed binary blobs for coprocessors. But handling dependencies between multiple architecture types is clearly not Yocto's job.

Related

How to manage stable binaries and avoid risk of CI rebuilds when install packaging?

I am looking for a tool to manage the collection of binary files (input components) that make up a software release. This is a software product and we have released multiple versions each year for the last 20 years. The details and types of files may vary, but this is something many software teams need to manage.
What's a Software Release made of?
A mixture of files go into our software releases, including:
Windows executables/binaries (40 DLLs and 30+ EXE files).
Scripts used by the installer to create a database
API assemblies for various platforms (.NET, ActiveX, and Java)
Documentation files (HTML, PDF, CHM)
Source code for example applications
The full collected files for a single version of the release are about 90MB. Most are built from source code, but some are 3rd party.
Manual Process
Long ago we managed this manually.
1. When starting each new release, the files used to build the last release would be copied to a new folder on a shared drive.
2. The developers would manually add or update files in this folder (hoping nothing was lost or deleted accidentally).
3. The software installer script would be compiled using the files in this folder to produce a SETUP.EXE (output).
4. Iterate steps 2 and 3 during validation & testing until release.
Automatic Process
Some years ago we adopted CI (building our binaries nightly or on-demand).
We resorted to putting 3rd party binaries under version control since they usually don't change as often.
Then we automated the process of collecting & updating files for a release based on the CI build outputs. Finally we were able to automate the construction of our SETUP.EXE.
Remaining Gaps
Great so far, but this leaves us with two problems:
Rebuilding Assemblies: The CI mostly builds projects when something has changed, but when forced it will re-compile a binary that doesn't have any code change. The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?).
Latest vs Stable: Mostly our CI machine builds the latest versions of each project. In some cases this is OK, but often we want to release an older tested or stable version. To do this we have separate CI projects for the latest and stable builds - this works but is clumsy.
Thanks for your patience if you've got this far :-)
I Still Haven't Found What I'm Looking For
After some time searching for solutions it seems it might be easier to build our own solution, but surely someone else has solved these problems before!?
What we want is a way to store and manage binary files (either outputs from CI, or 3rd party files) such that each is tagged with a version (v1.2.3.4) that allows:
The CI to publish new versions of each binary (but reject rebuilt versions that already exist).
The development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies the components to include (see the sketch after this list):
package name
version
path/destination in the release folder
The automatic packaging script to use the recipe to collect the required files and compile the install package (e.g. SETUP.EXE).
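For illustration only, the kind of recipe file I have in mind might look something like this (a made-up format, loosely modelled on packages.config; it is not the format of any existing tool):

<release product="MyProduct" version="12.0.0">
  <component name="Engine.Core"    version="1.2.3.4"  dest="bin\" />
  <component name="DbScripts"      version="2.0.1.0"  dest="db\scripts\" />
  <component name="UserGuide.pdf"  version="12.0.0.0" dest="docs\" />
  <component name="ThirdParty.Zip" version="7.1.0.0"  dest="redist\" />
</release>

The packaging script would resolve each name/version pair against the artifact store, copy the files to the given destinations, and then compile SETUP.EXE from the result.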
I am aware of past debates about storing binaries in a VCS. For now I am looking for a better solution. That approach does not appear ideal for long-term ongoing use (e.g. how to prune old binaries)... amongst other issues.
I have tried some artifact repositories currently available. From my investigation these provide a solution for component/artifact storage and version control. However they do not provide tools for managing a list of components/artifacts to include in a software release.
Does anybody out there know of tools for this?
Have you found a way to get your CI infrastructure to address these remaining issues?
If you're using an artifact repository to solve this problem, how do you manage and automate the process?
This is a very broad topic, but it sounds like you want a release management tool (e.g. BuildMaster, developed by my company Inedo), possibly in conjunction with a package management server like ProGet (which you tagged, and is how I discovered this question).
To address some specific questions you have, I'll associate it with a feature that would solve the problem:
A mixture of files go into our software releases, including...
This is handled in BuildMaster with artifacts. This video gives a basic overview of how they are manually added to releases and deployed to a file system: https://inedo.com/support/tutorials/buildmaster/deployments/deploying-a-simple-web-app-to-iis
Of course, once that works to your satisfaction, you can automate the import of artifacts from your existing CI tool, create them from a BuildMaster deployment plan itself, pull them from your package server, whatever. Down the line you can also have your CI tool call the BuildMaster release management API to create a release and automatically have it include all the artifacts and components you want (this is what most of our customers do now, i.e. have a build step in TeamCity create a release from a template).
Rebuilding Assemblies ... The output is a fresh build of a binary we've previously tested (hint: should we always trust these are equivalent?)
You can mostly assume they are functionally equivalent, but it's precisely the times when they are not that problems arise. This is especially true with package managers that do not lock dependencies to specific version numbers (e.g. NuGet, npm). You should be releasing exactly the same binary that was tested in previous environments.
[we want] the development team to make a recipe for a software release (kinda like NuGet packages.config) that specifies components to include:
This is handled with releases. A developer can choose the release's name, dates, etc., associate it with a pipeline (i.e. a set of testing stages that the artifacts are deployed through), and then "click the deploy button" and have the automation do all the work.
Releases are grouped by "application", similar to a project in TeamCity. As a more advanced use case, you can use deployables. Deployables are essentially individual components of an application you include in a release; in your case the "Documentation" could be a deployable, and maybe contain an artifact of the .pdf and .docx files. Deployables from other applications (maybe a different team is responsible for them, or whatever) can then be referenced and "included" in a release, or you can reference ones from a past release.
Hopefully that provides some overview and fits your needs. Getting into this space is a bit overwhelming because there are so many terms, technologies, and methodologies, but my advice is to start simple and then slowly build upon it, e.g.:
deploy a single, manually uploaded component through BuildMaster to a shared drive, then manually deploy it from there
add a deployment plan that imports the component
add a second plan and associate it with the 2nd stage that takes the uploaded artifact and deploys it to the target, bypassing the need for the shared drive
add more deployment plans and associate them with pipeline stages and promote through them all to "close out" a release
add an agent and deploy to that instead of the default localhost server
add more components and segregate their deployment with deployables
add event listeners to email team members at points in the process
start adding approvals if you require gated "sign-offs"
and so on.

Platform-independent Go build for engineer testing

My typical endly test automation is parametrized to run either on my localhost (OSX) or on a staging box (Linux); ideally I want to cross-compile a separate app binary for each platform.
All that said, when I build my app binary on OSX for Linux, I see the following:
export GOOS=linux
go build
# github.com/alexbrainman/odbc/api
../../../../github.com/alexbrainman/odbc/api/api.go:17:9: undefined: SQLSMALLINT
../../../../github.com/alexbrainman/odbc/api/api.go:18:9: undefined: SQLUSMALLINT
../../../../github.com/alexbrainman/odbc/api/api.go:19:9: undefined: SQLUSMALLINT
My application uses ODBC to connect to Vertica, and at the moment the only available Vertica driver in Go uses cgo.
Is there a way to produce a cross-platform, statically compiled app build despite the cgo dependency?
While there are definitely ways of doing this manually, I'd recommend using xgo. I've successfully used it in a project which involved zserge/webview, and the gitea project uses it for cross-compiling release binaries (which involve SQLite, which requires cgo).
Keep in mind it requires Docker, and it needs to download a very large image, but there is a good wrapper around all the commands that you need to run.
# installing the wrapper
go get github.com/karalabe/xgo
# go into your repo, and then run this to crosscompile!
xgo --targets=windows/*,darwin/*,linux/amd64

Do composite builds make multi-module builds obsolete?

I have a hard time understanding when to use composite builds versus multi-module builds. It seems both can be used to achieve similar things.
Are there still valid use cases for multi-module builds?
In my opinion, a multi-module build is a single system which is built and released together. Every module in the build should have the same version and is likely developed by the same team and committed to a single repository (git/svn etc).
I think that a composite build is for development only, for use in times when a developer is working on two or more systems (likely in different repositories with different release cycles/versions), e.g.:
Developing a patch for an open source library whilst validating the changes in another system
Tweaking a utility library in a separate in-house repository (perhaps shared by multiple teams) whilst validating the changes in another system
A fix/improvement that spans two or more systems (likely in separate repos)
I don't think that a composite build should be committed to source control or built by continuous integration. I think CI should use jars from a repository (e.g. Nexus). Basically, I think composite builds serve the same purpose as the Resolve Workspace Artifacts checkbox in m2e.
Please note that one of the restrictions on a composite build is that it cannot include another composite build. So I think it's safer to commit multi-module builds to source control and use composite builds to join them together locally, for development only.
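Assuming Gradle (where composite builds are a first-class feature), a minimal sketch of that workflow could look like this in the settings.gradle of the system you are currently working on (project and path names are invented):

// settings.gradle
rootProject.name = 'my-app'

// the modules of this system, versioned and released together (multi-module build)
include 'app-core', 'app-web'

// temporarily substitute the locally checked-out utility library for the
// binary dependency normally resolved from Nexus -- local development only
includeBuild '../shared-utils'

The same substitution can also be made without touching settings.gradle at all, e.g. gradle build --include-build ../shared-utils, which keeps the composite wiring out of source control entirely.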
These are my opinions on how the two features should be used, I'm sure there are valid exceptions to the above
We use our own monorepo with monobuild-style change detection and use composite builds for CI and CD to staging (any microservices that end up being rebuilt by your changes auto-deploy to staging). I disagree that composite builds are just for development, as we use them to get to production in a monorepo/monobuild.
A full multi-project build at Orderly Health is estimated at about 15-20 minutes, based on webpieces taking 5 minutes and on the fact that modifying a library in OrderlyHealth that affects EVERY project takes about 15 minutes.
Instead, we detect which projects changed and which leaf nodes depend on them; every leaf node is a composite project pulling in libraries that pull in further libraries, and the general average build time is 3 minutes. (That is a 5x boost on build time right there.)
later
Dean

Build farms using ccnet

Is it possible to use CruiseControl.NET to set up a build farm? We currently have 4 different build machines building different things at different times, and it is a bit of a headache to manually balance the load. I would prefer to designate one of them as the master build machine, which would delegate work to the other ones when they are free.
As far as I can determine, there is no support in CruiseControl.Net for build farms - at least not operating the way you describe. CCNet's interpretation of "farm" seems to assume that projects are assigned manually to a machine and a given project will always be built on the same machine.
If you wanted to dynamically select which machine actually performs the build, you would need to create your own mechanism to select that machine and trigger the build on it. There is likely to be quite a bit of complexity associated with this. For instance you would probably need to ensure that the same project does not get built simultaneously on two different machines if a second commit occurs while the previous commit is still being processed.
If there is a shared location that all the build machines can access, it may be possible to use the Filesystem source control block or CCNet's ForceBuild mechanism to start the build on the designated machine, but have all the build machines publish their output for a given project to the same final location.
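As a rough illustration of that idea (project names and paths are invented, and the element names are quoted from memory, so verify them against the CCNet documentation), the designated machine's ccnet.config could watch the shared location and publish back to it:

<project name="SharedProject">
  <!-- a change under the shared root triggers a build on this machine -->
  <sourcecontrol type="filesystem">
    <repositoryRoot>\\buildshare\drop\SharedProject</repositoryRoot>
    <autoGetSource>true</autoGetSource>
  </sourcecontrol>
  <triggers>
    <intervalTrigger seconds="60" />
  </triggers>
  <tasks>
    <!-- this machine's build steps go here -->
  </tasks>
  <publishers>
    <!-- every build machine publishes its output to the same final location -->
    <buildpublisher>
      <publishDir>\\buildshare\output\SharedProject</publishDir>
    </buildpublisher>
  </publishers>
</project>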
See the "Load-Balancing the Build Farm with CruiseControl.NET" blog post for a possible solution.

Continuous Integration and running builds on virgin machines?

Two parts to this question.
1) As part of our Continuous Integration build process I would like to install everything as if it were a virgin machine. Martin Fowler's paper: http://martinfowler.com/articles/continuousIntegration.html
Does he mean that for each (integration) build we take a clean machine and install ALL the necessary software to make the build work? I'm guessing this is what he meant by a "Single Command" build.
2) Which leads me nicely on to the next question. Is it possible to install programs using PowerShell/DOS entirely through the command line? For example, how would I install WinRAR and possibly MySQL? (WinRAR being an easy example, MySQL complex.)
Anyways, I am interested to hear from real-world practitioners of CI and how they approach their build processes.
In the latest CI environment I built, I installed and configured the toolchains and SDKs under a single directory tree and then created an ImageX WIM image of the tree. Each clean build would then mount the image, checkout sources from version control, build them, run tests etc. When unmounting, just remember to not commit the changes back to the image so that the image file stays clean.
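As a sketch of the mount/discard cycle using DISM's WIM-mounting commands rather than ImageX itself (image and directory paths are placeholders), the per-build wrapper could do something like:

rem mount the prepared toolchain/SDK image for this build
dism /Mount-Wim /WimFile:D:\images\toolchain.wim /Index:1 /MountDir:C:\build\tools

rem ...checkout sources, build against C:\build\tools, run tests...

rem discard any changes so the image file stays clean for the next build
dism /Unmount-Wim /MountDir:C:\build\tools /Discard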
For each of our builds with Zed we ensure a completely clean working environment, but assume that the entire tool-chain and utility applications are already installed on the machine.
If you really want to go to the level of a virgin machine, then I would agree with laalto and look into VMs. Set up your VM library to represent the different build environments/configurations that you will need for your product set, and load/start them on demand as you require builds for different products.
I think it is very important to always build from a clean working directory, but I'd question the real value of always trying to start with a bare OS and install everything from scratch for every build.
