How can I save local changes made in a Yocto repository? - embedded-linux

I'm working on an embedded Linux system at the moment and using Yocto to build a Linux distribution for a board.
I've followed the Yocto build flow:
download the layer sources
build the image
flash the image onto the board or generate the SDK.
Everything works great. However, I was required to make some changes to local.conf, add some *.bbappend files, systemd services and so forth.
So I'm wondering how I can save those local changes in case I want to set up a new build machine, or the current one gets corrupted.
Should I create a custom image or layer that inherits everything from the board manufacturer's one and adds the changes and functionality I need? Or something else?

Generally, when working on a custom project with Yocto, here is what you will likely need:
First of all, you need to create your custom layer:
bitbake-layers create-layer meta-custom
and add it:
bitbake-layers add-layer <path/to/meta-custom>
After that, here are some ideas:
Official recipes modification:
When you have to modify an official recipe that exists in another layer, create a .bbappend file in your custom layer and make your changes there. For example, if the official recipe is:
meta-official/recipes-example/example/example_1.0.bb
your modifications must go under:
meta-custom/recipes-example/example/example_1.0.bbappend
or to match all versions of that recipe:
meta-custom/recipes-example/example/example_%.bbappend
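For example, a minimal .bbappend sketch (the patch file name is hypothetical) that adds a patch shipped inside your layer:

FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://0001-example-fix.patch"

On releases older than Honister (3.4), the override syntax uses underscores, i.e. FILESEXTRAPATHS_prepend instead of the colon form.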
Distro modification:
If you changed DISTRO_FEATURES in local.conf, you may want to create a new distro in your custom layer instead:
meta-custom/conf/distro/custom-distro.conf
In custom-distro.conf:
include or require your currently used distro configuration
add your custom DISTRO_FEATURES configuration
Then, when creating a new build, set (in local.conf):
DISTRO = "custom-distro"
Examples of distro changes (combined in the sketch after this list):
Select the init manager, e.g. INIT_MANAGER = "systemd"
Add some distro features
Set some preferred recipe versions: PREFERRED_VERSION_recipe = "x"
Set some preferred providers: PREFERRED_PROVIDER_virtual/xx = "x"
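Putting those together, a minimal custom-distro.conf sketch (the require target and all values are placeholders; adapt them to your manufacturer's distro):

require conf/distro/poky.conf
DISTRO_NAME = "Custom Distro"
INIT_MANAGER = "systemd"
DISTRO_FEATURES:append = " wifi"
PREFERRED_VERSION_linux-yocto = "6.1%"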
Machine modification:
If your board has permanent hardware components that are not enabled by default in Yocto, then I suggest creating a custom machine as well:
meta-custom/conf/machine/custom-machine.conf
In it, include or require your current machine configuration file; then you may:
Select your preferred virtual/kernel provider
Select your preferred virtual/bootloader provider
Select your custom kernel and bootloader device tree files
etc.
and then set it (in local.conf):
MACHINE = "custom-machine"
Image modification:
This is the most likely modification you'll need: adding packages to the image with IMAGE_INSTALL. For that, you may want to create a custom image:
meta-custom/recipes-core/images/custom-image.bb
In it, require or include another image and:
add packages with IMAGE_INSTALL
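A minimal custom-image.bb sketch (the required base image and the package names are placeholders):

require recipes-core/images/core-image-minimal.bb
SUMMARY = "Custom image for my board"
IMAGE_INSTALL:append = " htop my-custom-app"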
NOTES
If you have a .bbappend that has to override changes made by an official .bbappend, consider giving your layer a higher priority than the official one in meta-custom/conf/layer.conf.
If your new custom layer depends on your manufacturer's layer, consider declaring that dependency in the layer conf file:
LAYERDEPENDS_meta-custom = "meta-official"
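For reference, the relevant lines in meta-custom/conf/layer.conf would look something like this (bitbake-layers create-layer generates most of them; the priority value is just an example):

BBFILE_COLLECTIONS += "meta-custom"
BBFILE_PRIORITY_meta-custom = "10"
LAYERDEPENDS_meta-custom = "meta-official"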
I recommend using kas, with which you can set up an automatic layer configuration including your custom layer and create the build automatically; this is also useful for DevOps pipeline automation.
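A minimal kas configuration sketch (all URLs, refspecs, machine/distro/layer names are placeholders; an empty repo entry refers to the repository the file itself lives in):

header:
  version: 14
machine: custom-machine
distro: custom-distro
target: custom-image
repos:
  meta-custom:
  poky:
    url: https://git.yoctoproject.org/poky
    refspec: kirkstone
    layers:
      meta:
      meta-poky:

Then kas build kas-project.yml fetches the layers, generates the build configuration and runs BitBake.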
This is what I can think of right now :))
EDIT
You can then create a custom repository for your custom layer.
If you are using repo for the manufacturer-provided initialization, then you can use this approach:
You can customize the manufacturer's manifest file to add your new custom repository, like the following:
Add a remote block for your custom git server:
<remote name="custom-git" fetch="ssh://git@gitlab.xxx/<group>/"/>
If your custom layer lives directly under the git server root, remove <group>; otherwise, set it accordingly.
Then, add your custom layer as a project:
<project path="<where/to/unpack>" name="<name/under/remote>" remote="custom-git" revision="<commit>" />
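Put together, the manifest additions might look like this (the included manifest name, group, paths and revision are all hypothetical):

<manifest>
  <include name="manufacturer-manifest.xml"/>
  <remote name="custom-git" fetch="ssh://git@gitlab.xxx/mygroup/"/>
  <project path="sources/meta-custom" name="meta-custom" remote="custom-git" revision="main"/>
</manifest>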
You can check the repo documentation for more details.
Finally, you just use repo with your custom repository/manifest:
repo init -u <custom-git/manifest-project> -b <branch> -m custom-project.xml
repo sync

Related

How to link an APM agent like NewRelic to a Spring Boot application with bootBuildImage?

I have a Gradle-based Spring Boot 3 application. I use the bootBuildImage Gradle task in CircleCI to build a Docker image of this application.
Now, I want to add New Relic to this application. I know I can do it by writing my own Dockerfile, but I want to do it by configuring the bootBuildImage Gradle task.
I saw that I can add buildpacks like this:
tasks.named("bootBuildImage") {
buildpacks = [...]
}
And it appears that NewRelic has a buildpack here.
How can I generate the docker image with NewRelic integration?
Bonus: I need to inject an environment variable such as NEW_RELIC_ENABLE_AGENT=true|false. How can I do that?
You're on the right track. You want to use the New Relic Buildpack that you found.
High-level instructions for that buildpack can be found here. It essentially works by taking in bindings (the secret config data); the buildpack securely maps those values to the standard New Relic agent configuration properties (through environment variables).
An example of an APM tool configured through bindings can be found here. The specific example is using a different APM tool, but the same steps will work with any APM tool configured through bindings, like New Relic.
For your app:
Create a bindings directory. The root of your project is a reasonable place, but the path doesn't ultimately matter. Don't check in binding files that contain secret data :)
In the folder, create a subfolder called new-relic. Again, the name doesn't really matter.
In the folder from the previous step, create a file called type. The name does matter. In that file, write NewRelic and that's it. Save the file. This is how the buildpack identifies the bindings.
In the same folder, you can now add additional files to configure New Relic. The name of each file is the key and its contents are the value. When your app runs, the buildpack will read the bindings and translate them to New Relic configuration settings of the form NEW_RELIC_<KEY>=<VALUE>. Thus, if you read the New Relic docs and see a property called foo, you could make a file called foo, set its contents to bar, and at runtime you'll end up with the env variable NEW_RELIC_foo=bar being set. The New Relic agent reads environment variables for its configuration, although sometimes that's not the first way mentioned in their docs.
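For example, an illustrative binding layout (license_key and app_name map to NEW_RELIC_LICENSE_KEY and NEW_RELIC_APP_NAME; check the New Relic docs for the exact properties you need):

bindings/
  new-relic/
    type          # contains exactly: NewRelic
    license_key   # contains your New Relic license key
    app_name      # contains the name your app reports as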
Next you need to configure your build.gradle file. These changes will tell bootBuildImage to add the New Relic buildpack and to pass through your bindings.
In the tasks.named("bootBuildImage") block, add buildpacks = ["urn:cnb:builder:paketo-buildpacks/java", "gcr.io/paketo-buildpacks/new-relic"]. This will run the standard Java buildpack and then append New Relic onto the end of that list. Example.
Add a bindings list. In the same tasks.named("bootBuildImage") block, add bindings = ["path/to/local/bindings/new-relic:/platform/bindings/new-relic"]. This will mount path/to/local/bindings/new-relic on your host to /platform/bindings/new-relic in the container, which is where the buildpack expects bindings to live. You will need to change the first path to point to the local bindings you created above (you can probably use a Gradle variable to reference the project directory, but I don't know it off the top of my head). Don't change the path on the container side; that needs to be exactly what I put above.
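Combined, the build.gradle block might look like this (the host-side bindings path is a placeholder for wherever you created them):

tasks.named("bootBuildImage") {
    buildpacks = ["urn:cnb:builder:paketo-buildpacks/java", "gcr.io/paketo-buildpacks/new-relic"]
    bindings = ["${project.rootDir}/bindings/new-relic:/platform/bindings/new-relic"]
}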
Run your build. ./gradlew bootBuildImage. In the output, you should see the New Relic buildpack pass detection (it passes if it finds the type file with NewRelic as the contents) and it should also run and contribute the New Relic agent as is described in the buildpack README.md.
After a successful build, you'll have the image. The key to remember is that bindings are not added to the image. This is intentional for security reasons. You don't want secret binding info to be included in the image, as that will leak your secrets.
This means that you must also pass the bindings through to your container runtime when you run the image. If you're using Docker, you can docker run --volume path/to/local/bindings/new-relic:/platform/bindings/new-relic ... and use the same paths as at build time. If you're deploying to Kubernetes, you'll need to set up Secrets in K8s and mount those secrets as files within the container under the same path as before, /platform/bindings/new-relic. So you need to make a type file, /platform/bindings/new-relic/type, and files for each key/value parameter you want to set.
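For example, with Docker (the image name is a placeholder; the -e flag also covers the bonus question, since any environment variable can be injected this way):

docker run \
  --volume "$(pwd)/bindings/new-relic:/platform/bindings/new-relic" \
  -e NEW_RELIC_ENABLE_AGENT=true \
  my-app:latest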
At some point in the future, we're working to have all of the APM buildpacks included in the main Java buildpack by default. This would eliminate the first config change in step #5.
Because managing bindings can be kind of a pain, I also have a project called binding-tool that can help with steps 1-3. It allows you to easily create the binding files, like bt add -t NewRelic -p key1=val1 -p key2=val2. It's not doing anything magic, just creates the files for you, but I find it handy. In the future, I want it to generate the Kubernetes YAML as well.

Is there a way to make the dbt_cloud_pr_xxxx_xxx destination a clone of existing data?

So, using dbt Cloud and having a run on every pull request, my incremental models are fully refreshed, since everything runs in a new DB destination (dbt_cloud_pr_xxxxx_xxx). Any way of solving this? Perhaps creating the new destination as a clone of an old one?
dbt calls this "Slim CI". You can use their "deferral" and "state comparison" features -- they will compare the manifest of the compiled project against the manifest from another run you specify (typically the last production run). Any models that are unchanged will have ref() compile to the prod target, and then you can use the state:modified+ selector in your dbt Cloud job definition to only rebuild the models with changes.
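On the CLI, the equivalent invocation looks roughly like this (the state path is a placeholder; in dbt Cloud, deferral is configured on the job itself):

dbt run --select state:modified+ --defer --state path/to/prod-artifacts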
See the docs for CI in dbt Cloud.

Managing custom Go modules accessed via non-standard ports

Background
At my company, we use Bitbucket to host our git repos. All traffic to the server flows through a custom, non-standard port. Cloning from our repos looks something like git clone ssh://git@stash.company.com:9999/repo/path/name.git.
The problem
I would like to create Go modules hosted on this server and managed by go mod, however, the fact that traffic has to flow through port 9999 makes this very difficult. This is because go mod operates on the standard ports and doesn't seem to provide a way to customise ports for different modules.
My question
Is it possible to use go mod to manage Go modules hosted on a private git server with a non-standard port?
Attempted solutions
Vendoring
This seems to be the closest to offering a solution. First I go mod vendor the Go application that wants to use these Go modules, then I git submodule the Go module in the vendor/ directory. This works perfectly up to the point that a module needs to be updated or added. go mod tidy will keep failing to download or update the other Go modules because it cannot access the "git URL" of the custom Go module, even when the -e flag is set.
Editing .gitconfig
Editing the .gitconfig to replace the URL without the port with the URL with the port is a solution that will work, but it is a very dirty hack. Firstly, these edits will have to be done for any new modules, and for every individual developer. Secondly, this might break other git processes when working on these repositories.
The go tool uses git under the hood, so you'd want to configure git in your environment to use an alternate url. Something like
git config --global url."ssh://git@stash.company.com:9999/".insteadOf "https://stash.company.com"
Though I recall that Bitbucket/Stash sometimes adds an extra path segment for reasons I don't recall, so you might need to do something like this:
git config --global url."ssh://git@stash.company.com:9999/".insteadOf "https://stash.company.com/scm/"
ADDITIONAL EDIT
User bcmills mentioned below that you can also serve the go-import metadata over HTTPS and use whatever vanity URL you like, provided you control the domain resolution. This can be done with varying degrees of sophistication, from a simple nginx rule to static content generators, dedicated vanity services, or even running your own module proxy with Athens.
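The served metadata is just an HTML meta tag; a sketch, with the vanity domain and paths entirely hypothetical:

<meta name="go-import" content="go.company.com/mymodule git ssh://git@stash.company.com:9999/repo/path/mymodule.git">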
This still doesn't completely solve the problem of build environment configuration, however, since you'll want the user to set GOPRIVATE or GOPROXY or both, depending on your configuration.
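For example, assuming the modules live under stash.company.com:

go env -w GOPRIVATE=stash.company.com

This tells the go tool to skip the public module proxy and checksum database for matching module paths.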
Also, if your chosen domain is potentially globally resolvable, you might want to consider registering it anyway to keep it from being registered by a potentially-malicious third party.

How to notify a dependent apt/dpkg package during a dependency update?

I'm building some in-house configuration packages for a Debian-based OS.
There is a package (in the default Debian repositories) which contains a service (as a binary) and all the machinery needed to run it, but the service is not started automatically during installation (and the package does not even create an appropriate user for it to run under).
I've successfully built a custom dependent configuration package which creates a dedicated system user to run this service, adjusts the configuration, and starts this service instance via systemd.
However, I'm not sure how to handle the case where the original service package is updated. I think my service instance (running under a custom system user with a custom configuration) should be restarted on such an upstream update, but I can't find a good way to do it. According to the official documentation, apt/dpkg doesn't notify (call) any maintainer scripts in a package's dependency graph - only those of the package being directly updated. Currently I'm thinking about some inotify-based workaround to watch for service binary file changes and trigger my service instance's restart manually, but it feels... hacky =)
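For what it's worth, here is a sketch of that workaround using a systemd path unit rather than hand-rolled inotify (all unit, service, and binary names are hypothetical):

# /etc/systemd/system/watch-upstream-binary.path
[Unit]
Description=Watch the upstream service binary for package updates

[Path]
PathChanged=/usr/sbin/upstream-service

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/watch-upstream-binary.service (triggered by the path unit)
[Unit]
Description=Restart the custom service instance after an upstream update

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl try-restart my-custom-instance.service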
So here goes the question:
Is there any existing infrastructure in apt/dpkg to notify dependent packages about updates of their dependencies?

Tentacle/Octopack - Group Applications By Folder (Project Name)

I used to group related parts of my application like so:
Project Name
--Web UI
--Windows Service
--File Drop
"File Drop" is an example that might be related to both the service and the web site.
However, tentacle deploys each package separately, so with that I get something that looks more like (assuming ProjectName is used in the package id):
--ProjectName.WebUI
--ProjectName.WindowsService
How should I deploy a related shared folder? Can you group applications in some way? If not, is there a recommended pattern for creating shared resources?
I should add that I'm using OctoPack. I figure I could certainly put a NuGet package together manually and use the relative directory parameters for IIS sites and services, but that starts to get more difficult.
It sounds like you want to use the Custom Installation Directory feature. This will let you control which directory the package is extracted to.
You can also do some custom setup in a deploy.ps1 file for each of your packages.
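For the shared "File Drop" folder, a hypothetical deploy.ps1 sketch (the path is a placeholder; in practice you'd drive it from an Octopus variable):

# Ensure the shared folder exists alongside the deployed applications
$fileDrop = "C:\Apps\ProjectName\FileDrop"
if (-not (Test-Path $fileDrop)) {
    New-Item -ItemType Directory -Path $fileDrop | Out-Null
}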
