I have an application where I want to run InfluxDB on a PHYTEC Mira board. I found a meta layer for it, and on the initial build I successfully got it compiled for the board.
Upon boot,
$ influxd
needs to be started first, and then
$ influx
to start the InfluxDB shell.
However, I want to include an influxd.service systemd script:
[Unit]
Description=InfluxDB is an open-source, distributed, time series database
Documentation=https://docs.influxdata.com/influxdb/
After=network.target
[Service]
LimitNOFILE=65536
EnvironmentFile=-/etc/default/influxdb
ExecStart=/usr/bin/influxd $INFLUXD_OPTS
ExecStartPost=/bin/sh -c 'while ! influx -execute exit > /dev/null 2>&1; do sleep 0.1; done'
KillMode=control-group
Restart=on-failure
[Install]
WantedBy=multi-user.target
Alias=influxd.service
but within the Yocto build structure I do not know where to place it in order to make it available for all subsequent builds.
Following the board's BSP manual (section "CAN Bus"), I placed the above-mentioned .service script in the
meta-yogurt/recipes-core/systemd/systemd-machine-units/
folder.
I made a new image and on booting the board I tried:
systemctl start influxd.service
but no such unit exists. I looked in the /lib/systemd/system/ folder on the board to see if the influxd.service file is there, but it isn't.
Update
This is the current file structure:
where meta-umg is a custom layer, and within it recipes-go/go/ mirrors the layout of the meta-influx layer:
../sources/meta-umg/
├── conf
│ └── layer.conf
├── COPYING.MIT
├── README
└── recipes-go
└── go
├── files
│ └── influxd.service
└── github.com-influxdata-influxdb_%.bbappend
The github.com-influxdata-influxdb_%.bbappend has the same content as @Nayfe mentioned.
Upon executing bitbake -e github.com-influxdata-influxdb I get the following error:
No recipes available for:
/opt/PHYTEC_BSPs/yocto_fsl/sources/poky/../meta-umg/recipes-go/go/github.com-influxdata-influxdb_%.bbappend
I guess the % wildcard is not valid here, since it only matches the version part of a recipe file name and this recipe has no version attached to it.
So I went ahead and changed the name of the .bbappend file to github.com-influxdata-influxdb.bbappend and
bitbake -e github.com-influxdata-influxdb | grep ^SYSTEMD_
provides:
SYSTEMD_AUTO_ENABLE="enable"
SYSTEMD_SERVICE_github.com-influxdata-influxdb="influxd.service"
SYSTEMD_PACKAGES="github.com-influxdata-influxdb"
SYSTEMD_PACKAGES_class-native=""
SYSTEMD_PACKAGES_class-nativesdk=""
and
bitbake-layers show-appends | grep "github.com*"
Parsing recipes..done.
github.com-influxdata-influxdb.bb:
/opt/PHYTEC_BSPs/yocto_fsl/sources/poky/../meta-umg/recipes-go/go/github.com-influxdata-influxdb.bbappend
When I create an image with IMAGE_INSTALL_append = " github.com-influxdata-influxdb" in my local.conf,
the systemd unit is available in the /etc/systemd/system/multi-user.target.wants/ folder, but the influxd daemon and the influx shell are not installed on the board.
I suspect that removing the % sign overrides the complete installation recipe.
Update 1
oe-pkgdata-util list-pkg-files -p github.com-influxdata-influxdb gives the following output when the layer is added and the recipe is built with bitbake github.com-influxdata-influxdb:
github.com-influxdata-influxdb:
/lib/systemd/system/influxd.service
github.com-influxdata-influxdb-dbg:
github.com-influxdata-influxdb-dev:
You need to append the influxd recipe and create a files folder with influxd.service in it.
influxd_%.bbappend:
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
inherit systemd
SYSTEMD_SERVICE_${PN} = "influxd.service"
SRC_URI += " \
file://influxd.service \
"
do_install_append() {
    # systemd
    install -d ${D}${systemd_unitdir}/system/
    install -m 0644 ${WORKDIR}/influxd.service ${D}${systemd_unitdir}/system/
}
PS: I assume your influxd recipe name is influxd; if you are using github.com-influxdata-influxdb.bb, you'll need to create github.com-influxdata-influxdb.bbappend instead.
Related
I want to serve a static website using Bitnami's Nginx base image. I have a multi-stage Dockerfile as follows:
# build stage
FROM node:lts-alpine as build-stage
COPY ./ /app
WORKDIR /app
COPY .npmrc .npmrc
RUN npm install && npm run build
# Production stage
FROM bitnami/nginx:1.16 as production-stage
COPY --from=build-stage --chown=1001 /app/dist /app
COPY nginx.conf /opt/bitnami/nginx/conf/nginx.conf
COPY --chown=1001 entrypoint.sh /
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]
I use that entrypoint.sh to replace some file content with environment variables like:
#!/bin/bash
function join_by { local IFS="$1"; shift; echo "$*"; }
vars=$(env | grep VUE_APP_ | awk -F = '{print "$"$1}')
vars=$(join_by ' ' $vars)
for file in /app/js/app.*;
do
### T H I S L I N E T H R O W S E R R O R ###
cp $file $file.tmpl
envsubst "$vars" < $file.tmpl > $file
rm $file.tmpl
done
exec "$@"
On cp command it throws an error:
cp: cannot create regular file '/app/js/app.042ea3b0.js.tmpl': Permission denied
As you can see, I copied both the dist files and entrypoint.sh with --chown=1001 (the default user in the Bitnami image), but to no avail.
Is it because the /app folder in the image is exposed as a volume by default? How can I copy and then modify the files I have placed in the image?
P.S: It runs in an OpenShift environment.
The Bitnami image performs some setup in its postunpack.sh script, which is called from the Dockerfile. One of the actions performed by that script configures file permissions, because the user running nginx is a non-root user. You can try implementing something similar for your needs.
It turned out to be a result of Openshift's behavior stated here:
How can I enable an image to run as a set user ID?:
When an application is deployed it will run as a user ID unique to the project it is running in. This overrides the user ID which the application image defines it wants to be run as.
...
The best solution is to build the application image so it can be run as an arbitrary user ID.
So, instead of copying the files and changing their owner (chown), their access permissions (chmod) must be set so that an arbitrary user ID can modify them.
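For example, since OpenShift's arbitrary user IDs always belong to the root group (GID 0), granting that group the same permissions as the owner makes the files modifiable. A minimal sketch of the production stage, reusing the names from the Dockerfile above (the 1001:0 ownership and g=u mode are assumptions based on OpenShift's image guidelines, not the OP's verified fix):
FROM bitnami/nginx:1.16 as production-stage
# Copy with group root (0), since the arbitrary UID belongs to that group
COPY --from=build-stage --chown=1001:0 /app/dist /app
COPY nginx.conf /opt/bitnami/nginx/conf/nginx.conf
COPY --chown=1001:0 entrypoint.sh /
# Make the entrypoint executable and give the group the same rights as the owner
RUN chmod +x /entrypoint.sh && chmod -R g=u /app
CMD ["/entrypoint.sh"]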
I have a Dockerfile like this:
FROM centos:7 as builder
LABEL maintainer="seanchann <seanchann@test.com>"
COPY ./build.sh /build.sh
RUN source /build.sh; \
build_lib ""
My build_lib function then calls make to build a C library, but I don't get any output from build_lib. How do I enable output from make inside build_lib?
build.sh:
function build_lib(){
cd /mysource/
make
}
Make sure to COPY build.sh first into your image:
COPY build.sh /
Then you can try and run it.
Your docker build should be done in a dedicated folder containing only your Dockerfile and the build.sh script, in order to keep the Docker build context small.
You also need the source to be present for the build to run:
COPY mysource /mysource/
Otherwise you should see build.sh fail in your docker build output. When make runs correctly, you will see its output on stdout. If you want to capture it to a file as well, use tee, e.g.:
docker build -t mycontainer . | tee output.file
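One common reason for seeing no output from a RUN step is the build cache: a cached step is not re-executed, so it produces nothing. BuildKit additionally collapses step output by default. A sketch covering both cases (mycontainer is the tag from the example above):
# Re-run every step even when cached, so the make output is produced again
docker build --no-cache -t mycontainer .
# With BuildKit, also force plain (uncollapsed) step output; it goes to stderr,
# hence the 2>&1 before tee
DOCKER_BUILDKIT=1 docker build --no-cache --progress=plain -t mycontainer . 2>&1 | tee output.file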
I have written a simple script to use a 3G UMTS Dongle with my board.
The bash script is as follows:
#!/bin/bash
sleep 1;
/usr/bin/tmux new-session -d -s Cloud
/usr/bin/tmux set-option set-remain-on-exit on
/usr/bin/tmux new-window -d -n 'usb_modeswitch' -t Cloud:2 '/usr/sbin/usb_modeswitch --default-vendor 12d1 --default-product 1446 -J';
/usr/bin/tmux new-window -d -n 'wvdial' -t Cloud:1 'sleep 10; /usr/bin/wvdialconf; /usr/bin/wvdial';
and its corresponding systemd script is as follows:
[Unit]
Description=Enable UMTS Dongle for Cloud Connectivity
[Service]
Type=oneshot
ExecStart=/usr/umts.sh
RemainAfterExit=true
[Install]
WantedBy=default.target
I have other such systemd files for certain application scripts that I currently write directly on the board, but I want them to be available in every image I build for a new board.
How should I go about this in terms of a recipe?
I thought of creating my own Yocto layer:
meta-custom
------ recipes-custom/
------------- files/  (all such scripts here)
------------- custom_1.0.bb
Should custom_1.0.bb only perform do_install() for the bash scripts, since the scripts do not need to be compiled?
Creating your own layer is a good idea, and the structure you listed is fine too.
In your recipe you can create empty do_compile and do_configure tasks.
Here is a pseudo recipe. And don't forget to add it to IMAGE_INSTALL in the end so that your image build picks it up as a dependency.
SRC_URI = "file://file.service \
file://file.sh \
"
inherit systemd
do_configure(){
:
}
do_compile() {
:
}
do_install() {
install -Dm 0644 ${WORKDIR}/<file.service> ${D}/${systemd_unitdir}/system/<file.service>
install -Dm 0755 ${WORKDIR}/<file.sh> ${D}/${bindir}/<file.sh>
...
}
SYSTEMD_SERVICE_${PN} = "<file.service>"
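Filling in the placeholders with the names from the question above (umts.sh and a umts.service unit; the license lines are assumptions for a sketch, not taken from the question), custom_1.0.bb could look roughly like this:
SUMMARY = "Custom scripts and systemd units for the board"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

SRC_URI = "file://umts.service \
           file://umts.sh \
          "

S = "${WORKDIR}"

inherit systemd

SYSTEMD_SERVICE_${PN} = "umts.service"

# Plain scripts: nothing to configure or compile
do_configure() {
    :
}

do_compile() {
    :
}

do_install() {
    install -Dm 0644 ${WORKDIR}/umts.service ${D}${systemd_unitdir}/system/umts.service
    install -Dm 0755 ${WORKDIR}/umts.sh ${D}${bindir}/umts.sh
}
Note that this installs the script to ${bindir}, so the unit's ExecStart should point to /usr/bin/umts.sh rather than /usr/umts.sh, and the image still needs IMAGE_INSTALL_append = " custom" to pull the package in.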
I want my bash file to run whenever I run the Docker image. First I created the Dockerfile inside a new directory, say demo; in demo I created a new directory home, and in that directory I have my bash file, testfile.sh.
Here is my Dockerfile:
FROM ubuntu
MAINTAINER Aman Kh
COPY . /home
CMD /home/testfile.sh
Building it with the command sudo docker build -t amankh99/hello . produced the following output:
Sending build context to Docker daemon 3.584kB
Step 1/4 : FROM ubuntu
 ---> 0458a4468cbc
Step 2/4 : MAINTAINER Aman Kh
 ---> Using cache
 ---> 98fbe31ed233
Step 3/4 : COPY . /home
 ---> Using cache
 ---> 7e52ff3439e2
Step 4/4 : CMD /home/testfile.sh
 ---> Using cache
 ---> 1d2660df6387
Successfully built 1d2660df6387
Successfully tagged amankh99/hello:latest
But when I run it with command
sudo docker run --name test -it amankh99/hello
it says
/bin/sh: 1: /home/testfile.sh: not found
After the build succeeded, why is it unable to find the file?
I want to turn this container into an image and push it to Docker Hub, so that I can run it with a simple run command, as with hello-world (sudo docker run hello-world), and have my bash file executed. What changes do I need in the Dockerfile to achieve this?
OP's Description
I created the Dockerfile inside a new directory say demo and in that directory demo I created a new directory home and in that directory I’ve my bash file - testfile.sh.
So according to your description
demo/
+--- Dockerfile
+--- home/
+--- testfile.sh
You need to COPY the home directory into /home:
FROM ubuntu
COPY home /home
CMD /home/testfile.sh
If you do COPY . /home, your home/testfile.sh will be copied to /home/home/testfile.sh.
If you want to copy only your testfile.sh, then do this:
Either it can find the file but does not know what to do with it, because the interpreter of your script (the #!... on its first line) is not present in your Docker image, or it cannot find the file because it was not copied.
You can verify this by passing /bin/bash as the final argument to your docker run command, then running ls -l /home/testfile.sh and/or /home/testfile.sh at the prompt.
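Putting both points together, a minimal sketch of the Dockerfile (assuming the demo/ layout described above and a valid shebang in testfile.sh):
FROM ubuntu
COPY home/testfile.sh /home/
# Make sure the script is executable inside the image
RUN chmod +x /home/testfile.sh
# Exec form runs the script directly instead of through "sh -c"
CMD ["/home/testfile.sh"]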
I use Vagrant to spawn a standard "precise32" box and provision it with Chef so I can test my Node.js code on Linux when I work on a Windows machine. This works fine.
I also have this bash command so it auto installs my npm modules:
bash "install npm modules" do
code <<-EOH
su -l vagrant -c "cd /vagrant && npm install"
EOH
end
This also works fine except that I never see the console output if it completes successfully. But I'd like to see it so we can visually monitor what is going on. This is not specific to npm.
I see this similar question with no concrete answers: Vagrant - how to print Chef's command output to stdout?
I tried specifying flags, but I'm a terrible Linux/Ruby n00b and produced either errors or no output at all, so please edit my snippet with an example of your solution.
I try to use logging when possible, but I've found that in some scenarios seeing the output is important. Here's the short version of the way I do it. Substituting the execute resource for the bash resource also works fine. Both standard error and standard output go into the file.
results = "/tmp/output.txt"
file results do
action :delete
end
cmd = "ls /"
bash cmd do
code <<-EOH
#{cmd} &> #{results}
EOH
end
ruby_block "Results" do
only_if { ::File.exists?(results) }
block do
print "\n"
File.open(results).each do |line|
print line
end
end
end
Use the live_stream attribute of the execute resource
execute 'foo' do
command 'cat /etc/hosts'
live_stream true
action :run
end
Script output will be printed to the console
Starting Chef Client, version 12.18.31
resolving cookbooks for run list: ["apt::default", "foobar::default"]
Synchronizing Cookbooks:
Converging 2 resources
Recipe: foobar::default
* execute[foo] action run
[execute] 127.0.0.1 default-ubuntu-1604 default-ubuntu-1604
127.0.0.1 localhost
127.0.1.1 vagrant.vm vagrant
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
- execute cat /etc/hosts
https://docs.chef.io/resource_execute.html
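Since bash is a script resource built on top of execute, the live_stream property should work on the snippet from the question as well; an untested sketch:
bash "install npm modules" do
  code <<-EOH
    su -l vagrant -c "cd /vagrant && npm install"
  EOH
  # Stream the command's stdout/stderr to the console as it runs
  live_stream true
end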
When you run Chef (suppose we are using chef-solo), you can use -l debug to output more debug information to stdout.
For example: chef-solo -c solo.rb -j node.json -l debug
For example, take a simple cookbook like the one below:
$ tree
.
├── cookbooks
│ └── main
│ └── recipes
│ └── default.rb
├── node.json
└── solo.rb
3 directories, 3 files
default.rb
bash "echo something" do
code <<-EOF
echo 'I am a chef!'
EOF
end
You'll see output like the following:
Compiling Cookbooks...
[2013-07-24T15:49:26+10:00] DEBUG: Cookbooks to compile: [:main]
[2013-07-24T15:49:26+10:00] DEBUG: Loading Recipe main via include_recipe
[2013-07-24T15:49:26+10:00] DEBUG: Found recipe default in cookbook main
[2013-07-24T15:49:26+10:00] DEBUG: Loading from cookbook_path: /data/DevOps/chef/cookbooks
Converging 1 resources
[2013-07-24T15:49:26+10:00] DEBUG: Converging node optiplex790
Recipe: main::default
* bash[echo something] action run[2013-07-24T15:49:26+10:00] INFO: Processing bash[echo something] action run (main::default line 4)
[2013-07-24T15:49:26+10:00] DEBUG: Platform ubuntu version 13.04 found
I am a chef!
[2013-07-24T15:49:26+10:00] INFO: bash[echo something] ran successfully
- execute "bash" "/tmp/chef-script20130724-17175-tgkhkz"
[2013-07-24T15:49:26+10:00] INFO: Chef Run complete in 0.041678909 seconds
[2013-07-24T15:49:26+10:00] INFO: Running report handlers
[2013-07-24T15:49:26+10:00] INFO: Report handlers complete
Chef Client finished, 1 resources updated
[2013-07-24T15:49:26+10:00] DEBUG: Forked child successfully reaped (pid: 17175)
[2013-07-24T15:49:26+10:00] DEBUG: Exiting
I think it contains the information you want, for example the output and the exit status of the shell script/command.
BTW: it looks like there is a limitation (a password prompt?); you won't be able to use su:
[2013-07-24T15:46:10+10:00] INFO: Running queued delayed notifications before re-raising exception
[2013-07-24T15:46:10+10:00] DEBUG: Re-raising exception: Mixlib::ShellOut::ShellCommandFailed - bash[echo something] (main::default line 4) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of "bash" "/tmp/chef-script20130724-16938-1jhil9v" ----
STDOUT:
STDERR: su: must be run from a terminal
---- End output of "bash" "/tmp/chef-script20130724-16938-1jhil9v" ----
Ran "bash" "/tmp/chef-script20130724-16938-1jhil9v" returned 1
I used the following:
bash "install npm modules" do
code <<-EOH
su -l vagrant -c "cd /vagrant && npm install"
EOH
flags "-x"
end
The flags property makes the command execute like bash -x script.sh, printing each command as it runs.
Kind of related... setting the log_location (-L) to a file prevents the chef logs (Chef::Log.info() or simply log) from going to standard out.
You can override this to print the full log information to stdout
chef-client -L /dev/stdout