I currently have a rebar3-based Erlang application; it consists of an Erlang backend and a JavaScript frontend. To combine the frontend and backend build systems I use a Makefile. My rebar.config looks like this:
rebar.config:
...
{relx, [{release, {pgserver_dev, "0.1.0"}, [pgserver]},
        {dev_mode, true},
        {include_erts, false},
        {extended_start_script, true}
]}.
Makefile:
...
release:
	#echo "creating release"
	rebar3 release
	ln -sf _build/$(PROFILE)/rel/$(APP)_dev/bin/$(APP)_dev /.run-$(APP)-$(PROFILE)
I'd like to use environment variables in rebar.config to control parameters such as the version ({pgserver_dev, "0.1.0"}) when creating a release. If I specify a variable VERSION, the build could look like this:
rebar.config:
...
{relx, [{release, {pgserver_dev, os:getenv("VERSION")}, [pgserver]},
        {dev_mode, true},
        {include_erts, false},
        {extended_start_script, true}
]}.
So, is it possible to use Linux environment variables in relx/rebar3?
P.S.: It does not work with os:getenv(); the build fails with:
===> Error reading file rebar.config: 15: bad term
You can make the configuration dynamic by using a rebar.config.script. It is an Erlang script in which you can update or add terms of the rebar.config. You can search for rebar.config.script on GitHub to find examples.
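For example, a minimal rebar.config.script along those lines might look like this (a sketch based on the release name and app from the question; the VERSION handling and fallback are assumptions):
%% rebar.config.script -- evaluated by rebar3 after rebar.config is read.
%% CONFIG is bound to the term list from rebar.config; whatever this script
%% evaluates to becomes the effective configuration.
case os:getenv("VERSION") of
    false ->
        CONFIG;  % VERSION not set: keep rebar.config unchanged
    Version ->
        Relx0 = proplists:get_value(relx, CONFIG, []),
        Relx1 = lists:keystore(release, 1, Relx0,
                               {release, {pgserver_dev, Version}, [pgserver]}),
        lists:keystore(relx, 1, CONFIG, {relx, Relx1})
end.
With that in place, something like VERSION=0.2.0 rebar3 release should pick the version up from the environment.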
I really like Streamlit as an environment for research. Mixing a notebook/dashboard-like output that I can design quickly with pure code for its definition (no cells etc.), as well as the ability to influence my code through widgets while it runs, is a game changer.
For this purpose, I was looking for a way to run or even debug a Streamlit application, since the tutorials only show it being started via the command line:
streamlit run code.py
Is there a way to do either running or debugging from an IDE?
I found a way to at least run the code from the IDE (PyCharm in my case). The streamlit run code.py command can be called directly from your IDE. (The streamlit run code.py command actually calls python -m streamlit.cli run code.py, which was the former way to run it from the IDE.)
The -m streamlit run goes into the Interpreter options field of the Run/Debug Configuration (this is supported by Streamlit, so it is guaranteed not to be broken in the future [1]); code.py goes into the Script path field as expected. In past versions it also worked to put -m streamlit.cli run into the Interpreter options field of the Run/Debug Configuration, but this option might break in the future.
Unfortunately, debugging that way does not seem to work since the parameters appended by PyCharm are passed to streamlit instead of the pydev debugger.
Edit: I just found a way to debug your own script. Instead of debugging your script, you debug the streamlit.cli module, which runs your script. To do so, change Script path: to Module name: in the top-most field (there is a slightly hidden dropdown box there...) and enter streamlit.cli. Then add run code.py to the Parameters: field of the Run/Debug Configuration.
EDIT: adding @sismo's comment:
If your script needs to be run with some arguments, you can easily add them as
run main.py -- --option1 val1 --option2 val2
Note the first bare --: it is needed to stop Streamlit's argument parsing and hand the rest over to main.py's argument parsing.
1 https://discuss.streamlit.io/t/run-streamlit-from-pycharm/21624/3
If you're a VS Code user, you can debug your Streamlit app by adding the following configuration to your launch.json file:
{
    "name": "Python:Streamlit",
    "type": "python",
    "request": "launch",
    "module": "streamlit",
    "args": [
        "run",
        "${file}",
        "--server.port",
        "SPECIFY_YOUR_OWN_PORT_NUMBER_HERE"
    ]
}
Specifying the port number allows you to launch the app on a fixed port number each time you run your debug script.
Once you've updated your launch.json file, navigate to the Run tab in the left gutter of the VS Code window and tell it which Python configuration it should use to debug the app:
Selecting Debug config for python interpreter
Thanks to git-steb for pointing me to the solution!
I've come up with an alternative solution which allows you to use PyCharm debugging in a natural way. Simply set up a run script (which I call run.py) that looks like this:
from streamlit import bootstrap

real_script = 'main_script.py'  # the Streamlit app you actually want to run
# Launch it through Streamlit's bootstrap, as `streamlit run` would.
bootstrap.run(real_script, f'run.py {real_script}', [], {})
and set that up as a normal Python run configuration in PyCharm.
Cannot comment so I have to put this as an answer.
An addition to @Ben's answer (the module debugging part):
If your script needs to be run with some arguments, you can easily add them as
run main.py -- --option1 val1 --option2 val2
Note the first bare --: it is needed to stop Streamlit's argument parsing and hand the rest over to main.py's argument parsing.
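For illustration, here is a hypothetical main.py showing how the options after the bare -- reach the script (this assumes argparse; it is not taken from the answers above):
# main.py -- start with: streamlit run main.py -- --option1 val1 --option2 val2
import argparse

import streamlit as st

parser = argparse.ArgumentParser()
parser.add_argument("--option1")
parser.add_argument("--option2")
args = parser.parse_args()  # only sees the arguments after the bare --

st.write("option1 =", args.option1, "option2 =", args.option2)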
With some modification to @aiwa's answer, this worked for me in VS Code version 1.58:
{
    "configurations": [
        {
            "name": "Python:Streamlit",
            "type": "python",
            "request": "launch",
            "module": "streamlit.cli",
            "args": [
                "run",
                "${file}"
            ]
        }
    ]
}
Aug 12, 2022:
Please update your pip and streamlit versions; sometimes it is necessary to update both:
pip install pip --upgrade
pip install --upgrade streamlit
Open the PyCharm editor and go to Edit Configurations, as shown in the picture below. Click on the dropdown box (do not clear the streamlit entry shown in it).
Run/Debug Configurations:
You have to change three fields; pay particular attention to the script path.
1) Obtain the script path by typing which streamlit in a terminal and paste that path into the Script path field.
2) Set the Working directory to the directory of the Python file that contains your Streamlit code.
3) In the Parameters field, enter run followed by your Python file name, e.g. run app.py.
Alongside the other solutions, another quick and easy option is the pdb library.
For instance;
import streamlit as st  # `st` is the Streamlit module; `df` comes from your app code

st.dataframe(df)
import pdb; pdb.set_trace()  # execution pauses here
st.bar_chart(df)
When you run the code, your IDE (or even the command line) will stop at the set_trace() point, and the command line will show you a prompt like this:
(Pdb)
At that prompt you can inspect your variables and work with them interactively.
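For instance, you might type commands like these (illustrative only; df is whatever dataframe your app defines): p prints an expression, n executes the next line (here st.bar_chart), and c resumes the app.
(Pdb) p df.shape
(Pdb) p df.columns
(Pdb) n
(Pdb) c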
For other options of the pdb library, please see: https://docs.python.org/3/library/pdb.html
I run the following command locally on my project:
gometalinter --config=gometalinter.json ./...
At the beginning I got some errors and I fixed them all!
Now I run exactly the same command in my Travis script
and I get vendor errors like:
vendor/github.com/spf13/viper/flags.go:3:8:warning: error return value not checked (could not import github.com/spf13/pflag (go/build: importGo github.com/spf13/pflag: exit status 1) (errcheck)
vendor/github.com/spf13/viper/viper.go:42:7:warning: error return value not checked (could not import github.com/pelletier/go-toml (go/build: importGo github.com/pelletier/go-toml: exit status 1) (errcheck)
This is the gometalinter.json config:
{
"vendor": true,
"Deadline": "2m",
"Sort": [
"linter",
"severity"
],
"DisableAll": true,
"Enable": [
"gotypex",
"vetshadow",
"errcheck",
"gocyclo",
"vet",
"golint",
"vetshadow",
"ineffassign",
],
"Cyclo": 10,
"LineLength": 120
}
I don't understand why I don't get these errors locally (I have the vendor directory) and why it complains about vendored code. What could be the reason?
gometalinter runs binaries found in your PATH to do its checks. I have had problems where my CI would have one set of binary versions while my local development environment had different ones.
Try updating all the required binaries on your local machine.
Try the --vendor flag, and check the versions of gometalinter and all the linters it uses.
Extract from gometalinter documentation:
How do I make gometalinter work with Go 1.5 vendoring?
gometalinter has a --vendor flag that just sets
GO15VENDOREXPERIMENT=1, however the underlying tools must support it.
Ensure that all of the linters are up to date and built with Go 1.5
(gometalinter --install --force) then run gometalinter --vendor ..
That should be it.
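Put together, a CI step along those lines might look roughly like this .travis.yml excerpt (a sketch assuming Travis's standard Go environment, not a verified configuration): install gometalinter and its linter binaries from source in CI so they match the versions you use locally, then run it with --vendor.
before_script:
  - go get -u github.com/alecthomas/gometalinter
  - gometalinter --install --force
script:
  - gometalinter --config=gometalinter.json --vendor ./...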
I'm trying to use a PowerShell script as part of a precompile process. In the project.json file there is a section where you can specify scripts to run at different stages of compilation, one of which is precompile. For example:
"scripts": {
"precompile": [
"assemblyInfo.ps1"
]
}
When I do this and compile I get the following error:
The specified executable is not a valid application for this OS platform.
Is it not possible to do this?
Per ExoComp:
It doesn't support PowerShell scripts; you could, however, use a batch script that then calls something else.
I ended up just using a Node.js CLI script.
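For reference, a minimal sketch of the batch-wrapper approach suggested above: reference a .cmd file from the precompile section (precompile.cmd is a hypothetical name; depending on the tooling you may need to invoke it explicitly via cmd /C) and have it forward to the PowerShell script.
"scripts": {
    "precompile": [
        "precompile.cmd"
    ]
}
where precompile.cmd contains:
@echo off
rem Forward to the PowerShell script sitting next to this wrapper.
powershell -NoProfile -ExecutionPolicy Bypass -File "%~dp0assemblyInfo.ps1"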
I'm running configure.ac on RHEL 7.2. I'm wondering if there's a way to set the Release number (which is defined in the spec file) as a variable, like the Version number, which is generated by configure.ac and written to the config.h file. I'd like to set a kind of BUILD_NUMBER variable somewhere, and have it take the value of an exported variable during execution.
The release number for an RPM package is set by the Release: tag in the spec-file. Some spec-files are generated, e.g., using autoconf to substitute values such as the release number in a template (e.g., mypackage.spec.in) to obtain mypackage.spec.
A quick check of wireshark's source shows that it uses this scheme, but its template hardcodes the release number as 1. You could modify the configure script and template to add your own option.
For example, adapting the style of --with-XXX options used in the wireshark 2.0.1 configure.ac, you would add a chunk like this (untested):
AC_ARG_WITH([release],
    AC_HELP_STRING([--with-release=#<:#1/no/4/5#:>#],
                   [set release-number in package #<:#default=1#:>#]),
    with_release="$withval", with_release="unspecified")
case "x$with_release" in
    x[[1-9]]*)
        RELEASE="$with_release"
        ;;
    *)
        AC_MSG_ERROR(release is not a number: $with_release)
        ;;
esac
AC_SUBST(RELEASE)
and use the RELEASE variable in packaging/rpm/SPECS/wireshark.spec.in, as you see the VERSION value used:
Release: @RELEASE@
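If you prefer to drive this from an exported variable like the BUILD_NUMBER mentioned in the question, a rough, untested sketch using standard autoconf macros would be:
dnl Pick up an exported BUILD_NUMBER at configure time, defaulting to 1,
dnl and make it available for @BUILD_NUMBER@ substitution in *.in files.
AC_ARG_VAR([BUILD_NUMBER], [release number substituted into the spec file])
AS_IF([test -z "$BUILD_NUMBER"], [BUILD_NUMBER=1])
The spec template would then contain Release: @BUILD_NUMBER@, and your build script would invoke something like BUILD_NUMBER=42 ./configure ....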
Alternatively, if you are using the wireshark source without modifying it directly, your build script could
unpack the sources,
update the spec-file,
repack the tarball,
deploy the updated tarball to your build area
Either way, you would have to do some work.
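As a rough illustration of the "update the spec-file" step in that list (assuming an exported RELEASE variable, GNU sed, and that the generated spec file sits next to its .in template):
sed -i "s/^Release:.*/Release: ${RELEASE}/" packaging/rpm/SPECS/wireshark.spec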
Introduction
I have a do_install task in a BitBake recipe that I've written for a driver, in which I execute a custom install script. The task fails because the installation script cannot find kernel source header files within <the image rootfs>/usr/src/kernel. This script runs fine on the generated OS.
What's Happening
Here's the relevant part of my recipe:
SRC_URI += "file://${TOPDIR}/example"
DEPENDS += " virtual/kernel linux-libc-headers "
do_install () {
( cd ${TOPDIR}/example/Install ; ./install )
}
Here's a relevant portion of the install script:
if [ ! -d "/usr/src/kernel/include" ]; then
echo ERROR: Linux kernel source include directory not found.
exit 1
fi
cd /usr/src/kernel
make scripts
...
./install_drv pci ${DRV_ARGS}
I checked changing the test to if [ ! -d "/usr/src/kernel" ], which also failed. install passes different options to install_drv; a relevant portion is below:
cd ${DRV_PATH}/pci
make NO_SYSFS=${ARG_NO_SYSFS} NO_INSTALL=${ARG_NO_INSTALL} ${ARGS_HWINT}
if [ ${ARG_NO_INSTALL} == 0 ]; then
if [ `/sbin/lsmod | grep -ci "uceipci"` -eq 1 ]; then
./unload_pci
fi
./load_pci DEBUG=${ARG_DEBUG}
fi
The make target build: within ${DRV_PATH}/pci is essentially this:
make -C /usr/src/kernel SUBDIRS=${PWD} modules
My Research
I found these comments within linux-libc-headers.inc relevant:
# You're probably looking here thinking you need to create some new copy
# of linux-libc-headers since you have your own custom kernel. To put
# this simply, you DO NOT.
#
# Why? These headers are used to build the libc. If you customise the
# headers you are customising the libc and the libc becomes machine
# specific. Most people do not add custom libc extensions to the kernel
# and have a machine specific libc.
#
# But you have some kernel headers you need for some driver? That is fine
# but get them from STAGING_KERNEL_DIR where the kernel installs itself.
# This will make the package using them machine specific but this is much
# better than having a machine specific C library. This does mean your
# recipe needs a DEPENDS += "virtual/kernel" but again, that is fine and
# makes total sense.
#
# There can also be a case where your kernel extremely old and you want
# an older libc ABI for that old kernel. The headers installed by this
# recipe should still be a standard mainline kernel, not your own custom
# one.
I'm a bit unclear on whether I can 'get' the headers from STAGING_KERNEL_DIR properly, since I'm not using make.
Within kernel.bbclass, provided in the meta/classes directory, there is this variable assignment:
# Define where the kernel headers are installed on the target as well as where
# they are staged.
KERNEL_SRC_PATH = "/usr/src/kernel"
This path is then packaged later within that .bbclass file here:
PACKAGES = "kernel kernel-base kernel-vmlinux kernel-image kernel-dev kernel-modules"
...
FILES_kernel-dev = "/boot/System.map* /boot/Module.symvers* /boot/config* ${KERNEL_SRC_PATH} /lib/modules/${KERNEL_VERSION}/build"
Update (1/21):
A suggestion on the yocto IRC channel was to use the following line:
do_configure[depends] += "virtual/kernel:do_shared_workdir"
which is corroborated by the Yocto Project Reference Manual, which states that in version 1.8, there was the following change:
The kernel build process was changed to place the source in a common shared work area and to place build artifacts separately in the source code tree. In theory, migration paths have been provided for most common usages in kernel recipes but this might not work in all cases. In particular, users need to ensure that ${S} (source files) and ${B} (build artifacts) are used correctly in functions such as do_configure and do_install. For kernel recipes that do not inherit from kernel-yocto or include linux-yocto.inc, you might wish to refer to the linux.inc file in the meta-oe layer for the kinds of changes you need to make. For reference, here is the commit where the linux.inc file in meta-oe was updated.
Recipes that rely on the kernel source code and do not inherit the module classes might need to add explicit dependencies on the do_shared_workdir kernel task, for example:
do_configure[depends] += "virtual/kernel:do_shared_workdir"
But I'm having difficulties applying this to my recipe. From what I understand, I should be able to change the above line to:
do_install[depends] += "virtual/kernel:do_shared_workdir"
This would mean that the do_install task must now run after the do_shared_workdir task of the virtual/kernel recipe, which means I should be able to work with the shared workdir (see Question 3 below). However, I still have the same missing kernel header issue.
My Questions
I'm using a custom Linux kernel (v3.14) from git.kernel.org which inherits the kernel class. Here are some of my questions:
1) Shouldn't the package kernel-dev be a part of any recipe which inherits the kernel class? (this section of the variables glossary)
2) If I add virtual/kernel to the DEPENDS variable, wouldn't that mean that kernel-dev would be brought in?
3) If kernel-dev is part of the dependencies of my recipe, wouldn't I be able to point to the /usr/src/kernel directory from my recipe? According to this reply on the Yocto mailing list, I think I should.
4) How can I properly reference the kernel source header files, preferably without changing the installation script?
Consider your Environment
Remember that there are different environments within the build-time environment, consisting of:
sysroots
in the case of kernels, a shared work directory
target packages
kernel-dev is a target package, which you'd install into the rootfs of the target system for certain things like kernel symbol maps which are needed by profiling tools like perf/oprofile. It is not present at build time although some of its contents are available in the sysroots or shared workdir.
Point to the Correct Directories
Your do_install runs at build time, so this is within the build directory structures of the build system, not the target one. In particular, /usr/src/ won't be correct; it would need to be some path within your build directory. The virtual/kernel do_shared_workdir task populates ${STAGING_KERNEL_DIR}, so you would want to change to that directory in your script.
Adding a Task Dependency
The:
do_install[depends] += "virtual/kernel:do_shared_workdir"
dependency looks correct for your use case, assuming nothing in do_configure or do_compile accesses the data there.
Reconsider the module BitBake class
The other answers are correct in the recommendation to look at module.bbclass, since this illustrates how common kernel modules can be built. If you want to use custom functions or make commands, this is fine, you can just override them. If you really don't want to use that class, I would suggest taking inspiration from it though.
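For reference, a minimal out-of-tree module recipe built on module.bbclass might look roughly like this (file names, license, and layout are placeholders, not taken from your tree):
SUMMARY = "Example out-of-tree PCI driver module"
LICENSE = "CLOSED"

# module.bbclass pulls in the kernel dependencies (including the shared
# workdir) and passes KERNEL_SRC/KERNEL_PATH/O to the module's Makefile.
inherit module

SRC_URI = "file://Makefile \
           file://driver.c"

S = "${WORKDIR}"
It assumes the accompanying Makefile builds against the kernel build system (make -C $(KERNEL_SRC) M=$(PWD) modules), much like the build: target quoted in the question.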
Task Dependencies
Adding virtual/kernel to DEPENDS means virtual/kernel:do_populate_sysroot must run before our do_configure task. Since you need a dependency for do_shared_workdir here, a DEPENDS on virtual/kernel is not enough.
Answer to Question 3
The kernel-dev package would be built; however, it would then need to be installed into your target image and used at runtime on a real target. You need this at build time, so kernel-dev is not appropriate.
Other Suggestions
You'd likely want the kernel-devsrc package for what you're doing, not the kernel-dev package.
I don't think anyone can properly answer that last question here. You are using a non-standard install method: we can't know how to interact with it...
That said, take a look at what meta/classes/module.bbclass does. It sets several related variables for make: KERNEL_SRC=${STAGING_KERNEL_DIR}, KERNEL_PATH=${STAGING_KERNEL_DIR}, O=${STAGING_KERNEL_BUILDDIR}. Maybe your installer supports some of these environment variables and you could set them in your recipe?
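Putting those pieces together, if your installer does honor such variables, the recipe's do_install might be adjusted roughly like this (a sketch, not a drop-in fix, since the script as shown hardcodes /usr/src/kernel):
do_install[depends] += "virtual/kernel:do_shared_workdir"

do_install () {
    cd ${TOPDIR}/example/Install
    # Point the installer at the staged kernel source/build trees instead of
    # letting it look in /usr/src/kernel on the build host.
    KERNEL_SRC="${STAGING_KERNEL_DIR}" \
    KERNEL_PATH="${STAGING_KERNEL_DIR}" \
    O="${STAGING_KERNEL_BUILDDIR}" \
        ./install
}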