Add an extra file into the RPM building process

I have the source code of an application that supports adding Python plugins. I have written a Python script and want to build a custom RPM that includes my script by default, so that I do not have to add it manually after installing the RPM.
As far as I understand, there are two parts to this:
Adding the file to the source code.
Listing that file in the .spec file.
How do I know where to put the file in the source? How do I specify the path my script should be copied to? The spec file contains text like:
%if %{with_python}
%files python
%{_mandir}/man5/collectd-python*
%{_libdir}/%{name}/python.so
# Something like this?
# %{_libdir}/%{name}/gearman.py
# %{_libdir}/%{name}/redis.py
%endif

You need to know where your script file should be placed on the target installation (e.g. /usr/lib/myApp/plugins/myNiceScript.py).
In the spec file (section %install) you have to copy your script into the target directory under %{buildroot}, creating that directory first if necessary:
%install
...
# in case the dir does not exist:
mkdir -p %{buildroot}/usr/lib/myApp/plugins
cp whereitis/myNiceScript.py %{buildroot}/usr/lib/myApp/plugins
At the end you have to define the file attributes in the %files section, e.g. if your file should have mode 644 and be owned by root:
%files
...
%defattr(644,root,root)
/usr/lib/myApp/plugins/myNiceScript.py
If the plugins directory itself is created during installation, you need to define its attributes too:
%defattr(755,root,root)
%dir /usr/lib/myApp/plugins
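Applied to the collectd-style layout from the question, a minimal sketch could look like the following. The Source1 number and the script name myNiceScript.py are assumptions for illustration; the script is shipped as an extra source file next to the tarball:
# ship the plugin script as an additional source
Source1: myNiceScript.py

%install
...
# copy the plugin next to the other python plugins
mkdir -p %{buildroot}%{_libdir}/%{name}
cp %{SOURCE1} %{buildroot}%{_libdir}/%{name}/

%if %{with_python}
%files python
%{_mandir}/man5/collectd-python*
%{_libdir}/%{name}/python.so
%{_libdir}/%{name}/myNiceScript.py
%endif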

Related

Dockerfile: copy a list of files, when the list is taken from a local file

I've got a file containing a list of paths that I need to copy with Dockerfile's COPY command on docker build.
My use case is this: I've got a Python requirements.txt file that pulls in multiple other requirements files from around the project with -r PATH.
Now, I want to COPY all the requirements files alone, run pip install, and then copy the rest of the project (for caching and such). So far I haven't managed to do so with Docker's COPY command.
I don't need help fetching the paths from the file (I've managed that); I just want to know whether this is possible at all, and if so, how.
Thanks!
It is not possible in the sense that the COPY directive allows it out of the box. However, if you know the naming pattern, you can use a wildcard in the path, such as COPY folder*something*name somewhere/.
For simply fetching the requirements.txt files, that could be:
# you need to distinguish the files somehow, otherwise they
# would overwrite each other and only the last one would be kept;
# e.g. rename package/requirements.txt to package-requirements.txt
# and it won't be an issue
COPY */requirements.txt ./
RUN for item in requirement*; do pip install -r "$item"; done
But if it gets a bit more complex (collecting only specific files, by some custom pattern, etc.), then no. For that case, simply use templating: either a plain f-string, the format() function, or Jinja. Create a Dockerfile.tmpl (or whatever you'd want to name the template file), collect the paths, insert them into the templated Dockerfile, and once ready, dump the result to a file and run docker build on it afterwards.
Example:
# Dockerfile.tmpl
FROM alpine
{{replace}}

# organize files into coherent structures so you don't have too many COPY directives
files = {
    "pattern1": [...],
    "pattern2": [...],
    # ...
}

# read the template
with open("Dockerfile.tmpl", "r") as file:
    text = file.read()

# build one COPY directive per group of files
insert = "\n".join(
    f"COPY {' '.join(values)} destination/{key}/"
    for key, values in files.items()
)

# render the final Dockerfile
with open("Dockerfile", "w") as file:
    file.write(text.replace("{{replace}}", insert))
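For instance, if the hypothetical files dict mapped "pattern1" to ["a/requirements.txt", "b/requirements.txt"], the rendered Dockerfile would come out as:
FROM alpine
COPY a/requirements.txt b/requirements.txt destination/pattern1/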
You might also want to try something like this:
FROM ...
ARG files
COPY $files ./
and run the build with
docker build --build-arg files="$(cat list_of_files_to_copy.txt)" .

How to add all files from a folder, except one, in an iverilog command line instruction?

I understand that if I want to include all the Verilog files I can do so by adding files like this:
iverilog /Users/kp/Desktop/all_new2/*.v -s testbench.v
which takes all files in the all_new2 folder and sets testbench.v as the top module. However, I wish to exclude the file c_functions.v from this folder. How do I do it?
One way is to use the -y <libdir> option, which is common among other simulators as well. This is described in the iverilog Command Flags/Arguments document.
iverilog -y /Users/kp/Desktop/all_new2 testbench.v
This will compile only those modules from the directory that are actually needed; there is no need to explicitly list out all the files.
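If you do want to pass an explicit file list while skipping a single file, a shell-level workaround is also possible. This is just a sketch, assuming a POSIX shell and no spaces in the filenames; note that -s expects the name of the top-level module, not a file name:
iverilog $(ls /Users/kp/Desktop/all_new2/*.v | grep -v c_functions.v) -s testbench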

Failing to open a file which should be in the base path

I have a Go project (bazel-remote) that, when built with Bazel, tries to read a yaml file passed on the command line. This yaml file sits in the same location from which I run the bazel run command.
But it fails to run, because os.Open fails with no such file or directory.
I printed the working directory using os.Getwd, because someone suggested that my basePath might be set wrong. My basePath turns out to be a location under /private/var/tmp/ where the Bazel objects are created and stored:
/private/var/tmp/bazel/312feba8ddcde6737ae7dd7ef9bc2a5a/execroot/main/bazel-out/darwin-fastbuild/bin/darwin_amd64_static_pure_stripped/bazel-remote.runfiles/main
How do I set my basePath correctly? Why is my basePath set to where it is?
Binaries started with bazel run are executed in an internal Bazel directory. They'll have access to "runfiles", which are files mentioned in the data attribute of the binary rule or its dependencies. For example, if you have a rule like the one below, you'll be able to read foo.txt, but not bar.txt or other files:
load("#io_bazel_rules_go//go:def.bzl", "go_binary")
go_binary(
name = "hello",
srcs = ["hello.go"],
data = ["foo.txt"],
)
Note that the working directory of the binary corresponds to the repository root directory, not the directory where the binary is defined. You can debug with os.Getwd and filepath.Walk.
You mentioned you wanted to access a yaml file passed in on the command line though. Presumably, you want to be able to access any file the user passes in, not just files mentioned in the data attribute. For this case, take a look at the BUILD_WORKING_DIRECTORY environment variable (bazel run sets this). That gives the path to the directory where bazel run was invoked. Also, BUILD_WORKSPACE_DIRECTORY is the path to the workspace root directory.
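A minimal sketch of how the binary might use this, assuming the yaml path is passed as the first command-line argument (the helper resolvePath is a made-up name for illustration):
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolvePath interprets a relative path against the directory where
// `bazel run` was invoked; absolute paths are returned unchanged, and
// outside of `bazel run` the path is used as-is.
func resolvePath(path string) string {
	if filepath.IsAbs(path) {
		return path
	}
	if wd := os.Getenv("BUILD_WORKING_DIRECTORY"); wd != "" {
		return filepath.Join(wd, path)
	}
	return path
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: bazel-remote <config.yaml>")
		os.Exit(1)
	}
	f, err := os.Open(resolvePath(os.Args[1]))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	// ... parse the yaml config from f ...
}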

Using CMake, how can I concat files and install them

I'm new to CMake and I have a problem that I cannot figure out a solution to. I'm using CMake to compile a project with a bunch of optional sub-dirs, and it builds shared library files as expected; that part seems to be working fine. Each of these sub-dirs contains a SQL file, and I need to concatenate all the selected SQL files into one SQL header file and install the result. So, one file like:
sql_header.sql
sub_dir_A.sql
sub_dir_C.sql
sub_dir_D.sql
If I did this directly in a makefile, I might do something like the following (only smarter, to deal with only the selected sub-dirs):
cat sql_header.sql > "${INSTALL_PATH}/somefile.sql"
cat sub_dir_A.sql >> "${INSTALL_PATH}/somefile.sql"
cat sub_dir_C.sql >> "${INSTALL_PATH}/somefile.sql"
cat sub_dir_D.sql >> "${INSTALL_PATH}/somefile.sql"
I have sort of figured out pieces of this, like I can use:
LIST(APPEND PACKAGE_SQL_FILES "some_file.sql")
which I assume I can place in each of the sub-dirs' CMakeLists.txt files to collect the file names. And I can create a macro like:
CAT(IN "${PACKAGE_SQL_FILES}" OUT "${INSTALL_PATH}/somefile.sql")
But I am lost about what happens when CMake initially runs versus when make install runs. Maybe there is a better way to do this. I need this to work on both Windows and Linux.
I would be happy with some hints to point me in the right direction.
You can create the concatenated file mainly using CMake's file and function commands.
First, create a cat function:
function(cat IN_FILE OUT_FILE)
  file(READ ${IN_FILE} CONTENTS)
  file(APPEND ${OUT_FILE} "${CONTENTS}")
endfunction()
Assuming you have the list of input files in the variable PACKAGE_SQL_FILES, you can use the function like this:
# Prepare a temporary file to "cat" to:
file(WRITE somefile.sql.in "")

# Call the "cat" function for each input file
foreach(PACKAGE_SQL_FILE ${PACKAGE_SQL_FILES})
  cat(${PACKAGE_SQL_FILE} somefile.sql.in)
endforeach()

# Copy the temporary file to the final location
configure_file(somefile.sql.in somefile.sql COPYONLY)
The reason for writing to a temporary is so the real target file only gets updated if its content has changed. See this answer for why this is a good thing.
You should note that if you're including the subdirectories via the add_subdirectory command, the subdirs all have their own scope as far as CMake variables are concerned. In the subdirs, using list will only affect variables in the scope of that subdir.
If you want to create a list available in the parent scope, you'll need to use set(... PARENT_SCOPE), e.g.
set(PACKAGE_SQL_FILES
    ${PACKAGE_SQL_FILES}
    ${CMAKE_CURRENT_SOURCE_DIR}/some_file.sql
    PARENT_SCOPE)
All this so far has simply created the concatenated file in the root of your build tree. To install it, you probably want to use the install(FILES ...) command:
install(FILES ${CMAKE_BINARY_DIR}/somefile.sql
        DESTINATION ${INSTALL_PATH})
So, whenever CMake runs (either because you manually invoke it or because it detects changes when you do "make"), it will update the concatenated file in the build tree. Only once you run "make install" will the file finally be copied from the build root to the install location.
As of CMake 3.18, the CMake command line tool can concatenate files using cat. So, assuming a variable PACKAGE_SQL_FILES containing the list of files, you can run the cat command using execute_process:
# Concatenate the sql files into a variable 'FINAL_FILE'.
execute_process(COMMAND ${CMAKE_COMMAND} -E cat ${PACKAGE_SQL_FILES}
                OUTPUT_VARIABLE FINAL_FILE
                WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR}
)
# Write out the concatenated contents to 'final.sql.in'.
file(WRITE final.sql.in "${FINAL_FILE}")
The rest of the solution is similar to Fraser's response. You can use configure_file so the resultant file is only updated when necessary.
configure_file(final.sql.in final.sql COPYONLY)
You can still use install in the same way to install the file:
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/final.sql
        DESTINATION ${INSTALL_PATH})

Intltool with an autoconf-generated .desktop file

In the Emperor project, I'm having some issues getting intltool to work when doing an out-of-tree build. When running make check out-of-tree, which is one of the things make distcheck does, intltool fails thus:
INTLTOOL_EXTRACT="/usr/bin/intltool-extract" XGETTEXT="/usr/bin/xgettext" srcdir=../../po /usr/bin/intltool-update --gettext-package emperor --pot
can't open ../../po/../data/emperor.desktop.in: No such file or directory at /usr/bin/intltool-extract line 212.
intltool is looking for emperor.desktop.in, which is listed in po/POTFILES.in, in the source tree. However, emperor.desktop.in is generated by the configure script from a file called emperor.desktop.in.in (in order to insert the installed executable path as configured by the user), and it therefore lands in the build tree.
These are the relevant bootstrap.sh lines:
echo +++ Running intltoolize ... &&
intltoolize --force --copy &&
cat >>po/Makefile.in.in <<EOF
../data/_column_names.h:
cd ../data && \$(MAKE) _column_names.h
EOF
The setup code in configure.ac:
IT_PROG_INTLTOOL([0.35.0])
GETTEXT_PACKAGE=emperor
AC_SUBST(GETTEXT_PACKAGE)
AC_DEFINE_UNQUOTED([GETTEXT_PACKAGE], ["$GETTEXT_PACKAGE"],
[The domain to use with gettext])
AM_GLIB_GNU_GETTEXT
data/emperor.desktop.in is listed in AC_CONFIG_FILES.
data/Makefile.am contains these lines:
desktopdir = $(datadir)/applications
desktop_in_files = emperor.desktop.in
desktop_DATA = $(desktop_in_files:.desktop.in=.desktop)
#INTLTOOL_DESKTOP_RULE#
and po/POTFILES.in contains the line
data/emperor.desktop.in
You can review all the details in the public git repository if you wish.
Can I somehow tell intltool that this file will be located in the build tree, not in the source tree? Otherwise, my options appear to be to break make distcheck (not a great option) or to ship a desktop file that doesn't include the full path and assumes the executable is installed in the PATH (just as messy, IMHO). Any other options?
In your source code you have emperor.desktop.in.in, which does not seem to appear in any rule as a dependency. That file has to be converted first to emperor.desktop.in and later to emperor.desktop, which does not seem to happen in your data/Makefile.am. Try something like:
desktopdir = $(datadir)/applications
desktop_in_in_files = emperor.desktop.in.in
desktop_in_files = $(desktop_in_in_files:.desktop.in.in=.desktop.in)
desktop_DATA = $(desktop_in_files:.desktop.in=.desktop)
#INTLTOOL_DESKTOP_RULE#
[...]
EXTRA_DIST = \
$(desktop_in_in_files) \
[...]
EXTRA_DIST contains $(desktop_in_in_files), and make will know how to deal with that.
Some further digging has led me to believe that the answer is: intltool does not support source files that aren't source files in the project. Ergo, any additional processing must be done after intltool is through.
Intltool requires the lines in POTFILES to be relative to the (build-time) working directory. The file POTFILES is generated by the configure script from POTFILES.in with a simple sed script defined in the IT_PO_SUBDIR autoconf macro (called by IT_PROG_INTLTOOL) that simply prepends the relative location of the top-level source directory to the paths. Alas, modifying POTFILES does not help: the intltool-extract script does everything it can to get the source directory right. I don't believe files that are sometimes inside and sometimes outside the source tree can be supported without modifying intltool itself.
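For illustration, in the out-of-tree build from the log above (srcdir=../../po), the POTFILES.in line
data/emperor.desktop.in
becomes
../../po/../data/emperor.desktop.in
in the generated POTFILES, which resolves inside the source tree and therefore misses the copy generated in the build tree.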
