Automatically generate conf file during make - bash

I have a conf file that is of the format:
name=value
What I want to do is generate a result from a template, based on some values in another file.
So for example, say I have a file called PATHS that contains
CONF_DIR=/etc
BIN_DIR=/usr/sbin
LOG_DIR=/var/log
CACHE_DIR=/home/cache
This PATHS file gets included into a Makefile so that when I call make install, the paths are created and the built applications and conf files are copied appropriately.
Now I also have a conf file which I want to use as a template.
Say the template contains lines like
LogFile=$(LOG_DIR)/myapp.log
...
Then generate a destination conf that would have
LogFile=/var/log/myapp.log
...
etc
I think this can be done with a sed script, but I'm not very familiar with sed and regular expression syntax. I will accept a shell script version too.
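For reference, one shell/sed sketch of that substitution (the file names paths.sed and conf.template are placeholders, and it assumes the values in PATHS contain no | or & characters):
# turn each NAME=value line of PATHS into a sed command of the form s|$(NAME)|value|g
sed 's/^\([A-Za-z_][A-Za-z_0-9]*\)=\(.*\)$/s|$(\1)|\2|g/' PATHS > paths.sed
# apply those substitutions to the template to produce the final conf
sed -f paths.sed conf.template > conf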

You should definitely go with autoconf here, whose very job is to do this. You'll have to write a conf.in file, wherein all substitutions are marked with @'s, e.g.
prefix=@prefix@
bindir=@bindir@
and write a configure.ac, from which autoconf generates a configure shell script that performs these substitutions for you and creates conf. conf is subsequently included in the Makefile. I'd even recommend going further and using a Makefile.in file, i.e. letting configure also generate the Makefile that includes your snippet.
If you keep to the standard path names, your configure.ac is a four-liner and has the added advantage of being GNU compatible (easy to understand & use).
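A minimal sketch along those lines (package name and file names are assumptions): conf.in would contain markers such as
LogDir=@localstatedir@/log
and configure.ac could be as short as
AC_INIT([myapp], [1.0])
AC_CONFIG_FILES([conf Makefile])
AC_OUTPUT
After running autoconf and then ./configure --localstatedir=/var, the generated conf contains LogDir=/var/log.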

You may want to consider using m4 as a simple template language instead.
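For example (a sketch; the macro name and file names are illustrative): with a template myapp.conf.m4 containing
LogFile=LOG_DIR/myapp.log
running
m4 -DLOG_DIR=/var/log myapp.conf.m4 > myapp.conf
produces LogFile=/var/log/myapp.log. The -D definitions could equally be generated from the PATHS file by the Makefile.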

Related

Can you expand an environment variable in a file while copying via an RPM spec file?

This may be a slightly awkward question and I don't know if this is possible.
The question is:
file.txt
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, ${MY_USERNAME}");
    }
}
I want to bundle the above file into one of my RPMs. The .spec file for the RPM will contain only the action below for this particular file.
cp /root/template/file.txt /home/gaur/hello.java
Is it possible, without adding logic to the .spec file (for RPM creation), that when I copy /root/template/file.txt to /home/gaur/hello.java, the ${MY_USERNAME} value in file.txt gets replaced by gaur (by putting something in file.txt itself)?
Note: I know I can use sed in the .spec, but I am just curious to know if we can have some logic inside file.txt.
Note: The template file in /root/template can be in any language including shell, Python, Perl etc.
Looking at the alternatives, a better approach would be to generate the spec-file itself (using sed, or whatever tool), because your rpm is apparently used to generate data for a pathname which might change (a username).
For instance, your rpm might be used to create a user with a sample program. The username would be the parameter substituted in the sample program, as well as the parameter passed to useradd.
Generating the spec-file would also let you generate the %files section of the rpm, making it match the actual pathname.
Relying on an environment variable would be awkward:
inheriting it from the environment of the installing (root) user might not provide the value you needed, since sudo trims most environment variables, and
using an environment variable as such within the script in the spec-file is usually unnecessary since it handles shell variables.
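As a rough sketch of that approach (the @USERNAME@ placeholder and file names are illustrative assumptions, not from the question):
# myapp.spec.in is a template of the spec file, containing @USERNAME@ wherever the
# username should appear (in %install, %files, and the sed call that fills in
# ${MY_USERNAME} in hello.java)
sed 's/@USERNAME@/gaur/g' myapp.spec.in > myapp.spec
rpmbuild -bb myapp.spec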
Pretty sure what you are asking for is impossible, because the RPM would provide compiled Java bytecode, while you are asking to modify the source. For that, like you said, sed or similar would be the way to go.

Wanted to use results of find command in a custom script that I am building

I want to validate my XMLs for well-formedness, but some of my files do not have a single root (which is fine per business requirements; e.g. <ri>...</ri><ri>..</ri> is valid XML in my context). xmlwf can do this check, but it flags a file that does not have a single root. So I wanted to build a custom script which internally uses xmlwf. My custom script should do the following:
iterate through the list of files passed as input (e.g. sample.xml or s*.xml or *.xml)
for each file, prepare a temporary file as <A> + contents of file + </A>
and call xmlwf on that temp file.
Can someone help with this?
You could add text to the beginning and end of the file using cat and bash, so that your file has a root added to it for validation purposes.
cat <(echo '<root>') sample.xml <(echo '</root>') | xmlwf
This way you don't need to write temporary files out.
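If you do want a small wrapper script, a sketch (bash, since it uses process substitution; it assumes xmlwf stays silent for well-formed input, which is its usual behavior) could be:
#!/bin/bash
# wrap each input file in a synthetic root element and run xmlwf on the result
for f in "$@"; do
  out=$(cat <(echo '<root>') "$f" <(echo '</root>') | xmlwf)
  if [ -z "$out" ]; then
    echo "OK: $f"
  else
    echo "NOT well-formed: $f"
    echo "$out"
  fi
done
Invoked as, say, ./checkxml.sh *.xml (the script name is a placeholder).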

Get result of compilation as single file with ASDF

Is it possible to tell ASDF that it should produce only one fas(l) file for the entire system? This file should be a concatenation (in the right order) of all compiled files of the system, including all files of the systems on which the target system depends.
Yes, with compile-bundle-op (ASDF 3.1): http://common-lisp.net/project/asdf/asdf/Predefined-operations-of-ASDF.html
edit: Actually, monolithic-compile-bundle-op seems to be what is asked for (as shown in other answers).
If you have to predict the extension, use uiop:compile-file-type.
And/or you can just call (asdf:output-files 'asdf:monolithic-compile-bundle-op :my-system) to figure out what is actually used.
The monolithic-compile-bundle-op operation will create a single compiled file which includes all dependencies, while compile-bundle-op creates a file for every system.
Example of use:
(asdf:operate 'asdf:monolithic-compile-bundle-op :my-system)
This command will create the file my-system--all-systems.fas(l) in the output directory of the target project, as well as "bundle" files for every system, named like my-system--system.fas(l).
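The resulting file can then be loaded like any other compiled file (the exact name and extension depend on your implementation, as noted above):
(load "my-system--all-systems.fasl")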

Using CMake, how can I concat files and install them

I'm new to CMake and I have a problem that I cannot figure out a solution to. I'm using CMake to compile a project with a bunch of optional sub-dirs, and it builds shared library files as expected. That part seems to be working fine. Each of these sub-dirs contains an SQL file. I need to concatenate all the selected SQL files into one SQL header file and install the result. So, one file like:
sql_header.sql
sub_dir_A.sql
sub_dir_C.sql
sub_dir_D.sql
If I did this directly in a makefile I might do something like the following, only smarter so as to deal with only the selected sub-dirs:
cat sql_header.sql > "${INSTALL_PATH}/somefile.sql"
cat sub_dir_A.sql >> "${INSTALL_PATH}/somefile.sql"
cat sub_dir_C.sql >> "${INSTALL_PATH}/somefile.sql"
cat sub_dir_D.sql >> "${INSTALL_PATH}/somefile.sql"
I have sort of figured out pieces of this, like I can use:
LIST(APPEND PACKAGE_SQL_FILES "some_file.sql")
which I assume I can place in each of the sub-dirs CMakeLists.txt files to collect the file names. And I can create a macro like:
CAT(IN "${PACKAGE_SQL_FILES}" OUT "${INSTALL_PATH}/somefile.sql")
But I am lost about what happens between when CMake initially runs and when make install runs. Maybe there is a better way to do this. I need this to work on both Windows and Linux.
I would be happy with some hints to point me in the right direction.
You can create the concatenated file mainly using CMake's file and function commands.
First, create a cat function:
function(cat IN_FILE OUT_FILE)
  file(READ ${IN_FILE} CONTENTS)
  file(APPEND ${OUT_FILE} "${CONTENTS}")
endfunction()
Assuming you have the list of input files in the variable PACKAGE_SQL_FILES, you can use the function like this:
# Prepare a temporary file to "cat" to:
file(WRITE somefile.sql.in "")
# Call the "cat" function for each input file
foreach(PACKAGE_SQL_FILE ${PACKAGE_SQL_FILES})
  cat(${PACKAGE_SQL_FILE} somefile.sql.in)
endforeach()
# Copy the temporary file to the final location
configure_file(somefile.sql.in somefile.sql COPYONLY)
The reason for writing to a temporary is so the real target file only gets updated if its content has changed. See this answer for why this is a good thing.
You should note that if you're including the subdirectories via the add_subdirectory command, the subdirs all have their own scope as far as CMake variables are concerned. In the subdirs, using list will only affect variables in the scope of that subdir.
If you want to create a list available in the parent scope, you'll need to use set(... PARENT_SCOPE), e.g.
set(PACKAGE_SQL_FILES
    ${PACKAGE_SQL_FILES}
    ${CMAKE_CURRENT_SOURCE_DIR}/some_file.sql
    PARENT_SCOPE)
All this so far has simply created the concatenated file in the root of your build tree. To install it, you probably want to use the install(FILES ...) command:
install(FILES ${CMAKE_BINARY_DIR}/somefile.sql
        DESTINATION ${INSTALL_PATH})
So, whenever CMake runs (either because you manually invoke it or because it detects changes when you do "make"), it will update the concatenated file in the build tree. Only once you run "make install" will the file finally be copied from the build root to the install location.
As of CMake 3.18, the CMake command line tool can concatenate files using cat. So, assuming a variable PACKAGE_SQL_FILES containing the list of files, you can run the cat command using execute_process:
# Concatenate the sql files into a variable 'FINAL_FILE'.
execute_process(COMMAND ${CMAKE_COMMAND} -E cat ${PACKAGE_SQL_FILES}
                OUTPUT_VARIABLE FINAL_FILE
                WORKING_DIRECTORY ${CMAKE_CURRENT_LIST_DIR})
# Write out the concatenated contents to 'final.sql.in'.
file(WRITE final.sql.in "${FINAL_FILE}")
The rest of the solution is similar to Fraser's response. You can use configure_file so the resultant file is only updated when necessary.
configure_file(final.sql.in final.sql COPYONLY)
You can still use install in the same way to install the file:
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/final.sql
        DESTINATION ${INSTALL_PATH})

Including a postscript file into another one?

I wonder if there is a standard way to include a PostScript file in another one.
For example, say I have got one file of data generated by a 3rd party program:
%!PS
/mydata [ 1 2 3 4 5 6
(...)
1098098
1098099
] def
and I would like to include it into a main PS document
%!PS
/processData
{
mydata { (..) } foreach
}
(...)
(data.ps) include %<=== ???
Thanks
The operator you want is run.
string run -
execute contents of named file
Unfortunately, run is not allowed if the interpreter has the SAFER option set.
Edit: Bill Casselman, author of *Mathematical Illustrations*, has a Perl script called psinc you can use to "preprocess" your PostScript files, inlining all (...) run files.
The standard way to include PostScript is to make the code to be included an EPS (Encapsulated PostScript) file. There are rules on how encapsulated PostScript must be created, and how to include it. See Adobe Tech Note 5002 'Encapsulated PostScript File Format Specification'
Simply executing 'run' on a PostScript file may well work, but it might also cause problems. Many PostScript files (especially those produced by 3rd parties) will include procedure definitions which may clash with your own names, and the included program may also leave the interpreter in a state different from the one it was in before the included file was executed. At the very least you should execute a save/restore pair around the code included via 'run'.
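A minimal sketch of that precaution (note that restore also discards any definitions the included file made, so use the data before restoring):
/saved save def          % snapshot the interpreter state
(data.ps) run            % execute the included file, defining /mydata
mydata { == } forall     % use the data before restore discards the definition
saved restore            % return the interpreter to its saved state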
I would suggest a meta-solution: use the C preprocessor or the M4 preprocessor. They are powerful tools, and their power may find uses beyond file inclusion. Though it was not asked, using a Makefile would also be wise to automate the whole workflow. By combining a preprocessor with a Makefile you can elegantly automate complex inclusion processing and more.
C Preprocessor
Including a file:
#include "other.ps"
Commandline for preprocessing:
cpp -P main.pps main.ps
M4 Preprocessor
Including a file:
include(other.ps)
Commandline for preprocessing:
m4 main.pps > main.ps
