Replace Source Files When Compiling CentOS Kernel with RPM

I am trying to modify one of the CentOS (7.6) kernel source files and recompile the kernel for later installation.
I followed the guide on the CentOS wiki for building a custom kernel:
https://wiki.centos.org/HowTos/Custom_Kernel
I found that in step 5, the RPM method always unpacks the source files from the tar archives, overwriting my modifications in BUILD/.
Therefore, I changed my approach. I put my modified file in another place and added a line to the kernel.spec file under SPECS/ to copy it into BUILD/. Namely, a one-line cp command placed before %build in kernel.spec (i.e., after the sources are unpacked). However, the compilation failed in the %build section:
...
Executing(%build): /bin/sh -e /var/tmp/rpm-tmp.Vd6by5
BUILDING A KERNEL FOR x86_64...
USING ARCH=x86_64
...
###
### Now generating an X.509 key pair to be used for signing modules.
###
### If this takes a long time, you might wish to run rngd in the
### background to keep the supply of entropy topped up. It
### needs to be run as root, and uses a hardware random
### number generator if one is available.
###
Generating a 3072 bit RSA private key
....++
......................................................................................................................................................................................++
writing new private key to 'signing_key.priv'
-----
###
### Key pair generated.
###
- Including cert /home/user/rpmbuild/BUILD/kernel-3.10.0-957.12.2.el7/linux-3.10.0-957.12.2.el7.v2.x86_64/centos-kpatch.x509
- Including cert /home/user/rpmbuild/BUILD/kernel-3.10.0-957.12.2.el7/linux-3.10.0-957.12.2.el7.v2.x86_64/centos-ldup.x509
- Including cert signing_key.x509
RPM build errors:
Could somebody suggest a better way to replace a source file during the build procedure?
Thanks.

I found the solution myself. Instead of directly replacing files, you should apply a patch within RPM that describes the differences between the modified file and the original.
First, use the diff command to build the patch. Then modify the spec file, kernel.spec, so that the patch is applied during the build procedure; a sketch of the workflow follows.
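For concreteness, a minimal sketch of that workflow, assuming a hypothetical modified file drivers/foo/bar.c and using example names throughout (the patch number, file paths and the copy of the pristine tree are illustrative, not from the original post):

# 1. Create a unified diff between the pristine and the modified tree
diff -u linux-3.10.0-957.12.2.el7.orig/drivers/foo/bar.c \
        linux-3.10.0-957.12.2.el7/drivers/foo/bar.c \
        > ~/rpmbuild/SOURCES/my-change.patch

# 2. In SPECS/kernel.spec, declare the patch next to the existing Patch
#    entries and apply it in %prep where the other patches are applied
#    (the CentOS 7 kernel spec provides an ApplyOptionalPatch helper):
#
#        Patch99999: my-change.patch
#        ...
#        ApplyOptionalPatch my-change.patch

# 3. Rebuild the RPMs as usual
rpmbuild -bb --target=$(uname -m) SPECS/kernel.spec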
This website shows an example of using this approach to compile the kernel:
https://www.hiroom2.com/2016/05/29/centos-7-rebuild-kernel-with-src-rpm/
A clearer reference for patching files:
https://rpm-packaging-guide.github.io/#patching-software

Related

How can I build ghostscript with an alternative zlib?

I'm using ghostscript to generate rather large PDF files, and profiling has led me to believe that a lot of time is spent compressing data.
For whatever reason, the ghostscript source tree ships with a copy of the zlib 1.2.11 source code, which is then compiled into the resulting gs executable.
I would like to benchmark other zlib implementations, notably Cloudflare's and possibly Intel's.
In ghostscript's Makefile.in, there's an interesting section:
# Define the directory where the zlib sources are stored.
# See zlib.mak for more information.
SHARE_ZLIB=@SHARE_ZLIB@
ZSRCDIR=@ZLIBDIR@
#ZLIB_NAME=gz
ZLIB_NAME=z
ZLIB_CFLAGS=@ZLIBCFLAGS@
And looking in base/zlib.mak:
# makefile for zlib library code.
# Users of this makefile must define the following:
# GSSRCDIR - the GS library source directory
# ZSRCDIR - the source directory
# ZGENDIR - the generated intermediate file directory
# ZOBJDIR - the object directory
# SHARE_ZLIB - 0 to compile zlib, 1 to share
# ZLIB_NAME - if SHARE_ZLIB=1, the name of the shared library
# ZAUXDIR - the directory for auxiliary objects.
So, in theory, one should just compile zlib elsewhere, get an .so file (perhaps?), set SHARE_ZLIB to 1 and ZLIB_NAME to /foo/bar/zlib_cloudflare/libz.so, and everything should be good. Except it doesn't work, and there's zero documentation.
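One plausible approach, sketched here under assumptions (the paths are examples, and whether the gs link step actually picks up the substitute library this way is exactly what remains unverified): build the alternative zlib as a shared libz.so, enable SHARE_ZLIB, and point the linker and the runtime loader at the custom copy.

# Build Cloudflare's zlib fork as a shared library (example path)
git clone https://github.com/cloudflare/zlib /tmp/zlib-cf
cd /tmp/zlib-cf && ./configure && make

# Rebuild ghostscript against a shared zlib: with SHARE_ZLIB=1 and
# ZLIB_NAME=z the link step resolves -lz, so -L decides which libz.so wins
# (command-line make variables override the configure-generated ones;
# this is a sketch, not a verified recipe)
cd /path/to/ghostscript
./configure
make SHARE_ZLIB=1 LDFLAGS="-L/tmp/zlib-cf"

# Make sure the custom library is also found at run time
LD_LIBRARY_PATH=/tmp/zlib-cf ./bin/gs --version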

V8: Isolate is incompatible with the embedded blob

I am trying to create a custom snapshot from a JavaScript file. I was able to create a snapshot using the command
mksnapshot.exe snapshot11.js --startup_blob snap.bin
but when I tried to create an Isolate with this snap.bin file, I got this message:
The Isolate is incompatible with the embedded blob. This is usually caused by incorrect usage of mksnapshot. When generating custom snapshots, embedders must ensure they pass the same flags as during the V8 build process (e.g.: --turbo-instruction-scheduling).
I am guessing that I need to recreate the snapshot with the proper flags, but I couldn't find which flags I need to use.
My args.gn:
is_component_build=true
v8_static_library=false
is_official_build=false
is_debug=true
use_custom_libcxx=false
use_custom_libcxx_for_host=false
target_cpu="x64"
use_goma=false
v8_use_external_startup_data=false
v8_enable_i18n_support = false
symbol_level=2
v8_enable_fast_mksnapshot=true
Any lead will be helpful.
Thanks.
You can invoke ninja with -v to have it print all the commands it executes; e.g. if you compile V8 with:
ninja -v -C out/... v8_monolith
then you'll find a line for the mksnapshot invocation in the output, and can copy the flags from there. (If you have already compiled V8, ninja will say "nothing to do"; in that case you can either clean out everything, or just delete snapshot_blob.bin and libv8_monolith.so.)
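As a concrete sketch (out/x64.debug and the v8_monolith target are examples; substitute your own build directory, target and shell):

# Force mksnapshot to re-run and capture its exact command line
rm -f out/x64.debug/snapshot_blob.bin out/x64.debug/libv8_monolith.so
ninja -v -C out/x64.debug v8_monolith 2>&1 | grep mksnapshot

# Append the flags printed there (e.g. --turbo-instruction-scheduling)
# to your own invocation:
mksnapshot.exe <flags copied from the ninja output> snapshot11.js --startup_blob snap.bin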

How can I specify a minimum compute capability to the mexcuda compiler to compile a mexfunction?

I have a CUDA project in a .cu file that I would like to compile to a .mex file using mexcuda. Because my code makes use of the 64-bit floating-point atomic operation atomicAdd(double *, double), which is only supported on GPU devices of compute capability 6.0 or higher, I need to specify this as a flag when compiling.
In my standard IDE this works fine, but when compiling with mexcuda it does not work as I would like. In this post on MathWorks, it was suggested to use the following command (edited from the comment by Joss Knight):
mexcuda('-v', 'mexGPUExample.cu', 'NVCCFLAGS=-gencode=arch=compute_60,code=sm_60')
but when I use this command on my file, the verbose option spits out the following line last:
Building with 'NVIDIA CUDA Compiler'.
nvcc -c --compiler-options=/Zp8,/GR,/W3,/EHs,/nologo,/MD -gencode=arch=compute_30,code=sm_30 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=\"sm_70,compute_70\"
(and so on), which signals to me that the specified flag was not passed to nvcc properly. And indeed, compilation fails with the following error:
C:/path/mexGPUExample.cu(35): error: no instance of overloaded function "atomicAdd" matches
the argument list. Argument types are: (double *, double)
The only other post I could find on this topic was this post on SO, but it is almost three years old and seemed to me more like a workaround (one which I do not understand even after some research, otherwise I would have tried it) than a true solution to the problem.
Is there a setting I missed, or can this simply not be done without a workaround?
I was able to work around this problem after some experimenting with the standard XML files in the MATLAB folder. The following steps allowed me to compile using mexcuda:
1) Go to the folder C:\Program Files\MATLAB\-version-\toolbox\distcomp\gpu\extern\src\mex\win64, which contains XML files for different versions of msvcpp.
2) Make a backup of the file that corresponds to the version you are using. In my case, I made a copy of the file nvcc_msvcpp2017 and named it nvcc_msvcpp2017_old, to always have the original.
3) Open nvcc_msvcppYEAR with Notepad and scroll to the following block of lines:
COMPILER="nvcc"
COMPFLAGS="--compiler-options=/Zp8,/GR,/W3,/EHs,/nologo,/MD $ARCHFLAGS"
ARCHFLAGS="-gencode=arch=compute_30,code=sm_30 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=\"sm_70,compute_70\" $NVCC_FLAGS"
COMPDEFINES="--compiler-options=/D_CRT_SECURE_NO_DEPRECATE,/D_SCL_SECURE_NO_DEPRECATE,/D_SECURE_SCL=0,$MATLABMEX"
MATLABMEX="/DMATLAB_MEX_FILE"
OPTIMFLAGS="--compiler-options=/O2,/Oy-,/DNDEBUG"
INCLUDE="-I"$MATLABROOT\extern\include" -I"$MATLABROOT\simulink\include""
DEBUGFLAGS="--compiler-options=/Z7"
4) Remove the architectures that will not allow your code to compile, i.e. all the architecture flags below 60 in my case:
ARCHFLAGS="-gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=\"sm_70,compute_70\" $NVCC_FLAGS"
5) I was able to compile using mexcuda after this. You do not need to specify any architecture flags in the mexcuda call.
6) (optional) You will probably want to revert this change once you are done with the project that required it, to ensure maximum portability of the code you compile afterwards.
Note: you will need administrator permission to make these changes.
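To check that the edit took effect, recompile with the verbose flag (mexGPUExample.cu is the example file from the question):

mexcuda('-v', 'mexGPUExample.cu')

The nvcc line in the verbose output should now list only -gencode=arch=compute_60,code=sm_60 and higher, with no architecture flags needed in the mexcuda call itself.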

GnuCOBOL entry point not found

I've installed GnuCOBOL 2.2 on my Ubuntu 17.04 system. I've written a basic hello world program to test the compiler.
IDENTIFICATION DIVISION.
PROGRAM-ID. HELLO-WORLD.
*---------------------------
DATA DIVISION.
*---------------------------
PROCEDURE DIVISION.
DISPLAY 'Hello, world!'.
STOP RUN.
This program is entitled HelloWorld.cbl. When I compile the program with the command
cobc HelloWorld.cbl
HelloWorld.so is produced. When I attempt to run the compiled program using
cobcrun HelloWorld
I receive the following error:
libcob: entry point 'HelloWorld' not found
Can anyone explain to me what an entry point is in GnuCOBOL, and perhaps suggest a way to fix the problem and successfully execute this COBOL program?
According to the official GnuCOBOL manual, you should compile your code with:
cobc -x HelloWorld.cbl
then run it with
./HelloWorld
You can also read the GnuCOBOL wiki page, which contains some examples, for further information.
P.S. As Simon Sobisch said, if you rename your file to HELLO-WORLD.cbl to match the PROGRAM-ID, the same commands that you used will work:
cobc HELLO-WORLD.cbl
cobcrun HELLO-WORLD
Can anyone explain to me what an entry point is in GnuCOBOL, and perhaps suggest a way to fix the problem and successfully execute this COBOL program?
An entry point is a point where you may enter a shared object (this is actually more C than COBOL).
GnuCOBOL generates entry points for each PROGRAM-ID, FUNCTION-ID and ENTRY. Therefore your entry point is HELLO-WORLD (which likely gets converted, as - is not a valid character in an ANSI C identifier; you won't have to think about this when CALLing a program, as the conversion is done internally).
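You can inspect the entry points a module actually exports with nm (the exact converted name GnuCOBOL emits for HELLO-WORLD is not shown here, so treat the output as illustrative):

nm -D HelloWorld.so | grep -i hello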
Using cobcrun internally does the following:
search for a shared object (in your case HelloWorld); as this is found (because you've generated it), it will be loaded
search for an entry point in all loaded modules - which isn't found
There are three possible options to get this working (summarized as commands after this list):
As mentioned in Ho1's answer: use cobc -x. The reason this works is that you don't generate a shared object at all, but a C main which is called directly (= the entry point doesn't apply at all).
Preload the shared object and call the program by its PROGRAM-ID (entry point), either manually with COB_PRE_LOAD=HelloWorld cobcrun HELLO-WORLD, or through cobcrun (option available since GnuCOBOL 2.x): cobcrun -M HelloWorld HELLO-WORLD.
Change the PROGRAM-ID to match the source name (either rename the file or change the source; I'd do the second: PROGRAM-ID. HelloWorld.).
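The same options, summarized as commands (all taken from the answers above):

# Option 1: build an executable instead of a module
cobc -x HelloWorld.cbl
./HelloWorld

# Option 2: keep the module; preload it and call the PROGRAM-ID entry point
COB_PRE_LOAD=HelloWorld cobcrun HELLO-WORLD
# or, since GnuCOBOL 2.x:
cobcrun -M HelloWorld HELLO-WORLD

# Option 3: make the PROGRAM-ID match the file name (PROGRAM-ID. HelloWorld.)
cobc HelloWorld.cbl
cobcrun HelloWorld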

cvs2svn 2.4.0 - error pass 16 svnadmin

Good afternoon,
Using version 2.4.0-dev on a Linux machine, I am trying to migrate a CVS project to SVN. I had some issues with symbols, and I created a hint rule file based on symbol-info.
Now to my current error. The CVS project is called package. I want to migrate it to SVN under the directory structure svnrepos/sw/package. The directory svnrepos/sw already exists (along with other projects under svnrepos).
In my option file (created from cvs2svn-example.options), I am using
ctx.output_option = ExistingRepositoryOutputOption(
r'/var/svn-test', # Path to repository
#author_transforms=author_transforms,
)
...
run_options.add_project(
r'cvs/package',
trunk_path='sw/package/trunk',
branches_path='sw/package/branches',
tags_path='sw/package/tags',
...
I also tried
run_options.add_project(
r'cvs/package',
trunk_path='trunk',
branches_path='branches',
tags_path='tags',
initial_directories=[
r'sw/package'
],
with the same error:
----- pass 16 (OutputPass) -----
Starting Subversion Repository.
Starting Subversion r1 / 635
Starting Subversion r2 / 635
Starting Subversion r3 / 635
ERROR: svnadmin failed with the following output while loading the dumpfile:
svnadmin: E160020: File already exists: filesystem '/var/svn-test/db', transaction '48-1c', path 'sw'
I am at a loss as to how to resolve this issue.
Note:
My initial tests used command-line arguments, with the result that trunk, branches and tags were created in svnrepos/trunk, svnrepos/branches and svnrepos/tags respectively. As I indicated earlier, I want these to be under svnrepos/sw/package.
Thanks in advance
Daniel
I solved this issue. Essentially, the migration has to be done in two steps:
1. Use cvs2svn to produce a dump file. In the options file, I used the following:
# Use this type of output option if you want the output of the
# conversion to be written to a SVN dumpfile instead of committing
# them into an actual repository. The author_transforms option is as
# described above:
ctx.output_option = DumpfileOutputOption(
# dumpfile_path=r'/path/to/cvs2svn-dump', # Name of dumpfile to create
dumpfile_path='packageDump',
#author_transforms=author_transforms,
)
Note that the paths for trunk, tags, and branches also include a reference to package.
run_options.add_project(
r'cvs/package',
trunk_path='package/trunk',
branches_path='package/branches',
tags_path='package/tags',
As I mentioned in the original message, there were some symbol issues, and I created a symbol hint file where svn-path points to my desired package directory. For example:
0 tag_pk_1_0_0 tag package/tags/pk_1_0_0 .trunk.
2. Use svnadmin load to load the generated dumpfile into SVN:
svnadmin load --parent-dir sw /var/svn-test < packageDump
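Put together, the two steps look like this (cvs2svn.options is a hypothetical name for the options file described above):

# Step 1: run the conversion, writing a dumpfile instead of committing
cvs2svn --options=cvs2svn.options   # produces packageDump

# Step 2: load the dump under the existing sw/ directory of the repository
svnadmin load --parent-dir sw /var/svn-test < packageDump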
