I have compiled Chromium on Ubuntu. Now I want to modify the source code of the V8 engine, but I don't want to recompile the whole of Chromium because that takes so much time. How can I compile V8 alone and use it in Chromium?
Thank you very much~
If you edit V8's source within the Chromium checkout (in <chromium>/src/v8/src) and then recompile with ninja -C out/Release chrome (as you've probably compiled before), the build process will be smart enough to recompile only what's necessary.
One build step that takes considerable time is linking of the final binary. You can avoid that if you use a shared library build: run gn args out/Release and add a line is_component_build = true, then save and quit. On the next compile, that will cause everything to get recompiled, but on any further recompiles after that it will save time. (In Debug mode, you'll get a shared library build by default anyway.)
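For reference, a rough sketch of that flow (the out/Release directory name is just the one used above; gn args edits the args.gn file inside it):
gn args out/Release        # opens out/Release/args.gn in an editor
# add this line, then save and quit:
is_component_build = true
ninja -C out/Release chrome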
After an error occurred because of a missing flag or incorrectly set environment variable, is it possible to continue compiling once the mistake has been fixed?
I regularly use CMake and make to compile toolkits that take quite a while to compile and, also regularly, I accidentally set variables incorrectly in the process. Just now for example, I was attempting to include OpenInventor headers which on my machine are located in the directory /Users/user/software/prod/coin/include/Inventor.
I mistakenly passed
-DINVENTOR_INCLUDE_DIR=/Users/user/software/prod/coin/include/Inventor
rather than the correct
-DINVENTOR_INCLUDE_DIR=/Users/user/software/prod/coin/include
This only became an issue after 30 minutes when about 95% of the compilation was completed. Because I knew that reconfiguring using CMake would force a recompilation from scratch, I tried to add -I/Users/user/software/prod/coin/include to CMAKE_CXX_FLAGS in CMakeCache.txt but to no avail–it still recompiled from scratch. Since only a single source file actually includes the headers in question, it would be desirable if I could start compiling from the point where it exited with an error once the relevant path has been corrected. How can I do this and, as an aside, why does it force the compiler to start from scratch?
I'm using CMake version 3.11.1 and clang (Apple LLVM version 9.1.0) on macOS 10.13
CMake does not need to recompile everything just because it regenerates its makefiles; it will still perform the normal make avoidance checks. However, CMake does track the compiler options used to build each target, so if you change the compiler options for all targets, they will all need to be rebuilt.
If this compiler option is only needed for one target, you can add it to just that source file (or target) and no others, with something like this:
set_property(SOURCE my_source.c APPEND PROPERTY
COMPILE_FLAGS -I/foo/bar)
then it should only rebuild that one source file.
CMake looks at files' "last modified" times to decide which files need recompilation. But if you change the input to CMake itself, then it needs to regenerate the Makefiles and therefore recompile everything. Still, one hack may be possible...
CMake stores information about the include directories and the libraries to be linked in various text files in the build directory. So one hack (not recommended, but it works) is to modify these text files directly.
In the particular example that you mentioned, the hack would be to search and replace all occurrences of /Users/user/software/prod/coin/include/Inventor with /Users/user/software/prod/coin/include in all the files of the build directory.
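For illustration only, that same not-recommended hack sketched as a one-liner; "build" is a placeholder for your build directory, the -i '' form is for the BSD/macOS sed, and you should back up the build tree first since this will also touch any binary files that happen to contain the path:
grep -rl '/Users/user/software/prod/coin/include/Inventor' build/ | xargs sed -i '' 's|/Users/user/software/prod/coin/include/Inventor|/Users/user/software/prod/coin/include|g'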
(As an aside, if you don't already know, you can use make -j <n> to run the build with multiple parallel jobs, which can considerably decrease build times.)
How can I build a Linux kernel in Travis CI? I have added script: make menuconfig to my Travis config, but it doesn't work and says:
No output has been received in the last 10 minutes
How can I fix this?
Link to the GitHub repo: https://github.com/ProjectPolyester/tegra_kernel; please submit fixes as PRs if possible.
Travis monitors your build process and if there is no output for about 10 minutes, it assumes that your process is stuck somewhere for unknown reasons, and then kills it.
Solution in your case:
You need to provide the actual build command.
make menuconfig
actually just allows you to configure the kernel. It doesn't really start the kernel build process, so there is no output from this command.
Also, the kernel should already be configured, or you can download the appropriate .config file if it's available somewhere online. Then there will be no need to execute:
make menuconfig
The build command
It can be simply
make
or something like
make -j3 modules ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- LOCALVERSION=-$SOURCE_VERSION
The second one performs a cross-compilation.
You also need to set up all the prerequisites, like downloading the header files, etc.
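To make that concrete, here is a hedged sketch of the kind of script block you could give Travis; the cross-compiler package name and the tegra_defconfig target are assumptions based on the repo name, so adjust them for your tree:
sudo apt-get update
sudo apt-get install -y gcc-arm-linux-gnueabihf   # assumed cross-compiler package
make ARCH=arm tegra_defconfig                     # or drop in a known-good .config instead
make -j3 zImage modules ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-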
You may want to take a look at this script; it cross-compiles the modules only, not the entire kernel.
If you want to use the old config for a new kernel, you can use make olddefconfig. Here is my example of how to compile and boot a new kernel in Travis: https://github.com/avagin/criu/blob/linux-next/scripts/travis/kexec.sh#L54
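For example (the old config's location here is made up):
cp /path/to/old/.config .config
make olddefconfig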
I know that this is an old thread, but I was recently able to get Travis CI to build a Linux kernel:
https://github.com/GlassROM-devices/android_kernel_oneplus_msm8994/commit/6ed484812bbd4a25c3b22e730b7489eaaf668da1
The GCC fix is for toolchains compiled on Debian unstable, Arch, Gentoo, etc. These toolchains will fail to compile on Ubuntu, so you'll have to use the GCC fix for them.
And you really want to upgrade GCC before you even try building a kernel: Travis CI ships a very old GCC that will fail if you try to compile the kernel.
In my commit I'm building it with a Linaro GCC 8 that I built myself.
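As a hedged alternative to building GCC yourself, a newer GCC can usually be pulled onto a Travis Ubuntu image from the commonly used ubuntu-toolchain-r PPA and handed to the kernel build via CC, roughly like this:
sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install -y gcc-8
make -j3 CC=gcc-8 HOSTCC=gcc-8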
I've been trying this for over 5 days and have no idea how to get it to work. I had successfully installed Boost once, then my computer was re-imaged, and now it's just not happening. I have Windows 7 Enterprise, 64-bit.
I downloaded Boost from SourceForge and unzipped it into Program Files.
I then opened the VS 2013 Native Tools Command Prompt and changed directory to the Boost tools/build directory.
Ran bootstrap.bat
I then ran ./b2 address-model=64
but it did not give me directories for the compiler and the linker like last time.
I then ran ./b2 --prefix=C:\ProgramFiles\boost_1_58_0
but again nothing happens. I get the following warnings:
warning: No toolsets are configured.
warning: Configuring default toolset "msvc".
warning: If the default is wrong, your build may not work correctly.
warning: Use the "toolset=xxxxx" option to override our guess.
warning: For more configuration options, please consult
I have no idea why this worked the first time I did it and why it isn't working now. Can someone please help me out? I know nothing about Unix, but I need to install this so I can use the libraries.
I compile Boost with both MinGW (64-bit) and MSVC 2013 Pro. I have never in my life used the VS command prompt to build Boost with MSVC. Here are my commands to build 64-bit binaries with both toolchains.
First, go into the Boost folder and just double-click bootstrap.bat. This should run and build bjam/b2. Nothing special is required, and it doesn't matter which compiler this gets built with.
Then simply run, in a normal command prompt:
bjam.exe -a -j8 --toolset=msvc --layout=system optimization=speed link=shared threading=multi address-model=64 --stagedir=stage\MSVC-X64 release stage
Here -a forces a full rebuild, -j8 tells the build to use 8 cores (adjust this to your processor), toolset is obvious, layout controls how the output file names are structured, address-model is obvious, stagedir is where the built binaries are output (either a relative or an absolute path), and release stage tells the build system to stage a release build. See more here.
Same thing but using 64 bit mingw.
bjam.exe -j8 --toolset=gcc --layout=system optimization=speed link=shared threading=multi address-model=64 --stagedir=stage\x64 release stage
You can even go on to build additional libraries with a second pass, tweaking your arguments. For example, after I've built the core libs with the above command, I run the following command to build in zlib and bzip2 support.
bjam.exe -a -j8 --toolset=msvc --layout=system optimization=speed link=shared threading=multi address-model=64 --stagedir=stage\MSVC-X64 --with-iostreams -s BZIP2_SOURCE=C:\dev\libraries\cpp\bzip2-1.0.6 -s ZLIB_SOURCE=C:\dev\libraries\cpp\zlib-1.2.8 release stage
Anyway, that's just an example. I linked to the full docs for the build system. Try building using these commands, NOT inside a VS command prompt. If you still have problems, then please post the specific errors.
A note about MSVC
So, the MSVC compiler + Boost support a type of linking where the compiler will automatically include additional libraries that it knows it needs. I believe Boost does this by using #pragma directives in its headers. For example, you might use boost::asio configured in such a way that it needs boost::system (for error codes and such). Even if you don't explicitly add the boost::system library to the linker options, MSVC will "figure out" that it needs this library and automatically try to link against it.
Now an issue arises here because you're using layout=system. The libraries are named simply "boost_" + the lib name, like "boost_system.dll". However, unless you specify a preprocessor define, this auto-linking will try to link against a name like "boost_system_msvc_mt_1_58.lib". That's not exact, as I can't recall the precise name, but you get the idea. You specifically told Boost to name your libraries with the system layout (boost_thread, boost_system, etc.) instead of the default, which includes the Boost version, "mt" if multithreaded, the compiler version, and lots of other stuff in the name. So that's why the auto-linking feature goes looking for such a crazy weird name and fails. To fix this, add BOOST_AUTO_LINK_NOMANGLE in the Preprocessor section of your C++ settings in Visual Studio.
More on that here. Note that the answer here gives you a different preprocessor definition to solve this problem. I believe I ended up using BOOST_AUTO_LINK_NOMANGLE instead (which ended up working for me) because the ALL_DYN_LINK macro turned out to be designed as an "internal" define that boost controls and sets itself. Not sure as it's been some time since I had this issue, but the define I provide seems to solve the same root issue anyway.
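If you compile from the command line instead of a Visual Studio project, the same define can be passed directly to the compiler; a minimal hedged sketch, where the source file name and Boost paths are made up and the staged libs are assumed to live under the stagedir's lib folder:
cl /EHsc /MD /DBOOST_AUTO_LINK_NOMANGLE /I C:\boost_1_58_0 main.cpp /link /LIBPATH:C:\boost_1_58_0\stage\MSVC-X64\lib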
Is there an option to stop compilation and save state to a file and then load the file and continue compilation?
I know that GCC has the -fdump-tree-gimple option, which makes GCC dump its internal "GIMPLE" representation to a file, but I cannot find an option to load that file back in.
I see a few options:
try to update your patch from 4.3.1 to 4.5.0 (do not try to merge the patched 4.3.1 branch with the 4.5.0 branch; that would be mayhem)
try to get your 4.3.1 patch included in the 4.5 release. If it fixes a bug, that should be possible (just file a _detailed_ report on the GCC Bugzilla)
try to modify your code so that it does not depend on that 4.3.1 patch or on the plugin infrastructure
Or... all of the above.
Trying to get your code compiled partway by one version and then finished by another sounds completely hopeless.
I don't know about stopping GCC compilation, but you can cache already compiled files so GCC won't have to compile them again. See ccache.
So if you brutally stop the compilation at some point, say with Ctrl-C, then when you start again, all files that were already compiled will be retrieved from the cache.
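ccache's usual setup is to masquerade as the compiler; a hedged sketch, where the symlink directory is an assumption and must come before the real gcc in your PATH:
sudo ln -s "$(which ccache)" /usr/local/bin/gcc
sudo ln -s "$(which ccache)" /usr/local/bin/g++
# plain gcc/g++ invocations now go through the cache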
A small change in thousands of lines of code leads to running ./configure again on the entire software.
Is there any alternative where we can compile only the changed file and the files associated with it?
If you have a sane Makefile.am with proper dependencies, running ./configure and make should only recompile files that depend on the touched file. So make already does what you are asking for.
If your Makefiles are not sane (e.g. they only work if you run make clean first) and you are compiling C or C++ sources, using ccache might give you a speed gain. With ccache, only the preprocessor part is run and its output is compared against a cache of compile outputs; if nothing changed in the file or its includes, it won't be recompiled. Properly installed, it runs in a transparent way.
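For an autotools project, one hedged way to get that transparent hookup is to pass the wrappers at configure time (assuming ccache is installed and on your PATH):
./configure CC="ccache gcc" CXX="ccache g++"
make -j4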