Solana build crashes while compiling the solana-validator package

Super new to Solana. I'm trying to get the cloned repo built on my local machine, but the build stops while compiling the solana-validator package and the terminal closes itself (crashes, if you will) without showing any error or warning.
Nothing can be seen in the logs either. I tried cargo build --verbose: nothing. I'm on Ubuntu 22.04 with 15 GB RAM and a 500 GB SSD, so I reckon I have enough resources to get it built. Any ideas why I could be facing this issue?
These are the versions I have installed:
rustup - 1.24.0
rustc - 1.60.0
solana-cli - 1.11.0
nodejs - 16.15.0
Thank you!

This is totally normal, especially with 15 GB of RAM: cargo defaults to spinning up N jobs, where N is the number of cores on your machine. With a lot of cores, it's easy to use up too much memory while compiling so many modules in parallel, causing the OOM killer on Linux to take down processes until memory is recovered.
To get around this, start with cargo build --jobs 1, which should succeed. If you want to speed things up, feel free to experiment with larger numbers, but know that it might fail again!
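If you don't want to pass the flag on every invocation, a persistent sketch is to cap the job count in the repo's cargo config (this is cargo's standard `[build] jobs` setting; the file path assumes you run it from the repository root):

```shell
# Cap parallel compilation for this repository so a serial build
# succeeds first; raise the number cautiously afterwards.
mkdir -p .cargo
cat >> .cargo/config.toml <<'EOF'
[build]
jobs = 1
EOF
```

After that, a plain `cargo build` will respect the limit.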

Related

How to speed up make and make install on CentOS?

Sometimes I build MySQL or Python from source with make and make install, and the process is very slow.
How could I speed up make and make install on CentOS?
make -j$(nproc) will run the compilation on multiple cores, if available.
nproc reports the number of CPU cores, and the -j flag tells make to run that many jobs in parallel.
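As a toy demonstration of what -j buys you (the Makefile below is an illustration, not from any real project): two independent one-second targets finish in about one second with -j2 instead of two seconds serially.

```shell
# Two independent targets, each sleeping one second; with -j2 they
# run concurrently, roughly halving the wall-clock time.
cat > Makefile <<'EOF'
all: a b
a: ; @sleep 1; echo a done
b: ; @sleep 1; echo b done
EOF
make -s -j2
```

The output order of "a done" and "b done" may vary, since both jobs run at once.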
P.S. You'd better not compile anything by hand on CentOS; stick to RPM packaging, whose spec files usually make use of the parallel compilation mentioned above.
By hand-compiling software on a Community Enterprise OS, you're essentially turning it into something it's not (hand-compiled software ends up unsupported, insecure, and outdated sooner or later).

Building Brave-Browser in Windows doesn't work

I am trying to build Brave Browser on Windows 10 64-bit with 15.8 GB RAM and more than 200 GB of free space.
I am following this repo: https://github.com/brave/brave-browser/wiki
I have installed all the requirements for the build on Windows; however, when I run npm run init I get the following error:
Downloading CIPD client for windows-amd64 from https://chrome-infra-packages.appspot.com/client?platform=windows-amd64&version=git_revision:db7a486094873e3944b8e27ab5b23a3ae3c401e7...
error: The following untracked working tree files would be overwritten by checkout:
pylint.bat
Please move or remove them before you switch branches.
Aborting
fatal: Could not detach HEAD
Failed to update depot_tools.
Does anyone know why that might be happening? I have tried installing Python and setting the environment variable, rebooting the machine, and installing VS Code a few times thinking it might be the cause, but the error is always the same. I have also searched the Brave Community and cannot find anything similar.
Any help would be appreciated,
I believe it's an upstream problem in Chromium, so a fix should appear in Brave soon. See https://bugs.chromium.org/p/chromium/issues/detail?id=996359 for details, including a potential workaround (though I haven't personally tested it).

Custom built kernel fails to install correctly - Centos7

I am attempting to build and install multiple kernels on my machine, all of exactly the same release (4.19.10, found here) but with different preemption models (for benchmarking). I was successful with the initial vanilla kernel build and install, but none of the subsequent installs have been bootable.
I am building the kernels as RPM packages. Again, all are identical except for two changes in make menuconfig:
General Setup >> Local version - append to kernel release - Here I add a string to indicate preemption model, such as -lld for low-latency desktop
General Setup >> Preemption Model - Here I select the preemption model
All of them (with and without CONFIG_RT_PREEMPT patch) build fine with no errors.
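For reference, the two menuconfig choices above end up as the following `.config` symbols (shown here for the low-latency desktop model; symbol names are from a mainline 4.19 tree, and the RT patch uses its own additional symbols):

```
CONFIG_LOCALVERSION="-lld"
CONFIG_PREEMPT=y
```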
I am installing with rpm -ivh kernel-4.19.10_lld-1.x86_64.rpm, which appears successful until it reaches 100% and hangs. Eventually I kill the install with Ctrl+C, check what is running with top, and can see grub2-editenv still running.
From here a few different things can happen, but it all ends the same way. Reboot usually hangs; a second reboot brings me either to the grub command line or back to the console with Welcome to emergency mode!.
I can add the new kernel to grub with grub2-mkconfig -o /boot/grub2/grub.cfg, which has no issues. But regardless of selecting the boot image from the grub command line directly or adding it to grub and selecting it during boot, I get the same text:
error: invalid magic number.
error: you need to load the kernel first
I recognize that there might not be enough info here to identify my issue, but I was hoping to at least get some direction and answer a few questions:
Is using General Setup >> Local version - append to kernel release sufficient to make these kernels unique so that they can be installed alongside one another?
Are these symptoms indicative of a bad build, an incorrectly configured RPM spec, or just a bad grub configuration?
Thanks
Update: I was able to upgrade my kernel with rpm -Uvh kernel-4.19.10_lld-1.x86_64.rpm successfully and have it correctly boot, although I could not do that with one of the other kernels. Not sure what that indicates, but I'm thinking the issue is probably trying to install the same kernel versions in parallel and the builds themselves are probably OK.
Update 2:
I ditched the RPM approach and tried just make modules_install and make install. The install completes with no issues, but running grub2-mkconfig afterwards hangs. Booting hangs at a black screen, and rebooting takes me to the grub command line. Manually loading the kernel from there gives no errors, but booting ends in a kernel panic right after the hardware is identified. The message is Kernel Panic - not syncing: VFS: Unable to mount.
Probably related: I built the first (working) kernel in a VM (on Intel i7 hardware), but have been building the others on an Intel Atom E3950 chipset. I'm thinking that might be the issue, because the menuconfig ends up different. I don't think I've had a healthy build on that chipset yet.

Launch VSCode from source through WSL

I would like to build/launch the VSCode source code in the native Bash for Windows client. I have followed the steps outlined in the VSCode wiki on how to contribute, and everything works as expected (all commands were run in the WSL terminal following the Linux instructions).
After running yarn run watch, I try to launch VSCode by running DISPLAY=:0 ./scripts/code.sh from the source code directory, but nothing happens. I get two duplicate warnings:
[21496:1128/120229.392130:WARNING:audio_manager.cc(295)] Multiple instances of AudioManager detected
but I'm not sure if this is causing the problem. I have an X server running, and have used it to successfully launch graphical applications through WSL (terminator, emacs, etc.)
Is what I'm trying to do possible? If so, how can I make it work?
Amazing that you asked this! I was attempting to do the exact same thing at (it seems) the exact same time. Here's my process.
Install XMing
Install the xfree apps
Set DISPLAY=:0
Run xeyes ==> Awesome googly eyes!
Attempt to build vscode from source. The build docs seem to be incomplete, because I had to install a ton of libraries beyond those listed, e.g.:
yarn
gulp
gulp-cli
pkg-config
libx11-dev
libxkbfile-dev
libsecret-1-dev
libgtk2.0-dev
libxss-dev
gnome-dev
libgconf2-dev
libnss3-dev
libasound2-dev
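The list above can be installed in one pass. A sketch, with package names taken verbatim from the list (availability varies by Ubuntu release, and gnome-dev in particular may not exist under exactly that name):

```shell
# Install the apt packages from the list above in one step;
# names are as listed and may differ on your Ubuntu release.
sudo apt-get install -y pkg-config libx11-dev libxkbfile-dev \
  libsecret-1-dev libgtk2.0-dev libxss-dev gnome-dev \
  libgconf2-dev libnss3-dev libasound2-dev
# yarn, gulp and gulp-cli come from npm rather than apt:
npm install -g yarn gulp gulp-cli
```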
Eventually I got the yarn tasks to finish, such that I could run code.sh:
./scripts/code.sh
[20474:1128/153959.035267:ERROR:bus.cc(427)] Failed to connect to the bus: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
[20474:1128/153959.081986:WARNING:audio_manager.cc(295)] Multiple instances of AudioManager detected
[20474:1128/153959.082101:WARNING:audio_manager.cc(254)] Multiple instances of AudioManager detected
Looking at ps I see that the process was running.
Conjectures
It seems that building from source under WSL is not yet supported. Or maybe you can build the artifact, but you can't connect to the Windows display to show it. Based on the quality of the xeyes session, it looks like a very primitive experience, e.g. still using WinXP-style minimize/maximize/close icons.
I was literally writing an issue on their GitHub page when I thought I'd do one last search and found this post. Much of vscode treats WSL as a second-class environment on Windows. Recent work suggests that things will get better as the integration between Windows' two internal environments continues to improve (e.g. https://github.com/Microsoft/vscode/issues/39144)
Update 2017-11-30
Based on some digging on GitHub, it seems this issue has been reported to the WSL team: https://github.com/Microsoft/WSL/issues/2293. It appears to be under active consideration; I've added some commentary about my use case there.

For OpenMPI, I keep getting a world size of 1, even though I am utilizing 2 processors [duplicate]

I'm writing a parallel program using Open MPI. I'm running Snow Leopard 10.6.4, and I installed Open MPI through the homebrew package manager.
When I run my program using mpirun -np 8 ./test, every process reports that it has rank 0, and believes the total number of processes to be 1, and 8 lines of process rank: 0, total processes: 1 get spit out to the console.
I know it's not a code issue, since the exact same code will compile and run as expected on some Ubuntu machines in my college's computer lab. I've checked homebrew's bug tracker, and no-one's reported an issue with the Open MPI package. I'm at a loss.
Check which mpirun you are invoking. The mpirun being executed is launching 8 independent instances of the binary, so each instance is a singleton MPI application with a world size of 1 and rank 0 — typically a sign that the launcher does not match the MPI library the program was compiled against.
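A quick way to check for that mismatch, assuming both tools are on your PATH, is to confirm that the compiler wrapper and the launcher resolve to the same MPI installation:

```shell
# Both should live under the same prefix and report the same MPI
# flavour (Open MPI vs MPICH); if they differ, you are launching
# with the wrong tool for the library you compiled against.
command -v mpicc mpirun
mpirun --version
```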
Also, unless you plan to run the final code on a cluster of OS X boxes, I highly recommend installing Linux in a VM (e.g. VirtualBox) to test and develop these codes.
Uninstall previous MPI implementation completely.
In my case I installed MPICH2 first, then uninstalled it and changed to Open MPI. The same thing occurred: every process's rank was 0. What fixed it was uninstalling MPICH2 completely from my system (I use Ubuntu/Debian Linux):
# apt-get remove mpich2
# apt-get autoremove
I ran into the same problem today and finally found the solution.
See https://wiki.mpich.org/mpich/index.php/Frequently_Asked_Questions#Q:_All_my_processes_get_rank_0
Simply speaking, MPI processes need a suitable process-management interface (PMI) to learn their ranks, so you must use the mpirun/mpiexec that corresponds to the MPI library the program was compiled against.
I guess your problem comes from a mismatch between the MPI compiler wrappers and the mpirun tool. Try uninstalling everything and then installing just one of MPICH or Open MPI.
I had the same problem with Open MPI in C on Linux. Using MPICH2 instead fixed it (but remember to call MPI_Finalize() at the end, or things get weird).
