Does Erlang R15B02 have support for hybrid heap?
I ran configure --enable-hybrid-heap ... and built it with make, but no [hybrid-heap] flag shows up:
$ erl
Erlang R15B02 (erts-5.9.2) [source] [64-bit halfword] [smp:16:16] [async-threads:0] [kernel-poll:false]
I learned about this flag from this answer: https://stackoverflow.com/a/1185490/940313
The experimental Hybrid heap implementation is gone in R15B02:
OTP-10105 Remove all code, documentation, options and diagnostic
functions which were related to the experimental hybrid heap
implementation.
See http://www.erlang.org/download/otp_src_R15B02.readme
We have an application that is mostly Go (1.17) and that makes a lot of calls through CGo (GCC 7.5) to CUDA on an ARM processor. We occasionally see panics that look like something has done bad things to the heap on the C side. I tried running the whole application under Valgrind, but I get too many messages like
==14869== Thread 1:
==14869== Invalid read of size 8
==14869== at 0x4783AC: runtime.startm (proc.go:2508)
==14869== by 0x47890B: runtime.wakep (proc.go:2584)
==14869== by 0x47CF8F: runtime.newproc.func1 (proc.go:4261)
==14869== by 0x4A476B: runtime.systemstack (asm_arm64.s:230)
==14869== by 0x4A465F: runtime.mstart (asm_arm64.s:117)
==14869== Address 0x1fff0001a8 is on thread 1's stack
==14869== 8 bytes below stack pointer
to see anything useful. I am assuming these are false positives and that the Go runtime is not, in fact, riddled with undefined behaviour. I can't see a flag to suppress that check. Have I missed it? Is there some other way to investigate this problem? I could write test harnesses in C++, but that would change the usage pattern, which I suspect is key to the problem.
I build the cgo software to an executable with
go build
which produces an executable named after the directory you are in. Let's say it's called "mycgoprog".
I then run valgrind against it with this command:
G_SLICE=always-malloc G_DEBUG=gc-friendly valgrind -v --tool=memcheck --leak-check=full --num-callers=40 --log-file=valgrind.log ./mycgoprog
The valgrind.log then contains all the indiscretions detected by valgrind in the linked C code.
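If the Go runtime's own reports still drown out the C-side errors, one option (just a sketch, not something from the setup above) is a Valgrind suppressions file that hides errors whose frames are inside the Go runtime, leaving the C/CUDA frames visible. The file name go-runtime.supp is arbitrary:

# go-runtime.supp - suppress Memcheck noise originating in the Go runtime
{
   go-runtime-addr8
   Memcheck:Addr8
   fun:runtime.*
}
{
   go-runtime-addr4
   Memcheck:Addr4
   fun:runtime.*
}

Then add --suppressions=go-runtime.supp to the valgrind command above.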
I want to compile Chromium for Windows with different patches from different projects for myself (at least for today). I am following the official instructions:
https://chromium.googlesource.com/chromium/src/+/main/docs/windows_build_instructions.md
All steps passed successfully, but I get awfully low performance. In the Speedometer benchmark (https://browserbench.org/Speedometer2.0/) my build gets only 5 points, while the official build gets 116! These are the commands I use:
gclient
mkdir chromium && cd chromium
fetch --no-history chromium
cd src
gn gen --ide=vs out/Default "--args=target_cpu=\"x86\""
autoninja -C out/Default chrome -j 6
What I have tried that did not help:
Delete the depot_tools folder and start over
Decrease -j from 10 to 8 and then to 6
Delete the --ide=vs argument
Fetch with history
Use the gn gen command without the --args part
Disable Windows Defender and the firewall
As Asesh correctly noted, the problem was debug mode. But adding the key "is_debug=false" to the gn gen command is not enough. The best solution is to also add the "is_official_build=true" key. Here is its description:
is_official_build
Current value (from the default) = false
From //build/config/BUILDCONFIG.gn:136
Set to enable the official build level of optimization. This has nothing
to do with branding, but enables an additional level of optimization above
release (!is_debug). This might be better expressed as a tri-state
(debug, release, official) but for historical reasons there are two
separate flags.
IMPORTANT NOTE: (!is_debug) is *not* sufficient to get satisfying
performance. In particular, DCHECK()s are still enabled for release builds,
which can halve overall performance, and do increase memory usage. Always
set "is_official_build" to true for any build intended to ship to end-users.
Thanks Asesh for pointing me in the right direction.
I'm new to golang. I was debugging my go application.
When I tried to run "info goroutines", it threw out:
Undefined info command: "goroutines".  Try "help info".
What did I miss in my gdb configuration?
The article "Debugging Go Code with GDB" does mention:
(gdb) info goroutines
But only in the context of loading extension scripts for a given binary.
The tool chain uses this to extend GDB with a handful of commands to inspect internals of the runtime code (such as goroutines) and to pretty print the built-in map, slice and channel types.
If you'd like to see how this works, or want to extend it, take a look at src/pkg/runtime/runtime-gdb.py in the Go source distribution.
It depends on some special magic types (hash<T,U>) and variables (runtime.m and runtime.g) that the linker (src/cmd/ld/dwarf.c) ensures are described in the DWARF code.
If you're interested in what the debugging information looks like, run 'objdump -W 6.out' and browse through the .debug_* sections.
So make sure your debug session is run with those extensions activated.
In the gdb session, run
source $GOROOT/src/runtime/runtime-gdb.py
where $GOROOT is where Go lives (see go env | grep ROOT).
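To avoid sourcing it by hand in every session, the same "Debugging Go Code with GDB" guide also describes whitelisting the script in your ~/.gdbinit so GDB auto-loads it along with the binary; the /usr/local/go path below is just an assumed Go root:

add-auto-load-safe-path /usr/local/go/src/runtime/runtime-gdb.py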
You should use Delve (https://github.com/go-delve/delve) instead, as recommended by the Go docs: https://golang.org/doc/gdb
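A minimal Delve session that produces the goroutine listing asked about might look like this (the package path is a placeholder):

go install github.com/go-delve/delve/cmd/dlv@latest
dlv debug ./cmd/myapp        # hypothetical package path
(dlv) break main.main
(dlv) continue
(dlv) goroutines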
I am deploying a large Java project on Sonar using "Findbugs" as the profile and getting the error below:
Caused by: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
Java heap space
What I have tried to resolve this:
Replaced %SONAR_RUNNER_OPTS% with -Xms256m -Xmx1024m to increase the heap size in the sonar-runner bat file.
Set the "sonar.findbugs.effort" parameter to "Min" in the Sonar global parameters.
But neither of the above methods worked for me.
I had the same problem and found a very different solution, perhaps because I'm having a hard time swallowing the previous answers / comments. With 10 million lines of code (that's more code than is in an F-16 fighter jet), if you have 100 characters per line (a crazy size), you could load the whole code base into 1 GB of memory. I gave it 8 GB of memory and it still failed. Why?
Answer: Because the community Sonar C++ scanner seems to have a bug where it picks up ANY file with the letter 'c' in its extension. That includes .doc, .docx, .ipch, etc. Hence, the reason it's running out of memory is that it's trying to read some file that it thinks is 300 MB of pure code but really should be ignored.
Solution: Find the extensions used by all of the files in your project (see more here):
dir /s /b | perl -ne 'print $1 if m/\.([^^.\\\\]+)$/' | sort -u | grep c
Then add these other extensions as exclusions in your sonar.properties file:
sonar.exclusions=**/*.doc,**/*.docx,**/*.ipch
Then set your memory limits back to regular amounts.
%JAVA_EXEC% -Xmx1024m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m %SONAR_RUNNER_OPTS% ...
This has worked for me:
SONAR_RUNNER_OPTS="-Xmx3062m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=128m"
I set it directly in the sonar-runner(.bat) file.
I had the same problem when running sonar with maven. In my case it helped to call sonar separately:
mvn clean install && mvn sonar:sonar
instead of
mvn clean install sonar:sonar
http://docs.sonarqube.org/display/SONAR/Analyzing+with+Maven
Remark: Because my solution is connected to Maven, this is not a direct answer to the question. But it might help other users who stumble upon it.
What you can do is create your own quality profile with just a few FindBugs rules at first, and then progressively add more and more until you reach this OutOfMemoryError. There's probably a single rule that makes it all fail because your code violates it - and if you deactivate that rule, it will certainly work.
I know this thread is a bit old but this info might help someone.
For me the problem was not the C++ plugin, as suggested by the top answer.
Instead, my problem was the XML plugin (https://docs.sonarqube.org/display/PLUG/SonarXML);
after I deactivated it, the analysis worked again.
You can solve this issue by increasing the maximum memory allocated to the appropriate process, i.e. by increasing the -Xmx setting for the corresponding Java process in your sonar.properties file
under SonarQube/conf/sonar.properties
Uncomment the lines below and increase the memory as you want:
For Web: sonar.web.javaOpts=-Xmx5123m -Xms1536m -XX:+HeapDumpOnOutOfMemoryError
For ElasticSearch: sonar.search.javaOpts=-Xms512m -Xmx1536m -XX:+HeapDumpOnOutOfMemoryError
For Compute Engine: sonar.ce.javaOpts=-Xmx1536m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
The problem is on FindBugs side. I suppose you're analyzing a large project that probably has many violations. Take a look at two threads in Sonar's mailing list having the same issue. There are some ideas you can try for yourself.
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td4898141.html
http://sonar.15.n6.nabble.com/java-lang-OutOfMemoryError-Java-heap-space-td5001587.html
I know this is old, but I am posting my answer anyway. I realized I was using the 32-bit JDK (version 8), and after uninstalling it and then installing the 64-bit JDK (version 12), the problem disappeared.
What configure options should I use to compile MPICH2 (version 1.1.1p1 or 1.2.1p1) with SCTP?
In my attempt there is an error when linking cpi.c (the small example).
/home/op02/mpiopt/sctp/lib/libmpich.a(ch3u_rma_sync.o)(.text+0x20a7): In function `MPIDI_Win_post':
: undefined reference to `PMPI_Group_translate_ranks'
/home/op02/mpiopt/sctp/lib/libmpich.a(ch3u_rma_sync.o)(.text+0x21bd): In function `MPIDI_Win_post':
: undefined reference to `PMPI_Group_free'
/home/op02/mpiopt/sctp/lib/libmpich.a(ch3u_rma_sync.o)(.text+0x25c4): In function `MPIDI_Win_complete':
: undefined reference to `PMPI_Group_translate_ranks'
....
My options were:
../mpich2-1.1.1p1/configure --enable-fast=O1 \
--host=x86_64-unknown-linux-gnu \
--target=x86_64-secret-linux-gnu \
--with-device=ch3:sctp --with-pm=hydra \
--with-cross=x8664secret.cross --disable-f77 --disable-f90 \
>conf.log 2>&1
with x8664secret.cross being the output of the getcross.c program. Host, target, and this file are there to force a cross-compilation (it is a requirement for this build).
Is SCTP support in MPICH2 in an active state, and can it be compiled?
Does the SCTP network module support cross-building?
Try 1.3.1 instead. I see that Brad Penoff committed a couple of small changes to the build system since 1.2.1p1 was released, so it may be in better shape now. Alternatively, try using (the rather old) MPICH2 1.0.8, where I believe things were still working.
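If you go that route, the configure line from the question should carry over with just the source directory changed (the cross file and paths are of course specific to your setup):

../mpich2-1.3.1/configure --enable-fast=O1 \
  --host=x86_64-unknown-linux-gnu \
  --target=x86_64-secret-linux-gnu \
  --with-device=ch3:sctp --with-pm=hydra \
  --with-cross=x8664secret.cross --disable-f77 --disable-f90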
If the cross-compile step is what's really causing the problem and you still need to solve it, you can get more interactive support from mpich-discuss@lists.anl.gov. We can dig in over there instead.