Sometimes when I run the command npx hardhat compile in my Windows CLI, I get the error below:
Compiling 72 files with 0.7.0
contracts/libraries/ERC20.sol: Warning: SPDX license identifier not provided in source file. Before publishing, consider adding a comment containing "SPDX-License-Identifier: <SPDX-License>" to each source file. Use "SPDX-License-Identifier: UNLICENSED" for non-open-source code.
Please see https://spdx.org for more information.
contracts/libraries/ERC1155/EnumerableSet.sol:158:5: Warning: Variable is shadowed in inline assembly by an instruction of the same name
function add(Bytes32Set storage set, bytes32 value) internal returns (bool) {
^ (Relevant source part starts here and spans across multiple lines).
contracts/libraries/ERC1155/EnumerableSet.sol:224:5: Warning: Variable is shadowed in inline assembly by an instruction of the same name
function add(AddressSet storage set, address value) internal returns (bool) {
^ (Relevant source part starts here and spans across multiple lines).
Compiling 1 file with 0.8.0
<--- Last few GCs --->
[8432:042B0460] 263058 ms: Mark-sweep (reduce) 349.8 (356.3) -> 248.2 (262.4) MB, 434.4 / 0.2 ms (+ 70.9 ms in 3 steps since start of marking, biggest step 69.6 ms, walltime since start of marking 800 ms) (average mu = 0.989, current mu = 0.990) memory
[8432:042B0460] 263627 ms: Mark-sweep (reduce) 248.2 (259.4) -> 248.2 (252.1) MB, 555.5 / 0.0 ms (+ 0.0 ms in 0 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 556 ms) (average mu = 0.969, current mu = 0.023) memory p
<--- JS stacktrace --->
FATAL ERROR: NewNativeModule Allocation failed - process out of memory
After some time the error just goes away, usually once I've restarted my system or created a new Hardhat project and imported the code there.
But this is happening too often; what could be the cause?
I've done quite a bit of research, and some answers suggest it might be a problem with Node and the application's memory allocation, but I don't know how I would apply those solutions to a Hardhat project.
Here is a link to one possible solution: https://medium.com/#vuongtran/how-to-solve-process-out-of-memory-in-node-js-5f0de8f8464c
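From what I gathered, the suggestions boil down to raising Node's heap limit via the --max-old-space-size flag, e.g. something like this in Windows CMD (just my understanding of the generic Node advice; I haven't confirmed this is the right way to do it for a Hardhat project):
REM 4096 MB is just an example value for the V8 heap limit
set NODE_OPTIONS=--max-old-space-size=4096
npx hardhat compile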
OS: WINDOWS 10
CLI: WINDOWS CMD
Issue
I have a Go package with a test suite.
When I run the test suite for this package, the total runtime is ~7 seconds:
$ go test ./mydbpackage/ -count 1
ok mymodule/mydbpackage 7.253s
However, when I add a -cpuprofile=cpu.out option, the sampling does not cover the whole run:
$ go test ./mydbpackage/ -count 1 -cpuprofile=cpu.out
ok mymodule/mydbpackage 7.029s
$ go tool pprof -text -cum cpu.out
File: mydbpackage.test
Type: cpu
Time: Aug 6, 2020 at 9:42am (CEST)
Duration: 5.22s, Total samples = 780ms (14.95%) # <--- depending on the runs, I get 400ms to 1s
Showing nodes accounting for 780ms, 100% of 780ms total
flat flat% sum% cum cum%
0 0% 0% 440ms 56.41% testing.tRunner
10ms 1.28% 1.28% 220ms 28.21% database/sql.withLock
10ms 1.28% 2.56% 180ms 23.08% runtime.findrunnable
0 0% 2.56% 180ms 23.08% runtime.mcall
...
Looking at the collected samples:
# sample from another run:
$ go tool pprof -traces cpu.out | grep "ms " # get the first line of each sample
10ms runtime.nanotime
10ms fmt.(*readRune).ReadRune
30ms syscall.Syscall
10ms runtime.scanobject
10ms runtime.gentraceback
...
# 98 samples collected, for a total sum of 1.12s
The issue I see is: for some reason, the sampling profiler stops collecting samples, or is blocked/slowed down at some point.
Context
go version is 1.14.6, platform is linux/amd64
$ go version
go version go1.14.6 linux/amd64
This package contains code that interacts with a database, and the tests are run against a live PostgreSQL server.
One thing I tried: t.Skip() internally calls runtime.Goexit(), so I replaced calls to t.Skip and its variants with a simple return; it didn't change the outcome.
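To illustrate what I mean (a simplified sketch, not my actual test code; the helper is made up):
package mydbpackage

import "testing"

// preconditionMet is a hypothetical stand-in for whatever condition gated the skip.
func preconditionMet() bool { return false }

func TestSomething(t *testing.T) {
	if !preconditionMet() {
		// was: t.Skip("precondition not met"), which calls runtime.Goexit()
		return
	}
	// rest of the test, unchanged
}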
Question
Why aren't more samples collected? Is there some known pattern that blocks/slows down the sampler, or terminates it earlier than it should?
@Volker guided me to the answer in his comments:
-cpuprofile creates a profile in which only goroutines actively using the CPU are sampled.
In my use case: my Go code spends a lot of time waiting for answers from the PostgreSQL server.
Generating a trace using go test -trace=trace.out, and then extracting a network blocking profile using go tool trace -pprof=net trace.out > network.out yielded much more relevant information.
For reference, on top of opening the complete trace using go tool trace trace.out, here are the values you can pass to -pprof= (from the go tool trace docs; a consolidated example of the commands follows the list):
net: network blocking profile
sync: synchronization blocking profile
syscall: syscall blocking profile
sched: scheduler latency profile
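Putting the commands together, the sequence I ended up with looks roughly like this (network.out is just the output file name I picked):
$ go test ./mydbpackage/ -count 1 -trace=trace.out
$ go tool trace -pprof=net trace.out > network.out
$ go tool pprof -text -cum network.out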
Running SonarQube 5.6.6 from Jenkins on CentOS 7.3, I got the following error:
2017.09.01 19:05:16 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Failed to execute task AV485bp0qXlQ-QPWWE9A
java.lang.OutOfMemoryError: Java heap space
2017.09.01 19:05:17 ERROR [o.s.s.c.t.CeWorkerCallableImpl] Executed task | project=PP::Symphony3M | type=REPORT | id=AV485bp0qXlQ-QPWWE9A | time=74089ms
sonar.ce.javaOpts is set like below:
sonar.ce.javaOpts=-Xmx60g -Xms1g -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true
How much heap space should I give to SonarQube, when analyzing a one million LOC project? Or is there another way of avoiding Java heap space issues?
The max heap you can allocate depends on the free RAM on your server. The free command can help identify the stats; based on the free RAM you can set your Xmx value.
BTW, make sure the code compiles on the server. Increasing the heap will only help if you are able to compile but not scan.
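For illustration only (the -Xmx value below is a placeholder, not a recommendation for a one-million-LOC project), check the available memory first and then size the heap to fit within it:
$ free -m
# -Xmx8g is only an example; keep it below the free RAM reported above
sonar.ce.javaOpts=-Xmx8g -Xms1g -XX:+HeapDumpOnOutOfMemoryError -Djava.net.preferIPv4Stack=true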
I have set up a project with a specific date in sonar.timemachine.period5 in my project.properties file. This usually works perfectly, but sometimes the SonarQube runner doesn't make the comparison.
sonar.timemachine.period5=2015-11-04
Here is part of the log output from two consecutive sonar-runner analyses:
This one is not comparing against period 5:
10:28:17.546 INFO - Loaded quality gate 'MyProject'
10:28:17.591 INFO - Compare to previous analysis (2015-10-26)
10:28:17.596 INFO - Compare over 30 days (2015-10-24, analysis of Mon Oct 26 09:26:01 CET 2015)
10:28:17.597 INFO - Compare to previous version (2015-10-26)
while this one is....
10:37:43.996 INFO - Loaded quality gate 'MyProject'
10:37:44.054 INFO - Compare to previous analysis (2015-11-23)
10:37:44.060 INFO - Compare over 30 days (2015-10-24, analysis of Mon Oct 26 09:26:01 CET 2015)
10:37:44.061 INFO - Compare to previous version (2015-11-23)
10:37:44.062 INFO - Compare to date 2015-11-04 (analysis of 2015-11-23)
Any clues on why this is happening?
The result is that the project sometimes passes the quality gate when it certainly shouldn't.
I'm running SonarQube 5.1.2 and using Sonar-Runner 2.4.
Periods 4 and 5 are not set globally but on the project level. Double-check your first project to make sure it has a valid Period 5 value.
This question is related to:
Riak node not working, but using 100% cpu
but since the poster seems to have left I'm posting my case here.
Last night I installed Erlang (R15B01) from source, using the config options from the Riak website:
http://docs.basho.com/riak/1.2.1/tutorials/installation/Installing-Erlang/#Installing-on-Mac-OS-X
and Riak (1.4.1) on my 2013 MacBook Pro (2.8 GHz i7, 16 GB RAM, OS X 10.8.3). I did not change the ulimit, as I assumed it would be fine for a vanilla run.
Installation went fine; warnings but no errors, and I was able to run the toy examples no problem.
However, the empty instance quickly ate through all 4 cores and my machine started whining and overheating.
Looking in the logs I see the following error repeated a jillion times:
2013-10-11 09:04:04.266 [error] CRASH REPORT Process with 0 neighbours exited with reason: call to undefined function eleveldb:o
also tons of crash reports:
2013-10-11 09:14:47 =CRASH REPORT====
crasher:
initial call: riak_kv_index_hashtree:init/1
pid:
registered_name: []
exception exit: {{undef,[{eleveldb,open,
["./data/anti_entropy/479555224749202520035584085735030365824602865664",
[{create_if_missing,true},{max_open_files,20},{write_buffer_size,12886952}]],[]},
{hashtree,new_segment_store,2,[{file,"src/hashtree.erl"},{line,499}]},{hashtree,new,2,
[{file,"src/hashtree.erl"},{line,215}]},{riak_kv_index_hashtree,do_new_tree,2,
[{file,"src/riak_kv_index_hashtree.erl"},{line,421}]},{lists,foldl,3,[{file,"lists.erl"},
{line,1197}]},{riak_kv_index_hashtree,init_trees,2,[{file,"src/riak_kv_index_hashtree.erl"},
{line,366}]},{riak_kv_index_hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},
{line,226}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]}]},
[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,227}]}]}
ancestors: [,riak_core_vnode_sup,riak_core_sup,]
messages: []
links: []
dictionary: []
trap_exit: false
status: running
heap_size: 987
stack_size: 24
reductions: 492
neighbours:
erlang.log says
=====
===== LOGGING STARTED Fri Oct 11 09:04:01 CEST 2013
=====
Node 'riak@127.0.0.1' not responding to pings.
config is OK
!!!!
!!!! WARNING: ulimit -n is 2560; 4096 is the recommended minimum.
!!!!
Exec: /tmp/riak-1.4.1/rel/riak/bin/../erts-5.9.1/bin/erlexec
-boot /tmp/riak-1.4.1/rel/riak/bin/../releases/1.4.1/riak
-config /tmp/riak-1.4.1/rel/riak/bin/../etc/app.config
-pa /tmp/riak-1.4.1/rel/riak/bin/../lib/basho-patches
-args_file /tmp/riak-1.4.1/rel/riak/bin/../etc/vm.args -- console
Root: /tmp/riak-1.4.1/rel/riak/bin/..
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:8:8] [async-threads:64]
[kernel-poll:true]
Eshell V5.9.1 (abort with ^G)
(riak@127.0.0.1)1>
After less than 10 minutes there are already 144 MB of log files with variations of the above.
I had the same problem after building Riak 1.4.6 from source.
In the file etc/app.config I changed the line to
{anti_entropy, {off, []}},
LevelDB is used by AAE (active anti-entropy). See the config parameter anti_entropy_leveldb_opts.
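For context, in Riak 1.4 the anti_entropy setting sits in the riak_kv section of etc/app.config; after the change that section looks roughly like this (trimmed to the relevant line, everything else left as shipped):
{riak_kv, [
    %% disable active anti-entropy so the eleveldb-backed AAE trees are never opened
    {anti_entropy, {off, []}},
    %% ... other riak_kv settings unchanged
]},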
Use process of elimination:
It's hard to say without more information. Is the 200% being used by beam.smp? Do you see anything in console.log, error.log, or crash.log that would indicate something odd is happening? Are there clients communicating with the cluster at the time? If so, what client/protocol are they using and what types of operations are being performed (e.g. get/put/map reduce/etc.)?
References
Riak consuming too much CPU
Interesting sawtooth increasing CPU usage on lightly-used Riak
Inspecting a Node
Riak Performance Tuning
Open Files Limit
Configuration Files