Sometimes when I run `npx hardhat compile` I get this error: FATAL ERROR: NewNativeModule Allocation failed - process out of memory - chainlink

Sometimes when I run the command npx hardhat compile in my Windows CLI, I get the error below:
Compiling 72 files with 0.7.0
contracts/libraries/ERC20.sol: Warning: SPDX license identifier not provided in source file. Before publishing, consider adding a comment containing "SPDX-License-Identifier: <SPDX-License>" to each source file. Use "SPDX-License-Identifier: UNLICENSED" for non-open-source code.
Please see https://spdx.org for more information.
contracts/libraries/ERC1155/EnumerableSet.sol:158:5: Warning: Variable is shadowed in inline assembly by an instruction of the same name
function add(Bytes32Set storage set, bytes32 value) internal returns (bool) {
^ (Relevant source part starts here and spans across multiple lines).
contracts/libraries/ERC1155/EnumerableSet.sol:224:5: Warning: Variable is shadowed in inline assembly by an instruction of the same name
function add(AddressSet storage set, address value) internal returns (bool) {
^ (Relevant source part starts here and spans across multiple lines).
Compiling 1 file with 0.8.0
<--- Last few GCs --->
[8432:042B0460] 263058 ms: Mark-sweep (reduce) 349.8 (356.3) -> 248.2 (262.4) MB, 434.4 / 0.2 ms (+ 70.9 ms in 3 steps since start of marking, biggest step 69.6 ms, walltime since start of marking 800 ms) (average mu = 0.989, current mu = 0.990) memory[8432:042B0460] 263627
ms: Mark-sweep (reduce) 248.2 (259.4) -> 248.2 (252.1) MB, 555.5 / 0.0 ms (+ 0.0 ms in 0 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 556 ms) (average mu = 0.969, current
mu = 0.023) memory p
<--- JS stacktrace --->
FATAL ERROR: NewNativeModule Allocation failed - process out of memory
After some time the error just goes away, usually after I've restarted my system or created a new Hardhat project and imported the code there.
But this is happening too often; what could be the cause?
I've done quite a bit of research, and some answers suggested it might be a problem with Node and the application's memory allocation, but I don't know how I would apply those solutions to a Hardhat project.
Here is a link to one possible solution: https://medium.com/@vuongtran/how-to-solve-process-out-of-memory-in-node-js-5f0de8f8464c
OS: WINDOWS 10
CLI: WINDOWS CMD
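One way to apply the Node memory-allocation fix from that article to a Hardhat project (a sketch, assuming the crash comes from V8's default heap limit; 4096 MB is an arbitrary example value) is to raise the old-space limit via the NODE_OPTIONS environment variable before compiling, e.g. in Windows CMD:
set NODE_OPTIONS=--max-old-space-size=4096
npx hardhat compile
NODE_OPTIONS is read by every node process that npx launches, so the flag reaches the compiler without modifying the project itself.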

Related

AWS Lambda Chalice Layers Segmentation Fault

I am deploying a Python 3.7 Lambda function via Chalice. Because the code, together with its environment requirements, is larger than the 50 MB limit, I am using the "automatic_layer" feature of Chalice to generate the layer with the requirements, which is awswrangler.
Because the generated layer is > 50 MB, I am uploading the generated managed-layer-...-python3.7.zip manually to S3 and creating a Lambda layer from it. Then I re-deploy with Chalice, removing the automatic_layer option and setting the layers to the ARN of the layer I manually created.
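For reference, the relevant stage configuration in .chalice/config.json looks roughly like this (a sketch; the app name, region, account id and layer name are placeholders):
{
  "version": "2.0",
  "app_name": "my-app",
  "stages": {
    "dev": {
      "automatic_layer": false,
      "layers": ["arn:aws:lambda:eu-west-1:123456789012:layer:awswrangler:1"]
    }
  }
}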
The function deployed this way worked fine a couple of times, then started failing occasionally with "Segmentation Fault". The error rate increased quickly, and now it is failing 100% of the time.
Traceback:
> OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
> START RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Version: $LATEST
> OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
> END RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc
> REPORT RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Duration: 7165.04 ms Billed Duration: 7166 ms Memory Size: 128 MB Max Memory Used: 41 MB
> RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Error: Runtime exited with error: signal: segmentation fault (core dumped)
> Runtime.ExitError
As awswrangler itself requires boto3 & botocore, and they are already present in the Lambda environment, I suspected there might be a conflict between different versions of boto. I tried the same flow while explicitly including boto3 and botocore in the requirements, but I am still receiving the same segmentation fault error.
Any help is much appreciated.
You could use AWS X-Ray to get more information on the problem: https://docs.aws.amazon.com/lambda/latest/dg/python-tracing.html
Moreover, you could analyze the core dump generated by executing your Lambda function's code in a bash shell:
ulimit -c unlimited
cd /tmp
execute your python ...
You should find a file named /tmp/core..... that you can analyze with gdb after downloading it. The command "man core" could help you.
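For example, the analysis step might look like this (a sketch; the interpreter path and core file name are assumptions for the Lambda Python 3.7 runtime):
gdb /var/lang/bin/python3.7 /tmp/core.12345
(gdb) bt        # print the backtrace of the crashing thread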

"go test -cpuprofile" does not generate a full trace

Issue
I have a Go package with a test suite.
When I run the test suite for this package, the total runtime is ~7 seconds:
$ go test ./mydbpackage/ -count 1
ok mymodule/mydbpackage 7.253s
However, when I add a -cpuprofile=cpu.out option, the sampling does not cover the whole run:
$ go test ./mydbpackage/ -count 1 -cpuprofile=cpu.out
ok mymodule/mydbpackage 7.029s
$ go tool pprof -text -cum cpu.out
File: mydbpackage.test
Type: cpu
Time: Aug 6, 2020 at 9:42am (CEST)
Duration: 5.22s, Total samples = 780ms (14.95%) # <--- depending on the runs, I get 400ms to 1s
Showing nodes accounting for 780ms, 100% of 780ms total
flat flat% sum% cum cum%
0 0% 0% 440ms 56.41% testing.tRunner
10ms 1.28% 1.28% 220ms 28.21% database/sql.withLock
10ms 1.28% 2.56% 180ms 23.08% runtime.findrunnable
0 0% 2.56% 180ms 23.08% runtime.mcall
...
Looking at the collected samples:
# sample from another run :
$ go tool pprof -traces cpu.out | grep "ms " # get the first line of each sample
10ms runtime.nanotime
10ms fmt.(*readRune).ReadRune
30ms syscall.Syscall
10ms runtime.scanobject
10ms runtime.gentraceback
...
# 98 samples collected, for a total sum of 1.12s
The issue I see is: for some reason, the sampling profiler stops collecting samples, or is blocked/slowed down at some point.
Context
The Go version is 1.14.6 and the platform is linux/amd64:
$ go version
go version go1.14.6 linux/amd64
This package contains code that interacts with a database, and the tests are run against a live PostgreSQL server.
One thing I tried: t.Skip() internally calls runtime.Goexit(), so I replaced calls to t.Skip and its variants with a simple return; but it didn't change the outcome.
Question
Why aren't more samples collected? Is there some known pattern that blocks/slows down the sampler, or terminates it earlier than it should?
@Volker guided me to the answer in his comments:
-cpuprofile creates a profile in which only goroutines actively using the CPU are sampled.
In my use case, my Go code spends a lot of time waiting for answers from the PostgreSQL server.
Generating a trace using go test -trace=trace.out, and then extracting a network blocking profile using go tool trace -pprof=net trace.out > network.out, yielded much more relevant information (see the example after the list below).
For reference, on top of opening the complete trace using go tool trace trace.out, here are the values you can pass to -pprof=, from the go tool trace docs:
net: network blocking profile
sync: synchronization blocking profile
syscall: syscall blocking profile
sched: scheduler latency profile
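For example, the full workflow looks like this (file names are arbitrary):
$ go test ./mydbpackage/ -count 1 -trace=trace.out
$ go tool trace -pprof=net trace.out > network.out
$ go tool pprof -text -cum network.out
The resulting profile attributes time to goroutines that were blocked on network I/O, which is where the waiting actually happens in this test suite.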

Is it possible to disable type checks when transpiling typescript via tsc to speed up transpiling?

TypeScript checks the entire codebase when transpiling, even if only one file has actually changed. For small projects that is fine, but since our codebase has grown, it takes quite a long time.
During development, I want a quick response time from my unit tests; they should run as soon as possible.
Unfortunately, I have to wait about 10-15 seconds on each run for the unit tests to even start, as tsc takes a long time to transpile, and 60%-80% of that time is spent on checking.
These example runs are just from removing and adding a newline in one file:
yarn tsc v0.27.5
$ "/home/philipp/fancyProject/node_modules/.bin/tsc" "--watch" "--diagnostics"
Files: 511
Lines: 260611
Nodes: 898141
Identifiers: 323004
Symbols: 863060
Types: 302553
Memory used: 704680K
I/O read: 0.17s
I/O write: 0.09s
Parse time: 2.61s
Bind time: 0.95s
Check time: 7.65s
Emit time: 1.45s
Total time: 12.65s
00:35:34 - Compilation complete. Watching for file changes.
00:41:58 - File change detected. Starting incremental compilation...
Files: 511
Lines: 260612
Nodes: 898141
Identifiers: 323004
Symbols: 863060
Types: 302553
Memory used: 1085950K
I/O read: 0.00s
I/O write: 0.04s
Parse time: 0.68s
Bind time: 0.00s
Check time: 12.65s
Emit time: 1.36s
Total time: 14.69s
00:42:13 - Compilation complete. Watching for file changes.
00:42:17 - File change detected. Starting incremental compilation...
Files: 511
Lines: 260611
Nodes: 898141
Identifiers: 323004
Symbols: 863060
Types: 302553
Memory used: 1106446K
I/O read: 0.00s
I/O write: 0.12s
Parse time: 0.32s
Bind time: 0.01s
Check time: 9.28s
Emit time: 0.89s
Total time: 10.50s
00:42:27 - Compilation complete. Watching for file changes.
I wonder if there is a way to tell TypeScript:
Just treat everything as OK and dump the JavaScript to disk as quickly as possible.
I want to ensure first that my unit tests pass, in order to have a quick feedback loop.
Since my IDE already takes care of type checks within the file I am currently working on, I rarely have mistakes caught during transpilation anyway. And if there were a big issue, my unit tests should catch it.
When building the project, I would just use the classic tsc with the checks. As I said, this is only for development, to have a quick feedback loop.
Start using webpack.
Add awesome-typescript-loader.
Set transpileOnly to true in the loader settings (see the sketch below).
You can also change other parameters that can boost speed: ignoreDiagnostics, forceIsolatedModules, etc.
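A minimal sketch of the webpack configuration (the entry point and file patterns are assumptions; adjust to your project):
// webpack.config.js
module.exports = {
  entry: './src/index.ts',
  resolve: { extensions: ['.ts', '.tsx', '.js'] },
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        loader: 'awesome-typescript-loader',
        options: {
          transpileOnly: true, // emit JavaScript without running the type checker
        },
      },
    ],
  },
};
With transpileOnly, each file is converted in isolation, so the expensive cross-file check phase visible in the tsc diagnostics above disappears; full checking can still run at build time or as a separate tsc --noEmit step.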

What are these "Expectation failed" messages in VS2010 PGO and how do I fix them?

When I perform the PGO optimization step (using LINK.EXE /LTCG:PGU), the Visual Studio 2010 linker complains:
Merging foo!1.pgc
'FOO_EDGE::get_input': Arc 2 --> 4 has negative count (-414343)
Expectation failed: f line 4241
'FOO_DELAY::set_delay': Block 18 outgoing counts differ from block count (-9 diff)
Expectation failed: f line 4261
Expectation failed: f line 4211
'FOO_DELAY::set_delay': Arc 12 --> 23 has negative count (-3)
Expectation failed: f line 4220
Generating code
907 of 4948 ( 18.33%) profiled functions will be compiled for speed
4948 of 4948 functions (100.0%) were optimized using profile data
42912225037 of 42912225037 instructions (100.0%) were optimized using profile data
What's causing these "expectation failures"? How should I address them? It seems like PGO is still optimizing the code, but I'm a little suspicious of the quality/completeness of the optimizations in the presence of these messages.
It seems that these errors occur when performing PGO-instrumented runs of multithreaded applications. They can be avoided by compiling (not linking) with the /PogoSafeMode flag on x64.
I didn't find the MSDN documentation on this flag particularly clear; the correct procedure for performing PGO on multithreaded code is (see the command sketch after the list):
Compile with cl.exe /PogoSafeMode
Link with link.exe /LTCG:PGI
Execute your multithreaded profiling run(s)
Re-link with link.exe /LTCG:PGO
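A sketch of that sequence as concrete commands (file names are placeholders; /GL is required for link-time code generation):
cl /c /GL /PogoSafeMode foo.cpp
link /LTCG:PGI /OUT:foo.exe foo.obj
rem run the instrumented binary through your multithreaded scenarios
foo.exe
link /LTCG:PGO /OUT:foo.exe foo.obj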

No result with opencover + xunit

I was trying to use OpenCover (downloaded today) to get coverage of my tests. Here is the command line I've used:
OpenCover.Console.exe -target:"c:\Programmes2\xunit\xunit.console.clr4.x86.exe" -targetargs:"""C:\Sources\Project\BackOffice.Tests\bin\Debug\BackOffice.Tests.dll"" /noshadow " -output:bo.coverage.xml -targetdir:"C:\Sources\Project\BackOffice.Tests\bin\Debug" -filter:+[*]*
And here is the output I get:
xUnit.net console test runner (32-bit .NET 4.0.30319.269)
Copyright (C) 2007-11 Microsoft Corporation.
xunit.dll: Version 1.9.0.1566
Test assembly: C:\Sources\Project\BackOffice.Tests\bin\Debug\BackOffice.Tests.dll
31 total, 0 failed, 0 skipped, took 2.760 seconds
Committing...
No results - no assemblies that matched the supplied filter were instrumented
this could be due to missing PDBs for the assemblies that match the filter
please review the output file and refer to the Usage guide (Usage.rtf)
The generated report is always the same:
<?xml version="1.0" encoding="utf-8"?>
<CoverageSession xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Modules />
</CoverageSession>
A bit more context: the PDBs are present in the target folder, and I'm running the command prompt as an administrator. The project tested is a .NET 4 / MVC 3 application. My computer is running Windows 7 (32-bit). Not sure if it's relevant in any way, but the x86 folder in the build output is empty, even if I force the target platform to be x86.
Also, when I try to register OpenCover.Profiler.dll with regsvr32, I get an error saying the DLL may not be compatible with my Windows version.
If I try to use the -register or -register:user parameters, I get an exception:
An exception occured: Failed to register(user:True,register:True,is64:False):3 the profiler assembly; you may want to look into permissions or using the -register:user option instead. C:\Windows\system32\regsvr32.exe /s /n /i:user "C:\Sources\Opencover\sawilde-opencover-be6e491\main\bin\Debug\x86\OpenCover.Profiler.dll"
stack:
at OpenCover.Framework.ProfilerRegistration.ExecuteRegsvr32(Boolean userRegistration, Boolean register, Boolean is64) in C:\Sources\Opencover\sawilde-opencover-be6e491\main\OpenCover.Framework\ProfilerRegistration.cs:line 59
at OpenCover.Framework.ProfilerRegistration.ExecuteRegsvr32(Boolean userRegistration, Boolean register) in C:\Sources\Opencover\sawilde-opencover-be6e491\main\OpenCover.Framework\ProfilerRegistration.cs:line 45
at OpenCover.Framework.ProfilerRegistration.Register(Boolean userRegistration) in C:\Sources\Opencover\sawilde-opencover-be6e491\main\OpenCover.Framework\ProfilerRegistration.cs:line 31
at OpenCover.Console.Program.Main(String[] args) in C:\Sources\Opencover\sawilde-opencover-be6e491\main\OpenCover.Console\Program.cs:line 82
I also tried with a DLL project (.NET 4) tested by a different project (also xunit), with the same (lack of) result.
Any help appreciated!
Downloading the release package solved the exception from the register parameter. But running the same command line generated multiple errors of this kind:
BackOffice.Tests.HomeControllerShould.Redirect_To_Action_Feed_Index [FAIL]
System.MissingMethodException : Method not found: 'Void System.CannotUnloadAppDomainException.SafeVisited(Int32)'.
Stack Trace:
at BackOffice.Tests.HomeControllerShould..ctor()
with this result:
31 total, 31 failed, 0 skipped, took 0.241 seconds
Committing...
Visited Classes 0 of 44 (0)
Visited Methods 0 of 183 (0)
Visited Points 0 of 1352 (0)
Visited Branches 0 of 322 (0)
==== Alternative Results (includes all methods including those without corresponding source) ====
Alternative Visited Classes 0 of 53 (0)
Alternative Visited Methods 0 of 268 (0)
After looking around for similar issues, I found this issue on GitHub and tried the -oldStyle parameter. It solved mine.
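For reference, this is just the original command line with the extra switch appended:
OpenCover.Console.exe -target:"c:\Programmes2\xunit\xunit.console.clr4.x86.exe" -targetargs:"""C:\Sources\Project\BackOffice.Tests\bin\Debug\BackOffice.Tests.dll"" /noshadow" -output:bo.coverage.xml -targetdir:"C:\Sources\Project\BackOffice.Tests\bin\Debug" -filter:+[*]* -oldStyle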
@Shaun Wilde, if by any chance you see this question again, could you tell us whether this is the recommended way to solve it, and whether we lose anything by using it instead of the "normal" way? (I would also suggest adding this parameter to the documentation page.)
