Sstate summary: Wanted 2730 Local 0 Mirrors 0 Missed 2730 Current 0 (0% match, 0% complete)
I'm building Linux with Yocto and the compilation process takes too much time. I also couldn't benefit from the global sstate cache for Poky. There is no firewall on my network. What could be the problem?
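For context, a shared-state mirror is normally wired up in conf/local.conf (or a distro conf file) via SSTATE_MIRRORS; a minimal sketch, where the server URL is a placeholder standing in for the real mirror:

SSTATE_MIRRORS ?= "file://.* http://sstate-mirror.example.com/PATH;downloadfilename=PATH"

A summary like the one above with Mirrors 0 and Missed 2730 typically means the mirror was never matched at all, so it is worth double-checking that this variable is actually set in the build's configuration and that the URL is reachable from the build host (e.g. with curl).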
Sometimes when I run the command npx hardhat compile in my Windows CLI, I get the error below:
Compiling 72 files with 0.7.0
contracts/libraries/ERC20.sol: Warning: SPDX license identifier not provided in source file. Before publishing, consider adding a comment containing "SPDX-License-Identifier: <SPDX-License>" to each source file. Use "SPDX-License-Identifier: UNLICENSED" for non-open-source code.
Please see https://spdx.org for more information.
contracts/libraries/ERC1155/EnumerableSet.sol:158:5: Warning: Variable is shadowed in inline assembly by an instruction of the same name
function add(Bytes32Set storage set, bytes32 value) internal returns (bool) {
^ (Relevant source part starts here and spans across multiple lines).
contracts/libraries/ERC1155/EnumerableSet.sol:224:5: Warning: Variable is shadowed in inline assembly by an instruction of the same name
function add(AddressSet storage set, address value) internal returns (bool) {
^ (Relevant source part starts here and spans across multiple lines).
Compiling 1 file with 0.8.0
<--- Last few GCs --->
[8432:042B0460] 263058 ms: Mark-sweep (reduce) 349.8 (356.3) -> 248.2 (262.4) MB, 434.4 / 0.2 ms (+ 70.9 ms in 3 steps since start of marking, biggest step 69.6 ms, walltime since start of marking 800 ms) (average mu = 0.989, current mu = 0.990) memory[8432:042B0460] 263627
ms: Mark-sweep (reduce) 248.2 (259.4) -> 248.2 (252.1) MB, 555.5 / 0.0 ms (+ 0.0 ms in 0 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 556 ms) (average mu = 0.969, current
mu = 0.023) memory p
<--- JS stacktrace --->
FATAL ERROR: NewNativeModule Allocation failed - process out of memory
After some time the error just kind of goes away.
It goes away, probably after I've restarted my system or created a new Hardhat project and imported the code there.
But this is happening too often; what could be the cause?
I've done quite some research, and some answers suggested it might be a problem with Node and the application's memory allocation, but I don't know how I would apply the solutions to a Hardhat project.
Here is a link to one possible solution: https://medium.com/@vuongtran/how-to-solve-process-out-of-memory-in-node-js-5f0de8f8464c
OS: WINDOWS 10
CLI: WINDOWS CMD
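For what it's worth, the usual fix for this class of Node error is raising the V8 heap ceiling with --max-old-space-size; a minimal sketch for Windows CMD, where 4096 (MB) is just an example value:

set NODE_OPTIONS=--max-old-space-size=4096
npx hardhat compile

Node reads NODE_OPTIONS from the environment, so this applies to the node process that Hardhat runs under without modifying the project itself.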
The tail of logging shows the following:
22:09:11.016 DEBUG: GET 200 http://someserversomewhere:9000/api/rules/search.protobuf?f=repo,name,severity,lang,internalKey,templateKey,params,actives,createdAt,updatedAt&activation=true&qprofile=AXaXXXXXXXXXXXXXXXw0&ps=500&p=1 | time=427ms
22:09:11.038 INFO: Load active rules (done) | time=12755ms
I have shelled into the running container to see if the scanner process is pegged/running/etc., and top shows the following:
Mem: 2960944K used, 106248K free, 67380K shrd, 5032K buff, 209352K cached
CPU: 0% usr 0% sys 0% nic 99% idle 0% io 0% irq 0% sirq
Load average: 5.01 5.03 4.83 1/752 46
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
1 0 root S 3811m 127% 1 0% /opt/java/openjdk/bin/java -Djava.awt.headless=true -classpath /opt/sonar-scanner/lib/sonar-scann
40 0 root S 2424 0% 0 0% bash
46 40 root R 1584 0% 0 0% top
I was unable to find any logging in the sonar-scanner-cli container to help indicate the state. It appears to just be hung and waiting for something to happen.
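One way to see what the hung JVM is actually doing (assuming the image's OpenJDK ships the JDK tools, which a JRE-only image would not) is to dump the scanner's threads; the container name here is a placeholder:

docker exec -it <scanner-container> jstack 1

PID 1 is the java process per the top output above; the thread dump usually shows whether it is blocked on a socket read, waiting on a lock, or something else.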
I am running SonarQube locally from Docker at the LTS version 7.9.5.
I am also running the Docker container sonarsource/sonar-scanner-cli, which is currently using the following version in the Dockerfile.
SONAR_SCANNER_VERSION=4.5.0.2216
I am triggering the scan via the following command:
docker run --rm \
-e SONAR_HOST_URL="http://someserversomewhere:9000" \
-e SONAR_LOGIN="nottherealusername" \
-e SONAR_PASSWORD="not12345likeinspaceballs" \
-v "$DOCKER_TEST_DIRECTORY:/usr/src" \
--link "myDockerContainerNameForSonarQube" \
sonarsource/sonar-scanner-cli -X -Dsonar.password=not12345likeinspaceballs -Dsonar.verbose=true \
-Dsonar.sources=app -Dsonar.tests=test -Dsonar.branch=master \
-Dsonar.projectKey="${PROJECT_KEY}" -Dsonar.log.level=TRACE \
-Dsonar.projectBaseDir=/usr/src/$PROJECT_NAME -Dsonar.working.directory=/usr/src/$PROJECT_NAME/$SCANNER_WORK_DIR
I have done a lot of digging to try to find anyone with similar issues and found the following older issue, which seems similar, but it is unclear how to determine whether I am experiencing something related: "Why does sonar-maven-plugin hang at loading global settings or active rules?"
I am stuck and not sure what to do next; any help or hints would be appreciated.
An additional note is that this process does work for the 8.4.2-developer version of SonarQube that I am planning to migrate to. The purpose of verifying 7.9.5 is to follow the recommended upgrade path from SonarQube, which recommends first bringing your current version to the latest LTS and running the data migration before jumping to the next major version.
Issue
I have a go package, with a test suite.
When I run the test suite for this package, the total runtime is ~7 seconds:
$ go test ./mydbpackage/ -count 1
ok mymodule/mydbpackage 7.253s
However, when I add a -cpuprofile=cpu.out option, the sampling does not cover the whole run:
$ go test ./mydbpackage/ -count 1 -cpuprofile=cpu.out
ok mymodule/mydbpackage 7.029s
$ go tool pprof -text -cum cpu.out
File: mydbpackage.test
Type: cpu
Time: Aug 6, 2020 at 9:42am (CEST)
Duration: 5.22s, Total samples = 780ms (14.95%) # <--- depending on the runs, I get 400ms to 1s
Showing nodes accounting for 780ms, 100% of 780ms total
flat flat% sum% cum cum%
0 0% 0% 440ms 56.41% testing.tRunner
10ms 1.28% 1.28% 220ms 28.21% database/sql.withLock
10ms 1.28% 2.56% 180ms 23.08% runtime.findrunnable
0 0% 2.56% 180ms 23.08% runtime.mcall
...
Looking at the collected samples:
# sample from another run :
$ go tool pprof -traces cpu.out | grep "ms " # get the first line of each sample
10ms runtime.nanotime
10ms fmt.(*readRune).ReadRune
30ms syscall.Syscall
10ms runtime.scanobject
10ms runtime.gentraceback
...
# 98 samples collected, for a total sum of 1.12s
The issue I see is: for some reason, the sampling profiler stops collecting samples, or is blocked/slowed down at some point.
Context
go version is 1.14.6, platform is linux/amd64
$ go version
go version go1.14.6 linux/amd64
This package contains code that interacts with a database, and the tests are run against a live PostgreSQL server.
One thing I tried: t.Skip() internally calls runtime.Goexit(), so I replaced calls to t.Skip and variants with a simple return; but it didn't change the outcome.
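Roughly what that replacement looked like (the test name and the skip condition here are hypothetical):

package mydbpackage

import "testing"

func TestQuery(t *testing.T) {
	if testing.Short() { // hypothetical skip condition
		// t.Skip("skipping in -short mode") // t.Skip ends the goroutine via runtime.Goexit()
		return // plain return instead, as described above
	}
	// ... actual test body exercising the database
}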
Question
Why aren't more samples collected? Is there some known pattern that blocks/slows down the sampler, or terminates it earlier than it should?
@Volker guided me to the answer in his comments:
-cpuprofile creates a profile in which only goroutines actively using the CPU are sampled.
In my use case: my Go code spends a lot of time waiting for answers from the PostgreSQL server.
Generating a trace using go test -trace=trace.out, and then extracting a network blocking profile using go tool trace -pprof=net trace.out > network.out yielded much more relevant information.
For reference, on top of opening the complete trace using go tool trace trace.out, here are the values you can pass to -pprof=:
From the go tool trace docs:
net: network blocking profile
sync: synchronization blocking profile
syscall: syscall blocking profile
sched: scheduler latency profile
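A short usage example chaining these together (same file names as used above; -top is one of pprof's standard report options):

$ go test ./mydbpackage/ -count 1 -trace=trace.out
$ go tool trace -pprof=net trace.out > network.out
$ go tool pprof -top network.out

The -text and -cum flags used earlier work on the extracted blocking profile as well.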
TypeScript checks the entire codebase on transpiling, even if only one file has actually changed. For small projects that is fine, yet since our codebase grew, it takes quite a long time.
During development, I want a quick response time for my unit tests. The unit tests should run as soon as possible.
Unfortunately, I have to wait on each run about 10-15 seconds for the unit tests to even start, as tsc takes a long time to transpile, and of that time 60%-80% is spent on checking.
These example runs are just from removing and adding a newline in one file:
yarn tsc v0.27.5
$ "/home/philipp/fancyProject/node_modules/.bin/tsc" "--watch" "--diagnostics"
Files: 511
Lines: 260611
Nodes: 898141
Identifiers: 323004
Symbols: 863060
Types: 302553
Memory used: 704680K
I/O read: 0.17s
I/O write: 0.09s
Parse time: 2.61s
Bind time: 0.95s
Check time: 7.65s
Emit time: 1.45s
Total time: 12.65s
00:35:34 - Compilation complete. Watching for file changes.
00:41:58 - File change detected. Starting incremental compilation...
Files: 511
Lines: 260612
Nodes: 898141
Identifiers: 323004
Symbols: 863060
Types: 302553
Memory used: 1085950K
I/O read: 0.00s
I/O write: 0.04s
Parse time: 0.68s
Bind time: 0.00s
Check time: 12.65s
Emit time: 1.36s
Total time: 14.69s
00:42:13 - Compilation complete. Watching for file changes.
00:42:17 - File change detected. Starting incremental compilation...
Files: 511
Lines: 260611
Nodes: 898141
Identifiers: 323004
Symbols: 863060
Types: 302553
Memory used: 1106446K
I/O read: 0.00s
I/O write: 0.12s
Parse time: 0.32s
Bind time: 0.01s
Check time: 9.28s
Emit time: 0.89s
Total time: 10.50s
00:42:27 - Compilation complete. Watching for file changes.
I wonder if there is a way to tell TypeScript:
Just treat everything as OK and just dump the JavaScript as quickly as possible to the disk.
I first want to ensure that my unit tests pass, in order to have a quick feedback loop.
And since my IDE already takes care of the type checks within the file I am currently working on, I rarely have type errors at transpile time anyway. And if there were a big issue, my unit tests should catch it.
When building the project, I would just use the classic tsc with the checks. As I have said, this is only for development and having a quick feedback loop.
Start using webpack.
Add awesome-typescript-loader.
Set transpileOnly to true in the loader options.
You can also change other parameters that can boost speed: ignoreDiagnostics, forceIsolatedModules, etc. A minimal config sketch follows this list.
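A minimal webpack.config.js sketch along those lines; the entry and output paths are placeholders for your actual project layout:

// webpack.config.js (development-only config: type checking skipped)
module.exports = {
  mode: 'development',
  entry: './src/index.ts', // placeholder entry point
  resolve: { extensions: ['.ts', '.js'] },
  module: {
    rules: [
      {
        test: /\.ts$/,
        loader: 'awesome-typescript-loader',
        options: {
          transpileOnly: true, // emit JS without running the type checker
        },
      },
    ],
  },
  output: { filename: 'bundle.js' }, // placeholder output name
};

For production builds you would keep running the classic tsc with full checks, as described above.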
I'm trying to build the dtrace target in the Xcode project using the
advice here:
http://osx86.boeaja.info/2009/10/building-xnu-kernel-on-snow-leopard/
But I get:
libproc.m:24:49: error: CoreSymbolication/CoreSymbolication.h: No such
file or directory
I realize CoreSymbolication is a private framework, but Apple must
make this header available somewhere in order for me to build dtrace,
right? Can someone point me to the necessary files to build dtrace?
As you probably figured out, Apple only has to release the parts of the kernel that are taken from other open-source projects, and that doesn't include the userland libraries they build on top of the kernel. CoreSymbolication/CoreSymbolication.h sounds a lot like a userspace Obj-C header, though, so you can probably build the kernel DTrace components without it. (Although I could very well be wrong.)
I would guess it's being used for symbol identification in the userland dtrace(1m) command. If only there were a tool that could help us figure this out... :-D
# dtrace -n 'pid$target:CoreSymbolication::entry {}' -c 'dtrace -ln syscall::write:entry'
dtrace: description 'pid$target:CoreSymbolication::entry ' matched 246 probes
ID PROVIDER MODULE FUNCTION NAME
147 syscall write entry
dtrace: pid 88089 has exited
CPU ID FUNCTION:NAME
2 6538 CSSymbolOwnerGetRegionWithName:entry
2 5014 CSSymbolOwnerForeachRegionWithName:entry
2 5078 CSRegionForeachSymbol:entry
2 6495 CSSymbolicatorGetSymbolOwnerWithUUIDAtTime:entry
2 6493 CSSymbolicatorForeachSymbolOwnerWithUUIDAtTime:entry
2 6494 CSSymbolicatorForeachSymbolOwnerWithCFUUIDBytesAtTime:entry
2 5048 CSSymbolOwnerGetDataFlags:entry
2 6538 CSSymbolOwnerGetRegionWithName:entry
2 5014 CSSymbolOwnerForeachRegionWithName:entry
2 5078 CSRegionForeachSymbol:entry
2 5092 CSSymbolIsExternal:entry
2 5092 CSSymbolIsExternal:entry
...
It looks like the library is in use by the dtrace command, anyway.
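Another quick way to confirm the linkage, without tracing, is to list the shared libraries the binary links against (otool ships with the Xcode command-line tools; /usr/sbin/dtrace is the usual location, adjust the path if yours differs):

$ otool -L /usr/sbin/dtrace | grep CoreSymbolication

If CoreSymbolication shows up there, the userland command depends on the private framework even though the kernel components do not.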