Yarn install dies with JavaScript heap out of memory

I used to have a fully functional Yarn installation; everything worked fine with my corporate proxy. Then I left it untouched for five weeks, and all of a sudden it is not working anymore.
When I run a plain yarn install, it hangs for about two minutes and then dies. Its output:
yarn install v1.22.4
<--- Last few GCs --->
[1464:0000000000441F90] 104273 ms: Scavenge 1396.5 (1425.7) -> 1395.7 (1426.2) MB, 4.5 / 0.0 ms (average mu = 0.147, current mu = 0.083) allocation failure
[1464:0000000000441F90] 104362 ms: Scavenge 1396.9 (1426.2) -> 1396.2 (1427.2) MB, 5.5 / 0.0 ms (average mu = 0.147, current mu = 0.083) allocation failure
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 000000013F3FDD8A v8::internal::GCIdleTimeHandler::GCIdleTimeHandler+4506
2: 000000013F3D8886 node::MakeCallback+4534
3: 000000013F3D9200 node_module_register+2032
4: 000000013F6F30DE v8::internal::FatalProcessOutOfMemory+846
5: 000000013F6F300F v8::internal::FatalProcessOutOfMemory+639
6: 000000013F8D9804 v8::internal::Heap::MaxHeapGrowingFactor+9620
7: 000000013F8D07E6 v8::internal::ScavengeJob::operator=+24550
8: 000000013F8CEE3C v8::internal::ScavengeJob::operator=+17980
9: 000000013F8D4D87 v8::internal::Heap::CreateFillerObjectAt+1175
10: 000000013FC727D3 v8::internal::NativesCollection<0>::GetScriptsSource+547
11: 000000013F35FD92 v8::internal::StackGuard::ArchiveSpacePerThread+52242
12: 000000013F360453 v8::internal::StackGuard::ArchiveSpacePerThread+53971
13: 000000013F441614 uv_dlerror+2452
14: 000000013F4423E8 uv_run+232
15: 000000013F3DFE7E node::NewContext+1390
16: 000000013F3E048B node::NewIsolate+603
17: 000000013F3E08E7 node::Start+823
18: 000000013F28F3CC node::MultiIsolatePlatform::MultiIsolatePlatform+604
19: 000000013FED863C v8::internal::compiler::OperationTyper::ToBoolean+129516
20: 0000000077B6556D BaseThreadInitThunk+13
21: 0000000077CC372D RtlUserThreadStart+29

After a while I found the culprit: in .yarnrc I had an invalid cache-folder setting:
cache-folder "L:\\yarn-cache"
So how can it be that it used to work, but doesn't anymore?
When my company was setting up for social isolation, I decided to collect the dependencies of all my active (and not-so-active) projects on a pen drive, mounted at the time as L:.
Now I'm accessing the desktop remotely, and the pen drive is not physically there.
I just wish that yarn could fail with a better error message.
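In case it helps someone else, here is a minimal sketch of how to inspect and clear the stale setting with Yarn 1.x (the replacement path is only an example, and yarn config only touches the user-level .yarnrc; a project-level .yarnrc has to be edited by hand):
yarn config get cache-folder
yarn config delete cache-folder
yarn config set cache-folder "C:\\yarn-cache"
The first command shows which cache folder Yarn will try to use, the second drops the setting so Yarn falls back to its default cache, and the third points it at a drive that still exists instead.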

Related

YOLOv5 running on Mac

I have configured the environment to use PyTorch's new Metal Performance Shaders (MPS) backend for GPU training acceleration, but when running YOLOv5 on my MacBook M2 Air it always raises an error.
RES_DIR = set_res_dir()
if TRAIN:
    !python /Users/krishpatel/yolov5/train.py --data /Users/krishpatel/yolov5/roboflow/data.yaml --weights yolov5s.pt \
        --img 640 --epochs {EPOCHS} --batch-size 32 --device mps --name {RES_DIR}
This is the error (from the screenshot):
UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
t = t[j] # filter
0%| | 0/20 [00:16<?, ?it/s]
Traceback (most recent call last):
  File "/Users/krishpatel/yolov5/train.py", line 630, in <module>
    main(opt)
  File "/Users/krishpatel/yolov5/train.py", line 524, in main
    train(opt.hyp, opt, device, callbacks)
  File "/Users/krishpatel/yolov5/train.py", line 307, in train
    loss, loss_items = compute_loss(pred, targets.to(device))  # loss scaled by batch_size
  File "/Users/krishpatel/yolov5/utils/loss.py", line 125, in __call__
    tcls, tbox, indices, anchors = self.build_targets(p, targets)  # targets
  File "/Users/krishpatel/yolov5/utils/loss.py", line 213, in build_targets
    j, k = ((gxy % 1 < g) & (gxy > 1)).T
NotImplementedError: The operator 'aten::remainder.Tensor_out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
What should I do? Running on the CPU alone takes a very, very long time, so any suggestions would be appreciated.
I have searched everywhere but couldn't find anything for the MacBook. If nothing helps I will have to run it on Google Colab, but then what would be the point of buying an expensive MacBook and not using its GPU?
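A minimal sketch of the workaround the error message itself suggests: enable the CPU fallback for operators that MPS does not implement yet (slower than pure MPS, but the run completes). The epoch count and run name below are placeholders for {EPOCHS} and {RES_DIR}:
export PYTORCH_ENABLE_MPS_FALLBACK=1
python /Users/krishpatel/yolov5/train.py \
    --data /Users/krishpatel/yolov5/roboflow/data.yaml --weights yolov5s.pt \
    --img 640 --epochs 20 --batch-size 32 --device mps --name exp_mps
If the command is launched from a notebook with !python, the variable can be set beforehand in a cell with %env PYTORCH_ENABLE_MPS_FALLBACK=1.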

Sometimes when I run `npx hardhat compile` I get this error: FATAL ERROR: NewNativeModule Allocation failed - process out of memory

Sometimes when I run the command npx hardhat compile in my Windows CLI, I get the error below:
Compiling 72 files with 0.7.0
contracts/libraries/ERC20.sol: Warning: SPDX license identifier not provided in source file. Before publishing, consider adding a comment containing "SPDX-License-Identifier: <SPDX-License>" to each source file. Use "SPDX-License-Identifier: UNLICENSED" for non-open-source code.
Please see https://spdx.org for more information.
contracts/libraries/ERC1155/EnumerableSet.sol:158:5: Warning: Variable is shadowed in inline assembly by an instruction of the same name
function add(Bytes32Set storage set, bytes32 value) internal returns (bool) {
^ (Relevant source part starts here and spans across multiple lines).
contracts/libraries/ERC1155/EnumerableSet.sol:224:5: Warning: Variable is shadowed in inline assembly by an instruction of the same name
function add(AddressSet storage set, address value) internal returns (bool) {
^ (Relevant source part starts here and spans across multiple lines).
Compiling 1 file with 0.8.0
<--- Last few GCs --->
[8432:042B0460] 263058 ms: Mark-sweep (reduce) 349.8 (356.3) -> 248.2 (262.4) MB, 434.4 / 0.2 ms (+ 70.9 ms in 3 steps since start of marking, biggest step 69.6 ms, walltime since start of marking 800 ms) (average mu = 0.989, current mu = 0.990) memory[8432:042B0460] 263627
ms: Mark-sweep (reduce) 248.2 (259.4) -> 248.2 (252.1) MB, 555.5 / 0.0 ms (+ 0.0 ms in 0 steps since start of marking, biggest step 0.0 ms, walltime since start of marking 556 ms) (average mu = 0.969, current
mu = 0.023) memory p
<--- JS stacktrace --->
FATAL ERROR: NewNativeModule Allocation failed - process out of memory
After some time the error just goes away, probably after I've restarted my system or created a new Hardhat project and imported the code there.
But this is happening too often; what could be the cause?
I've done quite a bit of research, and some answers suggest it might be a problem with Node and the application's memory allocation, but I don't know how I would apply those solutions to a Hardhat project.
Here is a link to one possible solution: https://medium.com/@vuongtran/how-to-solve-process-out-of-memory-in-node-js-5f0de8f8464c
OS: WINDOWS 10
CLI: WINDOWS CMD
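A minimal sketch of the usual Node.js workaround those answers point at, assuming the compile step genuinely needs more heap: raise V8's old-space limit via NODE_OPTIONS before invoking Hardhat (the 4096 MB value is an example, entered at a Windows CMD prompt):
set NODE_OPTIONS=--max-old-space-size=4096
npx hardhat compile
Whether this fixes the underlying cause or merely postpones the crash depends on why the compiler's memory use grows in the first place.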

macOS M1 ARM64: Visual Studio Code fails to start the debugger with a C++ application

I am seeing the following error from the console window on VSC.
ERROR: Unable to start debugging. Unexpected LLDB output from command "-exec-run". process exited with status -1 (attach failed ((os/kern) invalid argument))
The program '/Users/torsi/vsc-workspace/Helloworld' has exited with code 42 (0x0000002a).
But if I run lldb from the command-line the application is fully debuggable.
I believe this is an entitlement issue having to do with these two required entitlements,
which I verified are not set for the VS Code application. I used codesign to verify the entitlements for the application by referencing the .app directory of VS Code.
<key>com.apple.security.network.client</key>
<key>get-task-allow</key>
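(For reference, checking an application's entitlements can be done roughly like this; the .app path is an example:)
codesign -d --entitlements :- "/Applications/Visual Studio Code.app"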
Here is the logging output from lldb:
1: (245) LaunchOptions{"name":"clang++ - Build and debug active file","type":"cppdbg","request":"launch","targetArchitecture":"arm64","program":"/Users/torsi/vsc-workspace/Helloworld","args":[],"stopAtEntry":true,"cwd":"/Users/torsi/vsc-workspace","environment":[],"externalConsole":false,"MIMode":"lldb","preLaunchTask":"clang++ build active file","logging":{"engineLogging":true},"__configurationTarget":5,"__sessionId":"7b7e7bcf-4626-413b-aec8-0b1bf19605b9","miDebuggerPath":"/Users/torsi/.vscode/extensions/ms-vscode.cpptools-1.3.1/debugAdapters/lldb-mi/bin/lldb-mi"}
1: (420) Starting: "/Users/torsi/.vscode/extensions/ms-vscode.cpptools-1.3.1/debugAdapters/lldb-mi/bin/lldb-mi" --interpreter=mi
1: (485) DebuggerPid=39408
1: (506) ->(gdb)
1: (549) <-1001-interpreter-exec console "version"
1: (551) ->~"lldb-1200.0.44.2\nApple Swift version 5.3.2 (swiftlang-1200.0.45 clang-1200.0.32.28)\n"
1: (553) ->1001^done
1: (554) ->(gdb)
1: (564) 1001: elapsed time 17
1: (571) <-1002-gdb-set auto-solib-add on
1: (572) ->1002^done
1: (572) ->(gdb)
1: (578) 1002: elapsed time 7
1: (580) <-1003-gdb-set solib-search-path "/Users/torsi/vsc-workspace:"
1: (580) ->1003^done
1: (580) ->(gdb)
1: (580) 1003: elapsed time 0
1: (581) <-1004-environment-cd /Users/torsi/vsc-workspace
1: (581) ->1004^done,path="/Users/torsi/vsc-workspace"
1: (581) ->(gdb)
1: (585) 1004: elapsed time 4
1: (585) <-1005-file-exec-and-symbols /Users/torsi/vsc-workspace/Helloworld
1: (811) ->1005^done
1: (811) ->(gdb)
1: (811) ->=library-loaded,id="/Users/torsi/vsc-workspace/Helloworld",target-name="/Users/torsi/vsc-workspace/Helloworld",host-name="/Users/torsi/vsc-workspace/Helloworld",symbols-loaded="1",symbols-path="/Users/torsi/vsc-workspace/Helloworld.dSYM/Contents/Resources/DWARF/Helloworld",loaded_addr="-",size="16384"
1: (811) 1005: elapsed time 226
1: (812) <-1006-interpreter-exec console "platform status"
1: (812) ->~" Platform: host\n Triple: x86_64-apple-macosx\nOS Version: 10.16 (20D80)\n Kernel: Darwin Kernel Version 20.3.0: Thu Jan 21 00:06:51 PST 2021; root:xnu-7195.81.3~1/RELEASE_ARM64_T8101\n Hostname: 127.0.0.1\nWorkingDir: /Users/torsi/vsc-workspace\n"
1: (812) ->1006^done
1: (813) ->(gdb)
1: (818) 1006: elapsed time 6
1: (821) <-1007-break-insert -f main
1: (823) ->1007^done,bkpt={number="1",type="breakpoint",disp="keep",enabled="y",addr="0x0000000100003f60",func="main",file="Helloworld.cpp",fullname="/Users/torsi/vsc-workspace/Helloworld.cpp",line="8",pending=["main"],times="0",original-location="main"}
1: (823) ->(gdb)
1: (823) ->=breakpoint-modified,bkpt={number="1",type="breakpoint",disp="keep",enabled="y",addr="0x0000000100003f60",func="main",file="Helloworld.cpp",fullname="/Users/torsi/vsc-workspace/Helloworld.cpp",line="8",pending=["main"],times="0",original-location="main"}
1: (823) ->(gdb)
1: (828) 1007: elapsed time 7
1: (847) Send Event AD7EngineCreateEvent
1: (852) Send Event AD7ProgramCreateEvent
1: (895) Send Event AD7LoadCompleteEvent
1: (902) <-1008-exec-run
1: (5954) ->1008^error,msg="process exited with status -1 (attach failed ((os/kern) invalid argument))"
1: (5955) ->(gdb)
1: (5959) 1008: elapsed time 5056
1: (6005) Send Event AD7MessageEvent
ERROR: Unable to start debugging. Unexpected LLDB output from command "-exec-run". process exited with status -1 (attach failed ((os/kern) invalid argument))
1: (6018) <--gdb-exit
1: (6019) ->^exit
1: (6019) ->=thread-group-exited,id="i1"
1: (6019) ->(gdb)
1: (6022) <-logout
1: (6043) Send Event AD7ProgramDestroyEvent
The program '/Users/torsi/vsc-workspace/Helloworld' has exited with code 42 (0x0000002a).
How do I set the entitlements, if this is indeed the problem?
Thanks,
Tom
Visual Studio Code does not support native LLDB debugging on Apple M1 CPU as of today.
Source: https://github.com/microsoft/vscode-cpptools/issues/7035
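For what it's worth, entitlements are normally applied by re-signing a binary against an entitlements plist; a rough sketch, where debug.plist and the target path are only examples (and, per the issue linked above, re-signing may not help on M1 at all):
cat > debug.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>get-task-allow</key>
    <true/>
    <key>com.apple.security.network.client</key>
    <true/>
</dict>
</plist>
EOF
codesign --force --sign - --entitlements debug.plist /path/to/lldb-mi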

nt!KiUnexpectedInterruptShadow when dumping IDT

I am trying to dump IDT, and this is what I get:
0: kd> !idt
Dumping IDT: 80f27000
30: 82127570 nt!KiUnexpectedInterrupt0Shadow
31: 82127580 nt!KiUnexpectedInterrupt1Shadow
32: 82127590 nt!KiUnexpectedInterrupt2Shadow
33: 821275a0 nt!KiUnexpectedInterrupt3Shadow
34: 821275b0 nt!KiUnexpectedInterrupt4Shadow
35: 821275c0 nt!KiUnexpectedInterrupt5Shadow
36: 821275d0 nt!KiUnexpectedInterrupt6Shadow
Running on Win 10 x86.
How can I see the "normal" IDT?
Thanks in advance.
Edit:
When I say normal I mean something like this:
31: 80dd816c i8042prt!I8042KeyboardInterruptService (KINTERRUPT 80dd8130)
32: 804ddd04 nt!KiUnexpectedInterrupt2
33: 80dd3224 serial!SerialCIsrSw (KINTERRUPT 80dd31e8)
34: 804ddd18 nt!KiUnexpectedInterrupt4
35: 804ddd22 nt!KiUnexpectedInterrupt5
36: 804ddd2c nt!KiUnexpectedInterrupt6
37: 804ddd36 nt!KiUnexpectedInterrupt7
38: 806edef0 hal!HalpProfileInterrupt
It was KVA Shadowing that was causing my problem. I fixed it by disabling the Spectre and Meltdown mitigations.
Edit:
You can download the InSpectre program from here: https://www.grc.com/inspectre.htm and just disable the mitigations with it, or you can go to the registry and change some values; more about that here: https://www.unknowncheats.me/forum/anti-cheat-bypass/360795-disable-kvashadowing-1809-1903-windows.html. Changing the registry didn't work for me on the latest Windows 10, though, so just download InSpectre and disable it there; that should be no problem.
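A sketch of the registry route, for completeness, using the mitigation-override values Microsoft documents (run from an elevated prompt and reboot afterwards; keep in mind that turning the mitigations off is a deliberate security trade-off):
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverride /t REG_DWORD /d 3 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v FeatureSettingsOverrideMask /t REG_DWORD /d 3 /f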

Espressif ESP8266 NONOS_SDK - Makefile

I would like to compile my source code for the ESP8266 (the Xtensa NONOS_SDK toolchain is already installed and working).
This is my folder structure:
I use this Makefile from an example from Espressif: https://github.com/espressif/ESP8266_NONOS_SDK/blob/master/examples/simple_pair/Makefile
and I also use this gen_misc.sh: https://github.com/espressif/ESP8266_NONOS_SDK/blob/master/examples/simple_pair/gen_misc.sh
I am running Ubuntu 18 under the Windows Subsystem for Linux on Windows 10. This is how I called gen_misc.sh from the command line:
./gen_misc.sh
gen_misc.sh version 20150511
Please follow below steps(1-5) to generate specific bin(s):
STEP 1: choose boot version(0=boot_v1.1, 1=boot_v1.2+, 2=none)
enter(0/1/2, default 2):
0
boot mode: old
STEP 2: choose bin generate(0=eagle.flash.bin+eagle.irom0text.bin, 1=user1.bin, 2=user2.bin)
enter (0/1/2, default 0):
0
ignore boot
generate bin: eagle.flash.bin+eagle.irom0text.bin
STEP 3: choose spi speed(0=20MHz, 1=26.7MHz, 2=40MHz, 3=80MHz)
enter (0/1/2/3, default 2):
2
spi speed: 40 MHz
STEP 4: choose spi mode(0=QIO, 1=QOUT, 2=DIO, 3=DOUT)
enter (0/1/2/3, default 0):
2
spi mode: DIO
STEP 5: choose spi size and map
0= 512KB( 256KB+ 256KB)
2=1024KB( 512KB+ 512KB)
3=2048KB( 512KB+ 512KB)
4=4096KB( 512KB+ 512KB)
5=2048KB(1024KB+1024KB)
6=4096KB(1024KB+1024KB)
7=4096KB(2048KB+2048KB) not support ,just for compatible with nodeMCU board
8=8192KB(1024KB+1024KB)
9=16384KB(1024KB+1024KB)
enter (0/2/3/4/5/6/7/8/9, default 0):
4
spi size: 4096KB
spi ota map: 512KB + 512KB
This is what I get as output:
start...
make: Nothing to be done for 'FORCE'.
Any ideas or hints about what I am doing wrong are greatly appreciated.
Please don't hesitate to ask if I didn't include any information you might need to answer this question.
The solution, as suggested here, is to copy the folder from which you are running the script into the root of the SDK folder and run the script from there.
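A rough sketch of that suggestion (the folder names are examples; the example Makefile assumes the project lives inside the SDK tree):
cp -r ~/my_project ~/ESP8266_NONOS_SDK/my_project
cd ~/ESP8266_NONOS_SDK/my_project
./gen_misc.sh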
