Cypress + ECONNRESET

We've suddenly been having an issue with Cypress automation, and it is impacting a number of people on different PCs.
It will run, then stop with this error:
Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:211:20)
{
errno: -4077,
code: 'ECONNRESET',
syscall: 'read'
}
Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:211:20)
Details...
Cypress Version 9.5.0 (but we have rolled back all the way to 9.2 and the issue is present in all of them)
Node.js Version 16 (again, we've rolled back through various versions of Node 16 and tried 17)
Browser - Edge, Chrome, and Firefox all have this issue (ONLY Electron will run and stay alive)
Our Infrastructure team has helped roll back patching and group policy, and the proxy has been opened, to the point where it's a standard PC with a direct connection to the internet, and the issue is still present.
Running Cypress in Debug mode gives us...
cypress:server:api request to url: POST https://api.cypress.io/exceptions with params: {"body":{"err":{"name":"Error","message":"read ECONNRESET","stack":"Error: read ECONNRESET\n at TCP.onStreamRead (node:internal<stripped-path>stream_base_commons:211:20)\n"},"version":"9.5.0","osName":"win32","osVersion":"10.0.19043","osCpus":[{"model":"Intel(R) Core(TM) i7-6700K CPU # 4.00GHz","speed":4008,"times":{"user":35718,"nice":0,"sys":34734,"idle":522062,"irq":1421}},{"model":"Intel(R) Core(TM) i7-6700K CPU # 4.00GHz","speed":4008,"times":{"user":26468,"nice":0,"sys":18562,"idle":547484,"irq":171}},{"model":"Intel(R) Core(TM) i7-6700K CPU # 4.00GHz","speed":4008,"times":{"user":48437,"nice":0,"sys":32609,"idle":511328,"irq":250}},{"model":"Intel(R) Core(TM) i7-6700K CPU # 4.00GHz","speed":4008,"times":{"user":29453,"nice":0,"sys":15000,"idle":547921,"irq":281}},{"model":"Intel(R) Core(TM) i7-6700K CPU # 4.00GHz","speed":4008,"times":{"user":36593,"nice":0,"sys":20656,"idle":535125,"irq":187}},{"model":"Intel(R) Core(TM) i7-6700K CPU # 4.00GHz","speed":4008,"times":{"user":50828,"nice":0,"sys":13906,"idle":527640,"irq":234}},{"model":"Intel(R) Core(TM) i7-6700K CPU # 4.00GHz","speed":4008,"times":{"user":37796,"nice":0,"sys":19703,"idle":534875,"irq":46}},{"model":"Intel(R) Core(TM) i7-6700K CPU # 4.00GHz","speed":4008,"times":{"user":27937,"nice":0,"sys":13546,"idle":550890,"irq":140}}],"osMemory":{"free":26111950848,"total":34228842496}},"headers":{"x-os-name":"win32","x-cypress-version":"9.5.0"}} and token: undefined +0ms
cypress:network:agent addRequest called { isHttps: true, href: 'https://api.cypress.io/exceptions' } +11s
cypress:network:connect beginning getAddress { hostname: 'api.cypress.io', port: 443 } +14s
Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:211:20)
{
errno: -4077,
code: 'ECONNRESET',
syscall: 'read'
}
Error: read ECONNRESET
at TCP.onStreamRead (node:internal/stream_base_commons:211:20)
https://api.cypress.io has been opened up in the proxy, and even bypassing the proxy completely so the PC has a direct connection to the internet still gave the same result.
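As a sanity check outside Cypress, the same endpoint from the debug log can be hit with plain curl; a connection reset here too would point at the network rather than Cypress itself:
curl -v https://api.cypress.io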
Any suggestions?

The way we got past this is to use Electron as your default browser to run your tests. This will not get blocked by your antivirus.

Update...
Resolved (for us at least): after a process of elimination, removing various apps / policies etc., we identified a setting within the antivirus that was scanning and interfering with internet traffic (how it got turned on, who knows, as our infra is so tightly controlled). Turned this off and it all started working as expected.

It is possible to opt out of sending information to https://api.cypress.io/exceptions by setting the environment variable CYPRESS_CRASH_REPORTS=0
See this link for more information: https://github.com/cypress-io/cypress/issues/4386
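A minimal way to set this, assuming Cypress is launched from a shell (the variable name is the documented one; the launch commands are just illustrative):
# Linux/macOS
CYPRESS_CRASH_REPORTS=0 npx cypress run
# Windows PowerShell
$env:CYPRESS_CRASH_REPORTS = "0"; npx cypress run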

To resolve this issue, remove the antivirus or any other protection software installed on your system.

Related

Reasons for this unexpected termination: OMNeT++ terminated automatically after about 20 seconds of IDE loading

When I run OMNeT++ and its IDE has finished loading, after about 20 seconds the OMNeT++ IDE and OMNeT++ itself terminate automatically.
What are the possible reasons for this unwanted behaviour?
OMNeT++ is installed on Ubuntu 22.04 LTS, running virtualized in VirtualBox.
Thanks in advance
Error log:
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f5970257bd2, pid=2831, tid=3159
#
# JRE version: OpenJDK Runtime Environment Temurin-17.0.1+12 (17.0.1+12) (build 17.0.1+12)
# Java VM: OpenJDK 64-Bit Server VM Temurin-17.0.1+12 (17.0.1+12, mixed mode, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# C [libc.so.6+0x7fbd2] fread+0x22
#
# Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E" (or dumping to /home/maryam/core.2831)
#
# An error report file with more information is saved as:
# /home/maryam/hs_err_pid2831.log
#
# If you would like to submit a bug report, please visit:
# https://github.com/adoptium/adoptium-support/issues
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
If the IDE just disappears without any interaction or error report, the most likely cause is that your VM is configured with a very low amount of memory and the OOM killer kills the IDE process because of low-memory conditions.
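A quick way to check for this on a standard Ubuntu guest is to look for OOM-killer activity in the kernel log right after the IDE disappears (stock tooling, output will vary):
dmesg | grep -i -E "out of memory|killed process"
# or, on systemd-based systems
journalctl -k | grep -i oom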

AWS Lambda Chalice Layers Segmentation Fault

I am deploying a Python 3.7 Lambda function via Chalice. Because the code, with its environment requirements, is larger than the 50 MB limit, I am using the "automatic_layer" feature of Chalice to generate the layer with the requirements, which is awswrangler.
Because the generated layer is > 50 MB, I am uploading the generated managed-layer-...-python3.7.zip manually to S3 and creating a Lambda layer from it. Then I re-deploy with Chalice, removing the automatic_layer option and setting the layers to the ARN of the layer I created manually.
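For reference, that manual "publish layer" step is done roughly like this with the standard AWS CLI (bucket, key, and layer name here are placeholders for the real ones):
aws lambda publish-layer-version \
    --layer-name my-wrangler-layer \
    --content S3Bucket=my-bucket,S3Key=managed-layer-python3.7.zip \
    --compatible-runtimes python3.7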
The function deployed this way worked OK a couple of times, then started failing occasionally with "Segmentation Fault". The error rate increased quickly, and now it is failing 100% of the time.
Traceback:
> OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
> START RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Version: $LATEST
> OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
> END RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc
> REPORT RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Duration: 7165.04 ms Billed Duration: 7166 ms Memory Size: 128 MB Max Memory Used: 41 MB
> RequestId: 3b98bd4b-6cda-4d21-8090-1a49b17c06fc Error: Runtime exited with error: signal: segmentation fault (core dumped)
> Runtime.ExitError
As awswrangler itself requires boto3 & botocore, and they are already in the Lambda environment, I suspected there might be a conflict between different versions of boto. I tried the same flow while explicitly including boto3 and botocore in the requirements, but I am still getting the same segmentation fault error.
Any help is much appreciated.
You could use AWS X-Ray to get more information on the problem: https://docs.aws.amazon.com/lambda/latest/dg/python-tracing.html
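If you go that route, active tracing also has to be enabled on the function itself; a sketch using the standard AWS CLI (the function name is a placeholder):
aws lambda update-function-configuration \
    --function-name my-chalice-app-dev \
    --tracing-config Mode=Active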
Moreover, you might analyze the core dump generated by executing your Lambda function's code in a bash shell:
ulimit -c unlimited
cd /tmp
execute your python ...
You should find a file named /tmp/core..... that you can analyze with gdb after downloading it. The command "man core" could help you.
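For the gdb step, a rough sketch once the core file is downloaded locally (the interpreter path and core file name are placeholders; use a Python build matching the Lambda runtime):
gdb /usr/bin/python3.7 ./core.12345
(gdb) bt    # backtrace of the crashing thread, usually points at the offending native library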

ESP32 WROVER-B programming issue

When trying to program an ESP32-WROVER-B, it stops just after starting. I have connected a button on the EN pin and tried various combinations, but it didn't help.
I also tried changing the baud rate and fixing the flash size to 4 MB, but still nothing.
This is the output:
$ make -j4 flash monitor
Toolchain path: /opt/xtensa-esp32-elf/bin/xtensa-esp32-elf-gcc
Toolchain version: crosstool-ng-1.22.0-80-g6c4433a5
Compiler version: 5.2.0
App "websocket_server" version: b4b6984-dirty
Python requirements from C:/ESP32/esp-idf/requirements.txt are satisfied.
Flashing binaries to serial port COM8 (app at offset 0x10000)...
esptool.py v2.6-beta1
Serial port COM8
Connecting........_____..
Chip is ESP32D0WDQ5 (revision 1)
Features: WiFi, BT, Dual Core, 240MHz, VRef calibration in efuse, Coding Scheme None
MAC: 24:6f:28:4c:9b:4c
Uploading stub...
Running stub...
Stub running...
Changing baud rate to 230400
Changed.
Configuring flash size...
Compressed 24240 bytes to 14517...
Wrote 24240 bytes (14517 compressed) at 0x00001000 in 0.7 seconds (effective 295.4 kbit/s)...
A fatal error occurred: Timed out waiting for packet header
make: *** [/c/ESP32/esp-idf/components/esptool_py/Makefile.projbuild:63: flash] Error 2
Any hint?
In case your ESP32 looks like this one, you need to keep the RST button pressed while uploading the new code to avoid that error.
Hope this helps!

Thread 'main' panicked at 'could not initialize thread_rng: All entropy sources failed

After running a cross-compiled Rust ARM binary on a Raspberry Pi Zero for a few hours, the process panics with the following error:
Feb 02 12:03:17 raspberrypi monitoring-service[339]: thread 'main' panicked at 'could not initialize thread_rng: All entropy sources failed (permanently unavailable); cause: getrandom not ready (not ready yet); cause: Resource temporarily unavailable (os error 11)', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/rand-0.6.1/src/rngs/thread.rs:82:17
systemd tried to restart the process, but it failed with the same error several times. The next day I was able to restart it manually, but it is only a countdown until it fails again.
I suspect this is caused by the ws websocket crate indirectly using the rand v0.6.1 crate, but I'm not sure.
Is there a way to force these packages to use a newer version of the rand crate, or do I need to tweak an OS setting on Raspbian? I'm running Raspbian Stretch (v9), kernel v4.14.79+. As an internal monitoring tool, my application requires no encryption or privacy so ideally I can get around the entropy issue.
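One way to confirm which crate is actually pulling in the old rand, assuming a cargo new enough to have the built-in tree subcommand (older toolchains need the cargo-tree plugin):
# invert the dependency tree for rand to list every crate that depends on it
cargo tree -i rand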

Riak eating 100% CPU on OSX install

This question is related to:
Riak node not working, but using 100% cpu
but since the poster seems to have left I'm posting my case here.
Last night I installed Erlang (R15B01) from source, using the config options from the Riak website:
http://docs.basho.com/riak/1.2.1/tutorials/installation/Installing-Erlang/#Installing-on-Mac-OS-X
and Riak (1.4.1) on my 2013 MacBook Pro (2.8 GHz i7, 16 GB RAM, OS X 10.8.3). I did not change the ulimit, as I assumed it would be fine for a vanilla run.
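For reference, raising that limit for the current shell session before starting Riak would look like this (4096 is the minimum the startup log warns about below):
ulimit -n 4096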
Installation went fine; warnings but no errors, and I was able to run the toy examples no problem.
However the empty instance quickly ate through all 4 cores and my machine started whining and overheating.
Looking in the logs I see the following error repeated a jillion times:
2013-10-11 09:04:04.266 [error] CRASH REPORT Process with 0 neighbours exited with reason: call to undefined function eleveldb:open
also tons of crash reports:
2013-10-11 09:14:47 =CRASH REPORT====
crasher:
initial call: riak_kv_index_hashtree:init/1
pid:
registered_name: []
exception exit: {{undef,[{eleveldb,open,
["./data/anti_entropy/479555224749202520035584085735030365824602865664",
[{create_if_missing,true},{max_open_files,20},{write_buffer_size,12886952}]],[]},
{hashtree,new_segment_store,2,[{file,"src/hashtree.erl"},{line,499}]},{hashtree,new,2,
[{file,"src/hashtree.erl"},{line,215}]},{riak_kv_index_hashtree,do_new_tree,2,
[{file,"src/riak_kv_index_hashtree.erl"},{line,421}]},{lists,foldl,3,[{file,"lists.erl"},
{line,1197}]},{riak_kv_index_hashtree,init_trees,2,[{file,"src/riak_kv_index_hashtree.erl"},
{line,366}]},{riak_kv_index_hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},
{line,226}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]}]},
[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,
[{file,"proc_lib.erl"},{line,227}]}]}
ancestors: [,riak_core_vnode_sup,riak_core_sup,]
messages: []
links: []
dictionary: []
trap_exit: false
status: running
heap_size: 987
stack_size: 24
reductions: 492
neighbours:
erlang.log says
=====
===== LOGGING STARTED Fri Oct 11 09:04:01 CEST 2013
=====
Node 'riak@127.0.0.1' not responding to pings.
config is OK
!!!!
!!!! WARNING: ulimit -n is 2560; 4096 is the recommended minimum.
!!!!
Exec: /tmp/riak-1.4.1/rel/riak/bin/../erts-5.9.1/bin/erlexec
-boot /tmp/riak-1.4.1/rel/riak/bin/../releases/1.4.1/riak
-config /tmp/riak-1.4.1/rel/riak/bin/../etc/app.config
-pa /tmp/riak-1.4.1/rel/riak/bin/../lib/basho-patches
-args_file /tmp/riak-1.4.1/rel/riak/bin/../etc/vm.args -- console
Root: /tmp/riak-1.4.1/rel/riak/bin/..
Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:8:8] [async-threads:64]
[kernel-poll:true]
Eshell V5.9.1 (abort with ^G)
(riak@127.0.0.1)1>
After less than 10 minutes there are already 144 MB of log files with variations of the above.
I had the same problem after building Riak 1.4.6 from source.
I changed the line in the file etc/app.config to
{anti_entropy, {off, []}},
LevelDB is used by AAE (active anti-entropy). See the config parameter anti_entropy_leveldb_opts.
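For context, a sketch of where that line sits in etc/app.config (the surrounding comments are illustrative; only the anti_entropy tuple matters here):
{riak_kv, [
    %% disable active anti-entropy so the eleveldb-backed AAE hash trees are never opened
    {anti_entropy, {off, []}},
    %% ... other riak_kv settings left as they were ...
]},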
Use a process of elimination:
It's hard to say without more information. Is the 200% being used by beam.smp? Do you see anything in console.log, error.log, or crash.log that would indicate something odd is happening? Are there clients communicating with the cluster at the time? If so, what client/protocol are they using and what types of operations are being performed (e.g. get/put/map reduce)?
References
Riak consuming too much CPU
Interesting sawtooth increasing CPU usage on lightly-used Riak
Inspecting a Node
Riak Performance Tuning
Open Files Limit
Configuration Files
