ESP32S3 - Micropython - network.WLAN.active(True) Hangs - esp32

I purchased this from Digikey (docs).
I flashed the latest firmware release with the following commands:
$ esptool.py \
--port /dev/tty.usbmodem14201 \
--baud 460800 \
--before default_reset \
--after hard_reset \
--chip esp32s3 \
erase_flash
$ esptool.py \
--chip esp32s3 \
--port /dev/tty.usbmodem14201 \
--baud 460800 \
write_flash \
-z 0 \
GENERIC_S3-20220117-v1.18.bin
With the resultant interpreter prompt, I instantiate a network.WLAN object and call the .active(True) method to enable the interface, but it hangs forever. I replicated the issue on identical hardware, so I'm not sure whether this is a software bug or a hardware bug. I have only tested with the device powered via a USB port connected to my dev laptop, so I haven't looked into whether this could be a power issue.
.venv/bin/python -m serial.tools.miniterm /dev/tty.usbmodem1234561 115200
--- Miniterm on /dev/tty.usbmodem1234561 115200,8,N,1 ---
--- Quit: Ctrl+] | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
>>>
>>> import network
>>> wlan = network.WLAN(network.STA_IF)
>>> wlan.active()
False
>>> wlan.active(True) <--------- this method call hangs forever
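For reference, this is the standard MicroPython bring-up sequence that should follow once active(True) returns (this runs on the board, not the host; the SSID and key below are placeholders):

```python
import network
import time

wlan = network.WLAN(network.STA_IF)
wlan.active(True)  # <- the call that never returns on this firmware build
if not wlan.isconnected():
    wlan.connect('YOUR_SSID', 'YOUR_KEY')  # placeholders
    # Poll with a timeout rather than blocking, so a wedged radio shows up.
    for _ in range(100):
        if wlan.isconnected():
            break
        time.sleep_ms(100)
print(wlan.ifconfig())
```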
Any ideas I could try to get wifi working?
Thanks in advance.

Related

How to run QEMU with Fedora + Custom kernel

I am trying to run Fedora with QEMU, but with a custom kernel that I built by following the first steps from the readme.txt file from here. The kernel targets T-HEAD's C910 processor, and I want it in order to run benchmarks (Fedora was the first OS I found that supports RISC-V).
Following the steps from Fedora's guide, QEMU finally booted successfully, but with the guide's suggested kernel. The command I ran is shown here:
qemu-system-riscv64 \
-bios none \
-nographic \
-machine virt \
-smp 8 \
-m 2G \
-kernel Fedora-Developer-Rawhide-*-fw_payload-uboot-qemu-virt-smode.elf \
-object rng-random,filename=/dev/urandom,id=rng0 \
-device virtio-rng-device,rng=rng0 \
-device virtio-blk-device,drive=hd0 \
-drive file=Fedora-Developer-Rawhide-*.raw,format=raw,id=hd0 \
-device virtio-net-device,netdev=usernet \
-netdev user,id=usernet,hostfwd=tcp::10000-:22
Also, the command from the readme.txt file in the first link works as well, but it boots into the kernel alone, with no OS (hence the need for the OS).
LD_LIBRARY_PATH=./host/lib ./host/csky-qemu/bin/qemu-system-riscv64 -M virt -kernel fw_jump.elf -device loader,file=Image,addr=0x80200000 -append "rootwait root=/dev/vda ro" -drive file=rootfs.ext2,format=raw,id=hd0 -device virtio-blk-device,drive=hd0 -nographic -smp 1
I tried modifying the -kernel argument in the command from Fedora's guide to point at the custom kernel. When I execute the command, it hangs at the start.
It's probably a matter of which arguments are supplied to QEMU, since the command from readme.txt and the one from Fedora's guide differ by a lot.
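As a sketch of that direction (untested, and assuming the custom build produces a raw Image plus OpenSBI's fw_jump.elf as in readme.txt), one way to combine the two commands is to load the custom kernel the way readme.txt does while keeping Fedora's disk and virtio devices:

```shell
qemu-system-riscv64 \
  -M virt -smp 8 -m 2G -nographic \
  -kernel fw_jump.elf \
  -device loader,file=Image,addr=0x80200000 \
  -append "rootwait root=/dev/vda ro" \
  -drive file=Fedora-Developer-Rawhide-*.raw,format=raw,id=hd0 \
  -device virtio-blk-device,drive=hd0 \
  -device virtio-net-device,netdev=usernet \
  -netdev user,id=usernet,hostfwd=tcp::10000-:22
# root=/dev/vda may need adjusting (e.g. a partition such as /dev/vda2),
# depending on how the Fedora raw image is laid out.
```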

Does --virtual-time-budget still work in Chrome 106.0.5249.119?

In Chrome 87, when I add --virtual-time-budget, the process waits until all of the JS has loaded,
but when I use Chrome 106.0.5249.119, this argument no longer seems to work. The Chrome process stops within 1 s, no matter the actual state of my page load.
The question is: why does --virtual-time-budget work in Chrome 87 but not in Chrome 106? What's the difference between them? How can I make the browser wait until my page has finished loading?
My command is below:
/usr/bin/google-chrome \
--headless \
--disable-gpu \
--no-sandbox \
--ignore-certificate-errors \
--log-level=0 \
--disable-dev-shm-usage \
--start-maximized \
--window-size=1920,1080 \
--virtual-time-budget=300000 \
--enable-logging=stderr \
--v=1 \
--enable-features=NetworkService \
--disable-features=IsolateOrigins,site-per-process \
--disable-web-security \
'http://nginx/ngsoc/REPORT/api/v1/set-cookie-and-redirect?cookies=auto_report=1;access_token=RJtApMMmLP5SAqmjJ7lkNkEP6FA&redirectUrl=aHR0cDovL25naW54OjgwLyMvcmVwb3J0L3JlcG9ydF90ZW1wbGF0ZV9lZGl0P3RlbXBsYXRlTmFtZT13eWh0ZXN0MyZpZD02MzU5ZTM1ZDEwNjVlNjJmOTkzZTVjNGMmdGltZVJhbmdlPXsidGltZUZpZWxkIjpudWxsLCJiZWdpbiI6eyJ0eXBlIjoiYWJzb2x1dGUiLCJ1bml0IjpudWxsLCJ2YWx1ZSI6MTU0NjI3MjAwMH0sImVuZCI6eyJ0eXBlIjoiYWJzb2x1dGUiLCJ1bml0IjpudWxsLCJ2YWx1ZSI6MTU1MDU5MjAwMH19JnJlcG9ydE5hbWU9d3lodGVzdDNfJUU2JTlDJTg4JUU2JThBJUE1'
If you have any ideas, please leave a comment; I need your help.
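One thing worth checking (not certain this is the cause here): in recent headless Chrome, --virtual-time-budget is generally only honored together with an output action such as --dump-dom, --print-to-pdf, or --screenshot; with only a URL the process can exit almost immediately. A minimal check might look like:

```shell
# Hypothetical repro: same idea as the command above, but with an explicit
# output action so the virtual-time budget has something to apply to.
/usr/bin/google-chrome \
  --headless \
  --disable-gpu \
  --no-sandbox \
  --virtual-time-budget=300000 \
  --dump-dom \
  'http://example.com/' > page.html
```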

Espressif ESP32-S3-WROOM-1 + Micropython --- invalid header: 0xffffffff

I purchased this from Digikey (docs).
I flashed the latest firmware release with the following commands:
$ esptool.py \
--port /dev/tty.usbmodem14201 \
--baud 460800 \
--before default_reset \
--after hard_reset \
--chip esp32s3 \
erase_flash
$ esptool.py \
--chip esp32s3 \
--port /dev/tty.usbmodem14201 \
--baud 460800 \
write_flash \
-z 0x1000 \
GENERIC_S3-20220117-v1.18.bin
Output:
esptool.py v3.2
Serial port /dev/tty.usbmodem14201
Connecting...
Chip is ESP32-S3
Features: WiFi, BLE
Crystal is 40MHz
MAC: 84:f7:03:c0:33:f8
Uploading stub...
Running stub...
Stub running...
Changing baud rate to 460800
Changed.
Configuring flash size...
Flash will be erased from 0x00001000 to 0x00154fff...
Compressed 1390128 bytes to 917154...
Wrote 1390128 bytes (917154 compressed) at 0x00001000 in 15.2 seconds (effective 731.0 kbit/s)...
Hash of data verified.
Leaving...
Hard resetting via RTS pin...
When I connect to the UART with the following command:
$ python -m serial.tools.miniterm /dev/tty.usbmodem14201 115200
--- Miniterm on /dev/tty.usbmodem14201 115200,8,N,1 ---
--- Quit: Ctrl+] | Menu: Ctrl+T | Help: Ctrl+T followed by Ctrl+H ---
invalid header: 0xffffffff
invalid header: 0xffffffff
invalid header: 0xffffffff
invalid header: 0xffffffff
...
I've also tried building from source. I'm thinking it has something to do with the ESP32-S3 devkit that I purchased. Can someone help me figure out why I can't get micropython installed on this devkit? Thanks!
To flash the firmware, you're running the command:
esptool.py \
--chip esp32s3 \
--port /dev/tty.usbmodem14201 \
--baud 460800 \
write_flash \
-z 0x1000 \
GENERIC_S3-20220117-v1.18.bin
The instructions at the firmware link you shared say to use offset 0, not 0x1000:
esptool.py --chip esp32s3 --port /dev/ttyACM0 write_flash -z 0 board-20210902-v1.17.bin
The 0x1000 isn't an argument to -z (which means compress the image); it's the address at which to start writing the image.
Try using the offset provided in the directions:
esptool.py \
--chip esp32s3 \
--port /dev/tty.usbmodem14201 \
--baud 460800 \
write_flash \
-z \
0 \
GENERIC_S3-20220117-v1.18.bin
Micropython definitely isn't going to load and work properly if it's not flashed to the correct location.
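If it still fails after reflashing, one way to confirm what actually landed at offset 0 is to read the first bytes back; a valid ESP image starts with magic byte 0xE9, while 0xFF means erased flash, which matches the invalid header: 0xffffffff boot loop. An untested sketch with your port:

```shell
# Read the first 16 bytes of flash back to a file, then inspect them.
esptool.py --chip esp32s3 --port /dev/tty.usbmodem14201 \
  read_flash 0 16 header.bin
xxd header.bin   # first byte should be e9 for a valid image, ff if erased
```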

GitLab CE Docker - empty response when using other port than 80

I have installed the latest GitLab Community Edition Docker image. The environment is macOS (High Sierra) with Docker Community Edition installed.
I have followed the instructions here for how to start the GitLab image:
https://docs.gitlab.com/omnibus/docker/
I have not done any modifications within the running container (e.g. not changed the gitlab.rb file), just started the image from the host.
Things seem to work well if I use the default ports, e.g. 80 for HTTP:
sudo docker run --detach \
--hostname gitlab.example.com \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.example.com'; gitlab_rails['gitlab_shell_ssh_port'] = 22;" \
--publish 192.168.0.119:443:443 \
--publish 192.168.0.119:80:80 \
--publish 192.168.0.119:22:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
I want to run GitLab on non-standard ports, e.g. 10080 for HTTP, so I modify the docker command to this:
sudo docker run --detach \
--hostname gitlab.example.com \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.example.com:10080'; gitlab_rails['gitlab_shell_ssh_port'] = 22;" \
--publish 192.168.0.119:443:443 \
--publish 192.168.0.119:10080:80 \
--publish 192.168.0.119:22:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
But that results in "empty reply from server" when trying to access the GitLab dashboard with a web browser or curl; here is the curl run:
$ curl -v http://192.168.0.119:10080
* Rebuilt URL to: http://192.168.0.119:10080/
* Trying 192.168.0.119...
* TCP_NODELAY set
* Connected to 192.168.0.119 (192.168.0.119) port 10080 (#0)
> GET / HTTP/1.1
> Host: 192.168.0.119:10080
> User-Agent: curl/7.58.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 192.168.0.119 left intact
curl: (52) Empty reply from server
I can also run lsof to verify that the GitLab Docker container is indeed listening on the port:
$ lsof -nP -i4TCP:10080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 890 jo 19u IPv4 0x871834297e946edb 0t0 TCP 192.168.0.119:10080 (LISTEN)
To verify that port 10080 is usable, I have run other apps listening on it, and they work as expected.
Has anyone else run into this, or have suggestions for what the reason might be, or options to try out?
Cheers
-jo
Old thread, but I have the correct answer after encountering the same issue :)
When updating external_url, the docker image will set up nginx to bind to the port of this URL.
So you need to update the port redirection like this (changing 10080:80 to 10080:10080):
sudo docker run --detach \
--hostname gitlab.example.com \
--env GITLAB_OMNIBUS_CONFIG="external_url 'http://gitlab.example.com:10080'; gitlab_rails['gitlab_shell_ssh_port'] = 22;" \
--publish 192.168.0.119:443:443 \
--publish 192.168.0.119:10080:10080 \
--publish 192.168.0.119:22:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
Can't believe this has been unanswered for 3 years
Change 'http://gitlab.example.com:10080' to 'http://localhost:80'.
That URL needs to reflect the internal port, not the mapped one, and the host should be one that actually resolves: localhost works, an IP address works, and whatever your hostname is will work.
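An alternative that keeps the external URL on the mapped port: the omnibus package also exposes nginx['listen_port'], which decouples the listening port from external_url, so the original 10080:80 mapping works unchanged. In gitlab.rb (or inline via GITLAB_OMNIBUS_CONFIG):

```ruby
# /etc/gitlab/gitlab.rb: advertise the host-visible port in URLs,
# but keep the bundled nginx listening on 80 inside the container.
external_url 'http://gitlab.example.com:10080'
nginx['listen_port'] = 80
```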

Vagrant keeps losing file doing provision

I'm running into odd behavior with the latest version of Vagrant in a Windows 7/msys/VirtualBox setup: after executing vagrant up, I get an rsync error during the provisioning stage: 'file has vanished: "/c/Users/spencerd/workspace/watcher/.LISTEN'.
Since Google, IRC, and the issue trackers have little to no documentation on this issue, I wonder if anyone else has run into it and what the fix would be.
And for the record, I have successfully built a box using the same Vagrantfile and provisioning script. For those who want to look, the project code is up at https://gist.github.com/denzuko/a6b7cce2eae636b0512d, with the debug log at gist.github.com/
After digging further into the directory structure and running into issues with pushing code up via git, I was able to find a phantom file that needed to be removed after a reboot.
Thus, doing a reboot and a rm -rf -- "./.LISTEN\ \ \ \ \ 0\ \ \ \ \ \ 100\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ " did the trick.
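Typing that escaped name by hand is error-prone; globbing on the stable prefix does the same job. A sketch, using /tmp/demo as a stand-in for the project root:

```shell
# Recreate a file with a similarly awkward name (trailing spaces/digits),
# then delete it by matching the stable '.LISTEN' prefix instead of
# spelling out the whole escaped name.
mkdir -p /tmp/demo
touch '/tmp/demo/.LISTEN     0      100     '
find /tmp/demo -maxdepth 1 -name '.LISTEN*' -delete
ls -A /tmp/demo   # prints nothing: the file is gone
```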