ESP32 fails to set WiFi hostname

I am using the AT command binary (provided by Espressif) to interface with my Wi-Fi application. In order to identify the device over the network, I've changed the hostname to a known name, but when I scan the network, the hostname is still "Espressif" instead of my own hostname.
Does anyone know how to fix this? I actually think it's a bug in the AT command binary.
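(For reference: recent ESP-AT firmwares also expose a hostname command. Whether your particular binary supports it is an assumption, but something like the following should set the station hostname before connecting:)
AT+CWMODE=1
AT+CWHOSTNAME="MyOwnHostname"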

I've got the same issue.
My code looks like this:
#include <Arduino.h>
#include "WiFi.h"

const char* ssid = "your-ssid";         // placeholder
const char* password = "your-password"; // placeholder

void setup() {
  Serial.begin(115200);
  // Start the WiFi connection ...
  WiFi.enableSTA(true);
  WiFi.begin(ssid, password);
  // TODO Hostname setting does not work. Always shows up as "espressif"
  if (WiFi.setHostname("myHostname")) {
    Serial.printf("\nHostname set!\n");
  } else {
    Serial.printf("\nHostname NOT set!\n");
  }
}

void loop() {}

There is a bug in the Espressif code. The workaround is to reset the WiFi before setting the hostname and starting the WiFi:
WiFi.disconnect(true);
WiFi.config(INADDR_NONE, INADDR_NONE, INADDR_NONE);
WiFi.setHostname(hostname);
Also note that OTA may make the issue even harder to fix. If you are using mDNS and OTA, please add the following code (after the WiFi setup) to ensure that the hostname gets correctly set:
MDNS.begin(hostname);
ArduinoOTA.setHostname(hostname);
ArduinoOTA.begin();
For details, please read issue 2537 in the Espressif GitHub repo.
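Putting the pieces together, a minimal sketch of this workaround might look like the following (ssid, password, and hostname are placeholders for your own values):
#include <Arduino.h>
#include "WiFi.h"

const char* ssid = "your-ssid";         // placeholder
const char* password = "your-password"; // placeholder
const char* hostname = "myHostname";    // placeholder

void setup() {
  Serial.begin(115200);
  WiFi.disconnect(true);                              // reset the WiFi first
  WiFi.config(INADDR_NONE, INADDR_NONE, INADDR_NONE); // clear any stored IP config
  WiFi.setHostname(hostname);                         // set the hostname while disconnected
  WiFi.begin(ssid, password);                         // only now start the connection
}

void loop() {}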

Try waiting for the WiFi to finish setting up. The easiest (though never the best) way is with delay(150).
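A more robust variant is to poll the connection status instead of guessing a delay (this uses the standard Arduino WiFi API):
// Wait until the connection is actually established.
while (WiFi.status() != WL_CONNECTED) {
  delay(100);
}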

After literally more than 10 hours of research, trying to pinpoint, analyze and/or fix the issue one way or another, I gave up and accepted the workaround. You can find it here.
In short, what I do and what works for me is:
WiFi.disconnect();
WiFi.config(INADDR_NONE, INADDR_NONE, INADDR_NONE); // This is a MUST!
if (!WiFi.setHostname("myFancyESP32")) {
  Serial.println("Hostname failed to configure");
}
WiFi.begin(ssid, password);
This is really frustrating, but for the time being it seems that the issue comes from the ESP-IDF, and unless it is fixed there, it will not work.

Related

Elrond Dapp Tutorial not up to date?

Hello, I was following this tutorial: https://docs.elrond.com/developers/tutorials/your-first-dapp/
With the help of: https://www.youtube.com/watch?v=IdkgvlK3rb8
But I think there are some differences between the dApp repository and the tutorial: first, src/config.devnet.tsx has disappeared; we now have a src/config.tsx already present (not a big deal).
I'm blocked when I try to do the ping; in the console I get the error Sender not allowed with value erd1qqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqq6gq4hu.
So my guess is that I've done something wrong deploying the contract, but I tried to redeploy other contracts and always ended up with the same error.
I tried natively on my Ubuntu 20.04, and then in a devcontainer using an Ubuntu 22.04 image.
I'm pretty new to Elrond, crypto (and also Node), so I might be missing something.
Thanks for your help!
I've just completed the tutorial, and I think you put the wrong SC address in src/config.tsx. The address you provided is the SC that handles other SC deployments. You have to replace it with your own SC's address, generated after deployment.
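For example, after deploying you would paste your contract's address into src/config.tsx (the exact variable name is an assumption here and may differ between template versions):
// src/config.tsx
export const contractAddress = 'erd1...your-deployed-contract-address...';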

java7 SecureRandom performance on MacOSX

I'm running Tomcat 7 with JDK 7 on MacOSX Mavericks DP3.
Everything goes well, and it only takes 500 ms to start up.
But suddenly, it slows down to 35 seconds.
The log message shows SecureRandom is the root cause.
Thanks to Google, I found it's a JRE bug, and wrote the following code to verify:
import java.security.SecureRandom;

class JRand {
    public static void main(String args[]) throws Exception {
        System.out.println("Ok: " +
            SecureRandom.getInstance("SHA1PRNG").nextLong());
    }
}
Yes, even this simplest code takes 35 seconds.
But it seems that none of the related solutions work for me.
Neither /dev/random nor /dev/urandom blocks on the Mac;
cat /dev/random | hexdump -C
outputs very quickly!
When I switch back to JRE 6, generating random numbers is very fast.
I downloaded the latest JDK 8 EA, and the problem still exists.
In fact, not only does Tomcat slow down significantly; NetBeans and GlassFish are all affected.
After struggling with it for hours, I gave up at last.
This morning, when I came to the office and plugged in Ethernet, guess what?
It recovered!
So my question is: what is happening behind the scenes? It's really weird.
Thanks.
Haha, resolved.
Get the InetAddress.java source code (you can copy it from your IDE);
modify the method getLocalHost from
String local = impl.getLocalHostName();
to:
String local = "localhost"; // impl.getLocalHostName();
recompile it, and add java.net.InetAddress.class back to JDK/jre/lib/rt.jar.
RESOLVED.
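For reference, the recompile-and-patch step might look like this on the command line (the paths are assumptions, and compiling the class may also emit inner-class files that need to be added the same way):
# compile with the package structure intact
javac -d . java/net/InetAddress.java
# update rt.jar in place (back it up first!)
jar uf $JAVA_HOME/jre/lib/rt.jar java/net/InetAddress.class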
Don't change InetAddress; other code may rely on it. Instead, change sun.security.provider.SeedGenerator::getSystemEntropy() so that it does not use the local IP address. (How secure is that anyway?) As an added bonus, you now become slightly more secure via obscurity :)

Error while registering public key in OpenShift

I'm getting an error while adding my public key; it shows this error:
"Error updating settings on gear: 01779aa6c3e04c71be82fbaa10662fcf with status: -1 and output: "
Any idea why this shows up every time I register my public key?
We believe this is a problem that arose in our most recent update on Tuesday, and we are now investigating. When you add an SSH key, we copy it to each of your applications (so git will work); it looks like the copy process started failing.
EDIT: We fixed an issue in production that was affecting a small number of users that resulted in this symptom. Please let us know if the issue is not fixed, and we'll investigate further.
I am getting the same error. It looks like a problem on their internal servers.
EDIT: It seems you can't put security on the applications that are available for test in OpenShift. That makes sense. Remove the test applications that you got from OpenShift.
I got it solved. That number {01779aa6c3e04c71be82fbaa10662fcf} is an application you currently have in your domain. I removed all the applications there. Have backups first, then clean your domain. Update your public key again and I am 100% sure it will work.
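To re-add the key afterwards, the rhc client has an sshkey command; a sketch (the key name and path are placeholders):
rhc sshkey add default ~/.ssh/id_rsa.pub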
Please do this with care. Back up your application first. To remove:
rhc app destroy -a {appName} -d
It was my silly mistake; I just had to add my private key at the time of authentication in PuTTY.

quickfix session config issues

I've compiled and trawled around the QuickFIX (http://www.quickfixengine.org) source and the examples. I figured a good starting point would be to compile (C++) and run the 'executor' example, then use the 'tradeclient' example to connect to 'executor' and send it order requests.
I created two separate session files, one for the 'executor' as an acceptor, and one for the 'tradeclient' as the initiator. They're both running on the same Win7 PC.
'executor' runs, but tradeclient can't connect to it, and I can't figure out why. I downloaded Mini-fix and was able to send messages to executor, so I know that executor is working. I figure that the problem is with the tradeclient session settings. I've included both of them below; I was hoping someone could point out what's causing them not to communicate. They're both running on the same computer using port 56156.
---- acceptor session.txt ----
[DEFAULT]
ConnectionType=acceptor
ReconnectInterval=5
SenderCompID=EXEC
DefaultApplVerID=FIX.5.0
[SESSION]
BeginString=FIXT.1.1
TargetCompID=SENDER
HeartBtInt=5
#SocketConnectPort=
SocketAcceptPort=56156
SocketConnectHost=127.0.0.1
TransportDataDictionary=pathToXml/spec/FIX50.xml
StartTime=07:00:00
EndTime=23:00:00
FileStorePath=store
---- initiator session.txt ----
[DEFAULT]
ConnectionType=initiator
ReconnectInterval=5
SenderCompID=SENDER
DefaultApplVerID=FIX.5.0
[SESSION]
BeginString=FIXT.1.1
TargetCompID=EXEC
HeartBtInt=5
SocketConnectPort=56156
#SocketAcceptPort=56156
SocketConnectHost=127.0.0.1
TransportDataDictionary=pathToXml/spec/FIX50.xml
StartTime=07:00:00
EndTime=23:00:00
FileLogPath=log
FileStorePath=store
--------end------
Update: Thanks for the responses... Turns out that my log file directories didn't exist. Once I created them, they both started communicating. It must have been some logging error that didn't throw an exception but disabled proper behavior.
Is there an error condition that I should be checking? I was relying on exceptions, but that's obviously not enough.
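For reference, the QuickFIX C++ engine reports configuration and startup problems as exceptions, so a minimal initiator startup sketch might look like this (MyApplication stands in for your FIX::Application subclass, and the settings filename is a placeholder):
#include <iostream>
#include "quickfix/FileStore.h"
#include "quickfix/FileLog.h"
#include "quickfix/SocketInitiator.h"
#include "quickfix/SessionSettings.h"

int main() {
  try {
    FIX::SessionSettings settings("initiator_session.txt"); // placeholder path
    MyApplication application;                               // your FIX::Application subclass
    FIX::FileStoreFactory storeFactory(settings);
    FIX::FileLogFactory logFactory(settings);
    FIX::SocketInitiator initiator(application, storeFactory, settings, logFactory);
    initiator.start();  // config problems surface as exceptions here
    // ... run until done ...
    initiator.stop();
  } catch (FIX::ConfigError& e) {
    std::cerr << "Config error: " << e.what() << std::endl;
    return 1;
  } catch (FIX::RuntimeError& e) {
    std::cerr << "Runtime error: " << e.what() << std::endl;
    return 1;
  }
  return 0;
}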
It doesn't seem to be config; check that your message sequence numbers are in sync, especially since you've been connecting to a different server using the same settings.
Try setting the TargetCompID and SenderCompID on the acceptor to *

Debugging Node.js processes with cluster.fork()

I've got some code that looks very much like the sample in the Cluster documentation at http://nodejs.org/docs/v0.6.0/api/cluster.html, to wit:
var cluster = require('cluster');
var server = require('./mycustomserver');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  var i;
  // Master process
  for (i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('death', function (worker) {
    console.log('Worker ' + worker.pid + ' died');
  });
} else {
  // Worker process
  server.createServer({port: 80}, function (err, result) {
    if (err) {
      throw err;
    } else {
      console.log('Thread listening on port ' + result.port);
    }
  });
}
I've installed node-inspector and tried using both it and the Eclipse V8 plugin detailed at https://github.com/joyent/node/wiki/Using-Eclipse-as-Node-Applications-Debugger to debug my application, but it looks like I can't hook a debugger up to the forked cluster instances to put breakpoints in the interesting server logic; I can only debug the part of the application that spawns the cluster processes. Does anybody know if I can in fact do such a thing, or am I going to have to refactor my application to use only a single process when in debugging mode?
I'm a Node.js newbie, so I'm hoping there's something obvious I'm missing here.
var fixedExecArgv = [];
fixedExecArgv.push('--debug-brk=5859'); // give forked workers their own debug port
cluster.setupMaster({
  execArgv: fixedExecArgv
});
Credit goes to Sergey's post.
I changed my server.js to fork only one worker, mainly for testing this, then added the code above the forking. This fixed the debugging issue for me. Thank you Sergey for explaining and providing the solution!
For those who wish to debug child processes in VS Code, just add this to launch.json configuration:
"autoAttachChildProcesses": true
https://code.visualstudio.com/docs/nodejs/nodejs-debugging#_remote-debugging
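A minimal launch configuration with that flag might look like this (the program path is a placeholder):
{
  "type": "node",
  "request": "launch",
  "name": "Debug clustered app",
  "program": "${workspaceFolder}/server.js",
  "autoAttachChildProcesses": true
}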
I already opened a ticket about this here: https://github.com/dannycoates/node-inspector/issues/130
Although it's not fixed yet, there's a workaround:
FWIW: The reason, I suspect, is that the node debugger needs to bind to the debug port (default: 5858). If you are using Cluster, I am guessing the master/controller binds first and succeeds, causing the bind in the children/workers to fail. While a port can be supplied to node --debug=N, there seems to be no easy way to do this when node is invoked within Cluster for the worker (it might be possible to programmatically set process.debug_port and then enable debugging, but I haven't got that working yet). Which leaves a bunch of options:
1) Start node without the --debug option and, once it is running, find the pid of the worker process you want to debug/profile and send it a USR1 signal to enable debugging (see the snippet below).
2) Write a wrapper for node that calls the real node binary with --debug set to a unique port each time.
3) There are possibly options in Cluster that let you pass such an arg as well.
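For option 1, a sketch of the signal approach (assuming <pid> is the worker's process id, found via ps or similar):
# enable the debugger in an already-running worker
kill -USR1 <pid>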
If you use VS Code to debug, you need to specify the port and set "autoAttachChildProcesses": true in the launch.json file.
If you debug directly in Chrome DevTools, you need to add a connection for the corresponding port in chrome://inspect.
For anyone looking at this in 2018+, no startup arguments are necessary.
From this GitHub issue:
Just a time saver for anyone who might have been in the same boat as me: the Node.js V8 --inspector Manager (NiM) seems to introduce this issue when it otherwise wouldn't be present. I spent about an hour banging my head before disabling the Chrome plugin and discovering that everything worked fine when opening from chrome://inspect.
I also spent hours reading GitHub posts, tweaking the settings of gulp-typescript and gulp-sourcemaps, etc., only to have that plugin be the issue. Also worth noting is that I had to add port N+1 to the remote targets of chrome://inspect, so localhost:9230, to debug my worker process.
Use the --inspect flag on Node versions 7.7.0 and above to debug the Node.js process.
If someone wants more information on how to debug cluster processes and set up the Chrome debugger tools for Node.js, please follow my post here.
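For example, assuming server.js is your entry point:
node --inspect server.js
# The master binds the default inspector port (9229); each forked worker is
# assigned the next port automatically (9230, 9231, ...), matching the N+1
# behavior mentioned above.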
