Arduino WebServer.streamFile weirdness - NodeMCU

Got me stumped...
I'm using a NodeMCU with an SD card in a classic WebServer setup, running a simple file server over HTTP. Here is the send-file handler:
void handleFileRequest()
{
  Serial.println("handleFileRequest");
  File32 file = sd.open(fileName, O_READ);
  if (file == 0) // SdFile.open() returning 0 means the open failed
  {
    handleError(404, "File Not Found");
    return;
  }
  int fsizeDisk = file.size();
  if (file.isDirectory() || fsizeDisk <= 0)
  {
    handleError(500, "cannot send a folder, only a file");
    return;
  }
  Serial.print("file size: "); Serial.println(fsizeDisk);
  ledState = LOW;
  digitalWrite(LED_BUILTIN, ledState);
  unsigned long timeBegin = micros();
  server.sendHeader("Content-Length", (String)(fsizeDisk));
  server.sendHeader("Content-disposition", "attachment; filename=\"" + fileName + "\"");
  server.sendHeader("Cache-Control", "max-age=0, no-store"); // do not allow caching
  server.sendHeader("Connection", "close");
  size_t sent = server.streamFile(file, "application/octet-stream");
  unsigned long timeEnd = micros();
  unsigned long duration = timeEnd - timeBegin;
  // micros()/1000 is really milliseconds; only used as a relative
  // number to compare successive calls (see the note below the log)
  double averageDuration = (double)duration / 1000.0;
  Serial.println("Duration: ");
  Serial.print(averageDuration); Serial.println("s");
  server.client().stop();
  ledState = HIGH;
  digitalWrite(LED_BUILTIN, ledState);
  file.close();
  delay(200);
  Serial.print("Data Sent: ");
  Serial.println(sent);
  delay(200);
}
And here is the Serial Terminal log...
Connected to hc406-ng
IP address: 192.168.0.162
MDNS responder started #GrnAcres-Hi
SdFat version: 2.0.6
Test with GrnAcres-Hi.local/download?file=test.jpg
Or with 192.168.0.162/download?file=test.jpg
HTTP server started
handleFileRequest
Request for: Hello.txt
file size: 13
Duration:
1002.78s
Data Sent: 7
handleFileRequest
Request for: Hello.txt
file size: 13
Duration:
1002.78s
Data Sent: 7
handleFileRequest
Request for: Hello.txt
file size: 13
Duration:
1002.76s
Data Sent: 7
handleFileRequest
Request for: Hello.txt
file size: 13
Duration:
1002.73s
Data Sent: 7
handleFileRequest
Request for: Hello.txt
file size: 13
Duration:
1002.78s
Data Sent: 7
handleFileRequest
Request for: Hello.txt
file size: 13
Duration:
1002.83s
Data Sent: 7
No matter which file I use (test.jpg is 15K), I get about 50% of the file transmitted and then the handler is called again... and again... and again?
PLEASE NOTE: the duration value labelled in seconds is off; it is only a debugging value, and each loop actually takes about 1 second. I simply wanted a ratio between successive calls.
Any comments would help greatly... thanking you all in advance.
Cheers
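
As a point of reference for answers: one hedged workaround is to bypass streamFile() entirely and push the body in explicit chunks, so a stalled or half-closed client shows up as a short write. This is only a sketch against the stock ESP8266WebServer and SdFat APIs; sendFileManually is a made-up helper name, and it is untested here.

void sendFileManually(File32 &file, size_t fsizeDisk)
{
  server.setContentLength(fsizeDisk);
  server.send(200, "application/octet-stream", ""); // headers only, body follows
  WiFiClient client = server.client();
  uint8_t buf[512];
  size_t total = 0;
  while (file.available() > 0)
  {
    int n = file.read(buf, sizeof(buf));
    if (n <= 0) break;
    size_t w = client.write(buf, (size_t)n);
    total += w;
    if (w != (size_t)n) break; // client stalled or disconnected mid-transfer
    yield(); // service the WiFi stack and watchdog between chunks
  }
  Serial.print("manually sent: "); Serial.println(total);
}

If the byte count printed here also stops at ~50%, the problem is below the web-server layer (client or TCP); if it completes, the issue is specific to streamFile().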

Related

Not getting an error response for a sent packet from an ESP32C3 LwIP TCP server

I am facing an issue where send() does not return an error when the TCP socket has been disconnected on the client side. I am using the example TCP server (ESP32C3) built on the LwIP stack (version v2.1.3).
To debug this, I added a 50 ms delay after sending data and then sent the same data again on the same socket; the second send returns error -14, which is as expected.
I have modified the default TCP server example shipped with ESP-IDF to send 256 bytes continuously.
Is there a specific time or function I need to wait on before checking for an ACK/NACK for a packet the server sent to the client?
while (to_write > 0)
{
    vTaskDelay(20000 / portTICK_PERIOD_MS); // disconnecting TCP client manually (Hercules.exe)
    int written = send(sock, rx_buffer + (len - to_write), 256, 0);
    vTaskDelay(50 / portTICK_PERIOD_MS);
    written = send(sock, rx_buffer + (len - to_write), 256, 0);
    if (written < 0)
    {
        ESP_LOGE(TAG, "Error occurred during sending: errno %d", errno);
    }
    ESP_LOGE(TAG, "Error occurred during sending: errno %d", errno); // not getting error the 1st time
    errno = 0;
    to_write -= written;
}
I have disabled Nagle's algorithm:
int val = true;
if (!(setsockopt(sock_internal, IPPROTO_TCP, TCP_NODELAY, (char *)&val, sizeof(int)) == ESP_OK))
{
    ESP_LOGE(TAG, "failed to set tcp no delay");
}
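
Worth noting when reading the snippet above: with BSD-style sockets (including lwIP's socket layer), a successful send() only means the data was copied into the stack's send buffer; delivery and ACKs happen asynchronously, so the RST from a vanished client surfaces on a later call rather than on the first send() after the disconnect. There is no per-packet ACK/NACK hook at this API layer (lwIP's raw API has a tcp_sent() callback, but that is not reachable through sockets). A hedged sketch of one way to probe the socket before writing (peer_alive is a made-up helper, not an ESP-IDF API):

// hedged sketch, not from the original post: probe the socket for a
// pending peer close / RST before writing
#include "lwip/sockets.h" // ESP-IDF's BSD-style socket API
#include <errno.h>
#include <stdbool.h>

static bool peer_alive(int sock)
{
    char probe;
    // MSG_PEEK leaves any data in the buffer; MSG_DONTWAIT avoids blocking
    int r = recv(sock, &probe, 1, MSG_PEEK | MSG_DONTWAIT);
    if (r == 0)
        return false; // orderly close (FIN) already received
    if (r < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
        return false; // RST or another socket error is pending
    return true;      // data waiting, or simply no traffic yet
}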

Minimizing ZeroMQ round trip latency

My question is about minimizing the latency between a ZMQ client and server.
I have the following modified ZMQ Hello World (JeroMQ 0.5.1) server:
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class server {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            // Socket to talk to clients
            ZMQ.Socket socket = context.createSocket(SocketType.REP);
            socket.bind("tcp://*:5555");
            while (!Thread.currentThread().isInterrupted()) {
                byte[] reply = socket.recv(0);
                System.out.println("Received " + ": [" + reply.length + "]");
                String response = "world";
                socket.send(response.getBytes(ZMQ.CHARSET), 0);
            }
        }
    }
}
and client:
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class client {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            // Socket to talk to server
            System.out.println("Connecting to hello world server" + args[0] + args[1] + args[2]);
            ZMQ.Socket socket = context.createSocket(SocketType.REQ);
            socket.connect("tcp://" + args[0] + ":" + args[1]);
            for (int requestNbr = 1; requestNbr != 10; requestNbr++) {
                byte[] request = new byte[requestNbr * (Integer.parseInt(args[2]))];
                System.out.println("Sending Hello " + requestNbr);
                long time = System.nanoTime();
                socket.send(request, 0);
                byte[] reply = socket.recv(0);
                double restime = (System.nanoTime() - time) / 1000000.0;
                System.out.println("Received " + new String(reply, ZMQ.CHARSET) + " " +
                                   requestNbr + " " + restime);
            }
        }
    }
}
I'm running the server and the client over a network with latency (160ms round trip). I create the latency using tc on both the client and the server:
tc qdisc del dev eth0 root
tc class add dev eth0 parent 1: classid 1:155 htb rate 1000mbit
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 flowid 1:155 match ip dst 192.168.181.1/24
tc qdisc add dev eth0 parent 1:155 handle 155: netem delay $t1 $dt1 distribution normal
Now when I run java -jar client.jar 192.168.181.3 5555 100000 I get the following output:
Sending Hello 1
Received world 1 1103.392783
Sending Hello 2
Received world 2 322.553512
Sending Hello 3
Received world 3 478.10143
Sending Hello 4
Received world 4 606.396567
Sending Hello 5
Received world 5 641.465041
Sending Hello 6
Received world 6 772.961712
Sending Hello 7
Received world 7 910.848674
Sending Hello 8
Received world 8 966.694224
Sending Hello 9
Received world 9 940.645636
which means that as the message size increases, it takes more round trips to send the message and receive the reply (you can play with the message size to see for yourself). I was wondering what I need to do to prevent that from happening, that is: send everything in one go and bring the latency down to a single round-trip time.
Note: In my original application I'm using a REQ-ROUTER pattern since I have multiple clients, but the latency issue with large messages persists.
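
The growth pattern in the numbers is consistent with TCP window limits: on a 160 ms RTT the bandwidth-delay product is roughly 20 MB (1000 mbit x 160 ms), so until the congestion and receive windows grow past the message size, each larger request needs extra round trips. One avenue, purely as a hedged sketch for Linux endpoints (the values below are illustrative placeholders, not tuned recommendations), is to let the kernel grow the TCP windows toward that product:

sysctl -w net.ipv4.tcp_wmem="4096 1048576 33554432"
sysctl -w net.ipv4.tcp_rmem="4096 1048576 33554432"

Keeping the connection warm also helps: a REQ socket reuses one TCP connection, so the window keeps growing across successive requests, whereas reconnecting restarts slow start from scratch.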

How can I get the start time of an RTSP session via ffmpeg (C++)? start_time_realtime is always -9223372036854775808

I'm trying to grab frames over RTSP and compute each frame's real-world timestamp. I previously used Live555 for this (presentationTime).
As far as I understand, ffmpeg does not provide this directly, but it does expose the relative time of each frame and the start time of the stream. In my case the frame timestamps (pts) work correctly, but the stream start time (start_time_realtime) is always -9223372036854775808.
I'm trying to use the simple example from this answer: https://stackoverflow.com/a/11054652/5355846
The value does not change, regardless of where in the code I read it:
int main(int argc, char** argv) {
    // Open the initial context variables that are needed
    SwsContext *img_convert_ctx;
    AVFormatContext* format_ctx = avformat_alloc_context();
    AVCodecContext* codec_ctx = NULL;
    int video_stream_index;

    // Register everything
    av_register_all();
    avformat_network_init();

    // open RTSP
    if (avformat_open_input(&format_ctx, "path_to_rtsp_stream", NULL, NULL) != 0) {
        return EXIT_FAILURE;
    }
    ...
}

while (av_read_frame(format_ctx, &packet) >= 0 && cnt < 1000) { // read ~1000 frames
    //// here!
    std::cout << " ***** "
              << std::to_string(format_ctx->start_time_realtime)
              << " | " << format_ctx->start_time
              << " | " << packet.pts
              << " | "
              << picture->best_effort_timestamp;
    ...
}
***** -9223372036854775808 | 0 | 4120 | 40801 Frame: 103
What am I doing wrong?
-9223372036854775808 is AV_NOPTS_VALUE (INT64_MIN), meaning start_time_realtime is unknown to ffmpeg.
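
For context on that answer: for RTSP, ffmpeg can only fill start_time_realtime once it has data mapping stream time to wall-clock time (e.g. from RTCP sender reports); until then it stays AV_NOPTS_VALUE. A hedged fallback sketch, reusing the variables from the question's loop (wall_at_open_us is a made-up name), is to capture the local clock at open time and offset it by pts; this is an approximation, not RTCP-accurate:

// hedged fallback: approximate each frame's wall-clock time when
// start_time_realtime is unknown
extern "C" {
#include <libavutil/time.h> // av_gettime()
}

int64_t wall_at_open_us = av_gettime(); // capture right after avformat_open_input()

// ... inside the av_read_frame() loop:
int64_t start_us = format_ctx->start_time_realtime;
if (start_us == AV_NOPTS_VALUE)
    start_us = wall_at_open_us; // local receiver clock, not the camera's clock

AVStream *st = format_ctx->streams[video_stream_index];
double frame_wall_s = start_us / 1e6
                    + packet.pts * av_q2d(st->time_base); // pts may need an offset if it does not start at 0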

Processing: one file will write, other won't

I have a strange error which I really can't seem to figure out. The situation: I have an Arduino board with a temperature sensor and a light sensor; the light sensor is used to see whether a certain room is 'open' (the light goes out if there's no movement in the room for a certain time). I'm using a serial port to push data to a server running a Processing script. The Arduino pushes 'OPEN' if the sensed light is above a certain threshold, 'CLOSED' if it's not. It also pushes the temperature on a new line, repeating every two seconds.
Monitoring the serial port with minicom, all of that seems to work fine. Even in the script, the data that I print out confirms that everything should work. Except that when I try to write data to 'open.txt' nothing happens, while 'temp.txt' works fine (I tried to tail -f both files: temp.txt gets updated while open.txt just stays empty).
import processing.serial.*;

Serial mySerial;
PrintWriter openClosedFile;
String openClosedFileName;
String currentOpenClosed;
PrintWriter temperatureFile;
String temperatureFileName;
int currentTemp;

void setup()
{
  mySerial = new Serial(this, Serial.list()[0], 9600);
  openClosedFileName = "open.txt";
  openClosedFile = createWriter(openClosedFileName);
  currentOpenClosed = "CLOSED";
  temperatureFileName = "temp.txt";
  temperatureFile = createWriter(temperatureFileName);
  currentTemp = 0;
}

void draw()
{
  if (mySerial.available() > 0)
  {
    String value = mySerial.readStringUntil('\n');
    if (value != null)
    {
      String timestamp = nf(day(), 2) + "/" + nf(month(), 2) + "/" + year() + " " + nf(hour(), 2) + ":" + nf(minute(), 2) + ":" + nf(second(), 2);
      println(timestamp);
      value = trim(value);
      if (isNumeral(value))
        writeTemperature(value);
      else
        writeOpenClosed(value);
    }
  }
}

void writeOpenClosed(String val)
{
  print("OpenClosed: ");
  println(val);
  boolean writtenToFile = false;
  openClosedFile = createWriter(openClosedFileName);
  if (val.equals("OPEN") && !currentOpenClosed.equals("OPEN"))
  {
    println("val=OPEN and currentOpenClosed!=OPEN");
    openClosedFile.print("1");
    writtenToFile = true;
  }
  else if (val.equals("CLOSED") && !currentOpenClosed.equals("CLOSED"))
  {
    println("val=CLOSED and currentOpenClosed!=CLOSED");
    openClosedFile.print("0");
    writtenToFile = true;
  }
  if (writtenToFile)
  {
    currentOpenClosed = val;
    openClosedFile.flush();
    openClosedFile.close();
    println("Written OpenClosed To File");
  }
}

void writeTemperature(String val)
{
  print("temperature: ");
  println(val);
  int intTemp = Integer.parseInt(val);
  if (intTemp != currentTemp)
  {
    currentTemp = intTemp;
    temperatureFile = createWriter(temperatureFileName);
    temperatureFile.print(val);
    temperatureFile.flush();
    temperatureFile.close();
    println("Written Temperature To File");
  }
}

boolean isNumeral(String val)
{
  for (int i = 0; i < val.length(); i++)
  {
    if (val.charAt(i) < 48 || val.charAt(i) > 57)
      return false;
  }
  return true;
}
I expect there to be some syntactical error (I haven't used Processing before), but both functions seem to be doing the same thing...
Some example output:
Listening for transport dt_socket at address: 8212
30/10/2014 12:14:57
OpenClosed: CD
30/10/2014 12:14:57
temperature: 24
Written Temperature To File
30/10/2014 12:14:59
OpenClosed: CLOSED
30/10/2014 12:14:59
temperature: 25
Written Temperature To File
30/10/2014 12:15:01
OpenClosed: CLOSED
30/10/2014 12:15:01
temperature: 24
Written Temperature To File
30/10/2014 12:15:03
OpenClosed: CLOSED
30/10/2014 12:15:03
temperature: 25
Written Temperature To File
30/10/2014 12:15:05
OpenClosed: OPEN
val=OPEN and currentOpenClosed!=OPEN
Written OpenClosed To File
30/10/2014 12:15:05
temperature: 20
Written Temperature To File
30/10/2014 12:15:07
OpenClosed: OPEN
30/10/2014 12:15:07
temperature: 20
30/10/2014 12:15:09
OpenClosed: OPEN
30/10/2014 12:15:09
temperature: 20
^C
Am I not seeing something, or what could be going on here?
Okay, I think I figured it out; it was actually me being dumb.
For eventual future readers: calling
openClosedFile = createWriter(openClosedFileName);
unconditionally at the top of writeOpenClosed() opens (and truncates) the file, and if nothing has to be written it never gets closed. Since the file is never closed (a new writer object is created the next time the function is called), nothing is ever flushed and the file is never released to be read.
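A minimal sketch of the corrected function along those lines (untested; same names as the sketch above): create the writer only in the branch that actually writes, so every opened file is flushed and closed.

void writeOpenClosed(String val)
{
  print("OpenClosed: ");
  println(val);
  boolean changed = !val.equals(currentOpenClosed)
                    && (val.equals("OPEN") || val.equals("CLOSED"));
  if (changed)
  {
    openClosedFile = createWriter(openClosedFileName); // open only when writing
    openClosedFile.print(val.equals("OPEN") ? "1" : "0");
    openClosedFile.flush();
    openClosedFile.close();
    currentOpenClosed = val;
    println("Written OpenClosed To File");
  }
}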

How to disable G-WAN servlet internal cache?

gwan version: 3.12.26
servlet type: C and Perl
problem:
G-WAN's internal cache means a request does not re-run the script
test:
create the 'log' dir:
[bash]# mkdir -p /dev/shm/random-c
[bash]# chmod 777 /dev/shm/random-c
create /path/to/gwan/0.0.0.0_8080/#0.0.0.0/csp/random.c
// ============================================================================
// C servlet sample for the G-WAN Web Application Server (http://trustleap.ch/)
// ----------------------------------------------------------------------------
// hello.c: just used with Lighty's Weighttp to benchmark a minimalist servlet
// ============================================================================
// imported functions:
//   get_reply(): get a pointer on the 'reply' dynamic buffer from the server
//   xbuf_cat(): like strcat(), but it works in the specified dynamic buffer
// ----------------------------------------------------------------------------
#include <sys/time.h>
#include <time.h>  // time() - needed in addition to sys/time.h
#include "gwan.h"  // G-WAN exported functions
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

//------------------
void init_random() {
    struct /*sys/time.h->*/timeval res;
    /*sys/time.h->*/gettimeofday(&res, NULL);
    /*stdlib.h->*/srand((unsigned int)/*time.h->*/time(NULL) + res.tv_usec);
}

//------------------
char *get_rnd_char(int num) {
    char *char_list = "1234567890abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
    int char_list_len = 62;
    char *ret = (char *)/*stdlib.h->*/malloc((num * sizeof(char)) + 1);
    int i, r;
    for (i = 0; i < num; i++) {
        r = (int)(/*stdlib.h->*/rand() % char_list_len);
        ret[i] = char_list[r == char_list_len ? r - 1 : r];
    }
    ret[num] = '\0';
    return ret;
}

//------------------
int main(int argc, char *argv[])
{
    char *rnd_out;  //-- random data for browser output and file input
    char *rnd_file; //-- random file name
    char *rnd_path; //-- for speed, kept on the ramdisk /dev/shm/random-c/
    char *t;
    FILE *F;
    int num_char = 10;
    int arg_cnt = 1;
    if (argc > 0) {
        //-- why does nobody love C? one reason is this kind of parsing
        while ((t = /*string.h->*/strtok(argv[0], "=")) != NULL) {
            argv[0] = NULL;
            if (arg_cnt == 2) {
                num_char = /*stdlib.h->*/atoi(t);
            }
            arg_cnt++;
        }
    } else {
        //-- get a random number between 1 and 1000
        num_char = (rand() % 1000) + 1;
    }
    init_random();
    //-- create random data
    rnd_out = get_rnd_char(num_char);
    //-- build the "log" path
    //-- why does nobody love C? another reason
    rnd_file = get_rnd_char(20);
    // "/dev/shm/random-c/xxxxxxxxxxxxxxxxxxxx" -> 38 chars + 1 for \0
    rnd_path = (char *)/*stdlib.h->*/malloc((38 * sizeof(char)) + 1);
    rnd_path[0] = '\0';
    /*string.h->*/strcat(rnd_path, "/dev/shm/random-c/");
    /*string.h->*/strcat(rnd_path, rnd_file);
    //-- save to file
    F = /*stdio.h->*/fopen(rnd_path, "w");
    /*stdio.h->*/fprintf(F, "%s", rnd_out);
    /*stdio.h->*/fclose(F);
    //-- send output to browser
    /*gwan.h->*/xbuf_cat(get_reply(argv), rnd_out);
    //-- clean up memory
    //-- why does nobody love C? MAIN reason: no easy memory management
    /*stdlib.h->*/free(rnd_file);
    /*stdlib.h->*/free(rnd_out);
    /*stdlib.h->*/free(rnd_path);
    return 200; // return an HTTP code (200:'OK')
}
// ============================================================================
// End of Source Code
// ============================================================================
run on browser:
http://localhost:8080/?random.c
then you should have one 20char random file at /dev/shm/random-c/
here the 'problem', run:
ab -n 1000 'http://localhost:8080/?random.c'
my ubuntu have output:
Finished 1000 requests
Server Software: G-WAN
Server Hostname: localhost
Server Port: 8080
Document Path: /?random.c
Document Length: 440 bytes
Concurrency Level: 1
Time taken for tests: 0.368 seconds
Complete requests: 1000
Failed requests: 361
(Connect: 0, Receive: 0, Length: 361, Exceptions: 0)
Write errors: 0
Total transferred: 556492 bytes
HTML transferred: 286575 bytes
Requests per second: 2718.73 [#/sec] (mean)
Time per request: 0.368 [ms] (mean)
Time per request: 0.368 [ms] (mean, across all concurrent requests)
Transfer rate: 1477.49 [Kbytes/sec] received
try:
[bash]# ls /dev/shm/random-c/
the directory lists only 4 or 5 random files, where 1000 files were expected
tested with random.c and the Perl version, random.pl
so, back to the original question: how do I disable G-WAN's internal cache? I tried reading the G-WAN user guide for something to set in a handler, but found nothing (or I missed something in that guide).
thanks to the G-WAN team for this great product.
any answer welcome... thanks
I think the feature you are describing is micro-caching. To defeat it, the URI needs to be unique for each request issued within the same 200 ms window (e.g. by adding a random number to the URI).
The G-WAN FAQ states:
"To spare the need for a frontend cache server (and to let G-WAN be used as a caching reverse-proxy) G-WAN supports micro-caching, a RESTful feature. When a given URI is invoked at high concurrencies and when generating the payload take a lot of time, then G-WAN will automatically cache a page for 200 milliseconds (the average latency on the Internet) to make sure that the cache is up-to-date: within 200 ms, consecutive requests provide the expected result. To prevent micro-caching from being triggered, use a changing query parameter (per user session id, random, counter, etc.) for concurrent requests."
Note that for v4.10+ caching is disabled by default; look at the gwan/init.c file.
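
Given that FAQ advice, a quick way to confirm micro-caching is the culprit (ab requests the identical URI every time, which is exactly what triggers it) is to vary a dummy query parameter per request, e.g.:

[bash]# for i in $(seq 1 1000); do curl -s "http://localhost:8080/?random.c&n=$i" > /dev/null; done

(With the servlet above, n=$i will also be picked up by the argument parsing and used as the character count, so expect files of varying length.)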
