Rest server ESP32 IDF - esp32

Fellow programmers, I'd like some guidance. I set up a REST application on the ESP32 following Espressif's example. Everything works, but there is one detail about a variable that I couldn't solve.
This is the relevant part of the Espressif example:
/* Simple handler for getting system handler */
static esp_err_t system_info_get_handler(httpd_req_t *req)
{
    httpd_resp_set_type(req, "application/json");
    cJSON *root = cJSON_CreateObject();
    esp_chip_info_t chip_info;
    esp_chip_info(&chip_info);
    cJSON_AddStringToObject(root, "version", IDF_VER);
    cJSON_AddNumberToObject(root, "cores", chip_info.cores);
    const char *sys_info = cJSON_Print(root);
    httpd_resp_sendstr(req, sys_info);
    free((void *)sys_info);
    cJSON_Delete(root);
    return ESP_OK;
}
What I did was replace
const char *sys_info
with
std::string sys_info
and remove
free((void *)sys_info);
static esp_err_t system_info_get_handler(httpd_req_t *req)
{
    httpd_resp_set_type(req, "application/json");
    cJSON *root = cJSON_CreateObject();
    esp_chip_info_t chip_info;
    esp_chip_info(&chip_info);
    cJSON_AddStringToObject(root, "version", IDF_VER);
    cJSON_AddNumberToObject(root, "cores", chip_info.cores);
    //const char *sys_info = cJSON_Print(root); (replaced)
    std::string sys_info = cJSON_Print(root);
    httpd_resp_sendstr(req, sys_info.c_str());
    //free((void *)sys_info); (removed)
    cJSON_Delete(root);
    return ESP_OK;
}
This works, but the one problem is that whenever I make a request (for example from Postman) and monitor the memory, the available heap drops with every request, as if a new std::string sys_info were being created each time. With the first version, which calls free((void *)sys_info), the available memory does not change. Can someone give me some guidance?

cJSON_Print still allocates a buffer, so you still need to free the original pointer.
When you construct a std::string from a const char*, it copies the characters into new memory of its own; the buffer returned by cJSON_Print is never released, and that is the leak you are seeing.
If you really want a std::string you should have
const char * const tmp = cJSON_Print(root);
std::string sys_info = tmp;
free((void *)tmp); // keep the pointer so the cJSON buffer can be freed after copying it into the string
There is no benefit to this, however, so you should stick with the original version.
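Putting it together, here is a minimal sketch of the whole handler with that change applied (it only reuses the ESP-IDF and cJSON calls already shown in the question; note that httpd_resp_sendstr takes a const char*, so the string has to be passed via .c_str()):

static esp_err_t system_info_get_handler(httpd_req_t *req)
{
    httpd_resp_set_type(req, "application/json");
    cJSON *root = cJSON_CreateObject();
    esp_chip_info_t chip_info;
    esp_chip_info(&chip_info);
    cJSON_AddStringToObject(root, "version", IDF_VER);
    cJSON_AddNumberToObject(root, "cores", chip_info.cores);

    const char *const tmp = cJSON_Print(root);   // heap buffer that we own
    std::string sys_info = tmp;                  // copies into the string's own storage
    free((void *)tmp);                           // release the cJSON buffer right away

    httpd_resp_sendstr(req, sys_info.c_str());
    cJSON_Delete(root);
    return ESP_OK;
}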

Related

memory corruption after structure copy

Note: the following code is causing memory corruption in the process function when the input req is assigned to rsp. I don't understand what is happening here; after removing "rsp = req" it works. Does this assignment cause a shallow copy of the structures in req?
Do the req and rsp structures point to the same memory here?
struct info
{
    uint8_t id;
    uint64_t post_id;
    uint64_t time_id;
};

struct updates
{
    uint32_t id;
    uint32_t fcn;
    uint16_t icp;
    uint64_t num_oh;
    uint64_t num_rna;
    bool is_rbn;
};

struct rbn_rel_info
{
    uint16_t icp;
    uint32_t fcn;
    info relation;
    uint32_t id_length;
    uint32_t id;
};

struct rbn_info
{
    uint16_t icp;
    uint32_t fcn;
};

struct ind_info
{
    info _info;
    uint16_t num_rbn;
    rbn_info _rbn_info[32];
    uint16_t num_rel;
    rbn_rel_info rel[32];
    uint8_t nums;
    updates _updates[32];
};

void process(struct ind_info req)
{
    struct ind_info rsp = req;
    //process req and send rsp
    send_rsp(rsp);
}

int main()
{
    struct ind_info req = {};
    process(req);
    return 0;
}
From this code I cannot see how a memory corruption could occur. The argument is passed by value and cannot corrupt any memory, and there are no pointers in the structures either. The line struct ind_info rsp = req; is equally harmless: it just invokes the compiler-generated trivial copy constructor, which does a plain memberwise copy, so req and rsp are two independent objects that do not share any memory.
The only reason I can think of is that something funny happens in send_rsp, but I can't tell without seeing that code.
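As a quick sanity check, here is a standalone sketch of my own (assuming C++11 or later for <type_traits>, and using the smaller info struct from the question for brevity; the larger ind_info behaves the same way, since it only aggregates trivially copyable members and fixed-size arrays). It confirms that the copy is a plain memberwise copy into separate storage:

#include <cassert>
#include <cstdint>
#include <type_traits>

// Same "info" struct as in the question.
struct info
{
    uint8_t  id;
    uint64_t post_id;
    uint64_t time_id;
};

static_assert(std::is_trivially_copyable<info>::value,
              "copying info is a plain memberwise copy");

int main()
{
    info req = {};
    info rsp = req;        // trivial copy: rsp gets its own storage
    rsp.id = 42;           // modifying the copy ...
    assert(req.id == 0);   // ... leaves the original untouched
    return 0;
}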

Basic Kernel HWMon Driver Module for LPC Chip?

I'm working on writing a kernel hwmon driver module for a chip that communicates over LPC (an ISA-style bus). I have the following code so far:
umode_t qnap_ec_is_visible(const void* data, enum hwmon_sensor_types type, u32 attr, int channel)
{
    /* TODO: return the appropriate mode bits; 0 (hidden) just keeps the stub compiling */
    return 0;
}

int qnap_ec_read(struct device* dev, enum hwmon_sensor_types type, u32 attr, int channel, long* val)
{
    /* TODO: read the sensor; 0 just keeps the stub compiling */
    return 0;
}

int qnap_ec_write(struct device* dev, enum hwmon_sensor_types type, u32 attr, int channel, long val)
{
    /* TODO: write the sensor; 0 just keeps the stub compiling */
    return 0;
}

static const struct hwmon_ops qnap_ec_ops = {
    .is_visible = qnap_ec_is_visible,
    .read = qnap_ec_read,
    .write = qnap_ec_write
};

static const struct hwmon_channel_info *qnap_ec_channel_info[] = {
    HWMON_CHANNEL_INFO(pwm, HWMON_PWM_INPUT),
    HWMON_CHANNEL_INFO(fan, HWMON_F_INPUT),
    HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT),
    NULL
};

static const struct hwmon_chip_info qnap_ec_chip_info = {
    .ops = &qnap_ec_ops,
    .info = qnap_ec_channel_info
};

static int qnap_ec_probe(struct platform_device* platform_dev)
{
    struct device* dev;

    dev = devm_hwmon_device_register_with_info(&platform_dev->dev, "qnap_ec_hwmon", NULL,
                                               &qnap_ec_chip_info, NULL);

    return PTR_ERR_OR_ZERO(dev);
}

static const struct of_device_id qnap_ec_of_match[] = {
    { .compatible = "???" },
    {}
};
MODULE_DEVICE_TABLE(of, qnap_ec_of_match);

static struct platform_driver qnap_ec_driver = {
    .driver = {
        .name = "qnap_ec_hwmon",
        .of_match_table = qnap_ec_of_match
    },
    .probe = qnap_ec_probe
};
module_platform_driver(qnap_ec_driver);
However, I'm pretty sure this approach (using a device ID and having the kernel call the probe function when it finds that device ID on the system) won't work for something on the LPC bus. The IT87 driver, which also communicates over the LPC bus, uses __init/__exit functions to enter the driver; however, that driver is very large and probably not an ideal example of a simple driver module. Are there any examples of how to write a basic (i.e. no real functionality, just the skeleton) kernel hwmon driver for an LPC device? One of the things I can't find an answer to is whether, if I use the __init/__exit functions, I can still register the device using devm_hwmon_device_register_with_info, or whether I need another approach (the it87 driver, for example, uses platform_driver_register, but I'm not sure why, since there doesn't seem to be any documentation on the correct approach for LPC devices).

NKE. Can't handle file uploads and other high load connections

I ran into a problem with a kernel extension that filters network traffic.
My code was written following Apple's tcplognke example.
Everything works fine, but when I attempt to upload a file bigger than 500 KB, the connection drops.
Here is the simplified kext code:
errno_t tl_data_fn(void *cookie, socket_t so, const struct sockaddr *addr, mbuf_t *data, mbuf_t *control, sflt_data_flag_t flags, FilterSocketDataDirection direction) {
    errno_t result = 0;

    if (check_tag(data, gidtag, FILTER_TAG_TYPE, direction == FilterSocketDataDirectionIn ? IN_DONE : OUT_DONE)) {
        return result;
    }
    if (!cookie) return result;

    filter_cookie *f_cookie = get_filter_cookie(cookie);

    uint32_t data_size = (uint32_t)mbuf_pkthdr_len(*data);
    uint32_t offset = 0;

    printf("tl_data_ft: %d", data_size);

    while (offset < data_size) {
        FilterNotification notification;

        if (direction == FilterSocketDataDirectionIn) {
            notification.event = FilterEventDataIn;
        } else {
            notification.event = FilterEventDataOut;
        }
        notification.socketId = (uint64_t)so;
        notification.inputoutput.dataSize = min(data_size - offset, sizeof(notification.inputoutput.data));

        mbuf_copydata(*data, offset, notification.inputoutput.dataSize, notification.inputoutput.data);
        offset += notification.inputoutput.dataSize;

        send_notification(f_cookie, &notification);
    }

    result = EJUSTRETURN;
    if (result == EJUSTRETURN) {
        mbuf_freem(*data);
        if (control != NULL && *control != NULL)
            mbuf_freem(*control);
    }
    return result;
}

errno_t tl_data_in_fn(void *cookie, socket_t so, const struct sockaddr *from, mbuf_t *data, mbuf_t *control, sflt_data_flag_t flags) {
    return tl_data_fn(cookie, so, from, data, control, flags, FilterSocketDataDirectionIn);
}

errno_t tl_data_out_fn(void *cookie, socket_t so, const struct sockaddr *to, mbuf_t *data, mbuf_t *control, sflt_data_flag_t flags) {
    return tl_data_fn(cookie, so, to, data, control, flags, FilterSocketDataDirectionOut);
}
And the user space code:
int s = socket(PF_SYSTEM, SOCK_DGRAM, SYSPROTO_CONTROL);
//connect to driver
FilterNotification notification;
while (recv(s, &notification, sizeof(FilterNotification), 0) == sizeof(FilterNotification)) {
    FilterClientResponse response;
    response.socketId = notification.socketId;
    response.direction = (notification.event == FilterEventDataIn) ? FilterSocketDataDirectionIn : FilterSocketDataDirectionOut;
    response.dataSize = notification.inputoutput.dataSize;
    memcpy(response.data, notification.inputoutput.data, notification.inputoutput.dataSize);
    send(s, &response, sizeof(response), 0);
}
When I asked on the Apple developer forum, the developer said "I don’t see any attempt to handle send-side flow control here. Without that a file upload can easily eat up all of the available mbufs, and things will go badly from there", but there are no examples at all. Can someone help me? Thanks.
The problem was in the socket buffer. When I inject a lot of data very quickly, the buffer becomes full and the inject_data_in/inject_data_out functions return an error.
The workaround is to store the pending packets in kernel space (you can use a TAILQ, for example) and then, when the socket becomes available for writing again (to get this event you can use kqueue on OS X), continue the injection.
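To illustrate just the queue bookkeeping, here is a sketch of my own (not code from the kext, and written so it compiles in user space; the names pending_packet, try_inject, inject_or_queue, and drain_pending are hypothetical, and a real kext would allocate with the kernel allocator and call the actual injection routine where try_inject appears):

#include <sys/queue.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

// One queued chunk of data waiting for the socket buffer to have room again.
struct pending_packet {
    TAILQ_ENTRY(pending_packet) entries;  // queue linkage
    size_t size;
    unsigned char data[2048];             // copied payload (size chosen arbitrarily)
};

TAILQ_HEAD(pending_queue, pending_packet);
static struct pending_queue g_pending = TAILQ_HEAD_INITIALIZER(g_pending);

// Stand-in for the real injection call; it should return false when the
// socket buffer is full, mirroring the error case described above.
static bool try_inject(const unsigned char *buf, size_t len)
{
    (void)buf; (void)len;
    return true;  // replace with the real injection result in the kext
}

// On new data: inject immediately if possible, otherwise queue a copy.
static void inject_or_queue(const unsigned char *buf, size_t len)
{
    if (try_inject(buf, len))
        return;
    struct pending_packet *p = (struct pending_packet *)malloc(sizeof(*p));
    p->size = len < sizeof(p->data) ? len : sizeof(p->data);
    memcpy(p->data, buf, p->size);
    TAILQ_INSERT_TAIL(&g_pending, p, entries);
}

// When the socket becomes writable again: drain the queue in order.
static void drain_pending(void)
{
    struct pending_packet *p;
    while ((p = TAILQ_FIRST(&g_pending)) != NULL) {
        if (!try_inject(p->data, p->size))
            break;  // buffer is full again; resume on the next writable event
        TAILQ_REMOVE(&g_pending, p, entries);
        free(p);
    }
}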

libwebsockets write to all active connections after receive

I am toying around with a libwebsockets tutorial, trying to make it so that, after the server receives a message from a connection over a given protocol, it sends a response to all active connections implementing that protocol. I have used the function libwebsocket_callback_all_protocol, but it is not doing what I think it should do based on its name (and I'm not quite sure what it does from the documentation).
The goal is to have two webpages open and, when info is sent from one, have the result relayed to both. Below is my code - you'll see that libwebsocket_callback_all_protocol is called in main (which currently does nothing, I think...):
#include <stdio.h>
#include <stdlib.h>
#include <libwebsockets.h>
#include <string.h>

static int callback_http(struct libwebsocket_context * this,
                         struct libwebsocket *wsi,
                         enum libwebsocket_callback_reasons reason, void *user,
                         void *in, size_t len)
{
    return 0;
}

static int callback_dumb_increment(struct libwebsocket_context * this,
                                   struct libwebsocket *wsi,
                                   enum libwebsocket_callback_reasons reason,
                                   void *user, void *in, size_t len)
{
    switch (reason) {
    case LWS_CALLBACK_ESTABLISHED: // just log message that someone is connecting
        printf("connection established\n");
        break;
    case LWS_CALLBACK_RECEIVE: { // the funny part
        // create a buffer to hold our response
        // it has to have some pre and post padding. You don't need to care
        // what comes there, libwebsockets will do everything for you. For more info see
        // http://git.warmcat.com/cgi-bin/cgit/libwebsockets/tree/lib/libwebsockets.h#n597
        unsigned char *buf = (unsigned char*) malloc(LWS_SEND_BUFFER_PRE_PADDING + len +
                                                     LWS_SEND_BUFFER_POST_PADDING);
        int i;

        // pointer to `void *in` holds the incoming request
        // we're just going to put it in reverse order and put it in `buf` with
        // correct offset. `len` holds length of the request.
        for (i=0; i < len; i++) {
            buf[LWS_SEND_BUFFER_PRE_PADDING + (len - 1) - i ] = ((char *) in)[i];
        }

        // log what we received and what we're going to send as a response.
        // that disco syntax `%.*s` is used to print just a part of our buffer
        // http://stackoverflow.com/questions/5189071/print-part-of-char-array
        printf("received data: %s, replying: %.*s\n", (char *) in, (int) len,
               buf + LWS_SEND_BUFFER_PRE_PADDING);

        // send response
        // just notice that we have to tell where exactly our response starts. That's
        // why there's `buf[LWS_SEND_BUFFER_PRE_PADDING]` and how long it is.
        // we know that our response has the same length as request because
        // it's the same message in reverse order.
        libwebsocket_write(wsi, &buf[LWS_SEND_BUFFER_PRE_PADDING], len, LWS_WRITE_TEXT);

        // release memory back into the wild
        free(buf);
        break;
    }
    default:
        break;
    }
    return 0;
}

static struct libwebsocket_protocols protocols[] = {
    /* first protocol must always be HTTP handler */
    {
        "http-only",   // name
        callback_http, // callback
        0,             // per_session_data_size
        0
    },
    {
        "dumb-increment-protocol", // protocol name - very important!
        callback_dumb_increment,   // callback
        0,                         // we don't use any per session data
        0
    },
    {
        NULL, NULL, 0, 0 /* End of list */
    }
};

int main(void) {
    // server url will be http://localhost:9000
    int port = 9000;
    const char *interface = NULL;
    struct libwebsocket_context *context;
    // we're not using ssl
    const char *cert_path = NULL;
    const char *key_path = NULL;
    // no special options
    int opts = 0;

    // create libwebsocket context representing this server
    struct lws_context_creation_info info;
    memset(&info, 0, sizeof info);
    info.port = port;
    info.iface = interface;
    info.protocols = protocols;
    info.extensions = libwebsocket_get_internal_extensions();
    info.ssl_cert_filepath = cert_path;
    info.ssl_private_key_filepath = key_path;
    info.gid = -1;
    info.uid = -1;
    info.options = opts;
    info.user = NULL;
    info.ka_time = 0;
    info.ka_probes = 0;
    info.ka_interval = 0;

    /*context = libwebsocket_create_context(port, interface, protocols,
                   libwebsocket_get_internal_extensions,
                   cert_path, key_path, -1, -1, opts);
    */
    context = libwebsocket_create_context(&info);

    if (context == NULL) {
        fprintf(stderr, "libwebsocket init failed\n");
        return -1;
    }

    libwebsocket_callback_all_protocol(&protocols[1], LWS_CALLBACK_RECEIVE);

    printf("starting server...\n");

    // infinite loop, to end this server send SIGTERM. (CTRL+C)
    while (1) {
        libwebsocket_service(context, 50);
        // libwebsocket_service will process all waiting events with their
        // callback functions and then wait 50 ms.
        // (this is a single threaded webserver and this will keep our server
        // from generating load while there are not requests to process)
    }

    libwebsocket_context_destroy(context);

    return 0;
}
I had the same problem: calling libwebsocket_write on LWS_CALLBACK_ESTABLISHED generated random segfaults. Via the mailing list, the libwebsockets developer Andy Green explained that the correct way is to use libwebsocket_callback_on_writable_all_protocol; the file test-server/test-server.c in the library source code shows a sample of its use.
libwebsocket_callback_on_writable_all_protocol(libwebsockets_get_protocol(wsi))
It works very well for notifying all connected instances, but it only triggers the writeable callback on each of them; it does not define the data to send. You need to manage the data yourself. The sample file test-server.c shows a ring buffer that does this.
http://ml.libwebsockets.org/pipermail/libwebsockets/2015-January/001580.html
Hope it helps.
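To make that concrete, here is a rough sketch of how the question's callback_dumb_increment could be reworked along those lines. This is my own adaptation (it drops into the question's program, so it uses the same pre-lws_ API names and includes), and a single static buffer stands in for the per-connection ring buffer used in test-server.c, so a slow client could miss messages:

// Last received message, stored with the padding libwebsocket_write expects.
// (Simplification: test-server.c keeps a proper ring buffer instead.)
static unsigned char shared_buf[LWS_SEND_BUFFER_PRE_PADDING + 512 +
                                LWS_SEND_BUFFER_POST_PADDING];
static size_t shared_len = 0;

static int callback_dumb_increment(struct libwebsocket_context *context,
                                   struct libwebsocket *wsi,
                                   enum libwebsocket_callback_reasons reason,
                                   void *user, void *in, size_t len)
{
    switch (reason) {
    case LWS_CALLBACK_RECEIVE:
        // Stash the payload, then ask libwebsockets to fire a WRITEABLE
        // callback on every connection speaking this protocol.
        shared_len = len < 512 ? len : 512;
        memcpy(&shared_buf[LWS_SEND_BUFFER_PRE_PADDING], in, shared_len);
        libwebsocket_callback_on_writable_all_protocol(
            libwebsockets_get_protocol(wsi));
        break;
    case LWS_CALLBACK_SERVER_WRITEABLE:
        // Called once per connection; each wsi writes out the stashed message.
        libwebsocket_write(wsi, &shared_buf[LWS_SEND_BUFFER_PRE_PADDING],
                           shared_len, LWS_WRITE_TEXT);
        break;
    default:
        break;
    }
    return 0;
}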
From what I can quickly gather from the documentation, in order to send a message to all clients, what you should do is store somewhere (in a vector, a hashmap, an array, whatever) the struct libwebsocket *wsi pointers that you have access to when your clients connect.
Then, when you receive a message and want to broadcast it, simply call libwebsocket_write on all of the stored wsi instances.
That's what I'd do, anyway.

Unordered set (const char*) much slower than unordered set (string)

I'm loading a very long list from disk into an unordered_set. If I use a set of strings, it is very fast. A test list of about 7 MB loads in about 1 second. However, using a set of char pointers takes about 2.1 minutes!
Here is the code for the string version:
unordered_set<string> Set;
string key;
while (getline(fin, key))
{
    Set.insert(key);
}
Here is the code for the char* version:
struct unordered_eqstr
{
    bool operator()(const char* s1, const char* s2) const
    {
        return strcmp(s1, s2) == 0;
    }
};

struct unordered_deref
{
    template <typename T>
    size_t operator()(const T* p) const
    {
        return hash<T>()(*p);
    }
};

unordered_set<const char*, unordered_deref, unordered_eqstr> Set;
string key;
while (getline(fin, key))
{
    char* str = new(mem) char[key.size()+1];
    strcpy(str, key.c_str());
    Set.insert(str);
}
The "new(mem)" is because I'm using a custom memory manager so I can allocate big blocks of memory and give them out to tiny objects like c strings. However, I've tested this with regular "new" and the results are identical. I've also used my memory manager in other tools with no problems.
The two structs are necessary to make the insert and find hash based on the actual c string and not its address. The unordered_deref I actually found here on stack overflow.
Eventually I need to load multi-gigabyte files. This is why I'm using a custom memory manager, but it's also why this horrible slow down is unacceptable. Any ideas?
Here we go. The culprit is the hash functor: with T deduced as char, hash<T>()(*p) hashes only the first character of each string, so nearly all keys collide into a handful of buckets and every insert and lookup degenerates into a long chain of strcmp comparisons. Hash the whole string instead:
struct unordered_deref
{
    size_t operator()(const char* p) const
    {
        return hash<string>()(p);
    }
};
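If profiling ever shows the temporary std::string construction itself mattering, the hash can also be computed directly over the bytes of the C string. Here is a drop-in replacement for unordered_deref as a sketch of my own (using the 64-bit FNV-1a constants; it is not part of the original answer):

#include <cstddef>
#include <cstdint>

struct unordered_deref
{
    std::size_t operator()(const char* p) const
    {
        // 64-bit FNV-1a over the whole NUL-terminated string.
        std::uint64_t h = 14695981039346656037ULL;  // offset basis
        for (; *p; ++p) {
            h ^= static_cast<unsigned char>(*p);
            h *= 1099511628211ULL;                  // FNV prime
        }
        return static_cast<std::size_t>(h);
    }
};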
