URL rewrite on G-WAN for .JPG - url-rewriting

I am testing the G-WAN server and I'd like to use rewrite rules.
With Apache the rule is:
RewriteRule ^(.+)-(.+)-(.+)-1.jpg$ imagesproduitnew/$3/$2.jpg [L]
I am trying to do it with a JPG handler, but I am having a lot of difficulties.
Has anybody already done something like that?
My handler is called url_wr.c and lives in /0.0.0.0_80/#0.0.0.0/handlers
Here is the script:
int init(char *argv[], int argc);

int main(int argc, char *argv[])
{
    const long state = (long)argv[0];
    if(state == HDL_AFTER_READ)
    {
        xbuf_t *read_xbuf = (xbuf_t*)get_env(argv, READ_XBUF);
        xbuf_replfrto(read_xbuf, read_xbuf->ptr, read_xbuf->ptr + 16, "/blog", "/?blog");
    }
    return 255; // execute next connection step
}

int clean(char *argv[], int argc);
In gwan.log, there is no line saying that url_wr.c was loaded.
If I put a printf in each function, nothing is printed.
The servlet bloc.c works well.
I also tried to put the code in handlers/main.c and in the root of the gwan directory.
I only get an error.log file for the site which just says error 404, without any detail from the handlers.
Thanks in advance for your support.

You must use a G-WAN connection handler, with either:
a plain rewrite: one example is given at the end of the developers page,
OR,
a regex library (libc provides regex calls) if you target a more general rewrite scheme. Here is an example in C and the explanations are there, courtesy of "Regular Expressions in C" from the "Linux Gazette". A hedged sketch applying this to the Apache rule from the question follows below.
This could also be done from a servlet, but then you would have to trigger a redirection (unless the resource was explicitly placed into a cache). If this is acceptable, then v3.10+ will let you do it in C#, PHP, Python, etc.
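As a minimal sketch of the regex option, assuming POSIX regex from libc and nothing G-WAN-specific (the helper name rewrite_jpg and the output buffer handling are illustrative, not part of G-WAN's API), the Apache rule could be translated like this:
#include <regex.h>
#include <stdio.h>

// Rewrites "a-b-c-1.jpg" into "imagesproduitnew/c/b.jpg", mirroring
// RewriteRule ^(.+)-(.+)-(.+)-1.jpg$ imagesproduitnew/$3/$2.jpg [L]
// Returns 0 on success, -1 if the URI does not match.
static int rewrite_jpg(const char *uri, char *out, size_t outlen)
{
    regex_t re;
    regmatch_t m[4]; // m[0] is the whole match, m[1..3] are the groups
    if (regcomp(&re, "^(.+)-(.+)-(.+)-1\\.jpg$", REG_EXTENDED))
        return -1;
    int rc = regexec(&re, uri, 4, m, 0);
    regfree(&re);
    if (rc != 0)
        return -1;
    snprintf(out, outlen, "imagesproduitnew/%.*s/%.*s.jpg",
             (int)(m[3].rm_eo - m[3].rm_so), uri + m[3].rm_so,
             (int)(m[2].rm_eo - m[2].rm_so), uri + m[2].rm_so);
    return 0;
}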
UPDATE following the code published in the question:
Your init() call is empty so main() is never called. You should do this instead:
// ----------------------------------------------------------------------------
// init() will initialize your data structures, load your files, etc.
// ----------------------------------------------------------------------------
// init() should return -1 on failure (to allocate memory, for example)
int init(int argc, char *argv[])
{
    // define which handler states we want to be notified of in main():
    // enum HANDLER_ACT {
    //   HDL_INIT = 0,
    //   HDL_AFTER_ACCEPT, // just after accept (only client IP address setup)
    //   HDL_AFTER_READ,   // each time a read was done until HTTP request OK
    //   HDL_BEFORE_PARSE, // HTTP verb/URI validated but HTTP headers are not
    //   HDL_AFTER_PARSE,  // HTTP headers validated, ready to build reply
    //   HDL_BEFORE_WRITE, // after a reply was built, but before it is sent
    //   HDL_HTTP_ERRORS,  // when G-WAN is going to reply with an HTTP error
    //   HDL_CLEANUP };
    //
    u32 *states = (u32*)get_env(argv, US_HANDLER_STATES);
    *states = 1 << HDL_AFTER_READ; // we assume "GET /hello" sent in one shot
    puts("init()");
    return 0;
}
Also, make sure that connection handlers are named main.c. In contrast, content handlers carry the name of the targeted file extension (gif.c, html.c, etc).
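For completeness, here is a sketch that stitches the question's handler body together with the init() above into a single handlers/main.c; it is untested and assumes the usual G-WAN scaffolding (the gwan.h header, and a clean() whose signature is assumed to mirror init()):
#include "gwan.h" // assumed: G-WAN's handler API header

int init(int argc, char *argv[])
{
    // only wake main() up after a read
    u32 *states = (u32*)get_env(argv, US_HANDLER_STATES);
    *states = 1 << HDL_AFTER_READ;
    return 0;
}

int main(int argc, char *argv[])
{
    const long state = (long)argv[0];
    if(state == HDL_AFTER_READ)
    {
        // rewrite the request in place, as in the question
        xbuf_t *read_xbuf = (xbuf_t*)get_env(argv, READ_XBUF);
        xbuf_replfrto(read_xbuf, read_xbuf->ptr, read_xbuf->ptr + 16,
                      "/blog", "/?blog");
    }
    return 255; // execute next connection step
}

void clean(int argc, char *argv[]) {} // signature assumed to mirror init()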

Related

Is there a way to forward output from Xcode to Processing?

I'm trying to forward an output stream from Xcode (v12.4) to Processing (https://processing.org/).
My goal: draw a simple object in Processing according to data from my Xcode project.
I need to see the value of my variable in Processing.
#include <iostream>

int main(int argc, const char * argv[]) {
    // insert code here...
    for (int i = 0; i < 10; i++)
        std::cout << "How to send value of i to the Processing!\n";
    return 0;
}
Finally I found a way. Hope it helps someone, so I'm sharing it.
Xcode app ->(127.0.0.1:UDP)-> Processing sketch
Source Links:
Sending string over UDP in C++
https://discourse.processing.org/t/receive-udp-packets/19832
Xcode app (C++):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>
#include <iostream>
#include <string>

int main(int argc, char const *argv[])
{
    std::string hostname{"127.0.0.1"};
    uint16_t port = 6000;

    int sock = ::socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in destination;
    std::memset(&destination, 0, sizeof(destination));
    destination.sin_family = AF_INET;
    destination.sin_port = htons(port);
    destination.sin_addr.s_addr = inet_addr(hostname.c_str());

    std::string msg = "Hello world!";
    for (int i = 0; i < 5; i++) {
        long n_bytes = ::sendto(sock, msg.c_str(), msg.length(), 0,
                                reinterpret_cast<sockaddr*>(&destination),
                                sizeof(destination));
        std::cout << n_bytes << " bytes sent" << std::endl;
    }
    ::close(sock);
    return 0;
}
Processing code:
import java.net.*;
import java.io.*;
import java.util.Arrays;

DatagramSocket socket;
DatagramPacket packet;
byte[] buf = new byte[12]; // Set your buffer size as desired

void setup() {
  try {
    socket = new DatagramSocket(6000); // Set your port here
  }
  catch (Exception e) {
    e.printStackTrace();
    println(e.getMessage());
  }
}

void draw() {
  try {
    DatagramPacket packet = new DatagramPacket(buf, buf.length);
    socket.receive(packet);
    InetAddress address = packet.getAddress();
    int port = packet.getPort();
    packet = new DatagramPacket(buf, buf.length, address, port);
    // Received as bytes:
    println(Arrays.toString(buf));
    // If you wish to receive as String:
    String received = new String(packet.getData(), 0, packet.getLength());
    println(received);
  }
  catch (IOException e) {
    e.printStackTrace();
    println(e.getMessage());
  }
}
The assumption is you're using C++ in Xcode (and not Objective-C or Swift).
Every Processing sketch inherits the args property (very similar to main's const char * argv[] in a C++ program). You can make use of that to initialise a Processing sketch with options from C++.
You could have something like:
#include <cstdlib>

int main(int argc, const char * argv[]) {
    std::system("/path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run 0,1,2,3,4,5,6,7,8,9");
    return 0;
}
(This is oversimplified: you'd have your for loop accumulate ints into a string with a separator character, and maybe set up variables for the paths to processing-java and the Processing sketch. A sketch of that accumulation follows below.)
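A minimal sketch of that accumulation, assuming placeholder paths that you would substitute with your own:
#include <cstdlib>
#include <sstream>
#include <string>

int main() {
    // build "0,1,2,...,9" instead of hard-coding it
    std::ostringstream values;
    for (int i = 0; i < 10; i++)
        values << (i ? "," : "") << i;
    // the two paths below are placeholders, not real locations
    std::string cmd = "/path/to/processing-java"
                      " --sketch-path=/path/to/your/processing/sketch/folder"
                      " --run " + values.str();
    return std::system(cmd.c_str());
}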
To clarify, processing-java is a command line utility that ships with Processing. (You can find it inside the Processing.app folder (via show contents), alongside the processing executable, and install it via the Tools menu inside Processing.) It allows you to easily run a sketch from the command line. Alternatively, you can export an application; however, if you're prototyping, the processing-java option might be more practical.
In Processing you'd check if the sketch was launched with arguments, and if so, parse those arguments.
void setup() {
  if (args != null) {
    printArray(args);
  }
}
You can use split() to split 0,1,2,3,4,5,6,7,8,9 into individual numbers that can be parsed (via int(), for example).
If you have more complex data, you can consider formatting your C++ output as JSON, then using parseJSONObject() / parseJSONArray(); one way of building such a string is sketched below.
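For instance, a hedged C++ sketch of building such a JSON argument by hand (the helper name make_json_arg is illustrative; a JSON library would be safer for anything nested):
#include <sstream>
#include <string>

// Builds {"myCppData":[0,1,2,...]} ; remember to escape the quotes
// if you pass this through system(), as noted below.
std::string make_json_arg(const int* vals, int n) {
    std::ostringstream out;
    out << "{\"myCppData\":[";
    for (int i = 0; i < n; i++)
        out << (i ? "," : "") << vals[i];
    out << "]}";
    return out.str();
}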
(If you don't want to split individual values, you can just use spaces with the command line arguments: /path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run 0 1 2 3 4 5 6 7 8 9. If you want to send a JSON formatted string from C++, be aware you may need to escape the quotes, e.g. system("/path/to/processing-java --sketch-path=/path/to/your/processing/sketch/folder --run {\"myCppData\":[0,1,2]}");)
This would work if you need to launch the Processing sketch once and initialise it with values from your C++ program at startup. Outside the scope of your question: if you need to continuously send values from C++ to Processing, you can look at opening a local socket connection (TCP or UDP) to establish communication between the two programs. One easy-to-use protocol is OSC (via UDP). You can use oscpack in raw C++ and oscp5 in Processing. (Optionally, depending on your setup, you can consider openFrameworks, which already has oscpack integrated as ofxOsc and ships with send/receive examples; its ofApp is similar to Processing's PApplet (e.g. setup()/draw()/mousePressed(), etc.).)

winsock2: How to get the ipv4/ipv6 address of a connected client after server side code calls `accept()`

There are other similar questions on this site, but they either do not relate to winsock2 or they are suitable only for use with IPv4 address spaces. The default compiler for Visual Studio 2019 produces an error when the ntoa function is used, hence an IPv4 and IPv6 solution is required.
I did once write the code to do this for a Linux system, however I am currently at work and do not have access to it. It may or may not be "copy and paste"-able into a Windows environment with winsock2. (Edit: I will of course add that code later this evening, but of course it might not be useful.)
The following contains an example, however it is an example of client side code, not server side code.
https://www.winsocketdotnetworkprogramming.com/winsock2programming/winsock2advancedInternet3c.html
Here, the getaddrinfo() function is used to obtain a structure containing matching IPv4 and IPv6 addresses. Obtaining this information involves some interaction with DNS, which is not required in this case.
I have some server code which calls accept() (after bind and listen) to accept a client connection. I want to be able to print the client IP address and port to stdout.
The most closely related question on this site is here. However, the answer uses ntoa and is only IPv4-compatible.
What I have so far:
So far I have something sketched out like this:
SOCKET acceptSocket = INVALID_SOCKET;
SOCKADDR_IN addr; // both of these are NOT like standard unix sockets
// I don't know how they differ and if they can be used with standard
// unix like function calls (eg: inet_ntop)
int addrlen = sizeof addr;

acceptSocket = accept(listenSocket, (SOCKADDR*)&addr, &addrlen);
if(acceptSocket == INVALID_SOCKET)
{
    // some stuff
}
else
{
    const std::size_t addrbuflen = INET6_ADDRSRTLEN;
    char addrbuf[addrbuflen] = '\0'
    inet_ntop(AF_INET, (void*)addr.sin_addr, (PSTR)addrbuf, addrbuflen);
    // above line does not compile and mixes unix style function calls
    // with winsock2 structures
    std::cout << addrbuf << ':' << addr.sin_port << std::endl;
}
getpeername()
int ret = getpeername(acceptSocket, addrbuf, &addrbuflen);
// addrbuf cannot convert from char[65] to sockaddr*
if(ret == ???)
{
    // TODO
}
You need to access the SOCKADDR. This is effectively a discriminated union. The first field tells you whether it's an IPv4 (== AF_INET) or IPv6 (== AF_INET6) address. Depending on that, you cast the addr pointer to either struct sockaddr_in* or struct sockaddr_in6*, and then read off the IP address from the relevant field.
C++ code snippet in VS2019:
char* CPortListener::get_ip_str(struct sockaddr* sa, char* s, size_t maxlen)
{
    switch (sa->sa_family) {
        case AF_INET:
            inet_ntop(AF_INET, &(((struct sockaddr_in*)sa)->sin_addr),
                      s, maxlen);
            break;
        case AF_INET6:
            inet_ntop(AF_INET6, &(((struct sockaddr_in6*)sa)->sin6_addr),
                      s, maxlen);
            break;
        default:
            strncpy(s, "Unknown AF", maxlen);
            return NULL;
    }
    return s;
}
Example:
{
    ...
    char s[INET6_ADDRSTRLEN];
    sockaddr_storage ca;
    socklen_t al = sizeof(ca);
    SOCKET recv = accept(sd, (sockaddr*)&ca, &al);
    pObj->m_ip = get_ip_str(((sockaddr*)&ca), s, sizeof(s));
}
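Since the question also asks for the port, here is a hedged companion sketch (the helper name get_port is illustrative): both address families store the port in network byte order, so ntohs() converts it for printing.
#include <winsock2.h>
#include <ws2tcpip.h>

static unsigned short get_port(const sockaddr* sa)
{
    // the port lives in different structs for IPv4 and IPv6,
    // but both keep it in network byte order
    if (sa->sa_family == AF_INET)
        return ntohs(((const sockaddr_in*)sa)->sin_port);
    if (sa->sa_family == AF_INET6)
        return ntohs(((const sockaddr_in6*)sa)->sin6_port);
    return 0; // unknown address family
}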

AVFormatContext: interrupt callback proper usage?

AVFormatContext's interrupt_callback field is described as:
Custom interrupt callbacks for the I/O layer.
Its type is AVIOInterruptCB, and its comment section explains:
Callback for checking whether to abort blocking functions.
AVERROR_EXIT is returned in this case by the interrupted function. During blocking operations, callback is called with opaque as parameter. If the callback returns 1, the blocking operation will be aborted.
No members can be added to this struct without a major bump, if new elements have been added after this struct in AVFormatContext or AVIOContext.
I have 2 questions:
What does the last section mean? Especially "without a major bump"?
If I use this along with an RTSP source, when I close the input with avformat_close_input, the "TEARDOWN" message is sent out, however it never reaches the RTSP server.
For 2: here is a quick demo:
#include <libavformat/avformat.h>

static volatile int early_exit = 0;

static int InterruptCallback(void* ctx) {
    return early_exit ? 1 : 0; // non-zero aborts the blocking operation
}

int main(void) {
    AVFormatContext* ctx = avformat_alloc_context();
    ctx->interrupt_callback.callback = InterruptCallback;
    avformat_open_input(&ctx, "rtsp://...", NULL, NULL);
    avformat_find_stream_info(ctx, NULL);
    AVPacket pkt;
    int pkts = 0;
    while (!early_exit) {
        if (av_read_frame(ctx, &pkt) >= 0)
            av_packet_unref(&pkt);
        if (pkts++ > 100)
            early_exit = 1;
    }
    avformat_close_input(&ctx);
    return 0;
}
In case I don't use the interrupt callback at all, TEARDOWN is sent out and it also reaches the RTSP server, so it can actually tear down the connection. Otherwise the connection won't be torn down, and I have to wait until the TCP socket times out.
What is the proper way of using this interrupt callback?
It means that they are not going to change anything in this structure (AVIOInterruptCB). However, if they ever did, it would happen in a major bump (a major version change, e.g. from 4.4 to 5.0).
You need to pass a meaningful parameter to void* ctx: anything you like, so that you can check it within the static function. For example, a bool that you set as a cancel flag, so you can interrupt av_read_frame (which will then return AVERROR_EXIT). Usually you pass a class holding your decoder context or something similar, which also holds all the info required to decide whether to return 1 to interrupt or 0 to continue the requests properly. A real example would be that you open a wrong RTSP URL and then want to open another one (the right one), so you need to cancel your previous requests. A sketch of wiring up the opaque pointer this way follows below.
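A minimal sketch of that wiring, assuming the illustrative names DecoderCtx, interrupt_cb and setup_interrupt (only the callback and opaque fields of AVIOInterruptCB are real API):
#include <libavformat/avformat.h>

typedef struct {
    volatile int cancel; // set to 1 from another thread to abort
} DecoderCtx;

static int interrupt_cb(void* opaque) {
    const DecoderCtx* dc = (const DecoderCtx*)opaque;
    return dc->cancel; // non-zero makes blocking calls return AVERROR_EXIT
}

void setup_interrupt(AVFormatContext* fmt, DecoderCtx* dc) {
    fmt->interrupt_callback.callback = interrupt_cb;
    fmt->interrupt_callback.opaque = dc; // handed back to interrupt_cb
}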

How to get data out of Boost mutable_buffers_1?

I’m developing a system for our application to get data from an external device. As soon as I send it a specific message, it sends back short messages to us 10x/second (so about 1 message per 100 milliseconds). I’m using Boost for this communication.
The process is rather simple: I create the socket and send the message, registering a handler for the message receive:
// Header file:
...
std::unique_ptr<boost::asio::io_service> _theIOService;
std::unique_ptr<boost::asio::ip::tcp::socket> _theSocket;
int size_of_the_data = 100;
std::vector<char> _raw_buffer = std::vector<char>(size_of_the_data);
boost::asio::mutable_buffers_1 _data_buffer = boost::asio::buffer(_raw_buffer, size_of_the_data);
...

// Implementation file:
...
void DeviceDataListener::initiateTransfer() {
    // create and connect the socket up here
    ...
    // send the message
    boost::system::error_code error;
    boost::asio::write(*_theSocket,
                       boost::asio::buffer(beginMessage),
                       boost::asio::transfer_all(), error);
    // start the receive
    auto handler = boost::bind(&DeviceDataListener::dataHandler, this, _1, _2);
    _theSocket->async_receive(_data_buffer, handler);
    std::thread run_thread([&]{ _theIOService->run(); });
    ...
}

void DeviceDataListener::dataHandler (
    const boost::system::error_code& error, // Result of operation.
    std::size_t bytes_transferred           // Number of bytes received.
) {
    int foo = bytes_transferred;
    // this line crashes the application
    char* pData = static_cast<char*>(_data_buffer.data());
}
It works: my handler gets called immediately, as it should. The problem is, I can't get the data out of _data_buffer. This:
auto it = _data_buffer.begin();
causes a crash, even though _data_buffer is valid. This:
const char* pData = static_cast<char*>(_data_buffer.data());
won’t compile. The error is “Method 'data' could not be resolved”. The mutable_buffer_1 API says data() is a completely valid method that returns the beginning of the memory range.
Inspecting via a debugger, I can see that there is no error and I can see data as a member of _data_buffer and the memory address it contains does contain the data we’re expecting. The thing is, I can’t get to it via code. Does anyone know how to get to the data in a Boost mutable_buffers_1?
We’re using Eclipse CDT, C++11 and gcc running on Linux.
“Method 'data' could not be resolved”
This error may be real, but it depends on which version of Boost you use. data() has been a member of mutable_buffer since version 1.66. Because mutable_buffer is the base class of mutable_buffers_1, your code should compile if you use at least version 1.66 of Boost.
If your version is < 1.66, you should use
char* p1 = boost::asio::buffer_cast<char*>(_data_buffer);
to get the pointer to the data in the buffer.
_data_buffer.begin();
You should not use the begin() method; it returns a pointer to the mutable_buffers_1 object itself. This method is used by internal functions of Boost.Asio, for instance to copy a sequence of buffers; there, begin() points to the particular buffer to be copied. A version-conditional sketch follows below.
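Combining both cases from this answer into one helper, as a hedged sketch (the function name buffer_data is illustrative):
#include <boost/asio.hpp>
#include <boost/version.hpp>

char* buffer_data(boost::asio::mutable_buffers_1& buf)
{
#if BOOST_VERSION >= 106600
    return static_cast<char*>(buf.data());        // Boost >= 1.66
#else
    return boost::asio::buffer_cast<char*>(buf);  // older Boost
#endif
}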

How do I perform a nonblocking read using asio?

I am attempting to use boost::asio to read and write from a device on a serial port. Both boost::asio:read() and boost::asio::serial_port::read_some() block when there is nothing to read. Instead I would like to detect this condition and write a command to the port to kick-start the device.
How can I detect that no data is available?
If necessary I can do everything asynchronously; I would just rather avoid the extra complexity if I can.
You have a couple of options, actually. You can either use the serial port's built-in async_read_some function, or you can use the stand-alone function boost::asio::async_read (or async_read_some).
You'll still run into the situation where you are effectively "blocked", since neither of these will call the callback unless (1) data has been read or (2) an error occurs. To get around this, you'll want to use a deadline_timer object to set a timeout. If the timeout fires first, no data was available. Otherwise, you will have read data.
The added complexity isn't really all that bad. You'll end up with two callbacks with similar behavior. If either the "read" or the "timeout" callback fires with an error, you know it's the race loser. If either one fires without an error, then you know it's the race winner (and you should cancel the other call). In the place where you would have had your blocking call to read_some, you will now have a call to io_svc.run(). Your function will still block as before when it calls run, but this time you control the duration.
Here's an example:
#include <boost/asio.hpp>
#include <boost/bind.hpp>

using namespace boost::asio;

// forward declarations so foo() can bind them
void read_callback(bool& data_available, deadline_timer& timeout,
                   const boost::system::error_code& error, std::size_t bytes_transferred);
void wait_callback(serial_port& ser_port, const boost::system::error_code& error);

void foo()
{
    io_service io_svc;
    serial_port ser_port(io_svc, "your string here");
    deadline_timer timeout(io_svc);
    unsigned char my_buffer[1];
    bool data_available = false;

    ser_port.async_read_some(boost::asio::buffer(my_buffer),
        boost::bind(&read_callback, boost::ref(data_available), boost::ref(timeout),
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
    timeout.expires_from_now(boost::posix_time::milliseconds(<<your_timeout_here>>));
    timeout.async_wait(boost::bind(&wait_callback, boost::ref(ser_port),
                                   boost::asio::placeholders::error));

    io_svc.run(); // will block until async callbacks are finished

    if (!data_available)
    {
        kick_start_the_device();
    }
}

void read_callback(bool& data_available, deadline_timer& timeout,
                   const boost::system::error_code& error, std::size_t bytes_transferred)
{
    if (error || !bytes_transferred)
    {
        // No data was read!
        data_available = false;
        return;
    }
    timeout.cancel(); // will cause wait_callback to fire with an error
    data_available = true;
}

void wait_callback(serial_port& ser_port, const boost::system::error_code& error)
{
    if (error)
    {
        // Data was read and this timeout was canceled
        return;
    }
    ser_port.cancel(); // will cause read_callback to fire with an error
}
That should get you started with only a few tweaks here and there to suit your specific needs. I hope this helps!
Another note: No extra threads were necessary to handle callbacks. Everything is handled within the call to run(). Not sure if you were already aware of this...
It's actually a lot simpler than the answers here imply, and you can do it synchronously.
Suppose your blocking read was something like this:
size_t len = socket.receive_from(boost::asio::buffer(recv_buf), sender_endpoint);
Then you replace it with:
socket.non_blocking(true);
size_t len = 0;
boost::system::error_code error = boost::asio::error::would_block;
while (error == boost::asio::error::would_block)
    // do other things here, like go and make coffee
    len = socket.receive_from(boost::asio::buffer(recv_buf), sender_endpoint, 0, error);
std::cout.write(recv_buf.data(), len);
You use the alternative overloaded form of receive_from, which almost all the send/receive methods have. They unfortunately take a flags argument, but 0 seems to work fine. A deadline-bounded variation is sketched below.
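A hedged variation on that loop, giving the poll a deadline so it cannot spin forever; the helper name receive_with_deadline is illustrative, and it assumes Boost >= 1.66 for the plain mutable_buffer parameter:
#include <boost/asio.hpp>
#include <chrono>

std::size_t receive_with_deadline(boost::asio::ip::udp::socket& socket,
                                  boost::asio::mutable_buffer buf,
                                  boost::asio::ip::udp::endpoint& sender,
                                  std::chrono::milliseconds timeout)
{
    socket.non_blocking(true);
    boost::system::error_code error = boost::asio::error::would_block;
    std::size_t len = 0;
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    // keep polling until data arrives or the deadline passes
    while (error == boost::asio::error::would_block &&
           std::chrono::steady_clock::now() < deadline)
        len = socket.receive_from(buf, sender, 0, error);
    return error ? 0 : len;
}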
You have to use the free-function asio::async_read.
