I'm having a problem with a kernel extension that filters network traffic.
My code was written following Apple's tcplognke example.
Everything works fine, but when I attempt to upload a file bigger than 500 KB, the connection drops.
Here is the simplified kext code:
errno_t tl_data_fn(void *cookie, socket_t so, const struct sockaddr *addr, mbuf_t *data, mbuf_t *control, sflt_data_flag_t flags, FilterSocketDataDirection direction) {
    errno_t result = 0;
    if (check_tag(data, gidtag, FILTER_TAG_TYPE, direction == FilterSocketDataDirectionIn ? IN_DONE : OUT_DONE)) {
        return result;
    }
    if (!cookie) return result;
    filter_cookie *f_cookie = get_filter_cookie(cookie);
    uint32_t data_size = (uint32_t)mbuf_pkthdr_len(*data);
    uint32_t offset = 0;
    printf("tl_data_fn: %u\n", data_size);
    while (offset < data_size) {
        FilterNotification notification;
        if (direction == FilterSocketDataDirectionIn) {
            notification.event = FilterEventDataIn;
        } else {
            notification.event = FilterEventDataOut;
        }
        notification.socketId = (uint64_t)so;
        notification.inputoutput.dataSize = min(data_size - offset, sizeof(notification.inputoutput.data));
        mbuf_copydata(*data, offset, notification.inputoutput.dataSize, notification.inputoutput.data);
        offset += notification.inputoutput.dataSize;
        send_notification(f_cookie, &notification);
    }
    result = EJUSTRETURN;
    if (result == EJUSTRETURN) {
        mbuf_freem(*data);
        if (control != NULL && *control != NULL)
            mbuf_freem(*control);
    }
    return result;
}
errno_t tl_data_in_fn(void *cookie, socket_t so, const struct sockaddr *from, mbuf_t *data, mbuf_t *control, sflt_data_flag_t flags) {
    return tl_data_fn(cookie, so, from, data, control, flags, FilterSocketDataDirectionIn);
}

errno_t tl_data_out_fn(void *cookie, socket_t so, const struct sockaddr *to, mbuf_t *data, mbuf_t *control, sflt_data_flag_t flags) {
    return tl_data_fn(cookie, so, to, data, control, flags, FilterSocketDataDirectionOut);
}
And the user-space code:
int s = socket(PF_SYSTEM, SOCK_DGRAM, SYSPROTO_CONTROL);
// connect to driver
FilterNotification notification;
while (recv(s, &notification, sizeof(FilterNotification), 0) == sizeof(FilterNotification)) {
    FilterClientResponse response;
    response.socketId = notification.socketId;
    response.direction = (notification.event == FilterEventDataIn) ? FilterSocketDataDirectionIn : FilterSocketDataDirectionOut;
    response.dataSize = notification.inputoutput.dataSize;
    memcpy(response.data, notification.inputoutput.data, notification.inputoutput.dataSize);
    send(s, &response, sizeof(response), 0);
}
When I asked on the Apple Developer Forums, a developer said, "I don't see any attempt to handle send-side flow control here. Without that a file upload can easily eat up all of the available mbufs, and things will go badly from there," but there are no examples at all. Can someone help me? Thanks.
The problem was in the socket buffer. When I inject a lot of data very quickly, the buffer becomes full and the inject_data_in/inject_data_out functions return an error.
The workaround is to store pending packets in kernel space (you can use a TAILQ, for example) and then, when the socket becomes available for writing again (on OS X you can get this event with kqueue), continue the injection.
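Here is a minimal sketch of that pending-packet queue, not the full kext: pending_packet, enqueue_pending and reinject_pending are made-up names, and real code would also need a lock around the queue.

#include <sys/queue.h>

struct pending_packet {
    TAILQ_ENTRY(pending_packet) link;
    mbuf_t data;
    FilterSocketDataDirection direction;
};
TAILQ_HEAD(pending_queue, pending_packet);

// Park a packet when sock_inject_data_in/out fails (e.g. with ENOBUFS).
static errno_t enqueue_pending(struct pending_queue *q, mbuf_t data,
                               FilterSocketDataDirection dir)
{
    struct pending_packet *p = _MALLOC(sizeof(*p), M_TEMP, M_WAITOK);
    if (p == NULL) return ENOMEM;
    p->data = data;
    p->direction = dir;
    TAILQ_INSERT_TAIL(q, p, link);
    return 0;
}

// Retry in FIFO order once user space reports the socket writable again.
static void reinject_pending(socket_t so, struct pending_queue *q)
{
    struct pending_packet *p;
    while ((p = TAILQ_FIRST(q)) != NULL) {
        errno_t err = (p->direction == FilterSocketDataDirectionIn)
            ? sock_inject_data_in(so, NULL, p->data, NULL, 0)
            : sock_inject_data_out(so, NULL, p->data, NULL, 0);
        if (err != 0)
            break; // buffer still full; wait for the next writable event
        TAILQ_REMOVE(q, p, link);
        _FREE(p, M_TEMP);
    }
}

On the user-space side, a kevent() registration with EVFILT_WRITE on the relevant socket is one way to learn when to trigger the re-injection.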
I have implemented a custom storage interface in libtorrent as described in the help section here.
The storage_interface is working fine, although I can't figure out why readv is called only sporadically while downloading a torrent. As I understand it, the overridden virtual function readv should be called each time I call handle->read_piece() in piece_finished_alert, to read the piece for read_piece_alert.
Instead, the buffer is provided in read_piece_alert without readv ever being called.
So the question is: why is readv called only sporadically, and why is it not called on a read_piece() call? Is my storage_interface wrong?
The code looks like this:
struct temp_storage : storage_interface
{
    virtual int readv(file::iovec_t const* bufs, int num_bufs
        , int piece, int offset, int flags, storage_error& ec)
    {
        // Only called on some pieces while downloading a larger torrent
        std::map<int, std::vector<char> >::const_iterator i = m_file_data.find(piece);
        if (i == m_file_data.end()) return 0;
        int available = int(i->second.size()) - offset;
        if (available <= 0) return 0;
        // copy the piece data into each supplied buffer in turn
        int copied = 0;
        for (int b = 0; b < num_bufs && available > 0; ++b) {
            int n = available < int(bufs[b].iov_len) ? available : int(bufs[b].iov_len);
            memcpy(bufs[b].iov_base, &i->second[offset + copied], n);
            copied += n;
            available -= n;
        }
        return copied;
    }
    virtual int writev(file::iovec_t const* bufs, int num_bufs
        , int piece, int offset, int flags, storage_error& ec)
    {
        std::vector<char>& data = m_file_data[piece];
        // total number of bytes in the iovec array
        int total = 0;
        for (int b = 0; b < num_bufs; ++b) total += int(bufs[b].iov_len);
        if (int(data.size()) < offset + total) data.resize(offset + total);
        int written = 0;
        for (int b = 0; b < num_bufs; ++b) {
            memcpy(&data[offset + written], bufs[b].iov_base, bufs[b].iov_len);
            written += int(bufs[b].iov_len);
        }
        return written;
    }
    virtual bool has_any_file(storage_error& ec) { return false; }
    virtual ...
    virtual ...
    std::map<int, std::vector<char> > m_file_data;
};
Initialized with
storage_interface* temp_storage_constructor(storage_params const& params)
{
    printf("NEW INTERFACE\n");
    return new temp_storage(*params.files);
}

p.storage = &temp_storage_constructor;
The loop below sets up alert handling and invokes read_piece() on each completed piece.
while (true) {
    std::vector<alert*> alerts;
    s.pop_alerts(&alerts);
    for (alert* i : alerts)
    {
        switch (i->type()) {
            case read_piece_alert::alert_type:
            {
                read_piece_alert* p = (read_piece_alert*)i;
                if (p->ec) {
                    // read_piece failed
                    break;
                }
                // the piece buffer and size arrive here, without readv ever
                // being called after the read_piece() in piece_finished_alert
                break;
            }
            case piece_finished_alert::alert_type: {
                piece_finished_alert* p = (piece_finished_alert*)i;
                p->handle.read_piece(p->piece_index);
                // once the piece is finished, read it to obtain the buffer
                // in read_piece_alert
                break;
            }
            default:
                break;
        }
    }
    Sleep(100);
}
I will answer my own question. As Arvid said in the comments, readv was not invoked because of caching. Setting settings_pack::use_read_cache to false makes readv be invoked every time.
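A minimal sketch of that setting (libtorrent 1.x settings_pack API), assuming the session is the s used in the alert loop above:

libtorrent::settings_pack pack;
pack.set_bool(libtorrent::settings_pack::use_read_cache, false);
s.apply_settings(pack);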
I am following this guide.
I used the following code to display LDAP servers in the domain:
#include <dns_sd.h>
#include <stdio.h>
#include <pthread.h>

void ResolveCallBack(DNSServiceRef sdRef,
                     DNSServiceFlags flags,
                     uint32_t interfaceIndex,
                     DNSServiceErrorType errorCode,
                     const char *fullname,
                     const char *hosttarget,
                     uint16_t port, /* In network byte order */
                     uint16_t txtLen,
                     const unsigned char *txtRecord,
                     void *context) {
}
void BrowserCallBack(DNSServiceRef inServiceRef,
                     DNSServiceFlags inFlags,
                     uint32_t inIFI,
                     DNSServiceErrorType inError,
                     const char* inName,
                     const char* inType,
                     const char* inDomain,
                     void* inContext) {
    DNSServiceErrorType err = DNSServiceResolve(&inServiceRef,
                                                0, // No flags.
                                                inIFI,
                                                inName,
                                                inType,
                                                inDomain,
                                                ResolveCallBack,
                                                NULL);
    printf("DNSServiceResolve err = %x, name = %s, type=%s, domain=%s\n",
           err, inName, inType, inDomain);
}
int main() {
    DNSServiceRef ServiceRef;
    DNSServiceErrorType err = DNSServiceBrowse(&ServiceRef, // Receives reference to Bonjour browser object.
                                               kDNSServiceFlagsDefault, // Flags.
                                               kDNSServiceInterfaceIndexAny, // Browse on all network interfaces.
                                               "_ldap._tcp", // Browse for this service type.
                                               NULL, // Browse on the default domain (e.g. local.).
                                               BrowserCallBack, // Callback function invoked when Bonjour events occur.
                                               NULL); // Callback context.
    printf("err = 0x%x\n", err);
    int sockfd = DNSServiceRefSockFD(ServiceRef);
    printf("sockfd = %d\n", sockfd);
    pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, 0);
    pthread_setcanceltype(PTHREAD_CANCEL_ASYNCHRONOUS, 0);
    struct timeval timeout;
    timeout.tv_sec = 1;
    timeout.tv_usec = 0;
    fd_set descriptors;
    FD_ZERO(&descriptors);
    FD_SET(sockfd, &descriptors);
    int r = select(sockfd + 1, &descriptors, NULL, NULL, &timeout);
    printf("r = %d\n", r);
    fflush(stdout);
    if (r > 0) {
        if (FD_ISSET(sockfd, &descriptors)) {
            // This function will call the appropriate callback to process the
            // event, in this case BrowserCallBack.
            err = DNSServiceProcessResult(ServiceRef);
            if (err != kDNSServiceErr_NoError) {
                printf("Error processing an event in the event loop, e = 0x%x\n", err);
            }
        }
    } else if (r == -1) {
        printf("The select() call failed\n");
    }
    return 0;
}
However, this didn't give me any LDAP servers.
Any help on this?
Thanks in advance.
N.B.: This command returns results:
$ nslookup -type=any _ldap._tcp
So there are LDAP servers in the domain.
When I tried "_http._tcp" as the registration type, it did return results.
The operating system is Mac OS X 10.9.
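For reference, DNSServiceBrowse delivers results asynchronously, and each DNSServiceProcessResult() call handles only one event, so the single select() above processes at most one browse reply before main() returns. A typical Bonjour client keeps servicing the socket in a loop; a minimal sketch using the same ServiceRef and sockfd as above:

// Keep servicing events: each DNSServiceProcessResult() call handles
// one reply and invokes BrowserCallBack.
for (;;) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sockfd, &readfds);
    struct timeval tv = { 5, 0 };  // give up after 5 idle seconds
    int r = select(sockfd + 1, &readfds, NULL, NULL, &tv);
    if (r <= 0) break;             // timeout or error
    if (FD_ISSET(sockfd, &readfds))
        DNSServiceProcessResult(ServiceRef);
}
DNSServiceRefDeallocate(ServiceRef);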
I am toying around with a libwebsockets tutorial, trying to make it so that, after it receives a message from a connection over a given protocol, it sends a response to all active connections implementing that protocol. I have used the function libwebsocket_callback_all_protocol, but it is not doing what I expect from its name (and I'm not quite sure what it does from the documentation).
The goal is to have two webpages open and, when info is sent from one, have the result relayed to both. Below is my code; you'll see that libwebsocket_callback_all_protocol is called in main (where it currently does nothing, I think...):
#include <stdio.h>
#include <stdlib.h>
#include <libwebsockets.h>
#include <string.h>
static int callback_http(struct libwebsocket_context *this,
                         struct libwebsocket *wsi,
                         enum libwebsocket_callback_reasons reason, void *user,
                         void *in, size_t len)
{
    return 0;
}
static int callback_dumb_increment(struct libwebsocket_context *this,
                                   struct libwebsocket *wsi,
                                   enum libwebsocket_callback_reasons reason,
                                   void *user, void *in, size_t len)
{
    switch (reason) {
        case LWS_CALLBACK_ESTABLISHED: // just log that someone is connecting
            printf("connection established\n");
            break;
        case LWS_CALLBACK_RECEIVE: { // the funny part
            // create a buffer to hold our response
            // it has to have some pre and post padding. You don't need to care
            // what goes there, libwebsockets will do everything for you. For more info see
            // http://git.warmcat.com/cgi-bin/cgit/libwebsockets/tree/lib/libwebsockets.h#n597
            unsigned char *buf = (unsigned char*) malloc(LWS_SEND_BUFFER_PRE_PADDING + len +
                                                         LWS_SEND_BUFFER_POST_PADDING);
            int i;
            // the pointer `void *in` holds the incoming request
            // we're just going to put it in reverse order into `buf` at the
            // correct offset. `len` holds the length of the request.
            for (i = 0; i < len; i++) {
                buf[LWS_SEND_BUFFER_PRE_PADDING + (len - 1) - i] = ((char *) in)[i];
            }
            // log what we received and what we're going to send as a response.
            // the `%.*s` syntax is used to print just a part of our buffer
            // http://stackoverflow.com/questions/5189071/print-part-of-char-array
            printf("received data: %s, replying: %.*s\n", (char *) in, (int) len,
                   buf + LWS_SEND_BUFFER_PRE_PADDING);
            // send the response
            // notice that we have to say exactly where our response starts
            // (&buf[LWS_SEND_BUFFER_PRE_PADDING]) and how long it is.
            // we know the response has the same length as the request because
            // it's the same message in reverse order.
            libwebsocket_write(wsi, &buf[LWS_SEND_BUFFER_PRE_PADDING], len, LWS_WRITE_TEXT);
            // release memory back into the wild
            free(buf);
            break;
        }
        default:
            break;
    }
    return 0;
}
static struct libwebsocket_protocols protocols[] = {
    /* first protocol must always be HTTP handler */
    {
        "http-only",   // name
        callback_http, // callback
        0,             // per_session_data_size
        0
    },
    {
        "dumb-increment-protocol", // protocol name - very important!
        callback_dumb_increment,   // callback
        0,                         // we don't use any per-session data
        0
    },
    {
        NULL, NULL, 0, 0 /* end of list */
    }
};
int main(void) {
    // server url will be http://localhost:9000
    int port = 9000;
    const char *interface = NULL;
    struct libwebsocket_context *context;
    // we're not using ssl
    const char *cert_path = NULL;
    const char *key_path = NULL;
    // no special options
    int opts = 0;
    // create a libwebsocket context representing this server
    struct lws_context_creation_info info;
    memset(&info, 0, sizeof info);
    info.port = port;
    info.iface = interface;
    info.protocols = protocols;
    info.extensions = libwebsocket_get_internal_extensions();
    info.ssl_cert_filepath = cert_path;
    info.ssl_private_key_filepath = key_path;
    info.gid = -1;
    info.uid = -1;
    info.options = opts;
    info.user = NULL;
    info.ka_time = 0;
    info.ka_probes = 0;
    info.ka_interval = 0;
    /*context = libwebsocket_create_context(port, interface, protocols,
              libwebsocket_get_internal_extensions,
              cert_path, key_path, -1, -1, opts);
    */
    context = libwebsocket_create_context(&info);
    if (context == NULL) {
        fprintf(stderr, "libwebsocket init failed\n");
        return -1;
    }
    libwebsocket_callback_all_protocol(&protocols[1], LWS_CALLBACK_RECEIVE);
    printf("starting server...\n");
    // infinite loop; to end this server send SIGTERM (CTRL+C)
    while (1) {
        libwebsocket_service(context, 50);
        // libwebsocket_service will process all waiting events with their
        // callback functions and then wait 50 ms.
        // (this is a single-threaded webserver; this keeps our server
        // from generating load while there are no requests to process)
    }
    libwebsocket_context_destroy(context);
    return 0;
}
I had the same problem: libwebsocket_write on LWS_CALLBACK_ESTABLISHED generated random segfaults. Through the mailing list, the libwebsockets developer Andy Green explained that the correct way is to use libwebsocket_callback_on_writable_all_protocol; the file test-server/test-server.c in the library source code shows a sample of its use.
libwebsocket_callback_on_writable_all_protocol(libwebsockets_get_protocol(wsi))
It worked very well to notify all instances, but note that it only triggers the writable callback on all connected instances; it does not define the data to send. You need to manage the data yourself. The sample file test-server.c shows a ring buffer used for this purpose.
http://ml.libwebsockets.org/pipermail/libwebsockets/2015-January/001580.html
Hope it helps.
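Here is a rough sketch of the pattern, not test-server.c itself: a single shared buffer (latest_msg and latest_len are made-up names) stands in for the ring buffer the real sample uses, and the cases go inside callback_dumb_increment's switch (reason).

#define MAX_PAYLOAD 512
static unsigned char latest_msg[LWS_SEND_BUFFER_PRE_PADDING + MAX_PAYLOAD +
                                LWS_SEND_BUFFER_POST_PADDING];
static size_t latest_len;

case LWS_CALLBACK_RECEIVE:
    // stash the payload, then ask for a writable callback on every
    // connection using this protocol
    latest_len = len > MAX_PAYLOAD ? MAX_PAYLOAD : len;
    memcpy(&latest_msg[LWS_SEND_BUFFER_PRE_PADDING], in, latest_len);
    libwebsocket_callback_on_writable_all_protocol(
        libwebsockets_get_protocol(wsi));
    break;

case LWS_CALLBACK_SERVER_WRITEABLE:
    // fires once per connection; each one sends the stashed message
    libwebsocket_write(wsi, &latest_msg[LWS_SEND_BUFFER_PRE_PADDING],
                       latest_len, LWS_WRITE_TEXT);
    break;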
From what I can quickly gather from the documentation, in order to send a message to all clients, you should store somewhere (in a vector, a hashmap, an array, whatever) the struct libwebsocket *wsi that you have access to when your clients connect.
Then, when you receive a message and want to broadcast it, simply call libwebsocket_write on all the stored wsi instances.
That's what I'd do, anyway.
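A rough sketch of that bookkeeping, for illustration only: clients and num_clients are made-up names, entries would also need to be removed on LWS_CALLBACK_CLOSED, and (as the previous answer notes) libwebsockets really expects the actual write to happen from the writable callback rather than directly like this.

#define MAX_CLIENTS 64
static struct libwebsocket *clients[MAX_CLIENTS];
static int num_clients = 0;

// in callback_dumb_increment's switch (reason):
case LWS_CALLBACK_ESTABLISHED:
    // remember every connection as it arrives
    if (num_clients < MAX_CLIENTS)
        clients[num_clients++] = wsi;
    break;

case LWS_CALLBACK_RECEIVE: {
    // ... fill buf as in the question's code, then relay to everyone ...
    int c;
    for (c = 0; c < num_clients; c++)
        libwebsocket_write(clients[c], &buf[LWS_SEND_BUFFER_PRE_PADDING],
                           len, LWS_WRITE_TEXT);
    break;
}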
I'm trying to write a block device driver that implements read/write operations.
The tricky part is that the information is not in the hardware but in a user-space process. Therefore, during the read/write system call I would like to interact with user space (i.e., by sending a signal to it).
However, my user-space process catches the signal only after the read/write system call returns. Adding a wait in the system-call implementation seems to be ignored somehow.
I used this code for the read system call:
ssize_t sleepy_read(struct file *filp, char *buf, size_t count, loff_t *f_pos)
{
    struct siginfo info;
    struct task_struct *t;
    int ret;
#define SIG_TEST 44
    memset(&info, 0, sizeof(struct siginfo));
    info.si_signo = SIG_TEST;
    info.si_code = SI_QUEUE;
    info.si_int = 1234;
    rcu_read_lock();
    // looks up the calling process's own task_struct
    t = pid_task(find_pid_ns(current->pid, &init_pid_ns), PIDTYPE_PID);
    if (t == NULL) {
        printk(KERN_ERR "no such pid\n");
        rcu_read_unlock();
        return -ENODEV;
    }
    rcu_read_unlock();
    ret = send_sig_info(SIG_TEST, &info, t); // send the signal
    if (ret < 0) {
        printk("error sending signal\n");
        return ret;
    }
    // wq and flag are declared elsewhere, e.g.:
    // static DECLARE_WAIT_QUEUE_HEAD(wq); static int flag;
    wait_event_interruptible(wq, flag != 0);
    msleep(10000);
    return 0;
}
and this code in user space:
#define SIG_TEST 44
int g_devFile = -1;

void receiveData(int n, siginfo_t *info, void *unused)
{
    printf("received value %i\n", info->si_int);
}

int main(void)
{
    struct sigaction sig;
    sig.sa_sigaction = receiveData;
    sig.sa_flags = SA_SIGINFO;
    sigaction(SIG_TEST, &sig, NULL);
    g_devFile = open(devname, O_RDWR); // devname points at the device node
    if (g_devFile < 0) {
        fprintf(stderr, "Error opening device[%s] file err[%s]\n", devname, strerror(errno));
        return -1;
    } else {
        fprintf(stderr, "device opened. fd=%d\n", g_devFile);
    }
    i = read(g_devFile, &buff, 11); // i and buff are declared elsewhere
}
Currently I'm catching my signal (in user space) only after the 10-second sleep expires (the wait seems to be ignored).
Any idea will be appreciated. Thanks.
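For context on the mechanism being attempted: wait_event_interruptible() in sleepy_read() only returns once something sets flag and wakes wq. A minimal sketch of that counterpart, assuming the same wq/flag pair (sleepy_write is a hypothetical name):

// wq/flag as used by sleepy_read; the write handler sets the flag and
// wakes the reader once user space has produced the data.
static DECLARE_WAIT_QUEUE_HEAD(wq);
static int flag = 0;

ssize_t sleepy_write(struct file *filp, const char __user *buf,
                     size_t count, loff_t *f_pos)
{
    // ... copy_from_user() the payload somewhere ...
    flag = 1;
    wake_up_interruptible(&wq); // releases the reader blocked in sleepy_read
    return count;
}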
I'm having a problem with a UDPSocket wrapper I've written. I have a high-bandwidth, low-latency local network over which I'm sending UDP packets back and forth. I'm not too concerned about the reliability of packet arrival, but it is incredibly important that the packets that do arrive do so quickly. Here's the relevant code for setting up a socket:
bool UDPSocket::create() {
    int on = 1;
#ifdef WIN32
    if(WSAStartup(MAKEWORD(1,1), &SocketInfo) != 0) {
        MessageBox(NULL, "Cannot initialize WinSock", "WSAStartup", MB_OK);
    }
#endif
    m_sock = socket(PF_INET, SOCK_DGRAM, 0);
#ifdef WIN32
    if(setsockopt(m_sock, SOL_SOCKET, SO_REUSEADDR, (const char*)&on, sizeof(on)) == SOCKET_ERROR)
        return false;
#else
    if(setsockopt(m_sock, SOL_SOCKET, SO_REUSEADDR, (const char*)&on, sizeof(on)) == -1)
        return false;
#endif
    addrLen = sizeof(struct sockaddr);
    return true;
}
bool UDPSocket::bind(const int port) {
    if(!is_valid())
        return false;
    m_addr.sin_family = AF_INET;
    m_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    m_addr.sin_port = htons(port);
    if(::bind(m_sock, (struct sockaddr*)&m_addr, sizeof(m_addr)) < 0) {
        std::cout << "UDPSocket: error on bind" << std::endl;
        return false;
    }
    return true;
}

bool UDPSocket::send(const std::string s) const {
    const char* buf = s.c_str();
    ::sendto(m_sock, buf, strlen(buf), 0, (const sockaddr*)&clientAddr, addrLen);
    return true;
}

bool UDPSocket::setDestination(const std::string ip, const int port) {
    memset(&clientAddr, 0, sizeof(clientAddr));
    clientAddr.sin_family = AF_INET;
    clientAddr.sin_addr.s_addr = inet_addr(ip.c_str());
    clientAddr.sin_port = htons(port);
    return true;
}
int UDPSocket::recv(std::string& s) const {
    char buffer[MAXRECV + 1];
    struct timeval tv;
    fd_set fdset;
    int rc, nread;
    memset(&buffer, 0, sizeof(buffer));
    FD_ZERO(&fdset);
    FD_SET(m_sock, &fdset);
    tv.tv_sec = 0;
    tv.tv_usec = m_timeout;
    rc = select(m_sock + 1, &fdset, (fd_set *) 0, (fd_set *) 0, &tv);
    if(FD_ISSET(m_sock, &fdset)) {
#ifdef WIN32
        nread = ::recvfrom(m_sock, buffer, MAXRECV, 0, (sockaddr*)&clientAddr, const_cast< int * __w64 >(&addrLen));
#else
        nread = ::recvfrom(m_sock, buffer, MAXRECV, 0, (sockaddr*)&clientAddr, (socklen_t*)&addrLen);
#endif
        if(nread < 0) {
            return -1;
        } else if(nread == 0) {
            return 0;
        }
        s = std::string(buffer);
        return nread;
    } else {
        return 0;
    }
}
void UDPSocket::set_non_blocking(const bool b) {
    mNonBlocking = b;
#ifdef WIN32
    u_long argp = b ? 1 : 0;
    ioctlsocket(m_sock, FIONBIO, &argp);
#else
    int opts = fcntl(m_sock, F_GETFL);
    if(opts < 0) return;
    if(b)
        opts |= O_NONBLOCK;
    else
        opts &= ~O_NONBLOCK;
    fcntl(m_sock, F_SETFL, opts);
#endif
}
My user code, on both ends, creates a "sending" and a "receiving" UDPSocket, binds them to their respective ports, then uses send() to send data and recv() to receive. The Linux side seems to receive practically immediately, but the Windows side has a delay of up to one second before receiving data. However, ::recv() never returns 0 during this time. Am I missing something obvious?
Have you tried all four combinations (Linux->Linux, Win->Linux, Linux->Win, Win->Win)? Which have delays and which do not?
Also, use a packet sniffer (like tcpdump or Wireshark) to see whether the delay occurs before or after the packet hits the wire.
It's not a direct answer, but you could try a tool like ttcp (http://www.pcausa.com/Utilities/pcattcp.htm), which can test both TCP and UDP performance. You may also check its source code, which is in the public domain.
If you've got the time to experiment, try an IOCP version of the code.
I never trusted the Windows select() implementation, with its 64-socket limit.