SOCKADDR_INET in golang

I have a byte array that is filled with SOCKADDR_INET structures by a C++ function in the kernel. After transferring it to user mode, I need to unmarshal it into a slice of some SOCKADDR_INET equivalent. The syscall package contains sockaddr_in and sockaddr_in6, but as far as I can see, Go lacks unions, so there is no direct way to emulate SOCKADDR_INET. I'm new to Go, so I'm wondering about the right solution for this case.

The si_family field identifies whether the bytes should be interpreted as an IPv4 or an IPv6 SOCKADDR structure (it is declared last in the SOCKADDR_INET union, but it overlaps the family field of both variants, so it sits at offset 0).
The easiest route is likely to write a function that unmarshals the bytes into a Go version of the SOCKADDR_IN or SOCKADDR_IN6 struct, depending on the value of si_family.
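For reference, SOCKADDR_INET on Windows is a union whose variants all begin with the address family, which is what makes this dispatch safe. A minimal C sketch of the layout being decoded and of the dispatch (the struct names below are portable stand-ins, not the real Windows typedefs; field sizes follow the Windows SDK definitions):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Portable stand-ins mirroring the SOCKADDR_IN / SOCKADDR_IN6 field layout. */
struct sa_in4 {               /* SOCKADDR_IN:  16 bytes */
    uint16_t sin_family;
    uint16_t sin_port;        /* network byte order */
    uint8_t  sin_addr[4];
    uint8_t  sin_zero[8];
};
struct sa_in6 {               /* SOCKADDR_IN6: 28 bytes */
    uint16_t sin6_family;
    uint16_t sin6_port;       /* network byte order */
    uint32_t sin6_flowinfo;
    uint8_t  sin6_addr[16];
    uint32_t sin6_scope_id;
};
union sa_inet {               /* SOCKADDR_INET */
    struct sa_in4 Ipv4;
    struct sa_in6 Ipv6;
    uint16_t si_family;       /* overlaps both family fields at offset 0 */
};

/* Copy one raw record and return its family; the caller then reads
   .Ipv4 or .Ipv6 accordingly (AF_INET is 2, AF_INET6 is 23 on Windows). */
static int parse(const uint8_t *buf, union sa_inet *out)
{
    memcpy(out, buf, sizeof *out);
    return out->si_family;
}
```

A Go unmarshaller would do the same: read the first two bytes as the family, then decode the rest of the record as either the 4-byte or 16-byte address variant.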


How to send multipart messages using libnl and generic netlink?

I'm trying to send a relatively big string (6 KB) through libnl and generic netlink; however, I'm receiving error -5 (NLE_NOMEM) from the function nla_put_string in this process. I've done a lot of research but couldn't find any information about these two questions:
What's the maximum string size supported by generic netlink and libnl's nla_put_string function?
How can I use the multipart mechanism of generic netlink to break this string into smaller parts, send them, and reassemble the string on the kernel side?
If there is somewhere I can study this subject, I'd appreciate a pointer.
How can I use the multipart mechanism of generic netlink to break this string into smaller parts, send them, and reassemble the string on the kernel side?
Netlink's Multipart feature "might" help you transmit an already fragmented string, but it won't help you with the actual string fragmentation operation. That's your job. Multipart is a means to transmit several small correlated objects through several packets, not one big object. In general, Netlink as a whole is designed with the assumption that any atomic piece of data you want to send will fit in a single packet. I would agree with the notion that 6Kbs worth of string is a bit of an oddball.
In actuality, Multipart is a rather ill-defined gimmick in my opinion. The problem is that the kernel doesn't actually handle it in any generic capacity; if you look at all the NLMSG_DONE usage instances, you will notice not only that it is very rarely read (most of them are writes), but also that it's not the Netlink code but rather some specific protocol doing it for some static (i.e. private) operation. In other words, the semantics of NLMSG_DONE are given by you, not by the kernel. Linux will not save you any work if you choose to use it.
On the other hand, libnl-genl-3 does appear to perform some automatic juggling with the Multipart flags (NLMSG_DONE and NLM_F_MULTI), but that only applies when you're sending something from Kernelspace to Userspace, and on top of that, even the library itself admits that it doesn't really work.
Also, NLMSG_DONE is supposed to be placed in the "type" Netlink header field, not in the "flags" field. This is baffling to me, because Generic Netlink stores the family identifier in type, so it doesn't look like there's a way to simultaneously tell Netlink that the message belongs to you, AND that it's supposed to end some data stream. Unless I'm missing something important, Multipart and Generic Netlink are incompatible with each other.
I would therefore recommend implementing your own message control if necessary and forget about Multipart.
What's the maximum string size supported by generic netlink and libnl's nla_put_string function?
It's not a constant. nlmsg_alloc() reserves getpagesize() bytes per packet by default. You can tweak this default with nlmsg_set_default_size(), or, more to the point, you can override it per message with nlmsg_alloc_size().
Then you'd have to query the actual allocated size (because it's not guaranteed to be what you requested) and build from there. To get the available payload, you'd have to subtract the Netlink header length, the Generic Netlink header length, and the attribute header lengths for any attributes you want to add; also the user header length, if you have one. You would also have to align all these components, because their sizeof is not necessarily the size they occupy on the wire.
All that said, the kernel will still reject packets which exceed the page size, so even if you specify a custom size you will still need to fragment your string.
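To make that arithmetic concrete, here is a sketch under standard Netlink alignment rules (header sizes per linux/netlink.h and linux/genetlink.h: nlmsghdr is 16 bytes, genlmsghdr is 4, an nlattr header is 4, everything padded to 4 bytes; the function name and the single-attribute assumption are illustrative):

```c
#include <assert.h>
#include <stddef.h>

#define NLMSG_ALIGNTO 4u
#define NL_ALIGN(len) (((len) + NLMSG_ALIGNTO - 1) & ~(NLMSG_ALIGNTO - 1))

/* Rough payload available for ONE string attribute in one packet. */
static size_t string_payload(size_t pkt_size, size_t user_hdr_len)
{
    size_t used = 0;
    used += NL_ALIGN(16);           /* struct nlmsghdr   */
    used += NL_ALIGN(4);            /* struct genlmsghdr */
    used += NL_ALIGN(user_hdr_len); /* optional family-specific user header */
    used += NL_ALIGN(4);            /* struct nlattr for the string */
    return pkt_size > used ? pkt_size - used : 0;
    /* nla_put_string() also appends a NUL terminator, so the string
       itself must be one byte shorter than the value returned here. */
}
```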
So really, just forget all of the above. Just fragment the string to something like getpagesize() / 2 or whatever, and send it in separate chunks.
This is the general idea:
#include <netlink/netlink.h>
#include <netlink/genl/genl.h>
#include <netlink/genl/ctrl.h>

/* FAMILY_NAME, DOC_EXMPL_C_ECHO and DOC_EXMPL_A_MSG come from the
   example kernel module. */

static void do_request(struct nl_sock *sk, int fam, char const *string)
{
    struct nl_msg *msg;
    msg = nlmsg_alloc();
    genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, fam,
                0, 0, DOC_EXMPL_C_ECHO, 1);
    nla_put_string(msg, DOC_EXMPL_A_MSG, string);
    nl_send_auto(sk, msg);
    nlmsg_free(msg);
}

int main(int argc, char **argv)
{
    struct nl_sock *sk;
    int fam;
    sk = nl_socket_alloc();
    genl_connect(sk);
    fam = genl_ctrl_resolve(sk, FAMILY_NAME);
    do_request(sk, fam, "I'm sending a string.");
    do_request(sk, fam, "Let's pretend I'm biiiiiig.");
    do_request(sk, fam, "Look at me, I'm so big.");
    do_request(sk, fam, "But I'm already fragmented, so it's ok.");
    nl_close(sk);
    nl_socket_free(sk);
    return 0;
}
I left a full sandbox in my Dropbox. See the README. (Tested in kernel 5.4.0-37-generic.)
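The fragmentation step itself (which, as noted above, is your job) is just a loop; a sketch independent of libnl, where do_request from the example would be called once per chunk (the helper name and chunk size are illustrative):

```c
#include <assert.h>
#include <string.h>

/* Copy the i-th `max`-byte chunk of `s` into `out` (NUL-terminated).
   Returns the chunk length, or 0 when `i` is past the end. */
static size_t nth_chunk(const char *s, size_t max, unsigned i, char *out)
{
    size_t len = strlen(s), off = (size_t)i * max;
    if (off >= len)
        return 0;
    size_t take = len - off < max ? len - off : max;
    memcpy(out, s + off, take);
    out[take] = '\0';
    return take;
}

/* Sending side, using getpagesize()/2 == 2048 as suggested:
 *
 *   char buf[2049];
 *   for (unsigned i = 0; nth_chunk(big_string, 2048, i, buf); i++)
 *       do_request(sk, fam, buf);
 */
```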

How can dynamically-sized types be expressed safely? [duplicate]

This question already has answers here:
What's the Rust idiom to define a field pointing to a C opaque pointer?
(1 answer)
Is it possible to have stack allocated arrays with the size determined at runtime in Rust?
(2 answers)
Closed 3 years ago.
I am writing a safe wrapper over an FFI library that uses dynamically-sized types, and I am trying to determine the best way to handle them in Rust.
In particular, I am wrapping the Windows API's SID structure. It has a BYTE (u8) indicating the current revision, a BYTE (u8) indicating the number of sub-authorities, six BYTEs ([u8; 6]) indicating the identifier authority, and then a series of DWORDs (u32s) that are the sub-authorities themselves. It has as many sub-authority entries as indicated by the sub-authority count.
In general, the approach is to only ever handle SIDs through WinAPI calls, so I should only ever have a pointer to it. Additionally, by keeping the internals private, I can ensure that consumers of my crate can only ever access it through a reference, Box, etc. that I control. However, I do need to declare some kind of Sid type, and I am trying to determine the best approach.
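For context, the underlying Windows type is a classic C variable-length struct; a portable sketch (the typedef names are stand-ins for the SDK ones, and a C99 flexible array member stands in for the SDK's SubAuthority[ANYSIZE_ARRAY]) showing why its size depends on SubAuthorityCount, mirroring what GetSidLengthRequired computes:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint8_t Value[6];
} SID_IDENTIFIER_AUTHORITY;

typedef struct {
    uint8_t  Revision;
    uint8_t  SubAuthorityCount;
    SID_IDENTIFIER_AUTHORITY IdentifierAuthority;
    uint32_t SubAuthority[];     /* flexible array member; count above */
} SID;

/* Size of a SID with n sub-authorities: 8 fixed bytes + 4 per entry. */
static size_t sid_length_required(uint8_t n)
{
    return offsetof(SID, SubAuthority) + n * sizeof(uint32_t);
}
```

This is exactly the shape the four Rust options below are trying to model.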
Option 1: The winapi crate just assumes that there will only be a single sub-authority:
pub struct SID {
    Revision: BYTE,
    SubAuthorityCount: BYTE,
    IdentifierAuthority: SID_IDENTIFIER_AUTHORITY, // [u8; 6]
    SubAuthority: [DWORD; 1],
}
This works when you just have references to it, but if you ever create an owned Sid and set the SubAuthorityCount to anything other than 1, WinAPI calls will read invalid SubAuthority entries.
Option 2: Optimistically allocate the maximum number of DWORDs:
pub struct SID {
    Revision: BYTE,
    SubAuthorityCount: BYTE,
    IdentifierAuthority: SID_IDENTIFIER_AUTHORITY, // [u8; 6]
    SubAuthority: [DWORD; 256],
}
This introduces memory overhead (the common case is to have only a couple of sub-authorities). It also means that a WinAPI-allocated SID cannot simply be cast to this SID type, because the additional SubAuthority entries that Rust expects to be there were never actually allocated.
Option 3: Dynamically-sized array:
pub struct SID {
    Revision: BYTE,
    SubAuthorityCount: BYTE,
    IdentifierAuthority: SID_IDENTIFIER_AUTHORITY, // [u8; 6]
    SubAuthority: [DWORD],
}
Now it is no longer Sized, which accurately reflects the shape of the underlying memory. Additionally, because unsized types cannot be instantiated directly (I think?), it can only be created through the FFI calls. However, now all pointers to it are automatically fat pointers, which makes it much more difficult to interoperate with the FFI calls, which all expect bare (thin?) pointers.
Option 4: It's never going to be allocated, so just... nothing. It will only ever be handled as a reference, and I will never look at the underlying fields, so there is no reason to expose anything at all:
pub struct SID {}
This feels super strange. I know that zero-sized types behave differently in Rust than in C, and I feel uncomfortable using them around FFI calls. However, it does reflect the way the SID type should be used: only by reference (or Box or another pointer), with the fields irrelevant. But I am definitely not certain about the behavior when, for example, casting references to a ZST to raw pointers and back.
Or maybe something else? Currently I am using option 1, keeping everything private, and just not accessing any of the fields directly (everything is through WinAPI calls), but I am curious about which approach people recommend and whether my analysis was correct.

What is the most efficient protobuf type (in C++) for storing ipv4 or ipv6 address? My address is a boost::asio::ip::address_v4 (or v6)

I read that protobuf has a type called "bytes" which can store an arbitrary number of bytes and is the equivalent of a C++ string. The reason I'd rather not use "bytes" is that it expects its input as a C++ string, i.e., the boost IP would need to be converted to a string. My concern is this: I want to serialize and send the encoded protobuf message over a TCP socket, and I want to ensure that the encoded message size is as small as possible.
Currently, I am using the below .proto file :
syntax = "proto2";
message profile
{
    repeated uint32 localEndpoint = 1;
    repeated uint32 remoteEndpoint = 2;
}
In order to save the boost IP in the protobuf message, I first convert it into a byte array using boost::asio::ip::address_v4::to_bytes(); for a v4 IP, the resulting array has 4 bytes. Then I pack the first 4 bytes of the array into one uint32_t and store it in the "localEndpoint" field of the protobuf message; for v6, I repeat this for each subsequent group of 4 bytes. I take 4 bytes at a time so as to use the full 32 bits of each uint32.
Hence, for a v4 address, 1 occurrence of the "localEndpoint" field is used.
Similarly, for a v6 address, 4 occurrences of the "localEndpoint" field are used.
Please allow me to highlight that if I had used "bytes" here, my input string itself would have been 15 bytes for a v4 IP like 111.111.111.111.
Using uint32 instead of "bytes" does save me some encoded-data size, but I am looking for a more efficient protobuf type requiring fewer bytes.
Sorry for the long description, but I wanted to explain my query in detail. Please help me. Thanks a lot in advance :)
An ipv4 address should require exactly 4 bytes. If you're somehow getting 8, you're doing something wrong - are you perhaps hex-encoding it? You don't need that here. Likewise, ipv6 should be 16 bytes.
4 bytes with a usually-set high byte is most effectively stored as fixed32 - varint would be overhead here, due to the high bits. 16 bytes is more subtle - I'd go with bytes (field header plus length), since it is simpler to form into a union, and if the field-number is large, it avoids having to pay for multiple multi-byte field headers (a length prefix of 16 will always be single-byte).
I'd then create a union of these via oneof:
oneof ip_addr {
    fixed32 v4 = 1;
    bytes v6 = 2;
}
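To see why this wins, the encoded sizes can be checked by hand against the protobuf wire format (a tag byte is field_number << 3 | wire_type; this sketch assumes field numbers 1 and 2 as in the oneof above):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* "fixed32 v4 = 1": one tag byte (1<<3 | 5) + 4 payload bytes = 5 total.
   fixed32 payload is the 4 bytes of v, little-endian. */
static size_t encode_v4(uint32_t v, uint8_t *out)
{
    out[0] = (1 << 3) | 5;           /* wire type 5 = 32-bit */
    for (int i = 0; i < 4; i++)
        out[1 + i] = (uint8_t)(v >> (8 * i));
    return 5;
}

/* "bytes v6 = 2": tag byte (2<<3 | 2) + 1 length byte + 16 payload
   bytes = 18 total; a length of 16 always fits in a single varint byte. */
static size_t encode_v6(const uint8_t addr[16], uint8_t *out)
{
    out[0] = (2 << 3) | 2;           /* wire type 2 = length-delimited */
    out[1] = 16;
    for (int i = 0; i < 16; i++)
        out[2 + i] = addr[i];
    return 18;
}
```

So an IPv4 address costs 5 bytes on the wire and an IPv6 address 18, whereas the varint-encoded uint32 scheme in the question can cost up to 5 bytes per uint32 when the high bits are set.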

How to get local port and IP in Network Kernel Extension (socket filter) in kernel context of macOS

I am looking to collect the port and IP of the socket connecting to the socket filter in a Network Kernel Extension (NKE).
I have tried to collect them in the sf_bind, sf_connect_in, and sf_connect_out callback functions of struct sflt_filter.
In the sf_bind(void *cookie, socket_t so, const struct sockaddr *to) callback, the documentation says of the to parameter: "The local address of the socket will be bound to". So I have used this code:
printf("Local port:<%u> and IP<%04X>",
ntohs(((struct sockaddr_in *)to)->sin_port),
ntohs(((struct sockaddr_in*)to)->sin_addr.s_addr));
Sometimes both the IP and port are empty, and sometimes only one of them appears.
And in the same and other callbacks I have tried this code:
unsigned char szAddrStr[256];
struct sockaddr_in addrLcl;
sock_getsockname(so, (struct sockaddr *)&addrLcl, sizeof(addrLcl));
inet_ntop(AF_INET, &addrLcl.sin_addr, (char *)szAddrStr, sizeof(szAddrStr));
printf("IP String <%s> Port Hex:<%X>", szAddrStr, ntohs(addrLcl.sin_port));
But this code always gives an empty IP and port.
Does anyone have any idea, or another way to get them? Thanks in advance.
About your first code fragment: it's hard to say what exactly is wrong without the full callback code, but you definitely should check the sa_family field of to before trying to print sin_port and sin_addr as the port and IP.
I think the values you are looking for are AF_INET and AF_INET6.
About your second code fragment: you should again check sa_family.
Also, calling sock_getsockname on so in the bind callback could be wrong, because those callbacks are called before the relevant operations, so the sockaddr for so will not have been filled in yet.
There is the tcplognke sample (it was once on Apple's developer resources, but they are not good at taking care of those); you should check it out for examples.
Also, there is information that Apple plans to deprecate the NKE technology soon, so be careful relying on it :)

Filling up struct sockaddr_in from the ip header __be32 saddr and udp header __be16 source

I'm implementing a protocol in the linux kernel on top of UDP and want to send a reply from within the kernel when I receive a packet. For this, I want to run ip4_datagram_connect to get a route to the destination (which is the source address in the received packet) and then send a reply.
To call ip4_datagram_connect, I need to fill in a sockaddr_in structure to pass as the address to the function. Comparing the relevant fields:
struct sockaddr_in has
    unsigned short sin_port;
    struct in_addr sin_addr;
struct udphdr has
    __be16 source;
and struct iphdr has
    __be32 saddr;
So my question is, do I need any helper function to copy the address and the port from the packet header into the sockaddr_in structure (like how we use htons etc. in socket programming)?
The values in the raw IP and UDP headers are already in network byte order. The be in e.g. __be16 stands for "big-endian", which is network byte order. The number in __be16 and __be32 is the number of bits.
The fields in the sockaddr_in structure are also supposed to be in network byte order. The name htons stands for "Host To Network Short", i.e. it converts a short (which usually is 16 bits) from host byte order to network byte order.
So, to answer your question: no, you do not need any helper; plain assignment to sin_port and sin_addr.s_addr should be enough.
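A sketch of that assignment (the struct excerpts are reduced stand-ins for the kernel's iphdr, udphdr and sockaddr_in, with sin_addr.s_addr flattened into one field; in the kernel you would also set sin_family to AF_INET, as shown):

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t be16;   /* stand-in for __be16: value already big-endian */
typedef uint32_t be32;   /* stand-in for __be32 */

struct udphdr_x { be16 source; };   /* reduced struct udphdr */
struct iphdr_x  { be32 saddr;  };   /* reduced struct iphdr  */
struct sockaddr_in_x {              /* reduced struct sockaddr_in */
    uint16_t sin_family;
    be16     sin_port;              /* expected in network byte order */
    be32     s_addr;                /* stands in for sin_addr.s_addr */
};

/* Plain assignment: both sides are already in network byte order. */
static void fill_reply_addr(struct sockaddr_in_x *sin,
                            const struct iphdr_x *ip,
                            const struct udphdr_x *udp)
{
    sin->sin_family = 2;            /* AF_INET */
    sin->sin_port   = udp->source;  /* no htons() needed */
    sin->s_addr     = ip->saddr;    /* no htonl() needed */
}
```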
