OMNeT++ inet::b to int

I'm trying to make an application that gets the size of an Ethernet chunk and stores it in a vector of ints. To get the chunk length I'm using the function provided by inet: chunk->getChunkLength(). Is there a way to convert the type inet::b to int?

To obtain the size of a chunk in bits, use this code:
int bitSize = b(chunk->getChunkLength()).get();
If you want to obtain the size in bytes, use this:
int byteSize = B(chunk->getChunkLength()).get();
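For context, a minimal sketch of what the question asks for, collecting chunk sizes into a std::vector<int>. The recordChunkSize helper is a name invented here for illustration, and the exact INET header paths depend on your INET version:

#include <vector>
// ...plus the INET headers that define Chunk, Ptr, b and B
// (e.g. inet/common/packet/chunk/Chunk.h; path assumed, check your INET version).

std::vector<int> chunkSizesBytes;   // collected chunk sizes, in bytes

void recordChunkSize(const inet::Ptr<const inet::Chunk>& chunk)
{
    // getChunkLength() returns an inet::b value; .get() yields the raw count.
    int bitSize  = inet::b(chunk->getChunkLength()).get();
    int byteSize = inet::B(chunk->getChunkLength()).get();
    chunkSizesBytes.push_back(byteSize);
    (void)bitSize;                  // kept only to show both conversions
}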

Related

How Memory is allocated for member in the example

I was looking at the Microsoft site about single inheritance. In the example given (the code is copied at the end), I am not sure how memory is allocated for Name. Memory is allocated for 10 objects, but Name is a pointer member of the class. I guess I can assign a constant string, something like
DocLib[i]->Name = "Hello";
But we cannot change this string. In such a situation, do I need to allocate memory for Name as well, using the new operator in the same for loop, something like
DocLib[i]->Name = new char[50];
The code from the Microsoft site is here:
// deriv_SingleInheritance4.cpp
// compile with: /W3
struct Document {
    char *Name;
    void PrintNameOf() {}
};

class PaperbackBook : public Document {};

int main() {
    Document *DocLib[10];   // Library of ten documents.
    for (int i = 0; i < 10; i++)
        DocLib[i] = new Document;
}
In short, yes. Name is just a pointer to a char (or char array); instantiating the structure does not allocate space for that char (or array). You have to allocate the space yourself and make the pointer (Name) point to it. In the following case
DocLib[i]->Name = "Hello";
the memory for "Hello" is allocated in the read-only data section of the executable (at load time), and your pointer just points to that location. That's why it is not modifiable.
Alternatively, you could use std::string objects instead of char pointers.
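Here is a minimal sketch of both options discussed above, reusing the Document struct from the question; the 50-byte buffer size is only an illustrative choice:

#include <cstring>

struct Document {
    char *Name;
    void PrintNameOf() {}
};

int main() {
    Document *DocLib[10];                      // Library of ten documents.
    for (int i = 0; i < 10; i++) {
        DocLib[i] = new Document;
        DocLib[i]->Name = new char[50];        // writable buffer that we own
        std::strcpy(DocLib[i]->Name, "Hello");
        DocLib[i]->Name[0] = 'J';              // fine: this buffer is modifiable
        // DocLib[i]->Name = "Hello";          // would point at a read-only literal instead
    }
    for (int i = 0; i < 10; i++) {
        delete[] DocLib[i]->Name;              // release the buffer first...
        delete DocLib[i];                      // ...then the object
    }
}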

What is the point of the offset variable in dispatch_data_apply for libdispatch?

I'm having trouble understanding the offset variable provided to the data applier for a dispatch_io_read function call. I see that the documentation claims the offset is the logical offset from the base of the data object. Looking at the source code for the dispatch_data_apply function confirms that this variable always starts from 0 for the first apply for a data chunk, and then is simply the sum of the range lengths.
I guess I don't understand the purpose of this variable then. I had originally assumed this was the offset for the entire read, but it's not. It seems you have to keep track of the bytes read yourself and offset by that amount to do a read properly in libdispatch.
// Outside the dispatch_io_read handler...
char * currBufferPosition = destinationBuffer;

// Inside the dispatch_io_read handler...
dispatch_io_read(channel, fileOffset, bytesRequested, queue, ^(bool done, dispatch_data_t data, int error) {
    // Note: Real code would handle error variable.
    dispatch_data_apply(data, ^bool(dispatch_data_t region, size_t offset, const void * buffer, size_t size) {
        memcpy(currBufferPosition, buffer, size);
        currBufferPosition += size;
        return true;
    });
});
My question is: Is this the right way of using the data returned by dispatch_data_apply? And if so, what is the purpose of the offset variable passed into the applier handler? The documentation does not seem clear about this to me.
A dispatch_data_t is a sequence of bytes. The bytes can be stored in multiple non-contiguous byte arrays. For example, bytes 0-6 can be stored in one array, and bytes 7-12 can be stored in a separate array somewhere else in memory.
For efficiency, the dispatch_data_apply function lets you iterate over those arrays in-place (without copying out the data). On each call to your “applier”, you receive a pointer to one of the underlying storage arrays in the buffer argument. The size argument tells you how many bytes are in this particular array, and the offset argument tells you how (logically) far the first byte of this particular array is from the first byte of the entire dispatch_data_t.
Example:
#import <Foundation/Foundation.h>

int main(int argc, const char * argv[]) {
    @autoreleasepool {
        dispatch_data_t aData = dispatch_data_create("Hello, ", 7, nil, DISPATCH_DATA_DESTRUCTOR_DEFAULT);
        dispatch_data_t bData = dispatch_data_create("world!", 6, nil, DISPATCH_DATA_DESTRUCTOR_DEFAULT);
        dispatch_data_t cData = dispatch_data_create_concat(aData, bData);
        dispatch_data_apply(cData, ^bool(dispatch_data_t _Nonnull region, size_t offset, const void * _Nonnull buffer, size_t size) {
            printf("applying at offset %lu, buffer %p, size %lu, contents: [%*.*s]\n",
                   (unsigned long)offset, buffer, (unsigned long)size, (int)size, (int)size, (const char *)buffer);
            return true;
        });
    }
    return 0;
}
Output:
applying at offset 0, buffer 0x100407970, size 7, contents: [Hello, ]
applying at offset 7, buffer 0x1004087b0, size 6, contents: [world!]
Okay, so that's what the offset argument is for. Now how does this relate to dispatch_io_read?
Well, dispatch_io_read doesn't pass you the same bytes twice. Once it has passed you some bytes, it discards them. The next time it passes you bytes, they are fresh, newly-read bytes. If you want the old bytes, you have to keep them around yourself. If you want to know how many old bytes you were given before the current call to your callback, you have to keep that count yourself. That is not what the offset argument is for.
It's possible that when dispatch_io_read calls you, it passes you a dispatch_data_t that has stored its bytes in multiple non-contiguous arrays, so when you call dispatch_data_apply on it, your applier gets called multiple times, with different offsets and buffers and sizes. But those calls only get you access to the fresh new bytes for the current call to your callback, not to old bytes from any prior calls to your callback.
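To make the distinction concrete, here is a minimal sketch (not from the question) in the same style as the code above. It assumes Clang's blocks extension, that destinationBuffer is a sufficiently large char array, and that channel and queue already exist; readAll and totalBytesReceived are names invented here. The offset argument positions each region within the current dispatch_data_t only, so the running total across callbacks has to be tracked separately:

#include <dispatch/dispatch.h>
#include <stdint.h>
#include <string.h>

static void readAll(dispatch_io_t channel, dispatch_queue_t queue, char *destinationBuffer)
{
    __block size_t totalBytesReceived = 0;
    dispatch_io_read(channel, 0, SIZE_MAX, queue,
                     ^(bool done, dispatch_data_t data, int error) {
        if (error != 0 || data == NULL)
            return;
        dispatch_data_apply(data, ^bool(dispatch_data_t region, size_t offset,
                                        const void *buffer, size_t size) {
            // offset is relative to `data`, not to the whole read, so add the
            // running total delivered by previous callbacks as well.
            memcpy(destinationBuffer + totalBytesReceived + offset, buffer, size);
            return true;
        });
        // Remember how much this callback delivered before the next one arrives.
        totalBytesReceived += dispatch_data_get_size(data);
    });
}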

How much storage does this Swift struct actually use?

Suppose I have the following struct –
struct MyStruct {
    var value1: UInt16
    var value2: UInt16
}
And I use this struct somewhere in my code like so -
var s = MyStruct(value1: UInt16(0), value2: UInt16(0))
I know that the struct will require 32 bits of storage for the two 16-bit integers.
What I am not certain about is whether Swift is allocating two additional 64-bit pointers (one for each value) in addition to one 64-bit pointer for the variable s.
Does this mean the total storage requirement for the above code would be the following?
MyStruct.value1 - 16-bits
MyStruct.value1 ptr - 64-bits
MyStruct.value2 - 16-bits
MyStruct.value2 ptr - 64-bits
s ptr - 64-bits
–––––––––––––––––––––––––––––
Total - 224-bits
Can someone please clarify?
MyStruct is 4 bytes because sizeof(UInt16) is 2 bytes. To test this for any given type, use sizeof; it returns the size in bytes.
let size = sizeof(MyStruct) //4
If you want to get the size of a given instance you can use sizeOfValue.
var s = MyStruct(value1: UInt16(0), value2: UInt16(0))
let sSize = sizeofValue(s) //4
I believe the size of the pointer will depend on the architecture/compiler: 64 bits on most computers and many newer phones, but older ones might be 32-bit.
I don't think there is a way to actually get a pointer to MyStruct.value1; correct me if I'm wrong (I tried &s.value1).
Pointers
Structs in Swift are created and passed around on the stack; that's why they have value semantics instead of reference semantics.
When a struct is created in a function, it is stored on the stack, so its memory is freed at the end of the function. Its reference is just an offset from the stack pointer or frame pointer.
It'll be four bytes on the stack.
Just try it in an Xcode playground: the answer is 4 bytes.

Convert IDL structure with conformant array to header

I need to pass a structure with a conformant array via Microsoft RPC. This is how I write it in IDL:
struct BarStruct
{
    byte a;
    int b;
    byte c;
    long lArraySize;
    [size_is(lArraySize)] char achArray[*];
};
Generated header:
struct BarStruct
{
    byte a;
    int b;
    byte c;
    long lArraySize;
    char achArray[ 1 ];
};
Why is achArray a fixed length of 1? How do I pass an array with, for example, 10 elements in it?
Something like this:
BarStruct* p = (BarStruct*)CoTaskMemAlloc(
    offsetof(BarStruct, achArray) + 10*sizeof(char));
Basically, you need to allocate memory as if the structure had an achArray[10] member at the end. offsetof(BarStruct, achArray) gives you the size of the fixed part of the structure, up to but not including achArray. To this, you add the size of the variable-length array.
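A slightly fuller sketch of building and releasing such a block; MakeBar is a helper name invented here, the 10-character payload is only an example, and the MIDL-generated header shown above is assumed to be included (its file name depends on your IDL file):

#include <cstddef>    // offsetof
#include <cstring>
#include <objbase.h>  // CoTaskMemAlloc / CoTaskMemFree
// #include "Bar_h.h"  // the generated header containing BarStruct (file name assumed)

BarStruct* MakeBar(const char* payload, long count)
{
    // Allocate the fixed part plus `count` trailing chars, exactly as if the
    // struct ended in achArray[count].
    BarStruct* p = (BarStruct*)CoTaskMemAlloc(
        offsetof(BarStruct, achArray) + count * sizeof(char));
    if (!p)
        return nullptr;

    p->a = 0;
    p->b = 0;
    p->c = 0;
    p->lArraySize = count;                // must match the size_is() field
    memcpy(p->achArray, payload, count);  // fill the conformant array
    return p;
}

// Usage:
//   BarStruct* bar = MakeBar("0123456789", 10);
//   ... pass bar to your RPC call as the BarStruct* parameter ...
//   CoTaskMemFree(bar);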

WSASend : Send int or struct

I would like to use the MS WSASend function to send data.
I didn't find any examples that send a type of data other than const char *.
I tried to send an int, among other things, but I failed.
Both WSASend() and send() only take a char* parameter for the data.
How should I proceed?
Thanks
It's just a pointer to a buffer; the buffer may contain anything you want.
The char pointer is actually the address of a byte array, and the function requires a length parameter too.
An integer is a 2- or 4-byte (short/long) value, so if you want to send an integer variable (for example), you have to pass its address and its length.
WSASend and send are simple functions that send a memory block.
I assume you are talking about C. You have to understand that C's char variables are bytes (8-bit blocks); a char variable can hold any value between 0 and 255.
A pointer to a char variable is the address of a byte (which may be the first cell of a byte array).
I think that's what confuses you.
I hope you understand.
The const char* parameter indicates that the function takes a pointer to bytes. Which really seems to be the result of the original socket API designers being pedantic: C has a generic type to handle any kind of pointer without explicit casts, void*.
You could make a convenience wrapper for send like this, which would allow you to send any (contiguous) thing you can make a pointer to:
int MySend(SOCKET s, const void* buf, int len, int flags)
{
    return send(s, (const char*)buf, len, flags);
}
Using void* in place of char* actually makes the API safer, as it can now detect when you do something stupid:
int x = 0x1234;
send(s, (const char*)x, sizeof(x), 0);  // looks right, but is wrong.
MySend(s, x, sizeof(x), 0);             // this version correctly fails to compile
MySend(s, &x, sizeof(x), 0);            // correct - pass a pointer to the buffer to send.
WSASend is a bit trickier to make a convenience wrapper for, as you have to pass it an array of structs that contain the char*'s. But again, it's a case of defining an equivalent struct with const void*'s in place of the const char*'s and then casting the data structures to the WSA types inside the convenience wrapper. Get it right once, and the rest of the program becomes much easier to verify as correct, since you don't need casts everywhere hiding potential bugs.
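A minimal sketch of a single-buffer wrapper in that spirit (it wraps one void* buffer into a WSABUF rather than defining a whole parallel struct array); MyWSASendValue is a name invented here, and real code would also check for SOCKET_ERROR and handle overlapped or partial sends:

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

// Send one contiguous block of `len` bytes through WSASend.
int MyWSASendValue(SOCKET s, const void* buf, DWORD len)
{
    WSABUF wsaBuf;
    wsaBuf.buf = (CHAR*)buf;   // WSABUF insists on CHAR*, so the one cast lives here
    wsaBuf.len = len;

    DWORD bytesSent = 0;
    // One buffer, no flags, no OVERLAPPED or completion routine.
    return WSASend(s, &wsaBuf, 1, &bytesSent, 0, NULL, NULL);
}

// Usage: send an int by passing the address of the variable and its size.
//   int x = 0x1234;
//   MyWSASendValue(s, &x, sizeof(x));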
