I am trying to perform a test run of an XDP BPF program. The BPF program uses the bpf_xdp_adjust_meta() helper to adjust the metadata.
I tried:
1. bpf_prog_test_run()
The first time I tried, my BPF program's debug messages told me that adjusting the data_meta field failed. Now it can adjust the data_meta, but the iph.ihl field is apparently not set to 5.
2. bpf_prog_test_run_xattr()
This always returns -1, so something failed.
The Code
packet:
struct ipv4_packet pkt_v4 = {
    .eth.h_proto = __bpf_constant_htons(ETH_P_IP),
    .iph.ihl = 5,
    .iph.daddr = __bpf_constant_htonl(33554442),
    .iph.saddr = __bpf_constant_htonl(50331658),
    .iph.protocol = IPPROTO_TCP,
    .iph.tot_len = __bpf_constant_htons(MAGIC_BYTES),
    .tcp.urg_ptr = 123,
    .tcp.doff = 5,
};
test attribute:
__u32 size, retval, duration;
char data_out[128];
struct xdp_md ctx_in, ctx_out;
struct bpf_prog_test_run_attr test_attr = {
    .prog_fd = prog_fd,
    .repeat = 100,
    .data_in = &pkt_v4,
    .data_size_in = sizeof(pkt_v4),
    .data_out = &data_out,
    .data_size_out = sizeof(data_out),
    .ctx_in = &ctx_in,
    .ctx_size_in = sizeof(ctx_in),
    .ctx_out = &ctx_out,
    .ctx_size_out = sizeof(ctx_out),
    .retval = &retval,
    .duration = &duration,
};
test execution:
bpf_prog_test_run(main_prog_fd, 1, &pkt_v4, sizeof(pkt_v4), &data_out, &size, &retval, &duration) -> iph.ihl field is 0.
bpf_prog_test_run_xattr(&test_attr) -> returns -1.
Note
The program was already successfully attached to the hook point of a real network interface and ran as intended. For testing, I simply replaced the code that attaches the program to the hook point with the code above.
The struct ipv4_packet pkt_v4 was not packed. When I replaced __packed with __attribute__ ((__packed__)), it worked.
For information on what happens without packing, see for example this question.
Basically, the compiler adds padding bytes, which leads to the fields in the packet being in different places than expected, as the sketch below illustrates.
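For illustration, a minimal sketch of the fixed definition (assuming a layout like the kernel selftests' ipv4_packet; the includes and comments are my addition):

#include <linux/if_ether.h>  // struct ethhdr
#include <linux/ip.h>        // struct iphdr
#include <linux/tcp.h>       // struct tcphdr

// Without the packed attribute the compiler may insert padding between
// the headers, shifting iph and tcp to offsets the XDP program does not
// expect, which is why iph.ihl reads back as 0.
struct ipv4_packet {
    struct ethhdr eth;
    struct iphdr iph;
    struct tcphdr tcp;
} __attribute__((__packed__));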
Related
std::vector<VkWriteDescriptorSet> writeDescriptorSets;
for (int index = 0; index < descriptorBindings.size(); index++)
{
    VkWriteDescriptorSet writeDescriptorSet = {};
    // Binding 0 : Uniform buffer
    writeDescriptorSet.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    writeDescriptorSet.dstSet = descriptorSet;
    // Binds this uniform buffer to binding point 0
    writeDescriptorSet.dstBinding = index;
    writeDescriptorSet.descriptorCount = descriptorBindings[index].Count;
    writeDescriptorSet.pNext = nullptr;
    writeDescriptorSet.pTexelBufferView = nullptr;
    if (descriptorBindings[index].Type == DescriptorType::UniformBuffer)
    {
        VkDescriptorBufferInfo uniformBufferDescriptor = {};
        uniformBufferDescriptor.buffer = descriptorBindings[index].UniformBuffer->buffer;
        uniformBufferDescriptor.offset = 0;
        uniformBufferDescriptor.range = descriptorBindings[index].UniformBuffer->size;
        writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
        writeDescriptorSet.pBufferInfo = &uniformBufferDescriptor;
    }
    else if (descriptorBindings[index].Type == DescriptorType::TextureSampler)
    {
        VkDescriptorImageInfo textureDescriptor = {};
        textureDescriptor.imageView = descriptorBindings[index].Texture->imageView->imageView; // The image's view (images are never directly accessed by the shader, but rather through views defining subresources)
        textureDescriptor.sampler = descriptorBindings[index].Texture->sampler; // The sampler (telling the pipeline how to sample the texture, including repeat, border, etc.)
        textureDescriptor.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL; // The current layout of the image (note: should always fit the actual use, e.g. shader read)
        //printf("%d\n", textureDescriptor.imageLayout);
        writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        writeDescriptorSet.pImageInfo = &textureDescriptor;
    }
    writeDescriptorSets.push_back(writeDescriptorSet);
}
vkUpdateDescriptorSets(logicalDevice, writeDescriptorSets.size(), writeDescriptorSets.data(), 0, nullptr);
I am really scratching my head over this. If I enable optimizations in Visual Studio, then the textureDescriptor.imageLayout line, and probably the rest of textureDescriptor, gets optimized out, and this causes errors in Vulkan. If I uncomment the printf below it, there is no problem. I suspect that the compiler then detects that imageLayout is being used and doesn't get rid of it.
Do I even need optimizations? If so, how can I prevent the compiler from removing that code?
textureDescriptor is not being "optimized out". It's a stack variable whose lifetime ends before you ever hand it to Vulkan: pImageInfo ends up pointing at a local that is destroyed at the end of the else-if block.
You're going to have to create those objects in a way that outlives the block in which they were created. They need to stay alive until the call to vkUpdateDescriptorSets; see the sketch below.
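One way to do that (a sketch reusing the variable names from your snippet, not your exact code): keep the VkDescriptorBufferInfo/VkDescriptorImageInfo objects in vectors that outlive the loop, and reserve() up front so push_back never reallocates and invalidates earlier pointers.

std::vector<VkDescriptorBufferInfo> bufferInfos;
std::vector<VkDescriptorImageInfo> imageInfos;
bufferInfos.reserve(descriptorBindings.size());  // no reallocation => stable pointers
imageInfos.reserve(descriptorBindings.size());

std::vector<VkWriteDescriptorSet> writeDescriptorSets;
for (size_t index = 0; index < descriptorBindings.size(); index++)
{
    VkWriteDescriptorSet writeDescriptorSet = {};
    writeDescriptorSet.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    writeDescriptorSet.dstSet = descriptorSet;
    writeDescriptorSet.dstBinding = static_cast<uint32_t>(index);
    writeDescriptorSet.descriptorCount = descriptorBindings[index].Count;

    if (descriptorBindings[index].Type == DescriptorType::UniformBuffer)
    {
        VkDescriptorBufferInfo info = {};
        info.buffer = descriptorBindings[index].UniformBuffer->buffer;
        info.offset = 0;
        info.range = descriptorBindings[index].UniformBuffer->size;
        bufferInfos.push_back(info);  // lives until after the update call
        writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
        writeDescriptorSet.pBufferInfo = &bufferInfos.back();
    }
    else if (descriptorBindings[index].Type == DescriptorType::TextureSampler)
    {
        VkDescriptorImageInfo info = {};
        info.imageView = descriptorBindings[index].Texture->imageView->imageView;
        info.sampler = descriptorBindings[index].Texture->sampler;
        info.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
        imageInfos.push_back(info);   // lives until after the update call
        writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        writeDescriptorSet.pImageInfo = &imageInfos.back();
    }
    writeDescriptorSets.push_back(writeDescriptorSet);
}
vkUpdateDescriptorSets(logicalDevice, static_cast<uint32_t>(writeDescriptorSets.size()),
                       writeDescriptorSets.data(), 0, nullptr);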
Hi, I am using Protobuf for a personal project about neural networks.
Here are my Protobuf definitions:
syntax = "proto3";

package NGNET;

message InputLayer {
    string name = 1;
    uint32 size = 2;
}

message ComputeLayer {
    string name = 1;
    uint32 size = 2;
    repeated LayerLink inputs = 3;
}

message LayerLink {
    InputLayer il_input = 1;
    ComputeLayer cl_input = 2;
    uint32 output_size = 3;
    repeated float weights = 4;
}

message NNET {
    string name = 1;
    repeated ComputeLayer outputs = 3;
}
The network is created like this:
ComputeLayer output1 = ComputeLayer(10, "output1");
ComputeLayer output2 = ComputeLayer(10, "output2");
ComputeLayer hidden = ComputeLayer(100, "hidden");
InputLayer input1 = InputLayer(784, "input1");
InputLayer input2 = InputLayer(784, "input2");
output1.link(&hidden);
output2.link(&hidden);
hidden.link(&input1);
hidden.link(&input2);
hidden.link(&extra);
The link functions are defined as:
void ComputeLayer::link(ComputeLayer* to_link) {
    NGNET::LayerLink* link = new NGNET::LayerLink();
    link->set_output_size(internal->size());
    link->set_allocated_cl_input(to_link->getInternal());
    internal->mutable_inputs()->AddAllocated(link);
}

void ComputeLayer::link(InputLayer* to_link) {
    NGNET::LayerLink* link = new NGNET::LayerLink();
    link->set_output_size(internal->size());
    link->set_allocated_il_input(to_link->getInternal());
    internal->mutable_inputs()->AddAllocated(link);
}
Note: the getInternal() function returns a pointer to the wrapped NGNET::ComputeLayer or NGNET::InputLayer message.
Then the outputs are linked to an NNET with:
nnet->mutable_outputs()->AddAllocated(output1->getInternal());
nnet->mutable_outputs()->AddAllocated(output2->getInternal());
When nnet is deleted, the program crashes with a segmentation fault.
I believe this is because the hidden layer gets deleted twice. Is there any way I can safely free the memory that was allocated?
Thanks.
The add_allocated_*() and set_allocated_*() methods take ownership of the pointer they are given. This means that you have to make sure that no other code will delete those pointers later, because the Protobuf implementation will delete them when the message is destroyed.
If you don't want Protobuf to take ownership of these objects, you should make copies instead:
link->mutable_il_input()->CopyFrom(*to_link->getInternal());
nnet->mutable_outputs()->Add()->CopyFrom(*output2->getInternal());
Generally, unless you are doing intense memory allocation optimizations, you probably never want to call the "allocated" protobuf accessors.
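For illustration, here is a sketch of link() rewritten to copy instead of transferring ownership (hypothetical code reusing the names from the question):

void ComputeLayer::link(ComputeLayer* to_link) {
    // Add() returns a LayerLink owned by 'internal', so nothing has to
    // be freed by hand.
    NGNET::LayerLink* link = internal->mutable_inputs()->Add();
    link->set_output_size(internal->size());
    // Copy the linked layer's message instead of taking ownership of it,
    // so 'hidden' can safely be linked from several layers.
    link->mutable_cl_input()->CopyFrom(*to_link->getInternal());
}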
I am debugging a kernel crash dump. There seems to be a problem with one process that was trying to memory-map a new region. The problem is that it was not able to acquire the memory map semaphore.
When I looked into the process's mm_struct and printed its contents, I saw that the struct rw_semaphore mmap_sem was as shown below. Now, does the value of count seem suspicious? It has a negative value, as if there were a race condition where it was decremented twice by two different threads after checking for zero.
mmap_sem = {
    count = -4294967295,
    wait_lock = {
        {
            rlock = {
                raw_lock = {
                    slock = 262148
                }
            }
        }
    },
    wait_list = {
        next = 0xffff8801f0113e48,
        prev = 0xffff8801f0113e48
    }
},
Sorry for the confusion. I thought crash pulled the correct data types and used them properly when printing out all the values...
It looks like the crash utility does not read the count member as an int.
When I print it as an int, I get the correct value:
crash> p (int) (((struct mm_struct *) 0xffff8801f15fa540)->mmap_sem).count
$13 = 1
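This makes sense: printed as a 64-bit value, -4294967295 is 0xffffffff00000001, whose low 32 bits are 1. A standalone sketch of that truncation (my own illustration, independent of the crash utility):

#include <cstdint>
#include <cstdio>

int main() {
    int64_t raw = -4294967295LL;  // the value crash printed for count
    printf("0x%016llx\n", (unsigned long long)raw);  // 0xffffffff00000001
    printf("%d\n", (int)raw);  // 1: the low 32 bits, i.e. the actual count
}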
This is in my init.c:
server_data_t **data = (server_data_t**)get_env(argv, US_SERVER_DATA);
data[0] = (server_data_t*)calloc(1, sizeof(server_data_t));
kv_t *channels = &data[0]->channels;
kv_t *users = &data[0]->users;
kv_init(channels, "channels.dat", 10*1024, 0, 0, 0);
kv_init(users, "users.dat", 10*1024, 0, 0, 0);
These initializations were only for testing purposes:
channel_t *channel = (channel_t*)calloc(1, sizeof(channel_t));
channel->name = strdup("Test channel");
channel->id = 1;
kv_item channel_item;
channel_item.key = (char*)&channel->id;
channel_item.klen = sizeof(u32);
channel_item.val = (char*)channel;
channel_item.in_use = 0;
kv_add(channels, &channel_item);
channel_t *channel2 = (channel_t*)calloc(1, sizeof(channel_t));
channel2->name = strdup("Test channel2");
channel2->id = 2;
kv_item channel_item2;
channel_item2.key = (char*)&channel2->id;
channel_item2.klen = sizeof(u32);
channel_item2.val = (char*)channel2;
channel_item2.in_use = 0;
kv_add(channels, &channel_item2);
kv_do(channels, NULL, sizeof(u32), test_proc, 0);
The user-defined callback passed to kv_do:

static int test_proc(const kv_item *item, const void *ctx)
{
    return 1;
}
Starting the server segfaults after the kv_do test process enters an infinite loop while trying to visit all the items. With one item in the list it works fine: it just visits the first item and quits. I can also visit the items one by one by ID using kv_get.
I found an edge case where it worked with two items: my first key was "Test Channel" with a key length of strlen("Test Channel"), and the next channel's key was "Test Channel2" with the same key length as the first item's. Pretty confusing.
Is the mistake in the code (pointers and such), or in how the process is supposed to work with the return value of 1?
I know that G-WAN sometimes has trouble with VMs, so it could be that: I'm running Oracle VM VirtualBox v5.0.24 with Ubuntu.
Are the kv.c and persistence.c G-WAN examples working for you?
If any of them crashes, then you are probably using out-of-sync G-WAN headers (gwan.h) with a more recent ./gwan executable.
I have the following code, which I use to open a File Open dialog using the Win32 API. It works fine in 32-bit, but fails when I use it in 64-bit (in a DLL). What am I doing wrong?
char Filestring[256];
Filter = "OBJ files\0*.obj\0\0";
char* returnstring = NULL;
OPENFILENAME opf;
opf.hwndOwner = mainHWND;
opf.lpstrFilter = Filter;
opf.lpstrCustomFilter = 0;
opf.nMaxCustFilter = 0L;
opf.nFilterIndex = 1L;
opf.lpstrFile = Filestring;
opf.lpstrFile[0] = '\0';
opf.nMaxFile = 256;
opf.lpstrFileTitle = 0;
opf.nMaxFileTitle=50;
opf.lpstrInitialDir = Path;
opf.lpstrTitle = "Open Obj File";
opf.nFileOffset = 0;
opf.nFileExtension = 0;
opf.lpstrDefExt = "*.*";
opf.lpfnHook = NULL;
opf.lCustData = 0;
opf.Flags = (OFN_PATHMUSTEXIST | OFN_OVERWRITEPROMPT) & ~OFN_ALLOWMULTISELECT;
opf.lStructSize = sizeof(OPENFILENAME);
if (GetOpenFileName(&opf))
{
    returnstring = opf.lpstrFile;
    if (returnstring) {
        result = returnstring;
    }
}
EDIT: By failing, I meant that the Open File Dialog doesn't show up. The code still returns zero without any errors.
EDIT 2: I have called CommDlgExtendedError() and it returned 1. From the MSDN reference, does that mean the dialog has an invalid lStructSize? I have checked sizeof(OPENFILENAME) and it returned 140 bytes.
UPDATE: In my project settings, under Code Generation, "Struct Member Alignment" was set to 4 bytes (/Zp4). I changed this to default and it magically worked; see the check below and the answers and their comments for more information.
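For reference, a standalone check (my own sketch, not part of the original post) showing how the packing option changes the structure size; error code 1 from CommDlgExtendedError is CDERR_STRUCTSIZE:

#include <windows.h>
#include <commdlg.h>
#include <cstdio>

int main() {
    // Built with the default x64 packing this should print 152, the size
    // comdlg32 expects; built with /Zp4 it prints 140, so GetOpenFileName
    // fails with CDERR_STRUCTSIZE.
    printf("sizeof(OPENFILENAME) = %zu\n", sizeof(OPENFILENAME));
    return 0;
}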
You aren't initialising lpTemplateName, so it contains random stack noise. This in turn will lead to hInstance being referenced, which also contains stack noise.
When calling a function like this, you should first zero out the struct and then fill in only the fields that are non-zero. Something like this:
OPENFILENAME opf={0};
opf.lStructSize = sizeof(OPENFILENAME);
opf.hwndOwner = mainHWND;
opf.lpstrFilter = Filter;
opf.nFilterIndex = 1L;
opf.lpstrFile = Filestring;
opf.lpstrFile[0] = '\0';
opf.nMaxFile = 256;
opf.lpstrInitialDir = Path;
opf.lpstrTitle = "Open Obj File";
opf.lpstrDefExt = "*.*";
opf.Flags = OFN_PATHMUSTEXIST | OFN_OVERWRITEPROMPT;
There was no need to exclude OFN_ALLOWMULTISELECT explicitly, since you were not including it in the first place!
EDIT
You state in a comment that this doesn't work. Calling CommDlgExtendedError is a good idea and should tell you why it fails.
You could also try to run the minimal possible GetOpenFileName which is this:
char Filestring[MAX_PATH] = "\0";
OPENFILENAME opf={0};
opf.lStructSize = sizeof(OPENFILENAME);
opf.lpstrFile = Filestring;
opf.nMaxFile = MAX_PATH;
GetOpenFileName(&opf);
I have the very same problem and a partial solution:
+ The simple example proposed above was not working in x64 mode.
+ I changed the compile option "Struct Member Alignment" from 1 byte (/Zp1) to default, which solved this problem (by introducing others!!!).
char Filestring[MAX_PATH] = "\0";
OPENFILENAME opf={0};
opf.lStructSize = sizeof(OPENFILENAME);
opf.lpstrFile = Filestring;
opf.nMaxFile = MAX_PATH;
GetOpenFileName(&opf);
To find out more, you should call CommDlgExtendedError to get the error code for what went wrong. Besides this, I would initialize all members of the struct to 0 with
ZeroMemory(&opf, sizeof(opf));
Since the file open dialog is in reality a COM component, it could be worth checking whether your thread's apartment state is different under 64-bit.

if (RPC_E_CHANGED_MODE == CoInitialize(NULL))
    ASSERT(FALSE); // MTA apartment found
CoUninitialize();
Yours,
Alois Kraus
As a note: in Microsoft Office 2010 64-bit we gave up and used the internal wrappers, as the structure turned into 140 bytes and we were not sure how to change the alignment.
Application.GetOpenFilename(FileFilter, FilterIndex, Title, ButtonText, MultiSelect)
and Application.GetSaveAsFilename(InitialFilename, FileFilter, FilterIndex, Title, ButtonText)
http://msdn.microsoft.com/en-us/library/ff834966.aspx
http://msdn.microsoft.com/en-us/library/microsoft.office.interop.excel._application.getopenfilename.aspx
Needless to say, we think everyone with fairly heavy applications in Excel should start considering other options, as maintaining future versions across multiple clients and platforms may just be... insane!
I managed to get around this problem by setting the packing appropriately before including the header file. That way, for the purpose of this one function, we were using the 'default' 16-byte alignment, but did not have to change the packing alignment for the rest of our program:
#ifdef _WIN64
#pragma pack( push )
#pragma pack( 16 )
#include "Commdlg.h"
#pragma pack( pop )
#else
#include "Commdlg.h"
#endif // _WIN64