kv_do() not working as expected - g-wan

This is in my init.c:
server_data_t **data = (server_data_t**)get_env(argv, US_SERVER_DATA);
data[0] = (server_data_t*)calloc(1, sizeof(server_data_t));
kv_t *channels = &data[0]->channels;
kv_t *users = &data[0]->users;
kv_init(channels, "channels.dat", 10*1024, 0, 0, 0);
kv_init(users, "users.dat", 10*1024, 0, 0, 0);
These initializations were only for testing purposes:
channel_t *channel = (channel_t*)calloc(1, sizeof(channel_t));
channel->name = strdup("Test channel");
channel->id = 1;
kv_item channel_item;
channel_item.key = (char*)&channel->id;
channel_item.klen = sizeof(u32);
channel_item.val = (char*)channel;
channel_item.in_use = 0;
kv_add(channels, &channel_item);
channel_t *channel2 = (channel_t*)calloc(1, sizeof(channel_t));
channel2->name = strdup("Test channel2");
channel2->id = 2;
kv_item channel_item2;
channel_item2.key = (char*)&channel2->id;
channel_item2.klen = sizeof(u32);
channel_item2.val = (char*)channel2;
channel_item2.in_use = 0;
kv_add(channels, &channel_item2);
kv_do(channels, NULL, sizeof(u32), test_proc, 0);
The user-defined process passed to kv_do:
static int test_proc(const kv_item *item, const void *ctx)
{
return 1;
}
Starting the server segfaults: the kv_do test process goes into an infinite loop when trying to visit all items. With a single item in the list it works fine - it visits the first item and quits. I can also fetch the items one by one by their ID using kv_get.
I found an edge case where it worked with two items: the first key was "Test Channel" with a key length of strlen("Test Channel"), and the second key was "Test Channel2" with the same key length as the first item's. Pretty confusing.
Is the mistake in my code (pointers and such), or in how the callback is supposed to work with the return value of 1?
I know that G-WAN sometimes has trouble inside virtual machines (KVM and the like), so it could also be that: I'm running Oracle VM VirtualBox v5.0.24 with Ubuntu.

Are the kv.c and persistence.c G-WAN examples working for you?
If any of them crashes, then you are probably using out-of-sync G-WAN headers (gwan.h) with a more recent ./gwan executable.
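For what it's worth, here is a minimal visitor sketch (my addition, based only on the structures shown in the question, not on the G-WAN samples). item->val still points at the channel_t stored in init.c, and as far as I recall a non-zero return value tells kv_do to keep visiting items while 0 stops the walk:

static int dump_proc(const kv_item *item, const void *ctx)
{
   // item->val is the channel_t* stored by kv_add() in init.c
   const channel_t *ch = (const channel_t*)item->val;
   printf("channel id=%u name=%s\n", ch->id, ch->name);
   return 1; // assumed: non-zero continues the walk, 0 stops it
}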

Related

Optimizations causing errors

std::vector<VkWriteDescriptorSet> writeDescriptorSets;
for (int index = 0; index < descriptorBindings.size(); index++)
{
VkWriteDescriptorSet writeDescriptorSet = {};
// Binding 0 : Uniform buffer
writeDescriptorSet.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
writeDescriptorSet.dstSet = descriptorSet;
// Binds this uniform buffer to binding point 0
writeDescriptorSet.dstBinding = index;
writeDescriptorSet.descriptorCount = descriptorBindings[index].Count;
writeDescriptorSet.pNext = nullptr;
writeDescriptorSet.pTexelBufferView = nullptr;
if (descriptorBindings[index].Type == DescriptorType::UniformBuffer)
{
VkDescriptorBufferInfo uniformBufferDescriptor = {};
uniformBufferDescriptor.buffer = descriptorBindings[index].UniformBuffer->buffer;
uniformBufferDescriptor.offset = 0;
uniformBufferDescriptor.range = descriptorBindings[index].UniformBuffer->size;
writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
writeDescriptorSet.pBufferInfo = &uniformBufferDescriptor;
}
else if (descriptorBindings[index].Type == DescriptorType::TextureSampler)
{
VkDescriptorImageInfo textureDescriptor = {};
textureDescriptor.imageView = descriptorBindings[index].Texture->imageView->imageView; // The image's view (images are never directly accessed by the shader, but rather through views defining subresources)
textureDescriptor.sampler = descriptorBindings[index].Texture->sampler; // The sampler (Telling the pipeline how to sample the texture, including repeat, border, etc.)
textureDescriptor.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL; // The current layout of the image (Note: Should always fit the actual use, e.g. shader read)
//printf("%d\n", textureDescriptor.imageLayout);
writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
writeDescriptorSet.pImageInfo = &textureDescriptor;
}
writeDescriptorSets.push_back(writeDescriptorSet);
}
vkUpdateDescriptorSets(logicalDevice, writeDescriptorSets.size(), writeDescriptorSets.data(), 0, nullptr);
I am really scratching my head over this. If I enable optimizations inside Visual Studio, then the textureDescriptor.imageLayout line, and probably the rest of textureDescriptor, gets optimized out and causes errors in Vulkan. If I leave the printf below it uncommented, there is no problem. I suspect that the compiler then detects that imageLayout is being used and doesn't get rid of it.
Do I even need optimizations? If so how can I prevent it from removing that code?
textureDescriptor is not being "optimized out". It's a stack variable whose lifetime ends before you ever give it to Vulkan.
You're going to have to create those objects in some way that outlives the block in which they were created. They need to live at least until the call to vkUpdateDescriptorSets.
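A minimal sketch of one way to do that (my addition, not part of the answer): keep the VkDescriptorBufferInfo / VkDescriptorImageInfo structs in vectors declared next to writeDescriptorSets, and reserve capacity up front so the addresses stored in the writes stay valid until vkUpdateDescriptorSets has been called.

std::vector<VkWriteDescriptorSet> writeDescriptorSets;
std::vector<VkDescriptorBufferInfo> bufferInfos;   // must outlive the loop
std::vector<VkDescriptorImageInfo>  imageInfos;    // must outlive the loop
// Reserve so push_back never reallocates and invalidates the addresses taken below.
bufferInfos.reserve(descriptorBindings.size());
imageInfos.reserve(descriptorBindings.size());

for (size_t index = 0; index < descriptorBindings.size(); index++)
{
    VkWriteDescriptorSet writeDescriptorSet = {};
    writeDescriptorSet.sType = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    writeDescriptorSet.dstSet = descriptorSet;
    writeDescriptorSet.dstBinding = static_cast<uint32_t>(index);
    writeDescriptorSet.descriptorCount = descriptorBindings[index].Count;

    if (descriptorBindings[index].Type == DescriptorType::UniformBuffer)
    {
        VkDescriptorBufferInfo info = {};
        info.buffer = descriptorBindings[index].UniformBuffer->buffer;
        info.offset = 0;
        info.range  = descriptorBindings[index].UniformBuffer->size;
        bufferInfos.push_back(info);                 // stored, so the address below stays valid
        writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
        writeDescriptorSet.pBufferInfo = &bufferInfos.back();
    }
    else if (descriptorBindings[index].Type == DescriptorType::TextureSampler)
    {
        VkDescriptorImageInfo info = {};
        info.imageView   = descriptorBindings[index].Texture->imageView->imageView;
        info.sampler     = descriptorBindings[index].Texture->sampler;
        info.imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
        imageInfos.push_back(info);                  // stored, so the address below stays valid
        writeDescriptorSet.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
        writeDescriptorSet.pImageInfo = &imageInfos.back();
    }
    writeDescriptorSets.push_back(writeDescriptorSet);
}
vkUpdateDescriptorSets(logicalDevice, static_cast<uint32_t>(writeDescriptorSets.size()), writeDescriptorSets.data(), 0, nullptr);

With the info structs owned by the outer scope, there is no dangling pointer for the optimizer to expose, regardless of the build settings.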

Why does the looping in Google Sheet App Script stop in middle of the looping process and the rest of the codes do not run?

I tried the "for loop" and "do ... while ...", both of them stopped in the middle of the looping process, and the rest of the codes, which come after the loop, did not run. This becomes an issue when I loop through hundreds of rows.
I know that the use of array is a better solution as the code execution is faster, but I have a difficulty in setting the borders in batch as there is no ".setBorders(Array)" function in Google Sheets.
The sheet provided here has been simplified only to show the looping issues. The actual sheet is written to automatically create hundreds of tables with different values, font weights and horizontal alignments.
What I want to do:
Choose the option "Yes" to start the looping (the current row of looping is also tracked and recorded in "CURRENT ROW" and the "STATUS" shows "Processing ...")
The program will check the "SCORE" column; if "SCORE" is empty (""), then the font weight for that row is set to "bold", else the font weight is set to "normal"
When the looping is done through to the last row, the "STATUS" shows "Done".
The following is the copy of the Google App Script:
let app = SpreadsheetApp;
let ss = app.getActiveSpreadsheet();
let activeSheet = ss.getActiveSheet();
let sheetName = activeSheet.getName();
let sheet1 = ss.getSheetByName("Sheet1");
let loopOptionRange = sheet1.getRange(4, 4);
let loopOption = loopOptionRange.getValue();
let loopStatusRange = sheet1.getRange(4, 7);
function onEdit(e) {
}
function onOpen() {
loopOptionRange.setValue("Choose");
sheet1.getRange(4, 5).setValue(5);
loopStatusRange.setValue("");
}
function loopTest() {
const startRow = 4; //table head
const lastRow = sheet1.getLastRow();
sheet1.getRange(4, 6).setValue(lastRow);
const startRowLoop = startRow + 1; //first row of looping
try {
for (let i = startRowLoop; i <= lastRow; i++) {
const testStatus = sheet1.getRange(i, 3).getValue();
sheet1.getRange(4, 5).setValue(i);
if (testStatus == "") {
sheet1.getRange(i, 1, 1, 2).setFontWeight("bold");
} else {
sheet1.getRange(i, 1, 1, 3).setFontWeight("normal");
}
}
loopStatusRange.setValue("Done");
loopOptionRange.setValue("Choose");
} catch (error) {
app.getUi().alert(`An error occurred.`);
}
}
if (sheetName === "Sheet1"){
if (loopOption == "Yes") {
loopStatusRange.setValue("Processing ...");
loopTest();
} else if (loopOption === "Cancel") {
loopOptionRange.setValue("Choose");
}
}
LOOP TEST - Google Sheets file
In your script, getValue, setValue and setFontWeight are used inside a loop. In this case, the process cost becomes high. I thought that this might be the reason for your issue. In order to reduce the process cost of your script, how about the following modification?
From:
for (i = startRowLoop; i <= lastRow; i++) {
const testStatus = sheet1.getRange(i, 3).getValue();
sheet1.getRange(4, 5).setValue(i);
if (testStatus == "") {
sheet1.getRange(i, 1, 1, 2).setFontWeight("bold");
} else {
sheet1.getRange(i, 1, 1, 3).setFontWeight("normal");
}
}
To:
const range = sheet1.getRange(startRowLoop, 3, lastRow - startRow); // the SCORE column for every data row
const values = range.getDisplayValues().map(([c]) => [c ? null : "bold"]); // empty SCORE -> "bold", otherwise reset
range.offset(0, -1).setFontWeights(values); // write all font weights back in one call
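For context, here is a rough sketch of loopTest with that modification in place (my addition, assuming the same sheet layout and global variables as above; the per-row "CURRENT ROW" update is dropped because the batch write makes it unnecessary):

function loopTest() {
  const startRow = 4; // table head
  const lastRow = sheet1.getLastRow();
  sheet1.getRange(4, 6).setValue(lastRow);
  const startRowLoop = startRow + 1; // first data row
  try {
    // Read the whole SCORE column once, then write every font weight back in a single call.
    const range = sheet1.getRange(startRowLoop, 3, lastRow - startRow);
    const values = range.getDisplayValues().map(([c]) => [c ? null : "bold"]);
    range.offset(0, -1).setFontWeights(values);
    loopStatusRange.setValue("Done");
    loopOptionRange.setValue("Choose");
  } catch (error) {
    app.getUi().alert(`An error occurred.`);
  }
}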
Note:
About your follow-up question "Is it really because of 'the high process cost' you mentioned in your answer?": when I saw your script for the first time, I thought that the reason for your issue might be the process cost, because when a script is run by the OnEdit trigger, the maximum execution time is 30 seconds. And when I tested your script and your Spreadsheet, the log after the script stopped showed an error about exceeding the maximum execution time. From this, I concluded that the reason for your issue is the process cost of the script.
References:
map()
setFontWeights(fontWeights)

serial COM port issue # reading

Here is the COM port opening part:
portHandle=CreateFileA(portName, GENERIC_READ|GENERIC_WRITE,0, NULL, OPEN_EXISTING, 0, NULL);
if (portHandle == INVALID_HANDLE_VALUE)
{
return -1;
}
COMMCONFIG Win_CommConfig;
COMMTIMEOUTS Win_CommTimeouts;
unsigned long confSize = sizeof(COMMCONFIG);
Win_CommConfig.dwSize = confSize;
GetCommConfig(portHandle, &Win_CommConfig, &confSize);
Win_CommConfig.dcb.Parity = 0;
Win_CommConfig.dcb.fRtsControl = RTS_CONTROL_DISABLE;
Win_CommConfig.dcb.fOutxCtsFlow = FALSE;
Win_CommConfig.dcb.fOutxDsrFlow = FALSE;
Win_CommConfig.dcb.fDtrControl = DTR_CONTROL_DISABLE;
Win_CommConfig.dcb.fDsrSensitivity = FALSE;
Win_CommConfig.dcb.fNull=FALSE;
Win_CommConfig.dcb.fTXContinueOnXoff = FALSE;
Win_CommConfig.dcb.fInX=FALSE;
Win_CommConfig.dcb.fOutX=FALSE;
Win_CommConfig.dcb.fBinary=TRUE;
Win_CommConfig.dcb.DCBlength = sizeof(DCB);
if (baudrate != -1)
{
Win_CommConfig.dcb.BaudRate = baudrate;
}
Win_CommConfig.dcb.ByteSize = 8;
Win_CommTimeouts.ReadIntervalTimeout = 50;
Win_CommTimeouts.ReadTotalTimeoutMultiplier = 0;
Win_CommTimeouts.ReadTotalTimeoutConstant = 110;
Win_CommTimeouts.WriteTotalTimeoutMultiplier = 0;
Win_CommTimeouts.WriteTotalTimeoutConstant = 110;
SetCommConfig(portHandle, &Win_CommConfig, sizeof(COMMCONFIG));
SetCommTimeouts(portHandle,&Win_CommTimeouts);
return 0;
It connects OK: I manage to issue some AT commands and read back the OK\n> responses, and even one of the upper-level protocol commands (OBD2: the command is 0100\r) gets a proper answer. But when I attempt other commands, such as scanning the supported PIDs (e.g. 0000\n, 0101\n, 0202\n, etc.), the whole thing either echoes back whatever I write to it or just times out. Issuing the same sequence of commands from HyperTerminal works properly. These serial ports are virtual simulated ports, should it matter: http://com0com.sourceforge.net/.
What am I missing? Perhaps some reading / setting / resetting of some of the pins? It has been a while since I last worked with RS232... Thanks!
EDIT: just tried the scantool at https://www.scantool.net/downloads/diagnostic-software/ and it worked ok too.
e.g 0000\n, 0101\n, 0202\n
This was the issue: it should have been \r at the end, not \n. HyperTerminal worked because the Enter key sends a \r on Windows. Probably the connected device also did some validation of the input, which is why it got to work even when the wrong terminator character was fed in.
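As a small illustration of the fix (my addition; sendCommand and the fixed-size buffer are made up for this sketch), every command gets terminated with '\r' before it is handed to WriteFile:

static int sendCommand(HANDLE portHandle, const char *cmd)
{
    char buf[64];
    DWORD written = 0;
    // Append the correct terminator: OBD2/AT-style commands end with a carriage return.
    int len = snprintf(buf, sizeof(buf), "%s\r", cmd);
    if (len < 0 || len >= (int)sizeof(buf))
        return -1; // command too long for this sketch's buffer
    if (!WriteFile(portHandle, buf, (DWORD)len, &written, NULL) || written != (DWORD)len)
        return -1;
    return 0;
}

/* usage: sendCommand(portHandle, "0100"); instead of writing "0100\n" */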

How to get current display mode (resolution, refresh rate) of a monitor/output in DXGI?

I am creating a multi-monitor full screen DXGI/D3D application. I am enumerating through the available outputs and adapters in preparation of creating their swap chains.
When creating my swap chain using DXGI's IDXGIFactory::CreateSwapChain method, I need to provide a swap chain description which includes a buffer description of type DXGI_MODE_DESC that details the width, height, refresh rate, etc. How can I find out what the output is currently set to (or how can I find out what the display mode of the output currently is)? I don't want to change the user's resolution or refresh rate when I go to full screen with this swap chain.
After looking around some more I stumbled upon the EnumDisplaySettings legacy GDI function, which allows me to access the current resolution and refresh rate. Combining this with the IDXGIOutput::FindClosestMatchingMode function I can get pretty close to the current display mode:
void getClosestDisplayModeToCurrent(IDXGIOutput* output, DXGI_MODE_DESC* outCurrentDisplayMode)
{
DXGI_OUTPUT_DESC outputDesc;
output->GetDesc(&outputDesc);
HMONITOR hMonitor = outputDesc.Monitor;
MONITORINFOEX monitorInfo;
monitorInfo.cbSize = sizeof(MONITORINFOEX);
GetMonitorInfo(hMonitor, &monitorInfo);
DEVMODE devMode;
devMode.dmSize = sizeof(DEVMODE);
devMode.dmDriverExtra = 0;
EnumDisplaySettings(monitorInfo.szDevice, ENUM_CURRENT_SETTINGS, &devMode);
DXGI_MODE_DESC current;
current.Width = devMode.dmPelsWidth;
current.Height = devMode.dmPelsHeight;
bool useDefaultRefreshRate = 1 == devMode.dmDisplayFrequency || 0 == devMode.dmDisplayFrequency;
current.RefreshRate.Numerator = useDefaultRefreshRate ? 0 : devMode.dmDisplayFrequency;
current.RefreshRate.Denominator = useDefaultRefreshRate ? 0 : 1;
current.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
current.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
current.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;
output->FindClosestMatchingMode(&current, outCurrentDisplayMode, NULL);
}
...But I don't think that this is really the correct answer, because I need to use legacy functions. Is there any way to get the exact current display mode with DXGI alone, rather than using this method?
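For reference, a usage sketch of the helper above (my addition; device, factory and hwnd are hypothetical): the returned mode is dropped straight into the swap chain description, so the output keeps its current resolution and refresh rate when going full screen.

DXGI_MODE_DESC currentMode = {};
getClosestDisplayModeToCurrent(output, &currentMode);

DXGI_SWAP_CHAIN_DESC swapChainDesc = {};
swapChainDesc.BufferDesc = currentMode;            // width, height, refresh rate, format of the output
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.BufferCount = 2;
swapChainDesc.OutputWindow = hwnd;                 // hypothetical window handle
swapChainDesc.Windowed = FALSE;                    // full screen on this output

IDXGISwapChain* swapChain = nullptr;
factory->CreateSwapChain(device, &swapChainDesc, &swapChain);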
I saw solution here:
http://www.rastertek.com/dx11tut03.html
In the following part:
// Now go through all the display modes and find the one that matches the screen width and height.
// When a match is found store the numerator and denominator of the refresh rate for that monitor.
for(i=0; i<numModes; i++)
{
if(displayModeList[i].Width == (unsigned int)screenWidth)
{
if(displayModeList[i].Height == (unsigned int)screenHeight)
{
numerator = displayModeList[i].RefreshRate.Numerator;
denominator = displayModeList[i].RefreshRate.Denominator;
}
}
}
Is my understanding correct that the available resolutions are in displayModeList?
This might be what you are looking for:
// Get the display mode list for an output
std::vector<DXGI_MODE_DESC> modeList = GetDisplayModeList(*outputItor);
for (std::vector<DXGI_MODE_DESC>::iterator modeItor = modeList.begin(); modeItor != modeList.end(); ++modeItor)
{
    // PrintDisplayModeInfo(&*modeItor);
}

std::vector<DXGI_MODE_DESC> GetDisplayModeList(IDXGIOutput* output)
{
    UINT num = 0;
    DXGI_FORMAT format = DXGI_FORMAT_R32G32B32A32_TYPELESS;
    UINT flags = DXGI_ENUM_MODES_INTERLACED | DXGI_ENUM_MODES_SCALING;

    // First call: get the number of display modes for this format
    output->GetDisplayModeList(format, flags, &num, 0);

    // Second call: fill the mode descriptions; returning the vector by value
    // avoids leaking a raw new[] array and handing out pointers into it
    std::vector<DXGI_MODE_DESC> descs(num);
    output->GetDisplayModeList(format, flags, &num, descs.data());
    return descs;
}

Finding DNS server settings programmatically on Mac OS X

I have some cross-platform DNS client code that I use for doing end-to-end SMTP, and on Windows I can find the current DNS server IP addresses by looking in the registry. On the Mac I can probably use the SystemConfiguration framework as mentioned in the first answer, but the exact method of doing so is not immediately obvious.
For instance SCDynamicStoreCopyDHCPInfo returns some of the dynamic DHCP related data but not the DNS server addresses.
I know it's very late to answer this question, but it may be helpful for others.
This code will help with the task:
SCPreferencesRef prefsDNS = SCPreferencesCreate(NULL, CFSTR("DNSSETTING"), NULL);
CFArrayRef services = SCNetworkServiceCopyAll(prefsDNS);
long servicesCount = CFArrayGetCount(services);
for (long i = 0; i < servicesCount; i++) {
const SCNetworkServiceRef service = (const SCNetworkServiceRef)CFArrayGetValueAtIndex(services, i);
CFStringRef interfaceServiceID = SCNetworkServiceGetServiceID(service);
CFStringRef primaryservicepath = CFStringCreateWithFormat(NULL, NULL, CFSTR("State:/Network/Service/%@/DNS"), interfaceServiceID);
SCDynamicStoreRef dynRef = SCDynamicStoreCreate(kCFAllocatorSystemDefault, CFSTR("DNSSETTING"), NULL, NULL);
CFPropertyListRef propList = SCDynamicStoreCopyValue(dynRef,primaryservicepath);
if (propList) {
CFDictionaryRef dict = (CFDictionaryRef)propList;
CFArrayRef addresses = (CFArrayRef)CFDictionaryGetValue(dict, CFSTR("ServerAddresses"));
long addressesCount = CFArrayGetCount(addresses);
for (long j = 0; j < addressesCount; j++) {
CFStringRef address = (CFStringRef)CFArrayGetValueAtIndex(addresses, j);
// Print address
CFShow(address);
}
CFRelease(propList);
}
CFRelease(dynRef);
CFRelease(primaryservicepath);
}
CFRelease(services);
CFRelease(prefsDNS);
I know it's been a long time since you needed this, but there is nothing worse than an old question left unsolved. You can't read them from "/etc/resolv.conf" because of permission issues. After much searching, and a little luck, I discovered you can get them via the res_ninit() function.
// Get the native iOS/macOS system resolvers.
// (Needs <resolv.h> and <arpa/inet.h>; link with -lresolv.)
res_ninit(&_res);
res_state res = &_res;
for (int i = 0; i < res->nscount; i++) {
    sa_family_t family = res->nsaddr_list[i].sin_family;
    int port = ntohs(res->nsaddr_list[i].sin_port);
    if (family == AF_INET) { // IPv4 address
        char str[INET_ADDRSTRLEN]; // string representation of the address
        inet_ntop(AF_INET, &(res->nsaddr_list[i].sin_addr.s_addr), str, INET_ADDRSTRLEN);
    } else if (family == AF_INET6) { // IPv6 address
        char str[INET6_ADDRSTRLEN]; // string representation of the address
        inet_ntop(AF_INET6, &(res->nsaddr_list[i].sin_addr.s_addr), str, INET6_ADDRSTRLEN);
    }
}
res_ndestroy(res);
You can use the SystemConfiguration framework. It's a C API.
Update: apparently this is harder to dig up than I thought. Search for the key "State:/Network/Service/ServiceID/DNS", where ServiceID is the ID of the service.
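A minimal sketch of that lookup (my addition, not the answerer's code): enumerate every per-service DNS key matching the pattern above and print the server addresses. Compile with -framework SystemConfiguration -framework CoreFoundation.

#include <SystemConfiguration/SystemConfiguration.h>

static void printDNSServers(void)
{
    SCDynamicStoreRef store = SCDynamicStoreCreate(NULL, CFSTR("DNSSETTING"), NULL, NULL);
    // Regex pattern matching "State:/Network/Service/<ServiceID>/DNS" for every service
    CFArrayRef keys = SCDynamicStoreCopyKeyList(store, CFSTR("State:/Network/Service/.*/DNS"));
    if (keys) {
        for (CFIndex i = 0; i < CFArrayGetCount(keys); i++) {
            CFStringRef key = (CFStringRef)CFArrayGetValueAtIndex(keys, i);
            CFDictionaryRef dns = (CFDictionaryRef)SCDynamicStoreCopyValue(store, key);
            if (dns) {
                CFArrayRef addresses = (CFArrayRef)CFDictionaryGetValue(dns, CFSTR("ServerAddresses"));
                if (addresses) CFShow(addresses); // prints the DNS server IPs for this service
                CFRelease(dns);
            }
        }
        CFRelease(keys);
    }
    if (store) CFRelease(store);
}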
They are also available from
/etc/resolv.conf
You could read from /etc/resolv.conf.
