How could sscanf produce such strange results?

I have been fighting with this code for four days:
unsigned long baudrate = 0;
unsigned char databits = 0;
unsigned char stop_bits = 0;
char parity_text[10];
char flowctrl_text[4];
const char xformat[] = "%lu,%hhu,%hhu,%[^,],%[^,]\n";
const char xtext[] = "115200,8,1,EVEN,NFC\n";
int res = sscanf(xtext, xformat, &baudrate, &databits, &stop_bits, (char*) &parity_text, (char*) &flowctrl_text);
printf("Res: %d\r\n", res);
printf("baudrate: %lu, databits: %hhu, stop: %hhu, \r\n", baudrate, databits, stop_bits);
printf("parity: %s \r\n", parity_text);
printf("flowctrl: %s \r\n", flowctrl_text);
It returns:
Res: 5
baudrate: 115200, databits: 0, stop: 1,
parity:
flowctrl: NFC
Databits and parity are missing!
Actually the memory under the parity variable is '\0'VEN'\0';
it looks like the first character was somehow overwritten by the sscanf call.
The return value of sscanf is 5, which suggests that it was able to parse the whole input.
My configuration:
gcc-arm-none-eabi 7.2.1
Visual Studio Code 1.43.2
PlatformIO Core 4.3.1
PlatformIO Home 3.1.1
Lib ST-STM 6.0.0 (Mbed 5.14.1)
STM32F446RE (Nucleo-F446RE)
I have tried (without success):
compiling with mbed RTOS and without
variable types uint8_t, uint32_t
gcc-arm versions: 6.3.1, 8.3.1, 9.2.1
using another IDE (CLion+PlatformIO)
compiling on another computer (same config)
What actually helps:
making the variables static
compiling in Mbed online compiler
The behavior of sscanf is on the whole very unpredictable: mixing the order or data types of the variables sometimes helps, but it most often just ends with different flaws in the output.

This took me longer than I care to admit, but like most issues it ended up being very simple.
char parity_text[10];
char flowctrl_text[4];
Needs to be changed to:
char parity_text[10] = {0};
char flowctrl_text[5] = {0};
The flowctrl_text array is not large enough at size four: the final %[^,] conversion also stores the trailing newline, so it writes "NFC\n" plus the null terminator, which needs five bytes. If you bump it to a size of 5 you should have no problem. Just to be safe I would also initialize the arrays to 0.
Once I increased the size, I had zero issues with your existing code. Let me know if this helps.
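Incidentally, you can also make sscanf itself incapable of overflowing by giving each %[ conversion an explicit field width, one less than the buffer size. A minimal sketch of the same parse (the field widths and the [^,\n] set are my additions, not your original format):
#include <stdio.h>

int main(void)
{
    unsigned long baudrate = 0;
    unsigned char databits = 0, stop_bits = 0;
    char parity_text[10] = {0};
    char flowctrl_text[5] = {0};

    /* %9[^,] and %4[^,\n] read at most 9 and 4 characters,
       leaving room for the null terminator in each buffer */
    int res = sscanf("115200,8,1,EVEN,NFC\n", "%lu,%hhu,%hhu,%9[^,],%4[^,\n]",
                     &baudrate, &databits, &stop_bits, parity_text, flowctrl_text);

    printf("res=%d baud=%lu bits=%hhu stop=%hhu parity=%s flow=%s\n",
           res, baudrate, databits, stop_bits, parity_text, flowctrl_text);
    return 0;
}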

Related

Is ZeroMemory the Windows equivalent of null-terminating a buffer?

For example, by convention I null-terminate a buffer (set the buffer's contents to zero) the following way, example 1:
char buffer[1024] = {0};
And with the windows.h library we can call ZeroMemory, example 2:
char buffer[1024];
ZeroMemory(buffer, sizeof(buffer));
According to the documentation provided by Microsoft, ZeroMemory "fills a block of memory with zeros". I want to be accurate in my Windows application, so I thought: what better place to ask than Stack Overflow?
Are these two examples equivalent in logic?
Yes, the two examples are equivalent: the entire array is filled with zeros in both cases.
In the case of char buffer[1024] = {0};, you explicitly set only the first char element to 0, and the compiler then implicitly value-initializes the remaining 1023 char elements to 0 for you.
In C++11 and later, you can omit that first element value:
char buffer[1024] = {};
char buffer[1024]{};
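For what it's worth, memset from <string.h> gives you the same effect without windows.h; in most Windows SDKs ZeroMemory is itself defined as a macro that fills the block with zeros via memset:
#include <string.h>

char buffer[1024];
memset(buffer, 0, sizeof(buffer)); /* zero all 1024 bytes, same as ZeroMemory */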

Run a process at the same physical memory location

For a research project, I have a long-running process that uses various buffers and stack variables. I'd like to be able to launch this process multiple times such that the physical addresses backing its heap, stack, code, and static variables are the same each time. I know the exact size of all of these variables, and the sizes of the heap and stack stay constant during execution. To help with this, I use some helper code to translate arbitrary virtual addresses in my program to their corresponding physical addresses (sourced from here):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define PAGE_SHIFT 12 /* not defined in userspace; 12 assumes 4 KiB pages */

/* Layout of one 64-bit /proc/<pid>/pagemap entry, per
   Documentation/admin-guide/mm/pagemap.rst: bits 0-54 hold the PFN
   (or swap type/offset), bit 63 is the present flag. */
struct pagemap
{
    union status
    {
        struct present
        {
            unsigned long long pfn : 55;   /* bits 0-54 */
            unsigned char soft_dirty : 1;  /* bit 55 */
            unsigned char exclusive : 1;   /* bit 56 */
            unsigned char zeroes : 4;      /* bits 57-60 */
            unsigned char type : 1;        /* bit 61 */
            unsigned char swapped : 1;     /* bit 62 */
            unsigned char present : 1;     /* bit 63 */
        } present;
        struct swapped
        {
            unsigned char swaptype : 5;     /* bits 0-4 */
            unsigned long long offset : 50; /* bits 5-54 */
            unsigned char soft_dirty : 1;
            unsigned char exclusive : 1;
            unsigned char zeroes : 4;
            unsigned char type : 1;
            unsigned char swapped : 1;
            unsigned char present : 1;
        } swapped;
    } status;
} __attribute__ ((packed));

unsigned long get_pfn_for_addr(void *addr)
{
    unsigned long offset;
    struct pagemap pagemap;
    FILE *pagemap_file = fopen("/proc/self/pagemap", "rb");

    if (pagemap_file == NULL)
    {
        perror("fopen pagemap");
        exit(1);
    }
    /* one 8-byte entry per virtual page */
    offset = (unsigned long) addr / getpagesize() * 8;
    if (fseek(pagemap_file, offset, SEEK_SET) != 0)
    {
        fprintf(stderr, "failed to seek pagemap to offset\n");
        exit(1);
    }
    if (fread(&pagemap, sizeof(struct pagemap), 1, pagemap_file) != 1)
    {
        fprintf(stderr, "failed to read pagemap entry\n");
        exit(1);
    }
    fclose(pagemap_file);
    return pagemap.status.present.pfn;
}

unsigned long virt_to_phys(void *addr)
{
    unsigned long pfn, page_offset, phys_addr;

    pfn = get_pfn_for_addr(addr);
    page_offset = (unsigned long) addr % getpagesize();
    phys_addr = (pfn << PAGE_SHIFT) + page_offset;
    return phys_addr;
}
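As a sanity check, I call it like this (a minimal sketch; the buffer is just for illustration):
int main(void)
{
    static char buf[4096];
    buf[0] = 1; /* touch the page so it is actually mapped */
    printf("virt %p -> phys 0x%lx\n", (void *) buf, virt_to_phys(buf));
    return 0;
}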
So far, my methodology has only required that a specific buffer in my program be located at the same physical address for each run. For this, I was just able to exit and relaunch the process whenever the physical address for that buffer was wrong, and I would end up with the correct location relatively quickly each time. However, I'd like to extend my experiment to ensure that my process is loaded identically in physical memory between runs, and this try-and-restart method does not seem to work well for that. Ideally, I would like to be able to set aside some small number of physical page frames that can't be allocated to another process, or to the kernel itself. Then, I would pass a flag down to do_fork that notifies the kernel that this is my special process and to allocate specific page frames to it.
My questions are:
Is there any sort of isolation mechanism already built into the kernel that would let me set aside an exclusive physical memory space that I could launch my process in?
If not, what would be a starting point for modifying the kernel to support behavior like this?
Is there any other solution (not involving either of the two above) that I could use for my desired behavior?
This is something that the kernel, using virtual memory, is tasked with abstracting away from you, so I'm not sure it is even possible to do (without an insane amount of work).
May I ask what experiment requires this? Perhaps if you describe what you want to achieve, it is easier to offer advice.

How can I know the minor number in Linux module initialisation?

I am writing a Linux kernel module.
Here is what I've done in the module's init function:
register_chrdev(300 /* major */, "mydev", &fops);
It works fine, but I need to know the minor number.
I have read that we cannot set this minor number; it is the kernel which gives us this number. If so, how can I know it in the module's init function?
Thanks
register_chrdev calls __register_chrdev internally:
static inline int register_chrdev(unsigned int major, const char *name,
                                  const struct file_operations *fops)
{
        return __register_chrdev(major, 0, 256, name, fops);
}
If you look at the signature of __register_chrdev, it is:
int __register_chrdev(unsigned int major, unsigned int baseminor,
                      unsigned int count, const char *name,
                      const struct file_operations *fops)
register_chrdev passes your major number (300) and a base minor number of 0 with a count of 256, so it reserves the minor number range 0-255 for your device.
Also, in the definition of __register_chrdev, a device number (dev_t, which encodes both the major and the minor number) is created for your device:
err = cdev_add(cdev, MKDEV(cd->major, baseminor), count);
MKDEV(cd->major, baseminor) creates it. So the first device number (dev_t) will have 0 as its minor number, and count (256) is how many consecutive minor numbers can be used after it.
You can also get the major and minor number dynamically if you use alloc_chrdev_region: all you have to do is pass a pointer to a dev_t to alloc_chrdev_region, and it will dynamically allocate a major and minor number for your device. To get the major and minor number in your module, you can use:
major = MAJOR(dev);
minor = MINOR(dev);
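Putting that together, a minimal sketch of an init/exit pair that lets the kernel pick the device number and reads it back (the name "mydev" and the count of 1 minor are just examples):
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>

static dev_t dev;            /* encodes major + first minor */
static struct cdev my_cdev;
static const struct file_operations fops = { .owner = THIS_MODULE };

static int __init mydev_init(void)
{
    /* ask the kernel for 1 minor, starting at base minor 0 */
    int err = alloc_chrdev_region(&dev, 0, 1, "mydev");
    if (err)
        return err;

    pr_info("mydev: major %d, minor %d\n", MAJOR(dev), MINOR(dev));

    cdev_init(&my_cdev, &fops);
    err = cdev_add(&my_cdev, dev, 1);
    if (err)
        unregister_chrdev_region(dev, 1);
    return err;
}

static void __exit mydev_exit(void)
{
    cdev_del(&my_cdev);
    unregister_chrdev_region(dev, 1);
}

module_init(mydev_init);
module_exit(mydev_exit);
MODULE_LICENSE("GPL");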

macOS OpenCL on HD 530 with clBuildProgram error (-11)

I have the latest Mac Pro (OS: 10.12.2) with an Intel integrated GPU HD 530 (Gen9), which runs the OpenCL code. In my OpenCL code I use vloadx and the atomic_add instruction. I change my OpenCL kernel code into bitcode as described in https://developer.apple.com/library/content/samplecode/OpenCLOfflineCompilation/Introduction/Intro.html#//apple_ref/doc/uid/DTS40011196-Intro-DontLinkElementID_2 and create the program with clCreateProgramWithBinary. But when I call clBuildProgram, it returns error -11, and the build log is:
error: undefined reference to `_Z6vload2mPKU3AS1h()'
undefined reference to `_Z8atom_addPVU3AS3ii()'
But on my MacBook Air with an HD 5500 (Gen8), the code is OK.
Can someone tell me what I should do?
The problem here is that you cannot use incompatible binaries on different devices: if you compile for Intel, you cannot use the compiled binary on AMD, for example. What you need to do is compile the code for the specific device, every time, from the source.
If you do not want to keep the OpenCL code in separate files, you can put it inside your source file by stringifying it: instead of reading a file, you pass a kernel string inside your host code as the kernel source. This also allows you to protect your IP. You still need to build the code with clBuildProgram, but you can save the built program as a binary, so after the first run you won't degrade performance by rebuilding it every time. To give an example, let's suppose you have a kernel.cl file like the following:
__kernel void foo(__global int* in, __global int* out)
{
    int idx = get_global_id(0);
    out[idx] = in[idx] * in[idx];
}
You probably get this kernel code by reading the file with something like:
FILE *fp = fopen("kernel.cl", "r");
char *source_str = (char *)malloc(MAX_SOURCE_SIZE);
size_t source_size = fread(source_str, 1, MAX_SOURCE_SIZE, fp);
fclose(fp);
program = clCreateProgramWithSource(context, 1, (const char **)&source_str, (const size_t *)&source_size, &ret);
What you can do instead is something like:
const char* src = "__kernel void foo(__global int* in, __global int* out)\
{\
    int idx = get_global_id(0);\
    out[idx] = in[idx] * in[idx];\
}";
size_t src_size = strlen(src); /* requires <string.h> */
program = clCreateProgramWithSource(context, 1, (const char **)&src, (const size_t *)&src_size, &ret);
When you compile your host C code, this string is compiled into the binary, so you protect your source code.
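For the "save the built program as binary" part, a minimal sketch for a single-device program could look like the following (error checks omitted; kernel.bin is just an example name). On the next run you would read this file back, hand it to clCreateProgramWithBinary, and still call clBuildProgram on the resulting program:
size_t bin_size;
clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES, sizeof(bin_size), &bin_size, NULL);

unsigned char *bin = (unsigned char *)malloc(bin_size);
/* CL_PROGRAM_BINARIES expects an array of pointers, one per device */
clGetProgramInfo(program, CL_PROGRAM_BINARIES, sizeof(bin), &bin, NULL);

FILE *f = fopen("kernel.bin", "wb");
fwrite(bin, 1, bin_size, f);
fclose(f);
free(bin);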

Stack around the variable 'xyz' was corrupted

I'm trying to get a simple piece of code I found on a website to work in VC++ 2010 on Windows Vista 64-bit:
#include "stdafx.h"
#include <windows.h>
int _tmain(int argc, _TCHAR* argv[])
{
DWORD dResult;
BOOL result;
char oldWallPaper[MAX_PATH];
result = SystemParametersInfo(SPI_GETDESKWALLPAPER, sizeof(oldWallPaper)-1, oldWallPaper, 0);
fprintf(stderr, "Current desktop background is %s\n", oldWallPaper);
return 0;
}
it does compile, but when I run it, I always get this error:
Run-Time Check Failure #2 - Stack around the variable 'oldWallPaper' was corrupted.
I'm not sure what is going wrong, but I noticed that the value of oldWallPaper looks something like "C\0:\0\\\0U\0s\0e\0r\0s[...]" -- I'm wondering where all the \0s come from.
A friend of mine compiled it on Windows XP 32-bit (also VC++ 2010) and is able to run it without problems.
Any clues/hints/opinions?
Thanks
The doc isn't very clear. The returned string is a WCHAR string, two bytes per character, not one, so you need to give it a wide-character buffer or you get a buffer overrun. Try:
BOOL result;
WCHAR oldWallPaper[MAX_PATH + 1];
result = SystemParametersInfo(SPI_GETDESKWALLPAPER,
                              MAX_PATH, oldWallPaper, 0);
(The second argument is the buffer size in characters; don't pass _tcslen of a buffer that hasn't been filled in yet.)
See also:
http://msdn.microsoft.com/en-us/library/ms724947(VS.85).aspx
http://msdn.microsoft.com/en-us/library/ms235631(VS.80).aspx (string conversion)
Every Windows function has two versions:
SystemParametersInfoA() // ANSI
SystemParametersInfoW() // Unicode (wide characters)
The version ending in W is the wide-character (i.e. Unicode) version of the function. All the \0s you are seeing are because every character you're getting back is UTF-16: two bytes per character, and for plain ASCII text the second byte happens to be 0. So you need to store the result in a wchar_t array and use wprintf instead of printf:
wchar_t oldWallPaper[MAX_PATH];
result = SystemParametersInfo(SPI_GETDESKWALLPAPER, MAX_PATH-1, oldWallPaper, 0);
wprintf( L"Current desktop background is %s\n", oldWallPaper );
So you can use the A version, SystemParametersInfoA(), if you are hell-bent on not using Unicode; for the record, though, you should always try to use Unicode.
Usually SystemParametersInfo() is a macro that evaluates to the W version when UNICODE is defined for your build.
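For completeness, here is a minimal sketch of the corrected program (using the TCHAR/_T spellings from the original code, which expand to the wide versions when UNICODE is defined):
#include "stdafx.h"
#include <windows.h>
#include <tchar.h>

int _tmain(int argc, _TCHAR* argv[])
{
    TCHAR oldWallPaper[MAX_PATH] = {0};
    /* second argument is the buffer size in characters, not bytes */
    BOOL result = SystemParametersInfo(SPI_GETDESKWALLPAPER, MAX_PATH, oldWallPaper, 0);
    if (result)
        _tprintf(_T("Current desktop background is %s\n"), oldWallPaper);
    return 0;
}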
