If the code below is compiled with UNICODE as a compiler option, the GetComputerNameEx API returns junk characters.
Whereas if it is compiled without the UNICODE option, the API returns a truncated hostname.
This issue is mostly seen with Asia-Pacific (non-English) languages such as Chinese, Japanese, and Korean, to name a few.
Can anyone shed some light on how this issue can be resolved?
#define INFO_SIZE 30

int main()
{
    int ret;
    TCHAR infoBuf[INFO_SIZE+1];
    DWORD bufSize = (INFO_SIZE+1);
    char *buf;
    buf = (char *) malloc(INFO_SIZE+1);

    if (!GetComputerNameEx((COMPUTER_NAME_FORMAT)1,
                           (LPTSTR)infoBuf, &bufSize))
    {
        printf("GetComputerNameEx failed (%d)\n", GetLastError());
        return -1;
    }

    ret = wcstombs(buf, infoBuf, (INFO_SIZE+1));
    buf[INFO_SIZE] = '\0';
    return 0;
}
In the languages you mentioned, most characters are represented by more than one byte, because these languages have character sets of far more than 256 characters. So you may need more than 30 bytes to encode 30 characters (in a Japanese ANSI code page such as 932, for example, each such character typically takes two bytes).
The usual pattern for calling a function like wcstombs goes like this: first query the number of bytes required, then allocate a buffer, then convert the string.
(edit: that actually relies on a POSIX extension, which also got implemented on Windows)
size_t size = wcstombs(NULL, infoBuf, 0);
if (size == (size_t) -1) {
    // some character can't be converted
}
char *buf = new char[size + 1];
size = wcstombs(buf, infoBuf, size + 1);
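To make this concrete, here is a rough sketch of how the question's code could look when built with UNICODE (the setlocale() call, the 256-character buffer and the printf of the result are my additions; without setlocale(), wcstombs() runs in the default "C" locale and fails on non-ASCII characters):

#include <windows.h>
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* let wcstombs() use the user's ANSI code page instead of the default "C" locale */
    setlocale(LC_ALL, "");

    WCHAR wideBuf[256];
    DWORD wideLen = sizeof(wideBuf) / sizeof(wideBuf[0]);
    if (!GetComputerNameExW(ComputerNameDnsHostname, wideBuf, &wideLen))
    {
        printf("GetComputerNameEx failed (%lu)\n", GetLastError());
        return -1;
    }

    /* first ask how many bytes the multibyte form needs... */
    size_t size = wcstombs(NULL, wideBuf, 0);
    if (size == (size_t) -1)
    {
        printf("the name contains characters the current code page cannot represent\n");
        return -1;
    }

    /* ...then allocate and convert */
    char *buf = (char *) malloc(size + 1);
    if (buf == NULL)
        return -1;
    wcstombs(buf, wideBuf, size + 1);
    printf("host name: %s\n", buf);
    free(buf);
    return 0;
}

Alternatively, the Win32 function WideCharToMultiByte() can do the same conversion without depending on the C locale at all.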
I have a problem with extracting a signed int from a string in C++.
Assuming I have a string such as images1234, how can I extract the 1234 from the string without knowing the position of the last non-numeric character in C++?
FYI, I have tried stringstream as well as lexical_cast as suggested by others through the post, but stringstream returns 0 while lexical_cast stopped working.
int main()
{
    string virtuallive("Images1234");
    //stringstream output(virtuallive.c_str());
    //int i = stoi(virtuallive);
    //stringstream output(virtuallive);
    int i;
    i = boost::lexical_cast<int>(virtuallive.c_str());
    //output >> i;
    cout << i << endl;
    return 0;
}
How can I extract the 1234 from the string without knowing the position of the last non-numeric character in C++?
You can't. But the position is not hard to find:
auto last_non_numeric = input.find_last_not_of("1234567890");
char* endp = &input[0];
if (last_non_numeric != std::string::npos)
    endp += last_non_numeric + 1;
if (!*endp) { /* FAILURE, no number on the end */ }
auto i = strtol(endp, &endp, 10);
if (*endp) { /* weird FAILURE, maybe the number was really HUGE and couldn't convert */ }
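Put together as a runnable example with the question's string (variable names are mine):

#include <cstdlib>
#include <iostream>
#include <string>

int main() {
    std::string input = "Images1234";

    auto last_non_numeric = input.find_last_not_of("1234567890");
    char* endp = &input[0];
    if (last_non_numeric != std::string::npos)
        endp += last_non_numeric + 1;

    if (!*endp) {
        std::cerr << "no number on the end\n";
        return 1;
    }
    long i = std::strtol(endp, &endp, 10);
    std::cout << i << '\n';   // prints 1234
}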
Another possibility would be to put the string into a stringstream, then read the number from the stream (after imbuing the stream with a locale that classifies everything except digits as white space).
// First the desired facet:
struct digits_only: std::ctype<char> {
    digits_only(): std::ctype<char>(get_table()) {}

    static std::ctype_base::mask const* get_table() {
        // everything is white-space:
        static std::vector<std::ctype_base::mask>
            rc(std::ctype<char>::table_size, std::ctype_base::space);
        // except digits, which are digits ('9' + 1 so that '9' itself is included):
        std::fill(&rc['0'], &rc['9'] + 1, std::ctype_base::digit);
        // and '.', which we'll call punctuation:
        rc['.'] = std::ctype_base::punct;
        return &rc[0];
    }
};
Then the code to read the data:
std::istringstream virtuallive("Images1234");
virtuallive.imbue(std::locale(std::locale(), new digits_only));

int number;
// Since we classify the letters as white space, the stream will ignore them.
// We can just read the number as if nothing else were there:
virtuallive >> number;
This technique is useful primarily when the stream contains a substantial amount of data, and you want all the data in that stream to be interpreted in the same way (e.g., only read numbers, regardless of what else it might contain).
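For instance, a minimal sketch of that idea, assuming the digits_only facet defined above is in scope (the facet itself additionally needs <vector> and <algorithm>; the input string is just an example):

#include <iostream>
#include <locale>
#include <sstream>

int main() {
    std::istringstream in("width=640 height=480 Images1234");
    in.imbue(std::locale(in.getloc(), new digits_only));

    int value;
    while (in >> value)              // letters, '=' and spaces are all "white space" now
        std::cout << value << '\n';  // prints 640, 480 and 1234, one per line
}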
My PowerBuilder version is 6.5; I cannot use a higher version, as this is what I am supporting.
My problem is that when I do dw_1.ImportFile(file), the first row, first column has a funny string like this:

I didn't understand this until I tried opening the file, saving it to a new text file, and importing that new file, which worked flawlessly, without the funny string.
My conclusion is that this is happening because the original file is UTF-8 (as shown in Notepad++) while the new file is ANSI. The file I am trying to import is automatically provided by a 3rd party, and my users don't want the extra job of doing this conversion themselves.
How do I force-convert these files to ANSI in PowerBuilder? If there is no way, I might have to do a command prompt conversion; any ideas?
The weird  characters are the (optional) UTF-8 BOM that tells editors the file is UTF-8 encoded (it can be difficult to know that otherwise, unless the file contains an encoded character above code 127). You cannot just strip it off, because if your file contains any character above 127 (accents or any special char), you will still have garbage in your displayed data (for example: é -> é, € -> €, ...), where each special character becomes 2 to 4 garbage chars.
I recently needed to convert some UTF-8 encoded strings to "ANSI" Windows-1252 encoding. With PB 10 and later, re-encoding between UTF-8 and ANSI is as simple as
b = blob(s, encodingutf8!)
s2 = string(b, encodingansi!)
But string() and blob() do not support an encoding argument before release 10 of PB.
What you can do is read the file yourself, skip the BOM, ask Windows to convert the string encoding via MultiByteToWideChar() + WideCharToMultiByte(), and load the converted string into the DW with ImportString().
Proof of concept to get the file contents (with this reading method, the file cannot be bigger than 2GB):
string ls_path, ls_file, ls_chunk, ls_ansi
ls_path = sle_path.text
int li_file

if not fileexists(ls_path) then return

li_file = FileOpen(ls_path, streammode!)
if li_file > 0 then
    FileSeek(li_file, 3, FromBeginning!)  //skip the utf-8 BOM
    //read the file by blocks, FileRead is limited to 32kB
    do while FileRead(li_file, ls_chunk) > 0
        ls_file += ls_chunk  //concatenating in a loop works but is not very performant
    loop
    FileClose(li_file)
    ls_ansi = utf8_to_ansi(ls_file)
    dw_tab.importstring(text!, ls_ansi)
end if
utf8_to_ansi() is a global function. It was written for PB 9, but it should work the same with PB 6.5:
global type utf8_to_ansi from function_object
end type
type prototypes
function ulong MultiByteToWideChar(ulong CodePage, ulong dwflags, ref string lpmultibytestr, ulong cchmultibyte, ref blob lpwidecharstr, ulong cchwidechar) library "kernel32.dll"
function ulong WideCharToMultiByte(ulong CodePage, ulong dwFlags, ref blob lpWideCharStr, ulong cchWideChar, ref string lpMultiByteStr, ulong cbMultiByte, ref string lpDefaultChar, ref boolean lpUsedDefaultChar) library "kernel32.dll"
end prototypes
forward prototypes
global function string utf8_to_ansi (string as_utf8)
end prototypes
global function string utf8_to_ansi (string as_utf8);
//convert utf-8 -> ansi
//use a wide-char native string as pivot
constant ulong CP_ACP = 0
constant ulong CP_UTF8 = 65001
string ls_wide, ls_ansi, ls_null
blob lbl_wide
ulong ul_len
boolean lb_flag
setnull(ls_null)
lb_flag = false
//get utf-8 string length converted as wide-char
setnull(lbl_wide)
ul_len = multibytetowidechar(CP_UTF8, 0, as_utf8, -1, lbl_wide, 0)
//allocate buffer to let windows write into
ls_wide = space(ul_len * 2)
lbl_wide = blob(ls_wide)
//convert utf-8 -> wide char
ul_len = multibytetowidechar(CP_UTF8, 0, as_utf8, -1, lbl_wide, ul_len)
//get the final ansi string length
setnull(ls_ansi)
ul_len = widechartomultibyte(CP_ACP, 0, lbl_wide, -1, ls_ansi, 0, ls_null, lb_flag)
//allocate buffer to let windows write into
ls_ansi = space(ul_len)
//convert wide-char -> ansi
ul_len = widechartomultibyte(CP_ACP, 0, lbl_wide, -1, ls_ansi, ul_len, ls_null, lb_flag)
return ls_ansi
end function
I'd like to read a file line by line. I have fgets() working okay, but I'm not sure what to do if a line is longer than the buffer size I've passed to fgets(). Furthermore, since fgets() doesn't seem to be Unicode-aware, and I want to allow UTF-8 files, might it miss line endings and read the whole file?
Then I thought I'd use getline(). However, I'm on Mac OS X, and while getline() is specified in /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.8.sdk/usr/include/stdio.h, it's not in /usr/include/stdio, so gcc doesn't find it in the shell. And it's not particularly portable, obviously, and I'd like the library I'm developing to be generally useful.
So what's the best way to read a file line-by-line in C?
First of all, it's very unlikely that you need to worry about non-standard line terminators like U+2028. Normal text files are not expected to contain them, and the overwhelming majority of all existing software that reads normal text files doesn't support them. You mention getline(), which is available in glibc but not in macOS's libc, and it would surprise me if getline() supported such fancy line terminators. It's almost a certainty that you can get away with supporting just LF (U+000A) and maybe also CR+LF (U+000D U+000A). To do that, you don't need to care about UTF-8: every byte of a multi-byte UTF-8 character has its high bit set, so a UTF-8 stream can never contain a stray 0x0A byte that isn't a real line feed. That's the beauty of UTF-8's ASCII compatibility, and it's by design.
As for supporting lines that are longer than the buffer you pass to fgets(), you can do this with a little extra logic around fgets. In pseudocode:
while true {
    fgets(buffer, size, stream);
    dynamically_allocated_string = strdup(buffer);

    while the last char (before the terminating NUL) in the buffer is not '\n' {
        /* the current line is not finished. read more of it */
        fgets(buffer, size, stream);
        concatenate the contents of buffer to the dynamically allocated string
    }

    process the whole line, as found in the dynamically allocated string
}
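For what it's worth, a rough real-code version of that pseudocode might look like this (the function name, the 4096-byte chunk size and returning NULL at end of file are my own choices, not anything standard):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Reads one '\n'-terminated line of any length from stream.
 * Returns a malloc'd string the caller must free(), or NULL at end of file / on error. */
char *read_line(FILE *stream)
{
    char chunk[4096];
    char *line = NULL;
    size_t len = 0;

    while (fgets(chunk, sizeof chunk, stream)) {
        size_t chunk_len = strlen(chunk);
        char *bigger = (char *) realloc(line, len + chunk_len + 1);
        if (bigger == NULL) {
            free(line);
            return NULL;
        }
        line = bigger;
        memcpy(line + len, chunk, chunk_len + 1);  /* also copies the '\0' */
        len += chunk_len;
        if (len > 0 && line[len - 1] == '\n')
            break;  /* the line is complete */
    }
    return line;  /* NULL if nothing was read before end of file */
}

A caller loops while the returned pointer is non-NULL and free()s each line, much like the getline_from() usage shown in the next answer.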
But again, I think you will find that there's really quite a lot of software out there that simply doesn't bother with that, from software that parses system config files like /etc/passwd to (some) scripting languages. Depending on your use case, it may very well be good enough to use a "big enough" buffer (e.g. 4096 bytes) and declare that you don't support lines longer than that. You can even call it a security feature (a line length limit is protection against resource exhaustion attacks from a crafted input file).
Based on this answer, here's what I've come up with:
#define LINE_BUF_SIZE 1024

char * getline_from(FILE *fp) {
    char * line = malloc(LINE_BUF_SIZE), * linep = line;
    size_t lenmax = LINE_BUF_SIZE, len = lenmax;
    int c;

    if(line == NULL)
        return NULL;

    for(;;) {
        c = fgetc(fp);
        if(c == EOF) {
            if(line == linep) {
                // Nothing was read: end of file (or error).
                // Return NULL so the caller's loop terminates.
                free(linep);
                return NULL;
            }
            break;
        }

        if(--len == 0) {
            // Buffer is full: double its size and keep reading.
            len = lenmax;
            char * linen = realloc(linep, lenmax *= 2);

            if(linen == NULL) {
                // Fail.
                free(linep);
                return NULL;
            }
            line = linen + (line - linep);
            linep = linen;
        }

        if((*line++ = c) == '\n')
            break;
    }
    *line = '\0';
    return linep;
}
To read stdin:
char *line;
while ( (line = getline_from(stdin)) != NULL ) {
    // do stuff
    free(line);
}
To read some other file, I first open it with fopen():
FILE *fp;
fp = fopen(filename, "rb");
if (!fp) {
    fprintf(stderr, "Cannot open %s: ", filename);
    perror(NULL);
    exit(1);
}
char *line;
while ( (line = getline_from(fp)) != NULL ) {
    // do stuff
    free(line);
}
This works very nicely for me. I'd love to see an alternative that uses fgets() as suggested by #paul-tomblin, but I don't have the energy to figure it out tonight.
I've got a form with a Listbox which contains lines of four words.
When I click on one line, these words should be seen in four different textboxes.
So far, I've got everything working, yet I have a problem with converting chars.
The string from the listbox is a UnicodeString but the strtok uses a char[].
The compiler tells me "Cannot Convert UnicodeString to Char[]". This is the code I am using:
{
    int a;
    UnicodeString b;
    char * pch;
    int c;

    a = DatabaseList->ItemIndex;          //databaselist is the listbox
    b = DatabaseList->Items->Strings[a];
    char str[] = b;                       //This is the part that fails: it's a UnicodeString, not a char[].
    pch = strtok(str, " ");
    c = 1;
    while (pch != NULL)
    {
        if (c == 1)
        {
            ServerAddress->Text = pch;
        } else if (c == 2)
        {
            DatabaseName->Text = pch;
        } else if (c == 3)
        {
            Username->Text = pch;
        } else if (c == 4)
        {
            Password->Text = pch;
        }
        pch = strtok(NULL, " ");
        c = c + 1;
    }
}
I know my code doesn't look nice, pretty bad actually. I'm just learning some programming in C++.
How can I convert this?
strtok actually modifies your char array, so you will need to construct an array of characters you are allowed to modify. Referencing directly into the UnicodeString will not work.
// first convert to AnsiString instead of Unicode.
AnsiString ansiB(b);
// allocate enough memory for your char array (and the null terminator)
char* str = new char[ansiB.Length()+1];
// copy the contents of the AnsiString into your char array
strcpy(str, ansiB.c_str());
// the rest of your code goes here
// remember to delete your char array when done
delete[] str;
This works for me and saves me converting to AnsiString:
// Using a static buffer
#define MAX_SIZE 256
UnicodeString ustring = "Convert me";
char mbstring[MAX_SIZE];
wcstombs(mbstring,ustring.c_str(),MAX_SIZE);
// Using dynamic buffer
char *dmbstring;
dmbstring = new char[ustring.Length() + 1];  // note: assumes each character converts to a single byte
wcstombs(dmbstring,ustring.c_str(),ustring.Length() + 1);
// use dmbstring
delete[] dmbstring;
I am using a toolkit to do some Elliptic Curve Cryptography on an ATmega2560. When trying to use the print functions in the toolkit, I get an empty string. I know the print functions work, because the x86 version prints the variables without a problem. I am not experienced with the ATmega and would love any help on this matter. The print code is included below.
Code to print a big number (it in turn calls util_print):
void bn_print(bn_t a) {
    int i;

    if (a->sign == BN_NEG) {
        util_print("-");
    }
    if (a->used == 0) {
        util_print("0\n");
    } else {
#if WORD == 64
        util_print("%lX", (unsigned long int)a->dp[a->used - 1]);
        for (i = a->used - 2; i >= 0; i--) {
            util_print("%.*lX", (int)(2 * (BN_DIGIT / 8)),
                    (unsigned long int)a->dp[i]);
        }
#else
        util_print("%llX", (unsigned long long int)a->dp[a->used - 1]);
        for (i = a->used - 2; i >= 0; i--) {
            util_print("%.*llX", (int)(2 * (BN_DIGIT / 8)),
                    (unsigned long long int)a->dp[i]);
        }
#endif
        util_print("\n");
    }
}
The code that actually does the printing:
static char buffer[64 + 1];

void util_printf(char *format, ...) {
#ifndef QUIET
#if ARCH == AVR
    char *pointer = &buffer[1];
    va_list list;
    va_start(list, format);
    vsnprintf(pointer, 128, format, list);
    buffer[0] = (unsigned char)2;
    va_end(list);
#elif ARCH == MSP
    va_list list;
    va_start(list, format);
    vprintf(format, list);
    va_end(list);
#else
    va_list list;
    va_start(list, format);
    vprintf(format, list);
    fflush(stdout);
    va_end(list);
#endif
#endif
}
Edit: I do have UART initialized and can output printf statements to a console.
I'm one of the authors of the RELIC toolkit. The current util_printf() function is used to print inside the Avrora simulator, for debugging purposes. I'm glad that you could adapt the code to your purposes. As a side note, the buffer size problem was already fixed in more recent releases of the toolkit.
Let me know if you have further problems with the library. You can either contact me personally or write directly to the discussion group.
Thank you!
vsnprintf stores its output in the given buffer (which in this case is the address pointed to by the pointer variable). In order for it to show on the console (through UART), you must send the buffer out with printf (try adding printf("%s", pointer) after the vsnprintf call). If you're using avr-libc, don't forget to initialize the stdout stream first before making any call to printf.
Oh, and by the way, your code is vulnerable to a buffer overflow: buffer[64 + 1] means your buffer size is only 65 bytes, while vsnprintf(pointer, 128, format, list) tells vsnprintf it may write up to 128 bytes. Change that limit to 64 bytes or less (only 64 bytes are available after &buffer[1]) to avoid the overflow.
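To illustrate, a minimal sketch of what the AVR branch of util_printf could look like with both suggestions applied (this is only an illustration based on the code in the question, not the toolkit's actual fix):

#if ARCH == AVR
    char *pointer = &buffer[1];
    va_list list;
    va_start(list, format);
    /* never write more than the 64 bytes available after &buffer[1] */
    vsnprintf(pointer, sizeof(buffer) - 1, format, list);
    buffer[0] = (unsigned char)2;
    va_end(list);
    /* actually push the formatted text out over the UART-backed stdout */
    printf("%s", pointer);
#endif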
Alright, so I found a workaround to print the bn numbers to stdout on an ATmega2560. The toolkit comes with a function that writes a variable to a string (bn_write_str), so I implemented my own print function as follows:
void print_bn(bn_t a)
{
    char print[BN_SIZE];             // max precision of a bn number
    int bi = bn_bits(a);             // get the number of bits of the number
    bn_write_str(print, bi, a, 16);  // 16 indicates the radix (hexadecimal)
    printf("%s\n", print);
}
This function will print a bn number in hexadecimal format.
Hope this helps anyone using the RELIC toolkit with an AVR.
This skips the util_print calls.