How to convert a number to a UTF-8 char?
It can't be a CTFE-only function, because the input parameter varies at run time. I guess there must be a std function for this, but I can't find it. Thanks.
import std.stdio, std.conv;

char utf8_RT(int nbr)
{
    // Fails at run time: 2665 does not fit into a single char
    // (to!char throws a ConvOverflowException).
    return to!char(nbr);
}

void main(string[] args)
{
    assert( utf8_RT(2665) == '\u2665' );
}
obviously fails.
I don't know what you mean by "UTF-8 char".
If you want a UTF-32 character (i.e. dchar), you can simply use dchar(2665).
If you want a UTF-8 string (one Unicode character encoded as one or more UTF-8 code units, i.e. bytes), you can use to!string(dchar(2665)). Don't forget to import std.conv.
Replace 2665 with the name of your int variable, naturally.
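In other words, a minimal sketch of the fixed function, assuming what you want is the UTF-8 encoding as a string:

import std.conv : to;

string utf8_RT(int nbr)
{
    // dchar(nbr) treats the number as a Unicode code point;
    // to!string encodes that single code point as UTF-8.
    return to!string(dchar(nbr));
}

void main()
{
    assert( utf8_RT(2665) == "\u2665" );
}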
Related
Is there any CAPL function for converting a decimal value into a hexadecimal value? I have already looked in the help option of the CAPL browser.
Assuming that you want the number converted to a string that you can print out, either to the write window, into the test report, etc.
You can use snprintf like this:
snprintf(buffer,elcount(buffer),"%x",integervariable);
where buffer is a char array that is big enough.
This example is taken from the Vector knowledge base and was among the first results on Google.
For the hexadecimal equivalent value:
You can make use of the _pow function (returns x to the power of y) and a while loop in the following way, which would return you the hexadecimal equivalent value:
double decToHexEquivalent(int n)
{
    int counter = 0, remainder;
    double hexadecimal_number = 0;
    while (n != 0)
    {
        remainder = n % 16;   // take the lowest hex digit
        // append it at the next decimal position (this only works while every
        // hex digit is 0..9; digits A..F cannot be represented this way)
        hexadecimal_number = hexadecimal_number + remainder * _pow(10, counter);
        n = n / 16;           // drop the lowest hex digit
        ++counter;
    }
    return hexadecimal_number;
}
you can call the above function in the following way:
testfunction xyz(int n)
{
    write("Hexadecimal: %d", (int)decToHexEquivalent(n));
}
Caution: not tested
For the hexadecimal value as a string:
Declare a global variable char buffer[100] in the variables section
variables
{
char buffer[100];
}
and then, using the snprintf function, you can convert an integer variable to a character array, like this:
void dectohexValue(int decimal_number)
{
snprintf(buffer,elcount(buffer),"%02X",decimal_number);
}
then finally you can use the function as follows:
testfunction xyz(int n)
{
dectohexValue(n);
write("Hexadecimal:%s", buffer);
}
If the code below is compiled with UNICODE as a compiler option, the GetComputerNameEx API returns junk characters.
Whereas if it is compiled without the UNICODE option, the API returns a truncated value of the hostname.
This issue is mostly seen with Asia-Pacific languages like Chinese, Japanese, and Korean, to name a few (i.e., non-English).
Can anyone shed some light on how this issue can be resolved?
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

#define INFO_SIZE 30

int main()
{
    int ret;
    TCHAR infoBuf[INFO_SIZE+1];
    DWORD bufSize = (INFO_SIZE+1);
    char *buf;
    buf = (char *) malloc(INFO_SIZE+1);
    if (!GetComputerNameEx((COMPUTER_NAME_FORMAT)1,
        (LPTSTR)infoBuf, &bufSize))
    {
        printf("GetComputerNameEx failed (%lu)\n", GetLastError());
        return -1;
    }
    /* In a UNICODE build infoBuf holds wide characters,
       so it has to be converted to a multibyte string. */
    ret = wcstombs(buf, infoBuf, (INFO_SIZE+1));
    buf[INFO_SIZE] = '\0';
    return 0;
}
In the languages you mentioned, most characters are represented by more than one byte. This is because these languages have alphabets of much more than 256 characters. So you may need more than 30 bytes to encode 30 characters.
The usual pattern for calling a function like wcstombs goes like this: first get the amount of bytes required, then allocate a buffer, then convert the string.
(edit: that actually relies on a POSIX extension, which also got implemented on Windows)
// First call: pass NULL to ask how many bytes the conversion needs.
size_t size = wcstombs(NULL, infoBuf, 0);
if (size == (size_t) -1) {
    // some character can't be converted
}
// Then allocate a buffer (+1 for the terminating null) and convert.
char *buf = new char[size + 1];
size = wcstombs(buf, infoBuf, size + 1);
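Putting the pieces together, here is a minimal sketch of the whole sequence. It assumes a UNICODE build (the W-suffixed API makes that explicit) and a wide buffer of 256 characters, and it calls setlocale first, because wcstombs converts according to the current locale and the default "C" locale fails on most non-ASCII characters:

#include <windows.h>
#include <clocale>
#include <cstdio>
#include <cstdlib>

int main()
{
    // Use the user's default locale so wcstombs can encode the local language.
    setlocale(LC_ALL, "");

    WCHAR infoBuf[256];
    DWORD bufSize = 256;
    // ComputerNameDnsHostname is the named constant for the format value 1
    // used in the question.
    if (!GetComputerNameExW(ComputerNameDnsHostname, infoBuf, &bufSize))
    {
        printf("GetComputerNameEx failed (%lu)\n", GetLastError());
        return -1;
    }
    // First call: query the required number of bytes.
    size_t size = wcstombs(NULL, infoBuf, 0);
    if (size == (size_t) -1)
    {
        printf("the name contains characters the locale can't encode\n");
        return -1;
    }
    // Second call: convert into a buffer of exactly the right size.
    char *buf = new char[size + 1];
    wcstombs(buf, infoBuf, size + 1);
    printf("%s\n", buf);
    delete[] buf;
    return 0;
}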
I am using snprintf to format a string using a user-defined format (also given as a string). The code looks like this:
void DataPoint::valueReceived( QVariant value ) {
    // Get the format property, which is only considered valid if it's a string
    QVariant format = this->property("format");
    if( format.isValid() && format.type()==QMetaType::QString && !format.isNull() ) {
        // Convert QString to std::string
        const std::string formatStr = format.toString().toStdString();
        LOGMTRTTIINFO(pointName<<"="<<value.toString().toUtf8().constData()<<"=>"<<formatStr<<"["<<formatStr.length()<<"]"<<'\n');
        // The attempt to catch exceptions caused by an invalid formatting string
        try {
            if( value.type() == QMetaType::QString ) {
                // Treat the value as a string (values are always ASCII)
                const std::string array = value.toString().toStdString();
                const char* data = array.c_str();
                // Assume no more than 10 characters are added during formatting.
                char* result = (char*)calloc(array.length()+10, sizeof(char));
                snprintf(result, array.length()+10, formatStr.c_str(), data);
                value = result;
                free(result); // QVariant made its own copy of the string
            }
            // If not a string, then it's a number.
            else {
                double data = value.toDouble();
                char* result = (char*)calloc(30, sizeof(char));
                // Even 15 characters is already longer than the largest number one can make any sense of
                snprintf(result, 30, formatStr.c_str(), data);
                LOGMTRTTIINFO(pointName<<"="<<data<<"=>"<<formatStr<<"["<<formatStr.length()<<"]=>"<<result<<'\n');
                value = result;
                free(result); // QVariant made its own copy of the string
            }
        } catch(...) {
            LOGMTRTTIERR("Format error in "<<pointName<<'\n');
        }
    }
    ui->value->setText(value.toString());
}
As you can see, I assumed there would be some exception. But there is none; an invalid formatting string just results in gibberish, for example when I try to format a double using %s.
So is there a way to detect that an invalid formatting option was selected, such as formatting a number as a string or vice versa? And what if a totally invalid formatting string is given?
You ask if it's possible to detect a format/argument mismatch at run time, right? Then the short and only answer is no.
To expand on that "no": it's because variable-argument functions (functions using the ellipsis ...) have no kind of type safety. The compiler will convert some types of arguments to others (e.g. char or short will be converted to int, float will be converted to double), and if you use a literal string for the format, some compilers are able to parse the string and check the arguments you pass.
However, since you pass a variable string that can change at run time, the compiler has no possibility of doing any kind of compile-time checking, and the function must trust that the format string passed uses the correct formatting for the arguments passed. If it doesn't, you have undefined behavior.
It should be noted that snprintf might not actually fail when passed a mismatching format specifier and argument value.
For example, if using the %d format to print an int value but then passing a double value, snprintf would happily extract sizeof(int) bytes from the double value and interpret them as an int value. The value printed will be quite unexpected, but there won't be a "failure" as such, only undefined behavior (as mentioned above).
Thus it's not really possible to detect such errors or problems at all, at least not through the code. This is something that needs proper testing and code review to catch.
What happens when snprintf fails? POSIX requires that errno is set:
If an output error was encountered, these functions shall return a negative value and set errno to indicate the error.
You can also find some relevant information regarding how to handle snprintf failures here.
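To make that concrete, here is a minimal sketch of the only run-time check snprintf itself offers, its return value (plus errno on POSIX systems); it catches output errors and truncation, but, as explained above, not a format/argument mismatch:

#include <cerrno>
#include <cstdio>
#include <cstring>

int main()
{
    char buf[8];
    errno = 0;
    int n = std::snprintf(buf, sizeof buf, "%.2f", 3.14159);
    if (n < 0) {
        // Output error: POSIX requires errno to be set in this case.
        std::printf("snprintf failed: %s\n", std::strerror(errno));
    } else if ((size_t)n >= sizeof buf) {
        // Not an error: the output was merely truncated to fit the buffer.
        std::printf("truncated, needed %d bytes\n", n);
    } else {
        std::printf("formatted: %s\n", buf);
    }
    return 0;
}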
I have a problem regarding extracting a signed int from a string in C++.
Assuming I have a string like images1234, how can I extract the 1234 from the string without knowing the position of the last non-numeric character in C++?
FYI, I have tried stringstream as well as lexical_cast, as suggested in other posts, but stringstream returns 0 while lexical_cast stops working.
#include <iostream>
#include <sstream>
#include <string>
#include <boost/lexical_cast.hpp>

using namespace std;

int main()
{
    string virtuallive("Images1234");
    //stringstream output(virtuallive.c_str());
    //int i = stoi(virtuallive);
    //stringstream output(virtuallive);
    int i;
    i = boost::lexical_cast<int>(virtuallive.c_str()); // throws bad_lexical_cast
    //output >> i;
    cout << i << endl;
    return 0;
}
How can I extract the 1234 from the string without knowing the position of the last non-numeric character in C++?
You can't. But the position is not hard to find:
auto last_non_numeric = input.find_last_not_of("1234567890");
char* endp = &input[0];
if (last_non_numeric != std::string::npos)
    endp += last_non_numeric + 1;
if (!*endp) { /* FAILURE, no number on the end */ }
errno = 0;
auto i = strtol(endp, &endp, 10);
if (errno == ERANGE) { /* FAILURE, the number was really HUGE and couldn't convert */ }
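As a self-contained sketch, with the question's "Images1234" filled in for input (everything else is as in the fragment above):

#include <cerrno>
#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    std::string input("Images1234");
    auto last_non_numeric = input.find_last_not_of("1234567890");
    char* endp = &input[0];
    if (last_non_numeric != std::string::npos)
        endp += last_non_numeric + 1;
    if (!*endp) {
        std::cerr << "no number on the end\n";
        return 1;
    }
    errno = 0;
    long i = std::strtol(endp, &endp, 10);
    if (errno == ERANGE) {
        std::cerr << "number out of range\n";
        return 1;
    }
    std::cout << i << '\n';   // prints 1234
    return 0;
}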
Another possibility would be to put the string into a stringstream, then read the number from the stream (after imbuing the stream with a locale that classifies everything except digits as white space).
// First the desired facet:
#include <algorithm>
#include <locale>
#include <vector>

struct digits_only: std::ctype<char> {
    digits_only(): std::ctype<char>(get_table()) {}

    static std::ctype_base::mask const* get_table() {
        // everything is white-space:
        static std::vector<std::ctype_base::mask>
            rc(std::ctype<char>::table_size, std::ctype_base::space);
        // except digits, which are digits ('9' + 1 so '9' itself is included):
        std::fill(&rc['0'], &rc['9'] + 1, std::ctype_base::digit);
        // and '.', which we'll call punctuation:
        rc['.'] = std::ctype_base::punct;
        return &rc[0];
    }
};
Then the code to read the data:
std::istringstream virtuallive("Images1234");
virtuallive.imbue(std::locale(std::locale(), new digits_only));
int number;
// Since we classify the letters as white space, the stream will ignore them.
// We can just read the number as if nothing else were there:
virtuallive >> number;
This technique is useful primarily when the stream contains a substantial amount of data, and you want all the data in that stream to be interpreted in the same way (e.g., only read numbers, regardless of what else it might contain).
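For completeness, the two fragments above combined into one compilable sketch (again with the question's input):

#include <algorithm>
#include <iostream>
#include <locale>
#include <sstream>
#include <vector>

struct digits_only: std::ctype<char> {
    digits_only(): std::ctype<char>(get_table()) {}
    static std::ctype_base::mask const* get_table() {
        static std::vector<std::ctype_base::mask>
            rc(std::ctype<char>::table_size, std::ctype_base::space);
        std::fill(&rc['0'], &rc['9'] + 1, std::ctype_base::digit);
        rc['.'] = std::ctype_base::punct;
        return &rc[0];
    }
};

int main()
{
    std::istringstream virtuallive("Images1234");
    virtuallive.imbue(std::locale(std::locale(), new digits_only));
    int number = 0;
    virtuallive >> number;         // "Images" is skipped as white space
    std::cout << number << '\n';   // prints 1234
    return 0;
}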
I've got a form with a listbox which contains lines of four words.
When I click on one line, these words should appear in four different textboxes.
So far I've got everything working, yet I have a problem with the character conversion.
The string from the listbox is a UnicodeString, but strtok uses a char[].
The compiler tells me it "Cannot Convert UnicodeString to Char[]". This is the code I am using:
{
    int a;
    UnicodeString b;
    char * pch;
    int c;

    a = DatabaseList->ItemIndex; // DatabaseList is the listbox
    b = DatabaseList->Items->Strings[a];
    char str[] = b; // This is the part that fails, telling me it's a UnicodeString and not a char[].
    pch = strtok (str, " ");
    c = 1;
    while (pch != NULL)
    {
        if (c==1)
        {
            ServerAddress->Text = pch;
        } else if (c==2)
        {
            DatabaseName->Text = pch;
        } else if (c==3)
        {
            Username->Text = pch;
        } else if (c==4)
        {
            Password->Text = pch;
        }
        pch = strtok (NULL, " ");
        c = c + 1;
    }
}
I know my code doesn't look nice, pretty bad actually. I'm just learning some programming in C++.
How can I convert this?
strtok actually modifies your char array, so you will need to construct an array of characters you are allowed to modify. Referencing directly into the UnicodeString will not work.
// first convert to AnsiString instead of Unicode.
AnsiString ansiB(b);
// allocate enough memory for your char array (and the null terminator)
char* str = new char[ansiB.Length()+1];
// copy the contents of the AnsiString into your char array
strcpy(str, ansiB.c_str());
// the rest of your code goes here
// remember to delete your char array when done
delete[] str;
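A sketch of how this plugs into the question's loop, reusing the names from the question (b is the UnicodeString taken from the listbox):

AnsiString ansiB(b);
char* str = new char[ansiB.Length() + 1];
strcpy(str, ansiB.c_str());

int c = 1;
for (char* pch = strtok(str, " "); pch != NULL; pch = strtok(NULL, " "), ++c)
{
    if      (c == 1) ServerAddress->Text = pch;
    else if (c == 2) DatabaseName->Text  = pch;
    else if (c == 3) Username->Text      = pch;
    else if (c == 4) Password->Text      = pch;
}

delete[] str;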
This works for me and saves me converting to AnsiString:
// Using a static buffer
#define MAX_SSIZE 256
UnicodeString ustring = "Convert me";
char mbstring[MAX_SSIZE];
wcstombs(mbstring, ustring.c_str(), MAX_SSIZE);

// Using a dynamic buffer
// (note: multibyte characters can take more than one byte each, so
// Length()+1 bytes is only guaranteed to be enough for ASCII content)
char *dmbstring;
dmbstring = new char[ustring.Length() + 1];
wcstombs(dmbstring, ustring.c_str(), ustring.Length() + 1);
// use dmbstring
delete[] dmbstring;