I need to show a base64 key in a TMemo. Unfortunately, it is impossible to display this base64 string properly: it is cut off at every '/' or '+', where the memo systematically starts a new line as if a carriage return had been inserted!
I tried everything I know to keep this string in one long line (without carriage returns), but unsuccessfully.
How can I obtain a flat base64 string (without carriage returns) in the memo, ideally reflowing automatically when the form and the TMemo are resized?
Many thanks.
For those who are interested, the code is below: a TForm with a TMemo (memo). This solution works for me for a flat Base64 string; at last, the string is no longer cut off at every '/' or '+'.
Maybe the solution below needs tuning, but it works well enough for me. Of course, before treating the b64 string in an application it needs to be filtered to eliminate CR-LF, but that's OK.
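That CR-LF filter can be a one-liner. A minimal sketch (an illustration, not part of the original solution, assuming C++Builder's System::SysUtils::StringReplace is available; b64 is a hypothetical UnicodeString holding the encoded key):

//Strip the CR and LF characters that many Base64 encoders insert
//every 64 or 76 characters, leaving one flat string:
String flat = StringReplace(b64, "\r", "", TReplaceFlags() << rfReplaceAll);
flat = StringReplace(flat, "\n", "", TReplaceFlags() << rfReplaceAll);
memo->Text = flat;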
I use the TMemo events OnKeyDown, OnResize and OnPainting.
I wrote a specific function, formatMemo(..), which does the job of aligning the lines appropriately.
The code accepts only true B64 characters and filters out faulty characters, if any.
#define IS_B64(c) (isalnum(c) || (c == '/') || (c == '+') || (c == '='))
//Adjustments work for Courier New, standard size:
const float FW=7.2f;//Font width
const int diff=25;//Room for the vertical scroll bar
//Gives the number of characters in one line of the TMemo:
// width : width in pixels where to put the line of chars
// font_sz : the average width of a character
// returns the number of characters by line of the TMemo
inline int nchars(int width, float font_sz)
{
    return int(float(width-diff)/font_sz);
}//nchars
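//Hedged refinement (not part of the original): rather than hard-coding
//FW=7.2 for Courier New, the average glyph width could be measured from
//the control's canvas. This assumes the FMX Canvas is usable at call time;
//"0" stands in for a representative glyph of a monospaced font.
inline float measuredCharWidth(TMemo *m)
{
    m->Canvas->Font->Assign(m->TextSettings->Font);
    return m->Canvas->TextWidth("0");
}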
//---------------------------------------------------------------------------
//Formats the memo to a certain length of characters:
// *p : the memo to format
// nc : the number of characters for each line.
void formatMemo(TMemo *p, int nc)
{
    if(p==NULL || nc<=0) return;//nc<=0 would divide by zero below
    AnsiString dest;//UnicodeString is less fast...
    const AnsiString text=p->Text;//Fetch the text once
    //Filter everything as B64 only:
    for(int i=1; i<=text.Length(); ++i) {//Indexing is 1-based as in Delphi (except on mobile)
        if(IS_B64(text[i])) dest += text[i];
    }
    p->Lines->Clear();//Erases everything
    int length=dest.Length(), units=length/nc, remain=length%nc;
    for(int k=0; k<units; ++k) {
        p->Lines->Append( dest.SubString(1+k*nc, nc) );
    }
    if(remain) {
        p->Lines->Append( dest.SubString(1+units*nc, remain) );
    }
}//formatMemo
//---------------------------------------------------------------------------
void __fastcall TForm1::memoKeyDown(TObject *Sender, WORD &Key, System::WideChar &KeyChar,
          TShiftState Shift)
{
    //This event is triggered before the character is inserted into Text.
    //Save the caret position:
    TCaretPosition p={memo->CaretPosition.Line, memo->CaretPosition.Pos};
    memo->Tag=0;//Don't do a format.
    if(Key==0 && !IS_B64(KeyChar))//Printable KeyChar that is not B64
    {
        //Change the entry into '0':
        KeyChar='0';
        KeyDown(Key,KeyChar,Shift);
        //Send a backspace to erase it:
        Key=vkBack; KeyChar=0;
        KeyDown(Key,KeyChar,Shift);
    }
    else memo->Tag=1;//Programs a format in OnPainting
    memo->SetFocus();
    memo->CaretPosition=p;//Reposition the caret
}
//---------------------------------------------------------------------------
//In case of resize, reformat the TMemo
void __fastcall TForm1::memoResize(TObject *Sender)
{
    formatMemo(memo, nchars(memo->Width,FW));
}
//---------------------------------------------------------------------------
void __fastcall TForm1::memoPainting(TObject *Sender, TCanvas *Canvas, const TRectF &ARect)
{
    //We use the Tag of the memo as a flag to plan a reformat.
    if(memo->Tag){//A format was requested by OnKeyDown.
        TCaretPosition p={memo->CaretPosition.Line, memo->CaretPosition.Pos};
        formatMemo(memo, nchars(memo->Width,FW));
        memo->SetFocus();
        memo->CaretPosition=p;
        memo->Tag=0;//Done
    }
}
//---------------------------------------------------------------------------
I am trying to use the MapVirtualKey[A]/[W]/[ExA]/[ExW] API to map a VK_* code to a character by means of its MAPVK_VK_TO_CHAR (2) mode.
I have found that it always returns the 'A'..'Z' characters for VK_A..VK_Z, no matter which keyboard layout is active.
The docs say:
The uCode parameter is a virtual-key code and is translated into an
unshifted character value in the low order word of the return value.
Dead keys (diacritics) are indicated by setting the top bit of the
return value. If there is no translation, the function returns 0.
But I can get neither an unshifted character value nor a non-ASCII character from it.
For other keys it works as described. And this behavior is even more annoying considering that, for example, for the US English keyboard layout it returns:
VK_Q (0x51) -> `Q` (U+0051 Latin Capital Letter Q)
VK_OEM_PERIOD (0xbe) -> `.` (U+002E Full Stop)
But for Russian keyboard layout it returns:
VK_Q (0x51) -> `Q` (U+0051 Latin Capital Letter Q)
^- here it should return `й` (U+0439 Cyrillic Small Letter Short I) according to docs
VK_OEM_PERIOD (0xbe) -> `ю` (U+044E Cyrillic Small Letter Yu)
How to use it properly?
MapVirtualKey has known broken behaviour.
The docs are lying to you about the MAPVK_VK_TO_CHAR (2) mode.
According to experiments and the leaked Windows XP source code (in the \windows\core\ntuser\kernel\xlate.c file), it contains special-cased behaviour for the 'A'..'Z' VKs (those VKs are deliberately not defined in the Win32 WinUser.h header and are equal to the 'A'..'Z' ASCII characters):
case 2:
    /*
     * Bogus Win3.1 functionality: despite SDK documenation, return uppercase for
     * VK_A through VK_Z
     */
    if ((wCode >= (WORD)'A') && (wCode <= (WORD)'Z')) {
        return wCode;
    }
Not sure why MS decided to carry this bug over from Win 3.1, but that is still the situation on my Windows 10.
Also, some keyboard layouts can emit multiple WCHARs on a single key press (UTF-16 surrogate pairs, or ligatures that contain multiple Unicode code points). MapVirtualKey with MAPVK_VK_TO_CHAR fails to return proper values for these keys too; it returns the U+F002 code point in this case.
As a workaround, I can recommend the ToUnicode[Ex] API, which can do this mapping for you:
#include <windows.h>
#include <cstdint>
#include <cstring>
#include <cwctype>
#include <string>

// Returns a UTF-8 string (utf8::narrow is the author's small UTF-16 -> UTF-8 helper)
std::string GetStrFromKeyPress(uint16_t scanCode, bool isShift)
{
    static BYTE keyboardState[256];
    memset(keyboardState, 0, 256);
    if (isShift)
    {
        keyboardState[VK_SHIFT] |= 0x80;
    }
    wchar_t chars[5] = { 0 };
    const UINT vkCode = ::MapVirtualKeyW(scanCode, MAPVK_VSC_TO_VK_EX);
    // This call can produce multiple UTF-16 code units
    // in case of ligatures or non-BMP Unicode chars that have a high and a low surrogate
    // See examples: https://kbdlayout.info/features/ligatures
    int code = ::ToUnicode(vkCode, scanCode, keyboardState, chars, 4, 0);
    if (code < 0)
    {
        // Dead key
        if (chars[0] == 0 || std::iswcntrl(chars[0])) {
            return {};
        }
        code = -code;
    }
    // Clear the dead-key state left in the keyboard buffer
    {
        memset(keyboardState, 0, 256);
        const UINT clearVkCode = VK_DECIMAL;
        const UINT clearScanCode = ::MapVirtualKeyW(clearVkCode, MAPVK_VK_TO_VSC);
        wchar_t tmpChars[5] = { 0 };
        do {} while (::ToUnicode(clearVkCode, clearScanCode, keyboardState, tmpChars, 4, 0) < 0);
    }
    // Do not return control characters
    if (code <= 0 || (code == 1 && std::iswcntrl(chars[0]))) {
        return {};
    }
    return utf8::narrow(chars, code);
}
Or even better: if you have a Win32 message loop, just use TranslateMessage() (which calls ToUnicode() under the hood) and then process the WM_CHAR messages.
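A minimal sketch of that route (assuming a classic Win32 window; OnCharacter is a hypothetical handler, not a Win32 API):

// Standard message loop: TranslateMessage() posts WM_CHAR for WM_KEYDOWN
// using the active keyboard layout and the current keyboard state.
MSG msg;
while (::GetMessageW(&msg, nullptr, 0, 0) > 0)
{
    ::TranslateMessage(&msg);   // generates WM_CHAR / WM_SYSCHAR
    ::DispatchMessageW(&msg);
}

// In the window procedure:
case WM_CHAR:
    // wParam is one UTF-16 code unit; a non-BMP character arrives as
    // two WM_CHAR messages (high surrogate, then low surrogate).
    OnCharacter(static_cast<wchar_t>(wParam));  // hypothetical handler
    return 0;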
PS: The same applies to the GetKeyNameText API, since it calls MapVirtualKey(vk, MAPVK_VK_TO_CHAR) under the hood for keys that do not have an explicit name set in the keyboard layout DLL (usually only non-character keys have names).
I'm writing some code to take in a string, turn it into a char array and then print it back to the user (before passing it to another function).
Currently the code works up to dat.toCharArray(DatTim,datsize); however, the pointer does not seem to be working, as the while loop never fires.
String input = "Test String for Foo";
InputParse(input);

void InputParse(String dat)
{
    //Write Data
    datsize = dat.length()+1;
    const char DatTim[datsize];
    dat.toCharArray(DatTim,datsize);
    //Debug print back
    for(int i=0;i<datsize;i++)
    {
        Serial.write(DatTim[i]);
    }
    Serial.println();
    //Debug pointer print back
    const char *b;
    b=*DatTim;
    while (*b)
    {
        Serial.print(*b);
        b++;
    }
    Foo(*DatTim);
}
I can't figure out the difference between what I have above and the template code provided by Majenko:
void PrintString(const char *str)
{
const char *p;
p = str;
while (*p)
{
Serial.print(*p);
p++;
}
}
The expression *DatTim is the same as DatTim[0]; i.e., it gets the first character in the array, which is then assigned to the pointer b (something the compiler should have warned you about).
Arrays naturally decay to pointers to their first element; that is, DatTim is equal to &DatTim[0].
The simple solution is to do
const char *b = DatTim;
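Applied to the question's code, the debug print-back then becomes (a sketch; Foo is assumed to take a const char*, like Majenko's PrintString):

const char *b = DatTim;   // the array decays to a pointer to its first element
while (*b)
{
    Serial.print(*b);
    b++;
}
Foo(DatTim);              // likewise: pass the array itself, not *DatTim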
I have a problem extracting a signed int from a string in C++.
Assuming that I have a string like Images1234, how can I extract the 1234 from the string without knowing the position of the last non-numeric character in C++?
FYI, I have tried stringstream as well as lexical_cast as suggested in other posts, but stringstream returns 0, while lexical_cast throws (the program stops working).
#include <iostream>
#include <string>
#include <boost/lexical_cast.hpp>
using namespace std;

int main()
{
    string virtuallive("Images1234");
    //stringstream output(virtuallive.c_str());
    //int i = stoi(virtuallive);
    //stringstream output(virtuallive);
    int i;
    i = boost::lexical_cast<int>(virtuallive.c_str());//throws bad_lexical_cast here
    //output >> i;
    cout << i << endl;
    return 0;
}
How can I extract the 1234 from the string without knowing the position of the last non-numeric character in C++?
You can't. But the position is not hard to find:
std::string input = "Images1234";   // the string from the question
auto last_non_numeric = input.find_last_not_of("1234567890");
char* endp = &input[0];
if (last_non_numeric != std::string::npos)
    endp += last_non_numeric + 1;
if (!*endp) { /* FAILURE, no number on the end */ }
auto i = strtol(endp, &endp, 10);
if (*endp) { /* weird FAILURE, maybe the number was really HUGE and couldn't convert */ }
Another possibility would be to put the string into a stringstream, then read the number from the stream (after imbuing the stream with a locale that classifies everything except digits as white space).
// First the desired facet:
#include <algorithm>
#include <locale>
#include <sstream>
#include <vector>

struct digits_only: std::ctype<char> {
    digits_only(): std::ctype<char>(get_table()) {}
    static std::ctype_base::mask const* get_table() {
        // everything is white space:
        static std::vector<std::ctype_base::mask>
            rc(std::ctype<char>::table_size, std::ctype_base::space);
        // except digits, which are digits ('9' + 1: std::fill excludes the last element)
        std::fill(&rc['0'], &rc['9'] + 1, std::ctype_base::digit);
        // and '.', which we'll call punctuation:
        rc['.'] = std::ctype_base::punct;
        return &rc[0];
    }
};
Then the code to read the data:
std::istringstream virtuallive("Images1234");
virtuallive.imbue(std::locale(std::locale(), new digits_only));
int number;
// Since we classify the letters as white space, the stream will ignore them.
// We can just read the number as if nothing else were there:
virtuallive >> number;   // number == 1234
This technique is useful primarily when the stream contains a substantial amount of data, and you want all the data in that stream to be interpreted in the same way (e.g., only read numbers, regardless of what else it might contain).
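For a one-off extraction like the one in the question, the same find_last_not_of idea also combines with std::stoi (a sketch; stoi throws std::invalid_argument if no digits follow the prefix):

#include <iostream>
#include <string>

int main()
{
    std::string s("Images1234");
    // npos + 1 wraps to 0, so an all-digit string works as well
    auto pos = s.find_last_not_of("0123456789");
    int i = std::stoi(s.substr(pos + 1));   // "1234" -> 1234
    std::cout << i << '\n';
    return 0;
}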
I have a program that reads from a .txt file.
I use the cmd prompt to execute the program with the name of the text file to read from.
ex: program.exe myfile.txt
The problem is that sometimes it works, sometimes it doesn't.
The original file is 130KB and doesn't work.
If I copy/paste the contents, the file is 65KB and works.
If I copy/paste the file and rename it, it's 130KB and doesn't work.
Any ideas?
After more testing, it turns out that this is the part that makes it not work:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

int main(int argc, char *argv[])
{
    char *infile1 = NULL;
    char *outfile = NULL;
    char tmp[1024] = { 0x0 };
    FILE *in;
    int i, opt = 0, optarg1 = 0;

    for (i = 1; i < argc; i++) /* Skip argv[0] (program name). */
    {
        if (strcmp(argv[i], "-sec") == 0) /* Process optional arguments. */
        {
            opt = 1; /* This is used as a boolean value. */
            /*
             * The last argument is argv[argc-1]. Make sure there are
             * enough arguments.
             */
            if (i + 1 <= argc - 1) /* There are enough arguments in argv. */
            {
                /*
                 * Increment 'i' so that you don't check this
                 * argument the next time through the loop.
                 */
                i++;
                optarg1 = atoi(argv[i]); /* Convert string to int. */
            }
        }
        else /* not -sec */
        {
            if (infile1 == NULL) {
                infile1 = argv[i];
            }
            else {
                if (outfile == NULL) {
                    outfile = argv[i];
                }
            }
        }
    }
    in = fopen(infile1, "r");
    if (in == NULL)
    {
        fprintf(stderr, "Unable to open file %s: %s\n", infile1, strerror(errno));
        exit(1);
    }
    while (fgets(tmp, sizeof(tmp), in) != 0)
    {
        fprintf(stderr, "string is %s.", tmp);
        //Rest of code
    }
}
Whether it works or not, the code inside the while loop gets executed.
When it works tmp actually has a value.
When it doesn't work tmp has no value.
EDIT:
Thanks to sneftel, we know what the problem is.
For me to use fgetws() instead of fgets(), I need tmp to be a wchar_t* instead of a char*.
Typecasting does not seem to work.
I tried changing the declaration of tmp to
wchar_t tmp[1024] = { 0x0 };
but I realized that tmp is also passed to strtok(), which is used elsewhere in my code.
Here is what I tried in that function:
//tmp is passed as the first parameter of parse()
void parse(wchar_t *record, char *delim, char arr[][MAXFLDSIZE], int *fldcnt)
{
    if (*record != NULL)
    {
        char *p = strtok((char*)record, delim);
        int fld = 0;
        while (p) {
            strcpy(arr[fld], p);
            fld++;
            p = strtok(NULL, delim);
        }
        *fldcnt = fld;
    }
    else
    {
        fprintf(stderr, "string is null");
    }
}
But typecasting to char* in strtok doesn't work either.
Now I'm looking for a way to just convert the file from UTF-16 to UTF-8, so tmp can stay of type char*.
I found this, which looks like it can be useful, but its example takes the UTF-16 input from the user; how can that input be taken from the file instead?
http://www.cplusplus.com/reference/locale/codecvt/out/
It sounds an awful lot like the original file is UTF-16 encoded. When you copy/paste it in your text editor, you save the result as a new text file in the editor's default encoding (ASCII or UTF-8). Since a character takes 2 bytes in a UTF-16-encoded file but only 1 byte in a UTF-8-encoded one, the file size is roughly halved when you save it out.
UTF-16 is fine, but you'll need to use Unicode-aware functions (that is, not fgets) to work with it. If you don't want to deal with all that Unicode jazz right now, and you don't actually have any non-ASCII characters in the file, just do the manual conversion (either with your copy/paste trick or with a command-line utility) before running your program.
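To address the EDIT directly, the conversion can also be done in code on Windows. A minimal sketch (assuming the file is UTF-16LE with a BOM, as Notepad writes it; error handling is kept minimal): read the raw bytes, skip the BOM, and let WideCharToMultiByte produce UTF-8 that the existing char*-based code can consume.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

/* Returns a malloc'd UTF-8 copy of a UTF-16LE text file, or NULL on error. */
char *read_utf16_file_as_utf8(const char *path)
{
    FILE *f = fopen(path, "rb");            /* binary mode: keep the raw bytes */
    if (f == NULL) return NULL;
    fseek(f, 0, SEEK_END);
    long nbytes = ftell(f);
    fseek(f, 0, SEEK_SET);

    wchar_t *wbuf = (wchar_t *)malloc(nbytes + sizeof(wchar_t));
    size_t nread = fread(wbuf, 1, (size_t)nbytes, f);
    fclose(f);

    int nw = (int)(nread / sizeof(wchar_t)); /* wchar_t is 2 bytes on Windows */
    wchar_t *start = wbuf;
    if (nw > 0 && start[0] == 0xFEFF) { start++; nw--; }  /* skip the BOM */

    /* First call computes the required UTF-8 buffer size. */
    int n8 = WideCharToMultiByte(CP_UTF8, 0, start, nw, NULL, 0, NULL, NULL);
    char *out = (char *)malloc((size_t)n8 + 1);
    WideCharToMultiByte(CP_UTF8, 0, start, nw, out, n8, NULL, NULL);
    out[n8] = '\0';

    free(wbuf);
    return out;  /* caller frees */
}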
I've got a form with a ListBox which contains lines of four words.
When I click on one line, these words should be shown in four different textboxes.
So far I've got everything working, yet I have a problem with converting the characters.
The string from the listbox is a UnicodeString, but strtok uses a char[].
The compiler tells me it "Cannot Convert UnicodeString to Char[]". This is the code I am using:
{
    int a;
    UnicodeString b;
    char *pch;
    int c;
    a = DatabaseList->ItemIndex; //DatabaseList is the listbox
    b = DatabaseList->Items->Strings[a];
    char str[] = b; //This is the part that fails: b is a UnicodeString, not a char[].
    pch = strtok(str, " ");
    c = 1;
    while (pch != NULL)
    {
        if (c == 1)
        {
            ServerAddress->Text = pch;
        } else if (c == 2)
        {
            DatabaseName->Text = pch;
        } else if (c == 3)
        {
            Username->Text = pch;
        } else if (c == 4)
        {
            Password->Text = pch;
        }
        pch = strtok(NULL, " ");
        c = c + 1;
    }
}
I know my code doesn't look nice; it's pretty bad, actually. I'm just learning some programming in C++.
How can I convert this?
strtok actually modifies your char array, so you will need to construct an array of characters that you are allowed to modify. Pointing directly into the UnicodeString will not work.
// first convert to AnsiString instead of Unicode.
AnsiString ansiB(b);
// allocate enough memory for your char array (and the null terminator)
char* str = new char[ansiB.Length()+1];
// copy the contents of the AnsiString into your char array
strcpy(str, ansiB.c_str());
// the rest of your code goes here
// remember to delete your char array when done
delete[] str;
This works for me and saves converting to AnsiString:
// Using a static buffer
#define MAX_SIZE 256
UnicodeString ustring = "Convert me";
char mbstring[MAX_SIZE];
wcstombs(mbstring,ustring.c_str(),MAX_SIZE);
// Using dynamic buffer
char *dmbstring;
dmbstring = new char[ustring.Length() + 1];
wcstombs(dmbstring,ustring.c_str(),ustring.Length() + 1);
// use dmbstring
delete[] dmbstring;  // new[] must be matched with delete[]
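A hedged alternative for the same conversion (assumes C++Builder's System::UTF8String is available; unlike the AnsiString/wcstombs routes, it keeps non-ANSI characters intact by going through UTF-8):

#include <cstring>
#include <vector>

UnicodeString b = DatabaseList->Items->Strings[DatabaseList->ItemIndex];
UTF8String u(b);                        // UTF-16 -> UTF-8
std::vector<char> buf(u.Length() + 1);  // writable buffer, as strtok requires
std::strcpy(buf.data(), u.c_str());
char *pch = std::strtok(buf.data(), " ");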