Accepting only alphanumerics in Golang and ncurses

So, I'm teaching myself some Golang by making a simple resource-management game with ncurses, using the goncurses library to connect Golang to ncurses.
I've made a simple text input panel that takes in one character at a time, displays it, and then adds it to a string composing the user's response. Here's what it looks like:
// Accept characters, printing them, until enter is pressed
ch := window.GetChar()
kstr := gc.KeyString(ch)
response := ""
cur := 0
for kstr != "enter" {
    // Diagnostic print to get the key code of the current character
    window.Move(0, 0)
    window.ClearToEOL()
    window.MovePrint(0, 0, ch)
    // If it's a backspace or delete, remove a character;
    // otherwise, as long as it's a regular character, add it
    if (ch == 127 || ch == 8) && cur != 0 {
        cur--
        response = response[:len(response)-1]
        window.MovePrint(y, x+cur, " ")
    } else if ch >= 33 && ch <= 122 && cur <= 52 {
        window.MovePrint(y, x+cur, kstr)
        response = response + kstr
        cur++
    }
    // Get the next character
    ch = window.GetChar()
    kstr = gc.KeyString(ch)
}
However, the arrow and function keys seem to be coming up as keycodes already associated with the normal a-zA-Z characters. For example, right-arrow comes up as 67 and F1 as 80. Any ideas what I'm doing wrong here, or if there's a better approach to taking in alphanumerics through ncurses? I'd like to avoid ncurses fields and classes as much as possible, because the point here is to learn Golang, not ncurses. Thanks!

If you do not enable keypad mode, (n)curses returns the individual bytes which make up a special key. Right-arrow, for instance, is typically sent as the escape sequence ESC [ C, and 'C' is 67; F1 is often ESC O P, and 'P' is 80, which is exactly what you're seeing.
To fix it, add this to your program's initialization:
stdscr.Keypad(true) // allow keypad input
With keypad mode enabled, special keys such as right-arrow come back as single values above 255, and goncurses has constants defined for those, e.g., KEY_RIGHT.
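For reference, this is what keypad mode changes at the level of the C ncurses API that goncurses wraps; a minimal C sketch (not goncurses code), assuming a typical ANSI terminal:

/* keypad(stdscr, TRUE) is what goncurses' Keypad(true) wraps. */
#include <ncurses.h>

int main(void) {
    initscr();
    cbreak();
    noecho();
    keypad(stdscr, TRUE); /* without this, arrow keys arrive as the bytes ESC [ C etc. */
    int ch = getch();
    if (ch == KEY_RIGHT)  /* special keys now come back as single values > 255 */
        printw("right arrow");
    else
        printw("key code %d", ch);
    refresh();
    getch();
    endwin();
    return 0;
}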

Related

MapVirtualKey returns wrong chars in MAPVK_VK_TO_CHAR mode

I'm trying to use the MapVirtualKey[A]/[W]/[ExA]/[ExW] API to map a VK_* code to a character by means of its MAPVK_VK_TO_CHAR (2) mode.
I have found that it always returns the 'A'..'Z' chars for 'VK_A'..'VK_Z', no matter which keyboard layout I have active.
The docs say:
The uCode parameter is a virtual-key code and is translated into an unshifted character value in the low order word of the return value. Dead keys (diacritics) are indicated by setting the top bit of the return value. If there is no translation, the function returns 0.
But I cannot get an unshifted character value or a non-ASCII character out of it.
For other keys it works as described. And the behavior is even more annoying considering that, for the US English keyboard layout, for example, it returns:
VK_Q (0x51) -> `Q` (U+0051 Latin Capital Letter Q)
VK_OEM_PERIOD (0xbe) -> `.` (U+002E Full Stop)
But for Russian keyboard layout it returns:
VK_Q (0x51) -> `Q` (U+0051 Latin Capital Letter Q)
^- here it should return `й` (U+0439 Cyrillic Small Letter Short I) according to docs
VK_OEM_PERIOD (0xbe) -> `ю` (U+044E Cyrillic Small Letter Yu)
How do I use it properly?
MapVirtualKey has known broken behaviour here; the docs are lying to you about the MAPVK_VK_TO_CHAR (2) mode.
According to experiments, and to the leaked Windows XP source code (in the \windows\core\ntuser\kernel\xlate.c file), it special-cases the 'A'..'Z' VKs (those VKs are deliberately not defined in the Win32 WinUser.h header and are equal to the 'A'..'Z' ASCII chars):
case 2:
    /*
     * Bogus Win3.1 functionality: despite SDK documenation, return uppercase for
     * VK_A through VK_Z
     */
    if ((wCode >= (WORD)'A') && (wCode <= (WORD)'Z')) {
        return wCode;
    }
Not sure why MS decided to carry this bug over from Win 3.1, but that is still the situation on my Windows 10.
Also, some keyboard layouts can emit multiple WCHAR characters on a single key press (UTF-16 surrogate pairs, or ligatures that can contain multiple Unicode code points). MapVirtualKey with MAPVK_VK_TO_CHAR fails to return proper values for these keys too - it returns the U+F002 code point in this case.
As a workaround, I can recommend the ToUnicode[Ex] API, which can do this mapping for you:
// Returns UTF-8 string
std::string GetStrFromKeyPress(uint16_t scanCode, bool isShift)
{
    static BYTE keyboardState[256];
    memset(keyboardState, 0, 256);
    if (isShift)
    {
        keyboardState[VK_SHIFT] |= 0x80;
    }
    wchar_t chars[5] = { 0 };
    const UINT vkCode = ::MapVirtualKeyW(scanCode, MAPVK_VSC_TO_VK_EX);
    // This call can produce multiple UTF-16 code points
    // in case of ligatures or non-BMP Unicode chars that have hi and low surrogate
    // See examples: https://kbdlayout.info/features/ligatures
    int code = ::ToUnicode(vkCode, scanCode, keyboardState, chars, 4, 0);
    if (code < 0)
    {
        // Dead key
        if (chars[0] == 0 || std::iswcntrl(chars[0])) {
            return {};
        }
        code = -code;
    }
    // Clear the keyboard state that the dead key left behind
    {
        memset(keyboardState, 0, 256);
        const UINT clearVkCode = VK_DECIMAL;
        const UINT clearScanCode = ::MapVirtualKeyW(clearVkCode, MAPVK_VK_TO_VSC);
        wchar_t tmpChars[5] = { 0 };
        do {} while (::ToUnicode(clearVkCode, clearScanCode, keyboardState, tmpChars, 4, 0) < 0);
    }
    // Do not return control characters
    if (code <= 0 || (code == 1 && std::iswcntrl(chars[0]))) {
        return {};
    }
    return utf8::narrow(chars, code); // utf8::narrow is the author's UTF-16 -> UTF-8 helper
}
Or even better: if you have a Win32 message loop, just use TranslateMessage() (which calls ToUnicode() under the hood) and then process the WM_CHAR messages.
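A minimal sketch of that message-loop approach (window setup omitted; the window procedure is a stub):

// TranslateMessage() performs the dead-key composition and posts WM_CHAR.
MSG msg;
while (GetMessage(&msg, nullptr, 0, 0) > 0) {
    TranslateMessage(&msg); // generates WM_CHAR, including composed accented characters
    DispatchMessage(&msg);
}

// In the window procedure, finished characters arrive in WM_CHAR:
LRESULT CALLBACK WndProc(HWND hwnd, UINT uMsg, WPARAM wParam, LPARAM lParam) {
    if (uMsg == WM_CHAR) {
        // wParam is a UTF-16 code unit; a dead key + 'a' arrives here as one composed character
    }
    return DefWindowProc(hwnd, uMsg, wParam, lParam);
}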
PS: The same applies to the GetKeyNameText API, since it calls MapVirtualKey(vk, MAPVK_VK_TO_CHAR) under the hood for keys that do not have an explicit name set in the keyboard layout DLL (usually only non-character keys have names).

No relevant answers on the actual behavior of kbhit() on characters such as ", %, ~ in Windows 10 when keyboard and locale are US (not international)

Windows 10 with the latest updates installed on a Dell XPS 13, US keyboard layout and US locale selected (not international). Still, a call to kbhit() or _kbhit() with specific characters such as ", ~, % does not return the key hit, at least not until a certain amount of time (~1 second) has passed and a second character has been hit.
I'm using kbhit() because I need a non-blocking function. How can I correctly detect a keyboard hit on " or % with a single keystroke?
In Linux, a timed-out select() on stdin works great, but that doesn't seem to be an option on Windows.
Thanks,
-Patrick
I finally found a solution that fits my needs and fixes the issues I have with kbhit(); code below; I hope it helps others too.
– Patrick
int getkey();
//
// int getkey(): returns the typed character at the keyboard, or NO_CHAR if no keyboard key was pressed.
// This is done in non-blocking mode; i.e. NO_CHAR is returned if no keyboard event is read from the
// console event queue.
// This works a lot better for me than the standard call to kbhit(), which holds back some characters
// such as ", `, % and tries to deal with them before returning them, making it hard to follow
// what's really been typed in.
//
int getkey() {
    INPUT_RECORD buf;  // interested in the bKeyDown event
    DWORD len;         // number of events read
    int ch;
    ch = NO_CHAR;      // default return value
    PeekConsoleInput(GetStdHandle(STD_INPUT_HANDLE), &buf, 1, &len);
    if (len > 0) {
        if (buf.EventType == KEY_EVENT && buf.Event.KeyEvent.bKeyDown) {
            ch = _getche();  // set ch to the input char only under the right conditions;
        }                    // _getche() returns the char and echoes it to the console
        FlushConsoleInputBuffer(GetStdHandle(STD_INPUT_HANDLE)); // remove consumed events
    } else {
        Sleep(5); // avoids too high a CPU usage when there is no input
    }
    return ch;
}
It is also possible to call ReadConsoleInput(GetStdHandle(STD_INPUT_HANDLE), &buf, 1, &len); rather than FlushConsoleInputBuffer(GetStdHandle(STD_INPUT_HANDLE)); in the code above, but for some unknown reason it doesn't seem to react as quickly, and some characters are missed when typing at the keyboard.
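For what it's worth, here is a sketch of what that ReadConsoleInput() variant could look like if it consumes exactly one event and takes the character straight from the event record rather than echoing through _getche() (the function name and the NO_CHAR definition here are mine):

#include <windows.h>

#define NO_CHAR (-1)

int getkey_readconsole() {
    HANDLE hIn = GetStdHandle(STD_INPUT_HANDLE);
    DWORD pending = 0;
    if (!GetNumberOfConsoleInputEvents(hIn, &pending) || pending == 0) {
        Sleep(5);        // nothing queued; avoid spinning the CPU
        return NO_CHAR;
    }
    INPUT_RECORD rec;
    DWORD nRead = 0;
    if (!ReadConsoleInput(hIn, &rec, 1, &nRead) || nRead == 0)
        return NO_CHAR;
    if (rec.EventType == KEY_EVENT && rec.Event.KeyEvent.bKeyDown) {
        WCHAR wc = rec.Event.KeyEvent.uChar.UnicodeChar;
        if (wc != 0)     // " and % show up here immediately
            return (int)wc;
    }
    return NO_CHAR;      // key-up, mouse, resize, or a non-character key
}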

In etcd v3.0.x, how do I request all keys with a given prefix?

In etcd 3.0.x, a new API was introduced, and I'm just reading up on it. One thing is unclear to me, in the RangeRequest object. In the description of the property range_end, it says:
If the range_end is one bit larger than the given key,
then the range requests get the all keys with the prefix (the given key).
Here is the complete text, to provide some context:
// key is the first key for the range. If range_end is not given, the request only looks up key.
bytes key = 1;
// range_end is the upper bound on the requested range [key, range_end).
// If range_end is '\0', the range is all keys >= key.
// If the range_end is one bit larger than the given key,
// then the range requests get the all keys with the prefix (the given key).
// If both key and range_end are '\0', then range requests returns all keys.
bytes range_end = 2;
My question is: what is meant by "If the range_end is one bit larger than the given key"? Does it mean that range_end is one bit longer than key? Does it mean it must be key+1 when interpreted as an integer? If the latter, in which encoding?
There's a PR which resolves this confusion.
If range_end is key plus one (e.g., "aa"+1 == "ab", "a\xff"+1 == "b"),
then the range request gets all keys prefixed with key.
UPDATE:
var key = "/aaa"
var range_end = "/aa" + String.fromCharCode("a".charCodeAt(0) + 1); // "/aab"
So: one larger than the last byte of the key ("one bit larger" in the docs really means "plus one", as the PR above puts it).
For example, if the key is "09903x", then range_end should be "09903y".
Only a byte stream is sent to the etcd server, so you have to take your driver's serialization into account when computing the value of range_end.
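The byte-level computation itself is small: increment the last byte that is below 0xff and truncate everything after it. A minimal C++ sketch mirroring the JS and Python helpers below (the function name is mine):

#include <string>

std::string prefixRangeEnd(std::string key) {
    for (size_t i = key.size(); i-- > 0; ) {
        if (static_cast<unsigned char>(key[i]) < 0xff) {
            key[i] = static_cast<char>(key[i] + 1);
            key.resize(i + 1);   // e.g. "aa" -> "ab", "a\xff" -> "b"
            return key;
        }
    }
    return std::string("\0", 1); // key was all 0xff bytes: '\0' means "no upper bound"
}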
A great TypeScript example here: https://github.com/mixer/etcd3/blob/7691f9bf227841e268c3aeeb7461ad71872df878/src/util.ts#L25
A working JS example with TextEncoder/TextDecoder:
function endRangeForPrefix(value) {
    let textEncoder = new TextEncoder();
    let encodeValue = textEncoder.encode(value);
    for (let i = encodeValue.length - 1; i >= 0; i--) {
        if (encodeValue[i] < 0xff) {
            encodeValue[i]++;
            encodeValue = encodeValue.slice(0, i + 1);
            let textDecoder = new TextDecoder();
            let decode = textDecoder.decode(encodeValue);
            return decode;
        }
    }
    return '';
}
I am using Python aioetcd3. I ran into the same problem, and found the answer in its source code, at aioetcd3/utils.py line 14:
def increment_last_byte(byte_string):
    s = bytearray(to_bytes(byte_string))
    for i in range(len(s) - 1, -1, -1):
        if s[i] < 0xff:
            s[i] += 1
            return bytes(s[:i + 1])
    else:
        return b'\x00'
usage:
await Client().delete([db_key, increment_last_byte(db_key)], prev_kv=True)

C++ srand function looping

I have the following method, part of a password-generating program, that generates a random password which is then validated.
My problem is that the generated password never meets the validation requirements, so the code keeps looping back to create a new password.
I'm posting the code below to ask if anyone has a more efficient way to create the random password so that it will meet the validation requirements instead of looping back continuously. Thanks.
static bool verifyThat(bool condition, const char* error) {
    if (!condition) printf("%s", error);
    return !condition;
}

//method to generate a random password for user following password guidelines.
void generatePass()
{
    FILE *fptr; //file pointer
    int iChar, iUpper, iLower, iSymbol, iNumber, iTotal;
    printf("\n\n\t\tGenerate Password selected ");
get_user_password:
    printf("\n\n\t\tPassword creation in progress... ");
    int i, iResult, iCount;
    char password[10 + 1];
    char strLower[59 + 1] = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRTUVWXYZ!£$%^&*";
    srand(time(0));
    for (i = 0; i < 10; i++)
    {
        password[i] = strLower[(rand() % 52)];
    }
    password[i] = '\0';
    iChar = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
    //following statements used to validate password
    iChar = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
    iUpper = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
    iLower = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
    iSymbol = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
    iNumber = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
    iTotal = countLetters(password, &iUpper, &iLower, &iSymbol, &iNumber, &iTotal);
    if (verifyThat(iUpper >= 2, "Not enough uppercase letters!!!\n")
        || verifyThat(iLower >= 2, "Not enough lowercase letters!!!\n")
        || verifyThat(iSymbol >= 1, "Not enough symbols!!!\n")
        || verifyThat(iNumber >= 2, "Not enough numbers!!!\n")
        || verifyThat(iTotal >= 9, "Not enough characters!!!\n")
        || verifyThat(iTotal <= 15, "Too many characters!!!\n"))
        goto get_user_password;
    iResult = checkWordInFile("dictionary.txt", password);
    if (verifyThat(iResult != gC_FOUND, "Password contains small common 3 letter word/s."))
        goto get_user_password;
    iResult = checkWordInFile("passHistory.txt", password);
    if (verifyThat(iResult != gC_FOUND, "Password contains previously used password."))
        goto get_user_password;
    printf("\n\n\n Your new password is verified ");
    printf(password);
    //writing password to passHistory file.
    fptr = fopen("passHistory.txt", "w"); // create or open the file
    for (iCount = 0; iCount < 8; iCount++)
    {
        fprintf(fptr, "%s\n", password[iCount]);
    }
    fclose(fptr);
    printf("\n\n\n");
    system("pause");
} //end of generatePass method.
I looked at your code at a glance and I think I have found the reasons why the validation requirements aren't met.
I suggest you pay attention to the following parts of your code:
1) char strLower[59+1] = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRTUVWXYZ!£$%^&*";
Here you should add the digits 0..9. This is one of the reasons the requirements can never be met: how can a number be picked if it isn't in the set you pick from?
Replace it, for example, with:
char strLower[] = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRTUVWXYZ!£$%^&*0123456789";
2) password[i] = strLower[(rand() % 52)]; - in this part of the code, replace 52 with the total number of symbols in the string you are picking from.
I recommend replacing it with the following:
password[i] = strLower[(rand() % (sizeof(strLower) / sizeof(char) - 1))];
You could alter your algorithm instead, as sketched below:
choose at random a number of uppercase characters that is at least 2,
choose at random a number of lowercase characters that is at least 2,
choose at random a number of symbol characters that is at least 1,
choose at random a number of digit characters that is at least 2,
and then recompose your password from the random items, re-ordered at random. Pad with whatever characters you want to pass the verifyThat predicates: >= 9 and <= 15 characters in total.
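A minimal sketch of that approach in C++, using <random> rather than srand() (the helper and the exact counts are mine; adjust the sets and counts to your own rules):

#include <algorithm>
#include <random>
#include <string>

// Satisfy the checks by construction: guaranteed counts per category, then shuffle.
std::string makePassword() {
    static const std::string upper   = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    static const std::string lower   = "abcdefghijklmnopqrstuvwxyz";
    static const std::string digits  = "0123456789";
    static const std::string symbols = "!$%^&*";

    std::mt19937 rng{std::random_device{}()};
    auto pick = [&rng](const std::string& set, int n, std::string& out) {
        std::uniform_int_distribution<size_t> d(0, set.size() - 1);
        while (n-- > 0) out += set[d(rng)];
    };

    std::string pw;
    pick(upper,   2, pw); // >= 2 uppercase
    pick(lower,   2, pw); // >= 2 lowercase
    pick(digits,  2, pw); // >= 2 digits
    pick(symbols, 1, pw); // >= 1 symbol
    pick(lower,   3, pw); // pad to 10 characters, inside the 9..15 window
    std::shuffle(pw.begin(), pw.end(), rng); // hide the category ordering
    return pw;
}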
And please: don't use goto. Make function calls instead.

ToAscii/ToUnicode in a keyboard hook destroys dead keys

It seems that if you call ToAscii() or ToUnicode() while in a global WH_KEYBOARD_LL hook, and a dead-key is pressed, it will be 'destroyed'.
For example, say you've configured your input language in Windows as Spanish, and you want to type an accented letter á in a program. Normally, you'd press the single-quote key (the dead key), then the letter "a", and then on the screen an accented á would be displayed, as expected.
But this doesn't work if you call ToAscii() or ToUnicode() in a low-level keyboard hook function. It seems that the dead key is destroyed, and so no accented letter á shows up on screen. Removing a call to the above functions resolves the issue... but unfortunately, I need to be able to call those functions.
I Googled for a while, and while a lot of people seemed to have this issue, no good solution was provided.
Any help would be much appreciated!
EDIT: I'm calling ToAscii() to convert the virtual-key code and scan code received in my LowLevelKeyboardProc hook function into the resulting character that will be displayed on screen for the user.
I tried MapVirtualKey(kbHookData->vkCode, 2), but this isn't as "complete" a function as ToAscii(); for example, if you press Shift + 2, you'll get '2', not '#' (or whatever Shift + 2 will produce for the user's keyboard layout/language).
ToAscii() is perfect... until a dead-key is pressed.
EDIT2: Here's the hook function, with irrelevant info removed:
LRESULT CALLBACK keyboard_LL_hook_func(int code, WPARAM wParam, LPARAM lParam) {
    LPKBDLLHOOKSTRUCT kbHookData = (LPKBDLLHOOKSTRUCT)lParam;
    BYTE keyboard_state[256];
    if (code < 0) {
        return CallNextHookEx(keyHook, code, wParam, lParam);
    }
    WORD wCharacter = 0;
    GetKeyboardState(keyboard_state);
    int ta = ToAscii((UINT)kbHookData->vkCode, kbHookData->scanCode,
                     keyboard_state, &wCharacter, 0);
    /* If ta == -1, a dead-key was pressed. The dead-key will be "destroyed"
     * and you'll no longer be able to create any accented characters. Remove
     * the call to ToAscii() above, and you can then create accented characters. */
    return CallNextHookEx(keyHook, code, wParam, lParam);
}
Quite an old thread. Unfortunately it didn't contain the answer I was looking for, and none of the answers seemed to work properly. I finally solved the problem by checking the most significant bit of MapVirtualKey's return value before calling ToUnicode / ToAscii. It seems to be working like a charm:
if (!(MapVirtualKey(kbHookData->vkCode, MAPVK_VK_TO_CHAR) >> (sizeof(UINT) * 8 - 1) & 1)) {
    ToAscii((UINT)kbHookData->vkCode, kbHookData->scanCode,
            keyboard_state, &wCharacter, 0);
}
Quoting MSDN on the return value of MapVirtualKey, if MAPVK_VK_TO_CHAR is used:
[...] Dead keys (diacritics) are indicated by setting the top bit of the return value. [...]
Stop using ToAscii() and use ToUnicode().
Remember that ToUnicode may return nothing for dead keys - that is why they are called dead keys.
Any key will have a scancode or a virtual key code, but not necessarily a character.
You shouldn't conflate buttons with characters - assuming that every key/button has a text representation (Unicode) is wrong.
So:
for input text, use the characters reported by Windows;
for checking pressed buttons (e.g. in games), use scancodes or virtual keys (virtual keys are probably better);
for keyboard shortcuts, use virtual key codes.
Call the 'ToAscii' function twice for correct processing of dead keys, as in:
int ta = ToAscii((UINT)kbHookData->vkCode, kbHookData->scanCode,
                 keyboard_state, &wCharacter, 0);
ta = ToAscii((UINT)kbHookData->vkCode, kbHookData->scanCode,
             keyboard_state, &wCharacter, 0);
if (ta == -1)
    ...
Calling the ToAscii or ToUnicode twice is the answer.
I found this and converted it for Delphi, and it works!
cnt:=ToUnicode(VirtualKey, KeyStroke, KeyState, chars, 2, 0);
cnt:=ToUnicode(VirtualKey, KeyStroke, KeyState, chars, 2, 0); //yes call it twice
I encountered this issue while creating a key logger in C#, and none of the above answers worked for me.
After a lot of blog searching, I stumbled across this keyboard listener, which handles dead keys perfectly.
Here is the full code, covering dead keys and shortcut keys using ALT + NUMPAD; basically a full implementation of TextField input handling:
[DllImport("user32.dll")]
public static extern int ToUnicode(uint virtualKeyCode, uint scanCode, byte[] keyboardState, [Out, MarshalAs(UnmanagedType.LPWStr, SizeConst = 64)] StringBuilder receivingBuffer, int bufferSize, uint flags);

private StringBuilder _pressCharBuffer = new StringBuilder(256);
private byte[] _pressCharKeyboardState = new byte[256];

public bool PreFilterMessage(ref Message m)
{
    var handled = false;
    if (m.Msg == 0x0100 || m.Msg == 0x0102) // WM_KEYDOWN or WM_CHAR
    {
        bool isShiftPressed = (ModifierKeys & Keys.Shift) != 0;
        bool isControlPressed = (ModifierKeys & Keys.Control) != 0;
        bool isAltPressed = (ModifierKeys & Keys.Alt) != 0;
        bool isAltGrPressed = (ModifierKeys & Keys.RMenu) != 0;
        for (int i = 0; i < 256; i++)
            _pressCharKeyboardState[i] = 0;
        if (isShiftPressed)
            _pressCharKeyboardState[(int)Keys.ShiftKey] = 0xff;
        if (isAltGrPressed)
        {
            _pressCharKeyboardState[(int)Keys.ControlKey] = 0xff;
            _pressCharKeyboardState[(int)Keys.Menu] = 0xff;
        }
        if (Control.IsKeyLocked(Keys.CapsLock))
            _pressCharKeyboardState[(int)Keys.CapsLock] = 0xff;
        Char chr = (Char)0;
        int ret = ToUnicode((uint)m.WParam.ToInt32(), 0, _pressCharKeyboardState, _pressCharBuffer, 256, 0);
        if (ret == 0)
            chr = Char.ConvertFromUtf32(m.WParam.ToInt32())[0];
        if (ret == -1)
            ToUnicode((uint)m.WParam.ToInt32(), 0, _pressCharKeyboardState, _pressCharBuffer, 256, 0);
        else if (_pressCharBuffer.Length > 0)
            chr = _pressCharBuffer[0];
        if (m.Msg == 0x0102 && Char.IsWhiteSpace(chr))
            chr = (Char)0;
        if (ret >= 0 && chr > 0)
        {
            // DO YOUR STUFF here, using either "chr" as the resulting character
            // or _pressCharBuffer.ToString() (it can contain more than one character
            // if a dead key was pressed before), and don't forget to set "handled"
            // to true so nobody else consumes the message afterwards.
        }
    }
    return handled;
}
It is known that ToUnicode() and its older counterpart ToAscii() can change the keyboard state of the current thread and thus mess with dead keys and ALT+NUMPAD keystrokes:
As ToUnicodeEx translates the virtual-key code, it also changes the state of the kernel-mode keyboard buffer. This state-change affects dead keys, ligatures, alt+numpad key entry, and so on. It might also cause undesired side-effects if used in conjunction with TranslateMessage (which also changes the state of the kernel-mode keyboard buffer).
To avoid that, you can do your ToUnicode() call in a separate thread (it will have a separate keyboard state) or use a special flag in the wFlags param that is documented in the ToUnicode() docs:
If bit 2 is set, keyboard state is not changed (Windows 10, version 1607 and newer)
Or you can prepare an sc->char mapping table beforehand and update it on the language-change event.
I think this should work with ToAscii() too, but better not to use that old, ANSI codepage-dependent function. Use the ToUnicode() API instead, which can even return ligatures and UTF-16 surrogate pairs - if the keyboard layout has them. Some do.
See Asynchronous input vs synchronous input, a quick introduction, for the reason behind this.
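A sketch of the state-preserving call inside the question's hook function (0x4 is bit 2 of wFlags; this requires Windows 10 version 1607 or newer):

// Ask ToUnicode() not to touch the kernel-mode keyboard state, so dead keys
// keep working in the foreground application.
BYTE state[256];
GetKeyboardState(state);
WCHAR buf[8] = {0};
int n = ToUnicode((UINT)kbHookData->vkCode, kbHookData->scanCode,
                  state, buf, 8, 0x4 /* bit 2: do not change keyboard state */);
// n > 0: buf holds n UTF-16 code units; n < 0: a dead key was pressed.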
I copy the vkCode into a queue and do the conversion from another thread:
@HOOKPROC
def keyHookKFunc(code, wParam, lParam):
    global gkeyQueue
    gkeyQueue.append((code, wParam, kbd.vkCode))
    return windll.user32.CallNextHookEx(0, code, wParam, lParam)
This has the advantage of not delaying key processing by the OS.
This works for me:
byte[] keyState = new byte[256];
// Do not call GetKeyboardState(keyState) here;
// set only the modifier keys you want:
keyState[(int)Keys.ShiftKey] = 0x80;   // SHIFT down
keyState[(int)Keys.Menu] = 0x80;       // ALT down
keyState[(int)Keys.ControlKey] = 0x80; // CONTROL down
// ToAscii should work fine
if (ToAscii(myKeyboardStruct.VirtualKeyCode, myKeyboardStruct.ScanCode, keyState, inBuffer, myKeyboardStruct.Flags) == 1)
{
    // do something
}
