Win API wrong textbox output

I've been trying to build a simple Win API program (using CodeBlocks) and ran into a weird problem.
case WM_COMMAND: {
    if (LOWORD(wParam) == Calculate) {
        int A = 0, ArrayReset = 0;
        char textread[256];
        SendMessage((HWND)Box1, (UINT)EM_GETLINE, (WPARAM)1, (LPARAM)&textread);
        A = atoi(textread);
        itoa(ArrayReset, textread, 10);
        itoa(A, textread, 10);
        SendMessage((HWND)Box1, (UINT)WM_SETTEXT, (WPARAM)1, (LPARAM)&textread);
(My program is a bit more complicated, but this is just to show the problematic point)
Now, what I expect the code to do is read the value in Box1, convert it to an integer, convert it back to a char array, and print that array back to the same Box1. Basically, some converting with no difference in the end result.
However, there is a strange problem. The code works fine with single-digit numbers, but if I enter a number with more digits, like 12 or 356, I get 1200 and 3560 respectively. If the input number is bigger than a thousand, it works fine again.
Is this a problem with my method of resetting the array's value back to 0, or does it have something to do with the conversion process?

There are some mistakes in this code.
For starters, (LPARAM)&textread should be either (LPARAM)textread or (LPARAM)&textread[0].
But more importantly, you are not preparing the EM_GETLINE message correctly:
lParam
A pointer to the buffer that receives a copy of the line. Before sending the message, set the first word of this buffer to the size, in TCHARs, of the buffer. For ANSI text, this is the number of bytes; for Unicode text, this is the number of characters. The size in the first word is overwritten by the copied line.
Try this instead:
case WM_COMMAND: {
    if (LOWORD(wParam) == Calculate) {
        int A = 0;
        TCHAR textread[256];
        *((LPWORD)&textread) = 256; // <-- add this
        SendMessage(Box1, EM_GETLINE, 0, (LPARAM)textread);
        _stscanf(textread, _T("%d"), &A);
        _stprintf(textread, _T("%d"), A);
        SendMessage(Box1, WM_SETTEXT, 0, (LPARAM)textread);
    }
    break;

I think you have several problems there.
First, you didn't show us what Parse1 is. Beware that you must set the size of the buffer in the first word of the buffer. Also, why do you pass 1 as the WPARAM? That is the zero-based index of the line to retrieve from a multiline edit control, and it is ignored if the edit control is single-line.
Also, what is with the first itoa call?
Here is an example that works:
TCHAR textread[256] = {0};
*(reinterpret_cast<WORD*>(&textread)) = 256;
::SendMessage(hwnd, EM_GETLINE, 0, reinterpret_cast<LPARAM>(&textread));
auto n = _ttoi(textread);
_itot_s(n, textread, 256, 10);
::SendMessage(hwnd, WM_SETTEXT, 0, reinterpret_cast<LPARAM>(&textread));

How to extract the character from WM_KEYDOWN in PreTranslateMessage(MSG*pMsg)

In an MFC application, inside PreTranslateMessage(MSG *pMsg) overridden in a CView-derived class, I have this:
if (pMsg->message == WM_KEYDOWN) ...
The fields of WM_KEYDOWN are documented here. The virtual key VK_ value is in pMsg->wParam, and pMsg->lParam contains several fields, of which bits 16-23 are the keyboard scan code.
So in my code I use:
const int virtualKey = pMsg->wParam;
const int hardwareScanCode = (pMsg->lParam >> 16) & 0x00ff; // bits 16-23
On my non-US keyboard for example, when I press the "#" character, I get the following:
virtualKey == 0xde --> VK_OEM_7 "Used for miscellaneous characters; it can vary by keyboard."
hardwareScanCode == 0x29 (41 decimal)
The character I'd like to "capture" or process differently is ASCII "#", 0x23 (35 decimal).
MY QUESTION
How do I translate the WM_KEYDOWN information to get something I can compare against, regardless of language or keyboard layout? I need to identify the # key whether the user has a standard US keyboard, or something different.
For example, I've been looking at the following functions such as:
MapVirtualKey(virtualkey, MAPVK_VSC_TO_VK);
// previous line is useless, the key VK_OEM_7 doesn't map to anything without the scan code
ToAscii(virtualKey, hardwareScanCode, nullptr, &word, 0);
// previous line returns zero, and zero is written to `word`
Edit:
Long story short: On a U.S. keyboard, SHIFT+3 = #, while on a French keyboard SHIFT+3 = /. So I don't want to look at individual keys, instead I want to know about the character.
When handling WM_KEYDOWN, how do I translate lParam and wParam (the "keys") to find out the character which the keyboard and Windows is about to generate?
I believe this is a better solution. This one was tested with both the standard U.S. keyboard layout and the Canadian-French keyboard layout.
const int wParam = pMsg->wParam;
const int lParam = pMsg->lParam;
const int keyboardScanCode = (lParam >> 16) & 0x00ff;
const int virtualKey = wParam;
BYTE keyboardState[256];
GetKeyboardState(keyboardState);
WORD ascii = 0;
const int len = ToAscii(virtualKey, keyboardScanCode, keyboardState, &ascii, 0);
if (len == 1 && ascii == '#')
{
// ...etc...
}
Even though the help page seems to hint that keyboardState is optional for the call to ToAscii(), I found that it was required with the character I was trying to detect.
Found the magic API call that gets me what I need: GetKeyNameText()
if (pMsg->message == WM_KEYDOWN)
{
char buffer[20];
const int len = GetKeyNameTextA(pMsg->lParam, buffer, sizeof(buffer));
if (len == 1 && buffer[0] == '#')
{
// ...etc...
}
}
Nope, that code only works on keyboard layouts that have an explicit '#' key. Doesn't work on layouts like the standard U.S. layout where '#' is a combination of other keys like SHIFT + 3.
I'm not an MFC expert, but here's roughly what I believe its message loop looks like:
while (::GetMessage(&msg, NULL, 0, 0) > 0) {
if (!app->PreTranslateMessage(&msg)) { // the hook you want to use
TranslateMessage(&msg); // where WM_CHAR messages are generated
DispatchMessage(&msg); // where the original message is dispatched
}
}
Suppose a U.S. user (for whom 3 and # are on the same key) presses that key.
The PreTranslateMessage hook will see the WM_KEYDOWN message.
If it allows the message to pass through, then TranslateMessage will generate a WM_CHAR message (or something from that family of messages) and dispatch it directly. PreTranslateMessage will never see the WM_CHAR.
Whether that WM_CHAR is a '3' or a '#' depends on the keyboard state, specifically whether a Shift key is currently pressed. But the WM_KEYDOWN message doesn't contain all the keyboard state. TranslateMessage keeps track of the state by taking notes on the keyboard messages that pass through it, so it knows whether the Shift (or Ctrl or Alt) is already down.
Then DispatchMessage will dispatch the original WM_KEYDOWN message.
If you want to catch only the '#' and not the '3's, then you have two options:
Make your PreTranslateMessage hook keep track of all the keyboard state (like TranslateMessage would normally do). It would have to watch for all of the keyboard messages to track the keyboard state and use that in conjunction with the keyboard layout to figure whether the current message would normally generate a '#'. You'd then have to manually dispatch the WM_KEYDOWN message and return TRUE (so that the normal translate/dispatch doesn't happen). You'd also have to be careful to also filter the corresponding WM_KEYUP messages so that you don't confuse TranslateMessage's internal state. That's a lot of work and lots to test.
Find a place to intercept the WM_CHAR messages that TranslateMessage generates.
For that second option, you could subclass the destination window, have it intercept WM_CHAR messages when the character is '#' and pass everything else through. That seems a lot simpler and well targeted.
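For illustration only (not from the original answer), here is a rough sketch of that second option using the comctl32 subclassing API; hwndTarget is a placeholder for whatever window actually receives the keyboard input:
#include <commctrl.h>
#pragma comment(lib, "comctl32.lib")

LRESULT CALLBACK HashCharSubclassProc(HWND hWnd, UINT uMsg, WPARAM wParam,
                                      LPARAM lParam, UINT_PTR uIdSubclass,
                                      DWORD_PTR dwRefData)
{
    if (uMsg == WM_CHAR && wParam == TEXT('#'))
    {
        // ...handle the '#' character here...
        return 0; // swallow it; drop this line to let the control see it as well
    }
    return DefSubclassProc(hWnd, uMsg, wParam, lParam);
}

// during initialization:
// SetWindowSubclass(hwndTarget, HashCharSubclassProc, 1, 0);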

Getting the exact edit box dimensions for the given text

I need to be able to determine the size of the edit box according to the text I have, and a maximum width.
There are similar questions and answers, which suggest GetTextExtentPoint32 or DrawTextEx.
GetTextExtentPoint32 doesn't support multiline edit controls, so it doesn't fit.
DrawTextEx kind of works, but sometimes the edit box turns out to be larger than necessary, and, what's worse, sometimes it's too small.
Then there's EM_GETRECT and EM_GETMARGINS. I'm not sure whether I should use one of them, or maybe both.
What is the most accurate method for calculating the size? This stuff is more complicated than it should be... and I'd prefer not to resort to reading the source code of Wine or ReactOS.
Thanks.
Edit
Here's my code and a concrete example:
bool AutoSizeEditControl(CEdit window, LPCTSTR lpszString, int *pnWidth, int *pnHeight, int nMaxWidth = INT_MAX)
{
CFontHandle pEdtFont = window.GetFont();
if(!pEdtFont)
return false;
CClientDC oDC{ window };
CFontHandle pOldFont = oDC.SelectFont(pEdtFont);
CRect rc{ 0, 0, nMaxWidth, 0 };
oDC.DrawTextEx((LPTSTR)lpszString, -1, &rc, DT_CALCRECT | DT_EDITCONTROL | DT_WORDBREAK);
oDC.SelectFont(pOldFont);
::AdjustWindowRectEx(&rc, window.GetStyle(), (!(window.GetStyle() & WS_CHILD) && (window.GetMenu() != NULL)), window.GetExStyle());
UINT nLeftMargin, nRightMargin;
window.GetMargins(nLeftMargin, nRightMargin);
if(pnWidth)
*pnWidth = rc.Width() + nLeftMargin + nRightMargin;
if(pnHeight)
*pnHeight = rc.Height();
return true;
}
I call it with nMaxWidth = 143 and the following text (below), and get nHeight = 153, nWidth = 95. But the numbers are too small for the text to fit, on both axes.
The text (two lines):
Shopping
https://encrypted.google.com/search?q=winapi+resize+edit+control+to+text+size&source=lnms&tbm=shop&sa=X&ved=0ahUKEwiMyNaWxZjLAhUiLZoKHQcoDqUQ_AUICigE
Edit 2
I found out that the word-wrapping algorithms of DrawTextEx and of the edit control are different. For example, the edit control wraps on ?, but DrawTextEx doesn't. What can be done about it?

How can I insert a single byte to be sent prior to an I2C data package?

I am developing an application in Atmel Studio 6 using the xMega32a4u. I'm using the TWI libraries provided by Atmel. Everything is going well for the most part.
Here is my issue: In order to update an OLED display I am using (SSD1306 controller, 128x32), the entire contents of the display RAM must be written immediately following the I2C START command, slave address, and control byte so the display knows to enter the data into the display RAM. If the control byte does not immediately precede the display RAM package, nothing works.
I am using a Saleae logic analyzer to verify that the bus is doing what it should.
Here is the function I am using to write the display:
void OLED_buffer(){ // Used to write contents of display buffer to OLED
uint8_t data_array[513];
data_array[0] = SSD1306_DATA_BYTE;
for (int i=0;i<512;++i){
data_array[i+1] = buffer[i];
}
OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
OLED_command(SSD1306_SETSTARTLINE | 0x00);
twi_package_t buffer_send = {
.chip = OLED_BUS_ADDRESS,
.buffer = data_array,
.length = 513
};
twi_master_write(&TWIC, &buffer_send);
}
Clearly, this is very inefficient, as each call to this function copies the entire "buffer" array into a new array "data_array", one element at a time. The point of this is to insert the control byte (SSD1306_DATA_BYTE = 0x40) into the array so that the entire "package" is sent at once, with the control byte in the right place. I could make the original "buffer" array one element larger and add the control byte as the first element to skip this copying, but that makes the size 513 rather than 512, and might mess with some of the text/graphical functions that manipulate this array and depend on it being the correct size.
Now, I thought I could write the code like this:
void OLED_buffer(){ // Used to write contents of display buffer to OLED
uint8_t data_byte = SSD1306_DATA_BYTE;
OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
OLED_command(SSD1306_SETSTARTLINE | 0x00);
twi_package_t data_control_byte = {
.chip = OLED_BUS_ADDRESS,
.buffer = data_byte,
.length = 1
};
twi_master_write(&TWIC, &data_control_byte);
twi_package_t buffer_send = {
.chip = OLED_BUS_ADDRESS,
.buffer = buffer,
.length = 512
};
twi_master_write(&TWIC, &buffer_send);
}
That doesn't work. The first "twi_master_write" command sends a START, address, control, STOP. Then the next such command sends a START, address, data buffer, STOP. Because the control byte is missing from the latter transaction, this does not work. All I need is to insert a 0x40 byte between the address byte and the buffer array when it is sent over the I2C bus. twi_master_write is a function that is provided in the Atmel TWI libraries. I've tried to examine the libraries to figure out its inner workings, but I can't make sense of it.
Surely, instead of figuring out how to recreate a twi_write function to work the way I need, there is an easier way to add this preceding control byte? Ideally one that is not so wasteful of clock cycles as my first code example? Realistically the display still updates very fast, more than enough for my needs, but that does not change the fact this is inefficient code.
I appreciate any advice you all may have. Thanks in advance!
How about having buffer and data_array point to the same uint8_t[513] array, with buffer starting at its second element? Then you can continue to use buffer as you do today, but also use data_array directly, without first having to copy all the elements from buffer.
uint8_t data_array[513];
uint8_t *buffer = &data_array[1];
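As a sketch of how OLED_buffer() could then look (assuming data_array and buffer are defined at file scope as above, and using only the identifiers already in the question):
void OLED_buffer(){ // control byte lives permanently at data_array[0]
    data_array[0] = SSD1306_DATA_BYTE;
    OLED_command(SSD1306_SETLOWCOLUMN | 0x00);
    OLED_command(SSD1306_SETHIGHCOLUMN | 0x00);
    OLED_command(SSD1306_SETSTARTLINE | 0x00);
    twi_package_t buffer_send = {
        .chip = OLED_BUS_ADDRESS,
        .buffer = data_array, // control byte followed by the 512 display-RAM bytes
        .length = 513
    };
    twi_master_write(&TWIC, &buffer_send);
}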

How to get correct hDevMode values from CPrintDialogEx (PrintDlgEx)?

I'm displaying a CPrintDialogEx dialog to choose a printer and modify the settings. I set the hDevNames member so that a default printer will be selected, but I leave hDevMode set to NULL. On successful return I pull some values such as paper size out of the returned DEVMODE structure from hDevMode.
I'm having a problem because hDevMode appears to be initialized with the values from the default printer that I passed in, not the printer that was finally selected. How do I get the parameters from the actual selected printer?
As requested here's the relevant part of the code. I've deleted some of it in the interest of space. TOwnedHandle is a smart pointer I wrote for holding a memory handle and locking it automatically.
CPrintDialogEx dlg(PD_ALLPAGES | PD_NOCURRENTPAGE | PD_NOPAGENUMS | PD_NOSELECTION, this);
ASSERT(dlg.m_pdex.hDevMode == NULL);
ASSERT(dlg.m_pdex.hDevNames == NULL);
dlg.m_pdex.hDevNames = GlobalAlloc(GHND, sizeof(DEVNAMES) + iSizeName);
DEVNAMES * pDevNames = (DEVNAMES *) GlobalLock(dlg.m_pdex.hDevNames);
// ...
GlobalUnlock(dlg.m_pdex.hDevNames);
if ((dlg.DoModal() == S_OK) && (dlg.m_pdex.dwResultAction == PD_RESULT_PRINT))
{
TOwnedHandle<DEVMODE> pDevMode = dlg.m_pdex.hDevMode;
TRACE("Printer config = %dx%d %d\n", (int)pDevMode->dmPaperWidth, (int)pDevMode->dmPaperLength, (int)pDevMode->dmOrientation);
// ...
}
Edit: I've determined that I don't get the problem if I don't set the hDevNames parameter. I wonder if I've discovered a Windows bug? This is in XP, I don't have a more recent version of Windows handy to test with.
I've distilled the code into a test that doesn't use MFC, this is strictly a Windows API problem. This is the whole thing, nothing left out except the definition of pDefaultPrinter - but of course it doesn't do anything useful anymore.
PRINTDLGEX ex = {sizeof(PRINTDLGEX)};
ex.hwndOwner = m_hWnd;
ex.Flags = PD_ALLPAGES | PD_NOCURRENTPAGE | PD_NOPAGENUMS | PD_NOSELECTION;
ex.nStartPage = START_PAGE_GENERAL;
#if 1
int iSizeName = (strlen(pDefaultPrinter) + 1) * sizeof(char);
ex.hDevNames = GlobalAlloc(GHND, sizeof(DEVNAMES) + iSizeName);
DEVNAMES * pDevNames = (DEVNAMES *) GlobalLock(ex.hDevNames);
ASSERT(pDevNames != NULL);
pDevNames->wDeviceOffset = sizeof(DEVNAMES);
strcpy((char *)pDevNames + pDevNames->wDeviceOffset, pDefaultPrinter);
GlobalUnlock(ex.hDevNames);
#endif
HRESULT hr = PrintDlgEx(&ex);
if ((hr == S_OK) && (ex.dwResultAction == PD_RESULT_PRINT))
{
DEVMODE * pdm = (DEVMODE *) GlobalLock(ex.hDevMode);
ASSERT(pdm != NULL);
TRACE("Printer config = %dx%d %d\n", (int)pdm->dmPaperWidth, (int)pdm->dmPaperLength, (int)pdm->dmOrientation);
GlobalUnlock(ex.hDevMode);
DEVNAMES * pdn = (DEVNAMES *) GlobalLock(ex.hDevNames);
ASSERT(pdn != NULL);
TRACE(_T("Printer device = %s\n"), (char *)pdn + pdn->wDeviceOffset);
GlobalUnlock(ex.hDevNames);
}
If I can't get a fix, I'd love to hear of a work-around.
After much head scratching I think I've figured it out.
When the dialog comes up initially, the hDevMode member gets filled with the defaults for the printer that is initially selected. If you select a different printer before closing the dialog, that DEVMODE structure is presented to the new printer driver; if the paper size doesn't make sense to the driver it may change it, and the drivers are not consistent.
The reason this tripped me up is that I was switching between three printers: two label printers with very different characteristics, and a laser printer with US Letter paper.
The laser printer always responds with the proper dimensions but may indicate a wrong paper size code.
The first label printer will override the size provided by the laser printer but not the other label printer.
The second label printer will accept the size provided by the first label printer, because it's capable of using that size even though it's not loaded and not configured. It modifies the size provided by the laser printer by returning the maximum width and the Letter size length of 11 inches.
I determined two ways to work around the problem. The first is to implement IPrintDialogCallback and respond to SelectionChange calls by reloading the default DEVMODE for the newly selected printer. EDIT: I tried this and it does not work. CPrintDialogEx already implements an IPrintDialogCallback interface, making this easy. It appears that PrintDlgEx has its own internal handle that it uses to track the current DEVMODE structure and only uses the one in the PRINTDLGEX structure for input/output. There's no way to affect the DEVMODE while the dialog is up, and by the time it returns it's too late.
The second solution is to ignore the returned results entirely and work from the default paper configuration for the printer. Any changes made from the printer defaults within the dialog are lost completely, but for my application this is acceptable.
bool MyDialog::GetPaperSize(const TCHAR * pPrinterName, double & dPaperWidth, double & dPaperLength)
{
// you need to open the printer before you can get its properties
HANDLE hPrinter;
if (OpenPrinter((TCHAR *)pPrinterName, &hPrinter, NULL))
{
// determine how much space is needed for the DEVMODE structure by the printer driver
int iDevModeSize = DocumentProperties(m_hWnd, hPrinter, (TCHAR *)pPrinterName, NULL, NULL, 0);
ASSERT(iDevModeSize >= sizeof(DEVMODE));
// allocate a DEVMODE structure and initialize it to a clean state
std::vector<char> buffer(iDevModeSize, 0);
DEVMODE * pdm = (DEVMODE *) &buffer[0];
pdm->dmSpecVersion = DM_SPECVERSION;
DocumentProperties(m_hWnd, hPrinter, (TCHAR *)pPrinterName, pdm, NULL, DM_OUT_BUFFER);
ClosePrinter(hPrinter);
// convert paper size from tenths of a mm to inches
dPaperWidth = pdm->dmPaperWidth / 254.;
dPaperLength = pdm->dmPaperLength / 254.;
return true;
}
return false;
}
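A hypothetical call site (not part of the answer), assuming the PRINTDLGEX sample above and relying on DEVNAMES offsets being in TCHARs, could pull the selected device name out of hDevNames and query that printer directly:
DEVNAMES * pdn = (DEVNAMES *) GlobalLock(ex.hDevNames);
const TCHAR * pDeviceName = (const TCHAR *) pdn + pdn->wDeviceOffset;
double dWidth = 0.0, dLength = 0.0;
if (GetPaperSize(pDeviceName, dWidth, dLength))
    TRACE(_T("Default paper = %f x %f in\n"), dWidth, dLength);
GlobalUnlock(ex.hDevNames);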

Getting PIX_FMT_YUYV422 out of libswscale

I'm trying to learn to use the different ffmpeg libs with Cocoa, and I'm trying to get frames to display with help of Core Video. It seems I have gotten the CV callbacks to work, and it gets frames which I try to put in a CVImageBufferRef that I later draw with Core Image.
The problem is I'm trying to get PIX_FMT_YUYV422 to work with libswscale, but as soon as I change the pixel format to anything other than PIX_FMT_YUV420P it crashes with EXC_BAD_ACCESS.
As long as I use YUV420P the program runs, although it doesn't display properly. I suspected that the pixel format isn't supported, so I wanted to try PIX_FMT_YUYV422.
I have had it running before and successfully wrote PPM files with PIX_FMT_RGB24. For some reason it just crashes on me now, and I don't see what might be wrong.
I'm a bit in over my head here, but that is how I prefer to learn. :)
Here's how I allocate the AVFrames:
inFrame = avcodec_alloc_frame();
outFrame = avcodec_alloc_frame();
int frameBytes = avpicture_get_size(PIX_FMT_YUYV422, cdcCtx->width, cdcCtx->height);
uint8_t *frameBuffer = malloc(frameBytes);
avpicture_fill((AVPicture *)outFrame, frameBuffer, PIX_FMT_YUYV422, cdcCtx->width, cdcCtx->height);
Then I try to run it through swscale like so:
static struct SwsContext *convertContext;
if (convertContext == NULL) {
int w = cdcCtx->width;
int h = cdcCtx->height;
convertContext = sws_getContext(w, h, cdcCtx->pix_fmt, outWidth, outHeight, PIX_FMT_YUYV422, SWS_BICUBIC, NULL, NULL, NULL);
if (convertContext == NULL) {
NSLog(@"Cannot initialize the conversion context!");
return NO;
}
}
sws_scale(convertContext, inFrame->data, inFrame->linesize, 0, outHeight, outFrame->data, outFrame->linesize);
And finally I try to write it to a pixel buffer for use with Core Image:
int ret = CVPixelBufferCreateWithBytes(0, outWidth, outHeight, kYUVSPixelFormat, outFrame->data[0], outFrame->linesize[0], 0, 0, 0, &currentFrame);
With 420P it runs, but it doesn't match up with the kYUVSPixelFormat of the pixel buffer, and as I understand it that format doesn't accept YUV420.
I would really appreciate any help, no matter how small, as it might help me struggle on. :)
This certainly isn't a complete code sample, since you never decode anything into the input frame; assuming you do that somewhere, the rest looks correct.
You also don't need to fill the output picture, or even allocate an AVFrame for it, really.
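As a sketch of that idea (not from the original answer): for packed YUYV422 output, sws_scale can write straight into a plain buffer with a stride of 2 bytes per pixel, with no output AVFrame at all:
uint8_t *outBuffer = malloc(outWidth * outHeight * 2); // remember to free() when done
uint8_t *dstData[4] = { outBuffer, NULL, NULL, NULL };
int dstStride[4] = { outWidth * 2, 0, 0, 0 };
// the slice height argument is the height of the *source* frame
sws_scale(convertContext, inFrame->data, inFrame->linesize,
          0, cdcCtx->height, dstData, dstStride);
// outBuffer (with stride outWidth * 2) can then be handed to
// CVPixelBufferCreateWithBytes as before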
YUV420P is a planar format. Therefore, AVFrame.data[0] is not the whole story. I see a mistake in
int ret = CVPixelBufferCreateWithBytes(0, outWidth, outHeight, kYUVSPixelFormat, outFrame->data[0], outFrame->linesize[0], 0, 0, 0, &currentFrame);
For planar formats, you will have to read the plane pointers from AVFrame.data[0] up to AVFrame.data[3].
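For example, a minimal sketch (my illustration, assuming 4:2:0 chroma subsampling, so the U and V planes are half the width and height of the Y plane) that copies all three planes of a YUV420P frame into one contiguous buffer:
void copy_yuv420p(const AVFrame *frame, int w, int h, uint8_t *dst)
{
    const int planeWidth[3]  = { w, w / 2, w / 2 };
    const int planeHeight[3] = { h, h / 2, h / 2 };
    for (int p = 0; p < 3; ++p) {
        for (int y = 0; y < planeHeight[p]; ++y) {
            // each plane row may be padded, so advance by linesize, copy only planeWidth bytes
            memcpy(dst, frame->data[p] + y * frame->linesize[p], planeWidth[p]);
            dst += planeWidth[p];
        }
    }
}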
