Because I use XIM and Xft, I sometimes need an Xlib Display in my XCB-based code.
My question: should I open the display at the beginning of my program and close it at the end, or open and close it every time I need it?
It is better to open the Display once; at least, that is the common practice.
IIRC, XOpenDisplay involves setting up a TCP (or local socket) connection to the X11 server and a few exchanges to initialize things, e.g. the X11 atoms which became standard but are not predefined (I'm not sure about that last point).
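For what it's worth, a minimal C sketch of the open-once pattern, assuming a libX11 built with the Xlib-xcb bridge so the Xlib and XCB sides can share a single connection:

```c
#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/Xlib-xcb.h>   /* XGetXCBConnection */

int main(void)
{
    /* Open the display once, up front; this is the expensive step
     * (connection setup plus the initial handshake). */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) {
        fputs("cannot open display\n", stderr);
        return EXIT_FAILURE;
    }

    /* Reuse the same connection from the XCB side, so the Xlib calls
     * (XIM, Xft) and the XCB calls share one socket. */
    xcb_connection_t *conn = XGetXCBConnection(dpy);
    (void)conn;

    /* ... run the application: conn for XCB calls, dpy for Xlib ... */

    /* Close once at exit; this also tears down the XCB side. */
    XCloseDisplay(dpy);
    return EXIT_SUCCESS;
}
```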
In Turbo Pascal 7 for DOS you can use the Crt unit to define a window. If you define a second window on top of the first one, like a popup, I don’t see a way to get rid of the second one except for redrawing the first one on top again.
Is there a window close technique I’m overlooking?
I’m considering keeping an array of screens in memory to make it work, but the TP IDE does popups like I want to do, so maybe it’s easy and I’m just looking in the wrong place?
I don't think there's a window-closing technique you're missing, if you mean one provided by the CRT unit.
The library Borland used for the TP7 IDE was called Turbo Vision (see https://en.wikipedia.org/wiki/Turbo_Vision), and it was eventually released to the public domain. Well before that, though, a number of third-party screen-handling/windowing libraries had become available, and these were much more powerful than what could be achieved with the CRT unit. Probably the best known was TurboPower Software's Object Professional (aka OPro).
Afaik, these libraries (and, fairly obviously, Turbo Vision) were all based on an in-memory representation of a framed window which could be rapidly copied in and out of the PC's video memory, and, as in Windows with a capital W, the windows were treated as a stack with a z-order. So the process of closing/erasing the top-level window was one of getting the window(s) it had been covering to redraw themselves. Otoh, CRT had basically evolved from very primitive origins similar to, if not based on, the old DEC VT100 display protocol, and wasn't really up to the job of supporting independent, stackable window objects.
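To illustrate the save-under idea those libraries used, here is a small C sketch; the Cell type, the 80x25 buffer, and the function names are mine, standing in for direct access to the real text-mode video memory:

```c
#include <stdlib.h>
#include <string.h>

/* One text-mode cell: character plus attribute byte, as in CGA/VGA memory. */
typedef struct { unsigned char ch, attr; } Cell;

#define COLS 80
#define ROWS 25
static Cell screen[ROWS][COLS];   /* stands in for video RAM at B800:0000 */

/* Before drawing a popup, save the rectangle of cells it will cover. */
Cell *save_under(int x, int y, int w, int h)
{
    Cell *buf = malloc((size_t)w * h * sizeof *buf);
    for (int row = 0; row < h; row++)
        memcpy(buf + row * w, &screen[y + row][x], w * sizeof *buf);
    return buf;
}

/* "Close" the popup by restoring what was underneath, then free the copy. */
void restore_under(Cell *buf, int x, int y, int w, int h)
{
    for (int row = 0; row < h; row++)
        memcpy(&screen[y + row][x], buf + row * w, w * sizeof *buf);
    free(buf);
}
```

Closing a window is then just restore_under() plus popping it off the z-order stack; nothing beneath it needs to repaint itself.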
Although you may still be able to track down the PD release of Turbo Vision, it never really caught on as a library for developers. In an ideal world, a better place to start would be OPro. It was apparently on SourceForge for a while, but seems to have been taken down sometime since about 2007, and these days, even if you could get hold of a copy, there is a bit of a question mark over its licensing. However ...
There was also a very popular freeware library for TP called the "TechnoJock's Toolkit", which had a large functionality overlap with OPro (including screen handling) and is still available on GitHub - see https://github.com/lallousx86/TurboPascal/tree/master/TotLib/TOTSRC11. Unlike OPro, I never used TechnoJock's myself, but devotees swore by it. Take a look.
I would like some advice on how to port an old C++ program written for MS-DOS in the early 90s.
This program implements a quite complex text user interface. The interface code is well separated from the logic, and I don't think it would be too difficult to make it use ncurses.
Being a complete novice, I have a few questions:
The DOS program intercepts interrupt 0x33 to handle mouse events. The interrupt handler stores events in a FIFO, which the main program polls periodically. (Every element in the FIFO is a C structure containing information about the nature of the event, the position of the mouse, and the state of its buttons.) To keep the logic of the code unchanged, I was thinking of firing a thread which calls getch() asynchronously within an infinite loop and fills the FIFO the same way the old program did. My idea is that this thread, and only this thread, would access stdin, while the main thread would only have the responsibility of accessing stdout (through add_wch() and similar). Is ncurses safe to use in this way, or do stdin/stdout accesses always need to be done from the same thread?
The way colors are set in this app is quite byzantine, as it uses the concept of "inherited palettes". Basically, a window usually specifies both the background and foreground colors, and every widget within that window sets the foreground only (but a few widgets redefine both fg/bg). I understand that ncurses' attribute functions (attrset() and friends) always want colors specified as pairs, which must be initialized with init_pair(), and this doesn't play nicely with the logic of this program. I am therefore thinking of using tiparm() to directly send setaf/setab sequences when the program wants to change the fg/bg color, respectively. (I would lose the ability to run the code on terminals which do not support setaf/setab, but this would not be a huge loss.) Is it safe to send setaf/setab control sequences and then call functions like add_wch(), or should the latter be used only in association with the attribute functions?
I could write a few test scripts to check that my ideas work, but that would not tell me whether the approach is guaranteed to always work.
Thanks for any help!
There are a lot of possibilities - but the approach described sounds like terminfo (low-level) rather than curses, except for the mention of add_wch. Rather than tiparm, a curses application would use wattr_set, init_pair, start_color, etc.
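For example, the pair bookkeeping can be hidden behind the program's existing fg/bg model with a small cache; a sketch, where pair_for() and the 8-color assumption are illustrative, not part of curses:

```c
#include <curses.h>

static short pair_cache[8][8];   /* assumes the classic 8 ANSI colors */
static short next_pair = 1;      /* pair 0 is reserved for the default */

/* Allocate a curses color pair for (fg, bg) on first use, then reuse it.
 * Assumes start_color() has already been called after initscr(). */
static short pair_for(short fg, short bg)
{
    if (pair_cache[fg][bg] == 0) {
        pair_cache[fg][bg] = next_pair;
        init_pair(next_pair++, fg, bg);
    }
    return pair_cache[fg][bg];
}

/* A widget keeps the window's inherited bg and overrides only the fg. */
void set_widget_color(WINDOW *win, short fg, short inherited_bg)
{
    wattr_set(win, A_NORMAL, pair_for(fg, inherited_bg), NULL);
}
```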
ncurses I/O has to be done in one thread; while ncurses can be compiled to help with this (by using mutexes in some places), packagers have generally ignored that option (and even with that configuration, application developers would still have work to do).
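As an alternative to the reader thread, the FIFO can be filled by non-blocking polling from the same thread that draws; a sketch, where push_key_event() and push_mouse_event() stand in for the application's existing FIFO routines:

```c
#include <curses.h>

/* Hypothetical FIFO routines from the existing application. */
void push_key_event(int ch);
void push_mouse_event(int x, int y, mmask_t bstate);

/* Call once at startup, after initscr() and start_color(). */
void input_init(void)
{
    keypad(stdscr, TRUE);               /* deliver KEY_MOUSE etc.    */
    mousemask(ALL_MOUSE_EVENTS, NULL);  /* enable mouse reporting    */
    timeout(0);                         /* make getch() non-blocking */
}

/* Call once per main-loop iteration, from the same thread that draws. */
void input_poll(void)
{
    int ch;
    while ((ch = getch()) != ERR) {     /* drain all pending input   */
        if (ch == KEY_MOUSE) {
            MEVENT ev;
            if (getmouse(&ev) == OK)
                push_mouse_event(ev.x, ev.y, ev.bstate);
        } else {
            push_key_event(ch);
        }
    }
}
```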
Further reading:
curses color manipulation routines
curses character and window attribute control routines
curses thread support
(I work in QA, this really is for legitimate use.)
I'm trying to come up with a way to introduce forced input lag for both keyboard and mouse in Windows. For example, when I press 'A' on the keyboard, I want to introduce a very slight delay before the OS processes that 'A'. Or if I move the mouse, I'd like the same mouse speed, but with the same slight delay before the movement kicks in. This lag needs to be present across all threads, not just the one that kicked off the process. However, the lag doesn't have to be to-the-millisecond precise every time.
I'm not even sure how to go about setting this up. I'm capable of writing it in whatever language/environment is needed; I'm just not sure where to start. I think something like AutoHotkey may be able to do what I want by essentially making an arbitrary key call a macro that delays slightly before sending that key, but I'm not sure which function calls I'd need to make that happen. Or maybe there's a way in C to get at the input across the OS before it kicks in. I'm just not sure.
Can anyone point me to some resources or a language/function(s) that can accomplish this? (Or even an already-existing program or service.)
If you want a purely software solution, I’m afraid you’ll need to develop a filter driver for your keyboard and mouse. That is very expensive to develop.
Instead, you could plug your mouse and keyboard in somewhere else, have the input messages come through the network, and then introduce network latency. You could use a second PC + VNC software, a second PC + software USB/IP, or a hardware USB/IP device like this one.
There’s an easy but less reliable way.
You could install system-wide WH_KEYBOARD_LL and WH_MOUSE_LL hooks, discard the original messages, and after a short delay send the delayed messages with the SendInput API. This should mostly work; however, there are cases where it won’t, e.g. I don’t expect it to have any effect on most video games, because they use raw input.
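A rough sketch of that approach for the keyboard side (DELAY_MS and the thread-per-keystroke scheme are illustrative simplifications, not a production design; the mouse hook would follow the same pattern with WH_MOUSE_LL):

```c
#include <windows.h>

#define DELAY_MS 50

/* Worker: wait, then re-inject the saved keystroke with SendInput. */
static DWORD WINAPI resend_key(LPVOID param)
{
    KBDLLHOOKSTRUCT k = *(KBDLLHOOKSTRUCT *)param;
    HeapFree(GetProcessHeap(), 0, param);
    Sleep(DELAY_MS);

    INPUT in = {0};
    in.type = INPUT_KEYBOARD;
    in.ki.wVk = (WORD)k.vkCode;
    in.ki.wScan = (WORD)k.scanCode;
    in.ki.dwFlags = (k.flags & LLKHF_UP) ? KEYEVENTF_KEYUP : 0;
    SendInput(1, &in, sizeof in);
    return 0;
}

/* Swallow hardware keystrokes; let injected ones (ours) pass through,
 * otherwise we would loop forever. */
static LRESULT CALLBACK kbd_hook(int code, WPARAM wp, LPARAM lp)
{
    KBDLLHOOKSTRUCT *k = (KBDLLHOOKSTRUCT *)lp;
    if (code == HC_ACTION && !(k->flags & LLKHF_INJECTED)) {
        KBDLLHOOKSTRUCT *copy =
            (KBDLLHOOKSTRUCT *)HeapAlloc(GetProcessHeap(), 0, sizeof *copy);
        if (copy) {
            *copy = *k;
            CreateThread(NULL, 0, resend_key, copy, 0, NULL);
            return 1;   /* discard the original keystroke */
        }
    }
    return CallNextHookEx(NULL, code, wp, lp);
}

int main(void)
{
    SetWindowsHookExW(WH_KEYBOARD_LL, kbd_hook, GetModuleHandleW(NULL), 0);
    MSG msg;            /* low-level hooks require a message loop */
    while (GetMessageW(&msg, NULL, 0, 0) > 0)
        DispatchMessageW(&msg);
    return 0;
}
```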
Suppose a window with handle 123456 is closed and another window is opened. Can Windows, even in the rarest of cases, assign handle 123456 to the new window?
There's no guarantee anywhere that Windows won't immediately re-use a handle. In practice, all extant implementations take steps to try to avoid re-using handles.
We have an application with one or more text console windows that all essentially represent serial ports (text input and output, character by character). These windows have turned into a major performance problem in the way they are currently coded... we manage to spend a very significant chunk of time in them.
The current code is structured by having each window live its own little life, with the main application thread driving it via SendMessage() calls. This message passing seems to be the cause of incredible overhead. Basically, having a detour through the OS feels like the wrong thing to do.
Note that we do draw text lines as a whole where appropriate, so that easy optimization is already done.
I am not an expert in Windows coding, so I need to ask the community if there is some other architecture for driving the display of text in a window than sending messages like this. It seems pretty heavyweight.
Note that this is in C++ or plain C, as the main application is a portable C/C++/some other languages program that also runs on Linux and Solaris.
We did some more investigation; it seems that half of the overhead is preparing and sending each message using SendMessage, and the other half is the actual screen drawing. The SendMessage calls are made between functions in the same file...
So I guess all the advice given below is correct:
Look for how much things are redrawn
Draw things directly
Chunk drawing operations in time, so as not to send every character to the screen individually, aiming for a 10 to 20 Hz update rate of the serial console.
Can you accept ALL answers?
I agree with Will Dean that drawing in a console window or a text box is a performance bottleneck by itself. You first need to be sure that this isn't your problem. You say that you draw each line as a whole, but even this could be a problem if the data throughput is too high.
I recommend that you not use SendMessage to pass data from the main application to the text window. Instead, use some other means of communication. Are these in the same process? If not, you could use shared memory. Even a file on disk could do in some circumstances. Have the main application write to this file and the text console read from it. You could send a SendMessage notification to the text console to inform it to update the view, but do not send the message whenever a new line arrives. Define a minimum interval between two subsequent updates.
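To make the minimum-interval idea concrete in Win32 terms, here is a sketch; console_put_char(), append_to_display_buffer(), and the timer constants are illustrative assumptions, not an existing API:

```c
#include <windows.h>

#define IDT_FLUSH 1
#define FLUSH_MS  50            /* ~20 Hz, in the 10-20 Hz range suggested */

/* Hypothetical hook into the app's own text store / paint path. */
void append_to_display_buffer(const char *s, int len);

static char backbuf[4096];      /* characters received since the last flush */
static int  backlen;
static BOOL timer_armed;

/* Producer side: called for every incoming character; no SendMessage,
 * assuming producer and window live on the same thread. */
void console_put_char(HWND hwnd, char c)
{
    if (backlen < (int)sizeof backbuf)
        backbuf[backlen++] = c;
    if (!timer_armed) {                      /* one timer covers the burst */
        SetTimer(hwnd, IDT_FLUSH, FLUSH_MS, NULL);
        timer_armed = TRUE;
    }
}

/* Call this from the window procedure when WM_TIMER/IDT_FLUSH arrives. */
void console_on_flush_timer(HWND hwnd)
{
    KillTimer(hwnd, IDT_FLUSH);
    timer_armed = FALSE;
    append_to_display_buffer(backbuf, backlen);
    backlen = 0;
    InvalidateRect(hwnd, NULL, FALSE);       /* one WM_PAINT for the batch */
}
```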
You should try profiling properly, but in lieu of that I would stop worrying about SendMessage, which is almost certainly not your problem, and think about the redrawing of the window itself.
You describe these as 'text console windows', but then say you have multiple of them - are they actually Windows Consoles? Or are they something your application is drawing?
If the latter, then I would be looking at measuring my paint code, and whether I'm invalidating too much of a window on each update.
Are the output windows part of the same application? It almost sounds like they aren't...
If they are, you should look into the Observer design pattern to get away from SendMessage(). I've used it for the same type of use case, and it worked beautifully for me.
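In plain C, the Observer idea can be as small as a registered callback that the serial-port code invokes directly, with no message queue involved; a sketch with illustrative names:

```c
#include <stddef.h>

/* Observer interface: the console window registers a function that the
 * serial-port code calls directly - all in-process, no SendMessage. */
typedef void (*console_observer)(void *ctx, const char *data, size_t len);

static console_observer g_observer;
static void            *g_observer_ctx;

/* The console window subscribes once, e.g. at window creation. */
void console_subscribe(console_observer fn, void *ctx)
{
    g_observer = fn;
    g_observer_ctx = ctx;
}

/* Serial-port side: notify the observer with a plain function call. */
void serial_data_arrived(const char *data, size_t len)
{
    if (g_observer)
        g_observer(g_observer_ctx, data, len);
}
```

This only works because everything lives in one process (as the asker confirms below); the observer can then buffer internally and repaint on its own schedule.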
If you can't make a change like that, perhaps you could buffer your output for something like 100ms so that you don't have so many out-going messages per second, but it should also update at a comfortable rate.
"Are the output windows part of the same application? It almost sounds like they aren't..."
Yes they are, all in the same process.
I did not write this code... but it seems like SendMessage is a bit heavy for this all-in-one-application case.
"You describe these as 'text console windows', but then say you have multiple of them - are they actually Windows Consoles? Or are they something your application is drawing?"
Our app is drawing them; they are not regular Windows consoles.
Note that we also need to get data back when a user types into the console, as we quite often have interactive serial sessions. Think of it as very similar to what you would see in a serial terminal program -- but using an external application is obviously even more expensive than what we have now.
"If you can't make a change like that, perhaps you could buffer your output for something like 100ms so that you don't have so many out-going messages per second, but it should also update at a comfortable rate."
Good point. Right now, every single character output causes a message to be sent.
And when a newline arrives and we scroll the window up, we redraw the window line by line.
Note that we also have a scrollback buffer of arbitrary size, but scrolling back is an interactive case with much lower performance requirements.