What is GUI toolkit better architecture? [closed] - user-interface

I am working on a lightweight GUI toolkit. It is designed to be easily portable to X11 (Xlib), Win32, and possibly other systems with only rudimentary GUI support.
As far as I know, there are mainly two possible architectures:
Use the window services provided by the OS - X11 windows on Linux and ordinary windows on Win32. In this approach, every control is a window object just like its parent: it receives events from the OS and processes them, has its own painting surface, and so on.
Use OS-provided windows only for the top-level windows - the main application window, dialog boxes, etc. All child windows are simply painted on the surface of their parent window. In this case the toolkit has to manage the parent-child relations itself; events are received only by the top-level window and have to be dispatched to the controls (a rough sketch of this second approach is shown below).
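Below is a minimal sketch of how the second approach could be structured. All class and function names are hypothetical and not taken from any existing toolkit; it only illustrates the parent-child management and event dispatch that the toolkit has to do itself:

    // Minimal sketch of the second approach: only the top-level window is a real
    // OS window; child "widgets" are plain objects that the toolkit paints and
    // to which it dispatches events itself.
    #include <memory>
    #include <vector>

    struct Event { int x, y; };            // e.g. a mouse click in window coordinates

    class Widget {
    public:
        Widget(int x, int y, int w, int h) : x_(x), y_(y), w_(w), h_(h) {}
        virtual ~Widget() = default;

        void add_child(std::unique_ptr<Widget> child) {
            children_.push_back(std::move(child));
        }

        // Walk the tree and hand the event to the deepest widget under the cursor.
        bool dispatch(const Event& e) {
            if (!contains(e.x, e.y)) return false;
            for (auto& child : children_)
                if (child->dispatch(e)) return true;   // a child consumed it
            return on_event(e);                        // otherwise handle it here
        }

        // Paint this widget onto the top-level window's surface, then the children.
        virtual void paint(/* native drawing context, e.g. HDC or X11 GC */) {
            for (auto& child : children_) child->paint();
        }

    protected:
        virtual bool on_event(const Event&) { return false; }
        bool contains(int px, int py) const {
            return px >= x_ && px < x_ + w_ && py >= y_ && py < y_ + h_;
        }

        int x_, y_, w_, h_;
        std::vector<std::unique_ptr<Widget>> children_;
    };

    // The only real OS window: it receives raw events from Win32/Xlib and
    // forwards them to the widget tree.
    class TopLevelWindow : public Widget {
    public:
        TopLevelWindow(int w, int h) : Widget(0, 0, w, h) {}
        void on_native_event(const Event& e) { dispatch(e); }  // called from the WndProc / XNextEvent loop
    };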
Which variant do the widespread GUI toolkits use - Qt, wxWidgets, FLTK, others? Why did they choose that approach?
How do the two variants affect the size and speed of the resulting GUI toolkit?

I can't comment yet, but I strongly advise against targeting X directly right now; in roughly two years Wayland (or maybe Mir) will be the main display server on Linux.
And I think the main problems with the first approach are these:
You have to gain a thorough knowledge of both systems (X and Win32 (and why not Cocoa for OS X?)).
If X changes a small implementation detail somewhere, you will have to change your code to take it into account, whereas if you only use the top-level stuff, it is less likely to change.
There can be a lot of code duplication (e.g. handling checkboxes for both Win32 and X).

Related

What language to use for a keyboard simulation? [closed]

I want to create a small application for Windows that
can simulate key presses and button clicks while running in the background
has optical character recognition (OCR)
I have programmed a few things in the past but never a Windows application, so I don't know which language is best suited for this.
I don't need a complete tutorial on how to do it; I just want a hint about which language may be the best fit.
Greetings
Hithfaeron
Hi,
you can use any language you are comfortable with.
You can use C, C++, Java, JavaScript (Node), Python, etc.
Most of the common languages are supported on all modern operating systems.
What you want to achieve is to make your program listen for keyboard and mouse events, which any programming language can do by interacting with the operating system.
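To make the "listen for keyboard events" part concrete, here is a minimal sketch using a Win32 low-level keyboard hook in C++ (one possible approach, assuming the Windows SDK; it is not the only way to do it):

    // Minimal sketch: a low-level keyboard hook that logs every key press
    // system-wide. Build as a console program linked against user32.lib.
    #include <windows.h>
    #include <cstdio>

    LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam) {
        if (code == HC_ACTION && wParam == WM_KEYDOWN) {
            const KBDLLHOOKSTRUCT* k = reinterpret_cast<KBDLLHOOKSTRUCT*>(lParam);
            std::printf("key down: virtual-key code 0x%02X\n", (unsigned)k->vkCode);
        }
        return CallNextHookEx(nullptr, code, wParam, lParam);  // always pass the event on
    }

    int main() {
        // WH_KEYBOARD_LL runs in the installing thread, so no DLL injection is needed,
        // but the thread must pump messages for the hook to be called.
        HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, KeyboardProc, GetModuleHandleW(nullptr), 0);
        if (!hook) return 1;

        MSG msg;
        while (GetMessageW(&msg, nullptr, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        UnhookWindowsHookEx(hook);
        return 0;
    }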
Even though I don't know what exactly you mean by optical character recognition, you can create a Windows app like that with nearly any common language. I've actually created a .dll that is capable of simulating a keyboard at a very decent speed (up to 3000 words per minute) with Visual Basic. It's a very basic language, but since I did it by calling into the user32.dll library, it's actually very easy to use. Would you like me to share it with you? That way you don't have to worry about the coding itself, as it's very tedious and repetitive.
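For the "simulate key presses" part, the user32 route mentioned above essentially boils down to the Win32 SendInput call. A minimal C++ sketch, assuming plain virtual-key presses are enough:

    // Minimal sketch: simulate pressing and releasing the 'A' key with SendInput.
    #include <windows.h>

    // Send a key-down followed by a key-up for the given virtual-key code.
    void press_key(WORD vk) {
        INPUT inputs[2] = {};

        inputs[0].type = INPUT_KEYBOARD;
        inputs[0].ki.wVk = vk;                  // key down

        inputs[1].type = INPUT_KEYBOARD;
        inputs[1].ki.wVk = vk;
        inputs[1].ki.dwFlags = KEYEVENTF_KEYUP; // key up

        SendInput(2, inputs, sizeof(INPUT));
    }

    int main() {
        Sleep(3000);      // give yourself time to focus the target window
        press_key('A');   // the character 'A' doubles as its virtual-key code
        return 0;
    }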

Windows/SDL2 + OpenGL for programs with interface [closed]

I want to create a piece of software that has a menu like the standard Windows menu bars and supports opening Windows Explorer, where I can look up file paths etc. At the same time I want to render something to another section of the screen.
Think of a program where you open a .obj (3D object) via Windows Explorer from a dropdown menu; it then loads into the program and is rendered to one half of the window, while some sliders and options sit on the other.
I know how to create an SDL2 window and use OpenGL in it, but I cannot seem to make the connection between an SDL2 window and a Windows window (I think they are of the same type, though). In my understanding, SDL2 is just a wrapper around the standard WinAPI that can also do the same on Linux. (Waiting to get corrected.)
First off, can SDL2 do what I want, or do I need to learn the WinAPI in addition? (It doesn't need to be portable to Linux or Mac.)
Are there better alternatives? (What do you use?) Preferably something more low-level, because I prefer knowing what I am doing.
And of course, if you can recommend some online resources, that would be great.
SDL is, in its own words:
"Simple DirectMedia Layer is a cross-platform development library designed to provide low level access to audio, keyboard, mouse, joystick, and graphics hardware via OpenGL and Direct3D."
While SDL is capable of creating a simple window, it does not provide more complex widgets such as input controls or file dialogs.
It's up to you to create and manage those controls, for example using the Win32 API directly.
There are also some good toolkits you can use instead of the low-level Win32 API: Qt, wxWidgets, .NET, etc.
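To illustrate the part SDL2 does cover, here is a minimal sketch of creating a window with an OpenGL context and running an event loop (assuming SDL2 and an OpenGL driver are available); the menus, file dialogs, and sliders have to come from somewhere else:

    // Minimal sketch: an SDL2 window with an OpenGL context and a basic event loop.
    // Link against SDL2 and OpenGL (e.g. -lSDL2 -lGL on Linux, SDL2.lib/opengl32.lib on Windows).
    #include <SDL.h>
    #include <SDL_opengl.h>

    int main(int, char**) {
        if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

        SDL_Window* window = SDL_CreateWindow("viewer",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            1280, 720, SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
        SDL_GLContext gl = SDL_GL_CreateContext(window);

        bool running = true;
        while (running) {
            SDL_Event e;
            while (SDL_PollEvent(&e))
                if (e.type == SDL_QUIT) running = false;

            glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            // ... render the loaded .obj into one part of the window here ...

            SDL_GL_SwapWindow(window);
        }

        SDL_GL_DeleteContext(gl);
        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }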

How can I detect touch input events using hooks, in Windows 10? [closed]

I am unable to figure out how I can detect touch events made anywhere on a touch-screen monitor using hooks. Is that even possible?
This may not be exactly what you're looking for, but there are a few ways to set hooks:
SetWinEventHook (active accessibility hook). Pros: is a supported high-level way to get events from Windows. Cons: can slow down applications, especially if you are running an "out of context" hook.
SetWindowsHookEx. Pros: very low-level hooking into applications. Cons: doesn't support out-of-context hooks, so you need to write your own IPC; it is also sometimes unreliable (e.g. sometimes you miss events in Command Prompt).
Looking through the first API I don't see anything specific to touch (although I would encourage you to grab the most recent Windows SDK and look at the different events). You could, however, simply look for cursor position changes to know where the user most recently touched.
The second API may give you the kind of control you want, because you can use a WH_CALLWNDPROC hook to trap touch events. But then again, a window only receives touch-related messages if it has marked itself as touch-aware (e.g. by calling RegisterTouchWindow). So even this may not do what you want.
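To make the first option concrete, here is a minimal sketch of an out-of-context SetWinEventHook that watches cursor location changes, as suggested above. Since touch taps also move the cursor, this catches them indirectly; it is a workaround, not a true touch hook:

    // Minimal sketch: an out-of-context WinEvent hook that reports cursor moves,
    // which is one indirect way to see where the user last touched the screen.
    #include <windows.h>
    #include <cstdio>

    void CALLBACK WinEventProc(HWINEVENTHOOK, DWORD event, HWND,
                               LONG idObject, LONG, DWORD, DWORD) {
        if (event == EVENT_OBJECT_LOCATIONCHANGE && idObject == OBJID_CURSOR) {
            POINT p;
            if (GetCursorPos(&p))
                std::printf("cursor now at (%ld, %ld)\n", p.x, p.y);
        }
    }

    int main() {
        // WINEVENT_OUTOFCONTEXT: events are delivered to this process, no DLL injection.
        HWINEVENTHOOK hook = SetWinEventHook(
            EVENT_OBJECT_LOCATIONCHANGE, EVENT_OBJECT_LOCATIONCHANGE,
            nullptr, WinEventProc, 0, 0, WINEVENT_OUTOFCONTEXT);
        if (!hook) return 1;

        MSG msg;                                  // the hook needs a message loop
        while (GetMessageW(&msg, nullptr, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        UnhookWinEvent(hook);
        return 0;
    }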

Should I learn a cross-platform GUI language or go with a native? [closed]

Should I learn a cross-platform language that runs on Windows, Mac OS X, and Linux, or just develop in a native GUI language? I heard that cross-platform is slower than native. Is it that much slower? And could you please recommend some GUI languages?
It all depends on your final application. I think it's reasonable to say that the knowledge of a more flexible and cross-platform tool is much more valuable than another one that works only in a limited domain.
I would suggest you start with wxPython: it works on every platform, is well tested, and has all the widgets you might need. Python is an interpreted language and therefore slower than compiled languages, but the loss in execution speed is not even noticeable in most applications.
Ah, and it's simple to learn.
GUIs are not made with languages; they are made with toolkits. On proprietary operating systems, these are integrated into the OS itself. On open-source operating systems, they are separate libraries and thus almost always inherently cross-platform. For example, Qt and GTK+ both run on all three of the biggest operating systems today. In theory, native GUIs should run faster on Windows and Mac OS X than cross-platform GUIs, but obviously that depends on a wide variety of factors. The opposite might be true.
Luckily, properly-programmed GUIs are almost never significant bottlenecks. You should consider whether various GUI toolkits have the features you want and what languages (especially ones you already know) they can be used in.

Multi-platform "easy" window programming [closed]

I'm thinking about programming a tool that would be useful on Windows and Mac (as we use both at work), and it's 100% necessary that it runs inside a native OS window.
The first thing that came to my mind was to use Java, as it's cross-platform, but are there any alternatives for writing cross-platform, window-based programs?
Has anyone tried using C# Windows Forms with Mono on other OSes?
I'm interested in a garbage-collected language if possible, as I don't want to think about memory leaks for a tool where running a bit slower or faster doesn't matter.
Also, if it can be as easy as Visual Studio + C#, that would be awesome!
Any ideas will be appreciated, thanks!
Java is fine if you're comfortable with it.
Many languages have bindings to cross-platform toolkits: for example, Python is very pleasant and has PyQt4 or wxPython, both of which can be used to make GUIs that work nicely on Windows or Mac.
In the manage-your-memory world, using Qt from within C++ is actually very pleasant (they have a nice API). I find it creates more elegant applications than my Java code (they feel a tad more native) though YMMV.
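To give a feel for how small a Qt/C++ GUI can be, here is a minimal sketch, assuming Qt 5 or 6 with the Widgets module installed:

    // Minimal sketch of a Qt Widgets program: one window with a button.
    // Build with qmake or CMake against the Qt Widgets module.
    #include <QApplication>
    #include <QPushButton>

    int main(int argc, char* argv[]) {
        QApplication app(argc, argv);

        QPushButton button("Hello from Qt");
        // Quit the application when the button is clicked.
        QObject::connect(&button, &QPushButton::clicked, &app, [&app]{ app.quit(); });
        button.resize(200, 60);
        button.show();

        return app.exec();
    }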

Resources