How are GUIs really made?

My question is about GUI libraries like Qt, say on Windows: how do they create all those graphical user interfaces (windows and so on)?
Does each operating system provide APIs for this, or something else? If so, how do operating systems draw all those windows and other elements? Do they "control" the screen and then draw each pixel one by one to achieve the GUI?
I would like an answer that explains things at the lowest level possible, though I don't expect anyone to write out everything that happens (even if I would like that), since I know a lot is going on behind all this. For that reason, comments with links or book suggestions that explain in detail
what is happening under the hood would be appreciated.

Stack Overflow answers are not supposed to rely on links; comments can use them, but answers should not.
Each operating system and GUI library is different, but yes, in some way, shape, or form they do actually draw every one of the pixels. It is often quite organized, and many performance tricks are used: optimized routines that can update a rectangle or some other chunk of the screen, and sometimes hardware gets involved. These days the GPU is involved much of the time: the CPU asks the GPU to draw something, and the GPU gets busy placing all the pixels.
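To make that concrete, here is a minimal conceptual sketch, not any real OS's API: the window system ultimately owns an array of pixels (a framebuffer), and "drawing a window" boils down to writing color values into it, usually through an optimized rectangle-fill primitive like the one below.

// Conceptual sketch only: a framebuffer is an array of pixels, and drawing
// means writing values into it. Real systems batch such writes into
// optimized rectangle updates or hand them to the GPU.
#include <cstdint>
#include <vector>

struct Framebuffer {
    int width, height;
    std::vector<uint32_t> pixels;           // one 0xAARRGGBB value per pixel

    Framebuffer(int w, int h) : width(w), height(h), pixels(w * h) {}

    void setPixel(int x, int y, uint32_t color) {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[y * width + x] = color;  // clip, then write
    }

    // The workhorse primitive: fill an axis-aligned rectangle.
    void fillRect(int x0, int y0, int w, int h, uint32_t color) {
        for (int y = y0; y < y0 + h; ++y)
            for (int x = x0; x < x0 + w; ++x)
                setPixel(x, y, color);
    }
};

int main() {
    Framebuffer fb(640, 480);
    fb.fillRect(0, 0, 640, 480, 0xFF202020);     // desktop background
    fb.fillRect(100, 80, 300, 200, 0xFFC0C0C0);  // a "window"
    fb.fillRect(100, 80, 300, 24, 0xFF000080);   // its title bar
}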
For example, you would likely want to create some sort of font-rendering function that is given the font, the string to display, and the font size, plus perhaps a clipping window so it does not draw outside its bounds. You might also want a function that, given the font, size, and string, returns the width in pixels, so you can adjust the string to fit and wrap (look at this web page, for example: drag the window wider and narrower and watch what the text does).
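As one real-world example of such a measuring function, here is a hedged sketch using Qt's QFontMetrics (horizontalAdvance needs Qt 5.11 or later); the font family, size, and string are arbitrary choices for illustration:

// Measure a string before drawing it, so the caller can wrap or elide.
#include <QGuiApplication>
#include <QDebug>
#include <QFont>
#include <QFontMetrics>

int main(int argc, char** argv) {
    QGuiApplication app(argc, argv);   // the font system needs an app object
    QFont font("Arial", 12);           // arbitrary family and point size
    QFontMetrics metrics(font);
    // Width in pixels the string will occupy when drawn; compare this
    // against your clipping rectangle to decide where to wrap.
    qDebug() << metrics.horizontalAdvance("Hello, world");
    return 0;
}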
You would definitely want some sort of image-drawing routine with the ability to stretch or fit the image to a given rectangle.
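The simplest version of that is a nearest-neighbour stretch-blit. This is a conceptual sketch, not any particular library's routine; the buffers are assumed to be row-major arrays of 0xAARRGGBB pixels:

// Map every destination pixel back to a source pixel (nearest neighbour).
#include <cstdint>
#include <vector>

void stretchBlit(const std::vector<uint32_t>& src, int srcW, int srcH,
                 std::vector<uint32_t>& dst, int dstW, int dstH) {
    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x) {
            int sx = x * srcW / dstW;          // nearest source column
            int sy = y * srcH / dstH;          // nearest source row
            dst[y * dstW + x] = src[sy * srcW + sx];
        }
}

int main() {
    std::vector<uint32_t> icon(16 * 16, 0xFFFF0000);  // 16x16 solid red
    std::vector<uint32_t> scaled(64 * 64);
    stretchBlit(icon, 16, 16, scaled, 64, 64);        // stretched to 64x64
}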
The fun stuff (games, etc.) has improved so rapidly over time that it is hard to go back to a simple line-draw or area-fill routine. But along with the technology that games brought, simple things like web pages benefit too. Again, look around.
There are many open-source programs and libraries; just wander around their source code and see what you find.

The operating system provides libraries that interface with the monitor/display. In short, GUI libraries such as Qt interact with those operating-system libraries and create an easier bridge for you, the programmer, to the monitor. For instance, Qt might have a drawLine feature, which underneath takes care of the pixel arrangement involved in drawing on the monitor/display for the operating system.
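A minimal sketch of that layering in practice, using real Qt classes (QWidget, QPainter); the window size and pen color are arbitrary:

// Qt hides the platform drawing API behind calls like QPainter::drawLine;
// on Windows the same request ultimately reaches the native drawing stack.
#include <QApplication>
#include <QWidget>
#include <QPainter>

class Canvas : public QWidget {
protected:
    void paintEvent(QPaintEvent*) override {
        QPainter p(this);
        p.setPen(Qt::blue);
        // Qt translates this one call into platform-specific drawing.
        p.drawLine(10, 10, width() - 10, height() - 10);
    }
};

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    Canvas w;
    w.resize(300, 200);
    w.show();
    return app.exec();
}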

Related

How to adapt text and/or elements size while designing to smaller screens?

I have a personal project that I previously created in Adobe XD, and now I would like to put it on Behance. To do so, I need to adapt the layout, designed for desktop, to mobile.
I don't usually design for smaller screens, so I am wondering how much I need to decrease text and element sizes? For example, if I have a text with a font size of 40px, what calculations should I use to decrease the size for mobile? Is there a default percentage to reduce desktop values? Alternatively, are there visual rules that other designers follow?
I always design for Bootstrap, but I'm not sure if I am thinking about mobile the right way.
I've also posted this on the User Experience Stack Exchange forum, but I'm not sure which one is the best for my question.
Thank you for sharing your thoughts and advice.
I have designed mostly for desktops as a traditional web designer, and now I'm trying to migrate to UI/UX.
Modern devices do most of the scale-conversion work for you by scaling the viewport appropriately to compensate for smaller screens and often higher resolutions. Depending on the type of application you are designing, the technology is different, but the result is very similar.
For example, if you were implementing the design for the Web, you would likely need to use browser features like media queries to manage your content.
However, because you are focusing on the design of the site, you should not need to worry about the 'how', so you can focus on what to do.
Here are some tips:
Elements and text appear roughly the same size on desktop and mobile if you hold the device at a casual but comfortable distance and compare it to the size it appears on your desktop's screen at an average viewing distance. You can try this by going to a website built for mobile like Apple's.
Because of the similar apparent size but reduced screen dimensions, you need to simplify your design and avoid multiple columns (especially for phones).
Because you see a smaller portion of your design at once on mobile, there is less need for significant visual hierarchy. For example, if you have multiple heading levels with a significant visual size difference on the desktop, you can probably get away with making them closer in size on mobile.
If you want to see what your design looks like on mobile, try emailing the design to your phone, save it to your pictures, and load the image full screen. You may need to zoom the image in a bit so that the left and right of the design are touching the sides of your phone's screen. If your text looks too small or your elements are too large, adjust the design and load it on your phone again. Keep doing this until you get it right.
With a little practice and effort, you will get the hang of mobile design. And if you want to take it to the next level, try researching mobile-first design.

Calculations and rendering in MATLAB, GUI in Anything Else

At the Hebrew University of Jerusalem there are a few MATLAB applications consisting of both calculations and UI. Since the UI is becoming increasingly complex, it is getting very hard to maintain.
What I'd like to do is keep the calculations and the rendering of 2D and 3D graphs in MATLAB, but control the entire UI from elsewhere. I know MATLAB exports a COM interface, which is OK for using MATLAB calculations, but I couldn't find a way to pass rendered data (MATLAB plots, basically) back through it.
Is there a way to do that?
The simplest thing for you to do would be to issue an instruction to MATLAB to create the plot (perhaps creating it offscreen, to avoid an unwelcome popup window), adjust its appearance and size, then save it to an image file. Pass the filename back, then load it in from your UI code and display it.
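A hedged sketch of that round trip from C++ via MATLAB's COM automation server: the "Matlab.Application" ProgID and the Execute method are part of MATLAB's automation interface, while the plotted data, resolution, and file name are made up for illustration.

// Drive MATLAB through COM, render a plot offscreen, save it as a PNG that
// the host UI can then load and display.
#include <windows.h>

int main() {
    CoInitialize(nullptr);

    CLSID clsid;
    if (FAILED(CLSIDFromProgID(L"Matlab.Application", &clsid))) return 1;

    IDispatch* matlab = nullptr;
    if (FAILED(CoCreateInstance(clsid, nullptr, CLSCTX_LOCAL_SERVER,
                                IID_IDispatch,
                                reinterpret_cast<void**>(&matlab)))) return 1;

    // Look up the automation interface's Execute method.
    OLECHAR* name = const_cast<OLECHAR*>(L"Execute");
    DISPID dispid;
    matlab->GetIDsOfNames(IID_NULL, &name, 1, LOCALE_USER_DEFAULT, &dispid);

    // Build the figure invisibly, save it, and close it; plot.png is the
    // handoff point between MATLAB and the host UI (illustrative name).
    const wchar_t* cmd =
        L"f = figure('Visible','off');"
        L"plot(0:0.1:2*pi, sin(0:0.1:2*pi));"
        L"print(f, '-dpng', '-r150', 'plot.png');"
        L"close(f);";

    VARIANT arg;
    VariantInit(&arg);
    arg.vt = VT_BSTR;
    arg.bstrVal = SysAllocString(cmd);
    DISPPARAMS params = { &arg, nullptr, 1, 0 };

    VARIANT result;
    VariantInit(&result);
    matlab->Invoke(dispid, IID_NULL, LOCALE_USER_DEFAULT, DISPATCH_METHOD,
                   &params, &result, nullptr, nullptr);

    VariantClear(&result);
    VariantClear(&arg);
    matlab->Release();
    CoUninitialize();
    return 0;
}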
However, that will of course not get you a plot that is "live": you won't be able to edit it, click on it or interact with it, or even resize it nicely.
If you need that, I'm afraid there's no documented or supported way to do it. But if you're willing to go undocumented, then MATLAB also has a Java interface (jmi.jar) that you can call from Java, and you can embed a live MATLAB plot within a Java GUI, attaching MATLAB or Java callbacks to plot elements.
Note that this capability is completely undocumented and may well change from release to release without warning. If you'd like to learn how to approach it, I'd recommend reading through the blog Undocumented MATLAB, and probably buying a copy of the book by that blog's author.

Changing how Windows displays using the Win API?

While I have some experience with the WinAPI, I do not have a ton, so I have a question for people who do. My question concerns the limit of our power: can we change how Windows fundamentally displays things?
For example, can I make Windows render a screen size bigger than the display and pan across it, kind of like workspaces but without the separation? Can I apply distortion to the top and bottom of the screen? If distortion is not possible, can I have an application mirror what Windows is displaying with very little delay?
The biggest question is the first one, because if I can make Windows render virtual workspaces and pan seamlessly between them, then I figure it is possible to make a separate application that handles the distortion on a mirrored image of the desktop. Again, apologies for the vague questions, but I really want to know whether this stuff is possible, at least in theory, before I dive deep into learning more of the API. If the WinAPI does not allow it, is there another way to do this kind of thing in Windows?
EDIT: Some clarification. What I want to do is basically extend the desktop to a very large size (not sure of the exact size yet), both vertically and horizontally, and section the large desktop into workspaces of a specific size that can be seamlessly transitioned across, with windows movable between them. It would transition between workspaces based on a head-tracking device and/or mouse movement. Note that when I say workspaces, this could also be achieved by zooming in and then panning the zoomed view. I also need to be able to distort the screen, such as curving the edges, and render the screen twice. That is the bare minimum of what I want to do.
Yes, you can. The most feasible way I can come up with is using a virtual graphics driver (like Windows Remote Desktop, which creates a virtual graphics card and a virtual display). Sadly, you will lose the ability to run some programs that need advanced graphics APIs (such as 3D games or 3D modelling tools).
Here are some examples:
http://virtualmonitor.github.io/
https://superuser.com/questions/62051/is-there-a-way-to-fake-a-dual-second-monitor
And remember that Windows has a limit on display resolution (per display and for all displays together). I don't remember the exact number, but it should be less than 32768×32768.
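For the zoom-and-pan portion specifically, there is also a documented route that needs no driver: the Windows Magnification API (the full-screen functions require Windows 8 or later, and the program must link against Magnification.lib). A hedged sketch, with the magnification level, pan range, and timing made up for illustration:

// Zoom the whole desktop and pan the zoomed view with the Magnification API.
#include <windows.h>
#include <magnification.h>

int main() {
    if (!MagInitialize()) return 1;

    // Magnify the full screen 2x, then pan the view across the desktop.
    for (int x = 0; x <= 400; x += 8) {
        MagSetFullscreenTransform(2.0f, x, 0);  // scale, x/y offset in pixels
        Sleep(30);                              // crude animation step
    }

    MagSetFullscreenTransform(1.0f, 0, 0);      // restore the normal view
    MagUninitialize();
    return 0;
}

Note that this only covers zooming and panning; distortion such as curved edges is not exposed by this API.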

Is anyone aware of a visual diagram that shows the composite parts of std. Windows controls?

For example: I would like to know exactly what system metrics to use to know how wide the borders of a window are going to be. With a given set of visual styles, what borders will appear, and how wide will they be (what system metrics can be queried to know exactly and correctly how wide they'll be)?
Similarly, for a button: how wide are its borders in various states? Under different themes? What system metrics or theme functions can give me an absolutely correct answer on how wide, how tall, how many, and at what offsets?
Generally speaking, my custom interface code usually contains things like:
myrect.Offset(4,4); // empirical evidence indicates it's actually 4 more pixels per side that Windows adds in but doesn't tell me about...
I hate code like this: littered with magic numbers that may change depending on the version of Windows, whether Aero is enabled, whether the customer is running this theme or that theme, whether they use large-font / high-DPI mode, and so on.
But in my 15+ years of Windows GUI programming I have never seen a really good white paper, diagram, or chart that describes the actual composition of all the standard Windows controls and where the metrics of their various visual parts come from.
Is anyone out there aware of a resource even similar to this? A white paper? A diagram? A detailed blog discussion? Anything?
Thanks for any help you may have!
EDIT: the GUI guidelines idea doesn't actually describe the internal composition of the controls' visual parts, just their overall size and the spacing between controls.
I really would love to have something that describes in excruciating detail the border sizes, offsets, and what system metrics control each of these things.
What about the Windows User Experience Interaction Guidelines? [Document Index] [PDF download]
It includes diagrams of the recommended sizes and spacing for each control.
This is not a complete answer to your question, though, because it only tells you how the controls are supposed to be laid out, not how they actually are. And it doesn’t address what happens under different themes.
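For the "what can actually be queried" side of the question, here is a hedged sketch of the usual starting points. GetSystemMetrics, AdjustWindowRectEx, OpenThemeData, and GetThemePartSize are all real Win32 calls (link against UxTheme.lib); which parts and states you query will depend on the control:

// Query border and caption metrics instead of hardcoding magic numbers.
#include <windows.h>
#include <uxtheme.h>
#include <vsstyle.h>
#include <cstdio>

int main() {
    // Classic system metrics: resizing-frame and caption sizes, in pixels.
    printf("SM_CXSIZEFRAME: %d\n", GetSystemMetrics(SM_CXSIZEFRAME));
    printf("SM_CYCAPTION:   %d\n", GetSystemMetrics(SM_CYCAPTION));

    // Let Windows do the frame math: grow a desired 640x480 client rect by
    // whatever non-client area this window style will actually add.
    RECT rc = { 0, 0, 640, 480 };
    AdjustWindowRectEx(&rc, WS_OVERLAPPEDWINDOW, FALSE, 0);
    printf("full window: %ldx%ld\n", rc.right - rc.left, rc.bottom - rc.top);

    // Theme-aware "true" size of a push button in its normal state.
    HTHEME theme = OpenThemeData(nullptr, L"BUTTON");
    if (theme) {
        SIZE size;
        if (SUCCEEDED(GetThemePartSize(theme, nullptr, BP_PUSHBUTTON,
                                       PBS_NORMAL, nullptr, TS_TRUE, &size)))
            printf("themed button: %ldx%ld\n", size.cx, size.cy);
        CloseThemeData(theme);
    }
    return 0;
}

This still won't give you a single diagram of every control's internals, but it does replace some of the empirical offsets with values Windows will actually report under the current theme and DPI settings.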

Is it possible to create full-screen color overlay effects in Windows?

I remember my old Radeon graphics drivers, which had a number of overlay effects or color filters (whatever they are called) that would render the screen in, e.g., sepia tones or negative colors. My current NVIDIA card does not seem to have such a function, so I wondered whether it is possible to make my own for Vista.
I don't know if there is some way to hook into Windows' rendering engine or, alternatively, into NVIDIA's drivers to achieve this effect. While it would be cool just to be able to modify the color, it would be even better to modify the color based on its screen coordinates or to apply other, more varied functions. An example would be colors that are more desaturated the farther they are from the center of the screen.
I don't have a specific use scenario so I cannot provide much more information. Basically, I'm just curious if there is anything to work with in this area.
You could have a full-screen layered window on top of everything, passing click events through. However, that's hacky and slow compared to what could be done by getting a hook into the DWM renderer's DirectX context; so far, though, that is not possible, as Microsoft does not provide any public interface into it.
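A hedged sketch of that layered-window approach; the tint color and opacity are arbitrary choices, and WS_EX_TRANSPARENT is what lets mouse clicks fall through to whatever is underneath:

// Full-screen, click-through, semi-transparent tint over the whole desktop.
#include <windows.h>

int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE, PWSTR, int) {
    WNDCLASSW wc = {};
    wc.lpfnWndProc = DefWindowProcW;
    wc.hInstance = hInst;
    wc.lpszClassName = L"OverlayClass";
    wc.hbrBackground = CreateSolidBrush(RGB(255, 180, 60));  // sepia-ish tint
    RegisterClassW(&wc);

    int w = GetSystemMetrics(SM_CXSCREEN);
    int h = GetSystemMetrics(SM_CYSCREEN);

    // WS_EX_TRANSPARENT makes the window invisible to mouse hit-testing;
    // WS_EX_TOOLWINDOW keeps it out of the taskbar.
    HWND hwnd = CreateWindowExW(
        WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        L"OverlayClass", L"", WS_POPUP, 0, 0, w, h,
        nullptr, nullptr, hInst, nullptr);

    // Roughly 25% opaque, blended over everything beneath it.
    SetLayeredWindowAttributes(hwnd, 0, 64, LWA_ALPHA);
    ShowWindow(hwnd, SW_SHOW);

    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    return 0;
}

This only applies a uniform tint; per-pixel effects such as desaturation by distance from the center would mean repainting the overlay yourself, which is where the slowness comes in.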
The Flip 3D utility does get at the DWM's rendering, though, but even there the functionality is not in the program itself; it's in the DWM DLL, called by ordinal (a hidden/undocumented function, obviously, since it doesn't serve any other purpose). So it's pretty much another dead end, and I haven't bothered to dig deeper.
On that front, the best we can do is wait for some kind of official API.
