While I have some experience with the WinAPI, I do not have a ton, so I have a question for people who do. My question concerns the limits of what we can do: can we change how Windows fundamentally displays things?
For example, can I make Windows render a screen bigger than the display and pan across it, kind of like workspaces but without the separation? Can I apply distortion to the top and bottom of the screen? If distortion is not possible, can I have an application mirror what Windows is displaying with very little delay?
The biggest question is the first one, because if I can make Windows render virtual workspaces and pan seamlessly between them, then I figure it is possible to make a separate application that handles the distortion on a mirrored image of the desktop. Apologies for the vague questions, but I really want to know whether this is possible, at least in theory, before I dive deep into learning more of the API. If the WinAPI does not allow it, is there another way to do this kind of thing in Windows?
EDIT: Some clarification. What I want to do is basically extend the desktop to a very large size (not sure of the exact size yet), both vertically and horizontally, and section the large desktop into workspaces of a specific size that can be transitioned across seamlessly, with windows moved across them. Workspaces would be switched based on a head-tracking device and/or mouse movement. Note that what I call workspaces could also be achieved by zooming in and then panning the zoom. I also need to be able to distort the screen, such as curving the edges, and render the screen twice. That is the bare minimum of what I want to do.
Yes, you can. The most feasible way I can come up with is using a virtual graphics driver (like what Windows Remote Desktop does: it creates a virtual graphics card and a virtual display). Sadly, you will lose the ability to run some programs that need advanced graphics APIs (such as 3D games or 3D modelling tools).
Here are some examples:
http://virtualmonitor.github.io/
https://superuser.com/questions/62051/is-there-a-way-to-fake-a-dual-second-monitor
And remember that Windows has a limit on display resolution (both per display and for all displays combined). I don't remember the exact number, but it should be less than 32768×32768.
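On the mirroring part of the original question: if a low-latency copy of what Windows is displaying is enough, the DXGI Desktop Duplication API (Windows 8 and later) is one option that does not require writing a driver. The answer above does not cover it, so treat this as a rough sketch; error handling and COM cleanup are omitted and the function name is illustrative.

```cpp
// Rough sketch: duplicate the primary output and grab one desktop frame.
#include <d3d11.h>
#include <dxgi1_2.h>
#pragma comment(lib, "d3d11.lib")

void MirrorPrimaryOutputOnce()
{
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

    // Walk DXGI from the device to the first output of its adapter.
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);
    IDXGIOutput* output = nullptr;
    adapter->EnumOutputs(0, &output);
    IDXGIOutput1* output1 = nullptr;
    output->QueryInterface(__uuidof(IDXGIOutput1), (void**)&output1);

    // Duplicate the desktop; a real mirror would loop on AcquireNextFrame
    // and draw each acquired texture into its own window.
    IDXGIOutputDuplication* duplication = nullptr;
    output1->DuplicateOutput(device, &duplication);

    DXGI_OUTDUPL_FRAME_INFO frameInfo = {};
    IDXGIResource* frame = nullptr;
    if (SUCCEEDED(duplication->AcquireNextFrame(16, &frameInfo, &frame)))
    {
        // 'frame' wraps a D3D11 texture holding the current desktop image.
        frame->Release();
        duplication->ReleaseFrame();
    }
    // ... Release() the remaining COM objects here ...
}
```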
I have a personal project that I previously created in Adobe XD, and now I would like to put it on Behance. To do so, I need to adapt the layout, which was designed for the desktop, to mobile.
I don't usually design for smaller screens, so I am wondering how much I need to decrease text and element sizes? For example, if I have a text with a font size of 40px, what calculations should I use to decrease the size for mobile? Is there a default percentage to reduce desktop values? Alternatively, are there visual rules that other designers follow?
I always design for Bootstrap, but I'm not sure if I am thinking about mobile the right way.
I've also posted this on the User Experience Stack Exchange forum, but I'm not sure which one is the best for my question.
Thank you for sharing your thoughts and advice.
I have designed mostly for desktops as a traditional web designer, and now I'm trying to migrate to UI/UX.
Modern devices do most of the scale conversion work for you by adequately scaling the viewport to compensate for the smaller screens and often higher resolutions. Depending on the type of application you are designing, the technology is different, but the result is very similar.
For example, if you were implementing the design for the Web, you would likely need to use browser features like media queries to manage your content.
However, because you are responsible for the design of the site, you should not need to worry about the 'how' and can focus on what to do.
Here are some tips:
Elements and text appear roughly the same size on desktop and mobile if you hold the device at a casual but comfortable distance and compare it to how the design appears on your desktop's screen at an average viewing distance. You can try this by visiting a website built for mobile, like Apple's.
Because the elements stay a similar size while the screen dimensions shrink, you need to simplify your design and avoid multiple columns (especially on phones).
Because you see a smaller portion of your design at once on mobile, there is less need for significant visual hierarchy. For example, if you have multiple heading levels with a significant visual size difference on the desktop, you can probably get away with making them closer in size on mobile.
If you want to see what your design looks like on mobile, try emailing the design to your phone, save it to your pictures, and load the image full screen. You may need to zoom the image in a bit so that the left and right of the design are touching the sides of your phone's screen. If your text looks too small or your elements are too large, adjust the design and load it on your phone again. Keep doing this until you get it right.
With a little practice and effort, you will get the hang of mobile design. And if you want to take it to the next level, try researching mobile-first design; there are many articles on the subject.
My question is about GUI libraries like Qt on, let's say, Windows: how do they create all those graphical user interfaces (windows and so on)? Does each operating system provide APIs or something else to do this? If so, how do operating systems draw all those windows and widgets? Do they "control" the screen and draw each pixel one by one to produce the GUI?
I would like an answer that explains things at the lowest level possible, but I don't expect someone to write out everything that happens (even though I would like that), because I know a lot goes on behind all this. For that reason, comments with links or suggested books that explain what is happening under the hood would be appreciated.
Stack Overflow answers are not supposed to rely on links; comments can point to them, but answers should stand on their own.
Each operating system and GUI library is different but, yes, in some way, shape, or form they do actually draw every one of those pixels. It is often quite organized, and many performance tricks are used: optimized routines that can update a rectangle or some chunk of the screen, and sometimes the hardware gets involved (these days the GPU is involved much of the time: the CPU asks the GPU to draw something, and the GPU is then busy placing all the pixels).
You would likely, for example, want to create some sort of font rendering function that is given the font, the string to display, the font size, and perhaps a clipping window so it does not draw outside its bounds; or perhaps a function that, given the font, size, and string, returns the width in pixels, so that you can adjust the string to fit and wrap (look around this web page, for example: drag the window wider and narrower and watch what the text does).
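Just to make the measuring half of that concrete, here is a tiny sketch of the Win32 GDI call for it (toolkits wrap something along these lines; the helper name is mine):

```cpp
// Measures a string with whatever font is currently selected into the DC.
#include <windows.h>

int TextWidthInPixels(HDC hdc, const wchar_t* text, int length)
{
    SIZE size = {};
    GetTextExtentPoint32W(hdc, text, length, &size);
    return size.cx;   // width in pixels; size.cy is the line height
}
```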
You would also definitely want some sort of image-drawing routines, with the ability to stretch or fit the drawing to a given rectangle.
The fun stuff (games and so on) has improved so rapidly over time that it is hard to go back to a simple line-draw and area-fill routine. But along with that technology, simpler things like web pages benefit from what games brought along. Again, look around.
There are many open-source programs and libraries; just wander around their source code and see what you find.
The operating system provides libraries that interface with the monitor/display. In short, GUI libraries such as Qt interact with those operating system libraries and create an easier bridge for you, the programmer, to the display. For instance, Qt might have a drawLine function which, underneath, takes care of the pixel-level work of drawing on the monitor through the operating system.
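To make that "bridge" concrete, here is a minimal sketch of the kind of OS-level calls a toolkit's drawLine-style function ultimately boils down to on Windows: plain GDI calls issued while handling WM_PAINT. The handler name is illustrative, and this is not Qt's actual code.

```cpp
// Minimal Win32/GDI sketch: the OS hands us a device context for the window,
// and we ask it to rasterize primitives. Toolkits wrap calls like these.
#include <windows.h>

void OnPaint(HWND hwnd)   // called from the window procedure on WM_PAINT
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);        // the OS gives us a surface to draw on

    MoveToEx(hdc, 10, 10, nullptr);         // GDI tracks a "current position"...
    LineTo(hdc, 200, 120);                  // ...and rasterizes the line for us
    TextOutW(hdc, 10, 140, L"Hello, GDI", 10);

    EndPaint(hwnd, &ps);                    // tell the OS this area is now valid
}
```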
I want to make a skinning engine capable of drawing custom-shaped windows with alpha blending; that is, it will use layered windows (UpdateLayeredWindow). A typical window will contain, on top of its background, a couple dozen other bitmaps ranging from 10×10 to, say, 300×150 pixels. In the worst case, most of these elements will have smooth animation at up to 30 fps. Everything will be alpha-blended, and I am going to use Direct2D for this (yes, I know older Windows versions don't support it). In general, Winamp's modern skin engine is the closest example.
Given all this, and taking into account modern PC performance, can I just redraw the whole window every single frame, or do I have to constrain drawing to some sort of clip rectangle?
D2D requires you to render in response to WM_PAINT messages.
Honestly, use the IAnimation interface and just let D2D and Windows worry about how often to redraw. Though I will let you know: Winamp is done with Adobe AIR, and layered windows with D2D cause issues. (I think you have to use a DXGI render target, but with the window being layered it needs a DC to be returned to an EndPaint call so it can update its alpha channel.)
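For reference, a minimal sketch (mine, not the answer's) of the plain, non-layered WM_PAINT path with Direct2D and an HWND render target; the layered-window case is exactly the part flagged as problematic above. The factory is assumed to have been created elsewhere with D2D1CreateFactory, and real code would cache the render target and recreate it on D2DERR_RECREATE_TARGET.

```cpp
#include <windows.h>
#include <d2d1.h>
#pragma comment(lib, "d2d1.lib")

void PaintWithD2D(HWND hwnd, ID2D1Factory* factory)   // call from the WM_PAINT handler
{
    RECT rc;
    GetClientRect(hwnd, &rc);

    ID2D1HwndRenderTarget* target = nullptr;
    factory->CreateHwndRenderTarget(
        D2D1::RenderTargetProperties(),
        D2D1::HwndRenderTargetProperties(
            hwnd, D2D1::SizeU(rc.right - rc.left, rc.bottom - rc.top)),
        &target);

    target->BeginDraw();
    target->Clear(D2D1::ColorF(D2D1::ColorF::CornflowerBlue));
    // ... draw the skin's bitmaps here ...
    target->EndDraw();
    target->Release();

    ValidateRect(hwnd, nullptr);   // tell Windows the client area has been painted
}
```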
I have some experience with this.
If you need to support Windows XP, using UpdateLayeredWindow is the only choice available for solving this problem. The documentation for this call says it copies the whole bitmap to the screen each time it is called, and this bottleneck showed up in my benchmarking as the real limiting factor. If your window is 300×300, you pay that price on every update, even if you are careful to modify only a couple of pixels. It would be very easy to over-optimize the rendering side for no real benefit, so implement something simple, measure, and then decide whether you need to optimize.
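For context, the call being described looks roughly like this. Note that there is no dirty-rectangle parameter: the whole 32-bpp premultiplied-alpha bitmap goes to the screen on every call, which is exactly the cost described above (the helper name and setup are illustrative).

```cpp
#include <windows.h>

// hdcMem is a memory DC with a 32-bpp premultiplied-alpha DIB already rendered into it.
void PushFrame(HWND hwnd, HDC hdcMem, SIZE size)
{
    POINT srcPos = { 0, 0 };
    BLENDFUNCTION blend = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };   // use per-pixel alpha

    // Passing NULL for the destination DC/position keeps the window where it is;
    // the full bitmap is still copied each time.
    UpdateLayeredWindow(hwnd, nullptr, nullptr, &size,
                        hdcMem, &srcPos, 0, &blend, ULW_ALPHA);
}
```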
If you can drop support for Windows XP then you can avoid UpdateLayeredWindow completely and use DwmExtendFrameIntoClientArea to create the same effect as a layered window. You'll write less code, avoid the UpdateLayeredWindow bottleneck, and D2D will be easier to work with.
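A minimal sketch of that DWM route, assuming an ordinary top-level window created elsewhere (the function name is mine):

```cpp
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

bool ExtendFrameIntoWholeWindow(HWND hwnd)
{
    // -1 on all sides means "sheet of glass": the frame covers the entire
    // window, giving you a per-pixel-alpha surface without UpdateLayeredWindow.
    MARGINS margins = { -1, -1, -1, -1 };
    return SUCCEEDED(DwmExtendFrameIntoClientArea(hwnd, &margins));
}
```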
I like Windows Phone 7's interface experience. I find it very innovative compared to other interfaces (mobile, desktop, or web), yet it is no less usable. All in all, a very good shift away from the usual, in the right direction.
Some of the effects could be used in web interfaces to enhance the experience without sacrificing usability and intuitiveness.
Effects I'm talking about:
perspective animation when you click a particular hub on the home screen
elements animating at different times (the hub being clicked moves last)
horizontal slide with different slide amounts (titles and background images move less than the screen width, which gives a feeling of depth)
etc.
Two questions:
Do you know of any public website that uses at least one of the aforementioned effects and does that without the use of plugins (like Flash or Silverlight)?
Is there any JavaScript library that would provide such effects (at least the different delay and different amount sliding technique)?
Extremely simplified example
I've taken some time to put together a simplified example of a transition effect that could be adopted on mobile devices and that simulates at least a bit of the fine Windows Phone 7.x transitions.
Just click on any tile and watch the others zoom out and slide to the left.
Let me know what you think about this example.
Something that came out just these days:
Take a look at this HTML demo written by Microsoft (or one of its partners). It blew my mind as the closest thing to the WP7 experience I've seen. Amazing!
The delay and sliding should be easily handled by jQuery, but I am not aware of anyone who has already bundled up something that directly emulates the WP7 interface. Sounds like a fun project.
I remember that my old Radeon graphics drivers had a number of overlay effects or color filters (whatever they are called) that would render the screen in, for example, sepia tones or negative colors. My current NVIDIA card does not seem to have such a function, so I wondered whether it is possible to make my own for Vista.
I don't know if there is some way to hook into Windows' rendering engine or, alternatively, into NVIDIA's drivers to achieve this effect. While it would be cool just to be able to modify the colors, it would be even better to modify them based on their screen coordinates or perform other, more varied functions. An example would be colors becoming more desaturated the farther they are from the center of the screen.
I don't have a specific use scenario so I cannot provide much more information. Basically, I'm just curious if there is anything to work with in this area.
You could have a full-screen layered window on top of everything that passes click events through. However, that's hacky and slow compared to what could be done by getting a hook into the DWM renderer's DirectX context, and so far that's not possible, as Microsoft does not provide any public interface into it.
The Flip 3D utility does do this, but even there the functionality is not in the program itself; it's in the DWM DLL, called by ordinal (a hidden/undocumented function, obviously, since it doesn't serve any other purpose). So that's pretty much another dead end, and I haven't bothered to dig deeper.
On that front, the best we can do is wait for some kind of official API.
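For what it's worth, here is a minimal sketch of the layered-window workaround from the first paragraph: a click-through, whole-screen tint. All names are illustrative, and it can only blend a constant color over the screen; it cannot compute anything from the pixels underneath, which is why it falls well short of a real DWM hook.

```cpp
#include <windows.h>

HWND CreateTintOverlay(HINSTANCE hInstance)
{
    WNDCLASSW wc = {};
    wc.lpfnWndProc   = DefWindowProcW;
    wc.hInstance     = hInstance;
    wc.lpszClassName = L"TintOverlay";
    wc.hbrBackground = CreateSolidBrush(RGB(255, 220, 160));   // sepia-ish tint color
    RegisterClassW(&wc);

    // WS_EX_LAYERED lets us set whole-window alpha; WS_EX_TRANSPARENT makes
    // mouse input fall through to whatever is behind the overlay.
    HWND hwnd = CreateWindowExW(
        WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        L"TintOverlay", L"", WS_POPUP,
        0, 0, GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN),
        nullptr, nullptr, hInstance, nullptr);

    SetLayeredWindowAttributes(hwnd, 0, 70, LWA_ALPHA);   // ~27% opaque constant tint
    ShowWindow(hwnd, SW_SHOWNOACTIVATE);
    return hwnd;
}
```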