I recently had a client contact me asking for an update to his project that I used to maintain/develop years ago.
It was done using Macromedia Director (now Adobe Director) and Lingo. Since I haven't developed anything using these technologies in such a long time, I need some assistance.
The majority of the changes are simple, but what has me stumped is making the application able to toggle between a full-screen projector and windowed mode.
This is how it is organised:
I have a stub projector, which is lightweight and ensures a quick start-time.
The stub projector loads the main movie. ("#::Content:Main")
This stub projector is published in full-screen mode.
Now, I can create a projector that is windowed and one that is full-screen by publishing separate executables. However, what the client wants is the ability to switch this at runtime - is this even possible?
I have found a few workarounds that kinda work (setting the display rect and stage rect to the desktop size), but they introduce numerous compatibility issues.
Any advice? Solutions?
I am tempted to say that it isn't possible to switch at runtime and recommend that he publishes either a full-screen or a windowed version.
For future reference: http://www.directorforum.com/showthread.php?p=38795#post38795
Well, there are different ways to define "full screen", but all can be done at runtime:
1) The projector automatically adjusts the computer's display resolution to match the dimensions of the movie and hides the taskbar/dock. This is generally what "full screen" means in modern parlance. You can check out various Xtras for switching the resolution on the fly.
2) In Director terms, publishing a projector "full screen" just means that the projector window has no titlebar, takes up the full dimensions of the display, hides the taskbar/dock, and has the movie content centered on screen, framed by a solid background color. This is a pretty lame implementation of full screen, since it doesn't make the movie appear any bigger onscreen. It can be set at runtime by manipulating the rects that you mention and using an Xtra to hide the taskbar. Not sure what "compatibility issues" you ran into.
3) Graphically stretch the movie so that its actual content takes up the entire screen. The easiest way to do this is by altering the drawRect, but this can result in distorted graphics depending on how much stretching is occurring, since no antialiasing is used to smooth the stretched pixels.
So I have a big 32-inch display with a resolution of 1440p, and I want to set its DPI scaling to 75% instead of 100%. But I can't find any way to do so in a multi-monitor setup.
I currently have:
Display 1 [2560 x 1440] (Main display I want to change)
Display 2 [2560 x 1440] (This one is 27 inches so it's fine as is)
Display 3 [3840 x 2160] (Set to 100%, fine as it is)
This trick (click me) changes DPI scaling via some registry keys (LogPixels & Win8DpiScaling), but when I use that trick it downscales display 3 instead of display 1.
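For reference, my understanding is that the trick boils down to something like the following (a rough sketch in Python; LogPixels / Win8DpiScaling are the commonly cited values, and as far as I can tell they apply system-wide rather than per-display):

    import winreg

    # 96 DPI is the 100% baseline, so 72 = 75%, 120 = 125%, and so on.
    DESIRED_DPI = 72

    with winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                        r"Control Panel\Desktop",
                        0, winreg.KEY_SET_VALUE) as key:
        # LogPixels sets the system DPI; Win8DpiScaling = 1 tells Windows
        # to honour that single value instead of per-monitor scaling.
        winreg.SetValueEx(key, "LogPixels", 0, winreg.REG_DWORD, DESIRED_DPI)
        winreg.SetValueEx(key, "Win8DpiScaling", 0, winreg.REG_DWORD, 1)

    # Takes effect after signing out and back in (or rebooting).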
Is there a way to get this to work? I see no reason for Microsoft to limit the scaling in displays.
Note: I have an RTX 2070 Super; all the displays are plugged directly into the GPU via DisplayPort, with the latest available firmware at the time of writing (September 2021).
The tl;dr:
No, Windows will not let you set UI scaling below 100%.
Technical limitations aside, there are very solid user-experience reasons why this probably isn't allowed.
(Even if a stable workaround were discovered, most users would probably be quite unhappy with the results.)
While I would love¹ to be proven incorrect, the implications of scaling at less than 100% are so fraught that this limitation is unlikely to change in the near future.
Background:
This has been the case for ages, likely since Windows first introduced the feature.
Compatibility with current software
The only ~purely technical~ reason I can think of:
The 100% scaling size likely corresponds to the smallest base image resources (e.g. Explorer and Taskbar icons, mouse and text cursors) included in various existing Microsoft and 3rd-party applications.
User experience
Going below the 100% point may cause small UI text and icons, especially in application toolbars and the Taskbar, to be blurred to the point of ambiguity. At 75% scaling, for example, a 16×16 icon has to be squeezed into roughly 12×12 pixels, so detail is inevitably lost.
Those fine lines in the taskbar 'Windows' menu icon? Blurred or gone.
Taken to the extreme, the UI ~might~ become so unreadable that the user cannot even make out the text in the 'Settings' window and is effectively 'stuck': unable to navigate through 'Settings' to restore the original 100% scaling mode.
(Luckily, Windows is never used to run any SCADA software where confusing two icons could theoretically cost money or lives.)
Performance:
Since those carefully designed graphic assets don't exist, allowing sub-100% scaling would also likely add extra CPU/GPU workload. That is why only certain fixed up-sampling sizes are shown on the normal Display settings screen, and why the Advanced scaling settings screen warns that custom scaling between 100-500% is "not recommended".
The same would apply to any fixed scaling option offered below 100%, and absolutely would for custom scaling sizes.
Some people enjoy reading:
Vector-based TrueType/OpenType fonts usually contain a ~lot~ of manual tweaking / hints to enable readable display of very small point sizes.
The marketing department & friends of the C-suite
Could they implement this at a limited range of options? 90%? 75%?
Perhaps - but it's extra testing for a horrible-looking edge case.
The existence of the option, even if only available as a registry hack, might cause some people to actually use it in kiosks and other public-facing displays; this risks the same sort of bad PR as when a BSOD is seen on the 'arrivals' screen at a train station or airport monitor.
Combined with the first example below, even a 90% option could cause trouble in some environments.
Example and tutorial:
Imagine how Windows might look displayed on one of those cheapo '1080p-supported' projectors that actually only contains an imager with a native pixel resolution of, say, 1024x576 (or even 480x234).
Windows thinks it can send 1080p, since that's what the HDMI connection advertises, so it does: any text / vector content looks atrocious.
(At least in this case the user could normally² unplug the projector and reconnect to a normal monitor to restore functionality.)
See for yourself... while connected to any monitor (at that monitor's native resolution), with Windows set to 100% scaling:
Open Windows Notepad
Type or paste in any block of text
Now, use the Zoom Out command from the View menu³ five or more times in a row
While not an exact analogue, you may still see how hard it could be to read down-sampled text, even when very high-contrast (the best-case scenario).
¹: As someone currently typing this very answer on a 1080p connection to a 55" 4K television as a second monitor, I came across the question very much hoping this was possible. Sadly, logic intervened and killed my potential joy.
²: Unless the computer is actually stored somewhere locked or inaccessible, such as a NUC-style PC hidden above the false ceiling in a conference room.
³: Alternatively, press <CTRL>-<Minus> five or more times.
For a screen sharing collaborative app, it is desirable to give the remote party a mouse cursor also.
Is it possible to leverage NSCursor?
If not, I guess I must create a small accurately positioned top-level window displaying a bitmap (I've asked here if there is some way to retrieve the bitmaps from the system cursors).
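For what it's worth, this is roughly the sort of thing I'm hoping is possible. A quick sketch via the pyobjc bridge (untested; I'm assuming the current system cursor exposes its image and hotspot this way):

    from AppKit import NSBitmapImageRep, NSCursor

    cursor = NSCursor.currentSystemCursor()   # cursor being displayed right now
    image = cursor.image()                    # NSImage backing the cursor
    hotspot = cursor.hotSpot()                # the point that actually "clicks"

    # Convert the NSImage to PNG bytes so it can be sent to the remote party.
    tiff = image.TIFFRepresentation()
    rep = NSBitmapImageRep.imageRepWithData_(tiff)
    png_data = rep.representationUsingType_properties_(4, None)  # 4 == NSPNGFileType

    print("size:", image.size(), "hotspot:", hotspot, "bytes:", png_data.length())

If that works, the remote side just needs to draw the bitmap offset by the hotspot at the reported mouse position.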
Code/link contributions appreciated. If I figure it out first I will (as always) answer my own question.
I have a scenario that I need some good solid advice on. The question is really about speed of WriteableBitmap vs. images in IsolatedStorage on the Windows Phone.
I have an app that displays a UserControl (#1) which is a little graphically heavy. When the user swipes it, it transitions in a push-left type of transition to bring in a new UserControl (#2) which is also a little graphically heavy. If the user swipes the other way, control #1 is brought in in the same type of push-transition, this time from the right.
What I do today is take a snapshot of #1, load #2 off screen and take a snapshot of it, put both side-by-side in a Canvas control and animate that control either left or right. One of the reasons I don't just use the controls and animate them is they may have animation that starts when they are loaded - my current technique allows me to capture a screen shot of pre-animation and post-animation, depending on which direction they go in.
What I'm wondering, however, is whether it would be better/faster to just do the above the first time, save the WriteableBitmap to IsolatedStorage with Extensions.SaveJpeg, and simply use that image in subsequent transition animations.
Would load/render/WriteableBitmap each time generally be faster, or would loading the JPEG from IsolatedStorage be faster? I see that the Transitions control in the SDK doesn't really do either of these, so I'm open to different suggestions that might also improve performance.
I expect this to be very dependent on the hardware and the application, so it is pretty hard to give an answer based on this input. It doesn't look too hard to test (on actual hardware and with the actual application), so my advice is to build both and test.
The applications I have been working with use both approaches and to be honest I haven't noticed much difference.
Also, you might try enabling bitmap caching on the controls. This will give you a writeable bitmap implementation that is very fast.
I would like to write a program that can mirror a portion of the main display into a new window. Ideally this new window could then be displayed on an external monitor. I have seen a utility for a flight sim that does this on a PC (a multifunction display extractor).
Click here for a screenshot of the program (MFD Extractor)
This would be a live window, i.e. a constantly updated video display, not just a static graphic.
I have looked at screen magnifiers and VNC clients for ideas, but I think I need to write something from scratch. I have tried to do some reading on OS X programming, but where do I start in terms of gaining access to the display? I somehow need to extract the graphics from a particular program. Is it best to go near the final output stage (the individual pixels sent to the display) or somewhere nearer the window-management stage?
Any ideas or pointers would be much appreciated. I just need somewhere to start from.
Regards,
There are a few ways to do this:
Quartz Display Services will let you get access to the video memory for a screen.
Quartz Window Services (a.k.a. CGWindow) will let you create an image of everything that lies below a window. If you create a borderless, transparent, empty, high-level window whose frame occupies an entire screen, everything below it will be everything on that screen. (Of course, you could create a smaller window in order to copy a section of the screen.)
There's also a way to do it using OpenGL that I never fully understood. That technique is demonstrated by a couple of code samples, OpenGLScreenSnapshot and OpenGLCaptureToMovie. It's more or less obsoleted by CGWindow, though.
Each of those will get you an image that you can then show or write to a file or something.
To show an image, use NSImageView or IKImageView. If you want to magnify it, IKImageView has a zoomFactor property, but if you want nearest-neighbor scaling (like Pixie, DigitalColor Meter, or xScope), I think you'll need to write a custom view for that (but even that isn't all that hard).
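If it helps to see the CGWindow route concretely, here is a rough sketch using the pyobjc bridge (handy for experimenting before committing to Objective-C); the capture rectangle is just a placeholder:

    import Quartz

    # The portion of the screen to mirror, in global screen coordinates.
    region = Quartz.CGRectMake(0, 0, 640, 480)

    image = Quartz.CGWindowListCreateImage(
        region,
        Quartz.kCGWindowListOptionOnScreenOnly,  # composite everything on screen
        Quartz.kCGNullWindowID,                  # not relative to any one window
        Quartz.kCGWindowImageDefault)

    print("captured", Quartz.CGImageGetWidth(image), "x",
          Quartz.CGImageGetHeight(image), "pixels")

For a live mirror you would re-capture on a timer (or look at CGDisplayStream on newer systems) and draw each CGImage into your mirror window's view, e.g. via an NSImageView as mentioned above.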
Many animation effects are simply gratuitous eye candy -- however, there are situations where animations effectively communicate to the user what's going on.
What are some of your favorite uses for animations, and what specific animation type would you use?
E.g.: Animate items downwards when a new item is inserted into a list
I really like Google Chrome's use of animation when a file is being downloaded. It's hard to describe, but it's a circle that fills like a pie chart as the download progresses, and the circle is overlaid with the icon for the file you're downloading. Very slick.
One example I can think of is the animation used by operating systems when you minimize a window.
Both Microsoft Windows and Apple OS X animate the window going down to the taskbar (or the Dock in OS X) to show the user where the window went. Otherwise, novice users who hit minimize by accident might have trouble getting the window back.
I don't use Linux, but I'm pretty sure it does the same. I'm not being discriminatory =)
From enjoy3d.com: a mouse-icon graphic (http://worldsware.com/images/mouse.gif) with the hint "Press your mouse button and move to look around."
There is a very nice paper by Ben Bederson and Angela Boltman in which they evaluate the impact of animation on users' ability to build a mental map of the information in the space:
Does Animation Help Users Build Mental Maps of Spatial Information?
I believe that visual changes should not be abrupt, be it a status notification, a window being maximized/minimized, or data being deleted/added. I cannot find a reference, but it is usually recommended that animations should not take as long as 1-2 seconds; they should be closer to human response times.
My favorite use of animation is not in commercial software (though Apple is good at this) but in a research paper called Phosphor, which I consider one of the great UI ideas that have not yet been implemented in major operating systems.
AJAX loading gifs - you've got to have an indicator that you definitely registered an event and you're doing something about it
Progress bars are nice for things that take more than a moment or two, but only when they are accurate. An inaccurate progress bar is worse than none, in my opinion.