How do I automatically resize GUI elements in AnyLogic?

When designing a GUI in most languages, you typically don't give exact dimensions for each component. Rather, you say how GUI components fit and size relative to each other. For example, Button1 should take up all the space Button2 and Button3 don't use; the TextPanel should fill as much space as it can; and the horizontal list of images should expand and shrink as the window expands and shrinks. In AnyLogic, I don't see any obvious way to do this, yet I need to develop models that work on multiple screen sizes. Is it possible to auto-scale GUI components in AnyLogic as it is in other languages? If so, how?

Unfortunately, there is no direct support for this as far as I know.
However, some of what you describe can be achieved programmatically, i.e. by using the dynamic properties of your GUI elements.
Experiments provide the functions getWindowWidth() and getWindowHeight(), and you can bind your button's width to those values. With a bit of experimentation, you should be able to get the result you want.
cheers

Related

PtInRect vs child windows

I've seen cases where people use DrawFrameControl along with PtInRect (where the mouse position is tested against the rectangle of the frame control) to simulate the effect of having a control (like a button). Why would you want to do this rather than using child windows?
An example where this technique is used is this docking framework, where the docking window's close button isn't a physical window.
For an application I'm writing, I'm using a list view control which will hold up to 1000 items. Each item will hold, let's say, 10 buttons. All buttons are custom drawn.
Would it be considered a more efficient (and faster) approach to use the PtInRect mechanism for this?
Each process has a limit of about 10,000 window handles. Not only would it be inefficient to create windows for 10 buttons on each of 1,000 items, it wouldn't necessarily be possible.
To answer your question: yes, creating "virtual" buttons by painting and hit-testing yourself is a much better solution.
Separate child windows bring certain overhead with them. Each child has its own attributes: window class, styles, window procedure, position, owning thread, etc. If you need to manage a large array of such children, many of their attributes will most likely be the same, wasting system resources. It can also become harder to manage them as separate windows:
You may notice painting glitches. Each window paints somewhat independently of the others, so when another window is moved on top of your multi-child window and then moved away, you may see unpleasant intermediate states, depending on how you organize things: the background is redrawn with holes visible at the location of each child moments before the children repaint. CS_PARENTDC might help with that.
Whenever you need to reposition these children, you need to use the DeferWindowPos family of functions or else suffer similar repainting issues.
All these children need separate messages to manipulate.
All in all, it may even be simpler to simulate these children. My feeling is that attempting to put 10,000 buttons on top of a list view would be much harder to implement and would suffer from the visual problems above. Using DrawFrameControl with owner-drawn List View items would most likely give you a simpler implementation with a better visual result.
Besides, there is a limit on the number of windows you can create, as #arx noted.
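The "virtual button" approach boils down to keeping a list of rectangles and hit-testing the mouse position against them yourself. Here is a minimal sketch of that idea in Python with hypothetical names; a real Win32 implementation would store RECT structures and call PtInRect from the WM_LBUTTONDOWN handler:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VirtualButton:
    # One rectangle per painted "button"; no window handle is ever created.
    left: int
    top: int
    right: int
    bottom: int
    label: str

def hit_test(buttons: List[VirtualButton], x: int, y: int) -> Optional[VirtualButton]:
    # Same containment test PtInRect performs: right/bottom edges are exclusive.
    for b in buttons:
        if b.left <= x < b.right and b.top <= y < b.bottom:
            return b
    return None
```

On a mouse click you run the hit test with the cursor position and, if it returns a button, treat it as pressed and repaint just that rectangle. Ten thousand such records cost a modest amount of memory instead of ten thousand window handles.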

What is the main idea of creating click heatmap?

In one of my projects, I would like to create a heatmap of user clicks. I searched for a while and found this library - http://www.patrick-wied.at/static/heatmapjs/examples.html . That is basically exactly what I would like to make. The only difference is that I would like to create the heatmap in SVG, if possible.
I would like to create my own heatmap and I'm just wondering how to do that. I have the XY position of each click. Most clicks have different XY positions, but from time to time a few clicks can have the same XY position.
I found a few solutions based on laying a grid over the website, where you check which clicks fall into the same cell of the grid, and based on that information you fill the most-clicked cells with red, orange and so on. But it seems a little complicated to me, and maybe slow for bigger grids.
So I'm wondering whether there is another way to "calculate" heatmap colors, or what the main idea used in the library above is.
Many thanks
To make this kind of heat map, you need some kind of writable array (or, as you put it, a "grid"). User clicks are added onto this array in a cumulative fashion, by adding a small "filter" sub-array (aligned around each click) to the writable array.
Unfortunately, this "grid" method seems to be the easiest, simplest way to get that kind of smooth, blobby appearance. Fortunately, this kind of operation is well-supported by software and hardware, under the name "computer graphics".
When considered as a computer graphics operation, the writable array is called an "accumulation buffer". The filter is what gives you the nice blobby appearance, even with a relatively small number of clicks -- you can tweak the size of the filter according to the needs of your application.
After accumulating the user clicks, you will need to convert from the raw accumulated values to some kind of visible color scale. This may involve looking through the entire accumulation buffer to find the largest value, and mapping your chosen color scale accordingly. Alternately, you could adjust your scale according to the number of mouse clicks, or (as in the demo you linked to) just choose a fixed scale regardless of the content of the buffer.
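The accumulate-then-normalize process can be sketched in a few lines of Python (function and parameter names are my own; a real implementation would stamp a precomputed filter array rather than re-evaluating the Gaussian per pixel):

```python
import math

def accumulate_clicks(clicks, width, height, radius=5):
    # The writable "accumulation buffer": one cell per pixel, all zero initially.
    buf = [[0.0] * width for _ in range(height)]
    sigma = radius / 2.0
    for cx, cy in clicks:
        # Add a small Gaussian "filter" centered on each click, clipped to the buffer.
        for y in range(max(cy - radius, 0), min(cy + radius + 1, height)):
            for x in range(max(cx - radius, 0), min(cx + radius + 1, width)):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                buf[y][x] += math.exp(-d2 / (2.0 * sigma * sigma))
    # Map raw accumulated values into [0, 1] using the largest value,
    # ready to be fed through a color scale.
    peak = max(max(row) for row in buf)
    if peak > 0:
        buf = [[v / peak for v in row] for row in buf]
    return buf
```

Repeated clicks at the same spot simply accumulate, so the hottest cell always normalizes to 1.0, and the filter's radius controls how blobby the result looks.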
Finally, I should mention that SVG is not well-adapted to representing this kind of graphic. It should probably be saved as some kind of image file (.jpg or .png) instead.

Blind programmer: designing an interface in Xcode without being able to visually position UI elements

I am fairly new to Mac and iOS programming, and have recently decided to get really serious about developing applications for both platforms. The first step I took was to register for the Mac and iOS developer programs, download Xcode and study a book about Objective-C. I spent the last 6 weeks familiarizing myself with Objective-C (its syntax, concepts and the Foundation framework), developing purely command-line applications for that purpose.
Now, the next step seems to be Cocoa and developing applications that offer graphical UIs, which I'm now looking into. Now, here's an issue I am having with this: as I am completely blind, I cannot visually see the screen. Thus, I use VoiceOver, the screen reader built into OS X and iOS. Maybe some of you devs have heard of VoiceOver at some point, as Apple has specified a variety of accessibility guidelines that concern VoiceOver. On that note, thanks a lot to all of you who abide by those guidelines, your effort is greatly appreciated!!! :-)
As for Xcode, it actually works really well with VoiceOver (VO). Adding in new UI elements is also no big deal, I can just copy them from the library and paste them into the view. However, I cannot drag them around and set them up in an appealing fashion, or at least I haven't found any way of doing it just yet as I'm just starting out!
Now, I really would like to know if there is any 'textual' way of arranging UI elements. I know the inspector has a great variety of options, but I'm not sure yet whether any of them would let me, say, change the coordinates of a UI element by hand. Also, I've read about the new constraints, which help create a consistent layout, but at this point I'm not really familiar with how they are used or whether they would be at all helpful in my case.
Also, I do realize that producing an interface that is 100% appealing to a sighted end user may not be possible for me, as it's hard for me to decide on color selections or to design a logo. Thus, I would probably need to hire someone for those things. However, if it would be possible for me to just specify the layout roughly, that would already help me a lot!
Thanks for any ideas / suggestions :-)
Robin
I'm not a blind user myself but I've worked with VoiceOver so hopefully some of this will help. I'm making this a community answer so feel free to add tips from your own experience if you've worked with UI layout in Xcode with VoiceOver.
When editing UI in Xcode there is an Inspector where you can change the size and position of the views. The shortcut to get to the Size Inspector is Alt+Command+5. The same shortcut works with VoiceOver.
You said that you were coding for both iOS and OS X. On iOS the y-axis starts at the top and points downward (so a higher y value means that the view is lower on the screen). This means that x and y specify the upper-left corner of the view, and the width and height extend to the right and down from there.
On OS X it's the opposite. The y-axis starts at the bottom, and a higher y value means that the view is higher on the screen. In both cases the x-axis goes from left to right. This means that x and y specify the lower-left corner, and the width and height extend to the right and up.
Further, each view is positioned relative to its parent instead of in absolute coordinates. This means that if you position a view at x=10 and y=30, and then position another view inside that one at x=5 and y=10, the inner view will have an absolute position on the screen of x=15 and y=40.
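The two rules here (parent-relative positioning, and OS X's flipped y-axis) are easy to capture in a couple of helper functions; this is a sketch with names of my own, not any Cocoa API:

```python
def absolute_position(nested_offsets):
    # Each (x, y) pair is a view's position relative to its parent,
    # ordered from the outermost view inward; offsets simply add up.
    ax = sum(x for x, _ in nested_offsets)
    ay = sum(y for _, y in nested_offsets)
    return ax, ay

def ios_y_to_osx_y(y, view_height, parent_height):
    # iOS measures y from the top edge down; OS X from the bottom edge up,
    # so the same view's y-coordinate flips relative to its parent's height.
    return parent_height - y - view_height
```

For instance, absolute_position([(10, 30), (5, 10)]) reproduces the example above, giving (15, 40).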
If you can picture the layout in your head then you should be able to do it like this but it may still be hard to do.
Update
At the top of the hierarchy these coordinates relate to the size of the window. On iOS you have fixed sizes (320×480 for the iPhone and 1024×768 for the iPad). Depending on whether the device is in landscape or portrait, one of these is the width and the other is the height. You usually subtract 20 pixels from the height to account for the status bar, so the coordinate where y=0 is directly below the status bar.
On OS X you can change the size of the window yourself. I will try and explain where to find it.
At the top level, navigate to the "source code group" and interact with it. There you should find a "navigation bar group", a "table" and a "scroll area". Interact with the table. In that table you should find a list of Placeholders and a list of Objects. One of the Objects will be the "Window". With the Window selected, all the inspectors will change properties of the Window. You can quickly jump to the Size Inspector by changing to any other inspector and then back: for example, type Alt+Command+4 and then Alt+Command+5. The first two elements in the Size Inspector should be the Width and Height of the Window.
This discussion will inevitably go off-topic, but here are my ideas:
Consider specializing in something else. There are so many things a programmer can work on. Many, if not most, programmers can enjoy long, successful careers without ever having to arrange any UI elements. Frankly, I don't think that this is particularly exciting, unless you have more of a designer mindset.
Alternatively, whatever tool you are using to arrange the elements, this layout is probably saved to some file, probably in some XML-like fashion. Find this file and edit it.
If you want to get really good at arranging UI elements, consider first hiring several people to do multiple arrangements for you. Then analyze the numbers and come up with some kind of formula that will be a good heuristic for such an arrangement. Or perhaps somebody has already come up with this kind of formula.

Should I use view constraints or minimum window size?

I'm making an app for use on OS X and I'm noticing how useful the new constraints feature is in the Interface Builder (which is built into Xcode now, of course). It's so useful and dynamic in fact that I'm questioning whether or not I should set a minimum window size or just rely on the constraints of my windows to set the minimums and maximums themselves.
I have a feeling that OS X takes minimum and maximum window sizes into consideration in matters other than simply limiting window size, and that it may be useful to set them for that sake. But I also feel it might be good style to rely on the constraints to dynamically set minimum and maximum window sizes because of their dynamic behavior. For example, if I decide to change the minimum width of a control with constraints, I don't have to remember to also change the window's minimum width.
Another even more crucial example of the benefits of relying on constraints to set the minimum and maximum window sizes is that if the user changes something like text size, the affected controls in my application are able to change their size constraints dynamically, but a statically set minimum and maximum window size would ruin that dynamic behavior.
Once again, all of these benefits should be weighed against the possibility that OS X takes minimum and maximum window sizes into consideration in some way, making it useful to set them for that reason; I'm just not sure whether OS X takes them into consideration and, if so, how it uses them.
I've looked through Apple's documentation and cannot find anything that provides a satisfying answer.
The best thing to do in a situation like this is to try it out yourself. It took no more than two minutes to create a new application with a single window and a few controls. You don't need to add any code at all if you just want to play with a resizable window:
This window has no minimum size and no constraints, and the problem is immediately obvious. You can resize the window so that it looks like this:
Adding some constraints between the buttons shows the promise that constraints provide. Now the window looks like this at its smallest size:
A couple more constraints on the label finally gave the desired result:
That's great, but it took a bit of work to get there. I didn't add a complete set of constraints -- a vertical constraint between the two right hand buttons would have been redundant since there's already one between the buttons on the left. For a window with many controls, setting up enough constraints to cover all the views could be: a) very useful and worthwhile, or b) a pain in the butt and of little extra value. It depends on your situation. A simpler scheme is to just add up the heights of the controls that might overlap (two buttons and the label) and the desired spaces between them, and then set that as the minimum height for the window.
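The "add up the heights" fallback is plain arithmetic; a sketch of it, with assumed (not Apple-documented) control sizes and margins:

```python
def minimum_window_height(control_heights, spacing, top_margin=20, bottom_margin=20):
    # Controls stacked vertically: their heights, the gaps between
    # adjacent controls, and a margin above and below the stack.
    gaps = spacing * max(len(control_heights) - 1, 0)
    return sum(control_heights) + gaps + top_margin + bottom_margin

# E.g. two 22 pt buttons and a 17 pt label with 8 pt spacing:
# minimum_window_height([22, 22, 17], 8)
```

Whatever that returns becomes the window's minimum height, and the same calculation on widths gives the minimum width.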
I can see either strategy being useful, depending on the window content. In fact, I don't think they're really two separate strategies at all... setting the minimum window dimensions is really just another kind of constraint that you're adding. For example, there may be a size below which your window would just look silly or not be very useful, so you could set the minimum window size to those dimensions. At the same time, you might want to set constraints between buttons to prevent overlapping controls in localized versions (e.g. German names tend to get pretty long).

Relative percentage UI control

I need the user to set a number of percentage values which should always add up to 100%. What are standard ways to achieve this? I came up with the following:
1) Have a standard slider control for each value you need to set. Moving one slider will automatically adjust all the others so the sum always comes out as 100%. You can fix individual sliders with a checkbox displayed next to each one. Only the remaining, "free", sliders will be adjustable.
Pro: consists entirely of standard widgets users already know
Con: lots of widgets, lots of screen real estate used, looks ugly when you have lots of sliders and thus low percentage values, normalization to 100% isn't immediately obvious.
2) have a slider control with several sliding knobs.
Pro: normalization is implicit and obvious because the length of the slider is fixed, relative weight is easy to see at a glance
Con: non-standard, knobs can easily overlap each other, knobs aren't easy to fix, no obvious place to put a text/number representation for each interval/percentage
3) display a standard pie chart.
Pro: normalization is implicit and obvious, relative weight is easy to see
Con: non-standard for interactive use, hard to make intuitive slice resizing work, no place to put a text/number representation for each slice
4) ... ?
I'm not happy with any of these, hence my question here. Any better ideas? I'm dealing with 3-10 individual percentage values on a rich Windows client (i.e. not web).
cheers,
Sören
What about vertical sliders? Like a sound mixer. I think it looks a lot better than a list of 10 horizontal sliders.
Or a fixed-width bar with several sliders on it, a bit like the gradient control in Photoshop, if you know it.
Similar to the timeline idea, how about a slider like the partitioning interface in GParted or similar disk partitioning tools?
You could display the percentage values and actual numbers above the dynamically resizing bars to allow the user to edit them numerically instead of using the sliders if they want to configure it manually.
How about a timeline view (Gantt chart), kind of like in Microsoft Expression Blend or in Flash, where you have multiple layers for each action and each action can occupy a range on the scale from 0 to 100?
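Whichever widget ends up being used, the renormalization behavior described in option 1 of the question is the same: when one value changes, clamp it so the locked values still fit, then redistribute the remainder proportionally among the remaining free values. A sketch in Python (names are mine):

```python
def set_value(values, locked, index, new_value, total=100.0):
    # Redistribute so the sum stays at `total`, leaving locked entries alone.
    locked_sum = sum(v for i, v in enumerate(values) if locked[i])
    free = [i for i in range(len(values)) if not locked[i] and i != index]
    # Clamp the moved slider so the locked sliders still fit.
    new_value = max(0.0, min(new_value, total - locked_sum))
    values = list(values)
    values[index] = new_value
    remainder = total - locked_sum - new_value
    old_free_sum = sum(values[i] for i in free)
    for i in free:
        if old_free_sum > 0:
            # Scale each free value by its share of the old free total.
            values[i] = values[i] / old_free_sum * remainder
        else:
            # All free values were zero: split the remainder evenly.
            values[i] = remainder / len(free)
    return values
```

For example, moving the first of [50, 30, 20] up to 60 scales the other two down to 24 and 16, keeping the total at 100.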