Preferred way to build a GUI component tree

What is the preferred way to build an application's GUI component tree?
1. Instantiate all components up front and build the entire tree, controlling it with show/hide/disable/enable operations on user events.
2. Dynamically create the GUI, adding and removing components in response to user events.
I'm especially interested in this design problem in JavaFX.

Sorry, I don't know much about JavaFX.
But I would suggest option 2. If you instantiate everything at the start, you're going to use up a whole load of memory when you only actually need memory for the GUI components that are currently visible.
Create all of the components for the current screen, and show/hide/disable/enable them. But don't create components that don't live on the current screen/window/form/dialog.
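For illustration, here is a minimal JavaFX sketch of that middle ground, assuming a made-up two-screen app: each screen's node graph is built only the first time the user opens it, and existing screens are toggled with setVisible/setManaged rather than rebuilt. The class and screen names are hypothetical.

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.Label;
import javafx.scene.layout.StackPane;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;

public class LazyScreensApp extends Application {

    private final StackPane root = new StackPane();
    private VBox settingsScreen;   // built lazily, only if the user ever opens it

    @Override
    public void start(Stage stage) {
        Button openSettings = new Button("Settings");
        VBox mainScreen = new VBox(new Label("Main screen"), openSettings);
        openSettings.setOnAction(e -> showSettings(mainScreen));

        root.getChildren().add(mainScreen);
        stage.setScene(new Scene(root, 400, 300));
        stage.show();
    }

    private void showSettings(VBox mainScreen) {
        if (settingsScreen == null) {              // create on first use only
            settingsScreen = new VBox(new Label("Settings screen"));
            root.getChildren().add(settingsScreen);
        }
        // components that already exist are just toggled, not recreated
        mainScreen.setVisible(false);
        mainScreen.setManaged(false);
        settingsScreen.setVisible(true);
        settingsScreen.setManaged(true);
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```

The point is simply that nothing outside the current screen is instantiated until it is actually needed, while components on the current screen are shown and hidden in place.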

The answer depends mainly on performance. I have built trees with ~3000 nodes with no problem. At some point the number of nodes added to the Scene does impact performance, but this is a moving target, as each release of JavaFX improves on this.
However, not all of this performance degradation is due to the number of nodes; some of it may be due to "BindStorming". See Jim Connors' blog on this and other performance-related posts.

Related

Best approach to creating a site map for a creative application?

I work as a UX designer for a company that makes a creative application for the live video events space.
I want to do some work on the information architecture of our product. The software is complex (think Photoshop) in its layout.
Is a site map a good approach for mapping the objects within the software, with its windows and menus? Is there another option for building an overview of the information it contains?
Cheers Jim
A sitemap makes more sense to me when you are actually creating a website. What you describe is software with probably multiple features and a very clear goal.
I don't want to forget my favorite tool: pen and paper.
But also, I would do a Content Inventory and use it to create an Affinity Diagram so you can categorize which features you will display where and how the information will be shown. Don't be afraid to explore databases and how the relationships between pieces of data might look. You will have to iterate on this several times and do some Data Modeling at this point.
Also, if it is that complex, I would create Task Flows and User Flows to better understand how the user will complete tasks from start to finish.
In the end, you will have to model navigation and processes with a certain hierarchy. Usually, I rely on B&W wireframes more than anything, as they help me represent the visual impact step by step.
Hope this helps and lots of luck!
I don't think a "site" map in the traditional sense will really help you. However, some elements like hierarchy & categorization certainly sound useful.
I think I would use traditional artifacts like a card sort if you're talking to users. Otherwise, you run the risk of confusing the issue (applications don't utilize links in a traditional site map structure).
For you, and/or internal team members, I wonder if mind mapping tools would help?
You could also use some visual design program to lay out all the options/sub-options in physical space. Imagine an interface with all options expanded. This would give you a better way to audit all features and make sure you're accounting for everything.
Finally, journey maps or storyboards that utilize interface options in the steps should be created for the highest-priority tasks/workflows. This helps you know how to deal with interface changes (e.g. picking settings for a process).

User Interface Design Package

I am working on designing a user interface and want to convey my design to my peers. Typically, I'd use a mock-up of the UI and UML, but given the complexity, size, and multiple asynchronous interactions, I'm not sure that this makes the most sense. It doesn't seem to let me describe the process efficiently.
Does anyone have experience designing large user interfaces? How would a company that designs UIs go about modeling their process/design? This seems to be a question for the 'front-end' engineers.
Well, let's attempt an answer, even though your question is very broad. I'll share my experience and hope it helps you decide.
To my mind, using UML in a project is very useful if you use it right: it's often clear and quite easy to understand. On the other hand, when working on a huge project it can be costly, because it takes time to create your diagrams, and nowadays you have to be as fast as possible!
Over the last few months I have worked on a project with a GUI, and to define it well, I first drew some screens on paper and quickly asked for feedback by the coffee machine.
I then created some activity diagrams and, after that, a mock-up.
So how do you choose?
First of all, if your GUI needs a beautiful design, I don't think you can avoid a mock-up, but I don't know if that's your goal.
My main tip would be: try to simplify the complexity in your program. If you have multiple asynchronous interactions that can arise at any time, anywhere, you'll run into a lot of problems.
In my project, for example, my tablet could be preempted by an external computer at any moment. I simply created a class beforehand to manage this asynchronous call, and my sub-activity diagrams stayed intact.
Draw specific diagrams for specific functionalities, and leave out diagrams that are not relevant or too simple to be useful.
It's hard to explain more than this without giving you an example. I hope it helps a bit.

v8 is too slow for my purpose

I'm working on a music visualization plugin for libvisual. It's an AVS clone -- AVS being from Winamp. Right now I have a superscope plugin. This element has 4 scripts, and "point" is run at every pixel, so you can imagine that it has to be rather fast. The original libvisual AVS clone had a JIT compiler that was really fast, but it had some bugs and wasn't fully implemented, so I decided to try v8. Well, v8 is too slow running the compiled script at every pixel. Is there any other script engine that would be pretty fast for this purpose?
If you are running your updates on a per-pixel level, I would suggest having an off-screen in-memory representation of the screen, and update the screen as a whole, not each individual pixel. I know that this is a common issue for bitmap updates in general, not V8 per-se. I don't know enough about the specific environment you are working in to be much help, only that as I said, it's a common performance issue to try to update individual pixels against a UI canvas one at a time. If you can do an offline/offscreen representation of your canvas/ui surface then update it all at once, your performance will be much better.
Also, some of this will depend on how your event model is worked out. If that doesn't work well, you may need to bring this logic into a compiled COM object or something, but with a per-pixel update scheme you will run into the same issues either way. I'm not saying that's what you're doing, just noting again that this is the most common issue with this type of problem.
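To illustrate the off-screen buffer idea in isolation (this is a generic Java/Swing sketch, not the libvisual or AVS API): every per-pixel computation writes into an in-memory array, and the display is updated once per frame with a single bulk copy and repaint.

```java
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class OffscreenRenderer extends JPanel {

    private final BufferedImage frame =
            new BufferedImage(320, 240, BufferedImage.TYPE_INT_RGB);
    private final int[] pixels = new int[320 * 240];

    /** Runs the per-pixel computation against the in-memory buffer only. */
    public void renderFrame(double time) {
        for (int y = 0; y < 240; y++) {
            for (int x = 0; x < 320; x++) {
                // stand-in for the superscope "point" computation
                int v = (int) (128 + 127 * Math.sin((x + y) * 0.05 + time));
                pixels[y * 320 + x] = (v << 16) | (v << 8) | v;
            }
        }
        // one bulk copy into the image, then one repaint of the whole panel
        frame.setRGB(0, 0, 320, 240, pixels, 0, 320);
        repaint();
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawImage(frame, 0, 0, null);
    }

    public static void main(String[] args) {
        OffscreenRenderer panel = new OffscreenRenderer();
        JFrame window = new JFrame("Offscreen rendering sketch");
        window.add(panel);
        window.setSize(320, 240);
        window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        window.setVisible(true);
        new Thread(() -> {
            double t = 0;
            while (true) {
                panel.renderFrame(t += 0.1);
                try { Thread.sleep(33); } catch (InterruptedException e) { return; }
            }
        }).start();
    }
}
```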
Sounds like you need to use native code, or maybe a Java applet (not that I recommend a Java applet; use it only if you are in full control of the client environment).

Why is UI programming so time consuming, and what can you do to mitigate this? [closed]

In my experience, UI programming is very time consuming, expensive (designers, graphics, etc.) and error prone - and by definition UI bugs or glitches are very visible and embarrassing.
What do you do to mitigate this problem?
Do you know of a solution that can automatically convert an API into a user interface (preferably a web user interface)?
Probably something like a JMX console:
- with good defaults
- that can be tweaked with CSS
- where fields can be configured to be radio buttons or a drop-down list, a text field or a text area, etc.
- localizable
- etc.
Developing UI is time consuming and error-prone because it involves design. Not just visual or sound design, but more importantly interaction design. A good API is always interaction model neutral, meaning it puts minimal constraints on actual workflow, localisation and info representation. The main driver behind this is encapsulation and code re-use.
As a result it is impossible to extract enough information from API alone to construct a good user interface tailored to a specific case of API use.
However, there are UI generators that typically produce CRUD screens based on a given API. Needless to say, such generated UIs are not well suited for frequent users who demand higher UI efficiency, nor are they particularly easy to learn in the case of a larger system, since they don't really communicate the system image or interaction sequences well.
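For a sense of how little such generators have to work with, here is a hypothetical reflection-based sketch in Java: all it can recover from the API is property names and types, which is why the resulting CRUD screens say nothing about workflow, grouping or priorities. The Customer bean is invented for the example.

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import javax.swing.*;

public class CrudFormGenerator {

    /** Builds a naive edit form with one label + text field per readable bean property. */
    public static JPanel formFor(Class<?> beanClass) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(beanClass, Object.class);
        JPanel form = new JPanel();
        form.setLayout(new BoxLayout(form, BoxLayout.Y_AXIS));
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            // All the generator knows is the property name and type - nothing about
            // workflow, validation rules, grouping, or which fields matter to the user.
            form.add(new JLabel(pd.getDisplayName()));
            form.add(new JTextField(20));
        }
        return form;
    }

    // Hypothetical bean standing in for "the API".
    public static class Customer {
        private String name;
        private String email;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }

    public static void main(String[] args) throws Exception {
        JFrame frame = new JFrame("Generated CRUD form");
        frame.add(formFor(Customer.class));
        frame.pack();
        frame.setVisible(true);
    }
}
```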
It takes a lot of effort to create a good UI because it needs to be designed according to specific user needs, not produced by some mundane API-to-UI conversion task that can be fully automated.
To speed up the process of building a UI and mitigate the risks, you can either involve UI professionals or learn more about the job yourself. Unfortunately, there is no shortcut or magic wand, so to speak, that will produce a quality UI based entirely and only on an API without lots of additional information and analysis.
Please also see an excellent question: "Why is good UI design so hard for some developers?" that has some very insightful and valuable answers, specifically:
Shameless plug for my own answer.
Great answer by Karl Fast.
I don't believe UI programming is more time consuming than any other sort of programming, nor is it more error prone. However, bugs in the UI are often more obvious. Spotting an error in a compiler is often much more tricky.
One clear difference with UI programming is that you have a person at the other end instead of another program, which is very often the case when you're writing compilers, protocol parsers, debuggers, and other code that talks to other programs and computers. This means that the entity you're communicating with is not well specified and may behave very erratically.
EDIT: "unpredictable" is probably a more appropriate term. /Jesper
Your question of converting an API to a user interface just doesn't make sense to me. What are you talking about?
Looks like you are looking for the 'Naked Objects' architectural pattern. There are various implementations available.
http://en.wikipedia.org/wiki/Naked_objects
I'm not providing a solution, but I'll attempt to answer the why.
So I don't speak for everyone, but for me at least, I believe one reason is that programmers tend to concentrate on functionality more than usability, and they tend not to be too artistic. I think they just tend to have a different type of creativity. I find that it takes me a long time to create the right graphics compared to how long it takes me to write the code (though, for the most part, I haven't done any projects with too many graphical requirements).
Automatically generating user interfaces may be possible to some extent, in that it can generate controls for the required input and output of data. But UI design is much more involved than simply putting the required controls onto a screen. In order to create a usable, user-friendly UI, knowledge from disciplines such as graphic design, ergonomics, psychology, etc. has to be combined. There is a reason that human-computer interaction is becoming a discipline of its own: it's not trivial to create a decent UI.
So I don't think there's a real solution to your problem. UI design is a complex task that simply takes time to do properly. The only area where it is relatively easy to win some time is with the tooling: if you have powerful tools to implement the design of the user interface, you don't have to hand-code every pixel of the UI yourself.
You are absolutely correct when you say that UI is time consuming, costly and error prone!
A great compromise I have found is as follows...
I realized that a lot of data (if not most) can be presented using a simple table (such as a JTable), rather than continuously trying to create custom panels and fancy GUIs. It doesn't seem obvious at first, but it's quite decent, usable and visually appealing.
Why is it so fast? Because I was able to create a reusable framework which can accept a collection of concrete models and, with little to no effort, render all of these models within the table. So much code reuse, it's unbelievable.
By adding a toolbar above the window, my framework can add to, remove from or edit entries in the table. Using the full power of JTables, I can hide (by filtering) and sort as needed by extending various classes (but only if/when this is required).
I find myself reusing a heck of a lot of code every time I want to display and manage new models. I make extensive use of icons (per column, row or cell, etc.) to beautify the screens. I use large icons as a window header to make each screen 'appear' different and appealing; it always looks like a new and different screen, but it's always the same code behind it.
A lot of work and effort was required at first to build the framework, but now it's paying off big time.
I can write the GUI for an entirely new application with as many as 30 to 50 different models, and as many screens, in a fraction of the time it would take me using the 'custom UI' method.
I would recommend you evaluate and explore this approach!
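As a rough illustration of the approach this answer describes (not the answerer's actual framework), here is a minimal generic table screen in Swing: a handful of column definitions is all that's needed to render any model in a sortable JTable, so new screens become mostly configuration rather than new UI code. The TableScreen, Column and Invoice names are made up for the example.

```java
import java.awt.BorderLayout;
import java.util.List;
import java.util.function.Function;
import javax.swing.*;
import javax.swing.table.AbstractTableModel;

public class TableScreen<T> extends JPanel {

    /** One column = a header plus a function extracting the cell value from the model. */
    public record Column<T>(String header, Function<T, Object> value) {}

    public TableScreen(String title, List<T> rows, List<Column<T>> columns) {
        JTable table = new JTable(new AbstractTableModel() {
            @Override public int getRowCount() { return rows.size(); }
            @Override public int getColumnCount() { return columns.size(); }
            @Override public String getColumnName(int c) { return columns.get(c).header(); }
            @Override public Object getValueAt(int r, int c) {
                return columns.get(c).value().apply(rows.get(r));
            }
        });
        table.setAutoCreateRowSorter(true);              // sorting comes for free
        setLayout(new BorderLayout());
        add(new JLabel(title), BorderLayout.NORTH);      // per-screen "header"
        add(new JScrollPane(table), BorderLayout.CENTER);
    }

    // Hypothetical model: the same screen code works unchanged for any other model.
    public record Invoice(String number, double amount) {}

    public static void main(String[] args) {
        List<Invoice> data = List.of(new Invoice("A-1", 120.0), new Invoice("A-2", 75.5));
        TableScreen<Invoice> screen = new TableScreen<>("Invoices", data, List.of(
                new Column<Invoice>("Number", Invoice::number),
                new Column<Invoice>("Amount", Invoice::amount)));

        JFrame frame = new JFrame("Reusable table screen");
        frame.add(screen);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```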
If you already know Ruby on Rails, or could learn it, ActiveScaffold is excellent for this.
One reason is that we don't have a well-developed pattern for UTDD - User Test Driven Development. Nor have I seen many good examples of mapping User Stories to Unit Tests. Why, for example, do so few tutorials discuss User Stories?
ASP.NET Dynamic Data is something that you should investigate. It meets most, if not all, of your requirements.
It's hard because most users/customers are dumb and can't think straight! :)
It's time consuming because UI devs/designers are so obsessive-compulsive! :)

Design & Coding - top to bottom or bottom to top? [closed]

When coding, what in your experience is a better approach?
1. Break the problem down into small enough pieces and then implement each piece.
2. Break the problem down, but then implement using a top-down approach.
Any other?
I tend to design top-down and implement bottom-up.
For implementation, building the smallest functional pieces and assembling them into the higher-level structures seems to be what works best for me. But, for design, I need to start from the overall picture and break it down to determine what those pieces will be.
Here's what I do:
Understand the domain first. Understand the problem to be solved. Make sure you and the customer (even if that customer is you!) are on the same page as to what problem is to be solved.
Then a high level solution is proposed to the problem and from that, the design will turn into bubbles or bullets on a page or whatever, but the point is that it will shake out into components that can be designed.
At that point, I write tests for the yet-to-be written classes and then flesh out the classes to pass those tests.
I use a test-first approach and build working, tested components. That is what works for me. When the component interfaces are known and the 'rules' are known for how they talk to each other and provide services to each other, then it becomes generally a straightforward 'hook everything together' exercise.
That's how I do it, and it has worked well for me.
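A small, hypothetical example of that test-first flow in Java/JUnit: the tests are written first, against the component's intended interface, and only then is the class fleshed out until they pass. The PriceCalculator component is invented for the example.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// The tests are written first, against the component's intended interface.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountAboveThreshold() {
        PriceCalculator calculator = new PriceCalculator(100.0, 0.10);
        assertEquals(90.0, calculator.total(1, 100.0), 0.001);
    }

    @Test
    void noDiscountBelowThreshold() {
        PriceCalculator calculator = new PriceCalculator(100.0, 0.10);
        assertEquals(50.0, calculator.total(1, 50.0), 0.001);
    }
}

// Only then is the class fleshed out until both tests pass.
class PriceCalculator {
    private final double threshold;
    private final double discount;

    PriceCalculator(double threshold, double discount) {
        this.threshold = threshold;
        this.discount = discount;
    }

    double total(int quantity, double unitPrice) {
        double total = quantity * unitPrice;
        return total >= threshold ? total * (1 - discount) : total;
    }
}
```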
You might want to look over the Agile Manifesto. Top-down and bottom-up are predicated on Build It All At Once design and construction.
The "Working software over comprehensive documentation" means the first thing you build is the smallest useful thing you can get running. Top? Bottom? Neither.
When I was younger, I worked on projects that were -- by contract -- strictly top down. This doesn't work. Indeed, it can't work. You get mountains of redundant design and code as a result. It was not a sound approach when applied mindlessly.
What I've noticed is that the Agile approach -- small pieces that work -- tends to break the problem down to parts that can be grasped all at once. The top-down/bottom-up no longer matters as much. Indeed, it may not matter at all.
Which leads to: "How do you decompose for Agile development?" The trick is to avoid creating A Big Thing that you must then decompose. If you analyze a problem, you find actors trying to accomplish use cases and failing because they don't have all the information, or they don't have it in time, or they can't execute their decisions, or something like that.
Often, these aren't Big Things that need decomposition. When they are, you need to work through the problem in the Goals Backward direction: from goals, to the things that enable you to meet those goals, to the things that enable the enablers, and so on. Since goals are often Big Things, this tends to be Top Down -- from general business goal to detailed business process and step.
At some point, we overview these various steps that lead to the goals. We've done the analysis part (breaking things down). Now comes the synthesis part: we reassemble what we have into things we can actually build. Synthesis is Bottom Up. However, let's not get carried away. We have several points of view, each of which is different.
We have a model. This is often built up from details into a larger conceptual model. Then it is sometimes decomposed again into a model normalized for OLTP, or decomposed into a star schema normalized for OLAP. Then we work back up to create an ORM mapping from the normalized model. Up - Down - Up.
We have processing. This is often built from summaries of the business processes down into details of processing steps. Then software is designed around the steps. Then the software is broken into classes and methods. Down - Up - Down.
[Digression. With enlightened users, this decomposition defines new job titles and ways of working. With unenlightened users, the old jobs stay and we write mountains of documentation to map old jobs onto new software.]
We have components. We often look at the pieces, look at what we know about available components, and do a kind of matching. This is the most random process; it's akin to the way crystals form -- there are centers of nucleation and the design kind of solidifies around those centers. Web Services. Database. Transaction Management. Performance. Volume. Different features that somehow help us pick components that implement some or all of our solution. It often feels bottom-up (from feature to product), but sometimes top-down ("I'm holding a hammer, call everything a nail" == use the RDBMS for everything).
Eventually we have to code. This is bottom up. Kind of. You have to define a package structure. You have to define classes as a whole. That part was top down. You have to write methods within the classes. I often do this bottom-up -- rough out the method, write a unit test, finish the method. Rough out the next method, write a unit test, finish the method.
The driving principle is Agile -- build something that works. The details are all over the map -- up, down, front, back, data, process, actor, subject area, business value.
Yes. Do all of those things.
It may seem sarcastic (sorry, I revert to form), but this really is a case where there is no right answer.
Also in the agile way, write your test(s) first!
Then all software development is a continual cycle of:
Red - the code fails the test.
Green - the code passes the test.
Refactor - code improvements that are intention-preserving.
Defects, new features, changes - it all follows the same pattern.
Your 2nd option is a reasonable way to go. If you break the problem down into understandable chunks, the top-down approach will reveal any major design flaws before you implement all the little details. You can write stubs for lower-level functionality to keep everything hanging together.
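For example (a hypothetical sketch, not taken from the question): the top-level flow is written first and compiles against stubbed lower-level functions, so the overall design can be reviewed and even run before the details are filled in.

```java
import java.util.List;

public class ReportGenerator {

    // Top-level flow written first: it shows the overall design even though
    // the lower-level pieces are still stubs.
    public String generateMonthlyReport(int year, int month) {
        List<String> rows = loadRows(year, month);
        List<String> filtered = filterRelevant(rows);
        return format(filtered);
    }

    // --- Stubs keeping everything hanging together until implemented ---

    private List<String> loadRows(int year, int month) {
        return List.of();                                // TODO: fetch from the database
    }

    private List<String> filterRelevant(List<String> rows) {
        return rows;                                     // TODO: apply real business rules
    }

    private String format(List<String> rows) {
        return "report with " + rows.size() + " rows";   // TODO: real formatting
    }

    public static void main(String[] args) {
        System.out.println(new ReportGenerator().generateMonthlyReport(2024, 5));
    }
}
```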
I think there's more to consider than top-down versus bottom-up design. You obviously need to break the design up into manageable units of work, but you also need to consider prioritisation etc. And in an iterative development project, you will often redefine the problem for the next iteration once you've delivered the solution for the previous one.
When designing, I like to do middle-out. I like to model the domain, then design out the classes, move to the database and UI from there. If there are specific features that are UI-based or database-based, I may design those up front as well.
When coding, I generally like to do bottom-up (database first, then business entities, then UI) if at all possible. I find it is a lot easier to keep things straight with this method.
I believe that with good software designers (and in my opinion all software developers should also be software designers at some level), the magic is in being able to do top-down and bottom-up simultaneously.
What I was "schooled" to do by my mentors is start by very brief top-down to understand the entities involved, then move to bottom-up to figure out the basic elements I want to create, then to back up and see how I can go one level down, knowing what I know about the results of my bottom up, and so forth until "they meet in the middle".
Hope that helps.
Outside-in design.
You start with what you're trying to achieve at the top end, and you know what you've got to work with at the bottom end. Keep working both ends until they meet in the middle.
I sort of agree with all of the people saying "neither", but everyone falls somewhere on the spectrum.
I'm more of a top-down kind of guy. I pick one high level feature/point/whatever and implement it as a complete program. This lets me sketch out a basic plan and structure within the confines of the problem domain.
Then I start with another feature and refactor out everything from the original that can be used by the second into new, shared entities. Lather, rinse, repeat until application is complete.
However, I know a lot of people who are bottom-up guys, who hear a problem and start thinking about all the support subsystems they might need to build the application on top of.
I don't believe either approach is wrong or right. They both can achieve results. I even try to find bottom-up guys to work with, as we can attack the problem from two different perspectives.
Both are valid approaches. Sometimes one just "feels" more natural than the other. However, there is one big problem: some mainstream languages, and especially their frameworks and libraries, rely heavily on IDE support, such as syntax highlighting, background type checking, background compilation, intelligent code completion, IntelliSense and so on.
However, this doesn't work with top-down coding! In top-down coding, you constantly use variables, fields, constants, functions, procedures, methods, classes, modules, traits, mixins, aspects, packages and types that you haven't implemented yet! So, the IDE will constantly yell at you because of compile errors, there will be red squiggly lines everywhere, you will get no code completion and so on. So, the IDE pretty much prohibits you from doing top-down coding.
I do a variant of top-down. I tend to try to do the interface first and then use that as my list of features. What's good about this version is that it still works with an IDE that would otherwise complain: just comment out the one function call to whatever hasn't been implemented yet.

Resources