What is the computational model of ICON and SNOBOL?

I tried to find the computational model of programming languages like Icon and Snobol, but I couldn't find any information about them.
What is the computational model of ICON and SNOBOL?

Snobol follows the functional and logic model (see the infobox on the right side of http://en.wikipedia.org/wiki/SNOBOL), while Icon supports the imperative model.

Related

GUI architecture: Separation of Layout, Rendering, and Input

All the GUI systems I've worked with involve a normalized base widget object that handles layout, input, and rendering. As I've been creating a game GUI in C++ I've been finding that specializing the roles of each widget so that each widget only handles layout, input, or rendering makes the code more composable, flexible, and organized.
However, as I've stretched that paradigm further and further it has started to snap, and I am trying to find the sweet spot between normalized and specialized widgets. My main difficulty is that I have been unable to find any precedent for this GUI model.
Is there a term for the concept I'm describing?
Are there any existing systems or articles that contain this form of widget specialization?
[Edit]
Here is a code example for the traditional GUI model:
var button = new Widget()
button.width = 100
button.height = 30
button.set_fill(FILL_COLOR)
button.set_border(BORDER_COLOR, BORDER_SIZE)
button.click(()=> submit())
button.text = "Submit"
Here is a code example for the specialized GUI model:
var button = new Box()
button.width = 100
button.height = 30
button.add(new Fill(FILL_COLOR))
button.add(new Border(BORDER_COLOR, BORDER_SIZE))
button.add(new Clickable(()=> respond_to_click()))
button.add(new Text("Submit"))
The first model relies on a large base class that contains many common GUI features, while the second breaks those features into minimal classes. For the specialized approach, the Box class is only concerned about layout, while classes such as Fill and Clickable have no layout logic beyond inheriting the bounds of their parent.
In some ways you could view this as a component-based architecture, which tends to be more popular in video games. The idea here is that separate "components" take different jobs, and usually get attached to something like a root "GameObject". In that sense, "button" might not even properly exist, but rather "control" - but with virtually no top-level logic. Then you simply add components to compose a working object. Component-based architecture is not necessarily GUI specific, but it seems like a close match to the code snippets you are providing.
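To make that wiring a little more concrete, here is a minimal sketch in Swift (all of the names are invented for illustration, not taken from any real framework): the root object owns nothing but its bounds, and simply forwards rendering and input to whatever components are attached to it.

// Hypothetical sketch of a component-based control: layout lives in the
// control, everything else lives in attached components.
struct Bounds { var x = 0.0, y = 0.0, width = 0.0, height = 0.0 }

protocol Component {
    func render(in bounds: Bounds)
    func handleClick(x: Double, y: Double, in bounds: Bounds)
}

// Default no-op implementations, so each component only implements the one job it has.
extension Component {
    func render(in bounds: Bounds) {}
    func handleClick(x: Double, y: Double, in bounds: Bounds) {}
}

final class Control {
    var bounds = Bounds()
    private var components: [Component] = []

    func add(_ component: Component) { components.append(component) }

    // No top-level rendering or input logic; the control just forwards.
    func render() { components.forEach { $0.render(in: bounds) } }
    func click(x: Double, y: Double) {
        components.forEach { $0.handleClick(x: x, y: y, in: bounds) }
    }
}

struct Clickable: Component {
    let action: () -> Void
    func handleClick(x: Double, y: Double, in bounds: Bounds) { action() }
}

// Composing a "button" out of components, roughly mirroring your second snippet:
let button = Control()
button.bounds = Bounds(x: 0, y: 0, width: 100, height: 30)
button.add(Clickable(action: { print("submit") }))
button.click(x: 10, y: 10)   // invokes the Clickable component's action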
The good news is that there's a lot written about component-based architectures, so you'll be able to leverage that to evaluate possible strengths and weaknesses. Good luck!

Implementing Document-based Applications with CoreAnimation Layers

I am working on a diagram drawing application that is based on CoreAnimation framework. I have a common functionality set that includes: creating diagram objects, editing their geometrical properties, moving them around, etc. Each object is represented as a separate CALayer (or a group of layers).
My application is also document-based which means I am following the design imposed by Cocoa for document management.
Here is an example of what the application looks like:
Screenshot: http://guitar.rizo.me/views/main.view/image3.png
Although I do understand the fundamental principles of how things should work, I can't figure out how to make a clear design separation between the model and view implementations.
Is the CALayer class a view class, or can I also treat it as a model (since its properties are the only data the application has)?
What would be the ideal organization for such an application given the document-based architecture?
I can't see any clean way to solve this design problem, what would you recommend?
Thanks in advance.

In a drawing application I want to use the MVC approach, but I have some doubts about the Model's task

I want to develop software using the MVC approach. I am familiar with MVC and how to implement it, especially with database programs, but here is my doubt:
I want to create a graphical application on the iPhone, where I don't really have any choice other than MVC, but implementing MVC 100% strictly is sometimes hard and its rules can easily be violated.
I have put my drawing function (the calculation) inside the View.
I have a controller, as usual, which is responsible for calling the subview (V) and my main class (M).
My main class (M) doesn't do much; it only stores some numbers and variables.
This is where my doubt starts:
Do I need to move the calculation part of the drawing to the model? The calculation currently resides in the view because I need access to some of the view's properties, like height and width, etc.
So I decided to put both the calculation and the drawing inside the view.
Please help me clarify this problem; I want to practice software engineering using MVC, and this is like a self-training exercise.
I see this as a design issue that you can decide yourself. You can either say that the image width/height are part of the picture, in which case all image attributes would be returned as absolute X and Y coordinates; or you can say that the image is 100% scalable, let the view determine the size it is drawn into, and keep the calculation in the view.
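For instance, here is a rough Swift/UIKit sketch of the second option (the class and property names are made up for illustration, not taken from your project): the model keeps its geometry in a unit coordinate space, and the view does the scaling into its own bounds at draw time, so no width or height ever needs to live in the model.

import UIKit

// Model: pure data, no knowledge of any view's size.
struct ShapeModel {
    // Points in a 0...1 unit coordinate space.
    var unitPoints: [CGPoint] = [CGPoint(x: 0.1, y: 0.1),
                                 CGPoint(x: 0.9, y: 0.1),
                                 CGPoint(x: 0.5, y: 0.9)]
}

// View: owns the scaling from unit coordinates into its own bounds.
final class ShapeView: UIView {
    var model = ShapeModel()

    override func draw(_ rect: CGRect) {
        // Scale the model's unit coordinates to this view's current size.
        let scaled = model.unitPoints.map {
            CGPoint(x: $0.x * bounds.width, y: $0.y * bounds.height)
        }
        guard let first = scaled.first else { return }
        let path = UIBezierPath()
        path.move(to: first)
        for point in scaled.dropFirst() { path.addLine(to: point) }
        path.close()
        UIColor.black.setStroke()
        path.stroke()
    }
}

With this split, the calculation that depends on view metrics stays in the view, while the model remains a plain container of numbers, which matches what your model class is already doing.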

Best language for quickly creating user interfaces without drag and drop? [closed]

I'm a blind college student who is taking an introduction to programming class that focuses on user interface design. The class is using Processing, which is completely inaccessible. I'm looking for a language that will allow me to create GUIs without drag and drop and that is hopefully smart enough to do most of the layout without forcing me to specify control positions in pixels.
I know Perl, Java, C/C++, C#, and HTML. I was considering creating HTA applications. My only requirements are that the language must run under MS Windows, and must not use Swing or GTK as the underlying toolkit.
I would say that XAML would be a good choice:
* No pixel manipulation is needed
* Item functionality goes in the code-behind
* You can still add pixel-level adjustments to a control later on
* There is a lot of documentation on how to use it
Maybe if you give us an idea of what you will need the language for we can give you better suggestions.
Speaking as a blind programmer:
C# + WinForms: You can either create the code by hand and use layout managers or calculate the sizes in your head, or if you're using the JAWS screen reader then there are scripts which will help you in the WinForms designer.
C# + WPF: Here you define your UI in XML, but it is more complex to get your head around. Certainly look at this, as it is a very nice solution. The other problem with WPF at the moment is that not all screen readers support this newer technology.
Jamal Mazrui at www.EmpowermentZone.com has created something called "Layout By Code", but I have no experience with this.
HTML+Javascript would be nice, but I doubt it'd be allowed in your course.
wxWidgets: I don't have a lot of experience with this cross-platform, multi-language UI toolkit, but I believe it has layout managers and is thus used by several blind programmers I know.
Finally, I used to design Win32 resource scripts by hand, calculating sizes in my head (no layout managers). This is certainly achievable if you wanted to take this route.
In summary, WPF's nice, but make sure your screen reader works with this kind of app. The next best alternative is probably WinForms. If you like Layout By Code then use it, but if this is a skill you want for employment, then keep that in mind.
Take a look at XAML. I think it could be a good start for both modern Windows and web UI creators.
Tcl/Tk will do exactly what you want. The pack and grid layout managers are based on logical relative placement of the widgets.
Although the "native" language of Tk is Tcl, many other languages have a Tk binding.
label .l -text "this is a label"
button .b -text "quit" -command "exit"
pack .l .b
Check out this project on CodePlex. It may help you (as an alternative to Processing and Java):
http://bling.codeplex.com/
Bling is a C#-based library for easily programming images, animations, interactions, and visualizations on Microsoft's WPF/.NET. Bling is oriented towards design technologists, i.e., designers who sometimes program, to aid in the rapid prototyping of rich UI design ideas. Students, artists, researchers, and hobbyists will also find Bling useful as a tool for quickly expressing ideas or visualizations. Bling's APIs and constructs are optimized for the fast programming of throwaway code as opposed to the careful programming of production code.
Bling has the following features that aid in the rapid prototyping of rich UIs:
* Declarative constraints that maintain dynamic relationships in the UI without the need for complex event handling. For example, button.Width = 100 - slider.Value causes button to shrink as the slider thumb is moved to the right, or grow as it is moved to the left. Constraints have many benefits: they allow rich custom layouts to be expressed with very little code, they are easy to animate, and they support UIs with lots of dynamic behavior.
* Simplified animation with one line of code. For example, button.Left.Animate.Duration(500).To = label.Right will cause button to move to the right of label in 500 milliseconds.
* Pixel shader effects without the need to write HLSL code or boilerplate code! For example, canvas.CustomEffect = (input, uv) => new ColorBl(new Point3DBl(1,1,1) - input[uv].ScRGB, input[uv].ScA); defines and installs a pixel shader on a canvas that inverts the canvas's colors. Pixel shading in Bling takes advantage of your graphics card to create rich, pixel-level effects.
* Support for multi-pass bitmap effects such as diffuse lighting.
* An experimental UI physics engine for integrating physics into user interfaces! The physics supported by Bling is flexible, controllable, and easy to program.
* Support for 2.5D lighting.
* A rich library of geometry routines; e.g., finding where two lines intersect, the base of a triangle, the area of a triangle, or a point on a Bezier curve. These routines are compatible with all of Bling's features; e.g., they can be used to express constraints, pixel shaders, or physical constraints. Bling also provides a rich API for manipulating angles in both degrees and radians.
* And many smaller things; e.g., a frame-based background animation manager and slide presentation system.
* As a lightweight wrapper around WPF, Bling code is completely compatible with conventional WPF code written in C#, XAML, or other .NET languages.
Bling is an open source project created by Sean McDirmid and friends to aid in rapid design prototyping. We used Bling to enhance our productivity and would like to share it with other WPF UI design prototypers.
I'd probably try using C#. It has reasonably friendly interfaces to the Windows common controls and the like, even without making use of drag and drop. Just don't use the designer, and code as normal.
I don't program in Java, but I know that Java provides for programmatic creation of the UI and offers some wonderful layout management components (native to Java, without requiring Swing). I first got exposed to layout managers back in the good old days of X11 with the X toolkits (anybody remember Motif, OpenLook, HP OpenView?), and Java seems to have adopted similar technology.
You can create Windows, Dialogs and Menus all from simple layout managers.
Being sighted myself, and not having worked too closely on anything that has ever been audited for accessibility or heavily used by blind users, I don't think my answer will be terribly thorough. My first instinct, however, is that some kind of dynamic web-server architecture that generates HTML, such as C#, PHP, or ColdFusion, is going to fit your description of handling most of the layout for you without requiring that you specify control positions in pixels. You can certainly specify control positions in pixels via CSS, but it's not required. And I know HTML also has well-defined standards for accessibility, whereas I'm not sure what the status of accessibility standards is for other kinds of software.
You could use JavaScript and HTML. There's a port of Processing to JavaScript, so you know it is powerful enough for the things that your class will cover. You can author HTML without knowing a single thing about what it looks like; in fact, that is the preferred way to author HTML.
The main downside of JavaScript is not JavaScript itself, but the browser DOM, which is the interface for controlling HTML elements. However, a library like jQuery, MooTools, or Dojo can take care of most of those problems.
As for accessibility, have a look at WAI-ARIA, as well as Opera's introduction to WAI-ARIA.
WAI-ARIA is a way to build rich JavaScript applications while playing nicely with screen readers. It's very cool. I've not seen as much work and passion put into accessibility in any other programming stack as in the web stack.

How are the classes in the Sketch example AppKit application separated into Models/Views/Controllers?

I am a little confused as to the classification of classes as Models or Views in the Sketch example AppKit application (found at /Developer/Examples/AppKit/Sketch). The classes SKTRectangle, SKTCircle, etc. are considered Model classes, but they contain drawing code.
I am under the impression that Models should be free of any view/drawing code.
Can someone clarify this?
Thanks.
The developer of Sketch outlines his/her rationale a bit in the ReadMe file:
Model-View-Controller Design
The Model layer of Sketch is mainly the SKTGraphic class and its subclasses. A Sketch Document is made up of a list of SKTGraphics. SKTGraphics are mainly data-bearing classes. Each graphic keeps all the information required to represent whatever kind of graphic it is. The SKTGraphic class defines a set of primitive methods for modifying a graphic and some of the subclasses add new primitives of their own. The SKTGraphic class also defines some extended methods for modifying a graphic which are implemented in terms of the primitives.
The SKTGraphic class defines a set of methods that allow it to draw itself. While this may not strictly seem like it should be part of the model, keep in mind that what we are modelling is a collection of visual objects. Even though a SKTGraphic knows how to render itself within a view, it is not a view itself.
I don't know if that is a satisfying answer for you or not. My personal experience with MVC is that while separation of model, view and controller is a "good thing" oftentimes in practice the lines between layers become blurred. I think compromises of design are frequently made for convenience.
For Sketch in particular, it makes sense to me that the model knows how to draw itself inside a view. The alternative to having each SKTGraphic subclass know how to draw itself would be to have a view class with knowledge of each SKTGraphic subclass and how to render it. In this case, adding a new SKTGraphic subclass would require editing the view class (adding a new clause to an if/else or switch statement, likely). With the current design, an SKTGraphic subclass can be added with no changes required to the view classes to get things working.
I'm not aware of the particular example that you're referring to, but here's my guess on the reason for that design:
Perhaps the Models (the SKTRectangle, SKTCircle that you mention) know enough to draw themselves but not enough to actually perform the drawing to the screen. The drawing to the screen is handled by the View, where the View will call the Models to find out how to draw them on the screen.
By taking this approach, the View won't have to know how to draw every single Model that it may encounter -- the View only needs to know how to ask the Model to draw itself on the screen.
I'm thinking that it's a trade-off between the MVC model and the object-oriented programming model -- strictly separating along MVC lines will mean that the View becomes extremely large and not very flexible when it comes to adding support for other Models that need to be displayed. With object-oriented design, we'd want the Models themselves to be able to draw themselves on the screen, and we'd want the View to be able to handle new types of Models through facilities such as interfaces.
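As a rough illustration of the pattern both answers describe, here is a minimal AppKit-style sketch in Swift (the names are invented; they are not the actual Sketch classes): each model object exposes a drawing primitive, and the view just iterates over its graphics and asks each one to draw itself, so new graphic types require no view changes.

import AppKit

// Model objects carry their data and know how to render themselves.
protocol Graphic {
    var bounds: NSRect { get }
    func draw()
}

struct CircleGraphic: Graphic {
    var bounds: NSRect
    func draw() { NSBezierPath(ovalIn: bounds).stroke() }
}

struct RectangleGraphic: Graphic {
    var bounds: NSRect
    func draw() { NSBezierPath(rect: bounds).stroke() }
}

// The view never switches on concrete graphic types; adding a new
// Graphic conformer requires no change here.
final class CanvasView: NSView {
    var graphics: [Graphic] = []

    override func draw(_ dirtyRect: NSRect) {
        for graphic in graphics where graphic.bounds.intersects(dirtyRect) {
            graphic.draw()
        }
    }
}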
