RestKit and Three20

I couldn't find much good documentation on Three20, so my question is: what overlap, if any, is there between RestKit and Three20 in their URL cache and request methods? Does it make sense to use Three20's TTURLRequest with RestKit?
Are they complementary or overlapping?

You can read this tutorial about [RestKit][1]; here is the relevant section:

[1]: http://mobile.tutsplus.com/tutorials/iphone/advanced-restkit-development_iphone-sdk/
Three20 Support

At Two Toasters, the vast majority of our iOS applications are built on top of two frameworks: RestKit and Three20. We have found that Three20 greatly simplifies and streamlines a number of common patterns in our iOS applications (such as support for URL-based dispatch) and provides a rich library of UI components and helpers that make us happier, more productive programmers. And RestKit obviously makes working with data so much more pleasant. So it should come as little surprise that there are integration points available between the two frameworks.

Integration between RestKit and Three20 takes the form of an implementation of the TTModel protocol. TTModel defines an interface for abstract data models to inform the Three20 user interface components of their status and provide them with data. TTModel is the basis for all Three20 table view controllers as well as a number of other components. RestKit ships with an optional libRestKitThree20 target that provides an interface for driving Three20 table views off of a RestKit object loader via the RKRequestTTModel class. RKRequestTTModel allows us to handle all the modeling, parsing, and object mapping with RestKit and then plug our data model directly into Three20 for presentation.

RKRequestTTModel also provides transparent offline support and periodic data refresh in our user interfaces. When you use Core Data to back your data model and utilize RKRequestTTModel in your controllers, RestKit will automatically pull any objects from the cache that live at the resource path you are loading in the event you are offline. RKRequestTTModel can also be configured to hit the network only after a certain amount of time by configuring the refreshRate property.

In addition to RKRequestTTModel, a child class, RKRequestFilterableTTModel, is provided as well. RKRequestFilterableTTModel provides support for sorting and searching a collection of loaded objects and can be useful for providing client-side filtering operations.

Related

How to manage propagation of Laravel / Eloquent Model changes to third party platforms?

I have created a Laravel application that acts as the glue between multiple external platforms, two of which are Shopify and QuickBooks. I have a local model, App\Models\Item, that I would like to use to maintain property synchronization with these external platforms.
For instance, if I update the price and cost of this item locally, I would like to do so in a way that also triggers a change on the external platforms via each API. I imagine there's a solution similar to this that I can't find:
A model has some kind of linkage ID associated with each platform
If the platform is registered for a user and the platform link ID is present, then properties associated with the other platform are pushed as updates via the respective APIs
Failures, timeouts, update status, etc. are managed through this abstraction layer as well
What I'm doing now is initiating the model change via an API call to a controller class. I adjust whatever property changes are requested on the Item model and call each platform's API to make the same changes.
This is ostensibly what I want... but I have a feeling I'm not making the best use of the architecture and am adding unnecessary maintenance headaches. I'm struggling to find what this kind of relationship is called and how it is managed. Are there established patterns for this problem in Laravel, and if so, are any of them widely considered to be "best practice"?

Titanium Alloy MVC Architecture

I don't completely get the architecture of Titanium Alloy. Maybe someone can explain it better or draw me a picture? :)
What I understood is that it is an MVC architecture, but not in the "basic" way... The Model is only a blueprint for the internal SQLite database. The Backbone model can also be extended to check for correct input and duplicates. To sync with external sources, the Controller is used. At least, all the examples I found did that. And the View is basic, with Titanium Style Sheets.
Unfortunately you have a very terse, incomplete understanding of what Alloy is, what it does, and how it does it. Fortunately for you, though, there is extensive and complete documentation that covers all this in guide form. Answers to all these high-level architecture questions and more can be found here: http://docs.appcelerator.com/titanium/latest/#!/guide/Alloy_Framework
Well, Alloy is indeed a framework based on the MVC architecture; maybe what you need is some insight into the design goals of MVC and how they can be achieved using separate roles for each unit of software. Here is a very good article I would recommend: http://blog.codinghorror.com/understanding-model-view-controller/
The fact that you can specify view structures using only XML files and styling using only static properties means that Alloy is a very well implemented MVC framework, as it does not allow you to mix the responsibilities of each role.
My 2 cents of understanding Alloy:
controller.js
Place only code here that handles UI element events such as clicks, taps, and so on. Your controller should pick up an event and call a method belonging to some external common.js module that you require using require(); this is fully supported in Alloy. See the sketch after this list.
view.xml
Here you specify only the tree structure of your UI elements, i.e. which component belongs where and inside which other component.
style.tss
Here you should specify anything that has to do with colors, positioning, layout, etc.
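As a minimal sketch of that split (the common module, its saveUser() helper, and the saveButton/nameField ids are hypothetical names used for illustration, not part of Alloy itself):

```javascript
// app/controllers/index.js -- sketch only; "common", saveUser(), and the
// view ids are invented for illustration.
var common = require('common');

// The controller only picks up the UI event and forwards it;
// the business logic lives in the required CommonJS module.
function onSaveClick(e) {
    common.saveUser($.nameField.value);
}

$.saveButton.addEventListener('click', onSaveClick);
$.index.open();
```

The matching view.xml would only declare the Window, TextField, and Button tree, and the .tss file would only carry their colors and layout, which keeps the three roles cleanly separated.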

Use client-side MVC/MVVM patterns with MVC server-side pattern

Considering the most popular client-side MVC/MVVM patterns (like Knockout.js, Angular.js, Ember.js, and others), I have one big doubt:
Also considering the model redundancy on both sides, what are the advantages and disadvantages of using those client-side patterns together with server-side MVC patterns?
I struggled with how to answer this question... hopefully this helps, even if it is in a round-about way.
While some of the pros/cons have already been stated, I think the best rundown is in this answer.
For me, the biggest advantage to using client-side logic is the rich UI aspect.
But the key part of your question seems to be "model redundancy" (I'd call it duplicated logic, or at least having potential for duplicated logic). In my opinion, that is a problem which may exist independently of the pros/cons in the previous link.
So first of all, I think that the decision of whether or not to use a client-side framework should be made based on the well-documented pros and cons. Once that decision is made, the associated problems can be solved.
Let's assume you are using some sort of server-side framework/platform, as well as a client-side framework to provide a little bit of UI interactivity. Now there is a problem with where to put the model logic: on the client, the server, or both.
One way to solve the problem is to define your model logic in only the client or the server. Then you have no code duplication, but it affects some of the higher-level pros/cons.
For example, if your model logic is 100% server-side, you lose some of the interactive part of the UI. Or, you are constantly throwing the model to/from the server, which will have a few cons.
If your model logic is 100% client-side, you could suffer performance problems, depending on the size of your view / model. This is one of the reasons Twitter is moving to a server-side processing model.
Then there is "both"... having model logic exist in both the client and the server. I think this is the best solution, as long as no logic is duplicated.
For example, on a shopping cart page, you may recalculate the cost of an order based on the price of a product, and a user-editable quantity box. I think this logic should only exist on the client. Other model properties that do not change once loaded are probably fine hosted on the server.
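With a library like Knockout, that client-only recalculation might look like the following sketch (the view-model and property names are invented):

```javascript
// Sketch of the cart example: the price is fixed once loaded from the server,
// the quantity is user-editable, and the line total is recomputed on the client.
function CartLineViewModel(price) {
    var self = this;
    self.price = price;
    self.quantity = ko.observable(1);
    self.total = ko.computed(function () {
        return self.price * self.quantity();
    });
}

ko.applyBindings(new CartLineViewModel(9.99));
```

Nothing about that recalculation needs a round trip; the server only sees the final quantity when the order is submitted.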
There's a lot of gray area here... I struggle with putting all the eggs in one basket. For example, choosing a client-side framework, creating a lot of client-side logic, and then [hypothetically] running into problems with performance, browser support, or something like that. Now you may want to tweak a page or two for performance (like move it server-side, a la Twitter). But I think being smart about how you structure your code will help mitigate that issue. If your code is maintainable and clean, moving logic from client to server won't be difficult.
The advantage is that the client side patterns are applicable at the client where the server has no direct reach. If you're building a rich, interactive HTML UI then use client side MVVM. Server side MVC may still be relevant in that case for delivering appropriate content to the client. For example, the ASP.NET WebAPI is a framework for creating HTTP APIs which has a similar controller architecture to the ASP.NET MVC framework. The API implemented with this framework may be called by client side code resulting in MVC on the server side and MVVM on the client side. Normally, when using MVC server side and MVVM client side, the responsibilities of the respective sides are very different and thus there is no redundancy.
The fact that you can incorporate an MVVM model into an already implemented MVC framework is also a great thing; we recently added Knockout to some new project pages to fit within an already outdated MVC framework (old pages, not the framework itself).
I think MVVM is fantastic; as the above answer states, it provides an exceptional user experience with extremely fast response times. You can hide your validation calls in the background without slowing things down, and it's intuitive.
The pain, however, is that it is VERY hard to unit test, you can get some extremely LARGE JavaScript files, and the extra coding we've had to do because our legacy systems still run on IE6 is ridiculous.
But MVVM and MVC don't have to be used exclusively on their own; we use both. Having three levels of validation is something that still bugs me, though.
Advantages
This can rock.
Disadvantages
You can screw it up.
Seriously. Moving part of the frontend logic into the browser can boost your application development, while you keep stricter data processing encapsulated on the server side.
This is basically layering. Two layers, the one above talks with the one below and vice-versa:
[client] <--> [server]
You normally exchange value objects between the two in a lightweight serialization format like JSON (see the sketch at the end of this answer).
This can map fairly well to what users expect, in a useful structure, while the domain objects on the server side may not be that detailed.
However, the real power comes when the server side is not written in JavaScript, because I think you cannot create good domain objects there. Consider Scala (or something similarly expressive) if you run into that issue.
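As a concrete illustration of that value-object exchange, the client typically just pulls a small JSON object and pushes an updated one back; the endpoint and field names below are made up, and the browser fetch API is used purely for brevity:

```javascript
// Fetch a value object from the server layer, change it, and send it back.
fetch('/api/orders/42')
    .then(function (response) { return response.json(); })
    .then(function (order) {
        // e.g. { id: 42, status: "open", total: 19.98 }
        order.status = 'paid';
        return fetch('/api/orders/42', {
            method: 'PUT',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify(order)
        });
    });
```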
Ten months after asking this question, I have used both patterns inside the same application.
The only problem was the need to map the models twice.
MVC (ASP.NET MVC 4 Web API)
The most important resources were the routes.
Models were created for database interactions and as arguments for controllers' actions.
Controllers were created to handle the API requests and to render the views.
Views were not modeled with server-side models, but used all the resources of partial views and sections.
MVVM (Knockout.js)
Models were created with the same properties as the server-side models.
Views were bound to the models' properties, which greatly decreased the views' size.
View-models were created with the values provided by the API methods (see the sketch below).
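A sketch of that duplicated mapping, with invented names: the Knockout view-model simply mirrors the properties of the server-side model.

```javascript
// The client-side view-model repeats the properties of the server-side model,
// which is the double mapping mentioned above.
function ItemViewModel(data) {
    this.name = ko.observable(data.Name);
    this.price = ko.observable(data.Price);
}

// "data" would normally come from a Web API call such as GET /api/items/1.
ko.applyBindings(new ItemViewModel({ Name: 'Widget', Price: 9.99 }));
```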
Overall, the combination of MVC with MVVM was very useful, but it required real expertise and knowledge. Patience is required too, because you need to think about the responsibilities of each application layer.

Recommendation of a JavaScript (GUI) framework to act as a frontend to a website's backend?

I'd like to tap your experience regarding the use/deployment of a JavaScript-powered framework to implement a GUI frontend for different backend tasks.
The framework would have to provide a manageable way of displaying arbitrary data fetched from a database (data can be provided in any conceivable way: JSON, XML, etc.) and allow the manipulation of that data by means of a clean and RESTful API. Prebuilt widgets (tables/lists/dashboard) and UI interactions (drag'n'drop/sorting) would be nice to have but aren't mandatory.
The requirements are:
Open Source (obligatory)
clean and RESTful API to fetch, display and manipulate data
Ability to extend the functionality through plugins
Standards-compliant (IE does not have to be supported)
Thorough documentation and/or helpful community
I figure that jQuery's UI framework comes very close to the ideal, though it lacks decent support for the general structures needed to master a full-fledged application.
I'm interested of what you guys would recommend.
Thanks in advance.
After years of using several of the available frameworks I now use Yahoo's YUI3 (3 - not the older 2) if I can - for "serious" apps. For HTML page enhancements I'm indifferent and may sometimes prefer jQuery.
http://developer.yahoo.com/yui/3/ (BSD license)
What I like about YUI3 are the very "deep" concepts behind it for serious "enterprise level" software development. Regardless of what framework one uses, EVERYONE seriously developing in JS should have viewed (and understood!) the videos on Yahoo Developer Theater, especially the presentations by Douglas Crockford.
http://developer.yahoo.com/yui/theater/

What's your recommendation for architecting GWT applications? MVC, MVP or custom messaging solution?

I just started a new GWT project for a client and I'm interested in hearing people's experience with various GWT MVC architectures. On a recent project, I used both GXT MVC, as well as a custom messaging solution (based on Appcelerator's MQ). GXT MVC worked OK, but it seemed like overkill for GWT and was hard to make work with browser history. I've heard of PureMVC and GWTiger, but never used them. Our custom MQ solution worked pretty well, but made it difficult to test components with JUnit.
In addition, I've heard that Google Wave (a GWT application) is written using a Model-View-Presenter pattern. A sample MVP application was recently published, but looking at the code, it doesn't seem that intuitive.
If you were building a new GWT application, which architecture would you use? What are the pros and cons of your choice?
Thanks,
Matt
It's worth noting that Google has finally written a tutorial on designing with the MVP architecture. It clarifies a lot of the elements from the Google I/O talk mentioned in another answer here. Take a look: https://developers.google.com/web-toolkit/articles/mvp-architecture
I am glad this question has been asked, because GWT desperately needs a Rails-like way of structuring an application: a simple approach based on best practices that will work for 90% of all use cases and enables super-easy testability.
In the past years I have been using my own implementation of MVP with a very passive view that enslaves itself to whatever the Presenter tells it to do.
My solution consisted of the following:
an interface per widget defining the methods to control the visual appearance
an implementing class that can be a Composite or use an external widget library
a central Presenter for a screen that hosts N views that are made up of M widgets
a central model per screen that holds the data associated with the current visual appearance
generic listener classes like "SourcesAddEvents[CustomerDTO]" (the editor does not like the real symbols for Java generics here, so I used those brackets), because otherwise you will have lots of identical interfaces that differ only by type
The Views get a reference to the presenter as their constructor parameter, so they can initialize their events with the presenter. The presenter handles those events and notifies other widgets/views and/or calls GWT-RPC, which on success puts its result into the model. The model has a typical "Property[List[String]] names = ...." property-change-listener mechanism that is registered with the presenter, so that an update of the model by a GWT-RPC request reaches all views/widgets that are interested. A rough sketch of this flow follows below.
With this approach I have gotten very easy testability with EasyMock for my Async interfaces. I also got the ability to easily exchange the implementation of a view/widget, because all I had to rewrite was the code that notified the presenter of some event - regardless of the underlying widget (Buttons, Links, etc.).
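To show only the shape of that flow (sketched here in plain JavaScript rather than Java, with every name invented, since the original is GWT-specific):

```javascript
// Passive view: exposes only what the presenter needs and holds no logic of its own.
function CustomerView(presenter) {
    this.showNames = function (names) { /* render the list with whatever widget you like */ };
    // The view reports raw UI events to the presenter and does nothing else.
    this.onAddClicked = function (name) { presenter.addCustomer(name); };
}

// Presenter: handles view events, calls the (fake) RPC service, updates the model.
function CustomerPresenter(service, model) {
    this.addCustomer = function (name) {
        service.add(name, function (names) { model.setNames(names); });
    };
}

// Model: holds the data and notifies every registered listener on change.
function CustomerModel() {
    var listeners = [];
    this.addListener = function (fn) { listeners.push(fn); };
    this.setNames = function (names) {
        listeners.forEach(function (fn) { fn(names); });
    };
}

// Wiring: an update triggered through the presenter reaches all interested views.
var model = new CustomerModel();
var service = { add: function (name, done) { done([name]); } };  // stand-in for the RPC call
var presenter = new CustomerPresenter(service, model);
var view = new CustomerView(presenter);
model.addListener(view.showNames);
view.onAddClicked('Alice');
```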
Problems with my approach:
My current implementation makes it hard to synchronize data values between the central models of different screens. Say you have a screen that displays a set of categories and another screen that lets you add/edit those items. Currently it is very hard to propagate those change events across the boundaries of the screens, because the values are cached in those models and it is hard to find out whether some things are dirty (this would have been easy in a traditional web-1.0, HTML-dumb-terminal kind of scenario with server-side declarative caching).
The constructor parameters of the views enable super-easy testing, but without a solid Dependency-Injection framework, one will have some UGLY factory/setup code inside "onModuleLoad()". At the time I started this, I was not aware of Google GIN, so when I refactor my app, I will use that to get rid of this boilerplate. An interesting example here is the "HigherLower" game inside the GIN-Trunk.
I did not get History right the first time, so it is hard to navigate from one part of my app to another. My approach is not aware of History, which is a serious downside.
My Solutions to those problems:
Use GIN to remove the setup boilerplate that is hard to maintain
While moving from Gwt-Ext to GXT, use its MVC framework as an EventBus to attach/detach modular screens, to avoid the caching/synchronization issues
Think of some kind of "Place" abstraction like Ray Ryan described in his talk at I/O 09, which bridges the event gap between GXT-MVC and GWT's History approach
Use MVP for widgets to isolate data access
Summary:
I don't think one can use a single "MVP" approach for an entire app. One definitely needs history for app navigation, an event bus like GXT-MVC to attach/detach screens, and MVP to enable easy testing of data access for widgets.
I therefore propose a layered approach that combines these three elements, since I believe that the "one-event-MVP-system" solution won't work. Navigation, screen attaching, and data access are three separate concerns, and I will refactor my app (move to GXT) in the following months to utilize all three event frameworks, one for each concern separately (best tool for the job). All three elements need not be aware of each other. I do know that my solution only applies to GXT projects.
When writing big GWT apps, I feel like I have to reinvent something like Spring MVC on the client, which really sucks, because it takes a lot of time and brain power to spit out something as elegant as Spring MVC. GWT needs an app framework much more than those tiny little JS optimizations that the compiler guys work so hard on.
Here is a recent Google IO presentation on architecting your GWT application.
Enjoy.
-JP
If you're interested in using the MVP architecture, you might want to take a look at GWTP: http://code.google.com/p/gwt-platform/ . It's an open source MVP framework I'm working on, that supports many nice features of GWT, including code splitting and history management, with a simple annotation-based API. It is quite recent, but is already being used in a number of projects.
You should have a look at GWT Portlets. We developed the GWT Portlets Framework while working on a large HR Portal application and it is now free and open source. From the GWT Portlets website (hosted on Google code):
The programming model is somewhat similar to writing JSR168 portlets for a portal server (Liferay, JBoss Portal etc.). The "portal" is your application built using the GWT Portlets framework as a library. Application functionality is developed as loosely coupled Portlets each with an optional server side DataProvider.
Every Portlet knows how to externalize its state into a serializable PortletFactory subclass (memento / DTO / factory pattern), making important functionality possible:
CRUD operations are handled by a single GWT RPC for all Portlets
The layout of Portlets on a "page" can be represented as a tree of WidgetFactory instances (an interface implemented by PortletFactory)
Trees of WidgetFactory instances can be serialized and marshalled to/from XML on the server, to store GUI layouts (or "pages") in XML page files
Other important features of the framework are listed below:
Pages can be edited in the browser at runtime (by developers and/or users) using the framework layout editor
Portlets are positioned absolutely so can use scrolling regions
Portlets are configurable, indicate when they are busy loading for automatic "loading spinner" display and can be maximized
Themed widgets including a styled dialog box, a CSS-styled button replacement, small tool buttons and an HTML-template-driven menu
GWT Portlets is implemented in Java code and does not wrap any external JavaScript libraries. It does not impose any server-side framework (e.g. Spring or J2EE) but is designed to work well in conjunction with such frameworks.
