At the risk of sounding misinformed, I'm under the impression that this is basically useful for RAD and for quickly sketching out an application.
It feels somewhat Ruby-esque in the sense that it scaffolds pretty much everything you need for a CRUD application. Less work for us, right? And most people are none the wiser.
I'm fairly green in the workplace (I just started working at an actual job as a developer, cubicles and free coffee included), so my opinions might be equally green, but I'd love some comments from more senior people.
Is this somewhere between MVC 2 (basic scaffolding) and Microsoft LightSwitch (wizard-driven development)? Is it worth investing in?
Personally, I like to use Dynamic Data for admin pages: those pages that nobody actually gets to see, but that need to be there, in a usable form, for some admin user. In the past those used to take quite some effort from the dev team to craft together, but with Dynamic Data it's an almost out-of-the-box experience.
I suggest you take a look at Tailspin Travel, which is an MVC 2 application but uses Dynamic Data, integrated into the same UI project, for the admin side.
I was skeptical at first, but now I use Dynamic Data almost as much as I do "standard" ASP.NET sites. Out of the box, it's pretty generic, but it's customizable, and you can include standard ASP.NET pages in it.
At first, I would use it as a separate admin site when I needed a "back door" into the data from a "standard" app. Lately, however, my approach has been to do some more planning and decide which tables I'd like users to access via the Dynamic Data mechanisms and which data I want finer control over. You can scaffold only the tables you want, and this works well for "lookup" tables where you want an end user to be able to add/delete. An example would be our email coupon program, where customers can sign up to receive coupons via email. They can choose their coupon categories: hot foods, beverages, gas, produce, etc. The administrator of the overall coupon program needs to be able to add and remove categories, and Dynamic Data is WONDERFUL for this sort of thing.
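To show what that opt-in scaffolding looks like, here's a minimal sketch of the model registration that goes in Global.asax; CouponsDataContext and CouponCategory are hypothetical names standing in for your own data context and lookup table:

    MetaModel model = new MetaModel();

    // Expose only the tables that are explicitly opted in,
    // not the whole database.
    model.RegisterContext(typeof(CouponsDataContext),
        new ContextConfiguration { ScaffoldAllTables = false });

    routes.Add(new DynamicDataRoute("{table}/{action}.aspx")
    {
        Constraints = new RouteValueDictionary(
            new { action = "List|Details|Edit|Insert" }),
        Model = model
    });

Then a partial class opts in just the lookup table you want end users to manage:

    // Hypothetical lookup table, opted in for scaffolding.
    [ScaffoldTable(true)]
    public partial class CouponCategory { }

With ScaffoldAllTables turned off, everything else in the database stays invisible to the Dynamic Data site.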
Dynamic Data takes care of data validation (a huge plus for security AND usability), maps out relationships (a HUGE time saver), and just "does it right". In the business environment, security and productivity are two very real concerns that most developers handle poorly, and Dynamic Data seems to handle the basics well.
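The validation comes from data annotations. A minimal sketch of the metadata "buddy class" pattern Dynamic Data reads (Coupon and CouponMetadata are made-up names):

    // The generated entity class stays untouched; the attributes
    // live on a companion metadata class.
    [MetadataType(typeof(CouponMetadata))]
    public partial class Coupon { }

    public class CouponMetadata
    {
        [Required]
        [StringLength(50)]
        public object Title { get; set; }

        // Keep the discount between 0 and 100 percent.
        [Range(0.0, 100.0)]
        public object DiscountPercent { get; set; }
    }

The field templates pick these attributes up, so every scaffolded page enforces the same rules.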
So yes, I do think it's worth it. It's very powerful and an excellent tool to have in your toolbox, but one that should be wielded with skill, which takes time and practice. And it should not be the only tool in your toolbox.
One of the best uses I've heard for Dynamic Data is to quickly build up a Django-like admin section for a site. It doesn't have to be "perfect" since it isn't aimed at end users, but it does give you some nice usability quickly and easily.
I know very little about it, but it doesn't sound like something I would consider. Whenever I work on an application we tend to follow some basic architectural guidelines such as layering, reusability, etc. I typically steer away from shortcut tools/frameworks like this one. There are a lot of "neat" tools available in the .NET world that have their place, perhaps in the small-business/internal-app space, but are not a great idea for a well-designed application. For example, embedding SQL in datasource controls that can be bound directly to GridViews, etc.
All,
I'm currently revamping an ancient IVR written in Classic ASP with VXML 2.0. Believe me, it was a mess, largely due to the routing logic being split between the ASP code and the VXML logic, featuring multiple postbacks a la ASP.NET. Not fun to debug.
So we're starting fresh with MVC 3 and Razor, and so far so good. I've succeeded in moving pretty much all the processing logic into the controller and letting most of the VXML just voice a prompt and wait for a DTMF reply.
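To give a rough idea of the shape of that (the names are made up, and the Razor view would just emit a single VoiceXML form with a prompt and a DTMF grammar):

    public class CallFlowController : Controller
    {
        // GET: render VXML that voices a prompt and collects DTMF.
        public ActionResult MainMenu()
        {
            Response.ContentType = "application/voicexml+xml";
            return View();
        }

        // POST: the VXML submits the caller's digits back here,
        // so all routing decisions stay in the controller.
        [HttpPost]
        public ActionResult MainMenu(string digits)
        {
            switch (digits)
            {
                case "1": return RedirectToAction("AccountStatus"); // hypothetical actions
                case "2": return RedirectToAction("Payments");
                default:  return RedirectToAction("MainMenu");      // reprompt
            }
        }
    }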
But, looking at a lot of sample VXML code, it's beginning to look like it might actually be simpler to do basic routing using multiple <form> elements on a page, VXML's built-in DTMF processing, and <goto> transitions. More complex decision-making and database/server access would still call the controller as it does now.
I'm torn between the desire to be strict about where the logic is, versus what might actually be simpler code. My VXML chops are not terribly advanced (I know enough to be dangerous), so I'm soliciting input. Have others used multiple forms on a page? Better or worse?
Thanks
Jim Stanley
Blackboard Connect Inc.
Choosing to use simple VoiceXML and moving the logic server side is a fairly common practice. Pros/Cons below.
Server-side logic
Often difficult to get retry counters to behave the way you want if you are also performing input validation (input can be valid for the grammar, but not for host or other validation logic)
Better programming languages/toolkits for expressing the logic (I'm not a fan of JavaScript, but even if you like it, in VoiceXML you tend to have to create a lot of forms to get the flow control you want).
Usually easier to debug. Step through logical decisions and access to logging tools.
Usually easier to create reusable components that use parameters to alter component behavior.
Client side logic
Usually more scalable. VoiceXML browsers tend to use a large amount of their resources compiling and processing pages. One larger page will typically do better than a variety of smaller pages. However, platforms vary significantly and your size may make this negligible.
Better chance of using static pages. Many platforms have highly optimized caches (more than just fetched data). Like the point above, this may only matter if you have hundreds of ports per device or thousands of ports hitting a server.
Mixing and matching isn't bad until somebody requests some sort of global behavior change. You may be making the change in multiple places. Debugging techniques will also vary so it may complicate your support paths (e.g. looking in browser logs versus server logs to see what happened on a call).
Our current framework uses a mix of server and client. All our logic is in the VoiceXML, and the server is used for state saving and generating recognition components. Unfortunately, with all our logic in the VoiceXML, it's harder to unit test.
Rather than creating a large VoiceXML page that subdialogs to each question with all the routing done client side, we post back to the server after each collection and then work out where to go next. Obviously this has its pros/cons, as Jim pointed out, but the hope is to abstract some of the IVR/call flow away from the VoiceXML and reduce the dependency on skilling up developers in VoiceXML.
I'm looking at redeveloping using MVC3, creating different views based on base IVR functions, which can then be modified based on the hosting VoiceXML platform:
Recognition
Prompts
Transfer
CTI Get/Set
Disconnect
What I'm still working out is how to create reusable components within the MVC: whether to create something we subdialog to that returns the result (similar to how we currently do it), or to redirect to a generic controller and then redirect to the "Completed" action once the controller is done. A sketch of the second option is below.
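As a sketch of that second option (all names invented): the calling flow tells a shared controller where to resume, and the controller redirects back once collection is done.

    public class RecognitionController : Controller
    {
        // Entry point: the calling flow passes in where to resume afterwards.
        public ActionResult Start(string returnController, string returnAction)
        {
            TempData["ReturnController"] = returnController;
            TempData["ReturnAction"] = returnAction;
            return View("CollectDigits"); // the shared VoiceXML collection view
        }

        // The VoiceXML posts the recognition result back here.
        [HttpPost]
        public ActionResult Collected(string result)
        {
            TempData["Result"] = result; // picked up by the "Completed" action
            return RedirectToAction(
                (string)TempData["ReturnAction"],
                (string)TempData["ReturnController"]);
        }
    }

TempData is designed to survive the redirect hop, which keeps the pattern free of extra session plumbing.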
Jim Rush provides a pretty good overview of the pros and cons of server side versus client side logic and is pretty consistent with my discussion on this topic in my blog post "Client-side versus Server-side Development of VoiceXML Applications". I believe the pros of putting the logic on the server far outweigh putting it on the client. The VoiceXML User Group is moving towards removing most of this logic from VoiceXML in version 3.0 and suggesting using a new standard called State Chart XML (SCXML) to handle control of the voice application. I have started an open source project to make it easier to develop VoiceXML applications using ASP.NET MVC 3.0 which can be found on CodePlex and is called VoiceModel. There is an example application in this project which will demonstrate a method for keeping the logic server side, which I believe greatly improves reuse of voice objects.
Recently I had a project in which I had to get some data from a particular software system into a portlet. The software used a database, and I spent a fair bit of time modeling the data I wanted and then creating a web service so that my portlet could grab the information.
Then it suddenly struck me that I was wasting my time. I grabbed BIRT, tossed it into a portlet, and then just wrote some reports that directly grabbed the necessary data from the database. I was done in an afternoon.
I understand that reporting is a one-way street, but this got me thinking. Reporting tools can be very effective for creating reports (duh) from your actual data, but when you do this you bypass your model, which, except in simple cases, is not a direct representation of your data as it exists in the database.
If you're writing a data-intensive application and require the ability to perform non-trivial reporting, do you bypass your application and use something like BIRT or Crystal Reports? How do you manage these tools as part of your overall process? Do you consider the reports you write to be part of your application and treat them as such? A report is a view and a model and a controller (if you will) all in one big mess; how do you deal with, interpret, and plan for that?
Revised question: it's possible and even common that a report will perform some business calculations that in a perfect world you would like to have contained in your application. This can lead to a mismatch of information given back to the user. On the other hand, reporting tools make it so easy to gather and display information that it's hard to take a purist's approach and do everything from within the application. Are there any good techniques for ensuring that the data in your reports matches the data that you might be showing in the regular GUI?
I see reporting as simply another view on the data, not a view/model/controller in one (well, maybe a view and controller in one).
We have our reports (built in SQL Server 2008 Reporting Services) consume a service in our application layer to get data, in keeping with our standard that data access lives in a repository. These functions can do a simple query or handle very complex processing that would be a nightmare in your reporting environment or in a stored procedure. In practice, we find this takes no longer than coding up some one-off stored procedure that will, as your system grows and grows, become a nightmare to maintain.
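A minimal sketch of the idea, with hypothetical names throughout: the report's dataset calls one service method, so the filtering and business rules live in exactly one place.

    public class Order
    {
        public string Region { get; set; }
        public decimal Total { get; set; }
    }

    public class SalesSummaryRow
    {
        public string Region { get; set; }
        public decimal Total { get; set; }
    }

    public interface IOrderRepository
    {
        // Stand-in for the existing repository in the application layer.
        IEnumerable<Order> GetCompletedOrders(DateTime from, DateTime to);
    }

    public class SalesReportService
    {
        private readonly IOrderRepository _orders;

        public SalesReportService(IOrderRepository orders)
        {
            _orders = orders;
        }

        // The report binds to this output instead of owning its own SQL.
        public IEnumerable<SalesSummaryRow> GetSalesSummary(DateTime from, DateTime to)
        {
            return _orders.GetCompletedOrders(from, to)
                          .GroupBy(o => o.Region)
                          .Select(g => new SalesSummaryRow
                          {
                              Region = g.Key,
                              Total = g.Sum(o => o.Total)
                          });
        }
    }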
Treating reporting as simply a one-off or not integrating into your application design is a huge mistake.
Reporting is crucial. Reporting is mostly about sharing values collected in one system with external users, i.e. users not directly using the system (e.g. management wanting sales figures). So reporting is a lot more than just displaying facts and figures; it's central to almost every system that drives a commercial operation.
At least the more advanced reporting systems allow you to enhance them with your own reusable "controls". Even a way back into the application can be implemented, if you just use the correct plugins. Once I wrote a system to send emails out of a report, because the system did not allow for change. It worked, though it was not meant to be used that way ;)
Reports make up a good part of the application, and you gain a lot of freedom if you make reports changeable by your customers. Sometimes you end up with more possibilities than you thought of when you built the system in the first place.
So yes, for me reporting is part of the system.
Reports are part of your app, but because they are generally something users will have stronger ideas about than, say, your data capture UI, I'd sacrifice purity for convenience/speed of delivery and get back to "real" coding... :-)
As soon as you've done a report, users want another one or change the colour or optional grouping or more filtering or... something that takes you away from whizzier stuff... so I don't bust a gut maintaining purity.
This is a fine line indeed. You don't want to spend too much time building reports (which users will want you to change all the time anyway), but you don't want to duplicate logic by putting business logic into your reports! With our reporting products at Data Dynamics, I think we have reached a happy medium between these two tradeoffs.
By using the ObjectDataProvider (see the links below for more info) you can bind the report directly to business objects (plain old objects), so you don't have to bypass your business layer to get data. At the same time, we provide a way to reference and use functions from other libraries in your report. This way, if you already have code that performs some business logic calculations, you can reuse those functions directly within your report. You can see an example of this in the links below too.
Binding to Objects for your Data (see "Object Provider" section): http://www.datadynamics.com/Help/ddReports/ddrconDataSetAndObjectDataSource.html
Adding Custom Code to your reports Walkthrough: http://www.datadynamics.com/Help/ddReports/ddrwlkCustomCode.html
Using Custom Assemblies (referencing shared libraries/dlls from your report): http://www.datadynamics.com/Help/ddReports/ddrconCustomCode.html, and http://www.datadynamics.com/Help/ddReports/ddrtskCreatingAnInstanceMethod.html
Scott Willeke
Data Dynamics / GrapeCity
The way I've always worked with reports is to treat them as part of the code base, stored in source control along with the application. In some contexts, reports are more important than the application, in that management makes business decisions off of report data; having the wrong information can cause them to cancel a product line, cancel a campaign, or fire a salesperson. Obviously, this depends highly on your management and your application.
Keeping your model consistent is a bit trickier. One way to ensure a consistent model between reports and your application is to use stored procedures (or views) to retrieve data, depending on your application's architecture; a sketch of the idea follows.
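As a sketch, assuming a hypothetical dbo.GetMonthlySales procedure that the report is also pointed at, the application pulls its numbers through the exact same procedure:

    using System.Data;
    using System.Data.SqlClient;

    public static class SalesData
    {
        // Both the app's screens and the report call dbo.GetMonthlySales,
        // so the two can never disagree about the numbers.
        public static DataTable GetMonthlySales(string connectionString, int year, int month)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.GetMonthlySales", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@Year", year);
                cmd.Parameters.AddWithValue("@Month", month);

                var table = new DataTable();
                new SqlDataAdapter(cmd).Fill(table); // Fill opens and closes the connection
                return table;
            }
        }
    }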
I am looking to get some outside eyes to do black-box testing on a simple webform-based experiment/game I made. uTest looks very good, but it's aimed at companies with lots of cash, whereas my app is just a small-time academic research project. I want to make sure that my app won't break easily and that it's resistant to basic reverse engineering/manipulation. This is not a mission-critical project, since there are no financial transactions or exchanges of confidential data taking place. However, I might eventually give out small prizes to people based on their scores, and I don't want cheaters to come out ahead.
Any suggestions for affordable black box testing?
You could try Chorizo. It's free for one host. It's primarily aimed at PHP apps, but you can use it to test any kind of website for XSS and similar vulnerabilities. It just requires you to verify that you are the owner; then you set up a proxy and start browsing.
I have an existing application written in Perl. Now I need to integrate this application with OBI. The plan is to have a button the user can click to open OBI in an iframe. OBI resides on a different server from the running application.
Has anyone done this before? What is the best practice for doing this, and how much effort does it involve?
Another question: is it possible to add customizations to the OBI content displayed in the iframe?
There are two ways to address the problem that I know of and tried out. According to your needs, one or the other might be more appropriate (or both, they're not mutually exclusive). In both cases, the documentation is good and readily available.
The "Go URL"
The Go URL is documented more thoroughly in the Oracle Business Intelligence Presentation Services Administration Guide. It provides a quick and easy interface to the reports you've already defined, in the form of a URL. All that's needed to get it running is to fill in a few query parameters directing it to the report you want. You might need to include authentication tokens too; there's a sketch of the URL shape after the pros/cons.
Pros: very easy to try out.
Cons: harder to get security right.
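As a rough illustration of the URL shape (C# purely for illustration, the same string works from Perl; the saw.dll?Go pattern and the Path/NQUser/NQPassword parameters are the ones the Administration Guide documents, while the host, port, and catalog path below are invented):

    // Build the iframe src for a saved OBI report.
    static string BuildGoUrl(string user, string password)
    {
        return "http://obiserver:9704/analytics/saw.dll?Go"
            + "&Path=" + Uri.EscapeDataString("/shared/Sales/Dashboards/Revenue")
            + "&NQUser=" + Uri.EscapeDataString(user)
            + "&NQPassword=" + Uri.EscapeDataString(password); // credentials on the query string
    }

Putting credentials on the query string like this is a big part of why the security is harder to get right.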
The web services
The presentation server comes with a series of web services that enable a more programmatic way of querying your repository. The functionality offered through this channel goes further: for example most catalog management, including report creation and editing is possible. The full list fills a guide of its own: the Oracle Business Intelligence Web Services Guide.
Pros: better integration (i.e., no need for an IFRAME); easier to get the security right.
Cons: harder to set up; lots of XML; more advanced features (e.g. in-place drilldown) need an HTTP bridge that was a bitch to get right in my case. The generated HTML might clash a bit with yours and require cleaning up, notably in the CSS.
Embedding OBIEE reports inside a non-ADF web app is tough. If you have the option to rewrite your web application in ADF, your life will be a lot easier: just drag and drop reports and visualizations into your web application. Oracle's own Fusion Applications follow this approach. If your app is analytics-heavy, it might be a good option to explore. Here's a link to the Oracle doc.
I've been reading through a couple of questions on here and various articles on MVC and can see how it can even be applied to GUI event intensive applications like a paint app.
Can anyone cite a situation where MVC might be a bad thing and its use ill-advised?
EDIT: I'm specifically talking about GUI applications here!
I tried MVC in my network kernel driver. The patch was rejected.
I think you're looking at it kind of backwards. The point is not to see where you can apply a pattern like MVC, the point is to learn the patterns and recognize when the problem you are trying to solve can naturally be solved by applying the pattern. So if your problem space can be naturally divided into model, view and controller then it is a good candidate for MVC. If you can't easily see which parts of your design fall into the three categories, it may not be the appropriate pattern.
MVC makes sense for web applications.
In web applications, you process some data (on SA: writing questions, adding comments, changing user info), you have state (the logged-in user), and you don't have many different pages, but you do have a lot of different content to fit into those pages: one Question page serves a million questions.
For building a CMS, for example, MVC is useless. You don't have any models or controllers, just pages of text with decorations and menus. The problem is no longer processing data; the problem is serving that text content properly.
A CMS admin section would build on top of MVC just fine, though; it's just the user-facing part that wouldn't.
For web services, you'd better use REST which, I believe, is a distinct paradigm.
WebDAV application wouldn't benefit greatly from MVC, either.
The caveat on Ruby for web programming is that Rails is better suited to building web applications. I've seen many projects attempt to create a WebDAV server or a content management system (CMS) with Rails and fail miserably. While you can do a CMS in Rails, there are much more efficient technologies for the task, such as Drupal and Django. In fact, I'd say if you're looking at a Java portal development effort, you should evaluate Drupal and Django for the task instead.
Anything where you want to drop in 3rd party components will make it tough to work in the MVC pattern. A good example of this is a CMS.
Each component you get will have its "own" controller objects, and you won't be able to share "control" of the model-to-UI passing.
I don't necessarily know that MVC is ever really a bad idea for a GUI app. But there are alternatives that are arguably better (and also arguably worse depending on whose opinion you're asking). The most common is MVP. See here for an explanation: Everything You Wanted To Know About MVC and MVP But Were Afraid To Ask.
Although I suppose it might be a bad idea to use MVC if you're using a framework or otherwise interacting with software that wasn't designed with MVC in mind.
In other words, it's a lot like comparing programming languages. There are usually not many tasks for which one can say one language is better than another. It usually boils down to programmer preference, availability of libraries, and the team's experience.
MVC shouldn't be used in applications where performance is critical. I don't know if this still applies with the increase in computing power, but one example is a call center application. If you can save .5 seconds per call when entering and updating information, those savings add up over time. To get the last bit of performance out of your app, you should use a desktop app instead of a web app and have it talk directly to the database.
When is it a bad thing? Wherever there is another code structure that would better fit your project.
There are countless projects where MVC wouldn't "fit", but I don't see how a list of them would be of any benefit.
If MVC fits, use it, if not, use something else..
MVC and ORM are a joke... they are only appropriate when your app is not a database app, or when you want to keep the app database-agnostic. If you're using an RDBMS that supports stored procedures, then that's the only way to go. Stored procs are the preferred approach for experienced application developers. MVC and ORM are only promoted by companies trying to sell products or services related to those technologies (e.g. Microsoft trying to sell VS). Stop wasting your time learning Java and C#; focus instead on what really matters: JavaScript and SQL.