Assume you have three layers (Business, Data and UI). My data layer would have a LINQ to SQL file with all the tables added.
I've seen some examples where an interface is created in the business layer and then implemented in another class (typed as IQueryable/IEnumerable), yet other classes use normal LINQ syntax to get/save/delete/update data.
Why and when would I use an interface that exposes an IQueryable/IEnumerable type?
Two of the most common situations in which you may want to do this are:
You want to protect yourself from changes to that part of your system.
You want to be able to write good unit tests.
For example, say you have a business layer that talks directly to LINQ to SQL. In the future you may have a requirement to use NHibernate or Entity Framework instead. Making this change would impact your business layer, which is probably not good.
Instead, if you have programmed to an interface (say IDataRepository), you should be able to swap concrete implementations such as LINQtoSQLRepository or HibernateRepository in and out without having to change your business layer - it only cares that it can call, say, Add(), Update(), Get() and Delete(), not how these operations are actually done.
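As a rough C# sketch of that shape (IDataRepository and the method names come from the paragraph above; Customer and MyDataContext stand in for whatever your .dbml generates, so treat them as assumptions):

using System.Linq;

// The contract the business layer depends on (illustrative names).
public interface IDataRepository<T>
{
    void Add(T item);
    void Update(T item);
    T Get(int id);
    void Delete(T item);
}

// One concrete implementation. Swapping to NHibernate or Entity Framework later
// means writing another class against the same interface; the business layer is untouched.
public class LinqToSqlCustomerRepository : IDataRepository<Customer>
{
    private readonly MyDataContext _db;   // assumed to be generated from your .dbml

    public LinqToSqlCustomerRepository(MyDataContext db) { _db = db; }

    public void Add(Customer item)
    {
        _db.Customers.InsertOnSubmit(item);
        _db.SubmitChanges();
    }

    public void Update(Customer item)
    {
        _db.SubmitChanges();   // pending changes on the attached entity are persisted
    }

    public Customer Get(int id)
    {
        return _db.Customers.Single(c => c.CustomerId == id);
    }

    public void Delete(Customer item)
    {
        _db.Customers.DeleteOnSubmit(item);
        _db.SubmitChanges();
    }
}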
Programming to interfaces is also very useful for unit testing. You don't want to be running tests against a database server, for a variety of reasons such as speed and reliability. So you can pass in a test double, fake or mock implementation of the data layer instead. E.g. an in-memory implementation of your IDataRepository holding test data allows you to test Add(), Delete() etc. from your business layer without a DB connection.
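For instance, a minimal in-memory fake against the same hypothetical IDataRepository lets the business layer be exercised without a database; CustomerService and the assertion in the comments below are likewise invented for illustration:

using System.Collections.Generic;

// In-memory fake -- no database, no connection string, runs in milliseconds.
public class FakeCustomerRepository : IDataRepository<Customer>
{
    private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

    public void Add(Customer item)    { _store[item.CustomerId] = item; }
    public void Update(Customer item) { _store[item.CustomerId] = item; }
    public Customer Get(int id)       { return _store[id]; }
    public void Delete(Customer item) { _store.Remove(item.CustomerId); }
}

// In a unit test, the business layer is handed the fake instead of the real repository:
// var repo = new FakeCustomerRepository();
// var service = new CustomerService(repo);                  // your business-layer class
// service.Register(new Customer { CustomerId = 1, Name = "Test" });
// Assert.AreEqual("Test", repo.Get(1).Name);                 // no DB connection needed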
These points are generally good practice in all aspects of your application. I suggest reading up on the Repository pattern, the SOLID principles and maybe even Test-Driven Development. This is a large and sometimes complex area, and it's difficult to give a detailed answer of exactly what to do and when, as it needs to suit your scenario.
I hope this helps you get started.
In a nutshell, I'm looking for tools for tracking my progress in "fleshing-out" a complex system in Ruby.
Usually when I start working on a new system in Ruby, I first write an outline.rb file that contains stub class definitions for all the classes I think I'll want to use. Then I gradually implement the functionality.
Are there any tools out there for quickly surveying my stubs and keeping track of which ones still need to be implemented, and how long each implementation took me, in hours?
I usually track my progress through my tests. For example, if you're doing TDD/BDD, you could use RSpec and create tests that are marked as "pending" - basically tests without a body.
Take this gist for example (https://gist.github.com/4150506):
describe "My API" do
it "should return a list of cities (e.g. New York, Berlin)"
it "should return a list of course categories"
it "should return a list of courses based on a given city"
it "should return a list of courses based on a category and city"
end
In it, I list a few tests that I expect the system to pass once all the implementation details are in place. This allows me to get an overall view of what I'm building without getting too deep too quickly.
Update: The idea is to be able to run the specs at the command line and rspec will tell you which tests are passing, failing or pending.
As for the time tracking part, I just use a timer app (tickspot.com for example). You can always make note of the timestamps on your spec files too to get a sense of when you started modifying the files and when you stopped.
Hope that helps.
My answer is essentially "no".
How do you define "done"? Not "just a stub", or exhibiting complete behavior? How do you define "complete" behavior? And what about methods you didn't stub originally, of which I can only imagine there would be dozens, if not hundreds?
Time against stubbed methods doesn't strike me as a meaningful statistic; time against functionality does. This should be handled by issue-tracking tickets and commit logs, but that will reflect overall elapsed time, not specifically time on task, which is often significantly different.
I don't see how this can be done with any real accuracy over a project of any significant size without very granular issue tracking, time entry, and unit and behavioral tests. Even then, you'd likely need to build out some tools to help with your particular methodology.
I'm currently using Telerik Open Access, which is hateful, but that said, is there not an architectural issue around the use of LINQ and ORMs in general?
It occurs to me that what we are doing is moving the burden of data manipulation from the DBMS, which is optimised to perform that task, to (in my case) a web server, which is not.
Also, at least in Telerik's case, we are restricting the flexibility of our coding model. In this project I have to extract and create complex data structures that do not map directly onto a CRUD interface.
In Telerik Open Access at least, if I use a stored procedure to create the data and it does not map onto a known entity, I have to return the data as an object array.
So instead I use the "entities" created by the ORM and manipulate them using LINQ.
The resulting code is ridiculously complex compared to the relatively simple equivalent SQL statement.
I'd be interested in your views specifically around the advocacy of using an ORM and LINQ and whether this is architecturally unsound.
It certainly feels it to me.
I haven't included code samples because the actual code is irrelevant. That said, it might be instructive to know that a 10-line T-SQL query (6 of those lines are joins) has turned into 300 lines (including whitespace) of LINQ statements to do the same thing.
If you use Linq2SQL or Linq2Entities, they will actually generate SQL, so the "burden of data manipulation" will still be on the DBMS. The LINQ code you write will be very much like the SQL code in size.
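For example, a query like the sketch below (db, Orders, Customers and the column names are invented; this assumes a LINQ to SQL or Entity Framework context) is turned into a single parameterized SELECT, so the join and the filter still run on the database server rather than on the web server:

// C# query the provider translates into SQL and sends to the database:
var overdue = from o in db.Orders
              join c in db.Customers on o.CustomerId equals c.CustomerId
              where o.DueDate < DateTime.Today
              select new { c.Name, o.OrderId, o.DueDate };

// Roughly the SQL that gets executed on the server (exact shape varies by provider):
// SELECT [c].[Name], [o].[OrderId], [o].[DueDate]
// FROM [Orders] AS [o]
// INNER JOIN [Customers] AS [c] ON [o].[CustomerId] = [c].[CustomerId]
// WHERE [o].[DueDate] < @p0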
Using Linq in addition to an ORM isn't architecturally unsound.
You always have some amount of data manipulation on the database side and some amount on the client side. As a developer, it is up to you to find the right balance. Obviously, if your ORM obliges you to do such convoluted things as manipulating a jumble of untyped data on the client side and running massive LINQ queries over it, there's a problem - either with your ORM or with the way your system was designed.
Whatever can be done in PL/SQL can also be done by embedding SQL statements in an application language, say PHP. Why do people still use PL/SQL? Are there any major advantages?
I want to avoid learning a new language and see if PHP can suffice.
PL/SQL is useful when you have the opportunity to process large chunks of data on the database side without having to load all that data into your application.
Let's say you are running complex reports on millions of rows of data. You can simply implement the logic in PL/SQL and not have to load all that data into your application and then write the results back to the DB - this saves bandwidth, memory and time.
It's a matter of being the right tool for the right job.
It's up to the developer to decide when it is best to use PL/SQL.
In addition to performing bulk operations on the DB end, PL/SQL matters in IT setups with stringent security measures.
Instead of allowing applications to have direct access to tables, such shops control access through PL/SQL stored procedures. This way they know exactly how the data is being accessed, rather than relying on applications maintained by developers, which may be subject to security attacks.
I suppose advantages would include:
Tight integration with the database - Performance.
Security
Reduced network traffic
Pre-compiled (and natively compiled) code
Ability to create table triggers
Integration with SQL (less datatype conversion etc)
In the end, though, every approach and language will have its own advantages and disadvantages. Not learning PL/SQL just because you already know PHP would be a loss to yourself, both personally and possibly career-wise. If you learn PL/SQL, you will understand where it has advantages over PHP and where PHP has advantages over PL/SQL, and you will be in a better position to make that judgement.
Best of luck.
I'm studying e-commerce-like web applications. In one case study, I'm having trouble with mass data validation. What is the best practice for this in an enterprise application?
Here is one scenario:
Consider a cargo system. There is a "Cargo" object, which contains a list of "Good" objects to be shipped. Each "Good" has a string field named "Category" specifying what kind of "Good" it is, such as "inflammable" or "fragile".
So, there are two places where validation could take place: on creation of the object, or on storage of the object in the database. If we only validate at the storage stage, then when validation of some "Good" fails, storage of the "Cargo" fails too, and the previously stored "Goods" need to be deleted. This is inefficient. If we also validate at the creation stage, there will be duplicated validation logic (a foreign-key check, since I store those "Category" values in the database, plus a check in the constructor).
If you are saving multiple records to the database, all the updates should be done at once in a single transaction, so you would validate ALL the objects before saving. If there were an issue during the save, you could then roll back the transaction, which rolls back all the database updates (i.e. you don't have to go back and manually delete records).
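A rough C# sketch of that approach, using the Cargo/Good/Category names from the question (the store interface and the category check are invented stand-ins; TransactionScope is one way to get the single-transaction behaviour in .NET):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Transactions;

// Minimal stand-ins for the domain objects described in the question.
public class Good  { public string Category { get; set; } }
public class Cargo { public List<Good> Goods { get; } = new List<Good>(); }

// Whatever your data layer exposes; only Add() matters for this sketch.
public interface ICargoStore
{
    void Add(object entity);
}

public class CargoShippingService
{
    private readonly ICargoStore _store;

    public CargoShippingService(ICargoStore store) { _store = store; }

    public void SaveCargo(Cargo cargo)
    {
        // 1. Validate ALL the goods up front -- nothing has touched the database yet.
        if (cargo.Goods.Any(g => !IsKnownCategory(g.Category)))
            throw new ArgumentException("Cargo contains a Good with an unknown Category.");

        // 2. Save the cargo and all its goods in one transaction; any failure
        //    rolls back every insert, so nothing needs manual clean-up afterwards.
        using (var tx = new TransactionScope())
        {
            _store.Add(cargo);
            foreach (var good in cargo.Goods)
                _store.Add(good);
            tx.Complete();   // only reached if every insert succeeded
        }
    }

    private static bool IsKnownCategory(string category)
    {
        // Stand-in for the real foreign-key / Category-table check.
        return category == "inflammable" || category == "fragile";
    }
}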
Ideally you should validate on the server before saving data; the server-side validation should then propagate the validation messages back up to the user/UI. Validation on the client/UI is also good in that it's more responsive and reduces the overhead on the rest of the system.
I'm sure you've all seen them. Line of Business UIs that have logic such as: "When ComboA is selected, query for values based on that selection, and populate Textbox B", or "When ButtonC is pressed, disable Textboxes C and D", and on and on ... it gets particularly bad when you can have multiple permutations of the logic above.
If presented with the chance to redesign one of these lovely screens, how would you approach it? Would you put a wizard in front of the UI? Would you keep the single-screen paradigm but use some other pattern to make the logic of the UI state maintainable? What process would you use to determine how this would ideally be presented and implemented?
Not that I think this should matter for the responses, but I am currently presented with just this "opportunity", and it's an ASP.NET web page that uses JavaScript to respond to the user's choices, disable controls, and make AJAX calls for additional data.
Something you might want to look at is whether some of those dependencies imply that these elements, while looking similar and serving the same functions, should be split out over multiple pages that are similar but actually different. Someone may have grouped them onto one page because there were enough similarities.
If you can, try looking at the problem as if it were not implemented at all: how would you structure the user interface if you had to implement it now? If that is too radically different and existing users would have major problems, you might have to compromise. But, as Elie said, look at it from the user's view. They are the ones who have to work with your product.
I would model the state of the whole UI in one object. That object should keep track of the state each UI element should be in, including a combo box's list of options (and which option is selected, of course).
That means that, with one state object, you can correctly re-draw the whole screen and not end up with broken states in the UI. Of course, refreshing all the components each time anything changes is not the way to go, so I would refresh them via callbacks from each of the setters on the state object. This would also allow you to have two UIs over the same state if you ever want that.
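A minimal C# sketch of that idea, using the controls named in the question (the class, the lookup and the event wiring are invented; the real version would call your data layer or an AJAX endpoint, and the UI would subscribe to StateChanged and re-draw the affected controls):

using System;

// One object owns the screen's state; the UI layer only subscribes and re-draws.
public class ScreenState
{
    public event Action StateChanged;

    private string _comboASelection;

    public string ComboASelection
    {
        get { return _comboASelection; }
        set
        {
            _comboASelection = value;
            TextboxBValue = LookupValueFor(value);   // "when ComboA changes, populate Textbox B"
            OnStateChanged();
        }
    }

    public string TextboxBValue { get; private set; }

    public bool TextboxesCAndDEnabled { get; private set; } = true;

    public void PressButtonC()
    {
        TextboxesCAndDEnabled = false;               // "when ButtonC is pressed, disable C and D"
        OnStateChanged();
    }

    private void OnStateChanged()
    {
        var handler = StateChanged;
        if (handler != null) handler();
    }

    private static string LookupValueFor(string selection)
    {
        return "value for " + selection;             // stand-in for the real query / AJAX call
    }
}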
Start with the KISS principle, and work from there. Don't over-engineer the solution, and try to think about the problem from the user's POV. Your first impression of what would make a good layout is probably close to what you should be building, as a good UI is intuitive.
That being said - single screen versus multiple screens, JavaScript or AJAX - it doesn't really matter. If it looks good, is easy to understand and, behind the scenes, is well commented and written with clear code, it works. It should be maintainable, so aim for modular code blocks with clear functionality.
I think what is most important is the user experience and to a lesser extent the maintainability of the code. On the web I try to minimize the roundtrips as much as possible so I'm not sure that I would take a wizard approach since that would lead the user through multiple pages or require replacing nearly the entire page via AJAX (which just seems wrong). I typically work with my customers to capture the functionality that they require, though I aim at the functionality, not the implementation. I might mock up a few examples to show them alternatives or just hand draw them on a whiteboard to give them ideas. I don't mind doing "hard" or "complex" things in the app if the result is a much improved user interface. Of course, I make it as simple as I can and definitely use good practices, even in Javascript.