REST Web API Design for Bulk read [closed] - asp.net-web-api

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
I have to design an API (or a set of APIs) to read bulk data from a SQL Server table, filtered by date and other parameters. I don't want the API to end up with an indefinite list of methods that keeps growing as user needs change.
So how should I design this?

I would consider using WCF Data Services and OData so that your method(s) can accept 'SQL over the wire' requests. This gives you a single URL which can accept filter criteria, e.g.
// All the data from the Products table (enable paging on the server side!)
http://localhost/Products
// Add a WHERE clause
http://localhost/Products?$filter=Category eq 'Toys'
// SELECT a subset of columns
http://localhost/Products?$select=ToyName,ToyPrice
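OData also defines ordering and paging options that can be combined in a single request. For example (illustrative URLs against the same hypothetical Products service):
// ORDER BY, plus paging through the filtered results
http://localhost/Products?$filter=Category eq 'Toys'&$orderby=ToyPrice desc&$top=10&$skip=20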
You could also use the ASP.NET Web API project type and enable OData support but the URL functionality is slightly more limited.
Having said that, I'd think it's unusual to use a web service for bulk data operations, because of the overhead involved in data serialization and the way payloads must be split into packets over HTTP. It depends how bulky your data really is.

Related

Why and when to use an Interface with Linq [closed]

Closed 10 years ago.
Assuming you have three layers (Business, Data and UI), my data layer would have a LINQ to SQL file with all the tables added.
I've seen some examples where an Interface is created in the business layer and then implemented in another class (type is of IQueryable/IEnumerable), yet other classes are using normal Linq syntax to get/save/delete/update data.
Why and when would i use an Interface which has an IQueryable/IEnumerable type?
Two of the most common situations in which you may want to do this are:
you want to protect yourself from changes to that part of your system.
you want to be able to write good unit tests.
For example, suppose you have a business layer that talks directly to LINQ to SQL. In the future you may have a requirement to use NHibernate or Entity Framework instead. Making this change would impact your business layer, which is probably not good.
Instead, if you have programmed to an interface (say IDataRepository), you should be able to swap concrete implementations like LINQtoSQLRepository or HibernateRepository in and out without having to change your business layer - it only cares that it can call, say, Add(), Update(), Get(), Delete() etc., but doesn't care how those operations are actually done.
Programming to interfaces is also very useful for unit testing. You don't want to be running tests against a database server, for a variety of reasons such as speed and reliability. Instead, you can pass in a test double, fake or mock implementation. For example, a class that implements your IDataRepository over some in-memory test data allows you to test Add(), Delete() etc. from your business layer without needing a DB connection.
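The idea can be sketched in a few lines (Ruby here, since there's no single "interface" keyword needed - the interface is just an agreed set of methods; all class names below are made up for illustration):

```ruby
# In-memory repository implementation, usable as a test double -- no database.
class InMemoryProductRepository
  def initialize
    @products = {}
    @next_id = 0
  end

  def add(attrs)
    id = (@next_id += 1)
    @products[id] = attrs.merge(id: id)
    id
  end

  def get(id)
    @products[id]
  end

  def delete(id)
    @products.delete(id)
  end

  def all
    @products.values
  end
end

# The business layer depends only on the repository's methods, not on how
# they are implemented (LINQ to SQL, NHibernate, in-memory, ...).
class PricingService
  def initialize(repository)
    @repository = repository
  end

  def total_value
    @repository.all.sum { |p| p[:price] }
  end
end

repo = InMemoryProductRepository.new
repo.add(name: "Toy car", price: 10)
repo.add(name: "Doll", price: 15)
PricingService.new(repo).total_value # => 25
```

Swapping in a database-backed repository later only requires another class with the same add/get/delete/all methods; PricingService never changes.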
These points are generally good practice in all aspects of your application. I suggest reading up on the Repository pattern, the SOLID principles and maybe even Test Driven Development. This is a large and sometimes complex area, and it's difficult to give a detailed answer of exactly what to do and when, as it needs to suit your scenario.
I hope this helps you get started.

Tools to track my coding progress in Ruby [closed]

Closed 10 years ago.
In a nutshell, I'm looking for tools for tracking my progress in "fleshing-out" a complex system in Ruby.
Usually when I start working on a new system in Ruby, I first write an outline.rb file that contains stub class definitions for all the classes I think I'll want to use. Then I gradually implement the functionality.
Are there any tools out there for quickly surveying my stubs and keeping track of which ones still need to be implemented, and how long each implementation took me, in hours?
I usually track my progress through my tests. For example, if you're doing TDD/BDD, you could use RSpec and create tests that are marked as "pending" - basically tests without a body.
Take this gist for example (https://gist.github.com/4150506)
describe "My API" do
it "should return a list of cities (e.g. New York, Berlin)"
it "should return a list of course categories"
it "should return a list of courses based on a given city"
it "should return a list of courses based on a category and city"
end
In it, I list a few tests that I expect the system to pass once all the implementation details are in place. This allows me to get an overall view of what I'm building without getting too deep too quickly.
Update: The idea is to be able to run the specs at the command line and rspec will tell you which tests are passing, failing or pending.
As for the time tracking part, I just use a timer app (tickspot.com for example). You can always make note of the timestamps on your spec files too to get a sense of when you started modifying the files and when you stopped.
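If the stubs in your outline.rb follow a simple convention, a few lines of Ruby can survey them without any special tooling. A rough sketch, assuming the (hypothetical) convention that every unimplemented method body is exactly `raise NotImplementedError`:

```ruby
# Survey an outline file: list methods whose body is still just a stub.
# The heredoc stands in for reading outline.rb from disk.
outline = <<~RUBY
  class Cargo
    def add_good(good)
      raise NotImplementedError
    end

    def total_weight
      @goods.sum(&:weight)
    end
  end
RUBY

stubs = outline.scan(/def\s+(\w+).*\n\s*raise NotImplementedError/).flatten
puts "Still to implement: #{stubs.join(', ')}" # prints "Still to implement: add_good"
```

Run against the real file with `File.read("outline.rb")`; it won't give you hours spent, but it does give a quick implemented-vs-stubbed count.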
Hope that helps.
My answer is essentially "no".
How do you define "done"? Not "just a stub", or exhibiting complete behavior? How do you define "complete" behavior? And what about methods you didn't stub originally, of which I imagine there would be dozens, if not hundreds?
Time against stubbed methods doesn't strike me as a meaningful statistic; time against functionality is. That should be handled by issue-tracking tickets and commit logs, but those reflect overall elapsed time, not specifically time on task, which is often significantly different.
I don't see how this can be done with any real accuracy over a project of any significant size without very granular issue tracking, time entry, and unit and behavioral tests. Even then, you'd likely need to build out some tools to help with your particular methodology.

Is using an ORM and LINQ architecturally unsound? [closed]

Closed 10 years ago.
I'm currently using Telerik Open Access, which is hateful, but that said: is there not an architectural issue around the use of LINQ and ORMs in general?
It occurs to me that what we are doing is moving the burden of data manipulation from the DBMS, which is optimised to perform that task, to (in my case) a web server, which is not.
Also, at least in Telerik's case we are restricting the flexibility of our coding model. In this project I have to extract and create complex data structures that do not map directly into a CRUD interface.
In Telerik Open Access at least, if I use a stored procedure to create the data and it does not map into a known entity I have to return the data as an object array.
So instead I use the "entities" created by the ORM and manipulate them using LINQ.
The resulting code is ridiculously complex compared to the relatively simple equivalent SQL statement.
I'd be interested in your views specifically around the advocacy of using an ORM and LINQ and whether this is architecturally unsound.
It certainly feels it to me.
I haven't included code samples because the actual code is irrelevant. That said it might be instructive to know that a 10 line T-SQL query (6 of those lines are joins) has turned into 300 lines (including whitespace) of LINQ statements to do the same thing.
If you use Linq2SQL or Linq2Entities, they will actually generate SQL code, and the "burden of data manipulation" will still fall on the DBMS. The Linq code you write will be very much like the equivalent SQL in size.
Using Linq in addition to an ORM isn't architecturally unsound.
You always have some amount of data manipulation on the database side and some amount on the client side; as a developer, it is your job to find the right balance. Obviously, if your ORM obliges you to do such convoluted things as manipulating a jumble of untyped data on the client side and running massive Linq queries over it, there's a problem - either with your ORM or with the way your system was designed.

Is PL SQL really required? [closed]

Closed 11 years ago.
Everything that can be done in PL/SQL can also be done by embedding SQL statements in an application language, say PHP. Why do people still use PL/SQL? Are there any major advantages?
I want to avoid learning a new language and see if PHP can suffice.
PL/SQL is useful when you have the opportunity to process large chunks of data on the database side without having to load all that data into your application.
Let's say you are running complex reports over millions of rows of data. You can implement the logic in PL/SQL and avoid loading all that data into your application and then writing the results back to the DB - this saves bandwidth, memory and time.
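The bandwidth point can be illustrated without a real database. A toy Ruby sketch, where the array stands in for a table and element counts stand in for rows crossing the wire:

```ruby
# Toy illustration (no real database): compare how much data would cross
# the wire if aggregation happens in the application vs. in the database.
rows = Array.new(1_000_000) { |i| { id: i, amount: i % 100 } }

# Application-side: every row is transferred, then aggregated locally.
transferred_app = rows.length # 1,000,000 "rows" on the wire
app_total = rows.sum { |r| r[:amount] }

# Database-side (what a PL/SQL block or a plain SUM() would do):
# only the single-row result is transferred.
db_result = [{ total: rows.sum { |r| r[:amount] } }]
transferred_db = db_result.length # 1 "row" on the wire

app_total == db_result.first[:total] # same answer either way
```

Same result, six orders of magnitude less data moved - which is exactly the saving the answer above describes.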
It's a matter of being the right tool for the right job.
It's up to the developer to decide when is a best time to use PL/SQL.
In addition to performing bulk operations on the DB end, certain IT setups have stringent security measures.
Instead of allowing applications direct access to tables, they control access through PL/SQL stored procedures. This way they know exactly how the data is being accessed, rather than trusting application code maintained by developers, which may be subject to security attacks.
I suppose advantages would include:
Tight integration with the database - Performance.
Security
Reduced network traffic
Pre-compiled (and natively compiled) code
Ability to create table triggers
Integration with SQL (less datatype conversion etc)
In the end, though, every approach and language will have its own advantages and disadvantages. Not learning PL/SQL just because you already know PHP would be a loss to yourself, both personally and possibly career-wise. If you learn PL/SQL then you will understand where it has advantages over PHP and where PHP has advantages over PL/SQL, and you will be in a better position to make that judgement.
Best of luck.

Best practice of data validation in enterprise application [closed]

Closed 10 years ago.
I'm studying an e-Commerce-like web application. In one case study, I'm having trouble with mass data validation. What is the best practice for that in an enterprise application?
Here is one scenario:
For a cargo system, there is a "Cargo" object which contains a list of "Good" objects to be shipped. Each "Good" has a string field named "Category" specifying what kind of "Good" it is, such as "inflammable" or "fragile".
So, there are two points at which validation can take place: when the object is created, or when it is stored in the database. If we only validate at the storage stage, then when validation fails for some "Good", storing the "Cargo" fails too, and the previously stored "Goods" need to be deleted, which is inefficient. If we also validate at the creation stage, there will be duplicated validation logic (a foreign-key check, since the "Category" values are stored in the database, plus a check in the constructor).
If you are saving multiple records to the database, all the updates should be done at once in a single transaction, so you would validate ALL the objects before saving. If there is an issue during the save, you can then roll back the transaction, which rolls back all the database updates (i.e. you don't have to go back and manually delete records).
Ideally you should validate on the server before saving the data; the server-side validation should then propagate its messages back up to the user/UI. Validation on the client/UI is also good, in that it's more responsive and reduces overhead on the rest of the system.
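A sketch of the validate-everything-then-save-in-one-transaction approach, using a hypothetical Good class and a fake in-memory database standing in for a real one (all names are illustrative, not a real API):

```ruby
class Good
  VALID_CATEGORIES = %w[inflammable fragile ordinary].freeze

  attr_reader :name, :category

  def initialize(name, category)
    @name = name
    @category = category
  end

  def valid?
    VALID_CATEGORIES.include?(category)
  end
end

class FakeDatabase
  attr_reader :rows

  def initialize
    @rows = []
  end

  # Stand-in for a real DB transaction: on any error, undo every write.
  def transaction
    snapshot = @rows.dup
    yield
  rescue => e
    @rows = snapshot # rollback -- no manual deletion of partial records
    raise e
  end

  def insert(row)
    @rows << row
  end
end

def save_cargo(db, goods)
  # Validate ALL the objects before touching the database at all.
  invalid = goods.reject(&:valid?)
  raise ArgumentError, "invalid goods: #{invalid.map(&:name).join(', ')}" unless invalid.empty?

  db.transaction { goods.each { |g| db.insert(g) } }
end
```

If any "Good" fails validation, save_cargo raises before a single row is written; if the save itself fails partway, the transaction block restores the previous state, so there is never a half-stored "Cargo" to clean up.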
