As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I'm currently using Telerik Open Access, which is hateful, but that said, is there not an architectural issue around the use of LINQ and ORMs in general?
It occurs to me that we are moving the burden of data manipulation from the DBMS, which is optimised to perform that task, to (in my case) a web server, which is not.
Also, in Telerik's case at least, we are restricting the flexibility of our coding model. In this project I have to extract and create complex data structures that do not map directly onto a CRUD interface.
In Telerik Open Access at least, if I use a stored procedure to create the data and it does not map onto a known entity, I have to return the data as an object array.
So instead I use the "entities" created by the ORM and manipulate them using LINQ.
The resulting code is ridiculously complex compared to the relatively simple equivalent SQL statement.
I'd be interested in your views specifically around the advocacy of using an ORM and LINQ and whether this is architecturally unsound.
It certainly feels it to me.
I haven't included code samples because the actual code is irrelevant. That said it might be instructive to know that a 10 line T-SQL query (6 of those lines are joins) has turned into 300 lines (including whitespace) of LINQ statements to do the same thing.
If you use Linq2SQL or Linq2Entities, they will actually generate SQL, and the "burden of data manipulation" will still fall on the DBMS. The Linq code you write will be comparable to the equivalent SQL in size.
Using Linq in addition to an ORM isn't architecturally unsound.
You always have some amount of data manipulation on the database side and some on the client side. As a developer, it is your job to find the right balance. Obviously, if your ORM obliges you to do such convoluted things as manipulating a jumble of untyped data on the client side and running massive Linq queries over it, there's a problem, either with your ORM or with the way your system was designed.
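To make the trade-off concrete: a query-translating ORM pushes the filter into SQL so it runs inside the DBMS, while naive client-side code pulls every row over the wire and filters in application memory. Here is a minimal sketch in Python with sqlite3 (not Telerik or LINQ, but the same distinction); the table and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, category TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?, ?)",
                 [("Ball", "Toys", 5.0), ("Lamp", "Home", 20.0), ("Kite", "Toys", 12.5)])

# DBMS-side: the filter runs inside the database; only matching rows cross the wire.
toys_db = conn.execute(
    "SELECT name FROM products WHERE category = ?", ("Toys",)).fetchall()

# Client-side: every row is fetched, then filtered in application memory.
all_rows = conn.execute("SELECT name, category FROM products").fetchall()
toys_app = [(name,) for name, category in all_rows if category == "Toys"]

assert toys_db == toys_app  # same result; very different cost at scale
```

With three rows the difference is invisible; with millions it is the whole argument for letting the ORM (or the database) do the filtering.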
Closed 9 years ago.
I have to design an API (or set of APIs) to read bulk data from a SQL Server table, filtered by date and other parameters. I don't want the API to expose N separate methods, because the list is indefinite and will keep growing as user needs change.
So how should I design this?
I would consider using the WCF Data Services and OData so that your method(s) can accept 'SQL over the wire' requests. This gives you a single URL which can accept filter criteria e.g.
// All the data from the Products table (enable paging on the server side!)
http://localhost/Products
// Add a WHERE clause
http://localhost/Products?$filter=Category eq 'Toys'
// SELECT a subset of columns
http://localhost/Products?$select=ToyName,ToyPrice
You could also use the ASP.NET Web API project type and enable OData support but the URL functionality is slightly more limited.
Having said that, I would think it's unusual to use a web service for bulk data operations, because of the overhead involved in data serialization and the way packets must be split up over HTTP. It depends how bulky your data really is.
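If WCF Data Services or OData isn't an option, the same idea (one entry point, caller-supplied criteria, no N-method explosion) can be hand-rolled. A hypothetical sketch in Python, with the function name and schema invented for illustration; filters are whitelisted and bound as parameters so callers cannot inject SQL:

```python
import sqlite3

def query_products(conn, **filters):
    """One method instead of N: build the WHERE clause from whatever
    whitelisted columns the caller supplies."""
    allowed = {"category", "name"}
    clauses, params = [], []
    for column, value in filters.items():
        if column not in allowed:
            raise ValueError(f"unknown filter: {column}")
        clauses.append(f"{column} = ?")   # column name is whitelisted
        params.append(value)              # value is a bound parameter
    sql = "SELECT name, category FROM products"
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, category TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("Ball", "Toys"), ("Lamp", "Home")])

rows = query_products(conn, category="Toys")
```

New filterable columns mean adding a name to the whitelist, not adding a new method, which is the property the question is after.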
Closed 10 years ago.
Assuming you have three layers (Business, Data and UI). My data layer would have a linq to sql file with all the tables added.
I've seen some examples where an interface is created in the business layer and then implemented in another class (typed as IQueryable/IEnumerable), while other classes use normal Linq syntax to get/save/delete/update data.
Why and when would I use an interface with an IQueryable/IEnumerable type?
Two of the most common situations in which you may want to do this are:
you want to protect yourself from changes to that part of your system.
you want to be able to write good unit tests.
For example, suppose your business layer talks directly to LINQ to SQL. In the future you may have a requirement to use NHibernate or Entity Framework instead. Making this change would impact your business layer, which is probably not good.
Instead, if you have programmed to an interface (say IDataRepository), you should be able to swap concrete implementations like LINQtoSQLRepository or HibernateRepository in and out without changing your business layer. It only cares that it can call, say, Add(), Update(), Get(), Delete() etc., not how these operations are actually done.
Programming to interfaces is also very useful for unit testing. You don't want to run tests against a database server, for a variety of reasons such as speed and reliability. Instead, you can pass in a test double (a fake or mock implementation) to test code that depends on your data layer. E.g. a class holding test data that implements your IDataRepository allows you to test Add(), Delete() etc. from your business layer without a DB connection.
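The same shape sketched in Python rather than C# (the interface and class names are illustrative, not from any real framework): the business layer depends only on the abstract repository, so a database-backed implementation and an in-memory fake are interchangeable in tests.

```python
from abc import ABC, abstractmethod

class IDataRepository(ABC):
    """What the business layer sees: operations, not how they are done."""
    @abstractmethod
    def add(self, item): ...
    @abstractmethod
    def get_all(self): ...
    @abstractmethod
    def delete(self, item): ...

class InMemoryRepository(IDataRepository):
    """Test double: no DB connection, fast and deterministic."""
    def __init__(self):
        self._items = []
    def add(self, item):
        self._items.append(item)
    def get_all(self):
        return list(self._items)
    def delete(self, item):
        self._items.remove(item)

def count_customers(repo: IDataRepository) -> int:
    """Business-layer code: works against ANY IDataRepository."""
    return len(repo.get_all())

repo = InMemoryRepository()
repo.add("Alice")
repo.add("Bob")
n = count_customers(repo)  # business logic exercised with no database
```

Swapping in a LINQ to SQL or NHibernate-backed repository later changes nothing in `count_customers`; that is the whole point of the interface.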
These points are generally good practice throughout your application. I suggest reading up on the Repository pattern, the SOLID principles, and maybe even Test-Driven Development. This is a large and sometimes complex area, and it's difficult to give a detailed answer of exactly what to do and when, as it needs to suit your scenario.
I hope this helps you get started.
Closed 11 years ago.
Anything that can be done in PL/SQL can also be done by embedding SQL statements in an application language, say PHP. So why do people still use PL/SQL? Are there any major advantages?
I want to avoid learning a new language, and see whether PHP can suffice.
PL/SQL is useful when you have the opportunity to process large chunks of data on the database side without having to load all that data into your application.
Let's say you are running complex reports on millions of rows. You can implement the logic in PL/SQL and avoid loading all that data into your application and then writing the results back to the DB; that saves bandwidth, memory and time.
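The bandwidth point can be shown even without Oracle. A sketch in Python with sqlite3 standing in for PL/SQL (the table is invented for illustration): the first version computes the aggregate inside the database and returns a single row, while the second ships every row into application memory first.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?)",
                 [(v,) for v in (10.0, 20.0, 30.0)])

# Database-side: one row crosses the boundary, no matter how many rows exist.
(total_db,) = conn.execute("SELECT SUM(amount) FROM sales").fetchone()

# Application-side: every row is loaded before the sum is computed.
total_app = sum(amount for (amount,) in conn.execute("SELECT amount FROM sales"))

assert total_db == total_app  # same answer; very different data transfer
```

With three rows either way is fine; with millions of rows, keeping the computation next to the data is exactly the advantage PL/SQL offers.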
It's a matter of using the right tool for the job, and it's up to the developer to decide when PL/SQL is the best choice.
In addition to performing bulk operations on the DB end, certain IT setups have stringent security requirements.
Instead of allowing applications direct access to tables, they control access through PL/SQL stored procedures. That way they know exactly how the data is being accessed, rather than trusting applications maintained by developers, which may be subject to attack.
I suppose advantages would include:
Tight integration with the database - Performance.
Security
Reduced network traffic
Pre-compiled (and natively compiled) code
Ability to create table triggers
Integration with SQL (less datatype conversion etc)
In the end, though, every approach and language has its own advantages and disadvantages. Not learning PL/SQL just because you already know PHP would be a loss to yourself, both personally and possibly career-wise. If you learn PL/SQL, you will understand where it has advantages over PHP and where PHP has advantages over PL/SQL, and you will be in a better position to make that judgement.
Best of luck.
Closed 10 years ago.
I'm studying an e-commerce-like web application. In one case study, I'm struggling with mass data validation. What is the best practice for that in an enterprise application?
Here is one scenario:
For a cargo system, there is a “Cargo” object which contains a list of “Good” objects to be shipped. Each “Good” has a string field named “Category” specifying what kind of “Good” it is, such as “inflammable” or “fragile”.
So there are two places validation could take place: the creation of the object, or its storage in the database. If we only validate at the storage stage, then when some “Good” fails validation, the “Cargo” storage fails too, and the previously stored “Goods” need to be deleted. This is inefficient. If we also validate at the creation stage, there will be duplicated validation logic (a foreign-key check, since I store those “Categories” in the database, plus a check in the constructor).
If you are saving multiple records to the database, all the updates should be done at once, in a single transaction. So you would validate ALL the objects before saving. If there were an issue during the save, you could then roll back the transaction, which rolls back all the database updates (i.e. you don't have to go back and manually delete records).
Ideally you should validate on the server before saving data, and the server validation should then propagate its messages back up to the user/UI. Validation on the client/UI is also good, in that it's more responsive and reduces the overhead on the rest of the system.
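A minimal sketch of the validate-then-save shape described above, in Python with sqlite3 (the Cargo/Good schema and category list are invented for illustration): every Good is checked before any row is written, and the whole save runs in one transaction, so a failure never leaves partial data to clean up.

```python
import sqlite3

VALID_CATEGORIES = {"inflammable", "fragile", "standard"}

def save_cargo(conn, goods):
    # 1. Validate ALL objects before touching the database.
    for good in goods:
        if good["category"] not in VALID_CATEGORIES:
            raise ValueError(f"bad category: {good['category']}")
    # 2. Save in one transaction: sqlite3's connection context manager
    #    commits on success and rolls back on any exception.
    with conn:
        for good in goods:
            conn.execute("INSERT INTO goods (name, category) VALUES (?, ?)",
                         (good["name"], good["category"]))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE goods (name TEXT, category TEXT)")

save_cargo(conn, [{"name": "Matches", "category": "inflammable"},
                  {"name": "Vase", "category": "fragile"}])

try:
    save_cargo(conn, [{"name": "Box", "category": "standard"},
                      {"name": "Mystery", "category": "unknown"}])
except ValueError:
    pass  # nothing was written: validation ran before any INSERT

stored = conn.execute("SELECT COUNT(*) FROM goods").fetchone()[0]
```

The duplicated-logic worry from the question is reduced to one place: the same category check guards both creation and storage, and the database transaction handles the cleanup the question was doing by hand.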
Closed 10 years ago.
What is the ugliest code that you wrote - not because you didn't know better, but because of limitations of the software, the hardware or the company policy?
Because of unusual choices in database layouts and programming languages, I once built a C program that read in a SQL database structure and generated another C program that'd read that database and back it up into a file, or copy it into a second database that shared more or less the same columns. It was a monster clunky code generator.
Any regular expression. :)
In the late 90s I had to write several web sites in Informix Universal Server web blade (aka Illustra web blade)
For anyone who doesn't know this execrable environment: it forced you to use the most bizarre language I have ever come across. As Joel Spolsky described it:
When it did run, it proved to have the only programming language I've ever seen that wasn't Turing-equivalent, if you can imagine that.
More on it here http://philip.greenspun.com/wtr/illustra-tips.html
And an example of a 'simple' if condition:
cond=$(OR,$(NXST,$email),$(NXST,$name),$(NXST,$subject))
One example of its dire nature was the fact that it had no loops. Of any kind. It was possible to hack looping functionality by creating a query and iterating through its rows, but that is so wrong it makes me feel sick.
edit: I've managed to find a complete code sample. Behold:
<HTML>
<HEAD><TITLE>WINSTART bug</TITLE></HEAD>
<BODY>
<!--- Initialization --->
<?MIVAR NAME=WINSIZE DEFAULT=4>$WINSIZE<?/MIVAR>
<?MIVAR NAME=BEGIN DEFAULT=1>$START<?/MIVAR>
<!--- Definition of Ranges ---->
<?MIVAR NAME=BEGIN>$(IF,$(<,$BEGIN,1),1,$BEGIN)<?/MIVAR>
<?MIVAR NAME=END>$(+,$BEGIN,$WINSIZE)<?/MIVAR>
<!--- Execution --->
<TABLE BORDER>
<?MISQL WINSTART=$BEGIN WINSIZE=$WINSIZE
SQL="select tabname from systables where tabname like 'web%'
order by tabname;">
<TR><TD>$1</TD></TR>
<?/MISQL>
</TABLE>
<BR>
<?MIBLOCK COND="$(>,$BEGIN,1)">
<?MIVAR>
<A HREF=$WEB_HOME?MIval=WINWALK&START=$(-,$BEGIN,$WINSIZE)&WINSIZE=$WINSIZE>
Previous $WINSIZE Rows </A> $(IF,$(<,$MI_ROWCOUNT,$WINSIZE), No More Rows, )
<?/MIVAR>
<?/MIBLOCK>
<?MIBLOCK COND="$(AND,$(>,$END,$WINSIZE),$(>=,$MI_ROWCOUNT,$WINSIZE))">
<?MIVAR>
<A HREF=$WEB_HOME?MIval=WINWALK&START=$END&WINSIZE=$WINSIZE>
Next $WINSIZE Rows </A>
<?/MIVAR>
<?/MIBLOCK>
</BODY>
Once upon a time, I was working for a small programming house with a client who had a legacy COBOL application that they wanted converted to Visual Basic. I was never a fan of VB, but that's not an unreasonable thing to want.
Except that they wanted the interface to be preserved and to function identically to the existing version.
So we were forced to produce a VB app consisting of a single form with a grid of roughly 100 text entry boxes, all of which were completely passive. Except the one in the bottom right, which had a single event handler that was several thousand lines long and processed all the data in all the entry boxes when you exited the field.
I have my pride and do not write extremely ugly code (although the definition of ugly changes with experience). My boss pays me to write code, and he expects it to be good.
Sometimes you have to write hacks. But you always have to claim the right to fix them later, or you will be faced with the consequences.
A program that exchanged information between two applications. Needless to say the data between the two programs was in different format, different use-cases, and even meant different things from one app to the other. There were TONS of special cases and "nice" conversions:
if (InputString == "01")
{ Output.ClientID = Input.Address; }
else if ((InputString == "02") && (Input.Address == null) && (Input.ClientID < 1300))
{ Output.ClientID = Input.ClientID + 1; }
else if (Input.ClientID == 0)
{ Input.ClientID = 2084; }
And on, and on for hundreds of lines.
This was for internal use in a large manufacturing plant... I cried during most of the time I worked there.
I worked for an insurance management company. We processed online insurance applications back in the early 2000s when online quotes and applications were a bit more rare.
The ugliest part of the system was that we had to send the information back to the underwriting company. While we could gather lots of wonderful data, we were forced to write it all out to a PDF based on the physical form somebody could fill out by hand. We would then take a small subset of the data and transmit it to the underwriters along with the filled-out application. The application PDF would go into their document-imaging system and the data would be placed in their ancient fixed-width database. As far as the underwriters were concerned, most of the data existed only on that PDF.
We joked that the underwriters probably printed the PDF forms in order to scan them into the document imaging system. It wouldn't have surprised me if they did.