Default Sort Column with Linq to SQL - linq

I am in the process of building myself a simple LINQ to SQL repository pattern.
What I wanted to know is: is it possible to set a default sort column so I don't have to call OrderBy?
From what I have read I don't think it is, and if that's the case, what would you recommend as a solution to this problem?
Would the best idea be to use an attribute on a partial class on my model?

By default, rows typically come back in the order of the clustered index on the table you are pulling from (though without an ORDER BY that order is not guaranteed).
What are you wanting to sort on (without explicitly calling OrderBy)?

If you needed something other than having it sorted by the primary key, you could look at supplying a select statement for the table instead of using the runtime generated statement. Look at the properties on the table in the designer -- you should be able to override the runtime generated select, delete, and update statements. I don't personally recommend this, though, since I'm not sure how it will interact with other orderings. I think the intent is more along the lines of allowing you to use stored procedures if you want.
Another alternative would be to create a table-valued function or stored procedure that does the ordering the way you want and has the same schema as the table. If, in the designer, you drag this onto the table, you get a strongly typed method on the data context that you can use to obtain those entities according to the definition of the function/procedure instead of the standard select. Personally I think this introduces fewer maintenance headaches because it makes it more visible, but you do have to remember to use the method instead of the Table property for that entity.
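If all you really need is for repository consumers to get a sensible default order, a simpler (if less declarative) workaround than the options above is to apply the ordering inside the repository methods themselves. A rough sketch, with the context and entity names as placeholders:

using System.Linq;

public class ProductRepository
{
    private readonly MyDataContext _db;   // placeholder LINQ to SQL DataContext

    public ProductRepository(MyDataContext db)
    {
        _db = db;
    }

    // Every caller gets the default sort; the returned IQueryable can still be
    // re-ordered afterwards if a different ordering is needed.
    public IQueryable<Product> All()
    {
        return _db.Products.OrderBy(p => p.Name);
    }
}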


Is it possible to create a table of views in oracle

It might be a stupid question, but since Oracle allows you to create tables of different object types, I thought maybe it is possible to create a table of views.
I need to generate many, many views (which will be pivoting some table). Each view will have different column names.
I do not want to pollute the namespace with hundreds or thousands of views named
schema.xls_import_id_1234_sheet_0.
Is it possible to create a table of views? And then query them with something like
select * from table(
select a_view from xls_sheet_views where xls_id = 1234 and sheet_no = 0)
Or maybe there is a way to store just a query as a varchar2, and some method to execute it automatically?
No, you cannot create a table of views.
Depending on the precise business problem you are trying to solve, potentially you could implement the logic in pipelined table functions in a package rather than having thousands of views. If you have so many views because you are creating a separate object for every combination of attributes that you might pivot by, it may make sense to use a pipelined table function that accepts some parameters rather than having hundreds of views. Or it might make sense to have a few procedures in a package that return a SYS_REFCURSOR.
In general, if you want to use a pipelined table function, you would want to know the structure of your result. You can get tricky, though, by making use of polymorphism in Oracle object types. You can declare a single object type, derive a number of subtypes and then return instances of the subtypes from a pipelined table function defined on the parent type. Adrian Billington has an excellent example of this sort of flexible pipelined table function. You can get even crazier, though, with the Data Cartridge framework and develop a pipelined table function that returns an arbitrarily structured result. Now, just because you can do something like this, I certainly wouldn't advocate actually doing it without a lot of careful consideration. The need to have this level of dynamic code for something as common as pivoting some data would make me strongly suspect that you need to take a step back and look at the architecture of the system.

HBase Schema Nested Entity

Does anyone have an example of how to create an HBase table with a nested entity?
Example
UserName (string)
SSN (string)
+ Books (collection)
The books collection would look like this for example
Books
isbn
title
etc...
I cannot find a single example of how to create a table like this. I see many people talk about it, and how it is a best practice in certain scenarios, but I cannot find an example of how to do it anywhere.
Thanks...
Nested entities aren't an official feature of HBase; it's just a way some people talk about one usage pattern. In this pattern, you use the fact that "columns" in HBase are really just a big map (a bunch of key/value pairs) to model a variable number of nested records inside the row, adding one column per "row" of the nested entity.
Schema-wise, you don't need to do much on the table itself; when you create a table in HBase, you just specify the name & column family (and associated properties), like so (in hbase shell):
hbase:001:0> create 'UsersWithBooks', 'cf1'
Then, it's up to you what you put in it, column wise. You could insert values like:
hbase:002:0> put 'UsersWithBooks', 'userid1234', 'cf1:username', 'my username'
hbase:003:0> put 'UsersWithBooks', 'userid1234', 'cf1:ssn', 'my ssn'
hbase:004:0> put 'UsersWithBooks', 'userid1234', 'cf1:book_id_12345', '<isbn>12345</isbn><title>mary had a little lamb</title>'
hbase:005:0> put 'UsersWithBooks', 'userid1234', 'cf1:book_id_67890', '<isbn>67890</isbn><title>the importance of being earnest</title>'
The column names are totally up to you, and there's no hard limit to how many you can have (within reason: see the HBase Reference Guide for more on this). Of course, doing this, you have to do your own legwork for putting in and getting out values (and you'd probably do it with the Java client in a more sophisticated way than I'm doing with these shell commands; they're just for explanatory purposes). And while you can efficiently scan just a portion of the columns in a table by key (using a column pagination filter), you can't do much with the contents of the cells other than pull them out and parse them elsewhere.
Why would you do this? Probably only if you wanted atomicity around all the nested rows for one parent row. It's not very common; your best bet is probably to start by modeling them as separate tables, and only move to this approach if you really understand the tradeoffs.
There are some limitations to this. First, the technique only works one level deep: your nested entities can't themselves have nested entities. You can still have multiple different nested child entities in a single parent; the column qualifier serves as their identifying attribute. Second, it's not as efficient to access an individual value stored as a nested column qualifier inside a row as it is to access a row in another table.
Still, there are compelling cases where this kind of schema design is appropriate. If the only way you get at the child entities is via the parent entity, and you'd like to have transactional protection around all children of a parent, this can be the right way to go.

EAV - Get value using Linq to entities

In a data model like this (http://alanstorm.com/2009/img/magento-book/eav.png) I want to get the value from an EAV_Attribute using Linq to SQL.
Assuming that an EAV_Attribute only exists in one inherited table (varchar, decimal, int, etc.) how can I get it in a linq query?
I know that I can use inheritance for this, but I want to execute it on the SQL database side...
Is it possible to do a kind of Coalesce in Linq, considering that the elements have different types?
EAV and LINQ are not a happy marriage. I think your best shot is to create an unmapped property on eav_attribute that resolves the value (as object) from its typed attribute child. With Entity Framework, you won't be able to use this property in an expression (i.e. not in a Where or Select); you must convert to IEnumerable first to access it. (LINQ to SQL may allow it, because it can switch to LINQ to Objects under the hood.)
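A minimal sketch of such an unmapped resolver property (the entity and navigation property names here are assumptions; adjust them to your model):

using System.ComponentModel.DataAnnotations.Schema;

public partial class EavAttribute
{
    // Hypothetical navigation properties to the typed child tables;
    // at most one of them should have a row for any given attribute.
    public EavAttributeInt IntValue { get; set; }
    public EavAttributeDecimal DecimalValue { get; set; }
    public EavAttributeVarchar VarcharValue { get; set; }

    // Not mapped to a column. Only usable after materialization
    // (e.g. after AsEnumerable()/ToList()), not inside Where/Select.
    [NotMapped]
    public object Value
    {
        get
        {
            return (object)IntValue?.Value
                ?? (object)DecimalValue?.Value
                ?? VarcharValue?.Value;
        }
    }
}

public class EavAttributeInt     { public int Id { get; set; } public int Value { get; set; } }
public class EavAttributeDecimal { public int Id { get; set; } public decimal Value { get; set; } }
public class EavAttributeVarchar { public int Id { get; set; } public string Value { get; set; } }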
Another option is to create a computed column of type sql_variant that does the same, but now in T-SQL code. But... EF does not support sql_variant, so you've got to use some trickery to read it.
That's the reading part.
For setting/modifying/deleting values I don't see any shortcuts. You just have to handle the objects as any object graph with parents and children. In SQL Server you can't use cascading deletes, because they can only be defined for one foreign key. (This may tackle that, but I never tried it.)
So, not really good news, I'm afraid. Maybe good to know that in one project I also work with a database that has an inevitable EAV part. We do it with EF too, but it's not without friction.
First of all, I recommend using TPH and not TPT for EAV tables. (One table with multiple nullable value columns (one per type) + discriminator vs. one table per type.)
Either way, if you modelled the value entity as an abstract class (containing the two IDs) with an inheriting entity per value data type that adds the value property, then your LINQ should look like this:
var valueEntity = context.ProductAttributes
    .Where(pa => pa.ProductId == selectedProductId
              && pa.AttributeTypeId == selectedAttributeTypeId)
    .SingleOrDefault() as ProductAttributeOfDouble;

if (valueEntity != null)
    return valueEntity.Value;

return null;
Where the entity types are: Product, AttributeType, ProductAttribute, ProductAttributeOfDouble, ... ProductAttributeOfString.
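For reference, a minimal sketch of the TPH setup recommended above (EF6 fluent API; the discriminator column name and the entity members shown are assumptions):

using System.Data.Entity;

public abstract class ProductAttribute
{
    public int ProductId { get; set; }
    public int AttributeTypeId { get; set; }
}

public class ProductAttributeOfDouble : ProductAttribute { public double Value { get; set; } }
public class ProductAttributeOfString : ProductAttribute { public string Value { get; set; } }

public class MyContext : DbContext
{
    public DbSet<ProductAttribute> ProductAttributes { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // One table for all subtypes, distinguished by a "ValueType" discriminator column.
        modelBuilder.Entity<ProductAttribute>()
            .HasKey(pa => new { pa.ProductId, pa.AttributeTypeId })
            .Map<ProductAttributeOfDouble>(m => m.Requires("ValueType").HasValue("double"))
            .Map<ProductAttributeOfString>(m => m.Requires("ValueType").HasValue("string"));
    }
}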

EntityFramework code-first, run a database update script after DropCreate

I'm trying to find some nice workarounds for the issues of computed columns in code first. Specifically, I have a number of CreatedAt datetime columns that need to be set to getdate().
I've looked at doing this via the POCO constructors, but to do that I must remove the Computed option (or it won't persist the data); however, there is no easy way to ensure the column is only set when inserting a record, so this would overwrite CreatedAt on every update.
I'm looking to create an alter script that can be called after the DropCreate that would go through and alter various columns to include the default value of getdate().
Is there an event to hook into, something like OnDropCreateCompleted, where I could then run additional SQL?
What would be the best way to handle the alter script? I am thinking of just sending raw SQL to the server to run.
Is there another way to handle the getdate() issue that might be more graceful and more in line with code first that I'm missing?
Thanks
You can just make a custom initializer derived from your desired one and override the Seed method, where you can execute any SQL you want. A rough sketch of creating such an initializer (EF6; the context, table, and column names below are placeholders):
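using System.Data.Entity;

public class AddDefaultsInitializer : DropCreateDatabaseIfModelChanges<MyContext>
{
    protected override void Seed(MyContext context)
    {
        // Runs after the database has been (re)created: add the GETDATE() default.
        context.Database.ExecuteSqlCommand(
            "ALTER TABLE dbo.Orders ADD CONSTRAINT DF_Orders_CreatedAt " +
            "DEFAULT GETDATE() FOR CreatedAt");

        base.Seed(context);
    }
}

// Registered once at application startup:
// Database.SetInitializer(new AddDefaultsInitializer());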
If you are using migrations, you can instead add the custom SQL to the Up method, for example:
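using System.Data.Entity.Migrations;

public partial class AddCreatedAtDefault : DbMigration
{
    public override void Up()
    {
        // Same placeholder table/column/constraint names as above.
        Sql("ALTER TABLE dbo.Orders ADD CONSTRAINT DF_Orders_CreatedAt DEFAULT GETDATE() FOR CreatedAt");
    }

    public override void Down()
    {
        Sql("ALTER TABLE dbo.Orders DROP CONSTRAINT DF_Orders_CreatedAt");
    }
}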

Data dictionaries and functionality behind Code Road Map

I was looking at the Code Road Map feature that Toad provides, which shows dependencies between objects.
Can anyone tell me on what basis Toad generates the dependencies? I am assuming there is a data dictionary view, dba_dependencies, which works at the back end for getting this relation.
So can we write a script to which we pass an object name (a package name, table name, etc.) and which will show the dependencies of that object?
In Code Road Map there is an option to generate data for a table... how does this work?
What is the algorithm behind it? If there is a foreign key on the child table and the parent table is empty, how does this work? How will it populate the referenced (parent) table first and then the child table?
Looking at the user_dependencies / dba_dependencies view structure, querying the view with the REFERENCED_NAME column equal to the object you are interested in should give you a list of objects that reference it.
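As a rough sketch of such a script (here as a small C# helper using the ODP.NET managed driver; the connection string and output formatting are placeholders):

using System;
using Oracle.ManagedDataAccess.Client;   // ODP.NET managed driver (assumed)

static class DependencyLister
{
    // Lists everything that references the given object, via dba_dependencies.
    // Use user_dependencies instead if you don't have access to the dba_* views.
    public static void PrintDependents(string connectionString, string objectName)
    {
        const string sql = @"SELECT owner, name, type, dependency_type
                               FROM dba_dependencies
                              WHERE referenced_name = :obj
                              ORDER BY owner, name";

        using (var conn = new OracleConnection(connectionString))
        using (var cmd = new OracleCommand(sql, conn))
        {
            cmd.BindByName = true;
            cmd.Parameters.Add(new OracleParameter("obj", objectName.ToUpperInvariant()));
            conn.Open();

            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}.{1} ({2}, {3})",
                        reader["owner"], reader["name"], reader["type"], reader["dependency_type"]);
                }
            }
        }
    }
}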
The second question is too broad, and probably only the Toad developers know how they've implemented it. The data dictionary provides information about the various constraints on a table. My guess would be that the algorithm looks at the data dictionary and has different code paths for handling constraints / master-child relations. Another assumption would be the use of handled exceptions to ensure the data is generated cleanly.
