Maximum number of columns in a LINQ to SQL object?

I have 62 columns in a table in SQL Server 2005, and LINQ to SQL doesn't handle updates to it, though reads work just fine. I tried re-adding the table to the model and even created a new data model, but nothing worked. I'm guessing I've hit a maximum number of columns limit on an object. Can anyone explain that?

I suspect there is some issue with an identity or timestamp column (something autogenerated on the SQL Server side). Make sure that any column that is autogenerated is marked that way in the model. You might also want to look at how it is handling concurrency: if you have triggers that change values on the row after it is updated, and the model checks all columns on update, this would cause the update to fail. Typically I create my tables with a timestamp column; LINQ to SQL picks this up when I generate the model and uses it alone for concurrency.
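On the SQL side that is a one-liner (a minimal sketch, assuming a table named dbo.Orders; the column name is arbitrary):
-- rowversion (the newer name for the timestamp type) is bumped automatically on every
-- update, so LINQ to SQL can use this single column for optimistic concurrency
ALTER TABLE dbo.Orders ADD RowVer rowversion;
When the model is regenerated, the designer maps such a column with IsVersion=true and uses it alone in the WHERE clause of the generated UPDATEs.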

Solved. It was one of the following two:
- I was using a UniqueIdentifier column that was not set as the primary key.
- I set the UniqueIdentifier column as the primary key, checked the properties of the same column in Server Explorer, and it was still not showing as a primary key; I refreshed the connection, dropped the same table onto the model again, and voila.
So I assume I made a change to my model some time before, deleted the table from the model, and re-added it from Server Explorer without refreshing the connection, which is why it never worked.
Question is: does the VS Server Explorer maintain its own copy of the table schema and require a connection refresh every time a change is made in the database?

There is no limit to the number of columns LINQ to SQL will handle.
Have you got other tables updating successfully?
What else is different about how you are accessing the table content?

Related

Can EF6 use oracle database links?

We have a legacy app that I am rewriting in .NET. All of our databases are Oracle and make use of database links. Is there any way for Entity Framework 6 to generate models based on tables located on a different database?
Currently the legacy app gets data from a table like this:
SELECT * FROM emp@foo2;
where its db connection is to database foo, which has a database link to the database foo2.
I would like to reproduce this using EF6. So far all I have found regarding this is this question.
You can do two things that EF 4 or higher will work with:
CREATE VIEW EMP AS SELECT * FROM emp@foo2;
CREATE MATERIALIZED VIEW EMP AS SELECT * FROM emp@foo2;
LOBs are not accessible across a database link without some contorted PL/SQL processing to read the LOB piece by piece.
I believe fast refresh does not work across database links, so you must consider the size of the table on the linked database. If you are refreshing a million rows, you may find the time this takes becomes an issue. Most large tables are full of tombstone data that never changes, so a timestamp column holding the last-modified date could help you build a package that picks out only the changed rows.
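For reference, a complete refresh of the materialized view can be kicked off with the DBMS_MVIEW package (a minimal sketch, assuming the materialized view above is named EMP):
BEGIN
  -- method => 'C' requests a complete refresh; 'F' asks for a fast refresh where supported
  DBMS_MVIEW.REFRESH('EMP', method => 'C');
END;
/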
If you are doing complicated joins with either of these, ensure that Oracle considers the column that would be the primary key as not null.
You can add a primary key on views and materialized views, but it must be disabled. See here for details.
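For illustration, the disabled key might be declared like this (a sketch; the key column empno is an assumption):
-- view constraints are declarative only, so they must be DISABLE NOVALIDATE
ALTER VIEW EMP ADD CONSTRAINT emp_pk PRIMARY KEY (empno) RELY DISABLE NOVALIDATE;
For a materialized view, the constraint is added with ALTER TABLE against the container table of the same name.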

Operations on certain tables won't finish

We have a table TRANSMISSIONS(ID, NAME) which behaves funny in the following ways:
The statement to add a foreign key in another table referencing TRANSMISSIONS.ID won't finish
The statement to add a column to TRANSMISSIONS won't finish
The statement to disable/drop a unique constraint won't finish
The statement to disable/drop a trigger won't finish
The primary key of TRANSMISSIONS is ID, and there is also a unique constraint on NAME; therefore there are indexes on ID and NAME. We also have a trigger that creates values for column ID using a sequence, so that INSERT statements do not need to provide a value for ID.
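The trigger looks roughly like this (a simplified sketch; the sequence name here is a placeholder):
CREATE OR REPLACE TRIGGER transmissions_bi
  BEFORE INSERT ON transmissions
  FOR EACH ROW
  WHEN (new.id IS NULL)
BEGIN
  -- fill in ID from the sequence only when the caller did not supply one
  SELECT transmissions_seq.NEXTVAL INTO :new.id FROM dual;
END;
/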
Besides TRANSMISSIONS, there are two more tables behaving like this. For other tables, the above-mentioned statements work fine.
The database is used in an application with Hibernate, and due to an incorrect JPA configuration we produced very high values for ID for a time. Note that we use the trigger only for "manual" INSERT statements; Hibernate produces ID values itself, also using the sequence.
The first thought was that the problems were due to the high IDs, but we have the same problems with tables that never had such high IDs.
Anyway, we suspected that the indexes might be fragmented somehow and ran ALTER INDEX TRANSMISSIONS_PK SHRINK SPACE COMPACT, which ran through but had no effect.
We also wanted to run ALTER TABLE TRANSMISSIONS SHRINK SPACE COMPACT, which didn't work because we first had to run ALTER TABLE TRANSMISSIONS ENABLE ROW MOVEMENT, which never finished.
We have another instance of the database which does not behave in this funny way, so we think that in the course of running the application the database somehow got into an inconsistent state.
Does anyone have suggestions as to what might have gone out of control or into an inconsistent state?
More hints:
There are no locks present on any objects in the database (according to information in v$lock and v$locked_object)
We tried all these statements in SQL Developer and also in SQL*Plus (the command-line tool).
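The lock check mentioned above was along these lines (a sketch of the kind of query used):
SELECT lo.session_id, o.object_name
FROM   v$locked_object lo
JOIN   dba_objects o ON o.object_id = lo.object_id;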

T-SQL: UPDATE performance in case some column values are the same

Suppose I want to perform an UPDATE of an object stored as a row in SQL Server 2012, and some of the column values are different, but most of them are the same. The question is simple: will SQL Server write only the changed columns to disk, or will it write the entire row, uselessly rewriting existing column values with the same values? And what if the object is kept in the DB spread over many tables (a complex object)?
Maybe I should rephrase my question: is the performance hit of non-updating UPDATEs of some columns, when updating the row, even worth worrying about?
BTW, I use the id column as the clustered key.
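For what it's worth, one way to avoid touching rows where nothing changed at all is to filter in the WHERE clause (a sketch; the table, column, and parameter names are made up, and NULL handling is omitted):
UPDATE dbo.Products
SET    Name = @Name, Price = @Price
WHERE  Id = @Id
  AND (Name <> @Name OR Price <> @Price);  -- skip the write entirely when both values are unchanged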

MiniProfiler SqlServerStorage becomes quite slow

We use MiniProfiler in two ways:
On developer machines with the pop-up
In our staging/prod environments with SqlServerStorage storing to MS SQL
After a few weeks we find that writing to the profiling DB takes a long time (seconds) and is causing real issues on the site. Truncating all the profiler tables resolves the issue.
Looking through the SqlServerStorage code, it appears the inserts also check that a row with that id doesn't already exist. Is this to keep the code DB-agnostic? It seems this would introduce a massive penalty as the number of rows increases.
How would I go about removing the performance penalty from the performance profiler? Is anyone else experiencing this slow down? Or is it something we are doing wrong?
Cheers for any help or advice.
Hmm, it looks like I made a huge mistake in how that MiniProfilers table was created when I forgot that a primary key is clustered by default... and the clustered index is a GUID column, a very big no-no.
Because data is physically stored on disk in the same order as the clustered index (indeed, one could say the table is the clustered index), SQL Server has to keep every newly inserted row in that physical order. This becomes a nightmare to keep sorted when we're using essentially a random number.
The fix is to add an auto-incrementing int and switch the primary key to that, just like all the other tables (why I overlooked this, I don't remember... we don't use this storage provider here on Stack Overflow, or this issue would have been found long ago).
I'll update the table creation scripts and provide you with something to migrate your current table in a bit.
Edit
After looking at this again, the main MiniProfilers table could just be a heap, meaning no clustered index. All access to the rows is by that guid ID column, so no physical ordering would help.
If you don't want to recreate your MiniProfiler SQL tables, you can use this script to make the primary key nonclustered:
-- first remove the clustered index from the primary key
declare @clusteredIndex varchar(50);
select @clusteredIndex = name
from sys.indexes
where type_desc = 'CLUSTERED'
and object_name(object_id) = 'MiniProfilers';
exec ('alter table MiniProfilers drop constraint ' + @clusteredIndex);
-- and then make it non-clustered
alter table MiniProfilers add constraint
PK_MiniProfilers primary key nonclustered (Id);
Another Edit
Alrighty, I've updated the creation scripts and added indexes for most querying - see the code here in GitHub.
I would highly recommend dropping all your existing tables and rerunning the updated script.

Linq insert with no primary key

I need to insert records into a table that has no primary key using LINQ to SQL. The table is poorly designed; I have NO control over the table structure. The table is comprised of a few varchar fields, a text field, and a timestamp. It is used as an audit trail for other entities.
What is the best way to accomplish the inserts? Could I extend the Linq partial class for this table and add a "fake" key? I'm open to any hack, however kludgey.
LINQ to SQL isn't meant for this task, so don't use it. Just wrap the insert in a stored procedure and add the procedure to your data model. If you can't do that, write a normal function with a bit of inline SQL.
Open your DBML file in the designer and give the mapping a key, whether your database has one or not. This will solve your problem. Just beware that you can't count on the column being used for identity or anything else if there isn't a genuine key in the database.
I was able to work around this using a composite key.
I had a similar problem with a table containing only two columns: username, role.
This table obviously does not require an identity column. So, I created a composite key with username and role. This enabled me to use LINQ for adding and deleting entries.
You might use the DataContext.ExecuteCommand method to run your own custom insert statement.
Or, you might add a primary key to a column; this will allow the objects to be tracked for inserts/updates/deletes by the DataContext. This will work even if the column isn't really an enforced primary key in the database (how would LINQ know?). If you're only doing inserts and never reuse a primary key value in the same DataContext, you'll be fine.
