I'm moving a legacy app that uses Rails to access its data from MS-SQL to Postgres.
The columns in MS-SQL are capitalised, and while using activerecord-sqlserver-adapter, they are read like this:
something = my_model.SomeAttribute
Even though that looks like a Ruby constant reference, it doesn't appear to matter, since the app only reads data from the MSSQL db.
The issue I'm having now is that after moving to Postgres, all column names are folded to lowercase (Postgres lowercases unquoted identifiers, since SQL isn't meant to be case-sensitive). Now when I try to access SomeAttribute on my model, it raises an ActiveModel::MissingAttributeError, since the attribute is now lowercase.
Some examples of the symptom:
p.SomeAttribute
=> ActiveModel::MissingAttributeError: missing attribute: SomeAttribute
p.read_attribute(:SomeAttribute)
=> nil
p.has_attribute?(:SomeAttribute)
=> false
p.read_attribute(:someattribute)
=> 'expected value'
Is there some way I can get ActiveRecord/ActiveModel to convert attribute names to lowercase before attempting to retrieve them?
Disclaimer: this was a very temporary solution - definitely not the "right" thing to do!
I created views like this:
CREATE VIEW mymodel AS
SELECT
  at.someattribute,
  at.someattribute AS "SomeAttribute"
FROM actual_table at;
Double-quoting an identifier in SQL preserves its case.
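For completeness, here is roughly what the Rails side of that workaround might look like; this is just a sketch using the names from the example above, and adapter details may vary:
class MyModel < ActiveRecord::Base
  self.table_name = "mymodel"  # read from the view, not from actual_table
end

record = MyModel.first
record.someattribute                    # => 'expected value'
record.read_attribute("SomeAttribute")  # => 'expected value' (the quoted alias keeps its case)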
I am currently working on two different projects using Rails to connect to a legacy MS SQL Server database...with table and column names that don't match up to what Rails expects.
In my most recent project I think I've just found the golden egg :-) and that is to not change anything on the Rails side at all. The trick is to use SQL views to "transform" the legacy tables into something Rails understands. I have a project in development right now where this is working fantastically: I don't have to alias any table or column names on the Rails side, since everything looks peachy by the time it reaches my Rails app. I'm posting this because I had many, many issues to work through on my first project, I think this may make many others' lives much easier, and I haven't found this solution posted anywhere else.
So, for example, let's say you have a legacy table in Microsoft SQL Server 2008 R2 named "Contact-Table" with weird column names such as:
Contact-Table:
ID_Primary
First Name
Last Name
Using MS SQL views you can 'recreate' this same table. Create a view based off of Contact-Table; name the view whatever you want the 'table' to be called in Rails, and use column aliases to rename the columns. So here we could create a view called "contacts" (in line with Rails conventions) with the following columns:
contacts:
id (alias for ID_Primary)
first_name (alias for First Name)
last_name (alias for Last Name)
Then all you need to do is point your Rails model at the 'contacts' view in MS SQL, and your column names are available as expected. So far I've done this with the tiny_tds gem and FreeTDS: I can query, create and update records, and Rails associations (has_many/belongs_to, etc.) work as well. I'm very excited about using MS SQL views instead of the other methods I've used before to get Rails to talk to legacy databases! I'd love to hear what others think.
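As a rough illustration (the names below are just the ones from the example, and the exact setup may differ with your adapter version), the Rails side then needs nothing more than:
class Contact < ActiveRecord::Base
  self.table_name  = "contacts"   # the view, not the legacy Contact-Table
  self.primary_key = "id"         # the alias for ID_Primary
end

Contact.where(last_name: "Smith")                      # ordinary ActiveRecord queries
Contact.create(first_name: "Jane", last_name: "Doe")   # works as long as the view is updatable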
Related
I just ran into an interesting situation about relationships and databases. I am writing a Ruby app, and for my database I am using PostgreSQL. I have a parent object "user" and a related object "thingies", where a user can have one or more thingies. What would be the advantage of using a separate table vs. just embedding data within a field in the parent table?
Example from ActiveRecord:
using a related table:
def change
  create_table :users do |i|
    i.text :name
  end
  create_table :thingies do |i|
    i.integer :user_id   # foreign key needed for the has_many/belongs_to pair below
    i.integer :thingie
    i.text :description
  end
end
class User < ActiveRecord::Base
  has_many :thingies
end

class Thingie < ActiveRecord::Base
  belongs_to :user
end
using an embedded data structure (multidimensional array) method:
def change
  create_table :users do |i|
    i.text :name
    i.text :thingies, array: true # example contents: [[thingie, description], [thingie, description]]
  end
end
class User < ActiveRecord::Base
end
Relevant Information
I am using Heroku and Heroku Postgres as my database. I am using their free option, which limits me to 10,000 rows. That seems to push me towards the multidimensional-array approach, but I don't really know.
Embedding a data structure in a field can work for simple cases, but it prevents you from taking advantage of relational databases. Relational databases are designed to find, update, delete and protect your data. With an embedded field containing its own wad-o-data (array, JSON, XML, etc.), you wind up writing all the code to do this yourself.
There are cases where the embedded field might be more suitable, but for this question I will use an example that highlights the advantages of the related-table approach.
Imagine a User and Post example for a blog.
For an embedded post solution, you would have a table something like this (pseudocode - this is probably not valid DDL):
create table Users (
  id int auto_increment,
  name varchar(200),
  post text[][]
)
With related tables, you would do something like
create table Users (
  id int auto_increment,
  name varchar(200)
)

create table Posts (
  id int auto_increment,
  user_id int,
  content text
)
Object Relational Mapping (ORM) tools: With the embedded post, you will be writing the code manually to add posts to a user, navigate through existing posts, validate them, delete them etc. With the separate table design, you can leverage the ActiveRecord (or whatever object relational system you are using) tools for this which should keep your code much simpler.
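To make that concrete, here is a hedged sketch of the kind of code the ORM gives you for free with the separate table, versus what you manage by hand with the embedded array (model and attribute names are just illustrative):
# Separate posts table: ActiveRecord does the bookkeeping.
user = User.find(1)
user.posts.create(content: "Hello world")   # INSERT with user_id filled in for you
user.posts.count                             # simple aggregate

# Embedded array column: you manage the structure yourself.
user.post = (user.post || []) + [["Hello world", "2013-06-01"]]
user.save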
Flexibility: Imagine you want to add a date field to the post. You can do it with an embedded field, but you will have to write code to parse your array, validate the fields, update the existing embedded posts, etc. With the separate table this is much simpler. In addition, let's say you want to add an Editor to your system who approves all the posts. With the relational example this is easy; for instance, to find all posts edited by 'Bob' with ActiveRecord, you would just need:
Editor.find_by(name: 'Bob').posts
For the embedded side, you would have to write code to walk through every user in the database, parse every one of their posts and look for 'Bob' in the editor field.
Performance: Imagine that you have 10,000 users with an average of 100 posts each. Now you want to find all posts made on a certain date. With the embedded field, you must loop through every record, parse the entire array of all posts, extract the dates and check them against the one you want. This will chew up both CPU and disk I/O. With the database, you can simply index the date field and pull out the exact records you need without parsing every post from every user.
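For example, a hedged migration sketch of that indexed lookup (posted_on is not in the original schema; it is only here for illustration):
class AddPostedOnToPosts < ActiveRecord::Migration
  def change
    add_column :posts, :posted_on, :date
    add_index  :posts, :posted_on   # lets the query below use an index instead of a scan
  end
end

# One indexed query instead of parsing every user's embedded array:
Post.where(posted_on: Date.new(2013, 6, 1))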
Standards: Using a vendor specific data structure means that moving your application to another database could be a pain. Postgres appears to have a rich set of data types, but they are not the same as MySQL, Oracle, SQL Server etc. If you stick with standard data types, you will have a much easier time swapping backends.
These are the main issues I see off the top of my head. I have made this mistake and paid the price for it, so unless there is a super-compelling reason to do otherwise, I would use the separate table.
What if users John and Ann have the same thingies? The records will be duplicated, and if you decide to change the name of a thingie you will have to change two or more records. If the thingie is stored in a separate table, you only have to change one record. FYI: https://en.wikipedia.org/wiki/Database_normalization
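In other words, once thingies can be shared, the separate table naturally grows into a many-to-many layout. A rough sketch of what that might look like (this goes beyond the one-to-many in the question, and the names are purely illustrative):
class User < ActiveRecord::Base
  has_many :user_thingies
  has_many :thingies, through: :user_thingies
end

class Thingie < ActiveRecord::Base
  has_many :user_thingies
  has_many :users, through: :user_thingies
end

class UserThingie < ActiveRecord::Base
  belongs_to :user
  belongs_to :thingie
end

# Renaming a thingie shared by John and Ann now touches exactly one row:
Thingie.where(description: "old name").update_all(description: "new name")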
Benefits of one to many:
Easier ORM (Object Relational Mapping) integration. You can use an ORM either way, but with the embedded structure you end up defining and querying the data with native SQL; with distinct tables you can make use of the auto-generated mappings.
Your space limitation of 10,000 rows will go further with the one to many relationship in the case that 2 or more people can have the same "thingies."
Handle users and thingies separately. In some cases you might only care about users or thingies, not their relationship with each other - for example, updating a username or a thingy description, or getting a list of all thingies (or all users). Selecting from a single table makes that harder to work with.
Maintenance and manipulation are easier. In the case that a user or a thingy is updated (name change, email address update, etc.), you only need to update one record in its own table instead of writing update statements with "where user_id=?".
Enforceable database constraints. What if a thingy is not owned by anyone? Is the user column now nullable? It would have to be in the single-table case, so you could not enforce a simple not-null username, for example; with separate tables you can (see the migration sketch below).
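A hedged sketch of the constraints that become possible once users and thingies live in their own tables (column names follow the question; the null and index choices are just illustrative):
class CreateUsersAndThingies < ActiveRecord::Migration
  def change
    create_table :users do |t|
      t.text :name, null: false    # a user must always have a name
    end

    create_table :thingies do |t|
      t.integer :user_id           # left nullable so an unowned thingie is still allowed
      t.integer :thingie
      t.text    :description
    end

    add_index :thingies, :user_id
  end
end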
There are a lot of reasons, of course. If you are using a relational database, you should make use of the one-to-many relationship by separating your objects (users and thingies) into their own tables. Considering your limit on the number of records and that your dataset is small (under 10,000 rows), you shouldn't feel the downside of normalized data.
The short truth is that there are benefits of both. You could, for example, get faster read times from the single table approach because you don't need complicated joins.
Here is a good reference with the pros/cons of both (normalized is the multiple table approach and denormalized is the single table approach).
http://www.ovaistariq.net/199/databases-normalization-or-denormalization-which-is-the-better-technique/
Besides the benefits others mentioned, there is also the matter of standards. If you are working on this app alone, that's not a problem, but if someone else ever wants to change something, the nightmare starts.
It may take that person a lot of time just to understand how it works, and modifying something like this will take even more. That way, a simple improvement can become really time-consuming. And at some point you will be working with other people. So always code as if the person who ends up maintaining your code is a brutal psychopath who knows where you live.
I am just getting into Entity Framework for the first time beyond simple examples.
I am using the model-first approach and am querying the data source with LINQ-to-Entities.
I have created an entity model that I am exposing as an OData service against a database where I do not control the schema. In my model, I have two entities that are based off of two views in this database. I've created an association between the two entities. Both views have a column with the same name.
I am getting the error:
Ambiguous column name 'columnname'. Could not use view or function 'viewname' because of binding errors.
If I was writing the SQL statement myself, I'd qualify one of the column names with an alias to prevent this issue. EF apparently isn't doing that. How do I fix this, short of changing the view? (which I cannot do) I think this does have something to do with these entities being mapped to views, instead of being mapped to actual tables.
Assuming you can change the model, have you tried going into the model and just changing one of the column names? I can still see how it might be problematic if the two views are pulling back the same column from the same table. I can tell you that when working directly with a model mapped to tables, having identically named columns is not a problem; even having multiple associations to the same table is handled correctly, and the navigation properties are automatically given unique names. Depending on which version of EF you used, you should be able to dig into the .cs file, either under the model or under the T4 template file, and see what's going on. Then you can always create a partial class to bend it to your will.
I've configured my database.yml to point to my existing MySQL database.
How can I generate models from it?
rails generate model existing_table_name
only gives an empty model.
You can try Rmre. It can create models for an existing schema, and it tries to create all the relationships based on foreign key information.
A Rails model doesn't show your fields, but you can still use them. Try the following: assuming you have a model named ModelName and a field called "name", fire up the Rails console and type:
ModelName.find_by_name('foo')
Given a name that exists in the DB, you should see results.
Rails doesn't infer relationships though, but if your database follows Rails conventions they are easily added.
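For example, assuming the legacy schema happens to have users and posts tables joined on a user_id column (purely illustrative names), the associations are just declared in the models:
class User < ActiveRecord::Base
  has_many :posts              # assumes a posts table with a user_id column
end

class Post < ActiveRecord::Base
  belongs_to :user
end

User.find_by_name('foo').posts  # associations now work against the legacy data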
Update
I've noticed this particular lack of explicitness ("magic") is a source of confusion for newbies to Rails. You can always look in schema.rb to see the models and all the fields in one place. Also, if you would prefer to see the schema for each model in the model file, you can use the annotate_models gem, which will put the db schema in a comment at the top of the model file.
Your answer is:
$ rake db:schema:dump
That will write a fresh db/schema.rb containing a schema dump of your DB.
ActiveRecord doesn't parse a schema definition. It asks the DBM for the table defs and figures out the fields on the fly.
Having the schema is useful if you are going to modify the tables via migrations.
"Schema Dumping and You" will help you dump the schema to use as a reference for building migrations.
ActiveRecord makes some assumptions about table naming and expects an id field as the primary key, with a sequential integer as its type. Having the migrations would help you refactor the tables and/or field names and types, but you can do those same things via your DBM's command line. You don't really have to follow ActiveRecord's style, but doing so helps avoid odd errors and lets AR infer things to make your life easier.
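If a legacy table doesn't follow those conventions, you can usually tell ActiveRecord about it explicitly rather than renaming anything; a minimal sketch with made-up names:
class LegacyOrder < ActiveRecord::Base
  self.table_name  = "tbl_Orders"   # non-conventional table name
  self.primary_key = "Order_ID"     # key column that isn't called id
end

LegacyOrder.where(customer_id: 42)  # normal ActiveRecord queries still work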
You could try the Magic Model Generator.
Take a look at the rare_map gem.
https://github.com/wnameless/rare_map
It works on both Rails 3 and 4.
I am new to LINQ to SQL. I need to add a new column to an existing table, and I updated the dbml to include this new field. Then in my C# code, I query the database and access this new field. Everything is fine with the new database; however, if I load an older database without this new field, my program crashes when it accesses the database (obviously because of this new field). How do I make either the database or my C# code support backward compatibility?
Here's my code snippet.
I added an email field to the Customer table and also added it to DataContext.dbml; below is the C# code:
DataContext ctx = new DataContext();
var cusList = ctx.Customer;
foreach (var c in cusList)
{
    .
    .
    .
    // access the new field
    if (c.email != null)
        displayEmail(c.email);
    .
    .
    .
}
When I run it through the debugger, it crashes on the very first foreach iteration if I am using an older version of the database without the new email field.
Thanks.
Make sure you upgrade the old database. That's what updates are made for.
I don't think there's a better option, but I might be wrong.
This should be a code-land fix. Make your code check for the existence of the column, and use a different query in each case.
I agree with Arnis L.: upgrade your old database. LINQ to SQL is going to look for that column called email in your table, and will complain if it can't find it. I could suggest a workaround that would entail using a stored procedure, but you'd need to update your old database to use this stored proc, so it's not a very helpful suggestion :-)
How to update your old database? This is the old-school way:
ALTER TABLE Customers
ADD Email VARCHAR(130) NULL
You could execute this manually against the older database through Query Analyzer, for example. See here for full docs on ALTER TABLE: http://msdn.microsoft.com/en-us/library/aa275462%28SQL.80%29.aspx
If you are working on a team with very strict procedures for deployment from development to production systems, you would already be writing your "change scripts" to the database in this same fashion. However, if you are developing through Enterprise Manager, it might seem counter-productive to have to do the same work, a second time, just to keep old database schemas in sync with the latest schema.
For a friendlier, more "gooey" approach to this latter style of development, I can't recommend enough something like the very excellent Red Gate SQL Compare tools to help you keep multiple SQL Server databases in sync. (There are other third-party utilities out there that can supposedly do roughly the same thing, and that might even be a little cheaper, but I haven't looked much further into them.)
Best of luck! - Mike
I have two projects using legacy databases with no associations between the tables. In one, if I create associations in the DBML file, I can reference the associations in LINQ like this:
From c In context.Cities Where c.city_name = "Portland" _
Select c.State.state_name
(assuming I added the link from City.state_abbr to State.state_abbr in the DBML file.)
In a different project that uses a different database, adding the association manually doesn't seem to give me that functionality, and I'm forced to write the LINQ query like this:
From c In context.Cities Where c.city_name = "Portland" _
Join s In context.States On c.state_abbr = s.state_abbr _
Select s.state_name
Any idea what I could be missing in the second project?
Note: These are completely contrived examples - the real source tables are nothing like each other, and are very cryptic.
Check your Error List page. You might have something like the following in there:
DBML1062: The Type attribute '[ParentTable]' of the Association element 'ParentTable_ChildTable' of the Type element 'ChildTable' does not have a primary key. No code will be generated for the association.
In which case all you should need to do is make sure that both tables have a primary key set and re-save the dbml file. This will invoke the custom tool, which will in turn update the designer.cs file and create code for the association.
It looks like my problem was that my tables in the second project didn't have primary keys. As I stated, these are legacy tables, so I had to do the linking and primary-key setup in the database context rather than in the database itself, and I just forgot to specify the primary keys the second time around. Frustrating when you don't spot it, but it makes sense now.
Sometimes, when everything is configured correctly but still not working, the solution can be as simple as restarting Visual Studio.
I don't know why it happens sometimes, but I thought I should add this answer because having done some searching for a solution myself, it seems nobody has suggested this yet...