I was wondering if it's possible to get generated classes, the way LINQ to SQL generates a class for every table, without using the actual dbml file. My situation is as follows:
I have an API which gets all the data from the database and returns it as JSON.
On the other hand, I have an application retrieving that data, but the data needs to be cast back to the right classes to work with it properly. Of course I could write all those classes myself with the needed properties, but it would be handy if they could be generated. The application does not need any connection to the database itself; in fact, I want to protect the database by not using a dbml file, since with a dbml a connection is possible. So I just need the classes to cast the JSON to.
Can I do this or is it impossible?
You can use T4 to generate the classes if that's what you need.
Oleg Sych has a good series of tutorials for this.
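To give a rough idea, a minimal T4 sketch might look like this (the hard-coded table list is a stand-in for whatever schema source you actually have, such as a metadata file shipped alongside the API; all names here are hypothetical):

<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#
    // Hypothetical schema source: in practice you would read the table and
    // column definitions from a file or service instead of hard-coding them.
    var tables = new[] {
        new { Name = "Customer", Columns = new[] { "Id:int", "Name:string" } },
        new { Name = "Order",    Columns = new[] { "Id:int", "Total:decimal" } }
    };
#>
namespace Generated
{
<# foreach (var table in tables) { #>
    public partial class <#= table.Name #>
    {
<#     foreach (var column in table.Columns) {
           var parts = column.Split(':'); #>
        public <#= parts[1] #> <#= parts[0] #> { get; set; }
<#     } #>
    }
<# } #>
}

Running the custom tool on the .tt file produces plain C# classes with no database dependency, which is all you need to deserialize the JSON into.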
Hope it helps!
I'm looking for a solution with Spring / Camel to consume multiple REST services at runtime, create tables to store the data from the REST APIs, and compare the data dynamically. I don't know the schema of the JSON APIs in advance, so I can't generate Java client classes or JPA persistent entity classes ahead of time.
You'll need to think through this differently. I'd forget about Java POJO classes that you don't have and can't create, since the class structure isn't known in advance; anything based on POJO-to-entity binding would be pretty useless.
One solution is to simply parse the XML or JSON body manually with an event-based parser (like SAX for XML) and build an SQL CREATE string as you go through the document. Your field and table names would correspond to the tags in the document. Without access to an XSD or other structure description, no metadata is available for field lengths or types; perhaps make everything a really long VARCHAR? Also, an XML database or another kind of database might suit your problem domain better. In any case, you could include such a thing right in your Camel route as a Processor that processes the body and creates the necessary tables if they don't already exist. You could even alter a table's column lengths on the fly when you encounter a field value longer than what's currently defined.
I'm not so good at either LINQ or SQL, but I have worked more with SQL and less with LINQ. I've gone through many articles which favor LINQ. I don't want to go the SQL way (i.e. writing stored procedures and manipulating data there).
I want to start with LINQ for every data-related operation. Here are the reasons why I want to do this:
I want to have complete control of my database via the application, not by writing stored procedures (as I'm not so good at writing them)
I want my project to be easy to maintain
I want faster development
For that, I know that:
I need to add a dbml file and drag and drop tables into it
Use the DbContext class, and so on
But I want to know, is there a way:
I can avoid creating a dbml file and still be able to access the database?
Do I need to use Linq to Entities for the same?
Will it be a good idea to avoid the dbml file, since for every database change I currently need to drag and drop the tables again?
Also, I've come across many posts where LINQ to SQL is considered deprecated and not part of .NET's future. Is that true?
I have so many doubts, but I think that's natural when starting with a new technology.
I found this useful article which is good for beginners:
http://weblogs.asp.net/scottgu/archive/2010/08/03/using-ef-code-first-with-an-existing-database.aspx
After doing some more research, I came to the conclusion that:
1) Can I avoid creating a dbml file and still be able to access the database?
ANS: Yes, but instead of a dbml, an edmx file will now be created.
2) Do I need to use LINQ to Entities for the same?
ANS: Yes, you can go with LINQ to Entities.
3) Will it be a good way to avoid using a dbml file, since for every database change I need to drag and drop the tables again?
ANS: It is not required to drop and re-create the tables; there are options to update only selected parts of your model. And you are not really avoiding dbmls: an edmx file is created instead, and it is similar to a dbml in many ways.
4) Also, I've come across many posts where LINQ to SQL is considered deprecated and not part of .NET's future?
ANS: Yes, it will be deprecated in future development; it supports only SQL Server as a backend.
I hope I'm right. Please do tell me if you have any other suggestions.
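To make the code-first option from that article concrete, here is a minimal sketch (hypothetical entity and context names; it assumes the table already exists in the database):

using System.Data.Entity; // the EntityFramework NuGet package (EF 4.1+)

// Hypothetical entity; maps to an existing "Products" table by convention.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

With this in place you can query with LINQ, for example new ShopContext().Products.ToList(), without any dbml or edmx file in the project.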
LINQ is a way to query and project collections of data. For example, you can use LINQ to query and shape data from a database or from an array. LINQ by itself has nothing to do with the underlying database.
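For example, this queries a plain in-memory array, with no database anywhere in sight:

using System;
using System.Linq;

class LinqDemo
{
    static void Main()
    {
        int[] numbers = { 5, 12, 3, 8, 21 };

        // The same query operators work whether the source is an array,
        // a List<T>, or an ORM's table-backed collection.
        var evens = numbers.Where(n => n % 2 == 0).OrderBy(n => n);

        Console.WriteLine(string.Join(", ", evens)); // prints: 8, 12
    }
}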
You use an ORM (Object Relational Mapper) technology to project data stored in tables of a database as collections of objects. Once you have the collection of objects, you can use LINQ to query them.
Now, you have many ORM technologies to choose from, such as Entity Framework, NHibernate, and LINQ to SQL. If you don't want to maintain a dbml file, have a look at the code-first approach offered by Entity Framework.
Then there are things called LINQ data providers. They take a LINQ query, transform it into SQL targeting a particular database, execute the query, and return the results as a set of objects. Many of the ORMs above have built-in LINQ data providers that work behind the scenes to fetch the data.
I would advise you to read up on patterns such as Repository and Unit of Work for your data layer. When used correctly, these patterns isolate your data access code from your application's upper layers. This will help you change your data access technology if it becomes obsolete, without affecting the rest of the application.
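As a rough sketch of the Repository idea (hypothetical names throughout; Customer and ShopContext are assumed to be defined elsewhere):

// The upper layers depend only on this interface, never on the ORM directly.
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Add(Customer customer);
}

// One possible Entity Framework implementation; switching ORMs later
// just means writing a new class that fulfils the same contract.
public class EfCustomerRepository : ICustomerRepository
{
    private readonly ShopContext _context;

    public EfCustomerRepository(ShopContext context)
    {
        _context = context;
    }

    public Customer GetById(int id)
    {
        return _context.Customers.Find(id);
    }

    public void Add(Customer customer)
    {
        _context.Customers.Add(customer);
    }
}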
LINQ is an awesome technology and you should definitely try it.
I have composed the above answer based on my own experience, and I am sure there are many SO users with a better understanding of these technologies than myself who may wish to add their own opinions.
Good luck
I am working on an ASP.NET MVC 3 web application and I am using database-first. After mapping the DB tables into entity classes using Entity Framework, I find I am interacting with these tables just as I would in the code-first approach, dealing with database tables as classes and objects.
So after mapping the tables into entity classes, I find the code-first and DB-first approaches very similar, except that instead of writing the entity classes from scratch (as in code-first) I have created them from existing database tables, which is easier and more convenient in my case.
So are there specific cases in which I will not be able to do something unless I use one approach over the other? So far I cannot find any.
Having dealt with many, many headaches using DB-first EDMX pre-EF 4.1, I am partial to code-first. But I'm not going to evangelize it.
In addition to the direct sproc mapping and function import features mentioned in Pawel's answer and comment, you won't be able to change the namespaces or any other code in the generated files when you use DB-first. AFAIK all of the generated files are nested under the .tt file; if there is a way to move them into logical folders and namespaces in your project, I'm not aware of it.
Also, if you ever want to separate your DbContext into a project apart from your entities: I recall this was possible pre-EF 4.1, but it was more cumbersome, because you had to run the custom tool on both .tt files after each database change. With code-first this is pretty straightforward, because you're dealing with pure OOP.
I think that the biggest limitation of code-first (as compared to the model-first/database-first approaches) is that you cannot map your CUD (create/update/delete) operations to stored procedures. If you are not planning to do that, then you should be good to go.
To be more specific: you can invoke stored procedures using the SqlQuery method on DbSet, which causes the returned entities to be tracked, or the more general SqlQuery and ExecuteSqlCommand methods on the Database class (for Database.SqlQuery the returned objects do not have to be entities, and there is no tracking of these objects). That's about it. You cannot map Create/Update/Delete operations to stored procedures, and function imports are not supported either.
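For example (hypothetical context, entity, and procedure names):

using System.Data.SqlClient;
using System.Linq;

class StoredProcDemo
{
    static void Run()
    {
        using (var context = new MyDbContext())
        {
            // DbSet.SqlQuery: the returned Customer entities are tracked.
            var customers = context.Customers
                .SqlQuery("EXEC dbo.GetTopCustomers @count", new SqlParameter("@count", 10))
                .ToList();

            // Database.SqlQuery<T>: works for arbitrary types, no tracking.
            var names = context.Database
                .SqlQuery<string>("SELECT Name FROM Customers")
                .ToList();

            // Database.ExecuteSqlCommand: for statements with no result set.
            context.Database.ExecuteSqlCommand("EXEC dbo.ArchiveOldOrders");
        }
    }
}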
EDIT
It's now possible to map CUD operations to stored procedures in EF6.
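In EF6 that looks roughly like this (hypothetical entity and context names; by convention EF then expects procedures named Customer_Insert, Customer_Update, and Customer_Delete):

using System.Data.Entity;

public class MyDbContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Route insert/update/delete for Customer through stored procedures.
        modelBuilder.Entity<Customer>().MapToStoredProcedures();
    }
}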
We have a J2EE app built on Struts2 + Spring + iBatis; not all DAOs use iBatis, and some code still uses the old JDBC approach of interacting with the database. All our DAOs call stored procedures; we do not have any inline SQL. Since Oracle stored procedures return cursors, we have to change our code drastically.
It is fairly easy for us to convert the current iBatis mappings (in SQL) to Oracle (we used a Groovy script to do this), and it is also easy to convert the Java code that was calling the old SQL mappings.
Our problem is converting the old DAOs that still use the JDBC approach. Since we will have to modify them anyway (because we are now moving to Oracle), we are thinking about converting them to iBatis mappings. Is this a good approach? It will be a huge effort on our side...
What do you think would be the best approach to tackle this huge effort?
Should we just get to work and start converting each method in every DAO?
Should we try to make a small script that looks at each method, parses out the relevant information, and generates iBatis mappings from it?
For maintenance and separation purposes, should we have one iBatis mapping file per DAO?
I apologize if the question is vague, but I am just looking for someone who has gone through this type of migration before and has some pointers or lessons learned.
The first thing you should do is cover your DAO layer in tests; this way you'll know if you broke something during the conversion. If you are moving stored procedures from one DBMS to Oracle, you should also write tests for them using a framework like DbUnit.
You should have a TEST DB instance populated with sample data that doesn't change, and you should be able to refresh this DB with the same set of sample data after you are done running your tests. This ensures your TEST DB is in a known state. You will then have your input parameters paired with some expected (correct) results; your tests read in these pairs, execute them against the test DB instance, and confirm the expected results are returned. Assuming your tests mutate the DB, you'll want to refresh it between runs of your test suite.
Second, if you're already going in and changing some data access implementations for Oracle, why not use this as an opportunity to move some of that business logic out of the DB and into Java? There are many well-documented problems with maintaining large codebases in a DBMS.
Should we try to make a small script that looks at each method, parses out the relevant information, and generates iBatis mappings from it?
I don't recommend this. The time you'd spend tweaking the script for each special case, plus hunting down all the bugs it would introduce, would be better spent doing the conversion by a thinking human.
For maintenance and separation purposes, should we have one iBatis mapping file per DAO?
That's a fine idea. You can then combine them in your sqlMapConfig with
<sqlMap resource="sqlMaps/XXX.xml" />
This will keep your mappings more manageable. Just make sure to specify the namespace attribute in each sqlMap like:
<sqlMap namespace="User">
So that you can reuse mappings between the sqlMaps for instantiating object graphs (for example, when loading a User and his Permissions, the User.xml sqlMap calls the Permission.xml mapping).
All our DAOs call stored procedures
I don't see what iBatis is buying you here.
It's also not clear what the migration is. Are you saying that you've decided to move all the code into stored procedures, so there's no more in-line SQL? If that's the case, I'd say don't use iBatis. If you're already using Spring, let it call into Oracle using its StoredProcedure object and map the cursors into objects.
The recommendation to create JUnit or, better yet, TestNG tests is spot on. Do that before changing anything.
I am using LINQ to access my database, and thereby get a LINQ-created object which I want to send to the browser (this is a web service) as a JSON object. This works well so far, but when I add some test data to the database (about 10-20 entries in each table) it fails miserably. The reason is that the LINQ object contains all the referenced objects, and this becomes huge pretty fast: each resource type contains all its resources, which contain all their reservation lines, which contain their reservations...
Do you have any tips on how I should resolve this? Is there a setting in the serializer I can set? I use Json.NET for serializing the objects. Or is there some setting in LINQ?
Ideally I don't want to create new objects before I serialize, since it is very convenient to just serialize the LINQ objects directly :)
The best practice, at least for the moment, is not to serialize LINQ to SQL objects or Entity Framework entities. The main reason is that they include implementation-dependent data from their base classes.
Instead, serialize what you want serialized: use Data Transfer Objects matching exactly what you want to send, and copy the data from the LINQ to SQL objects into them before sending.
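A rough sketch of that approach (hypothetical entity, context, and property names, serialized with Json.NET as in the question):

using System.Linq;
using Newtonsoft.Json;

// A hand-written DTO containing only what the client actually needs.
public class ResourceDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

class SerializationDemo
{
    static string GetResourcesAsJson(MyDataContext db)
    {
        // Project the LINQ objects into flat DTOs so the deep object graph
        // (resource types -> resources -> reservation lines -> reservations)
        // never reaches the serializer.
        var dtos = db.Resources
            .Select(r => new ResourceDto { Id = r.Id, Name = r.Name })
            .ToList();

        return JsonConvert.SerializeObject(dtos);
    }
}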