We are looking to introduce ODAC into our application, but I am running into a number of issues and can't seem to find any solutions for them.
We are using an Oracle database and trying to use ODAC 12c Release 1 (12.1.0.1.0) with Oracle Developer Tools for Visual Studio.
In our model we would like to have multiple schemas so we can perform cross-schema queries. The schemas we select in the filter for the database connection appear when we are creating the model. However, when we try to update our model from the database, only the default schema is visible. Sometimes this can be fixed by opening the relevant part of the database in Server Explorer in Visual Studio, but this doesn't always work. This fix never works after we add multiple connection strings for the same model (the user's location determines which database the user gets directed to).
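For reference, this is roughly how we plan to pick between the per-location connection strings for the same model at runtime; the TNS aliases, credentials and model name below are placeholders, and it assumes the unmanaged ODP.NET provider that ships with ODAC:

    // Sketch only: build an entity connection string for the shared EDMX model,
    // swapping just the store-level Oracle connection per location.
    using System.Data.EntityClient;      // System.Data.Entity.Core.EntityClient in EF6
    using Oracle.DataAccess.Client;      // unmanaged ODP.NET from the ODAC install

    static class ModelConnectionFactory
    {
        public static string ForLocation(string tnsAlias)
        {
            var store = new OracleConnectionStringBuilder
            {
                DataSource = tnsAlias,   // e.g. "ORCL_EU" or "ORCL_US" (placeholders)
                UserID = "app_user",
                Password = "secret"
            };

            var entity = new EntityConnectionStringBuilder
            {
                Provider = "Oracle.DataAccess.Client",
                ProviderConnectionString = store.ConnectionString,
                // The metadata part is identical for every location because the
                // model itself does not change.
                Metadata = "res://*/OurModel.csdl|res://*/OurModel.ssdl|res://*/OurModel.msl"
            };

            return entity.ToString();
        }
    }

    // Usage: var ctx = new OurModelEntities(ModelConnectionFactory.ForLocation("ORCL_EU"));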
The next issue we are having is that we can't get the return types of stored procedures to be auto-generated. I have tried to retrieve the column information, but it is never able to retrieve the metadata. I have seen a few suggestions of modifying the stored procedures, getting the column information, and then reverting the stored procedures, but this is something we would like to avoid. Also, the suggestions don't seem to work on Oracle databases (but that could be me; I don't have much experience with databases).
The final issue (a minor one) is that I cannot figure out how to get the generated code to omit underscores from the classes/methods it produces. This isn't a huge issue; it is purely to make migrating our existing code easier.
I'm using Visual Studio Pro 2013 and want to use the Fuzzy Lookup task but there seems to be a bug that prevents the component from connecting to the reference tables.
A reference file and table are specified in the Connection tab, which all seems fine, but the Columns tab, which is needed to create the reference links between the different fields of the data, is empty:
The error messages are as follows:
I've read elsewhere that this was a known bug in older versions of SSIS from about 2005 - does anyone know what the problem is here and how I can fix it?
I should add that the connection manager and the table seem fine, as they have been used many times elsewhere in the project. I've tried recreating the data flow in a new document and even restarting my PC, but this simply won't work. I've not used the Fuzzy Lookup before, but I have looked at several references and know that the Columns tab should be populated with data and not be an issue.
Many Thanks,
Kw
Not a bug; it's how the product works. The manual specifies that the reference table must be a table in SQL Server. A table in Access, therefore, is unsuitable for use in the Fuzzy Lookup component.
The transformation needs access to a reference data source that contains the values that are used to clean and extend the input data. The reference data source must be a table in a SQL Server database
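If the reference data has to stay in Access, one workaround is to copy it into a SQL Server staging table first and point the Fuzzy Lookup at that. A rough sketch of such a one-off copy; the file path, connection strings and table names are made up for illustration:

    // Copies the Access reference table into SQL Server so the Fuzzy Lookup
    // transformation has a valid reference data source.
    using System.Data.OleDb;
    using System.Data.SqlClient;

    class StageReferenceTable
    {
        static void Main()
        {
            const string accessCs = @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\data\reference.accdb";
            const string sqlCs = @"Data Source=.;Initial Catalog=Staging;Integrated Security=True";

            using (var source = new OleDbConnection(accessCs))
            using (var target = new SqlConnection(sqlCs))
            {
                source.Open();
                target.Open();

                // Assumes dbo.ReferenceTable already exists in SQL Server with a matching schema.
                using (var cmd = new OleDbCommand("SELECT * FROM ReferenceTable", source))
                using (var reader = cmd.ExecuteReader())
                using (var bulk = new SqlBulkCopy(target) { DestinationTableName = "dbo.ReferenceTable" })
                {
                    bulk.WriteToServer(reader);
                }
            }
        }
    }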
I am trying to move an existing database into a VS 2010 database project. This database has been around for a very long time and has a lot of stored procedures. Many of those stored procedures create working tables using the SELECT ... INTO syntax. This was done at the time to reduce the amount of data going through the transaction log. When I imported the database into the project I deselected the work tables. Now when I look at the warnings I see that all of my stored procedures are saying they can't find the work tables. Does anyone have a way of dealing with this? I am slowly converting as much as I can to SSIS, but there is some logic that will have to remain in the stored procedures.
Thanks.
In this case you either have to ignore the warnings or import those working tables that are absolutely required for the stored procedures to work. I don't think the project will build properly if tables that are expected are not present.
If those tables are already present in the database, you should be fine as they won't be re-created. You can slowly deprecate them as you clean up your stored procedures.
I'm fairly new to EF and got myself rather confused about what's going on with my solution.
I'm in the situation where my code appears to be working; however, the changes aren't being written to the database I would expect.
I'm using Web Developer 2010 and SQL 2008 with the code-first approach, but I'm choosing to make my own changes in the database and manually ensure my classes match correctly.
Things seemed OK until I came across an error where the db hash wasn't what Entity Framework was expecting, so I looked at modelBuilder.Conventions.Remove(), which wasn't available - it seems that's not around in the later versions. So I figured that if the later versions don't do this check, I'd go ahead and remove my reference to EF 4 and put the 4.3 dll in its place. I think I also ended up deleting and renaming my database. That didn't work, so I followed up on something I found on Scott Gu's blog about naming your DbContext the same as the database, which seemed to help.
However, I'm now getting the most bizarre scenario. My code is running, the data is saving, but it's not saving to my database. In fact, I'm running a profiler trace on the db and it doesn't even seem to be trying to connect. I can change my connection string to something invalid too, but my code will happily run, storing the data somewhere.
Any ideas what's going on? Might it be using a local database or cache that I'm unaware of? Should I just start my small project again and pretend this never happened? That'd be the professional approach, right?
I would suggest using the database-first approach if you already have your database set up, or if you want maximum control over the database.
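Whichever approach you pick, it's also worth pinning the context to an explicit named connection string; by default, code-first will quietly create and use a database named after the context on the local SQL Express instance when it can't find a matching connection string, which sounds a lot like your data going "somewhere". A minimal sketch, with the connection string name and entity invented for the example:

    using System.Data.Entity;

    public class Order
    {
        public int Id { get; set; }
        public string Reference { get; set; }
    }

    public class ShopContext : DbContext
    {
        // "name=ShopDb" must match an entry in <connectionStrings> in web.config;
        // with the name= prefix EF throws if that entry is missing, instead of
        // silently falling back to a local default database.
        public ShopContext() : base("name=ShopDb") { }

        public DbSet<Order> Orders { get; set; }
    }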
I have a schema in an Oracle 11g R2 database that I'm trying to connect Crystal Reports to.
I have two users: an admin user (where I create the views, etc.) and a reporting user that has the ability to query certain tables/views.
In any other database tool (SQL Developer, TOAD, DB Visualizer), I can see the schema along with its tables and views, and I can query against them and create new views, etc., as I should be able to.
However, in Crystal Reports 2008, when attempting to access the data, the proper schemas/views aren't displayed. Examples:
When creating an ODBC datasource in Crystal (which I believe connects to one I've pre-created in Windows that works just fine), only a small subset of schemas is shown in Crystal (but not the one I should be able to see).
Creating an Oracle datasource in Crystal shows me the schema and, I believe, all of the tables, but only one of the views (not the one I need).
NOTE: Normally I would think that it's a permissions issue on the database, except that I can access these schemas/tables/views properly from every other client I've tried.
Any ideas? Is it the drivers that Crystal 2008 uses? Is it still somehow possibly a permissions issue? I'd appreciate any insight you fine folks have.
Looks like this was indeed an error on our DBA's part. A certain level of "select" permissions in their permission model was preventing access. It appears to have been resolved.
But if anyone would like to help me gather all copies of Crystal 2008 in a warehouse and light them on fire, be my guest. :)
I've got a better one...
I was working with this for a long time today, trying to help one of our new developers. He had developed a report from a different workstation against a different data source, and we needed to swap the data source when we transferred it to the new network. Fired up CR, showed him how to "Set Datasource Location", we get the account information, check the connection string, etc. Get ready to show him how to replace one db w/ another... find the connection, open the server, pop out the databases, open the database to show the tables and... Nothing. Hm...
Try a different account that I know works. Strange, THAT one doesn't see any tables either. Try a different database. OK, now I'm a little off-balance... Remote into the web server to see if I can run one from there. Fire up CR, Open an existing report, hit refresh, put the PW in, and voila! Data. Lots. Copy his report up, remote in, open it, get ready to Set Datasource Location, and ... nothing.
Spoke w/ the DBA, watched/walked him through the check, still nothing.
Funny thing was, if I had a report that had connected before, it would run. Wonderful! Check the available tables... nothing. Quick jump to look at the db... I can see the privileges, I can see everything set fine. Cool. Tried again, nothing.
OK, spoke to another DBA. I walk him through CR to show him the issue; he and I are going to explicitly set permissions. I open the data source in CR, right-click to look at Properties, and... noticed that I hadn't checked Options. Sinking feeling in the pit of my stomach. Open Options, and notice that in the Data Explorer section, TABLES is not checked.
I remember WHY I set it... a long time ago. The DB has thousands of tables, and I knew which ones I needed. I paste a command and go, I never CHOOSE tables.
So... Check TABLES, and thousands of tables show up again. Sigh.
Open Crystal Reports, then click File -> Options -> select the Database tab -> in the Data Explorer options put a tick mark on Tables and set Owner Like <add schema name>, then click OK.
This will list only that schema. Crystal Reports has some limit in loading all table names, so select the schema so that it will load only that one.
thanks,
praveen.
I usually create a solution folder in Visual Studio and put my DB scripts in it. I always use at least this set of scripts:
Drop model
Create model script
User functions
Stored procedures
Static data (lookup tables)
Test data (not deployed)
Then I simply combine them into a single script and run it against a SQL Server instance, so I'm able to recreate the whole DB in a single step.
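For what it's worth, the "combine and run" step is nothing fancy; it's roughly something like this (file names and connection string are illustrative, and it assumes GO is used as the batch separator):

    // Runs each script in order against the target server, splitting on GO
    // because ADO.NET cannot execute GO-separated batches directly.
    using System.Data.SqlClient;
    using System.IO;
    using System.Text.RegularExpressions;

    class RebuildDatabase
    {
        static void Main()
        {
            var scripts = new[]
            {
                "01_drop_model.sql",
                "02_create_model.sql",
                "03_user_functions.sql",
                "04_stored_procedures.sql",
                "05_static_data.sql"
            };

            using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
            {
                conn.Open();
                foreach (var file in scripts)
                {
                    var batches = Regex.Split(File.ReadAllText(file), @"^\s*GO\s*$",
                                              RegexOptions.Multiline | RegexOptions.IgnoreCase);
                    foreach (var batch in batches)
                    {
                        if (string.IsNullOrWhiteSpace(batch)) continue;
                        using (var cmd = new SqlCommand(batch, conn))
                            cmd.ExecuteNonQuery();
                    }
                }
            }
        }
    }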
Anyway. I've never used projects in either:
Visual Studio or
SQL Management Studio
I've tried creating a SQL Server 2008 Database Project in Visual Studio 2010, but I'm somewhat overwhelmed by all the possible server settings (which I'd prefer to leave at the server defaults anyway). So I'm a bit confused: should I use this project template, or should I just keep doing what I've always done?
What do you use and why? What are advantages I may benefit from by using either?
If I were you I would continue to do it the way you are doing it. In fact, I do! In my opinion, the advantages of having the actual .sql files right there in a folder for you to use/edit/look at are far better than the advantages you get by using a DB project. A DB project would be used if you were doing something like storage reports, where you have to communicate with, say, 8 databases, compare them to 8 different databases, save result sets, etc. Now don't get me wrong, there are advantages to Database Projects; I just don't think they actually help much when you have such a simple setup that already works.
Advantages of the SQL Server 2008 Database Project in VS10:
Not having to switch back and forth from the client you currently use to communicate with your SQL Server.
Decent data and schema compare tools.
Gives you a one-click way to reverse engineer a database into source control, and keep it up to date.
You can compare projects to physical databases and vice-versa. (This makes it pretty easy to keep your database up to date, no matter where you make changes: in the file-system database project or in the physical database itself.)
If the current tool you're using is not specifically tailored to SQL Server, this one is.
Extremely helpful if you need to do unit tests directly on the database without using abstractions.
If you're looking for something a little less complicated, you might want to try SQL Source Control. This won't even require you to maintain scripts, as it does this for you behind the scenes. It will, however, only work as a solution for you if you use either TFS or SVN. And it costs $295...
It has a 28-day trial period, so if you're happy to try it out, I'd be interested in your feedback.