VS2008 DataSet Wizard doesn't match tables for updating - visual-studio

First question ever on this site.
I've been having a really stubborn problem with Visual Studio 2008, and I'm hoping someone has figured this out before.
I have two libraries and one project that use strongly typed DataSets (MSSQL backend) that I generated using the "Configure DataSet with Wizard" option in Data Sources. They've been working just fine for a while, and I've written a lot of code in the non-designer file for the row classes. I've also specified a lot of custom queries using the DataSet designer. This is all work I can't afford to lose.
I recently made some changes to reorganize my libraries, which included renaming the libraries themselves. I also changed the connection string to point to a different database, a development copy with the exact same schema.
The problem is that now, when I open "Configure DataSet with Wizard" to pick up a new column I've added to one of the tables, it no longer matches the tables correctly. The wizard displays all of the tables in the database, and none of them have check boxes next to them (i.e., they are not part of this DataSet). Below those it shows all of the tables again, but with red Xs, and these are checked. In other words, Visual Studio sees all of the tables it currently has in the DataSet and sees all of the tables in the database, but believes they are no longer the same tables, so nothing matches.
The same thing happened quite a while back, and I think I just rebuilt the XSD from scratch, manually copied the code over, and then had to redefine all of the custom queries I'd built in the DataSet designer. That's not a good solution.
I'm looking for two answers:
1. What causes this to happen, and how do I prevent it?
2. How do I fix this so that the wizard once again believes the tables in its XSD are the same tables that are in the database (yes, they still have the exact same names)?
Thanks.

The DataSet designer uses the default query (the first one, with a check mark on it) to sync up the schema for each table. Whenever you edit the default query, VS actually connects to your data source and looks for changes to the query. If new columns were added, they show up as new columns for you to add to your table. Renamed columns also show up as new, since VS has no way of knowing that you changed the name.
Answer 1: The XSD file contains the names of the database tables that were used to create each table originally. If you change the name of a table, the designer won't know which table to sync to.
Answer 2: You can edit the XML inside the XSD file. Do a "Find and Replace" in the XSD file, replacing the old table name with the new table name. Make sure you have a backup of the XSD file before you do, and be careful to change only instances of the old table name and not any other working XML.
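As a rough illustration of what to look for (the element and attribute names below follow a typical VS2008 typed-dataset XSD, but the object names are invented), the mapping the wizard matches against lives in each TableAdapter's DbSource section:

```xml
<!-- Illustrative fragment of a typed-dataset XSD; "OldDb" and "Customers"
     are made-up names. DbObjectName is what the wizard tries to match. -->
<DbSource ConnectionRef="MyConnectionString (Settings)"
          DbObjectName="OldDb.dbo.Customers"
          DbObjectType="Table">
  <SelectCommand>
    <DbCommand CommandType="Text" ModifiedByUser="false">
      <CommandText>SELECT Id, Name FROM dbo.Customers</CommandText>
    </DbCommand>
  </SelectCommand>
</DbSource>
```

Note that DbObjectName can embed the original database name as well as the schema and table, so if your connection string now points at a different database, that value may also need updating.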

Related

Bug in Fuzzy Lookup in Visual Studio SSIS

I'm using Visual Studio Pro 2013 and want to use the Fuzzy Lookup task, but there seems to be a bug that prevents the component from connecting to the reference tables.
A reference file and table are specified in the Connection tab, which all seems fine, but the Columns tab is also needed to create the reference links between different fields of the data; however, there is nothing there:
The error messages are as follows:
I've read elsewhere that this was a known bug in older versions of SSIS, from around 2005 - does anyone know what the problem is here and how I can fix it?
I should add that the connection manager and the table seem fine, as they have been used many times elsewhere in the project. I've tried recreating the data flow in a new document and even restarting my PC, but this simply won't work. I've not used the Fuzzy Lookup before, but I've looked at several references and know that the Columns tab should be populated with data and not be an issue.
Many Thanks,
Kw
It's not a bug; it's how the product works. The manual specifies that the reference table must be a table in SQL Server. A table in Access, therefore, is unsuitable for use in the Fuzzy Lookup component.
The transformation needs access to a reference data source that contains the values that are used to clean and extend the input data. The reference data source must be a table in a SQL Server database.
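If the reference data currently lives in Access, one workaround is to copy it into a SQL Server table first. A minimal sketch (the file path and table names are assumptions; it also requires the ACE OLE DB provider to be installed and ad hoc distributed queries enabled on the server):

```sql
-- Copy the Access reference table into SQL Server so the Fuzzy Lookup
-- transformation can use it. Path and object names are illustrative.
SELECT *
INTO dbo.FuzzyReference          -- new SQL Server reference table
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'C:\data\reference.accdb';
                'Admin';
                '',
                ReferenceTable);
```

After that, point the Fuzzy Lookup's reference table setting at dbo.FuzzyReference and the Columns tab should populate.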

Can't generate table from Oracle Designer 6i

A little background: I don't really know any of Oracle's technical terms. My company gave me a pre-built machine, and I'm trying to avoid having to take it back, back up my files, and lose a day of work, because I can't afford to.
I've recently been learning how to use Oracle Designer (6i) to build a diagram and later a table, so I can request that it be created... While I was using the software, it asked to install some file for the repository... After doing so, it broke every Oracle product I was using: I couldn't connect with PL/SQL or even Designer...
After fixing some registry entries to point to the right TNSNAMES and manually adjusting PL/SQL, I managed to access both. The problem at hand is in Oracle Designer's "Design Editor": when I right-click a table and select Generate, the message below shows up.
Message
-------
CDD-23564: The file "C:\ORACLE\DSG6I\BIN\cds61.dll" could not be loaded or does not exist.
Cause
-----
The specified file or one of its dependent files could not be loaded.
This may be because a file has not been installed, or is not
correctly defined in the system registry.
As an example dependency, the Forms Generator files require the
Form Builder files installed as part of Developer.
Action
------
Check the registry settings for the location of the required
file. Also check the product and any required dependencies
e.g. Developer have been installed correctly.
If necessary, try reinstalling.
The DLL mentioned IS there and DOES exist in the mentioned folder.
Considering I don't have the Oracle Developer 6i installer, what can I do? Which registry entry should I update?
Designer 6 is long out of support. Oracle has a free tool, SQL Developer Data Modeler, which does not break.
Even Designer 9i was flaky; it would crash at random intervals and poke along with larger schemas. Anything over a hundred tables could take days to edit. Ah, good times...
I managed to fix the problem by copying and replacing the whole ORACLE_HOME\DSG6I folder (in my case C:\Oracle\DSG6I, for those as confused by the terminology as me) and the Oracle registry key (regedit > HKEY_LOCAL_MACHINE\SOFTWARE\Oracle) from a coworker's machine!

Is it possible to update a SSDT DB project from a database?

We have two software projects that both communicate with a single database. Right now SQL updates are all made directly on the database, and it is up to developers to remember to update both projects independently to use the latest database model. Making matters worse, the two projects live in separate solutions in separate source control repositories.
While I acknowledge this is a terrible situation to be in, I inherited it. My long-term goal is to consolidate the (lots of) duplicated logic into one common project shared by both applications, but for various reasons jumping right into that isn't feasible now: critical deadlines are coming up, and combining them has to be done iteratively and scheduled with the other developers so as not to disrupt work too much.
Keeping that in mind, I really want to use SSDT to at least start bringing the database structure under source control and make it easier to manage, as there are quite a few database changes I'm about to make.
The problem with SSDT in this scenario is that you can only import from the database once. After that the option is greyed out and unavailable, which is apparently a design decision of SSDT, since it's explicitly listed in the MSDN documentation.
Is there any easy way to update my SSDT project without nuking the current project and recreating it each time someone changes the database structure?
Firstly, you're right, it is a horrible situation, so work on improving it in the long term!
There are two things you can do. First, you can use SSMS "Generate Scripts" to export all the objects and then use Import in SSDT to import from those scripts - that option isn't greyed out.
Second, you can bring the changes in manually using Schema Compare in SSDT: set the database as the source and the project as the destination, and choose what to drop, update, and import.
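If you'd rather script the export than click through SSMS, here's a rough sketch of pulling programmable-object definitions straight out of the database so they can be saved as .sql files for the SSDT import (the object-type filter is illustrative; tables have no entry in sys.sql_modules and still need "Generate Scripts"):

```sql
-- Dump view, procedure, and function definitions for import into SSDT.
SELECT s.name AS [schema], o.name AS [object], m.definition
FROM sys.sql_modules AS m
JOIN sys.objects     AS o ON o.object_id = m.object_id
JOIN sys.schemas     AS s ON s.schema_id = o.schema_id
WHERE o.type IN ('V', 'P', 'FN', 'IF', 'TF')   -- views, procs, functions
ORDER BY [schema], [object];
```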
Ed
A bit of a delayed answer. I'm using a VS2017 database project, in which I achieved this by comparing a local database with the database project; once the comparison is done, you can apply the changes with the Update button.
Step 1: Right-click the database project and click the Schema Compare item.
Step 2: Select Target -> Select Database Connection.
Step 3: Swap the source and target.
Review the screenshots for more detail.
I'm going with the compare solution:
Choose Schema Compare, make your database the source and the database project the target, then compare and update.
See this answer.
Make a new temporary database project (outside of TFS) and import all the objects.
Check out the database project (inside TFS), copy and paste all the folders (excluding the bin and obj folders) from the new temporary project into the database project (in TFS), and check in. This way you get the latest DB objects into TFS without duplicating them.
If you expect new files from the copy/paste operation, the new files should be included in the DB project.
Delete the temporary database project folder.
You will need to repeat this process whenever you want to update the DB objects in TFS.
This is a workaround that worked for me for the file-duplication issue.

SSIS value does not fall within the expected range with OLE DB Datasource

I'm using Visual Studio 2013 with Update 3, and a colleague of mine has Update 4 installed. We are using the data tools for SQL Server 2014.
I've created a few DTS packages, which my colleague has updated; so far it has worked without problems.
But all of a sudden I get a "value does not fall within the expected range" warning from the data source and can't edit columns there. I needed to recreate the data source for the message to disappear.
My question is: could the appearance of additional columns in the table that the data source accesses be the cause of this problem? (I've seen out-of-sync warnings on data destinations whenever a destination table gained or lost columns, but this is the first time something has changed on a source table.)
Or could the problem have a completely different cause?
It has been a long while since I worked on an SSIS project, but I do recall seeing this error as well.
My experience was that it was caused by the metadata of the input being out of date in a certain way, and what you describe as your suspicion fits with this.
The solution I found was to be very specific in all my input components, selecting the exact columns I wanted rather than selecting all. In the end I actually changed them all to use hand-written SQL queries rather than the GUI column selector.
Also, I don't remember if this was the same error or just a similar one, but sometimes after a schema change a component would throw an error in the GUI and refuse to open, and on a second attempt the error would have resolved itself.
Sorry I couldn't be more definitive in my answer, but hopefully this information points you in the right direction.
I used a simple method and it is working fine. In the OLE DB Source Editor, while keeping the same connection manager, I changed the data access mode from Table/View to SQL command and used a SQL command to select only the required columns. The error message no longer appeared and I could see the column values.
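For illustration (the table and column names are invented), the SQL command simply pins the source to an explicit column list, so columns added to the table later don't disturb the component's metadata the way the Table/View access mode can:

```sql
-- Explicit column list: extra columns added to dbo.Customers later will
-- not change this source's metadata.
SELECT CustomerId, FirstName, LastName, ModifiedDate
FROM dbo.Customers;
```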

Visual Studio 2010 - How can I control the order of schema updates when using schema compare?

I've been attempting to use Visual Studio 2010 Schema Compare to take updates from a dev database and move them to a UAT environment.
The compare itself works fine, but the tool continually orders the update scripts incorrectly.
It will try to update a stored procedure first, then the view that the procedure depends on. If my view includes new fields that the procedure depends on, the update fails.
I've attempted to force the dependency to be recognised by qualifying all references to the dependent views with the schema name (essentially dbo.view rather than view, as in the sketch below), as suggested in http://msdn.microsoft.com/en-us/library/aa833294.aspx
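For illustration (the object names here are invented), the schema-qualified form looks like this:

```sql
-- Qualifying the view with its schema (dbo.vOrders rather than vOrders)
-- so the procedure-to-view dependency can be detected.
CREATE PROCEDURE dbo.GetRecentOrders
AS
    SELECT OrderId, OrderDate, NewField
    FROM dbo.vOrders
    WHERE OrderDate >= DATEADD(DAY, -30, GETDATE());
GO
```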
Is there any way to force the scripts into a particular order (tables, then views, then sprocs), or is there a way to tell how and why the dependencies are calculated so I can see what's going wrong?
I don't think either of the things I was hoping to do is possible.
What I have learnt is that the refresh in Schema Compare doesn't always recalculate dependencies correctly.
Closing it and starting a new compare worked; just refreshing the original didn't.
