I've created an SSAS 2017 tabular model in Visual Studio and deployed it to the SSAS tabular workspace server. For some reason it creates two separate models: one with the normal name and one with some random characters appended to it. Worse still, when I deploy changes to the model, only the one with the appended characters is updated.
What is going on here?
Deployment setting:
Two models showing in the workspace server:
The tabular model with your username and a GUID appended is the workspace database. This is a local copy of the tabular model containing the changes you've applied to it, used when Integrated Workspace mode is not enabled. The workspace database is kept in memory while it's open in SSDT, and depending on the Workspace Retention property it may be removed from memory, removed from memory and disk, or kept in memory when the model is closed. The default setting is to remove it from memory but not from disk, which is why you may not see this database after you close the model in SSDT. This property can be accessed from SSDT by highlighting the .bim file and viewing the properties (press F4).
The changes made to the workspace database should be applied to the deployed model when it's deployed using the model name as the target database on the deployment server, as in your screenshot. When you examine the model (the non-workspace database) in SSMS, how do you know the changes are not applied to it, and have you tried refreshing the view from SSMS? This can be done by right-clicking the Databases folder above the tabular models and pressing Refresh. Also, the deployment from SSDT is succeeding without errors, correct?
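If you want to double-check from code rather than SSMS, here's a rough sketch using AMO (Microsoft.AnalysisServices) to list the databases on the workspace server; the instance name is an assumption, so substitute your own:

    using System;
    using Microsoft.AnalysisServices;

    class ListTabularDatabases
    {
        static void Main()
        {
            var server = new Server();
            server.Connect(@"localhost\TABULAR"); // assumed workspace server instance

            foreach (Database db in server.Databases)
            {
                // The workspace copy shows up as ModelName_username_GUID.
                Console.WriteLine("{0} (last update: {1})", db.Name, db.LastUpdate);
            }

            server.Disconnect();
        }
    }

Both the deployed model and the workspace copy should appear here while the model is open in SSDT.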
Since I cannot comment on the correct answer I'll make it a separate answer.
What I saw happening was that when you import a cube in Visual Studio (while creating a new project), a new tabular DB is created with the name "CubeName_username_GUID".
Everything you modify gets deployed to the real cube after you close Visual Studio, and at that point the oddly named cube disappears as well.
I hope it helps.
This is my first attempt at writing a Visual Basic app using Visual Studio. I am using an Access 2013 database as the source and everything was going fine until I decided to add some calculated fields to one of the tables. Now VS is telling me that I need the "Microsoft Office 14.0 Access Database Engine OLE DB Provider" to read the tables.
I've been searching and searching but can't find anything about version 14.0 of the engine. Am I just missing something simple, or can I not access calculated fields in the DB and have to just pull the data and do the calculations in the app?
I am working on a machine with Win7 64bit, Office 2013 Pro, and Visual Studio 2015 Community.
Thanks!
EDIT: The calculated fields look like this:
"http://travellermap.com/api/jumpmap?sector="+Left([Sector],4)+"&hex="+[Hex]+"&style=print&jump=3"
Where [Sector] and [Hex] are fields in the same database. As soon as I added the fields to the database, VS2015 would throw the error. If I tried to use the Dataset Wizard, it would tell me that there were fields in the database that were missing or could not be read without v14 of the engine.
Then even after I removed the calculated fields it still could not read the data. I had to revert to a backup copy of the database that never had calculated fields in it to get things working again.
My exact steps were as follows:
Create database with multiple tables
Create a view that pulls together data from all the tables that I needed.
Create dataset in VS that uses that view which I called SystemMaster
Build app around it using the XAML designer
Decide that it would be quicker to use a calculated field in the DB than write the VB code to put together a URL that is needed to display an image in the app.
Close out VS2015 and launch Access 2013
Create the calculated field in the primary table and edit the SQL code for the view to include those fields
Save and close Access, go back into VS2015
View the dataset to see if the new fields are there; see that they are not, and launch the Wizard, only to find that it says it can't access anything in that view anymore because fields are either missing or can't be read with the current version of the Access engine (12.0), and that I need version 14.0.
Search for answer, don't find one. Remove the fields from the tables and try to get back to a working application.
Realize that even after removing the fields, VS2015 won't read the tables or views that had them.
Revert to a version of the database that did not have the calculated views, and everything works again.
While I know that the proper way to do this would be to construct the URLs in the VB code, this was easier and quicker (in theory) at this point, since this is basically a proof-of-concept/prototype.
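For what it's worth, here's a minimal sketch of what that URL construction might look like in application code (shown in C# for brevity; the VB version is nearly identical). It mirrors the Access expression above, including the Left([Sector],4) truncation; the sample field values are hypothetical:

    using System;

    class JumpMapUrl
    {
        // Equivalent of the calculated field: Left([Sector],4) plus the Hex value.
        static string BuildJumpMapUrl(string sector, string hex)
        {
            string sectorPrefix = sector.Length > 4 ? sector.Substring(0, 4) : sector;
            return "http://travellermap.com/api/jumpmap?sector=" + sectorPrefix
                 + "&hex=" + hex + "&style=print&jump=3";
        }

        static void Main()
        {
            // Hypothetical field values for illustration.
            Console.WriteLine(BuildJumpMapUrl("Spinward Marches", "1910"));
        }
    }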
We have two software projects that both communicate with a single database. Right now SQL updates are all done on the database directly, and it's up to developers to make sure both projects are updated independently to use the latest database model. Making matters worse, the two projects are in separate solutions in separate source control repositories.
While I acknowledge this is a terrible situation to be in, I inherited it. My long-term goal is to consolidate and share the (large amount of) duplicated logic between them in one common project shared by both applications, but for various reasons it is not feasible to jump right into that now: critical deadlines are coming up, and the consolidation needs to be done iteratively and scheduled with the other developers so as not to disrupt work too much.
Keeping that in mind, I really want to use SSDT to at least start bringing the database structure under source control and make it easier to manage, as there are quite a few database changes that I'm about to do.
The problem with SSDT in this scenario is that you can only import from a database once. After that the option is greyed out and unavailable, which is apparently a design decision of SSDT, since it's explicitly listed in the MSDN documentation.
Is there any easy way to update my SSDT project without nuking the current project and recreating it each time someone makes a change to the database structure?
Firstly, you're right, it is a horrible situation, so work on improving it in the long term!
There are two things you can do. Firstly, you could use the SSMS "Generate Scripts" task to export all the objects, and then use the import in SSDT to import from the scripts - this isn't greyed out.
The second thing you can do is bring the changes in manually using the schema compare in SSDT: set the database as the source and the project as the destination, and choose what you drop, update, and import.
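If you'd rather script the snapshot step than use the GUI, here's a rough sketch using the DacFx API (Microsoft.SqlServer.Dac) to extract the live database to a .dacpac that you can then schema-compare against the project. The connection string, database name, and output path are assumptions:

    using System;
    using Microsoft.SqlServer.Dac;

    class ExtractDacpac
    {
        static void Main()
        {
            // Assumed connection string; point this at the shared database.
            var services = new DacServices(
                "Server=localhost;Database=SharedDb;Integrated Security=true;");

            // Writes the current schema to a dacpac for use as a compare source.
            services.Extract(
                @"C:\temp\SharedDb.dacpac", // assumed output path
                "SharedDb",                 // assumed database name
                "SharedDbSnapshot",
                new Version(1, 0));

            Console.WriteLine("Extract complete.");
        }
    }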
Ed
This answer is a bit late. I am using a VS2017 database project, in which I achieved this by comparing a local database with the database project; once the comparison is done you can apply the changes with the Update button.
Step 1: Right-click on the database project and click on the Schema Compare item.
Step 2: Select Target -> Select Database Connection.
Step 3: Change source and target as needed.
See the screenshots for more detail.
I am going with the compare solution: choose Schema Compare, make your database the source and the database project the target, then compare and update.
See this answer.
Make a new temp Database project (outside of TFS) and import all the objects.
Check out the database project (inside TFS), copy and paste all the folders (excluding the bin and obj folders) from the new temp database project into the database project (in TFS), and check in. This way you get the latest DB objects into TFS without duplication.
If the copy/paste operation brings in new files, remember to include them in the DB project.
Delete the temp Database project folder.
You will need to repeat this process whenever you want to update all the DB objects in TFS.
This is a workaround that worked for me for this file-duplication issue.
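If you end up repeating this often, the copy step can be automated. Here's a hypothetical helper mirroring the manual procedure above - copy everything from the temp project folder into the TFS project folder, skipping the bin and obj folders; the paths are assumptions:

    using System;
    using System.IO;

    class CopyDbProject
    {
        static void Main()
        {
            CopyTree(@"C:\temp\TempDbProject", @"C:\tfs\DbProject"); // assumed paths
        }

        static void CopyTree(string source, string target)
        {
            Directory.CreateDirectory(target);
            foreach (var file in Directory.GetFiles(source))
                File.Copy(file, Path.Combine(target, Path.GetFileName(file)), true);

            foreach (var dir in Directory.GetDirectories(source))
            {
                var name = Path.GetFileName(dir);
                // Exclude build output folders, per the steps above.
                if (name.Equals("bin", StringComparison.OrdinalIgnoreCase) ||
                    name.Equals("obj", StringComparison.OrdinalIgnoreCase))
                    continue;
                CopyTree(dir, Path.Combine(target, name));
            }
        }
    }

You would still need to include any new files in the project and check in from Visual Studio afterwards.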
I'm following a tutorial (http://weblogs.asp.net/scottgu/archive/2011/05/05/ef-code-first-and-data-scaffolding-with-the-asp-net-mvc-3-tools-update.aspx) to build a simple MVC 3 web application. I'm using a code-first methodology, and things were going well until I had to go back and add a field to one of my models.
I got the error "Invalid column name 'Summary'", which makes sense because this was the new field that I had added to the model.
It's my understanding that whenever Entity Framework realizes that the model is different from the existing DB it is connected to, it will try to recreate the DB. This is the behavior that I want, so I added the following line to Application_Start in my global.asax file.
Database.SetInitializer<MyDBContext>(new DropCreateDatabaseIfModelChanges<MyDBContext>());
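For context, this is roughly where that call sits, assuming the stock MVC 3 Global.asax.cs (the surrounding registration calls come from the project template, and you need a using System.Data.Entity; at the top of the file):

    protected void Application_Start()
    {
        // Drop and recreate the database whenever the code-first model changes.
        Database.SetInitializer<MyDBContext>(
            new DropCreateDatabaseIfModelChanges<MyDBContext>());

        AreaRegistration.RegisterAllAreas();
        RegisterGlobalFilters(GlobalFilters.Filters);
        RegisterRoutes(RouteTable.Routes);
    }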
Now when I try to run the program I get the error "Model compatibility cannot be checked because the database does not contain model metadata. Model compatibility can only be checked for databases created using Code First or Code First Migrations."
I even deleted the existing database that I've been using so Visual Studio doesn't get confused.
What do I need to do in order to get Visual Studio to nuke the DB every time I change the models?
I've been attempting to use Visual Studio 2010 schema compare to take updates from a Dev database and move it to a UAT environment.
The compare itself works fine, but the tool continually orders the update scripts incorrectly.
It will try to update a stored procedure first, then the view that the procedure depends on. If my view includes new fields that the procedure depends on, the update fails.
I've attempted to force the dependency to be recognised by qualifying all references to the dependent views with the schema name (essentially dbo.view rather than view), as suggested in http://msdn.microsoft.com/en-us/library/aa833294.aspx
Is there any way to force the scripts to a particular order (tables, views then sprocs), or is there a way to tell how and why the dependencies are calculated so I can see what's going wrong?
I don't think that either of the things I was hoping to do are possible.
What I have learnt is that the refresh on the schema compare doesn't always seem to recalculate dependencies correctly.
Closing it and starting a new one worked; just refreshing the original didn't.
My company is working on a SharePoint site that we are developing using Visual Studio. The actual installation at the customer is performed by scripts deploying the produced WSP files. During normal development I mostly deploy directly from inside Visual Studio. Unfortunately I often run into problems when trying to deploy my solutions. We are using a server-farm setup, but each developer has their own virtual server, database instance, and so on.
We have one project that defines the basic content type used by the different departments. This content type typically defines things like the period that a list item covers. Each department has its own project that combines the content type with department-specific fields to form the final list.
One of my current problems is that when I make edits to the content type and deploy it, the changes do not seem to propagate. Even though I rebuild the solution and deploy both the base project and the department project successfully, I still see the old version of the content fields when I create a new department list. Sometimes it helps to retract the projects, but often I literally have to restart everything before it works.
My question is whether this problem is caused by Visual Studio not really deploying my new definitions, or whether there is some architectural aspect of SharePoint 2010 that might prevent the change from propagating. What steps can I take to lessen the likelihood of the problem occurring?
Have you tried deleting the content type in Central Administration before doing a new deployment? I've found that SharePoint doesn't update/create content types when it finds another one with the same name.
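For anyone who wants to script that check, here's a hedged sketch using the SharePoint 2010 server object model (Microsoft.SharePoint) to delete a stale content type before redeploying; the site URL and content type name are assumptions:

    using System;
    using Microsoft.SharePoint;

    class DeleteStaleContentType
    {
        static void Main()
        {
            using (var site = new SPSite("http://dev-server/sites/dept")) // assumed URL
            using (var web = site.RootWeb)
            {
                SPContentType ct = web.ContentTypes["DepartmentBase"]; // assumed name
                if (ct != null)
                {
                    // Note: this throws if the content type is still in use by a list.
                    web.ContentTypes.Delete(ct.Id);
                    Console.WriteLine("Deleted stale content type; redeploy the WSPs.");
                }
            }
        }
    }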