I have built a tabular model that is currently deployed on an SSAS server.
I created my project on my C: drive, and that drive is now dead.
Can I somehow download the model that is deployed on the Analysis Services server?
Or is it basically lost?
I tried connecting to it, but it tells me that this is only possible for multidimensional models.
Thank you !
You can create a new project in SSDT and then import the deployed tabular model. When creating a new project, select the "Import from Server (Tabular)" option, connect to the SSAS instance where the model is deployed, and choose your tabular model. One thing to keep in mind is that this option does not reset any metadata, deployment targets, or connection strings, so review those settings after the import.
A good practice, if you are not using a Git repository or any other source control platform, is to periodically save the XMLA script of the model from the server.
Connect to the tabular model server (e.g. in SSMS), right-click the database, choose Script As, and save all the necessary objects.
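For the periodic export, a rough PowerShell sketch is below. It assumes the Tabular Object Model assembly (Microsoft.AnalysisServices.Tabular, installed with SSMS or the AMO/TOM client libraries) is available on the machine; the server name "MySsasServer", model name "MyTabularModel", and output path are placeholders to substitute.

    # Rough sketch: script a deployed tabular model (compat level 1200+) out to a TMSL/XMLA file.
    # The server name, model name, and output path below are placeholders.
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices.Tabular") | Out-Null

    $server = New-Object Microsoft.AnalysisServices.Tabular.Server
    $server.Connect("Data Source=MySsasServer")

    # Locate the deployed model and script it as a createOrReplace command
    $db     = $server.Databases | Where-Object { $_.Name -eq "MyTabularModel" }
    $script = [Microsoft.AnalysisServices.Tabular.JsonScripter]::ScriptCreateOrReplace($db)

    Set-Content -Path "C:\ModelBackups\MyTabularModel.xmla" -Value $script
    $server.Disconnect()

Run on a schedule (for example as a Task Scheduler job), this gives you a restorable copy of the model definition even if the development machine is lost.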
I am new to the OBIEE tool, so kindly bear with me if my question is basic in nature.
I have 2 RPD files, a.rpd and b.rpd. I need to switch between these two RPDs on the same server and through the same OBIEE tool.
Do I need to deploy both RPDs on the server to switch between them through the same OBIEE tool?
From my own attempts, I can open both RPD files through the Administration tool (OBIEE): File --> Open --> Offline, without any deployment.
Is it mandatory to deploy both RPDs to the server to open them online?
I guess I need to define two different ODBC system data sources for my repositories after deployment.
Thanks,
I found the answers to my questions through my own research, so I am sharing them below so that others can benefit:
1) OBIEE is designed to work with a single repository.
OBIEE has a single repository at any point in time. You can deploy A.RPD, use it, and later deploy B.RPD and use it, but it's either A or B; you will not have both on the server.
2) You can merge A and B together if you want to have A+B deployed (the Admin tool allows you to do that, and you obviously need unique names inside both or objects will override each other).
It is possible to safely merge two RPDs that have different business models, different subject areas, and different physical sources. In case of conflicts you must resolve them: keep A or replace it with B. It's like managing conflicts in a version control system.
3) However, you can open both files locally in "offline" mode; for that, all you need is the file itself.
4) It's also safer to work offline, as you can do the whole piece of work, verify the RPD, and only upload once everything is done. If you work online and start making changes but don't finish your work, people will be using an OBIEE system with a half-finished RPD, which could lead to errors. Working online also has some constraints because of how check-in and check-out work.
Thanks,
I currently have a desktop PBIX file that I manually publish to Power BI Web.
I have to keep different versions of the same PBIX file just to track the different data sources for each environment (Dev/QA/UAT/Prod, etc.).
I have more than one data source per environment, i.e. in the same PBIX file I have data coming from, say, Application Insights and a REST API.
I scanned through the Power BI community forums to see how to do this but can't find relevant information. All pointers are about refreshing either the local PBIX or using the Scheduled Refresh option in Power BI Web.
Someone even wrote code to trigger Publish via OLE automation, but that's not an acceptable solution.
https://community.powerbi.com
I would like to automate this process so that:
A. I can provide the data source connection strings/credentials externally, based on the environment I want to publish to.
B. The report is published to Power BI Web using a service account instead of my own.
Our current build and deployment tool set does allow the use of PowerShell/Azure CLI etc., so it would be helpful if the solution used those.
Fetching data from SQL Azure won't need a refresh, but it's expensive.
In one of the organizations I worked for, they used views on SQL Azure to accomplish this task.
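For the publishing and connection-string part of the question, a minimal PowerShell sketch is below. It assumes the MicrosoftPowerBIMgmt module, a dedicated service account whose credentials the build system can supply, and that the environment-specific values are exposed as dataset parameters; the workspace name "Sales - QA", report name "Sales", parameter name "ApiBaseUrl", and URL are placeholders.

    # Rough sketch: publish a PBIX with a service account and re-point it per environment.
    # Workspace, report, parameter names, and URLs below are placeholders.
    # Install-Module MicrosoftPowerBIMgmt -Scope CurrentUser   # one-time setup

    $cred = Get-Credential                                     # service account; a build pipeline could pass this in
    Connect-PowerBIServiceAccount -Credential $cred

    $workspace = Get-PowerBIWorkspace -Name "Sales - QA"       # target workspace for the chosen environment
    New-PowerBIReport -Path ".\Sales.pbix" -Name "Sales" -WorkspaceId $workspace.Id -ConflictAction CreateOrOverwrite

    # Update an environment-specific dataset parameter (e.g. the REST API base URL)
    $dataset = Get-PowerBIDataset -WorkspaceId $workspace.Id | Where-Object { $_.Name -eq "Sales" }
    $body    = '{ "updateDetails": [ { "name": "ApiBaseUrl", "newValue": "https://qa.example.com/api" } ] }'
    Invoke-PowerBIRestMethod -Method Post -Body $body -Url "groups/$($workspace.Id)/datasets/$($dataset.Id)/Default.UpdateParameters"

After updating parameters you would still set the data source credentials once per workspace and trigger a refresh (the datasets/{id}/refreshes REST endpoint) as part of the pipeline.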
We are just configuring our first Microsoft CRM Portal using the Portals Add-In.
We are doing this in our sandbox, obviously. Question: how do we deploy this to our production system? As far as I can see, most of the data is not saved in a solution but in entity records. Is there a practical way to deploy all of this?
You’ll want to use the Portal Records Mover in the XRMToolbox. It will allow you to export/import the relevant records for the Portal.
https://community.dynamics.com/crm/b/dynamicscrmtools/archive/2017/06/13/new-xrmtoolbox-plugin-portal-records-mover
Alternatively, you can use the Configuration Migration tool (available in the SDK NuGet package) to move the records. I prefer the Portal Records Mover because it provides more granular control and supports updates without overwriting.
As suggested by @Nicknow, use the Configuration Migration tool and the Portal Records Mover.
I would do the first transfer with the Configuration Migration tool and continue with the Portal Records Mover, where you can filter records that were updated within a given time frame.
You can also build your own SSIS package to filter the modified records and move them to the production system.
Does anyone know about this problem: any new fields I add work fine in the local back office, but when I use WebMatrix to publish to the server (discountASP.net), the fields don't show up. I did a view source in the browser and they're just not there!
For example, @Umbraco.Field("comments")
Thanks!
Daniel
If you add new fields, they are only added to your local database. That means you would need to update the database on the production website; WebMatrix doesn't do this for you (by default).
There are a few ways to handle this scenario:
copy your database to the production server (I would advise against this, because you might overwrite content and media changes on the production server)
create the fields manually on the production server (easy solution)
use a commercial package like Courier (personally I believe it's a good solution, but only if you have a content staging workflow)
use a free package like uSync (http://our.umbraco.org/projects/developer-tools/usync)
We are looking to create a test TFS 2010 server based on our live instance.
One method that has been suggested is to clone the Team Project Collection (TPC) onto another server, as detailed in this existing answer, but I think there are a few additional steps?
In order to get the cloned TPC's GUID reset, I take it we would have to first reattach the cloned TPC in the admin console on the original server, then detach it, move it, and reattach it on the test server/TFS instance.
We are not running SharePoint/WSS, but would there be additional configuration work required on the test server for SSRS, in order for new projects to be created against the cloned TPC?
Are there additional steps involved in using different AD accounts for the services, or can all of that be resolved within the admin console on the new server?
Both servers will be running on VMware and on the same domain, but different AD accounts will be used on the two servers to help prevent any unwanted interactions between the TFS instances.
I would recommend converting your TFS server to a virtual machine (P2V) using SCVMM; see this article:
http://mohamedradwan.wordpress.com/2011/06/23/converting-my-physical-domain-controller-to-a-virtual-machine-p2v/