Do I need to deploy both RPDs on the server to switch between them through the same OBIEE tool?

I am new to the OBIEE tool, so kindly bear with me if my query is basic in nature.
I have 2 RPD files, a.rpd and b.rpd. I need to switch between these 2 RPDs on the same server and through the same OBIEE tool.
Do I need to deploy both RPDs on the server to switch between them through the same OBIEE tool?
From my own attempts, I can open both RPD files through the Administration tool (File --> Open --> Offline) without any deployment.
Is it mandatory to deploy both RPDs on the server to open them online?
I guess I need to define 2 different ODBC system data sources for my repositories after deployment.
Thanks,

I got the answer to my queries through my own research, so I am sharing it below so that others can benefit:
1) OBIEE is designed to work with a single repository.
OBIEE has a single repository at any point in time. You can deploy A.RPD, use it, and later deploy B.RPD and use it. But it's either A or B; you will not have both on the server (see the sketch after this list).
2) You can merge A and B together (the Admin Tool allows you to do that, and you obviously need unique names inside both or they will override each other) if you want to have A+B deployed.
It's possible to safely merge 2 RPDs that have different business models, different subject areas, and different physical sources. In case of conflicts you must resolve them: keep A or replace it with B. It's like managing conflicts in version control systems.
3) However, you can open both files locally in "offline" mode; for that, all you need is the file itself.
4) It's also safer to work offline, as you can do the whole piece of work, verify the RPD, and upload only once everything is done. If you work online and start making changes but don't finish your work, people will be using an OBIEE system with a half-done RPD, which could lead to errors. Working online also has some constraints because of how check-in and check-out work.
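To illustrate point 1, swapping the deployed repository can be scripted in OBIEE 12c with the datamodel utility. This is only a minimal PowerShell sketch: the paths, credentials, and the default ssi service instance name are assumptions, and the exact flags may differ in your release, so verify them against the documentation for your version.

# Hypothetical paths and credentials; assumes OBIEE 12c's datamodel utility and the default ssi service instance.
$bitools = 'C:\oracle\bi\user_projects\domains\bi\bitools\bin'
# Upload a.rpd as the active repository...
& "$bitools\datamodel.cmd" uploadrpd -I 'C:\rpd\a.rpd' -SI ssi -U weblogic -P 'password'
# ...and later switch to b.rpd the same way.
& "$bitools\datamodel.cmd" uploadrpd -I 'C:\rpd\b.rpd' -SI ssi -U weblogic -P 'password'

Only one of the two is ever active; each upload replaces the previously deployed repository.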
Thanks,

Related

How to automatically exclude Dev-only tables from a Dacpac for Azure Pipeline CI/CD?

I use Azure DevOps for Azure databases using Dacpacs.
I can easily deploy the schema from Dev to Test and Prod.
However, I have an issue: the Dev databases have several Dev-only tables that I don't want to deploy to Test and Prod.
Excluding certain tables manually with Visual Studio has resulted in human error, and some unwanted tables have been deployed to Prod.
Is there a solution for making sure that Dev-only tables are automatically excluded from the Dacpac?
Is it possible to automatically filter tables whose names start with "Temp*"?
No, you can't ask it to include only certain object types. You can, however, ask it to exclude certain object types (/p:ExcludeObjectTypes), which lets you filter down to exactly what you want by eliminating everything else. With the DacFx API you can do more targeted/convenient things programmatically, but that requires writing code.
You can use sqlPackage.exe to limit the changes by using the /p:ExcludeObjectTypes argument to list the types you don't want to deploy.
Use the following as an example:
/p:ExcludeObjectTypes="StoredProcedures;ScalarValuedFunctions;TableValuedFunctions"
The full list of possible ExcludeObjectTypes values is documented here: https://learn.microsoft.com/en-us/dotnet/api/microsoft.sqlserver.dac.objecttype?view=sql-dacfx-150
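To put the flag in context, here is a minimal PowerShell sketch of a publish step; the server, database, and dacpac names are hypothetical. Note that /p:ExcludeObjectTypes filters by object type, not by name, so a name-based rule such as "Temp*" would still need DacFx code or a deployment contributor.

# Hypothetical names; assumes sqlpackage is on the PATH.
& sqlpackage '/Action:Publish' `
    '/SourceFile:MyDb.dacpac' `
    '/TargetServerName:myserver.database.windows.net' `
    '/TargetDatabaseName:MyDb_Test' `
    '/p:ExcludeObjectTypes=StoredProcedures;ScalarValuedFunctions;TableValuedFunctions'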

How to automate publishing of Power BI (PBIX) from desktop to Power BI Web with config for environment

I currently have a desktop PBIX file that I manually publish to Power BI Web.
I have to keep different versions of the same PBIX file just to keep track of the different sources for each environment (Dev/QA/UAT/Prod, etc.).
I have more than one data source per environment, i.e. in the same PBIX file I have data coming from, say, Application Insights and a REST API.
I scanned through the Power BI community to see how to do this but can't find relevant information. All the pointers are about refreshing either the local PBIX or using the Schedule Refresh option in Power BI Web.
Someone even wrote code to trigger Publish via OLE automation, but that's not an acceptable solution.
https://community.powerbi.com
I would like to automate this process such that:
A. I can provide the data source connection string/credentials externally based on the environment I want to publish to.
B. The report is published to Power BI Web using a service account instead of my own.
Our current build and deployment tool set does allow the use of PowerShell/Azure CLI etc., so it would be helpful if the solution used those.
Fetching data from SQL Azure won't need a refresh, but it's expensive.
In one of the organizations I worked for, they used views on SQL Azure to accomplish this task.
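For the publishing half of the question, here is a minimal sketch using the MicrosoftPowerBIMgmt PowerShell module. The workspace and report names and the service principal variables are assumptions; switching connection strings per environment would still have to happen afterwards, for example through the dataset parameter/datasource REST APIs.

# Assumes the MicrosoftPowerBIMgmt module is installed and a service principal has access to the workspace.
Import-Module MicrosoftPowerBIMgmt
$secret = ConvertTo-SecureString $env:PBI_CLIENT_SECRET -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential($env:PBI_CLIENT_ID, $secret)
Connect-PowerBIServiceAccount -ServicePrincipal -Credential $cred -TenantId $env:PBI_TENANT_ID
# Hypothetical workspace and report names; CreateOrOverwrite replaces an existing report of the same name.
$ws = Get-PowerBIWorkspace -Name 'Sales-QA'
New-PowerBIReport -Path '.\Sales.pbix' -Name 'Sales' -WorkspaceId $ws.Id -ConflictAction CreateOrOverwrite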

Laravel Restore a Backup

I'm fairly new to server administration. I have my Laravel app up and running and I want to make sure it has proper backups. I have researched some backup packages and settled on https://github.com/spatie/laravel-backup.
However, if the server fails, I need to know how to use the most recent backup (which will be on AWS S3) to restore the database on the rebuilt server. Are there any suggestions for guides on how to do this? I can't seem to find any, unless it doesn't really require much learning and is instead just a couple of MySQL commands.
Thanks!
I would use replication, and within Laravel I would try to switch the connection to the replica database server so things can run smoothly until the problem is resolved.
Take a look at this: Cross-Region Replication
A typical production environment automatically runs backups of the most important things your deployment needs in order to recover from a failure. Those parts would commonly be your database, your storage folder, and your configuration files.
Also, when you deploy a Laravel application there aren't many things that are "worth" backing up; you can have the entire disk mirrored somewhere, or you can schedule a backup script which runs every N hours and backs up the things that are most important to your application.
Personally I wouldn't rely on a Laravel package to handle my backups; you can always use other backup utilities, replication, and so on.
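That said, if you do use spatie/laravel-backup, restoring on a rebuilt server is essentially fetching the archive from S3 and replaying the SQL dump. Here is a minimal PowerShell sketch, assuming the AWS CLI and the mysql client are installed; the bucket name, archive name, and dump path inside the zip are hypothetical and depend on your backup configuration.

# Hypothetical bucket and paths; spatie/laravel-backup stores a zip that contains the database dump.
aws s3 cp s3://my-app-backups/latest-backup.zip .
Expand-Archive .\latest-backup.zip -DestinationPath .\restore
# Recreate the database and replay the dump.
$env:MYSQL_PWD = 'secret'   # hypothetical; prefer a secrets store
mysql -u root -e 'CREATE DATABASE IF NOT EXISTS myapp'
Get-Content .\restore\db-dumps\mysql-myapp.sql | mysql -u root myapp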
Update
Take a look at the link below:
User Guide » Amazon RDS DB Instance Lifecycle » Backing Up and Restoring
You can call the API function RestoreDBInstanceFromDBSnapshot as shown in the example.
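If you are scripting this in PowerShell, here is a minimal sketch with the AWS Tools for PowerShell; the instance and snapshot identifiers are hypothetical.

# Assumes the AWS.Tools.RDS module is installed and AWS credentials are configured.
Import-Module AWS.Tools.RDS
Restore-RDSDBInstanceFromDBSnapshot `
    -DBInstanceIdentifier 'myapp-db-restored' `
    -DBSnapshotIdentifier 'rds:myapp-db-2024-01-01' `
    -Region 'us-east-1'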
But I don't think anything automated exists that would auto-restore or magically make everything work; you would need a lot of safety checks before something like that could even be attempted. My final word: I believe manually entering or sending the restore request is the most solid solution.

How to setup SonarQube for a large organization

I am in the process of setting up SonarQube (SQ) for a large organization. I plan to have two separate systems: one for update testing and rule development, and a second production system where the real work occurs. I will be using SQL Server 2014; typically when I do that, I use a SQL Server AlwaysOn availability group to sync to a DR server in another datacenter.
My question is: with a SonarQube instance, does it make sense to DR the application to that level? If my organization can wait for a period of time to stand up a new server in a DR event, would that be possible with a proper backup of the DB? Further, if there were no backups of the DB, what would be lost with a fresh new SonarQube server besides setup/config time? Is there historical value in the code scans that would be lost, or would the next scan of the code base put us right back where we were in terms of critical issues found, etc.?
Thanks for your replies.
All the data is stored in the database, so using DR on the database is a good idea. You should make backups of the database, and restoring the database is also a good solution (note that you should also back up the installed plugins).
If you lose the database, you will also lose all the configuration (quality profiles, credentials, etc.) and the history of the analyzed projects.
So to restore a SonarQube instance, you have to:
Restore the database
Restore SonarQube or install the same version
Restore the plugins (${SONAR_HOME}/extensions/plugins)
During the first start, the ES files (${SONAR_HOME}/data/es) will be regenerated and your instance will then be up and running.
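As a small illustration of the plugin step, here is a PowerShell sketch; the SonarQube home path and the backup share are hypothetical.

# Hypothetical paths; copy the plugins directory out before the rebuild and back in afterwards.
$sonarHome = 'C:\sonarqube'
Copy-Item "$sonarHome\extensions\plugins" -Destination '\\backup\sonarqube' -Recurse      # backup
Copy-Item '\\backup\sonarqube\plugins' -Destination "$sonarHome\extensions" -Recurse      # restore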
If you have commercial plugins or you are working with a large SonarQube instance, you may contact the sales team to get support on this setup.
Disclaimer: I work at SonarSource.

Why use Oracle version control?

At work we use Oracle (12c client) to store most of our data and I use SQL Developer to connect to the database environments.
Issue:
We have issues where tables are being modified for one reason or another (someone is too lazy to create a new table, so they add new columns or change data types or lengths). This in turn breaks the table for others who actually use it for its real purpose.
Update:
We have DEV, TST, UAT, and PRD environments. We test and have scripts approved before we promote to PRD. The problem resides in DEV when we want to go back to an existing table to make a change, but that table has already been modified for different reasons.
Question 1:
Is the versioning just for stored procedures, or is it possible to track changes to table structures, functions, triggers, sequences, synonyms, etc.?
As Bob Jarvis indicates, you need far more than a solution to this one question; you need policies and practices enforced for all developers. Some ideas from places I have worked:
Every developer has a VM with a copy of the database installed. They can do whatever they like on it but must supply scripts to move their changes to production. These scripts are applied on a test instance and again on a QA instance before going to production.
Subversion works on all OSes and TortoiseSVN works well on Windows. Committing scripts to a repository works well, and this is integrated with SQL Developer and can be done with Toad.
You have a permissions issue: too many people have the privilege to alter tables. Remove these permissions and centralize them with one or two people. Changes are funnelled through them as scripts, and oversight can be applied there. Developers can have their own schema to test in, or a VM with a copy for development.
Run this script to see who can alter tables:
SELECT * FROM DBA_TAB_PRIVS
WHERE PRIVILEGE = 'ALTER';
The key is a separation of concerns. Developers should have access to a schema where they can do what they need. The company needs to know who did what, when, and where.
If you have more than one developer working on multiple changes in a dev environment, then you need coordination and communication as well as source control. A weekly meeting to discuss overlapping areas or a heads-up chat message are just some of the ways to work together.
The approach I think works best is to have a DEV database where all the developers manage their own set of schemas.
Scripted builds are provided with test data loads to allow any developer to create his own working schema. He then works there, tests his changes, and then commits his changes via scripts to source control. DEV databases do not need to be large; they just need enough test cases to allow for unit tests.
Script all the changes so that they can be checked into a version control system and merged with other changes. The goal is to have a system where devA checks in changeA, and then, once it is merged into the main trunk, devB gets changeA as he builds his schemaA.
This approach requires care if the main project schema employs PUBLIC synonyms. You will need to consider this as you go forward.
I would also advise that with each change checked in, an accompanying back-out script should be checked in as well.
The advantage of this approach is that devs can manage their own schemas. With a scripted approach they don't all need to have DBA knowledge, and they don't need to manage the database either. Having all of these on one database makes it easier to manage and control resources.
I've used this approach in teams with 50+ developers and it has worked very well.
This approach also paves the way for devs checking scripts in and having a deployment package created automatically (see the sketch below).
There is so much that can be done to make the development-test-deploy-backout cycle easier to manage.
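As a small sketch of that "scripts in, deployment package out" idea, here is a PowerShell loop that applies every checked-in change script in order via SQL*Plus; the connection string and directory layout are hypothetical.

# Hypothetical connect string and layout; scripts named 001_*.sql, 002_*.sql, ... are applied in order.
$connect = 'deployer/secret@//dbhost:1521/DEVPDB'
Get-ChildItem .\changes\*.sql | Sort-Object Name | ForEach-Object {
    Write-Host "Applying $($_.Name)"
    & sqlplus -S $connect "@$($_.FullName)"
}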
