We need to amalgamate two ClearCase servers, with everything then residing on one of them.
ClearCase version 7.1.2.6, FL5 & schema 54.
Both instances have their own Admin PVOB.
How do we move and amalgamate the PVOBs, or re-establish the correct links once the VOBs are moved?
Is this possible?
It depends on whether the servers have the same architecture.
The general idea is to follow the Administration Guide (for ClearCase 7.1.2), which describes how to move a VOB (or PVOB) from one server to another.
The different scenarios are described here.
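As a rough illustration of what such a move involves on servers of the same architecture, here is a minimal sketch; the VOB tag, storage paths, and hostnames are placeholders, and the full procedure (backup, registry, and region handling) should be taken from the Administration Guide:

```bash
# Sketch only: moving one VOB's storage from oldhost to newhost (same architecture).
# Tags and paths are placeholders; follow the Administration Guide for the real steps.

# 1. Lock the VOB and copy its storage directory while it is quiescent (run on oldhost)
cleartool lock vob:/vobs/myproj
rsync -a /vobstore/myproj.vbs/ newhost:/vobstore/myproj.vbs/

# 2. Point the registry at the new storage location
cleartool unregister -vob /net/oldhost/vobstore/myproj.vbs
cleartool rmtag -vob /vobs/myproj
cleartool register -vob /net/newhost/vobstore/myproj.vbs
cleartool mktag -vob -tag /vobs/myproj /net/newhost/vobstore/myproj.vbs
# (add -host/-hpath/-gpath options if the storage has no global path)

# 3. Unlock and verify
cleartool unlock vob:/vobs/myproj
cleartool describe vob:/vobs/myproj
```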
I am new to the OBIEE tool, so kindly bear with me if my query is basic in nature.
I have 2 RPD files, a.rpd and b.rpd. I need to switch between these two RPDs on the same server and through the same OBIEE tool.
Do I need to deploy both RPDs on the server to switch between the two through the same OBIEE tool?
From my own attempts, I can open both RPD files through the Administration tool (File --> Open --> Offline) without any deployment.
Is it mandatory to deploy both RPDs on the server to open them online?
I guess I need to define 2 different ODBC system data sources for my repositories after deployment.
Thanks,
I found the answer to my queries through my own research, so I am sharing it below so that others can benefit:
1) OBIEE is designed to work with a single repository.
OBIEE has a single repository at any point in time. You can deploy A.RPD, use it, and later deploy B.RPD and use it, but it's either A or B; you will not have both on the server.
2) If you want to have A+B deployed, you can merge A and B together (the Admin tool allows you to do that; you obviously need unique names inside both, or they will override each other).
It's possible to safely merge two RPDs that have different business models, different subject areas, and different physical sources. In case of conflicts you must resolve them: keep A or replace it with B, much like managing conflicts in version control systems.
3) However, you can open both files locally in "offline" mode; for that, all you need is the file itself.
4) It's also safer to work offline, as you can do all the work, verify the RPD, and only upload once everything is done. If you work online and start making changes but don't finish, people will be using an OBIEE system with a half-done RPD, which could lead to errors. Working online also has some constraints because of how check-in and check-out work.
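If you happen to be on OBIEE 12c, the "upload once everything is done" step in point 4 can also be scripted rather than done through the console. This is only a sketch, assuming 12c's datamodel.sh utility, a default service instance named ssi, and placeholder paths:

```bash
# Sketch, OBIEE 12c only: upload the offline-edited RPD to the running service instance.
# DOMAIN_HOME, the RPD path, and the service instance name "ssi" are assumptions.
$DOMAIN_HOME/bitools/bin/datamodel.sh uploadrpd \
    -I /tmp/A.rpd \
    -SI ssi \
    -U weblogic
# The RPD and WebLogic passwords can also be passed with -W and -P.
```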
Thanks,
While setting up a basic 1 x NGINX load balancer in front of 2 backends, I ran into what is clearly, to me, a bug: the cron of this Certified App cannot be edited:
As you can see, in this particular app the cron file is owned by root:root and doesn't have the extended ACL attribute (the plus to the right of the permissions) necessary for the file to also be editable by the logged-in user (nginx in this case).
All other certified apps, by contrast, allow the main login user to have crontabs, although I found that the permissions of each file vary a lot.
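For reference, this is roughly how the difference shows up from a shell inside the container; the crontab path below is an assumption based on a typical CentOS-style layout:

```bash
# Inspect ownership and ACLs on the nginx user's crontab (path is an assumption)
ls -l /var/spool/cron/nginx
getfacl /var/spool/cron/nginx
# On the templates where editing works, the file carries an ACL entry such as
# "user:nginx:rw-", which is what produces the "+" after the permission bits.
```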
I've stumbled on https://github.com/jelastic/jem/blob/master/etc/jelastic/export.conf and it seems to be the file to target for proposing a bugfix, but its last update is from Aug-2016, so I guess Jelastic has closed much of its source code.
How can we contribute to Certified App source code?
Indeed it is a bug, as the cron file of the nginx user isn't editable in the balancer template; by design, it has to be.
As for export.conf - this file is left for backwards compatibility, but is no longer used.
The problem will definitely be fixed in the latest templates. As for existing containers, we would like to apply a patch to fix them; if you provide more details about the hosting service provider you are using, we will help with that.
As for contributing to certified templates, all the images are publicly available on Docker Hub. You can create your own version of a template based on an existing one: build a Docker image and, in your Dockerfile, specify
"FROM jelastic/nginxbalancer" as the base; then you can make any modifications to the filesystem. The next step is simply to replace the existing balancer with your custom one, as sketched below.
Anyway, let's start with the fix for existing containers.
Many thanks for finding out the bug!
As part of an infrastructure upgrade we are upgrading our instance of UCM ClearCase and moving to new servers.
We currently have an Admin PVOB; all the project PVOBs are linked to the Admin PVOB, and the VOBs are linked to their PVOBs.
When moving to the new VOB server, will this hierarchy have to be moved in a big bang to ensure the Admin PVOB stays consistent, or could some sort of phased migration approach be used? Does anyone have a recommended approach?
You should be able to move the PVOBs first, starting with the Admin one.
(provided both servers are up, and the second one is accessible from the first one)
The official documentation only mentions:
If you use UCM and are moving a PVOB and one or more component VOBs, move the PVOB first, and then move the component VOBs.
From there, you should follow "Moving Vob".
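Once a PVOB and its component VOBs are on the new server, you can check that the AdminVOB hyperlinks survived the move before migrating the next batch; a minimal sketch, with placeholder VOB tags:

```bash
# Verify that the project PVOB still points at the Admin PVOB (tags are placeholders).
cleartool describe -long vob:/vobs/proj_pvob
# The output should list a hyperlink of type AdminVOB to the Admin PVOB.
# If it is missing, it can be recreated:
cleartool mkhlink AdminVOB vob:/vobs/proj_pvob vob:/vobs/admin_pvob
```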
The other approach (for older ClearCase 7.x) is to use MultiSite to move/replicate VOBs/PVOBs.
I need to know if it is possible to keep multiple C5 servers in sync, while using local disks to contain the DocumentRoot for each instance. I cannot find any documentation on the subject of basic web clustering with C5.
Currently, we have a shared MySQL server handling all DB services (which we don't intend to change). We also use NFS to host the DocumentRoot repository, which is used by all of our hosts to hold the data.
We want to break away from the NFS model, and use local drives on each web server instead. However, I don't know if C5 will have problems with this scenario, or what pitfalls are waiting for me.
I understand I will need some kind of mechanism to trigger the data propagation across local disks. That should be simple enough to accomplish. However, C5 and its functionality may not like my plans, so I am asking for help.
How do "you" set up multiple C5 hosts, containing the same web sites, and keep them all in sync? Let me know!
Thanks!
You can use storage locations to store your files in a common location, and you can use database sessions to store your sessions in the common database.
Beyond that, all you need to do is make sure that you deploy any changes to file configuration in /application/config. Generally teams do that by ensuring that they don't make any configuration changes on the production site and instead deploy configuration changes from their staging environment.
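For the file propagation itself, something as simple as a scheduled rsync from a designated primary node can work; a minimal sketch, with hypothetical hostnames and a hypothetical DocumentRoot path:

```bash
# Sketch: push the site files from the primary web node to the other nodes.
# Hostnames and paths are placeholders; /application/config holds the configuration
# you deploy, and uploads can live in a shared storage location instead.
for node in web2 web3; do
    rsync -a --delete /var/www/concrete5/ "${node}:/var/www/concrete5/"
done
# Run this from cron (or a deploy hook) on the primary node only.
```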
If configuration becomes an issue, you can swap out the existing file configuration with a database based model pretty easily.
We currently have two SonarQube servers (v4.5.1) running on two separate Windows 2012 servers each with its own MS SQL database server. One is our Development server and the other is our production server. The idea being that we test out all rule changes on the development server first, once we are happy that they are correct we port them to the Production server.
When we first setup the two servers we simply took a backup of the Development server database and restored it on the Production server. At this point both systems were in sync.
We have recently made some modifications to the Development rule set; however, when we tried the same approach to move these to the Production server, it did not work.
The production box seemed to remember the previous rule set. There seems to be a cache of the previous rules that we can't work out how to clear.
Before restarting SonarQube with the new DB in place we deleted the temp folder, as that appears to keep a cached H2 database, but that did not solve the issue. We also tried starting it up and using the /setup URL, but this did not appear to work either.
Is there a way to completely reset the SonarQube server prior to restoring the database so that it has no knowledge of the previous rule set?
Alternatively is there a better way to export and re-import the entire rule set between two servers?
We looked at exporting the rule profile, but this did not appear to contain the full detail of the rules.
Thanks
Pete
For the moment, it is not possible to fully synchronize rules and quality profiles between 2 servers because of SONAR-5366. You can watch and vote for this ticket.
Concerning the cache that you seem to have, this is probably the Elasticsearch indexes, which are located in the <install_dir>/data/es folder. What you can do is (see the sketch after these steps):
stop your server
fully delete the <install_dir>/data folder
restart the server: your rules should be in sync with the DB
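Put together, on a default install the reset looks roughly like the following (shown with the Linux wrapper script; on your Windows 2012 servers the equivalent is stopping the SonarQube service, deleting the same folder, and starting the service again). The install path is a placeholder:

```bash
# Sketch: force SonarQube to rebuild its Elasticsearch indexes from the database.
SONAR_HOME=/opt/sonarqube           # placeholder install dir
$SONAR_HOME/bin/linux-x86-64/sonar.sh stop
rm -rf "$SONAR_HOME/data"           # the indexes live under data/es and are rebuilt on startup
$SONAR_HOME/bin/linux-x86-64/sonar.sh start
```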