Configuration Database baseline - objectgears

I have downloaded the Configuration database from https://www.objectgears.eu/it-configuration-database and created a CMDB baseline according to the example https://doc.objectgears.cz/vcd/en-US/case-study-record-baseline-solution-implementation/v/1.9.0.0.
//CheckDifferentData('cpu', OGActualDataRow, OGDataRows);
However, only the CPU difference is highlighted with the background colour, not RAM or other columns. What did I miss?

You have correctly created a rule for the cpu column. You have to create equivalent rules for the other columns that you want to check - one for each of them. Then apply the same steps to the other entities - see the screenshots at https://www.objectgears.cz/configuration-management
CMDB Configuration items from various layers are described in https://doc.objectgears.cz/vcd/en-US/a_configuration_mngt
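The pattern is simply the same rule repeated once per column: CheckDifferentData('ram', ...), and so on for each column you want to check. CheckDifferentData and the OG* row objects are ObjectGears-specific, so the self-contained sketch below only mirrors the per-column comparison logic in plain JavaScript (all names are illustrative):

```javascript
// Illustrative sketch: one comparison per column, mirroring the
// "one rule per column" advice. Returns every column whose actual
// value differs from the baseline record.
function diffColumns(actualRow, baselineRow, columns) {
  return columns.filter(function (col) {
    return actualRow[col] !== baselineRow[col];
  });
}

// Example: ram differs from the baseline, so it is the column to highlight.
var changed = diffColumns(
  { cpu: 4, ram: 16, disk: 512 },  // actual configuration item
  { cpu: 4, ram: 8, disk: 512 },   // baseline record
  ['cpu', 'ram', 'disk']
);
console.log(changed); // -> ['ram']
```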


'allow_concurrent_memtable_write' on a column family level

RocksDB supports concurrent writes to a memtable via the allow_concurrent_memtable_write option, which is part of RocksDB's immutable DBOptions. Since this is a DBOption, the setting applies to all CFs created in the DB.
But I have a requirement where I want to enable concurrent writes in certain CFs and disable them in others, treating it more like a ColumnFamilyOptions setting.
I understand that I can have two database pointers and separate the column families based on the concurrent-writes setting. Still, I would like to know if it can be done within the same DB.
Thanks in advance
No, it is not possible; it is a DB-level option, not a column family option.

ServiceNow Compare Servers Attributes

How can I get a diff between two servers covering all the attributes, like OS details, patches installed, associated load balancer, software installed, etc.?
As far as I know there is no default option to get the diff between two records in ServiceNow. Some ideas:
Use the list view of the table, show all the columns you want to compare and do it manually.
You can create a UI Page with a simple diff functionality using Jelly or Angular, etc. You would need to get the records and compare each field in the code.
Export the records as XML files and compare them with external tools: http://text-compare.com/, notepad++, etc.
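The core of the second idea is a field-by-field comparison. The sketch below is plain JavaScript with made-up field names; in a real UI Page you would load the two server records (e.g. via GlideRecord) and feed their field values in:

```javascript
// Illustrative sketch: compare two records field by field and report
// every field whose values differ, with both values side by side.
function diffRecords(serverA, serverB) {
  var report = {};
  var fields = Object.keys(serverA).concat(Object.keys(serverB));
  fields.forEach(function (field) {
    if (serverA[field] !== serverB[field]) {
      report[field] = { a: serverA[field], b: serverB[field] };
    }
  });
  return report;
}

// Hypothetical attribute values for two servers:
var diff = diffRecords(
  { os: 'RHEL 8', load_balancer: 'lb01', patch_level: '2023-10' },
  { os: 'RHEL 8', load_balancer: 'lb02', patch_level: '2023-11' }
);
// diff contains load_balancer and patch_level, but not os (which matches)
```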
Hope it helps!
You may want to check AvailabilityGuard. It's a ServiceNow add-on that allows server comparison as well as compliance standard checks and out-of-the-box downtime and data-loss risk assessment.
Another option is to check Symantec's CCS. It is more security-oriented but potentially does something similar.
Mark.

Bulk Movement Jobs in FileNet 5.2.1

I have a requirement of moving documents from one storage area to another and planning to use Bulk Movement Jobs under Sweep Jobs in FileNet P8 v5.2.1.
My filter criterion is obviously (and only) the storage area id, as I want to target a specific storage area and move the content to another storage area (similar to archiving) without altering the security, relationship containment, document class, etc.
When I run the job, although I have around 100,000 objects in the storage area I am targeting, the examined-objects field shows 500M objects and it took around 15 hours to move them. Our DBA analyzed the situation and told me that although all the necessary indexes are created on the DocVersion table (as per the FileNet documentation), the job still performs a full table scan.
Why would something like this happen?
What additional indexes can be used and how would that be helpful?
Is there a better way to do this with less time consumption?
Answering only questions 2 and 3:
Regarding indexes, you can use this documentation: https://www-01.ibm.com/support/knowledgecenter/SSNW2F_5.2.0/com.ibm.p8.performance.doc/p8ppt237.htm
You can improve the performance of your jobs if you split the documents using the "Policy controlled batch size" option (as I remember) on the "Sweeps subsystem" tab in the Domain settings.
Use Time Slot management
https://www-01.ibm.com/support/knowledgecenter/SSNW2F_5.2.1/com.ibm.p8.ce.admin.tasks.doc/p8pcc179.htm?lang=ru
and Filter Timelimit option
https://www-01.ibm.com/support/knowledgecenter/SSNW2F_5.2.1/com.ibm.p8.ce.admin.tasks.doc/p8pcc203.htm?lang=ru
In short, you just split all your documents into portions and process them at separate times and in separate threads.

Flexible hierarchies in Saiku Analytics

I have just started working with Mondrian and I am having a hard time understanding how to make hierarchies work.
Suppose that I have a Hospital dimension and I want to count the hospitals that are public or private in a certain state. I also have my hospital fact table with the appropriate measure, hospital_amount.
The hierarchy I have built in the Schema Workbench is shown below:
1- State
2- Flag (Private or Public)
3- City
4- Hospital
Done this way, I can analyse things in the Saiku Analytics plugin without major concerns, provided that I maintain the presentation order of the attributes (State, Flag, City, ...). But things get a little complicated if I want to change the order in which fields are presented in the report; in other words, what if I want to build another report in Saiku without using the Flag attribute?
Even if I hide the flag, Saiku will continue using it to categorize the rest of the attributes from the hierarchy (City and Hospital).
Some people said that I need to create another hierarchy in the Schema Workbench only for the flag, but this won't let me use the flag in the drill menu of Hospital.
Is there any way to build reports in Saiku without being stuck with the hierarchy order, i.e. choosing fields from the hierarchy in a flexible way?
Thanks in advance!
You don't mention if you are using Saiku as a BI server plugin or standalone.
If you are using standalone, which uses Mondrian 4, you can use the "hasHierarchy" attribute in your schema instead of defining a strict hierarchy; this effectively creates a hierarchy for each level, and they can all act independently of one another.
Or in Mondrian 3 you could just do that manually.
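In Mondrian 3 schema syntax, doing that manually means giving the dimension one single-level hierarchy per attribute, roughly as below (the table and column names are assumed for illustration):

```xml
<Dimension name="Hospital">
  <!-- One hierarchy per attribute, so Saiku can use each one independently -->
  <Hierarchy name="State" hasAll="true" primaryKey="hospital_id">
    <Table name="dim_hospital"/>
    <Level name="State" column="state" uniqueMembers="true"/>
  </Hierarchy>
  <Hierarchy name="Flag" hasAll="true" primaryKey="hospital_id">
    <Table name="dim_hospital"/>
    <Level name="Flag" column="flag" uniqueMembers="true"/>
  </Hierarchy>
  <Hierarchy name="City" hasAll="true" primaryKey="hospital_id">
    <Table name="dim_hospital"/>
    <Level name="City" column="city" uniqueMembers="false"/>
  </Hierarchy>
</Dimension>
```

With a layout like this, State, Flag and City can each be placed on an axis on their own, at the cost of losing the single combined drill path.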

Solr: more than one entity in DataImportHandler

I need to know the recommended solution when I want to index my Solr data using multiple queries and entities.
I ask because I have to add new fields to the schema.xml configuration, and depending on the entity (query) there should be different field definitions.
query_one = "select * from car"
query_two = "select * from user"
The car and user tables have different fields, so I have to account for this in my schema.xml config (when preparing the field definitions).
Maybe some of you create a new Solr instance for this kind of problem?
I found something called multicore. Is it the right solution for my problem?
Thanks
Solr does not stop you from hosting multiple entities in a single collection.
You can define the fields for both entities and have them hosted within the collection.
You would need an identifier field to distinguish the entities if you want to filter the results per entity.
If your collections are small or there is a relationship between the user and car entities, it might be helpful to host them within the same collection.
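For the single-collection approach, the schema.xml would declare the union of the fields of both tables plus the discriminator field; a sketch (field names and types are assumptions):

```xml
<!-- schema.xml sketch: union of car and user fields plus a discriminator -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="entity_type" type="string" indexed="true" stored="true"/> <!-- "car" or "user" -->
<!-- car fields -->
<field name="model" type="string" indexed="true" stored="true"/>
<field name="year" type="int" indexed="true" stored="true"/>
<!-- user fields -->
<field name="name" type="text_general" indexed="true" stored="true"/>
<field name="email" type="string" indexed="true" stored="true"/>
```

Filtering per entity is then a matter of adding e.g. fq=entity_type:car to the query.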
Regarding Solr multicore:
Solr multicore is basically a setup that allows Solr to host multiple cores.
Each core hosts a completely different set of unrelated entities.
You can have a separate core for each table as well.
For example, if you had collections for documents, people and stocks, which are completely unrelated entities, you would want to host them in different collections.
A multicore setup would allow you to:
host unrelated entities separately so that they don't impact each other
have a different configuration for each core with different behaviour
perform activities on each core independently (updating data, load, reload, replication)
keep the size of each core in check and configure caching accordingly
It's more a matter of preference and requirements.
The main question for you is whether people will search for cars and users together. If not (they are different domains), you can set up multiple collections/cores. If they are going to be used together (e.g. a search for something that shows up in both cars and people), you may want to merge them into one index.
If you do use a single collection for both types, you may want to set up dedicated request handlers returning different sets of fields and possibly tuning the searches. You can see an example of doing that (and a bit more) in the multilingual example from my book.
