I have a question about Informatica metadata, and I would be glad for any advice.
In Informatica PowerCenter we can right-click on any workflow and choose
"Dependencies". A window then appears where we can choose which dependent objects we want to see.
When we press "OK", a "View Dependencies" window appears with the list of dependent objects and information about them (object_name, object_type, timestamp, status, etc.).
Does anybody know how to select this list from the Informatica metadata repository tables? Or maybe somebody knows how to extract this SELECT from PowerCenter Designer.
I know about the separate views for mappings, sources, targets, etc., but maybe you know how to get exactly the same data as shown in this window.
Thank you for any help.
You need to realize that this data is spread across many different tables in the repository database, and it gets even more complex if you have versioning turned on in the repository.
We created a query that returns all dependent tables (sources, targets and lookup sources) for a workflow, but that took two of our smartest people more than a week, and it still has several drawbacks in a more general setting. One example is that it doesn't support worklets, since we don't use them.
Can you narrow down the requirement? Then we may be able to point you in the right direction.
We can check the dependencies by querying the Informatica repository metadata tables. For that we need to know the connection details of the Oracle schema where the metadata tables reside (usually the Informatica admins know the connection details). With those details we can check the tables below:
OPB_MAPPING,
OPB_SUBJECT,
OPB_WIDGET (for transformations),
OPB_TASK,
OPB_WFLOW_RUN, etc.
Below is a sample SQL query that shows the names of all the folders in the repository and the mappings contained in them, along with the last saved date, mapping version number and versioning comments, if any.
SELECT
S.SUBJ_NAME FOLDER,
M.MAPPING_NAME MAPPING,
M.VERSION_NUMBER VERSION_NUMBER,
CASE WHEN M.IS_VALID = 1 THEN 'YES' ELSE 'NO' END IS_VALID,
M.LAST_SAVED SAVED_ON,
M.CHECKOUT_USER_ID,
M.COMMENTS
FROM OPB_MAPPING M, OPB_SUBJECT S
WHERE M.SUBJECT_ID = S.SUBJ_ID
AND M.IS_VISIBLE = 1
ORDER BY 1, 2, 3;
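As a next step toward what the View Dependencies window shows, a query along the following lines can list the transformation instances each mapping uses. This is only a sketch: the OPB_WIDGET_INST / OPB_WIDGET join is an assumption based on common repository layouts, table and column names can differ between PowerCenter versions, and a versioned repository will need an additional filter on version/visibility columns, so verify everything against your own repository first.
SELECT
S.SUBJ_NAME FOLDER,
M.MAPPING_NAME MAPPING,
WI.INSTANCE_NAME TRANSFORMATION_INSTANCE,
W.WIDGET_TYPE TRANSFORMATION_TYPE
FROM OPB_MAPPING M, OPB_SUBJECT S, OPB_WIDGET_INST WI, OPB_WIDGET W
WHERE M.SUBJECT_ID = S.SUBJ_ID
AND WI.MAPPING_ID = M.MAPPING_ID
AND W.WIDGET_ID = WI.WIDGET_ID
ORDER BY 1, 2, 3;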
I've been teaching myself SSDT for use on an upcoming project that I expect to be working on. My understanding of the "publish" operation is that it will take my SQL Server Data Project code, use that to generate something like a reference database, and then use that to compare against my target-deploy database, figure out what changes are required to get the schema into line with the reference db, and then make them.
But for a table rename, this did not happen, and I'm hoping somebody can explain what is wrong with my mental model of the process.
I've got a very simple "library" themed test database with tables like "Libraries", "Books", and "Categories". All very simple 2-3 columns just to experiment with. Then I added a 4th table "Books_MM_Categories" to represent a many-to-many link table between "Books" and "Categories".
I published that, and all was as expected. But I'd deliberately named the link table 'wrong' so that I could try renaming it. So I renamed the SQL file in my DB project, and changed its code to instead create a table named "Books_Categories_Link".
This time when I published, I expected the "Books_MM_Categories" table to be deleted from the DB, and the new one added... or to have some kind of sp_rename procedure show up to rename the table.
Instead, what I got was that both tables are now present. I can understand that my sloppy rename would have lost all the data, simply causing one new table to be created and the old one dropped, instead of an ACTUAL rename... But what I can't figure out is why the original table is not dropped. In my mental model of how this works, a table/column/view/sproc that no longer exists in the reference should likewise be eliminated from the published database. If not, then I should expect to see some error message telling me it chose not to drop the table because of anticipated data loss.
I did see a couple of posts explaining how to use the "Refactor" option in the code view window... That works as I would expect, so I understand how to do it properly going forward.
Can anybody explain what's wrong with my mental model of how this works? I'm sure it's working as it is supposed to, but I'd like to understand where I went wrong. Why does a table not listed in my project not get deleted on publish? (I've not tried it, but I expect exactly the same behavior if I export a .dacpac first and then use that to perform the deployment of the new schema.)
Thanks
EDIT 1
Somewhat curiously, when running a "Schema Compare" operation, the extra table is detected and flagged for deletion.
Your mental model seems to be correct. Check the 'Advanced' options in the 'Publish Database' dialog.
On the 'Drop' tab you can enable 'Drop objects in target but not in source' to produce the intended result.
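For reference, this is roughly the difference in the generated publish script. The first statement is what you get once the rename has been recorded through Refactor > Rename (it ends up in the project's refactorlog); the second pair is what a plain file rename plus 'Drop objects in target but not in source' produces, which loses the data. Column names here are placeholders, and the exact script SSDT emits may differ:
-- Rename recorded via Refactor > Rename: data is preserved
EXEC sp_rename N'[dbo].[Books_MM_Categories]', N'Books_Categories_Link';

-- Plain file rename with "Drop objects in target but not in source" enabled:
-- the old table is dropped and an empty replacement is created
DROP TABLE [dbo].[Books_MM_Categories];
CREATE TABLE [dbo].[Books_Categories_Link]
(
    BookId INT NOT NULL,      -- placeholder columns
    CategoryId INT NOT NULL
);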
Has anyone tried to make a pivot on 3 tables?
My case is a project management.
I have projects that contain multiple customers that contain multiple tasks.
I would like to retrieve everything in a cascade:
Project::with('customers')->with('customers.tasks')->get()
I have tried several times but nothing conclusive.
To give you an idea of the result: http://dhtmlx.com/docs/products/dhtmlxGantt/01_basic.html
We have: Product launch (project) > Development (customer) > Develop System (task)
Each task has a start date and an end date, so I have to be able to find these dates starting from the project itself (represented by the green bar).
If you have any ideas let me know :)
I think your best bet would be to create a pivot table between customers and tasks. And it would also have a column for project_id.
This would give you the ability to find all of the customer's tasks and all tasks belonging to a certain customer.
Then you would have a projects table, and you'd be able to find a project's customers/tasks using hasManyThrough. I believe this would also require you to set up a model for your customer_task table as well, but it should be fairly straightforward.
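As a rough sketch of what that pivot table could look like at the database level (MySQL-flavoured SQL; the customer_task name and the column names follow Laravel conventions and are assumptions, not something from the original project):
CREATE TABLE customer_task (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    project_id INT UNSIGNED NOT NULL,
    customer_id INT UNSIGNED NOT NULL,
    task_id INT UNSIGNED NOT NULL,
    FOREIGN KEY (project_id) REFERENCES projects (id),
    FOREIGN KEY (customer_id) REFERENCES customers (id),
    FOREIGN KEY (task_id) REFERENCES tasks (id)
);
With project_id stored on each row, fetching every task for a project (for the Gantt chart) becomes a single join through this table.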
I am a complete rookie with Spotfire and am trying to create a calculated column, but the menu option is disabled and I can't figure out why. It feels like something that's really simple. Can anyone help me out? My data source is a connection to a Microsoft SQL Server database.
First, if there is no data loaded, the Insert Calculated Column option will not be active.
Second, if you're using an in-database connection (not an Information Link), then you cannot insert calculated columns. In fact, with in-db connections, there are a lot of things you can't do:
Insert Columns
Insert Rows
Insert Calculated Column
Insert Binned Column
Data Relationships
K-means Clustering
Line Similarity
Data Functions
Regression Modeling
Classification Modeling
Insert Predicted Columns
But ... if you have data loaded and you're not using an in-database connection, I suspect the License for inserting a calculated column is not enabled for you. I don't know if you are an Administrator or not, but here is what I would recommend that an Administrator do.
Open the Administration Manager (Tools > Administration Manager). On the Users tab, search for your username and select it. Then, to the right, click the Licenses tab.
I believe the license for inserting calculated columns is under TIBCO Spotfire Professional and then Insert New Column. Make sure that's checked. If it is, then I'm not sure what the problem is. If it's not checked (i.e., there's a red X), then you'll have to go to the Groups and Licenses tab and Edit the Licenses for either yourself or the Group you belong to.
Be sure to look in the Spotfire Deployment & Administration manual if you haven't already: docs.tibco.com
I think this will get you close. You might consider posting in the Tibcommunity as well. Good luck.
I am trying to run some reports in TCR that I imported from 6.2.3-TIV-ITM_TMV-Agent-Reports-FP0001.
When I do, I get this error: UDA-SQL-0196 The table or view "KSY_SUMMARIZATION_CONFIG_DV" was not found in the dictionary.
I checked and the table is not in the database.
Regarding that table, the documentation says this:
The Summarization and Pruning configuration is shown in a specific query subject (Summarization and Pruning Configuration). The result is one row that represents the most recent entry in the KSY_SUMMARIZATION_CONFIG_DV view.
Maybe the WAREHOUS database is lacking something? If the agents are running, shouldn't there be a view named KSY_SUMMARIZATION_CONFIG_DV?
I also can't find other tables such as KLZ_CPU_HV, KLZ_CPU_DV, KLZ_CPU_WV, KLZ_CPU_MV, KLZ_CPU_QV and KLZ_CPU_YV.
Thanks for your help
You have to configure historical collection for the appropriate agent attribute groups for those tables to show up in your TDW. For instance, create a historical collection for the "Linux CPU" attribute group to get the KLZ_CPU table. For the tables ending in _D, _H, etc., configure hourly, daily, etc. summarization for those attribute groups.
Depending on the collection intervals, the Warehouse Proxy agent will eventually create the necessary tables.
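If you want to confirm whether the views have been created yet, a quick catalog check works. This assumes the WAREHOUS database is on DB2; on Oracle or SQL Server you would query ALL_VIEWS or sys.views instead:
SELECT TABSCHEMA, TABNAME, TYPE
FROM SYSCAT.TABLES
WHERE TABNAME IN ('KSY_SUMMARIZATION_CONFIG_DV', 'KLZ_CPU_HV', 'KLZ_CPU_DV');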
We use CRM 4.0 at our institution and have no plans to upgrade presently, as we've spent the last year and a half customising and extending the CRM to work with our processes.
A tiny part of the model is a simple hierarchy: we have a group of learning rooms that has a one-to-many relationship with another entity that describes the courses available for that learning room.
Another entity has a list of all potential and enrolled students who have expressed an interest in whichever course.
That bit's all straightforward and works pretty well and is modelled into 3 custom entities.
Now, we've got an Admin application that reads the rooms and then wants to show the courses for that room, but only where there are enrolled students.
In SQL this is simplified to:
SELECT DISTINCT r.CourseName, r.OtherInformation
FROM Rooms r
INNER JOIN Students S
ON S.CourseId = r.CourseId
WHERE r.RoomId = #RoomId
And this indeed is very close to the eventual SQL that CRM generates.
We use a Crm QueryEntity, a Filter and a LinkEntity to represent this same structure.
The problem now is that CRM normalizes a custom entity into a Base table, which holds the standard CRM entity data that all entities share, and an ExtensionBase table, which holds our customisations. To give flattened access to these, it creates a view that merges both tables.
This view is what is used by the Generated SQL.
Now the base tables have indices but the view doesn't.
The problem we have is that all we want to do is return courses where the inner join is satisfied; it's enough to prove there are entries, and CRM makes it a SELECT DISTINCT, so we only get each course back once for the room.
At first this worked perfectly well, but now that we have thousands of records it takes well over 30 seconds and of course causes a timeout in anything but SSMS.
I'm given to believe that creating and altering indices on tables in CRM is not considered an unsupported modification; but what about views?
I know that if we alter an entity then its views are recreated, which would of course make us redo our indices when this happens.
Is there any way to hint to CRM 4.0 that we want a specific index in place?
Another source recommends that where you get problems like this, then it's best to bring data closer together, but this isn't something I'd feel comfortable in trying to engineer into our solution.
I had considered putting in a new entity that only has RoomId, CourseId and an enrolment count in it, but that smacks of being incredibly hacky too; after all, an index would remove the need to duplicate this data and to have some kind of trigger that updates it after every student operation.
Lastly, whilst I know we're stuck on CRM 4 at the moment, is this the kind of thing that we could expect to have resolved in CRM 2011? It would certainly add more weight to the argument for upgrading this 5-year-old product.
Since views are "dynamic" (conceptually, their contents are generated on-the-fly from the base tables every time they are used), they typically can't be indexed. However, SQL Server does support something called an "indexed view". You need to create a unique clustered index on the view, and the query analyzer should be able to use it to speed up your join.
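For illustration, this is the general shape of an indexed view. One caveat for CRM: the generated views are not created WITH SCHEMABINDING, so indexing them would mean re-creating the view, which is exactly the kind of unsupported change discussed below. All names here are placeholders:
CREATE VIEW dbo.vw_RoomCourses
WITH SCHEMABINDING
AS
SELECT r.RoomId, r.CourseId, r.CourseName
FROM dbo.Rooms r;
GO
-- The unique clustered index is what materialises the view so the
-- optimizer can use it to speed up the join
CREATE UNIQUE CLUSTERED INDEX IX_vw_RoomCourses
ON dbo.vw_RoomCourses (RoomId, CourseId);
GO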
Someone asked a similar question here and I see no conclusive answer. The cited concerns from Microsoft are Referential Integrity (a non-issue here) and Upgrade complications. You mention the unsupported option of adding the view and managing it over upgrades and entity changes. That is an option, as unsupported and hackish as it is, it should work.
FetchXML does have aggregation, but the query execution plan still uses the views; here is the SQL generated from a simple select count from incident:
select
top 5000 COUNT(*) as "rowcount",
MAX("__AggLimitExceededFlag__") as "__AggregateLimitExceeded__"
from (select top 50001 case when ROW_NUMBER() over(order by (SELECT 1)) > 50000 then 1 else 0 end as "__AggLimitExceededFlag__" from Incident as "incident0" ...
I don't see a supported solution for your problem.
If you are building an outside admin app and you are hosting CRM 4 on-premise, you could go directly to the database for your query, bypassing the CRM API. Not supported, but it would allow you to solve the problem.
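If you do go direct, the query from the question could run against the organisation database largely as-is; in an on-premise CRM 4 install each entity gets a Filtered<entityname> view, so it might look something like the following (all entity, view and column names here are placeholders for whatever your customisations are actually called):
SELECT DISTINCT c.new_coursename, c.new_otherinformation
FROM FilteredNew_course c
INNER JOIN FilteredNew_student s ON s.new_courseid = c.new_courseid
WHERE c.new_roomid = @RoomId;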
I'm going to add this as a potential answer, although I don't believe it's a sustainable or indeed valid long-term solution.
After analysing the indexes that CRM had defined automatically, I realised that selecting more information in my query would be enough to fulfil the column requirements of one of those indexes, and now the query runs in less than a second.
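For anyone hitting the same thing, the concept at work is a covering index: once every column the query needs is present in (or included in) an index, SQL Server can answer the query from the index alone. Purely as an illustration with placeholder names (adding indexes to the ExtensionBase tables is the part usually considered acceptable, not modifying the views):
CREATE NONCLUSTERED INDEX IX_CourseExtensionBase_CourseId
ON dbo.New_CourseExtensionBase (New_CourseId)
INCLUDE (New_CourseName, New_OtherInformation);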