How to find all system schemas in a given Vertex metadataStore? - google-cloud-vertex-ai

Vertex ML Metadata publishes and maintains system-defined schemas for representing common types widely used in ML workflows. How can I find all the supported system schemas in my own Vertex MetadataStore?

System-defined schemas are maintained by Vertex services and are updated when new schemas are added to the system. These schemas are managed under the "system" and "google" namespaces. To find all currently supported system-defined schemas in a given store, follow the instructions listed at https://cloud.google.com/vertex-ai/docs/ml-metadata/system-schemas#list_your_schemas.
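For example, here is a minimal Python sketch using the low-level aiplatform_v1 client, assuming the google-cloud-aiplatform package is installed, the store in question is the automatically created "default" MetadataStore, and PROJECT_ID / REGION are placeholders you replace:

# Minimal sketch: list the metadata schemas registered in a MetadataStore.
from google.cloud import aiplatform_v1

PROJECT_ID = "my-project"   # placeholder
REGION = "us-central1"      # placeholder

# Vertex AI endpoints are regional, so point the client at the right region.
client = aiplatform_v1.MetadataServiceClient(
    client_options={"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
)

# "default" is the metadata store Vertex creates for each project/region.
parent = f"projects/{PROJECT_ID}/locations/{REGION}/metadataStores/default"

# System-defined schemas appear under the "system" and "google" namespaces,
# alongside any custom schemas you registered yourself.
for schema in client.list_metadata_schemas(parent=parent):
    print(schema.name, schema.schema_version)

Each returned MetadataSchema also carries the full schema definition in its schema field if you need more than the resource name and version.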

Related

How to create return objects when the schema (ObjectGraphType, QueryType, etc.) is generated at runtime

In a multi-tenant environment where each tenant's database schema design can evolve after the services launch, a GraphQL solution does not seem straightforward.
What I was able to do is use a single 'Root' schema (an ISchema) inside a wrapper schema (also an ISchema), which is the one actually exposed to the other layers of the GraphQL architecture, and create GraphTypes (QueryTypes, etc.) as Fields of 'Root' at runtime.
This works at least for registering those GraphTypes as run-time Fields of the Root schema; however, when I add a resolver using resolve : context, I am unable to display the object(s) returned from the repository in a client such as Altair, and instead get the error "Error trying to resolve field '{fieldname}'".
How can I return the data so that it is actually bound through the GraphQL layers up to Altair?

Apollo GraphQL: implement a shared schema across a federated GraphQL service

I'm currently following a tutorial that composes multiple subgraphs into a supergraph, which is great for separation of concerns. What I'm wondering is whether it is possible to have multiple subgraphs, split out, contributing data to a shared schema.
So, for example, within the tutorial:
Gateway: composed of a Products subgraph, a Locations subgraph, and a Reviews subgraph.
What I'd like to achieve (wondering whether it's even possible):
Gateway: composed of Products from one API, along with Products from another API, along with Products from yet another API.
That is, different microservices building the data, but a shared common data model for Products used across them.
The docs I'm reading at present say, "By default, exactly one subgraph is responsible for resolving each field in your supergraph schema", which suggests that it is possible? But at this point I'm not entirely sure, or even certain that GraphQL is the right technology for my problem.

Normalize FHIR bundle data into separate database tables

We get FHIR bundles from a vendor, mostly Patient, Encounter, Observation, Flag, and a few other resources (10 in total). We have the option to store the resources as JSON values, or we can come up with a process to normalize all the nested structures into separate tables. We are going to use traditional BI tools to do some analytics and build some dashboards, and these tools do not support JSON natively. Should we do the former or the latter, and what is the best/easiest way to build/generate these normalized tables programmatically?
Ultimately how you decide to store these is not part of the scope of FHIR, and any answer you get on here is going to be one person's opinion. You need to figure out what method makes the most sense for the product/business you're building.
Here are some first principles that may help you:
Different vendors will send you different FHIR: fields may be missing, and different code systems may be used.
FHIR extensions contain a lot of valuable information, and their JSON representation is an entity-attribute-value (EAV) structure. EAV is an anti-pattern for relational databases.
FHIR versions will change over time: fields will be added or renamed, and new extensions will become relevant.
As for your second question about generating the tables: I think you will be best served by designing the data model you need and mapping the FHIR data to it. That said, there are a number of open-source FHIR implementations you can study for inspiration.
Modern databases like PostgreSQL, Oracle, and MS SQL have good support for a JSON datatype. To flatten FHIR resources for BI, you can consider building relational (possibly normalised) views. We built a simple DSL that lets you describe a destination relation as a set of (FHIR) paths into the resource.
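As a purely illustrative Python sketch of that "relation = set of paths" idea (the table layout, column names, and paths below are assumptions, not the DSL mentioned above), the flattening can look like this:

# Hypothetical mapping: one flat table per resource type, each column
# defined by a dotted path into the resource JSON (numeric parts index lists).
from typing import Any

PATIENT_COLUMNS = {
    "id": "id",
    "gender": "gender",
    "birth_date": "birthDate",
    "family_name": "name.0.family",
}

def get_path(resource: Any, path: str) -> Any:
    """Walk a dotted path such as 'name.0.family' through dicts and lists."""
    current = resource
    for part in path.split("."):
        if isinstance(current, list) and part.isdigit():
            index = int(part)
            current = current[index] if index < len(current) else None
        elif isinstance(current, dict):
            current = current.get(part)
        else:
            return None
        if current is None:
            return None
    return current

def flatten_patients(bundle: dict) -> list[dict]:
    """Turn the Patient entries of a FHIR bundle into flat rows for a BI table."""
    rows = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") == "Patient":
            rows.append({column: get_path(resource, path)
                         for column, path in PATIENT_COLUMNS.items()})
    return rows

Each additional table (Encounter, Observation, and so on) then becomes another column-to-path dictionary plus a resourceType filter, and the resulting rows can be bulk-loaded into the normalised tables your BI tools query.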

Make a target act as a source in ODI 12c - flow mapping

In ODI 12c, a mapping can load data from a source into a target, but sometimes, within the same mapping, the target needs to act as another source for a new target,
i.e.
Source -> target (acting as a source) -> target, and so on...
What is the best methodology to achieve that? I have read about the reusable mapping and the lookup component, but what would be the most feasible and sound way?
You can use a mapping, but you should have multiple data models as your sources and targets.
Here is an example with two different sources and two different targets:
We have two sources, one using file technology (the DM_FILE_AS_SOURCE data model) and one using Oracle technology (the DM_ORACLE_AS_SOURCE_TARGET data model), and two targets, one using Oracle technology (the DM_ORACLE_AS_SOURCE_TARGET data model) and another using Oracle technology (the DM_ORACLE_AS_TARGET data model).
The mapping is very simple, of type "Control Append", and it works well.
Hope this sample helps you.

How do I determine what's using Oracle Spatial?

We have an Oracle Enterprise Edition 10 installation, and as it has been explained to me by our DBAs, Oracle Enterprise installs include all extensions and you're simply licensed by what you use.
We've discovered we're using Oracle Spatial, but we don't want to be. I can confirm for myself that it's being used with this SQL:
select * from dba_feature_usage_statistics;
Unfortunately, that's all I can find out. We have a large number of applications which use Spatial elements, but having asked all of our vendors, they assure us their apps are using Oracle Locator (which is the free subset of Spatial).
So my question is simple: how do I discover exactly which app is using the Oracle Spatial extension?
Alternatively (brought to light by ik_zelf's answer), how do I prove I'm only using the Locator subset of Spatial?
Check the sdo metadata:
select * from mdsys.sdo_geom_metadata_table where sdo_owner not in ('MDSYS', 'OE')
When you dig a little deeper into dba_feature_usage_statistics, you will find this query used as part of the determination of what is being used and what is not. The schemas MDSYS and OE are not counted, even when they have SDO objects.
There is a list of the functionality that is part of Oracle Spatial vs. Oracle Locator on the Oracle website: http://docs.oracle.com/cd/B19306_01/appdev.102/b14255/sdo_locator.htm#SPATL340 - specifically, pay attention to the section that lists features only available in Oracle Spatial.
The short story is that (basically) the following things are off the table for Locator:
Topology
Network data model
GeoRaster
Geocoding
In-built data mining functions
Linear referencing
Some spatial aggregation functionality
Some parts of the sdo_geom package
Storage, indexing, partitioning, the sdo_util package, coordinate transformations, and more are all fully within Locator. I would simply check the dba_source view for any stored procedures that use any of the prohibited functions.
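As a hedged illustration of that dba_source check, here is a Python sketch using the python-oracledb driver (on a database this old you may need the legacy cx_Oracle driver instead, which offers the same calls); the connection details are placeholders, and the list of Spatial-only package prefixes is an assumption based on the documentation linked above, so verify it against your release:

# Scan stored PL/SQL source for references to packages assumed to be
# Spatial-only (not part of Locator).
import oracledb

SPATIAL_ONLY_PREFIXES = ["SDO_TOPO", "SDO_NET", "SDO_GEOR",
                         "SDO_GCDR", "SDO_LRS", "SDO_SAM"]

# Placeholder credentials/DSN: replace with your own connection details.
with oracledb.connect(user="system", password="change_me",
                      dsn="dbhost/orclpdb") as conn:
    with conn.cursor() as cur:
        for prefix in SPATIAL_ONLY_PREFIXES:
            cur.execute(
                "select owner, name, type, line, text "
                "from dba_source "
                "where upper(text) like :pattern",
                pattern=f"%{prefix}%",
            )
            for owner, name, obj_type, line, text in cur:
                print(owner, name, obj_type, line, (text or "").strip())

Anything this turns up outside of MDSYS is a candidate for a real Spatial dependency; an empty result at least supports the claim that only the Locator subset is in use inside the database.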
For code outside of the database, I guess you have to take someone's word for it, but in my experience external applications tend to use their own methods rather than Oracle in-built features.
