In a WebAPI Breeze implementation, can I avoid exposing the Metadata function? - asp.net-web-api

Exposing the Metadata function via the Breeze API is effectively the equivalent of exposing the underlying database schema.
Is there a way to avoid opening up the metadata API call? I tried not exposing it as suggested, and got the following error:
Query failed Metadata query failed for: /breeze/NorthWind/Metadata; The requested resource does not support http method 'GET'.
What is the right way to avoid exposing Breeze Metadata calls?

In addition to obtaining metadata via an API call, there are a couple of other approaches you could consider:
Load metadata from a script. With this approach, you embed the metadata in a script. After loading the script, the Breeze EntityManager can be initialized using the metadata from the script rather than calling an API. I've found this approach quite useful for unit testing when a dependency on accessing a server is undesirable. See Load metadata from script for more details.
Build metadata with hand-written JavaScript code to configure it. You probably wouldn't want to do this for a complex data model that you have already defined in Entity Framework, but it can be useful if that isn't a constraint. See Metadata by hand for a discussion of this approach.
Once you do have metadata, you can export and import the metadata (such as to the browser's window.localStorage area) using MetadataStore.exportMetadata and MetadataStore.importMetadata respectively.
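For illustration, here is a minimal sketch of the script/import approach. It assumes the metadata was produced earlier with MetadataStore.exportMetadata and that a metadata script assigns it to a global named northwindMetadata; that global name and the 'nw-metadata' localStorage key are hypothetical, while the service name matches the /breeze/NorthWind endpoint from the question.

// Assumes metadata.js (loaded before this code) sets a hypothetical global
// `northwindMetadata` containing the output of MetadataStore.exportMetadata.
var serviceName = 'breeze/NorthWind';

var metadataStore = new breeze.MetadataStore();
metadataStore.importMetadata(northwindMetadata);   // no call to /Metadata needed

var dataService = new breeze.DataService({
    serviceName: serviceName,
    hasServerMetadata: false    // tell Breeze not to ask the server for metadata
});

var manager = new breeze.EntityManager({
    dataService: dataService,
    metadataStore: metadataStore
});

// Optionally cache the metadata locally and re-import it on a later visit:
window.localStorage.setItem('nw-metadata', metadataStore.exportMetadata());
// metadataStore.importMetadata(window.localStorage.getItem('nw-metadata'));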

Breeze needs some form of metadata to work. If you do not want to expose your DB structure, you either have to manually change the generated metadata or introduce another layer between your DB entities and the data classes you want to expose. You can generate the metadata of the intermediate layer using data annotations.

Related

How to log calls to deprecated fields in apollo server

We marked some fields in our schema using the @deprecated directive. Now we want to log whether these fields are still in use by some of our clients. What would be the best way to do this without using Apollo Studio?
If you have access to the client code, then you can utilize GraphQL Inspector to check for deprecated usage. Using the CLI, you just do:
graphql-inspector validate DOCUMENTS SCHEMA
where DOCUMENTS is a glob pattern used to match the files containing the queries and SCHEMA is a pointer to the schema used for validation. The files containing the queries can be .graphql files or .js/.ts files. The schema pointer can be a URL to your schema or one or more .graphql files with your schema's type definitions. See here and here for additional ways to provide the schema and documents.
If you don't have access to the client code, or specifically need to log deprecated usage on every request, then you can write your own Apollo Server plugin and utilize GraphQL Inspector's programmatic API instead to validate each request's parsed document as it comes in. The parsed document will be available beginning with the validationDidStart lifecycle hook. See the docs for a complete example of how to write your own plugin.
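By way of illustration, here is a minimal sketch of such a plugin. It hooks validationDidStart as described above, but uses graphql-js's built-in NoDeprecatedCustomRule in place of GraphQL Inspector's programmatic API, and console.warn stands in for whatever logger you actually use.

// deprecated-usage-plugin.js -- a sketch, not a drop-in implementation.
const { validate, NoDeprecatedCustomRule } = require('graphql');

const deprecatedUsagePlugin = {
  async requestDidStart() {
    return {
      // The parsed document is available from validationDidStart onward.
      async validationDidStart({ schema, document, request }) {
        const deprecatedUsages = validate(schema, document, [NoDeprecatedCustomRule]);
        for (const usage of deprecatedUsages) {
          // Replace console.warn with your own logging/metrics pipeline.
          console.warn(
            `Deprecated usage in operation "${request.operationName || 'anonymous'}": ${usage.message}`
          );
        }
      },
    };
  },
};

module.exports = deprecatedUsagePlugin;

You would then register it in the plugins array when constructing ApolloServer.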

accessing table metadata from embedded Teiid server

I have embedded Teiid 12.3 in a Spring Boot application. I want to get into the metadata of my VDB in order to generate a diagram using graphviz-java. I assume that if I have an org.teiid.metadata.Table object, I can call getIncomingObjects() to get references to tables that the table depends on. I just can't figure out how to navigate from the EmbeddedServer to the Table objects.
I looked into using the administration API available via EmbeddedServer.getAdmin(). From there, I can call getVDBs(), and from there I can navigate down to getModels(), but below that level there is only the model source via getSourceMetadataText(). I also tried subclassing EmbeddedServer to make getVDBRepository() public. I can call getVDBRepository().getModels(), but it returns the same Model objects, which only give me access to the source definition of the models, not the runtime metadata model.
I tried getVDBRepository().getSystemStore() and VDBRepository.getODBCStore(), but those MetadataStores are not for the VDB I have deployed.
I haven't found any examples via Google, the Teiid JIRA, the Teiid forum, or Stack Overflow to help me.
Take a look at the getSchema method on the Admin API [1]; this method returns the string form of the metadata, but you can also grab the Schema object for the object form. If you do not want to go that way, Teiid also exposes the system catalog through a number of SYS tables, and you can issue SQL queries against them to grab the metadata of schemas and schema items in a VDB. One is for internal access, the other for external access.
BTW, one of our users created a dependency diagram tool that may be useful if you are trying to do something similar; see [2]. Let me know if you are interested in pushing that further.
[1] https://github.com/teiid/teiid/blob/master/runtime/src/main/java/org/teiid/runtime/EmbeddedAdminImpl.java#L544-L557
[2] https://github.com/teiid/metadata-catalog-ui

Persist a Spring Java Object into Magnolia repository

If I have an additional Spring application extending my Magnolia instance, which receives some Java object that will be used inside my application, how can I save it?
I have already learned how to do queries, but I cannot yet use them to put something in or change it; I can only fetch data from nodes.
Where or how do I persist?
For info: I have a repository which shall store this special data, and I have a node type declared for it. As it is currently the Spring Social UserConnection, I have the workspace "connections" with the node type mgnl:userConnection.
My Java object is a UserConnection, designed similarly to MgnlUser, so I also add properties, but I don't know yet what to do with path and uuid.
I don't know yet how to declare them or where to get them.
You can store the data the same way you fetch it. Assuming you are running your Spring app through the Magnolia filter chain, you have MgnlContext set up for the given thread and can easily call MgnlContext.getJCRSession("connections") to obtain the session and node the same way you do to retrieve your data. To add subnodes or set properties on a given node, you just call node.addNode("myNewNode") or node.setProperty("myProp", "newValue") on the node and follow that with a call to session.save() to persist the changes. But I guess you already know all that.
If you want the system to serialise the whole object into the repository for you instead, you can use Jackrabbit OCM, or, even easier, use the integration of OCM into Magnolia: http://jira.magnolia-cms.com/browse/MJROCM. It's already used in the Shop module of Magnolia if you are looking for examples of how to work with OCM.
HTH,
Jan

Combining metadata from multiple sources

In a SPA using Breeze, how would I go about combining metadata from multiple sources for related data so that I can use them in one manager on the client? For example, I might have the following:
Entity Framework Metadata from WebAPI controller (e.g. Account)
Custom Metadata from DTOs (e.g. Invoices)
Data from a third party service with metadata provided from client side metadata (e.g. Invoice transmission result)
In each case the data has related properties, so I might want to be able to use Account.Transactions.TransmissionResults.
UPDATE
I have tried several ways of getting this to work, but to no avail. From Jay's answer, it is not possible at present to update the metadata from the server once it has been retrieved, so unless and until that changes (see the Breeze User Voice issue) I am left with one of the following approaches:
1. Retrieve metadata from the server from Entity Framework and add metadata on the client for the extra entities. This worked to a degree, but I could not add navigation properties from entity types added on the client to entity types retrieved from the server, because I cannot add the foreign key association to the entity retrieved from the server; again, this comes back to the need to modify metadata after it has been retrieved.
2. Write the complete metadata by hand, which would work but makes maintenance that much harder, and it seems wrong to be manually writing mostly the same code that the designer would write.
3. Generate most of the code from Entity Framework as described in the docs and then update it afterwards to add in the custom entities. Again, similar issues to option 2; it seems hacky.
Has anyone else tried something similar? Is there something I am missing? There could well be; I've only just started with Breeze and JS.
Thanks
A Breeze EntityManager can have metadata from any number of DataService endpoints, and you can manually add metadata (new EntityTypes) on the client at any point. The only current restriction is that once you have metadata from a specific service, you can't change it. (We are considering reviewing that last restriction.)
So the question is, what are you trying to do that you can't right now?
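To make the first point concrete, here is a minimal sketch of one manager backed by a single MetadataStore that is fed by two endpoints plus a hand-added, client-only type. The endpoint names 'breeze/Accounts' and 'breeze/Invoices' and the TransmissionResult type are made up for illustration.

var store = new breeze.MetadataStore();

// Metadata fetched from two different DataService endpoints merges into the same store.
var p1 = store.fetchMetadata('breeze/Accounts');
var p2 = store.fetchMetadata('breeze/Invoices');

// A client-only EntityType for the third-party transmission results.
var DT = breeze.DataType;
var resultType = new breeze.EntityType({
    shortName: 'TransmissionResult',
    namespace: 'MyApp.Client',
    defaultResourceName: 'TransmissionResults'
});
resultType.addProperty(new breeze.DataProperty({ name: 'id', dataType: DT.Int32, isPartOfKey: true }));
resultType.addProperty(new breeze.DataProperty({ name: 'invoiceId', dataType: DT.Int32 }));
resultType.addProperty(new breeze.DataProperty({ name: 'status', dataType: DT.String }));
store.addEntityType(resultType);

Promise.all([p1, p2]).then(function () {
    // One manager, one metadataStore, several metadata sources.
    var manager = new breeze.EntityManager({
        serviceName: 'breeze/Accounts',
        metadataStore: store
    });
    // Queries against either endpoint can now share this manager.
});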

Grails store and fetch data on client side

Background: We are using Grails 2.1.1. We are not using any DB as of now; we make a web service call to another server for each response.
Now the problem is that there is a web service call which returns some static data in XML form, and this data is usable throughout the application. The size of the XML is around 40 KB. It contains static data like project_list, status_type_list, etc., and we have to use this in various dropdowns and menu items on different GSP pages.
Please suggest the best way to handle this data, so that it doesn't affect our page load time and browsing experience, and so that we can easily use the data on the client side.
Responding to your comment on the question: I would prefer annotation-based caching over the plugin if the requirement is as simple as you state.
If the calls are being made from the server side and you want to cache the results of the parsed XML, then you can do something like:
@Cacheable("staticDataCache")
def getStaticDataFromXML() {}
You can then use the above method to pull the maps, lists, or whatever data structure you've used to store the result, and it will be pulled from the cache.
Then add another service method to flush the cache, which you can call periodically from a Job:
@CacheFlush("staticDataCache")
def flushStaticDataCache() {}
Use the cache plugin to cache the static XML data, and then add some policy as to when the cache should be updated (e.g. using a job to check every hour whether the XML has changed).
