Using a DataServiceContext with custom annotations - asp.net-web-api

When using an instance of the DataServiceContext class to materialise objects from an OData endpoint that exposes custom annotations, how does one get hold of the annotation data? I can't see any obvious extensibility points.

Custom annotations aren't exposed as a first-class concept on the DataServiceContext, but you can access them by hooking into the client's response processing pipeline. The following code runs after each entity has finished being read:
context.Configurations.ResponsePipeline.OnEntryEnded(
    entryArgs => DoSomething(entryArgs.Entry.InstanceAnnotations));
Internally, the WCF Data Services client uses a lower-level library called ODataLib (a.k.a. Microsoft.Data.OData on NuGet). The response and request pipelines let you dip into that lower level to get extra information when you need it, while still keeping all the conveniences of the full-fledged WCF Data Services client library. Classes such as ODataEntry and ODataFeed that you work with in the processing pipelines are all part of the ODataLib API.

Related

How to use Apollo Server DataSource to call a GraphQL API

In our GraphQL API (Apollo Server) we would like to add a new dataSource that accesses GitHub's GraphQL API, whose data we want to consume. It appears that using apollo-datasource-rest is a good approach for this: it's an established, still-maintained module that provides caching, access to context and other dataSource benefits, and it's managed by the Apollo team. We want to verify that this is a good approach for making requests to other GraphQL APIs.
Other options are:
Roll our own dataSource, which doesn't seem necessary and has no apparent benefits
Build out a dataSource using @apollo/client
There is a module, apollo-datasource-graphql, which appears to fit this perfectly, though it has not been updated in two years and may be unfinished, with tests and request caching incomplete.
Is using apollo-datasource-rest a good practice for accessing other GraphQL APIs as a dataSource in a GraphQL server service?
Is there a better, more established approach for doing this?
We had the same concern, since our backend needs to consume a GraphQL API as a client. The REST approach expects HTTP GET requests to be cacheable, but not verbs like POST, PUT, DELETE... My understanding of GraphQL is that if you only use HTTP POST as the communication pattern, apollo-datasource-rest cannot handle caching for your queries, so it may not be the appropriate library.
Other approaches to consider:
apollo-datasource-http
Apollo Server (and the GraphQL specification) also supports GET queries, so that may solve the apollo-datasource-rest caching issue
use graphql-code-generator to generate the consumer of the target GraphQL API (and then use the generated client directly inside a service, or define a custom dataSource that wraps it)

Validate a JSON object against a RAML datatype in Mule 3.9

Is there a mechanism in Mule 3.9.x for validating that a JSON entity is a valid instance of a specific RAML datatype, without manually generating and maintaining JSON-schema files and including them in the project?
The APIKit router does perform this kind of validation, but it also requires that the incoming message contain other REST information, such as the resource path and HTTP method.
I would like to use this to validate the input payloads of non-HTTP endpoints (JMS, AMQP, etc.) as well as a postcondition of REST APIs (to find violations of RAML contracts sooner in testing and lower environments).
Yes, it's possible. You can create a JSON schema using https://jsonschema.net/ and use it with the JSON Schema Validator component in your Mule flow:
<json:validate-schema schemaLocation="myJsonSchema.json" doc:name="Validate JSON Schema"/>

Development compromises in using Spring Cloud Stream

The case for event-driven microservices such as Spring Cloud Stream is their asynchronous nature, which I agree makes them more scalable.
But I have an issue with how to code them in a way where I don't lose certain key features that I have access to in synchronous services.
In a servlet-based microservice, I make full use of servlet context variables and servlet-based Spring autowiring functions.
For example, I rely heavily on HTTP headers to carry metadata between microservices without having to touch the payload. But in Spring Cloud Stream with Kafka, Kafka doesn't support message headers of any kind! I lose that immediately if I use SCS. Putting the metadata into the payload causes all sorts of changes in my model classes if I define the attributes explicitly. Yes, I can use a simple HashMap to simulate the HTTP header object, but that really seems like reinventing the wheel.
On the autowiring side: I maintain an audit log record per request, which I implement by declaring a request-scoped HashMap bean and autowiring it into any method in the servlet's call stack that needs to append data to the audit log. Basically it's just a global variable to hold some data within a single request, along the lines of the sketch below. But in SCS, again, I lose that because bean scopes that rely on servlets are not available.
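For context, a minimal sketch of that servlet-side pattern (the AuditLog name and its methods are illustrative, not from any framework):

import java.util.HashMap;
import java.util.Map;
import org.springframework.stereotype.Component;
import org.springframework.web.context.annotation.RequestScope;

// One instance per HTTP request; any bean in the request's call stack
// can have this autowired in and append audit data to it.
@Component
@RequestScope
public class AuditLog {

    private final Map<String, Object> entries = new HashMap<>();

    public void append(String key, Object value) {
        entries.put(key, value);
    }

    public Map<String, Object> entries() {
        return entries;
    }
}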
So far, there seem to be a lot of trade-offs that I have to make just to get Spring Cloud Stream to work for me.
I thought about an alternative approach where I use SCS just to create an entry point, but the Source would simply receive the event and use a Processor to construct an HTTP request and send it along to an HTTP endpoint. But then, why go through all that trouble?
Hoping that some more experienced devs can shed some light on how they leverage SCS.
@feicipet Thanks for the detailed question. Let me try to address some of your concerns in the order you listed them:
+1
+1
I am not sure why you are referring to it as servlet-based instead of Spring-based. Those are features provided by Spring, but read on...
Spring Cloud Stream doesn't use Kafka; the end user does, while Spring Cloud Stream provides a Kafka binder that lets Spring Cloud Stream integrate with Kafka. Furthermore, while Kafka indeed did not support headers prior to version 0.11, Spring Cloud Stream has always supported headers, and will continue to support them even with pre-0.11 Kafka, by embedding them in the message on the producer side and extracting them into proper Message headers on the consumer side, completely transparently to the end user. In other words, simply by using Spring Cloud Stream one would assume that Kafka did support headers. With Kafka 0.11+, headers are supported natively and we have adjusted to that with the same level of transparency.
So you don't need to put anything in the payload. Just create an appropriate Message<payload, headers> and SCSt will take care of the rest, regardless of the broker (Kafka, Rabbit, Foo, etc.). For example:
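A minimal sketch of what that looks like with Spring's MessageBuilder (the Order payload type and the header names are hypothetical):

import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;

public class OrderMessages {

    // Hypothetical payload type standing in for your model class.
    public record Order(String id, double amount) {}

    // Metadata travels as message headers, not as payload fields,
    // so the model class stays untouched.
    public static Message<Order> toMessage(Order order,
                                           String correlationId,
                                           String auditUser) {
        return MessageBuilder
                .withPayload(order)
                .setHeader("x-correlation-id", correlationId)
                .setHeader("x-audit-user", auditUser)
                .build();
    }
}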
Yes, you do, simply because, as you alluded to earlier, SCSt promotes an asynchronous and stateless architecture. However, I do not agree that what you are trying to accomplish is un-accomplishable. Rather, it is just not accomplishable the way you are describing; there are other ways to maintain context, and I would be more than glad to discuss that as a separate topic.
I would not call them trade-offs, but rather differences in architecture. It has its benefits, but it is not a one-size-fits-all architecture, and therefore its viability should be discussed within the context of a concrete use case.
+1. You don't have to separate it into a Source and a Processor. You can simply create a custom Source app with an exposed REST endpoint and custom processing logic, along the lines of the sketch below. However, we are currently working on enhancements in the framework to ensure that you can do the same with the existing starter apps.
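A minimal sketch of such a custom source, assuming a recent Spring Cloud Stream version where StreamBridge is available (the /orders path and the orders-out-0 binding name are hypothetical):

import org.springframework.cloud.stream.function.StreamBridge;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class OrderIngestController {

    private final StreamBridge streamBridge;

    public OrderIngestController(StreamBridge streamBridge) {
        this.streamBridge = streamBridge;
    }

    @PostMapping("/orders")
    public void ingest(@RequestBody String order) {
        // Custom processing logic goes here before publishing
        // to the output binding.
        streamBridge.send("orders-out-0", order);
    }
}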
Obviously we have touched on many points here and some of them would probably need to be debated further, but I hope this clears up some of your concerns.
Cheers

To wrap back-end system clients or not in Spring

I have an application with Spring, and I need to call many different types of back-end systems (legacy mainframe, ESB, RESTful...). If we take e.g. REST, I can implement a RESTful client with e.g. RestTemplate. I can A) have developers use the RestTemplate client directly, passing it the service URL and data object. Or I can B) wrap RestTemplate inside our own back-end-specific client and offer explicit methods that developers can use. The methods themselves would then of course use RestTemplate to make the actual back-end calls.
The good thing about A) is that changes in back-end systems do not require changes to the client. The downside is that we don't hide the architecture. B) is clearer for developers and easier to "manage", but changes to back-end systems require us to update all applications that want to use the new back-end functionality. Even worse, a change in back-end functionality may require all services to be updated.
Still, I am personally leaning towards option B), because it provides such a nice separation of business logic and architecture services for developers (see the sketch below).
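A minimal sketch of option B (the back-end URL and the Customer type are hypothetical):

import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

// Back-end-specific client: developers call getCustomer() instead of
// dealing with URLs and RestTemplate directly.
@Service
public class CustomerBackendClient {

    private final RestTemplate restTemplate;

    public CustomerBackendClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public Customer getCustomer(String id) {
        return restTemplate.getForObject(
                "https://backend.example.com/customers/{id}",
                Customer.class, id);
    }
}

// Hypothetical DTO for the back-end response.
record Customer(String id, String name) {}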
I don't understand how you came to the conclusion that clients don't need an update (option A) if they want to use new functionality or if the API breaks because of a change.
I think option B is better. But I would use the HTTP client Feign to create request templates and then publish the interfaces. This way you won't even have to wrap a RestTemplate and manually implement every request.
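A minimal sketch with the core Feign API (the OrderApi interface, base URL, and types are hypothetical):

import feign.Feign;
import feign.Param;
import feign.RequestLine;
import feign.gson.GsonDecoder;

// Declarative request template: Feign generates the implementation.
interface OrderApi {
    @RequestLine("GET /orders/{id}")
    Order findById(@Param("id") String id);
}

// Hypothetical DTO for the response body.
class Order {
    String id;
    double amount;
}

class OrderApiFactory {
    static OrderApi create(String baseUrl) {
        return Feign.builder()
                .decoder(new GsonDecoder())      // from the feign-gson module
                .target(OrderApi.class, baseUrl);
    }
}

Callers then obtain an implementation with OrderApiFactory.create("https://backend.example.com") and simply call findById.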

How to create a dynamic dispatch client using apache CXF

I want to use Apache CXF to build my client. Unfortunately, I do not see a way for it to dispatch a client dynamically based on the port and operation name. If there is a huge WSDL, JaxWsDynamicClientFactory would create classes for all the services contained in it, which is an overhead I'd like to avoid.
I found a similar implementation in JAX-WS. Is there any api in CXF that would do the same?
CXF supports the JAX-WS Dispatch API, which is a low-level interface to SOAP.
That means you can create a Dispatch that represents a particular port type on a service and then invoke operations by building the messages yourself:
// Set things up; the QNames below are placeholders that would come
// from your WSDL, and wsdlUrl/request are assumed to be in scope.
QName serviceName = new QName("http://example.com/ns", "MyService");
QName portName = new QName("http://example.com/ns", "MyPort");
Service s = Service.create(wsdlUrl, serviceName);
Dispatch<DOMSource> dispatch =
    s.createDispatch(portName,               // << a QName!
                     DOMSource.class, Service.Mode.PAYLOAD);
// Construct the request message (a DOM Node) here
Node response = dispatch.invoke(new DOMSource(request)).getNode();
// Understand the response message here
Of course, that then means you've got to work with the DOM for the messages, which is highly annoying. I think that's the part where tooling is really worthwhile.
