Mock Elasticsearch nested buckets

I'm trying to mock an Elasticsearch nested aggregation object with the following structure. I followed the approach described in the linked question, but couldn't make it work for a nested object:
Mock Elastic Search response in .NET
Here is the real Elasticsearch object I've been trying to mock.
var obj = new AggregationDictionary
{
    {
        "key1", new TermsAggregation("key1")
        {
            Field = "1234",
            Aggregations = new AggregationDictionary
            {
                {
                    "top", new TopHitsAggregation("top")
                    {
                        Size = 10
                    }
                }
            }
        }
    }
};

After struggling with my attempts to mock the Elasticsearch object, I came up with the following solutions:
Using an in-memory connection
The NEST client documentation shows how to inject your desired response in advance. You can then map the response to your type-safe object without ever opening a connection to Elasticsearch. I recommend using this technique for integration tests rather than unit tests.
For unit tests, the best option is to abstract your service layer behind an interface and ignore the Elasticsearch engine entirely.
Just override the service methods with your mock object. I use Moq and return whatever response you want, for example:
var mockObj = new Mock<IClientRepository>();
mockObj.Setup(x => x.GetData()).Returns("put your response here");

Related

apollo kotlin multiple schema sharing fragments

The service I want to work with exposes two GraphQL APIs that have overlapping types.
How can I share as many fragments as possible between them?
After fetching the schema.graphqls files via introspection and saving them to their source folders, I tried generating the files (query, mutation, fragments), but only value types were generated.
apollo {
    fun Service.configure() {
        includes.add("com/example/shared")
    }
    service("api1") {
        configure()
        packageName.set("com.example")
        sourceFolder.set("com/example/api1")
    }
    service("api2") {
        configure()
        packageName.set("com.example.api2")
        sourceFolder.set("com/example/api2")
    }
}

How to test a Nest.js parameter decorator that uses graphql execution context

I have created a decorator to retrieve the graphql query context and do some logic with it. It looks like this:
export const GraphQlProjections = (options?: ProjectionOptions) =>
  createParamDecorator<ProjectionOptions, ExecutionContext, string[]>(
    (opts: ProjectionOptions, ctx: ExecutionContext) => {
      const gqlContext = GqlExecutionContext.create(ctx)
      const info = gqlContext.getInfo()
      const fields = Object.keys(fieldsProjection(info, opts))
      return fields
    }
  )(options)
I'd like to write some unit tests for this, but I have no idea how to even go about it.
I've found some documentation on getting the decorator factory, but it does not help with setting up or mocking the execution context. The Apollo Server docs seem to reference some sort of mocking, but don't tell me how to achieve my goal.
I basically need to say: given this query, say Query { user { name } }, what will my decorator return?
To achieve this, it seems I need to somehow mock the execution context to contain a GraphQLResolveInfo object, which I also somehow need to generate. How can I achieve this? Or am I going about this the wrong way?
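One self-contained way to approach this is to factor the callback's logic into a plain function and feed it a hand-built info object. This is only a sketch under assumptions: projectFields, FakeResolveInfo, and the fake info shape below are illustrative stand-ins, not the real Nest or fieldsProjection APIs.

```typescript
// Sketch: pull the decorator's inner logic into a plain function so it can be
// unit-tested without Nest's DI container. The types below are simplified
// stand-ins for GraphQLResolveInfo and the projection helper.

interface FakeSelection { name: { value: string } }
interface FakeResolveInfo {
  fieldNodes: { selectionSet: { selections: FakeSelection[] } }[];
}

// Stand-in for the fieldsProjection(info, opts) call: collect the top-level
// selected field names from the fake info object.
function projectFields(info: FakeResolveInfo): string[] {
  return info.fieldNodes.flatMap((node) =>
    node.selectionSet.selections.map((sel) => sel.name.value)
  );
}

// A hand-built info object for a query like `query { user { name } }`,
// as seen from the `user` field's resolver:
const fakeInfo: FakeResolveInfo = {
  fieldNodes: [{ selectionSet: { selections: [{ name: { value: "name" } }] } }],
};
```

In the actual test you would stub whatever GqlExecutionContext.create(ctx).getInfo() returns with such an object and assert on the callback's return value.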

Spring Data MongoDB: How to describe aggregation $merge with Spring Aggregation?

The code I want to execute via MongoTemplate:
{
  $merge: {
    into: "someCollection",
    on: "_id",
    whenMatched: "merge",
    whenNotMatched: "discard"
  }
}
I did not find any suitable methods that let me describe the $merge stage. Does Spring Data MongoDB even support this stage?
Yes, Spring Data MongoDB has support for the $merge stage.
Your code can be executed with MongoTemplate in the following way.
MergeOperation mergeOperation = Aggregation.merge()
        .intoCollection("someCollection")
        .on("_id")
        .whenMatched(MergeOperation.WhenDocumentsMatch.mergeDocuments())
        .whenNotMatched(MergeOperation.WhenDocumentsDontMatch.discardDocument())
        .build();
Then include this mergeOperation as a stage in the aggregation pipeline you pass to mongoTemplate.aggregate(...).

Idiomatic way of verifying a reactive request before actually persisting to the database

I have an endpoint that accepts as well as returns a reactive type. What I'm trying to achieve is to verify that the complete reactive request (which is actually an array of resources) is valid before persisting the changes to the database (read: a full update of a resource). The question is not so much concerned with how to actually verify the request, but with how to chain the steps together, using which of Spring's reactive operators (map, flatMap, and the like), in the desired order, which is basically:
verify the correctness of the request (the resource is properly annotated with JSR-303 annotations)
clear the current resources if the request is valid
persist the new resources in the database after clearing it
Let's assume the following scenario:
val service: ResourceService

@PostMapping("/resource/")
fun replaceResources(@Valid @RequestBody resources: Flux<ResourceDto>): Flux<ResourceDto> {
    var deleteWrapper = Mono.fromCallable {
        service.deleteAllResources()
    }
    deleteWrapper = deleteWrapper.subscribeOn(Schedulers.elastic())
    return deleteWrapper.thenMany(
        resources
            .map(mapper::map) // map to model object
            .flatMap(service::createResource)
            .map(mapper::map) // map to DTO object
            .subscribeOn(Schedulers.parallel())
    )
}

// alternative try
@PostMapping("/resourceAlternative/")
override fun replaceResourcesAlternative2(@RequestBody resources: Flux<ResourceDto>): Flux<ResourceDto> {
    return service.deleteAllResources()
        .thenMany(
            resources
                .map(mapper::map)
                .flatMap(service::createResource)
                .map(mapper::map)
        )
}
Whats the idiomatic way of doing this in a reactive fashion?

Apollo Server - parse REST result in Connector, Resolver or Model

I am wrapping an older REST API service with an Apollo server. Calls to the REST service result in a JSON object that nests the payload two to three levels deep. For example:
{
  "MRData": {
    "CatTable": {
      "Cats": [ ... ]
    }
  }
}
And to further complicate matters, the nesting pattern and node names are different for each resource endpoint. So my question is: since each resource result will need custom manipulation, where is the best place to do it: in the Connector, Resolver, or Model?
Connector
If done in the Connector, then a custom method is needed for each resource. Seems like a lot of boilerplate.
public fetchCats(resource: string) {
  return new Promise<any>((resolve, reject) => {
    request.get(url, (err, resp, body) => {
      err ? reject(err) : resolve(JSON.parse(body).MRData.CatTable.Cats)
    })
  })
}
Resolver
The resolver method receives a promise, but the result cannot be manipulated:
const allCats = (_, params, context) =>
  context.cat.getCats().then((data) => {
    // too late to manipulate data here
  })
Model
The Model looks promising but not quite sure how to structure it:
public getCats() {
  const cats = this.connector.fetchCats('/cats.json');
  return cats;
}
Apollo will, more often than not, be integrated with REST APIs. I'm looking forward to discovering the best way to handle this case.
I would generally recommend doing the parsing in the connector, because connectors should abstract over the details of the backends. If they do, you should technically be able to switch out one backend for another where appropriate. For example, you could switch from querying a REST API to sending queries directly to a database where it makes sense.
The consequence of this is that you'll need to build a new connector for every REST API, because no two REST APIs are the same.
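To cut down the per-resource boilerplate mentioned above, the endpoint-specific unwrapping can be reduced to one generic helper plus a path per resource. This is a minimal sketch: extractPayload, CatConnector, and the stubbed fetchRaw are assumed names for illustration, not part of Apollo's API.

```typescript
// Sketch of a connector that hides each endpoint's nesting pattern.
// The REST call itself is stubbed out; only the unwrapping logic matters here.

// Walk a dot-separated path (e.g. "MRData.CatTable.Cats") into a parsed body.
function extractPayload(body: unknown, path: string): unknown {
  return path.split(".").reduce(
    (node: any, key) => (node == null ? undefined : node[key]),
    body
  );
}

class CatConnector {
  // In a real connector this would be an HTTP request; here it is a stub
  // returning the kind of deeply nested body described in the question.
  private fetchRaw(_resource: string): Promise<string> {
    return Promise.resolve(
      JSON.stringify({ MRData: { CatTable: { Cats: ["tabby", "calico"] } } })
    );
  }

  // Each resource method knows its own payload path, so resolvers and
  // models never see the backend's nesting.
  async fetchCats(): Promise<string[]> {
    const body = await this.fetchRaw("/cats.json");
    return extractPayload(JSON.parse(body), "MRData.CatTable.Cats") as string[];
  }
}
```

A resolver then just calls context.cat.fetchCats() and receives a flat array, and adding a new resource means adding one method with its path rather than a new hand-rolled parser.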
