Spring Cloud Contract with a generic API - spring

How do I use Spring Cloud Contract with a generic API? I'm asking about REST contracts on the producer service. Consider an example: I have a service that stores user data in different formats in a database and acts as a proxy between services and the database. Its request has parameters required for all consumers, and parameters that depend on the consumer.
class Request<T> {
    Long requestId;
    String documentName;
    T documentContent;
}
And it has two consumers.
Consumer 1:
{
  "requestId": 1,
  "documentName": "login-events",
  "documentContent": {
    "userId": 2,
    "sessionId": 3
  }
}
Consumer 2:
{
  "requestId": 1,
  "documentName": "user-details",
  "documentContent": {
    "userId": 2,
    "name": "Levi Strauss",
    "age": 11
  }
}
As you can see, documentContent depends on the consumer. I want to write contracts that check the content of this field on the consumer side and ignore it on the producer side. Options like
"documentContent": ["age": $(consumer(11))] //will produce .field(['age']").isEqualTo(11)
and
"documentContent": ["age": $(consumer(11), producer(optional(anInteger())))] //will require field presence
didn't work. Of course I could write "documentContent": [] or even omit this field from the contracts, but I want them to act as REST API documentation. Does anybody have ideas on how to solve this?

Ignore the optional element and define 2 contracts: one with the age value and one without it. The one with the age value should also contain a priority field. You can read about priority here: https://cloud.spring.io/spring-cloud-static/spring-cloud-contract/2.2.0.RELEASE/reference/html/project-features.html#contract-dsl-http-top-level-elements
It would look more or less like this (contract in YAML):
priority: 5 # lower value of priority == higher priority
request:
  ...
  body:
    documentContent:
      age: 11
response:
  ...
and then the less concrete case (in YAML)
priority: 50 # higher value of priority == lower priority
request:
  ...
  body:
    documentContent:
      # no age
response:
  ...

I found a solution that is more applicable to my case (Groovy code):
def documentContent = [
    "userId": 2,
    "sessionId": 3
]

Contract.make {
    response {
        body(
            [
                ............
                "documentContent": $(consumer(documentContent), producer(~/.+/)),
                ............
            ]
        )
    }
}
But please take into consideration that I stubbed the documentContent value with a String ("documentContent") in the producer contract test.

Related

AWS IoT Core for Lambda event missing data

I have a TEKTELIC smart room sensor connected to AWS IoT Core for LoRaWAN. The destination publishes to a topic. In the MQTT test client I get a nicely formed message:
{
  "WirelessDeviceId": "24e8d6e2-88c8-4057-a60f-66c5f3ef354e",
  "PayloadData": "A2cA4ARoaAD/ASw=",
  "WirelessMetadata": {
    "LoRaWAN": {
      "ADR": true,
      "Bandwidth": 125,
      "ClassB": false,
      "CodeRate": "4/5",
      "DataRate": "3",
      "DevAddr": "019e3fcb",
      "DevEui": "647fda00000089e2",
      "FCnt": 4676,
      "FOptLen": 0,
      "FPort": 10,
      "Frequency": "904700000",
      "Gateways": [
        {
          "GatewayEui": "647fdafffe014abc",
          "Rssi": -92,
          "Snr": 5.800000190734863
        },
        {
          "GatewayEui": "0080000000024245",
          "Rssi": -93,
          "Snr": 7.25
        },
        {
          "GatewayEui": "24e124fffef464da",
          "Rssi": -86,
          "Snr": 4.25
        }
      ],
      "MIC": "eb050f05",
      "MType": "UnconfirmedDataUp",
      "Major": "LoRaWANR1",
      "Modulation": "LORA",
      "PolarizationInversion": false,
      "SpreadingFactor": 7,
      "Timestamp": "2022-12-07T21:46:13Z"
    }
  }
}
When I subscribe to the topic with a Lambda, using this rule query statement:
SELECT *, topic() AS topic FROM 'lora/#'
I am missing most of the data:
{
  "Gateways": {
    "Timestamp": "2022-12-07T21:46:13Z",
    "SpreadingFactor": 7,
    "PolarizationInversion": false,
    "Modulation": "LORA",
    "Major": "LoRaWANR1",
    "MType": "UnconfirmedDataUp",
    "MIC": "eb050f05",
    "Snr": 4.25,
    "Rssi": -86,
    "GatewayEui": "24e124fffef464da"
  },
  "Snr": 7.25,
  "Rssi": -93,
  "GatewayEui": "0080000000024245",
  "topic": "lora/tektelic/smart_room"
}
The relevant code is:
import json

def handler(event, context):
    print(json.dumps(event))
The event contains approximately half of the data, malformed and in reverse order. The original event has a Gateways [ ] array; here it has become an object containing some data from the original array, along with other data that was outside the array.
The info on the device that sent the message and the payload I want to process are missing.
I am following this solution construct pattern; the only modifications are the Lambda code and the select statement.
I tried increasing the memory from the default 128M to 1024M with no changes.
I am also storing the raw messages in AWS S3, following this construct pattern, and it matches the MQTT data. I made similar changes to the select statement in it.
Thoughts on where to look for issues?
The most recent insight concerns the select statement:
iot_topic_rule_props=iot.CfnTopicRuleProps(
    topic_rule_payload=iot.CfnTopicRule.TopicRulePayloadProperty(
        rule_disabled=False,
        description="Processing of DTC messages from Lora Sensors.",
        sql="SELECT topic() AS topic, * FROM 'lora/#'",
        actions=[])),
Replacing the sql with:
sql="SELECT * FROM 'lora/#'",
generates a nicely formed event.
Replacing it with:
sql="SELECT topic() AS topic, * FROM 'lora/#'",
generates the same malformed event, except that topic is the first tag instead of the last. I'm going to leave this open for an answer explaining what is going on, because it feels like a bug. This should generate an error if it's just unhappy with the SQL.
The key to making it work is to include the aws_iot_sql_version:
sql="SELECT *, topic() AS topic FROM 'lora/#'",
aws_iot_sql_version="2016-03-23",
According to the docs the default value is "2015-10-08", but the console uses "2016-03-23". I have not done the research to see the details.
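For reference, a rough sketch of the corrected rule using the CDK Java bindings instead of Python (assuming CDK v2; the property names mirror the AWS::IoT::TopicRule CloudFormation resource, and the construct id and factory class are made up):

import software.amazon.awscdk.services.iot.CfnTopicRule;
import software.constructs.Construct;

import java.util.List;

public class LoraRuleFactory {

    // 'scope' is whatever Stack/Construct the rule is defined in
    public static CfnTopicRule loraRule(Construct scope) {
        return CfnTopicRule.Builder.create(scope, "LoraTopicRule")
                .topicRulePayload(CfnTopicRule.TopicRulePayloadProperty.builder()
                        .ruleDisabled(false)
                        .description("Processing of DTC messages from Lora Sensors.")
                        .sql("SELECT *, topic() AS topic FROM 'lora/#'")
                        .awsIotSqlVersion("2016-03-23")   // without this, the default 2015-10-08 engine produced the malformed event above
                        .actions(List.of())               // the Lambda action is omitted in this sketch
                        .build())
                .build();
    }
}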
I don't think you want to subscribe to a topic with a Lambda? A Lambda is a short-lived bit of code...
Have you looked into IoT Rules?
You are able to subscribe using a SQL statement, and then trigger a Lambda to do stuff with the messages.
https://docs.aws.amazon.com/iot/latest/developerguide/iot-rules.html

GraphQL java: return a partial response and inform a user about it

I have a Spring Boot application that uses GraphQL to return data for a request.
What I have
One of my queries returns a list of responses based on a list of supplied ids. So my .graphqls file is as follows:
type Query {
    texts(ids: [String]): [Response]
}

type Response {
    id: String
    text: String
}
and the following are request & response:
Request
texts(ids:["id 1","id 2"]){
id
text
}
Response
{
  "data": [
    {
      "id": "id 1",
      "text": "Text 1"
    },
    {
      "id": "id 2",
      "text": "Text 2"
    }
  ]
}
At the moment, if one or more ids are not in AWS, an exception is thrown and the response is an error block saying that certain id(s) were not found. Unfortunately, the response for the other ids that were found is not displayed - instead the data block returns null. If I check whether data is present in the code via, say, an if/else statement, then a partial response can be returned, but I will not know that it is a partial response.
What I want to happen
My application fetches the data from AWS, and occasionally some of it may not be present, meaning that for one of the supplied ids there will be no data. Not a problem - I can do checks and simply never process this id. But I would like to inform the user if the response I returned is partial (and some info is missing due to the absence of data).
See an example of the output I want at the end.
What I tried
While learning about GraphQL, I encountered instrumentation - a great tool for logging. Since it goes through all stages of execution, I thought that I could try to change the response midway - the Instrumentation class has a lot of methods, so I tried to find one that works. I tried to make beginExecution(InstrumentationExecutionParameters parameters) and instrumentExecutionResult(ExecutionResult executionResult, InstrumentationExecutionParameters parameters) work, but neither worked for me.
I think the below may work, but as the comments suggest there are parts that I failed to figure out:
@Override
public GraphQLSchema instrumentSchema(GraphQLSchema schema, InstrumentationExecutionParameters parameters) {
    String id = ""; // how to extract an id from the passed query (without needing to dissect parameters.getQuery())?
    log.info("The id is " + id);
    if (s3Service.doesExist(id)) {
        return super.instrumentSchema(schema, parameters);
    }
    schema.transform(); // How would I add an extra field?
    return schema;
}
I also found this post that seems to offer a simpler solution. Unfortunately, the link provided by the host does not exist and the link provided by the person who answered the question is very brief. I wonder if anyone knows how to use this annotation and maybe has an example I can look at?
Finally, I know there is DataFetcherResult, which can construct a partial response. The problem here is that some of my other apps use reactive programming, so while it would be great for Spring MVC apps, it would not be so great for Spring WebFlux apps (because, as I understand it, DataFetcherResult waits for all the outputs and as such is a blocker). Happy to be corrected on this one.
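For reference, this is roughly how the DataFetcherResult route could look in a blocking (Spring MVC style) resolver. This is only a sketch using graphql-java's builder API; the S3Service/FilterService types, the getText() helper and the Response Java class are my assumptions, not code from this post:

import graphql.GraphqlErrorBuilder;
import graphql.execution.DataFetcherResult;
import graphql.schema.DataFetchingEnvironment;

import java.util.ArrayList;
import java.util.List;

public class TextsDataFetcher {

    private final S3Service s3Service;          // same kind of existence check as in the code below
    private final FilterService filterService;  // hypothetical service that actually loads the text

    public TextsDataFetcher(S3Service s3Service, FilterService filterService) {
        this.s3Service = s3Service;
        this.filterService = filterService;
    }

    public DataFetcherResult<List<Response>> texts(List<String> ids, DataFetchingEnvironment env) {
        List<Response> found = new ArrayList<>();
        List<String> missing = new ArrayList<>();
        for (String id : ids) {
            if (s3Service.doesExist(id)) {
                found.add(filterService.getText(id));   // hypothetical blocking lookup
            } else {
                missing.add(id);
            }
        }
        DataFetcherResult.Builder<List<Response>> result =
                DataFetcherResult.<List<Response>>newResult().data(found);  // partial data is still returned
        if (!missing.isEmpty()) {
            result.error(GraphqlErrorBuilder.newError(env)
                    .message("There was a problem getting data for these id(s): %s", missing)
                    .build());                                              // lands in the top-level "errors" block
        }
        return result.build();
    }
}

With this shape the data block keeps the entries that were found, while the missing ids are reported in the standard errors block of the same response.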
Desired output
I would like my response to look like this when some of the requested data is not found.
Either
{
  "data": [
    {
      "id": "id 1",
      "text": "Text 1"
    },
    {
      "id": "id 2",
      "text": "Text 2"
    },
    {
      "id": "Non existant id",
      "msg": "This id was not found"
    }
  ]
}
or
{
  "errors": [
    {
      "message": "There was a problem getting data for this id(s): Bad id 1"
    }
  ],
  "data": [
    {
      "id": "id 1",
      "text": "Text 1"
    },
    {
      "id": "id 2",
      "text": "Text 2"
    }
  ]
}
So I figured out one way of achieving this, using instrumentation and the extensions block (as opposed to the errors block, which is what I wanted to use initially). Big thanks go to fellow Joe, who answered this question. Combine it with the DataFetchingEnvironment variable (great video here) and I got a working solution.
My instrumentation class is as follows
public class CustomInstrum extends SimpleInstrumentation {

    @Override
    public CompletableFuture<ExecutionResult> instrumentExecutionResult(
            ExecutionResult executionResult,
            InstrumentationExecutionParameters parameters) {
        if (parameters.getGraphQLContext().hasKey("Faulty ids")) {
            Map<Object, Object> currentExt = executionResult.getExtensions();
            Map<Object, Object> newExtensionMap = new LinkedHashMap<>();
            newExtensionMap.putAll(currentExt == null ? Collections.emptyMap() : currentExt);
            newExtensionMap.put("Warning:", "No data was found for the following ids: " + parameters.getGraphQLContext().get("Faulty ids").toString());
            return CompletableFuture.completedFuture(
                    new ExecutionResultImpl(
                            executionResult.getData(),
                            executionResult.getErrors(),
                            newExtensionMap));
        }
        return CompletableFuture.completedFuture(
                new ExecutionResultImpl(
                        executionResult.getData(),
                        executionResult.getErrors(),
                        executionResult.getExtensions()));
    }
}
and the DataFetchingEnvironment is in my resolver:
public CompletableFuture<List<Article>> articles(List<String> ids, DataFetchingEnvironment env) {
    List<CompletableFuture<Article>> res = new ArrayList<>();
    // The list below will contain the bad ids
    List<String> faultyIds = new ArrayList<>();
    for (String id : ids) {
        log.info("Getting article for id {}", id);
        if (s3Service.doesExist(id)) {
            res.add(filterService.gettingArticle(id));
        } else {
            faultyIds.add(id); // if data doesn't exist, the id will not be processed
        }
    }
    // if we have any bad ids, add the list to the context for instrumentations to pick up, right before returning a response
    if (!faultyIds.isEmpty()) {
        env.getGraphQlContext().put("Faulty ids", faultyIds);
    }
    return CompletableFuture.allOf(res.toArray(new CompletableFuture[0])).thenApply(item -> res.stream()
            .map(CompletableFuture::join)
            .collect(Collectors.toList()));
}
You can obviously separate error-related ids into different contexts, but for my simple case one will suffice. I am, however, still interested in how the same result can be achieved via the errors block, so I will leave this question hanging for a bit before accepting this as the final answer.
My response looks as follows now:
{
  "extensions": {
    "Warning:": "No data was found for the following ids: [234]"
  },
  "data": { ... }
}
My only concern with this approach is security and "doing the right thing" - is this the correct thing to do, adding something to the context and then using instrumentation to influence the response? Are there any potential security issues? If someone knows anything about this and could share, it would help me greatly!
Update
After further testing, it appears that if an exception is thrown it will still not work, so it only works if you know beforehand that something goes wrong and add appropriate exception handling. It cannot be used with a try/catch block. So I am half a step back again.
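For the part that is still open (reporting through the errors block instead of extensions), here is a minimal sketch of the same instrumentation emitting a GraphQLError instead of an extension entry. It assumes graphql-java's GraphqlErrorBuilder and has not been verified against the setup above:

import graphql.ErrorType;
import graphql.ExecutionResult;
import graphql.ExecutionResultImpl;
import graphql.GraphQLError;
import graphql.GraphqlErrorBuilder;
import graphql.execution.instrumentation.SimpleInstrumentation;
import graphql.execution.instrumentation.parameters.InstrumentationExecutionParameters;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ErrorAppendingInstrumentation extends SimpleInstrumentation {

    @Override
    public CompletableFuture<ExecutionResult> instrumentExecutionResult(
            ExecutionResult executionResult,
            InstrumentationExecutionParameters parameters) {
        if (!parameters.getGraphQLContext().hasKey("Faulty ids")) {
            return CompletableFuture.completedFuture(executionResult);
        }
        GraphQLError warning = GraphqlErrorBuilder.newError()
                .errorType(ErrorType.DataFetchingException)
                .message("No data was found for the following ids: %s",
                        parameters.getGraphQLContext().get("Faulty ids").toString())
                .build();
        List<GraphQLError> errors = new ArrayList<>(executionResult.getErrors());
        errors.add(warning);                                   // append to whatever errors already exist
        return CompletableFuture.completedFuture(
                new ExecutionResultImpl(executionResult.getData(), errors, executionResult.getExtensions()));
    }
}

The data block is left untouched, so the partial response is still returned together with a standard errors entry.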

In Relay.js, what is the `Client Mutation Identifier`?

In the Relay documentation here, it says that:
Relay uses a common pattern for mutations, where there are root fields on the mutation type with a single argument, input, and where the input and output both contain a client mutation identifier used to reconcile requests and responses.
But in the example they provided, the input and output looked like this respectively:
// IntroducedShipInput
{
  "input": {
    "shipName": "B-Wing",
    "factionId": "1"
  }
}
// IntroducedShipPayload
{
  "introduceShip": {
    "ship": {
      "id": "U2hpcDo5",
      "name": "B-Wing"
    },
    "faction": {
      "name": "Alliance to Restore the Republic"
    }
  }
}
So what is the client mutation identifier? And why, and how does it get used to reconcile requests and responses?
I'm still not 100% sure what exactly happened to the "client mutation identifier," but having done some research, it appears to have been a requirement in previous versions of Relay. This PR apparently removed the requirement by replacing it with some other mechanism, but it's not clear to me what that other mechanism does. I left a comment requesting more clarification around the documentation, which appears to be out of date.
At any rate, the client mutation identifier appears to have been related to some assumptions about mutation idempotency in Facebook's implementation of GraphQL.

Spring Data Elasticsearch: How to create a Completion object with multiple weights?

I have managed to build a working autocomplete service using Elasticsearch with Spring Boot, but I can't assign different weights to my autocomplete sentences.
While building the Completion object (org.springframework.data.elasticsearch.core.completion.Completion) I use the standard constructor and then assign the weight to the object, for example (I am using Kotlin):
val completion = Completion(arrayOf("Sentence one", "Second sentence"))
completion.weight = 10
(...)
myEntity.suggest = completion
which produces the following JSON for Elasticsearch:
{
  "suggest": {
    "input": [ "Sentence one", "Second sentence" ],
    "weight": 10
  }
}
But, according to the Elasticsearch documentation (https://www.elastic.co/guide/en/elasticsearch/reference/current/search-suggesters-completion.html) I would like to achieve something like this
{
  "suggest": [
    {
      "input": "Sentence one",
      "weight": 10
    },
    {
      "input": "Second sentence",
      "weight": 5
    }
  ]
}
Is it possible with spring-data-elasticsearch? If yes, how can I do this?
No, at the moment the second case is not supported by Spring Data Elasticsearch.
Both JSONs you show are valid; the first one is for multiple inputs that all share the same weight, the second one is for multiple inputs where each input has a different weight.
Please file an issue in the Spring Data Elasticsearch Jira to add support for this case to the Completion object.

Spring Integration Java DSL: How to loop the paged REST service?

How can I loop over a paged REST service with the Java DSL Http.outboundGateway method?
The REST URL is, for example,
http://localhost:8080/people?page=3
and it returns for example
"content": [
{"name": "Mike",
"city": "MyCity"
},
{"name": "Peter",
"city": "MyCity"
},
...
]
"pageable": {
"sort": {
"sorted": false,
"unsorted": true
},
"pageSize": 20,
"pageNumber": 3,
"offset": 60,
"paged": true,
"unpaged": false
},
"last": false,
"totalElements": 250,
"totalPages": 13,
"first": false,
"sort": {
"sorted": false,
"unsorted": true
},
"number": 3,
"numberOfElements": 20,
"size": 20
}
where the totalPages variable gives the total number of pages.
So if the implementation
integrationFlowBuilder
    .handle(Http
        .outboundGateway("http://localhost:8080/people?page=3")
        .httpMethod(HttpMethod.GET)
        .expectedResponseType(String.class))
accesses one page, how do I loop over all the pages?
The easiest way to do this is to wrap the call to this Http.outboundGateway() with a @MessagingGateway and provide the page number as an argument:
@MessagingGateway
public interface HttpPagingGateway {

    @Gateway(requestChannel = "httpPagingGatewayChannel")
    String getPage(int page);

}
Then you get a JSON string as a result, which you can convert into some domain model, or you can just perform a JsonPathUtils.evaluate() (based on json-path) to get the value of the last attribute, to decide whether you need to call getPage() again with page++ or not.
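As a minimal illustration of that check, using the json-path library directly rather than the JsonPathUtils helper (the class and method names here are placeholders of mine):

import com.jayway.jsonpath.JsonPath;

public final class PageChecks {

    // 'json' is the String page returned by the HttpPagingGateway
    public static boolean hasNextPage(String json) {
        Boolean last = JsonPath.read(json, "$.last");   // "last" is false while more pages remain
        return !last;
    }
}

So the caller keeps invoking getPage(page++) while hasNextPage(...) returns true.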
The page argument becomes the payload of the message to send, and that payload can be used as a uriVariable:
.handle(Http
    .outboundGateway("http://localhost:8080/people?page={page}")
    .httpMethod(HttpMethod.GET)
    .uriVariable("page", Message::getPayload)
    .expectedResponseType(String.class))
Of course, we can do something similar with pure Spring Integration, but that would involve a filter, a router and some other components.
UPDATE
First of all, I would suggest that you create a domain model (some Java bean), let's say PersonPageResult, to represent that JSON response, and pass this type to the expectedResponseType(PersonPageResult.class) property of the Http.outboundGateway(). The RestTemplate, together with the MappingJackson2HttpMessageConverter, will do the trick out of the box and return such an object as a reply for downstream processing.
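A minimal sketch of what such a bean could look like (the Person class and the Jackson annotations are my assumptions, not code from this answer):

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.annotation.JsonProperty;

import java.util.List;

@JsonIgnoreProperties(ignoreUnknown = true)      // skip "pageable", "sort", etc. if they are not needed
public class PersonPageResult {

    @JsonProperty("content")                     // the "content" array holds the people on this page
    private List<Person> persons;

    private boolean last;

    public List<Person> getPersons() { return persons; }
    public void setPersons(List<Person> persons) { this.persons = persons; }

    public boolean getLast() { return last; }
    public void setLast(boolean last) { this.last = last; }
}

Mapping "content" to persons keeps the getPersons() call in the loop below consistent with the JSON shown in the question.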
Then, as I said before, looping is better done from some Java code, which you can wrap in a service activator call. For this purpose you should declare a gateway like this:
public interface HttpPagingGateway {

    PersonPageResult getPage(int page);

}
Pay attention: no annotations at all. The trick is done via IntegrationFlow:
@Bean
public IntegrationFlow httpGatewayFlow() {
    return IntegrationFlows.from(HttpPagingGateway.class)
            .handle(Http
                .outboundGateway("http://localhost:8080/people?page={page}")
                .httpMethod(HttpMethod.GET)
                .uriVariable("page", Message::getPayload)
                .expectedResponseType(PersonPageResult.class))
            .get();
}
See IntegrationFlows.from(Class<?> aClass) JavaDocs.
Such an HttpPagingGateway can be injected into some service with the plain looping logic:
int page = 1;
boolean last = false;
while (!last) {
    PersonPageResult result = this.httpPagingGateway.getPage(page++);
    last = result.getLast();
    List<Person> persons = result.getPersons();
    // Process persons
}
For processing those persons I would suggest having a separate IntegrationFlow, which may start from a gateway as well, or you can just send a Message<List<Person>> to its input channel.
This way you separate the concerns of paging and processing, and keep simple loop logic in a POJO method.
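A possible shape for that separate processing flow (only a sketch; the channel name and the processing step are placeholders, not from the original answer):

@Bean
public IntegrationFlow personProcessingFlow() {
    return IntegrationFlows.from("personsChannel")           // send Message<List<Person>> here
            .split()                                          // one message per Person
            .handle(Person.class, (person, headers) -> {
                // process a single person; returning null ends the flow for this message
                return null;
            })
            .get();
}

The .split() step turns the Message<List<Person>> into individual Person messages, so the handler deals with one person at a time.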
