Can I call SQL/MYSQL queries from my transaction function in Hyperledger Composer?
If so, how can I?
If not, should I set up a REST API and consume it from my transaction function? If there is a better way, please suggest it.
You can use a call-out to a REST endpoint for your SQL query; examples are shown here -> https://hyperledger.github.io/composer/latest/integrating/call-out. Also observe the comments there on achieving deterministic results.
The example shows that you can return results to your transaction processor function. As for your query: if you don't already have REST APIs set up for it, there are plenty of resources out there to help.
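As a rough illustration of that call-out pattern, here is a minimal sketch of a transaction processor function that fetches a value from an external REST endpoint and uses it to update an asset. The namespace, the `/api/price` endpoint, and the `stockPrice` field are all hypothetical; `request` and `getAssetRegistry` are provided by the Composer runtime inside transaction processor functions.

```javascript
// Pure helper kept separate so the parsing logic can be exercised without a
// network call: extracts a numeric price from a JSON response body.
function extractPrice(jsonBody) {
  const data = JSON.parse(jsonBody);
  return Number(data.stockPrice); // "stockPrice" is a hypothetical field
}

/**
 * Hypothetical transaction that prices an asset from an external service.
 * @param {org.example.trading.PriceAsset} tx the incoming transaction
 * @transaction
 */
async function priceAsset(tx) {
  // `request` is the HTTP client Composer exposes to TP functions.
  const body = await request.get('http://localhost:3000/api/price'); // hypothetical endpoint
  tx.asset.price = extractPrice(body);
  const registry = await getAssetRegistry('org.example.trading.StockAsset');
  await registry.update(tx.asset);
}
```

Keep the determinism caveat in mind: every endorsing peer re-executes the function, so the endpoint must return the same result to all peers or the transaction will fail endorsement.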
Related
I know there's a question with the same title, but my question is a little different: I have a Lambda API, saveInputAPI(), that saves a value into a specified field. Users can invoke this API with different parameters, for example:
saveInput({"addressType": 1}); // addressType is a DB field
or
saveInput({"name": "test"}); // name is a DB field
And of course this is hosted on AWS, so I'm also using API Gateway. The problem is that sometimes an error like this happens:
As you can see, API call No. 19 was invoked first but finished later
(10:10:16:828) -> (10:10:18:060)
while API call No. 18 was invoked later but finished sooner:
(10:10:17:611) -> (10:10:17:861)
This causes a lot of problems in my project, and sometimes the delay between two API calls is up to 10 seconds. The front end acts independently, so users don't know what happens behind the scenes: they think they have set addressType to 1, but in reality addressType is still 2. Since this project is large and I cannot change this design of using a single API to update DB values, is there any way to fix this problem? I'd really appreciate any ideas. Thanks
If updates to the database can't simply be skipped when the last-updated timestamp is more recent than the source event's timestamp, you need to decouple API Gateway and Lambda:
API Gateway writes to an SQS FIFO queue.
A Lambda consumes the queue and processes each request.
This ensures the older event is processed first.
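The decoupling described above boils down to how the message is written to the FIFO queue. A minimal sketch, where the queue URL and field names are assumptions: using one MessageGroupId per user gives strict ordering per user while still allowing parallelism across users.

```javascript
// Hypothetical FIFO queue URL (FIFO queue names must end in ".fifo").
const QUEUE_URL = 'https://sqs.us-east-1.amazonaws.com/123456789012/save-input.fifo';

// Build the SQS SendMessage parameters for one save request.
function buildSendParams(userId, payload, requestId) {
  return {
    QueueUrl: QUEUE_URL,
    MessageBody: JSON.stringify(payload),
    MessageGroupId: `user-${userId}`,  // FIFO: strict ordering within a group
    MessageDeduplicationId: requestId  // FIFO: drops duplicate deliveries
  };
}

// With the AWS SDK v3 this would be sent roughly as:
// await new SQSClient({}).send(new SendMessageCommand(buildSendParams(userId, body, id)));
```

The consumer Lambda then receives messages from the queue in order (per group) and applies them one at a time.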
AWS Lambda is asynchronous by design. That means trying to make it synchronous and predictable is somewhat wasteful.
If your concern is preventing "old" data (in the scheduling sense) from overwriting "fresh" data, you might consider timestamping each record and then applying a constraint like: "to overwrite the target data, the source timestamp has to be newer than the timestamp already on the targeted data".
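That constraint can be sketched as a small guard applied before every write. The field names (`updatedAt`, `sourceTimestamp`, `fields`) are illustrative, not from the question's schema.

```javascript
// Apply an update only if its source (invocation-time) timestamp is newer
// than the timestamp already stored on the record; otherwise drop it.
function applyIfNewer(record, update) {
  if (update.sourceTimestamp <= record.updatedAt) {
    return record; // stale event: a newer write already landed, ignore it
  }
  return { ...record, ...update.fields, updatedAt: update.sourceTimestamp };
}
```

In the question's example, call No. 19 (invoked at 16.828) finishes after call No. 18 (invoked at 17.611); with this guard, No. 19's late write carries the older source timestamp and is rejected, so No. 18's value survives.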
I am using Spring Boot, and my application is a monolith for now; I may switch to microservices later.
SCENARIO 1: Here my DB call does NOT depend on the REST response.
@Transactional
class MyService {
    public void DBCallNotDependsOnRESTResponse() {
        // DB call
        // REST call; this REST call gives a response like 200 "successful"
    }
}
SCENARIO 2: Here my DB call depends on the REST response.
@Transactional
class MyService {
    public void DBCallDependsOnRESTResponse() {
        // REST call, making a real transaction using Braintree
        // DB call; HERE THE DB CALL DEPENDS ON THE REST RESPONSE
    }
}
In Scenario 1, I have no issues, as the DB changes get rolled back in case the REST call fails.
But in Scenario 2, the REST call cannot be rolled back if an exception occurs during the DB call.
I already searched Google for this. I found some suggestions that I need something like a pub-sub model, but I could not get that concept clearly into my head.
I would be glad if someone could provide a solution for Scenario 2. How do other e-commerce businesses handle their transactions effectively? I guess my question is about architecture design, so please advise a good architectural approach to solve the transaction issue above. Do you think a messaging system like Kafka would solve it? FYI, my application is currently a monolith; should I use microservices? Do I need two-phase commit, or will sagas solve my problem? Can sagas be used in a monolithic application?
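One common way to handle Scenario 2, even inside a monolith, is a compensating action rather than a rollback: if the DB write fails after the external charge succeeded, you explicitly undo the charge. A minimal sketch, where `gateway` and `repository` are hypothetical stand-ins for a Braintree-style client and your data layer:

```javascript
// Charge first, then record; on a DB failure, run the compensating
// transaction (refund) because the external call cannot be rolled back.
async function chargeAndRecord(gateway, repository, order) {
  const charge = await gateway.charge(order.amount); // external REST call
  try {
    await repository.saveOrder({ ...order, chargeId: charge.id });
    return { status: 'COMPLETED', chargeId: charge.id };
  } catch (dbError) {
    // Compensating action: undo the external side effect we cannot roll back.
    await gateway.refund(charge.id);
    return { status: 'ROLLED_BACK', chargeId: charge.id };
  }
}
```

This is the core idea behind the saga pattern: a sequence of local transactions, each paired with a compensation. A full saga adds durability (e.g. persisting the saga state so a crash between charge and refund can be recovered), which is where a message broker or outbox table usually comes in.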
EDIT:
Regarding the REST call: I am actually making a real transaction using Braintree, which is a REST call.
Can you elaborate on what you are achieving with the REST call? Are you updating any data that will be used by the DB call?
If the two calls are independent, does the order matter? The DB call will be committed at the end of the method itself.
I'm trying to implement a batch query interface with GraphQL. I can get a request to work synchronously without issue, but I'm not sure how to approach making the result asynchronous. Basically, I want to be able to kick off the query and return a pointer of sorts to where the results will eventually be when the query is done. I'd like to do this because the queries can sometimes take quite a while.
In REST, this is trivial. You return a 202 and return a Location header pointing to where the client can go to fetch the result. GraphQL as a specification does not seem to have this notion; it appears to always want requests to be handled synchronously.
Is there any convention for doing things like this in GraphQL? I very much like the query specification but I'd prefer to not leave the client HTTP connection open for up to a few minutes while a large query is executed on the backend. If anything happens to kill that connection the entire query would need to be retried, even if the results themselves are durable.
What you're trying to do is not easily solved in a spec-compliant way. Apollo introduced the idea of a @defer directive that does pretty much what you're looking for, but it's still an experimental feature. I believe Relay Modern is trying to do something similar.
The idea is effectively the same -- the client uses a directive to mark a field or fragment as deferrable. The server resolves the request but leaves the deferred field null. It then sends one or more patches to the client with the deferred data. The client is able to apply the initial request and the patches separately to its cache, triggering the appropriate UI changes each time as usual.
I was working on a similar issue recently. My use case was to submit a job to create a report and provide the result back to the user. Creating a report takes a couple of minutes, which makes it an asynchronous operation. I created a mutation that submitted the job to the backend processing system and returned a job ID. Then I periodically poll a jobs field using a query to find out the state of the job and, eventually, the results. As the result is a file, I return a link to a different endpoint where it can be downloaded (a similar approach to the one GitHub uses).
Polling for the actual results works as expected, but I suspect this might be better solved with subscriptions.
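The mutation-plus-polling pattern can be sketched with a few plain resolver functions backed by an in-memory job store. The names (`submitReport`, `jobStatus`) and the download URL shape are illustrative, not from any specific schema; a real server would persist the jobs and have a worker call `completeJob`.

```javascript
// In-memory job store standing in for the real backend processing system.
const jobs = new Map();
let nextId = 0;

// Resolver for a `submitReport` mutation: enqueue the job, return its ID.
function submitReport() {
  const id = String(++nextId);
  jobs.set(id, { id, state: 'PENDING', downloadUrl: null });
  return id;
}

// Called by the worker once the report file has been generated.
function completeJob(id, downloadUrl) {
  const job = jobs.get(id);
  job.state = 'DONE';
  job.downloadUrl = downloadUrl;
}

// Resolver for a `job(id)` query: the client polls this until the state is
// DONE, then follows downloadUrl to a separate download endpoint.
function jobStatus(id) {
  return jobs.get(id);
}
```

A subscription would replace the polling loop with a push: the server publishes the job's terminal state on the same ID instead of waiting to be asked.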
I am trying to write a Lambda function (using AWS Cloud9) which makes a query to Redshift (using the node-postgres package) and then writes the result to a Google Sheet (using the googleapis package).
I currently have the code spread over two separate Lambda functions - one to make the query, and one to write to the sheet, though this same error occurred when I tried it in a single function.
Both functions individually work fine. The query function makes a query and returns a result, and the writing query writes a test payload to the sheet.
However, if I try to invoke the writing function from the query function, the whole thing freezes up and eventually times out. This is the exact log from a run.
Error:
Read timeout on endpoint URL: "https://lambda.us-east-2.amazonaws.com/2015-03-31/functions/queryRedshift/invocations"
at convertStderrToError (https://d28a1z68q19s1r.cloudfront.net/content/ce0bff16a8467f5a19e655ab833e28a385f3a62f/#aws/aws-toolkit-cloud9/configs/bundle.js:424:33)
at exports.EventEmitter.<anonymous> (https://d28a1z68q19s1r.cloudfront.net/content/ce0bff16a8467f5a19e655ab833e28a385f3a62f/#aws/aws-toolkit-cloud9/configs/bundle.js:416:70)
at exports.EventEmitter.EventEmitter.emit (https://d373lap04ubgnu.cloudfront.net/c9-af167ac416de-ide/build/configs/ide/#aws/cloud9/configs/ide/environment-default.js:20:23)
at Consumer.onExit (https://d373lap04ubgnu.cloudfront.net/c9-af167ac416de-ide/build/configs/ide/#aws/cloud9/configs/ide/environment-default.js:47444:80)
at Consumer.<anonymous> (https://d373lap04ubgnu.cloudfront.net/c9-af167ac416de-ide/build/configs/ide/#aws/cloud9/configs/ide/environment-default.js:47204:4)
at Consumer.Agent._onMessage (https://d373lap04ubgnu.cloudfront.net/c9-af167ac416de-ide/build/configs/ide/#aws/cloud9/configs/ide/environment-default.js:47289:4)
at EngineIoTransport.EventEmitter.emit (https://d373lap04ubgnu.cloudfront.net/c9-af167ac416de-ide/build/configs/ide/#aws/cloud9/configs/ide/environment-default.js:47041:16)
at module.exports.onMessage (https://d373lap04ubgnu.cloudfront.net/c9-af167ac416de-ide/build/configs/ide/#aws/cloud9/configs/ide/environment-default.js:47348:6)
at module.exports.EventEmitter.emit (https://d373lap04ubgnu.cloudfront.net/c9-af167ac416de-ide/build/configs/ide/#aws/cloud9/configs/ide/environment-default.js:19:23)
at module.exports.ReliableSocket.onMessage (https://d373lap04ubgnu.cloudfront.net/c9-af167ac416de-ide/build/configs/ide/#aws/cloud9/configs/ide/environment-default.js:47560:76)
I have tried reworking the code to separate things, but I'm not sure where to start, as I can only find one other similar problem (with no answer), and the log doesn't point to where things are getting stuck (as far as I can tell; I'm not very experienced at this).
If someone could at least point me in the right direction, it would be super helpful!
Thanks in advance!
EDIT: I have now also tried the node-redshift package with the same result.
From the info you provided, this may be the situation:
The querying Lambda is able to connect to Redshift within AWS.
The writing Lambda is able to connect to the Google Sheets API through the Internet.
The querying Lambda doesn't have Internet connectivity to reach lambda.us-east-2.amazonaws.com.
For a Lambda function inside a VPC to access the Internet, you have to follow the steps described here:
https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/
If I'm right, you have all your subnets attached to an Internet Gateway but none of them behind a NAT Gateway.
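For reference, this is roughly the shape of the cross-function call that is timing out. Once the networking is fixed (NAT Gateway, or a VPC interface endpoint for the Lambda service), one further mitigation is to invoke the writer asynchronously so the querying function doesn't sit waiting on it. The function name `writeToSheet` is a hypothetical stand-in for the writing Lambda.

```javascript
// Build the Invoke parameters for handing the query result to the writer.
function buildInvokeParams(rows) {
  return {
    FunctionName: 'writeToSheet',   // hypothetical name of the writing Lambda
    InvocationType: 'Event',        // async: Lambda queues it and returns 202
    Payload: JSON.stringify({ rows })
  };
}

// With the AWS SDK v3 this would be sent roughly as:
// await new LambdaClient({ region: 'us-east-2' })
//   .send(new InvokeCommand(buildInvokeParams(rows)));
```

Note that an async invoke only avoids the wait; the SDK still needs a network path to lambda.us-east-2.amazonaws.com, which is exactly what the NAT Gateway (or VPC endpoint) provides.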
I wanted to test the response times of a GraphQL endpoint and a RESTful endpoint, as I haven't used GraphQL before and I'm about to use it in my next Laravel project.
So I am using the Lighthouse PHP package to serve a GraphQL endpoint from my Laravel app, and I have also created a RESTful endpoint.
Both endpoints (GraphQL and RESTful) fetch all users (250 users) from my local database.
Based on this test, what I noticed is that when I tested both endpoints in Postman, the RESTful endpoint's response was faster than the GraphQL endpoint's.
Can someone explain why the GraphQL endpoint's response takes more time than the RESTful one while both endpoints return the same data?
GraphQL endpoint result for GET request (response time: 88ms)
GraphQL endpoint result for POST request (response time: 88ms)
RESTful endpoint result (response time: 44ms)
There's no such thing as a free lunch.
GraphQL offers a lot of useful features, but those same features invariably incur some overhead. While a REST endpoint can effectively pull data from some source and regurgitate it back to the client, even for a relatively small dataset, GraphQL will have to do some additional processing to resolve and validate each individual field in the response. Not to mention the processing required to parse and validate the request itself. And this overhead only gets bigger with the size of the data returned.
If you were to introduce additional features to your REST endpoint (request and response validation, support for partial responses, ability to alias individual response fields, etc.) that mirrored GraphQL, you would see the performance gap between the two shrink. Even then, though, it's still somewhat of an apples and oranges comparison, since a GraphQL service will go through certain motions simply because that's what the spec says to do.
TL;DR: Your REST example is simpler and less complicated.
Lighthouse creates an AST to parse the GraphQL request and your schema. It then passes through all the directives and so on to figure out what you are trying to do. It also has to validate your query to see whether it can actually run against the schema.
Depending on how you defined it in your application, there are a lot of steps to pass through. However, this can be reduced in multiple ways: the parsing of your GraphQL schema can be cached, you can cache the result, or you can use deferred fields (which probably won't speed up this example). You can read more about this in the performance section of the docs.
You haven't specified how your REST endpoint is set up, e.g. whether you follow some REST standard that also has to parse the data. The more features you add, the more code has to run, hence the higher load time.
As of Lighthouse v4, we have made significant performance improvements by lazy-loading only the minimally required fields and types from the schema. That turns out to bring about a 3x to 10x performance increase, depending on the size of your schema.
You probably still won't beat a single REST endpoint for such a simple query. Lighthouse will begin to shine on more heavily nested queries that join across multiple relationships.
Try enabling OPcache on the server. This decreased my GraphQL response time from 200 ms to 20 ms.
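For reference, a minimal php.ini fragment for enabling OPcache; the values are illustrative defaults, tune them for your server.

```ini
; Cache compiled PHP (including Lighthouse's parsed code) in shared memory
; between requests instead of recompiling on every hit.
opcache.enable=1
opcache.memory_consumption=128
; In production, skip the per-request file freshness check (requires a
; cache reset or restart on deploy):
opcache.validate_timestamps=0
```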