Are database calls in cloud functions counted as regular request? - parse-platform

As per FAQs of parse.com,
How are requests made from Cloud Code treated under the request limit?
Calling a Cloud function will count itself as a single request. Save and delete triggers in Cloud Code are considered part of the original object save/delete request, and they will not be counted as an additional request. However, if your function or save/delete trigger uses the Parse JavaScript SDK to perform additional operations, these will be treated in the same way as if they were made by a regular client.
What exactly does using the JS SDK in Cloud Code mean? If I make a simple database call like query.find from a cloud function or trigger, will that be counted as a regular client request?

Yes, query.find will count as a database request. Any query execution will.
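To make the accounting concrete, here is a toy sketch. Everything in it (`requestCount`, `callCloudFunction`, `queryFind`) is a hypothetical stand-in — the real Parse SDK exposes no such counter — but it illustrates the rule from the FAQ: the cloud function call is one request, and each query it runs through the JS SDK is one more.

```javascript
// Toy model of Parse's request accounting. All names here are
// hypothetical stand-ins; only the counting rule is illustrated.
let requestCount = 0;

// Calling the cloud function itself counts as one request.
function callCloudFunction(fn) {
  requestCount += 1;
  return fn();
}

// Every query the function runs through the JS SDK counts as one more.
function queryFind() {
  requestCount += 1;
  return []; // pretend result set
}

callCloudFunction(() => queryFind());
console.log(requestCount); // 2: one for the call, one for the query
```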

Related

Can't I create a REST API in apollo-server?

The server is currently built with MySQL, Prisma, Apollo Server, and Nexus, and it now needs to receive row data via POST through a REST API, not through the GraphQL schema developed so far. I want to process raw data POSTed to a path (for example, /api/data/status). Is there a way to create a REST API on apollo-server?
The apollo-server runs in a Node environment, so you are able to use any HTTP client you want.
Example:
axios
node-fetch
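Concretely, because the server is just a Node process, REST-style routing can live next to the GraphQL endpoint. Below is a framework-free sketch of the routing step only (the /api/data/status path comes from the question; `handleRest` is a name invented here — in practice this would be an Express route on the same app that hosts apollo-server-express):

```javascript
// Minimal routing sketch. In a real app this logic would be an Express
// route mounted alongside the apollo-server middleware.
function handleRest(method, path, body) {
  if (method === 'POST' && path === '/api/data/status') {
    // Raw row data arrives in the POST body, not as a GraphQL document.
    return { status: 200, received: body };
  }
  return { status: 404 };
}

console.log(handleRest('POST', '/api/data/status', { row: 1 }).status); // 200
console.log(handleRest('GET', '/graphql', null).status); // 404
```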

API Gateway Proxy Integration. Return response of second lambda function

The premise is pretty simple.
The usual application flow is as follows:
API Gateway receives a request.
API Gateway triggers Lambda Function with parameters.
Lambda Function runs the logic.
The Lambda Function's response is automatically forwarded to API Gateway as the response to step 1 (Response to the Received API Request).
Here's the issue I'm having. I need to run two functions before returning the response to the received API request. I need the return statement from the second function in step 4 to be the response sent back
to the client.
Now there are more cases where this is necessary. In the future, we might need to run a few services (such as Lambda > Lambda > PostgreSQL > API response) before responding to the request.
Is there a way to receive a request from a client, then run a host of tasks, assemble the necessary data, and use this data as the response to the original API request? So far Step Functions seemed a likely solution, but I don't know if it can do this.
Until recently this would've been a pain with Step Functions but around re:invent time last year they announced the ability to orchestrate synchronous express workflows: https://aws.amazon.com/blogs/compute/new-synchronous-express-workflows-for-aws-step-functions/
IMO, this would be the best / easiest way to implement what you're looking for.
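As a sketch, a synchronous express workflow chaining two Lambda functions could be defined like this (state names and ARNs are placeholders). API Gateway can then invoke it via StartSyncExecution, and the second function's output becomes the HTTP response:

```json
{
  "Comment": "Express workflow: first Lambda feeds the second; the second's output is the response",
  "StartAt": "FirstFunction",
  "States": {
    "FirstFunction": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:first-function",
      "Next": "SecondFunction"
    },
    "SecondFunction": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:second-function",
      "End": true
    }
  }
}
```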

stop pending requests with apollo client hooks

It looks like it's possible to cancel pending requests via client.stop(), but the documentation doesn't show a solution for Apollo Client hooks, where we have no client instance.
How do I stop pending requests when using Apollo Client hooks?
Struggled for days and made a proof of concept that finally works.
I have explained the code below and here is my POC - Github source code.
Explanation:
Step – 1:
Create a middleware that holds the logic to track and cancel duplicate request via ReactJS context API – cancelRequest.tsx (complete source code)
Step – 2:
Generate namespace UUID and pass it using requestTrackerId via query context as below.
context: {
  requestTrackerId: uuidNameSpace('LOGIN', RequestNameSpace)
}
Refer source code - Line 32
Step – 3:
Finally, wire up all the middleware as layers using the from API of the Apollo GraphQL client, and set queryDeduplication to false.
Mechanism of action:
Whenever more than one request originates from the same mutation query, each query is tagged with its requestTrackerId, which stays the same for that particular query and differs between queries.
A namespace is generated for each query using the UUID library (read the code). The middleware associates each query with its namespace-generated ID and stores it in a cache object.
Subsequent incoming requests are looked up in the cache object. If there's an ongoing request that has not yet completed, it is aborted immediately using the AbortController JavaScript API and replaced with the new request.
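The tracking logic can be sketched in isolation like this (the Apollo link wiring from the POC is omitted; `inflight` and `trackRequest` are names invented for this sketch):

```javascript
// Map of requestTrackerId -> AbortController for in-flight requests.
const inflight = new Map();

// Before starting a request, abort any pending duplicate with the same
// tracker ID, then register a fresh controller for the new request.
function trackRequest(requestTrackerId) {
  const previous = inflight.get(requestTrackerId);
  if (previous) previous.abort(); // cancel the still-pending duplicate
  const controller = new AbortController();
  inflight.set(requestTrackerId, controller);
  return controller.signal; // pass this signal to fetch() so abort() cancels it
}

const first = trackRequest('LOGIN');
const second = trackRequest('LOGIN'); // aborts the first request
console.log(first.aborted, second.aborted); // true false
```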
Libraries used
UUID – Used to create a unique request tracker ID and prevent namespace
collisions for multiple requests from the same component.
ReactJS – No intro needed, I guess?
Apollo GraphQL – Follow the link to know more..
Hope this answer helps. Happy coding

GraphQL endpoint for file download

Is it possible to trigger a file download in a browser from the GraphQL endpoint on an apollo-server-express application?
I have the endpoint written in a standard express app.get function (see below) but I would like to make use of the GraphQL context for file download and so I'm wondering if it's possible to cause a download from a GraphQL endpoint.
Here's a bare-bones example of what I have on the express end in the app.get function:
app.get('/download-batch/:batchId', async (req, res) => {
  // new Buffer() is deprecated; Buffer.from is the supported API
  res.send(Buffer.from('test'));
});
Any help would be much appreciated. Thanks!
Yes, but you would need to create a custom endpoint for that. You can't use the existing GraphQL endpoint you are already using for queries.
On that custom endpoint you would add middleware that processes the data into a buffer or whatever format you need. Even so, it's not really recommended: it becomes one more endpoint to maintain when you could instead write a separate API to serve the file. (After all, GraphQL is built mainly around the idea of a single endpoint.)
Boštjan Cigan mentions some solutions here and gives details on using GraphQL as a proxy with Minio. The backend asks Minio to generate a temporary link that can be sent back to the browser for direct access.
This is a valid solution for many use cases.
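A minimal sketch of that proxy pattern, assuming a hypothetical `getTemporaryUrl` helper (with Minio this would wrap its presigned-URL API) and a `context.user` field populated by your auth middleware:

```javascript
// Hypothetical helper: with Minio this would produce a short-lived
// signed link to the object; here it just builds a placeholder URL.
async function getTemporaryUrl(batchId) {
  return `https://files.example.com/batches/${batchId}?expires=300`;
}

// GraphQL resolver: instead of streaming bytes through GraphQL, return
// a temporary URL the browser can download from directly.
const resolvers = {
  Query: {
    downloadBatch: async (_parent, { batchId }, context) => {
      // The GraphQL context is available here for authorization.
      if (!context.user) throw new Error('Not authenticated');
      return getTemporaryUrl(batchId);
    },
  },
};

resolvers.Query.downloadBatch(null, { batchId: '42' }, { user: 'alice' })
  .then((url) => console.log(url));
```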

Is Google bigquery javascript API be available with gapi.client.request?

The current JavaScript API for BigQuery leverages RpcRequest for synchronous query API calls. At the moment I can submit a set of queries in one HTTP round trip using RpcBatch.
Is there a plan to migrate BigQuery requests to gapi.client.HttpRequest in the future? The gapi documentation indicates RpcBatch is deprecated and HttpBatch should be used instead.
Thanks,
You should already be able to use HttpRequest and HttpBatch with BigQuery. All you need to do is use the appropriate /bigquery/v2 REST URL to build your request with gapi.client.request.
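For example, the jobs.query method can be reached through the generic request helper. This is a browser-only sketch: it assumes the gapi client is already loaded and authorized, and 'my-project' is a placeholder project ID.

```javascript
// Runs a query through the BigQuery v2 REST API via gapi.client.request.
// Only works in a page where gapi has been loaded and authorized.
gapi.client.request({
  path: '/bigquery/v2/projects/my-project/queries',
  method: 'POST',
  body: {
    query: 'SELECT 17 AS answer',
    useLegacySql: false
  }
}).then(function (response) {
  console.log(response.result.rows);
});
```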
