API Platform automatically issues events to a Mercure hub when updating resources marked with mercure=true, and it publishes the topics to listen on in the corresponding GET endpoints. Nice. Is there already an option I can use in the generated GraphQL endpoints?
The Mercure docs (https://github.com/dunglas/mercure#how-to-use-mercure-with-graphql) say that Mercure should integrate "very well" with any GraphQL API as long as it "returns a corresponding topic URL". But as far as I know, the GraphQL API generated by API Platform doesn't (https://api-platform.com/docs/core/mercure/). Am I missing anything, or is this upcoming? I'm using apollo-vue on the frontend.
As a workaround, I could listen for changes on all entity ids using dedicated EventSource topics on the frontend. I would use a URI template like http://localhost:8080/product_streams/{id} there. But I can't manage to write a URI template that matches only certain ids, like http://localhost:8080/product_streams/{id:123,456,789}, to get updates only for streams 123, 456, and 789 (I tried https://uri-template-tester.mercure.rocks/ and https://www.rfc-editor.org/rfc/rfc6570#page-18), with no success :(
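For what it's worth, RFC 6570 only lets a template describe how a variable expands (the {id:3} form is a prefix modifier, not a value filter), so a template restricted to specific ids isn't expressible. What does work is subscribing to several explicit topics at once, since a Mercure hub accepts a repeated topic query parameter. A minimal sketch in the browser, assuming the hub is exposed at the standard /.well-known/mercure path (the hub URL and ids are illustrative):

const hub = new URL('http://localhost:8080/.well-known/mercure');
// One explicit topic per id instead of a single template.
for (const id of [123, 456, 789]) {
  hub.searchParams.append('topic', `http://localhost:8080/product_streams/${id}`);
}
const source = new EventSource(hub);
source.onmessage = (event) => {
  console.log('Update received:', JSON.parse(event.data));
};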
Edit: GraphQL subscriptions are now supported by API Platform. Check out the documentation: https://api-platform.com/docs/master/core/graphql/#subscriptions
API Platform doesn't support GraphQL subscriptions yet.
Adding support for subscriptions using Mercure (which is designed, among other things, for this use case) is planned, but the work hasn't started yet. Any help is welcome!
I want to fetch an API and POST data to that API in a Node.js app that I created using a Yeoman generator.
You can use the Fetch API, which provides an interface for fetching resources (including across the network). It will seem familiar to anyone who has used XMLHttpRequest, but the new API provides a more powerful and flexible feature set.
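A minimal sketch of both calls, assuming Node 18+ where fetch is available globally (older versions need a package such as node-fetch), run as an ES module so top-level await works; the endpoint and payload are placeholders:

const api = 'http://localhost:3000/items'; // placeholder endpoint from your generated app

// GET: read existing data from the API
const existing = await fetch(api).then((res) => res.json());
console.log(existing);

// POST: send JSON data to the same API
const response = await fetch(api, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'example' }),
});
console.log(response.status);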
In my Laravel 5.7/MySQL app I need to make calls to an external API: reading some data from the external app with a GET request and writing some data to my DB with a POST request.
Which tools/scripts are there for this, and how do I make these requests safe?
MODIFIED:
Thanks for the feedback, but it looks like I phrased my question badly.
The external app (I do not know what it is written in) needs to read data from my app and write data to my Laravel 5 app.
And how do I test these requests while developing locally?
It looks like I have to use Guzzle, as in the provided link?
Which steps do I have to take for safety on my side?
Thanks!
These three libraries are popular for your use case:
Guzzle
Curl
zttp
If the database is local, you can use Eloquent; if not, a remote connection to that database may help. Otherwise, if you only have API access, you should use one of the above libraries (or any alternative) to make the HTTP requests your application might require.
Security-wise, as long as you are only making requests to a remote server, the suggested way is to store any key or secret string used to authorize your requests (if applicable) in your .env file, to prevent it from being committed to your version control system. Needless to say, always handle any possible HTTP error the remote API might throw, in order to prevent unwanted errors on your application's side.
And as Abir Adak mentioned in the comment check this thread for further details.
Updated answer: for the MODIFIED part of the question, you generally have three popular options:
REST API
This blog post is a detailed walkthrough written for Laravel
This one from Stack Overflow can help you with designing your API
This last one can help you develop widely accepted API responses and endpoints by following its specifications.
GraphQL
It can save some time when developing your API, but I suggest making sure that the consumers of your API are happy to use this option.
GraphQL
Laravel Package for GraphQL
If using Laravel isn't a must, and you are using PostgreSQL, you might want to look at Hasura as well.
SOAP
I have little knowledge of this option for Laravel; I just know folks coding in C# and .NET are happier to expose their APIs with this protocol. Read more about it on Wikipedia.
Postman is a great tool for testing your API or any other API.
Here is a simple use case to illustrate my question: imagine a bank's mobile app; the features to implement are:
List the beneficiaries
Do a payment
We have one micro-service to handle the payments "PaymentService" and one to deal with the beneficiaries "BeneficiaryService". Both have a documented contract with RAML or OpenAPI.
I think it's not a good idea to let the mobile app call the two micro-services independently: it would expose too much of the internal structure of the information system and provide no abstraction, and therefore no mitigation.
So we need to build a "facade" API that exposes the routes of "PaymentService" + "BeneficiaryService" to ease the integration. Let's call it "MyAwesomeMobileApp".
I assume "MyAwesomeMobileApp" can be achieved by writing code (e.g. via an ESB or a dedicated Spring app) or via an API gateway, by configuration.
The thing is, how do you provide aggregated documentation to your customers (the people coding the mobile app frontend)?
By aggregated documentation I mean a documentation with a set of routes from "PaymentService" and from "BeneficiaryService": a sort of third contract made from a subset of each micro-service.
Thank you
If you're providing (requiring) a gateway API between the application and the other services (which seems like a good choice in your example), you provide documentation only for the gateway API endpoints as the services it consumes are not relevant to the mobile app developer.
The way your facade implements communication with the services behind it could well be different from the services themselves (for example, hiding a field that is for internal use, or using different field names) and, as such, the contract even for the service-specific models could well be different.
So, document the facade/gateway API well (and independently) and be on your way. Internally it should have brokers or some other separation between the endpoints it exposes and the specific requirements of the services it consumes, so that they can be updated independently without overly tight coupling.
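To make the hiding/renaming point concrete, here is a minimal sketch of such a facade route, assuming an Express/Node gateway on Node 18+ (global fetch); the internal service URL and all field names are hypothetical:

import express from 'express';

const app = express();

// Hypothetical internal service; in practice this would come from configuration.
const BENEFICIARY_SERVICE = 'http://beneficiary-service.internal/api';

app.get('/beneficiaries', async (_req, res) => {
  const upstream = await fetch(`${BENEFICIARY_SERVICE}/beneficiaries`);
  const internal: any[] = await upstream.json();

  // Translate the internal model into the public contract the facade documents:
  // expose only what the mobile app needs, under the facade's own field names.
  res.json(internal.map((b) => ({
    id: b.id,
    displayName: b.full_name,       // renamed internal field
    // b.internal_risk_score is deliberately not exposed
  })));
});

app.listen(3000);

The OpenAPI/RAML document you hand to the mobile team then describes only these facade routes and models.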
If you are looking for a simple concatenation of the contracts, or in other words a "unified" public documentation that contains endpoints from both API specification documents/contracts, then you can give APIMatic's merging feature a try.
A detailed step-by-step walkthrough can be found here: https://docs.apimatic.io/manage-apis/api-merging/#merging-two-api-specifications---a-basic-example. In brief, the steps for your scenario are:
Structure your API contracts in a root directory, as shown in the example below:
dir\
    payments\
        openapi.json
    beneficiary\
        main.raml
    APIMATIC-META.json
Here openapi.json and main.raml can be your OpenAPI and RAML contracts respectively.
A minimalistic APIMATIC-META.json configuration file can look like this to enable merging:
{
    "MergeConfiguration": {
        "MergedApiName": "My title",
        "MergeApis": true
    }
}
ZIP the directory, upload it, and import it into the APIMatic Dashboard (you will need to sign up first).
Preview your public documentation by doing Generate > Proceed > Preview API Portal. Publish/host it as required.
If you are looking to automate the process, APIMatic has an API too: https://www.apimatic.io/docs/api#/http
My idea is to create a microservice approach with GraphQL and Serverless.
I'm thinking about creating a service for every table in DynamoDB, then creating an API Gateway service, and in the API Gateway service using graphql-tools to stitch the schemas together.
This works pretty well and I'm satisfied.
But now I want to add authorization to my GraphQL queries and mutations.
I have added a custom authorizer in the API Gateway service that resolves the JWT from the client and passes the userId to the GraphQL context.
But now I want to add authorization to my resolvers.
What is the best approach for this?
I want it to be as modular as possible, and the best option (I think) is to add the authorization in the API Gateway service so my other services stay clean. But I don't know how.
Any ideas?
You may want to look into AppSync from AWS. It will handle a lot of this for you: authorizers, querying DynamoDB, etc.
I've built Lambda APIs using Apollo GraphQL and exposed them through API Gateway, then used Apollo's schema stitching to connect them together. There's one really important caveat here: it's slow. There's already a speed penalty with API Gateway, and while it's acceptable, imagine jumping through multiple gateways before returning a response to a user. You can cache the schema, which helps a bit. Your tolerance will depend on your app and UX, of course. Maybe it's just fine; only you (or your users) can answer that.
That note aside, the way I handled auth was to accept an Authorization header and check it manually. I did not use any custom authorizers from API Gateway. I was not using Cognito for this, so it talked to another service. This all happened before the resolvers. Why are you looking to do the authorization in the resolvers? Are there only some that you wish to protect? Access control?
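A minimal sketch of that pattern, assuming apollo-server-lambda; verifyToken is a hypothetical helper that validates the JWT against your auth service and returns the userId:

import { ApolloServer, AuthenticationError, gql } from 'apollo-server-lambda';
import { verifyToken } from './auth'; // hypothetical JWT-validation helper

const typeDefs = gql`
  type Query {
    me: String
  }
`;

const resolvers = {
  Query: {
    // By the time any resolver runs, the user is already authenticated.
    me: (_: unknown, __: unknown, ctx: { userId: string }) => ctx.userId,
  },
};

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // Runs once per request, before any resolver.
  context: async ({ event }) => {
    const header = event.headers.Authorization ?? event.headers.authorization;
    if (!header) throw new AuthenticationError('Missing Authorization header');
    const userId = await verifyToken(header.replace('Bearer ', ''));
    return { userId };
  },
});

export const handler = server.createHandler();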
It may not be best to add the custom authorizers to API Gateway in this case, because you're talking about performing this action at the resolver level in the code.
GraphQL has one POST endpoint for everything, so this is not going to help with configuring API Gateway auth per resource. That means you're beyond API Gateway and into the invocation of your Lambda anyway; you didn't prevent the invocation, so you're being billed and running code now.
So you might as well write your custom logic to authenticate. If you're using Cognito, there is an SDK to help you out. Or take a look at AppSync.
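If you do end up wanting checks at the resolver level (say, access control on only some mutations), one common pattern is to wrap individual resolvers in a guard so the rest stay clean. A sketch, where the Context shape and role names are hypothetical:

type Context = { userId: string; roles: string[] };
type Resolver<T> = (parent: unknown, args: any, ctx: Context) => T;

// Higher-order guard: wraps a resolver and rejects callers without the role.
const requireRole =
  (role: string) =>
  <T>(resolver: Resolver<T>): Resolver<T> =>
  (parent, args, ctx) => {
    if (!ctx.roles.includes(role)) {
      throw new Error(`Forbidden: requires role "${role}"`);
    }
    return resolver(parent, args, ctx);
  };

const resolvers = {
  Mutation: {
    // Only admins may delete; unprotected resolvers stay unwrapped.
    deleteItem: requireRole('admin')((_parent, args: { id: string }, _ctx) => {
      // ...perform the delete against DynamoDB here...
      return args.id;
    }),
  },
};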
We use the Google Calendar API v3, and Google has said that they're discontinuing support for JSON-RPC: Discontinuing support for JSON-RPC and Global HTTP Batch Endpoints.
I can't find out whether they plan a compliant v4 version or whether the current version is compliant. The documentation doesn't mention it. Java Quickstart
Any information about that?
It's not just Calendar that is affected; all Google discovery-based APIs are affected. The batching endpoint
POST /batch HTTP/1.1
Authorization: Bearer your_auth_token
Host: www.googleapis.com
Content-Type: multipart/mixed; boundary=batch_foobarbaz
Content-Length: total_content_length
will be discontinued around March 25, 2019. That being said, I am skeptical that the client libraries have all been updated to remove it already. I am a contributor on two of them and haven't heard anything yet about removing the batching ability from the libraries.
Google API Client Libraries have been regenerated to no longer make requests to the global HTTP batch endpoint. Clients using these libraries must upgrade to the latest version. Clients not using the Google API Client Libraries and/or making custom calls to the JSON-RPC endpoint or HTTP batch endpoint will need to make the changes outlined below.
The global batch endpoint is www.googleapis.com/batch, and the new API-specific one is www.googleapis.com/batch/<api>/<version>.
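Following that pattern, a homogeneous batch request against the Calendar API from the question would target the API-specific endpoint (same headers as the example above; the path follows the <api>/<version> shape from the announcement):

POST /batch/calendar/v3 HTTP/1.1
Authorization: Bearer your_auth_token
Host: www.googleapis.com
Content-Type: multipart/mixed; boundary=batch_foobarbaz
Content-Length: total_content_length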
I think the choice of words is incorrect here, and the libraries will be regenerated only if needed. The change should not affect users, with one exception: heterogeneous batch requests. A single batch request containing calls to more than one API won't work anymore, because the endpoint is API-specific.
Now for the bad news: to my knowledge, nothing is going to replace it. You will not be able to make heterogeneous batch requests. The Google APIs Java client library appears to use the old endpoint (BatchRequest.java), so if you are using heterogeneous batching you're going to have to change your code by the time they update the library to support the new API-specific endpoint.
Update
After a lot of back and forth with Google over the last 24 hours, I have gotten some clarification on that post.
Batching will still work with the client libraries.
Most of the client libraries appear to already use this endpoint, so there should be no change.
You will only be able to call one API within a batch request. For example, you can't call the Drive and Calendar APIs in the same batch request; you will have to make two batch requests, one for Drive and one for Calendar.
There may be some edits coming to that post to make the language a little clearer.
I have updated my answer to reflect the clarifications from Google.
Google is not removing batching entirely.
Per the blog, they are removing heterogeneous batching (a single batch containing requests to more than one API) and consolidating homogeneous batching (requests batched against a single API) onto API-specific batch endpoints.
From my understanding of the blog, if you are batching several different kinds of requests, i.e. a Foo request and a Bar request in a single batch call, you will have to adjust your code to use one batch for each. If you are already doing that, it is unclear whether you will have to change your code; perhaps newly released libraries will have a new way to handle these requests.