I have created an AWS API with the HTTP integration type.
The output of this API is a secret color (it could be red, yellow, blue, etc.).
I am calling it a secret color because we have conditions here.
I have five friends, A (secret color: Red), B (Yellow), C (Black), D (Green), and E (White), and each one has a different favorite/secret color.
My task is that if A hits my API, he should be served the Red color; in the case of B it's Yellow, and so on...
I also have a responsibility to keep their secret a secret: B shouldn't, by any means, be able to sneak a look at A's color, even if he tries hard.
(PS: They all know each other and they are all hitting the same API. How can I achieve my goal of keeping the above conditions a priority while maintaining the security of my API?)
(Colors are the data here, and A, B, etc. are companies.)
(I have created my HTTP endpoint backend in Ruby, and used AWS API Gateway to expose the API and to leverage features like throttling, API key authentication, etc.)
I am building a microservice that will have a management layer, and a corresponding microservice for a public API. The management layer's client will be imported into the API service as well as a few other apps. What I am wondering is the proper way to structure the management layer's calls. I'm using RPC.
For example, let's say the service contains Human, Dog, Food. Food has to be owned by a Human OR Dog, not both and not neither.
Now, since the management service will only be accessed through its client and not by URL, would it be better to define the specs like:
POST /humans/{id}/food, client call: mgmtService.createFoodForHuman(humanId, Food)
POST /dogs/{id}/food, client call: mgmtService.createFoodForDog(dogId, Food)
or
POST /food, client call: mgmtService.createFood(Food)
where, in the second option, Food requires the caller to pass in either a human ID or a dog ID.
To me, the first example seems more straightforward from a code-client perspective, since the methods are more clearly defined, while the second example gives no information about what kind of ID is needed. But I am interested in any thoughts on which one is best practice. Thanks!
It actually depends; at larger scale you would want to apply the first method. Sometimes, when you work with other teams to build a complex service, another team may have to fix a bug of yours in your absence, or in any other case where you can't fix it at the time. If the bug is really urgent, you wouldn't want it to become a blocker because they did not understand how your code works.
Well, you can always provide documentation for every API detail on a platform such as Confluence, but it would still take time to look it up.
On the other hand, with the second method you can achieve that with one API for all cases.
TL;DR -> Large scale 1st method, small scale 2nd method.
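A minimal Go sketch of how the two options differ on the client side (all type and method names are hypothetical). Note that option 2 turns the "owned by a Human OR a Dog, not both and not neither" rule into a runtime check, whereas option 1 encodes it in the method signatures:

```go
package main

import "fmt"

type Food struct{ Name string }

// Option 1: separate, self-documenting methods per owner type.
type MgmtServiceV1 interface {
	CreateFoodForHuman(humanID string, f Food) error
	CreateFoodForDog(dogID string, f Food) error
}

// Option 2: one generic method; the caller must know which ID to set,
// and the ownership rule becomes a runtime validation.
type CreateFoodRequest struct {
	HumanID string // set exactly one of HumanID / DogID
	DogID   string
	Food    Food
}

type MgmtServiceV2 interface {
	CreateFood(req CreateFoodRequest) error
}

// The check option 2 needs, which option 1 gets for free from its types:
func validate(req CreateFoodRequest) error {
	if (req.HumanID == "") == (req.DogID == "") {
		return fmt.Errorf("exactly one of HumanID or DogID must be set")
	}
	return nil
}

func main() {
	fmt.Println(validate(CreateFoodRequest{HumanID: "h1", Food: Food{"kibble"}})) // <nil>
	fmt.Println(validate(CreateFoodRequest{}))                                    // error: neither ID set
}
```

This makes the trade-off in the answer above concrete: option 1 scales better for readers of the client code, while option 2 keeps the API surface to a single call.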
For example, I have a post service. In the UI I need to show a post and user info (username and ID, for redirecting to the user's page).
Options:
Store the username and ID in the post service. (Whenever a user registers with the system, I will send a subset of their details to the post service via RabbitMQ.) (Total requests from the UI: 1)
Store only the ID of the user (AR), and have the UI component fetch the user by ID. (Total requests from the UI: 2)
Both of them are OK. The decision is based on how you map the concepts between different bounded contexts. The patterns are:
Anticorruption Layer
Shared Kernel
Open Host Service (option 2)
Separate Ways
Customer Supplier
Conformist
Partnership
Published Language
...
It is not only about personal preference, but also about the organization's structure (Conway's Law).
If both contexts (post and user) are controlled by your team, you could choose either of them. Considering the complexity of option 1, I prefer option 2 since it's very straightforward. Starting from the easier option and then evolving your architecture is always a good idea.
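Option 2 amounts to two lookups composed by the UI: fetch the post, then fetch the user it references. A minimal Go illustration, with in-memory maps standing in for the post and user services (all names and data are hypothetical):

```go
package main

import "fmt"

// Post stores only the user's ID (option 2): no duplicated username.
type Post struct {
	ID     string
	UserID string
	Body   string
}

type User struct {
	ID       string
	Username string
}

// Hypothetical in-memory stand-ins for the two services' APIs.
var posts = map[string]Post{"p1": {ID: "p1", UserID: "u1", Body: "hello"}}
var users = map[string]User{"u1": {ID: "u1", Username: "alice"}}

func fetchPost(id string) Post { return posts[id] }
func fetchUser(id string) User { return users[id] }

func main() {
	// The UI issues two requests: first the post, then the user it references.
	p := fetchPost("p1")
	u := fetchUser(p.UserID)
	fmt.Printf("%s by %s (id %s)\n", p.Body, u.Username, u.ID)
}
```

The cost is the second round trip; the benefit is that the post service never has to stay in sync with username changes.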
Going through the container package of the Go Cloud SDK, one sees that there are mainly two distinct kinds of types and methods for the available resources:
ProjectsLocations…, e.g. ProjectsLocationsClustersCreateCall
ProjectsZones…, e.g. ProjectsZonesClustersAddonsCall
What is their difference?
Just for the record, I am looking for the pattern one has to follow so that:
a) it passes (in some method?) the project ID
b) it retrieves all the available GKE clusters belonging to that project
The container/v1 Go APIs are generated from the underlying Google Kubernetes Engine (GKE) REST APIs (public documentation), which support querying clusters either by zone or by location. Inspecting those docs, you will find that most recommendations are to use the locations API. Although the zone-specific APIs remain available for backwards compatibility, filtering by zone, for example, is deprecated.
From memory, I believe the ability to search for clusters by location was added when support for regional GKE clusters was announced; the control plane for such clusters is shared across multiple zones for high availability purposes and an API was provided which generalises over both zonal and regional clusters.
To obtain all clusters in a project via an API call, as per your request, you can use the location portion of the parent argument to the (*container.ProjectsLocationsClustersService).List method:
projectID := "my-project-id" // TODO: fill in your project ID
svc, err := container.NewService(context.TODO())
if err != nil {
    // TODO: handle error
}
parent := fmt.Sprintf("projects/%s/locations/-", projectID) // location "-" matches all zones and regions
resp, err := svc.Projects.Locations.Clusters.List(parent).Do()
// TODO: do something with resp and err
More details on the structure of the parent parameter and the behaviour of the List call are available in the API docs.
The zonal API will list regional clusters when queried for all zones (by setting the zone parameter to -). However, as its List method only accepts zone arguments as filters, it offers no way to filter regional clusters by a specific region. Other endpoints of the same API have a similar limitation.
https://github.com/square/connect-javascript-sdk/blob/master/docs/OAuthApi.md#obtainToken
How many access tokens can be generated per personal API token with the Square SDK? A million at a time? Infinite? 100? It isn't mentioned on the website, nor in any other Stripe documentation that I could find.
If you really mean Square, and you're solely talking about OAuthing other merchants (based on the link you provided) then our docs say:
By default, the OAuth API lets up to 500 Square accounts authorize your application.
It is possible to increase that number, but you would need to reach out to support and communicate the need with them.
https://docs.connect.squareup.com/api/oauth#navsection-oauth
I want to implement Hystrix in a gateway (like Zuul).
The gateway will discover services A, B, or C; assume service A has 10 instances and 10 APIs. My question is:
What is the best practice for choosing the command key? Service name + instance IP + API name?
This seems to give the finest level of detail, since a failure of one API on one instance will not circuit-break the others, but it may require a large number of command keys.
Here is an example. Suppose I talk to service A through a load balancer, and there are 5 instances of service A with the IPs below:
192.168.1.1
192.168.1.2
192.168.1.3
192.168.1.4
192.168.1.5
and service A has 4 APIs:
createOrder
deleteOrder
updateOrder
getOrder
Now there are many options for choosing the command key:
service level, e.g. serviceA
instance level, e.g. 192.168.1.1
instance + API level, e.g. 192.168.1.1_getOrder
With the first option there is only one Hystrix command, which takes less CPU and memory, but if one API fails, all APIs are circuit-broken.
Your HystrixCommandKey identifies a HystrixCommand, which encapsulates aService.anOperation(). Thus a HystrixCommandKey could be named using the composite key Service + Command (but not the instances running the service or their IP addresses). If you do not provide an explicit name, the class name of the HystrixCommand is used as the default HystrixCommandKey.
The Hystrix Dashboard then aggregates metrics for each HystrixCommandKey (Service + Command) from every instance running in the service cluster.
In your example, it would be serviceA_createOrder.
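The naming scheme above can be sketched language-agnostically. Hystrix itself is a Java library, so the breaker below is a deliberately over-simplified, hypothetical stand-in written in Go; the point is only that there is one breaker per Service + Command key, shared by all instances behind the load balancer:

```go
package main

import "fmt"

// A hypothetical, much-simplified circuit-breaker state.
type breaker struct {
	failures int
	open     bool
}

// One breaker per Service+Command key; instance IPs are not part of the
// key, so all 5 instances of serviceA share e.g. serviceA_createOrder.
var breakers = map[string]*breaker{}

func commandKey(service, command string) string {
	return service + "_" + command
}

func recordFailure(service, command string) {
	key := commandKey(service, command)
	b, ok := breakers[key]
	if !ok {
		b = &breaker{}
		breakers[key] = b
	}
	b.failures++
	if b.failures >= 3 { // hypothetical trip threshold
		b.open = true
	}
}

func main() {
	// Failures on createOrder (from any instance) trip only its breaker;
	// getOrder has its own key and stays closed.
	for i := 0; i < 3; i++ {
		recordFailure("serviceA", "createOrder")
	}
	recordFailure("serviceA", "getOrder")
	fmt.Println(breakers["serviceA_createOrder"].open) // true
	fmt.Println(breakers["serviceA_getOrder"].open)    // false
}
```

Keying by Service + Command rather than per instance keeps the number of breakers proportional to the number of operations (here 4 per service), not operations × instances.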