Fetch schema fails with Apollo managed federation - graphql

I am integrating Apollo managed federation with my Apollo Federation gateway in NestJS, but when I run the gateway service I get a CheckFailed: one or more checks failed error.
I followed the official documentation but still get the error above. If anyone wants to see my code, I can provide it.
Here is the error:

Related

ADF Oracle Service Cloud connector - correct endpoint

In Azure Data Factory, I'm trying to create a linked service by using the Oracle Service Cloud (Preview) connector to connect to my organisation's Oracle HCM instance. I'm generally following this guidance, using the copy data tool, which should be straightforward: https://learn.microsoft.com/en-us/azure/data-factory/connector-oracle-service-cloud?tabs=data-factory
I have tried the following host names...
https://xxxx.xx.xxx.oraclecloud.com/
https://xxxx.xx.xxx.oraclecloud.com/hcmRestApi
https://xxxx.xx.xxx.oraclecloud.com/hcmRestApi/resources/11.13.18.05/grades
https://xxxx.xx.xxx.oraclecloud.com:443/hcmRestApi/resources/11.13.18.05/grades
... but all of them generate the following error...
Error code 9603
ERROR [HY000] [Microsoft][OSvC] (20) Error while attempting to use REST API: Couldn't resolve host name
ERROR [HY000] [Microsoft][OSvC] (20) Error while attempting to use REST API: Couldn't resolve host name
Activity ID: 590c5007-ec6f-4729-9eb2-d05ef779dc0e.
I'm using a username and password that has been tested on Oracle, and have tried various combinations of using encrypted endpoints, host verification and peer verification as true or false.
I believe I'm using the correct endpoints, based on Oracle's guidance:
Oracle REST endpoints
https://docs.oracle.com/en/cloud/saas/human-resources/22c/farws/rest-endpoints.html
I'm not sure what else to try to get this connector to work. Has anybody else got it working, or perhaps noticed something I'm doing wrong with the host name?
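Since "Couldn't resolve host name" points at DNS rather than authentication, a quick check outside ADF can help narrow it down. Below is a minimal diagnostic sketch in Go (not ADF code): it resolves the host and calls the grades endpoint with basic auth. The host, path, and credentials are placeholders; if a self-hosted integration runtime is involved, it is worth running this from that machine, since that is where the name would have to resolve.

```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Placeholders: substitute your real Oracle Cloud instance and credentials.
	host := "xxxx.xx.xxx.oraclecloud.com"
	url := "https://" + host + "/hcmRestApi/resources/11.13.18.05/grades"

	// 1. Confirm the host name resolves from this network at all.
	addrs, err := net.LookupHost(host)
	if err != nil {
		fmt.Println("DNS lookup failed:", err)
		return
	}
	fmt.Println("resolves to:", addrs)

	// 2. Confirm the REST endpoint answers with basic auth.
	req, _ := http.NewRequest("GET", url, nil)
	req.SetBasicAuth("username", "password") // placeholders
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("HTTP status:", resp.Status)
}
```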

Issues with AWS lambda proxy integration

I have an AWS API Gateway endpoint that uses lambda proxy integration to retrieve data from an AWS RDS instance.
I use a YAML file to re-deploy the API. Once the API is re-deployed, the endpoint described above throws an "Internal Server Error" every time.
The error goes away if I uncheck and then re-check the Lambda proxy integration option/tick-box on the endpoint. This manual step is cumbersome, extremely unintuitive, and could cause serious production issues.
Has anyone facing a similar issue found a way to solve it without the additional manual "unchecking and checking" step?
What change in the YAML can solve this problem?
Under your AWS::ApiGateway::Method resource, set Integration.Type to AWS_PROXY.
https://docs.aws.amazon.com/apigateway/api-reference/resource/integration/#type
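For reference, the same setting can also be applied or checked through the API Gateway API itself. Below is a rough sketch using the AWS SDK for Go; the REST API ID, resource ID, method, and Lambda ARN are all placeholders. Note that configuration changes to a REST API only take effect after a new deployment of the stage.

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/apigateway"
)

func main() {
	sess := session.Must(session.NewSession())
	svc := apigateway.New(sess)

	// Equivalent of Integration.Type: AWS_PROXY on the AWS::ApiGateway::Method.
	// All IDs and ARNs below are placeholders.
	_, err := svc.PutIntegration(&apigateway.PutIntegrationInput{
		RestApiId:             aws.String("a1b2c3d4e5"),
		ResourceId:            aws.String("abc123"),
		HttpMethod:            aws.String("GET"),
		Type:                  aws.String("AWS_PROXY"),
		IntegrationHttpMethod: aws.String("POST"), // Lambda is always invoked with POST
		Uri: aws.String("arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/" +
			"arn:aws:lambda:us-east-1:123456789012:function:my-function/invocations"),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("integration updated; create a new deployment for it to take effect")
}
```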

How to secure composer-rest-server after generating REST API?

I have configured composer-rest-server and provided a Fabric username/password (WebAppAdmin or admin) while configuring it. Now I am able to access the REST API without providing any credentials (through Postman or LoopBack).
I would like to understand how to secure composer-rest-server. I understand that we can add a participant and issue an identity, but I'm not able to connect the dots on how everything works together.
How do I secure composer-rest-server while accessing the REST API?
When and how is the "username/secret" registered against a participant used?
When do we authenticate against the composer-rest-server API, and when do we use a participant identity to access the business network?
Please see the documentation on this subject:
https://hyperledger.github.io/composer/integrating/enabling-rest-authentication.html

How do I connect to Google Cloud Datastore in golang from a GCE VM?

When I try running these example functions to connect to Cloud Datastore, I get a 401 Invalid Credentials error.
I'm running the go code from a VM within a Google Cloud Project. I have enabled the Datastore API and generated a JSON key which is loaded by the example code.
This question is very similar and even mentions the same repo, but does not use the same authentication shown in the examples, and was related to a 403 Unauthorized error.
For some reason the Datastore documentation does not mention Go outside the context of App Engine.
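For what it's worth, here is a minimal sketch of connecting from Go with the cloud.google.com/go/datastore client (which may differ from the older examples the question refers to). The project ID and key path are placeholders. One common cause of credential errors from a GCE VM is that the instance was created without the Datastore access scope, in which case either an explicit JSON key (as below) or a VM re-created with the scope is needed.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/datastore"
	"google.golang.org/api/option"
)

type Task struct {
	Description string
}

func main() {
	ctx := context.Background()

	// Placeholders: project ID and key path. If the GCE VM was created with the
	// datastore scope (https://www.googleapis.com/auth/datastore), the
	// option.WithCredentialsFile line can be dropped and the VM's default
	// service account will be used instead.
	client, err := datastore.NewClient(ctx, "my-project-id",
		option.WithCredentialsFile("/path/to/service-account-key.json"))
	if err != nil {
		log.Fatalf("datastore.NewClient: %v", err)
	}
	defer client.Close()

	// Write and read back a trivial entity to confirm the credentials work.
	key := datastore.IncompleteKey("Task", nil)
	key, err = client.Put(ctx, key, &Task{Description: "connectivity check"})
	if err != nil {
		log.Fatalf("Put: %v", err)
	}

	var got Task
	if err := client.Get(ctx, key, &got); err != nil {
		log.Fatalf("Get: %v", err)
	}
	fmt.Println("round-tripped:", got.Description)
}
```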

What is the best practice for architecting an OAuth server and an API server separately?

I am setting up an API for a mobile app (and, down the line, a website). I want to use OAuth 2.0 for authentication of the mobile client. To optimize my server setup, I wanted to set up an OAuth server (Lumen) separate from the API server (Laravel). My database also lives on its own separate server.
My question is: if I'm using separate servers and a package like lucadegasperi/oauth2-server-laravel, do I need to have the package running on both servers?
I am assuming so, because the OAuth server will handle all of the authentication to issue access tokens and refresh tokens, but the API server will still need to check the access token on protected endpoints.
Am I correct in the above assumptions? I have read many different people recommending that the OAuth server be separate from the API server, but I can't find any tutorials on how the multi-server dynamic works.
BONUS: I run my DB migrations from my API server, so I assume the OAuth package's migrations would also need to be run from the API server. Correct?
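This is not how the Laravel package itself does it, but the multi-server dynamic can be sketched generically: the OAuth server issues tokens, and the API server verifies each bearer token before serving a protected endpoint, either by querying the shared token tables or by asking the OAuth server. The Go sketch below illustrates the second option with an RFC 7662-style introspection call; the endpoint URL and client credentials are invented placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"strings"
)

// introspect asks the separate OAuth server whether an access token is active.
// The URL and client credentials are placeholders for illustration only.
func introspect(token string) (bool, error) {
	form := url.Values{"token": {token}}
	req, err := http.NewRequest("POST", "https://oauth.example.com/introspect",
		strings.NewReader(form.Encode()))
	if err != nil {
		return false, err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.SetBasicAuth("api-server-client-id", "api-server-client-secret")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	var body struct {
		Active bool `json:"active"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return false, err
	}
	return body.Active, nil
}

// requireToken protects an API endpoint: the OAuth server issues tokens,
// the API server only verifies them before running the handler.
func requireToken(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		ok, err := introspect(token)
		if err != nil || !ok {
			http.Error(w, "invalid or expired access token", http.StatusUnauthorized)
			return
		}
		next(w, r)
	}
}

func main() {
	http.HandleFunc("/api/profile", requireToken(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "protected resource")
	}))
	http.ListenAndServe(":8080", nil)
}
```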
