Apollo Federation gateway and another server implementation - apollo-server

I am reading about apollo federation and how to migrate from schema stitching and a question came when I read:
The basic strategy for migrating from a stitching gateway to Apollo Federation is to start by making the underlying services federation-capable
https://www.apollographql.com/docs/apollo-server/federation/migrating-from-stitching/#adding-federation-support-to-services
So basically, the federation gateway can't accept a service that isn't federation-aware? Does that mean there's no way to use federation with another GraphQL server (such as https://github.com/nuwave/lighthouse), or did I misunderstand that line?

Yes, any GraphQL service that's incorporated into the federation gateway has to implement Apollo's federation specification.
Federation relies on the service schema containing several specific types, directives and type extensions:
scalar _Any
scalar _FieldSet

union _Entity

type _Service {
  sdl: String
}

extend type Query {
  _entities(representations: [_Any!]!): [_Entity]!
  _service: _Service!
}

directive @external on FIELD_DEFINITION
directive @requires(fields: _FieldSet!) on FIELD_DEFINITION
directive @provides(fields: _FieldSet!) on FIELD_DEFINITION
directive @key(fields: _FieldSet!) on OBJECT
directive @extends on OBJECT
The service does not have to be a GraphQL.js implementation, but it does need to implement the above additions to the schema as outlined in the spec.
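A quick way to check whether an existing service already implements these additions (a sketch; run it against your own service's GraphQL endpoint) is to send the `_service` query and see whether the service returns its SDL:

```graphql
query GetServiceSDL {
  _service {
    sdl
  }
}
```

If the service is federation-capable, the response contains the schema SDL as a string; otherwise the query fails with an unknown-field error on `_service`.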

Like @daniel-rearden said, the service does need to implement the additions outlined in the spec.
Check out graphql-transform-federation to assist you with adding the required types and directives. Also check out this blog post.

Once you have the services ready, you also need to build the gateway. If you are using docker-compose, you can use a reusable docker image as follows:
version: '3'
services:
  a:
    build: ./a # one service implementing federation
  b:
    build: ./b
  gateway:
    image: xmorse/apollo-federation-gateway
    ports:
      - 8000:80
    environment:
      CACHE_MAX_AGE: '5' # default cache
      ENGINE_API_KEY: '...' # to connect to the apollo engine
      POLL_INTERVAL: 30 # to pick up services changes
      URL_0: "http://a"
      URL_1: "http://b"
Check out the GitHub repo for a working example.


Lambda Provisioned Concurrency in CloudFormation

Note: Please read my question before flagging it, as it is different from many other Provisioned Concurrency questions I've seen on SO.
I need to configure provisioned concurrency in one of my existing applications that uses CloudFormation templates with Lambda functions (AWS::Lambda::Function resource, NOT SAM with AWS::Serverless::Function resource).
I did some tests but here's where I am stuck right now:
Provisioned concurrency can only be configured for an Alias or a Version, however:
It can't be configured for an Alias that points to the live function; it must point to a Version.
It can't be configured for the Version that is $LATEST.
So what's the "right" way to setup Provisioned concurrency?
When deploying CloudFormation template, I can create a Version resource which can have provisioned concurrency configured (shown below). The API Gateway endpoint can directly point to this specific Version instead of the $LATEST version.
However, there is no way to update Version resource. Once it's created, it can only be deleted.
So each time I update my lambda function code, I would have to manually remove the current Version resource from CloudFormation and add a new one so it can create a new Version. This defeats the purpose of having template to deploy.
What are my other options? How can I have a Lambda function ($LATEST, Version, or Alias) that has provisioned concurrency configured, where I can make changes to the Lambda code without having to modify the CloudFormation template each time?
######## LambdaTest Function ########
LambdaTest:
  Type: "AWS::Lambda::Function"
  DependsOn:
    - LambdaRole
    - LambdaPolicy
  Properties:
    FunctionName: "LambdaTest"
    Role: !GetAtt LambdaRole.Arn
    Code:
      S3Bucket: !Ref JarFilesBucketName
      S3Key: LambdaTest.jar
    Handler: com.example.RnD.LambdaTest::handleRequest
    Runtime: "java11"
    Timeout: 30
    MemorySize: 512

######## LambdaTest Function Version ########
LambdaTestVersion:
  Type: "AWS::Lambda::Version"
  Properties:
    FunctionName: !GetAtt LambdaTest.Arn
    Description: "v1"
    ProvisionedConcurrencyConfig:
      ProvisionedConcurrentExecutions: 5
You are correct that we cannot use the $LATEST version, per AWS. But I think you are missing a key piece of information: SAM usually generates these Lambda resources for you. Let me try to share what I did.
First, just FYI: SAM generates the Lambda version and alias resources that you are seeking as part of its process. If you want to go directly/manually and create the Lambda resources yourself, sure you can, but then you will have to wire up the equivalent of AutoPublishAlias: live on your own.
My solution/workaround is to set AutoPublishAlias: live in the function's properties.
You can just add that per the reference docs, or follow the steps below and compare what the SAM template generated against what the AWS Lambda console shows.
Optional console steps, for your help:
Select Add configuration; Provisioned Concurrency can be enabled for a specific Lambda function version or alias (but you can't use $LATEST).
Since you can have different settings for each version of a function, using an alias makes it easier to apply these settings to the correct version of your function.
Select the alias live.
Note: you will have to keep that alias pointed at the latest version, which the AWS SAM AutoPublishAlias setting does for you.
Then go to Provisioned Concurrency, enter a value such as 500, and Save.
Now the Provisioned Concurrency configuration is in progress, and once it completes all execution environments are ready to handle the inbound concurrent requests.
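For completeness, here is a sketch of what this looks like in a SAM template (resource names, runtime, and paths are placeholders). With AutoPublishAlias, SAM publishes a new Version on every code change and repoints the alias to it, and ProvisionedConcurrencyConfig attaches the provisioned concurrency to that alias:

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs12.x
    CodeUri: ./src
    # SAM creates the Version and the "live" Alias, and updates
    # the Alias automatically on each deployment
    AutoPublishAlias: live
    # Applied to the alias, not to $LATEST
    ProvisionedConcurrencyConfig:
      ProvisionedConcurrentExecutions: 5
```

This avoids the manual delete/recreate cycle of AWS::Lambda::Version resources, since SAM generates a fresh version resource on every code change.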

How to debug an EnvoyFilter in Istio?

I have the following filter:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: proper-filter-name-here
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      app: istio-ingressgateway
  configPatches:
    - applyTo: NETWORK_FILTER
      match:
        context: GATEWAY
        listener:
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
      patch:
        operation: INSERT_BEFORE
        value:
          name: envoy.lua
          typed_config:
            "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
            inlineCode: |
              function envoy_on_request(request_handle)
                request_handle:logDebug("Hello World!")
              end
I am checking the logs for the gateway and it does not look like the filter is applied. How do I debug an EnvoyFilter? Where can I see which filters are applied on each request?
This topic is very well described in the documentation:
The simplest kind of Istio logging is Envoy’s access logging. Envoy proxies print access information to their standard output. The standard output of Envoy’s containers can then be printed by the kubectl logs command.
You have asked:
Where can I see which filters are applied on each request?
Based on this issue on github:
There is no generic mechanism.
It follows that if you wanted to see what filter was applied to each request, you would have to create your custom solution.
However, you can easily get logs about each request, based on this fragment of the documentation:
If you used an IstioOperator CR to install Istio, add the following field to your configuration:
spec:
  meshConfig:
    accessLogFile: /dev/stdout
Otherwise, add the equivalent setting to your original istioctl install command, for example:
istioctl install <flags-you-used-to-install-Istio> --set meshConfig.accessLogFile=/dev/stdout
You can also choose between JSON and text by setting accessLogEncoding to JSON or TEXT.
You may also want to customize the format of the access log by editing accessLogFormat.
Refer to global mesh options for more information on all three of these settings:
meshConfig.accessLogFile
meshConfig.accessLogEncoding
meshConfig.accessLogFormat
You can also change the access log format and test the access log by following the linked instructions.
See also (EDIT):
How to debug your Istio networking configuration:
EnvoyFilters will manifest where you tell Istio to put them. Typically a bad EnvoyFilter will manifest as Envoy rejecting the configuration (i.e. not being in the SYNCED state above) and you need to check Istiod (Pilot) logs for the errors from Envoy rejecting the configuration.
If configuration didn’t appear in Envoy at all– Envoy did not ACK it, or it’s an EnvoyFilter configuration– it’s likely that the configuration is invalid (Istio cannot syntactically validate the configuration inside of an EnvoyFilter) or is located in the wrong spot in Envoy’s configuration.
Debugging Envoy and Istiod
Using EnvoyFilters to Debug Requests
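To see whether your EnvoyFilter actually made it into the proxy, a few istioctl and kubectl commands are useful (a sketch; the pod name is a placeholder for your actual ingress gateway pod):

```sh
# Check that the gateway's config is SYNCED (not STALE or rejected)
istioctl proxy-status

# Dump the listener/filter configuration Envoy is actually running,
# then search it for your filter by name
istioctl proxy-config listeners istio-ingressgateway-<pod-id> -n istio-system -o json

# Watch Istiod logs for errors from Envoy rejecting the pushed config
kubectl logs -n istio-system deploy/istiod | grep -i reject
```

If the filter does not appear in the listener dump at all, the patch most likely did not match (wrong context, filter name, or applyTo), per the quoted debugging guide.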

Serverless AWS stack - need to reference ApiGatewayRestApi in a stack with IAM roles, but no Lambdas or HTTP endpoints?

I currently have a stack deployed to AWS which has a lot of REST endpoints (Lambda functions), some other Lambdas for maintenance operations, and a DynamoDB, Cognito User Pool, Elastic Search Domain, IAM roles etc. It's all deployed with the serverless framework, using serverless.yml for defining the stack.
In order to avoid the 200 resources limit (and get a better structure), I'm trying to split the current stack into multiple stacks.
The plan is to keep the current stack for all the resources with persisted data (DynamoDB, Elastic Search, Cognito, IAM etc), and then define new stacks for the lambda functions. One for the maintenance functions, and a couple of other stacks for different types for functions invoked by HTTP through API Gateway.
Now to the problem: I have commented out the entire functions: section of serverless.yml.
I have a section containing the resources, which looks like this:
resources:
  - ${file(resources/dynamoDb.yml)}
  - ${file(resources/cognito.yml)}
  - ${file(resources/iam.yml)}
  - ${file(resources/elasticsearch.yml)}
When I try to deploy the stack now (with all functions commented out), I get this error:
Error: The CloudFormation template is invalid: Template format error: Unresolved resource dependencies [ApiGatewayRestApi] in the Resources block of the template
The reason that I get this error is probably because I have a reference to ApiGatewayRestApi in resources/iam.yml:
GetVehicleByLicensePlatePolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    ManagedPolicyName: GetVehicleByLicensePlate
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action: execute-api:Invoke
          Resource:
            Fn::Join:
              - ""
              - - "arn:aws:execute-api"
                - ":"
                - Ref: AWS::Region
                - ":"
                - Ref: AWS::AccountId
                - ":"
                - Ref: ApiGatewayRestApi
                - "/*/GET/vehicles/licenseplate/*"
I understand that the ApiGatewayRestApi reference does not resolve when I have removed all functions triggered by HTTP, since there's no API Gateway being deployed in this stack.
I'll have the HTTP lambda functions in a couple of other stacks, but these stacks will depend on this one. (And I sure don't want circular dependencies.)
So how do I make it possible for my "main" stack to have a reference to the API Gateway used by the sub-stacks?
What is the common/best practice way to solve this problem?
I think I solved it by moving the API Gateway specific IAM roles to the stack that contains the relevant endpoints. That actually makes sense to me, since the root stack does not need to know about API-endpoint-specific roles.
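If you do need to share the API id across stacks, another common CloudFormation pattern (sketched here; the export name is a placeholder) is to export it from the stack that owns the API Gateway and import it with Fn::ImportValue, keeping in mind that this makes the importing stack depend on the exporting one:

```yaml
# In the stack that owns the API Gateway:
Outputs:
  RestApiId:
    Value: !Ref ApiGatewayRestApi
    Export:
      Name: my-service-RestApiId

# In the stack that defines the IAM policy:
Resource:
  Fn::Join:
    - ""
    - - "arn:aws:execute-api:"
      - Ref: AWS::Region
      - ":"
      - Ref: AWS::AccountId
      - ":"
      - Fn::ImportValue: my-service-RestApiId
      - "/*/GET/vehicles/licenseplate/*"
```

Note that CloudFormation will block deletion of the exporting stack while another stack imports the value.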

How do you shift/escalate your AWS lambda from one environment to another (e.g. dev to prod) using an alias?

I am creating an AWS serverless application with SAM. Basically, what I would like to achieve is to use API Gateway's different stages (dev/test/prod) to invoke the corresponding Lambda function aliases (dev/test/prod).
I am totally stuck. I would like to know what strategies people have taken to shift lambda traffic, e.g. from LambdaA:dev to LambdaA:prod.
I have tried to use "AutoPublishAlias", but in SAM's AutoPublishAlias you can't have more than one alias in a single cloudformation stack, so that makes traffic shifting impossible.
Before using a single stack, I also used Canary Deployment. It works ok when I separate the lambda into multiple environments (i.e. dev-lambdaA, test-lambdaA, prod-lambdaA) managed by different cloudformation stacks. But I would like to reduce the number of lambda functions by only having lambdas reside in a single stack.
What you can do is add the following to your template.yaml file:
Resources:
  ProductionAPI:
    Type: AWS::Serverless::Api
    Properties:
      StageName: PRD
      DefinitionUri: ./prdswagger.yaml
  DevelopmentAPI:
    Type: AWS::Serverless::Api
    Properties:
      StageName: DEV
      DefinitionUri: ./devswagger.yaml
And use the swagger files to create your endpoints. At every endpoint, add an x-amazon-apigateway-integration pointing to the specific lambda version or alias that you are targeting.
x-amazon-apigateway-integration:
  httpMethod: "POST"
  type: aws_proxy
  uri: "arn:aws:apigateway:eu-central-1:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-central-1:[account_nr]:function:[myfunctionname]:PRD/invocations"
  passthroughBehavior: "when_no_match"
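One thing to watch out for with this setup (a sketch; the region and bracketed names are placeholders, matching the URI above): API Gateway needs an explicit resource-based permission to invoke each alias, since a permission granted on the bare function does not carry over to its aliases:

```yaml
PrdInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    # Grant on the alias ARN, not the unqualified function ARN
    FunctionName: "arn:aws:lambda:eu-central-1:[account_nr]:function:[myfunctionname]:PRD"
    Principal: apigateway.amazonaws.com
```

Without this, the PRD stage will typically return 500 errors even though the integration URI is correct.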

Getting "error: GraphQL schema file should contain a valid GraphQL introspection query result" after apollo schema:download

So I set up the graphql server described here
Now, I want to generate android queries against this server using apollo android as per these instructions.
I've tried different folder configurations for the location of the generated schema against this sample server, and no matter what I do I get a compile-time error saying "GraphQL schema file should contain a valid GraphQL introspection query result".
Any advice?
Turns out the solution is to use the apollo-codegen cli command and not apollo schema:download
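For reference, the apollo-codegen invocation looks roughly like this (the endpoint URL is a placeholder for your server, and depending on your apollo-codegen version the subcommand may be download-schema instead):

```sh
apollo-codegen introspect-schema http://localhost:8080/graphql --output schema.json
```

This produces the introspection-result JSON that the apollo-android plugin expects, rather than the SDL that apollo schema:download can emit.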
