Problem setting the service name from an env variable in serverless.yml after upgrading to the latest version - aws-lambda

I am trying to set the serverless service name from an env variable. Before deploying, I set the value of ECR_NAME as follows:
export ECR_NAME=$(echo $CI_ENVIRONMENT_SLUG | awk -v srch="-" -v repl="" '{ gsub(srch,repl,$0); print $0 }')
Then I referenced it in serverless.yml as below:
service: ${env:CI_PROJECT_NAME}-${env:ECR_NAME}
useDotenv: true
configValidationMode: error
variablesResolutionMode: 20210326
I am getting the below error:
Error:
Cannot resolve serverless.yml: "service" property is not accessible (configured behind variables which cannot be resolved at this stage)
Installed version
Framework Core: 3.14.0
Plugin: 6.2.1
SDK: 4.3.2

See Issue #9813 on GitHub:
https://github.com/serverless/serverless/issues/9813
Problem:
The latest version of the serverless framework is no longer working
for AWS Lambda deployments and is throwing the following error:
Cannot resolve serverless.yml: "provider.stage" property is not accessible (configured behind variables which cannot be resolved at this stage)
Discussion:
With the new resolver, such a definition is not supported. In general, configuring stage behind env variables is discouraged, because at the point where stage is resolved, the whole env might not be available yet (e.g. loading env vars from .env.{stage} needs to resolve stage first in order to properly load variables from the file), which can introduce bugs that are hard to debug. Also, provider.stage serves more as a "default" stage, and the --stage CLI flag is the preferred way of setting it.
...
In your configuration file you explicitly opt in to the new resolver via the variablesResolutionMode: 20210326 setting.
We are not discouraging the use of env variables - quite the contrary, we've been promoting them as a replacement for custom CLI options, for example, and it is generally a great practice to use them. As for the env source for stage specifically, this restriction was introduced as a fix: stage should already be resolved before we attempt env variable resolution, since loading .env files can depend on the stage property.
@medikoo I know we've talked about it today: do you think it could be safe to resolve stage from the env source in specific circumstances (e.g. when dotenv is not used)?
See also:
https://www.serverless.com/framework/docs/deprecations/#new-variables-resolver
https://www.serverless.com/framework/docs/providers/aws/guide/variables/
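Given that guidance, one workaround (a sketch, not an official recommendation; serverless.template.yml is a hypothetical file name, and preprod is just an example stage) is to resolve the dynamic part of the name in the CI job itself, render it into serverless.yml before deploying, and pass the stage via the CLI flag:
# resolve the name in CI (equivalent to the awk command in the question)
export ECR_NAME=$(echo "$CI_ENVIRONMENT_SLUG" | tr -d '-')
# render the service name into the config before deploying
sed "s|^service:.*|service: ${CI_PROJECT_NAME}-${ECR_NAME}|" serverless.template.yml > serverless.yml
serverless deploy --stage preprod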

Related

Generating OpenAPI spec for Google Config Connector

I'm trying to use the Kube OpenAPI tool openapi-gen to generate OpenAPI specs for Google's Config Connector.
I'm relatively new to Go, so I'm not sure whether this is a configuration error on my part or if I'm simply using it wrong.
I've cloned the Kube OpenAPI repo and, for simplicity, cloned the Config Connector repo inside that directory.
This is what's happening when I try to generate an OpenAPI spec file.
$ go run cmd/openapi-gen/openapi-gen.go -i ./k8s-config-connector/pkg/apis/serviceusage/v1beta1 -p foo.com/foo -h ./boilerplate/boilerplate.go.txt
E0811 16:45:57.516697 22683 openapi.go:309] Error when generating: TypeMeta, TypeMeta invalid type
2021/08/11 16:45:57 OpenAPI code generation error: Failed executing generator: some packages had errors:
failed to generate default in ./k8s-config-connector/pkg/apis/serviceusage/v1beta1.Service: TypeMeta: not sure how to enforce default for Unsupported
exit status 1
What's going on here?
Here are some troubleshooting steps:
1. The GOROOT variable should be set and point to the root of the Go installation.
2. The operator-sdk depends on it being exported as an environment variable. Run the command below:
export GOROOT=$(go env GOROOT)
When using the operator-sdk, it is necessary to either (a) set GOROOT in your environment, or (b) use the same GOROOT that was used to build the operator-sdk binary.
Refer to the link for additional information.
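As a quick sanity check (a minimal sketch reusing the exact command from the question), confirm what the toolchain reports, export it, and rerun the generator:
# confirm the Go installation root and export it for openapi-gen
go env GOROOT
export GOROOT=$(go env GOROOT)
# rerun the generator with the same flags as before
go run cmd/openapi-gen/openapi-gen.go -i ./k8s-config-connector/pkg/apis/serviceusage/v1beta1 -p foo.com/foo -h ./boilerplate/boilerplate.go.txt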

MICRONAUT_FUNCTION_NAME environment variable is not working in AWS Lambda

I want to write multiple functions inside our app, so instead of putting the config in application.yml I use the MICRONAUT_FUNCTION_NAME environment variable in AWS Lambda, but I keep receiving the error
No function found for name: xxx: java.lang.IllegalStateException
java.lang.IllegalStateException: No function found for name: xxx
at io.micronaut.function.executor.AbstractExecutor.lambda$resolveFunction$0(AbstractExecutor.java:60)
at java.util.Optional.orElseThrow(Optional.java:290)
at io.micronaut.function.executor.AbstractExecutor.resolveFunction(AbstractExecutor.java:60)
at io.micronaut.function.executor.StreamFunctionExecutor.execute(StreamFunctionExecutor.java:89)
at io.micronaut.function.aws.MicronautRequestStreamHandler.handleRequest(MicronautRequestStreamHandler.java:54)
Does anyone know what I missed, or is it not possible to have multiple functions?
This happens because I use Micronaut version 1.3.3. If I downgrade to 1.2.11, it works perfectly. Alternatively, you can use io.micronaut:micronaut-function-aws:1.4.0 with Micronaut version 1.3.3.
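For reference, the environment variable itself can be set on the function with the AWS CLI (a sketch; my-micronaut-fn is a hypothetical function name and xxx stands for your function's name):
# set MICRONAUT_FUNCTION_NAME on the Lambda function
aws lambda update-function-configuration \
  --function-name my-micronaut-fn \
  --environment 'Variables={MICRONAUT_FUNCTION_NAME=xxx}'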

`Cannot use e "__Schema" from another module or realm.` and `Duplicate "graphql" modules` using ApolloClient

I have a React application with ApolloClient and Apollo-Link-Schema. The application works fine locally, but in our staging environment (using GoCD), we get the following error:
Uncaught Error: Cannot use e "__Schema" from another module or realm.
Ensure that there is only one instance of "graphql" in the node_modules
directory. If different versions of "graphql" are the dependencies of other
relied on modules, use "resolutions" to ensure only one version is installed.
https://yarnpkg.com/en/docs/selective-version-resolutions
Duplicate "graphql" modules cannot be used at the same time since different
versions may have different capabilities and behavior. The data from one
version used in the function from another could produce confusing and
spurious results.
at t.a (instanceOf.mjs:21)
at C (definition.mjs:37)
at _ (definition.mjs:22)
at X (definition.mjs:284)
at J (definition.mjs:287)
at new Y (definition.mjs:252)
at Y (definition.mjs:254)
at Object.<anonymous> (introspection.mjs:459)
at u (NominationsApprovals.module.js:80)
at Object.<anonymous> (validate.mjs:1)
Dependencies are installed with yarn, and I've added the resolutions field to package.json:
"resolutions": {
  "graphql": "^14.5.8"
},
I've checked the yarn.lock and can only find one reference for the graphql package.
npm ls graphql does not display any duplicates.
I thought maybe it's a build issue with webpack - I have a different build script for staging, but running that locally, I am still able to get the React application to run with that bundle.
Can anyone suggest anything else to help me fix this?
I managed to find the cause of the issue, in case this helps anyone else. The issue has nothing to do with duplicate instances of the package; this is a false positive, triggered by us using webpack's DefinePlugin to set process.env.NODE_ENV to staging for our staging build.
In webpack, the mode (see https://webpack.js.org/configuration/mode/), which sets process.env.NODE_ENV, only accepts none, development and production as valid values. Our value caused an environment check in the graphql package to fail and trigger this error message.
In our case, we need to differentiate between staging and production because our API endpoint differs, but the solution we implemented is to not rely on process.env.NODE_ENV and instead assign a custom variable at build time (e.g. process.env.API_URL).
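A minimal sketch of that approach (the variable name and URL are hypothetical, and your webpack config still needs to inject the variable into the bundle, e.g. via DefinePlugin or EnvironmentPlugin):
# keep NODE_ENV on a value webpack and graphql accept,
# and carry the environment-specific detail in a custom variable
NODE_ENV=production API_URL=https://api.staging.example.com npx webpack --mode production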
I would try to replicate the error locally and debug it. Try this:
rm -rf node_modules yarn.lock
# also remove any lock files if you have package-lock.json too
yarn install
# build the project locally and see if you got the error
I got this problem once when I was working with Gatsby and two different themes were using different versions of GraphQL. Also, be more explicit with the version (without the caret) and check if the error persists.
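To check for duplicates from the package manager's point of view (a minimal sketch), either command below lists every resolved copy of graphql:
# show which dependencies pull in graphql, and at which versions
yarn why graphql
npm ls graphql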
Do you have a repo you can share? That would also help us to help you :)
While changing NODE_ENV to production might solve the issue, it is not an ideal solution if you have different variables for each environment and don't want to mess with your metrics.
You said you use webpack. If the build with the issue uses some kind of source-map in your devtool, you might want to disable that to see if the problem persists. That's how I solved this without setting my NODE_ENV to production.
I had a similar problem when trying to run Apollo codegen and was able to fix it by deduping my npm packages. Run this:
rm -rf node_modules && npm i && npm dedupe
I was having this problem, so I switched to yarn: after deleting node_modules and the npm lockfile, then running yarn, the problem went away :-).
I ended up here because I use the AWS CDK and the NodejsFunction Construct. I was also using bundling with minify: true.
Toggling minify to false resolved this for me.

How can I use different Cypress.env() variables for CircleCI testing?

I am doing some automated testing on CircleCI, with different environment variables: I need one port for my local testing and a different one for CircleCI.
How can I make Cypress do that? I tried making cypress.env.circle, but that does not seem to work.
The Cypress docs explain five ways to set variables.
To use one port locally and one on CircleCI, I would:
Add a default port to cypress.json under the env section for local use, so you don't have to think about it and anyone else contributing will have a working version.
Set an environment variable in CircleCI named cypress_VAR_NAME, which will override the default in cypress.json.
cypress.json example
{
  "env": {
    "the_port": 5000
  }
}
The CircleCI variable would then be cypress_the_port, and you would read it in your specs as parseInt(Cypress.env('the_port')) (assuming your spec needs an integer for the port).
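Concretely (a minimal sketch; the port value is just an example), the override on CircleCI could look like this:
# locally, the default from cypress.json (5000) applies; on CircleCI,
# export the override before running the specs
export cypress_the_port=4000
npx cypress run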

How to share environment variables across AWS CodeDeploy steps?

I am working on a new deployment strategy that leverages AWS CodeDeploy. The project I work on has many environments (e.g. preproduction, production) and instances (e.g. EMEA, US, APAC).
I have the basic scaffolding working OK, but I noticed that environment variables set in the BeforeInstall hook cannot be retrieved from other steps (for instance, AfterInstall).
Is there a way to share environment variables across AWS CodeDeploy steps?
Content of appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /tmp/code-deploy
hooks:
  BeforeInstall:
    - location: utils/delivery/aws/CodeDeploy/before_install.sh
      timeout: 300
  AfterInstall:
    - location: utils/delivery/aws/CodeDeploy/after_install.sh
      timeout: 300
  ApplicationStart:
    - location: utils/delivery/aws/CodeDeploy/application_start.sh
      timeout: 300
  ValidateService:
    - location: utils/delivery/aws/CodeDeploy/validate_service.sh
      timeout: 300
I set an environment variable in before_install.sh:
export ENVIRONMENT=preprod
And if I reference it in after_install.sh:
$ echo $ENVIRONMENT
$
Nothing.
Thank you for your help on this one!
You could put the export into a temporary file and then source that file. So, within before_install.sh:
ENVIRONMENT="preprod"
echo "export ENVIRONMENT=\"$ENVIRONMENT\"" > "/path/to/file"
Note: With this method, you are no longer exporting the variable in before_install.sh. You are simply writing a file to be sourced in after_install.sh:
source "/path/to/file"
echo "$ENVIRONMENT"
You should consider setting those variables up in the userdata phase of the instance launch, instead of at deploy time. This makes them available to all CodeDeploy scripts for the life of the instance.
The type of data you describe (e.g. ENVIRONMENT) is more associated with the instance itself and would not normally change during a code deployment.
In your UserData you would set an instance-level variable like this (note that /etc/environment takes plain KEY=value lines, so append with echo rather than export, which writes nothing to stdout):
echo 'ENVIRONMENT=preprod' >> /etc/environment
Another advantage of this approach is that your app itself may want to consult these variables when it launches, to provide environment-specific configuration.
If you use CloudFormation, you can set the environment up as a parameter and pass it on to the UserData script. In this way, you can launch the stack and its resources with the appropriate parameters, and launch consistent instances for any environment.
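Putting that together, the UserData script might look like the sketch below (the value would come from a CloudFormation parameter in practice, and hook scripts that run outside a login shell can source /etc/environment to pick the variable up):
#!/bin/bash
# UserData sketch: persist the environment name for the life of the instance
echo 'ENVIRONMENT=preprod' >> /etc/environment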
