idle-container configuration in OpenWhisk

Currently, the Invoker component configuration in application.conf has the following settings for the container proxy:
container-proxy {
  timeouts {
    # The "unusedTimeout" in the ContainerProxy,
    # aka 'How long should a container sit idle until we kill it?'
    idle-container = 10 minutes
    pause-grace = 50 milliseconds
  }
}
I installed OpenWhisk on Kubernetes via Helm.
How can I configure the idle-container in values.yaml or cluster.yaml?
I tried the following in values.yaml and cluster.yaml, but it does not work:
whisk:
  containerProxy:
    timeouts:
      idleContainer: "3minutes"

To override default values from a .conf file, you set environment variables that start with CONFIG_ in the invoker/controller pods. Concretely, to change whisk.container-proxy.timeouts.idle-container you would define the environment variable CONFIG_whisk_containerProxy_timeouts_idleContainer with the desired value.
In the current OpenWhisk Helm chart, this requires editing invoker-pod.yaml (or controller-pod.yaml for controller settings) to add the additional environment variable definition. Multiple CONFIG_ variables are already defined in these files, so you have examples to follow; see the sketch below.
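For example, the new entry would sit alongside the existing CONFIG_ entries in the pod template's env list. A minimal sketch, assuming the chart's usual env layout (the value uses the same duration syntax as the .conf file):
env:
  # maps to whisk.container-proxy.timeouts.idle-container in application.conf
  - name: "CONFIG_whisk_containerProxy_timeouts_idleContainer"
    value: "3 minutes"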

Related

Serverless stage environment variables using dotenv (.env)

I'm new to serverless.
So far I was able to deploy and use .env for the app.
Then, under provider in the serverless.yml file, I changed the stage property to a different stage. I also made a new .env.{stage} file.
After re-deploying using sls deploy, it still reads the default .env file.
The documentation states:
The framework looks for .env and .env.{stage} files in service directory and then tries to load them using dotenv. If .env.{stage} is found, .env will not be loaded. If stage is not explicitly defined, it defaults to dev.
So, I still don't understand "If stage is not explicitly defined, it defaults to dev". How do I explicitly define it?
The dotenv file is chosen based on your stage property configuration. You need to explicitly define the stage property in your serverless.yml or set it within your deployment command.
This will use the .env.dev file:
useDotenv: true
provider:
  name: aws
  stage: dev # dev [default], stage, prod
  memorySize: 3008
  timeout: 30
Or you can set the stage via the deploy command.
This will use the .env.prod file:
sls deploy --stage prod
In your serverless.yml you need to define the stage property inside the provider object.
Example:
provider:
  name: aws
  [...]
  stage: prod
As of Feb 2023, I'm going to attempt to give my solution. I'm using the Nx tooling for a monorepo (this shouldn't matter, but just in case) and I'm using serverless.ts instead.
The purpose of this is to enhance the developer experience, in the sense that it is nice to just run nx run users:serve --stage=test (in my case using Nx) or sls offline --stage=test and have serverless load the appropriate variables for that specific environment.
Some people went the route of using several .env.<stage> files, one per environment. I tried to go this route but, because I'm not that good of a developer, I couldn't make it work. The approach that worked for me was to concatenate variable names inside serverless.ts. Let me explain...
I'm using just one .env file but changing variable names based on the --stage. The magic happens in serverless.ts:
# .env
# DEVELOPMENT
STAGE_development=development
DB_NAME_development=mycraftypal
DB_USER_development=postgres
DB_PASSWORD_development=abcde1234
DB_PORT_development=5432
READER_development=localhost # this could be an aws rds uri per db instance
WRITER_development=localhost # this could be an aws rds uri per db instance

# TEST
STAGE_test=test
DB_NAME_test=mycraftypal
DB_USER_test=postgres
DB_PASSWORD_test=abcde1234
DB_PORT_test=5433
READER_test=localhost # this could be an aws rds uri per db instance
WRITER_test=localhost # this could be an aws rds uri per db instance
// serverless.base.ts or serverless.ts based on your configuration
...
useDotenv: true, // this property is at the root level
...
provider: {
  ...
  stage: '${opt:stage, "development"}', // get the --stage flag value or default to development
  ...,
  environment: {
    STAGE: '${env:STAGE_${self:provider.stage}}',
    DB_NAME: '${env:DB_NAME_${self:provider.stage}}',
    DB_USER: '${env:DB_USER_${self:provider.stage}}',
    DB_PASSWORD: '${env:DB_PASSWORD_${self:provider.stage}}',
    READER: '${env:READER_${self:provider.stage}}',
    WRITER: '${env:WRITER_${self:provider.stage}}',
    DB_PORT: '${env:DB_PORT_${self:provider.stage}}',
    AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
  }
  ...
}
When you use useDotenv: true, serverless loads your variables from the .env file and puts them into the env variable source, so you can access them as ${env:STAGE}.
Now I can access a variable with a dynamic stage like so: ${env:DB_PORT_${self:provider.stage}}. If you look at the .env file, each variable has the ..._<stage> suffix at the end. This way I can retrieve each value dynamically.
I'm still figuring out one detail: I don't want to have the word production in my variable names but still want to get the values dynamically, and since I'm concatenating ${env:DB_PORT_${self:provider.stage}}, the variable actually looked up becomes DB_PORT_<stage> instead of plain DB_PORT.
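To check which values get resolved for a given stage, printing the fully resolved configuration is a quick sanity check. A sketch, using sls print (available in Serverless Framework v2+) against the .env file above:
# resolves ${env:DB_PORT_${self:provider.stage}} against the .env above
sls print --stage test          # DB_PORT should resolve to 5433
sls print --stage development   # DB_PORT should resolve to 5432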

How can I use different Cypress.env() variables for Circle testing?

I am doing some automated testing on CircleCI with different environment variables: I need one port for my local testing and a different one for CircleCI.
How can I make Cypress do that? I tried making cypress.env.circle, but that does not seem to work.
The Cypress docs explain 5 ways to set variables.
To use one port locally and one on CircleCI, I would:
Add a default port to cypress.json under the env section for local use, so you don't have to think about it and anyone else contributing will have a working version.
Set an environment variable in CircleCI named cypress_VAR_NAME, which will override the default in cypress.json.
cypress.json example:
{
  "env": {
    "the_port": 5000
  }
}
The CircleCI variable would then be cypress_the_port, and you would read it in your specs as parseInt(Cypress.env('the_port')) (assuming your spec needs an integer for the port).
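If you prefer setting the override in the job's shell rather than in the CircleCI project settings, exporting a prefixed variable works the same way. A minimal sketch (the port value 5001 is illustrative):
# the CYPRESS_/cypress_ prefix is stripped, so this becomes Cypress.env('the_port'),
# overriding the default from cypress.json
export CYPRESS_the_port=5001
npx cypress run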

How to share environment variables across AWS CodeDeploy steps?

I am working on a new deployment strategy that leverages AWS CodeDeploy. The project I work on has many environments (e.g. preproduction, production) and instances (e.g. EMEA, US, APAC).
I have the basic scaffolding working OK, but I noticed that environment variables set in the BeforeInstall hook cannot be retrieved from other steps (for instance, AfterInstall).
Is there a way to share environment variables across AWS CodeDeploy steps?
Content of appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /tmp/code-deploy
hooks:
  BeforeInstall:
    - location: utils/delivery/aws/CodeDeploy/before_install.sh
      timeout: 300
  AfterInstall:
    - location: utils/delivery/aws/CodeDeploy/after_install.sh
      timeout: 300
  ApplicationStart:
    - location: utils/delivery/aws/CodeDeploy/application_start.sh
      timeout: 300
  ValidateService:
    - location: utils/delivery/aws/CodeDeploy/validate_service.sh
      timeout: 300
I set an environment variable in before_install.sh:
export ENVIRONMENT=preprod
And if I reference it in after_install.sh:
$ echo $ENVIRONMENT
$
Nothing.
Thank you for your help on this one!
Each lifecycle hook script runs in its own shell process, so an export in one hook does not survive into the next. Instead, you could put the export into a temporary file and then source that file. So within before_install.sh:
ENVIRONMENT="preprod"
echo "export ENVIRONMENT=\"$ENVIRONMENT\"" > "/path/to/file"
Note: With this method, you are no longer exporting the variable in before_install.sh. You are simply writing a file to be sourced in after_install.sh:
source "/path/to/file"
echo "$ENVIRONMENT"
You should consider setting those variables up in the userdata phase of the instance launch instead of at deploy time. That makes them available to all CodeDeploy scripts for the life of the instance.
The type of data you describe (e.g. Environment) is associated with the instance itself and would not normally change during a code deployment.
In your userdata you would set an instance-level variable like this:
echo "ENVIRONMENT=preprod" >> /etc/environment
Another advantage of this approach is that your app itself may want to consult these variables when it launches, to provide environment-specific configuration.
If you use Cloudformation, you can set the environment up as a parameter, and pass that on to the user data script. In this way, you can launch the stack and its resources with the appropriate parameters, and launch consistent instances for any environment.
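A minimal sketch of that wiring, assuming a CloudFormation parameter named Environment (resource names and the AMI ID are placeholders):
Parameters:
  Environment:
    Type: String
    Default: preprod
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678   # placeholder AMI
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # persist the stack parameter so every CodeDeploy hook can read it
          echo "ENVIRONMENT=${Environment}" >> /etc/environment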

Spring command line JSON config containing array

I am using the Grails 3 Elasticsearch plugin with Spring's external JSON configuration by setting spring.application.json as a system property.
The properties are available in the application, but I can't find a way to initialize an array properly.
What I am trying to accomplish is to override the default values of the hosts property specified in my application.yml:
environments:
  development:
    elasticSearch:
      client:
        hosts:
          - {host: "myhost.com", port: 9300}
          - {host: "anotherhost.com", port: 9300}
I am setting the property from the command line as follows:
-Dspring.application.json={"environments":{"development":{"elasticSearch":{"client":{"hosts":[{"host":"override1.com", "port":9000},{"host":"override2.com", "port":9100}]}}}}}
I would expect environments.development.elasticSearch.client.hosts to contain an array like it does when initialized from application.yml, but in fact environments.development.elasticSearch.client contains hosts[0] and hosts[1], where each contains the host and the port. The hosts array from the yml file is still there.
How can I achieve the same behavior using the command line as with the application.yml file?
I believe you can do this the same way that you would if it were set in a .properties file, using a list:
-Denvironments.development.elasticSearch.client.hosts={"host":"override1.com", "port":9000},{"host":"override2.com", "port":9100}
and I believe you can also do it as an environment variable:
set ENVIRONMENTS_DEVELOPMENT_ELASTICSEARCH_CLIENT_HOSTS='{"host":"override1.com", "port":9000},{"host":"override2.com", "port":9100}'
There may need to be some quotes around parts of these depending on the shell you are in, the OS, etc.
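If the comma-separated form doesn't bind, Spring Boot's relaxed binding also accepts indexed keys for list elements. A sketch with the same hosts (whether the Grails Elasticsearch plugin honors these keys is an assumption):
-Denvironments.development.elasticSearch.client.hosts[0].host=override1.com \
-Denvironments.development.elasticSearch.client.hosts[0].port=9000 \
-Denvironments.development.elasticSearch.client.hosts[1].host=override2.com \
-Denvironments.development.elasticSearch.client.hosts[1].port=9100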

Access config variables in buildpack bin/compile using bash

I am creating a Heroku deploy button. In app.json there is the following config variable:
"env": {
  "PUBLISH_APP_DIR": {
    "value": "/src/WebApp"
  }
}
and, as expected, it is available in the "Config Variables" section on https://dashboard.heroku.com/new?template=
The question is: how can I access its value in the buildpack bin/compile script? Bash is used as the environment:
#!/usr/bin/env bash
I have checked the environment variables using printenv, and there is no PUBLISH_APP_DIR. $(PUBLISH_APP_DIR) and $PUBLISH_APP_DIR are empty as well.
I have finally found in the buildpack API documentation that:
The application config vars are passed to the buildpack as an argument (versus set in the environment) so that the buildpack can optionally export none, all or parts of the app config vars available in ENV_DIR when the compile script is run.
The name of the file is the config key and the contents of the file is the config value. The equivalent of config var S3_KEY=8N029N81 is a file with the name S3_KEY and contents 8N029N81.
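In practice, that means reading the value from the ENV_DIR passed as the third argument to compile. A minimal sketch (the log line format is illustrative):
#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir> <env-dir>
ENV_DIR="$3"
# each config var is a file named after the key, containing the value
if [ -f "$ENV_DIR/PUBLISH_APP_DIR" ]; then
  PUBLISH_APP_DIR="$(cat "$ENV_DIR/PUBLISH_APP_DIR")"
fi
echo "-----> PUBLISH_APP_DIR is $PUBLISH_APP_DIR"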
