I'm trying to create a new workflow file where I define an environment variable and then use it in the value of some other environment variables, but the variable isn't being recognised.
on:
  workflow_dispatch:

env:
  dev_environment: "my-environment"
  working_dir_classic: "repo/${{ env.dev_environment }}/services/classic-service/"
  working_dir_cron: "repo/${{ env.dev_environment }}/services/my-cron-service/"
Above is what I have currently, but I'm unsure what needs fixing. Can anyone help?
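In case it helps: the env context is not available inside the top-level env block itself, so one workflow-level variable cannot reference another there. A minimal sketch of the usual workaround, reusing the names from the question, is to derive the dependent values in a step and append them to $GITHUB_ENV so later steps in the job can read them:
on:
  workflow_dispatch:

env:
  dev_environment: "my-environment"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Derive the paths at runtime; $GITHUB_ENV makes them available
      # to every later step in this job.
      - name: Set derived working directories
        run: |
          echo "working_dir_classic=repo/${dev_environment}/services/classic-service/" >> "$GITHUB_ENV"
          echo "working_dir_cron=repo/${dev_environment}/services/my-cron-service/" >> "$GITHUB_ENV"
      - name: Use them
        run: echo "$working_dir_classic"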
I am trying to set the Serverless service name from the env file. Before deploying with Serverless, I set the value of ECR_NAME as
export ECR_NAME=$(echo $CI_ENVIRONMENT_SLUG | awk -v srch="-" -v repl="" '{ gsub(srch,repl,$0); print $0 }')
Then I have referenced it as below in serverless.yml.
service: ${env:CI_PROJECT_NAME}-${env:ECR_NAME}
useDotenv: true
configValidationMode: error
variablesResolutionMode: 20210326
Getting the below error:
Error:
Cannot resolve serverless.yml: "service" property is not accessible (configured behind variables which cannot be resolved at this stage)
Installed version
Framework Core: 3.14.0
Plugin: 6.2.1
SDK: 4.3.2
See Issue #9813 on GitHub:
https://github.com/serverless/serverless/issues/9813
Problem:
The latest version of the Serverless Framework is no longer working
for AWS Lambda deployments and throws the following error:
Cannot resolve serverless.yml: "provider.stage" property is not accessible (configured behind variables which cannot be resolved at this stage)
Discussion:
With the new resolver, such a definition is not supported. In general,
it is discouraged to configure stage behind env variables, for example,
because at the point where stage is resolved, the whole env might not
be available yet (e.g. loading env vars from .env.{stage} needs to
resolve stage first in order to properly load variables from the file),
which can introduce bugs that are hard to debug. Also, provider.stage
serves more as a "default" stage, and the --stage flag via the CLI is
the preferred way of setting it.
...
In your configuration file you explicitly opt in to the new resolver
via the variablesResolutionMode: 20210326 setting.
We are not discouraging the use of env variables - quite the contrary,
we've been promoting them as a replacement for custom CLI options, for
example, and it is generally a great practice to use them. As for the
env source for stage specifically, this restriction was introduced as a
fix: stage should already be resolved before we attempt env variable
resolution, since loading .env files can depend on the stage property.
@medikoo I know we've talked about it today - do you think it could be
safe to resolve stage from the env source in specific circumstances
(e.g. when dotenv is not used)?
See also:
https://www.serverless.com/framework/docs/deprecations/#new-variables-resolver
https://www.serverless.com/framework/docs/providers/aws/guide/variables/
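Following the advice quoted above, a minimal sketch of the preferred pattern (the service name "my-service" is an assumption) keeps service a literal and passes the stage through the CLI rather than through env variables:
service: my-service   # a literal, so it is resolvable before variables are processed
provider:
  name: aws
  # ${opt:stage, 'dev'} reads the --stage CLI flag, defaulting to "dev"
  stage: ${opt:stage, 'dev'}
and deploy with, for example: serverless deploy --stage preprod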
I have a file at a different path in the development and production versions. How can I keep my code the same when I want to test both?
// In the development version, the file is at
~/project/assets/file
// In the production version, the file is at
/service/assets/file
I like using a flag library like alecthomas/kingpin, which allows you to set a parameter like:
env := ""
appk.Flag("env", "Environment (dev, rct, pprd or prod)").Envar("HOST_ENV").Short('e').Required().EnumVar(&env, "dev", "rct", "pprd", "prod")
Not only do you get an environment name that is always valid (one of the four values "dev", "rct", "pprd", "prod"), but you can also omit it on the command line, and it will still be picked up from the system environment variable HOST_ENV.
You could also pass or set a file path or name directly.
But the idea remains: with this library you can choose between the following (a runnable sketch follows the list):
a config file
a parameter
an environment variable
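Putting it together, here is a minimal runnable sketch; the application name "myapp" and the two asset paths are taken from the question above, not from a real project:
package main

import (
	"fmt"
	"os"

	"gopkg.in/alecthomas/kingpin.v2"
)

var (
	app = kingpin.New("myapp", "Demo of environment selection via flag or HOST_ENV.")
	// Accepts only the four allowed values; falls back to the HOST_ENV
	// environment variable when the flag is not passed on the command line.
	env = app.Flag("env", "Environment (dev, rct, pprd or prod)").
		Envar("HOST_ENV").Short('e').Required().
		Enum("dev", "rct", "pprd", "prod")
)

func main() {
	kingpin.MustParse(app.Parse(os.Args[1:]))
	// Pick the asset path for the resolved environment.
	path := "/service/assets/file"
	if *env == "dev" {
		path = os.Getenv("HOME") + "/project/assets/file"
	}
	fmt.Println("using assets from:", path)
}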
I am trying to use packagesbuild to create packages on macOS. I want to use a generic template with an environment variable so that I don't have to create a separate pkgproj file for each app, but packagesbuild is not able to recognize the environment variable defined in the template file.
After setting the environment variable in the terminal
export APP_PATH='/Users/sachin/Documents/Test/Client/Example.app'
and then using the same environment variable in example.pkgproj
...
<key>PATH</key>
<string>${APP_PATH}</string>
...
and then triggering packagesbuild with the project file
packagesbuild example.pkgproj
I get the error below:
ERROR:
Description:
Unable to copy item at path 'APP_PATH' to
'/private/tmp/S2EyTxCb/502/ExampleClient' because the item could not be
found
http://s.sudre.free.fr/Software/Packages/about.html
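The error suggests packagesbuild treats ${APP_PATH} as a literal path rather than expanding it. A common workaround (a sketch only; example.pkgproj.template and the @APP_PATH@ placeholder are assumptions) is to substitute the value into a copy of the template before building:
export APP_PATH='/Users/sachin/Documents/Test/Client/Example.app'

# The template contains the literal placeholder @APP_PATH@ where the
# pkgproj previously had ${APP_PATH}.
sed "s|@APP_PATH@|$APP_PATH|g" example.pkgproj.template > example.pkgproj
packagesbuild example.pkgproj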
Does Buildbot provide an environment variable in CI jobs that allows it to be identified, like e.g. Travis does with TRAVIS?
Last I checked, Buildbot does not set an environment variable whose purpose is to indicate that the build code is being run through Buildbot. In my own setup I do need a few variables that my build code uses, so I've set up a dictionary like this:
from buildbot.plugins import util

env = {
    'BUILDBOT': '1',
    'BUILD_TAG': util.Interpolate("%(prop:buildername)s-%(prop:buildnumber)s"),
    'BUILDER': util.Property('buildername'),
}
This dictionary can then be used to configure builders:
util.BuilderConfig(
    name="foo",
    workernames=["a", "b"],
    env=env,
    ...)
The env parameter makes it so that all shell commands issued by this builder will use the environment variables I've declared in my dictionary.
I use BUILDBOT to detect whether the code is running in buildbot at all. The other variables are passed over to services like Sauce Labs and BrowserStack in order to identify the builds there, or they are used for diagnostic purposes.
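On the consuming side, a minimal sketch of that check in build code (the BUILD_TAG usage is just for illustration):
import os

# BUILDBOT and BUILD_TAG are the variables declared in the dictionary above.
if os.environ.get('BUILDBOT') == '1':
    print('running under Buildbot, build:', os.environ.get('BUILD_TAG'))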
I am working on a new deployment strategy that leverages AWS CodeDeploy. The project I work on has many environments (e.g. preproduction, production) and instances (e.g. EMEA, US, APAC).
I have the basic scaffolding working OK, but I noticed that environment variables set in the BeforeInstall hook cannot be retrieved from other steps (for instance, AfterInstall).
Is there a way to share environment variables across AWS CodeDeploy steps?
Content of appspec.yml:
version: 0.0
os: linux
files:
  - source: /
    destination: /tmp/code-deploy
hooks:
  BeforeInstall:
    - location: utils/delivery/aws/CodeDeploy/before_install.sh
      timeout: 300
  AfterInstall:
    - location: utils/delivery/aws/CodeDeploy/after_install.sh
      timeout: 300
  ApplicationStart:
    - location: utils/delivery/aws/CodeDeploy/application_start.sh
      timeout: 300
  ValidateService:
    - location: utils/delivery/aws/CodeDeploy/validate_service.sh
      timeout: 300
I set an environment variable in before_install.sh:
export ENVIRONMENT=preprod
And if I reference it in after_install.sh:
$ echo $ENVIRONMENT
$
Nothing.
Thank you for your help on this one!
You could put the export into a temporary file and then source that file. So, within before_install.sh:
ENVIRONMENT="preprod"
echo "export ENVIRONMENT=\"$ENVIRONMENT\"" > "/path/to/file"
Note: With this method, you are no longer exporting the variable in before_install.sh. You are simply writing a file to be sourced in after_install.sh:
source "/path/to/file"
echo "$ENVIRONMENT"
You should consider setting those variables up in the user data phase of the instance launch instead of at deploy time. This makes them available to all CodeDeploy scripts for the life of the instance.
The type of data you describe (e.g. environment) is more associated with the instance itself, and would not normally change during a code deployment.
In your user data you would set an instance-level variable like this:
echo "ENVIRONMENT=preprod" >> /etc/environment
Another advantage of this approach is that your app itself may want to consult these variables when it launches, to provide environment-specific configuration.
If you use CloudFormation, you can set the environment up as a parameter and pass it on to the user data script. In this way, you can launch the stack and its resources with the appropriate parameters, and launch consistent instances for any environment.
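A minimal sketch of that CloudFormation wiring, assuming a hypothetical Environment parameter and instance resource:
Parameters:
  Environment:
    Type: String
    AllowedValues: [preprod, prod]

Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678          # placeholder AMI
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          # Persist the environment for login shells and CodeDeploy hooks.
          echo "ENVIRONMENT=${Environment}" >> /etc/environment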