I want to hide my secret credentials from my YAML file, so I need to use a .env file. How do I reference the .env file from my YAML so that every time I use the YAML, it automatically loads the .env file? Please help me. Thanks.
Instead of using an .env file (which is a simple properties file if you're following the dotenv package), you can do the following:
Create an additional .yml file, for example .secrets.yml, and store the secrets per stage:
prod:
  MY_SECRET: foo
dev:
  MY_SECRET: bar
Store your secrets/configuration there.
Then, in serverless.yml, load this file into an object:
custom:
  secrets: ${file(.secrets.yml):${self:provider.stage}}
Next, load the object's fields as environment variables:
provider:
  environment:
    MY_SECRET: ${self:custom.secrets.MY_SECRET}
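At runtime the secret is then available to your function code through process.env. A minimal sketch of a hypothetical handler (handler.js and the hello function are just illustrative names, not part of the original setup):

// handler.js - hypothetical handler reading the variable injected above
module.exports.hello = async (event) => {
  // MY_SECRET is populated from provider.environment at deploy time
  const secret = process.env.MY_SECRET;
  return {
    statusCode: 200,
    body: JSON.stringify({ secretConfigured: Boolean(secret) }),
  };
};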
How to test locally
In your tests you can load the secrets file this way:
const yaml = require('js-yaml');
const fs = require('fs');
const _ = require('lodash');

// Reads the stage section of .secrets.yml and copies each key into process.env
module.exports.loadSecrets = function (env = 'dev', path = './.secrets.yml') {
  const secrets = yaml.load(fs.readFileSync(path, 'utf8'));
  _.forEach(secrets[env], (value, key) => {
    process.env[key] = value;
  });
};
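A hypothetical Mocha-style usage of that helper (the file name secrets.js and the test layout are assumptions; the point is just to populate process.env before the code under test runs):

// test/handler.test.js - load the dev secrets into process.env before the tests
const { loadSecrets } = require('../secrets');

before(() => {
  loadSecrets('dev');
});

it('exposes MY_SECRET to the code under test', () => {
  if (!process.env.MY_SECRET) {
    throw new Error('MY_SECRET was not loaded from .secrets.yml');
  }
});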
Reference: http://www.goingserverless.com/blog/using-environment-variables-with-the-serverless-framework
Related
We are using Viper to read and parse our config file, and all of that works without any issues.
However, we are not able to override some of our config values using environment variables. These are specific use cases where the config is bound to a struct or an array of structs.
Here is an example from our config.yaml:
app:
  verifiers:
    - name: "test1"
      url: "http://test1.url"
      cache: "5000ms"
    - name: "test2"
      url: "http://test2.url"
      cache: "10000ms"
Which is bound to the following structs (golang):
type App struct {
	AppConfig Config `yaml:"app" mapstructure:"app"`
}

type Config struct {
	Verifiers []VerifierConfig `json:"verifiers" yaml:"verifiers" mapstructure:"verifiers"`
}

type VerifierConfig struct {
	Name  string            `json:"name" yaml:"name" mapstructure:"name"`
	URL   string            `json:"url,omitempty" yaml:"url,omitempty" mapstructure:"url"`
	cache jsontime.Duration `json:"cache" yaml:"cache" mapstructure:"cache"`
}
We are unable to override the value of verifiers using env variables.
Here are the Viper options we have used:
viper.AutomaticEnv()
viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
Has anyone experienced a similar issue or can confirm that Viper does not support such a use case?
Any pointers would be greatly appreciated.
Thanks
Viper cannot read environment variables inside the config file itself. You have to replace them in the .go file where the value is being used. Here is an example of how I achieved it:
Ask Viper to consider system environment variables with the "ENV" prefix; that means Viper expects "ENV_MONGO_USER" and "ENV_MONGO_PASSWORD" as system environment variables:
viper.AutomaticEnv()
viper.SetEnvPrefix(viper.GetString("ENV"))
viper.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
In the yml file:
mongodb:
  url: "mongodb+srv://${username}:${password}@xxxxxxx.mongodb.net/"
  database: "test"
In the MongoDB connection file (before making the DB connection, replace ${username} and ${password} with the respective environment variables; note that Viper reads those variables without the prefix):
const BasePath = "mongodb"
mongoDbUrl := viper.GetString(fmt.Sprintf("%s.%s", BasePath, "url"))
username := viper.GetString("MONGO_USER")
password := viper.GetString("MONGO_PASSWORD")
replacer := strings.NewReplacer("${username}", username, "${password}", password)
mongoDbUrl = replacer.Replace(mongoDbUrl)
I am trying to create an EC2 instance using Terraform. Passing credentials through the Terraform CLI fails, while hardcoding them in main.tf works fine.
This is to create an EC2 instance dynamically using Terraform.
terraform apply works with the following main.tf:
provider "aws" {
region = "us-west-2"
access_key = "hard-coded-access-key"
secret_key = "hard-coded-secret-key"
}
resource "aws_instance" "ec2-instance" {
ami = "ami-id"
instance_type = "t2.micro"
tags {
Name = "test-inst"
}
}
while the following does not work:
terraform apply -var access_key="hard-coded-access-key" -var secret_key="hard-coded-secret-key"
Is there any difference between the above two ways of running the commands? As per the Terraform documentation, both of the above should work.
Every Terraform module can use input variables, including the main module. But before using input variables, you must declare them.
Create a variables.tf file in the same folder as your main.tf file:
variable "credentials" {
type = object({
access_key = string
secret_key = string
})
description = "My AWS credentials"
}
Then you can reference the input variables in your code like this:
provider "aws" {
region = "us-west-2"
access_key = var.credentials.access_key
secret_key = var.credentials.secret_key
}
And you can either run (passing the whole object on the command line):
terraform apply -var 'credentials={access_key="hard-coded-access-key", secret_key="hard-coded-secret-key"}'
Or you could create a terraform.tfvars file with the following content:
# ------------------
# AWS Credentials
# ------------------
credentials = {
  access_key = "hard-coded-access-key"
  secret_key = "hard-coded-secret-key"
}
And then simply run terraform apply.
But the key point is that you must declare input variables before using them.
The @Felipe answer is right, but I would never recommend defining the access key and secret key in variables.tf. What you have to do is leave them blank and set the keys using aws configure; another option is to create keys for Terraform deployment purposes only using aws configure --profile terraform, or without a profile using plain aws configure.
So your connection.tf or main.tf will look like this:
provider "aws" {
#You can use an AWS credentials file to specify your credentials.
#The default location is $HOME/.aws/credentials on Linux and OS X, or "%USERPROFILE%\.aws\credentials" for Windows users
region = "us-west-2"
# profile configured during aws configure --profile
profile = "terraform"
# you can also restrict account here, to allow particular account for deployment
allowed_account_ids = ["12*****45"]
}
You can also keep the secret key and access key in a separate file. The reason for this is that variables.tf is part of your configuration and is typically committed to version control (e.g. GitHub or Bitbucket), so it's better not to place these sensitive keys in variables.tf.
You can create a file somewhere on your system and give the path to the keys in the provider section:
provider "aws" {
region = "us-west-2"
shared_credentials_file = "$HOME/secret/credentials"
}
Here is the format of the credentials file
[default]
aws_access_key_id = A*******Z
aws_secret_access_key = A*******/***xyz
I have deployed my app on Heroku. On startup, my app reads a config file in which I want to access a config var I created on Heroku (API_KEY) to define my Firebase config:
module.exports = {
  fireConfig: {
    apiKey: process.env.API_KEY,
    authDomain: "my-app.firebaseapp.com",
    databaseURL: "https://my-app.firebaseio.com",
    projectId: "my-project-id",
    storageBucket: "my-app.appspot.com",
    messagingSenderId: "my-messaging-sender-id"
  }
};
This process.env.API_KEY is undefined. Should I use a different way to access it?
You can define environment variables in your nuxt.config.js file, e.g.
export default {
  env: {
    firebaseApiKey: process.env.API_KEY || 'default value'
  }
}
Here we use the API_KEY environment variable, if it's available, and assign that to firebaseApiKey.
If that environment variable isn't set, e.g. in development (you can set it there too if you want), we fall back to 'default value'. Please don't put your real API key in 'default value'. You could use a separate throwaway Firebase account's key here, or just omit the fallback (remove || 'default value' entirely) and rely on the environment variable being set.
These will be processed at build time and then made available using the name you give them, e.g.
module.exports = {
  fireConfig: {
    apiKey: process.env.firebaseApiKey, // Not API_KEY
    // ...
  }
};
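Note that since these env values are resolved at build time, the config var has to be set on Heroku before the build runs; you can set it with, for example, heroku config:set API_KEY=<your key>.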
I want to reuse my serverless.yml in different environments (dev, test, prod).
In the config I have:
provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  environment:
    NODE_ENV: ${self:provider.stage}
Right now the value will be dev, test or prod (all in lower-case).
Is there a way to convert it with toUpperCase() so that the input and self:provider.stage stay as they are (i.e. lower-case), but the value of NODE_ENV becomes UPPER-CASE?
Update (2022-10-13)
This answer was correct at the time of its writing (circa 2018). A better answer now is to use serverless-plugin-utils as stated in @ShashankRaj's comment below.
varName: ${upper(value)}
AFAIK, there is no such function in YAML.
You can achieve what you want though by using a map between the lowercase and uppercase names.
custom:
  environments:
    dev: DEV
    test: TEST
    prod: PROD

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  environment:
    NODE_ENV: ${self:custom.environments.${self:provider.stage}}
You can achieve something to this effect using the "reference variables in JavaScript files" functionality that Serverless provides.
To take your example, this should work (assuming you're running in a Node.js environment that supports modern syntax):
serverless.yml
...
provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  environment:
    NODE_ENV: ${file(./yml-helpers.js):provider.stage.uppercase}
...
yml-helpers.js (adjacent to serverless.yml)
module.exports.provider = serverless => {
  // The `serverless` argument contains all the information in the .yml file
  const provider = serverless.service.provider;
  return Object.entries(provider).reduce(
    (accumulator, [key, value]) => ({
      ...accumulator,
      [key]:
        typeof value === 'string'
          ? {
              lowercase: value.toLowerCase(),
              uppercase: value.toUpperCase()
            }
          : value
    }),
    {}
  );
};
I arrived at something that works by reading some source code and console-logging the entire serverless object. This example applies a helper function to title-case some input option values (apply str.toUpperCase() instead, as required). The result of parsing the input options is already available in the serverless object.
// serverless-helpers.js
function toTitleCase(word) {
  console.log("input word: " + word);
  let lower = word.toLowerCase();
  let title = lower.replace(lower[0], lower[0].toUpperCase());
  console.log("output word: " + title);
  return title;
}

module.exports.dynamic = function(serverless) {
  // The `serverless` argument contains all the information in
  // the serverless.yaml file
  // serverless.cli.consoleLog('Use Serverless config and methods as well!');
  // this is useful for discovery of what is available:
  // serverless.cli.consoleLog(serverless);
  const input_options = serverless.processedInput.options;
  return {
    part1Title: toTitleCase(input_options.part1),
    part2Title: toTitleCase(input_options.part2)
  };
};
# serverless.yaml snippet
custom:
  part1: ${opt:part1}
  part2: ${opt:part2}
  dynamicOpts: ${file(./serverless-helpers.js):dynamic}
  combined: prefix${self:custom.dynamicOpts.part1Title}${self:custom.dynamicOpts.part2Title}Suffix
This simple example assumes the input options are --part1={value} and --part2={value}, but the generalization is to traverse the properties of serverless.processedInput.options and apply any custom helpers to those values.
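A sketch of that generalization (assumptions: it reuses the toTitleCase helper from the snippet above in the same file, and every string-valued CLI option gets a <name>Title counterpart; the export name dynamicAll is just illustrative):

// serverless-helpers.js - hypothetical generalized version
module.exports.dynamicAll = function (serverless) {
  const inputOptions = serverless.processedInput.options;
  return Object.entries(inputOptions).reduce((accumulator, [name, value]) => {
    if (typeof value === 'string') {
      // e.g. --part1=foo becomes part1Title: Foo
      accumulator[name + 'Title'] = toTitleCase(value);
    }
    return accumulator;
  }, {});
};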
Using Serverless Plugin Utils:
plugins:
  - serverless-plugin-utils

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  environment:
    NODE_ENV: ${upper(${self:provider.stage})}
Thanks to @ShashankRaj...
I'm using node-windows to set up my application to run as a Windows service, and node-config to manage configuration settings. Of course, everything works fine when I run my application manually with the node app.js command. When I install it as a service and it starts, the configuration settings are empty. I have a production.json file in the ./config folder, and I set NODE_ENV to production in the install script. I can confirm that the variable is set correctly, and still nothing. log.info('CONFIG_DIR: ' + config.util.getEnv('CONFIG_DIR')); produces undefined even if I explicitly set it in the env values for the service. Looking for any insight.
install script:
var Service = require('node-windows').Service;
var path = require('path');
// Create a new service object
var svc = new Service({
name:'Excel Data Import',
description: 'Excel Data Import Service.',
script: path.join(__dirname, "app.js"), // path application file
env:[
{name:"NODE_ENV", value:"production"},
{name:"CONFIG_DIR", value: "./config"},
{name:"$NODE_CONFIG_DIR", value: "./config"}
]
});
// Listen for the "install" event, which indicates the
// process is available as a service.
svc.on('install',function(){
svc.start();
});
svc.install();
app script:
var config = require('config');
var path = require('path');
var EventLogger = require('node-windows').EventLogger;
var log = new EventLogger('Excel Data Import');
init();
function init() {
log.info("init");
if(config.has("File.fileFolder")){
var pathConfig = config.get("File.fileFolder");
log.info(pathConfig);
var DirectoryWatcher = require('directory-watcher');
DirectoryWatcher.create(pathConfig, function (err, watcher) {
//...
});
}else{
log.info("config doesn't have File.fileFolder");
}
}
I know this response is very late, but I had the same problem as well, and here is how I solved it:
var svc = new Service({
  name: 'ProcessName',
  description: 'Process Description',
  script: require('path').join(__dirname, 'bin\\www'),
  env: [
    {name: "NODE_ENV", value: "development"},
    {name: "PORT", value: PORT},
    {name: "NODE_CONFIG_DIR", value: "c:\\route-to-your-proyect\\config"}
  ]
});
When you are using Windows, prefixing your environment variables with $ is not required.
Also, when your run script isn't in the same directory as your config directory, you have to provide the full path to your config directory.
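One way to avoid hardcoding a drive path is to resolve the config directory relative to the install script itself. A sketch, assuming the install script sits in the project root next to the config folder (the service name and script path are just illustrative):

// install script - build NODE_CONFIG_DIR as an absolute path
var Service = require('node-windows').Service;
var path = require('path');

var svc = new Service({
  name: 'ProcessName',
  description: 'Process Description',
  script: path.join(__dirname, 'bin', 'www'),
  env: [
    {name: "NODE_ENV", value: "production"},
    // resolves to <project root>\config regardless of where the service runs from
    {name: "NODE_CONFIG_DIR", value: path.join(__dirname, 'config')}
  ]
});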
When you have errors with node-windows, it is also helpful to dig into the error log. It is located at rundirectory/daemon/processname.err.log.
I hope this will help somebody.