Google Deployment Manager: can you import files in a Jinja template that you call directly with --template?

https://cloud.google.com/deployment-manager/docs/configuration/templates/create-basic-template
I can deploy a template directly like this: gcloud deployment-manager deployments create a-single-vm --template vm_template.jinja
But what if that template depends on other files that need to be imported? If using a --config file, you can define imports in that file and call the template as a resource. But you can't pass parameters/properties to a config file. I want to call a template directly so I can pass --properties via the command line, but that template also needs to import other files.
EDIT: What I needed was a top-level Jinja template instead of a config. My confusion was that you can't use imports in a Jinja template without a schema file - it was failing and I thought it wasn't supported. So the solution was to just swap out the config for a Jinja template (with a schema file), and then I can use --properties.
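To make that concrete, here is a minimal sketch of the layout (the helper file name and the property are hypothetical, not from the question): the schema file sitting next to the template declares the imports, and the template can then be called directly with --template and --properties.
# vm_template.jinja.schema -- lives next to vm_template.jinja
imports:
- path: helper.jinja

info:
  title: Single VM template

properties:
  machineType:
    type: string
    default: f1-micro
With the schema in place, the deployment command from above accepts properties directly:
gcloud deployment-manager deployments create a-single-vm \
    --template vm_template.jinja \
    --properties machineType:n1-standard-1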

Maybe you can try importing the dependent files into your config file as follows:
imports:
- path: vm-template.jinja
- path: vm-template-2.jinja

# In the resources section below, the properties of the resources are replaced
# with the names of the templates.
resources:
- name: vm-1
  type: vm-template.jinja
- name: vm-2
  type: vm-template-2.jinja
You can also use Set Arbitrary Metadata to create a special variable that you can pass and might use in other applications outside of Deployment Manager:
properties:
  size:
    type: integer
    default: 2
    description: Number of Mongo Slaves
  variable-x: ultra-secret-sauce
More info about the optional flags for gcloud deployment-manager deployments create, with examples, can be found here.
More info about passing properties using a schema can be found here.
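For example, deploying one of the templates above directly while overriding its schema defaults would look something like this (the values are illustrative):
gcloud deployment-manager deployments create mongo-deployment \
    --template vm-template.jinja \
    --properties size:3,variable-x:ultra-secret-sauce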
Hope it helps

Related

How to have optional plugins in serverless framework yml file

So I use the "serverless framework" for AWS lambdas with a variety of plugins as follows:
plugins:
- serverless-esbuild
- serverless-offline
- serverless-stack-output
- plugin4
- plugin5
- plugin6
- plugin7
- plugin8
- plugin9
- plugin10
I also have multiple 'environments' (for different AWS accounts, configs, etc.), so I make the serverless.yml content vary depending on the environment using imported sub-YAML files as follows:
vpc: ${file(serverless/environment/${env:ENVIRONMENT}.yml):vpc}
But what I need is to make the presence of a single plugin conditional on the ENVIRONMENT variable. Let's say plugin8 should not be included if ENVIRONMENT=XXX.
With my previous strategy, I could externalise the whole plugin list to the individual environment sub-YAML files (see the sketch below), but that would lead to a fair amount of duplication.
Is there a better approach to make just one line in a YAML list conditional on an environment variable?
Thanks
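For reference, the externalisation workaround mentioned above would look roughly like this (a sketch; the environment file names are illustrative, and every environment has to repeat the full list, which is exactly the duplication in question):
# serverless.yml
plugins: ${file(serverless/environment/${env:ENVIRONMENT}.yml):plugins}

# serverless/environment/production.yml -- full list
plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-stack-output
  - plugin8

# serverless/environment/XXX.yml -- same list with plugin8 removed
plugins:
  - serverless-esbuild
  - serverless-offline
  - serverless-stack-output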

Error generating documentation for my component

I have created a Backstage scaffolding template to create a Spring Boot REST service deployed to AWS EKS.
When a component is created from it in Backstage, the component builds using GitHub Actions, is deployed to AWS EKS, and is registered in Backstage.
However, clicking on Docs for the component fails with the following error:
info: Step 1 of 3: Preparing docs for entity component:default/stephendemo16 {"timestamp":"2022-04-28T22:36:54.963Z"}
info: Prepare step completed for entity component:default/stephendemo16, stored at /tmp/backstage-EjxBxi {"timestamp":"2022-04-28T22:36:56.663Z"}
info: Step 2 of 3: Generating docs for entity component:default/stephendemo16 {"timestamp":"2022-04-28T22:36:56.663Z"}
error: Failed to build the docs page: Could not read MkDocs YAML config file mkdocs.yml or mkdocs.yaml for validation; caused by Error: ENOENT: no such file or directory, open '/tmp/backstage-EjxBxi/mkdocs.yml' {"timestamp":"2022-04-28T22:36:56.664Z"}
ERROR 404: Page not found. This could be because there is no index.md file in the root of the docs directory of this repository.
Looks like someone dropped the mic!
The catalog-info registers the docs subdirectory:
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: "stephendemo16"
  description: "try using template"
  annotations:
    github.com/project-slug: xxxx/stephendemo16
    backstage.io/techdocs-ref: dir:docs
The docs subdirectory contains index.md, which contains:
## stephendemo16
try using template
## Getting started
Start writing your documentation by adding more markdown (.md) files to this folder (/docs) or replace the content in this file.
## Table of Contents
The Table of Contents on the right is generated automatically based on the hierarchy
of headings. Only use one H1 (`#` in Markdown) per file.
...
What have I missed?
Having an index.md alone is not sufficient.
Internally, TechDocs currently uses MkDocs. MkDocs has a config file called mkdocs.yaml that defines some metadata, plugins, and your file structure (table of contents).
Place an mkdocs.yaml inside your root directory. MkDocs expects all markdown files to be located inside a /docs subdirectory, and it references your index.md file relative to that folder:
# You can pass the custom site name here
site_name: 'example-docs'
nav:
  # relative reference to your Markdown file and an optional title
  - Home: index.md
plugins:
  - techdocs-core
The location of your mkdocs.yaml is the root folder of your documentation. Therefore, you have to adjust your backstage.io/techdocs-ref annotation to dir:. (meaning the same folder as your catalog info file).
You can find more details about using the TechDocs setup in the Backstage docs.
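Put together, the adjusted layout and annotation would look roughly like this (a sketch using the names from the question):
# repository layout
#   catalog-info.yaml
#   mkdocs.yaml
#   docs/
#     index.md

# catalog-info.yaml -- only the changed annotation shown
metadata:
  annotations:
    backstage.io/techdocs-ref: dir:.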

Serverless stage environment variables using dotenv (.env)

I'm new to serverless.
So far I was able to deploy and use .env for the app.
Then, under provider, I changed the stage property in the serverless.yml file to a different stage. I also made a new .env.{stage}.
After re-deploying using sls deploy, it still reads the default .env file.
The documentation states:
The framework looks for .env and .env.{stage} files in service directory and then tries to load them using dotenv. If .env.{stage} is found, .env will not be loaded. If stage is not explicitly defined, it defaults to dev.
So, I still don't understand "If stage is not explicitly defined, it defaults to dev". How do I explicitly define it?
The dotenv file is chosen based on your stage property configuration. You need to explicitly define the stage property in your serverless.yml or set it within your deployment command.
This will use the .env.dev file:
useDotenv: true
provider:
  name: aws
  stage: dev # dev [default], stage, prod
  memorySize: 3008
  timeout: 30
Or you can set the stage property via the deploy command. This will use the .env.prod file:
sls deploy --stage prod
In your serverless.yml you need to define the stage property inside the provider object.
Example:
provider:
  name: aws
  [...]
  stage: prod
As of Feb 2023, I'm going to attempt to give my solution. I'm using the Nx tooling for a monorepo (this shouldn't matter, but just in case) and I'm using serverless.ts instead.
I see the purpose of this as enhancing the developer experience: it is nice to just run nx run users:serve --stage=test (in my case, using Nx) or sls offline --stage=test and have serverless load the appropriate variables for that specific environment.
Some people went the route of using several .env.<stage> files, one per environment. I tried to go this route, but because I'm not that good of a developer, I couldn't make it work. The approach that worked for me was to concatenate variable names inside serverless.ts. Let me explain...
I'm using just one .env file, but changing variable names based on the --stage. The magic happens in serverless.ts:
# .env
STAGE_development=test
DB_NAME_development=mycraftypal
DB_USER_development=postgres
DB_PASSWORD_development=abcde1234
DB_PORT_development=5432
READER_development=localhost # this could be an aws rds uri per db instance
WRITER_development=localhost # this could be an aws rds uri per db instance

# TEST
STAGE_test=test
DB_NAME_test=mycraftypal
DB_USER_test=postgres
DB_PASSWORD_test=abcde1234
DB_PORT_test=5433
READER_test=localhost # this could be an aws rds uri per db instance
WRITER_test=localhost # this could be an aws rds uri per db instance
// serverless.base.ts or serverless.ts, based on your configuration
// ...
useDotenv: true, // this property is at the root level
// ...
provider: {
  // ...
  stage: '${opt:stage, "development"}', // get the --stage flag value or default to development
  // ...
  environment: {
    STAGE: '${env:STAGE_${self:provider.stage}}',
    DB_NAME: '${env:DB_NAME_${self:provider.stage}}',
    DB_USER: '${env:DB_USER_${self:provider.stage}}',
    DB_PASSWORD: '${env:DB_PASSWORD_${self:provider.stage}}',
    READER: '${env:READER_${self:provider.stage}}',
    WRITER: '${env:WRITER_${self:provider.stage}}',
    DB_PORT: '${env:DB_PORT_${self:provider.stage}}',
    AWS_NODEJS_CONNECTION_REUSE_ENABLED: '1',
  },
  // ...
}
When one utilizes useDotenv: true, serverless loads your variables from the .env file and exposes them through the env variable source, so you can access them with ${env:STAGE}.
Now I can access a variable with a dynamic stage like so: ${env:DB_PORT_${self:provider.stage}}. If you look at the .env file, each variable has a ..._<stage> suffix at the end; this way I can retrieve each value dynamically.
I'm still figuring out one wrinkle: I don't want the word production in my variable names but still want the values resolved dynamically, and since I'm concatenating the stage into ${env:DB_PORT_${self:provider.stage}}, the variable being looked up becomes DB_PORT_ instead of DB_PORT.

Is there a way we can import a file into YAML in GCP Deployment Manager

I am trying to create a configuration file in GCP Deployment Manager, and I have a metadata file which needs to be imported as text.
I know how to do it in a .py file, but I am wondering how to do it in YAML.
I tried different approaches, but none seem to work.
Although Deployment Manager can use the imports statement to pull Jinja2 or Python templates into the root configuration file, plain YAML cannot be imported. This is a limitation of YAML itself: it has no "import" or "include" functionality.
A similar question has been discussed here: https://stackoverflow.com/a/15437697/11602913.
In a pure YAML deployment file, metadata can be provided literally, as described in the document Google Cloud Platform for AWS Professionals: Infrastructure Deployment Tools:
resources:
- name: my-first-vm-template
  type: compute.v1.instance
  properties:
    ...
    metadata:
      items:
      - key: startup-script
        value: "STARTUP-SCRIPT-CONTENTS"
If the metadata should be loaded from a file, you have to use Jinja2 templates. There is an example at codelabs.developers.google.com, under Deploy Your Infrastructure Using Deployment Manager > Creating your deployment configuration:
imports:
- path: instance.jinja
- path: ../startup-script.sh
  name: startup-script.sh

resources:
- name: my-instance
  type: instance.jinja
  properties:
    metadata-from-file:
      startup-script: startup-script.sh
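Inside the Jinja template itself, the content of an imported file is then available through the imports dictionary that Deployment Manager exposes to templates. A minimal sketch of how instance.jinja might inline the script (the resource shape here is illustrative, not the codelab's exact file):
resources:
- name: {{ env["name"] }}
  type: compute.v1.instance
  properties:
    ...
    metadata:
      items:
      - key: startup-script
        value: |
          {{ imports["startup-script.sh"]|indent(10) }}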

How to pass parameters to sam template with override-parameters with optional parameters

I'd like to create a SAM template.yml containing a lambda and several SQS queues. I'd like to deploy it with parameters, but populate only some of the SQS queues depending on the environment I deploy to. How do I create a template with only some of its parameters populated?
I found how to do it in CloudFormation:
https://aws.amazon.com/blogs/infrastructure-and-automation/conditionally-launch-aws-cloudformation-resources-based-on-user-input/
And here's how to do it in SAM template:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-specification-template-anatomy.html
For SAM templates, see also these docs on setting parameter_overrides via a samconfig.toml file:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html
You can specify the location of the config file with the --config-file /path/to/samconfig.toml argument.
Example samconfig.toml file with parameters configured:
version = 0.1

[default.global.parameters]
parameter_overrides = [
  "TemplateInput1=Value1",
  "TemplateInput2=Value2"
]
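With that file in place, the overrides are picked up automatically on deploy; per the --config-file note above, something like (the path is hypothetical):
sam deploy --config-file ./samconfig.toml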
